Dashboard System
Environment, Deployment, Installation and Configuration Guide
Table of Contents
1 Introduction
2 Server Breakdown
3 Server Set-up
3.1 Software needed on the servers
3.2 Trap Processor machine set-up
3.3 Correlator machine set-up
3.3.1 MySQL Databases
3.3.2 Directory and Permissions set-up
3.4 Web Application machine set-up
3.5 OpsDB machine set-up
4 Puppet Configuration
4.1 Existing Puppet Repositories (not foremanised)
4.2 Puppet environment-specific working copies
4.3 Formalisation of Puppet Repositories
4.3.1 Puppet formalisation for Web Application machine
4.3.2 Puppet formalisation for Correlator machine
5 Deployment through Bamboo
6 Installation and Execution
6.1 Installing and Running a WAR in Tomcat (correlator, dashboard web application, populator app and otrs messaging application)
6.2 Installing and Running the Trap Processor JAR
6.3 Installing and Running the OpsDB Copier JAR
7 Log files
7.1 Trap Processor and Correlator
7.2 Web application
7.3 Network Map DB Repopulator
7.4 OTRS Messaging Application
7.5 OpsDBCopier
8 RPM Repositories

Introduction

This guide describes how to install the entire Dashboard system (end-to-end).
The system consists of 6 applications:

  • Trap Processor
  • Correlator
  • Web Application (contains front-end among other things)
  • Network Map DB Repopulator (relies on but distinct from the "networkmap" maven project)
  • OTRS Messaging Application
  • OpsDB copier

Each of these applications consists of a deployable artifact (either a WAR or a JAR). The artifacts are built automatically on each code push by the Bamboo continuous integration server, which also automatically deploys each artifact to its respective TEST server. Note that Bamboo is currently unable to build or deploy the opsDBCopier project because of a certificate issue between Bamboo and the win.dante.org.uk instance of Artifactory; please speak to Michael Haller to resolve this. In the meantime, the project can be built locally and deployed manually if and when necessary (which should be very rare, as there are likely to be very few changes to this application).

Server Breakdown

The servers are split into PROD, UAT and TEST. Within each, there are primary (01) and backup (02) environments (so 6 environments in total).
Each environment is, in turn, split into 3 machines (note that the correlator and web application machines house additional applications besides just the correlator and web application respectively; they are simply named after the main application sitting on them):

  • A Trap Processor machine
  • A Correlator machine
  • A Web Application machine

There are 5 main applications to be split across any given environment. The split occurs as follows (Prod 01 environment used in the examples):
prod-dboard01-trap.geant.net

  • Trap Processor application (JAR) (TrapProcessor / dashboard-fs-trap-processor)


prod-dboard01-corr.geant.net

  • Correlator application (WAR in tomcat) (dashboard-fs-correlator)
  • Network Map Populator application (JAR) (populate-web / dashboard-networkdb-repopulator)


prod-dboard01.geant.net

  • Web Application (WAR in tomcat) (dashboardV2)
  • OTRS application (WAR in tomcat) (otrs-messaging)


There is also a small 6th application, which sits on the Production OpsDB 01 box (prod-opsdb01.geant.net), whose purpose is to periodically (or on demand) provide updated versions of OpsDB for network map population. There are unlikely to be many (if any) new versions of this particular application, so it is unlikely to be necessary to deploy it regularly.

Server Set-up

Software needed on the servers

Please consult the original ticket raised for the set-up of the Dashboard machines for details of the software required. In addition to any standard prerequisites, and as a bare minimum, the machines should have the following (in each case, the version should be the one currently on UAT, except for the Nginx exception listed below):
prod-dboard01-trap.geant.net

  • JDK 8 (Oracle version)
  • A local ActiveMQ instance (if using the version of the trap processor which requires a local MQ instance)


prod-dboard01-corr.geant.net

  • JDK 8 (Oracle)
  • Apache Tomcat 7
  • MySQL Server


prod-dboard01.geant.net

  • JDK 8 (Oracle)
  • Apache Tomcat 7
  • Nginx (version should be that on the redundant test-newdboard01.geant.net server)


Trap Processor machine set-up

  • Unless moving to the later version of the trap processor JAR currently on test (which does not require a local MQ instance), please ensure ActiveMQ is up and running on the trap processor machine before you start the trap processor.
  • Ensure all relevant MIBs are present under whatever MIB_PATH you are going to specify when running the application (use /usr/share/snmp/mibs, unless you have a good reason not to).
  • Create the following directory if it does not exist (it will be used for logging as described below, unless you alter the trap processor's log4j.xml file): /var/log/snmptraps/new-dashboard/TrapProcessor
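For example (adjust the path only if you have changed the log4j.xml configuration):

    sudo mkdir -p /var/log/snmptraps/new-dashboard/TrapProcessor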


Correlator machine set-up

MySQL Databases


The following databases should be present on the correlator box:

  • alarms2
  • networkmap
  • maintenance
  • router_population
  • opsdb
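As a minimal sketch, the databases can be created as follows (add character set or collation options if the dump files require them):

    mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS alarms2;
      CREATE DATABASE IF NOT EXISTS networkmap;
      CREATE DATABASE IF NOT EXISTS maintenance;
      CREATE DATABASE IF NOT EXISTS router_population;
      CREATE DATABASE IF NOT EXISTS opsdb;"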


Once these have been created, please import the dump files from the following repo to set up the correct table structure, constraints etc.:

  • TO BE POPULATED (once dump files have been added to a repo)
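Once the dump files are available, each can be imported along these lines (the dump file name here is a hypothetical placeholder):

    mysql -u root -p alarms2 < alarms2-dump.sql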


Run one of the following scripts to create the relevant MySQL users and add privileges:

  • TO BE POPULATED (based on privileges on correlator box)
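Pending that script, the user set-up will look something like the following sketch (the user name, password and privilege list are hypothetical placeholders, to be replaced with the real values from the correlator box):

    mysql -u root -p -e "CREATE USER 'dashboard'@'localhost' IDENTIFIED BY 'changeme';
      GRANT SELECT, INSERT, UPDATE, DELETE ON alarms2.* TO 'dashboard'@'localhost';
      GRANT SELECT, INSERT, UPDATE, DELETE ON networkmap.* TO 'dashboard'@'localhost';
      FLUSH PRIVILEGES;"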


Directory and Permissions set-up

  • Post tomcat7 installation, ensure that ownership of the /var/lib/apache-tomcat7/webapps directory is set to tomcat7:tomcat7 (otherwise tomcat will not be able to expand WARs).
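For example:

    sudo chown -R tomcat7:tomcat7 /var/lib/apache-tomcat7/webapps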

Web Application machine set-up

  • Post tomcat7 installation, ensure that ownership of the /var/lib/apache-tomcat7/webapps directory is set to tomcat7:tomcat7, as on the correlator machine (otherwise tomcat will not be able to expand WARs).

OpsDB machine set-up

  • Create the vdashboard view in the source OpsDB instance (prod01 or 02). Source code in the following repo: TO BE POPULATED (based on current production opsdb view)
  • Add a .my.cnf file in the home folder of the user who will run the copier RPM (a sketch is given after this list):
    • Contents should be as they are in the config subdirectory of dante.sanigar
    • Privileges should be set so that no one apart from the user who runs the copier can read it
    • This allows mysqldump to be performed without supplying the password within the app or within a properties file (the password is contained in the .my.cnf file, but is unreadable to all but the running user).
  • Add an application.properties file in the config sub-directory of the directory containing the JAR (Spring Boot looks here automatically).
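As a sketch of the .my.cnf set-up (the real contents live in the config subdirectory of dante.sanigar; the user name and password shown here are hypothetical placeholders):

    [client]
    user=opsdb_copier     # hypothetical
    password=changeme     # hypothetical

and then restrict it to the running user:

    chmod 600 ~/.my.cnf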


Puppet Configuration

Existing Puppet Repositories (not foremanised)

There are puppet configurations for 2 of the 3 machines in any given environment: the correlator machine and the web application machine (trap processor configuration is currently very simple and consists of only 2 files bundled inside the ZIP build artifact produced by Bamboo but outside the executable JAR). Note that these puppet repos contain machine-wide configurations; they do not just apply to the correlator and webapp applications.
The repositories containing these configurations have the following urls:
git@git:dashboard-correlator-puppet
git@git:dashboard-webapp-puppet
They are configured in the win.dante.org.uk gitolite server, whose admin repository is contained at the following url:
git@git:gitolite-admin
The dashboard-webapp-puppet repository was adapted from the old dashboard puppet config, and still contains the odd leftover manifest/template/file entry which should be removed (notably references to MySQL; there is no need for a MySQL server on the webapp box, as the only database it accesses is on the correlator machine).
Both repositories should be examined carefully by SysOps/Dev when they are moved into Foreman to ensure that they are as streamlined as possible and there are no unwanted services running on the machines in question which could tie up vital system resources (notably processor cores and threads) unnecessarily.
In the dashboard-correlator-puppet repository, there is an additional branch besides master called "oneMachine". This contains the base config for the time when GEANT move to the new physical multicore Blade servers (as the web application should be moved to the correlator box at that time to speed up DB access for front-end operations). The branch essentially contains the relevant parts of dashboard-webapp-puppet configuration integrated into the dashboard-correlator-puppet. However, please note that the very latest changes from the master branches of both dashboard-correlator-puppet and dashboard-webapp-puppet have not yet been incorporated into this branch, so merging and porting will be necessary to bring this branch up to date as and when it's needed.

Puppet environment-specific working copies

Although these puppet configurations have not yet been formalised by SysOps (including full integration into Foreman and the set-up of formalised working copies for the various machines), we do have manually run working copies of each repository for the TEST, UAT and PROD 01 environments. These working copies are contained in a "newdashboard" subdirectory of the dante.tipper home directory on each 01 machine (bar the trap processor box, for the reasons explained above). To run puppet and apply these configurations, go to the dante.tipper directory and run the executable script "run-test.sh" (more details in the Installation and Execution section below).
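Concretely, this looks as follows (the home directory path is an assumption based on the /home/local/GEANT/<user> pattern used elsewhere in this guide; use sudo only if the script does not itself escalate):

    cd /home/local/GEANT/dante.tipper
    sudo ./run-test.sh   # applies the local puppet working copy for this environment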
Each working copy will have slightly different environment-specific values in some of their config files. In the dashboard-correlator-puppet working copies, you will need to set the environment prefix to either "test", "uat" or "prod" depending on which environment's working copy it is in the following files (all within files/apachetomcat7):

  • correlator-esb.properties
  • populate-esb.properties

Similarly, in the dashboard-webapp-puppet working copies, you will need to set the environment prefix in:

  • webapp-esb.properties
  • otrs-esb.properties

And you will need to change the host prefix as appropriate in webapp-db.properties (again, to either test, uat or prod).
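For illustration, on a TEST working copy the prefix lines would look something like this (the environmentPrefix key is taken from the trap processor properties shown later in this guide and is assumed to match the other esb.properties files; the host property name in webapp-db.properties is a hypothetical placeholder):

    # files/apachetomcat7/correlator-esb.properties (likewise populate/webapp/otrs)
    environmentPrefix=test

    # files/apachetomcat7/webapp-db.properties (dbHost is a hypothetical key name)
    dbHost=test-dboard01-corr.geant.net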
In the correlator, if installing to production, you will also need to alter correlator.properties to enable otrs integration (set the otrsIntegrationEnabled flag to true).
In the web application, if installing to production, change webapp.properties to set disableManualNetworkMapRefresh to false.
When the web application is running, system settings can be adjusted as appropriate including enabling/disabling maintenance check (this will be done by the OC).

Formalisation of Puppet Repositories

Puppet formalisation for Web Application machine

A partial formalisation of the dashboard-webapp-puppet repository has already been carried out by Michael Haller and is contained in the following repository:
it.geant.net:puppet-modules/puppet-dashboard-webapp
This repository also, crucially, contains the configuration for nginx (the web-sockets-compatible web server which should ultimately be installed on the web application box, and which will forward all incoming requests aimed at the GUI on to tomcat 7).
By "partial formalisation", what is meant is that this repository was created some weeks ago, and so the latest changes in dashboard-webapp-puppet will need to be ported across into it. Once they have been, a ticket should be raised with Michael Haller to make this configuration the puppet master of test-dboard01 (and then uat-dboard01 and prod-dboard01 when it has been confirmed that test is running as desired, as well as the respective 02 servers when they are set up).

Puppet formalisation for Correlator machine

There is also a repository for formalising the correlator puppet configuration:
it.geant.net:puppet-modules/puppet-dashboard-correlator
but it is currently empty. A ticket should be raised with Michael Haller to foremanise the puppet configuration in dashboard-correlator-puppet and to port it across to the new repository. This configuration can then be made the puppet master of test-dboard01-corr (and subsequently uat-dboard01-corr, as well as the respective 02 servers when they are set up).

Deployment through Bamboo

A link to all Dashboard "build" projects is below (the "deployment" projects can either be accessed through their associated build project, or through the "Deployment Projects" link at the top of the page):
https://ci.win.dante.org.uk/bamboo/browse/DBOARD
Note that some of these build projects are shared libraries used by the applications listed above; it is only the applications themselves which need to be deployed.
Each Bamboo build project is linked to a particular Git source repository. Bamboo then polls the repo in question for new commits, and rebuilds the project if changes are detected. Any "dependent" projects within Bamboo are also built at the same time (eg. changes to the "common" repo will trigger not only a build of the "common" Bamboo project, but also a build of all dependent Bamboo projects). Dependencies between Bamboo projects are established in the "Dependencies" tab when editing a build project.
As stated above, deployment of each of the applications (bar opsDBCopier) to its respective TEST environment is executed automatically at the end of a successful build (so, for example, on successful build of the "correlator" project "master" branch, the WAR produced will be automatically deployed to the /tmp directory on test-dboard01-corr.geant.net). For deployment of these applications to either UAT or PRODUCTION, you will need to manually trigger the deployment in question. To do this:

  • Go to the "Deploy x" deployment project in Bamboo (eg. "Deploy Correlator").
  • Select the "deploy" icon on the right hand side for the environment in question:
    • The deployment icon looks a little like a cloud with an arrow on it (hover over the icons if in doubt)
  • Select whether you want to:
    • create a new release from the latest build result (or a prior build result)
    • promote an existing release (one which has already been deployed to a different environment)
  • Click Start Deployment:
    • this is currently set up to deploy to the /tmp directory on the environment in question


Installation and Execution

After the application in question has been deployed, it will need to be installed and run. How this is done will depend on what kind of artifact the application is packaged in. If it's a WAR, it should be run in Tomcat 7; if it's a JAR, it should be run from the command line (though configuration will vary depending on the application in question).

Installing and Running a WAR in Tomcat (correlator, dashboard web application, populator app and otrs messaging application)

  • Stop the apache-tomcat7 service
  • Remove the expanded webapp folder for the application in question from /var/lib/apache-tomcat7/webapps
    • (eg. sudo rm -rf /var/lib/apache-tomcat7/webapps/dashboard-fs-correlator/)
  • Move the new artifact into the webapps directory, renaming if appropriate (the renaming is needed to strip version information, as puppet expects the installed files to lack version info; so dashboard-fs-correlator-1.0-SNAPSHOT.war should simply be called dashboard-fs-correlator.war. You could alternatively alter the maven poms of the projects in question to have <finalName> elements without version information), eg:
    • sudo mv /tmp/dashboard-fs-correlator-1.0-SNAPSHOT.war /var/lib/apache-tomcat7/webapps/dashboard-fs-correlator.war
    • sudo mv /tmp/dashboardV2-web.war /var/lib/apache-tomcat7/webapps/dashboardV2.war (dashboardV2-web.war is the artifact name produced by maven, but it needs to be renamed to dashboardV2.war when installed in tomcat webapps for the puppet configuration to be applied to it; these naming inconsistencies are a relic of a previous time and could be eliminated either by altering the puppet configuration to add the -web suffix, or by removing it from the <finalName> of the maven build configuration in the dashboardV2-web module's pom).
  • Repeat for other Dashboard apps to be installed on that box
  • Start tomcat
  • Once all wars have been expanded, run puppet (locally for now):
    • Go to dante.tipper home directory
    • Make sure that the puppet local working copy for this environment is up-to-date:
      • The puppet configuration is contained under a directory within dante.tipper called "newdashboard"
      • Each of these working copies maps to its respective remote git repo in the win.dante.org.uk instance of gitolite (run git remote -v when inside the "newdashboard" directory to see its remote url).
    • Navigate back to the dante.tipper directory and run the following command: ./run-test.sh:
      • The file concerned should exist and be executable
    • Check that puppet has run correctly, ie. without errors (note that the Web Application puppet configuration needs its mysql server config removed before it will run correctly)
    • tail tomcat and application logs to ensure the application(s) have started correctly (a consolidated sketch of this whole sequence follows this list)
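Pulling the steps above together, a typical install run on the correlator box might look like the following sketch (the service command style, home directory layout and tomcat log path are assumptions; adjust for your init system and installation):

    sudo service apache-tomcat7 stop
    sudo rm -rf /var/lib/apache-tomcat7/webapps/dashboard-fs-correlator/
    sudo mv /tmp/dashboard-fs-correlator-1.0-SNAPSHOT.war /var/lib/apache-tomcat7/webapps/dashboard-fs-correlator.war
    sudo service apache-tomcat7 start
    cd /home/local/GEANT/dante.tipper/newdashboard && git pull    # update the puppet working copy
    cd /home/local/GEANT/dante.tipper && sudo ./run-test.sh       # apply puppet locally
    sudo tail -f /var/lib/apache-tomcat7/logs/catalina.out        # log path assumption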

Installing and Running the Trap Processor JAR

  • Move the already deployed Trap Processor ZIP file (currently in the /tmp directory) to whichever directory you want it to be in
  • Unzip it
  • Go into the unzipped directory. It should contain:
    • a trap-processor-esb.properties file
    • a trapProcessor.properties file
    • a local-messaging.properties file (unless you have switched to the later version of the trap processor jar currently on test, which does not require a local MQ instance; this file is only relevant on production, as the test instance does not have a local ActiveMQ)
  • If necessary, alter the trap-processor-esb.properties file to use the chosen ESB, the ESB credentials and the correct environment prefix (ie. test if on the test box, uat if on the uat box, prod if on the prod box). It should look like the below if using the 01 ESB:
    • brokerUrl=tcp://prod-dboard01-esb.geant.net:61616
    • userName=system
    • password=manager
    • environmentPrefix=prod (or uat or test as appropriate)
  • Run the trap processor as root (root is required in order to listen on port 162) from the command line, using an ampersand (&) to run it as a separate process and passing in the following properties via command line parameters (-Dname=value); a sketch of the full command is given after this list. Note that if you are sudoing and have to enter a password, this can stop the application from running when launched as a separate process (ie. when the command is appended with an ampersand):
    • CONFIG_HOME (file url pointing to the directory containing the properties files listed above)
    • MIB_PATH (file url pointing to the directory containing the MIBs on this box, usually /usr/share/snmp/mibs)
  • tail the trap processor log file to make sure that the application starts up correctly, loads its MIBs and begins processing traps
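A sketch of the full run command (the JAR name and unzip location are hypothetical placeholders; CONFIG_HOME should point at the unzipped directory containing the properties files):

    cd /opt/trap-processor                                  # wherever you unzipped the artifact
    sudo java -DCONFIG_HOME=file:///opt/trap-processor \
              -DMIB_PATH=file:///usr/share/snmp/mibs \
              -jar trap-processor.jar > console.log 2>&1 &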

Installing and Running the OpsDB Copier JAR

  • Use Bamboo to deploy the OpsDB copier JAR to prod-opsdb01 (note that the deployment job has not yet been created).
  • Ensure a local config/ directory exists within whichever directory you want to run the JAR from. It should contain a single application.properties file with all relevant properties:
    • see the file in dante.sanigar/application.properties or
    • create the file and use the contents of the application.properties file present within the OpsdbCopier application source code, amending for local paths etc if necessary.
  • Run the opsdbCopier JAR from the command line, using an ampersand (&) to run it as a separate process (see the sketch after this list)
  • Ensure /home/local/GEANT/dante.sanigar/opsdb-open.key is present
  • tail the copier log file (or whichever file the command specified the app should log to) to make sure that the application starts up correctly and begins its copy runs
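A sketch of the run command (the JAR name is a hypothetical placeholder; Spring Boot picks up config/application.properties relative to the working directory, as noted above):

    cd /home/local/GEANT/dante.sanigar    # directory containing the jar and its config/ subdirectory
    java -jar opsdb-copier.jar > copier.log 2>&1 &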


Note: the above should all be automated inside an RPM on installation (or by using Bamboo to run scripts on the client machine).





Log files

Log4j.xml files for the Correlator, Network DB Repopulator, Web Application and OTRS messaging application are all contained in their respective puppet configurations.
The Trap Processor's log4j.xml file is internal (this could be externalised if there is ever any need to change it).
The opsdbCopier's log4j.xml file is also internal, but unlike the trap processor its only appender is currently to System.out, so the file it logs to can be determined when the jar is run.

Trap Processor and Correlator

On their respective machines, the trap processor and correlator log to the following directories respectively:
/var/log/snmptraps/new-dashboard/TrapProcessor
/var/log/snmptraps/new-dashboard/Correlator
Within these directories, you should find three files when the applications are up-and-running:
allLog.txt: contains all messages at level DEBUG and above
warnLog.txt: contains all WARN messages
errorLog.txt: contains all ERROR messages
The separate files for errors and warnings exist for monitoring and debugging convenience. All error and warning messages will also be present in the allLog.txt file.
Note that, where the correlator is concerned, you will still need to check catalina.out on start-up (and for general tomcat problems), as application logging will only be directed to the application-specific log files from the point in application context loading where the log4j log binder is initialised.

Web application

The web application is currently set up to log to dante.tipper/DashboardWeb/allLog.txt (though the puppet repo has been updated to log to /var/log/snmptraps/new-dashboard/DashboardWeb/allLog.txt instead; the changes have not yet been deployed).

Network Map DB Repopulator

The Network Map DB Repopulator is currently set up to log to dante.sanigar/populate.log (though the puppet repo has been updated to log to /var/log/snmptraps/new-dashboard/Populator/populate.log instead; the changes have not yet been deployed).

OTRS Messaging Application

The OTRS Messaging Application is currently set up to log to dante.tipper/otrs.log (though the puppet repo has been updated to log to /var/log/snmptraps/new-dashboard/OTRS/otrs.log instead; the changes have not yet been deployed).

OpsDBCopier

The OpsDBCopier application is currently logging to dante.sanigar/copier.log (but the file can be changed whenever the application is restarted).

RPM Repositories

There is one existing RPM repository for the Dashboard Web Application contained in the following repo:
git@git:rpms/dashboardV2
This repo could be used as a model for the creation of all WAR based RPMs (though ideally RPM creation would be integrated into Bamboo).
This repository follows the standard GEANT approach for creation of RPMs based on:

  • a pull from artifactory (rpmdev-bootstrap)
  • a build of the rpm from a spec file (rpmdev-build)
  • publication of the rpm to artifactory (rpmdev-publish)
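In practice, a release run from a working copy of the RPM repo therefore boils down to the following sequence (shown without arguments; consult the repo itself for any required options):

    rpmdev-bootstrap   # pull the WAR artifact from artifactory
    rpmdev-build       # build the RPM from the spec file
    rpmdev-publish     # publish the finished RPM to artifactory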

