...

The following table shows the DNS configuration and the role each machine plays in the cluster.

FQDN                                  IP              Role
wifimon-node1.rashexample.alorg       10.254.24.230   master-eligible / data node
wifimon-node2.rashexample.alorg       10.254.24.232   master-eligible / data node
wifimon-node3.rashexample.alorg       10.254.24.237   master-eligible / data node
wifimon-kibana.rashexample.alorg      10.254.24.148   coordinating node
wifimon-logstash.rashexample.alorg    10.254.24.233   pipeline node

...

Node                                     Open ports
wifimon-node{1,2,3}.rashexample.alorg    9200/tcp, 9300/tcp
wifimon-kibana.rashexample.alorg         9200/tcp, 9300/tcp, 5601/tcp
wifimon-logstash.rashexample.alorg       5044/tcp

Port 9200/tcp is used to query the cluster through the Elasticsearch REST API. Port 9300/tcp is used for internal communication between the cluster nodes. Port 5044/tcp is where Logstash listens for log events sent by the Filebeat agents. Port 5601/tcp is used to access the Kibana platform from a browser.
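How these ports are opened depends on the local firewall tooling; as an illustration only (assuming firewalld is in use), the ports of a master-eligible/data node could be opened as follows, adjusting the port list per node according to the table above:

Code Block
firewall-cmd --permanent --add-port=9200/tcp --add-port=9300/tcp
firewall-cmd --reload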

...

Cluster communication is secured by configuring SSL/TLS. The elasticsearch-certutil utility was used to generate a CA certificate, which was then used to sign the certificates of the individual cluster components. The utility ships with the elasticsearch installation; in this case the copy installed on the wifimon-kibana.rashexample.alorg node was used.

Create the instances.yml file with the following contents:

Code Block
titleinstances.yml
instances:
- name: node1
  dns: wifimon-node1.rashexample.alorg
  ip: 10.254.24.230
- name: node2
  dns: wifimon-node2.rashexample.alorg
  ip: 10.254.24.232
- name: node3
  dns: wifimon-node3.rashexample.alorg
  ip: 10.254.24.237
- name: kibana
  dns: wifimon-kibana.rashexample.alorg
  ip: 10.254.24.148
- name: logstash
  dns: wifimon-logstash.rashexample.alorg
- name: filebeat

Generate the CA certificate and key:

...

Create a directory named certs under each component’s configuration directory, and copy into it the certificate authority certificate along with the component’s own certificate and key. When finished, the certs directories on each component should match the layouts shown below.

On wifimon-kibana.rashexample.alorg node:

Code Block
/etc/elasticsearch/certs/
├── ca.crt
├── kibana.crt
└── kibana.key

/etc/kibana/certs/
├── ca.crt
├── kibana.crt
└── kibana.key
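For illustration, on the wifimon-kibana.rashexample.alorg node the copy step could look like the sketch below. The source directory ~/certs-output is an assumption about where the generated certificates were unpacked; the ownership and permissions shown are one reasonable choice, not a requirement of the original procedure.

Code Block
# assumed location of the unpacked certificates generated earlier
SRC=~/certs-output
mkdir -p /etc/elasticsearch/certs /etc/kibana/certs
cp $SRC/ca/ca.crt $SRC/kibana/kibana.crt $SRC/kibana/kibana.key /etc/elasticsearch/certs/
cp $SRC/ca/ca.crt $SRC/kibana/kibana.crt $SRC/kibana/kibana.key /etc/kibana/certs/
chown -R root:elasticsearch /etc/elasticsearch/certs
chown -R root:kibana /etc/kibana/certs
chmod 640 /etc/elasticsearch/certs/* /etc/kibana/certs/*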

On wifimon-node1.rashexample.alorg node:

Code Block
/etc/elasticsearch/certs/
├── ca.crt
├── node1.crt
└── node1.key

On wifimon-node2.rashexample.alorg node:

Code Block
/etc/elasticsearch/certs/
├── ca.crt
├── node2.crt
└── node2.key

On wifimon-node3.rashexample.alorg node:

Code Block
/etc/elasticsearch/certs/
├── ca.crt
├── node3.crt
└── node3.key

On wifimon-logstash.rashexample.alorg node:

Code Block
/etc/logstash/certs/
├── ca.crt
├── logstash.crt
└── logstash.pkcs8.key

...

Note
titleNOTE

Elasticsearch keystore should be configured before running this configuration.
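The note refers to secrets the configuration below relies on, such as the passphrase of the node certificate key. As a sketch only, and assuming the keys were generated with a passphrase (if they were not, these entries are unnecessary), the keystore could be prepared like this:

Code Block
/usr/share/elasticsearch/bin/elasticsearch-keystore create
# each add command prompts for the passphrase of the node certificate key
/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.secure_key_passphrase
/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.secure_key_passphrase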

On wifimon-node1.rashexample.alorg node:

Code Block
title/etc/elasticsearch/elasticsearch.yml
cluster.name: wifimon
node.name: ${HOSTNAME}
node.master: true
node.voting_only: false
node.data: true
node.ingest: true
node.ml: false
cluster.remote.connect: false
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: wifimon-node1.rashexample.alorg
discovery.seed_hosts: [
    "wifimon-node1.rashexample.alorg",
    "wifimon-node2.rashexample.alorg",
    "wifimon-node3.rashexample.alorg"
]
#cluster.initial_master_nodes: [
#    "wifimon-node1.rashexample.alorg",
#    "wifimon-node2.rashexample.alorg",
#    "wifimon-node3.rashexample.alorg"
#]
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: full
xpack.security.http.ssl.key: /etc/elasticsearch/certs/node1.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/node1.crt
xpack.security.http.ssl.certificate_authorities: /etc/elasticsearch/certs/ca.crt
xpack.security.transport.ssl.key: /etc/elasticsearch/certs/node1.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/node1.crt
xpack.security.transport.ssl.certificate_authorities: /etc/elasticsearch/certs/ca.crt
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true

On wifimon-node2.rashexample.alorg node:

Code Block
title/etc/elasticsearch/elasticsearch.yml
cluster.name: wifimon
node.name: ${HOSTNAME}
node.master: true
node.voting_only: false
node.data: true
node.ingest: true
node.ml: false
cluster.remote.connect: false
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: wifimon-node2.rashexample.alorg
discovery.seed_hosts: [
    "wifimon-node1.rashexample.alorg",
    "wifimon-node2.rashexample.alorg",
    "wifimon-node3.rashexample.alorg"
]
#cluster.initial_master_nodes: [
#    "wifimon-node1.rashexample.alorg",
#    "wifimon-node2.rashexample.alorg",
#    "wifimon-node3.rashexample.alorg"
#]
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: full
xpack.security.http.ssl.key: /etc/elasticsearch/certs/node2.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/node2.crt
xpack.security.http.ssl.certificate_authorities: /etc/elasticsearch/certs/ca.crt
xpack.security.transport.ssl.key: /etc/elasticsearch/certs/node2.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/node2.crt
xpack.security.transport.ssl.certificate_authorities: /etc/elasticsearch/certs/ca.crt
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true

On wifimon-node3.rashexample.alorg node:

Code Block
title/etc/elasticsearch/elasticsearch.yml
cluster.name: wifimon
node.name: ${HOSTNAME}
node.master: true
node.voting_only: false
node.data: true
node.ingest: true
node.ml: false
cluster.remote.connect: false
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: wifimon-node3.rashexample.alorg
discovery.seed_hosts: [
    "wifimon-node1.rashexample.alorg",
    "wifimon-node2.rashexample.alorg",
    "wifimon-node3.rashexample.alorg"
]
#cluster.initial_master_nodes: [
#    "wifimon-node1.rashexample.alorg",
#    "wifimon-node2.rashexample.alorg",
#    "wifimon-node3.rashexample.alorg"
#]
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: full
xpack.security.http.ssl.key: /etc/elasticsearch/certs/node3.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/node3.crt
xpack.security.http.ssl.certificate_authorities: /etc/elasticsearch/certs/ca.crt
xpack.security.transport.ssl.key: /etc/elasticsearch/certs/node3.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/node3.crt
xpack.security.transport.ssl.certificate_authorities: /etc/elasticsearch/certs/ca.crt
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true

...

A coordinating node is an Elasticsearch node that joins the cluster like any other node but has all node roles disabled; it only routes requests to the data nodes and aggregates the results. In this setup, the coordinating node is named wifimon-kibana.rashexample.alorg because the Kibana visualization platform has been installed and configured on it.

Below is the configuration of wifimon-kibana.rashexample.alorg as an Elasticsearch coordinating node. It follows the same pattern as the master-eligible/data nodes, but with the node roles set to false.

Note
titleNOTE

Elasticsearch keystore should be configured before running this configuration.

On wifimon-kibana.rashexample.alorg node:

Code Block
title/etc/elasticsearch/elasticsearch.yml
cluster.name: wifimon
node.name: ${HOSTNAME}
node.master: false
node.voting_only: false
node.data: false
node.ingest: false
node.ml: false
cluster.remote.connect: false
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: wifimon-kibana.rashexample.alorg
discovery.seed_hosts: [
    "wifimon-node1.rashexample.alorg",
    "wifimon-node2.rashexample.alorg",
    "wifimon-node3.rashexample.alorg"
]
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: full
xpack.security.http.ssl.key: /etc/elasticsearch/certs/kibana.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/kibana.crt
xpack.security.http.ssl.certificate_authorities: /etc/elasticsearch/certs/ca.crt
xpack.security.transport.ssl.key: /etc/elasticsearch/certs/kibana.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/kibana.crt
xpack.security.transport.ssl.certificate_authorities: /etc/elasticsearch/certs/ca.crt
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true

...

Password setup requires the nodes to be up and running in a healthy cluster. Start the elasticsearch instance on each cluster node and, after ensuring each instance is running properly, run the following command on the wifimon-kibana.rashexample.alorg node to set up the passwords.

Code Block
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto -u "https://wifimon-kibana.rashexample.alorg:9200"

The above command will randomly generate passwords for each built-in user. Save the output!
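As a quick sanity check, the elastic password can be verified against the authentication endpoint (this check is illustrative and not part of the original procedure):

Code Block
curl -XGET --cacert /etc/elasticsearch/certs/ca.crt --user elastic 'https://wifimon-kibana.rashexample.alorg:9200/_security/_authenticate?pretty'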

...

Note
titleNOTE

Kibana keystore should be configured before running this configuration.

On wifimon-kibana.rashexample.alorg node:

Code Block
title/etc/kibana/kibana.yml
server.port: 5601
server.host: "wifimon-kibana.rashexample.alorg"
server.name: "wifimon-kibana"
elasticsearch.hosts: ["https://wifimon-kibana.rashexample.alorg:9200"]
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/certs/kibana.crt
server.ssl.key: /etc/kibana/certs/kibana.key
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/certs/ca.crt"]
elasticsearch.ssl.verificationMode: full

The elasticsearch.hosts setting is an array of the URLs of the nodes to which queries are sent. It is set to https://wifimon-kibana.rashexample.alorg:9200, which is the coordinating node.

...

Start the kibana service, access the platform at https://wifimon-kibana.rashexample.alorg:5601 and authenticate with the elastic superuser and its password.

...

Even though it is possible to explore the cluster using the Kibana platform, this section is about querying the cluster through the REST API provided by Elasticsearch. The querying commands are executed on the wifimon-kibana.rashexample.alorg node, authenticating as the elastic user.

...

Code Block
curl -XGET --cacert /etc/elasticsearch/certs/ca.crt --user elastic 'https://wifimon-kibana.rashexample.alorg:9200/_cat/nodes?v'

Each node is represented by a row showing the node's IP address, heap and memory usage percentages, the load averages (as in the uptime command output), the node roles (m)aster-eligible, (d)ata and (i)ngest, a marker (*) for the node elected as master, and the node's name.

...

Code Block
curl --cacert /etc/elasticsearch/certs/ca.crt --user elastic -XGET 'https://wifimon-kibana.rashexample.alorg:9200/_cat/master?v'

Display health:

Code Block
curl -XGET --cacert /etc/elasticsearch/certs/ca.crt --user elastic 'https://wifimon-kibana.rashexample.alorg:9200/_cat/health?v'

Our cluster has green status, but this will change to yellow after stopping the elasticsearch instance on the master node; the master node was chosen deliberately in order to observe the election of a new master.

On wifimon-node1.rashexample.alorg (the master node) run "systemctl stop elasticsearch.service" to stop the elasticsearch instance.

Querying the cluster again from the wifimon-kibana.rashexample.alorg node shows that wifimon-node1.rashexample.alorg has left the cluster and wifimon-node3.rashexample.alorg has been elected as the new master. The cluster status is now yellow.

Start the elasticsearch instance on the wifimon-node1.rashexample.alorg node and query the cluster again. wifimon-node1.rashexample.alorg will rejoin the cluster and the cluster status will become green again, while wifimon-node3.rashexample.alorg remains the master node.

...

Code Block
title/tmp/radius_sample_logs
Sun Mar 10 08:16:05 2019
    Service-Type = Framed-User
    NAS-Port-Id = "wlan2"
    NAS-Port-Type = Wireless-802.11
    User-Name = "sgjeci@rashsgjeci@example.alorg"
    Acct-Session-Id = "82c000cd"
    Acct-Multi-Session-Id = "CC-2D-E0-9A-EB-A3-88-75-98-6C-31-AA-82-C0-00-00-00-00-00-CD"
    Calling-Station-Id = "88-75-98-6C-31-AA"
    Called-Station-Id = "CC-2D-E0-9A-EB-A3:eduroam"
    Acct-Authentic = RADIUS
    Acct-Status-Type = Start
    NAS-Identifier = "Eduroam"
    Acct-Delay-Time = 0
    NAS-IP-Address = 192.168.192.111
    Event-Timestamp = "Mar 8 2019 08:16:05 CET"
    Tmp-String-9 = "ai:"
    Acct-Unique-Session-Id = "e5450a4e16d951436a7c241eaf788f9b"
    Realm = "rashexample.alorg"
    Timestamp = 1552029365


Code Block
title/tmp/dhcp_sample_logs
Jun 18 19:15:20 centos dhcpd[11223]: DHCPREQUEST for 192.168.1.200 from a4:c4:94:cd:35:70 (galliumos) via wlp6s0
Jun 18 19:15:20 centos dhcpd[11223]: DHCPACK on 192.168.1.200 to a4:c4:94:cd:35:70 (galliumos) via wlp6s0

...

Code Block
title/tmp/sample_logs_output.json
{"@timestamp":"2020-06-28T13:07:37.183Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.8.0"},"logtype":"radius","message":"Sun Mar 10 08:16:05 2019\n\tService-Type = Framed-User\n\tNAS-Port-Id = \"wlan2\"\n\tNAS-Port-Type = Wireless-802.11\n\tUser-Name = \"sgjeci@rashsgjeci@example.alorg\"\n\tAcct-SessionId = \"82c000cd\"\n\tAcct-Multi-Session-Id = \"CC-2D-E0-9A-EB-A3-88-75-98-6C-31-AA82-C0-00-00-00-00-00-CD\"\n\tCalling-Station-Id = \"88-75-98-6C-31-AA\"\n\tCalledStation-Id = \"CC-2D-E0-9A-EB-A3:eduroam\"\n\tAcct-Authentic = RADIUS\n\tAcctStatus-Type = Start\n\tNAS-Identifier = \"Eduroam\"\n\tAcct-Delay-Time = 0\n\tNASIP-Address = 192.168.0.22\n\tEvent-Timestamp = \"Mar 8 2019 08:16:05 CET\"\n\tTmpString-9 = \"ai:\"\n\tAcct-Unique-Session-Id = \"e5450a4e16d951436a7c241eaf788f9b\"\n\tRealm = \"rashexample.alorg\"\n\tTimestamp = 1552029365"}

...

Code Block
output.logstash:
  hosts: ["wifimon-logstash.rashexample.alorg:5044"]
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]
  ssl.certificate: "/etc/filebeat/certs/filebeat.crt"
  ssl.key: "/etc/filebeat/certs/filebeat.key"
  ssl.key_passphrase: "${key_passphrase}"
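The ${key_passphrase} reference above is resolved from the Filebeat keystore (or from an environment variable). Assuming the certificate key was generated with a passphrase, a minimal sketch of storing it in the keystore is:

Code Block
filebeat keystore create
# prompts for the value of key_passphrase
filebeat keystore add key_passphrase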

...

Code Block
set +o history
filebeat setup --index-management \
-E output.logstash.enabled=false \
-E 'output.elasticsearch.hosts=["wifimon-kibana.rashexample.alorg:9200"]' \
-E output.elasticsearch.protocol=https \
-E output.elasticsearch.username=elastic \
-E output.elasticsearch.password=elastic-password-goes-here \
-E 'output.elasticsearch.ssl.certificate_authorities=["/etc/filebeat/certs/ca.crt"]'
set -o history

The above command loads the Filebeat index template into Elasticsearch through the wifimon-kibana.rashexample.alorg node, where an elasticsearch instance is running. Detailed information is written to the Filebeat log file.
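As an optional check, not part of the original procedure, the loaded template can be listed through the _cat API:

Code Block
curl -XGET --cacert /etc/elasticsearch/certs/ca.crt --user elastic 'https://wifimon-kibana.rashexample.alorg:9200/_cat/templates/filebeat*?v'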

...

Code Block
monitoring.enabled: true
monitoring.cluster_uuid: "cluster-id-goes-here"
monitoring.elasticsearch.ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]
monitoring.elasticsearch.ssl.certificate: "/etc/filebeat/certs/filebeat.crt"
monitoring.elasticsearch.ssl.key: "/etc/filebeat/certs/filebeat.key"
monitoring.elasticsearch.ssl.key_passphrase: "${key_passphrase}"
monitoring.elasticsearch.hosts: ["https://wifimon-kibana.rashexample.alorg:9200"]
monitoring.elasticsearch.username: beats_system
monitoring.elasticsearch.password: "${beats_system_password}"

...

Code Block
curl --cacert /etc/elasticsearch/certs/ca.crt --user elastic -XGET 'https://wifimon-kibana.rashexample.alorg:9200/_cluster/state/all?pretty'
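Presumably this query is used, among other things, to read the cluster UUID needed for the monitoring.cluster_uuid setting above. If only the UUID is of interest, the root endpoint returns it in a much shorter response:

Code Block
curl -XGET --cacert /etc/elasticsearch/certs/ca.crt --user elastic 'https://wifimon-kibana.rashexample.alorg:9200/?pretty'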

...

Code Block
title/etc/logstash/logstash.yml
path.data: /var/lib/logstash
path.logs: /var/log/logstash
queue.type: persisted
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "${logstash_system_password}"
xpack.monitoring.elasticsearch.hosts: "https://wifimon-kibana.rashexample.alorg:9200"
xpack.monitoring.elasticsearch.ssl.certificate_authority: "/etc/logstash/certs/ca.crt"
xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
xpack.monitoring.elasticsearch.sniffing: true
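The ${logstash_system_password} reference is resolved from the Logstash keystore, which is covered further below. Assuming the keystore lives under /etc/logstash, the entry could be added with:

Code Block
/usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add logstash_system_password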

...

With the Filebeat agents configured to feed Logstash, and the Logstash pipelines configured to dump data to STDOUT, it is possible to test the data flow Filebeat → Logstash → Logstash STDOUT.

On wifimon-logstash.rashexample.alorg start the logstash service:

...

On the RADIUS server, run the test_filebeat.sh script as the root user.

The terminal on wifimon-logstash.rashexample.alorg should show something like:

...

On the DHCP server, run the test_filebeat.sh script as the root user.

The terminal on wifimon-logstash.rashexample.alorg should show something like:

...

Code Block
curl -X POST --cacert /etc/elasticsearch/certs/ca.crt --user elastic 'https://wifimon-kibana.rashexample.alorg:9200/_security/role/logstash_writer_role?pretty' -H 'Content-Type: application/json' -d'
{
  "cluster": [
    "monitor",
    "manage_index_templates"
  ],
  "indices": [
    {
      "names": [
        "radiuslogs",
        "dhcplogs"
      ],
      "privileges": [
        "write",
        "create_index"
      ],
      "field_security": {
        "grant": [
          "*"
        ]
      }
    }
  ],
  "run_as": [],
  "metadata": {},
  "transient_metadata": {
    "enabled": true
  }
}
'

...

Code Block
set +o history
curl -X POST --cacert /etc/elasticsearch/certs/ca.crt --user elastic 'https://wifimon-kibana.rashexample.alorg:9200/_security/user/logstash_writer?pretty' -H 'Content-Type: application/json' -d'
{
  "username": "logstash_writer",
  "roles": ["logstash_writer_role"],
  "full_name": null,
  "email": null,
  "password": "some-password-goes-here",
  "enabled": true
}
'
set -o history
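As an optional check, the role and the user just created can be read back through the security API:

Code Block
curl -XGET --cacert /etc/elasticsearch/certs/ca.crt --user elastic 'https://wifimon-kibana.rashexample.alorg:9200/_security/role/logstash_writer_role?pretty'
curl -XGET --cacert /etc/elasticsearch/certs/ca.crt --user elastic 'https://wifimon-kibana.rashexample.alorg:9200/_security/user/logstash_writer?pretty'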

...

Code Block
output {
    elasticsearch {
        ssl => true
        ssl_certificate_verification => true
        cacert => "/etc/logstash/certs/ca.crt"
        user => "logstash_writer"
        password => "${logstash_writer_password}"
        hosts => ["https://wifimon-kibana.rashexample.alorg"]
        index => "radiuslogs"
    }
}

...

Code Block
output {
    elasticsearch {
        ssl => true
        ssl_certificate_verification => true
        cacert => "/etc/logstash/certs/ca.crt"
        user => "logstash_writer"
        password => "${logstash_writer_password}"
        hosts => ["https://wifimon-kibana.rashexample.alorg"]
        index => "dhcplogs"
    }
}
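Before restarting Logstash with these outputs, the pipeline configuration can be syntax-checked; this is a suggestion rather than part of the original procedure, and it assumes the pipeline files live in the default /etc/logstash/conf.d/ directory:

Code Block
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/ --config.test_and_exit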

...

Code Block
curl -XGET --cacert /etc/elasticsearch/certs/ca.crt --user elastic 'https://wifimon-kibana.rashexample.alorg:9200/_cat/indices/radiuslogs?v'
curl -XGET --cacert /etc/elasticsearch/certs/ca.crt --user elastic 'https://wifimon-kibana.rashexample.alorg:9200/_cat/indices/dhcplogs?v'

...

This setup deletes an index when it is one day old. Run the following command on the wifimon-kibana.rashexample.alorg node to create the wifimon_policy policy.

Code Block
curl -X PUT --cacert /etc/elasticsearch/certs/ca.crt --user elastic "https://wifimon-kibana.rashexample.alorg:9200/_ilm/policy/wifimon_policy?pretty" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "1d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
'

...

Code Block
curl -XGET --cacert /etc/elasticsearch/certs/ca.crt --user elastic "https://wifimon-kibana.rashexample.alorg:9200/_ilm/policy/wifimon_policy?pretty"

...

The policy must be associated with the indices on which it will trigger the configured actions. For this to happen, the policy must be referenced in the index template used to create those indices.

On the wifimon-kibana.rashexample.alorg node run the following command to apply the wifimon_policy to the wifimon_template index template, which matches the radiuslogs and dhcplogs indices.

Code Block
curl -X PUT --cacert /etc/elasticsearch/certs/ca.crt --user elastic "https://wifimon-kibana.rashexample.alorg:9200/_template/wifimon_template?pretty" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["radiuslogs", "dhcplogs"],
  "settings": {"index.lifecycle.name": "wifimon_policy"}
}
'

...

Code Block
curl -XGET --cacert /etc/elasticsearch/certs/ca.crt --user elastic "https://wifimon-kibana.rashexample.alorg:9200/_template/wifimon_template?pretty"
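Once the radiuslogs and dhcplogs indices exist, whether the policy is actually attached to them can be checked with the ILM explain API (an optional, illustrative check):

Code Block
curl -XGET --cacert /etc/elasticsearch/certs/ca.crt --user elastic 'https://wifimon-kibana.rashexample.alorg:9200/radiuslogs,dhcplogs/_ilm/explain?pretty'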

...

Code Block
output {
    elasticsearch {
        ssl => true
        cacert => "/etc/logstash/certs/ca.crt"
        ssl_certificate_verification => true
        user => "logstash_writer"
        password => "${logstash_writer_password}"
        hosts => ["https://wifimon-kibana.rashexample.alorg"]
        ilm_enabled => true
        ilm_policy => "wifimon_policy"
        index => "radiuslogs"
    }
}

...

Code Block
output {
    elasticsearch {
        ssl => true
        cacert => "/etc/logstash/certs/ca.crt"
        ssl_certificate_verification => true
        user => "logstash_writer"
        password => "${logstash_writer_password}"
        hosts => ["https://wifimon-kibana.rashexample.alorg"]
        ilm_enabled => true
        ilm_policy => "wifimon_policy"
        index => "dhcplogs"
    }
}

...

To configure the Kibana keystore, run the following commands on the wifimon-kibana.rashexample.alorg node.

Create keystore:

Code Block
sudo -u kibana /usr/share/kibana/bin/kibana-keystore create

...

To configure the Logstash keystore, run the following commands on the wifimon-logstash.rashexample.alorg node.

For added security, protect the Logstash keystore with a password stored in the environment variable LOGSTASH_KEYSTORE_PASS. This variable must be available to the running logstash instance, otherwise the keystore cannot be accessed.
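As an illustration of that requirement, the variable can be exported before the keystore is created, so that the keystore is password-protected from the start (the exact value and handling are up to the operator):

Code Block
set +o history
export LOGSTASH_KEYSTORE_PASS=keystore-password-goes-here
set -o history
/usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create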

...