
1. Introduction

To achieve its purpose of correlating user information with network performance data, WiFiMon needs RADIUS and/or DHCP logs to be streamed into an Elasticsearch cluster.

The sources generating the log files are a FreeRADIUS server and a DHCP server, on each of which Filebeat is installed as an agent. The data flow therefore starts with Filebeat collecting log events and forwarding them to Logstash. At Logstash, the logs are filtered and enriched according to the needs of WiFiMon before being sent on to the Elasticsearch nodes in the cluster.

2. Package Installation

The filebeat package was installed on the DHCP server and on the FreeRADIUS server which implements the eduroam Service Provider. For more information see Repositories for APT and YUM.

All the packages implementing the cluster's components (Elasticsearch, Logstash, Kibana, Filebeat) must be of the same version. The version of the ELK cluster can easily be found from the "Cluster Management" option in Kibana; install the Filebeat package matching that version.

All of the following commands should be executed as "root".

3. Filebeat Monitoring

Filebeat monitors log files for new content, collects log events, and forwards them to Elasticsearch, either directly or via Logstash. In Filebeat terms, one speaks of a) the input, which looks in the configured locations for log data, b) the harvester, which reads a single log file for new content and sends the new log data to libbeat, and c) the output, which aggregates the data and sends it to the configured destination. For more information see Filebeat overview.

3.1. Filebeat Configuration

The configuration of Filebeat is done by editing the /etc/filebeat/filebeat.yml file. Filebeat will be configured to forward the data towards Logstash.

3.1.1. RADIUS Server

In the following, you are required to insert the FQDN to which the logs will be forwarded. This FQDN has the form "WAS_HOSTNAME-elastic.WAS_SUFFIX". For example, if the FQDN of the WAS is "was.example.org", you will have to insert "was-elastic.example.org".
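As a sketch, the mapping from the WAS FQDN to the Logstash endpoint name can be expressed as follows (the helper name is illustrative, not part of WiFiMon):

```python
# Illustrative helper: the "-elastic" suffix is appended to the hostname
# part of the WAS FQDN, while the domain suffix is kept unchanged.
def logstash_fqdn(was_fqdn: str) -> str:
    hostname, _, suffix = was_fqdn.partition(".")
    return f"{hostname}-elastic.{suffix}"

print(logstash_fqdn("was.example.org"))  # was-elastic.example.org
```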

The following is the Filebeat configuration on the RADIUS server that forwards data to Logstash:
/etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /path/to/your/radius_logs
  multiline.pattern: '^[[:space:]]'
  multiline.negate: false
  multiline.match: after
output.logstash:
  hosts: ["WAS_HOSTNAME-elastic.WAS_SUFFIX:5044"]
  ssl.certificate_authorities: ["/etc/ssl/certs/ca-certificates.crt"]
processors:
- add_fields:
    target: ''
    fields:
      logtype: radius
- drop_fields:
    fields: ['input', 'host', 'agent', 'acs', 'log', 'ecs']

The important settings here are the multiline.* ones, which manage multiline-formatted logs. The .pattern matches lines starting with whitespace. The .negate and .match settings work together: combined as false and after, they cause consecutive lines that match the pattern to be appended to the previous line that does not match it. As a result, all lines starting with whitespace are appended to the line holding the date, i.e. the first line of each event in the radius_sample_logs. For more information see Manage multiline messages.
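The grouping behaviour described above can be illustrated with a short Python sketch (this is only a model of what Filebeat does internally, not WiFiMon code):

```python
import re

# Model of multiline.pattern='^[[:space:]]', negate=false, match=after:
# a line matching the pattern (starting with whitespace) is appended to
# the preceding non-matching line, forming one event per timestamp line.
PATTERN = re.compile(r"^\s")  # stands in for POSIX [[:space:]]

def group_multiline(lines):
    events = []
    for line in lines:
        if PATTERN.match(line) and events:
            events[-1] += "\n" + line   # continuation of the current event
        else:
            events.append(line)          # a new event starts here
    return events

raw = [
    "Sun Mar 10 08:16:05 2019",
    "    Service-Type = Framed-User",
    '    User-Name = "username@example.org"',
    "Sun Mar 10 08:17:42 2019",
    "    Acct-Status-Type = Stop",
]
events = group_multiline(raw)
print(len(events))  # 2 — one event per date line
```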

3.1.2. DHCP Server

The following is the Filebeat configuration on the DHCP server that forwards data to Logstash:
/etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /path/to/your/dhcp_logs
  include_lines: ['DHCPACK']
output.logstash:
  hosts: ["WAS_HOSTNAME-elastic.WAS_SUFFIX:5044"]
  ssl.certificate_authorities: ["/etc/ssl/certs/ca-certificates.crt"]
processors:
- add_fields:
    target: ''
    fields:
      logtype: dhcp
- drop_fields:
    fields: ['input', 'host', 'agent', 'acs', 'log', 'ecs']

The only lines included from the DHCP logs are those containing the DHCPACK string, which marks the final phase of a DHCP exchange. These lines are selected with the include_lines setting.
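The effect of include_lines can be sketched in a few lines of Python (again only a model of Filebeat's filtering, with illustrative names):

```python
import re

# include_lines holds regular expressions; a line is forwarded only if
# at least one of them matches. Here the single pattern is 'DHCPACK'.
INCLUDE = [re.compile("DHCPACK")]

def keep(line: str) -> bool:
    return any(p.search(line) for p in INCLUDE)

logs = [
    "Jun 18 19:15:20 centos dhcpd[11223]: DHCPREQUEST for 192.168.1.200 via wlp6s0",
    "Jun 18 19:15:20 centos dhcpd[11223]: DHCPACK on 192.168.1.200 via wlp6s0",
]
kept = [l for l in logs if keep(l)]
print(len(kept))  # 1 — only the DHCPACK line survives
```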

For this configuration to work, the Elasticsearch index template must be loaded manually, since template autoloading is supported only for the Elasticsearch output. Replace elastic-password-goes-here with the proper password and run:

set +o history
filebeat setup --index-management \
-E output.logstash.enabled=false \
-E 'output.elasticsearch.hosts=["WAS_HOSTNAME-elastic.WAS_SUFFIX:443"]' \
-E output.elasticsearch.protocol=https \
-E output.elasticsearch.username=elastic \
-E output.elasticsearch.password=elastic-password-goes-here \
-E 'output.elasticsearch.ssl.certificate_authorities=["/etc/ssl/certs/ca-certificates.crt"]'
set -o history

The above command loads the index template into the Elasticsearch node at WAS_HOSTNAME-elastic.WAS_SUFFIX. Detailed information is written to the Filebeat log file.

3.2. Log Format

Below are the sample log files used in the tests: one log event of a user interacting with the eduroam Service Provider, and another of a user interacting with the DHCP server.
/tmp/radius_sample_logs

Sun Mar 10 08:16:05 2019
    Service-Type = Framed-User
    NAS-Port-Id = "wlan2"
    NAS-Port-Type = Wireless-802.11
    User-Name = "username@example.org"
    Acct-Session-Id = "82c000cd"
    Acct-Multi-Session-Id = "CC-2D-E0-9A-EB-A3-88-75-98-6C-31-AA-82-C0-00-00-00-00-00-CD"
    Calling-Station-Id = "88-75-98-6C-31-AA"
    Called-Station-Id = "CC-2D-E0-9A-EB-A3:eduroam"
    Acct-Authentic = RADIUS
    Acct-Status-Type = Start
    NAS-Identifier = "Eduroam"
    Acct-Delay-Time = 0
    NAS-IP-Address = 192.168.192.111
    Event-Timestamp = "Mar 8 2019 08:16:05 CET"
    Tmp-String-9 = "ai:"
    Acct-Unique-Session-Id = "e5450a4e16d951436a7c241eaf788f9b"
    Realm = "example.org"
    Timestamp = 1552029365
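
A RADIUS event of this shape can be split into attribute/value pairs as sketched below; the real enrichment happens in Logstash, and the function name here is purely illustrative:

```python
# Parse one multiline RADIUS accounting event (as assembled by the
# multiline.* settings above) into a dict of attributes. The first
# line is the date; each following line is 'Key = Value'.
def parse_radius_event(event: str) -> dict:
    attrs = {}
    for line in event.splitlines()[1:]:
        key, _, value = line.strip().partition(" = ")
        attrs[key] = value.strip('"')
    return attrs

event = '''Sun Mar 10 08:16:05 2019
    User-Name = "username@example.org"
    Calling-Station-Id = "88-75-98-6C-31-AA"
    Acct-Status-Type = Start'''

attrs = parse_radius_event(event)
print(attrs["User-Name"])  # username@example.org
```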


/tmp/dhcp_sample_logs

Jun 18 19:15:20 centos dhcpd[11223]: DHCPREQUEST for 192.168.1.200 from a4:c4:94:cd:35:70 (galliumos) via wlp6s0
Jun 18 19:15:20 centos dhcpd[11223]: DHCPACK on 192.168.1.200 to a4:c4:94:cd:35:70 (galliumos) via wlp6s0
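
The DHCPACK line carries the leased IP address and the client MAC, which is what makes it useful for correlation. An illustrative regex (not part of WiFiMon) extracting both from the sample line above:

```python
import re

# Extract the leased IP and client MAC address from a dhcpd DHCPACK line.
DHCPACK = re.compile(
    r"DHCPACK on (?P<ip>\S+) to (?P<mac>[0-9a-f:]{17})", re.IGNORECASE
)

line = ("Jun 18 19:15:20 centos dhcpd[11223]: "
        "DHCPACK on 192.168.1.200 to a4:c4:94:cd:35:70 (galliumos) via wlp6s0")
m = DHCPACK.search(line)
print(m.group("ip"), m.group("mac"))  # 192.168.1.200 a4:c4:94:cd:35:70
```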

4. References

The links referenced throughout this material (Repositories for APT and YUM, Filebeat overview, Manage multiline messages) were very useful while writing it and performing the tests mentioned in it.
