31 October 2017

pfSense v2.4.X+|Kibana+Elasticsearch+Logstash [ELK] v5+/v6+|Ubuntu 16.04+



Prerequisites 
Ubuntu Server v16.04+
pfSense v2.4.1+

Navigate to the following within pfSense
Status>>System Logs [Settings]
Provide the 'Server 1' address (this is the IP address and port of the ELK host you're installing - example: 192.168.1.60:5140)
Select "Firewall events"
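
Once remote logging is enabled, an optional way to confirm the firewall is actually shipping events is to watch for inbound syslog traffic on the ELK host. This is a sketch only; tcpdump may need to be installed first, and the port must match the 'Server 1' entry above.

```shell
# Optional check (run on the ELK host): watch briefly for syslog from pfSense.
# The port must match the 'Server 1' entry configured in pfSense.
SYSLOG_PORT=5140
sudo timeout 10 tcpdump -ni any -c 5 udp port "$SYSLOG_PORT" || true
echo "Checked UDP port $SYSLOG_PORT"
```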


Preparation

Edit host file 
sudo nano /etc/hosts
Amend host file (/etc/hosts)
192.168.1.1 logs.YOURURL.com logs
Edit hostname file
sudo nano /etc/hostname
Amend hostname file (/etc/hostname)
logs
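
The two edits above can also be made non-interactively; a sketch using hostnamectl (standard on Ubuntu 16.04+), with the example IP and domain from this guide - substitute your own values:

```shell
# Set the hostname and append the hosts entry if it is not already present.
# The IP and domain are the guide's examples, not required values.
HOSTS_ENTRY='192.168.1.1 logs.YOURURL.com logs'
sudo hostnamectl set-hostname logs || true
grep -qF "$HOSTS_ENTRY" /etc/hosts || echo "$HOSTS_ENTRY" | sudo tee -a /etc/hosts || true
echo "hostname configured"
```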
Add Oracle Java Repository
sudo add-apt-repository ppa:webupd8team/java

Download and install the public GPG signing key
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Download and install apt-transport-https package (Debian)
sudo apt-get install apt-transport-https

NOTE: Pick only one of the two repositories listed below:
Add Elasticsearch|Logstash|Kibana Repositories (version 5+)
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
Add Elasticsearch|Logstash|Kibana Repositories (version 6+)
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list


Update

sudo apt-get update

Install Java 8 (at this time, Java 9 will not work with this configuration)
sudo apt-get install oracle-java8-installer
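
Before moving on, it is worth verifying that Java 8 is the active runtime, since the stack will not start under Java 9. A small sketch:

```shell
# Verify the active Java runtime is version 8 (reported as 1.8.x).
JAVA_VER="$(java -version 2>&1 | head -n 1)"
echo "$JAVA_VER"
case "$JAVA_VER" in
  *'"1.8'*) echo "Java 8 detected - OK" ;;
  *)        echo "WARNING: Java 8 not detected" ;;
esac
```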


Install
Elasticsearch v5+ | Kibana v5+ | Logstash v5+
ELK Stack




Install Elasticsearch|Kibana|Logstash
sudo apt-get install elasticsearch && sudo apt-get install kibana && sudo apt-get install logstash

Configure Kibana|v5+

Configure Kibana
sudo nano /etc/kibana/kibana.yml

Amend the following values (/etc/kibana/kibana.yml)
server.port: 5601
server.host: "0.0.0.0"
Configure Logstash|v5+

Change Directory (preparation for configuration files)
cd /etc/logstash/conf.d
Create the following configuration file
sudo nano 01-inputs.conf

Paste the following (01-inputs.conf)
https://github.com/a3ilson/pfelk/blob/master/01-inputs.conf
#tcp syslog stream via 5140
input {  
  tcp {
    type => "syslog"
    port => 5140
  }
}
#udp syslog stream via 5140
input {  
  udp {
    type => "syslog"
    port => 5140
  }
}
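
Once Logstash is running with this input file, you can hand-feed it a syslog-formatted line to confirm port 5140 is listening; a sketch using netcat (the message content is arbitrary test data, and 127.0.0.1 assumes you run this on the ELK host itself):

```shell
# Build an RFC 3164-style test line and send it to the Logstash UDP input.
TEST_MSG="<134>$(date '+%b %e %H:%M:%S') testhost logger: pfelk input test"
echo "$TEST_MSG" | nc -u -w1 127.0.0.1 5140 || true
echo "sent: $TEST_MSG"
```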
Create the following configuration file
sudo nano 10-syslog.conf
Paste the following (10-syslog.conf)
https://github.com/a3ilson/pfelk/blob/master/10-syslog.conf
filter {  
  if [type] == "syslog" {
    #change to pfSense ip address
    if [host] =~ /192\.168\.1\.1/ {
      mutate {
        add_tag => ["PFSense", "Ready"]
      }
    }
    if "Ready" not in [tags] {
      mutate {
        add_tag => [ "syslog" ]
      }
    }
  }
}
filter {  
  if [type] == "syslog" {
    mutate {
      remove_tag => "Ready"
    }
  }
}
filter {  
  if "syslog" in [tags] {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM  dd HH:mm:ss" ]
      locale => "en"
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        replace => [ "@source_host", "%{syslog_hostname}" ]
        replace => [ "@message", "%{syslog_message}" ]
      }
    }
    mutate {
      remove_field => [ "syslog_hostname", "syslog_message", "syslog_timestamp" ]
    }
#    if "_grokparsefailure" in [tags] {
#      drop { }
#    }
  }
}
Create the following configuration file
https://github.com/a3ilson/pfelk/blob/master/30-outputs.conf

sudo nano 30-outputs.conf
Paste the following (30-outputs.conf)
output {  
          elasticsearch { 
          hosts => ["http://localhost:9200"] 
#X-Pack   user => "elastic"
#X-Pack   password => "changeme"
          index => "logstash-%{+YYYY.MM.dd}" }  
#         stdout { codec => rubydebug }  
}
Create the following configuration file
sudo nano 11-pfsense.conf
Paste the following (11-pfsense.conf)
//Update the timezone as needed - http://joda-time.sourceforge.net/timezones.html //
https://github.com/a3ilson/pfelk/blob/master/11-pfsense.conf
filter {  
  if "PFSense" in [tags] {
    grok {
      add_tag => [ "firewall" ]
      match => [ "message", "<(?<evtid>.*)>(?<datetime>(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\s+(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9]) (?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:[0-5][0-9])) (?<prog>.*?): (?<msg>.*)" ]
    }
    mutate {
      gsub => ["datetime","  "," "]
    }
    date {
      match => [ "datetime", "MMM dd HH:mm:ss" ]
      timezone => "America/New_York"
    }
    mutate {
      replace => [ "message", "%{msg}" ]
    }
    mutate {
      remove_field => [ "msg", "datetime" ]
    }
  }
  if [prog] =~ /^filterlog$/ {
    mutate {
      remove_field => [ "msg", "datetime" ]
    }
    grok {
      patterns_dir => "/etc/logstash/conf.d/patterns"
      match => [ "message", "%{PFSENSE_LOG_DATA}%{PFSENSE_IP_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}",
                 "message", "%{PFSENSE_LOG_DATA}%{PFSENSE_IPv4_SPECIFIC_DATA_ECN}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}",
                 "message", "%{PFSENSE_LOG_DATA}%{PFSENSE_IPv6_SPECIFIC_DATA}"]
    }
    mutate {
      lowercase => [ 'proto' ]
    }
    geoip {
      add_tag => [ "GeoIP" ]
      source => "src_ip"
      # Optional GeoIP database
      # Comment out the line below if you do not wish to use GeoIP, and skip the (recommended) GeoLite2 download steps later in this guide
      database => "/etc/logstash/GeoLite2-City.mmdb"
    }
  }
}
Create a patterns directory (Referenced in the configuration files above)
sudo mkdir /etc/logstash/conf.d/patterns
Create the following pattern file
sudo nano /etc/logstash/conf.d/patterns/pfsense2-4.grok
Paste the following (pfsense2-4.grok)
https://github.com/a3ilson/pfelk/blob/master/pfsense2-4.grok
# GROK Custom Patterns (add to patterns directory and reference in GROK filter for pfSense events):
# GROK Patterns for pfSense 2.4 Logging Format
#
# Created 27 Jan 2015 by J. Pisano (Handles TCP, UDP, and ICMP log entries)
# Edited 14 Feb 2015 by Elijah Paul elijah.paul@gmail.com
# Edited 10 Mar 2015 by Bernd Zeimetz <bernd@bzed.de>
# Edited 28 Oct 2017 by Brian Turek <brian.turek@gmail.com>
# Edited 31 Oct 2017 by Andrew Wilson <andrew@3ilson.com>
# taken from https://gist.github.com/elijahpaul/3d80030ac3e8138848b5
#
# - Adjusted IPv4 to accept pfSense 2.4.X
# - Adjusted IPv6 to accept pfSense 2.4.X
#
# TODO: Add/expand support for IPv6 messages.
#
# Usage: Use the PFSENSE_LOG_ENTRY pattern

PFSENSE_LOG_ENTRY %{PFSENSE_LOG_DATA}%{PFSENSE_IP_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}?
PFSENSE_LOG_DATA %{INT:rule},%{INT:sub_rule}?,,%{INT:tracker},%{WORD:iface},%{WORD:reason},%{WORD:action},%{WORD:direction},
PFSENSE_IP_SPECIFIC_DATA %{PFSENSE_IPv4_SPECIFIC_DATA}|%{PFSENSE_IPv6_SPECIFIC_DATA}
PFSENSE_IPv4_SPECIFIC_DATA (?<ip_ver>(4)),%{BASE16NUM:tos},%{WORD:ecn}?,%{INT:ttl},%{INT:id},%{INT:offset},%{WORD:flags},%{INT:proto_id},%{WORD:proto},
PFSENSE_IPv4_SPECIFIC_DATA_ECN (?<ip_ver>(4)),%{BASE16NUM:tos},%{INT:ecn},%{INT:ttl},%{INT:id},%{INT:offset},%{WORD:flags},%{INT:proto_id},%{WORD:proto},
PFSENSE_IPv6_SPECIFIC_DATA (?<ip_ver>(6)),%{BASE16NUM:ipv6_Flag1},%{WORD:ipv6_Flag2},%{WORD:flow_label},%{WORD:options},%{INT:protocol_id},%{INT:length},%{IPV6:src_ip},%{IPV6:dest_ip},%{WORD:ipv6_HPH},%{WORD:ipv6_padn},%{WORD:ipv6_Alert},%{BASE16NUM:ipv6_Flag3},
PFSENSE_IP_DATA %{INT:length},%{IP:src_ip},%{IP:dest_ip},
PFSENSE_PROTOCOL_DATA %{PFSENSE_TCP_DATA}|%{PFSENSE_UDP_DATA}|%{PFSENSE_ICMP_DATA}|%{PFSENSE_CARP_DATA}|%{PFSENSE_IGMP_DATA}
PFSENSE_TCP_DATA %{INT:src_port},%{INT:dest_port},%{INT:data_length},%{WORD:tcp_flags},%{INT:sequence_number},%{INT:ack_number},%{INT:tcp_window},%{DATA:urg_data},%{GREEDYDATA:tcp_options}
PFSENSE_UDP_DATA %{INT:src_port},%{INT:dest_port},%{INT:data_length}
PFSENSE_IGMP_DATA datalength=%{INT:data_length}
PFSENSE_ICMP_DATA %{PFSENSE_ICMP_TYPE}%{PFSENSE_ICMP_RESPONSE}
PFSENSE_ICMP_TYPE (?<icmp_type>(request|reply|unreachproto|unreachport|unreach|timeexceed|paramprob|redirect|maskreply|needfrag|tstamp|tstampreply)),
PFSENSE_ICMP_RESPONSE %{PFSENSE_ICMP_ECHO_REQ_REPLY}|%{PFSENSE_ICMP_UNREACHPORT}|%{PFSENSE_ICMP_UNREACHPROTO}|%{PFSENSE_ICMP_UNREACHABLE}|%{PFSENSE_ICMP_NEED_FLAG}|%{PFSENSE_ICMP_TSTAMP}|%{PFSENSE_ICMP_TSTAMP_REPLY}
PFSENSE_ICMP_ECHO_REQ_REPLY %{INT:icmp_echo_id},%{INT:icmp_echo_sequence}
PFSENSE_ICMP_UNREACHPORT %{IP:icmp_unreachport_dest_ip},%{WORD:icmp_unreachport_protocol},%{INT:icmp_unreachport_port}
PFSENSE_ICMP_UNREACHPROTO %{IP:icmp_unreach_dest_ip},%{WORD:icmp_unreachproto_protocol}
PFSENSE_ICMP_UNREACHABLE %{GREEDYDATA:icmp_unreachable}
PFSENSE_ICMP_NEED_FLAG %{IP:icmp_need_flag_ip},%{INT:icmp_need_flag_mtu}
PFSENSE_ICMP_TSTAMP %{INT:icmp_tstamp_id},%{INT:icmp_tstamp_sequence}
PFSENSE_ICMP_TSTAMP_REPLY %{INT:icmp_tstamp_reply_id},%{INT:icmp_tstamp_reply_sequence},%{INT:icmp_tstamp_reply_otime},%{INT:icmp_tstamp_reply_rtime},%{INT:icmp_tstamp_reply_ttime}
PFSENSE_CARP_DATA %{WORD:carp_type},%{INT:carp_ttl},%{INT:carp_vhid},%{INT:carp_version},%{INT:carp_advbase},%{INT:carp_advskew}

# DHCP Optional Filter [Requires 20-dhcp.conf]
# DHCPD (%{DHCPDISCOVER}|%{DHCPOFFER}|%{DHCPREQUEST}|%{DHCPACK}|%{DHCPINFORM}|%{DHCPRELEASE})
# DHCPDISCOVER %{WORD:dhcp_action} from %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)(: %{GREEDYDATA:dhcp_load_balance})?
# DHCPOFFER %{WORD:dhcp_action} on %{IPV4:dhcp_client_ip} to %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)
# DHCPREQUEST %{WORD:dhcp_action} for %{IPV4:dhcp_client_ip}%{SPACE}(\(%{IPV4:dhcp_ip_unknown}\))? from %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)(: %{GREEDYDATA:dhcp_request_message})?
# DHCPACK %{WORD:dhcp_action} on %{IPV4:dhcp_client_ip} to %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)
# DHCPINFORM %{WORD:dhcp_action} from %{IPV4:dhcp_client_ip} via (?<dhcp_client_vlan>[0-9a-z_]*)
# DHCPRELEASE %{WORD:dhcp_action} of %{IPV4:dhcp_client_ip} from %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via 


# Suricata Optional Filter [Requires 10-suricata.conf]
# PFSENSE_APP (%{DATA:pfsense_APP}):
# PFSENSE_APP_DATA (%{PFSENSE_APP_LOGOUT}|%{PFSENSE_APP_LOGIN}|%{PFSENSE_APP_ERROR}|%{PFSENSE_APP_GEN})
# PFSENSE_APP_LOGIN (%{DATA:pfsense_ACTION}) for user \'(%{DATA:pfsense_USER})\' from: (%{GREEDYDATA:pfsense_REMOTE_IP})
# PFSENSE_APP_LOGOUT User (%{DATA:pfsense_ACTION}) for user \'(%{DATA:pfsense_USER})\' from: (%{GREEDYDATA:pfsense_REMOTE_IP})
# PFSENSE_APP_ERROR webConfigurator (%{DATA:pfsense_ACTION}) for \'(%{DATA:pfsense_USER})\' from (%{GREEDYDATA:pfsense_REMOTE_IP})
# PFSENSE_APP_GEN (%{GREEDYDATA:pfsense_ACTION})
Download and install the MaxMind GeoLite2 City database (recommended)
cd /etc/logstash
sudo wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz
sudo gunzip GeoLite2-City.mmdb.gz
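
With all of the pipeline files and the patterns directory in place, Logstash can syntax-check the configuration before it is ever started; a sketch using the stock package paths from this guide:

```shell
# Parse-check the pipeline (grok references, braces, plugin options) without
# starting Logstash; this reports errors in any file under /etc/logstash/conf.d.
LS_BIN=/usr/share/logstash/bin/logstash
sudo "$LS_BIN" --path.settings /etc/logstash --config.test_and_exit \
  && echo "Configuration OK" \
  || echo "Configuration errors - review the output above"
```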

NOTE
When using the RPM or Debian packages on systems that use systemd, system limits must be specified via systemd. The systemd service file (/usr/lib/systemd/system/elasticsearch.service) contains the limits that are applied by default. To override these, add a file called /etc/systemd/system/elasticsearch.service.d/elasticsearch.conf and specify any changes in that file, such as:

LimitMEMLOCK=infinity
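
The override file described in the note above can be created from the command line as follows; the [Service] section header is required by systemd, and daemon-reload makes the change visible:

```shell
# Create the systemd override for Elasticsearch limits (example: unlimited memlock).
OVERRIDE=/etc/systemd/system/elasticsearch.service.d/elasticsearch.conf
sudo mkdir -p /etc/systemd/system/elasticsearch.service.d || true
printf '[Service]\nLimitMEMLOCK=infinity\n' | sudo tee "$OVERRIDE" || true
sudo systemctl daemon-reload || true
echo "override written to $OVERRIDE"
```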
Configure Services


Automatic Start (on boot)
Start Services on Boot (you'll need to reboot or start the services manually to proceed)
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
sudo /bin/systemctl enable kibana.service
sudo /bin/systemctl enable logstash.service

Manual Start

Start Services Manually
sudo -i service elasticsearch start
sudo -i service kibana start
sudo -i service logstash start

Point a browser to the ELK host's address on port 5601 (ex: 192.168.1.60:5601)
Select @timestamp as the time-field name and click 'Create'
*You may have to wait a few minutes to allow for log retrieval


Testing/Troubleshooting



Elasticsearch
curl -X GET http://localhost:9200

{
  "name" : "NYLJDFe",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "7krQg2MzR0irVJ6gNAB7fg",
  "version" : {
    "number" : "5.6.3",
    "build_hash" : "253032b",
    "build_date" : "2017-10-31T05:11:34.737Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}
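
Beyond the root endpoint, the _cat/indices API shows whether Logstash is actually creating the daily indices; once pfSense events flow, a logstash-YYYY.MM.DD entry with a growing docs.count should appear:

```shell
# List all Elasticsearch indices (run on the ELK host; default port assumed).
ES_URL='http://localhost:9200'
curl -s "$ES_URL/_cat/indices?v" || true
echo
```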


Status (Elasticsearch)
systemctl status elasticsearch.service

elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2017-10-31 20:53:51 EDT; 13h ago
     Docs: http://www.elastic.co
 Main PID: 6121 (java)
    Tasks: 74
   Memory: 2.4G
      CPU: 7min 46.327s
   CGroup: /system.slice/elasticsearch.service
           └─6121 /usr/bin/java -Xms2g -Xmx32g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=7

Oct 31 20:53:51 logs systemd[1]: Starting Elasticsearch...
Oct 31 20:53:51 logs systemd[1]: Started Elasticsearch.

Status (Kibana)
systemctl status kibana.service

kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2017-10-31 20:54:09 EDT; 13h ago
 Main PID: 6205 (node)
    Tasks: 10
   Memory: 82.2M
      CPU: 2min 51.950s
   CGroup: /system.slice/kibana.service
           └─6205 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c

Oct 31 10:43:16 logs kibana[6205]: {"type":"response","@timestamp":"2017-10-31T14:43:16Z","tags":[],"pid":

Status (Logstash)
systemctl status logstash.service

logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2017-10-31 08:52:27 EDT; 1h 58min ago
 Main PID: 32366 (java)
    Tasks: 43
   Memory: 405.6M
      CPU: 4min 43.959s
   CGroup: /system.slice/logstash.service
           └─32366 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFracti

Oct 31 08:52:27 logs systemd[1]: Started logstash.

Logstash Logs
/var/log/logstash
# View the files in this directory (e.g. with cat, nano, or vi) to inspect the Logstash logs
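
For a quick look without opening an editor, tail works well; logstash-plain.log is the default log file name for Logstash 5+ (an assumption worth checking with ls if the file is absent):

```shell
# Show the most recent Logstash log lines (use tail -f to follow live).
LOG_FILE=/var/log/logstash/logstash-plain.log
sudo tail -n 50 "$LOG_FILE" 2>/dev/null || ls /var/log/logstash || true
```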




Optional
X-Pack Plugin Installation



Install x-pack plugin
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install x-pack
sudo /usr/share/logstash/bin/logstash-plugin install x-pack
sudo /usr/share/kibana/bin/kibana-plugin install x-pack

Kibana Configuration (/etc/kibana/kibana.yml)
sudo nano /etc/kibana/kibana.yml

Kibana.yml
# Uncomment and revise the following lines:
elasticsearch.username: "elastic"
elasticsearch.password: "changeme"

Logstash Configuration (/etc/logstash/logstash.yml)
sudo nano /etc/logstash/logstash.yml

Logstash.yml
# Add the following
xpack.monitoring.elasticsearch.url: "localhost:9200"
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "changeme"

Configure 30-outputs.conf (/etc/logstash/conf.d/30-outputs.conf)
output {
        elasticsearch {
              hosts => ["http://localhost:9200"]
              user => "elastic"
              password => "changeme"
              index => "logstash-%{+YYYY.MM.dd}" }
#             stdout { codec => rubydebug }
}

Restart ELK Services
systemctl restart elasticsearch.service
systemctl restart logstash.service
systemctl restart kibana.service

Changing Passwords

[Ensure You Adjust Any Changed Passwords In the Previously Configured Files]


Change Passwords
     Log in to Kibana (http://##.##.##.##:5601)

     Management>>Users>>

          Add New User(s)

          Modify Default Password(s)