05 November 2016

Elasticsearch + Logstash + Kibana [ELK] v5+ | pfSense v2.3+ | Ubuntu 16.04+






Prerequisites
Ubuntu Server v16.04.01+
pf / pfSense 2.3+


Navigate to the following within pfSense
Status>>System Logs [Settings]
Provide the 'Server 1' address (this is the IP address and port of the ELK server you're installing - example: 192.168.1.60:5140)
Select "Firewall events"



Preparation 

Edit hosts file
sudo nano /etc/hosts
Amend hosts file (/etc/hosts)
192.168.1.1 logs.YOURURL.com logs
Edit hostname file

sudo nano /etc/hostname
Amend hostname file (/etc/hostname)
logs
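Alternatively, on a systemd-based Ubuntu install the hostname can be set in one step (a minimal sketch using the 'logs' hostname from above):
# Set the hostname without editing /etc/hostname directly (takes effect immediately)
sudo hostnamectl set-hostname logs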
Add Oracle Java Repository

sudo add-apt-repository ppa:webupd8team/java
Download and install the public GPG signing key
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Preparation

Download and install apt-transport-https package (Debian)

sudo apt-get install apt-transport-https
Add Elasticsearch|Logstash|Kibana Repositories

echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
Update

sudo apt-get update
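To confirm the Elastic repository is now being used, the candidate 5.x package versions can be checked (output varies by release):
# Candidate versions should come from artifacts.elastic.co
apt-cache policy elasticsearch kibana logstash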
Install Java 8 (at this time, Java 9 will not work with this configuration)
sudo apt-get install oracle-java8-installer
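Once the installer finishes, verify that Java 8 is the active runtime (the exact build number will differ):
# Should report java version "1.8.0_xxx"
java -version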

Install
Elasticsearch v5+ | Kibana v5+ | Logstash v5+
ELK Stack


Install Elasticsearch|Kibana|Logstash

sudo apt-get install elasticsearch && sudo apt-get install kibana && sudo apt-get install logstash

Configure Kibana|v5

Configure Kibana
sudo nano /etc/kibana/kibana.yml
Amend the configuration file (/etc/kibana/kibana.yml) - uncomment and set the following lines
server.port: 5601
server.host: "0.0.0.0"

Configure Logstash|v5


Change Directory (preparation for configuration files)
cd /etc/logstash/conf.d
Create the following configuration file

sudo nano 01-inputs.conf
Paste the following (01-inputs.conf)
#tcp syslog stream via 5140
input {  
  tcp {
    type => "syslog"
    port => 5140
  }
}
#udp syslog stream via 5140
input {  
  udp {
    type => "syslog"
    port => 5140
  }
}
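Once Logstash is running (services are started later in this guide), this input can be sanity-checked by hand with netcat; the syslog line below is made up for illustration:
# Send a fake syslog message to the UDP 5140 input (requires netcat)
echo "<134>Nov  5 10:00:00 testhost testprog: hello from nc" | nc -u -w1 localhost 5140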
Create the following configuration file

sudo nano 10-syslog.conf
Paste the following (10-syslog.conf)
filter {  
  if [type] == "syslog" {
    #change to pfSense ip address
    if [host] =~ /0\.0\.0\.0/ {
      mutate {
        add_tag => ["PFSense", "Ready"]
      }
    }
    if "Ready" not in [tags] {
      mutate {
        add_tag => [ "syslog" ]
      }
    }
  }
}
filter {  
  if [type] == "syslog" {
    mutate {
      remove_tag => "Ready"
    }
  }
}
filter {  
  if "syslog" in [tags] {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM  dd HH:mm:ss" ]
      locale => "en"
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        replace => [ "@source_host", "%{syslog_hostname}" ]
        replace => [ "@message", "%{syslog_message}" ]
      }
    }
    mutate {
      remove_field => [ "syslog_hostname", "syslog_message", "syslog_timestamp" ]
    }
#    if "_grokparsefailure" in [tags] {
#      drop { }
#    }
  }
}
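Remember to replace 0\.0\.0\.0 in the conditional near the top of this file with your pfSense IP address, escaped as a regex. Assuming, for illustration, a pfSense at 192.168.1.1, the conditional would read:
    if [host] =~ /192\.168\.1\.1/ {
      mutate {
        add_tag => ["PFSense", "Ready"]
      }
    }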
Create the following configuration file

sudo nano 30-outputs.conf
Paste the following (30-outputs.conf)
output {
  elasticsearch {
    hosts => ["localhost"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
  # stdout { codec => rubydebug }
}
Create the following configuration file

sudo nano 11-pfsense.conf
Paste the following (11-pfsense.conf)
# Update the timezone as needed - http://joda-time.sourceforge.net/timezones.html
filter {  
  if "PFSense" in [tags] {
    grok {
      add_tag => [ "firewall" ]
      match => [ "message", "<(?<evtid>.*)>(?<datetime>(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\s+(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9]) (?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:[0-5][0-9])) (?<prog>.*?): (?<msg>.*)" ]
    }
    mutate {
      gsub => ["datetime","  "," "]
    }
    date {
      match => [ "datetime", "MMM dd HH:mm:ss" ]
      timezone => "America/New_York"
    }
    mutate {
      replace => [ "message", "%{msg}" ]
    }
    mutate {
      remove_field => [ "msg", "datetime" ]
    }
  }
  if [prog] =~ /^filterlog$/ {
    mutate {
      remove_field => [ "msg", "datetime" ]
    }
    grok {
      patterns_dir => "/etc/logstash/conf.d/patterns"
      match => [ "message", "%{PFSENSE_LOG_DATA}%{PFSENSE_IP_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}",
         "message", "%{PFSENSE_LOG_DATA}%{PFSENSE_IPv4_SPECIFIC_DATA_ECN}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}" ]
    }
    mutate {
      lowercase => [ 'proto' ]
    }
    geoip {
      add_tag => [ "GeoIP" ]
      source => "src_ip"
      # Optional GeoIP database
      # Comment out the line below if you do not wish to use GeoIP, and omit the three download steps marked (recommended) further on
      database => "/etc/logstash/GeoLite2-City.mmdb"
    }
  }
}
Create a patterns directory (Referenced in the configuration files above)
sudo mkdir /etc/logstash/conf.d/patterns
Create the following pattern file

sudo nano /etc/logstash/conf.d/patterns/pfsense2-3.grok
Paste the following (pfsense2-3.grok)
# GROK match pattern for logstash.conf filter: %{PFSENSE_LOG_DATA}%{PFSENSE_IP_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}
# GROK Custom Patterns (add to patterns directory and reference in GROK filter for pfSense events):
# GROK Patterns for pfSense 2.3 Logging Format
#
# Created 27 Jan 2015 by J. Pisano (Handles TCP, UDP, and ICMP log entries)
# Edited 14 Feb 2015 by Elijah Paul elijah.paul@gmail.com
# Edited 10 Mar 2015 by Bernd Zeimetz <bernd@bzed.de>
# taken from https://gist.github.com/elijahpaul/f5f32d4e914dcb7fedd2
# - adding PFSENSE_ prefix
# - adding carp patterns
#
# Usage: Use with following GROK match pattern
#
# %{PFSENSE_LOG_DATA}%{PFSENSE_IP_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}

PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule}),,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
PFSENSE_IP_SPECIFIC_DATA (%{PFSENSE_IPv4_SPECIFIC_DATA}|%{PFSENSE_IPv6_SPECIFIC_DATA})
PFSENSE_IPv4_SPECIFIC_DATA (%{BASE16NUM:tos}),,(%{INT:ttl}),(%{INT:id}),(%{INT:offset}),(%{WORD:flags}),(%{INT:proto_id}),(%{WORD:proto}),
PFSENSE_IPv4_SPECIFIC_DATA_ECN (%{BASE16NUM:tos}),(%{INT:ecn}),(%{INT:ttl}),(%{INT:id}),(%{INT:offset}),(%{WORD:flags}),(%{INT:proto_id}),(%{WORD:proto}),
PFSENSE_IPv6_SPECIFIC_DATA (%{BASE16NUM:class}),(%{DATA:flow_label}),(%{INT:hop_limit}),(%{WORD:proto}),(%{INT:proto_id}),
PFSENSE_IP_DATA (%{INT:length}),(%{IP:src_ip}),(%{IP:dest_ip}),
PFSENSE_PROTOCOL_DATA (%{PFSENSE_TCP_DATA}|%{PFSENSE_UDP_DATA}|%{PFSENSE_ICMP_DATA}|%{PFSENSE_CARP_DATA})
PFSENSE_TCP_DATA (%{INT:src_port}),(%{INT:dest_port}),(%{INT:data_length}),(%{WORD:tcp_flags}),(%{INT:sequence_number}),(%{INT:ack_number}),(%{INT:tcp_window}),(%{DATA:urg_data}),(%{DATA:tcp_options})
PFSENSE_UDP_DATA (%{INT:src_port}),(%{INT:dest_port}),(%{INT:data_length})
PFSENSE_ICMP_DATA (%{PFSENSE_ICMP_TYPE}%{PFSENSE_ICMP_RESPONSE})
PFSENSE_ICMP_TYPE (?<icmp_type>(request|reply|unreachproto|unreachport|unreach|timeexceed|paramprob|redirect|maskreply|needfrag|tstamp|tstampreply)),
PFSENSE_ICMP_RESPONSE (%{PFSENSE_ICMP_ECHO_REQ_REPLY}|%{PFSENSE_ICMP_UNREACHPORT}|%{PFSENSE_ICMP_UNREACHPROTO}|%{PFSENSE_ICMP_UNREACHABLE}|%{PFSENSE_ICMP_NEED_FLAG}|%{PFSENSE_ICMP_TSTAMP}|%{PFSENSE_ICMP_TSTAMP_REPLY})
PFSENSE_ICMP_ECHO_REQ_REPLY (%{INT:icmp_echo_id}),(%{INT:icmp_echo_sequence})
PFSENSE_ICMP_UNREACHPORT (%{IP:icmp_unreachport_dest_ip}),(%{WORD:icmp_unreachport_protocol}),(%{INT:icmp_unreachport_port})
PFSENSE_ICMP_UNREACHPROTO (%{IP:icmp_unreach_dest_ip}),(%{WORD:icmp_unreachproto_protocol})
PFSENSE_ICMP_UNREACHABLE (%{GREEDYDATA:icmp_unreachable})
PFSENSE_ICMP_NEED_FLAG (%{IP:icmp_need_flag_ip}),(%{INT:icmp_need_flag_mtu})
PFSENSE_ICMP_TSTAMP (%{INT:icmp_tstamp_id}),(%{INT:icmp_tstamp_sequence})
PFSENSE_ICMP_TSTAMP_REPLY (%{INT:icmp_tstamp_reply_id}),(%{INT:icmp_tstamp_reply_sequence}),(%{INT:icmp_tstamp_reply_otime}),(%{INT:icmp_tstamp_reply_rtime}),(%{INT:icmp_tstamp_reply_ttime})

PFSENSE_CARP_DATA (%{WORD:carp_type}),(%{INT:carp_ttl}),(%{INT:carp_vhid}),(%{INT:carp_version}),(%{INT:carp_advbase}),(%{INT:carp_advskew})

DHCPD (%{DHCPDISCOVER}|%{DHCPOFFER}|%{DHCPREQUEST}|%{DHCPACK}|%{DHCPINFORM}|%{DHCPRELEASE})
DHCPDISCOVER %{WORD:dhcp_action} from %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)(: %{GREEDYDATA:dhcp_load_balance})?
DHCPOFFER %{WORD:dhcp_action} on %{IPV4:dhcp_client_ip} to %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)
DHCPREQUEST %{WORD:dhcp_action} for %{IPV4:dhcp_client_ip}%{SPACE}(\(%{IPV4:dhcp_ip_unknown}\))? from %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)(: %{GREEDYDATA:dhcp_request_message})?
DHCPACK %{WORD:dhcp_action} on %{IPV4:dhcp_client_ip} to %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)
DHCPINFORM %{WORD:dhcp_action} from %{IPV4:dhcp_client_ip} via (?<dhcp_client_vlan>[0-9a-z_]*)
DHCPRELEASE %{WORD:dhcp_action} of %{IPV4:dhcp_client_ip} from %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)
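For reference, a hypothetical pfSense filterlog entry for a blocked inbound UDP packet would look roughly like the line below; the patterns above split it into fields such as rule, iface, action, src_ip, dest_ip, and the port numbers:
5,16,,1000000103,igb1,match,block,in,4,0x0,,64,31385,0,DF,17,udp,328,10.0.0.5,192.168.1.1,68,67,308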
Download and install the MaxMind GeoIP database (recommended)
cd /etc/logstash
Download the GeoLite2 City database (recommended)
sudo wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz
Decompress the database (recommended)
sudo gunzip GeoLite2-City.mmdb.gz
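With the configuration files and patterns directory in place, Logstash can validate the pipeline before the service is started. A minimal sketch, assuming the Debian package paths used in this guide:
# Test the pipeline configuration; should finish with "Configuration OK"
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/ -t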

NOTE
When using the RPM or Debian packages on systems that use systemd, system limits must be specified via systemd. The systemd service file (/usr/lib/systemd/system/elasticsearch.service) contains the limits that are applied by default. To override these, add a file called /etc/systemd/system/elasticsearch.service.d/elasticsearch.conf and specify any changes in that file, such as:

[Service]
LimitMEMLOCK=infinity
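A minimal sketch of creating that override file (directory and file names as referenced above):
sudo mkdir -p /etc/systemd/system/elasticsearch.service.d
sudo nano /etc/systemd/system/elasticsearch.service.d/elasticsearch.conf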
Configure Services

Automatic Start (on boot)
Enable services at boot (you'll need to reboot or start them manually below to proceed)
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
sudo /bin/systemctl enable kibana.service
sudo /bin/systemctl enable logstash.service
Manual Start
Start Services Manually
sudo -i service elasticsearch start
sudo -i service kibana start
sudo -i service logstash start
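To confirm the services are listening (Elasticsearch on 9200, Kibana on 5601, and the Logstash syslog input on 5140), a quick check with ss:
# Expect java on 9200, node on 5601, and java on 5140 (tcp and udp)
sudo ss -tulpn | grep -E '9200|5601|5140'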

Point a browser at the ELK server's address on port 5601 (ex: 192.168.1.60:5601)


Keep the default index pattern (logstash-*), select @timestamp as the Time-field name, and click 'Create'
*You may have to wait a few minutes to allow logs to arrive before the index pattern can be created



Testing/Troubleshooting

Elasticsearch
curl -X GET http://localhost:9200
{
  "name" : "NYLJDFe",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "7krQg2MzR0irVJ6gNAB7fg",
  "version" : {
    "number" : "5.0.0",
    "build_hash" : "253032b",
    "build_date" : "2016-10-26T05:11:34.737Z",
    "build_snapshot" : false,
    "lucene_version" : "6.2.0"
  },
  "tagline" : "You Know, for Search"
}
Status (Elasticsearch)
systemctl status elasticsearch.service
  elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: enabled)
   Active: active (running) since Fri 2016-11-04 20:53:51 EDT; 13h ago
     Docs: http://www.elastic.co
 Main PID: 6121 (java)
    Tasks: 74
   Memory: 2.4G
      CPU: 7min 46.327s
   CGroup: /system.slice/elasticsearch.service
           └─6121 /usr/bin/java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=7

Nov 04 20:53:51 logs systemd[1]: Starting Elasticsearch...
Nov 04 20:53:51 logs systemd[1]: Started Elasticsearch.
Status (Kibana)
systemctl status kibana.service
  kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; disabled; vendor preset: enabled)
   Active: active (running) since Fri 2016-11-04 20:54:09 EDT; 13h ago
 Main PID: 6205 (node)
    Tasks: 10
   Memory: 82.2M
      CPU: 2min 51.950s
   CGroup: /system.slice/kibana.service
           └─6205 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c

Nov 05 10:43:16 logs kibana[6205]: {"type":"response","@timestamp":"2016-11-05T14:43:16Z","tags":[],"pid":
Status (Logstash)
systemctl status logstash.service
  logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: enabled)
   Active: active (running) since Sat 2016-11-05 08:52:27 EDT; 1h 58min ago
 Main PID: 32366 (java)
    Tasks: 43
   Memory: 405.6M
      CPU: 4min 43.959s
   CGroup: /system.slice/logstash.service
           └─32366 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFracti

Nov 05 08:52:27 logs systemd[1]: Started logstash.
Logstash Logs

/var/log/logstash
# cat/nano/vi the files in this directory to view the Logstash logs
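For example, to follow the main Logstash log as events arrive (logstash-plain.log is the default Logstash v5 log file name; adjust if yours differs):
sudo tail -f /var/log/logstash/logstash-plain.log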

Downloadable json Files
Dashboard

The dashboard json files can be downloaded at the following:
https://drive.google.com/drive/folders/0Bz4ATy0CHc4jaWIxaGpPaDZUZW8? 
Three zipped files (Searches.json, Dashboard.json, and Visualizations.json) are provided; import and customize as needed.