05 December 2015

pf (Firewall Logs) + Logstash + Elasticsearch + Kibana | Install / Guide

pf logs + Elasticsearch 2.3.0, Logstash 2.3.0, Kibana 4.5.0



Prerequisites
Ubuntu Server v14.04+
pfSense v2.2+


pfSense

Navigate to the following within pfSense:
Status >> System Logs [Settings]

Provide the 'Server 1' address (this is the IP address of the ELK server you're installing, e.g. 192.168.1.60:5140)
Select "Firewall events"

Edit the host and hostname files


sudo nano /etc/hosts
sudo nano /etc/hostname
Amend the host file (/etc/hosts), substituting your own IP address and domain:
192.168.1.1 logs.YOURURL.com logs
Add Oracle Java Repository
sudo add-apt-repository ppa:webupd8team/java
Download and install the public GPG signing key


wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Add Elasticsearch, Logstash and Kibana Repositories
echo "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elk.list
echo "deb http://packages.elastic.co/logstash/2.3/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elk.list
echo "deb http://packages.elastic.co/kibana/4.5/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elk.list
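Before running apt-get update, it can be worth sanity-checking the repository file, since a mangled tee invocation is easy to miss. A small sketch (the /tmp path is arbitrary):

```shell
# What /etc/apt/sources.list.d/elk.list should contain after the tee commands
# above; diff it against the real file to catch a mangled invocation.
cat <<'EOF' > /tmp/elk.list.expected
deb http://packages.elastic.co/elasticsearch/2.x/debian stable main
deb http://packages.elastic.co/logstash/2.3/debian stable main
deb http://packages.elastic.co/kibana/4.5/debian stable main
EOF
if [ -f /etc/apt/sources.list.d/elk.list ]; then
  diff /tmp/elk.list.expected /etc/apt/sources.list.d/elk.list \
    && echo "repo list OK" || echo "repo list differs"
else
  echo "elk.list not created yet"
fi
```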
Update
sudo apt-get update 
Upgrade
sudo apt-get upgrade 
Distribution Upgrade
sudo apt-get dist-upgrade
Install Java 8 (at this time Java 9 will not work with this configuration)
sudo apt-get install oracle-java8-installer
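Since Java 9 breaks this configuration, it is worth confirming which runtime is active after the install. A quick check (guarded in case java is not on the PATH yet); a correct install reports "1.8.0_…":

```shell
# Capture the first line of `java -version` (it prints to stderr).
JAVA_VER=$(command -v java >/dev/null 2>&1 \
  && java -version 2>&1 | head -n 1 \
  || echo "java not installed")
echo "$JAVA_VER"
```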

Install
Elasticsearch v2.3.0 | Kibana v4.5.0 | Logstash v2.3.0

Install
sudo apt-get install elasticsearch logstash kibana
Run as startup (boot) service 
sudo update-rc.d elasticsearch defaults 95 10
sudo update-rc.d logstash defaults 95 10
sudo update-rc.d kibana defaults 95 10

Create SSL Certificate 

Create the following directory for your certificate
sudo mkdir -p /etc/pki/tls/certs
sudo mkdir /etc/pki/tls/private
Change directory
cd /etc/pki/tls
The following command will create a self-signed SSL certificate (replace "logs" with your hostname)
sudo openssl req -x509 -nodes -newkey rsa:2048 -days 3650 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt -subj /CN=logs
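If you want to see what the command produces before writing under /etc/pki, the same invocation can be dry-run against /tmp (the paths and the "logs" CN here are just this guide's examples):

```shell
# Generate a throwaway copy of the certificate in /tmp and inspect it;
# the subject should show the CN you substituted.
openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
  -keyout /tmp/logstash-forwarder.key -out /tmp/logstash-forwarder.crt \
  -subj /CN=logs 2>/dev/null
openssl x509 -in /tmp/logstash-forwarder.crt -noout -subject -enddate
```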
Configure Logstash 
v2.3.0

Change Directory (preparation for configuration files)
cd /etc/logstash/conf.d/
Create the following configuration file
sudo nano 01-inputs.conf
Paste the following (01-inputs.conf)
#logstash-forwarder [Not utilized by pfSense by default]
#input {
#  lumberjack {
#    port => 5000
#    type => "logs"
#    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
#    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
#  }
#}

#tcp syslog stream via 5140
input {
  tcp {
    type => "syslog"
    port => 5140
  }
}

#udp syslog stream via 5140
input {
  udp {
    type => "syslog"
    port => 5140
  }
}
Create the following configuration file
sudo nano 10-syslog.conf
Paste the following (10-syslog.conf)
filter {
  if [type] == "syslog" {
    #change to pfSense ip address | to add multiple pfSenses replace the following line with "if [host] =~ /0\.0\.0\.0/ or [host] =~ /0\.0\.0\.0/ {"
    if [host] =~ /0\.0\.0\.0/ {
      mutate {
        add_tag => ["PFSense", "Ready"]
      }
    }
    if "Ready" not in [tags] {
      mutate {
        add_tag => [ "syslog" ]
      }
    }
  }
}

filter {
  if [type] == "syslog" {
    mutate {
      remove_tag => "Ready"
    }
  }
}

filter {
  if "syslog" in [tags] {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      locale => "en"
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        replace => [ "@source_host", "%{syslog_hostname}" ]
        replace => [ "@message", "%{syslog_message}" ]
      }
    }
    mutate {
      remove_field => [ "syslog_hostname", "syslog_message", "syslog_timestamp" ]
    }
    #if "_grokparsefailure" in [tags] {
    #  drop { }
    #}
  }
}
Create the following configuration file
sudo nano 30-outputs.conf
Paste the following (30-outputs.conf)
output {
  elasticsearch {
    hosts => ["localhost"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
  #stdout { codec => rubydebug }
}
Create the following configuration file
sudo nano 11-pfsense.conf
Paste the following (11-pfsense.conf)
Update the timezone as needed; the "+0400" in the date filter below will display data in EST/EDT.

filter {
  if "PFSense" in [tags] {
    grok {
      add_tag => [ "firewall" ]
      match => [ "message", "<(?<evtid>.*)>(?<datetime>(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\s+(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9]) (?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:[0-5][0-9])) (?<prog>.*?): (?<msg>.*)" ]
    }
    mutate {
      gsub => ["datetime","  "," "]
    }
    date {
      match => [ "datetime", "MMM dd HH:mm:ss +0400" ]
      timezone => "UTC"
    }
    mutate {
      replace => [ "message", "%{msg}" ]
    }
    mutate {
      remove_field => [ "msg", "datetime" ]
    }
  }
  if [prog] =~ /^filterlog$/ {
    mutate {
      remove_field => [ "msg", "datetime" ]
    }
    grok {
      patterns_dir => "/etc/logstash/conf.d/patterns"
      match => [ "message", "%{PFSENSE_LOG_DATA}%{PFSENSE_IP_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}",
                 "message", "%{PFSENSE_LOG_DATA}%{PFSENSE_IPv4_SPECIFIC_DATA_ECN}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}" ]
    }
    mutate {
      lowercase => [ 'proto' ]
    }
    geoip {
      add_tag => [ "GeoIP" ]
      source => "src_ip"
      # Optional GeoIP database
      database => "/etc/logstash/GeoLiteCity.dat"
    }
  }
}
Create a patterns directory (Referenced in the configuration files above)
sudo mkdir /etc/logstash/conf.d/patterns
Create the following pattern file
sudo nano /etc/logstash/conf.d/patterns/pfsense2-2.grok
Paste the following (pfsense2-2.grok)
# GROK match pattern for logstash.conf filter: %{PFSENSE_LOG_DATA}%{PFSENSE_IP_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}

# GROK Custom Patterns (add to patterns directory and reference in GROK filter for pfSense events):

# GROK Patterns for pfSense 2.2 Logging Format
#
# Created 27 Jan 2015 by J. Pisano (Handles TCP, UDP, and ICMP log entries)
# Edited 14 Feb 2015 by Elijah Paul elijah.paul@gmail.com
# Edited 10 Mar 2015 by Bernd Zeimetz <bernd@bzed.de>
# taken from https://gist.github.com/elijahpaul/f5f32d4e914dcb7fedd2
# - adding PFSENSE_ prefix
# - adding carp patterns
#
# Usage: Use with following GROK match pattern
#
# %{PFSENSE_LOG_DATA}%{PFSENSE_IP_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}

PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule}),,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
PFSENSE_IP_SPECIFIC_DATA (%{PFSENSE_IPv4_SPECIFIC_DATA}|%{PFSENSE_IPv6_SPECIFIC_DATA})
PFSENSE_IPv4_SPECIFIC_DATA (%{BASE16NUM:tos}),,(%{INT:ttl}),(%{INT:id}),(%{INT:offset}),(%{WORD:flags}),(%{INT:proto_id}),(%{WORD:proto}),
PFSENSE_IPv4_SPECIFIC_DATA_ECN (%{BASE16NUM:tos}),(%{INT:ecn}),(%{INT:ttl}),(%{INT:id}),(%{INT:offset}),(%{WORD:flags}),(%{INT:proto_id}),(%{WORD:proto}),
PFSENSE_IPv6_SPECIFIC_DATA (%{BASE16NUM:class}),(%{DATA:flow_label}),(%{INT:hop_limit}),(%{WORD:proto}),(%{INT:proto_id}),
PFSENSE_IP_DATA (%{INT:length}),(%{IP:src_ip}),(%{IP:dest_ip}),
PFSENSE_PROTOCOL_DATA (%{PFSENSE_TCP_DATA}|%{PFSENSE_UDP_DATA}|%{PFSENSE_ICMP_DATA}|%{PFSENSE_CARP_DATA})
PFSENSE_TCP_DATA (%{INT:src_port}),(%{INT:dest_port}),(%{INT:data_length}),(%{WORD:tcp_flags}),(%{INT:sequence_number}),(%{INT:ack_number}),(%{INT:tcp_window}),(%{DATA:urg_data}),(%{DATA:tcp_options})
PFSENSE_UDP_DATA (%{INT:src_port}),(%{INT:dest_port}),(%{INT:data_length})
PFSENSE_ICMP_DATA (%{PFSENSE_ICMP_TYPE}%{PFSENSE_ICMP_RESPONSE})
PFSENSE_ICMP_TYPE (?<icmp_type>(request|reply|unreachproto|unreachport|unreach|timeexceed|paramprob|redirect|maskreply|needfrag|tstamp|tstampreply)),
PFSENSE_ICMP_RESPONSE (%{PFSENSE_ICMP_ECHO_REQ_REPLY}|%{PFSENSE_ICMP_UNREACHPORT}|%{PFSENSE_ICMP_UNREACHPROTO}|%{PFSENSE_ICMP_UNREACHABLE}|%{PFSENSE_ICMP_NEED_FLAG}|%{PFSENSE_ICMP_TSTAMP}|%{PFSENSE_ICMP_TSTAMP_REPLY})
PFSENSE_ICMP_ECHO_REQ_REPLY (%{INT:icmp_echo_id}),(%{INT:icmp_echo_sequence})
PFSENSE_ICMP_UNREACHPORT (%{IP:icmp_unreachport_dest_ip}),(%{WORD:icmp_unreachport_protocol}),(%{INT:icmp_unreachport_port})
PFSENSE_ICMP_UNREACHPROTO (%{IP:icmp_unreach_dest_ip}),(%{WORD:icmp_unreachproto_protocol})
PFSENSE_ICMP_UNREACHABLE (%{GREEDYDATA:icmp_unreachable})
PFSENSE_ICMP_NEED_FLAG (%{IP:icmp_need_flag_ip}),(%{INT:icmp_need_flag_mtu})
PFSENSE_ICMP_TSTAMP (%{INT:icmp_tstamp_id}),(%{INT:icmp_tstamp_sequence})
PFSENSE_ICMP_TSTAMP_REPLY (%{INT:icmp_tstamp_reply_id}),(%{INT:icmp_tstamp_reply_sequence}),(%{INT:icmp_tstamp_reply_otime}),(%{INT:icmp_tstamp_reply_rtime}),(%{INT:icmp_tstamp_reply_ttime})

PFSENSE_CARP_DATA (%{WORD:carp_type}),(%{INT:carp_ttl}),(%{INT:carp_vhid}),(%{INT:carp_version}),(%{INT:carp_advbase}),(%{INT:carp_advskew})
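To see how these comma-separated patterns line up with an actual log line, here is a hypothetical pfSense 2.2 filterlog body (all field values invented) pulled apart with awk:

```shell
# Field order follows PFSENSE_LOG_DATA, PFSENSE_IPv4_SPECIFIC_DATA,
# PFSENSE_IP_DATA, PFSENSE_TCP_DATA above (note the empty 3rd and 11th fields).
LINE='5,16777216,,1000000103,igb1,match,block,in,4,0x0,,64,12345,0,DF,6,tcp,60,10.0.0.5,192.168.1.60,55712,443,0,S,1234567890,,65228,,mss'
# action is the 7th comma-separated field, proto the 17th,
# src_ip/dest_ip the 19th and 20th:
echo "$LINE" | awk -F, '{print "action="$7, "proto="$17, "src_ip="$19, "dest_ip="$20}'
# prints: action=block proto=tcp src_ip=10.0.0.5 dest_ip=192.168.1.60
```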
Download and install the MaxMind GeoIP database (optional)
cd /etc/logstash
sudo curl -O "http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz"
sudo gunzip GeoLiteCity.dat.gz
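Before rebooting, it can save a cycle to let Logstash validate the files in conf.d. This assumes the Debian package's install path of /opt/logstash; the --configtest flag exists in Logstash 2.x:

```shell
# Run the config test only if the binary is where the package puts it.
OUT=$([ -x /opt/logstash/bin/logstash ] \
  && sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/ \
  || echo "logstash binary not found at /opt/logstash/bin/logstash")
echo "$OUT"
```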
Reboot

sudo reboot
Point your browser to the ELK server's address on port 5601 (e.g. 192.168.1.60:5601)


Keep the default index pattern (logstash-*), select @timestamp as the Time-field name, and click 'Create'



Testing

Elasticsearch
curl -X GET http://localhost:9200

{
  "name" : "Match",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.2.1",
    "build_hash" : "d045fc29d1932bce18b2e65ab8b297fbf6cd41a1",
    "build_timestamp" : "2016-03-09T09:38:54Z",
    "build_snapshot" : false,
    "lucene_version" : "5.4.1"
  },
  "tagline" : "You Know, for Search"
}
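Once events are flowing, the _cat/indices API is a quick way to confirm that the daily logstash-* indices from 30-outputs.conf are being created (assumes Elasticsearch is running locally):

```shell
# List every index with health, doc count and size; daily
# logstash-YYYY.MM.DD entries confirm the output stage is working.
INDICES=$(curl -s 'http://localhost:9200/_cat/indices?v' \
  || echo "Elasticsearch not reachable")
echo "$INDICES"
```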
Logstash (log)
cat /var/log/logstash/logstash.log
Logstash (pfSense logs)

tail -f /var/log/logstash/logstash.stdout
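If no pfSense events are arriving yet, a hand-crafted syslog line pushed at the UDP input is a simple end-to-end check (the hostname and message here are made up; requires netcat):

```shell
# Build a syslog-shaped line: <PRI>, timestamp, host, program, message.
MSG="<134>$(date '+%b %e %H:%M:%S') pfsense filterlog: test event from ELK guide"
echo "$MSG"
# Fire it at the Logstash UDP input on 5140; it should then show up
# in the logstash.stdout tail above.
if command -v nc >/dev/null 2>&1; then
  echo "$MSG" | nc -u -w1 127.0.0.1 5140 || true
fi
```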




Building your Kibana Dashboard:




Installation Video (Tutorial/Guide)