05 December 2015

pf (Firewall Logs) + Logstash + Elasticsearch + Kibana | Install / Guide

pf logs + Elasticsearch 2.3.0, Logstash 2.3.0, Kibana 4.5.0

Ubuntu Server v14+
pfSense Firewall v2.2+


Navigate to the following within pfSense
Status>>System Logs [Settings]

Provide the 'Server 1' address - this is the IP address of the ELK server you're installing.
Select "Firewall events"

Edit the hostname and hosts files

sudo nano /etc/hosts
sudo nano /etc/hostname
Amend the hosts file (/etc/hosts) so the box resolves its own name, e.g.: logs.YOURURL.com logs
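A minimal sketch of the amended /etc/hosts - the 127.0.1.1 address and YOURURL.com domain are placeholders, so substitute your box's actual address and domain:

```
127.0.0.1   localhost
127.0.1.1   logs.YOURURL.com   logs
```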
Add Oracle Java Repository
sudo add-apt-repository ppa:webupd8team/java
Download and install the public GPG signing key

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Add Elasticsearch, Logstash and Kibana Repositories
echo "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elk.list
echo "deb http://packages.elastic.co/logstash/2.3/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elk.list
echo "deb http://packages.elastic.co/kibana/4.5/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elk.list
sudo apt-get update 
sudo apt-get upgrade 
Distribution Upgrade
sudo apt-get dist-upgrade
Install Java 8 (at this time, Java 9 will not work with this configuration)
sudo apt-get install oracle-java8-installer
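A quick sanity check that the right Java ended up on the PATH - a sketch that prints the runtime's version string, or a note if java is absent:

```shell
# Print the first line of `java -version` (it goes to stderr),
# guarded so the check degrades gracefully if java isn't installed.
if command -v java >/dev/null 2>&1; then
  java -version 2>&1 | head -n 1
else
  echo "java not found on PATH"
fi
```

You want to see a 1.8.x version string here before continuing.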

Elasticsearch v2.3.0 | Kibana v4.5.0 | Logstash v2.3.0

sudo apt-get install elasticsearch logstash kibana
Register each as a startup (boot) service
sudo update-rc.d elasticsearch defaults 95 10
sudo update-rc.d logstash defaults 95 10
sudo update-rc.d kibana defaults 95 10

Create SSL Certificate 

Create the following directory for your certificate
sudo mkdir -p /etc/pki/tls/certs
sudo mkdir /etc/pki/tls/private
Change directory
cd /etc/pki/tls
The following command will create a self-signed SSL certificate valid for 10 years (replace "logs" with your hostname)
sudo openssl req -x509 -nodes -newkey rsa:2048 -days 3650 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt -subj /CN=logs
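To confirm the certificate came out as intended, you can read its subject back with `openssl x509`. The sketch below generates the same kind of cert in a throwaway directory and inspects it; "logs" is the guide's example hostname, so substitute your own:

```shell
# Generate a self-signed cert as in the guide, then read back its
# subject to confirm the CN matches the hostname ("logs" here).
workdir=$(mktemp -d)
openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
  -keyout "$workdir/logstash-forwarder.key" \
  -out "$workdir/logstash-forwarder.crt" \
  -subj /CN=logs 2>/dev/null
subject=$(openssl x509 -in "$workdir/logstash-forwarder.crt" -noout -subject)
echo "$subject"
rm -rf "$workdir"
```

Against your real cert, run the same `openssl x509 -noout -subject` on /etc/pki/tls/certs/logstash-forwarder.crt and check that the CN is your hostname.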
Configure Logstash 

Change Directory (preparation for configuration files)
cd /etc/logstash/conf.d/
Create the following configuration file
sudo nano 01-inputs.conf
Paste the following (01-inputs.conf)
#logstash-forwarder [Not utilized by pfSense by default]
#input {
#  lumberjack {
#    port => 5000
#    type => "logs"
#    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
#    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
#  }
#}

#tcp syslog stream via 5140
input {
  tcp {
    type => "syslog"
    port => 5140
  }
}

#udp syslog stream via 5140
input {
  udp {
    type => "syslog"
    port => 5140
  }
}
Create the following configuration file
sudo nano 10-syslog.conf
Paste the following (10-syslog.conf)
filter {
  if [type] == "syslog" {
    #change to pfSense ip address | to add multiple pfSenses replace the following line with "if [host] =~ /0\.0\.0\.0/ or [host] =~ /0\.0\.0\.0/ {"
    if [host] =~ /0\.0\.0\.0/ {
      mutate {
        add_tag => ["PFSense", "Ready"]
      }
    }
    if "Ready" not in [tags] {
      mutate {
        add_tag => [ "syslog" ]
      }
    }
  }
}
filter {
  if [type] == "syslog" {
    mutate {
      remove_tag => "Ready"
    }
  }
}
filter {
  if "syslog" in [tags] {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
      locale => "en"
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        replace => [ "@source_host", "%{syslog_hostname}" ]
        replace => [ "@message", "%{syslog_message}" ]
      }
    }
    mutate {
      remove_field => [ "syslog_hostname", "syslog_message", "syslog_timestamp" ]
    }
    #if "_grokparsefailure" in [tags] {
    #  drop { }
    #}
  }
}
Create the following configuration file
sudo nano 30-outputs.conf
Paste the following (30-outputs.conf)
output {
  elasticsearch {
    hosts => ["localhost"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
  # stdout { codec => rubydebug }
}
Create the following configuration file
sudo nano 11-pfsense.conf
Paste the following (11-pfsense.conf)
Update the timezone as needed ("+0400" will display data in EST/EDT)

filter {
  if "PFSense" in [tags] {
    grok {
      add_tag => [ "firewall" ]
      match => [ "message", "<(?<evtid>.*)>(?<datetime>(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\s+(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9]) (?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:[0-5][0-9])) (?<prog>.*?): (?<msg>.*)" ]
    }
    mutate {
      # collapse the double space before single-digit days
      gsub => ["datetime","  "," "]
    }
    date {
      match => [ "datetime", "MMM dd HH:mm:ss" ]
      timezone => "UTC"
    }
    mutate {
      replace => [ "message", "%{msg}" ]
    }
    mutate {
      remove_field => [ "msg", "datetime" ]
    }
  }
  if [prog] =~ /^filterlog$/ {
    mutate {
      remove_field => [ "msg", "datetime" ]
    }
    grok {
      patterns_dir => "/etc/logstash/conf.d/patterns"
      match => [ "message", "%{PFSENSE_LOG_DATA}%{PFSENSE_IP_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}",
                 "message", "%{PFSENSE_LOG_DATA}%{PFSENSE_IPv4_SPECIFIC_DATA_ECN}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}" ]
    }
    mutate {
      lowercase => [ 'proto' ]
    }
    geoip {
      add_tag => [ "GeoIP" ]
      source => "src_ip"
      # Optional GeoIP database
      database => "/etc/logstash/GeoLiteCity.dat"
    }
  }
}
Create a patterns directory (Referenced in the configuration files above)
sudo mkdir /etc/logstash/conf.d/patterns
Create the following pattern file
sudo nano /etc/logstash/conf.d/patterns/pfsense2-2.grok
Paste the following (pfsense2-2.grok)

# GROK Custom Patterns (add to patterns directory and reference in GROK filter for pfSense events):

# GROK Patterns for pfSense 2.2 Logging Format
# Created 27 Jan 2015 by J. Pisano (Handles TCP, UDP, and ICMP log entries)
# Edited 14 Feb 2015 by Elijah Paul elijah.paul@gmail.com
# Edited 10 Mar 2015 by Bernd Zeimetz <bernd@bzed.de>
# taken from https://gist.github.com/elijahpaul/f5f32d4e914dcb7fedd2
# - adding PFSENSE_ prefix
# - adding carp patterns
# Usage: Use with following GROK match pattern

PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule}),,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
PFSENSE_IPv4_SPECIFIC_DATA (%{BASE16NUM:tos}),,(%{INT:ttl}),(%{INT:id}),(%{INT:offset}),(%{WORD:flags}),(%{INT:proto_id}),(%{WORD:proto}),
PFSENSE_IPv4_SPECIFIC_DATA_ECN (%{BASE16NUM:tos}),(%{INT:ecn}),(%{INT:ttl}),(%{INT:id}),(%{INT:offset}),(%{WORD:flags}),(%{INT:proto_id}),(%{WORD:proto}),
PFSENSE_IPv6_SPECIFIC_DATA (%{BASE16NUM:class}),(%{DATA:flow_label}),(%{INT:hop_limit}),(%{WORD:proto}),(%{INT:proto_id}),
PFSENSE_IP_DATA (%{INT:length}),(%{IP:src_ip}),(%{IP:dest_ip}),
PFSENSE_TCP_DATA (%{INT:src_port}),(%{INT:dest_port}),(%{INT:data_length}),(%{WORD:tcp_flags}),(%{INT:sequence_number}),(%{INT:ack_number}),(%{INT:tcp_window}),(%{DATA:urg_data}),(%{DATA:tcp_options})
PFSENSE_UDP_DATA (%{INT:src_port}),(%{INT:dest_port}),(%{INT:data_length})
PFSENSE_ICMP_TYPE (?<icmp_type>(request|reply|unreachproto|unreachport|unreach|timeexceed|paramprob|redirect|maskreply|needfrag|tstamp|tstampreply)),
PFSENSE_ICMP_ECHO_REQ_REPLY (%{INT:icmp_echo_id}),(%{INT:icmp_echo_sequence})
PFSENSE_ICMP_UNREACHPORT (%{IP:icmp_unreachport_dest_ip}),(%{WORD:icmp_unreachport_protocol}),(%{INT:icmp_unreachport_port})
PFSENSE_ICMP_UNREACHPROTO (%{IP:icmp_unreach_dest_ip}),(%{WORD:icmp_unreachproto_protocol})
PFSENSE_ICMP_NEED_FLAG (%{IP:icmp_need_flag_ip}),(%{INT:icmp_need_flag_mtu})
PFSENSE_ICMP_TSTAMP (%{INT:icmp_tstamp_id}),(%{INT:icmp_tstamp_sequence})
PFSENSE_ICMP_TSTAMP_REPLY (%{INT:icmp_tstamp_reply_id}),(%{INT:icmp_tstamp_reply_sequence}),(%{INT:icmp_tstamp_reply_otime}),(%{INT:icmp_tstamp_reply_rtime}),(%{INT:icmp_tstamp_reply_ttime})

PFSENSE_CARP_DATA (%{WORD:carp_type}),(%{INT:carp_ttl}),(%{INT:carp_vhid}),(%{INT:carp_version}),(%{INT:carp_advbase}),(%{INT:carp_advskew})

# Composite patterns referenced by 11-pfsense.conf (restored from the gist linked above - verify against your copy)
PFSENSE_IP_SPECIFIC_DATA (%{PFSENSE_IPv4_SPECIFIC_DATA}|%{PFSENSE_IPv6_SPECIFIC_DATA})
PFSENSE_PROTOCOL_DATA (%{PFSENSE_TCP_DATA}|%{PFSENSE_UDP_DATA}|%{PFSENSE_ICMP_DATA}|%{PFSENSE_CARP_DATA})
PFSENSE_ICMP_DATA %{PFSENSE_ICMP_TYPE}(%{PFSENSE_ICMP_ECHO_REQ_REPLY}|%{PFSENSE_ICMP_UNREACHPORT}|%{PFSENSE_ICMP_UNREACHPROTO}|%{PFSENSE_ICMP_NEED_FLAG}|%{PFSENSE_ICMP_TSTAMP}|%{PFSENSE_ICMP_TSTAMP_REPLY})
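The patterns above are easiest to read against a real filterlog line. The sample below is taken from the comment thread further down; splitting it on commas shows the leading fields that PFSENSE_LOG_DATA captures (the shell variable names mirror the grok capture names; "anchor" is just my label for the always-empty third column):

```shell
# Sample pfSense 2.2 filterlog prefix (rule through ip_ver), split on
# commas the same way PFSENSE_LOG_DATA does.
line='5,16777216,,1000000103,em0,match,block,in,4'
IFS=, read -r rule sub_rule anchor tracker iface reason action direction ip_ver <<EOF
$line
EOF
echo "rule=$rule tracker=$tracker iface=$iface action=$action direction=$direction"
# prints: rule=5 tracker=1000000103 iface=em0 action=block direction=in
```

The remaining columns (tos, ttl, flags, protocol, IPs, ports) are what PFSENSE_IPv4_SPECIFIC_DATA, PFSENSE_IP_DATA, and the protocol patterns pick up.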
Download and install the MaxMind GeoIP database (optional)
cd /etc/logstash
sudo curl -O "http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz"
sudo gunzip GeoLiteCity.dat.gz
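Before rebooting, it's worth asking Logstash to validate everything under conf.d. The 2.x .deb package puts the binary under /opt/logstash (an assumption - adjust the path if your layout differs):

```shell
# Validate all files in conf.d; guarded so the snippet is a no-op
# on a machine without Logstash installed at the assumed path.
if [ -x /opt/logstash/bin/logstash ]; then
  sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/
else
  echo "logstash not found at /opt/logstash/bin/logstash"
fi
```

A "Configuration OK" result here saves a reboot-and-guess cycle later.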

sudo reboot
Point your browser to your server's address on port 5601

On the Kibana index pattern page, select @timestamp and click 'Create'


curl -X GET http://localhost:9200

{
  "name" : "Match",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.2.1",
    "build_hash" : "d045fc29d1932bce18b2e65ab8b297fbf6cd41a1",
    "build_timestamp" : "2016-03-09T09:38:54Z",
    "build_snapshot" : false,
    "lucene_version" : "5.4.1"
  },
  "tagline" : "You Know, for Search"
}
Logstash (log)
cat /var/log/logstash/logstash.log
Logstash (pfSense logs)

tail -f /var/log/logstash/logstash.stdout
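If Kibana shows no data, a quick way to confirm Logstash is actually writing to Elasticsearch is to list the indices - you should see a logstash-YYYY.MM.DD index alongside .kibana, as in the comment thread below:

```shell
# List ES indices; guarded so the snippet degrades gracefully on a
# box without a local Elasticsearch answering on port 9200.
curl -s "localhost:9200/_cat/indices?v" 2>/dev/null \
  || echo "no Elasticsearch answering on localhost:9200"
```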

Building your Kibana Dashboard:

Installation Video (Tutorial/Guide)


  1. Andrew, great write-up. Would it be possible to see your Kibana 4 dashboard JSON? I would love to do some of the visualization and it would be one heck of a head-start.

    1. Thanks for the feedback, Francisco. I'm not opposed to providing the JSON data, but take a look at the tutorial video below. I'm optimistic you'll enjoy building and customizing your visualizations more; if not, let me know and I'll post the JSON data.


  2. Andrew, thanks for this. I have been struggling with getting this working from other tutorials but yours was perfect. I am only having one issue: I am not seeing anything coming into Kibana and am unable to add anything under logstash* per your video. I am seeing perfectly formatted outputs when I run tail -f /var/log/logstash/logstash.stdout, and when I run curl localhost:9200/_cat/indices I see the following:
    yellow open .kibana 1 1 2 0 11kb 11kb
    yellow open logstash-2016.01.30 5 1 457 0 1.3mb 1.3mb

    Any idea where my issue may be?

  3. What do you see when you run: tail -f /var/log/logstash/logstash.stdout - can you post the output?
    Did you reboot after running all the way through the guide above?
    Can you take a look and post the outputs for:
    This is where the "logstash.stdout" is located and you should see the file size increasing as more logs are received.

    I'll post an installation video shortly.

  4. Andrew

    Thanks first for the prompt reply. When I run the tail stdout command I see perfectly formatted alerts coming from PF into ELK. I did eventually reboot when I wasn't seeing anything in Kibana; I tried starting/stopping services first. And the size of my logs is increasing, so I know the communications are working. Here are the outputs of the commands you requested.






    No output from command

  5. Todd,

    The good news is, it's working. It appears that logstash is unable to connect to ElasticSearch via http://localhost:9200

    Can you provide the contents of /etc/hosts?


  6. localhost vmelk.home vmelk

    # The following lines are desirable for IPv6 capable hosts
    ::1 localhost ip6-localhost ip6-loopback
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters

    1. Todd,

      I just ran through the guide with a brand new instance. I created the following video: https://youtu.be/NlZCpQDMJ3w

      Hope it helps - let me know.

  7. Give this a try:
    amend your second line to: vmelk
    Then add the following line:
    #.#.#.# vmelk.home vmelk
    (Replace the #'s with the IP address of your vmelk box)

    1. I doubt that was the fix... I'm guessing it's an encryption issue preventing communication with ES.

    2. Yeah, I tried changing the hosts file as you specified and it's still not working. I am at a loss on where to go with this. I really would like to have this working, but I am burned out mentally. Thanks again for your assistance, Andrew. I greatly appreciate it.

    3. I'll be back at the end of the week (6-Feb-16) and will build another instance w/video recording - perhaps that'll help.

    4. Thank you sir for the new video. That worked!!! Greatly appreciate this!

  8. Hello Andrew. I finished my setup and everything is working perfectly. Great job on this write up. I was wondering if you can help on importing a dashboard. I am trying to import this: https://gist.github.com/elijahpaul/a1b0296ff442a95e9046 and it fails. Any thoughts?


  9. Absolutely, I'll give it another attempt this weekend, but the referenced .json appears to be from an older version of Kibana and may not work. I could possibly rebuild and export the .json. Let me see what I can do and I'll reply later this week.

  10. Hi,

    I just followed your great howto step-by-step on Ubuntu Server 14.04.3 (fresh install on a VirtualBox VM). In the end the web interface opens, but there is no field for "Time field name".

    At the bottom of the GUI there is an error message stating: "unable to fetch mapping. Do you have indices matching the pattern?".

    1. I ran through it again without any issues... I made a new video: https://youtu.be/lQPm8M-0FKo
      Tested using: Ubuntu Server 14.04.4; Elasticsearch 2.2.0; Logstash 2.2.1; Kibana 4.4.1; pfSense 2.2.6; Java 8 64bit

  11. I would check your connectivity. The VirtualBox VM may need a properly configured NIC. Check and see if you can ping your pfSense from within the VirtualBox VM.

  12. I also tried this guide and end up with this error:

    No default index pattern. You must select or create one to continue.

    I am not sure what I am doing wrong :(

    1. I ran through it again without any issues... I made a new video: https://youtu.be/lQPm8M-0FKo
      Tested using: Ubuntu Server 14.04.4; Elasticsearch 2.2.0; Logstash 2.2.1; Kibana 4.4.1; pfSense 2.2.6; Java 8 64bit

  13. Did you configure pfSense to forward logs and configure the IP address in 10-syslog.conf? You should wait ~5 min for it to receive logs; if that doesn't work, your network might be misconfigured. Set and confirm your IP address, and confirm pfSense log forwarding is enabled...

  14. Hello Andrew

    I did verify these settings and they are all correct. I am not sure why this is happening? :(

    1. I will conduct a fresh/new/clean install and provide any updates within 12hrs.

    2. I ran through it again without any issues... I made a new video: https://youtu.be/lQPm8M-0FKo
      Tested using: Ubuntu Server 14.04.4; Elasticsearch 2.2.0; Logstash 2.2.1; Kibana 4.4.1; pfSense 2.2.6; Java 8 64bit

    3. First of all: thanks for your work !

      Did you also include the changes you made in your written howto above or just in the videos?

  15. Hi Andrew,

    Your tuto is really good. Thanks.

    But, (;)) I have an issue with the date of the log; it seems that the filter transforms the year 2016 into the year 2000.

    It's weird: when I don't put the IP of the pfSense, the date is good (but obviously I get a grokparsefailure), and when I put my pfSense IP, the date is wrong. So I think the error is in the part

    if "syslog" in [tags]

    but I can't locate it...

    An example with a wrong IP in the filter :

    "message" => "<134>Mar 9 13:53:31 filterlog: 5,16777216,,1000000103,em0,match,block,in,4,0x28,,54,0,0,DF,17,udp,129,,,14550,49852,109",
    "@version" => "1",
    "@timestamp" => "2016-03-09T13:53:27.303Z",
    "type" => "syslog",
    "host" => "",
    "tags" => [
    [0] "syslog",
    [1] "_grokparsefailure"
    "syslog_severity_code" => 5,
    "syslog_facility_code" => 1,
    "syslog_facility" => "user-level",
    "syslog_severity" => "notice"

    And with the right IP :

    "message" => "5,16777216,,1000000103,em0,match,block,in,4,0x0,,56,28722,0,DF,6,tcp,60,,,56322,22,0,S,1497785896,,29200,,mss;sackOK;TS;nop;wscale",
    "@version" => "1",
    "@timestamp" => "2000-03-09T13:57:17.000Z",
    "type" => "syslog",
    "host" => "",
    "tags" => [
    [0] "PFSense",
    [1] "firewall",
    [2] "GeoIP"
    "evtid" => "134",
    "prog" => "filterlog",
    "rule" => "5",
    "sub_rule" => "16777216",
    "tracker" => "1000000103",
    "iface" => "em0",
    "reason" => "match",
    "action" => "block",
    "direction" => "in",
    "ip_ver" => "4",
    "tos" => "0x0",
    "ttl" => "56",
    "id" => "28722",
    "offset" => "0",
    "flags" => "DF",
    "proto_id" => "6",
    "proto" => "tcp",
    "length" => "60",
    "src_ip" => "",
    "dest_ip" => "",
    "src_port" => "56322",
    "dest_port" => "22",
    "data_length" => "0",
    "geoip" => {
    "ip" => "",
    "country_code2" => "FR",
    "country_code3" => "FRA",
    "country_name" => "France",
    "continent_code" => "EU",
    "latitude" => 48.860000000000014,
    "longitude" => 2.3499999999999943,
    "timezone" => "Europe/Paris",
    "location" => [
    [0] 2.3499999999999943,
    [1] 48.860000000000014

    If you have an idea !

    1. Very odd... I've checked my system (same setup/install as the tutorial) but couldn't replicate the error/issue. I'll keep messing around with it... let me know if you have any luck.

      Grok Debugger:

  16. I followed your guide above for installing ELK and then the one on YouTube for building the dashboard, but have problems building the visualizations.

    You mention a field "id.raw" several times, but I do not have that entry in the drop-down-list.

    Any hints?

    1. I had a similar issue with another pfSense. How many logs pfSense has sent may limit the scope of fields available (it hasn't received any logs with those fields yet). Try the following:
      Login to your ELK instance via web browser
      -Click on "Settings"
      -Click on "logstash-*" under Index Patterns
      -Click on the orange/yellow refresh icon (between the green box with the star and the red box with the trash can).

      Let me know if that works.

    2. Hi Andrew,

      great, that helped, it works now, thanks !

      As I am obviously a complete noob in network traffic logging :), maybe you could help me again:

      how can I make another pfSense box log to the same ELK server?

      I think I have to create a new index, right? What settings do I have to change on ELK for a second pfSense connected?

  17. Well Done Mate!!! Great Job.... Works Well!!