20 March 2020

pfSense/OPNsense | Elastic Stack v7.6+ | Ubuntu 18.04+

pfELK (pfSense or OPNsense)

Visit https://pfelk.3ilson.dev for Scripted, Ansible and Docker installations

https://raw.githubusercontent.com/3ilson/pfelk/master/Images/pfelkdashboard.png

Prerequisites 
Ubuntu Server v18.04+
pfSense v2.4.4+ or OPNsense 20.1+

Navigate to the following within pfSense
Status>>System Logs [Settings]
1) Enable Remote Logging
2) Provide 'Server 1' address (this is the IP address of the ELK installation - ex: 192.168.1.60:5140)
3) Select "Firewall events"
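The 'Server 1' value combines an address and a port; a minimal sketch of how the two parts are used, plus a (commented) way to confirm packets actually reach the ELK host. The IP is just the example from step 2:

```shell
# 'Server 1' in pfSense takes address:port; split it the way the firewall does
server="192.168.1.60:5140"       # example value from the step above
host="${server%%:*}"             # everything before the colon -> IP of the ELK host
port="${server##*:}"             # everything after the colon  -> Logstash UDP port
echo "remote syslog target: $host udp/$port"

# On the ELK host, confirm the firewall's packets arrive (requires root):
#   sudo tcpdump -n -i any udp port 5140
```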

Preparation

Add Oracle Java Repository
sudo add-apt-repository ppa:linuxuprising/java

Add Maxmind Repository
sudo add-apt-repository ppa:maxmind/ppa
Download and install the public GPG signing key
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Download and install apt-transport-https package (Debian)
sudo apt-get install apt-transport-https

Add Elasticsearch|Logstash|Kibana Repositories (version 7+)
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
Update
sudo apt-get update

Install Java 13
sudo apt-get install oracle-java13-installer
Install Maxmind
sudo apt-get install geoipupdate
Update Maxmind with Credentials:
sudo nano /etc/GeoIP.conf
Modify lines 7 & 8 as follows (without < >):
AccountID <Input Your Account ID>
LicenseKey <Input Your LicenseKey>

Modify line 13 as follows:
EditionIDs GeoLite2-City GeoLite2-Country GeoLite2-ASN

Download Maxmind Databases:
sudo geoipupdate -d /usr/share/GeoIP/
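Assuming the three EditionIDs configured above, geoipupdate should leave one .mmdb file per edition in the target directory; a small sketch of the expected paths:

```shell
# Databases expected after geoipupdate, given the EditionIDs configured above
geoip_dir=/usr/share/GeoIP
for db in GeoLite2-City GeoLite2-Country GeoLite2-ASN; do
  echo "$geoip_dir/$db.mmdb"
done
# Verify on your system:
#   ls -l /usr/share/GeoIP/*.mmdb
```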

Add cron (automatically updates Maxmind every week on Sunday at 1700hrs):
sudo nano /etc/cron.d/geoipupdate

Add the following and save/exit (files in /etc/cron.d require a user field):
00 17 * * 0 root geoipupdate -d /usr/share/GeoIP

Install
Elasticsearch v7.6+ | Kibana v7.6+ | Logstash v7.6+
Elastic Stack

Install Elasticsearch|Kibana|Logstash
sudo apt-get install elasticsearch kibana logstash

Configure Kibana|v7.6+

Configure Kibana
sudo nano /etc/kibana/kibana.yml
Amend the following lines (/etc/kibana/kibana.yml)
server.port: 5601
server.host: "0.0.0.0"
Configure Logstash|v7.6+

Change Directory (preparation for configuration files)
cd /etc/logstash/conf.d
Download the following configuration files
sudo wget https://raw.githubusercontent.com/3ilson/pfelk/master/conf.d/01-inputs.conf
sudo wget https://raw.githubusercontent.com/3ilson/pfelk/master/conf.d/11-firewall.conf
sudo wget https://raw.githubusercontent.com/3ilson/pfelk/master/conf.d/20-geoip.conf
sudo wget https://raw.githubusercontent.com/3ilson/pfelk/master/conf.d/50-outputs.conf
Create Patterns Folder

sudo mkdir /etc/logstash/conf.d/patterns
Navigate to Patterns Folder

cd /etc/logstash/conf.d/patterns/
Download the following configuration file

sudo wget https://raw.githubusercontent.com/3ilson/pfelk/master/conf.d/pf-12.2019.grok
Edit (01-inputs.conf)
sudo nano /etc/logstash/conf.d/01-inputs.conf
Revise with your pfSense/OPNsense IP address (01-inputs.conf)
# 01-inputs.conf
input {
  udp {
     port => 5140
  }
}
filter {
  #Adjust to match the IP address of pfSense or OPNsense
  if [host] =~ /172\.22\.33\.1/ {
    mutate {
      add_tag => ["pf", "Ready"]
    }
  }
  #To enable or ingest multiple pfSense or OPNsense instances uncomment the below section
  ##############################
  #if [host] =~ /172\.2\.22\.1/ {
  #  mutate {
  #    add_tag => ["pf", "firewall-2", "Ready"]
  #  }
  #}
  ##############################
  if "pf" in [tags] {
    grok {
      # OPNsense - Enable/Disable the line below based on firewall platform
      match => { "message" => "%{SYSLOGTIMESTAMP:pf_timestamp} %{SYSLOGHOST:pf_hostname} %{DATA:pf_program}(?:\[%{POSINT:pf_pid}\])?: %{GREEDYDATA:pf_message}" }
      # OPNsense
      # pfSense - Enable/Disable the line below based on firewall platform
      # match => { "message" => "%{SYSLOGTIMESTAMP:pf_timestamp} %{DATA:pf_program}(?:\[%{POSINT:pf_pid}\])?: %{GREEDYDATA:pf_message}" }
      # pfSense
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    mutate {
      rename => { "[message]" => "[event][original]"}
      remove_tag => "Ready"
    }
  }
}
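To sanity-check the input before real traffic arrives, here is a hedged sketch: a sample line shaped like pfSense syslog output (the filterlog fields are illustrative, not real data), checked against a rough shell-regex equivalent of the SYSLOGTIMESTAMP/program/message shape the grok pattern expects:

```shell
# Illustrative sample resembling a pfSense filterlog syslog line (not real data)
msg='Mar 20 10:15:00 filterlog: 5,,,1000000103,igb0,match,block,in'

# Rough shell-regex equivalent of the grok shape: timestamp, program, ': ', message
if echo "$msg" | grep -Eq '^[A-Z][a-z]{2} +[0-9]{1,2} [0-9]{2}:[0-9]{2}:[0-9]{2} [^:]+: .+'; then
  echo "sample matches the expected syslog shape"
fi

# With bash, a test packet can be fired at the Logstash UDP input (uncomment to use):
#   echo "<134>$msg" > /dev/udp/127.0.0.1/5140
```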
Disable Swap
sudo swapoff -a
Update TimeZone  
Update the timezone as needed (list valid names with: timedatectl list-timezones)
sudo timedatectl set-timezone EST


Configure Services


Automatic Start (on boot)
Enable Services at Boot (you'll need to reboot or start them manually to proceed)
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
sudo /bin/systemctl enable kibana.service
sudo /bin/systemctl enable logstash.service

Manual Start

Start Services Manually
sudo -i service elasticsearch start
sudo -i service kibana start
sudo -i service logstash start

Point your browser to the server IP on port 5601 (ex: 192.168.1.1:5601)
When prompted for a time field, select @timestamp and click 'Create'
*You may have to wait a few minutes to allow log retrieval

Configuring Patterns

  • In your web browser go to the ELK local IP using port 5601 (ex: 192.168.0.1:5601)
  • Click the wrench (Dev Tools) icon in the left panel
  • Paste the contents of the following template into the console and press the send request button (triangle)
  • https://raw.githubusercontent.com/3ilson/pfelk/master/Dashboard/GeoIP(Template)
  • Click the gear icon (management) in the lower left
  • Click Kibana -> Index Patterns
  • Click Create New Index Pattern
  • Type "pf-*" into the input box, then click Next Step
  • Select @timestamp as the time filter field and click Create Index Pattern
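The index pattern can also be created from the command line; the sketch below builds the saved-object payload and shows (commented) a call to the Kibana 7.x saved-objects API. The endpoint, object id, and headers are my assumption of the standard API, so adjust host/port to your install:

```shell
# Saved-object payload for a pf-* index pattern keyed on @timestamp
payload='{"attributes":{"title":"pf-*","timeFieldName":"@timestamp"}}'
echo "$payload"

# Hypothetical API call (Kibana 7.x saved-objects endpoint; kbn-xsrf header is required):
#   curl -X POST 'http://localhost:5601/api/saved_objects/index-pattern/pf-pattern' \
#        -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$payload"
```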
Import dashboards

  • In your web browser go to the ELK local IP using port 5601 (ex: 192.168.0.1:5601)
  • Click Management -> Saved Objects
  • You can import the dashboards found in the Dashboard folder via the Import button in the top-right corner.


Testing/Troubleshooting

Elasticsearch
curl -X GET http://localhost:9200
{
  "name" : "NYLJDFe",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "7krQg2MzR0irVJ6gNAB7fg",
  "version" : {
    "number" : "5.6.3",
    "build_hash" : "253032b",
    "build_date" : "2017-10-31T05:11:34.737Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}
(sample output from an older install; a v7.6+ installation will report the matching 7.x version numbers)
Status (Elasticsearch)
systemctl status elasticsearch.service
elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-03-13 20:53:51 EDT; 13h ago
     Docs: http://www.elastic.co
 Main PID: 6121 (java)
    Tasks: 74
   Memory: 2.4G
      CPU: 7min 46.327s
   CGroup: /system.slice/elasticsearch.service
           └─6121 /usr/bin/java -Xms16g -Xmx16g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=7
Mar 13 20:53:51 logs systemd[1]: Starting Elasticsearch...
Mar 13 20:53:51 logs systemd[1]: Started Elasticsearch.
Status (Kibana)
systemctl status kibana.service
kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-03-13 20:54:09 EDT; 13h ago
 Main PID: 6205 (node)
    Tasks: 10
   Memory: 82.2M
      CPU: 2min 51.950s
   CGroup: /system.slice/kibana.service
           └─6205 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c
Mar 13 10:43:16 logs kibana[6205]: {"type":"response","@timestamp":"2020-03-13T14:43:16Z","tags":[],"pid":
Status (Logstash)
systemctl status logstash.service
logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-03-13 08:52:27 EDT; 1h 58min ago
 Main PID: 32366 (java)
    Tasks: 43
   Memory: 405.6M
      CPU: 4min 43.959s
   CGroup: /system.slice/logstash.service
           └─32366 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFracti
Mar 13 08:52:27 logs systemd[1]: Started logstash.
Logstash Logs
/var/log/logstash
# View the files in this location (cat/nano/vi) to inspect the Logstash logs
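Two more troubleshooting helpers, sketched under the assumption of a default package install (the paths and the -t flag are standard for Logstash 7.x):

```shell
# Where the package install writes Logstash's own logs
logdir=/var/log/logstash
echo "Logstash logs: $logdir"

# Surface recent errors (run on your ELK host):
#   sudo grep -i error /var/log/logstash/logstash-plain.log | tail -n 20

# Validate the pipeline files without starting the service (-t = test and exit):
#   sudo /usr/share/logstash/bin/logstash -t --path.settings /etc/logstash
```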




