Anticipate cyber-threats with PatrOwl, manage them with TheHive
TL;DR: How to quickly set up a comprehensive SOC infrastructure based on the open-source tools PatrOwl and TheHive.
This article is mostly inspired by a document written in French by Nicolas Schmitz, operational CISO at the ENS Lyon, a French public graduate school. See the original LinkedIn article here
- PatrOwl is a scalable, smart and open-source solution for orchestrating Security Operations. Written in Python, it is composed of a front-end application, PatrowlManager, communicating with one or multiple PatrowlEngines micro-applications. These probes perform the scans, analyze the results and format them in a normalized way. All components remain incredibly easy to customize. Several connectors are already publicly available, including Nessus, OpenVAS, Nmap, SSLScan, Arachni, DroopScan, Sublist3r, SonarQube, CertStream, … It is officially supported by the PatrOwl team.
- TheHive is a scalable 4-in-1 open-source and free Security Incident Response Platform designed to make life easier for SOCs, CSIRTs, CERTs and any information security practitioner dealing with security incidents that need to be investigated and acted upon swiftly. It is the perfect companion for MISP. Thanks to Cortex, its powerful free and open-source analysis engine, you can analyze (and triage) observables at scale using more than 100 analyzers. It is officially supported by the StrangeBee team.
This document is intended for operational CISOs and security engineers who must juggle several “vulnerability scanners”, each with its own capacities, its own repository of assets to scan, its own reports, etc. Once the scan policies have been defined and implemented in each tool, the security expert must still, often manually, inject the vulnerabilities found into a circuit allowing remediation: a ticketing platform, emails or the famous shared .xls files.
Once the vulnerability is in the process, the security expert must follow the effective resolution of the incident, even though these circuits are often unsuitable for cyber-security incident management. This can be messy and hard to manage effectively in day-to-day operations. But that was before :)
In this article, we will install a set of open-source security tools which will work together thanks to an orchestrator which will bring them real added value.
Three scanners will be installed:
- OpenVAS is an excellent general-purpose open-source scanner, ideal for scanning entire subnets and performing authenticated scans for extended vulnerability coverage. This product is used as a core component by many commercial products on the market.
- Nmap will scan the infrastructure and compare the versions of the applications with the vulnerability database. You’ll quickly find out if applications listening on a port on your network are vulnerable.
- Arachni will be our web vulnerability scanner (SQL injections, XSS, file inclusions, …). Note that Nikto and ZAP are also available.
This document covers only a few use cases of the products’ capabilities. It mostly focuses on the technical aspects of the installation. A video is available to quickly show you a use case once the platform is installed.
Prerequisites and system requirements
- We will install the entire solution on a single virtual machine running Debian Buster. The setup is comfortable with at least 4 CPUs, 16GB of RAM and 64GB of SSD disk space. Production installations should be sized according to the quantity of elements scanned and your resilience needs.
- In the following parts, the VM will use the IP address ‘192.168.0.154’. Of course, it has to be replaced by the IP of your own VM.
- Some of the operations described below must be performed as root.
- This documentation does not cover changing the various default passwords, SSL/TLS encryption & certificates, and many other things that will of course be necessary before going into production.
PatrowlManager installation
The official documentation is available here. We will use the Docker installation with persistent storage for the database.
apt-get install docker-compose
mkdir -p /data/patrowl_pg_data
mkdir -p /sources; cd /sources/
git clone https://github.com/Patrowl/PatrowlManager.git
The docker-compose file has to be edited to enable persistent storage. Open the ‘docker-compose.yml’ file and add the corresponding volume line:
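The exact change depends on your version of the file, but assuming the database service uses the official Postgres image, the addition is a volume mapping from the directory created above to the Postgres data directory, roughly:

```yaml
# Sketch of the relevant part of docker-compose.yml (not the full file);
# '/var/lib/postgresql/data' is the data directory of the official Postgres image.
services:
  db:
    volumes:
      - /data/patrowl_pg_data:/var/lib/postgresql/data
```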
Now build the Docker image from source and run the container:
docker-compose build --force-rm
docker-compose up -d
Tip: It is also possible to use the official Docker images available here on Docker Hub
You are now able to connect to the PatrowlManager web GUI at http://192.168.0.154:8083 with the default credentials: admin/Bonjour1!
By default, PatrowlManager is like a general without an army. So we will provide him with a few soldiers, called here PatrowlEngines or simply “engines”.
The official documentation is available here. We will use the Docker installation as well.
cd /sources/
git clone https://github.com/Patrowl/PatrowlEngines.git
Each engine will be available in ‘/sources/PatrowlEngines/engines’ dir.
We use the mikesplain/openvas Docker image, with the options needed to connect to it and to get data persistence:
mkdir -p /data/openvas/
docker run --rm -d -p 4443:443 -p 9390:9390 -e PUBLIC_HOSTNAME=192.168.0.154 -v /data/openvas/:/var/lib/openvas/mgr/ --name openvas mikesplain/openvas:9
Please note: OpenVAS takes several minutes to initialize (it takes a long time to retrieve the signatures). Do not hesitate to go for a coffee! You can check the progress in the logs or with ‘top’:
docker exec -it openvas top
Hint: As long as the CPU is at 100% you have to wait :-)
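If you prefer scripting over watching ‘top’, a small polling helper can wait until a readiness check succeeds. This is a generic sketch; the curl check in the commented example is only an assumption (the web UI answering is a reasonable, not an official, readiness signal):

```shell
#!/bin/sh
# Poll a command every 5 seconds until it succeeds, or fail after <timeout> seconds.
# Usage: wait_for <timeout> <command> [args...]
wait_for() {
  timeout="$1"; shift
  elapsed=0
  until "$@" >/dev/null 2>&1; do
    [ "$elapsed" -ge "$timeout" ] && return 1
    sleep 5
    elapsed=$((elapsed + 5))
  done
  return 0
}

# Example (assumption): treat OpenVAS as ready once its web UI answers.
# wait_for 1800 curl -sk https://192.168.0.154:4443/
```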
You can now connect to https://192.168.0.154:4443 with default credentials admin/admin and access a blank and old-fashioned OpenVAS console:
No further configuration is needed here! PatrowlManager and its engine will be in charge of configuring and starting the scans.
OpenVAS Engine installation
cd /sources/PatrowlEngines/engines/openvas
cp openvas.json.sample openvas.json
Update the gmp_host parameter in the openvas.json file to set your IP address:
"gmp_host": "192.168.0.154"
Build and run the OpenVAS engine:
docker build --tag "patrowl-openvas" .
docker run -d --rm -p 5116:5016 -v /sources/PatrowlEngines/engines/openvas/openvas.json:/opt/patrowl-engines/openvas/openvas.json --name="openvas-engine" patrowl-openvas
Configuration of OpenVAS engine in PatrowlManager
Go back to http://192.168.0.154:8083/, menu `Engines/+Add scan engine instance` and set the API URL of the fresh Docker container (http://192.168.0.154:5116/):
Hint #1: Do not forget the final slash ‘/’
Hint #2: Consider using the IP address when you work with Docker containers running in separate contexts. Avoid using localhost or 0.0.0.0
The engine must then appear in the `Engines/List Engines` menu, with Oper.State at “Ready”.
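If the engine does not reach “Ready”, you can query its container directly from the command line. The status path below is an assumption based on the PatrowlEngines routing convention; check the engine’s README for your version.

```shell
# Query the OpenVAS engine published on port 5116. The '/engines/openvas/status'
# path is an assumption; adjust it to your PatrowlEngines version.
ENGINE_STATUS_URL="http://192.168.0.154:5116/engines/openvas/status"
curl -s --max-time 5 "$ENGINE_STATUS_URL" || echo "engine not reachable"
```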
Nmap Engine installation
No need to install Nmap, it’s embedded within the engine.
cd /sources/PatrowlEngines/engines/nmap
docker build --tag "patrowl-nmap" .
docker run -d --rm -p 5101:5001 --name="nmap-engine" patrowl-nmap
That’s all? Yes.
Configuration of Nmap engine in PatrowlManager
Exactly the same as the OpenVAS engine, with the following API URL: http://192.168.0.154:5101/
Then ensure the Operational State (Oper.State) is set to ‘Ready’.
Arachni Engine installation
The web application scanner Arachni is embedded in the Docker image too. Build and run the Arachni engine:
cd /sources/PatrowlEngines/engines/arachni
cp arachni.json.sample arachni.json
docker build --tag "patrowl-arachni" .
docker run -d --rm -p 5105:5005 -v /sources/PatrowlEngines/engines/arachni/arachni.json:/opt/patrowl-engines/arachni/arachni.json --name="arachni-engine" patrowl-arachni
Configuration of Arachni engine in PatrowlManager
Exactly the same as the other engines, with the following API URL: http://192.168.0.154:5105/
Then ensure the Operational State (Oper.State) is set to ‘Ready’.
Note: The Arachni scanner consumes a lot of CPU/RAM resources (f*ck Ruby). Don’t panic if your fan is over-running.
TheHive installation — The quick and easy way
A lot of very good and comprehensive articles already talk about TheHive & Cortex installation (see here or here).
Here is a (quick and dirty) simple way to get started:
mkdir -p /data/elasticsearch /data/thehiveconf /sources/thehive
chmod 770 /data/elasticsearch
wget https://gist.github.com/MaKyOtOx/f2e727a4d9d2f7d9a3b6f1f4c7bd15ae -O /sources/thehive/docker-compose.yml
wget https://raw.githubusercontent.com/TheHive-Project/TheHive/master/conf/application.sample -O /data/thehiveconf/application.conf
Then edit the /data/thehiveconf/application.conf and update:
- ‘play.http.secret.key’ with a random value
- ‘uri’ with the value “http://192.168.0.154:9200/” (ElasticSearch)
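For the secret key, any long random string will do. One quick way to generate it (a sketch using /dev/urandom; any other generator works as well):

```shell
# Generate a random value for play.http.secret.key (48 random bytes, base64-encoded).
SECRET_KEY=$(head -c 48 /dev/urandom | base64)
echo "play.http.secret.key = \"$SECRET_KEY\""
```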
Now it’s time to download and run TheHive:
docker-compose up -d
Note: Cortex is not installed in this tutorial. But feel free to install it, use it massively and contribute your own analyzers.
After a few moments, you can access http://192.168.0.154:9000/ and click on the “Update Database” button. After initializing its Elasticsearch database, TheHive will prompt you to create an admin account, and you will then be able to discover the interface.
Make PatrOwl and TheHive work together
Here is a basic use case: PatrOwl continuously runs security scans on assets. If a new vulnerability with a high severity is found, an alert is sent to TheHive. The security analyst is made aware of a new security incident and can start the investigation. Let’s make this happen.
Create an API Key on TheHive
In order for PatrOwl to “push” alerts into TheHive, you must first generate and retrieve an API key there. In your TheHive instance, go to the “Admin” / “Users” menu. Click on “Add User” then fill in the following:
Note: Do not forget to check “Allow alerts creation”
Once the user is registered, click on its “Create API Key” button, then on “Reveal”, and copy the displayed value.
Configure TheHive alerts on PatrowlManager
Now, in PatrowlManager, go to the “Admin” / “Settings” menu then click on “Add new setting”:
- In the ‘Key’ field, indicate: alerts.endpoint.thehive.apikey
- In the ‘Value’ field, paste the value of your API Key generated in TheHive
Repeat the operation for the url, with the following values:
- In the ‘Key’ field, enter alerts.endpoint.thehive.url
- In the ‘Value’ field, enter http://192.168.0.154:9000
Repeat the operation for the username, with the following values:
- In the ‘Key’ field, enter alerts.endpoint.thehive.user
- In the ‘Value’ field, indicate api
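Optionally, you can check from the command line that this account is able to create alerts. Here is a sketch of a minimal alert pushed to TheHive’s alert API (field values are illustrative; the /api/alert endpoint and Bearer authentication follow TheHive’s API, but check the exact schema for your version):

```shell
# Build a minimal TheHive alert payload (all values are illustrative).
ALERT_JSON='{
  "title": "New high-severity finding on www.example.com",
  "description": "Reported by PatrowlManager",
  "type": "external",
  "source": "PatrOwl",
  "sourceRef": "finding-1234",
  "severity": 3
}'

# Push it to TheHive (guarded: only runs if the API key variable is set).
[ -n "$THEHIVE_API_KEY" ] && curl -s -XPOST http://192.168.0.154:9000/api/alert \
  -H "Authorization: Bearer $THEHIVE_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$ALERT_JSON" || echo "THEHIVE_API_KEY not set; skipping"
```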
Configure alerting rules on PatrowlManager
Still in PatrowlManager, we have to establish rules for exporting findings to TheHive: go to the “Rules” / “List rules” menu and fill it in as below:
It is necessary to repeat these steps for low or medium alerts if needed. It is a one-time operation! You got it.
Oh, that’s all :) If all the commands worked like a charm until now, you should be able to start scans and receive alerts.
Such a platform may seem like a Heath Robinson machine. It should rather be seen as an assembly of dedicated tools in the pure KISS philosophy, scalable and adjusted to your needs and IT ecosystem. The maintenance workload, updates, etc. should not be underestimated, but these operations are greatly simplified thanks to the use of Docker containers.
During the migration phase, it will be necessary to redefine in PatrOwl the set of scan targets that were previously scattered across different tools. CSV import modules are available, and if you change scanners tomorrow, you will no longer have to redo this :)
These projects are open-source, their communities are very active worldwide, and they have already been adopted by medium and large organizations, MSSPs, and global SOC and CERT/CSIRT teams.
If you do not have enough manpower in system administration, do not hesitate to get help from the PatrOwl team, which offers deployment support! A SaaS edition is also available now.
Several other interesting use cases are worth presenting too.
Bonus: the All-in-One script
WTF man!?! Why didn’t you start there!!
Find below an example of a script for starting all components on startup.
#!/bin/sh
echo "Starting OpenVAS scanner"
/usr/bin/docker run -d --rm -p 4443:443 -p 9390:9390 -e PUBLIC_HOSTNAME=192.168.0.154 -v /data/openvas/:/var/lib/openvas/mgr/ --name openvas mikesplain/openvas:9
echo "Starting OpenVAS engine"
/usr/bin/docker run -d --rm -p 5116:5016 -v /sources/PatrowlEngines/engines/openvas/openvas.json:/opt/patrowl-engines/openvas/openvas.json --name="openvas-engine" patrowl-openvas
echo "Starting Nmap engine"
/usr/bin/docker run -d --rm -p 5101:5001 --name="nmap-engine" patrowl-nmap
echo "Starting Arachni engine"
/usr/bin/docker run -d --rm -p 5105:5005 -v /sources/PatrowlEngines/engines/arachni/arachni.json:/opt/patrowl-engines/arachni/arachni.json --name="arachni-engine" patrowl-arachni
echo "Starting TheHive"
/usr/bin/docker-compose -f /sources/thehive/docker-compose.yml up -d
echo "Starting PatrowlManager"
/usr/bin/docker-compose -f /sources/PatrowlManager/docker-compose.yml up -d
Thanks again to:
- Nabil Adouani, Thomas Franco and Jérôme Léonard, the founders of StrangeBee and co-creators of TheHive and Cortex
- My awesome team at Patrowl and GreenLock Advisory, working hard to build useful security products and deliver high-quality services
- Our customers and contributors ❤️
- Nicolas Schmitz for the original document and his contributions on the project :)
PatrOwl is also available as SaaS. Looking for official support or professional services? Contact firstname.lastname@example.org or visit https://patrowl.io for more information.
— Nicolas MATTIOCCO, CEO of PatrOwl and GreenLock Advisory