Security Research

Where's wald0: Sniffing out the Bloodhound

Blog post by Neil Desai
HPE Security Research

Regardless of which attack life cycle you follow, there are a couple of items everyone can agree on:

  1. Companies have to assume they are already compromised[1].
  2. The earlier in the lifecycle that you can catch an attacker, the lower the overall cost of remediation will be.

After a host is compromised, the attacker has to create a command and control channel, establish persistence, and start internal reconnaissance.

Veris’s ATD (Adaptive Threat Division) continues to raise the bar with the way they (ab)use PowerShell for reconnaissance, lateral movement, and privilege escalation. Their tools are meant to mimic advanced attackers and help the blue team see how they can improve. Many red teams/pentesters use their tools, and at least one hacker, Phineas Fisher, used them when he successfully hacked Hacking Team[2].


John Lambert’s blog post “Defenders think in lists. Attackers think in graphs. As long as this is true, attackers win," explains it well:

“Defenders don’t have a list of assets—they have a graph. Assets are connected to each other by security relationships. Attackers breach a network by landing somewhere in the graph using a technique such as spearphishing and they hack, finding vulnerable systems by navigating the graph.”

BloodHound is the latest project from Veris’s ATD. It takes Active Directory reconnaissance and exploitation down a slightly different path through the use of graph theory. By applying graph theory to information extracted from Active Directory, they can see exactly how many hops, and which path, it takes to reach Domain Admin. This allows them to focus on specific targets and keep their noise to a minimum.

As defenders, we can use this tool to identify the paths our environments would allow an attacker to take to Domain Admin and possibly eliminate some of the easier ones. But even if the easy path(s) can be eliminated, we should still be watching for this type of activity.

Detection Through Dissection

As we look at various ways to detect BloodHound and tools like it, we need to keep a few things in mind:

  • Don’t look for signatures/hashes of a file. While this may have some merit, it is too easy to bypass. Just because a vendor does it doesn’t mean it’s a good choice.

  • Client-side detection is going to present a few challenges, such as:
      • Running the right version of PowerShell to get logs[3]
      • WEF (Windows Event Forwarding) setup and configuration
      • Volume of PowerShell-related events
      • Ratio of false positives to actual attacks

No matter how much defenders get right, there is the never-ending game of cat and mouse as attackers continually find ways to bypass enforcement and/or detection. At DerbyCon 6, Ben0xA’s talk, “PowerShell Secrets and Tactics," goes into some of these bypasses. Casey Smith (@subtee) constantly posts innovative ways to run PowerShell without running powershell.exe, or to bypass AppLocker policies to get PowerShell scripts to run.

To get a better understanding of what BloodHound is doing and how/where to detect it, we need to do a few things:

  1. Set up a lab environment that will allow us to run BloodHound and get some ideas of how it works. For this I have set up a Windows 2012 R2 Domain Controller, a Windows 2012 R2 member server, a Windows 2012 R2 server with SQL 2012, and a Windows 10 client. All of them are part of the ‘’ domain. To make it a little more realistic, I created 20,000 users and 20,000 groups.
  2. Look at the source code[4], read the Github Wiki[5], and watch the presentation[6].
  3. In order to see the events in near real-time, I set up ArcSight ESM 6.9.1c Patch1 and am using the ArcSight SmartConnector (Windows Native) to monitor the security, application and system logs of the domain controller.

BloodHound is broken up into two distinct parts: data ingestion (i.e. gathering/pilfering) and data visualization. Data gathering can be done by itself, with the output sent to CSV files or fed right into the backend Neo4j graph database.

Under the hood, BloodHound’s reconnaissance is a specialized version of PowerView[7]. One of the interesting features of PowerView is that it only needs PowerShell version 2.0 to work; no additional modules or RSAT (Remote Server Administration Tools). This means it can run by default on any Windows 7 or newer OS. It can also be run by a regular user on the network; no special rights are required to get the information.

While there are eight different collection methods, all of them enumerate the users and groups. All of this enumeration is done using LDAP (Lightweight Directory Access Protocol) via ADSI (Active Directory Service Interfaces). From a defender’s standpoint, this is a nightmare, since there is no logging for it, as we will see during testing.
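If you can capture LDAP search requests (for example from a network tap), one telltale of this kind of collection is a filter that selects an entire object class with no narrowing terms. The heuristic below is a sketch; the attribute list and example filters are illustrative, not taken from BloodHound’s source.

```python
import re

# Attributes that only describe *what kind* of object is being selected.
# A filter built solely from these matches a whole class of objects.
CLASS_ONLY_ATTRS = {"objectclass", "objectcategory", "samaccounttype"}

def looks_like_bulk_enumeration(ldap_filter: str) -> bool:
    """Flag LDAP search filters that select an entire object class.

    A query such as "(&(objectClass=user)(objectCategory=person))" returns
    every user, while "(&(objectClass=user)(sAMAccountName=jsmith))" is a
    targeted lookup and is not flagged.
    """
    # Pull out the attribute name from every attribute=value comparison.
    terms = re.findall(r"\(\s*([A-Za-z0-9]+)\s*=", ldap_filter)
    if not terms:
        return False
    return all(t.lower() in CLASS_ONLY_ATTRS for t in terms)
```

A SIEM rule built on this idea would alert on class-wide filters originating from user segments, where such queries are abnormal.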

To understand what happens when we run BloodHound, we are going to enable Windows event logging on the domain controller. To enable all the categories/subcategories for both success and failure auditing, I run the command “AUDITPOL /SET /CATEGORY:* /SUCCESS:ENABLE /FAILURE:ENABLE”. To validate that everything is set properly, I run “AUDITPOL /GET /CATEGORY:*” (Image 1).

[Image 1: output of AUDITPOL /GET /CATEGORY:*]

Before I run BloodHound, I baseline the events. I have two separate continuous active channels running: one for all Windows events and one for “Target UserName = apu” (the account I am using to test with). The number of events generated before running BloodHound is minimal, as seen in the screenshots below.

[Image: baseline event counts in the two active channels]

Before I run BloodHound, I will start Wireshark with no capture filters applied to see what we can learn about its traffic patterns. Since we don’t have a lot of hosts in the lab we will focus on the traffic to the domain controller.

After running BloodHound, we can see that the number of logs generated was negligible, especially considering the lab has nothing else running. In a production environment there would be no way to spot the minimal increase in logs related to BloodHound.

[Image: event counts after running BloodHound]

Using Wireshark to generate statistics on the capture, we notice one conversation that is significantly larger than the rest. At 70 MB, it is larger than all the other conversations combined.


Looking at the protocol breakdown of the traffic we can see that most of it is LDAP (TCP port 389).


Using the information we gained from event logging and Wireshark, we can see that regular event logging is not going to provide any useful events. The reason is that all the traffic is LDAP-related and will not trigger any Kerberos or NTLM (NT LAN Manager) events beyond a logon event when the LDAP session first authenticates.

Network Traffic Analysis

If you have flow (NetFlow/sFlow) logs, you can look for high volumes of LDAP traffic to your DCs. If you have firewalls between your DCs and the parts of the network you want to monitor, you can enable a rule to log that traffic and again look for high volumes of LDAP. You will also want to look for LDAP sessions that take significantly longer than the rest. Each environment will be different: to see what you should be looking for, run BloodHound in your environment while monitoring with any network capture program, then measure how much data was sent (total and in each direction) and how long BloodHound’s LDAP queries took. Since LDAP is primarily used for searching, queries should be specific to certain item(s); as a result, they should complete quickly and transfer minimal amounts of data. The queries BloodHound runs enumerate all user accounts, which is abnormal, especially coming from a user segment.
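A flow-based check like this can be sketched in a few lines. The thresholds below are placeholders; derive real values from your own BloodHound test run as described above.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- tune them by measuring BloodHound's LDAP
# traffic in your own environment. Normal directory lookups move far
# less data and finish far faster than a full enumeration.
MAX_BYTES = 5_000_000
MAX_SECONDS = 30.0

@dataclass
class Flow:
    src: str
    dst: str
    dst_port: int
    byte_count: int
    duration: float  # seconds

def suspicious_ldap_flows(flows):
    """Return flows to LDAP/LDAPS (389/636) that are unusually large or long."""
    return [
        f for f in flows
        if f.dst_port in (389, 636)
        and (f.byte_count > MAX_BYTES or f.duration > MAX_SECONDS)
    ]
```

Fed with exported flow records, this would have flagged the 70 MB conversation from the lab capture while ignoring ordinary LDAP lookups.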


Using honeytokens to detect malicious activity is nothing new. However, detection is usually centered on the use of the information, not on its enumeration. Users and groups are directory objects and can be audited just like files/folders, giving valuable audit information. To enable this, first enable the “Directory Service Access” subcategory under “DS Access”.

To properly detect AD enumeration, the honeytokens need to be set up accordingly:

  • User and group accounts need to be created.
  • The naming convention of the user and group accounts needs to spread them out across the alphabet.
  • The more accounts, the more accurate the detection.
  • Group accounts should contain regular user accounts as well as honeytoken user accounts.
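One way to satisfy the "spread across the alphabet" requirement is to pick first letters at evenly spaced positions, so honeytokens land throughout any alphabetically sorted enumeration. The helper below is a sketch; the actual naming scheme is up to you and should blend in with your real account conventions.

```python
import string

def spread_first_letters(count: int) -> list[str]:
    """Pick `count` first letters spread evenly across a-z.

    Honeytoken accounts whose names start with these letters will appear
    throughout an alphabetically sorted directory listing, so a full
    enumeration is likely to touch several of them.
    """
    letters = string.ascii_lowercase
    step = len(letters) / count
    return [letters[int(i * step)] for i in range(count)]
```

For example, four honeytokens would get names starting with a, g, n, and t; 26 or more would cover every letter.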

To enable auditing, the “Advanced Features” option needs to be enabled in the “Active Directory Users and Computers” MMC snap-in.


This will expose the “Security” tab for the object:


After clicking on the Security tab, click “Advanced (1) -> Auditing (2) -> Add (3)”


Set the following properties:

  • Principal = Everyone
  • Applies to = This object only
  • Permissions = Read all properties



After enabling all the proper settings, event ID 4662 events will be logged any time one of these objects is enumerated.


Before the logs can be consumed by ESM, some customizations need to be made on the Connector. The information shown in Event Viewer is easy to digest, but when viewing the “Details” tab you will see the events in their raw form.


As you can see, the GUIDs for “ObjectType” and “ObjectName” are given. While the SmartConnector can look up GUIDs, there can be issues with this (i.e. unresolved GUID, GUID not found, GUID lookup timeout, etc.) in a busy environment. Since this information is critical for us, I added a parser override to normalize the GUID and two map files: one to map the schema GUIDs (user and group) and one to map the user/group-specific information. The files can be found at
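The mapping itself is simple; the sketch below shows the idea in Python rather than in SmartConnector map-file syntax. The two schema class GUIDs are the well-known Active Directory values for user and group objects; the object GUID and account name are hypothetical stand-ins for entries you would populate from your own directory.

```python
# Well-known AD schema class GUIDs for the user and group classes.
SCHEMA_GUIDS = {
    "bf967aba-0de6-11d0-a285-00aa003049e2": "user",
    "bf967a9c-0de6-11d0-a285-00aa003049e2": "group",
}

# Hypothetical per-object GUID -> honeytoken name map (populate from
# your own environment, e.g. by exporting objectGUID for each honeytoken).
HONEYTOKEN_OBJECTS = {
    "11111111-2222-3333-4444-555555555555": "apu.nahasapeemapetilon",
}

def resolve_4662(object_type_guid: str, object_name_guid: str):
    """Translate the raw GUIDs from a 4662 event into readable names.

    Returns a (kind, name) pair; unknown GUIDs pass through unchanged so
    the analyst still sees the raw value.
    """
    kind = SCHEMA_GUIDS.get(object_type_guid.lower(), "unknown")
    name = HONEYTOKEN_OBJECTS.get(object_name_guid.lower(), object_name_guid)
    return kind, name
```

Normalizing the GUIDs to lowercase first (as the parser override does) avoids misses caused by the mixed-case values Windows writes into the raw event.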

After the auditing is turned on, I run BloodHound again to see what information is logged. There were 37 event ID 4662 audit events generated.


This gives us the user name of the person who enumerated the objects, but not the host/IP/device they are on. A simple pivot off the user name will give us the possibly compromised host. When first implementing this, a one-week bake-in period should be observed. During this time, investigate any accounts that are enumerating AD user/group objects and document them. There will be servers/applications that need to do this type of activity as part of their function, but they should be filtered out after they have been vetted.
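After the bake-in period, the vetted accounts become a simple allowlist that is subtracted from the honeytoken hits; anything left is worth investigating. A sketch, with hypothetical account names:

```python
# Accounts vetted during the one-week bake-in (e.g. service accounts
# that legitimately enumerate the directory). Names are hypothetical.
VETTED_ACCOUNTS = {"svc_backup", "svc_inventory"}

def accounts_to_investigate(event_usernames):
    """Given usernames pulled from 4662 honeytoken events, drop vetted
    accounts and return the remainder, deduplicated and sorted."""
    seen = {u.lower() for u in event_usernames}
    return sorted(seen - VETTED_ACCOUNTS)
```

In ESM the same effect is achieved with an active list of vetted accounts referenced by the correlation rule, so new offenders surface without re-alerting on known services.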


With BloodHound advancing the state of internal reconnaissance while remaining nearly invisible, we need to understand how it works to see where we can detect it. By moving detection to the network and to AD event logs, we can stay hidden: attackers can’t see the monitoring, or even know they are being monitored, until they have triggered the events. By dissecting the tool we can better understand its functionality and then monitor for that instead of for signatures that are easily defeated.




















