Black Hat Europe 2024 NOC/SOC: Security Cloud


Cisco is the Official Security Cloud Provider for the Black Hat Network Operations Center (NOC). We work with the other official partners to bring the hardware, software and engineers to build and secure the network for our joint customer: Black Hat.

  • Arista: Wired and Wireless Network Equipment 
  • Corelight: Open Network Detection and Response 
  • Palo Alto Networks: Network Security and SOC Platform 

This was our 8th year supporting Black Hat Europe, and the primary mission in the NOC is network resilience. The partners also provide integrated security, visibility and automation: a Security Operations Center (SOC) inside the NOC.  

When the partners deploy to each event, we set up a world-class network and security operations center in a few days. Our goal remains network uptime and creating better integrated visibility and automation. Black Hat has the pick of the security industry tools, and no company can sponsor or buy its way into the NOC. It is invitation only, with the intention of diversity in partners and an expectation of full collaboration. As a NOC team comprised of many technologies and companies, we are continuously innovating and integrating to provide an overall cybersecurity architecture solution. 

Outside the NOC, partner dashboards were displayed for the attendees to view the volume and security of the network traffic.  

The role of Cisco in the Black Hat NOC has continued to evolve since we were invited to partner in 2016. Black Hat has unlimited access to the Cisco Security Cloud and its capabilities. Working with the NOC leaders (Neil “Grifter” Wyler & Bart Stump) and the chief architect (Steve Fink), we tested, deployed and integrated the following technologies: 

Breach Protection Suite 

User Protection Suite 

ThousandEyes: Network visibility 

The NOC leaders allowed Cisco (and the other NOC partners) to bring in additional software to make our internal work more efficient and have greater visibility; however, Cisco is not the official provider for Extended Detection and Response (XDR), Security Information and Event Management (SIEM), Network Detection and Response (NDR), Security Orchestration, Automation and Response (SOAR) or collaboration.  

To better support Black Hat, we also implemented: 

  • Cisco XDR: Threat Hunting / Threat Intelligence Enrichment / Analyst dashboards / Automation with Webex 
  • Splunk Enterprise Security Cloud: platform for Cisco Security Cloud data sharing, with ThousandEyes, Palo Alto Networks and Corelight integrations alongside Cisco XDR; also executive dashboards
  • Splunk Attack Analyzer: Integrated with Secure Malware Analytics
  • Cisco Webex: Incident notification and team collaboration

Introducing Cisco Duo and Identity Intelligence, by Ryan Maclennan

Cisco Duo is a new addition to the Black Hat NOC. We started with a Proof-of-Concept (PoC) at Black Hat Asia 2024 and turned it into a full deployment at Black Hat Europe. With this deployment, our goal was to create an environment where each partner would have a single sign-on (SSO) user to log into each product provided by a partner. We created groups that mapped each user to an analyst, administrator or approver role.

As an example, if we wanted to use the Palo Alto Networks (PANW) XSIAM product, we could log in with our own users, but only as analysts, unable to make changes on the platform. A PANW admin, however, could make changes as needed. The reverse was also true: PANW admins were analysts within the Cisco products, while we could make changes as necessary on our own products, with the coordination and approval of the NOC leaders.  
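The role mapping described above can be sketched in a few lines. This is a hypothetical illustration of the rule, not the actual NOC group configuration:

```python
# Hypothetical sketch of the cross-partner SSO role mapping described above.
# Partner names and the rule itself are illustrative, not the real NOC config.

def resolve_role(user_partner: str, product_partner: str) -> str:
    """Administrator on your own partner's products, analyst everywhere else."""
    if user_partner == product_partner:
        return "administrator"
    return "analyst"

print(resolve_role("Cisco", "Cisco"))  # administrator
print(resolve_role("Cisco", "PANW"))   # analyst
```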

We were able to integrate Duo SSO into the following partner products: 

  • PANW XSIAM 
  • PANW NGFW 
  • PANW Cortex 
  • PANW Panorama 
  • Corelight Investigator 
  • Arista Cloud Vision 

Most of these integrations were for on-prem products (not publicly available) and a few were cloud-based, showing that we are able to protect an application whether it is publicly available or private. The Cisco products already had an SSO architecture with our corporate accounts and we will transition to the Black Hat SSO infrastructure for Asia 2025. 

After getting all the Duo applications set up, we were able to start getting authentication requests into Duo: 

Below, you can see all the applications we created to integrate Duo SSO.

After the applications were configured and the users enrolled in Duo, we were able to start using the new Cisco Identity Intelligence, from within Duo. 

Cisco Identity Intelligence

Cisco Identity Intelligence (CII) is an AI-powered solution that bridges the gap between authentication and access. It allows us to bring multiple authentication source logs into a single entity and then analyze them to determine if a user is trustworthy. CII gives each user a trust score based on geographic location, login times, Operating System (OS), device type, number of login attempts, correct and incorrect logins, device trust and many more criteria. CII takes all these indicators into account and assigns a trust level to each user. You can see our trust score spread in the below screenshot: 

You can see in the screenshot above that there was one untrusted user, three neutral and nine trusted users. Many of the neutral users were rated that way because CII didn’t yet have enough data to baseline them and was still determining how to classify them. The one untrusted user was me, because the account I used to administer Duo and CII was the same one I used to log into all the other applications.

Before the London-based conference, I was administering Duo and CII from the United States. I then used a VPN a few times while in Europe, so my geography was quickly changing. These events contributed to my ‘Untrusted’ status, worthy of investigation.
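The indicator-weighing idea can be sketched as a toy model. The indicators, weights and thresholds below are invented for illustration; CII’s actual model is not public:

```python
# Toy sketch of indicator-based trust scoring in the spirit of CII.
# All indicators, weights and cutoffs here are invented assumptions.

def trust_level(indicators: dict) -> str:
    score = 0
    score += 2 if indicators.get("device_trusted") else -1
    score += 1 if indicators.get("known_location") else -2
    score -= indicators.get("failed_logins", 0)       # each failure lowers trust
    score += 1 if indicators.get("usual_login_hours") else 0
    if score >= 3:
        return "trusted"
    if score >= 0:
        return "neutral"
    return "untrusted"

# Rapidly changing geography plus failed logins drags a user down:
print(trust_level({"device_trusted": True, "failed_logins": 3}))  # untrusted
```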

Below, we can see the dashboard view of CII, with the quick view of information which an administrator may be interested in seeing.

In the above screenshot, we can see the monthly sign-ins and whether they were successful, the type of Multifactor Authentication (MFA) used, sensitive applications and the countries from which logins were attempted. 

As the Black Hat conference global circuit continues, I am excited to see where we can take CII and use its data to better secure our NOC partner products. 

Dynamic Malware Analysis, by Ryan Maclennan

For Cisco, a core integrated function in the Black Hat NOC/SOC is providing the platform for our partners to send suspicious files to Secure Malware Analytics (aka Threat Grid) for dynamic malware analysis (aka sandboxing). We have expanded the integration over the years, with both Corelight OpenNDR and Palo Alto Networks Firewalls submitting samples. At Black Hat Europe 2024, over 12,000 supported samples were submitted. 

The threat hunters also used Secure Malware Analytics to investigate suspicious URLs and files, without the risk of infection. Most of the convictions were on URLs submitted by the NOC analysts. 

At each conference, we see examples of personal identifying information sent over the network in the clear. One that stood out was a college student’s transcript in clear text. This is what happens when you use http on port 80 for communications (instead of https). The following details of the student were clearly available from the contents downloaded from the self-hosted domain: 

  • Name 
  • Date Of Birth 
  • Social Security Number 
  • College attended and when 

…and that is all you need to craft an identity theft and/or phishing attack on the unassuming student. Always verify your connection security! 

Splunk Attack Analyzer

As a PoC at Black Hat USA, we deployed Splunk Attack Analyzer (SAA) as another malware sandboxing tool. This was a new integration, created on the spot with the help of the Corelight team. This time around in Europe, we were able to enable all of SAA’s capabilities and sent all files to it, matching Secure Malware Analytics. Here is a dashboard summary of the files analyzed by SAA: 

Looking at this, we can see the total number of files analyzed by SAA and what was convicted as malicious. Of the convictions, we found that two were phish kits.   

You may have noticed that Secure Malware Analytics analyzed thousands more files than SAA. This is because we started to hit a rate limit, and our SAA instance didn’t catch it in time. For the next conference, we will work with Corelight to make the integration more robust in handling the rate limiting.
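The kind of hardening we have in mind can be sketched as a rate-limit-aware submission loop. The `submit` callable and the HTTP 429 handling below are illustrative assumptions, not the actual Corelight connector:

```python
import time

# Sketch of rate-limit-aware sandbox submission with exponential backoff.
# The submit() callable and 429 handling are assumptions for illustration.

def submit_with_backoff(submit, sample, max_retries=5, base_delay=1.0):
    """Retry a submission with exponential backoff while rate limited."""
    delay = base_delay
    for _ in range(max_retries):
        status = submit(sample)
        if status != 429:          # 429 = rate limited; anything else is final
            return status
        time.sleep(delay)
        delay *= 2                 # back off: 1s, 2s, 4s, ...
    return 429                     # still rate limited after all retries
```

A loop like this would let the integration ride out a burst instead of silently dropping samples.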

In case you missed it, SAA now has Secure Malware Analytics (SMA) as an engine. This means that when you link your SMA account to SAA, SAA will send files to be analyzed by SMA as well and use its determination as part of its own scoring.
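Conceptually, that makes SMA one verdict source among several. A toy aggregation is sketched below; the engine names, scores and the max() policy are assumptions for illustration, not SAA’s actual scoring algorithm:

```python
# Toy sketch of multi-engine verdict aggregation, with SMA as one engine.
# Engine names, scores and the max() policy are invented for illustration.

def overall_score(engine_scores: dict) -> int:
    """Take the highest per-engine score as the overall verdict score."""
    return max(engine_scores.values(), default=0)

scores = {"static": 20, "web_analyzer": 35, "secure_malware_analytics": 90}
print(overall_score(scores))  # 90
```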

Extended Detection and Automation, by Ivan Berlinson and Aditya Raghavan

The Cisco XDR Command Center dashboard tiles made it easy to see the status of each of the connected Cisco Secure technologies and the automation workflow iterations over the week. 

Below are the Cisco XDR integrations for Black Hat Europe, empowering our threat hunters to investigate Indicators of Compromise (IOC) very quickly, with one search. 

We appreciate alphaMountain.ai, Pulsedive and Recorded Future donating full licenses to Cisco, for use in the Black Hat Europe 2024 NOC. 

The view in the XDR Integrations user interface: 

Unleashing the Power of Cisco XDR Automate at Black Hat Europe

With the ever-evolving technological landscape, automation stands as a cornerstone in achieving XDR outcomes. It’s indeed a testament to the prowess of Cisco XDR that it boasts a fully integrated, robust automation engine.

Cisco XDR Automation embodies a user-friendly, no-to-low code platform with a drag-and-drop workflow editor. This innovative feature empowers your SOC to speed up its investigative and response capabilities. You can tap into this potential by importing workflows within the XDR Automate Exchange from Cisco, or by flexing your creative muscles and crafting your own.

Remember from our past blogs, we used automation for incident notifications into Webex, as well as ‘Creating an Incident’ in XDR for Umbrella category blocks. Both these workflows were spruced up and used extensively at Black Hat Europe 2024. We now see the last update timestamp right in the incident title itself and the Webex message, which greatly simplifies the understanding of a detection for our threat hunters.

The following automation workflows were built specifically for Black Hat use cases: 

  1. Cisco SMA Malicious submission – XDR incident and notification 
  2. Cisco SMA – Monitor Non-Malicious documents submission 
  3. Palo Alto Networks Firewall – Create Cisco XDR incident – V2 
  4. Splunk – Corelight – Create Cisco XDR incident V2 
  5. Splunk – ThousandEyes – Create Cisco XDR incident V2 
  6. Incident Enrichment – Add Room and Purpose 
  7. Palo Alto Threat Logs to Splunk 

Besides #1 and #3, the rest of these workflows were premiered at Black Hat Europe 2024, thanks to the work and inspiration of Ivan. 

Splunk Enterprise Security Cloud, by Ivan Berlinson, Aditya Raghavan and Ryan Maclennan

To give our threat hunters more context from our tools and our partners’, we brought Splunk Enterprise Security Cloud to this Black Hat event to ingest detections from Cisco XDR, Secure Malware Analytics, Umbrella, ThousandEyes, Corelight and Palo Alto Networks, and to visualize them in functional dashboards for executive reporting. The Splunk Cloud instance was configured with the following integrations:

  1. Cisco XDR and Cisco Secure Malware Analytics, using the Cisco Security Cloud app
  2. Cisco Umbrella, using the Cisco Cloud Security App for Splunk 
  3. ThousandEyes, using the Splunk HTTP Event Collector (HEC) 
  4. Corelight, using Splunk HTTP Event Collector (HEC) 
  5. Palo Alto Networks, using the Splunk HTTP Event Collector (HEC) 

The ingested data for each integrated platform was deposited into its respective index, which made data searches for our threat hunters cleaner. Searching for data is where Splunk shines! You begin by simply navigating to Apps > Search and Reporting and typing your search query. You do need to know Splunk’s Search Processing Language (SPL) to build your queries, but that is just a quick tutorial away.  

We found our way through looking at the data and iterating. An example of a simple search for obtaining the count of all alerts from the Suricata engine of Corelight logs is below.
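A minimal SPL sketch of that search might look like the following; the index and sourcetype names here are assumptions, so substitute the ones in your own environment:

```spl
index=corelight sourcetype="suricata" | stats count AS suricata_alerts
```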

The Visualization tab allows you to quickly convert this data into a visual format for previewing. And now, off we went to build search queries across all the datasets we ingested. Those search queries were then aggregated and visualized into an executive view using Splunk Dashboard Studio. Since we ended up with more widgets than can fit in a single Executive screen, we utilized the tabbed dashboard feature. The following two screenshots show the final dashboards along with callouts for the sources of the various widgets. 

The Splunk dashboard in the BH Europe NOC
The Splunk dashboard in the BH Europe NOC

With the charter for us at Black Hat being a ‘SOC within a NOC’, the executive dashboards were reflective of bringing networking and security reporting together. This is quite powerful and will be expanded in future Black Hat events, to add more functionality and expand its usage as one of the primary consoles for our threat hunters as well as reporting dashboards on the large screens in the NOC. 

Threat Hunters’ Story, by Ivan Berlinson

During the Black Hat event, the NOC opens early before the event Registration and closes after the trainings and briefings complete for the day. This means that every threat hunter’s position must be covered by physical, uninterrupted presence for about 11 hours per day. Even with the utmost dedication to your role, sometimes you need a break, and a new potential incident doesn’t wait until you’ve finished the previous one.  

Aditya and I shared the responsibilities as Threat Hunters staffing the Cisco XDR, Malware Analytics and Splunk Cloud consoles, alternating between morning and afternoon shifts. Though in reality both of us stayed on most of the day as we had so much fun writing automation workflows and building dashboards, besides carrying out our primary responsibilities. 

Some of these workflows, in action together, helped us hunt down a potential case of cryptomining in the NOC itself during the early hours of Dec 12! Thanks to the Corelight and PANW firewall integrations in XDR, we had ourselves a singular correlated incident, with detections from both partners.  

The workflows I built include a check for any open incident involving the assets and/or observables in question, so the detection being processed is appended to it; if there is none, a new incident is created. As we can see, the detection from Corelight came in at 09:40 GMT, followed by the detection from PANW firewalls a few minutes later.  
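The append-or-create logic can be sketched as follows. The incident structure and matching rule are simplified illustrations, not the actual XDR Automate implementation:

```python
# Simplified sketch of the append-or-create correlation logic described
# above. Incident shape and matching rule are illustrative assumptions.

def route_detection(detection, open_incidents):
    """Append a detection to an open incident sharing an asset or
    observable with it; otherwise create a new incident."""
    keys = set(detection["assets"]) | set(detection["observables"])
    for incident in open_incidents:
        known = set(incident["assets"]) | set(incident["observables"])
        if keys & known:                      # overlap -> same incident
            incident["detections"].append(detection)
            incident["assets"] = sorted(set(incident["assets"]) | set(detection["assets"]))
            incident["observables"] = sorted(set(incident["observables"]) | set(detection["observables"]))
            return incident
    incident = {"assets": sorted(detection["assets"]),
                "observables": sorted(detection["observables"]),
                "detections": [detection]}
    open_incidents.append(incident)
    return incident
```

With this rule, the Corelight detection opens the incident and the later PANW detection for the same asset lands in the same one, giving a single correlated view.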

As new detections were getting appended into the incident, I quickly updated the automation workflows to include a timestamp indicating the last seen sighting for that incident right in the title. While this might not be what you would do in a production environment, it greatly simplifies the ability for our threat hunters to analyze all the incidents as they come in.

As I investigated the incident, I uncovered another detection that had come in just before; this time the source was Umbrella, and it almost went under the radar because it bore a lower priority score. This detection came in through the automation workflow used in past years. It provided confirmation of the cryptomining activity on the endpoint.  

Next question: who is this 10.X.X.X device? And should they be cryptomining at Black Hat? Thanks to another automation workflow, with one click of a Response action in the Incident Response playbook we had attribution for the identified asset, with the physical room name and location within the event center pulled from a pre-defined database. 

Lo and behold – the asset was in Level 3, Capital Suite, Room 1, connected to the NOC Wi-Fi; right in the same room as me! I had built another automation workflow that brings Corelight and PANW firewall threat detections into Splunk Cloud, through which we were able to track the device in the room down to a MacBook and a MAC address.  

Time to tap on someone’s shoulder. 

Network Visibility with ThousandEyes, by Jessica Santos, MD Foysol Ferdous, Ryan MacLennan

Black Hat Europe 2024 was the sixth consecutive event with a ThousandEyes (TE) deployment. We spread that visibility across core switching, Registration, the Business Hall, two- and four-day training rooms, and Keynote areas. Below is some of the hardware Black Hat purchased for the ThousandEyes agents. 

We worked with Michael Spicer on the location of the agent deployment to ensure representative coverage and the types / frequency of scheduled testing.  

Optimizing Network Monitoring with ThousandEyes 

We had a dashboard in the NOC, so the leaders and architect could see issues in real time, and ThousandEyes widgets in the Splunk executive dashboard, as seen earlier in the blog.  

At Black Hat Europe 2024, we had a problem where the ThousandEyes agents were showing high latency to Azure. We were receiving calls about access to Azure being slow, but being the proactive NOC we are, we had already gone ahead and investigated what was causing the high response time.  

We investigated the Azure network path recorded by ThousandEyes and found three destinations the Azure status portal uses.

Two of those destinations are outside of the United Kingdom: one in the United States and one in Japan. In the screenshot above, you can see a single red link, which can be used for either the US or the Japan Azure status portal. Besides the geographic distance, this is the most likely cause of the increased response time we were seeing. Seeing this, we SSH’ed into one of the ThousandEyes agents and used the httping tool to run a similar test to Azure. When we ran the test against the Azure status portal page, we saw some normal response times followed by many latent ones, which matched what ThousandEyes reported. This led us to the conclusion that the Azure status portal load balances its workload, but not geographically. 
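That manual check can be approximated with a small timing probe, a rough stand-in for httping. The URL is a placeholder for whichever endpoint is under investigation:

```python
import time
import urllib.request

# Rough stand-in for an httping check: time several requests to a URL and
# return each response time. URL and repeat count are up to the operator.

def probe(url, count=5, timeout=5.0, opener=urllib.request.urlopen):
    """Return the response time of each request, in milliseconds."""
    times = []
    for _ in range(count):
        start = time.monotonic()
        with opener(url, timeout=timeout):
            pass
        times.append((time.monotonic() - start) * 1000.0)
    return times
```

A wide spread between the fastest and slowest samples, like we saw, hints that requests are being served from different (and distant) backends.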

With this data, we decided to hard-code the IP of the United Kingdom server into the ThousandEyes test to better represent how attendees would access Azure.   

ThousandEyes is a very powerful tool, able to determine whether an issue resides inside the network or outside it, where we cannot control it. Below is a screenshot of how many different network paths traffic can take to a single resource. This shows the importance of being able to pinpoint exactly where an issue is taking place. 

Meraki Systems Manager, by Paul Fidler and Connor Loughlin

Our fourth year of deploying Meraki Systems Manager at Black Hat Europe, as the official Mobile Device Management platform, went very smoothly. We introduced a new caching operation to update iOS devices on the local network, for speed and efficiency. Going into the event, we planned for the following number of devices and purposes: 

  • iPhone Lead Scanning Devices: 68 
  • iPads for Registration: 9 
  • iPads for Session Scanning: 12 
  • Number of devices planned in total: 89

We registered the devices in advance of the event. Upon arrival, we turned each device on.  

The Wi-Fi profile that we needed for the Black Hat iOS devices was not installed. However, I had brought a Meraki Z3C, with cellular and Wi-Fi capability, because it normally takes a couple of days to get Wi-Fi set up in Registration, where we prepare the devices before deployment. So, within literally 15 seconds, I’d spun up a new SSID, prefixed with a full stop so that it appeared at the top of the available Wi-Fi networks, and before the first iOS device had powered on, the Z3C was broadcasting it. With just a handful of seconds of toil on each device, we had them connecting back to the Meraki Dashboard to get the correct Wi-Fi profile. 

More pain: Location services

Location services is a pain point for mobile device management. Firstly, you must ensure that Location is NOT skipped at the time of device supervision using Apple Configurator. This adds an extra step at supervision time (half a second of toil), rather than having to open Settings, scroll down to Privacy, then Location, and flick the toggle on each device. Sadly, this was skipped during event preparation by the contractor, so we had to enable it manually for each device. 

Location of devices is important for theft retrieval or if the device is misplaced. Location is enabled by opening the System Manager app and tapping Location, then Enable, then “Whilst using the application.” You can then tap it again and click “Always allow” which allows for location when the app is in the background. Because of (and this is where you could end up in a heated discussion) Apple’s stance on privacy, it’s such a shame that this can’t be managed. 

Below is a quick screenshot of the Z3C client dashboard after an hour of the devices being turned on. 

Interesting to see where the device is calling out to, in the screenshot below. 

Application Updates 

Having applications updated in the middle of an event can have disastrous consequences. Whilst there isn’t an overall setting or restriction to prevent this, it is possible to do it at the application layer. And, of course, Meraki Systems Manager allowed us to do this for all apps at the same time. 

OS Updates 

I think this goes without saying, but the ability to remotely update devices in the event of an urgent vulnerability fix is invaluable, like we had last year in Las Vegas with Apple. 

Firewall Rules 

The security of the Registration network is paramount as there is Personal Identifying Information data on this network. So, we have some pretty strict inbound and outbound rules. 

Managing Apple devices requires that 17.0.0.0/8 be kept open for port 80 and 443 traffic. Furthermore, Meraki lets you download a dynamically created list of the servers it needs open so that endpoints can be managed. But, as we are also using Cisco Umbrella and AMP (Secure Endpoint), there is a whole host of other endpoints that need to be opened. These are listed here.  
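The Apple part of that rule is easy to sanity-check in code. This sketch covers only the 17.0.0.0/8 allowance quoted above; the Meraki, Umbrella and Secure Endpoint destinations would need their own entries:

```python
import ipaddress

# Sketch of the Apple device-management allow rule described above:
# 17.0.0.0/8 open for ports 80 and 443. Other partner/cloud endpoint
# lists are out of scope for this toy check.

APPLE_NET = ipaddress.ip_network("17.0.0.0/8")
ALLOWED_PORTS = {80, 443}

def allowed(dest_ip: str, dest_port: int) -> bool:
    """Would this destination pass the Apple-management rule?"""
    return ipaddress.ip_address(dest_ip) in APPLE_NET and dest_port in ALLOWED_PORTS

print(allowed("17.253.144.10", 443))  # True: Apple range, HTTPS
print(allowed("8.8.8.8", 443))        # False: outside 17.0.0.0/8
```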

Content Caching 

One of the biggest problems affecting the iOS devices at past Black Hat events was the immediate need to both update the iOS devices’ OS, due to a patch fixing a zero-day vulnerability, and to update the Black Hat iOS app on the devices. At the USA events, there are hundreds of devices, so having each one download and install was a challenge. So, I took the initiative to look into Apple’s Content Caching service built into macOS. 

Now, just to be clear, this wasn’t caching EVERYTHING… just Apple App Store updates and OS updates. 

This is turned on within System Settings and starts working immediately.  

I’m not going to get into the weeds of setting this up, because there’s so much to plan for. But I’d suggest that you start here. The setting I did change was: 

I checked to see that we had one point of egress from Black Hat to the Internet. Apple doesn’t go into too much detail as to how this all works, but I’m assuming that the caching server registers with Apple and when devices check in for App store / OS update queries, they are then told where to look on the network for the caching server. 

Immediately after turning this on, you can see the default settings and metrics: 

% AssetCacheManagerUtil settings
Content caching settings:


AllowPersonalCaching: true
AllowSharedCaching: true
AllowTetheredCaching: true
CacheLimit: 150 GB
DataPath: /Library/Application Support/Apple/AssetCache/Data
ListenRangesOnly: false
LocalSubnetsOnly: true
ParentSelectionPolicy: round-robin
PeerLocalSubnetsOnly: true

And after having this run for some time: 

% AssetCacheManagerUtil status
Content caching status:


Activated: true
Active: true
ActualCacheUsed: 528.2 MB
CacheDetails: (1)

Other: 528.2 MB

CacheFree: 149.47 GB
CacheLimit: 150 GB
CacheStatus: OK
CacheUsed: 528.2 MB
MaxCachePressureLast1Hour: 0%
Parents: (none)
Peers: (none)
PersonalCacheFree: 150 GB
PersonalCacheLimit: 150 GB
PersonalCacheUsed: Zero KB
Port: 49180
PrivateAddresses: (1)

x.x.x.x

PublicAddress: x.x.x.x
RegistrationStatus: 1
RestrictedMedia: false
ServerGUID: xxxxxxxxxxxxxxxxxx
StartupStatus: OK
TetheratorStatus: 1
TotalBytesAreSince: 2023-12-01 13:35:10
TotalBytesDropped: Zero KB

TotalBytesImported: Zero KB
TotalBytesReturnedToClients: 528.2 MB
TotalBytesStoredFromOrigin: 528.2 MB

Now, helpfully, Apple also pops this data periodically into a database located at /Library/Application Support/Apple/AssetCache/Metrics/Metrics.db, in a table called ZMETRICS. 

This is also available in Activity Monitor.

And with a small number of devices, you can see how quickly the server starts reducing the impact on the WAN. That said, getting the data from the command line occasionally is painful, especially in the format in which it is presented.  

Helpfully, Apple also allows appending -j to the end of the status command to present the information in JSON: 

{
  "name": "status",
  "result": {
    "Activated": true,
    "Active": true,
    "ActualCacheUsed": 2327774501,
    "CacheDetails": {
      "iCloud": 109949295,
      "iOS Software": 20800617,
      "Mac Software": 11984379,
      "Other": 2226505758
    },
    "CacheFree": 247630759951,
    "CacheLimit": 250000000000,
    "CacheStatus": "OK",
    "CacheUsed": 2369240049,
    "MaxCachePressureLast1Hour": 0,
    "Parents": [],
    "Peers": [],
    "PersonalCacheFree": 249890050705,
    "PersonalCacheLimit": 250000000000,
    "PersonalCacheUsed": 109949295,
    "Port": 49181,
    "PrivateAddresses": ["10.10.10.10"],
    "PublicAddress": "X.X.X.X",
    "RegistrationStatus": 1,
    "RestrictedMedia": false,
    "ServerGUID": "FDE578EE-XXXX-XXXX-XXXX-102B60869501",
    "StartupStatus": "OK",
    "TetheratorStatus": 0,
    "TotalBytesAreSince": "2024-12-12 11:58:04 +0000",
    "TotalBytesDropped": 0,
    "TotalBytesImported": 0,
    "TotalBytesReturnedToChildren": 0,
    "TotalBytesReturnedToClients": 11482694,
    "TotalBytesReturnedToPeers": 0,
    "TotalBytesStoredFromOrigin": 1164874,
    "TotalBytesStoredFromParents": 0,
    "TotalBytesStoredFromPeers": 0
  }
}
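Because the counters come back as plain integers, the JSON form is easy to turn into a savings figure. A minimal sketch, using two of the keys from the status output above:

```python
import json

# Parse the JSON form of the AssetCacheManagerUtil status output and
# compute how many bytes clients got from the cache rather than the WAN.
# The sample string below is trimmed to the two counters we need.

sample = ('{"name":"status","result":{"TotalBytesReturnedToClients":11482694,'
          '"TotalBytesStoredFromOrigin":1164874}}')

result = json.loads(sample)["result"]
served = result["TotalBytesReturnedToClients"]   # bytes handed to devices
fetched = result["TotalBytesStoredFromOrigin"]   # bytes pulled over the WAN
saved = served - fetched                         # bytes served from cache
print(f"served {served}, fetched {fetched}, saved {saved} bytes")
```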

ThousandEyes Agent for the Caching Server 

Given that we have an Apple Mac mini on the Registration network, it was a simple decision to install the ThousandEyes macOS agent on it automatically using Meraki Systems Manager. 

This can be downloaded from Endpoint Agents > Agent Settings > Add new Endpoint Agent.

However, as I found to my detriment, there is not yet a Universal installer, so make sure you get your processor architecture right (ARM vs x86!). 

In Meraki Systems Manager, we configure the app like this: 

Now, we talked earlier about firewall settings. A Systems Manager custom app can be hosted in two ways: 

  • Hosted on your own infrastructure, or 
  • Hosted by Meraki 

If you’re choosing the latter, just be mindful that Meraki actually hosts it on AWS. Details here.

So, make sure that you have the right AWS instance open on your firewalls, or host packages yourself.  

Checking with the PANW firewall team, we determined the caching server saved 5% of the traffic for the week, freeing up bandwidth for training and demos. For Black Hat Asia 2025, we plan to explore how to host Windows Updates, a large consumer of bandwidth, on the first day of training and briefings. 

Keeping up with Encrypted DNS, by Christian Clasen and Justin Murphy 

For the past couple of years, we have been utilizing the PANW edge firewalls to redirect outbound DNS queries towards our internal resolvers. This closed a gap in policy and visibility that existed for attendees at the Black Hat event. As evidenced by the DNS Statistics charts later in the blog, the strategy paid off with a noticeable jump in observed queries. But as it so often goes in technology (and security in particular), these types of strategies can lead to an arms race of sorts. 

In those same years, browser and operating system builders have expanded the deployment of encrypted DNS protocols. In addition to wrapping DNS in raw TLS and HTTPS, more exotic technologies are now in the mix. Chief among them is Apple’s implementation of Oblivious DNS over HTTPS (ODoH). The purpose of ODoH is to prevent the snooping of DNS queries, not just on the local LAN, but also by the DNS providers themselves. I provided an overview of this technology in my “History of DNS Security” Cisco Live talk. 

The gist of ODoH is as follows:  

  • The “first hop” recursive DNS resolver receives the client lookup, but the client has done something sneaky to prevent this initial resolver from knowing what domain name the client is looking for: it has wrapped the original query in an encrypted blob and added a bogus name to the “outer” message.
  • When the recursive resolver sends the query upstream to the authoritative name server for the “bogus” domain, the message can be decrypted because that name server is an ODoH-aware server that expects this encrypted message! 
  • The server sees the query information and can recurse the DNS for the answer as normal but is never made aware of the original client IP…it is only servicing the first recursive resolver as its client.

This separation of duties ensures that, in the absence of collusion between the first and second DNS providers, a client and its queries can never be correlated, and useful tracking is rendered impossible. 
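The separation of duties can be shown with a toy model: the proxy knows who is asking but not what, while the target learns the query but never the client. The “sealing” below is a placeholder tuple, not real HPKE encryption:

```python
# Toy model of the ODoH split described above. The "sealed" tuple stands
# in for real encryption to the target; nothing here is cryptographic.

def client_wrap(query, target_key):
    return ("sealed", target_key, query)      # stand-in for encrypting to the target

def target_resolve(sealed_query, source_ip):
    _, _, query = sealed_query                # stand-in for decryption
    return {"query": query, "seen_source": source_ip}

def proxy_forward(sealed_query, client_ip, target):
    # The proxy never opens the blob and replaces the client identity.
    return target(sealed_query, source_ip="proxy")

answer = proxy_forward(client_wrap("example.com", "target-pubkey"),
                       client_ip="203.0.113.5", target=target_resolve)
print(answer)  # the target saw the query, but only the proxy as the source
```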

Apple implements this architecture in its Private Relay feature. In addition to all the privacy features detailed above, Private Relay uses QUIC to transport packets to Apple making the communication even more opaque to network operators like us in the Black Hat NOC. The wide deployment of Private Relay has led to a drop in DNS queries evaluated and logged by Umbrella.  

We presented our observations and recommendation to the NOC leaders, who decided it would be best to try to block these protocols (DoT, DoH and Private Relay) for better visibility. On the morning of the last day, we added the policy to Umbrella. 

We immediately saw blocks for domains associated with Apple’s MASQUE proxies in the activity search, as well as those used by Android phones for DoT. The end user experience was not impacted.  

Over 129k of these blocks occurred from 11:05am to the shutdown at 6pm on the last day, 12 December. 

We will continue this policy going forward at other Black Hat events and monitor the statistics as usual. 

DNS Statistics, by Christian Clasen and Justin Murphy 

We can see the jump in queries due to forced DNS redirection at the edge, and the drop due to the expansion of Apple Private Relay (see previous blog section for detailed analysis). 

The top categories for 2024 (and 2023) are below.  

Umbrella tracks the unique apps connecting to the network. We saw a marked increase in GenAI apps. If needed, we can block apps that demonstrate a threat to the conference. 

2021: 2,162 apps 

2022: 4,159 apps  

2023: 4,340 apps 

2024: 4,902 apps 

All in all, we are very proud of the collaborative efforts made here at Black Hat Europe by both the Cisco team and our partners in the NOC. Great work everybody! 

Black Hat Asia will be in April 2025, at the Marina Bay Sands, Singapore…hope to see you there! 

Acknowledgments 

Thank you to the Cisco NOC team: 

  • Cisco Security: Ivan Berlinson, Aditya Raghavan, Christian Clasen, Justin Murphy and Ryan Maclennan 
  • Meraki Systems Manager: Paul Fidler and Connor Loughlin
  • ThousandEyes: MD Foysol Ferdous and Jessica Santos
  • Additional Support and Expertise: Tony Iacobelli and Abhishek Sha

Also, to our NOC partners Palo Alto Networks (especially James Holland and Jason Reverri), Corelight (especially Dustin Lee and Mark Overholser), Arista Networks (especially Jonathan Smith), and the entire Black Hat / Informa Tech staff (especially Neil “Grifter” Wyler, Bart Stump, Steve Fink, James Pope, Michael Spicer, Jess Stafford and Steve Oldenbourg). 

About Black Hat 

Black Hat is the cybersecurity industry’s most established and in-depth security event series. Founded in 1997, these annual, multi-day events provide attendees with the latest in cybersecurity research, development, and trends. Driven by the needs of the community, Black Hat events showcase content directly from the community through Briefings presentations, Trainings courses, Summits, and more. As the event series where all career levels and academic disciplines convene to collaborate, network, and discuss the cybersecurity topics that matter most to them, attendees can find Black Hat events in the United States, Canada, Europe, Middle East and Africa and Asia at: Black Hat.com. Black Hat is brought to you by Informa Tech. 
