
RSA Labs

As described in our first blog post about Project Mercury, the vision for Project Mercury is to build a suite of cloud-hosted services to enable companies to utilize verifiable credentials to improve their business. But why?
Business interactions are rooted in knowledge. Businesses need to know certain things about their users to provide their services. For example, an employer needs to know the social security numbers of their employees in order to report their salaries to the government; financial institutions are bound by regulation to know their clients to prevent money laundering; businesses need proof of licensure from their staff. But on the internet, how do you really know anything?

Credit: Peter Steiner, "On the Internet, nobody knows you're a dog." The New Yorker, 5 July 1993. Image: Peter Steiner/The New Yorker/The Cartoon Bank

Solving this problem is why we started RSA Project Mercury - to allow businesses to truly know their users. As businesses evolve and expand online to achieve global scale, their ability to know is limited. We trust, but we need a way to verify.
With verifiable credentials and RSA’s Project Mercury, businesses can finally verify what they need to know. It's now possible for users to collect cryptographically verifiable attestations about all aspects of their lives, which they can then present to you to enable your workflows. Employees can present digital, verifiable proof of their social security numbers; third-party vendors can provide attestations for active employees, which can be used to grant access; customers can present claims attesting to their identity, allowing financial institutions to achieve compliance with little to no effort. All of these can be instantly verified by RSA. Verification creates knowledge, and knowledge is power.
What do you need to know about your customers, your employees, or your third-party vendors? Which of your workflows could be transformed to be faster, more user-friendly, and more secure?
Leave a comment below to start a conversation with our team.
Matthew Tharp

Project Duma

Posted by Matthew Tharp, Apr 7, 2020

Today's security teams need help analyzing network traffic to find threats. More and more attackers are using encrypted traffic. Previously, defenders relied on DNS lookups to identify the type of traffic an encrypted session contained, but with domain fronting and DNS over HTTPS (DoH), defenders are losing that visibility. Project Duma is about helping analysts uncover threats in encrypted traffic by attempting to classify both normal and suspicious use of encrypted protocols.

The hypothesis behind Project Duma is that although encryption hides the actual data in transit, it doesn't hide the application's behavior. The application state machine can be profiled by looking at which endpoint sends how much data and how quickly. For example, if a connection is long-lived, carries a large amount of data, and sustains a near-constant throughput (say 1-3 Mbps), it is likely a streaming-media connection. This technique could have many applications in profiling TLS traffic and unknown protocols. The project begins with a focus on identifying interactive SSH sessions and reverse shells.
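As a rough illustration of this profiling idea, here is a minimal sketch that classifies an encrypted flow from coarse features like duration, volume, and burstiness. This is not the actual Project Duma model; the thresholds and feature names are our own illustrative assumptions.

```python
def classify_flow(duration_s, bytes_total, peak_to_mean_ratio):
    """Guess the application class of an encrypted flow without
    inspecting payloads.

    duration_s         -- flow lifetime in seconds
    bytes_total        -- total bytes in both directions
    peak_to_mean_ratio -- burstiness: peak throughput / mean throughput
    """
    mean_mbps = (bytes_total * 8) / (duration_s * 1_000_000)
    # A long-lived, high-volume flow with near-constant throughput
    # (peak close to mean) looks like streaming media.
    if duration_s > 60 and 1 <= mean_mbps <= 3 and peak_to_mean_ratio < 2:
        return "streaming-media"
    # Long-lived but low-volume and bursty: consistent with an
    # interactive session such as SSH.
    if duration_s > 60 and mean_mbps < 0.1 and peak_to_mean_ratio > 5:
        return "interactive"
    return "unknown"

# A 10-minute flow carrying 150 MB at a steady rate:
print(classify_flow(600, 150_000_000, 1.3))   # -> streaming-media
```

A production classifier would of course learn these boundaries from labeled flows rather than hard-code them, but the feature set (who sends how much, how fast, how steadily) is the same.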


The Secure Shell (SSH)

The Secure Shell protocol was developed in 1995 to provide a secure login connection for remote machines [2]. The protocol was intended to replace protocols like telnet and ftp that don't encrypt their traffic and send their authentication in the clear. The protocol became an IETF standard (RFC 4251) and is widely used for managing servers and transferring files.

Attackers want to use SSH because it provides them with command-line interface (CLI) access to remote machines. If an attacker can further exploit the remote host to gain additional privileges or connect to additional hosts in the victim network, they can use the encrypted network traffic to hide their exploits. Because SSH is so widely deployed for managing servers and remote connections, attackers can abuse it while minimizing the amount of malware they have to install in the environment. This technique is commonly called living off the land, and it helps attackers avoid detection.

Most security teams try to stop attackers by blocking inbound SSH traffic at their firewalls. But outbound SSH traffic is often allowed because developers want to SSH to their cloud instances or because the corporation needs to either send or receive files using the SFTP protocol. Astute security organizations will carefully limit what machines can use SSH to communicate to other machines outside their environment. However, with the increasing popularity of cloud services and SFTP to transfer files it is becoming difficult to police all those connections. This is especially true in the financial sector where we see an abundance of SFTP transfers and data sharing.

Project Duma looks beyond encryption to profile how the SSH protocol is being used. The protocol could be used to transfer files as in the case of SFTP. The protocol could also be used to remotely administer machines automatically through some scripts or other tools. It could be used interactively to control machines or it could be used as a reverse tunnel where although the SSH connection is initiated from inside the corporate network it is reaching out to an attacker who is actually sending commands to the machine that initiated the connection [3]. Detecting these reverse shells is the ultimate target of the project.
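To make the distinction between these usage modes concrete, here is a hypothetical sketch based on which side of the connection sends keystroke-sized packets. The thresholds and the "small packet" cutoff are illustrative assumptions, not Project Duma's actual logic; the key intuition is that in a reverse shell the responder drives the interaction even though the inside host opened the connection.

```python
def classify_ssh_session(client_small_pkts, server_small_pkts, total_mb):
    """Heuristic classification of an SSH session.

    client_small_pkts -- count of small (keystroke-sized) packets sent
                         by the connection initiator
    server_small_pkts -- same count for the responder
    total_mb          -- total megabytes transferred
    """
    small = client_small_pkts + server_small_pkts
    # Lots of data, almost no small packets: file transfer (SFTP/scp).
    if small < 20 and total_mb > 1:
        return "bulk-transfer"
    # The initiator is typing: a normal interactive session.
    if client_small_pkts > 3 * server_small_pkts:
        return "interactive"
    # The responder is typing: the remote side is driving the session
    # even though the inside host connected out -- a reverse shell.
    if server_small_pkts > 3 * client_small_pkts:
        return "reverse-shell-suspect"
    return "unknown"

print(classify_ssh_session(40, 500, 0.2))   # -> reverse-shell-suspect
```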



Project Duma is currently in development and is showing some promising results. However, we are developing against openly published flow datasets. These resources are typically published by universities for research and therefore don't accurately represent our customer base. Since the machine learning and data science approaches used in this project benefit from highly representative data, we are looking for development partners who could contribute data sets for testing. If you or someone you know may be interested in partnering with us on this project, please reach out and let us know by leaving your comments below.





RSA Archer is a leading platform for integrated risk management, and a major component of the product suite is targeted towards mitigating third-party vendor risk. An Archer customer organization may have many vendors which supply products or services. Each of those suppliers carries with it a level of uncertainty with regard to reliability, security, and other factors that could impact the organization. Currently, Archer enables customers to use questionnaires, which are sent to internal employees and external vendors, to conduct due-diligence on such third parties and assess their level of risk.


Because customers may need to calculate the risk for hundreds to tens of thousands of third parties, they need an efficient way for those parties to complete the questionnaires and submit their documentation. In addition, following the completion of each assessment, customers also need to collaborate with vendors to collect information for findings, contracts, insurance documents, performance metrics, and other risk management processes. Doing so can quickly become a complex and time-consuming task, made more difficult by the lack of a consistent means of sending and receiving the questionnaires and tracking the results. Depending on the vendor, everything from importing/exporting spreadsheets to answering questions over the phone has been attempted, with varying degrees of success.


The Archer Third-Party Portal is here to save the day! The portal was designed to address many of the shortcomings of the existing system and provides a simple, efficient, and centralized place to manage third-party questionnaires and their results. Some of the main features of the Third-Party Portal are:


User Segregation

  • External vendor users are completely segregated from the RSA Archer instance.
  • There is no need to worry about a misconfigured access role accidentally giving access to sensitive data within RSA Archer.


Maximum Availability

  • The vendor portal is hosted in the AWS cloud and managed by RSA.
  • No need for platform administration by the customer.


Archer Synchronization

  • Native synchronization is present between content in the Archer platform and the vendor portal.
  • Automatic publish process for assessments.
  • Synchronizing submitted portal content back into RSA Archer is automated and native to the service.


Automated User Provisioning

  • Vendor users are automatically provisioned based on the RSA Archer publish process.
  • Vendor users can invite colleagues to collaborate on assessments by providing only a name and email address.
  • The system automatically generates email invitations to the vendor user.


Vendor Portal Experience

  • Vendors have a centralized portal to log in and view assessments from all customers in a single dashboard.
  • An automated password reset capability reduces administrative overhead.
  • The portal UI provides an intuitive and consistent user experience.
  • Questions are displayed in an easy-to-answer format and the ability to add supporting documentation via attachments is provided.
  • Supports simultaneous editing by multiple users.


Please stay tuned to the RSA Labs blog for further updates. And let us know if you have any questions or feedback in the comments section below. Thanks for reading!

Brian Mullins

Mercury Rising

Posted by Brian Mullins, Apr 7, 2020

When the world underwent a major digital transformation in the mid-to-late 90s with the introduction of eCommerce, RSA was there. RSA pioneered the BSAFE cryptographic library, which Netscape then embedded into their browser to enable secure financial transactions over the web. Now as the world shifts again, this time towards decentralized infrastructure, RSA once again has a role to play. It’s in this context that we’d like to introduce the latest project out of RSA Labs: Project Mercury.


In 2018, RSA Labs investigated decentralized identity with Project Sif. Since then, a lot of progress has been made in the field: standards have emerged and development tools have become available. One particularly promising development is Verifiable Credentials. The idea behind verifiable credentials is simple but powerful: they are cryptographically signed attestations that can be issued by anyone, including governments, banks, or even a friend or family member. The key to their utility is that they can be instantly verified by anyone. The applications of this technology are limitless. Need a way to prove you’re licensed to operate a vehicle? The DMV can issue you a credential. Only want to interview candidates who can prove they have a college degree? Universities can issue credentials to their alumni. The list goes on and on.


The problem companies face in adopting verifiable credential technology is that much of the infrastructure must be custom built. It’s not simple to work with and requires specialized expertise. We’ve seen with Amazon’s AWS the value that companies can realize by building atop undifferentiated infrastructure. Similarly, there is a need for a set of tools and services that make verifiable credential technology easy for companies to build upon. The vision for Project Mercury is to build a suite of cloud-hosted services that enable companies to use verifiable credentials to improve their business. Improvement can come in the form of a streamlined user experience, reduced cost, or reduced risk of doing business. Once this infrastructure is built, companies will be free to innovate around the technology to achieve things not currently possible. RSA wants to be the catalyst that enables those transformations, just as we enabled eCommerce 25 years ago.


To illustrate what’s possible, consider the case of proving ownership of your bank account to your utility company to set up automatic bill pay. Today this is achieved by your utility company depositing a small amount of money into the account, a transaction which takes several days to clear. Once you see the transaction appear in your bank account, you then must go back to your utility provider and enter the transaction amount, and then finally try to remember what it was you were doing in the first place. Instead, consider this flow: the bank issues you a verifiable credential attesting to your ownership of your bank account. When your utility company requests your bank account information to set up automatic bill pay, you only need to present your credential. Upon showing your credential, the utility provider can verify the information within seconds and you can be on your way. Another example where this technology may prove essential is in a post-COVID-19 world. Bill Gates and others have discussed the need for digital credentials that can provide assurances about vaccination or exposure. It’s possible that such credentials may even be required for international travel. Verifiable credentials could fulfill such a need.


Things are just getting started with Project Mercury. Please stay tuned to the RSA Labs blog for the latest updates. Let us know if you have any questions or feedback through the comments below. Thanks!



As everyone grapples with lifestyle adjustments brought on by a global pandemic, it can be beneficial to stop and reflect on the unintended consequences of those adjustments. One possibility worth considering is that COVID-19 may accelerate or drive increased adoption of IoT.


As we have gained more insight and understanding about SARS-CoV-2, the virus that causes COVID-19, we have learned just how long it can survive on surfaces, which may contribute to the high transmission rates being observed.  IoT can play a role in reducing this risk by creating touchless interactions with the world around us, reducing the number of shared surfaces we must touch.


Take the concrete example of a grocery store employee. This employee may have to adjust the thermostat in the refrigerated section, something that a dozen other employees have touched in the last 24 hours. If instead the thermostat were a connected IoT device, it might automatically sense people in the building and adjust the temperature without interaction. If nothing else, the employee could interact with it remotely and not be forced to physically manipulate it. This would safeguard employees by reducing interaction with a shared physical device that can act as a transmission vector. There are countless other examples where smarter, connected devices can lead to a better and hopefully safer environment.


In a 2019 white paper we cited Gartner’s forecasts of IoT usage for 2019 to be around 14.2 billion devices, with projections reaching 25 billion by 2021. Governments were already actively developing “smart cities” in which new and existing infrastructure is outfitted with IoT devices to increase operational efficiency. Likewise, the consumer market for smart thermostats, cars, speakers, doorbells, and other devices is on the rise as companies offer modernized versions of existing products. In a post-COVID world, we can reasonably assume that demand for and adoption of IoT will increase.


As companies rush to introduce new IoT solutions, they need to take measures to safeguard against the threats posed by those devices. This requires visibility and monitoring, as would exist in traditional IT environments, with analytics focused on detecting the threats that are common in IoT. In particular, the IoT gateway, which serves as the last hop between the IoT device and the edge network leading back to the cloud, is a critical piece of infrastructure. As such, gateways make attractive targets for attackers and need to be protected.


It is precisely at this choke point that the RSA IoT Security Monitor is deployed, collecting data not just from the gateway but, by proxy, from the connected IoT devices as well. Metadata from these gateways is then fed back to a cloud service where machine learning and other behavioral analytics are performed, and visibility is provided to the customer.

Understanding that modern solutions require modern technologies, RSA IoT Security Monitor is a cloud-native, microservice application built on AWS infrastructure, enabling rapid scaling and high availability. Insights can be viewed directly in a ReactJS UI (see Figure 1), or alerts can be consumed by any SIEM tool the customer may already be using, providing a single pane of glass for all incidents across an organization, whether they involve typical IT assets or newer IoT assets.


Figure 1:

RSA IoT Security Monitor - Alerts


With this capability, RSA can continue to be a trusted partner that helps companies build durable solutions to security challenges.  For more information visit

Greetings fellow innovators!


RSA is in the midst of an internal innovation challenge and we are actively seeking feedback from customers and partners. Specifically, we have published concept summaries and would LOVE to hear your thoughts, including what aspects are valuable and ways to improve. 


The published ideas encompass both product and non-product (innovation) concepts.  Please follow this link for a list of all 8 ideas, and then share your reactions to each idea by Voting and/or Commenting on each page. We hope you take a few minutes to provide feedback now through Aug 23, 2019.


** REMINDER ** These are internal RSA concepts ONLY and have not been committed to development or product roadmap.


Thank you!

- RSA Labs

Brian Mullins

Inside Project Sif

Posted by Brian Mullins, Dec 3, 2018

My previous blog post described how combining the concepts of decentralized identity with verifiable claims creates a powerful new model that allows any person, organization, or thing to interact with any other entity with trust and privacy. This post will delve deeper into the inner workings of Project Sif.


Decentralized IDs (DIDs)

A decentralized identity is a digital identity an individual creates, owns, and controls without requiring the involvement of any centralized 3rd party. Decentralized identities are accessible to everyone and designed with privacy in mind. There are no passwords and no centralized repositories of identity data. The idea is that instead of creating a new digital identity for every digital service you want to consume, you can bring your existing IDs with you, similar to how things work in the physical world.


The RSA Labs Identity Wallet mobile app allows you to manage your decentralized identities.  This includes creating a decentralized identity (equivalent to a pseudonym or persona) which is backed by a public/private keypair. The public key is stored in a publicly accessible location, in this case a blockchain, where it can be accessed by anyone; the private key is stored encrypted on your mobile device.
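As a rough sketch of that flow, the following toy code derives a DID-style identifier and publishes the corresponding public key to a registry. This is illustrative only: a random byte string stands in for a real public key (the Python standard library has no asymmetric cryptography), and the `did:example` method name and registry dictionary are hypothetical stand-ins for the blockchain-backed registry described above.

```python
import hashlib
import secrets

def create_did():
    """Create a DID-style identifier backed by a (stand-in) keypair."""
    public_key = secrets.token_bytes(32)   # stand-in for a real public key
    # Derive the identifier from a hash of the initial public key; the
    # registry, not the DID string, holds the *current* key, so the key
    # can later be rotated while the DID stays stable.
    did = "did:example:" + hashlib.sha256(public_key).hexdigest()[:32]
    return did, public_key

registry = {}          # stand-in for the publicly readable blockchain
did, pub = create_did()
registry[did] = pub    # publish the public key so anyone can resolve it
print(did)
```

Anyone can then look up `registry[did]` to obtain the public key and verify signatures made with the matching private key, which never leaves the wallet.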


Verifiable Claims

Verifiable claims are cryptographically signed attestations which can be instantly verified by anyone. They can be issued by governments, banks, or even a friend or family member. Upon reading more about verifiable claims, you’ll undoubtedly stumble across this diagram from W3C:

W3C Verifiable Claims Components


Let’s briefly go through each component to better understand how the model works:

  • Holder – Entity storing and controlling verifiable claims
  • Issuer – Generates claims and sends to Holder
  • Inspector-Verifier – Requests claims from Holder to verify
  • Identifier Registry – Stores a mapping of DIDs with their public attributes (e.g. public keys)


In this model, these components can be provided by disparate vendors. The only trust relationship that exists is between the inspector-verifier and the issuer. This is analogous to how trust works with physical credentials in the real-world. A driver’s license, for example, is issued by a DMV (the issuer) and presented, by the holder, to a liquor store (the inspector-verifier) to prove age.  The liquor store must trust the DMV to only issue valid licenses, which in turn allows it to trust the age claim of the holder of the license.
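That trust relationship can be sketched in a few lines of toy code. Note one big simplification: HMAC with a key shared by issuer and verifier stands in for a real digital signature here, whereas actual verifiable claims use asymmetric signatures, so the inspector-verifier needs only the issuer's public key from the Identifier Registry. The DMV/liquor-store roles and key names are illustrative.

```python
import hmac, hashlib, json

ISSUER_KEY = b"dmv-signing-key"   # hypothetical; stands in for the DMV's key

def issue_claim(subject_did, claim):
    """Issuer (DMV) signs a claim and hands it to the Holder."""
    payload = json.dumps({"sub": subject_did, **claim}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_claim(credential):
    """Inspector-Verifier (liquor store) checks the issuer's signature.
    It never contacts the DMV -- trusting the key is enough."""
    expected = hmac.new(ISSUER_KEY, credential["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])

cred = issue_claim("did:example:alice", {"age_over_21": True})
assert verify_claim(cred)                 # genuine claim verifies
cred["payload"] = cred["payload"].replace("true", "false")
assert not verify_claim(cred)             # tampering is detected
```

The point the toy captures: verification is a local cryptographic check against the issuer's key, so the holder can present the claim to anyone, instantly, without the issuer being online.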


The RSA Labs Identity Wallet mobile app allows you to store and manage your verifiable claims given to you by issuers. Claims can be imported onto your mobile device and later presented to any inspector-verifier that requests them. Here, the mobile app is fulfilling the role of the holder.


Project Sif Architecture

Putting these pieces together, the Project Sif architecture takes shape:


 Project Sif Architecture Diagram



  • The Android Identity Wallet app is the Holder
  • A Demo Issuer and Inspector-Verifier were introduced for testing
  • The Blockchain is the Identifier Registry and maps DIDs to their public attributes, in this case the DID public keys


Note that in this solution the DID is not the public key. This was done to be able to support the use case of a user revoking a public key and associating a new one with an existing DID.


Data Flow

It’s always helpful to understand a system by seeing how data flows through it. Here’s a sample sequence diagram of a user registering for a new website that requires a verified age claim:


Project Sif: Website registration data flow


Why Blockchain?

It’s important to note that the Identifier Registry as defined by W3C makes no mention of blockchain or any other underlying data storage technology. When considering the combination of decentralized identity with verifiable claims, the Identifier Registry should ideally have the following attributes:


  • Public – To adhere to the philosophy of decentralized identity being available to anyone
  • Decentralized – To prevent a single point of failure/attack
  • Immutable – To provide strong assurances about data integrity
  • Auditable


RSA or any other organization could stand up its own Identifier Registry server, but it would represent a centralized component in a decentralized solution. The solution is more resilient when every component is decentralized. When considering these ideal attributes, a public blockchain checks most of the boxes. The biggest limitation imposed by a blockchain is the throughput, an area of active research by many groups. Other distributed ledger solutions could also fill the role of the Identifier Registry if properly configured – a blockchain is not the only solution.



Project Sif demonstrates how the concepts of decentralized identity and verifiable claims can be combined to create a new model for identity management; a model that brings advantages in both security and usability. As digital services move to a decentralized model, decentralized identity solutions will be required. If you have a use case where decentralized identity and verifiable claims could be helpful, or want to learn more about Project Sif, please reach out. We’d love to hear from you!

Today Dell Technologies joined with the San Diego Supercomputer Center, industry companies, and academic partners to launch a new blockchain research lab: BlockLAB. The BlockLAB will focus on business use cases for distributed ledgers and evaluation of technology stacks. One area where blockchains can provide real value is in enabling decentralized identities, an area we have been researching at RSA Labs as part of Project Sif.  Project Sif explores how we can move from the familiar world of centralized identity to a more distributed and decentralized model.


Decentralized identity is a fundamentally different view on identity management as compared to the centralized model that predominantly exists today. Centralized identity has several shortcomings. Users today create new user credentials for nearly every service they want to consume. This leads to users having to maintain too many usernames and passwords (not to mention the security and usability problems surrounding passwords). Making matters worse, users are not in control of their data. Should Google or Apple cease to exist then so would everyone’s online identity that’s tied to them. Companies holding identity data also represent very rich targets for hackers. In short, the problem is that the web as we know it today wasn’t built with an identity layer.


A decentralized identity is a digital identity an individual creates, owns, and controls without requiring the involvement of any centralized 3rd party. Decentralized identities are accessible to everyone and designed with privacy in mind. There are no passwords and no centralized repositories of identity data. The benefits of this approach differ depending on the end-user.


For consumers, decentralized identities allow:

  • Ownership and control of data
  • Securely sharing data
  • Accessing online services without passwords
  • Digitally signing claims, transactions, documents


For enterprises, decentralized identities allow:

  • Easy on-boarding of employees, partners, and customers
  • Reduced liability from not holding sensitive customer data
  • Increased compliance (KYC, HIPAA, GDPR)


Through Project Sif, RSA Labs is prototyping an Identity Wallet mobile app to allow you to manage your decentralized identities. This includes creating a new decentralized identity backed by a public/private keypair. The public key is stored in a public blockchain where it can be accessed and verified by anyone. The Identity Wallet app also helps you store and manage verifiable claims. These are cryptographically signed attestations which can be instantly verified by anyone (similar to government-issued IDs or legal documents). They can be issued by governments, banks, or even a friend or family member. Combining the concepts of decentralized identity with verifiable claims creates a powerful new model that allows any person, organization, or thing to interact with any other entity with trust and privacy.


Please stay tuned to the RSA Labs blog for the latest on Project Sif. Let us know if you have any questions or feedback through the comments below. We’d love to know what you think! Check out the following links for additional information on the BlockLAB announcement and Dell Technologies support for the lab.

Today we are announcing support for Azure IoT Edge, which is Microsoft's solution for edge computing suitable for IoT gateways. Project Iris now brings visibility and threat detection to the Azure IoT Edge platform and connected edge devices managed by it.


Azure IoT Edge Architecture

Azure IoT Edge extends Microsoft's cloud-based Azure IoT Hub architecture to the edge. 



Azure IoT Hub provides a bidirectional communication channel between devices and the cloud, enabling users to perform tasks such as configuration, data collection, and command execution from the cloud. With just Azure IoT Hub in the picture (and prior to Azure IoT Edge), IoT devices would be required to implement the Azure IoT SDK to directly communicate with Azure IoT Hub in the cloud. The supported protocols between an IoT device and Azure IoT Hub are MQTT, AMQP, and HTTPS.


Azure IoT Edge opens up the picture, allowing IoT devices not using the Azure IoT SDK to be brought into the fold. These devices make up the vast number of existing IoT devices out there, and they use an alphabet soup of IoT protocols such as modbus, BACnet, and OPC-UA. Azure IoT Edge proxies communication between these devices and the cloud. This model is especially helpful from a security perspective. 


Going into more depth, this post describes three different patterns for how Azure IoT Edge can be used at the gateway:

  • Transparent gateway: Device identities are managed by the Azure IoT Hub, and devices integrate with the Azure IoT SDK. Devices use Azure IoT Edge as a proxy to Azure IoT Hub in the cloud. No protocol translation is needed since devices are already using the Azure IoT SDK.
  • Identity translation: Device identities are managed by the Azure IoT Hub, but devices don't integrate with the Azure SDK. Azure IoT Edge (or other software like EdgeX) performs protocol translation and communicates on behalf of these devices with the cloud.
  • Protocol translation: Azure IoT Hub doesn't know anything about device identities (they are managed elsewhere). Azure IoT Edge performs protocol translation and exchanges device data with the cloud. Other software on the cloud side needs to make sense of the device data to do something meaningful with it.


In addition to protocol translation, Azure IoT Edge allows for general purpose computing at the edge. For instance, running analytics at the edge can save on overall IoT solution costs, compared to shipping all the data to the cloud for processing.


Project Iris and Azure IoT Edge

To use Project Iris to monitor Azure IoT Edge, deploy the Project Iris Docker container side by side with Azure IoT Edge running on the same IoT gateway host.



Azure IoT Edge uses modules to achieve a general purpose edge computing framework. Modules are simply Docker containers. There are two special modules provided by Microsoft, edgeAgent and edgeHub. The edgeAgent module uses the Docker service to manage other modules, and the edgeHub module handles communication between other modules and the cloud. Other modules, such as the Microsoft-provided modbus module, can perform protocol translation, edge analytics, or other activities.


The Project Iris container passively monitors all Azure IoT Edge modules and their communication with other edge devices, and passes up data to the Project Iris cloud service. Based on the data gathered, the Project Iris cloud service dynamically builds out profiles of expected behavior for Azure IoT Edge modules and edge devices tailored to your deployment. Alerts are triggered when significant deviations or anomalies from expected behavior are detected.
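A drastically simplified sketch of that idea: learn a baseline of (module, destination) pairs during a training window, then flag connections that fall outside it. The real Project Iris service builds much richer behavioral profiles; the module and host names below are illustrative.

```python
def build_baseline(observed_connections):
    """Training phase: record every (module, destination) pair seen
    during a window assumed to contain only normal behavior."""
    return set(observed_connections)

def detect_anomalies(baseline, new_connections):
    """Detection phase: return connections not present in the baseline."""
    return [conn for conn in new_connections if conn not in baseline]

baseline = build_baseline([
    ("edgeHub", "azure-iot-hub.example.net"),
    ("modbus", "plc-01.local"),
])
alerts = detect_anomalies(baseline, [
    ("edgeHub", "azure-iot-hub.example.net"),  # expected, no alert
    ("edgeHub", "plc-01.local"),               # edgeHub talking to a PLC!
])
print(alerts)   # [('edgeHub', 'plc-01.local')]
```

Because each module is a container with a narrow, well-defined job, its "normal" set of destinations is small and stable, which is what makes deviations like the one above meaningful.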


Project Iris Runtime Arguments

The Project Iris container should be deployed with the following environment variables as container arguments:

  • AZURE_IOT_HUB_CONNECTION_STRING: This is used by the Iris container to gather metadata about Azure-managed edge devices and Azure IoT Edge deployments.
  • AZURE_EVENT_HUB_CONNECTION_STRING: (optional) Azure IoT hub can be configured to stream diagnostics to an Azure Event Hub. Diagnostics can include useful security related events such as unauthorized device access. Set this environment variable to let Project Iris capture and surface these events.


Device Inference

Device identities can be managed in Azure or elsewhere. Project Iris is intelligent about surfacing these identities, depending on the type of architectural pattern under which Azure IoT Edge is deployed at the gateway (see above).


In the "Transparent gateway" pattern, device identities are fully managed in Azure, and Project Iris gets all device related metadata from Azure. This metadata includes arbitrary tags and configuration properties that can be set in the cloud.


In the "Identity translation" pattern, device identities are managed in Azure and in another piece of software such as EdgeX. Project Iris gathers identity data from both Azure and EdgeX and merges the data together, creating a unified view of identities across both sources.


In the "Protocol translation" pattern, identities are managed outside of Azure. However, Project Iris can infer device identities by inspecting Azure IoT Edge module configuration. For instance, the modbus module contains configuration describing how that module connects to downstream modbus slaves. Project Iris manufactures device identities based on this configuration.
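A sketch of that inference step, using a hypothetical configuration shape (the field names below are illustrative, not the modbus module's actual schema): one device identity is manufactured per configured downstream slave.

```python
def infer_devices(modbus_config):
    """Derive device identities from a modbus module's configuration.
    Assumes a config listing downstream slaves by address and unit ID."""
    devices = []
    for slave in modbus_config.get("SlaveConfigs", []):
        devices.append({
            "id": f"modbus-{slave['SlaveConnection']}-{slave['UnitId']}",
            "protocol": "modbus",
            "address": slave["SlaveConnection"],
        })
    return devices

config = {"SlaveConfigs": [
    {"SlaveConnection": "192.168.1.50", "UnitId": 1},
    {"SlaveConnection": "192.168.1.51", "UnitId": 1},
]}
print([d["id"] for d in infer_devices(config)])
# -> ['modbus-192.168.1.50-1', 'modbus-192.168.1.51-1']
```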


In the future, as Project Iris continues to support more IoT gateway platforms, it will continue to merge device data from disparate stores in an intelligent way to surface a meaningful set of identities.


Threat Detection

Project Iris raises alerts when Azure IoT Edge modules running on the gateway or connected edge devices exhibit behavior that deviates significantly from an established norm. Because the Azure IoT Edge runtime packages modules as containers, precise behavioral models can be built to describe them. The types of alerts covered by Project Iris include initial infection, lateral movement, command and control, data exfiltration, and denial of service. These alerts are described in more detail in this blog post.


Below are some hypothetical alerts focused on the edgeHub module. This module has perhaps the largest attack surface, as it exposes several ports for access outside the gateway host.


This alert shows the edgeHub module making an unexpected outbound network connection, for instance during an initial infection to download an exploit payload, or to reach out to a command and control host.



Suppose malicious code injected into the edgeHub module attempts to move laterally by probing the network. Project Iris can pick this up - in the example below the edgeHub module is shown reaching out to a modbus device. This is unusual as the edgeHub module by design doesn't directly communicate with any IoT devices.
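The kind of deviation described above can be sketched as a simple baseline model. This is an illustration of the idea only, not Project Iris's actual analytics: learn which peers a module normally communicates with, then flag connections to anything outside that set.

```python
# Illustrative baseline-deviation alerting (not Project Iris's real model):
# during training, record each module's normal peers; afterwards, flag any
# connection to a peer outside the learned set.
from collections import defaultdict

class ConnectionBaseline:
    def __init__(self):
        self.expected = defaultdict(set)  # module -> peers seen in training
        self.training = True

    def observe(self, module, peer):
        alerts = []
        if self.training:
            self.expected[module].add(peer)
        elif peer not in self.expected[module]:
            alerts.append(f"{module}: unexpected connection to {peer}")
        return alerts

baseline = ConnectionBaseline()
# Training: edgeHub normally talks to the IoT Hub and local modules.
baseline.observe("edgeHub", "azure-iot-hub")
baseline.observe("edgeHub", "modbus-module")
baseline.training = False

# edgeHub reaching directly for a modbus device is out of profile.
alerts = baseline.observe("edgeHub", "192.168.0.10:502")
```

The known peer still passes silently; only the out-of-profile connection produces an alert.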



Now suppose the edgeHub module unexpectedly crashed or was unexpectedly killed:



If configured to integrate with Azure Event Hub, Project Iris can pull in diagnostic events raised by the Azure IoT Hub. Project Iris filters these events to surface those that are security-relevant. For instance, below is an example of unauthorized access by a device reporting to be a thermostat.



All alerts include applicable device details gathered from Azure IoT Hub. For instance, the sample below shows configuration details and tags for the aforementioned thermostat:




Whether you're using Azure IoT Edge or other technologies at the edge, we want to hear from you! If you want to learn more about Project Iris, visit the Project Iris web site and click Notify Me. Fill out the contact form and we'll be in touch!

By design containers are meant to be disposable. They are meant to be shipped around to different environments and brought up and down at will. For instance, a container orchestration technology like Kubernetes can automatically bring up new containers in response to a spike in demand, and then tear down the same containers when the demand subsides. Or, as part of the continuous delivery life cycle, the same container image running on a developer's laptop can be spun up in a test environment for verification and then deployed in production by an operations team.


AI-based security solutions like Project Syn use training data to learn what's normal and flag abnormal activity. But the impermanence of containers poses an interesting problem: how can a machine learning system gain the right amount of insight about individual containers in order to raise meaningful alerts? An overly sensitive system that raises alerts before it has enough data to extrapolate from will generate noise and false positives. But an overly conservative system that waits too long for enough data to become available will fail to raise important alerts and result in false negatives.


Container Profiles


Project Syn addresses this problem with the concept of container profiles. In a nutshell, container profiles allow for behavior learned about one container to be shared with similar containers run later in time. This means that for many containers, the training stage can be bypassed altogether, and alerts can be generated immediately after deployment.


Let's take the example of continuous delivery, shown in the figure below. Suppose a new container image is in the process of being deployed to production. First a container from this new image is deployed in a staging environment. After a set training period, Project Syn creates a profile for this container, which captures the behavior learned about this container.



At a later point in time, after the container image has passed the requisite checks in staging, a new container (or set of containers) is deployed in production from the same image. Since Project Syn already has a profile from a previous container coming from the same image, it applies that profile to the new container. The new container bypasses training, and if it happens to be compromised shortly after deployment, Project Syn can immediately raise alerts to that effect.
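A minimal sketch of this lifecycle is below. The mechanics are assumed for illustration and are not Syn's actual implementation:

```python
# Sketch of profile reuse (assumed mechanics, not Syn's implementation):
# a profile learned in staging is keyed by the container's match key and
# applied to later containers so they can skip training.
profiles = {}  # match key -> learned behavior profile

def match_key(container):
    # Simplified key: image name plus sorted command-line arguments.
    return (container["image"], tuple(sorted(container["args"])))

def deploy(container):
    """Return the container's profile, training a new one if none matches."""
    key = match_key(container)
    if key in profiles:
        # A matching profile exists: apply it and skip training entirely.
        return {"profile": profiles[key], "training": False}
    profiles[key] = f"profile-for-{container['image']}"
    return {"profile": profiles[key], "training": True}

staging = deploy({"image": "myapp:2.1", "args": ["--serve"]})     # trains
production = deploy({"image": "myapp:2.1", "args": ["--serve"]})  # reuses
```

The staging container pays the training cost once; the production container inherits the profile and is monitored from the moment it starts.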


Profile Matching


How does Project Syn determine when a profile can be applied to a container? It's not based simply on the container's image. Containers run from the same image can exhibit very different behavior based on how they are run. Project Syn uses containers' runtime metadata, such as command line arguments and ports, as part of profile matching.


For example, let's compare three nginx web server containers that are run from the same nginx image. Container A runs only with a private port and is only accessible on the same local virtual network as the container. Container B exposes its private port on port 80 and is accessible from outside the container host (assuming the host firewall is open). Container C exposes its private port on port 8080 and is also accessible outside the container host.



In this case, there are two unique profiles: one for container A, and one shared by containers B and C. The difference in public port between containers B and C (80 vs 8080) doesn't represent a meaningful difference in behavior.
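One way to capture this is a match key that records whether a private port is published externally, but not which host port it is published on. This is a guess at the shape of the logic, not Syn's actual algorithm:

```python
# Illustrative match key: image, private port, and a boolean for whether
# the port is published outside the host. The host-side port number itself
# is deliberately excluded, since it doesn't change container behavior.
def profile_key(image, private_port, public_port=None):
    return (image, private_port, public_port is not None)

key_a = profile_key("nginx", 80)        # container A: private port only
key_b = profile_key("nginx", 80, 80)    # container B: published on 80
key_c = profile_key("nginx", 80, 8080)  # container C: published on 8080
```

Containers B and C share a key, and hence a profile, while container A gets its own.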


Profile Matching Using Labels


If you want to, you can explicitly control how profile matching works using Docker object labels. Labels are custom metadata in the form of key-value pairs that can be attached to containers.


Here's how it works: first, you tell Project Syn which label keys you want Project Syn to use for profile matching. When you run a container, you run it with those same labels, and set the label values appropriately. Containers with the same label key-value pairs are matched to the same profile.
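Sketched in code, under the assumption that matching reduces to comparing the values of the configured label keys (the configuration surface shown here is invented for illustration):

```python
# Label keys the user has told Syn to use for profile matching
# (hypothetical configuration, for illustration only).
MATCH_LABEL_KEYS = ["app", "tier"]

def label_match_key(labels):
    """Containers with equal values for the configured keys share a profile."""
    return tuple((k, labels.get(k)) for k in MATCH_LABEL_KEYS)

web_v1 = {"app": "shop", "tier": "frontend", "build": "1021"}
web_v2 = {"app": "shop", "tier": "frontend", "build": "1022"}
db     = {"app": "shop", "tier": "db"}
```

The two web containers match despite differing build labels, because build isn't a configured match key; the db container gets its own profile.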


What's Next


Container profiles today only work within the context of a single customer. It's not hard to see a future in which customers can opt in to share profiles with, and use profiles from, RSA and other customers. This would enable the community to collectively improve container security for everyone.


Stay tuned for more updates!

It’s an essential question for security teams following a cyber attack: Where did the threat originate? In the days and weeks following the WannaCry ransomware attack—which swept through 150 countries, infecting hundreds of thousands of computers—reports emerged pointing to various potential actors. But none of the insights came soon enough to help defend against the attack. Unfortunately, the type of analysis used to derive them just doesn’t work that fast. The good news is there are other approaches that do.


Dynamic analysis of WannaCry and its possible origins required hours of manual code inspection. As a result, the first clues took several days to emerge, and further insights took weeks. The problem is the process entails manually comparing thousands of code segments from dozens of known malicious actors. As the volume of new malware threats grows (the AV-TEST Institute reports registering over 390,000 new malicious programs daily), that problem is only going to get worse. Dynamic analysis simply can’t scale to compare code quickly enough to identify the origins of a new piece of malware in a timely way.


Dynamic analysis can help determine the runtime effects of a piece of malware, but with tools for sandbox detection and evasion becoming increasingly common, its value is limited. Besides, knowing what a piece of malware does won’t help with file similarity analysis, as there may be dozens of ways to achieve that result. Comparing file hashes has never really been useful, either, since attackers routinely leverage code polymorphism to ensure each piece of malware has a unique hash. What about fuzzy hashing as a tool for file similarity analysis? It’s increasingly being used to measure how similar two binaries are. The challenge is fuzzy hashing tools like ssdeep are applied to the entire file and can’t catch similarities more complex than one file being related to another.


But what if fuzzy hashing could be applied to pick up code similarity at a more granular level? That thinking has led RSA to a new static analysis technique for detecting complex similarities and, moreover, identifying similarities from multiple pieces of malware. Through this approach, we can create a malware genome, if you will, that provides an understanding of how malware evolved, even when it’s an amalgamation of multiple malicious tools. Beyond mapping out code capabilities, this genealogy may shine some light on the malicious infrastructure and exchange of tools happening on the attacker side.
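As a toy illustration of segment-level similarity (a drastic simplification, not RSA's patented technique): hash overlapping byte windows of each file and compare files by the window hashes they share, so a sample stitched together from several tools still scores against each parent.

```python
import hashlib

def window_hashes(data, size=8):
    """Hash every overlapping byte window of the given size."""
    return {hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(len(data) - size + 1)}

def similarity(a, b):
    """Jaccard similarity over window hashes: 0.0 (disjoint) to 1.0 (equal)."""
    ha, hb = window_hashes(a), window_hashes(b)
    return len(ha & hb) / len(ha | hb) if ha | hb else 0.0

# Stand-ins for code from two distinct malicious tools.
tool_a = b"push ebp; mov ebp, esp; call decrypt_strings; ret;"
tool_b = b"xor eax, eax; call beacon_c2; inc eax; ret; nop;"
hybrid = tool_a + tool_b  # an amalgamation of both tools
```

Whole-file hashes would relate the hybrid to neither tool, but window-level comparison relates it to both parents while keeping the parents themselves dissimilar.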


As a service to others engaged in threat investigation, we’re freely sharing the tool we’ve been using to explore this approach. Our hope is WhatsThisFile will help defenders evaluate unknown files faster, discover similarities to known malware and quickly gain the insights needed to better defend their enterprises.

IoT gateways are critical pieces of enterprise infrastructure that facilitate secure communication between IoT edge devices and the cloud. As IoT gateways serve as single points of control for all edge devices, they can make an attractive target for attackers, and protecting them is paramount.


RSA Project Iris provides security monitoring and visibility at the IoT edge. This post walks through several examples of how Project Iris can monitor IoT gateways, using the open source EdgeX Foundry platform as a motivating example.


EdgeX Foundry with Project Iris

The EdgeX Foundry platform for IoT gateways consists of many microservices that are deployed as Docker containers. Almost all of these microservices expose web APIs, some for internal consumption within the gateway and others for external use. MongoDB is used for storing data such as IoT device metadata, logs, and sensor readings from connected IoT edge devices.


Setting up Project Iris on an EdgeX Foundry gateway involves simply deploying the Project Iris Docker container on the gateway. The Project Iris container passively collects data about local EdgeX microservices and securely sends the data to the Project Iris cloud service. The Project Iris cloud service analyzes the data and uses anomaly detection techniques and threat intelligence to identify suspicious activities and raise security alerts.


Threat Detection in Action

So what can Project Iris do? Below are examples of interesting security events that Project Iris can detect.


Initial Infection and Command and Control

A compromised host or microservice container will often execute a malicious payload and initiate suspicious network connections to risky sites, from which further payloads may be downloaded or "command and control" instructions received.


Project Iris can show when these suspicious payloads are executed or suspicious network connections are made. Below are example alerts for a compromised edgex-device-bacnet service, which is responsible for managing communications to IoT devices that support the BACnet protocol. The first alert shows an anomalous Python process that runs code to connect to an external site, download a payload, and execute it. The second alert is raised for the network connection made to a known high-risk IP address based in Germany.



Lateral Movement

Malicious payloads may probe the network for other endpoints to compromise. This is especially concerning for IoT gateways, which sit on many local edge networks and have privileged access to edge devices.


Project Iris can detect when a microservice container initiates these suspicious probes. The example alerts below show the compromised edgex-support-logging container probing another IoT device, a KMC thermostat, and also trying to connect to another microservice, edgex-device-snmp, on the same host. An alert is also raised for the execution of the ping command used for probing. Project Iris understands that these activities are not typical for the edgex-support-logging microservice and flags them.



Data Exfiltration

Data exfiltration is often the end goal of a compromise. IoT gateways often contain a wealth of sensitive information about edge devices including raw device data, device metadata, and credentials and keys for secure access to edge devices. On the EdgeX Foundry platform, this information is housed within MongoDB.


In the current pre-release version of EdgeX Foundry, MongoDB is set up with remote access enabled and well-known default usernames and passwords. As an example of data exfiltration, we can dump the contents of the MongoDB database remotely using the mongodump tool:



This type of activity would cause Project Iris to raise several types of alerts, as shown below. The first alert is raised for a remote network connection to MongoDB. This connection was flagged as unusual because the database is normally meant only for local use on the gateway itself. The second alert is triggered by an unusually large data transfer out of MongoDB.



Denial of Service

IoT gateways are especially susceptible to denial of service attacks because of the large number of edge devices they manage. Compromised edge devices could launch denial of service attacks at the gateway or through the gateway to other hosts.


As an example, we used a compromised network signal tower device to initiate a large volume of network connections to the gateway. Project Iris can detect this type of activity, as shown in the first alert below:



A denial of service attack can subsequently lead to one or more microservice containers crashing in an unexpected way. Project Iris can also detect this, as shown in the second alert above.
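Conceptually, this kind of volume detection can be as simple as a sliding-window rate check. The sketch below, with a made-up threshold, illustrates the idea rather than Project Iris's actual model:

```python
# Toy rate-based detector (assumed logic, not Project Iris's model): count
# connections from one device in a sliding window and alert when the count
# exceeds a learned or configured threshold.
from collections import deque

class RateDetector:
    def __init__(self, window_secs=10, threshold=100):
        self.window = window_secs
        self.threshold = threshold
        self.events = deque()  # timestamps of connections from one device

    def connection(self, ts):
        """Record a connection at time ts; return True if it trips an alert."""
        self.events.append(ts)
        # Drop timestamps that have aged out of the window.
        while self.events and self.events[0] <= ts - self.window:
            self.events.popleft()
        return len(self.events) > self.threshold

det = RateDetector(window_secs=10, threshold=100)
# 150 connections in 1.5 seconds from a compromised signal tower device:
alerts = [det.connection(0.01 * i) for i in range(150)]
```

The detector stays quiet for the first hundred connections, then fires for every connection beyond the threshold.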



The goal of Project Iris is to bring security monitoring and threat detection capabilities to the IoT edge. In this post we walked through how Project Iris can be used to secure IoT gateways, which are critical enterprise assets responsible for managing edge devices. In a subsequent post, we'll talk about what Project Iris can do to bring similar visibility down to the edge devices themselves.


If you're interested in trying out Project Iris, register here and the RSA Labs team will notify you when it's available.

Web applications and web services are probably the most commonly produced type of software, and they are increasingly being developed and deployed as containers. Among the top downloaded container images on the public Docker Hub are many related to web application development, such as nginx, MySQL, PostgreSQL, the Apache HTTP server, Ruby, PHP, Tomcat, and Django.


This post walks through an example scenario of detecting a web application attack using Project Syn. The scenario is admittedly simple and contrived, but we believe it's illustrative of how Syn can help in the real world.



In our scenario we use the Damn Vulnerable Web Application (DVWA) as the web app to be exploited. The application is intentionally riddled with vulnerabilities and is often used in security pen-test training. We deploy the entire web app, based on the LAMP stack, in a single Docker container. (Typically a web application would be deployed as many containers but a single container is sufficient for our purposes.)


We also deploy the Project Syn container side-by-side with the DVWA container on the same Docker host. The Project Syn container collects security-related data about other containers on the same host (in this case the DVWA container) and forwards it to the Syn cloud service for analysis and alerting.


Here's the output of docker ps:


Exploit, Payload Delivery and Execution

Among the many vulnerabilities in the DVWA is one that permits the upload and execution of malicious code disguised as image files.



We use the OWASP Zed Attack Proxy to exploit the vulnerability to install a malicious PHP file, bad.php, and execute it. The PHP file contains a small bit of code that when executed launches a Python process that connects to an external IP hosting a Remote Admin Tool (RAT). The Python process downloads a full payload from the external IP and executes it, giving the operator of the RAT full control over the container.



The Syn service raises alerts when it detects the launching of the malicious Python process:




In addition, the Syn service raises alerts when it detects network traffic to the malicious external IP on ports 8080 and 443:

Data Exfiltration

Once the payload is installed, the operator of the RAT has full control over the container and can do any number of things. In our case, we are using the open source Pupy RAT tool. We start an interactive shell on the container,  dump the MySQL database to a file, and download it.



The Syn service detects the anomalous execution of the mysqldump process:


The Syn service also detects the data exfiltration through the producer-consumer ratio (PCR) metric.


The PCR metric tracks a normalized ratio of network bytes in and out of a component. Producers (PCR value between 0 and 1) have more data flowing out than in, while consumers (PCR value between -1 and 0) have more data flowing in than out. Components tend to have pretty stable PCR values over time.
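The PCR computation itself is simple. In this sketch, the byte counts are made up to illustrate a moderate producer versus a strong one:

```python
def pcr(bytes_out, bytes_in):
    """Producer-consumer ratio: +1 is a pure producer, -1 a pure consumer."""
    total = bytes_out + bytes_in
    return (bytes_out - bytes_in) / total if total else 0.0

moderate_producer = pcr(bytes_out=8_000, bytes_in=1_920)   # ~0.613
strong_producer = pcr(bytes_out=98_000, bytes_in=1_040)    # ~0.979
```

Because the ratio is normalized, a jump like this stands out regardless of the component's absolute traffic volume.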


In our scenario, the Syn service detected a significant change in the DVWA container's PCR. It changed from being a moderate producer (0.613) to being a strong producer (0.979) at the moment the database dump was downloaded.




The beauty of containers is that they are designed to be limited in function and behavior. As such, from a security perspective, we believe we can precisely model what the expected/normal behavior for any container should be, and raise targeted alerts when anomalies arise. We walked through a simplified scenario above of using Project Syn to detect the exploitation of a containerized web application.


If you're interested in giving Project Syn a spin, check out the Getting Started video. Feedback is welcome!



Posted by Brian Girardi, Sep 7, 2017

Dynamic analysis. Sandboxing. Is that all we got? Am I right?!?


Fact: sandboxing is a necessity to understand malware behavior. It's the de facto standard for our industry. However, for the average enterprise security team, it feels overwhelming to operationalize consistently. And for security vendors trying to keep up with the millions of samples that emerge daily, the infrastructure and expense needed to support and scale it long-term may have no ceiling. Thousands of virtual hosts running for several minutes each, not to mention deception techniques, dynamic IoCs, etc., etc., etc. The long-term math to keep ahead of the malware problem just doesn't seem to add up.


The concept for What's this file? was born from that perspective. Can we accurately detect and classify known and unknown malware without ever executing it? It seemed like a worthy challenge for RSA Labs.


RSA Labs developed novel techniques to identify and classify malware, and we packaged them into a cloud service that operates like your typical multi-scanner, but it's FAR from typical in approach. In addition, we bundled in a lightweight static-analysis UX to round out what we believe is a useful tool for security analysts.


What makes What's this File? different from other multi-scanner type services is:

  1. WTF does not execute the samples you submit.
  2. WTF does not use AV engines for analysis.
  3. WTF uses patented Attack Vector Inspection to identify malware droppers.
  4. WTF uses patented Malware Genealogy to identify malware from its descendants.
  5. WTF gives the analyst the ability to inspect hundreds of extracted file characteristics.


We would greatly appreciate your feedback on its effectiveness; we think it is pretty cool! If you can fool the service, let us know. And WTF is free to use!



Brian Girardi
VP, RSA Labs

Deployment of microservices and applications alike is changing rapidly, moving towards container-based environments. As this paradigm shift happens, the IT security paradigm must also shift, much as it did with the advent of VMs. RSA Labs created Project Syn as a test bed for enabling visibility and threat detection in Docker container environments. We believe that container-based technologies will become a widely adopted way for IT, DevOps, and developers alike to create, manage, and distribute new technologies. And with every new technological advancement comes inherent security risk.


Project Syn can help! If you're a NetWitness for Logs customer, great: we can feed alert data directly into NetWitness. If not, that's cool too! Our online dashboard will allow you to monitor the health of your Docker hosts, monitor alerts, and drill down into pertinent metadata to help gain visibility into the threats your environments are facing. Advanced behavioral analytics techniques are being developed by our data science group to ensure the alerts are fine-tuned to the latest threats. We also leverage RSA Live Connect for current known-malicious website blacklist data.


Project Syn works hard to protect your Docker environments, but as always, there's room for improvement! Feedback is encouraged! We're always looking for ways to improve our value to our customers! Best of all, Project Syn is free of charge! All we ask is that you install our lightweight container in your Docker environment and we'll do the rest!


Interested?  Please visit for more information and to request access!



RSA Labs team!