Many of our customers run the RSA NetWitness Platform on RSA's physical appliances, but the entire stack runs just fine in AWS, Azure, VMware, or Hyper-V. You can even mix and match physical and virtual hosts however you prefer. Our Virtual Host Installation Guide does a great job of outlining the steps for building a virtual RSA NetWitness Platform host.
However, there is frequently a need to build smaller hosts to gather data in smaller remote locations. Issues that don't apply to larger hosts can cause RSA NetWitness Platform folders to overrun their allotments and cause NetWitness to stop capture or aggregation. This post focuses primarily on the settings that matter when building smaller virtual hosts. It also includes some tricks for monitoring your NetWitness hosts to make sure they don't reach unhealthy levels of storage. Of course, many of these tips apply to virtual hosts of all sizes, so hopefully you can benefit regardless of your particular virtual implementation.
To ISO or Not to ISO
RSA provides both an ISO and an OVA (and a Hyper-V VHD) for building your virtual hosts. Which should you use? If you are building a full RSA NetWitness Platform implementation virtually, you will have to use the ISO to build your Admin Server, because the OVA does not come with all of the required RPMs. As for the other hosts, using the OVA isn't a bad idea. The OVA is a much smaller file to deal with (~450MB OVA vs. ~6GB ISO), and it has already completed the bootstrap, which is one of the longest steps of the installation. However, the OVA has already provisioned the logical volumes for a 195GB host. That is the recommended size for the OS drive, but if you want to allocate more than that, the ISO is the easiest option - and I say that as someone who rather enjoys partitioning Linux file systems! If you want to assign less than 195GB, I recommend thin provisioning your host's OS drive rather than installing with less storage than RSA recommends.
Keep in mind that your log, network, and endpoint data stores will be separate from this. The OS drive is strictly for holding OS files, NetWitness internal service log entries, temporary data, and some other miscellaneous data. You will add disks to accommodate storing your log, network, and/or endpoint data in a later step.
Installing the ISO is extremely simple: create your virtual host, give it the CPU, RAM, and HDD storage as recommended in the installation guide or by your RSA engineer (different requirements for different services and different levels of throughput), attach the ISO, and turn on the VM. It will boot to the blue installation screen where you will hit <Enter>. Once you get to the following screen...
...make sure you enter "y" or "Y" and hit <Enter>. Once the bootstrap is complete, the system will reboot to the login prompt. After logging in, you will run "nwsetup-tui" and you can refer to the installation guide for instructions on how to properly orchestrate a host from there.
VM Host Sizing
In the previous step, you installed the bootstrapped host via the ISO or the OVA and possibly orchestrated the services as well. In the case of any host that will retain data - Decoders (network / log), Hybrids (endpoint / network / log), Concentrators, or Archivers - you will need to also provision storage for that data. Sizing that can be difficult, but I have a calculator that can help size most of those appropriately.
...except Archivers. Why not Archivers? Archivers are generally employed for regulatory purposes. You should engage your RSA Engineer to make sure you size them appropriately so that you don't run into issues with auditors. You might be logging especially large log sources, while the calculator only uses a static 600 bytes per message. You can also retain more or fewer meta keys, which can drastically affect how much storage to assign. And after all, while the "[Small]" in the title of this post was in hard brackets, this guide is generally geared towards smaller deployments / hosts. The sole reason to use an Archiver is that your storage needs have grown significantly beyond any definition of the word "small".
To use the calculator, there are a number of things to understand:
- The calculator is used to calculate Hybrid storage, because most "small" environments will use Hybrids rather than discrete Decoder and Concentrator pairs. If you are using separate Decoders and Concentrators, you can simply break up the calculated storage per service and split up the provisioning commands. NOTE: There is no such thing as a "discrete Endpoint Decoder". Endpoint servers only come as Hybrids, whether virtual or physical.
- When you enter information to size up your storage, the bottom of the calculator will give you provisioning commands to set up your hosts. If you have any Hosts entered in rows 6 or 7, you'll get commands to provision storage for an Endpoint Log Hybrid. If you don't have any Hosts, but you have Log Events >0 GB/day, you will get commands to provision storage for a Log Hybrid. If you have Log Events at 0 GB/day and 0 Hosts, but your network traffic is >0 GB/day, you will get commands to provision storage for a Network Hybrid.
- If you are sizing an Endpoint Log Hybrid, keep in mind that you cannot currently download modules automatically, download memory dumps, or download Master File Tables from hosts. Those features, which were in ECAT 4.x, will be back in the product as of 11.4, and I've included commands to provision storage for them. However, the amount of storage you provision for those purposes is entirely up to you, so you will need to type the numbers into those cells yourself. They can be relatively small (10 - 30GB) if you don't plan to auto-download new, unsigned modules. However, once the feature is back, we highly recommend that you have NetWitness Endpoint automatically download any unsigned, unknown modules smaller than 5MB - 10MB, and estimate storage for your environment appropriately.
- Once storage is provisioned for each of the given volumes, the last provisioning command gives 100% of the remaining space to the MetaDB on the Concentrator. That is done on purpose: if there is any extra space left over, that is where I want it. However, you must also make sure (likely with df -h) that you have enough storage in that logical volume. If not, you likely didn't give the entire partition enough space.
- For this same reason, if you end up using this calculator to build a discrete Decoder, you'll likely want to change the command that would provision your PacketDB to use the "100%FREE" version of the lvcreate command. The syntax would be the same as the one I use for the Concentrator's MetaDB.
- When you enter the scale information for Network Traffic, you might wonder, "But I don't know how many GB/day of network traffic I plan to send to NetWitness!" The easiest rule of thumb is that if you expect to see 100Mbps on average for a 24-hour period (that would mean ~175Mbps over the peak hour and 10Mbps overnight), that is 1TB/day of traffic. If you expect to see 10Mbps because it's a small office or home environment, assume 100GB/day. If you have absolutely no idea, just throw a number in there.
- For logs, in a small environment, if you had any log management system, you can probably figure out how many GB/day of logs you were generating before. If you expect a certain number of Events per Second, I put a handy calculator on row 10 to turn that into GB/day. If you have no idea, then once again, I suggest you just throw something in there.
- You can edit the calculator if you like. The password is just "rsa". I only password-protect it to keep first-time users from editing cells they shouldn't and breaking it.
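If you'd rather sanity-check those rules of thumb from a terminal, here is a quick sketch of the same arithmetic: the Mbps-to-GB/day conversion described above, and the EPS-to-GB/day conversion using the calculator's static 600 bytes per message. The specific inputs (100 Mbps, 1,000 EPS) are just examples.

```shell
# Network: average sustained Mbps -> GB/day (decimal GB)
mbps=100
net_gb_per_day=$(awk -v m="$mbps" 'BEGIN { printf "%.0f", m / 8 * 86400 / 1000 }')

# Logs: events per second -> GB/day, at 600 bytes per message
eps=1000
log_gb_per_day=$(awk -v e="$eps" 'BEGIN { printf "%.1f", e * 600 * 86400 / 1e9 }')

echo "Network: ${net_gb_per_day} GB/day, Logs: ${log_gb_per_day} GB/day"
```

At 100 Mbps this lands right around the ~1TB/day rule of thumb mentioned above.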
The calculator is called NW Virtual Hybrid Sizing Calculator v1.0.xlsx. PLEASE, if you find any errors, leave a comment below or contact me somehow so that I can fix it for others.
Raw Event Data Storage
The Virtual Host Installation Guide covers how to add storage for the various RSA NetWitness Platform databases in Step 3. It also covers how to calculate the amount of storage you'll need to allocate to each database for any given host/service. For the Admin Server, Archiver, Broker, ESA, Log Collector, and UEBA hosts, all storage will get dumped into the /var/netwitness/ folder. The instructions for extending that volume group and logical volume are in the installation guide and generally involve: pvcreate, then vgextend, then lvextend, and finally xfs_growfs.
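As a concrete sketch of that sequence: assuming you attached a new data disk that shows up as /dev/sdb, and that your volume group and logical volume are named netwitness_vg00 and nwhome (placeholders - check your actual names with vgs and lvs), extending /var/netwitness looks roughly like this. Treat it as an outline of the guide's steps, not something to paste blindly; these commands require root and a real disk.

```shell
pvcreate /dev/sdb                                   # initialize the new disk for LVM
vgextend netwitness_vg00 /dev/sdb                   # add it to the existing volume group
lvextend -l +100%FREE /dev/netwitness_vg00/nwhome   # grow the LV mounted at /var/netwitness
xfs_growfs /var/netwitness                          # grow the XFS file system into the new space
```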
For Decoders, Concentrators, and Hybrids, I've put together the commands you need in the attached *Commands.txt text files to set up the storage for those hosts. I recommend running all of these scripts to build the partitions, volume groups, and logical volumes after you run nwsetup-tui, but *BEFORE* you install the services on the hosts. A few things to note:
- I name the volume group "vg01" for the sake of brevity. The name you assign does not matter at all.
- In Step 5, I assign storage to the "root" folder for each respective service; /var/netwitness/decoder for Network Decoders, /var/netwitness/concentrator for Concentrators, and /var/netwitness/logdecoder for Log Decoders. This is not required, but I prefer to create these volumes so that I can monitor them in case they fill up. Note: they must have at least 5GB of storage assigned, but larger VMs can have as much as 30GB.
- Also in Step 5, you will need to replace the lv sizes with the proper sizes based on the Installation Guide and/or your RSA NetWitness Platform engineer. In my scripts, I assign specific sizes to every volume except the last one, which I then assign whatever free space is left with the "100%FREE" command.
- I wrote Step 10 so that you can copy and paste it directly into the /etc/fstab file on the host via an SSH session. You can paste it directly at the bottom of the existing file. Once that is done, before you install services, reboot the host to make sure there aren't any errors in that fstab file. The syntax is very particular, and any errors will cause the system to fail to come up. If that happens, just open a console window to the machine, enter the root password when prompted to drop into maintenance mode, and then fix the fstab file.
- I want to say this again because it's very important: after adding your changes to the fstab file, reboot the machine and make sure your syntax was correct!
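To give you a feel for the shape of those scripts, here is a trimmed sketch for a virtual Network Decoder. The device (/dev/sdb), logical volume names, and every size here are placeholders - use the attached *Commands.txt files and your calculator output for the real values.

```shell
pvcreate /dev/sdb
vgcreate vg01 /dev/sdb                   # "vg01" is arbitrary, as noted above
lvcreate -L 10G  -n decoroot vg01        # /var/netwitness/decoder (5-30GB)
lvcreate -L 30G  -n index    vg01        # placeholder size
lvcreate -L 500G -n metadb   vg01        # placeholder size
lvcreate -l 100%FREE -n packetdb vg01    # last volume takes whatever space is left
mkfs.xfs /dev/vg01/packetdb              # each LV then gets an XFS file system
```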
Just view the *Commands.txt file attached to this post that corresponds to the type of host you're trying to install.
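For reference, the fstab entries you paste in Step 10 follow the standard five-field format. The lines below are illustrative only (volume group "vg01" as in the attached scripts; mount options are an assumption) - use the lines from the attachment verbatim:

```
/dev/vg01/decoroot  /var/netwitness/decoder           xfs  noatime,nosuid  1 2
/dev/vg01/packetdb  /var/netwitness/decoder/packetdb  xfs  noatime,nosuid  1 2
```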
This step is straightforward. If you haven't already, go to Admin --> Hosts and enable the host. Then install the services just as outlined in the Installation Guide.
Validate Folder Sizes - RSA NetWitness Platform Databases
In order to properly roll off the oldest entries in NWDB (NetWitness Database, our proprietary database format), we have to make sure that the RSA NetWitness Platform knows how much storage each database has to fill. Navigate to Admin --> Services, and for any Concentrator or Decoder/Log Decoder service, go to the Explore page. Expand the "database" menu item on the left-hand side, and click on "config". Here I show the page for an RSA Log Decoder service on a physical Endpoint Log Hybrid:
The sizes you see there are 95% of the corresponding folders we built using the provisioning commands, measured in 1,073,741,824-byte (1 GiB) blocks. If you want to be exact, you can run "df --block-size=G", multiply a folder's size by 95%, and round to two decimal places to get the value RSA NetWitness Platform will place in the corresponding line above. Once the data in one of these folders exceeds these limits, RSA NetWitness Platform rolls off data.
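To check that arithmetic yourself, it really is just 95% of the volume size df reports. A minimal sketch, with a made-up 4000GB volume standing in for the value df would return:

```shell
# In practice: take the size from `df --block-size=G /var/netwitness/decoder/packetdb`
vol_gb=4000
nw_size=$(awk -v g="$vol_gb" 'BEGIN { printf "%.2f", g * 0.95 }')
echo "$nw_size"   # the size NetWitness should show for that database
```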
If you followed this guide and the Virtual Host Installation Guide, you will see folder sizes here that match what you provisioned. But what if they don't match or you made a mistake? Well, you can reset those by right-clicking on the "database" menu item and clicking "Properties":
At the bottom-right of the window, the Properties pane will open up. Select "reconfig" from the drop-down and click the Send button:
You can see that these values match what we saw in the previous screen. If these values still don't look correct - usually, if they are all the same - then your folders aren't mounted to separate logical volumes. If these values do look correct, you can remove the "=xx.xxTB" or "=xx.xxGB" from the entries on the previous screen. Then, back in the Properties pane, in the Parameters box, type update=1 and click Send again. It will append those values to the appropriate entries at the top, though you'll have to refresh the screen to see the update.
The indexes for each of these services have a separate entry. On the Explore page, you will see a menu item called "Index", and the settings are under the "config" sub-menu. Just like above, if you need to reset the folder size for that, you can right-click on "Index" and run the reconfig commands like before.
Validate Thresholds - MongoDB
In addition to NWDB, NetWitness also stores Endpoint scan results (primarily, what you see in Navigate --> Hosts) in MongoDB on the Endpoint Log Hybrid, in the /var/netwitness/mongo folder. NetWitness does not display the folder sizes in the Endpoint Server service's Explore page as it does for the services above. Instead, it just looks at the amount of storage in the /var/netwitness/mongo folder or, if that isn't separately partitioned, in the /var/netwitness folder. Then it compares the current usage to the value in the "rollover-after" setting here:
Your system may not use this setting if your Data Retention policies (found at Admin --> Services --> Endpoint Server --> Config --> Data Retention Scheduler tab) don't already roll over data before the folder hits 80%. You should also be aware of the settings under endpoint/data-store-thresholds:
If the storage in the corresponding folder (/var/netwitness/mongo if it's partitioned separately, and /var/netwitness if it's not) crosses these thresholds, you will eventually receive Health & Wellness alerts that correspond to those thresholds.
Minimum Available Space - The Key to Reliability
The other setting you may have noticed in the previous screenshots, which we ignored, is the <database_name>.free.space.min setting. A given database can grow past the maximum size we've set up above with no issues, but capture/aggregation will stop if there is less free space than the free.space.min setting for the corresponding service specifies. Just as the folder size above defaults to 95% of the total volume size, free.space.min defaults to 0.865% of the total size. In both cases, the default setting can be replaced manually with whatever value you would like to enter. For most large VMs, the default is fine. However, for smaller hosts capturing small amounts of data, this default may be a bit high and can be adjusted.
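For reference, here is the 0.865% default worked out for two example volume sizes (both illustrative), which makes it easy to see what your own hosts will reserve before deciding whether to override it:

```shell
# free.space.min default = 0.865% of the volume size
small=$(awk 'BEGIN { printf "%.1f", 200   * 0.00865 }')   # 200 GiB packet volume
large=$(awk 'BEGIN { printf "%.1f", 10240 * 0.00865 }')   # 10 TiB packet volume
echo "200GiB volume reserves ${small} GiB; 10TiB volume reserves ${large} GiB"
```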
Please note: the indexes do not have a similar free.space.min setting, and capture/aggregation will continue to run, even if the index volumes are essentially full.
For Mongo, you should also be aware of the settings under Admin --> Services --> Endpoint Server --> Explore --> endpoint/data-store-thresholds:
If the storage in the corresponding folder (/var/netwitness/mongo if it's partitioned; /var/netwitness if it's not) crosses the warning-percent level, you will receive a warning-level Health & Wellness alert; if it crosses the fatal-percent level, you will receive a fatal-level Health & Wellness alert.
Monitoring Part 1: Folder Sizes
As I mentioned in the Overview, for small hosts (roughly <1TB of total storage), I recommend monitoring your volumes to make sure that they don't fill up. To do this, I modified a script I found here to monitor file system usage:
It pulls back every folder other than temp and boot folders, and if any are at 90% or higher, it generates a syslog message, sent to the IP designated by the -n switch (10.10.10.10 in the image above). I've attached that script below as checkVolumeSizes.sh. (Remember, use chmod to make it executable!) If you run crontab -e from an SSH terminal, the RSA NetWitness Platform's underlying CentOS OS will open vi and allow you to set a schedule to run the script. I imagine most of you reading this are familiar with crontab syntax, but if you're not, or if you want to design something overly tricky, this site takes all the work out of it for you: https://crontab.guru/.
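For example, a crontab entry to run the check every 15 minutes might look like the line below. The script path is an assumption on my part - point it at wherever you saved checkVolumeSizes.sh; the -n target matches the 10.10.10.10 example above.

```
*/15 * * * * /root/checkVolumeSizes.sh -n 10.10.10.10
```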
The messages generated will look like this:
You can ingest that into any system that can ingest syslog messages and alert on it as you see fit. Seeing as the RSA NetWitness Platform *IS* a SIEM, it seemed only right to go ahead and monitor it using the RSA NetWitness Platform. The first step is properly parsing the message, so I built a parser using the NetWitness Log Parser Tool (download here: https://community.rsa.com/docs/DOC-94172, learn how to use it here: RSA ESI Beta 3 - YouTube and Parser Development When No Message ID Exists - YouTube). It took maybe 5 minutes.
But there aren't any out-of-the-box keys meant to store the size of logical volumes, and I wanted to include that in the e-mail I send to myself, so I added a meta key to the RSA NetWitness Platform for that. If you use my parser you *MUST* create a custom meta key in your system in order for the parser to work properly. Add the custom meta key to the table-map-custom.xml file on the Log Decoder where you are directing these messages.
You can find that attached as table-map-custom.txt. I didn't want to call it table-map-custom.xml because it needs to be added to the existing file, not pasted over the existing file in its entirety.
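For orientation, entries in table-map-custom.xml follow the general shape below. The key name here is purely hypothetical - use the actual entry from the attached table-map-custom.txt, appended inside the existing file's structure rather than replacing it:

```
<!-- hypothetical example entry; the real key name comes from the attachment -->
<mapping envisionName="disk_free_mb" nwName="disk.free.mb" flags="None" format="Int32"/>
```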
Now, download nwdiskalert.envision, navigate to Admin --> Log Decoder --> Config, click the Parsers tab, and upload that file. After uploading, if you want to make sure the Log Decoder reloaded its parsers, you can switch from Config to Explore:
Once the page loads, expand the "decoder" menu, right-click on "parsers", and choose "Properties".
In the Properties pane, select "reload" from the drop-down menu and then click Send. Now the parsers have been reloaded and you're all set to ingest these messages!
Monitoring Part 2: ESA Correlation Rules
I built three ESA rules to monitor my file system at home, one each for medium, high, and critical severity alerts. Here is what I classify as each:
- Medium Severity:
- Monitor folders that shouldn't ever fill up when they reach high levels of utilization, but won't cause any service issues.
- Any of the following folders are at least 90% but no more than 94% disk usage:
- Any of the following folders are at least 90% but no more than 94% disk usage:
- High Severity:
- Monitor folders that shouldn't ever fill up when they reach extremely high levels of utilization, but won't cause any service issues
- Monitor folders that could cause service interruption once they pass 95% (which is where many of them will sit most of the time) but haven't yet reached a point where service interruption will occur
- Monitor the mongodb folder if it reaches concerning levels
- Any of the following folders are at least 95% but no more than 97% disk usage:
- Any of the following folders are at 96% or 97%:
- Any of the following folders are at least 95% but no more than 97% disk usage:
- The /var/netwitness/mongo folder is at least 90% and no more than 94%
- Critical Severity:
- Monitor folders that shouldn't ever fill up when they reach critical levels of utilization
- Monitor folders that could cause service interruption once they pass 97% and will soon - or are currently - causing service interruption
- Monitor the mongodb folder if it reaches its "fatal-percent" setting
- Any of the folders in the High Severity list are at 98% or above
- The /var/netwitness/mongo folder is at 95% or above
You can find those attached as nwDiskMonitoringESARules_<severity>_Basic.txt. You might ask yourself, "Why did he call them 'Basic'?" Well, that's because I actually built more detailed rules in my lab that also monitor the free size returned in the event logs. It's absolutely overkill, and it causes the rules to look like this:
Do you really want to do that to yourself? You really shouldn't, but if you insist, feel free to reach out to me and I'll send you those rules as well.
Monitoring Part 3: Generating Notifications
When these rules detect something, of course you'll want to generate an e-mail to notify you of their current state. I use a single notification template for all three ESA Rules. I put my notification template in the attached file nwDiskMonitoringNotificationTemplate.txt. The template breaks down like this:
- Lines 1 - 20: Builds a banner at the top of the e-mail that is yellow for medium alerts, orange for high, and red for critical
- Line 25: Prints the time the event was generated
- Line 27: Prints the IP of the RSA NetWitness Platform host that generated the event log
- Line 29: Prints the folder that the alert is related to
- Line 31: Prints the % utilization of the folder
- Line 33: Prints the amount of free space, in MB, left in that folder
- Line 35: Generates a hyperlink to the raw event log in the RSA NetWitness Platform; make sure you edit both the <NW_URL_or_IP> and the device ID (mine is 6)
(Have questions about any other items in this notification template? Check out my other relevant blog post here: Building the Notifications of Your Dreams in the RSA NetWitness Platform.)
Once you've updated those items, place it under Admin --> System --> Global Notifications --> Template (tab), and make sure you select that template when adding your ESA Rules. You can also build an Incident Rule in the RSA NetWitness Platform if you want to generate incidents for these alerts. Here is mine, for reference:
I can't emphasize enough that the Virtual Host Installation Guide has very comprehensive instructions for setting up a virtual RSA NetWitness Platform host, and you should make sure you follow those instructions. However, following some of the additional steps included in this guide can give you peace of mind that your RSA NetWitness Platform environment is running smoothly and collecting your critical security forensic information.
Future note: I plan to build some Event Source Monitoring rules to make sure that my hosts are still sending logs. For example, the packetdb folder on your Decoders and Log Decoders should reach 95% eventually and then roll off data, while your Concentrators should reach 95% on their metadb folder. Those should continue to generate logs once they hit 90% utilization at every interval you specified in the cron job. If I ever get the free time to create those, I'll update this post with that information. If someone wants to build that on their own, be my guest!!