Use Splunk's analytics-driven security for your environment, from security monitoring to detecting insiders and advanced attackers, with this free app. The app uses Splunk Enterprise and the power of our Search Processing Language (SPL) to showcase tons of working examples.
Each use case has examples with sample data and real searches. We've also included extensive documentation, and you can save searches directly from the app to create a Notable Event or Risk Indicator in ES, an External Alarm in UBA, or send an email for review. This gives analysts the ability to detect anomalous activity, leverage best practice detections for small or large environments, and even improve your GDPR posture.
Best of all, SSE maps all Splunk security detections to the six stages of the Splunk Security Journey and categorizes them by use case, providing a maturity path to take you from day one to day one thousand. Improve your security, starting now.
Video walkthrough of installation: https://youtu.be/RVUmSsS-81M
Install the app only on a search head. This app is safe to install in large clusters, as it will not have an impact on indexers (unless you choose to enable many searches). The app includes many lookups with demo data that shouldn't be replicated to the indexers; it also includes a distsearch.conf file that prevents that replication, so you needn't worry.
SSE installs into a search head cluster (SHC) like any other app. The only area of minimal risk in an SHC setup is the Lookup Cache acceleration technique used with very large lookups in First Time Seen detections (see First Time Seen Detection -> Considerations for implementing the large scale version in this doc). This technique isn't used by default, and even when used it is safe for virtually all scenarios, as Search Head Clustering has a robust replication mechanism that handles larger files well. The docs below note that most SSE lookups using this technique are a few MB in size, and it's difficult to conceive of a lookup larger than 1 GB. I have hunted, and the only SHC replication issue I've found was with a 54 GB KV Store, so you should feel very comfortable using SSE, including this technique.
Unless you save or enable searches included with the app, there is no increase in indexed data or scheduled search load. Because the app includes demo data, it takes about 250 MB of storage on the search head.
This app does not interfere with or impact ES, and can be installed safely on an ES search head (or search head cluster).
In addition to the scenarios described above, this app is periodically tested with the following client platforms:
OSX 10.12 (Sierra) - Chrome (Primary)
OSX 10.12 (Sierra) - Safari
OSX 10.12 (Sierra) - Firefox
Windows 10 - Chrome
Windows 10 - Safari
Windows 10 - Firefox
If you save and enable searches included with the app in your environment, you could see changes in the performance of your Splunk deployment.
As is true for all searches in Splunk, the amount of data that you search affects the search performance you see in your deployment. For example, if you search Windows logs for two desktops, even the most intensive searches in this app add no discernible load to your indexers. If you instead search domain controller logs with hundreds of thousands of users included, you would see additional load.
The searches included with the app are generally scheduled to run once a day, and leverage acceleration and efficient search techniques wherever possible. In addition, the searches have been vetted by performance experts at Splunk to ensure they are as performant as possible. If you are concerned about resource constraints, schedule any searches you save to run during off-peak times.
You can also configure these searches to run against cached or summary-indexed data (see the "Large Scale" headers below). If you have a large-scale deployment, use the lookup cache for first time seen searches and select the "High Scale / High Cardinality" option for time series analysis searches. See the details for large scale versions of these searches below.
In Splunk Security Essentials, every example with an available search has prerequisites defined, so you will know whether a given search should work in your environment. (This also gives you insight into what data sources you might want to add in the future.) When you click "Start Searches," more than sixty searches launch. Each search is fast, and the dashboard throttles to five concurrent searches to minimize the impact on your Splunk environment. The searches are highly efficient, so in most environments the entire load takes less than five minutes to run, usually around one and a half minutes. The result is a chart showing which use cases you can expect to run smoothly.
The detection methods for the use cases in the app fall into three categories:
The first method, time series analysis, tracks numeric values over time and looks for spikes in the numbers. Using the stdev function of the stats command, you can look for data samples many standard deviations away from the average, allowing you to identify outliers over time. For example, use a time series analysis to identify spikes in the number of pages printed per user, the number of interactive logon sessions per account, and other statistics where a spike would indicate suspicious behavior.
The time series analysis is also performed on a per-entity basis (e.g., per user, per system, per file hash), leading to more accurate alerts. It is more helpful to know that a user printed more than three standard deviations above their personal average than to alert whenever more than 150 pages are printed. Using a time series analysis with Splunk, you can detect anomalies accurately.
The time series searches address use cases that detect spikes, such as "Increase in Pages Printed" or "Healthcare Worker Opening More Patient Records Than Normal" or any other use case you might describe with the word "Increase."
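As a minimal sketch of this pattern in SPL (the index and field names here, such as print_logs, user, and pages, are placeholders rather than searches shipped in the app):

```
index=print_logs
| bucket _time span=1d
| stats sum(pages) as pages_printed by user, _time
| stats avg(pages_printed) as avg_printed, stdev(pages_printed) as stdev_printed, latest(pages_printed) as latest_printed by user
| where latest_printed > avg_printed + 3 * stdev_printed
```

This buckets activity per user per day, computes each user's personal average and standard deviation, and flags any user whose most recent day is more than three standard deviations above their own norm.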
In a large-scale environment, use summary indexing for searches of this type. The app allows you to save any time series use case in two ways:
For the High Scale / High Cardinality versions, the app schedules two searches. One search aggregates activity every day and stores that daily summary in a summary index. The second search does the actual anomaly detection, but rather than reviewing every single raw event, it reviews the summary indexed data. This allows that search to analyze more data (terabytes instead of gigabytes) and a greater number of values (300k usernames rather than 3k).
For example, the small-scale version of the "Healthcare Worker Opening More Patient Records Than Normal" search runs across a time range, reviews raw events to pull the number of unique patient records viewed per healthcare worker per day, and calculates the average and standard deviation all in one search. With the large-scale version, the first search runs every day to calculate how many patient records each worker viewed yesterday, and outputs one record per worker (username, timestamp, and count of patient records viewed) to a summary index. The second search then runs against the summary indexed data to calculate the average, standard deviation, and most recent value.
With lower cardinality to manage in the dataset and fewer raw records to retrieve each day, the amount of data that the Splunk platform has to store in memory is reduced, leading to better search performance and reduced indexer load.
However, summary indexing means that you have to manage two scheduled searches instead of one. In addition, the data integrity of the summary index relies on the summary indexing search never being skipped. Summary indexed data also takes up storage space on your indexers, though generally not very much, and it does not count against your Splunk license.
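As a rough sketch of the two searches (the ehr_audit index, the record_id and user fields, and the summary_patient_access summary index are all assumed placeholder names), the daily aggregation search might look like:

```
index=ehr_audit earliest=-1d@d latest=@d
| stats dc(record_id) as records_viewed by user
| collect index=summary_patient_access
```

The detection search then runs over the much smaller summary data instead of the raw events:

```
index=summary_patient_access earliest=-30d@d
| stats avg(records_viewed) as avg_viewed, stdev(records_viewed) as stdev_viewed, latest(records_viewed) as latest_viewed by user
| where latest_viewed > avg_viewed + 3 * stdev_viewed
```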
For more on how to use summary indexing to improve performance, see http://www.davidveuve.com/tech/how-i-do-summary-indexing-in-splunk/.
First time analysis detects the first time an action is performed. This helps you identify out-of-the-ordinary behavior that could indicate suspicious or malicious activity. For example, service accounts typically log in to the same set of servers. If a service account logs in to a new device one day, or logs in interactively, that new behavior could indicate malicious activity. You typically want to alert on first time behavior when the first occurrence of that activity falls within the last 24 hours.
You can also perform first time analysis based on a peer group with this app. Filter out activity that is new for a particular person, but not for the people in their group or department. For example, if John Seyoto hasn't checked out code from a particular git repo before, but John's teammate Bob regularly checks out code from that repo, that first time activity might not be suspicious.
Detect first time behavior with the stats command and the first() and last() functions. Integrate peer group first-seen activity using eventstats. In the app, the demo data compares against the most recent value from latest() rather than now(), because events do not flow into the demo data in real time, so there is no value at "now."
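As a minimal sketch of the peer group variation, using min(_time)/max(_time) as order-independent equivalents of first()/last() (the git_logs index, the user_to_dept lookup, and the user, dept, and repo fields are assumed placeholder names):

```
index=git_logs action=checkout earliest=-90d
| lookup user_to_dept user OUTPUT dept
| stats min(_time) as user_first_seen by user, dept, repo
| eventstats min(user_first_seen) as dept_first_seen by dept, repo
| where user_first_seen >= relative_time(now(), "-24h") AND dept_first_seen >= relative_time(now(), "-24h")
```

The eventstats line computes the earliest checkout of each repo across the whole department, so activity that is new for one user but routine for their peers is filtered out.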
The ability to detect first time seen behavior is a major feature of many security data science tools on the market, and you can replicate it with these searches in the Splunk platform out of the box, for free.
The first time seen searches address use cases that detect new values, such as the "First Logon to New Server" or "New Interactive Logon from Service Account" or any other search with "New" or "First" in the name.
In a large-scale deployment, use caching with a lookup for searches of this type. If you select a lookup from the "(Optional) Lookup to Cache Results" dropdown, the app automatically configures the search to use that lookup to cache the data. If you leave the value at "No Lookup Cache," the search runs over the raw data.
For example, to detect new interactive logons by service account, you would need to run a search against raw events with a time window of 30, 45, or even 100 days. The search might run against several tens of millions of events, and depending on the performance you expect from the search, it might make sense to cache the data locally.
The more performant versions of these searches rely on a lookup to cache the historical data. The search runs over just the last 24 hours, appends the historical data from the lookup, recomputes the earliest and latest times, updates the cache in the lookup, and finds the new values.
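A minimal sketch of this pattern (the logon_cache.csv lookup name and the index and field names are placeholders; the EventCode 4624 / Logon_Type 2 filter is just an illustrative way to isolate Windows interactive logons):

```
index=wineventlog EventCode=4624 Logon_Type=2 earliest=-24h
| stats min(_time) as first_seen, max(_time) as last_seen by user, dest
| inputlookup append=t logon_cache.csv
| stats min(first_seen) as first_seen, max(last_seen) as last_seen by user, dest
| outputlookup logon_cache.csv
| where first_seen >= relative_time(now(), "-24h")
```

Because outputlookup passes results through the pipeline, the same search both refreshes the cache and emits the combinations first seen in the last day.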
Implementing historical data caching can improve performance. For a baseline data comparison of 100 days, and assuming that some of that data is in cold storage, historical data caching could improve performance up to 100 times.
Relying on a cache also means storing a cache. The caches are stored in CSV lookup files on the search head. The more unique combinations of data that need to be stored, the more space needed on the search head. If a lookup has 300 million combinations to store, that lookup file can take up 230MB of space. If you implement the large-scale versions of the searches, ensure that there is storage available on the search head for the lookups that provide historical data caching for these searches.
Lookups in this app are excluded from bundle replication to your indexers. This prevents your bundles from getting too large and maintains Splunk reliability. However, if you move the searches or lookups to a different app, our configurations won't protect them; in that case, replicate the settings in distsearch.conf so that those lookups are not distributed to the indexers. The risks of large bundles are that changes can take longer to go into effect, and in extreme cases indexers can even be taken offline (when bundles become too out of date).
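As a sketch of what that might look like in your own app's distsearch.conf (the stanza attribute name and the my_app directory are placeholders; check the distsearch.conf spec file for the exact wildcard syntax your Splunk version supports):

```
[replicationBlacklist]
# Keep large lookup caches out of the knowledge bundle sent to indexers
myAppLookups = apps/my_app/lookups/*.csv
```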
The remainder of the searches in the app are straightforward Splunk searches. They rely on tools in and around the Splunk platform to perform detection, such as URL Toolbox to calculate the Shannon entropy of URLs, Levenshtein distance to identify filename mismatches, the transaction command, and more. They typically don't require a historical baseline of data, so they can easily be run over the last half hour of data. You get the most value from these searches by copy-pasting the raw search strings into your Splunk deployment and putting them to use.
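For example, a minimal entropy check using the URL Toolbox app's documented macros might look like the following (this assumes URL Toolbox is installed; the proxy index and url field are placeholders, and the 3.5 threshold is purely illustrative):

```
index=proxy url=*
| eval list="mozilla"
| `ut_parse(url, list)`
| `ut_shannon(ut_domain)`
| where ut_shannon > 3.5
```

High Shannon entropy in a domain name is a common indicator of algorithmically generated (DGA) domains.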
Splunk Security Essentials is a free app, and because of that we often don't really know what people care about in the app. We've got lots of ideas for what we should build next, but we want to know what people find valuable. For customers who opt in, collected data tells us what you care about! For example, we are shipping some Data Onboarding Guides; how often people actually use them will help us know whether it's worth building more. Everything is anonymized, so your info is always private.
If you opt in globally on your Splunk environment, the app enables an internal library to track basic usage and crash information. The library uses browser cookies to track visitor uniqueness and sessions, and sends events to Splunk via XHR in JSON format, with all user- or system-identifying data resolved to GUIDs.
Event | Description | Example Fields (in addition to Common Fields)
---|---|---
Example Opened | You were interested enough to open an example | status - exampleLoaded; exampleName - the name from the contents; searchName - which search for that example. |
SPL Viewed | You thought the SPL for an example was worth seeing! | status - SPLViewed; name - the searchName from row 1
Schedule Search (Started) | An example so useful that you decided to schedule an alert | status - scheduleAlertStarted; name - the searchName from row 1 |
Schedule Search (Finished) | An example so useful you actually scheduled an alert! | status - scheduleAlertCompleted; name - the searchName from row 1 |
Doc Loaded | You were curious about onboarding and opened a guide | status - docLoaded; pageName - whatever page you are viewing (e.g., Windows Security Logs) |
Filters Updated | You updated your filters to view specific examples | status - filtersUpdated; name - the filter you changed; value - the value; enabledFilters - the filters in use
Selected Intro Use Case | From the intro page, you clicked on a use case for more | status - selectedIntroUseCase; useCase - whatever you clicked on, like "Security Monitoring" |
Event | Example Message
---|---
Example Opened | {status: "exampleLoaded", exampleName: "New Interactive Logon from a Service Account", searchName: "New Interactive Logon from a Service Account - Demo"} |
SPL Viewed | {status: "SPLViewed", name: "New Interactive Logon from a Service Account - Demo"} |
Schedule Search (Started) | {status: "scheduleAlertStarted", name: "New Interactive Logon from a Service Account - Demo"} |
Schedule Search (Finished) | {status: "scheduleAlertCompleted", searchName: "New Interactive Logon from a Service Account - Demo"} |
Doc Loaded | {status: "docLoaded", pageName: "Windows Security Logs"} |
Filters Updated | {status: "filtersUpdated", name: "category", value: "Account_Sharing", enabledFilters: ["journey", "usecase", "category", "datasource", "highlight"]} |
Selected Intro Use Case | {status: "selectedIntroUseCase", useCase: "Security Monitoring"} |
When you first install Splunk (or upgrade to a version of Splunk that supports usage data collection), an administrator is asked whether to opt in. That setting can be viewed or changed (that is, you can opt in or out) by enabling or disabling Anonymized Usage Data under Settings > Instrumentation in Splunk Web.
Splunk itself sends some usage data (again, if you've opted in). Splunk Security Essentials doesn't touch that data, but you can read about it here: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Shareperformancedata
2.3.1 Release Notes
* Numerous Windows TA 5 bug fixes (missed a specific format before, thank you to those on answers who pointed out the problem!)
* A few bug fixes for the new 2.3 Dashboarding feature
* The main page no longer says "What's New in 2.2" with outdated info, it now says "What's New in 2.3" with updated info!
Version 2.3 Release:
* Dashboard Panels! Not only will you find a collection of dashboard panels on several commonly used examples, but from the Data Source Check you can click the new "Create Posture Dashboards" button and it will guide you through creating a series of dashboards that provide visibility into your environment based on the data sources you already have. Great for those who want to periodically look at dashboards instead of getting alerts!
* More complete mapping to MITRE ATT&CK and the Kill Chain
* Several bug fixes
Version 2.2 Release Notes:
* 25 new detections, many focused on Insider Threat, but including two for ES Risk and the new Microsoft Scheduler zero-day reported on Twitter!
* Improved Print-to-PDF Export from the Bookmarked Content page!
* Improved integrations with ES and ESCU, and content from UBA, for users who also have those products.
* Many bugs resolved and functionality improved!
Version 2.1.1 Release Notes:
- Splunk 7.1 Compatibility!
- New search bar (type in the name of something you're looking for, such as "lockout" to find everything related to Account Lockouts) courtesy of elasticlunr.js add-in.
- Bookmark "Print" view now dramatically cleaner and better
- Several UI tweaks
- Several bugs squashed
Version 2.1.0 Release Notes:
- New Bookmark feature! Track content that you would like to implement, and even your implementation status. Security Essentials is no replacement for a full project tracking system, but we'll help you remember what you care about, and help you get started with building it out!
- Four New Examples:
- Public S3 Buckets (CloudTrail)
- Connection to a new Domain
- Old Passwords in Use
- Unusual AWS Regions
- New Use Case Analytics dashboard (called simply: Overview)
- Expanded GDPR Content (description, screenshots, visualizations, oh my!)
- Many bug fixes and formatting improvements
Major overhaul for Splunk Security Essentials 2.0.
* All content mapped to meaningful use cases. No more browsing by log source; now focus on areas like Security Monitoring or Insider Threat!
* 10+ Data Onboarding Guides to walk you through getting data in, including the configuration of third party systems
* 40+ new examples, with live and usable searches covering AWS, GDPR, basic Security Monitoring, and Ransomware!
* Integrated mapping of entire Splunk Security ecosystem, including analytics in ES, ESCU, UBA, and Professional Services.
* All examples oriented to a Security Journey, to help you focus on what to do first, second, and after that.
* Complete overhaul of the UI to be more usable and intuitive.
Note: There are a *ton* of code changes in this release, and sometimes splunkweb gets funny with that. If you do notice any behavior that doesn't seem right, try telling Splunk to use the latest copy by going to /en-US/_bump.
Version 1.4.6 Release Notes:
* Fixed a couple of major 7.0 bugs (sorry)
* One outstanding issue with saving High Cardinality use cases
Splunk 7.0 Note: There are a couple of big bugs with the app in Splunk 7.0. 1.4.5 doesn't work in 7.0, but 1.4.6 will! It's awaiting review at the moment, and we will have it out shortly.
Version 1.4.5 Release Notes:
* Several small bug fixes, including the pre-requisite checks for the demo data on distributed environments (thank you to tkreiner on answers for the bug report!)
* A checkbox now requires that you acknowledge "I Understand" instead of "I Do Not Understand" before continuing, in one of the most unusual bug reports of my career.
Version 1.4.4 Release Notes:
* Several small bug fixes, including the pre-requisite checks for the demo data on distributed environments (thank you to tkreiner on answers for the bug report!)
* A checkbox now requires that you acknowledge "I Understand" instead of "I Do Not Understand" before continuing, in one of the most unusual bug reports of my career.
1.4.3 Release Notes:
* Now doesn't trigger virus scanners! (Turns out a lot of VBScript can confuse some scanners)
* Adjusted the user for SFDC data to Chris, instead of Chuck
* Few small bug fixes
1.4.2 Release Notes
* Fixed a bug with the Schedule Alert functionality in the Search Based Use Cases where the expected box didn't appear.
1.4.1 Release Notes:
* Added anonymized_box_logs.csv -- this is an orphaned dataset you can use to generate a custom Detect Spikes use case. Go explore, or come to SplunkLive DC this Thursday (03/23) to walk through it with me live.
* Fixed a few live searches, and many pre-req searches that were missing index=* -- thank you {I didn't ask permission to use your name} for reporting that bug on the recent downloader survey!
* Fixed the broken pre-reqs for the Emails with Lookalike Domains use case.
1.4.0 Release Notes
- New Data Source Check dashboard showcased! See what use cases you can run with the data you have today -- explore if there are use cases you can't run that you would like to!
- Added Docs + Support to the Nav Menu
- Wide array of small fixes and adjustments.
1.3.2 Release Notes:
- Added a debug capability to the new Data Source Check dashboard -- just add ?debug=true to the URL (e.g., /app/Splunk_Security_Essentials/data_source_check?debug=true) to get a text box that you can send me to help debug. (SSE Diags are on the roadmap)
- Added docs and support links to the nav -- support just pointing to Splunk Answers
- Small bug fixes to six pre-req searches (Thank you Andreas!)
1.3.1 Release Notes:
- Six new use cases for Salesforce.com Event Log Format data!
- New Application Accessing Salesforce.com API for User
- New High Risk Event Types for Salesforce.com User
- New Tables Queried by Salesforce.com Peer Group
- New Tables Queried by Salesforce.com User
- Spike in Downloaded Documents Per User from Salesforce.com
- Spike in Exported Records Per User from Salesforce.com
- New Data Check dashboard (currently in beta) that will look through all of your data and then run all of the data source pre-req checks. It's not currently on the dashboard, but if you're interested check out the details page to see more. https://splunkbase.splunk.com/app/3435/#/details
- Misc Bug Fixes
Happy #RSAC Everyone!
Version 1.3.0 Release Notes:
- Added Filters that breakdown use cases not just by security domain, but also by data source, alert volume, and the app version when that use case was added.
- Fixed a few small display bugs
- Fixed an embarrassing error that led to pre-req checks being marked as present when they weren't
1.2.0 Release:
* Four new email use cases centered around phishing victims
* One new use case that detects if a particular sourcetype (e.g., security software) goes offline while the rest of the system is online.
* Major enhancement to the new lookup caching, in the form of the "Create Blank Lookup" button
* Several small bug fixes
1.1.1 Release Notes:
* Changed the Success/Error/Processing for the data checks to icons, so that people won't accidentally miss them!
* Fixed bug with the demo lookup cache, along with a few other small UI bugs.
1.1.0 Release Notes:
* Major new feature: for First Seen detections, you can cache historical results in a lookup, allowing searches to run dramatically faster in exchange for disk space (a 30x performance improvement is common).
* Several bugs fixed (thank you to andreasz for finding, and also proposing fixes to a few errors!)
1.0.3 Release Notes:
* Removed two features that weren't actually implemented. Thank you andreasz for the discovery!
1.0.2 Release Notes:
* New Logo
* Enhancement: you can now click Show SPL even if you don't have the data models (or data) present, to make it easy to copy-paste searches into production.
1.0.1 Release Notes:
* Fixed a bug that affected distributed deployments (no impact to the deployment, but prevented the app from working)
* New Logo
Initial Release