
Splunk Security Essentials

Splunk Built

https://splunkbase.splunk.com/app/3435/#/details

Table of Contents

  1. App Description
  2. Use Cases
  3. Data Sources Used
  4. Installation
  5. Performance Impact
  6. Data Source Check Dashboard (Beta)
  7. Detection Methods Used by the Searches
  8. Release Notes

App Description

Detect insiders and advanced attackers in your environment with the free Splunk Security Essentials app. This app uses Splunk Enterprise and the power of our Search Processing Language (SPL) to showcase 55+ working examples of anomaly detection related to entity behavior analysis (UEBA). Each use case includes sample data and actionable searches that can immediately be put to use in your environment.

The use cases leverage analytics to help analysts detect unusual activities, such as users who print more pages than usual (spike detection) or log on to new servers (first seen behavior), or adversaries who change file names to evade detection. Each use case includes the expected alert volume, an explanation of how the search works, and a description of the security impact. You can save searches directly from the app to leverage any alert actions you have installed, such as creating a Notable Event or Risk Indicator in ES, creating an External Alarm in UBA, or sending an email for review.

Use Cases

Search examples for the following use cases are included in the app.

Access Domain

  • Authentication Against a New Domain Controller
  • First Time Logon to New Server
  • Significant Increase in Interactively Logged On Users
  • Geographically Improbable Access (Superman)
  • Increase in # of Hosts Logged into
  • New AD Domain Detected
  • New Interactive Logon from a Service Account
  • New Local Admin Account
  • New Logon Type for User
  • Short Lived Admin Accounts
  • Significant Increase in Interactive Logons

Data Domain

  • First Time Accessing a Git Repository
  • First Time Accessing a Git Repository Not Viewed by Peers
  • Healthcare Worker Opening More Patient Records Than Usual
  • Increase in Pages Printed
  • First Time USB Usage
  • Increase in Source Code (Git) Downloads

Network Domain

  • Detect Algorithmically Generated Domains
  • Remote PowerShell Launches
  • Source IPs Communicating with Far More Hosts Than Normal
  • Sources Sending Many DNS Requests
  • Sources Sending a High Volume of DNS Traffic

Threat Domain

  • Detect Data Exfiltration
  • Sources Sending Many DNS Requests
  • Sources Sending a High Volume of DNS Traffic

Endpoint Domain

  • Concentration of Hacker Tools by Filename
  • Anomalous New Listening Port
  • Concentration of Discovery Tools by Filename
  • Concentration of Discovery Tools by SHA1 Hash
  • Concentration of Hacker Tools by SHA1 Hash
  • Familiar Filename Launched with New Path on Host
  • Find Processes with Renamed Executables
  • Find Unusually Long CLI Commands
  • Hosts with Varied and Future Timestamps
  • New Host with Suspicious cmd.exe / regedit.exe / powershell.exe Service Launch
  • New Parent Process for cmd.exe or regedit.exe
  • New Path for a Common Filename with Process Launch
  • New RunAs Host / Privileged Account Combination
  • New Service Paths for Host
  • New Suspicious Executable Launch for User
  • Processes with High Entropy Names
  • Processes with Lookalike (typo) Filenames
  • Remote PowerShell Launches
  • Significant Increase in Windows Privilege Escalations

Data Sources Used

  • Active Directory Domain Controller Logs
  • Windows Process Launch Logs (Event ID 4688)
  • Endpoint Agent Logs (Carbon Black, Sysmon)
  • Windows System Logs
  • Source Code Repository Logs
  • Firewall Logs
  • Windows Account Management Logs
  • Electronic Medical Record System Access Logs

Installation

In a single-instance deployment

  • If you have internet access from your Splunk server, download and install the app by clicking "Browse More Apps" from the Manage Apps page in the Splunk platform.
  • If your Splunk server is not connected to the internet, download the app from Splunkbase and install it using the Manage Apps page in Splunk platform.
    Note: If you download the app as a tgz file, Google Chrome could automatically decompress it as a tar file. If that happens to you, use a different browser to download the app file.

In a distributed deployment

Install the app only on a search head. The app is safe to install in large clusters because it has no impact on indexers (unless you choose to enable many searches). The app includes many lookups with demo data that shouldn't be replicated to the indexers; a bundled distsearch.conf file prevents that replication, so you needn't worry.

After installation

Unless you save or enable searches included with the app, there is no increase in indexed data or scheduled search load. Because the app includes demo data, it takes about 250MB of storage on the search head.

Performance Impact

If you save and enable searches included with the app in your environment, you could see changes in the performance of your Splunk deployment.

As is true for all searches in Splunk, the amount of data that you search affects the search performance you see in your deployment. For example, if you search Windows logs for two desktops, even the most intensive searches in this app add no discernible load to your indexers. If you instead search domain controller logs with hundreds of thousands of users included, you would see additional load.

The searches included with the app are generally scheduled to run once a day, and leverage acceleration and efficient search techniques wherever possible. In addition, the searches have been vetted by performance experts at Splunk to ensure they are as performant as possible. If you are concerned about resource constraints, schedule any searches you save to run during off-peak times.

You can also configure these searches to run against cached or summary index data (see the "Large Scale" headers below). If your Splunk deployment is a large-scale deployment, use the lookup cache for first time seen searches and select the "High Scale / High Cardinality" option for time series analysis searches. See the details for the large-scale versions of these searches below.

Data Source Check Dashboard

In Splunk Security Essentials, every use case has prerequisites defined, so you know whether a given search should work in your environment. (This also gives you some insight into what data sources you might want to add in the future.) When you click "Start Searches," about sixty searches launch. Each search is very fast, and the dashboard throttles to five concurrent searches to minimize impact on your Splunk environment. The searches are highly efficient, so in most environments the entire run takes less than five minutes, usually around one and a half minutes. The result is a chart showing which use cases you can expect to run smoothly.

Detection Methods Used by the Searches

The detection methods for the use cases in the app fall into three categories:

  • Time series analysis
  • First time analysis
  • General Splunk searches

Time Series Searches

This method of anomaly detection tracks numeric values over time and looks for spikes in the numbers. Using the standard deviation functions in the stats command, you can look for data samples many standard deviations away from the average, allowing you to identify outliers over time. For example, use time series analysis to identify spikes in the number of pages printed per user, the number of interactive logon sessions per account, and other statistics where a spike would indicate suspicious behavior.

The time series analysis is also performed on a per-entity basis (e.g., per user, per system, per file hash), leading to more accurate alerts. It is far more useful to know that a user printed more than three standard deviations above their personal average than to alert whenever anyone prints more than 150 pages. Using per-entity time series analysis with Splunk, you can detect anomalies accurately.
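
The app's searches are written in SPL, but the underlying math is simple. Here is a rough sketch of per-entity spike detection (in Python rather than SPL, with illustrative entity names and counts):

```python
from statistics import mean, stdev

def spikes(daily_counts, k=3.0):
    """Flag entities whose most recent daily count exceeds their own
    historical average by more than k standard deviations."""
    flagged = {}
    for entity, counts in daily_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # stdev() needs at least two historical samples
        if latest > mean(history) + k * stdev(history):
            flagged[entity] = latest
    return flagged

# A user who normally prints ~10 pages a day suddenly prints 200,
# while a user who always prints ~50 pages stays under their own threshold.
print(spikes({"alice": [9, 11, 10, 12, 200], "bob": [50, 55, 45, 52, 60]}))
# → {'alice': 200}
```

In SPL, the equivalent pattern computes avg() and stdev() per entity with stats or eventstats and applies the threshold in a where clause.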

The time series searches address use cases that detect spikes, such as "Increase in Pages Printed" and "Healthcare Worker Opening More Patient Records Than Usual," or any other use case you might describe with the word "Increase."

Large Scale Version of Time Series Searches

In a large-scale data environment, utilize summary indexing for searches of this type. The app allows you to save any time series use case in two ways:

  • Click "Schedule Alert" to detect anomalies directly from raw data. Use this option for low data volumes.
  • Click "Schedule High Scale / High Cardinality Alert" to save a version optimized for performance in large-scale deployments; this option actually saves two searches. Use it in a large-scale environment to take advantage of summary indexing.

For the High Scale / High Cardinality versions, the app schedules two searches. The first aggregates activity every day and stores that daily summary in a summary index. The second performs the actual anomaly detection but, rather than reviewing every single raw event, reviews the summary indexed data. This allows the detection search to analyze more data (terabytes instead of gigabytes) and a greater number of values (300k usernames rather than 3k).

For example, the small-scale version of the "Healthcare Worker Opening More Patient Records Than Usual" search runs across the full time range, reviewing raw events to pull the number of unique patient records each healthcare worker opened per day, and then calculates the average and standard deviation in the same search. With the large-scale version, the first search runs every day to calculate how many patient records were viewed yesterday, and outputs one record per worker (username, timestamp, and number of patient records viewed) to a summary index. The second search then runs against the summary indexed data to calculate the average, standard deviation, and most recent value.
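
The two-phase structure can be sketched in Python (field names like worker and records are illustrative; in Splunk, phase 1 would write its rows to a summary index, for example with collect):

```python
from collections import defaultdict
from statistics import mean, stdev

def summarize_day(day, raw_events):
    """Phase 1, scheduled daily: collapse raw (worker, patient_id) events
    into one row per worker, as would be stored in a summary index."""
    per_worker = defaultdict(set)
    for worker, patient_id in raw_events:
        per_worker[worker].add(patient_id)
    return [{"day": day, "worker": w, "records": len(ids)}
            for w, ids in per_worker.items()]

def detect(summary_rows, k=3.0):
    """Phase 2: anomaly detection over summary rows, never raw events."""
    series = defaultdict(list)
    for row in sorted(summary_rows, key=lambda r: r["day"]):
        series[row["worker"]].append(row["records"])
    alerts = {}
    for worker, counts in series.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) >= 2 and latest > mean(history) + k * stdev(history):
            alerts[worker] = latest
    return alerts
```

Phase 2 only ever touches one small row per worker per day, which is why the large-scale version can handle far more data and far more distinct workers.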

Considerations for implementing the large scale version

With lower cardinality to manage in the dataset and fewer raw records to retrieve each day, the amount of data that the Splunk platform has to store in memory is reduced, leading to better search performance and reduced indexer load.

However, summary indexing means that you have to manage two scheduled searches instead of one, and the data integrity of the summary index relies on the summary indexing search not being skipped. Summary indexed data also takes up storage space on your indexers, though generally not much, and it does not count against your Splunk license.

Note: If the summary indexing search is skipped, you can backfill the summary index. Search online for information about the fill_summary_index.py script and the --dedup flag.

For more on how to use summary indexing to improve performance, see http://www.davidveuve.com/tech/how-i-do-summary-indexing-in-splunk/.

First Time Seen Searches

First time analysis detects the first time that an action is performed, helping you identify out-of-the-ordinary behavior that could indicate suspicious or malicious activity. For example, service accounts typically log in to the same set of servers. If a service account logs in to a new device one day, or logs in interactively, that new behavior could indicate malicious activity. You typically want to alert on first time behavior only when the activity was first seen within the last 24 hours.

You can also perform first time analysis based on a peer group with this app. Find the values common among a group of similar people (a peer group) in order to filter out activity that is new for a particular person but not for that person's peers. For example, if John Seyoto hasn't checked out code from a particular git repo before, but John's teammate Bob regularly checks out code from that repo, that first time activity might not be suspicious.

Detect first time behavior with the stats command and the first() and last() functions. Integrate peer group first seen activity using eventstats. In the app, the demo data compares against the most recent value of latest() rather than "now," because events do not flow into the demo data in real time, so there is no value for "now."
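
As an illustration of the logic (not the app's actual SPL), this Python sketch tracks first-seen (user, repo) pairs and suppresses alerts when a teammate already used the repo before the detection window; all names are hypothetical:

```python
def first_seen_alerts(events, teams, window_start):
    """events: (time, user, repo) tuples, including historical baseline data.
    teams: user -> team name.
    Alert when a user touches a repo for the first time inside the detection
    window, unless a teammate touched that repo before the window started."""
    user_first = {}  # (user, repo) -> first access time
    team_first = {}  # (team, repo) -> first access time seen for that team
    alerts = []
    for t, user, repo in sorted(events):
        if (user, repo) in user_first:
            continue  # not a first-seen event for this user
        user_first[(user, repo)] = t
        team_seen = team_first.setdefault((teams[user], repo), t)
        if t >= window_start and team_seen >= window_start:
            alerts.append((user, repo))
    return alerts

# Bob has long used repoA, so John's first pull from it today stays quiet;
# John's pull from the brand-new repoB fires an alert.
events = [(1, "bob", "repoA"), (10, "john", "repoA"), (11, "john", "repoB")]
print(first_seen_alerts(events, {"bob": "dev", "john": "dev"}, window_start=5))
# → [('john', 'repoB')]
```

The peer-group suppression here plays the role eventstats plays in the SPL versions: it widens the baseline from one user to that user's group.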

The ability to detect first time seen behavior is a major feature of many security data science tools on the market, and you can replicate it with these searches in the Splunk platform out of the box, for free.

The first time seen searches address use cases that detect new values, such as "First Time Logon to New Server" and "New Interactive Logon from a Service Account," or any other search with "New" or "First" in the name.

Large Scale Version of First Time Seen Searches

In a large-scale deployment, use caching with a lookup for searches of this type. If you select a lookup from the "(Optional) Lookup to Cache Results" dropdown, the app automatically configures the search to use that lookup to cache the data. If you leave the value at "No Lookup Cache," the search runs over the raw data.

For example, to detect new interactive logons by service account, you would need to run a search against raw events with a time window of 30, 45, or even 100 days. The search might run against several tens of millions of events, and depending on the performance you expect from the search, it might make sense to cache the data locally.

The more performant version of these searches relies on a lookup to cache the historical data. The search runs over just the last 24 hours, adds the historical data from the lookup, recomputes the earliest and latest times for each combination, updates the cache in the lookup, and flags the new values.
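
A minimal sketch of that caching pattern, with the CSV lookup modeled as a dict mapping each combination to its (earliest, latest) timestamps; in the app the equivalent bookkeeping is done with lookups rather than Python:

```python
def update_cache_and_find_new(cache, recent_events):
    """cache: (user, dest) -> (earliest, latest) timestamps from prior runs.
    recent_events: (time, user, dest) tuples from the last 24 hours only.
    Merges the recent events into the cache and returns combinations never
    seen before, so each run scans only a day of raw data."""
    new_combos = []
    for t, user, dest in recent_events:
        key = (user, dest)
        if key in cache:
            earliest, latest = cache[key]
            cache[key] = (min(earliest, t), max(latest, t))
        else:
            cache[key] = (t, t)
            new_combos.append(key)
    return new_combos

cache = {("svc_backup", "hostA"): (100, 900)}  # historical baseline
print(update_cache_and_find_new(cache, [(1000, "svc_backup", "hostA"),
                                        (1001, "svc_backup", "hostB")]))
# → [('svc_backup', 'hostB')]
```

The trade-off described below follows directly from this structure: the cache must persist between runs, and its size grows with the number of unique combinations.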

Considerations for implementing the large scale version

Implementing historical data caching can improve performance. For a baseline data comparison of 100 days, and assuming that some of that data is in cold storage, historical data caching could improve performance up to 100 times.

However, relying on a cache also means storing a cache. The caches for these searches are stored in CSV lookup files on the search head. The more unique combinations of data that need to be stored, the more space needed on the search head. If a lookup has 300 million combinations to store, that lookup file can take up 230MB of space, which is non-trivial in some environments. If you implement the large-scale version of the searches, ensure that there is storage available on the search head for the lookups that provide historical data caching for these searches.

Lookups in this app are excluded from bundle replication and therefore are not distributed to the indexers. This keeps your bundles from getting too large, helping you keep your Splunk installation reliable. However, if you move the searches or lookups to a different app, the lookups associated with the searches will no longer be excluded from bundle replication. In that case, replicate the settings in distsearch.conf so that those lookups are not distributed to the indexers. The risks of large bundles are that changes take longer to go into effect and, in extreme cases, indexers can even be taken offline (bundles too far out of date).

General Splunk Searches

The remainder of the searches in the app are straightforward Splunk searches. These searches rely on tools available for the Splunk platform to perform anomaly detection, such as the URL Toolbox app to compute Shannon entropy for domain names, Levenshtein distance to identify lookalike filenames, the transaction command to group related events, and more. They typically don't require a historical baseline of data, so you can easily run them over even just the last half hour of data. You get the most value from these searches if you copy-paste the raw search strings into your Splunk deployment and start using them.
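
For instance, the kind of Shannon entropy calculation that URL Toolbox performs can be sketched in a few lines of Python (the example strings and any threshold you pick are illustrative):

```python
from collections import Counter
from math import log2

def shannon_entropy(s):
    """Bits of entropy per character: repetitive strings score near zero,
    while random-looking strings (e.g., DGA domains) score high."""
    n = len(s)
    return 0.0 - sum((c / n) * log2(c / n) for c in Counter(s).values())

# A repetitive name scores far below a random-looking one, so a simple
# threshold in a where clause can separate the two.
print(shannon_entropy("aaaaaaaaaaaa") < shannon_entropy("xq3kzv9bw1rt"))  # → True
```

A search would compute this score per domain or filename and filter on values above a chosen threshold.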

Release Notes

Version 1.4.5
June 20, 2017

Version 1.4.5 Release Notes:
* Several small bug fixes, including the pre-requisite checks for the demo data on distributed environments (thank you to tkreiner on answers for the bug report!)
* A checkbox now requires that you acknowledge "I Understand" instead of "I Do Not Understand" before continuing, in one of the most unusual bug reports of my career.

Version 1.4.4
June 13, 2017

Version 1.4.4 Release Notes:
* Several small bug fixes, including the pre-requisite checks for the demo data on distributed environments (thank you to tkreiner on answers for the bug report!)
* A checkbox now requires that you acknowledge "I Understand" instead of "I Do Not Understand" before continuing, in one of the most unusual bug reports of my career.

Version 1.4.3
May 23, 2017

1.4.3 Release Notes:
* Now doesn't trigger virus scanners! (Turns out a lot of vbscript can confuse some scanners)
* Adjusted the user for SFDC data to Chris, instead of Chuck
* Few small bug fixes

Version 1.4.2
April 18, 2017

1.4.2 Release Notes
* Fixed a bug with the Schedule Alert functionality in the Search Based Use Cases where the expected box didn't appear.

Version 1.4.1
March 19, 2017

1.4.1 Release Notes:
* Added anonymized_box_logs.csv -- this is an orphaned dataset you can use to generate a custom Detect Spikes use case. Go explore, or come to SplunkLive DC this Thursday (03/23) to walk through it with me live.
* Fixed a few live searches, and many pre-req searches that were missing index=* -- thank you {I didn't ask permission to use your name} for reporting that bug on the recent downloader survey!
* Fixed the broken pre-reqs for the Emails with Lookalike Domains use case.

Version 1.4.0
March 6, 2017

1.4.0 Release Notes
- New Data Source Check dashboard showcased! See what use cases you can run with the data you have today -- explore if there are use cases you can't run that you would like to!
- Added Docs + Support to the Nav Menu
- Wide array of small fixes and adjustments.

Version 1.3.2
Feb. 28, 2017

1.3.2 Release Notes:
- Added a debug capability to the new Data Source Check dashboard -- just add ?debug=true to the URL (e.g., /app/Splunk_Security_Essentials/data_source_check?debug=true) to get a text box that you can send me to help debug. (SSE Diags are on the roadmap)
- Added docs and support links to the nav -- support just pointing to Splunk Answers
- Small bug fixes to six pre-req searches (Thank you Andreas!)

Version 1.3.1
Feb. 23, 2017

1.3.1 Release Notes:
- Six new use cases for Salesforce.com Event Log Format data!
- New Application Accessing Salesforce.com API for User
- New High Risk Event Types for Salesforce.com User
- New Tables Queried by Salesforce.com Peer Group
- New Tables Queried by Salesforce.com User
- Spike in Downloaded Documents Per User from Salesforce.com
- Spike in Exported Records Per User from Salesforce.com
- New Data Check dashboard (currently in beta) that will look through all of your data and then run all of the data source pre-req checks. It's not currently on the dashboard, but if you're interested check out the details page to see more. https://splunkbase.splunk.com/app/3435/#/details
- Misc Bug Fixes

Version 1.3.0
Feb. 13, 2017

Happy #RSAC Everyone!

Version 1.3.0 Release Notes:
- Added Filters that breakdown use cases not just by security domain, but also by data source, alert volume, and the app version when that use case was added.
- Fixed a few small display bugs
- Fixed a bug where the app had an embarrassing error that led to pre-req checks being marked as present when they weren't

Version 1.2.0
Feb. 6, 2017

1.2.0 Release:
* Four new email use cases centered around phishing victims
* One new use case that detects if a particular sourcetype (e.g., security software) goes offline while the rest of the system is online.
* Major enhancement to the new lookup caching, in the form of the "Create Blank Lookup" button
* Several small bug fixes

Version 1.1.1
Jan. 30, 2017

1.1.1 Release Notes:
* Changed the Success/Error/Processing for the data checks to icons, so that people won't accidentally miss them!
* Fixed bug with the demo lookup cache, along with a few other small UI bugs.

Version 1.1.0
Jan. 20, 2017

1.1.0 Release Notes:
* Major New Feature: For First Seen Detections, you can cache historical results in a lookup, allowing searches to run dramatically faster in exchange for disk space. (30x performance improvement would be common)
* Several bugs fixed (thank you to andreasz for finding, and also proposing fixes to a few errors!)

Version 1.0.3
Jan. 17, 2017

1.0.3 Release Notes:
* Removed two features that weren't actually implemented. Thank you andreasz for the discovery!

Version 1.0.2
Jan. 12, 2017

1.0.2 Release Notes:
* New Logo
* Enhancement: you can now click Show SPL even if you don't have the data models (or data) present, to make it easy to copy-paste searches into production.

Version 1.0.1
Jan. 11, 2017

1.0.1 Release Notes:
* Fixed a bug that affected distributed deployments (no impact to the deployment, but prevented the app from working)
* New Logo

Version 1.0
Jan. 7, 2017

Initial Release
