In Q3’19, IRI created a new, free app for Splunk Enterprise and Enterprise Security users that seamlessly indexes data from any Voracity platform data wrangling or protection job. The app -- available on Splunkbase -- is the modern successor to the 2016 Voracity add-on to Splunk, and an alternative to using Splunk Universal Forwarder to index the data.
The app serves as an invocation and integration point for SortCL-compatible jobs and their results; i.e.,
IRI CoSort data cleansing and transformation
IRI NextForm data migration and replication
IRI FieldShield PII discovery and de-identification
IRI RowGen test data synthesis and population
The app launches these existing jobs -- usually created graphically in IRI Workbench, an IDE built on Eclipse -- and automatically ingests and indexes the output data from those jobs into Splunk. From there, the data is ready for Splunk’s usual analytics, visualizations, and actions.
The app works by running any specified IRI job script as a Splunk modular input, given the script’s location. It also accepts and passes additional SortCL command line arguments to the script, such as /WARNINGSON, /STATISTICS, or another named /OUTFILE target. The data flowing to Splunk through the app should be specified in a structured record format.
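As a rough sketch of what the modular input does under the hood, it effectively assembles a SortCL command line from the job script path and any extra arguments. The helper below is hypothetical (not the app’s actual code), and assumes the common /SPECIFICATION= form of naming a job script to SortCL:

```python
# Hypothetical sketch of how a modular input might assemble the SortCL
# command line; the real app's internals may differ.

def build_sortcl_command(job_path, extra_args=None):
    """Build the argument list for invoking SortCL on a job script.

    job_path   -- full path to the *.scl job script
    extra_args -- optional SortCL switches, e.g. ["/WARNINGSON"]
    """
    cmd = ["sortcl", f"/SPECIFICATION={job_path}"]
    cmd.extend(extra_args or [])
    return cmd

# Example: run a masking job, also writing run statistics to a side file
print(build_sortcl_command("/opt/iri/jobs/mask.scl", ["/STATISTICS=stats.log"]))
```

The resulting command would be run on the configured interval, with its stdout piped into the Splunk index.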
Installing the IRI App for Splunk
There are two options to install the IRI app:
download the app from Splunkbase and install it via the “install from file” button within your Splunk instance; or,
search for “IRI Voracity Data Munging and Masking App for Splunk” from within your Splunk instance and install it directly.
Creating a Data Input / Modular Input
The input to Splunk is the output of a Voracity job. The first /OUTFILE target in the job must be set to stdout for the data to be indexed into Splunk. Additional command line arguments can be specified in a modular input, including additional /OUTFILE= sections to send the data to other targets besides the pipe into Splunk.
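Because the first /OUTFILE target must be stdout, it can be worth sanity-checking a job script before registering it as an input. Here is a minimal, hypothetical helper (not part of the app) that scans a script’s text for its first /OUTFILE target; the sample script below is illustrative only:

```python
def first_outfile_is_stdout(script_text):
    """Return True if the first /OUTFILE target in a SortCL job script is stdout."""
    for line in script_text.splitlines():
        stripped = line.strip()
        if stripped.upper().startswith("/OUTFILE="):
            target = stripped.split("=", 1)[1].strip()
            return target.lower() == "stdout"
    return False  # no /OUTFILE section found at all

# Illustrative job fragment: first target is stdout, second is a backup file
job = """\
/INFILE=presidents.csv
/OUTFILE=stdout
/OUTFILE=backup.csv
"""
print(first_outfile_is_stdout(job))
```

A script failing this check would still run, but its records would not be piped into the Splunk index.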
Select the Voracity app from the dropdown menu in the upper-left corner of Splunk. “Inputs” should be the default page of the app. To create an input, select the “Create New Input” button and select the IRI Voracity (SortCL) Job to run, which will produce the input data for Splunk.
Enter a unique name for the data input (without spaces). Set the interval and index location for the data input.
The interval indicates how often the modular input will be run. The modular input invokes SortCL from the command line with the specified IRI job location and any additional command line arguments. The index selector specifies which index the data will be stored in. The IRI Job Location field must contain the full path to the IRI job script (*.scl file).
Additionally, seven different command line arguments can be added to the Modular Input to be run. These commands include /STATISTICS=, /WARNINGSON, /WARNINGSOFF, /DEBUG, /RC, /MONITOR=, and /OUTFILE=. Background on the SortCL program and its syntax can be found in the CoSort overview booklet.
An extra file path must be specified if the selected command ends in '='. Leave this field blank if no command is selected, or if the selected command does not end in '='.
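That rule is simple enough to express directly. The following one-line check is a hypothetical illustration of the dialog’s logic, not code from the app:

```python
def needs_file_path(command):
    """Per the dialog's rule: an extra file path is required only when the
    selected command ends in '='; otherwise the path field stays blank."""
    return bool(command) and command.endswith("=")

print(needs_file_path("/STATISTICS="))  # a target file must be given
print(needs_file_path("/WARNINGSON"))   # leave the path field blank
print(needs_file_path(""))              # no command selected
```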
Finally, click the “add” button at the bottom of the dialog. The name of your new input (*.scl job script file) should now be listed in the data inputs table.
Searching the Results
To search the results now available in Splunk, select “Search” from the navbar of the IRI App. Type source="name_of_modular_input_type://modular_input_name" to search the data indexed by the modular input.
This can also be accessed by clicking “Data Summary” and selecting the source from the “Sources” menu. Use Splunk commands such as |stats, |chart, and |timechart to help visualize your data. See this article for a prior example.
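When scripting searches, the same source filter can be assembled programmatically. The input-type and input names below are placeholders, and the appended |stats clause assumes a hypothetical "state" field in the indexed records:

```python
def source_search(input_type, input_name):
    """Build the Splunk search filter for data indexed by a modular input."""
    return f'source="{input_type}://{input_name}"'

# Placeholder names for illustration; "state" is an assumed record field
query = source_search("name_of_modular_input_type", "presidents_job")
print(query + " | stats count by state")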
Here is an example of U.S. president data from 1900 onward sorted in a simple CoSort SortCL job, and then indexed and visualized by Splunk:
In addition to visual analytics, you can also use the Splunk Adaptive Response Framework or a Phantom Playbook to take action on production or log data prepared in Voracity across a wide range of industries and applications. Examples would be wrangling customer transactions for Splunk to flag for promotion and pricing decisions, and MQTT or Kafka fed sensor data aggregated (and anonymized) by Voracity for Splunk to use in diagnostic or preventive alerts.
Contact firstname.lastname@example.org if you would like help building out data preparation, presentation, and prescriptive scenarios using Voracity and Splunk.
Release Notes
Splunk Enterprise 8.0 compatibility (migrated to Python 3)
Updated app icon
Fixed permissions issues within the app