

Protocol Data Inputs

Admins: Please read about Splunk Enterprise 8.0 and the Python 2.7 end-of-life changes and their impact on apps and upgrades here.
This is a Splunk Add-On for receiving data via a number of different protocols such as TCP, TCP with TLS, HTTP(S) PUT/POST/file upload, UDP, WebSockets, and SockJS. The event-driven, non-blocking, asynchronous architecture is designed to handle connections and data at scale. The polyglot event bus allows you to declaratively plug in custom data handlers written in numerous languages (Java, JavaScript, Python, Groovy, Scala, Clojure, Ruby, etc.) to pre-process raw data before indexing in Splunk. Secure transport channels also allow for client certificate authentication.

Protocol Data Inputs v1.6.5


This is a Splunk Add-On for receiving data via a number of different data protocols.


  • TCP
  • TCP with TLS, optional client certificate authentication
  • UDP (unicast and multicast)
  • HTTP (PUT and POST methods only, data in request body & file uploads)
  • HTTPS (PUT and POST methods only, data in request body & file uploads), optional client certificate authentication
  • Websockets
  • SockJS

But we already have TCP/UDP natively in Splunk

Yes we do, and by all means use those. But if you want to perform custom data handling and pre-processing of the received data before it gets indexed (above and beyond what you can accomplish using Splunk conf files), then this Modular Input presents another option for you.

Furthermore, this Modular Input also implements several other protocols for sending data to Splunk.


This Modular Input utilizes Vert.x version 2.1.4 under the hood: http://vertx.io/vertx2/manual.html#what-is-vertx

This framework provides an implementation that is:

  • asynchronous
  • event driven (reactive)
  • polyglot (code custom data handlers in Java, JavaScript, Groovy, Scala, Clojure, Ruby, Python, or any JVM language with a Vert.x module)
  • non-blocking IO
  • scalable over all your available cores
  • able to serve high volumes of concurrent client connections

Polyglot Custom Data Handling / Pre Processing

The way in which the Modular Input processes the received raw data is entirely pluggable, so you can supply custom implementations should you wish.

This allows you to:

  • pre-process the raw data before indexing
  • transform the data into a more optimal form for Splunk
  • perform custom computations on the data that the Splunk search language is not the best fit for
  • decode binary data (encrypted, compressed, images, proprietary protocols, EBCDIC, etc.)
  • enforce CIM compliance on the data you feed into the Splunk indexing pipeline
  • basically do anything programmatic to the raw byte data you want
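To illustrate the kind of pre-processing a custom handler might perform, here is a minimal, framework-free Python sketch. The Vert.x verticle wiring is omitted and the function name is hypothetical; this only shows the data-shaping step itself: transparently decompressing gzipped payloads and wrapping each line in a JSON envelope before it would be handed to Splunk.

```python
import gzip
import json

def preprocess(raw_bytes):
    """Hypothetical pre-processing step: decompress gzipped payloads,
    decode to text, and wrap each line in a JSON envelope so the data
    arrives at Splunk in a more index-friendly shape."""
    # Transparently handle gzip-compressed payloads (magic bytes 0x1f 0x8b)
    if raw_bytes[:2] == b"\x1f\x8b":
        raw_bytes = gzip.decompress(raw_bytes)
    text = raw_bytes.decode("utf-8", errors="replace")
    # Emit one JSON object per non-empty line
    return [json.dumps({"event": line}) for line in text.splitlines() if line]

compressed = gzip.compress(b"hello\nworld\n")
print(preprocess(compressed))
```

In a real handler, logic like this would live inside the verticle that receives the raw bytes from the event bus.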

To do this, you code a Vert.x "Verticle" to handle the received data: http://vertx.io/vertx2/manual.html#verticle

These data handlers can be written in numerous JVM languages. http://vertx.io/vertx2/manual.html#polyglot

You then place the handler in the $SPLUNK_HOME/etc/apps/protocol_ta/bin/datahandlers directory.

On the Splunk config screen for the Modular Input there is a field where you can then specify the name of this handler to be applied.

If you don't need a custom handler then the default handler com.splunk.modinput.protocolverticle.DefaultHandlerVerticle will be used.

To get started, you can refer to the default handler examples in the datahandlers directory.

Supported languages and file extensions

  • Javascript .js
  • CoffeeScript .coffee
  • Ruby .rb
  • Python .py
  • Groovy .groovy
  • Java .java (compiled to .class)
  • Scala .scala
  • Clojure .clj
  • PHP .php
  • Ceylon .ceylon

Note: experimental Nashorn support is included for JS and CoffeeScript (requires Java 8). To use the Nashorn JS/CoffeeScript engine rather than the default Rhino engine, edit protocol_ta/bin/vertx_conf/langs.properties


TLS/SSL

TLS/SSL is provisioned using your own Java keystore, which you can create using the keytool utility that is part of the JDK.

Refer to http://vertx.io/vertx2/core_manual_java.html#ssl-servers
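For example, a self-signed server keystore can be generated with the JDK's keytool. The alias, password, filename, and CN below are illustrative placeholders, not values the app requires:

```shell
# Generate a self-signed RSA key pair in a new Java keystore.
# Alias, password, filename, and CN are illustrative placeholders.
keytool -genkeypair -alias splunk-server -keyalg RSA -keysize 2048 \
        -validity 365 -keystore server-keystore.jks \
        -storepass changeit -dname "CN=my-splunk-host"
```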


Client certificate based authentication can be enabled for the TLS/SSL channels you set up.

Vert.x Modules and Repositories

Any required Vert.x modules, such as the various language modules for the polyglot functionality (JS, Scala, Groovy, etc.), will be dynamically downloaded from online repositories and installed in your protocol_ta/bin/vertx_modules directory.

You can edit your repository locations in protocol_ta/bin/vertx_conf/repos.txt

Performance tuning tips

Due to the nature of the asynchronous, event-driven, non-blocking architecture, the out-of-the-box default settings may well suffice for you.

But there are some other parameters that you can tune to take better advantage of the underlying computing resources (ie: CPU cores) available to you.

These are the "server_verticle_instances" and "handler_verticle_instances" params.

Refer to http://vertx.io/vertx2/core_manual_java.html#specifying-number-of-instances for an explanation of how increasing the number of instances may help you.
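As a sketch, these tuning parameters might be set in an inputs.conf stanza like the following. The stanza name and values are illustrative assumptions; only the two parameter names come from this documentation:

```
[protocol://my_tcp_input]
# Number of server verticle instances; one per available core is a common starting point
server_verticle_instances = 4
# Number of data handler verticle instances
handler_verticle_instances = 4
```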

You can also tune the TCP accept queue settings (which also requires OS tweaks), particularly if you are receiving many connections within a short time span.

Refer to http://vertx.io/vertx2/manual.html#improving-connection-time
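On Linux, for example, the OS-side half of that tuning is raising the kernel's accept backlog limit with sysctl; the value shown is only an illustrative starting point:

```shell
# Raise the maximum TCP accept backlog on Linux (requires root)
sysctl -w net.core.somaxconn=1024
```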

Data Output

By default, data will be output to STDOUT in Modular Input Stream XML format.

However, you can bypass this if you wish and declare that data be output to a Splunk TCP port or via Splunk's HTTP Event Collector instead.
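For reference, the default STDOUT transport wraps each event in Splunk's modular input stream XML envelope. Here is a minimal Python sketch of that format; the stanza name is hypothetical, and the real formatting is performed inside the app's output verticles:

```python
from xml.sax.saxutils import escape

def to_stream_xml(events, stanza="protocol://my_input"):
    """Wrap raw event strings in Splunk modular input <stream> XML,
    the format the default STDOUT transport emits."""
    body = "".join(
        '<event stanza="{}"><data>{}</data></event>'.format(stanza, escape(e))
        for e in events
    )
    return "<stream>{}</stream>".format(body)

print(to_stream_xml(["hello world"]))
```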


Prerequisites

  • Splunk 5.0+
  • Java Runtime 1.7+
  • Supported on Windows, Linux, MacOS, Solaris, FreeBSD, HP-UX, AIX


Setup

  • Optionally set your JAVA_HOME environment variable to the root directory of your JRE installation. If you don't set this, the input will look for a default installed java executable on the path.
  • Untar the release to your $SPLUNK_HOME/etc/apps directory
  • Restart Splunk
  • If you are using a Splunk UI, browse to Settings -- Data Inputs -- Protocol Data Inputs to add a new input stanza via the UI
  • If you are not using a Splunk UI (ie: you are running on a Universal Forwarder), you need to add a stanza to inputs.conf directly, as per the specification in README/inputs.conf.spec. The inputs.conf file should be placed in a local directory under an App or User context.
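A minimal stanza might look like the following. The stanza name, parameter names, port, sourcetype, and index are all illustrative assumptions; consult README/inputs.conf.spec for the actual parameter names:

```
[protocol://tcp_example]
# Illustrative values only -- see README/inputs.conf.spec for the real parameter names
protocol = tcp
port = 9898
sourcetype = my_custom_data
index = main
```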

Activation Key

You require an activation key to use this App. Visit http://www.baboonbones.com/#activation to obtain a non-expiring key.


Logging

Any log entries/errors will be written to $SPLUNK_HOME/var/log/splunk/splunkd.log

These are also searchable in Splunk: index=_internal error protocol.py

JVM Heap Size

The default maximum heap size is 256MB.
If you require a larger heap, you can alter this in $SPLUNK_HOME/etc/apps/protocol_ta/bin/protocol.py on line 95

JVM System Properties

You can declare custom JVM system properties when setting up new input stanzas.
Note: these JVM system properties will apply to the entire JVM context, and therefore to all stanzas you have set up
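For example, you might supply standard JVM networking or locale properties in the system properties field. The properties below are standard JVM ones, not settings defined by this app, and the values are placeholders:

```
-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Duser.timezone=UTC
```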


Troubleshooting

  • JAVA_HOME environment variable is set, or "java" is on the PATH for the environment of the user you are running Splunk as
  • You are using Splunk 5+
  • You are using a 1.7+ Java Runtime
  • You are running on a supported operating system
  • Look for any errors in $SPLUNK_HOME/var/log/splunk/splunkd.log
  • Run this command as the same user that you are running Splunk as and observe the console output: "$SPLUNK_HOME/bin/splunk cmd python ../etc/apps/protocol_ta/bin/protocol.py --scheme"


This project was initiated by Damien Dallimore, damien@baboonbones.com

Release Notes

Version 1.6.5
June 30, 2019

Search/Replace (with chars or a hash) Custom Data Handler Example

Version 1.6.4
June 25, 2019

cosmetic fixes

Version 1.6.3
May 10, 2019

cosmetic fixes

Version 1.6.2
April 23, 2019

updated docs

Version 1.6.1
April 19, 2019

added trial key functionality

Version 1.6
March 28, 2019

docs updated

Version 1.5.1
June 3, 2018

minor manager xml ui tweak for 7.1

Version 1.5
May 27, 2018

Added an activation key requirement; visit http://www.baboonbones.com/#activation to obtain a free, non-expiring key
Docs updated
Splunk 7.1 compatible

Version 1.3
Nov. 17, 2016

Added the latest Jython jar to the main classpath, because the Jython language module that
is dynamically installed is missing some useful Jython modules (ie: json)

Version 1.2
July 28, 2016

Added an example handler for decompressing gzip content

Version 1.1
Nov. 24, 2015

Minor HEC data handling tweaks

Version 1.0
Sept. 22, 2015

Added support to optional output to Splunk via a HEC (HTTP Event Collector) endpoint

Version 0.7
Feb. 11, 2015

Enabled TLS1.2 support by default.
Made the core Modular Input Framework compatible with latest Splunk Java SDK
Please use a Java Runtime version 7+
If you need to use SSLv3, you can turn this on in bin/protocol.py

Version 0.6
Nov. 15, 2014

Abstracted the output transport logic out into verticles.
So you can choose STDOUT (the default for Modular Inputs) or bypass this and output
data to Splunk over other transports, ie: TCP.
This also makes it easy to add other output transports in the future.
Furthermore, this makes the implementation of custom data handlers much cleaner, as you don't have
to worry about output transport logic or formatting Modular Input Stream XML for STDOUT transports.

Version 0.5.1
Nov. 11, 2014

Added langs.properties and repos.txt to the classpath

Version 0.5
Nov. 10, 2014

Initial beta release

