Downloading Technology Add-On for NetApp SANtricity
SHA256 checksum (technology-add-on-for-netapp-santricity_100.tgz): 26fc2d524bf1577d978148222f4ffea3dc17db99f3eab65c1325cb38556b315b
SHA256 checksum (technology-add-on-for-netapp-santricity_09.tgz): b76ce0a2738652ef6c0efa82a63434affa61821bad67289d4a38b9e143592a14

Technology Add-On for NetApp SANtricity

This technology add-on collects performance, configuration, and event monitoring data from NetApp E-Series/EF-Series storage arrays. This data is then visualized by the NetApp SANtricity Performance App for Splunk Enterprise.

This App requires Splunk version 6.1 or 6.2.

Version 1.0

The NetApp SANtricity Performance App for Splunk Enterprise provides visibility into the health and performance of NetApp E-Series and EF-Series storage systems. This document is intended to provide installation and deployment information for the app and background on its components.

For additional support with the NetApp SANtricity Performance App for Splunk Enterprise, please visit: http://community.netapp.com/t5/E-Series-SANtricity-and-Related-Plug-ins-Discussions/bd-p/e-series-santricity-and-related-plug-ins-discussions.

Preparing to Install

Prerequisites and Requirements

  • Splunk 6.1 or higher installed on a Linux server.

  • NetApp E-Series/EF-Series Storage Arrays (any E-Series or EF-Series model running controller firmware 7.84 or higher).

    Note: It may be necessary to enable the "Legacy Management Interface" on newer E-Series controllers in order for them to provide data to the NetApp SANtricity Performance App.

  • Storage capacity for configuration and performance data to accommodate one year of data collection. An additional Splunk license may be required.

    • For smaller storage arrays (up to 24 drives), estimate 100MB per day per storage array.

    • For larger storage arrays (300 drives or more), estimate 500MB per day per storage array.

  • NetApp SANtricity Web Services Proxy v1.3 or greater.

  • Technology Add-on for NetApp SANtricity - The data collection app, which also requires Python.

  • NetApp SANtricity Performance App - The data visualization app.
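As a rough illustration of the sizing guidance above, the following sketch estimates daily ingest and one-year storage needs. The per-array figures (100MB and 500MB per day) come from this document; the array counts in the example are hypothetical.

```python
# Rough ingest/storage estimator for the sizing bullets above.
# Per-array daily figures are taken from this document's guidance.

def daily_ingest_mb(small_arrays, large_arrays,
                    small_mb_per_day=100, large_mb_per_day=500):
    """Estimated Splunk ingest in MB/day for a mix of array sizes."""
    return small_arrays * small_mb_per_day + large_arrays * large_mb_per_day

def yearly_storage_gb(small_arrays, large_arrays):
    """Approximate storage needed to retain one year of collected data."""
    return daily_ingest_mb(small_arrays, large_arrays) * 365 / 1024.0

# Hypothetical example: five 24-drive arrays and two 300-drive arrays.
print(daily_ingest_mb(5, 2))           # MB per day
print(round(yearly_storage_gb(5, 2)))  # GB for one year
```

Estimates like this also indicate whether the deployment will exceed the 500MB/day trial license discussed below.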

Installing in a Single-Instance Splunk Environment

These instructions describe installation of all the components of the solution on a single server where Splunk version 6.1 or higher is installed. Because this node requires a Python interpreter to drive the REST collection mechanism, it is simpler if this is a full instance of Splunk (the splunk package, not the splunkforwarder package).

Note: These instructions assume that all of the components described here (Splunk Enterprise, the Web Services Proxy, and the app components) are installed on a single server.

Note: The data collection app will ingest a variable amount of data, depending upon the number and size of the monitored arrays. The volume of data may exceed the bounds of the trial license provided with the base package (500MB / day). Therefore, you may need to purchase a separate Splunk license to account for the data of the monitored arrays.

  1. Install the NetApp SANtricity Performance App.

    • Within the Splunk user interface, an Apps menu appears in the upper left corner. Clicking it reveals a Manage Apps button.

    • From within the next page (listing the installed applications), the button Install app from file will allow an administrator to upload an app directly to Splunk from their local system.

    • Restart Splunk if required.

  2. Install the Technology Add-on for NetApp SANtricity.

  3. Install and configure the NetApp SANtricity Web Services Proxy. The NetApp SANtricity Web Services Proxy is available from the NetApp downloads area and includes installation instructions.

  4. Enable statistics collection within the NetApp SANtricity Web Services Proxy. The wsconfig.xml file in the proxy's working directory (typically /opt/netapp/santricity_web_services_proxy) must be edited to include the stats.poll.interval setting. The interval is configured in seconds; the suggested value of 30 seconds is considered the baseline configuration.
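As a sketch only, assuming the proxy's wsconfig.xml uses key/value env entries (verify the exact element syntax against your installed file before editing), the added setting might look like:

```xml
<!-- Illustrative sketch: confirm against your installed wsconfig.xml. -->
<env key="stats.poll.interval">30</env>
```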

  5. Add arrays to NetApp SANtricity Web Services Proxy.

    Single step method (preferred):

    • The data collection app provides a shell script which uses REST API commands to add information about an array to the SANtricity Web Services Proxy while simultaneously configuring data collection in Splunk. By default, scripts are located in /opt/splunk/etc/apps/TA-netapp_eseries/bin. Invoke the script as below:

      ./Add_Array.sh [IP port of proxy] [IP 1 of array] [IP 2 of array] [username of proxy] [password of proxy]

      As an example:

      ./Add_Array.sh rw rw
      Note: The username and password provided must have read/write capabilities. By default, a read/write account called 'rw' with password 'rw' is provided in the NetApp SANtricity Web Services Proxy.
    • Restart Splunk.

    Multi step method:

    • The Add_Array.sh script used above invokes several other scripts, which can be invoked separately for a more manual install.

    • Add each array to the proxy by invoking the script as below:

      ./add_array_to_proxy.sh [IP port of proxy] [IP 1 of array] [IP 2 of array] [username of proxy] [password of proxy]
      As an example:
      ./add_array_to_proxy.sh rw rw
      Note: The username and password provided must have read/write capabilities. By default, a read/write account called 'rw' with password 'rw' is provided in the NetApp SANtricity Web Services Proxy.
    • Record the array ID of each of the configured arrays.

      Note: The name of the reported array is the user-facing name configured for the array. The array ID is a string of 32 hexadecimal digits (e.g. 03e01d53-5ea1-4a7a-8d87-ba6bfb6219a6). This array ID is used both in the REST API and in the Splunk data collection app to uniquely identify the array. To capture the array IDs, you can query the REST API of the NetApp Web Services Proxy itself, which listens on port 8443 by default. Adjust the URL as needed for your proxy's IP address and port.
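As a sketch of that query, the array IDs could also be captured programmatically. The hostname and credentials below are placeholders, and the /devmgr/v2/storage-systems endpoint path is assumed from the Web Services Proxy REST API; verify both against your deployment.

```python
# Sketch: list array names and IDs from the Web Services Proxy.
# Host, port, and credentials are placeholders; the endpoint path is
# assumed from the proxy's REST API and should be verified.
import base64
import json
import ssl
import urllib.request

def array_ids(systems_json):
    """Map each array's user-facing name to its array ID."""
    return {s["name"]: s["id"] for s in json.loads(systems_json)}

def fetch_systems(host="proxy.example.com", port=8443, user="ro", password="ro"):
    """Fetch the storage-systems list (self-signed certificates tolerated)."""
    url = "https://%s:%d/devmgr/v2/storage-systems" % (host, port)
    req = urllib.request.Request(url)
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return urllib.request.urlopen(req, context=ctx).read().decode()

# Example against a canned response in the proxy's JSON shape:
sample = '[{"id": "03e01d53-5ea1-4a7a-8d87-ba6bfb6219a6", "name": "array-01"}]'
print(array_ids(sample))
```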
    • Run the provided script for creating the inputs.conf definitions for each of the four REST API endpoints for each array and save the output to a file. The script is called create_splunk_inputs_for_array.sh. The required arguments are the IP and port of the proxy (as above), the array ID as recorded in the step above, and a read-only username and password pair. The output from the script creates four Splunk inputs.conf declarations. Example invocation:

      ./create_splunk_inputs_for_array.sh 03e01d53-5ea1-4a7a-8d87-ba6bfb6219a6 ro ro

      Note: The Web Services Proxy features a default read-only account called "ro", with password "ro". This set of credentials has sufficient access to read all of the four data endpoints required by the visualization app.

    • Place "inputs.conf" into the local subdirectory of the data collection app (/opt/splunk/etc/apps/TA-netapp_eseries/local/inputs.conf is the default location).

      Note: The file must be called inputs.conf for Splunk to recognize it, and it is placed in the local/ subdirectory (instead of default/) to protect these local customizations from an upgrade of the data collection app itself, which would replace anything in the default/ subdirectory.
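For illustration only, one of the four generated stanzas might resemble the following. The exact parameter names are defined by the REST API Modular Input app and the script's own output is authoritative; the proxy hostname here is hypothetical.

```ini
[rest://eseries_volume_stats_array01]
endpoint = https://proxy.example.com:8443/devmgr/v2/storage-systems/03e01d53-5ea1-4a7a-8d87-ba6bfb6219a6/analysed-volume-statistics
auth_type = basic
auth_user = ro
auth_password = ro
response_type = json
sourcetype = eseries:volume-stats
polling_interval = 60
```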

    • Alternatively, append the contents of "inputs.conf" to the file of the same name in the default/ subdirectory.

    • Restart Splunk.

Integrating With a Larger Splunk Infrastructure

  1. A large environment in this context has more than 50 storage arrays to be monitored. For information about distributed deployment for Splunk, please visit Splunk's documentation.

  2. Splunk version 6.1 or higher must be installed on the data collection node. Because this node requires a Python interpreter to drive the REST collection mechanism, it is simpler if this is a full instance of Splunk (splunk package, not splunkforwarder).

    Note: The data collection app will ingest a variable amount of data, depending upon the number and size of the monitored arrays. The volume of data may exceed the bounds of the trial license provided with the base package (500 MB per day). Therefore, you may need to purchase a separate Splunk license to account for the data of the monitored arrays.

  3. Configure the outputs.conf configuration to specify where the collector should send its data. This destination may in fact include several distinct indexers, across which the data will be spread. Consult the administrator of the Splunk deployment to determine the appropriate settings.

  4. For the rest of the installation, steps 2-5 in the previous section can be followed. Note that multiple instances of the SANtricity Web Services Proxy may be required.
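A minimal outputs.conf sketch for step 3, assuming the indexers listen on the conventional Splunk receiving port 9997 (the indexer hostnames are hypothetical):

```ini
[tcpout]
defaultGroup = eseries_indexers

[tcpout:eseries_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
```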

Upgrading Add-on from 0.9 to 1.0.0

  • After upgrading the add-on, run the upgrade_from_0.9_to_1.0.py script; the changes made in the add-on require this one-time manual operation.

Compatibility of the app, add-on and web service proxy versions

Add-on version 0.9 is compatible with Web Services Proxy versions 1.1, 1.2, 1.3, and 2.0.
Add-on version 1.0.0 is compatible with Web Services Proxy versions 2.0 and later.
All add-on versions are compatible with all main app versions.

Components Additional Information

Data Collection

  • The data collection elements for the NetApp E-Series/EF-Series App (hereafter data collection app) come packaged as the Technology Add-on for NetApp E-Series. This includes a partial copy of the REST API Modular Input app (http://apps.splunk.com/app/1546/) written by Damien Dallimore of Splunk, Inc.

  • The REST API app implements (in Python) a data collector for Splunk to retrieve data from an arbitrary REST API. The REST app supports a wide range of options for interacting with RESTful APIs, a few of which are leveraged by the data collection app. In addition, some of the data returned by the NetApp SANtricity Web Services Proxy is a bare JSON array (that is, a list of JSON objects, without any containing metadata). Code was added to the responsehandlers.py Python module to deal with these endpoints. The data collection app contains both the original base code from the REST API modular input app, as well as the special response handler.
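A minimal sketch of that idea (this mirrors the approach, not the app's actual responsehandlers.py code): split a bare JSON array so that each object becomes one Splunk event, and pass other payloads through unchanged.

```python
# Sketch of a response handler for bare JSON arrays, in the spirit of the
# custom handler described above (not the app's actual code).
import json

def split_bare_array(raw_response):
    """Yield one compact JSON string (one Splunk event) per object in a
    bare JSON array; pass non-array payloads through as a single event."""
    parsed = json.loads(raw_response)
    if isinstance(parsed, list):
        for obj in parsed:
            yield json.dumps(obj, separators=(",", ":"))
    else:
        yield json.dumps(parsed, separators=(",", ":"))

events = list(split_bare_array('[{"driveId": "d1"}, {"driveId": "d2"}]'))
print(len(events))  # prints 2: one event per drive object
```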

  • Also contained within the data collection app are utility scripts to assist Splunk administrators in getting the E-Series metrics into Splunk. The first of these is a simple shell script called add_array_to_proxy.sh. It provides a mechanism to quickly add a new array to the SANtricity Web Services Proxy. Note that this script is not the only way to monitor an array with the Web Services Proxy; it is provided as an optional aid. Note that it depends upon the cURL command-line utility to interact with the Web Services Proxy. This utility is provided by default in many standard Linux builds. The second script, create_splunk_inputs_for_array.sh, helps to populate the REST API data inputs. The modular input itself provides a form within the Splunk manager interface to define inputs. However, given that four separate inputs are required (see the Construction section below), each with a handful of options, the script is provided as a shorthand to create all of those input definitions.

  • In addition to providing a RESTful proxy to query information about the storage arrays, the SANtricity Web Services Proxy produces logs detailing its own operation. Contained within the data collection app are Splunk configuration files to read (inputs.conf) and interpret (props.conf and transforms.conf) these log files.


Data Visualization

The Splunk visualizations (commonly called dashboards) for the NetApp E-Series array are provided in a separate Splunk app called NetApp SANtricity Performance Dashboard for Splunk Enterprise (hereafter simply the app). The app contains Splunk views to quickly interpret the running configuration of the array, evaluate its performance, and provide insight into any unusual events emitted by the array itself. This app can be installed on the same host as the data collection app, but in larger Splunk environments, these would likely be separate hosts. More information on the installation can be found in the final section of this document.

Construction Additional Information

This section of the document is intended to elaborate on the moving parts of both the data collection and visualization apps. Some Splunk knowledge is assumed.

Data Sources

  • The Web Services Proxy for SANtricity communicates with the configured arrays to collect various metrics and configuration state from the arrays themselves. This communication is done via the SYMbol communication library, and is proprietary to NetApp. The RESTful API provided by the Web Services Proxy, however, can be accessed by any tool capable of making REST queries. When configured for stats collection, the Web Services Proxy will also periodically collect metrics and aggregate them to provide additional endpoints (the analysed-*-statistics endpoints) as described in the documentation.

  • The data collection app is aimed at collecting data from four distinct REST endpoints of the Web Services Proxy. These are the graph endpoint for configuration state, the mel-events endpoint to query the arrays for any status events, and the analysed-volume-statistics and analysed-drive-statistics endpoints for volume and drive specific performance metrics, respectively. The visualization app builds upon the data provided by each of these endpoints. Failure to collect the data from any of them may result in the dashboards of the app being blank.

Data Types

  • The four data endpoints mentioned above each provide distinct data sets. The shape and structure of the events describing drive performance are naturally very different from the single (large) event describing the array's configuration. In order to be able to isolate each of these when searching for data, they are each branded with a distinct sourcetype. This metadata field in Splunk determines which set of parsing rules is used to make sense of the raw events. The sourcetypes for the data collection app are all prefixed with the substring eseries. This is to ensure that the data stands out in a list of sourcetypes, but also to ensure that name collisions with other data sources do not occur. For reference, the map of sourcetypes to their respective endpoints (as well as the Web Services Proxy's own logs) is found in the table below.

  • The input configuration for the data collection app specifically requests JSON-formatted data from the REST API. JSON (JavaScript Object Notation) is a structured data format natively understood by Splunk. Three of the data endpoints (all except /graph) emit a list of these JSON objects in reply to the REST query. The data collection app includes a special response handler to break such a list apart, indexing each listed object as a single event within Splunk.

  • Collecting the log events from the Web Services Proxy itself can also help alert administrators to potential problems. These logs are contained within a subdirectory of the proxy install location. Each of the logs contains different information about the activities of the proxy. The system.log.<n> files record the calls made to the REST proxy, as well as the proxy's own calls to the array. The latter are helpful if there are communication issues when polling the array. The webserver.log.<n> files contain startup and shutdown messages from the web server. Also contained in the log directory are the trace.log.<n> files. Events captured here include the queries to the REST API, and also the stack trace of any Java exceptions encountered during its operation. As such, trace.log.<n> acts as a superset of the system.log.<n> logs. The base filenames and sourcetypes of the proxy logs are also captured in the table at the end of this section.

  • Splunk ingests the text-based log files natively as well, even though they are not in a JSON format. Data parsing rules have been provided in the visualization app to identify useful fields within the data.

Endpoint/Logfile => Sourcetype

  • REST: <arrayid>/graph => eseries:graph

  • REST: <arrayid>/analysed-drive-statistics => eseries:drive-stats

  • REST: <arrayid>/analysed-volume-statistics => eseries:volume-stats

  • REST: <arrayid>/mel-events => eseries:mel-events

  • Log: logs/audit.log.<n> => eseries:web-proxy:audit

  • Log: logs/system.log.<n> => eseries:web-proxy:system

  • Log: logs/trace.log.<n> => eseries:web-proxy:trace

  • Log: logs/webserver.log.<n> => eseries:web-proxy:webserver

Splunk Components

  • The REST API modular input is an app built upon the modular input functionality present in versions of Splunk numbered 5.0 or higher. This kind of input provides the flexibility required to seamlessly ingest data from a myriad of sources, including RESTful APIs like that found in the Web Services Proxy. Because the modular input is implemented in Python, a Python interpreter must be present on the system doing data collection. A full instance of Splunk (that is, one that is not a Universal Forwarder) bundles a Python interpreter, so this may be the quickest path to success when installing the data collection app.

  • Naturally, there must be some place to store the data once it has been collected. This could take the form of a standalone Splunk instance, which could simultaneously do data collection as well. In larger environments, however, it may be part of a larger Splunk installation, with many indexers, and several separate search heads. (See here for information about the topology of large-scale Splunk environments.) When integrating the data collection app with a larger Splunk installation, the outputs.conf file (dictating where data should be sent) for the collector Splunk instance should refer to the indexers of the larger installation. This step is unnecessary for the single instance configuration.

  • The views and dashboards of the visualization app have been created using Splunk's Simple XML framework. This allows administrators to edit dashboards, adjusting panel placement and content. It also permits relatively rapid development of new content. The capabilities of this framework are always expanding thanks to the Splunk development team, but an attempt to find a least common denominator has been made. At present, the functional requirement for the Splunk software rests at version 6.0+. One special dashboard (implemented as a proof of concept) utilizes features that require Splunk 6.1 or higher. In light of this, it has not been added to the base navigation menu, and instead exists as an Easter egg, requiring direct entry of the URI in the Splunk Web interface.

Release Notes

Version 1.0.0
Aug. 16, 2017

Version 0.9
Dec. 17, 2014

