Atlas Return on Investment
The Atlas platform is built to equip Splunk users and admins with the tools they need to make the most of their Splunk investment. The Atlas ROI Calculator is designed to reveal the cost savings associated with a Splunk team equipped with Atlas tooling and Expertise on Demand (EoD) services. By leveraging the platform and this expertise, an Atlas owner can achieve their Splunk goals and realize tangible cost savings in both labor and license.
This page walks through the Atlas ROI Calculator so you can assess how Atlas has provided, or will provide, a return on your investment.
Notice: Many calculator input values must be identified by a Splunk Admin, or by an Admin utilizing the Atlas Assessment as described in the Autofill section. If you need assistance, reach out to Expertise on Demand for a walkthrough.
Overview
Upon visiting the ROI Calculator site, multiple tiles populate in two sections: Labor and License. Each tile is associated with a specific benefit provided by Atlas and Expertise on Demand working together. Each tile has unique inputs, with default values shown in gray as recommendations based on best practices and experience. Users can enter their own numbers based on their own environment and experience.
Clicking the Information button in the top right of each tile provides additional details. Also in the top right of each tile is an On/Off switch that includes or excludes that tile from the overall estimated hours and dollars saved reported at the top right of the page.
All Labor tiles have an Hourly Rate input. This is the estimated, fully burdened cost of a corresponding full-time engineer performing the tasks.
More detail for each input can be found by selecting the blue ? to the right of each input label.
All tiles show a summary of hours and dollars saved for that specific benefit. The active tiles are summed into the overall values at the top right of the page.
Autofill
Many of the tiles rely on fixed values from your Splunk system. These can be autofilled by leveraging the Atlas Assessment. In the Atlas Assessment, navigate to the 'Environment Info' page and run the searches provided. Then press the 'View ROI Calculator' button that appears in the top right.
This button only uses data already displayed on the dashboard; no information is saved.
Labor
The Labor tiles provide an overview of how Atlas tooling and Expertise on Demand provide savings by reducing the time needed to achieve outcomes in Splunk.
Search Performance
Maintaining scheduler performance in Splunk is critical to ensuring that your Splunk environment is performant and running optimally. This typically involves activities like adjusting search time ranges, rescheduling searches, disabling unused searches, and eliminating search concurrency. Doing this work manually in Splunk can be difficult and time-consuming. The Search Performance tile provides an estimate of how Atlas and EoD will reduce the labor, and therefore the costs, associated with improving and maintaining scheduler health.
Let's walk through the inputs on the Search Performance tile:
- Number of Scheduled Searches: Number of scheduled searches in the Splunk environment.
- Search Investigation Rate: Number of scheduled searches that could be investigated by a Splunk Admin in an hour (without Atlas) for latency, SPL, configuration issues, and concurrency problems.
- Search Remediation Rate: Number of problematic scheduled searches a Splunk Admin could resolve in an hour (without Atlas) by testing reschedules and resolving configuration issues.
- Frequency Per Year: Number of times a year scheduled searches will be audited for Splunk best practice.
The final values are calculated by comparing the estimated time to investigate and remediate searches without Atlas to the time required with Atlas. To review those values, select the Information button on the tile. The hours saved are multiplied by the Hourly Rate provided, producing a final estimated Dollars Saved for Search Performance.
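As an illustration, here is a minimal sketch of that arithmetic in Python using made-up input values. The share of searches found problematic and the with-Atlas hours are placeholder assumptions; the calculator's actual with-Atlas figures are shown via the tile's Information button.

```python
# Hypothetical sketch of the Search Performance arithmetic.
num_scheduled_searches = 500   # Number of Scheduled Searches
investigation_rate = 10        # searches investigated per hour, without Atlas
remediation_rate = 4           # problematic searches remediated per hour, without Atlas
frequency_per_year = 2         # audits per year
hourly_rate = 85.0             # fully burdened hourly cost (USD)

# Baseline hours per year without Atlas: investigate every scheduled search,
# then remediate the ones found problematic (the problematic share is an assumption).
problematic_share = 0.2
hours_without_atlas = frequency_per_year * (
    num_scheduled_searches / investigation_rate
    + (num_scheduled_searches * problematic_share) / remediation_rate
)

hours_with_atlas = 15.0        # placeholder; the calculator supplies this value
hours_saved = hours_without_atlas - hours_with_atlas
dollars_saved = hours_saved * hourly_rate
print(f"Hours saved: {hours_saved:.0f}, Dollars saved: ${dollars_saved:,.0f}")
```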
Data Source Integrity
Data Source Integrity is the active and proactive management of, and alerting on, ingests in Splunk. Splunk is only as good as the stability of its datasets, and silent drops from unruly Splunk Forwarders or network changes can lead to dashboard and alerting inaccuracies. Atlas coalesces information, visualizations, and metadata, and normalizes alerting in an efficient manner to reduce these issues, while also reducing the labor and costs associated with maintaining strong data source integrity.
Let's walk through the inputs on the Data Source Integrity tile:
- Number of Forwarders: Number of Splunk Forwarders in the environment. These are frequently reported as 'hosts' in the Splunk system.
- Number of Index-Source Type Pairs: Number of index and source type pairs. This can be determined by searching recent data by index and source type.
- Forwarder Awareness Rate: Estimated number of hours dedicated to triaging a Splunk Forwarder for downtime and data issues per year without Atlas.
- Data Awareness Rate: Estimated number of hours dedicated to triaging data outages in your Splunk system per year without Atlas.
The final values are calculated by comparing the estimated time to investigate issues and set up proactive alerting without Atlas to the time required with Atlas. To review those values, select the Information button on the tile. The hours saved are multiplied by the Hourly Rate provided, producing a final estimated Dollars Saved for Data Source Integrity.
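Below is a rough sketch of that comparison. It assumes the awareness rates apply per forwarder and per index-source type pair respectively; that interpretation, the input values, and the with-Atlas hours are illustrative assumptions only.

```python
# Hypothetical sketch of the Data Source Integrity arithmetic.
num_forwarders = 200               # Number of Forwarders
num_index_sourcetype_pairs = 60    # Number of Index-Source Type Pairs
forwarder_awareness_rate = 1.0     # hours per forwarder per year, without Atlas
data_awareness_rate = 2.0          # hours per index-source type pair per year (assumed per pair)
hourly_rate = 85.0

hours_without_atlas = (
    num_forwarders * forwarder_awareness_rate
    + num_index_sourcetype_pairs * data_awareness_rate
)

hours_with_atlas = 40.0            # placeholder; supplied by the calculator
hours_saved = hours_without_atlas - hours_with_atlas
print(f"Dollars saved: ${hours_saved * hourly_rate:,.0f}")
```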
System Performance
System Performance is the process of finding system issues and creating reporting documentation. Atlas assists by providing prebuilt tools that coalesce and organize Splunk system data for easier consumption and tracking.
Let's walk through the inputs on the System Performance tile:
- Number of Splunk Servers: Number of Splunk Search Heads, Indexers, and Support servers in the environment.
- Hours of Investigation per Splunk Server: Estimated number of hours of investigation per year per Splunk server in the environment.
The final values are calculated by comparing the estimated time to investigate system performance without Atlas to the time required with Atlas. To review those values, select the Information button on the tile. The hours saved are multiplied by the Hourly Rate provided, producing a final estimated Dollars Saved for System Performance.
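A minimal sketch of that comparison, with illustrative inputs and a placeholder for the with-Atlas hours:

```python
# Hypothetical sketch of the System Performance arithmetic.
num_splunk_servers = 12                  # search heads, indexers, and support servers
hours_investigation_per_server = 8.0     # hours per year per server, without Atlas
hourly_rate = 85.0

hours_without_atlas = num_splunk_servers * hours_investigation_per_server
hours_with_atlas = 24.0                  # placeholder; supplied by the calculator
print(f"Dollars saved: ${(hours_without_atlas - hours_with_atlas) * hourly_rate:,.0f}")
```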
STIG Compliance
STIG Compliance is a premium tool that is part of the Atlas platform. It specializes in ingesting and reporting on DISA STIG checklist (CKL) data, reducing the labor spent on the reporting and communication required to maintain STIG compliance. This tile provides an estimate of the labor savings from leveraging the Atlas platform.
Let's walk through the inputs on the STIG Compliance tile:
- Number of Targets: Number of individual machines that require a STIG checklist. Frequently called 'hosts'.
- Average STIGs per Target: Average number of STIG checklists per target. Typically estimated to be three per target.
- Average Hours Spent per STIG Checklist: Average number of hours dedicated to creating, reporting, collecting, and managing a single STIG checklist.
- Frequency per Year: Number of times a year STIG checklists are created and reported on. Usually estimated to occur once a quarter.
The final values are calculated by comparing the time spent collecting, managing, reporting, and updating for STIG Compliance without Atlas to the time required with Atlas. To review those values, select the Information button on the tile. The hours saved are multiplied by the Hourly Rate provided, producing a final estimated Dollars Saved for STIG Compliance.
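A minimal sketch of that arithmetic, with illustrative inputs and a placeholder for the with-Atlas hours:

```python
# Hypothetical sketch of the STIG Compliance arithmetic.
num_targets = 50                 # machines ("hosts") requiring STIG checklists
avg_stigs_per_target = 3         # typical default of three per target
avg_hours_per_checklist = 2.0    # create, collect, manage, and report one checklist
frequency_per_year = 4           # quarterly reporting
hourly_rate = 85.0

hours_without_atlas = (
    num_targets * avg_stigs_per_target * avg_hours_per_checklist * frequency_per_year
)
hours_with_atlas = 100.0         # placeholder; supplied by the calculator
print(f"Dollars saved: ${(hours_without_atlas - hours_with_atlas) * hourly_rate:,.0f}")
```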
License
The License tiles provide a mechanism to estimate Dollars Saved on purchasing Splunk entitlements, using either the SVC Workload methodology for Splunk Cloud or the on-premises Ingest methodology.
Workload
Workload-based pricing tracks how many SVCs (Splunk Virtual Compute units) are consumed on a daily basis compared to a license limit. Searching, dashboard loading, alerts, and data ingest all factor into how SVCs are utilized. This tile enables users to estimate dollar savings on licensing from theoretical reductions in SVC usage achieved through Atlas and Expertise on Demand.
Let's walk through the inputs on the Workload tile:
- Price per SVC: Dollar cost per SVC, based on the overall license cost.
- License Entitlement: The total amount of daily SVCs in the Splunk entitlement.
- Estimated SVC Reduction: Percent of hypothetical SVC reduction.
The final Dollars Saved is the License Entitlement multiplied by the Price per SVC and the Estimated SVC Reduction. This tile does not capture cost savings from maintaining the current license instead of an uplift.
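For example, with illustrative numbers the Workload formula works out as follows:

```python
# Worked example of the Workload tile formula (illustrative values only).
price_per_svc = 600.0            # dollars per SVC
license_entitlement = 100        # daily SVCs in the entitlement
estimated_svc_reduction = 0.10   # 10% hypothetical reduction

dollars_saved = license_entitlement * price_per_svc * estimated_svc_reduction
print(f"Dollars saved: ${dollars_saved:,.0f}")   # $6,000
```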
Ingest
Ingest-based pricing compares the total daily data ingested into Splunk (ignoring internal indexes) against the Splunk entitlement purchased. This tile enables users to estimate dollar savings on licensing from a theoretical reduction in data ingest through the use of Atlas and Expertise on Demand.
Let's walk through the inputs on the Ingest tile:
- Price per GB: Dollar cost per GB, based on the overall license cost.
- License Entitlement: The total amount of daily ingest GBs in the Splunk entitlement.
- Average License Utilization: The average amount of data ingested (in GB) per day.
- Data Ingest Reduction: Percent of hypothetical ingest reduction.
The final Dollars Saved is the difference between License Entitlement and Average License Utilization, added to the product of Average License Utilization and Data Ingest Reduction. This sum is then multiplied by the Price per GB to produce the final value. This tile does not capture cost savings from maintaining the current license instead of an uplift.
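For example, with illustrative numbers the Ingest formula works out as follows:

```python
# Worked example of the Ingest tile formula (illustrative values only).
price_per_gb = 150.0                    # dollars per GB of daily entitlement
license_entitlement = 500.0             # GB per day entitled
average_license_utilization = 400.0     # GB per day actually ingested
data_ingest_reduction = 0.15            # 15% hypothetical reduction

unused_entitlement = license_entitlement - average_license_utilization  # 100 GB
reduced_ingest = average_license_utilization * data_ingest_reduction    # 60 GB
dollars_saved = (unused_entitlement + reduced_ingest) * price_per_gb
print(f"Dollars saved: ${dollars_saved:,.0f}")   # $24,000
```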