Version: Atlas 2022.3.3

Platform Installation for Splunk Enterprise

Installation Requirements

Atlas will be installed on a dedicated Search Head or a Search Head Cluster deployed in your local on-premises environment. This Search Head solution will be referred to as the “Atlas Search Head” throughout this document.

If your Splunk deployment includes a Splunk Cloud environment, the Atlas installation process requires additional steps. Please refer to the installation instructions in Platform Installation (Splunk Cloud) instead of this document.

The Atlas Search Head

  • Can be a clustered or non-clustered Search Head, or an all-in-one (AIO) Splunk deployment
  • Must be able to connect to the Internet

You should have obtained the following artifacts and access to proceed with the installation:

  • Atlas Artifacts ZIPs (in .spl or .tar.gz format)
  • Atlas License Key
  • Admin access to the Atlas Search Head
  • Atlas Audit logging index (atlas_audit by default) configured
  • Appropriately sized Splunk hardware resource running Splunk Enterprise version 8.X.X or 9.0.X
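
To confirm the Splunk Enterprise version running on the target Search Head, a quick check against Splunk's server info REST endpoint can be run from the search bar (this is a standard Splunk endpoint, independent of Atlas):

    | rest splunk_server=local /services/server/info
    | table splunk_server version os_name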

Atlas Search Head Sizing Requirements

The Atlas Search Head should reflect the system resource recommendations for a Splunk Enterprise on-premises search head as provided in Splunk’s Capacity Planning documentation.

If you are evaluating Atlas, you can treat the following 'Evaluation only' specifications as minimum requirements for the Atlas Search Head. These provide enough resources to evaluate Atlas features, but as more users are added to the Atlas Search Head, you will need to increase resources accordingly.

Always remember that the Atlas Expertise on Demand Team can be leveraged to provide you with customized guidance for achieving optimal Atlas performance in your environment.

Evaluation only Atlas Search Head Specifications

  • An x86 64-bit chip architecture
  • 8 vCPUs at 2 GHz or greater per core
  • 8 GB RAM
  • 100 GB of dedicated storage (SSD-based storage with no less than 800 sustained IOPS; can be thin-provisioned)
  • A 1 Gb Ethernet NIC
  • A 64-bit Linux or Windows distribution

Minimum Search Head specification from Splunk Documentation

  • An x86 64-bit chip architecture
  • 16 physical CPU cores or 32 vCPUs, at 2 GHz or greater per core
  • 12 GB RAM
  • Search heads with high ad hoc or scheduled search loads should use SSDs. An HDD-based storage system must provide no less than 800 sustained IOPS. A search head requires at least 300 GB of dedicated storage space.
  • A 1 Gb Ethernet NIC
  • A 64-bit Linux or Windows distribution

Software Requirements

Splunk Enterprise

Splunk Enterprise 8.X.X or 9.X.X

Atlas Software

Provided by Kinney Group - see Downloading Atlas and Request a License Key

Atlas Platform Installation Overview

This guide will outline the steps required to install the Atlas Platform on your on-premises Atlas Search Head.
If you are installing Atlas to work with a Splunk Cloud Deployment, please switch to the Platform Installation (Splunk Cloud) page.

Getting Atlas up and running typically takes under two hours. The Atlas Platform comes paired with Expertise on Demand (EoD), and you are encouraged to reach out to EoD for Atlas installation support should you need help.

Atlas Distributed Install Matrix

Use the tables below to determine where and how to install the Atlas Platform in a distributed deployment of Splunk Enterprise or any deployment for which you are using forwarders to get your data in. Depending on your environment, your preferences, and the requirements of the add-on, you may need to install Atlas Elements and Technical Add-Ons (TAs) in multiple places.

| Atlas Element | Search Heads | Indexers | Heavy Forwarders | Universal Forwarders | Comments |
| --- | --- | --- | --- | --- | --- |
| Atlas Core | Yes | No | No | No | - |
| Atlas Assessment | Yes | No | No | No | - |
| App Awareness | Yes | No | No | No | - |
| Data Management | Yes | No | No | No | - |
| Data Utilization | Yes | No | No | No | - |
| ES Helper | Yes | No | No | No | Other deployment options may be considered, see ES Helper documentation for more information |
| Forwarder Awareness | Yes | No | No | No | - |
| Monitor | Yes | No | No | No | - |
| Monitor TA | Yes | Yes | No | No | Install on Index & Search Head Layer to create required Indexes |
| Scheduling Assistant | Yes | No | No | No | - |
| Scheduling Inspector | Yes | No | No | No | - |
| Splunk Migration Assistant | Yes | No | No | No | - |
| STIG Compliance | Yes | No | No | No | - |
| STIG Compliance STIG TA | Yes | Yes | Yes | Yes | Install on Index & Search Head Layer to create required Indexes & Data Transformations |
| STIG Compliance SCAP TA | Yes | Yes | Yes | Yes | Install on Index & Search Head Layer to create required Indexes & Data Transformations |

Further information on Atlas Monitor and STIG Compliance can be found on their respective documentation pages:

  • If your deployment will include the Atlas Element Atlas Monitor, please visit the Atlas Monitor page for installation instructions and requirements

  • If your deployment will include the Atlas Element STIG Compliance, please visit the Atlas STIG Compliance page for installation instructions and requirements

Installing the Atlas Platform on the Search Head

  1. Locate the Atlas Artifacts .zip file(s) you downloaded in the Downloading Atlas step. If you were given a single compressed file, unzip it to extract the individual Atlas Element packages. Each package is named after an Atlas Element, such as Core, Search Library, Data Management, and so on. There may be anywhere from five to more than ten of these files.

    • Example: atlas_data_management-1.2.0.tar.gz
  2. Sign in as an admin on the Atlas Search Head and navigate to ‘Manage Apps’.

  3. Select "Install App from File" button located in the top right.

  4. Choose one of the Atlas Element ZIPs identified in Step 1 (the order in which they are uploaded does not matter) and click "Upload". If you experience an issue, select the “Upgrade App” checkbox and try again.

  5. Repeat Steps 3-4 for all the remaining elements, keeping track of which elements have yet to be installed. (A command-line alternative to Steps 3-5 is sketched after these steps.)

    • Check your progress by searching “Atlas” on the Manage apps screen

  6. After all apps have been installed, click "Apps" and select "Atlas".

  7. A notice should appear prompting you to configure Atlas. Click "Continue to Configurations"; it should take you to the Licensing Dashboard in Atlas Core.

  8. Paste your License Key into the box and click "Save". Ensure that when you copy the key, you don’t add any new lines or spaces.

  9. Your Atlas applications should be ready to roll! If you have any issues, please reach out to Expertise on Demand.
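
As an alternative to Steps 3-5, the Atlas Element packages can also be installed from the command line on the Atlas Search Head using Splunk's install app CLI command. A minimal sketch, assuming the package has been copied to /tmp and using placeholder credentials (verify the exact options against your Splunk version's CLI documentation):

    $SPLUNK_HOME/bin/splunk install app /tmp/atlas_data_management-1.2.0.tar.gz -update 1 -auth admin:changeme

Repeat the command for each Atlas Element package, then continue with Step 6 above.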

Post Install Configuration

Configure Atlas Audit

Auditing tracks utilization of Atlas’s many tools and automations that speed up Splunk actions, and helps admins easily track their own and their users’ actions on the Atlas platform. This auditing does not share information with third parties and does not ‘reach out’ over the network; it remains entirely internal to the Splunk deployment, much like Atlas itself. Atlas logging should not generate more than 5 MB of index data per day.

  1. Configure the Atlas Audit Index

    • By default, Atlas Audit will store audit events in a Splunk index named atlas_audit, which is specified in the atlas.conf file located at $SPLUNK_HOME/etc/apps/atlas_core/default/. An example of the contents of the /default/atlas.conf file is:
    [license]
    license_key =

    [atlaslogs]
    index = atlas_audit
    sourcetype = atlas_logs
    • If you wish to use a different index name or an existing index, create or edit an atlas.conf file in the $SPLUNK_HOME/etc/apps/atlas_core/local folder and add an [atlaslogs] stanza with an index = entry set to the preferred index name. An example of an atlas.conf file in the /local folder that has been edited to specify a different audit logs index is as follows:
    [license]
    app = atlas_core
    disabled = 0
    license_key = (your Atlas license key)

    owner = nobody

    [atlaslogs]
    index = preferred_atlas_audit_index_name
    • Do not change or add any entry that would alter the sourcetype; it must remain atlas_logs for proper operation.
    • Do not edit the atlas.conf file in the /atlas_core/default/ folder - any changes made there will be overwritten during an upgrade.
  2. Create the Atlas Audit Index (required)

    • Using your preferred process, create the atlas_audit index (or your preferred index name, if different) on both your indexing and search tiers. (A minimal indexes.conf sketch follows these steps.)
  3. Restart or Refresh the Splunk Search Head to start capturing audit events.
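
If you manage indexes through configuration files rather than Splunk Web, a minimal indexes.conf stanza for the audit index might look like the following sketch (the paths shown use the common $SPLUNK_DB defaults; adjust them, and the index name, to your environment and distribute the file with your normal deployment process):

    [atlas_audit]
    homePath   = $SPLUNK_DB/atlas_audit/db
    coldPath   = $SPLUNK_DB/atlas_audit/colddb
    thawedPath = $SPLUNK_DB/atlas_audit/thaweddb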

To test the audit logging feature, view the Data Utilization dashboard and click on an index : sourcetype entry in the UTILIZATION BY DATASET panel. Then navigate to the Atlas Audit dashboard (top menu: Atlas > Audit) - the AUDIT LOGS panel should show that a TableDrilldown event has been logged.
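
You can also confirm that audit events are being indexed with a direct search (substitute your index name if you changed it); any non-zero count indicates audit logging is working:

    index=atlas_audit sourcetype=atlas_logs earliest=-24h
    | stats count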

ES Helper Configuration

The ES Helper Atlas app uses an “ES” Distributed Search Group (DSG), or a direct server reference, to communicate with the Enterprise Security (ES) Search Head.

  1. In the ES Helper app: Change the es_helper_target macro’s definition to “splunk_server_group=ES” if you have created an ES DSG, or “splunk_server=<ES Search Head server name>” if you have not, or if there is only one ES Search Head.

    • The server name can be found by running the following on the ES Search Head:

      | rest /services/server/info
      | table splunk_server
    • Navigate to Settings > Advanced search > Search macros, click the es_helper_target macro, edit the Definition field per the instructions above, and click Save.

  2. Configure Data Model Acceleration:

    • Atlas ES Helper requires the Search Head where it is installed to define the same Data Models and Acceleration settings as the Enterprise Security Search Head.

      • Install Splunk_SA_CIM from Splunkbase and configure datamodels.conf in the $SPLUNK_HOME/etc/apps/Splunk_SA_CIM/local directory to match the same file on your Enterprise Security Search Head.
    • By default, Splunk Indexers create separate data model summaries for each Search Head or Search Head Cluster that defines a data model, even if the definitions are identical. However, you can configure Splunk to use another Search Head’s data model summaries instead, which saves indexer storage and compute. Follow these steps to configure a remote data model summary:

      • Ensure the ES Search Head has been added as a search peer on the Atlas Search Head.

      • To find the GUID of the ES Search Head, run the following search:

        | rest splunk_server=local /services/search/distributed/peers
        | table peerName title guid search_groups
      • In the $SPLUNK_HOME/etc/apps/Splunk_SA_CIM/local folder, edit datamodels.conf. For each data model, add the following property (see the sketch after this list):

        • acceleration.source_guid = <GUID returned by the search above>
      • Additional information about sharing data model acceleration summaries can be found in Splunk’s documentation
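
As an illustration of the remote summary configuration described above, a $SPLUNK_HOME/etc/apps/Splunk_SA_CIM/local/datamodels.conf entry for a single data model might look like the following sketch (the Authentication model name and the GUID value are placeholders; repeat the setting for each accelerated data model defined on the ES Search Head):

    [Authentication]
    acceleration.source_guid = 12345678-ABCD-1234-ABCD-123456789012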

Review Splunk Permissions Required

Certain features in Atlas require users to have specific Splunk permissions to function. Refer to the feature matrices below to ensure functionality in the Atlas Platform. Splunk administrators with out-of-the-box permissions have all of these capabilities by default.

Atlas General Use

| Feature | Splunk Permissions |
| --- | --- |
| Access the Splunk API to utilize Splunk infrastructure information | rest_access_server_endpoints |
| Leverage KV Stores and set Macros | dispatch_rest_to_indexers, edit_kvstore |
| Write to Splunk Conf Files | rest_properties_set, rest_properties_get |

Scheduling Assistant

| Feature | Splunk Permissions |
| --- | --- |
| Update Scheduled Search Schedules using Element Automation | schedule_search, Write access to the saved search |
| View Scheduled Search in dashboard tables | Read access to the saved search |
| View Scheduled Search in dashboard KPI visuals | schedule_search, Read access to the saved search |

Scheduling Inspector

| Feature | Splunk Permissions |
| --- | --- |
| Update Scheduled Search Time Range using Element Automation | schedule_search |
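
If you manage roles through configuration files, these capabilities can also be granted in authorize.conf. A minimal sketch, assuming a hypothetical role named atlas_user (grant only the capabilities your users actually need; read/write access to saved searches is controlled by knowledge object permissions rather than capabilities):

    [role_atlas_user]
    rest_access_server_endpoints = enabled
    dispatch_rest_to_indexers = enabled
    edit_kvstore = enabled
    rest_properties_get = enabled
    rest_properties_set = enabled
    schedule_search = enabled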

Optional: Configure Distributed Search Groups

Distributed Search Groups (DSGs) enable Atlas users to search data over a specific set of search peers, such as all Search Heads or all Indexers.

DSGs cannot be configured in Splunk Web; the configuration file must be edited directly. Atlas Core comes with a distsearch.conf template to make setting up DSGs as simple as possible. In each stanza, the servers property consists of a comma-delimited list of servers in the following format: https://192.168.1.44:8089,https://192.168.1.62:8089,....

  1. Copy the distsearch.conf file in $SPLUNK_HOME/etc/apps/atlas_core/default/ to the /atlas_core/local folder.

  2. Edit the /local/distsearch.conf file by uncommenting and filling out each stanza that is relevant to your environment.

    Some of the applicable stanzas may include the following:

    [distributedSearch] - the base stanza

    If any search peers have been added using Splunk Web, this stanza will be populated with a comma-delimited list of these servers in the system-level distsearch.conf file located at $SPLUNK_HOME/etc/system/local/distsearch.conf

    • Copy the servers = entries from the [distributedSearch] stanza in /etc/system/local/distsearch.conf into (under) the same stanza in /etc/apps/atlas_core/local/distsearch.conf

    • Add all of the indexers to the servers list in this stanza; they are listed on the Search Peers page in Splunk Web

    • This stanza should now include all search peers shown in Splunk Web

[distributedSearch:ENV] - DSG for entire environment

This stanza allows the entire environment to be searched at once

  • Copy the server list from the base stanza into this one, adding localhost:localhost to include the Atlas Search Head itself

[distributedSearch:DEF] - DSG to be searched by default

This is the only stanza with default = true

This stanza consists of localhost:localhost as well as all Indexers. This ensures standard search behavior, and prevents search requests from being sent to non-indexers

caution

Note that Indexer Discovery is not currently supported: any newly discovered Indexer Cluster members will not automatically be added to either the base stanza or the DEF stanza and must be added manually. If you use indexer discovery and do not mind search requests being sent to non-indexers, it is recommended to exclude this group.

  • You can add servers into additional groups (stanzas) as needed for your environment. The distsearch.conf template in $SPLUNK_HOME/etc/apps/atlas_core/default contains examples of additional groups you can use if needed.

Remember: Do not edit the distsearch.conf file in the /atlas_core/default/ folder - any changes made there will be overwritten during an upgrade. All edits should be made in the distsearch.conf file in $SPLUNK_HOME/etc/apps/atlas_core/local.
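
Putting the stanzas described above together, a completed $SPLUNK_HOME/etc/apps/atlas_core/local/distsearch.conf might look like the following sketch (the IP addresses and ports are placeholders for your own search peers; 192.168.1.44 stands in for another Search Head and 192.168.1.62/63 for Indexers):

    # All search peers added to the Atlas Search Head
    [distributedSearch]
    servers = https://192.168.1.44:8089,https://192.168.1.62:8089,https://192.168.1.63:8089

    # Entire environment, including the Atlas Search Head itself
    [distributedSearch:ENV]
    servers = localhost:localhost,https://192.168.1.44:8089,https://192.168.1.62:8089,https://192.168.1.63:8089

    # Default group: the Atlas Search Head plus all Indexers
    [distributedSearch:DEF]
    default = true
    servers = localhost:localhost,https://192.168.1.62:8089,https://192.168.1.63:8089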