Overview

After setting up a Splunk Environment using Gemini Enterprise Manage, one task that remains is provisioning a Splunk Monitoring Console. If you wish to take advantage of our new Gemini Splunk Inspect app, this task is essential.

The Splunk Monitoring Console is an invaluable tool for maintaining your Splunk Environment and assisting in troubleshooting. However, it is not enabled by default, and we strongly recommend enabling this app on one of your Gemini instances. This could be the Gemini Management Center node; alternatively, if you have fewer than 20 instances, the Cluster Master instance is an ideal choice. For larger sites, we recommend either dedicating a stand-alone search head or using another low-usage instance, such as one running as a Splunk License Master or Deployment Server.

In this document, we will show you how to add a Splunk Monitoring Console to your Gemini Cluster Master instance. To use another Gemini instance instead, such as the Management Center node, simply create a Splunk distributed search arrangement from the chosen instance that includes all the instances you wish to monitor.

 

Enable a Splunk Monitoring Console at the Cluster Master

Using a combination of the Gemini Splunk Environments and Node Name dashboards, determine which Gemini instance is running as the Cluster Master in your environment.

Log in to the Splunk web interface of the Cluster Master using the following URI:

http://<CM_gemini_instance>:8000

The default Splunk admin passwords on a Gemini instance are as follows, but check with the local Administrator as these may have been changed.

  • changeme (Gemini Manage 2.2 - 2.7)
  • gemini123 (Gemini Manage 2.8 and above)

Navigate to the Settings menu and select the Monitoring Console icon.

The Monitoring Console first needs to be enabled. From the Monitoring Console's own Settings menu, select the General Setup option.

  • At the Mode selector, choose the ‘Distributed’ option so that monitoring covers the multiple instances you will add using ‘Distributed Search’.
  • Select the ‘Apply Changes’ button to enable the Monitoring Console on this instance. Ignore any warning messages at this stage, and ‘Refresh’ the dashboard.

Following a dashboard refresh, the Indexers that make up the Splunk cluster should have been added to the dashboard, but Search Heads will not appear at this stage.

Splunk makes assumptions about the ‘Server roles’ present on each instance, and there are usually inconsistencies that need to be addressed. To correct them:

  • Use the ‘Edit Server Roles’ option found under the ‘Edit’ button alongside each instance and correct Server Roles accordingly.
  • Instances acting as Indexers should have only the ‘Indexer’ option selected.
  • Select the ‘Save’ button after each edit.
  • Select the ‘Apply Changes’ button at the General Setup dashboard to save the edits. Ignore any warnings at this stage.

Adding Search Heads and other instances to the Monitoring Console

It is important to note that the Monitoring Console will only be useful if it has been set up correctly to include all the relevant instances in your environment. This could include other Gemini instances that have been used as Search Heads, a Deployer, a License Master, a Deployment Server, etc.

Creating a 'Distributed Search' relationship

Adding instances to the Monitoring Console is simply a matter of creating a Distributed Search relationship from the Monitoring Console instance itself. This is achieved using the Splunk ‘add search-server’ CLI command. There are two options for running this command: using the Gemini web interface, or using an SSH terminal session, as detailed below.

 

Option 1 - Gemini web interface

At the Gemini instance that is being used as the Monitoring Console, log in as 'admin' using the following URI:

https://<CM_gemini_instance>

Navigate to the Splunk / Command dashboard, and enter the following command:

add search-server <instance_you_want_to_add>:8089 -remoteUsername admin -remotePassword gemini123 -auth admin:gemini123

If the Splunk Icon is not yet present, first ‘Activate’ Splunk from the Home menu.

The default Splunk password of gemini123 has been used in the example above. This may have been changed by your Administrator.
If the command is successful, the message ‘Peer added’ should appear below the command entry box. Repeat this process for all instances that need to be added.
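For illustration only, with a hypothetical search head hostname of sh01.example.com and the default credentials shown above, the completed command would read:

add search-server sh01.example.com:8089 -remoteUsername admin -remotePassword gemini123 -auth admin:gemini123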

 

Option 2 - SSH terminal

If you prefer, you can use the CLI command from a terminal. SSH into the instance being used as the Monitoring Console, and use the following command:

/opt/splunk/bin/splunk add search-server <instance_you_want_to_add>:8089 -remoteUsername admin -remotePassword gemini123 -auth admin:gemini123

The default Splunk password of gemini123 has been used in the example above. This may have been changed by your Administrator.

The ‘Peer added’ message should appear at the console for a successful addition.

Repeat this process for all instances that need to be added.
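If you have several instances to add, a small shell loop run on the Monitoring Console instance can save some typing. This is only a sketch: the hostnames below are placeholders, and the credentials are the defaults discussed above.

# Hypothetical hostnames; replace with your own instances and credentials
for host in sh01.example.com sh02.example.com deployer01.example.com; do
  /opt/splunk/bin/splunk add search-server ${host}:8089 \
    -remoteUsername admin -remotePassword gemini123 \
    -auth admin:gemini123
done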

 

Editing Instances at the Monitoring Console

Once you have added your chosen instances using a Distributed Search relationship from the Monitoring Console, you will once again need to correct Splunk's assumptions about the assigned Server roles.

  • Return to the Splunk web interface of the instance running the Monitoring Console and open the Monitoring Console dashboard from the Settings menu.
  • Navigate to the Settings / General Setup option, which should now reveal the additional instances added via the Distributed Search link.
  • Use the ‘Edit Server Roles’ option found under the ‘Edit’ button alongside each new instance and correct the Server roles accordingly.
  • Instances acting as Search Heads should have the ‘Search Head’ and ‘KV Store’ options selected.
  • The Deployer instance should have just the ‘Deployer’ option selected.
  • Select the ‘Save’ button after each edit.
  • Select the ‘Apply Changes’ button at the General Setup dashboard to save the edits. Ignore any warnings at this stage.

Note that there is an anomaly with the Deployer instance, in that it does not automatically have an SHC cluster label added. If you wish to add a label to the Deployer instance that reflects the Search Head Cluster label assigned to the Search Heads:

  • Select the ‘Edit Instance’ option located under the ‘Edit’ menu associated with the Deployer instance.
  • Begin typing the cluster label into the label field; an appropriate option should be presented as you type. Select it so that it appears as a grey highlighted entry.
  • Select the ‘Save’ button, followed by the ‘Apply Changes’ button on the General Setup dashboard.

 

Best Practice: Forwarding Internal Indexes

Having set up the Monitoring Console dashboard, it is normal to see a warning when saving changes, and again when running the built-in Health Check, relating to the forwarding of internal indexes.

Both messages refer to the same issue, which we strongly recommend that you address in order to fulfill the Splunk best practice of forwarding all search head and other non-indexer indexes to the indexer layer.

This has the following advantages:

  • All data will be accumulated in one place. This greatly simplifies the process of managing data as all data is managed at one level, the indexer layer.
  • It improves the output and efficiency of the Monitoring Console, and therefore assists with the troubleshooting of search heads, etc. Should any instance fail, all diagnostic data is accumulated at the indexer layer.
  • By forwarding the results of summary indexes over to the indexer level, all search heads will have access to the same data. Otherwise, data would only be available to the search head on which it was generated.
  • This best practice is an essential requirement if you are planning on running Enterprise Security.

To achieve this best practice, we essentially turn our non-indexing instances, such as Search Heads, into ‘forwarders’. These then forward summary indexes and internal indexes such as _internal, _audit and _introspection over to the clustered Indexers.

 

Method for Forwarding Indexes

This solution involves the creation of an outputs.conf file, which should be added to each Gemini instance affected by this issue. Run the Monitoring Console Health Check to determine the list of instances affected; in general these will include:

  • The Cluster Master
  • All Search Heads (clustered and stand-alone)
  • The Deployer
  • Separate instances acting as a Deployment Server, License Master, or Heavy Forwarder

The suggested location for the outputs.conf file is the $SPLUNK_HOME/etc/system/local directory, although for Search Head Cluster members the file will be delivered in the form of an shcluster bundle from the Deployer instance. Alternatively, deliver a deployable app containing this file using a Splunk Deployment Server.
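For reference, a sketch of the two placements used in this document (paths are relative to $SPLUNK_HOME; the app name outputs_base_app is simply the example created later in this document):

# Stand-alone instances such as the Cluster Master, a stand-alone search head or a License Master
etc/system/local/outputs.conf

# Search Head Cluster members: staged on the Deployer and pushed to each member as a bundle
etc/shcluster/apps/outputs_base_app/local/outputs.conf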

The process of creating an outputs.conf file can be completed using the Gemini web interface or, if preferred, using an editor such as ‘vi’ at a local terminal. Here we describe the use of the Gemini web interface, beginning with the Cluster Master instance itself.

 

Forwarding Indexes from the Cluster Master

Log in as the admin user to the Gemini Enterprise web interface of the Cluster Master instance using the following URI:

https://<CM_gemini_instance>

  • Navigate to the Splunk / Config Editor dashboard
  • Locate the system / local directory
  • Select the Create New File button
  • Create a file called outputs.conf and select the ‘Add’ button.
  • Select the outputs.conf file from the list of files to reveal the config editor.
  • Paste the contents of the suggested outputs.conf file below into the editor screen.

#Turn off indexing on this instance
[indexAndForward]
index = false

[tcpout]
defaultGroup = default-autolb-group 
forwardedindex.filter.disable = true  
indexAndForward = false 

[tcpout:default-autolb-group]
server=<indexer01>:9997,<indexer02>:9997

  • Carefully edit the last line to include ALL the Indexers in your cluster.

Note: Remove all the '< >' symbols, and use a comma-delimited list for your Indexers. We have assumed the default listening port of 9997, but change this also if another port is in use. An example of the final stanza is shown below.
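For illustration only, with two hypothetical Indexers named idx01.example.com and idx02.example.com, the finished stanza would look like this:

[tcpout:default-autolb-group]
server=idx01.example.com:9997,idx02.example.com:9997

In brief: the [indexAndForward] stanza turns off local indexing on this instance, forwardedindex.filter.disable = true disables the default forwarding filters so that all indexes, including the internal ones, are forwarded, and the default-autolb-group stanza load-balances the forwarded data across the listed Indexers.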

As this is the Cluster Master instance, we can restart it now to accept the changes.

  • Navigate to the Splunk / Daemon dashboard

  • Select the ‘Restart Splunk’ button, and accept the confirmation message.

 

Forwarding Indexes from the Deployer and SHC members

For Splunk Environments that include a Search Head Cluster, the outputs.conf file will need to be added to both the Deployer instance and all of the Search Heads that make up the SHC. We can achieve this in two stages:

  • Stage 1: Create an app on the Deployer that contains the outputs.conf file required to forward indexes from the Deployer.
  • Stage 2: Reuse this app by distributing it to all members of the SHC so that these too will forward indexes.

Alternatively, this can be achieved using a Deployment Server if one has been configured specifically to work in a clustered environment. This document does not cover that option.

Stage 1: The Deployer instance

To create an app at the Deployer instance, log in as the admin user to the Gemini Enterprise web interface of the Deployer using the following URI:

https://<Dep_gemini_instance>

  • Navigate to the Splunk / Command dashboard, and enter the following command to create a new app called ‘outputs_base_app’:

create app outputs_base_app -template barebones -auth admin:

  • Authenticate the command with the correct admin password.
  • If successful, the terminal should respond with the message: ‘App outputs_base_app is created.’

Open Gemini’s Splunk / Config Editor dashboard at the Deployer and navigate to the apps folder.

  • Locate and select the outputs_base_app and use the ‘Create New Folder’ button to create a folder called ‘local’.
  • Select the ‘local’ folder of the outputs_base_app, and use the ‘Create New File’ button to create an outputs.conf file there.

  • Select the outputs.conf file to reveal the config editor.
  • Paste the contents of the suggested outputs.conf file below into the editor screen.

    #Turn off indexing on this instance
    [indexAndForward]
    index = false

    [tcpout]
    defaultGroup = default-autolb-group 
    forwardedindex.filter.disable = true  
    indexAndForward = false 

    [tcpout:default-autolb-group]
    server=<indexer01>:9997,<indexer02>:9997
  • Carefully edit the last line to include ALL the Indexers in your cluster.

Note: Remove all the '< >' symbols, and use a comma-delimited list for your Indexers. We have assumed the default listening port of 9997, but change this also if another port is in use. The final line should follow the same format as the Cluster Master example shown earlier.

  • Select the Save button to complete the process.

This has created an app on the Deployer that, following a Splunk restart, will forward indexes correctly.

  • Navigate to Gemini’s Splunk / Daemon dashboard and select the ‘Restart Splunk’ icon

 

Stage 2: Search Heads within an SHC

Although we have created a suitable app at the Deployer, it is in the wrong location to be used as a ‘deployable app’ that can be sent to the Search Heads that make up the cluster. Apps and configurations that are required to be distributed to the Search Heads need to be placed in the /etc/shcluster/apps directory. The following process will simply copy our Deployer app to the appropriate folder for distribution.

  • Login as the admin user to the Gemini Enterprise web interface of the Deployer instance using the following URI:

https://<Dep_gemini_instance>

Open Gemini’s Splunk / Config Editor dashboard and navigate to the /etc/apps/ directory.

  • Scroll down to the outputs_base_app in the list and select the 'Copy' option from the adjacent Ellipsis menu.

  • Select the ‘Previous Folder’ button to reveal the 'shcluster' folder.
  • Select the ‘shcluster’ folder.

  • Select the ‘apps’ folder to reveal the final Copy destination of $SPLUNK_HOME/etc/shcluster/apps
  • Select the ‘Copy’ button to conclude the copy operation.

  • Verify the presence of the new app in the /shcluster/apps folder.
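If you prefer a terminal, the same copy can also be performed over SSH on the Deployer; this assumes the default Splunk installation path of /opt/splunk used elsewhere in this document.

cp -r /opt/splunk/etc/apps/outputs_base_app /opt/splunk/etc/shcluster/apps/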

Now the app is in the correct location to be sent to the Search Heads that make up the cluster. To push this out to the SHC, we need to use a CLI command.

  • Navigate to Gemini’s Splunk / Command dashboard of the Deployer, and enter the following command to complete the distribution:

apply shcluster-bundle --answer-yes -target https://<the_Captain_of_SHC>:8089 -auth admin:<password>

  • The message ‘Bundle has been pushed successfully to all the cluster members’ should appear at the terminal.
  • The outputs_base_app is copied to the /etc/apps/ folder of each Search Head participating in the cluster.
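As an illustration, if the current SHC captain were a hypothetical host named sh01.example.com, the completed command would read as follows (substitute your own captain hostname and admin password):

apply shcluster-bundle --answer-yes -target https://sh01.example.com:8089 -auth admin:gemini123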

Sometimes it is necessary to complete a rolling restart of the SHC members. If this is required, use the following command from Gemini's Splunk / Command dashboard of any of the Search Head Cluster members (not the Deployer!).

rolling-restart shcluster-members

Alternatively, this can also be achieved from the Splunk web interface of any Search Head Cluster member using the Settings/ Search Head Clustering dashboard.
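If you prefer an SSH terminal, the equivalent command run on any Search Head Cluster member (again assuming the default installation path of /opt/splunk) is:

/opt/splunk/bin/splunk rolling-restart shcluster-members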


 

Verification of Indexer forwarding

To verify that a non-indexer instance is forwarding its indexes to the Indexer layer, run a Health Check from the Monitoring Console, or run the following Splunk search from the Cluster Master or any Search Head to verify the known 'hosts':

index=_internal

Note that this search requires the Splunk user to have admin privileges. The list of hosts returned should include all the instances that are forwarding their internal indexes.
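If you would rather see the hosts summarized than scan raw events, a variation such as the following (a sketch, not part of the original procedure) groups the results by host:

index=_internal | stats count by host | sort host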


 

Forwarding Indexes from other non-indexer instances

For stand-alone search heads, or for other instances that have been set up as a Deployment Server or License Master, the exact same process outlined above for the Cluster Master can be used, and then verified as described in the ‘Verification of Indexer forwarding’ section.

 

Other Gemini Products

Augment the output of Splunk’s Monitoring Console by using our Splunk Inspect app within Gemini Explore. A free trial of Gemini Explore that includes the Splunk Inspect app can easily be arranged.

For details contact: contact@geminidata.com or view the details at Gemini Explore - Trial Introduction