Splunk Interview Questions

Splunk is regarded as one of the best log monitoring and analysis tools in the IT industry. It is a leading analytics and Big Data tool, and Splunk professionals are in short supply in the business world. If you want to be a successful Big Data professional, you must have specialist skills and knowledge in Splunk.

As a result, competition for Splunk jobs is fierce in the market. We have compiled a list of Splunk interview questions and answers with the assistance of industry professionals to help you prepare for your interview. The Splunk interview questions are divided into three sections, basic, intermediate, and advanced, based on the level of difficulty. Look over the following Splunk interview questions and prepare to ace your interview:


Basic Level Splunk Interview Questions:

1) What is a Splunk app?

Ans: A Splunk app is a container/directory of Splunk configurations, searches, dashboards, and so on.


2) What features aren't available in Splunk Free?


Splunk Free lacks the following features:

  • Authentication, as well as scheduled searches and alerts
  • Distributed search
  • TCP/HTTP forwarding (to non-Splunk systems)
  • Deployment management

3) What actually occurs if the License Master cannot be reached?

Ans: If the license master is unavailable, the license slave starts a 24-hour timer, after which search is blocked on the license slave (though indexing continues). Users will not be able to search data in that slave until it can reconnect to the license master.

4) What exactly is the Splunk Summary Index?

Ans: A summary index is the index where Splunk Enterprise stores the results of scheduled searches that have summary indexing enabled. The default summary index (the one Splunk Enterprise uses if we do not indicate another) is called 'summary'.

We may need to create additional summary indexes if we intend to run a variety of summary index reports.

5) What exactly is Splunk DB Connect?

Ans: Splunk DB Connect is a Splunk SQL database plugin that allows us to easily integrate database data with Splunk queries and reports.

6) Where can I find the Splunk Default Configuration?

Ans: $SPLUNK_HOME/etc/system/default

7) Can you name a few of Splunk's most important configuration files?


  • props.conf
  • indexes.conf
  • inputs.conf
  • transforms.conf
  • server.conf

8) What are the different types of Splunk licenses?


  • Enterprise license
  • Free license
  • Forwarder license
  • Beta license
  • Search head licenses (for distributed search)
  • Cluster member licenses (for index replication)

9) What exactly is a Splunk Forwarder? What are the different kinds of Splunk Forwarders?

Ans: Splunk Forwarders are classified into two types, as shown below:

  • Universal Forwarder (UF): A lightweight Splunk agent installed on a non-Splunk system to gather data locally; it cannot parse or index data.
  • Heavyweight Forwarder (HWF): A complete Splunk instance with advanced functionality. It generally functions as a remote collector, intermediate forwarder, and possible data filter; because it parses data, it is not recommended for production systems.

10) What is the most recent Splunk version in use?

Ans: The latest version is Splunk 8.2.1 (as of June 21, 2021).


11) What exactly is Splunk Indexer? What are the Splunk Indexing stages?

Ans: Splunk Indexer is the component of Splunk Enterprise that creates and manages indexes. An indexer's primary functions are as follows:

  • Indexing incoming data
  • Searching the indexed data


12) What are the Splunk components? Explain the architecture of Splunk.

Ans: This is one of the most common Splunk interview questions. Splunk's components are listed below:

  • Search Head: Provides a graphical user interface for searching.
  • Indexer: Indexes machine data.
  • Forwarder: Sends logs to the Indexer.
  • Deployment Server: Manages Splunk components in a distributed environment.

13) What exactly is Splunk?

Ans: Splunk is the "Google" of machine-generated data. This is an application which can be used to search, visualize, monitor, and report on our enterprise data. Splunk transforms valuable machine data into powerful operational intelligence by providing real-time insights into our data via charts, alerts, reports, and so on.

14) Which port numbers does Splunk use?

Ans: The most common port numbers used by Splunk are:

  • Splunk web port: 8000
  • Splunk management port: 8089
  • Splunk indexing port: 9997
  • Splunk network port: 514
  • KV store: 8191
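These ports are configurable. A rough sketch of where the defaults above live, using standard Splunk configuration files (the stanza values shown are assumptions matching the defaults, not taken from this article):

```
# web.conf -- Splunk Web and management ports
[settings]
httpport = 8000
mgmtHostPort = 127.0.0.1:8089

# inputs.conf on the indexer -- listen for data from forwarders
[splunktcp://9997]
disabled = 0
```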

Intermediate splunk interview questions:

1) Explain the difference between Stats and Transaction commands.


Ans: This is another commonly asked Splunk interview question that will put a Developer's or Engineer's knowledge to the test. The transaction command is most useful in the following situations:

  • When a unique ID (from one or more fields) alone is insufficient to distinguish between two transactions. This is the case when the identifier is reused, for example in web sessions identified by a cookie/client IP. Here, time spans or pauses are also used to segment the data into transactions.
  • When an identifier is reused, for example in DHCP logs, and a specific message identifies the start or end of a transaction.
  • When it is preferable to see the raw text of the events combined, rather than an analysis of the events' constituent fields.

In the other cases, statistics are usually preferable.

  • Because the stats command performs better, particularly in a distributed search environment, it should be preferred where possible.
  • The stats command should be used when a unique ID is available.
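A short SPL illustration of the two approaches; the sourcetype and field names (JSESSIONID, clientip, uri) are hypothetical examples, not from the article:

```
# Group events into sessions when the cookie alone is not unique;
# maxpause closes a transaction after 10 minutes of inactivity
sourcetype=access_combined | transaction JSESSIONID clientip maxpause=10m

# When a truly unique ID exists, stats is faster, especially in distributed search
sourcetype=access_combined | stats count, values(uri) AS pages BY JSESSIONID
```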

2) Could you create a generic regular expression for extracting IP addresses from logs?


Ans: We can extract the IP address from logs in a variety of ways. Here are some examples:

With the help of a regular expression:

rex field=_raw "(?<ip_address>\d+\.\d+\.\d+\.\d+)"

OR

rex field=_raw "(?<ip_address>([0-9]{1,3}[\.]){3}[0-9]{1,3})"
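The same pattern can be sanity-checked outside Splunk. Here is a small shell sketch (the sample log line is made up) using grep with the equivalent extended regex:

```shell
# Extract the IPv4-looking token from a hypothetical log line
echo "Failed login from 192.168.1.77 port 22" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}'
# prints 192.168.1.77
```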


3) How do you troubleshoot Splunk performance problems?

Ans: The answer to this question would be extremely broad, but an interviewer would primarily be looking for the following keywords:

  • Examine the splunkd.log file for any errors.
  • Examine server performance issues such as CPU, memory usage, disk I/O, and so on.
  • Configure the SOS (Splunk on Splunk) app and confirm the dashboard for warnings and errors.
  • Examine the number of saved searches currently running and their use of system resources.
  • Install the Firebug Firefox extension. Log in to Splunk (via Firefox), launch the Firebug panels, and navigate to the 'Net' panel (we will have to enable it).
  • The Net panel displays the HTTP requests and responses, along with the time spent on each. This quickly provides a lot of information, such as which requests are causing Splunk to hang and which requests are blameless.

4) What exactly are buckets? Describe the Splunk Bucket Lifecycle.


Ans: Splunk stores indexed data in directories known as 'buckets'. A bucket is a physical directory that contains events from a specific time period. As it ages, a bucket goes through the following stages:

  • Hot: A hot bucket contains newly indexed data. It is open for writing. Each index may have one or more hot buckets.
  • Warm: A warm bucket contains data that has been rolled out of a hot bucket. There are many warm buckets.
  • Cold: Data in a cold bucket is rolled out from a warm bucket. There are many cold buckets.
  • Frozen: A frozen bucket contains data that has been rolled out of a cold bucket. By default, the indexer deletes frozen data, but we can archive it instead. Archived data can be thawed at a later date (data in a frozen bucket is not searchable).

By default, the buckets are located in:

$SPLUNK_HOME/var/lib/splunk/defaultdb/db

We should be able to see the hot-db and any warm buckets there. Splunk's default maximum bucket size is 10 GB on 64-bit systems and 750 MB on 32-bit systems.
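The bucket-stage paths are controlled per index in indexes.conf. A minimal sketch, assuming a hypothetical index name:

```
# indexes.conf -- hypothetical index illustrating the bucket lifecycle paths
[my_app_index]
homePath   = $SPLUNK_DB/my_app_index/db        # hot and warm buckets
coldPath   = $SPLUNK_DB/my_app_index/colddb    # cold buckets
thawedPath = $SPLUNK_DB/my_app_index/thaweddb  # thawed (restored frozen) data
# Optional: archive frozen data to a directory instead of deleting it
coldToFrozenDir = /archive/my_app_index
```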

5) What's the distinction between the stats and eventstats commands?

Ans: The stats command generates summary statistics of all the existing fields in the search results and saves them as values in new fields. Eventstats is similar to stats, except that the aggregation results are added inline to each event, and only when the aggregation applies to that event. Like stats, the eventstats command computes the requested statistics, but it writes the aggregates back onto the original raw events.
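A short SPL sketch of the difference (the index and field names are hypothetical):

```
# stats collapses the results: one summary row per user, raw events are gone
index=web | stats count AS events_per_user BY user

# eventstats keeps every raw event and adds the aggregate as a new field on each one
index=web | eventstats count AS events_per_user BY user
```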

6) Who are Splunk's main direct competitors?

Ans: Splunk's direct competitors include Logstash, Loggly, LogLogic, Sumo Logic, and others.

7) What are Splunk Licenses?

Ans: Splunk licenses limit the amount of data we can index per calendar day.

8) From a licensing standpoint, how does Splunk determine one day?

Ans: In terms of licensing, one day for Splunk is defined as midnight to midnight on the license master's clock.

9) How do you get a Forwarder License?

Ans: They come standard with Splunk, so there is no need to purchase them separately.

10) What command should be used to restart the Splunk web server?

Ans: This is another common interview question about Splunk commands, so learn everything there is to know about them. The Splunk web server is restarted via the Splunk CLI.
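On older Splunk versions, where the web interface ran as a separate splunkweb process, the command was:

```
splunk restart splunkweb
```

On current versions the web server runs inside splunkd, so a full `splunk restart` is used instead.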

11) What command should I use to restart Splunk Daemon?

Ans: The Splunk Daemon can be restarted using the following command:

 Splunk start splunkd

12) What command is used to check the status of running Splunk processes on Unix/Linux?

Ans: If we want to inspect the Splunk Enterprise processes that are currently running on Unix/Linux, we can use the following command:

ps aux | grep splunk

13) What command is used to enable Splunk to boot up?

Ans: To get Splunk up and running, run the following command:

$SPLUNK_HOME/bin/splunk enable boot-start

14) How do I disable Splunk's boot-up?

Ans: We can use the following commands to disable Splunk boot-start:

$SPLUNK_HOME/bin/splunk disable boot-start 

15) What is Splunk's Source Type?

Ans: The source type is a field Splunk uses to identify the format of incoming data so that it can apply the correct parsing rules.

Advanced splunk interview questions:

1) How do I change my Splunk Admin password?

Ans: Resetting the Splunk Admin password is dependent on the Splunk version. If we are using Splunk 7.1 or higher, we must perform the following steps:

  1. First, we must terminate our Splunk Enterprise.
  2. We must now locate the ‘passwd' file and rename it to ‘passwd.bk.'
  3. Then, in the following directory, create a file called ‘user-seed.conf':

$SPLUNK_HOME/etc/system/local/

We will need to add the following settings to the file (entering our own new password instead of 'NEW PASSWORD'):
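The documented format of user-seed.conf is as follows (NEW_PASSWORD is a placeholder for the password you choose):

```
# $SPLUNK_HOME/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = NEW_PASSWORD
```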



After that, we can simply restart Splunk Enterprise and log in with the new password.

Now, if we are using a version prior to 7.1, we will do the following:

  1. First, terminate the Splunk Enterprise.
  2. Locate the passwd file and rename it to 'passwd.bk'.
  3. Start Splunk Enterprise and log in with the default admin/changeme credentials.
  4. When prompted to enter a new password for our admin account, we will do so according to the instructions.


2) How do I turn off the Splunk Launch Message?

Ans: Set the value OFFENSIVE=Less in splunk-launch.conf.

3) How Do I Delete Splunk's Search History?

Ans: To clear the Splunk search history, we can delete the search history file from the Splunk server.
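The file in question is commonly cited as the search head's searches.log; the path below is a commonly documented default and is worth confirming against your version's documentation:

```
$SPLUNK_HOME/var/log/splunk/searches.log
```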


4) What exactly is Btool? How are you going to troubleshoot Splunk configuration files?

Ans: Splunk btool is a command-line tool that allows us to troubleshoot configuration file issues, or simply to see what values our Splunk Enterprise installation is using in the current environment.
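For example, to see the merged, effective settings for inputs.conf, along with the file each value comes from:

```
# List the effective inputs.conf values; --debug shows the source file of each line
$SPLUNK_HOME/bin/splunk btool inputs list --debug
```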

5) What is the distinction between the Splunk App and the Splunk Add-on?

Ans: Both contain preconfigured configuration, reports, and so on, but the Splunk add-on does not have a visual app. A Splunk app, on the other hand, comes with a preconfigured visual app.

6) What is the precedence of.conf files in Splunk?

Ans: The order of precedence for files is as follows:

  1. The system local directory has the highest priority.
  2. Local app directories
  3. Default app directories
  4. The system default directory has the lowest priority.

7) What exactly is Fishbucket? What exactly is the Fishbucket Index?

Ans: The fishbucket is a directory, or index, at a default location inside the Splunk installation. It contains seek pointers and CRCs for the files that have been indexed, so that splunkd can tell whether it has already read them. It can also be queried from the GUI.
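The commonly documented default location, and the corresponding GUI search against the internal fishbucket index, are:

```
# Default fishbucket directory
$SPLUNK_HOME/var/lib/splunk/fishbucket

# Search the internal fishbucket index from the GUI
index=_thefishbucket
```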
8) How do I prevent Splunk from indexing certain events?

Ans: This is accomplished by defining a regex to match the required event(s) and sending everything else to nullQueue. Here is a basic example that keeps only events containing the string 'login' and drops everything else:

In props.conf:

# Transforms must be applied in this order
# to make sure events are dropped on the
# floor prior to making their way to the
# index processor
TRANSFORMS-set = setnull,setparsing

In transforms.conf:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = login
DEST_KEY = queue
FORMAT = indexQueue

9) How do I know when Splunk has completed indexing a log file?

Ans: We can figure this out in a couple of ways.

By monitoring data from Splunk's metrics log in real time:

index="_internal" source="*metrics.log" group="per_sourcetype_thruput" series="<your_sourcetype>" | eval MB=kb/1024 | chart sum(MB)

Or by watching everything, split by source type:

index="_internal" source="*metrics.log" group="per_sourcetype_thruput" | eval MB=kb/1024

If we are having problems with a data input and want to troubleshoot it, particularly when our whitelist/blacklist rules are not working as expected, we can go to the following URL:


10) How do you change the default search time in Splunk 6?

Ans: In Splunk Enterprise 6.0, we must use 'ui-prefs.conf' to accomplish this. A value set there becomes the default for all of our users.


For example, if our $SPLUNK_HOME/etc/system/local/ui-prefs.conf file contains:

[search]

dispatch.earliest_time = @d

dispatch.latest_time = now

then the search app's default time range for all users will be today.

The ui-prefs.conf configuration file is referenced here: http://docs.splunk.com/Documentation/Splunk/latest/Admin/Ui-prefsconf

11) What is the Dispatch Directory?

Ans: $SPLUNK_HOME/var/run/splunk/dispatch

This directory contains a subdirectory for every search that is currently running or has recently completed. A directory named 1434308943.358, for instance, will contain a CSV file with the search results, a search.log with details about the search execution, and other files. By default (configurable in limits.conf), these directories are deleted 10 minutes after the search finishes, unless the user saves the search results, in which case they are deleted after 7 days.

12) What exactly is the distinction between Search Head Pooling and Search Head Clustering?

Ans: Both are Splunk features that provide high availability for Splunk search heads in the event that any search head fails. However, search head clustering is the newer feature, and search head pooling will be removed in future versions.

A search head cluster is managed by a captain, and the captain coordinates the cluster members. Search head clustering outperforms search head pooling in terms of reliability and efficiency.

13) How do I import folder access logs from a Windows machine into Splunk?

Ans: The following are the steps for adding folder access logs to Splunk:

  • Enable Object Access Auditing via group policy on the Windows machine that hosts the folder.
  • Enable auditing on the specific folder we want to monitor.
  • Install the Splunk universal forwarder on the Windows machine.
  • Configure the universal forwarder to send the security logs to the Splunk indexer.

14) What exactly is the MapReduce algorithm?

Ans: Splunk's fast information searching is made possible by the MapReduce algorithm. It is a common algorithm for batch-based large-scale parallelization. It is inspired by the map() and reduce() functions in functional programming.

15) How does Splunk work?

Ans: Splunk's operation can be divided into three major parts:

  1. Forwarder: Think of it as a dumb agent whose main job is to collect data from various sources, such as remote machines, and send it to the indexer.
  2. Indexer: The indexer will then process the data in real-time before storing and indexing it on the localhost or cloud server.
  3. Search Head: It enables the end user to interact with the data and perform operations such as searching, analyzing, and visualizing the information.

16) What are Splunk pivots and data models?

Ans: Splunk data models are used when we need to process large amounts of unstructured data and create a comprehensive structure without running complex search queries on the data. Data models are widely used for creating sales reports, adding access levels, and building an authorization framework for different applications.
Pivots, on the other hand, allow us to create multiple views of the data and see the results as needed. With pivots, even non-technical supervisors or stakeholders can create views and obtain more insight into their departments.

17) How many different types of dashboards are there in splunk?

Ans: Splunk offers three different types of dashboards:

  1. Real-time dashboards
  2. Dynamic form-based dashboards
  3. Scheduled report dashboards

18) How to stop/start the Splunk service?

Ans: The command for starting Splunk service:

./splunk start

The command for stopping Splunk service:

./splunk stop 


In the above blog post, we have covered all the important Splunk interview questions for all levels. These questions will help individuals crack the interview process easily. If you find anything not covered, please drop your query in the comments section to get it answered.


Research Analyst
As a senior Technical Content Writer for HKR Trainings, Gayathri has a good understanding of current technical innovations, including areas such as Business Intelligence and Analytics. She conveys advanced technical ideas precisely and vividly to the target audience, ensuring the content is accessible to readers. She writes quality content in the fields of Data Warehousing & ETL, Big Data Analytics, and ERP Tools. Connect with me on LinkedIn.