Elasticsearch Commands

Are you ready to enhance your technical skill set with Elasticsearch commands? You are in the right place to build your knowledge of Elasticsearch commands. Elasticsearch is designed to help users retrieve, search, analyze, and visualize data from different sources, and it enables organizations to perform various kinds of search operations. One important thing about Elasticsearch is that it is very fast compared to other database technologies. In this blog, you will learn different kinds of Elasticsearch commands along with programming examples.

Introduction to Elasticsearch commands:

Elasticsearch is a search engine built on the Apache Lucene library. It is a distributed, multitenant-capable, full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is developed in Java and is a fully distributed, enterprise-grade search and analytics engine that runs on multiple servers simultaneously and is accessed through a web-based (REST) API. In this blog, you will learn a few key commands, such as creating an index, listing all indices, using curl, and retrieving and deleting documents and indices.



Elasticsearch commands:

Now I am going to discuss the important Elasticsearch commands.

Creating an Elasticsearch index:

An Elasticsearch index is where data from different sources is added. When a user adds data to Elasticsearch, it is immediately placed into Apache Lucene indexes. As mentioned earlier, Elasticsearch is built on Lucene indexes to store, transfer, and retrieve data. Elasticsearch exposes a REST API, so users can use both the POST and PUT methods to add data. Use PUT when you want to specify the document ID yourself, and POST when you want Elasticsearch to generate an ID for you.

The program below illustrates how to create an Elasticsearch index.
Let's see an example of the PUT method. With PUT, users can add a new document under a unique ID; the request body uses the "field": "value" format.

PUT /customer/_doc/1
{
  "name": "Adam Smith"
}

This PUT request automatically creates the customer index if it doesn't already exist, adds a new document with the ID 1, and stores the value in the name field.
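For comparison, a sketch of the POST variant: posting to the index without specifying an ID lets Elasticsearch generate one (the field value here is illustrative):

```
POST /customer/_doc
{
  "name": "Jane Doe"
}
```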

Since we have created a new document, let's look at the response Elasticsearch returns, which includes the generated metadata:

{
  "_index": "customer",
  "_type": "_doc",
  "_id": "1",
  "_version": 1,
  "result": "created",
  "_shards": {
    "total": 2,
    "successful": 2,
    "failed": 0
  },
  "_seq_no": 28,
  "_primary_term": 5
}

Users can read the new document from any node in the cluster. You can retrieve the data with a GET request that specifies the unique ID:


GET /customer/_doc/1

The programming example above retrieves the customer document by its ID.

The next example shows the response, including the indexed source fields.

{
  "_index": "customer",
  "_type": "_doc",
  "_id": "1",
  "_version": 2,
  "_seq_no": 28,
  "_primary_term": 5,
  "found": true,
  "_source": {
    "name": "Adam Smith"
  }
}


Indexing documents in bulk:

If you have a lot of documents to index, submit them together using the bulk API. Batching documents into bulk requests is faster than submitting individual requests because it reduces network round trips.



A sample customer document looks like this:

{
  "account_number": 0,
  "balance": 16650,
  "firstname": "Adam",
  "lastname": "Smith",
  "age": 30,
  "gender": "M",
  "address": "50 California Place",
  "employer": "Google",
  "email": "[email protected]",
  "city": "Roseville",
  "state": "New York"
}
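The bulk request body is newline-delimited JSON (NDJSON): each document is preceded by an action line, and the body must end with a newline. As a rough Python sketch of how such a body can be assembled (the helper function and sample field names are illustrative, not part of Elasticsearch):

```python
import json

def build_bulk_body(index, docs, start_id=1):
    """Build an NDJSON body for the Elasticsearch _bulk API.

    Each document becomes two lines: an action line ({"index": ...})
    followed by the document source. The body must end with a newline.
    """
    lines = []
    for offset, doc in enumerate(docs):
        action = {"index": {"_index": index, "_id": str(start_id + offset)}}
        lines.append(json.dumps(action))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

# Two sample customer documents (field names are illustrative only).
docs = [
    {"firstname": "Adam", "lastname": "Smith"},
    {"firstname": "Jane", "lastname": "Doe"},
]
body = build_bulk_body("customer", docs)
print(body)
```

The resulting body would then be sent to the _bulk endpoint with the Content-Type header set to application/x-ndjson.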

Curl commands to count documents:


curl -X<VERB> '<PROTOCOL>://<HOST>:<PORT>/<PATH>?<QUERY_STRING>' -d '<BODY>'

Where VERB is the appropriate HTTP method: GET, POST, PUT, HEAD, or DELETE.

PROTOCOL: either http or https (use https only when you have an HTTPS proxy in front of Elasticsearch).

HOST: the hostname of any node in the Elasticsearch cluster, or localhost for a node on your local machine.

PORT: the port running the Elasticsearch HTTP service; the default is 9200.

PATH: the API endpoint; for example, _count returns the number of documents in the cluster. The PATH may contain multiple components, such as _cluster/state, _nodes/stats, or _nodes/stats/jvm (JVM, the Java Virtual Machine).

QUERY_STRING: optional query-string parameters (for example, ?pretty pretty-prints the JSON response to make it easier to read).

BODY: a JSON-encoded request body, if the request needs one.
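As an illustration of how these pieces fit together, here is a small Python sketch that assembles the URL portion of such a request (the helper function is hypothetical, for demonstration only):

```python
def build_request_url(host, path, protocol="http", port=9200, query=None):
    """Assemble the URL portion of the curl template:
    <PROTOCOL>://<HOST>:<PORT>/<PATH>?<QUERY_STRING>
    """
    url = f"{protocol}://{host}:{port}/{path.lstrip('/')}"
    if query:
        url += "?" + "&".join(f"{k}={v}" for k, v in query.items())
    return url

# Count the documents in my_index, with pretty-printed output.
print(build_request_url("localhost", "my_index/_count", query={"pretty": "true"}))
```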

Curl command examples:


curl -XGET 'http://www.example.com:9200/my_index_name/_count?pretty'



{
  "count": 98,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  }
}





The index has 98 documents in it.


Delete an Index in Elasticsearch:

The delete index API deletes an existing index.


DELETE /my-index-number-000001



Path parameters

<index>
(Required, string) Comma-separated list or wildcard expression of indices to delete.

In this parameter, wildcard expressions match only open, concrete indices. You cannot delete an index using an alias.

To delete all indices, use _all or *. To disallow the deletion of indices with _all or wildcard expressions, set the action.destructive_requires_name cluster setting to true. You can update this setting in the elasticsearch.yml file or with the cluster update settings API.
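For example, assuming a running cluster, the setting can be changed through the cluster update settings API like this:

```
PUT _cluster/settings
{
  "persistent": {
    "action.destructive_requires_name": true
  }
}
```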

Query parameters:


allow_no_indices
(Optional, Boolean) If false, the request returns an error when a wildcard expression or _all value targets only missing or closed indices.

This parameter also applies to index aliases that point to a missing or closed index.

Defaults to true.


expand_wildcards
(Optional, string) Controls what kind of indices the wildcard expressions expand to. Multiple values are accepted when separated by a comma, as in open,hidden.
The valid values are as follows:

all
Expands to both open and closed indices.

open
Expands only to open indices.

closed
Expands only to closed indices.

hidden
Wildcard expansion will also include hidden indices. Must be combined with open, closed, or both.

ignore_unavailable
(Optional, Boolean) If true, missing or closed indices are not included in the response.

Defaults to false.


timeout
(Optional, time units) The period of time to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.


Programming example:

curl -XDELETE 'http://www.examplecode.com:9200/my_index_name?pretty'

{
  "acknowledged": true
}

List all documents in an Index:

  • Documents are JSON objects that are stored within an Elasticsearch index and are considered the base unit of storage.
  • In relational database terms, a document can be compared to a row in a table.
  • Documents are made up of fields comprising keys and values, where the key is the name of the field and the value can be an item such as a string, a number, a Boolean expression, or an array.

Documents contain reserved fields that constitute the document metadata, such as:

_index: the index in which the document is stored.

_type: the type of the document.

_id: the unique identifier of the document.

In Elasticsearch, two kinds of requests are used to list documents, GET and POST; the syntax is as follows.

GET method:

http://localhost:9200/[name of the index]/_search?size=[number of records you want]&q=*:*

POST method:

http://localhost:9200/[name of the index]/_search

{
  "size": 10,   // number of hits to return; the default is 10
  "from": 0,    // zero-based offset of the first hit; the default is 0
  "query": {    // retrieve the documents
    "match_all": {}
  }
}
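The request body above can also be built programmatically; a minimal Python sketch (the helper name is an illustration, not an Elasticsearch API):

```python
import json

def build_search_body(size=10, offset=0):
    """Build a match_all search request body.

    `size` defaults to 10 (Elasticsearch's default page size) and
    `offset` becomes `from`, the zero-based offset of the first hit.
    """
    return {
        "size": size,
        "from": offset,
        "query": {"match_all": {}},
    }

# Request the third page of 5 hits each.
print(json.dumps(build_search_body(size=5, offset=10), indent=2))
```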

The following syntax describes how to list all the documents in an index:

curl -XGET 'http://www.hkrtechnologies.com:9200/nameoftheindex/_search?pretty=true&q=*:*'

List all indices:


Log on to the Elasticsearch host.

Now run the command:

curl -XGET 'localhost:9200/_cat/indices?v&pretty'

Retrieve a document by ID:

The following syntax shows how to retrieve a document:

GET my-index-000001/_doc/0


GET /<index>/_doc/<_id>

HEAD /<index>/_doc/<_id>

GET /<index>/_source/<_id>

HEAD /<index>/_source/<_id>


Here we use GET to retrieve a document and its source, or its stored fields, from a particular index.

Use HEAD to verify whether the document exists. You can also use the _source resource to retrieve just the source of the document or to verify that it exists.



By default, the get API is realtime and is not affected by the refresh rate of the index (that is, by when data becomes visible to search). If stored fields are requested (via the stored_fields parameter) and the document has been updated but not yet refreshed, the API will have to parse and analyze the document's source to extract the stored fields. To disable realtime GET, set the realtime parameter to false.

Source Filtering:

By default, the GET operation returns the contents of the _source field, so there is no point in using it once the _source field is disabled. You can turn off _source retrieval entirely by using the _source parameter:

GET my-index-000001/_doc/0?_source=false

If you need only one or two fields, you can use the _source_includes and _source_excludes parameters to include or exclude the fields you want. This can be useful for large documents, where full retrieval is expensive. Multiple values in either parameter are separated by , (comma).

For example:

GET my-index-000001/_doc/0?_source_includes=*.id&_source_excludes=entities
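Building that query string can be sketched in Python (the helper function is illustrative only):

```python
def source_filter_params(includes=None, excludes=None):
    """Build the _source filtering query string for a get-document request.

    Multiple patterns in either parameter are comma-separated.
    """
    parts = []
    if includes:
        parts.append("_source_includes=" + ",".join(includes))
    if excludes:
        parts.append("_source_excludes=" + ",".join(excludes))
    return "&".join(parts)

# Include id fields, exclude the entities object.
print("GET my-index-000001/_doc/0?" + source_filter_params(includes=["*.id"], excludes=["entities"]))
```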

If you only want to include fields, you can use the shorter notation:

GET my-index-000001/_doc/0?_source=*.id


Routing can be used when you retrieve a document; see the syntax:

GET my-index-000001/_doc/1?routing=user1

In the above example, the request gets the document with ID 1, but it is routed based on the user. The document will not be fetched if the correct routing is not specified.


PREFERENCE controls which shard replica executes the get request. By default, the operation is randomized between the shard replicas.

The preference can be set by using the following parameters:


_local:

The operation will preferably be executed on a locally allocated shard, if possible.

Custom (string) value:

A custom value is used to guarantee that the same shards will be used for the same custom value. This can help when users see "jumping values" from hitting different shards in different refresh states. A sample value can be something like the web session ID or the user name.


The refresh parameter can be set to true to refresh the relevant shard before the get operation is performed. Setting refresh to true should be done only after careful thought and verification that it does not place a heavy load on the system (and slow down indexing).


The get operation is hashed into a specific shard ID based on the document's routing value. It is then redirected to one of the replicas within that shard group and returns the result. The replicas are the primary shard and its replica shards within that group, which means that the more replicas we have, the better GET scaling we will have.
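Conceptually, shard selection works like the following Python sketch. Elasticsearch actually uses a murmur3 hash of the routing value internally; crc32 stands in here purely to illustrate that the same routing value always maps to the same primary shard:

```python
import zlib

def shard_for(routing_value, num_primary_shards):
    """Pick the primary shard for a routing value.

    crc32 is a stand-in for Elasticsearch's murmur3 hash; the point is
    that the mapping is deterministic, so the same routing value always
    lands on the same shard.
    """
    return zlib.crc32(routing_value.encode("utf-8")) % num_primary_shards

# The same routing value always resolves to the same shard number.
print(shard_for("user1", 5))
```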

Versioning support:

The version parameter makes the get succeed only when the current version of the document is equal to the specified one.

When a document is updated, Elasticsearch marks the old version as deleted and adds an entirely new document. The old version doesn't disappear immediately, although you can't access it; Elasticsearch cleans up deleted documents in the background while you continue to index more data.
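Recent Elasticsearch versions also support optimistic concurrency control through the if_seq_no and if_primary_term query parameters; for example, the following update succeeds only if the document has not changed since it was last read (the index name and values here are illustrative):

```
PUT my-index-000001/_doc/1?if_seq_no=28&if_primary_term=5
{
  "name": "Adam Smith"
}
```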

Path parameters:

<index> -> (Required, string) the name of the index that holds the document.

<_id> -> the unique identifier of the document.

The programming example below shows the response when retrieving a document by ID:

{
  "_index": "my-index-000001",
  "_type": "_doc",
  "_id": "0",
  "_version": 1,
  "_seq_no": 0,
  "_primary_term": 1,
  "found": true,
  "_source": {
    "@timestamp": "2019-12-15T14:12:12",
    "http": {
      "request": {
        "method": "get"
      },
      "response": {
        "status_code": 300,
        "bytes": 10978900
      },
      "version": "1.1"
    },
    "source": {
      "ip": ""
    },
    "message": "GET /search HTTP/1.1 300 10978900",
    "user": {
      "id": "johnswatson"
    }
  }
}

Check to see if a document with the _id 0 exists:

HEAD my-index-000001/_doc/0

To get only the source field, use the following syntax:
GET my-index-000001/_source/1

To use stored fields, first create an index mapping that stores specific fields, as in the following programming example:
PUT my-index-000001

{
  "mappings": {
    "properties": {
      "counter": {
        "type": "integer",
        "store": false
      },
      "tags": {
        "type": "keyword",
        "store": true
      }
    }
  }
}




In this blog, I have explained the important commands of Elasticsearch, such as creating an index, curl methods, deleting an index, and retrieving and listing data records. Elasticsearch is a search engine server that runs on top of the Lucene library. The main advantages of Elasticsearch are its ease of use and its readily available methods. I hope this article helps you learn and enhance your knowledge of Elasticsearch, and that you get connected with the many Elasticsearch community forums.


Research Analyst
As a senior Technical Content Writer for HKR Trainings, Gayathri has a good grasp of current technical innovations, including Business Intelligence and Analytics. She conveys advanced technical ideas precisely and vividly to the target audience, ensuring that the content is accessible to readers. She writes quality content in the fields of Data Warehousing & ETL, Big Data Analytics, and ERP Tools.