Load testing ElasticSearch using ESRally and viewing the results in Kibana

Run EsRally against an existing ElasticSearch index, store the results in a local ElasticSearch instance and view them in Kibana

Problem:

  1. Need to be able to write json queries and run these against an existing ES cluster
  2. Want to be able to view the EsRally results in a local ES instance
  3. The test run results need to be available in Kibana

What needs to be installed:

1. Get and build the ELK stack that the test results will be stored in. This step also creates the Docker network so the containers can talk to each other.

From your terminal/cmd:


  • docker pull sebp/elk
  • git clone https://github.com/spujadas/elk-docker.git && cd elk-docker
  • docker build -t sebp/elk .
  • docker network create --subnet=172.18.0.0/16 elastic-esrally-network
  • docker run --rm --net elastic-esrally-network --ip 172.18.0.5 -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk sebp/elk

Local Kibana: http://localhost:5601
Local ElasticSearch: http://localhost:9200 (a Chrome plugin is recommended for viewing ES info)

2. Now that we have ES and Kibana running, it's time to get EsRally sorted:

Pull the Docker image and the EsRally configuration files in a new terminal window:
  • docker pull ducas/elastic-rally:0.11.0
  • git clone https://github.com/sgriffiths/Esrally-ElasticSearch.git && cd Esrally-ElasticSearch

3. Create a test product index by going to Kibana dev tools and entering a PUT query:

Kibana dev tools:

PUT product
{
    "settings" : {
        "index" : {
            "number_of_shards" : 3, 
            "number_of_replicas" : 2 
        }
    }
}
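
To confirm the index was created with the settings above, you can read them back in the same Dev Tools console:

GET product/_settings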

NOTE: To point to a remote ES instance, just update the IP address in the .env file.

4. Run EsRally

  • docker-compose run product
Results will be published to the results folder (product_result.md)

5. After the run, check that the results have been published to ElasticSearch:

Local ElasticSearch: http://localhost:9200

The EsRally metrics indices should now be listed alongside the product index.
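
Assuming EsRally's default metrics store index naming (indices prefixed with rally-, for example rally-metrics-*), you can also check for them from Kibana Dev Tools; these are the indices the index pattern in the next step will point at:

# assumes Rally's default rally-* index prefix
GET _cat/indices/rally*?v
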
6. Define and create the index pattern in Kibana under Management/Index Patterns and use @timestamp as the time field.

Go to 'Discover' to view the metrics/results.


The sections below describe the different files and configuration used by the EsRally run:

* The examples are based on searching for food products on a typical e-commerce grocery website.


Tracks

This is where you reference the configuration files for the challenges and operations. They can also be defined directly in the track, but the code is a little easier to understand when they are separated out.

1. Create the track:

  • Here we import 'rally helpers' to allow us to define where the operations and challenges are located
  • You can, however, define all the operations and challenges in the track.json if required
  • Keeping these files separate makes things more efficient and manageable when you have lots of operation (json query) files
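
A minimal sketch of what such a track.json can look like (the description text, index name and the operations/ and challenges/ file layout are assumptions based on this example; rally.collect comes from the imported rally helpers):

{% import "rally.helpers" as rally with context %}
{
    "short-description": "Product search track",
    "description": "Searches against the existing product index",
    "indices": [
        {
            "name": "product"
        }
    ],
    "operations": [
        {{ rally.collect(parts="operations/*.json") }}
    ],
    "challenges": [
        {{ rally.collect(parts="challenges/*.json") }}
    ]
}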

2. Operations:


  • This is where we define the query to be executed. In this case it is just looking for a 'family' of products to be returned
  • There can be, and usually are, multiple of these files in the operations folder (or whatever you call it). They get executed by the 'challenges'
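
A sketch of what one of these operation files can look like (the operation name, index name and the 'bread' search term are illustrative placeholders):

{
    "name": "search-product-family",
    "operation-type": "search",
    "index": "product",
    "body": {
        "query": {
            "match": {
                "family": "bread"
            }
        }
    }
}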

3. Challenges:

  • This does a check on the cluster health to make sure it's green

  • Parallel has been used so that multiple 'tasks' can be run inside this block at once

  • Variables have been used for clients, duration and throughput, which can then be defined at runtime. The value given as the default is used if nothing is supplied
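
A sketch of what the challenge file (referred to below as 'parallel.json') can look like; the challenge and operation names, the default values of 2 clients / 60 seconds / 100 ops per second, and the inline cluster-health operation are assumptions for illustration:

{
    "name": "product-search-challenge",
    "default": true,
    "schedule": [
        {
            "operation": {
                "name": "check-cluster-health",
                "operation-type": "cluster-health",
                "request-params": {
                    "wait_for_status": "green"
                }
            }
        },
        {
            "parallel": {
                "time-period": {{ duration | default(60) }},
                "tasks": [
                    {
                        "operation": "search-product-family",
                        "clients": {{ clients | default(2) }},
                        "target-throughput": {{ target_throughput | default(100) }}
                    }
                ]
            }
        }
    ]
}

EsRally itself accepts these variables via --track-params (for example --track-params="clients:4,duration:120"); how they get passed through docker-compose depends on how the compose file in the repository is set up.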

4. Rally.ini

  • datastore.host is the IP of the local ElasticSearch instance where the EsRally test results will be indexed
  • If you use X-Pack then the user and password will need to be defined here for basic auth
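
A sketch of the relevant reporting section of rally.ini (the keys follow Rally's documented metrics store settings; the IP is the ELK container address from step 1):

[reporting]
datastore.type = elasticsearch
# IP of the local ELK container created earlier
datastore.host = 172.18.0.5
datastore.port = 9200
datastore.secure = False
datastore.user =
datastore.password =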

Creating additional tests:

You would add additional files to the operations directory and then reference these from the 'parallel.json' file, as sketched below.
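
For example (the 'search-product-brand' operation name is hypothetical), a new operations/search-product-brand.json would be created and then added as an extra task in the parallel block:

{
    "parallel": {
        "tasks": [
            {
                "operation": "search-product-family",
                "clients": {{ clients | default(2) }}
            },
            {
                "operation": "search-product-brand",
                "clients": {{ clients | default(2) }}
            }
        ]
    }
}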
