AppDynamics Team

This document lists the steps to migrate data from the Controller's in-built Events Service (Elasticsearch) to a clustered Events Service, with minimal downtime and no data loss.



If you are migrating a 4.1 setup, first convert the single-node Events Service to a cluster.


Required tools: curl or an equivalent utility (e.g. Postman)


Node 1: Controller in-built Events Service

Nodes 2, 3, 4, ..., n: new n-node cluster


  • Install a new cluster

For 4.1, set up a cluster manually as described here

For 4.2, use the Platform Admin utility to install a cluster as described here


  • Shut down all the nodes
  • Once an n-node cluster is successfully created, change the following in NODE1's conf/ to match the parameters of the rest of the nodes (NODE2-n)


  • Set the following on NODE2-n to the same values as NODE1



  • Set the following on all the nodes (NODE1-n): NODE1:9300,NODE2:9300,NODE3:9300,...,NODEn:9300


  • Choose two more master nodes apart from NODE1, say NODE2 and NODE3, and set the following on the rest of the machines (NODE4-n), which will act as data-only (slave) nodes


  • Empty the data directory on all the nodes NODE2-n
  • Start all the nodes NODE1-n

This will create an n-node cluster and replicate the data equally across the nodes of the cluster.
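The settings elided in the bullets above generally correspond to Elasticsearch's cluster identity, discovery, and node-role settings. A hedged sketch of what the relevant conf/elasticsearch.yml entries might look like — the key names assume a standard Elasticsearch 1.x/2.x configuration and may differ in your Events Service version:

```yaml
# Hypothetical conf/elasticsearch.yml fragment; key names assume a standard
# Elasticsearch 1.x/2.x configuration, not verified against every release.

# Identical on all nodes NODE1-n (NODE1 is changed to match NODE2-n):
cluster.name: appdynamics-events-cluster        # placeholder cluster name
discovery.zen.ping.unicast.hosts: ["NODE1:9300", "NODE2:9300", "NODE3:9300"]
discovery.zen.minimum_master_nodes: 2           # quorum of the 3 master nodes

# On the master nodes (NODE1, NODE2, NODE3):
node.master: true
node.data: true

# On the remaining nodes (NODE4-n), which act as data-only nodes:
# node.master: false
# node.data: true
```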


  • Check for sanity:


You should see all the shards in the STARTED state


You should see all the indices in green and open state
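The two checks above can be scripted. A minimal sketch using awk, assuming the Events Service exposes the standard Elasticsearch _cat API on localhost:9200 (host and port are assumptions — adjust as needed):

```shell
#!/bin/sh
# Sanity-check helpers for the migrated cluster, driven by _cat output.

# Succeeds only when every shard's state (4th _cat/shards column) is STARTED.
all_shards_started() {
  awk 'NF > 0 && $4 != "STARTED" { bad = 1 } END { exit bad }'
}

# Succeeds only when every index is green (1st _cat/indices column) and open
# (2nd column).
all_indices_green_open() {
  awk 'NF > 0 && ($1 != "green" || $2 != "open") { bad = 1 } END { exit bad }'
}

# Typical usage against a live cluster:
#   curl -s localhost:9200/_cat/shards  | all_shards_started     && echo "shards OK"
#   curl -s localhost:9200/_cat/indices | all_indices_green_open && echo "indices OK"

# Demonstration with captured sample output (two healthy shards):
printf 'events 0 p STARTED 100 1mb 10.0.0.2 NODE2\nevents 0 r STARTED 100 1mb 10.0.0.3 NODE3\n' \
  | all_shards_started && echo "all shards STARTED"
```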


  • Once the data is replicated, run the following on any of the master nodes:
curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : "<NODE1_IPADDRESS>"
  }
}'
  • This should execute successfully. A while after running this command, you should see the following when run from any of the master nodes:
curl http://localhost:9200/_cat/allocation?v
shards disk.used disk.avail disk.total disk.percent host ip node
     0     5.2gb      4.7gb       10gb           52         NODE1

Here the number of shards for NODE1 should be 0, which means that all the data from NODE1 has now been moved to the other nodes.
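Waiting for the decommission to finish can also be scripted. A sketch that checks whether a node's shard count in _cat/allocation has dropped to 0 — it assumes the first column is the shard count and the last is the node name, as in the header shown above:

```shell
#!/bin/sh
# Checks _cat/allocation output for a drained node.

# usage: ... | node_drained NODE1
# Succeeds when the named node's shard count (1st column) is 0.
node_drained() {
  awk -v node="$1" '$NF == node { shards = $1 } END { exit shards != 0 }'
}

# Typical usage against a live master:
#   curl -s localhost:9200/_cat/allocation | node_drained NODE1 && echo "NODE1 drained"

# Demonstration with captured sample output:
printf '0 5.2gb 4.7gb 10gb 52 NODE1\n12 3.1gb 6.9gb 10gb 31 NODE2\n' \
  | node_drained NODE1 && echo "NODE1 drained"
```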


  • Shut down all the nodes (NODE1-n)
  • Reconfigure the cluster to be an (n-1)-node cluster. Set the following on nodes NODE2-n: NODE2:9300,NODE3:9300,...,NODEn:9300


  • Reconfigure NODE4 to be a master node (replacing NODE1, so the cluster keeps three masters):


  • Start the nodes NODE2-n
  • Check the status of the cluster again:


You should see all the shards in the STARTED state
You should see all the indices in green and open state
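The reconfiguration in the steps above simply drops NODE1 from the discovery list. A hedged sketch of the updated conf/elasticsearch.yml entries on NODE2-n — as before, the key names assume a standard Elasticsearch 1.x/2.x configuration:

```yaml
# Hypothetical conf/elasticsearch.yml fragment on NODE2-n after removing NODE1;
# key names assume a standard Elasticsearch 1.x/2.x configuration.
discovery.zen.ping.unicast.hosts: ["NODE2:9300", "NODE3:9300", "NODE4:9300"]
discovery.zen.minimum_master_nodes: 2   # still a quorum of 3 masters (NODE2-4)
```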


  • Reconfigure the EUM Server to point to this cluster

Change the following in the EUM Server's configuration file to point to the new cluster, e.g.:
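The EUM-side change is typically a properties entry pointing at the new Events Service endpoint. The key name and port below are placeholders for illustration, not verified EUM properties — check your EUM version's documentation for the actual names:

```
# Hypothetical entry in the EUM Server's properties file; key name and port
# are placeholders -- consult your EUM version's documentation.
analytics.serverHost=http://NODE2:9080
```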





  • Reconfigure the Controller

Log on to admin.jsp and change the following keys to point to the new cluster, e.g.:
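On the Controller side the change is made through admin.jsp key/value settings. The key name and port below are placeholders for illustration — the actual key names vary by Controller version, so look for the Events Service / analytics URL keys in admin.jsp:

```
# Hypothetical admin.jsp setting; the actual key name varies by Controller
# version -- look for the Events Service / analytics URL keys.
appdynamics.analytics.server.store.url = http://NODE2:9080
```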

Version history
Last update: 12-21-2018 03:34 PM