Mayuresh.Kshirsagar
AppDynamics Team (Retired)

The following document describes the steps to migrate data from the Controller's built-in Events Service (Elasticsearch) to a clustered Events Service, with minimal downtime and no data loss.

 

Assumption

If you are migrating a 4.1 setup, first convert the single node

 

Required tools: curl or an equivalent utility (e.g., Postman)

 

Node 1: the Controller's built-in Events Service

Nodes 2, 3, 4, ..., n: the new n-node cluster

 

  • Install a new cluster

For 4.1, set up a cluster manually as described here

For 4.2, use the Platform Admin utility to install a cluster as described here

 

  • Shut down all the nodes
  • Once the n-node cluster has been successfully created, change the following in NODE1's conf/events-service-api-store.properties to match the values on the rest of the nodes (NODE2-n):

ad.es.node.minimum_master_nodes
ad.es.event.index.shards
ad.es.event.index.replicas
ad.es.metadata.replicas
ad.es.rolling.maxShardsPerIndex
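
To illustrate, for a hypothetical three-node cluster these properties might end up looking like the fragment below. The numbers are placeholders only; the correct values are whatever is already present in the events-service-api-store.properties on NODE2-n:

```properties
# Example values only - copy the actual values from NODE2-n
ad.es.node.minimum_master_nodes=2
ad.es.event.index.shards=3
ad.es.event.index.replicas=1
ad.es.metadata.replicas=2
ad.es.rolling.maxShardsPerIndex=3
```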

 

  • Set the following on NODE2-n to the same values as on NODE1:

ad.accountmanager.key.eum
ad.accountmanager.key.controller
ad.accountmanager.key.ops

 

  • Set the following on all the nodes (NODE1-n):

ad.es.node.unicast.hosts=NODE2:9300,NODE3:9300,...NODEn:9300,NODE1:9300
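
For instance, on a four-node cluster whose nodes are at the (placeholder) addresses 172.16.87.134-137, with NODE1 at .134, the property would read:

```properties
# Placeholder addresses - substitute your actual host:port pairs
ad.es.node.unicast.hosts=172.16.87.135:9300,172.16.87.136:9300,172.16.87.137:9300,172.16.87.134:9300
```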

 

  • Choose two more master nodes in addition to NODE1, say NODE2 and NODE3, and set the following on the remaining machines (NODE4-n), which will act as non-master (data-only) nodes:

ad.es.node.master=false

 

  • Empty the data directory on all the nodes NODE2-n
  • Start all the nodes NODE1-n

This creates an n-node cluster and distributes the data evenly across its nodes.

 

  • Check for sanity:

http://NODE1:9200/_cat/shards?v

You should see all the shards in the STARTED state.

http://NODE1:9200/_cat/indices?v

You should see all the indices in the green, open state.
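
The shard check above can also be scripted. The sketch below (assuming Python 3 is available; the function name and sample text are our own, not part of the product) parses the plain-text output of _cat/shards?v and returns any rows whose state is not STARTED:

```python
def unstarted_shards(cat_shards_text):
    """Return rows of `_cat/shards?v` output whose state column is not STARTED."""
    lines = cat_shards_text.strip().splitlines()
    if not lines:
        return []
    # Locate the "state" column from the header row
    state_col = lines[0].split().index("state")
    bad = []
    for line in lines[1:]:
        fields = line.split()
        if len(fields) > state_col and fields[state_col] != "STARTED":
            bad.append(line)
    return bad

# Illustrative sample of _cat/shards?v output
SAMPLE = """index shard prirep state docs store ip node
events_1 0 p STARTED 1200 2.1mb 172.16.87.135 NODE2
events_1 0 r UNASSIGNED"""

print(unstarted_shards(SAMPLE))  # → ['events_1 0 r UNASSIGNED']
```

Feed it the body of `curl -s http://NODE1:9200/_cat/shards?v`; an empty result means every shard is STARTED.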

 

  • Once the data is replicated, run the following on any of the master nodes:

curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : "<NODE1_IPADDRESS>"
  }
}'
  • This should execute successfully. A while after running this command, you should see the following when checking from any of the master nodes:

curl http://localhost:9200/_cat/allocation?v

shards disk.used disk.avail disk.total disk.percent host            ip            node
     0     5.2gb      4.7gb       10gb           52 linux-629i.site 172.16.87.141 NODE1

Here the number of shards on NODE1 should be 0, meaning all of NODE1's data has now been moved to the other nodes.
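
Waiting for the shard count to drain to 0 can be automated. The helper below (a sketch assuming Python 3; the function name and sample text are our own) parses _cat/allocation?v output and returns the shard count for a given node:

```python
def shards_on_node(cat_allocation_text, node_name):
    """Return the shard count `_cat/allocation?v` reports for node_name, or None if absent."""
    lines = cat_allocation_text.strip().splitlines()
    header = lines[0].split()
    shards_col = header.index("shards")
    node_col = header.index("node")
    for line in lines[1:]:
        fields = line.split()
        # Only consider well-formed rows with the full set of columns
        if len(fields) == len(header) and fields[node_col] == node_name:
            return int(fields[shards_col])
    return None

# Illustrative sample of _cat/allocation?v output
SAMPLE = """shards disk.used disk.avail disk.total disk.percent host ip node
0 5.2gb 4.7gb 10gb 52 linux-629i.site 172.16.87.141 NODE1
42 5.2gb 4.7gb 10gb 52 linux-629j.site 172.16.87.142 NODE2"""

print(shards_on_node(SAMPLE, "NODE1"))  # → 0
```

Poll `curl -s http://localhost:9200/_cat/allocation?v` with this until NODE1 reports 0 shards before proceeding.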

 

  • Shut down all the nodes (NODE1-n)
  • Reconfigure the cluster as an (n-1)-node cluster. Set the following on nodes NODE2-n:

ad.es.node.unicast.hosts=NODE2:9300,NODE3:9300,...,NODEn:9300

 

  • Reconfigure NODE4 to be a master node, so the cluster still has three masters now that NODE1 is gone:

ad.es.node.master=true

 

  • Start the nodes NODE2-n
  • Check the status of the cluster again:

http://NODE2:9200/_cat/shards?v

You should see all the shards in the STARTED state.

http://NODE2:9200/_cat/indices?v

You should see all the indices in the green, open state.

 

  • Reconfigure EUM to point to this cluster

Change the following in the EUM Server's eum.properties file to point to the new cluster, e.g.:

analytics.serverScheme=http

analytics.serverHost=172.16.87.134

analytics.port=180

 

  • Reconfigure the Controller

Log on to the Controller's admin.jsp page and change the following keys to point to the new cluster, e.g.:

appdynamics.analytics.local.store.url=http://172.16.87.134:180

appdynamics.analytics.server.store.url=http://172.16.87.134:180

eum.es.host=http://172.16.87.134:180

Version history
Last update: 12-21-2018 03:34 PM