3 Steps to Better Install Agents in Docker Environments
Air Date: December 5, 2018
Run time: 60 minutes (Please pardon the 15 seconds of choppy audio during the first minute).
Resource links referenced in the session:
You have never seen a machine agent like this! This session focuses on best practices for installing AppDynamics Agents in Docker environments. @James.Barfield, Technical Sales Enablement Manager, will lead us through a live demonstration of the 3-step solution he has developed. This solution provides an easy and safe way to add APM monitoring to an existing application, without changing your Dockerfiles or container builds. @Mark.Prichard, Product Management, shares with us where the product team is taking development further on this topic.
This supports the most recent version of Docker. We've tested against many Java versions. The Docker features we're using are very basic and have been in the product forever. Dynamic Attach has been in the JVM for a long time and is supported across JVM versions going back many releases. We haven't tested against Swarm, but it shouldn't be any different.
It can be. This is one packaged solution; we could package a similar solution that goes through a server and instruments all the Java apps on that server.
It supports Docker and will work on a Kubernetes cluster, but it’s not a Kubernetes solution yet. One of our next webinars will show Kubernetes.
Dynamic Attach has been available for several years. Agentless Analytics is still in beta; that part is straightforward because it lives on the Events Service side and enables collectors to listen for those events. This solution makes all of that transparent: the dynamic agent solution does it for you anyway. It creates the Analytics Agent, does the reporting, and so on.
We used the most recent v4.5.4 Controller and the newest agents. If you are running a v4.4.3 or later Controller, it should work. Agents from v4.5.2 onward are backwards compatible with Controllers back to v4.4.3, so you can run the newest agents or older agents against a new Controller.
Dynamic Attach is a technology that's specific to the JVM. However, there are other approaches that can get the same effect using features of the language runtimes for the other agents. That's on the roadmap.
"Dynamic attach" just means that you're taking our agent and attaching it to an already running JVM. The agent then behaves in the normal way. For dynamic reporting, all we're doing is attaching a standard agent dynamically to a running JVM. If you want to learn more about that, see:
It does class retransformation on the JVM, similar to what the Java agent does if it's present from the beginning. It injects the various markers, but at runtime instead of at startup. There's a description of this in our JVM documentation. There are hooks in the JVM that are flagged to do this, and it uses -Xbootclasspath.
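As a hedged sketch of what "dynamic attach" means at the JDK level, the standard com.sun.tools.attach API can list attachable JVMs and load an agent jar into one, which fires the agent's agentmain() and enables class retransformation. This illustrates the JDK mechanism only, not AppDynamics' actual implementation; the agent jar path is a placeholder.

```java
import com.sun.tools.attach.VirtualMachine;
import com.sun.tools.attach.VirtualMachineDescriptor;
import java.util.List;

public class AttachSketch {

    // List the JVMs on this host that are eligible for dynamic attach.
    public static List<VirtualMachineDescriptor> attachableJvms() {
        return VirtualMachine.list();
    }

    public static void main(String[] args) throws Exception {
        // Print pid and display name of every attachable JVM.
        for (VirtualMachineDescriptor vmd : attachableJvms()) {
            System.out.println(vmd.id() + "  " + vmd.displayName());
        }

        // With a target pid and an agent jar path on the command line,
        // attach and load the agent into the already-running JVM.
        if (args.length == 2) {
            VirtualMachine vm = VirtualMachine.attach(args[0]);
            try {
                vm.loadAgent(args[1]); // e.g. /opt/appd/javaagent.jar (placeholder)
            } finally {
                vm.detach();
            }
        }
    }
}
```

Run without arguments it simply lists JVMs (much like jps); the attach-and-load path is what a dynamic instrumentation script would drive for each discovered container JVM.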
The agent is working as it normally does. You can define custom metrics as you usually would and it will work.
Yes, this is part of our standard Server Agent.
When you pull down the GitHub repo, you get four files: run.sh, stop.sh, controller.env, and readme.txt. The controller.env file needs to be in the same directory as run.sh, but it will all be there together.
The controller.env file is the configuration file for the Controller connection. That's where the application name is set, as well as the connection information for the Controller. By default, the solution uses the container's host name as the tier name, but there are other options for deriving the tier name for each container. If you have a specific scheme for naming your nodes and tiers, you can incorporate that into the solution; see TIER_NAME_FROM and TIER_NAME_PARAM in readme.txt, which control tier naming.
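For illustration, a minimal controller.env might look like the following. The APPDYNAMICS_* names follow the standard AppDynamics agent environment variables and all values are placeholders; the TIER_NAME_FROM setting and its value are assumptions based on the description above, so check the repo's readme.txt for the exact names.

```
# Controller connection details (placeholder values)
APPDYNAMICS_CONTROLLER_HOST_NAME=controller.example.com
APPDYNAMICS_CONTROLLER_PORT=8090
APPDYNAMICS_CONTROLLER_SSL_ENABLED=false
APPDYNAMICS_AGENT_ACCOUNT_NAME=customer1
APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=<access-key>

# Application name under which the containers report
APPDYNAMICS_AGENT_APPLICATION_NAME=Docker-Demo

# Tier naming: derive each tier name from the container hostname (the default)
TIER_NAME_FROM=hostname
```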
No, not ahead of time. This will happen dynamically as it’s running.
It doesn't; the solution shown here is Docker-only. In general, instrumenting agents in a PCF world is much easier because of the buildpack model, so there's less need for something like this. However, if this is something you really want to do in a PCF environment, please reach out to your Account Manager, who can help you connect with Mark Prichard.
Yes, but this is something we will discuss more in a future webinar.
We are assuming that you are referring to upgrading your own application. In that case, you don’t need to rerun any of these steps. The dynamic agent will still be running and checking for new containers. If you're upgrading a container, we assume you're going to be creating a new container. The new container will get picked up and instrumented.
It's not quite ready yet, but keep an eye out for it. It will work a little differently since there's no Dynamic Attach, but it can add the pieces it needs, then stop and start the container. When the container restarts, it picks up the instrumentation that was added. Please reach out if this is an urgent need for your organization.
It can, but it's not normally good practice because there's no added value. You typically want just one process per container. The normal pattern is to have your application containers running with the APM agents attached dynamically, plus one additional container configured to report on the host and the containers running on it.
The Machine Agent container is a running machine agent and it has a dynamic agent running as an extension. Server monitoring capabilities use the same mechanism. This is a very standard part of how our Machine Agent works.
This would need to be installed on each host machine.
Not currently, but it's on our list of features to add to the product. It's a good example of where you have containers that would report to different applications.
That's part of the Controller functionality and can be configured on the Controller.
They work the same as they would if the containers had been instrumented with the Java agent from the start.
Yes. The application will carry on being monitored once the agents are attached; that container can go down without affecting anything. If you shut down the container, it would only need to be brought back to pick up any new containers that are started or restarted.
Yes, if the container goes down and a new one comes up, it will reinstrument the new container. The whole point is that the other container image is unmodified.
Autoscaling would work the same way we saw when we took down a container and spun up a new one during the demo. Every minute the solution looks for new containers, and if it sees one (as in an autoscale scenario), it instruments it then. It will handle autoscaling environments just fine.
There are no sidecars in this implementation. You're just injecting the agent into the container. It’s a different approach from what you would do with sidecars. In some ways, it's much simpler.
GemFire works with our normal agents. We haven't tried it, but I see no reason why it wouldn't work; the same is true for a lot of JVM-based products. You just need a way to dynamically attach the agent to the JVM.
Dynamic Attach works fine with JBoss. When you instrument JBoss with AppDynamics, there is a particular setting you need to include (the JBoss Modules system packages), but that's something you'd have to do with the agent anyway. It's covered in the "Requirements" section of the blog: https://blog.appdynamics.com/engineering/hands-off-my-docker-containers-dynamic-java-instrumentation...
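For reference, that JBoss Modules setting is a JVM system property that exposes the agent's packages to JBoss's classloading. A hedged sketch follows: the com.singularity package is the AppDynamics agent namespace and org.jboss.byteman is the value WildFly ships by default, but confirm the exact value against the blog's "Requirements" section and your standalone.conf.

```
# Append to the JBoss/WildFly JVM options (e.g. in bin/standalone.conf) -- sketch only
JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=org.jboss.byteman,com.singularity"
```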
This is our first step, and there's a lot we can add to it. We've anticipated most scenarios ourselves, but once this goes out to the field in real-world situations, there will be more scenarios we can cover.
We'll be coming back in the New Year with Kubernetes environments, and we'll try to address Swarm and Node.js. If you have a specific question or an environment you want to try this in, reach out to your AppDynamics Account Manager, who can help you. All the materials are out there - get going and let's hear the feedback.