Run Time: 60 minutes.
@mark.prichard and @eric.johanson from the AppDynamics Product Management team led a hands-on demo session. They took one of our demo applications and walked through the steps to convert it from its original (non-container) code all the way through to a Docker-based deployment on Kubernetes, the same steps you need to go through when refactoring a traditional app deployment.
As Mark and Eric illustrate key points in the demo environment, they share customer insights and talk through important choices you must consider when planning a container-based architecture.
We're listening; please take our post-webinar survey:
Resource links referenced in the session
AppDynamics Docker Store Images
AppDynamics Docker Visibility
Transcribed Q&A Table of Contents
Something like Dockerize enables us to get over that initial hump. When you start refactoring, you have services with dependencies, and you need to ensure they start up in the correct sequence, with those dependent services available. The long-term solution is to move towards a service fabric or mesh (like Istio), which will handle the retries. Once you do that, a lot of that logic can go away: if you have an application in one container talking to a database service, Istio will handle the retries while that database is starting up, and you can simply configure that behavior. Getting there requires a full Kubernetes-based deployment where you can add Istio into the mix, but that’s a big jump to make at first.
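As a concrete illustration of the Dockerize approach mentioned above, the tool can wait for a dependency to accept connections before launching the application. This is a minimal sketch, assuming dockerize is already installed in the image and the database is reachable at the hypothetical address db:5432:

```dockerfile
# Hypothetical example: block startup until the database port is open,
# then launch the app. Host, port, timeout, and jar name are illustrative.
ENTRYPOINT ["dockerize", "-wait", "tcp://db:5432", "-timeout", "60s", \
            "java", "-jar", "app.jar"]
```

Once a service mesh like Istio is in place, this kind of wait-and-retry wrapper can typically be removed, since the mesh handles connection retries between services.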
There are two approaches to this. When you have an application component that needs to be monitored as part of an APM flow, there will be an AppDynamics agent associated with that runtime. In this Java-based example (the same applies to almost any language), when that JVM runs, it needs to have an APM agent associated with it. Typically, that is there when you start the JVM. The simplest option is to bake the agent into the container and pass it as a Java agent parameter when you run the JVM. Alternatively, you can have it host-based or loaded from shared storage, where it can be downloaded from a central repository and loaded into the container.
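The "bake it into the container" option described above can be sketched as a Dockerfile. This is illustrative only; the base image, directory layout, and jar names are assumptions, not AppDynamics-prescribed paths:

```dockerfile
# Sketch: bake the AppDynamics Java agent into the application image.
FROM openjdk:8-jre
# Copy the unpacked agent directory into the image (path is illustrative)
COPY appdynamics/javaagent /opt/appdynamics/javaagent
# Copy the application itself
COPY target/app.jar /app/app.jar
# Attach the agent at JVM startup via the -javaagent parameter
ENTRYPOINT ["java", "-javaagent:/opt/appdynamics/javaagent/javaagent.jar", \
            "-jar", "/app/app.jar"]
```

The tradeoff is that agent upgrades require rebuilding the image, whereas the shared-storage option lets you update the agent independently of the application image.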
The AppDynamics APM will work with Docker Swarm, and the things we discussed could have been done with Swarm. However, we don’t have container visibility for Swarm. We have support for Kubernetes coming soon, and you’ll see a much richer set of supported features there, but we don’t have that for Swarm. We’re seeing a lot of customers reevaluating that path, moving away from Docker Compose and Swarm. Kubernetes looks like the end point of that journey for most people.
It’s important to look at the supported OSes for the AppDynamics Java agents. The ones we test with are the “big boys” (e.g., CentOS), though there are other distributions you can use. The official images also support Alpine. If what you’re looking for is a really lightweight image with OpenJDK and the agent, for example, you can get that with Alpine and OpenJDK. If you need more, look at the official images for the “major players.”
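For the lightweight case mentioned above, an Alpine-based build might look like the following sketch. The base image tag is an assumption; check the AppDynamics supported-environments documentation for the distributions and JVMs that are actually certified:

```dockerfile
# Sketch of a minimal Alpine + OpenJDK image (tags and paths illustrative).
FROM openjdk:8-jre-alpine
COPY target/app.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```

The Alpine variant is substantially smaller than a full-distribution base, which matters when you are pulling images frequently in a Kubernetes cluster.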
Yes, that’s a good practice to follow. If you do that from the beginning, you need to work out what you’re going to do with the logs. Quite often in the early stages, putting them into a persistent storage location is a good choice. While it may not always be easy to do that, many people want to go through the practice of having their logs persisted.
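In Kubernetes, persisting logs as suggested above is usually done by mounting a volume at the path the application writes to. A minimal sketch, where the pod, image, paths, and claim name are all illustrative assumptions:

```yaml
# Sketch: mount a persistent volume at the app's log directory.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: demo-app:latest
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app   # the path the app writes its logs to
  volumes:
    - name: app-logs
      persistentVolumeClaim:
        claimName: app-logs-pvc     # assumes a PVC named app-logs-pvc exists
```

Logs written under /var/log/app then survive container restarts, unlike logs written to the container's own writable layer.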
No, the APM works as normal. These are the same standard agents; you don’t need to use special agents for Docker or Kubernetes. It’s all a matter of how you package and deploy the APM.
This works for both SaaS and on-prem Controller deployments.
The containers are related to the applications being monitored. When you look at our Docker visibility, the containers we are monitoring are the ones that have APM agents in them. We are working on additional functionality that will enable you to see all running containers.
No, we don’t currently support these; they are things we still need to certify. In the short term, we’re looking at support for the major traditional Linux OSes, but Red Hat Atomic is clearly a high priority for us. As we bring out support for OpenShift, that’s when we’re going to be looking at Atomic.
Soon! We’re getting close to support for it.
Think about how you are going to structure the build of the project. It’s very easy with any code to intertwine things that you didn’t mean to. Separate out the pieces: how you build your actual application deployment, the logic, the infrastructure needed to support it (e.g., the service engine), the framework, and anything specific to how things are packaged into containers. Keep those as separate as possible to make the refactoring easier. It’s very important to be aware of the startup dependencies, retries, and all of the pieces that are part of each service. There are tools like Dockerize to help get you there, but make a note of where you’re going to have to change your code to accommodate the microservice architecture.
Think about using persistent storage, how you are going to load those build artifacts into your containers, and agent configuration (loading in the agent binaries). It’s good to decide early on whether you prefer to pull the agents in from base images or if you want to load them dynamically. There are some tradeoffs there so you want to think about it early.
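The "load the agents dynamically" option mentioned above is often implemented in Kubernetes with an init container that copies the agent binaries into a shared volume before the application starts. This is a sketch under assumptions: the agent image name, registry, and file paths are hypothetical, not official AppDynamics artifacts:

```yaml
# Sketch: init container stages the agent binaries into a shared emptyDir
# volume; the app container then attaches the agent via JAVA_TOOL_OPTIONS.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  initContainers:
    - name: appd-agent
      image: my-registry/appd-java-agent:latest   # hypothetical agent image
      command: ["cp", "-r", "/opt/appdynamics/javaagent", "/agent/"]
      volumeMounts:
        - name: agent
          mountPath: /agent
  containers:
    - name: app
      image: demo-app:latest
      env:
        - name: JAVA_TOOL_OPTIONS    # JVMs pick this up as extra JVM options
          value: "-javaagent:/agent/javaagent/javaagent.jar"
      volumeMounts:
        - name: agent
          mountPath: /agent
  volumes:
    - name: agent
      emptyDir: {}
```

The tradeoff noted above applies here: this keeps agent versions out of your application images, at the cost of an extra image to maintain and a small startup delay while the binaries are copied.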