Abhi.Bajaj
AppDynamics Team


In the ever-evolving landscape of cloud-native applications, maintaining visibility and monitoring performance are paramount. This article aims to guide you through the process of setting up an observability framework using OpenTelemetry (Otel) in conjunction with AppDynamics. By leveraging the power of OtelCollectors and the AppDynamics backend, we can gather, process, and analyze telemetry data (traces, logs, and metrics) to ensure our applications are performing optimally and to quickly troubleshoot any issues that may arise.

Prerequisites:

  1. Access to your AppDynamics Cisco Cloud Observability endpoint URL.
  2. Credentials generated by following Step 3 of https://docs.appdynamics.com/observability/cisco-cloud-observability/en/kubernetes-and-app-service-m...

Step 1: Configuring the OtelCollector

Creating the Configuration

The first step in our journey involves creating a configuration for our OtelCollector. This is accomplished by defining a ConfigMap in Kubernetes, which outlines how our collector will operate. This configuration specifies the protocols for receiving telemetry data, the processing of this data, and how it will be exported to our observability backend, such as AppDynamics. Here's an example of what this ConfigMap might look like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: collector-config
  namespace: appd-cloud-apps
data:
  collector.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint:
          http:
            endpoint:
    processors:
      batch:
        send_batch_size: 1000
        timeout: 10s
        send_batch_max_size: 1000
    exporters:
      logging:
        verbosity: detailed
      otlphttp/cnao:
        auth:
          authenticator: oauth2client
        traces_endpoint: https://xxxx-xx-xx-xx.xxx.appdynamics.com/data/v1/trace
        logs_endpoint: https://xx-pdx-xxx-xx.xxx.appdynamics.com/data/v1/logs
    extensions:
      health_check:
        endpoint: 0.0.0.0:13133
      pprof:
        endpoint: 0.0.0.0:17777
      oauth2client:
        client_id: xxxx
        client_secret: xxxx
        token_url: xxx
    service:
      extensions: [health_check, pprof, oauth2client]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging, otlphttp/cnao]
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging, otlphttp/cnao]
      telemetry:
        logs:
          level: "debug"

This configuration is the heart of our observability setup, integrating seamlessly with AppDynamics to provide a detailed view of our application’s performance.
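Assuming the manifest above is saved locally as collector-config.yaml (the filename is an assumption; any name works), it can be applied and verified with kubectl against the appd-cloud-apps namespace from the metadata:

```shell
# Create (or update) the ConfigMap in the cluster
kubectl apply -f collector-config.yaml

# Confirm it exists and inspect the rendered collector.yaml key
kubectl get configmap collector-config -n appd-cloud-apps -o yaml
```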

Step 2: Deploying the OtelCollector

Launching the Collector

With our configuration in place, the next step is to deploy the OtelCollector within our Kubernetes cluster. This deployment ensures that the collector is operational and can begin processing telemetry data as defined. The deployment configuration ties our ConfigMap to the OtelCollector, enabling it to start receiving, processing, and exporting telemetry data. Here's a basic example of what the deployment configuration might include:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: opentelemetrycollector
  namespace: appd-cloud-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: opentelemetrycollector
  template:
    metadata:
      labels:
        app.kubernetes.io/name: opentelemetrycollector
    spec:
      containers:
        - name: otelcol
          args:
            - --config=/conf/collector.yaml
          image: docker.io/otel/opentelemetry-collector-contrib:latest
          volumeMounts:
            - mountPath: /conf
              name: collector-config
      volumes:
        - configMap:
            name: collector-config
            items:
              - key: collector.yaml
                path: collector.yaml
          name: collector-config

This ensures our OtelCollector is primed to handle telemetry data, marking a crucial step towards full observability.
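To confirm the collector actually came up, you can apply the deployment and probe the health_check extension configured on port 13133 earlier (the collector-deployment.yaml filename is an assumption):

```shell
# Roll out the collector and wait until the pod is Ready
kubectl apply -f collector-deployment.yaml
kubectl rollout status deployment/opentelemetrycollector -n appd-cloud-apps

# The health_check extension listens on 13133; port-forward and probe it
kubectl port-forward deployment/opentelemetrycollector 13133:13133 -n appd-cloud-apps &
curl -s http://localhost:13133/
```

A healthy collector returns a small JSON status payload; if the port-forward or curl fails, check the pod logs for configuration errors.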

Step 3: Instrumenting an Application with the OpenTelemetry Java Agent

Setting Up the Application for Telemetry

The final step involves instrumenting our application with the OpenTelemetry Java Agent. This is crucial for collecting telemetry data from the application itself. By deploying a Kubernetes application with the Java Agent attached, we enable our application to send telemetry data directly to the OtelCollector. This setup includes an init container to prepare the Java Agent and a sidecar container to collect and forward this telemetry data to our central OtelCollector and AppDynamics for analysis.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-otel-personal
  namespace: appd-cloud-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat-otel-personal
  template:
    metadata:
      labels:
        app: tomcat-otel-personal
    spec:
      initContainers:
        - name: otel-agent-attach-java
          command:
            - cp
            - -r
            - /javaagent.jar
            - /otel-auto-instrumentation-java/javaagent.jar
          image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:latest
          volumeMounts:
            - mountPath: /otel-auto-instrumentation-java
              name: otel-agent-repo
      containers:
        - name: sidecar-otel-collector
          image: otel/opentelemetry-collector
          args:
            - --config=/conf/agent.yaml
          volumeMounts:
            - name: sidecar-otel-collector-config
              mountPath: /conf
        - name: tomcat-app
          image: docker.io/abhimanyubajaj98/tomcat-app-buildx
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /otel-auto-instrumentation-java
              name: otel-agent-repo
          env:
            - name: JAVA_TOOL_OPTIONS
              value: "-javaagent:/otel-auto-instrumentation-java/javaagent.jar -Dotel.resource.attributes=service.name=open-otel-abhi,service.namespace=open-otel-abhi -Dotel.traces.exporter=otlp,logging"
            - name: OTEL_EXPORTER_OTLP_PROTOCOL
              value: grpc
      volumes:
        - name: otel-agent-repo
          emptyDir: {}
        - name: sidecar-otel-collector-config
          configMap:
            name: sidecar-otel-collector-config  # assumed ConfigMap holding agent.yaml; adjust to your setup
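The sidecar expects an agent.yaml (mounted from the sidecar-otel-collector-config volume) that the manifest above references but does not show. A minimal sketch of such a config, assuming the central collector is exposed through a Kubernetes Service named opentelemetrycollector in the appd-cloud-apps namespace (that Service name is an assumption; adjust it to match your cluster):

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  otlp:
    endpoint: opentelemetrycollector.appd-cloud-apps.svc.cluster.local:4317  # assumed Service DNS name
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      exporters: [otlp]

Because the Java agent defaults to exporting OTLP to localhost:4317, the sidecar receives the application's telemetry on the pod's loopback interface and forwards it to the central collector.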

By following these steps, we’ve successfully configured and deployed an observability framework using OpenTelemetry and integrated it with AppDynamics. This setup not only enhances the visibility into our application’s performance but also empowers us to proactively manage and troubleshoot any issues that may arise, ensuring optimal performance and reliability.
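Before checking the UI, you can confirm data is flowing by tailing the central collector's logs; with the logging exporter set to detailed verbosity, spans from open-otel-abhi should appear as traffic hits the Tomcat application:

```shell
# Watch the central collector for incoming spans and log records
kubectl logs deployment/opentelemetrycollector -n appd-cloud-apps --tail=50 -f
```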

Once you are done, head over to the Cisco Cloud Observability UI -> Services and filter on service.name. In our case, the service.name is open-otel-abhi.

[Screenshot: the open-otel-abhi service shown in the Cisco Cloud Observability Services view]

Version history
Last update: 03-13-2024 02:12 PM