Abhi.Bajaj
AppDynamics Team

1. Prerequisites

We will use Strimzi to deploy a Kafka cluster. Before deploying the Strimzi cluster operator, create a namespace called kafka:

kubectl create namespace kafka

Apply the Strimzi install files, including ClusterRoles, ClusterRoleBindings and some Custom Resource Definitions (CRDs). The CRDs define the schemas used for the custom resources (CRs, such as Kafka, KafkaTopic and so on) you will be using to manage Kafka clusters, topics and users.

kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka

Follow the deployment of the Strimzi cluster operator:

kubectl get pod -n kafka --watch

You can also follow the operator’s log:

kubectl logs deployment/strimzi-cluster-operator -n kafka -f

2. Create an Apache Kafka cluster

Create a new Kafka custom resource to get a small persistent Apache Kafka Cluster with one node for Apache Zookeeper and Apache Kafka:

# Apply the `Kafka` Cluster CR file
kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka

Wait while Kubernetes starts the required pods, services, and so on:

kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
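The wait command polls the Kafka resource's Ready condition until a deadline expires. That poll-until-ready pattern can be sketched in Python (the status callable below is a stand-in for illustration, not a real Kubernetes client call):

```python
import time

def wait_for_ready(get_status, timeout=300, interval=5):
    """Poll get_status() until it returns 'Ready' or the timeout expires.

    Mirrors what `kubectl wait --for=condition=Ready --timeout=300s` does:
    repeatedly check the resource's condition, giving up after the deadline.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == "Ready":
            return True
        time.sleep(interval)
    return False

# Stand-in status source that becomes Ready on the third poll.
polls = iter(["Pending", "Pending", "Ready"])
print(wait_for_ready(lambda: next(polls), timeout=60, interval=0))
# -> True
```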

Send and receive messages

With the cluster running, run a simple producer to send messages to a Kafka topic (the topic is automatically created):

kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.40.0-kafka-3.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic

Once everything is set up correctly, you’ll see a prompt where you can type in your messages:

If you don't see a command prompt, try pressing enter.
>Hello
>I am Abhi

To receive them in a different terminal, run:

kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.40.0-kafka-3.7.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning

If everything works as expected, you’ll be able to see the message you produced in the previous step:

If you don't see a command prompt, try pressing enter.
>Hello
>I am Abhi

Our cluster is ready. Now let's set up the Prometheus Kafka exporter to emit metrics about the Kafka cluster.

3. Prometheus Kafka exporter

We will use the prometheus-community Helm charts. Let's add the repo:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

Create a file called kafka_helm_values.yaml with the service discovery config:

kafkaServer:
  - my-cluster-kafka-bootstrap:9092
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: "/metrics"
  prometheus.io/port: "9308"

service:
  type: ClusterIP
  port: 9308
  labels:
    clustername: my-cluster
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "9308"
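The prometheus.io/* annotations above are the conventional opt-in hints that Prometheus relabeling rules (or other annotation-aware scrapers) read to decide whether and where to scrape a target. A minimal sketch of that decision logic, using a hypothetical service hostname:

```python
def scrape_target(annotations, host):
    """Return the scrape URL for a target, or None if it has not opted in.

    Simplified version of the conventional prometheus.io/* annotation
    scheme: scrape only when prometheus.io/scrape is "true", using the
    annotated port and path (defaulting to /metrics).
    """
    if annotations.get("prometheus.io/scrape") != "true":
        return None
    port = annotations.get("prometheus.io/port", "9090")
    path = annotations.get("prometheus.io/path", "/metrics")
    return f"http://{host}:{port}{path}"

# Annotations from kafka_helm_values.yaml; the hostname is illustrative.
ann = {
    "prometheus.io/scrape": "true",
    "prometheus.io/path": "/metrics",
    "prometheus.io/port": "9308",
}
print(scrape_target(ann, "prom-kafka-exporter.kafka.svc"))
# -> http://prom-kafka-exporter.kafka.svc:9308/metrics
```

In the real system, Prometheus implements this via relabel_configs on Kubernetes service discovery metadata; the function above only illustrates the intent of each annotation.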

To deploy, run:

helm install --namespace kafka prom-kafka-expo-ns prometheus-community/prometheus-kafka-exporter -f kafka_helm_values.yaml

If you changed the cluster name, edit the YAML file above accordingly. To view the metrics at http://localhost:9308/metrics, port-forward the exporter pod:

export POD_NAME=$(kubectl get pods --namespace kafka -l "app=prometheus-kafka-exporter,release=prom-kafka-expo-ns" -o jsonpath="{.items[0].metadata.name}")
echo $POD_NAME
kubectl port-forward $POD_NAME 9308:9308 -n kafka
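The exporter serves metrics in the plain-text Prometheus exposition format. Here is a small parser for a single metric line; the sample line mimics the shape of kafka-exporter output and is illustrative, not captured from a live cluster:

```python
import re

# One line of Prometheus text exposition format, shaped like
# kafka-exporter output (illustrative sample).
sample = 'kafka_topic_partitions{topic="my-topic"} 1'

LINE = re.compile(r'^(\w+)(?:\{(.*)\})?\s+(\S+)$')

def parse_metric(line):
    """Split a metric line into (name, labels dict, float value)."""
    m = LINE.match(line)
    name, raw_labels, value = m.groups()
    labels = dict(re.findall(r'(\w+)="([^"]*)"', raw_labels or ""))
    return name, labels, float(value)

print(parse_metric(sample))
# -> ('kafka_topic_partitions', {'topic': 'my-topic'}, 1.0)
```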

4. Instrumenting with Cisco Cloud Observability

Make sure you have deployed Cisco Cloud Observability as mentioned here:

If so, edit your kafka_helm_values.yaml file and add the appdynamics.com annotations as follows:

kafkaServer:
  - my-cluster-kafka-bootstrap:9092
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: "/metrics"
  prometheus.io/port: "9308"
  appdynamics.com/exporter_type: "kafka"
  appdynamics.com/kafka_cluster_name: "my-cluster"

service:
  type: ClusterIP
  port: 9308
  labels:
    clustername: my-cluster
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "9308"
    appdynamics.com/exporter_type: "kafka"
    appdynamics.com/kafka_cluster_name: "my-cluster"

Afterward, run:

helm upgrade --namespace kafka prom-kafka-expo-ns prometheus-community/prometheus-kafka-exporter -f kafka_helm_values.yaml

Wait about 5 minutes and check the logs of your OpenTelemetry Collector.

kubectl -n cco logs appdynamics-collectors-ss-appdynamics-otel-collector-0 | grep kafka
-> k8s.namespace.name: Str(kafka)
-> k8s.namespace.name: Str(kafka)
-> k8s.namespace.name: Str(kafka)
-> k8s.pod.labels: Map({"run":"kafka-consumer"})
-> k8s.pod.name: Str(kafka-consumer)
-> k8s.pod.containernames: Slice(["kafka-consumer"])
-> k8s.namespace.name: Str(kafka)
-> k8s.namespace.name: Str(kafka)
-> k8s.namespace.name: Str(kafka)
-> k8s.pod.containernames: Slice(["kafka-producer"])
-> k8s.pod.labels: Map({"run":"kafka-producer"})
-> k8s.pod.name: Str(kafka-producer)
logger.kafka.name = org.apache.kafka
logger.kafka.level = WARN
-> k8s.namespace.name: Str(kafka)
-> k8s.service.selector: Map({"strimzi.io/cluster":"my-cluster","strimzi.io/kind":"Kafka","strimzi.io/name":"my-cluster-kafka"})
-> k8s.service.labels: Map({"app.kubernetes.io/instance":"my-cluster","app.kubernetes.io/managed-by":"strimzi-cluster-operator","app.kubernetes.io/name":"kafka","app.kubernetes.io/part-of":"strimzi-my-cluster","strimzi.io/cluster":"my-cluster","strimzi.io/component-type":"kafka","strimzi.io/kind":"Kafka","strimzi.io/name":"my-cluster-kafka"})
-> k8s.service.name: Str(my-cluster-kafka-brokers)
-> k8s.namespace.name: Str(kafka)
-> k8s.secret.labels: Map({"modifiedAt":"1714427393","name":"prom-kafka-expo-ns","owner":"helm","status":"superseded","version":"1"})
-> k8s.namespace.name: Str(kafka)
-> k8s.secret.name: Str(sh.helm.release.v1.prom-kafka-expo-ns.v1)
-> k8s.namespace.name: Str(kafka)
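The `-> key: Str(value)` lines are how the collector's debug output renders resource attributes. When grepping logs like this, a small helper can collect them into a dict (this assumes exactly the rendering shown above):

```python
import re

# Matches collector debug lines such as "-> k8s.pod.name: Str(kafka-producer)".
ATTR = re.compile(r'->\s*([\w.]+):\s*\w+\((.*)\)$')

def parse_attrs(log_lines):
    """Collect `-> key: Type(value)` attribute lines into a dict."""
    attrs = {}
    for line in log_lines:
        m = ATTR.search(line)
        if m:
            attrs[m.group(1)] = m.group(2)
    return attrs

# Lines taken from the collector output above:
log = [
    "-> k8s.namespace.name: Str(kafka)",
    "-> k8s.pod.name: Str(kafka-producer)",
    '-> k8s.pod.containernames: Slice(["kafka-producer"])',
]
print(parse_attrs(log))
```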

On the UI, scroll to Observe -> Kafka Clusters


You should see the topic (my-topic) listed.


Great work everyone!!

Version history
Last update:
‎04-30-2024 04:11 PM