In an earlier post we saw how to deploy your Spring application to a Kubernetes cluster. Here we will see how to set up monitoring in your cluster, with Kubernetes and Prometheus working together. Follow along to set up your own Prometheus instance!

For more in-depth information on Prometheus, read the documentation.

Prometheus

Prometheus will run as a separate pod in your cluster that scrapes given endpoints for information. In this first section we will get this pod running. All code for this section can be found on GitHub. Let’s take a look at the Kubernetes resource definition of Prometheus:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  labels:
    prometheus: sybrenbolandit
spec:
  serviceMonitorSelector:
    matchLabels:
      prometheus: sybrenbolandit
  resources:
    requests:
      memory: 400Mi

Note that the apiVersion and the kind are not part of the standard Kubernetes API. These Custom Resource Definitions (CRDs) are added by installing the Prometheus Operator. Here is a diagram of the architecture.

[Image: Prometheus Operator architecture. Source: prometheus-operator]

Here we use Helm to install the Prometheus Operator (to get started with Helm, start here).

helm install coreos/prometheus-operator --name prometheus-operator --namespace sybrenbolandit

Now we can deploy the Prometheus pod with its required config. Get the code from github and run the following command:

cd deployment && kustomize build overlays/test | kubectl apply --record -f -

Check the pods in the sybrenbolandit namespace and you should see something like this:

[Image: running Prometheus pods]

To get to the Prometheus UI you can port-forward the pod like this:

kubectl -n sybrenbolandit port-forward prometheus-prometheus-0 9090:9090

Then browse to http://localhost:9090 and you will see an empty screen; by the end of this post there will be data.

[Image: Prometheus graph page]

Actuator endpoints

Prometheus can scrape a Spring Boot Actuator endpoint to collect data from a Spring Boot application. In this section we will build from an application used in an earlier post and add the required configuration.

The full code can be found on this branch. Here are the steps I took.

First of all we need to expose the Spring Boot Actuator endpoints. We do this on a different port than the service endpoints, so our Service will look something like the following, with multiple ports.

apiVersion: v1
kind: Service
metadata:
  name: java-spring-api
spec:
  ports:
  - name: web
    port: 8080
  - name: actuator
    port: 8081
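The container in the Deployment must expose both ports as well. A minimal sketch of the container spec (the image tag is a placeholder) could look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-spring-api
spec:
  template:
    spec:
      containers:
      - name: java-spring-api
        image: java-spring-api:latest   # placeholder image
        ports:
        - name: web
          containerPort: 8080           # service traffic
        - name: actuator
          containerPort: 8081           # Spring Boot Actuator
```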

In our application we set the following property for the actuator port.

management.server.port=8081

To get this into our pod we use the configMapGenerator of Kustomize: we add an application.properties per environment containing this property, and a kustomization.yaml like this:

namespace: sybrenbolandit
commonLabels:
  env: test
bases:
- ../../base
patchesStrategicMerge:
- autoscaling.yaml
- ingress.yaml
configMapGenerator:
- files:
  - application.properties
  name: application-config
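How the generated ConfigMap reaches the container depends on the base deployment; a minimal sketch (the mount path and startup flag are assumptions) is to mount it as a volume and point Spring Boot at it:

```yaml
# sketch: mount the generated ConfigMap into the Spring Boot container
spec:
  template:
    spec:
      containers:
      - name: java-spring-api
        # assumption: the app is started with
        # --spring.config.additional-location=file:/config/
        volumeMounts:
        - name: application-config
          mountPath: /config
      volumes:
      - name: application-config
        configMap:
          name: application-config   # matches the configMapGenerator name
```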

If we push this and run the Jenkins pipeline that we made earlier, our pod exposes the actuator endpoint on port 8081.

Prometheus endpoint

We can now add an Actuator endpoint that Prometheus can scrape. First add the following dependencies.

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-core</artifactId>
    <version>${micrometer.version}</version>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
    <version>${micrometer.version}</version>
</dependency>
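The ${micrometer.version} placeholder assumes a corresponding property in the pom, for example (the version number is illustrative; pick one compatible with your Spring Boot version):

```xml
<properties>
    <micrometer.version>1.1.4</micrometer.version>
</properties>
```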

We also extend our application.properties to include the following.

management.server.port=8081
management.server.address=0.0.0.0
management.endpoints.web.exposure.include=shutdown,health,prometheus
management.endpoint.shutdown.enabled=true
management.metrics.enable.jvm=true
management.endpoint.metrics.enabled=true
management.endpoint.prometheus.enabled=true
management.metrics.export.prometheus.enabled=true
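With these properties in place, /actuator/prometheus serves metrics in the Prometheus text exposition format. The output looks roughly like this (metric values are illustrative):

```
# HELP jvm_memory_used_bytes The amount of used memory
# TYPE jvm_memory_used_bytes gauge
jvm_memory_used_bytes{area="heap",id="PS Eden Space",} 1.68165E7
# HELP process_cpu_usage The "recent cpu usage" for the Java Virtual Machine process
# TYPE process_cpu_usage gauge
process_cpu_usage 0.12
```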

The last thing we need is a ServiceMonitor, a CRD that specifies how groups of services should be monitored (the Prometheus scrape config is generated automatically from this definition). Please look at the diagram above to see how it fits into the architecture. Here is our definition.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: java-spring-api
  labels:
    prometheus: sybrenbolandit
spec:
  selector:
    matchLabels:
      app: java-spring-api
  endpoints:
  - port: actuator
    path: /actuator/prometheus
  targetLabels:
  - env
  - app

After deploying this application with the Jenkins pipeline we made in an earlier post, the Prometheus endpoint is available and Prometheus scrapes it for information about this pod. In the Prometheus UI we can now query this information and show it in a graph.
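For example, CPU usage can be graphed with a PromQL query on the process metric that Micrometer registers for JVM applications (the label values here are illustrative; env and app are attached via the targetLabels of the ServiceMonitor):

```
process_cpu_usage{app="java-spring-api", env="test"}
```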

[Image: Prometheus CPU usage graph]

Hopefully you can now get your own Prometheus up and running to monitor your Kubernetes pods. Happy monitoring!