In my current project we are moving our application to the cloud. This means there is a Kubernetes cluster with a Jenkins build server. To get the hang of this setup I ran my own Kubernetes cluster locally with minikube and deployed Jenkins on top of it. Follow these steps to do the same on your Mac.

Here are all the sources I used on GitHub.

Minikube

Minikube enables you to run a Kubernetes cluster locally on your machine. You first need a hypervisor to run the minikube virtual machine:

brew cask install virtualbox

Then install minikube:

brew cask install minikube

To start minikube with VirtualBox, type:

minikube start --vm-driver=virtualbox

To see that everything is up, minikube offers a dashboard interface:

minikube dashboard
Minikube dashboard
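
Before continuing, you can also check the cluster state on the command line (a quick sanity check; the exact output wording varies per minikube version):

```shell
# Shows whether the VM, kubelet and apiserver of the local cluster are running
minikube status
```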

Kubectl

To send commands to a Kubernetes cluster there is a CLI available: kubectl. To install it, use:

brew install kubectl
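
Starting minikube also configures a kubectl context named minikube for you. A quick check that kubectl talks to the local cluster:

```shell
# minikube start sets the current context; verify it points at the local cluster
kubectl config current-context
kubectl cluster-info
```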

Now we are ready to configure everything we need to deploy Jenkins. We will need the following:

  1. Namespace
  2. Persistent volume

These are configured in .yaml files combined in this GitHub project. The namespace configuration looks like this:

apiVersion: v1
kind: Namespace
metadata:
  name: sybrenbolandit

To create the namespace in your cluster, use kubectl:

kubectl create -f jenkins-namespace.yaml

And we can verify this by using:

kubectl get ns

Now we can create the persistent volume. We use a hostPath under the /data directory, since minikube persists that directory across restarts.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
  namespace: sybrenbolandit
spec:
  storageClassName: jenkins-pv
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 20Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/jenkins-volume/

To create the volume in the cluster type:

kubectl create -f jenkins-volume.yaml
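
A hostPath volume is not created automatically, so it helps to make sure the backing directory exists inside the minikube VM and that the volume is registered. The chown to uid 1000 is an assumption based on the default user of the official Jenkins image:

```shell
# Create the backing directory inside the minikube VM
minikube ssh -- sudo mkdir -p /data/jenkins-volume
# The official Jenkins image runs as uid 1000; give it ownership (assumption)
minikube ssh -- sudo chown 1000:1000 /data/jenkins-volume
# Check that the PersistentVolume is registered and shows STATUS Available
kubectl get pv jenkins-pv
```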

Helm

The last tool we need is Helm. To install it, use Homebrew again:

brew install kubernetes-helm

Now go to the configuration of Jenkins itself in the /helm folder and initialise helm:

cd helm

helm init
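
helm init installs the server-side component Tiller into the cluster (this setup uses Helm 2). You can check that it is ready before installing the chart:

```shell
# Tiller runs as a deployment in kube-system; wait until the pod is Running
kubectl get pods --namespace kube-system -l app=helm
# Should report both a client and a server version once Tiller is up
helm version
```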

The following file is the configuration of Jenkins. We use the standard git and pipeline plugins. Furthermore we use the greenballs plugin, which shows green balls instead of blue in the UI for successful builds. Don’t be scared…

# Default values for jenkins.
# This is a YAML-formatted file.
# Declare name/value pairs to be passed into your templates.
# name: value

## Overrides for generated resource names
# See templates/_helpers.tpl
# nameOverride:
# fullnameOverride:

Master:
  Name: jenkins-master
  Image: "jenkins/jenkins"
  ImageTag: "2.141"
  ImagePullPolicy: "Always"
  Component: "jenkins-master"
  UseSecurity: true
  AdminUser: admin
  # AdminPassword: <defaults to random>
  Cpu: "200m"
  Memory: "256Mi"
  ServicePort: 8080
  # For minikube, set this to NodePort, elsewhere use LoadBalancer
  # <to set explicitly, choose port between 30000-32767>
  ServiceType: NodePort
  NodePort: 32000
  ServiceAnnotations: {}
  ContainerPort: 8080
  # Enable Kubernetes Liveness and Readiness Probes
  HealthProbes: true
  HealthProbesTimeout: 60
  SlaveListenerPort: 50000
  LoadBalancerSourceRanges:
    - 0.0.0.0/0
  # List of plugins to be installed during Jenkins master start
  InstallPlugins:
    - kubernetes:1.12.4
    - workflow-aggregator:2.5
    - workflow-job:2.24
    - credentials-binding:1.16
    - git:3.9.1
    - greenballs:1.15
  # Used to approve a list of groovy functions in pipelines using the script-security plugin. Can be viewed under /scriptApproval
  ScriptApproval:
    - "method groovy.json.JsonSlurperClassic parseText java.lang.String"
    - "new groovy.json.JsonSlurperClassic"
    - "staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods leftShift java.util.Map java.util.Map"
    - "staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods split java.lang.String"
  CustomConfigMap: false
  NodeSelector: {}
  Tolerations: {}

Agent:
  Enabled: true
  Image: jenkins/jnlp-slave
  ImageTag: 3.10-1
  Component: "jenkins-slave"
  Privileged: false
  Cpu: "200m"
  Memory: "256Mi"
  # You may want to change this to true while testing a new image
  AlwaysPullImage: false
  # You can define the volumes that you want to mount for this container
  # Allowed types are: ConfigMap, EmptyDir, HostPath, Nfs, Pod, Secret
  volumes:
    - type: HostPath
      hostPath: /var/run/docker.sock
      mountPath: /var/run/docker.sock
  NodeSelector: {}

Persistence:
  Enabled: true
  ## A manually managed Persistent Volume and Claim
  ## Requires Persistence.Enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # ExistingClaim:
  ## jenkins data Persistent Volume Storage Class
  StorageClass: jenkins-pv

  Annotations: {}
  AccessMode: ReadWriteOnce
  Size: 20Gi
  volumes:
  #  - name: nothing
  #    emptyDir: {}
  mounts:
  #  - mountPath: /var/nothing
  #    name: nothing
  #    readOnly: true

NetworkPolicy:
  # Enable creation of NetworkPolicy resources.
  Enabled: false
  # For Kubernetes v1.4, v1.5 and v1.6, use 'extensions/v1beta1'
  # For Kubernetes v1.7, use 'networking.k8s.io/v1'
  ApiVersion: networking.k8s.io/v1

## Install Default RBAC roles and bindings
rbac:
  install: true
  serviceAccountName: default
  # RBAC api version (currently either v1beta1 or v1alpha1)
  apiVersion: v1beta1
  # Cluster role reference
  roleRef: cluster-admin

We only need to state what we want and Helm does the rest:

helm install --name jenkins -f helm/jenkins-values.yaml stable/jenkins --namespace sybrenbolandit --version 0.22.0
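
The first startup can take a few minutes while the image is pulled and the plugins are installed. You can follow the rollout with:

```shell
# Watch the Jenkins pod come up; press Ctrl-C to stop watching
kubectl get pods --namespace sybrenbolandit -w
```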

To obtain the password to log in to Jenkins run the following command:

printf $(kubectl get secret --namespace sybrenbolandit jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo

Save this output!

The result

We can check the minikube dashboard to see if our Jenkins container is up and running. Don’t forget to select the sybrenbolandit namespace.

Minikube dashboard - pods running

If you see green balls like the above, you can reach Jenkins on http://192.168.99.100:32000 and log in with username admin and the password we saved in the last step.
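
Note that 192.168.99.100 is the typical IP for the VirtualBox driver, but it can differ per machine. You can ask minikube for the actual address and the full service URL (assuming the Helm release is named jenkins, as above):

```shell
# Print the IP of the minikube VM
minikube ip
# Print the URL of the Jenkins NodePort service directly
minikube service jenkins --namespace sybrenbolandit --url
```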

Jenkins welcome screen

An overview of all the code can be found here.

Happy building!