Helm
Setup
We will be using a virtual machine in the faculty's cloud.
When creating a virtual machine in the Launch Instance window:
- Name your VM using the following convention: `cc_lab<no>_<username>`, where `<no>` is the lab number and `<username>` is your institutional account.
- Select Boot from image in the Instance Boot Source section.
- Select CC 2024-2025 in the Image Name section.
- Select the m1.xlarge flavor.
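As a quick sketch, the naming convention expands like this in shell (the lab number and username below are placeholders, not real values):

```shell
# Hypothetical values: replace with your own lab number and username
no=7
username=john.doe
vm_name="cc_lab${no}_${username}"
echo "$vm_name"   # → cc_lab7_john.doe
```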
In the base virtual machine:
- Download the laboratory archive from here into the `work` directory. Use `wget https://repository.grid.pub.ro/cs/cc/laboratoare/lab-kubernetes-part-1.zip` to download the archive.
- Extract the archive.
- Run the setup script: `bash lab-kubernetes-part-1.sh`.
$ # change the working dir
$ cd ~/work
$ # download the archive
$ wget https://repository.grid.pub.ro/cs/cc/laboratoare/lab-kubernetes-part-1.zip
$ unzip lab-kubernetes-part-1.zip
$ # run setup script; it may take a while
$ bash lab-kubernetes-part-1.sh
Creating a Kubernetes cluster
As in the previous laboratories, we will create a cluster on the lab machine, using the `kind create cluster` command:
student@lab-kubernetes:~$ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.23.4) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Thanks for using kind! 🙂
It is recommended that you use port-forwarding instead of X11 forwarding to interact with the UI.
What is Helm?
Package managers are tools that help with the installation and management of software packages and their dependencies. Linux distributions ship their own: `apt` (Ubuntu & Debian), `dnf` (Fedora, Red Hat & CentOS) or `pacman` (Arch). Programming languages also rely on package managers to install libraries, such as `pip` for Python or `npm` for JavaScript.
Kubernetes has its own package manager called Helm. Helm simplifies the deployment and management of applications inside a Kubernetes cluster by packaging them in charts, which are reusable, configurable and versioned templates.
Artifact Hub
Artifact Hub is a centralized platform for discovering and sharing Kubernetes packages, including Helm charts, operators, and plugins.
It allows users to explore and install Helm charts from various repositories, simplifying application deployment.
Helm can use repositories listed on Artifact Hub by adding them with `helm repo add`, making chart management more accessible.
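As a sketch (assuming `helm` is installed, as it is on the lab VMs, and that network access is available), adding a repository found on Artifact Hub and searching for charts looks like this; the Elastic repository used later in this lab serves as the example:

```shell
# Register a repository listed on Artifact Hub under a local alias
helm repo add elastic https://helm.elastic.co
# Refresh the local cache of chart indexes
helm repo update
# Search the repositories you have added locally
helm search repo elastic
# Search across everything indexed on Artifact Hub
helm search hub eck-operator
```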
Installing Helm
Helm is already installed on our VMs. Use `helm version` to check that the tool was successfully installed:
student@lab-helm:~$ helm version
version.BuildInfo{Version:"v3.17.1", GitCommit:"980d8ac1939e39138101364400756af2bdee1da5", GitTreeState:"clean", GoVersion:"go1.23.5"}
If you want to install Helm on your computers, follow the installation link from here.
Helm offers a cheat sheet with the basic commands necessary to manage an application.
Charts
Throughout all the examples and exercises, please be careful to follow the instructions the charts give you at the end of their installation.
The packaging format used by Helm is called chart. A chart is a collection of files describing a set of Kubernetes resources. One chart can package a simple resource, like a pod, or complex resources, like entire applications.
A chart has the following structure:
my-chart/
Chart.yaml # A YAML file containing information about the chart
LICENSE # OPTIONAL: A plain text file containing the license for the chart
README.md # OPTIONAL: A human-readable README file
values.yaml # The default configuration values for this chart
values.schema.json # OPTIONAL: A JSON Schema for imposing a structure on the values.yaml file
charts/ # A directory containing any charts upon which this chart depends.
crds/ # Custom Resource Definitions
templates/ # A directory of templates that, when combined with values,
# will generate valid Kubernetes manifest files.
templates/NOTES.txt # OPTIONAL: A plain text file containing short usage notes
CRDs - Custom Resource Definitions
CRDs are extensions of the Kubernetes API that allow users to define their own resources.
We can use CRDs to define new types of resources and interact with them directly using `kubectl`.
More details about CRDs can be found here.
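To make this concrete, here is a minimal CRD manifest, modeled on the classic CronTab example from the Kubernetes documentation (all names are illustrative); the `kubectl` commands are shown as comments since they require a running cluster:

```shell
# Write a minimal CRD manifest (names taken from the upstream CronTab example)
cat > crontab-crd.yaml <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
EOF
# On a running cluster you would then register and query the new type:
#   kubectl apply -f crontab-crd.yaml
#   kubectl get crontabs
```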
Chart vs Deployment
A Kubernetes Deployment is a resource that manages the lifecycle of a set of pods. A chart is a collection of files called templates that can include multiple Kubernetes resources (e.g. Deployments, Services, ConfigMaps).
In the following example we will deploy Elastic using the two methods.
Elastic Kubernetes Deployment
Firstly, we install the custom resource definitions for Elastic using `create`:
student@lab-helm:~$ kubectl create -f https://download.elastic.co/downloads/eck/2.16.1/crds.yaml
customresourcedefinition.apiextensions.k8s.io/agents.agent.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/beats.beat.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticmapsservers.maps.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/enterprisesearches.enterprisesearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/logstashes.logstash.k8s.elastic.co created
Afterwards, we install the Elastic operator using `apply`:
student@lab-helm:~$ kubectl apply -f https://download.elastic.co/downloads/eck/2.16.1/operator.yaml
Warning: resource namespaces/elastic-system is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/elastic-system configured
serviceaccount/elastic-operator created
secret/elastic-webhook-server-cert created
configmap/elastic-operator created
clusterrole.rbac.authorization.k8s.io/elastic-operator created
clusterrole.rbac.authorization.k8s.io/elastic-operator-view created
clusterrole.rbac.authorization.k8s.io/elastic-operator-edit created
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created
service/elastic-webhook-server created
statefulset.apps/elastic-operator created
validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co created
Afterwards, we can monitor the operator's setup from its logs, using `logs`:
student@lab-helm:~$ kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
We can check that the operator is ready by using `get` and verifying that the operator pod is `Running`:
student@lab-helm:~$ kubectl get -n elastic-system pods
NAME READY STATUS RESTARTS AGE
elastic-operator-0 1/1 Running 0 3m51s
Run `delete` to remove all the Elastic resources.
student@lab-helm:~$ kubectl delete -f https://download.elastic.co/downloads/eck/2.16.1/operator.yaml
student@lab-helm:~$ kubectl delete -f https://download.elastic.co/downloads/eck/2.16.1/crds.yaml
Elastic Helm Chart
Now we will deploy Elastic using Helm. Firstly, we will add the Elastic Helm repository to the package sources and update the local repository index.
student@lab-helm:~$ helm repo add elastic https://helm.elastic.co
student@lab-helm:~$ helm repo update
Then we will use `helm install` to install the Elastic chart:
student@lab-helm:~$ helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace
NAME: elastic-operator
LAST DEPLOYED: Mon Mar 17 21:27:22 2025
NAMESPACE: elastic-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Inspect the operator logs by running the following command:
kubectl logs -n elastic-system sts/elastic-operator
And we will check that the operator is running by looking at its logs, using `kubectl logs`:
student@lab-helm:~$ kubectl logs -n elastic-system sts/elastic-operator
{"log.level":"info","@timestamp":"2025-03-17T21:27:24.304Z","log.logger":"manager","message":"maxprocs: Updating GOMAXPROCS=1: determined from CPU quota","service.version":"2.16.1+1f74bdd9","service.type":"eck","ecs.version":"1.4.0"}
{"log.level":"info","@timestamp":"2025-03-17T21:27:24.304Z","log.logger":"manager","message":"Setting default container registry","service.version":"2.16.1+1f74bdd9","service.type":"eck","ecs.version":"1.4.0","container_registry":"docker.elastic.co"}
[...]
{"log.level":"info","@timestamp":"2025-03-17T21:27:35.818Z","log.logger":"resource-reporter","message":"Created resource successfully","service.version":"2.16.1+1f74bdd9","service.type":"eck","ecs.version":"1.4.0","kind":"ConfigMap","namespace":"elastic-system","name":"elastic-licensing"}
{"log.level":"info","@timestamp":"2025-03-17T21:27:35.820Z","log.logger":"manager","message":"Orphan secrets garbage collection complete","service.version":"2.16.1+1f74bdd9","service.type":"eck","ecs.version":"1.4.0"}
As we can see, the Elastic operator is running, which can also be confirmed using `kubectl get pods`:
student@lab-helm:~$ kubectl get pods -n elastic-system
NAME READY STATUS RESTARTS AGE
elastic-operator-0 1/1 Running 0 3m44s
Exercise: Deploy Podinfo
- Deploy Podinfo using Helm and access its frontend to test your deployment. Use the `ghcr.io` install.

Run the `kubectl` command the deployment tells you to run to expose the service and make it accessible from outside the cluster.

To access the frontend for podinfo, connect to the OpenStack VM using `-L [local-port]:localhost:8080`.
# Forward local port 1080 to the port that you forward Kubernetes service to.
student@local-machine:~$ ssh -J fep -L 1080:localhost:8080 student@[ip_vm]
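The exact forwarding command is printed in the chart's NOTES after installation; as a sketch (assuming the chart's default service name and podinfo's default HTTP port 9898), the forward run on the VM looks like:

```shell
# Forward port 8080 on the VM to the podinfo service inside the cluster
# (service name and port assume the chart defaults; prefer the command
# printed by the chart's NOTES)
kubectl port-forward svc/podinfo 8080:9898
```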
If you have issues accessing the Podinfo frontend from Firefox, try a different browser.
Values
Charts offer the possibility to parameterize values inside their templates.
Values are a great way to customize charts and make them portable, allowing us to set different parameters with specific values that are used in our deployments.
We can find the parameters and their default values for each chart in the `values.yaml` file.
We can pass values to charts in two ways: using `--set` during the `helm install` of a chart, or passing a file with the values using `helm install -f my-values.yaml`.
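As a sketch, the two ways of overriding defaults look like this (the chart and key names are illustrative, not from a specific chart):

```shell
# 1) Inline overrides with --set (shown as a comment; needs a real chart):
#    helm install my-release my-repo/my-chart --set replicaCount=3
# 2) Overrides from a file: write them to my-values.yaml and pass it with -f
cat > my-values.yaml <<'EOF'
replicaCount: 3
image:
  tag: "1.25"
EOF
#    helm install my-release my-repo/my-chart -f my-values.yaml
```

When both are given, `--set` takes precedence over values files, and with multiple `-f` files the rightmost one wins.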
Let's start from a simple chart. We use `helm create` to create our own chart:
student@lab-helm:~$ helm create my-chart
Creating my-chart
We now have a chart created with all its files:
student@lab-helm:~$ tree my-chart/
my-chart/
├── charts              # A directory containing any charts upon which this chart depends.
├── Chart.yaml          # A YAML file containing information about the chart
├── templates           # A directory of templates that will generate valid Kubernetes manifest files.
│   ├── deployment.yaml # The manifest of the deployment
│   ├── _helpers.tpl    # File containing helper functions for setting different values for the template
│   ├── hpa.yaml        # Horizontal Pod Autoscaler
│   ├── ingress.yaml    # Ingress configuration
│   ├── NOTES.txt       # Chart installation notes, displayed after a successful installation to give the next steps
│   ├── serviceaccount.yaml # The setup of a service account