
Helm

Setup​

We will be using a virtual machine in the faculty's cloud.

When creating a virtual machine in the Launch Instance window:

  • Name your VM using the following convention: cc_lab<no>_<username>, where <no> is the lab number and <username> is your institutional account.
  • Select Boot from image in the Instance Boot Source section.
  • Select CC 2024-2025 in the Image Name section.
  • Select the m1.xlarge flavor.

In the base virtual machine:

  • Download the laboratory archive from here in the work directory. Use: wget https://repository.grid.pub.ro/cs/cc/laboratoare/lab-kubernetes-part-1.zip to download the archive.
  • Extract the archive.
  • Run the setup script bash lab-kubernetes-part-1.sh.
$ # change the working dir
$ cd ~/work
$ # download the archive
$ wget https://repository.grid.pub.ro/cs/cc/laboratoare/lab-kubernetes-part-1.zip
$ unzip lab-kubernetes-part-1.zip
$ # run setup script; it may take a while
$ bash lab-kubernetes-part-1.sh

Creating a Kubernetes cluster​

As in the previous laboratories, we will create a cluster on the lab machine, using the kind create cluster command:

student@lab-kubernetes:~$ kind create cluster
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.23.4) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Thanks for using kind! 😊
note

It is recommended that you use port-forwarding instead of X11 forwarding to interact with the UI.

What is Helm?​

Package managers are tools used to help with the installation and management of software packages and dependencies. Linux, depending on the installed distribution, has apt (Ubuntu & Debian), dnf (Fedora, Red Hat & CentOS) or pacman (Arch). Even for programming languages we use package managers to install libraries, such as pip for Python or npm for JavaScript.

Kubernetes has its own package manager called Helm. Helm simplifies the deployment and management of applications inside a Kubernetes cluster by packaging them in charts, which are reusable, configurable and versioned templates.

Artifact Hub​

Artifact Hub is a centralized platform for discovering and sharing Kubernetes packages, including Helm charts, operators, and plugins. It allows users to explore and install Helm charts from various repositories, simplifying application deployment. Helm can connect to Artifact Hub by adding repositories listed there using helm repo add, making chart management more accessible.
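
For example, a chart found on Artifact Hub lists the repository it comes from and the command to add it. A minimal sketch of that workflow (the bitnami repository and its nginx chart are used purely as an illustration; they are not part of this lab):

$ helm repo add bitnami https://charts.bitnami.com/bitnami   # register the repository listed on Artifact Hub
$ helm repo update                                           # refresh the local index of available charts
$ helm search repo bitnami/nginx                             # check that the chart is now visible locally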

Installing Helm​

Helm is already installed on our VMs. Use helm version to check that the tool was successfully installed:

student@lab-helm:~$ helm version
version.BuildInfo{Version:"v3.17.1", GitCommit:"980d8ac1939e39138101364400756af2bdee1da5", GitTreeState:"clean", GoVersion:"go1.23.5"}
note

If you want to install Helm on your computers, follow the installation link from here.

note

Helm offers a cheat sheet with the basic commands necessary to manage an application.
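
For reference, a few of the commands from the cheat sheet that we will use throughout this lab (release and chart names below are placeholders):

$ helm repo add <repo-name> <repo-url>    # add a chart repository
$ helm search repo <keyword>              # search the added repositories
$ helm install <release> <repo>/<chart>   # install a chart as a release
$ helm list                               # list releases in the current namespace
$ helm status <release>                   # show the status and NOTES of a release
$ helm upgrade <release> <repo>/<chart>   # apply a new chart version or new values
$ helm uninstall <release>                # remove a release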

Charts​

warning

Throughout all the examples and exercises, please be careful to follow the instructions the charts give you at the end of their installation.

The packaging format used by Helm is called chart. A chart is a collection of files describing a set of Kubernetes resources. One chart can package a simple resource, like a pod, or complex resources, like entire applications.

A chart has the following structure:

my-chart/
  Chart.yaml           # A YAML file containing information about the chart
  LICENSE              # OPTIONAL: A plain text file containing the license for the chart
  README.md            # OPTIONAL: A human-readable README file
  values.yaml          # The default configuration values for this chart
  values.schema.json   # OPTIONAL: A JSON Schema for imposing a structure on the values.yaml file
  charts/              # A directory containing any charts upon which this chart depends
  crds/                # Custom Resource Definitions
  templates/           # A directory of templates that, when combined with values,
                       # will generate valid Kubernetes manifest files
  templates/NOTES.txt  # OPTIONAL: A plain text file containing short usage notes

CRDs - Custom Resource Definitions​

CRDs are extensions of the Kubernetes API that allow users to define their own resource types. Once a CRD is installed, we can create objects of the new type and interact with them directly using kubectl, just like built-in resources. More details about CRDs can be found here.
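
As a rough illustration (adapted from the example in the Kubernetes documentation; the crontabs resource is purely hypothetical and not used in this lab), a CRD manifest looks like this:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # the name must match <plural>.<group>
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                replicas:
                  type: integer

Once such a definition is applied, the new resource type can be used like any built-in one (e.g. kubectl get crontabs).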

Chart vs Deployment​

A Kubernetes Deployment is a resource that manages the lifecycle of a set of pods. A chart is a collection of files called templates that can include multiple Kubernetes resources (e.g. Deployments, Services, ConfigMaps, etc.).
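
For comparison, a single Deployment is described by one manifest such as the minimal, illustrative one below, while a chart bundles several such manifests as templates:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-nginx
spec:
  replicas: 2                  # keep 2 identical pods running
  selector:
    matchLabels:
      app: example-nginx
  template:
    metadata:
      labels:
        app: example-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80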

In the following example we will deploy the Elastic operator using both approaches: plain Kubernetes manifests and a Helm chart.

Elastic Kubernetes Deployment​

Firstly, we install the custom resource definitions for Elastic using create:

student@lab-helm:~$ kubectl create -f https://download.elastic.co/downloads/eck/2.16.1/crds.yaml
customresourcedefinition.apiextensions.k8s.io/agents.agent.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/beats.beat.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticmapsservers.maps.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/enterprisesearches.enterprisesearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/logstashes.logstash.k8s.elastic.co created

Afterwards, we install the Elastic operator using apply:

student@lab-helm:~$ kubectl apply -f https://download.elastic.co/downloads/eck/2.16.1/operator.yaml
Warning: resource namespaces/elastic-system is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/elastic-system configured
serviceaccount/elastic-operator created
secret/elastic-webhook-server-cert created
configmap/elastic-operator created
clusterrole.rbac.authorization.k8s.io/elastic-operator created
clusterrole.rbac.authorization.k8s.io/elastic-operator-view created
clusterrole.rbac.authorization.k8s.io/elastic-operator-edit created
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created
service/elastic-webhook-server created
statefulset.apps/elastic-operator created
validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co created

Afterwards, we can monitor the operator's setup from its logs, using logs:

student@lab-helm:~$ kubectl -n elastic-system logs -f statefulset.apps/elastic-operator

We can check that the operator is ready by using get and checking that the operator pod is Running:

student@lab-helm:~$ kubectl get -n elastic-system pods
NAME                 READY   STATUS    RESTARTS   AGE
elastic-operator-0   1/1     Running   0          3m51s
note

Run delete to remove all the Elastic resources.

student@lab-helm:~$ kubectl delete -f https://download.elastic.co/downloads/eck/2.16.1/operator.yaml
student@lab-helm:~$ kubectl delete -f https://download.elastic.co/downloads/eck/2.16.1/crds.yaml

Elastic Helm Chart​

Now we will deploy Elastic using Helm. Firstly, we add the Elastic Helm repository to the package sources and update the local repository index.

student@lab-helm:~$ helm repo add elastic https://helm.elastic.co
student@lab-helm:~$ helm repo update

Then we will use helm install to install the Elastic chart:

student@lab-helm:~$ helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace
NAME: elastic-operator
LAST DEPLOYED: Mon Mar 17 21:27:22 2025
NAMESPACE: elastic-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Inspect the operator logs by running the following command:
kubectl logs -n elastic-system sts/elastic-operator

And we will check that the operator is running by looking at its logs, using kubectl logs:

student@lab-helm:~$ kubectl logs -n elastic-system sts/elastic-operator
{"log.level":"info","@timestamp":"2025-03-17T21:27:24.304Z","log.logger":"manager","message":"maxprocs: Updating GOMAXPROCS=1: determined from CPU quota","service.version":"2.16.1+1f74bdd9","service.type":"eck","ecs.version":"1.4.0"}
{"log.level":"info","@timestamp":"2025-03-17T21:27:24.304Z","log.logger":"manager","message":"Setting default container registry","service.version":"2.16.1+1f74bdd9","service.type":"eck","ecs.version":"1.4.0","container_registry":"docker.elastic.co"}

[...]

{"log.level":"info","@timestamp":"2025-03-17T21:27:35.818Z","log.logger":"resource-reporter","message":"Created resource successfully","service.version":"2.16.1+1f74bdd9","service.type":"eck","ecs.version":"1.4.0","kind":"ConfigMap","namespace":"elastic-system","name":"elastic-licensing"}
{"log.level":"info","@timestamp":"2025-03-17T21:27:35.820Z","log.logger":"manager","message":"Orphan secrets garbage collection complete","service.version":"2.16.1+1f74bdd9","service.type":"eck","ecs.version":"1.4.0"}

As we can see, the Elastic operator is running, which can also be confirmed using kubectl get pods:

student@lab-helm:~$ kubectl get pods -n elastic-system
NAME                 READY   STATUS    RESTARTS   AGE
elastic-operator-0   1/1     Running   0          3m44s

Exercise: Deploy Podinfo​

  1. Deploy Podinfo using Helm and access its frontend to test your deployment. Use the ghcr.io install (a hedged sketch of the workflow is given after the notes below).
note

Run the kubectl command printed at the end of the installation to expose the service and make it accessible from outside the cluster.

note

To access the frontend for podinfo, connect to the OpenStack VM using -L [local-port]:localhost:8080.

# Forward local port 1080 to the port that you forward Kubernetes service to.
student@local-machine:~$ ssh -J fep -L 1080:localhost:8080 student@[ip_vm]

If you have issues accessing the Podinfo frontend from Firefox, try a different browser.
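
If you get stuck, below is a hedged sketch of the workflow. The OCI chart location is the one documented by the Podinfo project; the release name, deployment name and port 9898 are assumptions, so prefer the exact commands printed in the chart's NOTES after installation.

student@lab-helm:~$ helm install podinfo oci://ghcr.io/stefanprodan/charts/podinfo
student@lab-helm:~$ # forward local port 8080 to the podinfo pods (use the names/ports from the NOTES)
student@lab-helm:~$ kubectl port-forward deploy/podinfo 8080:9898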

Values​

Charts offer the possibility to parameterize values inside their templates. Values are a great way to customize charts and make them portable, allowing us to set different parameters with specific values that are used in our deployments. We can find the parameters and their default values for each chart in the values.yaml file. We can pass values to charts in two ways: using --set during the helm install of a chart, or passing a file with the values using helm install -f my-values.yaml.

Let's start from a simple chart. We use helm create to create our own chart:

student@lab-helm:~$ helm create my-chart
Creating my-chart

We now have a chart created with all its files:

student@lab-helm:~$ tree my-chart/
my-chart/
├── charts                      # A directory containing any charts upon which this chart depends.
├── Chart.yaml                  # A YAML file containing information about the chart
├── templates                   # A directory of templates that will generate valid Kubernetes manifest files.
│   ├── deployment.yaml         # The manifest of the deployment
│   ├── _helpers.tpl            # File containing helper functions for setting different values in the templates
│   ├── hpa.yaml                # Horizontal Pod Autoscaler
│   ├── ingress.yaml            # Ingress configuration
│   ├── NOTES.txt               # Chart installation notes, displayed after a successful installation to give the next steps
│   ├── serviceaccount.yaml     # The setup of a service account
│   ├── service.yaml            # The manifest of the service
│   └── tests                   # A directory containing tests for the chart
│       └── test-connection.yaml
└── values.yaml                 # The default configuration values for this chart.

3 directories, 10 files

Now we will create a ConfigMap template in our chart templates:

student@lab-helm:~/my-chart/templates$ touch config.yaml
student@lab-helm:~/my-chart/templates$ cat config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
data:
  message: {{ .Values.message }}
  drink: {{ .Values.favoriteDrink }}
  desert: {{ .Values.favoriteDesert }}

We added three values in our template that we can configure. We now have to define default values for them in values.yaml:

student@lab-helm:~/my-chart$ cat values.yaml

[...]

# Additional volumeMounts on the output Deployment definition.
volumeMounts: []
# - name: foo
#   mountPath: "/etc/foo"
#   readOnly: true

nodeSelector: {}

tolerations: []

affinity: {}

message: "Hello, dear customer!"

favoriteDrink: "Cola"

favoriteDesert: "Apple Pie"

If we render this chart in debug mode, we will see that the parameterized values in our template are replaced with the default ones:

student@lab-helm:~$ helm install test ./my-chart --dry-run --debug

[...]

---
# Source: my-chart/templates/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
data:
  message: Hello, dear customer!
  drink: Cola
  desert: Apple Pie
---

[...]

We can use --set to manually set a parameter to a value that we want during install:

student@lab-helm:~$ helm install test ./my-chart --dry-run --debug --set favoriteDrink=tea --set favoriteDesert="fruit salad"

[...]

---
# Source: my-chart/templates/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
data:
  message: Hello, dear customer!
  drink: tea
  desert: fruit salad
---

[...]

And lastly, we can use a custom values file for easier management of parameters:

student@lab-helm:~$ cat myvals.yaml
message: Hello from values file!
favoriteDrink: lemonade
favoriteDesert: chocolate mousse
student@lab-helm:~$ helm install test -f myvals.yaml ./my-chart --dry-run --debug

[...]

---
# Source: my-chart/templates/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
data:
  message: Hello from values file!
  drink: lemonade
  desert: chocolate mousse
---

[...]

Exercise: Customize Podinfo​

  1. Starting from the previous Podinfo Helm deployment, use --set to manually configure the replicaCount to 5 and the UI message (the parameter is ui.message) from the home page to a custom one (a sketch of one possible approach is given after the note below).
  2. Now do the same, but this time use a values file.
note

You can check the replicaCount by using kubectl describe deployment -n [my-podinfo-namespace] [my-podinfo-deployment-name]. You can check the UI message using curl as well: curl localhost:8080 (after running the kubectl port-forward command).
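
A sketch of one possible approach, assuming the release is called podinfo and was installed from the OCI chart used in the previous exercise (adjust the names to your own deployment):

student@lab-helm:~$ # option 1: set the parameters directly on the command line
student@lab-helm:~$ helm upgrade podinfo oci://ghcr.io/stefanprodan/charts/podinfo --set replicaCount=5 --set ui.message="Welcome to my podinfo"
student@lab-helm:~$ # option 2: keep the parameters in a values file
student@lab-helm:~$ cat podinfo-values.yaml
replicaCount: 5
ui:
  message: "Welcome to my podinfo"
student@lab-helm:~$ helm upgrade podinfo oci://ghcr.io/stefanprodan/charts/podinfo -f podinfo-values.yaml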

Values - Advanced​

Now that we have seen how values work, it is time to dig a bit deeper into their strengths. Making use of values, we can impose conditions in our templates. Conditions can help us isolate parts of our deployments based on our requirements. Moreover, Helm charts give us the possibility to use loops, leading to easier templating of repetitive parts of our deployments.

Conditions​

Let's return to our chart from before:

student@lab-helm:~/my-chart/templates$ cat config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
data:
  message: {{ .Values.message }}
  drink: {{ .Values.favoriteDrink }}
  desert: {{ .Values.favoriteDesert }}

Now, let's begin by adding a condition that displays a special additional message if it is the weekend:

student@lab-helm:~/my-chart/templates$ cat config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
data:
  message: {{ .Values.message }}
  {{- if .Values.weekend.enabled }}
  specialMessage: "Special deal for fruit tarts!"
  {{- end }}
  drink: {{ .Values.favoriteDrink }}
  desert: {{ .Values.favoriteDesert }}

Now we have to define the default value for the new parameter in values.yaml.

student@lab-helm:~/my-chart$ cat values.yaml

[...]

# Added value
weekend:
  enabled: false

message: "Hello, dear customer!"

favoriteDrink: "Cola"

favoriteDesert: "Apple Pie"

If we test our chart now, we will see that nothing is changed from before:

student@lab-helm:~$ helm install test ./my-chart --dry-run --debug

[...]

---
# Source: my-chart/templates/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
data:
  message: Hello, dear customer!
  drink: Cola
  desert: Apple Pie
---

[...]

But now let's set the value of weekend.enabled to true:

student@lab-helm:~$ helm install test ./my-chart --dry-run --debug --set weekend.enabled=true

[...]

---
# Source: my-chart/templates/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
data:
  message: Hello, dear customer!
  specialMessage: "Special deal for fruit tarts!"
  drink: Cola
  desert: Apple Pie
---

[...]

We can see that the chart now renders the specialMessage entry.

Loops​

Loops come in handy when we want to define templates where we require lots of variables, such as environment variables. Using range we can simplify the template's design, keeping it cleaner and easier to read and write.

Let's extend the ConfigMap from our chart to make use of loops.

student@lab-helm:~/my-chart/templates$ cat config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
data:
  message: {{ .Values.message }}
  {{- if .Values.weekend.enabled }}
  specialMessage: "Special deal for fruit tarts!"
  {{- end }}
  drinksMenu:
  {{- range .Values.menu.drinks }}
    name: {{ .name }}
    price: {{ .price }}
  {{- end }}
  desertsMenu:
  {{- range .Values.menu.deserts }}
    name: {{ .name }}
    price: {{ .price }}
  {{- end }}
student@lab-helm:~/my-chart/templates$ cd .. && cat values.yaml

[...]

weekend:
  enabled: false

message: "Hello, dear customer!"

favoriteDrink: "Cola"

favoriteDesert: "Apple Pie"

menu:
  drinks:
    - name: "Cola"
      price: "5 lei"
    - name: "Tea"
      price: "15 lei"
    - name: "Coffee"
      price: "17 lei"
  deserts:
    - name: "Chocolate Cake"
      price: "25 lei"
    - name: "Cheese Cake"
      price: "26 lei"

We modified the ConfigMap, adding two loops to create a menu. As the entries in each of the categories follow the same structure, we can add just the generic format and loop over the values given in values.yaml to create and fill new entries. Rendering this chart, we will see that entries were created for each pair in the drinks and deserts categories:

student@lab-helm:~$ helm install test ./my-chart --dry-run --debug

[...]

---
# Source: my-chart/templates/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
data:
  message: Hello, dear customer!
  drinksMenu:
    name: Cola
    price: 5 lei
    name: Tea
    price: 15 lei
    name: Coffee
    price: 17 lei
  desertsMenu:
    name: Chocolate Cake
    price: 25 lei
    name: Cheese Cake
    price: 26 lei
---

[...]

Exercise: Nginx Advanced Deployment​

Create a new Helm chart using helm create nginx-advance. This will create an example Nginx template starting from which you will have to do the following:

  1. Create a ConfigMap for your Nginx chart (you should create it in ~/nginx-advance/templates) that can be used to configure index.html. You can follow the example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap
data:
  index.html: |
    <h1>Hello, World!</h1>
  2. Update the values file (found in ~/nginx-advance/values.yaml) to add the volume and volumeMount you have created with the ConfigMap. The mountPath is /usr/share/nginx/html. Follow this template:
# Additional volumes on the output Deployment definition.
volumes:
  - name: [volume-name]
    configMap:
      name: [ConfigMap-name]

volumeMounts:
  - name: [volume-name]
    mountPath: /usr/share/nginx/html
  3. Deploy the chart, check the landing page and follow the steps given at the end of the deployment to set up the port-forwarding for the server.
student@lab-helm:~$ helm install nginx-chart ./nginx-advance

[...]

1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=nginx-advance,app.kubernetes.io/instance=nginx-chart" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
note

You can run kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT & to have the port-forwarding running in the background, but you will have to manually stop it by using ps aux | grep kubectl to get its PID and running kill -9 PID.

note

Run the same commands every time you update the chart.

  4. Update the chart so that the content of the Nginx server HTML pages is parameterized (can be set using values). The parameter will have the following name: pageContent.indexPage (a sketch of one possible approach is shown after this list). To redeploy the chart you can use helm upgrade:
student@lab-helm:~$ helm upgrade nginx-chart ./nginx-advance
Release "nginx-chart" has been upgraded. Happy Helming!
NAME: nginx-chart
LAST DEPLOYED: Thu Mar 20 20:20:32 2025
NAMESPACE: default
STATUS: deployed
REVISION: 5
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=nginx-advance,app.kubernetes.io/instance=nginx-chart" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
  5. Add another page in the ConfigMap that is added only when a condition is true. The name and contents of the page are up to you (it is HTML, so have fun with it), but it should be parameterized (similar to index.html). Test your chart by running it and accessing the pages (index.html is accessed by going to http://localhost:8080, and other pages are accessed by going to http://localhost:8080/[page-name].html).
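
As a hint for step 4, one possible (not the only) way to template the ConfigMap is sketched below; the indent function keeps the multi-line HTML correctly indented under the YAML key. The values file snippet is an assumption about how you may choose to structure pageContent.

# ~/nginx-advance/templates/configmap.yaml (sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap
data:
  index.html: |
{{ .Values.pageContent.indexPage | indent 4 }}

# ~/nginx-advance/values.yaml (relevant part)
pageContent:
  indexPage: |
    <h1>Hello from values.yaml!</h1>

After a helm upgrade, the page served at http://localhost:8080 should show whatever you set in pageContent.indexPage.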

Chart Versioning​

Helm offers us the possibility to keep track of different versions of our charts by using versioning. The version of a chart is given by the version parameter found in the Chart.yaml file in the root of our chart. Making use of the version, we can upgrade charts or roll back to certain versions.

Let's start with the chart from the previous exercise. Running helm history [release-name] shows the release history of our chart.

student@lab-helm:~/nginx-advance$ helm history nginx-chart
REVISION  UPDATED                   STATUS      CHART                APP VERSION  DESCRIPTION
1         Thu Mar 20 20:39:39 2025  superseded  nginx-advance-0.1.0  1.16.0       Install complete
2         Thu Mar 20 20:46:27 2025  deployed    nginx-advance-0.2.0  1.16.0       Upgrade complete

In the CHART column we can see the version of the chart for each revision. The STATUS column gives us information about what revision is currently deployed and what was the final status of the previous revisions. The DESCRIPTION column gives additional information for each of the revisions.

To better make use of the versioning mechanism, let's start by updating the version parameter in the Chart.yaml file of our chart.

student@lab-helm:~/nginx-advance$ cat Chart.yaml
apiVersion: v2
name: nginx-advance
description: A Helm chart for Kubernetes

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.2.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

Once the version has been updated, let's update the index.html page by modifying its parameter value in values.yaml:

student@lab-helm:~/nginx-advance$ cat values.yaml
[...]
pageContent:
  indexPage: |
    <h1>This is the index.html page for version 0.2.0</h1>
[...]

Now let's upgrade the chart using helm upgrade:

student@lab-helm:~/nginx-advance$ helm upgrade nginx-chart .
student@lab-helm:~/nginx-advance$ helm history nginx-chart
REVISION  UPDATED                   STATUS      CHART                APP VERSION  DESCRIPTION
1         Thu Mar 20 20:39:39 2025  superseded  nginx-advance-0.1.0  1.16.0       Install complete
2         Thu Mar 20 20:46:27 2025  superseded  nginx-advance-0.2.0  1.16.0       Upgrade complete
3         Thu Mar 20 22:03:35 2025  deployed    nginx-advance-0.2.0  1.16.0       Upgrade complete

We can see that the chart has been upgraded. We can check this by accessing the page at http://localhost:8080.

Now let's rollback to a previous release. For this we will use helm rollback:

student@lab-helm:~/nginx-advance$ helm rollback nginx-chart
Rollback was a success! Happy Helming!
student@lab-helm:~/nginx-advance$ helm history nginx-chart
REVISION  UPDATED                   STATUS      CHART                APP VERSION  DESCRIPTION
1         Thu Mar 20 20:39:39 2025  superseded  nginx-advance-0.1.0  1.16.0       Install complete
2         Thu Mar 20 20:46:27 2025  superseded  nginx-advance-0.2.0  1.16.0       Upgrade complete
3         Thu Mar 20 22:03:35 2025  superseded  nginx-advance-0.2.0  1.16.0       Upgrade complete
4         Thu Mar 20 22:09:29 2025  deployed    nginx-advance-0.2.0  1.16.0       Rollback to 2

As we can see, we have rolled back to REVISION 2, as described in the DESCRIPTION column. Let's check this by accessing the index.html page as well: http://localhost:8080.

note

Do not forget to export POD_NAME and CONTAINER_PORT again, as before!

We can see that we have the previous version of the page now.

note

helm rollback nginx-chart or helm rollback nginx-chart 0 will roll back to the previous REVISION. If you want to roll back to a specific revision, run: helm rollback nginx-chart [REVISION_NUMBER]