Homework Project - Monitoring applications in Kubernetes
Task 0 - 5p
Setup
We will be using a virtual machine in the faculty's cloud.
All virtual machines must be created under the scgc_hw_prj
OpenStack project.
When creating a virtual machine in the Launch Instance window:
- Set the following Instance Name: HW-<Your LDAP username> (e.g., HW-anamaria.popescu).
- Select Boot from image in the Instance Boot Source section.
- Select SCGC Template in the Image Name section.
- Select the m1.scgc flavor.
You are allowed to create only one virtual machine. If you want to test/redo your homework, please delete and rebuild the virtual machine.
The VMs that do not follow the requirements above will be deleted. Make sure to use the correct values, as they may differ from those used in labs.
We recommend periodically creating local backups of all configuration files, so that you do not lose your work if the virtual machine is corrupted or deleted by mistake.
In the base virtual machine:
- Download the archive into the work directory. Use wget https://repository.grid.pub.ro/cs/scgc/laboratoare/homework-project.zip to download the archive.
- Extract the archive.
- Start the virtual machines using bash runvm.sh.
- The username for connecting to the nested VMs is student and the password is student.
$ # change the working dir
$ cd ~/work
$ # download the archive
$ wget https://repository.grid.pub.ro/cs/scgc/laboratoare/homework-project.zip
$ unzip homework-project.zip
$ # start VMs; it may take a while
$ bash runvm.sh
$ # check if the VMs booted
$ virsh net-dhcp-leases labvms
The runvm.sh script increases the virtual machine's disk size to 16GB, but the partition and filesystem inside the virtual machine are not automatically resized. You must grow the partition and resize the filesystem manually, using the following commands:
# Grow the partition and resize the filesystem
student@lab-kubernetes:~$ sudo growpart /dev/sda 2
student@lab-kubernetes:~$ sudo resize2fs /dev/sda2
# Check that the filesystem is correctly resized
student@lab-kubernetes:~$ df -h | grep /dev/sda2
/dev/sda2 16G 3.4G 12G 23% /
Task 1 - 20p
Scenario
The purpose of this project is to explore monitoring options for an application deployed in Kubernetes. For this:
- we will see how an application must be prepared so that it can be monitored
- we will deploy Prometheus for gathering application metrics
- we will deploy Grafana and create a dashboard for displaying the metrics in a graphical format
Creating a Kubernetes cluster
Similar to the previous Kubernetes lab, deploy a single-node cluster, using Kind.
See the required steps in the Kubernetes lab here.
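As a reminder, a minimal sketch of the Kind commands (the cluster name is only an example; follow the lab for the exact steps):
$ # create a single-node Kind cluster
$ kind create cluster --name homework
$ # verify that kubectl can reach the new cluster
$ kubectl cluster-info --context kind-homework
$ kubectl get nodes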
Deploying the Nginx service
Similar to the Kubernetes lab, deploy an nginx service that will be exposed on port 80 inside the cluster and port 30080 outside the cluster. The service will be named nginx.
You can choose the content that is served (index.html) at your discretion.
Review the steps from the Kubernetes lab on how to start an nginx service with persistence here.
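A sketch of what the manifests could look like, assuming an app: nginx label and leaving out the persistence and custom configuration covered in the lab:
# nginx.yaml (hypothetical file name): deployment and NodePort service for nginx
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
Apply it with kubectl apply -f nginx.yaml.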
Exposing the metrics endpoint
The nginx server will have to provide metrics about itself, on port 8080, location /metrics. Use the stub_status module: http://nginx.org/en/docs/http/ngx_http_stub_status_module.html
You will have to add an additional server code block in the nginx config. Review the steps for creating a custom nginx config in the Kubernetes lab here.
Attention: You must serve the metrics on port 8080, not on the same port 80 as the html content!
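A sketch of the extra server block, to be added to the custom nginx configuration from the lab (for example, in the ConfigMap that holds nginx.conf):
# serve stub_status metrics on a separate port
server {
    listen 8080;

    location /metrics {
        stub_status;
    }
}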
Updating the service
Expose the metrics endpoint via the same nginx Kubernetes service, on port 8080 inside the cluster and port 30088 outside the cluster.
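The ports section of the nginx service could then look like this (the port names are arbitrary):
# ports section of the nginx Service (type NodePort), extended with the metrics port
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  - name: metrics
    port: 8080
    targetPort: 8080
    nodePort: 30088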
Task 2 - 20p
Deploying the Prometheus exporter
The metrics exposed by nginx via the stub_status module are not compatible with Prometheus. To be able to use Prometheus for monitoring, we must use a Prometheus exporter, which is a simple application that reads metrics and translates them into the Prometheus format.
For nginx, a popular Prometheus exporter is nginx-prometheus-exporter: https://github.com/nginxinc/nginx-prometheus-exporter
Deploy nginx-prometheus-exporter in the Kubernetes cluster, using a Kubernetes deployment. Take a look at the docker run command from the README.md file to figure out how you should configure your deployment.
You should add the -nginx.scrape-uri command-line argument to the container. Use args for configuring command-line arguments.
The URL that will be monitored should be http://nginx:8080/metrics
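A sketch of such a deployment, assuming the nginx/nginx-prometheus-exporter image from Docker Hub (check the README.md for the current image tag and the exact form of the scrape-uri flag, which differs between exporter versions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: promexporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: promexporter
  template:
    metadata:
      labels:
        app: promexporter
    spec:
      containers:
      - name: nginx-prometheus-exporter
        image: nginx/nginx-prometheus-exporter:latest
        args:
        - -nginx.scrape-uri=http://nginx:8080/metrics
        ports:
        - containerPort: 9113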
Exposing the Prometheus exporter as a service
Expose nginx-prometheus-exporter via a new Kubernetes service, on port 9113 inside the cluster. The service will be named promexporter.
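A sketch of the corresponding service (the selector must match the labels used in the exporter deployment):
apiVersion: v1
kind: Service
metadata:
  name: promexporter
spec:
  selector:
    app: promexporter
  ports:
  - name: metrics
    port: 9113
    targetPort: 9113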
Task 3 - 20p
The Helm package manager
Before deploying the Prometheus and Grafana monitoring tools in Kubernetes, we will have to install Helm, which is a package manager for Kubernetes.
We are doing this because Helm provides the simplest method for deploying complex software in Kubernetes, via Helm charts (a fancy name for packages).
Install Helm in the VM, using the instructions in the User Guide.
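One common installation method from the Helm documentation (a sketch; check the User Guide for the recommended approach):
$ # download and run the official Helm installer script
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
$ # verify the installation
$ helm version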
Prometheus
Monitoring namespace
Prometheus must be deployed in a separate Kubernetes namespace, called monitoring. This namespace does not exist, so you must create it.
Review the steps for creating a new namespace in the Kubernetes lab here.
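For example:
$ # create the monitoring namespace and check that it exists
$ kubectl create namespace monitoring
$ kubectl get namespaces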
Deploying Prometheus
Deploy Prometheus using Helm, following the instructions from here: https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus
Attention: Make sure to deploy the helm chart to the monitoring namespace.
After the helm chart is deployed, use kubectl port-forward to forward the port of the prometheus-server service to your VM.
Connect to the Prometheus UI using a browser.
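A sketch of these steps, assuming the release is named prometheus (which yields a prometheus-server service) and that the chart's server service listens on its default port 80:
$ # add the chart repository and deploy Prometheus into the monitoring namespace
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install prometheus prometheus-community/prometheus --namespace monitoring
$ # forward the prometheus-server service port to the VM
$ kubectl port-forward --namespace monitoring svc/prometheus-server 9090:80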
See https://scgc.pages.upb.ro/cloud-courses/docs/basic/working_with_openstack#connecting-using-an-ssh-jump-host-proxy and use -D 12345 instead of -X for starting a SOCKS proxy towards your VM. Then, configure a local browser to use localhost:12345 as a SOCKS proxy.
Configuring Prometheus
Configure Prometheus to monitor metrics exposed by promexporter in the default namespace. There are two different ways to do that. You can choose any method you want:
- Customize values.yaml and redeploy the helm chart (see the Configuration section in https://github.com/prometheus-community/helm-charts/blob/main/charts/prometheus/README.md); a sketch of this approach is shown below
- Directly edit the prometheus-server configmap (an example can be found here: https://sysdig.com/blog/kubernetes-monitoring-prometheus/)
You will have to use the FQDN for specifying the hostname that Prometheus must monitor: promexporter.default.svc.cluster.local:9113
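A sketch of the first approach, assuming the chart exposes an extraScrapeConfigs value (verify the exact key in the chart's README) and that the release is named prometheus:
# custom-values.yaml (hypothetical file name): add an extra scrape job
extraScrapeConfigs: |
  - job_name: nginx
    static_configs:
      - targets:
          - promexporter.default.svc.cluster.local:9113
The chart can then be redeployed with:
$ helm upgrade prometheus prometheus-community/prometheus --namespace monitoring -f custom-values.yaml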
Prometheus queries
Confirm that the configuration is successful by accessing /targets in your browser. Also, go to Graph and query a metric, like nginx_connections_accepted. Perform requests on the nginx server and verify that the graph is updating.
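One way to generate traffic, assuming the NodePort is reachable at the Kind node's IP address (shown by kubectl get nodes -o wide):
$ # send a burst of requests to the nginx NodePort; replace <node-ip> with the node address
$ for i in $(seq 1 100); do curl -s http://<node-ip>:30080/ > /dev/null; done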
Task 4 - 20p
Grafana
Deploying Grafana
Deploy Grafana using Helm, following the instructions from here: https://docs.bitnami.com/kubernetes/infrastructure/grafana/get-started/install/
Attention: Make sure to deploy the helm chart to the monitoring namespace.
After the helm chart is deployed, use kubectl port-forward to forward the port of the grafana service to your VM.
Connect to the Grafana UI using a browser.
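A sketch of these steps, assuming the release is named grafana and that the chart's service listens on its default port 3000 (the helm install output explains how to retrieve the admin password):
$ # add the Bitnami repository and deploy Grafana into the monitoring namespace
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
$ helm install grafana bitnami/grafana --namespace monitoring
$ # forward the grafana service port to the VM
$ kubectl port-forward --namespace monitoring svc/grafana 3000:3000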
See https://scgc.pages.upb.ro/cloud-courses/docs/basic/working_with_openstack#connecting-using-an-ssh-jump-host-proxy and use -D 12345 instead of -X for starting a SOCKS proxy towards your VM. Then, configure a local browser to use localhost:12345 as a SOCKS proxy.
Configuring Grafana
In the Grafana UI, configure a Prometheus data source, specifying the URL of the Prometheus server deployed in the same namespace.
The URL should be http://prometheus-server.
Import the Grafana dashboard provided by nginx-prometheus-exporter, by following the instructions from here: https://github.com/nginxinc/nginx-prometheus-exporter/tree/main/grafana
Using Grafana
Perform requests on the nginx server and verify that the dashboard is updating.
Task 5 - 15p
When you have your nginx + Prometheus + Grafana setup up and running, check how the graphs change when you have a high load on your web server compared to when there is a light load.
To do this, run some Denial-of-Service (DoS) attacks against the nginx web server using a tool such as slowhttptest or any other tool you find suitable.
If you are using slowhttptest, you can find a tutorial on how it can be used here.
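A possible invocation, assuming slowhttptest is installed from the distribution's repositories and the nginx NodePort is reachable at <node-ip>:30080 (the option values are only a starting point; consult the tutorial or the man page):
$ sudo apt install slowhttptest
$ # slowloris-style test: 1000 connections, 200 new connections per second, for 120 seconds
$ slowhttptest -c 1000 -H -r 200 -l 120 -u http://<node-ip>:30080/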
See how things change in the Grafana Dashboard. Take a few screen captures (before, during and after the attack), add them to a document and briefly present some conclusions you can reach by observing the Grafana Dashboard (max half page).
Task 6 (Bonus) - up to 20p
For the nginx-Prometheus-Grafana setup you have implemented, propose security-related measures that should be added to secure your web server and your infrastructure.
Homework submission
To submit your homework, you have to upload on Moodle a zip archive named SCGC - <Your LDAP username here>.zip (e.g., SCGC - ana.popescu3342.zip) that contains the following files:
- The manifests you used to deploy your Kubernetes containers.
- A screenshot for each solved task.
- A write-up on how your infrastructure should be reproduced (any ramp-up scripts work as well) and tested.
- A PDF file containing a write-up, screenshots for all solved tasks, and the written text for tasks 5 and 6. This PDF file must be named Proposed Solution - <Your LDAP username here>.pdf (e.g., Proposed Solution - ana.popescu3342.pdf).
- A README file.
The deadline for submitting your zip archive is the 26th of May 2024, 23:55.
Homework presentation
The homework must be presented during the remaining laboratories (until the 28th of May 2024) or during the last course (28th of May 2024). You will have to briefly present what you have set up for your infrastructure and how you set it up, and demonstrate that it works. The homework presentation is mandatory. You will not receive a grade for your homework if you do not present it.