
How to Monitor Kubernetes Resource Usage With Metrics Server and Kubectl Top


Monitoring the resource usage of your Kubernetes cluster is vital so you can track performance and understand whether your workloads are operating efficiently. The kubectl top command streams metrics directly from your cluster, letting you access the basics in your terminal.

This command won't usually work straight away in a fresh Kubernetes environment. It depends on the Metrics Server addon being installed in your cluster. This component collects metrics from your Nodes and Pods and provides an API to retrieve the data.

In this post, we'll show how to install Metrics Server and access its measurements using kubectl top. You'll be able to view the CPU and memory usage of each of your Nodes and Pods.

Adding Metrics Server to Kubernetes

Kubernetes distributions don't normally include Metrics Server built-in. You can easily check whether your cluster already has support by trying to run kubectl top:

$ kubectl top node
error: Metrics API unavailable

The error message confirms that the Metrics API isn't present in the cluster.

Metrics Server is maintained within the Kubernetes Special Interest Group (SIG) community. It can be added to your cluster using its plain YAML manifest or the project's Helm chart.
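If you'd rather use the Helm chart, the installation looks roughly like the following. This is a sketch, not verbatim project instructions: the repository URL, chart name, and target namespace are assumptions to check against the Metrics Server documentation for the version you're deploying.

```shell
# Register the Metrics Server chart repository (URL assumed from the project's docs)
$ helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
$ helm repo update

# Install or upgrade the chart, here into the kube-system namespace
$ helm upgrade --install metrics-server metrics-server/metrics-server \
    --namespace kube-system
```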

We'll use the manifest file for this tutorial. Run the following Kubectl command to install Metrics Server:

$ kubectl apply -f
serviceaccount/metrics-server created
service/metrics-server created
deployment.apps/metrics-server created

Metrics Server will now start collecting and exposing Kubernetes resource consumption data. If the installation fails with an error, check that your cluster meets the project's requirements. Metrics Server has specific dependencies that may not be supported in some environments.
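A useful first diagnostic is to check whether the Metrics API has registered successfully. This sketch assumes the default manifest's object names (an APIService called v1beta1.metrics.k8s.io and a metrics-server Deployment in kube-system):

```shell
# The Metrics API is served through an APIService registration;
# its AVAILABLE column should read True once the server is ready
$ kubectl get apiservice v1beta1.metrics.k8s.io

# Check that the Deployment itself has rolled out
$ kubectl -n kube-system get deployment metrics-server
```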

Many Kubernetes distributions bundle Metrics Server support into their own addons system. You can use this command to easily add Metrics Server to a Minikube cluster, for example:

$ minikube addons enable metrics-server
Using image
The 'metrics-server' addon is enabled

Retrieving Metrics With Kubectl Top

With Metrics Server installed, you can now run kubectl top to access the data it collects.

Use the node sub-command to get the current resource usage of each of the Nodes in your cluster:

$ kubectl top node
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
minikube   249m         3%     847Mi           2%

The pod sub-command provides individual metrics for each of your Pods:

$ kubectl top pod
NAME    CPU(cores)   MEMORY(bytes)   
nginx   120m         8Mi

This will surface Pods in the default namespace. Add the --namespace flag if you're interested in Pods in a specific namespace:

$ kubectl top pod --namespace demo-app
NAME    CPU(cores)   MEMORY(bytes)   
nginx   0m           2Mi

The --all-namespaces flag is also supported to list every Pod in your cluster.
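For example:

```shell
# Every Pod in the cluster; the output gains a leading NAMESPACE column
$ kubectl top pod --all-namespaces

# Optionally break the figures down per container within each Pod
$ kubectl top pod --containers
```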

Metrics can take a few minutes to become available after new Pods are created. There's a deliberate delay in the Metrics Server pipeline so that it doesn't become a performance issue itself.

The kubectl top command doesn't overwhelm you with dozens of metrics. It focuses on the bare essentials of CPU and memory usage. This basic report can be adequate for scenarios where you just need data fast, such as identifying the Pod that's caused a spike in overall utilization.

One source of confusion can be the m-suffixed values reported in the CPU(cores) field. The command displays CPU usage in millicores. A measurement of 1000m always means 100% usage of a single CPU core. 500m indicates 50% usage of one core, while 2000m means two cores are being occupied.
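The millicore arithmetic is easy to reproduce locally as a sanity check. The readings below are hypothetical values, not output from a real cluster:

```shell
# Convert millicore readings into cores and percent of one core.
# 1000m is always one full core, however many cores the node has.
for m in 250 500 1000 2000; do
  awk -v m="$m" 'BEGIN { printf "%4dm = %.2f cores (%d%% of one core)\n", m, m/1000, m/10 }'
done
```

So a Pod reporting 250m is using a quarter of one core's capacity.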

Changing the Object Sort Order

The kubectl top command can optionally sort the emitted object list by CPU or memory consumption. This makes it easier to quickly spot the Nodes or Pods that are exerting the highest pressure on cluster resources.

Add the --sort-by flag with either cpu or memory as its value to activate this behavior:

$ kubectl top pod --sort-by=memory
NAME       CPU(cores)   MEMORY(bytes)   
nginx-1    249m         1790Mi
nginx-2    150m         847Mi
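If you've captured kubectl top output to a file, ordinary shell tools can reorder it too. The lines below are hypothetical saved output; GNU sort's -h flag compares human-readable sizes such as 847Mi and 1790Mi:

```shell
# Sort captured "kubectl top pod" lines by the memory column, largest first
printf 'nginx-2   150m   847Mi\nnginx-1   249m   1790Mi\n' | sort -k3 -h -r
```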

Filtering the Object List

In common with other Kubectl commands, the --selector flag lets you filter the object list to items with specific labels:

$ kubectl top pod --selector application=demo-app
NAME       CPU(cores)   MEMORY(bytes)   
nginx-1    249m         1790Mi
nginx-2    150m         847Mi

In this example, only Pods that have the application: demo-app label will be included in the output. =, ==, and != are supported as operators. Multiple constraints can be applied by stringing them together as a comma-separated string, such as application=demo-app,version!=1. Objects will only show up if they match all the label filters in your query.
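One practical note: in an interactive shell such as Bash, an unquoted ! can be consumed by history expansion, so it's safest to quote selectors that use the != operator. The label names below are hypothetical:

```shell
# Quote the selector so the shell passes "!" through untouched
$ kubectl top pod --selector 'application=demo-app,version!=1'
```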

Getting the Utilization of a Specific Resource

The top node and top pod sub-commands can both be passed the name of a specific Node or Pod to fetch. The current metrics associated with that item will be displayed in isolation.

Supply the object's name as a plain argument to the command, straight after node or pod:

$ kubectl top node minikube
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
minikube   245m         3%     714Mi           2%


The kubectl top command surfaces essential resource consumption metrics for Nodes and Pods in your Kubernetes cluster. You can use it to quickly check the CPU and memory usage associated with each of your workloads. This information can help you diagnose performance issues and identify when it's time to add another Node.

Before using the command, you must install the Kubernetes Metrics Server in your cluster. This provides the API that exposes resource utilization data. Enabling Metrics Server incurs a performance overhead, but this is usually negligible in most deployments. It typically requires 1m core of CPU and 2MiB of memory per monitored Node, although this can vary with the workloads running in your particular environment.
