Introduction to Kubernetes Dashboard
The dashboard is a popular web-based user interface for Kubernetes. It helps you deploy containerized applications to a Kubernetes cluster, troubleshoot those applications, and manage cluster resources. The dashboard provides an overview of the applications running on the cluster and assists in creating or modifying individual Kubernetes resources such as Jobs, DaemonSets, Deployments, and more. For instance, the deploy wizard can be used to scale a Deployment, initiate a rolling update, restart a Pod, or deploy a new application. The dashboard also reports on the state of the Kubernetes resources in the cluster and on any errors that may have occurred. An important point to keep in mind: the Dashboard UI is not deployed by default.
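Because the Dashboard is not deployed by default, it is typically installed from the project's recommended manifest. A minimal sketch — the version path below is an assumption, so check the project's releases for the one you need:

```shell
# Deploy the Dashboard from the project's recommended manifest
# (v2.7.0 is an assumed version; substitute the release you need)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
```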
Accessing Kubernetes Dashboard UI
There are several ways of accessing the Dashboard UI. You can either use the kubectl command-line interface or access the Kubernetes master apiserver with a web browser.
- Command-line proxy: Run the kubectl proxy command to access the dashboard. The kubectl command handles apiserver authentication and makes the dashboard available at http://localhost:8001/ui. The UI can only be accessed from the machine where the command was executed.
- Master apiserver: The Dashboard UI can be accessed directly through the Kubernetes master apiserver. Open https://<kubernetes-master>/ui in a browser, where <kubernetes-master> is the domain name or IP address of the Kubernetes master. This only works when the apiserver is configured to allow authentication with username and password.
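The command-line proxy method above comes down to one command; note that newer Dashboard versions serve the UI under a longer proxy path rather than /ui (the longer path below is an assumption based on current releases):

```shell
# Start a local proxy to the apiserver; kubectl handles authentication
kubectl proxy

# The UI is then reachable from the same machine, e.g.:
#   http://localhost:8001/ui   (older releases)
#   http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/   (newer releases)
```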
Welcome Page View
The welcome page is shown when you access the Dashboard on an empty cluster. It contains a link to this document as well as a button to deploy your first application. By default, it also shows the system applications running in the kube-system namespace of your cluster — the Dashboard itself being one example.
Containerized Applications Deployment
Using a simple wizard, the dashboard lets you create and deploy a containerized application as a Deployment and an optional Service. It gives you two options: manually specify the application details, or upload a YAML or JSON file containing the application configuration. The deploy wizard is opened with the button on the right of the welcome page; at any later point, you can click the CREATE button in the upper-right corner of the page.
Specifications of Application details
The deploy wizard expects the following information:
App name: Mandatory; the application must be given a name. The name is added as a label to the Deployment and to the Service, if one is deployed. The name must be unique within the selected Kubernetes namespace, must begin with a lowercase letter, must end with a lowercase letter or number, may contain only lowercase letters, numbers, and dashes, and is limited to 24 characters. Leading and trailing spaces are ignored.
Container image: Mandatory. The URL of a public Docker container image on any registry, or a private image. The container image specification requires a colon separating the image name from its tag.
Number of pods: Mandatory. The target number of Pods you want your application deployed in; the value must be a positive integer. A Deployment is created to maintain the desired number of Pods across the cluster.
Service: Optional. For some parts of an application, you may want to expose a Service onto an external, possibly public, IP address outside of the cluster. Such an external service requires opening one or more ports. Services visible only from inside the cluster are called internal services. When creating a service, two ports must be specified: the port under which the service is visible, and the target port on which the container listens for incoming traffic.
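The Service the wizard generates can be sketched as a manifest; the name and port numbers here are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical; matches the App name from the wizard
spec:
  type: LoadBalancer      # external service; use ClusterIP for an internal one
  selector:
    app: my-app
  ports:
    - port: 80            # port under which the service is visible
      targetPort: 8080    # target port the container listens on
```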
Description: Optional text that is added as an annotation to the Deployment and displayed in the application's details.
Labels: The default labels used for the application are its name and version. You can specify additional labels to be applied to the Deployment, Service, and Pods, such as release, environment, tier, partition, and release track.
Namespace: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces, and they let you partition resources into logically named groups. The dashboard offers all available namespaces in a dropdown list. A namespace name may contain at most 63 alphanumeric characters and dashes, may not contain capital letters, and may not consist solely of numbers: if the name were purely numeric, the pod would be placed in the default namespace instead.
Image pull secret: If the specified container image is private, pulling it may require pull secret credentials. The dashboard offers all available secrets in a dropdown list and also lets you create a new one. The secret name must follow DNS domain name syntax, for example new.image-pull.secret, and may be at most 253 characters long. The content of the secret must be the base64-encoded contents of a .dockercfg file. If the image pull secret is successfully created, it is selected by default; otherwise, no secret is applied.
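An image pull secret of this kind can be sketched as follows; the name is the example from the text and the data value is a placeholder standing in for the real base64-encoded .dockercfg content:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: new.image-pull.secret     # DNS-style name, example from the text
type: kubernetes.io/dockercfg
data:
  .dockercfg: eyJleGFtcGxlLmNvbSI6IHsuLi59fQ==   # base64-encoded .dockercfg (placeholder)
```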
CPU and memory requirements: You can optionally specify minimum resource requirements for the container. By default, Pods run with unbounded CPU and memory limits.
Run as privileged: This setting determines whether processes in privileged containers are equivalent to processes running as root on the host. Privileged containers possess capabilities such as manipulating the network stack and accessing devices.
Environment variables: Kubernetes exposes Services through environment variables. You can define environment variables for your containers, and pass their values as arguments to your commands.
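Taken together, the resource requirements and environment variables above map onto a container spec fragment like the following; all names and values here are hypothetical:

```yaml
containers:
  - name: my-app
    image: my-app:1.0             # hypothetical image:tag
    resources:
      requests:                   # minimum resources the container needs
        cpu: 100m
        memory: 128Mi
      limits:                     # maximum it may consume (unbounded by default)
        cpu: 500m
        memory: 256Mi
    env:
      - name: DB_HOST             # hypothetical variable
        value: "mongo.default.svc.cluster.local"
    args: ["--db=$(DB_HOST)"]     # arguments may reference environment variables
```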
Uploading a JSON or YAML file
Kubernetes supports declarative configuration: resources are described in YAML or JSON files that follow the API resource schemas. As an alternative to specifying application details in the deploy wizard, you can define your application in a YAML or JSON file and upload it through the dashboard.
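A minimal file suitable for the upload option might look like this; the names and image tag are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3                 # desired number of Pods
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25   # image specification with its colon-separated tag
```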
Using Kubernetes Dashboard
The following sections describe views of the Kubernetes Dashboard UI; what they provide and how they can be used.
When Kubernetes objects are defined in the cluster, the Dashboard shows them in its initial view. By default, only objects in the default namespace are shown; this can be changed with the namespace selector located in the navigation menu. The Dashboard shows most kinds of Kubernetes objects, grouped into a few menu categories.
Overview of Admin
For cluster and namespace administrators, Dashboard lists Nodes, Namespaces, and Persistent Volumes and contains detailed views for them. Node list view contains CPU and memory usage metrics consolidated across all Nodes. The details view displays the metrics for a Node, its specification, status, allocated resources, events, and pods running on the node.
The Workloads view shows all applications running in the selected namespace. It lists applications by workload kind — Deployments, Replica Sets, Stateful Sets, etc. — and each workload kind can be viewed separately. The lists summarize actionable information about the workloads, such as the number of ready Pods for a Replica Set or the current memory usage of a Pod.
Detail views for workloads show status and specification information and surface relationships between objects — for instance, the Pods a Replica Set is controlling, or the new Replica Sets and Horizontal Pod Autoscalers for a Deployment.
The Services view shows the Kubernetes resources that expose services to the external world and make them discoverable within the cluster. For that reason, the Service and Ingress views show the Pods targeted by them, internal endpoints for cluster connections, and external endpoints for external users.
Config Maps and Secrets
The Config Maps and Secrets view shows all Kubernetes resources used for live configuration of applications running in the cluster. The view allows editing and managing config objects; secret values are hidden by default.
Pod lists and detail pages link to a logs viewer built into the Dashboard. The viewer allows drilling into logs from containers belonging to a single Pod.
Deep Dive into Kubernetes Dashboard
In this section, we are going to discuss the Kubernetes Cluster UI Dashboard and the various components that are pre-deployed in your sandbox environment.
Kubernetes Dashboard UI is a web-based interface that lets you visualize the various components of the Kubernetes cluster, as well as deploy and manage applications through containers running on Pods. It also provides a summary of the health of the various components and helps troubleshoot their specifications.
The Kubernetes Dashboard UI comes with a vertical menu. Let us discuss the main sections in this menu.
Overview of Cluster
The Cluster Overview is the landing page. It provides general information and the health status of the various workloads installed. Let's discuss them.
- Overall CPU and memory utilization in the cluster.
- Pod list, status, age, CPU and memory utilization, etc.
- For each Pod, you can view the real-time logs by clicking the icon at the right end of its row.
- Clicking the three-dotted icon at the right end lets you remove the Pod or view/edit its actual definition YAML file.
- Clicking a Pod's name link takes you to a more in-depth Pod monitoring view.
Let us now go back to the Overview page. It displays all the Deployments, which are essentially the actual applications (microservices) that have been deployed.
- Deployments let you do the following:
- Create a deployment of an application.
- Update a deployment (e.g. deploy a new version of the application).
- Perform zero-downtime rolling updates.
- Undo a deployment, reverting to the previous version.
- Roll back to a specific older version.
- Pause/resume a deployment.
- Clicking the right-end icon lets you Scale, Delete, or View/Edit a Deployment's specification YAML file.
- Selecting the "Scale" option lets you modify the current "desired" number of Pods running the application. Changing this number forces the cluster to match the new replica count, either terminating some Pods ("scaling in") or creating new ones ("scaling out").
- Getting back to the Overview, it also displays the "Replica Sets". These are the evolution of Replication Controllers. Replica Sets also help scale applications in and out, but are based on a richer, set-based selection filter over a set of values, such as env in (dev, qa).
- Likewise, as with Replication Controllers, Replica Sets let you scale your applications by defining a desired replica count.
- Going back to the Overview page, we continue with the Ingresses. In this scenario, we have two ingresses: cheeses and gateway.
- Ingresses let you create load-balancing rules that route traffic from outside the Kubernetes cluster to Services within it. They offer traffic load balancing, SSL termination, name-based routing, etc. Ingresses are only definitions containing the routing rules; they require an Ingress Controller running within the Kubernetes cluster. Traefik is the Ingress Controller/load balancer used as part of this HOL.
- Going back to the Overview page again, we continue with Services. Pods are very dynamic: they are constantly created and terminated as replica counts change. Services provide a stable, conceptual endpoint regardless of the actual Pods serving requests in the backend — there could be 1, 2, or many, and back to 1 later. A Service gives an application a stable external reference outside the Kubernetes cluster and isolates consumers from the Pod elasticity in the backend.
- Clicking on a Service name/link lets you edit the Service's YAML definition, delete the Service, and view the Pods associated with it. From there, you can easily drill down into the Pods themselves.
- Back on the Overview page, you can view the "Secrets". Secrets are the mechanism for providing credentials, keys, passwords, or any sensitive data to applications living inside Pods. For instance, if an application requires a username and password to access a MongoDB, an SSH key, or a certificate, this can easily be handled through secret propagation. Secrets can be mounted as files or referenced directly in YAML definitions.
- For instance, if you click on any of the secret names/links, you can view the actual Secret YAML definition.
- This secret can then be used within a Pod or application YAML definition as environment key/value properties.
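Consuming a Secret as environment key/value properties, as described above, can be sketched like this; the secret and key names are hypothetical:

```yaml
env:
  - name: MONGO_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mongo-credentials   # hypothetical Secret created earlier
        key: password             # key inside that Secret
```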
In the vertical menu on the left side, clicking Cluster shows information about the various components associated with the cluster: namespaces, nodes, persistent volumes, roles, and storage classes. The following sections discuss each of these cluster components.
CPU and memory usage
As on the Overview page, you can view the overall cluster CPU and memory utilization.
Namespaces provide a virtual subdivision for related resources within the cluster. For instance, namespaces can group resources that are part of the same application, or divide cluster resources among multiple users. When no namespace is specified, resources are created in the default namespace. As part of this HOL, we have the following namespaces.
- istio-system: Isolates the Service Mesh resources.
- sock-shop: Isolates the resources that are part of the Sock Shop application demo.
- weave: Isolates the Weave Scope resources.
- kube-public: Created automatically and readable by all users. It is mostly reserved for cluster usage, for resources that should be publicly visible and readable throughout the whole cluster.
- kube-system: The namespace for objects created by the Kubernetes system.
- default: The default namespace for objects with no other namespace.
Nodes are the worker machines in Kubernetes that host the Pods running the containers with the actual applications. For high availability, it is recommended to run at least two nodes across different servers, sites, or Availability Domains. To keep things simple, this HOL environment is equipped with only one worker node; however, multiple worker nodes can easily be brought up and joined to the cluster.
By clicking on the Node link, you can explore more details which include:
- CPU and Memory utilization.
- CPU and Memory allocation and capacity.
- Pods allocation.
- Various conditions like running out of disk, memory, etc.
Containers are not persistent storage: when a Pod terminates, the state of its associated container(s) is lost. Hence, Persistent Volumes are used to persist state. Persistent Volumes (PVs) have a lifecycle independent of Pods and can encapsulate the implementation details of various types of storage, be it NFS, iSCSI, or cloud-provider-specific storage systems. We are not using Persistent Volumes in this HOL.
Kubernetes introduced role-based access control (RBAC) in version 1.6. It defines two resources for roles: Role and ClusterRole. Roles enforce policy-based authentication, authorization, and accounting by isolating resources based on namespaces. For instance, we can enforce that a monitoring system's account has only read-only access to Pods in a specific namespace, or across all namespaces.
Several roles in this HOL are defined with the cluster-wide ClusterRole scope, covering resources across all namespaces. Likewise, we have roles defined with the more constrained Role type, scoped to resources under a specific namespace such as kube-system.
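A namespace-scoped Role of the kind described above, granting read-only access to Pods, could look like this; the role name is hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader          # hypothetical monitoring role
  namespace: kube-system    # scoped to this namespace only
rules:
  - apiGroups: [""]         # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```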
A Storage Class provides a way for administrators to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, backup policies, or arbitrary policies determined by the cluster administrators. We are not using Storage Classes in this HOL.
This section allows you to filter by specific namespaces. By default, it is set to “All namespaces”, but specific namespaces can be chosen. By doing so, all the rest of the sections will only show resources belonging to the filtered namespace.
For instance, filtering by default shows the CPU, memory, Pods, Deployments, etc. associated with the default namespace. Because this filter works on related resources, it is a best practice to always associate your resources with specific namespaces.
Several workloads have already been introduced on the Overview page. These include:
Daemon Sets: A DaemonSet ensures that a copy of a Pod runs on all (or some) Nodes, typically for node-level agents such as log collectors or monitoring daemons. For instance, a DaemonSet ensures that:
- All or few Nodes run a copy of a Pod.
- As nodes are added to the cluster, Pods are added to them.
- As nodes are removed from the cluster, those Pods are garbage collected.
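A DaemonSet matching the behavior described above can be sketched as follows; the agent name and image are assumptions:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent                   # hypothetical per-node log collector
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
        - name: agent
          image: fluentd:v1.16-1    # assumed image tag
```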
Deployments: (View the cluster Overview as discussed earlier)
Jobs: Jobs are used to ensure that a certain set of tasks runs reliably to completion. A Job creates one or more Pods and ensures that a specified number of them terminate successfully. As Pods complete successfully, the Job tracks the completions; the Job itself is complete when the specified number of successful completions is reached. A Job can also run multiple Pods in parallel.
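A Job with the completion-tracking behavior described above can be sketched like this (the pi-computing container is a common illustrative example, not something from this HOL):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 3      # Job is complete after 3 Pods terminate successfully
  parallelism: 2      # up to 2 Pods run at the same time
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: pi
          image: perl:5.34
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```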
Pods: (View the cluster Overview as discussed earlier)
Replica Sets: (View the cluster Overview as discussed earlier)
Replication Controllers: (View the cluster Overview as discussed earlier)
Stateful Sets: These are intended for stateful applications and distributed systems. A StatefulSet manages the deployment and scaling of a set of Pods belonging to a stateful application and provides guarantees about the ordering and uniqueness of those Pods. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods: the Pods are created from the same spec but are not interchangeable, and each has a persistent identifier that is maintained across any rescheduling.
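The sticky, ordered identities described above show up directly in a StatefulSet manifest; the names and image here are hypothetical:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db       # headless Service that gives each Pod a stable DNS name
  replicas: 3           # Pods are created in order as db-0, db-1, db-2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: mongo:6.0    # assumed image
```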
Discovery and Load Balancing
- Ingresses: (View the cluster Overview as discussed earlier)
- Services: (View the cluster Overview as discussed earlier)
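An Ingress resource of the kind listed above, routing a hostname to a backend Service, can be sketched like this; the host and service names are hypothetical, and older clusters such as this HOL's may use an earlier API version:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cheeses
spec:
  rules:
    - host: cheeses.example.com        # hypothetical external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cheeses          # backend Service inside the cluster
                port:
                  number: 80
```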
Config and Storage
- Config Maps: Config Maps decouple configuration artifacts from image content to keep containerized applications portable.
- Persistent Volumes: A Persistent Volume (PV) is a piece of storage in the cluster provisioned by an administrator. It is a cluster resource, just as a node is. PVs are volume plugins with a lifecycle independent of any individual Pod. The API object captures the details of the storage implementation, be that NFS, iSCSI, or a cloud-provider-specific storage system.
- Persistent Volume Claims (PVCs): A PVC is a request for storage made by a user. Just as Pods consume node resources, PVCs consume PV resources. Pods can request specific levels of resources such as CPU and memory; claims can request specific sizes and access modes.
- Secrets: (View the cluster Overview as discussed earlier).
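A PersistentVolumeClaim requesting a size and access mode, as described above, can be sketched as follows; the name and size are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim       # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce      # mounted read-write by a single node
  resources:
    requests:
      storage: 5Gi       # requested size
```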
The dashboard provides information about the state of the Kubernetes resources present in the cluster and about any errors that may have occurred. As we have seen, the web-based Kubernetes Dashboard UI has many uses.