This post explains how to set up the Kubernetes Dashboard for monitoring without having to log in individually. It is exposed via a private endpoint using an Elastic Load Balancer (ELB) on AWS.
Setting up the Dashboard is pretty simple. It is deployed as a set of Kubernetes components, and all you have to do, as explained here, is run the following command:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
This creates all the resources required to get it up and running.
By default, the Dashboard is set up to serve traffic over HTTPS with a self-signed certificate.
The default setup creates a Kubernetes Service, but that can only be accessed from within the cluster.
To reach the Dashboard using a web browser, the kubectl proxy command of the Kubernetes command-line client can be used. This creates a local proxy that forwards traffic to the remote cluster so that the Dashboard is available via a local URL.
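A minimal sketch of that workflow, assuming kubectl is already configured for the target cluster (the exact proxy URL depends on the Dashboard version and the namespace it is deployed in):
kubectl proxy
# The proxy listens on localhost:8001 by default; the Dashboard is then reachable
# at a URL such as http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/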
This comes with a significant limitation - the Dashboard can only be accessed from the machine where the command is executed.
A simple solution is to create a Service of type LoadBalancer so it can be accessed from outside the cluster without having to run the CLI.
On AWS, that would create a Classic Load Balancer proxying traffic to the Dashboard.
First, the authentication behaviour of the Dashboard should be changed, as it expects users to authenticate using a token.
The GitHub repository for the Kubernetes Dashboard contains a few alternative manifests:
The alternative manifest sets up the Dashboard instance to serve traffic over HTTP on port 9090, and it also has authentication disabled. That is exactly what we need:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/alternative/kubernetes-dashboard.yaml
...
serviceaccount "kubernetes-dashboard" created
role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
deployment.apps "kubernetes-dashboard" created
service "kubernetes-dashboard" created
Note that the resources are created in the kube-system namespace.
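As a quick sanity check (not part of the setup itself), the created resources can be listed in that namespace:
kubectl -n kube-system get serviceaccount,deployment,service kubernetes-dashboard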
Issue with the manifest
I did run into an issue with the alternative manifest - a Pod was not created for the Deployment.
Kubernetes could not pull the image:
kubectl get pod kubernetes-dashboard-75dfcb6bfc-qfgxm
...
NAME                                    READY   STATUS             RESTARTS   AGE
kubernetes-dashboard-75dfcb6bfc-qfgxm   0/1     ImagePullBackOff   0          14m
The image for the Deployment is set to k8s.gcr.io/kubernetes-dashboard-amd64:v2.0.0-alpha0, but it looks like the tag, v2.0.0-alpha0, is invalid:
kubectl describe pod kubernetes-dashboard-75dfcb6bfc-qfgxm
...
Normal   BackOff  19m (x21 over 23m)  kubelet, ip-10-200-32-34.ap-southeast-2.compute.internal  Back-off pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v2.0.0-alpha0"
Warning  Failed   4m (x97 over 23m)   kubelet, ip-10-200-32-34.ap-southeast-2.compute.internal  Error: ImagePullBackOff
By digging into the registry, I realised that the latest published image uses a different tag from the one referenced in the manifest.
It looks like a bug in the alternative manifest and should be fixed soon.
Edit the Deployment (this would bring up the default editor set in the terminal to edit the manifest):
kubectl edit deployment kubernetes-dashboard
Change the image property and save. This should force Kubernetes to terminate the current pod and create a new one with the updated container image.
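Alternatively, the image can be patched without opening an editor. This is only a sketch: it assumes the container in the Deployment is named kubernetes-dashboard, and v1.10.1 is used purely as an example of a published tag; substitute whichever tag you found in the registry.
kubectl -n kube-system set image deployment/kubernetes-dashboard kubernetes-dashboard=k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1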
That got the Dashboard up and running.
Back to creating a service that can be accessed externally…
The above manifest already created a Service called kubernetes-dashboard of type ClusterIP (which can only be accessed from within the cluster). The following manifest changes it to a LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
  annotations:
    external-dns.alpha.kubernetes.io/hostname: $DOMAIN_NAME
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: $CERTIFICATE_ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 9090
There are a few things worth explaining.
The external-dns.alpha.kubernetes.io/hostname annotation relies on ExternalDNS running in the cluster to create a DNS record for the load balancer automatically. This can easily be avoided, but DNS should then be updated manually.
The three service.beta.kubernetes.io annotations configure the ELB itself: it terminates TLS on port 443 using the ACM certificate referenced by $CERTIFICATE_ARN and forwards traffic to the Dashboard backend over plain HTTP.
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: $CERTIFICATE_ARN
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
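Once the manifest is applied, the AWS cloud provider provisions the ELB and its DNS name shows up on the Service. A minimal sketch, assuming the manifest above is saved as dashboard-service.yaml (a filename chosen for this example):
kubectl apply -f dashboard-service.yaml
kubectl -n kube-system get service kubernetes-dashboard
# The EXTERNAL-IP column shows the ELB's DNS name once it has been provisioned.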
Dashboard is now public!
It is worth mentioning that the Dashboard is exposed to the outside world via the ELB, which would not be ideal for most clusters. One solution is to make it internal by adding an annotation to the manifest, sketched below.
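This is a sketch assuming the in-tree AWS cloud provider; older Kubernetes versions expect the value 0.0.0.0/0 instead of "true":
service.beta.kubernetes.io/aws-load-balancer-internal: "true"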
An Internal Load Balancer gets a private IP, so it cannot be reached via the internet. The downside is that a VPN connection is required to access the Dashboard.
Unlike the recommended deployment, the alternative deployment has logging in with a Bearer Token disabled. This turned out to be an advantage in my case, as I was setting up a read-only Dashboard anyway.
By default, the Dashboard is deployed with a minimal set of RBAC permissions. At this stage, nothing is visible on the Dashboard.
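The minimal permissions come from the kubernetes-dashboard-minimal Role created earlier (its name appears in the kubectl create output above); it can be inspected to see how little it grants:
kubectl -n kube-system describe role kubernetes-dashboard-minimal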
View resources in a namespace
Kubernetes comes with a default ClusterRole called view that allows read-only access to see most objects in a namespace.
Refer here for more information about pre-defined roles.
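To see exactly which resources view covers, the ClusterRole can be described directly:
kubectl describe clusterrole view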
The Dashboard is associated with a ServiceAccount called kubernetes-dashboard, which controls permissions for the instance. Binding the ClusterRole view to the ServiceAccount kubernetes-dashboard allows users to see most of the resources. One namespace-bound resource type that would not be visible is Secret:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-view
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
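After creating the binding, the ServiceAccount's effective permissions can be spot-checked with kubectl auth can-i; the default namespace here is just an example:
kubectl auth can-i list pods -n default --as=system:serviceaccount:kube-system:kubernetes-dashboard
# expected: yes
kubectl auth can-i list secrets -n default --as=system:serviceaccount:kube-system:kubernetes-dashboard
# expected: no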
Monitor system resources
It is also necessary to see how Nodes and PersistentVolumes behave. To allow this, a custom ClusterRole is required:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: view-system
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["nodes", "persistentvolumes"]
  verbs: ["get", "watch", "list"]
Bind it to the ServiceAccount:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-view-system
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view-system
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
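The same spot-check works for the cluster-scoped resources:
kubectl auth can-i list nodes --as=system:serviceaccount:kube-system:kubernetes-dashboard
kubectl auth can-i list persistentvolumes --as=system:serviceaccount:kube-system:kubernetes-dashboard
# both should now return yes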
This creates a fairly comprehensive view of the Kubernetes cluster and its resources. It can always be extended by granting additional permissions to more resources.