Not able to browse Kubernetes dashboard for clusters created in Azure - kubernetes

Trying to access Kubernetes dashboard (Azure AKS) by using below command but getting error as attached.
az aks browse --resource-group rg-name --name aks-cluster-name --listen-port 8851

Please read the AKS documentation on how to authenticate to the dashboard from the link. It also explains how to enable the dashboard add-on for newer Kubernetes versions.
Pasting the relevant steps here for reference.
Use a kubeconfig
For both Azure AD enabled and non-Azure AD enabled clusters, a kubeconfig can be passed in. Ensure your access tokens are valid; if they have expired, you can refresh them via kubectl.
Set the admin kubeconfig with az aks get-credentials -a --resource-group <RG_NAME> --name <CLUSTER_NAME>
Select Kubeconfig and click Choose kubeconfig file to open file selector
Select your kubeconfig file (defaults to $HOME/.kube/config)
Click Sign In
Use a token
For non-Azure AD enabled clusters, run kubectl config view and copy the token associated with the user account of your cluster.
Paste into the token option at sign in.
Click Sign In
For Azure AD enabled clusters, retrieve your AAD token with the following command. Validate you've replaced the resource group and cluster name in the command.
kubectl config view -o jsonpath='{.users[?(@.name == "clusterUser_<RESOURCE GROUP>_<AKS_NAME>")].user.auth-provider.config.access-token}'

Try running this:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
You will get values for several keys such as Name, Labels, ..., and token. The important one is the token related to your account; copy that token and paste it into the dashboard sign-in.
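Note that the grep admin-user above assumes a dashboard service account named admin-user already exists. If it does not, here is a minimal sketch of creating one; the names and the very broad cluster-admin binding are assumptions, not part of the original answer, and should be scoped down for anything beyond a test cluster.
# Hypothetical: create a dashboard service account named admin-user in kube-system
# and bind it to the built-in cluster-admin ClusterRole (very broad; scope down in production).
kubectl create serviceaccount admin-user -n kube-system
kubectl create clusterrolebinding admin-user-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:admin-user
# On Kubernetes 1.24+ token Secrets are no longer auto-created; you can mint a token with:
# kubectl -n kube-system create token admin-user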

Related

How to access kubeconfig file inside containers

I have a container where I used a bitnami/kubectl image.
Now I want to run a few kubectl commands inside that container.
How can the kubectl container be aware of my kubeconfig file?
I know that I can mount the local kubeconfig file into containers and use it.
But is there any other way possible to access kubeconfig without using it as a volume mount?
I went through the documentation of RBAC in Kubernetes.
Is configuring a Role and RoleBinding alone enough to run kubectl apply and kubectl delete commands successfully, even without mounting a kubeconfig file?
It would be really helpful if someone helps me with this.
Thanks in advance!
Now I want to run a few kubectl commands inside that container.
Why do you need it inside the container?
kubectl is your CLI to "communicate" with the cluster; the commands are sent to the kube-apiserver, where they are authenticated, validated (including by admission controllers), and applied.
It is not clear why you need to run kubectl commands inside the container, since kubectl uses your kubeconfig file for that communication (it reads the certificate path and certificate data) and can already connect to your cluster from your machine.
How to query the K8s API from inside your container?
The appropriate solution is to run an API query inside your container.
Every pod has its ServiceAccount token mounted internally, which allows you to query the API.
Here is the script I use to query the API:
https://github.com/nirgeier/KubernetesLabs/blob/master/Labs/21-KubeAPI/api_query.sh
#!/bin/sh
#################################
## Access the internal K8S API ##
#################################
# Point to the internal API server hostname
API_SERVER_URL=https://kubernetes.default.svc
# Path to ServiceAccount token
# The service account is mapped by the K8S API server in the pods
SERVICE_ACCOUNT_FOLDER=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace if required
# NAMESPACE=$(cat ${SERVICE_ACCOUNT_FOLDER}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICE_ACCOUNT_FOLDER}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICE_ACCOUNT_FOLDER}/ca.crt
# Explore the API with TOKEN and the Certificate
curl -X GET \
--cacert ${CACERT} \
--header "Authorization: Bearer ${TOKEN}" \
${API_SERVER_URL}/api
You can use kubectl without your kubeconfig file. Your pod is launched with a service account, and all kubectl commands will be executed with that service account's privileges. So you first have to use RBAC to grant the required access rights to that service account, as sketched below.
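A rough sketch of that RBAC setup using kubectl commands; every name here (namespace apps, service account deployer, deployment kubectl-runner) is a hypothetical placeholder, and the verb/resource list should be narrowed to what your commands actually need.
# Hypothetical names: namespace "apps", service account "deployer", deployment "kubectl-runner".
kubectl create serviceaccount deployer -n apps
# kubectl apply/delete need both read and write verbs on the resources you manage.
kubectl create role deployer-role -n apps \
  --verb=get,list,watch,create,update,patch,delete \
  --resource=deployments,pods,services
kubectl create rolebinding deployer-binding -n apps \
  --role=deployer-role \
  --serviceaccount=apps:deployer
# Run the pod that contains kubectl under that service account.
kubectl set serviceaccount deployment kubectl-runner deployer -n apps
# Sanity-check the permissions without exec'ing into the pod.
kubectl auth can-i create deployments -n apps \
  --as=system:serviceaccount:apps:deployer
With the service account attached, kubectl inside the pod falls back to the in-cluster configuration (the mounted token and CA), so no kubeconfig volume mount is needed.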

CLI command ordering for toggling between two GKE/kubectl accounts w/ two emails

Several weeks ago I asked this question and received a very helpful answer. The gist of that question was: "how do I switch back and forth between two different K8s/GCP accounts on the same machine?" I have 2 different K8s projects with 2 different emails (gmails) that live on 2 different GKE clusters in 2 different GCP accounts. And I wanted to know how to switch back and forth between them so that when I run kubectl and gcloud commands, I don't inadvertently apply them to the wrong project/account.
The answer was to basically leverage kubectl config set-context along with a script.
This question (today) is an extension of that question, a "Part 2" so to speak.
I am confused about the order in which I:
Set the K8s context (again via kubectl config set-context ...); and
Run gcloud init; and
Run gcloud auth; and
Can safely run kubectl and gcloud commands and be sure that I am hitting the right GKE cluster
My understanding is that gcloud init only has to be run once to initialize gcloud on your system, which I have already done.
So my thinking here is that I could be able to do the following:
# 1. switch K8s context to Project 1
kubectl config set-context <context for GKE project 1>
# 2. authenticate w/ GCP so that now gcloud commands will only hit the GCP
# resources associated with Project 1 (and GCP Account 1)
gcloud auth
# 3. run a bunch of kubectl and gcloud commands for Project/GCP Account 1
# 4. switch K8s context to Project 2
kubectl config set-context <context for GKE project 2>
# 5. authenticate w/ GCP so that now gcloud commands will only hit the GCP
# resources associated with Project 2 (and GCP Account 2)
gcloud auth
# 6. run a bunch of kubectl and gcloud commands for Project/GCP Account 2
Is my understanding here correct or is it more involved/complicated than this (and if so, why)?
I'll assume familiarity with the earlier answer
gcloud
gcloud init need only be run once per machine and only again if you really want to re-init'ialize the CLI (gcloud).
gcloud auth login ${ACCOUNT} authenticates a (Google) (user or service) account and persists (on Linux by default in ${HOME}/.config/gcloud) and renews the credentials.
gcloud auth list lists the accounts that have been gcloud auth login. The results show which account is being used by default (ACTIVE with *).
Somewhat inconveniently, the way to switch the currently ACTIVE account is to change gcloud's global (machine-wide) configuration using gcloud config set account ${ACCOUNT}.
kubectl
To facilitate using previously authenticated (i.e. gcloud auth login ${ACCOUNT}) credentials with Kubernetes Engine, Google provides the command gcloud container clusters get-credentials. This uses the currently ACTIVE gcloud account to create a kubectl context that joins a Kubernetes Cluster with a User and possibly with a Kubernetes Namespace too. gcloud container clusters get-credentials makes changes to kubectl config (on Linux by default in ${HOME}/.kube/config).
What is a User? See Users in Kubernetes. Kubernetes Engine (via kubectl) wants (OpenID Connect) Tokens. And, conveniently, gcloud can provide these tokens for us.
How? Per previous answer
user:
  auth-provider:
    config:
      access-token: [[redacted]]
      cmd-args: config config-helper --format=json
      cmd-path: path/to/google-cloud-sdk/bin/gcloud
      expiry: "2022-02-22T22:22:22Z"
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
kubectl uses the configuration file to invoke gcloud config config-helper --format=json and extracts the access_token and token_expiry from the result. GKE can then use the access_token to authenticate the user. And, if necessary can renew the token using Google's token endpoint after expiry (token_expiry).
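If you want to see the same values kubectl extracts, you can run the helper yourself. A small inspection sketch, assuming jq is installed; note that gcloud documents config-helper as intended for internal use, so treat its output format as subject to change.
# Print the access token and its expiry that kubectl would use for GKE.
gcloud config config-helper --format=json | jq -r '.credential.access_token'
gcloud config config-helper --format=json | jq -r '.credential.token_expiry'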
Scenario
So, how do you combine all of the above?
Authenticate gcloud with all your Google accounts
ACCOUNT="client1#gmail.com"
gcloud auth login ${ACCOUNT}
ACCOUNT="client2#gmail.com"
gcloud auth login ${ACCOUNT} # Last will be the `ACTIVE` account
Enumerate these
gcloud auth list
Yields:
ACTIVE  ACCOUNT
        client1@gmail.com
*       client2@gmail.com   # This is ACTIVE
To set the active account, run:
  $ gcloud config set account `ACCOUNT`
Switch between users for gcloud commands
NOTE This doesn't affect kubectl
Either
gcloud config set account client1@gmail.com
gcloud auth list
Yields:
ACTIVE  ACCOUNT
*       client1@gmail.com   # This is ACTIVE
        client2@gmail.com
Or you can explicitly add --account=${ACCOUNT} to any gcloud command, e.g.:
# Explicitly unset your account
gcloud config unset account
# This will work and show projects accessible to client1
gcloud projects list --account=client1@gmail.com
# This will work and show projects accessible to client2
gcloud projects list --account=client2@gmail.com
Create kubectl contexts for any|all your Google accounts (via gcloud)
Either
ACCOUNT="client1#gmail.com"
PROJECT="..." # Project accessible to ${ACCOUNT}
gcloud container clusters get-credentials ${CLUSTER} \
--account=${ACCOUNT} \
--project=${PROJECT} \
...
Or equivalently using kubectl config set-context directly:
kubectl config set-context ${CONTEXT} \
--cluster=${CLUSTER} \
--user=${USER} \
But the gcloud approach avoids having to look up the cluster and user names first (kubectl config get-clusters, kubectl config get-users, etc.).
NOTE gcloud container clusters get-credentials uses derived names for contexts and GKE uses derived names for clusters. If you're confident, you can edit the kubectl config directly (or use kubectl config commands) to rename these cluster, context and user references to suit your needs.
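For example, a sketch of renaming the derived context names to something shorter; the gke_... names below are invented placeholders, so check kubectl config get-contexts for your real ones first.
kubectl config get-contexts
# Hypothetical derived names; substitute the ones shown by the listing above.
kubectl config rename-context gke_project-1_us-central1-a_a-cluster client1
kubectl config rename-context gke_project-2_europe-west1-b_b-cluster client2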
List kubectl contexts
kubectl config get-contexts
Yields:
CURRENT   NAME      CLUSTER     AUTHINFO   NAMESPACE
*         client1   a-cluster   client1
          client2   b-cluster   client2
Switch between kubectl contexts (clusters*users)
NOTE This doesn't affect gcloud
Either
kubectl config use-context ${CONTEXT}
Or you can explicitly add the --context flag to any kubectl command
# Explicitly unset default|current context
kubectl config unset current-context
# This will work and list deployments accessible to ${CONTEXT}
kubectl get deployments --context=${CONTEXT}
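Putting both halves together, a minimal sketch of a toggle script, assuming the contexts were renamed to client1 and client2 and the two accounts are client1@gmail.com and client2@gmail.com (all of these names are placeholders).
#!/bin/sh
# switch-client.sh - switch the ACTIVE gcloud account and the kubectl context together.
# Usage: ./switch-client.sh client1|client2
set -e
case "$1" in
  client1)
    gcloud config set account client1@gmail.com
    kubectl config use-context client1
    ;;
  client2)
    gcloud config set account client2@gmail.com
    kubectl config use-context client2
    ;;
  *)
    echo "Usage: $0 client1|client2" >&2
    exit 1
    ;;
esac
# Show what is now active for both tools.
gcloud auth list
kubectl config current-context
After running it, both gcloud and kubectl target the same account and cluster, which answers the ordering question: gcloud init once, gcloud auth login once per account, then only gcloud config set account and kubectl config use-context when toggling.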

Generating a kubeconfig file and authenticating for google cloud

I have a Kubernetes cluster. Inside my cluster is a Django application which needs to connect to my Kubernetes cluster on GKE. Upon my Django start up (inside my Dockerfile), I authenticate with Google Cloud by using:
gcloud auth activate-service-account $GKE_SERVICE_ACCOUNT_NAME --key-file=$GOOGLE_APPLICATION_CREDENTIALS
gcloud config set project $GKE_PROJECT_NAME
gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_ZONE
I am not really sure if I need to do this everytime my Django container starts, and I am not sure I understand how authentication to Google Cloud works. Could I perhaps just generate my Kubeconfig file, store it somewhere safe and use it all the time instead of authenticating?
In other words, is a Kubeconfig file enough to connect to my GKE cluster?
If your service is running in a Pod inside the GKE cluster you want to connect to, use a Kubernetes service account to authenticate.
Create a Kubernetes service account and attach it to your Pod. If your Pod already has a Kubernetes service account, you may skip this step.
Use Kubernetes RBAC to grant the Kubernetes service account the correct permissions.
The following example grants edit permissions in the prod namespace:
kubectl create rolebinding yourserviceaccount \
--clusterrole=edit \
--serviceaccount=yournamespace:yourserviceaccount \
--namespace=prod
At runtime, when your service invokes kubectl, it automatically receives the credentials you configured.
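A short sketch of those two steps using only kubectl; yourserviceaccount and yournamespace come from the example above, while django-app is a hypothetical Deployment name.
# Create the service account (skip if your Pod already has one).
kubectl create serviceaccount yourserviceaccount -n yournamespace
# Attach it to the Deployment that runs the Django app.
kubectl set serviceaccount deployment django-app yourserviceaccount -n yournamespace
# Verify the permissions granted by the rolebinding above.
kubectl auth can-i create deployments -n prod \
  --as=system:serviceaccount:yournamespace:yourserviceaccount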
You can also store the credentials as a Secret and mount it on your pod so that it can read them from there.
To use a Secret with your workloads, you can specify environment variables that reference the Secret's values, or mount a volume containing the Secret.
You can create a Secret using the command-line or a YAML file.
Here is an example using Command-line
kubectl create secret SECRET_TYPE SECRET_NAME DATA
SECRET_TYPE: the Secret type, which can be one of the following:
generic: Create a Secret from a local file, directory, or literal value.
docker-registry: Create a dockercfg Secret for use with a Docker registry. Used to authenticate against Docker registries.
tls: Create a TLS secret from the given public/private key pair. The public/private key pair must already exist. The public key certificate must be .PEM encoded and match the given private key.
For most Secrets, you use the generic type.
SECRET_NAME: the name of the Secret you are creating.
DATA: the data to add to the Secret, which can be one of the following:
A path to a directory containing one or more configuration files, indicated using the --from-file or --from-env-file flags.
Key-value pairs, each specified using --from-literal flags.
If you need more information about kubectl create, you can check the reference documentation.
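As a concrete illustration, a minimal generic Secret; the name gke-credentials, the literal, and the key file path are invented for the example.
# Create a generic Secret from a literal value and a local file.
kubectl create secret generic gke-credentials \
  --from-literal=project=my-project \
  --from-file=key.json=/path/to/serviceaccount_key.json
# Inspect it; the data values are base64-encoded.
kubectl get secret gke-credentials -o yaml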

How to access GKE kubectl proxy dashboard?

I would imagine the interface would have some button I could click to launch the kubectl proxy dashboard, but I could not find it.
I tried this command to get the token and entered it in:
gcloud container clusters get-credentials mycluster
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
kubectl proxy
And it shows some things, but not others (services are missing, says it's forbidden).
How do I use kubectl proxy or show that dashboard with GKE?
Provided you are authenticated with gcloud auth login and the current project and k8s cluster are configured to the ones you need, authenticate kubectl to the cluster (this will write to ~/.kube/config):
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project>
Retrieve the auth token that kubectl itself uses to authenticate as you:
gcloud config config-helper --format=json | jq -r '.credential.access_token'
run
kubectl proxy
Then open a web browser on your local machine at
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
(This will only work if you checked the checkbox Deploy Dashboard in GCP console)
and use the token from the second command to log in with your Google Account's permissions.
The Dashboard is disabled and deprecated in GKE as of September 2017. GKE provides a built-in dashboard through the Management Console GUI.
You can disable it from the Google Cloud Console (UI).
Edit your cluster
Go to "Add-ons" section
Find "Kubernetes dashboard"
Chose "disabled" from dropdown
Save it.
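If you prefer the CLI, older gcloud releases also exposed the dashboard as a cluster add-on flag. This is only a sketch under the assumption that your gcloud version still ships that (now removed) add-on option; the cluster name and zone are placeholders.
# Disable the deprecated Kubernetes Dashboard add-on (flag availability depends on gcloud version).
gcloud container clusters update mycluster \
  --zone us-central1-a \
  --update-addons=KubernetesDashboard=DISABLED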
Also, according to the documentation, this will be removed starting with GKE 1.15:
Warning: The open source Kubernetes Dashboard addon is deprecated for clusters on GKE and will be removed as an option in version 1.15. It is recommended to use the alternative GCP Console dashboards described on this page.
At the time of writing, the dashboard is not deployed by default (neither in the standard Kubernetes distribution, nor as part of a GKE cluster). In order to get it up and running, you have to first follow the instructions from the Kubernetes site, summarized here:
Within the proper kubectl context, run the following: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml (nb: this url is obviously subject to change, so do check the official site to obtain the most recent version).
Then do what @Alexander outlines:
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project>
gcloud config config-helper --format=json
kubectl proxy
You'll be prompted for either the access token displayed in the second step or a kubeconfig file.
Pasting the access token in the field provided will gain you entry to the dashboard.

Run kubectl on a Virtual Machine

I'm trying to get kubectl running on a VM. I followed the steps given here and could go through with the installation. I copied my local kubernetes config (from /Users/me/.kube/config) to the VM, into the .kube directory. However, when I run any command such as kubectl get nodes, it returns error: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information
Is there a way I can run kubectl on a VM ?
To use kubectl to talk to Google Container Engine cluster in a non-Google VM, you can create a user-managed IAM Service Account, and use it to authenticate to your cluster:
# Set these variables for your project
PROJECT_ID=my-project
SA_NAME=my-new-serviceaccount
SA_EMAIL=$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com
KEY_FILE=~/serviceaccount_key.json
# Create a new GCP IAM service account.
gcloud iam service-accounts create $SA_NAME
# Download a json key for that service account.
gcloud iam service-accounts keys create $KEY_FILE --iam-account $SA_EMAIL
# Give that service account the "Container Engine Developer" IAM role for your project.
gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$SA_EMAIL --role roles/container.developer
# Configure Application Default Credentials (what kubectl uses) to use that service account.
export GOOGLE_APPLICATION_CREDENTIALS=$KEY_FILE
And then go ahead and use kubectl as you normally would.
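A short usage sketch on the VM itself, assuming the key file was copied to ~/serviceaccount_key.json; the other names follow the script above.
# Point Application Default Credentials at the service account key.
export GOOGLE_APPLICATION_CREDENTIALS=~/serviceaccount_key.json
# Optionally authenticate the gcloud CLI itself as that service account
# (not needed for kubectl, which per the answer above reads the ADC key).
gcloud auth activate-service-account --key-file=$GOOGLE_APPLICATION_CREDENTIALS
# Verify with a read-only call using the kubeconfig you copied to the VM.
kubectl get nodes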