Kubectl config within AKS

I have a cluster in Azure (AKS). In this cluster, I have two node pools: a System pool and a User pool to run apps.
When executing the kubectl get pod command inside a pod, I get the error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The pod is running on the system node (it does not work on the user node either), in one of my own namespaces (let's call it cron).
But when running the same command in a pod belonging to the kube-system namespace on the system node, it works fine.
It looks like it is linked to the kubectl configuration (kubeconfig), but I don't get why it works in the kube-system namespace and not in the cron one.
What did I miss in AKS that prevents me from running kubectl commands in a pod that does not belong to the kube-system namespace?
Edit 1:
The environment variables are different, especially those linked to Kubernetes.
In the pod running under kube-system, I got the URL:
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
KUBE_DNS_PORT=udp://10.0.0.10:53
KUBE_DNS_PORT_53_TCP_ADDR=10.0.0.10
KUBERNETES_DASHBOARD_PORT_443_TCP_PORT=443
KUBERNETES_DASHBOARD_SERVICE_PORT=443
KUBE_DNS_SERVICE_HOST=10.0.0.10
KUBERNETES_PORT_443_TCP=tcp://*****.hcp.japaneast.azmk8s.io:443
KUBE_DNS_PORT_53_TCP_PORT=53
KUBE_DNS_PORT_53_UDP=udp://10.0.0.10:53
KUBE_DNS_PORT_53_UDP_PROTO=udp
KUBE_DNS_SERVICE_PORT_DNS=53
KUBE_DNS_PORT_53_TCP_PROTO=tcp
KUBE_DNS_PORT_53_UDP_ADDR=10.0.0.10
KUBERNETES_DASHBOARD_PORT_443_TCP_ADDR=10.0.207.97
KUBE_DNS_SERVICE_PORT_DNS_TCP=53
KUBERNETES_DASHBOARD_PORT_443_TCP=tcp://10.0.207.97:443
KUBERNETES_DASHBOARD_PORT=tcp://10.0.207.97:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBE_DNS_PORT_53_UDP_PORT=53
KUBERNETES_PORT_443_TCP_ADDR=****.hcp.japaneast.azmk8s.io
KUBERNETES_SERVICE_HOST=*****.hcp.japaneast.azmk8s.io
KUBERNETES_PORT=tcp://*****.hcp.japaneast.azmk8s.io:443
KUBERNETES_PORT_443_TCP_PORT=443
KUBE_DNS_PORT_53_TCP=tcp://10.0.0.10:53
KUBERNETES_DASHBOARD_PORT_443_TCP_PROTO=tcp
KUBE_DNS_SERVICE_PORT=53
KUBERNETES_DASHBOARD_SERVICE_HOST=10.0.207.97
In my own cron namespace:
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
About namespaces: I labeled my own namespace with the same labels as kube-system, but no luck.
About the config: kubectl config view comes back empty in both pods (run inside the pod):
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
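For reference, here is a minimal sketch of pointing kubectl at the in-cluster API server explicitly when the kubeconfig is empty. It assumes the pod has the default ServiceAccount token mounted at the standard path and that this ServiceAccount is allowed to list pods; neither of these is confirmed above.
# These paths are the Kubernetes defaults for a mounted ServiceAccount token.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
# KUBERNETES_SERVICE_HOST/PORT come from the environment dumps shown above.
kubectl get pods \
  --server="https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}" \
  --token="${TOKEN}" \
  --certificate-authority="${CACERT}" \
  --namespace="${NAMESPACE}"
Without a kubeconfig or a usable in-cluster configuration, kubectl falls back to localhost:8080, which matches the error above.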

Related

What does `kubectl set env daemonset aws-node` mean?

This page: https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/iam-policy.md
uses:
kubectl set env daemonset aws-node -n kube-system CLUSTER_NAME=${YourClusterName}
and according to this page: https://kubernetes.io/docs/reference/kubectl/
everything after set seems to be a subcommand.
However, I don't understand what subcommand this is: daemonset aws-node -n kube-system CLUSTER_NAME=${YourClusterName}
Can someone explain (hopefully with more docs)?
...don't understand what subcommand this is: daemonset aws-node -n kube-system CLUSTER_NAME=${YourClusterName}
Every EKS node runs an instance of the aws-node pod, which provides CNI functionality to the node. The command sets the environment variable named CLUSTER_NAME to the name of your cluster for every pod managed by the DaemonSet. So that you do not need to go node by node changing each running pod's environment variables, the command sets the variable and automatically restarts all of the pods for you.
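For reference, the command breaks down as follows: set env is the kubectl subcommand, daemonset aws-node names the resource type and object to modify, -n kube-system selects the namespace, and CLUSTER_NAME=${YourClusterName} is the variable to set on the pod template. The same pattern, with a placeholder cluster name, looks like this:
# Set the variable on the DaemonSet's pod template (placeholder value).
kubectl set env daemonset/aws-node -n kube-system CLUSTER_NAME=my-cluster
# List the environment variables currently defined on the pod template, to verify.
kubectl set env daemonset/aws-node -n kube-system --list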

Cluster name within the Pod [duplicate]

As stated in the title, is it possible to find out a K8s cluster name from the API? I looked around the API and could not find it.
kubectl config current-context does the trick (it outputs a little bit more, like project name, region, etc., but it should give you the answer you need).
Unfortunately a cluster doesn't know its own name, or anything else that would uniquely identify it (K8s issue #44954). I wanted to know for helm issue #2055.
Update:
A common workaround is to create a ConfigMap containing the cluster name and read that when required (#2055 comment 1244537799).
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-info
  namespace: kube-system
data:
  cluster-name: foo
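Reading it back later could then look like this (assuming the ConfigMap above has been created):
kubectl -n kube-system get configmap cluster-info -o jsonpath='{.data.cluster-name}'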
There is no way to get the name via the K8s API. But here is a one-liner in case the name you have in your .kube/config file is enough for you (if you downloaded it from your cloud provider, the names should match):
kubectl config view --minify -o jsonpath='{.clusters[].name}'
Note 1: The --minify flag is key here, so that it outputs the name for your current context only. There are other similar answers posted here, but without --minify you will also be listing the other contexts in your config, which might confuse you.
Note 2: The name in your .kube/config might not reflect the name in your cloud provider. If the file was autogenerated by the cloud provider the names should match; if you configured it manually you could have typed any name, just for your local config.
Note 3: Do not rely on kubectl config current-context; this returns just the name of the context, not the name of the cluster.
I don't believe there is a K8s cluster name. This command could provide some useful information:
kubectl cluster-info
The question is not really well described. However, if this question is related to Google Container Engine, then, as coreypobrien mentioned, the name of the cluster is stored in the custom metadata of the nodes. From inside a node, run the following command and the output will be the name of the cluster:
curl http://metadata/computeMetadata/v1/instance/attributes/cluster-name -H "Metadata-Flavor: Google"
If you specify your use case, I might be able to extend my answer to cover it.
The Kubernetes API doesn't know much about the GKE cluster name, but you can easily get the cluster name from the Google metadata server like this:
kubectl run curl --rm --restart=Never -it --image=appropriate/curl -- -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/cluster-name
It is the same as getting the current config, but the command below gives clearer output:
kubectl config view
This command will check all possible clusters; as you know, your KUBECONFIG may have multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
And you will get output like:
Cluster name Server
kubernetes https://localhost:6443
At least for Kubespray clusters, the following works for me:
kubectl config current-context | cut -d '#' -f2
For clusters that were installed using kubeadm, the configuration stored in the kubeadm-config configmap has the cluster name used when installing the cluster.
$ kubectl -n kube-system get configmap kubeadm-config -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeadm-config
  namespace: kube-system
data:
  ClusterConfiguration: |
    clusterName: NAME_OF_CLUSTER
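To extract just the name from that ConfigMap, something like the following should work; ClusterConfiguration is stored as a YAML string, so a plain grep is used here:
kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' | grep clusterName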
For clusters that are using CoreDNS for their DNS, the "cluster name" from kubeadm is also used as the domain suffix.
$ kubectl -n kube-system get configmap coredns -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        kubernetes NAME_OF_CLUSTER.local in-addr.arpa ip6.arpa {
Well, this returns precisely one thing: a cluster name.
K8s:
kubectl config view -o jsonpath='{.clusters[].name}{"\n"}'
Openshift:
oc config view -o jsonpath='{.clusters[].name}{"\n"}'
$ kubectl config get-clusters   # gets you the list of existing clusters
Using the Python K8s client. But this won't work with incluster_kubeconfig.
from kubernetes import config
cluster_context = config.kube_config.list_kube_config_contexts()
print(cluster_context)
# ([{'context': {'cluster': 'k01.test.use1.aws.platform.gov', 'user': 'k01-test'}, 'name': 'k01.test.use1.aws.platform.gov'}], {'context': {'cluster': 'k01.test.use1.aws.platform.gov', 'user': 'k01-test'}, 'name': 'k01.test.use1.aws.platform.gov'})
cluster_name = cluster_context[1]['context']['cluster']
print(cluster_name)
# k01.test.use1.aws.platform.gov
Using kubectl command:
$ kubectl config get-clusters
NAME
kubernetes
kubectl config get-clusters
kubectl config get-contexts
There is a great tool called kubectx https://github.com/ahmetb/kubectx.
kubectx - lists all previously added clusters and highlights the currently used one. This is only one word to type instead of kubectl config current-context.
kubectx <cluster> - switches to a chosen cluster.
Moreover, this tool also comes with kubens, which does exactly the same for namespaces:
kubens - lists all namespaces and shows the current one,
kubens <namespace> - switches to a chosen namespace.

Not able to access application running on kubernetes pod (Using Docker-Desktop: Single-node cluster)

[Screenshots in the original post showed the running service, the error when accessing it, the kubectl get pods output, and the Service and Deployment YAML files.]
Check the pod status to see whether it is running or not.
Also, you can try port-forwarding to the pod:
kubectl port-forward <POD name> 8086:8086
and then open localhost:8086.
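As a concrete sequence (the pod name and ports below are placeholders, since the original YAML was only attached as screenshots):
kubectl get pods                          # confirm the pod is in the Running state
kubectl port-forward my-pod 8086:8086 &   # forward local port 8086 to the pod
curl http://localhost:8086/               # the app should now respond locally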

How to debug why my pods are pending in GCE

I'm trying to get a pod running on GCE. The pod has an init container, and is created by me applying a manifest with a deployment that creates 1 replica of the pod.
When I look at my workloads on the cloud console, I can see that under 'Active revisions' my deployment is in the state of 'Pods are pending', and under 'Managed pods', the status is 'PodsInitializing'.
The container logs are empty, and the audit logs contain a single entry for the creation of the deployment.
My pods seem to be stuck in the above state, and I'm not really sure why. How do I go about debugging that?
Edit:
kubectl get pods --namespace=my-namespace
Outputs:
NAME READY STATUS RESTARTS AGE
my-pod-v77jm 0/1 Init:0/1 0 55m
But when I run:
kubectl describe pod my-pod-v77jm
I get
Error from server (NotFound): pods "my-pod-v77jm" not found
If you have access to kube-api via kubectl:
Use describe to see details about the pod and its containers:
kubectl describe pod myPod --namespace mynamespace
To view container logs (including init containers):
kubectl logs myPod --namespace mynamespace -c initContainerName
You can get more information about pod statuses and how to debug init containers in the Kubernetes documentation.
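In addition to describe and logs, the namespace events often explain why an init container is stuck (image pull failures, missing volumes, and so on). Using the same placeholder names as above:
kubectl get events --namespace mynamespace --field-selector involvedObject.name=myPod --sort-by=.lastTimestamp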
