I've got 3 completely distinct pods:
kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'
kubernetes-bootcamp-5c69669756-5rh9t
queenly-seahorse-mysql-6dc964999c-h4w54
wordpress-mysql-bcc89f687-hs677
but they seem to share the same env vars. E.g.
kubectl exec "kubernetes-bootcamp-5c69669756-5rh9t" env | grep MYSQL
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP=tcp://10.98.170.14:3306
QUEENLY_SEAHORSE_MYSQL_SERVICE_PORT_MYSQL=3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_ADDR=10.98.170.14
QUEENLY_SEAHORSE_MYSQL_SERVICE_HOST=10.98.170.14
QUEENLY_SEAHORSE_MYSQL_SERVICE_PORT=3306
QUEENLY_SEAHORSE_MYSQL_PORT=tcp://10.98.170.14:3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_PORT=3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_PROTO=tcp
and then on a completely different, unrelated pod (but on the same node):
kubectl exec "queenly-seahorse-mysql-6dc964999c-h4w54" env | grep MYSQL
MYSQL_ROOT_PASSWORD=<redact>
MYSQL_PASSWORD=<redact>
MYSQL_USER=
MYSQL_DATABASE=
QUEENLY_SEAHORSE_MYSQL_PORT=tcp://10.98.170.14:3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP=tcp://10.98.170.14:3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_PORT=3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_ADDR=10.98.170.14
QUEENLY_SEAHORSE_MYSQL_SERVICE_PORT=3306
QUEENLY_SEAHORSE_MYSQL_SERVICE_HOST=10.98.170.14
QUEENLY_SEAHORSE_MYSQL_SERVICE_PORT_MYSQL=3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_PROTO=tcp
MYSQL_MAJOR=5.7
MYSQL_VERSION=5.7.14-1debian8
Any explanation why?
FWIW, I'm clearly exec'ing into 2 different pods. E.g.
kubectl exec "queenly-seahorse-mysql-6dc964999c-h4w54" env | grep HOSTNAME
HOSTNAME=queenly-seahorse-mysql-6dc964999c-h4w54
kubectl exec "kubernetes-bootcamp-5c69669756-5rh9t" env | grep HOSTNAME
HOSTNAME=kubernetes-bootcamp-5c69669756-5rh9t
All the Kubernetes Service environment variables are shared across a namespace. This is by design, so that pods can discover a specific Service if they need to.
There have been discussions about how to disable them cluster-wide; as far as I know nothing has landed for that, but newer releases do let you opt out per pod (sketch below).
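A minimal sketch of the per-pod opt-out (Kubernetes 1.13+), with a hypothetical pod name and image:
# enableServiceLinks: false suppresses the injected *_SERVICE_HOST / *_PORT_*
# variables for this pod; the KUBERNETES_* master-service variables remain.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-service-links
spec:
  enableServiceLinks: false
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "env | grep MYSQL; sleep 3600"]
EOF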
I deleted my comment and am adding this as an answer. I realized that the "QUEENLY_SEAHORSE_MYSQL_xxxx" env vars have been added by Kubernetes for a service named "queenly-seahorse-mysql" - see https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
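You can also confirm that those variables line up with a live Service of that name; using the values from the question (output abbreviated; the ClusterIP type is an assumption here):
kubectl get service queenly-seahorse-mysql
# NAME                     TYPE        CLUSTER-IP     PORT(S)    ...
# queenly-seahorse-mysql   ClusterIP   10.98.170.14   3306/TCP   ...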
Related
I would like to know how to find the Service name from the Pod name in Kubernetes.
Can anyone suggest an approach?
Services (spec.selector) and Pods (metadata.labels) are bound through shared labels.
So you want to find all Services whose selectors match (some of) the Pod's labels.
kubectl get services \
  --selector=${KEY_1}=${VALUE_1},${KEY_2}=${VALUE_2},... \
  --namespace=${NAMESPACE}
where ${KEY_n} and ${VALUE_n} are the Pod's label keys and values.
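For example, if the Pod carries the (hypothetical) label app=mysql in the default namespace:
kubectl get services \
  --selector=app=mysql \
  --namespace=default
One caveat: --selector matches the Service's own metadata.labels, which by convention often mirror its spec.selector but are not guaranteed to.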
It's tricky, though, because a Service's selector can differ from the Pod's labels: you don't want the intersection to be empty, but a Service's selector may well be only a subset of any given Pod's labels.
The following isn't quite what you want, but you may be able to extend it to do what you want. Given the above, it enumerates the Services in a Namespace and, using each Service's selector, lists the Pods that each one selects:
NAMESPACE="..."
SERVICES="$(\
kubectl get services \
--namespace=${NAMESPACE} \
--output=name)"
for SERVICE in ${SERVICES}
do
SELECTOR=$(\
kubectl get ${SERVICE} \
--namespace=${NAMESPACE}\
--output=jsonpath="{.spec.selector}" \
| jq -r '.|to_entries|map("\(.key)=\(.value)")|#csv' \
| tr -d '"')
PODS=$(\
kubectl get pods \
--selector=${SELECTOR} \
--namespace=${NAMESPACE} \
--output=name)
printf "%s: %s\n" ${SERVICE} ${PODS}
done
NOTE This requires jq because I'm unsure whether kubectl's JSONPath alone can range over a Service's selector and reformat it as needed. Even with jq, the command is messy:
Get the Service's selector as {"k1":"v1","k2":"v2",...}
Convert this to "k1=v1","k2=v2",...
Trim the surrounding " characters that @csv adds
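To see what those three steps do in isolation, you can feed the pipeline a literal selector map (keys and values hypothetical):
echo '{"app":"mysql","tier":"backend"}' \
  | jq -r '.|to_entries|map("\(.key)=\(.value)")|@csv' \
  | tr -d '"'
# app=mysql,tier=backend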
If you want to do this for all Namespaces, you can wrap everything in:
NAMESPACES=$(\
  kubectl get namespaces \
  --output=jsonpath="{.items[*].metadata.name}")
for NAMESPACE in ${NAMESPACES}
do
  ...
done
You can get information about a pod's Service from its environment variables
(ref: https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#environment-variables):
kubectl exec <pod_name> -- printenv | grep SERVICE
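For example, against the MySQL pod from the first question (output trimmed to the relevant lines):
kubectl exec queenly-seahorse-mysql-6dc964999c-h4w54 -- printenv | grep SERVICE
# QUEENLY_SEAHORSE_MYSQL_SERVICE_HOST=10.98.170.14
# QUEENLY_SEAHORSE_MYSQL_SERVICE_PORT=3306
# ...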
I'm expecting kubectl get node <node> -o yaml to show spec.providerID (see the reference below) once the kubelet has been given the additional flag --provider-id=provider://nodeID. I've used the /etc/default/kubelet file to add flags to the command line when the kubelet is started/restarted. (On a k8s 1.16 cluster.) I see the additional flags via a systemctl status kubelet --no-pager call, so the file is respected.
However, I've not seen the value returned by the kubectl get node <node> -o yaml call. I thought it might be that the node was already registered, but I believe the kubelet re-registers when it starts up, and a log line seen via journalctl -u kubelet suggests it has gone through registration.
How can I add a provider ID to a node manually?
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#nodespec-v1-core
How a kubelet is configured on the node itself is separate (AFAIK) from its definition in the master control plane, which is responsible for updating state in the central etcd store, so it's possible for the two to fall out of sync; i.e., you need to tell the control plane to update its records.
In addition to Subramanian's suggestion, kubectl patch node would also work, and has the added benefit of being easily reproducible/scriptable compared to manually editing the YAML manifest; it also leaves a "paper trail" in your shell history should you need to refer back. Take your pick :) For example,
$ kubectl patch node my-node -p '{"spec":{"providerID":"foo"}}'
node/my-node patched
$ kubectl describe node my-node | grep ProviderID
ProviderID: foo
Hope this helps!
You can edit the node config and append the providerID information under the spec section:
kubectl edit node <Node Name>
...
spec:
  podCIDR:
  providerID:
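To confirm the change took, you can read the field back (field path per the API reference above):
kubectl get node <Node Name> --output jsonpath='{.spec.providerID}'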
I have a script that deploys my application to my kubernetes cluster. However, if my current kubectl context is pointing at the wrong cluster, I can easily end up deploying my application to a cluster that I did not intend to deploy it to. What is a good way to check (from inside a script) that I'm deploying to the right cluster?
I don't really want to hardcode a specific kubectl context name, since different developers on my team have different conventions for how to name their kubectl contexts.
Instead, I'd like something more like if $(kubectl get cluster-name) != "expected-cluster-name" then error.
#!/bin/bash

if [ "$(kubectl config current-context)" != "your-cluster-name" ]
then
  echo "Do some error!!!"
  exit 1
fi

echo "Do some kubectl command"
The script above gets the current context name and compares it with your desired cluster name. On a mismatch it raises an error; otherwise it runs the desired kubectl command. (Note that kubectl config current-context returns the context name, which the question says varies between developers.)
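A variation (a sketch; it assumes the cluster entry's name in your kubeconfig is the stable thing to pin) is to compare the cluster name the current context points at, rather than the context name itself:
#!/bin/bash

# --minify reduces the kubeconfig to the entries used by the current context,
# so .clusters[0].name is the cluster backing that context.
CLUSTER=$(kubectl config view --minify --output jsonpath='{.clusters[0].name}')
if [ "$CLUSTER" != "expected-cluster-name" ]
then
  echo "Refusing to deploy: current cluster is $CLUSTER" >&2
  exit 1
fi
echo "Do some kubectl command"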
For each cluster, run kubectl cluster-info once to see what the IP/host of the master is - that should be stable for the cluster and not vary with the name of the kubectl context (which developers might be setting differently). Then capture it in the script with export MASTERA=<HOST/IP>, where that's the master for cluster A. Then the script can do:
kubectl cluster-info | grep -q $MASTERA && echo 'on MASTERA'
Or use an if-else:
if kubectl cluster-info | grep -q "$MASTERA"; then
  echo "on $MASTERA"
else
  exit 1
fi
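As an aside, the endpoint that kubectl cluster-info prints comes from your kubeconfig, so you can also read it directly instead of grepping (a sketch; it only reflects the current context):
kubectl config view --minify --output jsonpath='{.clusters[0].cluster.server}'
# e.g. https://203.0.113.10:6443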
I need to get access to the current namespace. I've looked up KUBERNETES_NAMESPACE and OPENSHIFT_NAMESPACE, but they are unset.
$ oc rsh wsec-15-t6xj4
$ env | grep KUBERNETES
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://172.30.0.1:443
KUBERNETES_PORT_53_TCP_ADDR=172.30.0.1
KUBERNETES_PORT_53_UDP_ADDR=172.30.0.1
KUBERNETES_PORT_53_TCP_PORT=53
KUBERNETES_PORT_53_TCP_PROTO=tcp
KUBERNETES_PORT_53_UDP_PORT=53
KUBERNETES_SERVICE_PORT_DNS=53
KUBERNETES_PORT_53_UDP_PROTO=udp
KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1
KUBERNETES_PORT_53_TCP=tcp://172.30.0.1:53
KUBERNETES_PORT_53_UDP=udp://172.30.0.1:53
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_DNS_TCP=53
KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_HOST=172.30.0.1
Also the content of /var/run/secrets/kubernetes.io/namespace is empty.
Any ideas?
OpenShift uses Projects instead of Namespaces:
https://docs.openshift.com/container-platform/3.9/architecture/core_concepts/projects_and_users.html#namespaces
A Project extends the Kubernetes Namespace with features such as resource limits, RBAC, and so on.
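If the goal is simply to read the current project/namespace from inside the pod, the service account mount exposes it (note the serviceaccount path segment, which the path tried in the question is missing):
cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
Alternatively, you can have it injected as an environment variable via the downward API (a fieldRef to metadata.namespace).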
As in the title: I want to clone (create a copy of) an existing cluster.
If it's not possible to copy/clone a Google Container Engine cluster, then how do I clone a Kubernetes cluster?
If that's not possible, is there a way to dump the whole cluster config?
Note:
I modify the cluster's config by calling:
kubectl apply -f some-resource.yaml
But nothing stops me or another employee from modifying the cluster by running:
kubectl edit service/resource
or from setting properties via command-line kubectl calls.
I'm using a bash script from the CoreOS team, with small adjustments, that works pretty well. By default it excludes the kube-system namespace, but you can adjust this if you need to. You can also add or remove the resources you want to copy.
for ns in $(kubectl get ns --no-headers | cut -d " " -f1); do
  if [ "$ns" != "kube-system" ]; then
    kubectl --namespace="${ns}" get --export -o=json \
      svc,rc,rs,deployments,cm,secrets,ds,statefulsets,ing | \
    jq '.items[] |
      select(.type!="kubernetes.io/service-account-token") |
      del(
        .spec.clusterIP,
        .metadata.uid,
        .metadata.selfLink,
        .metadata.resourceVersion,
        .metadata.creationTimestamp,
        .metadata.generation,
        .status,
        .spec.template.spec.securityContext,
        .spec.template.spec.dnsPolicy,
        .spec.template.spec.terminationGracePeriodSeconds,
        .spec.template.spec.restartPolicy
      )' >> "./my-cluster.json"
  fi
done
To restore it on another cluster, you have to execute kubectl create -f ./my-cluster.json. (Note that the --export flag was deprecated in kubectl 1.14 and later removed, so the script as written targets older clusters.)
You can now create/clone an existing cluster:
On the Clusters page, click Create cluster and choose an existing cluster as the template. But remember, this will not clone the API resources; you may have to use a third-party tool such as Velero to back them up and restore them.
Here are some useful links:
Cluster Creation
Velero
Medium Article on How to use Velero