I need to get access to the current namespace. I've looked up KUBERNETES_NAMESPACE and OPENSHIFT_NAMESPACE, but they are unset.
$ oc rsh wsec-15-t6xj4
$ env | grep KUBERNETES
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://172.30.0.1:443
KUBERNETES_PORT_53_TCP_ADDR=172.30.0.1
KUBERNETES_PORT_53_UDP_ADDR=172.30.0.1
KUBERNETES_PORT_53_TCP_PORT=53
KUBERNETES_PORT_53_TCP_PROTO=tcp
KUBERNETES_PORT_53_UDP_PORT=53
KUBERNETES_SERVICE_PORT_DNS=53
KUBERNETES_PORT_53_UDP_PROTO=udp
KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1
KUBERNETES_PORT_53_TCP=tcp://172.30.0.1:53
KUBERNETES_PORT_53_UDP=udp://172.30.0.1:53
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_DNS_TCP=53
KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_HOST=172.30.0.1
Also the content of /var/run/secrets/kubernetes.io/namespace is empty.
Any ideas?
OpenShift uses Projects instead of Namespaces.
https://docs.openshift.com/container-platform/3.9/architecture/core_concepts/projects_and_users.html#namespaces
A Project extends the features of a Kubernetes namespace with things such as resource limits, RBAC, and so on.
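That said, the project (namespace) name is still readable from inside a running pod as long as the default service account token is mounted. A minimal check, run inside the container (note that the path includes the serviceaccount directory, unlike the path tried in the question):
cat /var/run/secrets/kubernetes.io/serviceaccount/namespace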
I have followed the getting started instructions here: https://linkerd.io/2/getting-started/
Please see the command below:
kubectl kustomize kustomize/deployment | \
linkerd inject - | \
kubectl apply -f -
emojivoto is now installed and accessible, as I expected.
How can I remove emojivoto? This appears to work:
kubectl delete -f https://run.linkerd.io/emojivoto.yml
However, is it possible to do this without using an online resource?
This is of course possible: the mentioned YAML consists of multiple object definitions, for example namespaces and service accounts.
Each of them can be deleted using kubectl delete <type> <name>.
Since all objects are created in the namespace emojivoto it is possible to remove everything by just removing the namespace: kubectl delete namespace emojivoto.
The other option is to save the yaml file locally and use kubectl delete -f <file> instead.
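For example, since the install went through kustomize, the same rendered manifests can be fed straight into kubectl delete; a sketch, assuming the kustomize/deployment directory from the question is still available:
# Render the same manifests that were applied and delete them
# (the linkerd inject step is not needed for deletion)
kubectl kustomize kustomize/deployment | kubectl delete -f -
Or save the published manifest once and delete from the local copy:
curl -sL https://run.linkerd.io/emojivoto.yml -o emojivoto.yml
kubectl delete -f emojivoto.yml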
I am running a bare-metal Kubernetes cluster of three nodes.
I have applied a couple of YAML files for different services.
Now I would like to bring some order to the cluster and clean up some orphaned kube objects.
To do that, I need to understand the set of pods or other entities which use or refer to a certain ServiceAccount.
For example, I can look at the ClusterRoleBinding of, say, the admin-user and investigate it:
kubectl get clusterrolebinding admin-user
But is there a good combination of kubectl options to find all the usages/references of some ServiceAccount?
You can list all resources using a service account with the following command:
kubectl get rolebinding,clusterrolebinding --all-namespaces -o jsonpath='{range .items[?(@.subjects[0].name=="YOUR_SERVICE_ACCOUNT_NAME")]}[{.roleRef.kind},{.roleRef.name}];{end}' | tr ";" "\n"
You just need to replace YOUR_SERVICE_ACCOUNT_NAME with the one you are investigating.
I tested this command on my cluster and it works.
Let me know if this solution helped you.
Take a look at this project. After installing it via Homebrew or krew, you can use it to find a service account and look at its role, scope, and source. It does not tell you which pods are referring to it, but it is still a useful tool.
rbac-lookup serviceaccountname --output wide --kind serviceaccount
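If you also need the pods that actually mount a given ServiceAccount, a similar jsonpath query over pods covers that; a sketch, with the account name as a placeholder:
# Print namespace/pod for every pod whose spec references the service account
kubectl get pods --all-namespaces -o jsonpath='{range .items[?(@.spec.serviceAccountName=="YOUR_SERVICE_ACCOUNT_NAME")]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'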
I've got 3 completely distinct pods:
kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'
kubernetes-bootcamp-5c69669756-5rh9t
queenly-seahorse-mysql-6dc964999c-h4w54
wordpress-mysql-bcc89f687-hs677
but they seem to share the same env vars. E.g.
kubectl exec "kubernetes-bootcamp-5c69669756-5rh9t" env | grep MYSQL
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP=tcp://10.98.170.14:3306
QUEENLY_SEAHORSE_MYSQL_SERVICE_PORT_MYSQL=3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_ADDR=10.98.170.14
QUEENLY_SEAHORSE_MYSQL_SERVICE_HOST=10.98.170.14
QUEENLY_SEAHORSE_MYSQL_SERVICE_PORT=3306
QUEENLY_SEAHORSE_MYSQL_PORT=tcp://10.98.170.14:3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_PORT=3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_PROTO=tcp
and then on a completely different, unrelated pod (but on the same node):
kubectl exec "queenly-seahorse-mysql-6dc964999c-h4w54" env | grep MYSQL
MYSQL_ROOT_PASSWORD=<redact>
MYSQL_PASSWORD=<redact>
MYSQL_USER=
MYSQL_DATABASE=
QUEENLY_SEAHORSE_MYSQL_PORT=tcp://10.98.170.14:3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP=tcp://10.98.170.14:3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_PORT=3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_ADDR=10.98.170.14
QUEENLY_SEAHORSE_MYSQL_SERVICE_PORT=3306
QUEENLY_SEAHORSE_MYSQL_SERVICE_HOST=10.98.170.14
QUEENLY_SEAHORSE_MYSQL_SERVICE_PORT_MYSQL=3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_PROTO=tcp
MYSQL_MAJOR=5.7
MYSQL_VERSION=5.7.14-1debian8
Any explanation why?
FWIW, I'm clearly exec'ing into 2 different pods. E.g.
kubectl exec "queenly-seahorse-mysql-6dc964999c-h4w54" env | grep HOSTNAME
HOSTNAME=queenly-seahorse-mysql-6dc964999c-h4w54
kubectl exec "kubernetes-bootcamp-5c69669756-5rh9t" env | grep HOSTNAME
HOSTNAME=kubernetes-bootcamp-5c69669756-5rh9t
All the Kubernetes Service environment variables are shared across a namespace. This is by design, so that pods can find a specific service if they need to.
There have been discussions about how to disable them, but I believe no fixes have been added upstream yet.
I deleted my comment and am adding this as an answer. I realized that the "QUEENLY_SEAHORSE_MYSQL_xxxx" env vars have been added by Kubernetes for a service named "queenly-seahorse-mysql" - see https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
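On newer Kubernetes versions (1.13+) you can also opt a workload out of these injected variables with the pod spec field enableServiceLinks; a sketch, assuming the bootcamp pod comes from a deployment named kubernetes-bootcamp:
# Stop injecting service discovery env vars into new pods of this deployment
kubectl patch deployment kubernetes-bootcamp --type=merge -p '{"spec":{"template":{"spec":{"enableServiceLinks":false}}}}'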
I'm trying to provision/deprovision a service instance/binding from my cloud provider (IBM Cloud Private). Currently, there is a bug: if the service is not deprovisioned in ICP, I'm left with an orphaned service instance in my ICP environment which I can't delete even with the force option.
They provide a workaround solution of:
kubectl edit ServiceInstance <service-instance-name>
kubectl edit ServiceBinding <service-binding-name>
then delete the line:
...
finalizers:
- kubernetes-incubator/service-catalog
...
and the orphaned service instance/binding will get deleted properly. I'm wondering how to automate this process with the bash CLI (live edit + delete line + save + exit), or in any alternative way.
I'm not sure how this works with the ServiceInstance and ServiceBinding specifically, but you can use kubectl patch to update objects in place. As an example:
kubectl patch ServiceInstance <service-instance-name> -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl patch is one way. You can also use a jq/kubectl oneliner.
kubectl get ServiceInstance <service-instance-name> -o=json | \
jq '.metadata.finalizers = null' | kubectl apply -f -
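To automate the workaround end to end, a small wrapper like this should do; a sketch with placeholder resource names:
# Clear the service-catalog finalizers on both objects so pending deletes can complete
INSTANCE="my-service-instance"   # replace with your instance name
BINDING="my-service-binding"     # replace with your binding name
kubectl patch servicebinding "${BINDING}" --type=merge -p '{"metadata":{"finalizers":null}}'
kubectl patch serviceinstance "${INSTANCE}" --type=merge -p '{"metadata":{"finalizers":null}}'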
As in the title, I want to clone (create a copy of) an existing cluster.
If it's not possible to copy/clone a Google Container Engine cluster, then how do I clone a Kubernetes cluster?
If that's not possible, is there a way to dump the whole cluster config?
Note:
I try to modify the cluster's configs by calling:
kubectl apply -f some-resource.yaml
But nothing stops me or another employee from modifying the cluster by running:
kubectl edit service/resource
Or from setting properties via command-line kubectl calls.
I'm using a bash script from the CoreOS team, with small adjustments, that works pretty well. By default it excludes the kube-system namespace, but you can adjust this if you need to. You can also add or remove the resources you want to copy.
for ns in $(kubectl get ns --no-headers | cut -d " " -f1); do
  if [ "$ns" != "kube-system" ]; then
    kubectl --namespace="${ns}" get --export -o=json svc,rc,rs,deployments,cm,secrets,ds,statefulsets,ing | \
      jq '.items[] |
        select(.type!="kubernetes.io/service-account-token") |
        del(
          .spec.clusterIP,
          .metadata.uid,
          .metadata.selfLink,
          .metadata.resourceVersion,
          .metadata.creationTimestamp,
          .metadata.generation,
          .status,
          .spec.template.spec.securityContext,
          .spec.template.spec.dnsPolicy,
          .spec.template.spec.terminationGracePeriodSeconds,
          .spec.template.spec.restartPolicy
        )' >> "./my-cluster.json"
  fi
done
To restore it on another cluster, you have to execute kubectl create -f ./my-cluster.json
You can now create/clone an existing cluster.
On the Clusters page, click Create Cluster and choose an existing cluster. But remember, this will not clone the API resources; you may have to use a third-party tool such as Velero to help you back up the resources.
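As a rough sketch of the Velero flow (assuming Velero is already installed in both clusters and pointed at shared object storage):
# Back up the source cluster, then restore into the new one
velero backup create my-cluster-backup
velero restore create --from-backup my-cluster-backup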
Here are some useful links
Cluster Creation
Velero
Medium Article on How to use Velero