Telepresence fails, saying my namespace doesn't exist, pointing to problems with my k8s context

I've been working with a bunch of k8s clusters for a while, using kubectl from the command line to examine information. I don't actually call kubectl directly; I wrap it in multiple scripting layers. I also don't use contexts, as it's much easier for me to specify different clusters in a different way. The resulting kubectl command line has explicit --server, --namespace, and --token parameters (and one other flag to disable TLS verification).
This all works fine. I have no trouble with this.
However, I'm now trying to use telepresence, which doesn't (yet) give me the choice of avoiding contexts for its configuration. So I now have to figure out how to use contexts.
I ran the following (approximate) command:
kubectl config set-context mycontext --server=https://host:port --namespace=abc-def-ghi --insecure-skip-tls-verify=true --token=mytoken
And it said: "Context "mycontext " modified."
I then ran "kubectl config view -o json" and got this:
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [],
  "users": [],
  "contexts": [
    {
      "name": "mycontext",
      "context": {
        "cluster": "",
        "user": "",
        "namespace": "abc-def-ghi"
      }
    }
  ],
  "current-context": "mycontext"
}
That doesn't look right to me.
I then ran something like this:
telepresence --verbose --swap-deployment mydeployment --expose 8080 --run java -jar target/my.jar -Xdebug -Xrunjdwp:transport=dt_socket,address=5000,server=y,suspend=n
And it said this:
T: Error: Namespace 'abc-def-ghi' does not exist
Update:
I can confirm that this isn't a problem with telepresence. If I just run "kubectl get pods", it fails with "The connection to the server localhost:8080 was refused", so kubectl clearly can't connect to the k8s server. The key is my "set-context" command: it's obviously not doing what I expect, and I don't understand what I'm missing.

You don't have any clusters or credentials defined in your configuration; set-context only records references to a cluster and a user, it does not create them. First, you need to define a cluster:
$ kubectl config set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
Then something like this for the user:
$ kubectl config set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-file
Then you define your context based on your cluster, user and namespace:
$ kubectl config set-context dev-frontend --cluster=development --namespace=frontend --user=developer
More information in the Kubernetes documentation on configuring access to multiple clusters: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
Your config should look something like this:
$ kubectl config view -o json
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [
    {
      "name": "development",
      "cluster": {
        "server": "https://1.2.3.4",
        "certificate-authority-data": "DATA+OMITTED"
      }
    }
  ],
  "users": [
    {
      "name": "developer",
      "user": {
        "client-certificate": "fake-cert-file",
        "client-key": "fake-key-file"
      }
    }
  ],
  "contexts": [
    {
      "name": "dev-frontend",
      "context": {
        "cluster": "development",
        "user": "developer",
        "namespace": "frontend"
      }
    }
  ],
  "current-context": "dev-frontend"
}
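Applied to the token-based setup in your question, the same steps would look something like this; the cluster and user names here are just placeholders, not anything telepresence requires:
$ kubectl config set-cluster mycluster --server=https://host:port --insecure-skip-tls-verify=true
$ kubectl config set-credentials myuser --token=mytoken
$ kubectl config set-context mycontext --cluster=mycluster --user=myuser --namespace=abc-def-ghi
$ kubectl config use-context mycontext
After that, a plain "kubectl get pods" should reach the cluster without the explicit --server/--token flags, and telepresence should be able to pick up the context.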

Related

Kubeedge on Katacoda - no matches for kind "Node" in version "v1"

I am following the Kubeedge v1.0.0 deployment on Katacoda, and on executing the following command:
kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/node.json -s <kubedge-node-ip-address>:8080
it gives me an error:
error: unable to recognize "/root/kubeedge/src/github.com/kubeedge/kubeedge/build/node.json": no matches for kind "Node" in version "v1"
I tried searching for this error but found no relevant answers. Does anyone have an idea how to get through this?
Below is the content of my node.json file:
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "edge-node-1",
    "labels": {
      "name": "edge-node",
      "node-role.kubernetes.io/edge": ""
    }
  }
}
I have reproduced it on Katacoda and in my case it works perfectly. I recommend going through the tutorial once again and taking each step carefully.
Pay particular attention to step 7. Change metadata.name to the name of the edge node:
vim $GOPATH/src/github.com/kubeedge/kubeedge/build/node.json
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "edge-node",
    "labels": {
      "name": "edge-node",
      "node-role.kubernetes.io/edge": ""
    }
  }
}
Then execute the following command, substituting your edge node's IP address:
kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/node.json -s <kubedge-node-ip-address>:8080
Another command to check whether the correct API version was used is:
kubectl explain node -s <kubedge-node-ip-address>:8080
After successful creation of the node you should see:
node/edge-node created
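To double-check that the node object was actually registered, listing the nodes against the same API endpoint should now include it (using the same placeholder address as in the commands above):
kubectl get nodes -s <kubedge-node-ip-address>:8080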

How to obtain the list of enabled admission controllers in kubernetes?

AFAIK, the admission controllers are the last pass before submission to the database.
However, I cannot tell which ones are enabled. Is there a way to know which ones are taking effect?
Thanks.
The kube-apiserver is running in your kube-apiserver-<example.com> container.
The API server does not currently expose a get method for the enabled admission plugins, but you can read them from its startup parameters on the command line.
kubectl -n kube-system describe po kube-apiserver-example.com
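If you only want the admission flag rather than the full description, grepping the pod spec works too (same placeholder pod name as above):
kubectl -n kube-system get pod kube-apiserver-example.com -o yaml | grep admission-plugins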
Another way is to look inside the container: unfortunately there is no "ps" command there, but you can read the initial process's command-line parameters from /proc, something like this:
kubectl -n kube-system exec kube-apiserver-example.com -- sed 's/--/\n/g' /proc/1/cmdline
The output will probably contain something like:
enable-admission-plugins=NodeRestriction
There isn't an admission-controller k8s object exposed directly in kubectl.
To get a list of admission webhook configurations, you have to hit the k8s master API directly, using the right version supported by your k8s installation:
kubectl get --raw /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations | jq
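To list just the names rather than the full objects, a jq filter such as this should do:
kubectl get --raw /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations | jq -r '.items[].metadata.name'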
For our environment, we run Open Policy Agent as an admission controller, and we can see its webhook object here:
kubectl get --raw /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations | jq '.items[] | select(.metadata.name=="open-policy-agent-latest-helm-opa")'
Which outputs the JSON object:
{
  "metadata": {
    "name": "open-policy-agent-latest-helm-opa",
    "selfLink": "/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/open-policy-agent-latest-helm-opa",
    "uid": "02139b9e-b282-4ef9-8017-d698bb13882c",
    "resourceVersion": "150373119",
    "generation": 93,
    "creationTimestamp": "2021-03-18T06:22:54Z",
    "labels": {
      "app": "open-policy-agent-latest-helm-opa",
      "app.kubernetes.io/managed-by": "Helm",
      "chart": "opa-1.14.6",
      "heritage": "Helm",
      "release": "open-policy-agent-latest-helm-opa"
    },
    "annotations": {
      "meta.helm.sh/release-name": "open-policy-agent-latest-helm-opa",
      "meta.helm.sh/release-namespace": "open-policy-agent-latest"
    },
    "managedFields": [
      {
        "manager": "Go-http-client",
        "operation": "Update",
        "apiVersion": "admissionregistration.k8s.io/v1beta1",
        "time": "2021-03-18T06:22:54Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:metadata": {
            "f:annotations": {
              ".": {},
              "f:meta.helm.sh/release-name": {},
              "f:meta.helm.sh/release-namespace": {}
            },
            "f:labels": {
              ".": {},
              "f:app": {},
              "f:app.kubernetes.io/managed-by": {},
              "f:chart": {},
              "f:heritage": {},
              "f:release": {}
            }
          },
          "f:webhooks": {
            ".": {},
            "k:{\"name\":\"webhook.openpolicyagent.org\"}": {
              ".": {},
              "f:admissionReviewVersions": {},
              "f:clientConfig": {
                ".": {},
                "f:caBundle": {},
                "f:service": {
                  ".": {},
                  "f:name": {},
                  "f:namespace": {},
                  "f:port": {}
                }
              },
              "f:failurePolicy": {},
              "f:matchPolicy": {},
              "f:name": {},
              "f:namespaceSelector": {
                ".": {},
                "f:matchExpressions": {}
              },
              "f:objectSelector": {},
              "f:rules": {},
              "f:sideEffects": {},
              "f:timeoutSeconds": {}
            }
          }
        }
      }
    ]
  },
  "webhooks": [
    {
      "name": "webhook.openpolicyagent.org",
      "clientConfig": {
        "service": {
          "namespace": "open-policy-agent-latest",
          "name": "open-policy-agent-latest-helm-opa",
          "port": 443
        },
        "caBundle": "LS0BLAH="
      },
      "rules": [
        {
          "operations": [
            "*"
          ],
          "apiGroups": [
            "*"
          ],
          "apiVersions": [
            "*"
          ],
          "resources": [
            "namespaces"
          ],
          "scope": "*"
        }
      ],
      "failurePolicy": "Ignore",
      "matchPolicy": "Exact",
      "namespaceSelector": {
        "matchExpressions": [
          {
            "key": "openpolicyagent.org/webhook",
            "operator": "NotIn",
            "values": [
              "ignore"
            ]
          }
        ]
      },
      "objectSelector": {},
      "sideEffects": "Unknown",
      "timeoutSeconds": 20,
      "admissionReviewVersions": [
        "v1beta1"
      ]
    }
  ]
}
You can see above the clientConfig endpoint in k8s, which is where the admission payload is sent. Tail the logs of the pods that serve that endpoint and you'll see your admission requests being processed.
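For example, with the OPA setup above, tailing those logs might look like this (this assumes the deployment behind the webhook service shares its name; adjust to whatever actually backs the service in your cluster):
kubectl -n open-policy-agent-latest logs -f deploy/open-policy-agent-latest-helm-opa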
To get mutating webhooks, query the API version of interest again:
# get v1 mutating webhook configurations
kubectl get --raw /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations | jq
This is the official explanation:
https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#which-plugins-are-enabled-by-default
Note: you can also get the list from the API server's own help output by exec'ing into the container:
kubectl exec -it kube-apiserver-your-machine-name -n kube-system -- kube-apiserver -h | grep enable-admission-plugins
You may find the list of admission controllers enabled by default in the docs:
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/#options, search for "--enable-admission-plugins";
or equivalently in code:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubeapiserver/options/plugins.go#L131-L145
For customized ones, you may run this command on any master node:
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep -E "(enable|disable)-admission-plugins"
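If you don't have shell access to a master node, the same flags are usually visible through the API server's mirror pod; on a kubeadm-style cluster the component label below is the conventional one, but verify it in your environment:
kubectl -n kube-system get pods -l component=kube-apiserver -o yaml | grep -E "(enable|disable)-admission-plugins"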
ImagePolicyWebhook uses a configuration file to set options for the behavior of the backend.
Create one of these pods by running kubectl create -f examples/<name>.yaml. You can then verify the user id under which the pod ran by inspecting the logs, for example:
$ kubectl create -f examples/pod-with-defaults.yaml
$ kubectl logs pod-with-defaults
Not sure why it was not stated before, but it's even in the kubernetes docs:
kubectl exec -it kube-apiserver-<your-machine-name> -n kube-system -- kube-apiserver -h | grep enable-admission-plugins
It does exactly what you want.

kubernetes - volume mapping via command

I need to map a volume while starting the container. I am able to do so with a yaml file.
Is there a way volume mapping can be done via the command line without using a yaml file, just like the -v option in docker?
"without using yaml file": technically, yes, although you would need a json file instead, as illustrated in "Create kubernetes pod with volume using kubectl run" (see that question below).
See kubectl run.
kubectl run -i --rm --tty ubuntu --overrides='
{
  "apiVersion": "batch/v1",
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "ubuntu",
            "image": "ubuntu:14.04",
            "args": [
              "bash"
            ],
            "stdin": true,
            "stdinOnce": true,
            "tty": true,
            "volumeMounts": [{
              "mountPath": "/home/store",
              "name": "store"
            }]
          }
        ],
        "volumes": [{
          "name": "store",
          "emptyDir": {}
        }]
      }
    }
  }
}
' --image=ubuntu:14.04 --restart=Never -- bash
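Once the shell starts, you can confirm that the emptyDir volume is mounted at the expected path, for example:
mount | grep /home/store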

Call a service from any POD

I would like to know how to call a service from any pod, inside or outside the node.
I have 3 nodes, with deployments and services. I already have kube-proxy running.
I exec bash on another pod:
kubectl exec --namespace=develop myotherdpod-78c6bfd876-6zvh2 -i -t -- /bin/bash
And inside that other pod I have tried to curl the service:
curl -v http://myservice.develop.svc.cluster.local/user
This is my created service:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "myservice",
    "namespace": "develop",
    "selfLink": "/api/v1/namespaces/develop/services/mydeployment-svc",
    "uid": "1b5fb4ae-ecd1-11e7-8599-02cc6a4bf8be",
    "resourceVersion": "10660278",
    "creationTimestamp": "2017-12-29T19:47:30Z",
    "labels": {
      "app": "mydeployment-deployment"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 80,
        "targetPort": 8080
      }
    ],
    "selector": {
      "app": "mydeployment-deployment"
    },
    "clusterIP": "100.99.99.140",
    "type": "ClusterIP",
    "sessionAffinity": "None"
  },
  "status": {
    "loadBalancer": {}
  }
}
It looks to me like something may be wrong with the network overlay you deployed. First of all, I would double-check that the pod can reach kube-dns and obtain the proper IP of the service.
nslookup myservice.develop.svc.cluster.local
nslookup myservice # If they are in the same namespace it should work as well
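If the image in your pod doesn't ship nslookup, you can run a throwaway debug pod instead; the image name below is just one example that includes DNS tools:
kubectl run -i --rm --tty dnsdebug --image=tutum/dnsutils --restart=Never -- nslookup myservice.develop.svc.cluster.local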
If you are able to confirm that, then I would also check whether services like kube-proxy are working correctly. You can do that with:
systemctl status kube-proxy
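If kube-proxy runs as a DaemonSet in your cluster rather than as a systemd service, checking its pod logs is the equivalent (the k8s-app label is the conventional one, but verify it in your setup):
kubectl -n kube-system logs -l k8s-app=kube-proxy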
If that does not work, I would also check the pods of the overlay network by executing:
kubectl get pods --namespace=kube-system
If they are all ok, I would try using a different network overlay: https://kubernetes.io/docs/concepts/cluster-administration/networking/
If that does not work either, I would check whether there are firewall rules preventing communication between the nodes.

Create kubernetes pod with volume using kubectl run

I understand that you can create a pod with Deployment/Job using kubectl run. But is it possible to create one with a volume attached to it? I tried running this command:
kubectl run -i --rm --tty ubuntu --overrides='{ "apiVersion":"batch/v1", "spec": {"containers": {"image": "ubuntu:14.04", "volumeMounts": {"mountPath": "/home/store", "name":"store"}}, "volumes":{"name":"store", "emptyDir":{}}}}' --image=ubuntu:14.04 --restart=Never -- bash
But the volume does not appear in the interactive bash.
Is there a better way to create a pod with a volume that you can attach to?
Your JSON override is specified incorrectly. Unfortunately kubectl run just ignores fields it doesn't understand.
kubectl run -i --rm --tty ubuntu --overrides='
{
  "apiVersion": "batch/v1",
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "ubuntu",
            "image": "ubuntu:14.04",
            "args": [
              "bash"
            ],
            "stdin": true,
            "stdinOnce": true,
            "tty": true,
            "volumeMounts": [{
              "mountPath": "/home/store",
              "name": "store"
            }]
          }
        ],
        "volumes": [{
          "name": "store",
          "emptyDir": {}
        }]
      }
    }
  }
}
' --image=ubuntu:14.04 --restart=Never -- bash
To debug this issue I ran the command you specified, and then in another terminal ran:
kubectl get job ubuntu -o json
From there you can see that the actual job structure differs from your json override (you were missing the nested template/spec, and volumes, volumeMounts, and containers need to be arrays).
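If you only want to compare the volume-related parts, a jsonpath query narrows it down; this is just a convenience on top of the command above:
kubectl get job ubuntu -o jsonpath='{.spec.template.spec.volumes}'
kubectl get job ubuntu -o jsonpath='{.spec.template.spec.containers[0].volumeMounts}'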