How to create a new pod from an existing pod in Kubernetes

I have the following question: say I have a Kubernetes pod A with n=1 replicas, and every x minutes it should create a new pod B of type "Job". I can create a Job pod from kubectl without problems, but how can I instantiate a pod from another pod?
I could try to run kubectl from the parent pod, but I don't think that's the most elegant way to do it.

In the pod you can use any supported Kubernetes client library to call the REST API exposed by the Kubernetes API server and create a pod.
The client library needs to authenticate to the Kubernetes API. It can use a service account for that, and the service account needs the RBAC permissions to create pods (or Jobs) via the API server.
Internally, kubectl also calls this REST API when it is used to create a pod.
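For example, here is a minimal sketch using the Go client library (client-go) with in-cluster configuration; the Job name, namespace and image are placeholders, and it assumes the pod's service account has RBAC permission to create Jobs:
// A minimal sketch using client-go with in-cluster configuration.
// ASSUMPTIONS: the Job name, namespace and image below are placeholders,
// and the pod's service account has RBAC permission to create Jobs.
package main

import (
	"context"
	"log"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config uses the service account token and CA mounted into the pod.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical Job spec for "pod B"; adjust name, namespace and image.
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-b"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{
						{Name: "main", Image: "busybox", Command: []string{"echo", "hello"}},
					},
				},
			},
		},
	}
	if _, err := clientset.BatchV1().Jobs("default").Create(context.TODO(), job, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
Run this from a timer loop (or a CronJob) inside pod A and a new Job is created every x minutes.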

In my opinion you have two options here:
As suggested in the previous answer, use a client library.
Use an ambassador container pattern: ambassador containers proxy a local connection to the outside world; you can read more about this pattern here.
How will this solve your issue?
Instead of talking to the API server directly from your pod (as you would using kubectl) you can run kubectl proxy in an ambassador container alongside the main container and communicate with the API server through it.
Instead of talking to the API server directly, the app in the main container can connect to the ambassador through HTTP (instead of HTTPS) and let the ambassador proxy handle the HTTPS connection to the API server, taking care of security transparently. It does this by using the files from the default token's secret volume (see the script below).
Because all containers in a pod share the same loopback network interface, your app can access the proxy through a port on localhost.
How to build such container?
Dockerfile (uses kubectl v1.8.0):
FROM alpine
RUN apk update && apk add curl && curl -L -O https://dl.k8s.io/v1.8.0/kubernetes-client-linux-amd64.tar.gz && tar zvxf kubernetes-client-linux-amd64.tar.gz kubernetes/client/bin/kubectl && mv kubernetes/client/bin/kubectl / && rm -rf kubernetes && rm -f kubernetes-client-linux-amd64.tar.gz
ADD kubectl-proxy.sh /kubectl-proxy.sh
ENTRYPOINT /kubectl-proxy.sh
Where kubectl-proxy.sh is the following script:
#!/bin/sh
API_SERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"
CA_CRT="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
TOKEN="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
/kubectl proxy --server="$API_SERVER" --certificate-authority="$CA_CRT" --token="$TOKEN" --accept-paths='^.*'
All that's left for you to do is build this image (push it to a registry), add it as a container to your app pod, and talk to it directly through loopback.
By default, kubectl proxy binds to port 8001, and because both containers in the pod share the same network interfaces, including loopback, you can point your requests to localhost:8001
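For instance, a minimal Go sketch of the main container calling the API server through the ambassador proxy over plain HTTP (the path shown just lists pods in the default namespace; the proxy takes care of authentication and TLS):
// A minimal sketch of the main container calling the API server through the
// ambassador's kubectl proxy over plain HTTP on the shared loopback interface.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// The proxy on localhost:8001 handles authentication and TLS to the API server.
	resp, err := http.Get("http://localhost:8001/api/v1/namespaces/default/pods")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}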
Credit for this goes to the Kubernetes in Action book (which is awesome!).

Related

How to get Kubernetes cluster name from K8s API using client-go

How to get Kubernetes cluster name from K8s API mentions that
curl http://metadata/computeMetadata/v1/instance/attributes/cluster-name -H "Metadata-Flavor: Google"
(from within the cluster), or
kubectl run curl --rm --restart=Never -it --image=appropriate/curl -- -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/cluster-name
(from outside the cluster), can be used to retrieve the cluster name. That works.
Is there a way to perform the same programmatically using the k8s client-go library? Maybe using the RESTClient()? I've tried, but kept getting "the server could not find the requested resource".
UPDATE
What I'm trying to do is get the cluster name from an app that runs either on a local computer or within a k8s cluster. The k8s client-go library allows initialising the clientset via in-cluster or out-of-cluster authentication.
That is achievable with the two commands mentioned at the top. I was wondering if there is a way to achieve the same with the client-go library, instead of having to run kubectl or curl depending on where the service runs.
The data that you're looking for (the name of the cluster) is available at the GCP level. The name itself is a resource within GKE, not Kubernetes. This means that this specific information is not available using client-go.
So in order to get this data, you can use the Google Cloud Client Libraries for Go, designed to interact with GCP.
As a starting point, you can consult this document.
First you have to download the container package:
➜ go get google.golang.org/api/container/v1
Before you launch your code, you will have to authenticate to fetch the data.
Google has a very good document on how to achieve that.
Basically you have to generate a ServiceAccount key and pass it in the GOOGLE_APPLICATION_CREDENTIALS environment variable:
➜ export GOOGLE_APPLICATION_CREDENTIALS=sakey.json
Regarding the information that you want, you can fetch the cluster information (including name) following this example.
Once you do this, you can launch your application like this:
➜ go run main.go -project <google_project_name> -zone us-central1-a
And the result would be information about your cluster:
Cluster "tom" (RUNNING) master_version: v1.14.10-gke.17 -> Pool "default-pool" (RUNNING) machineType=n1-standard-2 node_version=v1.14.10-gke.17 autoscaling=false%
Also it is worth mentioning that if you run this command:
curl http://metadata/computeMetadata/v1/instance/attributes/cluster-name -H "Metadata-Flavor: Google"
you are also interacting with the GCP APIs, and the call can go unauthenticated as long as it's run within a GCE machine/GKE cluster; this provides automatic authentication.
You can read more about it in Google's Storing and retrieving instance metadata document.
Finally, one great advantage of doing this with the Cloud Client Libraries is that it can be launched externally (as long as it's authenticated) or internally within pods in a deployment.
Let me know if it helps.
If you're running inside GKE, you can get the cluster name through the instance attributes: https://pkg.go.dev/cloud.google.com/go/compute/metadata#InstanceAttributeValue
More specifically, the following should give you the cluster name:
metadata.InstanceAttributeValue("cluster-name")
The example shared by Thomas lists all the clusters in your project, which may not be very helpful if you just want to query the name of the GKE cluster hosting your pod.
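For completeness, a minimal self-contained Go sketch of the metadata approach; it assumes the code runs on GCE/GKE, which is why metadata.OnGCE() is checked first:
// A minimal sketch of the metadata-server approach; it only works when the
// code runs on GCE/GKE, hence the metadata.OnGCE() check.
package main

import (
	"fmt"
	"log"

	"cloud.google.com/go/compute/metadata"
)

func main() {
	if !metadata.OnGCE() {
		log.Fatal("not running on GCE/GKE, metadata server unavailable")
	}
	clusterName, err := metadata.InstanceAttributeValue("cluster-name")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("cluster name:", clusterName)
}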

Is there an alternative to JMeter GUI and CLI to execute JMeter tests in Kubernetes without "kubectl exec"?

I have JMeter running as a Docker container in a Kubernetes cluster. My company's K8s team disallows calling kubectl exec because they don't want teams to shell into a running container in a pod. What alternatives exist to calling bin/jmeter with params? Is there an HTTP endpoint that can be created which calls the CLI in turn? How would I implement such an API?
A recommended approach when kubectl exec is blocked is to create a proxy web service that makes calls to the JMeter CLI. A complete Dockerized solution with a Helm chart for K8s deployment is posted here: JMeter_webservice_Kubernetes.
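As a rough illustration (not the linked project's actual API), a minimal Go sketch of such a proxy service could shell out to the JMeter CLI in non-GUI mode; the /run endpoint, JMeter path and "plan" parameter here are hypothetical:
// A hedged sketch of a proxy web service: an HTTP handler that shells out to
// the JMeter CLI in non-GUI mode. The /run endpoint, JMeter path and "plan"
// parameter are hypothetical and not taken from the linked project.
package main

import (
	"fmt"
	"net/http"
	"os/exec"
)

func runTest(w http.ResponseWriter, r *http.Request) {
	plan := r.URL.Query().Get("plan") // e.g. /run?plan=smoke-test.jmx
	if plan == "" {
		http.Error(w, "missing plan parameter", http.StatusBadRequest)
		return
	}
	// -n runs JMeter non-GUI, -t names the test plan, -l writes the results log.
	out, err := exec.Command("/opt/jmeter/bin/jmeter", "-n", "-t", plan, "-l", "results.jtl").CombinedOutput()
	if err != nil {
		http.Error(w, fmt.Sprintf("jmeter failed: %v\n%s", err, out), http.StatusInternalServerError)
		return
	}
	w.Write(out)
}

func main() {
	http.HandleFunc("/run", runTest)
	http.ListenAndServe(":8080", nil)
}
Such a service can be deployed alongside JMeter and triggered over HTTP, so no kubectl exec is needed.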

What's the recommended way to locate the apiserver from an openshift pod?

From the Kubernetes docs (Accessing the API from a Pod):
The recommended way to locate the apiserver within the pod is with the kubernetes DNS name, which resolves to a Service IP which in turn will be routed to an apiserver.
However, this 'kubernetes' DNS name did not appear to exist when I was in the shell of an OpenShift pod. I expected it to exist by default due to the Kubernetes running underneath, but am I mistaken? This was using OpenShift Container Platform version 3.7.
Is there a standard way to access the apiserver short of passing it in as an environment variable or something?
In OpenShift, you can use:
https://openshift.default.svc.cluster.local
You could also use the values from the environment variables:
KUBERNETES_SERVICE_PORT
KUBERNETES_SERVICE_HOST
as in:
#!/bin/sh
# API server address from the env vars injected into every pod
SERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"
# Service account token mounted into the pod
TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`
URL="$SERVER/oapi/v1/users/~"
curl -k -H "Authorization: Bearer $TOKEN" $URL
Note that the default service account that containers run as will not have REST API access. The best thing to do is to create a new service account in the project and grant it the rights to use the REST API endpoint for the actions it needs.

How can I access an internal HTTP port of a Kubernetes node in Google Cloud Platform

I have a load-balanced service running in a Kubernetes cluster on the Google Cloud Platform. The individual servers expose some debugging information via a particular URL path. I would like to be able to access those individual servers' URLs; otherwise I just get whichever server the load balancer sends the request to.
What is the easiest way to get access to those internal nodes? Ideally, I'd like to be able to access them via a browser, but if I can only access via a command line (e.g. via ssh or Google Cloud Shell) I'm willing to run curl to get the debugging info.
I think the simplest tool for you would be kubectl proxy, or maybe the even simpler kubectl port-forward. With the first, you can use a single endpoint and the apiserver's ability to proxy to a particular pod by providing the appropriate URL.
kubectl proxy
After running kubectl proxy you should be able to open http://127.0.0.1:8001/ in your local browser and see a bunch of paths available on the API server. From there you can proceed with a URL like http://127.0.0.1:8001/api/v1/namespaces/default/pods/my-pod-name:80/proxy/, which will proxy to port 80 of your particular pod.
kubectl port-forward
This does something similar, but forwards directly to a port on your pod: kubectl port-forward my-pod-name 8081:80. At that point any request to 127.0.0.1:8081 will be forwarded to your pod's port 80.
Port forwarding can be used as described in the answer from Radek, and it has many advantages. The disadvantage is that it is quite slow, so if you have a script doing many calls, there is another option for you.
kubectl run curl-mark-friedman --image=radial/busyboxplus:curl -i --tty --rm
This will create a new pod on your network with a busybox image that includes the curl command. You can now use interactive mode in that pod to execute curl commands against other pods from within the network.
You can find many images on Docker Hub with the tools you like included. If, for example, you need jq, there is an image for that:
kubectl run curl-jq-mark-friedman --image=devorbitus/ubuntu-bash-jq-curl -i --tty --rm
The --rm option is used to remove the pod when you are done with it. If you want the pod to stay alive, just remove that option. You may then attach to that pod again using:
kubectl get pods | grep curl-mark-friedman <- get your <POD ID> from here.
kubectl attach <POD ID> -c curl-mark-friedman -i -t

K8S dashboard not accessible after first cluster in GKE - GCP using console

Newbie setup:
Created first project in GCP.
Created a cluster with defaults, 3 nodes. Node version 1.7.6, cluster master version 1.7.6-gke.1.
Deployed an application in a pod, per the example.
Able to access "hello world" and the hostname using the external IP and the port.
In the GCP/GKE page of my cloud console, I clicked "Discovery and load balancing" and was able to see the "kubernetes-dashboard" process with a green tick, but cannot access it through the IP listed. Tried 8001, 9090, /ui and nothing worked.
Not using any Cloud Shell or gcloud commands on my local laptop; everything is done in the console.
Questions :
How can anyone access the kubernetes-dashboard of the cluster created in console?
The docs are unclear: are the dashboard components incorporated in the console itself? Are the docs out of sync with the GCP/GKE screens?
The tutorial says to run "kubectl proxy" and then open "http://localhost:8001/ui", but it doesn't work. Why?
If you create a cluster with version 1.9.x or greater, then you can access the dashboard using tokens.
Get the secret:
kubectl -n kube-system describe secrets `kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` | awk '/token:/ {print $2}'
Copy the secret.
Run kubectl proxy.
Open the UI at 127.0.0.1:8001/ui. This will redirect to the login page.
There will be two options to log in: kubeconfig and token.
Select token and paste the secret copied earlier.
Hope this helps.
It seems to be an issue with the internal Kubernetes DNS service starting at version 1.7.6 on Google Cloud.
The solution is to access the dashboard at this endpoint instead:
http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Github Issue links:
https://github.com/kubernetes/dashboard/issues/2368
https://github.com/kubernetes/kubernetes/issues/52729
The address of the dashboard service is only accessible from inside of the cluster. If you ssh into a node in your cluster, you should be able to connect to the dashboard. You can verify this by noticing that the address is within the services CIDR range for your cluster.
The dashboard in running as a pod inside of your cluster with an associated service. If you open the Workloads view you will see the kubernetes-dashboard deployment and can see the pod that was created by the deployment. I'm not sure which docs you are referring to, since you didn't provide a link.
When you run kubectl proxy it creates a secure connection from your local machine into your cluster. It works by connecting to your master and then running through a proxy on the master to the pod/service/host that you are connecting to via an ssh tunnel. It's possible that it isn't working because the ssh tunnels are not running; you should verify that your project has newly created ssh rules allowing access from the cluster endpoint IP address. Otherwise, if you could explain more about how it fails, that would be useful for debugging.
First:
gcloud container clusters get-credentials cluster-1 --zone my-zone --project my-project
Then find your Kubernetes dashboard endpoint by running:
kubectl cluster-info
It will be something like https://42.42.42.42/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Install the Kubernetes dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Run:
$ kubectl proxy
Access:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login