Create a deployment from a pod in Kubernetes

For a use case I need to create deployments from a pod when a script is being executed from inside the pod.
I am using google container engine for my cluster.
How can I configure the container inside the pod so that it can run commands like kubectl create -f deployment.yaml?
P.S. I'm a bit clueless about this at the moment.

Your container is going to need to have kubectl available. There are some prebuilt container images with kubectl out there, but I can't personally vouch for any of them; I'd probably build my own and download the latest kubectl. A Dockerfile like this is probably a good starting point:
FROM alpine:latest
RUN apk --no-cache add curl
RUN curl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl
RUN chmod +x /usr/local/bin/kubectl
This will build you a container image with kubectl, so you can then run all the kubectl commands you want.
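Once kubectl is baked into the image, a minimal sketch of the in-pod script might look like the following. This assumes the pod's service account has RBAC permission to create Deployments, and that /config/deployment.yaml is a manifest you ship in the image or mount in (both names are placeholders):
#!/bin/sh
# kubectl running inside a pod picks up the in-cluster API endpoint and the
# mounted service account token automatically, so no kubeconfig is needed
kubectl apply -f /config/deployment.yaml
kubectl rollout status deployment/my-deployment    # placeholder deployment name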

Related

kubectl cp from a completed pod to local computer

I would like to use kubectl cp to copy a file from a completed pod to my local host (local computer). I used kubectl cp <pod>:<file-path> <local-path>, however, it gave me an error: cannot exec into a container in a completed pod; current phase is Succeeded. Is there a way I can copy a file from a completed pod? It does not need to be kubectl cp. Any help appreciated!
Nope. If the pod is gone, it's gone for good. Only possibility would be if the data is stored in a PV or some other external resource. Pods are cattle, not pets.
You can find the files, because the containers of a pod in the Completed state are not deleted; they are just not running.
I am not aware of any way to do it via Kubernetes itself, but here is how to do it if your container runtime is Docker:
$ ssh <node where the pod is>
$ docker ps -a | grep <pod name>
$ docker cp <pod name>:/your/files ./
The files in containers are just overlayfs mounts; if the container still exists, the files still exist.
So if you are using the containerd runtime or something else, look under /var/lib/containerd, /var/lib/containers, or similar (I don't know where every runtime puts its overlayfs mounts, but they have to be somewhere on the node; you can check where via $ mount).
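For example, a quick way to list the overlay mounts on the node (this only helps while the container's overlay mount still exists; exact paths vary per runtime):
$ mount -t overlay
$ findmnt -t overlay -o TARGET,OPTIONS    # the upperdir= option points at the container's writable layer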

How to copy a file from host to Kubernetes container?

I want to copy a file from my Ubuntu machine to the kube-controller-manager-ubuntu container. Currently I do that like this, but I think there is a more straightforward solution in Kubernetes.
Does anyone know how to copy a file to a Kubernetes container?
It is similar to docker cp:
kubectl cp /tmp/foo_dir <some-pod>:/tmp/bar_dir
Please refer here for examples and documentation
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#cp
If you are using a namespace, the command looks like this:
kubectl cp ./file.csv <POD_NAME>:/path/to/copy -n <namespace>
e.g.
kubectl cp ./file.csv b81dd0b1745c:/usr/cloud_ms/ -n cloud
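Copying works in the other direction as well, and for multi-container pods you can pick the container with -c (the pod, namespace, and container names below are placeholders):
kubectl cp cloud/b81dd0b1745c:/usr/cloud_ms/file.csv ./file.csv -c <container-name>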

Pip installing a package inside of a Kubernetes cluster

I have installed Apache Superset from its Helm Chart in a Google Cloud Kubernetes cluster. I need to pip install a package that is not installed when installing the Helm Chart. If I connect to the Kubernetes bash shell like this:
kubectl exec -it superset-4934njn23-nsnjd /bin/bash
Inside there's no python available, no pip, and apt-get doesn't find most of the packages.
I understand that the packages installed in the container are listed in the Dockerfile. I suppose I need to fork the Docker image, modify the Dockerfile, push the image to a container registry, and make a new Helm Chart that will run this container.
But all this seems too complicated for a simple pip install. Is there a simpler way to do this?
Links:
Docker- https://hub.docker.com/r/amancevice/superset/
Helm Chart - https://github.com/helm/charts/tree/master/stable/superset
As @Murli mentioned, you should use pip3. However, one thing you should remember is that Helm is for managing k8s, i.e. what goes into the cluster should be traceable. So I recommend the following:
$ helm inspect values stable/superset > values.yaml
modify the values.yaml. In my case, I added jenkins-job-builder to pip3:
initFile: |-
  pip3 install jenkins-job-builder
  /usr/local/bin/superset-init --username admin --firstname admin --lastname user --email admin@fab.org --password admin
  superset runserver
and just pass the values.yaml to helm install.
$ helm install --values=values.yaml stable/superset
That's it.
$ kubectl exec -it doltish-gopher-superset-696448b777-8b9c6 which jenkins-jobs
/usr/local/bin/jenkins-jobs
$
The Dockerfile seems to be installing the python3 package.
Try python3 or pip3 instead of python/pip.
Building the container yourself is a little more dev work, but it means many fewer alerts from PagerDuty.
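If you go the custom-image route, a minimal sketch could be as simple as this (the base image is the one linked above; the tag and the package being installed are assumptions for illustration):
FROM amancevice/superset:latest
# bake the extra Python package into the image instead of installing it at runtime
RUN pip3 install jenkins-job-builder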

How can I run a Kubernetes pod with the sole purpose of running exec against it?

Please before you comment or answer, this question is about a CLI program, not a service. Apparently 90% of Kubernetes has to do with running services, so there is sparse documentation for CLI programs meant to be part of a pipeline workflow.
I have a command line program that uses stdout for JSON results.
I have a docker image for the command line program.
If I create the container as a Kubernetes Job, then stdout and stderr are mixed and require heuristic scrubbing to get pure JSON out.
The stderr messages are from native libraries outside of my direct control.
Supposedly, if I run kubectl exec against a running pod, I will get the normal stdout/stderr pipes.
Is there a way to just have the pod running without an entrypoint (or some dummy service entrypoint) with the sole purpose of running kubectl exec against it?
Is there a way to just have the pod running without an entrypoint [...]?
A pod consists of one or more containers, each of which has an individual entrypoint. It is certainly possible to run a container with a dummy command, for example, you can build an image with:
CMD sleep inf
This will run a container that will persist until you kill it, and you could happily docker exec into it.
You can apply the same solution to k8s. You could build an image as described above and deploy that in a pod, or you could use an existing image and simply set the command, as in:
spec:
  containers:
  - name: mycontainer
    image: myexistingimage
    command: ["sleep", "inf"]
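With a pod like that running, you can exec your CLI program against it and capture clean JSON on stdout; kubectl exec keeps stdout and stderr as separate streams when no TTY is allocated (the pod name and program path below are placeholders):
kubectl exec mypod -- /path/to/your/cli-program > results.json    # stderr still goes to your terminal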
You can use kubectl like the docker CLI: https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/
kubectl run just does the job; there is no need for a workaround.
Additionally, you can attach I/O and disable automatic restart:
kubectl run -i -t busybox --image=busybox --restart=Never
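One caveat for the JSON-on-stdout use case: allocating a TTY with -t merges stdout and stderr into a single stream, so drop -t and keep only -i if you want them separate, e.g. (image and command are placeholders):
kubectl run -i --rm --restart=Never myjob --image=myimage -- mycli > results.json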

aspnetcore:2.0 based image won't run on AKS nodes?

I have an asp.net core 2.0 application whose docker image runs fine locally, but when that same image is deployed to an AKS cluster, the pods have a status of CrashLoopBackOff and the pod log shows:
Did you mean to run dotnet SDK commands? Please install dotnet SDK from:
http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409.
And since you can't SSH into AKS clusters, it's pretty difficult to figure this out.
Dockerfile:
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY . .
EXPOSE 80
ENTRYPOINT ["dotnet", "myapi.dll"]
It turned out that our build system wasn't putting the app code into the container as we thought. Since the container wasn't runnable, I didn't know how to inspect its contents until I found this command, which is a lifesaver for these kinds of situations:
docker run --rm -it --entrypoint=/bin/bash [image_id]
... at which point you can freely inspect/verify the contents of the container.
I just ran into the same issue and it's because I was missing a key piece to the puzzle.
docker-compose -f docker-compose.ci.build.yml run ci-build
VS2017 Docker Tools will create that docker-compose.ci.build.yml file. After that command is run, the publish folder is populated, and docker build -t <tag> will build a populated image (rather than one with an empty /app folder).
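For reference, the generated docker-compose.ci.build.yml is roughly of this shape; the solution name, image tag, and output path below are placeholders rather than the exact file VS2017 produces:
version: '3'
services:
  ci-build:
    image: microsoft/aspnetcore-build:2.0
    volumes:
      - .:/src
    working_dir: /src
    # restore and publish into the folder that the runtime Dockerfile copies from
    command: /bin/bash -c "dotnet restore ./myapi.sln && dotnet publish ./myapi.sln -c Release -o ./obj/Docker/publish"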