how to make curl available in my container in k8s pod? - kubernetes

I'm using busybox image in my pod. I'm trying to curl another pod, but "curl is not found". How to fix it?
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: front
  name: front
spec:
  containers:
  - image: busybox
    name: front
    command:
    - /bin/sh
    - -c
    - sleep 1d
this cmd:
k exec -it front -- sh
curl service-anotherpod:80 -> 'curl not found'

In addition to #gohm'c's answer, you could also try using Alpine Linux: either build your own image that has curl installed, or run apk add curl in the pod to install it.
Example pod with alpine:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: front
  name: front
spec:
  containers:
  - image: alpine
    name: front
    command:
    - /bin/sh
    - -c
    - sleep 1d
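With the alpine pod running, curl can also be installed ad hoc with apk (Alpine's package manager); a quick sketch, assuming the pod is named front as above:

```shell
# Install curl inside the running alpine container, then confirm it works
kubectl exec -it front -- sh -c 'apk add --no-cache curl && curl --version'
```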

busybox is a single-binary program, so you can't install additional programs into it. You can either use wget (which busybox ships built in), or use a different busybox variant like progrium/busybox, which comes with a package manager that lets you run opkg-install curl.
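If you stay on plain busybox, its built-in wget can stand in for curl for a quick connectivity test; a minimal sketch using the service name from the question:

```shell
# -q: quiet, -O -: write the response body to stdout (similar to plain curl)
kubectl exec -it front -- wget -qO- http://service-anotherpod:80
```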

You could make your own image and deploy it to a pod. Here is an example Dockerfile
FROM alpine:latest
RUN apk update && \
    apk upgrade && \
    apk add --no-cache \
        bind-tools \
        curl \
        iproute2 \
        wget \
    && \
    :
ENTRYPOINT [ "/bin/sh", "-c", "--", "while true; do sleep 30; done;" ]
Which you can then build like this:
docker image build -t networkutils:latest .
Run like this:
docker container run --rm -d --name networkutils networkutils
And access its shell to run curl, wget, or whichever commands you have installed like this:
docker container exec -it networkutils sh
To run and access it in k3s you can make a deployment file like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: networkutils
  namespace: default
  labels:
    app: networkutils
spec:
  replicas: 1
  selector:
    matchLabels:
      app: networkutils
  template:
    metadata:
      labels:
        app: networkutils
    spec:
      containers:
      - name: networkutils-container
        image: networkutils:latest
        imagePullPolicy: Never
Start the pod
kubectl apply -f deployment.yml
And then access the shell (target the deployment, since the pod name is generated):
kubectl exec -it deploy/networkutils -- /bin/sh
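Once inside, you can exercise the bundled tools directly; for example (my-service is a hypothetical service name, not one from the question):

```shell
# DNS lookup with bind-tools, then an HTTP probe that prints only the status code
nslookup my-service.default.svc.cluster.local
curl -sS -o /dev/null -w '%{http_code}\n' http://my-service
```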

Related

Google Kubernetes Engine deploy error with github actions

I'm trying to deploy my code to GKE using github actions but getting an error during the deploy step:
Here is my deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-3
  namespace: default
  labels:
    type: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      - type: nginx
  template:
    metadata:
      labels:
        - type: nginx
    spec:
      containers:
      - image: nginx:1.14
        name: renderer
        ports:
        - containerPort: 80
Service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-3-service
spec:
  ports:
    port: 80
    protocol: TCP
    targetPort: 80
And my dockerfile:
FROM ubuntu/redis:5.0-20.04_beta

# Install.
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata
RUN \
    sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list && \
    apt-get update && \
    apt-get -y upgrade && \
    apt-get install -y build-essential && \
    apt-get install -y software-properties-common && \
    apt-get install -y byobu curl git htop man unzip vim wget && \
    rm -rf /var/lib/apt/lists/*

# Set environment variables.
ENV HOME /root

# Define working directory.
WORKDIR /root

# Define default command.
CMD ["bash"]
This is what the cloud deployments (Workloads) look like:
I'm trying to push C++ code using an ubuntu image. I just want to simply push my code to Google Kubernetes Engine.
Update:
I've deleted the deployment, re-run the action, and got this:
It said that the deployment was successfully created but gave another error:
deployment.apps/nginx-3 created
Error from server (NotFound): deployments.apps "gke-deployment" not found
Try:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
  labels:
    type: nginx        # <-- correct
spec:
  ...
  selector:
    matchLabels:
      type: nginx      # incorrect, remove the '-'
  template:
    metadata:
      labels:
        type: nginx    # incorrect, remove the '-'
    spec:
      ...
---
apiVersion: v1
kind: Service
...
spec:
  ...
  ports:
  - port: 80           # <-- add '-'
    protocol: TCP
    targetPort: 80
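Mistakes like these can be caught before pushing by validating the manifests client-side; a quick sketch, assuming the file names from the question:

```shell
# --dry-run=client validates the manifest locally without sending it to the cluster
kubectl apply --dry-run=client -f deployment.yaml
kubectl apply --dry-run=client -f Service.yaml
```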

Kubernetes: Run container as non-root if there is no user specified

How can I make every container run as non-root in Kubernetes?
Containers that do not specify a user, as in this example, and also do not specify a SecurityContext in the corresponding deployment, should still be able to be executed in the cluster - but without running as root. What options do you have here?
FROM debian:jessie
RUN apt-get update && apt-get install -y \
    git \
    python \
    vim
CMD ["echo", "hello world"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: mynamespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: hello-world
        name: hello-world
You can add a Pod Security Policy to your cluster; there is an option (below) you can use to prevent any deployment from running without specifying a non-root user:
spec:
  runAsUser:
    rule: MustRunAsNonRoot
for more info about Pod Security Policy please go to this link:
https://kubernetes.io/docs/concepts/security/pod-security-policy/
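Note that PodSecurityPolicy was deprecated in Kubernetes 1.21 and removed in 1.25; on newer clusters you can enforce the same requirement per pod with a securityContext. A minimal sketch (the UID 1000 is an arbitrary non-zero example, not from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  securityContext:
    runAsNonRoot: true   # kubelet refuses to start the container as root
    runAsUser: 1000      # any non-zero UID works for images without a USER
  containers:
  - name: hello-world
    image: hello-world
```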

The data is not being shared across containers

I am trying to create two containers within a pod, one of which is an init container. The job of the init container is to download a jar and make it available to the app container. I am able to create everything and the logs look good, but when I check, I do not see the jar in my app container. Below is my deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service-test
  labels:
    app: web-service-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-service-test
  template:
    metadata:
      labels:
        app: web-service-test
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}
      containers:
      - name: web-service-test
        image: some image
        ports:
        - containerPort: 8081
        volumeMounts:
        - name: shared-data
          mountPath: /tmp/jar
      initContainers:
      - name: init-container
        image: busybox
        volumeMounts:
        - name: shared-data
          mountPath: /jdbc-jar
        command:
        - wget
        - "https://repo1.maven.org/maven2/com/oracle/ojdbc/ojdbc8/19.3.0.0/ojdbc8-19.3.0.0.jar"
You need to save the jar in the /jdbc-jar folder (the init container's mount path for the shared volume).
Try updating your yaml to the following:
command: ["/bin/sh"]
args: ["-c", "wget -O /jdbc-jar/ojdbc8-19.3.0.0.jar https://repo1.maven.org/maven2/com/oracle/ojdbc/ojdbc8/19.3.0.0/ojdbc8-19.3.0.0.jar"]
Add following block of code to your init container section:
command: ["/bin/sh","-c"]
args: ["wget -O /jdbc-jar/ojdbc8-19.3.0.0.jar https://repo1.maven.org/maven2/com/oracle/ojdbc/ojdbc8/19.3.0.0/ojdbc8-19.3.0.0.jar"]
The command ["/bin/sh", "-c"] says "run a shell, and execute the following instructions". The args are then passed as commands to the shell. In shell scripting a semicolon separates commands. In the wget command I have added the -O flag to download the jar from the specified URL and save it as /jdbc-jar/ojdbc8-19.3.0.0.jar.
To check if the jar is present in the container, simply execute:
$ kubectl exec -it deploy/web-service-test -- /bin/bash
Then go to the folder where the shared volume is mounted in the app container, /tmp/jar ($ cd /tmp/jar), and list the files ($ ls -al). You should see your jar there.
See examples: commands-in-containers, initcontainers-running.

Creating a Docker container that runs forever using bash

I'm trying to create a Pod with a container in it for testing purposes that runs forever using the K8s API. I have the following yaml spec for the Pod which would run a container and exit straight away:
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: ubuntu
    image: ubuntu:trusty
    command: ["echo"]
    args: ["Hello World"]
I can't find any documentation around the command: tag but ideally I'd like to put a while loop in there somewhere printing out numbers forever.
If you want to keep printing Hello every few seconds you can use:
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    app: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    ports:
    - containerPort: 80
    command: ["/bin/sh", "-c", "while :; do echo 'Hello'; sleep 5 ; done"]
You can see the output using kubectl logs <pod-name>
Another option to keep a container running without printing anything is to use the sleep command on its own, for example:
command: ["/bin/sh", "-ec", "sleep 10000"]
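To see what the while loop prints without a cluster, a bounded version of the same loop can be run in any local shell (capped at 3 iterations and with the sleep dropped, so it terminates immediately):

```shell
# Same shape as the container's loop, but finite for local testing
i=0
while [ "$i" -lt 3 ]; do
  echo 'Hello'
  i=$((i+1))
done
# prints "Hello" three times
```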

How to create a multi-container pod without a yaml config of pod or deployment

Trying to figure out how to create a multi-container pod from the terminal with kubectl, without a yaml config for any resource.
I tried kubectl run --image=redis --image=nginx but the second --image just overrides the first one .. :)
You can't do this in a single kubectl command, but you could do it in two: a kubectl run command followed by a kubectl patch command. (Note: since Kubernetes 1.18, kubectl run creates a Pod rather than a Deployment, so on newer clusters create the deployment with kubectl create deployment mypod --image redis first.)
kubectl run mypod --image redis && kubectl patch deploy mypod --patch '{"spec": {"template": {"spec": {"containers": [{"name": "patch-demo", "image": "nginx"}]}}}}'
kubectl run is for running 1 or more instances of a container image on your cluster
see https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
Go with yaml config file
follow the below steps
create patch-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: patch-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx
---
deploy patch-demo.yaml
create patch-containers.yaml as below
---
spec:
  template:
    spec:
      containers:
      - name: redis
        image: redis
---
patch the above yaml to include redis container
kubectl patch deployment patch-demo --patch "$(cat patch-containers.yaml)"
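After patching, you can confirm the deployment now carries both containers by listing the container names in the pod template; a sketch assuming a reachable cluster:

```shell
# Print the container names from the patched deployment's pod template
kubectl get deployment patch-demo \
  -o jsonpath='{.spec.template.spec.containers[*].name}'
```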