Google Kubernetes Engine deploy error with github actions - kubernetes

I'm trying to deploy my code to GKE using GitHub Actions, but I get an error during the deploy step.
Here is my deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-3
  namespace: default
  labels:
    type: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      - type: nginx
  template:
    metadata:
      labels:
        - type: nginx
    spec:
      containers:
        - image: nginx:1.14
          name: renderer
          ports:
            - containerPort: 80
Service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-3-service
spec:
  ports:
    port: 80
    protocol: TCP
    targetPort: 80
And my dockerfile:
FROM ubuntu/redis:5.0-20.04_beta
# Install.
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata
RUN \
sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list && \
apt-get update && \
apt-get -y upgrade && \
apt-get install -y build-essential && \
apt-get install -y software-properties-common && \
apt-get install -y byobu curl git htop man unzip vim wget && \
rm -rf /var/lib/apt/lists/*
# Set environment variables.
ENV HOME /root
# Define working directory.
WORKDIR /root
# Define default command.
CMD ["bash"]
This is what the cloud deployments (Workloads) page looks like:
I'm trying to deploy C++ code using an Ubuntu image; I just want to push my code to Google Kubernetes Engine.
Update:
I've deleted the deployment and re-ran the action, and got this:
It says the deployment was created successfully but then gives another error:
deployment.apps/nginx-3 created
Error from server (NotFound): deployments.apps "gke-deployment" not found

Try:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
  labels:
    type: nginx          # <-- correct as-is
spec:
  ...
  selector:
    matchLabels:
      type: nginx        # <-- remove the '-' you had here
  template:
    metadata:
      labels:
        type: nginx      # <-- remove the '-' you had here
    spec:
      ...
---
apiVersion: v1
kind: Service
...
spec:
  ...
  ports:
    - port: 80           # <-- add the '-'
      protocol: TCP
      targetPort: 80
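As for the follow-up error: deployment.apps/nginx-3 created followed by Error from server (NotFound): deployments.apps "gke-deployment" not found suggests that the workflow's rollout/verification step references a deployment named gke-deployment (the default name in the common GKE deploy workflow template) while your manifest creates nginx-3. This is a hedged reading, not something confirmed in the thread; either rename the Deployment or update the workflow's deployment name. A quick local check, assuming kubectl is pointed at the cluster:
# Apply the corrected manifests and confirm the deployment name the workflow should reference.
kubectl apply -f deployment.yaml -f Service.yaml
kubectl get deployment nginx-3 -n default
kubectl rollout status deployment/nginx-3 -n default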

Related

Kubernetes: Run container as non-root if there is no user specified

How can I make every container run as non-root in Kubernetes?
Containers that do not specify a user, as in this example, and that also do not specify a SecurityContext in the corresponding Deployment, should still be able to run in the cluster, but without running as root. What options do you have here?
FROM debian:jessie
RUN apt-get update && apt-get install -y \
git \
python \
vim
CMD ["echo", "hello world"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: mynamespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - image: hello-world
          name: hello-world
You can add a Pod Security Policy to your cluster; there is an option (below) that prevents any workload from running unless it specifies a non-root user:
spec:
  runAsUser:
    rule: MustRunAsNonRoot
For more info about Pod Security Policy, see:
https://kubernetes.io/docs/concepts/security/pod-security-policy/
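A per-workload alternative worth noting (a sketch, not part of the original answer; also note that PodSecurityPolicy is deprecated in newer Kubernetes releases): set runAsNonRoot and runAsUser in the pod-level securityContext, which covers images like the one above that never declare a USER.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: mynamespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      securityContext:
        runAsNonRoot: true   # kubelet refuses to start a container that would run as UID 0
        runAsUser: 10001     # arbitrary example UID, an assumption
      containers:
        - image: hello-world
          name: hello-world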

how to make curl available in my container in k8s pod?

I'm using the busybox image in my pod. I'm trying to curl another pod, but I get "curl: not found". How do I fix it?
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: front
  name: front
spec:
  containers:
    - image: busybox
      name: front
      command:
        - /bin/sh
        - -c
        - sleep 1d
Then running these commands:
k exec -it front -- sh
curl service-anotherpod:80 -> 'curl not found'
In addition to @gohm'c's answer, you could also try using Alpine Linux and either build your own image with curl installed, or run apk add curl in the pod to install it.
Example pod with alpine:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: front
  name: front
spec:
  containers:
    - image: alpine
      name: front
      command:
        - /bin/sh
        - -c
        - sleep 1d
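For instance, you can install and use curl in one step (a sketch assuming the pod above is running, can reach an Alpine package mirror, and that the target Service listens on port 80):
kubectl exec front -- sh -c 'apk add --no-cache curl && curl -sS http://service-anotherpod:80'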
busybox is a single-binary image, so you can't install additional programs into it. You can either use wget (which busybox includes) or use a different busybox variant like progrium/busybox, which comes with a package manager that allows you to do opkg-install curl.
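For example, with the stock busybox image (a sketch assuming the target Service listens on port 80):
kubectl exec front -- wget -qO- http://service-anotherpod:80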
You could make your own image and deploy it to a pod. Here is an example Dockerfile
FROM alpine:latest
RUN apk update && \
    apk upgrade && \
    apk add --no-cache \
        bind-tools \
        curl \
        iproute2 \
        wget \
        && \
    :
ENTRYPOINT [ "/bin/sh", "-c", "--" , "while true; do sleep 30; done;" ]
Which you can then build like this:
docker image build -t networkutils:latest .
Run it like this:
docker container run --rm -d --name networkutils networkutils
And access its shell to run curl, wget, or whichever commands you have installed:
docker container exec -it networkutils sh
To run and access it in k3s, you can create a deployment file like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: networkutils
  namespace: default
  labels:
    app: networkutils
spec:
  replicas: 1
  selector:
    matchLabels:
      app: networkutils
  template:
    metadata:
      labels:
        app: networkutils
    spec:
      containers:
        - name: networkutils-container
          image: networkutils:latest
          imagePullPolicy: Never
Start the deployment:
kubectl apply -f deployment.yml
And then access the shell (the pod gets a generated name, so target the deployment):
kubectl exec -it deploy/networkutils -- /bin/sh
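And, coming back to the original question, you can run curl from it directly (a sketch assuming a Service named service-anotherpod listening on port 80):
kubectl exec deploy/networkutils -- curl -sS http://service-anotherpod:80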

Unable to login to Postgres inside Kubernetes cluster from the outside

I want to simply log in to a Postgres DB from outside my K8s cluster. I've created the following config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: postgres
          image: postgres
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PORT
              value: '5432'
            - name: POSTGRES_DB
              value: postgres
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_HOST_AUTH_METHOD
              value: trust
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-srv
spec:
  selector:
    app: postgres
  type: NodePort
  ports:
    - name: postgres
      protocol: TCP
      port: 5432
      targetPort: 5432
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  5432: "default/postgres-srv:5432"
I've checked kubectl get services and attempted to use the endpoint and the cluster IP. Neither of these worked.
psql "postgresql://postgres:password@[ip]:5432/postgres"
The pod is running and the logs say everything is ready. Is there anything I'm missing here? I'm running the cluster in DigitalOcean.
Edit:
I want to be able to access the DB from my host (sub.domain.com). I've bounced the deployments and still can't get in. The only config I've targeted is what is shown above. I do have an A record for my domain and can access my other exposed pods via my ingress-nginx service.
You can expose TCP and UDP services with ingress-nginx configuration.
For example using GKE with ingress-nginx, nfs-server-provisioner and the bitnami/postgresql helm charts:
kubectl create secret generic -n default postgresql \
--from-literal=postgresql-password=$(openssl rand -base64 32) \
--from-literal=postgresql-replication-password=$(openssl rand -base64 32)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install -n default postgres bitnami/postgresql \
--set global.storageClass=nfs-client \
--set existingSecret=postgresql
Patch the ingress-nginx tcp-services ConfigMap:
kubectl patch cm -n ingress-nginx tcp-services -p '{"data": {"5432": "default/postgres-postgresql:5432"}}'
Update the controller's Service to add the proxied port (i.e. kubectl edit svc -n ingress-nginx ingress-nginx):
- name: postgres
  port: 5432
  protocol: TCP
  targetPort: 5432
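If you prefer not to edit the Service interactively, a one-shot JSON patch adds the same port (a sketch assuming the Service is named ingress-nginx in the ingress-nginx namespace, as above):
kubectl patch svc -n ingress-nginx ingress-nginx --type=json \
  -p='[{"op": "add", "path": "/spec/ports/-", "value": {"name": "postgres", "port": 5432, "protocol": "TCP", "targetPort": 5432}}]'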
Note: you may have to update the existing ingress-nginx controller deployment's args (i.e. kubectl edit deployments.apps -n ingress-nginx nginx-ingress-controller) to include --tcp-services-configmap=ingress-nginx/tcp-services, and bounce the ingress-nginx controller if you edit the deployment spec (i.e. kubectl scale deployment -n ingress-nginx nginx-ingress-controller --replicas=0 && kubectl scale deployment -n ingress-nginx nginx-ingress-controller --replicas=3).
Test the connection:
export PGPASSWORD=$(kubectl get secrets -n default postgresql -o jsonpath={.data.postgresql-password} |base64 -d)
docker run --rm -it \
-e PGPASSWORD=${PGPASSWORD} \
--entrypoint psql \
--network host \
postgres:13-alpine -U postgres -d postgres -h example.com
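If psql is installed locally, an equivalent test (same example.com assumption, reusing the exported PGPASSWORD) would be:
psql -h example.com -p 5432 -U postgres -d postgres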
Note: I manually created an A record in Google Cloud DNS to resolve the hostname to the clusters external IP.
Update: in addition to creating the ingress-nginx config and installing the bitnami/postgresql chart as above, it was necessary to disable "Proxy Protocol" on the Load Balancer to get connections working for a deployment in DigitalOcean (otherwise Postgres logs LOG: invalid length of startup packet).
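When the load balancer is managed by the ingress-nginx LoadBalancer Service, the proxy-protocol behaviour is usually driven by an annotation on that Service. The snippet below is an assumption based on the DigitalOcean cloud controller's documented annotations, not part of the original answer, so verify the annotation name against the current DigitalOcean docs:
# Hypothetical sketch: keep PROXY protocol off so raw TCP backends such as
# Postgres do not receive the PROXY header.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "false"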

ValidationError: missing required field "selector" in io.k8s.api.v1.DeploymentSpec

I've created a Hyper-V machine and tried to deploy Sawtooth on Minikube using the Sawtooth YAML file:
https://sawtooth.hyperledger.org/docs/core/nightly/master/app_developers_guide/sawtooth-kubernetes-default.yaml
I changed the apiVersion, i.e. apiVersion: extensions/v1beta1 to apiVersion: apps/v1, and launched Minikube on Kubernetes v1.17.0 using this command:
minikube start --kubernetes-version v1.17.0
After that I can't deploy the server. The command is:
kubectl apply -f sawtooth-kubernetes-default.yaml --validate=false
It shows an error saying "sawtooth-0" is invalid.
---
apiVersion: v1
kind: List
items:

  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sawtooth-0
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: sawtooth-0
      template:
        metadata:
          labels:
            name: sawtooth-0
        spec:
          containers:
            - name: sawtooth-devmode-engine
              image: hyperledger/sawtooth-devmode-engine-rust:chime
              command:
                - bash
              args:
                - -c
                - "devmode-engine-rust -C tcp://$HOSTNAME:5050"

            - name: sawtooth-settings-tp
              image: hyperledger/sawtooth-settings-tp:chime
              command:
                - bash
              args:
                - -c
                - "settings-tp -vv -C tcp://$HOSTNAME:4004"

            - name: sawtooth-intkey-tp-python
              image: hyperledger/sawtooth-intkey-tp-python:chime
              command:
                - bash
              args:
                - -c
                - "intkey-tp-python -vv -C tcp://$HOSTNAME:4004"

            - name: sawtooth-xo-tp-python
              image: hyperledger/sawtooth-xo-tp-python:chime
              command:
                - bash
              args:
                - -c
                - "xo-tp-python -vv -C tcp://$HOSTNAME:4004"

            - name: sawtooth-validator
              image: hyperledger/sawtooth-validator:chime
              ports:
                - name: tp
                  containerPort: 4004
                - name: consensus
                  containerPort: 5050
                - name: validators
                  containerPort: 8800
              command:
                - bash
              args:
                - -c
                - "sawadm keygen \
                   && sawtooth keygen my_key \
                   && sawset genesis -k /root/.sawtooth/keys/my_key.priv \
                   && sawset proposal create \
                        -k /root/.sawtooth/keys/my_key.priv \
                        sawtooth.consensus.algorithm.name=Devmode \
                        sawtooth.consensus.algorithm.version=0.1 \
                        -o config.batch \
                   && sawadm genesis config-genesis.batch config.batch \
                   && sawtooth-validator -vv \
                        --endpoint tcp://$SAWTOOTH_0_SERVICE_HOST:8800 \
                        --bind component:tcp://eth0:4004 \
                        --bind consensus:tcp://eth0:5050 \
                        --bind network:tcp://eth0:8800"

            - name: sawtooth-rest-api
              image: hyperledger/sawtooth-rest-api:chime
              ports:
                - name: api
                  containerPort: 8008
              command:
                - bash
              args:
                - -c
                - "sawtooth-rest-api -C tcp://$HOSTNAME:4004"

            - name: sawtooth-shell
              image: hyperledger/sawtooth-shell:chime
              command:
                - bash
              args:
                - -c
                - "sawtooth keygen && tail -f /dev/null"

  - apiVersion: apps/v1
    kind: Service
    metadata:
      name: sawtooth-0
    spec:
      type: ClusterIP
      selector:
        name: sawtooth-0
      ports:
        - name: "4004"
          protocol: TCP
          port: 4004
          targetPort: 4004
        - name: "5050"
          protocol: TCP
          port: 5050
          targetPort: 5050
        - name: "8008"
          protocol: TCP
          port: 8008
          targetPort: 8008
        - name: "8800"
          protocol: TCP
          port: 8800
          targetPort: 8800
You need to fix your deployment YAML file. As you can see from your error message, the Deployment.spec.selector field can't be empty.
Update the YAML (i.e. add spec.selector) as shown below:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: sawtooth-0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sawtooth-0
Why is the selector field important?
The selector field defines how the Deployment finds which Pods to manage. In this case, you simply select a label that is defined in the Pod template (app.kubernetes.io/name: sawtooth-0). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule.
Reference
Update:
The apiVersion for a k8s Service is v1:
- apiVersion: v1 # Update here
  kind: Service
  metadata:
    name: sawtooth-0
  spec:
    type: ClusterIP
    selector:
      app.kubernetes.io/name: sawtooth-0
  ... ... ...
For apiVersion v1 (and also apps/v1) you can use a plain label such as app: <your label>:
apiVersion: v1
kind: Service
metadata:
  name: sawtooth-0
spec:
  selector:
    app: sawtooth-0
See : https://kubernetes.io/docs/concepts/services-networking/service/
The answer to this is already covered by @Kamol.
Some general possible reasons if you are still getting the error missing required field "XXX" in YYY:
Check the apiVersion at the top of the file (for a Deployment it is apps/v1, and for a Service it is v1).
Check the spelling of "XXX" (the unknown field) and check whether the syntax is incorrect.
Check kind: ... once again.
If you find some other reason, please comment and let others know :)
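A hedged suggestion beyond the answers above: most of these schema errors can be caught locally with a client-side dry run before applying (assuming kubectl v1.18+, where --dry-run=client replaced the bare --dry-run flag):
kubectl apply --dry-run=client -f sawtooth-kubernetes-default.yaml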

OpenShift: Accessing mounted file-system as non-root

I am trying to run Chart Museum as a non-root user in OpenShift. Here is a snippet of my YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chart-museum
  namespace: demo
spec:
  selector:
    matchLabels:
      app: chart-museum
  replicas: 1
  template:
    metadata:
      labels:
        app: chart-museum
    spec:
      volumes:
        - name: pvc-charts
          persistentVolumeClaim:
            claimName: pvc-charts
      containers:
        - name: chart-museum
          securityContext:
            fsGroup: 1000
          image: chartmuseum/chartmuseum:latest
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: chart-museum
          volumeMounts:
            - name: pvc-charts
              mountPath: "/charts"
As you can see, I have set spec.containers.securityContext.fsGroup to 1000, which is the same as the user ID in the Chart Museum Dockerfile shown below.
FROM alpine:3.10.3
RUN apk add --no-cache cifs-utils ca-certificates \
&& adduser -D -u 1000 chartmuseum
COPY bin/linux/amd64/chartmuseum /chartmuseum
USER 1000
ENTRYPOINT ["/chartmuseum"]
And yet, when I try to upload a chart, I get a permission-denied message for /charts. How do I get around this issue?
It's related to Kubernetes and how the given Persistent Volume is defined. You can check all the discussion and possible workarounds in the related GH Issue.
Add the chmod/chown lines to your Dockerfile:
FROM alpine:3.10.3
RUN apk add --no-cache cifs-utils ca-certificates \
&& adduser -D -u 1000 chartmuseum
COPY bin/linux/amd64/chartmuseum /chartmuseum
RUN chmod +xr /chartmuseum
RUN chown 1000:1000 /chartmuseum
USER 1000
ENTRYPOINT ["/chartmuseum"]
Modify your Deployment's securityContext so that the user and group are enforced. Note that fsGroup is a pod-level field, so it belongs under spec.template.spec.securityContext rather than on the container:
spec:
  securityContext:
    fsGroup: 1000
  containers:
    - name: chart-museum
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
This is how I solved the issue.
Download the binary: curl -LO https://s3.amazonaws.com/chartmuseum/release/latest/bin/linux/amd64/chartmuseum
Change permissions: chmod +xr chartmuseum
Create a new Dockerfile as shown below. Basically, use the user name instead of the ID for the chown commands so that the binary and the storage location are owned by the chartmuseum user and not root.
FROM alpine:3.10.3
RUN apk add --no-cache cifs-utils ca-certificates \
&& adduser -D -u 1000 chartmuseum
COPY chartmuseum /chartmuseum
RUN chown chartmuseum:chartmuseum /chartmuseum
RUN mkdir -p /charts && chown chartmuseum:chartmuseum /charts
USER chartmuseum
ENTRYPOINT ["/chartmuseum"]
Build and push the resulting Docker image to e.g. somerepo/chartmuseum:0.0.0.
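For example (a sketch using the hypothetical somerepo/chartmuseum:0.0.0 tag from the step above):
docker build -t somerepo/chartmuseum:0.0.0 .
docker push somerepo/chartmuseum:0.0.0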
Use the k8s manifest shown below. Edit the namespace as required. Note that creation of the PersistentVolumeClaim is not covered here; a minimal sketch is included after the manifest.
kind: ConfigMap
apiVersion: v1
metadata:
  name: chart-museum
  namespace: demo
data:
  DEBUG: 'true'
  STORAGE: local
  STORAGE_LOCAL_ROOTDIR: "/charts"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chart-museum
  namespace: demo
spec:
  selector:
    matchLabels:
      app: chart-museum
  replicas: 1
  template:
    metadata:
      labels:
        app: chart-museum
    spec:
      volumes:
        - name: pvc-charts
          persistentVolumeClaim:
            claimName: pvc-charts
      containers:
        - name: chart-museum
          image: somerepo/chartmuseum:0.0.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: chart-museum
          volumeMounts:
            - mountPath: "/charts"
              name: pvc-charts
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
      imagePullSecrets:
        - name: us.icr.io.secret
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: chart-museum
  name: chart-museum
  namespace: demo
spec:
  type: ClusterIP
  ports:
    - name: 8080-tcp
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: chart-museum
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: chart-museum
  name: chart-museum
  namespace: demo
spec:
  port:
    targetPort: 8080-tcp
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: edge
  to:
    kind: Service
    name: chart-museum
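The PersistentVolumeClaim referenced as pvc-charts is not part of the original answer; a minimal sketch, assuming the cluster's default StorageClass, might look like this (adjust the size to your needs):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-charts
  namespace: demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi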
The manifest creates a ConfigMap object and uses a PersistentVolumeClaim to 'replicate' the command used to run Chart Museum locally (as described at https://chartmuseum.com/):
docker run --rm -it \
-p 8080:8080 \
-v $(pwd)/charts:/charts \
-e DEBUG=true \
-e STORAGE=local \
-e STORAGE_LOCAL_ROOTDIR=/charts \
chartmuseum/chartmuseum:latest
The Service and Route in the manifest expose the repo to the external world.
After the objects are created, take the HOST/PORT value from oc get route/chart-museum -n demo, open it in a browser over https, and hit Enter. You should see the Chart Museum welcome page, which means the installation was successful.