How to run the command in deployment? - kubernetes

I have the following deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: dev
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: keycloak
          image: "hub.svc.databaker.io/service/keycloak:0.1.8"
          imagePullPolicy: "IfNotPresent"
          command:
            - "-Dkeycloak.migration.action=import -Dkeycloak.migration.provider=dir -Dkeycloak.profile.feature.upload_scripts=enabled -Dkeycloak.migration.dir=/opt/jboss/keycloak/import-dir -Dkeycloak.migration.strategy=OVERWRITE_EXISTING"
It cannot be deployed. The error message is:
CrashLoopBackOff: back-off 5m0s restarting failed container=keycloak pod=keycloak-86c677456b-tqk6w_dev(6fb23dcc-9fe8-42fb-98d0-619a93f74da1)
I guess this is because of the command.
I would like to run a command analogous to what I do with Docker Compose:
keycloak:
  networks:
    - auth
  image: hub.svc.databaker.io/service/keycloak:0.1.7
  container_name: keycloak
  command:
    - "-Dkeycloak.migration.action=import -Dkeycloak.migration.provider=dir -Dkeycloak.profile.feature.upload_scripts=enabled -Dkeycloak.migration.dir=/opt/jboss/keycloak/import-dir -Dkeycloak.migration.strategy=OVERWRITE_EXISTING"
How do I run a command in a Kubernetes deployment?
Update
I have changed the deployment to:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: keycloak
          image: "hub.svc.databaker.io/service/keycloak:0.1.8"
          imagePullPolicy: "IfNotPresent"
          args:
            - "-Dkeycloak.migration.action=import"
            - "-Dkeycloak.migration.provider=dir"
            - "-Dkeycloak.profile.feature.upload_scripts=enabled"
            - "-Dkeycloak.migration.dir=/opt/jboss/keycloak/import-dir"
            - "-Dkeycloak.migration.strategy=OVERWRITE_EXISTING"
and receive the error:
RunContainerError: failed to start container "012966e22a00e23a7d1f2d5a12e19f6aa9fcb390293f806e840bc007a733c1b0": Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"-Dkeycloak.migration.action=import -Dkeycloak.migration.provider=dir -Dkeycloak.profile.feature.upload_scripts=enabled -Dkeycloak.migration.dir=/opt/jboss/keycloak/import-dir -Dkeycloak.migration.strategy=OVERWRITE_EXISTING\": stat -Dkeycloak.migration.action=import -Dkeycloak.migration.provider=dir -Dkeycloak.profile.feature.upload_scripts=enabled -Dkeycloak.migration.dir=/opt/jboss/keycloak/import-dir -Dkeycloak.migration.strategy=OVERWRITE_EXISTING: no such file or directory": unknown

If your container image already defines an entrypoint, you can provide only the arguments; this is done with args. To define or override the entrypoint itself, use command. For example:
containers:
  - name: keycloak
    image: hub.svc.databaker.io/service/keycloak:0.1.7
    command: ["./standalone.sh"]
    args:
      - "-Dkeycloak.migration.action=import"
      - "-Dkeycloak.migration.provider=dir"
      - "-Dkeycloak.profile.feature.upload_scripts=enabled"
      - "-Dkeycloak.migration.dir=/opt/jboss/keycloak/import-dir"
      - "-Dkeycloak.migration.strategy=OVERWRITE_EXISTING"
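For intuition, command and args map onto Docker's ENTRYPOINT and CMD. Below is a minimal, self-contained sketch using a stock busybox image; the pod name and the echo arguments are illustrative, not part of the question:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: demo
      image: busybox
      # Same effect as ENTRYPOINT ["echo"] plus CMD ["hello", "from", "args"]
      command: ["echo"]
      args: ["hello", "from", "args"]
      # If command were omitted, these args would be passed to the image's own
      # ENTRYPOINT instead of its default CMD.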

Related

capsh command inside kubernetes container

The pod is in the Running state, but logging into the container and running capsh --print gives the error:
sh: capsh: not found
Running the same image as a Docker container with --cap-add SYS_ADMIN or --privileged gives the desired output.
What changes to the deployment, or which extra permissions, are needed for it to work inside a Kubernetes container?
Deployment:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: sample-deployment
  namespace: sample
  labels:
    app: sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
        - name: sample
          image: alpine:3.17
          command:
            - sh
            - -c
            - while true; do echo Hello World; sleep 10; done
          env:
            - name: NFS_EXPORT_0
              value: /var/opt/backup
            - name: NFS_LOG_LEVEL
              value: DEBUG
          volumeMounts:
            - name: backup
              mountPath: /var/opt/backup
          securityContext:
            capabilities:
              add: ["SYS_ADMIN"]
      volumes:
        - name: backup
          persistentVolumeClaim:
            claimName: sample-pvc
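For reference, the Kubernetes counterparts of the two Docker flags mentioned above live in the container's securityContext. A minimal sketch; it covers only the permission mapping and says nothing about whether the capsh binary is actually present in the image:
containers:
  - name: sample
    image: alpine:3.17
    securityContext:
      capabilities:
        add: ["SYS_ADMIN"]   # equivalent of docker run --cap-add SYS_ADMIN
      # privileged: true     # equivalent of docker run --privileged (much broader)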

Kubernetes: Cannot VolumeMount emptydir within init container

I am trying to use the amazon/aws-cli Docker image in an init container to download all files from an S3 bucket, and to mount the same volume into the main container.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-deployment
  name: test-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-deployment
  template:
    metadata:
      labels:
        app: test-deployment
    spec:
      securityContext:
        fsGroup: 2000
      serviceAccountName: "s3-sa" # name of the SA we're using
      automountServiceAccountToken: true
      initContainers:
        - name: data-extension
          image: amazon/aws-cli
          volumeMounts:
            - name: data
              mountPath: /data
          command:
            - aws s3 sync s3://some-bucket/ /data
      containers:
        - image: amazon/aws-cli
          name: aws
          command: ["sleep", "10000"]
          volumeMounts:
            - name: data
              mountPath: "/data"
      volumes:
        - name: data
          emptyDir: {}
But it does not seem to work; the init container goes into a crash loop.
error:
Error: failed to start container "data-extension": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "aws s3 sync s3://some-bucket/ /data": stat aws s3 sync s3://some-bucket/ /data: no such file or directory: unknown
Your command needs to be updated:
...
command:
  - "aws"
  - "s3"
  - "sync"
  - "s3://some-bucket/"
  - "/data"
...
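If you would rather keep the command as a single string, another option is to run it through a shell, assuming the image ships one; this is a sketch, not part of the original answer:
command: ["/bin/sh", "-c", "aws s3 sync s3://some-bucket/ /data"]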

Run consul agent with config-dir caused not found exception

In our docker-compose.yaml we have:
version: "3.5"
services:
  consul-server:
    image: consul:latest
    command: "agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -config-dir=./usr/src/app/consul.d/"
    volumes:
      - ./consul.d/:/usr/src/app/consul.d
In the consul.d folder we have statically defined our services. It works fine with docker-compose.
But when trying to run it on Kubernetes with this configmap:
ahmad@ahmad-pc:~$ kubectl describe configmap consul-config -n staging
Name: consul-config
Namespace: staging
Labels: <none>
Annotations: <none>
Data
====
trip.json:
----
... omitted for clarity ...
and consul.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: consul-server
  name: consul-server
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: consul-server
  template:
    metadata:
      labels:
        io.kompose.service: consul-server
    spec:
      containers:
        - image: quay.io/bitnami/consul:latest
          name: consul-server
          ports:
            - containerPort: 8500
          # env:
          #   - name: CONSUL_CONF_DIR  # Consul seems not respecting this env variable
          #     value: /consul/conf/
          volumeMounts:
            - name: config-volume
              mountPath: /consul/conf/
          command: ["agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -config-dir=/consul/conf/"]
      volumes:
        - name: config-volume
          configMap:
            name: consul-config
I got the following error:
ahmad@ahmad-pc:~$ kubectl describe pod consul-server-7489787fc7-8qzhh -n staging
...
Error: failed to start container "consul-server": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -config-dir=/consul/conf/\":
stat agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -config-dir=/consul/conf/:
no such file or directory": unknown
But when I run the container without the command: agent... line and open a shell in it, I can see the files mounted in the right place.
Why does Consul give me a not-found error even though that folder exists?
To execute a command in the pod you define the command in the command field and its arguments in the args field. The command field is the same as ENTRYPOINT in Docker, and args is the same as CMD.
In this case you define /bin/sh as the ENTRYPOINT and "-c", "consul agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -data-dir=/bitnami/consul/data/ -config-dir=/consul/conf/" as the arguments, so it executes consul agent ...:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: consul-server
  name: consul-server
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: consul-server
  template:
    metadata:
      labels:
        io.kompose.service: consul-server
    spec:
      containers:
        - image: quay.io/bitnami/consul:latest
          name: consul-server
          ports:
            - containerPort: 8500
          env:
            - name: CONSUL_CONF_DIR  # Consul seems not respecting this env variable
              value: /consul/conf/
          volumeMounts:
            - name: config-volume
              mountPath: /consul/conf/
          command: ["/bin/sh"]
          args: ["-c", "consul agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -data-dir=/bitnami/consul/data/ -config-dir=/consul/conf/"]
      volumes:
        - name: config-volume
          configMap:
            name: consul-config
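The shell wrapper can also be avoided by splitting the command line into list items, assuming the consul binary is on the image's PATH; this is a sketch, not taken from the answer above:
command: ["consul"]
args:
  - "agent"
  - "-server"
  - "-bootstrap"
  - "-ui"
  - "-enable-script-checks=true"
  - "-client=0.0.0.0"
  - "-data-dir=/bitnami/consul/data/"
  - "-config-dir=/consul/conf/"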

Kubernetes unknown field "volumes"

I am trying to deploy a simple nginx in Kubernetes using host volumes. I use the following YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: webserver
          image: nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: hostvol
              mountPath: /usr/share/nginx/html
    volumes:
      - name: hostvol
        hostPath:
          path: /home/docker/vol
When I deploy it with kubectl create -f webserver.yaml, it throws the following error:
error: error validating "webserver.yaml": error validating data: ValidationError(Deployment.spec.template): unknown field "volumes" in io.k8s.api.core.v1.PodTemplateSpec; if you choose to ignore these errors, turn validation off with --validate=false
I believe you have the wrong indentation. The volumes key should be at the same level as containers.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: webserver
          image: nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: hostvol
              mountPath: /usr/share/nginx/html
      volumes:
        - name: hostvol
          hostPath:
            path: /home/docker/vol
Look at this wordpress example from the documentation to see how it's done.
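As a quick rule of thumb, volumeMounts is a container-level field while volumes is a pod-level field; a stripped-down skeleton of the relevant part:
spec:                          # pod spec
  containers:
    - name: webserver
      volumeMounts:            # container level: where the volume appears
        - name: hostvol
          mountPath: /usr/share/nginx/html
  volumes:                     # pod level: sibling of containers
    - name: hostvol
      hostPath:
        path: /home/docker/vol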

Gitlab CI - K8s - Deployment

I'm just going through this guide on GitLab and Kubernetes (gitlab-k8s-cd), but my build keeps failing on this part:
- kubectl delete secret registry.gitlab.com
- kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=<my_username> --docker-password=$REGISTRY_PASSWD --docker-email=<my_email>
Although I am not entirely sure what password is needed for --docker-password, I have created an API token in gitlab for my user and I am using that in the secure variables.
This is the error:
$ gcloud container clusters get-credentials deployment
Fetching cluster endpoint and auth data.
kubeconfig entry generated for deployment.
$ kubectl delete secret registry.gitlab.com
Error from server: secrets "registry.gitlab.com" not found
ERROR: Build failed: exit code 1
Any help would be much appreciated thanks.
EDIT
Since the initial post, removing the initial kubectl delete secret and re-building worked, so the build was failing on the delete when there was no previous secret.
Second Edit
I'm having problems with my deployment.yml for Kubernetes. Could anyone shed light on why I am getting this error:
error validating "deployment.yml": error validating data: field spec.template.spec.containers[0].ports[0]: expected object of type map[string]interface{},
With this yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app>
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: <app>
    spec:
      containers:
        - name: deployment
          image: registry.gitlab.com/<username>/<app>
          imagePullPolicy: Always
          ports:
            - "80:8080"
          env:
            - name: PORT
              value: "8080"
      imagePullSecrets:
        - name: registry.gitlab.com
And this error:
error validating "deployment.yml": error validating data: found invalid field imagePullSecrets for v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
With this yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app>
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: <app>
    spec:
      containers:
        - name: <app>
          image: registry.gitlab.com/<project>/<app>
          imagePullPolicy: Always
          ports:
            - "80:8080"
          env:
            - name: PORT
              value: "8080"
      imagePullSecrets:
        - name: registry.gitlab.com
Latest YAML
apiVersion: v1
kind: Service
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  ports:
    - port: 80
  selector:
    app: <app_name>
    tier: frontend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: <app_name>
        tier: frontend
    spec:
      containers:
        - image: registry.gitlab.com/<project>/<app>
          imagePullPolicy: Always
          name: <app_name>
          env:
            - name: PORT
              value: "8080"
          imagePullSecrets:
            - name: registry.gitlab.com
          ports:
            - containerPort: 8080
              hostPort: 80
Regarding your first error:
Ports are defined differently in Kubernetes than in Docker or Docker Compose. This is how the port specification should look:
ports:
  - containerPort: 8080
    hostPort: 80
See the reference for more information.
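Note that hostPort binds the container port directly to the node. The more common pattern is to expose only containerPort and let the Service that is already part of the manifest do the 80 -> 8080 mapping; a sketch (targetPort is an addition here, it is not in the original Service):
# in the container spec
ports:
  - containerPort: 8080
# in the Service spec
ports:
  - port: 80
    targetPort: 8080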
Regarding your second error:
According to the reference on PodSpecs, the imagePullSecrets property is correctly placed in your example. However, from the error message it seems that you actually put the imagePullSecrets property into the ContainerSpec, not the PodSpec.
The YAML in your question appears to be correct in this case. Make sure that your actual manifest matches the example from your question and that you did not accidentally indent the imagePullSecrets property more than necessary.
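For clarity, this is the placement the answer describes, with imagePullSecrets indented at the pod-spec level as a sibling of containers (a minimal skeleton):
spec:                  # pod spec inside template
  containers:
    - name: <app>
      image: registry.gitlab.com/<project>/<app>
  imagePullSecrets:    # pod-level field, not inside the container
    - name: registry.gitlab.com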
This is the working YAML file for K8s:
apiVersion: v1
kind: Service
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  ports:
    - port: 80
  selector:
    app: <app_name>
    tier: frontend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: <app_name>
        tier: frontend
    spec:
      containers:
        - image: registry.gitlab.com/<project>/<app>:latest
          imagePullPolicy: Always
          name: <app_name>
          env:
            - name: PORT
              value: "8080"
          ports:
            - containerPort: 8080
              hostPort: 80
      imagePullSecrets:
        - name: registry.gitlab.com
And this is the working gitlab-ci file:
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay
stages:
  - package
  - deploy
docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/<project>/<app> .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/<project>/<app>
k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone <zone>
    - gcloud config set project <project>
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials <container-name>
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=<username> --docker-password=$REGISTRY_PASSWD --docker-email=<user-email>
    - kubectl apply -f deployment.yml
I just need to work out how to alter the script to allow for rolling back.
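One possible way to handle the rollback mentioned above would be an extra manual job; this is only a sketch, and the job name and the deployment name <app_name> are assumptions rather than part of the original file:
k8s-rollback:
  image: google/cloud-sdk
  stage: deploy
  when: manual              # run on demand instead of on every pipeline
  script:
    # the same gcloud auth/config steps as in the k8s-deploy job would go here
    - kubectl rollout undo deployment/<app_name>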