OpenShift: Accessing mounted file-system as non-root - kubernetes

I am trying to run Chart Museum as a non-root user in OpenShift. Here is a snapshot of my YAML.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chart-museum
  namespace: demo
spec:
  selector:
    matchLabels:
      app: chart-museum
  replicas: 1
  template:
    metadata:
      labels:
        app: chart-museum
    spec:
      volumes:
        - name: pvc-charts
          persistentVolumeClaim:
            claimName: pvc-charts
      containers:
        - name: chart-museum
          securityContext:
            fsGroup: 1000
          image: chartmuseum/chartmuseum:latest
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: chart-museum
          volumeMounts:
            - name: pvc-charts
              mountPath: "/charts"
As you can see, I have set spec.containers.securityContext.fsGroup to 1000, which is the same as the user ID in the Chart Museum Dockerfile shown below.
FROM alpine:3.10.3
RUN apk add --no-cache cifs-utils ca-certificates \
&& adduser -D -u 1000 chartmuseum
COPY bin/linux/amd64/chartmuseum /chartmuseum
USER 1000
ENTRYPOINT ["/chartmuseum"]
And yet, when I try to upload a chart, I get a permission denied message for /charts. How do I get around this issue?

This is related to Kubernetes and how the given PersistentVolume is defined. You can find the full discussion and possible workarounds in the related GitHub issue.

Add chmod/chown lines to your Dockerfile:
FROM alpine:3.10.3
RUN apk add --no-cache cifs-utils ca-certificates \
&& adduser -D -u 1000 chartmuseum
COPY bin/linux/amd64/chartmuseum /chartmuseum
RUN chmod +xr /chartmuseum
RUN chown 1000:1000 /chartmuseum
USER 1000
ENTRYPOINT ["/chartmuseum"]
Modify your spec.template.spec.containers.securityContext to ensure the user and group will be enforced.
containers:
  - name: chart-museum
    securityContext:
      runAsUser: 1000
      runAsGroup: 1000
      fsGroup: 1000
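Note that fsGroup is only a field of the pod-level securityContext (the container-level securityContext has runAsUser/runAsGroup but no fsGroup), so a variant that also applies group ownership to the mounted volume would look roughly like this sketch:
spec:
  template:
    spec:
      securityContext:
        fsGroup: 1000            # applied to mounted volumes such as /charts
      containers:
        - name: chart-museum
          securityContext:
            runAsUser: 1000
            runAsGroup: 1000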

This is how I solved the issue.
Download the binary: curl -LO https://s3.amazonaws.com/chartmuseum/release/latest/bin/linux/amd64/chartmuseum
Change permissions: chmod +xr chartmuseum
Create a new Dockerfile as shown below. Basically, use the user name instead of the ID in the chown commands so that the binary and the storage location are owned by the chartmuseum user and not by root.
FROM alpine:3.10.3
RUN apk add --no-cache cifs-utils ca-certificates \
    && adduser -D -u 1000 chartmuseum
COPY chartmuseum /chartmuseum
RUN chown chartmuseum:chartmuseum /chartmuseum
# /charts must exist before it can be chown-ed; it is also the mount point for the PVC
RUN mkdir -p /charts && chown chartmuseum:chartmuseum /charts
USER chartmuseum
ENTRYPOINT ["/chartmuseum"]
Build and push the resulting Docker image to e.g. somerepo/chartmuseum:0.0.0.
Use the k8s manifest as shown below. Edit the namespace as required. Note that creation of the PersistentVolumeClaim is not covered here.
kind: ConfigMap
apiVersion: v1
metadata:
  name: chart-museum
  namespace: demo
data:
  DEBUG: 'true'
  STORAGE: local
  STORAGE_LOCAL_ROOTDIR: "/charts"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chart-museum
  namespace: demo
spec:
  selector:
    matchLabels:
      app: chart-museum
  replicas: 1
  template:
    metadata:
      labels:
        app: chart-museum
    spec:
      volumes:
        - name: pvc-charts
          persistentVolumeClaim:
            claimName: pvc-charts
      containers:
        - name: chart-museum
          image: somerepo/chartmuseum:0.0.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: chart-museum
          volumeMounts:
            - mountPath: "/charts"
              name: pvc-charts
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
      imagePullSecrets:
        - name: us.icr.io.secret
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: chart-museum
  name: chart-museum
  namespace: demo
spec:
  type: ClusterIP
  ports:
    - name: 8080-tcp
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: chart-museum
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: chart-museum
  name: chart-museum
  namespace: demo
spec:
  port:
    targetPort: 8080-tcp
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: edge
  to:
    kind: Service
    name: chart-museum
The manifest creates a ConfigMap object and uses a PersistentVolumeClaim to 'replicate' the command that runs Chart Museum locally (as described at https://chartmuseum.com/):
docker run --rm -it \
-p 8080:8080 \
-v $(pwd)/charts:/charts \
-e DEBUG=true \
-e STORAGE=local \
-e STORAGE_LOCAL_ROOTDIR=/charts \
chartmuseum/chartmuseum:latest
The Service and Route in the manifest expose the repo to the external world.
After the objects are created, take the HOST/PORT value from oc get route/chart-museum -n demo, enter it in the address bar with https, and hit enter. You should see the Chart Museum welcome page, which means the installation was successful.
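To confirm that chart uploads now work (the original sticking point), you can push a packaged chart to ChartMuseum's /api/charts endpoint through the route; the chart name below is just a placeholder:
# <route-host> is the HOST value from `oc get route/chart-museum -n demo`
helm package ./mychart                                   # produces mychart-0.1.0.tgz (placeholder chart)
curl --data-binary "@mychart-0.1.0.tgz" https://<route-host>/api/charts
# list the stored charts to confirm the write to /charts succeeded
curl https://<route-host>/api/charts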

Related

Shared Folder with Azure File on kubernetes pod doesn't work

I have an issue with my deployment when I try to share a folder with a Kubernetes volume.
The folder is shared using Azure File Storage.
If I deploy my image without sharing the folder (/integrations), the app starts, and the pod (viewed via Lens) is up and running.
If I add the folder-to-volume mapping, the pod gets stuck in a CrashLoopBackOff error.
Here is my deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: sandbox-pizzly
  name: sandbox-pizzly-widget
  labels:
    app: sandbox-pizzly-widget
    product: sandbox-pizzly
    app.kubernetes.io/name: "sandbox-pizzly-widget"
    app.kubernetes.io/version: "latest"
    app.kubernetes.io/managed-by: "xxxx"
    app.kubernetes.io/component: "sandbox-pizzly-widget"
    app.kubernetes.io/part-of: "sandbox-pizzly"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sandbox-pizzly-widget
  template:
    metadata:
      labels:
        app: sandbox-pizzly-widget
    spec:
      containers:
        - name: sandbox-pizzly-widget
          image: davidep931/pizzly-proxy:latest
          ports:
            - containerPort: 8080
          env:
            - name: NODE_ENV
              value: "production"
            - name: DASHBOARD_USERNAME
              value: "admin"
            - name: DASHBOARD_PASSWORD
              value: "admin"
            - name: SECRET_KEY
              value: "devSecretKey"
            - name: PUBLISHABLE_KEY
              value: "devPubKey"
            - name: PROXY_USES_SECRET_KEY_ONLY
              value: "FALSE"
            - name: COOKIE_SECRET
              value: "devCookieSecret"
            - name: AUTH_CALLBACK_URL
              value: "https://pizzly.mydomain/auth/callback"
            - name: DB_HOST
              value: "10.x.x.x"
            - name: DB_PORT
              value: "5432"
            - name: DB_DATABASE
              value: "postgresdb"
            - name: DB_USER
              value: "username"
            - name: DB_PASSWORD
              value: "password"
            - name: PORT
              value: "8080"
          volumeMounts:
            - mountPath: "/home/node/app/integrations"
              name: pizzlystorage
          resources:
            requests:
              memory: "100Mi"
              cpu: "50m"
            limits:
              cpu: "75m"
              memory: "200Mi"
---
apiVersion: v1
kind: Service
metadata:
  namespace: sandbox-pizzly
  name: sandbox-pizzly-widget
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: sandbox-pizzly-widget
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: sandbox-pizzly-pv-volume
  labels:
    type: local
    app: products
spec:
  storageClassName: azurefile
  capacity:
    storage: 1Gi
  azureFile:
    secretName: azure-secret
    shareName: sandbox-pizzly-pv
    readOnly: false
    secretNamespace: sandbox-pizzly
  accessModes:
    - ReadWriteMany
  claimRef:
    namespace: sandbox-pizzly
    name: sandbox-pizzly-pv-claim
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: sandbox-pizzly
  name: sandbox-pizzly-pv-claim
  labels:
    app: products
spec:
  storageClassName: azurefile
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefilestorage
provisioner: kubernetes.io/azure-file
parameters:
  storageAccount: persistentsapizzly
reclaimPolicy: Retain
---
apiVersion: v1
kind: Secret
metadata:
  name: azure-secret
  namespace: sandbox-pizzly
type: Opaque
data:
  azurestorageaccountname: xxxxxxxxxxxxxxxxxxxxx
  azurestorageaccountkey: xxxxxxxxxxxxxxxxxxxxxxxxxxx
If I try, in the few seconds before the pod crashes, to access the integrations folder and perform a touch test.txt, I find that file in the Azure File Storage.
A few seconds later the shell closes due to the CrashLoopBackOff.
I add the Dockerfile:
FROM node:14-slim
WORKDIR /app
# Copy in dependencies for building
COPY *.json ./
COPY yarn.lock ./
# COPY config ./config
COPY integrations ./integrations/
COPY src ./src/
COPY tests ./tests/
COPY views ./views/
RUN yarn install
# Actual image to run from.
FROM node:14-slim
# Make sure we have ca certs for TLS
RUN apt-get update && apt-get install -y \
curl \
wget \
gnupg2 ca-certificates libnss3 \
git
# Make a directory for the node user. Not running Pizzly as root.
RUN mkdir /home/node/app && chown -R node:node /home/node/app
WORKDIR /home/node/app
USER node
# Startup script
COPY --chown=node:node ./startup.sh ./startup.sh
RUN chmod +x ./startup.sh
# COPY from first container
COPY --chown=node:node --from=0 /app/package.json ./package.json
COPY --chown=node:node --from=0 /app/dist/ .
COPY --chown=node:node --from=0 /app/views ./views
COPY --chown=node:node --from=0 /app/node_modules ./node_modules
# Run the startup script
CMD ./startup.sh
Here the startup.sh script:
#!/bin/sh
# Docker Startup script
# Apply migration
./node_modules/.bin/knex --cwd ./src/lib/database/config migrate:latest
# Start App
node ./src/index.js
Do you have any idea what I'm missing or doing wrong?
Thank you,
Dave.
Well, there are two things you need to know when you mount an Azure file share onto a pod's existing folder as a volume:
it will hide the existing files in that folder
the mount path will be owned by the root user
So if your application depends on those existing files to start, that will cause a problem. And if your application runs as a non-root user, for example a user named app, that may also cause a problem. Here I guess the problem is caused by the first limitation.
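If the root ownership of the mount also turns out to matter (the node user in the question's image runs as UID/GID 1000), a common workaround, sketched here as an assumption rather than a verified fix, is to add uid/gid mount options for the Azure File share, either on the StorageClass or, for the statically defined PV above, under the PV's spec.mountOptions:
# Hedged sketch: mount the share as UID/GID 1000 (the "node" user) instead of root.
# Names mirror the manifests in the question.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefilestorage
provisioner: kubernetes.io/azure-file
parameters:
  storageAccount: persistentsapizzly
mountOptions:
  - uid=1000
  - gid=1000
  - dir_mode=0777
  - file_mode=0777
reclaimPolicy: Retain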

How can I mount a docker config file with Skaffold?

I want to use the Prometheus image to deploy a container as part of the local deployment. Usually one has to run the container with a bind-mounted volume to get the configuration file (prometheus.yml) into the container:
docker run \
-p 9090:9090 \
-v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
prom/prometheus
How can I achieve this with Kubernetes when using Skaffold?
Your Kubernetes configuration will be something like this.
You can specify the port number and volume mounts; the important sections for mounting are volumeMounts and volumes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-prom
spec:
  selector:
    matchLabels:
      app: my-prom
  template:
    metadata:
      labels:
        app: my-prom
    spec:
      containers:
        - name: my-prom
          image: prom/prometheus:latest
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prom-config
              mountPath: /etc/prometheus/prometheus.yml
      volumes:
        - name: prom-config
          hostPath:
            path: /path/to/prometheus.yml
---
apiVersion: v1
kind: Service
metadata:
  name: my-prom
spec:
  selector:
    app: my-prom
  ports:
    - port: 9090
      targetPort: 9090
Save the Kubernetes config file in a folder (e.g. k8s/) and add the config below to skaffold.yaml, pointing at the path of the Kubernetes manifests:
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml
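As an alternative sketch (not part of the original answer), if bind-mounting from the host is inconvenient in your local cluster, you can ship prometheus.yml as a ConfigMap and mount it with subPath, so Skaffold only has to apply manifests:
apiVersion: v1
kind: ConfigMap
metadata:
  name: prom-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s   # minimal placeholder config; use your real prometheus.yml here
In the Deployment above, the hostPath volume would then become configMap: { name: prom-config }, and the volumeMount would gain subPath: prometheus.yml so only that one file is mounted at /etc/prometheus/prometheus.yml.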

I want to apt-get install sysstat command in kubernetes yaml file

Cluster information:
Kubernetes version: 1.8
Cloud being used: AWS EKS
Host OS: debian linux
When I deploy the pods, I want each pod to install and start sysstat automatically.
Below are my two YAML files, but the pods go into CrashLoopBackOff when I put command: ["/bin/sh", "-c"] and args: ["apt-get install sysstat"] below image:
cat deploy/db/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sbdemo-postgres-sfs
spec:
  serviceName: sbdemo-postgres-service
  replicas: 1
  selector:
    matchLabels:
      app: sbdemo-postgres-sfs
  template:
    metadata:
      labels:
        app: sbdemo-postgres-sfs
    spec:
      containers:
        - name: postgres
          image: dayan888/springdemo:postgres9.6
          ports:
            - containerPort: 5432
          command: ["/bin/bash", "-c"]
          args: ["apt-get install sysstat"]
          volumeMounts:
            - name: pvc-db-volume
              mountPath: /var/lib/postgresql
  volumeClaimTemplates:
    - metadata:
        name: pvc-db-volume
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1G
cat deploy/web/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sbdemo-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sbdemo-nginx
  template:
    metadata:
      labels:
        app: sbdemo-nginx
    spec:
      containers:
        - name: nginx
          image: gobawoo21/springdemo:nginx
          command: ["/bin/bash", "-c"]
          args: ["apt-get install sysstat"]
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
            - name: server-conf
              mountPath: /etc/nginx/conf.d/server.conf
              subPath: server.conf
      volumes:
        - name: nginx-conf
          configMap:
            name: nginx-conf
            items:
              - key: nginx.conf
                path: nginx.conf
        - name: server-conf
          configMap:
            name: server-conf
            items:
              - key: server.conf
                path: server.conf
Does anyone know how to install the package automatically when the pods are deployed?
Regards
The best practice is to install packages at the image build stage. You can simply add this step to your Dockerfile.
FROM postgres:9.6
RUN apt-get update &&\
apt-get install sysstat -y &&\
rm -rf /var/lib/apt/lists/*
COPY deploy/db/init_ddl.sh /docker-entrypoint-initdb.d/
RUN chmod +x /docker-entrypoint-initdb.d/init_ddl.sh
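Building and pushing the image would look roughly like this; harik8/sof:62298191 is the tag referenced in the manifest below, so substitute your own registry/repository:
# Build the image from the Dockerfile above and push it so the cluster can pull it
docker build -t harik8/sof:62298191 .
docker push harik8/sof:62298191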
Kube Manifest
spec:
  containers:
    - name: postgres
      image: harik8/sof:62298191
      imagePullPolicy: Always
      ports:
        - containerPort: 5432
      env:
        - name: POSTGRES_PASSWORD
          value: password
      volumeMounts:
        - name: pvc-db-volume
          mountPath: /var/lib/postgresql
It should run (Please ignore POSTGRES_PASSWORD env variable)
$ kubectl get po
NAME READY STATUS RESTARTS AGE
sbdemo-postgres-sfs-0 1/1 Running 0 8m46s
Validation
$ kubectl exec -it sbdemo-postgres-sfs-0 bash
root@sbdemo-postgres-sfs-0:/# iostat
Linux 4.19.107 (sbdemo-postgres-sfs-0) 06/10/2020 _x86_64_ (4 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
10.38 0.01 6.28 0.24 0.00 83.09
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
vda 115.53 1144.72 1320.48 1837135 2119208
scd0 0.02 0.65 0.00 1048 0
If this is possible, something is wrong. Your container should not be running as root, so even if you fixed this approach, it shouldn't work. What you need to do is put this in your container build instead (i.e. in the Dockerfile).

ValidationError: missing required field "selector" in io.k8s.api.v1.DeploymentSpec

I've created a Hyper-V machine and tried to deploy Sawtooth on Minikube using the Sawtooth YAML file:
https://sawtooth.hyperledger.org/docs/core/nightly/master/app_developers_guide/sawtooth-kubernetes-default.yaml
I changed the apiVersion, i.e. apiVersion: extensions/v1beta1 to apiVersion: apps/v1, since I launched Minikube with Kubernetes v1.17.0 using this command:
minikube start --kubernetes-version v1.17.0
After that I can't deploy the server. The command is:
kubectl apply -f sawtooth-kubernetes-default.yaml --validate=false
It shows an error saying "sawtooth-0" is invalid.
---
apiVersion: v1
kind: List
items:
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sawtooth-0
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: sawtooth-0
      template:
        metadata:
          labels:
            name: sawtooth-0
        spec:
          containers:
            - name: sawtooth-devmode-engine
              image: hyperledger/sawtooth-devmode-engine-rust:chime
              command:
                - bash
              args:
                - -c
                - "devmode-engine-rust -C tcp://$HOSTNAME:5050"
            - name: sawtooth-settings-tp
              image: hyperledger/sawtooth-settings-tp:chime
              command:
                - bash
              args:
                - -c
                - "settings-tp -vv -C tcp://$HOSTNAME:4004"
            - name: sawtooth-intkey-tp-python
              image: hyperledger/sawtooth-intkey-tp-python:chime
              command:
                - bash
              args:
                - -c
                - "intkey-tp-python -vv -C tcp://$HOSTNAME:4004"
            - name: sawtooth-xo-tp-python
              image: hyperledger/sawtooth-xo-tp-python:chime
              command:
                - bash
              args:
                - -c
                - "xo-tp-python -vv -C tcp://$HOSTNAME:4004"
            - name: sawtooth-validator
              image: hyperledger/sawtooth-validator:chime
              ports:
                - name: tp
                  containerPort: 4004
                - name: consensus
                  containerPort: 5050
                - name: validators
                  containerPort: 8800
              command:
                - bash
              args:
                - -c
                - "sawadm keygen \
                  && sawtooth keygen my_key \
                  && sawset genesis -k /root/.sawtooth/keys/my_key.priv \
                  && sawset proposal create \
                    -k /root/.sawtooth/keys/my_key.priv \
                    sawtooth.consensus.algorithm.name=Devmode \
                    sawtooth.consensus.algorithm.version=0.1 \
                    -o config.batch \
                  && sawadm genesis config-genesis.batch config.batch \
                  && sawtooth-validator -vv \
                    --endpoint tcp://$SAWTOOTH_0_SERVICE_HOST:8800 \
                    --bind component:tcp://eth0:4004 \
                    --bind consensus:tcp://eth0:5050 \
                    --bind network:tcp://eth0:8800"
            - name: sawtooth-rest-api
              image: hyperledger/sawtooth-rest-api:chime
              ports:
                - name: api
                  containerPort: 8008
              command:
                - bash
              args:
                - -c
                - "sawtooth-rest-api -C tcp://$HOSTNAME:4004"
            - name: sawtooth-shell
              image: hyperledger/sawtooth-shell:chime
              command:
                - bash
              args:
                - -c
                - "sawtooth keygen && tail -f /dev/null"
  - apiVersion: apps/v1
    kind: Service
    metadata:
      name: sawtooth-0
    spec:
      type: ClusterIP
      selector:
        name: sawtooth-0
      ports:
        - name: "4004"
          protocol: TCP
          port: 4004
          targetPort: 4004
        - name: "5050"
          protocol: TCP
          port: 5050
          targetPort: 5050
        - name: "8008"
          protocol: TCP
          port: 8008
          targetPort: 8008
        - name: "8800"
          protocol: TCP
          port: 8800
          targetPort: 8800
You need to fix your Deployment YAML: as the error message says, the Deployment.spec.selector field can't be empty.
Update the YAML (i.e. add spec.selector) as shown below:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: sawtooth-0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sawtooth-0
Why is the selector field important?
The selector field defines how the Deployment finds which Pods to manage. In this case, you simply select a label that is defined in the Pod template (app.kubernetes.io/name: sawtooth-0). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule.
Reference
Update:
The apiVersion for k8s service is v1:
- apiVersion: v1 # Update here
  kind: Service
  metadata:
    name: sawtooth-0
  spec:
    type: ClusterIP
    selector:
      app.kubernetes.io/name: sawtooth-0
    ... ... ...
For apiVersion v1 (and also for apps/v1) you need to use app: <your label>
apiVersion: v1
kind: Service
metadata:
  name: sawtooth-0
spec:
  selector:
    app: sawtooth-0
See: https://kubernetes.io/docs/concepts/services-networking/service/
The answer for this is already covered by @Kamol.
Some general possible reasons if you are still getting the error
missing required field "XXX" in YYY:
Check the apiVersion at the top of the file (for a Deployment the version is apps/v1, for a Service it's v1).
Check the spelling of "XXX" (the unknown field) and check whether the syntax is incorrect.
Check kind: ... once again.
If you find some other reason, please comment and let others know :)
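As a general tip (my addition, not from the answers above), you can catch these validation errors before touching the cluster by dry-running the manifest and inspecting the schema of the field the error names:
# Validate the manifest locally without creating anything (plain --dry-run on older kubectl such as 1.17)
kubectl apply --dry-run=client -f sawtooth-kubernetes-default.yaml
# Show the schema and required fields of the part the error complains about
kubectl explain deployment.spec.selector
kubectl explain service --api-version=v1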

CrashLoopBackOff while increasing replicas count more than 1 on Azure AKS cluster for MongoDB image

I am deploying MongoDB to Azure AKS with an Azure File Share as the volume (using a PersistentVolume & PersistentVolumeClaim). If I increase the replicas to more than one, CrashLoopBackOff occurs: only one pod gets created, and the others fail.
My Dockerfile to create the MongoDB image:
FROM ubuntu:16.04
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb http://repo.mongodb.org/apt/ubuntu $(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d= -f2)/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update && apt-get install -y mongodb-org
EXPOSE 27017
ENTRYPOINT ["/usr/bin/mongod"]
YAML file for Deployment
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: mongo
labels:
name: mongo
spec:
replicas: 3
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongo
image: <my image of mongodb>
ports:
- containerPort: 27017
protocol: TCP
name: mongo
volumeMounts:
- mountPath: /data/db
name: az-files-mongo-storage
volumes:
- name: az-files-mongo-storage
persistentVolumeClaim:
claimName: mong-pvc
---
apiVersion: v1
kind: Service
metadata:
name: mongo
spec:
ports:
- port: 27017
targetPort: 27017
selector:
app: mongo
For your issue, you can take a look at another issue with the same error. It seems you cannot initialize the same volume when another pod has already done it for mongo. From the error, I suggest you just use the volume to store the data; do any initialization in the Dockerfile when creating the image. Or you can create a volume for every pod through a StatefulSet, which is the more recommended approach.
Update:
The YAML file below will work for you:
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    app: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: mongo
  replicas: 3
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: charlesacr.azurecr.io/mongodb:v1
          ports:
            - containerPort: 27017
              name: mongo
          volumeMounts:
            - name: az-files-mongo-storage
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: az-files-mongo-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: az-files-mongo-storage
        resources:
          requests:
            storage: 5Gi
And you need to create the StorageClass before you create the StatefulSet. The YAML file is below:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: az-files-mongo-storage
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
parameters:
  skuName: Standard_LRS
Then the pods run as expected.
You can configure accessModes: ReadWriteMany, but the volume or storage type still has to support that mode; see the access modes table in the Kubernetes persistent volumes documentation.
According to that table, AzureFile supports ReadWriteMany but AzureDisk does not.
You should be using StatefulSets for MongoDB. Deployments are for stateless services.
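One more note, as an assumption on my part rather than something from the answers above: the Service named in serviceName is usually made headless for a StatefulSet, so each MongoDB pod gets a stable DNS name:
# Hedged sketch: headless Service for the StatefulSet above
# (gives pods DNS names like mongo-0.mongo, mongo-1.mongo)
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None          # headless: no virtual IP, per-pod DNS records instead
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    app: mongo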