I want to apt-get install the sysstat package from a Kubernetes YAML file

Cluster information:
Kubernetes version: 1.8
Cloud being used: AWS EKS
Host OS: debian linux
When I deploy the pods, I want each pod to install and start sysstat automatically.
Below are my two YAML files, but the pods go into CrashLoopBackOff when I put command: ["/bin/bash", "-c"] and args: ["apt-get install sysstat"] below the image: line.
cat deploy/db/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sbdemo-postgres-sfs
spec:
  serviceName: sbdemo-postgres-service
  replicas: 1
  selector:
    matchLabels:
      app: sbdemo-postgres-sfs
  template:
    metadata:
      labels:
        app: sbdemo-postgres-sfs
    spec:
      containers:
      - name: postgres
        image: dayan888/springdemo:postgres9.6
        ports:
        - containerPort: 5432
        command: ["/bin/bash", "-c"]
        args: ["apt-get install sysstat"]
        volumeMounts:
        - name: pvc-db-volume
          mountPath: /var/lib/postgresql
  volumeClaimTemplates:
  - metadata:
      name: pvc-db-volume
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1G
cat deploy/web/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sbdemo-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sbdemo-nginx
  template:
    metadata:
      labels:
        app: sbdemo-nginx
    spec:
      containers:
      - name: nginx
        image: gobawoo21/springdemo:nginx
        command: ["/bin/bash", "-c"]
        args: ["apt-get install sysstat"]
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        - name: server-conf
          mountPath: /etc/nginx/conf.d/server.conf
          subPath: server.conf
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
          items:
          - key: nginx.conf
            path: nginx.conf
      - name: server-conf
        configMap:
          name: server-conf
          items:
          - key: server.conf
            path: server.conf
Does anyone know how to make the package install automatically when the pods are deployed?
Regards

The best practice is to install packages at the image build stage. You can simply add this step to your Dockerfile:
FROM postgres:9.6
RUN apt-get update && \
    apt-get install sysstat -y && \
    rm -rf /var/lib/apt/lists/*
COPY deploy/db/init_ddl.sh /docker-entrypoint-initdb.d/
RUN chmod +x /docker-entrypoint-initdb.d/init_ddl.sh
Kube Manifest
spec:
  containers:
  - name: postgres
    image: harik8/sof:62298191
    imagePullPolicy: Always
    ports:
    - containerPort: 5432
    env:
    - name: POSTGRES_PASSWORD
      value: password
    volumeMounts:
    - name: pvc-db-volume
      mountPath: /var/lib/postgresql
It should run (please ignore the POSTGRES_PASSWORD env variable):
$ kubectl get po
NAME                    READY   STATUS    RESTARTS   AGE
sbdemo-postgres-sfs-0   1/1     Running   0          8m46s
Validation
$ kubectl exec -it sbdemo-postgres-sfs-0 bash
root@sbdemo-postgres-sfs-0:/# iostat
Linux 4.19.107 (sbdemo-postgres-sfs-0)  06/10/2020  _x86_64_  (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          10.38    0.01    6.28    0.24    0.00   83.09

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
vda             115.53      1144.72      1320.48    1837135    2119208
scd0              0.02         0.65         0.00       1048          0

If this were possible, something would be wrong: your container should not be running as root, so even if you fixed this approach it shouldn't work. What you need to do is install the package in your container build instead, i.e. in the Dockerfile.
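For completeness, the reason the posted manifests go into CrashLoopBackOff is that command and args replace the image's entrypoint, so the container exits as soon as apt-get returns (and apt-get install without a preceding apt-get update and the -y flag tends to fail anyway). If you really had to install at pod start rather than at build time, you would have to chain back into the image's normal entrypoint, roughly like this (a sketch assuming the image is based on the official postgres image, whose default entrypoint/command is docker-entrypoint.sh postgres):
containers:
- name: postgres
  image: dayan888/springdemo:postgres9.6
  command: ["/bin/bash", "-c"]
  # install sysstat, then exec the image's normal entrypoint so the container keeps running
  args: ["apt-get update && apt-get install -y sysstat && exec docker-entrypoint.sh postgres"]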

Related

Shared Folder with Azure File on kubernetes pod doesn't work

I have an issue with my deployment when I try to share a folder with a Kubernetes volume.
The folder is shared using Azure File Storage.
If I deploy my image without sharing the folder (/integrations), the app starts and the pod, viewed via Lens, is up and running.
If I add the volume mapping for the folder, the pod gets stuck in an error state (CrashLoopBackOff).
Here is my YAML deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: sandbox-pizzly
  name: sandbox-pizzly-widget
  labels:
    app: sandbox-pizzly-widget
    product: sandbox-pizzly
    app.kubernetes.io/name: "sandbox-pizzly-widget"
    app.kubernetes.io/version: "latest"
    app.kubernetes.io/managed-by: "xxxx"
    app.kubernetes.io/component: "sandbox-pizzly-widget"
    app.kubernetes.io/part-of: "sandbox-pizzly"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sandbox-pizzly-widget
  template:
    metadata:
      labels:
        app: sandbox-pizzly-widget
    spec:
      containers:
      - name: sandbox-pizzly-widget
        image: davidep931/pizzly-proxy:latest
        ports:
        - containerPort: 8080
        env:
        - name: NODE_ENV
          value: "production"
        - name: DASHBOARD_USERNAME
          value: "admin"
        - name: DASHBOARD_PASSWORD
          value: "admin"
        - name: SECRET_KEY
          value: "devSecretKey"
        - name: PUBLISHABLE_KEY
          value: "devPubKey"
        - name: PROXY_USES_SECRET_KEY_ONLY
          value: "FALSE"
        - name: COOKIE_SECRET
          value: "devCookieSecret"
        - name: AUTH_CALLBACK_URL
          value: "https://pizzly.mydomain/auth/callback"
        - name: DB_HOST
          value: "10.x.x.x"
        - name: DB_PORT
          value: "5432"
        - name: DB_DATABASE
          value: "postgresdb"
        - name: DB_USER
          value: "username"
        - name: DB_PASSWORD
          value: "password"
        - name: PORT
          value: "8080"
        volumeMounts:
        - mountPath: "/home/node/app/integrations"
          name: pizzlystorage
        resources:
          requests:
            memory: "100Mi"
            cpu: "50m"
          limits:
            cpu: "75m"
            memory: "200Mi"
---
apiVersion: v1
kind: Service
metadata:
  namespace: sandbox-pizzly
  name: sandbox-pizzly-widget
spec:
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: sandbox-pizzly-widget
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: sandbox-pizzly-pv-volume
  labels:
    type: local
    app: products
spec:
  storageClassName: azurefile
  capacity:
    storage: 1Gi
  azureFile:
    secretName: azure-secret
    shareName: sandbox-pizzly-pv
    readOnly: false
    secretNamespace: sandbox-pizzly
  accessModes:
  - ReadWriteMany
  claimRef:
    namespace: sandbox-pizzly
    name: sandbox-pizzly-pv-claim
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: sandbox-pizzly
  name: sandbox-pizzly-pv-claim
  labels:
    app: products
spec:
  storageClassName: azurefile
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefilestorage
provisioner: kubernetes.io/azure-file
parameters:
  storageAccount: persistentsapizzly
reclaimPolicy: Retain
---
apiVersion: v1
kind: Secret
metadata:
  name: azure-secret
  namespace: sandbox-pizzly
type: Opaque
data:
  azurestorageaccountname: xxxxxxxxxxxxxxxxxxxxx
  azurestorageaccountkey: xxxxxxxxxxxxxxxxxxxxxxxxxxx
If, in the few seconds before the pod crashes, I access the integrations folder and run touch test.txt, I find that file in the Azure File Storage share, so the mount itself works until the shell closes because of the CrashLoopBackOff.
Here is the Dockerfile:
FROM node:14-slim
WORKDIR /app
# Copy in dependencies for building
COPY *.json ./
COPY yarn.lock ./
# COPY config ./config
COPY integrations ./integrations/
COPY src ./src/
COPY tests ./tests/
COPY views ./views/
RUN yarn install
# Actual image to run from.
FROM node:14-slim
# Make sure we have ca certs for TLS
RUN apt-get update && apt-get install -y \
curl \
wget \
gnupg2 ca-certificates libnss3 \
git
# Make a directory for the node user. Not running Pizzly as root.
RUN mkdir /home/node/app && chown -R node:node /home/node/app
WORKDIR /home/node/app
USER node
# Startup script
COPY --chown=node:node ./startup.sh ./startup.sh
RUN chmod +x ./startup.sh
# COPY from first container
COPY --chown=node:node --from=0 /app/package.json ./package.json
COPY --chown=node:node --from=0 /app/dist/ .
COPY --chown=node:node --from=0 /app/views ./views
COPY --chown=node:node --from=0 /app/node_modules ./node_modules
# Run the startup script
CMD ./startup.sh
Here is the startup.sh script:
#!/bin/sh
# Docker Startup script
# Apply migration
./node_modules/.bin/knex --cwd ./src/lib/database/config migrate:latest
# Start App
node ./src/index.js
Do you have any idea what I am missing or doing wrong?
Thank you,
Dave.
Well, there are two things you need to know when you mount an Azure File share onto an existing folder of a pod as a volume:
it will hide the existing files
the mount path will be owned by the root user
This means that if your application depends on those existing files to start, it will fail, and if your application runs as a non-root user (for example a user named app), that can also cause problems. Here I guess the problem is caused by the first limitation.
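If the app needs the files that ship in /home/node/app/integrations, one common workaround (a sketch only, assuming a pizzlystorage volume is defined in the pod spec and that the image keeps its files at that path) is to seed the share from the image with an init container before the main container starts:
initContainers:
- name: seed-integrations
  image: davidep931/pizzly-proxy:latest
  # copy the integrations bundled in the image onto the (initially empty) Azure File share,
  # so mounting the share over the folder no longer hides them
  command: ["/bin/sh", "-c"]
  args: ["cp -r /home/node/app/integrations/. /seed/"]
  volumeMounts:
  - name: pizzlystorage
    mountPath: /seed
For the second limitation, chown has no effect on an SMB mount; the uid, gid, dir_mode and file_mode mountOptions on the azurefile StorageClass or PersistentVolume are the usual way to make the mount writable by the non-root node user (UID 1000 in node:14-slim).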

How to deploy Drupal 7 on Kubernetes with a persistent store?

I'm trying to deploy Drupal 7 on Kubernetes. It fails with the error: Fatal error: require_once(): Failed opening required '/var/www/html/modules/system/system.install' (include_path='.:/usr/local/lib/php') in /var/www/html/includes/install.core.inc on line 241.
Here is the K8S deployment manifest:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drupal-pvc
  annotations:
    pv.beta.kubernetes.io/gid: "drupal-gid"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: drupal-service
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
  selector:
    app: drupal
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: drupal
  name: drupal
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: drupal
    spec:
      initContainers:
      - name: init-sites-volume
        image: drupal:7.72
        command: ['/bin/bash', '-c']
        args: ['cp -r /var/www/html/sites/ /data/; chown www-data:www-data /data/ -R']
        volumeMounts:
        - mountPath: /data
          name: vol-drupal
      containers:
      - image: drupal:7.72
        name: drupal
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html/modules
          name: vol-drupal
          subPath: modules
        - mountPath: /var/www/html/profiles
          name: vol-drupal
          subPath: profiles
        - mountPath: /var/www/html/sites
          name: vol-drupal
          subPath: sites
        - mountPath: /var/www/html/themes
          name: vol-drupal
          subPath: themes
      volumes:
      - name: vol-drupal
        persistentVolumeClaim:
          claimName: drupal-pvc
However, when I remove the volumeMounts from the drupal container, it works. I need to use volumes in order to persist the website data. Can anyone suggest a fix?
Update: I have also added the manifest for the persistent volume claim.
Check whether you can write to the mounted volume:
# kubectl exec -it drupal-zxxx -- sh
$ ls -alhtr /var/www/html/modules
$ cd /var/www/html/modules
$ touch test.txt
Storage configured with a group ID (GID) allows writing only by pods using the same GID; mismatched or missing GIDs cause permission-denied errors.
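If the touch fails with a permission error, one option (a sketch; the group must be the numeric GID that actually owns the files on the volume, and www-data is GID 33 in the Debian-based drupal image) is a pod-level fsGroup so the mounted paths become group-writable for the pod:
spec:
  securityContext:
    # make the volume group-owned by GID 33 (www-data) when it is mounted
    fsGroup: 33
  containers:
  - image: drupal:7.72
    name: drupal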
Alternatively, you could try an operator for Drupal:
https://github.com/geerlingguy/drupal-operator
A Helm chart is another option:
https://bitnami.com/stack/drupal/helm

Postgres / K8S : PANIC could not locate a valid checkpoint record / CrashLoopBackOff

Postgres can't start, giving the error:
PANIC could not locate a valid checkpoint record
There are a lot of solutions on Google, but all of them require connecting to the pod to execute some pg commands. Since I use K8S, my pod falls into the CrashLoopBackOff status, so I can't connect to it anymore.
What should I do to fix my Postgres DB?
EDIT:
I have tried to run the command:
pg_resetwal /var/lib/postgresql/data
with:
...
spec:
  containers:
  - args:
    - pg_resetwal
    - /var/lib/postgresql/data
But I get:
pg_resetwal: cannot be executed by "root"
You must run pg_resetwal as the PostgreSQL superuser.
So I can't go any further.
EDIT2:
I tried to run a new pod with the same volumes attached and the same Postgres container, but changed the command to pg_resetwal /var/lib/postgresql/data.
I also added:
securityContext:
  runAsUser: 0
Here is the YAML for the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
  labels:
    app: metadata-postgres-fix
  name: metadata-postgres-fix
  namespace: metadata
spec:
  selector:
    matchLabels:
      app: metadata-postgres-fix
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: metadata-postgres-fix
    spec:
      containers:
      - args:
        - pg_resetwal
        - /var/lib/postgresql/data
        envFrom:
        - secretRef:
            name: metadata-env
        image: postgres:11.3
        name: metadata-postgres-fix
        securityContext:
          runAsUser: 0
        ports:
        - containerPort: 5432
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /etc/postgresql/postgresql.conf
          name: metadata-postgres-data
          subPath: postgres.conf
        - mountPath: /docker-entrypoint-initdb.d/init.sh
          name: metadata-postgres-data
          subPath: init.sh
        - mountPath: /var/lib/postgresql/data
          name: metadata-postgres-claim
          subPath: postgres
      restartPolicy: Always
      volumes:
      - name: metadata-postgres-data
        configMap:
          name: cfgmap-metadata-postgres
      - name: metadata-postgres-claim
        persistentVolumeClaim:
          claimName: metadata-postgres-claim
      nodeSelector:
        kops.k8s.io/instancegroup: nodes
I solved it by replacing
- args:
  - pg_resetwal
  - /var/lib/postgresql/data
with a pause, so that I could look up the UID of the postgres user:
- args:
  - sleep
  - "1000"
With
cat /etc/passwd
I could see that the postgres UID is 999, and finally I changed runAsUser: 0 to runAsUser: 999.
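Put together, the relevant part of the one-off repair Deployment looks roughly like this (a sketch assembled from the steps above; 999 is the postgres UID found in /etc/passwd of the postgres:11.3 image):
containers:
- name: metadata-postgres-fix
  image: postgres:11.3
  securityContext:
    # run as the postgres user (UID 999 in this image) instead of root
    runAsUser: 999
  args:
  - pg_resetwal
  - /var/lib/postgresql/data
  volumeMounts:
  - mountPath: /var/lib/postgresql/data
    name: metadata-postgres-claim
    subPath: postgres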

OpenShift: Accessing mounted file-system as non-root

I am trying to run Chart Museum as a non-root user in OpenShift. Here is a snapshot of my YAML.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chart-museum
  namespace: demo
spec:
  selector:
    matchLabels:
      app: chart-museum
  replicas: 1
  template:
    metadata:
      labels:
        app: chart-museum
    spec:
      volumes:
      - name: pvc-charts
        persistentVolumeClaim:
          claimName: pvc-charts
      containers:
      - name: chart-museum
        securityContext:
          fsGroup: 1000
        image: chartmuseum/chartmuseum:latest
        ports:
        - containerPort: 8080
        envFrom:
        - configMapRef:
            name: chart-museum
        volumeMounts:
        - name: pvc-charts
          mountPath: "/charts"
As you can see, I have set spec.containers.securityContext.fsGroup to 1000, which is the same as the user ID in the Chart Museum Dockerfile shown below.
FROM alpine:3.10.3
RUN apk add --no-cache cifs-utils ca-certificates \
&& adduser -D -u 1000 chartmuseum
COPY bin/linux/amd64/chartmuseum /chartmuseum
USER 1000
ENTRYPOINT ["/chartmuseum"]
And yet, when I try to upload a chart, I get a permission-denied message for /charts. How do I get around this issue?
It's related to Kubernetes and how the given Persistent Volume is defined. You can check all the discussion and possible workarounds in the related GH Issue.
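One common workaround for a persistent volume that is mounted root-owned (a sketch; on OpenShift the restricted SCC may not allow runAsUser: 0, in which case the securityContext approach in the next answer is the better route) is to chown the mount with an init container before the non-root chartmuseum user writes to it:
initContainers:
- name: fix-charts-permissions
  image: busybox:1.36
  # hand the mounted share to UID/GID 1000, the chartmuseum user in the image
  command: ["sh", "-c", "chown -R 1000:1000 /charts"]
  securityContext:
    runAsUser: 0
  volumeMounts:
  - name: pvc-charts
    mountPath: /charts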
Add the chmod/chown lines to your Dockerfile:
FROM alpine:3.10.3
RUN apk add --no-cache cifs-utils ca-certificates \
&& adduser -D -u 1000 chartmuseum
COPY bin/linux/amd64/chartmuseum /chartmuseum
RUN chmod +xr /chartmuseum
RUN chown 1000:1000 /chartmuseum
USER 1000
ENTRYPOINT ["/chartmuseum"]
Modify your spec.template.spec.containers.securityContext to ensure the user and group will be enforced.
containers:
- name: chart-museum
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
This is how I solved the issue.
Download the binary: curl -LO https://s3.amazonaws.com/chartmuseum/release/latest/bin/linux/amd64/chartmuseum
Change the permissions: chmod +xr chartmuseum
Create a new Dockerfile as shown below. Basically, use the user name instead of the ID for the chown commands so that the binary and the storage location are owned by the chartmuseum user and not by root.
FROM alpine:3.10.3
RUN apk add --no-cache cifs-utils ca-certificates \
&& adduser -D -u 1000 chartmuseum
COPY chartmuseum /chartmuseum
RUN chown chartmuseum:chartmuseum /chartmuseum
RUN chown chartmuseum:chartmuseum /charts
USER chartmuseum
ENTRYPOINT ["/chartmuseum"]
Build and push the resulting Docker image to e.g. somerepo/chartmuseum:0.0.0.
Use the k8s manifest shown below. Edit the namespace as required. Note that creation of the PersistentVolumeClaim is not covered here.
kind: ConfigMap
apiVersion: v1
metadata:
  name: chart-museum
  namespace: demo
data:
  DEBUG: 'true'
  STORAGE: local
  STORAGE_LOCAL_ROOTDIR: "/charts"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chart-museum
  namespace: demo
spec:
  selector:
    matchLabels:
      app: chart-museum
  replicas: 1
  template:
    metadata:
      labels:
        app: chart-museum
    spec:
      volumes:
      - name: pvc-charts
        persistentVolumeClaim:
          claimName: pvc-charts
      containers:
      - name: chart-museum
        image: somerepo/chartmuseum:0.0.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        envFrom:
        - configMapRef:
            name: chart-museum
        volumeMounts:
        - mountPath: "/charts"
          name: pvc-charts
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
      imagePullSecrets:
      - name: us.icr.io.secret
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: chart-museum
  name: chart-museum
  namespace: demo
spec:
  type: ClusterIP
  ports:
  - name: 8080-tcp
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: chart-museum
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: chart-museum
  name: chart-museum
  namespace: demo
spec:
  port:
    targetPort: 8080-tcp
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: edge
  to:
    kind: Service
    name: chart-museum
The manifest creates a ConfigMap object and uses a PersistentVolumeClaim to 'replicate' the command used to run Chart Museum locally (as described at https://chartmuseum.com/):
docker run --rm -it \
-p 8080:8080 \
-v $(pwd)/charts:/charts \
-e DEBUG=true \
-e STORAGE=local \
-e STORAGE_LOCAL_ROOTDIR=/charts \
chartmuseum/chartmuseum:latest
The Service and Route in the manifest expose the repo to the external world.
After the objects are created, open the HOST/PORT value shown by oc get route/chart-museum -n demo in a browser over https. You should see the Chart Museum welcome page, which means the installation was successful.

CrashLoopBackOff when increasing the replica count above 1 on an Azure AKS cluster for a MongoDB image

I am deploying MongoDB to Azure AKS with an Azure File share as the volume (using a persistent volume and a persistent volume claim). If I increase replicas to more than one, CrashLoopBackOff occurs: only one pod gets created and the others fail.
My Dockerfile to create the MongoDB image:
FROM ubuntu:16.04
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb http://repo.mongodb.org/apt/ubuntu $(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d= -f2)/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update && apt-get install -y mongodb-org
EXPOSE 27017
ENTRYPOINT ["/usr/bin/mongod"]
YAML file for the Deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: <my image of mongodb>
        ports:
        - containerPort: 27017
          protocol: TCP
          name: mongo
        volumeMounts:
        - mountPath: /data/db
          name: az-files-mongo-storage
      volumes:
      - name: az-files-mongo-storage
        persistentVolumeClaim:
          claimName: mong-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app: mongo
For your issue, you can take a look at another question with the same error. It seems you cannot initialize the same volume when another pod has already done so for Mongo. Given the error, I suggest you use the volume only to store the data and do the initialization in the Dockerfile when creating the image. Or, better, create a volume for every pod through a StatefulSet, which is the recommended approach.
Update:
The YAML below should work for you:
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: mongo
  replicas: 3
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: charlesacr.azurecr.io/mongodb:v1
        ports:
        - containerPort: 27017
          name: mongo
        volumeMounts:
        - name: az-files-mongo-storage
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: az-files-mongo-storage
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: az-files-mongo-storage
      resources:
        requests:
          storage: 5Gi
And you need to create the StorageClass before you create the StatefulSet. The YAML file is below:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: az-files-mongo-storage
provisioner: kubernetes.io/azure-file
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=1000
- gid=1000
parameters:
  skuName: Standard_LRS
Then the pods run correctly.
You can configure accessModes: - ReadWriteMany, but the volume or storage type must support this mode; see the access-modes table in the Kubernetes documentation.
According to that table, Azure File supports ReadWriteMany but Azure Disk does not.
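For example, with the az-files-mongo-storage StorageClass above, the claim template could request ReadWriteMany instead (a sketch reusing the names defined earlier):
volumeClaimTemplates:
- metadata:
    name: az-files-mongo-storage
  spec:
    accessModes:
    - ReadWriteMany  # supported by Azure File, not by Azure Disk
    storageClassName: az-files-mongo-storage
    resources:
      requests:
        storage: 5Gi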
You should be using StatefulSets for MongoDB; Deployments are for stateless services.