I added readOnlyRootFilesystem: true to my deployment, but running my code ends with the following error:
OSError: [Errno 30] Read-only file system: '/project/logs/dbt.log'
But /project/logs/dbt.log is NOT a root path.
Any idea why this happens?
Here's a more complete excerpt of the manifest I'm using:
spec:
  containers:
  - ...
    securityContext:
      capabilities:
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
  ...
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 101
You can mount a temporary volume (with the same lifespan as your pod) at the log path, so the writes no longer hit the read-only root filesystem:
spec:
  volumes:
  - name: logs
    emptyDir: {}
  containers:
  - ...
    securityContext:
      capabilities:
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
    volumeMounts:
    - name: logs
      mountPath: /project/logs
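If you want to double-check that the mount is writable before re-running dbt, something like the following should work (the pod and container names are placeholders for your own):
kubectl exec -it my-dbt-pod -c my-dbt-container -- touch /project/logs/test.log
kubectl exec -it my-dbt-pod -c my-dbt-container -- ls -l /project/logs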
Related
My Jenkins agent is deployed on Kubernetes; here is the agent YAML:
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: slave
    cluster: dev-monitor-platform
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: jenkins
            operator: In
            values:
            - ci
  securityContext:
    runAsUser: 0
    runAsGroup: 0
    fsGroup: 0
  containers:
  - name: slave-docker
    image: harbor.mycompany.net/jenkins/docker:19.03-git
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 50m
        memory: 256Mi
      limits:
        cpu: 100m
        memory: 512Mi
    securityContext:
      privileged: true
    command:
    - cat
    tty: true
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-sock
    - mountPath: /root/.m2
      name: jenkins-maven-m2
    - mountPath: /home/jenkins/
      name: workspace-volume
      readOnly: false
  - name: jnlp
    image: harbor.mycompany.net/jenkins/inbound-agent:alpine-jdk11
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 50m
        memory: 256Mi
      limits:
        cpu: 100m
        memory: 512Mi
    volumeMounts:
    - mountPath: /home/jenkins/
      name: workspace-volume
      readOnly: false
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
  - name: workspace-volume
    emptyDir: {}
  - name: jenkins-maven-m2
    nfs:
      path: /export/mid-devops/jenkins/m2
      server: xxx.xxx.xxx.xxx
Pulling the code on the Jenkins master itself is fast.
However, when the agent checks out the pipeline code, it always gets stuck for about one and a half minutes, while checking out the application code on the agent is fast.
This happens every time, and I have no idea why.
I expect the agent not to get stuck when checking out code.
I can't seem to understand why the pod manifest below stops working if I remove spec.containers.command: the pod fails without it.
I took this example from the official documentation.
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
Because the busybox image doesn't run any process on its own at startup. Containers are designed to run a single application and shut down when that application exits; if the image doesn't run anything, the container exits immediately. In Kubernetes, spec.containers.command overrides the image's default command. If you change the image to, for example, image: nginx, you can remove spec.containers.command and the pod will keep running, because that image starts an Nginx server by default.
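For example, a minimal sketch of that variant (the pod name is made up, and the runAsUser/runAsGroup settings from the demo are dropped here because the stock nginx image expects to start as root):
apiVersion: v1
kind: Pod
metadata:
  name: default-command-demo
spec:
  containers:
  - name: web
    image: nginx   # no command: given; the image's default entrypoint starts the server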
Background:
I'm trying to use goreplay to mirror traffic to another destination.
I found that a Kubernetes Service load-balances at layer 4, which prevents goreplay from capturing the traffic, so I decided to add a reverse-proxy sidecar inside the pod, just like Istio does.
Here is my pod YAML:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: proxy
    resources:
      limits:
        cpu: "2"
        memory: 1Gi
      requests:
        cpu: 10m
        memory: 40Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/nginx/conf.d
      name: default
  initContainers:
  - command:
    - iptables
    args:
    - -t
    - nat
    - -A
    - PREROUTING
    - -p
    - tcp
    - --dport
    - "80"
    - -j
    - REDIRECT
    - --to-ports
    - "15001"
    image: soarinferret/iptablesproxy
    imagePullPolicy: IfNotPresent
    name: istio-init
    resources:
      limits:
        cpu: 100m
        memory: 50Mi
      requests:
        cpu: 10m
        memory: 10Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: false
      runAsGroup: 0
      runAsNonRoot: false
      runAsUser: 0
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  dnsPolicy: ClusterFirst
  terminationGracePeriodSeconds: 30
  volumes:
  - configMap:
      defaultMode: 256
      name: default
      optional: false
    name: default
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
---
apiVersion: v1
data:
  default.conf: |
    server {
        listen       15001;
        server_name  localhost;
        access_log   /var/log/nginx/host.access.log  main;
        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }
kind: ConfigMap
metadata:
  name: default
  namespace: default
I use kubectl port-forward service/nginx 8080:80 and then curl http://localhost:8080, but the traffic is sent directly to nginx, not to my proxy.
WHAT I WANT:
A way to let goreplay capture traffic that is load-balanced by a Kubernetes Service.
The correct iptables rule to route traffic to my proxy sidecar.
Thanks for any help!
As Jonyhy96 mentioned in the comments, the only thing that needs to change here is setting the privileged value to true in the securityContext field of the init container.
Privileged - determines if any container in a pod can enable privileged mode. By default a container is not allowed to access any devices on the host, but a "privileged" container is given access to all devices on the host. This allows the container nearly all the same access as processes running on the host. This is useful for containers that want to use Linux capabilities like manipulating the network stack and accessing devices.
So the init container would look like this:
initContainers:
- command:
  - iptables
  args:
  - -t
  - nat
  - -A
  - PREROUTING
  - -p
  - tcp
  - --dport
  - "80"
  - -j
  - REDIRECT
  - --to-ports
  - "15001"
  image: soarinferret/iptablesproxy
  imagePullPolicy: IfNotPresent
  name: istio-init
  resources:
    limits:
      cpu: 100m
      memory: 50Mi
    requests:
      cpu: 10m
      memory: 10Mi
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      add:
      - NET_ADMIN
      - NET_RAW
      drop:
      - ALL
    privileged: true   # <---- changed from false
    readOnlyRootFilesystem: false
    runAsGroup: 0
    runAsNonRoot: false
    runAsUser: 0
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File
There is a very good tutorial about this; it is not specifically about nginx, but it explains how to actually build the proxy.
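If you want to confirm the redirect rule was actually installed, checking that the init container finished successfully is a quick first step (the pod is assumed to be named nginx, as in the question):
kubectl get pod nginx -o jsonpath='{.status.initContainerStatuses[0].state}'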
The above securityContext works, except that it also requires changing to
allowPrivilegeEscalation: true
The following trimmed-down version also works on GKE (Google Kubernetes Engine):
securityContext:
  capabilities:
    add:
    - NET_ADMIN
    drop:
    - ALL
  privileged: true
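To actually have goreplay see the redirected traffic, one option is to run it as another sidecar in the same pod and sniff the proxy's port. This is only a rough sketch under a few assumptions: that a goreplay image such as buger/goreplay is available to you, that its entrypoint is the goreplay binary, and that raw packet capture is allowed in your cluster; the container name and output target are placeholders:
  - name: goreplay
    image: buger/goreplay          # assumed image; use whatever goreplay build you have
    args:
    - --input-raw
    - :15001                       # the port the proxy sidecar listens on
    - --output-stdout              # or e.g. --output-http=http://mirror-target
    securityContext:
      capabilities:
        add:
        - NET_RAW                  # raw-socket capture needs at least NET_RAW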
I have a problem sharing one volume among three containers in a pod and I would like to know how to solve it.
I have a DaemonSet whose pod runs 3 containers:
s3fs: This container mounts an S3 bucket as a filesystem into an empty volume.
myapp1: This container needs to mount the volume subPath "/books" at "/books".
myapp2: This container needs to mount the volume subPath "/books/name1" at "/name1" and the volume subPath "/books/name2" at "/name2".
I think the problem may be in how I manage the subPaths in myapp1 and myapp2.
This is the YAML of the DaemonSet:
volumes:
- name: myapp-data
  emptyDir: {}
containers:
- name: s3fs
  volumeMounts:
  - name: myapp-data
    mountPath: "/data/s3"
    mountPropagation: Bidirectional
  securityContext:
    capabilities:
      add:
      - SYS_ADMIN
    privileged: true
- name: myapp1
  volumeMounts:
  - name: myapp-data
    mountPath: "/books"
    subPath: "/books"
    mountPropagation: HostToContainer
  securityContext:
    capabilities:
      add:
      - SYS_ADMIN
    privileged: true
- name: myapp2
  volumeMounts:
  - name: myapp-data
    mountPath: "/name1"
    subPath: books/name1
    mountPropagation: HostToContainer
  - name: myapp-data
    mountPath: "/name2"
    subPath: books/name2
    mountPropagation: HostToContainer
  securityContext:
    capabilities:
      add:
      - SYS_ADMIN
    privileged: true
With this YAML, the sharing only works for myapp1, not for myapp2. However, if I remove myapp1 from the YAML, myapp2 shares the volume properly.
Any help is appreciated.
Thanks,
When I try to mount the application log volume from the container to the host, I get the error: Operation not permitted.
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  initContainers:
  - name: volume-mount-permission
    image: xx.xx.xx.xx/orchestration/credit-card
    command:
    - sh
    - -c
    - chown -R 1000:1000 /opt/payara/appserver/glassfish/logs/credit-card
    - chgrp 1000 /opt/payara/appserver/glassfish/logs/credit-card
    volumeMounts:
    - name: card-corp-logs
      mountPath: /opt/payara/appserver/glassfish/logs/credit-card
      readOnly: false
  containers:
  - name: credit-card
    image: xx.xx.xx.xx/orchestration/credit-card
    imagePullPolicy: Always
    securityContext:
      privileged: true
      runAsUser: 1000
    ports:
    - name: credit-card
      containerPort: 8080
    readinessProbe:
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
      successThreshold: 1
    volumeMounts:
    - name: override-setting-storage
      mountPath: /p/config
    - name: credit-card-teamsite
      mountPath: /var/credit-card/teamsite/card_corp
The container path /opt/payara/appserver/glassfish/logs/credit-card should be mounted to a hostPath.
Can anyone please help me see where I am making a mistake in the deployment YAML file?
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
means you cannot chown 1000:1000 because that user is not a member of group 1000.
Likely you will want to run that initContainer as runAsUser: 0 in order to allow it to perform arbitrary chown operations.
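A minimal sketch of what that could look like, keeping your image and paths (note also that in your current manifest the chgrp line is passed as a fourth list item after sh -c, so it is never executed; folding everything into the single -c string avoids that):
  initContainers:
  - name: volume-mount-permission
    image: xx.xx.xx.xx/orchestration/credit-card
    securityContext:
      runAsUser: 0               # run only this init container as root so chown is permitted
    command:
    - sh
    - -c
    - chown -R 1000:3000 /opt/payara/appserver/glassfish/logs/credit-card   # match runAsUser/runAsGroup; adjust as needed
    volumeMounts:
    - name: card-corp-logs
      mountPath: /opt/payara/appserver/glassfish/logs/credit-card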
You also truncated your YAML that would have specified the volumes: that are being mounted by your volumeMounts: -- there is a chance that you are trying to mount a volume type that -- regardless of your readOnly: false declaration -- cannot be modified. ConfigMap, Secret, Downward API, and a bunch of others also will not respond to mutation requests, even as root.
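For reference, a writable declaration for that log directory would be something like the hostPath sketch below; the host path itself is a placeholder, since the volumes: section was not included in your YAML:
  volumes:
  - name: card-corp-logs
    hostPath:
      path: /var/log/credit-card   # placeholder host directory
      type: DirectoryOrCreate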