iptables in Kubernetes init container doesn't work - kubernetes

Background:
I'm trying to use goreplay to mirror traffic to another destination.
I found that a k8s Service load-balances at layer 4, which means the traffic cannot be captured by goreplay, so I decided to add a reverse-proxy sidecar inside the pod, just like Istio does.
Here is my pod yaml:
apiVersion: v1
kind: Pod
metadata:
name: nginx
namespace: default
labels:
app: nginx
spec:
containers:
- image: nginx
imagePullPolicy: IfNotPresent
name: nginx
ports:
- containerPort: 80
name: http
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
- image: nginx
imagePullPolicy: IfNotPresent
name: proxy
resources:
limits:
cpu: "2"
memory: 1Gi
requests:
cpu: 10m
memory: 40Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/nginx/conf.d
name: default
initContainers:
- command:
- iptables
args:
- -t
- nat
- -A
- PREROUTING
- -p
- tcp
- --dport
- "80"
- -j
- REDIRECT
- --to-ports
- "15001"
image: soarinferret/iptablesproxy
imagePullPolicy: IfNotPresent
name: istio-init
resources:
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 10m
memory: 10Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_ADMIN
- NET_RAW
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 256
name: default
optional: false
name: default
---
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: default
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: nginx
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
---
apiVersion: v1
data:
default.conf: |
server {
listen 15001;
server_name localhost;
access_log /var/log/nginx/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
kind: ConfigMap
metadata:
name: default
namespace: default
I used kubectl port-forward service/nginx 8080:80 and then curl http://localhost:8080, but the traffic was sent directly to nginx, not to my proxy.
WHAT I WANT:
A way to let goreplay capture the traffic that is load balanced by the k8s service.
The correct iptables rule to successfully route traffic to my proxy sidecar.
Thanks for any help!

As Jonyhy96 mentioned in the comments, the only thing that needs to be changed here is the privileged value, which must be set to true in the securityContext field of the initContainer.
Privileged - determines if any container in a pod can enable privileged mode. By default a container is not allowed to access any devices on the host, but a "privileged" container is given access to all devices on the host. This allows the container nearly all the same access as processes running on the host. This is useful for containers that want to use linux capabilities like manipulating the network stack and accessing devices.
So the initContainer would look like this:
initContainers:
- command:
- iptables
args:
- -t
- nat
- -A
- PREROUTING
- -p
- tcp
- --dport
- "80"
- -j
- REDIRECT
- --to-ports
- "15001"
image: soarinferret/iptablesproxy
imagePullPolicy: IfNotPresent
name: istio-init
resources:
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 10m
memory: 10Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_ADMIN
- NET_RAW
drop:
- ALL
privileged: true <---- changed from false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
There is a very good tutorial about that; it is not exactly about nginx, but it explains how to actually build the proxy.
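To confirm the redirect actually takes effect once the privileged init container has run, a quick end-to-end check can be done from a throwaway client pod (a minimal sketch, assuming the manifests above are saved as nginx.yaml and that the curlimages/curl image is available):
# Apply the Pod, Service and ConfigMap, then wait for the pod to become Ready
kubectl apply -f nginx.yaml
kubectl wait --for=condition=Ready pod/nginx --timeout=120s
# Hit the Service from inside the cluster; with the REDIRECT rule in place,
# port 80 traffic should land on the proxy container listening on 15001
kubectl run curl-client --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sv http://nginx.default.svc.cluster.local/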

The above securityContext works, except that it also requires changing
allowPrivilegeEscalation: true
The following trimmed down version also works on GKE (Google Kubernetes Engine):
securityContext:
capabilities:
add:
- NET_ADMIN
drop:
- ALL
privileged: true
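To verify that the init container actually managed to install the rule, its status and logs can be checked after the pod starts (a minimal sketch, assuming the pod is still named nginx):
# The init container should report a terminated state with reason Completed
kubectl get pod nginx -o jsonpath='{.status.initContainerStatuses[0].state.terminated.reason}'
# Its logs should be free of iptables permission errors
kubectl logs nginx -c istio-init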

Related

Jenkins agent on k8s is slow when checking out code

My Jenkins agent is deployed on k8s; here is the agent YAML:
---
apiVersion: v1
kind: Pod
metadata:
labels:
jenkins: slave
cluster: dev-monitor-platform
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: jenkins
operator: In
values:
- ci
securityContext:
runAsUser: 0
runAsGroup: 0
fsGroup: 0
containers:
- name: slave-docker
image: harbor.mycompany.net/jenkins/docker:19.03-git
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 50m
memory: 256Mi
limits:
cpu: 100m
memory: 512Mi
securityContext:
privileged: true
command:
- cat
tty: true
volumeMounts:
- mountPath: /var/run/docker.sock
name: docker-sock
- mountPath: /root/.m2
name: jenkins-maven-m2
- mountPath: /home/jenkins/
name: workspace-volume
readOnly: false
- name: jnlp
image: harbor.mycompany.net/jenkins/inbound-agent:alpine-jdk11
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 50m
memory: 256Mi
limits:
cpu: 100m
memory: 512Mi
volumeMounts:
- mountPath: /home/jenkins/
name: workspace-volume
readOnly: false
volumes:
- name: docker-sock
hostPath:
path: /var/run/docker.sock
- name: workspace-volume
emptyDir: {}
- name: jenkins-maven-m2
nfs:
path: /export/mid-devops/jenkins/m2
server: xxx.xxx.xxx.xxx
The master itself pulls code quickly.
However, when the agent tries to pull the pipeline code, it always gets stuck for about one and a half minutes.
When the agent pulls the application code, it is fast.
This problem happens every time, and I have no idea why.
I expect the agent not to get stuck when checking out code.
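One way to narrow down where the time goes is to run the same clone by hand inside the agent pod and time it (a rough diagnostic sketch; the pod name and repository URL are placeholders, and the slave-docker container is used because its image ships with git):
kubectl exec -it <agent-pod-name> -c slave-docker -- \
  sh -c 'time git clone --depth 1 https://your.git.server/your/pipeline-repo.git /tmp/checkout-test'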

Fluentd is unable to write the logs to the /fluentd/log directory

I have deployed a fluentd sidecar container with my application in a pod to collect logs from my app.
Here's my sidecar manifest sidecar.yaml:
spec:
template:
spec:
containers:
- name: fluentd
image: fluent/fluentd
ports:
- containerPort: 24224
protocol: TCP
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /etc/td-agent/config.d
name: configmap-sidecar-volume
securityContext:
runAsUser: 101
runAsGroup: 101
I used this manifest and patched it to my deployment using the following command:
kubectl patch deployment my-deployment --patch "$(cat sidecar.yaml)"
The deployment was successfully updated; however, the fluentd container can't seem to start and throws the following error:
2020-10-16 09:07:07 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
2020-10-16 09:07:08 +0000 [info]: gem 'fluentd' version '1.11.2'
2020-10-16 09:07:08 +0000 [warn]: [output_docker1] 'time_format' specified without 'time_key', will be ignored
2020-10-16 09:07:08 +0000 [error]: config error file="/fluentd/etc/fluent.conf" error_class=Fluent::ConfigError error="out_file: `/fluentd/log/docker.20201016.log` is not writable"
This is my fluent.conf file:
<source>
@type forward
bind 127.0.0.1
port 24224
<parse>
@type json
</parse>
</source>
What is causing this issue?
fluentd's UID defaults to 1000 unless it is changed via the FLUENT_UID environment variable.
The error "/fluentd/log/docker.20201016.log is not writable" says that your user 101 does not have write permission on the log file. Change the security context to 1000, as in the manifest below, or set the environment variable FLUENT_UID=101 (see the sketch after the manifest), to fix the issue.
spec:
template:
spec:
containers:
- name: fluentd
image: fluent/fluentd
ports:
- containerPort: 24224
protocol: TCP
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /etc/td-agent/config.d
name: configmap-sidecar-volume
securityContext:
runAsUser: 1000
runAsGroup: 1000
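Alternatively, if the container has to keep running as user 101, the FLUENT_UID route mentioned above can be applied without editing the securityContext (a minimal sketch, assuming the deployment is named my-deployment and the container fluentd, as in the question):
# Point the image's entrypoint at UID 101 instead of the default 1000
kubectl set env deployment/my-deployment -c fluentd FLUENT_UID=101
kubectl rollout status deployment/my-deployment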
Related resources:
User id mapping between host and container
No more access to mounted /fluentd/logs folder

How to mount a container's volume (non-root user) to the root user on the host in Kubernetes?

When I am trying to mount the application log volume from the container to the host,
I am getting the error: Operation not permitted
spec:
securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
initContainers:
- name: volume-mount-permission
image: xx.xx.xx.xx/orchestration/credit-card
command:
- sh
- -c
- chown -R 1000:1000 /opt/payara/appserver/glassfish/logs/credit-card
- chgrp 1000 /opt/payara/appserver/glassfish/logs/credit-card
volumeMounts:
- name: card-corp-logs
mountPath: /opt/payara/appserver/glassfish/logs/credit-card
readOnly: false
containers:
- name: credit-card
image: xx.xx.xx.xx/orchestration/credit-card
imagePullPolicy: Always
securityContext:
privileged: true
runAsUser: 1000
ports:
- name: credit-card
containerPort: 8080
readinessProbe:
httpGet:
path: /
port: 8080
initialDelaySeconds: 10
periodSeconds: 5
successThreshold: 1
volumeMounts:
- name: override-setting-storage
mountPath: /p/config
- name: credit-card-teamsite
mountPath: /var/credit-card/teamsite/card_corp
Container path /opt/payara/appserver/glassfish/logs/credit-card should be mounted to a hostPath.
Can anyone please help me figure out where I am making a mistake in the deployment YAML file?
securityContext:
runAsUser: 1000
runAsGroup: 3000
means you cannot chown 1000:1000, because that user is not a member of group 1000.
Likely you will want to run that initContainer: as runAsUser: 0 in order to allow it to perform arbitrary chown operations.
You also truncated your YAML that would have specified the volumes: that are being mounted by your volumeMounts: -- there is a chance that you are trying to mount a volume type that -- regardless of your readOnly: false declaration -- cannot be modified. ConfigMap, Secret, Downward API, and a bunch of others also will not respond to mutation requests, even as root.
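For example, only the init container can be switched to root, leaving the application container at runAsUser: 1000 (a hedged sketch, assuming the pod template above belongs to a Deployment named credit-card):
# Run just the init container as root so its chown/chgrp calls are permitted
kubectl patch deployment credit-card --type=json -p '[
  {"op": "add",
   "path": "/spec/template/spec/initContainers/0/securityContext",
   "value": {"runAsUser": 0, "runAsGroup": 0}}
]'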

Disable HTTPS and change port for SkyDNS

I currently have SkyDNS that is trying to connect via HTTPS and is using port 443.
I1221 01:15:28.199437 1 server.go:91] Using https://10.100.0.1:443 for kubernetes master
I1221 01:15:28.199440 1 server.go:92] Using kubernetes API <nil>
I1221 01:15:28.199637 1 server.go:132] Starting SkyDNS server. Listening on port:10053
I want it to use HTTP and port 8080 instead.
My YAML file is:
apiVersion: v1
kind: ReplicationController
spec:
replicas: 1
selector:
k8s-app: kube-dns
version: v18
template:
metadata:
creationTimestamp: null
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
version: v18
spec:
containers:
- args:
- --domain=kube.local
- --dns-port=10053
image: gcr.io/google_containers/kubedns-amd64:1.6
imagePullPolicy: IfNotPresent
name: kubedns
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
resources:
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
terminationMessagePath: /dev/termination-log
- args:
- --cache-size=1000
- --no-resolv
- --server=127.0.0.1#10053
image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
imagePullPolicy: IfNotPresent
name: dnsmasq
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
- args:
- -cmd=nslookup kubernetes.default.svc.kube.local 127.0.0.1 >/dev/null &&
nslookup kubernetes.default.svc.kube.local 127.0.0.1:10053 >/dev/null
- -port=8080
- -quiet
image: gcr.io/google_containers/exechealthz-amd64:1.0
imagePullPolicy: IfNotPresent
name: healthz
ports:
- containerPort: 8080
protocol: TCP
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
I understand this might not be a favorable design, but is there a way I can change the protocol and port?
Adding the line below to the kubedns container's spec.containers.args section did the trick:
- --kube-master-url=http://master:8080
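After the pod restarts, the kubedns logs should show the new master URL instead of https://10.100.0.1:443 (a small sketch; the label selector matches the ReplicationController above):
kubectl logs -l k8s-app=kube-dns -c kubedns --tail=50 | grep "for kubernetes master"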

External nameserver trouble for kubernetes pods

My pods cannot resolve the external world (for example, for mail, ...). How can I add the Google nameservers to the cluster? For info, the host resolves it without problems and has nameservers configured.
The problem was that the liveness check was making SkyDNS fail; I changed it as below.
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v10
namespace: kube-system
labels:
k8s-app: kube-dns
version: v10
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kube-dns
version: v10
template:
metadata:
labels:
k8s-app: kube-dns
version: v10
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: etcd
image: gcr.io/google_containers/etcd:2.0.9
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
command:
- /usr/local/bin/etcd
- -data-dir
- /var/etcd/data
- -listen-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -advertise-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -initial-cluster-token
- skydns-etcd
volumeMounts:
- name: etcd-storage
mountPath: /var/etcd/data
- name: kube2sky
image: gcr.io/google_containers/kube2sky:1.12
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
args:
# command = "/kube2sky"
- --domain=cluster.local
- name: skydns
image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
args:
# command = "/skydns"
- -machines=http://127.0.0.1:4001
- -addr=0.0.0.0:53
- -ns-rotate=false
- -domain=cluster.local.
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 15
#readinessProbe:
#httpGet:
#path: /healthz
#port: 8080
#scheme: HTTP
#initialDelaySeconds: 1
#timeoutSeconds: 5
- name: healthz
image: gcr.io/google_containers/exechealthz:1.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
args:
- -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
- -port=8080
ports:
- containerPort: 8080
protocol: TCP
volumes:
- name: etcd-storage
emptyDir: {}
dnsPolicy: Default # Don't use cluster DNS.
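With the probe relaxed and kube-dns healthy again, external resolution can be checked from a throwaway pod (a minimal sketch; busybox:1.36 is an assumed test image):
# External names should now resolve through the cluster DNS
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup google.com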