Fluentd not loading config from ConfigMap - kubernetes

I am using Fluentd as a sidecar to ship nginx logs to stdout so they show up in the Pod's logs. I have a strange problem: Fluentd is not picking up the configuration when the container starts.
Examining the Fluentd startup logs shows that the configuration is not being loaded. The config is supposed to be loaded from /etc/fluentd-config/fluentd.conf when the container starts. I have connected to the container: the config file is there and correct, the PV mounts are correct, and the environment variable exists.
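For example, the mount and the environment variable can be double-checked from the running sidecar with something like this (pod and container names taken from the spec below):
kubectl exec sidecar-example -c fdlogger -- printenv FLUENTD_ARGS
kubectl exec sidecar-example -c fdlogger -- cat /etc/fluentd-config/fluentd.conf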
The full deployment spec is below; it is self-contained if you feel like playing with it.
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: weblog-pv
    labels:
      type: local
  spec:
    storageClassName: manual
    accessModes:
    - ReadWriteOnce
    hostPath:
      path: /tmp/weblog
      type: DirectoryOrCreate
    capacity:
      storage: 500Mi
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: weblog-pvc
  spec:
    storageClassName: manual
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 200Mi
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: fluentd-config
  data:
    fluentd.conf: |
      <source>
        #type tail
        format none
        path /var/log/nginx/access.log
        tag count.format1
      </source>
      <match *.**>
        #type forward
        <server>
          name localhost
          host 127.0.0.1
        </server>
      </match>
- apiVersion: v1
  kind: Pod
  metadata:
    name: sidecar-example
    labels:
      app: webserver
  spec:
    containers:
    - name: nginx
      image: nginx:latest
      ports:
      - containerPort: 80
      volumeMounts:
      - name: logging-vol
        mountPath: /var/log/nginx
    - name: fdlogger
      env:
      - name: FLUENTD_ARGS
        value: -c /etc/fluentd-config/fluentd.conf
      image: fluent/fluentd
      volumeMounts:
      - name: logging-vol
        mountPath: /var/log/nginx
      - name: log-config
        mountPath: /etc/fluentd-config
    volumes:
    - name: logging-vol
      persistentVolumeClaim:
        claimName: weblog-pvc
    - name: log-config
      configMap:
        name: fluentd-config
- apiVersion: v1
  kind: Service
  metadata:
    name: sidecar-svc
  spec:
    selector:
      app: webserver
    type: NodePort
    ports:
    - name: sidecar-port
      port: 80
      nodePort: 32000

I got it to work by using stdout instead of redirecting to localhost.
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: fluentd-config
  data:
    fluent.conf: |
      <source>
        #type tail
        format none
        path /var/log/nginx/access.log
        tag count.format1
      </source>
      <match *.**>
        #type stdout
      </match>
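With the stdout output, the nginx access-log lines shipped by the sidecar should appear in that container's logs; a quick way to check (the NodePort comes from the Service above, the node IP is a placeholder):
curl http://<node-ip>:32000/
kubectl logs sidecar-example -c fdlogger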

Related

How to use nginx as a sidecar container for IIS in Kubernetes?

I have a strange result from using nginx and an IIS server together in a single Kubernetes pod. It seems to be an issue with nginx.conf. If I bypass nginx and go directly to IIS, I see the standard landing page.
However, when I try to go through the reverse proxy, I only get a partial page back.
Here are the files:
nginx.conf:
events {
    worker_connections 4096;  ## Default: 1024
}
http {
    server {
        listen 81;
        # Using a variable prevents nginx from checking the hostname at startup, which leads to a
        # container failure / restart loop, due to nginx starting faster than the IIS server.
        set $target "http://127.0.0.1:80/";
        location / {
            proxy_pass $target;
        }
    }
}
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    ...
  name: ...
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: ...
  template:
    metadata:
      labels:
        pod: ...
      name: ...
    spec:
      containers:
      - image: claudiubelu/nginx:1.15-1-windows-amd64-1809
        name: nginx-reverse-proxy
        volumeMounts:
        - mountPath: "C:/usr/share/nginx/conf"
          name: nginx-conf
        imagePullPolicy: Always
      - image: some-repo/proprietary-server-including-iis
        name: ...
        imagePullPolicy: Always
      nodeSelector:
        kubernetes.io/os: windows
      imagePullSecrets:
      - name: secret1
      volumes:
      - name: nginx-conf
        persistentVolumeClaim:
          claimName: pvc-nginx
Mapping the nginx.conf file from a volume is just a convenient way to rapidly test different configs. New configs can be swapped in using kubectl cp ./nginx/conf nginx-busybox-pod:/mnt/nginx/.
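A rough sketch of that workflow (the reverse-proxy pod name is hypothetical, and this assumes the nginx container sees the updated file and supports an in-place reload):
kubectl cp ./nginx/conf nginx-busybox-pod:/mnt/nginx/
kubectl exec <nginx-reverse-proxy-pod> -c nginx-reverse-proxy -- nginx -s reload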
Busybox pod (used to access the PVC):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-busybox-pod
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "360000"
    imagePullPolicy: Always
    name: busybox
    volumeMounts:
    - name: nginx-conf
      mountPath: "/mnt/nginx/conf"
  restartPolicy: Always
  volumes:
  - name: nginx-conf
    persistentVolumeClaim:
      claimName: pvc-nginx
  nodeSelector:
    kubernetes.io/os: linux
And lastly the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nginx
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  storageClassName: azurefile
Any ideas why?

Can not create ReadWrite filesystem in kubernetes (ReadOnly mount)

Summary
I am currently in the process of learning Kubernetes, so I have decided to start with an application that is simple (Mumble).
Setup
My setup is simple: I have one node (the master) from which I have removed the taint so Mumble can be deployed on it. This single node is running CentOS Stream, but SELinux is disabled.
The issue
The /srv/mumble directory appears to be read-only. I have tried creating an init container to chown the directory, but that fails for the same reason. The issue appears in both containers, and I am unsure at this point how to change this to allow the Mumble application to create files in that directory. The Mumble application runs as user 1000. What am I missing here?
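One way to confirm from inside the running pod whether the PV mount itself is read-only (the pod name here is a placeholder, and this assumes the image ships a shell):
kubectl -n mumble exec -it <mumble-pod> -- sh -c 'mount | grep /srv/mumble'
kubectl -n mumble exec -it <mumble-pod> -- touch /srv/mumble/.rw-test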
Configs
---
apiVersion: v1
kind: Namespace
metadata:
  name: mumble
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mumble-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    type: DirectoryOrCreate
    path: "/var/lib/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mumble-pv-claim
  namespace: mumble
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mumble-config
  namespace: mumble
data:
  murmur.ini: |
    **cut for brevity**
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mumble-deployment
  namespace: mumble
  labels:
    app: mumble
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mumble
  template:
    metadata:
      labels:
        app: mumble
    spec:
      initContainers:
      - name: storage-setup
        image: busybox:latest
        command: ["sh", "-c", "chown -R 1000:1000 /srv/mumble"]
        securityContext:
          privileged: true
          runAsUser: 0
        volumeMounts:
        - mountPath: "/srv/mumble"
          name: mumble-pv-storage
          readOnly: false
        - name: mumble-config
          subPath: murmur.ini
          mountPath: "/srv/mumble/config.ini"
          readOnly: false
      containers:
      - name: mumble
        image: phlak/mumble
        ports:
        - containerPort: 64738
        env:
        - name: TZ
          value: "America/Denver"
        volumeMounts:
        - mountPath: "/srv/mumble"
          name: mumble-pv-storage
          readOnly: false
        - name: mumble-config
          subPath: murmur.ini
          mountPath: "/srv/mumble/config.ini"
          readOnly: false
      volumes:
      - name: mumble-pv-storage
        persistentVolumeClaim:
          claimName: mumble-pv-claim
      - name: mumble-config
        configMap:
          name: mumble-config
          items:
          - key: murmur.ini
            path: murmur.ini
---
apiVersion: v1
kind: Service
metadata:
  name: mumble-service
spec:
  selector:
    app: mumble
  ports:
  - port: 64738
command: ["sh", "-c", "chown -R 1000:1000 /srv/mumble"]
It is not the volume that is mounted read-only; the ConfigMap is always mounted read-only, and the recursive chown also touches the ConfigMap-backed /srv/mumble/config.ini. Change the command to:
command: ["sh", "-c", "chown 1000:1000 /srv/mumble"]
and it will work.
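In context, the init container from the Deployment above would look like this with the non-recursive chown; the ConfigMap mount is dropped here as well, since the init container only has to fix ownership of the data volume (that second change is my suggestion, not part of the answer):
      initContainers:
      - name: storage-setup
        image: busybox:latest
        command: ["sh", "-c", "chown 1000:1000 /srv/mumble"]
        securityContext:
          privileged: true
          runAsUser: 0
        volumeMounts:
        - mountPath: "/srv/mumble"
          name: mumble-pv-storage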

Can't mount local host path in local kind cluster

Below is my Kubernetes file, and I need to do two things:
need to mount a folder with a file
need to mount a file with a startup script
I have both files in my local /tmp/zoo folder, but the zoo folder's files never appear in /bitnami/zookeeper inside the pod.
Below are the updated Service, Deployment, PVC, and PV.
kubernetes.yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.service.type: nodeport
    creationTimestamp: null
    labels:
      io.kompose.service: zookeeper
    name: zookeeper
  spec:
    ports:
    - name: "2181"
      port: 2181
      targetPort: 2181
    selector:
      io.kompose.service: zookeeper
    type: NodePort
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      kompose.service.type: nodeport
    creationTimestamp: null
    name: zookeeper
  spec:
    replicas: 1
    selector:
      matchLabels:
        io.kompose.service: zookeeper
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: zookeeper
      spec:
        containers:
        - image: bitnami/zookeeper:3
          name: zookeeper
          ports:
          - containerPort: 2181
          env:
          - name: ALLOW_ANONYMOUS_LOGIN
            value: "yes"
          resources: {}
          volumeMounts:
          - mountPath: /bitnami/zoo
            name: bitnamidockerzookeeper-zookeeper-data
        restartPolicy: Always
        volumes:
        - name: bitnamidockerzookeeper-zookeeper-data
          #hostPath:
          #  path: /tmp/tmp1
          persistentVolumeClaim:
            claimName: bitnamidockerzookeeper-zookeeper-data
  status: {}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: bitnamidockerzookeeper-zookeeper-data
      type: local
    name: bitnamidockerzookeeper-zookeeper-data
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 100Mi
  status: {}
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: foo
  spec:
    storageClassName: manual
    claimRef:
      name: bitnamidockerzookeeper-zookeeper-data
    capacity:
      storage: 100Mi
    accessModes:
    - ReadWriteMany
    hostPath:
      path: /tmp/tmp1
  status: {}
kind: List
metadata: {}
A service cannot be assigned a volume. In line 4 of your YAML you specify "Service" when it should be "Pod", and every resource in Kubernetes must have a name, which you can add under metadata. That should fix the simple problems.
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod  # POD
  metadata:
    name: my-pod  # A RESOURCE NEEDS A NAME
    creationTimestamp: null
    labels:
      io.kompose.service: zookeeper
  spec:
    containers:
    - image: bitnami/zookeeper:3
      name: zookeeper
      ports:
      - containerPort: 2181
      env:
      - name: ALLOW_ANONYMOUS_LOGIN
        value: "yes"
      resources: {}
      volumeMounts:
      - mountPath: /bitnami/zookeeper
        name: bitnamidockerzookeeper-zookeeper-data
    restartPolicy: Always
    volumes:
    - name: bitnamidockerzookeeper-zookeeper-data
      persistentVolumeClaim:
        claimName: bitnamidockerzookeeper-zookeeper-data
  status: {}
Now, I don't know what you're using, but hostPath refers to a path on the node, which is easy to work with on a local cluster like Minikube; in production things change drastically. If everything is local, the directory /tmp/zoo has to exist on the node. NOTE: not on your local PC, but inside the node. For example, if you use minikube, run minikube ssh to enter the node and copy /tmp/zoo there. An excellent guide to this is in the official Kubernetes documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
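For a kind cluster specifically (which the question title mentions), two common ways to get the directory inside the node are copying it into the node container or declaring the mount when the cluster is created; a rough sketch, assuming the default node container name kind-control-plane:
# copy the host folder into the running kind node container
docker cp /tmp/zoo kind-control-plane:/tmp/tmp1
# or declare it up front in a kind config file (kind create cluster --config kind.yaml)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /tmp/zoo
    containerPath: /tmp/tmp1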
There are a few potential issues in your YAML.
First, the accessModes of the PersistentVolume doesn't match the one of the PersistentVolumeClaim. One way to fix that is to list both ReadWriteMany and ReadWriteOnce in the accessModes of the PersistentVolume.
Then, the PersistentVolume doesn't specify a storageClassName. As a result, if you have a StorageClass configured to be the default StorageClass on your cluster (you can see that with kubectl get sc), it will automatically provision a PersistentVolume dynamically instead of using the PersistentVolume that you declared. So you need to specify a storageClassName. The StorageClass doesn't have to exist for real (since we're using static provisioning instead of dynamic anyway).
Next, the claimRef in PersistentVolume needs to mention the Namespace of the PersistentVolumeClaim. As a reminder: PersistentVolumes are cluster resources, so they don't have a Namespace; but PersistentVolumeClaims belong to the same Namespace as the Pod that mounts them.
Another thing is that the path used by Zookeeper data in the bitnami image is /bitnami/zookeeper, not /bitnami/zoo.
You will also need to initialize permissions in that volume, because by default, only root will have write access, and Zookeeper runs as non-root here, and won't have write access to the data subdirectory.
Here is an updated YAML that addresses all these points. I also rewrote the YAML to use the YAML multi-document syntax (resources separated by ---) instead of the kind: List syntax, and I removed a lot of fields that weren't used (like the empty status: fields and the labels that weren't strictly necessary). It works on my KinD cluster, I hope it will also work in your situation.
If your cluster has only one node, this will work fine; but if you have multiple nodes, you might need to tweak things a little bit to make sure that the volume is bound to a specific node (I added a commented-out nodeAffinity section in the YAML, but you might also have to change the bind mode; I only have a one-node cluster to test with right now). The Kubernetes documentation and blog have abundant details on this, and https://stackoverflow.com/a/69517576/580281 also covers this binding-mode aspect.
One last thing: in this scenario, I think it might make more sense to use a StatefulSet. It would not make a huge difference but would more clearly indicate intent (Zookeeper is a stateful service) and in the general case (beyond local hostPath volumes) it would avoid having two Zookeeper Pods accessing the volume simultaneously.
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
  - name: "2181"
    port: 2181
    targetPort: 2181
  selector:
    io.kompose.service: zookeeper
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: zookeeper
  template:
    metadata:
      labels:
        io.kompose.service: zookeeper
    spec:
      initContainers:
      - image: alpine
        name: chmod
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: bitnamidockerzookeeper-zookeeper-data
        command: [ sh, -c, "chmod 777 /bitnami/zookeeper" ]
      containers:
      - image: bitnami/zookeeper:3
        name: zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: bitnamidockerzookeeper-zookeeper-data
      volumes:
      - name: bitnamidockerzookeeper-zookeeper-data
        persistentVolumeClaim:
          claimName: bitnamidockerzookeeper-zookeeper-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bitnamidockerzookeeper-zookeeper-data
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tmp-tmp1
spec:
  storageClassName: manual
  claimRef:
    name: bitnamidockerzookeeper-zookeeper-data
    namespace: default
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteMany
  - ReadWriteOnce
  hostPath:
    path: /tmp/tmp1
  #nodeAffinity:
  #  required:
  #    nodeSelectorTerms:
  #    - matchExpressions:
  #      - key: kubernetes.io/hostname
  #        operator: In
  #        values:
  #        - kind-control-plane
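As mentioned above, a StatefulSet would signal intent more clearly; a rough sketch of what that variant might look like (same image and env as above, with volumeClaimTemplates creating the PVC):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
  serviceName: zookeeper
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: zookeeper
  template:
    metadata:
      labels:
        io.kompose.service: zookeeper
    spec:
      containers:
      - image: bitnami/zookeeper:3
        name: zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: zookeeper-data
  volumeClaimTemplates:
  - metadata:
      name: zookeeper-data
    spec:
      storageClassName: manual
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
Note that the claim template produces a PVC named zookeeper-data-zookeeper-0, so the claimRef on the PersistentVolume would have to match that name instead.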
It's a little confusing, but if you want to use a file path on the node as a volume for the pod, you should do it like this:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
but you need to make sure your pod will be scheduled on the same node that has that file path.
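One way to do that (a sketch, assuming the target node's hostname is node-1) is a nodeSelector on the node's hostname label:
spec:
  nodeSelector:
    kubernetes.io/hostname: node-1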

Kubernetes pods refusing connections to each other

I'm trying to implement an Elastic Stack in Kubernetes via Minikube. I've barely started, as I'm writing basically everything from scratch to get a better understanding of K8s, and because the YAMLs provided by Elastic don't offer any explanation of what is done and why, so I'm doing my own thing.
The problem I've run into is that my Kibana pod cannot communicate with my Elasticsearch pod, although I've set up the necessary services and ports on my pods.
Where it gets weird is that
kubectl port-forward services/elastic-http 9200
works flawlessly and lets me get information from my ElasticSearch pod. However, when I enter a pod via
kubectl exec -it <pod-name> -- /bin/bash
and try to use curl to get the same information my browser just showed me, the connection is being refused and my pods won't talk to one another.
My configs look as follows.
Kibana.yml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: my-kb
  namespace: default
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      name: kibana
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.3.0
        ports:
        - containerPort: 5601
          name: kibana-web
        volumeMounts:
        - name: kb-conf
          mountPath: /usr/share/kibana/config/kibana.yml
          subPath: kibana.yml
      volumes:
      - name: kb-conf
        configMap:
          name: kibana-config
          items:
          - key: kibana.yml
            path: kibana.yml
---
kind: Service
apiVersion: v1
metadata:
  name: kibana-http
  namespace: default
spec:
  selector:
    app: kibana
  ports:
  - protocol: TCP
    port: 5601
    name: kibana-web
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kibana-config
  namespace: default
data:
  kibana.yml: |
    elasticsearch.hosts: ["http://elastic-http.default.svc:9200"]
ElasticSearch.yml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: elastic-pv
  namespace: default
spec:
  capacity:
    storage: 15Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elastic-pv-claim
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: elastic-deploy
  namespace: default
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      name: elasticsearch
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
        ports:
        - containerPort: 9200
          name: elastic-http
          protocol: TCP
        - containerPort: 9300
          name: node-sniffer
          protocol: TCP
        #readinessProbe:
        #  httpGet:
        #    port: 9200
        #  periodSeconds: 5
        volumeMounts:
        - name: elastic-conf
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: elasticsearch.yml
        - name: elastic-data
          mountPath: /var/data
        securityContext:
          privileged: true
      initContainers:
      - name: sysctl-adj
        image: busybox
        command: ['sysctl', '-w', 'vm.max_map_count=262144']
        securityContext:
          privileged: true
      volumes:
      - name: elastic-data
        persistentVolumeClaim:
          claimName: elastic-pv-claim
      - name: elastic-conf
        configMap:
          name: elastic-config
          items:
          - key: elasticsearch.yml
            path: elasticsearch.yml
---
kind: Service
apiVersion: v1
metadata:
  name: elastic-http
  namespace: default
spec:
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    targetPort: elastic-http
    name: elastic-http
  - port: 9300
    targetPort: node-sniffer
    name: node-finder
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: elastic-config
  namespace: default
data:
  elasticsearch.yml: |
    xpack.security.enabled: false
    node.master: true
    path.data: /var/data
    http.port: 9200
I think you are using the ClusterIP service type; if you want to reach it in a browser, one option is to change the service type to NodePort.
You can see more details here
I'm not sure about this part of the Service:
targetPort: elastic-http
targetPort: node-sniffer
Could you try removing them and trying again?
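For example, something like this, letting targetPort default to the port value (a sketch of that suggestion, not a verified fix):
kind: Service
apiVersion: v1
metadata:
  name: elastic-http
  namespace: default
spec:
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    name: elastic-http
  - port: 9300
    name: node-finder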

Kubernetes nfs define path in claim

I want to create a common persistent volume with NFS.
PV(nfs):
common-data-pv 1500Gi RWO Retain
192.168.0.24 /home/common-data-pv
I want a claim (or a pod mounting the claim) bound to common-data-pv to be able to define a path, for example:
/home/common-data-pv/www-site-1 (50Gi)
/home/common-data-pv/www-site-2 (50Gi)
But I have not found in the documentation how to define this.
My current conf for the PV:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: common-data-pv
  labels:
    type: common
spec:
  capacity:
    storage: 1500Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.122.1
    path: "/home/pv/common-data-pv"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: common-data-pvc
  namespace: kube-system
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: common
Example use:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web-1
  namespace: kube-system
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - name: web
          containerPort: 80
        volumeMounts:
        # name must match the volume name below
        - name: nfs
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: common-data-pvc
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web-2
  namespace: kube-system
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - name: web
          containerPort: 80
        volumeMounts:
        # name must match the volume name below
        - name: nfs
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: common-data-pvc
To use the claim you just need to add a volumeMounts section and volumes to your manifest. Here's an example replication controller for nginx that would use your claim. Note the very last line that uses the same PVC name.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
  namespace: kube-system
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - name: web
          containerPort: 80
        volumeMounts:
        # name must match the volume name below
        - name: nfs
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: common-data-pvc
More examples can be found in the kubernetes repo under examples
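If each site should only see its own directory on the share, one common approach (not shown in the answer above, so treat it as a sketch) is to keep the single PVC and give each workload a different subPath on its volumeMount:
        volumeMounts:
        - name: nfs
          mountPath: "/usr/share/nginx/html"
          subPath: www-site-1   # hypothetical subdirectory under /home/pv/common-data-pv
Note that per-directory size limits such as 50Gi cannot be enforced this way; the declared capacity applies to the PV as a whole.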