I am trying to set up a RabbitMQ cluster, and when the containers start they fail with the error [error] CRASH REPORT Process <0.200.0> with 0 neighbours crashed with reason: "Bad characters in cookie" in auth:init_no_setcookie/0 line 313. This suggests that the Erlang cookie value passed in is not valid:
kubectl -n demos get pods
NAME READY STATUS RESTARTS AGE
mongodb-deployment-6499999-vpcjh 1/1 Running 0 12h
rabbitmq-0 0/1 CrashLoopBackOff 9 25m
rabbitmq-1 0/1 CrashLoopBackOff 9 24m
rabbitmq-2 0/1 CrashLoopBackOff 9 23m
And when I query the logs for one of the pods:
kubectl -n demos logs -p rabbitmq-0 --previous
I get:
WARNING: '/var/lib/rabbitmq/.erlang.cookie' was populated from
'$RABBITMQ_ERLANG_COOKIE', which will no longer happen in 3.9 and later! (https://github.com/docker-library/rabbitmq/pull/424)
Configuring logger redirection
02:04:47.506 [error] Bad characters in cookie
02:04:47.512 [error]
02:04:47.506 [error] Supervisor net_sup had child auth started with auth:start_link() at undefined exit with reason "Bad characters in cookie" in auth:init_no_setcookie/0 line 313 in context start_error
02:04:47.506 [error] CRASH REPORT Process <0.200.0> with 0 neighbours crashed with reason: "Bad characters in cookie" in auth:init_no_setcookie/0 line 313
02:04:47.522 [error] BOOT FAILED
BOOT FAILED
02:04:47.523 [error] ===========
===========
02:04:47.523 [error] Exception during startup:
Exception during startup:
02:04:47.524 [error]
02:04:47.524 [error] supervisor:children_map/4 line 1250
....
....
....
This is how I am generating the cookie in bash:
dd if=/dev/urandom bs=30 count=1 | base64
And in the secrets manifest I have:
metadata:
name: rabbit-secret
namespace: demos
type: Opaque
data:
# echo -n "cookie-value" | base64
RABBITMQ_ERLANG_COOKIE: <encoded_cookie_value_here>
And in the StatefulSet I have:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: rabbitmq
namespace: demos
spec:
serviceName: rabbitmq
replicas: 3
selector:
matchLabels:
app: rabbitmq
template:
metadata:
labels:
app: rabbitmq
spec:
serviceAccountName: rabbitmq
initContainers:
- name: config
image: busybox
imagePullPolicy: "IfNotPresent"
command: ['/bin/sh', '-c', 'cp /tmp/config/rabbitmq.conf /config/rabbitmq.conf && ls -l /config/ && cp /tmp/config/enabled_plugins /etc/rabbitmq/enabled_plugins']
volumeMounts:
- name: config
mountPath: /tmp/config/
readOnly: false
- name: config-file
mountPath: /config/
- name: plugins-file
mountPath: /etc/rabbitmq/
containers:
- name: rabbitmq
image: rabbitmq:3.8-management
imagePullPolicy: "IfNotPresent"
ports:
- containerPort: 4369
name: discovery
- containerPort: 5672
name: amqp
env:
- name: RABBIT_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: RABBIT_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: RABBITMQ_NODENAME
value: rabbit@$(RABBIT_POD_NAME).rabbitmq.$(RABBIT_POD_NAMESPACE).svc.cluster.local
- name: RABBITMQ_USE_LONGNAME
value: "true"
- name: RABBITMQ_CONFIG_FILE
value: "/config/rabbitmq"
- name: RABBITMQ_ERLANG_COOKIE
valueFrom:
secretKeyRef:
name: rabbit-secret
key: RABBITMQ_ERLANG_COOKIE
- name: K8S_HOSTNAME_SUFFIX
value: .rabbitmq.$(RABBIT_POD_NAMESPACE).svc.cluster.local
volumeMounts:
- name: data
mountPath: /var/lib/rabbitmq
readOnly: false
- name: config-file
mountPath: /config/
- name: plugins-file
mountPath: /etc/rabbitmq/
volumes:
- name: config-file
emptyDir: {}
- name: plugins-file
emptyDir: {}
- name: config
configMap:
name: rabbitmq-config
defaultMode: 0755
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "cinder-csi"
resources:
requests:
storage: 50Mi
---
apiVersion: v1
kind: Service
metadata:
name: rabbitmq
namespace: labs
spec:
clusterIP: None
ports:
- port: 4369
targetPort: 4369
name: discovery
- port: 5672
targetPort: 5672
name: amqp
selector:
app: rabbitmq
What am I missing?
Is there a recommended way of generating the cookie, or is there something else to do with the K8s cluster itself?
I have followed the example given here, with the only difference being that I am generating my cookie on my local machine and not on the K8s host.
This requires you to create the Secret yourself. As a side note, you can run a one-off imperative command to create a Secret with a random cookie:
kubectl create secret generic rabbitmq \
--from-literal=erlangCookie=$(dd if=/dev/urandom bs=30 count=1 | base64)
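If you want to avoid the '+', '/', '=' and trailing-newline characters that base64 output can contain, a minimal sketch (reusing the rabbit-secret name, the RABBITMQ_ERLANG_COOKIE key, and the demos namespace from the question) is to restrict the cookie to alphanumeric characters and let kubectl base64-encode the Secret value itself:
# Hypothetical alternative: keep only A-Za-z0-9 so no character can upset the Erlang cookie parsing
COOKIE=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)
kubectl -n demos create secret generic rabbit-secret \
  --from-literal=RABBITMQ_ERLANG_COOKIE="$COOKIE"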
The error comes from RabbitMQ's docker-entrypoint.sh:
if [ "${RABBITMQ_ERLANG_COOKIE:-}" ]; then
cookieFile='/var/lib/rabbitmq/.erlang.cookie'
if [ -e "$cookieFile" ]; then
if [ "$(cat "$cookieFile" 2>/dev/null)" != "$RABBITMQ_ERLANG_COOKIE" ]; then
echo >&2
echo >&2 "warning: $cookieFile contents do not match RABBITMQ_ERLANG_COOKIE"
echo >&2
fi
else
echo "$RABBITMQ_ERLANG_COOKIE" > "$cookieFile"
fi
chmod 600 "$cookieFile"
fi
Therefore you can delete the /var/lib/rabbitmq/.erlang.cookie file; the entrypoint will then recreate it from the $RABBITMQ_ERLANG_COOKIE environment variable.
If you are working on a production system, be very careful: test this on a local system first and gain experience.
This code is executed only when the RabbitMQ container starts, and it will not run again after it has executed.
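A hedged sketch of doing that on Kubernetes, assuming the rabbitmq-0 pod name and the demos namespace from the question, and assuming the container stays up long enough to exec into:
# Remove the stale cookie from the data volume, then let the StatefulSet recreate the pod;
# on the next start the entrypoint writes the cookie from $RABBITMQ_ERLANG_COOKIE again.
kubectl -n demos exec rabbitmq-0 -- rm -f /var/lib/rabbitmq/.erlang.cookie
kubectl -n demos delete pod rabbitmq-0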
And you can read the node's actual cookie from the Erlang console with erlang:get_cookie() -> Cookie | nocookie.
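For example, a hedged check against a running pod (pod name and namespace assumed from the question):
# Prints the cookie atom the node is actually using
kubectl -n demos exec rabbitmq-0 -- rabbitmqctl eval 'erlang:get_cookie().'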
I have several secrets that are mounted and need to be read as a properties file. It seems Kubernetes can't mount them as a single file, so I'm trying to concatenate the files after the pod starts. I tried running a cat command in a postStart handler, but it seems to execute before the secrets are mounted, as I get this error:
Error: failed to create containerd task: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "cat /properties/S3Secret /properties/S3Key >> /properties/dbPassword": stat cat /properties/S3Secret /properties/S3Key >> /properties/dbPassword: no such file or directory: unknown
Then here is the YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
name: K8S_ID
spec:
selector:
matchLabels:
app: K8S_ID
replicas: 1
template:
metadata:
labels:
app: K8S_ID
spec:
containers:
- name: K8S_ID
image: IMAGE_NAME
ports:
- containerPort: 8080
env:
- name: PROPERTIES_FILE
value: "/properties/dbPassword"
volumeMounts:
- name: secret-properties
mountPath: "/properties"
lifecycle:
postStart:
exec:
command: ["cat /properties/S3Secret /properties/S3Key >> /properties/dbPassword"]
volumes:
- name: secret-properties
secret:
secretName: secret-properties
items:
- key: SECRET_ITEM
path: dbPassword
- key: S3Key
path: S3Key
- key: S3Secret
path: S3Secret
You need to run your command in a shell session, like this:
...
lifecycle:
postStart:
exec:
command: ["/bin/sh","-c","cat /properties/S3Secret /properties/S3Key >> /properties/dbPassword"]
...
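One hedged caveat: if the secret-properties volume is mounted read-only (the default for Secret volumes on recent Kubernetes versions), appending to /properties/dbPassword will still fail even from a shell. A sketch of one workaround, using a hypothetical writable emptyDir named merged-properties and otherwise reusing the names from the question:
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "cat /properties/dbPassword /properties/S3Secret /properties/S3Key > /merged/dbPassword"]
        volumeMounts:
          - name: secret-properties
            mountPath: "/properties"
          - name: merged-properties   # hypothetical writable scratch volume
            mountPath: "/merged"
      volumes:
        - name: merged-properties
          emptyDir: {}
The PROPERTIES_FILE environment variable would then point at /merged/dbPassword instead of /properties/dbPassword.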
When I deploy IPFS-Cluster on Kubernetes, I get the following error (these are the ipfs-cluster logs):
error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
2022-01-04T10:23:08.103Z INFO service ipfs-cluster-service/daemon.go:47 Initializing. For verbose output run with "-l debug". Please wait...
2022-01-04T10:23:08.103Z ERROR config config/config.go:352 error reading the configuration file: open /data/ipfs-cluster/service.json: no such file or directory
error loading configurations: open /data/ipfs-cluster/service.json: no such file or directory
These are initContainer logs:
+ user=ipfs
+ mkdir -p /data/ipfs
+ chown -R ipfs /data/ipfs
+ '[' -f /data/ipfs/config ]
+ ipfs init '--profile=badgerds,server'
initializing IPFS node at /data/ipfs
generating 2048-bit RSA keypair...done
peer identity: QmUHmdhauhk7zdj5XT1zAa6BQfrJDukysb2PXsCQ62rBdS
to get started, enter:
ipfs cat /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme
+ ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
+ ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
+ ipfs config --json Swarm.ConnMgr.HighWater 2000
+ ipfs config --json Datastore.BloomFilterSize 1048576
+ ipfs config Datastore.StorageMax 100GB
These are ipfs container logs:
Changing user to ipfs
ipfs version 0.4.18
Found IPFS fs-repo at /data/ipfs
Initializing daemon...
go-ipfs version: 0.4.18-aefc746
Repo version: 7
System version: amd64/linux
Golang version: go1.11.1
Error: open /data/ipfs/config: permission denied
Received interrupt signal, shutting down...
(Hit ctrl-c again to force-shutdown the daemon.)
The following is my Kubernetes YAML file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: ipfs-cluster
spec:
serviceName: ipfs-cluster
replicas: 3
selector:
matchLabels:
app: ipfs-cluster
template:
metadata:
labels:
app: ipfs-cluster
spec:
initContainers:
- name: configure-ipfs
image: "ipfs/go-ipfs:v0.4.18"
command: ["sh", "/custom/configure-ipfs.sh"]
volumeMounts:
- name: ipfs-storage
mountPath: /data/ipfs
- name: configure-script
mountPath: /custom/entrypoint.sh
subPath: entrypoint.sh
- name: configure-script-2
mountPath: /custom/configure-ipfs.sh
subPath: configure-ipfs.sh
containers:
- name: ipfs
image: "ipfs/go-ipfs:v0.4.18"
imagePullPolicy: IfNotPresent
env:
- name: IPFS_FD_MAX
value: "4096"
ports:
- name: swarm
protocol: TCP
containerPort: 4001
- name: swarm-udp
protocol: UDP
containerPort: 4002
- name: api
protocol: TCP
containerPort: 5001
- name: ws
protocol: TCP
containerPort: 8081
- name: http
protocol: TCP
containerPort: 8080
livenessProbe:
tcpSocket:
port: swarm
initialDelaySeconds: 30
timeoutSeconds: 5
periodSeconds: 15
volumeMounts:
- name: ipfs-storage
mountPath: /data/ipfs
- name: configure-script
mountPath: /custom
resources:
{}
- name: ipfs-cluster
image: "ipfs/ipfs-cluster:latest"
imagePullPolicy: IfNotPresent
command: ["sh", "/custom/entrypoint.sh"]
envFrom:
- configMapRef:
name: env-config
env:
- name: BOOTSTRAP_PEER_ID
valueFrom:
configMapRef:
name: env-config
key: bootstrap-peer-id
- name: BOOTSTRAP_PEER_PRIV_KEY
valueFrom:
secretKeyRef:
name: secret-config
key: bootstrap-peer-priv-key
- name: CLUSTER_SECRET
valueFrom:
secretKeyRef:
name: secret-config
key: cluster-secret
- name: CLUSTER_MONITOR_PING_INTERVAL
value: "3m"
- name: SVC_NAME
value: $(CLUSTER_SVC_NAME)
ports:
- name: api-http
containerPort: 9094
protocol: TCP
- name: proxy-http
containerPort: 9095
protocol: TCP
- name: cluster-swarm
containerPort: 9096
protocol: TCP
livenessProbe:
tcpSocket:
port: cluster-swarm
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 10
volumeMounts:
- name: cluster-storage
mountPath: /data/ipfs-cluster
- name: configure-script
mountPath: /custom/entrypoint.sh
subPath: entrypoint.sh
resources:
{}
volumes:
- name: configure-script
configMap:
name: ipfs-cluster-set-bootstrap-conf
- name: configure-script-2
configMap:
name: configura-ipfs
volumeClaimTemplates:
- metadata:
name: cluster-storage
spec:
storageClassName: gp2
accessModes: ["ReadWriteOnce"]
persistentVolumeReclaimPolicy: Retain
resources:
requests:
storage: 5Gi
- metadata:
name: ipfs-storage
spec:
storageClassName: gp2
accessModes: ["ReadWriteOnce"]
persistentVolumeReclaimPolicy: Retain
resources:
requests:
storage: 200Gi
---
kind: Secret
apiVersion: v1
metadata:
name: secret-config
namespace: weex-ipfs
annotations:
kubesphere.io/creator: tom
data:
bootstrap-peer-priv-key: >-
UTBGQlUzQjNhM2RuWjFOcVFXZEZRVUZ2U1VKQlVVTjBWbVpUTTFwck9ETkxVWEZNYzJFemFGWlZaV2xKU0doUFZGRTBhRmhrZVhCeFJGVmxVbmR6Vmt4Nk9IWndZ...
cluster-secret: 7d4c019035beb7da7275ea88315c39b1dd9fdfaef017596550ffc1ad3fdb556f
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
name: env-config
namespace: weex-ipfs
annotations:
kubesphere.io/creator: tom
data:
bootstrap-peer-id: QmWgEHZEmJhuoDgFmBKZL8VtpMEqRArqahuaX66cbvyutP
---
kind: ConfigMap
apiVersion: v1
metadata:
name: ipfs-cluster-set-bootstrap-conf
namespace: weex-ipfs
annotations:
kubesphere.io/creator: tom
data:
entrypoint.sh: |2
#!/bin/sh
user=ipfs
# This is a custom entrypoint for k8s designed to connect to the bootstrap
# node running in the cluster. It has been set up using a configmap to
# allow changes on the fly.
if [ ! -f /data/ipfs-cluster/service.json ]; then
ipfs-cluster-service init
fi
PEER_HOSTNAME=`cat /proc/sys/kernel/hostname`
grep -q ".*ipfs-cluster-0.*" /proc/sys/kernel/hostname
if [ $? -eq 0 ]; then
CLUSTER_ID=${BOOTSTRAP_PEER_ID} \
CLUSTER_PRIVATEKEY=${BOOTSTRAP_PEER_PRIV_KEY} \
exec ipfs-cluster-service daemon --upgrade
else
BOOTSTRAP_ADDR=/dns4/${SVC_NAME}-0/tcp/9096/ipfs/${BOOTSTRAP_PEER_ID}
if [ -z $BOOTSTRAP_ADDR ]; then
exit 1
fi
# Only ipfs user can get here
exec ipfs-cluster-service daemon --upgrade --bootstrap $BOOTSTRAP_ADDR --leave
fi
---
kind: ConfigMap
apiVersion: v1
metadata:
name: configura-ipfs
namespace: weex-ipfs
annotations:
kubesphere.io/creator: tom
data:
configure-ipfs.sh: >-
#!/bin/sh
set -e
set -x
user=ipfs
# This is a custom entrypoint for k8s designed to run ipfs nodes in an
appropriate
# setup for production scenarios.
mkdir -p /data/ipfs && chown -R ipfs /data/ipfs
if [ -f /data/ipfs/config ]; then
if [ -f /data/ipfs/repo.lock ]; then
rm /data/ipfs/repo.lock
fi
exit 0
fi
ipfs init --profile=badgerds,server
ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
ipfs config --json Swarm.ConnMgr.HighWater 2000
ipfs config --json Datastore.BloomFilterSize 1048576
ipfs config Datastore.StorageMax 100GB
I followed the official steps to build this.
As part of those steps, I used the following command to generate the cluster-secret:
$ od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n' | base64 -w 0 -
But I get:
error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
I saw the same problem in the official GitHub issue. So using the openssl rand -hex command is not OK either.
To clarify, I am posting a community wiki answer.
To solve the following error:
no such file or directory
you used runAsUser: 0.
The second error:
error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
was caused by CLUSTER_SECRET using an encoding other than hex.
According to this page:
The Cluster Secret Key
The secret key of a cluster is a 32-byte, hex-encoded random string, which every cluster peer needs in its service.json configuration.
A secret key can be generated and predefined in the CLUSTER_SECRET environment variable, and will subsequently be used upon running ipfs-cluster-service init.
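A minimal sketch of producing a value in that format, assuming the secret-config Secret name, the cluster-secret key, and the weex-ipfs namespace from the question; kubectl base64-encodes the Secret data itself:
# Hypothetical: 32 random bytes rendered as a 64-character hex string, with no extra base64 wrapping
CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')
kubectl -n weex-ipfs create secret generic secret-config \
  --from-literal=cluster-secret="$CLUSTER_SECRET" \
  --dry-run=client -o yaml
Note that values under data: in a Secret manifest must themselves be base64-encoded; pasting a raw hex string there means Kubernetes will base64-decode it into different bytes when injecting the environment variable.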
Here is a link to the solved issue.
See also:
Configuration reference
IPFS Tutorial
Documentation guide: Security and ports
I'm new to the RabbitMQ and Kubernetes world, and I'm trying to achieve the following objectives:
I've successfully deployed a RabbitMQ cluster on Kubernetes (minikube) and exposed it via a LoadBalancer (running minikube tunnel locally).
I can connect successfully to my queue with a basic Spring Boot app, and I can send and receive messages from my cluster.
In my cluster I have 4 nodes. I have applied the mirrored queue policy (HA), and the created queue is also mirrored on the other nodes.
This is my cluster configuration:
apiVersion: v1
kind: Secret
metadata:
name: rabbit-secret
type: Opaque
data:
# echo -n "cookie-value" | base64
RABBITMQ_ERLANG_COOKIE: V0lXVkhDRFRDSVVBV0FOTE1RQVc=
---
apiVersion: v1
kind: ConfigMap
metadata:
name: rabbitmq-config
data:
enabled_plugins: |
[rabbitmq_federation,rabbitmq_management,rabbitmq_federation_management,rabbitmq_peer_discovery_k8s].
rabbitmq.conf: |
loopback_users.guest = false
listeners.tcp.default = 5672
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
cluster_formation.k8s.address_type = hostname
cluster_formation.node_cleanup.only_log_warning = true
##cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
##cluster_formation.classic_config.nodes.1 = rabbit@rabbitmq-0.rabbitmq.rabbits.svc.cluster.local
##cluster_formation.classic_config.nodes.2 = rabbit@rabbitmq-1.rabbitmq.rabbits.svc.cluster.local
##cluster_formation.classic_config.nodes.3 = rabbit@rabbitmq-2.rabbitmq.rabbits.svc.cluster.local
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: rabbitmq
spec:
serviceName: rabbitmq
replicas: 4
selector:
matchLabels:
app: rabbitmq
template:
metadata:
labels:
app: rabbitmq
spec:
serviceAccountName: rabbitmq
initContainers:
- name: config
image: busybox
command: ['/bin/sh', '-c', 'cp /tmp/config/rabbitmq.conf /config/rabbitmq.conf && ls -l /config/ && cp /tmp/config/enabled_plugins /etc/rabbitmq/enabled_plugins']
volumeMounts:
- name: config
mountPath: /tmp/config/
readOnly: false
- name: config-file
mountPath: /config/
- name: plugins-file
mountPath: /etc/rabbitmq/
containers:
- name: rabbitmq
image: rabbitmq:3.8-management
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- >
until rabbitmqctl --erlang-cookie ${RABBITMQ_ERLANG_COOKIE} await_startup; do sleep 1; done;
rabbitmqctl --erlang-cookie ${RABBITMQ_ERLANG_COOKIE} set_policy ha-two "" '{"ha-mode":"exactly", "ha-params": 2, "ha-sync-mode": "automatic"}'
ports:
- containerPort: 15672
name: management
- containerPort: 4369
name: discovery
- containerPort: 5672
name: amqp
env:
- name: RABBIT_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: RABBIT_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: RABBITMQ_NODENAME
value: rabbit@$(RABBIT_POD_NAME).rabbitmq.$(RABBIT_POD_NAMESPACE).svc.cluster.local
- name: RABBITMQ_USE_LONGNAME
value: "true"
- name: RABBITMQ_CONFIG_FILE
value: "/config/rabbitmq"
- name: RABBITMQ_ERLANG_COOKIE
valueFrom:
secretKeyRef:
name: rabbit-secret
key: RABBITMQ_ERLANG_COOKIE
- name: K8S_HOSTNAME_SUFFIX
value: .rabbitmq.$(RABBIT_POD_NAMESPACE).svc.cluster.local
volumeMounts:
- name: data
mountPath: /var/lib/rabbitmq
readOnly: false
- name: config-file
mountPath: /config/
- name: plugins-file
mountPath: /etc/rabbitmq/
volumes:
- name: config-file
emptyDir: {}
- name: plugins-file
emptyDir: {}
- name: config
configMap:
name: rabbitmq-config
defaultMode: 0755
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "standard"
resources:
requests:
storage: 50Mi
---
apiVersion: v1
kind: Service
metadata:
name: rabbitmq
spec:
type: LoadBalancer
ports:
- port: 15672
targetPort: 15672
name: management
- port: 4369
targetPort: 4369
name: discovery
- port: 5672
targetPort: 5672
name: amqp
selector:
app: rabbitmq
If I understood correctly, RabbitMQ with HA receives a message on one node (the master) and then it is mirrored to the slaves, right?
But what if I want to load balance the workload? For example, suppose that I send 200 messages per second.
These 200 messages are all received by the master node. What I want instead is for these 200 messages to be distributed across the nodes, for example 100 messages received on node 1, 50 messages on node 2, and the rest on node 3. Is it possible to do that? And if yes, how can I achieve it on Kubernetes?
I am binding a PersistentVolumeClaim into my Job's pod, and it shows:
persistentvolumeclaim "flink-pv-claim-11" is being deleted
but the PersistentVolumeClaim exists and is bound successfully.
And the pod has no log output. What should I do to fix this? This is the Job YAML:
apiVersion: batch/v1
kind: Job
metadata:
name: flink-jobmanager-1.11
spec:
template:
metadata:
labels:
app: flink
component: jobmanager
spec:
restartPolicy: OnFailure
containers:
- name: jobmanager
image: flink:1.11.0-scala_2.11
env:
args: ["standalone-job", "--job-classname", "com.job.ClassName", <optional arguments>, <job arguments>] # optional arguments: ["--job-id", "<job id>", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState"]
ports:
- containerPort: 6123
name: rpc
- containerPort: 6124
name: blob-server
- containerPort: 8081
name: webui
livenessProbe:
tcpSocket:
port: 6123
initialDelaySeconds: 30
periodSeconds: 60
volumeMounts:
- name: flink-config-volume
mountPath: /opt/flink/conf
- name: job-artifacts-volume
mountPath: /opt/flink/usrlib
- name: job-artifacts-volume
mountPath: /opt/flink/data/job-artifacts
securityContext:
runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
volumes:
- name: flink-config-volume
configMap:
name: flink-1.11-config
items:
- key: flink-conf.yaml
path: flink-conf.yaml
- key: log4j-console.properties
path: log4j-console.properties
- name: job-artifacts-volume
persistentVolumeClaim:
claimName: flink-pv-claim-11
Make sure the Job tied to this PVC is deleted. If any other Jobs/Pods are running and using this PVC, you cannot delete the PVC until you delete those Jobs/Pods.
To see which resources are currently using the PVC, try running:
kubectl get pods --all-namespaces -o=json | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec.volumes[] | select(has("persistentVolumeClaim")).persistentVolumeClaim.claimName}'
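A simpler, hedged first check (claim name taken from the question): describing the PVC shows which pods still mount it in the Mounted By field, along with any finalizers holding up deletion:
kubectl describe pvc flink-pv-claim-11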
You have two volumeMounts named job-artifacts-volume, which may be causing some confusion.
I am trying to create a "logstash 6.5.4" pod using this file (cdn_akamai.yml) on machines with 16 GB of memory and 8 vCPUs:
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: cdn-akamai-pipe
spec:
template:
metadata:
labels:
app: cdn-akamai-pipe
spec:
securityContext:
runAsUser: 0
runAsGroup: 0
hostname: cdn-akamai-pipe
containers:
- name: cdn-akamai-pipe
resources:
limits:
memory: "1Gi"
requests:
memory: "1Gi"
ports:
- containerPort: 9600
image: docker.elastic.co/logstash/logstash:6.5.4
volumeMounts:
- name: cdn-akamai-pipe-config
mountPath: /usr/share/logstash/pipeline/cdn_akamai.conf
subPath: cdn_akamai.conf
- name: logstash-jvm-options-config
mountPath: /usr/share/logstash/config/jvm.options
subPath: jvm.options
- name: pipeline-config
mountPath: /usr/share/logstash/config/pipelines.yml
subPath: pipelines.yml
command:
- logstash
volumes:
- name: cdn-akamai-pipe-config
configMap:
name: cdn-akamai-pipe
- name: logstash-jvm-options-config
configMap:
name: logstash-jvm-options
- name: pipeline-config
configMap:
name: pipeline-akamai
---
kind: Service
apiVersion: v1
metadata:
name: cdn-akamai-pipe
spec:
type: NodePort
selector:
app: cdn-akamai-pipe
ports:
- protocol: TCP
port: 9600
targetPort: 9600
name: logstash
And using the following commands:
kubectl create configmap logstash-jvm-options --from-file jvm.options
kubectl create configmap cdn-akamai-pipe --from-file cdn_akamai.conf
kubectl create configmap pipeline-akamai --from-file pipelines.yml
kubectl create -f cdn_akamai.yml
where the files are:
Data
====
pipelines.yml:
----
- pipeline.id: main
path.config: "/usr/share/logstash/pipeline/cdn_akamai.conf
Data
====
jvm.options:
----
-Xms800m
-Xmx800m
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-Djruby.compile.invokedynamic=true
-Djruby.jit.threshold=0
-XX:+HeapDumpOnOutOfMemoryError
-Djava.security.egd=file:/dev/urandom
Data
====
cdn_akamai.conf:
----
input {stdin{}}
output {stdout{}}
But I get Error or CrashLoopBackOff, as shown here:
cdn-akamai-pipe-74c64757b9-t4k2f 0/1 Error 7 12m
I had a hello world pod running to verify that my cluster is OK, and that pod runs perfectly.
Could you help me, please? Is there any problem with my volumes, service, or syntax? Do you see anything?
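As a hedged first debugging step (pod name taken from the output above), the underlying Logstash startup error is usually visible in the crashing container's logs and in the pod's events:
kubectl logs cdn-akamai-pipe-74c64757b9-t4k2f
kubectl describe pod cdn-akamai-pipe-74c64757b9-t4k2f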