Deploying IPFS-Cluster on Kubernetes, I got an error - kubernetes

When I deploy IPFS-Cluster on Kubernetes, I get the following error (these are the ipfs-cluster logs):
error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
2022-01-04T10:23:08.103Z INFO service ipfs-cluster-service/daemon.go:47 Initializing. For verbose output run with "-l debug". Please wait...
2022-01-04T10:23:08.103Z ERROR config config/config.go:352 error reading the configuration file: open /data/ipfs-cluster/service.json: no such file or directory
error loading configurations: open /data/ipfs-cluster/service.json: no such file or directory
These are initContainer logs:
+ user=ipfs
+ mkdir -p /data/ipfs
+ chown -R ipfs /data/ipfs
+ '[' -f /data/ipfs/config ]
+ ipfs init '--profile=badgerds,server'
initializing IPFS node at /data/ipfs
generating 2048-bit RSA keypair...done
peer identity: QmUHmdhauhk7zdj5XT1zAa6BQfrJDukysb2PXsCQ62rBdS
to get started, enter:
ipfs cat /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme
+ ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
+ ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
+ ipfs config --json Swarm.ConnMgr.HighWater 2000
+ ipfs config --json Datastore.BloomFilterSize 1048576
+ ipfs config Datastore.StorageMax 100GB
These are ipfs container logs:
Changing user to ipfs
ipfs version 0.4.18
Found IPFS fs-repo at /data/ipfs
Initializing daemon...
go-ipfs version: 0.4.18-aefc746
Repo version: 7
System version: amd64/linux
Golang version: go1.11.1
Error: open /data/ipfs/config: permission denied
Received interrupt signal, shutting down...
(Hit ctrl-c again to force-shutdown the daemon.)
The following is my Kubernetes YAML file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ipfs-cluster
spec:
  serviceName: ipfs-cluster
  replicas: 3
  selector:
    matchLabels:
      app: ipfs-cluster
  template:
    metadata:
      labels:
        app: ipfs-cluster
    spec:
      initContainers:
        - name: configure-ipfs
          image: "ipfs/go-ipfs:v0.4.18"
          command: ["sh", "/custom/configure-ipfs.sh"]
          volumeMounts:
            - name: ipfs-storage
              mountPath: /data/ipfs
            - name: configure-script
              mountPath: /custom/entrypoint.sh
              subPath: entrypoint.sh
            - name: configure-script-2
              mountPath: /custom/configure-ipfs.sh
              subPath: configure-ipfs.sh
      containers:
        - name: ipfs
          image: "ipfs/go-ipfs:v0.4.18"
          imagePullPolicy: IfNotPresent
          env:
            - name: IPFS_FD_MAX
              value: "4096"
          ports:
            - name: swarm
              protocol: TCP
              containerPort: 4001
            - name: swarm-udp
              protocol: UDP
              containerPort: 4002
            - name: api
              protocol: TCP
              containerPort: 5001
            - name: ws
              protocol: TCP
              containerPort: 8081
            - name: http
              protocol: TCP
              containerPort: 8080
          livenessProbe:
            tcpSocket:
              port: swarm
            initialDelaySeconds: 30
            timeoutSeconds: 5
            periodSeconds: 15
          volumeMounts:
            - name: ipfs-storage
              mountPath: /data/ipfs
            - name: configure-script
              mountPath: /custom
          resources:
            {}
        - name: ipfs-cluster
          image: "ipfs/ipfs-cluster:latest"
          imagePullPolicy: IfNotPresent
          command: ["sh", "/custom/entrypoint.sh"]
          envFrom:
            - configMapRef:
                name: env-config
          env:
            - name: BOOTSTRAP_PEER_ID
              valueFrom:
                configMapKeyRef:
                  name: env-config
                  key: bootstrap-peer-id
            - name: BOOTSTRAP_PEER_PRIV_KEY
              valueFrom:
                secretKeyRef:
                  name: secret-config
                  key: bootstrap-peer-priv-key
            - name: CLUSTER_SECRET
              valueFrom:
                secretKeyRef:
                  name: secret-config
                  key: cluster-secret
            - name: CLUSTER_MONITOR_PING_INTERVAL
              value: "3m"
            - name: SVC_NAME
              value: $(CLUSTER_SVC_NAME)
          ports:
            - name: api-http
              containerPort: 9094
              protocol: TCP
            - name: proxy-http
              containerPort: 9095
              protocol: TCP
            - name: cluster-swarm
              containerPort: 9096
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: cluster-swarm
            initialDelaySeconds: 5
            timeoutSeconds: 5
            periodSeconds: 10
          volumeMounts:
            - name: cluster-storage
              mountPath: /data/ipfs-cluster
            - name: configure-script
              mountPath: /custom/entrypoint.sh
              subPath: entrypoint.sh
          resources:
            {}
      volumes:
        - name: configure-script
          configMap:
            name: ipfs-cluster-set-bootstrap-conf
        - name: configure-script-2
          configMap:
            name: configura-ipfs
  volumeClaimTemplates:
    - metadata:
        name: cluster-storage
      spec:
        storageClassName: gp2
        accessModes: ["ReadWriteOnce"]
        persistentVolumeReclaimPolicy: Retain
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: ipfs-storage
      spec:
        storageClassName: gp2
        accessModes: ["ReadWriteOnce"]
        persistentVolumeReclaimPolicy: Retain
        resources:
          requests:
            storage: 200Gi
---
kind: Secret
apiVersion: v1
metadata:
  name: secret-config
  namespace: weex-ipfs
  annotations:
    kubesphere.io/creator: tom
data:
  bootstrap-peer-priv-key: >-
    UTBGQlUzQjNhM2RuWjFOcVFXZEZRVUZ2U1VKQlVVTjBWbVpUTTFwck9ETkxVWEZNYzJFemFGWlZaV2xKU0doUFZGRTBhRmhrZVhCeFJGVmxVbmR6Vmt4Nk9IWndZ...
  cluster-secret: 7d4c019035beb7da7275ea88315c39b1dd9fdfaef017596550ffc1ad3fdb556f
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: env-config
  namespace: weex-ipfs
  annotations:
    kubesphere.io/creator: tom
data:
  bootstrap-peer-id: QmWgEHZEmJhuoDgFmBKZL8VtpMEqRArqahuaX66cbvyutP
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: ipfs-cluster-set-bootstrap-conf
  namespace: weex-ipfs
  annotations:
    kubesphere.io/creator: tom
data:
  entrypoint.sh: |2
    #!/bin/sh
    user=ipfs
    # This is a custom entrypoint for k8s designed to connect to the bootstrap
    # node running in the cluster. It has been set up using a configmap to
    # allow changes on the fly.
    if [ ! -f /data/ipfs-cluster/service.json ]; then
      ipfs-cluster-service init
    fi
    PEER_HOSTNAME=`cat /proc/sys/kernel/hostname`
    grep -q ".*ipfs-cluster-0.*" /proc/sys/kernel/hostname
    if [ $? -eq 0 ]; then
      CLUSTER_ID=${BOOTSTRAP_PEER_ID} \
      CLUSTER_PRIVATEKEY=${BOOTSTRAP_PEER_PRIV_KEY} \
      exec ipfs-cluster-service daemon --upgrade
    else
      BOOTSTRAP_ADDR=/dns4/${SVC_NAME}-0/tcp/9096/ipfs/${BOOTSTRAP_PEER_ID}
      if [ -z $BOOTSTRAP_ADDR ]; then
        exit 1
      fi
      # Only ipfs user can get here
      exec ipfs-cluster-service daemon --upgrade --bootstrap $BOOTSTRAP_ADDR --leave
    fi
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: configura-ipfs
  namespace: weex-ipfs
  annotations:
    kubesphere.io/creator: tom
data:
  configure-ipfs.sh: >-
    #!/bin/sh

    set -e

    set -x

    user=ipfs

    # This is a custom entrypoint for k8s designed to run ipfs nodes in an
    appropriate setup for production scenarios.

    mkdir -p /data/ipfs && chown -R ipfs /data/ipfs

    if [ -f /data/ipfs/config ]; then
      if [ -f /data/ipfs/repo.lock ]; then
        rm /data/ipfs/repo.lock
      fi
      exit 0
    fi

    ipfs init --profile=badgerds,server

    ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001

    ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080

    ipfs config --json Swarm.ConnMgr.HighWater 2000

    ipfs config --json Datastore.BloomFilterSize 1048576

    ipfs config Datastore.StorageMax 100GB
I followed the official steps to build this, and used the following command to generate the cluster-secret:
$ od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n' | base64 -w 0 -
But I get:
error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
I saw the same problem reported in the official GitHub issue. So, using the openssl rand -hex command is not OK.

To clarify, I am posting a community wiki answer.
To solve the following error:
no such file or directory
you used runAsUser: 0.
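For reference, a minimal sketch of how that can be set at the pod level of the StatefulSet above (this runs the containers as root, which is a broad workaround; a per-container securityContext with runAsUser: 0 is a narrower alternative):

spec:
  template:
    spec:
      securityContext:
        runAsUser: 0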
The second error:
error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
was caused by the CLUSTER_SECRET value using an encoding other than hex.
According to this page:
The Cluster Secret Key
The secret key of a cluster is a 32-byte, hex-encoded random string, which every cluster peer needs in its service.json configuration.
A secret key can be generated and predefined in the CLUSTER_SECRET environment variable, and will subsequently be used upon running ipfs-cluster-service init.
Here is a link to the solved issue.
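For reference, a minimal sketch of generating a valid value (the 64-character hex string itself is what ipfs-cluster must see; base64 is applied only once, for the data: field of the Kubernetes Secret):

# 32 random bytes as a 64-character hex string, no spaces or newlines
CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')
echo $CLUSTER_SECRET

# base64-encode it exactly once, for the Secret's data: field
echo -n $CLUSTER_SECRET | base64 -w 0

Any non-hex byte in the decoded Secret value (for example from an extra round of base64, a stray newline, or a BOM) triggers the encoding/hex: invalid byte error.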
See also:
Configuration reference
IPFS Tutorial
Documentation guide Security and ports

Related

Rabbitmq containers throwing error "Bad characters in cookie"

I am trying to set up a RabbitMQ cluster, and when the containers start they fail with the error [error] CRASH REPORT Process <0.200.0> with 0 neighbours crashed with reason: "Bad characters in cookie" in auth:init_no_setcookie/0 line 313. This suggests that the Erlang cookie value passed in is not valid:
kubectl -n demos get pods
NAME READY STATUS RESTARTS AGE
mongodb-deployment-6499999-vpcjh 1/1 Running 0 12h
rabbitmq-0 0/1 CrashLoopBackOff 9 25m
rabbitmq-1 0/1 CrashLoopBackOff 9 24m
rabbitmq-2 0/1 CrashLoopBackOff 9 23m
And when I query the logs for one of the pods:
kubectl -n demos logs -p rabbitmq-0 --previous
I get:
WARNING: '/var/lib/rabbitmq/.erlang.cookie' was populated from
'$RABBITMQ_ERLANG_COOKIE', which will no longer happen in 3.9 and later! (https://github.com/docker-library/rabbitmq/pull/424)
Configuring logger redirection
02:04:47.506 [error] Bad characters in cookie
02:04:47.512 [error]
02:04:47.506 [error] Supervisor net_sup had child auth started with auth:start_link() at undefined exit with reason "Bad characters in cookie" in auth:init_no_setcookie/0 line 313 in context start_error
02:04:47.506 [error] CRASH REPORT Process <0.200.0> with 0 neighbours crashed with reason: "Bad characters in cookie" in auth:init_no_setcookie/0 line 313
02:04:47.522 [error] BOOT FAILED
BOOT FAILED
02:04:47.523 [error] ===========
===========
02:04:47.523 [error] Exception during startup:
Exception during startup:
02:04:47.524 [error]
02:04:47.524 [error] supervisor:children_map/4 line 1250
....
....
....
This is how I am generating the cookie in bash:
dd if=/dev/urandom bs=30 count=1 | base64
And in the Secret manifest I have:
metadata:
name: rabbit-secret
namespace: demos
type: Opaque
data:
# echo -n "cookie-value" | base64
RABBITMQ_ERLANG_COOKIE: <encoded_cookie_value_here>
And in the StatefulSet I have:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: rabbitmq
namespace: demos
spec:
serviceName: rabbitmq
replicas: 3
selector:
matchLabels:
app: rabbitmq
template:
metadata:
labels:
app: rabbitmq
spec:
serviceAccountName: rabbitmq
initContainers:
- name: config
image: busybox
imagePullPolicy: "IfNotPresent"
command: ['/bin/sh', '-c', 'cp /tmp/config/rabbitmq.conf /config/rabbitmq.conf && ls -l /config/ && cp /tmp/config/enabled_plugins /etc/rabbitmq/enabled_plugins']
volumeMounts:
- name: config
mountPath: /tmp/config/
readOnly: false
- name: config-file
mountPath: /config/
- name: plugins-file
mountPath: /etc/rabbitmq/
containers:
- name: rabbitmq
image: rabbitmq:3.8-management
imagePullPolicy: "IfNotPresent"
ports:
- containerPort: 4369
name: discovery
- containerPort: 5672
name: amqp
env:
- name: RABBIT_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: RABBIT_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: RABBITMQ_NODENAME
value: rabbit#$(RABBIT_POD_NAME).rabbitmq.$(RABBIT_POD_NAMESPACE).svc.cluster.local
- name: RABBITMQ_USE_LONGNAME
value: "true"
- name: RABBITMQ_CONFIG_FILE
value: "/config/rabbitmq"
- name: RABBITMQ_ERLANG_COOKIE
valueFrom:
secretKeyRef:
name: rabbit-secret
key: RABBITMQ_ERLANG_COOKIE
- name: K8S_HOSTNAME_SUFFIX
value: .rabbitmq.$(RABBIT_POD_NAMESPACE).svc.cluster.local
volumeMounts:
- name: data
mountPath: /var/lib/rabbitmq
readOnly: false
- name: config-file
mountPath: /config/
- name: plugins-file
mountPath: /etc/rabbitmq/
volumes:
- name: config-file
emptyDir: {}
- name: plugins-file
emptyDir: {}
- name: config
configMap:
name: rabbitmq-config
defaultMode: 0755
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "cinder-csi"
resources:
requests:
storage: 50Mi
---
apiVersion: v1
kind: Service
metadata:
name: rabbitmq
namespace: labs
spec:
clusterIP: None
ports:
- port: 4369
targetPort: 4369
name: discovery
- port: 5672
targetPort: 5672
name: amqp
selector:
app: rabbitmq
What am I missing?
Is there a recommended way of generating the cookie, or is there something else to do with the K8s cluster itself?
I have followed the example given here with the only difference being that I am generating my cookie on my local machine and not the k8s host.
duplicate question?
This requires you to create the Secret. Just to bring this up, you can run a one-off imperative command to create a random Secret:
kubectl create secret generic rabbitmq \
--from-literal=erlangCookie=$(dd if=/dev/urandom bs=30 count=1 | base64)
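If the Secret already exists, a quick way to check what the pod will actually receive is to decode it (assuming the name rabbit-secret and namespace demos from the question):

kubectl -n demos get secret rabbit-secret \
  -o jsonpath='{.data.RABBITMQ_ERLANG_COOKIE}' | base64 -d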
The error comes from RabbitMQ's docker-entrypoint.sh script:
if [ "${RABBITMQ_ERLANG_COOKIE:-}" ]; then
cookieFile='/var/lib/rabbitmq/.erlang.cookie'
if [ -e "$cookieFile" ]; then
if [ "$(cat "$cookieFile" 2>/dev/null)" != "$RABBITMQ_ERLANG_COOKIE" ]; then
echo >&2
echo >&2 "warning: $cookieFile contents do not match RABBITMQ_ERLANG_COOKIE"
echo >&2
fi
else
echo "$RABBITMQ_ERLANG_COOKIE" > "$cookieFile"
fi
chmod 600 "$cookieFile"
fi
Therefore you can delete the /var/lib/rabbitmq/.erlang.cookie file; the script will recreate it from the contents of the $RABBITMQ_ERLANG_COOKIE environment variable.
If you are working on a production system, be very careful: test this on a local system first to gain experience.
This code is executed only when RabbitMQ starts, and it won't run again after that.
You can also find the actual cookie RabbitMQ is using from the Erlang console via erlang:get_cookie() -> Cookie | nocookie.
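As a sketch, assuming the pod name rabbitmq-0 and namespace demos from the question, the live cookie can be read through rabbitmqctl, which evaluates the Erlang expression inside the running node:

kubectl -n demos exec rabbitmq-0 -- rabbitmqctl eval 'erlang:get_cookie().'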

How to access redis master node in kubernetes cluster from external application

I have installed the Redis master and replica nodes in Kubernetes using a StatefulSet and a headless Service. I am able to communicate between the pods inside Kubernetes using the address redis-0.redis.redis.svc.cluster.local:6379, but when I try to access it from an external application (Spring Boot), I am not able to connect.
I also tried exposing it using a NodePort, but with no luck.
Can you help me resolve this issue?
Service.yaml
apiVersion: v1
kind: Service
metadata:
name: redis
spec:
type: NodePort
selector:
name: redis
ports:
- port: 6379
targetPort: 6379
nodePort: 30001
Statefullset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
spec:
serviceName: redis
replicas: 3
selector:
matchLabels:
app: redis
template:
metadata:
labels:
name: redis
app: redis
spec:
initContainers:
- name: config
image: redis:6.2.3-alpine
command: [ "sh", "-c" ]
args:
- |
cp /tmp/redis/redis.conf /etc/redis/redis.conf
echo "finding master..."
MASTER_FDQN=`hostname -f | sed -e 's/redis-[0-9]\./redis-0./'`
if [ "$(redis-cli -h sentinel -p 5000 ping)" != "PONG" ]; then
echo "master not found, defaulting to redis-0"
if [ "$(hostname)" == "redis-0" ]; then
echo "this is redis-0, not updating config..."
else
echo "updating redis.conf..."
echo "slaveof $MASTER_FDQN 6379" >> /etc/redis/redis.conf
fi
else
echo "sentinel found, finding master"
MASTER="$(redis-cli -h sentinel -p 5000 sentinel get-master-addr-by-name mymaster | grep -E '(^redis-\d{1,})|([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})')"
echo "master found : $MASTER, updating redis.conf"
echo "slaveof $MASTER 6379" >> /etc/redis/redis.conf
fi
volumeMounts:
- name: redis-config
mountPath: /etc/redis/
- name: config
mountPath: /tmp/redis/
containers:
- name: redis
image: redis:6.2.3-alpine
command: ["redis-server"]
args: ["/etc/redis/redis.conf"]
ports:
- containerPort: 6379
name: redis
volumeMounts:
- name: data
mountPath: /data
- name: redis-config
mountPath: /etc/redis/
volumes:
- name: redis-config
emptyDir: {}
- name: config
configMap:
name: redis-config
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 500Mi
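As a quick sanity check (assuming the NodePort 30001 from the Service above and a worker node IP that is reachable from outside the cluster), external connectivity can be tested with redis-cli before involving Spring Boot:

redis-cli -h <node-ip> -p 30001 ping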

502 Bad Gateway Gateway with Kubernetes Ingress

I have an EKS cluster with 2 nodes. I have deployed MySQL with 3 Pods: a master for writing and 2 workers for reading.
I have a simple Node.js application which connects to and reads from a MySQL database called emails_db.
I created the database simply by entering the master MySQL pod; the Node.js application should read from that DB.
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
serviceName: mysql
replicas: 3
template:
metadata:
labels:
app: mysql
spec:
initContainers:
- name: init-mysql
image: mysql:5.7
command:
- bash
- "-c"
- |
set -ex
# Generate mysql server-id from pod ordinal index.
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
echo [mysqld] > /mnt/conf.d/server-id.cnf
# Add an offset to avoid reserved server-id=0 value.
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
# Copy appropriate conf.d files from config-map to emptyDir.
if [[ $ordinal -eq 0 ]]; then
cp /mnt/config-map/primary.cnf /mnt/conf.d/
else
cp /mnt/config-map/replica.cnf /mnt/conf.d/
fi
volumeMounts:
- name: conf
mountPath: /mnt/conf.d
- name: config-map
mountPath: /mnt/config-map
- name: clone-mysql
image: gcr.io/google-samples/xtrabackup:1.0
command:
- bash
- "-c"
- |
set -ex
# Skip the clone if data already exists.
[[ -d /var/lib/mysql/mysql ]] && exit 0
# Skip the clone on primary (ordinal index 0).
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
[[ $ordinal -eq 0 ]] && exit 0
# Clone data from previous peer.
ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
# Prepare the backup.
xtrabackup --prepare --target-dir=/var/lib/mysql
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
containers:
- name: mysql
image: mysql:5.7
env:
- name: MYSQL_ALLOW_EMPTY_PASSWORD
value: "1"
ports:
- name: mysql
containerPort: 3306
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 500m
memory: 1Gi
livenessProbe:
exec:
command: ["mysqladmin", "ping"]
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
exec:
# Check we can execute queries over TCP (skip-networking is off).
command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
initialDelaySeconds: 5
periodSeconds: 2
timeoutSeconds: 1
- name: xtrabackup
image: gcr.io/google-samples/xtrabackup:1.0
ports:
- name: xtrabackup
containerPort: 3307
command:
- bash
- "-c"
- |
set -ex
cd /var/lib/mysql
# Determine binlog position of cloned data, if any.
if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
# XtraBackup already generated a partial "CHANGE MASTER TO" query
# because we're cloning from an existing replica. (Need to remove the tailing semicolon!)
cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
# Ignore xtrabackup_binlog_info in this case (it's useless).
rm -f xtrabackup_slave_info xtrabackup_binlog_info
elif [[ -f xtrabackup_binlog_info ]]; then
# We're cloning directly from primary. Parse binlog position.
[[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
rm -f xtrabackup_binlog_info xtrabackup_slave_info
echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
fi
# Check if we need to complete a clone by starting replication.
if [[ -f change_master_to.sql.in ]]; then
echo "Waiting for mysqld to be ready (accepting connections)"
until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
echo "Initializing replication from clone position"
mysql -h 127.0.0.1 \
-e "$(<change_master_to.sql.in), \
MASTER_HOST='mysql-0.mysql', \
MASTER_USER='root', \
MASTER_PASSWORD='', \
MASTER_CONNECT_RETRY=10; \
START SLAVE;" || exit 1
# In case of container restart, attempt this at-most-once.
mv change_master_to.sql.in change_master_to.sql.orig
fi
# Start a server to send backups when requested by peers.
exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
"xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 100m
memory: 100Mi
volumes:
- name: conf
emptyDir: {}
- name: config-map
configMap:
name: mysql
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 10Gi
Nodejs Service
apiVersion: v1
kind: Service
metadata:
labels:
app: nodejs
name: nodejs-service
spec:
selector:
app: nodejs
ports:
- name: web
port: 5000
protocol: TCP
targetPort: 3306
Nodejs Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: nodejs
labels:
app: nodejs
spec:
replicas: 2
selector:
matchLabels:
app: nodejs
template:
metadata:
labels:
app: nodejs
spec:
containers:
- name: nodejs
image: 199266549/my-nodeapi:PortLatest
ports:
- containerPort: 5000
env:
- name: db-url
valueFrom:
configMapKeyRef:
name: myconfig
key: db-url
- name: DB_NAME
valueFrom:
configMapKeyRef:
name: myconfig
key: DB_NAME
- name: DB_USER
valueFrom:
secretKeyRef:
name: mysecret
key: DB_USER
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: DB_PASSWORD
Ingress looks like this:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: mysql-ingress
namespace: default
labels:
app: mysql
spec:
rules:
- host: mysql.com
http:
paths:
- path: /read
backend:
serviceName: mysql-read
servicePort: 80
- path: /write
backend:
serviceName: mysql
servicePort: 80
- path: /
backend:
serviceName: nodejs-service
servicePort: 5000
The target port is 3306, which is the port exposed by MySQL, while the Node.js application exposes port 5000.
I tried the following curl command:
curl -v -Hhost:mysql.com http://abccc302f198147aca906e71c710c661-29001ea7d40ff165.elb.us-east-1.amazonaws.com/
I checked the Ingress backends. The service of interest is the one given with path /.
Any idea what is missing when trying to connect the Node.js service to the MySQL Pods?
Is this problematic?
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
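As a sketch (assuming the Ingress name mysql-ingress in the default namespace, as above), the backends and whether their Services resolve to any endpoints can be inspected with:

kubectl describe ingress mysql-ingress
kubectl get endpoints nodejs-service mysql mysql-read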

pod events shows persistentvolumeclaim "flink-pv-claim-11" is being deleted in kubernetes but binding success

I am binding a PersistentVolumeClaim to my Job's pod, and it shows:
persistentvolumeclaim "flink-pv-claim-11" is being deleted
But the PersistentVolumeClaim exists and is bound successfully, and the pod has no log output. What should I do to fix this? This is the Job YAML:
apiVersion: batch/v1
kind: Job
metadata:
name: flink-jobmanager-1.11
spec:
template:
metadata:
labels:
app: flink
component: jobmanager
spec:
restartPolicy: OnFailure
containers:
- name: jobmanager
image: flink:1.11.0-scala_2.11
env:
args: ["standalone-job", "--job-classname", "com.job.ClassName", <optional arguments>, <job arguments>] # optional arguments: ["--job-id", "<job id>", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState"]
ports:
- containerPort: 6123
name: rpc
- containerPort: 6124
name: blob-server
- containerPort: 8081
name: webui
livenessProbe:
tcpSocket:
port: 6123
initialDelaySeconds: 30
periodSeconds: 60
volumeMounts:
- name: flink-config-volume
mountPath: /opt/flink/conf
- name: job-artifacts-volume
mountPath: /opt/flink/usrlib
- name: job-artifacts-volume
mountPath: /opt/flink/data/job-artifacts
securityContext:
runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
volumes:
- name: flink-config-volume
configMap:
name: flink-1.11-config
items:
- key: flink-conf.yaml
path: flink-conf.yaml
- key: log4j-console.properties
path: log4j-console.properties
- name: job-artifacts-volume
persistentVolumeClaim:
claimName: flink-pv-claim-11
Make sure the Job tied to this PVC is deleted. If any other Jobs or Pods are running and using this PVC, you cannot delete the PVC until you delete them.
To see which resources are currently using the PVC, try running:
kubectl get pods --all-namespaces -o=json | jq -c \
  '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec.volumes[] | select( has ("persistentVolumeClaim") ).persistentVolumeClaim.claimName }'
You have two volumeMounts named job-artifacts-volume, which may be causing some confusion.
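As an additional check (commands using the PVC name from the question): the "is being deleted" event usually means a deletion was requested earlier and the kubernetes.io/pvc-protection finalizer is holding the PVC in Terminating state while pods still reference it, which can be confirmed with:

kubectl get pvc flink-pv-claim-11 -o jsonpath='{.metadata.deletionTimestamp}{"\n"}'
kubectl get pvc flink-pv-claim-11 -o jsonpath='{.metadata.finalizers}{"\n"}'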

How Deploy Drupal 7 using Kubernetes with a persistent store?

I'm trying to deploy Drupal 7 in Kubernetes. It fails with the error: Fatal error: require_once(): Failed opening required '/var/www/html/modules/system/system.install' (include_path='.:/usr/local/lib/php') in /var/www/html/includes/install.core.inc on line 241.
Here is K8S deployment manifest:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: drupal-pvc
annotations:
pv.beta.kubernetes.io/gid: "drupal-gid"
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
name: drupal-service
spec:
ports:
- name: http
port: 80
protocol: TCP
selector:
app: drupal
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: drupal
name: drupal
spec:
replicas: 1
template:
metadata:
labels:
app: drupal
spec:
initContainers:
- name: init-sites-volume
image: drupal:7.72
command: ['/bin/bash', '-c']
args: ['cp -r /var/www/html/sites/ /data/; chown www-data:www-data /data/ -R']
volumeMounts:
- mountPath: /data
name: vol-drupal
containers:
- image: drupal:7.72
name: drupal
ports:
- containerPort: 80
volumeMounts:
- mountPath: /var/www/html/modules
name: vol-drupal
subPath: modules
- mountPath: /var/www/html/profiles
name: vol-drupal
subPath: profiles
- mountPath: /var/www/html/sites
name: vol-drupal
subPath: sites
- mountPath: /var/www/html/themes
name: vol-drupal
subPath: themes
volumes:
- name: vol-drupal
persistentVolumeClaim:
claimName: drupal-pvc
However, when I remove the volumeMounts from the drupal container, it works. I need to use volumes in order to persist the website data. Can anyone suggest a fix?
Update: I have also added the manifest for the PersistentVolumeClaim.
Check whether you can write to the mounted volume:
# kubectl exec -it drupal-zxxx -- sh
$ ls -alhtr /var/www/html/modules
$ cd /var/www/html/modules
$ touch test.txt
This is because storage configured with a group ID (GID) allows writing only by Pods using the same GID; mismatched or missing GIDs cause permission-denied errors.
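A minimal sketch of one way to address this, assuming the Apache www-data user in the drupal:7.72 image has GID 33 (verify this in your image): set a pod-level fsGroup so mounted volumes are group-writable by that user:

spec:
  template:
    spec:
      securityContext:
        fsGroup: 33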
Alternatively, you could try an operator for Drupal:
https://github.com/geerlingguy/drupal-operator
A Helm chart is another option:
https://bitnami.com/stack/drupal/helm