502 Bad Gateway with Kubernetes Ingress - kubernetes

I have an EKS cluster with 2 nodes. I have deployed MySQL with 3 Pods: one master for writing and 2 workers for reading.
I have a simple Node.js application that connects to and reads from a MySQL database called emails_db.
I created the database by entering the master MySQL pod directly, and the Node.js application should read from that DB.
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
serviceName: mysql
replicas: 3
template:
metadata:
labels:
app: mysql
spec:
initContainers:
- name: init-mysql
image: mysql:5.7
command:
- bash
- "-c"
- |
set -ex
# Generate mysql server-id from pod ordinal index.
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
echo [mysqld] > /mnt/conf.d/server-id.cnf
# Add an offset to avoid reserved server-id=0 value.
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
# Copy appropriate conf.d files from config-map to emptyDir.
if [[ $ordinal -eq 0 ]]; then
cp /mnt/config-map/primary.cnf /mnt/conf.d/
else
cp /mnt/config-map/replica.cnf /mnt/conf.d/
fi
volumeMounts:
- name: conf
mountPath: /mnt/conf.d
- name: config-map
mountPath: /mnt/config-map
- name: clone-mysql
image: gcr.io/google-samples/xtrabackup:1.0
command:
- bash
- "-c"
- |
set -ex
# Skip the clone if data already exists.
[[ -d /var/lib/mysql/mysql ]] && exit 0
# Skip the clone on primary (ordinal index 0).
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
[[ $ordinal -eq 0 ]] && exit 0
# Clone data from previous peer.
ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
# Prepare the backup.
xtrabackup --prepare --target-dir=/var/lib/mysql
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
containers:
- name: mysql
image: mysql:5.7
env:
- name: MYSQL_ALLOW_EMPTY_PASSWORD
value: "1"
ports:
- name: mysql
containerPort: 3306
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 500m
memory: 1Gi
livenessProbe:
exec:
command: ["mysqladmin", "ping"]
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
exec:
# Check we can execute queries over TCP (skip-networking is off).
command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
initialDelaySeconds: 5
periodSeconds: 2
timeoutSeconds: 1
- name: xtrabackup
image: gcr.io/google-samples/xtrabackup:1.0
ports:
- name: xtrabackup
containerPort: 3307
command:
- bash
- "-c"
- |
set -ex
cd /var/lib/mysql
# Determine binlog position of cloned data, if any.
if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
# XtraBackup already generated a partial "CHANGE MASTER TO" query
# because we're cloning from an existing replica. (Need to remove the trailing semicolon!)
cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
# Ignore xtrabackup_binlog_info in this case (it's useless).
rm -f xtrabackup_slave_info xtrabackup_binlog_info
elif [[ -f xtrabackup_binlog_info ]]; then
# We're cloning directly from primary. Parse binlog position.
[[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
rm -f xtrabackup_binlog_info xtrabackup_slave_info
echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
fi
# Check if we need to complete a clone by starting replication.
if [[ -f change_master_to.sql.in ]]; then
echo "Waiting for mysqld to be ready (accepting connections)"
until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
echo "Initializing replication from clone position"
mysql -h 127.0.0.1 \
-e "$(<change_master_to.sql.in), \
MASTER_HOST='mysql-0.mysql', \
MASTER_USER='root', \
MASTER_PASSWORD='', \
MASTER_CONNECT_RETRY=10; \
START SLAVE;" || exit 1
# In case of container restart, attempt this at-most-once.
mv change_master_to.sql.in change_master_to.sql.orig
fi
# Start a server to send backups when requested by peers.
exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
"xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 100m
memory: 100Mi
volumes:
- name: conf
emptyDir: {}
- name: config-map
configMap:
name: mysql
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 10Gi
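The init-mysql script above copies primary.cnf or replica.cnf from a ConfigMap named mysql that is not shown in the question. In the upstream Kubernetes MySQL StatefulSet tutorial, which this manifest appears to be based on, that ConfigMap looks roughly like the following (reproduced here as an assumption about what is in use):
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  primary.cnf: |
    # Apply this config only on the primary.
    [mysqld]
    log-bin
  replica.cnf: |
    # Apply this config only on replicas.
    [mysqld]
    super-read-only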
Nodejs Service
apiVersion: v1
kind: Service
metadata:
labels:
app: nodejs
name: nodejs-service
spec:
selector:
app: nodejs
ports:
- name: web
port: 5000
protocol: TCP
targetPort: 3306
Nodejs Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: nodejs
labels:
app: nodejs
spec:
replicas: 2
selector:
matchLabels:
app: nodejs
template:
metadata:
labels:
app: nodejs
spec:
containers:
- name: nodejs
image: 199266549/my-nodeapi:PortLatest
ports:
- containerPort: 5000
env:
- name: db-url
valueFrom:
configMapKeyRef:
name: myconfig
key: db-url
- name: DB_NAME
valueFrom:
configMapKeyRef:
name: myconfig
key: DB_NAME
- name: DB_USER
valueFrom:
secretKeyRef:
name: mysecret
key: DB_USER
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: DB_PASSWORD
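The Deployment above reads from a ConfigMap named myconfig and a Secret named mysecret, neither of which is shown. A minimal sketch with placeholder values (the real values are not in the question) would be:
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfig
data:
  db-url: mysql-read          # placeholder; e.g. the read Service referenced by the Ingress below
  DB_NAME: emails_db
---
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:                   # stringData lets kubectl handle the base64 encoding
  DB_USER: root               # placeholder
  DB_PASSWORD: ""             # placeholder; the StatefulSet above allows an empty root password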
Ingress looks like this:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: mysql-ingress
namespace: default
labels:
app: mysql
spec:
rules:
- host: mysql.com
http:
paths:
- path: /read
backend:
serviceName: mysql-read
servicePort: 80
- path: /write
backend:
serviceName: mysql
servicePort: 80
- path: /
backend:
serviceName: nodejs-service
servicePort: 5000
The target port is 3306, which is the port exposed by MySQL, while the Node.js application exposes port 5000.
I tried the following curl command:
curl -v -Hhost:mysql.com http://abccc302f198147aca906e71c710c661-29001ea7d40ff165.elb.us-east-1.amazonaws.com/
I checked the Ingress backends; the service of interest is the one registered for path /.
Any idea what is missing when trying to connect the Node.js service to the MySQL Pods?
Is this problematic?
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
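For reference, a Service's targetPort is the port on the selected Pods that traffic is forwarded to. A Service fronting the Node.js Pods would therefore normally target the container's port 5000 rather than MySQL's 3306; a sketch of that variant (an assumption, not the manifest from the question):
apiVersion: v1
kind: Service
metadata:
  name: nodejs-service
  labels:
    app: nodejs
spec:
  selector:
    app: nodejs
  ports:
  - name: web
    port: 5000        # port the Service exposes
    targetPort: 5000  # port the Node.js container listens on
    protocol: TCP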

Related

How to access redis master node in kubernetes cluster from external application

I have installed a Redis master and cluster nodes in Kubernetes using a StatefulSet and a headless service. I am able to communicate between the pods inside Kubernetes using the address redis-0.redis.redis.svc.cluster.local:6379, but when I try to access Redis from an external application (Spring Boot), I am not able to connect.
I also tried exposing it using a NodePort, but had no luck.
Can you help me resolve this issue?
Service.yaml
apiVersion: v1
kind: Service
metadata:
name: redis
spec:
type: NodePort
selector:
name: redis
ports:
- port: 6379
targetPort: 6379
nodePort: 30001
Statefullset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
spec:
serviceName: redis
replicas: 3
selector:
matchLabels:
app: redis
template:
metadata:
labels:
name: redis
app: redis
spec:
initContainers:
- name: config
image: redis:6.2.3-alpine
command: [ "sh", "-c" ]
args:
- |
cp /tmp/redis/redis.conf /etc/redis/redis.conf
echo "finding master..."
MASTER_FDQN=`hostname -f | sed -e 's/redis-[0-9]\./redis-0./'`
if [ "$(redis-cli -h sentinel -p 5000 ping)" != "PONG" ]; then
echo "master not found, defaulting to redis-0"
if [ "$(hostname)" == "redis-0" ]; then
echo "this is redis-0, not updating config..."
else
echo "updating redis.conf..."
echo "slaveof $MASTER_FDQN 6379" >> /etc/redis/redis.conf
fi
else
echo "sentinel found, finding master"
MASTER="$(redis-cli -h sentinel -p 5000 sentinel get-master-addr-by-name mymaster | grep -E '(^redis-\d{1,})|([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})')"
echo "master found : $MASTER, updating redis.conf"
echo "slaveof $MASTER 6379" >> /etc/redis/redis.conf
fi
volumeMounts:
- name: redis-config
mountPath: /etc/redis/
- name: config
mountPath: /tmp/redis/
containers:
- name: redis
image: redis:6.2.3-alpine
command: ["redis-server"]
args: ["/etc/redis/redis.conf"]
ports:
- containerPort: 6379
name: redis
volumeMounts:
- name: data
mountPath: /data
- name: redis-config
mountPath: /etc/redis/
volumes:
- name: redis-config
emptyDir: {}
- name: config
configMap:
name: redis-config
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 500Mi
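For reference, a NodePort Service is reached from outside the cluster at any node's address on the allocated port. A quick connectivity check from the external machine might look like this (the node IP is a placeholder; the port matches the Service above):
# redis-cli must be available on the external machine; 203.0.113.10 stands in for a node's reachable IP
redis-cli -h 203.0.113.10 -p 30001 ping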

Deploying IPFS-Cluster on Kubernetes, I get an error

When I deploy IPFS-Cluster on Kubernetes, I get the following error (these are the ipfs-cluster logs):
error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
2022-01-04T10:23:08.103Z INFO service ipfs-cluster-service/daemon.go:47 Initializing. For verbose output run with "-l debug". Please wait...
2022-01-04T10:23:08.103Z ERROR config config/config.go:352 error reading the configuration file: open /data/ipfs-cluster/service.json: no such file or directory
error loading configurations: open /data/ipfs-cluster/service.json: no such file or directory
These are initContainer logs:
+ user=ipfs
+ mkdir -p /data/ipfs
+ chown -R ipfs /data/ipfs
+ '[' -f /data/ipfs/config ]
+ ipfs init '--profile=badgerds,server'
initializing IPFS node at /data/ipfs
generating 2048-bit RSA keypair...done
peer identity: QmUHmdhauhk7zdj5XT1zAa6BQfrJDukysb2PXsCQ62rBdS
to get started, enter:
ipfs cat /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme
+ ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
+ ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
+ ipfs config --json Swarm.ConnMgr.HighWater 2000
+ ipfs config --json Datastore.BloomFilterSize 1048576
+ ipfs config Datastore.StorageMax 100GB
These are ipfs container logs:
Changing user to ipfs
ipfs version 0.4.18
Found IPFS fs-repo at /data/ipfs
Initializing daemon...
go-ipfs version: 0.4.18-aefc746
Repo version: 7
System version: amd64/linux
Golang version: go1.11.1
Error: open /data/ipfs/config: permission denied
Received interrupt signal, shutting down...
(Hit ctrl-c again to force-shutdown the daemon.)
The following is my kubernetes yaml file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: ipfs-cluster
spec:
serviceName: ipfs-cluster
replicas: 3
selector:
matchLabels:
app: ipfs-cluster
template:
metadata:
labels:
app: ipfs-cluster
spec:
initContainers:
- name: configure-ipfs
image: "ipfs/go-ipfs:v0.4.18"
command: ["sh", "/custom/configure-ipfs.sh"]
volumeMounts:
- name: ipfs-storage
mountPath: /data/ipfs
- name: configure-script
mountPath: /custom/entrypoint.sh
subPath: entrypoint.sh
- name: configure-script-2
mountPath: /custom/configure-ipfs.sh
subPath: configure-ipfs.sh
containers:
- name: ipfs
image: "ipfs/go-ipfs:v0.4.18"
imagePullPolicy: IfNotPresent
env:
- name: IPFS_FD_MAX
value: "4096"
ports:
- name: swarm
protocol: TCP
containerPort: 4001
- name: swarm-udp
protocol: UDP
containerPort: 4002
- name: api
protocol: TCP
containerPort: 5001
- name: ws
protocol: TCP
containerPort: 8081
- name: http
protocol: TCP
containerPort: 8080
livenessProbe:
tcpSocket:
port: swarm
initialDelaySeconds: 30
timeoutSeconds: 5
periodSeconds: 15
volumeMounts:
- name: ipfs-storage
mountPath: /data/ipfs
- name: configure-script
mountPath: /custom
resources:
{}
- name: ipfs-cluster
image: "ipfs/ipfs-cluster:latest"
imagePullPolicy: IfNotPresent
command: ["sh", "/custom/entrypoint.sh"]
envFrom:
- configMapRef:
name: env-config
env:
- name: BOOTSTRAP_PEER_ID
valueFrom:
configMapRef:
name: env-config
key: bootstrap-peer-id
- name: BOOTSTRAP_PEER_PRIV_KEY
valueFrom:
secretKeyRef:
name: secret-config
key: bootstrap-peer-priv-key
- name: CLUSTER_SECRET
valueFrom:
secretKeyRef:
name: secret-config
key: cluster-secret
- name: CLUSTER_MONITOR_PING_INTERVAL
value: "3m"
- name: SVC_NAME
value: $(CLUSTER_SVC_NAME)
ports:
- name: api-http
containerPort: 9094
protocol: TCP
- name: proxy-http
containerPort: 9095
protocol: TCP
- name: cluster-swarm
containerPort: 9096
protocol: TCP
livenessProbe:
tcpSocket:
port: cluster-swarm
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 10
volumeMounts:
- name: cluster-storage
mountPath: /data/ipfs-cluster
- name: configure-script
mountPath: /custom/entrypoint.sh
subPath: entrypoint.sh
resources:
{}
volumes:
- name: configure-script
configMap:
name: ipfs-cluster-set-bootstrap-conf
- name: configure-script-2
configMap:
name: configura-ipfs
volumeClaimTemplates:
- metadata:
name: cluster-storage
spec:
storageClassName: gp2
accessModes: ["ReadWriteOnce"]
persistentVolumeReclaimPolicy: Retain
resources:
requests:
storage: 5Gi
- metadata:
name: ipfs-storage
spec:
storageClassName: gp2
accessModes: ["ReadWriteOnce"]
persistentVolumeReclaimPolicy: Retain
resources:
requests:
storage: 200Gi
---
kind: Secret
apiVersion: v1
metadata:
name: secret-config
namespace: weex-ipfs
annotations:
kubesphere.io/creator: tom
data:
bootstrap-peer-priv-key: >-
UTBGQlUzQjNhM2RuWjFOcVFXZEZRVUZ2U1VKQlVVTjBWbVpUTTFwck9ETkxVWEZNYzJFemFGWlZaV2xKU0doUFZGRTBhRmhrZVhCeFJGVmxVbmR6Vmt4Nk9IWndZ...
cluster-secret: 7d4c019035beb7da7275ea88315c39b1dd9fdfaef017596550ffc1ad3fdb556f
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
name: env-config
namespace: weex-ipfs
annotations:
kubesphere.io/creator: tom
data:
bootstrap-peer-id: QmWgEHZEmJhuoDgFmBKZL8VtpMEqRArqahuaX66cbvyutP
---
kind: ConfigMap
apiVersion: v1
metadata:
name: ipfs-cluster-set-bootstrap-conf
namespace: weex-ipfs
annotations:
kubesphere.io/creator: tom
data:
entrypoint.sh: |2
#!/bin/sh
user=ipfs
# This is a custom entrypoint for k8s designed to connect to the bootstrap
# node running in the cluster. It has been set up using a configmap to
# allow changes on the fly.
if [ ! -f /data/ipfs-cluster/service.json ]; then
ipfs-cluster-service init
fi
PEER_HOSTNAME=`cat /proc/sys/kernel/hostname`
grep -q ".*ipfs-cluster-0.*" /proc/sys/kernel/hostname
if [ $? -eq 0 ]; then
CLUSTER_ID=${BOOTSTRAP_PEER_ID} \
CLUSTER_PRIVATEKEY=${BOOTSTRAP_PEER_PRIV_KEY} \
exec ipfs-cluster-service daemon --upgrade
else
BOOTSTRAP_ADDR=/dns4/${SVC_NAME}-0/tcp/9096/ipfs/${BOOTSTRAP_PEER_ID}
if [ -z $BOOTSTRAP_ADDR ]; then
exit 1
fi
# Only ipfs user can get here
exec ipfs-cluster-service daemon --upgrade --bootstrap $BOOTSTRAP_ADDR --leave
fi
---
kind: ConfigMap
apiVersion: v1
metadata:
name: configura-ipfs
namespace: weex-ipfs
annotations:
kubesphere.io/creator: tom
data:
configure-ipfs.sh: >-
#!/bin/sh
set -e
set -x
user=ipfs
# This is a custom entrypoint for k8s designed to run ipfs nodes in an appropriate
# setup for production scenarios.
mkdir -p /data/ipfs && chown -R ipfs /data/ipfs
if [ -f /data/ipfs/config ]; then
if [ -f /data/ipfs/repo.lock ]; then
rm /data/ipfs/repo.lock
fi
exit 0
fi
ipfs init --profile=badgerds,server
ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
ipfs config --json Swarm.ConnMgr.HighWater 2000
ipfs config --json Datastore.BloomFilterSize 1048576
ipfs config Datastore.StorageMax 100GB
I followed the official steps to build this.
Following the official steps, I used this command to generate the cluster-secret:
$ od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n' | base64 -w 0 -
But I get:
error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
I saw the same problem in an official GitHub issue, so using the openssl rand -hex command is not OK either.
To clarify, I am posting a community wiki answer.
To solve the following error:
no such file or directory
you used runAsUser: 0.
The second error:
error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
was caused by CLUSTER_SECRET being in an encoding other than hex.
According to this page:
The Cluster Secret Key
The secret key of a cluster is a 32-byte hex-encoded random string, which every cluster peer needs in its service.json configuration.
A secret key can be generated and predefined in the CLUSTER_SECRET environment variable, and will subsequently be used upon running ipfs-cluster-service init.
Here is a link to the solved issue.
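A hedged sketch of generating a plain-hex 32-byte secret and letting kubectl handle the base64 step of the Secret data (the Secret and namespace names are taken from the manifests above; the private key value is a placeholder):
# 32 random bytes, hex-encoded, with no extra base64 step
CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')
# kubectl base64-encodes literal values itself when it writes the Secret's data fields
kubectl -n weex-ipfs create secret generic secret-config \
  --from-literal=cluster-secret="$CLUSTER_SECRET" \
  --from-literal=bootstrap-peer-priv-key="PLACEHOLDER_PRIVATE_KEY"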
See also:
Configuration reference
IPFS Tutorial
Documentation guide Security and ports

Helm lifecycle commands in deployment

I have a deployment.yaml template.
I have AWS EKS 1.8 and the same kubectl version.
I'm using Helm 3.3.4.
When I deploy the same template directly with kubectl apply -f deployment.yaml, everything is fine: the init containers and the main container in the pod work.
But if I try to start the deployment through Helm, I get this error:
OCI runtime exec failed: exec failed: container_linux.go:349: starting
container process caused "process_linux.go:101: executing setns
process caused \"exit status 1\"": unknown\r\n"
kubectl describe pods osad-apidoc-6b74c9bcf9-tjnrh
It looks like I have missed something in the annotations, or I'm using the wrong syntax in the command description.
Some unimportant parameters are omitted in this example:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "apidoc.fullname" . }}
labels:
{{- include "apidoc.labels" . | nindent 4 }}
spec:
template:
metadata:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 80
protocol: TCP
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- |
echo 'ServerName 127.0.0.1' >> /etc/apache2/apache2.conf
a2enmod rewrite
a2enmod headers
a2enmod ssl
a2dissite default
a2ensite 000-default
a2enmod rewrite
service apache2 start
I have tried invoking a simple command like:
lifecycle:
postStart:
exec:
command: ['/bin/sh', '-c', 'printenv']
Unfortunately, I got the same error.
If I delete this lifecycle block, everything works fine through Helm.
But I need to invoke these commands; I can't omit these steps.
I also checked the Helm template with lint, and it looks good.
Original deployment.yaml before the move to Helm:
apiVersion: apps/v1
kind: Deployment
metadata:
name: apidoc
namespace: apidoc
labels:
app: apidoc
stage: dev
version: v-1
spec:
selector:
matchLabels:
app: apidoc
stage: dev
replicas: 1
template:
metadata:
labels:
app: apidoc
stage: dev
version: v-1
spec:
initContainers:
- name: git-clone
image: '123123123123.dkr.ecr.us-east-1.amazonaws.com/helper:latest'
volumeMounts:
- name: repos
mountPath: /var/repos
workingDir: /var/repos
command:
- sh
- '-c'
- >-
git clone --single-branch --branch k8s
git@github.com:examplerepo/apidoc.git -qv
- name: copy-data
image: '123123123123.dkr.ecr.us-east-1.amazonaws.com/helper:latest'
volumeMounts:
- name: web-data
mountPath: /var/www/app
- name: repos
mountPath: /var/repos
workingDir: /var/repos
command:
- sh
- '-c'
- >-
if cp -r apidoc/* /var/www/app/; then echo 'Success!!!' && exit 0;
else echo 'Failed !!!' && exit 1;fi;
containers:
- name: apache2
image: '123123123123.dkr.ecr.us-east-1.amazonaws.com/apache2:2.2'
tty: true
volumeMounts:
- name: web-data
mountPath: /var/www/app
- name: configfiles
mountPath: /etc/apache2/sites-available/000-default.conf
subPath: 000-default.conf
ports:
- name: http
protocol: TCP
containerPort: 80
lifecycle:
postStart:
exec:
command:
- /bin/sh
- '-c'
- |
echo 'ServerName 127.0.0.1' >> /etc/apache2/apache2.conf
a2enmod rewrite
a2enmod headers
a2enmod ssl
a2dissite default
a2ensite 000-default
a2enmod rewrite
service apache2 start
volumes:
- name: web-data
emptyDir: {}
- name: repos
emptyDir: {}
- name: configfiles
configMap:
name: apidoc-config
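One way to narrow down a Helm-versus-kubectl difference like this is to render the chart locally and compare it with the manifest that already works; a sketch (the release name and chart path are placeholders):
# Render the templates exactly as Helm would install them
helm template my-release ./apidoc-chart > rendered.yaml
# Compare the rendered Deployment (including the lifecycle/postStart block) with the known-good file
diff rendered.yaml deployment.yaml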

I want to apt-get install sysstat in a Kubernetes YAML file

Cluster information:
Kubernetes version: 1.8
Cloud being used: (put bare-metal if not on a public cloud) AWS EKS
Host OS: debian linux
When I deploy pods, I want my pod to install and start sysstat automatically.
These are my two YAML files below, but it doesn't work: I get CrashLoopBackOff when I put command: ["/bin/sh", "-c"] and args: ["apt-get install sysstat"] below the image: line.
cat deploy/db/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name:
sbdemo-postgres-sfs
spec:
serviceName: sbdemo-postgres-service
replicas: 1
selector:
matchLabels:
app: sbdemo-postgres-sfs
template:
metadata:
labels:
app: sbdemo-postgres-sfs
spec:
containers:
- name: postgres
image: dayan888/springdemo:postgres9.6
ports:
- containerPort: 5432
command: ["/bin/bash", "-c"]
args: ["apt-get install sysstat"]
volumeMounts:
- name: pvc-db-volume
mountPath: /var/lib/postgresql
volumeClaimTemplates:
- metadata:
name: pvc-db-volume
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1G
cat deploy/web/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: sbdemo-nginx
spec:
replicas: 3
selector:
matchLabels:
app: sbdemo-nginx
template:
metadata:
labels:
app: sbdemo-nginx
spec:
containers:
- name: nginx
image: gobawoo21/springdemo:nginx
command: ["/bin/bash", "-c"]
args: ["apt-get install sysstat"]
ports:
- containerPort: 80
volumeMounts:
- name: nginx-conf
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
- name: server-conf
mountPath: /etc/nginx/conf.d/server.conf
subPath: server.conf
volumes:
- name: nginx-conf
configMap:
name: nginx-conf
items:
- key: nginx.conf
path: nginx.conf
- name: server-conf
configMap:
name: server-conf
items:
- key: server.conf
path: server.conf
Does anyone know how to set this up automatically when deploying the pods?
Regards
The best practice is to install packages at the image build stage. You can simply add this step to your Dockerfile:
FROM postgres:9.6
RUN apt-get update &&\
apt-get install sysstat -y &&\
rm -rf /var/lib/apt/lists/*
COPY deploy/db/init_ddl.sh /docker-entrypoint-initdb.d/
RUN chmod +x /docker-entrypoint-initdb.d/init_ddl.sh
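Assuming a Dockerfile like the one above, the image is built and pushed to a registry before the manifest references it (the image name here is a placeholder):
docker build -t <your-registry>/springdemo:postgres9.6-sysstat .
docker push <your-registry>/springdemo:postgres9.6-sysstat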
Kube Manifest
spec:
containers:
- name: postgres
image: harik8/sof:62298191
imagePullPolicy: Always
ports:
- containerPort: 5432
env:
- name: POSTGRES_PASSWORD
value: password
volumeMounts:
- name: pvc-db-volume
mountPath: /var/lib/postgresql
It should run (please ignore the POSTGRES_PASSWORD env variable):
$ kubectl get po
NAME READY STATUS RESTARTS AGE
sbdemo-postgres-sfs-0 1/1 Running 0 8m46s
Validation
$ kubectl exec -it sbdemo-postgres-sfs-0 bash
root@sbdemo-postgres-sfs-0:/# iostat
Linux 4.19.107 (sbdemo-postgres-sfs-0) 06/10/2020 _x86_64_ (4 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
10.38 0.01 6.28 0.24 0.00 83.09
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
vda 115.53 1144.72 1320.48 1837135 2119208
scd0 0.02 0.65 0.00 1048 0
If this were possible, something would be wrong: your container should not be running as root, so even if you fixed this approach, it shouldn't work. What you need to do is put this in your container build instead (i.e. in the Dockerfile).

Trying to create a StatefulSet, but can't bind the PV to the PVC

I have been trying to create a StatefulSet in Kubernetes with a persistent volume.
Unfortunately, the pods never get out of the Pending state because Kubernetes says it can't bind the PV to the PVC.
Here is my PV template:
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv001
spec:
storageClassName: manual
capacity:
storage: 3Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
hostPath:
path: /mnt/data/pv001
Here is my StatefulSet configuration file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
serviceName: mysql
replicas: 3
template:
metadata:
labels:
app: mysql
spec:
initContainers:
- name: init-mysql
image: mysql:5.7
command:
- bash
- "-c"
- |
set -ex
# Generate mysql server-id from pod ordinal index.
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
echo [mysqld] > /mnt/conf.d/server-id.cnf
# Add an offset to avoid reserved server-id=0 value.
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
# Copy appropriate conf.d files from config-map to emptyDir.
if [[ $ordinal -eq 0 ]]; then
cp /mnt/config-map/master.cnf /mnt/conf.d/
else
cp /mnt/config-map/slave.cnf /mnt/conf.d/
fi
volumeMounts:
- name: conf
mountPath: /mnt/conf.d
- name: config-map
mountPath: /mnt/config-map
- name: clone-mysql
image: gcr.io/google-samples/xtrabackup:1.0
command:
- bash
- "-c"
- |
set -ex
# Skip the clone if data already exists.
[[ -d /var/lib/mysql/mysql ]] && exit 0
# Skip the clone on master (ordinal index 0).
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
[[ $ordinal -eq 0 ]] && exit 0
# Clone data from previous peer.
ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
# Prepare the backup.
xtrabackup --prepare --target-dir=/var/lib/mysql
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
containers:
- name: mysql
image: mysql:5.7
env:
- name: MYSQL_ALLOW_EMPTY_PASSWORD
value: "1"
ports:
- name: mysql
containerPort: 3306
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 500m
memory: 1Gi
livenessProbe:
exec:
command: ["mysqladmin", "ping"]
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
exec:
# Check we can execute queries over TCP (skip-networking is off).
command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
initialDelaySeconds: 5
periodSeconds: 2
timeoutSeconds: 1
- name: xtrabackup
image: gcr.io/google-samples/xtrabackup:1.0
ports:
- name: xtrabackup
containerPort: 3307
command:
- bash
- "-c"
- |
set -ex
cd /var/lib/mysql
# Determine binlog position of cloned data, if any.
if [[ -f xtrabackup_slave_info ]]; then
# XtraBackup already generated a partial "CHANGE MASTER TO" query
# because we're cloning from an existing slave.
mv xtrabackup_slave_info change_master_to.sql.in
# Ignore xtrabackup_binlog_info in this case (it's useless).
rm -f xtrabackup_binlog_info
elif [[ -f xtrabackup_binlog_info ]]; then
# We're cloning directly from master. Parse binlog position.
[[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
rm xtrabackup_binlog_info
echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
fi
# Check if we need to complete a clone by starting replication.
if [[ -f change_master_to.sql.in ]]; then
echo "Waiting for mysqld to be ready (accepting connections)"
until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
echo "Initializing replication from clone position"
# In case of container restart, attempt this at-most-once.
mv change_master_to.sql.in change_master_to.sql.orig
mysql -h 127.0.0.1 <<EOF
$(<change_master_to.sql.orig),
MASTER_HOST='mysql-0.mysql',
MASTER_USER='root',
MASTER_PASSWORD='',
MASTER_CONNECT_RETRY=10;
START SLAVE;
EOF
fi
# Start a server to send backups when requested by peers.
exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
"xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 100m
memory: 100Mi
volumes:
- name: conf
emptyDir: {}
- name: config-map
configMap:
name: mysql
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 3Gi
Here are the errors that I get.
3m49s Normal FailedBinding persistentvolumeclaim/data-mysql-0 no persistent volumes available for this claim and no storage class is set
29s Warning FailedScheduling pod/mysql-0 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
I can create a PersistentVolumeClaim manually and it seems to bind, but I can't get this StatefulSet YAML file to work; the pods don't start because of the failed binding error.
Any help appreciated.
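For reference, the binding state of the claim named in the FailedBinding event can be inspected directly:
kubectl get pv,pvc
kubectl describe pvc data-mysql-0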
In GKE (EKS and Azure should be similar) you just need to apply this StatefulSet YAML without the PV:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
serviceName: mysql
replicas: 3
template:
metadata:
labels:
app: mysql
spec:
initContainers:
- name: init-mysql
image: mysql:5.7
command:
- bash
- "-c"
- |
set -ex
# Generate mysql server-id from pod ordinal index.
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
echo [mysqld] > /mnt/conf.d/server-id.cnf
# Add an offset to avoid reserved server-id=0 value.
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
# Copy appropriate conf.d files from config-map to emptyDir.
if [[ $ordinal -eq 0 ]]; then
cp /mnt/config-map/master.cnf /mnt/conf.d/
else
cp /mnt/config-map/slave.cnf /mnt/conf.d/
fi
volumeMounts:
- name: conf
mountPath: /mnt/conf.d
- name: config-map
mountPath: /mnt/config-map
- name: clone-mysql
image: gcr.io/google-samples/xtrabackup:1.0
command:
- bash
- "-c"
- |
set -ex
# Skip the clone if data already exists.
[[ -d /var/lib/mysql/mysql ]] && exit 0
# Skip the clone on master (ordinal index 0).
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
[[ $ordinal -eq 0 ]] && exit 0
# Clone data from previous peer.
ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
# Prepare the backup.
xtrabackup --prepare --target-dir=/var/lib/mysql
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
containers:
- name: mysql
image: mysql:5.7
env:
- name: MYSQL_ALLOW_EMPTY_PASSWORD
value: "1"
ports:
- name: mysql
containerPort: 3306
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 500m
memory: 1Gi
livenessProbe:
exec:
command: ["mysqladmin", "ping"]
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
exec:
# Check we can execute queries over TCP (skip-networking is off).
command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
initialDelaySeconds: 5
periodSeconds: 2
timeoutSeconds: 1
- name: xtrabackup
image: gcr.io/google-samples/xtrabackup:1.0
ports:
- name: xtrabackup
containerPort: 3307
command:
- bash
- "-c"
- |
set -ex
cd /var/lib/mysql
# Determine binlog position of cloned data, if any.
if [[ -f xtrabackup_slave_info ]]; then
# XtraBackup already generated a partial "CHANGE MASTER TO" query
# because we're cloning from an existing slave.
mv xtrabackup_slave_info change_master_to.sql.in
# Ignore xtrabackup_binlog_info in this case (it's useless).
rm -f xtrabackup_binlog_info
elif [[ -f xtrabackup_binlog_info ]]; then
# We're cloning directly from master. Parse binlog position.
[[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
rm xtrabackup_binlog_info
echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
fi
# Check if we need to complete a clone by starting replication.
if [[ -f change_master_to.sql.in ]]; then
echo "Waiting for mysqld to be ready (accepting connections)"
until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
echo "Initializing replication from clone position"
# In case of container restart, attempt this at-most-once.
mv change_master_to.sql.in change_master_to.sql.orig
mysql -h 127.0.0.1 <<EOF
$(<change_master_to.sql.orig),
MASTER_HOST='mysql-0.mysql',
MASTER_USER='root',
MASTER_PASSWORD='',
MASTER_CONNECT_RETRY=10;
START SLAVE;
EOF
fi
# Start a server to send backups when requested by peers.
exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
"xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 100m
memory: 100Mi
volumes:
- name: conf
emptyDir: {}
- name: config-map
configMap:
name: mysql
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
# storageClassName: standard
resources:
requests:
storage: 3Gi
If this is not GKE, update the question with the details and I will try to make this work for your setup and explain the steps.
If you have n replicas of your MySQL, you should have n worker nodes, otherwise it does not work. For example, if your replica count is 3, you should have 3 worker nodes.
This is because of the anti-affinity rules, which do not allow two replicas to run on one worker node.
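The FailedScheduling event above also mentions node taints, so it is worth checking what taints the worker nodes actually carry (the node name is a placeholder):
kubectl get nodes
kubectl describe node <node-name> | grep -i taint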
You need to set the field spec.volumeClaimTemplates.spec.storageClassName to "manual" in your StatefulSet definition (the value "manual" is the value shown on your PersistentVolume).
Another thing to point out: with a replication factor of 3 and a PV backed by hostPath, you should consider either setting up affinity rules so the workload goes to the same node, or changing the volume type to something more reliable.
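For illustration, the claim template with the storage class matched to the PersistentVolume above would look roughly like this:
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: manual   # must match the storageClassName on the pv001 PersistentVolume
    resources:
      requests:
        storage: 3Gi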