How to access the Redis master node in a Kubernetes cluster from an external application

I have installed the Redis master and cluster nodes in Kubernetes using a StatefulSet and a headless service. The pods can communicate with each other using the address redis-0.redis.redis.svc.cluster.local:6379, but when I try to access Redis from an external application (Spring Boot), I cannot connect.
I also tried exposing it using a NodePort, but no luck.
Can you help me resolve this issue?
Service.yaml
apiVersion: v1
kind: Service
metadata:
name: redis
spec:
type: NodePort
selector:
name: redis
ports:
- port: 6379
targetPort: 6379
nodePort: 30001
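As a quick external check (a sketch; 192.168.1.10 is a hypothetical address standing in for one of your worker nodes), the NodePort can be tested from outside the cluster before wiring up Spring Boot:

# substitute the address of any cluster node
redis-cli -h 192.168.1.10 -p 30001 ping
# the same host/port pair would then go into the Spring Boot configuration,
# e.g. spring.redis.host / spring.redis.port (Spring Boot 2.x property names)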
Statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
spec:
serviceName: redis
replicas: 3
selector:
matchLabels:
app: redis
template:
metadata:
labels:
name: redis
app: redis
spec:
initContainers:
- name: config
image: redis:6.2.3-alpine
command: [ "sh", "-c" ]
args:
- |
cp /tmp/redis/redis.conf /etc/redis/redis.conf
echo "finding master..."
MASTER_FDQN=`hostname -f | sed -e 's/redis-[0-9]\./redis-0./'`
if [ "$(redis-cli -h sentinel -p 5000 ping)" != "PONG" ]; then
echo "master not found, defaulting to redis-0"
if [ "$(hostname)" == "redis-0" ]; then
echo "this is redis-0, not updating config..."
else
echo "updating redis.conf..."
echo "slaveof $MASTER_FDQN 6379" >> /etc/redis/redis.conf
fi
else
echo "sentinel found, finding master"
MASTER="$(redis-cli -h sentinel -p 5000 sentinel get-master-addr-by-name mymaster | grep -E '(^redis-\d{1,})|([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})')"
echo "master found : $MASTER, updating redis.conf"
echo "slaveof $MASTER 6379" >> /etc/redis/redis.conf
fi
volumeMounts:
- name: redis-config
mountPath: /etc/redis/
- name: config
mountPath: /tmp/redis/
containers:
- name: redis
image: redis:6.2.3-alpine
command: ["redis-server"]
args: ["/etc/redis/redis.conf"]
ports:
- containerPort: 6379
name: redis
volumeMounts:
- name: data
mountPath: /data
- name: redis-config
mountPath: /etc/redis/
volumes:
- name: redis-config
emptyDir: {}
- name: config
configMap:
name: redis-config
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 500Mi
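One thing worth verifying (a sketch of the commands only) is that the NodePort Service actually selects the pods: the Service selects on name: redis while the StatefulSet selector uses app: redis; the pod template carries both labels, so they should still match, but the endpoints confirm it:

kubectl get endpoints redis               # should list the three pod IPs on port 6379
kubectl get pods -l name=redis -o wide    # pods matched by the Service selector
kubectl get svc redis                     # confirms the assigned nodePort (30001)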

Related

Redis cluster on minikube stays in "pending" state

I have installed minikube on a local Windows machine and am trying to set up a Redis cluster.
I created all the resources using kubectl create -f <resource> -n <namespace>. The following files were used to create the cluster.
Storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
Persistent volumes.
apiVersion: v1
kind: PersistentVolume
metadata:
name: local-pv1
spec:
storageClassName: local-storage
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/storage/data1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: local-pv2
spec:
storageClassName: local-storage
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/storage/data2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: local-pv3
spec:
storageClassName: local-storage
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/storage/data3"
Redis config map
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-cluster
labels:
app: redis-cluster
data:
fix-ip.sh: |
#!/bin/sh
CLUSTER_CONFIG="/data/nodes.conf"
echo "creating nodes"
if [ -f ${CLUSTER_CONFIG} ]; then
echo "[ INFO ]File:${CLUSTER_CONFIG} is Found"
else
touch $CLUSTER_CONFIG
fi
if [ -z "${POD_IP}" ]; then
echo "Unable to determine Pod IP address!"
exit 1
fi
echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
sed -i.bak -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG}
echo "done"
exec "$@"
redis.conf: |+
cluster-enabled yes
cluster-require-full-coverage no
cluster-node-timeout 15000
cluster-config-file /data/nodes.conf
cluster-migration-barrier 1
appendonly yes
protected-mode no
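The trailing exec "$@" in fix-ip.sh indicates that the script is meant to wrap the main container command, patching nodes.conf first and then passing control to the real process. Roughly (a sketch; the /conf mount path is an assumption, not taken from the manifests here):

command: ["/conf/fix-ip.sh", "redis-server", "/conf/redis.conf"]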
Statefulset redis cluster
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
spec:
serviceName: redis
replicas: 3
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
initContainers:
- name: config
image: redis:6.2.3-alpine
command: [ "sh", "-c" ]
args:
- |
cp /tmp/redis/redis.conf /etc/redis/redis.conf
echo "finding master..."
MASTER_FDQN=`hostname -f | sed -e 's/redis-[0-9]\./redis-0./'`
if [ "$(redis-cli -h sentinel -p 5000 ping)" != "PONG" ]; then
echo "master not found, defaulting to redis-0"
if [ "$(hostname)" == "redis-0" ]; then
echo "this is redis-0, not updating config..."
else
echo "updating redis.conf..."
echo "slaveof $MASTER_FDQN 6379" >> /etc/redis/redis.conf
fi
else
echo "sentinel found, finding master"
MASTER="$(redis-cli -h sentinel -p 5000 sentinel get-master-addr-by-name mymaster | grep -E '(^redis-\d{1,})|([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})')"
echo "master found : $MASTER, updating redis.conf"
echo "slaveof $MASTER 6379" >> /etc/redis/redis.conf
fi
volumeMounts:
- name: redis-config
mountPath: /etc/redis/
- name: config
mountPath: /tmp/redis/
containers:
- name: redis
image: redis:6.2.3-alpine
command: ["redis-server"]
args: ["/etc/redis/redis.conf"]
ports:
- containerPort: 6379
name: redis
volumeMounts:
- name: data
mountPath: /data
- name: redis-config
mountPath: /etc/redis/
volumes:
- name: redis-config
emptyDir: {}
- name: config
configMap:
name: redis-config
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 500Mi
Headless Redis service
apiVersion: v1
kind: Service
metadata:
name: redis-cluster
namespace: redis
spec:
type: ClusterIP
ports:
- port: 6379
targetPort: 6379
name: client
- port: 16379
targetPort: 16379
name: gossip
selector:
app: redis-cluster
This is what kubectl get pods shows:
redis-cluster-0 0/1 Pending 0 2d
Describing the pod shows the message below. Not sure if this is the issue:
Warning FailedScheduling 6m24s (x110 over 46h) default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
It's likely that your cluster has no storage class named local-storage. Check your storage classes using kubectl get sc. To fix this, do one of the following (see the sketch after this list):
Create a storage class called local-storage, or
Change the storageClassName field in your manifest to a pre-existing storage class.
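For example (a sketch; the storage-class.yaml filename is hypothetical, and standard is the usual default class on a stock minikube install):

kubectl get sc                       # list the storage classes that actually exist
kubectl apply -f storage-class.yaml  # option 1: apply the StorageClass manifest shown above
# option 2: point volumeClaimTemplates at an existing class instead:
#   storageClassName: "standard"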

Kafka Kraft replication factor of 3

I tried to run Kafka in KRaft mode (ZooKeeper-less) in Kubernetes, and everything worked fine with the configuration below.
I am curious how to change the provided configuration to run with a replication factor of 3, for instance.
There was a fruitful topic about this on GitHub, but no one provided a Kafka KRaft setup with replication configured.
Statefulset
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: kafka-statefulset
namespace: kafka
labels:
app: kafka-cluster
spec:
serviceName: kafka-svc
replicas: 1
selector:
matchLabels:
app: kafka-cluster
template:
metadata:
labels:
app: kafka-cluster
spec:
containers:
- name: kafka-container
image: 'bitnami/kafka:latest'
ports:
- containerPort: 9092
- containerPort: 9093
env:
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_CFG_NODE_ID
value: "1"
- name: KAFKA_ENABLE_KRAFT
value: "yes"
- name: KAFKA_CFG_PROCESS_ROLES
value: "broker,controller"
- name: KAFKA_CFG_CONTROLLER_LISTENER_NAMES
value: "CONTROLLER"
- name: KAFKA_CFG_LISTENERS
value: "CLIENT://:9092,CONTROLLER://:9093,EXTERNAL://0.0.0.0:9094"
- name: KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP
value: "CONTROLLER:PLAINTEXT,CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT"
- name: KAFKA_CFG_INTER_BROKER_LISTENER_NAME
value: "CLIENT"
- name: KAFKA_CFG_ADVERTISED_LISTENERS
value: "CLIENT://kafka-statefulset-0.kafka-svc.kafka.svc.cluster.local:9092,EXTERNAL://127.0.0.1:9094"
- name: KAFKA_CFG_CONTROLLER_QUORUM_VOTERS
value: "1@127.0.0.1:9093"
- name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
value: "false"
- name: KAFKA_DEFAULT_REPLICATION_FACTOR
value: "1"
- name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
value: "1"
- name: ALLOW_PLAINTEXT_LISTENER
value: "yes"
Headless service:
apiVersion: v1
kind: Service
metadata:
name: kafka-svc
labels:
app: kafka-cluster
spec:
clusterIP: None
ports:
- name: '9092'
port: 9092
protocol: TCP
targetPort: 9092
selector:
app: kafka-cluster
Finally, I deployed Kafka in KRaft mode with a replication factor of 3 in Kubernetes. I followed the guidelines in this article, which gives a very comprehensive description of how this setup works. I went through the doughgle/kafka-kraft image on Docker Hub, and it links to their GitHub repo where you can find this script:
#!/bin/bash
NODE_ID=${HOSTNAME:6}
LISTENERS="PLAINTEXT://:9092,CONTROLLER://:9093"
ADVERTISED_LISTENERS="PLAINTEXT://kafka-$NODE_ID.$SERVICE.$NAMESPACE.svc.cluster.local:9092"
CONTROLLER_QUORUM_VOTERS=""
for i in $( seq 0 $REPLICAS); do
if [[ $i != $REPLICAS ]]; then
CONTROLLER_QUORUM_VOTERS="$CONTROLLER_QUORUM_VOTERS$i@kafka-$i.$SERVICE.$NAMESPACE.svc.cluster.local:9093,"
else
CONTROLLER_QUORUM_VOTERS=${CONTROLLER_QUORUM_VOTERS::-1}
fi
done
mkdir -p $SHARE_DIR/$NODE_ID
if [[ ! -f "$SHARE_DIR/cluster_id" && "$NODE_ID" = "0" ]]; then
CLUSTER_ID=$(kafka-storage.sh random-uuid)
echo $CLUSTER_ID > $SHARE_DIR/cluster_id
else
CLUSTER_ID=$(cat $SHARE_DIR/cluster_id)
fi
sed -e "s+^node.id=.*+node.id=$NODE_ID+" \
-e "s+^controller.quorum.voters=.*+controller.quorum.voters=$CONTROLLER_QUORUM_VOTERS+" \
-e "s+^listeners=.*+listeners=$LISTENERS+" \
-e "s+^advertised.listeners=.*+advertised.listeners=$ADVERTISED_LISTENERS+" \
-e "s+^log.dirs=.*+log.dirs=$SHARE_DIR/$NODE_ID+" \
/opt/kafka/config/kraft/server.properties > server.properties.updated \
&& mv server.properties.updated /opt/kafka/config/kraft/server.properties
kafka-storage.sh format -t $CLUSTER_ID -c /opt/kafka/config/kraft/server.properties
exec kafka-server-start.sh /opt/kafka/config/kraft/server.properties
This script sets the proper configuration for each pod/broker, one by one.
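For instance, with REPLICAS=3, SERVICE=kafka-svc and NAMESPACE=kafka-kraft, the loop above would build a voter list along these lines:

controller.quorum.voters=0@kafka-0.kafka-svc.kafka-kraft.svc.cluster.local:9093,1@kafka-1.kafka-svc.kafka-kraft.svc.cluster.local:9093,2@kafka-2.kafka-svc.kafka-kraft.svc.cluster.local:9093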
Then I built my own image with the latest version of Kafka, Scala and openjdk 17:
FROM openjdk:17-bullseye
ENV KAFKA_VERSION=3.3.1
ENV SCALA_VERSION=2.13
ENV KAFKA_HOME=/opt/kafka
ENV PATH=${PATH}:${KAFKA_HOME}/bin
LABEL name="kafka" version=${KAFKA_VERSION}
RUN wget -O /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz https://downloads.apache.org/kafka/${KAFKA_VERSION}/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz \
&& tar xfz /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz -C /opt \
&& rm /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz \
&& ln -s /opt/kafka_${SCALA_VERSION}-${KAFKA_VERSION} ${KAFKA_HOME} \
&& rm -rf /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz
COPY ./entrypoint.sh /
RUN ["chmod", "+x", "/entrypoint.sh"]
ENTRYPOINT ["/entrypoint.sh"]
and here is the Kubernetes configuration:
apiVersion: v1
kind: Namespace
metadata:
name: kafka-kraft
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: kafka-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: '/path/to/dir'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: kafka-pv-claim
namespace: kafka-kraft
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Mi
---
apiVersion: v1
kind: Service
metadata:
name: kafka-svc
labels:
app: kafka-app
namespace: kafka-kraft
spec:
clusterIP: None
ports:
- name: '9092'
port: 9092
protocol: TCP
targetPort: 9092
selector:
app: kafka-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: kafka
labels:
app: kafka-app
namespace: kafka-kraft
spec:
serviceName: kafka-svc
replicas: 3
selector:
matchLabels:
app: kafka-app
template:
metadata:
labels:
app: kafka-app
spec:
volumes:
- name: kafka-storage
persistentVolumeClaim:
claimName: kafka-pv-claim
containers:
- name: kafka-container
image: me/kafka-kraft
ports:
- containerPort: 9092
- containerPort: 9093
env:
- name: REPLICAS
value: '3'
- name: SERVICE
value: kafka-svc
- name: NAMESPACE
value: kafka-kraft
- name: SHARE_DIR
value: /mnt/kafka
- name: CLUSTER_ID
value: oh-sxaDRTcyAr6pFRbXyzA
- name: DEFAULT_REPLICATION_FACTOR
value: '3'
- name: DEFAULT_MIN_INSYNC_REPLICAS
value: '2'
volumeMounts:
- name: kafka-storage
mountPath: /mnt/kafka
I am not 100% sure whether this setup behaves exactly like a stable ZooKeeper-based setup, but it is currently sufficient for my testing phase.
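As a rough sanity check (a sketch; kafka-metadata-quorum.sh ships with these Kafka versions, and the pod/namespace names follow the manifests above), the controller quorum can be inspected from one of the brokers:

kubectl -n kafka-kraft exec kafka-0 -- \
  kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --status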
UPDATE:
Kafka KRaft is production-ready as of release 3.3.1.

502 Bad Gateway with Kubernetes Ingress

I have an EKS cluster with 2 nodes. I have deployed MySQL with 3 pods: a master for writing and 2 replicas for reading.
I have a simple Node.js application which connects to and reads from a MySQL database called emails_db.
I created the database simply by entering the master MySQL pod, and the Node.js application should read from that DB.
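For reference, that step looks roughly like this (a sketch; the pod and container names follow from the StatefulSet below, which allows an empty root password):

kubectl exec -it mysql-0 -c mysql -- mysql -h 127.0.0.1 -e "CREATE DATABASE IF NOT EXISTS emails_db;"

MySQL StatefulSet: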
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
serviceName: mysql
replicas: 3
template:
metadata:
labels:
app: mysql
spec:
initContainers:
- name: init-mysql
image: mysql:5.7
command:
- bash
- "-c"
- |
set -ex
# Generate mysql server-id from pod ordinal index.
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
echo [mysqld] > /mnt/conf.d/server-id.cnf
# Add an offset to avoid reserved server-id=0 value.
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
# Copy appropriate conf.d files from config-map to emptyDir.
if [[ $ordinal -eq 0 ]]; then
cp /mnt/config-map/primary.cnf /mnt/conf.d/
else
cp /mnt/config-map/replica.cnf /mnt/conf.d/
fi
volumeMounts:
- name: conf
mountPath: /mnt/conf.d
- name: config-map
mountPath: /mnt/config-map
- name: clone-mysql
image: gcr.io/google-samples/xtrabackup:1.0
command:
- bash
- "-c"
- |
set -ex
# Skip the clone if data already exists.
[[ -d /var/lib/mysql/mysql ]] && exit 0
# Skip the clone on primary (ordinal index 0).
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
[[ $ordinal -eq 0 ]] && exit 0
# Clone data from previous peer.
ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
# Prepare the backup.
xtrabackup --prepare --target-dir=/var/lib/mysql
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
containers:
- name: mysql
image: mysql:5.7
env:
- name: MYSQL_ALLOW_EMPTY_PASSWORD
value: "1"
ports:
- name: mysql
containerPort: 3306
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 500m
memory: 1Gi
livenessProbe:
exec:
command: ["mysqladmin", "ping"]
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
exec:
# Check we can execute queries over TCP (skip-networking is off).
command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
initialDelaySeconds: 5
periodSeconds: 2
timeoutSeconds: 1
- name: xtrabackup
image: gcr.io/google-samples/xtrabackup:1.0
ports:
- name: xtrabackup
containerPort: 3307
command:
- bash
- "-c"
- |
set -ex
cd /var/lib/mysql
# Determine binlog position of cloned data, if any.
if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
# XtraBackup already generated a partial "CHANGE MASTER TO" query
# because we're cloning from an existing replica. (Need to remove the trailing semicolon!)
cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
# Ignore xtrabackup_binlog_info in this case (it's useless).
rm -f xtrabackup_slave_info xtrabackup_binlog_info
elif [[ -f xtrabackup_binlog_info ]]; then
# We're cloning directly from primary. Parse binlog position.
[[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
rm -f xtrabackup_binlog_info xtrabackup_slave_info
echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
fi
# Check if we need to complete a clone by starting replication.
if [[ -f change_master_to.sql.in ]]; then
echo "Waiting for mysqld to be ready (accepting connections)"
until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
echo "Initializing replication from clone position"
mysql -h 127.0.0.1 \
-e "$(<change_master_to.sql.in), \
MASTER_HOST='mysql-0.mysql', \
MASTER_USER='root', \
MASTER_PASSWORD='', \
MASTER_CONNECT_RETRY=10; \
START SLAVE;" || exit 1
# In case of container restart, attempt this at-most-once.
mv change_master_to.sql.in change_master_to.sql.orig
fi
# Start a server to send backups when requested by peers.
exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
"xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 100m
memory: 100Mi
volumes:
- name: conf
emptyDir: {}
- name: config-map
configMap:
name: mysql
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 10Gi
Nodejs Service
apiVersion: v1
kind: Service
metadata:
labels:
app: nodejs
name: nodejs-service
spec:
selector:
app: nodejs
ports:
- name: web
port: 5000
protocol: TCP
targetPort: 3306
Nodejs Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: nodejs
labels:
app: nodejs
spec:
replicas: 2
selector:
matchLabels:
app: nodejs
template:
metadata:
labels:
app: nodejs
spec:
containers:
- name: nodejs
image: 199266549/my-nodeapi:PortLatest
ports:
- containerPort: 5000
env:
- name: db-url
valueFrom:
configMapKeyRef:
name: myconfig
key: db-url
- name: DB_NAME
valueFrom:
configMapKeyRef:
name: myconfig
key: DB_NAME
- name: DB_USER
valueFrom:
secretKeyRef:
name: mysecret
key: DB_USER
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: DB_PASSWORD
Ingress looks like this:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: mysql-ingress
namespace: default
labels:
app: mysql
spec:
rules:
- host: mysql.com
http:
paths:
- path: /read
backend:
serviceName: mysql-read
servicePort: 80
- path: /write
backend:
serviceName: mysql
servicePort: 80
- path: /
backend:
serviceName: nodejs-service
servicePort: 5000
The target port is 3306, which is the port exposed by MySQL, while the Node.js application exposes port 5000.
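If the Ingress / path is supposed to reach the Node.js pods, the Service's targetPort would normally need to match the port the application listens on (5000 here). A sketch of the adjusted Service, for illustration only:

apiVersion: v1
kind: Service
metadata:
  name: nodejs-service
spec:
  selector:
    app: nodejs
  ports:
  - name: web
    port: 5000
    protocol: TCP
    targetPort: 5000   # matches the containerPort of the nodejs Deployment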
I tried the following curl command
curl -v -Hhost:mysql.com http://abccc302f198147aca906e71c710c661-29001ea7d40ff165.elb.us-east-1.amazonaws.com/
I checked the Ingress backend. The service of interest is the one given with path /.
Any idea what is missing when trying to connect the Node.js service to the MySQL pods?
Is this problematic?
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)

Deploying IPFS-Cluster on Kubernetes, I got an error

When I deploy IPFS-Cluster on Kubernetes, I get the following error (these are the ipfs-cluster logs):
error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
2022-01-04T10:23:08.103Z INFO service ipfs-cluster-service/daemon.go:47 Initializing. For verbose output run with "-l debug". Please wait...
2022-01-04T10:23:08.103Z ERROR config config/config.go:352 error reading the configuration file: open /data/ipfs-cluster/service.json: no such file or directory
error loading configurations: open /data/ipfs-cluster/service.json: no such file or directory
These are initContainer logs:
+ user=ipfs
+ mkdir -p /data/ipfs
+ chown -R ipfs /data/ipfs
+ '[' -f /data/ipfs/config ]
+ ipfs init '--profile=badgerds,server'
initializing IPFS node at /data/ipfs
generating 2048-bit RSA keypair...done
peer identity: QmUHmdhauhk7zdj5XT1zAa6BQfrJDukysb2PXsCQ62rBdS
to get started, enter:
ipfs cat /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme
+ ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
+ ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
+ ipfs config --json Swarm.ConnMgr.HighWater 2000
+ ipfs config --json Datastore.BloomFilterSize 1048576
+ ipfs config Datastore.StorageMax 100GB
These are ipfs container logs:
Changing user to ipfs
ipfs version 0.4.18
Found IPFS fs-repo at /data/ipfs
Initializing daemon...
go-ipfs version: 0.4.18-aefc746
Repo version: 7
System version: amd64/linux
Golang version: go1.11.1
Error: open /data/ipfs/config: permission denied
Received interrupt signal, shutting down...
(Hit ctrl-c again to force-shutdown the daemon.)
The following is my kubernetes yaml file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: ipfs-cluster
spec:
serviceName: ipfs-cluster
replicas: 3
selector:
matchLabels:
app: ipfs-cluster
template:
metadata:
labels:
app: ipfs-cluster
spec:
initContainers:
- name: configure-ipfs
image: "ipfs/go-ipfs:v0.4.18"
command: ["sh", "/custom/configure-ipfs.sh"]
volumeMounts:
- name: ipfs-storage
mountPath: /data/ipfs
- name: configure-script
mountPath: /custom/entrypoint.sh
subPath: entrypoint.sh
- name: configure-script-2
mountPath: /custom/configure-ipfs.sh
subPath: configure-ipfs.sh
containers:
- name: ipfs
image: "ipfs/go-ipfs:v0.4.18"
imagePullPolicy: IfNotPresent
env:
- name: IPFS_FD_MAX
value: "4096"
ports:
- name: swarm
protocol: TCP
containerPort: 4001
- name: swarm-udp
protocol: UDP
containerPort: 4002
- name: api
protocol: TCP
containerPort: 5001
- name: ws
protocol: TCP
containerPort: 8081
- name: http
protocol: TCP
containerPort: 8080
livenessProbe:
tcpSocket:
port: swarm
initialDelaySeconds: 30
timeoutSeconds: 5
periodSeconds: 15
volumeMounts:
- name: ipfs-storage
mountPath: /data/ipfs
- name: configure-script
mountPath: /custom
resources:
{}
- name: ipfs-cluster
image: "ipfs/ipfs-cluster:latest"
imagePullPolicy: IfNotPresent
command: ["sh", "/custom/entrypoint.sh"]
envFrom:
- configMapRef:
name: env-config
env:
- name: BOOTSTRAP_PEER_ID
valueFrom:
configMapRef:
name: env-config
key: bootstrap-peer-id
- name: BOOTSTRAP_PEER_PRIV_KEY
valueFrom:
secretKeyRef:
name: secret-config
key: bootstrap-peer-priv-key
- name: CLUSTER_SECRET
valueFrom:
secretKeyRef:
name: secret-config
key: cluster-secret
- name: CLUSTER_MONITOR_PING_INTERVAL
value: "3m"
- name: SVC_NAME
value: $(CLUSTER_SVC_NAME)
ports:
- name: api-http
containerPort: 9094
protocol: TCP
- name: proxy-http
containerPort: 9095
protocol: TCP
- name: cluster-swarm
containerPort: 9096
protocol: TCP
livenessProbe:
tcpSocket:
port: cluster-swarm
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 10
volumeMounts:
- name: cluster-storage
mountPath: /data/ipfs-cluster
- name: configure-script
mountPath: /custom/entrypoint.sh
subPath: entrypoint.sh
resources:
{}
volumes:
- name: configure-script
configMap:
name: ipfs-cluster-set-bootstrap-conf
- name: configure-script-2
configMap:
name: configura-ipfs
volumeClaimTemplates:
- metadata:
name: cluster-storage
spec:
storageClassName: gp2
accessModes: ["ReadWriteOnce"]
persistentVolumeReclaimPolicy: Retain
resources:
requests:
storage: 5Gi
- metadata:
name: ipfs-storage
spec:
storageClassName: gp2
accessModes: ["ReadWriteOnce"]
persistentVolumeReclaimPolicy: Retain
resources:
requests:
storage: 200Gi
---
kind: Secret
apiVersion: v1
metadata:
name: secret-config
namespace: weex-ipfs
annotations:
kubesphere.io/creator: tom
data:
bootstrap-peer-priv-key: >-
UTBGQlUzQjNhM2RuWjFOcVFXZEZRVUZ2U1VKQlVVTjBWbVpUTTFwck9ETkxVWEZNYzJFemFGWlZaV2xKU0doUFZGRTBhRmhrZVhCeFJGVmxVbmR6Vmt4Nk9IWndZ...
cluster-secret: 7d4c019035beb7da7275ea88315c39b1dd9fdfaef017596550ffc1ad3fdb556f
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
name: env-config
namespace: weex-ipfs
annotations:
kubesphere.io/creator: tom
data:
bootstrap-peer-id: QmWgEHZEmJhuoDgFmBKZL8VtpMEqRArqahuaX66cbvyutP
---
kind: ConfigMap
apiVersion: v1
metadata:
name: ipfs-cluster-set-bootstrap-conf
namespace: weex-ipfs
annotations:
kubesphere.io/creator: tom
data:
entrypoint.sh: |2
#!/bin/sh
user=ipfs
# This is a custom entrypoint for k8s designed to connect to the bootstrap
# node running in the cluster. It has been set up using a configmap to
# allow changes on the fly.
if [ ! -f /data/ipfs-cluster/service.json ]; then
ipfs-cluster-service init
fi
PEER_HOSTNAME=`cat /proc/sys/kernel/hostname`
grep -q ".*ipfs-cluster-0.*" /proc/sys/kernel/hostname
if [ $? -eq 0 ]; then
CLUSTER_ID=${BOOTSTRAP_PEER_ID} \
CLUSTER_PRIVATEKEY=${BOOTSTRAP_PEER_PRIV_KEY} \
exec ipfs-cluster-service daemon --upgrade
else
BOOTSTRAP_ADDR=/dns4/${SVC_NAME}-0/tcp/9096/ipfs/${BOOTSTRAP_PEER_ID}
if [ -z $BOOTSTRAP_ADDR ]; then
exit 1
fi
# Only ipfs user can get here
exec ipfs-cluster-service daemon --upgrade --bootstrap $BOOTSTRAP_ADDR --leave
fi
---
kind: ConfigMap
apiVersion: v1
metadata:
name: configura-ipfs
namespace: weex-ipfs
annotations:
kubesphere.io/creator: tom
data:
configure-ipfs.sh: >-
#!/bin/sh
set -e
set -x
user=ipfs
# This is a custom entrypoint for k8s designed to run ipfs nodes in an
appropriate
# setup for production scenarios.
mkdir -p /data/ipfs && chown -R ipfs /data/ipfs
if [ -f /data/ipfs/config ]; then
if [ -f /data/ipfs/repo.lock ]; then
rm /data/ipfs/repo.lock
fi
exit 0
fi
ipfs init --profile=badgerds,server
ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
ipfs config --json Swarm.ConnMgr.HighWater 2000
ipfs config --json Datastore.BloomFilterSize 1048576
ipfs config Datastore.StorageMax 100GB
I followed the official steps to build this, and used the following command to generate the cluster secret:
$ od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n' | base64 -w 0 -
But I get:
error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
I saw the same problem in the official GitHub issue, so using the openssl rand -hex command is not OK either.
To clarify, I am posting a community wiki answer.
To solve the following error:
no such file or directory
you used runAsUser: 0.
The second error:
error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
was caused by CLUSTER_SECRET being set with an encoding other than hex.
According to this page:
The Cluster Secret Key
The secret key of a cluster is a 32-byte, hex-encoded random string, which every cluster peer needs in its service.json configuration.
A secret key can be generated and predefined in the CLUSTER_SECRET environment variable, and will subsequently be used upon running ipfs-cluster-service init.
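A minimal way to produce a valid value (a sketch that follows the od command from the question, but without the base64 re-encoding; the namespace and Secret name match the manifests above):

CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')
echo $CLUSTER_SECRET   # 64 hex characters = 32 bytes
# kubectl base64-encodes the Secret data itself, so the plain hex string is passed in:
kubectl -n weex-ipfs create secret generic secret-config \
  --from-literal=cluster-secret=$CLUSTER_SECRET \
  --dry-run=client -o yaml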
Here is a link to the solved issue.
See also:
Configuration reference
IPFS Tutorial
Documentation guide Security and ports

Sentinel StatefulSet fails to schedule: can't find available persistent volumes to bind

Good afternoon
I really need some help getting a group of Sentinels up so that they can monitor and perform elections for my Redis pods, which are running without issue. At the bottom of this post I have included the Sentinel config, which spells out the volumes. The first Sentinel pod, sentinel-0, sits at Pending, while all three Redis instances are READY 1/1.
But the Sentinel pods don't get scheduled. When I attempt to apply the Sentinel StatefulSet, I get the following scheduling error (the Sentinel StatefulSet config is at the bottom of this post):
Warning FailedScheduling 5s default-scheduler 0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) didn't find available persistent volumes to bind.
Warning FailedScheduling 4s default-scheduler 0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) didn't find available persistent volumes to bind.
About my kubernetes setup:
I am running a four-node bare-metal Kubernetes cluster: one master node and three worker nodes.
For storage, I am using a 'local-storage' StorageClass shared across the nodes. Currently I am using a single persistent volume configuration file which defines three volumes across the three worker nodes. This seems to be working out for the Redis StatefulSet, but not for Sentinel (Sentinel config at the bottom).
See below for the persistent volume config (all three PVs, pv-volume-node-0/1/2, are Bound):
kind: PersistentVolume
apiVersion: v1
metadata:
name: ag1-pv-volume-node-0
labels:
type: local
spec:
storageClassName: local-storage
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
local:
path: "/var/opt/mssql"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k8s-node-0
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ag1-pv-volume-node-1
labels:
type: local
spec:
storageClassName: local-storage
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
local:
path: "/var/opt/mssql"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k8s-node-1
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ag1-pv-volume-node-2
labels:
type: local
spec:
storageClassName: local-storage
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
local:
path: "/var/opt/mssql"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k8s-node-2
Note: the path "/var/opt/mssql" is the stateful data directory for the Redis cluster. It's a misnomer and in no way reflects a SQL database (I just reused this directory from a walkthrough), and it works.
Presently all three Redis pods are successfully deployed with a functioning StatefulSet; see below for the Redis config (all working).
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
spec:
serviceName: redis
replicas: 3
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
initContainers:
- name: config
image: redis:6.0-alpine
command: [ "sh", "-c" ]
args:
- |
cp /tmp/redis/redis.conf /etc/redis/redis.conf
echo "finding master..."
MASTER_FDQN=`hostname -f | sed -e 's/redis-[0-9]\./redis-0./'`
if [ "$(redis-cli -h sentinel -p 5000 ping)" != "PONG" ]; then
echo "master not found, defaulting to redis-0"
if [ "$(hostname)" == "redis-0" ]; then
echo "this is redis-0, not updating config..."
else
echo "updating redis.conf..."
echo "slaveof $MASTER_FDQN 6379" >> /etc/redis/redis.conf
fi
else
echo "sentinel found, finding master"
MASTER="$(redis-cli -h sentinel -p 5000 sentinel get-master-addr-by-name mymaster | grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
echo "master found : $MASTER, updating redis.conf"
echo "slaveof $MASTER 6379" >> /etc/redis/redis.conf
fi
volumeMounts:
- name: redis-config
mountPath: /etc/redis/
- name: config
mountPath: /tmp/redis/
containers:
- name: redis
image: redis:6.0-alpine
command: ["redis-server"]
args: ["/etc/redis/redis.conf"]
ports:
- containerPort: 6379
name: redis
volumeMounts:
- name: data
mountPath: /var/opt/mssql
- name: redis-config
mountPath: /etc/redis/
volumes:
- name: redis-config
emptyDir: {}
- name: config
configMap:
name: redis-config
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 50Mi
---
apiVersion: v1
kind: Service
metadata:
name: redis
spec:
clusterIP: None
ports:
- port: 6379
targetPort: 6379
name: redis
selector:
app: redis
The real issue I'm having, I believe, stems from how I've configured the Sentinel StatefulSet. The pods won't schedule, and the printed reason is that no available persistent volumes were found to bind.
SENTINEL STATEFULSET CONFIG (the problem is here; I can't figure out how to set it up correctly with the volumes I made):
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: sentinel
spec:
serviceName: sentinel
replicas: 3
selector:
matchLabels:
app: sentinel
template:
metadata:
labels:
app: sentinel
spec:
initContainers:
- name: config
image: redis:6.0-alpine
command: [ "sh", "-c" ]
args:
- |
REDIS_PASSWORD=a-very-complex-password-here
nodes=redis-0.redis.redis.svc.cluster.local,redis-1.redis.redis.svc.cluster.local,redis-2.redis.redis.svc.cluster.local
for i in ${nodes//,/ }
do
echo "finding master at $i"
MASTER=$(redis-cli --no-auth-warning --raw -h $i -a $REDIS_PASSWORD info replication | awk '{print $1}' | grep master_host: | cut -d ":" -f2)
if [ "$MASTER" == "" ]; then
echo "no master found"
MASTER=
else
echo "found $MASTER"
break
fi
done
echo "sentinel monitor mymaster $MASTER 6379 2" >> /tmp/master
echo "port 5000
$(cat /tmp/master)
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
sentinel auth-pass mymaster $REDIS_PASSWORD
" > /etc/redis/sentinel.conf
cat /etc/redis/sentinel.conf
volumeMounts:
- name: redis-config
mountPath: /etc/redis/
containers:
- name: sentinel
image: redis:6.0-alpine
command: ["redis-sentinel"]
args: ["/etc/redis/sentinel.conf"]
ports:
- containerPort: 5000
name: sentinel
volumeMounts:
- name: redis-config
mountPath: /etc/redis/
- name: data
mountPath: /var/opt/mssql
volumes:
- name: redis-config
emptyDir: {}
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 50Mi
---
apiVersion: v1
kind: Service
metadata:
name: sentinel
spec:
clusterIP: None
ports:
- port: 5000
targetPort: 5000
name: sentinel
selector:
app: sentinel
This is my first post here. I am a big fan of Stack Overflow!
You may try to create three PVs using this template:
kind: PersistentVolume
apiVersion: v1
metadata:
name: ag1-pv-volume-node-0
labels:
type: local
spec:
storageClassName: local-storage
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
claimRef:
namespace: default
name: data-redis-0
local:
path: "/var/opt/mssql"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k8s-node-0
The important part here is the claimRef field, which ties the PV to the PVC created by the StatefulSet.
The claim name has to follow a specific format (see the note after the link below).
Read more here: https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd#using_a_preexisting_disk_in_a_statefulset
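A StatefulSet names its PVCs <volumeClaimTemplate-name>-<statefulset-name>-<ordinal>, so for the Sentinel StatefulSet above the claims would be data-sentinel-0, data-sentinel-1 and data-sentinel-2. A claimRef for the first Sentinel volume would therefore look roughly like this (a sketch; the namespace is assumed to be default, as in the template above):

claimRef:
  namespace: default
  name: data-sentinel-0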