Helm lifecycle commands in deployment

I have a deployment.yaml template.
I have AWS EKS 1.8 and the same kubectl version.
I'm using Helm 3.3.4.
When I deploy the same template directly through kubectl apply -f deployment.yaml, everything is good: the init containers and the main container in the pod work fine.
But if I try to start the deployment through Helm, I get this error:
OCI runtime exec failed: exec failed: container_linux.go:349: starting
container process caused "process_linux.go:101: executing setns
process caused \"exit status 1\"": unknown\r\n"
kubectl describe pods osad-apidoc-6b74c9bcf9-tjnrh
It looks like I have missed something in the annotations, or I'm using the wrong syntax in the command description.
Some unimportant parameters are omitted from this example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "apidoc.fullname" . }}
  labels:
    {{- include "apidoc.labels" . | nindent 4 }}
spec:
  template:
    metadata:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          lifecycle:
            postStart:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - |
                    echo 'ServerName 127.0.0.1' >> /etc/apache2/apache2.conf
                    a2enmod rewrite
                    a2enmod headers
                    a2enmod ssl
                    a2dissite default
                    a2ensite 000-default
                    a2enmod rewrite
                    service apache2 start
I have tried invoking a simple command like:
lifecycle:
  postStart:
    exec:
      command: ['/bin/sh', '-c', 'printenv']
Unfortunately I got the same error.
If I delete this lifecycle block, everything works fine through Helm.
But I need to invoke these commands; I can't omit these steps.
I also checked the Helm template with lint, and it looks good.
Original deployment.yaml before the move to Helm:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apidoc
  namespace: apidoc
  labels:
    app: apidoc
    stage: dev
    version: v-1
spec:
  selector:
    matchLabels:
      app: apidoc
      stage: dev
  replicas: 1
  template:
    metadata:
      labels:
        app: apidoc
        stage: dev
        version: v-1
    spec:
      initContainers:
        - name: git-clone
          image: '123123123123.dkr.ecr.us-east-1.amazonaws.com/helper:latest'
          volumeMounts:
            - name: repos
              mountPath: /var/repos
          workingDir: /var/repos
          command:
            - sh
            - '-c'
            - >-
              git clone --single-branch --branch k8s
              git#github.com:examplerepo/apidoc.git -qv
        - name: copy-data
          image: '123123123123.dkr.ecr.us-east-1.amazonaws.com/helper:latest'
          volumeMounts:
            - name: web-data
              mountPath: /var/www/app
            - name: repos
              mountPath: /var/repos
          workingDir: /var/repos
          command:
            - sh
            - '-c'
            - >-
              if cp -r apidoc/* /var/www/app/; then echo 'Success!!!' && exit 0;
              else echo 'Failed !!!' && exit 1;fi;
      containers:
        - name: apache2
          image: '123123123123.dkr.ecr.us-east-1.amazonaws.com/apache2:2.2'
          tty: true
          volumeMounts:
            - name: web-data
              mountPath: /var/www/app
            - name: configfiles
              mountPath: /etc/apache2/sites-available/000-default.conf
              subPath: 000-default.conf
          ports:
            - name: http
              protocol: TCP
              containerPort: 80
          lifecycle:
            postStart:
              exec:
                command:
                  - /bin/sh
                  - '-c'
                  - |
                    echo 'ServerName 127.0.0.1' >> /etc/apache2/apache2.conf
                    a2enmod rewrite
                    a2enmod headers
                    a2enmod ssl
                    a2dissite default
                    a2ensite 000-default
                    a2enmod rewrite
                    service apache2 start
      volumes:
        - name: web-data
          emptyDir: {}
        - name: repos
          emptyDir: {}
        - name: configfiles
          configMap:
            name: apidoc-config


Helm: exporting environment variables from an env file located on a volumeMount

I have a problem: I am trying to set env vars from a file located on a volumeMount (an initContainer pulls the file from a remote location and puts it at the path /mnt/volume/; a custom solution) during container start, but it doesn't work.
- name: {{ .Chart.Name }}
  securityContext:
    {{- toYaml .Values.securityContext | nindent 12 }}
  image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
  imagePullPolicy: {{ .Values.image.pullPolicy }}
  command:
    - /bin/sh
    - -c
    - "echo 'export $(cat /mnt/volume/.env | xargs) && java -jar application/target/application-0.0.1-SNAPSHOT.jar"
This doesn't work, so then I tried a simple export command:
command:
  - /bin/sh
  - -c
  - "export ENV_TMP=tmp && java -jar application/target/application-0.0.1-SNAPSHOT.jar"
and it doesn't work either. I also tried appending to the .bashrc file, but it still doesn't work.
I am not sure how to handle this.
Thanks
EDIT: typo
By default there is no shared storage between a pod's containers, so the first step is to check that you have a volume mounted in both containers; one of the best solutions is an emptyDir.
Then you can write a variable in the init container and read it from the main container:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  initContainers:
    - name: init-container
      image: alpine
      command:
        - /bin/sh
        - -c
        - 'echo "test_value" > /mnt/volume/var.txt'
      volumeMounts:
        - mountPath: /mnt/volume
          name: shared-storage
  containers:
    - image: alpine
      name: test-container
      command:
        - /bin/sh
        - -c
        - 'READ_VAR=$(cat /mnt/volume/var.txt) && echo "main_container: ${READ_VAR}"'
      volumeMounts:
        - mountPath: /mnt/volume
          name: shared-storage
  volumes:
    - name: shared-storage
      emptyDir: {}
Here is the log:
$ kubectl logs test-pd
main_container: test_value
I have checked this, and in the end I found a typo.
Anyway, thanks for the suggestions.

How do I differentiate the two YAML files in a GitLab CI/CD script?

I have this GitLab CI/CD YAML script:
services:
  - docker:19.03.11-dind
workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH == "developer" || $CI_COMMIT_BRANCH == "stage"|| ($CI_COMMIT_BRANCH =~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i))
      when: always
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH != "developer" || $CI_COMMIT_BRANCH != "stage"|| ($CI_COMMIT_BRANCH !~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i))
      when: never
stages:
  - build
  - Publish
  - deploy
cache:
  paths:
    - .m2/repository
    - target
build_jar:
  image: maven:3.8.3-jdk-11
  stage: build
  script:
    - mvn clean install package -DskipTests=true
  artifacts:
    paths:
      - target/*.jar
docker_build:
  stage: Publish
  image: docker:19.03.11
  services:
    - docker:19.03.11-dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
deploy_dev:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  before_script:
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" provider-service-dev.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_DEV $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - kubectl apply -f provider-service-dev.yml
  only:
    - developer
deploy_stage:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  before_script:
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" provider-service-stage.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_STAGE $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - kubectl apply -f provider-service-stage.yml
  only:
    - stage
This YAML script combines the pipelines for two branches, i.e. developer and stage,
but I have two separate yml manifests, one per branch (developer and stage):
provider-service-dev.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: provider-app
  namespace: stellacenter-dev
  labels:
    app: provider-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: provider-app
  template:
    metadata:
      labels:
        app: provider-app
    spec:
      containers:
        - name: provider-app
          image: registry.gitlab.com/stella-center/backend-services/provider-service:<VERSION>
          imagePullPolicy: Always
          ports:
            - containerPort: 8092
      imagePullSecrets:
        - name: gitlab-registry-token-auth
---
apiVersion: v1
kind: Service
metadata:
  name: provider-service
  namespace: stellacenter-dev
spec:
  type: NodePort
  selector:
    app: provider-app
  ports:
    - port: 8092
      targetPort: 8092
provider-service-stage.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: provider-app
  namespace: stellacenter-stage-uat
  labels:
    app: provider-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: provider-app
  template:
    metadata:
      labels:
        app: provider-app
    spec:
      containers:
        - name: provider-app
          image: registry.gitlab.com/stella-center/backend-services/provider-service:<VERSION>
          imagePullPolicy: Always
          ports:
            - containerPort: 8092
      imagePullSecrets:
        - name: gitlab-registry-token-auth
---
apiVersion: v1
kind: Service
metadata:
  name: provider-service
  namespace: stellacenter-stage-uat
spec:
  type: NodePort
  selector:
    app: provider-app
  ports:
    - port: 8092
      targetPort: 8092
I have referenced them separately in the GitLab CI/CD YAML script, but it shows an error like:
$ kubectl apply -f provider-service-dev.yml
Error from server (NotFound): error when creating "provider-service-dev.yml": namespaces "stellacenter-dev" not found
Error from server (NotFound): error when creating "provider-service-dev.yml": namespaces "stellacenter-dev" not found
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
It shows an error on the last line of the script: it says the namespaces can't be found, but they are there. I don't know how to sort this out. Please kindly help me sort it out.
The error says your namespace is missing: you must first create the namespace stellacenter-dev.
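A minimal sketch of doing that (the imperative equivalent is kubectl create namespace stellacenter-dev): a Namespace manifest applied before provider-service-dev.yml:

```yaml
# Create the namespace the Deployment and Service reference.
apiVersion: v1
kind: Namespace
metadata:
  name: stellacenter-dev
```

The stage job would need the analogous stellacenter-stage-uat namespace created the same way.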

502 Bad Gateway with Kubernetes Ingress

I have an EKS cluster with 2 nodes. I have deployed MySQL with 3 pods: a master one for writing and 2 workers for reading.
I have a simple nodejs application, which connects to and reads from a MySQL DB called emails_db.
I created the database simply by entering the master MySQL pod, and the nodejs application should read from the DB.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
        - name: init-mysql
          image: mysql:5.7
          command:
            - bash
            - "-c"
            - |
              set -ex
              # Generate mysql server-id from pod ordinal index.
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              echo [mysqld] > /mnt/conf.d/server-id.cnf
              # Add an offset to avoid reserved server-id=0 value.
              echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
              # Copy appropriate conf.d files from config-map to emptyDir.
              if [[ $ordinal -eq 0 ]]; then
                cp /mnt/config-map/primary.cnf /mnt/conf.d/
              else
                cp /mnt/config-map/replica.cnf /mnt/conf.d/
              fi
          volumeMounts:
            - name: conf
              mountPath: /mnt/conf.d
            - name: config-map
              mountPath: /mnt/config-map
        - name: clone-mysql
          image: gcr.io/google-samples/xtrabackup:1.0
          command:
            - bash
            - "-c"
            - |
              set -ex
              # Skip the clone if data already exists.
              [[ -d /var/lib/mysql/mysql ]] && exit 0
              # Skip the clone on primary (ordinal index 0).
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              [[ $ordinal -eq 0 ]] && exit 0
              # Clone data from previous peer.
              ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
              # Prepare the backup.
              xtrabackup --prepare --target-dir=/var/lib/mysql
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD
              value: "1"
          ports:
            - name: mysql
              containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
          resources:
            requests:
              cpu: 500m
              memory: 1Gi
          livenessProbe:
            exec:
              command: ["mysqladmin", "ping"]
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
          readinessProbe:
            exec:
              # Check we can execute queries over TCP (skip-networking is off).
              command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
            initialDelaySeconds: 5
            periodSeconds: 2
            timeoutSeconds: 1
        - name: xtrabackup
          image: gcr.io/google-samples/xtrabackup:1.0
          ports:
            - name: xtrabackup
              containerPort: 3307
          command:
            - bash
            - "-c"
            - |
              set -ex
              cd /var/lib/mysql
              # Determine binlog position of cloned data, if any.
              if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
                # XtraBackup already generated a partial "CHANGE MASTER TO" query
                # because we're cloning from an existing replica. (Need to remove the tailing semicolon!)
                cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
                # Ignore xtrabackup_binlog_info in this case (it's useless).
                rm -f xtrabackup_slave_info xtrabackup_binlog_info
              elif [[ -f xtrabackup_binlog_info ]]; then
                # We're cloning directly from primary. Parse binlog position.
                [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
                rm -f xtrabackup_binlog_info xtrabackup_slave_info
                echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                      MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
              fi
              # Check if we need to complete a clone by starting replication.
              if [[ -f change_master_to.sql.in ]]; then
                echo "Waiting for mysqld to be ready (accepting connections)"
                until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
                echo "Initializing replication from clone position"
                mysql -h 127.0.0.1 \
                      -e "$(<change_master_to.sql.in), \
                              MASTER_HOST='mysql-0.mysql', \
                              MASTER_USER='root', \
                              MASTER_PASSWORD='', \
                              MASTER_CONNECT_RETRY=10; \
                            START SLAVE;" || exit 1
                # In case of container restart, attempt this at-most-once.
                mv change_master_to.sql.in change_master_to.sql.orig
              fi
              # Start a server to send backups when requested by peers.
              exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
                "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
      volumes:
        - name: conf
          emptyDir: {}
        - name: config-map
          configMap:
            name: mysql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
Nodejs Service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nodejs
  name: nodejs-service
spec:
  selector:
    app: nodejs
  ports:
    - name: web
      port: 5000
      protocol: TCP
      targetPort: 3306
Nodejs Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs
  labels:
    app: nodejs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      containers:
        - name: nodejs
          image: 199266549/my-nodeapi:PortLatest
          ports:
            - containerPort: 5000
          env:
            - name: db-url
              valueFrom:
                configMapKeyRef:
                  name: myconfig
                  key: db-url
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: myconfig
                  key: DB_NAME
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: DB_USER
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: DB_PASSWORD
Ingress looks like this:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysql-ingress
  namespace: default
  labels:
    app: mysql
spec:
  rules:
    - host: mysql.com
      http:
        paths:
          - path: /read
            backend:
              serviceName: mysql-read
              servicePort: 80
          - path: /write
            backend:
              serviceName: mysql
              servicePort: 80
          - path: /
            backend:
              serviceName: nodejs-service
              servicePort: 5000
The target port is 3306, which is the port exposed by MySQL, while the nodejs application exposes port 5000.
I tried the following curl command:
curl -v -Hhost:mysql.com http://abccc302f198147aca906e71c710c661-29001ea7d40ff165.elb.us-east-1.amazonaws.com/
I checked the ingress backend; the service of interest is the one given with path /.
Any idea what is missing when trying to connect the nodejs service to the MySQL pods?
Is this problematic?
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)

Deploying IPFS-Cluster on Kubernetes gives an error

When I deploy IPFS-Cluster on Kubernetes, I get the following error (these are the ipfs-cluster logs):
error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
2022-01-04T10:23:08.103Z INFO service ipfs-cluster-service/daemon.go:47 Initializing. For verbose output run with "-l debug". Please wait...
2022-01-04T10:23:08.103Z ERROR config config/config.go:352 error reading the configuration file: open /data/ipfs-cluster/service.json: no such file or directory
error loading configurations: open /data/ipfs-cluster/service.json: no such file or directory
These are the initContainer logs:
+ user=ipfs
+ mkdir -p /data/ipfs
+ chown -R ipfs /data/ipfs
+ '[' -f /data/ipfs/config ]
+ ipfs init '--profile=badgerds,server'
initializing IPFS node at /data/ipfs
generating 2048-bit RSA keypair...done
peer identity: QmUHmdhauhk7zdj5XT1zAa6BQfrJDukysb2PXsCQ62rBdS
to get started, enter:
ipfs cat /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme
+ ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
+ ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
+ ipfs config --json Swarm.ConnMgr.HighWater 2000
+ ipfs config --json Datastore.BloomFilterSize 1048576
+ ipfs config Datastore.StorageMax 100GB
These are the ipfs container logs:
Changing user to ipfs
ipfs version 0.4.18
Found IPFS fs-repo at /data/ipfs
Initializing daemon...
go-ipfs version: 0.4.18-aefc746
Repo version: 7
System version: amd64/linux
Golang version: go1.11.1
Error: open /data/ipfs/config: permission denied
Received interrupt signal, shutting down...
(Hit ctrl-c again to force-shutdown the daemon.)
The following is my Kubernetes YAML file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ipfs-cluster
spec:
  serviceName: ipfs-cluster
  replicas: 3
  selector:
    matchLabels:
      app: ipfs-cluster
  template:
    metadata:
      labels:
        app: ipfs-cluster
    spec:
      initContainers:
        - name: configure-ipfs
          image: "ipfs/go-ipfs:v0.4.18"
          command: ["sh", "/custom/configure-ipfs.sh"]
          volumeMounts:
            - name: ipfs-storage
              mountPath: /data/ipfs
            - name: configure-script
              mountPath: /custom/entrypoint.sh
              subPath: entrypoint.sh
            - name: configure-script-2
              mountPath: /custom/configure-ipfs.sh
              subPath: configure-ipfs.sh
      containers:
        - name: ipfs
          image: "ipfs/go-ipfs:v0.4.18"
          imagePullPolicy: IfNotPresent
          env:
            - name: IPFS_FD_MAX
              value: "4096"
          ports:
            - name: swarm
              protocol: TCP
              containerPort: 4001
            - name: swarm-udp
              protocol: UDP
              containerPort: 4002
            - name: api
              protocol: TCP
              containerPort: 5001
            - name: ws
              protocol: TCP
              containerPort: 8081
            - name: http
              protocol: TCP
              containerPort: 8080
          livenessProbe:
            tcpSocket:
              port: swarm
            initialDelaySeconds: 30
            timeoutSeconds: 5
            periodSeconds: 15
          volumeMounts:
            - name: ipfs-storage
              mountPath: /data/ipfs
            - name: configure-script
              mountPath: /custom
          resources:
            {}
        - name: ipfs-cluster
          image: "ipfs/ipfs-cluster:latest"
          imagePullPolicy: IfNotPresent
          command: ["sh", "/custom/entrypoint.sh"]
          envFrom:
            - configMapRef:
                name: env-config
          env:
            - name: BOOTSTRAP_PEER_ID
              valueFrom:
                configMapRef:
                  name: env-config
                  key: bootstrap-peer-id
            - name: BOOTSTRAP_PEER_PRIV_KEY
              valueFrom:
                secretKeyRef:
                  name: secret-config
                  key: bootstrap-peer-priv-key
            - name: CLUSTER_SECRET
              valueFrom:
                secretKeyRef:
                  name: secret-config
                  key: cluster-secret
            - name: CLUSTER_MONITOR_PING_INTERVAL
              value: "3m"
            - name: SVC_NAME
              value: $(CLUSTER_SVC_NAME)
          ports:
            - name: api-http
              containerPort: 9094
              protocol: TCP
            - name: proxy-http
              containerPort: 9095
              protocol: TCP
            - name: cluster-swarm
              containerPort: 9096
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: cluster-swarm
            initialDelaySeconds: 5
            timeoutSeconds: 5
            periodSeconds: 10
          volumeMounts:
            - name: cluster-storage
              mountPath: /data/ipfs-cluster
            - name: configure-script
              mountPath: /custom/entrypoint.sh
              subPath: entrypoint.sh
          resources:
            {}
      volumes:
        - name: configure-script
          configMap:
            name: ipfs-cluster-set-bootstrap-conf
        - name: configure-script-2
          configMap:
            name: configura-ipfs
  volumeClaimTemplates:
    - metadata:
        name: cluster-storage
      spec:
        storageClassName: gp2
        accessModes: ["ReadWriteOnce"]
        persistentVolumeReclaimPolicy: Retain
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: ipfs-storage
      spec:
        storageClassName: gp2
        accessModes: ["ReadWriteOnce"]
        persistentVolumeReclaimPolicy: Retain
        resources:
          requests:
            storage: 200Gi
---
kind: Secret
apiVersion: v1
metadata:
  name: secret-config
  namespace: weex-ipfs
  annotations:
    kubesphere.io/creator: tom
data:
  bootstrap-peer-priv-key: >-
    UTBGQlUzQjNhM2RuWjFOcVFXZEZRVUZ2U1VKQlVVTjBWbVpUTTFwck9ETkxVWEZNYzJFemFGWlZaV2xKU0doUFZGRTBhRmhrZVhCeFJGVmxVbmR6Vmt4Nk9IWndZ...
  cluster-secret: 7d4c019035beb7da7275ea88315c39b1dd9fdfaef017596550ffc1ad3fdb556f
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: env-config
  namespace: weex-ipfs
  annotations:
    kubesphere.io/creator: tom
data:
  bootstrap-peer-id: QmWgEHZEmJhuoDgFmBKZL8VtpMEqRArqahuaX66cbvyutP
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: ipfs-cluster-set-bootstrap-conf
  namespace: weex-ipfs
  annotations:
    kubesphere.io/creator: tom
data:
  entrypoint.sh: |2
    #!/bin/sh
    user=ipfs
    # This is a custom entrypoint for k8s designed to connect to the bootstrap
    # node running in the cluster. It has been set up using a configmap to
    # allow changes on the fly.
    if [ ! -f /data/ipfs-cluster/service.json ]; then
      ipfs-cluster-service init
    fi
    PEER_HOSTNAME=`cat /proc/sys/kernel/hostname`
    grep -q ".*ipfs-cluster-0.*" /proc/sys/kernel/hostname
    if [ $? -eq 0 ]; then
      CLUSTER_ID=${BOOTSTRAP_PEER_ID} \
      CLUSTER_PRIVATEKEY=${BOOTSTRAP_PEER_PRIV_KEY} \
      exec ipfs-cluster-service daemon --upgrade
    else
      BOOTSTRAP_ADDR=/dns4/${SVC_NAME}-0/tcp/9096/ipfs/${BOOTSTRAP_PEER_ID}
      if [ -z $BOOTSTRAP_ADDR ]; then
        exit 1
      fi
      # Only ipfs user can get here
      exec ipfs-cluster-service daemon --upgrade --bootstrap $BOOTSTRAP_ADDR --leave
    fi
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: configura-ipfs
  namespace: weex-ipfs
  annotations:
    kubesphere.io/creator: tom
data:
  configure-ipfs.sh: >-
    #!/bin/sh

    set -e

    set -x

    user=ipfs

    # This is a custom entrypoint for k8s designed to run ipfs nodes in an
    appropriate
    # setup for production scenarios.

    mkdir -p /data/ipfs && chown -R ipfs /data/ipfs

    if [ -f /data/ipfs/config ]; then
      if [ -f /data/ipfs/repo.lock ]; then
        rm /data/ipfs/repo.lock
      fi
      exit 0
    fi

    ipfs init --profile=badgerds,server

    ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001

    ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080

    ipfs config --json Swarm.ConnMgr.HighWater 2000

    ipfs config --json Datastore.BloomFilterSize 1048576

    ipfs config Datastore.StorageMax 100GB
I followed the official steps to build this, using the following command to generate the cluster-secret:
$ od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n' | base64 -w 0 -
But I get:
error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
I saw the same problem in the official GitHub issues, so using the openssl rand -hex command is not OK either.
To clarify, I am posting a community wiki answer.
To solve the following error:
no such file or directory
you used runAsUser: 0.
The second error:
error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
was caused by CLUSTER_SECRET being in an encoding other than hex.
According to this page:
The Cluster Secret Key
The secret key of a cluster is a 32-bit hex encoded random string, of which every cluster peer needs in their _service.json_ configuration.
A secret key can be generated and predefined in the CLUSTER_SECRET environment variable, and will subsequently be used upon running ipfs-cluster-service init.
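Given the above, a hedged sketch of generating a valid secret: keep the od output as-is and drop the base64 step from the question's command, since base64 output contains characters that are not valid hex:

```shell
# 32 random bytes, printed as 64 lowercase hex characters.
# (Piping this through `base64 -w 0` is what introduced the invalid bytes.)
CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')
echo "$CLUSTER_SECRET"
```

This value can then be placed in the secret-config Secret (base64-encoded only at the Kubernetes Secret layer, as kubectl does automatically).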
Here is a link to the solved issue.
See also:
Configuration reference
IPFS Tutorial
Documentation guide Security and ports

Kubernetes initContainers to copy file and execute as part of Lifecycle Hook PostStart

I am trying to execute some scripts as part of a StatefulSet deployment. I added the script as a ConfigMap and use it as a volumeMount inside the pod definition, and I use the lifecycle postStart exec command to execute it. It fails with a permission issue.
Based on certain articles, I found that we should copy this file in an initContainer and then use that (I am not sure why we should do this or what difference it makes).
Still, I tried it, and that also gives the same error.
Here is my ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap-initscripts
data:
  poststart.sh: |
    #!/bin/bash
    echo "It`s done"
Here is my StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-statefulset
spec:
  ....
  serviceName: postgres-service
  replicas: 1
  template:
    ...
    spec:
      initContainers:
        - name: "postgres-ghost"
          image: alpine
          volumeMounts:
            - mountPath: /scripts
              name: postgres-scripts
      containers:
        - name: postgres
          image: postgres
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "/scripts/poststart.sh" ]
          ports:
            - containerPort: 5432
              name: dbport
          ....
          volumeMounts:
            - mountPath: /scripts
              name: postgres-scripts
      volumes:
        - name: postgres-scripts
          configMap:
            name: postgres-configmap-initscripts
            items:
              - key: poststart.sh
                path: poststart.sh
The error I am getting:
The postStart hook will be called at least once but may be called more than once, so it is not a good place to run a script.
The poststart.sh file mounted from the ConfigMap will not have execute permission, hence the permission error.
It is better to run the script in initContainers. Here's a quick example that does a simple chmod; in your case you can execute the script instead:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: busybox
data:
  test.sh: |
    #!/bin/bash
    echo "It's done"
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    run: busybox
spec:
  volumes:
    - name: scripts
      configMap:
        name: busybox
        items:
          - key: test.sh
            path: test.sh
    - name: runnable
      emptyDir: {}
  initContainers:
    - name: prepare
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["ash","-c"]
      args: ["cp /scripts/test.sh /runnable/test.sh && chmod +x /runnable/test.sh"]
      volumeMounts:
        - name: scripts
          mountPath: /scripts
        - name: runnable
          mountPath: /runnable
  containers:
    - name: busybox
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["ash","-c"]
      args: ["while :; do . /runnable/test.sh; sleep 1; done"]
      volumeMounts:
        - name: scripts
          mountPath: /scripts
        - name: runnable
          mountPath: /runnable
EOF
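As a side note (an alternative sketch, not part of the answer above): ConfigMap volumes accept a defaultMode field, which can make the mounted script executable without the copy-and-chmod initContainer:

```yaml
# Alternative: set the file mode on the projected ConfigMap files directly.
volumes:
  - name: scripts
    configMap:
      name: busybox
      defaultMode: 0755   # projected files become executable
      items:
        - key: test.sh
          path: test.sh
```

With this, the container can run /scripts/test.sh directly; whether it fits depends on the image's user and the mount's read-only nature (the files themselves stay read-only either way).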