Chaincode instantiation fails in AWS EKS network - kubernetes

I have a simple one-orderer, one-peer network deployed on AWS Elastic Kubernetes Service (EKS). I created the cluster using the official EKS documentation. I am able to bring up the orderer as well as the peer within their pods. The peer is able to create and join the channel, and the anchor peer update transaction succeeds as well.
High-level configuration for the pods:
- Started as StatefulSets
- Pods use dynamic PVs with a StorageClass configured for aws-ebs
- Exposed using a Service of type LoadBalancer (a rough sketch of the StorageClass and Service is shown right after this list)
- Use the Docker-in-Docker (DinD) approach for chaincode
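For reference, this is roughly what the omitted StorageClass and Service look like — an illustrative sketch rather than the exact manifests (the gp2 volume type and the single 7051 port are assumptions):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: allparticipants-peer0-sc   # matches the storageClassName used in the StatefulSet below
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2                        # assumed EBS volume type
---
apiVersion: v1
kind: Service
metadata:
  name: allparticipants-peer0
spec:
  type: LoadBalancer
  selector:
    app: allparticipants-peer0
  ports:
    - port: 7051
      targetPort: 7051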
Detailed Configuration for Orderer Pod:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: allparticipants-orderer
  labels:
    app: allparticipants-orderer
spec:
  serviceName: orderer
  replicas: 1
  selector:
    matchLabels:
      app: allparticipants-orderer
  template:
    metadata:
      labels:
        app: allparticipants-orderer
    spec:
      containers:
        - name: allparticipants-orderer
          image: <docker-hub-url>/orderer:0.1
          imagePullPolicy: Always
          command: ["sh", "-c", "orderer"]
          ports:
            - containerPort: 7050
          env:
            - name: FABRIC_LOGGING_SPEC
              value: DEBUG
            - name: ORDERER_GENERAL_LOGLEVEL
              value: DEBUG
            - name: ORDERER_GENERAL_LISTENADDRESS
              value: 0.0.0.0
            - name: ORDERER_GENERAL_GENESISMETHOD
              value: file
            - name: ORDERER_GENERAL_GENESISFILE
              value: /var/hyperledger/orderer/orderer.genesis.block
            - name: ORDERER_GENERAL_LOCALMSPID
              value: OrdererMSP
            - name: ORDERER_GENERAL_LOCALMSPDIR
              value: /var/hyperledger/orderer/msp
            - name: ORDERER_GENERAL_TLS_ENABLED
              value: "false"
            - name: ORDERER_GENERAL_TLS_PRIVATEKEY
              value: /var/hyperledger/orderer/tls/server.key
            - name: ORDERER_GENERAL_TLS_CERTIFICATE
              value: /var/hyperledger/orderer/tls/server.crt
            - name: ORDERER_GENERAL_TLS_ROOTCAS
              value: /var/hyperledger/orderer/tls/ca.crt
          volumeMounts:
            - name: allparticipants-orderer-ledger
              mountPath: /var/ledger
  volumeClaimTemplates:
    - metadata:
        name: allparticipants-orderer-ledger
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: allparticipants-orderer-sc
        resources:
          requests:
            storage: 1Gi
Detailed configuration for Peer along with DIND in the same pod:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: allparticipants-peer0
  labels:
    app: allparticipants-peer0
spec:
  serviceName: allparticipants-peer0
  replicas: 1
  selector:
    matchLabels:
      app: allparticipants-peer0
  template:
    metadata:
      labels:
        app: allparticipants-peer0
    spec:
      containers:
        - name: docker
          env:
            - name: DOCKER_TLS_CERTDIR
              value: ""
          securityContext:
            privileged: true
          image: "docker:stable-dind"
          ports:
            - containerPort: 2375
          volumeMounts:
            - mountPath: /var/lib/docker
              name: dockervolume
        - name: allparticipants-peer0
          image: <docker-hub-url>/peer0:0.1
          imagePullPolicy: Always
          command: ["sh", "-c", "peer node start"]
          ports:
            - containerPort: 7051
          env:
            - name: CORE_VM_ENDPOINT
              value: http://localhost:2375
            - name: CORE_PEER_CHAINCODELISTENADDRESS
              value: 0.0.0.0:7052
            - name: FABRIC_LOGGING_SPEC
              value: debug
            - name: CORE_LOGGING_PEER
              value: debug
            - name: CORE_LOGGING_CAUTHDSL
              value: debug
            - name: CORE_LOGGING_GOSSIP
              value: debug
            - name: CORE_LOGGING_LEDGER
              value: debug
            - name: CORE_LOGGING_MSP
              value: info
            - name: CORE_LOGGING_POLICIES
              value: debug
            - name: CORE_LOGGING_GRPC
              value: debug
            - name: CORE_LEDGER_STATE_STATEDATABASE
              value: goleveldb
            - name: GODEBUG
              value: "netdns=go"
            - name: CORE_PEER_TLS_ENABLED
              value: "false"
            - name: CORE_PEER_GOSSIP_USELEADERELECTION
              value: "true"
            - name: CORE_PEER_GOSSIP_ORGLEADER
              value: "false"
            - name: CORE_PEER_GOSSIP_SKIPHANDSHAKE
              value: "true"
            - name: CORE_PEER_PROFILE_ENABLED
              value: "true"
            - name: CORE_PEER_COMMITTER_ENABLED
              value: "true"
            - name: CORE_PEER_TLS_CERT_FILE
              value: /etc/hyperledger/fabric/tls/server.crt
            - name: CORE_PEER_TLS_KEY_FILE
              value: /etc/hyperledger/fabric/tls/server.key
            - name: CORE_PEER_TLS_ROOTCERT_FILE
              value: /etc/hyperledger/fabric/tls/ca.crt
            - name: CORE_PEER_ID
              value: allparticipants-peer0
            - name: CORE_PEER_ADDRESS
              value: <peer0-load-balancer-url>:7051
            - name: CORE_PEER_LISTENADDRESS
              value: 0.0.0.0:7051
            - name: CORE_PEER_EVENTS_ADDRESS
              value: 0.0.0.0:7053
            - name: CORE_PEER_GOSSIP_BOOTSTRAP
              value: <peer0-load-balancer-url>:7051
            - name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
              value: <peer0-load-balancer-url>:7051
            - name: CORE_PEER_LOCALMSPID
              value: AllParticipantsMSP
            - name: CORE_PEER_MSPCONFIGPATH
              value: <path-to-msp-of-peer0>
            - name: ORDERER_ADDRESS
              value: <orderer-load-balancer-url>:7050
            - name: CORE_PEER_ADDRESSAUTODETECT
              value: "true"
            - name: CORE_VM_DOCKER_ATTACHSTDOUT
              value: "true"
            - name: FABRIC_CFG_PATH
              value: /etc/hyperledger/fabric
          volumeMounts:
            - name: allparticipants-peer0-ledger
              mountPath: /var/ledger
            - name: dockersock
              mountPath: /host/var/run/docker.sock
      volumes:
        - name: dockersock
          hostPath:
            path: /var/run/docker.sock
        - name: dockervolume
          persistentVolumeClaim:
            claimName: docker-pvc
  volumeClaimTemplates:
    - metadata:
        name: allparticipants-peer0-ledger
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: allparticipants-peer0-sc
        resources:
          requests:
            storage: 1Gi
I am not including the actual StorageClass and Service configuration for the pods, as they seem to work fine.
So, as stated earlier, the peer is able to create and join the channel, and I am able to install the chaincode on the peer. However, while instantiating the chaincode from within the peer, I get the following error:
Error: error endorsing chaincode: rpc error: code = Unavailable desc = transport is closing
Apart from this, the Docker (DinD) container started within peer0's pod has the following warning logs:
level=warning msg="887409d917bd7ef74ebe8617b8dcbedc8197741662a14b988491c54085e9acfd cleanup: failed to unmount IPC: umount /var/lib/docker/containers/887409d917bd7ef74ebe8617b8dcbedc8197741662a14b988491c54085e9acfd/mounts/shm, flags: 0x2: no such file or directory"
Additionally, there are no logs in this docker container when I submit the following instantiation request:
peer chaincode instantiate -o $ORDERER_ADDRESS -C carchannel -n fabcar \
    -l node -v 0.1.1 -c '{"Args":[]}' \
    -P "OR ('AllParticipantsMSP.member','AllParticipantsMSP.peer', 'AllParticipantsMSP.admin', 'AllParticipantsMSP.client')"
I tried searching for similar issues but was not able to find one that matches this EKS setup.
Is there an issue with the pod configuration or the EKS configuration? I am not able to get past this one. Can someone please point me in the right direction? I am quite new to K8s.
Update 1:
I updated the Service type to LoadBalancer, keeping the rest of the configuration the same. I still get the same error.
Update 2:
Configured the DinD approach for the chaincode container.
Update 3:
Mounted a PV and PVC for the DinD container, and updated the CORE_VM_ENDPOINT access protocol from tcp to http.

The issue here was the image used for the Docker-in-Docker container. I downgraded the image from docker:stable-dind to docker:18-dind. The stable image has TLS enabled by default. In my case, I had tried setting the environment variable DOCKER_TLS_CERTDIR to blank, but that did not work out.
Attaching a snippet of the DinD configuration:
containers:
  - name: docker
    securityContext:
      privileged: true
    image: "docker:18-dind"
    ports:
      - containerPort: 2375
        protocol: TCP
    volumeMounts:
      - mountPath: /var/lib/docker
        name: dockervolume
Note: While I have been able to work around this, I am not marking this as the accepted answer, since TLS might be required to instantiate the chaincode and there should be a way to use it; I am open to, and looking out for, those answers.
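For anyone who wants to keep TLS enabled instead of downgrading, the rough shape of the wiring would probably look like the snippet below. This is an untested sketch: the /certs paths come from the docker:stable-dind image defaults, the CORE_VM_DOCKER_TLS_* variables map to Fabric's vm.docker.tls settings, and I have not verified this combination on EKS.
containers:
  - name: docker
    image: "docker:stable-dind"
    securityContext:
      privileged: true
    env:
      # stable-dind generates server and client certificates under this directory
      - name: DOCKER_TLS_CERTDIR
        value: /certs
    ports:
      - containerPort: 2376   # TLS port; 2375 is the plain-text port
    volumeMounts:
      - name: docker-certs
        mountPath: /certs
      - name: dockervolume
        mountPath: /var/lib/docker
  - name: allparticipants-peer0
    # ...same peer container as in the question, with these additions/changes:
    env:
      - name: CORE_VM_ENDPOINT
        value: tcp://localhost:2376
      - name: CORE_VM_DOCKER_TLS_ENABLED
        value: "true"
      - name: CORE_VM_DOCKER_TLS_CA_FILE
        value: /certs/client/ca.pem
      - name: CORE_VM_DOCKER_TLS_CERT_FILE
        value: /certs/client/cert.pem
      - name: CORE_VM_DOCKER_TLS_KEY_FILE
        value: /certs/client/key.pem
    volumeMounts:
      - name: docker-certs
        mountPath: /certs
        readOnly: true
volumes:
  - name: docker-certs
    emptyDir: {}
The idea is that the peer talks to the DinD daemon over TLS on 2376 using the client certificates the daemon generates at startup; whether the peer's Docker client picks these up exactly like this would need to be verified against the Fabric version in use.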

Related

How to use git-sync image as a sidecar in kubernetes that git pulls periodically

I am trying to use the git-sync image as a sidecar in Kubernetes that runs git pull periodically and mounts the cloned data to a shared volume.
Everything works fine when I configure it for a one-time sync. I want to run it periodically, for example every 10 minutes. Somehow, when I configure it to run periodically, pod initialization fails.
I read the documentation but couldn't find a proper answer. It would be nice if you could help me figure out what I am missing in my configuration.
Here is my configuration that is failing.
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx-helloworld
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: www-data
      initContainers:
        - name: git-sync
          image: k8s.gcr.io/git-sync:v3.1.3
          volumeMounts:
            - name: www-data
              mountPath: /data
          env:
            - name: GIT_SYNC_REPO
              value: "https://github.com/musaalp/design-patterns.git" ##repo-path-you-want-to-clone
            - name: GIT_SYNC_BRANCH
              value: "master" ##repo-branch
            - name: GIT_SYNC_ROOT
              value: /data
            - name: GIT_SYNC_DEST
              value: "hello" ##path-where-you-want-to-clone
            - name: GIT_SYNC_PERIOD
              value: "10"
            - name: GIT_SYNC_ONE_TIME
              value: "false"
          securityContext:
            runAsUser: 0
      volumes:
        - name: www-data
          emptyDir: {}
Pod
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-helloworld
  name: nginx-helloworld
spec:
  containers:
    - image: nginx
      name: nginx-helloworld
      resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
You are using git-sync as an initContainer, which runs only during init (once in the pod's lifecycle).
A Pod can have multiple containers running apps within it, but it can also have one or more init containers, which are run before the app containers are started.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
init-containers
So use this as a regular container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: git-sync
          image: k8s.gcr.io/git-sync:v3.1.3
          volumeMounts:
            - name: www-data
              mountPath: /data
          env:
            - name: GIT_SYNC_REPO
              value: "https://github.com/musaalp/design-patterns.git" ##repo-path-you-want-to-clone
            - name: GIT_SYNC_BRANCH
              value: "master" ##repo-branch
            - name: GIT_SYNC_ROOT
              value: /data
            - name: GIT_SYNC_DEST
              value: "hello" ##path-where-you-want-to-clone
            - name: GIT_SYNC_PERIOD
              value: "20"
            - name: GIT_SYNC_ONE_TIME
              value: "false"
          securityContext:
            runAsUser: 0
        - name: nginx-helloworld
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: www-data
      volumes:
        - name: www-data
          emptyDir: {}

deployment throwing error for init container only when I add a second regular container to my deployment

Hi there, I am currently trying to deploy SonarQube 7.8-community in GKE using a Cloud SQL DB instance.
This requires two containers (one for SonarQube and the other for the Cloud SQL proxy in order to connect to the DB).
The SonarQube container, however, also requires an init container to give it some special memory requirements.
When I create the deployment with just the SonarQube image and the init container it works fine, but this won't be of any use as I need the Cloud SQL proxy container to connect to my external DB. When I add this container, though, the deployment errors with the below:
deirdrerodgers@cloudshell:~ (meta-gear-306013)$ kubectl create -f initsonar.yaml
The Deployment "sonardeploy" is invalid: spec.template.spec.initContainers[0].volumeMounts[0].name: Not found: "init-sysctl"
This is my complete YAML file with the init container and the other two containers. I wonder if the issue is that it doesn't know which container to apply the init container to?
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sonardeploy
  name: sonardeploy
  namespace: sonar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonardeploy
  strategy: {}
  template:
    metadata:
      labels:
        app: sonardeploy
    spec:
      initContainers:
        - name: init-sysctl
          image: busybox:1.32
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          resources:
            {}
          command: ["sh",
            "-e",
            "/tmp/scripts/init_sysctl.sh"]
          volumeMounts:
            - name: init-sysctl
              mountPath: /tmp/scripts/
      volumes:
        - name: init-sysctl
          configMap:
            name: sonarqube-sonarqube-init-sysctl
            items:
              - key: init_sysctl.sh
                path: init_sysctl.sh
    spec:
      containers:
        - image: sonarqube:7.8-community
          name: sonarqube
          env:
            - name: SONARQUBE_JDBC_USERNAME
              valueFrom:
                secretKeyRef:
                  name: sonarsecret
                  key: username
            - name: SONARQUBE_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: sonarsecret
                  key: password
            - name: SONARQUBE_JDBC_URL
              value: jdbc:postgresql://localhost:5432/sonar
          ports:
            - containerPort: 9000
              name: sonarqube
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command: ["/cloud_sql_proxy",
            "-instances=meta-gear-306013:us-central1:sonardb=tcp:5432",
            "-credential_file=/secrets/service_account.json"]
          securityContext:
            runAsNonRoot: true
          volumeMounts:
            - name: cloudsql-instance-credentials-volume
              mountPath: /secrets/
              readOnly: true
      volumes:
        - name: cloudsql-instance-credentials-volume
          secret:
            secretName: cloudsql-instance-credentials
Your YAML file is incorrect: you have two spec: blocks, but there should be only one. You need to combine them. Under the spec block there should be an initContainers block, then containers, and finally a volumes block. Look at the corrected YAML file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sonardeploy
  name: sonardeploy
  namespace: sonar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonardeploy
  strategy: {}
  template:
    metadata:
      labels:
        app: sonardeploy
    spec:
      initContainers:
        - name: init-sysctl
          image: busybox:1.32
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          resources:
            {}
          command: ["sh",
            "-e",
            "/tmp/scripts/init_sysctl.sh"]
          volumeMounts:
            - name: init-sysctl
              mountPath: /tmp/scripts/
      containers:
        - image: sonarqube:7.8-community
          name: sonarqube
          env:
            - name: SONARQUBE_JDBC_USERNAME
              valueFrom:
                secretKeyRef:
                  name: sonarsecret
                  key: username
            - name: SONARQUBE_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: sonarsecret
                  key: password
            - name: SONARQUBE_JDBC_URL
              value: jdbc:postgresql://localhost:5432/sonar
          ports:
            - containerPort: 9000
              name: sonarqube
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command: ["/cloud_sql_proxy",
            "-instances=meta-gear-306013:us-central1:sonardb=tcp:5432",
            "-credential_file=/secrets/service_account.json"]
          securityContext:
            runAsNonRoot: true
          volumeMounts:
            - name: cloudsql-instance-credentials-volume
              mountPath: /secrets/
              readOnly: true
      volumes:
        - name: cloudsql-instance-credentials-volume
          secret:
            secretName: cloudsql-instance-credentials
        - name: init-sysctl
          configMap:
            name: sonarqube-sonarqube-init-sysctl
            items:
              - key: init_sysctl.sh
                path: init_sysctl.sh

kubernetes creating statefulset fail

I am trying to create a StatefulSet with the definition below, but I get this error:
error: unable to recognize "wordpress-database.yaml": no matches for kind "StatefulSet" in version "apps/v1beta2"
What's wrong?
The YAML file is:
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: wordpress-database
spec:
  selector:
    matchLabels:
      app: blog
  serviceName: "blog"
  replicas: 1
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: database
          image: mysql:5.7
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: rootPassword
            - name: MYSQL_DATABASE
              value: database
            - name: MYSQL_USER
              value: user
            - name: MYSQL_PASSWORD
              value: password
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
        - name: blog
          image: wordpress:latest
          ports:
            - containerPort: 80
          env:
            - name: WORDPRESS_DB_HOST
              value: 127.0.0.1:3306
            - name: WORDPRESS_DB_NAME
              value: database
            - name: WORDPRESS_DB_USER
              value: user
            - name: WORDPRESS_DB_PASSWORD
              value: password
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        resources:
          requests:
            storage: 1Gi
The API version of StatefulSet should be:
apiVersion: apps/v1
From the official documentation
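To be explicit, only the apiVersion line in your manifest needs to change (apps/v1beta2 was removed in Kubernetes 1.16 and later); the rest of the StatefulSet stays the same:
apiVersion: apps/v1   # instead of apps/v1beta2
kind: StatefulSet
metadata:
  name: wordpress-database
# ...rest of the spec unchanged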
Good luck.

Kubernetes volumeMount folder and file permissions?

I am trying to mount config files from a hostPath into a Kubernetes container. This works using minikube and a VirtualBox shared folder, but I am unable to make this work on Linux.
I am making use of AWS EKS and the following architecture: https://aws.amazon.com/quickstart/architecture/amazon-eks/. I think my problem is that the files need to live on each of the EKS node instances.
Below is the Deployment file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhim-core-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: openhim-core
  template:
    metadata:
      labels:
        component: openhim-core
    spec:
      volumes:
        - name: core-config
          hostPath:
            path: /var/config/openhim-core
      containers:
        - name: openhim-core
          image: jembi/openhim-core:5.rc
          ports:
            - containerPort: 8080
            - containerPort: 5000
            - containerPort: 5001
          volumeMounts:
            - name: core-config
              mountPath: /usr/src/app/config
          env:
            - name: NODE_ENV
              value: development
After much pain I found that I was trying to place the configuration on the Linux bastion host, where I have access to kubectl, but in fact this configuration has to live on each of the EC2 instances in every availability zone.
The solution for me was to make use of an initContainer.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhim-core-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: openhim-core
  template:
    metadata:
      labels:
        component: openhim-core
    spec:
      containers:
        - name: openhim-core
          image: jembi/openhim-core:5
          ports:
            - containerPort: 8080
            - containerPort: 5000
            - containerPort: 5001
          volumeMounts:
            - name: core-config
              mountPath: /usr/src/app/config
          env:
            - name: NODE_ENV
              value: development
      initContainers:
        - name: install
          image: busybox
          command:
            - wget
            - "-O"
            - "/usr/src/app/config/development.json"
            - https://s3.eu-central-1.amazonaws.com/../development.json
          volumeMounts:
            - name: core-config
              mountPath: "/usr/src/app/config"
      volumes:
        - name: core-config
          emptyDir: {}

Kubernetes using service cluster IP and port as environment variables

I have a backend service on cluster IP 10.101.71.17 and port 26379. I have a frontend deployment where I intend to pass this service IP as an environment variable.
frontend-deployment.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
  namespace: my-namespace
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
        - name: frontend
          image: localhost:5000/frontend
          command: [ "/usr/local/bin/node"]
          args: [ "./index.js" ]
          imagePullPolicy: IfNotPresent
          env:
            - name: NODE_ENV
              value: production
            - name: API_URL
              value: BACKEND_HTTP_SERVICE_HOST // Here
            - name: BASIC_AUTH
              value: "true"
            - name: SECURE
              value: "true"
            - name: PORT
              value: "443"
          ports:
            - containerPort: 443
            - containerPort: 80
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
      ports:
        - containerPort: 8079
      nodeSelector:
        beta.kubernetes.io/os: linux
---
I can get all the environment variables inside the pod, but I am not sure what the proper way is to assign the service address to the environment variable value.
I assume that in your front-end application you are referring to your back-end service via the API_URL environment variable.
If this is the case, just replace BACKEND_HTTP_SERVICE_HOST with 10.101.71.17:26379:
env:
  - name: NODE_ENV
    value: production
  - name: API_URL
    value: 10.101.71.17:26379
  - name: BASIC_AUTH
    value: "true"
  - name: SECURE
    value: "true"
  - name: PORT
    value: "443"
However, you should consider using the DNS name of the Service rather than the cluster IP.
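As a sketch of that approach (the Service name backend-http and namespace my-namespace are assumptions here; substitute your actual back-end Service name), the cluster DNS name resolves to the same cluster IP and keeps working even if the Service is recreated with a new IP:
env:
  - name: API_URL
    # <service>.<namespace>.svc.cluster.local is resolved by the cluster DNS;
    # "backend-http" is an assumed Service name used for illustration
    value: backend-http.my-namespace.svc.cluster.local:26379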