k3s with skaffold: container can't be pulled

I'm creating a micro-service API that I want to run on a local Raspberry Pi k3s cluster.
The aim is to use Skaffold to deploy during development.
The problem is that every time I run skaffold dev I get the same error:
deployment/epos-auth-deploy: container epos-auth is waiting to start: 192.168.1.10:8080/epos_auth:05ea8c1#sha256:4e7f7c7224ce1ec4831627782ed696dee68b66929b5641e9fc6cfbfc4d2e82da can't be pulled
I've tried to set up a local Docker registry, which is defined with this docker-compose.yaml file:
version: '2.0'
services:
registry:
image: registry:latest
volumes:
- ./registry-data:/var/lib/registry
networks:
- registry-ui-net
ui:
image: joxit/docker-registry-ui:static
ports:
- 8080:80
environment:
- REGISTRY_TITLE=Docker Registry
- REGISTRY_URL=http://registry:5000
depends_on:
- registry
networks:
- registry-ui-net
networks:
registry-ui-net:
It's reachable at http://192.168.1.10:8080 on my local network.
Building and pushing the image seems to work fine.
I have also set /etc/docker/daemon.json on my local computer:
{
"insecure-registries": ["192.168.1.10:8080"],
"registry-mirrors": ["http://192.168.1.10:8080"]
}
I've set /etc/rancher/k3s/registries.yaml on all the nodes:
mirrors:
docker.io:
endpoint:
- "http://192.168.1.10:8080"
skaffold.yaml looks like this:
apiVersion: skaffold/v2alpha3
kind: Config
metadata:
name: epos-skaffold
deploy:
kubectl:
manifests:
- ./infra/k8s/skaffold/*
build:
local:
useBuildkit: true
artifacts:
- image: epos_auth
context: epos-auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: 'src/**/*.ts'
dest: .
And ./infra/k8s/skaffold/epos-auth-deploy.yaml looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
name: epos-auth-deploy
spec:
replicas: 1
selector:
matchLabels:
app: epos-auth
template:
metadata:
labels:
app: epos-auth
spec:
containers:
- name: epos-auth
image: epos_auth
env:
- name: NATS_URL
value: http://nats-srv:4222
- name: NATS_CLUSTER_ID
value: epos
- name: NATS_CLIENT_ID
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: JWT_KEY
valueFrom:
secretKeyRef:
name: jwt-keys
key: key
- name: JWT_PRIVATE_KEY
valueFrom:
secretKeyRef:
name: jwt-keys
key: private_key
- name: JWT_PUBLIC_KEY
valueFrom:
secretKeyRef:
name: jwt-keys
key: public_key
- name: MONGO_URI
value: mongodb://mongodb-srv:27017/auth-service
---
apiVersion: v1
kind: Service
metadata:
name: epos-auth-srv
spec:
selector:
app: epos-auth
ports:
- name: epos-auth
protocol: TCP
port: 3000
targetPort: 3000
I ran:
skaffold config set default-repo 192.168.1.10:8080
skaffold config set insecure-registries 192.168.1.10:8080
I really don't know what's wrong with this.
Do you have any clue, please?

Please change the docker.io reference, as it is confusing:
mirrors:
docker.io:
endpoint:
- "http://192.168.1.10:8080"
And use the full registry path in the image reference, like:
registry.local:5000/epos_auth
https://devopsspiral.com/articles/k8s/k3d-skaffold/
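For illustration, a minimal sketch of that combination, assuming the registry container itself (not the UI) is exposed to the nodes as registry.local:5000; the hostname, port, and how the nodes resolve it are placeholders to adapt:
# /etc/rancher/k3s/registries.yaml on every node
mirrors:
  "registry.local:5000":
    endpoint:
      - "http://registry.local:5000"
Then point Skaffold's default repo at the same address, so the rendered image reference becomes registry.local:5000/epos_auth:<tag>:
skaffold config set default-repo registry.local:5000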

I installed docker-ce on the master node, then created the master node with the --docker parameter in the command.
When I created the k3s node, I had defined a new context, but I forgot to replace the default context in ~/.skaffold/config.
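In case it helps someone else, this is roughly what scoping Skaffold to the right context looks like (a sketch; the context name k3s-pi is made up):
# find the context name that k3s/kubectl is using
kubectl config get-contexts
# scope the repo settings to that context instead of the default one
skaffold config set --kube-context k3s-pi default-repo 192.168.1.10:8080
skaffold config set --kube-context k3s-pi insecure-registries 192.168.1.10:8080
# the values end up under a kubeContexts entry in ~/.skaffold/config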

Related

My pod is always in 'ContainerCreating' state

When I SSH into Minikube and pull the image from Docker Hub, it pulls the image successfully:
$ docker pull mysql:5.7
So I understand that the network is not an issue.
But when I try deploying with the following command, the pod stays in 'ContainerCreating' endlessly.
$ kubectl apply -f my-depl.yaml
#my-depl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-depl
labels:
app: mysql
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:5.7
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3306
volumeMounts:
- mountPath: "/var/lib/mysql"
subPath: "mysql"
name: mysql-data
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-root-password
key: ROOT_PASSWORD
volumes:
- name: mysql-data
persistentVolumeClaim:
claimName: mysql-data-disk
Please let me know if there is anything wrong with the above YAML file, or share any debugging tips that could help pull the image successfully from Docker Hub.
I do not know the reason, but my container was created without any problems when I restarted Minikube.
It would help if someone could explain the reason behind this behavior.
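As a general tip for pods stuck in ContainerCreating, the pod events usually name the cause (image pull, volume mount, pending PVC); a quick sketch:
kubectl describe pod <pod-name>     # check the Events section at the bottom
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl get pvc                     # a Pending claim such as mysql-data-disk also blocks the pod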

Pod does not see secrets

A pod created in the same default namespace as its secret does not see values from it.
The secret's manifest contains the following:
apiVersion: v1
kind: Secret
metadata:
name: backend-secret
data:
SECRET_KEY: <base64 of value>
DEBUG: <base64 of value>
After creating this secret via kubectl create -f backend-secret.yaml, I launch a pod with the following configuration:
apiVersion: v1
kind: Pod
metadata:
name: backend
spec:
containers:
- image: backend
name: backend
ports:
- containerPort: 8000
imagePullSecrets:
- name: dockerhub-credentials
volumes:
- name: secret
secret:
secretName: backend-secret
But the pod crashes when it tries to read this environment variable via Python's os.environ['DEBUG'].
How can I make it work?
If you mount the secret as a volume, it will be mounted into a directory where each key name becomes a file name.
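For example, a minimal sketch of mounting the same backend-secret as files (the mount path /etc/backend is just an example):
containers:
- image: backend
  name: backend
  volumeMounts:
  - name: secret
    mountPath: /etc/backend
    readOnly: true
volumes:
- name: secret
  secret:
    secretName: backend-secret
# inside the container the keys appear as files:
#   /etc/backend/DEBUG
#   /etc/backend/SECRET_KEY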
If you want to access the secret through environment variables in your pod, then you need to reference the secret in an environment variable, like the following.
apiVersion: v1
kind: Pod
metadata:
name: backend
spec:
containers:
- image: backend
name: backend
ports:
- containerPort: 8000
env:
- name: DEBUG
valueFrom:
secretKeyRef:
name: backend-secret
key: DEBUG
- name: SECRET_KEY
valueFrom:
secretKeyRef:
name: backend-secret
key: SECRET_KEY
imagePullSecrets:
- name: dockerhub-credentials
Ref: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables
Finally, I used these lines under Deployment.spec.template.spec.containers:
containers:
- name: backend
image: zuber93/wts_backend
imagePullPolicy: Always
envFrom:
- secretRef:
name: backend-secret
ports:
- containerPort: 8000
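Once the pod is running, something like this should confirm the variables are present (the pod name is whatever kubectl get pods shows):
kubectl exec <backend-pod-name> -- env | grep -E 'DEBUG|SECRET_KEY'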

Chaincode instantiation fails in AWS EKS network

I have a simple network with one orderer and one peer, deployed on AWS Elastic Kubernetes Service (EKS). I created the cluster using the official EKS documentation. I am able to bring up the orderer as well as the peer within the pods. The peer is able to create and join the channel, and the anchor peer update transaction succeeds as well.
High-level configuration for the pods:
Started as StatefulSets
Pods use dynamic PVs with a StorageClass configured for aws-ebs
Exposed using a Service of type LoadBalancer
Uses the Docker-in-Docker (DinD) approach for chaincode
Detailed Configuration for Orderer Pod:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: allparticipants-orderer
labels:
app: allparticipants-orderer
spec:
serviceName: orderer
replicas: 1
selector:
matchLabels:
app: allparticipants-orderer
template:
metadata:
labels:
app: allparticipants-orderer
spec:
containers:
- name: allparticipants-orderer
image: <docker-hub-url>/orderer:0.1
imagePullPolicy: Always
command: ["sh", "-c", "orderer"]
ports:
- containerPort: 7050
env:
- name: FABRIC_LOGGING_SPEC
value: DEBUG
- name: ORDERER_GENERAL_LOGLEVEL
value: DEBUG
- name: ORDERER_GENERAL_LISTENADDRESS
value: 0.0.0.0
- name: ORDERER_GENERAL_GENESISMETHOD
value: file
- name: ORDERER_GENERAL_GENESISFILE
value: /var/hyperledger/orderer/orderer.genesis.block
- name: ORDERER_GENERAL_LOCALMSPID
value: OrdererMSP
- name: ORDERER_GENERAL_LOCALMSPDIR
value: /var/hyperledger/orderer/msp
- name: ORDERER_GENERAL_TLS_ENABLED
value: "false"
- name: ORDERER_GENERAL_TLS_PRIVATEKEY
value: /var/hyperledger/orderer/tls/server.key
- name: ORDERER_GENERAL_TLS_CERTIFICATE
value: /var/hyperledger/orderer/tls/server.crt
- name: ORDERER_GENERAL_TLS_ROOTCAS
value: /var/hyperledger/orderer/tls/ca.crt
volumeMounts:
- name: allparticipants-orderer-ledger
mountPath: /var/ledger
volumeClaimTemplates:
- metadata:
name: allparticipants-orderer-ledger
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: allparticipants-orderer-sc
resources:
requests:
storage: 1Gi
Detailed configuration for Peer along with DIND in the same pod:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: allparticipants-peer0
labels:
app: allparticipants-peer0
spec:
serviceName: allparticipants-peer0
replicas: 1
selector:
matchLabels:
app: allparticipants-peer0
template:
metadata:
labels:
app: allparticipants-peer0
spec:
containers:
- name: docker
env:
- name: DOCKER_TLS_CERTDIR
value:
securityContext:
privileged: true
image: "docker:stable-dind"
ports:
- containerPort: 2375
volumeMounts:
- mountPath: /var/lib/docker
name: dockervolume
- name: allparticipants-peer0
image: <docker-hub-url>/peer0:0.1
imagePullPolicy: Always
command: ["sh", "-c", "peer node start"]
ports:
- containerPort: 7051
env:
- name: CORE_VM_ENDPOINT
value: http://localhost:2375
- name: CORE_PEER_CHAINCODELISTENADDRESS
value: 0.0.0.0:7052
- name: FABRIC_LOGGING_SPEC
value: debug
- name: CORE_LOGGING_PEER
value: debug
- name: CORE_LOGGING_CAUTHDSL
value: debug
- name: CORE_LOGGING_GOSSIP
value: debug
- name: CORE_LOGGING_LEDGER
value: debug
- name: CORE_LOGGING_MSP
value: info
- name: CORE_LOGGING_POLICIES
value: debug
- name: CORE_LOGGING_GRPC
value: debug
- name: CORE_LEDGER_STATE_STATEDATABASE
value: goleveldb
- name: GODEBUG
value: "netdns=go"
- name: CORE_PEER_TLS_ENABLED
value: "false"
- name: CORE_PEER_GOSSIP_USELEADERELECTION
value: "true"
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_PEER_GOSSIP_SKIPHANDSHAKE
value: "true"
- name: CORE_PEER_PROFILE_ENABLED
value: "true"
- name: CORE_PEER_COMMITTER_ENABLED
value: "true"
- name: CORE_PEER_TLS_CERT_FILE
value: /etc/hyperledger/fabric/tls/server.crt
- name: CORE_PEER_TLS_KEY_FILE
value: /etc/hyperledger/fabric/tls/server.key
- name: CORE_PEER_TLS_ROOTCERT_FILE
value: /etc/hyperledger/fabric/tls/ca.crt
- name: CORE_PEER_ID
value: allparticipants-peer0
- name: CORE_PEER_ADDRESS
value: <peer0-load-balancer-url>:7051
- name: CORE_PEER_LISTENADDRESS
value: 0.0.0.0:7051
- name: CORE_PEER_EVENTS_ADDRESS
value: 0.0.0.0:7053
- name: CORE_PEER_GOSSIP_BOOTSTRAP
value: <peer0-load-balancer-url>:7051
- name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
value: <peer0-load-balancer-url>:7051
- name: CORE_PEER_LOCALMSPID
value: AllParticipantsMSP
- name: CORE_PEER_MSPCONFIGPATH
value: <path-to-msp-of-peer0>
- name: ORDERER_ADDRESS
value: <orderer-load-balancer-url>:7050
- name: CORE_PEER_ADDRESSAUTODETECT
value: "true"
- name: CORE_VM_DOCKER_ATTACHSTDOUT
value: "true"
- name: FABRIC_CFG_PATH
value: /etc/hyperledger/fabric
volumeMounts:
- name: allparticipants-peer0-ledger
mountPath: /var/ledger
- name: dockersock
mountPath: /host/var/run/docker.sock
volumes:
- name: dockersock
hostPath:
path: /var/run/docker.sock
- name: dockervolume
persistentVolumeClaim:
claimName: docker-pvc
volumeClaimTemplates:
- metadata:
name: allparticipants-peer0-ledger
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: allparticipants-peer0-sc
resources:
requests:
storage: 1Gi
I am not including the StorageClass and Service configuration for the pods, as they seem to work fine.
So, as stated earlier, the peer is able to create and join the channel, and I am able to install the chaincode on the peer. However, while instantiating the chaincode from within the peer, I get the following error:
Error: error endorsing chaincode: rpc error: code = Unavailable desc = transport is closing
Apart from this, the Docker (DinD) container started within peer0's pod has the following warning logs:
level=warning msg="887409d917bd7ef74ebe8617b8dcbedc8197741662a14b988491c54085e9acfd cleanup: failed to unmount IPC: umount /var/lib/docker/containers/887409d917bd7ef74ebe8617b8dcbedc8197741662a14b988491c54085e9acfd/mounts/shm, flags: 0x2: no such file or directory"
Additionally, there are no logs in this docker container when I submit the following instantiation request:
peer chaincode instantiate -o $ORDERER_ADDRESS -C carchannel -n fabcar
-l node -v 0.1.1 -c '{"Args":[]}' -P "OR ('AllParticipantsMSP.member','AllParticipantsMSP.peer', 'AllParticipantsMSP.admin', 'AllParticipantsMSP.client')"
I tried searching for similar issues but was not able to find one that matches this EKS case.
Is there an issue with the pod configuration or the EKS configuration? I am not able to get past this one. Can someone please point me in the right direction? I am quite new to K8s.
Update 1:
I updated the Service type to LoadBalancer, keeping the rest of the configuration the same. I still get the same error.
Update 2:
Configured the DinD approach for the chaincode container.
Update 3:
Mounted a PV and PVC for the DinD container, and updated the CORE_VM_ENDPOINT access protocol from tcp to http.
The issue here was the image used for the Docker-in-Docker container. I downgraded the image from docker:stable-dind to docker:18-dind. The stable image has TLS enabled by default. I had tried setting the environment variable DOCKER_TLS_CERTDIR to blank, but that did not work out.
Attaching a snippet of the DinD configuration:
containers:
- name: docker
securityContext:
privileged: true
image: "docker:18-dind"
ports:
- containerPort: 2375
protocol: TCP
volumeMounts:
- mountPath: /var/lib/docker
name: dockervolume
Note: While I have been able to work this out, I am not marking this as the accepted answer, since TLS might be required to instantiate the chaincode and there should be a way to use it; I am open to, and looking out for, answers that do.
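For reference, a rough and untested sketch of the TLS-enabled direction: docker:stable-dind listens on 2376 and writes client certificates under DOCKER_TLS_CERTDIR, which could be shared with the peer container through an extra volume; the CORE_VM_DOCKER_TLS_* names below are assumed from the vm.docker.tls section of the peer's core.yaml and should be verified against your Fabric version.
# DinD container: keep TLS on and share /certs with the peer container
- name: DOCKER_TLS_CERTDIR
  value: /certs
# peer container: talk to the daemon over TLS (names assumed, untested)
- name: CORE_VM_ENDPOINT
  value: https://localhost:2376
- name: CORE_VM_DOCKER_TLS_ENABLED
  value: "true"
- name: CORE_VM_DOCKER_TLS_CA_FILE
  value: /certs/client/ca.pem
- name: CORE_VM_DOCKER_TLS_CERT_FILE
  value: /certs/client/cert.pem
- name: CORE_VM_DOCKER_TLS_KEY_FILE
  value: /certs/client/key.pem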

How to convert docker-compose to kubernetes?

I have an example project with nginx, php-fpm, and mysql:
https://gitlab.com/x-team-blog/docker-compose-php
This is the docker-compose file:
version: '3'
services:
php-fpm:
build:
context: ./php-fpm
volumes:
- ../src:/var/www
nginx:
build:
context: ./nginx
volumes:
- ../src:/var/www
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
- ./nginx/sites/:/etc/nginx/sites-available
- ./nginx/conf.d/:/etc/nginx/conf.d
depends_on:
- php-fpm
ports:
- "80:80"
- "443:443"
database:
build:
context: ./database
environment:
- MYSQL_DATABASE=mydb
- MYSQL_USER=myuser
- MYSQL_PASSWORD=secret
- MYSQL_ROOT_PASSWORD=docker
volumes:
- ./database/data.sql:/docker-entrypoint-initdb.d/data.sql
I'd like to use Kubernetes on Google Cloud, and for this I have created several files.
PersistentVolumeClaim database-claim0-persistentvolumeclaim.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: database-claim0
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 400Mi
I also created secrets:
kubectl create secret generic mysql --from-literal=MYSQL_DATABASE=mydb --from-literal=MYSQL_PASSWORD=secret --from-literal=MYSQL_ROOT_PASSWORD=docker --from-literal=MYSQL_USER=myuser
Then I created a service:
apiVersion: v1
kind: Service
metadata:
name: database
labels:
app: database
spec:
type: ClusterIP
ports:
- port: 3306
selector:
app: database
And now I'd like to create database-deployment.yaml, but I don't know what I have to write in the volumeMounts section to copy the SQL file the way docker-compose does:
- ./database/data.sql:/docker-entrypoint-initdb.d/data.sql
database-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: database
labels:
app: database
spec:
replicas: 1
selector:
matchLabels:
app: database
template:
metadata:
labels:
app: database
spec:
containers:
- image: mariadb
name: database
env:
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
name: mysql
key: MYSQL_DATABASE
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql
key: MYSQL_PASSWORD
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql
key: MYSQL_ROOT_PASSWORD
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: mysql
key: MYSQL_USER
ports:
- containerPort: 3306
name: database
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: database-claim0
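For the data.sql line specifically, one common pattern (a sketch; the ConfigMap name database-initdb is made up) is to ship the file as a ConfigMap and mount it at /docker-entrypoint-initdb.d, which the mariadb image reads on first initialization:
kubectl create configmap database-initdb --from-file=data.sql=./database/data.sql
Then add to the container and pod spec above:
volumeMounts:
- name: initdb
  mountPath: /docker-entrypoint-initdb.d
volumes:
- name: initdb
  configMap:
    name: database-initdb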
UPD:
My mistake. I did not push the images first.
Use kompose
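For example, running the converter from the directory that contains the compose file (assuming kompose is installed) looks like this and produces the output below:
$ kompose convert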
INFO Kubernetes file "nginx-service.yaml" created
INFO Kubernetes file "database-deployment.yaml" created
INFO Kubernetes file "database-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "nginx-deployment.yaml" created
INFO Kubernetes file "nginx-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "nginx-claim1-persistentvolumeclaim.yaml" created
INFO Kubernetes file "nginx-claim2-persistentvolumeclaim.yaml" created
INFO Kubernetes file "nginx-claim3-persistentvolumeclaim.yaml" created
INFO Kubernetes file "php-fpm-deployment.yaml" created
INFO Kubernetes file "php-fpm-claim0-persistentvolumeclaim.yaml" created
There is also an online tool that helps convert most of a docker-compose file into Kubernetes resources.

Kubernetes/Openshift: how to use envFile to read from a filesystem file

I've configured two containers in a pod.
The first of them is in charge of creating a file like:
SPRING_DATASOURCE_USERNAME: username
SPRING_DATASOURCE_PASSWORD: password
I want the second container to read from this location in order to initialize its environment variables.
I'm trying to use envFrom, but I can't quite figure out how to use it.
This is my spec:
metadata:
annotations:
configmap.fabric8.io/update-on-change: ${project.artifactId}
labels:
name: wsec
name: wsec
spec:
replicas: 1
selector:
name: wsec
version: ${project.version}
provider: fabric8
template:
metadata:
labels:
name: wsec
version: ${project.version}
provider: fabric8
spec:
containers:
- name: sidekick
image: quay.io/ukhomeofficedigital/vault-sidekick:latest
args:
- -cn=secret:openshift/postgresql:env=USERNAME
env:
- name: VAULT_ADDR
value: "https://vault.vault-sidekick.svc:8200"
- name: VAULT_TOKEN
value: "34f8e679-3fbd-77b4-5de9-68b99217cc02"
volumeMounts:
- name: sidekick-backend-volume
mountPath: /etc/secrets
readOnly: false
- name: wsec
image: ${docker.image}
env:
- name: SPRING_APPLICATION_JSON
valueFrom:
configMapKeyRef:
name: wsec-configmap
key: SPRING_APPLICATION_JSON
envFrom:
???
volumes:
- name: sidekick-backend-volume
emptyDir: {}
It looks like you are packaging the environment variables with the first container and reading them in the second container.
If that's the case, you can use initContainers.
Create a volume mapped to an emptyDir. Mount it on the init container (propertiesContainer) and the main container (springBootAppContainer) as a volumeMount. This directory is then visible to both containers.
initContainers:
- name: properties-container   # the "propertiesContainer" mentioned above
  image: properties/container/path
  command: [bash, -c]
  args: ["cp -r /location/in/properties/container /propsdir"]
  volumeMounts:
  - name: props-dir            # volume names must be lowercase DNS labels
    mountPath: /propsdir
This will put the properties in /propsdir. When the main container starts, it can read the properties from /propsdir.
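One caveat: envFrom only accepts configMapRef and secretRef, so it cannot read an arbitrary file from a volume. With the shared emptyDir approach above, the main container typically sources the file itself at startup; a rough sketch, assuming the init container writes plain KEY=value lines, with a made-up file name and start command:
containers:
- name: wsec
  image: ${docker.image}
  # export everything from the generated file, then start the app
  command: ["sh", "-c", "set -a && . /propsdir/env.properties && set +a && exec java -jar /app.jar"]
  volumeMounts:
  - name: props-dir
    mountPath: /propsdir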