In Google Kubernetes Engine I created a POC cluster for our company, which worked flawlessly. But now, when I try to create our production environment, I cannot seem to get imagePullSecrets to work. It's the exact same credentials as in the POC, the same Helm chart, and the exact same regcred YAML file.
Yet I keep getting the classic:
Back-off pulling image "registry.company.co/frontend/company-web/upload": ImagePullBackOff
Pulling manually on the node works with the same credentials as those I supplied in the imagePullSecret.
I've tried defining the imagePullSecret both at the chart level and on the ServiceAccount.
I've verified the secret format and used the credentials copied directly from it when trying the manual pulls.
GKE picks up regcred and shows it in the deployment
regcred was generated with: kubectl create secret docker-registry regcred --docker-server="registry.company.co" --docker-username="gitlab" --docker-password="[PASSWORD]"
regcred secret
kind: Secret
apiVersion: v1
metadata:
name: regcred
namespace: default
data:
.dockerconfigjson: eyJhdXRocyI6eyJyZWdpc3RyeS5jb21wYW55LmNvIjp7InVzZXJuYW1lIjoiZ2l0bGFiIiwicGFzc3dvcmQiOiJbUkVEQUNURURdIiwiYXV0aCI6IloybDBiR0ZpT2x0QmJITnZJRkpsWkdGamRHVmtYUT09In19fQ==
type: kubernetes.io/dockerconfigjson
Service Account
kind: ServiceAccount
apiVersion: v1
metadata:
name: default
namespace: default
secrets:
- name: default-token-jktj5
imagePullSecrets:
- name: regcred
Deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nfs-server
spec:
replicas: 1
selector:
matchLabels:
role: nfs-server
template:
metadata:
labels:
role: nfs-server
spec:
containers:
- name: nfs-server
image: gcr.io/google_containers/volume-nfs:latest
ports:
- name: nfs
containerPort: 2049
- name: mountd
containerPort: 20048
- name: rpcbind
containerPort: 111
securityContext:
privileged: true
volumeMounts:
- mountPath: /exports
name: mypvc
initContainers:
- name: init-volume-perms
imagePullPolicy: Always
image: alpine
command: ["/bin/sh", "-c"]
args: ["mkdir /mnt/company-logos; mkdir /mnt/uploads; chown -R 1337:1337 /mnt"]
volumeMounts:
- mountPath: /mnt
name: mypvc
- name: company-web-uploads
image: registry.company.co/frontend/company-web/uploads
imagePullPolicy: Always
volumeMounts:
- mountPath: /var/lib/company/web/uploads
subPath: uploads
name: mypvc
- name: company-logos
image: registry.company.co/backend/pdf-service/company-logos
imagePullPolicy: Always
volumeMounts:
- mountPath: /var/lib/company/shared/company-logos
subPath: company-logos
name: mypvc
volumes:
- name: mypvc
gcePersistentDisk:
pdName: gke-nfs-disk
fsType: ext4
I've looked around, following different guides from the ground up, with no success, so I'm at a total loss as to what to do.
The default namespace is used everywhere.
It may be a namespace issue. Can you verify a few things:
Are you using the default namespace in both places?
Is there a Kubernetes version difference between the POC and prod clusters?
Can you recreate the working secret with something like kubectl get secret default-token-jktj5 -o yaml > imagepullsecret.yaml? Edit the YAML file to remove the resourceVersion and other status information, then apply it to prod.
I have seen this issue in GKE caused by multiline secrets being converted to base64 incorrectly. Ensure the secrets match between environments.
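For example, a quick way to compare the pull secret between the two clusters is to decode it in each one and diff the output (a sketch, assuming the secret is named regcred and lives in the default namespace in both clusters):
# Decode the .dockerconfigjson payload of the pull secret in the current context
kubectl get secret regcred -n default -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode
# Run this against both the POC and the prod context and compare the output,
# paying attention to the registry host, username, password, and "auth" fields.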
Related
I'm trying to run a DNS server (Dnsmasq) in a Kubernetes cluster. The cluster has only one node. Everything works fine until I need to restart the dnsmasq container (kubectl rollout restart daemonsets dnsmasq-daemonset) to apply changes made to the hosts ConfigMap. As I found out, this is needed because a Dnsmasq instance that is already running will not otherwise load changes made to the hosts ConfigMap.
As soon as the container is restarted it is not able to pull the dnsmasq image and it fails. This is expected behavior, since it cannot resolve the image name while no other DNS servers are running, but I wonder what the best way around it would be, or what the best practices are for running a DNS server in Kubernetes in general. Is this something that CoreDNS is used for, or what other alternatives are there? Maybe some high-availability solution?
hosts ConfigMap:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: dnsmasq-hosts
namespace: core
data:
hosts: |
127.0.0.1 localhost
10.x.x.x example.com
...
Dnsmasq DaemonSet:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: dnsmasq-daemonset
namespace: core
spec:
selector:
matchLabels:
app: dnsmasq-app
template:
metadata:
labels:
app: dnsmasq-app
namespace: core
spec:
containers:
- name: dnsmasq
image: registry.gitlab.com/path/to/dnsmasqImage:tag
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: "1"
memory: "32Mi"
requests:
cpu: "150m"
memory: "16Mi"
ports:
- name: dns
containerPort: 53
hostPort: 53
protocol: UDP
volumeMounts:
- name: conf-dnsmasq
mountPath: /etc/dnsmasq.conf
subPath: dnsmasq.conf
readOnly: true
- name: dnsconf-dnsmasq
mountPath: /etc/dnsmasq.d/dns.conf
subPath: dns.conf
readOnly: true
- name: hosts-dnsmasq
mountPath: /etc/dnsmasq.d/hosts
subPath: hosts
readOnly: true
volumes:
- name: conf-dnsmasq
configMap:
name: dnsmasq-conf
- name: dnsconf-dnsmasq
configMap:
name: dnsmasq-dnsconf
- name: hosts-dnsmasq
configMap:
name: dnsmasq-hosts
imagePullSecrets:
- name: gitlab-registry-credentials
nodeSelector:
kubernetes.io/hostname: master
restartPolicy: Always
I tried to use imagePullPolicy: Never, but it seems to fail anyway.
I'm trying to deploy Drupal 7 in Kubernetes. It fails with the error: Fatal error: require_once(): Failed opening required '/var/www/html/modules/system/system.install' (include_path='.:/usr/local/lib/php') in /var/www/html/includes/install.core.inc on line 241.
Here is K8S deployment manifest:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: drupal-pvc
annotations:
pv.beta.kubernetes.io/gid: "drupal-gid"
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
name: drupal-service
spec:
ports:
- name: http
port: 80
protocol: TCP
selector:
app: drupal
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: drupal
name: drupal
spec:
replicas: 1
template:
metadata:
labels:
app: drupal
spec:
initContainers:
- name: init-sites-volume
image: drupal:7.72
command: ['/bin/bash', '-c']
args: ['cp -r /var/www/html/sites/ /data/; chown www-data:www-data /data/ -R']
volumeMounts:
- mountPath: /data
name: vol-drupal
containers:
- image: drupal:7.72
name: drupal
ports:
- containerPort: 80
volumeMounts:
- mountPath: /var/www/html/modules
name: vol-drupal
subPath: modules
- mountPath: /var/www/html/profiles
name: vol-drupal
subPath: profiles
- mountPath: /var/www/html/sites
name: vol-drupal
subPath: sites
- mountPath: /var/www/html/themes
name: vol-drupal
subPath: themes
volumes:
- name: vol-drupal
persistentVolumeClaim:
claimName: drupal-pvc
However, when I remove the volumeMounts from the drupal container, it works. I need to use volumes in order to persist the website data. Can anyone suggest a fix?
Update: I have also added the manifest for the persistent volume claim.
Check whether you can write to the mounted volume:
# kubectl exec -it drupal-zxxx -- sh
$ ls -alhtr /var/www/html/modules
$ cd /var/www/html/modules
$ touch test.txt
Storage configured with a group ID (GID) allows writing only by pods using the same GID; mismatched or missing GIDs cause permission denied errors.
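If a GID mismatch turns out to be the problem, one common fix is to set fsGroup in the pod securityContext so the mounted volume becomes group-writable for the containers. A minimal sketch against the manifest above, assuming a GID of 1000 (the actual value depends on your storage setup) and a hypothetical test pod name:
apiVersion: v1
kind: Pod
metadata:
  name: drupal-fsgroup-test   # hypothetical name, just for testing write access
spec:
  securityContext:
    fsGroup: 1000             # assumed GID; supported volumes are made group-owned by this GID on mount
  containers:
  - name: drupal
    image: drupal:7.72
    volumeMounts:
    - name: vol-drupal
      mountPath: /var/www/html/modules
      subPath: modules
  volumes:
  - name: vol-drupal
    persistentVolumeClaim:
      claimName: drupal-pvc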
Alternatively, you could try an operator for Drupal:
https://github.com/geerlingguy/drupal-operator
A Helm chart is another option:
https://bitnami.com/stack/drupal/helm
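If you go the Helm route, installing the Bitnami chart is roughly the following (a sketch, assuming Helm 3 and a release name of my-drupal):
# Add the Bitnami repository and install the Drupal chart
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-drupal bitnami/drupal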
Below is my pod YAML. After deploying it I can access the pod
and I can see the mountPath "/usr/share/nginx/html", but I cannot find
"/work-dir", which should have been created by the init container.
Could someone explain the reason?
Thanks and regards
apiVersion: v1
kind: Pod
metadata:
name: init-demo
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: workdir
mountPath: /usr/share/nginx/html
# These containers are run during pod initialization
initContainers:
- name: install
image: busybox
command:
- wget
- "-O"
- "/work-dir/index.html"
- http://kubernetes.io
volumeMounts:
- name: workdir
mountPath: "/work-dir"
dnsPolicy: Default
volumes:
- name: workdir
emptyDir: {}
The volume is mounted at "/work-dir" by the init container, and that "/work-dir" location only exists in the init container. When the init container completes, its filesystem is gone, so the "/work-dir" directory in that init container is "gone" with it. The application (nginx) container mounts the same volume too, albeit at a different location, providing a mechanism for the two containers to share its content.
Per the docs:
Init containers can run with a different view of the filesystem than
app containers in the same Pod.
The volume mount allows you to share the contents of /work-dir/ and /usr/share/nginx/html/ between the containers, but it does not mean the nginx container will have a /work-dir folder. Given this, you might think you could just mount the path /, which would share all folders underneath; however, a mountPath of / does not work.
So, how do you solve your problem? You could have another container mount /work-dir/ if you actually need the folder. Here is an example (PVC and Deployment with mounts):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: shared-fs-pvc
namespace: default
labels:
mojix.service: default-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: default
name: shared-fs
labels:
mojix.service: shared-fs
spec:
replicas: 1
selector:
matchLabels:
mojix.service: shared-fs
template:
metadata:
labels:
mojix.service: shared-fs
spec:
terminationGracePeriodSeconds: 3
containers:
- name: nginx-c
image: nginx:latest
volumeMounts:
- name: shared-fs-volume
mountPath: /var/www/static/
- name: alpine-c
image: alpine:latest
command: ["/bin/sleep", "10000s"]
lifecycle:
postStart:
exec:
command: ["/bin/mkdir", "-p", "/work-dir"]
volumeMounts:
- name: shared-fs-volume
mountPath: /work-dir/
volumes:
- name: shared-fs-volume
persistentVolumeClaim:
claimName: shared-fs-pvc
I want to change the timezone with a command.
I know how to do it with hostPath.
Do you know how to apply a command instead?
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
This works well inside the container, but I don't know how to apply it with command and arguments in the YAML file.
You can change the timezone of your pod by using a hostPath volume that mounts a specific timezone file over /etc/localtime. Your YAML file will look something like this:
apiVersion: v1
kind: Pod
metadata:
name: busybox-sleep
spec:
containers:
- name: busybox
image: busybox
args:
- sleep
- "1000000"
volumeMounts:
- name: tz-config
mountPath: /etc/localtime
volumes:
- name: tz-config
hostPath:
path: /usr/share/zoneinfo/Europe/Prague
type: File
If you want it across all pods or deployments, you need to add the volume and volumeMounts to every deployment file and change the path value in the hostPath section to the timezone you want to set.
Setting the TZ environment variable as below works fine for me on GCP Kubernetes.
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: demo
spec:
replicas: 1
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: demo
image: gcr.io/project/image:master
imagePullPolicy: Always
env:
- name: TZ
value: Europe/Warsaw
dnsPolicy: ClusterFirst
restartPolicy: Always
terminationGracePeriodSeconds: 0
In a Deployment, you can do it by creating a volumeMount at /etc/localtime and setting its value. Here is an example I have for a MariaDB deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: mariadb
spec:
replicas: 1
template:
metadata:
labels:
app: mariadb
spec:
containers:
- name: mariadb
image: mariadb
ports:
- containerPort: 3306
name: mariadb
env:
- name: MYSQL_ROOT_PASSWORD
value: password
volumeMounts:
- name: tz-config
mountPath: /etc/localtime
volumes:
- name: tz-config
hostPath:
path: /usr/share/zoneinfo/Europe/Madrid
In order to add a "hostPath" volume to the deployment config, as suggested in previous answers, you'll need to be a privileged user. Otherwise your deployment may fail with:
"hostPath": hostPath volumes are not allowed to be used
As a workaround you can try one of these options:
1. Add allowedHostPaths next to volumes in the pod security policy that governs your deployment (see the sketch after this list).
2. Add a TZ environment variable, for example TZ=Asia/Jerusalem.
(Option 2 is similar to running docker exec -it openshift/origin /bin/bash -c "export TZ='Asia/Jerusalem' && /bin/bash".)
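For option 1, a rough sketch of what such a pod security policy could look like (the PodSecurityPolicy API was policy/v1beta1 and has since been removed from Kubernetes; on OpenShift the equivalent is an SCC; the policy name and path prefix here are assumptions):
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: allow-zoneinfo-hostpath
spec:
  # Permit hostPath volumes, restricted to read-only mounts under /usr/share/zoneinfo
  volumes:
  - hostPath
  allowedHostPaths:
  - pathPrefix: /usr/share/zoneinfo
    readOnly: true
  # Required fields, left permissive here; tighten to match your cluster policy
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny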
For me, setting up volumes and volumeMounts didn't help; setting the TZ environment variable alone worked in my case.
In a simple Postgres Deployment, I wish to choose the volume dependent on the namespace. The aim is to use the same Deployment configuration file to create Postgres deployments in different namespaces (e.g. production/staging).
What ways are there to achieve this?
Below is my configuration file; I basically want to make MAKE_THIS_DEPENDENT_ON_NAMESPACE depend on the environment (or namespace) this Deployment is used in.
kind: Deployment
metadata:
name: postgres
labels:
app: postgres
spec:
template:
metadata:
labels:
app: postgres
spec:
containers:
- image: postgres:9.6
name: postgres
volumeMounts:
- name: postgres-persistent-storage
mountPath: /var/lib/postgresql
volumes:
- name: postgres-persistent-storage
gcePersistentDisk:
pdName: MAKE_THIS_DEPENDENT_ON_NAMESPACE
You should try using a PersistentVolumeClaim instead; PVCs are namespaced.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes
kind: Pod
apiVersion: v1
metadata:
name: mypod
spec:
containers:
- name: myfrontend
image: dockerfile/nginx
volumeMounts:
- mountPath: "/var/www/html"
name: mypd
volumes:
- name: mypd
persistentVolumeClaim:
claimName: myclaim
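For completeness, a claim the pod above could bind to might look like the sketch below. Since PVCs are namespaced, you would create an equivalent claim in each environment's namespace (e.g. staging, production), each backed by its own disk or storage class; the namespace and size here are assumptions:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim          # matches the claimName referenced by the pod above
  namespace: staging     # create the same-named claim in each environment's namespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi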