Kubernetes cluster not pulling images created by Skaffold

This is my skaffold file:
apiVersion: skaffold/v1
kind: Config
metadata:
  name: app-skaffold
build:
  artifacts:
  - image: myappservice
    context: api-server
deploy:
  helm:
    releases:
    - name: myapp
      chartPath: chart/myapp
And in my Helm templates folder I have only one manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  labels:
    app: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: apiserver
        image: myappservice
        ports:
        - containerPort: 5050
        env:
        - name: a-key
          valueFrom:
            secretKeyRef:
              name: secret-key
              key: secret-key-value
But every time I run:
$ skaffold dev
and check my pods' status with $ kubectl get pods, I get ErrImagePull statuses.
This started when I added Helm to the stack; it was working before using only kubectl.
In my deploy section in my skaffold.yaml file, I had:
deploy:
  kubectl:
    manifests:
    - ./k8s-manifests/*.yaml
And it was working fine. The only thing I did was move the manifest file into the templates folder of my Helm chart and change the skaffold.yaml file as shown above.
What am I missing?
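For reference, a minimal sketch of how the skaffold/v1 helm deployer is usually told which built image to hand to the chart; the values key name image and the matching {{ .Values.image }} reference in the template are assumptions, not part of my setup above:
deploy:
  helm:
    releases:
    - name: myapp
      chartPath: chart/myapp
      values:
        image: myappservice
and in the chart's Deployment template the container image would then be referenced as:
        image: "{{ .Values.image }}"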

I ran into a registry issue where all my images suddenly disappeared after an ingress config change and Skaffold (1.0.0) couldn't load anything. The only way I could fix it was by deleting my entire cluster and re-creating it.
This probably won't help, but it's worth a shot.

Related

Kubernetes: Validation Error(Deployment.spec.template.spec.container[0]): unknown field "ConfigMapref" in io.k8s.api.core.v1.Container

I am doing my first deployment in Kubernetes. I've hosted my API in my namespace and it's up and running, so I tried to connect the API to MongoDB and added my database details in a ConfigMap via Rancher.
I tried to reference the DB config in my deployment YAML file but got an error stating Unknown Field - ConfigMapref.
Below is my deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfistproject
  namespace: Owncloud
spec
  replicas: 2
  selector:
    matchLables:
      app: myfirstproject
      version: 1.0.0
  template:
    metadata:
      labels:
        app: myfirstproject
        version: 1.0.0
    spec:
      containers:
      - name: myfirstproject
        image: **my image repo location**
        imagePullPolicy: always
        ports:
        - containerPort: 80
        configMapRef:
        - name: myfirstprojectdb # This is the name of the config map created via rancher
The myfirstprojectdb ConfigMap stores all the details like the database name, username, password, etc.
On executing the pipeline I get the error below.
How do I need to refer to my ConfigMap in the deployment YAML?
Validation Error(Deployment.spec.template.spec.container[0]): unknown field "ConfigMapref" in io.k8s.api.core.v1.Container
There are some more typos (e.g. the missing : after spec, and Always should be capitalized). Also, indentation should be consistent throughout the whole YAML file - see yaml indentation and separation.
I corrected your YAML so it passes the API server's check and added the ConfigMap reference (assuming it contains environment variables):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfistproject
  namespace: Owncloud
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myfirstproject
      version: 1.0.0
  template:
    metadata:
      labels:
        app: myfirstproject
        version: 1.0.0
    spec:
      containers:
      - name: myfirstproject
        image: **my image repo location**
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: myfirstprojectdb
Useful link:
Configure all key-value pairs in a ConfigMap as container environment variables, which is related to this question.
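For illustration, a minimal sketch of what such a ConfigMap could look like; the key names below are hypothetical, and with envFrom each key becomes an environment variable of the same name inside the container:
apiVersion: v1
kind: ConfigMap
metadata:
  name: myfirstprojectdb
data:
  # hypothetical keys - use whatever your application actually reads
  DB_HOST: mongodb.example.local
  DB_NAME: myfirstprojectdb
  DB_USER: appuser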

How to copy a local file into a helm deployment

I'm trying to deploy several pods in Kubernetes using a Mongo image with an initialization script in them. I'm using Helm for the deployment. Since I'm starting from the official Mongo Docker image, I'm trying to add a script at /docker-entrypoint-initdb.d so it will be executed right at startup to initialize some parameters of my Mongo.
What I don't know is how I can insert my script, which is, let's say, on my local machine, into /docker-entrypoint-initdb.d using Helm.
I'm trying to do something like docker run -v hostfile:mongofile, but I need the equivalent in Helm, so this will be done in all the pods of the deployment.
You can use a ConfigMap. Let's put an nginx configuration file into a container via a ConfigMap as an example. We have a directory called nginx at the same level as values.yml, and inside it we have the actual configuration file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-file
  labels:
    app: ...
data:
  nginx.conf: |-
{{ .Files.Get "nginx/nginx.conf" | indent 4 }}
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: SomeDeployment
  ...
spec:
  replicas:
  selector:
    matchLabels:
      app: ...
      release: ...
  template:
    metadata:
      labels:
        app: ...
        release: ...
    spec:
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-config-file
          items:
          - key: nginx.conf
            path: nginx.conf
      containers:
      - name: ...
        image: ...
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
You can also check out the initContainers concept at this link:
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
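Applied to the Mongo question above, the same pattern would look roughly like this; the script name init-mongo.js and the scripts/ folder inside the chart are assumptions:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-init-script
data:
  init-mongo.js: |-
{{ .Files.Get "scripts/init-mongo.js" | indent 4 }}
and in the pod spec of the Mongo deployment, the ConfigMap is mounted into the init directory that the official image scans on first startup:
    spec:
      volumes:
      - name: mongo-init
        configMap:
          name: mongo-init-script
      containers:
      - name: mongo
        image: mongo
        volumeMounts:
        - name: mongo-init
          mountPath: /docker-entrypoint-initdb.d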

Kubernetes "kubectl apply" does not update existing deployments

I have a .NET Core web application. Its image is pushed to an Azure Container Registry. I deploy it to my Azure Kubernetes Service using
kubectl apply -f testdeployment.yaml
with the YAML file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: mycontainerregistry.azurecr.io/myweb:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: my-registry-key
This works splendidly, but when I change some code, push a new image to the registry and run
kubectl apply -f testdeployment.yaml
again, the AKS website does not get updated until I remove the deployment with
kubectl delete deployment myweb
What should I do to make it overwrite whatever is deployed? I would like to add something to my YAML file. (I'm trying to use this for continuous delivery in Azure DevOps.)
I believe what you are looking for is imagePullPolicy. The default is IfNotPresent, which means the latest version will not be pulled if an image is already present on the node.
https://kubernetes.io/docs/concepts/containers/images/
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: mycontainerregistry.azurecr.io/myweb
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: my-registry-key
To ensure that the pod is recreated, rather run:
kubectl delete -f testdeployment.yaml && kubectl apply -f testdeployment.yaml
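An aside, not from the answer above: on kubectl 1.15 or later, a rolling restart bounces the pods without deleting the Deployment object, which avoids the downtime of the delete-and-apply cycle:
kubectl rollout restart deployment/myweb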
kubectl does not see any changes in your deployment YAML file, so it will not make any changes. That's one of the problems with using the latest tag.
Tag your image with an incremental version or build number and replace latest with that tag in your CI pipeline (for example with envsubst or similar). This way kubectl knows the image has changed, and you also know which version of the image is running. The latest tag could point to any image version.
Simplified example for Azure DevOps:
# <snippet>
image: mycontainerregistry.azurecr.io/myweb:${TAG}
# </snippet>
Pipeline YAML:
stages:
- stage: Build
  jobs:
  - job: Build
    variables:
    - name: TAG
      value: $(Build.BuildId)
    steps:
    - script: |
        envsubst '${TAG}' < deployment-template.yaml > deployment.yaml
      displayName: Replace Environment Variables
Alternatively you could also use another tool like Replace Tokens (different syntax: #{TAG}#).
First delete the deployment by running the command below from the relative path of the deployment file:
kubectl delete -f .\deployment-file-name.yaml
Earlier I used to get
deployment.apps/deployment-file-name unchanged
meaning the previously applied deployment configuration was still in effect.
This happens while you're fixing errors/typos in the deployment YAML: the old config sticks around even after the error is cleared.
Only a kubectl delete -f .\deployment-file-name.yaml removed that stale state for me.
Afterwards you can deploy again with
kubectl apply -f .\deployment-file-name.yaml
Sample YAML file as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-file-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
      - name: platformservice
        image: /platformservice:latest

How to fetch configmap from kubernetes pod

I have one Spring Boot microservice running in a Docker container; below is the Dockerfile:
FROM java:8-jre
MAINTAINER <>
WORKDIR deploy/
#COPY config/* /deploy/config/
COPY ./ms.console.jar /deploy/
CMD chmod +R 777 ./ms.console.jar
CMD ["java","-jar","/deploy/ms.console.jar","console"]
EXPOSE 8384
Here my configuration is stored in an external folder, i.e. /config/console-server.yml, and when I start the application it loads the config internally (Spring Boot functionality).
Now I want to separate this configuration out using a ConfigMap. For that I simply created one ConfigMap storing all the configuration details:
kubectl create configmap console-configmap --from-file=./config/console-server.yml
kubectl describe configmap console-configmap
Below are the description details:
Name: console-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
console-server.yml:
----
server:
  http:
    port: 8385
  compression:
    enabled: true
    mime-types: application/json,application/xml,text/html,text/xml,text/plain,text/css,application/javascript
    min-response-size: 2048
---
spring:
  thymeleaf:
    prefix: classpath:/static
  application:
    name: console-service
  profiles:
    active: native
  servlet:
    multipart:
      max-file-size: 30MB
      max-request-size: 30MB
---
host:
  gateway: http://apigateway:4000
  webhook: http://localhost:9000
My deployment YAML is:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
      - name: consoleservice
        image: ms-console
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: console-configmap
      imagePullSecrets:
      - name: regcresd
My doubt is: I commented out the config folder in the Dockerfile, so when the pods run they throw an exception because there is no configuration. How do I inject this console-configmap into my deployment? I have already shared what I tried, but I am getting the same issues.
First of all, how are you consuming the .yml file in your application? If you consume your YAML file's contents as environment variables, your config should work fine. But I suspect that you want to consume the contents as a config file inside the container. If that is the case, you have to create a volume out of the ConfigMap as follows:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
      - name: consoleservice
        image: ms-console
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /app/config
          name: config
      volumes:
      - name: config
        configMap:
          name: console-configmap
      imagePullSecrets:
      - name: regcresd
The file will be available at the path /app/config/console-server.yml. You have to modify the mount path as per your needs.
If you need to load the key/value pairs from the config file as environment variables, then the spec below would work:
envFrom:
- configMapRef:
    name: console-configmap
If you need the config as a file inside the pod, then mount the ConfigMap as a volume. The following link would be helpful:
https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/

Gitlab CI - K8s - Deployment

Just going through this guide on GitLab and K8s (gitlab-k8s-cd), but my build keeps failing on this part:
- kubectl delete secret registry.gitlab.com
- kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=<my_username> --docker-password=$REGISTRY_PASSWD --docker-email=<my_email>
Although I am not entirely sure what password is needed for --docker-password, I have created an API token in GitLab for my user and I am using that in the secure variables.
This is the error:
$ gcloud container clusters get-credentials deployment
Fetching cluster endpoint and auth data.
kubeconfig entry generated for deployment.
$ kubectl delete secret registry.gitlab.com
Error from server: secrets "registry.gitlab.com" not found
ERROR: Build failed: exit code 1
Any help would be much appreciated, thanks.
EDIT
Since the initial post, removing the initial kubectl delete secret and re-building worked, so it was failing on deleting a secret that did not exist yet.
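One way to make that step idempotent, assuming the rest of the script stays the same, is to tell kubectl to tolerate a missing secret:
- kubectl delete secret registry.gitlab.com --ignore-not-found
- kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=<my_username> --docker-password=$REGISTRY_PASSWD --docker-email=<my_email>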
Second Edit
Having problems with my deployment.yml for K8s; could anyone shed any light on why I am getting this error:
error validating "deployment.yml": error validating data: field spec.template.spec.containers[0].ports[0]: expected object of type map[string]interface{},
With this yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app>
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: <app>
    spec:
      containers:
      - name: deployment
        image: registry.gitlab.com/<username>/<app>
        imagePullPolicy: Always
        ports:
        - "80:8080"
        env:
        - name: PORT
          value: "8080"
      imagePullSecrets:
      - name: registry.gitlab.com
And this error:
error validating "deployment.yml": error validating data: found invalid field imagePullSecrets for v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
With this yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app>
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: <app>
    spec:
      containers:
      - name: <app>
        image: registry.gitlab.com/<project>/<app>
        imagePullPolicy: Always
        ports:
        - "80:8080"
        env:
        - name: PORT
          value: "8080"
      imagePullSecrets:
      - name: registry.gitlab.com
Latest YAML
apiVersion: v1
kind: Service
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  ports:
  - port: 80
  selector:
    app: <app_name>
    tier: frontend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: <app_name>
        tier: frontend
    spec:
      containers:
      - image: registry.gitlab.com/<project>/<app>
        imagePullPolicy: Always
        name: <app_name>
        env:
        - name: PORT
          value: "8080"
        imagePullSecrets:
        - name: registry.gitlab.com
        ports:
        - containerPort: 8080
          hostPort: 80
Regarding your first error:
Ports are defined differently in Kubernetes than in Docker or Docker Compose. This is what the port specification should look like:
ports:
- containerPort: 8080
  hostPort: 80
See the reference for more information.
Regarding your second error:
According to the reference on PodSpecs, the imagePullSecrets property is correctly placed in your example. However, judging from the error message, it seems that you actually included the imagePullSecrets property in the ContainerSpec, not the PodSpec.
The YAML in your question seems to be correct in this case. Make sure that your actual manifest matches the example from your question and that you did not accidentally indent the imagePullSecrets property more than necessary.
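For illustration, a minimal sketch of the intended placement, with imagePullSecrets aligned with containers under the pod spec rather than nested inside the container (names taken from the question):
    spec:
      containers:
      - name: <app_name>
        image: registry.gitlab.com/<project>/<app>
      imagePullSecrets:
      - name: registry.gitlab.com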
This is the working YAML file for K8s:
apiVersion: v1
kind: Service
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  ports:
  - port: 80
  selector:
    app: <app_name>
    tier: frontend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: <app_name>
        tier: frontend
    spec:
      containers:
      - image: registry.gitlab.com/<project>/<app>:latest
        imagePullPolicy: Always
        name: <app_name>
        env:
        - name: PORT
          value: "8080"
        ports:
        - containerPort: 8080
          hostPort: 80
      imagePullSecrets:
      - name: registry.gitlab.com
This is the working gitlab-ci file as well:
image: docker:latest
services:
- docker:dind
variables:
  DOCKER_DRIVER: overlay
stages:
- package
- deploy
docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/<project>/<app> .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/<project>/<app>
k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone <zone>
    - gcloud config set project <project>
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials <container-name>
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=<username> --docker-password=$REGISTRY_PASSWD --docker-email=<user-email>
    - kubectl apply -f deployment.yml
I just need to work out how to alter the script to allow for rolling back.
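An aside, not part of the original script: Kubernetes Deployments keep a rollout history, so a rollback step would typically be a single kubectl command; the deployment name below is assumed.
kubectl rollout undo deployment/<app_name>
# or roll back to a specific revision:
kubectl rollout undo deployment/<app_name> --to-revision=2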