GitLab CI - K8s - Deployment

I'm just going through this guide on GitLab and K8s (gitlab-k8s-cd), but my build keeps failing on this part:
- kubectl delete secret registry.gitlab.com
- kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=<my_username> --docker-password=$REGISTRY_PASSWD --docker-email=<my_email>
I am not entirely sure what password is needed for --docker-password; I have created an API token in GitLab for my user and am using that in the secure variables.
This is the error:
$ gcloud container clusters get-credentials deployment
Fetching cluster endpoint and auth data.
kubeconfig entry generated for deployment.
$ kubectl delete secret registry.gitlab.com
Error from server: secrets "registry.gitlab.com" not found
ERROR: Build failed: exit code 1
Any help would be much appreciated, thanks.
EDIT
Since the initial post, removing the initial kubectl delete secret and re-building worked, so the job was failing when trying to delete a secret that did not yet exist.
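A way to make that step idempotent (a sketch using kubectl's --ignore-not-found flag, so the job no longer fails when the secret does not exist yet):

```yaml
script:
- kubectl delete secret registry.gitlab.com --ignore-not-found
- kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=<my_username> --docker-password=$REGISTRY_PASSWD --docker-email=<my_email>
```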
Second Edit
I'm having problems with my deployment.yml for K8s; could anyone shed any light on why I am getting this error:
error validating "deployment.yml": error validating data: field spec.template.spec.containers[0].ports[0]: expected object of type map[string]interface{},
With this yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app>
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: <app>
    spec:
      containers:
      - name: deployment
        image: registry.gitlab.com/<username>/<app>
        imagePullPolicy: Always
        ports:
        - "80:8080"
        env:
        - name: PORT
          value: "8080"
      imagePullSecrets:
      - name: registry.gitlab.com
And this error:
error validating "deployment.yml": error validating data: found invalid field imagePullSecrets for v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
With this yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app>
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: <app>
    spec:
      containers:
      - name: <app>
        image: registry.gitlab.com/<project>/<app>
        imagePullPolicy: Always
        ports:
        - "80:8080"
        env:
        - name: PORT
          value: "8080"
      imagePullSecrets:
      - name: registry.gitlab.com
Latest YAML
apiVersion: v1
kind: Service
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  ports:
  - port: 80
  selector:
    app: <app_name>
    tier: frontend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: <app_name>
        tier: frontend
    spec:
      containers:
      - image: registry.gitlab.com/<project>/<app>
        imagePullPolicy: Always
        name: <app_name>
        env:
        - name: PORT
          value: "8080"
        ports:
        - containerPort: 8080
          hostPort: 80
      imagePullSecrets:
      - name: registry.gitlab.com

Regarding your first error:
Ports are defined differently in Kubernetes than in Docker or Docker Compose. This is what the port specification should look like:
ports:
- containerPort: 8080
  hostPort: 80
See the reference for more information.
Regarding your second error:
According to the reference on PodSpecs, the imagePullSecrets property is correctly placed in your example. However, from the error message it seems that you actually included the imagePullSecrets property in the ContainerSpec, not the PodSpec.
The YAML in your question seems correct in this case. Make sure that your actual manifest matches the example from your question and that you did not accidentally indent the imagePullSecrets property more than necessary.
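For illustration (a sketch, not from the original post), the difference is a single indentation level - imagePullSecrets must be a sibling of containers on the PodSpec, not a field of a container:

```yaml
spec:
  containers:
  - name: <app>
    image: registry.gitlab.com/<project>/<app>
    # placing imagePullSecrets here, inside the container entry,
    # produces the "invalid field imagePullSecrets for v1.Container" error
  imagePullSecrets:   # correct: aligned with containers, on the PodSpec
  - name: registry.gitlab.com
```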

This is the working YAML file for K8s:
apiVersion: v1
kind: Service
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  ports:
  - port: 80
  selector:
    app: <app_name>
    tier: frontend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <app_name>
  labels:
    app: <app_name>
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: <app_name>
        tier: frontend
    spec:
      containers:
      - image: registry.gitlab.com/<project>/<app>:latest
        imagePullPolicy: Always
        name: <app_name>
        env:
        - name: PORT
          value: "8080"
        ports:
        - containerPort: 8080
          hostPort: 80
      imagePullSecrets:
      - name: registry.gitlab.com
This is the working .gitlab-ci.yml file:
image: docker:latest
services:
- docker:dind

variables:
  DOCKER_DRIVER: overlay

stages:
- package
- deploy

docker-build:
  stage: package
  script:
  - docker build -t registry.gitlab.com/<project>/<app> .
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker push registry.gitlab.com/<project>/<app>

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
  - echo "$GOOGLE_KEY" > key.json
  - gcloud auth activate-service-account --key-file key.json
  - gcloud config set compute/zone <zone>
  - gcloud config set project <project>
  - gcloud config set container/use_client_certificate True
  - gcloud container clusters get-credentials <container-name>
  - kubectl delete secret registry.gitlab.com
  - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=<username> --docker-password=$REGISTRY_PASSWD --docker-email=<user-email>
  - kubectl apply -f deployment.yml
I just need to work out how to alter the script to allow for rolling back.
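A rollback could be sketched as a manual CI job (an assumption, reusing the deployment name from the manifest above; kubectl rollout undo reverts the Deployment to its previous revision):

```yaml
k8s-rollback:
  image: google/cloud-sdk
  stage: deploy
  when: manual
  script:
  - echo "$GOOGLE_KEY" > key.json
  - gcloud auth activate-service-account --key-file key.json
  - gcloud container clusters get-credentials <container-name>
  # revert to the previous ReplicaSet revision and wait for it to settle
  - kubectl rollout undo deployment/<app_name>
  - kubectl rollout status deployment/<app_name>
```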

Related

TensorFlow Setting model_config_file runtime argument in YAML file for K8s

I've been having a hell of a time trying to figure out how to serve multiple models using a YAML configuration file for K8s.
I can run it directly in Bash using the following, but am having trouble converting it to YAML.
docker run -p 8500:8500 -p 8501:8501 \
[container id] \
--model_config_file=/models/model_config.config \
--model_config_file_poll_wait_seconds=60
I read that model_config_file can be added using a command element, but I'm not sure where to put it, and I keep receiving errors about invalid commands or not being able to find the file.
command:
- '--model_config_file=/models/model_config.config'
- '--model_config_file_poll_wait_seconds=60'
A sample YAML config for K8s is below; where would the command go, referencing the docker run command above?
---
apiVersion: v1
kind: Namespace
metadata:
  name: model-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tensorflow-test-rw-deployment
  namespace: model-test
spec:
  selector:
    matchLabels:
      app: rate-predictions-server
  replicas: 1
  template:
    metadata:
      labels:
        app: rate-predictions-server
    spec:
      containers:
      - name: rate-predictions-container
        image: aws-ecr-path
        command:
        - --model_config_file=/models/model_config.config
        - --model_config_file_poll_wait_seconds=60
        ports:
        #- grpc: 8500
        - containerPort: 8500
        - containerPort: 8501
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: rate-predictions-service
  name: rate-predictions-service
  namespace: model-test
spec:
  type: ClusterIP
  selector:
    app: rate-predictions-server
  ports:
  - port: 8501
    targetPort: 8501
What you are passing seems to be the arguments, not the command. The command should be set as the entrypoint in the container, and arguments should be passed in args. Please see the following link.
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
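Applied to the manifest above, this means replacing command with args, so the flags are appended to the image's entrypoint (a sketch; it assumes the image's entrypoint is the TensorFlow Serving binary, as in the official tensorflow/serving image):

```yaml
containers:
- name: rate-predictions-container
  image: aws-ecr-path
  # command: would override the image's entrypoint entirely;
  # args: passes these flags to the existing entrypoint instead
  args:
  - --model_config_file=/models/model_config.config
  - --model_config_file_poll_wait_seconds=60
```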

How to deploy a specific image using skaffold buildpacks?

I would like to deploy a specific image in Kubernetes using skaffold buildpacks. Everything is fine for the build, but the deployment to Kubernetes fails because skaffold didn't use my Docker Hub ID as a prefix; only skaffold-buildpacks is passed to the silent kubectl command.
apiVersion: skaffold/v2beta21
kind: Config
build:
  artifacts:
  - image: systemdevformations/skaffold-buildpacks
    buildpacks:
      builder: "gcr.io/buildpacks/builder:v1"
      trustBuilder: true
      env:
      - GOPROXY={{.GOPROXY}}
profiles:
- name: gcb
  build:
    googleCloudBuild: {}
DEBU[0027] Running command: [kubectl --context kubernetes-admin@kubernetes create --dry-run=client -oyaml -f /home/ubuntu/skaffold/examples/buildpacks/k8s/web.yaml] subtask=-1 task=DevLoop
DEBU[0027] Command output: [apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  ports:
  - name: http
    port: 8080
  selector:
    app: web
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: skaffold-buildpacks
        name: web
        ports:
        - containerPort: 8080
Kubernetes auto-generated script doesn't use docker hub image prefix
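One likely fix (an assumption, since the full web.yaml is not shown): skaffold only rewrites image references in manifests that exactly match an artifact's image name, so the Deployment should reference the full artifact name rather than the bare suffix:

```yaml
# k8s/web.yaml (sketch): the image must match the artifact name
# systemdevformations/skaffold-buildpacks for skaffold to substitute
# the fully qualified, tagged image it built
containers:
- image: systemdevformations/skaffold-buildpacks
  name: web
```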

Kubernetes: Pod is not created after applying deployment

I have a problem with Kubernetes on my local machine. I want to create a pod with a database, so I prepared a deployment file with a service.
apiVersion: v1
kind: Service
metadata:
  name: bid-service-db
  labels:
    app: bid-service-db
    tier: database
spec:
  ports:
  - name: "5432"
    port: 5432
    targetPort: 5432
  selector:
    app: bid-service-db
    tier: database
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bid-service-db
  labels:
    app: bid-service-db
    tier: database
spec:
  selector:
    matchLabels:
      app: bid-service-db
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: bid-service-db
        tier: database
    spec:
      containers:
      - env:
        - name: POSTGRES_DB
          value: mydb
        - name: POSTGRES_PASSWORD
          value: password
        - name: POSTGRES_USER
          value: postgres
        image: postgres:13
        imagePullPolicy: Never
        name: bid-service-db
        ports:
        - containerPort: 5432
          name: bid-service-db
        resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
      - name: postgres-persistance-storage
        persistentVolumeClaim:
          claimName: bid-service-db-volume
status: {}
I am applying this file with k apply -f bid-db-deployment.yaml. k get all returns that only the service was created; the pod was not started. What can I do in this case? How can I troubleshoot it?
If you didn't get any errors on the apply, you can get the failure reason with:
kubectl describe deployment/DEPLOYMENT_NAME
Also, you can take only the Deployment part, put it in a separate YAML file, and see if you get errors.
Since restarting the cluster worked for you, a good idea next time would be to check the logs of the kube-api and kube-controller pods using the command:
kubectl logs -n kube-system <kube-api/controller_pod_name>
To get the list of your deployments in all namespaces, you can use the command:
kubectl get deployments -A

unable to recognize "deployment.yml": yaml: line 3: mapping values are not allowed in this context

I have set up a GitLab CI/CD pipeline which builds and deploys Docker images to Kubernetes. I'm using YAML-based deployment to Kubernetes. When I run the pipeline, the gitlab-runner always throws "unable to recognize yaml line 3: mapping values are not allowed in this context", but when I run it directly using kubectl create -f deployment.yaml, it works correctly.
Here are the first few lines of the YAML file. I have already validated the YAML formatting. The error is thrown at line 3.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: configserver
  name: configserver
spec:
  ports:
  - name: http
    port: 8888
  selector:
    app: configserver
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: configserver
  name: configserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: configserver
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: configserver
    spec:
      containers:
      - image: config-server:latest
        name: configserver
        ports:
        - containerPort: 8888
        resources: {}
      restartPolicy: Always
Is this something to do with GitLab?
Thanks.
EDIT:
Here's the relevant part of my .gitlab-ci.yml
stages:
- build
- deploy

build:
  stage: build
  script:
  - mvn clean install -DskipTests
  - docker-compose -f docker-compose-istio.yml build
  - docker-compose -f docker-compose-istio.yml push

deploy:
  stage: deploy
  script:
  - kubectl apply -f itp-ms-deploy.yml
  - kubectl apply -f itp-ms-gateway.yml
  - kubectl apply -f itp-ms-autoscale.yml
  when: manual
  only:
  - master

How to fetch configmap from kubernetes pod

I have one Spring Boot microservice running in a Docker container; below is the Dockerfile
FROM java:8-jre
MAINTAINER <>
WORKDIR deploy/
#COPY config/* /deploy/config/
COPY ./ms.console.jar /deploy/
RUN chmod -R 777 ./ms.console.jar
CMD ["java","-jar","/deploy/ms.console.jar","console"]
EXPOSE 8384
Here my configuration is stored in an external folder, i.e. /config/console-server.yml, and when I start the application, it loads the config internally (Spring Boot functionality).
Now I want to separate this configuration using a ConfigMap; for that I simply created one ConfigMap storing all the configuration details.
kubectl create configmap console-configmap --from-file=./config/console-server.yml
kubectl describe configmap console-configmap
Below are the description details:
Name:         console-configmap
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
console-server.yml:
----
server:
  http:
    port: 8385
  compression:
    enabled: true
    mime-types: application/json,application/xml,text/html,text/xml,text/plain,text/css,application/javascript
    min-response-size: 2048
---
spring:
  thymeleaf:
    prefix: classpath:/static
  application:
    name: console-service
  profiles:
    active: native
  servlet:
    multipart:
      max-file-size: 30MB
      max-request-size: 30MB
---
host:
  gateway: http://apigateway:4000
  webhook: http://localhost:9000
My deployment YAML is:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
      - name: consoleservice
        image: ms-console
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: console-configmap
      imagePullSecrets:
      - name: regcresd
My doubt is: I commented out the config folder in the Dockerfile, so while running the pods it throws an exception because there is no configuration. How do I inject this console-configmap into my deployment? I've already shared what I tried, but I'm getting the same issues.
First of all, how are you consuming the .yml file in your application? If you consume the YAML file's contents as environment variables, your config should just work fine. But I suspect that you want to consume the contents of the config file inside the container. If that is the case, you have to create a volume from the ConfigMap as follows:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
      - name: consoleservice
        image: ms-console
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /app/config
          name: config
      volumes:
      - name: config
        configMap:
          name: console-configmap
      imagePullSecrets:
      - name: regcresd
The file will be available in the path /app/config/console-server.yml. You have to modify it as per your needs.
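Alternatively, since the commented-out COPY in the Dockerfile placed the config under /deploy/config/ and the app loads ./config/console-server.yml relative to WORKDIR deploy/, mounting the ConfigMap at that original path (a sketch, assuming the app still looks there) avoids changing the application at all:

```yaml
# volumeMounts goes inside the container entry, volumes on the pod spec
volumeMounts:
- mountPath: /deploy/config   # same path the commented-out COPY used
  name: config
volumes:
- name: config
  configMap:
    name: console-configmap
```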
If you need to load key-value pairs from the config file as environment variables, then the below spec would work:
envFrom:
- configMapRef:
    name: console-configmap
If you need the config as a file inside the pod, then mount the ConfigMap as a volume. The following link should be helpful:
https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/