how to use "kubectl apply -f <file.yaml> --force=true" making an impact over a deployed container EXEC console? - kubernetes

I am trying to redeploy the exact same existing image after changing a secret in the Azure Vault. Because the image is unchanged, kubectl apply does not redeploy it. I tried to force the deploy by adding the --force=true option. The deploy now takes place and the new secret value is visible in the ConfigMap in the dashboard, but not in the environment of the API container's kubectl exec console.
Below is one of the 3 deployment manifests (YAML files) for the service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tube-api-deployment
  namespace: tube
spec:
  selector:
    matchLabels:
      app: tube-api-app
  replicas: 3
  template:
    metadata:
      labels:
        app: tube-api-app
    spec:
      containers:
      - name: tube-api
        image: ReplaceImageName
        ports:
        - name: tube-api
          containerPort: 80
        envFrom:
        - configMapRef:
            name: tube-config-map
      imagePullSecrets:
      - name: ReplaceRegistrySecret
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
  namespace: tube
spec:
  ports:
  - name: api-k8s-port
    protocol: TCP
    port: 8082
    targetPort: 3000
  selector:
    app: tube-api-app

I think it is not happening because when we update a ConfigMap, the files in all the volumes referencing it are updated; it is then up to the process in the pod's container to detect that they have changed and reload them. There is currently no built-in way to signal an application when a new version of a ConfigMap is deployed. It is up to the application (or some helper script) to watch for the config files to change and reload them.
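One caveat, since the manifest above injects the ConfigMap through envFrom rather than a volume: environment variables are only read once, when the container starts, so an updated ConfigMap will never appear in a running container's environment, which matches what the kubectl exec console shows. A minimal sketch of forcing the pods to be recreated so they pick up the new values, assuming the deployment name and namespace from the manifest above:

kubectl rollout restart deployment/tube-api-deployment -n tube

Any mechanism that replaces the pods (scaling to zero and back, or changing a pod-template annotation) would work equally well.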

Related

Google Cloud Kubernetes Engine "gke-deployment" not found while running github actions

I'm trying to deploy my code to GKE using GitHub Actions. When I run the action it says that the service and deployment were created, but then it gives an error, and the deployment it created in the cloud has an error as well.
My deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1
  labels:
    type: nginx # <-- correct
spec:
  selector:
    matchLabels:
      type: nginx # incorrect, remove the '-'
  template:
    metadata:
      labels:
        type: nginx # incorrect, remove the '-'
    spec:
      containers:
      - image: nginx:1.14
        name: renderer
        ports:
        - containerPort: 80
My service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-3-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
I just want to deploy my C++ code to Kubernetes Engine so I can experiment with it. I'm trying to push an Ubuntu 20.04 image to europe-west1.
This question says that I must change the zone to fix the cloud deployment's error, but I'm not sure whether that will fix my problem, and I don't know how to properly change it.
So apparently, for anyone facing this issue: you have to change the DEPLOYMENT_NAME variable inside your google.yaml file to your deployment's name, so that it is the same as metadata: name: "---" inside your deployment.yaml. In my case I changed DEPLOYMENT_NAME to nginx-1 to fix the issue.
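For illustration, the variable typically lives in the env block near the top of the workflow file; the exact layout of google.yaml here is an assumption:

# hypothetical excerpt from google.yaml (the GitHub Actions workflow)
env:
  DEPLOYMENT_NAME: nginx-1 # must equal metadata.name in deployment.yaml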

TensorFlow Setting model_config_file runtime argument in YAML file for K8s

I've been having a hell of a time trying to figure out how to serve multiple models using a YAML configuration file for K8s.
I can run it directly in Bash using the following, but I'm having trouble converting it to YAML.
docker run -p 8500:8500 -p 8501:8501 \
[container id] \
--model_config_file=/models/model_config.config \
--model_config_file_poll_wait_seconds=60
I read that model_config_file can be set using a command element, but I'm not sure where to put it, and I keep receiving errors about invalid commands or not being able to find the file.
command:
- '--model_config_file=/models/model_config.config'
- '--model_config_file_poll_wait_seconds=60'
Sample YAML config for K8s is below; where would the command go, referencing the docker run command above?
---
apiVersion: v1
kind: Namespace
metadata:
  name: model-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tensorflow-test-rw-deployment
  namespace: model-test
spec:
  selector:
    matchLabels:
      app: rate-predictions-server
  replicas: 1
  template:
    metadata:
      labels:
        app: rate-predictions-server
    spec:
      containers:
      - name: rate-predictions-container
        image: aws-ecr-path
        command:
        - --model_config_file=/models/model_config.config
        - --model_config_file_poll_wait_seconds=60
        ports:
        #- grpc: 8500
        - containerPort: 8500
        - containerPort: 8501
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: rate-predictions-service
  name: rate-predictions-service
  namespace: model-test
spec:
  type: ClusterIP
  selector:
    app: rate-predictions-server
  ports:
  - port: 8501
    targetPort: 8501
What you are passing seems to be the arguments, not the command. The command field overrides the container's entrypoint, while arguments should be passed in args. Please see the following link.
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
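Applied to the manifest above, that means dropping command and using args, so the image's default entrypoint (the TensorFlow Serving binary) is preserved and the flags are appended to it. A minimal sketch of just the container section under that assumption:

containers:
- name: rate-predictions-container
  image: aws-ecr-path
  # args are appended to the image's entrypoint, mirroring the flags
  # passed after the container id in the docker run command above
  args:
  - --model_config_file=/models/model_config.config
  - --model_config_file_poll_wait_seconds=60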

Docker Desktop error converting YAML to JSON while trying to deploy the voting app

I am using Docker Desktop to run the voting app. I am following the tutorial, but the link in the command line is deprecated:
kubectl apply -f https://raw.githubusercontent.com/docker/docker-birthday/master/resources/kubernetes-docker-desktop/vote.yaml
So I tried to use the link from this repo :
kubectl apply -f https://github.com/dockersamples/docker-fifth-birthday/blob/master/kubernetes-desktop/kube-deployment.yml
But this error keeps on popping :
error: error parsing https://github.com/dockersamples/docker-fifth-birthday/blob/master/kubernetes-desktop/kube-deployment.yml: error converting YAML to JSON: YAML: line 92: mapping values are not allowed in this context
---
apiVersion: v1
kind: Service
metadata:
  name: result
  labels:
    app: result
spec:
  type: LoadBalancer
  ports:
What am I doing wrong?
I tried to fetch the file to my local machine and apply it, but got the same error at line 92, using wget https://github.com/dockersamples/docker-fifth-birthday/blob/master/kubernetes-desktop/kube-deployment.yml. However, when I just did a copy/paste of the content, it created the services fine, but there are 2 issues with the project (there is also a note on fetching the raw file after the manifests below).
The apiVersion in the deployment is apps/v1beta1; it needs to be apps/v1 as per the documentation: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
There are places where the selectors have not been specified in the deployments, which is why the deployments are not getting created; you will need to fix that. To elaborate, the selector in a deployment's spec section has to match the labels of its pod template, and the service's selector has to match those same pod labels. Below is a working version of the service/deployment from the project mentioned.
As to why you would do that: every deployment runs a set of pods. It maintains a set of identical pods, ensuring that they have the correct config and that the right number are running. To access these pods you expose a service, and the service looks up the pods based on these labels.
If you are looking for learning material, you can check the official documentation below.
https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
  name: redis
spec:
  clusterIP: None
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
  selector:
    app: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:alpine
        ports:
        - containerPort: 6379
          name: redis
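As a side note on the original parse error: a github.com/.../blob/... URL returns the repository's HTML page rather than the raw YAML, which is the usual cause of a "mapping values are not allowed in this context" error at some arbitrary line. Fetching the raw file should parse cleanly; the URL below is derived from the blob URL above, assuming the file still exists on master:

kubectl apply -f https://raw.githubusercontent.com/dockersamples/docker-fifth-birthday/master/kubernetes-desktop/kube-deployment.yml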

kubernetes service getting auto deleted after each deployment

We are facing unexpected behavior in Kubernetes. When we run the command:
kubectl apply -f my-service_deployment.yml
We have noticed that the existing service associated with the same pod gets deleted automatically. Also, when we apply the deployment file, instead of reporting that the deployment was configured (as it is already running), the output says that the deployment was created. Is there some problem here?
Also, we have sometimes noticed that the service is recreated with a different timestamp than the one we created it with, and with a different IP.
What could be the reasons for this unexpected behavior of the service?
Note: we have noticed that there is another pod and service running in the same cluster, with pod name "my-stage-my-service" and service name my-stage-service-v1. Will this have any impact?
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: myacr/myservice:v1-dev
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: my-az-secret
Service file:
apiVersion: v1
kind: Service
metadata:
  name: my-stage-service
spec:
  selector:
    app: my-service
  ports:
  - protocol: TCP
    port: 8880
    targetPort: 8080
  type: LoadBalancer

How to create multiple instances of Mediawiki in a Kubernetes Cluster

I'm about to deploy multiple Mediawiki instances on my Kubernetes cluster.
In my case the YAML deployment file for the DB (MySQL) works as it is supposed to, and the deployment file for Mediawiki deploys as many pods as expected, but I can't access them from outside of the cluster even if I create a Service for this case.
If I create one single Mediawiki pod and a service to access it from outside of the cluster, it works as it should. If I create a deployment file for Mediawiki equal to the one for MySQL, it does create the pods and the required service, but it is not accessible from the external IP assigned to it.
My deployment file for Mediawiki:
apiVersion: v1
kind: Service
metadata:
  name: mediawiki-service
  labels:
    name: mediawiki-service
    app: mediawiki
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: mediawiki-pod
    app: mediawiki
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mediawiki
spec:
  replicas: 6
  selector:
    matchLabels:
      app: mediawiki
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mediawiki
    spec:
      containers:
      - image: mediawiki
        name: mediawiki
        ports:
        - containerPort: 80
          name: mediawiki
This is the pod-definition file:
apiVersion: v1
kind: Pod
metadata:
  name: mediawiki-pod
  labels:
    name: mediawiki-pod
    app: mediawiki
spec:
  containers:
  - name: mediawiki
    image: mediawiki
    ports:
    - containerPort: 80
This is the service-definition file:
apiVersion: v1
kind: Service
metadata:
  name: mediawiki-service
  labels:
    name: mediawiki-service
    app: mediawiki
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: mediawiki-pod
The actual result should be that I can deploy multiple instances of Mediawiki on my cluster and access them from outside with the external IP.
If you look at kubectl describe service mediawiki-service in both scenarios, I expect you will see that in the single-pod case, there is an Endpoints: list that includes a single IP address (the pod's, but that's an implementation detail) but in the deployment case, it says <none>.
Your Service only matches pods that have both name and app labels:
apiVersion: v1
kind: Service
spec:
  selector:
    name: mediawiki-pod
    app: mediawiki
But the pods deployed by your deployment only have app labels:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      labels:
        app: mediawiki
So at that specific point (the labels inside the template for the deployment; also adding them at the top level doesn't hurt, but this embedded point is what's important) you need to add the second label name: mediawiki-pod.
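Concretely, the pod template in the deployment would carry both labels so that the service's two-label selector matches; a minimal sketch of just the affected section:

# inside the Deployment's spec
template:
  metadata:
    labels:
      app: mediawiki
      name: mediawiki-pod # added so the Service selector matches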
If you want to deploy multiple instances of some piece of software on a Kubernetes cluster, it is a good idea to check whether there is a Helm chart for it.
In your case the answer is positive: there is a stable Helm chart for Mediawiki.
Creating multiple instances is as easy as creating multiple releases, for example:
helm install --name wiki1 stable/mediawiki
helm install --name wiki2 stable/mediawiki
helm install --name wiki3 stable/mediawiki
To use Helm you have to install it on your local machine and on the k8s cluster; following the quick start guide will be enough.