Replicaset or Deployment with multiple template specifications - kubernetes

Is it possible to create a ReplicaSet/Deployment with multiple template specifications - say I had one template specification for the logical group "app = ui, rel = stable" and another template specification for "app = as, rel = stable"?
Is it possible to create a ReplicaSet/Deployment targeting "rel = stable" - to target all the pods with the label "rel = stable"?
Please see the attached pic for more details
Credits: Kubernetes in Action
Update 1 - adding more details. I am aware of Deployments to some extent, but wanted to know if this is possible, and if not, how I can achieve it.
The requirement is to have a single deployment that manages different types of pods.
Please see the YAML file below for reference. Please ignore the image names, ports, etc.; those are just dummy values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    rel: stable
spec:
  selector:
    matchLabels:
      rel: stable
  template:
    metadata:
      labels:
        rel: stable
    spec:
      containers:
      - name: uipod
        image: ui
        ports:
        - containerPort: 80
  template:
    metadata:
      labels:
        rel: stable
    spec:
      containers:
      - name: aspod
        image: as
        ports:
        - containerPort: 81
  template:
    metadata:
      labels:
        rel: stable
    spec:
      containers:
      - name: pcpod
        image: pc
        ports:
        - containerPort: 82
  template:
    metadata:
      labels:
        rel: stable
    spec:
      containers:
      - name: scpod
        image: sc
        ports:
        - containerPort: 83

"manages all the templates ( Pods ) that had the label " rel = stable""
I don't exactly what you mean with, but it not possible create a deployment to manage other deployments.
You can create a deployment file with as many pods you want, but if to separate them you need to use external script/kubectl command to manage all of them.
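For example, here is a minimal sketch of that layout (reusing the dummy names and images from the question): one Deployment per pod type, each carrying the shared rel=stable label, so a Service or a selector such as kubectl get pods -l rel=stable can address all of the pods at once:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ui
      rel: stable
  template:
    metadata:
      labels:
        app: ui
        rel: stable
    spec:
      containers:
      - name: uipod
        image: ui           # dummy image name, as in the question
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: as-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: as
      rel: stable
  template:
    metadata:
      labels:
        app: as
        rel: stable
    spec:
      containers:
      - name: aspod
        image: as
        ports:
        - containerPort: 81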

Related

I have one deployment.yaml file; when I try to deploy it in Kubernetes with the command kubectl apply -f, it throws a resource-not-found error

I am unable to deploy this file by using the
kubectl apply -f command.
(The deployment YAML was posted as an image.)
I have provided the YAML file required for your deployment below. It is important that all the lines are indented correctly. Hyphens (-) indicate a list item, so they are not required on every line.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc-deployment
  namespace: abc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: abc-deployment
  template:
    metadata:
      labels:
        app: abc-deployment
    spec:
      containers:
      - name: abc-deployment
        image: anyimage
        ports:
        - containerPort: 80
        env:
        - name: APP_VERSION
          value: v1
        - name: ENVIRONMENT
          value: "123"
        - name: DATA
          valueFrom:
            configMapKeyRef:
              name: abc-configmap
              key: data
        imagePullPolicy: IfNotPresent
      restartPolicy: Always
      imagePullSecrets:
      - name: abc-secret
As a side note, the way envFrom was used is incorrect. It must be within the container's env section, formatted as in the example above (see the DATA env variable).
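For reference, the DATA variable above assumes a ConfigMap like the following exists in the abc namespace (a sketch; the value is a placeholder, but the key must match the configMapKeyRef key):
apiVersion: v1
kind: ConfigMap
metadata:
  name: abc-configmap
  namespace: abc
data:
  data: "some-value"   # placeholder value; the key name matches configMapKeyRef's key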
If you are using Visual Studio Code, there is an official Kubernetes extension from Microsoft that provides Intellisense (suggestions) and alerts you to errors.
Hope this helps.

Kubernetes: Validation Error(Deployment.spec.template.spec.container[0]): unknown field "ConfigMapref" in io.k8s.api.core.v1.Container

I am doing my first deployment in Kubernetes. I've hosted my API in my namespace and it's up and running, so I tried to connect my API to MongoDB and added my database details in ConfigMaps via Rancher.
I tried to invoke the DB in my deployment YAML file but got an error stating Unknown Field - ConfigMapref.
Below is my deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfistproject
  namespace: Owncloud
spec
  replicas: 2
  selector:
    matchLables:
      app: myfirstproject
      version: 1.0.0
  template:
    metadata:
      labels:
        app: myfirstproject
        version: 1.0.0
    spec:
      containers:
      - name: myfirstproject
        image: **my image repo location**
        imagePullPolicy: always
        ports:
        - containerPort: 80
        configMapRef:
        - name: myfirstprojectdb # This is the name of the config map created via rancher
The myfirstprojectdb ConfigMap stores all the details, like the database name, username, password, etc.
On executing the pipeline I get the error below.
How do I need to reference my ConfigMap in the deployment YAML?
Validation Error(Deployment.spec.template.spec.container[0]): unknown field "ConfigMapref" in io.k8s.api.core.v1.Container
There are some more typos (e.g. the missing : after spec, and Always should be capitalized). Also, indentation should be consistent in the whole YAML file - see yaml indentation and separation.
I corrected your YAML so it passes the API server's check and added the ConfigMap reference (assuming it contains env variables):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfistproject
  namespace: Owncloud
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myfirstproject
      version: 1.0.0
  template:
    metadata:
      labels:
        app: myfirstproject
        version: 1.0.0
    spec:
      containers:
      - name: myfirstproject
        image: **my image repo location**
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: myfirstprojectdb
Useful link:
Configure all key-value pairs in a ConfigMap as container environment variables, which is related to this question.
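For completeness, a sketch of what the referenced ConfigMap could look like (key names and values are illustrative; with envFrom, each key becomes an environment variable, and real credentials belong in a Secret rather than a ConfigMap):
apiVersion: v1
kind: ConfigMap
metadata:
  name: myfirstprojectdb
  namespace: Owncloud
data:
  DB_NAME: mydb                 # illustrative key/value pairs
  DB_HOST: mongodb.example.com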

kubernetes set env variable

My requirement: inside the pod there is a file at
location: /mnt/secrets-store/environment
In the Kubernetes manifest file I would like to set an environment variable whose value comes from that flat file.
Please share your thoughts on how to achieve that.
I have tried the option below in the k8s YAML file, but it is not working:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-api
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-api
  template:
    metadata:
      labels:
        app: sample-api
        aadpodidbinding: azure-pod-identity-binding-selector
    spec:
      containers:
      - name: sample-api
        image: sample.azurecr.io/sample:11129
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        env:
        - name: "ASPNETCORE_ENVIRONMENT"
          value: "Kubernetes"
        - name: NRIA_DISPLAY_NAME
          value: $("/usr/bin/cat" "/mnt/secrets-store/environment")
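Kubernetes does not execute shell commands inside env: values, so the $(...) expression above reaches the container as a literal string. A common workaround (a sketch only, assuming the image has /bin/sh; /path/to/your/app is a placeholder for the image's real entrypoint) is to read the file in the container command before starting the application:
containers:
- name: sample-api
  image: sample.azurecr.io/sample:11129
  command: ["/bin/sh", "-c"]
  # Read the file at startup, export the variable, then exec the real entrypoint.
  args:
  - export NRIA_DISPLAY_NAME="$(cat /mnt/secrets-store/environment)" && exec /path/to/your/app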

How to make "kubectl apply -f <file.yaml> --force=true" take effect in a deployed container's exec console?

I am trying to redeploy the exact same existing image, but after changing a secret in the Azure Vault. Since it is the same image, kubectl apply doesn't redeploy it. I tried to make the deploy happen by adding a --force=true option. Now the deploy took place and the new secret value is visible in the dashboard config map, but not in the environment of the API container's kubectl exec console prompt.
Below is one of the 3 deploy manifest (YAML file) for the service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tube-api-deployment
  namespace: tube
spec:
  selector:
    matchLabels:
      app: tube-api-app
  replicas: 3
  template:
    metadata:
      labels:
        app: tube-api-app
    spec:
      containers:
      - name: tube-api
        image: ReplaceImageName
        ports:
        - name: tube-api
          containerPort: 80
        envFrom:
        - configMapRef:
            name: tube-config-map
      imagePullSecrets:
      - name: ReplaceRegistrySecret
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
  namespace: tube
spec:
  ports:
  - name: api-k8s-port
    protocol: TCP
    port: 8082
    targetPort: 3000
  selector:
    app: tube-api-app
I think it is not happening because when we update a ConfigMap, the files in all the volumes referencing it are updated. It’s then up to the pod container process to detect that they’ve been changed and reload them. Currently, there is no built-in way to signal an application when a new version of a ConfigMap is deployed. It is up to the application (or some helper script) to look for the config files to change and reload them.
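Note that environment variables injected via envFrom are read only once, when the container starts, so a changed ConfigMap never shows up in a running pod's environment at all. A common way to force the pods to pick up the new values (available since kubectl 1.15) is to restart the rollout, which recreates the pods:
kubectl rollout restart deployment tube-api-deployment -n tube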

How to create multiple instances of Mediawiki in a Kubernetes Cluster

I'm about to deploy multiple MediaWiki instances on my Kubernetes cluster.
In my case the YAML deployment file for the DB (MySQL) works as it is supposed to, and the deployment file for MediaWiki deploys as many pods as expected, but I can't access them from outside of the cluster even if I create a Service for this case.
If I try to create one single MediaWiki pod and a service to access it from outside of the cluster, it works as it should. If I try to create a deployment file for MediaWiki equal to the one for MySQL, it does create the pods and the required service, but it's not accessible from the external IP assigned to it.
My deploymentfile for Mediawiki:
apiVersion: v1
kind: Service
metadata:
  name: mediawiki-service
  labels:
    name: mediawiki-service
    app: mediawiki
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: mediawiki-pod
    app: mediawiki
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mediawiki
spec:
  replicas: 6
  selector:
    matchLabels:
      app: mediawiki
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mediawiki
    spec:
      containers:
      - image: mediawiki
        name: mediawiki
        ports:
        - containerPort: 80
          name: mediawiki
This is the pod-definition file:
apiVersion: v1
kind: Pod
metadata:
  name: mediawiki-pod
  labels:
    name: mediawiki-pod
    app: mediawiki
spec:
  containers:
  - name: mediawiki
    image: mediawiki
    ports:
    - containerPort: 80
This is the service-definition file:
apiVersion: v1
kind: Service
metadata:
  name: mediawiki-service
  labels:
    name: mediawiki-service
    app: mediawiki
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: mediawiki-pod
The actual result should be that I can deploy multiple instances of MediaWiki on my cluster and can access them from outside with the external IP.
If you look at kubectl describe service mediawiki-service in both scenarios, I expect you will see that in the single-pod case, there is an Endpoints: list that includes a single IP address (the pod's, but that's an implementation detail) but in the deployment case, it says <none>.
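Illustratively (sketched output, not captured from a real cluster), the Deployment case would show something like:
$ kubectl describe service mediawiki-service
Name:       mediawiki-service
Selector:   app=mediawiki,name=mediawiki-pod
Endpoints:  <none>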
Your Service only matches pods that have both name and app labels:
apiVersion: v1
kind: Service
spec:
  selector:
    name: mediawiki-pod
    app: mediawiki
But the pods deployed by your deployment only have app labels:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      labels:
        app: mediawiki
So at that specific point (the labels inside the template for the deployment; adding them at the top level doesn't hurt, but this embedded point is what's important) you need to add the second label, name: mediawiki-pod.
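Concretely, the pod template in the Deployment would then look like this (only the labels change):
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      labels:
        name: mediawiki-pod
        app: mediawiki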
If you want to deploy multiple instances of some piece of software on a Kubernetes cluster, it's a good idea to check whether there is a Helm chart for it.
In your case the answer is positive: there is a stable Helm chart for MediaWiki.
Creating multiple instances is as easy as creating multiple releases, for example:
helm install --name wiki1 stable/mediawiki
helm install --name wiki2 stable/mediawiki
helm install --name wiki3 stable/mediawiki
To use Helm you have to install it on your local machine and on the k8s cluster; following the quick start guide will be enough.