We have multiple environments such as dev, qa, preprod, etc., and a namespace per environment. Right now we name each service with the environment as a suffix, e.g.:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-order-service-develop-deployment
  namespace: dev
  labels:
    k8s-app: k8s-order-service-develop
spec:
  selector:
    matchLabels:
      k8s-app: k8s-order-service-develop
Can I instead use the following manifest in all namespaces? That is, is a Deployment name only required to be unique per namespace?
In the dev environment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-order-service-deployment
  namespace: dev
  labels:
    k8s-app: k8s-order-service
spec:
  selector:
    matchLabels:
      k8s-app: k8s-order-service
In the qa environment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-order-service-deployment
  namespace: qa
  labels:
    k8s-app: k8s-order-service
spec:
  selector:
    matchLabels:
      k8s-app: k8s-order-service
Remove the namespace (and the environment suffix) from the Deployment definition and save it as deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-order-service-deployment
  labels:
    k8s-app: k8s-order-service
spec:
  selector:
    matchLabels:
      k8s-app: k8s-order-service
Then you can deploy it into a specific namespace with the command below:
kubectl create -f deploy.yaml -n <namespace-name>
For example:
kubectl create -f deploy.yaml -n dev
kubectl create -f deploy.yaml -n qa
You can look at Kustomize for more options and flexibility; a minimal overlay layout is sketched below.
This way you can use the same deployment file for different environments, and each environment is isolated from the others.
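For example, a minimal Kustomize layout (the directory and file names here are only illustrative) could keep a single base deployment and let each overlay set just the namespace:
base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deploy.yaml
overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
resources:
- ../../base
Then kubectl apply -k overlays/dev (or kustomize build overlays/dev | kubectl apply -f -) deploys the same manifest into the dev namespace.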
You can definitely create the same Deployment in different namespaces. Just be careful not to update a Deployment in the wrong environment/namespace; showing the current namespace as part of your shell prompt may be useful.
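For example, you can pin kubectl to a namespace for the current context, so every command runs against the intended environment by default (tools such as kube-ps1 can then show it in the prompt):
kubectl config set-context --current --namespace=dev
kubectl config view --minify -o jsonpath='{..namespace}'
The second command prints which namespace the current context points to.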
I'd like to do a rollout restart using Node.js, but I can't find the relevant API call for:
kubectl rollout restart deployment/myApp
Could you please help?
P.S.: I'd like to avoid scaling to 0 and then back to N.
This is usually not something you should have to do. But what you can do is add or update an annotation in the template: part of the Deployment; this will trigger a new rollout.
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        dummy: hello # this is a new annotation
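Note that kubectl rollout restart does essentially the same thing under the hood: it sets a kubectl.kubernetes.io/restartedAt annotation on the pod template. So the same effect can be achieved with a patch from any API client, including the Kubernetes Node.js client, without scaling to 0. A sketch using kubectl patch (the timestamp value is just a placeholder):
kubectl patch deployment my-app -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"2021-01-01T00:00:00Z"}}}}}'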
I would like to set the name field in a Namespace resource and also replace the namespace field in a Deployment resource with the same value, for example my-namespace.
Here is kustomization.json:
namespace: <NAMESPACE>
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
- patch: |-
    - op: replace
      path: /metadata/name
      value: <NAMESPACE>
  target:
    kind: Namespace
    name: system
    version: v1
resources:
- manager.yaml
and manager.yaml:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    control-plane: controller-manager
  name: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  replicas: 1
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - command:
        - /manager
        args:
        - --enable-leader-election
        image: controller:latest
        name: manager
I tried using kustomize edit set namespace my-namespace && kustomize build, but it only changes the namespace field in the Deployment object.
Is there a way to change both fields without using sed, in 'pure' kustomize, and without having to manually change the value in kustomization.json?
I managed to achieve something similar with the following configuration:
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
resources:
- deployment.yaml
deployment.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
And here is the output of the command that you used:
kustomize edit set namespace my-namespace7 && kustomize build .
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace7
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: my-namespace7
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
What is happening here is that once you set the namespace globally in kustomization.yaml, it is applied to all of your targets, which looks to me like an easier way to achieve what you want.
I cannot test your config without the manager_patch.yaml content. If you wish to pursue your approach further, you will have to update the question with the file content.
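Once the build output looks right, it can be applied directly, e.g.:
kustomize build . | kubectl apply -f -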
So I updated the manifest and replaced apiVersion: extensions/v1beta1 with apiVersion: apps/v1:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secretmanager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: secretmanager
  template:
    metadata:
      labels:
        app: secretmanager
    spec:
      ...
I then applied the change
k apply -f deployment.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/secretmanager configured
I also tried
k replace --force -f deployment.yaml
That recreated the pod (downtime :( ), but if I output the YAML config of the deployment I still see the old value:
k get deployments -n kube-system secretmanager -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment",
      "metadata":{"annotations":{},"name":"secretmanager","namespace":"kube-system"}....}
  creationTimestamp: "2020-08-21T21:43:21Z"
  generation: 2
  name: secretmanager
  namespace: kube-system
  resourceVersion: "99352965"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/secretmanager
  uid: 3d49aeb5-08a0-47c8-aac8-78da98d4c342
spec:
So I still see this apiVersion: extensions/v1beta1
What am I doing wrong?
I am preparing an EKS Kubernetes v1.15 cluster to be migrated to v1.16.
The Deployment kind exists in multiple API groups, so it is ambiguous. Try specifying, e.g., apps/v1 with:
kubectl get deployments.v1.apps
and you should see your Deployment, but with the apps/v1 API group.
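For example, for the Deployment from the question:
kubectl get deployments.v1.apps -n kube-system secretmanager -o yaml
should show it with apiVersion: apps/v1.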
In Kubernetes 1.8, when I create a deployment, for example:
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Then when I do a
kubectl get deploy nginx-deployment -o yaml
I got
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-01-24T01:01:01Z
  ....
Why is the apiVersion extensions/v1beta1 instead of apps/v1beta2?
When you create a deployment, the apiserver persists it and is capable of converting the persisted deployment into any supported version.
kubectl get deployments actually requests the extensions/v1beta1 version (you can see this by adding --v=6)
To get apps/v1beta2 deployments, do kubectl get deployments.v1beta2.apps
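For example, for the Deployment above (assuming the names from the manifest earlier in the question):
kubectl get deployments nginx-deployment --v=6
kubectl get deployments.v1beta2.apps nginx-deployment -o yaml
The first command's verbose output shows which API path kubectl actually requested; the second asks explicitly for the apps/v1beta2 representation.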
You might be using an old version of kubectl. If so, please upgrade your kubectl to 1.8, then create the deployment again.
My understanding of this doc page is that I can configure service accounts for Pods, and hopefully also for Deployments, so I can access the k8s API in Kubernetes 1.6+. In order not to alter or use the default one, I want to create a service account and mount its certificate into the pods of a deployment.
How do I achieve something similar to the following example, but for a Deployment?
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
As a Deployment also contains a podSpec (under spec.template), you should be able to configure the service account in the same way. Something like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    # Below is the podSpec.
    metadata:
      name: ...
    spec:
      serviceAccountName: build-robot
      automountServiceAccountToken: false
      ...
A Kubernetes nginx-deployment.yaml where serviceAccountName: test-sa is used as a non-default service account.
Link: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
test-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-sa
  namespace: test-ns
nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: test-ns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells the deployment to run 1 pod matching the template
  template: # create pods using the pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      serviceAccountName: test-sa
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
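To double-check that the pods created by this Deployment really run under the account, something like the following (assuming the namespace and labels above) should print test-sa:
kubectl -n test-ns get pods -l app=nginx -o jsonpath='{.items[0].spec.serviceAccountName}'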