Node.js & Kubernetes: How to do a deployment rollout restart

I'd like to do a rollout restart using Node.js, but I can't find the relevant API:
kubectl rollout restart deployment/myApp
Could you please help?
P.S.: I'd like to avoid scaling to 0 and then back to N.

This is usually nothing you should have to do. But what you can do is add or update an annotation in the template: part of the Deployment; this will trigger a new rollout.
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        dummy: hello # this is a new annotation
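The same trick works from Node.js. Below is a minimal sketch using the official @kubernetes/client-node package; the deployment name and namespace come from the question, and the annotation key is the one kubectl rollout restart itself sets. The positional-argument list of patchNamespacedDeployment varies slightly between client versions (this follows the pre-1.0 style), so check the version you have installed.

// Sketch: trigger a rollout restart by patching the pod-template
// annotation, mirroring what `kubectl rollout restart` does (it sets
// `kubectl.kubernetes.io/restartedAt` on spec.template).
const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const apps = kc.makeApiClient(k8s.AppsV1Api);

async function rolloutRestart(name, namespace) {
  const patch = {
    spec: {
      template: {
        metadata: {
          annotations: {
            'kubectl.kubernetes.io/restartedAt': new Date().toISOString(),
          },
        },
      },
    },
  };
  // The content type must be set explicitly so the API server treats
  // the body as a strategic-merge patch.
  await apps.patchNamespacedDeployment(
    name,
    namespace,
    patch,
    undefined, // pretty
    undefined, // dryRun
    undefined, // fieldManager
    undefined, // force
    { headers: { 'Content-Type': 'application/strategic-merge-patch+json' } }
  );
}

rolloutRestart('myApp', 'default').catch(console.error);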

Related

Kubernetes - How to deploy Filebeat on Kubernetes?

I would like to know how I can deploy a basic Filebeat pod on Kubernetes.
I need to configure a .yaml file but I don't know what I need to specify:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: Filebeat
  labels:
    app: Filebeat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: ???
        image: ???
Try deploying the Filebeat component with the official Helm chart; it makes the app easy to deploy and maintain (upgrade, change configuration).
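For example (repo URL and chart name as published by Elastic; pin a version if you need reproducibility):
helm repo add elastic https://helm.elastic.co
helm install filebeat elastic/filebeat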
By the way, if you decide to deploy with a custom YAML, the current version of the Filebeat Docker image is 8.0.0, so your YAML example would look like this:
spec:
  containers:
  - name: "filebeat"
    image: "docker.elastic.co/beats/filebeat:8.0.0-SNAPSHOT"
If you check filebeat-kubernetes.yaml from https://github.com/elastic/beats/blob/master/deploy/kubernetes/filebeat-kubernetes.yaml, you will see values already prepared for you:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.0.0
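You can also apply that manifest straight from the repository (this is the raw URL for the file linked above):
kubectl apply -f https://raw.githubusercontent.com/elastic/beats/master/deploy/kubernetes/filebeat-kubernetes.yaml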

How can I set kubectl scale deployment in the deployment file?

After setting up my Kubernetes cluster on GCP, I used the command kubectl scale deployment superappip --replicas=30 from the Google console to scale my deployments, but what should be added to my deployment file myip-service.yaml to do the same?
The following is an example of a Deployment. It creates a ReplicaSet to bring up three nginx Pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Setting spec.replicas in the manifest and re-applying it is the declarative equivalent of kubectl scale; you can read more in the Kubernetes Deployment documentation.
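To illustrate the two equivalent paths (file and deployment names taken from the question):
# Declarative: set spec.replicas in the manifest, then re-apply it
kubectl apply -f myip-service.yaml
# Imperative one-off, as in the question
kubectl scale deployment superappip --replicas=30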

Updating deployment labels using "kubectl patch" does not work

I am trying to update a label using kubectl.
When I use apply it works, but it doesn't when doing a patch.
I tried kubectl patch deployment nginx-deployment --patch "$(cat nginx.yaml)"; it returns no change where I would expect a label change.
These are the only changes in my YAML (the original first, then the version with the updated label):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: testLab
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.8
        ports:
        - containerPort: 80

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: helloWorld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.8
        ports:
        - containerPort: 80
Is there a restriction on what patch can update, or am I doing something wrong?
I also tried specifying --type strategic and other types, but none seem to work.
After executing kubectl patch with your second file (where you changed the label), you should see the following error:
Error from server: cannot restore map from string
After executing kubectl apply on this file, you should get the following error:
error: error validating "nginx.yaml": error validating data: ValidationError(Deployment.metadata): unknown field "label" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta; if you choose to ignore these errors, turn validation off with --validate=false
Your deployment file should look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: helloWorld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.8
        ports:
        - containerPort: 80
You missed adding a space after app: in your label (app:testLab is parsed as a single string, not a key/value pair).
Add the space and then execute kubectl patch deployment nginx-deployment --patch "$(cat nginx.yaml)" once again.
Here is some useful documentation: labels and selectors, Kubernetes Deployments, and kubectl patch.
You should have something like this in your metadata:
metadata:
  name: nginx-deployment
  labels:
    label: testLabel2
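As a side note, for a single label you can also patch inline instead of piping a whole file; a minimal sketch using the names from the question:
kubectl patch deployment nginx-deployment -p '{"metadata":{"labels":{"app":"helloWorld"}}}'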

Uniqueness of deployment definition in kubernetes namespace

We have multiple environments like dev, qa, preprod, etc., with namespaces based on environment. Right now we name the service with the environment as a suffix, e.g.:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-order-service-develop-deployment
  namespace: dev
  labels:
    k8s-app: k8s-order-service-develop
spec:
  selector:
    matchLabels:
      k8s-app: k8s-order-service-develop
Instead, can I use the following in all namespaces? I.e., is a deployment name unique per namespace?
In the dev env:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-order-service-deployment
  namespace: dev
  labels:
    k8s-app: k8s-order-service
spec:
  selector:
    matchLabels:
      k8s-app: k8s-order-service
In the qa env:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-order-service-deployment
  namespace: qa
  labels:
    k8s-app: k8s-order-service
spec:
  selector:
    matchLabels:
      k8s-app: k8s-order-service
Remove the namespace from the deployment definition and save it as deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-order-service-develop-deployment
  labels:
    k8s-app: k8s-order-service-develop
spec:
  selector:
    matchLabels:
      k8s-app: k8s-order-service-develop
Then you can deploy it to a specific namespace using the command below:
kubectl create -f deploy.yaml -n <namespace-name>
For example:
kubectl create -f deploy.yaml -n dev
kubectl create -f deploy.yaml -n qa
You can look at kustomize for more options and flexibility.
This way you can use the same deployment files for different environments, and each environment is isolated from the others.
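A minimal kustomize layout for that might look like this (directory names are illustrative):
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deploy.yaml

# overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
resources:
- ../../base
Then kubectl apply -k overlays/dev renders the base into the dev namespace.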
You definitely can create the same deployments in different namespaces. Just be careful when updating a deployment in the wrong environment/namespace; showing the namespace in your shell prompt may be useful.
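For example, you can pin the current context to a namespace before applying anything:
kubectl config set-context --current --namespace=dev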

apiVersion changed after kubectl create

In Kubernetes 1.8, when I create a deployment, for example:
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Then when I do a
kubectl get deploy nginx-deployment -o yaml
I get:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-01-24T01:01:01Z
  ....
Why is the apiVersion extensions/v1beta1 instead of apps/v1beta2?
When you create a deployment, the apiserver persists it and is capable of converting the persisted deployment into any supported version.
kubectl get deployments actually requests the extensions/v1beta1 version (you can see this by adding --v=6)
To get apps/v1beta2 deployments, do kubectl get deployments.v1beta2.apps
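For example, to see the same object rendered in the apps/v1beta2 schema:
kubectl get deployments.v1beta2.apps nginx-deployment -o yaml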
You might be using an old version of kubectl.
If so, please upgrade your kubectl to 1.8, then create the deployment again.