How to deploy Filebeat on Kubernetes?

How can I deploy a basic Filebeat pod on Kubernetes?
I need to write a .yaml file, but I don't know what to specify:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: Filebeat
  labels:
    app: Filebeat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: ???
        image: ???

Try deploying the Filebeat component with the official Helm chart; it makes the app very easy to deploy and maintain (upgrades, configuration changes). For example:
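A minimal sketch of the Helm route, assuming the Elastic chart repository at https://helm.elastic.co and an arbitrary release name of filebeat:

# Add the Elastic chart repository and install the filebeat chart
helm repo add elastic https://helm.elastic.co
helm repo update
# "filebeat" here is an arbitrary release name
helm install filebeat elastic/filebeat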
By the way, if you decide to deploy with a custom YAML instead, the current version of the Filebeat Docker image is 8.0.0, so your YAML example would look like this:
spec:
  containers:
  - name: "filebeat"
    image: "docker.elastic.co/beats/filebeat:8.0.0-SNAPSHOT"

If you check filebeat-kubernetes.yaml from
https://github.com/elastic/beats/blob/master/deploy/kubernetes/filebeat-kubernetes.yaml, you will see that the values are already prepared for you:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.0.0
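If you want to apply that manifest directly, something like the following should work (assuming the raw URL corresponding to the GitHub link above; the full file also includes the ConfigMap and RBAC objects Filebeat needs):

kubectl apply -f https://raw.githubusercontent.com/elastic/beats/master/deploy/kubernetes/filebeat-kubernetes.yaml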

Related

Kubernetes StatefulSet error - serviceName environment variable doesn't exist

I'm supposed to create a StatefulSet with a headless Service, but when I create the headless Service and then the StatefulSet, only one Pod gets made, and it has Error status. I get this error when trying to use kubectl logs:
serviceName environment variable doesn't exist! Fix your specification.
Here is my code:
apiVersion: v1
kind: Service
metadata:
  name: svc-hl-xyz
spec:
  clusterIP: None
  selector:
    app: svc-hl-xyz
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-xyz
spec:
  replicas: 3
  serviceName: "svc-hl-xyz"
  selector:
    matchLabels:
      app: svc-hl-xyz
  template:
    metadata:
      labels:
        app: svc-hl-xyz
    spec:
      containers:
      - name: ctr-sts-xyz
        image: XXX/XXX/XXX
        command: ["XXX", "XXX","XXX"]
My specification seems to follow the Kubernetes documentation for StatefulSets, so I'm not sure why it doesn't work. All I can think of is that the command or the image I'm trying to use is causing this somehow.
The container logs (serviceName environment variable doesn't exist! Fix your specification.) tell you that the serviceName environment variable is missing.
Add it to the container spec in your StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-xyz
spec:
  replicas: 3
  serviceName: "svc-hl-xyz"
  selector:
    matchLabels:
      app: svc-hl-xyz
  template:
    metadata:
      labels:
        app: svc-hl-xyz
    spec:
      containers:
      - name: ctr-sts-xyz
        image: quay.io/myafk/interactive:stable
        command: ["interactive", "workloads","-t=first"]
        env:
        - name: serviceName
          value: svc-hl-xyz
More information about environment variables on Pods can be found in the docs.
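Once the Pod is running, you can verify the variable is present. A quick check, assuming the usual <statefulset-name>-<ordinal> Pod naming and an image that ships printenv:

# Read the env var from the first replica
kubectl exec sts-xyz-0 -- printenv serviceName
# expected output: svc-hl-xyz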

How to replace a StatefulSet with a Deployment

We have a StatefulSet, as below for example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-app-onkar
spec:
  serviceName: test-app-onkar
  selector:
    matchLabels:
      app: test-app-onkar
  replicas: 1
  template:
    metadata:
      name: test-app-onkar
      labels:
        app: test-app-onkar
    spec:
      containers:
      - name: test-app-onkar
        image: 10.1.1.100:5000/kubernetes-bootcamp:v1
We use Helm, and our charts contain the above StatefulSet definition. Now we have REPLACED the above file with a deployment.yaml as shown below and performed a helm upgrade using our new charts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-onkar
spec:
  selector:
    matchLabels:
      app: test-app-onkar
  replicas: 1
  template:
    metadata:
      name: test-app-onkar
      labels:
        app: test-app-onkar
    spec:
      containers:
      - name: test-app-onkar
        image: 10.1.1.100:5000/kubernetes-bootcamp:v1
All is well in that the helm upgrade creates a new Deployment test-app-onkar. But the StatefulSet test-app-onkar is still not removed. It is still there, and I think that is the expectation? If so, how do I ensure that Kubernetes actually removes the older StatefulSet and keeps only the new Deployment? What possible ways can this be achieved?
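One straightforward option (a sketch only; whether Helm can clean this up automatically depends on how the release is managed) is to delete the leftover StatefulSet manually once the new Deployment is serving:

# Remove the old StatefulSet and its Pods
kubectl delete statefulset test-app-onkar
# Or keep the existing Pods running while deleting only the controller:
# kubectl delete statefulset test-app-onkar --cascade=orphan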

How can I set kubectl scale deployment in the deployment file?

After setting up my Kubernetes cluster on GCP I used the command kubectl scale deployment superappip --replicas=30 from the Google console to scale my deployments, but what should be added to my deployment file myip-service.yaml to do the same?
The following is an example of a Deployment. It creates a ReplicaSet to bring up three nginx Pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
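To get the same effect as the kubectl scale command from the question, set the replicas field declaratively in your manifest and re-apply it:

# in myip-service.yaml:
#   spec:
#     replicas: 30   # same effect as kubectl scale ... --replicas=30
kubectl apply -f myip-service.yaml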
You can read more in the Kubernetes Deployments documentation.

Error deploying a pod in a kubernetes cluster

I'm trying to deploy this YAML in my Kubernetes cluster, onto one specific node:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-1
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
But when I try to deploy it with the command below, I get this error message:
pi@k8s-master-rasp4:~ $ kubectl apply -f despliegue-nginx.yaml -l kubernetes.io/hostname=k8s-worker-1
error: no objects passed to apply
Does anyone know where the problem could be?
Thanks
You can't use a label selector (-l) with kubectl apply for this: the selector filters the objects in the file by their labels, and since none of your objects carry that label, no objects are passed to apply.
Use nodeSelector to assign Pods to specific nodes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-1
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        kubernetes.io/hostname: k8s-worker-1 # <-- updated here!
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
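kubernetes.io/hostname is a well-known label that the kubelet sets automatically on every node, so no extra labeling step is needed; you can confirm the exact value to put in nodeSelector with:

kubectl get node k8s-worker-1 --show-labels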

Update deployment labels using "kubectl patch" does not work

I am trying to update a label using kubectl.
When I use apply it works, but it doesn't when doing a patch.
I tried kubectl patch deployment nginx-deployment --patch "$(cat nginx.yaml)"; it reports back no change, where I would expect a label change.
These are the only changes in my YAML.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: testLab
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.8
        ports:
        - containerPort: 80

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: helloWorld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.8
        ports:
        - containerPort: 80
Is there a restriction on what patch can update, or am I doing something wrong?
I also tried specifying --type strategic and other types, but none seem to work.
After executing kubectl patch with your second file (where you changed the label) you should see the following error:
Error from server: cannot restore map from string
After executing kubectl apply on this file you should get the following error:
error: error validating "nginx.yaml": error validating data: ValidationError(Deployment.metadata): unknown field "label" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta; if you choose to ignore these errors, turn validation off with --validate=false
Your deployment file should look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: helloWorld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.8
        ports:
        - containerPort: 80
You missed a space after the app label.
Add the space and then run kubectl patch deployment nginx-deployment --patch "$(cat nginx.yaml)" again.
Useful documentation: labels-selectors, kubernetes-deployments, kubernetes-patch.
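Alternatively, if you only need to change the label, an inline merge patch sidesteps file formatting issues entirely; a minimal sketch:

# Patch just the metadata label, without a patch file
kubectl patch deployment nginx-deployment --type merge -p '{"metadata":{"labels":{"app":"helloWorld"}}}'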
You should have something like this in your metadata:
metadata:
  name: nginx-deployment
  labels:
    label: testLabel2
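For a one-off label update, kubectl label is also an option and avoids patch semantics altogether; for example:

# --overwrite is required because the key already exists on the Deployment
kubectl label deployment nginx-deployment app=helloWorld --overwrite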