How to keep filebeat registry - kubernetes

I need to move my Filebeat to another namespace, but I must keep the registry. I mean this part:
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
- name: data
  hostPath:
    path: /var/lib/filebeat-data
    type: DirectoryOrCreate
Can you tell me how I can copy that in Kubernetes?

Just to check my assumptions:
Filebeat is a DaemonSet
When you start it up in the new Namespace, you want to keep the registry
You're happy to keep the on-disk path the same
Because the data folder is mounted from the host directly - if you apply the same DaemonSet in a new Namespace, it will mount the same location into the container. So there's no need to copy any files around.
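To make that concrete, here is a minimal sketch of such a DaemonSet applied in the new namespace. The namespace, image version, and container mount path are illustrative assumptions; the point is that only metadata.namespace changes, while the hostPath stays the same so the existing registry is picked up:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: new-namespace      # illustrative: only the namespace changes
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.13.0   # illustrative version
        volumeMounts:
        - name: data
          mountPath: /usr/share/filebeat/data
      volumes:
      - name: data
        hostPath:
          path: /var/lib/filebeat-data   # same host location, so the registry is reused
          type: DirectoryOrCreate
```

After deleting the old DaemonSet and applying this one, each node still has its registry under /var/lib/filebeat-data, so Filebeat resumes from the recorded read offsets.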

Related

Sharing existing media folder with pods on Kubernetes

I'm working on my toy project and I want to share an existing folder with media files with pods running on Kubernetes (Docker Desktop's built-in Kubernetes on Windows 10 or microk8s on my home Linux server). What is the best way to do it? I have searched through the docs and there are no examples with an existing folder and data.
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. You can also create a PV with hostPath so that you can claim it in the pod configuration. For this, your existing directory has to be on the node where the pods are going to be created.
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
Only root has access to newly created files and folders on the underlying host. To be able to write to a hostPath volume, you must either run your process as root in a privileged container or change the file permissions on the host.
For detailed information refer to this document
NOTE: Avoiding hostPath volumes whenever possible is a best practice, since they pose numerous security issues.
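If you prefer the PV/PVC route mentioned above, a minimal sketch could look like this (the names, size, and path are illustrative assumptions, not values from the question):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-pv
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/media   # your existing folder on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-pvc
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

The pod then references the claim with a persistentVolumeClaim volume (claimName: media-pvc) instead of a hostPath volume.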

Kubernetes: make pod directory writable

I have a Docker image with a config folder in it.
logback.xml is located there.
I want the ability to change logback.xml in the pod (to raise the log level dynamically, for example).
First, I thought of using an emptyDir volume, but that overwrites the directory and it becomes empty.
So is there some simple method to make a directory writable in a pod?
Hello, hope you are enjoying your Kubernetes journey. If I understand correctly, you want to have a file in a pod and be able to modify it when needed.
What you need here is to create a ConfigMap based on your logback.xml file (you can do it with imperative or declarative Kubernetes configuration; here is the imperative one):
kubectl create configmap logback --from-file logback.xml
Then mount this file at your directory location by using a volume and a volumeMount subPath in your deployment/pod YAML manifest:
...
volumeMounts:
- name: "logback"
  mountPath: "/CONFIG_FOLDER/logback.xml"
  subPath: "logback.xml"
volumes:
- name: "logback"
  configMap:
    name: "logback"
...
After this, you will be able to modify your logback.xml config by editing/recreating the ConfigMap and restarting your pod.
But keep in mind:
1: Pod files are not supposed to be modified on the fly; this is against the container philosophy (cf. pets vs. cattle).
2: However, depending on your container image's user rights, all pod directories can be writable...
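For completeness, the declarative equivalent of the imperative command above would be a manifest along these lines (the embedded XML is a placeholder for your actual logback.xml content):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logback
data:
  logback.xml: |
    <configuration>
      <root level="INFO">
        <!-- your appenders here -->
      </root>
    </configuration>
```

Applying this with kubectl apply -f has the same effect as the kubectl create configmap command, and makes the config easy to keep in version control.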

Add SCDF (Spring Cloud Data Flow) Application to Bitnami chart generated cluster?

I've used the Bitnami Helm chart to install SCDF into a k8s cluster generated by kOps in AWS.
I'm trying to add my development SCDF stream apps into the installation using a file URI and cannot figure out where or how the shared Skipper & Server mount point is. Exec'ing into either instance, there is no /home/cnb, and I'm not seeing anything common via mount. As best I can tell, the Bitnami installation is using the MariaDB instance for shared "storage".
Is there a recommended way of installing local/dev Stream apps into the cluster?
There are a couple of parameters under the deployer section that allow you to mount volumes (link):
deployer:
  ## @param deployer.volumeMounts Streaming applications extra volume mounts
  ##
  volumeMounts: {}
  ## @param deployer.volumes Streaming applications extra volumes
  ##
  volumes: {}
see https://github.com/bitnami/charts/tree/master/bitnami/spring-cloud-dataflow#deployer-parameters.
Then, the mounted volume is used in the ConfigMaps (both server and skipper):
Server
https://github.com/bitnami/charts/blob/c351211a5501bb44b5e065a5e3a7d4b7414f84f3/bitnami/spring-cloud-dataflow/templates/server/configmap.yaml#L60
Skipper
https://github.com/bitnami/charts/blob/c351211a5501bb44b5e065a5e3a7d4b7414f84f3/bitnami/spring-cloud-dataflow/templates/skipper/configmap.yaml#L72
Apart from that, there are also server.extraVolumes and server.extraVolumeMounts to be set on the Dataflow Server Pod, and skipper.extraVolumes and skipper.extraVolumeMounts to be set on the Skipper Pod just in case it's useful for your use case.
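As a sketch, a values file wiring those deployer parameters together might look like the following. The volume name and host path are assumptions for illustration; check the chart's README for the exact structure it expects:

```yaml
deployer:
  volumeMounts:
    - name: applications
      mountPath: /applications
  volumes:
    - name: applications
      hostPath:
        path: /cdf
        type: Directory
```

This would then be passed at install time, e.g. helm install scdf bitnami/spring-cloud-dataflow -f values.yaml.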
Building on the previous answer, here is what I came up with:
Create an EBS Volume
Mount it on each EC2 instance in the cluster at the same location (/cdf)
Install CDF using the Bitnami chart and this config file:
server.extraVolumeMounts:
  # Location in container
  - mountPath: /applications
    # Refer to the volume below
    name: application-volume
server.extraVolumes:
  - name: application-volume
    hostPath:
      # Location in host filesystem
      path: /cdf
      # this field is optional
      type: Directory
skipper.extraVolumeMounts:
  # Location in container
  - mountPath: /applications
    # Refer to the volume below
    name: application-volume
skipper.extraVolumes:
  - name: application-volume
    hostPath:
      # Location in host filesystem
      path: /cdf
      # this field is optional
      type: Directory
Then I can copy my jars into /cdf on the host file system and install the applications using a file URI of file:///applications/<jar-file-name> and everything works.

Best practice for adding app configuration files into kubernetes pods

I have the following setup:
An azure kubernetes cluster with some nodes where my application (consisting of multiple pods) is running.
I'm looking for a good way to make a project-specific configuration file (a few hundred lines) available for two of the deployed containers and their replicas.
The configuration file is different between my projects but the containers are not.
I'm looking for something like a read-only file mount in the containers, but haven't found a good way. I played around with persistent volume claims, but there seems to be no automatic file-placement possibility apart from copying (including URI and secret managing).
The best thing would be a possibility where kubectl makes use of a YAML file to access a specific folder on my developer machine to push my configuration file into the cluster.
ConfigMaps are not a proper way to do it (because the data has to be inside the YAML, and my file is big and changing).
For volumes, there seems to be no automatic way to place files inside them at creation time.
Can anybody guide me to a good solution that matches my situation?
You can use a ConfigMap for this; the ConfigMap then contains your config file, rather than it living inline in your pod YAML. You can create a ConfigMap with the content of your config file via the following:
kubectl create configmap my-config --from-file=my-config.ini=/path/to/your/config.ini
and then bind it as a volume in your pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: mypod
    ...
    volumeMounts:
    - name: config
      mountPath: "/config"
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: my-config # the name of your configmap
Afterwards your config is available in your pod under /config/my-config.ini
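To later update the config from the file on your machine without hand-editing the ConfigMap, one common pattern (assuming the same names as above) is to regenerate the manifest and apply it:

```shell
kubectl create configmap my-config \
  --from-file=my-config.ini=/path/to/your/config.ini \
  --dry-run=client -o yaml | kubectl apply -f -
```

Note that --dry-run=client requires a reasonably recent kubectl; older versions used plain --dry-run. Mounted ConfigMap volumes are refreshed automatically after a short delay, though your application still needs to re-read the file to pick up changes.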

How do I inject binary files into a v1.7 pod

I have a 3rd party docker image that I want to use (https://github.com/coreos/dex/releases/tag/v2.10.0). I need to inject some customisation into the pod (CSS stylesheet and PNG images).
I haven't found a suitable way to do this yet. Configmap binaryData is not available before v1.10 (or 9, can't remember off the top of my head). I could create a new image and COPY the PNG files into the image, but I don't want the overhead of maintaining this new image - far safer to just use the provided image.
Is there an easy way of injecting these 2/3 files I need into the pod I create?
One way would be to mount one or more volumes into the desired locations within the pod, seemingly /web/static. This however would overwrite the entire directory, so you would need to supply all the files, not just those you wish to overwrite.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - image: dex:2.10.0
    name: dex
    volumeMounts:
    - mountPath: /web/static # the mount location within the container
      name: dex-volume
  volumes:
  - name: dex-volume
    hostPath:
      path: /destination/on/K8s/node # path on host machine
There are a number of storage types for different cloud providers, so take a look at https://kubernetes.io/docs/concepts/storage/volumes/ and see if there's something a little more specific to your environment rather than storing on disk.
For what it's worth, creating your own image would probably be the simplest solution.
You could mount your custom files into a volume, and additionally define a set of commands to run on pod startup (see here) to copy your files to their target path.
You of course need to also run the command that starts your service, in addition to the ones that copy your files.
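One way to sketch that copy-on-startup approach is an initContainer that first seeds a shared emptyDir with the image's original assets and then overlays your custom CSS/PNG files, so nothing from the original directory is lost. The host path and the assumption that the image ships a shell are illustrative; adjust for the actual dex image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dex
spec:
  initContainers:
  - name: seed-static
    image: dex:2.10.0
    # copy the image's original assets, then overlay the custom files
    command: ["sh", "-c", "cp -r /web/static/. /shared/ && cp -r /custom/. /shared/"]
    volumeMounts:
    - name: static
      mountPath: /shared
    - name: custom-assets
      mountPath: /custom
  containers:
  - name: dex
    image: dex:2.10.0
    volumeMounts:
    - name: static
      mountPath: /web/static   # shared volume now holds original + custom files
  volumes:
  - name: static
    emptyDir: {}
  - name: custom-assets
    hostPath:
      path: /opt/dex-custom    # illustrative location of your CSS/PNG files on the node
```

With this layout, the main container's command is left untouched, since the copying happens entirely in the initContainer before the service starts.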