How to export Helm chart values programmatically and periodically from a pod

I thought this would be a simple task but I can't find a solution. The goal is to get the full set of values of a Helm chart as actually installed and used in a k8s cluster. Helm is installed locally, so it's easy to do a helm get values, but what if I want these values extracted periodically and sent to a third place by a pod running in the same cluster? For example, exporting them as JSON and saving them in a DB.
Can I write a Go/Python script that runs in a pod with Helm installed? Is there an API for this?
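For anyone landing here: Helm 3 exposes a Go SDK (helm.sh/helm/v3/pkg/action) that can do the equivalent of helm get values from a program. Alternatively, a minimal in-cluster sketch using the CLI on a schedule, assuming Helm 3 (which talks directly to the API server, so no Tiller is needed in the pod), the public alpine/helm image, and a hypothetical helm-reader service account with RBAC to read the Secrets where Helm 3 stores release data:

# A hedged sketch, not a definitive setup: a CronJob that dumps the release
# values hourly. On clusters older than 1.21 use apiVersion: batch/v1beta1.
cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: CronJob
metadata:
  name: export-helm-values
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: helm-reader   # hypothetical SA with read access to release Secrets
          restartPolicy: OnFailure
          containers:
          - name: exporter
            image: alpine/helm:3.12.0
            command: ["/bin/sh", "-c"]
            # prints the values as JSON to the pod log; replace the tee with
            # a curl/DB write to ship them to a third place
            args:
            - helm get values my-release -n my-namespace -o json | tee /tmp/values.json
EOF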

Related

Kubernetes single deployment yaml file for spinning up the application

I am setting up Kubernetes for an application with 8 microservices, ActiveMQ, Postgres, Redis and MongoDB.
After the entire configuration of pods and deployments, is there any way to create a single master deployment yaml file which will create the entire set of services, replicas etc. for the entire application?
Note: I will be using multiple deployment yaml files, statefulsets etc. for all of the above mentioned services.
You can use this script:
# Dumps every resource of the listed kinds in the namespace to a
# per-resource YAML file under ./<namespace>/<kind>/<name>.yaml
NAMESPACE="your_namespace"
RESOURCES="configmap secret daemonset deployment service hpa"

for resource in ${RESOURCES}; do
  rsrcs=$(kubectl -n ${NAMESPACE} get -o json ${resource} | jq '.items[].metadata.name' | sed "s/\"//g")
  for r in ${rsrcs}; do
    dir="${NAMESPACE}/${resource}"
    mkdir -p "${dir}"
    kubectl -n ${NAMESPACE} get -o yaml ${resource} ${r} > "${dir}/${r}.yaml"
  done
done
Remember to specify what resources you want exported in the script.
Is there any way to create a single master deployment yaml file which will create the entire set of services, replicas etc. for the entire application?
Since you already mentioned kubernetes-helm, why don't you actually use it for that exact purpose? In short, Helm is a sort of package manager for Kubernetes; some say it's similar to yum or apt. It deploys charts, which you can think of as packaged applications: a pack of all your pre-configured applications which can be deployed as one unit. It's not exactly one file, but rather a collection of files that make up a so-called Helm chart.
What are Helm charts?
They are basically K8s YAML manifests combined into a single package that can be installed in your cluster. Installing the package is as simple as running a single command such as helm install. Once done, the charts are highly reusable, which reduces the time needed to create dev, test and prod environments.
As an example of a complex Helm chart deploying multiple resources, you may want to check StackStorm.
Deployed without any custom config, this chart will install 2 replicas for each component of StackStorm as well as backends like RabbitMQ, MongoDB and Redis.
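To see the basic workflow end to end, a minimal sketch (Helm 2 --name syntax, matching the era of this thread; Helm 3 drops that flag):

$ helm create myapp                  # scaffolds Chart.yaml, values.yaml and templates/
$ helm install --name myapp ./myapp  # renders the templates and deploys everything as one release
$ helm status myapp                  # lists every resource the release created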

Is there a Helm built-in function that would allow me to write a file within my chart to an external directory?

Our Helm charts create the appropriate k8s manifests and install them into our cluster. Associated with (or packaged within) our Helm chart is a rule file which I would like to be able to write to some external directory.
What we would like is for an integrator to install a number of our Helm charts. Each Helm chart they install would have a rule that gets written to some external directory. After installing all of the associated Helm charts, the integrator would then inspect the rules directory and create a ConfigMap containing those rules.
We can't create the ConfigMap until we have installed all the Helm charts that contribute rules. Is there a Helm built-in that would allow me to write one (or more) files to an external directory?
No. The only outputs of a Helm chart are Kubernetes resources (that are installed in the cluster) and the plain-text rendered output of the NOTES.txt file.
However, you could install this metadata in the cluster. The easiest way would be to create a ConfigMap with the content you need, and then have the integrator process look for those ConfigMaps and combine them.
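For example, each chart could ship a small template along these lines; the type: rule label, the rule data key, and the rules/my-rule.txt path are invented conventions for this sketch:

# Hedged sketch of a per-chart ConfigMap template (templates/rule-configmap.yaml)
cat > templates/rule-configmap.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-rule
  labels:
    type: rule            # lets the integrator find all rule ConfigMaps
data:
  rule: |-
{{ .Files.Get "rules/my-rule.txt" | indent 4 }}
EOF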
kubectl get configmap --all-namespaces -l type=rule -o name
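And a hedged sketch of the combine step, assuming the rule data key convention from the sketch above:

# Concatenate every rule payload and build one combined ConfigMap from it
kubectl get configmap --all-namespaces -l type=rule -o json \
  | jq -r '.items[].data.rule' > combined-rules.txt
kubectl create configmap combined-rules --from-file=combined-rules.txt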
If you want to do this programmatically, you could write a Kubernetes operator that runs in the cluster, and instead of writing out ConfigMaps, write some kind of custom resource; a controller would watch for those resources appearing using the Kubernetes API, and combine them appropriately.
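If you go the custom-resource route, the CRD itself could be as small as this sketch (the group and kind names are invented; the controller that watches and combines Rule objects is left out):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: rules.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: rules
    singular: rule
    kind: Rule
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              rule:
                type: string     # the rule text each chart contributes
EOF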

How to ingest the Boston Housing dataset into Cassandra in Kubernetes?

I am new to Kubernetes and have tried to set up my first cluster using minikube. I have installed Cassandra using a Helm chart through the following:
helm install bitnami/cassandra
I have Cassandra running on one pod right now. I would like to explore and understand how I can interact with Cassandra inside my Kubernetes cluster.
My goal right now is therefore to ingest the Boston Housing dataset into Cassandra, and I have tried to read up on how this is done in Kubernetes. Has anyone done anything similar? What is the correct way to ingest data into Cassandra in Kubernetes? I have a hard time finding the right information on how to do this. Is it done through Jobs?
Would love any tips or insights on this.
Before installing Cassandra via helm, you can fetch it into the current local folder via:
$ helm fetch bitnami/cassandra --untar
$ cd cassandra
Then create a Job template in the chart's templates folder, and add hook annotations to this template so Helm recognizes it as a hook rather than as part of the release.
...
annotations:
  # This is what defines this resource as a hook. Without this line, the
  # job is considered part of the release.
  "helm.sh/hook": post-install # It will run after all resources are deployed
  # The job will be deleted after it completes successfully
  "helm.sh/hook-delete-policy": hook-succeeded
...
You can see a full example of a Helm hook template in the official docs.
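For context, a hedged sketch of what such a hook Job could look like once written into the unpacked chart; the image, host, keyspace/table, dataset URL, and password handling are all hypothetical placeholders, not part of the Bitnami chart:

# Writes a post-install hook Job into the chart; the keyspace/table must
# already exist for cqlsh's COPY to succeed
cat > templates/ingest-boston-job.yaml <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-ingest-boston
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: ingest
        image: bitnami/cassandra:latest   # ships the cqlsh client
        command: ["/bin/sh", "-c"]
        args:
        - |
          # hypothetical: fetch the CSV, then bulk-load it with COPY
          curl -sLo /tmp/boston.csv https://example.com/boston.csv
          cqlsh cassandra-service-host -u cassandra -p "$CASSANDRA_PASSWORD" \
            -e "COPY housing.boston FROM '/tmp/boston.csv' WITH HEADER = true"
EOF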
After adding your hook job template, you can install your chart via:
$ # Make sure you are in cassandra folder
$ pwd
~/cassandra
$ # And install
$ helm install cassandra .
For more about Kubernetes Jobs, you can visit the official documentation.
Hope it helps!

Automate helm chart installation when new Kubernetes namespace is created

I'm creating a multi-tenancy Kubernetes infrastructure.
I created a Helm chart with my app, and now I need to automate the Helm chart installation whenever a new namespace is created.
For example, when the namespace client1 is created, I need to run helm install myrepo/myapp --name client1.
How can I get the namespace creation event, and the namespace name?
You can either keep running a script which executes kubectl get namespace every once in a while and compares the current result with the previous one; when you find a new namespace, execute helm install myrepo/myapp --name client1 (the polling version of this is sketched below). Or you can run an application in your cluster that lists all namespaces, compares the current list with a cached one, and calls the Helm client to install your app whenever a new namespace is found. If you are using Golang, I would recommend the Kubernetes client-go to list the namespaces in the cluster, and you can refer to the open-source project pipeline for the Helm client-go part that installs your app.
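A hedged sketch of the polling approach (Helm 2 --name syntax as in the question; the 30-second interval and per-namespace naming are illustrative):

# Poll for new namespaces and install the chart once into each one
kubectl get namespaces -o name | sort > /tmp/ns.old
while true; do
  sleep 30
  kubectl get namespaces -o name | sort > /tmp/ns.new
  # comm -13 prints lines present only in the new snapshot
  for ns in $(comm -13 /tmp/ns.old /tmp/ns.new); do
    name=${ns#namespace/}
    helm install myrepo/myapp --name "${name}" --namespace "${name}"
  done
  mv /tmp/ns.new /tmp/ns.old
done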

Improve performance of helm install chart

I have a chart that installs a pod into Kubernetes. Since Helm allows us to set values in a single chart, I decided to create a reusable chart that lets me create multiple pods with the same chart configuration.
I'm trying to create about 10,000 pods, and using helm install is the easiest way since I'm reusing the chart config. I was wondering how I can improve the performance of helm install?
I tried to scale tiller-deploy to about 4 replicas, but only one of the pods processes the Helm requests.
Example script to create 10,000 pods
# has_created parses the `helm status` output to check whether the release exists
created = has_created(`helm status #{$name} 2>&1`)
if !created
  `helm install --name=#{$name} --set start=#{$start} --set end=#{$until} --set key=#{$key} ./chart`
  p "deployed #{$name} release"
end
Thanks
Your bottleneck is not Tiller, it's the way you're starting the process: each helm install runs serially and waits for the previous one to finish. What about running these installs in the background, or using a language that lets you run them in parallel threads?
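For instance, a hedged sketch of fanning the installs out with xargs (the release names and the key value are placeholders standing in for the question's script variables):

# Run up to 8 helm clients concurrently instead of one serial loop
seq 1 10000 | xargs -P 8 -I{} \
  helm install --name "pod-{}" --set key="{}" ./chart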
You could try a single parent chart whose requirements list declares your reusable chart 10,000 times with different variables passed; that way helm sends a single install command and Tiller takes care of the rest. This might be a bit faster, as you limit the communication between helm and Tiller.
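A hedged sketch of that requirements.yaml (Helm 2 dependency syntax; the chart name, version, and value keys are hypothetical):

# Each alias instantiates the reusable chart once; generate this file
# with a script rather than writing 10,000 entries by hand
cat > parent-chart/requirements.yaml <<'EOF'
dependencies:
- name: pod-chart
  version: 0.1.0
  repository: "file://../pod-chart"
  alias: pod-1
- name: pod-chart
  version: 0.1.0
  repository: "file://../pod-chart"
  alias: pod-2
# ... one entry per pod; values are then set per alias, e.g.
#     helm install ./parent-chart --set pod-1.key=abc --set pod-2.key=def
EOF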