Operator-SDK and NewController function - kubernetes

I am building an operator with operator-sdk version 1.2. I understand there is a reconciliation loop, but while going through some GitHub repos I was unable to make out the use of the NewController function. Those repos appear to have been built with operator-sdk, yet in operator-sdk 1.2 I cannot find any NewController function.
For example, I was looking at https://github.com/oracle/mysql-operator, specifically https://github.com/oracle/mysql-operator/blob/master/pkg/controllers/cluster/controller.go, and I do not find a NewController function in the current operator-sdk.
Also, I do not understand how this MySQL operator uses kubeconfig. Do we need to pass the kubeconfig location to execute the command in the container? Is there a way to read the kubeconfig without passing its location in operator-sdk?

If you're building a new operator and you plan to use Operator SDK, then I recommend reading the official Operator SDK: Go tutorial. You can find another example of a Go-based operator here.
Concerning the kubeconfig: if you don't specify anything, it will use your default location, i.e. the same default kubeconfig your kubectl is configured with.
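For a bit of context on why nothing needs to be passed explicitly: operators scaffolded with operator-sdk 1.x use controller-runtime to load the client configuration. Below is a minimal sketch of reading that configuration yourself; the lookup order in the comment reflects my understanding of controller-runtime's behaviour, so inside a pod it works without any kubeconfig file being mounted.

package main

import (
	"fmt"

	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// GetConfigOrDie resolves the kubeconfig for you: a --kubeconfig
	// flag, the KUBECONFIG environment variable, the in-cluster
	// service-account config, or $HOME/.kube/config, whichever is
	// found first.
	cfg := ctrl.GetConfigOrDie()
	fmt.Println("talking to API server at", cfg.Host)
}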

Related

Skaffold and multiple Sub Charts

Lately I have been experimenting with Skaffold together with our Helm charts, and I am in a bit of a dilemma about whether our Helm chart / subcharts are compatible with Skaffold or not.
Our Helm charts look like the following:
my-helm-charts
  +-charts
    +-project1
    +-project2
    +-project3
    +-project4
    +-infrastructure_kafka
      +-charts
        +-kafka
        +-zookeeper
    +-infrastructure_cassandra
    +-infrastructure_elasticsearch
  +-Chart.yaml
  +-values.yaml
The reason we chose to structure the Helm charts this way is so that we can spin up extra stages for our project if necessary.
Now, when I want to develop project2 with Google Cloud Code / Skaffold (which I have configured correctly and can start without problems in IntelliJ), I have to start the whole my-helm-charts chart.
That is actually OK, but the problem is that when I use Debug in Kubernetes, I have the feeling that Google Cloud Code/Skaffold cannot really locate project2, and no debugging occurs.
My feeling is that Google Cloud Code/Skaffold is more oriented towards working with the following construct...
project2-helm
  +-templates
  +-Chart.yaml
  +-values.yaml
My subchart construct starts in Google Cloud Code/Skaffold without any exception, but I can't debug. Is it possible to achieve what I want with my structure, and if yes, how?
Or is it not possible at all...
Thanks for any answers...
We recently added a feature called config dependencies which might help here. It allows you to create more specific skaffold.yamls and then map them together with a "requires" field:
https://skaffold.dev/docs/design/config/#configuration-dependencies
Once you have the skaffold.yamls created and the right dependency mapping, you can run skaffold with the -m flag to choose one slice of your services:
skaffold dev -m project3
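To sketch what that might look like for the layout in the question (the module names, paths and image names here are illustrative assumptions, not taken from the question), each project gets its own skaffold.yaml and a top-level skaffold.yaml maps them together via requires:

# charts/project2/skaffold.yaml -- hypothetical per-project module
apiVersion: skaffold/v2beta26
kind: Config
metadata:
  name: project2
build:
  artifacts:
    - image: project2            # placeholder image name
deploy:
  helm:
    releases:
      - name: project2
        chartPath: .

# skaffold.yaml at the repository root, mapping the modules together
apiVersion: skaffold/v2beta26
kind: Config
metadata:
  name: my-helm-charts
requires:
  - path: charts/project2
  - path: charts/project3

With something like that in place, skaffold dev -m project2 builds and deploys only that module (plus anything it requires).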
Cloud Code support for modules is incoming: Cloud Code IntelliJ and Cloud Code VS Code recently added preview-level support for deploying and debugging modules of a larger application which uses Skaffold. See more here: https://cloud.google.com/code/docs/intellij/skaffold-modules

How to synchronize Custom resource when its specification is updated

In a Kubernetes operator based on operator-sdk, do you know how to write code that synchronizes a CR when its specification is updated with kubectl apply? Could you please provide some code samples?
It is mostly up to how you deploy things. The default skeleton gives you a Kustomize-based deployment structure, so kustomize build config/default | kubectl apply -f -. This is also wrapped up for you behind make deploy. There is also make install for just installing the generated CRD files.
Implementing this in a Go-based operator is fairly involved, and I would recommend studying the kubebuilder documentation and example in order to achieve it: https://book.kubebuilder.io/cronjob-tutorial/controller-implementation.html#implementing-a-controller
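To give a rough idea of the shape of that code (the MyApp type, group/version and package path below are placeholders, not from the question): in a current operator-sdk / kubebuilder project, every change to the CR's spec, e.g. via kubectl apply, triggers the controller's Reconcile method with the resource's name, and that is where you drive the cluster towards the new spec.

package controllers

import (
	"context"

	"k8s.io/apimachinery/pkg/api/errors"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	examplev1 "example.com/myapp/api/v1" // hypothetical API package for the CR
)

type MyAppReconciler struct {
	client.Client
}

func (r *MyAppReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the current state of the custom resource; req.NamespacedName
	// identifies the object whose spec changed.
	var app examplev1.MyApp
	if err := r.Get(ctx, req.NamespacedName, &app); err != nil {
		if errors.IsNotFound(err) {
			// The CR was deleted; nothing left to synchronize.
			return ctrl.Result{}, nil
		}
		return ctrl.Result{}, err
	}

	// Compare app.Spec with what currently exists in the cluster and
	// create/update/delete dependent objects accordingly (omitted here).

	return ctrl.Result{}, nil
}

func (r *MyAppReconciler) SetupWithManager(mgr ctrl.Manager) error {
	// Watching the CR type is what makes Reconcile fire whenever the
	// spec is edited.
	return ctrl.NewControllerManagedBy(mgr).
		For(&examplev1.MyApp{}).
		Complete(r)
}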

How do I find the 'from' Chart version at a helm upgrade?

I am using the Helm built-in object Release.IsUpgrade to ensure an init container is only run at upgrade.
I want to run the init container only when upgrading from a specific chart version.
Is it possible to get the 'from' chart version in a helm upgrade?
It doesn't look like this information is published either in the .Release object or through information available to a hook job.
You probably want a pre-upgrade hook and not an init container. If you have multiple replicas on your deployments, the init container will run on all of them; even if you have just one, if the node it's on fails and is replaced, the replacement will re-run the init container. A pre-upgrade hook will run just once, regardless of how the corresponding deployments are configured.
That hook will be a separate pod (and will require writing code), so within that you can do whatever you want. You can give it read access to the Kubernetes API to get the definition of an existing deployment, for example, and then look at its labels or container image tag to find out what version of the chart/application is running now. (There are standard labels that can help with this.) You could also make the upgrade step just look for its own outputs: if object X is supposed to exist, create it if it's not there, without focusing on specific versions.
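As a sketch of what such a hook could look like (the deployment name my-app, the image and the service account below are illustrative assumptions): a pre-upgrade hook is just a templated Job carrying the helm.sh/hook annotation, and its pod can ask the API server which chart version is currently running by reading the standard helm.sh/chart label.

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-pre-upgrade
  annotations:
    # Run before the upgrade is applied and clean the Job up afterwards.
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      # Needs a service account with RBAC permission to read deployments.
      serviceAccountName: {{ .Release.Name }}-hook
      containers:
        - name: check-previous-version
          image: bitnami/kubectl
          command:
            - sh
            - -c
            # my-app is a placeholder for the existing deployment; read the
            # chart version recorded on its standard labels, then branch on it.
            - kubectl get deploy my-app -o jsonpath='{.metadata.labels.helm\.sh/chart}'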

Setting GCloud SDK properties through environment variables

I'm trying to configure properties for the Google Cloud SDK in a non-interactive environment (specifically, a Docker container), and I'd like to use environment variables to do it (because it seems much simpler to get right and portable compared to volume-mounting config files...). However, I can't find any documentation on what the environment variables should be called, etc.
Is it possible to configure the Google Cloud SDK using environment variables, and how do I do so?
Clarification: For now, the only property I care about is the default project, core/project in this listing.
There is a set of environment variables (prefixed with CLOUDSDK_) that match some (all?) of the gcloud config properties.
I was unable to find these documented, but I'm aware of them through the kubectl Cloud Builder (see here) and this post.
I've submitted an issue asking Google to document these (more clearly).
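For what it's worth, the naming pattern appears to be CLOUDSDK_<SECTION>_<PROPERTY>, so the core/project property from the question maps to CLOUDSDK_CORE_PROJECT. A minimal Dockerfile sketch (the project ID and zone are placeholders):

# Configure gcloud via environment variables instead of `gcloud config set`.
FROM google/cloud-sdk:slim

# CLOUDSDK_<SECTION>_<PROPERTY> maps to the corresponding gcloud property.
ENV CLOUDSDK_CORE_PROJECT=my-project-id
ENV CLOUDSDK_COMPUTE_ZONE=europe-west1-b

# gcloud now reports these values without any prior `gcloud config set`.
RUN gcloud config list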

Verify that all values for a kubernetes helm chart have been used

I'd like to check that my kubernetes helm chart does not define unused values in values.yaml. This should include any subcharts such that if you've defined subchart.foo.bar: ??? in the top-level values.yaml that key is definitely used in the subchart, or possibly as a short-cut mentioned in the subchart/values.yaml.
This is needed to prevent us from shipping bogus "documentation" in the values.yaml, for example if a key in a subchart has been changed or removed.
Ideally there would also be some possibility to report on which subchart values have not been overridden in the top-level chart, though this is less concerning.
Are there any existing tools that can help with this?
Since the Helm v3 release you can define a schema for your values. On commands like helm install, your provided values are automatically validated against that schema.
Please see the official documentation: https://helm.sh/docs/topics/charts/#schema-files
Schema validation works for subcharts too, this is also mentioned in the documentation on the link above.
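To illustrate (the property names below are invented for the example): a values.schema.json placed next to values.yaml, with additionalProperties set to false, makes helm install/upgrade/lint reject values the schema does not declare. That does not prove a declared key is actually used by a template, but it does catch stale or misspelled keys that no longer match the chart.

{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "additionalProperties": false,
  "properties": {
    "replicaCount": {
      "type": "integer",
      "minimum": 1
    },
    "image": {
      "type": "object",
      "additionalProperties": false,
      "properties": {
        "repository": { "type": "string" },
        "tag": { "type": "string" }
      },
      "required": ["repository"]
    }
  }
}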
AFAIK, there isn't a dedicated tool for that. However, it shouldn't be that hard to make one, even using bash: export all key/value paths (like test.test1.test2) and grep for each string recursively in the templates folder. If you want to read YAML from bash, you can install shyaml. If you know how to code in Python, even better.
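A very rough sketch of that idea in Python (using PyYAML rather than shyaml; it assumes values.yaml and a templates/ directory in the current chart). It only flags keys whose dotted path never appears verbatim as .Values.<path>, so indirect usages and subchart keys will show up as false positives:

#!/usr/bin/env python3
# Rough sketch: report values.yaml keys that never appear as
# .Values.<dotted.path> anywhere under templates/.
import pathlib
import yaml  # PyYAML

def leaf_paths(node, prefix=""):
    # Walk nested dicts and yield dotted paths to the leaf values.
    if isinstance(node, dict) and node:
        for key, value in node.items():
            yield from leaf_paths(value, f"{prefix}{key}.")
    else:
        yield prefix.rstrip(".")

values = yaml.safe_load(pathlib.Path("values.yaml").read_text()) or {}
templates = "".join(
    p.read_text() for p in pathlib.Path("templates").rglob("*") if p.is_file()
)

for path in leaf_paths(values):
    if path and f".Values.{path}" not in templates:
        print(f"possibly unused: {path}")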