How do I use Crossplane to install Helm charts (with provider-helm) into another cluster - kubernetes

I'm evaluating Crossplane to use as our go-to tool to deploy our clients' different solutions and have struggled with one issue:
We want to install Crossplane into one cluster on GCP (which we create manually) and use that Crossplane to provision new clusters on which we can install Helm charts and deploy as usual.
The main problem so far is that we haven't figured out how to tell Crossplane to install the Helm charts into clusters other than its own.
This is what we have tried so far:
The provider-config in the example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: helm-provider
spec:
  credentials:
    source: InjectedIdentity
...which works but installs everything into the same cluster as crossplane.
and the other example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      name: cluster-credentials
      namespace: crossplane-system
      key: kubeconfig
...which required a lot of Makefile scripting to more easily generate a kubeconfig for the new cluster, and even with that kubeconfig it still gives a lot of errors (it does begin to create something in the new cluster, but it doesn't work all the way). Getting errors like: "PodUnschedulable Cannot schedule pods: gvisor".
I have only tried Crossplane for a couple of days, so I'm aware that I might be approaching this from a completely wrong angle, but I do like the promise of Crossplane and its approach compared to Terraform and the like.
So the question is: am I thinking about this completely wrong, or am I missing something obvious?
The second test with the kubeconfig feels quite complicated right now (many steps in the correct order to achieve it).
Thanks

As you've noticed, ProviderConfig with InjectedIdentity is for the case where provider-helm installs the helm release into the same cluster.
To deploy to other clusters, provider-helm needs a kubeconfig for the remote cluster, provided as a Kubernetes secret and referenced from the ProviderConfig. So, as long as you've provided a proper kubeconfig for an external cluster that is accessible from your Crossplane cluster (a.k.a. the control plane), provider-helm should be able to deploy the release to the remote cluster.
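For reference, that secret can be created from the kubeconfig file you already built; a minimal sketch, with the name and namespace matching the ProviderConfig above (the kubeconfig contents are a placeholder):

apiVersion: v1
kind: Secret
metadata:
  name: cluster-credentials
  namespace: crossplane-system
type: Opaque
stringData:
  kubeconfig: |
    <contents of the remote cluster's kubeconfig>

Equivalently, kubectl create secret generic cluster-credentials -n crossplane-system --from-file=kubeconfig=./remote-kubeconfig does the same in one step.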
So it looks like you're on the right track with configuring provider-helm, and since you observed something getting deployed to the external cluster, you provided a valid kubeconfig and provider-helm could access and authenticate to the cluster.
The last error you're getting sounds like an incompatibility between your cluster and the release, e.g. the external cluster only allows pods running with gVisor and the application that you want to install with provider-helm is not configured accordingly.
As a troubleshooting step, you might try installing that Helm chart with exactly the same configuration into the external cluster via the Helm CLI, using the same kubeconfig you built.
Regarding the inconvenience of building the kubeconfig that you mentioned: provider-helm needs a way to access that external Kubernetes cluster, and a kubeconfig is the most common way to provide it. However, if you see another alternative that would make things easier for some common use cases, it could be implemented, and it would be great if you could create a feature request in the repo for it.
Finally, I am wondering how you're creating those external clusters. If it makes sense to create them with Crossplane as well, e.g. GKE with provider-gcp, then you can compose a Helm ProviderConfig together with a GKE Cluster resource, which would create the appropriate secret and ProviderConfig whenever you create a new cluster. You can check this as an example: https://github.com/crossplane-contrib/provider-helm/blob/master/examples/in-composition/composition.yaml#L147
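For a rough idea of how that wiring looks, here is a simplified sketch of the relevant part of such a Composition (field paths and names are illustrative, and it assumes the composed cluster resource writes a kubeconfig key to its connection secret):

  # inside Composition.spec.resources, next to the GKE cluster entry
  - base:
      apiVersion: helm.crossplane.io/v1beta1
      kind: ProviderConfig
      spec:
        credentials:
          source: Secret
          secretRef:
            namespace: crossplane-system
            key: kubeconfig
    patches:
      # derive a unique ProviderConfig name and point it at the connection
      # secret written by the cluster resource in the same composition
      - fromFieldPath: metadata.uid
        toFieldPath: metadata.name
      - fromFieldPath: metadata.uid
        toFieldPath: spec.credentials.secretRef.name

Every new composite cluster then comes with a ready-to-use Helm ProviderConfig that Release resources in the same composition can reference.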

Related

Set a Kubernetes Secret with data from an external Key Vault

I'm using ArgoCD (a GitOps tool) to deploy my Helm project in a Kubernetes cluster. My cluster is configured to connect to my external key vault, so once my application is deployed to my cluster, it can fetch an access token and grab any secret it needs from the vault. My project also has a use case in which it spins up a new pod within my cluster to run some batch process, and this pod uses an imagePullSecret to connect to my private registry that contains an image for this pod.
If this imagePullSecret is present in my cluster and in the same namespace as my project, this process works. However, I'm trying to keep my project IaC as cluster-agnostic as possible. Basically, should I spin up a new Kubernetes cluster, I can deploy my project to this cluster without any manual changes needed, and the project has everything it needs to run.
What I want to do is add a yaml file in my template folder that defines a secret in the cluster (which will be my imagePullSecret) so that I know that secret will always be there. Since this secret is project-dependent, assume that the IaC code that deploys the cluster will not do this task. Of course, to be secure, I don't want to have the actual secret data in the file itself.
As my cluster is already seamlessly connected to an external Key Vault, I'm wondering if there's a way to fetch this information from the vault at the time of deployment. So the secret yaml would look like:
kind: Secret
apiVersion: v1
metadata:
  name: superSecretData
  namespace: projectNameSpace
  # Maybe some connection annotation needed here?
type: dockerconfigjson
data:
  secretData: <Data from Key Vault>
I saw there was some small project for this, but I wanted to know if there is a more... standard way to do it. When searching online for a solution, there are many for setting secrets in a Pod, but I want to actually create a Kubernetes secret in my namespace. My GitOps tool may be able to do this and may be a solution to this problem, but if I can avoid this middleman situation (KV -> ArgoCD -> k8s secret -> project instead of just KV -> k8s secret -> project) I would like to go that route.

How do I interpolate Kubernetes variables into JSON in the ConfigMap YAML file?

I have this ConfigMap where I am constructing an app-config.json file that I pass into Angular. This file is how I get environment variables into Angular, since they must be served.
Below is how I thought passing variables into the JSON would work in the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-settings
data:
  app-config.json: |-
    {
      "keycloakUrl": "http://${minikube ip}:${keycloak_port}/auth",
      "realm": "eshc",
      "clientId": "eshc-frontend",
      "backendApi": "http://localhost:${backend_port}"
    }
The problem is that these are not evaluated. I want to pass in Kubernetes service aliases and the output of the minikube ip command, as in the example above. Could someone point me in the right direction as to how I might do this?
Thanks in advance!
Kubernetes doesn't provide this facility in the API.
You can do this at deploy time with helm or kubectl's kustomization features.
Depending on your use case, this can also be done at runtime in a container entry point before the app starts up or in a Kubernetes specific init container. Avoid the init container unless you are working with shared file systems, or with the Kubernetes API to apply these changes.
From your example it looks like everything should be available at deploy time, except maybe the minikube IP. For that, you should be able to use the magic DNS name host.minikube.internal.
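For instance, going the Helm route mentioned above, the ConfigMap can become a chart template and the values get substituted at deploy time; a minimal sketch (the chart layout, value names, and ports are illustrative):

# templates/frontend-settings.yaml in a Helm chart
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-settings
data:
  app-config.json: |-
    {
      "keycloakUrl": "http://{{ .Values.keycloakHost }}:{{ .Values.keycloakPort }}/auth",
      "realm": "eshc",
      "clientId": "eshc-frontend",
      "backendApi": "http://localhost:{{ .Values.backendPort }}"
    }

Something like helm install frontend ./chart --set keycloakHost=$(minikube ip) --set keycloakPort=8080 --set backendPort=3000 then renders the real values into the JSON before it reaches the cluster.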

Kubernetes infrastructure as code best practice

Can anyone point me to the common strategy for setting up a Kubernetes cluster according to the principles of infrastructure as code and automatic deployment, for different developer teams with Git repos and an undefined CI/CD platform?
Let's say I am going to use Terraform to deploy a Kubernetes cluster on a hypothetical cloud service named QKS with a commonly used service, for example Apache Airflow, for which a public helm chart is available. There are two custom services (from two independent developer groups) to deploy named "apples" and "bananas".
I am struggling with the separation of responsibilities across different code bases. Which steps in this process can best still be done manually? A lot is being written about this technology, but I cannot find any articles on this issue in particular.
This is my own proposal.
Have three git repositories:
my-infrastructure: includes the Terraform files, the Airflow Helm deployment, and the deployment of two namespaces including access roles for these namespaces. CI/CD tracks changes and deploys them on QKS.
apples: code base and corresponding Helm template. CI/CD can deploy to the apples namespace only (see the sketch after the notes below).
bananas: code base and corresponding Helm template. CI/CD can deploy to the bananas namespace only.
Notes:
subdivision of the cluster into namespaces is obvious;
all secrets and authorization tokens for the namespaces can be created via Terraform using the Terraform Kubernetes provider:
https://www.terraform.io/docs/providers/kubernetes/r/secret.html
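For concreteness, the namespace-scoped access I have in mind would look roughly like this (the service account and role are illustrative; the same pattern repeats for bananas):

apiVersion: v1
kind: Namespace
metadata:
  name: apples
---
# let the apples pipeline's identity deploy into its own namespace only
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: apples-ci-deployer
  namespace: apples
subjects:
  - kind: ServiceAccount
    name: apples-ci          # illustrative: whatever identity the apples CI/CD uses
    namespace: apples
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                 # built-in role allowing create/update of most namespaced resources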
There is an interesting Kubernetes project for this called cluster-api that lets you create, configure & manage Kubernetes clusters in a declarative fashion, in a way similar to how we manage different resources in Kubernetes itself. It defines new resources of different kinds, like Cluster and Machine.
e.g. you could define a cluster like this:
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: capi-quickstart
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster
    name: capi-quickstart
Of course, you would need a starting/bootstrap Kubernetes cluster where you will deploy this resource. This project is still in the prototype stage, so use caution.
Check out the cluster-api repository on Github: https://github.com/kubernetes-sigs/cluster-api

Look up secrets from gcloud secrets manager directly as secretGenerator with kustomize

I am setting up my Kubernetes cluster using kubectl -k (kustomize). Like any other such arrangement, I depend on some secrets during deployment. The route I want to go is to use the secretGenerator feature of kustomize to fetch my secrets from files or environment variables.
However, managing said files or environment variables in a secure and portable manner has shown itself to be a challenge, especially since I have 3 separate namespaces for test, stage and production, each requiring a different set of secrets.
So I thought surely there must be a way for me to manage the secrets in my cloud provider's official way (Google Cloud Platform - Secret Manager).
So what would a secretGenerator that accesses secrets stored in Secret Manager look like?
My naive guess would be something like this:
secretGenerator:
  - name: juicy-environment-config
    google-secret-resource-id: projects/133713371337/secrets/juicy-test-secret/versions/1
    type: some-google-specific-type
Is this at all possible?
What would the example look like?
Where is this documented?
If this is not possible, what are my alternatives?
I'm not aware of a plugin for that. The plugin system in Kustomize is somewhat new (added about 6 months ago) so there aren't a ton in the wild so far, and Secrets Manager is only a few weeks old. You can find docs at https://github.com/kubernetes-sigs/kustomize/tree/master/docs/plugins for writing one though. That links to a few Go plugins for secrets management so you can probably take one of those and rework it to the GCP API.
There is a Go plugin for this (I helped write it), but plugins weren't supported until more recent versions of Kustomize, so you'll need to install Kustomize directly and run it like kustomize build <path> | kubectl apply -f - rather than kubectl -k. This is a good idea anyway IMO since there are a lot of other useful features in newer versions of Kustomize than the one that's built into kubectl.
As seen in the examples, after you've installed the plugin (or you can run it within Docker, see readme) you can define files like the following and commit them to version control:
my-secret.yaml
apiVersion: crd.forgecloud.com/v1
kind: EncryptedSecret
metadata:
  name: my-secrets
  namespace: default
source: GCP
gcpProjectID: my-gcp-project-id
keys:
  - creds.json
  - ca.crt
In your kustomization.yaml you would add:
generators:
  - my-secret.yaml
and when you run kustomize build, it'll automatically retrieve your secret values from Google Secret Manager and output Kubernetes Secret objects.
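The rendered output is then an ordinary Kubernetes Secret, roughly along these lines (kustomize may append a content-hash suffix to the name; the values below are placeholders, not real data):

apiVersion: v1
kind: Secret
metadata:
  name: my-secrets
  namespace: default
type: Opaque
data:
  creds.json: UExBQ0VIT0xERVI=   # base64 of "PLACEHOLDER"; the plugin fills in the real value from Secret Manager
  ca.crt: UExBQ0VIT0xERVI=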

Kubernetes: Is it possible to have the exact same deployment descriptor for all environments including local?

I am new to Kubernetes and Minikube. Both look like amazing tools, but I wonder if there is any way to have a single .yml file to deploy my services/deployments in all the environments, including the local dev env...
The first limitation I see is related to service discovery: I would like to have my services behind a load balancer in the cloud, but in the development environment I can't, since Minikube doesn't support it, so I have to fall back to NodePort.
Can you provide me with some info about that matter?
There are other common differences between environments: names; credentials for any database or other permissioned resources; allocation of RAM/CPU; replica counts. There are also limitations that minikube has as a runtime, compared to production k8s.
So, though one can use the same single yaml file in different environments, typically that's not what one wants.
What one usually wants is to have the general architectural shape of the solution be the same across environments, have differences extracted into minimalist configuration, then rendered using templates into environment-specific files to be used at deployment time.
The tool most commonly used to support this kind of approach is helm:
https://helm.sh/
Helm is basically a glorified templating wrapper around kubectl (though it has an in-cluster component). With helm, you can use the same base set of resource files, extract environment differences into config files, and then use helm to deploy as appropriate to each environment.
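As a rough sketch of that pattern (file names and values are illustrative), the per-environment differences shrink down to small values files while the templates stay shared:

# values-dev.yaml
replicaCount: 1
service:
  type: NodePort

# values-prod.yaml
replicaCount: 3
service:
  type: LoadBalancer

The shared templates then reference {{ .Values.replicaCount }} and {{ .Values.service.type }}, and deployment becomes helm install my-app ./chart -f values-dev.yaml locally, with values-prod.yaml swapped in for production.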
If I understood your question properly, you would like to spin up your infrastructure using one command and one file.
It is possible; however, it depends on your services. If some pods require another one to be running before they can start, this can get tricky. Technically, though, you can put all your manifest files in one bundle. You can then create all the deployments, services, etc. with kubectl apply -f bundle.yml
To create this bundle, you need to separate every manifest (deployment, service, configmap, etc.) with triple dashes (---).
Example:
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-1
---
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-2