As per https://docs.traefik.io/configuration/acme/
I've created a secret like so:
kubectl --namespace=gitlab-managed-apps create secret generic traefik-credentials \
--from-literal=GCE_PROJECT=<id> \
--from-file=GCE_SERVICE_ACCOUNT_FILE=key.json
And passed it to the helm chart by using: --set acme.dnsProvider.$name=traefik-credentials
However I am still getting the following error:
{"level":"error","msg":"Unable to obtain ACME certificate for domains \"traefik.my.domain.com\" detected thanks to rule \"Host:traefik.my.domain.com\" : cannot get ACME client googlecloud: Service Account file missing","time":"2019-01-14T21:44:17Z"}
I don't know why (or whether) traefik uses the GCE_SERVICE_ACCOUNT_FILE variable. All Google tooling and third-party integrations use the GOOGLE_APPLICATION_CREDENTIALS environment variable for that purpose (and all Google API clients pick it up automatically), so it looks like traefik may have made a poor decision by calling it something else.
I recommend you look at the Pod spec of the traefik pod (the volumes and volumeMounts fields) to see whether the Secret is mounted into the pod correctly, for example as shown below.
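One quick way to check, assuming the chart created a Deployment named traefik in the same namespace (both assumptions):
kubectl --namespace=gitlab-managed-apps get deployment traefik -o yaml | grep -B2 -A6 -E 'volumes:|volumeMounts:|env:'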
This tutorial https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform shows how to mount IAM service account keys into any Pod. Maybe you can combine that with the Helm chart itself and work out what you need to make this work; a sketch of what the result might look like follows.
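For reference, a minimal sketch of what the relevant parts of the pod template could look like with the Secret from the question mounted as a file; the container name, mount path, and the exact env vars are assumptions, so adjust to whatever the chart actually renders:
spec:
  containers:
    - name: traefik                                  # assumed container name
      env:
        - name: GCE_PROJECT
          valueFrom:
            secretKeyRef:
              name: traefik-credentials
              key: GCE_PROJECT
        - name: GCE_SERVICE_ACCOUNT_FILE
          value: /secrets/GCE_SERVICE_ACCOUNT_FILE   # path to the mounted key file
      volumeMounts:
        - name: gce-credentials
          mountPath: /secrets
          readOnly: true
  volumes:
    - name: gce-credentials
      secret:
        secretName: traefik-credentials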
Related
My pod is running with the default service account. It uses Secrets through mounted files and ConfigMaps, but this is defined in YAML and the pod does not contain kubectl or any similar component.
Is there a point of using RBAC for anything if I don't call the API? The best practices state "Enable or configure RBAC rules that restrict reading data in Secrets (including via indirect means)."
Only things that call the Kubernetes API, like the kubectl command and the various Kubernetes SDK libraries, use RBAC. For a basic application, you as the user need permission to create Deployments, create Secrets, and so on, but if you have cluster-administrator permissions you don't need any special setup.
You could imagine an orchestrator application that wanted to farm out work by creating Kubernetes Jobs. In this case the orchestrator itself would need an RBAC setup; typically its Helm chart or other deployment YAML would contain a Role (to create Jobs), a ServiceAccount, and a RoleBinding, and set its own Deployment to run using that ServiceAccount. This isn't the "normal" case of a straightforward HTTP-based application (Deployment/Service/Ingress) with a backing database (StatefulSet/Service).
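As a rough sketch, the RBAC pieces for such an orchestrator might look like this (all names are made up for illustration):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: orchestrator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-creator
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: orchestrator-job-creator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: job-creator
subjects:
  - kind: ServiceAccount
    name: orchestrator
The orchestrator's Deployment would then set serviceAccountName: orchestrator in its pod spec so its API calls run with that identity.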
... restrict reading data in Secrets ...
If you can kubectl get secret -o yaml then the Secret values are effectively there to read; they are base64-encoded but not encrypted at all. It's good practice to limit the ability to do this. That said, you can also create a Pod that mounts the Secret and make the main container command dump the Secret value out somewhere readable, so even then Secrets aren't that secret. Restricting access is still a good practice, but not required per se, particularly in an evaluation or test cluster.
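To illustrate how little the base64 encoding protects, assuming a hypothetical Secret named my-secret with a key password:
kubectl get secret my-secret -o jsonpath='{.data.password}' | base64 --decode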
I'm evaluating crossplane to use as our go-to tool to deploy our clients' different solutions and have struggled with one issue:
We want to install crossplane on one cluster on GCP (which we create manually) and use that crossplane to provision new clusters on which we can install helm charts and deploy as usual.
The main problem so far is that we haven't figured out how to tell crossplane to install the helm charts into other clusters than itself.
This is what we have tried so far:
The ProviderConfig from the example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: helm-provider
spec:
  credentials:
    source: InjectedIdentity
...which works but installs everything into the same cluster as crossplane.
and the other example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      name: cluster-credentials
      namespace: crossplane-system
      key: kubeconfig
...which required a lot of makefile scripting to more easily generate a kubeconfig for the new cluster, and even with that kubeconfig it still gives a lot of errors (it does begin to create something in the new cluster, but it doesn't work all the way; I'm getting errors like: "PodUnschedulable Cannot schedule pods: gvisor}).
I have only tried crossplane for a couple of days, so I'm aware that I might be approaching this from a completely wrong angle, but I do like the promise of crossplane and its approach compared to Terraform and the like.
So the question is: am I thinking about this completely wrong, or am I missing something obvious?
The second test with the kubeconfig feels quite complicated right now (many steps in the correct order to achieve it).
Thanks
As you've noticed, ProviderConfig with InjectedIdentity is for the case where provider-helm installs the helm release into the same cluster.
To deploy to other clusters, provider-helm needs a kubeconfig file for the remote cluster, which needs to be provided as a Kubernetes secret and referenced from the ProviderConfig. So, as long as you've provided a proper kubeconfig for an external cluster that is accessible from your Crossplane cluster (a.k.a. control plane), provider-helm should be able to deploy the release to the remote cluster.
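For example, assuming you've written the remote cluster's kubeconfig to a local file called new-cluster.kubeconfig (the filename is illustrative), the secret referenced by the second ProviderConfig above could be created like this:
kubectl --namespace=crossplane-system create secret generic cluster-credentials \
--from-file=kubeconfig=new-cluster.kubeconfig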
So, it looks like you're on the right track regarding configuring provider-helm, and since you observed something getting deployed to the external cluster, you provided a valid kubeconfig, and provider-helm could access and authenticate to the cluster.
The last error you're getting sounds like an incompatibility between your cluster and the release, e.g. the external cluster only allows pods that run under gVisor and the application that you want to install with provider-helm does not have the corresponding labels.
As a troubleshooting step, you might try installing that helm chart with exactly the same configuration to the external cluster via the helm CLI, using the same kubeconfig you built.
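Something along these lines, with the release, chart, and values file names made up for illustration:
helm upgrade --install my-release my-repo/my-chart \
--kubeconfig new-cluster.kubeconfig \
--values values.yaml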
Regarding the inconvenience of building the kubeconfig you mentioned: provider-helm needs a way to access that external Kubernetes cluster, and a kubeconfig is the most common way to provide that access. However, if you see an alternative that would make things easier for some common use cases, it could be implemented, and it would be great if you could open a feature request in the repo for it.
Finally, I am wondering how you're creating those external clusters. If it makes sense to create them with Crossplane as well, e.g. GKE clusters with provider-gcp, then you can compose a helm ProviderConfig together with a GKE Cluster resource, which would create the appropriate secret and ProviderConfig whenever you create a new cluster. You can check this as an example: https://github.com/crossplane-contrib/provider-helm/blob/master/examples/in-composition/composition.yaml#L147
I am trying to see which Kubernetes user is creating the deployment and what type of authentication is used (basic auth, token, etc.).
I tried to do it using this:
kubectl describe deployment/my-workermole
but I am not finding that type of information in there.
The cluster is not managed by me and I am not able to find this in the deployment Jenkinsfile. Where and how can I find that type of information for my Kubernetes deployment, after it has been deployed?
I am practicing k8s on Katacoda. Currently I am working on ingress.yaml. This chapter has extra kinds of resources in the YAML file: Namespace, Secret, ServiceAccount, and ConfigMap.
For Secret, I can read another chapter to understand it later.
Questions:
Do I need to use Namespace, ServiceAccount, and ConfigMap in my ingress.yaml?
Suppose I use Caddy to serve HTTPS. The Secret from the example is hardcoded. How can I get it renewed automatically after a certain period?
Do I need to use Namespace, ServiceAccount, and ConfigMap in my ingress.yaml?
No, it's not required. They are different Kubernetes resources and can be created independently. But you can place several resource definitions in one YAML file just for convenience, as in the example below.
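For example, a single file with multiple resources separated by --- (the names here are arbitrary):
apiVersion: v1
kind: Namespace
metadata:
  name: development
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: development
data:
  LOG_LEVEL: info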
Alternatively you can create a separate YAML file for each resource and place them all in the same directory. After that, one of the following commands can be used to create the resources in bulk:
kubectl create -f project/k8s/development
kubectl create -f project/k8s/development --recursive
Namespace is just a logical grouping for Kubernetes resources. It should be created before any other resources that use it.
ServiceAccount is used as a security identity for pods, so that permissions for specific automated operations can be restricted.
ConfigMap is used as a node-independent source of configuration/files/environment for pods.
Suppose I use Caddy to serve HTTPS. The Secret from the example is hardcoded. How can I get it renewed automatically after a certain period?
The question is not quite clear, but I believe you can use cert-manager for that.
cert-manager is a quite popular solution for managing certificates in a Kubernetes cluster.
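As a rough sketch of what that could look like once cert-manager and an issuer are installed (the issuer name and hostname below are placeholders, not values from the tutorial):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls
spec:
  secretName: example-tls          # Secret the Ingress references instead of a hardcoded one
  dnsNames:
    - example.my.domain.com
  issuerRef:
    name: letsencrypt-prod         # assumes a ClusterIssuer with this name exists
    kind: ClusterIssuer
cert-manager then keeps the certificate in that Secret renewed automatically before it expires.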
I want to register kubernetes-elastic-agents with gocd-server. According to the doc https://github.com/gocd/kubernetes-elastic-agents/blob/master/install.md
I need a Kubernetes security token and the cluster CA certificate. My Kubernetes cluster is running. How do I create a security token? Where can I find the cluster CA cert?
Jake
There are two answers:
The first is that it's very odd that one would need to manually input those things, since they live in a well-known location on disk of any Pod (that isn't excluded via the automountServiceAccountToken field), as described in Accessing the API from a Pod.
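From inside any such Pod, the token and the CA certificate can be read from these well-known paths:
cat /var/run/secrets/kubernetes.io/serviceaccount/token
cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt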
The second is that if you really do need a statically provisioned token belonging to a ServiceAccount, then you can either retrieve an existing token from the Secret that is created by default for every ServiceAccount, or create a second Secret as described in Manually create a service account API token.
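A minimal sketch of that manual approach, assuming a ServiceAccount named gocd already exists (the names here are illustrative):
apiVersion: v1
kind: Secret
metadata:
  name: gocd-token
  annotations:
    kubernetes.io/service-account.name: gocd
type: kubernetes.io/service-account-token
Once the Secret is created, the token controller populates it, and kubectl get secret gocd-token -o jsonpath='{.data.token}' | base64 --decode prints the token.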
The CA cert you requested is present in every Pod in the cluster at the location mentioned in the first link, as well as in the ~/.kube/config of anyone who wishes to access the cluster. kubectl config view --raw -o yaml will show it to you (without --raw the certificate data is redacted).
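If you want just the decoded certificate and the CA data is embedded in your kubeconfig, something like this should work (the cluster index may differ):
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 --decode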