I'm trying to install a Kubernetes operator into an OpenShift cluster using OLM 0.12.0. I ran oc create -f my-csv.yaml to install it. The CSV is created successfully, but nothing else seems to happen.
In the OLM operator logs I find this message:
level=info msg="couldn't ensure RBAC in target namespaces" csv=my-operator.v0.0.5 error="no owned roles found" id=d1h5n namespace=playground phase=Pending
I also notice that no InstallPlan is created to make the accounts that I thought it would be making.
What's wrong?
This message probably means that the RBAC assigned to your service account does not match the requirements specified in the CSV (ClusterServiceVersion).
In other words, when creating an operator you define a CSV, which declares the requirements for creating your custom resource. Then, when the operator creates the resource, it checks whether the service account in use fulfills these requirements.
You can check the Hazelcast Operator we created; it has some RBAC requirements, so before installing it you need to apply the following RBAC file.
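For illustration only (the names and rules below are made up, not taken from your operator), the install strategy in a CSV declares per-namespace permissions like these, and OLM generates the corresponding Roles from them; if that section is missing or doesn't line up with what OLM expects, "no owned roles found" is the kind of symptom you can see:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator.v0.0.5
spec:
  install:
    strategy: deployment
    spec:
      permissions:
        - serviceAccountName: my-operator   # service account the operator deployment runs as
          rules:
            - apiGroups: [""]
              resources: ["pods", "services", "configmaps"]
              verbs: ["get", "list", "watch", "create", "update", "delete"]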
Related
I'm trying to connect to a Digital Ocean Kubernetes cluster using doctl, but when I run
doctl kubernetes cluster kubeconfig save <> I get an error saying .kube/config: not a directory. I've authenticated using doctl, and when I run doctl account get I see my account info. I'm confused as to what the problem is. Is this some sort of permission issue, or did I miss a config step somewhere?
kubectl (by default) stores its configuration in ${HOME}/.kube/config. It appears you don't have the file and the command doesn't create it if it doesn't exist; I recommend you try creating the ${HOME}/.kube directory first, as doctl really ought to create the config file itself once that directory exists.
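For example, something along these lines should create the directory and let doctl write the config (the cluster name is just a placeholder):

mkdir -p ${HOME}/.kube
doctl kubernetes cluster kubeconfig save <your-cluster-name>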
kubectl facilitates interacting with multiple clusters as multiple users in multiple namespaces through the use of a tuple called a 'context', which combines a cluster with a user and an (optional) namespace. The kubectl config use-context command lets you switch between these easily.
After you're done with a cluster, generally (!) you must tidy up its entries in ${HOME}/.kube/config too, as these configs tend to grow over time.
You can change the location of the kubectl config file using an environment variable (KUBECONFIG).
See Organizing Cluster Access Using kubeconfig Files
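For example (the context name and config file path below are placeholders):

# list and switch between contexts
kubectl config get-contexts
kubectl config use-context <context-name>
# or point kubectl at a different config file entirely
export KUBECONFIG=${HOME}/.kube/config-staging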
I am trying to utilize Rancher Terraform provider to create a new RKE cluster and then use the Kubernetes and Helm Terraform providers to create/deploy resources to the created cluster. I'm using this https://registry.terraform.io/providers/rancher/rancher2/latest/docs/resources/cluster_v2#kube_config attribute to create a local file with the new cluster's kube config.
The Helm and Kubernetes providers need the kube config in the provider configuration: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs. Is there any way I can get the provider configuration to wait for the local file to be created?
Generally speaking, Terraform always needs to evaluate provider configurations during the planning step because providers are allowed to rely on those settings in order to create the plan, and so it typically isn't possible to have a provider configuration refer to something created only during the apply step.
That said, bootstrapping in a situation like this is one case where it can be reasonable to use the -target=... option to terraform apply: plan and apply only enough actions to create the Rancher cluster first, and then follow up with a normal plan and apply to complete everything else:
terraform apply -target=rancher2_cluster_v2.example
terraform apply
This two-step process is needed only for situations where the kube_config attribute isn't known yet. As long as this resource type has convergent behavior, you should be able to use just terraform apply as normal, unless you later make a change that requires replacing the cluster.
(This is a general answer about provider configurations referring to resource attributes. I'm not familiar with Rancher in particular, so there might be some specifics about that particular resource type which I'm not mentioning here.)
I found a sort of workaround solution. I output the rancher2_cluster.cluster.kube_config object into a variable, then referenced that variable in my Kubernetes module. Instead of using the kube_config attribute in the provider configuration, I used the token and host attributes, using yamldecode to parse the credentials directly from the kube_config variable.
provider "kubernetes" {
token = "${yamldecode(var.kube_config)["users"][0]["user"]["token"]}"
host = "${yamldecode(var.kube_config)["clusters"][0]["cluster"]["server"]}"
}
I would suggest splitting your functionality into two layers (a sketch follows below):
Run the first layer to generate the kube_config file.
Run the second layer, which consumes that file.
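As a rough sketch of that layered layout (resource names and the file path are assumptions, not taken from your code), the first layer writes the kube config to disk and the second layer reads it:

# Layer 1: create the cluster and persist its kube config
resource "local_file" "kube_config" {
  content  = rancher2_cluster_v2.example.kube_config
  filename = "${path.module}/kube_config.yaml"
}

# Layer 2 (applied separately, after layer 1 has run): consume the file
provider "kubernetes" {
  config_path = "../layer1/kube_config.yaml"
}

Keeping the two layers in separate root modules (separate state, separate terraform apply runs) is what avoids the chicken-and-egg problem in the provider configuration.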
I am trying to see which Kubernetes user created the deployment and what type of authentication was used (basic auth, token, etc.).
I tried to do it using this:
kubectl describe deployment/my-workermole
but I am not finding that type of information in there.
The cluster is not managed by me, and I am not able to find it in the deployment Jenkinsfile. Where and how can I find that type of information for my Kubernetes deployment after it has been deployed?
The operator is https://operatorhub.io/operator/keycloak-operator version 11.0.0.
The cluster is Kubernetes version 1.18.12.
I was able to follow the steps from OperatorHub.io to install the Operator Lifecycle Manager and the Keycloak "OperatorGroup" and "Subscription".
It took much longer than I was expecting (maybe 20 minutes?), but eventually the corresponding "ClusterServiceVersion" was created.
However, now when I try to use it by creating the following resource, it doesn't seem to be doing anything at all:
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  name: example-keycloak
  namespace: keycloak
  labels:
    app: sso
spec:
  instances: 1
  externalAccess:
    enabled: true
  extensions:
    - https://github.com/aerogear/keycloak-metrics-spi/releases/download/1.0.4/keycloak-metrics-spi-1.0.4.jar
It accepts the new resource, so I know the CRD is in place. The documentation states that it should create a stateful set, an ingress, and more, but it just doesn't seem to create anything.
I checked the cluster logs and this is the error that is jumping out to me:
olm-operator ERROR controllers.operator Could not update Operator status {"request": "/keycloak-operator.my-keycloak-operator", "error": "Operation cannot be fulfilled on operators.operators.coreos.com \"keycloak-operator.my-keycloak-operator\": the object has been modified; please apply your changes to the latest version and try again"}
I have quite a bit of experience with plain Kubernetes, but I'm brand new to "operators", so I'm really not sure where to look next with regard to what might be going wrong.
Any hints/suggestions/explanations?
UPDATE: I was creating the keycloak resource in a namespace OTHER than the one I installed the operator into. Since it allowed me to create the custom resource (Kind: Keycloak) in that namespace, I thought this was supported. However, when I created the keycloak resource in the same namespace where the operator was installed (my-keycloak-operator), it actually tried to do something. It's still failing to bring up the pod, mind you, but at least it's trying to do something.
Will leave this question open for a bit to see if the "Could not update Operator status" is something I should be concerned about or not...
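For anyone checking the same thing: the OperatorGroup in the namespace where the operator was installed lists the namespaces the operator watches, so something like the following (namespace name as in the update above) should show whether a given target namespace is included:

kubectl get operatorgroup -n my-keycloak-operator -o yaml
# check spec.targetNamespaces – the Keycloak resource generally needs to live
# in one of the listed namespaces for the operator to reconcile it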
It looks like the operator and/or the components it wants to bring up cannot do a write (POST/PUT) to the kube-apiserver.
From what you describe, it appears that the first time, when you installed the operator in a different namespace, it just didn't have permissions to bring up anything at all. The second time, when you installed it in the right namespace, it looks like the operator was able to talk to the kube-apiserver, but the components it's bringing up (Keycloak, etc.) are not able to.
I would check the logs on the kube-apiserver (control plane) to see if you have some unauthorized requests, also check the log files of the components (pods, deployments, etc) that the operator is trying to bring up.
If you have unauthorized requests you may have to manually update the RBAC rules. Finally, I would check with IBM Cloud to see what specific permissions its K8s control plane could have that are preventing applications from talking to it (the kube-apiserver).
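As a quick way to probe the RBAC side (the service account name below is a guess; list the real ones first), kubectl auth can-i can impersonate the operator's service account:

kubectl -n my-keycloak-operator get serviceaccounts
kubectl auth can-i create statefulsets \
  --as=system:serviceaccount:my-keycloak-operator:keycloak-operator \
  -n my-keycloak-operator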
I've followed the instructions to create an EKS cluster in AWS using Terraform.
https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html
I've also copied the output for connecting to the cluster to ~/.kube/config-eks. I've verified this works, as I've been able to connect to the cluster and manually deploy containers. However, now I'm trying to use the Terraform Kubernetes provider to connect to the cluster but cannot seem to configure the provider properly.
I've configured the provider to use my kubectl configuration, but when attempting to push a simple ConfigMap, I get an error stating the following:
configmaps is forbidden: User "system:anonymous" cannot create configmaps in the namespace "kube-system"
I know that the provider is picking up part of the configuration, but I cannot seem to get it to authenticate. I suspect this is because EKS uses the Heptio authenticator and I'm not sure if the K8s Go client used by Terraform supports it. However, given that Terraform released their AWS EKS support when EKS went GA, I'd doubt that they wouldn't also update their Kubernetes provider to work with it.
Is it possible to even do this now? Are there alternatives?
Exec auth was added here: https://github.com/kubernetes/client-go/commit/19c591bac28a94ca793a2f18a0cf0f2e800fad04
This is what is utilized for custom authentication plugins and was published Feb 7th.
Right now, Terraform doesn't support the new exec-based authentication provider, but there is an issue open with a workaround: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161
That said, if I get some free time I will work on a PR.
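For reference, a workaround that became common with later provider versions (a sketch only; the cluster name is a placeholder) is to skip exec-based authentication entirely and hand the Kubernetes provider a token from the aws_eks_cluster_auth data source:

data "aws_eks_cluster" "example" {
  name = "my-eks-cluster"
}

data "aws_eks_cluster_auth" "example" {
  name = "my-eks-cluster"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.example.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.example.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.example.token
}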