Vault on k8s without admin rights

I am trying to install HashiCorp Vault in my Kubernetes cluster on an OpenShift environment, but unfortunately I don't have admin rights, and the IT department said it is not possible to grant them.
Is there another option for installing Vault that does not require admin rights on the Kubernetes cluster?
The error after the attempted installation is this one:
Error: rendered manifests contain a resource that already exists.
Unable to continue with install: could not get information about the
resource: customresourcedefinitions.apiextensions.k8s.io
"vaultsecrets.ricoberger.de" is forbidden: User "" cannot get resource
"customresourcedefinitions" in API group "apiextensions.k8s.io" at the
cluster scope.

It seems that you are trying to install custom resource definitions (CRDs), which are a cluster-wide resource. Since they are cluster-wide, creating them is typically limited to cluster admins.
So apart from giving you admin privileges, the IT operators could grant you specific permissions to create and edit custom resource definitions; maybe that will work.
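A minimal sketch of what such a grant could look like, assuming the IT team is willing to apply it; the role and user names here are illustrative, not from the original question:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: crd-editor            # illustrative name
rules:
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crd-editor-binding
subjects:
- kind: User
  name: your-username         # replace with the actual user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: crd-editor
  apiGroup: rbac.authorization.k8s.io

With a binding like this in place, the Helm install should be able to create the vaultsecrets.ricoberger.de CRD without full cluster-admin.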

Related

OpenShift custom resources restrictions

I don't understand how I can restrict access to custom resources in OpenShift using RBAC.
Let's assume I have a custom API:
apiVersion: yyy.xxx.com/v1
kind: MyClass
metadata:
...
Is it possible to prevent some users from deploying resources where apiVersion=yyy.xxx.com/v1 and kind=MyClass?
Also, can I grant access to other users to deploy resources where apiVersion=yyy.xxx.com/v1 and kind=MyOtherClass?
If this can be done using RBAC roles, how can I deploy RBAC roles in OpenShift? Only using the CLI, or can I also create some YAML configuration files and deploy them?
You can use cluster roles and RBAC roles:
oc adm policy [add-cluster-role-to-group | remove-cluster-role-from-group] <cluster-role> system:authenticated
So the general idea is to remove the permission to deploy the resource from all authenticated users.
The next step is to add the permission to deploy that resource only to ServiceAccounts assigned to specific namespaces, as sketched below.
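A sketch of that flow with concrete oc commands; the role, ServiceAccount, and namespace names are illustrative, and the commands assume a cluster role named myclass-deployer already exists:

# remove the deploy permission from all authenticated users
oc adm policy remove-cluster-role-from-group myclass-deployer system:authenticated

# grant it back only to a ServiceAccount in a specific namespace
oc adm policy add-role-to-user myclass-deployer -z deployer-sa -n allowed-namespace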
OpenShift/Kubernetes has cluster roles/bindings and local roles/bindings.
Here are the definitions from the docs: *1
Cluster role/binding: Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles.
Local role/binding: Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles.
If your custom resource is namespace-scoped, you can grant others permission on it yourself with a local role.
Whereas if the custom resource is cluster-scoped, only a cluster admin can deploy it.
*1: https://docs.openshift.com/container-platform/4.11/authentication/using-rbac.html
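To make the distinction concrete, here is a minimal sketch of a local role that lets a user deploy MyOtherClass but not MyClass. It assumes both kinds are namespace-scoped and that the plural resource names are myclasses and myotherclasses; the namespace and user names are illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myotherclass-deployer
  namespace: my-project           # illustrative namespace
rules:
- apiGroups: ["yyy.xxx.com"]
  resources: ["myotherclasses"]   # assumed plural of MyOtherClass; MyClass is deliberately omitted
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myotherclass-deployer-binding
  namespace: my-project
subjects:
- kind: User
  name: other-user                # replace with the actual user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: myotherclass-deployer
  apiGroup: rbac.authorization.k8s.io

Both objects are plain YAML, so they can be applied with oc apply -f like any other manifest, which also answers the CLI-vs-YAML part of the question.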

Deploy from CI/CD via Helm to external Kubernetes cluster with limited rights

I understand that I can copy my .kube/config to my CI/CD server, or name the ServiceAccount to allow my CD pipeline to use Helm for deployment.
However, what if I want to allow deployment via Helm but restrict a lot of other access, like:
reading data from pods or a deployed database
port-forwarding to services
... so basically access to all data in the cluster, except for the stateless Docker containers deployed via Helm.
Would it be possible to create a new ClusterRole with limited rights? What verbs does such a ClusterRole need, at minimum, for Helm to function properly?
It comes down to what your Helm chart does in Kubernetes.
ClusterRoles can be bound to a particular namespace through a RoleBinding that references them. The admin, edit and view default ClusterRoles are commonly used in this manner; for more detailed info see this description. For example, edit is a default ClusterRole which allows read/write access to most objects in a namespace, but does not allow viewing or modifying Roles or RoleBindings. By contrast, granting a user cluster-admin at the namespace scope provides full control over every resource in the namespace, including the namespace itself.
You can also restrict a user's access to a particular namespace by using either the edit or the admin role. See this example.
The permissions strategy also depends on what objects the installation will create: the user needs full access to every API object that the Helm releases will manage, as sketched below. Using RBAC Authorization explains this concept in more detail with several examples that you could use as a reference. Also, this source would be helpful.
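A minimal sketch of a namespace-confined deployer for Helm, binding the default edit ClusterRole to a dedicated ServiceAccount; the names and namespace are illustrative:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm-deployer          # illustrative
  namespace: staging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: helm-deployer-edit
  namespace: staging
subjects:
- kind: ServiceAccount
  name: helm-deployer
  namespace: staging
roleRef:
  kind: ClusterRole
  name: edit                   # default ClusterRole: read/write most namespaced objects
  apiGroup: rbac.authorization.k8s.io

Because this is a RoleBinding rather than a ClusterRoleBinding, the edit permissions apply only inside staging, and the pipeline's kubeconfig would use this ServiceAccount's token. Note that edit still allows reading Secrets and pod logs in that namespace, so fully blocking "reading data from pods" would require a custom Role with a narrower verb/resource list.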

Kubernetes Operator CSV stuck pending

I'm trying to install a Kubernetes operator into an OpenShift cluster using OLM 0.12.0. I ran oc create -f my-csv.yaml to install it. The CSV is created successfully, but nothing happens after that.
In the olm operator logs I find this message:
level=info msg="couldn't ensure RBAC in target namespaces" csv=my-operator.v0.0.5 error="no owned roles found" id=d1h5n namespace=playground phase=Pending
I also note that no InstallPlan is created to make the service accounts that I thought it would create.
What's wrong?
This message probably means that the RBAC assigned to your service account does not match the requirements specified by the CSV (ClusterServiceVersion).
In other words, when you create an operator you define a CSV that declares the requirements for creating your custom resource. Then, when the operator creates the resource, OLM checks whether the service account in use fulfills these requirements.
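The "no owned roles found" wording points at the permissions stanza of the CSV's install strategy: if that section is missing, empty, or doesn't match the operator's ServiceAccount, OLM has no owned Roles to ensure. A minimal sketch of that stanza, with illustrative names:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator.v0.0.5
spec:
  install:
    strategy: deployment
    spec:
      permissions:                        # namespace-scoped Roles OLM should create
      - serviceAccountName: my-operator   # illustrative; must match the operator deployment's SA
        rules:
        - apiGroups: [""]
          resources: ["pods", "services", "configmaps"]
          verbs: ["get", "list", "watch", "create", "update", "delete"]
      # deployments and the rest of the CSV omitted for brevity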
You can check the Hazelcast Operator we created. It has some requirements regarding RBAC, so before installing it you need to apply the corresponding RBAC file.

How do I authenticate with Kubernetes kubectl using a username and password?

I've got a username and password, how do I authenticate kubectl with them?
Which command do I run?
I've read through https://kubernetes.io/docs/reference/access-authn-authz/authorization/ and https://kubernetes.io/docs/reference/access-authn-authz/authentication/ but cannot find any relevant information there for this case.
kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_config_set-credentials/
The above does not seem to work:
kubectl get pods
Error from server (Forbidden): pods is forbidden: User "client" cannot list pods in the namespace "default": Unknown user "client"
Kubernetes provides a number of different authentication mechanisms. Providing a username and password directly to the cluster (as opposed to using an OIDC provider) would indicate that you're using Basic authentication, which hasn't been the default option for a number of releases.
The syntax you've listed appears correct, assuming that the cluster supports basic authentication.
The error you're seeing is similar to the one here which may suggest that the cluster you're using doesn't currently support the authentication method you're using.
Additional information about what Kubernetes distribution and version you're using would make it easier to provide a better answer, as there is a lot of variety in how k8s handles authentication.
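For completeness, a sketch of wiring basic-auth credentials into a usable context; the cluster and context names are illustrative, and this only works if the API server was started with basic authentication enabled (the old --basic-auth-file mechanism, removed in Kubernetes 1.19):

kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
kubectl config set-cluster my-cluster --server=https://<api-server>:6443
kubectl config set-context my-context --cluster=my-cluster --user=cluster-admin
kubectl config use-context my-context
kubectl get pods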
You should have a group set for the authenticating user in the static password file, for example:
password1,user1,userid1,system:masters
password2,user2,userid2
Reference:
"Use a credential with the system:masters group, which is bound to the cluster-admin super-user role by the default bindings."
https://kubernetes.io/docs/reference/access-authn-authz/rbac/

Kubernetes 1.6+ RBAC: Gain access as role cluster-admin via kubectl

1.6+ sees a lot of changes revolving around RBAC and ABAC. However, what is a little quirky is not being able to access the dashboard etc. by default as previously possible.
Access will result in
User "system:anonymous" cannot proxy services in the namespace "kube-system".: "No policy matched."
There is plenty of documentation in the k8s docs, but it does not really state how, as the creator of a cluster, to practically gain access as cluster-admin.
What is a practical way to authenticate me as cluster-admin?
By far the easiest method is to use the credentials from /etc/kubernetes/admin.conf (this is on your master if you used kubeadm). Run kubectl proxy --kubeconfig=admin.conf on your client and then you can visit http://127.0.0.1:8001/ui from your browser.
You might need to change the master address in admin.conf after you copy it to your client machine.
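A sketch of those steps end to end, assuming a kubeadm cluster whose master is reachable over SSH (the host placeholder is illustrative):

# copy the admin kubeconfig from the master to the client
scp root@<master-ip>:/etc/kubernetes/admin.conf .

# if the server address in admin.conf points at an internal IP,
# edit it to the master address reachable from this machine

# proxy the API server locally and open the dashboard
kubectl proxy --kubeconfig=admin.conf
# then browse to http://127.0.0.1:8001/ui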