How can a Kubernetes container be restricted from accessing the API Server? - kubernetes

Kubernetes automatically places a token and certificate in /var/run/secrets/kubernetes.io/serviceaccount of each running container in a pod. This token allows access to the API Server from any container.
Is it possible to either prevent this directory from being added to a container or specify a service account that has zero privileges?

That token has no explicit permissions. If you run with any authorization mode other than AllowAll, you will find that the account cannot do anything with the API.
If you want to stop injecting API tokens, you can remove the service account admission controller from the list (in apiserver options).
If you want to stop generating tokens completely, you can remove the private key argument from the controller manager start options.
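On reasonably recent clusters (the field was added around Kubernetes 1.6) you can also turn the automatic mount off per service account or per pod. A minimal sketch, with placeholder names and image:

# Service account that asks for its token not to be mounted automatically
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-api-access        # placeholder name
automountServiceAccountToken: false
---
# The same setting can be made per pod, and the pod-level value wins
apiVersion: v1
kind: Pod
metadata:
  name: demo                 # placeholder name
spec:
  serviceAccountName: no-api-access
  automountServiceAccountToken: false
  containers:
  - name: app
    image: nginx             # placeholder image

With this in place the /var/run/secrets/kubernetes.io/serviceaccount directory is simply not mounted into the container.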

Related

Programmatically create users in Kubernetes

I am looking for a way to create/retrieve/update/delete a user in Kubernetes, so that I can grant them specific permissions via RoleBindings.
Everything I have found requires more or less manual work on the master node. However, I imagine a service deployed in Kubernetes that I could call via an API to do the magic for me without any manual work. Is such a thing available?
From https://kubernetes.io/docs/reference/access-authn-authz/authentication/#users-in-kubernetes
All Kubernetes clusters have two categories of users: service accounts managed by Kubernetes, and normal users.
Kubernetes does not have objects which represent normal user accounts. Normal users cannot be added to a cluster through an API call.
Even though a normal user cannot be added via an API call, any user that presents a valid certificate signed by the cluster's certificate authority (CA) is considered authenticated.
So there is no API call to create a normal user. However, you can create service accounts and bind RoleBindings to them.
Another possibility is to create a TLS certificate, sign it with the Kubernetes cluster CA (using CSRs), and use it as a "normal user".
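A rough sketch of that certificate route (the user name jane, group dev-team, and file names are placeholders; the manifest assumes the certificates.k8s.io/v1 API available on recent clusters):

# Generate a key and a CSR for the user; the CN becomes the username, O the group
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -out jane.csr -subj "/CN=jane/O=dev-team"

# Wrap the CSR in a CertificateSigningRequest object (base64 -w0 is the GNU flag to avoid line wrapping)
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: jane
spec:
  request: $(base64 -w0 < jane.csr)
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF

# Approve it and retrieve the signed certificate
kubectl certificate approve jane
kubectl get csr jane -o jsonpath='{.status.certificate}' | base64 -d > jane.crt

# Grant permissions with a RoleBinding and add the user to a kubeconfig
kubectl create rolebinding jane-edit --clusterrole=edit --user=jane --namespace=default
kubectl config set-credentials jane --client-key=jane.key --client-certificate=jane.crt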

How to create basic authentication in kubernetes?

I want to create basic authentication in kubernetes. Every document says that I should create a CSV or other file and enter the username and password in it. But I do not want to use a file; I want a database, or Kubernetes itself, to handle it.
What can I do for basic authentication?
You can base your authentication on tokens if you don't want to use a static password file.
First option:
Service Account Tokens
A service account is an automatically enabled authenticator that uses signed bearer tokens to verify requests.
The plugin uses two optional flags: --service-account-key-file and --service-account-lookup.
Service accounts are usually created automatically by the API server and associated with pods running in the cluster through the ServiceAccount Admission Controller. Bearer tokens are mounted into pods at well-known locations, and allow in-cluster processes to talk to the API server. Accounts may be explicitly associated with pods using the serviceAccountName field of a PodSpec.
Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. To manually create a service account, simply use the kubectl create serviceaccount (NAME) command. This creates a service account in the current namespace and an associated secret.
The created secret holds the public CA of the API server and a signed JSON Web Token (JWT).
The signed JWT can be used as a bearer token to authenticate as the given service account. See above for how the token is included in a request. Normally these secrets are mounted into pods for in-cluster access to the API server, but can be used from outside the cluster as well.
There are some drawbacks: because service account tokens are stored in secrets, any user with read access to those secrets can authenticate as the service account. Be careful when granting permissions to service accounts and read capabilities for secrets.
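A minimal sketch of that flow (the account and binding names are placeholders; on clusters from roughly 1.24 onward a token secret is no longer auto-created, so kubectl create token is shown as the alternative):

# Create a service account and give it some permissions via RBAC
kubectl create serviceaccount build-robot
kubectl create clusterrolebinding build-robot-view --clusterrole=view --serviceaccount=default:build-robot

# Older clusters: read the auto-generated secret holding the JWT
kubectl get secret $(kubectl get serviceaccount build-robot -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d

# Newer clusters (roughly 1.24+): request a token explicitly
kubectl create token build-robot

# Use it as a bearer token against the API server
curl -k -H "Authorization: Bearer <token>" https://<api-server>:6443/api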
Second option:
Install OpenID Connect (the full documentation can be found here: oidc).
OpenID Connect (OIDC) is a superset of OAuth2 supported by some service providers, notably Azure Active Directory, Salesforce, and Google. The protocol’s main addition on top of OAuth2 is a field returned with the access token called an ID Token. This token is a JSON Web Token (JWT) with well known fields, such as a user’s email, signed by the server.
To identify the user, the authenticator uses the id_token (not the access_token) from the OAuth2 token response as a bearer token.
Since all of the data needed to validate who you are is in the id_token, Kubernetes doesn’t need to “phone home” to the identity provider. In a model where every request is stateless this provides a very scalable solution for authentication.
Kubernetes has no “web interface” to trigger the authentication process. There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first.
There’s no easy way to authenticate to the Kubernetes dashboard without using the kubectl proxy command or a reverse proxy that injects the id_token.
More information can be found here: kubernetes-authentication.
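As a hedged sketch of what the wiring looks like (the issuer URL, client ID, and user name are made-up values; on newer kubectl versions the built-in oidc auth provider is replaced by exec credential plugins, so check the documentation linked above):

# API server flags enabling the OIDC authenticator
kube-apiserver \
  --oidc-issuer-url=https://accounts.example.com \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-groups-claim=groups \
  --oidc-ca-file=/etc/kubernetes/pki/oidc-ca.pem

# kubectl credentials that send the id_token as the bearer token
kubectl config set-credentials alice \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://accounts.example.com \
  --auth-provider-arg=client-id=kubernetes \
  --auth-provider-arg=id-token=<id_token> \
  --auth-provider-arg=refresh-token=<refresh_token>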

What's the hostname of openshift master server for internal access?

If I want to access the REST API of the openshift master server from anywhere in my company I use https://master.test04.otc-test.company.com:8443 which works just fine.
Now I'm writing an admin application that is accessing the REST API and is deployed in this openshift cluster. Is there a generic name or environment variable in openshift to get the hostname of the master server?
Background: My admin application will be deployed on multiple openshift clusters which do not have the same URL. It would be very handy to have them autodiscover the hostname of the current master server instead of configuring this value for every deployment.
Use environment variables:
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT
Inside the container, unless mounting of the service account details has been disabled, you can also access the directory:
/var/run/secrets/kubernetes.io/serviceaccount
In this directory you will find a token file which contains the access token for the service account the container runs as. This means you can create a separate service account for the application in that project, and use RBAC to control what it can do via the REST API.
That same directory also has a namespace file so you know what project the container is running in, and files with certificates to use when accessing the REST API over a secure connection.
This is the recommended approach, rather than trying to pass an access token to your application through its configuration.
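For example, from inside a pod the pieces mentioned above combine roughly like this (the /api/v1/... path is just an illustrative endpoint):

# Standard in-cluster mount location for the service account details
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat $SA_DIR/token)
NAMESPACE=$(cat $SA_DIR/namespace)

# Call the REST API over TLS using the mounted CA and bearer token
curl --cacert $SA_DIR/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/$NAMESPACE/pods"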
Note that in OpenShift 4, if you need to access the OAuth server endpoint, it is on a separate URL from the REST API. In 3.x, they were on the same URL.
In 4.0, you can access the path /.well-known/oauth-authorization-server on the REST API URL, to get information about the separate OAuth server endpoint.
For additional information on giving REST API access to an application via a service account, see:
https://cookbook.openshift.org/users-and-role-based-access-control/how-do-i-enable-rest-api-access-for-an-application.html
Note that that page currently says you can use https://openshift.default.svc.cluster.local as the URL, but this doesn't work in OpenShift 4.

Recovering access after initially provisioning wrong scopes for an instance

I recently created a VM, but mistakenly gave the default service account Storage: Read Only permissions instead of the intended Read Write under "Identity & API access", so GCS write operations from the VM are now failing.
I realized my mistake, so following the advice in this answer, I stopped the VM, changed the scope to Read Write and started the VM. However, when I SSH in, I'm still getting 403 errors when trying to create buckets.
$ gsutil mb gs://some-random-bucket
Creating gs://some-random-bucket/...
AccessDeniedException: 403 Insufficient OAuth2 scope to perform this operation.
Acceptable scopes: https://www.googleapis.com/auth/cloud-platform
How can I fix this? I'm using the default service account, and don't have the IAM permissions to be able to create new ones.
$ gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* (projectnum)-compute@developer.gserviceaccount.com
I would suggest you try adding the scope "cloud-platform" to the instance by running the gcloud command below:
gcloud alpha compute instances set-scopes INSTANCE_NAME [--zone=ZONE] [--scopes=[SCOPE,…]] [--service-account=SERVICE_ACCOUNT]
As the scope, put "https://www.googleapis.com/auth/cloud-platform" since it gives full access to all Google Cloud Platform resources.
Here is the gcloud documentation.
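For example, assuming a hypothetical instance named my-vm in zone us-central1-a (the instance usually has to be stopped before its scopes can be changed):

# Stop the instance, switch it to the broad cloud-platform scope, start it again
gcloud compute instances stop my-vm --zone=us-central1-a
gcloud alpha compute instances set-scopes my-vm --zone=us-central1-a \
  --scopes=https://www.googleapis.com/auth/cloud-platform
gcloud compute instances start my-vm --zone=us-central1-a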
Try creating the Google Cloud Storage bucket with your user account.
Type gcloud auth login and open the link you are given; once there, copy the code and paste it into the command line.
Then run gsutil mb gs://bucket-name.
The security model has two things at play: API scopes and IAM permissions. Access is determined by the AND of them, so you need both an acceptable scope and sufficient IAM privileges to perform a given action.
API scopes are bound to the credentials. They are represented by a URL such as https://www.googleapis.com/auth/cloud-platform.
IAM permissions are bound to the identity. These are setup in the Cloud Console's IAM & admin > IAM section.
This means you can have 2 VMs with the default service account but both have different levels of access.
For simplicity you generally want to just set the IAM permissions and use the cloud-platform API auth scope.
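So besides the cloud-platform scope on the VM, the service account itself needs an IAM role that allows bucket creation. A hedged example (the project ID and service account e-mail are placeholders):

# Grant the default compute service account a storage role at the project level
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:123456789-compute@developer.gserviceaccount.com" \
  --role="roles/storage.admin"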
To check if you have this setup, go to the VM in the Cloud Console and you'll see something like:
Cloud API access scopes
Allow full access to all Cloud APIs
When you SSH into the VM, gcloud will by default be logged in as the VM's service account. I'd discourage logging in as yourself, since doing so more or less overrides gcloud's configuration to use the default service account.
Once you have this setup you should be able to use gsutil properly.

How does kubectl get authorized?

I have been confused for a long time about how the user of kubectl is authorized. I bootstrapped a k8s cluster from scratch and use 'RBAC' as the authorization mode. The user kubectl uses is authenticated by certificate first, then it should be authorized by RBAC when accessing the api-server. I did nothing about granting permissions to the user; however, it is allowed to access all the APIs (creating pods, listing pods, and so on).
Kubernetes has no built-in user management system; it expects you to implement that part on your own. A common way to implement user auth is to create a certificate signing request and have it signed by the cluster certificate authority. By reading that newly generated certificate, the cluster extracts the username and the groups it belongs to. After that, it applies the RBAC policies you implemented. So if the user can access everything, it is likely one of the following:
You are still using the admin user account instead of the newly created user account.
The user account you created belongs to an admin group
You did not enable RBAC correctly
This guide should help you with an easy example of user auth in Kubernetes: https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
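A few commands that help narrow down which of those three cases applies (the certificate path, user name, and manifest location are placeholders; the manifest path assumes a kubeadm-style static pod):

# See which identity your kubeconfig certificate carries (CN = user, O = groups)
openssl x509 -in ./user.crt -noout -subject

# Check what that user is allowed to do once RBAC is active
kubectl auth can-i --list --as=jane
kubectl auth can-i create pods --as=jane --namespace=default

# Confirm RBAC is actually enabled on the API server
grep authorization-mode /etc/kubernetes/manifests/kube-apiserver.yaml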