Permissions on GKE cluster - kubernetes

After creating a standard GKE cluster in the Google Cloud Platform Console, when I click on the cluster and look at its settings I find a 'Permissions' section, which looks like this:
What I don't understand is that I believe I have allowed API access to a lot of these services, so why does only 'Cloud Platform' show 'enabled'? Is this what was enabled at creation of the cluster, maybe?
When selecting 'edit' you cannot 'enable' these services from here, so what exactly are these Permissions?

The GKE cluster is created with the permissions that are set in the 'Access scopes' section of the 'Advanced edit' tab, so only the APIs whose access is enabled in that section will show as enabled.
These permissions denote the type and level of API access granted to the VMs in the node pool. Scopes determine the access level your cluster nodes have to specific GCP services as a whole. Please see this link for more information about access scopes.
In the 'Create a Kubernetes cluster' tab, click 'Advanced edit'. Another tab called 'Edit node pool' pops up with more options; if you click 'Set access for each API', you will see the option to set these permissions.
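If you prefer the command line, the same scopes can be set at creation time with gcloud; for example (cluster name, zone and scope list below are just placeholders):
# Create the cluster with an explicit, minimal set of access scopes
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --num-nodes 1 \
    --scopes "https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write"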
'Permissions' are defined when the cluster is created. You cannot edit them directly on the cluster after creation. You may want to create a new cluster with the appropriate permissions, or create a new node pool with the new scopes you need and then delete your old 'default' node pool, as described in this link.
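A rough sketch of that node-pool swap with gcloud (the pool, cluster and scope names here are placeholders, not taken from the question):
# Create a replacement node pool with the scopes you actually need
gcloud container node-pools create new-pool \
    --cluster my-cluster \
    --zone us-central1-a \
    --scopes "https://www.googleapis.com/auth/cloud-platform"
# Once workloads have moved over, delete the old default pool
gcloud container node-pools delete default-pool \
    --cluster my-cluster \
    --zone us-central1-a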

Related

Rename the EKS creator's IAM user name via aws cli

If we have a role change in the team, I read that the EKS creator can NOT be transferred. Can we instead rename the creator's IAM user name via the aws cli? Will that break EKS?
I only find ways to add a new user using the configmap, but this configmap doesn't have the root user in it.
$ kubectl edit configmap aws-auth --namespace kube-system
There is no way to transfer the root user of an EKS cluster to another IAM user. The only way to do this would be to delete the cluster and recreate it with the new IAM user as the root user.
Can we instead rename the creator's IAM user name via aws cli? Will that break EKS?
The creator record is immutable and managed within EKS. This record is simply not accessible via the CLI and cannot be amended (including deleted).
How do we know whether a cluster was created by an IAM role or an IAM user?
If you cannot find the identity (userIdentity.arn) in CloudTrail that invoked CreateCluster (eventName) for the cluster (responseElements.clusterName) within the last 90 days, you need to raise it with AWS Support to obtain the identity.
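A quick way to check, assuming the event is still within CloudTrail's 90-day window (the region and result limit are only examples):
# List CreateCluster events, then inspect userIdentity.arn in the matching event
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=CreateCluster \
    --region us-east-1 \
    --max-results 50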
Is it safe to delete the creator IAM user?
Typically, you start by deactivating the creator IAM user account if you are not sure about side effects. You can proceed to delete the account later, once you are confident it is safe to do so.
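Deactivating usually means disabling the user's credentials rather than removing the account; a sketch of that, assuming the creator is an IAM user (the user name and key ID are placeholders):
# Disable the creator's access key and remove console login instead of deleting the user
aws iam update-access-key --user-name eks-creator --access-key-id AKIAIOSFODNN7EXAMPLE --status Inactive
aws iam delete-login-profile --user-name eks-creator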
As already mentioned in the answer by Muhammad, it is not possible to transfer the root/creator role to another IAM user.
To avoid getting into the situation that you describe, or any other situation where the creator of the cluster should not stay root, it is recommended to not create clusters with IAM users but with assumed IAM roles instead.
This leads to the IAM role becoming the "creator", meaning that you can use IAM access management to control who can actually assume the given role and thus act as root.
You can either have dedicated roles for each cluster or one role for multiple clusters, depending on how you plan to do access management. The same limitation still applies later, however: you cannot switch the creator role afterwards, so this must be properly planned in advance.
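A minimal sketch of that flow (the role ARN, session name and cluster name are made up for illustration): assume the dedicated role first, export the temporary credentials it returns, and only then create the cluster, so that the role is recorded as the creator.
# Assume the dedicated cluster-admin role
aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/eks-cluster-admin \
    --role-session-name create-eks
# Export AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN from the output,
# then create the cluster under that role
eksctl create cluster --name my-cluster --region eu-west-1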

Not getting Kubernetes cluster option in Create Server Group of Spinnaker

I am using Spinnaker version 1.26.6 which is deployed using Halyard.
I have added multiple Kubernetes accounts with provider version V2, following https://spinnaker.io/docs/setup/install/providers/kubernetes-v2/, and the service account has full cluster access.
When clicking on Create Server Group in the UI, I don't get an option to select the Kubernetes accounts I added. I get something like this:
The Account dropdown list is empty.
Is there any way to get this prompt to let me choose the Kubernetes accounts I have added and deployed applications to?
It turned out we had skipped this manual step of user grants:
https://spinnaker.io/docs/setup/productionize/persistence/clouddriver-sql/#database-setup
After adding this, it took some time to sync, and we were able to get the required results.
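In case it helps, the 'Database setup' step on that page boils down to creating the clouddriver schema and granting the Clouddriver service and migration users access to it. Roughly something like the following (user names and password are illustrative; check the linked page for the exact grant list):
mysql -u root -p <<'SQL'
CREATE SCHEMA IF NOT EXISTS clouddriver;
CREATE USER IF NOT EXISTS 'clouddriver_service'@'%' IDENTIFIED BY 'CHANGE_ME';
CREATE USER IF NOT EXISTS 'clouddriver_migrate'@'%' IDENTIFIED BY 'CHANGE_ME';
GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE, SHOW VIEW ON clouddriver.* TO 'clouddriver_service'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, LOCK TABLES, EXECUTE, SHOW VIEW ON clouddriver.* TO 'clouddriver_migrate'@'%';
SQL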

Vault on k8s without admin rights

I am trying to install HashiCorp Vault in my Kubernetes cluster, which runs on an OpenShift environment, but unfortunately I don't have admin rights, and the IT department said it is not possible to grant them.
Is there another option for installing Vault that does not require admin rights on the Kubernetes cluster?
The error from the attempted installation is this one:
Error: rendered manifests contain a resource that already exists.
Unable to continue with install: could not get information about the
resource: customresourcedefinitions.apiextensions.k8s.io
"vaultsecrets.ricoberger.de" is forbidden: User "" cannot get resource
"customresourcedefinitions" in API group "apiextensions.k8s.io" at the
cluster scope.
It seems that you want to install custom resource definitions (CRDs), which are a cluster-wide resource. Since they are cluster-wide, this is typically something that is limited to cluster admins.
So apart from giving you full admin privileges, the IT operators could grant you specific permissions to create and edit custom resource definitions; maybe that will work.
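If they are open to that, a cluster admin could grant it with something along these lines (the role name and user are only examples):
# Run by a cluster admin: allow a specific user to manage CRDs cluster-wide
kubectl create clusterrole crd-manager \
    --verb=get,list,watch,create,update,patch,delete \
    --resource=customresourcedefinitions.apiextensions.k8s.io
kubectl create clusterrolebinding crd-manager-binding \
    --clusterrole=crd-manager \
    --user=developer-1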

Why are Dataproc cluster properties not in the info panel?

In the Google Cloud Console I have opened the "info" panel for a cluster but I only see labels and permissions. What I really want to see is the cluster properties, such as the Spark and YARN properties.
How can I see cluster properties?
The info panel is generally consistent across the Cloud Console, inside and outside of Dataproc: it usually shows labels and IAM permissions.
To see cluster properties:
Open a cluster's detail in the Cloud Console (click the cluster)
Click 'Configuration'
Expand 'Properties'
You can also use the gcloud command to list properties.
gcloud dataproc clusters describe CLUSTER_NAME --region=REGION
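If you only want the Spark/YARN settings, you can narrow the output down; for example (cluster name and region are placeholders):
gcloud dataproc clusters describe my-cluster --region=us-central1 \
    --format="yaml(config.softwareConfig.properties)"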
In addition to what James has posted above, you can also click on the 'REST equivalent' link at the bottom left to see all the information in one go.

How to enable the Kubernetes API in GCP? Not sorted out by following the doc

I am learning GCP and wanted to create a Kubernetes cluster with an instance; here is what I did and what I followed, with no success:
First I set the region to my default, us-east1-b:
xenonxie#cloudshell:~ (rock-perception-263016)$ gcloud config set compute/region us-east1-b
Updated property [compute/region].
Now proceed to create it:
xenonxie#cloudshell:~ (rock-perception-263016)$ gcloud container clusters create my-first-cluster --num-nodes 1
ERROR: (gcloud.container.clusters.create) One of [--zone, --region]
must be supplied: Please specify location.
So it seems the default region/zone us-east1-b is NOT picked up.
I then ran the same command again with the zone specified explicitly:
xenonxie#cloudshell:~ (rock-perception-263016)$ gcloud container clusters create my-first-cluster --num-nodes 1 --zone us-east1-b
WARNING: Currently VPC-native is not the default mode during cluster
creation. In the future, this will become the default mode and can be
disabled using --no-enable-ip-alias flag. Use
--[no-]enable-ip-alias flag to suppress this warning. WARNING: Newly
created clusters and node-pools will have node auto-upgrade enabled by
default. This can be disabled using the --no-enable-autoupgrade
flag. WARNING: Starting in 1.12, default node pools in new clusters
will have their legacy Compute Engine instance metadata endpoints
disabled by default. To create a cluster with legacy instance metadata
endpoints disabled in the default node pool,run clusters create with
the flag --metadata disable-legacy-endpoints=true. WARNING: Your Pod
address range (--cluster-ipv4-cidr) can accommodate at most 1008
node(s). This will enable the autorepair feature for nodes. Please see
https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for
more information on node autorepairs. ERROR:
(gcloud.container.clusters.create) ResponseError: code=403,
message=Kubernetes Engine API is not enabled for this project. Please
ensure it is enabled in Google Cloud Console and try again: visit
https://console.cloud.google.com/apis/api/container.googleapis.com/overview?project=rock-perception-263016
to do so.
From the warning/error it seems I need to enable the Kubernetes Engine API, and a link is provided to me already, wonderful. I then clicked the link and it took me to enable it, which I did. Right after enabling it, I was prompted to create credentials before I could use the API.
Clicking into it and choosing the right API, as you can see from the screenshot, there is no button to create the credentials:
What is missing here?
Thank you very much.
Once the API is enabled, you can go ahead and create the cluster. The credentials are not needed when you use gcloud, since the SDK wraps the API call and uses your logged-in user credentials.
As long as the Kubernetes Engine API shows as enabled, you should be able to run the same command you used and the cluster will be created. Most of those messages are just warnings letting you know about default settings that you did not specify.
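If you prefer to stay in Cloud Shell, the Kubernetes Engine API can also be enabled for the current project without visiting the console:
gcloud services enable container.googleapis.com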