Why are Dataproc cluster properties not in the info panel? - google-cloud-dataproc

In the Google Cloud Console I have opened the "info" panel for a cluster but I only see labels and permissions. What I really want to see is the cluster properties, such as the Spark and YARN properties.
How can I see cluster properties?

The info panel is consistent across the Cloud Console, inside and outside of Dataproc: it usually shows only labels and IAM permissions.
To see cluster properties:
1. Open the cluster's details page in the Cloud Console (click the cluster name).
2. Click 'Configuration'.
3. Expand 'Properties'.
You can also list the properties with the gcloud command (cluster name and region are placeholders):
gcloud dataproc clusters describe CLUSTER_NAME --region=REGION
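Since describe prints the whole cluster resource, a --format expression can narrow the output to just the Spark/YARN properties; a minimal sketch, where the cluster name and region are placeholders:

```shell
# Show the full cluster config, including Spark and YARN properties
gcloud dataproc clusters describe my-cluster --region=us-central1

# Print only the properties map from the cluster's software config
gcloud dataproc clusters describe my-cluster --region=us-central1 \
    --format="yaml(config.softwareConfig.properties)"
```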

In addition to what James posted above, you can also click the 'REST equivalent' link at the bottom left to see all of this information in one go.

Related

Not getting Kubernetes cluster option in Create Server Group of Spinnaker

I am using Spinnaker version 1.26.6 which is deployed using Halyard.
I have added multiple Kubernetes accounts with provider version V2, following https://spinnaker.io/docs/setup/install/providers/kubernetes-v2/, and the service account has access to the entire cluster.
When I click Create Server Group in the UI, I don't get the option to select the Kubernetes accounts I added. I get something like this:
The account dropdown list is empty.
Is there any way to choose the Kubernetes accounts I have added and deployed applications to, instead of this prompt?
We had skipped the manual user-grants step described here:
https://spinnaker.io/docs/setup/productionize/persistence/clouddriver-sql/#database-setup
After adding the grants, it took some time to sync, and then we got the required results.
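For reference, the user grants from that setup page look roughly like the following sketch. The database, user names, and passwords here are illustrative only; check the linked Spinnaker doc for the exact statements:

```shell
# Run against the MySQL instance backing clouddriver.
# All names and passwords below are illustrative placeholders.
mysql -u root -p <<'SQL'
CREATE USER IF NOT EXISTS 'clouddriver_service'@'%' IDENTIFIED BY 'service-password';
CREATE USER IF NOT EXISTS 'clouddriver_migrate'@'%' IDENTIFIED BY 'migrate-password';
-- Runtime user needs read/write on clouddriver's tables
GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE, SHOW VIEW
  ON clouddriver.* TO 'clouddriver_service'@'%';
-- Migration user needs DDL rights for schema changes
GRANT ALL PRIVILEGES ON clouddriver.* TO 'clouddriver_migrate'@'%';
SQL
```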

Remove gcloud VPC-SC security perimeter when no organisation is set up

A Cloud Run project that worked two months ago suddenly started complaining about the default log bucket being outside the VPC-SC perimeter. However, this project is not in an organisation, so I don't understand how I can remove the perimeter.
gcloud builds submit --tag [tag]
Errors with:
ERROR: (gcloud.builds.submit)
The build is running, and logs are being written to the default logs bucket.
Unfortunately, the default logs bucket is always outside any VPC-SC security
perimeter, so this tool cannot stream the logs for you.
While I can't change the controls for you, I reviewed the documentation for your main question of how to remove a gcloud VPC-SC security perimeter. If you activated VPC accessible services and later decide that the VPC networks in your perimeter no longer need access to the Cloud Storage service, you can remove services from your service perimeter's VPC accessible services with the following command:
gcloud access-context-manager perimeters update example_perimeter \
--remove-vpc-allowed-services=example.storage.googleapis.com \
--policy=example.11271009391
If the issue persists, you can leave a comment so that we can continue helping you; here is also a link that helps troubleshoot issues related to VPC Service Controls.
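To find the policy and perimeter identifiers used in a command like the one above, you can list them first; a sketch, assuming you have the Access Context Manager API and suitable permissions (ORGANIZATION_ID and POLICY_ID are placeholders):

```shell
# List access policies for your organisation to find the policy ID
gcloud access-context-manager policies list --organization=ORGANIZATION_ID

# List the perimeters under that policy to find the perimeter name
gcloud access-context-manager perimeters list --policy=POLICY_ID
```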
Also, try updating your gcloud tools (gcloud components update).

How to enable Kubernetes API in GCP? not sorted out here by following the doc

I am learning GCP and wanted to create a Kubernetes cluster with an instance. Here is what I did, following the docs, with no success.
First I set the region to my default, us-east1-b:
xenonxie@cloudshell:~ (rock-perception-263016)$ gcloud config set compute/region us-east1-b
Updated property [compute/region].
Now proceed to create it:
xenonxie@cloudshell:~ (rock-perception-263016)$ gcloud container clusters create my-first-cluster --num-nodes 1
ERROR: (gcloud.container.clusters.create) One of [--zone, --region]
must be supplied: Please specify location.
So it seems the default region/zone us-east1-b is NOT picked up.
I then ran the same command again with the zone specified explicitly:
xenonxie@cloudshell:~ (rock-perception-263016)$ gcloud container clusters create my-first-cluster --num-nodes 1 --zone us-east1-b
WARNING: Currently VPC-native is not the default mode during cluster
creation. In the future, this will become the default mode and can be
disabled using --no-enable-ip-alias flag. Use
--[no-]enable-ip-alias flag to suppress this warning. WARNING: Newly
created clusters and node-pools will have node auto-upgrade enabled by
default. This can be disabled using the --no-enable-autoupgrade
flag. WARNING: Starting in 1.12, default node pools in new clusters
will have their legacy Compute Engine instance metadata endpoints
disabled by default. To create a cluster with legacy instance metadata
endpoints disabled in the default node pool,run clusters create with
the flag --metadata disable-legacy-endpoints=true. WARNING: Your Pod
address range (--cluster-ipv4-cidr) can accommodate at most 1008
node(s). This will enable the autorepair feature for nodes. Please see
https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for
more information on node autorepairs. ERROR:
(gcloud.container.clusters.create) ResponseError: code=403,
message=Kubernetes Engine API is not enabled for this project. Please
ensure it is enabled in Google Cloud Console and try again: visit
https://console.cloud.google.com/apis/api/container.googleapis.com/overview?project=rock-perception-263016
to do so.
From the warning/error it seems I need to enable the Kubernetes Engine API, and a link is provided. I clicked the link and enabled the API. Right after enabling it, I was prompted to create credentials before I could use the API.
Clicking into that and choosing the right API, as you can see from the screenshot, it doesn't give me a button to create the credential:
What is missing here?
Thank you very much.
Once the API is enabled, you can go ahead and create the cluster. The credentials are not used when you work through gcloud, since the SDK wraps the API call and uses your logged-in user's credentials.
As long as the Kubernetes Engine API shows as enabled, you should be able to re-run the same command and the cluster will be created. Most of the messages you saw are just warnings letting you know about default settings that you did not specify explicitly.
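Note also that us-east1-b is a zone, not a region, which is why setting compute/region did not make the error go away. The whole flow can be done from the CLI; a sketch using the project and cluster names from the question:

```shell
# Enable the Kubernetes Engine API for the current project
gcloud services enable container.googleapis.com

# us-east1-b is a zone, so set compute/zone (not compute/region)
gcloud config set compute/zone us-east1-b

# The location no longer needs to be passed explicitly
gcloud container clusters create my-first-cluster --num-nodes 1
```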

Mark Logic's cloud formation template not working in eu-west-1 region

I want to install the MarkLogic solution in the AWS eu-west-1 region using the CloudFormation template available at http://developer.marklogic.com/products/cloud/aws, but the stack fails to create the launch configuration.
I downloaded the "mlcluster.template" CloudFormation template from that link and created an AWS CloudFormation stack from it, but the stack failed during launch configuration setup. I am not able to fix the template. Any suggestions?
The problem is fixed; it was a configuration mistake.
For the IAM role parameter in the AWS CloudFormation stack I had to provide only the IAM role name, not the full ARN. Initially I provided the ARN, which probably confused the resource name when creating the Auto Scaling launch configuration.
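As a sketch of the fix from the CLI: pass the role name, not the ARN. The parameter key 'IamRole' and the role name below are illustrative, not taken from the actual template; check the template's Parameters section for the real key:

```shell
# Pass the IAM role NAME (e.g. my-marklogic-role), not the full
# ARN (arn:aws:iam::123456789012:role/my-marklogic-role).
# The parameter key 'IamRole' is illustrative; check the template.
aws cloudformation create-stack \
  --stack-name marklogic-cluster \
  --template-body file://mlcluster.template \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=IamRole,ParameterValue=my-marklogic-role
```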

Permissions on GKE cluster

After creating a standard GKE cluster in the Google Cloud Platform Console, I click on the cluster and find a 'Permissions' section in the cluster's settings, which looks like this:
I believe I have allowed API access to many of these services, so what I don't understand is why only 'Cloud Platform' shows 'enabled'. Is this perhaps what was enabled at creation of the cluster?
When selecting 'edit' you cannot 'enable' these services from here, so what exactly are these Permissions?
The GKE cluster is created with the permissions set in the 'Access scopes' section of the 'Advanced edit' tab, so only the APIs granted access in that section show as enabled.
These permissions denote the type and level of API access granted to the VMs in the node pool. Scopes determine the level of access your cluster's nodes have to specific GCP services as a whole. Please see this link for more information about access scopes.
In the 'Create a Kubernetes cluster' tab, click 'Advanced edit'. Another tab called 'Edit node pool' then pops up with more options. If you click 'Set access for each API', you will see the options to set these permissions.
'Permissions' are defined when the cluster is created and cannot be edited on the cluster afterwards. You can either create a new cluster with the appropriate permissions, or create a new node pool with the new scopes you need and then delete the old 'default' node pool, as described in this link.
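The node-pool replacement approach can be sketched as follows; the cluster name, pool names, zone, and scopes are illustrative:

```shell
# Create a new node pool with the scopes you need
gcloud container node-pools create new-pool \
  --cluster=my-cluster --zone=us-east1-b \
  --scopes=gke-default,https://www.googleapis.com/auth/devstorage.read_write

# Once workloads have moved over, delete the old default pool
gcloud container node-pools delete default-pool \
  --cluster=my-cluster --zone=us-east1-b
```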