Can't define network when creating instance group with gcloud

Whilst it is possible to select "network" and "subnetwork" when creating an instance group in Google Cloud Platform Console, I get the following when I try to assign a network to a newly created Instance Group using gcloud:
gcloud compute instance-groups unmanaged create my-instance-group-1 --network my-net1 --subnetwork my-vpc-dmz0 --zone europe-west1-b
ERROR: (gcloud.compute.instance-groups.unmanaged.create) unrecognized arguments:
--network
my-net1
--subnetwork
my-vpc-dmz0

These flags do not exist on that command.
For unmanaged instance groups specifically, you create a group and then add instances using gcloud compute instance-groups unmanaged add-instances. You would add the network or subnet (note that the flag is named --subnet, not --subnetwork) at the time that you create each instance, not while creating the instance group.
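For example, a minimal sketch using the names from the question (my-instance-1 is an illustrative instance name):
# Create the instance with its network placement first
gcloud compute instances create my-instance-1 \
    --network my-net1 \
    --subnet my-vpc-dmz0 \
    --zone europe-west1-b
# Then create the group and add the instance to it
gcloud compute instance-groups unmanaged create my-instance-group-1 \
    --zone europe-west1-b
gcloud compute instance-groups unmanaged add-instances my-instance-group-1 \
    --instances my-instance-1 \
    --zone europe-west1-b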
Or, you could create a single instance template using gcloud compute instance-templates create --subnet my-subnet, and then create a managed instance group from that template. That might be closer to what you're trying to do.
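A sketch of that approach (the template and group names are illustrative; --region is needed so the subnet can be resolved):
gcloud compute instance-templates create my-template \
    --network my-net1 \
    --subnet my-vpc-dmz0 \
    --region europe-west1
gcloud compute instance-groups managed create my-managed-group \
    --template my-template \
    --size 2 \
    --zone europe-west1-b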
More info here - https://cloud.google.com/compute/docs/instance-groups/

Related

How to create GCP Instance-Template with accessConfig using gcloud command

I hope this is not too incidental for someone to help.
I would like to create an instance template using the create command.
When I run this:
gcloud compute instance-templates create jenkins-slave-instance-template-tmp1 --network-interface=network=default,network-tier=PREMIUM ...
I get the networkInterfaces in this way (using the describe command):
networkInterfaces:
- kind: compute#networkInterface
  name: nic0
  network: https://www.googleapis.com/compute/v1/projects/*******/global/networks/default
But when creating it via the GCP Console UI, I get it (as I actually need it):
networkInterfaces:
- accessConfigs:
  - kind: compute#accessConfig
    name: External NAT
    networkTier: PREMIUM
    type: ONE_TO_ONE_NAT
  kind: compute#networkInterface
  name: nic0
  network: https://www.googleapis.com/compute/v1/projects/*******/global/networks/default
How can I add the accessConfig to the instance template at creation time? (I can do this from the UI, but the equivalent gcloud compute instance-templates create creates it without the accessConfig entry.)
Thanks for your help.
You can create an instance template with the gcloud CLI by using the instance-templates create command with default parameters.
gcloud compute instance-templates create INSTANCE_TEMPLATE_NAME
gcloud compute uses the following default values if you do not provide explicit template configuration/properties.
Machine type: the machine type—for example, n1-standard-1
Image: the latest Debian image
Boot disk: a new standard boot disk named after the VM
Network: the default VPC network
IP address: an ephemeral external IP address
If you then run the gcloud compute instance-templates describe INSTANCE_TEMPLATE_NAME command, you will see accessConfigs among the network interface parameters.
If you want to provide the template configuration explicitly (machine type, boot disk, image properties, service account, etc.), you can specify those through the gcloud CLI; but if you want the accessConfigs entry, you should omit the --network-interface parameter (--network-interface=network=default,network-tier=PREMIUM,nic-type=GVNIC) when running the instance-templates create command.
For example:
gcloud compute instance-templates create example-template-1 --machine-type=e2-standard-4 --image-family=debian-10 --image-project=debian-cloud --boot-disk-size=250GB
The above command will create the instance template with the mentioned configuration and will give you the accessConfigs parameters, since you didn't pass --network-interface.
Refer to the documentation for creating a new instance template.
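To verify the result, you can inspect just the network interfaces of the new template; the --format projection shown here is optional and only trims the output:
gcloud compute instance-templates describe example-template-1 \
    --format="yaml(properties.networkInterfaces)"
You should see an accessConfigs entry with type ONE_TO_ONE_NAT, matching the UI-created template shown in the question.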

Unable to Bind Google Service Account to Kubernetes Service Account

I am trying to bind my Google Service Account (GSA) to my Kubernetes Service Account (KSA) so I can connect to my Cloud SQL database from Google Kubernetes Engine (GKE). I am currently following the guide in Google's documentation (https://cloud.google.com/sql/docs/sqlserver/connect-kubernetes-engine).
Currently I have a cluster running on GKE named MY_CLUSTER, a GSA with the correct Cloud SQL permissions named MY_GCP_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com, and a KSA named MY_K8S_SERVICE_ACCOUNT. I am trying to bind the two accounts using the following command.
gcloud iam service-accounts add-iam-policy-binding \
--member "serviceAccount:PROJECT_ID.svc.id.goog[K8S_NAMESPACE/MY_K8S_SERVICE_ACCOUNT]" \
--role roles/iam.workloadIdentityUser \
MY_GCP_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com
However when I run the previous command I get the following error message.
ERROR: Policy modification failed. For a binding with condition, run "gcloud alpha iam policies lint-condition" to identify issues in condition.
ERROR: (gcloud.iam.service-accounts.add-iam-policy-binding) INVALID_ARGUMENT: Identity Pool does not exist (PROJECT_ID.svc.id.goog). Please check that you specified a valid resource name as returned in the `name` attribute in the configuration API.
Why am I getting this error when I try to bind my GSA to my KSA?
In order to bind your Google Service Account (GSA) to your Kubernetes Service Account (KSA), you need to enable Workload Identity on the cluster. This is explained in more detail in Google's documentation (https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity).
To enable Workload Identity on an existing cluster you can run:
gcloud container clusters update MY_CLUSTER \
--workload-pool=PROJECT_ID.svc.id.goog
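Once Workload Identity is enabled, the add-iam-policy-binding command from the question should succeed. As the next step in the same guide, annotate the KSA so GKE knows which GSA it maps to:
kubectl annotate serviceaccount MY_K8S_SERVICE_ACCOUNT \
    --namespace K8S_NAMESPACE \
    iam.gke.io/gcp-service-account=MY_GCP_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com
Note that on an existing cluster the node pools may also need to be updated (or new ones created) to use the GKE metadata server before pods can authenticate through Workload Identity.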

Identify redundant GCP resources created by Kubernetes

When creating various Kubernetes objects in GKE, associated GCP resources are automatically created. I'm specifically referring to:
forwarding-rules
target-http-proxies
url-maps
backend-services
health-checks
These have names such as k8s-fw-service-name-tls-ingress--8473ea5ff858586b.
After deleting a cluster, these resources remain. How can I identify which of these are still in use (by other Kubernetes objects, or another cluster) and which are not?
There is no easy way to identify which added GCP resources (LB, backend, etc.) are linked to which cluster. You need to manually go into these resources to see what they are linked to.
If you delete a cluster that has additional resources attached, you also have to delete these resources manually. For now, I would suggest taking note of which added GCP resources relate to which cluster, so that you know what to delete when the time comes to delete the GKE cluster.
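One tip that can help when inspecting them: in my experience, backend services created by the GKE ingress controller carry a description field that references the originating Kubernetes object. A hedged sketch (the resource name below is a placeholder):
gcloud compute backend-services describe k8s-be-12345--0123456789abcdef \
    --global --format="value(description)"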
I would also suggest creating a feature request here to ask for either a better-defined naming convention that links the additional GCP resources to the cluster that created them, and/or the ability to automatically delete all additional resources linked to a cluster when deleting that cluster.
I would recommend looking at https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/14-cleanup.md
You can easily delete all the objects using the Google Cloud SDK in the following manner:
gcloud -q compute firewall-rules delete \
kubernetes-the-hard-way-allow-nginx-service \
kubernetes-the-hard-way-allow-internal \
kubernetes-the-hard-way-allow-external \
kubernetes-the-hard-way-allow-health-check
{
gcloud -q compute routes delete \
kubernetes-route-10-200-0-0-24 \
kubernetes-route-10-200-1-0-24 \
kubernetes-route-10-200-2-0-24
gcloud -q compute networks subnets delete kubernetes
gcloud -q compute networks delete kubernetes-the-hard-way
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
--region $(gcloud config get-value compute/region)
gcloud -q compute target-pools delete kubernetes-target-pool
gcloud -q compute http-health-checks delete kubernetes
gcloud -q compute addresses delete kubernetes-the-hard-way
}
This assumes you named your resources 'kubernetes-the-hard-way'. If you do not know the names, you can also use gcloud's filter mechanisms to find the resources (by name pattern, description, etc.) before removing them, as in the sketch below.
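A sketch of that filtering approach, using gcloud's --filter with a regex match on the auto-generated k8s- name prefix mentioned in the question (adjust the pattern to your own naming):
gcloud compute forwarding-rules list --filter="name~'^k8s-'"
gcloud compute target-http-proxies list --filter="name~'^k8s-'"
gcloud compute url-maps list --filter="name~'^k8s-'"
gcloud compute backend-services list --filter="name~'^k8s-'"
gcloud compute health-checks list --filter="name~'^k8s-'"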

AWS ecs scheduled task with cloudwatch

I am trying to create a scheduled task with CloudWatch.
I am using this page
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-events-rule-target.html
The problem I see is that when I run a task normally, AWS asks for:
vpc
subnets
Launch type
BUT when I use a CloudWatch target it doesn't ask for the VPC, subnets, etc. Why is that?
CloudFormation has not been updated to accommodate some Fargate functionality yet. If you get an error while trying to deploy an ECS task from CloudFormation,
try using the command line interface (aws events put-targets) instead, which allows you to add a target that contains the required ECS parameters for launch type and network config.
Here is an example of how I configured my ECS tasks to be deployed from the CLI instead of CloudFormation:
1. Add vpc/subnet config to a variable, NETWORK_CONFIGURATION:
NETWORK_CONFIGURATION='{"awsvpcConfiguration":{"AssignPublicIp":"ENABLED","SecurityGroups":["'${AWS_NETWORKCONFIG_SECURITY_GROUP}'"],"Subnets":["'${AWS_NETWORKCONFIG_SUBNET}'"]}}'
2. Run the following command to deploy your task, which will take the vpc config from the variable declared above:
aws events put-targets \
--rule events-rule--${TASK_NAME} \
--targets '{"Arn":"arn:aws:ecs:'${AWS_REGION}':'${AWS_ACCOUNT_ID}':cluster/ecs-cluster-1","EcsParameters":{"LaunchType":"FARGATE","NetworkConfiguration":'${NETWORK_CONFIGURATION}',"TaskCount": 1,"TaskDefinitionArn": "arn:aws:ecs:'${AWS_REGION}':'${AWS_ACCOUNT_ID}':task-definition/ecs-task-'${TASK_NAME}'"},"Id": "ecs-targets-'${TASK_NAME}'","RoleArn": "arn:aws:iam::'${AWS_ACCOUNT_ID}':role/ecsEventsRole"}'
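You can then confirm that the target was attached to the rule (same rule name as above):
aws events list-targets-by-rule --rule events-rule--${TASK_NAME}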

Unable to create Dataproc cluster using custom image

I am able to create a google dataproc cluster from the command line using a custom image:
gcloud beta dataproc clusters create cluster-name --image=custom-image-name
as specified in https://cloud.google.com/dataproc/docs/guides/dataproc-images, but I am unable to find information about how to do the same using the v1beta2 REST API in order to create a cluster from within Airflow. Any help would be greatly appreciated.
Since custom images can theoretically reside in a different project (if you grant read/use access on the custom image to whatever service account you use for the Dataproc cluster), images currently always need a full URI, not just a short name.
When you use gcloud, there's syntactic sugar where gcloud will resolve the full URI automatically; you can see this in action if you use --log-http with your gcloud command:
gcloud beta dataproc clusters create foo --image=custom-image-name --log-http
If you created a cluster with gcloud, you can also run gcloud dataproc clusters describe on it to see the fully-resolved custom image URI.
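As a hedged sketch of the REST route (PROJECT_ID, the zone, and the cluster/image names are placeholders; imageUri is the instance group config field that takes the fully-resolved URI): resolve the image's full URI first, then pass it in the v1beta2 clusters.create request body.
# Resolve the short image name to its full URI
IMAGE_URI=$(gcloud compute images describe custom-image-name --format="value(selfLink)")
# Create the cluster via the v1beta2 REST endpoint
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    https://dataproc.googleapis.com/v1beta2/projects/PROJECT_ID/regions/global/clusters \
    -d '{
      "clusterName": "cluster-name",
      "config": {
        "gceClusterConfig": {"zoneUri": "europe-west1-b"},
        "masterConfig": {"imageUri": "'"${IMAGE_URI}"'"},
        "workerConfig": {"imageUri": "'"${IMAGE_URI}"'"}
      }
    }'
From Airflow, the same imageUri values can be supplied in the cluster config that the Dataproc cluster-creation operator sends to this API.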