How to create GCP Instance-Template with accessConfig using gcloud command - gcloud

I hope this isn't too trivial a question; hopefully someone can help.
I'd like to create an instance template using the create command.
When I run this:
gcloud compute instance-templates create jenkins-slave-instance-template-tmp1 --network-interface=network=default,network-tier=PREMIUM ...
I get the networkInterfaces section looking like this (using the describe command):
networkInterfaces:
- kind: compute#networkInterface
  name: nic0
  network: https://www.googleapis.com/compute/v1/projects/*******/global/networks/default
But when I create the template from the GCP UI console, I get this (which is what I actually need):
networkInterfaces:
- accessConfigs:
  - kind: compute#accessConfig
    name: External NAT
    networkTier: PREMIUM
    type: ONE_TO_ONE_NAT
  kind: compute#networkInterface
  name: nic0
  network: https://www.googleapis.com/compute/v1/projects/*******/global/networks/default
How can I add the accessConfig to the instance template at creation time? I can do this from the UI, but the equivalent gcloud compute instance-templates create command creates the template without the accessConfig entry.
Thanks for your help.

You can create an instance template with the gcloud CLI by using the instance-templates create command with default parameters:
gcloud compute instance-templates create INSTANCE_TEMPLATE_NAME
gcloud compute uses the following default values if you do not provide explicit template configuration/properties:
Machine type: the default machine type, for example n1-standard-1
Image: the latest Debian image
Boot disk: a new standard boot disk named after the VM
Network: the default VPC network
IP address: an ephemeral external IP address
If you then run
gcloud compute instance-templates describe INSTANCE_TEMPLATE_NAME
you will see accessConfigs in the network interface parameters.
If you want to provide the template configuration explicitly (machine type, boot disk, image, service account, etc.), you can specify those settings through the gcloud CLI; but if you want the accessConfigs parameters, you should omit the --network-interface flag
(--network-interface=network=default,network-tier=PREMIUM,nic-type=GVNIC) when running the instance-templates create command.
For example:
gcloud compute instance-templates create example-template-1 --machine-type=e2-standard-4 --image-family=debian-10 --image-project=debian-cloud --boot-disk-size=250GB
The above command creates the instance template with the mentioned configuration and gives you the accessConfigs parameters, since you didn't specify the --network-interface flag.
Refer to the documentation for creating a new instance template.
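If you still need a non-default network or network tier, one possible approach (a sketch only, using the top-level --network and --network-tier flags instead of --network-interface; verify against your gcloud version) is:
gcloud compute instance-templates create jenkins-slave-instance-template-tmp1 \
    --network=default \
    --network-tier=PREMIUM
Because neither --network-interface nor --no-address is given, the template should still get the default ephemeral external IP and therefore the accessConfigs entry; you can confirm with gcloud compute instance-templates describe jenkins-slave-instance-template-tmp1.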

Related

Create an AWS credentials file in a Kubernetes pod

Can I put environment variables into the AWS credentials file and have aws configure recognize and parse them? I have tried the file below; it looks like the variables are not parsed by aws configure.
[default]
aws_access_key_id=${TEST_KEY_ID}
aws_secret_access_key=${TEST_SECRET_KEY}
[profile2]
aws_access_key_id=${TEST2_KEY_ID}
aws_secret_access_key=${TEST2_SECRET_KEY}
If I cannot, how can I create an AWS credentials file in a Kubernetes pod? I know we can generate a file using a ConfigMap, but I do not want to put the key ID and secret key into a ConfigMap directly, since all the Kubernetes code is stored in a git repository.
Yes, you can put environment variables into the pod.
Then run these commands:
aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
aws configure set region $AWS_REGION
aws configure set output $AWS_OUTPUT
The config files will then be created automatically inside the pod.
You can refer to the yaml file here:
https://hub.docker.com/repository/docker/cuongquocvn/aws-cli-kubectl
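To supply those environment variables without committing them to git, a minimal sketch (the Secret name aws-credentials, its key names, and the container image are illustrative) is to keep them in a Kubernetes Secret and reference it from the pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: aws-cli-pod
spec:
  containers:
  - name: aws-cli
    image: amazon/aws-cli          # any image that ships the AWS CLI
    command: ["sleep", "infinity"]
    env:
    - name: AWS_ACCESS_KEY_ID
      valueFrom:
        secretKeyRef:
          name: aws-credentials    # hypothetical Secret, created separately
          key: access-key-id
    - name: AWS_SECRET_ACCESS_KEY
      valueFrom:
        secretKeyRef:
          name: aws-credentials
          key: secret-access-key
The Secret itself can be created out-of-band, e.g. kubectl create secret generic aws-credentials --from-literal=access-key-id=... --from-literal=secret-access-key=..., so the keys never land in the repository.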
I would suggest you create a new Kubernetes service account and then map it to a specific IAM role.
Reference: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
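With IAM roles for service accounts (IRSA), the mapping is an annotation on the service account; a sketch, assuming the IAM role my-pod-role already exists and the EKS cluster has an OIDC provider configured:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-access
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-pod-role   # placeholder account ID and role
Pods that set serviceAccountName: aws-access then receive temporary credentials via a projected web identity token, so no credentials file or long-lived keys are needed.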

Trying to connect to a Digital Ocean Kubernetes cluster - .kube/config: not a directory

I'm trying to connect to a Digital Ocean Kubernetes cluster using doctl, but when I run
doctl kubernetes cluster kubeconfig save <> I get an error saying .kube/config: not a directory. I've authenticated using doctl, and when I run doctl account get I see my account info. I'm confused as to what the problem is. Is this some sort of permission issue, or did I miss a config step somewhere?
kubectl (by default) stores its configuration in ${HOME}/.kube/config. It appears you don't have that file, and the command doesn't create it if it doesn't exist; I recommend creating the ${HOME}/.kube directory first, as doctl really ought to create the config file itself if it doesn't exist.
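A minimal sketch of that workaround (the cluster name is a placeholder for whatever you passed to the save command):
mkdir -p ${HOME}/.kube
doctl kubernetes cluster kubeconfig save <your-cluster-name>
kubectl config current-context   # should now show the DigitalOcean cluster's context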
kubectl facilitates interacting with multiple clusters, as multiple users, in multiple namespaces through the use of a tuple called a 'context', which combines a cluster with a user and a(n optional) namespace. The kubectl config command lets you switch between these easily.
After you're done with a cluster, you should generally (!) tidy up its entries in ${HOME}/.kube/config too, as these configs tend to grow over time.
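For example (standard kubectl config subcommands; the context name do-fra1-my-cluster is illustrative):
kubectl config get-contexts                        # list every context in the kubeconfig
kubectl config use-context do-fra1-my-cluster      # switch to a given context
kubectl config delete-context do-fra1-my-cluster   # tidy up a context you no longer need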
You can change the location of the kubectl config file using an environment variable (KUBECONFIG).
See Organizing Cluster Access Using kubeconfig Files
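For instance, to keep a cluster's config in a separate file (the path is illustrative):
export KUBECONFIG=${HOME}/.kube/do-cluster.yaml
kubectl get nodes   # kubectl now reads this file instead of ~/.kube/config
KUBECONFIG can also hold several paths separated by colons, in which case kubectl merges them.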

How to set node allocatable computation on kubernetes?

I'm reading the Reserve Compute Resources for System Daemons task in the Kubernetes docs, and it briefly explains how to reserve compute resources on a node using the kubelet command and the flags --kube-reserved, --system-reserved and --eviction-hard.
I'm learning on Minikube on macOS, and as far as I can tell, minikube is set up to be used with the kubectl command alongside the minikube command.
For local learning purposes on minikube I don't need to have this set (maybe it can't even be done on minikube), but how could this be done on a node in, say, a Kubernetes development environment?
This could be done by:
1. Passing a config file during cluster initialization, or initializing the kubelet with additional parameters via a config file (see the sketch after this list).
For cluster initialization using a config file, it should contain at least:
kind: InitConfiguration
kind: ClusterConfiguration
plus additional configuration types like:
kind: KubeletConfiguration
To get a basic config file you can use kubeadm config print init-defaults.
2. For a live cluster, consider reconfiguring it using the steps "Generate the configuration file" and "Push the configuration file to the control plane" described in "Reconfigure a Node's Kubelet in a Live Cluster".
3. I didn't test it, but for minikube please take a look here:
Note:
Minikube has a “configurator” feature that allows users to configure the Kubernetes components with arbitrary values. To use this feature, you can use the --extra-config flag on the minikube start command.
This flag is repeated, so you can pass it several times with several different values to set multiple options.
This flag takes a string of the form component.key=value, where component is one of the strings from the below list, key is a value on the configuration struct and value is the value to set.
Valid keys can be found by examining the documentation for the Kubernetes componentconfigs for each component. Here is the documentation for each supported configuration:
kubelet
apiserver
proxy
controller-manager
etcd
scheduler
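As a sketch of option 1, a kubeadm config file with the kubelet reservations could look roughly like this (the apiVersions may differ between kubeadm releases, and the reservation values are purely illustrative):
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeReserved:            # resources set aside for Kubernetes system daemons (kubelet, container runtime)
  cpu: "500m"
  memory: "512Mi"
systemReserved:          # resources set aside for OS system daemons (sshd, udev, ...)
  cpu: "500m"
  memory: "512Mi"
evictionHard:            # hard eviction thresholds
  memory.available: "200Mi"
Passing it with kubeadm init --config kubeadm-config.yaml applies the kubelet settings to the nodes joining the cluster.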
Hope this helped.
Additional community resources:
Memory usage in kubernetes cluster

Unable to create Dataproc cluster using custom image

I am able to create a Google Dataproc cluster from the command line using a custom image:
gcloud beta dataproc clusters create cluster-name --image=custom-image-name
as specified in https://cloud.google.com/dataproc/docs/guides/dataproc-images, but I am unable to find information about how to do the same using the v1beta2 REST API in order to create a cluster from within Airflow. Any help would be greatly appreciated.
Since custom images can theoretically reside in a different project (provided you grant read/use access on that custom image to whatever service account you use for the Dataproc cluster), images currently always need a full URI, not just a short name.
When you use gcloud, there's syntactic sugar where gcloud will resolve the full URI automatically; you can see this in action if you use --log-http with your gcloud command:
gcloud beta dataproc clusters create foo --image=custom-image-name --log-http
If you created a cluster with gcloud, you can also run gcloud dataproc clusters describe on it to see the fully-resolved custom image URI.
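As a sketch of what the fully-resolved URI looks like in the REST call (the project, region and image names are placeholders, and this assumes the imageUri field on the instance group configs of the v1beta2 clusters.create request):
POST https://dataproc.googleapis.com/v1beta2/projects/my-project/regions/us-central1/clusters
{
  "clusterName": "cluster-name",
  "config": {
    "masterConfig": {
      "imageUri": "https://www.googleapis.com/compute/v1/projects/my-project/global/images/custom-image-name"
    },
    "workerConfig": {
      "imageUri": "https://www.googleapis.com/compute/v1/projects/my-project/global/images/custom-image-name"
    }
  }
}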

Can't define network when creating instance group with gcloud

Whilst it is possible to select "network" and "subnetwork" when creating an instance group in Google Cloud Platform Console, I get the following when I try to assign a network to a newly created Instance Group using gcloud:
gcloud compute instance-groups unmanaged create my-instance-group-1 --network my-net1 --subnetwork my-vpc-dmz0 --zone europe-west1-b
ERROR: (gcloud.compute.instance-groups.unmanaged.create) unrecognized arguments:
  --network
  my-net1
  --subnetwork
  my-vpc-dmz0
These flags do not exist on that command.
For unmanaged instance groups specifically, you create a group and then add instances using gcloud compute instance-groups unmanaged add-instances. You would add the network or subnet (note that the flag is named --subnet, not --subnetwork) at the time that you create each instance, not while creating the instance group.
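A sketch of that flow (the instance name is illustrative):
# create an instance on the desired network/subnet
gcloud compute instances create my-instance-1 --network my-net1 --subnet my-vpc-dmz0 --zone europe-west1-b
# create the unmanaged group, then add the instance to it
gcloud compute instance-groups unmanaged create my-instance-group-1 --zone europe-west1-b
gcloud compute instance-groups unmanaged add-instances my-instance-group-1 --instances my-instance-1 --zone europe-west1-b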
Or, you could create a single instance template using gcloud compute instance-templates create --subnet my-subnet, and then create a managed instance group from that template. That might be closer to what you're trying to do.
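For the managed route, a sketch (template and group names are illustrative; --region tells the global template which region the subnet lives in):
gcloud compute instance-templates create my-template-1 --network my-net1 --subnet my-vpc-dmz0 --region europe-west1
gcloud compute instance-groups managed create my-instance-group-1 --template my-template-1 --size 2 --zone europe-west1-b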
More info here - https://cloud.google.com/compute/docs/instance-groups/