gcloud compute instances add-metadata set environment variable - metadata

I am trying to set an environment variable from a script added to an instance's metadata. I added the metadata from a file using the command:
gcloud compute instances add-metadata server-1 --metadata-from-file file=~/meta.sh
and the script is
#!/bin/sh
export SERVER="ide"
It seems to do nothing when I reboot the server.

The --metadata-from-file flag reads the values for the specified metadata keys from the specified files. In your example, you are assigning the contents of ~/meta.sh as the value for the metadata key 'file'.
In order to do something with 'file', you need to read its value from the instance (server-1) and act on it. There are some special metadata keys that Compute Engine reads at certain points in the instance's life-cycle. For example, 'startup-script' is a key that is read and executed during start-up. I think you intended to use this key. So, try this:
gcloud compute instances add-metadata "server-1" --metadata-from-file startup-script=~/meta.sh
For more details on metadata usage, run:
gcloud compute instances add-metadata --help
or go here:
https://cloud.google.com/compute/docs/metadata

A six-year-old question, but for future reference for myself and others:
Setting environment variables in the startup-script doesn't seem to work (the script runs as root in its own shell, so its exports don't outlive it), but what you can do is write them to your .bashrc - in my example, I set them like this:
gcloud compute instances add-metadata etl-pipelines --metadata startup-script='#! /bin/bash
echo "
export USER='${USER}'
export PASSWORD='${PASSWORD}'
" >> /home/USERNAME/.bashrc
It would of course be better to check whether that block was already appended to the file, but that wasn't relevant for me as I kill the VMs quite quickly anyway.
Alternatively, this SO answer describes how to use curl to get the env-vars directly from the metadata server (much like the sketch in the answer above), but I haven't looked further into it yet.

Related

Multiple active projects under single config? Or, multiple active configurations?

I have a set of clusters split between two projects, 1 and 2. Currently, I need to use gcloud init to switch between the two projects. Is there any possibility of having both projects active under a single configuration? Or is it possible to have two configurations simultaneously active? I would hate to have to use init every time to switch between the two. Thanks!
gcloud init should only be used to (re)initialize gcloud on a host. The only time I ever use it is when I install gcloud on a new machine.
gcloud uses a global config that can be manipulated with the gcloud config command. IMO (I've been using GCP for 9 years) the less you use gcloud config, the better for your experience.
I think you're much better placed specifying config explicitly with gcloud commands.
Every gcloud command can include e.g.:
--project=${PROJECT} to specify the project to use
--account=${ACCOUNT} to specify the gcloud auth'd account to use
--region=${REGION} or --zone=${ZONE} or --location=${LOCATION}
etc.
Using gcloud commands and explicitly setting flags to specify the project, account, location, etc. makes it trivial to flip between these, and often (though not always) in a more intentional way.
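A minimal sketch of flipping between projects this way (the project IDs here are hypothetical):
# Set once per shell; every command then states its target explicitly
PROJECT_1="my-project-1"
PROJECT_2="my-project-2"
gcloud container clusters list --project="${PROJECT_1}"
gcloud container clusters list --project="${PROJECT_2}"
No gcloud init, no hidden global state: each invocation says exactly where it acts.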

gcloud list instances in managed group sorted by creation time

I need to get the oldest instance from an instance group. I am using the following command:
gcloud compute instance-groups managed list-instances "instance-group-name" --region "us-central1" --format="value(NAME,ZONE,CREATION_TIMESTAMP)" --sort-by='~CREATION_TIMESTAMP'
But it seems --sort-by is not working, or I am using it a bit wrong.
Could you please suggest the right way?
It's probably creationTimestamp not CREATION_TIMESTAMP.
See: instances.list and the response body for the underlying field names.
It's slightly confusing, but gcloud requires you to use the field/property names of the underlying request/response types, not the output (column) names.
Another way to more readily determine this is to add --format=yaml or --format=json to gcloud compute instances list (or any gcloud command) to get an idea of what's being returned so that you can begin filtering and formatting it.
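For instance, a sketch against gcloud compute instances list, where creationTimestamp is one of the underlying field names:
# Peek at the raw response to learn the field names:
gcloud compute instances list --limit=1 --format=json
# Then sort newest-first using the underlying name:
gcloud compute instances list --sort-by="~creationTimestamp" \
  --format="value(name, zone.basename(), creationTimestamp)"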

How to check which gcloud project is active

To see all your gcloud projects you use command gcloud projects list. To switch to a specific project, you use gcloud config set project PROJECT_ID.
But what command can you use when you want to check which project is active? That is, on which project was the set command called last?
gcloud config get-value project
You can always type gcloud config --help
There's a very cool and well-hidden interactive tool: gcloud beta interactive that will help with gcloud command completion.
Personally, I recommend not using configurations to hold default values (for e.g. project) in order to (help) avoid "To which project did I just apply that command?" issues.
IMO, it's much better to be more explicit and I prefer:
gcloud ... --project=${PROJECT}
If, like me, you put the project value in a variable, you can still make mistakes but it is easier to avoid them.
You can also define sets of configurations and then use gcloud ... --configuration=${CONFIG}; this works too, as long as you don't set values in the default config.
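A minimal sketch of that approach (the configuration and project names are hypothetical):
# Create named configurations once, without switching to them:
gcloud config configurations create dev --no-activate
gcloud config configurations create prod --no-activate
# Then pick one per command instead of relying on the active default:
gcloud projects list --configuration=dev
gcloud compute instances list --configuration=prod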
You can use gcloud projects list --filter='lifecycleState:ACTIVE' to get all active projects.
Or you can list them all showing lifecyclestate and filter with grep or other bash stuff:
$ gcloud projects list --format="table(projectNumber,projectId,createTime.date(tz=LOCAL),lifecycleState)" --limit 10
PROJECT_NUMBER PROJECT_ID CREATE_TIME LIFECYCLE_STATE
310270846648 again-testing-no-notebook 2022-12-11T07:03:03 ACTIVE
[...]
Hope this helps.

Running kubectl commands in parallel with different credentials

I'm currently running two Kubernetes clusters, one on Google Cloud and one on IBM Cloud. To manage them I use kubectl. I've made a script that executes some commands on one of the clusters, then switches to the other and does some other work there.
This works fine as long as the script only runs in one process, however when run in parallel the credentials are sometimes overwritten by one process when in use by another and this obviously causes issues.
I therefore want to know if I can supply kubectl with a credentials file for every call, instead of storing it in an environment variable with kubectl config set-credentials.
Any help/solution is much appreciated.
If I need to work with multiple clusters using kubectl, I split my terminal and set KUBECONFIG for each split:
For my first split:
export KUBECONFIG=~/.kube/cluster1
For the second split:
export KUBECONFIG=~/.kube/cluster2
It is working pretty well, but this approach has one issue:
If you are using some kind of prompt that shows the current Kubernetes context, each split will show a different context, which can be misleading.
For scripts, I just change the value of KUBECONFIG in a for loop, to iterate over each cluster:
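A minimal sketch of that loop (the kubeconfig paths are hypothetical):
for config in "${HOME}/.kube/cluster1" "${HOME}/.kube/cluster2"; do
  # KUBECONFIG is scoped to this single invocation, so parallel
  # runs never clobber each other's credentials.
  KUBECONFIG="${config}" kubectl get nodes
done
kubectl also accepts the file per call via --kubeconfig, which avoids shared state in the same way:
kubectl --kubeconfig "${HOME}/.kube/cluster1" get pods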
Alternatively, you can use Kubefed to manage multiple clusters.
It takes one cluster as the main one and executes all the same requests against the second cluster.

How can you use the kubectl tool (in a stateful/local way) for managing multiple clusters from different directories simultaneously?

Is there a way you can run kubectl in a 'session' such that it gets its kubeconfig from a local directory rather than from ~/.kubeconfig?
Example Use Case
Given the abstract nature of the question, it's worth describing why this may be valuable with an example. Suppose someone had an application, call it 'a', and four Kubernetes clusters, each running 'a'. They might have a simple script that runs some kubectl actions in each cluster to smoke test a new deployment of 'a': for example, deploying the app and then checking how many copies of it were autoscaled in each cluster.
Example Solution
As in git, maybe there could be a "try to use a local kubeconfig file if you can find one" mode, set as a git-style global setting:
kubectl global set-precedence local-kubectl
Then, in one terminal:
cd firstcluster
cat << EOF > kubeconfig
firstcluster
...
EOF
kubectl get pods
p4
Then, in another terminal:
cd secondcluster/
cat << EOF > kubeconfig
secondcluster
...
EOF
kubectl get pods
p1
p2
p3
Thus, the exact same kubectl commands (without having to set context) actually run against new clusters depending on the directory you are in.
Some ideas for solutions
One idea I had for this was to write a kubectl-context plugin which somehow made kubectl always check for a local kubeconfig before running, setting the context behind the scenes if it could, to a context in a global config that matched the directory name (a rough shell version of this is sketched after this list).
Another idea I've had along these lines would be to create different users which each had different kubeconfig home files.
And of course, using something like virtualenv, you might be able to arrange for each environment to carry its own kubeconfig value.
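As a rough shell-level sketch of the wrapper idea from the list above (the function, and the convention of a file literally named kubeconfig in the current directory, are my assumptions, not anything kubectl defines):
# Add to ~/.bashrc: prefer a directory-local kubeconfig file when present,
# otherwise fall back to the normal ~/.kube/config behaviour.
kubectl() {
  if [ -f "${PWD}/kubeconfig" ]; then
    KUBECONFIG="${PWD}/kubeconfig" command kubectl "$@"
  else
    command kubectl "$@"
  fi
}
With that sourced, the two-terminal example above behaves as described: whichever directory you are in, its kubeconfig wins.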
Final thought
Ultimately, I think the goal here is to subvert the idea that a ~/.kubeconfig file has any particular meaning, and instead look at ways that many kubeconfig files can be used on the same machine; not just via the --kubeconfig option, but in such a way that state is still maintained in a directory-local manner.
AFAIK, the config file is under ~/.kube/config and not ~/.kubeconfig. I suppose you are looking for opinions on your question, so I'll say you gave me the great idea of creating kubevm, inspired by awsvm for the AWS CLI, chefvm for managing multiple Chef servers, and rvm for managing multiple Ruby versions.
So, in essence, you could have a kubevm setup that switches between different ~/.kube configs. You can use a CLI like this:
# Use a specific config
kubevm use {YOUR_KUBE_CONFIG|default}
# or
kubevm YOUR_KUBE_CONFIG
# Set your default config
kubevm default YOUR_KUBE_CONFIG
# List your configurations, including current and default
kubevm list
# Create a new config
kubevm create YOUR_KUBE_CONFIG
# Delete a config
kubevm delete YOUR_KUBE_CONFIG
# Copy a config
kubevm copy SRC_CONFIG DEST_CONFIG
# Rename a config
kubevm rename OLD_CONFIG NEW_CONFIG
# Open a config directory in $EDITOR
kubevm edit YOUR_KUBE_CONFIG
# Update kubevm to the latest
kubevm update
Let me know if it's useful!
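If it helps make the idea concrete, the core of kubevm use could be as small as a symlink swap; a sketch, assuming configs are kept under ~/.kube/configs/ (a layout I am inventing here):
kubevm_use() {
  # Repoint ~/.kube/config at the chosen file; kubectl picks it up as usual.
  ln -sfn "${HOME}/.kube/configs/$1" "${HOME}/.kube/config"
}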