How to generate `gcloud functions deploy` command from existing gcloud functions?

I already have a couple of functions deployed through the Google console. Now I want to deploy them through the command line. Is there any way to generate the corresponding gcloud functions deploy command (with as many arguments prefilled as possible) for the existing functions?
I'm asking because gcloud functions deploy supports a gazillion arguments, which would be a pain to figure out manually.

Interesting question and I think this would be useful functionality.
There isn't a solution for this today (although see Declarative Export for an analog using Terraform).
gcloud functions deploy is lossy; for example, if you deployed from a local filesystem using --source, I think that source location is irrecoverably lost.
That said, you can describe deployments as-is perfectly.
For example:
gcloud functions describe ${NAME} \
--region=${REGION} \
--project=${PROJECT}
You could use the output from this command as a template for generating the gcloud functions deploy command.
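As a rough illustration, here is a hedged sketch of that idea: it reads a few fields back out of describe using gcloud's standard --format="value(...)" output filtering and splices them into a deploy command. The field names follow the first-generation Cloud Functions API, and lossy details such as --source and the trigger flags still have to be filled in by hand.
RUNTIME=$(gcloud functions describe ${NAME} --region=${REGION} \
  --project=${PROJECT} --format="value(runtime)")
ENTRY_POINT=$(gcloud functions describe ${NAME} --region=${REGION} \
  --project=${PROJECT} --format="value(entryPoint)")
MEMORY=$(gcloud functions describe ${NAME} --region=${REGION} \
  --project=${PROJECT} --format="value(availableMemoryMb)")
# Echo rather than run, so the result can be reviewed and the unrecoverable
# arguments (--source, --trigger-http/--trigger-topic, ...) added manually.
echo gcloud functions deploy ${NAME} \
  --region=${REGION} --project=${PROJECT} \
  --runtime=${RUNTIME} --entry-point=${ENTRY_POINT} \
  --memory=${MEMORY}MB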
NOTE: If this weren't so service-specific, I'd be interested in building a prototype solution.

Can `oc create` behave in a "transactional"/"atomic" manner when asked to create _multiple_ objects on the cluster?

I have written a number of related OKD object definitions, each in its own YAML file. These together essentially make up an application deployment. I am doing something like the following to install my application on an OKD cluster, which works to my satisfaction when none of the objects already exist [on the cluster]:
oc create -f deploymentconfig.yaml,service.yaml,route.yaml,configmap.yaml,secret.yaml
However, if some of the objects oc create is asked to create already exist on the cluster, then oc create (naturally) refuses to re-create them, but it will still have created all the others that did not exist.
This isn't ideal when the objects I am creating were made to work in tandem, as parts of an application where they depend on one another -- the config map, for instance, is pretty much a hard requirement, since without it the container will fail to start properly (it lacks the configuration data provided through a mounted volume).
I'd like to know: can oc create be made to behave transactionally, so that either all of the objects specified on the command line are installed, or none are if some of them already exist or if there were errors?
I am aware OKD has template facilities and other features that may greatly help with application deployment, so if I am putting too much (misplaced) faith in oc create here, I'll take an alternative solution if oc create by design does not do "transactions". This is just me trying what seems simple from where I currently stand -- not being much of an OKD expert.
Unfortunately, there is no such thing.
In Kubernetes (and so in OpenShift), manifests are declarative, but they are declarative per resource.
You can use oc apply or oc replace to create or modify a single resource in an atomic way, but the same cannot be done for a group of resources, because Kubernetes doesn't see them as a unit.
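For instance, an idempotent (though still not transactional) variant of the command from the question:
# oc apply creates objects that are missing and updates ones that already
# exist, resource by resource; a failure partway through still leaves the
# earlier objects applied.
oc apply -f deploymentconfig.yaml,service.yaml,route.yaml,configmap.yaml,secret.yaml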
Even if you have a Template or a List, some resources may fail and you will end up with only part of the whole.
For this kind of thing, Helm is much more versatile and works the way you want with its --atomic flag.
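For example (assuming Helm 3; the release name and chart path are illustrative):
# --atomic rolls the entire release back if any resource fails to install
# or become ready, giving the all-or-nothing behavior asked about.
helm upgrade --install my-app ./my-app-chart --atomic --timeout 5m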

How do people test Kubernetes config locally (Kustomize)?

Scenario
We have a large, complex set of Kustomize overlays with replacements, CRDs, SOPS, etc.
We can generate the config locally/CI with
..\kustomize.exe build --load-restrictor=LoadRestrictionsNone .\path > sample-build.yaml
But this doesn't test the actual values that will be used. It doesn't detect things like missing config maps.
kubectl apply -f .\dev-01.sample.yaml --dry-run=server
could be used, but that would require a cloud cluster for each developer, or the developer having all the containers locally (e.g. in Docker); see the sketch after this question for one way to chain the two steps.
Are there any commands to 'expand' sample-build.yaml so I can see the actual environment variables that will be passed to the container at deploy time, but without a local cluster?
What do other people use to test Kustomize builds in CI?
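A hedged sketch that chains the two commands shown above: render the overlay, then let the API server validate the result. This catches missing ConfigMaps, bad references, and schema errors without persisting anything, though it still needs some reachable cluster (even a shared or throwaway one) for the server-side dry run.
# Render locally, then validate server-side without applying anything.
..\kustomize.exe build --load-restrictor=LoadRestrictionsNone .\path | kubectl apply --dry-run=server -f -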

What are generators in kubernetes kubectl?

When I want to generate YAML by running kubectl, it tells me that I should pass a --generator=something flag in the command.
For example, to get the deployment template via kubectl, I should run the below command:
kubectl run --generator=deployment/v1beta1 nginx --image=nginx --dry-run -o yaml
Without the --generator flag, the CLI complains in some way that I should pass the generator flag with a proper value (e.g. run-pod/v1).
My question is: what, essentially, is a generator? What does it do? Are generators some sort of object-creation template, or something else?
That was introduced in commit 426ef93, Jan. 2016 for Kubernetes v1.2.0-alpha.8.
The generators were described as:
Generators are kubectl commands that generate resources based on a set of inputs (other resources, flags, or a combination of both).
The point of generators is:
to enable users using kubectl in a scripted fashion to pin to a particular behavior which may change in the future.
Explicit use of a generator will always guarantee that the expected behavior stays the same.
to enable potential expansion of the generated resources for scenarios other than just creation, similar to how -f is supported for most general-purpose commands.
And:
Generator commands should obey the following conventions:
A --generator flag should be defined. Users then can choose between different generators, if the command supports them (for example, kubectl run currently supports generators for pods, jobs, replication controllers, and deployments), or between different versions of a generator so that users depending on a specific behavior may pin to that version (for example, kubectl expose currently supports two different versions of a service generator).
Generation should be decoupled from creation.
A generator should implement the kubectl.StructuredGenerator interface and have no dependencies on cobra or the Factory.
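For instance, with the (since-deprecated) kubectl run generators from that era, the same command produces a different kind of object depending on the generator chosen:
# Produces kind: Pod
kubectl run nginx --image=nginx --generator=run-pod/v1 --dry-run -o yaml
# Produces kind: Deployment
kubectl run nginx --image=nginx --generator=deployment/v1beta1 --dry-run -o yaml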

How to programmatically generate Kubernetes config from a GCP service account using the Python API

I already found a way to do this using the gcloud CLI:
gcloud auth activate-service-account --key-file=serviceaccount.json
gcloud container clusters get-credentials $clusterName \
--zone=$zone --project=$project
kubectl config view --minify --flatten
However, to eliminate the dependency on the gcloud CLI, is there any programmatic way to achieve a similar result, preferably using the APIs exposed in Google's Python client library?
My expected result is a portable config file that can be passed to any kubectl --kubeconfig=... command.
Update: I have found that the commands shown above result in a kubeconfig file that still depends on the gcloud CLI as an auth helper, probably to handle token expiration automatically. So any workarounds are welcome.
I wrote a shell script which does essentially what you are expecting:
https://gitlab.com/workshop21/open-source/rbac
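A hedged sketch of the general approach such a script can take (names are illustrative, and step 2 assumes a pre-1.24 cluster where service-account token secrets are auto-created; on newer clusters use kubectl create token instead): bootstrap with any working kubeconfig, create an in-cluster ServiceAccount, and assemble a portable kubeconfig around its token, which removes the gcloud auth-helper dependency.
# 1. Get the cluster endpoint and CA certificate (shown here via gcloud; the
#    same two values are exposed programmatically by the GKE API).
ENDPOINT=$(gcloud container clusters describe $clusterName --zone=$zone \
  --project=$project --format="value(endpoint)")
gcloud container clusters describe $clusterName --zone=$zone \
  --project=$project --format="value(masterAuth.clusterCaCertificate)" \
  | base64 -d > ca.crt
# 2. Create a ServiceAccount and read its long-lived token.
kubectl create serviceaccount deployer
kubectl create clusterrolebinding deployer-admin --clusterrole=cluster-admin \
  --serviceaccount=default:deployer
SECRET=$(kubectl get serviceaccount deployer -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret $SECRET -o jsonpath='{.data.token}' | base64 -d)
# 3. Assemble a self-contained kubeconfig that authenticates with the token
#    directly, with no gcloud auth helper involved.
kubectl config set-cluster gke --server=https://$ENDPOINT \
  --certificate-authority=ca.crt --embed-certs=true --kubeconfig=portable.config
kubectl config set-credentials deployer --token=$TOKEN --kubeconfig=portable.config
kubectl config set-context default --cluster=gke --user=deployer \
  --kubeconfig=portable.config
kubectl config use-context default --kubeconfig=portable.config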

Unable to create Dataproc cluster using custom image

I am able to create a google dataproc cluster from the command line using a custom image:
gcloud beta dataproc clusters create cluster-name --image=custom-image-name
as specified in https://cloud.google.com/dataproc/docs/guides/dataproc-images, but I am unable to find information about how to do the same using the v1beta2 REST API in order to create a cluster from within Airflow. Any help would be greatly appreciated.
Since custom images can theoretically reside in a different project (if you grant read/use access on that custom image to whatever service account you use for the Dataproc cluster), images currently always need a full URI, not just a short name.
When you use gcloud, there's syntactic sugar where gcloud will resolve the full URI automatically; you can see this in action if you use --log-http with your gcloud command:
gcloud beta dataproc clusters create foo --image=custom-image-name --log-http
If you created a cluster with gcloud, you can also run gcloud dataproc clusters describe on it to see the fully resolved custom-image URI.
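So the workaround is to pass the fully qualified Compute Engine image URI yourself, both on the CLI and in the v1beta2 request's imageUri field. A sketch, with the project and image names illustrative (confirm the exact URI for your image with --log-http or clusters describe as above):
# The same create call, but with the full image URI instead of the short name.
gcloud beta dataproc clusters create cluster-name \
  --image=https://www.googleapis.com/compute/v1/projects/my-project/global/images/custom-image-name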