I'm trying to configure properties for the Google Cloud SDK in a non-interactive environment (specifically, a Docker container), and I'd like to use environment variables to do it, because that seems simpler and more portable than volume-mounting config files. However, I can't find any documentation on what the environment variables should be called.
Is it possible to configure the Google Cloud SDK using environment variables, and how do I do so?
Clarification: For now, the only property I care about is the default project (core/project).
There is a set of environment variables (prefixed with CLOUDSDK_) that match some (all?) of the gcloud config properties.
I was unable to find these documented, but I'm aware of them through the kubectl Cloud Builder (see here) and this post.
I've submitted an issue asking Google to document these (more clearly).
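For example, the naming convention appears to be CLOUDSDK_<SECTION>_<PROPERTY> in upper case, so core/project should map to CLOUDSDK_CORE_PROJECT (the project ID below is a placeholder):

# assumed convention: CLOUDSDK_<SECTION>_<PROPERTY>, e.g. core/project
export CLOUDSDK_CORE_PROJECT=my-project-id
gcloud config list   # the project should now be picked up from the environment

In a Docker container, the same variable can be set with ENV in the Dockerfile or with docker run -e.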
I am configuring an SPI provider (specifically, for the x509cert-lookup SPI) in Keycloak, deployed on bare metal. The provider config documentation tells me to use the build command for selecting the provider and the start command to pass options to that provider.
However, from the docs about general configuration I conclude that all options can also be passed in a keycloak.conf file, and the build step is merely an optimization.
If I do not care much about optimizing startup time: Can the build step be eliminated altogether, putting all options into the config file for simplicity? Or is there anything so special about the providers that they must be set in the build step?
(Background: I am running a non-containerized bare metal setup where Keycloak is managed by systemd, and we've had situations where provider configuration was somehow lost between restarts.)
You're right when you mention that the extra build step prior to the start command is purely an optimization. In fact, when you call start, it performs a build!
When running inside a containerized environment, the optimization step is a nice feature. Here are the config options that can be set in the extra build step (if desired):
https://www.keycloak.org/server/all-config?f=build
If that's not your situation and, like you, you run on bare metal, then the additional build step doesn't buy you much.
Here's the most useful link to get you started:
https://www.keycloak.org/server/configuration
Beware that there is an order of precedence when setting configuration options:
command-line parameters
environment variables
user-created .conf file
keycloak.conf file located in the conf directory
Command-line parameters take precedence over environment variables, and so on down the list.
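As a minimal sketch (the provider name and option below are assumptions for an nginx reverse-proxy setup; adjust them to whatever x509cert-lookup provider you actually use), the same options the docs pass on the build/start command line can simply live in conf/keycloak.conf and be picked up by the implicit build that start performs:

# conf/keycloak.conf -- hypothetical example for the x509cert-lookup SPI
spi-x509cert-lookup-provider=nginx
spi-x509cert-lookup-nginx-ssl-client-cert=ssl-client-cert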
Hope this helps!
Consider an app which, in 12-factor style, receives its config in the form of a JSON document provided as an environment variable. The config contains secrets, so it is never stored on disk; instead, it is computed on the fly before starting the app, using something like sops or nunjucks.
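For context, outside the debugger the launch looks roughly like this (the sops invocation and the APP_CONFIG variable name are illustrative placeholders for whatever your setup uses):

# illustrative only: render the secret config and hand it to the app via the environment
APP_CONFIG="$(sops -d config.enc.json)" node app.js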
I am trying to debug such an app in VS Code. Is there any way to run some arbitrary script before launch and provide its output to the app as an environment variable?
I will accept answers for any language or runtime, but an approach that works with package.json scripts or Node.js binaries (e.g. Jest, Playwright) would be most helpful for me.
From this question and this article we learn that it is possible to create multiple configurations for the gcloud SDK.
But it seems that you have to manually switch between them by running:
gcloud config configurations activate <CONFIG_NAME>
But is there a way for each config to be automatically selected whenever I open a project workspace/folder in VS Code? How can I do this?
I've just tested activating a new config on a different VSCode project. That seems to update it globally. Now, all of my VSCode windows (different projects) are seeing the same activated config.
Isn't it dangerous? I mean, I could be uploading stuff to the cloud on a different project that I'm not aware of. How do people usually handle this? Do I need to run the activate command on every script before deploying something?
Unfortunately, I am not aware of such a possibility; however, I have found something interesting that may help you. There is the following extension:
GCP Project Switcher
The extension only allows you to change projects; however, as I looked into the code, it runs the gcloud config set project command under the hood. You could raise a request to add the possibility to switch the whole configuration instead of just the project, as it is a very similar approach.
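For reference, switching the project outside the extension is just the standard command (the project ID is a placeholder):

gcloud config set project my-project-id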
I am trying to follow Google's instructions on deploying a Cloud Function from the command line. I cloned their sample project, but when I used gcloud functions deploy to deploy it, it complained that it failed to find attribute [project]. I had to provide that manually.
Where in their docs do they talk about setting the project attribute? I must've missed it, and it seems pretty important ...
This answer is in addition to @Kolban's.
You can modify your gcloud settings at any time. Here are some common ones:
gcloud config set core/project my-project-id
gcloud config set compute/region us-central1
To list your projects:
gcloud projects list
To see your current settings:
gcloud config list
To see your authorization settings:
gcloud auth list
Then there are settings for individual services such as Cloud Run:
gcloud config set run/region us-central1
To get help and see the vast number of settings available:
gcloud config --help
All of this is documented. Just put a command into Google and a document link will appear. For example put this string into Google: "gcloud compute instances create". The first link takes you to the command documentation.
When you install the Google Cloud SDK (which provides the gcloud command), you have the opportunity to create one or more configurations (including the default). Think of these as "profiles" for your interaction with GCP. A configuration includes:
Your identity
Your default project
Your default region/zone
See the following article:
Initializing Cloud SDK
It sounds like you either didn't run gcloud init or didn't identify a project you wanted to use when you did run it. When you subsequently run gcloud commands and don't specify a project, then the current configuration project will be used. If you didn't set one, then that would explain the error encountered.
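For example, you can (re)run the interactive setup or inspect what is currently active like this (the project ID is a placeholder):

gcloud init                             # interactively create or re-initialize a configuration
gcloud config configurations list       # see the available configurations and which one is active
gcloud config set project my-project-id # or just set the project on the current configuration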
I am wondering if there is a way to pass a value for RAILS_ENV directly into the Torquebox server without going through a deployment descriptor; similar to how I can pass properties into Java with the -D option.
I have been wrestling with various deployment issues with Torquebox over the past couple of weeks. I think a large part of the problem has to do with packaging the gems into the Knob file, which is the most practical way of managing them in a Windows environment. I have tried archive deployment and expanded deployment, with and without an external deployment descriptor.
With an external deployment descriptor, I found the packaged Gem dependencies were not properly deployed and I received errors about missing dependencies.
When expanded, I had to fudge around a lot with the dependencies and what got included in the Knob, but eventually I got it to deploy. However, certain files in the expanded Knob were marked as failed (possibly duplicate dependencies?), though they did not affect the overall deployment. The problem was that when the server restarted, deployment would fail the second time, saying it could not redeploy one of the previously failed files.
The only approach I have found to work consistently is archive deployment without an external deployment descriptor. However, I still need a way to tell the application which environment it is running in. I have a different Torquebox instance for each environment and each runs only the one application, so it would be fairly reasonable to configure this at the server level.
Any assistance in this matter would be greatly appreciated. Thank you very much!
The solution I finally came to was to pass in RAILS_ENV as a Java property to the Torquebox server and then to set ENV['RAILS_ENV'] to this value in the Rails boot.rb initializer.
Step 1: Set Java Property
First, you will need to set a Rails environment Java system property for your Torquebox server. To keep with standard Java conventions, I called this rails.env.
Dependent on your platform and configuration, this change will need to be made in one of the following scripts:
Using JBoss Windows Service Wrapper: service.bat
Standalone environment: standalone.conf.bat (Windows) or standalone.conf (Unix)
Domain environment: domain.conf.bat (Windows) or domain.conf (Unix)
Add the following line to the appropriate file above to set this Java property:
set JAVA_OPTS=%JAVA_OPTS% -Drails.env=staging
The -D option is used for setting Java system properties.
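For the Unix standalone.conf (or domain.conf), the equivalent is plain shell syntax, appended near the other JAVA_OPTS lines (exact placement may vary with your JBoss version):

# standalone.conf (Unix): append the property to JAVA_OPTS
JAVA_OPTS="$JAVA_OPTS -Drails.env=staging"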
Step 2: Set ENV['RAILS_ENV'] based on Java Property
We want to set the RAILS_ENV as early as possible, since it is used by a lot of Rails initialization logic. Our first opportunity to inject application logic into the Rails Initialization Process is boot.rb.
See: http://guides.rubyonrails.org/initialization.html#config-boot-rb
The following line should be added to the top of boot.rb:
# boot.rb (top of the file)
ENV['RAILS_ENV'] = ENV_JAVA['rails.env'] if defined?(ENV_JAVA) && ENV_JAVA['rails.env']
This needs to be the first thing in the file, so Bundler can make intelligent decisions about the environment.
As you can see above, a seldom-mentioned feature of JRuby is that it conveniently exposes all Java system properties via the ENV_JAVA global map (mirroring the Ruby ENV map), so we can use it to access our Java system property.
We check that ENV_JAVA is defined (i.e. JRuby is being used), since we support multiple deployment environments.
I force the rails.env property to be used when present, as it appears that RAILS_ENV already has a default value at this point.