Consider the case in which some script and/or Makefile is running a series of gcloud commands. While waiting for those commands to complete, the user goes to another shell and changes the gcloud configuration to refer to a different project. Hopefully, the script/Makefile was written well enough that all necessary gcloud invocations include "--project", and no harm will be done by stray gcloud commands running in the wrong project. Is there any gcloud configuration that can help to prevent problems in that scenario? Perhaps a config setting to force gcloud commands to fail if --project is not specified?
You could deal with this issue by using multiple gcloud configurations; each configuration can have a different value for "project". Only one configuration can be active at a time, so on its own that leaves you with the same problem, but you can select a configuration for a single gcloud invocation with the --configuration flag. This means that if your Makefile uses the --configuration flag for each gcloud invocation, it is immune to a user going to another shell and changing the project, as long as that user is not modifying the same configuration the Makefile is using. "gcloud topic configurations" has documentation on how to use configurations.
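As a rough sketch (the configuration name makefile-config and the project ID are made up for illustration), the Makefile's gcloud calls might look like this:

gcloud config configurations create makefile-config
gcloud config set project my-build-project --configuration=makefile-config
gcloud compute instances list --configuration=makefile-config

Because every invocation names its configuration explicitly, switching the active configuration or project in another shell does not affect these commands.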
Related
I ran gcloud components update on Ubuntu. After updating, I received the following warning:
WARNING: There are other instances of Google Cloud Platform tools on your system PATH.
Please remove the following to avoid confusion or accidental invocation:
/usr/bin/gsutil
What will happen if I do not remove it?
How should I remove it?
I am able to configure Fastlane locally and it works well from the terminal, but when I try to run it with Jenkins (I have configured Jenkins locally on my MacBook) it fails every time (I have also reinstalled Ruby 2.5.0).
Any help would be highly appreciated.
I am attaching a screenshot for your reference.
Jenkins runs its build scripts as a dedicated user, 'jenkins'. You might want to check whether the 'jenkins' user has the dependencies required to run fastlane installed, e.g. Ruby.
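One quick way to check is to run the relevant tools as the 'jenkins' user; this sketch assumes the user is literally named 'jenkins' and that sudo is available on the build machine:

sudo -u jenkins -H bash -lc 'ruby -v; which fastlane; fastlane --version'

If any of these fail, or report different versions than your own shell, the 'jenkins' user is missing a dependency or is picking it up from a different location.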
Have you set up your PATH in Jenkins? In the configuration of your node, in the environment variables section, you'll want to include /usr/local/bin/ in Jenkins's PATH by entering /usr/local/bin/:$PATH.
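To confirm which PATH Jenkins actually sees, you could add a temporary "Execute shell" build step such as the following (purely a diagnostic sketch, not part of the fix itself):

echo "$PATH"
which ruby fastlane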
For any gcloud component I have needed, I installed it on Cloud Shell just once and could use it each time I opened Cloud Shell. But with the cbt component for Bigtable, each time I close the browser the cbt tool is no longer installed and I have to re-install it. The problem does not appear immediately: generally I have to install it each day; it stays among the installed components for the whole day, and the day after it is no longer installed!
Any ideas?
This problem is caused by Google terminating idle Cloud Shell instances. Termination happens after about 60 minutes of inactivity.
Only data stored in the $HOME directory persists after a new Cloud Shell is launched.
To install cbt the following steps are recommended:
gcloud components update
gcloud components install cbt
Since these components are not being installed in $HOME, they do not persist after Cloud Shell is terminated.
There are two methods that I recommend to solve this problem:
Google Cloud Shell is a Docker container. You can modify the Docker image to customize it to fit your needs. This method will allow you to install packages, tools, etc. that are not located in your $HOME directory.
Modify .bashrc to run a script located in the $HOME directory to install cbt each time a new instance is created.
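A minimal sketch of the second approach, assuming the snippet is appended to ~/.bashrc (--quiet simply suppresses the interactive confirmation prompt):

if ! command -v cbt >/dev/null 2>&1; then
  gcloud components install cbt --quiet
fi

Because ~/.bashrc lives in $HOME, it persists across terminations, and each new Cloud Shell session re-installs cbt only when it is missing.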
Note: It appears as of now that cbt is included in the default Cloud Shell instance. This answer should help others understand what is happening and be able to install other programs, tools, etc. persistently.
I have a Gradle build, which runs a few tests on our application. Currently, the tests that store assets in MongoDB fail if the developer forgets to run mongod first. So I want any build that uses MongoDB to fail with a message that clearly tells the user to start MongoDB. Ideally, we would later start MongoDB from Gradle.
I already found this nice article about how to see if MongoDB is running under Linux, which is quite simple. I am sure something similar can be done under Windows using tasklist /FI "IMAGENAME eq mongod", etc. But I need to know how to use this correctly in Gradle.
Is there a cross-platform way to check in Gradle whether a service or normal process is running?
The suggestion provided by Orid to use the Gradle Mongo Plugin should work if you set the necessary Gradle tasks to be dependent on a startManagedMongoDb task.
While that may seem to be the easiest way, it may conflict with how MongoDB will be used in non-development environments or on a continuous-integration build server, where the MongoDB service will already be running.
A very simple solution would be to add the MongoDB-checking functionality to the top of a customized gradlew.bat (and the gradlew bash script if it will be run on a *nix operating system).
Another simple solution that wouldn't require changing the gradlew.bat script would be to create your own MongoDB-checking script that then calls gradlew.bat, passing on the command-line arguments. I'm not sure whether there is a direct equivalent of the bash "$@" for all positional arguments in Windows, but looping through the arguments with SHIFT and %1 can be used to rebuild the gradlew.bat command line.
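A sketch of the bash flavor of such a wrapper, assuming MongoDB listens on its default localhost:27017 (the script name and messages are made up; a Windows .bat version would perform the analogous check with tasklist):

#!/usr/bin/env bash
# check_mongo_then_build.sh - fail fast if nothing is listening on MongoDB's default port
if ! (exec 3<>/dev/tcp/localhost/27017) 2>/dev/null; then
  echo "MongoDB does not appear to be running on localhost:27017." >&2
  echo "Please start mongod before running this build." >&2
  exit 1
fi
exec ./gradlew "$@"

The /dev/tcp check is a bash built-in feature, so the script does not depend on nc or pgrep being installed, and "$@" forwards whatever arguments the developer passed straight through to gradlew.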
As I understand it, all that Capistrano does is SSH into the server and execute the commands we want it to (mostly).
I've used rvm in a couple of past projects, and had to install the rvm-capistrano gem. Otherwise, it failed to find the executables (or so I recall), even though we had a proper .rvmrc file (with the correct ruby and the correct gemset) in the repository.
Similarly, today I was setting up deployment for a project for which I'm using pythonbrew, and a simple "cd #{deploy_to}/current && pythonbrew venv use myenv && gunicorn_django -c gunicorn.py" gave me an error message saying "cannot find the executable gunicorn_django". This, I suppose, is because the virtualenv was not activated correctly. But didn't we activate the environment when we did "pythonbrew venv use myenv"? The complete command works fine if I SSH into the server and execute it in the shell, but it doesn't when I do it via Capistrano.
My question is - why does Capistrano need modifications to play along with programs like rvm and pythonbrew, even though all it's doing is executing a couple of commands over ssh?
That's because their SSH'ing in doesn't activate your shell's environment, so it's not picking up the source statements that enable the magic. Just do an rvm use ... (or source the pythonbrew setup) before running commands instead of assuming the cd will pick that up automatically. It should be fine then. If you had been using Fabric, there is the prefix() context manager that you could use to be sure that is run before each command.
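For the pythonbrew case, one workaround along those lines is to source the environment explicitly inside the command Capistrano runs; the ~/.pythonbrew/etc/bashrc path is pythonbrew's conventional init script and is an assumption about your install:

run "cd #{deploy_to}/current && source $HOME/.pythonbrew/etc/bashrc && pythonbrew venv use myenv && gunicorn_django -c gunicorn.py"

The equivalent trick for rvm is to source $HOME/.rvm/scripts/rvm before calling rvm use (the rvm-capistrano gem mentioned in the question exists to handle this kind of setup for you).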