The following command doesn't exit on my system:
gcloud compute project-info add-metadata --metadata=itemname=itemvalue
I am using PowerShell on Windows, and I've also tested in a Linux container in Docker. In both environments the metadata is updated, but the command never terminates.
If I provide an invalid key, or update a key to its existing value, I do get the output "No change requested; skipping update for [project]." and the program exits. Only an actual update produces the hang.
I need this command to terminate so that I can use it in a script. I would like to be able to check the exit code to ensure the update occurred successfully.
You aren't patient enough. In large projects, this operation can take significant time to process. Give the command several minutes to complete.
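If you can't wait indefinitely in a script, one option on Linux is to bound the call with GNU coreutils' timeout and inspect the exit code afterwards (a sketch; the 600-second allowance is an arbitrary assumption, adjust it for your project size):

# fail the script step if the update hangs past 10 minutes
timeout 600 gcloud compute project-info add-metadata --metadata=itemname=itemvalue
status=$?
if [ $status -eq 124 ]; then
    echo "add-metadata timed out" >&2
    exit 1
elif [ $status -ne 0 ]; then
    echo "add-metadata failed with exit code $status" >&2
    exit $status
fi

timeout exits with 124 when the limit is hit, so you can distinguish a hang from a genuine failure.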
I have a script that creates my database (a script with all required DDL and inserts). My goal is to test that the script is correct and that the database is created successfully, without exceptions.
I decided to use the Docker image postgres:latest for this.
My question is: can I run the Docker image so that my script is applied (I know I can run my script by copying it to /docker-entrypoint-initdb.d/), and immediately afterwards the database shuts down and the container exits with code 0? I want to shut the database down automatically so that I can check the exit code in a test script.
I'd also be glad to hear other suggestions for automating this process.
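One possible way to automate this (a sketch, untested; it assumes the script under test is ./create_db.sql and relies on the official entrypoint exiting, and thus the container dying, when an init script fails):

#!/usr/bin/env bash
set -euo pipefail

cid=$(docker run -d \
  -e POSTGRES_PASSWORD=test \
  -v "$PWD/create_db.sql:/docker-entrypoint-initdb.d/create_db.sql:ro" \
  postgres:latest)

# The entrypoint runs init scripts against a temporary server that listens
# only on the Unix socket; the real server opens TCP afterwards, so a
# successful TCP check means initialization finished without errors.
until docker exec "$cid" pg_isready -h 127.0.0.1 -U postgres >/dev/null 2>&1; do
  if [ "$(docker inspect -f '{{.State.Running}}' "$cid")" != "true" ]; then
    echo "init script failed:" >&2
    docker logs "$cid" >&2
    docker rm "$cid" >/dev/null
    exit 1
  fi
  sleep 1
done

docker stop "$cid" >/dev/null   # postgres shuts down cleanly on SIGTERM
docker rm "$cid" >/dev/null
echo "database created successfully"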
I have a NodeJS Express App that depends on MongoDB change streams. For them to be available, MongoDB has to be configured to run as a replica set (even if there is only one node in that set).
I'm working on Windows 10 Pro.
I'm trying to dockerize this App, basing the MongoDB container off the official mongo:5 image.
For this to work, I want an automated way of initializing the DB as a replica set. The tutorials I've found rely either on exec-ing into the container and running rs.initiate() from mongosh (or similar manual approaches), which I want to avoid, or on hacks like wait-for-it.sh.
I feel there must be a better solution, based somehow on the paragraph "Initializing a fresh instance", from the docs.
It describes that
When a container is started for the first time it will execute files with extensions .sh and .js that are found in /docker-entrypoint-initdb.d.
When exactly in the container lifecycle does that happen? After the container is initialized? Or after the DB is ready? Because this seems to be the perfect place for this initialization logic, which runs flawlessly when executed manually, from within the container.
However, placing the following file there
// initReplSet.js
print('Script running');
config={"_id":"rs0", "members":[{"_id":0,"host":"app-db:27017"}]};
print(JSON.stringify(rs.initiate(config)));
print('Script end');
fails with the error {"ok":0,"errmsg":"No host described in new configuration with {version: 1, term: 0} for replica set rs0 maps to this node","code":93,"codeName":"InvalidReplicaSetConfig"}, even though the database is reachable under the hostname app-db from other containers. This makes me think the code runs too early, before the rest of the initialization (networking) is done.
Another approach is to place a bash script that executes code via mongosh. Here's what I've tried:
#!/bin/bash
mongosh "mongodb://app-db:27017/app_db" "initiateReplSet"
where initiateReplSet is
config={"_id":"rs0", "members":[{"_id":0,"host":"app-db:27017"}]}
rs.initiate(config)
exit
but this crashes the container with the error
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/initiateReplSetWrapper.sh
{"t":{"$date":"2022-02-15T11:31:23.353+00:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":600}}
Warning: Could not access file: EACCES: permission denied, mkdir '/home/mongodb'
Current Mongosh Log ID: 620b8f0b04b7ad69b446768d
Connecting to: mongodb://app-db:27017/app_db?directConnection=true&appName=mongosh+1.1.9
Only the first and the last three lines seem to actually belong to the bash script; the second line is repeated constantly.
I'm not sure whether the error originates from the permission-denied issue, or whether the DB really can't be accessed. However, specifying
RUN mkdir -p /home/mongodb/.mongodb
RUN chown -R 777 /home/mongodb
in the Dockerfile did not improve the situation (same error nevertheless).
Could you please explain either why this approach can not work, or how to make it work? Is there another, better, automated way to initialize the replica set? Could the docker image be improved to allow such initialization logic?
I just made it work with a wild experiment: I simply left out the config in my call to rs.initiate() in the JS script. For some reason the script then runs successfully, and change streams become available to my NodeJS backend.
I will post everything that's needed to run a MongoDB docker with change streams enabled:
# Dockerfile
FROM mongo
COPY initiateReplSet.js /docker-entrypoint-initdb.d/
CMD ["--replSet", "rs0"]
// initiateReplSet.js
rs.initiate()
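To try it out, something like this should work (a sketch; the network name app-net is an assumption, the hostname app-db comes from the question):

docker network create app-net
docker build -t app-db .
docker run -d --name app-db --hostname app-db --network app-net app-db

Setting --hostname makes the bare rs.initiate() advertise app-db:27017 as the member address, which other containers on the same user-defined network can resolve.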
Running SQL exports via Jenkins (backups), on a regular basis I receive:
ERROR: (gcloud.sql.export.sql) HTTPError 409: Operation failed because another operation was already in progress.
ERROR: (gcloud.sql.operations.wait) argument OPERATION [OPERATION ...]: Must be specified.
I'm trying to determine where I can see which jobs are causing this to fail.
I've tried extending the timeout:
gcloud sql operations wait --timeout=1600
but no luck.
To wait for an operation, you need to specify the ID of the operation, as #PYB said. Here's how you can do that programmatically, for example in a Jenkins script:
$ gcloud sql operations list --instance=$DB_INSTANCE_NAME --filter='NOT status:done' --format='value(name)' | xargs -r gcloud sql operations wait
$ gcloud sql ... # whatever you need to do
There are two errors here that could be affecting you. The first is that an administrative operation is starting before the previous one has completed. Reading through this "Best practices" doc on Cloud SQL will help you on that front:
https://cloud.google.com/sql/docs/mysql/best-practices#admin
Specifically, in the Operations tab you can see the operations that are running.
Finally, the [OPERATION] argument is missing from the command gcloud sql operations wait --timeout=1600. See the documentation for that command here: https://cloud.google.com/sdk/gcloud/reference/sql/operations/wait
OPERATION is the name of the running operation, and if you wish to list all instance operations to get the right name, you can use this command:
https://cloud.google.com/sdk/gcloud/reference/sql/operations/list.
Operation names are 36-character UUID-style hexadecimal strings, so your command should look something like this:
gcloud sql operations wait aaaaaaaa-0000-0000-0000-000000000000 --timeout=1600
Cheers
I have the same problem during a long running import:
gcloud sql import sql "mycompany-mysql-1" $DB_BACKUP_PATH --database=$DB_NAME -q
Does it really mean that if the import runs for an hour, I am not able to create databases during that time? Really?
gcloud sql databases create $DB_NAME --instance="mycompany-mysql-1" --async
This is a big issue if you use GCloud inside CI/CD! Anyone with an easy solution?
My idea until now:
download the backup to the CI/CD from the cloud bucket
connect over CLI to the MySQL and import the dump this way
But this means that whenever the CI/CD needs to do more than one such task at the same time, one task will fail or have to wait. Very sad, if I got that right.
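A rough sketch of that workaround (untested; the host and credential variables are placeholders):

# pull the dump from the bucket, then import over a plain MySQL connection
gsutil cp "$DB_BACKUP_PATH" ./dump.sql
mysql --host="$DB_HOST" --user="$DB_USER" --password="$DB_PASS" "$DB_NAME" < ./dump.sql

Because this goes through the MySQL protocol instead of the Cloud SQL Admin API, it is not an admin operation and shouldn't collide with a concurrent gcloud sql databases create.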
I am executing code on a server with Docker and Jupyter.
I set up my container with restart=always.
But Jupyter always shows the same error: Kernel Restarting: The kernel appears to have died. It will restart automatically.
The code is based on Keras with TensorFlow.
So I need to restart it manually, and I have to execute the code from the beginning if I haven't saved the parameters. And because I can't spot the error in time, a lot of time is wasted.
So, is there any way to make it reconnect and resume automatically?
I have a web app that uses postgresql 9.0 with some plperl functions that call custom libraries of mine. So, when I want to start fresh as if just released, my build process for my development area does basically this:
dumps data and roles from production
drops dev data and roles
restores production data and roles onto dev
restarts postgresql so that any cached versions of my custom libraries are flushed and newly-changed ones will be picked up
applies my dev delta
vacuums
Since switching my app's stack from Win32 to CentOS, I now sometimes get an error when my build script tries to apply the delta. It seems to happen if and only if I haven't run this build process in a while, perhaps at least a day:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Specifically, what's failing to execute at the shell level is this:
psql --host=$host -U $superuser -p $port -d $db -f "$delta_filename.sql"
If, immediately after seeing this error, I try to connect to the dev database with psql, I can do so with no trouble. Also, if I just re-run the build script, it works fine the second time, every time I've encountered this. Acceptable workaround, but is the underlying cause something to be concerned about?
So far in my attempts to debug this, I inserted a step just after the server restart (which of course reports OK shutdown, OK startup) whereby I check the results of service postgresql-dev status in a loop, waiting 2 seconds between tries if it fails. On my latest build script run, said loop succeeds on the first try--status returns "is running"--but then applying the delta still fails with the above connection error. Again, second try succeeds, as does connecting via psql outside the script just after it fails.
My next debug attempt was to sleep for 5 seconds before the first status check and see what happens. So far this seems to solve the problem.
So why is PostgreSQL not listening on the socket for up to 5 seconds after the restart reports [OK] and the status check says it is running, unless it has "recently" been restarted?
The status check only checks whether the process is running. It doesn't check whether you can connect. There can be any amount of time between starting the process and the process being ready to accept connections. It's usually a few seconds, but it could be longer. If you need to cope with this, you need to script it so that it checks whether it is possible to connect before proceeding. You could argue that the CentOS package should do this for you, but it doesn't.
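A minimal sketch of such a check, reusing the connection parameters from the build script above (on 9.3+ a pg_isready loop does the same job):

service postgresql-dev restart
# poll until the server actually accepts connections, not just until the process exists
until psql --host=$host -U $superuser -p $port -d $db -c 'SELECT 1' >/dev/null 2>&1; do
    sleep 1
done
psql --host=$host -U $superuser -p $port -d $db -f "$delta_filename.sql"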
Actually, I think in your case there is no reason to do a full restart. Unless you are loading libraries with shared_preload_libraries, it is sufficient to open a new connection to pick up new libraries, since each backend loads them afresh.