Gcloud components on Linux

I ran gcloud components update on Ubuntu. After updating, I received the following warning:
WARNING: There are other instances of Google Cloud Platform tools on your system PATH.
Please remove the following to avoid confusion or accidental invocation:
/usr/bin/gsutil
What will happen if I do not remove it?
How should I remove it?
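For reference, a couple of standard shell commands can show where the duplicate copy comes from before deciding how to remove it (a general suggestion, not part of the warning output):
$ which -a gsutil          # lists every gsutil found on PATH, in lookup order
$ dpkg -S /usr/bin/gsutil  # reports which Ubuntu package, if any, owns that file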

Unable to add filesystem permission denied

Using Node v8.9.0 and this tutorial.
When I try to debug my HTTP Google Cloud Function in dev tools:
C:\Users\Matt\AppData\Roaming\nvm\v8.9.0\node_modules\@google-cloud\functions-emulator\src\supervis
I get a filesystem permission denied error. How can I debug my cloud functions?
I also got the filesystem permission denied error, and the issue was that you need to accept the permissions prompt from Chrome to be able to access that filesystem. Initially I didn't see the permissions prompt, but then I found it on a different tab (which was somewhat odd behavior). Just look for that permissions prompt; it should be right below your address bar.
I see that you are referring to a C: directory, which means that you are trying this on Windows. I will put the steps below, with documentation links, on how to properly set up the configuration. Those steps worked for me without any issues, so I suggest you follow them one by one and see if that helps.
Run Google Cloud Functions Emulator on Windows OS:
Install and set up Google Cloud SDK for Windows. Link and documentation here
Install Node.js and npm for Windows. Tutorial here
Right click on Google Cloud SDK Shell and select Run as administrator.
Execute $ node --version; you should get the version of Node.js without any additional errors.
Execute $ npm --version; you should get the version of npm without any additional errors.
The tutorial that you are referring to is part of the Google Cloud Functions Tutorial Series.
First, install and set up the npm functions emulator by running $ npm install -g @google-cloud/functions-emulator, as mentioned in Google Cloud Functions Tutorial : Setting up a Local Development Environment.
Set up the project for the functions: $ functions config set projectId PROJECT_ID, as mentioned in the Start and Stop the Emulator documentation.
Start the emulator by executing $ functions start. Same documentation as above.
Download the source code as mentioned in the documentation you are referring to. The GitHub repository is here.
Clone the project locally. $ git clone https://github.com/rominirani/googlecloudfunctions-training.git
Navigate to the folder $ cd googlecloudfunctions-training/helloworld-http
Follow the rest of the Google Cloud Functions Tutorial : Debugging Local Functions documentation.
NOTE: Whenever you run / execute / call the Cloud Function, a blank Node.js window will pop up. Keep it open, as it is the executable that runs your code.
I have tested the tutorial with the setup described above and it worked for me. You have to be the administrator of your account: since the Functions Emulator and the code run locally, you need full permissions on the directories that will be used, and you have to run all the software as administrator.
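As a rough sketch of the debugging workflow from the tutorial once the emulator is running (the function name helloWorldHTTP is illustrative, and the exact emulator flags may differ between versions):
$ functions deploy helloWorldHTTP --trigger-http   # deploy the function from the current folder
$ functions inspect helloWorldHTTP                 # put the function into debug mode for the Node.js inspector
$ functions call helloWorldHTTP                    # invoke the function so breakpoints are hit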

gcloud dataproc clusters create: No module named jsonschema

When trying to create a new cluster with gcloud dataproc clusters create, the following error is displayed:
ERROR: gcloud failed to load (gcloud.dataproc.clusters.create): Problem loading gcloud.dataproc.clusters.create: No module named jsonschema.
This usually indicates corruption in your gcloud installation or problems with your Python interpreter.
Please verify that the following is the path to a working Python 2.7 executable:
/usr/bin/python2
If it is not, please set the CLOUDSDK_PYTHON environment variable to point to a working Python 2.7 executable.
If you are still experiencing problems, please run the following command to reinstall:
$ gcloud components reinstall
If that command fails, please reinstall the Cloud SDK using the instructions here:
https://cloud.google.com/sdk/
Installing jsonschema does not seem to help.
This was an issue with Cloud SDK release 208.0.0. Upgrading to 208.0.1 should resolve it.
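To confirm which release you are on and pick up the fixed one, the standard commands should be enough (assuming the SDK was installed with the official installer rather than a distribution package):
$ gcloud version             # shows the installed Cloud SDK release
$ gcloud components update   # upgrades all installed components to the latest release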

How can you require project to be specified in call to gcloud?

Consider the case in which some script and/or Makefile is running a series of gcloud commands. While waiting for those commands to complete, the user goes to another shell and changes the gcloud configuration to refer to a different project. Hopefully, the script/Makefile was written well enough that all necessary gcloud invocations include "--project", and no harm will be done by stray gcloud commands running in the wrong project. Is there any gcloud configuration that can help to prevent problems in that scenario? Perhaps a config setting to force gcloud commands to fail if --project is not specified?
You could deal with this issue by using multiple gcloud configurations; each configuration can have a different value for "project". Only one configuration can be active at a time, so you would still have the same problem; however, you can select a configuration for a single gcloud invocation with the --configuration flag. This means that if your Makefile uses the --configuration flag for each gcloud invocation, it is immune to the user going to another shell and changing the project, as long as the user is not using the same configuration that the Makefile is using. "gcloud topic configurations" has documentation about how to use configurations.
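As a rough sketch of that approach (the configuration name and project ID are placeholders):
$ gcloud config configurations create makefile-config
$ gcloud config set project my-build-project --configuration=makefile-config
$ gcloud compute instances list --configuration=makefile-config
The last command always runs against my-build-project, regardless of which configuration happens to be active in the user's interactive shell.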

Cloud Shell: I have to reinstall CBT each time I open Cloud Shell the next day

Any gcloud component I have installed on Cloud Shell, I have installed just once and could use every time I opened Cloud Shell afterwards. But for the CBT component for Bigtable, each time I close the browser the CBT tool is no longer installed and I have to reinstall it. The problem does not appear immediately: generally I install it once a day, it shows up among the installed components for the whole day, and the day after it is no longer installed.
Any ideas?
This problem is caused by Google terminating idle Cloud Shell instances when they are not being used. Termination happens after about 60 minutes of non-use.
Only data stored in the $HOME directory persists after a new Cloud Shell is launched.
To install cbt the following steps are recommended:
gcloud components update
gcloud components install cbt
Since these components are not being installed in $HOME, they do not persist after Cloud Shell is terminated.
There are two methods that I recommend to solve this problem:
Google Cloud Shell is a Docker container. You can modify the Docker image to customize it to fit your needs. This method allows you to install packages, tools, etc. that are not located in your $HOME directory.
Modify .bashrc to run a script located in the $HOME directory to install cbt each time a new instance is created.
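For the second method, a minimal sketch of what could be appended to ~/.bashrc (which lives in $HOME and therefore persists), reusing the install command recommended above:
# Reinstall cbt on each new Cloud Shell session if it is missing.
if ! command -v cbt >/dev/null 2>&1; then
  gcloud components install cbt --quiet
fi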
Note: It appears that, as of now, cbt is included in the default Cloud Shell instance. This answer should help others understand what is happening and how to install other programs, tools, etc. persistently.

Failed to install PostgreSQL 8.3, failed to run initdb:1?

I am reinstalling PostgreSQL using pgInstaller postgresql-8.3.16-1. An error occurs in the last step of the install process:
Failed to run initdb:1!
\tmp\initdb.log shows this message:
The application has failed to start because its side-by-side
configuration is incorrect. Please see the application event log or
use the command-line sxstrace.exe tool for more detail.
The message is quite simple, but I can't locate the root cause of the install failure.
Does anyone know the reason?
You probably already have a database cluster installed in the location where your Postgres 8.3 install is trying to init a new one. You can't really mix and match versions like that.
If possible, install the old version you had when you created the existing database. Then use pg_dumpall to create a .sql dump of all of your data. You can then move or delete the old database (usually at /var/lib/pgsql) and install the new version. Finally, apply the database dump to get the old data back.
For more details on this, read the Upgrading a PostgreSQL cluster manual page.
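As a rough sketch of that dump-and-restore sequence (the user name and file name are placeholders):
$ pg_dumpall -U postgres > all_databases.sql   # dump every database from the old cluster
(move or delete the old data directory, then install the new version)
$ psql -U postgres -f all_databases.sql        # restore the dump into the new cluster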
If you are installing the same version, there's no need to upgrade the cluster; you can probably safely ignore errors about initdb, as long as everything runs OK.