Gsutil gives 401 error despite fresh install and proper credentials - google-cloud-storage

I have been using gsutil with Google cloud for almost a year through my organization with no issues. This morning, when I try to use any gsutil command, I get the following error:
401 Anonymous users does not have storage.objects.list access to bucket <my-bucket>.
What I have tried:
Uninstalling and reinstalling gcloud via curl https://sdk.cloud.google.com | bash as well as pip install -U gcloud gsutil.
I have deleted my .boto file before and after reinstalling.
I have tried installing inside and outside of an Anaconda environment. Note that both configurations were previously working without issue.
Before reinstalling, I removed any references to gcloud from ~/.bash_profile.
Output of gsutil version -l:
gsutil version: 4.22
checksum: 2434a37a663d09ae21d1644f64ce60ca (OK)
boto version: 2.42.0
python version: 2.7.12 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:43:17) [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)]
OS: Darwin 15.6.0
multiprocessing available: True
using cloud sdk: True
config path: /Users/<username>/.boto
gsutil path: /Users/<username>/google-cloud-sdk/platform/gsutil/gsutil
compiled crcmod: True
installed via package manager: False
editable install: False
Output of gcloud info:
Google Cloud SDK [146.0.0]
Platform: [Mac OS X, x86_64]
Python Version: [2.7.12 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:43:17) [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)]]
Python Location: [/Users/<username>/anaconda/envs/tensorflow_source/bin/python2]
Site Packages: [Disabled]
Installation Root: [/Users/<username>/google-cloud-sdk]
Installed Components:
core: [2017.02.28]
core-nix: [2016.11.07]
gcloud-deps: [2017.02.28]
gcloud: []
gsutil-nix: [4.18]
gsutil: [4.22]
bq: [2.0.24]
gcloud-deps-darwin-x86_64: [2017.02.21]
bq-nix: [2.0.24]
System PATH: [/Users/<username>/anaconda/envs/tensorflow_source/bin:/Users/<username>/google-cloud-sdk/bin:/Users/<username>/anaconda/bin:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/<username>/google-cloud-sdk/bin:/Users/<username>/anaconda/bin:/opt/local/bin:/opt/local/sbin]
Cloud SDK on PATH: [True]
Kubectl on PATH: [False]
Installation Properties: [/Users/<username>/google-cloud-sdk/properties]
User Config Directory: [/Users/<username>/.config/gcloud]
Active Configuration Name: [jared]
Active Configuration Path: [/Users/<username>/.config/gcloud/configurations/config_jared]
Account: [<email>]
Project: [<project-name>]
Current Properties:
[core]
project: [<project-name>]
account: [<email>]
disable_usage_reporting: [False]
[compute]
region: [us-east1]
zone: [us-east1-c]
Logs Directory: [/Users/<username>/.config/gcloud/logs]
Last Log File: [/Users/<username>/.config/gcloud/logs/2017.03.08/14.00.35.867536.log]
Using gsutil from a compute instance after running gcloud auth login with my personal credentials also works, so I know it is not an issue with my account.
Does anyone know what I can do to fix this?
Another observation: the file ~/.boto is blank, and there is another file at ~/.config/gcloud/legacy_credentials/<email>/.boto that just contains my OAuth token.
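For reference, the standard way to check which credentials gsutil is picking up is:
gcloud auth list                         # shows the credentialed accounts and which one is active
gsutil version -l | grep "config path"   # shows which .boto file gsutil is reading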

Well, I got it working. I'm not sure if this answer will apply to anyone else, but I will post what I did just in case.
This morning I deleted all files related to Google Cloud (rm -rf ~/google-cloud-sdk && rm -rf ~/.config && rm ~/.boto). For me the ~/.config folder only had a Google Cloud folder inside, but you might want to check that there isn't anything else in it before deleting it.
Then I restarted my computer, reinstalled gcloud via curl https://sdk.cloud.google.com | bash, and closed and reopened the terminal instead of running exec -l $SHELL (I think that does the same thing). After running gcloud init, everything worked fine.
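Putting it together, the full reset sequence was roughly this (again, check ~/.config for anything else you care about before removing it):
rm -rf ~/google-cloud-sdk ~/.config ~/.boto    # remove the SDK, its config, and the boto file
curl https://sdk.cloud.google.com | bash       # reinstall the SDK
# close and reopen the terminal (or run: exec -l $SHELL)
gcloud init                                    # re-authenticate and pick the project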
I am still not sure what happened here.

Related

gcloud 'no such file or directory'

On Ubuntu 20.04, gcloud installed with snap install google-cloud-sdk --classic...
Today it no longer works. Yesterday there was an auto update.
$ kubectl get all
Unable to connect to the server: error executing access token command "/snap/google-cloud-sdk/188/bin/gcloud config config-helper --format=json": err=fork/exec /snap/google-cloud-sdk/188/bin/gcloud: no such file or directory output= stderr=
The version 188 it is referencing is gone; the SDK is now at 190 (version 189 is also present).
I've uninstalled, deleted ~/.config/gcloud, and reinstalled, but I still have the same error.
Any tips on where to look for that stale path?
The problem is that gcloud stores an obsolete reference to the gcloud binary in ~/.kube/config. The solution is to replace /snap/google-cloud-sdk/.*/gcloud with /snap/bin/gcloud in ~/.kube/config.
Example of accomplishing this with perl on the command line:
perl -i -p -e 's/\/snap\/google-cloud-(sdk|cli)\/.*?\/gcloud/\/snap\/bin\/gcloud/' ~/.kube/config
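To check which path the kubeconfig references before and after the substitution, a simple grep is enough (in a GKE-generated kubeconfig the stale path usually appears in the cmd-path field):
grep -n gcloud ~/.kube/config    # should point at /snap/bin/gcloud after the fix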

Why is my gcloud command suddenly very slow inside WSL2

When I run a simple command, it takes about 10 seconds to complete,
λ time gcloud version
Google Cloud SDK 293.0.0
beta 2019.05.17
bq 2.0.57
core 2020.05.15
gsutil 4.50
real 0m9.731s
user 0m0.735s
sys 0m1.690s
λ uname -a
Linux LAPTOP-U7E4CROH 4.19.104-microsoft-standard #1 SMP Wed Feb 19 06:37:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
λ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04 LTS
Release: 20.04
Codename: focal
I should note that I do not experience this slow behavior on the same laptop but within a git-bash environment - I only see this within WSL2 / Ubuntu.
I have googled around and found these two questions on SO, but they are not helping me:
google compute engine tool gcloud is exceptionally slow
Why gcloud command is slow to start?
Any ideas on how I can solve this?
I had the same issue, and it turned out that when I ran gcloud in WSL 2 it was actually using the gcloud installed on my Windows system.
Somehow the Windows gcloud is very slow when run from WSL 2, and running it there was never my intention anyway.
I just disabled appending the Windows PATH to my WSL PATH altogether after this.
But at least now you know the root cause.
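A quick way to confirm this is to check which gcloud the shell actually resolves; a Windows-side install shows up under a /mnt/c/... path:
type -a gcloud    # lists every gcloud on the PATH, in resolution order
which gcloud      # the one that actually runs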
For the sake of completeness, to disable this feature, edit the /etc/wsl.conf to add the following section:
[interop]
appendWindowsPath = false
and terminate the WSL distro (wsl.exe --terminate <distro_name>) to make it immediately effective.
I had the same issue and resolved it by reinstalling the SDK: https://cloud.google.com/sdk/docs/downloads-interactive#linux
You must reinstall using 'Interactive installation', which replaces the previous installation and updates the PATH.
'Non-interactive (silent) deployment' does not seem to improve the issue.
Hope this helps.
I had the same issue and the only solution that worked for me was to set an alias for the gcloud command such that it gets executed by cmd.exe like so:
# in ~/.bashrc
alias gcloud="cmd.exe /c gcloud"
Then simply restart your terminal or run $ source ~/.bashrc, and the alias will take effect.
Of course, this assumes you have the gcloud CLI installed and added to your windows PATH.

Deploying Flask app with psycopg2 dependency to Elastic Beanstalk; EC2 instance won't install yum packages

I'm having problems deploying my Flask app to EB.
I'm using the EB CLI. I've created a .ebextensions folder in the root folder of my application. The folder contains two files:
00dependencies.config
packages:
  yum:
    libffi-devel: []
    postgresql95-devel: []
01setup.config
container_commands:
  00_wsgi_pass_headers:
    command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf'

option_settings:
  "aws:elasticbeanstalk:container:python":
    WSGIPath: "api-siifra/manage.py"
  "aws:elasticbeanstalk:container:python:staticfiles":
    "/static/": "api-siifra/app/static/"
But when I run eb deploy I get the error:
ERROR: Update environment operation is complete, but with errors. For more information, see troubleshooting documentation.
Looking in the eb web interface under Health I see the error:
/opt/python/run/venv/bin/pip install -r /opt/python/ondeck/app/requirements.txt' returned non-zero exit status 1
And the command fails on: Error: pg_config executable not found. while installing psycopg2
If I SSH into the EC2 instance in question and install postgresql95-devel manually, the eb deploy command completes without errors.
I thought the packages: yum: ... section in a *.config file ran before the pip command?
Any help would be appreciated.
Thank you.

kubectl : connection refused

I am in the process of installing minikube 0.19.1 on Ubuntu 16.04, following the Kubernetes documentation. As prerequisites I have installed kubectl and Oracle VirtualBox.
When I check kubectl with kubectl version it gives the following:
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:34:20Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
But when I check the port with netstat, nothing shows up in the results.
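For example, a check along these lines returns nothing (the exact flags here are illustrative):
sudo netstat -tlnp | grep 8080    # no output: nothing is listening on port 8080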
I have set up the Google Cloud SDK as well.
I have searched for and tried many solutions, including this one, but was not able to resolve my issue.
Here are my gcloud config and info results.
$gcloud config list
[compute]
zone = asia-southeast1-a
[core]
account = userName#mail.com
disable_usage_reporting = False
project = sampleproject1990
$gcloud info
Google Cloud SDK [159.0.0]
Platform: [Linux, x86_64] ('Linux', 'userName', '4.8.0-54-generic', '#57~16.04.1-Ubuntu SMP Wed May 24 16:22:28 UTC 2017', 'x86_64', 'x86_64')
Python Version: [2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]]
Python Location: [/usr/bin/python2]
Site Packages: [Disabled]
Installation Root: [/home/userName/products/google-cloud-sdk]
Installed Components:
kubectl: []
core: [2017.06.09]
gcloud: []
gsutil: [4.26]
bq: [2.0.24]
alpha: [2017.03.24]
System PATH: [PATH=/usr/lib/jvm/java-8-oracle/bin:/home/userName/bin:/home/userName/.local/bin:/usr/local/maven/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/lib/jvm/java-8-oracle/bin:/usr/lib/jvm/java-8-oracle/db/bin:/usr/lib/jvm/java-8-oracle/jre/bin:/usr/local/apache-maven-3.3.9/bin]
Python PATH: [/home/userName/products/./google-cloud-sdk/lib/third_party:/home/userName/products/google-cloud-sdk/lib:/usr/lib/python2.7/:/usr/lib/python2.7/plat-x86_64-linux-gnu:/usr/lib/python2.7/lib-tk:/usr/lib/python2.7/lib-old:/usr/lib/python2.7/lib-dynload]
Cloud SDK on PATH: [False]
Kubectl on PATH: [/usr/local/bin/kubectl]
WARNING: There are old versions of the Google Cloud Platform tools on your system PATH.
/usr/local/bin/kubectl
Installation Properties: [/home/userName/products/google-cloud-sdk/properties]
User Config Directory: [/home/userName/.config/gcloud]
Active Configuration Name: [my-configuration]
Active Configuration Path: [/home/userName/.config/gcloud/configurations/config_my-configuration]
Account: [userName#mail.com]
Project: [sampleproject1990]
Current Properties:
[core]
project: [sampleproject1990]
account: [userName#mail.com]
disable_usage_reporting: [False]
[compute]
zone: [asia-southeast1-a]
Logs Directory: [/home/userName/.config/gcloud/logs]
Last Log File: [/home/userName/.config/gcloud/logs/2017.06.21/12.39.23.391849.log]
git: [git version 2.7.4]
ssh: [OpenSSH_7.2p2 Ubuntu-4ubuntu2.2, OpenSSL 1.0.2g 1 Mar 2016]
Can anyone tell me how I can fix this issue?
I had similar issues with Minikube and the VirtualBox driver. Please ensure that the interface VirtualBox is configured to use is up.
I did a sudo ifconfig vboxnet0 up and my issue was resolved.
I faced the same issue. It turned out that I was running the command without being the root user. So, if you log in as the super user (sudo -i), it might work.
This issue occurs because the kubelet is not running or is not healthy.
One way to resolve this issue:
$ sudo swapoff -a
$ sudo systemctl enable kubelet
$ sudo systemctl start kubelet
After this, deploy Kubernetes with kubeadm as given below:
$ sudo kubeadm init --ignore-preflight-errors=all
After loading the kubeadm credentials, untaint the master node and join worker nodes if you are working on a cluster.
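Loading the kubeadm credentials and untainting the master are typically done like this (the standard commands printed by kubeadm init; adjust paths to your setup):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config     # the kubeadm-generated kubeconfig
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl taint nodes --all node-role.kubernetes.io/master-    # let pods schedule on the master (single-node setup)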
And now give the command:
$ sudo kubectl cluster-info
The server and the client should be running with the same Kubernetes version.
If this solution doesn't work, remove Kubernetes, kubectl, kubeadm, and kubelet, and follow only the Kubernetes installation steps from this guide.

How to deploy a meteor application to my own server?

How to deploy a meteor application to my own server?
flavour 1: the development and deployment server are the same;
flavour 2: the development server is one (maybe my localhost) and the deployment server is another (maybe a VPS in the cloud);
flavour 3: I want to make a "meteor hosting" domain, just like "meteor.com". Is it possible? How?
Update:
I'm running Ubuntu and I don't want to "demeteorize" the application. Thank you.
Meteor documentation currently says:
"[...] you need to provide Node.js 0.8 and a MongoDB server. You can
then run the application by invoking node, specifying the HTTP port
for the application to listen on, and the MongoDB endpoint."
So, among the several ways to install Node.js, I got it up and running by following the best advice I found, which is basically unpacking the latest version available directly from the official Node.js website, already compiled for Linux (64-bit, in my case):
# Does NOT need to be root user:
# create directory
mkdir -p ~/.nodes && cd ~/.nodes
# download latest Node.js distribution
curl -O http://nodejs.org/dist/v0.10.13/node-v0.10.13-linux-x64.tar.gz
# unpack it
tar -xzf node-v0.10.13-linux-x64.tar.gz
# discard it
rm node-v0.10.13-linux-x64.tar.gz
# rename unpacked folder
mv node-v0.10.13-linux-x64 0.10.13
# create symlink
ln -s 0.10.13 current
# add path to PATH
export PATH="~/.nodes/current/bin:$PATH"
# check
node --version
npm --version
And to install MongoDB, I simply followed the instructions in the MongoDB manual available in the Documentation section of its official website:
# Needs to be root user (apply "sudo" if not at root shell)
apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list
apt-get update
apt-get install mongodb-10gen
The server is ready to run Meteor applications! For deployment, the main "issue" is where the "bundle" operation happens. We need to run the meteor bundle command from inside the application source tree. For example:
cd ~/leaderboard
meteor bundle leaderboard.tar.gz
If the deployment will happen on another server (flavour 2), we need to upload the bundle tar.gz file to it using sftp, ftp, or any other file transfer method. Once the file is there, we follow both the Meteor documentation and the README file which is magically included in the root of the bundle tree:
# unpack the bundle
tar -xvzf leaderboard.tar.gz
# discard tar.gz file
rm leaderboard.tar.gz
# rebuild native packages
pushd bundle/programs/server/node_modules
rm -r fibers
npm install fibers@1.0.1
popd
# setup environment variables
export MONGO_URL='mongodb://localhost'
export ROOT_URL='http://example.com'
export PORT=3000
# start the server
node main.js
If the deployment is on the same server (flavour 1), the bundle tar.gz file is already there, and we don't need to recompile the native packages. (Just skip the corresponding section above.)
Cool! With these steps, I've got the "Leaderboard" example deployed to my custom server, not "meteor.com"... (only to learn and value their services!)
I still have to make it run on port 80 (I plan to use NginX for this), persist environment variables, start Node.JS detached from the terminal, et cetera... I am aware this setup is a "barely naked" one... just the base, the first step, the basic foundation stones.
The application has been "manually" deployed, without taking advantage of all the meteor deploy command's magic features... I've seen people publish their "meteor.sh" and "meteoric.sh" and I am following the same path... creating a script to emulate the "single command deploy" feature... aware that in the near future all this stuff will matter only to the pioneer Meteor explorers, as Meteor grows into a whole Galaxy! and most of these issues will be an archaic thing of the past.
Anyway, I am very happy to see how fast the deployed application runs in the cheapest VPS ever, with a surprisingly low latency and almost instant simultaneous updates in several distinct browsers. Fantastic!
Thank you!!!
Try Meteor Up too
With it you can deploy to any Ubuntu server. It uses the meteor build command internally and is used by many for deploying production apps.
I created Meteor Up to allow developers to deploy production-quality Meteor apps until Galaxy arrives.
I would recommend flavor two with a separate deployment server. Separation of concerns leads to a more stable environment for your code, and it's easier to debug.
To do it, there's the excellent Meteoric bash script that helps you deploy to Amazon's EC2 or your own server.
As for how to roll your own meteor.com, I suggest you break that out into its own Stack Overflow question as it's not related. Plus, I can't answer it :)
I did this a few days ago. I deployed my Meteor application to my own server on DigitalOcean. I used the Meteor Up tool to manage deploys and Nginx on the server to serve the app.
It's very simple to use. You install Meteor Up with the command:
npm install -g mup
Then create a folder for the deployment configuration and go into it. Run the mup init command; it will create two configuration files. The one we are interested in is mup.json, which holds the configuration for the deployment process. It looks like this:
{
  // Server authentication info
  "servers": [
    {
      "host": "hostname",
      "username": "root",
      "password": "password",
      // or pem file (ssh based authentication)
      //"pem": "~/.ssh/id_rsa",
      // Also, for non-standard ssh port use this
      //"sshOptions": { "port" : 49154 },
      // server specific environment variables
      "env": {}
    }
  ],

  // Install MongoDB on the server. Does not destroy the local MongoDB on future setups
  "setupMongo": true,

  // WARNING: Node.js is required! Only skip if you already have Node.js installed on server.
  "setupNode": true,

  // WARNING: nodeVersion defaults to 0.10.36 if omitted. Do not use v, just the version number.
  "nodeVersion": "0.10.36",

  // Install PhantomJS on the server
  "setupPhantom": true,

  // Show a progress bar during the upload of the bundle to the server.
  // Might cause an error in some rare cases if set to true, for instance in Shippable CI
  "enableUploadProgressBar": true,

  // Application name (no spaces).
  "appName": "meteor",

  // Location of app (local directory). This can reference '~' as the users home directory.
  // i.e., "app": "~/Meteor/my-app",
  // This is the same as the line below.
  "app": "/Users/arunoda/Meteor/my-app",

  // Configure environment
  // ROOT_URL must be set to https://YOURDOMAIN.com when using the spiderable package & force SSL
  // your NGINX proxy or Cloudflare. When using just Meteor on SSL without spiderable this is not necessary
  "env": {
    "PORT": 80,
    "ROOT_URL": "http://myapp.com",
    "MONGO_URL": "mongodb://arunoda:fd8dsjsfh7#hanso.mongohq.com:10023/MyApp",
    "MAIL_URL": "smtp://postmaster%40myapp.mailgun.org:adj87sjhd7s#smtp.mailgun.org:587/"
  },

  // Meteor Up checks if the app comes online just after the deployment.
  // Before mup checks that, it will wait for the number of seconds configured below.
  "deployCheckWaitTime": 15
}
After you fill in all the data fields you can start the setup process with the command mup setup. It will set up your server.
After a successful setup you can deploy your app. Just type mup deploy in the console.
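Putting the commands from this answer together, the whole workflow looks roughly like this (my-app-deploy is just a placeholder directory name):
npm install -g mup              # install Meteor Up
mkdir my-app-deploy && cd my-app-deploy
mup init                        # creates the configuration files, including mup.json
# edit mup.json as shown above, then:
mup setup                       # prepares the server
mup deploy                      # builds, uploads and starts the app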
Another alternative is to just develop on your own server to start with.
I just created a Digital Ocean box and then connected my Cloud9 IDE account.
Now I can develop right on the machine in a cloud IDE, and deployment is easy: just copying files.
I created a tutorial that shows exactly how my setup works.
I had a lot of trouble with Meteor Up, so I decided to write my own deploy script. I also added additional information on how to set up nginx or mongodb. Hope it helps!
See the /sh folder in the repository; a rough sketch of the flow is also included after the step list below.
What the script meteor-deploy.sh does:
Select environment (./meteor-deploy.sh for staging, ./meteor-deploy.sh prod for production)
Build and bundle production version of the meteor app
Copy bundle to server
SSH into server
Do a mongodump to backup database
Stop the running app
Unpack bundle
Overwrite app files
Re-install app node package dependencies
Start the app (uses forever)
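For a concrete picture, here is a very rough sketch of what such a deploy script can look like. This is not the script from the repository; SERVER, APP_DIR and the host names are made-up placeholders, and the app's environment variables (MONGO_URL, ROOT_URL, PORT) are assumed to already be set on the server:
#!/usr/bin/env bash
# Rough sketch only -- not the repository's actual meteor-deploy.sh.
set -e

# 1. select environment
if [ "${1:-}" = "prod" ]; then
  SERVER="deploy@prod.example.com"       # hypothetical production host
else
  SERVER="deploy@staging.example.com"    # hypothetical staging host
fi
APP_DIR="/home/deploy/myapp"             # hypothetical app location on the server

# 2. build and bundle a production version of the app
meteor build /tmp/meteor-build --architecture os.linux.x86_64

# 3. copy the bundle to the server
scp /tmp/meteor-build/*.tar.gz "$SERVER:/tmp/app.tar.gz"

# 4.-10. on the server: back up the DB, stop the app, unpack, reinstall deps, restart
ssh "$SERVER" bash -s <<EOF
set -e
mongodump --out /tmp/mongodump-\$(date +%F)         # back up the database first
forever stop $APP_DIR/bundle/main.js || true        # stop the running app, if any
rm -rf $APP_DIR/bundle
tar -xzf /tmp/app.tar.gz -C $APP_DIR                # unpack / overwrite the app files
cd $APP_DIR/bundle/programs/server && npm install   # re-install node dependencies
forever start $APP_DIR/bundle/main.js               # start the app again (uses forever)
EOF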
Tested for the following server configurations:
Ubuntu 14.04.4 LTS
meteor --version 1.3.2.4
node --version v0.10.41
npm --version 3.10.3