Can you pass a keyfile.json to gsutil? - google-cloud-storage

I have a (maybe unique?) use case in some Python scripts that I am running. Namely, I want the parallel awesomeness of gsutil, so instead of doing from google.cloud import storage I use subprocess calls such as:
subprocess.Popen(["gsutil", "-q", "-m", "-o", "GSUtil:parallel_process_count=8,GSUtil:parallel_thread_count=8", "cp", files, destination])
in order to upload and download files from buckets.
In an instance group template I can pass in the service account via --scopes, but I'd like authentication to be handled at the application level. I tried setting environment variables and passing them to subprocess:
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "keyfile.json"
tmp_env = os.environ.copy()
subprocess.Popen(['gsutil', ...], env=tmp_env)
but to no avail. Running:
gcloud auth activate-service-account --key-file /path/to/keyfile.json --project my-project -q
seems to be the best way to authenticate with a JSON keyfile that does not require the Python API. But it doesn't work if I throw it in at the end of my Dockerfile, and while I could of course run it at the end of a startup.sh script (itself executed by the bootstrap.sh script embedded in my instance group template), neither option really accomplishes what I'd like. Namely, both get away from my original goal of having "gsutil authentication" at the application level.
tl;dr Is there a way to pass keyfile.json credentials to gsutil? Is this a feature the gsutil team has ever discussed? My apologies if I just haven't been hunting the Cloud Platform and gsutil docs well enough.

You can provide a pointer to a JSON key file for gsutil in your .boto configuration file like so:
[Credentials]
gs_service_key_file = /path/to/your/keyfile.json
This is equivalent to running gsutil config -e for a standalone (non-gcloud) install.
If you want to provide this on the command line as opposed to in your .boto configuration file, you can use the -o parameter similar to how you configured the process and thread counts in your command line. To wit:
subprocess.Popen(["gsutil", "-q", "-m", "-o", "Credentials:gs_service_key_file=/path/to/your/keyfile.json",
"-o", "GSUtil:parallel_process_count=8", "-o", GSUtil:parallel_thread_count=8", "cp", files, destination])
Note that you need to make sure the key file path is accessible from within your container.
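Putting the pieces together, a minimal Python sketch of a wrapper (the keyfile path and the gsutil_cp name are placeholders, not part of gsutil itself):
import os
import subprocess

KEYFILE = "/path/to/your/keyfile.json"  # placeholder; must be visible inside the container

def gsutil_cp(files, destination):
    # Fail fast if the key file was never mounted or copied into the container.
    if not os.path.exists(KEYFILE):
        raise FileNotFoundError("service account key not found at " + KEYFILE)
    return subprocess.Popen([
        "gsutil", "-q", "-m",
        "-o", "Credentials:gs_service_key_file=" + KEYFILE,
        "-o", "GSUtil:parallel_process_count=8",
        "-o", "GSUtil:parallel_thread_count=8",
        "cp", files, destination,
    ])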

Related

How do I not push the ".env" file to the GitHub repository? [duplicate]

I've tried many different solutions on the web for this problem, but all have been unsuccessful.
Here's the problem: my app needs to know whether it is being run on Heroku (production mode) or locally (development mode). For this purpose, I want to use environment variables. I've understood that environment variables on Heroku can be set in a .env file. So my attempt was to run heroku run bash -a <app-name> and then to install vim by doing this:
mkdir ~/vim
cd ~/vim
# Statically linked vim version compiled from https://github.com/ericpruitt/static-vim
# Compiled on Jul 20 2017
curl 'https://s3.amazonaws.com/bengoa/vim-static.tar.gz' | tar -xz
export VIMRUNTIME="$HOME/vim/runtime"
export PATH="$HOME/vim:$PATH"
cd -
Apart from crashing repeatedly, vim no longer worked once I logged out of the shell and back in:
~ $ vim  # in the heroku shell
vim: error while loading shared libraries: libXt.so.6: cannot open shared object file: No such file or directory
I also tried heroku plugins:install heroku-vim but running heroku vim after that only resulted in a long delay followed by the normal heroku shell opening, no vim.
I don't really care if I get vim to work. I just want to be able to write in a file named .env on Heroku so I can set environment variables in it.
How can I achieve this?
There is no need for an .env file on Heroku. In fact, such a file won't work very well, since Heroku
gets all of its files from your Git repository,
has an ephemeral filesystem, meaning that changes to files like .env will be quickly lost, and
won't make the .env file available on other dynos if you scale your app.
As such, creating an .env file on Heroku isn't a good approach.
Instead, you can use its built-in support for environment variables via heroku config:set VAR=value or its web UI. Either way, you'll get a regular environment variable.
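Your application can then branch on the variable at runtime. A minimal Python sketch, assuming a hypothetical IS_HEROKU config var set once with heroku config:set IS_HEROKU=1:
import os

# IS_HEROKU is a made-up variable name: set it on Heroku with
# `heroku config:set IS_HEROKU=1`; locally it stays unset.
if os.environ.get("IS_HEROKU"):
    mode = "production"
else:
    mode = "development"
print("running in " + mode + " mode")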
It is fairly simple.
Just as you added the variables to your .env file, set them with Heroku's command line; Heroku will restart your app and you are all set to fly again.
Just use the command:
heroku config:set VARIABLE=this_is_the_value
Remember to use underscores in the value, as bare spaces are not allowed; alternatively, wrap the value in quotes (" ") to turn it into a single string.

Calling External Command From Powershell Plugin

I have an application process that runs in IBM UrbanCode. The process uses a PowerShell script that uses the CloudFoundry CLI. Our application process runs on an agent on which the CloudFoundry CLI is installed and available on the PATH. Strangely enough, the PowerShell plugin doesn't know that the CloudFoundry CLI is on the PATH. Echoing out the path via the plugin itself confirms this.
Currently, our application process looks like:
Copy CloudFoundry CLI into UCD's workspace at the start of the job.
Execute various CloudFoundry commands via the following syntax: .\cf login -u foo -p bar -o baz -s bart
I want to avoid copying the client into the workspace and having to use the .\cf syntax in order to make the scripts more portable.
How can I get the Powershell plugin to respect the Agent's path?
Sounds like the user that your PowerShell agent is running under does not have CloudFoundry in its PATH. Your options are:
1. Ensure the PATH variable is set system-wide (see the sketch after this answer).
2. Instead of copying the CloudFoundry CLI, you could manually add the path to CloudFoundry before you run the script:
$env:Path += ";<PATH TO CLOUDFOUNDRY>"
Note: this will only persist for the current session.
To test that you have CloudFoundry in the path you can use
Get-Command cf
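For option 1, a sketch of persisting the directory machine-wide (the install path is a placeholder, and this requires an elevated PowerShell session):
$cfDir = "C:\Program Files\CloudFoundry"  # placeholder: wherever cf.exe actually lives
$machinePath = [Environment]::GetEnvironmentVariable("Path", "Machine")
# Append once and persist at machine scope so new sessions (including the agent) see it.
if ($machinePath -notlike "*$cfDir*") {
    [Environment]::SetEnvironmentVariable("Path", "$machinePath;$cfDir", "Machine")
}
The agent will likely need a restart before it picks up the new machine-level PATH.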

How Can I Compile Postgresql (pl/pgsql) Functions from Sublime?

Is there a way to compile pl/pgsql functions from within Sublime Text 2?
Add a new Build System file with these commands and save it:
{
    "cmd": ["psql", "-d", "your-database-name",
            "-U", "postgres",
            "-f", "$file"],
    "word_wrap": false
}
I actually just released a plugin (called DB1) that allows you to do just that. You can dynamically connect to and execute queries and functions against a PostgreSQL or MySQL database (I'm in the process of adding more databases too). A cool part about it is it doesn't require you to install anything on your computer (other than Sublime Text).
All you have to do is install DB1 through Package Control, and then in a view you can run the command DB1: Connect to connect to your database. You can then execute sql in that view through one of the DB1: Execute commands.
You can also just open the PSQL function (if you have it saved in a file) and execute the whole file.
To see how it works you can check out the DB1 Website or the documentation. Let me know if you have any questions about it!
Yes, you can. In order for this answer to work, your network user must have access to the database. The way to do it is to create a new build system in Sublime for PostgreSQL. You can do this by clicking Tools>Build System>New Build System.... Then replace the default build text with:
{
    "path": "C:/Program Files (x86)/pgAdmin III/1.20/",
    "cmd": ["psql.exe", "-f", "$file", "postgresql://db-staging-1:5432/mydbname"],
    "selector": "source.postgresql",
    "shell": true
}
Path: This should be the location of your psql.exe executable. Note: if this directory is already in your PATH environment variable, this line is unnecessary.
CMD: This is what will be run from the command line. I've included my connection information here as well. You'll need to replace it with the server path and port number for your database. Note, if you are having trouble with getting your build to run, the easiest way to debug what it's actually trying to run is to add echo on the front of this line:
"cmd": ["echo", "psql.exe", "-f", "$file", "postgresql://db-staging-1:5432/mydbname"],
Now the output of your build will be exactly what it's trying to run on the command line. If what it outputs here doesn't work on your command line then you need to change it to something that will.
Selector: This sets the default build for PostgreSQL files.
Shell: Treats the command as a shell script.
Now you can choose your build to be postgresql under Tools>Build System. After that, a simple Ctrl+B will compile pl/pgsql functions to your database! Note that regular SQL can be run against your PostgreSQL database now as well.
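For example, a file like the following minimal pl/pgsql sketch (the function name and body are made up) compiles with Ctrl+B:
-- greet.sql: a trivial function to verify the build system works
CREATE OR REPLACE FUNCTION greet(name text) RETURNS text AS $$
BEGIN
    RETURN 'Hello, ' || name || '!';
END;
$$ LANGUAGE plpgsql;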
If you regularly interact with more than one database at once, see this article as a good reference for setting up connections to multiple databases: How to make build system for PostgreSQL
Other Sublime text build options can be found here: http://sublimetext.info/docs/en/reference/build_systems.html

Set umask for remote commands

How can I direct processes started on remote machines via ssh to run with a certain umask? I want this to apply to commands run as part of standard Capistrano recipes too, so I can't just make an explicit "umask" call part of the command.
It does not appear that ~/.bash_profile on the remote machine is read, with the way that Capistrano invokes remote commands.
I was confronted with the same issue and got around it by using the then-undocumented SSHKit.config.umask in config/deploy.rb. Note that this will set the umask for every SSH command.
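Concretely, that is a one-liner (a sketch; choose whatever umask your deployment needs):
# config/deploy.rb
SSHKit.config.umask = "0002"  # applied to every command SSHKit runs over SSH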
Put umask 0002 in the .bashrc of the user account you use to deploy.
Agreed with Alain: set the umask in your .bashrc instead of .bash_profile. When deploying with Capistrano in a typical setup, your .bash_profile isn't loaded by default. Reading up on the difference between .bashrc and .bash_profile will help in understanding the purposes of the two.
I have environment variables set in my .bashrc file and they are certainly used when I deploy or run any other commands with Capistrano.
Another option is to create a task to set your umask value before you begin creating files on deploy. For example, in Cap 3, you can use this:
task :set_umask do
  on roles(:all) do |host|
    execute "umask 0002"
  end
end
before "deploy:starting", "set_umask"
#beauby's answer using SSHKit is good, but it works only for Capistrano 3 as Capistrano 2 doesn't use SSHKit.
A common problem in relation to umask and Capistrano is that bundle install installs gems with permissions that are too restrictive. For this specific issue, the solution I've found for Capistrano 2 is to say:
namespace :bundle do
  task :postinstall do
    run "chmod -R u=rwX,go=rX #{bundle_dir}"
  end
end
after 'bundle:install', 'bundle:postinstall'

Is it possible to have Perl run shell script aliases?

Is it possible to have a Perl script run shell aliases? I am running into a situation where we've got a Perl module I don't have access to modify and one of the things it does is logs into multiple servers via SSH to run some commands remotely. Sadly some of the systems (which I also don't have access to modify) have a buggy SSH server that will disconnect as soon as my system tries to send an SSH public key. I have the SSH agent running because I need it to connect to some other servers.
My initial solution was to set up an alias mapping ssh to ssh -o PubkeyAuthentication=no, but Perl runs the ssh binary it finds in the PATH instead of trying to use the alias.
It looks like the only solutions are to disable the SSH agent while I am connecting to the problem servers or to override the Perl module that does the actual connection.
Perhaps you could put a command called ssh in PATH ahead of the real ssh, one which runs ssh as you want it to be run.
Alter the PATH before you run the Perl script, or use this in your .ssh/config:
Host *
PubkeyAuthentication no
Why don't you skip the alias and just create a shell script called ssh in a directory somewhere, then change the path to put that directory before the one containing the real ssh?
I had to do this recently with iostat because the new version output a different format that a third-party product couldn't handle (it scanned the output to generate a report).
I just created an iostat shell script which called the real iostat (with hardcoded path, but you could be more sophisticated), passing the output through an awk script to massage it into the original format. Then, I changed the path for the third-party program and it started working fine.
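Applied to the ssh case here, the wrapper might look like this sketch (assuming the real binary lives at /usr/bin/ssh; adjust as needed):
#!/bin/sh
# Save this as "ssh" in a directory that comes before the real ssh in PATH,
# and make it executable (chmod +x ssh). It forces password authentication.
exec /usr/bin/ssh -o PubkeyAuthentication=no "$@"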
You could declare a function in .bashrc (or .profile or whatever) with that name. It could look like this (might break):
function ssh {
    /usr/bin/ssh -o PubkeyAuthentication=no "$@"
}
But using a config file might be the best solution in your case.