Tab completion for non-file-system paths or URIs (e.g. gs://...) - google-cloud-storage

I use cloud storage (e.g. GCS, S3) in addition to the local file system for data analysis.
My question is: are there tools that enable tab completion (in a shell environment) for file paths or URIs that aren't on local, mounted file systems? E.g. for a file URI like gs://path/to/file.txt, I'd like to have tab completion when typing part of the path.
Note: I don't want to use FUSE (or mount a file system or volume in some way). I'm wondering if there are bash or zsh extensions that enable this functionality for non-file-system URIs, presumably with API calls in the background or something.

The gcloud interactive shell has auto-completion and auto-prompting for any command that has a manual page, including the gcloud, bq, gsutil, and kubectl command-line tools.
To enable the gcloud interactive shell:
Install the gcloud beta components first by running the command gcloud components install beta.
Run gcloud beta interactive to enter gcloud interactive mode.
You can now use the Tab key to complete a file path or resource argument.
Check this link for the official documentation of gcloud auto-complete.
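For example, inside the interactive shell a partially typed Cloud Storage path can be completed with Tab (the bucket and object names below are hypothetical):
gsutil ls gs://my-bucket/da<TAB>
gs://my-bucket/data/    gs://my-bucket/dashboards/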

Related

vscode dev container with prompt environment variables

For dev containers, it is possible to either provide env variables directly in devcontainer.json or to provide a devcontainer.env file. I wish to provide most of the variables via the env file but be prompted for input for a few sensitive ones so that they are available inside the container. This can be achieved for launch.json files using VAR="${input:var_id}". I could not find a similar mechanism for devcontainer.json. I tried asking for user input in postCreateCommand and exporting it, but it was not available when I attached a shell to the container.

How do I set the default browser for xdg-open on Centos 7 if xdg-settings has no desktop environment

There are many questions similar to mine (e.g. "xdg-open not open default browser" or "xdgutils - xdg-settings not setting default-web-browser in gentoo"), but none of the answers helped in my case. Therefore I ask for my particular situation:
On CentOS 7 I have no desktop environment running; I just run some X11 applications (like VS Code) from the command line, where the DISPLAY variable is set to the X server on the (Windows) machine I connect from.
On the CentOS machine I have two browsers installed, firefox and google-chrome. I can start both just by typing firefox or google-chrome, respectively, in the bash terminal.
xdg-open is available and it opens links in google-chrome, as does VS Code. However, I want to change this to firefox.
I tried:
Ticking "Default browser" in Firefox's GUI preferences.
Using xdg-settings, but
xdg-settings get default-web-browser
returns "xdg-settings: unknown desktop environment"
Setting $BROWSER. In bash I issued
export BROWSER=firefox
but google-chrome is still started by xdg-open.
How can I set the default browser to firefox in this environment?
Note: Strangely, on another machine with CentOS 6 (and "no desktop environment" either), the export BROWSER method works!
The desired behavior can be set in the mimeapps.list configuration files described in the XDG MIME Applications specification.
TLDR:
In order to configure firefox as the default browser for your user, create ~/.config/mimeapps.list containing the following lines:
[Default Applications]
x-scheme-handler/http=firefox.desktop
x-scheme-handler/https=firefox.desktop
x-scheme-handler/ftp=firefox.desktop
x-scheme-handler/chrome=firefox.desktop
text/html=firefox.desktop
application/x-extension-htm=firefox.desktop
application/x-extension-html=firefox.desktop
application/x-extension-shtml=firefox.desktop
application/xhtml+xml=firefox.desktop
application/x-extension-xhtml=firefox.desktop
application/x-extension-xht=firefox.desktop
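Alternatively, you can let xdg-mime write the most common of these entries (http/https/html) for you instead of editing the file by hand, assuming a firefox.desktop file is installed in one of the applications directories:
xdg-mime default firefox.desktop x-scheme-handler/http x-scheme-handler/https text/html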
Details:
xdg-utils like xdg-open(1) and xdg-mime(1) look for this file in the locations listed under the File name and location section of this specification:
$XDG_CONFIG_HOME/$desktop-mimeapps.list: user overrides, desktop-specific (for advanced users)
$XDG_CONFIG_HOME/mimeapps.list: user overrides (recommended location for user configuration GUIs)
$XDG_CONFIG_DIRS/$desktop-mimeapps.list: sysadmin and ISV overrides, desktop-specific
$XDG_CONFIG_DIRS/mimeapps.list: sysadmin and ISV overrides
$XDG_DATA_HOME/applications/$desktop-mimeapps.list: for completeness, deprecated, desktop-specific
$XDG_DATA_HOME/applications/mimeapps.list: for compatibility, deprecated
$XDG_DATA_DIRS/applications/$desktop-mimeapps.list: distribution-provided defaults, desktop-specific
$XDG_DATA_DIRS/applications/mimeapps.list: distribution-provided defaults
The locations for the $XDG variables are governed by the XDG Base Directory specification. If you want to figure out where xdg-utils are looking for configuration in your particular case, run them with the XDG_UTILS_DEBUG_LEVEL environment variable like so:
$ XDG_UTILS_DEBUG_LEVEL=10 xdg-open 'https://www.example.com'
...
Checking /home/USERNAME/.config/mimeapps.list
...
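To verify the new association without relying on xdg-settings (which needs a desktop environment), you can query the handler directly; after creating the file above it should print firefox.desktop:
$ xdg-mime query default x-scheme-handler/http
firefox.desktop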

How can I avoid typing in the profile for aws each time?

I want to avoid typing --profile dev-platform every time I want to access the AWS cli commands.
How do I append a suffix like --profile dev-platform every time I want to run an AWS command?
I just want to type aws s3 ls without manually putting in the profile each time, like the two instances above.
aws *wild card* --profile dev-platform
I tried some things like
alias aws= aws ** | --profile dev-platform
but to no avail.
Assuming you have multiple profiles in your ~/.aws/config, and want to set dev-platform as the profile to use for your terminal session you can use:
export AWS_DEFAULT_PROFILE=dev-platform
This will set the default AWS profile to the selected profile for your terminal session.
I switch between multiple profiles regularly and create aliases for my different profiles, so I can use a short command like "use-dev" to switch to my dev profile, or "use-prod" to switch to my production profile, quickly and easily without having to type the full command.
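A minimal sketch of such aliases, assuming the profiles are named dev-platform and prod-platform in ~/.aws/config (the prod name is hypothetical), added to ~/.bashrc or ~/.zshrc:
# switch the default profile for the current shell session
alias use-dev='export AWS_DEFAULT_PROFILE=dev-platform'
alias use-prod='export AWS_DEFAULT_PROFILE=prod-platform'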

Automating gsutil commands

I'm trying to automate some gsutil commands, but I'm struggling to see where the authentication files are kept and how to re-use them (if that's what happens).
I've gone through the gcloud init process in bash...
curl https://sdk.cloud.google.com | bash
gcloud init
All works well when I run
'gsutil ls'
Now I'm trying to automate the process, so it would work on a new server added to a crontab (rather than creating a new config each time).
I saw a mention of setting the env variable GOOGLE_APPLICATION_CREDENTIALS, so I copied my credentials from the web login to a file and tried it, e.g. trying as a different user to test:
export GOOGLE_APPLICATION_CREDENTIALS=/home/user/.gsutil/mycreds
and then ran gsutil ls, but it fails.
So I assume I've got the whole credentials thing a bit wrong. I'm assuming there is a file somewhere that was originally created by gcloud which I could use, but I can't see it anywhere?
I've looked at the answer here, but it doesn't seem up to date now, as per the last comment.
Edit: I have followed Zachary's steps: gcloud auth activate-service-account --key-file=myfilelocation
However, with 'gsutil ls' I now get:
You are attempting to perform an operation that requires a project id, with none configured. Please re-run gsutil config and make sure to follow the instructions for finding and entering your default project id.
So my next question would be: where is it looking for the project id? If I run gsutil config, it seems to create a new set of auth, which then creates another error, so I have removed that.
You should be able to do this without diving too deep into the implementation of authentication for gsutil.
If you're using standalone gsutil (if you installed via this method), the instructions in the linked question are still valid (as Travis points out).
If you'd like to continue using the gsutil supplied via the Cloud SDK, you should use service accounts. Service accounts are the preferred method of authenticating on headless machines or in non-interactive contexts.
Your flow would look something like the following:
Create a service account via the Google Cloud Developers Console.
On the remote machine, install the Cloud SDK and gsutil. If you're not installing interactively, it's better to skip the curl ... | bash method. Instead, download this install archive, extract it, and run the install.sh script. This script has options (visible with --help); if you specify choices for all of these options, it won't prompt you.
Copy the service account key to the remote machine. Run gcloud auth activate-service-account --key-file=/path/to/service-account.json.
Run gsutil. You should be appropriately authenticated.
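A rough sketch of that flow on the remote machine (the project id, key file path, and archive name are placeholders, and the install.sh option names may vary by SDK version; check install.sh --help):
# extract the downloaded Cloud SDK archive and install non-interactively
tar xzf google-cloud-sdk.tar.gz
./google-cloud-sdk/install.sh --usage-reporting false --path-update true --command-completion false
# authenticate with the service account key copied to this machine
gcloud auth activate-service-account --key-file=/path/to/service-account.json
gcloud config set project my-project-id
gsutil ls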
You have to set the default project and user in gsutil. Run the following command:
gcloud init
Choose option 1. It shows you the different users; select the user, and then select the project.
I was trying to create a bucket with the project id as its name:
$ gsutil mb -l eu gs://PROJECT-ID
Creating gs://root****/...
Error: You are attempting to perform an operation that requires a project id, with none configured. Please re-run gsutil config and make sure to follow the instructions for finding and entering your default project id.
Steps that resolved it for me:
gcloud auth login
gcloud config set project <PROJECT-ID>
gsutil mb -l eu gs://<PROJECT-ID>
Creating gs://root***/...
The error went away and it now works as expected.

Docker and sensitive information used at run-time

We are dockerizing an application (written in Node.js) that will need to access some sensitive data at run-time (API tokens for different services) and I can't find any recommended approach to deal with that.
Some information:
The sensitive information is not in our codebase; it's kept in another repository in encrypted form.
In our current deployment, without Docker, we update the codebase with git, and then we manually copy the sensitive information via SSH.
The Docker images will be stored in a private, self-hosted registry.
I can think of some different approaches, but all of them have some drawbacks:
Include the sensitive information in the Docker images at build time. This is certainly the easiest one; however, it makes them available to anyone with access to the image (I don't know if we should trust the registry that much).
Like 1, but having the credentials in a data-only image.
Create a volume in the image that links to a directory in the host system, and manually copy the credentials over SSH like we're doing right now. This is very convenient too, but then we can't spin up new servers easily (maybe we could use something like etcd to synchronize them?)
Pass the information as environment variables. However, we have 5 different pairs of API credentials right now, which makes this a bit inconvenient. Most importantly, however, we would need to keep another copy of the sensitive information in the configuration scripts (the commands that will be executed to run Docker images), and this can easily create problems (e.g. credentials accidentally included in git, etc).
PS: I've done some research but couldn't find anything similar to my problem. Other questions (like this one) were about sensitive information needed at build-time; in our case, we need the information at run-time
I've used your options 3 and 4 to solve this in the past. To rephrase/elaborate:
Create a volume in the image that links to a directory in the host system, and manually copy the credentials over SSH like we're doing right now.
I use config management (Chef or Ansible) to set up the credentials on the host. If the app takes a config file needing API tokens or database credentials, I use config management to create that file from a template. Chef can read the credentials from an encrypted data bag or from attributes, set up the files on the host, and then start the container with a volume just like you describe.
Note that in the container you may need a wrapper to run the app. The wrapper copies the config file from wherever the volume is mounted to wherever the application expects it, then starts the app.
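A minimal sketch of such a wrapper entrypoint (the /secrets mount point, config path, and app start command are hypothetical):
#!/bin/sh
# copy the credentials file from the mounted volume to where the app expects it
cp /secrets/config.json /app/config/production.json
# hand control over to the Node.js app so it runs as PID 1
exec node /app/server.js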
Pass the information as environment variables. However, we have 5 different pairs of API credentials right now, which makes this a bit inconvenient. Most importantly, however, we would need to keep another copy of the sensitive information in the configuration scripts (the commands that will be executed to run Docker images), and this can easily create problems (e.g. credentials accidentally included in git, etc).
Yes, it's cumbersome to pass a bunch of env variables using the -e key=value syntax, but this is how I prefer to do it. Remember that the variables are still exposed to anyone with access to the Docker daemon. If your docker run command is composed programmatically, it's easier.
If not, use the --env-file flag as discussed here in the Docker docs. You create a file with key=value pairs, then run a container using that file.
$ cat >> myenv << END
FOO=BAR
BAR=BAZ
END
$ docker run --env-file myenv <image>
That myenv file can be created using Chef/config management as described above.
If you're hosting on AWS, you can leverage KMS here. Keep either the env file or the config file (the one passed to the container in a volume) encrypted via KMS. In the container, use a wrapper script to call out to KMS, decrypt the file, move it into place, and start the app. This way the config data is not exposed on disk.
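A sketch of such a KMS wrapper, assuming the encrypted config was produced with aws kms encrypt and shipped into the container as /secrets/config.json.enc (the file names and app start command are hypothetical):
#!/bin/sh
# decrypt the config via KMS; the instance or task role must be allowed to use the key
aws kms decrypt --ciphertext-blob fileb:///secrets/config.json.enc \
    --output text --query Plaintext | base64 -d > /app/config/production.json
# start the app once the plaintext config is in place
exec node /app/server.js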