Inherit VS Code host AWS credentials when connected to a GitHub Codespace - visual-studio-code

Goal: Not have to configure AWS CLI credentials (~/.aws) every time I create a new devcontainer with Codespaces.
I know I can't bind mount ~/.aws like I could with a local devcontainer. Is there any other mechanism that will allow me to inherit AWS credentials (or GitHub CLI credentials, etc.) from my VS Code host machine?

Related

Configuring an Airflow SSH connection in Google Cloud Composer

I'm trying to configure an SSH connection from the Airflow UI on a Google Cloud Composer environment to an on-premises PostgreSQL server.
Where should I store my private key?
How do I pass the private key location to the SSH connection config?
First, you will need to add an SSH connection under:
Airflow -> Admin -> Connections -> Connection Type (SSH)
That will allow you to use this connection in an operator to access the remote instance. Reference your key in the Extra field (see the key_file and host_key parameters).
Documentation here: https://airflow.apache.org/docs/apache-airflow-providers-ssh/stable/connections/ssh.html
Adding the key file to the same GCS bucket that holds the DAGs will make it reachable by the Airflow workers. You can create a new directory under dags and name it keys if you want.
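A rough sketch of that upload, assuming a hypothetical Composer bucket name and key filename (Cloud Composer mirrors the bucket's dags/ folder to /home/airflow/gcs/dags on the workers):

    # Upload the private key next to the DAGs (bucket and key names are examples).
    gsutil cp ./onprem_key gs://my-composer-bucket/dags/keys/onprem_key

    # In the Airflow UI (Admin -> Connections), the SSH connection's Extra field
    # can then reference the synced path on the workers, for example:
    #   {"key_file": "/home/airflow/gcs/dags/keys/onprem_key"}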
Then you will need to design your pipeline (DAG) so it can use that private key to reach the remote instance.
You can use the SSHExecuteOperator or any other operator based on your design.
Check this question for more helpful details:
Airflow: How to SSH and run BashOperator from a different server

How to authenticate with GitHub when using a shared machine

My current workflow includes typing the password: log into a server, pull (or sometimes clone, checkout, or even push), type in the creds, and leave. I do not want to store my credentials on that machine, and I do not always have the chance to access my own password manager on the same machine.
How are we supposed to do this now that a password can no longer be used with GitHub on the command line? Should I actually carry a paper slip with an access token? Or am I obliged to configure an SSH deploy key for every project on every server? That seems to require logging into the GitHub website, and it's not like I have a GUI on those machines.
Is there any sane way? How would you do it, if you sit down in front of a Linux bash and have to deploy a project on that machine, using that machine?
How you should handle this depends on what your needs are.
If you want to automate a deployment process for a machine, then using a deploy key for that machine is a good idea, since that's the exact purpose for which they're designed. Ideally your deployment processes are automated, and deploy keys are a good way to do that.
If your goal is to log into several machines via SSH and perform Git operations with a remote, you can use an SSH key. If you're logging in via SSH, then add your SSH key to your agent and forward your agent to the remote system with the -A option, which will let you perform the access as if you had that key on the remote system. This is the easiest and simplest solution if you can do so, and is even more convenient than typing your username and password.
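For example (key path, hostname, and repository are placeholders), the session might look like this:

    # On your local machine: load the key into the agent, then forward it.
    ssh-add ~/.ssh/id_ed25519
    ssh -A user@shared-server

    # On the shared machine: Git authenticates through the forwarded agent,
    # so no key or password is ever stored there.
    git pull origin main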
If you need to log in to machines at the console, then generate an SSH key, add it to GitHub, and store it on a flash drive, at which point you can mount the flash drive and use the keys with Git by setting the environment variable GIT_SSH_COMMAND to ssh -oIdentitiesOnly=yes -i /mnt/path-to-key (substituting the path to the key).
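A minimal sketch of that, assuming the drive is mounted at /mnt/usb and the key is named id_ed25519 (both are placeholders):

    # Any Git command in this shell will now authenticate with only that key.
    export GIT_SSH_COMMAND="ssh -oIdentitiesOnly=yes -i /mnt/usb/id_ed25519"
    git clone git@github.com:user/repo.git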

Pointing to a private GitHub repository or AWS S3 as the notebook directory for JupyterHub notebook servers

Is it possible to point to a private GitHub repository or AWS S3 as the notebook directory for JupyterHub notebook servers?
In the JupyterHub config file, I can set c.Spawner.notebook_dir to point to local directories, but how can I point to a file share protected by a password, a private GitHub repository, or AWS S3?
There is some information here - https://github.com/jupyterhub/jupyterhub/issues/314 - on customizing the directory location for each user. Is there a way to extend the custom spawner class to have the ability to point to private GitHub or S3?
The simplest way, if you can satisfy the requirements, would be to use an S3 FUSE filesystem to mount an S3 bucket at a path in your local directory tree.
You could also further extend the custom spawner in that issue to re-clone/update a GitHub repo every time you spawn a notebook (and then pass the path into the notebook), but that would be pretty slow. Also, in that case the user account running the spawner needs to be able to read the credentials for the GitHub account. The S3 solution lets you handle this outside of the Jupyter workflow, so credentials can be kept under a different permissions scheme.
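A rough sketch of the FUSE approach with s3fs (bucket name, mount path, and credentials file are assumptions):

    # Assumes s3fs-fuse is installed and ~/.passwd-s3fs contains
    # ACCESS_KEY_ID:SECRET_ACCESS_KEY with permissions 600.
    mkdir -p /srv/jupyter/notebooks
    s3fs my-notebook-bucket /srv/jupyter/notebooks -o passwd_file=${HOME}/.passwd-s3fs

    # jupyterhub_config.py can then point at the mounted path:
    #   c.Spawner.notebook_dir = '/srv/jupyter/notebooks'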

VSTS Azure File Copy task and ACL

I am using VSTS (Visual Studio Team Services, formerly known as Visual Studio Online) for continuous deployment to an Azure VM using an Azure File Copy task in my build definition.
The problem I am having is that I have an ACL set up on the Azure VM that only allows connections from my office for Remote PowerShell.
With the ACL in place, the Azure File Copy task fails with an error like "WinRM cannot complete the operation. Verify that the specified computer name is valid, that the computer is accessible over the network, and that the firewall exception for the WinRM service is enabled and allows access from this computer." With the ACL removed, everything works.
To be clear, this is not a problem with WinRM configuration or firewalls or anything like that. It is specifically the ACL on the VM that is blocking the activity.
So the question is, how can I get this to work without completely removing the ACL from my VM? I don't want to open up the VM PowerShell endpoint to the world, but I need the Azure File Copy task of my build to succeed.
You can have an on-premises build agent that lives within your office's network and configure things so that the build only uses that agent.
https://msdn.microsoft.com/library/vs/alm/release/getting-started/configure-agents#installing
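As a rough sketch of what registering such an agent looks like with the cross-platform agent package (account, pool, agent name, and token are placeholders; the exact flags are described in the linked documentation):

    # Run on a machine inside the office network.
    mkdir myagent && cd myagent
    tar zxvf vsts-agent-linux-x64.tar.gz        # agent package from the link above
    ./config.sh --unattended \
        --url https://myaccount.visualstudio.com \
        --auth pat --token <personal-access-token> \
        --pool Default --agent office-agent
    ./run.sh                                    # start the agent

Then point the build definition at that agent pool so the copy runs from inside the allowed network.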
The Azure File Copy task needs to use the WinRM HTTPS protocol, so when you enable the ACL, the hosted build agent won't be able to access WinRM on the Azure VM, and that will cause the Azure File Copy task to fail.
When copying the files from the blob container to the Azure VMs, Windows Remote Management (WinRM) HTTPS protocol is used. This requires that the WinRM HTTPS service is properly setup on the VMs and a certificate is also installed on the VMs.
There isn't any easy workaround for this as far as I know. I would recommend setting up your own build agent in your network that can access WinRM on the Azure VM.
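If it helps to confirm which hosts the ACL is blocking, a quick port check from a candidate agent machine can verify reachability of the WinRM HTTPS endpoint (port 5986 by default; the hostname is a placeholder):

    # Succeeds only if the ACL allows this machine to reach WinRM over HTTPS.
    nc -zv -w 5 myvm.cloudapp.net 5986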

How to gain SSH access from one AWS instance to another without a private key?

I have an SSH keypair: private lives on my local Mac, public lives on several AWS cloud machines.
From my Mac, I can SSH to a cloud instance, call it "deploy server". From there, I need to deploy my application to several instances (I cannot deploy locally).
I authenticate to the other instances with my private key. I can do this either by leaving my private key on the deploy server (insecure) or by using SSH agent forwarding (probably not much better).
Moreover, the deploy takes a while, so I do it in a GNU Screen or tmux session; then I just detach and end the SSH session with the deploy server, meaning I cannot use SSH agent forwarding (as I believe it requires the SSH connection to remain open).
What other options are available to me?
You can use a deploy key. That is a server-specific key that has read-only access to the repository.
To use this, you need to:
Generate a key pair for the server (run ssh-keygen on the server)
Add the public key to the GitHub repo as a deploy key (https://github.com/<user>/<repo>/settings/keys). That will grant read-only permissions to the repo; there is a checkbox if you also need write access. A short sketch of these steps follows below.
Read more in this GitHub help guide. There you can see more methods for accessing a repository when deploying from a server.
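A minimal sketch of those steps (the key path and repository are placeholders):

    # On the server: generate a dedicated key pair for this machine.
    ssh-keygen -t ed25519 -f ~/.ssh/deploy_key

    # Paste the public key into https://github.com/<user>/<repo>/settings/keys
    cat ~/.ssh/deploy_key.pub

    # Use that key for Git operations on the server.
    GIT_SSH_COMMAND="ssh -i ~/.ssh/deploy_key -oIdentitiesOnly=yes" \
        git clone git@github.com:<user>/<repo>.git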