Share SSH keys with VS Code Devcontainer running with Docker's WSL2 backend

I'm reading these docs on sharing SSH keys with a dev container, but I can't get it to work.
My setup is as follows:
Windows 10 with Docker Desktop 4.2.0 using the WSL2 backend
A WSL2 distro running Ubuntu 20.04
In WSL2, I have ssh-agent running and aware of my key:
λ ssh-add -l
4096 SHA256:wDqVYQshQBCG/Sri/bsgjEaUFboQDUO/9FJqhFMncdk /home/taschan/.ssh/id_rsa (RSA)
The docs say
the extension will automatically forward your local SSH agent if one is running
But if I run ssh-add -l in the devcontainer, it responds with Could not open a connection to your authentication agent., and of course starting one (with eval "$(ssh-agent -s)") only starts a new agent that doesn't know about my private key.
What am I missing?

I had basically the same issue. I'm running Windows 11 with WSL2, and my VSCode Devcontainer wouldn't show any ssh keys (running ssh-add -l inside the container showed an empty list), despite having Git configured on my host machine with working ssh keys.
For me, there were 3 separate instances of ssh-agent on my machine:
WSL2
Git Bash
Windows host 🠆 This is the one VSCode is forwarding to the devcontainer
My existing ssh keys were set up inside Git Bash (as per Github's instructions), so running ssh-add -l only ever showed my ssh keys from inside a Git Bash terminal, nowhere else.
However, as explained in another answer, digging through the Devcontainer startup logs shows that VSCode forwards only the host machine's ssh-agent; it doesn't look at the WSL2 or Git Bash ones.
Solution: I suggest following the Microsoft docs page linked below. You need to enable an "Optional Feature" in Windows, then run a few commands in PowerShell (as admin) to activate the ssh-agent service. With this set up, the ssh-agent/ssh-add commands will work from a regular CMD terminal too.
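The key steps from that page look roughly like this (the service commands go in an elevated PowerShell; check the linked docs for the authoritative version, and note the key path is just an example):
# In an elevated PowerShell: make the ssh-agent service start automatically, then start it
Get-Service ssh-agent | Set-Service -StartupType Automatic
Start-Service ssh-agent
# Then, from any terminal, add your existing key and verify
ssh-add $env:USERPROFILE\.ssh\id_rsa
ssh-add -l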
You can use these with the usual ssh-keygen commands etc. to generate and add new keys on the host (I just ssh-add'ed the same keys originally generated by Git Bash). The added keys should immediately be detected by ssh-add -l inside the container.
https://learn.microsoft.com/en-us/windows-server/administration/openssh/openssh_keymanagement

I tried many things, but nothing worked. Finally, after the devcontainer was created, I noted down the container name and copied my private and public key files into the container using the docker cp command.
syntax:
docker cp <sourcefile> container_id:/dir
Copy both private and public key:
docker cp /root/.ssh/id_ed25519 eloquent_ritchie:/root/.ssh/
docker cp /root/.ssh/id_ed25519.pub eloquent_ritchie:/root/.ssh/
Change the permissions of the private key so that you can do Git operations:
docker exec eloquent_ritchie chmod 600 /root/.ssh/id_ed25519
eloquent_ritchie is a sample container name; your container name will differ, so use yours.
Then I was able to do Git operations inside devcontainer.
If you rebuild your container, you need to copy the files to the devcontainer again.
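If you're unsure of the container name, docker ps lists the running containers (the auto-generated name, like eloquent_ritchie above, is in the last column):
docker ps --format '{{.Names}}'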

I also had quite a lot of trouble getting this to work. The following steps might help with troubleshooting:
Check that ssh-agent is running on your host and the key is added
Run ssh-add -l on Windows and expect to see the name of your key
Check that VSCode forwards the socket
Search for ssh-agent in the startup log. I had the message:
ssh-agent: SSH_AUTH_SOCK in container (/tmp/vscode-ssh-auth-a56c4b60c939c778f2998dee2a6bbe12285db2ad.sock) forwarded to local host (\\.\pipe\openssh-ssh-agent).
So it seems that VSCode is directly forwarding the Windows SSH agent here (and not an SSH agent running in your WSL).
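To double-check the forwarding end to end, you can compare both sides (the socket path will differ per session):
echo $SSH_AUTH_SOCK   # inside the container; should print the forwarded vscode-ssh-auth socket
ssh-add -l            # should list the same key fingerprint as ssh-add -l on the host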

Related

Podman isn't working in Remote Containers in Windows?

Any hints on why Remote - Containers isn't working with podman on Windows?
Installed podman v4.2.0 on Windows 11 via .msi package
Set remote.containers.dockerPath to podman in VS Code Settings
Run podman machine init
Run podman machine start
Open Remote Explorer in VS Code and be presented with the following (screenshot omitted).
Everything is working with podman (pull, run, images, etc.), but Remote - Containers in VSCode doesn't recognize podman.
After running Remote-Containers Developer: Show All Logs... in VS Code:
[2022-08-21T12:55:15.916Z] Start: Run: podman version --format {{.Server.APIVersion}}
[2022-08-21T12:55:16.080Z] Stop (164 ms): Run: podman version --format {{.Server.APIVersion}}
[2022-08-21T12:55:16.080Z] Cannot connect to Podman. Please verify your connection to the Linux system using `podman system connection list`, or try `podman machine init` and `podman machine start` to manage a new Linux VM
Error: unable to connect to Podman. failed to create sshClient: dial unix \\.\pipe\openssh-ssh-agent: connect: No connection could be made because the target machine actively refused it.
And podman system connection list in a terminal:
Name URI Identity Default
podman-machine-default ssh://user@localhost:62078/run/user/1000/podman/podman.sock C:\Users\Edmundo\.ssh\podman-machine-default true
podman-machine-default-root ssh://root@localhost:62078/run/podman/podman.sock C:\Users\Edmundo\.ssh\podman-machine-default false
Related Issues: #6957, #6747.
Please confirm you are running the latest (prerelease) build, v0.236.1; there are known issues on GitHub with earlier releases that are fixed in this version.
In a WSL shell, try the following for debugging.
First, try to start the podman (libpod) REST API service; this creates the socket (the lifetime here is 5000 seconds; set it to zero for "forever"):
podman system service -t 5000 &
Then symlink podman.sock to the location VS Code expects:
sudo ln -s /mnt/wslg/runtime-dir/podman/podman.sock /var/run/docker.sock
If none of that works, would you mind posting a dump of:
podman info
HINT: check the podman info YAML output under host | remoteSocket | path and make sure it matches the path /mnt/wslg/runtime-dir/podman/podman.sock above.
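If your podman build supports Go-template formatting for podman info (a hedge; check podman info --help), you can print just that field:
podman info --format '{{.Host.RemoteSocket.Path}}'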
The bug is being tracked on GitHub. One step you should also take is to enable Run in WSL in the VS Code Development Container extension settings. Then it will run the podman commands in the podman-machine-default WSL instance.

SSH with bash doesn't load the profile, so no permission to mkdir for VS Code Remote SSH

I have a weird problem and don't really know where it's coming from. I have machines A, B, and C. I want to connect my VSCode to machine C with the Remote-SSH extension; here's my config:
# Jump box
Host jump-box
    HostName machineB
    User myuser

# Target machine
Host target-box
    HostName machineC
    User myuser
    ProxyCommand ssh -q -W %h:%p jump-box
Machine C is a weird server used by a lot of people. When I try to connect, the connection to machine B is fine, but then the extension tries to run ssh -D -T some5XXXXport machineC bash from B.
The last command passes fine, and I have tested it manually; however, the bash at the end makes it run the root bash or something, because I lose my home directory and get an admin one.
As a consequence, the extension fails because it tries to mkdir /some/admin/home/.vscode-server/bin/somecommithash: Permission Denied. My ~ doesn't belong to me anymore when the ssh command ends in bash.
Any ideas how to overwrite or even hack the command?
Do you know why running ssh address bash breaks everything?
I also don't think the B->C ssh connection is picking up any ~/.bash_profile, ~/.bashrc or ~/.profile from machine C, perhaps because ~ points to another home.
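For what it's worth, a quick way to see what kind of shell such a command lands in (a sketch, assuming the remote shell on machine C is bash):
ssh target-box 'shopt -q login_shell && echo login shell || echo non-login shell; echo "HOME=$HOME"'
A non-login, non-interactive shell skips ~/.bash_profile and ~/.profile, which would explain the missing environment.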
Solved it. I added everything I needed directly in extension.js.

How to run multiple VS Code instances to use different identities for interacting with a remote git repository while working with Remote Containers?

Sorry for the long post.
I have a VS Code Remote Development setup using containers. I have multiple user accounts for the same git server and use key-based authentication to interact with it. I am using Ubuntu 18.04 LTS on my host machine and Debian Buster in the containers.
Git Server - git.server.com
Git urls
- git@git.server.com:repo1.git
- git@git.server.com:repo2.git
Repo1 is used by User1 account
Repo2 is used by User2 account
ssh-key for User1 - ~/.ssh/id_ed25519_user1
ssh-key for User2 - ~/.ssh/id_ed25519_user2
I have created an ssh config in ~/.ssh/config
Host user1.git.server.com
    HostName git.server.com
    User git
    IdentityFile /home/user/.ssh/id_ed25519_user1
    IdentitiesOnly yes

Host user2.git.server.com
    HostName git.server.com
    User git
    IdentityFile /home/user/.ssh/id_ed25519_user2
    IdentitiesOnly yes
From a terminal on the host machine I can authenticate to the server
as user1:
ssh git@user1.git.server.com
as user2:
ssh git@user2.git.server.com
I can launch multiple VS Code instances, and inside the VS Code terminal (without opening the folder in a remote container) I can use the above commands to authenticate.
Things get interesting once we use "Reopen in Containers".
I don't have the .ssh/config file mounted inside the containers, so the above commands fail with the error:
Could not resolve hostname user1.git.server.com: Name or service not known
or
Could not resolve hostname user2.git.server.com: Name or service not known
I don't want to mount my ~/.ssh folder into the containers, for obvious reasons. --mount=type=ssh is also not possible, as this is not building a container but working with a container as a dev environment.
The next thing I notice is that ssh-agent forwarding is working:
ssh-add -l
If I execute the above command in a terminal on the host, a terminal in VS Code (when the folder is opened locally), and a terminal in VS Code (when the folder is opened in a remote container), all return the same output (the fingerprints of the keys for user1 and user2).
So I start a new ssh-agent in a host terminal before launching VS Code:
~/repo1$ eval `ssh-agent -s`
~/repo1$ ssh-add ~/.ssh/id_ed25519_user1
~/repo1$ code .
and in another terminal
~/repo2$ eval `ssh-agent -s`
~/repo2$ ssh-add ~/.ssh/id_ed25519_user2
~/repo2$ code .
In the respective host terminal and VS Code terminal (without opening the folder in a remote container) I get the desired result. I can use:
ssh git@git.server.com
So I don't need an ssh config file anymore.
But this does not work when the folder is opened in a Remote Container. VS Code only connects to the first ssh-agent started. That is, (in this case) repo1 opened in a VS Code Remote Container works perfectly with full git support, but repo2 opened in a VS Code Remote Container does not.
How can I tell VS Code which ssh-agent to forward to the container when switching to a remote container?
Workaround
The workaround I am currently using is to rely on the default ssh-agent (Ubuntu starts an ssh-agent at login time). This agent already contains both identities. Verify by running:
ssh-add -l
Launch multiple VS Code instances as usual and switch to the Remote Container.
When I want to perform a git operation, I do the following in a host terminal:
For repo1 (user1) operations
$ ssh-add -D
$ ssh-add ~/.ssh/id_ed25519_user1
For repo2 (user2) operations
$ ssh-add -D
$ ssh-add ~/.ssh/id_ed25519_user2
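To make the switching less error-prone, those two steps can be wrapped in a tiny helper script (a sketch; the script name use-git-id.sh and the key naming scheme follow the setup above):
#!/bin/bash
# use-git-id.sh: load exactly one identity into the default agent
# usage: ./use-git-id.sh user1   (or user2)
ssh-add -D                        # drop all identities from the agent
ssh-add ~/.ssh/id_ed25519_"$1"    # add only the requested key
ssh-add -l                        # confirm what the agent now holds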
Is there any other suggested way? Are there any settings that can be added to devcontainer.json to achieve proper forwarding of the ssh-agent?
Thank you for your patience.

How to deploy a docker-compose solution automatically from GitHub to a VPS over SSH?

What I want to do:
Deploy a docker-compose solution from GitHub to my virtual private server, which has docker and docker-compose installed.
I saw that there are GitHub Actions that allow me to copy files over SSH after a push to master, but I don't know how to run docker-compose up on my server after the source has been copied.
On my VPS I have Ubuntu 18.04 installed.
I believe GitHub Actions also lets you run arbitrary commands on remote servers via ssh (there are a few such actions in their library).
Assuming you copy your docker-compose.yml to /home/user/app/docker-compose.yml, you could run a command like so:
ssh user@yourserver.example.com "cd /home/user/app/ && docker-compose up -d"
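Putting the copy and the remote command together, the whole deploy step could look like this (a sketch; the host, user, and paths are placeholders):
scp docker-compose.yml user@yourserver.example.com:/home/user/app/
ssh user@yourserver.example.com "cd /home/user/app/ && docker-compose pull && docker-compose up -d"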

Deployment with only SSH Key and dockerfile

Excuse my DevOps naiveté, but I assume all you need to deploy to a machine is a proper SSH key, a port to expose, the machine's IP address, a login, and the code to deploy.
So are there any simple solutions that deploy code to a remote server with the only inputs being an SSH key, a Dockerfile, and the code itself? I'm thinking it could be set up in a deterministic (almost functional) manner where the inputs are the server IP address and login, and the output is a running server.
I've tried setting up Dokku on DigitalOcean (https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-dokku-application), and that requires a DNS record and git. I don't need those as dependencies.
Thanks
If I understand your question correctly, you don't need anything more than scp, ssh, and a couple of shell scripts.
Let's say you want to deploy your code from serverA to serverB.
On serverB, create a directory with your Dockerfile. Also, create a shell script, let's call it build_image.sh, that runs your docker build command using sudo.
Also, on serverB, create a shell script that builds your code from source (if necessary).
Finally, on serverB, create a shell script that calls your code build script and your docker build script, and at the end runs your new docker image. Let's call this script do_it_all.sh.
Make sure that you chmod 755 all shell scripts.
Now, on serverA, you have a directory with the source code. scp that directory to serverB into the directory with the Dockerfile.
Next, from serverA use ssh to call do_it_all.sh on serverB. This will build your code, build your image and deploy a container without the need for extra software, packages, git, DNS records, etc.
You can even automate this process using cron or something else to have nightly deployments, if you wish, or deployments under other conditions.
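For example, a nightly deployment could be a single crontab entry on serverA (the time and log path are placeholders):
# run the full deploy on serverB every night at 02:30
30 2 * * * ssh serverB '/path/to/my/dockerfile/do_it_all.sh' >> /tmp/deploy.log 2>&1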
Example scripts/commands:
On serverB:
build_image.sh:
#!/bin/bash
sudo docker build -t my_image .
build_code.sh (optional, adjust to your code):
#!/bin/bash
cd /path/to/my/code
./configure
make
do_it_all.sh:
#!/bin/bash
cd /path/to/my/dockerfile
sudo docker stop my_container   # stop the old container
sudo docker rm my_container     # remove the old container
sudo docker rmi my_image        # remove the old image
./build_code.sh                 # comment out if not needed
./build_image.sh
sudo docker run -d --name my_container my_image
On serverA:
scp -r /path/to/my/code serverB:/path/to/my/dockerfile
ssh serverB '/path/to/my/dockerfile/do_it_all.sh'
That should be it. Adjust for your system.
To deploy to a brand new system, just write a script on serverA that uses ssh to create the necessary directories on serverB (ssh serverB 'mkdir -p /path/to/dockerfile'). Next, copy your Dockerfile, your build scripts, and your code from serverA to serverB using scp. Then run do_it_all.sh on serverB from serverA using ssh.
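Put together, bootstrapping a fresh serverB from serverA might look like this (a sketch; all paths are placeholders):
ssh serverB 'mkdir -p /path/to/my/dockerfile'
scp Dockerfile build_code.sh build_image.sh do_it_all.sh serverB:/path/to/my/dockerfile/
scp -r /path/to/my/code serverB:/path/to/my/dockerfile/
ssh serverB '/path/to/my/dockerfile/do_it_all.sh'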