SSH doesn't source my profile, so no permission to mkdir for VS Code Remote-SSH

I have a weird problem and don't really know where it's coming from. I have machines A, B and C, and I want to connect VS Code to machine C with the Remote-SSH extension. Here's my config:
# Jump box
Host jump-box
    HostName machineB
    User myuser
# Target machine
Host target-box
    HostName machineC
    User myuser
    ProxyCommand ssh -q -W %h:%p jump-box
Machine C is a shared server used by a lot of people. When I try to connect, the connection to machine B is fine, but then the extension runs ssh -D -T some5XXXXport machineC bash from B.
That last command works when I test it manually, but the bash at the end seems to drop me into a root/admin shell or something, because I lose my home directory and get an admin one.
As a consequence, the extension fails because it tries mkdir /some/admin/home/.vscode-server/bin/somecommithash and gets Permission denied. My ~ doesn't belong to me anymore when the ssh command ends in bash.
Any ideas how to override or even hack the command?
Do you know why running ssh address bash breaks everything?
I also don't think the B->C ssh connection is picking up any ~/.bash_profile, ~/.bashrc, or ~/.profile from machine C, perhaps because ~ points to another home.
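For what it's worth, bash only reads ~/.bash_profile (or ~/.profile) when it is started as a login shell, and ~/.bashrc only when it is interactive; the bare bash that Remote-SSH launches is neither, so none of those files run. You can reproduce what the extension sees without VS Code (a sketch, using the target-box alias from the config above):

# mimic the extension: pipe commands into a non-login, non-interactive bash on machine C
echo 'echo "HOME=$HOME"; whoami; ls -ld "$HOME"' | ssh target-box bash

If the HOME or the directory owner printed here is not yours, the problem lies in machine C's account setup (or a forced command on the server) rather than in the extension itself.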

Solved it. I added everything I need directly in extension.js.

Related

Share SSH keys with VS Code Devcontainer running with Docker's WSL2 backend

I'm reading these docs on sharing SSH keys with a dev container, but I can't get it to work.
My setup is as follows:
Windows 10 with Docker Desktop 4.2.0 using the WSL2 backend
A WSL2 distro running Ubuntu 20.04
In WSL2, I have ssh-agent running and aware of my key:
λ ssh-add -l
4096 SHA256:wDqVYQshQBCG/Sri/bsgjEaUFboQDUO/9FJqhFMncdk /home/taschan/.ssh/id_rsa (RSA)
The docs say
the extension will automatically forward your local SSH agent if one is running
But if I run ssh-add -l in the devcontainer, it responds with Could not open a connection to your authentication agent., and of course starting one (with eval "$(ssh-agent -s)") only starts an agent that doesn't know about my private key.
What am I missing?
I had basically the same issue: running Windows 11 with WSL2, my VS Code devcontainer wouldn't show any SSH keys (running ssh-add -l inside the container returned an empty list), despite Git being configured on my host machine with working SSH keys.
For me, there were 3 separate instances of ssh-agent on my machine:
WSL2
Git Bash
Windows host 🠆 This is the one VSCode is forwarding to the devcontainer
My existing SSH keys were set up inside Git Bash (as per GitHub's instructions), so running ssh-add -l only ever showed my keys from inside a Git Bash terminal, nowhere else.
However, as explained in the previous answer, digging through the devcontainer startup logs shows that VS Code forwards only the host machine's ssh-agent; it doesn't look at the WSL2 or Git Bash ones.
Solution: follow the Microsoft docs page below. You need to enable an "Optional Feature" in Windows, then run a few commands in PowerShell (as admin) to activate the ssh-agent service. With this set up, the ssh-agent/ssh-add commands also work from a regular CMD terminal.
You can use these with the usual keygen commands etc. to generate and add new keys on the host (I just ssh-add'ed the same keys originally generated by Git Bash). The added keys should immediately be detected by ssh-add -l inside the container.
https://learn.microsoft.com/en-us/windows-server/administration/openssh/openssh_keymanagement
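For reference, the relevant steps on that page boil down to roughly the following, run from an elevated PowerShell prompt (a sketch; check the linked docs for the exact current commands):

# make the built-in OpenSSH agent service start automatically, and start it now
Get-Service ssh-agent | Set-Service -StartupType Automatic
Start-Service ssh-agent
# load your key into the Windows agent so the forwarded agent can see it
ssh-add $env:USERPROFILE\.ssh\id_ed25519

The key path is just an example; point ssh-add at whichever key you generated (or are reusing from Git Bash).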
I tried many things but they did not work. Finally, after the devcontainer is created, I note down the container name and copy the private and public keys into the container using the docker cp command.
syntax:
docker cp <sourcefile> container_id:/dir
Copy both private and public key:
docker cp /root/.ssh/id_ed25519 eloquent_ritchie:/root/.ssh/
docker cp /root/.ssh/id_ed25519.pub eloquent_ritchie:/root/.ssh/
Change the permissions of the private key so that you can do Git operations:
docker exec eloquent_ritchie chmod 600 /root/.ssh/id_ed25519
eloquent_ritchie is a sample container name; your container name will differ, so use yours.
Then I was able to do Git operations inside the devcontainer.
If you rebuild your container, you need to copy the files into the devcontainer again.
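If you don't want to look up the container name by hand each time, something like this may work (a sketch; it assumes the dev container carries the devcontainer.local_folder label that recent versions of the extension apply, and that your keys live under ~/.ssh):

# find the running dev container and copy the keys into it
CONTAINER=$(docker ps --filter "label=devcontainer.local_folder" --format '{{.Names}}' | head -n 1)
docker cp ~/.ssh/id_ed25519 "$CONTAINER":/root/.ssh/
docker cp ~/.ssh/id_ed25519.pub "$CONTAINER":/root/.ssh/
docker exec "$CONTAINER" chmod 600 /root/.ssh/id_ed25519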
I also had quite a lot of trouble getting this to work. The following steps might help with troubleshooting:
Check that ssh-agent is running on your host and the key is added:
Run ssh-add -l on Windows and expect to see the name of your key.
Check that VSCode forwards the socket
Search for ssh-agent in the startup log. I had the message:
ssh-agent: SSH_AUTH_SOCK in container (/tmp/vscode-ssh-auth-a56c4b60c939c778f2998dee2a6bbe12285db2ad.sock) forwarded to local host (\\.\pipe\openssh-ssh-agent).
So it seems that VSCode is directly forwarding the Windows SSH agent here (and not an SSH agent running in your WSL).
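Once you see that line in the log, you can confirm inside the dev container's terminal that the forwarded socket is the one being used (a quick sketch):

echo $SSH_AUTH_SOCK   # should print the /tmp/vscode-ssh-auth-....sock path from the log
ssh-add -l            # should list the keys held by the Windows host's agent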

How to run multiple VS Code instances to use different identities for interacting with a remote git repository while working with Remote Containers?

Sorry for the long post.
I have a VS Code Remote Development setup using containers. I have multiple user accounts on the same Git server, and I use key-based authentication to interact with it. I'm using Ubuntu 18.04 LTS on my host machine and Debian Buster in the containers.
Git Server - git.server.com
Git urls
- git@git.server.com:repo1.git
- git@git.server.com:repo2.git
Repo1 is used by User1 account
Repo2 is used by User2 account
ssh-key for User1 - ~/.ssh/id_ed25519_user1
ssh-key for User2 - ~/.ssh/id_ed25519_user2
I have created an ssh config in ~/.ssh/config
Host user1.git.server.com
    HostName git.server.com
    User git
    IdentityFile /home/user/.ssh/id_ed25519_user1
    IdentitiesOnly yes
Host user2.git.server.com
    HostName git.server.com
    User git
    IdentityFile /home/user/.ssh/id_ed25519_user2
    IdentitiesOnly yes
From a terminal on the host machine I can authenticate to the server:
as user1:
ssh git@user1.git.server.com
as user2:
ssh git@user2.git.server.com
I can launch multiple VS Code instances, and inside the VS Code terminal (without opening the folder in a remote container) I can use the above commands to authenticate.
Things get interesting once we use "Reopen in Container".
I don't have the .ssh/config file mounted inside the containers, so the above commands fail with the error:
Could not resolve hostname user1.git.server.com: Name or service not known
or
Could not resolve hostname user2.git.server.com: Name or service not known
I don't want to mount my ~/.ssh folder into the containers for obvious reasons. --mount=type=ssh is not an option either, since I'm not building a container but using one as a dev environment.
Next thing I notice is that ssh-agent forwarding is working.
ssh-add -l
If I execute the above command in a host terminal, in a VS Code terminal (folder opened locally), and in a VS Code terminal (folder opened in a remote container), all of them return the same output (the fingerprints of the keys for user1 and user2).
So I start a new ssh-agent in a host terminal before launching VS Code:
~/repo1$ eval `ssh-agent -s`
~/repo1$ ssh-add ~/.ssh/id_ed25519_user1
~/repo1$ code .
and in another terminal
~/repo2$ eval `ssh-agent -s`
~/repo2$ ssh-add ~/.ssh/id_ed25519_user2
~/repo2$ code .
In the respective host terminal and VS Code terminal (without opening the folder in a remote container) I get the desired result; I can simply use:
ssh git@git.server.com
So I don't need an ssh config file anymore.
But this does not work when the folder is opened in a Remote Container. VS Code only connects to the first ssh-agent started, so (in this case) repo1 opened in a VS Code Remote Container works perfectly with full Git support, but repo2 opened in a VS Code Remote Container does not work.
How can I tell VS Code which ssh-agent to forward to the container when reopening in a Remote Container?
Workaround
The workaround I am currently using is the default ssh-agent (Ubuntu starts one at login). This agent already contains both identities; verify by running:
ssh-add -l
Launch multiple VS Code instances as usual and switch to the Remote Container.
When I want to perform a Git operation, I do the following in a host terminal:
For repo1 (user1) operations
$ ssh-add -D
$ ssh-add ~/.ssh/id_ed25519_user1
For repo2 (user2) operations
$ ssh-add -D
$ ssh-add ~/.ssh/id_ed25519_user2
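To save some typing, the two steps can be wrapped in a small shell function on the host (a sketch of the workaround above; the function name is made up):

# switch the default agent to a single identity before doing git operations
use_identity() {
    ssh-add -D                     # drop all currently loaded keys
    ssh-add "$HOME/.ssh/$1"        # load only the requested one
}
# usage: use_identity id_ed25519_user1   (then run git commands for repo1)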
Is there any other suggested way? Are there any settings that can be added to devcontainer.json to achieve proper forwarding of the ssh-agent?
Thank you for your patience.

VSCode: Remote - SSH extension - nothing happens when trying to connect to remote server

I have recently discovered VS Code and the remote development extensions, and I want to try them out, but I can't get the damn thing to connect to my development server.
I've installed both the vscode and vscode-insiders packages by downloading the .deb packages manually on my local machine, and installed the extension itself by executing this line:
ext install ms-vscode-remote.vscode-remote-extensionpack
Afterwards my pre-configured SSH hosts get discovered fine; I have key-based auth set up and it works fine when connecting to the server from a terminal.
But when I try to connect to the server by right-clicking and selecting one of the two options, only a notification saying "Confirming hostname is reachable" pops up for a second or two and then nothing else happens.
There is no information in the "output" view, other than this line:
remote-ssh#0.42.2
I've checked and confirmed the remote server has the needed prerequisites.
Also, I see nothing in /var/log/audit/audit.log on the remote server when trying to connect, so I don't think it even tries to establish a connection.
What am I missing?
Local OS: Linux Mint 19 Tara
Remote OS: CentOS 7
I found the issue: "+" signs make the extension fail: https://github.com/microsoft/vscode-remote-release/issues/612
I have this line in my local .ssh/config
Host *+*
    ProxyCommand ssh $(echo %h | sed 's/+[^+]*$//;s/\([^+%%]*\)%%\([^+]*\)$/\2 -l \1/;s/:/ -p /') nc -q0 $(echo %h | sed 's/^.*+//;/:/!s/$/ %p/;s/:/ /')
Which allows me to connect to host b through host a like so:
ssh hosta+hostb
Removing that fixed the issue, and I now connect successfully to the remote host. It simply seems that VS Code dislikes that line.
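If you still need the jump-host behaviour after removing that block, a plainer alternative that the extension copes with is OpenSSH's ProxyJump (a sketch; hosta and hostb stand for your real host entries, and ProxyJump needs OpenSSH 7.3 or newer on the client):

Host hostb
    HostName hostb.internal.example
    ProxyJump hosta

With that, ssh hostb goes through hosta much like the old hosta+hostb trick, and Remote-SSH can connect to hostb directly.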

How can I automate deployment through multiple ssh firewalls (using PW auth)?

I'm stuck in a bit of annoying situation.
There's a chain of machines between my desktop and the production servers. Something like this:
desktop -> firewall 1 -> firewall 2 -> prod_box 1
                                    -> prod_box 2
                                    -> ...
I'm looking for a way to automate deployment to the prod boxes via ssh.
I'm aware there are a number of solutions in general, but my restrictions are:
No changes permitted to firewall 2
No config changes permitted to prod boxes (only content)
firewall 1 has a local user account for me
firewall 2 and prod are accessed as root
port 22 is the only open port between each link
So, in general the command sequence I do to deploy is:
scp archive.tar user@firewall1:archive.tar
ssh user@firewall1
scp archive.tar root@firewall2:/tmp/archive.tar
ssh root@firewall2
scp /tmp/archive.tar root@prod1:/tmp/archive.tar
ssh root@prod1
cd /var/www/
tar xvf /tmp/archive.tar
It's a bit more complex than that in reality, but that's a basic summary of the tasks.
I've put my ssh key in firewall1:/home/user/.ssh/authorized_keys, so that's no problem.
However, I can't do this for firewall2 or prod boxes.
It'd be great if I could run this (the commands above) from a shell script locally, type my password 4 times, and be done with it. Sadly I cannot figure out how to do that.
I need some way to chain SSH commands. I've spent all afternoon trying to use Python to do this and eventually gave up because the SSH libraries don't seem to support password-entry-style login.
What can I do here?
There must be some kind of library I can use to:
log in via SSH using either a key file OR a dynamically entered password
run remote shell commands through the chain of SSH tunnels
I'm not really sure what to tag this question, so I've just left it as ssh, deployment for now.
NB. It'd be great to use SSH tunnels and a deployment tool to push these changes out, but I'd still have to manually log in to each box to set up the tunnel, and that won't work anyway because of the port blocking.
I am working on Net::OpenSSH::Gateway, an extension for my other Perl module Net::OpenSSH that does just that.
For instance:
use Net::OpenSSH;
use Net::OpenSSH::Gateway;

my $gateway = Net::OpenSSH::Gateway->find_gateway(
    proxies => ['ssh://user@firewall1',
                'ssh://password:root@firewall2'],
    backend => 'perl');

for my $host (@prod_hosts) {
    my $ssh = Net::OpenSSH->new($host, gateway => $gateway);
    if ($ssh->error) {
        warn "unable to connect to $host\n";
        next;
    }
    $ssh->scp_put($file_path, $destination)
        or warn "scp for $host failed\n";
}
It requires Perl to be available on both firewalls, but no write permissions or installation of any additional software there.
Unfortunately this isn't possible to do as one shell script. I did try, but ssh's password negotiation requires an interactive terminal, which you don't get with chained ssh commands. You could do it with passwordless keys, but since that's highly insecure and you can't do it anyway, nevermind.
The basic idea is that each server sends a bash script to the next one, which is then activated and sends the next one (and so on) until it reaches the last one, which does the distribution.
However, since this requires an interactive terminal at each stage, you're going to need to follow the payload down the chain manually executing each script as you go, somewhat as you do now but with less typing.
Obviously, you will need to customise them a bit, but try these scripts:
script1.sh
#!/bin/bash
user=doug
firewall1=firewall_1
#Minimise password entries across the board.
tar cf payload1.tar script3.sh archive.tar
tar cf payload2.tar script2.sh payload1.tar
scp payload2.tar ${user}@${firewall1}:payload2.tar
ssh ${user}@${firewall1} "tar xf payload2.tar;chmod +x script2.sh"
echo "Now connect to ${firewall1} and run ./script2.sh"
script2.sh
#!/bin/bash
user=root
firewall2=firewall_2
# Minimise password entries
scp payload1.tar ${user}@${firewall2}:/tmp/payload1.tar
ssh ${user}@${firewall2} "cd /tmp;tar xf payload1.tar;chmod +x script3.sh"
echo "Now connect to ${firewall2} and run /tmp/script3.sh"
script3.sh
#!/bin/bash
user=root
hosts="prod1 prod2 prod3 prod4"
for host in $hosts
do
    # note: the commands are echoed for review; remove "echo" to actually run them
    echo scp archive.tar ${user}@${host}:/tmp/archive.tar
    echo ssh ${user}@${host} "cd /var/www; tar xvf /tmp/archive.tar"
done
It does require 3 password entries per firewall which is a bit annoying, but such is life.
Does this do you any good?

Editing remote files with Emacs using public key authentication

How can I edit files on my remote host using my local Emacs when I can access the remote host only through SSH with public key authentication? TRAMP handles normal password logins pretty well, but I can't figure out how to get it to work with key pairs. I'm using Unix/Linux on both ends.
There is no TRAMP equivalent to ssh user@host -i private-key.pem. However, if you run the shell command ssh-add private-key.pem, then ssh (and thus TRAMP) will automatically use private-key.pem for authentication. Plain ssh user@host will then work in the shell, and opening the file /user@host:~/filename.txt will work in Emacs, without prompting for a password.
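In practice that is just (a sketch; private-key.pem stands for your own key file, and an ssh-agent must already be running):

ssh-add private-key.pem    # register the key with the agent
ssh user@host              # now authenticates without -i

After that, C-x C-f /user@host:~/filename.txt opens the remote file through TRAMP.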
I don't get your question, as TRAMP works perfectly well with public-key authenticated SSH connections.
For instance, assuming you have the following config in ~/.ssh/config:
Host remotehost
    User mylogin
    Port 22
    Hostname remotehost.fqdn
and assuming you can run ssh remotehost correctly in a terminal, you can then open your remote file with TRAMP via C-x C-f /remotehost:path/to/file
If you are on Windows you can use plink with TRAMP easily. Make sure the plink binary is in your PATH, and customize the variable (M-x customize-option) tramp-default-method to plink; combined with Pageant this gets you what you want.
I'll let you read the PuTTY home page for how to configure Pageant to add your key.
There is also the plinkx method, which uses the PuTTY profile name, so when you do:
C-x C-f /putty_profile:
it will use putty_profile from your saved PuTTY profiles.
If you are using Linux, modern distros usually have the GNOME keyring (also known as Seahorse) start X with a global SSH agent. Example on my Debian box:
chmouel@lutece:~$ ps aux|grep ssh-agent
chmouel 2917 0.0 0.0 4904 552 ? Ss Aug30 0:00 /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session /usr/bin/seahorse-agent --execute x-session-manager
If you do an ssh-add (making sure you have an identity properly configured in ~/.ssh), it should ask for your passphrase and identify you for the whole X session.
If that does not happen, you probably have a problem somewhere else in your distro.
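A quick way to check whether your session actually sees that global agent (a sketch):

echo $SSH_AUTH_SOCK   # should point at the agent started with your X session
ssh-add -l            # lists loaded identities; run a plain ssh-add if it is empty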