systemctl services and ssh - github

I have a simple bash script that makes a call to a git repository on github (/home/user/simple_git.sh):
#!/bin/bash
# Change to the Git repository
cd /home/user/git/a_git_repo
remote=$(
git ls-remote -h origin master |
awk '{print $1}'
)
local=$(
git rev-parse HEAD
)
printf "Local : %s\nRemote: %s\n" "$local" "$remote"
It gives the following output:
Local : a10dc1d7d30ed67ed1e514a3c1ffc5a824cea14b
Remote: a10dc1d7d30ed67ed1e514a3c1ffc5a824cea14b
Git authentication is done via SSH keys; the following is in my .bashrc:
# ssh
eval `ssh-agent -s`
ssh-add
The script runs just fine as my user as well as with sudo (preserving the user environment), i.e.
~/simple_git.sh
or
sudo -E ~/simple_git.sh
However, I've not yet found a way to run the script as a service (/etc/systemd/user/simple_git.service or /etc/systemd/system/simple_git.service):
[Unit]
Description=TestScript
[Service]
Type=simple
ExecStart=/home/user/simple_git.sh
I've tried running the systemctl command with the --user option as well as modifying visudo to include the line
Defaults env_keep += SSH_AUTH_SOCK
but to no avail. Every time I check the status of the job:
Feb 21 23:16:00 alarmpi systemd[484]: Started TestScript.
Feb 21 23:16:01 alarmpi simple_git.sh[15255]: Permission denied (publickey).
Feb 21 23:16:01 alarmpi simple_git.sh[15255]: fatal: Could not read from remote repository.
Feb 21 23:16:01 alarmpi simple_git.sh[15255]: Please make sure you have the correct access rights
Feb 21 23:16:01 alarmpi simple_git.sh[15255]: and the repository exists.
Feb 21 23:16:01 alarmpi simple_git.sh[15255]: Local : a10dc1d7d30ed67ed1e514a3c1ffc5a824cea14b

systemd does not run the service with the environment variables from your session. I would recommend one of the following:
Use git over https (instead of ssh), which will not require key authentication.
Create an unprotected "deploy key" in the standard location (~alarmpi/.ssh/id_rsa), which git will pick up automatically without ssh-agent.
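A third option is to point the unit at the agent socket directly, provided the agent is bound to a fixed socket path. A minimal sketch, assuming uid 1000 and an agent already listening on that socket (both assumptions, not from the question):
[Unit]
Description=TestScript

[Service]
Type=simple
# Assumption: an ssh-agent is already listening on this socket path
Environment=SSH_AUTH_SOCK=/run/user/1000/ssh-agent.socket
ExecStart=/home/user/simple_git.sh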

At the time, working-practice policy made it impossible to use https to connect to GitHub. While the idea of an unprotected deploy key is useful, I ended up using systemctl --user import-environment (wiki.archlinux.org/index.php/Systemd/User) to manage the issue.
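For reference, the import-environment approach looks roughly like this, run from the login session where the agent was started (a sketch; the unit name matches the one above):
# in the user session where ssh-agent is running
eval $(ssh-agent -s)
ssh-add
# make the agent variables visible to the systemd user manager
systemctl --user import-environment SSH_AUTH_SOCK SSH_AGENT_PID
# now the user service can reach the agent
systemctl --user start simple_git.service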

Related

custom DDEV pull provider to update local database and user generated files

I'm trying to create a custom DDEV provider to import the current database and the user-generated files from the web server.
I want to use it with TYPO3 projects, where I develop the EXT locally with DDEV (because it's awesome :) ), and I want to update my local database and the "fileadmin" files with the help of the ddev pull function.
I've read the docs (Introduction to Hosting Provider Integration), tested the bash commands within the DDEV container (ddev ssh), and I'm able to connect to the remote web server, create a database dump, and transfer it to the local DDEV container.
So I added the bash commands to my custom provider .yaml file in the providers/ folder.
Here is the current file:
environment_variables:
  DB_NAME: db_name
  DB_USER: db_user
  DB_PASSWORD: password
  HOST_IP: 11.11.11.11
  SSH_USERNAME: username
  SSH_PASSWORD: password
  SSH_PORT: 22
db_pull_command:
  command: |
    # Create the .downloads folder if it doesn't exist
    mkdir -p /var/www/html/.ddev/.downloads
    # Execute mysqldump on the remote web server via SSH
    ssh -p ${SSH_PORT} ${SSH_USERNAME}@${HOST_IP} 'mysqldump -h 127.0.0.1 -u ${DB_USER} -p ${DB_PASSWORD} ${DB_NAME} > /tmp/${DB_NAME}.sql.gz'
    # Download the SQL file into the DDEV folder
    scp -P ${SSH_PORT} ${SSH_USERNAME}@${HOST_IP}:/tmp/${DB_NAME}.sql.gz /var/www/html/.ddev/.downloads/db.sql.gz
If I execute the pull with ddev pull my-provider I get the following error:
Downloading database...
bash: 03: command not found
Pull failed: Failed to exec mkdir -p /var/www/html/.ddev/.downloads
I assumed the commands are executed just as I would run them within the DDEV container (with ddev ssh). What am I missing?
My Environment:
TYPO3 v10.4.20
Windows 10 (WSL)
Docker Desktop 3.5.2
DDEV-Local version v1.17.7
architecture amd64
db drud/ddev-dbserver-mariadb-10.3:v1.17.7
dba phpmyadmin:5
ddev-ssh-agent drud/ddev-ssh-agent:v1.17.0
docker 20.10.7
docker-compose 1.29.2
The web server is running on Plesk.
Note: I only tried to implement the db pull command so far.
UPDATE 09.11.21:
I've gotten far enough that I'm able to update the database and also download the files. However, I can only do it if I hardcode the variables. Every time I try to set up the environment_variables: I get the following error when I run ddev pull myProvider:
Downloading database...
bash: 03: command not found
Here is my current .yaml file with the environment_variables: block, which currently doesn't work. I've tested all the commands within ddev ssh, and they work when I call them manually.
environment_variables:
  DB_NAME: db_name
  DB_USER: db_user
  DB_PASSWORD: 'Password$'
  HOST_IP: 10.10.10.10
  SSH_USERNAME: username
  SSH_PORT: 21
auth_command:
  command: |
    ssh-add -l >/dev/null || ( echo "Please 'ddev auth ssh' before running this command." && exit 1 )
db_pull_command:
  command: |
    mkdir -p /var/www/html/.ddev/.downloads
    ssh -p ${SSH_PORT} ${SSH_USERNAME}@${HOST_IP} "mysqldump -h 127.0.0.1 -u ${DB_USER} -p'${DB_PASSWORD}' ${DB_NAME} > /tmp/${DB_NAME}.sql"
    scp -P ${SSH_PORT} ${SSH_USERNAME}@${HOST_IP}:/tmp/${DB_NAME}.sql /var/www/html/.ddev/.downloads/db.sql
    gzip -f /var/www/html/.ddev/.downloads/db.sql
files_pull_command:
  command: |
    scp -P ${SSH_PORT} -r ${SSH_USERNAME}@${HOST_IP}:/path/to/public/fileadmin/user_upload /var/www/html/.ddev/.downloads/files
Am I declaring the variables the wrong way? Or what am I missing?
For anyone who has trouble connecting via SSH without the password prompt, you can run the following commands:
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 username@host
Afterwards you should be able to connect without a password prompt. Try the following: ssh -p 22 username@host
Before you try ddev pull you have to execute ddev auth ssh.
Thanks to @rfay for pointing me in the right direction.
The problem was that my password contained a special character (not a $, though) which needed to be escaped.
After escaping it correctly, like so:
environment_variables:
  DB_PASSWORD: 'Password\&\'
the ddev pull works.
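To illustrate the escaping issue: the variable values end up interpolated into a bash command line, so an unescaped shell metacharacter in the password can break the command. A hypothetical example, not from the thread:
environment_variables:
  # hypothetical value: the backslash keeps '&' from being interpreted by the shell
  DB_PASSWORD: 'Secret\&Word'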
I hope my .yaml file helps someone else who needs to pull from a web server.

How to set up gsutil to run from Anacron?

As my user, gsutil works fine.
gsutil also works fine when called from crontab (as the user).
As root, gsutil says:
Caught non-retryable exception while listing gs://....: ServiceException: 401 Anonymous users does not have storage.objects.list access to bucket ...."
gsutil does not work when called from Anacron (as root), although other scripts called from Anacron run fine.
The ~/.boto file, which contains the credentials, is located in the user's HOME directory, so maybe that is causing the exception.
I tried setting BOTO_CONFIG, but it didn't change the results:
$ gsutil -D ls 2>&1 | grep config_file_list
config_file_list: ['/home/wolfv/.boto']
$ sudo gsutil -D ls 2>&1 | grep config_file_list
config_file_list: []
$ BOTO_CONFIG="/root/.boto"
$ sudo gsutil -D ls 2>&1 | grep config_file_list
config_file_list: []
How to setup gsutil to run from Anacron?
$ gsutil -D
gsutil version: 4.22
checksum: 2434a37a663d09ae21d1644f64ce60ca (OK)
boto version: 2.42.0
python version: 2.7.13 (default, Jan 12 2017, 17:59:37) [GCC 6.3.1 20161221 (Red Hat 6.3.1-1)]
OS: Linux 4.9.11-200.fc25.x86_64
multiprocessing available: True
using cloud sdk: True
config path: /home/wolfv/.boto
gsutil path: /home/wolfv/Downloads/google-cloud-sdk/platform/gsutil/gsutil
compiled crcmod: True
installed via package manager: False
editable install: False
Command being run: /home/wolfv/Downloads/google-cloud-sdk/platform/gsutil/gsutil -o GSUtil:default_project_id=redacted -D
config_file_list: ['/home/wolfv/.config/gcloud/legacy_credentials/redacted/.boto', '/home/wolfv/.boto']
config: [('debug', '0'), ('working_dir', '/mnt/pyami'), ('https_validate_certificates', 'True'), ('debug', '0'), ('working_dir', '/mnt/pyami'), ('content_language', 'en'), ('default_api_version', '2'), ('default_project_id', 'redacted')]
UPDATE_1
export BOTO_CONFIG worked for the terminal:
$ sudo -s
[root] # export BOTO_CONFIG=/home/wolfv/.boto
[root] # gsutil -D ls 2>&1 | grep config_file_list
config_file_list: ['/home/wolfv/.boto']
[root] # vi /root/.bashrc
add this line to the end of .bashrc:
export BOTO_CONFIG=/home/wolfv/.boto
exit
Open a new terminal and test the new BOTO_CONFIG from .bashrc:
$ sudo -s
[root] # gsutil -D ls 2>&1 | grep config_file_list
config_file_list: ['/home/wolfv/.boto']
exit
Unfortunately, exporting BOTO_CONFIG in /root/.bashrc did not help Anacron call gsutil.
The backup log shows that Anacron called the backup script, and the backup script's call to gsutil failed.
Does it matter which initialization script sets BOTO_CONFIG?
To make the path permanently available to Anacron (root), in which file should BOTO_CONFIG be set?
/etc/profile
/root/.bash_profile
/root/.bashrc
UPDATE_2
My credentials are now invalid, probably from some change I made.
Here is my attempt at houglum's suggestions for BOTO_CONFIG.
First authorize login to get that out of the way:
$ gcloud auth login
Your browser has been opened to visit:
https://accounts.google.com/o/oauth2/auth?redirect_uri=http%3A%2F%2Flocalhost%3A8085%2F&prompt=select_account&response_type=code&client_id=redacted.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Faccounts.reauth&access_type=offline
Created new window in existing browser session.
WARNING: `gcloud auth login` no longer writes application default credentials.
If you need to use ADC, see:
gcloud auth application-default --help
You are now logged in as [redacted].
Your current project is [redacted]. You can change this setting by running:
$ gcloud config set project PROJECT_ID
Defining BOTO_CONFIG inline does not work:
$ BOTO_CONFIG=/home/wolfv/.boto gsutil ls
Your credentials are invalid. Please run
$ gcloud auth login
Exporting BOTO_CONFIG does not work:
$ export BOTO_CONFIG=/home/wolfv/.boto; gsutil ls
Your credentials are invalid. Please run
$ gcloud auth login
Sourcing bashrc does not work:
$ ls /home/wolfv/.bashrc
/home/wolfv/.bashrc
$ . /home/wolfv/.bashrc; gsutil ls
Your credentials are invalid. Please run
$ gcloud auth login
UPDATE_3
My credentials work if I remove them from .boto and use gcloud auth login instead (based on the message "Your credentials are invalid. Please run $ gcloud auth login"):
$ gcloud auth login redacted@email.com
WARNING: `gcloud auth login` no longer writes application default credentials.
If you need to use ADC, see:
gcloud auth application-default --help
You are now logged in as [redacted@email.com].
Your current project is [redacted-123]. You can change this setting by running:
$ gcloud config set project PROJECT_ID
After using auth login, gsutil works from the terminal:
$ gsutil ls
gs://redacted/
gs://redacted/
gs://redacted/
And the backup script that calls gsutil also works from the terminal:
$ ~/scripts/backup_to_gcs/backup_to_gcs.sh
backup_to_gcs.sh in progress ...
backup_to_gcs.sh completed successfully
However, backup_to_gcs.sh fails when called from crontab.
How to run gsutil from crontab?
UPDATE_4
This is in my anacron file:
1 10 anacron_test_id BOTO_PATH=/home/wolfv/.config/gcloud/legacy_credentials/wolfvolpi@gmail.com/.boto:/home/wolfv/.boto /home/wolfv/scripts/backup_to_gcs/backup_to_gcs.sh
anacron runs the backup_to_gcs.sh script as expected, but the backup fails.
When the backup_to_gcs.sh script is called from the command line, it works fine.
Probably because gsutil runs as my user but not as root:
$ gsutil ls
gs://wolfv/
gs://wolfv-test-log/
gs://wolfv2/
gs://wolfvtest/
$ BOTO_PATH=/home/wolfv/.config/gcloud/legacy_credentials/wolfvolpi@gmail.com/.boto:/home/wolfv/.boto gsutil ls
gs://wolfv/
gs://wolfv-test-log/
gs://wolfv2/
gs://wolfvtest/
$ sudo BOTO_PATH=/home/wolfv/.config/gcloud/legacy_credentials/wolfvolpi@gmail.com/.boto:/home/wolfv/.boto gsutil ls
sudo: gsutil: command not found
$ sudo gsutil ls
sudo: gsutil: command not found
Two days ago root was able to run gsutil.
Since then I used dnf history rollback to uninstall a different piece of software.
Could that have affected gsutil authentication?
UPDATE_5
I followed the instructions on https://cloud.google.com/storage/docs/authentication#gsutilauth
USING SERVICE ACCOUNT
$ gcloud auth activate-service-account --key-file=/home/wolfv/REDACTED.json
Activated service account credentials for: [REDACTED@appspot.gserviceaccount.com]
But still, root could not run gsutil:
$ sudo gsutil ls
sudo: gsutil: command not found
$ gsutil ls -la gs://wolfvtest/test_lifecycle/
CommandException: You have multiple types of configured credentials (['Oauth 2.0 User Account', 'OAuth 2.0 Service Account']), which is not supported. One common way this happens is if you run gsutil config to create credentials and later run gcloud auth, and create a second set of credentials. Your boto config path is: ['/home/wolfv/.boto', '/home/wolfv/.config/gcloud/legacy_credentials/my-project@appspot.gserviceaccount.com/.boto']. For more help, see "gsutil help creds".
The help refers to a page that no longer mentions "auth": https://developers.google.com/cloud/sdk/gcloud/#gcloud.auth
So I have one too many credentials:
$ gsutil -D
...
config_file_list: ['/home/wolfv/.boto', '/home/wolfv/.config/gcloud/legacy_credentials/my-project@appspot.gserviceaccount.com/.boto']
Are any of these credentials used by root (for anacron)?
They are not in the root directory.
Should the credentials needed for anacron be in the root directory?
UPDATE_6
I tried again after installing Fedora 26; see How to authorize root to run gsutil?
When you execute BOTO_CONFIG=<value> in the shell, you're not actually defining an environment variable, but rather a local shell variable (see this thread for more details). You want to either define the variable inline with the command:
BOTO_CONFIG=/path/to/config gsutil ls
or first export the BOTO_CONFIG environment variable, then run the gsutil command:
export BOTO_CONFIG=/path/to/config; gsutil ls
EDIT:
I just noticed that in addition to your own $HOME/.boto file, you're relying on gcloud's credentials that get set up from gcloud auth login. When you run this, gcloud creates another .boto file for you, and when you run gsutil from gcloud's wrapper script, it loads that .boto file first, followed by whatever .boto file(s) you specify with either the BOTO_CONFIG or BOTO_PATH environment variable.
If you want to run as root (which the cron job does) and use both those .boto files, you'll need to instead use the BOTO_PATH variable to list them, separated by colons, also making sure the BOTO_CONFIG environment variable is not set (BOTO_CONFIG takes precedence over BOTO_PATH... the gsutil docs mention this briefly):
BOTO_PATH=/home/wolfv/.config/gcloud/legacy_credentials/REDACTED/.boto:/home/wolfv/.boto gsutil ls
EDIT 2:
1) When you get the error "sudo: gsutil: command not found", it means that the root user cannot find the gsutil executable in its PATH. You should use the absolute path to the gsutil executable instead -- from your post, it looks like this is /home/wolfv/Downloads/google-cloud-sdk/platform/gsutil/gsutil.
2) When you activate service account credentials, the gcloud wrapper for gsutil will create a separate .boto file (with a path containing legacy_credentials/myproject@appspot[...]), and prefer to use this one if it's present. It contains the attribute gs_service_key_file, while your other .boto file probably contains gs_oauth2_refresh_token -- loading multiple .boto files with multiple credentials attributes like this will result in the error you're seeing.
If you want to use gcloud to manage your auth credentials, you generally shouldn't put anything under the [Credentials] section of your $HOME/.boto file.
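Putting both points together, the backup script would call gsutil by its absolute path and select both .boto files via BOTO_PATH. A sketch using the paths from the question (untested; the bucket name is just an example from the listings above):
#!/bin/bash
# backup_to_gcs.sh (sketch): root's PATH does not include gsutil, so use the absolute path
GSUTIL=/home/wolfv/Downloads/google-cloud-sdk/platform/gsutil/gsutil
export BOTO_PATH=/home/wolfv/.config/gcloud/legacy_credentials/REDACTED/.boto:/home/wolfv/.boto
"$GSUTIL" ls gs://wolfvtest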

cannot access git repository

Any idea why this does not work:
D:\apache-tomcat-8.0.33\bin>"C:\Program Files\Git\bin\git.exe" -c core.askpass=true ls-remote -h https://tobias#wdmycloud/shares/githome/Repo.git HEAD
fatal: repository 'https://tobias#wdmycloud/shares/githome/Repo.git/' not found
I can successfully clone this repository using Eclipse.
In Eclipse, you're using SSH, while on the command line you're using HTTPS. Those are two different protocols, and the fact that one works doesn't necessarily mean the other will work, too. Try the SSH URL instead:
"C:\Program Files\Git\bin\git.exe" -c core.askpass=true ls-remote -h ssh://tobias#wdmycloud/shares/githome/Repo.git HEAD
Regarding WD MyCloud, you will find tutorials for accessing a git repo through ssh, which is fairly easy to set up, considering all you need is an sshd running.
But for an https url to work, you would need an Apache server running, properly configured to support git http-backend.
Something along the line of an httpd.conf including:
# Configure Git HTTP Backend
SetEnv GIT_PROJECT_ROOT /www/example.com/git
SetEnv GIT_HTTP_EXPORT_ALL
# Note: Serve static files directly
AliasMatch ^/(.*/objects/[0-9a-f]{2}/[0-9a-f]{38})$ /var/www/git/$1
AliasMatch ^/(.*/objects/pack/pack-[0-9a-f]{40}.(pack|idx))$ /var/www/git/$1
# Note: Serve repository objects with Git HTTP backend
ScriptAliasMatch \
"(?x)^/(.*/(HEAD | \
info/refs | \
objects/info/[^/]+ | \
git-(upload|receive)-pack))$" \
/usr/libexec/git-core/git-http-backend/$1
And that is not standard, unless you set it up beforehand yourself.
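With the backend configured, a client-side clone over HTTPS would then look something like this (a hypothetical URL; the actual path depends on how the aliases map onto GIT_PROJECT_ROOT):
git clone https://example.com/Repo.git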

How to resolve cvs error: no such system user

I am trying to set up CVS on one of our servers (let's call it JEDI). Then there is a production server called DVADER.
I am able to log in from DVADER to JEDI using the cvs login command with the production user STWAR. However, as soon as I do cvs status I get the following error:
Fatal error, aborting.
dsicnspr: no such system user
I have set up the passwd file in the CVSROOT folder for the production user STWAR account on DVADER, as shown below.
STWAR:hsfwfewiiu34de
However, there is no account for STWAR (our production id) on JEDI, the CVS server, so there is no entry for STWAR in the /etc/passwd file on JEDI. I also tried setting SystemAuth=no in the config file inside CVSROOT, but that is not working.
JEDI, the CVS server, is also used for development and has other user accounts, e.g. LIA, who are able to log in to JEDI.
Can anyone please tell me how to get rid of this error? Do I need to set up an account for STWAR on JEDI and make an entry in the /etc/passwd file?
http://blog.jdknight.me/2015/03/how-to-setup-cvs-server-pserver-on.html
sudo chown -R :cvs /opt/cvsroot
sudo chmod -R g+ws /opt/cvsroot
(if you have selinux enforcing)
semanage fcontext -a -t cvs_data_t '/opt/cvsroot(/.*)?'
restorecon -R -v /opt/cvsroot
For example:
[root@*** ~]# ls -l /usr/local/repo/CVSROOT/passwd*
-rw-rwSr--. 1 root cvs   23 Nov 30 10:51 /usr/local/repo/CVSROOT/passwd          -> no error
-rw-r--r--. 1 root cvs 1033 Dec 10 13:59 /usr/local/repo/CVSROOT/passwd.backup   -> error like in your question!
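One more detail worth checking (standard pserver behaviour, not something stated in the question): each line in CVSROOT/passwd can carry a third field that maps the CVS login to an existing system account, so the CVS user itself does not need an /etc/passwd entry. A sketch, reusing the existing development account LIA as the run-as user (an assumption):
# CVSROOT/passwd format: cvs_user:encrypted_password[:system_user]
STWAR:hsfwfewiiu34de:LIA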

GitHub SSH config containing multiple SSH keys: Capistrano deployment fails saying "Repository not found"

~/.ssh/config
# User_A
Host github.com-User_A
  HostName github.com
  User git
  PreferredAuthentications publickey
  IdentityFile ~/.ssh/id_rsa
  IdentitiesOnly yes

# User_B
Host github.com-User_B
  HostName github.com
  User git
  PreferredAuthentications publickey
  IdentityFile ~/.ssh/id_rsa_user_b
  IdentitiesOnly yes

# http://serverfault.com/questions/400633/capistrano-deploying-to-different-servers-with-different-authentication-methods
Host example.com
  IdentityFile ~/.ssh_keys/example_env.pem
  ForwardAgent yes
On the local machine:
$ ssh -T git@github.com
Hi User_B! You've successfully authenticated, but GitHub does not provide shell access.
On the remote machine:
~$ ssh remote_user@example.com
[remote_user@example ~]$ ssh -T git@github.com
Hi User_A! You've successfully authenticated, but GitHub does not provide shell access.
Note:
ssh-add -l lists all the keys mentioned above.
deploy.rb contains:
set :repository, "git@User_B:<REPO_NAME>"
ssh_options[:forward_agent] = true
I am trying to deploy my application with Capistrano to an Amazon EC2 instance, for which the .pem file is already added on my local machine using ssh-add and can be seen listed in the output of ssh-add -l. However, I am facing the following error while deploying:
** [example.com :: err] ERROR: Repository not found.
** fatal: The remote end hung up unexpectedly
Following is the full output of my cap deploy command:
$ cap bat deploy
triggering load callbacks
* executing `bat'
triggering start callbacks for `deploy'
* executing `multistage:ensure'
* executing `deploy'
* executing `deploy:update'
** transaction: start
* executing `deploy:update_code'
updating the cached checkout on all servers
executing locally: "git ls-remote git@User_B:<REPO_NAME> <BRANCH_NAME>"
command finished in 6296ms
* executing "if [ -d /srv/<APP_NAME>/shared/cached-copy ]; then cd /srv/<APP_NAME>/shared/cached-copy && git fetch -q origin && git fetch --tags -q origin && git reset -q --hard df84fadff305e1729991caddde47f6802e424d57 && git clean -q -d -x -f; else git clone -q git#User_B:<REPO_NAME> /srv/<APP_NAME>/shared/cached-copy && cd /srv/<APP_NAME>/shared/cached-copy && git checkout -q -b deploy df84fadff305e1729991caddde47f6802e424d57; fi"
servers: ["example.com"]
[example.com] executing command
** [example.com :: err] ERROR: Repository not found.
** fatal: The remote end hung up unexpectedly
command finished in 3811ms
*** [deploy:update_code] rolling back
* executing "rm -rf /srv/<APP_NAME>/releases/20130723222237; true"
servers: ["example.com"]
[example.com] executing command
command finished in 477ms
failed: "sh -c 'if [ -d /srv/<APP_NAME>/shared/cached-copy ]; then cd /srv/<APP_NAME>/shared/cached-copy && git fetch -q origin && git fetch --tags -q origin && git reset -q --hard df84fadff305e1729991caddde47f6802e424d57 && git clean -q -d -x -f; else git clone -q git#User_B:<REPO_NAME> /srv/<APP_NAME>/shared/cached-copy && cd /srv/<APP_NAME>/shared/cached-copy && git checkout -q -b deploy df84fadff305e1729991caddde47f6802e424d57; fi'" on example.com
So I guess this error is caused by a conflict between the multiple SSH keys, i.e. on the local machine User_B (who is a member of the repository) is used by default, whereas on the remote machine User_A (who does not have access to the repository) is used.
If my assumption is correct, can anybody please help me solve this problem? Is there a way to make a specific user's key be used while agent forwarding? If not, what could be the solution?
Thanks.
OK, it seems like the sequence in which the keys are listed in ~/.ssh/config matters.
Initially it was:
# User_A
Host github.com-User_A
  HostName github.com
  User git
  PreferredAuthentications publickey
  IdentityFile ~/.ssh/id_rsa
  IdentitiesOnly yes

# User_B
Host github.com-User_B
  HostName github.com
  User git
  PreferredAuthentications publickey
  IdentityFile ~/.ssh/id_rsa_user_b
  IdentitiesOnly yes

# http://serverfault.com/questions/400633/capistrano-deploying-to-different-servers-with-different-authentication-methods
Host example.com
  IdentityFile ~/.ssh_keys/example_env.pem
  ForwardAgent yes
Afterwards I did this:
# User_B
Host github.com-User_B
  HostName github.com
  User git
  PreferredAuthentications publickey
  IdentityFile ~/.ssh/id_rsa_user_b
  IdentitiesOnly yes

# User_A
Host github.com-User_A
  HostName github.com
  User git
  PreferredAuthentications publickey
  IdentityFile ~/.ssh/id_rsa
  IdentitiesOnly yes

# http://serverfault.com/questions/400633/capistrano-deploying-to-different-servers-with-different-authentication-methods
Host example.com
  IdentityFile ~/.ssh_keys/example_env.pem
  ForwardAgent yes
But after doing that I didn't restart the machine, so the changes were not yet in effect.
This morning, after I started my machine (having posted the above problem), I found that it is working:
On the local machine:
$ ssh -T git@github.com
Hi User_B! You've successfully authenticated, but GitHub does not provide shell access.
On the remote machine:
$ ssh -T git@github.com
Hi User_B! You've successfully authenticated, but GitHub does not provide shell access.
Hope this helps somebody else who faces a similar problem.
Thanks.
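For the record, ~/.ssh/config is re-read on every connection, so a restart should not be needed for config edits; what the restart most likely reset was the agent's key order, since a forwarded agent offers keys in the order they were added. A sketch of refreshing that order by hand (key paths are the ones from the config above):
# remove all keys, then re-add them so User_B's key is offered first
ssh-add -D
ssh-add ~/.ssh/id_rsa_user_b
ssh-add ~/.ssh/id_rsa
ssh-add -l   # verify the order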