Host key verification failed in Bitbucket pipeline deployment

Hi, I have a problem configuring a Bitbucket pipeline with SSH login to my remote server.
The error output is:
ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory
Host key verification failed
These are the steps I followed:
generate private and public keys (without a passphrase) on my server using this command: ssh-keygen -t rsa -b 4096
add the base64-encoded private key under Repository Settings -> Pipelines -> Deployments -> Staging environments
push the file "my_known_hosts", created with ssh-keyscan -t rsa myserverip > my_known_hosts, to the repository
I also tried another approach:
generate keys from Repository Settings
copy the public key to the authorized_keys file on my remote server
type the IP of my remote server in "Known hosts", click fetch, and add it
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
This is how I configure the pipeline SSH connection:
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        name: Deploy to staging
        deployment: staging
        script:
          - echo "Deploying to staging environment"
          - mkdir -p ~/.ssh
          - cat ./my_known_hosts >> ~/.ssh/known_hosts
          - (umask 077 ; echo $SSH_KEY | base64 --decode > ~/.ssh/id_rsa)
          - ssh $USER@$SERVER -p$PORT 'echo "connected to remote host as $USER"'
I've tried everything I can think of but still can't connect.
Can anyone help me?

This happens when you SSH to the server for the first time. You can skip host key checking with the option StrictHostKeyChecking=no; below is the complete command for your reference.
ssh -o StrictHostKeyChecking=no $USER@$SERVER -p$PORT 'echo "connected to remote host as $USER"'
PS: disabling host key checking is not a secure way to do this. Instead, add the server's key to your ~/.ssh/known_hosts: run ssh-keyscan host1, replacing host1 with the host you want to connect to.
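For example, a minimal sketch of the secure variant inside the pipeline script, reusing the $SERVER, $PORT, and $USER variables from the question:
# fetch the server's host key once and trust it for this build
mkdir -p ~/.ssh
ssh-keyscan -p $PORT $SERVER >> ~/.ssh/known_hosts
# the connection now passes host key verification without disabling the check
ssh $USER@$SERVER -p$PORT 'echo "connected to remote host as $USER"'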

Related

Permission denied, please try again (GitHub)

I'd like to have files automatically uploaded to my server when using the git push command. But the problem is that it stops at the keys and gives an error (Load key "/home/runner/.ssh/key": invalid format). The keys are added on the hosting, and in the GitHub repository settings too. Has anyone faced something similar? How can this problem be solved?
UPD: I fixed that error by changing how the key is written out, but the following appeared instead: now it says access denied.
Here is the updated code:
name: Deploy
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Setup key
      - run: set -eu
      - run: mkdir "$HOME/.ssh"
      - run: echo "${{ secrets.key }}" > "$HOME/.ssh/key"
      - run: chmod 600 "$HOME/.ssh/key"
      # Deploy
      - run: rsync -e "ssh -p 1022 -i $HOME/.ssh/key -o StrictHostKeyChecking=no" --archive --compress --delete . *server*:/*link*/public_html/
Error output:
Run rsync -e "ssh -p 1022 -i $HOME/.ssh/key -o StrictHostKeyChecking=no" --archive --compress --delete . *server*:*link*/public_html/
Warning: Permanently added '*server*,[*IP*]:1022' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Received disconnect from *IP* port 1022:2: Too many authentication failures
Disconnected from *IP* port 1022
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(235) [sender=3.1.3]
Error: Process completed with exit code 255.
Try changing the .ssh folder and key file permissions to:
.ssh directory: 700 (drwx------)
public key (.pub file): 644 (-rw-r--r--)
private key (id_rsa): 600 (-rw-------)
Lastly, your home directory should not be writable by the group or others (at most 755 (drwxr-xr-x)).
http://linuxcommand.org/lc3_man_pages/ssh1.html
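A quick sketch of applying those permissions, assuming the default file names (adjust if your key is called something else):
# tighten permissions so the SSH client and server accept the key files
chmod 700 ~/.ssh
chmod 644 ~/.ssh/id_rsa.pub
chmod 600 ~/.ssh/id_rsa
chmod 755 "$HOME"   # home must not be group- or world-writable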

Custom DDEV pull provider to update local database and user-generated files

I'm trying to create a custom DDEV provider to import the current database and also user-generated files from the web server.
I want to use it with TYPO3 projects, where I develop the extension locally with DDEV (because it's awesome :) ), and I want to update my local database and also the "fileadmin" files with the help of the ddev pull function.
I've read the docs (Introduction to Hosting Provider Integration), and I tested the bash commands locally within the DDEV container (ddev ssh): I'm able to connect to the remote web server, make a database dump, and transfer it to the local DDEV container.
So I added the bash commands to my custom provider .yaml file in the /provider/ folder.
Here is the current file:
environment_variables:
  DB_NAME: db_name
  DB_USER: db_user
  DB_PASSWORD: password
  HOST_IP: 11.11.11.11
  SSH_USERNAME: username
  SSH_PASSWORD: password
  SSH_PORT: 22
db_pull_command:
  command: |
    # Create the .downloads folder if it doesn't exist
    mkdir -p /var/www/html/.ddev/.downloads
    # execute the mysqldump on the remote web server via SSH
    ssh -p ${SSH_PORT} ${SSH_USERNAME}@${HOST_IP} 'mysqldump -h 127.0.0.1 -u ${DB_USER} -p ${DB_PASSWORD} ${DB_NAME} > /tmp/${DB_NAME}.sql.gz'
    # download the sql file to the ddev folder
    scp -P ${SSH_PORT} ${SSH_USERNAME}@${HOST_IP}:/tmp/${DB_NAME}.sql.gz /var/www/html/.ddev/.downloads/db.sql.gz
If I execute the pull with ddev pull my-provider, I get the following error:
Downloading database...
bash: 03: command not found
Pull failed: Failed to exec mkdir -p /var/www/html/.ddev/.downloads
I assumed that the commands would be executed just as if I ran them within the DDEV container (with ddev ssh). What am I missing?
My Environment:
TYPO3 v10.4.20
Windows 10 (WSL)
Docker Desktop 3.5.2
DDEV-Local version v1.17.7
architecture amd64
db drud/ddev-dbserver-mariadb-10.3:v1.17.7
dba phpmyadmin:5
ddev-ssh-agent drud/ddev-ssh-agent:v1.17.0
docker 20.10.7
docker-compose 1.29.2
The web server is running on Plesk.
Note: I only tried to implement the db pull command so far.
UPDATE 09.11.21:
I've gotten far enough that I'm able to update the database and also download the files. However, I'm only able to do it if I hardcode the variables. Every time I try to set them up via environment_variables: I get the following error when I run ddev pull myProvider:
Downloading database...
bash: 03: command not found
Here is my current .yaml file with the environment_variables:, which currently doesn't work. I've tested all the commands within ddev ssh, and they work if I call them manually.
environment_variables:
  DB_NAME: db_name
  DB_USER: db_user
  DB_PASSWORD: 'Password$'
  HOST_IP: 10.10.10.10
  SSH_USERNAME: username
  SSH_PORT: 21
auth_command:
  command: |
    ssh-add -l >/dev/null || ( echo "Please 'ddev auth ssh' before running this command." && exit 1 )
db_pull_command:
  command: |
    mkdir -p /var/www/html/.ddev/.downloads
    ssh -p ${SSH_PORT} ${SSH_USERNAME}@${HOST_IP} "mysqldump -h 127.0.0.1 -u ${DB_USER} -p'${DB_PASSWORD}' ${DB_NAME} > /tmp/${DB_NAME}.sql"
    scp -P ${SSH_PORT} ${SSH_USERNAME}@${HOST_IP}:/tmp/${DB_NAME}.sql /var/www/html/.ddev/.downloads/db.sql
    gzip -f /var/www/html/.ddev/.downloads/db.sql
files_pull_command:
  command: |
    scp -P ${SSH_PORT} -r ${SSH_USERNAME}@${HOST_IP}:/path/to/public/fileadmin/user_upload /var/www/html/.ddev/.downloads/files
Am I declaring the variables the wrong way? Or what is it that I'm missing?
For anyone who has trouble connecting via SSH without a password prompt, you can run the following commands:
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 username@host
Afterwards you should be able to connect without a password prompt. Try the following: ssh -p 22 username@host
Before you try ddev pull, you have to execute ddev auth ssh.
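Put together, the sequence looks roughly like this (a sketch; my-provider stands for whatever your provider file is called):
# load your local SSH keys into DDEV's ssh-agent, then run the pull
ddev auth ssh
ddev pull my-provider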
Thanks to @rfay for pointing me in the right direction.
The problem was that my password contained a special character (not a $, though) which needed to be escaped.
After escaping it correctly, like so:
environment_variables:
  DB_PASSWORD: 'Password\&\'
the ddev pull works.
I hope my .yaml file helps someone else who needs to pull from a web server.
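To illustrate the failure mode (my own minimal demo, not the provider's actual code): the provider's command block is passed through bash, so an unescaped special character in the expanded password is interpreted by the shell instead of being treated as part of the value:
# an unescaped & splits the command in two; the shell then tries to run "def"
PASSWORD='abc&def'
bash -c "echo connecting with -p${PASSWORD}"   # prints "... -pabc", then: def: command not found
# escaping the character keeps it literal
PASSWORD='abc\&def'
bash -c "echo connecting with -p${PASSWORD}"   # prints the whole value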

How to deploy with .gitlab-ci.yml in a Docker-based runner?

I installed Docker and GitLab plus a runner using this tutorial: https://frenchco.de/article/Add-un-Runner-Gitlab-CE-Docker
The problem is that when I modify the .gitlab-ci.yml to deploy to my host machine, I cannot make it work.
My .yml :
stages:
  - deploy
deploy_develop:
  stage: deploy
  before_script:
    - apk update && apk add bash && apk add openssh && apk add rsync
    - apk add --no-cache bash
  script:
    - mkdir -p ~/.ssh
    - ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    - cat ~/.ssh/id_rsa.pub
    - rsync -hrvz ~/ root@172.16.1.97:~/web_dev/www/test/
  environment:
    name: develop
And the problem is that with ssh or rsync I always get the same error message in my job:
$ rsync -hrvz ~/ root@172.16.1.97:~/web_dev/www/test/
Host key verification failed.
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(226) [sender=3.1.3]
I tried copying the ssh id_rsa and id_rsa.pub to the host; it's the same.
Could the problem be that my runner is inside a Docker container? It is strange, because I can ping my host (172.16.1.97) during the execution of the .yml. Any ideas about my problem?
It looks like you did not add the public key to authorized_keys on the host server for the deploy user.
For example, I use gitlab-ci to deploy my webapp: I added the user gitlab on my host machine, added the public key to its authorized_keys, and then I can connect to that server with ssh gitlab@IP -i PRIVATE_KEY.
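A rough sketch of that one-time host setup, with names assumed to match the example (the gitlab user and the key file path are placeholders):
# on the host machine, as root: create the deploy user and authorize the CI public key
useradd -m gitlab
mkdir -p /home/gitlab/.ssh
cat gitlab_ci_key.pub >> /home/gitlab/.ssh/authorized_keys
chmod 700 /home/gitlab/.ssh
chmod 600 /home/gitlab/.ssh/authorized_keys
chown -R gitlab:gitlab /home/gitlab/.ssh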
My gitlab-ci.yml looks like this:
deploy-app:
  stage: deploy
  image: ubuntu
  before_script:
    - apt-get update -qq
    - 'which ssh-agent || ( apt-get install -qq openssh-client )'
    - eval $(ssh-agent -s)
    - ssh-add <(cat "$DEPLOY_SERVER_PRIVATE_KEY")
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - chmod 755 ./deploy.sh
  script:
    - ./deploy.sh
where I added the private key's content as a variable on my GitLab instance (see https://docs.gitlab.com/ee/ci/variables/).
The deploy.sh looks like this:
#!/bin/bash
set -eo pipefail
scp app/docker-compose.yml gitlab@"${DEPLOY_SERVER_IP}":~/apps/${NGINX_SERVER_NAME}/
ssh gitlab@$DEPLOY_SERVER_IP "apps/${NGINX_SERVER_NAME}/app.sh update" # this just does docker-compose pull && docker-compose up in the app's directory.
Maybe this helps? It's working fine for me, and scp/ssh give more intuitive error messages than what you got from rsync in this particular case.

How to use a deploy SSH key to clone private repos using Chef 12 on AWS OpsWorks

I could clone public repos using Chef 12 on AWS OpsWorks as follows:
execute "get code" do
user "root"
cwd node['conf-cookbook']['project_root']
command "git clone #{app['app_source']['url']}"
end
but I don't know how to use a deploy SSH key to clone private repos. I have searched for a while and found a potential solution:
git node['conf-cookbook']['app_dir'] do
  repository "ext::ssh -i #{app['app_source']['ssh_key']} -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no #{app['app_source']['url']}"
  checkout_branch "master"
  action :sync
end
which did not work, giving this error message:
---- Begin output of git ls-remote "ext::ssh -i -----BEGIN RSA PRIVATE KEY----
MIIJKQIBAAKCAgEApaViIRinBrusrE....[key detail]7xAOmo3NAmqcPxdrOI+hZJHh5KRvrQPLHY
-----END RSA PRIVATE KEY----- -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no git@github.com:harrywang/app-main.git" "HEAD" ----
STDOUT:
STDERR: Warning: Identity file -----BEGIN not accessible: No such file or directory.
ssh: Could not resolve hostname rsa: Name or service not known
fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists.
Any help? Thanks!
The following works, according to @coderanger's suggestion:
application node['conf-cookbook']['app_dir'] do
  git app['app_source']['url'] do
    deploy_key app['app_source']['ssh_key']
  end
end
-i takes a path to a key file, not the actual key data itself. Use the application_git cookbook for setting up deploy keys with Chef.
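To make the path-vs-data distinction concrete outside of Chef, a minimal bash sketch (the $DEPLOY_KEY variable and /tmp path are assumptions for this example): write the key material to a file first, then point ssh's -i at that path.
# -i expects a path to a key file, so write the key material out first
printf '%s\n' "$DEPLOY_KEY" > /tmp/deploy_key
chmod 600 /tmp/deploy_key
# tell git to use that key for its ssh transport
GIT_SSH_COMMAND="ssh -i /tmp/deploy_key -o StrictHostKeyChecking=no" \
  git clone git@github.com:harrywang/app-main.git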

GitHub SSH config containing multiple SSH keys: Capistrano deployment fails saying "Repository not found"

~/.ssh/config
# User_A
Host github.com-User_A
  HostName github.com
  User git
  PreferredAuthentications publickey
  IdentityFile ~/.ssh/id_rsa
  IdentitiesOnly yes

# User_B
Host github.com-User_B
  HostName github.com
  User git
  PreferredAuthentications publickey
  IdentityFile ~/.ssh/id_rsa_user_b
  IdentitiesOnly yes

# http://serverfault.com/questions/400633/capistrano-deploying-to-different-servers-with-different-authentication-methods
Host example.com
  IdentityFile ~/.ssh_keys/example_env.pem
  ForwardAgent yes
On the local machine:
$ ssh -T git@github.com
Hi User_B! You've successfully authenticated, but GitHub does not provide shell access.
On the remote machine:
~$ ssh remote_user@example.com
[remote_user@example ~]$ ssh -T git@github.com
Hi User_A! You've successfully authenticated, but GitHub does not provide shell access.
Note:
ssh-add -l lists all the keys mentioned above.
deploy.rb contains:
set :repository, "git@User_B:<REPO_NAME>"
ssh_options[:forward_agent] = true
I am trying to deploy my application with Capistrano to an Amazon EC2 instance, for which the .pem file is already added on my local machine using ssh-add (it appears in the output of ssh-add -l). However, I am facing the following error while deploying:
** [example.com :: err] ERROR: Repository not found.
** fatal: The remote end hung up unexpectedly
Following is the full output of my cap deploy command:
$ cap bat deploy
triggering load callbacks
* executing `bat'
triggering start callbacks for `deploy'
* executing `multistage:ensure'
* executing `deploy'
* executing `deploy:update'
** transaction: start
* executing `deploy:update_code'
updating the cached checkout on all servers
executing locally: "git ls-remote git@User_B:<REPO_NAME> <BRANCH_NAME>"
command finished in 6296ms
* executing "if [ -d /srv/<APP_NAME>/shared/cached-copy ]; then cd /srv/<APP_NAME>/shared/cached-copy && git fetch -q origin && git fetch --tags -q origin && git reset -q --hard df84fadff305e1729991caddde47f6802e424d57 && git clean -q -d -x -f; else git clone -q git#User_B:<REPO_NAME> /srv/<APP_NAME>/shared/cached-copy && cd /srv/<APP_NAME>/shared/cached-copy && git checkout -q -b deploy df84fadff305e1729991caddde47f6802e424d57; fi"
servers: ["example.com"]
[example.com] executing command
** [example.com :: err] ERROR: Repository not found.
** fatal: The remote end hung up unexpectedly
command finished in 3811ms
*** [deploy:update_code] rolling back
* executing "rm -rf /srv/<APP_NAME>/releases/20130723222237; true"
servers: ["example.com"]
[example.com] executing command
command finished in 477ms
failed: "sh -c 'if [ -d /srv/<APP_NAME>/shared/cached-copy ]; then cd /srv/<APP_NAME>/shared/cached-copy && git fetch -q origin && git fetch --tags -q origin && git reset -q --hard df84fadff305e1729991caddde47f6802e424d57 && git clean -q -d -x -f; else git clone -q git#User_B:<REPO_NAME> /srv/<APP_NAME>/shared/cached-copy && cd /srv/<APP_NAME>/shared/cached-copy && git checkout -q -b deploy df84fadff305e1729991caddde47f6802e424d57; fi'" on example.com
So I guess this error is caused by a conflict between the multiple SSH keys: on the local machine User_B (who is a member of the repository) is used by default, whereas on the remote machine User_A (who does not have access to the repository) is used.
If my assumption is correct, can anybody please help me solve this? Is there any way to make agent forwarding use a specific user's config? If not, what could the solution be?
Thanks.
OK, it seems like the sequence in which keys are listed in ~/.ssh/config matters.
Initially it was:
# User_A
Host github.com-User_A
  HostName github.com
  User git
  PreferredAuthentications publickey
  IdentityFile ~/.ssh/id_rsa
  IdentitiesOnly yes

# User_B
Host github.com-User_B
  HostName github.com
  User git
  PreferredAuthentications publickey
  IdentityFile ~/.ssh/id_rsa_user_b
  IdentitiesOnly yes

# http://serverfault.com/questions/400633/capistrano-deploying-to-different-servers-with-different-authentication-methods
Host example.com
  IdentityFile ~/.ssh_keys/example_env.pem
  ForwardAgent yes
Then I changed it to this:
# User_B
Host github.com-User_B
  HostName github.com
  User git
  PreferredAuthentications publickey
  IdentityFile ~/.ssh/id_rsa_user_b
  IdentitiesOnly yes

# User_A
Host github.com-User_A
  HostName github.com
  User git
  PreferredAuthentications publickey
  IdentityFile ~/.ssh/id_rsa
  IdentitiesOnly yes

# http://serverfault.com/questions/400633/capistrano-deploying-to-different-servers-with-different-authentication-methods
Host example.com
  IdentityFile ~/.ssh_keys/example_env.pem
  ForwardAgent yes
But after doing that I didn't restart the machine, so the changes were not in effect.
This morning, after I started my machine (having posted the above problem), I found that it is working:
On the local machine:
$ ssh -T git@github.com
Hi User_B! You've successfully authenticated, but GitHub does not provide shell access.
On the remote machine:
$ ssh -T git@github.com
Hi User_B! You've successfully authenticated, but GitHub does not provide shell access.
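As a side note (my addition, not from the original post): ssh -G prints the effective configuration for a given host, so you can check which IdentityFile wins for each alias without running a full deploy:
# show which identity file applies to the bare host
ssh -G github.com | grep -i identityfile
# and confirm each alias authenticates as the expected user
ssh -T git@github.com-User_A   # should greet User_A
ssh -T git@github.com-User_B   # should greet User_B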
Hope this helps somebody else who faces a similar problem.
Thanks.