Doing SFTP using pysftp but getting hostkey error - RSA

I'm trying to connect to an SFTP server using an SSH key:
#!/bin/python
import pysftp
with pysftp.Connection(host='sftp.myserver.com', username='stg', private_key='/Users/joel/.ssh/id_rsa_sftp', private_key_pass='') as sftp:
    pass  # never reached; the exception is raised while opening the connection
and I am getting:
raise SSHException("No hostkey for host %s found." % host)

You could use my pysftp fork from GitHub and add auto_add_key=True.
This will add the host key to your known_hosts file on the first connection...
import pysftp
with pysftp.Connection('hostname', username='me', password='secret', auto_add_key=True) as sftp:
    with sftp.cd('public'):             # temporarily chdir to public
        sftp.put('/my/local/filename')  # upload file to public/ on remote
        sftp.get('remote_file')         # get a remote file
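If you prefer to stay on upstream pysftp, a minimal sketch using its CnOpts option is below (the host, username, and key path are the ones from the question; verify the exact behaviour against your pysftp version):

import pysftp

# Record the server's host key once, e.g.:
#   ssh-keyscan sftp.myserver.com >> ~/.ssh/known_hosts
cnopts = pysftp.CnOpts()   # loads ~/.ssh/known_hosts by default
# cnopts.hostkeys = None   # last resort: disable host-key checking entirely

with pysftp.Connection(host='sftp.myserver.com',
                       username='stg',
                       private_key='/Users/joel/.ssh/id_rsa_sftp',
                       cnopts=cnopts) as sftp:
    print(sftp.listdir())  # simple sanity check that the connection works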

Related

Use local socket with mpd

I'm trying to switch from using the local network to a UNIX socket with mpd. To do so, I have this config file:
# Recommended location for database
db_file "~/.config/mpd/database"
# If running mpd using systemd, delete this line to log directly to systemd.
#log_file "syslog"
# The music directory is by default the XDG directory, uncomment to amend and choose a different directory
music_directory "~/Music"
# Uncomment to refresh the database whenever files in the music_directory are changed
auto_update "yes"
auto_update_depth "5"
# Uncomment to enable the functionalities
playlist_directory "~/.config/mpd/playlists"
pid_file "~/.config/mpd/pid"
state_file "~/.config/mpd/state"
#sticker_file "~/.config/mpd/sticker.sql"
bind_to_address "~/.config/mpd/socket"
restore_paused "yes"
audio_output {
    type "pipewire"
    name "PipeWire Sound Server"
}
I created the socket file at ~/.config/mpd/socket.
I also export MPD_HOST=~/.config/mpd/socket so that it is used as the default host. Nevertheless, if I run the command
mpc play, I get the error MPD error: Failed to resolve host name.
But if I run MPD_HOST=~/.config/mpd/socket mpc play it works. What am I missing?
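One thing worth double-checking (a guess, not a confirmed diagnosis): export the expanded absolute path, and put the export in a file your shell actually sources, for example:

# In ~/.bashrc (or your shell's rc file), so every new shell inherits it.
# Use the expanded path rather than relying on ~ inside the variable.
export MPD_HOST="$HOME/.config/mpd/socket"

# Then, in a new shell, verify:
echo "$MPD_HOST"   # should print /home/<you>/.config/mpd/socket
mpc status         # should now talk to mpd over the UNIX socket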

Azure build pipeline reports cannot read password

My main.tf file looks like the one below:
module "sql_vms" {
source = "git::https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq#dev.azure.com/sampleuser/my_code/_git/terraform_modules.git//compute"
rg_name = var.resource_group_name
location = module.resource_group.external_rg_location
vnet_name = var.virtual_network_name
subnet_name = var.sql_subnet_name
app_nsg = var.application_nsg
vm_count = var.count_vm
base_hostname = var.sql_host_basename
sto_acc_suffix = var.storage_account_suffix
vm_size = var.virtual_machine_size
vm_publisher = var.virtual_machine_image_publisher
vm_offer = var.virtual_machine_image_offer
vm_sku = var.virtual_machine_image_sku
vm_img_version = var.virtual_machine_image_version
username = var.username
password = var.password
}
The modules live in the same repo, which is technically not ideal, but for now I want to use the Azure repo that holds the Terraform modules and create multiple VMs from them.
I get an error like the one below:
2020-08-23T02:27:38.1439274Z [command]/usr/local/bin/terraform init -backend-config=storage_account_name=stoaccautomationnonprod -backend-config=container_name=stoacccon01nonprod -backend-config=key=nonprod.tfstate -backend-config=resource_group_name=automation -backend-config=arm_subscription_id=cc800481-b728-4d8f-81be-e80b955d346e -backend-config=arm_tenant_id=*** -backend-config=arm_client_id=*** -backend-config=arm_client_secret=***
2020-08-23T02:27:38.1441494Z Initializing modules...
2020-08-23T02:27:38.1442513Z Downloading git::https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq@dev.azure.com/sampleuser/my_code/_git/terraform_modules.git for sql_vms...
2020-08-23T02:27:38.1444113Z Error: Failed to download module
2020-08-23T02:27:38.1445408Z Could not download module "sql_vms" (main.tf:1) source code from
2020-08-23T02:27:38.1446189Z "git::https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq@dev.azure.com/sampleuser/my_code/_git/terraform_modules.git":
2020-08-23T02:27:38.1446845Z error downloading
2020-08-23T02:27:38.1447746Z 'https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq@dev.azure.com/sampleuser/my_code/_git/terraform_modules.git':
2020-08-23T02:27:38.1448669Z /usr/bin/git exited with 128: Cloning into '.terraform/modules/sql_vms'...
2020-08-23T02:27:38.1449408Z fatal: could not read Password for
2020-08-23T02:27:38.1450157Z 'https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq@dev.azure.com':
2020-08-23T02:27:38.1450684Z terminal prompts disabled
2020-08-23T02:27:38.1496541Z ##[error]Terraform command 'init' failed with exit code '1'.: Failed to download module | Failed to download module
2020-08-23T02:27:38.1786437Z ##[section]Finishing: terraform init
I was thinking of using SSH instead of HTTPS with a PAT token, but unfortunately I couldn't figure out how to add the public key on the Microsoft-hosted agent.
Please assist.
When using an SSH key to pull the Terraform modules, you need to generate the SSH key pair yourself and then add the public key to your Azure DevOps account (an SSH public key under your user settings).
Then upload the private key to the pipeline's library as a secure file and add a step that installs the SSH key on your agent (the Install SSH Key task) before terraform init runs.
You can get more details in the Azure DevOps documentation about using SSH to pull the remote Terraform module; a sketch of the install step is below.
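A rough sketch of such a step (the task shown is the standard Install SSH Key task; the secure-file name, variable names, and known-hosts value are placeholders you would replace):

# azure-pipelines.yml snippet, illustrative only
steps:
  - task: InstallSSHKey@0
    displayName: Install SSH key for Terraform modules
    inputs:
      knownHostsEntry: $(sshDevAzureKnownHosts)    # e.g. the output of: ssh-keyscan ssh.dev.azure.com
      sshPublicKey: $(terraformModulesPublicKey)
      sshKeySecureFile: terraform_modules_id_rsa   # the private key uploaded as a secure file

With the key installed, the module source would reference the repo over SSH instead of HTTPS with a PAT, something like git::ssh://git@ssh.dev.azure.com/v3/sampleuser/my_code/terraform_modules//compute (verify the exact SSH URL for your organization).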

rsync: @ERROR: auth failed on module tomcat_backup

I just can't figure out what's going on with my RSync. I'm running RSync on RHEL5, ip = xx.xx.xx.97. It's getting files from RHEL5, ip = xx.xx.xx.96.
Here's what the log (which I specified on the RSync command line) shows on xx.97 (the one requesting the files):
(local time)
2015/08/30 13:40:01 [17353] @ERROR: auth failed on module tomcat_backup
2015/08/30 13:40:01 [17353] rsync error: error starting client-server protocol (code 5) at main.c(1530) [receiver=3.0.6]
Here's what the log (which is specified in the rsyncd.conf file) shows on xx.96 (the one supplying the files):
(UTC time)
2015/08/30 07:40:01 [8836] name lookup failed for xx.xx.xx.97: Name or service not known
2015/08/30 07:40:01 [8836] connect from UNKNOWN (xx.xx.xx.97)
2015/08/30 07:40:01 [8836] auth failed on module tomcat_backup from unknown (xx.xx.xx.97): password mismatch
Here's the actual rsync.sh command called from xx.xx.xx.97 (the requester):
export RSYNC_PASSWORD=rsyncclient
rsync -havz --log-file=/usr/local/bin/RSync/test.log rsync://rsyncclient@xx.xx.xx.96/tomcat_backup/ProcessSniffer/ /usr/local/bin/ProcessSniffer
Here's the rsyncd.conf on xx.xx.xx.97:
lock file = /var/run/rsync.lock
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
[files]
name = tomcat_backup
path = /usr/local/bin/
comment = The copy/backup of tomcat from .96
uid = tomcat
gid = tomcat
read only = no
list = yes
auth users = rsyncclient
secrets file = /etc/rsyncd.secrets
hosts allow = xx.xx.xx.96/255.255.255.0
Here's the rsyncd.secrets on xx.xx.xx.97:
files:files
Here's the rsyncd.conf on xx.xx.xx.96 (the supplier of files):
Note: there is a 'cwrsync' (Windows version of rsync) successfully calling for files also (xx.xx.xx.100)
Note: yes, there is the possibility of xx.96 requesting files from xx.97. However, this is NOT actually happening.
It's commented out of the init.d mechanism.
lock file = /var/run/rsync.lock
log file = /var/log/rsync.log
pid file = /var/run/rsync.pid
strict modes = false
[files]
name = tomcat_backup
path = /usr/local/bin
comment = The copy/backup of tomcat from xx.97
uid = tomcat
gid = tomcat
read only = no
list = yes
auth users = rsyncclient
secrets file = /etc/rsyncd.secrets
hosts allow = xx.xx.xx.97/255.255.255.0, xx.xx.xx.100/255.255.255.0
Here's the rsyncd.secrets on xx.xx.xx.96:
files:files
It was something else. I had a script calling the rsync command, and that was causing the problem. The actual rsync command line was ok.
Apologies.
Here is what I went through when I got this error. My first thought was to check the rsync server log, but it was not in the place configured in rsyncd.conf. Then I checked the log printed by systemctl status rsyncd:
rsyncd[23391]: auth failed on module signaling from unknown (172.28.15.10): missing secret for user "rsync_backup"
rsyncd[23394]: Badly formed boolean in configuration file: "no # rsync daemon before transmission, change to the root directory and limited within.".
rsyncd[23394]: params.c:Parameter() - Ignoring badly formed line in configuration file: ignore errors # ignore some io error informations.
rsyncd[23394]: Badly formed boolean in configuration file: "false # if true, cannot upload file to this server.".
rsyncd[23394]: Badly formed boolean in configuration file: "false # if true, cannot download file from this server.".
rsyncd[23394]: Badly formed boolean in configuration file: "false # if true, can only list files here.".
Combined with the fact that the log configuration was not taking effect, it seems that an inline comment after a value in rsyncd.conf makes that setting invalid. So I deleted those trailing # ... comments and restarted rsyncd.
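For illustration, a minimal before/after for one of the lines the log complains about (the parameter is taken from the messages above):

# Broken: the trailing comment is parsed as part of the value, so rsync
# reports "Badly formed boolean in configuration file".
read only = false # if true, cannot upload file to this server

# Fixed: keep comments on their own lines.
# If true, clients cannot upload files to this server.
read only = false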

How can I solve Permission denied when deploying my app?

I have set up an app to deploy to my site, which is hosted on a Digital Ocean CentOS 6 server, and I am using Capistrano to deploy the app from my development machine. I have a repo set up that I push to and that my Capistrano config references when I do cap development deploy.
The issue I am having is that it throws this error:
[a7406f5e] Command: ( GIT_ASKPASS=/bin/echo GIT_SSH=/tmp/PopupHub/git-ssh.sh /usr/bin/env git ls-remote git@repo-url-is-here/popup-hub.git )
DEBUG [a7406f5e] Permission denied (publickey).
DEBUG [a7406f5e] fatal: The remote end hung up unexpectedly
In my Capfile I have got this:
# Load DSL and Setup Up Stages
require 'capistrano/setup'
# Includes default deployment tasks
require 'capistrano/deploy'
# Includes tasks from other gems included in your Gemfile
#
# For documentation on these, see for example:
#
# https://github.com/capistrano/rvm
# https://github.com/capistrano/rbenv
# https://github.com/capistrano/chruby
# https://github.com/capistrano/bundler
# https://github.com/capistrano/rails
#
# require 'capistrano/rvm'
# require 'capistrano/rbenv'
# require 'capistrano/chruby'
require 'capistrano/bundler'
require 'capistrano/rails/assets'
require 'capistrano/rails/migrations'
require 'capistrano/sitemap_generator'
# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.cap').each { |r| import r }
In my config/deploy.rb I have:
lock '3.1.0'
server "0.0.0.0.0"
set :application, "NameOfApp"
set :scm, "git"
set :repo_url, "git@the-repo-url-is-here/popup-hub.git"
# set :scm_passphrase, ""
# set :user, "deploy"
# files we want symlinking to specific entries in shared.
set :linked_files, %w{config/database.yml}
# dirs we want symlinking to shared
set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}
SSHKit.config.command_map[:rake] = "bundle exec rake" #8
SSHKit.config.command_map[:rails] = "bundle exec rails"
set :branch, ENV["REVISION"] || ENV["BRANCH_NAME"] || "master"
set :keep_releases, 20
namespace :deploy do
  desc 'Restart passenger without service interruption (keep requests in a queue while restarting)'
  task :restart do
    on roles(:app) do
      execute :touch, release_path.join('tmp/restart.txt')
      unless execute :curl, '-s -k --location localhost | grep "Pop" > /dev/null'
        exit 1
      end
    end
  end
  after :finishing, "deploy:cleanup"
  after :finishing, "deploy:sitemap:refresh"
end
after "deploy", "deploy:migrate"
after 'deploy:publishing', 'deploy:restart'
# deploy:sitemap:create #Create sitemaps without pinging search engines
# deploy:sitemap:refresh #Create sitemaps and ping search engines
# deploy:sitemap:clean #Clean up sitemaps in the sitemap path
# start new deploy.rb stuff for the beanstalk repo
Then in my config/development.rb I have got:
set :stage, :development
set :ssh_options, {
  forward_agent: true,
  password: 'thepassword',
  user: 'deployer',
}
server "0.0.0.0", user: "deployer", roles: %w{web app db}
set :deploy_to, "/home/deployer/development"
set :rails_env, 'development' # If the environment differs from the stage name
set :branch, ENV["REVISION"] || ENV["BRANCH_NAME"] || "master"
When I run cap development deploy in bash, the error shown further up happens.
Can anyone tell me why this is happening? I have carried out everything fine up to now and I have this setup on another Digital Ocean droplet.
Thanks,
I think you do not have SSH access to your remote server using your local system's SSH keys; see the quick check sketched below.
If you don't have SSH keys on your local system, generate them:
ssh-keygen -t rsa
Upload your local public key to the remote server:
cat ~/.ssh/id_rsa.pub | ssh user@hostname 'cat >> .ssh/authorized_keys'
Source: HowToGeek.com
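As a quick check (the droplet address, user, and git host below are placeholders standing in for the redacted values in the question), you can confirm each hop by hand before running Capistrano again:

# 1. Password-less login from your machine to the droplet:
ssh deployer@your-droplet-ip

# 2. With agent forwarding (forward_agent: true in the stage file), the droplet
#    should be able to run the same command Capistrano failed on:
ssh -A deployer@your-droplet-ip 'git ls-remote git@your-git-host/popup-hub.git'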
You need to set up your SSH key in Digital Ocean

Chef running git clone results in host key verification error

I am using Chef, invoked by Capistrano.
There is a directive to clone a repository using git.
git node['rails']['rails_root'] do
  repository "git@myrepo.com:/myproj.git"
  reference "master"
  action :sync
  user node['rails']['rails_user']
  group node['rails']['rails_group']
end
When it gets to this point, I get:
** [out :: 10.1.1.1] STDERR: Host key verification failed.
So, I need to add a "known_hosts" entry. No problem. But to which user? The core of my problem is that I have no idea which user is executing what commands, and if they are invoking sudo, etc.
I've run keyscan to populate the known_hosts of root, and the user I ssh in as, to no avail.
Note, this git repo is read-protected, and requires ssh key access.
Another way to solve it is the ssh_known_hosts cookbook: https://github.com/opscode-cookbooks/ssh_known_hosts
This worked for me.
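A minimal sketch of that approach (it assumes the cookbook's ssh_known_hosts_entry resource and reuses the myrepo.com host from the question; the cookbook has to be declared as a dependency):

# Hypothetical: requires the ssh_known_hosts cookbook in metadata.rb / the run list.
# Adds the host key to the system-wide known hosts, so it applies no matter
# which user ends up running git.
ssh_known_hosts_entry 'myrepo.com'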
You can use an SSH wrapper approach. Look here for details.
Briefly, do the following steps.
First, create a file in the cookbooks/COOKBOOK_NAME/files/default directory named wrap-ssh4git.sh with the following contents:
#!/usr/bin/env bash
/usr/bin/env ssh -o "StrictHostKeyChecking=no" $1 $2
Then, use the following block for your deployment:
directory "/tmp/private_code/.ssh" do
owner "ubuntu"
recursive true
end
cookbook_file "/tmp/private_code/wrap-ssh4git.sh" do
source "wrap-ssh4git.sh"
owner "ubuntu"
mode 00700
end
deploy "private_repo" do
repo "git#github.com:acctname/private-repo.git"
user "ubuntu"
deploy_to "/tmp/private_code"
action :deploy
ssh_wrapper "/tmp/private_code/wrap-ssh4git.sh"
end
The git repository will be cloned as user node['rails']['rails_user'] (via https://docs.chef.io/resource_git.html) - I assume that user's known_hosts file is the one you have to modify.
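A minimal sketch of that fix, with assumptions flagged: it guesses the home directory layout (/home/<user>), reuses the myrepo.com host from the question, and relies on ssh-keyscan being present on the node.

# Hypothetical sketch: seed the deploy user's known_hosts before the git resource runs.
# Assumes the user's home is /home/<user> and that ~/.ssh already exists there.
rails_user  = node['rails']['rails_user']
known_hosts = "/home/#{rails_user}/.ssh/known_hosts"

execute "add myrepo.com to #{rails_user} known_hosts" do
  command "ssh-keyscan myrepo.com >> #{known_hosts}"
  user rails_user
  not_if "grep -q myrepo.com #{known_hosts}"
end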
I have resolved this issue as below:
_home_dir = nil
node['etc']['passwd'].each do |user, data|
  if user.eql? node['jenkins']['username']
    _home_dir = data['dir']
  end
end

key_config = "Host *\n\tStrictHostKeyChecking no\n"

file "#{_home_dir}/.ssh/config" do
  owner node['jenkins']['username']
  group node['jenkins']['username']
  mode "0600"
  content key_config
end