How to copy a directory from one host to another host? - perl

I want to copy a directory from one host to another host using SCP.
I tried the following syntax:
my $src_path="/abc/xyz/123/";
my $BASE_PATH="/a/b/c/d/";
my $scpe = Net::SCP::Expect->new(host=> $host, user=>$username, password=>$password);
$scpe->scp -r($host.":".$src_path, $dst_path);
I am getting an error like "no such file or directory". Can you help in this regard?

According to the example given in the manpage, you don't need to repeat the host in the call if you already passed it as an option.
From http://search.cpan.org/~djberg/Net-SCP-Expect-0.12/Expect.pm:
Example 2 - uses constructor, shorthand scp:
my $scpe = Net::SCP::Expect->new(host=>'host', user=>'user', password=>'xxxx');
$scpe->scp('file','/some/dir'); # 'file' copied to 'host' at '/some/dir'
Besides, is this "-r" a typo? If you want to copy recursively, you need to set recursive => "yes" in the options hash.
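Putting both points together, a minimal corrected sketch (using the variable names from the question's call, and following the shorthand pattern of Example 2) could look like this:
use Net::SCP::Expect;

# Give the host once, in the constructor, and request recursive copying there
# as well, instead of trying to pass -r to the scp() call.
my $scpe = Net::SCP::Expect->new(
    host      => $host,
    user      => $username,
    password  => $password,
    recursive => 1,          # any true value enables the recursive copy
);

# With the host set in the constructor, no "host:" prefix is needed in the paths.
$scpe->scp($src_path, $dst_path);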


Passing parameters to puppet manifest via command line

I have been searching for an answer to this question with no luck: is there a way to pass parameters into Puppet manifests when running the 'apply' command, similar to the way you pass parameters when running a UNIX script on the command line?
The suggestions I see mention either keeping variables at the top of the manifest for use later, or storing them in a Hiera file. But neither really answers the question I am posing.
Any guidance on how to do this would be greatly appreciated.
Edit:
An example of what I have been doing is:
$doc_root = "/var/www/example"

exec { 'apt-get update':
  command => '/usr/bin/apt-get update'
}

package { 'apache2':
  ensure  => "installed",
  require => Exec['apt-get update']
}

file { $doc_root:
  ensure => "directory",
  owner  => "www-data",
  group  => "www-data",
  mode   => 644
}

file { "$doc_root/index.html":
  ensure  => "present",
  source  => "puppet:///modules/main/index.html",
  require => File[$doc_root]
}
As you can see, the variable is hardcoded at the top. While I am trying to use the variable in the same way, I need to be able to pass its value in when running the apply command.
Using lookup functions in conjunction with hiera.yaml files doesn't fulfil my requirements for the same reason.
The only workaround I can think of is to create a UNIX script that accepts parameters, saves those values in a YAML file, and then executes the .pp file.
But I'm hoping that puppet has a way to do this directly.
The common procedure for passing variables into a classless manifest for use with the puppet apply subcommand is to assign the value to a Facter fact from the CLI and then resolve its value inside the manifest. You would begin by removing the hardcoded variable doc_root from the head of the manifest. Then, you would convert the variable into a fact like:
file { $facts['doc_root']:
  ...
file { "${facts['doc_root']}/index.html":
  ...
  require => File["${facts['doc_root']}"] # interpolation required due to the Puppet DSL's inability to resolve a hash value as a first-class expression
You would then pass the Facter value from the puppet apply subcommand like:
FACTER_doc_root=/var/www/example puppet apply manifest.pp
Note this also causes FACTER_doc_root to be temporarily set as an environment variable as a side effect.

Owncloud "Add as trusted domain" button fails

The 'Add as a trusted domain' button didn't do anything before; now it takes me to an 'Error 404' page.
I can set the domain on the ownCloud box by editing the file config.php, and I have done so, but I still do not understand why the button doesn't work.
You can manually override this button:
Go to the config directory inside your installation folder.
Open config.php and add an entry like this:
'trusted_domains' =>
  array (
    0 => 'localhost',
    1 => 'yourdomain.com'
  ),
I just started working with ownCloud myself and happened upon a similar issue when I rebooted my BananaPi. The BPi got assigned a new IP on my network and the only trusted IP was the original one. I wanted to see how I could allow more trusted domains or IPs; a quick search shows no wildcard options. Since I often add and remove devices from my network, I wanted to add a range, like 192.168.0.1 to 192.168.0.254.
Since config.php is simply included and can still run code (rather than being a static format like XML), we can build an array really quickly.
config.php
<?php
$local_ips = array();
$base = "192.168.0.";
for ($i = 1; $i < 255; $i++) {
    array_push($local_ips, $base . $i);
}

$CONFIG = array(
    // Other config items ...
    'trusted_domains' => $local_ips,
    // More config items...
);
This will create an array of IPs that can then be used as trusted domains. $base is the first three octets of your private IP subnet, including the trailing dot: if you use 192.168.0.0/24 or 10.0.1.0/24, $base would be "192.168.0." or "10.0.1.". The limit in the for() loop should match your network size.
You must include http or https before the domain. The general form is:
http://domain:port
or
https://domain:port
For example:
http://10.0.0.1:8000
or
https://10.0.0.1:8000

Change configuration parameters from command-line or programmatically

How can I change settings in pg_hba.conf and postgresql.conf, either from the command line or programmatically (especially from fabric or fabtools)?
I already found set_config, but that does not seem to work for parameters which require a server restart. The parameters to change are listen_addresses in postgresql.conf and a new line in pg_hba.conf, so connections from our sub-network will be accepted.
This is needed to write deployment scripts using fabric. It is not an option to copy template-files which then override the existing *.conf files, because the database server might be shared with other applications which bring their own configuration parameters. Thus, the existing configuration must be altered, not replaced.
Here is the currently working solution, incorporating the hint from a_horse_with_no_name. I paste a snippet from our fabfile.py (it uses require from fabtools, and it runs against Ubuntu):
db_name = env.variables['DB_NAME']
db_user = env.variables['DB_USER']
db_pass = env.variables['DB_PASSWORD']

# Require a PostgreSQL server.
require.postgres.server(version="9.4")
require.postgres.user(db_user, db_pass)
require.postgres.database(db_name, db_user)

# Listen on all addresses - use the firewall to block inadequate access.
sudo(''' psql -c "ALTER SYSTEM SET listen_addresses='*';" ''', user='postgres')

# Download the remote pg_hba.conf to a temp file.
tmp = tempfile.NamedTemporaryFile()
with open(tmp.name, "w") as f:
    get("/etc/postgresql/9.4/main/pg_hba.conf", f, use_sudo=True)

# Define the necessary line in pg_hba.conf.
hba_line = "host all all {DB_ACCEPT_IP}/0 md5".format(**env.variables)

# Search for hba_line in the existing pg_hba.conf.
with open(tmp.name, "r") as f:
    for line in f:
        if hba_line in line:
            found = True
            break
    else:
        found = False

# If it does not exist, append it and upload the modified pg_hba.conf to the remote machine.
if not found:
    with open(tmp.name, "a") as f:
        f.write(hba_line)
    put(f.name, "/etc/postgresql/9.4/main/pg_hba.conf", use_sudo=True)

# Restart the postgresql service, so the changes take effect.
sudo("service postgresql restart")
The aspect I don't like about this solution is that if I change DB_ACCEPT_IP, it will just append a new line and not remove the old one. I am sure a cleaner solution is possible.

Not able to use 'copy_perm' option in Net::SFTP::Foreign module

I want to copy a file from a remote host to the local host while preserving its file permissions, hence I tried to use the 'copy_perm' option as per the documentation of Net::SFTP::Foreign, as shown below:
my $sftp = Net::SFTP::Foreign->new(
    host      => $host,
    key_path  => $ssh_key_path,
    copy_perm => 1,
    more      => [ -o => 'Compression yes' ]
);
But I am getting the error below:
Invalid option 'copy_perm' or bad combination of options at test.pl at line 101.
Line 101 is the Net::SFTP::Foreign constructor call shown above.
Did I miss anything, or has anyone faced the same issue before?
That's because copy_perm isn't an option for the new method. You use it in get and put.
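For illustration, here is a minimal sketch of passing copy_perm to get instead, reusing $host and $ssh_key_path from the question; the remote and local file paths are hypothetical placeholders.
use Net::SFTP::Foreign;

my $sftp = Net::SFTP::Foreign->new(
    host     => $host,
    key_path => $ssh_key_path,
    more     => [ -o => 'Compression yes' ],
);
$sftp->die_on_error("Unable to establish SFTP connection");

# copy_perm is a per-transfer option: pass it to get (or put), not to new.
$sftp->get('/remote/path/file.txt', '/local/path/file.txt', copy_perm => 1)
    or die "get failed: " . $sftp->error;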

Perl SCP ERROR (Asking to Continue?)

Here is what I am doing:
my $username = "user";
my $password = "pass";
my $host     = "xxx.xxx.xxx.xxx";

my $scpe = Net::SCP::Expect->new(
    user      => $username,
    password  => $password,
    preserve  => 1,
    recursive => 1,
    verbose   => 1,
    auto_yes  => 1
);
$scpe->scp("$file", "$host:./drop/drop.txt");
When I run this code there is no error. I am on a Unix box, $file is in my directory and has full permissions, and I have also changed the directory to temp on the Unix box. But when somebody else runs this code they get:
Problem performing scp: Are you sure
you want to continue connecting
(yes/no)? at scp.pl line 242
I am very confused about why this is happening, as I never get this error myself.
Short answer:
Raise the timeout_auto value:
my $scpe = Net::SCP::Expect->new(
    user         => $username,
    password     => $password,
    preserve     => 1,
    recursive    => 1,
    verbose      => 1,
    timeout_auto => 10,   # for example - 5 should probably be plenty
    auto_yes     => 1
);
Long answer:
The
problem performing scp
part is what Net::SCP::Expect prepends to the literal error message it gets from scp itself, so in this case:
Are you sure you want to continue
connecting (yes/no)?
This usually happens because the host scp is connecting to is not yet in the local known_hosts file.
You should set auto_yes to 1 if you want to avoid this error, as the CPAN documentation for Net::SCP::Expect explains, but I see you're already doing that.
If that doesn't help, consider raising the timeout_auto value. It defaults to 1 second, but if it takes longer for scp to pose the 'are you sure' question (because, for example, the DNS lookup of the host takes longer), that might not be enough.