There may be a much better way to do what I need altogether, so I'll give the background first and then my current (non-working) approach.
The goal is to migrate a bunch of servers from SLES 11 to SLES 12 using Ansible playbooks. The problem is that the new server and the old server are supposed to share the same NFS-mounted directory, and it has to be set up at the beginning of the playbook so that all of the other tasks can be completed. The name of the directory being created can be determined in two ways: on the old server directly, or from a MySQL query for the volume name of that old server. The new servers are named migrate-(oldservername). I tried prompting for the volume name in Ansible, but that would apply the same name to every server. Goal recap: the directory name must be determined from the old server and created on the new server.
Approach 1: I've created a Perl script that Ansible copies to the new server, which then executes the MySQL query and creates the directory itself. There are two problems with this: 1) mysql-client needs to be installed on each server, which is completely unnecessary for these servers and would have to be uninstalled after the query is run; 2) copying files and remotely executing them seems like a bad approach in general.
Approach 2: Create a version of the above, except run it on the Ansible control machine (where mysql-client is already installed) and store the values as key: value pairs in a file. Problems: 1) I cannot figure out how to determine, from within the Perl script, which hosts Ansible is running against, and would have to enter them into the script manually; 2) I cannot figure out how to get Ansible to import those values correctly from the file it creates.
Here's the relevant Perl code I have for this:
#!/usr/bin/perl
use strict;
use warnings;

my $newserver = 'migrate-old.server.com';
# strip the "migrate-" prefix to recover the old hostname
my ($mig, $oldhost) = split(/-/, $newserver, 2);
# query the volume name for the old host (-p prompts for the password)
chomp(my $volname = `mysql -u idm-ansible -p -s -N -e "select vol_name from assets.servers where hostname like '$oldhost'"`);
# write a "key: value" line that Ansible can load back in as a variable
open(my $fh, '>', 'vols.yml') or die "cannot open vols.yml: $!";
print $fh "$newserver: $volname\n";
close($fh);
My Ansible code is all over the place as I've tried and commented out a ton of things. I can share that here if it is helpful.
Approach 3: Do it completely in Ansible; basically, a MySQL query looped over each host. Problem: I have absolutely no idea how to do this, as I am way too unfamiliar with Ansible. I think this is the approach I would prefer to try, though.
What is the best approach here? How do I go about getting the right value into Ansible to create the correct directory?
Please let me know if I can clarify anything.
Will magic variables do?
Something like this:
---
- hosts: old-server
  tasks:
    - shell: "/get-my-mount-name.sh"
      register: my_mount_name

- hosts: new-server
  tasks:
    - shell: "/mount-me.sh --mount_name={{ hostvars['old-server'].my_mount_name.stdout }}"
I am trying to change the IP address assigned to a particular site in the hosts file.
For example:
# 123.123.123 www.google.com
# 456.456.456 www.google.com
I want to write a test that first reaches Google through 123.123.123 and then, as the program switches over, reaches Google through 456.456.456.
Changing servers manually means removing the # from the beginning of the corresponding line.
I do not want to use Selenium Grid with several machines, since the machines on other servers do not have the resources for it.
I want to switch this on the same machine, from within the running test code.
As the /etc/hosts file is picked up immediately by the system, without a restart, you can manipulate or even completely overwrite it during your run.
The trouble is that editing the hosts file requires root rights, and you are actually changing the behaviour of your host system. To avoid that you might consider running in a Docker environment, but if that is not possible you can do something like this with root access:
/etc/hosts file
# 123.123.123 www.google.com
# 456.456.456 www.google.com
as part of your test run:
# at start of run
sed -i.bak 's/# 123.123.123/123.123.123/g' /etc/hosts
# do other tests now
# later when stuff has changed
sed -i.bak 's/123.123.123/456.456.456/g' /etc/hosts
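If the machine has to be left as it was found, keep a pristine copy yourself, since the second sed overwrites the first .bak; a small sketch:
# before the run: keep a pristine copy
cp /etc/hosts /etc/hosts.orig
# ... run the tests with the sed edits above ...
# after the run: restore the original
cp /etc/hosts.orig /etc/hosts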
Something like this?
I am configuring BLAST+ on my Mac (OS Sierra) and am having trouble configuring the nr and nt databases that I downloaded locally. I am trying to follow NCBI's instructions here, and am getting hung up on the Configuration and Example Execution steps.
They say to change my .bash_profile so that it says:
export PATH=$PATH:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/ncbi-blast-2.6.0+/bin
That works fine. They then say to configure a path for BLASTDB "similarly", but pointing to where my database files live, so I have done this:
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb/nt.00
which specifies the exact folder that I got when I unzipped the nt tar file from their FTP. With this path, if I run the command...
blastn -query test_query.fa -db nt.00 -task blastn -outfmt "7 qseqid sseqid evalue bitscore" -max_target_seqs 5
then it runs successfully and I get results, but I am worried that these are only being checked against the nt.00 volume rather than the entire nt database, especially because if I run my test_query.fa sequence on the web BLAST, I get different results.
Also, their instructions say that the path only needs to point to the folder that contains the database folder nt.00 from the tar I unzipped, not to nt.00 itself; in my case that would just be "blastdb/" (as opposed to "blastdb/nt.00/", which contains nt.00.nhd, nt.00.nal, etc.). That makes sense, because when I am working I want to be able to run blastn against the nt database but also blastp against nr, etc., just by changing the -db flag on my command, and there shouldn't be a problem with having them all in this folder, right? But if I must specify the BLASTDB path with the nt.00 DB appended to the end, how could I ever use nr.00 in the same folder (blastdb/)? Essentially, I want to do as the instructions say and just have this:
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb/
And then depending on what database I want to use I could just say so after the -db flag on my command. But when I make the path like that above, it gives me this error:
BLAST Database error: No alias or index file found for nucleotide database [nt] in search path [/Users/LJStout::/Users/LJStout/Documents/Luke/Research/Pedulla 17-18/blast/blastdb:]
I have tried running the same blastn command from above with both "nt" and "nt.00" as the -db value, and have tried the BLASTDB path ending in "blastdb/", "blastdb/nt", and of course "blastdb/nt.00", which is the only one that runs without errors.
Here's an example of another thread I read where the OP was worried about his executions not checking the entire nt database; his problem turned out to be different from mine, however.
Thanks for your help!
This whole problem came down to having the nt.00 and nr.00 folders (the original folders produced by unzipping their respective .tar.gz archives) in the same parent folder, when it should be their contents that share a parent folder. I simply deleted the folders they came in and copied their contents over to my new, single parent. I was somewhat misled by the instructions; it was a simple mistake. Now I have one folder, blastdb/, that contains the contents of every database I plan on using, including nt, nr, and refseq.
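For reference, a shell sketch of the layout that works; the paths mirror the question, and the exact tar filenames depend on what you fetched from the NCBI FTP:
cd ~/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb
# extract every downloaded volume directly into this one folder, so all
# volumes (nt.00, nt.01, ..., nr.00, ...) share a single parent
for f in nt.*.tar.gz nr.*.tar.gz; do tar -xzf "$f"; done
# the .nal/.pal alias files tie the volumes together, so -db takes the
# bare database name once BLASTDB points at this folder
blastn -query test_query.fa -db nt -task blastn -outfmt "7 qseqid sseqid evalue bitscore" -max_target_seqs 5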
I'm using Sphinx on a Linux production server as well as a Windows dev machine running WampServer.
The index configurations in sphinx.conf each require a path setting for the output file name. Because the filesystems on the production server and dev machine are different, I have to have two lines and then comment one out depending on which server I'm using.
#path = /path/to/folder/name #LIVE
path = C:\wamp\www\site\path\to\folder\name #LOCALHOST
Since I have lots of indexes, it gets really old having to constantly comment and uncomment dozens of lines every time I need to update the file.
Using relative paths would be the ideal solution, but when I tried that I received the following error when running the indexer:
FATAL: failed to open ../folder/name.tmp.spl: Invalid argument, will not index. Try --rotate option.
Is it possible to use relative paths in sphinx.conf?
You can use relative paths, but it's kind of tricky because the various utilities will have different working directories.
E.g. on Windows the searchd service will start, IIRC, with a working directory of $WINDIR$\System32.
On Linux, via crontab, I think it keeps whatever working directory was in effect previously, so you would have to change the folder in the actual command line.
I.e. paths are not relative to the config file; they are relative to the current working directory.
Personally I use a version control system (SVN, actually) to manage it. The version from dev is always the one committed to the repository; the 'working copy' on the live server has had the paths edited to the right location. So when you 'update' to the latest file, only changes are merged, leaving the local file paths intact.
Other people use a dynamic config file. The config file can be a script (PHP/Python/Perl, etc.), but this only works on Linux, so it won't help you.
Or you can just have a 'publish' script. Basically, you edit a 'master' config file that can be freely copied to all servers, and a 'publish' script writes the appropriate local path. It can do this with a pretty simple search and replace.
<?php
// write a sphinx.conf with the correct local path prefix for this machine
if (trim(`hostname`) == 'live') {
    $path = '/path/to/folder/';
} else {
    $path = 'C:\wamp\www\site\path\to\folder\\';
}
$contents = file_get_contents('sphinx.conf.master');
$contents = str_replace('$path', $path, $contents);
file_put_contents('sphinx.conf', $contents);
Then have path = $pathname in the master config file; the replace rewrites the $path prefix, so it becomes e.g. /path/to/folder/name when you run the script on the local machine.
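A hypothetical run, assuming the script above is saved as publish.php next to sphinx.conf.master:
# regenerate sphinx.conf with the local paths, then rebuild the indexes
php publish.php
indexer --config sphinx.conf --all --rotate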
I'm running a CentOS virtual machine using Vagrant. The machine seems to run properly, but when I try to sync from Perforce I see the following error:
[vagrant#vagrant-c5-x86_64 ~]$ /perforce/p4 sync -f ...
Perforce client error:
Connect to server failed; check $P4PORT.
TCP connect to perforce.xxx.com:1666 failed.
Servname not supported for ai_socktype
I have read this http://www.ducea.com/2006/09/11/error-servname-not-supported-for-ai_socktype/ and tried to set the ports in /etc/services, but it didn't work. I am not even sure if the problem is Perforce or OS related.
Any hints?
I had this problem with a Tornado/Python app. Apparently, this can be caused by the port being interpreted as a string instead of an integer. So in my case, I needed to change my startup script to force it to be interpreted as an integer.
application = tornado.web.Application(...)
application.listen(int(port))
Are you able to enter your client? Before trying to sync the files, try to create a Perforce client:
p4 client
Maybe it's not the host:port that is the issue, but other flags in the connection string that interfere.
I personally received the exact same error, but it was a Perforce issue.
The reason is that Perforce has its own priority order when looking for its P4USER/P4PORT/... configuration:
environment variables (run export to see them);
a P4CONFIG file, if that variable is specified somewhere (e.g. a .perforce file in the current or an upper directory).
The problem was that even though Perforce finds the environment variable first, the P4CONFIG file can override it.
So my $P4PORT environment variable had connection string X, but the .perforce file had connection string Y.
Removing P4PORT from my local .perforce file solved the issue.
For example:
$~] echo $P4PORT;
rsh:ssh -2 -q -a -x -l p4ssh perforce.mydomain.com
$~] cat .perforce
# P4PORT="rsh:ssh -q -a -x -l perforce.mydomain.com /bin/true"
P4USER="my_user"
Also remember that Perforce will search for the P4CONFIG file (if one is configured) up the entire directory hierarchy, so even if you don't have the file in the current directory, it might be found in an upper directory, and there you might have a $P4PORT setting you didn't expect.
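To see which value wins and where each one comes from, p4 set annotates every variable with its source; a quick check (annotated output roughly from memory, details vary by version):
# list the effective Perforce variables and their origin
p4 set
# e.g.:
# P4PORT=perforce.mydomain.com:1666 (config)
# P4USER=my_user (config)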
Put a trailing slash, e.g.:
http://perforce.xxx.com:1666/
Instead of:
http://perforce.xxx.com:1666
In my case I got a similar error in Go; I changed my port type from string to int and it works fine.
I wrote a little CDN server that rebuilds its registry pool when new pool-content-packages are installed into that registry pool.
Instead of having each pool-content-package call the init.d of the cdn-server, I'd like to use triggers. That way it would restart the server only once at the end of an installation run, after all packages were installed.
What do I have to do to use triggers in my packages with debhelper support?
What you are looking for is dpkg-triggers.
One solution, using debhelper to build the Debian packages, is this:
Step 1)
Create file debian/<serverPackageName>.triggers (replace <serverPackageName> with name of your server package).
Step 1a)
Define a trigger that watches the directory of your pool. The content of the file would be:
interest /path/to/my/pool
Step 1b)
But you can also define a named trigger, which has to be fired explicitly (see step 3).
Content of the file:
interest cdn-pool-changed
The trigger name cdn-pool-changed is arbitrary; you can choose whatever you want.
Step 2)
Add a handler for the trigger to the file debian/<serverPackageName>.postinst (replace <serverPackageName> with the name of your server package).
Example:
#!/bin/sh
set -e
case "$1" in
configure)
;;
triggered)
#here is the handler
/etc/init.d/<serverPackageName> restart
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0
Replace <serverPackageName> with the name of your server package.
Step 3) (only for named triggers, see step 1b)
Add to every content package the file debian/<contentPackageName>.triggers (replace <contentPackageName> with the name of each content package).
Content of the file:
activate cdn-pool-changed
Use the same trigger name you defined in step 1b.
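Alternatively, instead of shipping an activate line, a content package can fire the named trigger explicitly from its maintainer script with dpkg-trigger(1); a minimal sketch for a content package's postinst:
#!/bin/sh
set -e
case "$1" in
    configure)
        # fire the named trigger; the server package's postinst is then
        # invoked once with "triggered" at the end of the installation run
        dpkg-trigger cdn-pool-changed
        ;;
esac
#DEBHELPER#
exit 0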
More Detailed Information
The best description of dpkg triggers I could find is "How to use dpkg triggers". You can get the corresponding git repository with examples here:
git clone git://anonscm.debian.org/users/seanius/dpkg-triggers-example.git
I had a need for this and read and re-read the docs many times. I think the process, or rather what goes where, is not clearly explained, so here I hope to clarify the use of Debian package triggers.
Service with Configuration Directory
A service reading its settings in a specific directory can mark that directory as being of interest.
Say I create a new service which reads settings from /usr/share/my-service/config/...
That service gets two additions:
In its debian directory I add my-service.triggers
And here are the contents:
# my-service.triggers
interest /usr/share/my-service/config
This means if any other package installs or removes a file from that directory, the trigger enters its "needs to be run" state.
In its debian directory I also add my-service.postinst
And I have a script as follows that checks whether the trigger happened and runs a process as required:
#!/bin/sh
# my-service.postinst
if [ "$1" = "triggered" ]
then
if [ "$2" = "/usr/share/my-service/config" ]
then
# this may or may not be what you need to do, but this is often
# how you handle a change in your service config files
#
systemctl restart my-service
fi
exit 0
fi
That's it.
Now packages adding extensions to your service can add their own configuration file(s) under /usr/share/my-service/config (or a directory under /etc/my-service/my-service.d/... or /var/lib/my-service/..., although that last one should be reserved for dynamic files, not files installed from a package) and dpkg automatically calls your postinst script with:
postinst triggered /usr/share/my-service/config
# where /usr/share/my-service/config is your <interest-path>
This call happens only once, after all the packages were installed, hence the advantage of having a trigger in the first place. This way each package does not need to know that it has to restart my-service, and the restart does not happen more than once, which could cause all sorts of side effects (e.g. the service tries to listen on a TCP port and gets an 'address already in use' error).
IMPORTANT: keep in mind that the postinst should include a line with #DEBHELPER#.
So you do not have to do anything special in other packages. Only make sure to install the configuration files in the correct directory and dpkg picks up from there (i.e. in my example under /usr/share/my-service/config).
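With debhelper, installing the files in the correct directory is typically just a one-line debian/<package>.install file; a hypothetical one for an extension package:
$ cat debian/other-package.install
config/extension.conf usr/share/my-service/config/
dh_install then places extension.conf under the watched directory, which is exactly what leaves the trigger in its "needs to be run" state.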
I have an extension to BIND9 called ipmgr which makes use of .ini files saved in a specific folder. It uses the files to generate DNS zones (far fewer errors that way!), and it includes support for getting Let's Encrypt certificates and settings for DMARC/DKIM. This package uses this case: a simple directory where configuration files get installed. Other packages do not need to do anything other than install files in the right place (/usr/share/ipmgr/zones, for this package).
Service without a Configuration Folder
In some (rare?) cases, you may need to trigger something in a service which is not driven by the installation of a new configuration file.
In this case, you can use an arbitrary name (it should include your package name to make sure it is unique since this name is global to the entire Debian/Ubuntu system).
To make this one work, you need three files, one of which is a trigger in the other packages.
State the Interest
As above, we have an interest. In this case, the interest is stated as a name on its own. The dpkg system distinguishes between a name and a path because a name cannot include a slash (/) character. Names are limited to ASCII characters, excluding control characters and spaces. I would suggest you stick to a-z, 0-9 and dashes (-).
# my-service.triggers
interest my-service-settings
This is useful if you cannot simply track a folder. For example, the settings could come from a network connection that a package offers once installed.
Listen for the Triggers
Again, as above, you need a postinst script in your Service Package. This captures the trigger and allows you to run a command. The script is the same, only you test for the name instead of the folder (note that you can have any number of triggers, so you could also have both: a folder as above and a special name as here).
#!/bin/sh
# my-service.postinst
if [ "$1" = "triggered" ]
then
if [ "$2" = "my-service-settings" ]
then
# this may or may not be what you need to do, but this is often
# how you handle a change in your service config files
#
systemctl restart my-service
fi
exit 0
fi
The Trigger
As mentioned above, we need a third file. An arbitrary name is not going to be triggered automatically by dpkg; it has no way to know that your other package needs to fire a trigger just like that (although the path-based case above is already fairly automated).
So in other packages, you create a trigger file which looks like this:
# other-package.triggers
activate my-service-settings
Note the name: it is the same as the interest stated above.
In other words, if the trigger needs to run for something other than just the installation of files in a given location, use a special name and add this triggers file with the activate keyword.
Other Features
I have not tested the other features of the dpkg-trigger(1) tool. There are other keywords supported in the triggers files:
interest
interest-await
interest-noawait
activate
activate-await
activate-noawait
The deb-triggers manual page has additional information about those. I am not too sure what await/noawait imply, other than that the trigger may happen at any time when noawait is used.
Automatic Trigger Added
The build system on Ubuntu (probably Debian too) automatically adds a triggers file with the following when your package includes a library:
$ cat triggers
# Triggers added by dh_makeshlibs/11.1.6ubuntu2
activate-noawait ldconfig
I suggest you exercise caution if your package includes libraries. If you have your own triggers file, I do not know whether this addition will still happen automatically.
This also shows us a special case where it wants to use the noawait. If I understand correctly, it has to run the ldconfig trigger ASAP so your commands will work as expected after the unpack. Otherwise ldd will not know anything about your newly installed library.