I am trying to change the IP address assigned to a particular site in the hosts file.
For example:
# 123.123.123 www.google.com
# 456.456.456 www.google.com
I want to write a test that first reaches Google through 123.123.123 and then, as the program runs, switches to reaching Google through 456.456.456.
Manually switching servers means removing the # from the beginning of the relevant line.
I do not want to use Selenium Grid with several machines, since the machines on the other server do not have the resources for it.
I want to change this on the same machine, from within the running code.
Since /etc/hosts is picked up immediately by the system, without a restart, you can manipulate or even completely overwrite this file during your run.
The trouble is that editing the hosts file requires 'root' rights, and you are actually changing the behaviour of your host system. To avoid that you might consider running in a Docker environment, but if that is not possible you can do something like this with root access:
/etc/hosts file
# 123.123.123 www.google.com
# 456.456.456 www.google.com
as part of your test run:
# at start of run
sed -i.bak 's/# 123.123.123/123.123.123/g' /etc/hosts
# do other tests now
# later when stuff has changed
sed -i.bak 's/123.123.123/456.456.456/g' /etc/hosts
Something like this?
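The same flow can be rehearsed without root by running it against a scratch copy first. A minimal sketch, using the example IPs from the question (note that GNU sed wants -i.bak with no space after -i; the spaced form is BSD/macOS syntax):

```shell
# scratch copy of the two candidate entries, so no root is needed
printf '# 123.123.123 www.google.com\n# 456.456.456 www.google.com\n' > /tmp/hosts.test

# at start of run: enable the first server by stripping the "# "
sed -i.bak 's/^# 123.123.123/123.123.123/' /tmp/hosts.test

# later, when stuff has changed: switch traffic to the second server
sed -i.bak 's/^123.123.123/456.456.456/' /tmp/hosts.test
```

After the second sed, the first line reads 456.456.456 www.google.com while the commented second line is untouched; once this works, point the same commands at /etc/hosts under sudo.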
Related
I am creating a two-instance Tomcat cluster using mod_jk.
The instances need to communicate with each other, so they each need to know the other's private IP address. The addresses need to be added to a workers.properties file, and also to the server.xml file. I am trying to automate this.
I have created an ec2 userdata script that uses outputs from a stack to write the IP addresses to a text file, which looks like:
10.0.75.75
10.0.75.142
(The top one is "tomcatnode1ip", the bottom one is "tomcatnode2ip".)
I can run
sed '1!d' /home/ec2-user/scripts/properties/host.properties
and it prints line 1 of host.properties, which is an IP address; I can also output that to another text file.
What I want to do is overwrite variables in workers.properties and server.xml with the IP addresses of the 2 servers.
This is done with
sed -i 's/tomcatnode1ip/tomcat1/g' /usr/share/tomcat/conf/server.xml
and
sed -i 's/tomcatnode1ip/tomcat1/g' /etc/httpd-2.4.39/modules/tomcat-connectors-1.2.46-src/conf/workers.properties
using the variables tomcat1 and tomcat2.
So basically I have two working sed scripts, and what I want is for either:
the output of the first script to feed the second script, or
nest the scripts, so that the IP address is sent directly to the variable
Are either of these possible?
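Yes, both are possible in one step with command substitution: the output of the first sed becomes a shell variable, and the second sed (run with double quotes so the variable expands) writes it into the file. A sketch against scratch files, since the real paths (/home/ec2-user/scripts/properties/host.properties, server.xml, workers.properties) only exist on the instance:

```shell
# stand-ins for host.properties and server.xml
printf '10.0.75.75\n10.0.75.142\n' > /tmp/host.properties
printf 'address="tomcatnode1ip" peer="tomcatnode2ip"\n' > /tmp/server.xml

# first sed: pick out line 1 and line 2 (the two IPs)
tomcat1=$(sed '1!d' /tmp/host.properties)
tomcat2=$(sed '2!d' /tmp/host.properties)

# second sed: double quotes let $tomcat1/$tomcat2 expand, so each
# placeholder is replaced by the IP itself rather than a literal name
sed -i "s/tomcatnode1ip/$tomcat1/g; s/tomcatnode2ip/$tomcat2/g" /tmp/server.xml
```

With the real paths substituted back in, the same lines update server.xml and workers.properties in place from the userdata script.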
There may be a much better way to do what I need altogether. I'll give the background first, then my current (non-working) approach.
The goal is to migrate a bunch of servers from SLES 11 to SLES 12 using Ansible playbooks. The problem is that the newserver and the oldserver are supposed to have the same NFS-mounted dir, and this has to be set up at the beginning of the playbook so that all of the other tasks can be completed. The name of the dir being created can be determined in two ways: on the oldserver directly, or from a MySQL query for the volume name on that oldserver. The newservers are named migrate-(oldservername). I tried prompting for the volume name in Ansible, but that would apply the same name to every server. Goal Recap: dir name must be determined from the oldserver, created on new server.
Approach 1: I've created a Perl script that Ansible will copy to the newserver, where it will execute the MySQL query and create the dir itself. There are two problems with this: 1) mysql-client needs to be installed on each server, which is completely unnecessary for these servers and would then have to be uninstalled after the query is run. 2) Copying files and remotely executing them seems like a bad approach in general.
Approach 2: Create a version of the above, except run it on the Ansible control machine (where mysql-client is already installed) and store the values as key:value pairs in a file. Problems: 1) I cannot figure out how to determine, in the Perl script, which hosts Ansible is running against, and would have to enter them manually. 2) I cannot figure out how to get Ansible to import those values correctly from the file created.
Here's the relevant perl code I have for this -
$newserver = "migrate-old.server.com";
($mig, $oldhost) = split(/\-/, $newserver, 2);
$volname = `mysql -u idm-ansible -p -s -N -e "select vol_name from assets.servers where hostname like '$oldhost'"`;
chomp $volname;
open(FH, ">", "vols.yml") or die "cannot open vols.yml: $!";
print FH "$newserver: $volname\n";
close(FH);
My Ansible code is all over the place as I've tried and commented out a ton of things. I can share that here if it is helpful.
Approach 3: Do it completely in Ansible - basically a MySQL query in a loop over each host. Problem: I have absolutely no idea how to do this; I'm way too unfamiliar with Ansible. I think this is what I would prefer to try, though.
What is the best approach here? How do I go about getting the right value into ansible to create the correct directory?
Please let me know if I can clarify anything.
Goal Recap: dir name must be determined from the oldserver, created on new server.
Will magic variables do?
Something like this:
---
- hosts: old-server
  tasks:
    - shell: "/get-my-mount-name.sh"
      register: my_mount_name

- hosts: new-server
  tasks:
    - shell: "/mount-me.sh --mount_name={{ hostvars['old-server'].my_mount_name.stdout }}"
Running snort 2.9.7.0 on the latest Arch Linux OS on Raspberry Pi B+ model.
I have tried to run Snort multiple times in NIDS mode: snort -dev -l log -h 192.168.1.0/24 -c snort.conf OR snort -c snort.conf -l /log -h 127.0.0.1/24 -s.
I always get this error: ./etc/snort/rules/emerging-icmp.rules(0) Unable to open rules file "./etc/snort/rules/emerging-icmp.rules" no such file or directory. The problem is this file does exist and is part of the rules directory!
I did modify snort.conf as some tutorials and the manual (http://manual.snort.org/node18.html) suggested, however this did not help in any way and I hit a brick wall. I'm not seeing what I'm doing wrong.
Does it have to do with . before / ?
The ./ is resolved relative to the directory you run snort from, so if that isn't the root (/) directory, that is probably why. You should remove the . if the rules files are actually under /etc. It could also be a permissions problem: make sure the permissions on that file are correct for the user you are running snort as.
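A quick way to see the working-directory dependence is to recreate the layout somewhere harmless (the /tmp/demo tree below is a scratch stand-in for /etc/snort/rules):

```shell
# build a fake rules tree under /tmp/demo
mkdir -p /tmp/demo/etc/snort/rules
touch /tmp/demo/etc/snort/rules/emerging-icmp.rules

# "./" resolves against the current working directory, not snort.conf:
cd /tmp/demo
[ -e ./etc/snort/rules/emerging-icmp.rules ] && echo "found from /tmp/demo"
cd /
[ -e ./etc/snort/rules/emerging-icmp.rules ] || echo "not found from /"
```

This is the same lookup snort performs: run from / the relative path happens to land on /etc/snort/rules, run from anywhere else it does not, which is why the absolute path is safer in the config.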
I'm using Sphinx on a Linux production server as well as a Windows dev machine running WampServer.
The index configurations in sphinx.conf each require a path setting for the output file name. Because the filesystems on the production server and dev machine are different, I have to have two lines and then comment one out depending on which server I'm using.
#path = /path/to/folder/name #LIVE
path = C:\wamp\www\site\path\to\folder\name #LOCALHOST
Since I have lots of indexes, it gets really old having to constantly comment and uncomment dozens of lines every time I need to update the file.
Using relative paths would be the ideal solution, but when I tried that I received the following error when running the indexer:
FATAL: failed to open ../folder/name.tmp.spl: Invalid argument, will not index. Try --rotate option.
Is it possible to use relative paths in sphinx.conf?
You can use relative paths, but it's kind of tricky, because the various utilities will have different working directories.
E.g. on Windows the searchd service will, IIRC, start with a working directory of $WINDIR$\System32;
on Linux, via crontab, I think it keeps whatever working directory it was launched with, so you would have to change the folder in the actual command line
... i.e. it's not relative to the config file, it's relative to the current working directory.
Personally I use a version control system (SVN, actually) to manage it. The version from dev is always the one committed to the repository; the 'working copy' on the LIVE server has had the paths edited to the right location. So when you 'update' to the latest file, only changes are merged, leaving the local file paths intact.
Other people use a dynamic config file. The config file can be a script (php/python/perl etc.) - but this only works on Linux, so it won't help you.
Or you can just have a 'publish' script. Basically, you edit a 'master' config file, one that can be freely copied to all servers. Then a 'publish' script writes the appropriate local path. It can do it with some pretty simple search-and-replace:
<?php
if (trim(`hostname`) == 'live') {
$path = '/path/to/folder/';
} else {
$path = 'C:\wamp\www\site\path\to\folder\\';
}
$contents = file_get_contents('sphinx.conf.master');
$contents = str_replace('$path',$path,$contents);
file_put_contents('sphinx.conf',$contents);
Then have path = $path\name in the master config file, which will be replaced with the proper path when you run the script on the local machine.
I'm running a Centos virtual machine using Vagrant. The machine seems to run properly, but when I try to sync Perforce I can see the following error:
[vagrant#vagrant-c5-x86_64 ~]$ /perforce/p4 sync -f ...
Perforce client error:
Connect to server failed; check $P4PORT.
TCP connect to perforce.xxx.com:1666 failed.
Servname not supported for ai_socktype
I have read this http://www.ducea.com/2006/09/11/error-servname-not-supported-for-ai_socktype/ and tried to set the ports in /etc/services, but it didn't work. I am not even sure if the problem is Perforce or OS related.
Any hints?
I had this problem with a Tornado/Python app. Apparently, this can be caused by the port being interpreted as a string instead of an integer. So in my case, I needed to change my startup script to force it to be interpreted as an integer.
application = tornado.web.Application(...)
application.listen(int(port))
Are you able to enter your client? Before trying to sync the files, try to create a Perforce client:
p4 client
Maybe it's not the host:port that is the issue, but other flags in the connection string that interfere.
I personally received the exact same error, but it was a Perforce issue.
The reason is that Perforce has its own priority order when it looks for its P4USER/P4PORT/... configuration:
ENV variables (run export to see them)
a file pointed to by P4CONFIG, if that variable was specified somewhere (like .perforce in the current/upper directory)
The problem was that even though Perforce finds the ENV variable first, the P4CONFIG file can override it.
So my $P4PORT ENV variable had connection string X, but the .perforce file had connection string Y.
Removing P4PORT from my local .perforce file solved the issue.
For example:
$~] echo $P4PORT;
rsh:ssh -2 -q -a -x -l p4ssh perforce.mydomain.com
$~] cat .perforce
# P4PORT="rsh:ssh -q -a -x -l perforce.mydomain.com /bin/true"
P4USER="my_user"
Also remember that Perforce will search for the $P4CONFIG file (if one is configured) up the entire directory hierarchy, so even if the file isn't in the current directory it might be found in a parent directory, and there you might have a $P4PORT configuration you didn't expect.
Put trailing slash, e.g.:
http://perforce.xxx.com:1666/
Instead of:
http://perforce.xxx.com:1666
In my case I got a similar error in Go; I changed my port type from string to int and it works fine.