I know you can use environment variables to configure HAProxy. It is working for me with a single value.
But is it possible to use an environment variable that holds a list of values (e.g. multiple src addresses)?
For instance, in my haproxy.cfg:
...
acl acl_gateway_03 src "${ACL_GATEWAY_03_SRC}"
...
This works when the variable holds a single value, for example:
ACL_GATEWAY_04_SRC=172.30.4.0/24
But if I set a list of values (i.e. ACL_GATEWAY_04_SRC=172.30.4.0/24 172.30.6/24), the server does not start and shows this error message:
[ALERT] 034/181026 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:47] : error detected while parsing ACL 'acl_gateway_04' : '172.30.4.0/24 127.0.0.1' is not a valid IPv4 or IPv6 address.
You can use an ACL for each IP range:
acl acl_gateway_03 src 172.30.4.0/24
Or you can have a cron job write the IP ranges to files and reference those files in HAProxy, as below:
acl acl_gateway_03 src -f file1.lst -f file2.lst
The "-f" flag is followed by the name of a file from which all lines will be read as individual values. It is even possible to pass multiple "-f" arguments if the patterns are to be loaded from multiple files. Empty lines as well as lines beginning with a sharp ('#') will be ignored.
As user16818195 said in their answer, according to the documentation:
If the variable contains a list of several values separated by spaces, like this:
ENV_VAR=10.0.0.7 10.0.0.9 10.0.0.11
You need to reference the environment variable this way in the haproxy configuration file:
acl your_acl src "${ENV_VAR[*]}"
According to the documentation this is supported, but I couldn't figure out exactly how to do it:
If the variable contains a list of several values separated by spaces,
it can be expanded as individual arguments by enclosing the variable
with braces and appending the suffix '[*]' before the closing brace.
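Putting the two pieces together, a minimal sketch using the variable from the question (the values are illustrative):

ACL_GATEWAY_04_SRC="172.30.4.0/24 172.30.6.0/24"

and in haproxy.cfg:

acl acl_gateway_04 src "${ACL_GATEWAY_04_SRC[*]}"

With the '[*]' suffix HAProxy expands the space-separated list into individual src arguments, which avoids the "is not a valid IPv4 or IPv6 address" error shown above.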
Related
I am creating a two-instance Tomcat cluster using mod_jk.
The instances need to communicate with each other, so they each need to know the other's private IP address. The addresses need to be added to a workers.properties file and also to the server.xml file. I am trying to automate this.
I have created an EC2 user-data script that uses outputs from a stack to write the IP addresses to a text file, which looks like:
10.0.75.75
10.0.75.142
(The top one is "tomcatnode1ip", the bottom one is "tomcatnode2ip".)
I can run
sed '1!d' /home/ec2-user/scripts/properties/host.properties
and it prints line 1 of host.properties, which is an IP address. I can also output that to another text file.
What I want to do is overwrite variables in workers.properties and server.xml with the IP addresses of the two servers.
This is done with
sed -i 's/tomcatnode1ip/tomcat1/g' /usr/share/tomcat/conf/server.xml
and
sed -i 's/tomcatnode1ip/tomcat1/g' /etc/httpd-2.4.39/modules/tomcat-connectors-1.2.46-src/conf/workers.properties
using the variables tomcat1 and tomcat2.
So basically I have two working sed scripts, and what I want is either:
for the output of the first script to feed the second script, or
to nest the scripts, so that the IP address is sent directly to the variable.
Is either of these possible?
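For illustration only, a minimal sketch of the first option (feeding the output of the first sed into the second through command substitution), reusing the paths from the question:

# read each IP address from host.properties into a shell variable
tomcat1=$(sed '1!d' /home/ec2-user/scripts/properties/host.properties)
tomcat2=$(sed '2!d' /home/ec2-user/scripts/properties/host.properties)
# then substitute the placeholders in both configuration files
sed -i "s/tomcatnode1ip/$tomcat1/g" /usr/share/tomcat/conf/server.xml /etc/httpd-2.4.39/modules/tomcat-connectors-1.2.46-src/conf/workers.properties
sed -i "s/tomcatnode2ip/$tomcat2/g" /usr/share/tomcat/conf/server.xml /etc/httpd-2.4.39/modules/tomcat-connectors-1.2.46-src/conf/workers.properties

Note the double quotes around the sed expressions so that $tomcat1 and $tomcat2 are expanded by the shell.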
I am trying to change the IP address set to a particular site in the host file.
For example:
# 123.123.123 www.google.com
# 456.456.456 www.google.com
I want to write a test that first opens Google through 123.123.123 and then, after the program changes the entry, opens Google through 456.456.456.
Changing the server manually means removing the # from the beginning of the corresponding line.
I do not want to use Selenium Grid with several machines, since the machines on the other servers do not have the resources for it.
I want to change this on the same machine, from the code, while the tests are running.
As the /etc/hosts file is picked up immediately by the system, without a restart, you can manipulate or even completely overwrite this file during your run.
The trouble is that to edit the hosts file you need root rights, and you are actually changing the behaviour of your host system. To avoid this you might consider running in a Docker environment, but if that is not possible you can do something like this with root access:
/etc/hosts file
# 123.123.123 www.google.com
# 456.456.456 www.google.com
as part of your test run:
# at start of run
sed -i.bak 's/# 123.123.123/123.123.123/g' /etc/hosts
# do other tests now
# later when stuff has changed
sed -i.bak 's/123.123.123/456.456.456/g' /etc/hosts
Something like this?
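One practical addition (a sketch; the .orig path is just a convention): because running sed -i.bak twice overwrites the first backup, keep a pristine copy before the run and restore it afterwards:

# before the run: keep an untouched copy of the hosts file
cp /etc/hosts /etc/hosts.orig
# ... run the tests that rewrite /etc/hosts ...
# after the run: put the original back
cp /etc/hosts.orig /etc/hosts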
I am new to wget. Let's get straight to the question. I want to download all images from a website directory. The directory contains no index file. The image names follow a pattern like ABCXXXX, where XXXX is any four-digit number. So how do I download all the images under the directory?
I've tried
wget -p http://www.example.com
but it's downloading an index.html file instead of multiple images.
Using wget:
wget -r -A "*.jpg" http://example.com/images/
Using cURL:
curl "http://example.com/images/ABC[0000-9999].jpg" -o "ABC#1.jpg"
According to man curl:
You can specify multiple URLs or parts of URLs by writing part sets
within braces as in:
http://site.{one,two,three}.com
or you can get sequences of alphanumeric series by using [] as in:
ftp://ftp.numericals.com/file[1-100].txt
ftp://ftp.numericals.com/file[001-100].txt (with leading zeros)
ftp://ftp.letters.com/file[a-z].txt
And the explanation for #1:
-o, --output <file>
Write output to <file> instead of stdout. If you are using '{}' or '[]' to
fetch multiple documents, you can use '#' followed by a number in the <file>
specifier. That variable will be replaced with the current string for the
URL being fetched. Like in:
curl http://{one,two}.site.com -o "file_#1.txt"
or use several variables like:
curl http://{site,host}.host[1-5].com -o "#1_#2"
You may use this option as many times as the number of URLs you have.
See also the --create-dirs option to create the local directories
dynamically. Specifying the output as '-' (a single dash) will force
the output to be done to stdout.
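If recursive wget finds nothing (the directory has no index) and curl is not available, the ABCXXXX pattern can also be generated with a plain shell loop; a sketch, assuming the images live under http://www.example.com/images/ and have a .jpg extension:

# seq -w pads the counter with zeros to a fixed width (0000 .. 9999)
for i in $(seq -w 0 9999); do
  wget "http://www.example.com/images/ABC${i}.jpg"
done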
It doesn't happen too often, but every once in a while I'll fumble with my typing and accidentally invoke "hg resolve -m" without a file argument. This then helpfully marks all the conflicts resolved. Is there any way to prevent it from resolving without one or more file arguments?
You can do this with a pre-resolve hook, but you'd have to parse the arguments yourself to ensure that they are valid, which could be tricky.
The relevant environment variables that you might need to look at are:
HG_ARGS - the contents of the whole command line, e.g. resolve -m
HG_OPTS - a dictionary object containing the options specified. This would have an entry called mark with a value of True if -m had been specified
HG_PATS - this is the list of files specified
Depending on the scripting language you use, you should be able to test whether HG_OPTS contains a value of True for mark, and fail if it does while the HG_PATS list is empty.
It starts to get complicated when you take into account the --include and --exclude arguments.
If you specify the files to resolve as part of the --include option then the files to include would be in HG_OPTS, not HG_PATS. Also, I don't know what would happen if you specified hg resolve -m test.txt --exclude test.txt. I'd hope that it would not resolve anything but you'd need to test that.
Once you've parsed the command arguments, you'd return either 0 to allow the command or 1 to prevent it. You should echo a reason for the failure if you return 1 to avoid confusion later.
If you don't know how to do this then you'd need to specify what OS and shell you are using for anyone to provide more specific help.
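As a rough sketch of such a hook (ignoring --include/--exclude; the script path is hypothetical and the exact textual format of HG_OPTS may differ between Mercurial versions), in .hg/hgrc:

[hooks]
pre-resolve = /path/to/check-resolve.sh

and /path/to/check-resolve.sh:

#!/bin/sh
# block "hg resolve -m" / "hg resolve --mark" when no file patterns were given
case "$HG_OPTS" in
  *"'mark': True"*)
    if [ "$HG_PATS" = "[]" ]; then
      echo "resolve -m without file arguments is blocked; name the files to mark" >&2
      exit 1
    fi
    ;;
esac
exit 0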
Using zkCli.sh,
create -s /myznode "Hello World!" null
creates a znode using the string "Hello World!"
How do I get it to use the contents of a file instead of a string?
With a multi-line file containing spaces or line breaks, try something like this:
./bin/zkCli.sh create /test-node "`cat my-znode-content.xml`"
To set some data on a zk node:
./bin/zkCli.sh -server 172.26.65.11:2181 set /path "`cat employee.xml`"
or
./bin/zkCli.sh -server 172.26.65.11:2181 set /path "`echo 'Node data is set.'`"
If you look at ZooKeeperMain.java you can see that the only args it takes on the command line are for the server host and port to connect to.
If you then look at the method processZKCmd() you can see that it only takes arguments for sequential and ephemeral.
You can however send input to the command, e.g.
./zkCli.sh < script
where script contains "create /mynode null"
From there it's not a long way to creating an input file that is itself created from the contents of a file. For example:
echo "create `cat myfile` > script; ./zkCli.sh < script
Bear in mind that zk nodes should be of fairly small size.
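As a rough guard on that last point (the check below assumes ZooKeeper's default jute.maxbuffer of roughly 1 MB, and reuses the hypothetical myfile from above):

# refuse to create the znode if the file is near ZooKeeper's default data limit
size=$(wc -c < myfile)
if [ "$size" -ge 1000000 ]; then
  echo "myfile is too large for a single znode" >&2
  exit 1
fi
./zkCli.sh create /mynode "$(cat myfile)"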