Making Selenium WebDriver accessible from the internet - Perl

I want to run a Perl script that uses Selenium::Remote::Driver to fetch data from a web site that relies heavily on JavaScript. The Perl script and the Selenium WebDriver should run on different machines (client and server, respectively).
I fired up Selenium WebDriver on the server using
/usr/bin/xvfb-run -e /dev/stdout java -Dwebdriver.gecko.driver=/opt/gecko/geckodriver -jar /opt/selenium/selenium.jar
Xvfb has already been started.
The WebDriver reports being properly started and appears in the process list
root 6830 6800 7 18:17 pts/0 00:00:01 java -Dwebdriver.gecko.driver=/opt/gecko/geckodriver -jar /opt/selenium/selenium.jar
The process is listening at port 4444
netstat -tulpn | grep 6830
tcp6 0 0 :::4444 :::* LISTEN 6830/java
I then tried to configure the firewall to accept connections from the client machine, following this thread:
iptables -A INPUT -p tcp --dport 4444 -s <client-ip> -j ACCEPT
iptables -A INPUT -p tcp --dport 4444 -j DROP
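Note that -A appends, so if the INPUT chain already contains a REJECT or DROP rule above the new ACCEPT, the ACCEPT is never reached. As a sanity check (a sketch only; the right position depends on the existing chain), the rule order and hit counters can be listed and the ACCEPT rule inserted at the top instead of appended:
# list INPUT rules with positions and packet counters
iptables -L INPUT -n -v --line-numbers
# insert the ACCEPT rule at position 1 rather than appending it
iptables -I INPUT 1 -p tcp --dport 4444 -s <client-ip> -j ACCEPT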
The Perl script
use strict;
use warnings;
use Selenium::Remote::Driver;

my $sel = Selenium::Remote::Driver->new;
$sel->get("http://example.com");
print $sel->get_page_source();
$sel->quit;
This works properly when run locally on the server, and returns the desired data.
But when run on the client machine, with a modified constructor
my $sel = Selenium::Remote::Driver->new( remote_server_addr => '<server-ip>' );
it refuses to work
Selenium server did not return proper status at /Selenium/Remote/Driver.pm line 401.
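For reference, a fuller client-side constructor with the port spelled out would look roughly like this sketch; port 4444 and browser_name => 'firefox' are assumptions matching the server started above:
use strict;
use warnings;
use Selenium::Remote::Driver;

# client-side constructor sketch; port and browser_name are assumptions
my $sel = Selenium::Remote::Driver->new(
    remote_server_addr => '<server-ip>',
    port               => 4444,
    browser_name       => 'firefox',
);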
Issuing a telnet command on the client to check the server's WebDriver port
telnet <server-ip> 4444
Trying <server-ip>...
shows that the connection just hangs, indicating that the port is not open.
What am I missing?
ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:5015907 errors:0 dropped:0 overruns:0 frame:0
TX packets:5015907 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1434694690 (1.3 GiB) TX bytes:1434694690 (1.3 GiB)
venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:127.0.0.1 P-t-P:127.0.0.1 Bcast:0.0.0.0 Mask:255.255.255.255
inet6 addr: ::2/128 Scope:Compat
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:17549498 errors:0 dropped:0 overruns:0 frame:0
TX packets:18398702 errors:0 dropped:111 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:11881977702 (11.0 GiB) TX bytes:4577577462 (4.2 GiB)
venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:178.254.xxx.xxx P-t-P:178.254.xxx.xxx Bcast:178.254.xxx.xxx Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
venet0:1 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:178.254.xxx.xxx P-t-P:178.254.xxx.xxx Bcast:178.254.xxx.xxx Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1

Related

Connect between a client Docker container and a Postgres server on the host [duplicate]

I have a Postgres server on my host machine, and I want to make a Docker container that connects to it.
So I guess I need to expose the Postgres server at IP:5432 to Docker, expose 5432 on the container, and specify the correct connection information inside the container, something like:
SQLALCHEMY_DATABASE_URI = "postgresql+psycopg2://username:password@IP/db_name"
The host's Docker IPs are:
docker0 Link encap:Ethernet HWaddr 02:42:b3:d9:eb:e2
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:b3ff:fed9:ebe2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:213512 errors:0 dropped:0 overruns:0 frame:0
TX packets:351284 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:9157933 (9.1 MB) TX bytes:826914241 (826.9 MB)
docker_gwbridge Link encap:Ethernet HWaddr 02:42:5c:b9:3b:0a
inet addr:172.18.0.1 Bcast:172.18.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:5cff:feb9:3b0a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:436 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:64397 (64.3 KB)
What am I missing, and how do I expose the Postgres server on the relevant ports on the host side?
Maybe you can consider using network mode host for your container.
It means that the container is not really isolated from your host network and can use localhost:5432 to connect to your database.
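A minimal sketch of that suggestion (the image name is a placeholder):
# run the container on the host's network stack
docker run --network host <your-image>
# inside the container the database is then reachable as
# postgresql+psycopg2://username:password@localhost:5432/db_name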

OpenVPN: Can't ping client when it's connected from inside the LAN

We have an OpenVPN server (I believe on our router) and mobile clients that connect to the internet from far-away locations, but also occasionally from inside our office. These systems are headless, so configuring them differently before connecting to the in-office network is a nonstarter; we would like to SSH into them via their Avahi hostnames regardless of where they physically are.
Right now we can ping and SSH into them when they are connected to the internet outside of our network. When they are connected from inside our LAN, sometimes hostname.local resolves to 192.168.10.3 (and ping and SSH don't work) and sometimes to 192.168.1.211 (and ping and SSH do work).
When monitoring Wireshark on the mobile client, the ping requests to the 192.168.10.3 address do appear but are not answered.
How can we configure our clients so they can be reached when connecting from inside of our network?
output of ifconfig on client (connected to VPN via our office LAN):
eth0 Link encap:Ethernet HWaddr 00:04:4b:a7:fa:e5
inet addr:192.168.1.223 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::7a45:f5b1:1b87:c6f0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8964 errors:0 dropped:0 overruns:0 frame:0
TX packets:771 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1847719 (1.8 MB) TX bytes:160760 (160.7 KB)
Interrupt:42
tap0 Link encap:Ethernet HWaddr ce:d4:a6:18:48:21
inet addr:192.168.10.3 Bcast:192.168.10.255 Mask:255.255.255.0
inet6 addr: fe80::ccd4:a6ff:fe18:4821/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1381 errors:0 dropped:0 overruns:0 frame:0
TX packets:58 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:214474 (214.4 KB) TX bytes:7149 (7.1 KB)
output of route on client (connected to VPN via our office LAN):
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.1.1 0.0.0.0 UG 0 0 0 eth0
default 192.168.10.1 0.0.0.0 UG 50 0 0 tap0
default 192.168.1.1 0.0.0.0 UG 100 0 0 eth0
link-local * 255.255.0.0 U 1000 0 0 eth1
192.168.1.0 * 255.255.255.0 U 100 0 0 eth0
192.168.2.0 * 255.255.255.0 U 0 0 0 eth1
192.168.10.0 * 255.255.255.0 U 50 0 0 tap0
Back-to-back pings from another machine on the same LAN to our mobile client. For whatever reason, the Avahi .local name unpredictably resolves to either the LAN IP or the VPN IP. Anyway, the ping to the VPN IP (the second one) just hangs:
[15:51:25]~$ ping liber0.local
PING liber0.local (192.168.1.223) 56(84) bytes of data.
64 bytes from 192.168.1.223: icmp_seq=1 ttl=64 time=4.00 ms
64 bytes from 192.168.1.223: icmp_seq=2 ttl=64 time=6.09 ms
64 bytes from 192.168.1.223: icmp_seq=3 ttl=64 time=38.8 ms
^C
--- liber0.local ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 4.003/16.302/38.805/15.935 ms
[15:51:29]~$ ping liber0.local
PING liber0.local (192.168.10.3) 56(84) bytes of data.
^C
--- liber0.local ping statistics ---
27 packets transmitted, 0 received, 100% packet loss, time 26629ms
OpenVPN configuration file:
client
dev tap
proto udp
remote <redacted>
float
resolv-retry infinite
nobind
persist-key
persist-tun
verb 3
ca <redacted>.pem
cert <redacted>.pem
key <redacted>.key
cipher AES-256-CBC
auth SHA256
The key hint was that the ICMP packets made it to the VPN-connected client but were not answered. It turned out that the default rp_filter (reverse path filter) setting was strictly checking and dropping the packets. Adding net.ipv4.conf.default.rp_filter = 2 to /etc/sysctl.conf sets rp_filter to loose reverse-path checking, and everything works.
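A sketch of the change; the extra net.ipv4.conf.all.rp_filter line and the tap0 check are additions of mine, not part of the original fix:
# persist loose reverse-path filtering for newly created interfaces
echo 'net.ipv4.conf.default.rp_filter = 2' >> /etc/sysctl.conf
# optionally relax it for interfaces that already exist (an assumption)
sysctl -w net.ipv4.conf.all.rp_filter=2
# reload /etc/sysctl.conf and verify the value on the VPN interface
sysctl -p
cat /proc/sys/net/ipv4/conf/tap0/rp_filter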

eth1 interface not visible in ifconfig and unable to interact with other servers in CentOS 6

I just installed CentOS 6 in VirtualBox, but I am unable to find the eth1 interface needed to interact with the other virtual servers on my machine. Could you please help?
ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:480 (480.0 b) TX bytes:480 (480.0 b)
cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=localhost.localdomain
To fix this problem, simply remove the file /etc/udev/rules.d/70-persistent-net.rules and reboot your system.
rm /etc/udev/rules.d/70-persistent-net.rules
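After the reboot it may also be worth confirming that the kernel re-detected the adapter and that a config file exists for it; the following is only a sketch, and the DHCP setting is an assumption:
# list all interfaces the kernel detected
ip link show
# minimal config file for eth1 if one is missing (values are assumptions)
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 <<'EOF'
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=dhcp
EOF
ifup eth1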

Unable to start WildFly 10.1.0.Final in cluster mode when enabling IPv6

Currently we use WildFly 10.1.0.Final with OpenJDK 7. I set java.net.preferIPv4Stack=false to support IPv6, a new requirement from customers, but I see the problem below in cluster mode only.
What should we do now? If I set java.net.preferIPv4Stack=true, everything works properly, but that means IPv6 is not supported.
Thank you!
java -version
java version "1.7.0_79"
OpenJDK Runtime Environment (rhel-2.5.5.4.el6-x86_64 u79-b14)
OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)
Exception:
2017-10-06 10:35:51,667 ERROR [ServerService Thread Pool -- 3]-[org.jboss.modcluster] MODCLUSTER000034: Failed to start advertise listener: java.net.SocketException: bad argument for IP_MULTICAST_IF: address not bound to any interface
at java.net.PlainDatagramSocketImpl.socketSetOption(Native Method)
at java.net.AbstractPlainDatagramSocketImpl.setOption(AbstractPlainDatagramSocketImpl.java:310)
at java.net.MulticastSocket.setInterface(MulticastSocket.java:471)
at org.jboss.modcluster.advertise.impl.AdvertiseListenerImpl.init(AdvertiseListenerImpl.java:151)
at org.jboss.modcluster.advertise.impl.AdvertiseListenerImpl.start(AdvertiseListenerImpl.java:165)
at org.jboss.modcluster.ModClusterService.init(ModClusterService.java:178)
at org.wildfly.mod_cluster.undertow.UndertowEventHandlerAdapter.start(UndertowEventHandlerAdapter.java:100)
at org.wildfly.clustering.service.AsynchronousServiceBuilder$1.run(AsynchronousServiceBuilder.java:102)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at org.jboss.threads.JBossThread.run(JBossThread.java:320)
2017-10-06 10:35:51,797 INFO [MSC service thread 1-7]-[org.jboss.as.remoting] WFLYRMT0001: Listening on [::]:9999
2017-10-06 10:35:52,014 INFO [MSC service thread 1-5]-[org.jboss.as.remoting] WFLYRMT0001: Listening on [::]:4447
Multicast address: 225.1.2.5
ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:50:56:9C:DD:DC
inet addr:192.168.92.204 Bcast:192.168.92.255 Mask:255.255.255.0
inet6 addr: fe80::250:56ff:fe9c:dddc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:971537495 errors:0 dropped:0 overruns:0 frame:0
TX packets:666680550 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:468632879172 (436.4 GiB) TX bytes:548433354959 (510.7 GiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:838757022 errors:0 dropped:0 overruns:0 frame:0
TX packets:838757022 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:280155497754 (260.9 GiB) TX bytes:280155497754 (260.9 GiB)
jboss.bind.address = 0.0.0.0
I see a similar issue here, but it has not been answered yet:
https://developer.jboss.org/thread/233410
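For context, here is a sketch of how such properties are typically passed to WildFly; the -b and -u values simply mirror the eth0 address and the multicast address shown above, and binding to them instead of 0.0.0.0 is an assumption, not a confirmed fix:
# in bin/standalone.conf
JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=false"
# start the HA profile bound to a concrete interface and multicast address
./standalone.sh -c standalone-ha.xml -b 192.168.92.204 -u 225.1.2.5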

Java NIO socket inside Docker fails between Windows and Unix

I made an NIO socket client and server following this. It works fine on the local machine. I deployed NIO_Server.jar in a Docker container with these args on port 7878:
docker run -ti --net=host -v $HOME:/usr/app -w /usr/app --name=test java:7 java -jar NIO_Server.jar
and the server started successfully. Notice that I set --net=host.
The -p option has the same effect. I didn't use --expose, because with -p the ports are exposed implicitly.
root@bess2:~# docker run -ti --net=host -v $HOME:/usr/app -w /usr/app --name=test java:7 java -jar NIO_Server.jar
EchoServer started...
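For comparison, the -p variant mentioned above would look roughly like this (a sketch; same image and mounts as the --net=host run):
docker run -ti -p 7878:7878 -v $HOME:/usr/app -w /usr/app --name=test java:7 java -jar NIO_Server.jar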
I wanted to make sure that it works inside the host:
root@bess2:~# netstat -antu | grep 7878
tcp6 0 0 127.0.0.1:7878 :::* LISTEN
root@bess2:~# telnet localhost 7878
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Inside the container I see these events:
EchoServer started...
Something received...
It is acceptable...
Connected to: /127.0.0.1:55974
Something received...
It is readable...
Something received...
It is readable...
But when I try to connect to the host server from my local machine, it fails:
C:\Users\MONSTERMASH>telnet 31.148.99.130 7878
Connecting To 31.148.99.130...Could not open connection to the host, on port 7878: Connect failed
but ping works:
C:\Users\MONSTERMASH>ping 31.148.99.130
Pinging 31.148.99.130 with 32 bytes of data:
Reply from 31.148.99.130: bytes=32 time=16ms TTL=56
Reply from 31.148.99.130: bytes=32 time=16ms TTL=56
Reply from 31.148.99.130: bytes=32 time=21ms TTL=56
Reply from 31.148.99.130: bytes=32 time=18ms TTL=56
Ping statistics for 31.148.99.130:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 16ms, Maximum = 21ms, Average = 17ms
This is the IP config from the host:
root@filesbess2:~# ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:81:11:7b:c7
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:81ff:fe11:7bc7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:401 errors:0 dropped:0 overruns:0 frame:0
TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:27254 (27.2 KB) TX bytes:6704 (6.7 KB)
eth0 Link encap:Ethernet HWaddr 52:54:00:97:1e:86
inet addr:31.148.99.130 Bcast:31.148.99.255 Mask:255.255.255.0
inet6 addr: fe80::5054:ff:fe97:1e86/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:137351 errors:0 dropped:0 overruns:0 frame:0
TX packets:577681422 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:10756344 (10.7 MB) TX bytes:534585313126 (534.5 GB)
Firewalls and antivirus software on the local machine and the host are turned off or inactive. The host is Ubuntu 14.04 deployed on KVM. I also ran this test with the host deployed on VirtualBox; the issue is the same.
You should throw a packet sniffer on both environments and look at the NIO payloads.
Linux and Windows libraries can have reversed endianness (little vs. big). The handshake may be failing because one side is "left-handed" and the other is "right-handed".
Root around in your NIO API for an endian switch. Do an OS check in the client and make sure its NIO messages are oriented properly for the server.
This is known as an OS "boundary condition", which appears especially between Windows and Linux.
Also check the whitespace and padding in the payloads, and make sure the versions of NIO on both environments are compatible.
You might want to use a higher-level NIO framework (e.g. Netty) that handles many boundary conditions under the hood, as browsers do.
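If you go hunting for that endian switch in plain java.nio, it lives on ByteBuffer; a minimal sketch (the buffer size and the chosen order are arbitrary):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianCheck {
    public static void main(String[] args) {
        // NIO buffers default to big-endian on every platform, but the
        // order can be switched per buffer before reading or writing payloads
        ByteBuffer buf = ByteBuffer.allocate(64);
        System.out.println("native order:  " + ByteOrder.nativeOrder());
        System.out.println("buffer before: " + buf.order());
        buf.order(ByteOrder.LITTLE_ENDIAN);  // the "endian switch"
        System.out.println("buffer after:  " + buf.order());
    }
}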