We have an OpenVPN server (I believe on our router), and mobile clients that connect to the internet from far-away locations, but also occasionally from inside our office. These systems are headless, so reconfiguring them before they connect to the in-office network is a nonstarter; we would like to SSH into them via their Avahi hostnames regardless of where they physically are.
Right now we can ping and SSH into them when they are connected to the internet from outside our network. When they are connected from inside our LAN, hostname.local sometimes resolves to 192.168.10.3 (and ping and SSH don't work) and sometimes to 192.168.1.211 (and ping and SSH do work).
When monitoring with Wireshark on the mobile client, ping requests to the 192.168.10.3 address do appear but are not answered.
How can we configure our clients so they can be reached when connecting from inside of our network?
Output of ifconfig on the client (connected to the VPN via our office LAN):
eth0 Link encap:Ethernet HWaddr 00:04:4b:a7:fa:e5
inet addr:192.168.1.223 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::7a45:f5b1:1b87:c6f0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8964 errors:0 dropped:0 overruns:0 frame:0
TX packets:771 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1847719 (1.8 MB) TX bytes:160760 (160.7 KB)
Interrupt:42
tap0 Link encap:Ethernet HWaddr ce:d4:a6:18:48:21
inet addr:192.168.10.3 Bcast:192.168.10.255 Mask:255.255.255.0
inet6 addr: fe80::ccd4:a6ff:fe18:4821/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1381 errors:0 dropped:0 overruns:0 frame:0
TX packets:58 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:214474 (214.4 KB) TX bytes:7149 (7.1 KB)
Output of route on the client (connected to the VPN via our office LAN):
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.1.1 0.0.0.0 UG 0 0 0 eth0
default 192.168.10.1 0.0.0.0 UG 50 0 0 tap0
default 192.168.1.1 0.0.0.0 UG 100 0 0 eth0
link-local * 255.255.0.0 U 1000 0 0 eth1
192.168.1.0 * 255.255.255.0 U 100 0 0 eth0
192.168.2.0 * 255.255.255.0 U 0 0 0 eth1
192.168.10.0 * 255.255.255.0 U 50 0 0 tap0
Back-to-back pings from another machine on the same LAN to our mobile client. For whatever reason, Avahi .local names unpredictably resolve to either the VPN IP or the LAN IP. The ping to the VPN IP (the second attempt) just hangs:
[15:51:25]~$ ping liber0.local
PING liber0.local (192.168.1.223) 56(84) bytes of data.
64 bytes from 192.168.1.223: icmp_seq=1 ttl=64 time=4.00 ms
64 bytes from 192.168.1.223: icmp_seq=2 ttl=64 time=6.09 ms
64 bytes from 192.168.1.223: icmp_seq=3 ttl=64 time=38.8 ms
^C
--- liber0.local ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 4.003/16.302/38.805/15.935 ms
[15:51:29]~$ ping liber0.local
PING liber0.local (192.168.10.3) 56(84) bytes of data.
^C
--- liber0.local ping statistics ---
27 packets transmitted, 0 received, 100% packet loss, time 26629ms
OpenVPN configuration file:
client
dev tap
proto udp
remote <redacted>
float
resolv-retry infinite
nobind
persist-key
persist-tun
verb 3
ca <redacted>.pem
cert <redacted>.pem
key <redacted>.key
cipher AES-256-CBC
auth SHA256
The key hint was that the ICMP packets made it to the VPN-connected client but were not answered. It turned out that the default rp_filter (reverse path filter) setting was doing strict checking and dropping those packets. Adding net.ipv4.conf.default.rp_filter = 2 to /etc/sysctl.conf sets rp_filter to loose reverse-path checking, and everything works.
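For reference, the runtime and persistent forms of that change look roughly like this (a sketch; tap0 is the VPN interface from the ifconfig output above, and note that the effective per-interface value is the stricter of conf.all and the interface's own setting):

```shell
# Check current values (0 = off, 1 = strict, 2 = loose)
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.tap0.rp_filter

# Loosen at runtime; the stricter of conf.all and the per-interface value wins
sudo sysctl -w net.ipv4.conf.all.rp_filter=2
sudo sysctl -w net.ipv4.conf.tap0.rp_filter=2

# Persist for interfaces created after boot (e.g. tap0):
echo 'net.ipv4.conf.default.rp_filter = 2' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```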
Related
I installed three operating systems (let's say 3 hosts) in VMware, all in NAT mode. The 3 hosts are named centos, centos 1, and centos 2 (as the picture below shows).
3 hosts in VMware
The first host's IP address is 192.168.248.132 and the second's is 192.168.248.136; we don't need to know the third host's IP because it's not relevant to this issue.
I typed the command "ping 192.168.248.136", and the output on the screen is:
PING 192.168.248.136 (192.168.248.136) 56(84) bytes of data.
64 bytes from 192.168.248.136: icmp_seq=1 ttl=64 time=0.435 ms
64 bytes from 192.168.248.136: icmp_seq=2 ttl=64 time=0.313 ms
64 bytes from 192.168.248.136: icmp_seq=3 ttl=64 time=0.385 ms
This means the ping command succeeded and host no. 2 (whose IP address is 192.168.248.136) received the ICMP echo and replied.
Meanwhile, I ran the command "tcpdump -i ens33" on host no. 3. If everything worked as expected, host no. 3 would not see any traffic between host no. 1 and host no. 2, because ICMP is neither broadcast nor multicast, so only hosts no. 1 and no. 2 should send and receive it. Also, host no. 3's network interface is not in promiscuous mode, so it should only receive its own frames. The output from host no. 3 below shows it is not in promiscuous mode.
[root@localhost usr]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.248.137 netmask 255.255.255.0 broadcast 192.168.248.255
inet6 fe80::b488:bc2c:3770:a95f prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:0d:dc:86 txqueuelen 1000 (Ethernet)
RX packets 351081 bytes 512917768 (489.1 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 34947 bytes 2166260 (2.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
The flags are 4163<UP,BROADCAST,RUNNING,MULTICAST>; "PROMISC" is not listed, so the interface is not in promiscuous mode.
However, after I ran "tcpdump -i ens33" on host no. 3, this appeared on the screen:
06:28:11.511233 IP 192.168.248.132 > 192.168.248.136: ICMP echo request, id 3137, seq 5, length 64
06:28:11.511503 IP 192.168.248.136 > 192.168.248.132: ICMP echo reply, id 3137, seq 5, length 64
Host no. 3 captured the traffic between no. 1 and no. 2, even though it was addressed only to no. 2.
So here comes the question, why can host no.3 receive packet which was not supposed to be sent to it?
tcpdump by default activates promiscuous mode, making the interface able to see anything on the network segment it is connected to (even traffic not explicitly sent to it).
Also, the three hosts appear to be connected to a virtual switch that does not isolate the hosts from each other.
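As a quick check (a sketch, assuming the interface name ens33 from the question), tcpdump's -p option asks it not to enable promiscuous mode, which makes the difference easy to demonstrate:

```shell
# -p: do not put the interface into promiscuous mode
sudo tcpdump -p -i ens33 icmp
# If frames between .132 and .136 still appear, the virtual switch itself
# is flooding them to all ports, independent of promiscuous mode.
```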
I have 2 RPis connected together with an ethernet cable. For the 1st RPi, the wifi is disabled, and it should get the internet connection from the 2nd RPi that is connected to the internet by wifi.
I am using NetworkManager (NM), and I also need both RPis to have static IPs on their eth0 interface:
RPi1 : 192.168.4.115/24 # The RPi that is not connected to wifi
RPi2 : 192.168.4.1/24 # The RPi that is connected to wifi
I configured the static IP of RPi1 in /etc/dhcpcd.conf. For RPi2, I used NM to configure the shared connection:
# On RPi2
nmcli connection add type ethernet ifname eth0 ipv4.method shared con-name local
nmcli connection modify local ipv4.addresses 192.168.4.1/24
nmcli connection up local
When I check the connection on RPi2, it has the correct IP, and when I ping 1.1.1.1 I get a reply:
pi@raspberrypi2:~ $ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.4.1 netmask 255.255.255.0 broadcast 192.168.4.255
inet6 fe80::514:af1e:da15:6f80 prefixlen 64 scopeid 0x20<link>
ether e4:5f:01:4c:5c:00 txqueuelen 1000 (Ethernet)
RX packets 105 bytes 20375 (19.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 178 bytes 22385 (21.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.11.16 netmask 255.255.255.0 broadcast 192.168.11.255
inet6 fe80::750f:5ec2:8158:fb80 prefixlen 64 scopeid 0x20<link>
ether e4:5f:01:4c:5c:01 txqueuelen 1000 (Ethernet)
RX packets 488 bytes 59706 (58.3 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 206 bytes 30178 (29.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
But on the first RPi, even though eth0 has the correct IP (192.168.4.115), when I try a ping I get connect: network is unreachable.
So I don't know what is missing to get connection sharing working, or what else I can check. Feel free to ask for any data that might be useful; I don't know what would help.
The first RPi doesn't use NM because I don't need it there; the stock Raspberry Pi OS networking is enough.
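A sketch of what is typically missing in this kind of setup: RPi1 has its own address but no default route or DNS server pointing at RPi2. Assuming the addresses above, the /etc/dhcpcd.conf on RPi1 would need something like:

```shell
# /etc/dhcpcd.conf on RPi1 (sketch; the DNS server choice is an assumption)
interface eth0
static ip_address=192.168.4.115/24
static routers=192.168.4.1
static domain_name_servers=192.168.4.1
```

With a default route via 192.168.4.1, RPi2's shared connection (which NATs eth0 traffic out of wlan0) can then forward RPi1's packets.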
I am using Contiki to create an IoT product involving multiple STM32L152-based nodes and a Linux board. I have one embedded Linux board (based on i.MX6) that receives data from the nodes and sends it to the internet over cellular, and 10 nodes that sense different environmental parameters and deliver them to the Linux board. The Linux board has a coprocessor running the border/edge router code; the UART2 lines of that coprocessor are connected to the Linux board. I use the Contiki tool tunslip6 to create the tun0 interface, and I am able to ping each node.
To make the question more understandable, I will explain the hardware setup and the steps I performed.
I am running the UDP sender example code (STM32CubeExpansion_SUBG1_V3.0.0/Projects/Multi/Applications/Contiki/Udp-sender) on an STM32L152RE-NUCLEO board that has an X-NUCLEO-IDS01A5 (SPSGRF-915 module) board sitting on top.
On a second, similar hardware setup, I am running the border-router example code. A USB cable is attached to my Linux box.
After running sudo ./tunslip6 -s /dev/ttyACM0 aaaa::1/64, I am able to see all neighbor nodes on the web page, and I am able to ping6 each node too.
I want to write application code on Linux to receive data from and send data to each node; I am stuck at this point.
sudo ./tunslip6 -s /dev/ttyACM0 aaaa::1/64
********SLIP started on ``/dev/ttyACM0''
opened tun device ``/dev/tun0''
ifconfig tun0 inet `hostname` mtu 1500 up
ifconfig tun0 add aaaa::1/64
ifconfig tun0 add fe80::0:0:0:1/64
ifconfig tun0
tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-
00-00-00-00-00
inet addr:127.0.1.1 P-t-P:127.0.1.1 Mask:255.255.255.255
inet6 addr: fe80::1/64 Scope:Link
inet6 addr: aaaa::1/64 Scope:Global
inet6 addr: fe80::8fad:d1a:c8d0:b76f/64 Scope:Link
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
*** Address:aaaa::1 => aaaa:0000:0000:0000
Got configuration message of type P
Setting prefix aaaa::
Server IPv6 addresses:
aaaa::900:f4ff:c3a:f3c5
fe80::900:f4ff:c3a:f3c5
ifconfig
tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:127.0.1.1 P-t-P:127.0.1.1 Mask:255.255.255.255
inet6 addr: fe80::1/64 Scope:Link
inet6 addr: aaaa::1/64 Scope:Global
inet6 addr: fe80::8fad:d1a:c8d0:b76f/64 Scope:Link
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:37 errors:0 dropped:0 overruns:0 frame:0
TX packets:67 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:3422 (3.4 KB) TX bytes:5653 (5.6 KB)
ip addr show tun0
3: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 500
link/none
inet 127.0.1.1/32 scope host tun0
valid_lft forever preferred_lft forever
inet6 aaaa::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::1/64 scope link
valid_lft forever preferred_lft forever
inet6 fe80::8fad:d1a:c8d0:b76f/64 scope link flags 800
valid_lft forever preferred_lft forever
sudo ip -6 route
aaaa::/64 dev tun0 proto kernel metric 256 pref medium
fe80::/64 dev tun0 proto kernel metric 256 pref medium
This is what I am seeing on the web page; I have one neighbor node and I am able to ping it.
ping6 aaaa::fdff:d2fa:2d05
PING aaaa::fdff:d2fa:2d05(aaaa::fdff:d2fa:2d05) 56 data bytes
64 bytes from aaaa::fdff:d2fa:2d05: icmp_seq=1 ttl=63 time=130 ms
64 bytes from aaaa::fdff:d2fa:2d05: icmp_seq=2 ttl=63 time=131 ms
64 bytes from aaaa::fdff:d2fa:2d05: icmp_seq=3 ttl=63 time=130 ms
64 bytes from aaaa::fdff:d2fa:2d05: icmp_seq=4 ttl=63 time=130 ms
64 bytes from aaaa::fdff:d2fa:2d05: icmp_seq=6 ttl=63 time=130 ms
64 bytes from aaaa::fdff:d2fa:2d05: icmp_seq=7 ttl=63 time=130 ms
64 bytes from aaaa::fdff:d2fa:2d05: icmp_seq=8 ttl=63 time=131 ms
^C
--- aaaa::fdff:d2fa:2d05 ping statistics ---
8 packets transmitted, 7 received, 12% packet loss, time 7040ms
rtt min/avg/max/mdev = 130.681/131.068/131.863/0.555 ms
I am not an expert in networking or socket programming. I tried some code that I found on the internet, something like this:
import socket

UDP_IP = "aaaa::fdff:d2fa:2d05"
UDP_PORT = 1234

print "UDP target IP:", UDP_IP
print "UDP target port:", UDP_PORT

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)  # UDP
sock.connect((UDP_IP, UDP_PORT))

while True:
    data = sock.recv(1024)
    print 'Received', repr(data)
Question: In Linux userspace, my colleague wants to write application code that can read from and write to each node (in this case aaaa::fdff:d2fa:2d05); how can we achieve that? On the microcontroller board I can read and write between two nodes, but not from Linux. Please help me: how can I read and write data from Linux userspace to each node? If possible, please share some example code. Thanks!
Update: I tried to communicate between the Linux host and a node with a different Contiki example, contiki/examples/ipv6/rpl-udp/udp-client.c, and had success; I was able to receive data from the node. My Python code is:
import socket

UDP_LOCAL_IP = 'aaaa::1'
UDP_LOCAL_PORT = 5678
UDP_REMOTE_IP = 'fe80::fdff:d2fa:2d05'
UDP_REMOTE_PORT = 8765

try:
    socket_rx = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    socket_rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    socket_rx.bind((UDP_LOCAL_IP, UDP_LOCAL_PORT))
except Exception:
    print "ERROR: Server Port Binding Failed"

print 'UDP server ready: %s' % UDP_LOCAL_PORT
print

while True:
    data, addr = socket_rx.recvfrom(1024)
    print "address : ", addr
    print "received message: ", data
    print "\n"
    socket_rx.sendto("Hello from server\n", (UDP_REMOTE_IP, UDP_REMOTE_PORT))
The Python code above works.
The border router has a hard-coded IPv6 address; according to https://anrg.usc.edu/contiki/index.php/RPL_Border_Router this address is fe80:0000:0000:0000:0212:7401:0001:0101 (run ip addr show tun0 to check; your edit shows that the assigned address is fe80:0000:0000:0000:8fad:d1a:c8d0:b76f). Bind your application's socket to this address; I have no Python code for this. Because you use tunslip, you can possibly also bind to localhost with the correct port over IPv4.
For testing you can use netcat to send UDP packets directly to the nodes ( http://www.tutorialspoint.com/unix_commands/nc.htm ).
To get rid of the error (see comment), apply inet_pton to convert the IPv6 address ( http://man7.org/linux/man-pages/man3/inet_pton.3.html ).
The following is working C code that you can translate to Python. It was written for using a Raven USB stick as border router (search for Contiki Jackdaw; no tunslip used):
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <net/if.h>

int main(void)
{
    int fd = 0; /* socket file descriptor */
    struct sockaddr_in6 server;

    /* convert the IPv6 address string to an in6_addr structure */
    const char *ip6str = "fe80::8fad:d1a:c8d0:b76f";
    struct in6_addr ravenipv6;
    if (inet_pton(AF_INET6, ip6str, &ravenipv6) == 1) /* successful */
    {
        printf("%s\n", "ipv6 address ...");
    }

    /* create an empty IPv6 socket interface specification */
    memset(&server, 0, sizeof(server));
    server.sin6_family = AF_INET6;
    server.sin6_flowinfo = 0;
    server.sin6_port = htons(1234);                /* port */
    server.sin6_addr = ravenipv6;                  /* the address converted with inet_pton */
    server.sin6_scope_id = if_nametoindex("tun0"); /* if your border router is on tun0 */

    /* create socket */
    if ((fd = socket(AF_INET6, SOCK_DGRAM, IPPROTO_UDP)) == -1)
    {
        printf("%s\n", "failed to create socket");
    }

    /* bind to socket */
    if (bind(fd, (struct sockaddr *)&server, sizeof(server)) == -1)
    {
        printf("%s\n", "no binding!");
    }

    return 0;
}
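A hedged Python translation of the same idea (the helper name make_bound_socket is mine; the address and interface name come from the answer above and will need adjusting): for a link-local fe80:: address, the scope id that the C code sets via if_nametoindex can be supplied by appending "%iface" and letting getaddrinfo build the sockaddr.

```python
import socket

def make_bound_socket(addr, port, iface="tun0"):
    """Bind an IPv6 UDP socket; link-local (fe80::) addresses need a scope id."""
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    if addr.startswith("fe80"):
        # "%iface" supplies the scope id, the equivalent of
        # if_nametoindex("tun0") in the C code above
        addr = "%s%%%s" % (addr, iface)
    # getaddrinfo returns the full 4-tuple sockaddr (addr, port, flowinfo, scope)
    info = socket.getaddrinfo(addr, port, socket.AF_INET6, socket.SOCK_DGRAM)
    sock.bind(info[0][4])
    return sock

# usage sketch (assumes tun0 exists and the border router is reachable):
#   sock = make_bound_socket("fe80::8fad:d1a:c8d0:b76f", 1234)
#   data, peer = sock.recvfrom(1024)
```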
I want to run a Perl script which uses Selenium::Remote::Driver to fetch data from a web site that relies heavily on JavaScript. The Perl script and the Selenium WebDriver should run on different machines (client and server, respectively).
I fired up Selenium WebDriver on the server using
/usr/bin/xvfb-run -e /dev/stdout java -Dwebdriver.gecko.driver=/opt/gecko/geckodriver -jar /opt/selenium/selenium.jar
Xvfb has already been started.
The WebDriver reports being properly started and appears in the process list
root 6830 6800 7 18:17 pts/0 00:00:01 java -Dwebdriver.gecko.driver=/opt/gecko/geckodriver -jar /opt/selenium/selenium.jar
The process is listening on port 4444:
netstat -tulpn | grep 6830
tcp6 0 0 :::4444 :::* LISTEN 6830/java
I then tried to configure the firewall to accept connections from the client machine, following this thread:
iptables -A INPUT -p tcp --dport 4444 -s <client-ip> -j ACCEPT
iptables -A INPUT -p tcp --dport 4444 -j DROP
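One thing worth checking (a guess, not a diagnosis): -A appends, so the ACCEPT for the client must end up before the DROP, and any pre-existing REJECT/DROP rule earlier in the INPUT chain would still win. Inspecting the chain shows the effective order:

```shell
# List INPUT rules with positions; the ACCEPT for <client-ip> must precede
# any DROP/REJECT that also matches port 4444
sudo iptables -L INPUT -n -v --line-numbers

# If needed, insert the ACCEPT at the top instead of appending:
sudo iptables -I INPUT 1 -p tcp --dport 4444 -s <client-ip> -j ACCEPT
```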
The Perl script
use Selenium::Remote::Driver;
$sel = Selenium::Remote::Driver->new;
$sel->get("http://example.com");
print $sel->get_page_source();
$sel->quit;
This works properly when run locally on the server, and returns the desired data.
But when run on the client machine, with a modified constructor
$sel = Selenium::Remote::Driver->new( remote_server_addr => '<server-ip>');
it refuses to work
Selenium server did not return proper status at /Selenium/Remote/Driver.pm line 401.
Issuing a telnet command on the client to check the server's WebDriver port
telnet <server-ip> 4444
Trying <server-ip>...
indicates that the port is not open.
What am I missing out?
ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:5015907 errors:0 dropped:0 overruns:0 frame:0
TX packets:5015907 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1434694690 (1.3 GiB) TX bytes:1434694690 (1.3 GiB)
venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:127.0.0.1 P-t-P:127.0.0.1 Bcast:0.0.0.0 Mask:255.255.255.255
inet6 addr: ::2/128 Scope:Compat
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:17549498 errors:0 dropped:0 overruns:0 frame:0
TX packets:18398702 errors:0 dropped:111 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:11881977702 (11.0 GiB) TX bytes:4577577462 (4.2 GiB)
venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:178.254.xxx.xxx P-t-P:178.254.xxx.xxx Bcast:178.254.xxx.xxx Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
venet0:1 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:178.254.xxx.xxx P-t-P:178.254.xxx.xxx Bcast:178.254.xxx.xxx Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
I made an NIO socket client and server following this guide. It works fine on the local machine. I deployed NIO_Server.jar in a Docker container, listening on port 7878, with these args:
docker run -ti --net=host -v $HOME:/usr/app -w /usr/app --name=test java:7 java -jar NIO_Server.jar
and the server started successfully. Notice that I set --net=host.
The effect is the same with the -p option. I didn't use --expose, because -p exposes the ports implicitly.
root@bess2:~# docker run -ti --net=host -v $HOME:/usr/app -w /usr/app --name=test java:7 java -jar NIO_Server.jar
EchoServer started...
I wanted to make sure that it works inside the host:
root@bess2:~# netstat -antu | grep 7878
tcp6 0 0 127.0.0.1:7878 :::* LISTEN
root@bess2:~# telnet localhost 7878
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Inside the container I see these events:
EchoServer started...
Something received...
It is acceptable...
Connected to: /127.0.0.1:55974
Something received...
It is readable...
Something received...
It is readable...
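One detail visible in the netstat output above: the server is listening on 127.0.0.1:7878, and a loopback-bound socket cannot be reached from another machine regardless of --net=host or -p. As a sketch (using the same names from the question), a publish-based run would look like this, with the server bound to the wildcard address inside the JVM:

```shell
# Publish the port explicitly instead of sharing the host network stack
docker run -ti -p 7878:7878 -v $HOME:/usr/app -w /usr/app \
    --name=test java:7 java -jar NIO_Server.jar
# Inside the JVM, bind the ServerSocketChannel to the wildcard address so it
# is reachable from off-box, e.g. new InetSocketAddress("0.0.0.0", 7878)
```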
But from my local machine, the attempt to connect to the host server fails:
C:\Users\MONSTERMASH>telnet 31.148.99.130 7878
Connecting To 31.148.99.130...Could not open connection to the host, on port 7878: Connect failed
but ping works:
C:\Users\MONSTERMASH>ping 31.148.99.130
Pinging 31.148.99.130 with 32 bytes of data:
Reply from 31.148.99.130: bytes=32 time=16ms TTL=56
Reply from 31.148.99.130: bytes=32 time=16ms TTL=56
Reply from 31.148.99.130: bytes=32 time=21ms TTL=56
Reply from 31.148.99.130: bytes=32 time=18ms TTL=56
Ping statistics for 31.148.99.130:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 16ms, Maximum = 21ms, Average = 17ms
This is the IP config of the host:
root@filesbess2:~# ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:81:11:7b:c7
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:81ff:fe11:7bc7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:401 errors:0 dropped:0 overruns:0 frame:0
TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:27254 (27.2 KB) TX bytes:6704 (6.7 KB)
eth0 Link encap:Ethernet HWaddr 52:54:00:97:1e:86
inet addr:31.148.99.130 Bcast:31.148.99.255 Mask:255.255.255.0
inet6 addr: fe80::5054:ff:fe97:1e86/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:137351 errors:0 dropped:0 overruns:0 frame:0
TX packets:577681422 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:10756344 (10.7 MB) TX bytes:534585313126 (534.5 GB)
Firewalls and antiviruses on the local machine and the host are turned off or inactive. The host runs Ubuntu 14.04 deployed on KVM. I repeated the test with the host deployed on VirtualBox; the issue is the same.
You should put a packet sniffer on both environments and look at the NIO payloads.
Linux and Windows libraries can have reversed endianness (little vs. big). The handshake may be failing because one side is "left-handed" and the other is "right-handed".
Root around in your NIO API for an endian switch. Do an OS check in the client and make sure its NIO messages are oriented properly for the server.
This is known as an OS "boundary condition", which appears especially between Windows and Linux.
Also check the whitespace and padding in the payloads, and make sure the versions of NIO on both environments are compatible.
You might want to use a higher-level NIO framework (e.g. Netty) that handles many boundary conditions under the hood, like browsers do.
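If an endianness mismatch is the suspicion, note that Java's ByteBuffer is big-endian (network order) by default on every platform; the order can be pinned explicitly on both sides so there is no ambiguity. A minimal demonstration:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    public static void main(String[] args) {
        // ByteBuffer defaults to big-endian regardless of the host OS,
        // but setting the order explicitly documents the wire format.
        ByteBuffer buf = ByteBuffer.allocate(4);
        buf.order(ByteOrder.BIG_ENDIAN);
        buf.putInt(0x0A0B0C0D);
        System.out.printf("first byte big-endian:    0x%02X%n", buf.get(0)); // 0x0A

        buf.clear();
        buf.order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(0x0A0B0C0D);
        System.out.printf("first byte little-endian: 0x%02X%n", buf.get(0)); // 0x0D
    }
}
```

Both client and server should agree on one order (conventionally big-endian for network protocols) rather than relying on defaults.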