OpenSplice V6.3 fails on Network - opensplice

Although I had been using OpenSplice V5 to connect my platform running on different nodes, I have now upgraded OpenSplice to V6.3 and my platform fails as if there were no connection.
What I did was try to connect the HelloWorld Windows versions (one 32-bit and one 64-bit) using the default HelloWorld example, but with no success.
Can anyone help solve this issue and advise which parameters must be tuned in the ospl.xml file to use network connectivity over unicast?
Thanks,

What I've seen before is that people sometimes use machines with multiple network interfaces connected, in which case you have to explicitly configure which interface to use in the DDSI configuration (rather than the 'AUTO' default).
Furthermore, if you want to enforce unicast, you have to configure your unicast peers in the DDSI discovery section of the config file.
Note that DDSI also switches automatically between unicast and multicast depending on the number of discovered endpoints (i.e. it will use unicast when only one endpoint is discovered).
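For illustration, here is a sketch of what those two settings could look like in the DDSI2 service section of ospl.xml. Treat it as a hedged example: the element names follow the OpenSplice V6 DDSI2 configuration schema as I remember it, and the interface and peer addresses (192.168.1.x) are placeholders, so check everything against the deployment guide for your exact version.

    <DDSI2Service name="ddsi2">
        <General>
            <!-- Pin the interface to use instead of relying on AUTO -->
            <NetworkInterfaceAddress>192.168.1.10</NetworkInterfaceAddress>
            <!-- Disable multicast so the service sticks to unicast -->
            <AllowMulticast>false</AllowMulticast>
        </General>
        <Discovery>
            <Peers>
                <!-- Explicit unicast peers to contact for discovery -->
                <Peer Address="192.168.1.11"/>
                <Peer Address="192.168.1.12"/>
            </Peers>
        </Discovery>
    </DDSI2Service>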

Related

TCP retransmission on RST - Different socket behaviour on Windows and Linux?

Summary:
I am guessing that the issue here has something to do with how Windows and Linux handle TCP connections or sockets, but I have no idea what it is. I'm initiating a TCP connection to a piece of custom hardware that someone else has developed, and I am trying to understand its behaviour. To do so, I've created a .NET Core 2.2 application; run on a Windows system, I can initiate the connection successfully, but on Linux (latest Raspbian), I cannot.
It appears that this may be because Linux systems do not retry/retransmit a SYN after a RST, whereas Windows systems do, and this behaviour seems key to how this peculiar piece of hardware works.
Background:
We have a black-box piece of hardware that can be controlled and queried over a network using a manufacturer-supplied Windows application. Data is unencrypted, no authentication is required to connect, and the application has some other issues. Ultimately, we want to be able to relay data from it to another system, so we decided to make our own application.
I've spent quite a long time trying to understand the packet format and have created a library, targeting .NET Core 2.2, that can be used to communicate successfully with this kit. In doing so, I discovered that the device seems to require a kind of "request to connect" command to be sent via UDP. Straight afterwards, I am able to initiate a TCP connection on port 16000, although the first TCP attempt always results in a RST,ACK being returned, so a second attempt needs to be made.
What I've developed works absolutely fine on both Windows (x86) and Linux (Raspberry Pi/ARM) systems and I can send and receive data. However, when run on the Raspbian system, there seem to be problems when initiating the TCP connection. I could have sworn that we had it working absolutely fine on a previous build, but none of the previous commits seem to work, so it may well be a system/kernel update that has changed something.
The issue:
When initiating a TCP connection to this device, it will, straight away, reset the connection. It does this even with the manufacturer-supplied software, which then immediately re-attempts the connection, and that second attempt succeeds; so this reset-once-then-it-works-the-second-time behaviour in itself isn't a "problem" that I have any control over.
What I am trying to understand is why a Windows system immediately re-attempts the connection through a retransmission...
...but the Linux system just gives up after one attempt (this is the end of the packet capture).
To prove it is not an application-specific issue, I've tried using ncat/netcat on both the Windows system and the Raspbian system, as well as on a Kali system on a separate laptop to prove it isn't an ARM/Raspberry Pi issue. Since the UDP "request" hasn't been sent, the connection will never succeed anyway, but this simply demonstrates the different behaviour between the OSes.
The Linux versions look pretty much the same as above, in that they send a single packet that gets reset, whereas the Windows attempt demonstrates the multiple retransmissions.
So, does anyone have an answer for this behaviour difference? I am guessing it isn't a .NET Core-specific issue, but is there any way I can set socket options to attempt a retransmission? Or can it be set at the OS level with sysctl settings or something? I did try to see whether there are any SocketOptionNames in .NET that look like they'd control attempts/retries, as this answer had me wonder, but no luck so far.
If anyone has any suggestions as to how to better align this behaviour across platforms, or can explain the reason for this difference at all, I would very much appreciate it!
Nice find! According to this, Windows' TCP will retry a connection if it receives a RST/ACK from the remote host after sending a SYN:
...Upon receiving the ACK/RST from the target host, the client determines that there is indeed no service listening there. In the Microsoft Winsock implementation of TCP, a pending connection will keep attempting to issue SYN packets until a maximum retry value is reached (set in the registry, this value defaults to 3 extra times)...
The value used to limit those retries is set in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpMaxConnectRetransmissions, according to the same article. At least in Windows 10 Pro it doesn't seem to be present by default.
Although this is a convenience for Windows machines, an application should still determine its own criteria for handling a failed connection attempt, IMO (i.e. number of attempts, timeouts, etc.).
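For illustration, here is a minimal sketch of such application-level retry handling. It is written in Java rather than the .NET Core used in the question, and the timeout, attempt count, and delay are made-up values, not anything the hardware mandates:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class RetryConnect {
        // Try to connect up to maxAttempts times; a RST in response to our SYN
        // surfaces here as an IOException (typically "Connection refused").
        public static Socket connectWithRetry(String host, int port,
                                              int maxAttempts, long delayMillis)
                throws IOException, InterruptedException {
            IOException last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                Socket socket = new Socket();
                try {
                    socket.connect(new InetSocketAddress(host, port), 2000);
                    return socket; // connected on this attempt
                } catch (IOException e) {
                    last = e;
                    socket.close();
                    Thread.sleep(delayMillis); // back off, then retry
                }
            }
            throw last; // assumes maxAttempts >= 1
        }
    }

Handling the retry yourself like this keeps the behaviour identical on Windows and Linux, regardless of what the OS does with SYN retransmissions.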
Anyhow, as I said, a surprising fact! Living and learning, I guess...
Cristian.

UDP packets received only in promiscuous mode

I am generating UDP packets on 100 multicast groups on one Ubuntu 16.04 VM and subscribing to those groups on another Ubuntu 16.04 VM. Both run on an HP server managed by Hyper-V Manager. The problem is that my application only receives 2 out of the 100 groups. However, when Wireshark is capturing, the application starts receiving all messages.
I found several other similar questions, like this one, which explains that because Wireshark runs in promiscuous mode, it allows all packets to get through (through what?), and this explains why my application starts "seeing" them too. Indeed, changing the Ethernet interface configuration to promiscuous mode allows the application to receive all the messages without running Wireshark.
But what is the problem with the other packets that are not normally received? I cross-verified the hex dumps of the "good" and "bad" messages and they don't seem to differ. The checksums at the IP and UDP levels are correct. What else could be the problem?
Multicast IP range: 239.1.4.1 - 239.1.4.100
Destination port: 50003
Source port range: ~33000 - 60900
Firewall is disabled
EDIT:
It looks like the application works fine when it is subscribed to only 8 multicast groups; however, if subscribed to more than 8, it receives only 2 (if they end in .7 or .8) or none, as described above. So I would assume that the packets are correct. Could the problem be in the network settings? Or in the application itself? I need to find the bug in a script I did not write.
EDIT2:
I installed the ISO image on another machine (VirtualBox instead of the HP Windows Server) and it works as it should. Thus, I assume my application works fine and all the Ubuntu OS configurations are correct. Now I put all the blame on the virtual manager/settings. Any ideas?
It sounds as if you didn't tell the kernel about the group memberships.
See http://tldp.org/HOWTO/Multicast-HOWTO-6.html
You have to use setsockopt with IP_ADD_MEMBERSHIP, and be sure to use the correct values for your local interfaces.
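As a sketch of what that looks like (in Java here, where MulticastSocket.joinGroup is the equivalent of setsockopt with IP_ADD_MEMBERSHIP), using the group range and port from the question; the interface name "eth0" is an assumption about your VM:

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.net.MulticastSocket;
    import java.net.NetworkInterface;

    public class MulticastJoin {
        public static void main(String[] args) throws Exception {
            NetworkInterface nic = NetworkInterface.getByName("eth0"); // assumption
            MulticastSocket socket = new MulticastSocket(50003);
            // Join each group explicitly so the kernel (and the NIC filter)
            // accept these destinations without needing promiscuous mode.
            for (int i = 1; i <= 100; i++) {
                InetAddress group = InetAddress.getByName("239.1.4." + i);
                socket.joinGroup(new InetSocketAddress(group, 50003), nic);
            }
            byte[] buf = new byte[1500];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet); // blocks until a datagram arrives on any joined group
            System.out.println("Received " + packet.getLength() + " bytes from "
                    + packet.getAddress());
        }
    }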

Developer exception starting a client-server model on Eiffel net

I'm trying to establish a connection using sockets between two PCs on the same LAN using the Eiffel programming language. I'm trying to run the examples that ship in the EiffelStudio installation directory. Right now, however, I'm trying to make it work on the same machine by addressing localhost (127.0.0.1).
It works perfectly on Linux (Ubuntu 15.10), but on Windows 7 I get an exception when I run the client program. The exception code is 24, "Unable to establish connection". The server program runs just fine, and I have already gotten a connection between a client on Linux and a server on Windows. I didn't find a solution to this exception in the documentation or on other sites. Here is a screencap:
Screencap of the debugger
Here is a link to the doc:
https://www.eiffel.org/doc/solutions/Two%20Machines
Thank you in advance.
The issue might be caused by the fact that some ports are already in use and others are reserved by the system. In particular, the port range 0-1023 is designated for use by common system and network services. Ports beyond this range can also be registered (e.g., in the Service Name and Transport Protocol Port Number Registry or the List of TCP and UDP port numbers). System security settings could also prevent applications from using specific port numbers.
The solution is to look for, and use, port numbers that are available to user applications. Ports currently in use on Windows can be found with netstat -an; what can be used is governed by TCP/IP and firewall settings. The simplest approach is to try some other port numbers, e.g. in the range 1024-49151.
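As an illustration of that last point (in Java rather than Eiffel, purely as a sketch), one way to find a usable port is simply to try binding candidates in the user range until one succeeds:

    import java.io.IOException;
    import java.net.ServerSocket;

    public class FindFreePort {
        // Returns the first port in [from, to] that we can bind, or -1 if none.
        // Note the inherent race: another process may grab the port after we
        // release it, so ideally the server keeps the successful socket open.
        static int firstFreePort(int from, int to) {
            for (int port = from; port <= to; port++) {
                try (ServerSocket probe = new ServerSocket(port)) {
                    return port; // bind succeeded, so the port was free
                } catch (IOException inUse) {
                    // port in use or reserved; try the next one
                }
            }
            return -1;
        }

        public static void main(String[] args) {
            System.out.println(firstFreePort(1024, 49151));
        }
    }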

Simulate network lag when client and server are on the same dev PC

With my limited resources, and to aid debugging, I am testing a client-server (game) application locally by running both the server and one or more clients on my Windows 7 dev PC. Both client and server are Java applications developed in Eclipse.
Is there any easy way to introduce lag, given that everything is running on the same PC? Maybe by 'hacking' the port used, or something? Or is this only possible if each application runs on a separate PC (or separate VM)?
Add a feature to the server that introduces a random delay within a certain time range when the detected connection comes from localhost. You can then switch this feature on and off as needed.
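Since both sides are Java (per the question), a minimal sketch of that idea could look like the following; the 50-250 ms range and the toggle flag are illustrative assumptions:

    import java.net.Socket;
    import java.util.concurrent.ThreadLocalRandom;

    public class LagSimulator {
        static volatile boolean simulateLag = true; // flip on/off as needed

        // Call this in the server before handling each request from a client.
        static void maybeLag(Socket client) throws InterruptedException {
            if (simulateLag && client.getInetAddress().isLoopbackAddress()) {
                // Sleep a random 50-250 ms to mimic network latency.
                Thread.sleep(ThreadLocalRandom.current().nextLong(50, 250));
            }
        }
    }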

How to capture loopback traffic in Windows Server 2008

Setup:
I have client C connecting to server S
Both C and S are on the same machine
In C the server address is hardcoded to 127.0.0.1. Likewise, in S the client address is hardcoded to 127.0.0.1
Problem:
I want to be able to sniff the traffic between the client and the server.
Due to the configuration, I cannot move the client or the server to different locations (the addresses are hardcoded)
Installing the loopback interface and using tools like Wireshark+WinPcap doesn't lead anywhere (this was actually already known, but it was worth a try)
RawCap, suggested in another topic, doesn't work: IP 127.0.0.1 is listed, but it does not record any traffic
Using rinetd to route the traffic elsewhere, as suggested here, doesn't work (it cannot bind to 127.0.0.1)
I'm not interested in using a local HTTP proxy such as Fiddler, because I'd like to capture other protocols too
Two commercial tools work, specifically CommView and Local Network Monitor, which means it must be possible to do this ;)
What can I do to capture the traffic?
Any pointer on functions I should use or documentation I should read?
Thanks!
Basically you need to write a TDI filter driver to achieve that... for some pointers see:
http://msdn.microsoft.com/en-us/library/windows/hardware/ff565685%28v=vs.85%29.aspx
http://msdn.microsoft.com/en-us/library/windows/hardware/ff563317%28v=VS.85%29.aspx
Another option is to write a WinSock LSP.
BEWARE
Since Windows 8, it is strongly encouraged to use WFP (the Windows Filtering Platform) for this sort of thing...
Although it might be more cost-effective to just use/buy an existing solution, especially if you are not a very experienced driver developer...
Use RawCap, which can solve your concerns; see this