LwIP on STM32F769 - RTOS

I'm trying to make a webserver out of my STM32F769I-Discovery board with RTOS and LwIP. It's supposed to return a few simple html/image files.
Here's the link to the full code I have so far:
https://github.com/xtrinch/stm32f7-demos/tree/master/05-rtos-lwip
Note that it's 90% copied from STM32Cube_FW_F7_V1.7.0\Projects\STM32F769I-Discovery\Applications\LwIP\LwIP_HTTP_Server_Socket_RTOS.
50% of the time the board gets an IP from DHCP; the other 50% of the time the DHCP requests time out.
When the board does get an IP from DHCP, I can ping it, but when I try to access it via a browser, it doesn't return anything and ping stops working after the attempt.
If the IP is assigned statically with LwIP, I cannot ping it at all.
I have zero idea where to start. Maybe there's an issue with my RTOS thread priorities? I have the TCP/IP thread at osPriorityHigh, the DHCP thread at osPriorityBelowNormal, and the webserver thread at osPriorityAboveNormal.
My webserver thread successfully binds itself to port 80, but the following:
newconn = accept(sock, (struct sockaddr *)&remotehost, (socklen_t *)&size);
is never executed.
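For reference, here is a minimal sketch of what such a webserver thread is usually expected to do with the LwIP sockets API (lwip/sockets.h), assuming the compat-socket names (socket/bind/listen/accept); the thread name and exact setup below are illustrative, not taken from the linked project:

```c
/* Minimal sketch of an LwIP sockets-API server thread (illustrative;
 * not copied from the linked project). Error handling is omitted. */
#include <string.h>
#include "lwip/sockets.h"

static void http_server_thread(void const *argument)
{
    int sock, newconn;
    struct sockaddr_in address, remotehost;
    socklen_t size = sizeof(remotehost);

    /* Create a TCP socket and bind it to port 80 on any interface. */
    sock = socket(AF_INET, SOCK_STREAM, 0);
    memset(&address, 0, sizeof(address));
    address.sin_family = AF_INET;
    address.sin_port = htons(80);
    address.sin_addr.s_addr = INADDR_ANY;
    bind(sock, (struct sockaddr *)&address, sizeof(address));

    /* Listen and serve connections forever; accept() blocks until the
     * tcpip thread delivers an incoming connection. */
    listen(sock, 5);
    for (;;) {
        newconn = accept(sock, (struct sockaddr *)&remotehost, &size);
        /* ...serve the request on newconn here... */
        close(newconn);   /* maps to lwip_close() with compat sockets */
    }
}
```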

Related

Lua Networking - Passing data through a 'closed' port

This might be a bit weird to explain, but I'll try my best.
I have a Lua program that's intended to serve some data through the network. Specifically, the internet. The data the program is actually transmitting are only strings stored within UDP packets. Generalized, this is how the program operates:
The first client launches the program and specifies that they are the 'host' of the connection. The program opens a connection on UDP port 6000 and the main loop listens for any packets received on said port.
The second client launches the program and specifies that they are to connect to the 'host' on port 6000. The user enters the IP, and the client opens a UDP connection using a random port between 6050 and 7000.
When the client successfully connects to the server, they send a 'connection' packet, simply containing a '202 OK' string. The 'host' receives this and registers the new client.
Now that the connection has been initialized, the programs can send data between each other using the registered connection details.
Now, on a local network this program works fine. The purpose of the 'host' mode is to have multiple clients connect to the host and have the host relay packets from one client to all the currently registered clients. Port selections are arbitrary, and random port selection on the client side was simply to allow debugging and testing from a single computer. This has been tested between two and more computers on a physical network, and worked successfully. However, when I attempt to run this over the internet it's a no-go. I know that the ports are closed and that's why it's not working. But seeing as I'm going to be distributing this program (privately), I can't expect every person to open ports on their router (or know how to). Security is not currently a concern with the program and should be disregarded in its current state. That being said, I recognise there's the potential for a lot to go wrong with the use of this program over the network, and I accept that. Onto the main question: how can I have the host and client communicate over the internet without having to open ports?
I'll elaborate - for example, browsers. Although the technology is quite different to what I'm doing, it's easier to paint a picture - the browser requests data from a web server, and it gets sent back to the client. But wait, if the router is in its default state (I hope), all the ports are closed? So how does the client receive this data if the port is closed?
I hope this makes some kind of sense and I don't sound like a complete fool.
I managed to find some suitable libraries and utilities to be able to communicate through the internet (NAT traversal is now a term I am familiar with), namely those supplied with Nmap. These include an implementation of STUN in Lua, among HEAPS of other useful networking-related libraries and scripts.
To actually answer my own question (very simply): the clients and servers are behind what's known as a NAT gateway. Because IPv4 has a limited address space (a total of about 4.2 billion addresses), NAT gateways were introduced to work around this limitation by separating the clients' internal network from the external network - in this case the internet. The NAT gateway is given a single public IP address, and it in turn gives every device on the internal network an address local to that network. As such, the devices cannot be accessed directly without forwarding connections from the NAT gateway (generally the router) to the client. However, when using UDP connections the NAT gateway opens a port for the purposes of that connection, which gets closed after the connection dies. The port that is opened differs from the one specified by the client when it opens the connection, which is where the STUN methods come in. STUN allows the host to find the port that the client is connecting from and send data back to this port so the user can receive it. Bear in mind this is an EXTREMELY simple explanation of how the technology works, and I'd suggest reading up on the wiki and some of the RFCs for STUN.
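To make the mechanism a bit more concrete, here is a minimal sketch of a STUN Binding Request in C, assuming a reachable public STUN server (the hostname below is illustrative and error handling is omitted). It asks the server which external port the NAT mapped the request to:

```c
/* Minimal STUN Binding Request sketch (RFC 5389). The server name is
 * a placeholder; error handling is omitted for brevity. */
#include <netdb.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* 20-byte STUN header: Binding Request, zero-length body, magic
     * cookie 0x2112A442, 12-byte transaction ID (should be random;
     * left as zeros here for brevity). */
    uint8_t req[20] = {0};
    req[0] = 0x00; req[1] = 0x01;              /* Binding Request */
    req[4] = 0x21; req[5] = 0x12;              /* magic cookie    */
    req[6] = 0xA4; req[7] = 0x42;

    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct addrinfo hints = {0}, *srv;
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_DGRAM;
    getaddrinfo("stun.example.org", "3478", &hints, &srv); /* placeholder server */

    sendto(s, req, sizeof(req), 0, srv->ai_addr, srv->ai_addrlen);

    uint8_t resp[512];
    ssize_t n = recvfrom(s, resp, sizeof(resp), 0, NULL, NULL);

    /* Walk the attributes looking for XOR-MAPPED-ADDRESS (0x0020). */
    for (ssize_t i = 20; i + 4 <= n; ) {
        uint16_t type = (resp[i] << 8) | resp[i + 1];
        uint16_t len  = (resp[i + 2] << 8) | resp[i + 3];
        if (type == 0x0020) {
            /* X-Port is the mapped port XORed with the top 16 bits of
             * the magic cookie (0x2112). */
            uint16_t xport = (resp[i + 6] << 8) | resp[i + 7];
            printf("external (NAT-mapped) port: %u\n", xport ^ 0x2112);
            break;
        }
        i += 4 + ((len + 3) & ~3);             /* values are 32-bit aligned */
    }

    freeaddrinfo(srv);
    close(s);
    return 0;
}
```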

Windows 7 temporarily routes UDP packets for local network to default gateway

I have a Windows service running on a multi-homed Windows 7 machine, communicating via UDP with a machine on the local network. This works fine, except that sometimes during Windows startup the network traffic is temporarily (for about 30 seconds) routed to the default gateway, resulting in UDP packet loss. This packet loss is not necessarily a problem, but it leads to an unnecessarily long startup time for the application.
The service binds the socket using INADDR_ANY. Now, when I change this to bind to the IP address of the control network NIC (192.168.32.1), I don't observe the problem. However, I don't understand why the binding matters in this situation, nor why the problem is only there temporarily. Do any of you have an explanation for this?
Besides my curiosity to find the root cause of this issue, I would also like to get an answer to this question so I can remove the bind to the specific IP address from my code. This decouples my application code from the network layout.
Network details:
Machine A, Windows 7, two NICs:
NIC #1 (ext network): 192.168.116.x/23 (DHCP), gateway 192.168.117.1
NIC #2 (int network): 192.168.32.1/26 (fixed)
Machine B, VxWorks, one NIC:
NIC #1 (int network): 192.168.32.16/26 (DHCP, assigned by Machine A)
When using INADDR_ANY, you bind your socket to the wildcard address, so the source address and route of outgoing packets are chosen by the OS at send time. From the symptoms you are describing, it seems like the internal interface is not yet configured during startup, which makes sense.
The question is, why do you bind the sending socket to any address at all? Implicit binding during send should be fine for you, I imagine.
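As a rough illustration of the difference being discussed (BSD-style calls; the Winsock equivalents are the same apart from WSAStartup/closesocket), binding to the internal NIC's address forces that address as the source, while binding to INADDR_ANY leaves the source address, and effectively the route, to the OS when you send:

```c
/* Sketch only: shows the two binding choices from the question above.
 * The 192.168.32.1 address is the internal NIC from the question. */
#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>

int make_udp_socket(int bind_to_nic)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_port = htons(0);                  /* any local port */
    local.sin_addr.s_addr = bind_to_nic
        ? inet_addr("192.168.32.1")             /* fixed source: internal NIC */
        : htonl(INADDR_ANY);                    /* wildcard: chosen per send */
    bind(s, (struct sockaddr *)&local, sizeof(local));
    return s;
}
```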

What occurs when socket's IP changed, and how to deal with it?

I'm developing a client/server program in Delphi 7 using the TServerSocket and TClientSocket controls. One problem is that at the moment I can only use my PC as the server, and my PC uses a virtual dialer, so my ISP keeps changing my IP, about once every one or two days.
Because I'm behind a router, the ServerSocket is opened directly on my local IP (192.168.1.x) and just mapped to my public IP, so I suppose the ServerSocket itself shouldn't crash when my public IP changes. What I expect to happen is: when my IP changes, all connected sockets become unavailable, and when my application doesn't know it and still uses a socket, the ServerSocket should receive some event like OnClientError.
But I found a weird problem - when my IP changed, the server application shut down by itself. I don't know exactly what happened, because it shut down in the afternoon while I was at my office, but I noticed something else: even though I use a heartbeat in my application-layer protocol, the server didn't catch the keep-alive failure - I record everything in a log file on my server and didn't find it there. So my server must have shut down the instant my IP changed, before it even reached the keep-alive logic.
This seems very weird - how can a socket error (due to an IP change) directly shut down the whole application? If anyone has an explanation, and advice on how to deal with this problem, thanks.
Once the socket is opened, its bound IP address will never change. This cannot be 'fixed' on the server side. I would recommend working on the server's stability; the clients should also detect that the server no longer exists at the given IP address and re-connect. (This is independent of why the server became unavailable - a restarting server is normal.)

Sockets on a webhost

If you telnet to the IP address 192.43.244.18 on port 13, you'll get the current time.
Well, if I'm not wrong, this is simply a server socket. But there's one strange thing: how is this socket always listening?
If I take a PHP page and program sockets in there, I still have to request the page first in order to activate the server socket, but this one isn't associated with any page, and even if I make a Perl script, I still have to request it in order to run the server socket!
My question is: how can I make such a thing - an always listening socket - on a webhost (any language will do)?
You can run the process that's listening on the socket as a daemon (Linux) or service (Windows), or just a regular program really (although that's less elegant).
A simple place to begin would be http://docs.oracle.com/javase/tutorial/networking/sockets/clientServer.html which teaches you how to make a simple serversocket in Java that listens for a connection on a specific port. The program created will have to be run at all times to be able to accept the connections.
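As a rough sketch of the same idea in C (the linked Java tutorial does the equivalent with ServerSocket), an "always listening" service is just a long-lived process that loops on accept(); the port and reply below are made up for illustration:

```c
/* Minimal always-listening TCP server, meant to run as a long-lived
 * process (daemon/service). It answers every connection with a fixed
 * line, roughly like the daytime service on port 13. */
#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(1313);                /* illustrative port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 8);

    for (;;) {                                  /* always listening */
        int c = accept(srv, NULL, NULL);
        const char *msg = "hello from an always-on socket\r\n";
        write(c, msg, strlen(msg));
        close(c);
    }
}
```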

Communication protocols in UDP

After many hours, I have discovered that the given UDP server requires the following steps for successful communication:
1. Send a "Start Message" on a given port.
2. Wait to receive from the server on any port.
3. The port dedicated to you for sending further data to the server is the port you received on, plus 1.
So I am asking whether this is a known protocol/handshake, or whether it is specific to this server?
PS: All of the above communication was done with UDP sockets in C#.
PS: Related to a previous question: About C# UDP Sockets
Thanks
There's no special "handshake" for UDP -- each UDP service, if it needs one, specifies its own. Usually, though, a server doesn't expect the client to be able to listen on all of its ports simultaneously. If you mean that the client expects a message from any port on the server, sent to the port the client sent the start message from, then that makes a lot more sense -- and is very close to how TFTP works. (The only difference I'm seeing so far is that TFTP doesn't do the "+ 1".)
The server is, effectively, listening on a 'well known port' and then switching subsequent communications to a dedicated port per client. Requiring the client to send to the port + 1 is a little strange, though:
Client 192.168.0.1 - port 12121 ------------------------> Server 192.168.0.2 - port 5050
Client 192.168.0.1 - port 12121 <------------------------ Server 192.168.0.2 - port 23232
Client 192.168.0.1 - port 12121 ------------------------> Server 192.168.0.2 - port 23232 + 1
                                <------------------------ Server 192.168.0.2 - port 23232
                                ------------------------> Server 192.168.0.2 - port 23232 + 1
The server probably does this so that it doesn't have to demultiplex the inbound client data based on the client's address/port. Doing it this way is a little more efficient (generally) and also has some advantages, depending on the design of the server. On the server there's a 'dedicated' socket for you, which means that if they're doing overlapped I/O the socket stays the same for the whole period of communication with you; that can make it easier and more efficient to associate data with the socket (this way they can probably avoid any lookups or locking to process each datagram). Anyway, enough of that (see here, if you want to know why I do it that way).
From your point of view as a client (and I'm assuming async sockets here) you need to first Bind() your local socket (just use INADDR_ANY and 0 to allow the OS to pick the port for you) then issue a RecvFrom() on the socket (so there's no race between you sending data to the server on this socket and it sending you data back before you issue a recv). Then issue a SendTo() to the 'well known port' of the server. The server will then send you back some data and your RecvFrom() will return you the data and the address that the server sent to you from. You can then take that address, add one to the port, store that address and from then on issue SendTo()s to that new sending address whilst continuing to issue RecvFrom()s for reading the server's data; or you could do something clever with Connect() to bind the remote end of the socket to the server's 'send to address' and simply use Write() and RecvFrom() from then on.
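As a rough illustration of that sequence, here is a blocking BSD-sockets sketch in C (the answer above assumes async C# sockets, so the pending RecvFrom() simply becomes a recvfrom() after the send here); the server address and "start message" are made up for the example:

```c
/* Blocking sketch of the client sequence described above. The server
 * address, well-known port, and payloads are illustrative only. */
#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    /* Bind to INADDR_ANY and port 0 so the OS picks the local port. */
    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(0);
    bind(s, (struct sockaddr *)&local, sizeof(local));

    /* Send the "start message" to the server's well-known port. */
    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_addr.s_addr = inet_addr("192.168.0.2");
    server.sin_port = htons(5050);
    sendto(s, "START", 5, 0, (struct sockaddr *)&server, sizeof(server));

    /* The reply tells us which address/port the server answered from. */
    char buf[512];
    struct sockaddr_in from;
    socklen_t fromlen = sizeof(from);
    recvfrom(s, buf, sizeof(buf), 0, (struct sockaddr *)&from, &fromlen);

    /* All further sends go to that port + 1; keep reading on 's'. */
    struct sockaddr_in dest = from;
    dest.sin_port = htons(ntohs(from.sin_port) + 1);
    sendto(s, "DATA", 4, 0, (struct sockaddr *)&dest, sizeof(dest));

    close(s);
    return 0;
}
```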