select() Alternative for Windows for both console input and sockets? - perl

On Linux, select() works for both sockets and console input, but on Windows it only works for sockets.
This is a problem because I want to write a Perl console client that connects to a server, prints and parses messages, and is notified when the user types commands into the console. Something like a chatroom, where I can both print incoming messages to the console and read user input to send messages.
Is there any alternative way to do this on Windows, or am I forced to write a windowed application instead of using the console?

Unfortunately not. In fact, this is one of the core problems in porting asynchronous software to Windows.
About the closest thing you can get is WaitForMultipleObjectsEx, which has all sorts of interesting and well-known issues (such as its 64-handle limit, and how it copes with more than one handle being ready at once). But if you want to multiplex console and network socket I/O, it's about the only option on Windows.
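For reference, here is a minimal sketch (host and port are placeholders) of the kind of select()-based loop the question describes, using IO::Select. It works as expected on Linux/*NIX; on Windows, select() simply never reports STDIN as ready, so the console half of the loop is dead.
#!/usr/bin/perl
# Sketch only: placeholder host/port. On Linux this multiplexes the
# socket and the console; on Windows STDIN is never reported ready.
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;

my $server = IO::Socket::INET->new(
    PeerAddr => 'chat.example.com',   # placeholder server
    PeerPort => 7000,                 # placeholder port
    Proto    => 'tcp',
) or die "connect failed: $!";

my $sel = IO::Select->new($server, \*STDIN);

while (my @ready = $sel->can_read) {
    for my $fh (@ready) {
        if ($fh == $server) {
            sysread($server, my $buf, 4096) or die "server closed\n";
            print $buf;                              # show the message
        }
        else {
            sysread(STDIN, my $buf, 4096) or exit;   # console EOF
            print {$server} $buf;                    # send the typed line
        }
    }
}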

Related

Reading messages from a CAN bus and displaying them on a Raspberry Pi webpage

This is a theoretical question. I have no code and I am not looking for any, just knowledge.
I have a Raspberry Pi with a webserver and a Waveshare CAN HAT. It receives various messages from a dozen devices.
Among those messages, a few contain data (some pieces of information are split across multiple messages).
My idea is to receive the messages, reassemble the complete information, and write one file for each.
Then an AJAX call reads each file and displays each value on the webpage, probably once every second.
Is it possible to do that? Is there a better way?
The receiving program will be written in C.
Thank you for helping and sharing your knowledge!
I think it would be better practice to create some kind of process (or a kernel module or daemon, for example) that reads the data from the CAN bus, and then use that process from Python with some sort of webserver API to display the data on the web.
You can find some ideas for IPC between a C and a Python application here.
So one simple solution would be to create a socket system with a C guest and a Python master. Your Python application is a Flask application that waits for a connection from the C application (or vice versa), and the C application transmits all incoming data to your Python application.
This would be a neater solution than writing and reading files.

What's the conventional way to send commands to running processes?

Is there a conventional way to write a program such that commands can be issued to it from the command line without a REPL? For example, the way you can send commands to a running nginx server using sudo /etc/init.d/nginx restart (or any other valid command besides restart).
One idea I had was having the long-running program create and monitor a Unix socket that other programs can write to in order to send it commands. Another was to create a local server with a REST interface that can be sent commands that way, though that seems a bit gross.
What's the right way to do this?
Both ways are OK, and you could even consider using some RPC machinery, such as making your application serve JSON-RPC on some unix(7) socket. Or use a fifo(7). Or use D-Bus.
A common habit on Unix is to have applications reload their configuration files on, e.g., the SIGHUP signal, and save some persistent state (before terminating) on SIGTERM. Read signal(7) (notice that only async-signal-safe routines can be called from signal handlers; a good approach is to only set some volatile sig_atomic_t variable inside the handler and test it outside). See also the POSIX signal.h documentation.
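For illustration, here is the same flag-setting idiom sketched in Perl (in C you would use a volatile sig_atomic_t rather than an ordinary variable):
#!/usr/bin/perl
# Flag-setting idiom: the handlers only set flags; the main loop
# checks and acts on them.  Sketch only.
use strict;
use warnings;

my ($reload, $stop) = (0, 0);
$SIG{HUP}  = sub { $reload = 1 };
$SIG{TERM} = sub { $stop   = 1 };

until ($stop) {
    if ($reload) {
        $reload = 0;
        # re-read configuration files here
    }
    # ... do the normal work of the main loop ...
    sleep 1;    # placeholder; a real loop would block in select/poll
}
# save persistent state here before exiting on SIGTERM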
You might make your application become a specialized HTTP server (e.g. using some HTTP server library like libonion) and give it some Web interface (or REST, or SOAP ...); the user (or sysadmin) will then use his browser to interact with your application.
You could make your server systemd-compatible. (I don't know exactly what that requires; it is perhaps D-Bus related.)
You could embed some command interpreter (like Guile or Lua) in your app and have some limited kind of REPL running over some IPC channel like a socket or a FIFO. Beware of nasty code injection.
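As a concrete illustration of the unix(7)-socket suggestion (and of the asker's first idea), here is a minimal sketch in Perl. The socket path and the command names are made up, and a real daemon would multiplex this listener with its normal work (via select()/poll()) rather than blocking in accept().
#!/usr/bin/perl
# Command channel over a Unix-domain socket.  The path and the command
# names ("status", "reload", "stop") are hypothetical.
use strict;
use warnings;
use Socket;
use IO::Socket::UNIX;

my $path = '/tmp/mydaemon.sock';     # hypothetical socket path
unlink $path;                        # clean up a stale socket, if any

my $listener = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM,
    Local  => $path,
    Listen => 5,
) or die "cannot listen on $path: $!";

while (my $client = $listener->accept) {
    my $cmd = <$client>;
    next unless defined $cmd;
    chomp $cmd;
    if    ($cmd eq 'status') { print $client "running\n" }
    elsif ($cmd eq 'reload') { print $client "ok\n" }       # re-read config here
    elsif ($cmd eq 'stop')   { print $client "bye\n"; close $client; last }
    else                     { print $client "unknown command: $cmd\n" }
    close $client;
}
unlink $path;
A controlling command can then be as small as echo reload | nc -U /tmp/mydaemon.sock (with a netcat that supports Unix sockets), or whatever init script or wrapper you prefer.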
I had a similar issue where I have a plethora of services running on any number of machines and each is in need of communicating with several others.
My main problem was not so much the communication between the services. That can be done with a simple message sent over a connection (as Basile mentioned, it can be TCP, UDP, Unix sockets, FIFOs...). However, when you have over 20 services, many of which need to communicate with several other services, you start getting a headache figuring out how to get all the connections right (I have such a system, although it has a relatively limited number of services, just about 10, and that's already very complicated).
So I created a process (yet another service) called Communicator. All services connect to the Communicator service and when they need to send a message, they include the name of the service they want to reach. The Communicator service is in charge of sending the message to the right place; i.e., it could be to another Communicator service running on a different computer. Communicator has a graph of all the services available on your network and knows how to send messages to them without your service having to know anything about all of that. Computing a graph can be really complex.
For that purpose, I created the eventdispatcher project. It is in C++, which may not be what you're interested in, although you could use it from other languages that interface with C/C++. The structure of the messages is "proprietary" (specific to the Communicator), but you can create any message you want. A message includes a name and parameters (param-name=value). The first version has a simple one-line text communication system. The newer version accepts JSON as well (it still must be one line of text per message).
The system supports TCP, UDP, Unix sockets, and FIFOs, and between threads you can have thread-safe FIFOs. It also understands signals (like SIGHUP, SIGTERM, etc.). It has a specific connection to listen for the death of a thread. It supports encryption over TCP via OpenSSL. Messages can automatically be dispatched (hence the current name of the library). Connections are assigned a timer. And there are CUI and GUI (Qt) extensions as well.
The one main point here is that all your connections can be polled (see poll()), and thus you can implement a system that reacts to events instead of a system that sleeps and checks for events, sleeps and checks, etc., or worse, one with a single blocking connection where everything has to happen on that one connection or your service gets stuck. This is one reason Unix has been using signals: early versions of Unix did not have select() or poll().

Detecting VoIP Calls from command line - Wireshark

I'm using Wireshark to sniff the network and detect VoIP calls. Detected VoIP calls can be seen in the GUI (Telephony -> VoIP Calls).
Now I want to get this list from the command line. I searched through the Wireshark documentation, but couldn't find a command to do that.
I'm using commands like
tshark -r myFile -R "sip.CSeq.method eq INVITE"
from this topic:
Filtering VoIP calls with tshark
Is there a command to show that VoIP call list from the command line, or do I have to parse the output and create my own list? Can you suggest any other tool to do that?
Any help would be greatly appreciated.
I don't know of any way to coax tshark into giving you what the Wireshark GUI does. You can do this by post-processing the output from tshark, but it will be a fair amount of work. One approach would be to:
Have tshark display the full details of the SIP packets (e.g., with -V)
Pipe this to a process that will extract info from each packet. This process will need to detect packet boundaries, since the input will have multiple lines per packet.
This process will need to store selected info from these packets (such as From, To, Start Time, etc.) and correlate this info across packets based on dialog identifiers.
The process will need to understand the SIP protocol well enough to determine when calls are confirmed, terminated, etc.
This is certainly doable, but I wanted you to know what you are getting into.
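To give a feel for it, here is a rough Perl sketch of the first part of that approach: run tshark with -V, split its output on packet boundaries, and group From/To by Call-ID. The SIP state tracking described in the last step is left out, and the regular expressions are guesses at the -V text layout that will likely need adjusting for your tshark version.
#!/usr/bin/perl
# Rough sketch: post-process "tshark -V" output.  The regexes below
# assume a particular text layout and are likely to need tweaking.
use strict;
use warnings;

my $capture = shift @ARGV or die "usage: $0 capture.pcap\n";
open my $tshark, '-|', 'tshark', '-r', $capture, '-V', '-Y', 'sip'
    or die "cannot run tshark: $!";   # filter flag: -Y on recent tshark, -R on older ones

my %dialogs;    # Call-ID => { from, to, methods }
my %pkt;        # fields of the packet currently being parsed

sub flush_packet {
    if (my $id = $pkt{callid}) {
        my $d = $dialogs{$id} ||= {};
        $d->{from} ||= $pkt{from};
        $d->{to}   ||= $pkt{to};
        push @{ $d->{methods} }, $pkt{method} if $pkt{method};
    }
    %pkt = ();
}

while (my $line = <$tshark>) {
    flush_packet() if $line =~ /^Frame \d+:/;          # packet boundary
    $pkt{from}   = $1 if $line =~ /^\s+From:\s+(.+?)\s*$/;
    $pkt{to}     = $1 if $line =~ /^\s+To:\s+(.+?)\s*$/;
    $pkt{callid} = $1 if $line =~ /^\s+Call-ID:\s+(\S+)/;
    $pkt{method} = $1 if $line =~ /^\s+Method:\s+(\S+)/;
}
flush_packet();

for my $id (sort keys %dialogs) {
    my $d = $dialogs{$id};
    printf "%s -> %s (%s)\n",
        $d->{from} // '?', $d->{to} // '?',
        join(',', @{ $d->{methods} || [] });
}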
An alternative to a separate process (though one I have no experience with) is to write a Wireshark script in Lua, and invoke it via tshark -Xlua_script:my_script.lua (using a version of tshark compiled with Lua support). An example to help you get started can be found here under the example "Dump VoIP calls into separate files" (or similarly here on Google Code). The advantages are:
You automatically have access to the parsed SIP message.
It is easy to tell where the packet begins and ends.
Everything runs in a single process.
For me, the downside is that I would have to learn a new language (not the worst thing in the world).
EDIT: It looks like the SIP dissector in Wireshark/tshark can help quite a bit if you use the Lua script approach; for instance, you can inspect sip.response-request on a SIP response to find the packet number of the matching request.

Single client talking to multiple Servers

I'm working on a project where I have a single client that needs to open a Telnet session to multiple servers (100) and wait for messages. The messages are small (< 80 bytes) and will occur at random.
I've read that it's bad form to do this by creating a thread for each "server". I'm looking for suggestions on the best way to handle the multiple sites with TCPClient, or Winsock, or Catalyst, or ???
Thanks for the help!
Gary M
Since this is the Windows platform, there are many options. You can use the Winsock select function, or WSAPoll, or WSAAsyncSelect, or completion ports.
select/WSAPoll work almost as they do in POSIX, and there are plenty of examples and some ready-made libraries showing how to use them.
WSAAsyncSelect will send events to the UI thread (you need to have a window for that). If your application has a window, this might be the simplest option, as all activity will occur in the window thread and the library takes care of event serialization.
Also take a look at this (it is important since you have more than 64 connections):
http://msdn.microsoft.com/en-us/library/windows/desktop/ms739169(v=vs.85).aspx
Using Windows completion ports:
http://msdn.microsoft.com/en-us/magazine/cc302334.aspx
http://msdn.microsoft.com/en-us/magazine/ms810436.aspx
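If you end up doing this from Perl (the language of the question at the top of this page), a select()-based version is roughly the sketch below. The addresses are placeholders, Telnet option negotiation is ignored, and keep in mind the 64-handle/64-socket limits discussed in the links above once you go past 64 connections.
#!/usr/bin/perl
# Sketch: one client watching many servers with select() via IO::Select.
# Addresses are placeholders; Telnet option negotiation is not handled.
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;

my @servers = map { "10.0.0.$_:23" } 1 .. 100;   # placeholder addresses

my $sel = IO::Select->new;
my %peer;                                        # socket => "host:port"

for my $addr (@servers) {
    my $sock = IO::Socket::INET->new(
        PeerAddr => $addr,
        Proto    => 'tcp',
        Timeout  => 5,
    ) or do { warn "cannot connect to $addr: $!\n"; next };
    $sel->add($sock);
    $peer{$sock} = $addr;
}

while ($sel->count) {
    for my $sock ($sel->can_read) {              # block until something arrives
        my $n = sysread($sock, my $buf, 512);    # messages are < 80 bytes
        if (!$n) {                               # connection closed
            $sel->remove($sock);
            delete $peer{$sock};
            close $sock;
            next;
        }
        print "$peer{$sock}: $buf";
    }
}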

What's more portable in Perl, sockets or named pipes (fifos)?

I'm writing some Perl code. I want it to run on Windows and Linux/UNIX/OSX. So far it works on *NIX and uses fifos.
I am considering switching to sockets to avoid the problem that POSIX::mkfifo() doesn't work on Windows, which forces me to write separate code that uses Win32::Pipe.
I'm feeling ambivalent about the whole thing. It seems to me both fixes require about the same amount of work. Is it a good idea to switch to sockets?
Short answer: IO::Socket::INET works on both Windows and *NIX.
Named Pipes
Slightly easier to code up quickly. You don't need to write connect code.
Slightly faster. Sockets have the overhead of TCP and setting up the initial connection.
Works on all platforms.
Works even when network card doesn't exist. Some laptops shut down the network card to save power which can prevent even local sockets from working.
Sockets
Works on all platforms. However, some laptops shut down the network card to save power and even local sockets won't work if there is no network interface.
More portable in Perl. IO::Socket::INET works on both *NIX and Windows.
Allows you to have a separate conversation with each client.
Firewalls are not a problem. Ports over 1024 should work.
Personally, I've decided to switch to sockets. In my application it doesn't matter much. But I think it makes the code a bit simpler, gives me the flexibility to move to > 1 client in the future, and I want to learn IO::Socket anyway.
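For what it's worth, a FIFO-style one-way channel over IO::Socket::INET can be as small as the sketch below; run it once with "listen" and once with "send". The port number is arbitrary, and it binds to 127.0.0.1 so it stays local-only, as the next answer also recommends.
#!/usr/bin/perl
# Sketch: a local-only reader/writer pair over IO::Socket::INET.
# The port number is arbitrary.
use strict;
use warnings;
use IO::Socket::INET;

my $mode = shift @ARGV || 'listen';
my ($host, $port) = ('127.0.0.1', 5900);

if ($mode eq 'listen') {
    my $listener = IO::Socket::INET->new(
        LocalAddr => $host,
        LocalPort => $port,
        Proto     => 'tcp',
        Listen    => 1,
        ReuseAddr => 1,
    ) or die "listen failed: $!";
    my $conn = $listener->accept or die "accept failed: $!";
    print "reader got: $_" while <$conn>;        # read like a fifo
}
else {
    my $sock = IO::Socket::INET->new(
        PeerAddr => $host,
        PeerPort => $port,
        Proto    => 'tcp',
    ) or die "connect failed: $!";
    print {$sock} "hello from the writer\n";     # write like a fifo
}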
Answering more generically (i.e., this is not Perl-specific):
Doing this sort of thing on Windows vs. the rest of the world almost always requires separate code for Windows vs. everything else. Pretty much everything else has good solutions for things like this, such as Unix-domain sockets or fifos or ... Then on Windows you have to fall back to sockets.
The right thing to do, IMHO, would be to use a proper solution on Windows too, one that isn't network sockets, because network sockets open the application up to security issues. So on everything else, "do it correctly", and then on Windows fall back to something like network sockets instead. But if you take the network-socket route, make sure you at least use local sockets only (i.e., bound to 127.0.0.1).
For Perl, I'd be tempted to look on CPAN for a class that has already made this generic. But... I wouldn't be surprised if nothing exists.
LWP::socket works fine on Windows and *NIX. If you opt for sockets over fifos, you will eventually be able to have Windows and *NIX processes communicate with each other. Maybe you don't need that today, but who knows.
IIRC, later versions of Perl have a working socketpair on Windows.
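If you want to rely on that, it is worth a quick test on your Windows build; the call itself is the usual idiom from perlipc, and on platforms without a native socketpair Perl is supposed to emulate it with a pair of connected local sockets.
#!/usr/bin/perl
# Quick socketpair() sanity check; on Windows this depends on Perl's
# emulation being available in your build.
use strict;
use warnings;
use Socket;

socketpair(my $left, my $right, AF_UNIX, SOCK_STREAM, PF_UNSPEC)
    or die "socketpair failed: $!";

syswrite($left, "ping\n");
sysread($right, my $reply, 16);
print "got: $reply";              # should print "got: ping"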