In GTK applications, all execution takes place inside the gtk_main function. Other graphical frameworks have similar event loops: app.exec for Qt and clutter_main for Clutter. ZeroMQ, however, is built on the assumption that there is a while (1) ... loop that its code is inserted into (see for instance here for examples).
How do you combine those two execution strategies?
I currently want to use ZeroMQ in a Clutter application written in C, so I would of course like direct answers for that combination, but please add answers for other variants as well.
The proper way to combine ZeroMQ with GTK or Clutter is to connect the file descriptor of the ZeroMQ socket to the main event loop. The fd can be retrieved with:
int fd;
size_t sizeof_fd = sizeof(fd);
if (zmq_getsockopt(socket, ZMQ_FD, &fd, &sizeof_fd))
    perror("retrieving zmq fd");
Connecting it to the main loop is then a matter of using g_io_add_watch:
GIOChannel* channel = g_io_channel_unix_new(fd);
g_io_add_watch(channel, G_IO_IN|G_IO_ERR|G_IO_HUP, callback_func, NULL);
In the callback function, it is necessary to check whether there is actually something to read before reading; otherwise the call might block waiting for I/O.
gboolean callback_func(GIOChannel *source, GIOCondition condition, gpointer data)
{
    uint32_t status;
    size_t sizeof_status = sizeof(status);

    while (1) {
        if (zmq_getsockopt(socket, ZMQ_EVENTS, &status, &sizeof_status)) {
            perror("retrieving event status");
            return 0; // this just removes the callback, but probably
                      // different error handling should be implemented
        }
        if ((status & ZMQ_POLLIN) == 0) { // parentheses matter: == binds tighter than &
            break;
        }
        // retrieve one message here
    }
    return 1; // keep the callback active
}
Please note: this is not actually tested; it is a translation of the Python+Clutter code I use, but I'm pretty sure it will work.
For reference, below is the full Python+Clutter code, which actually works.
import sys
from gi.repository import Clutter, GObject
import zmq

def Stage():
    "A Stage with a red spinning rectangle"
    stage = Clutter.Stage()
    stage.set_size(400, 400)
    rect = Clutter.Rectangle()
    color = Clutter.Color()
    color.from_string('red')
    rect.set_color(color)
    rect.set_size(100, 100)
    rect.set_position(150, 150)
    timeline = Clutter.Timeline.new(3000)
    timeline.set_loop(True)
    alpha = Clutter.Alpha.new_full(timeline, Clutter.AnimationMode.EASE_IN_OUT_SINE)
    rotate_behaviour = Clutter.BehaviourRotate.new(
        alpha,
        Clutter.RotateAxis.Z_AXIS,
        Clutter.RotateDirection.CW,
        0.0, 359.0)
    rotate_behaviour.apply(rect)
    timeline.start()
    stage.add_actor(rect)
    stage.show_all()
    stage.connect('destroy', lambda stage: Clutter.main_quit())
    return stage, rotate_behaviour

def Socket(address):
    ctx = zmq.Context()
    sock = ctx.socket(zmq.SUB)
    sock.setsockopt(zmq.SUBSCRIBE, "")
    sock.connect(address)
    return sock

def zmq_callback(queue, condition, sock):
    print 'zmq_callback', queue, condition, sock
    while sock.getsockopt(zmq.EVENTS) & zmq.POLLIN:
        observed = sock.recv()
        print observed
    return True

def main():
    res, args = Clutter.init(sys.argv)
    if res != Clutter.InitError.SUCCESS:
        return 1
    stage, rotate_behaviour = Stage()
    sock = Socket(sys.argv[2])
    zmq_fd = sock.getsockopt(zmq.FD)
    GObject.io_add_watch(zmq_fd,
                         GObject.IO_IN | GObject.IO_ERR | GObject.IO_HUP,
                         zmq_callback, sock)
    return Clutter.main()

if __name__ == '__main__':
    sys.exit(main())
It sounds like the ZeroMQ code wants simply to be executed over and over again as often as possible. The simplest way is to put the ZeroMQ code into an idle function or timeout function, and use non-blocking versions of the functions if they exist.
For Clutter, you would use clutter_threads_add_idle() or clutter_threads_add_timeout(). For GTK, you would use g_idle_add() or g_timeout_add().
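For instance, a minimal sketch of the timeout approach (untested; it assumes sock is a connected ZeroMQ socket created elsewhere) could drain the socket with non-blocking receives:

#include <glib.h>
#include <zmq.h>

static void *sock; /* assumption: a connected ZeroMQ socket, set up elsewhere */

/* Runs periodically from the main loop; ZMQ_DONTWAIT makes zmq_recv
   return immediately (with errno EAGAIN) instead of blocking when no
   message is pending, so the GUI never stalls. */
static gboolean poll_zmq(gpointer data)
{
    char buf[256];
    int n;
    while ((n = zmq_recv(sock, buf, sizeof(buf), ZMQ_DONTWAIT)) >= 0) {
        /* handle the n received bytes in buf here */
    }
    return TRUE; /* keep the source installed */
}

/* after initializing the toolkit:
 *     g_timeout_add(10, poll_zmq, NULL);   // or g_idle_add(poll_zmq, NULL)
 */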
The more difficult, but possibly better, way is to create a separate thread for the ZeroMQ code using g_thread_create(), and just use the while(1) construction with blocking functions as they suggest. If you do that, you will also have to find some way for the threads to communicate with each other - GLib's mutexes and async queues usually do fine.
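A minimal sketch of that thread-based approach (untested; names are illustrative) could hand received messages to the GUI thread through a GAsyncQueue:

#include <glib.h>
#include <zmq.h>

static GAsyncQueue *queue; /* shared between the ZeroMQ thread and the GUI */

/* Worker thread: the blocking while (1) loop that ZeroMQ expects. */
static gpointer zmq_thread(gpointer arg)
{
    void *sock = arg; /* assumption: a connected ZeroMQ socket */
    char buf[256];
    while (1) {
        int n = zmq_recv(sock, buf, sizeof(buf), 0); /* blocks */
        if (n >= 0)
            g_async_queue_push(queue,
                g_strndup(buf, n < (int) sizeof(buf) ? n : sizeof(buf)));
    }
    return NULL;
}

/* Idle callback in the GUI thread: drain the queue without blocking. */
static gboolean drain_queue(gpointer data)
{
    gchar *msg;
    while ((msg = g_async_queue_try_pop(queue)) != NULL) {
        /* update the UI with msg */
        g_free(msg);
    }
    return TRUE;
}

The queue would be created with g_async_queue_new() before spawning the thread.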
I found that there is a Qt integration library called Zeromqt. Looking at the source, the core of the integration is the following:
ZmqSocket::ZmqSocket(int type, QObject *parent) : QObject(parent)
{
    ...
    notifier_ = new QSocketNotifier(fd, QSocketNotifier::Read, this);
    connect(notifier_, SIGNAL(activated(int)), this, SLOT(activity()));
}
...
void ZmqSocket::activity()
{
    uint32_t flags;
    size_t size = sizeof(flags);
    if (!getOpt(ZMQ_EVENTS, &flags, &size)) {
        qWarning("Error reading ZMQ_EVENTS in ZMQSocket::activity");
        return;
    }
    if (flags & ZMQ_POLLIN) {
        emit readyRead();
    }
    if (flags & ZMQ_POLLOUT) {
        emit readyWrite();
    }
    ...
}
Hence, it relies on Qt's integrated socket handling; Clutter does not have something directly comparable.
You can get a file descriptor for a 0MQ socket (the ZMQ_FD option) and integrate that with your event loop. I presume GTK has some mechanism for handling sockets.
This is an example in Python, using PyQt4. It's derived from a working application.
import zmq
from PyQt4 import QtCore, QtGui

class QZmqSocketNotifier( QtCore.QSocketNotifier ):
    """ Provides a Qt event notifier for ZMQ socket events """
    def __init__( self, zmq_sock, event_type, parent=None ):
        """
        Parameters:
        ----------
        zmq_sock : zmq.Socket
            The ZMQ socket to listen on. Must already be connected or bound to a socket address.
        event_type : QtSocketNotifier.Type
            Event type to listen for, as described in documentation for QtSocketNotifier.
        """
        super( QZmqSocketNotifier, self ).__init__( zmq_sock.getsockopt(zmq.FD), event_type, parent )

class Server(QtGui.QFrame):

    def __init__(self, topics, port, mainwindow, parent=None):
        super(Server, self).__init__(parent)
        self._PORT = port

        # Create notifier to handle ZMQ socket events coming from client
        self._zmq_context = zmq.Context()
        self._zmq_sock = self._zmq_context.socket( zmq.SUB )
        self._zmq_sock.bind( "tcp://*:" + self._PORT )
        for topic in topics:
            self._zmq_sock.setsockopt( zmq.SUBSCRIBE, topic )
        self._zmq_notifier = QZmqSocketNotifier( self._zmq_sock, QtCore.QSocketNotifier.Read )

        # connect signals and slots
        self._zmq_notifier.activated.connect( self._onZmqMsgRecv )
        mainwindow.quit.connect( self._onQuit )

    @QtCore.pyqtSlot()
    def _onZmqMsgRecv(self):
        self._zmq_notifier.setEnabled(False)
        # Verify that there's data in the stream
        sock_status = self._zmq_sock.getsockopt( zmq.EVENTS )
        if sock_status == zmq.POLLIN:
            msg = self._zmq_sock.recv_multipart()
            topic = msg[0]
            # _topic_map maps topics to handler callbacks (set up elsewhere)
            callback = self._topic_map[ topic ]
            callback( msg )
        self._zmq_notifier.setEnabled(True)
        self._zmq_sock.getsockopt(zmq.EVENTS)

    def _onQuit(self):
        self._zmq_notifier.activated.disconnect( self._onZmqMsgRecv )
        self._zmq_notifier.setEnabled(False)
        del self._zmq_notifier
        self._zmq_context.destroy(0)
Disabling and then re-enabling the notifier in _onZmqMsgRecv is per the documentation for QSocketNotifier.
The final call to getsockopt is necessary for some reason; otherwise the notifier stops working after the first event. I was actually going to post a new question about this. Does anyone know why it is needed?
Note that if you don't destroy the notifier before the ZMQ context, you'll probably get an error like this when you quit the application:
QSocketNotifier: Invalid socket 16 and type 'Read', disabling...
I'm writing a server application in D that should be able to manage n connections simultaneously.
To achieve this I am using std.socket.Socket.select. This works fine, but I can't bind session-specific data to a socket, and I don't see any way to do so, because Socket does not allow storing a handle to user-specific data. After
Socket.select(socketSet, null, null);
I'm able to get all affected sockets, but I can't map these sockets to my session data. What is my mistake? Is it possible to reach my goal this way, or should I choose another approach?
My relevant code:
ushort port = 5010;
stoprequest = false;
auto listener = new TcpSocket();
assert(listener.isAlive);
listener.blocking = false;
listener.bind(new InternetAddress(port));
listener.listen(10);

enum MAX_CONNECTIONS = 100;
auto socketSet = new SocketSet(MAX_CONNECTIONS + 1);
Socket[] reads;
Session[] sessions;

while (true)
{
    socketSet.add(listener);
    foreach (session; sessions)
        socketSet.add(session.socket);
    Socket.select(socketSet, null, null);

    for (size_t i = 0; i < reads.length; i++)
    {
        if (socketSet.isSet(reads[i]))
        {
            // Now I should access the session-related data, but how?
            char[1024] buf;
            auto datLength = reads[i].receive(buf[]);
            if (datLength == Socket.ERROR)
                writeln("Connection error.");
            else if (datLength != 0)
            {
                writefln("Received %d bytes from %s: \"%s\"", datLength, reads[i].remoteAddress().toString(), buf[0..datLength]);
                continue;
            }
            else
            {
                // Error handling. Shortened, since unimportant for the example.
                reads[i].close();
                reads = reads.remove(i);
                i--;
            }
        }
    }
    if (socketSet.isSet(listener))
    {
        Socket sn = null;
        sn = listener.accept();
        if (reads.length < MAX_CONNECTIONS)
        {
            Session session = new Session();
            session.socket = sn;
            sessions ~= session;
            reads ~= sn; // keep the array used by the select loop in sync
        }
        else
        {
            // Error handling for too many connections. Shortened, since unimportant for the example.
        }
    }
    socketSet.reset();
}
The hint to use poll() was helpful. After reading https://daniel.haxx.se/docs/poll-vs-select.html I think that both variants work and neither of them is the one true answer. For real efficiency I would be better off with libev; fortunately, efficiency is not a problem in this particular project. For this reason I will use select(), because I found out that accessing handle gives me a unique number which can be used as a key into my own lookup table, which lets me assign session data to a socket. So I prefer to stick with the encapsulated functionality of std.socket.Socket rather than work around it.
My concrete question can therefore be answered with:
Use Socket.handle to identify the socket and manage session-related
data.
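A minimal sketch of that lookup-table idea (illustrative only; Session stands in for the class from the question):

import std.socket;

class Session
{
    Socket socket;
    // ... session-specific data ...
}

// One table for the whole server: OS-level handle -> session.
Session[socket_t] sessionsByHandle;

// When accepting a new connection:
void register(Socket sn)
{
    auto session = new Session();
    session.socket = sn;
    sessionsByHandle[sn.handle] = session;
}

// After Socket.select(), for a socket flagged as readable:
void onReadable(Socket sock)
{
    if (auto p = sock.handle in sessionsByHandle)
    {
        Session session = *p;
        // ... receive and handle the data with full session context ...
    }
}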
A few other alternatives you can consider:
1) Use a subclass of Socket. You can make your own class that inherits from it and adds more stuff (see the first sketch after this list).
2) The poll function is found in import core.sys.posix.poll;, and you can pass socket.handle to it as well (see the second sketch below). Note that it will not work on Windows without modification.
3) Or indeed build your own lookup table; that works too.
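For option 1, a minimal sketch (an assumption on my part: it relies on current D2 std.socket, where a listener can override the protected accepting() hook to control the class of accepted sockets; all names are illustrative):

import std.socket;

// A socket that carries its own session state.
class SessionSocket : Socket
{
    string user;      // example per-connection data
    ubyte[] inbox;
    // The implicit default constructor chains to Socket's protected
    // no-argument constructor, which exists for use with accepting().
}

// A listener whose accept() returns SessionSocket instances.
class SessionListener : TcpSocket
{
    protected override Socket accepting() pure nothrow
    {
        return new SessionSocket;
    }
}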
Note that std.socket.Socket is a very thin wrapper around the BSD socket API; internally it just conveniently handles the slight differences between Windows and POSIX. It is therefore pretty easy to adapt code that uses the other APIs (or C-language tutorials) to D, since it is all basically the same thing, and literally the same functions if you import the core.sys modules.
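For completeness, a sketch of option 2 (POSIX-only, as noted above):

import core.sys.posix.poll;
import std.socket;

// Wait up to timeoutMs for sock to become readable, using poll(2)
// directly with the OS-level handle.
bool waitReadable(Socket sock, int timeoutMs)
{
    pollfd pfd;
    pfd.fd = sock.handle;
    pfd.events = POLLIN;
    return poll(&pfd, 1, timeoutMs) > 0 && (pfd.revents & POLLIN) != 0;
}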
In a Vert.x verticle I'm implementing, I have a Buffer that was previously loaded into memory, and I now want to dump it to disk.
As far as I understand, we should use a Pump to make sure not to overload the WriteStream.
But I'm not finding a way to get a ReadStream child instance from a Buffer. Shouldn't there be an easy / standard way to do this?
Generally, Vert.x does not warn about any issues with writing directly into AsyncFiles. Furthermore, the docs provide a corresponding example of using AsyncFile.write directly and state that you can use it to write directly: http://vertx.io/docs/vertx-core/java/#_asynchronous_files
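For instance, a direct write along the lines of the linked example (a sketch; the file name and buffer contents are illustrative) could look like this:

Vertx vertx = Vertx.vertx();
Buffer buffer = Buffer.buffer(new byte[100]);

vertx.fileSystem().open("myfile.txt", new OpenOptions(), res -> {
    if (res.succeeded()) {
        AsyncFile file = res.result();
        // write the whole in-memory buffer at position 0
        file.write(buffer, 0, ar -> {
            if (ar.succeeded()) {
                file.close();
            } else {
                // handle the write failure
            }
        });
    }
});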
However, if you want to use a Pump with a Buffer, you need an instance of ReadStream<Buffer> along with an AsyncFile to pump into. You can make use of the implementation by PitchPoint Solutions (Copyright 2016 The Simple File Server Authors):
https://github.com/pitchpoint-solutions/sfs/blob/master/sfs-server/src/main/java/org/sfs/io/BufferReadStream.java
Putting it all together:
CompletableFuture<Void> done = new CompletableFuture<>();
Buffer buffer = Buffer.buffer(new byte[100]);

Vertx.vertx().fileSystem().open("myfile.txt", new OpenOptions(), res -> {
    if (res.succeeded()) {
        AsyncFile outputFile = res.result();
        BufferReadStream reader = new BufferReadStream(buffer);
        Pump pump = Pump.pump(reader, outputFile);
        pump.start();
        reader.endHandler((r) -> {
            pump.stop(); // not sure this is required
            done.complete(null);
        });
    } else {
        // Something went wrong!
    }
});

// wait elsewhere
done.get();
Once again I have run into a question. I added the use of ACKs to my implementation.
In the function:
event void AMSend.sendDone(message_t *bufPtr, error_t error) {
    if (call PacketAcknowledgements.wasAcked(bufPtr)) {
        dbg("test", "SEND_ACK\n");
    }
}
And it's apparently working correctly, judging by the log output.
And in the function:
event void AMControl.startDone(error_t err) {
    radio = TRUE;
    dbg("test", "SLOT_ACTIVE\n");
    if (err == SUCCESS) {
        if ((call Clock.get() > (ultpkdados + 5000)) && (TOS_NODE_ID != 0)) {
            test_msg_t *rcm = (test_msg_t *) call Packet.getPayload(&pkt, sizeof(test_msg_t));
            rcm->type = 1;
            rcm->nodeid = TOS_NODE_ID;
            rcm->proxsalto = syncwith;
            call PacketAcknowledgements.requestAck(&pkt);
            if (call AMSend.send(syncwith, &pkt, sizeof(test_msg_t)) == SUCCESS) {
                dbg("test", "SEND_PKT_DATA\n");
                locked = TRUE;
                ultpkdados = call Clock.get();
            }
        }
    }
}
This startDone function sends the "data" packet normally, and I call PacketAcknowledgements.requestAck to request the ACK.
My question is whether at this point, if the ACK is not confirmed, the original message is retransmitted. If this is not happening, could you suggest me the appropriate changes for this to happen?
My question is whether at this point, if the ACK is not confirmed, the
original message is retransmitted.
No, the message will not be retransmitted.
If this is not happening, could you suggest me the appropriate changes
for this to happen?
What you are doing is just requesting acknowledgments, not enabling retransmissions. Retransmissions are sent by the packet link layer, which is used as specified HERE.
In order to enable re-transmissions you need to:
1) add the PACKET_LINK preprocessor variable to your makefile. This can be done by simply adding "-DPACKET_LINK" to PFLAGS in your makefile, i.e.
PFLAGS = -DPACKET_LINK
2) Specify the maximum number of retries your device may attempt and the delay between retries. This is done by calling the setRetries and setRetryDelay functions of the PacketLink interface (these are found on an instantiation of a PacketLink interface, so you will need a uses interface PacketLink statement in the wiring section of your module). You need to set the number of retries before calling AMSend.send, i.e. you would need something along the lines of:
#if defined(PACKET_LINK)
uint16_t maxRetries = 100;  // maximum number of retries
uint16_t myDelay = 10;      // delay between retries (ms)
call PacketLink.setRetries(&pkt, maxRetries);   // set retries
call PacketLink.setRetryDelay(&pkt, myDelay);   // set delay
#endif
3) In your configuration file you need to provide a PacketLink implementation and wire it to your module. For instance, if you are using a node with a CC2420 transceiver (such as the TelosB node), you would have the following in the implementation section of your configuration file:
components CC2420ActiveMessageC, myModuleP as App;
App.PacketLink -> CC2420ActiveMessageC.PacketLink;
What the above will do is compile the packet link layer along with the rest of the communication stack. You can look at the PacketLinkP.nc file to see how the values you pass to the PacketLink interface are used.
If you are using the PacketLink interface and PacketAcknowledgements.wasAcked returns FALSE in your AMSend.sendDone method, it means the transmission has still failed despite all the retries. At that point you can try a fresh retransmit (which the device will again retry up to a total of maxRetries times).
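For concreteness, a sketch of that fresh-retransmit idea in sendDone (illustrative only, reusing the names from the question; the retry counts are assumptions):

event void AMSend.sendDone(message_t *bufPtr, error_t error) {
    if (call PacketAcknowledgements.wasAcked(bufPtr)) {
        dbg("test", "SEND_ACK\n");
        locked = FALSE;
    } else {
#if defined(PACKET_LINK)
        // All PacketLink retries failed; optionally start a fresh send,
        // which will itself be retried up to maxRetries times.
        call PacketLink.setRetries(bufPtr, 100);
        call PacketLink.setRetryDelay(bufPtr, 10);
#endif
        call PacketAcknowledgements.requestAck(bufPtr);
        if (call AMSend.send(syncwith, bufPtr, sizeof(test_msg_t)) != SUCCESS) {
            locked = FALSE;
        }
    }
}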
Okay, this is my first question here on Stack Overflow, so bear with me if I'm not asking properly.
Basically I'm trying to code some asynchronous sockets using std.socket, but I'm not sure whether I've understood the concept correctly. I've only ever worked with asynchronous sockets in C#, and in D it seems to sit at a much lower level. I've researched a lot and looked at plenty of code and documentation for both D and C/C++ to get an understanding, but I'm still not sure I understand the concept correctly, and I'd welcome examples. I tried looking at splat, but it's very outdated, and vibe seems too complex just for a simple asynchronous socket wrapper.
If I understood correctly, there is no poll() function in std.socket, so you'd have to use a SocketSet with a single socket on select() to poll the status of the socket, right?
So basically how I'd go about handling the sockets is polling to get the read status of the socket; if that succeeds (value > 0) I can call receive(), which will return 0 for disconnection and otherwise the number of bytes received, and I'd have to keep doing this until the expected bytes are received.
Of course the socket is set to non-blocking!
Is that correct?
Here is the code I've made up so far.
void HANDLE_READ()
{
    while (true)
    {
        synchronized
        {
            auto events = cast(AsyncObject[int]) ASYNC_EVENTS_READ;
            foreach (asyncObject; events)
            {
                int poll = pollRecv(asyncObject.socket.m_socket);
                switch (poll)
                {
                    case 0:
                    {
                        throw new SocketException("The socket had a time out!");
                    }
                    default:
                    {
                        if (poll <= -1)
                        {
                            throw new SocketException("The socket was interrupted!");
                        }
                        int recvGetSize = cast(int) (asyncObject.socket.m_readBuffer.length - asyncObject.socket.readSize);
                        ubyte[] recvBuffer = new ubyte[recvGetSize];
                        auto recv = asyncObject.socket.m_socket.receive(recvBuffer);
                        if (recv == 0)
                        {
                            removeAsyncObject(asyncObject.event_id, true);
                            asyncObject.socket.disconnect();
                            continue;
                        }
                        asyncObject.socket.m_readBuffer ~= recvBuffer;
                        asyncObject.socket.readSize += recv;
                        if (asyncObject.socket.readSize == asyncObject.socket.expectedReadSize)
                        {
                            removeAsyncObject(asyncObject.event_id, true);
                            asyncObject.event(asyncObject.socket);
                        }
                        break;
                    }
                }
            }
        }
    }
}
So basically how I'd go about handling the sockets is polling to get the read status of the socket
Not quite right. Usually, the idea is to build an event loop around select, so that your application is idle as long as there are no network or timer events to handle. With polling, you'd have to check for new events continuously or on a timer, which wastes CPU cycles and handles events a bit later than they occur.
In the event loop, you populate the SocketSets with sockets whose events you are interested in. If you want to be notified of new received data on a socket, it goes to the "readable" set. If you have data to send, the socket should be in the "writable" set. And all sockets should be on the "error" set.
select will then block (sleep) until an event comes in, and fill the SocketSets with the sockets which have actionable events. Your application can then respond to them appropriately: receive data for readable sockets, send queued data for writable sockets, and perform cleanup for errored sockets.
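A minimal sketch of such a loop with std.socket (illustrative only; error handling and removal of closed clients are omitted):

import std.socket;

void eventLoop(Socket listener, ref Socket[] clients)
{
    auto readSet  = new SocketSet();
    auto writeSet = new SocketSet();
    auto errorSet = new SocketSet();

    while (true)
    {
        // Repopulate the sets each iteration; select() overwrites them.
        readSet.reset();
        writeSet.reset();
        errorSet.reset();
        readSet.add(listener);
        foreach (c; clients)
        {
            readSet.add(c);   // interested in incoming data
            errorSet.add(c);  // and in errors
            // add c to writeSet only while there is queued data to send
        }

        // Blocks (sleeps) until at least one socket has an actionable event.
        Socket.select(readSet, writeSet, errorSet);

        if (readSet.isSet(listener))
            clients ~= listener.accept();

        foreach (c; clients)
        {
            if (readSet.isSet(c))
            {
                ubyte[1024] buf;
                auto n = c.receive(buf[]);
                // n == 0 means the peer closed; n > 0: process buf[0 .. n]
            }
            // handle writeSet/errorSet membership similarly
        }
    }
}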
Here's my D implementation of non-fiber event-based networking: ae.net.asockets.
A friend of mine came to me with a problem: when using the NetworkStream class on the server end of the connection, if the client disconnects, NetworkStream fails to detect it.
Stripped down, his C# code looked like this:
List<TcpClient> connections = new List<TcpClient>();
TcpListener listener = new TcpListener(7777);
listener.Start();
while (true)
{
    if (listener.Pending())
    {
        connections.Add(listener.AcceptTcpClient());
    }
    TcpClient deadClient = null;
    foreach (TcpClient client in connections)
    {
        if (!client.Connected)
        {
            deadClient = client;
            break;
        }
        NetworkStream ns = client.GetStream();
        if (ns.DataAvailable)
        {
            BinaryFormatter bf = new BinaryFormatter();
            object o = bf.Deserialize(ns);
            ReceiveMyObject(o);
        }
    }
    if (deadClient != null)
    {
        deadClient.Close();
        connections.Remove(deadClient);
    }
    Thread.Sleep(0);
}
The code works, in that clients can successfully connect and the server can read data sent to it. However, if the remote client calls tcpClient.Close(), the server does not detect the disconnection - client.Connected remains true, and ns.DataAvailable is false.
A search of Stack Overflow provided an answer - since Socket.Receive is not being called, the socket is not detecting the disconnection. Fair enough. We can work around that:
foreach (TcpClient client in connections)
{
    client.ReceiveTimeout = 0;
    if (client.Client.Poll(0, SelectMode.SelectRead))
    {
        int bytesPeeked = 0;
        byte[] buffer = new byte[1];
        bytesPeeked = client.Client.Receive(buffer, SocketFlags.Peek);
        if (bytesPeeked == 0)
        {
            deadClient = client;
            break;
        }
        else
        {
            NetworkStream ns = client.GetStream();
            if (ns.DataAvailable)
            {
                BinaryFormatter bf = new BinaryFormatter();
                object o = bf.Deserialize(ns);
                ReceiveMyObject(o);
            }
        }
    }
}
(I have left out exception handling code for brevity.)
This code works, however, I would not call this solution "elegant". The other elegant solution to the problem I am aware of is to spawn a thread per TcpClient, and allow the BinaryFormatter.Deserialize (née NetworkStream.Read) call to block, which would detect the disconnection correctly. Though, this does have the overhead of creating and maintaining a thread per client.
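For concreteness, that thread-per-client alternative might look something like this (a sketch, not tested; the exception types are what I'd expect Deserialize to throw on disconnect):

void HandleClient(TcpClient client)
{
    using (NetworkStream ns = client.GetStream())
    {
        BinaryFormatter bf = new BinaryFormatter();
        try
        {
            while (true)
            {
                // Blocks until a whole object arrives; throws when the
                // client disconnects, which ends this thread cleanly.
                object o = bf.Deserialize(ns);
                ReceiveMyObject(o);
            }
        }
        catch (IOException) { }
        catch (SerializationException) { }
    }
    client.Close();
}

// per accepted client:
// new Thread(() => HandleClient(client)).Start();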
I get the feeling that I'm missing some secret, awesome answer that would retain the clarity of the original code, but avoid the use of additional threads to perform asynchronous reads. Though, perhaps, the NetworkStream class was never designed for this sort of usage. Can anyone shed some light?
Update: Just want to clarify that I'm interested to see if the .NET framework has a solution that covers this use of NetworkStream (i.e. polling and avoiding blocking) - obviously it can be done; the NetworkStream could easily be wrapped in a supporting class that provides the functionality. It just seemed strange that the framework essentially requires you to use threads to avoid blocking on NetworkStream.Read, or, to peek on the socket itself to check for disconnections - almost like it's a bug. Or a potential lack of a feature. ;)
Is the server expecting to be sent multiple objects over the same connection? If so, I don't see how this code will work, as there is no delimiter being sent that signifies where one object ends and the next begins.
If only one object is being sent and the connection closed afterwards, then the original code would work.
There has to be a network operation initiated in order to find out if the connection is still active or not. What I would do, is that instead of deserializing directly from the network stream, I would instead buffer into a MemoryStream. That would allow me to detect when the connection was lost. I would also use message framing to delimit multiple responses on the stream.
NetworkStream ns = client.GetStream();
BinaryReader br = new BinaryReader(ns);
// message framing. First, read the #bytes to expect.
int objectSize = br.ReadInt32();
if (objectSize == 0)
    break; // client disconnected
byte[] buffer = new byte[objectSize];
int index = 0;
int read = ns.Read(buffer, index, Math.Min(objectSize, 1024));
while (read > 0)
{
    objectSize -= read;
    index += read;
    read = ns.Read(buffer, index, Math.Min(objectSize, 1024));
}
if (objectSize > 0)
{
    // client aborted the connection in the middle of the stream
    break;
}
else
{
    BinaryFormatter bf = new BinaryFormatter();
    using (MemoryStream ms = new MemoryStream(buffer))
    {
        object o = bf.Deserialize(ms);
        ReceiveMyObject(o);
    }
}
Yeah, but what if you lose the connection before getting the size? I.e., right before the following line:
// message framing. First, read the #bytes to expect.
int objectSize = br.ReadInt32();
ReadInt32() will block the thread indefinitely.
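One possible mitigation (my own sketch, not part of the answer above): give the stream a read timeout, so the blocking read turns into an IOException that can be treated as a dead connection:

NetworkStream ns = client.GetStream();
ns.ReadTimeout = 5000; // milliseconds

try
{
    BinaryReader br = new BinaryReader(ns);
    // now blocks for at most ReadTimeout before throwing
    int objectSize = br.ReadInt32();
    // ... proceed with the framed read as above ...
}
catch (IOException)
{
    // no length prefix arrived in time; treat the client as gone
    client.Close();
}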