Unable to capture MQTT log callback

I am having trouble getting the on_log callback to trigger. I have used it in other programs without problems, but this one is being difficult. I have included the relevant code snippets (I hope!). All other callbacks are working fine. This program isn't threaded (except for the MQTT loop_start thread), so there aren't any other actions going on. Any suggestions on where to look would be appreciated. FWIW, the problem I'm trying to track down is that MQTT stops responding after a few hours. The MQTT broker is on a separate server, is used by numerous other processes, and has no known issues.
# Set up MQTT - wait until we have an ipaddr so we know the network has been started
logger.debug("Waiting for ip address to be assigned")
while True:
    ipaddr = get_local_IP()
    if ipaddr is not None:
        logger.info('IP address is {}'.format(ipaddr))
        break
    sleep(2.0)

logger.debug("Waiting for MQTT broker connection")
mqttc = mqtt.Client()
mqttc.on_message = on_message
mqttc.on_connect = on_connect
mqttc.on_publish = on_publish
mqttc.on_subscribe = on_subscribe
while True:
    try:
        mqttc.connect("192.168.0.18", 1884, 30)
    except IOError as e:
        if e.errno != errno.ENETUNREACH:
            raise
        logger.warning('Network error - retrying')
        sleep(15)
        continue
    logger.debug('Connect initiated without error')
    break
mqttc.loop_start()
mqttc.on_log = on_log
while not MQ_link:
    sleep(1)
def on_connect(mqttc, obj, flags, rc):
    global MQ_link
    logger.debug("Connected: rc = " + str(rc))
    if rc == 0:
        MQ_link = True

def on_log(mqttc, obj, level, string):
    verb = string.split('(')[0].strip()
    if verb not in ['Sending PINGREQ', 'Received PINGRESP']:
        logger.debug('LOG: ' + string)

Related

Enqueue liquidsoap request from script instead of command

I'm trying to write my very first liquidsoap program. It goes something like this:
sounds_path = "../var/sounds"

# Log file
set("log.file.path", "var/log/liquidsoap.log")

set("harbor.bind_addr", "127.0.0.1")
set("harbor.timeout", 5)
set("harbor.verbose", true)
set("harbor.reverse_dns", false)

silence = blank()
queue = request.queue()

def play(~protocol, ~data, ~headers, uri) =
  request.push("#{sounds_path}#{uri}")
  http_response(protocol=protocol, code=200)
end

harbor.http.register(port=8080, method="POST", "^/(?!\0)+", play)

stream = fallback(track_sensitive=false, [queue, silence])
...output.whatever...
And I was wondering if there is any way to push to the queue from the harbor callback.
Otherwise, how should I go about making requests originate from HTTP calls? I really want to avoid telnet. My final objective is an endpoint that I can call to make my stream play a file on demand and be silent the rest of the time.
Give this a go. It's Liquidsoap, so it's tricky to understand, but it should do the trick:
########### functions ##############
def playnow(source, ~action="override", ~protocol, ~data, ~headers, uri) =
  queue_count = list.length(server.execute("playnow.primary_queue"))
  arr = of_json(default=[("key","value")], data)
  track = arr["track"]
  log("adding playnow track '#{track}'")
  if queue_count != 0 and action == "override" then
    server.execute("playnow.insert 0 #{track}")
    source.skip(source)
    print("skipping playnow queue")
  else
    server.execute("playnow.push #{track}")
    print("no skip required")
  end
  http_response(
    protocol=protocol,
    code=200,
    headers=[("Content-Type","application/json; charset=utf-8")],
    data='{"status":"success", "track": "#{track}", "action": "#{action}"}'
  )
end

######## live stuff below #######
playlist = playlist(reload=1, reload_mode="watch", "/etc/liquidsoap/playlist.xspf")
requested = crossfade(request.equeue(id="playnow"))
live = fallback(track_sensitive=false, transitions=[crossfade, crossfade], [requested, playlist])

output.harbor(%mp3, id="live", mount="live_radio", radio)
harbor.http.register(port=MY_HARBOR_PORT, method="POST", "/playnow", playnow(live))
To use the above, you need to send a POST request with JSON data like so:
{"track":"http://mydomain/mysong.mp3"}
This also assumes you have the harbor running, which you should be able to verify using the Liquidsoap docs.
There are multiple methods of sending into the queue: telnet, an HTTP input, or a metadata request to playnow via the harbor. Let me know which one you opt for and I can provide a code example.
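For example, a minimal Python client for the endpoint above might look like this (host, port, and path are placeholders; match them to your harbor.http.register call):

import requests  # third-party HTTP library

resp = requests.post(
    'http://localhost:8000/playnow',              # placeholder harbor host/port
    json={'track': 'http://mydomain/mysong.mp3'},
    timeout=5,
)
print(resp.status_code, resp.text)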

ACK with packet retransmission

I've run into another question. I added the use of ACKs to my implementation.
In the function:
event void AMSend.sendDone(message_t *bufPtr, error_t error) {
  if (call PacketAcknowledgements.wasAcked(bufPtr)) {
    dbg("test", "SEND_ACK\n");
  }
}
And it's apparently working correctly, judging by the output log.
Meanwhile, in the function:
event void AMControl.startDone(error_t err) {
  radio = TRUE;
  dbg("test", "SLOT_ACTIVE\n");
  if (err == SUCCESS) {
    if ((call Clock.get() > (ultpkdados + 5000)) && (TOS_NODE_ID != 0)) {
      test_msg_t *rcm = (test_msg_t *) call Packet.getPayload(&pkt, sizeof(test_msg_t));
      rcm->type = 1;
      rcm->nodeid = TOS_NODE_ID;
      rcm->proxsalto = syncwith;
      call PacketAcknowledgements.requestAck(&pkt);
      if (call AMSend.send(syncwith, &pkt, sizeof(test_msg_t)) == SUCCESS) {
        dbg("test", "SEND_PKT_DATA\n");
        locked = TRUE;
        ultpkdados = call Clock.get();
      }
    }
  }
}
This startDone function is sending the "data" packet normally, and I call PacketAcknowledgements.requestAck to request the ACK.
My question is whether, at this point, if the ACK is not confirmed, the original message is retransmitted. If this is not happening, could you suggest the appropriate changes to make that happen?
"My question is whether at this point, if the ACK is not confirmed, the original message is retransmitted."
No, the message will not be retransmitted.
"If this is not happening, could you suggest me the appropriate changes for this to happen?"
What you are doing is just requesting acknowledgements, not enabling retransmissions. Retransmissions are sent by the packet link layer, which is used as described in the TinyOS PacketLink documentation.
In order to enable retransmissions you need to:
1) Add the PACKET_LINK preprocessor variable to your Makefile. This can be done by simply adding "-DPACKET_LINK" to PFLAGS in your Makefile, i.e.
PFLAGS = -DPACKET_LINK
2) Specify the maximum number of retries your device should attempt and the delay between retries. This is done by calling the setRetries and setRetryDelay functions of the PacketLink interface (these are found on an instantiation of a PacketLink interface, so you will need a "uses interface PacketLink" statement in the wiring section of your module). You need to set the number of retries before calling AMSend.send, i.e. something along the lines of:
#if defined(PACKET_LINK)
  maxRetries = 100;                              // maximum number of retries
  myDelay = 10;                                  // delay between retries (ms)
  call PacketLink.setRetries(&pkt, maxRetries);  // set retries
  call PacketLink.setRetryDelay(&pkt, myDelay);  // set delay
#endif
3) In your configuration file you need to instantiate a PacketLink implementation and wire it to your module. For instance, if you are using a node with a CC2420 transceiver (such as the TelosB node), you would have the following in the implementation section of your configuration file:
components CC2420ActiveMessageC, myModuleP as App;
App.PacketLink -> CC2420ActiveMessageC.PacketLink;
What the above will do is compile the packet link layer along with the rest of the communication stack. You can look at the PacketLinkP.nc file to see how the values you are passing through the PacketLink interface are used.
If you are using the PacketLink interface and PacketAcknowledgements.wasAcked returns FALSE in your AMSend.sendDone handler, it means the transmission still failed despite all the retries. At that point you can try a fresh retransmit (which the device will again attempt up to a total of maxRetries times).

Error 9 "Bad file descriptor" using sockets in Python

I am trying to implement a very basic client-server program in Python using non-blocking sockets. I have made two threads, one for reading and one for writing.
My client code is below.
import sys
import socket
from time import sleep
from _thread import *
import threading

global s

def writeThread():
    while True:
        data = str(input('Please input the data you want to send to client 2 ( to end connection type end ) : '))
        data = bytes(data, 'utf8')
        print('You are trying to send : ', data)
        s.sendall(data)

def readThread():
    while True:
        try:
            msg = s.recv(4096)
        except socket.timeout as e:
            sleep(1)
            print('recv timed out, retry later')
            continue
        except socket.error as e:
            # Something else happened, handle error, exit, etc.
            print(e)
            sys.exit(1)
        else:
            if len(msg) == 0:
                print('orderly shutdown on server end')
                sys.exit(0)
            else:
                # got a message do something :)
                print('Message is : ', msg)

if __name__ == '__main__':
    global s
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(('', 6188))
    s.settimeout(2)
    wThread = threading.Thread(None, writeThread)
    rThread = threading.Thread(None, readThread)
    wThread.start()
    rThread.start()
    s.close()
Question:
I know this can be implemented through the select module too, but I would like to know how to do it this way.
Your main thread creates the socket, then creates thread1 and thread2. Then it closes the socket (and exits, because the program ends after that). So when thread1 and thread2 try to use it, it's no longer open; hence EBADF (Bad file descriptor).
Your main thread should not close the socket while the other threads are still running. It could wait for them to end:
[...]
s.settimeout(2)
wThread = threading.Thread(None,writeThread)
rThread = threading.Thread(None,readThread)
wThread.start()
rThread.start()
wThread.join()
rThread.join()
s.close()
However, since the main thread has nothing better to do than wait, it might be better to create only one additional thread (say rThread), then have the main thread take over the task currently performed by the other, i.e.:
[...]
s.settimeout(2)
rThread = threading.Thread(None,readThread)
rThread.start()
writeThread()

How to use ZeroMQ in a GTK/Qt/Clutter application?

In GTK applications all execution takes place inside the gtk_main function, and other graphical frameworks have similar event loops, such as app.exec for Qt and clutter_main for Clutter. ZeroMQ, however, is based on the assumption that there is a while (1) ... loop that it is inserted into (see the ZeroMQ guide for examples).
How do you combine those two execution strategies?
I currently want to use ZeroMQ in a Clutter application written in C, so I would of course like direct answers to that, but please add answers for other variants as well.
The proper way to combine ZeroMQ and GTK or Clutter is to connect the file descriptor of the ZeroMQ socket to the main event loop. The fd can be retrieved by using:
int fd;
size_t sizeof_fd = sizeof(fd);
if (zmq_getsockopt(socket, ZMQ_FD, &fd, &sizeof_fd))
    perror("retrieving zmq fd");
Connecting it to the main loop is then a matter of using g_io_add_watch:
GIOChannel* channel = g_io_channel_unix_new(fd);
g_io_add_watch(channel, G_IO_IN|G_IO_ERR|G_IO_HUP, callback_func, NULL);
In the callback function, it is necessary to first check whether there is really something to read before reading; otherwise the function might block waiting for I/O.
gboolean callback_func(GIOChannel *source, GIOCondition condition, gpointer data)
{
    uint32_t status;
    size_t sizeof_status = sizeof(status);

    while (1) {
        if (zmq_getsockopt(socket, ZMQ_EVENTS, &status, &sizeof_status)) {
            perror("retrieving event status");
            return 0; // this just removes the callback, but probably
                      // different error handling should be implemented
        }
        if ((status & ZMQ_POLLIN) == 0) {
            break;
        }
        // retrieve one message here
    }
    return 1; // keep the callback active
}
Please note: this is not actually tested. I did a translation from Python+Clutter, which is what I use, but I'm pretty sure it'll work.
For reference, below is full Python+Clutter code which actually works.
import sys
from gi.repository import Clutter, GObject
import zmq

def Stage():
    "A Stage with a red spinning rectangle"
    stage = Clutter.Stage()
    stage.set_size(400, 400)
    rect = Clutter.Rectangle()
    color = Clutter.Color()
    color.from_string('red')
    rect.set_color(color)
    rect.set_size(100, 100)
    rect.set_position(150, 150)
    timeline = Clutter.Timeline.new(3000)
    timeline.set_loop(True)
    alpha = Clutter.Alpha.new_full(timeline, Clutter.AnimationMode.EASE_IN_OUT_SINE)
    rotate_behaviour = Clutter.BehaviourRotate.new(
        alpha,
        Clutter.RotateAxis.Z_AXIS,
        Clutter.RotateDirection.CW,
        0.0, 359.0)
    rotate_behaviour.apply(rect)
    timeline.start()
    stage.add_actor(rect)
    stage.show_all()
    stage.connect('destroy', lambda stage: Clutter.main_quit())
    return stage, rotate_behaviour

def Socket(address):
    ctx = zmq.Context()
    sock = ctx.socket(zmq.SUB)
    sock.setsockopt(zmq.SUBSCRIBE, "")
    sock.connect(address)
    return sock

def zmq_callback(queue, condition, sock):
    print 'zmq_callback', queue, condition, sock
    while sock.getsockopt(zmq.EVENTS) & zmq.POLLIN:
        observed = sock.recv()
        print observed
    return True

def main():
    res, args = Clutter.init(sys.argv)
    if res != Clutter.InitError.SUCCESS:
        return 1
    stage, rotate_behaviour = Stage()
    sock = Socket(sys.argv[2])
    zmq_fd = sock.getsockopt(zmq.FD)
    GObject.io_add_watch(zmq_fd,
                         GObject.IO_IN | GObject.IO_ERR | GObject.IO_HUP,
                         zmq_callback, sock)
    return Clutter.main()

if __name__ == '__main__':
    sys.exit(main())
It sounds like the ZeroMQ code wants simply to be executed over and over again as often as possible. The simplest way is to put the ZeroMQ code into an idle function or timeout function, and use non-blocking versions of the functions if they exist.
For Clutter, you would use clutter_threads_add_idle() or clutter_threads_add_timeout(). For GTK, you would use g_idle_add() or g_timeout_add().
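As a rough sketch of that approach (assuming PyGObject and pyzmq; the address and poll interval are arbitrary), a timeout function can drain the socket in non-blocking mode:

import zmq
from gi.repository import GLib

ctx = zmq.Context()
sock = ctx.socket(zmq.SUB)
sock.setsockopt_string(zmq.SUBSCRIBE, '')
sock.connect('tcp://127.0.0.1:5556')  # placeholder address

def poll_zmq():
    # Drain whatever is ready; NOBLOCK ensures the UI loop is never stalled.
    try:
        while True:
            msg = sock.recv(flags=zmq.NOBLOCK)
            print('received', msg)
    except zmq.Again:
        pass  # nothing more to read right now
    return True  # returning True keeps the timeout installed

GLib.timeout_add(100, poll_zmq)  # poll every 100 ms, then enter the main loop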
The more difficult, but possibly better, way is to create a separate thread for the ZeroMQ code using g_thread_create(), and just use the while(1) construction with blocking functions as they suggest. If you do that, you will also have to find some way for the threads to communicate with each other - GLib's mutexes and async queues usually do fine.
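A minimal sketch of that threaded variant (again assuming PyGObject and pyzmq, and substituting Python's threading plus GLib.idle_add for g_thread_create and async queues; the address is a placeholder):

import threading
import zmq
from gi.repository import GLib

def handle_message(msg):
    # Runs in the main loop thread, so it is safe to touch the UI here.
    print('got', msg)
    return False  # run once per idle_add call

def zmq_worker():
    ctx = zmq.Context()
    sock = ctx.socket(zmq.SUB)
    sock.setsockopt_string(zmq.SUBSCRIBE, '')
    sock.connect('tcp://127.0.0.1:5556')  # placeholder address
    while True:
        msg = sock.recv()                 # blocking recv is fine in this thread
        GLib.idle_add(handle_message, msg)

threading.Thread(target=zmq_worker, daemon=True).start()
# ...then enter the toolkit's main loop (gtk_main / clutter_main) as usual.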
I found that there is a QT integration library called Zeromqt. Looking at the source, the core of the integration is the following:
ZmqSocket::ZmqSocket(int type, QObject *parent) : QObject(parent)
{
    ...
    notifier_ = new QSocketNotifier(fd, QSocketNotifier::Read, this);
    connect(notifier_, SIGNAL(activated(int)), this, SLOT(activity()));
}

...

void ZmqSocket::activity()
{
    uint32_t flags;
    size_t size = sizeof(flags);
    if (!getOpt(ZMQ_EVENTS, &flags, &size)) {
        qWarning("Error reading ZMQ_EVENTS in ZMQSocket::activity");
        return;
    }
    if (flags & ZMQ_POLLIN) {
        emit readyRead();
    }
    if (flags & ZMQ_POLLOUT) {
        emit readyWrite();
    }
    ...
}
Hence, it is relying on QT's integrated socket handling and Clutter will not have something similar.
You can get a file descriptor for a 0MQ socket (the ZMQ_FD option) and integrate that with your event loop. I presume GTK has some mechanism for handling sockets.
This is an example in Python, using PyQt4. It's derived from a working application.
import zmq
from PyQt4 import QtCore, QtGui

class QZmqSocketNotifier(QtCore.QSocketNotifier):
    """ Provides Qt event notifier for ZMQ socket events """
    def __init__(self, zmq_sock, event_type, parent=None):
        """
        Parameters:
        ----------
        zmq_sock : zmq.Socket
            The ZMQ socket to listen on. Must already be connected or bound to a socket address.
        event_type : QSocketNotifier.Type
            Event type to listen for, as described in documentation for QSocketNotifier.
        """
        super(QZmqSocketNotifier, self).__init__(zmq_sock.getsockopt(zmq.FD), event_type, parent)

class Server(QtGui.QFrame):
    def __init__(self, topics, port, mainwindow, parent=None):
        super(Server, self).__init__(parent)
        self._PORT = port

        # Create notifier to handle ZMQ socket events coming from client
        self._zmq_context = zmq.Context()
        self._zmq_sock = self._zmq_context.socket(zmq.SUB)
        self._zmq_sock.bind("tcp://*:" + self._PORT)
        for topic in topics:
            self._zmq_sock.setsockopt(zmq.SUBSCRIBE, topic)
        self._zmq_notifier = QZmqSocketNotifier(self._zmq_sock, QtCore.QSocketNotifier.Read)

        # connect signals and slots
        self._zmq_notifier.activated.connect(self._onZmqMsgRecv)
        mainwindow.quit.connect(self._onQuit)

    @QtCore.pyqtSlot()
    def _onZmqMsgRecv(self):
        self._zmq_notifier.setEnabled(False)
        # Verify that there's data in the stream
        sock_status = self._zmq_sock.getsockopt(zmq.EVENTS)
        if sock_status == zmq.POLLIN:
            msg = self._zmq_sock.recv_multipart()
            topic = msg[0]
            callback = self._topic_map[topic]  # _topic_map: topic -> handler, defined elsewhere
            callback(msg)
        self._zmq_notifier.setEnabled(True)
        self._zmq_sock.getsockopt(zmq.EVENTS)

    def _onQuit(self):
        self._zmq_notifier.activated.disconnect(self._onZmqMsgRecv)
        self._zmq_notifier.setEnabled(False)
        del self._zmq_notifier
        self._zmq_context.destroy(0)
Disabling and then re-enabling the notifier in _onZmqMsgRecv is per the documentation for QSocketNotifier.
The final call to getsockopt is for some reason necessary; otherwise, the notifier stops working after the first event. I was actually going to post a new question for this. Does anyone know why this is needed?
Note that if you don't destroy the notifier before the ZMQ context, you'll probably get an error like this when you quit the application:
QSocketNotifier: Invalid socket 16 and type 'Read', disabling...

Detecting client TCP disconnection while using NetworkStream class

A friend of mine came to me with a problem: when using the NetworkStream class on the server end of the connection, if the client disconnects, NetworkStream fails to detect it.
Stripped down, his C# code looked like this:
List<TcpClient> connections = new List<TcpClient>();
TcpListener listener = new TcpListener(7777);
listener.Start();
while (true)
{
    if (listener.Pending())
    {
        connections.Add(listener.AcceptTcpClient());
    }
    TcpClient deadClient = null;
    foreach (TcpClient client in connections)
    {
        if (!client.Connected)
        {
            deadClient = client;
            break;
        }
        NetworkStream ns = client.GetStream();
        if (ns.DataAvailable)
        {
            BinaryFormatter bf = new BinaryFormatter();
            object o = bf.Deserialize(ns);
            ReceiveMyObject(o);
        }
    }
    if (deadClient != null)
    {
        deadClient.Close();
        connections.Remove(deadClient);
    }
    Thread.Sleep(0);
}
The code works, in that clients can successfully connect and the server can read data sent to it. However, if the remote client calls tcpClient.Close(), the server does not detect the disconnection - client.Connected remains true, and ns.DataAvailable is false.
A search of Stack Overflow provided an answer - since Socket.Receive is not being called, the socket is not detecting the disconnection. Fair enough. We can work around that:
foreach (TcpClient client in connections)
{
    client.ReceiveTimeout = 0;
    if (client.Client.Poll(0, SelectMode.SelectRead))
    {
        int bytesPeeked = 0;
        byte[] buffer = new byte[1];
        bytesPeeked = client.Client.Receive(buffer, SocketFlags.Peek);
        if (bytesPeeked == 0)
        {
            deadClient = client;
            break;
        }
        else
        {
            NetworkStream ns = client.GetStream();
            if (ns.DataAvailable)
            {
                BinaryFormatter bf = new BinaryFormatter();
                object o = bf.Deserialize(ns);
                ReceiveMyObject(o);
            }
        }
    }
}
(I have left out exception handling code for brevity.)
This code works; however, I would not call this solution "elegant". The other elegant solution I am aware of is to spawn a thread per TcpClient and allow the BinaryFormatter.Deserialize (née NetworkStream.Read) call to block, which would detect the disconnection correctly. That approach, though, has the overhead of creating and maintaining a thread per client.
I get the feeling that I'm missing some secret, awesome answer that would retain the clarity of the original code, but avoid the use of additional threads to perform asynchronous reads. Though, perhaps, the NetworkStream class was never designed for this sort of usage. Can anyone shed some light?
Update: Just want to clarify that I'm interested to see if the .NET framework has a solution that covers this use of NetworkStream (i.e. polling and avoiding blocking) - obviously it can be done; the NetworkStream could easily be wrapped in a supporting class that provides the functionality. It just seemed strange that the framework essentially requires you to use threads to avoid blocking on NetworkStream.Read, or, to peek on the socket itself to check for disconnections - almost like it's a bug. Or a potential lack of a feature. ;)
Is the server expecting to be sent multiple objects over the same connection? If so, I don't see how this code will work, as there is no delimiter being sent that signifies where the first object ends and the next object begins.
If only one object is being sent and the connection closed afterwards, then the original code would work.
There has to be a network operation initiated in order to find out whether the connection is still active. What I would do, instead of deserializing directly from the network stream, is buffer into a MemoryStream first. That would allow me to detect when the connection was lost. I would also use message framing to delimit multiple responses on the stream.
NetworkStream ns = client.GetStream();
BinaryReader br = new BinaryReader(ns);

// message framing. First, read the #bytes to expect.
int objectSize = br.ReadInt32();
if (objectSize == 0)
    break; // client disconnected

byte[] buffer = new byte[objectSize];
int index = 0;
int read = ns.Read(buffer, index, Math.Min(objectSize, 1024));
while (read > 0)
{
    objectSize -= read;
    index += read;
    read = ns.Read(buffer, index, Math.Min(objectSize, 1024));
}

if (objectSize > 0)
{
    // client aborted connection in the middle of stream;
    break;
}
else
{
    BinaryFormatter bf = new BinaryFormatter();
    using (MemoryStream ms = new MemoryStream(buffer))
    {
        object o = bf.Deserialize(ms);
        ReceiveMyObject(o);
    }
}
Yeah, but what if you lose the connection before getting the size? I.e., right before the following line:
// message framing. First, read the #bytes to expect.
int objectSize = br.ReadInt32();
ReadInt32() will block the thread indefinitely.