How to write a Lighttpd plugin to do live streaming

I'd like to write a Lighttpd plugin to do streaming.
So far I duplicate the client socket (con->fd) in the function 'mod_strm_handle_physical' so that I can send streaming data through it in a child process. In the main process, I set some status flags on the connection struct to tell the server not to close this connection.
Here is what I did in the 'mod_strm_handle_physical' function:
URIHANDLER_FUNC(mod_strm_handle_physical)
{
    if (con->uri.path->ptr)
    {
        if (!strcmp("/abcd", con->uri.path->ptr))
        {
            // change Content-Type
            response_header_overwrite(srv, con,
                CONST_STR_LEN("Content-Type"),
                CONST_STR_LEN("application/octet-stream"));
            con->http_status = 200;
            con->file_finished = 0;      // tell the server not to close the connection
            con->response.keep_alive = 1;
            int dup_fd = dup(con->fd);   // duplicate the client socket
            pid_t child = fork();
            if (child > 0)               // parent: hand the request back to lighttpd
                return HANDLER_FINISHED;
            else if (child == 0)         // child: stream on the duplicated socket
            {
                send(dup_fd, STREAMING_DATA, LENGTH, 0);
                close(dup_fd);
                exit(0);
            }
            else
                perror("fork()");
        }
    }
    return HANDLER_GO_ON;
}
The problem is...
This way the server can stream and it seems okay. However, the server cannot do more than one stream at the same time. Am I doing something wrong? I thought the streaming job was out-of-process.

Related

OPC UA Client: capturing the lost item values from the UA server after a disconnect/connection error

I am building an OPC UA client using the OPC Foundation SDK. I am able to create a subscription containing some MonitoredItems.
On the OPC UA server these monitored items change value constantly (every second or so).
I want to disconnect the client (simulate a broken connection), keep the subscription alive, and wait for a while. Then I reconnect and get my subscriptions back, but I also want all the monitored item values that were queued up during the disconnect. Right now I only get the last server value on reconnect.
I am setting a queue size:
monitoredItem.QueueSize = 100;
To simulate a connection error I have set "delete subscriptions" to false on CloseSession:
m_session.CloseSession(new RequestHeader(), false);
My question is how to capture the content of the queue after a disconnect/connection error.
Should the 'lost values' arrive as new MonitoredItem_Notification events automatically when the client reconnects?
Should the SubscriptionId be the same as before the connection was broken?
Should the SessionId be the same, or will a new SessionId let me keep the existing subscriptions? What is the best way to simulate a connection error?
Many questions :-)
Below is a sample from the code where I create the subscription containing some MonitoredItems, along with the MonitoredItem_Notification event method.
Any OPC UA guru out there?
if (node.Displayname == "node to monitor")
{
    MonitoredItem mon = CreateMonitoredItem((NodeId)node.reference.NodeId, node.Displayname);
    m_subscription.AddItem(mon);
    m_subscription.ApplyChanges();
}
private MonitoredItem CreateMonitoredItem(NodeId nodeId, string displayName)
{
    if (m_subscription == null)
    {
        m_subscription = new Subscription(m_session.DefaultSubscription);
        m_subscription.PublishingEnabled = true;
        m_subscription.PublishingInterval = 3000; //1000;
        m_subscription.KeepAliveCount = 10;
        m_subscription.LifetimeCount = 10;
        m_subscription.MaxNotificationsPerPublish = 1000;
        m_subscription.Priority = 100;
        bool cache = m_subscription.DisableMonitoredItemCache;
        m_session.AddSubscription(m_subscription);
        m_subscription.Create();
    }
    // add the new monitored item.
    MonitoredItem monitoredItem = new MonitoredItem(m_subscription.DefaultItem);
    // Each time a monitored item is sampled, the server evaluates the sample using a filter defined for each monitored item.
    // The server uses the filter to determine if the sample should be reported. The type of filter depends on the type of item:
    // DataChangeFilter for Variables, EventFilter when monitoring Events, etc.
    //MonitoringFilter f = new MonitoringFilter();
    //DataChangeFilter f = new DataChangeFilter();
    //f.DeadbandValue
    monitoredItem.StartNodeId = nodeId;
    monitoredItem.AttributeId = Attributes.Value;
    monitoredItem.DisplayName = displayName;
    // Disabled, Sampling, Reporting (includes sampling)
    monitoredItem.MonitoringMode = MonitoringMode.Reporting;
    // How often the client wishes the server to check for new values. Must be 0 if the item is an event.
    // If negative, the SamplingInterval is set equal to the PublishingInterval (inherited).
    // The subscription's KeepAliveCount should always be longer than the SamplingInterval/PublishingInterval.
    monitoredItem.SamplingInterval = 500;
    // Number of samples stored on the server between each reporting
    monitoredItem.QueueSize = 100;
    monitoredItem.DiscardOldest = true; // discard oldest values when the queue is full
    monitoredItem.CacheQueueSize = 100;
    monitoredItem.Notification += MonitoredItem_Notification;
    if (ServiceResult.IsBad(monitoredItem.Status.Error))
    {
        return null;
    }
    return monitoredItem;
}
private void MonitoredItem_Notification(MonitoredItem monitoredItem, MonitoredItemNotificationEventArgs e)
{
    if (this.InvokeRequired)
    {
        this.BeginInvoke(new MonitoredItemNotificationEventHandler(MonitoredItem_Notification), monitoredItem, e);
        return;
    }
    try
    {
        if (m_session == null)
        {
            return;
        }
        MonitoredItemNotification notification = e.NotificationValue as MonitoredItemNotification;
        if (notification == null)
        {
            return;
        }
        string sess = m_session.SessionId.Identifier.ToString();
        string s = string.Format(" MonitoredItem: {0}\t Value: {1}\t Status: {2}\t SourceTimeStamp: {3}",
            monitoredItem.DisplayName,
            notification.Value.WrappedValue.ToString(),
            notification.Value.StatusCode.ToString(),
            notification.Value.SourceTimestamp.ToLocalTime().ToString("HH:mm:ss.fff"));
        richTextBox1.AppendText(s + " SessionId: " + sess);
    }
    catch (Exception exception)
    {
        ClientUtils.HandleException(this.Text, exception);
    }
}
I don't know how much of this, if any, the SDK you're using does for you, but the approach when reconnecting is generally:
1. Try to resume (re-activate) your old session. If this is successful, your subscriptions will already exist and all you need to do is send more PublishRequests. Since you're trying to test by closing the session, this probably won't work.
2. Create a new session and then call the TransferSubscriptions service to transfer the previous subscriptions to your new session. You can then start sending PublishRequests and you'll get the queued notifications.
Again, depending on the stack/SDK/toolkit you're using, some or none of this may be handled for you.

Opening a UDP connection in Veins toward an external server

I'm using Veins 4.4 and I need to store some results on an external server, so I would like to open a UDP connection toward it.
I've read several posts about using a TCP connection for the mobility in Veins, and I understood I should resort to the INET module to open a connection. However, I don't need it for the mobility, but to send data to an external server.
Is there any suggestion?
I was trying to use the processCommandFromApp method from the inet/src/transport/UDP.cc class:
void UDP::processCommandFromApp(cMessage *msg)
{
    switch (msg->getKind())
    {
        case UDP_C_BIND: {
            UDPBindCommand *ctrl = check_and_cast<UDPBindCommand*>(msg->getControlInfo());
            bind(ctrl->getSockId(), msg->getArrivalGate()->getIndex(), ctrl->getLocalAddr(), ctrl->getLocalPort());
            break;
        }
        case UDP_C_CONNECT: {
            UDPConnectCommand *ctrl = check_and_cast<UDPConnectCommand*>(msg->getControlInfo());
            connect(ctrl->getSockId(), msg->getArrivalGate()->getIndex(), ctrl->getRemoteAddr(), ctrl->getRemotePort());
            break;
        }
        case UDP_C_CLOSE: {
            UDPCloseCommand *ctrl = check_and_cast<UDPCloseCommand*>(msg->getControlInfo());
            close(ctrl->getSockId());
            break;
        }
        case UDP_C_SETOPTION: {
            UDPSetOptionCommand *ctrl = check_and_cast<UDPSetOptionCommand *>(msg->getControlInfo());
            SockDesc *sd = getOrCreateSocket(ctrl->getSockId(), msg->getArrivalGate()->getIndex());
            if (dynamic_cast<UDPSetTimeToLiveCommand*>(ctrl))
                setTimeToLive(sd, ((UDPSetTimeToLiveCommand*)ctrl)->getTtl());
            else if (dynamic_cast<UDPSetTypeOfServiceCommand*>(ctrl))
                setTypeOfService(sd, ((UDPSetTypeOfServiceCommand*)ctrl)->getTos());
            else if (dynamic_cast<UDPSetBroadcastCommand*>(ctrl))
                setBroadcast(sd, ((UDPSetBroadcastCommand*)ctrl)->getBroadcast());
            else if (dynamic_cast<UDPSetMulticastInterfaceCommand*>(ctrl))
                setMulticastOutputInterface(sd, ((UDPSetMulticastInterfaceCommand*)ctrl)->getInterfaceId());
            else if (dynamic_cast<UDPSetMulticastLoopCommand*>(ctrl))
                setMulticastLoop(sd, ((UDPSetMulticastLoopCommand*)ctrl)->getLoop());
            else if (dynamic_cast<UDPSetReuseAddressCommand*>(ctrl))
                setReuseAddress(sd, ((UDPSetReuseAddressCommand*)ctrl)->getReuseAddress());
            else if (dynamic_cast<UDPJoinMulticastGroupsCommand*>(ctrl))
            {
                UDPJoinMulticastGroupsCommand *cmd = (UDPJoinMulticastGroupsCommand*)ctrl;
                std::vector<IPvXAddress> addresses;
                std::vector<int> interfaceIds;
                for (int i = 0; i < (int)cmd->getMulticastAddrArraySize(); i++)
                    addresses.push_back(cmd->getMulticastAddr(i));
                for (int i = 0; i < (int)cmd->getInterfaceIdArraySize(); i++)
                    interfaceIds.push_back(cmd->getInterfaceId(i));
                joinMulticastGroups(sd, addresses, interfaceIds);
            }
            else if (dynamic_cast<UDPLeaveMulticastGroupsCommand*>(ctrl))
            {
                UDPLeaveMulticastGroupsCommand *cmd = (UDPLeaveMulticastGroupsCommand*)ctrl;
                std::vector<IPvXAddress> addresses;
                for (int i = 0; i < (int)cmd->getMulticastAddrArraySize(); i++)
                    addresses.push_back(cmd->getMulticastAddr(i));
                leaveMulticastGroups(sd, addresses);
            }
            else
                throw cRuntimeError("Unknown subclass of UDPSetOptionCommand received from app: %s", ctrl->getClassName());
            break;
        }
        default: {
            throw cRuntimeError("Unknown command code (message kind) %d received from app", msg->getKind());
        }
    }
    delete msg; // also deletes control info in it
}
I included the inet path as follows:
#include <inet/src/transport/udp/UDP.h>
and I call it as follows, passing a UDP_C_CONNECT message as input:
cMessage *UDP_C_CONNECT;
void Inet::UDP::processCommandFromApp(UDP_C_CONNECT);
But when I run the simulation, it crashes, returning this error:
Errors occurred during the build.
Errors running builder 'OMNeT++ Makefile Builder' on project 'veins'.
java.lang.NullPointerException
1) What is the correct way to set up the required connection?
2) Why am I getting this error as soon as I include the inet path?
UPDATE
I also tried another way to establish the connection:
std::string host;
host = "16777343";
int port = 5144;
Veins::TraCIConnection* connection;
connection = TraCIConnection::connect(host.c_str(), port);
but as soon as it loads the plugin, it's as if it is waiting for something at time 0.0, without starting the generation of the nodes.
Thanks for helping.
Simulations using OMNeT++ are C++ programs, so you can use the full range of libraries and system calls available to any C++ program. If you want to open a UDP connection to some other computer on your network, just create a UDP socket as you would in any C++ program, then send the data whenever needed.
Maybe the easiest way to go about writing this is to:
1. start with a plain C++ program that has nothing to do with OMNeT++;
2. move the part of the program that has to run before everything else into the initialize method of a module in your simulation, and the rest into a handleMessage method (see the sketch below).
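For instance, here is a minimal sketch of that idea, assuming a POSIX platform and the OMNeT++ 5 API; the module name ResultsUplink and the collector address 127.0.0.1:5144 are placeholder assumptions, not part of Veins or INET:
#include <omnetpp.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstring>
#include <string>

using namespace omnetpp;

// Hypothetical module that ships results to an external collector over UDP.
// The socket is a plain OS-level socket; it is invisible to the simulation.
class ResultsUplink : public cSimpleModule
{
    int fd = -1;
    struct sockaddr_in dest;

  protected:
    virtual void initialize() override
    {
        // one-time setup: create the socket and fill in the destination address
        fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
            throw cRuntimeError("socket() failed");
        std::memset(&dest, 0, sizeof(dest));
        dest.sin_family = AF_INET;
        dest.sin_port = htons(5144);                      // assumed collector port
        inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);  // assumed collector address
    }

    virtual void handleMessage(cMessage *msg) override
    {
        // whenever the simulation produces a result, ship it out immediately
        std::string line = std::string("result from ") + msg->getName() + "\n";
        sendto(fd, line.c_str(), line.size(), 0, (struct sockaddr *)&dest, sizeof(dest));
        delete msg;
    }

    virtual void finish() override
    {
        if (fd >= 0)
            close(fd);
    }
};

Define_Module(ResultsUplink);
UDP is a convenient fit here because sendto() on a datagram socket normally doesn't block, so sending from handleMessage shouldn't stall the simulation.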

How to make the server ignore data sent by the client after the client times out waiting for the server response?

I'm using sockets with O_NONBLOCK, select, keep-alive connections, and something like HTTP.
Each server connection and the client side use a buffer to accumulate the sent data until a complete message is received.
How it works:
1. client sends data "A"
2. client tries to receive the response from the server
3. server receives it but doesn't reply immediately
4. client times out
5. server sends response "A" (but the client doesn't receive it now)
Another request:
1. client sends data "B"
2. server sends response "B"
3. client receives "AB" as the response instead of only "B"
The problem is that the client receives the previously buffered message.
Source code below:
Server.cpp base class:
bool Server::start()
{
    struct sockaddr_in client_addr;
    SOCKET client_socket, max_sock;
    Connection* conn;
    int addrlen = sizeof(struct sockaddr_in);
    std::list<Connection*>::iterator it, itr;
    int response;
    fd_set fdset, read_fds;

    max_sock = m_socket;
    FD_ZERO(&fdset);
    FD_SET(m_socket, &fdset);
    onStart();
    while (true)
    {
        // make a copy of the set
        read_fds = fdset;
        // wait for a read event
        response = select(max_sock + 1, &read_fds, NULL, NULL, NULL);
        if (response == -1)
            break;
        // check for new connections
        if (FD_ISSET(m_socket, &read_fds))
        {
            response--;
            // accept connections
            client_socket = accept(m_socket, (struct sockaddr*)&client_addr, &addrlen);
            if (client_socket != INVALID_SOCKET)
            {
                conn = new Connection(*this, client_socket, &client_addr);
                m_connections.push_front(conn);
                // add the connection to the set to wait for read events
                FD_SET(client_socket, &fdset);
                // allow the new socket to be picked up by the select function
                if (max_sock < client_socket)
                    max_sock = client_socket;
            }
        }
        // check for received data from clients
        it = m_connections.begin();
        while (it != m_connections.end() && response > 0)
        {
            conn = *it;
            // check whether the connection can be read
            if (FD_ISSET(conn->getSocket(), &read_fds))
            {
                response--;
                conn->receive();
                if (!conn->isConnected())
                {
                    FD_CLR(conn->getSocket(), &fdset);
                    // remove the connection from the list
                    itr = it;
                    it++;
                    m_connections.erase(itr);
                    delete conn;
                    continue;
                }
            }
            it++;
        }
    }
    onFinish(response >= 0);
    return response >= 0;
}
main.cpp Server implementation:
void ClientConnection::onReceive(const void* data, size_t size)
{
    const char *str, *pos = NULL;
    HttpParser* p;

    buffer->write(data, size);
    do
    {
        str = (const char*)buffer->data();
        if (contentOffset == 0)
        {
            pos = strnstr(str, buffer->size(), "\r\n\r\n");
            if (pos != NULL)
            {
                contentOffset = pos - str + 4;
                p = new HttpParser((const char*)buffer->data(), contentOffset);
                contentLength = p->getContentLength();
                delete p;
            }
        }
        if (buffer->size() - contentOffset < contentLength || contentOffset == 0)
            return;
        proccessRequest();
        keepDataStartingOf(contentOffset + contentLength);
    }
    while (buffer->size() > 0);
}
The client-side code is a simple recv/send with a timeout.
Any idea how to solve this?
The first thing that comes to mind is to make the client's timeout large enough that the client won't time out unless the server is actually dead... but I'm sure you've already thought of that. :)
So assuming that's not a good enough fix, the next thing to try is to have the client send an ID number with each request it sends. The ID number can be generated with a simple counter (e.g. the client tags its first request with 0, its second with 1, etc.). The server, when sending its reply, includes that same ID number with the reply.
When the client receives a reply, it compares the ID number in the reply data against the current value of its counter. If the two numbers are the same, it processes the data. If not, it ignores the data. Et voila!
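A minimal sketch of the client side of that scheme, assuming POSIX sockets and a toy frame layout (4-byte ID, 4-byte length, then the payload) invented just for illustration; since the real protocol is HTTP-like, in practice the ID would travel in a header:
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <cstdint>
#include <optional>
#include <string>

// read exactly len bytes; false on timeout (with SO_RCVTIMEO set), error, or close
static bool recvAll(int sock, void *buf, size_t len)
{
    char *p = static_cast<char *>(buf);
    while (len > 0)
    {
        ssize_t n = recv(sock, p, len, 0);
        if (n <= 0)
            return false;
        p += n;
        len -= n;
    }
    return true;
}

class Client
{
    int sock_;             // connected TCP socket
    uint32_t next_id_ = 0; // simple counter used to tag requests

public:
    explicit Client(int sock) : sock_(sock) {}

    // tag each request with a fresh ID so stale replies can be recognized later
    uint32_t sendRequest(const std::string &body)
    {
        uint32_t id = next_id_++;
        uint32_t hdr[2] = { htonl(id), htonl((uint32_t)body.size()) };
        send(sock_, hdr, sizeof(hdr), 0);
        send(sock_, body.data(), body.size(), 0);
        return id;
    }

    // read replies, silently discarding any whose ID belongs to an earlier,
    // timed-out request; returns the matching payload, or nullopt on error
    std::optional<std::string> receiveReply(uint32_t expected_id)
    {
        for (;;)
        {
            uint32_t hdr[2];
            if (!recvAll(sock_, hdr, sizeof(hdr)))
                return std::nullopt;
            std::string payload(ntohl(hdr[1]), '\0');
            if (!recvAll(sock_, payload.data(), payload.size()))
                return std::nullopt;
            if (ntohl(hdr[0]) == expected_id)
                return payload;  // this is the reply we're waiting for
            // otherwise it's a leftover reply ("A") from before the timeout: drop it
        }
    }
};
With this in place, the stale "A" response from the first exchange is consumed and dropped the next time the client waits for "B", instead of being handed to the application.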

Asynchronous socket data handling in VC

In my application I used an asynchronous socket for communication. I want to download some data from a server, which sends the data as fixed-size packets. I want to download the full data and process it, using a byte array to store it. I want to wait until the full download completes.
void download()
{
    sendownloadrequest();
    wait();
    processdata();
}

void wait()
{
    m_bwait = 1;
    MSG msg;
    while (m_bwait == 1)
    {
        if (GetMessage(&msg, NULL, NULL, NULL))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
}

void onreceive()
{
    .....
    if (m_nReceivedSize >= m_nTotalSize)
    {
        m_bwait = 0;
    }
}
I am not satisfied with the above code; please suggest a better method.
Thanks.

Detecting client TCP disconnection while using NetworkStream class

A friend of mine came to me with a problem: when using the NetworkStream class on the server end of the connection, if the client disconnects, NetworkStream fails to detect it.
Stripped down, his C# code looked like this:
List<TcpClient> connections = new List<TcpClient>();
TcpListener listener = new TcpListener(7777);
listener.Start();
while (true)
{
    if (listener.Pending())
    {
        connections.Add(listener.AcceptTcpClient());
    }
    TcpClient deadClient = null;
    foreach (TcpClient client in connections)
    {
        if (!client.Connected)
        {
            deadClient = client;
            break;
        }
        NetworkStream ns = client.GetStream();
        if (ns.DataAvailable)
        {
            BinaryFormatter bf = new BinaryFormatter();
            object o = bf.Deserialize(ns);
            ReceiveMyObject(o);
        }
    }
    if (deadClient != null)
    {
        deadClient.Close();
        connections.Remove(deadClient);
    }
    Thread.Sleep(0);
}
The code works, in that clients can successfully connect and the server can read data sent to it. However, if the remote client calls tcpClient.Close(), the server does not detect the disconnection - client.Connected remains true, and ns.DataAvailable is false.
A search of Stack Overflow provided an answer - since Socket.Receive is not being called, the socket is not detecting the disconnection. Fair enough. We can work around that:
foreach (TcpClient client in connections)
{
    client.ReceiveTimeout = 0;
    if (client.Client.Poll(0, SelectMode.SelectRead))
    {
        int bytesPeeked = 0;
        byte[] buffer = new byte[1];
        bytesPeeked = client.Client.Receive(buffer, SocketFlags.Peek);
        if (bytesPeeked == 0)
        {
            deadClient = client;
            break;
        }
        else
        {
            NetworkStream ns = client.GetStream();
            if (ns.DataAvailable)
            {
                BinaryFormatter bf = new BinaryFormatter();
                object o = bf.Deserialize(ns);
                ReceiveMyObject(o);
            }
        }
    }
}
(I have left out exception handling code for brevity.)
This code works, however, I would not call this solution "elegant". The other elegant solution to the problem I am aware of is to spawn a thread per TcpClient, and allow the BinaryFormatter.Deserialize (née NetworkStream.Read) call to block, which would detect the disconnection correctly. Though, this does have the overhead of creating and maintaining a thread per client.
I get the feeling that I'm missing some secret, awesome answer that would retain the clarity of the original code, but avoid the use of additional threads to perform asynchronous reads. Though, perhaps, the NetworkStream class was never designed for this sort of usage. Can anyone shed some light?
Update: Just want to clarify that I'm interested to see if the .NET framework has a solution that covers this use of NetworkStream (i.e. polling and avoiding blocking) - obviously it can be done; the NetworkStream could easily be wrapped in a supporting class that provides the functionality. It just seemed strange that the framework essentially requires you to use threads to avoid blocking on NetworkStream.Read, or, to peek on the socket itself to check for disconnections - almost like it's a bug. Or a potential lack of a feature. ;)
Is the server expecting to be sent multiple objects over the same connection? If so, I don't see how this code will work, as there is no delimiter being sent that signifies where the first object starts and the next object ends.
If only one object is being sent and the connection closed after, then the original code would work.
There has to be a network operation initiated in order to find out if the connection is still active or not. What I would do, is that instead of deserializing directly from the network stream, I would instead buffer into a MemoryStream. That would allow me to detect when the connection was lost. I would also use message framing to delimit multiple responses on the stream.
NetworkStream ns = client.GetStream();
BinaryReader br = new BinaryReader(ns);
// message framing. First, read the #bytes to expect.
int objectSize = br.ReadInt32();
if (objectSize == 0)
    break; // client disconnected
byte[] buffer = new byte[objectSize];
int index = 0;
int read = ns.Read(buffer, index, Math.Min(objectSize, 1024));
while (read > 0)
{
    objectSize -= read;
    index += read;
    read = ns.Read(buffer, index, Math.Min(objectSize, 1024));
}
if (objectSize > 0)
{
    // client aborted the connection in the middle of the stream
    break;
}
else
{
    BinaryFormatter bf = new BinaryFormatter();
    using (MemoryStream ms = new MemoryStream(buffer))
    {
        object o = bf.Deserialize(ms);
        ReceiveMyObject(o);
    }
}
Yeah but what if you lose a connection before getting the size? i.e. right before the following line:
// message framing. First, read the #bytes to expect.
int objectSize = br.ReadInt32();
ReadInt32() will block the thread indefinitely.