Eclipse Milo - writeValue(NodeId, DataValue) is very slow

We are successfully communicating with an OPC UA server, reading and setting tags. Everything works fine, but the only issue now is that writing a tag value takes a long time, around 600 ms per tag. So setting 10 tags takes around 6 seconds, which is unacceptable in a production environment... please suggest.

How long a write takes is almost always the responsibility of the server, not the client, so there is probably not much else you can do on the Milo client side other than make sure to batch your writes into a single call whenever possible, which it sounds like you are already doing.
You could verify this for yourself by connecting without encryption, capturing the traffic in Wireshark, and verifying that the delay you see is between the request being sent from the client and the response being received from the server.
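For reference, batching the writes into a single service call with Milo looks roughly like the sketch below; the node ids and values are placeholders, and "client" is assumed to be an already-connected org.eclipse.milo.opcua.sdk.client.OpcUaClient:

    // Build parallel lists of node ids and values (placeholders here).
    List<NodeId> nodeIds = Arrays.asList(
            new NodeId(2, "Tag1"),
            new NodeId(2, "Tag2"),
            new NodeId(2, "Tag3"));

    List<DataValue> values = Arrays.asList(
            // null status/timestamps: some servers reject writes that carry them
            new DataValue(new Variant(42), null, null),
            new DataValue(new Variant(3.14), null, null),
            new DataValue(new Variant(true), null, null));

    // One Write service call (one network round trip) for all tags,
    // instead of one writeValue() call per tag.
    List<StatusCode> results = client.writeValues(nodeIds, values).get();

If even a single batched call like this still takes roughly 600 ms per tag's worth of time, the delay is in the server, which is what the Wireshark capture should confirm.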

Related

low connectivity protocols or technologies

I'm trying to improve the reliability of a server-app-website architecture that another programmer has developed.
At the moment, Android smartphones open a TCP connection to a server component to exchange data. The server takes the data and writes it into a DB, and another user can look at the data through a website. The problem is that the smartphones are very regularly in locations where connectivity is really bad. The consequence is that the smartphones lose the TCP connection and it's hard to reconnect. Now my question is whether there are any protocols that are lightweight or accommodating enough concerning bad connectivity that the data exchange could work better or more reliably.
For example, I was thinking about replacing the raw TCP interface with a RESTful API, but I don't really know how well REST works in this scenario, as I don't have any experience in this area.
Maybe useful to know for answering this question: The server component is programmed in c#. The connecting components are android smartphones.
Please understand that I'm not adding any code to this question because, in my opinion, it's purely a theoretical question.
Thank you in advance!
REST runs over HTTP which runs over TCP so it would have the same issues with connectivity.
Moving up the stack to the application, you could perhaps think in terms of 'interference'. I quite often have to use technical equipment in remote areas with limited reception, and it reminds me of trying to communicate in a storm. If you think about it, if you're trying to get someone to do something in a storm where they can hardly hear you and the words get blown away (dropped signal), you don't read them the manual on how to fix something; you shout key words such as 'handle', 'pull', 'pull', 'PULL', 'ok'. So the information reaches them in small bursts you can repeat (pull, what? pull, eh? PULL! oh righto!).
Can you redesign the communications between the android app and the server so the server can recognise key 'words' with corresponding data and build up the request over a period of time? If you consider idempotency, each burst of data would not alter the request if it has already been received (pull, PULL!) and over time the android app could send/receive smaller chunks of the request. If the signal stays up, just keep sending. If it goes down, note which parts of the request haven't been sent and retry them when the signal comes back.
So you're sending the request jigsaw-style but the server knows how to reassemble the pieces in the right order. A STOP word at the end tells the server ok this request is complete, go work on it. Until that word arrives the server can store the incomplete request or discard it if no more data comes in.
If the server responds to the first request chunk with an id, the app can use that id to fetch the response and keep trying until the full response comes back, at which point the server can remove the response from its jigsaw cache. A fair amount of work, though.
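To make the 'jigsaw' idea a bit more concrete, here is a very rough server-side sketch; the chunk format, class, and method names are all made up for illustration, and single-threaded handling per request is assumed:

    import java.util.Map;
    import java.util.TreeMap;
    import java.util.concurrent.ConcurrentHashMap;

    class JigsawAssembler {
        // requestId -> (sequence number -> chunk payload)
        private final Map<String, TreeMap<Integer, String>> pending = new ConcurrentHashMap<>();

        // Store one chunk. Receiving the same (requestId, seq) twice changes nothing,
        // so the phone can blindly resend whenever the signal comes back (idempotent).
        void onChunk(String requestId, int seq, String payload) {
            pending.computeIfAbsent(requestId, id -> new TreeMap<>()).put(seq, payload);
        }

        // The STOP word: the client says how many chunks the request has in total.
        // Returns the reassembled request if complete, or null if pieces are still missing.
        String onStop(String requestId, int expectedChunks) {
            TreeMap<Integer, String> chunks = pending.get(requestId);
            if (chunks == null || chunks.size() < expectedChunks) {
                return null; // keep the partial request, or discard it after a timeout
            }
            pending.remove(requestId);
            return String.join("", chunks.values()); // TreeMap iterates in sequence order
        }
    }

The client side would be the mirror image: split the request, keep a list of sequence numbers not yet acknowledged, and retry just those when connectivity returns.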

How to send data to multiple client sockets

I have an AIR server application. Several mobile clients connect to it. Everything works well if there is only one client, but when the server sends data to several clients in a loop, the clients fail to process the data immediately. The data is late by one step.
This bit of code is inside a for loop:
clients[i].client.writeObject(data);
clients[i].client.flush();
Only the client that sent the data gets the update from the server. Everyone else is quiet for one step. If a client sends another message, then all the other clients are updated to the state of the previously sent update.
The code on the clients is fine, as a client running on a computer receives the updates on time. Only the mobile clients fail to update.
What could be the reason for this issue?
What is the proper way of sending data to multiple client sockets at the same time?
I have solved the issue by setting a timer to delay each data transfer by 1/3 of a second. A shorter delay caused the same issue. I do not think it is the only solution, but it worked.
The problem with this solution is that if there are 100 clients, the last one will only receive the data update after about 30 seconds.

Socket has timed out

We are trying to access our web app (through a web server, IHS). When we use HTTP everything is fine; HTTPS also works, in that requests are submitted, however we continuously observe a Socket Timeout Exception after some requests have been processed. Thereafter, request processing resumes again. We have tested the application with quite a large concurrent load using HTTPS earlier, but in this case we are not sure why we are getting this error.
Oh boy, this can be due to thousands of different things. I would suggest a layered analysis approach, starting with the web server logs: you need to make sure the requests are reaching your web server and see what is happening to the ones that time out. You could be facing anything from network latency to a resource-bound host, contention, or who knows what; it all depends on your application's design.
Start off by checking out the network layer. Maybe if you provide some more information I can help you out.
Also check the HTTP and HTTPS timeout configurations on your web server.
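Since IHS is Apache-based, the usual httpd directives are where those timeouts live; a rough example with arbitrary values (tune them against your real workload):

    # httpd.conf (IHS) - example values only
    # Seconds a request/response may stall before the connection is dropped
    Timeout 300
    # Seconds an idle keep-alive connection is held open
    KeepAliveTimeout 10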

RTP/RTSP start-up latency: Would this method help to reduce it, and if yes, why don't we have it?

This is probably not the best forum for such a specialized question, but at the moment I don't know of a better one (open to suggestions/recommendations).
I work on a video product which for the last 10+ years has been using proprietary communications protocol (DCOM-based) to send the video across the network. A while ago we recognized the need to standardize and currently are almost at a point of ripping out all that DCOM baggage and replacing it with a fully compliant RTP/RTSP client/server framework.
One thing we noticed during testing over the last few months is that when we switch the client to use RTP/RTSP, there's a noticeable increase in start-up latency. The problem is that the cause isn't our implementation but RTSP itself.
BEFORE (DCOM): we would send one DCOM command and before that command even returned back to the client, the server would already be sending video. -- total latency 1 RTT
NOW (RTSP): This is the sequence of commands, each one being a separate network request: DESCRIBE, SETUP, SETUP, PLAY (assuming the session has audio and video) -- total of 4 RTTs.
Works as designed - unfortunately it feels like a step backwards because prior user experience was actually better.
Can this be improved? If you stay within the standard, the short answer is no. However, my team fully controls our entire RTP/RTSP stack, and I've been thinking we could introduce a new RTSP command (without touching any of the existing commands, so we remain fully interoperable) as a solution: DESCRIBE_SETUP_PLAY.
We could send this one command, passing in the types of streams we're interested in (typically there's one video stream and 0..1 audio streams). The response would include the full SDP text as well as all the port information, and just like before, the server would start streaming instantly without waiting for anything else from the client.
Would this work? Any downsides that I may not be seeing? I'm curious why this wasn't considered for (or was dropped from) the official spec, since the latency is definitely noticeable even on a local intranet.
FYI, it is possible according to the RTSP 1.0 specification:
9.1 Pipelining
A client that supports persistent connections or connectionless mode
MAY "pipeline" its requests (i.e., send multiple requests without
waiting for each response). A server MUST send its responses to those
requests in the same order that the requests were received.
The RTSP 2.0 draft also contains support for pipelining.
However none of the clients/servers I've used implement it AFAIK.
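To make pipelining concrete: the client writes the whole request sequence on one connection before reading any responses, and the server answers in order. A very rough Java sketch follows; the host, URLs, and transport values are placeholders, and note that in RTSP 1.0 the Session ID used by PLAY normally comes from the SETUP response, so pipelining the full sequence still needs the server's cooperation (part of what the 2.0 pipelining support formalizes):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.net.Socket;

    public class PipelinedRtspSketch {
        public static void main(String[] args) throws Exception {
            try (Socket s = new Socket("camera.example.com", 554)) {
                Writer out = new OutputStreamWriter(s.getOutputStream(), "US-ASCII");
                // Send everything without waiting for the individual responses.
                out.write("DESCRIBE rtsp://camera.example.com/stream RTSP/1.0\r\n"
                        + "CSeq: 1\r\nAccept: application/sdp\r\n\r\n");
                out.write("SETUP rtsp://camera.example.com/stream/video RTSP/1.0\r\n"
                        + "CSeq: 2\r\nTransport: RTP/AVP;unicast;client_port=8000-8001\r\n\r\n");
                out.write("PLAY rtsp://camera.example.com/stream RTSP/1.0\r\n"
                        + "CSeq: 3\r\n\r\n");
                out.flush(); // all requests leave in roughly one round trip

                // Responses come back in request order, per the section quoted above.
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(s.getInputStream(), "US-ASCII"));
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }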

RequestBuilder timeouts and browser connection limits per domain

This is specifically about GWT's RequestBuilder, but should apply to general XHR as well. My company is having me build a near-realtime chat application over HTTP. Yes, I do realize there are better ways to do chat applications, but this is what they want. Eventually we want it working on the iPad/iPhone as well, so Flash is out, which rules out WebSockets and Comet as well, I think?
Anyway, I'm running into issues where I've set GWT's RequestBuilder timeout to 10 seconds and we get very random and sporadic timeouts. We've got error handling and emailing on the server side and never get any errors, which suggests the underlying XHR request that RequestBuilder is built on never reaches the server and times out after 10 seconds.
We're using these requests to poll the server for new messages rather often, for sending new messages to the server, and also for polling (less frequently) other parts of the application. What I'm afraid of is that we're running into the browser's limit on concurrent connections to the same domain (2 for IE by default?).
Now my question is: if I construct a RequestBuilder and call its send() method, and the browser blocks it from sending until one of the 2 connections per domain is free, does the timeout still start while the request is being blocked, or does it not start until the browser actually releases the underlying XHR?
I hope that's clear, if not please let me know and I'll try to explain more.
On the GWT Incubator doc page is an article explaining server push.
With said technique you only hold one connection open all the time.
Browsers used to allow only 2 connections per hostname; that has now changed. 'Modern' browsers allow up to 6 simultaneous connections - it varies between browsers. See http://www.browserscope.org/ - network tab.
As regards the timer, it starts before GWT invokes xhr.send(), so your suspicion is right. See Request.java and RequestBuilder.java if you want to trace it out.
Seems like half the time, you answer your own question as soon as you post it.
Via: http://google-web-toolkit.googlecode.com/svn/javadoc/1.6/com/google/gwt/http/client/package-summary.html
Pending Request Limit
- Modern web browsers are limited to having only two HTTP requests outstanding at any one time. If your server experiences an error that prevents it from sending a response, it can tie up your outstanding requests. If you are concerned about this, you can always set timeouts for the request via RequestBuilder.setTimeoutMillis(int).
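For completeness, wiring the timeout up looks roughly like this (the URL and callback bodies are placeholders); a timeout is delivered to onError as a RequestTimeoutException, and as noted above the timer may already be counting while the browser holds the XHR in its per-domain queue:

    RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, "/chat/poll");
    builder.setTimeoutMillis(10000); // 10 second timeout

    try {
        builder.sendRequest(null, new RequestCallback() {
            public void onResponseReceived(Request request, Response response) {
                // handle new chat messages in response.getText()
            }

            public void onError(Request request, Throwable exception) {
                // a timeout shows up here as a RequestTimeoutException
            }
        });
    } catch (RequestException e) {
        // the request could not be initiated at all
    }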