I would like someone to clarify something for me:
There are two kinds of timeouts that exist during SOAP requests/responses:
1- Connection Timeout
2- Read Timeout
This applies at least to Axis1/Axis2, which I'm currently using.
The connection timeout happens when the client cannot connect to the web service in question within the configured Connection Timeout value, which eventually results in the following exception being thrown:
Could not connect to host within a timeout of "value".
As for the Read Timeout, I'm really not sure about it, and I don't know which assumption is true. Let's take a scenario, for example, in which a client sends data to a web service, which in turn processes the data, checks its sanity, inserts it into the database if it is valid, and then sends some data back to the client. Bottom line: we have a significant amount of processing time on the server, and a significant amount of data being sent back and forth between the client and the web service.
What I'm unable to understand is when is a read timeout exception thrown by the client?
1- Could it happen when the client is still in the process of marshaling the objects that are being sent to the web service?
2- Could it happen during the process when the web service has already started writing its response to the open socket?
I would really appreciate clear answers on this. Thanks a lot in advance.
It's much clearer now, thanks to the research I did on this. A "Read Timeout" basically occurs when the client hasn't received a single byte of data within the configured timeout. So let's take a scenario where a server needs to reply to a client with 4 MB of data: the Read Timeout is reset with every byte of data the client receives from the server.
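To make the distinction concrete, here is a minimal sketch using plain java.net rather than Axis (the URL and timeout values are invented for illustration; Axis1/Axis2 expose analogous connection and socket/read timeout settings on their client stubs):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/service"); // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Connection timeout: how long to wait for the TCP connection to be established.
        conn.setConnectTimeout(10000);
        // Read timeout: how long to wait for the *next* byte of the response.
        // The timer restarts whenever a byte arrives, so a slow 4 MB response
        // does not fail as long as bytes keep coming.
        conn.setReadTimeout(30000);
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) {
                // each successful read resets the read-timeout window
            }
        }
    }
}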
Related
I'm trying to improve the reliability of a server-app-website architecture that another programmer developed.
At the moment, Android smartphones open a TCP connection to a server component to exchange data. The server takes the data and writes it into a DB, and another user can look at the data through a website. The problem is that the smartphones are very often in locations where connectivity is really bad. The consequence is that the smartphones lose the TCP connection and it's hard to reconnect. Now my question is whether there are any protocols so lightweight, or so accommodating of bad connectivity, that the data exchange could work better or more reliably.
For example, I was thinking about replacing the raw TCP interface with a RESTful API, but I don't really know how well REST works in this scenario, as I don't have any experience in this area.
Maybe useful to know for answering this question: the server component is programmed in C#. The connecting components are Android smartphones.
Please understand that I haven't added any code to this question because, in my opinion, it's just a theoretical question.
Thank you in advance!
REST runs over HTTP which runs over TCP so it would have the same issues with connectivity.
Moving up the stack to the application layer, you could perhaps think in terms of 'interference'. I quite often have to use technical equipment in remote areas with limited reception, and it reminds me of trying to communicate in a storm. If you think about it, when you're trying to get someone to do something in a storm where they can hardly hear you and the words get blown away (dropped signal), you don't read them the manual on how to fix something; you shout key words such as 'handle', 'pull', 'pull', 'PULL', 'ok'. So the information reaches them in small bursts you can repeat (pull, what? pull, eh? PULL! oh righto!).
Can you redesign the communications between the android app and the server so the server can recognise key 'words' with corresponding data and build up the request over a period of time? If you consider idempotency, each burst of data would not alter the request if it has already been received (pull, PULL!) and over time the android app could send/receive smaller chunks of the request. If the signal stays up, just keep sending. If it goes down, note which parts of the request haven't been sent and retry them when the signal comes back.
So you're sending the request jigsaw-style but the server knows how to reassemble the pieces in the right order. A STOP word at the end tells the server ok this request is complete, go work on it. Until that word arrives the server can store the incomplete request or discard it if no more data comes in.
If the server responds to the first request chunk with an id, the app can use that id to fetch the response and keep trying until the full response comes back, at which point the server can remove the response from its jigsaw cache. A fair amount of work, though.
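To make the jigsaw idea a bit more concrete, here is a rough Java sketch of the sending side: the request is split into small idempotent chunks tagged with a request id and sequence number, only the pieces that failed are retried, and a final STOP message tells the server the request is complete. The Transport interface and all names here are invented placeholders, not a real library:

import java.util.Arrays;
import java.util.BitSet;
import java.util.UUID;

public class ChunkedSender {
    private static final int CHUNK_SIZE = 1024; // small bursts survive bad links better

    // Hypothetical transport abstraction (could be HTTP POSTs, a raw socket, etc.)
    public interface Transport {
        void sendChunk(String requestId, int index, int total, byte[] data) throws java.io.IOException;
        void sendStop(String requestId) throws java.io.IOException;
    }

    public void send(byte[] payload, Transport transport) throws Exception {
        String requestId = UUID.randomUUID().toString(); // lets the server reassemble and deduplicate
        int chunks = (payload.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
        BitSet delivered = new BitSet(chunks);
        while (delivered.cardinality() < chunks) {
            for (int i = 0; i < chunks; i++) {
                if (delivered.get(i)) continue; // idempotent: already received once, skip it
                int from = i * CHUNK_SIZE;
                int to = Math.min(from + CHUNK_SIZE, payload.length);
                try {
                    transport.sendChunk(requestId, i, chunks, Arrays.copyOfRange(payload, from, to));
                    delivered.set(i);
                } catch (java.io.IOException lostSignal) {
                    Thread.sleep(2000); // signal dropped: back off, then retry the missing pieces
                    break;
                }
            }
        }
        transport.sendStop(requestId); // the STOP word: request is complete, server can go work on it
    }
}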
I have a proxy server implemented. After sending the final response to the client, if I directly close the socket (System.Net.Sockets TcpClient.Client.Close()), the client end receives a connection aborted error, but if I instead use System.Net.Sockets TcpClient.GetStream().Close(), it works successfully. I want to understand what the difference is and why the client side receives an error in the first scenario.
I would say that closing sockets is not as trivial an operation as most people think :)
First of all, you should understand how the close should be done correctly. Basically, you have to consider that a close is a kind of message like any other message sent out on your socket. In other words, close() is information to the other side of the communication that the peer has finished some kind of work.
Now, the important thing to understand is that with a TCP socket you can inform the peer that you have finished sending or finished listening.
On this page, you can check out how it works in the background (note that FIN and ACK are TCP-level segments handled by the network stack, so even using a plain sockets implementation you will never see them): http://www.tcpipguide.com/free/t_TCPConnectionTermination-2.htm
So now the more practical part. Consider that you have a client and a server. The server needs to receive a message and close the connection, and the client is just going to send a message and then close the connection. If you also consider that the network needs some time to deliver your communication, you will realize that if you do this too quickly, the client will close the connection before the server has received your message. If you call TcpClient.Client.Close(), the client stops listening for anything (which also means for the information that the server has closed the connection). So here the TCP stack comes into play (Windows does it for you): if you close the socket this way, the TCP stack needs to inform the server side that whatever the server has sent is being thrown away. That's why you get an exception.
So the correct way is to:
inform the server that the client has finished sending data (FIN)
wait until the server confirms that it knows the client will not send any more data (ACK)
now the server should inform the client that it will stop sending data (FIN)
now the client can say "ok, I got it, I will not listen anymore" (ACK)
Anyway, the C# TcpClient seems to hide the logic of the underlying socket closing routine, but if you don't perform the close sequence in the correct way, you'll end up with errors.
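The question is about .NET's TcpClient, but the same orderly shutdown can be sketched with plain Java sockets just to illustrate the handshake above (assuming the socket is already connected):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

public class GracefulClose {
    public static void closeGracefully(Socket socket) throws IOException {
        socket.shutdownOutput(); // step 1: sends FIN - "I will not send any more data"
        InputStream in = socket.getInputStream();
        byte[] buf = new byte[4096];
        while (in.read(buf) != -1) {
            // steps 2-3: keep reading whatever the peer still sends;
            // read() returning -1 means the peer's FIN has arrived
        }
        socket.close(); // step 4: release the socket; the final ACK is handled by the OS
    }
}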
I hope this somewhat long explanation helps you understand how it works in the background and, finally, why you see the error.
If you wish to learn more about the details of the TCP protocol, this is also a good place to read further: http://www.tcpipguide.com/free/t_TCPIPTransmissionControlProtocolTCP.htm
I suppose that in order to close the connection, you need to send some special byte sequence, and it looks like this is implemented only by the TcpClient library and not by the Socket library. Probably something like an EOF should be sent.
You can check this with network traffic utilities like tcpdump.
Good luck!
In a client - server type system, it would simplify my server code somewhat if the client could indicate if it was trying to make a new connection or was attempting to reconnect after a connection failure.
I realize that in reality a new connection is a new connection, period. But by passing this one extra bit of information, it would simplify my server's handling of the situation - which threads and data areas can be reused, which threads should be killed, etc. Without this one extra bit, the server is forced to assume a reconnection when possible, and then reassess that assumption when the first message arrives, in which the client indicates whether it is attempting to revive the previous conversation or wishes to start a completely new relationship.
I'm guessing the answer is no, but any suggestions are welcome.
By the way, the client is an Android program and the server is .Net Windows.
I'm guessing the answer is no
The answer is no.
but any suggestions are welcome.
Either (a) it should be obvious from your application protocol whether the client is connecting or reconnecting, or (b) it shouldn't make any difference which it is. Much more usually, it's (b).
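For completeness, option (a) can be as simple as making the first application-level message carry the information itself; a minimal sketch with invented names (the server decides from the session id whether it can revive the old conversation or must start a fresh one):

// First message the client sends after connecting.
// sessionId == null means "brand new conversation";
// a non-null id means "please revive the state you kept under this id".
public final class Hello {
    public final String sessionId;

    public Hello(String sessionId) {
        this.sessionId = sessionId;
    }

    public String toWireFormat() {
        return sessionId == null ? "HELLO NEW" : "HELLO RESUME " + sessionId;
    }
}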
We are trying to access our web app (through a web server, IHS). When we use HTTP we are fine; HTTPS also works, in that it submits the requests, however we keep observing a SocketTimeoutException after some requests have been processed. Thereafter the request processing resumes again. We had tested the application earlier with quite a large concurrent load over HTTPS, so in this case we are not sure why we are getting this error.
Oh boy, this can be due to thousands of different things. I would suggest a layer-by-layer analysis approach, starting with the web server logs: you need to make sure the requests are reaching your web server and see what is happening to the ones that time out. You could be facing anything from network latency to a resource-bound host, contention, or who knows what; it all depends on your application's design.
Start off by checking out the network layer. Maybe if you provide some more information I can help you out.
Also check the HTTP and HTTPS timeout configurations on your web server.
I am getting started with Jersey 2.1, which I want to use as a client to make REST calls to someone else's web service.
I have been working through the tutorials, and I think I understand how to open a connection, and make calls to the web service.
The question I have is, since my service will persist, and have to process events when they happen, how do I manage and maintain session connectivity?
I have been trying to understand if I need to:
Close connections? This does not seem to be discussed. So are they implicitly auto-closed after making a call?
If not auto-closed, can I check the state to see if a Connection is still valid?
The underlying connections are opened for each request and closed after the response is received and the entity is processed (i.e. the entity is read).
final WebTarget target = ... some web target
Response response = target.path("resource").request().get();
System.out.println("Connection is still open.");
System.out.println("string response: " + response.readEntity(String.class));
System.out.println("Now the connection is closed.");
If you don't read the entity, then you need to close the response manually with response.close(). Also, if the entity is read into an input stream (by response.readEntity(InputStream.class)), the connection stays open until you finish reading from the InputStream. In that case, the InputStream or the Response should be closed manually once you are done reading from the InputStream.
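A sketch of both cases, reusing the target from the example above (the resource path is just a placeholder):

// Case 1: the entity is not needed, so close the response explicitly.
Response response = target.path("resource").request().get();
int status = response.getStatus();
response.close(); // releases the underlying connection

// Case 2: the entity is read as a stream; close the stream (and the response)
// only after you have finished reading from it.
Response streamed = target.path("resource").request().get();
try (InputStream in = streamed.readEntity(InputStream.class)) {
    // ... consume the stream here ...
} finally {
    streamed.close();
}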
According to "How To Use Jersey Client Efficiently", closing the response connection is one of the best practices for maintaining the performance of your application from the client-side point of view. On another note, it also helps to close the client connection.
Key points are:
Close the response after collecting/reading all the information needed in your client code.
Close the client connection after completing all the CRUD operations within your client service class. This is very important in situations where you get some type of connection- or service-endpoint-related exception during the service call. A rough shape of both points is sketched below.
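The sketch, with a placeholder base URL:

Client client = ClientBuilder.newClient();
try {
    WebTarget target = client.target("http://example.com/api"); // placeholder base URL
    Response response = target.path("resource").request().get();
    try {
        // read whatever you need from the response here
    } finally {
        response.close(); // point 1: close the response once it has been read
    }
} finally {
    client.close(); // point 2: close the client even if the call above threw
}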
It's also worth mentioning the connection and read timeouts.
Example:
client.setConnectTimeout(10000); // In milliseconds.
client.setReadTimeout(60000); // In milliseconds.
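Note that the setConnectTimeout/setReadTimeout setters above appear to come from the older Jersey 1.x client API; since the question mentions Jersey 2.1, the 2.x client usually takes these as properties instead (a sketch, values in milliseconds):

// ClientProperties is org.glassfish.jersey.client.ClientProperties
Client client = ClientBuilder.newClient();
client.property(ClientProperties.CONNECT_TIMEOUT, 10000); // connection timeout
client.property(ClientProperties.READ_TIMEOUT, 60000);    // read timeout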
Hope this helps.