In this case, I didn't close the S3 client.
Is that a problem, as in the code below?
For example, does the client object remain in memory for every single request if I don't close it?
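The code in question isn't shown, so this is only a sketch: assuming the AWS SDK for Java v2, S3Client implements AutoCloseable, so try-with-resources closes it deterministically (the region and the call below are illustrative).

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.ListBucketsResponse;

public class S3Example {
    public static void main(String[] args) {
        // try-with-resources closes the client (and its connection pool) automatically
        try (S3Client s3 = S3Client.builder().region(Region.US_EAST_1).build()) {
            ListBucketsResponse buckets = s3.listBuckets();
            buckets.buckets().forEach(b -> System.out.println(b.name()));
        }
    }
}
```

If the client is never closed, its HTTP connection pool and worker threads stay alive until the client is garbage collected or the process exits, so building a new client per request leaks resources. The usual pattern is one long-lived client shared across requests, closed once at shutdown.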
For some client-side procedures, I implemented remote logging to log each call of the procedure. The log is printed several times with different thread IDs, even though the procedure is only called once. Some RPC requests are sent to the server a few times, which causes some database session problems. Is this normal? Is there any way to avoid it?
Thanks
This is not normal, and it suggests there is a bug in your client causing it to send the same call more than once. Try adding logging on the client where you invoke the RPC call, and possibly add breakpoints, to confirm why it is being called twice.
My best guess with no other information would be that you have more than one event handler wired up to the same button, or something like that.
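A minimal, purely illustrative Java sketch of that guess (the button and the RPC method here are hypothetical, not the asker's code): if the same listener is wired up twice, a single click invokes the RPC twice, and logging at the call site makes the duplication visible immediately.

```java
import javax.swing.JButton;

public class DoubleHandlerDemo {
    private static int calls = 0;

    public static void main(String[] args) {
        JButton send = new JButton("Send");

        // Wired up once here...
        send.addActionListener(e -> invokeRpc());
        // ...and accidentally a second time elsewhere (e.g. in another init method).
        send.addActionListener(e -> invokeRpc());

        // One simulated click now triggers TWO RPC invocations.
        send.doClick();
    }

    private static void invokeRpc() {
        // Log at the call site so duplicate invocations show up immediately.
        System.out.println("RPC call #" + (++calls) + " on thread "
                + Thread.currentThread().getName());
    }
}
```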
--
More specifically, your servlet container starts multiple threads to handle incoming requests - if two requests come in close succession, they might be handled by different threads.
As you noted, this can cause problems with a database, where two simultaneous calls could be made to change the same data, especially if you have some checks to ensure that a servlet call cannot accidentally overwrite some newer data. This is almost certainly a bug in your client code, and debugging it should start there.
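If the duplicate requests can't be eliminated right away, the kind of check mentioned above (ensuring a servlet call cannot overwrite newer data) is commonly implemented as optimistic locking with a version column. A hedged JDBC sketch; the table and column names are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticUpdate {
    /**
     * Updates a row only if its version is unchanged since we read it.
     * Returns false when another request (e.g. a duplicate RPC handled by
     * a different servlet thread) has already modified the row.
     */
    static boolean updateName(Connection conn, long id, String name, int expectedVersion)
            throws SQLException {
        String sql = "UPDATE record SET name = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, name);
            ps.setLong(2, id);
            ps.setInt(3, expectedVersion);
            return ps.executeUpdate() == 1; // 0 rows => stale version, reject
        }
    }
}
```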
I'm making a simple MMORPG server with IOCP.
I implemented a simple movement function, so I tested it with dummy clients (also IOCP).
Everything works fine only when a few clients are connected. After around 500~1000 clients are connected, some dummy clients occasionally read weird data. I checked that the server sends the data I expect, but when the dummy clients read it, they read random data.
My guess is that it could be related to the operating system's recv buffer overflowing, but I'm only guessing right now... I have no idea how to check this.
Any suggestions would be appreciated!
The problem with too many WSASends doesn't usually manifest as corrupted data; that's more likely to be a bug in your code. Perhaps your problem is caused by failing to correctly manage the lifetime of the buffer that is being used to send data? It needs to stay stable until you get the completion for the WSASend call. If you reuse it sooner than that, you will corrupt the data being sent.
The reason this may show up when you have lots of WSASends outstanding to lots of clients is that the send operations may take longer to complete, which makes it more likely that your bug will be hit...
It doesn't matter how many WSASends you issue as long as your clients are able to receive the data as fast as you can send it. As soon as you are sending faster than they can receive, there will be problems. I address these problems in this answer.
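The same lifetime rule applies to any asynchronous send API, not just WSASend. As an illustration only, here is the pattern in Java's AsynchronousSocketChannel (NIO.2, which is itself built on IOCP on Windows): the buffer handed to write() must not be touched until the completion handler runs.

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;

public class AsyncSendExample {
    // WRONG: reusing the buffer before the completion fires corrupts the stream.
    // RIGHT: give each write its own buffer and only recycle it in completed().
    static void send(AsynchronousSocketChannel channel, byte[] message) {
        ByteBuffer buf = ByteBuffer.wrap(message); // owned by this write until completion
        channel.write(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
            @Override
            public void completed(Integer bytesWritten, ByteBuffer b) {
                if (b.hasRemaining()) {
                    channel.write(b, b, this); // short write: keep sending the same buffer
                } else {
                    // Only now is it safe to reuse or pool the buffer.
                }
            }

            @Override
            public void failed(Throwable exc, ByteBuffer b) {
                exc.printStackTrace(); // the buffer is released on failure, too
            }
        });
    }
}
```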
We use Retrofit/OkHttp3 for all network traffic from our Android application. So far everything seems to run quite smoothly.
However, we have now occasionally had our app/process run out of file handles.
Android allows for a max of 1024 file handles per process
OkHttp will create a new thread for each async call
Each thread created this way will (from our observation) be responsible for 3 new file handles (2 pipes and one socket).
We were able to confirm this in the debugger: each async call dispatched using .enqueue() leads to an increase of 3 open file handles.
The problem is that the ConnectionPool in OkHttp seems to keep the connection threads around for much longer than they are actually needed. (This post talks of five minutes, though I haven't seen this specified anywhere.)
That means if you are dispatching requests quickly, the connection pool will grow in size, and so will the number of file handles - until you reach 1024, where the app crashes.
I've seen that it is possible to limit the number of parallel calls with Dispatcher.setMaxRequests() (although it seems unclear whether this actually works, see here) - but that still doesn't quite address the issue of the open threads and file handles piling up.
How could we prevent OkHttp from creating too many file handles?
I am answering my own question here to document this issue we had. It took us a while to figure this out and I think others might encounter this too and might be glad for this answer.
Our problem was that we created one OkHttpClient per request, as we used its builder/interceptor API to configure per-request parameters like HTTP headers or timeouts.
By default each OkHttpClient comes with its own connection pool, which of course blows up the number of connections/threads/file handles and prevents proper reuse in the pool.
Our solution
We solved the problem by manually creating a global ConnectionPool in a singleton, and then passing that to the OkHttpClient.Builder object which builds the actual OkHttpClient.
This still allows per-request configuration using the OkHttpClient.Builder.
It also makes sure all OkHttpClient instances still share a common connection pool.
We were then able to properly size the global connection pool.
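A sketch of that setup, assuming OkHttp3 (the holder class and the timeout parameter here are illustrative, not our exact code):

```java
import java.util.concurrent.TimeUnit;
import okhttp3.ConnectionPool;
import okhttp3.Dispatcher;
import okhttp3.OkHttpClient;

public final class HttpClients {
    // One pool (and one dispatcher) shared by every client we build.
    private static final ConnectionPool SHARED_POOL =
            new ConnectionPool(5, 5, TimeUnit.MINUTES);
    private static final Dispatcher SHARED_DISPATCHER = new Dispatcher();

    private HttpClients() {}

    /** Builds a per-request-configured client that still shares the global pool. */
    public static OkHttpClient newClient(long readTimeoutSeconds) {
        return new OkHttpClient.Builder()
                .connectionPool(SHARED_POOL)
                .dispatcher(SHARED_DISPATCHER)
                .readTimeout(readTimeoutSeconds, TimeUnit.SECONDS)
                .build();
    }
}
```

Depending on your OkHttp version, client.newBuilder() achieves much the same thing: it derives a new client that automatically shares the original client's connection pool and dispatcher while still allowing per-request tweaks.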
I wrote a multithreaded socket server application which accepts over 1,000 concurrent connections. Recently the application crashed; after analyzing the dump files, we found that it crashed due to heap corruption. I found the same issue discussed in the following links.
.NET Does NOT Have Reliable Asynchronouos Socket Communication?
http://support.microsoft.com/kb/947862
The discussion also suggests three solutions:
The network application should have an upper bound on the number of outstanding asynchronous IO that it posts.
Use Microsoft CCR
Use TPL
Due to time constraints, I thought I would stick with #1, but I don't have a clear picture of how to implement it. Can someone give me a good starting point, please?
Also, has anyone used async with TPL to solve this issue?
You mean a better starting point than the blog posting that I linked to in the answer that you refer to?
The issue is this:
Memory and other per-operation resources that are used during an async write are often "in use" until the remote peer's TCP stack acks the data and the local stack can complete your async write operation to tell you that you can reuse your buffer.
The local peer has no control over this as it's all governed by the speed at which the remote peer reads data from its socket and the congestion on the link between the two peers.
Because of the above, you need to have a hard limit on the number of async writes that you have outstanding at any one time. You can track this by incrementing a counter just before you issue an async write and decrementing it in the completion handler.
What you do once you hit that limit is up to you. In the original article I favour a queue into which data to be written is placed; this queue can then be used as a source of data as write completions occur. Once the queue is empty you can send normally again. Of course this simply moves the problem: you still have a memory resource that's controlled by the remote peer (the queued data), but you don't also consume other OS resources (non-paged pool, the I/O page-lock limit, etc).
Alternatively, you could simply stop sending when you reach your limit - and then the API that you build over the async API needs to have a 'can't send at the moment, try again later' return from a send which previously always used to "work".
If you're doing this I would also seriously look at avoiding the pinned-memory issue by allocating a large contiguous block of buffers and using them from a pool.
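The question is about .NET, but the counter-plus-queue idea is language-neutral. A minimal sketch of it (written in Java since the original C# code isn't shown; all names are illustrative):

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Queue;

public class BoundedSender {
    private final int maxOutstanding;
    private final Queue<ByteBuffer> pending = new ArrayDeque<>();
    private int outstanding = 0; // async writes issued but not yet completed

    public BoundedSender(int maxOutstanding) {
        this.maxOutstanding = maxOutstanding;
    }

    /** Called by application code that wants to send. */
    public synchronized void send(ByteBuffer data) {
        if (outstanding < maxOutstanding) {
            outstanding++;             // count before posting the IO
            issueAsyncWrite(data);
        } else {
            pending.add(data);         // over the limit: queue instead of posting more IO
        }
    }

    /** Called from the async write completion callback. */
    public synchronized void onWriteCompleted() {
        ByteBuffer next = pending.poll();
        if (next != null) {
            issueAsyncWrite(next);     // drain the queue as completions occur
        } else {
            outstanding--;             // nothing queued: one less write in flight
        }
    }

    private void issueAsyncWrite(ByteBuffer data) {
        // Placeholder: hand the buffer to your real async socket API here.
        // The buffer must stay untouched until its completion fires.
    }
}
```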
First, that's a very old KB article. How are you sure you have that particular problem?
Then, as Hans Passant answers in the SO question, if you write bad async code, it will bite you. If you don't take care of your resources (and memory buffers are resources), a concurrent program will run into memory errors.
It's very hard to write good concurrent code using raw threads, and TPL does make it easier, but it won't fix the bugs you already have. In fact, unless you identify your current problems, you are likely to carry them over to the version that uses TPL.
Without knowing the specific problem that caused your application to crash, I can only make some suggestions:
Use BufferManager to reuse memory buffers instead of allocating new ones.
Use a queue to store requests and process them asynchronously instead of starting a new thread for each request.
There are other techniques you can use as well, depending on the type of application you are building. E.g., you could use TPL Dataflow to break processing into independent steps.
As for CCR, there is not much point in using it outside Robotics Studio. TPL contains most of the relevant functionality you need to write concurrent apps.
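For reference, BufferManager here is the WCF class System.ServiceModel.Channels.BufferManager, but the underlying idea is just a pool of reusable buffers. A minimal sketch of that idea, written in Java for illustration (direct buffers also live outside the garbage-collected heap, which is loosely analogous to avoiding pinned managed memory):

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

public class BufferPool {
    private final int bufferSize;
    private final ConcurrentLinkedQueue<ByteBuffer> free = new ConcurrentLinkedQueue<>();

    public BufferPool(int bufferCount, int bufferSize) {
        this.bufferSize = bufferSize;
        // Pre-allocate once so steady-state traffic allocates nothing new.
        for (int i = 0; i < bufferCount; i++) {
            free.add(ByteBuffer.allocateDirect(bufferSize));
        }
    }

    public ByteBuffer take() {
        ByteBuffer buf = free.poll();
        // Fall back to a fresh buffer if the pool is exhausted.
        return buf != null ? buf : ByteBuffer.allocateDirect(bufferSize);
    }

    public void give(ByteBuffer buf) {
        buf.clear();   // reset position/limit before the next user
        free.add(buf);
    }
}
```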
I am currently programming an iPhone app for my maturity research. But there is a behavior I don't understand: sometimes when I run my project, I get:
Thread 1: Program received signal: "EXC_BAD_ACCESS".
But when I run the same code a second or a third time, it just runs fine, and I can't see why. I use a MonteCarloSimulation, and when it fails, it fails while executing one of the first 100 simulations. But when everything runs fine, it executes 1,000,000 simulations without an error. Really strange, isn't it?
Do you have any idea? Could this be an issue with Xcode or ARC?
All other things just work perfectly.
Do you need any further information? I can also send you my code by email.
This usually means you're trying to access an object that has already been deallocated.
In order to debug these things, Objective-C has a facility called "NSZombie" that will keep those objects around so you can at least see what it is that's trying to be called. See this question for some details on how to use it.
This is typically caused by accessing memory that's been corrupted; chances are you have a reference to an object which has been deleted. A lot of the time the memory where the object was located has not yet been overwritten, so when you attempt to access that memory your data is still intact and there is no problem - which is why it works some of the time.
Another scenario would be that you've got some code writing into memory using a bad reference, so you're writing into an area you shouldn't be. Depending on the memory layout when the program starts, this could have no effect some of the time but cause something catastrophic at other times.