Close TcpClient and its underlying NetworkStream - dispose

I am developing a server/client application that uses TCP as its communication protocol. I have two questions about TcpClient and its underlying NetworkStream. I was googling but could not find a clear answer.
(1) If I close a TcpClient using the TcpClient.Close() method, will the underlying NetworkStream also be closed automatically? For .NET Framework 4.5, the documentation for GetStream (http://msdn.microsoft.com/en-us/library/system.net.sockets.tcpclient.getstream(v=vs.110).aspx) says: "You must close the NetworkStream when you are through sending and receiving data. Closing TcpClient does not release the NetworkStream." However, also for .NET Framework 4.5, the documentation for Close (http://msdn.microsoft.com/en-us/library/system.net.sockets.tcpclient.close(v=vs.110).aspx) says: "Calling this method will eventually result in the close of the associated Socket and will also close the associated NetworkStream that is used to send and receive data if one was created." I am very confused now.
(2) If I keep my TcpClient open and connected but close/dispose the underlying NetworkStream obtained by calling TcpClient.GetStream, can I get such a stream again by calling GetStream?
Thank you for your inputs!

The general rule is that you should try hard to dispose of any disposable resource, except when it is well known that doing so has no benefit (the best example being Task).
In case you have a significant reason not to dispose of either the TcpClient or the paired NetworkStream: Reflector shows that GetStream transfers ownership of the underlying socket to the NetworkStream, and disposing either of the two will shut down and close the socket.
You can safely dispose of just one of the two because the socket is the only unmanaged resource held.
This means that your first quote is a documentation bug.
What I just said is undocumented knowledge derived from decompiling the source code. I'd feel pretty safe relying on it because the behavior has been like this for 10 years and can hardly be changed for compatibility reasons. Microsoft tries very hard not to break user code.
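If you want to see this for yourself, a small loopback check (assuming only a local TcpListener to obtain a connected socket; the class and variable names are illustrative) shows that closing the TcpClient disposes the stream it handed out:

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class CloseDemo
{
    public static void Main()
    {
        // A throwaway listener just so Connect succeeds on loopback.
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        var client = new TcpClient();
        client.Connect(IPAddress.Loopback, port);
        NetworkStream stream = client.GetStream();

        // Closing the TcpClient also closes the stream and the socket.
        client.Close();

        try
        {
            stream.WriteByte(0); // the stream was closed together with the client
            Console.WriteLine("unexpected: write succeeded");
        }
        catch (ObjectDisposedException)
        {
            Console.WriteLine("stream disposed along with TcpClient");
        }
        finally
        {
            listener.Stop();
        }
    }
}
```

This also bears on question (2): since disposing either object closes the socket, a later GetStream call cannot hand back a usable stream for the same connection.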

Related

How to correctly dispose a client stream if the connection has disconnected?

I'm using Microsoft.AspNetCore.SignalR (ASP.NET Core 2.1, v1.0.4) and have a ChannelReader stream being consumed by a TypeScript client, also on v1.0.4.
The channel surfaces event data specific to a single entity, so it's expected the client would subscribe to a channel when the user of the client navigates to a page rendering data for that single entity. If the user navigates to the same page but for a different entity then the client would make another subscribe call.
Now my questions are about how best to unsubscribe from the stream; what, in general, the lifetime of the stream is on the client under hub connection stop/start scenarios; and what happens if the server explicitly aborts a connection (due to access_token timeouts, so as to trigger the client to refresh its connection)?
There doesn't appear to be any connection state surfaced from the API, so I currently use an RxJS Subject to surface some connection state to my UI components/services: when the hub connection's start call succeeds I surface "true", and when the onclose callback fires I surface "false". This allows me to call dispose on a previously subscribed stream to clean things up during a disconnect/stop, and then, if necessary, subscribe to the stream again after a successful start call.
I have tried calling dispose on a stream, which is fine if the hub is connected, but it errors if the connection is in a disconnected state. I'm wondering whether this is a bug. Should I be able to dispose a stream even when the hub is disconnected?
Is it okay to just delete the stream subscription and then recreate it as required, or will this leak in any way?
what the lifetime of the stream is to the client under hub connection stop/start scenarios, and if the server explicitly aborts a connection (due to access_token timeouts and so to trigger the client to refresh their connection).
When the connection is terminated (either because of stop being called on the client or the server aborting the connection) the error method of your subscriber will be called with an error indicating the stream has been terminated because the connection was terminated. In general, you should handle the error method and consider it a terminal event (i.e. the stream will never yield additional objects). On the server, the Context.ConnectionAborted token will be triggered if the connection is terminated (by either side) and you can stop writing to your stream.
If you're already using RxJS, I'd highly recommend building a small wrapper to convert the object you get back from SignalR into a proper RxJS Observable. The object we return is not actually an Observable, but it has all the same basic methods (a subscribe method that takes an object with complete, next and error methods), so it should be trivial to wrap it.
I have tried calling dispose on a stream which is fine if the hub is connected, but it errors if the connection is in a disconnected state. I'm wondering if this is a bug or not.
Yeah, that's probably a bug. We shouldn't throw if you dispose after a hub is disconnected. Can you file that on https://github.com/aspnet/SignalR ? To work around it you can fairly safely just try...catch the error and suppress it (or maybe log it if you're paranoid).
Is it okay to just delete the stream subscription and then recreate it as required, or will this leak in any way?
You should always dispose the subscription. If you just delete it, then we have no way to know that you're done with it and we never tell the server to stop. If you call dispose (and are connected) we send a message to the server "cancelling" the stream. In ASP.NET Core 2.1 we don't expose this cancellation to you, but we do stop reading from the ChannelReader. In ASP.NET Core 2.2 we allow you to accept a CancellationToken in your Hub method and the dispose method on the client will trigger this token in the Hub method. I'd highly recommend you try the latest preview of ASP.NET Core 2.2 and use a CancellationToken in your Hub method to stop the stream:
public ChannelReader<object> MyStreamingMethod(..., CancellationToken cancellationToken) {
// pass 'cancellationToken' over to whatever process is writing to the channel
// and stop writing when the token is triggered
}
Note: If you do this, you don't need to monitor Context.ConnectionAborted, the token passed in to your Hub method will cover all cancellation cases.
On a related note, you should always use Channel.CreateBounded<T>(size) to create your channel. If you use an unbounded channel it's much easier to leak memory since the writer can just keep writing indefinitely. If you use a bounded channel, the writer will be stopped (WriteAsync and WaitToWriteAsync will "block") if there are size un-read items in the channel (because, for example, the client has disconnected and we've stopped reading).
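The two recommendations above (a bounded channel plus a CancellationToken that stops the writer) can be sketched in a self-contained form. The Counter method and its parameters are illustrative, shaped like a 2.2-style hub streaming method rather than taken from the question:

```csharp
using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

class BoundedStreamDemo
{
    // Shaped like a Hub streaming method: returns a reader, honors the token,
    // and uses a bounded channel so a slow/disconnected reader back-pressures the writer.
    public static ChannelReader<int> Counter(int count, TimeSpan delay, CancellationToken cancellationToken)
    {
        var channel = Channel.CreateBounded<int>(capacity: 10);

        _ = Task.Run(async () =>
        {
            try
            {
                for (int i = 0; i < count; i++)
                {
                    // WriteAsync "blocks" once 10 un-read items are queued.
                    await channel.Writer.WriteAsync(i, cancellationToken);
                    await Task.Delay(delay, cancellationToken);
                }
                channel.Writer.Complete();
            }
            catch (OperationCanceledException)
            {
                // Client disposed the stream (2.2 triggers the token): stop writing.
                channel.Writer.Complete();
            }
        });

        return channel.Reader;
    }

    static async Task Main()
    {
        var cts = new CancellationTokenSource();
        ChannelReader<int> reader = Counter(5, TimeSpan.FromMilliseconds(10), cts.Token);
        while (await reader.WaitToReadAsync())
            while (reader.TryRead(out int item))
                Console.WriteLine(item); // prints 0 through 4
    }
}
```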

.Net 4.5 TCP Server scale to thousands of connected clients

I need to build a TCP server using C# .NET 4.5+. It must be capable of comfortably handling at least 3,000 connected clients that will send messages every 10 seconds, with a message size from 250 to 500 bytes.
The data will be offloaded to another process or queue for batch processing and logging.
I also need to be able to select an existing client and send and receive messages (greater than 500 bytes) within a Windows Forms application.
I have not built an application like this before so my knowledge is based on the various questions, examples and documentation that I have found online.
My conclusions are:
non-blocking async is the way to go. Stay away from creating multiple threads and blocking IO.
SocketAsyncEventArgs - Is complex and really only needed for very large systems, BTW what constitutes a very large system? :-)
BeginXXX methods will suffice (EAP).
Using TAP I can simplify 3. by using Task.Factory.FromAsync, but it only produces the same outcome.
Use a global collection to keep track of the connected TCP clients
What I am unsure about:
Should I use a ManualResetEvent when interacting with the TCP client collection? I presume the async events will need to lock access to this collection.
What is the best way to detect a disconnected client after I have called BeginReceive? I've found the call gets stuck waiting for a response, so this needs to be cleaned up.
Sending messages to a specific TCP client. I'm thinking of a function in a custom TCP session class to send a message. Again, in an async model, would I need to create a timer-based process that inspects a message queue, or would I create an event on a TCP session class that has access to the TcpClient and associated stream? Really interested in opinions here.
I'd like to use a single thread for the entire service and use non-blocking principles within it. Is there anything I should be mindful of, especially in the context of 1. (ManualResetEvent etc.)?
Thank you for reading. I am keen to hear constructive thoughts and/or links to best practices/examples. It's been a while since I've coded in C#, so apologies if some of my questions are obvious. Tasks and async/await are new to me! :-)
I need to build a TCP server using C# .NET 4.5+
Well, the first thing to determine is whether it has to be bare-bones TCP/IP. If you possibly can, write one that uses a higher-level abstraction, like SignalR or WebAPI. If you can write one using WebSockets (SignalR), then do that and never look back.
Your conclusions sound pretty good. Just a few notes:
SocketAsyncEventArgs - Is complex and really only needed for very large systems, BTW what constitutes a very large system? :-)
It's not so much a "large" system in the terms of number of connections. It's more a question of how much traffic is in the system - the number of reads/writes per second.
The only thing that SocketAsyncEventArgs does is make your I/O structures reusable. The Begin*/End* (APM) APIs will create a new IAsyncResult for each I/O operation, and this can cause pressure on the garbage collector. SocketAsyncEventArgs is essentially the same as IAsyncResult, only it's reusable. Note that there are some examples on the 'net that use the SocketAsyncEventArgs APIs without reusing the SocketAsyncEventArgs structures, which is completely ridiculous.
And there are no hard guidelines here: heavier hardware will be able to use the APM APIs for much more traffic. As a general rule, build a bare-bones APM server and load-test it first, and only move to SAEA if it doesn't hold up on your target server's hardware.
On to the questions:
Should I use a ManualResetEvent when interacting with the TCP client collection? I presume the async events will need to lock access to this collection.
If you're using TAP-based wrappers, then await will resume on a captured context by default. I explain this in my blog post on async/await.
There are a couple of approaches you can take here. I have successfully written a reliable and performant single-threaded TCP/IP server; the equivalent for modern code would be to use something like my AsyncContextThread class. It provides a context that will cause await to resume on that same thread by default.
The nice thing about single-threaded servers is that there's only one thread, so no synchronization or coordination is necessary. However, I'm not sure how well a single-threaded server would scale. You may want to give that a try and see how much load it can take.
If you do find you need multiple threads, then you can just use async methods on the thread pool; await will not have a captured context and so will resume on a thread pool thread. In this case, yes, you'd need to coordinate access to any shared data structures including your TCP client collection.
Note that SignalR will handle all of this for you. :)
Best way to detect a disconnected client after I have called BeginReceive. I've found the call is stuck waiting for a response so this needs to be cleaned up.
This is the half-open problem, which I discuss in detail on my blog. The best way (IMO) to solve this is to periodically send a "noop" keepalive message to each client.
If modifying the protocol isn't possible, then the next-best solution is to just close the connection after a no-communication timeout. This is how HTTP "persistent"/"keep-alive" connections decide to close. There's another possible solution (changing the keepalive packet settings on the socket), but it's not as easy (it requires P/Invoke) and has other problems (not always respected by routers, not supported by all OS TCP/IP stacks, etc.).
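The noop-keepalive idea can be sketched as follows. The 0x00 no-op frame and all names here are assumptions, not from the question; the point is that a failed write in this loop surfaces a half-open connection promptly, instead of a pending receive hanging forever:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;
using System.Threading.Tasks;

class KeepAliveDemo
{
    // Periodically writes a 1-byte no-op frame on the connection.
    // Assumes the application protocol reserves 0x00 as a no-op message.
    public static async Task KeepAliveLoopAsync(NetworkStream stream, TimeSpan interval, CancellationToken token)
    {
        byte[] noop = { 0x00 };
        while (!token.IsCancellationRequested)
        {
            await Task.Delay(interval, token);
            // On a dead/half-open connection this write eventually fails,
            // which is the detection signal.
            await stream.WriteAsync(noop, 0, noop.Length, token);
        }
    }

    public static async Task Main()
    {
        // Loopback connection pair, just to have a live stream to write to.
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        var client = new TcpClient();
        await client.ConnectAsync(IPAddress.Loopback, port);
        TcpClient serverSide = await listener.AcceptTcpClientAsync();

        var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(120));
        try
        {
            await KeepAliveLoopAsync(client.GetStream(), TimeSpan.FromMilliseconds(25), cts.Token);
        }
        catch (OperationCanceledException) { }

        Console.WriteLine("keepalive loop stopped after cancellation");
        client.Close();
        serverSide.Close();
        listener.Stop();
    }
}
```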
Oh, and SignalR will handle this for you. :)
Sending messages to a specific TCP Client. I'm thinking function in custom TCP session class to send a message. Again in an async model, would I need to create a timer based process that inspects a message queue or would I create an event on a TCP Session class that has access to the TcpClient and associated stream? Really interested in opinions here.
If your server can send messages to any client (i.e., it's not just a request/response protocol; any part of the server can send messages to any client without the client requesting an update), then yes, you'll need a proper queue of outgoing requests because you can't (reliably) issue multiple concurrent writes on a socket. I wouldn't have the consumer be timer-based, though; there are async-compatible producer/consumer queues available (like BufferBlock<T> from TPL Dataflow, and it's not that hard to write one if you have async-compatible locks and condition variables).
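A minimal sketch of such an outgoing queue built on BufferBlock&lt;T&gt; (the OutgoingQueue class and its members are illustrative, not from any library): a single consumer loop drains the queue, so no two writes ever overlap on one socket:

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

// One outgoing queue per client: a single consumer loop serializes writes,
// since multiple concurrent writes on one socket are not reliable.
class OutgoingQueue
{
    private readonly BufferBlock<byte[]> _queue = new BufferBlock<byte[]>();

    public void Enqueue(byte[] message) => _queue.Post(message);
    public void Complete() => _queue.Complete();

    // 'write' stands in for stream.WriteAsync on the client's NetworkStream.
    public async Task RunAsync(Func<byte[], Task> write)
    {
        while (await _queue.OutputAvailableAsync())
            await write(await _queue.ReceiveAsync());
    }
}

class Demo
{
    static async Task Main()
    {
        var queue = new OutgoingQueue();
        var sent = new List<string>();

        // Stand-in writer; in the real server this would write to the socket.
        Task consumer = queue.RunAsync(msg =>
        {
            sent.Add(Encoding.ASCII.GetString(msg));
            return Task.CompletedTask;
        });

        queue.Enqueue(Encoding.ASCII.GetBytes("hello"));
        queue.Enqueue(Encoding.ASCII.GetBytes("world"));
        queue.Complete();
        await consumer;

        Console.WriteLine(string.Join(",", sent)); // hello,world
    }
}
```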
Oh, and SignalR will handle this for you. :)
I'd like to use a thread for the entire service and use non-blocking principals within, are there anythings I should be mindful of espcially in context of 1. ManualResetEvent etc..
If your entire service is single-threaded, then you shouldn't need any coordination primitives at all. However, if you do use the thread pool instead of syncing back to the main thread (for scalability reasons), then you will need to coordinate. I have a coordination primitives library that you may find useful because its types have both synchronous and asynchronous APIs. This allows, e.g., one method to block on a lock while another method wants to asynchronously block on a lock.
You may have noticed a recurring theme around SignalR. Use it if you possibly can! If you have to write a bare-bones TCP/IP server and can't use SignalR, then take your initial time estimate and triple it. Seriously. Then you can get started down the path of painful TCP with my TCP/IP FAQ blog series.

Netty send event to sockets

I am building a socket web server with Netty 5.0. I came across the WebSocketServer example (https://github.com/netty/netty/tree/master/example/src/main/java/io/netty/example/http/websocketx/server).
But I can't understand how to send events to sockets from a separate thread. I have a thread which, each second, loads some data from an external resource: a StockThread which receives stock data. After receiving the data, the thread should send events to the sockets. What is the best practice for doing this?
I am using the following approach: inside StockThread I store a list of ChannelHandlerContext. After receiving data I just call the write() method of each ChannelHandlerContext, so write() is called from StockThread. Is this okay, or is there a more appropriate way to do this?
Yes, ChannelHandlerContext is thread-safe and can be cached, so this usage is completely okay.
See this note from the "Netty in Action" book, which confirms it:
You can keep the ChannelHandlerContext for later use,
such as triggering an event outside the handler methods,
even from a different Thread.

Is it possible to pass one extra bit of information from Socket.Connect to Socket.Accept?

In a client-server type system, it would simplify my server code somewhat if the client could indicate whether it was trying to make a new connection or was attempting to reconnect after a connection failure.
I realize that in reality a new connection is a new connection, period. But by passing this one extra bit of information, it would simplify my server's handling of the situation: which threads and data areas can be reused, which threads should be killed, etc. Without this one extra bit, the server is forced to assume reconnection when possible and then reassess that assumption when the first message arrives, in which the client indicates whether it is attempting to revive the previous conversation or wishes to start a completely new relationship.
I'm guessing the answer is no, but any suggestions are welcome.
By the way, the client is an Android program and the server is .NET on Windows.
I'm guessing the answer is no
The answer is no.
but any suggestions are welcome.
Either (a) it should be obvious from your application protocol whether the client is connecting or reconnecting, or (b) it shouldn't make any difference which it is. Much more usually, (b).
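As an illustration of option (a), the very first message of the conversation can carry that bit itself. The layout below (a kind byte plus an optional session id) is purely hypothetical, not from the question:

```csharp
using System;
using System.IO;

class HelloFrame
{
    // Hypothetical first-message layout:
    // byte 0: 0 = new session, 1 = resume; bytes 1..4: session id when resuming.
    public static string Describe(byte[] firstMessage)
    {
        using (var reader = new BinaryReader(new MemoryStream(firstMessage)))
        {
            byte kind = reader.ReadByte();
            if (kind == 0)
                return "new session";
            int sessionId = reader.ReadInt32(); // little-endian, like BitConverter below
            return "resume session " + sessionId;
        }
    }

    static void Main()
    {
        Console.WriteLine(Describe(new byte[] { 0 })); // new session

        byte[] resume = new byte[5];
        resume[0] = 1;
        BitConverter.GetBytes(42).CopyTo(resume, 1);
        Console.WriteLine(Describe(resume)); // resume session 42
    }
}
```

The server reads this frame right after Accept, which keeps the "new vs. reconnect" decision in the application protocol where the answer says it belongs.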

ConnectEx with IOCP problem

I've made a simple dummy server/dummy client program using IOCP for testing/profiling purposes. (I should also note that I'm new to asynchronous network programming.)
The server works well with the original client, but when the dummy client tries to connect to the server with the ConnectEx function, the IOCP worker thread stays blocked in GetQueuedCompletionStatus and never returns a result, even though the server succeeds in accepting the connection.
What is the cause of this problem, and how should I solve it?
I think you answer your own question with your comment.
Your sequence of events is incorrect: you say that you Bind, call ConnectEx, and then associate with the IOCP.
You should Bind, associate the socket with the IOCP and THEN call ConnectEx.
Even after you associate your accepted socket with the IOCP, your worker thread will remain blocked on GetQueuedCompletionStatus until you post an "unlocking" completion event.
Completion events for receive/write operations won't be sent by the system unless you "unlock" your new socket.
For details, check the source code of Push Framework (http://www.pushframework.com), a C++ network application framework using IOCP.
The "unlocking" trick exists in the "IOCPQueue" class.