The first example in the tokio-postgres documentation shows that you should run the database connection on a separate task:
// The connection object performs the actual communication with the database,
// so spawn it off to run on its own.
tokio::spawn(async move {
    if let Err(e) = connection.await {
        eprintln!("connection error: {}", e);
    }
});
If you do so, how can you kill that connection afterwards?
If you're on tokio 1, tokio::task::JoinHandle has an abort() function that cancels the task, thus dropping the connection.
let handle = task::spawn(async move {
    if let Err(e) = connection.await {
        eprintln!("connection error: {}", e);
    }
});

handle.abort(); // this kills the task and drops the connection
Using my snippet as-is will kill the task immediately, so it is probably not what you want in the end. But if you keep the handle around and use it, e.g. in combination with some kind of shutdown listener, you can control the connection's lifetime as needed.
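For example, here is a minimal sketch of that idea, assuming tokio 1 with the signal feature enabled and a connection obtained from tokio_postgres::connect; the Ctrl-C trigger is just a stand-in for whatever shutdown signal your application actually uses:
let handle = tokio::spawn(async move {
    if let Err(e) = connection.await {
        eprintln!("connection error: {}", e);
    }
});

// Wait for some shutdown trigger; Ctrl-C is used here as a placeholder.
tokio::signal::ctrl_c().await.expect("failed to listen for ctrl-c");

// Cancel the task; dropping the connection closes it.
handle.abort();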
I am opening a socket in JMeter (using Groovy in a JSR223 Sampler) and storing the message in a JMeter variable. Below is the code:
SocketAddress inetSocketAddress = new InetSocketAddress(InetAddress.getByName("localhost"), 4801);
def server = new ServerSocket()
server.bind(inetSocketAddress)

while (!vars.get("caseId")) {
    server.accept { socket ->
        log.info('Someone is connected')
        socket.withStreams { input, output ->
            InputStreamReader isReader = new InputStreamReader(input);
            BufferedReader reader = new BufferedReader(isReader);
            StringBuffer sb = new StringBuffer();
            String str;
            while ((str = reader.readLine()) != null) {
                sb.append(str);
            }
            String finalStr = sb.toString()
            String caseId = finalStr.split("<caseId>")[1].split("</caseId>")[0]
            vars.put("caseId", caseId)
        }
        log.info("Connection processed")
    }
}

if (vars.get("caseId")) {
    try {
        server.close();
        vars.put("socketClose",true);
    } catch (Exception e) {
        log.info("Error in closing the socket: " + e.getMessage());
    }
}
Now, there is some delay between the first loop iteration and the message being received from the port. The message is not received immediately, so the while loop executes again. Then the message is received and it sets caseId. The code goes on to close the socket, because caseId is set. And that throws the error, because the socket is still waiting for the message. So is there a way to wait until the socket has received all the messages, so I can close it properly?
Or can I just force-close the socket so that JMeter won't throw any exception?
Or, when I execute the next component, say an If Controller in JMeter, can it wait until the variable socketClose is set to true? That way, instead of while loops inside the JSR223 Sampler, I could use multiple If Controllers in the JMeter thread.
This is how the ServerSocket.close() function works:
public void close() throws IOException
Closes this socket. Any thread currently blocked in accept() will throw a SocketException.
I don't think there is a way "to wait until the socket has received all the messages" because a ServerSocket is dumb as a rock: it can either listen for connections or shut down.
Maybe you would be interested in the setSoTimeout() function?
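For example, a minimal sketch using the same server variable as above (the 5-second timeout is an arbitrary choice):
// Make accept() give up after 5 seconds instead of blocking forever,
// so the loop can re-check its exit condition between waits.
server.setSoTimeout(5000)
try {
    server.accept { socket ->
        // handle the connection as before
    }
} catch (java.net.SocketTimeoutException e) {
    log.info('No connection within 5 seconds, checking the exit condition again')
}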
Also this line:
vars.put("socketClose",true)
is very suspicious; I think you need to change it either to:
vars.put("socketClose", "true")
or to
vars.putObject("socketClose",true)
as the JMeterVariables.put() function accepts only a String; see the Top 8 JMeter Java Classes You Should Be Using with Groovy article for more details.
When cancelling the IMailFolder.Fetch method with the cancellationToken, I get an exception saying that the client is disconnected.
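For context, the call being cancelled looks roughly like this (folder, uids, and cts are placeholders from my code):
var cts = new CancellationTokenSource();

// Another operation needs priority, so the token gets cancelled while
// the fetch is still in progress.
cts.CancelAfter(TimeSpan.FromSeconds(1));

// Throws OperationCanceledException, after which the client is disconnected.
var summaries = folder.Fetch(uids, MessageSummaryItems.Envelope, cts.Token);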
I debugged MailKit and traced the issue to the ImapEngine.Iterate() method, where there is the following:
try {
    while (current.Step ()) {
        // more literal data to send...
    }

    if (current.Bye)
        Disconnect ();
} catch {
    Disconnect ();
    throw;
} finally {
    current = null;
}
Is it the right approach to disconnect the client on every exception type being caught?
Should this also apply to the case where we are cancelling the operation (say, to prioritize another one) and do not want to disconnect?
How else would you cancel a command that is in progress if not disconnecting the socket?
I seem to be struggling with std::io::TcpStream. I'm actually trying to open a TCP connection to another system, but the code below emulates the problem exactly.
I have a TCP server that simply writes "Hello Connection" to the TcpStream upon accepting a connection and then loops to keep the connection open.
fn main() {
    let listener = io::TcpListener::bind("127.0.0.1", 8080);
    let mut acceptor = listener.listen();

    for stream in acceptor.incoming() {
        match stream {
            Err(_) => { /* connection failed */ }
            Ok(stream) => spawn(proc() {
                handle(stream);
            })
        }
    }

    drop(acceptor);
}

fn handle(mut stream: io::TcpStream) {
    stream.write(b"Hello Connection");
    loop {}
}
All the client does is attempt to read a single byte from the connection and print it.
fn main() {
    let mut socket = io::TcpStream::connect("127.0.0.1", 8080).unwrap();

    loop {
        match socket.read_byte() {
            Ok(i) => print!("{}", i),
            Err(e) => {
                println!("Error: {}", e);
                break
            }
        }
    }
}
Now the problem is that my client remains blocked on the read until I kill the server or close the TCP connection. This is not what I want: I need to keep a TCP connection open for a very long time and send messages back and forth between client and server. What am I misunderstanding here? I have the exact same problem with the real system I'm communicating with; I only become unblocked once I kill the connection.
Unfortunately, Rust does not have any facility for asynchronous I/O now. There are some attempts to rectify the situation, but they are far from complete yet. That is, there is a desire to make truly asynchronous I/O possible (proposals include selecting over I/O sources and channels at the same time, which would allow waking tasks that are blocked inside an I/O operation via an event over a channel, though it is not clear how this should be implemented on all supported platforms), but there is still a lot to do, and nothing really usable exists now, as far as I'm aware.
You can emulate this to some extent with timeouts, however. This is far from the best solution, but it works. It could look like this (simplified example from my code base):
let mut socket = UdpSocket::bind(address).unwrap();
let mut buf = [0u8, ..MAX_BUF_LEN];

loop {
    socket.set_read_timeout(Some(5000));
    match socket.recv_from(buf) {
        Ok((amt, src)) => { /* handle successful read */ }
        Err(ref e) if e.kind == TimedOut => {} // continue
        Err(e) => fail!("error receiving data: {}", e) // bail out
    }

    // do other work, check exit flags, for example
}
Here recv_from will return an IoError with its kind set to TimedOut if no data arrives on the socket within 5 seconds of the recv_from call. You need to reset the timeout inside each loop iteration, since it behaves more like a "deadline" than a timeout: once it expires, all calls will start to fail with a timeout error.
This is definitely not the way it should be done, but Rust currently does not provide anything better. At least it does the job.
Update
There is now an attempt to create an asynchronous event loop and network I/O based on it. It is called mio. It probably can be a good temporary (or even permanent, who knows) solution for asynchronous I/O.
I'm using the MongoDB C# driver 1.8.1.20 with Mongo 2.4.3. I use the following infinite loop to poll new messages from a capped collection and process them as they come (with a tailable cursor and await data). It works for the most part, but in production, it seems that from time to time the call to enumerator.MoveNext() blocks and never returns. This causes the loop to stall, and my application no longer receives updates. It seems to be happening when the connection is closed unexpectedly.
while (true)
{
    try
    {
        using (MongoCursorEnumerator<QueueMessage> enumerator = GetCursor())
        {
            while (!enumerator.IsDead)
            {
                while (enumerator.MoveNext()) // This is blocking forever when connection is temporarily lost
                    this.processMessage(enumerator.Current);
            }
        }
    }
    catch (Exception ex)
    {
        Trace.TraceError("Error in the ReceiveCore loop: " + ex.ToString());
    }
}
The GetCursor function does this:
cursor = (MongoCursorEnumerator<QueueMessage>)collection
    .FindAllAs<QueueMessage>()
    .SetFlags(QueryFlags.AwaitData | QueryFlags.TailableCursor | QueryFlags.NoCursorTimeout)
    .SetSortOrder(SortBy.Ascending("$natural"))
    .GetEnumerator();
Why is that blocking forever, and what can I do to make sure it throws an exception when it can't complete (possibly by timing out)?
I think I would just remove the QueryFlags.NoCursorTimeout, then let it time out occasionally and just restart.
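In other words, a minimal sketch of that change to the GetCursor call from the question (same code, just without the flag):
// Without NoCursorTimeout the server may kill an idle cursor, so a stalled
// MoveNext() eventually fails instead of blocking forever; the surrounding
// while (true) / try / catch loop then restarts the cursor.
cursor = (MongoCursorEnumerator<QueueMessage>)collection
    .FindAllAs<QueueMessage>()
    .SetFlags(QueryFlags.AwaitData | QueryFlags.TailableCursor)
    .SetSortOrder(SortBy.Ascending("$natural"))
    .GetEnumerator();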
I'm working on a Windows Forms application in C#. I'm using a socket client which connects asynchronously to a server. I would like the socket to try to reconnect to the server immediately if the connection is broken for any reason. What is the best design to approach this problem? Should I build a thread which continuously checks whether the connection is lost and tries to reconnect?
Here is the code of my XcomClient class, which handles the socket communication:
public void StartConnecting()
{
    socketClient.BeginConnect(this.remoteEP, new AsyncCallback(ConnectCallback), this.socketClient);
}

private void ConnectCallback(IAsyncResult ar)
{
    try
    {
        // Retrieve the socket from the state object.
        Socket client = (Socket)ar.AsyncState;

        // Complete the connection.
        client.EndConnect(ar);

        // Signal that the connection has been made.
        connectDone.Set();

        StartReceiving();
        NotifyClientStatusSubscribers(true);
    }
    catch (Exception e)
    {
        if (!this.socketClient.Connected)
            StartConnecting();
        else
        {
        }
    }
}

public void StartReceiving()
{
    StateObject state = new StateObject();
    state.workSocket = this.socketClient;
    socketClient.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0, new AsyncCallback(OnDataReceived), state);
}

private void OnDataReceived(IAsyncResult ar)
{
    try
    {
        StateObject state = (StateObject)ar.AsyncState;
        Socket client = state.workSocket;

        // Read data from the remote device.
        int iReadBytes = client.EndReceive(ar);
        if (iReadBytes > 0)
        {
            byte[] bytesReceived = new byte[iReadBytes];
            Buffer.BlockCopy(state.buffer, 0, bytesReceived, 0, iReadBytes);
            this.responseList.Enqueue(bytesReceived);
            StartReceiving();
            receiveDone.Set();
        }
        else
        {
            NotifyClientStatusSubscribers(false);
        }
    }
    catch (SocketException e)
    {
        NotifyClientStatusSubscribers(false);
    }
}
Today I try to detect a disconnection by checking the number of bytes received or by catching a socket exception.
If your application only receives data on a socket, then in most cases, you will never detect a broken connection. If you don't receive any data for a long time, you don't know if it's because the connection is broken or if the other end simply hasn't sent any data. You will, of course, detect (as EOF on the socket) connections closed by the other end in the normal fashion despite this.
In order to detect a broken connection, you need a keepalive. You need to either:
make the other end guarantee that it will send data on a set schedule, and you time out and close the connection if you don't get it, or,
send a probe to the other end once in a while. In this case the OS will take care of noticing a broken connection and you will get an error reading the socket if it's broken, either promptly (connection reset by peer) or eventually (connection timed out).
Either way, you need a timer. Whether you implement the timer as an event in an event loop or as a thread that sleeps is up to you and the best solution probably depends on how the rest of your application is structured. If you have a main thread that runs an event loop then it's probably best to hook in to that.
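For example, a minimal sketch of the probe approach against the question's XcomClient (the probe payload and the 30-second period are arbitrary choices here, and the server must be prepared to ignore the probe byte):
// Send a 1-byte application-level probe on a fixed schedule. If the
// connection is broken, the send (or a subsequent receive) fails with a
// SocketException, which the existing handlers already treat as a disconnect.
private System.Threading.Timer keepaliveTimer;

public void StartKeepalive()
{
    keepaliveTimer = new System.Threading.Timer(_ =>
    {
        try
        {
            socketClient.Send(new byte[] { 0x00 }); // probe byte the server ignores
        }
        catch (SocketException)
        {
            NotifyClientStatusSubscribers(false); // broken connection detected
        }
    }, null, TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));
}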
You can also enable the TCP keepalives option on the socket, but an application-layer keepalive is generally considered more robust.
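If you do want the socket option anyway, enabling it is a one-liner on the question's socketClient; the probe timing is then left to the operating system's defaults:
// Ask the OS to send TCP keepalive probes; a broken connection then surfaces
// as a SocketException on a pending receive.
socketClient.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);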