Confused by official MongoDB documentation, what is socketTimeoutMS? - mongodb

I found two different places with different explanations of what socketTimeoutMS does.
The time in milliseconds to attempt a send or receive on a socket before the attempt times out. The default is never to timeout, though different drivers might vary. See the driver documentation.
From here
And following one:
The socketTimeoutMS sets the number of milliseconds a socket stays inactive after the driver has successfully connected before closing. If the value is set to 30000 milliseconds, the socket closes if there is no activity during a 30-second window.
From here
What does socketTimeoutMS really do?

As the docs themselves say, the exact behavior is whatever the driver provides:
See the driver documentation.
In your case, since you're using Node.js (the link you sent), it behaves as described in your second quote:
The socketTimeoutMS sets the number of milliseconds a socket stays inactive after the driver has successfully connected before closing. If the value is set to 30000 milliseconds, the socket closes if there is no activity during a 30-second window.
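socketTimeoutMS is one of the cross-driver connection-string options, so the same knob exists outside Node.js even though each driver enforces it slightly differently. As a hedged illustration only, a minimal sketch with the official Go driver (hypothetical host):

package main

import (
	"context"
	"fmt"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	// socketTimeoutMS as a connection-string option; 30000 ms here means a
	// single socket read or write that makes no progress for 30 seconds
	// fails with a timeout instead of blocking forever. Hypothetical host.
	uri := "mongodb://localhost:27017/?socketTimeoutMS=30000"

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
	if err != nil {
		panic(err)
	}
	defer client.Disconnect(context.Background())

	fmt.Println("ping:", client.Ping(ctx, nil))
}

The URI option is the same one the Node.js driver parses; only the enforcement mechanism differs from driver to driver.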

Related

Monitoring memcached flush with delay

In order not to overload our database server, we are trying to flush each server with a 60-second delay between them. I'm having a bit of an issue determining when a server was actually flushed when a delay is given.
I'm using BeITMemcached and calling FlushAll with a 60-second delay and staggered set to true.
I've tried using command-line telnet host port followed by stats to see if the flush delay is working; however, when I look at cmd_flush, the value goes up instantly, with no delay, on all of the host/port combinations being flushed. I've tried stats items and stats slabs, but I can't find information on what all the values represent and whether anything shows that the cache has been invalidated.
Is there another place I can look to determine when the server was actually flushed? Or does that value going up instantly mean the delay isn't working as expected?
I found a roundabout way of testing this. Even though cmd_flush gets updated right away, the actual keys aren't invalidated until after the delay.
So I connected with telnet to the server/port I wanted to monitor, then used gets key to find a key with a value set. Once I found one, I ran FlushAll with a delay between the first servers and this one and kept monitoring that key's value. After the delay was up, the key started to return no value.
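The same roundabout check can be scripted against the raw memcached text protocol instead of a manual telnet session. A minimal Go sketch of the procedure, assuming a hypothetical host and key and a single-line value:

package main

import (
	"bufio"
	"fmt"
	"net"
	"time"
)

func main() {
	// Hypothetical host and key - adjust for your environment.
	conn, err := net.Dial("tcp", "memcached-host:11211")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	r := bufio.NewReader(conn)

	// Schedule a flush 60 seconds out, like FlushAll with a delay.
	fmt.Fprintf(conn, "flush_all 60\r\n")
	resp, _ := r.ReadString('\n') // expect "OK"
	fmt.Println("flush_all:", resp)

	for {
		fmt.Fprintf(conn, "get somekey\r\n")
		line, _ := r.ReadString('\n') // "VALUE ..." while the key exists, "END" once flushed
		fmt.Printf("%s %q\n", time.Now().Format("15:04:05"), line)
		if line == "END\r\n" {
			fmt.Println("key gone - the delayed flush has taken effect")
			return
		}
		r.ReadString('\n') // data block (assumes a single-line value)
		r.ReadString('\n') // trailing "END"
		time.Sleep(5 * time.Second)
	}
}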

libpq Postgres PQexecParams 2-hour timeout

I am using libpq v9.6.8 for my application (running 24/7), which inserts data into a Postgres database. I also run PQexecParams to get the table columns. But randomly (sometimes just once a week, then twice over a weekend) this blocking PQexecParams call returns only after about 2 hours. During those two hours my application just hangs... The inserts are done via the async PQsendQueryParams.
Is there a way to configure a timeout for PQexecParams (I cannot find any appropriate timeout setting in the library; perhaps there is one on the Postgres server)? Is there a better way to perform the select synchronously?
Thank you in advance
The two hours suggest TCP keepalive kicking in and determining that the connection has gone bad.
You can set the keepalives_idle connection parameter so that the timeout happens earlier and you are not stalled for two hours.
But you probably also want to know what aborts the network connection. Your first look should be at the PostgreSQL server log; you should see an error message that matches the one on the client side. Probably a network component is at fault – look for firewalls in particular.
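With libpq this goes into the conninfo string, e.g. keepalives_idle=60 (keepalives_interval and keepalives_count exist as well). For illustration, the same idea expressed at the raw TCP level in Go, with a hypothetical address:

package main

import (
	"net"
	"time"
)

func main() {
	// Enable TCP keepalive probes every 60 seconds. A peer that has
	// silently vanished is then detected within minutes, instead of the
	// roughly two hours the OS defaults allow.
	d := net.Dialer{KeepAlive: 60 * time.Second}
	conn, err := d.Dial("tcp", "db-host:5432") // hypothetical Postgres host
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	// ... use the connection ...
}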

HttpURLConnection setReadTimeout

public void setReadTimeout (int timeoutMillis)
Sets the maximum time to wait for an input stream read to complete before giving up. Reading will fail with a SocketTimeoutException if the timeout elapses before data becomes available. The default value of 0 disables read timeouts; read attempts will block indefinitely.
Parameters
timeoutMillis - the read timeout in milliseconds. Non-negative.
What is the meaning of the info in bold characters? Is it good to include this in a network connection?
It is the maximum time the client waits, once the socket is connected, for data to arrive in response to the request it makes; if nothing arrives within that window, the read fails with a SocketTimeoutException.
The default value is good enough unless you need the call to return within a certain duration and you are sure the server's responses never exceed a certain time.
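For comparison, a rough Go sketch of the same idea (hypothetical URL). Note the semantics differ slightly: Java's setReadTimeout bounds each individual read, while Go's http.Client.Timeout bounds the whole request:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Timeout bounds the entire request: connect, waiting for headers, and
	// reading the body. Zero means no timeout, like setReadTimeout(0).
	client := &http.Client{Timeout: 10 * time.Second}

	resp, err := client.Get("http://example.com/") // hypothetical URL
	if err != nil {
		fmt.Println("request failed or timed out:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(len(body), "bytes read")
}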

Golang tcp socket read gives EOF eventually

I have a problem reading from a socket. There is an Asterisk instance running with plenty of calls (10-60 a minute), and I'm trying to read and process CDR events related to those calls (connected to AMI).
Here is the library I'm using (not mine, but I was pushed to fork it because of bugs): https://github.com/warik/gami
It's pretty straightforward; the main action happens in gami.go's readDispatcher:
buf := make([]byte, _READ_BUF) // read buffer, _READ_BUF == 1024
for {
    rc, err := (*a.conn).Read(buf) // blocks until data arrives; err is io.EOF once the peer closes
So, there is a TCPConn (a.conn) and a buffer of size 1024 into which I read messages from the socket. So far so good, but eventually, from time to time (anywhere from 10 minutes to 5 hours, independent of how much data comes through the socket), the Read operation fails with an io.EOF error. I tried to reconnect and re-login immediately, but that is also impossible - the connection times out, so I was forced to wait about 40-60 seconds, and this delay is very costly to me; I'm losing a lot of data because of it. I've been googling, reading sources, and trying a lot of things - nothing. The strangest thing is that a simple socket opened in Python or PHP does not fail.
Is it possible that the problem is a lack of file descriptors to represent the socket on my machine or on the Asterisk server?
Is it possible that the problem is in the Asterisk configuration (I have another Asterisk instance where this problem doesn't reproduce, but it also handles far fewer calls)?
Is it possible that the problem is in how I deal with the socket connection, or with Go in general?
go version go1.2.1 linux/amd64
asterisk 1.8
Update to the latest Asterisk. There was a bug like that when AMI sends a lot of data.
To check the issue, send a command with long output via AMI, like "COMMAND sip show peers" (or any other long-output command), and see the result.
OK, the problem was an OS socket buffer overflow. As it turned out, there was too much data to handle.
So, there are three possible ways to fix this:
increase the socket buffer size
somehow increase the speed of the process that reads data from the socket
lower the data volume or frequency
The thing is that gami by default reads all data from Asterisk, and I was reading all of it and filtering it only after the actual read operation. Given that the AMI listening application was running on a pretty weak PC, it simply could not read all the data before the buffer capacity was exceeded. But it is possible to receive only particular events by sending the "Events" action to AMI and specifying the desired "EventMask".
So my decision was to do that, and to create different connections for different event types, as sketched below.
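A minimal sketch of that approach (the Events action and EventMask header are standard AMI; the address and credentials here are hypothetical):

package main

import (
	"bufio"
	"fmt"
	"net"
)

func main() {
	conn, err := net.Dial("tcp", "asterisk-host:5038") // hypothetical AMI address
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Log in, then ask AMI to deliver only CDR events, which keeps the
	// traffic volume (and hence the OS socket buffer pressure) low.
	fmt.Fprintf(conn, "Action: Login\r\nUsername: admin\r\nSecret: secret\r\n\r\n") // hypothetical credentials
	fmt.Fprintf(conn, "Action: Events\r\nEventMask: cdr\r\n\r\n")

	r := bufio.NewReader(conn)
	for {
		line, err := r.ReadString('\n')
		if err != nil {
			return // io.EOF etc.
		}
		fmt.Print(line)
	}
}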

gevent create_connection seems to take five-seconds, every time

I'm connecting to a third-party server whose official client connects immediately. However, my client takes five seconds every time.
The create_connection() call takes a timeout, and, if omitted, it uses the value from gevent.socket.getdefaulttimeout(). On my system, this is None. Even if I explicitly set the timeout to 1 (whether that represents seconds, milliseconds, microseconds, etc.), it consistently takes the same amount of time.
Any ideas?
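A consistent delay that ignores the timeout value often lives in a stage the timeout doesn't cover, typically name resolution before the actual connect. A rough, hedged sketch for timing the two stages separately (hypothetical host and port):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	host := "thirdparty.example.com" // hypothetical server
	port := "443"                    // hypothetical port

	// Stage 1: name resolution on its own.
	start := time.Now()
	addrs, err := net.DefaultResolver.LookupHost(context.Background(), host)
	fmt.Println("DNS took", time.Since(start), addrs, err)

	// Stage 2: the TCP connect (DialTimeout resolves again; close enough
	// for a rough comparison of where the fixed delay sits).
	start = time.Now()
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), 10*time.Second)
	fmt.Println("connect took", time.Since(start), err)
	if err == nil {
		conn.Close()
	}
}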