Is it possible to set the read command timeout when the function argument is a URL source?
Is it possible to set the timeout duration to 5 or 10 seconds?
My simulation does not stop until a built-in timeout is reached. The default is 3600 seconds. How do I set this to a different time period?
The OpenCPI version is 2.4.3.
Any suggestions?
This is my first use of OpenCPI.
You can change the default timeout for tests using the Timeout attribute in the Tests element of your test XML, as shown below:
<Tests UseHdlFileIO="true" Timeout="30">
<Case>
<!-- Inputs and Outputs in here -->
</Case>
</Tests>
The timeout is defined in seconds. More information can be found in section 13.3.5 of the OpenCPI Component Development Guide.
I am looking at some code where data is written to Cassandra using
Await.result(casDB.store(someVal), Duration.Inf)
I am seeing OperationTimeoutException and WriteTimeoutException. If we had a 5-second timeout and the server did not respond within 5 seconds, I could understand these exceptions. But when the duration is set to Inf, I cannot understand what causes them.
There is a configuration file named reference.conf inside the Akka jar. The default timeout for Akka is 5 seconds.
So you should create a new configuration file named application.conf and set actor -> typed -> timeout = 3000s.
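In case the file layout is unclear, a minimal application.conf following that description might look like the sketch below. The setting path is copied from this answer and the enclosing akka block is my assumption; verify both against the reference.conf bundled with your Akka version before relying on it.
# application.conf (sketch; setting path taken from the answer above, unverified)
akka {
  actor {
    typed {
      timeout = 3000s   # overrides the 5 second default from reference.conf
    }
  }
}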
I tried setting connectTimeoutMS and socketTimeoutMS to a low value but it still takes about 20 seconds before my script times out. Am I not using the options correctly? I want the script to exit after 5 seconds.
def init_mongo():
    mongo_connection = MongoClient('%s' % MONGO_SERVER, connectTimeoutMS=5000, socketTimeoutMS=5000)
    if mongo_connection is None:
        return
    try:
        <code>
    except:
        <code>
So if anyone comes across this later, I was using the wrong option.
What I was looking for is serverSelectionTimeoutMS.
The web page:
https://api.mongodb.com/python/current/api/pymongo/mongo_client.html
says:
connectTimeoutMS: (integer or None) Controls how long (in milliseconds) the driver will wait during server monitoring when connecting a new socket to a server before concluding the server is unavailable. Defaults to 20000 (20 seconds)
(Where "server monitoring" is undefined)
So what? Is connectTimeoutMS sort of like a decoy to keep out the amateurs (like me)?
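For anyone who wants a concrete example: serverSelectionTimeoutMS is the option that bounds how long the driver looks for a usable server before giving up. A minimal sketch is below; the host name is a placeholder, the pattern is what matters.
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Placeholder URI; serverSelectionTimeoutMS caps how long server selection
# may take before the driver raises ServerSelectionTimeoutError.
client = MongoClient('mongodb://example-host:27017', serverSelectionTimeoutMS=5000)
try:
    client.admin.command('ping')  # forces server selection
except ServerSelectionTimeoutError:
    print('No MongoDB server reachable within 5 seconds')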
I use this command in SIPp to generate load on my SIP servlet container:
./sipp -sf uac.xml 127.0.0.1:5080 -trace_rtt
I need two things. The first is increasing the load automatically, for example adding 100 calls per second. The second is that the CSV file I get only has the response time and timestamp; it does not include the call rate.
Can anyone help?
I found the answer in the SIPp documentation.
First problem:
-rate_increase 10 -fd 5s
This increases the call rate by 10 every 5 seconds.
Second problem:
add this parameter
-trace_stat
So my command looks like this:
./sipp -sf uac.xml 127.0.0.1:5080 -trace_rtt -trace_stat -rate_increase 10 -fd 5s
I recently patched my copy of GStreamer 0.10.36 to time out the tcpclientsink if the network connection is switched between wired/wireless (More information at Method to Cancel/Abort GStreamer tcpclientsink Timeout). It's a simple change. I just added the following to the gst_tcp_client_sink_start() function of gsttcpclientsink.c:
struct timeval timeout;
timeout.tv_sec = 60;
timeout.tv_usec = 0;
...
setsockopt (this->sock_fd.fd, SOL_SOCKET, SO_SNDTIMEO, (char *)&timeout, sizeof(timeout));
The strange thing is that the actual timeout (measured by wall clock time) is always double the value I set. If I disrupt the network connection with the timeout set to 60 seconds, it will take 120 seconds for GStreamer/socket to abort. If I set the timeout to 30 seconds, it will take 60 seconds. If I set the timeout to 180 seconds, it will take 360 seconds. Is there something about sockets that I don't understand that might be causing this behavior? I'd really like to know what's going on here.
This might be a duplicate of Socket SO_RCVTIMEO Timeout is double the set value in C++/VC++
I'm pasting my answer below since I think I had a similar problem.
Pasted answer
SO_RCVTIMEO and SO_SNDTIMEO do not apply to all socket operations; you should use non-blocking mode and select() instead.
The behaviour may change with different operating system configurations.
On my system, connect() times out after twice the value I set in SO_RCVTIMEO. A quick hack of setting SO_RCVTIMEO to x/2 before the connect and back to x afterwards works, but the proper solution is to use select(), as sketched below.
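To make the select() approach concrete, here is a minimal sketch of a connect with an application-controlled timeout using non-blocking mode plus select(). It is not the GStreamer patch itself; the helper name and the caller-prepared fd/addr are illustrative.
#include <sys/socket.h>
#include <sys/select.h>
#include <sys/time.h>
#include <fcntl.h>
#include <errno.h>

/* Sketch: returns 0 on success, -1 on error or timeout.
   The caller is assumed to have created `fd` and filled in `addr`. */
static int connect_with_timeout(int fd, const struct sockaddr *addr,
                                socklen_t addrlen, int timeout_sec)
{
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);         /* switch to non-blocking */

    int rc = connect(fd, addr, addrlen);
    if (rc < 0 && errno == EINPROGRESS) {
        fd_set wfds;
        struct timeval tv = { .tv_sec = timeout_sec, .tv_usec = 0 };
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);

        /* select() returns 0 on timeout, >0 when the socket becomes writable */
        rc = select(fd + 1, NULL, &wfds, NULL, &tv);
        if (rc <= 0) {
            rc = -1;                                 /* timed out or select failed */
        } else {
            int err = 0;
            socklen_t len = sizeof(err);
            getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
            rc = err ? -1 : 0;                       /* check the real connect result */
        }
    }

    fcntl(fd, F_SETFL, flags);                       /* restore blocking mode */
    return rc;
}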
References
Discussion of this problem (read the comments on the answer):
https://stackoverflow.com/a/4182564/4074995
How to use select to achieve the desired result:
http://beej.us/guide/bgnet/output/html/multipage/advanced.html#select
C: socket connection timeout