The default network socket timeout in DCMTK is 60 seconds.
How can I change it to 30?
I can see the declaration below, but I could not figure out how to change the value:
extern DCMTK_DCMNET_EXPORT OFGlobal<Sint32> dcmSocketReceiveTimeout; /* default: 60 */
As far as I understand your question, you want to set the timeout programmatically.
You can check how to do this in the DCMTK tools such as echoscu; basically, you have to call:
#include "dcmtk/dcmnet/dcmtrans.h"
dcmSocketReceiveTimeout.set(OFstatic_cast(Sint32, new_socket_timeout));
and the global timeout will change accordingly.
The same is true for setting the send timeout, where you use dcmSocketSendTimeout instead.
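For instance, a minimal sketch that sets both timeouts to 30 seconds (the helper name is made up; call it once at startup, before any association is opened):
#include "dcmtk/dcmnet/dcmtrans.h"

// Both timeouts default to 60 seconds; 30 is the value asked about.
static void setDcmtkSocketTimeouts()
{
    dcmSocketReceiveTimeout.set(OFstatic_cast(Sint32, 30));
    dcmSocketSendTimeout.set(OFstatic_cast(Sint32, 30));
}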
I cannot find how to set the baud rate for the Beckhoff EL6002. I got a hint that I should use a CoeWrite block for that, but as I am a bit new to TwinCAT I cannot find the correct function block. Could someone share a code example (in structured text) showing how to do that?
An alternative to programming it would be to configure it directly via the IO configuration. If you add a Startup value, it will be set every time the IO changes from one specified state to another; for example, PS means the transition from Pre-Op to Safe-Op. So it will work even if you replace the IO.
Another solution is to change it under the IO configuration on the CoE - Online tab. When you update it there, the value will be remembered.
In code, you can update it through CoE (CAN over EtherCAT) too. You can find the index number of the setting variable in the documentation. For channel 1 it seems to be 8000:11 (hexadecimal), so index = 16#8000 and subindex = 16#11.
Then, by using the mailbox write block (FB_EcCoESdoWriteEx) from the Tc2_EtherCAT library, it is possible to write a value to that parameter. So when your PLC program starts, first run the code that updates the variable to the desired baud rate.
For example, something like this (the declarations are illustrative; link the real values as described below):
VAR
    MailBoxWriter : FB_EcCoESdoWriteEx; // from the Tc2_EtherCAT library
    TargetValue   : WORD := 1;          // check the documentation for the correct value
    sNetIdMaster  : T_AmsNetId;         // AmsNetId of the EtherCAT master
    nPortEL6002   : UINT;               // terminal port of the EL6002
END_VAR

MailBoxWriter(
    sNetId          := sNetIdMaster,
    nSlaveAddr      := nPortEL6002,
    nSubIndex       := 16#11,
    nIndex          := 16#8000,
    pSrcBuf         := ADR(TargetValue),
    cbBufLen        := SIZEOF(TargetValue),
    bExecute        := TRUE,
    tTimeout        := T#500MS,
    bCompleteAccess := FALSE,
    bBusy           => ,
    bError          => ,
    nErrId          =>
);
The sNetId is the AmsNetId of the EtherCAT bus master. It can be linked from the IO configuration; see Master->InfoData->AmsNetId.
The nSlaveAddr is the terminal port of the EL6002, and it can also be linked from the IO configuration; see Terminal->InfoData->AdsAddr->port.
In an existing networking library I've been tasked to work on, there is a call to setsockopt which I don't understand.
Here you can see a TCP socket being created:
[socket] fd(11) domain(2:AF_INET) type(1:SOCK_STREAM) protocol(0:default)
Immediately afterward, a call to setsockopt is made for option SO_BROADCAST at the IPPROTO_TCP protocol level, with option value 5:
[setsockopt] fd(11) level(6:IPPROTO_TCP) option(6:SO_BROADCAST) ret(0) option:
0 0500 0000 ....
According to Beej's guide to networking this "Does nothing—NOTHING!!—to TCP stream sockets! Hahaha!"
Questions:
What exactly are they doing here?
Does this make any sense?
If anything, surely it should be option_value=1, so what is the 5 about?
I think your setsockopt decoder is wrong. Are you sure it isn't one of these?
#define TCP_NODELAY 1 /* Don't delay send to coalesce packets */
#define TCP_MAXSEG 2 /* Set maximum segment size */
#define TCP_CORK 3 /* Control sending of partial frames */
#define TCP_KEEPIDLE 4 /* Start keepalives after this period */
#define TCP_KEEPINTVL 5 /* Interval between keepalives */
#define TCP_KEEPCNT 6 /* Number of keepalives before death */
That isn't a full list. See /usr/include/netinet/tcp.h for everything.
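If the level in the log really is IPPROTO_TCP (6), then option number 6 is TCP_KEEPCNT, not SO_BROADCAST (which is 6 at the SOL_SOCKET level), and the value 5 would be the number of keepalive probes. A hedged sketch of what the original call was probably doing (the helper name is made up):
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Give up after 5 failed keepalive probes; probes are only sent
// if SO_KEEPALIVE is also enabled at the socket level.
static void setKeepaliveCount(int fd)
{
    int probes = 5;
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &probes, sizeof(probes));

    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
}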
Is there any way to set a request timeout while sending a message from the initiator?
We had an issue where we got a late reply from the acceptor and the application became unresponsive. The issue could be network delay or something similar, but I think it would be good if we could set a timeout option here.
Looking through the Application callbacks, I didn't find anything.
I want to set a timeout option with the SendToTarget API.
Any suggestions?
Did you add CheckLatency and MaxLatency to your config file and confirm the behaviour?
CheckLatency: If set to Y, messages must be received from the counterparty within a defined number of seconds (see MaxLatency). It is useful to turn this off if a system uses local time for its timestamps instead of GMT.
MaxLatency: If CheckLatency is set to Y, this defines the number of seconds of latency allowed for a message to be processed. Positive integer; the default is 120.
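For example, in the session settings file that might look like this (30 is just an illustrative value; the keys can go in the [DEFAULT] or a [SESSION] section):
[DEFAULT]
CheckLatency=Y
MaxLatency=30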
I'm experiencing the same problem using QuickFix/n.
Looking at the source code for version 1.4, the section that reads those settings from the configuration file is commented out and replaced with hard-coded default values:
// FIXME to get from config if available
session.MaxLatency = 120;
session.CheckLatency = true;
I recently patched my copy of GStreamer 0.10.36 to time out the tcpclientsink if the network connection is switched between wired/wireless (More information at Method to Cancel/Abort GStreamer tcpclientsink Timeout). It's a simple change. I just added the following to the gst_tcp_client_sink_start() function of gsttcpclientsink.c:
struct timeval timeout;
timeout.tv_sec = 60;
timeout.tv_usec = 0;
...
setsockopt (this->sock_fd.fd, SOL_SOCKET, SO_SNDTIMEO, (char *)&timeout, sizeof(timeout));
The strange thing is that the actual timeout (measured by wall clock time) is always double the value I set. If I disrupt the network connection with the timeout set to 60 seconds, it will take 120 seconds for GStreamer/socket to abort. If I set the timeout to 30 seconds, it will take 60 seconds. If I set the timeout to 180 seconds, it will take 360 seconds. Is there something about sockets that I don't understand that might be causing this behavior? I'd really like to know what's going on here.
This might be a duplicate of Socket SO_RCVTIMEO Timeout is double the set value in C++/VC++
I'm pasting my answer below since I think I had a similar problem.
Pasted answer
SO_RCVTIMEO and SO_SNDTIMEO do not work on all socket operations; you should use non-blocking mode and select().
The behaviour may change across different operating system configurations.
On my system, connect() times out after twice the value I set in SO_RCVTIMEO. A quick hack like setting SO_RCVTIMEO to x/2 before connect() and to x after it works, but the proper solution is using select().
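A minimal sketch of the select() approach for a plain BSD socket (the helper name is made up and error handling is trimmed):
#include <errno.h>
#include <fcntl.h>
#include <sys/select.h>
#include <sys/socket.h>

// Connect with an explicit timeout instead of relying on
// SO_SNDTIMEO/SO_RCVTIMEO. Returns 0 on success, -1 on error/timeout.
static int connect_with_timeout(int fd, const struct sockaddr *addr,
                                socklen_t len, int seconds)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    if (connect(fd, addr, len) == 0)
        return 0;                  // connected immediately
    if (errno != EINPROGRESS)
        return -1;                 // immediate failure

    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);
    struct timeval tv = { seconds, 0 };

    if (select(fd + 1, NULL, &wfds, NULL, &tv) <= 0)
        return -1;                 // timed out or select failed

    int err = 0;
    socklen_t elen = sizeof(err);  // check the deferred connect result
    getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &elen);
    return err == 0 ? 0 : -1;
}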
References
Discussion on this problem (read comments to answer):
https://stackoverflow.com/a/4182564/4074995
How to use select to achieve the desired result:
http://beej.us/guide/bgnet/output/html/multipage/advanced.html#select
I'm trying to detect ping flood attacks with Snort. I have included the rule
(drop icmp any any -> any any (itype:8; threshold, track by_src, count 20, seconds; msg:"Ping flood attack detected"; sid:100121))
in the Snort's ddos.rule file.
I'm attacking using the command
hping3 -1 --fast
The ping statistics in the attacking machine says
100% packet loss
However, the Snort action stats show the verdicts as
Block -> 0.
Why is this happening?
A few things to note:
1) This rule is missing the value for seconds. You need to specify a timeout value: you currently have "seconds;" where you need something like "seconds 5;". Since this is not valid, I'm not sure when Snort is actually going to generate an alert, which means it may just be dropping all of the icmp packets without generating any alerts.
2) This rule is going to drop EVERY icmp packet with itype 8. The threshold only specifies when to alert, not when to drop, so this is going to drop all packets that match and then generate one alert per 20 that it drops. See the manual on rule thresholds here.
3) If you do not have Snort configured in inline mode, you will not be able to actually block any packets. See more information about the three different modes here.
If you just want to detect and drop ping floods, you should probably use the detection_filter option instead of threshold. If you want to allow legitimate pings and drop only ping floods, you do not want to use threshold, because the way this rule is written it will block all icmp itype 8 packets. With detection_filter you can write a rule so that if Snort sees 20 pings in 5 seconds from the same source host, it drops them. Here is an example of what your rule might look like:
drop icmp any any -> any any (itype:8; detection_filter:track by_src, count 20, seconds 5; msg:"Ping flood attack detected"; sid:100121;)
If Snort sees 20 pings from the same source host within a 5 second window, it will then drop them and generate an alert. See the Snort manual for detection filters here.
With this configuration, you can allow legitimate pings on the network and block ping floods from the same source host.