Simulate a TCP tarpit - sockets

I want to test cross-platform handling of connection failures and timeouts, starting with connect()s that block:
#!/usr/bin/python3
import socket

s = socket.socket()
endpoint = ('localhost', 28813)
s.bind(endpoint)
# listen for connections, accept 0 connections kept waiting (backlog)
# all other connect()s should block indefinitely
s.listen(0)

for i in range(1, 1000):
    c = socket.socket()
    c.connect(endpoint)
    # print number of successfully connected sockets
    print(i)
On Linux, it prints "1" and hangs indefinitely (i.e. the behavior I want).
On Windows (Server 2012), it prints "1" and aborts with a ConnectionRefusedError.
On macOS, it prints all numbers from 1 to 128 and then hangs indefinitely.
Thus, I could accept that macOS ignores the backlog parameter and simply connect enough sockets myself so that later clients block on new connections.
How can I get Windows to also block connect() attempts?

On Windows, the SO_CONDITIONAL_ACCEPT socket option makes incoming connections wait until the application accept()s them. The constant (SO_CONDITIONAL_ACCEPT = 0x3002) isn't exposed in the Python socket module, but can be supplied manually:
s.bind(endpoint)
s.setsockopt(socket.SOL_SOCKET, 0x3002, 1)
s.listen(0)
It's so effective that even the first connect is kept waiting.
On macOS, backlog=0 is reset to backlog=SOMAXCONN; backlog=1 keeps all connections except the first waiting.
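
Putting the question's setup and this option together, a minimal cross-platform sketch (untested beyond the platforms discussed above):

#!/usr/bin/python3
import socket
import sys

SO_CONDITIONAL_ACCEPT = 0x3002  # winsock constant, not exposed by the socket module

s = socket.socket()
s.bind(('localhost', 28813))
if sys.platform == 'win32':
    # make pending connect()s wait for accept() instead of being refused
    s.setsockopt(socket.SOL_SOCKET, SO_CONDITIONAL_ACCEPT, 1)
# backlog=0 blocks further connect()s on Linux and Windows; on macOS it is
# reset to SOMAXCONN, so use listen(1) there and sacrifice the first connection
s.listen(0)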

Related

How to keep alive tcp socket in tcl

I have an issue: a piece of Tcl code opens a client socket on Linux. Because of my tests, the socket hits a connection timeout after 2 hours. I'm looking for Tcl commands to keep the opened socket alive. So far I have found setsockopt and SO_KEEPALIVE, but all of these are in C. Can someone help me with how to keep sockets alive in Tcl?
I tried setsockopt, but it didn't work since it is C. I looked at tcp_keepalive_time, tcp_keepalive_intvl and tcp_keepalive_probes, which are 7200, 75 and 9 respectively. I tried to modify those parameters, but I don't have the permissions (admin restrictions).
if [catch {socket $use_host $use_port} comIdW] {
    error "Error: Unable to connect to host $use_host port $use_port: $comIdW"
}
fconfigure $comIdW -buffering none -translation binary
set comIdR $comIdW
# I added the following code based on my understanding (C-style pseudocode, not valid Tcl)
set optval 1
set optlen [llength $optval]
setsockopt($comIdW, SOL_SOCKET, SO_KEEPALIVE, $optval, $optlen)
puts "SO_KEEPALIVE is ($optval ? \"ON\" : \"OFF\")"
I want to keep this channel alive; it might be good if I could ping after 30 minutes of the channel being open.
tried setsockopt but it didn't work since it is C
There is currently no built-in way, at the script level, to set SO_KEEPALIVE on a socket that Tcl controls as a channel (via fconfigure). See also TIP#344.
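
For reference, this is what the per-socket equivalent looks like from a language that does expose setsockopt; a Python sketch for Linux, with the idle time lowered to the 30 minutes the question asks about (no admin rights needed, because this overrides the system-wide 7200/75/9 defaults for this one socket only; host and port are placeholders):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('example.com', 4242))  # hypothetical endpoint
# enable keepalive probes on this socket only
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
# Linux-specific knobs mirroring tcp_keepalive_time/intvl/probes
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 1800)  # first probe after 30 min idle
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 75)   # seconds between probes
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 9)      # failed probes before drop

A Tcl extension written in C could make the same calls against the channel's underlying file descriptor.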

how can I make large number of connections without error at client side

I have written a program in Go that makes about 2000 requests per second to different remote IPs, with the local port randomly selected by Linux, and closes each connection immediately after it is established, but I still periodically encounter a bind: address already in use error.
What I have done:
net.ipv4.ip_local_port_range is 15000-65535
net.ipv4.tcp_tw_recycle=1, net.ipv4.tcp_tw_reuse=1, net.ipv4.tcp_fin_timeout=30
Here is the sockstat output:
sockets: used 1200 TCP: inuse 2302 orphan 1603 tw 40940 alloc 2325 mem 201
I can't figure out why this error still occurs when the kernel is selecting the local port. Will the kernel return a port that is already in use?
This is a good answer from 2012:
https://serverfault.com/questions/342741/what-are-the-ramifications-of-setting-tcp-tw-recycle-reuse-to-1#434669
As of 2018, tcp_tw_recycle exists only in the sysctl binary and is otherwise gone from the kernel:
https://github.com/torvalds/linux/search?utf8=%E2%9C%93&q=tcp_tw_recycle&type=
tcp_tw_reuse is still in use as described in the above answer:
https://github.com/torvalds/linux/blob/master/net/ipv4/tcp_ipv4.c#L128
However, TCP_TIMEWAIT_LEN is still in use:
https://github.com/torvalds/linux/search?utf8=%E2%9C%93&q=TCP_TIMEWAIT_LEN&type=
the value is hardcoded (60*HZ, i.e. about 60 seconds):
https://github.com/torvalds/linux/blob/master/include/net/tcp.h#L120
and tcp_fin_timeout refers to a different state:
https://github.com/torvalds/linux/blob/master/Documentation/networking/ip-sysctl.txt#L294
One can relatively safely change the local port range to 1025-65535.
For kicks, if this client were talking to servers and a network under my control, I would build a new kernel with a not-to-spec TCP_TIMEWAIT_LEN, and perhaps also fiddle with tcp_max_tw_buckets:
https://github.com/torvalds/linux/blob/master/Documentation/networking/ip-sysctl.txt#L379
But doing so in other circumstances, such as when this client is behind a NAT and talking to common public servers, will likely be disruptive.
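
One user-space alternative the answer above does not cover (my addition, not from the original): a purely load-generating client can avoid TIME_WAIT altogether by closing with an RST instead of a FIN, at the price of violating the spec and discarding any in-flight data. A Python sketch:

import socket
import struct

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# l_onoff=1, l_linger=0: close() sends RST instead of FIN,
# so this end never enters TIME_WAIT
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))
s.connect(('192.0.2.1', 80))  # hypothetical target
s.close()

As with the kernel tinkering above, this is only reasonable against servers and a network you control.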

how to know if a server is listening at a port

I have created a TCP socket at one end of my application; call that end 1. This socket closes after about 10 seconds. The other side of my application (end 2) is then allowed to connect to the socket created above. I'm coding this socket app in Python, so suppose end 2 tries to connect to the TCP socket but the socket no longer exists: my program terminates because of an exception. I don't want that to happen. There should effectively be a while loop in end 2, so that if a connection is not available, it goes back and waits.
Are you handling the exception correctly?
try:
    s.connect((host, port))
except socket.error as e:
    if s:
        s.close()
    print("Could not open socket:", e)
    # code to handle a retry
On getting an error, you can retry by doing the connect again (or, on the listening end, the bind and listen again). You should also have a retry count, say 5, and then perhaps exit.
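
A sketch of that retry loop (the function name, the 5-attempt limit and the delay are my choices, not from the original answer):

import socket
import time

def connect_with_retry(host, port, retries=5, delay=2.0):
    for attempt in range(retries):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.connect((host, port))
            return s  # connected; the caller now owns the socket
        except OSError as e:
            s.close()
            print("Could not open socket:", e)
            time.sleep(delay)  # wait before the next attempt
    raise ConnectionError("gave up after %d attempts" % retries)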

Lua Socket cannot be properly stopped by Ctrl+C

I have a standalone Lua script that uses LuaSocket to connect to a server over TCP/IP. It uses the receive call to read data from that server. It works; however, when I try to stop it with Ctrl+C, one of two scenarios happens:
- If there is currently no traffic and receive is waiting, Ctrl+C has no effect. The program continues to run and has to be terminated by kill.
- If there is traffic, the program exits with the printout below, with the socket still open and the server not accepting another connection:
lua: luaSocketTest.lua:15: interrupted!
stack traceback:
[C]: in function 'receive'
luaSocketTest.lua:15: in function 'doWork'
luaSocketTest.lua:22: in main chunk
[C]: ?
I tried using pcall to solve the second scenario, without success: pcall doesn't return, and the process still throws the error.
Sample of my program is below:
local socket = require ("socket")
local ip = "localhost"
local port = 5003

function doWork ()
    print ("Starting socket: "..ip..":"..port)
    client = assert(socket.connect(ip, port))
    print ("Socket Accepted")
    client:send("TEST TEST")
    while 1 do
        local byte, err = client:receive (1)
        if not err then
            print (byte)
        end
    end
end

while 1 do
    local status = pcall(doWork())
    print ("EXITED PCALL WITH STATUS: "..tostring(status))
    if not status then client:close() end
end
This would be quite a change, but you could employ lua-ev. It allows you to add signal handlers, which is exactly what is required to react to Ctrl+C. (As an aside, pcall(doWork()) first calls doWork and only then passes its result to pcall, so the error is raised before pcall can catch it; pcall(doWork) would run it protected.)
local socket = require'socket'
local ev = require'ev'

-- ip and port as in the question
-- make connect and send in blocking mode
local client = socket.connect(ip, port)
client:send('TEST TEST')
-- make client non-blocking
client:settimeout(0)

ev.IO.new(function()
    repeat
        local data, err, part = client:receive(10000)
        print('received', data or part)
    until err
end, client:getfd(), ev.READ):start(ev.Loop.default)

local SIGINT = 2
ev.Signal.new(function()
    print('SIGINT received')
end, SIGINT):start(ev.Loop.default)

ev.Loop.default:loop()
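
For comparison, the same pattern, a non-blocking socket plus an explicit SIGINT handler driving the loop, sketched in Python (localhost:5003 is taken from the question; the rest is illustrative):

import selectors
import signal
import socket

running = True

def on_sigint(signum, frame):
    # replaces Python's default KeyboardInterrupt behavior; just flip a flag
    global running
    running = False

signal.signal(signal.SIGINT, on_sigint)

client = socket.create_connection(('localhost', 5003))
client.sendall(b'TEST TEST')
client.setblocking(False)

sel = selectors.DefaultSelector()
sel.register(client, selectors.EVENT_READ)

while running:
    # the timeout guarantees the flag is checked even when no traffic arrives
    for key, _ in sel.select(timeout=0.5):
        data = key.fileobj.recv(10000)
        if data:
            print('received', data)
        else:
            running = False  # peer closed the connection

sel.close()
client.close()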

memcached apparently resetting connections

UPDATE:
It's not memcached, it's a lot of sockets in TIME_WAIT state:
% ss -s
Total: 2494 (kernel 2784)
TCP: 43323 (estab 2314, closed 40983, orphaned 0, synrecv 0, timewait 40982/0), ports 16756
BTW, I have modified the previous version (below) to use Brad Fitz's memcache client and to reuse the same memcache connection:
http://dpaste.com/1387307/
OLD VERSION:
I have thrown together the most basic web server in Go, with a handler function doing only two things:
retrieving a key from memcached
sending it as the HTTP response to the client
Here's the code: http://dpaste.com/1386559/
The problem is I'm getting a lot of connection resets on memcached:
2013/09/18 20:20:11 http: panic serving [::1]:19990: dial tcp 127.0.0.1:11211: connection reset by peer
goroutine 20995 [running]:
net/http.func·007()
/usr/local/go/src/pkg/net/http/server.go:1022 +0xac
main.maybe_panic(0xc200d2e570, 0xc2014ebd80)
/root/go/src/http_server.go:19 +0x4d
main.get_memc_val(0x615200, 0x7, 0x60b5c0, 0x6, 0x42ee58, ...)
/root/go/src/http_server.go:25 +0x64
main.func·001(0xc200149b40, 0xc2017b3380, 0xc201888b60)
/root/go/src/http_server.go:41 +0x35
net/http.HandlerFunc.ServeHTTP(0x65e950, 0xc200149b40, 0xc2017b3380, 0xc201888b60)
/usr/local/go/src/pkg/net/http/server.go:1149 +0x3e
net/http.serverHandler.ServeHTTP(0xc200095410, 0xc200149b40, 0xc2017b3380, 0xc201888b60)
/usr/local/go/src/pkg/net/http/server.go:1517 +0x16c
net/http.(*conn).serve(0xc201b9b2d0)
/usr/local/go/src/pkg/net/http/server.go:1096 +0x765
created by net/http.(*Server).Serve
/usr/local/go/src/pkg/net/http/server.go:1564 +0x266
I have taken care to set up Linux kernel networking in such a way as not to get in the way (turning off SYN flooding protection etc).
...
...
And yet on testing with "ab" (below) I'm getting those errors.
ab -c 1000 -n 50000 "http://localhost:8000/"
There is no sign whatsoever anywhere I looked that it's the kernel (dmesg, /var/log).
I would guess that is because you are running out of sockets: you never close the memc connection here. Check with netstat while your program is running.
func get_memc_val(k string) []byte {
    memc, err := gomemcache.Connect(mc_ip, mc_port) // a brand-new connection on every request
    maybe_panic(err)
    val, _, _ := memc.Get(k)
    return val // memc is never closed: each request leaves another socket behind
}
I'd use this Go memcache interface if I were you; it was written by the author of memcached, who now works at Google on Go-related things.
Try the memcache client from the YBC library. Unlike gomemcache, it opens and reuses only a few connections to the memcache server, regardless of the number of concurrent requests issued via the client. It achieves high performance by pipelining concurrent requests over a small number of open connections to the memcache server.
The number of connections to the memcache server can be configured via ClientConfig.ConnectionsCount.
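
The reuse idea itself is language-independent. A minimal sketch of the same pattern in Python, with a hand-rolled memcached text-protocol get and a fixed pool of sockets (the pool size, names and simplified protocol handling are mine, with no error handling):

import queue
import socket

POOL_SIZE = 4
pool = queue.Queue()
for _ in range(POOL_SIZE):
    pool.put(socket.create_connection(('127.0.0.1', 11211)))

def get_val(key):
    conn = pool.get()  # borrow a connection; blocks if all are in use
    try:
        conn.sendall(('get %s\r\n' % key).encode())
        f = conn.makefile('rb')
        header = f.readline()          # b"VALUE <key> <flags> <bytes>\r\n" or b"END\r\n"
        if not header.startswith(b'VALUE'):
            return b''                 # cache miss
        nbytes = int(header.split()[3])
        val = f.read(nbytes)
        f.readline()                   # trailing \r\n after the data block
        f.readline()                   # b"END\r\n"
        return val
    finally:
        pool.put(conn)                 # return the connection for the next request

However many concurrent handlers call get_val, the socket count stays at POOL_SIZE, which is what makes the TIME_WAIT flood go away.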