What does the Recv-Q value in a LISTEN socket mean? - sockets

My program is in trouble, with netstat output like below: it cannot receive packets. What does the Recv-Q value in the first line mean? I have read the man page and done some googling, but found nothing.
[root@(none) /data]# netstat -ntap | grep 8000
tcp 129 0 0.0.0.0:8000 0.0.0.0:* LISTEN 1526/XXXXX-
tcp 0 0 9.11.6.36:8000 9.11.6.37:48306 SYN_RECV -
tcp 0 0 9.11.6.36:8000 9.11.6.34:44936 SYN_RECV -
tcp 365 0 9.11.6.36:8000 9.11.6.37:58446 CLOSE_WAIT -
tcp 364 0 9.11.6.36:8000 9.11.6.37:55018 CLOSE_WAIT -
tcp 364 0 9.11.6.36:8000 9.11.6.37:42830 CLOSE_WAIT -
tcp 364 0 9.11.6.36:8000 9.11.6.37:56344 CLOSE_WAIT -
tcp 0 364 9.11.6.34:38947 9.11.6.36:8000 FIN_WAIT1 -
tcp 364 0 9.11.6.36:8000 9.11.6.37:52406 CLOSE_WAIT -
tcp 365 0 9.11.6.36:8000 9.11.6.37:53603 CLOSE_WAIT -
tcp 364 0 9.11.6.36:8000 9.11.6.37:47522 CLOSE_WAIT -
tcp 364 0 9.11.6.36:8000 9.11.6.34:48191 CLOSE_WAIT -
tcp 364 0 9.11.6.36:8000 9.11.6.37:51813 CLOSE_WAIT -
tcp 364 0 9.11.6.36:8000 9.11.6.34:57789 CLOSE_WAIT -
tcp 364 0 9.11.6.36:8000 9.11.6.37:34252 CLOSE_WAIT -
tcp 364 0 9.11.6.36:8000 9.11.6.34:38930 CLOSE_WAIT -
tcp 365 0 9.11.6.36:8000 9.11.6.37:44121 CLOSE_WAIT -
tcp 365 0 9.11.6.36:8000 9.11.6.37:60465 CLOSE_WAIT -
tcp 365 0 9.11.6.36:8000 9.11.6.37:37461 CLOSE_WAIT -
tcp 0 362 9.11.6.34:35954 9.11.6.36:8000 FIN_WAIT1 -
tcp 364 0 9.11.6.36:8000 9.11.6.37:55241 CLOSE_WAIT -
P.S.
See also at https://groups.google.com/forum/#!topic/comp.os.linux.networking/PoP0YOOIj70

Recv-Q is the Receive Queue: the number of bytes currently sitting in the socket's receive buffer. When the application reads from the socket, those bytes are removed from the buffer and copied into application memory. If Recv-Q grows too large, packets will be dropped because there is nowhere to put them. Note that on Linux, for a socket in the LISTEN state, this column reports the number of connections waiting in the accept queue rather than bytes, so a high value there means the application is not calling accept() fast enough.
More info here: netstat
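As a quick diagnostic, you can filter netstat-style output for sockets with a nonzero Recv-Q using awk. This is a sketch demonstrated on sample lines copied from the layout above (column positions assume Linux netstat output); on a live box you would pipe in the real netstat output instead:

```shell
# Sample rows in the layout shown above: Proto Recv-Q Send-Q Local Foreign State
sample='tcp 129 0 0.0.0.0:8000 0.0.0.0:* LISTEN
tcp 0 0 9.11.6.36:8000 9.11.6.37:48306 SYN_RECV
tcp 364 0 9.11.6.36:8000 9.11.6.37:55018 CLOSE_WAIT'

# Print Recv-Q, local address and state for sockets with data (or, for a
# LISTEN socket, pending connections) queued up. Live usage: netstat -nta | awk ...
printf '%s\n' "$sample" | awk '$2 > 0 {print $2, $4, $6}'
```

In the question's output this immediately singles out the LISTEN socket with 129 queued connections and the CLOSE_WAIT sockets with unread data.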

Related

netstat not showing server port in listen state

In one of my applications, we have a server listening for connections on port 8654.
Somehow it went into CLOSE_WAIT state, but I am wondering why the netstat output does not show port 8654 in LISTEN state.
Actual output:
[usr@server ~]$ sudo netstat -anop | grep 8654
tcp 1 0 1.2.3.4:8654 1.2.4.5:34567 CLOSE_WAIT 54321/abc off (0.00/0/0)
Expected output:
[usr@server ~]$ sudo netstat -anop | grep 8654
tcp 0 0 1.2.3.4:9675 0.0.0.0:* LISTEN 53421/abc off (0.00/0/0)
tcp 1 0 1.2.3.4:8654 1.2.4.5:34567 CLOSE_WAIT 54321/abc off (0.00/0/0)
What could be the reason for it?

How can I show netstat output in PowerShell without the 0.0.0.0 in the Local Address?

I hope I can explain it; sorry for my English.
Proto Local Address Foreign Address State PID
TCP 0.0.0.0:135 0.0.0.0:0 LISTENING 1160
TCP 0.0.0.0:445 0.0.0.0:0 LISTENING 4
TCP 0.0.0.0:5040 0.0.0.0:0 LISTENING 8864
TCP 0.0.0.0:5357 0.0.0.0:0 LISTENING 4
TCP 0.0.0.0:7680 0.0.0.0:0 LISTENING 14052
TCP 0.0.0.0:49664 0.0.0.0:0 LISTENING 964
TCP 0.0.0.0:49665 0.0.0.0:0 LISTENING 872
TCP 0.0.0.0:49666 0.0.0.0:0 LISTENING 1696
TCP 0.0.0.0:49667 0.0.0.0:0 LISTENING 1448
TCP 0.0.0.0:49668 0.0.0.0:0 LISTENING 3380
TCP 0.0.0.0:49710 0.0.0.0:0 LISTENING 944
but what I want is:
Local Address
135
445
5040
5357
7680
49664
49665
49666
49667
49668
49710
Also, what code can I use to show this on the screen?
Get-NetTCPConnection is the PowerShell equivalent of netstat, and it helpfully separates out the port numbers you're looking for. For example, here's what it looks like normally:
Get-NetTCPConnection -LocalAddress 0.0.0.0 -State Listen
LocalAddress LocalPort RemoteAddress RemotePort State AppliedSetting OwningProcess
------------ --------- ------------- ---------- ----- -------------- -------------
0.0.0.0 58369 0.0.0.0 0 Listen 3892
0.0.0.0 49677 0.0.0.0 0 Listen 792
0.0.0.0 49672 0.0.0.0 0 Listen 3900
And then to display just the port numbers, you can add Select-Object:
Get-NetTCPConnection -State Listen |
Select-Object -ExpandProperty LocalPort
58369
49677
49672
edit: To filter by listening address, you can use the -LocalAddress parameter, or use Where-Object:
# Using LocalAddress
Get-NetTCPConnection -LocalAddress 0.0.0.0,127.0.*,192.168.* -State Listen
LocalAddress LocalPort RemoteAddress RemotePort State AppliedSetting OwningProcess
------------ --------- ------------- ---------- ----- -------------- -------------
127.0.0.1 62522 0.0.0.0 0 Listen 3432
0.0.0.0 58369 0.0.0.0 0 Listen 3892
127.0.0.1 50595 0.0.0.0 0 Listen 16596
If string output is acceptable, then one of the easiest ways to achieve your desired result is to simply remove the unwanted substring with a regex. However, it will mess up the column alignment.
(netstat -ano) -replace '0\.0\.0\.0:'
Proto Local Address Foreign Address State PID
TCP 135 0 LISTENING 868
TCP 445 0 LISTENING 4
TCP 5040 0 LISTENING 7288
TCP 5357 0 LISTENING 4
TCP 5985 0 LISTENING 4
TCP 6783 0 LISTENING 5128
TCP 47001 0 LISTENING 4
TCP 49664 0 LISTENING 976
TCP 127.0.0.1:6463 0 LISTENING 14660
TCP 127.0.0.1:6800 0 LISTENING 7468
TCP 127.0.0.1:8094 0 LISTENING 4348
This is a huge drawback compared to PowerShell's object-based output. You could try to correct the alignment manually if you so desire:
(netstat -ano) -replace '0\.0\.0\.0:(\d+)','$1 '
Proto Local Address Foreign Address State PID
TCP 135 0 LISTENING 868
TCP 445 0 LISTENING 4
TCP 5040 0 LISTENING 7288
TCP 5357 0 LISTENING 4
TCP 5985 0 LISTENING 4
TCP 6783 0 LISTENING 5128
TCP 47001 0 LISTENING 4
TCP 127.0.0.1:8094 0 LISTENING 4348
TCP 127.0.0.1:8763 0 LISTENING 5128
TCP 127.0.0.1:9527 0 LISTENING 5128
TCP 127.0.0.1:37014 0 LISTENING 4576
Again, these examples really only benefit the user viewing them. If you want to use the data later on, you'd have to parse it; at that point you really should look at the PowerShell alternatives, such as the one Cpt.Whale's answer shows.
If not using Get-NetTCPConnection
Here's an example of how to correctly parse netstat's output into objects, similar to Get-NetTCPConnection.
Objects are created automatically from the regex's named capture groups:
$RegexNetstat = @'
(?x)
# parse output from: "netstat -a -n -o"
# you do not need to skip or filter lines like: "| Select-Object -Skip 4"
# because this correctly captures records with empty States
^\s+
(?<Protocol>\S+)
\s+
(?<LocalAddress>\S+)
\s+
(?<ForeignAddress>\S+)
\s+
(?<State>\S{0,})?
\s+
(?<Pid>\S+)$
'@
if (! $NetstatStdout) {
    $NetstatStdout = & netstat -a -n -o
}
# If you're on Pwsh7 you can simplify it using null-*-operators
# $NetstatStdout ??= & netstat -a -n -o
function Format-NetStat {
    param(
        # stdin
        [Parameter(Mandatory, ValueFromPipeline)]
        [AllowEmptyString()]
        [AllowNull()]
        [Alias('Stdin')]
        [string]$Text
    )
    process {
        if ($Text -match $RegexNetstat) {
            $Matches.Remove(0)
            $hash = $Matches
            $hash['Process'] = Get-Process -Id $hash.Pid
            $hash['ProcessName'] = $hash['Process'].ProcessName
            $hash['LocalPort'] = $hash['LocalAddress'] -split ':' | Select-Object -Last 1
            [pscustomobject]$hash
        }
    }
}
Piping Results
They are true objects, so you can pipe, filter, group, etc. as normal. (I cached Stdout for this demo, so you can compare output of the same results)
usage:
$Stats = $NetstatStdout | Format-NetStat
$stats | Format-Table
Your Original Column Layout
PS> $stats | Ft -AutoSize Protocol, LocalPort, ForeignAddress, State, PID
Protocol LocalPort ForeignAddress State Pid
-------- --------- -------------- ----- ---
TCP 135 0.0.0.0:0 LISTENING 1484
TCP 445 0.0.0.0:0 LISTENING 4
TCP 808 0.0.0.0:0 LISTENING 5608
TCP 5040 0.0.0.0:0 LISTENING 9300
TCP 5357 0.0.0.0:0 LISTENING 4
TCP 5432 0.0.0.0:0 LISTENING 7480
TCP 11629 0.0.0.0:0 LISTENING 14400
TCP 27036 0.0.0.0:0 LISTENING 9196
TCP 49664 0.0.0.0:0 LISTENING 1116
TCP 49665 0.0.0.0:0 LISTENING 880
TCP 49666 0.0.0.0:0 LISTENING 1012
TCP 49667 0.0.0.0:0 LISTENING 1272
TCP 49668 0.0.0.0:0 LISTENING 3440
TCP 49669 0.0.0.0:0 LISTENING 4892
TCP 49678 0.0.0.0:0 LISTENING 1096
TCP 57621 0.0.0.0:0 LISTENING 14400
TCP 1053 127.0.0.1:1054 ESTABLISHED 22328
TCP 1054 127.0.0.1:1053 ESTABLISHED 22328
TCP 5354 0.0.0.0:0 LISTENING 5556
TCP 5354 127.0.0.1:49671 ESTABLISHED 5556
TCP 5354 127.0.0.1:49672 ESTABLISHED 5556
TCP 6463 0.0.0.0:0 LISTENING 16780
TCP 7659 127.0.0.1:7660 ESTABLISHED 18428
TCP 7660 127.0.0.1:7659 ESTABLISHED 18428
TCP 7661 127.0.0.1:7662 ESTABLISHED 4792
TCP 7662 127.0.0.1:7661 ESTABLISHED 4792
TCP 7665 127.0.0.1:7666 ESTABLISHED 1340
TCP 7666 127.0.0.1:7665 ESTABLISHED 1340
TCP 7667 127.0.0.1:7668 ESTABLISHED 11212
TCP 7668 127.0.0.1:7667 ESTABLISHED 11212
Originally from: Parsing Native Apps/Invoke-Netstat
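For comparison, on a Unix-like system the same extraction is a one-liner over netstat's text output. This is only a sketch demonstrated on sample lines, since field positions differ between netstat implementations (Windows netstat, for instance, has no State column on UDP rows):

```shell
# Sample LISTEN lines in Linux netstat layout; column 4 is the local address
sample='tcp 0 0 0.0.0.0:135 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:445 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:6463 0.0.0.0:* LISTEN'

# Keep LISTEN rows and print only the port (the text after the last ':')
printf '%s\n' "$sample" | awk '$6 == "LISTEN" {n = split($4, a, ":"); print a[n]}'
```

Splitting on the last ':' (rather than stripping a 0.0.0.0: prefix) also handles addresses like 127.0.0.1:6463 correctly.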

Minikube / Docker proxy is using port 80

I am using minikube in no-driver mode (sudo minikube start --vm-driver none) and I can't free port 80.
With
sudo netstat -nlplute
I get:
tcp 0 0 192.168.0.14:2380 0.0.0.0:* LISTEN 0 58500 7200/etcd
tcp6 0 0 :::80 :::* LISTEN 0 62030 8681/docker-proxy
tcp6 0 0 :::8080 :::* LISTEN 0 57318 8656/docker-proxy
I tried to stop minikube, but that doesn't seem to work when using driver=none.
How should I free port 80?
EDIT: Full netstat output
➜ ~ sudo netstat -nlpute
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 102 35399 1019/systemd-resolv
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 0 6629864 11358/cupsd
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 128 45843 1317/postgres
tcp 0 0 127.0.0.1:6942 0.0.0.0:* LISTEN 1000 14547489 16086/java
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 0 58474 1053/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 0 71361 10409/kube-proxy
tcp 0 0 127.0.0.1:45801 0.0.0.0:* LISTEN 0 57445 1053/kubelet
tcp 0 0 192.168.0.14:2379 0.0.0.0:* LISTEN 0 56922 7920/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 0 56921 7920/etcd
tcp 0 0 192.168.0.14:2380 0.0.0.0:* LISTEN 0 56917 7920/etcd
tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 0 56084 7920/etcd
tcp 0 0 127.0.0.1:63342 0.0.0.0:* LISTEN 1000 14549242 16086/java
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 0 15699 1/init
tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 0 60857 7889/kube-controlle
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 0 56932 7879/kube-scheduler
tcp 0 0 127.0.0.1:5939 0.0.0.0:* LISTEN 0 48507 2205/teamviewerd
tcp6 0 0 ::1:631 :::* LISTEN 0 6629863 11358/cupsd
tcp6 0 0 :::8443 :::* LISTEN 0 55158 7853/kube-apiserver
tcp6 0 0 :::44444 :::* LISTEN 1000 16217187 7252/___go_build_gi
tcp6 0 0 :::32028 :::* LISTEN 0 74556 10409/kube-proxy
tcp6 0 0 :::10250 :::* LISTEN 0 58479 1053/kubelet
tcp6 0 0 :::30795 :::* LISTEN 0 74558 10409/kube-proxy
tcp6 0 0 :::10251 :::* LISTEN 0 56926 7879/kube-scheduler
tcp6 0 0 :::10252 :::* LISTEN 0 60851 7889/kube-controlle
tcp6 0 0 :::30285 :::* LISTEN 0 74559 10409/kube-proxy
tcp6 0 0 :::31406 :::* LISTEN 0 74557 10409/kube-proxy
tcp6 0 0 :::111 :::* LISTEN 0 15702 1/init
tcp6 0 0 :::80 :::* LISTEN 0 16269016 16536/docker-proxy
tcp6 0 0 :::8080 :::* LISTEN 0 16263128 16524/docker-proxy
tcp6 0 0 :::10256 :::* LISTEN 0 75123 10409/kube-proxy
udp 0 0 0.0.0.0:45455 0.0.0.0:* 115 40296 1082/avahi-daemon:
udp 0 0 224.0.0.251:5353 0.0.0.0:* 1000 16274723 23811/chrome --type
udp 0 0 224.0.0.251:5353 0.0.0.0:* 1000 16270144 23728/chrome
udp 0 0 224.0.0.251:5353 0.0.0.0:* 1000 16270142 23728/chrome
udp 0 0 0.0.0.0:5353 0.0.0.0:* 115 40294 1082/avahi-daemon:
udp 0 0 127.0.0.53:53 0.0.0.0:* 102 35398 1019/systemd-resolv
udp 0 0 192.168.0.14:68 0.0.0.0:* 0 12307745 1072/NetworkManager
udp 0 0 0.0.0.0:111 0.0.0.0:* 0 18653 1/init
udp 0 0 0.0.0.0:631 0.0.0.0:* 0 6628156 11360/cups-browsed
udp6 0 0 :::5353 :::* 115 40295 1082/avahi-daemon:
udp6 0 0 :::111 :::* 0 15705 1/init
udp6 0 0 :::50342 :::* 115 40297 1082/avahi-daemon:
I've reproduced your environment (--vm-driver=none). At first I thought it might be connected with Minikube's built-in configuration; however, a clean Minikube doesn't use port 80 in its default configuration.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
$ minikube version
minikube version: v1.6.2
$ sudo netstat -nlplute
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 0 49556 9345/kube-controlle
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 0 50223 9550/kube-scheduler
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 101 15218 752/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 21550 1541/sshd
tcp 0 0 127.0.0.1:44197 0.0.0.0:* LISTEN 0 51016 10029/kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 0 51043 10029/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 0 52581 10524/kube-proxy
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 0 49728 9626/etcd
tcp 0 0 10.156.0.11:2379 0.0.0.0:* LISTEN 0 49727 9626/etcd
tcp 0 0 10.156.0.11:2380 0.0.0.0:* LISTEN 0 49723 9626/etcd
tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 0 49739 9626/etcd
tcp6 0 0 :::10256 :::* LISTEN 0 52577 10524/kube-proxy
tcp6 0 0 :::22 :::* LISTEN 0 21552 1541/sshd
tcp6 0 0 :::8443 :::* LISTEN 0 49120 9419/kube-apiserver
tcp6 0 0 :::10250 :::* LISTEN 0 51050 10029/kubelet
tcp6 0 0 :::10251 :::* LISTEN 0 50217 9550/kube-scheduler
tcp6 0 0 :::10252 :::* LISTEN 0 49550 9345/kube-controlle
udp 0 0 127.0.0.53:53 0.0.0.0:* 101 15217 752/systemd-resolve
udp 0 0 10.156.0.11:68 0.0.0.0:* 100 15574 713/systemd-network
udp 0 0 127.0.0.1:323 0.0.0.0:* 0 23984 2059/chronyd
udp6 0 0 ::1:323 :::* 0 23985 2059/chronyd
For a good description of what docker-proxy is used for, you can check this article:
When a container starts with its port forwarded to the Docker host on which it runs, in addition to the new process that runs inside the container, you may have noticed an additional process on the Docker host called docker-proxy
This docker-proxy might be something similar to a Docker zombie process, where the container was removed but the allocated port wasn't released. Unfortunately, it seems this is a recurring Docker issue occurring across versions and OSes since 2016. As I mentioned, I think there is currently no fix for this, but you can find workarounds:
cd /usr/libexec/docker/
ln -s docker-proxy-current docker-proxy
service docker restart
===
$ sudo service docker stop
$ sudo service docker start
===
$ sudo service docker stop
# remove all internal docker network: rm /var/lib/docker/network/files/
$ sudo service docker start
===
$ sudo systemctl stop docker
$ sudo systemctl start docker
There are a few GitHub threads mentioning this issue. For more information, please check this and this thread.
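Before restarting Docker, it is worth confirming which process actually holds the port. In netstat -p output the last field is "PID/program"; a sketch of extracting the PID (shown on a sample line from the question, since live output varies per machine):

```shell
# A "netstat -nlpt"-style line for port 80 (sample from the question)
line='tcp6 0 0 :::80 :::* LISTEN 0 62030 8681/docker-proxy'

# The last field is "PID/program"; split on '/' to get the PID alone
pid=$(printf '%s\n' "$line" | awk '{split($NF, a, "/"); print a[1]}')
echo "$pid"
```

With the PID in hand, ps -fp "$pid" shows the full docker-proxy command line, which names the container IP and port it forwards to.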
After checking that my port 8080 was also used by docker-proxy, I ran
$ docker ps
and noticed that both port 80 and port 8080 are used by the Traefik controller:
$ kubectl get services
traefik-ingress-service ClusterIP 10.96.199.177 <none> 80/TCP,8080/TCP 25d
When I checked the traefik service, I found:
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
So, I think this is why I get a docker-proxy. If I need it to use another port, I can change it here. My bad :(

Too many TCP ports in CLOSE_WAIT condition in a Kafka broker

Too many TCP connections are in CLOSE_WAIT status in a Kafka broker, causing DisconnectException in Kafka clients.
tcp6 27 0 172.31.10.143:9092 172.31.0.47:45138 ESTABLISHED -
tcp6 25 0 172.31.10.143:9092 172.31.46.69:41612 CLOSE_WAIT -
tcp6 25 0 172.31.10.143:9092 172.31.0.47:45010 CLOSE_WAIT -
tcp6 25 0 172.31.10.143:9092 172.31.46.69:43000 CLOSE_WAIT -
tcp6 194 0 172.31.10.143:8080 172.31.20.219:45952 CLOSE_WAIT -
tcp6 25 0 172.31.10.143:9092 172.31.20.219:48006 CLOSE_WAIT -
tcp6 1 0 172.31.10.143:9092 172.31.0.47:44582 CLOSE_WAIT -
tcp6 25 0 172.31.10.143:9092 172.31.46.69:42828 CLOSE_WAIT -
tcp6 25 0 172.31.10.143:9092 172.31.46.69:41934 CLOSE_WAIT -
tcp6 25 0 172.31.10.143:9092 172.31.46.69:41758 CLOSE_WAIT -
tcp6 25 0 172.31.10.143:9092 172.31.46.69:41584 CLOSE_WAIT -
tcp6 25 0 172.31.10.143:9092 172.31.46.69:41852 CLOSE_WAIT -
tcp6 1 0 172.31.10.143:9092 172.31.0.47:44342 CLOSE_WAIT -
Error in Debezium:
connect-prod | 2019-02-14 06:28:54,885 INFO || [Consumer clientId=consumer-3, groupId=4] Error sending fetch request (sessionId=1727876188, epoch=INITIAL) to node 2: org.apache.kafka.common.errors.DisconnectException. [org.apache.kafka.clients.FetchSessionHandler] connect-prod | 2019-02-14 06:28:55,448 INFO || [Consumer clientId=consumer-1, groupId=4] Error sending fetch request (sessionId=1379896198, epoch=INITIAL) to node 2: org.apache.kafka.common.errors.DisconnectException. [org.apache.kafka.clients.FetchSessionHandler]
What can be the reason behind this?
It appears that this is a known issue in Kafka 2.1.0.
https://issues.apache.org/jira/browse/KAFKA-7697
I think the connections stuck in CLOSE_WAIT are a side effect of the real problem.
This issue has been fixed in Kafka version 2.1.1, which should be released in a few days. Looking forward to it.
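Independently of the Kafka bug, counting sockets per TCP state gives a quick picture of how bad the buildup is. A sketch demonstrated on sample lines from the question (on the broker you would pipe in netstat -ant instead):

```shell
# Sample connection lines from the broker (state is column 6)
sample='tcp6 27 0 172.31.10.143:9092 172.31.0.47:45138 ESTABLISHED
tcp6 25 0 172.31.10.143:9092 172.31.46.69:41612 CLOSE_WAIT
tcp6 25 0 172.31.10.143:9092 172.31.0.47:45010 CLOSE_WAIT'

# Tally the TCP state column, most frequent state first
printf '%s\n' "$sample" | awk '{print $6}' | sort | uniq -c | sort -rn
```

Running this periodically lets you watch whether CLOSE_WAIT keeps climbing after the upgrade to 2.1.1.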

Zabbix agent windows TIME_WAIT sockets

I have a big problem with the Zabbix Windows agent.
The agent has a lot of sockets in TIME_WAIT state:
...........
TCP 10.0.10.4:10050 10.0.10.8:38681 TIME_WAIT 0
TCP 10.0.10.4:10050 10.0.10.8:38683 TIME_WAIT 0
TCP 10.0.10.4:10050 10.0.10.8:38710 TIME_WAIT 0
TCP 10.0.10.4:10050 10.0.10.8:38736 TIME_WAIT 0
TCP 10.0.10.4:10050 10.0.10.8:38755 TIME_WAIT 0
TCP 10.0.10.4:10050 10.0.10.8:38764 TIME_WAIT 0
TCP 10.0.10.4:10050 10.0.10.8:38781 TIME_WAIT 0
TCP 10.0.10.4:10050 10.0.10.8:38811 TIME_WAIT 0
TCP 10.0.10.4:10050 10.0.10.8:38835 TIME_WAIT 0
TCP 10.0.10.4:10050 10.0.10.8:38849 TIME_WAIT 0
TCP 10.0.10.4:10050 10.0.10.8:38878 TIME_WAIT 0
TCP 10.0.10.4:10050 10.0.10.8:38888 TIME_WAIT 0
TCP 10.0.10.4:10050 10.0.10.8:38913 TIME_WAIT 0
TCP 10.0.10.4:10050 10.0.10.8:38933 TIME_WAIT 0
TCP 10.0.10.4:10050 10.0.10.8:38952 TIME_WAIT 0
C:\>netstat -nao | find /c "TIME_WAIT"
200 <- this is too many.
Why does the agent open all this sockets?
Is there a way to close this socket?
I have a lot of monitored items; could this be the problem?
The interval is about 10 minutes.
Thank you, any help is appreciated.
IMHO it's not a big problem; it's simply how TCP works. Do you have any actual performance issue because your device has 200 TIME-WAIT connections?
If you have a lot of monitored items and your agent is in passive mode, then the Zabbix server has to create a lot of TCP connections to your agent. TIME-WAIT is almost the last state of a TCP connection. TIME_WAIT indicates that this side has closed the connection; the connection is kept around so that any delayed packets can be matched to it and handled appropriately. A common duration for the TIME-WAIT state is around 30 seconds.
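To confirm the TIME-WAIT sockets really all belong to the agent port (10050) rather than something else, you can group them by local address. A sketch on sample lines in a Unix shell (the 3389 row is a made-up contrast example; on Windows the same data comes from netstat -nao):

```shell
# Sample Windows-netstat-style lines: Proto Local Foreign State
sample='TCP 10.0.10.4:10050 10.0.10.8:38681 TIME_WAIT
TCP 10.0.10.4:10050 10.0.10.8:38683 TIME_WAIT
TCP 10.0.10.4:3389 10.0.10.9:50001 TIME_WAIT'

# Count TIME_WAIT sockets per local address:port (column 2), biggest first
printf '%s\n' "$sample" \
  | awk '$4 == "TIME_WAIT" {c[$2]++} END {for (k in c) print c[k], k}' \
  | sort -rn
```

If nearly all of them sit on :10050, the passive checks from the Zabbix server are the cause, as described above.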
You can play with the Windows registry to decrease the duration of the TIME-WAIT state, but I don't recommend it if you don't know what you are doing.
http://help.globalscape.com/help/secureserver3/Windows_Registry_keys_for_TCP_IP_Performance_Tuning.htm
About TCP states:
http://commons.wikimedia.org/wiki/File:Tcp_state_diagram_fixed_new.svg
About TIME-WAIT state (on linux)
http://www.fromdual.com/huge-amount-of-time-wait-connections