tshark command to follow tcp stream without tcp length

When we follow a TCP stream using the command:
tshark -q -r test.pcap -z follow,tcp,ascii,0
we get the following output, with the TCP segment length appearing in the middle of the streamed data.
How can we eliminate the tcp.len? Is there any tshark option to print only the TCP stream content, not the tcp.len?
Follow: tcp,ascii
Filter: tcp.stream eq 0
Node 0: 10.10.30.50:57887
Node 1: 10.10.30.95:4902
**1448** ---> this is tcp length
POST /pushnotification/v1.0/message HTTP/1.1
Accept: */*
Host: 10.10.30.95:4902
Connection: Close
Content-Type: application/json
Authorization: Basic QWxhZGRpbjpraHVsamFzaW1zaW0=
Content-Length: 1277
{"push-message":{"serviceName":"Sync App","TTL":"600","recipients":[{"uri":"sip:919880018501#lab.t-mobile.com"}],"channel":"","pns-type":"RCSPage","pns-subtype":"Chat","nmsEventList":{"nmsEvent":[{"changedObject":{"parentFolder":"https:///oemclient/nms/v1/ums/tel%3a%2b1234567890/folders/97d38f52-bed0-4046-8784-bb110e3b0ea3","flags":{"flag":["\\RECENT"]},"resourceURL":"https://resourceurl","correlationId":"75114622-099d-4503-8166-e84bd1b620dc","message":{"id":"1","store":"RCSMessageStore/Chat","objectURL":"https://data1","direction":"In","message-time":"2016-05-19T08:46:49-08:00","status":"RECENT","sender":"sip:1234","recipients":[{"uri":"sip:2345"}],"imdn-message-id":"75114622-099d-4503-8166-e84bd1b620dc","content":[{"rcs-data":{"sip-call-id":"005056884776-4d72-eb161700-1e2-571fa736-a0e46","feature-tag":"urn:urn-7:3gpp-service.ims.icsi.oma.cpm.msg.group","p-asserted-service":"urn:urn-7:3gpp-service.ims.icsi.oma.cpm.msg.group","contribution-id":"e0a1029e-a48b-4ca6-b185-299dada439be","conversation-id":"2dbc584e-
**38** ---> this is tcp length
fc46-4a37-9a56-c2b93246d788"}}]}}}]}}}
**17**
HTTP/1.0 200 OK
**35**
Server: BaseHTTP/0.3 Python/2.6.6
**37**
Date: Mon, 12 Feb 2018 19:14:17 GMT
**2**
**9**
Thread-1
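As far as I know, follow,tcp,ascii has no option to suppress those per-chunk byte counts; they are part of tshark's follow output format. A workaround sketch (assuming xxd is available, and that in raw mode every data chunk is printed as a single hex line, with one direction tab-indented) is to dump the stream as raw hex and decode it back to bytes yourself:
tshark -q -r test.pcap -z follow,tcp,raw,0 \
    | grep -E '^[[:space:]]*[0-9a-fA-F]+$' \
    | tr -d '\t' \
    | xxd -r -p
This keeps both directions in capture order and drops the length annotations, since raw mode prints only the hex payload, never the chunk sizes.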

Related

HAProxy 1.8 delay http/2 (h2) requests using tcp-request inspect-delay

Using HAProxy 1.8, I want to slow down certain traffic. This all works when testing over HTTP/1.1. However, as soon as HTTP/2 (h2) is enabled in HAProxy, the 10s delay no longer takes effect. How can I delay h2 requests?
frontend web
bind [...] alpn h2,http/1.1
tcp-request inspect-delay 10s
tcp-request content accept if WAIT_END
[...]
I'm testing using curl:
time curl -I 'https://[url]/' -v
* Trying 10.233.1.97...
* TCP_NODELAY set
* Connected to [url] (10.233.1.97) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
[...]
* ALPN, server accepted to use h2
[...]
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fd3f5808200)
> GET / HTTP/2
> Host: [...]
> User-Agent: curl/7.64.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
< HTTP/2 411
HTTP/2 411
< content-type: text/html; charset=us-ascii
content-type: text/html; charset=us-ascii
< server: Microsoft-HTTPAPI/2.0
server: Microsoft-HTTPAPI/2.0
< date: Thu, 02 Apr 2020 19:18:22 GMT
date: Thu, 02 Apr 2020 19:18:22 GMT
< content-length: 344
content-length: 344
<
* Excess found in a non pipelined read: excess = 344 url = / (zero-length body)
* Connection #0 to host app.cloudbilling.nl left intact
* Closing connection 0
curl -I 'https://[url]/' -v 0.02s user 0.01s system 28% cpu 0.101 total
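For comparison (not part of the original trace), curl can be pinned to HTTP/1.1 against the same ALPN-enabled frontend; if the 10s delay shows up there but not over h2, that isolates the problem to the h2 path:
time curl --http1.1 -I 'https://[url]/' -v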

Istio envoy upstream reset: reset reason connection failure

I have a GKE cluster (gke v1.13.6) and am using Istio (v1.1.7) with several services deployed and working successfully, except one of them, which always responds with HTTP 503 when called through the gateway: upstream connect error or disconnect/reset before headers. reset reason: connection failure.
I've tried calling the pod directly from another pod with curl enabled, and it ends up in a 503 as well:
$ kubectl exec sleep-754684654f-4mccn -c sleep -- curl -v d-vine-machine-dev:8080/d-vine-machine/swagger-ui.html
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Trying 10.3.254.3...
* TCP_NODELAY set
* Connected to d-vine-machine-dev (10.3.254.3) port 8080 (#0)
> GET /d-vine-machine/swagger-ui.html HTTP/1.1
> Host: d-vine-machine-dev:8080
> User-Agent: curl/7.60.0
> Accept: */*
>
upstream connect error or disconnect/reset before headers. reset reason: connection failure
< HTTP/1.1 503 Service Unavailable
< content-length: 91
< content-type: text/plain
< date: Thu, 04 Jul 2019 08:13:52 GMT
< server: envoy
< x-envoy-upstream-service-time: 60
<
{ [91 bytes data]
100 91 100 91 0 0 1338 0 --:--:-- --:--:-- --:--:-- 1378
* Connection #0 to host d-vine-machine-dev left intact
Setting the log level to TRACE at the istio-proxy level:
$ kubectl exec -it -c istio-proxy d-vine-machine-dev-b8df755d6-bpjwl -- curl -X POST http://localhost:15000/logging?level=trace
I looked into the logs of the injected sidecar istio-proxy and found this :
[2019-07-04 07:30:41.353][24][debug][router] [external/envoy/source/common/router/router.cc:381] [C119][S9661729384515860777] router decoding headers:
':authority', 'api-dev.d-vine.tech'
':path', '/d-vine-machine/swagger-ui.html'
':method', 'GET'
':scheme', 'http'
'cache-control', 'max-age=0'
'upgrade-insecure-requests', '1'
'user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'
'accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3'
'accept-encoding', 'gzip, deflate'
'accept-language', 'fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7'
'x-forwarded-for', '10.0.0.1'
'x-forwarded-proto', 'http'
'x-request-id', 'e38a257a-1356-4545-984a-109500cb71c4'
'content-length', '0'
'x-envoy-internal', 'true'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/default/sa/default;Hash=8b6afba64efe1035daa23b004cc255e0772a8bd23c8d6ed49ebc8dabde05d8cf;Subject="O=";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account;DNS=istio-ingressgateway.istio-system'
'x-b3-traceid', 'f749afe8b0a76435192332bfe2f769df'
'x-b3-spanid', 'bfc4618c5cda978c'
'x-b3-parentspanid', '192332bfe2f769df'
'x-b3-sampled', '0'
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:88] creating a new connection
[2019-07-04 07:30:41.353][24][debug][client] [external/envoy/source/common/http/codec_client.cc:26] [C121] connecting
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:644] [C121] connecting to 127.0.0.1:8080
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:653] [C121] connection in progress
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/conn_pool_base.cc:20] queueing request due to no available connections
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:811] [C119][S9661729384515860777] decode headers called: filter=0x4f118b0 status=1
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:384] [C119] parsed 1272 bytes
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:282] [C119] readDisable: enabled=true disable=true
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C121] socket event: 3
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C121] write ready
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:526] [C121] delayed connection error: 111
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:183] [C121] closing socket: 0
[2019-07-04 07:30:41.353][24][debug][client] [external/envoy/source/common/http/codec_client.cc:82] [C121] disconnect. resetting 0 pending requests
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:129] [C121] client disconnected, failure reason:
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:164] [C121] purge pending, failure reason:
[2019-07-04 07:30:41.353][24][debug][router] [external/envoy/source/common/router/router.cc:644] [C119][S9661729384515860777] upstream reset: reset reason connection failure
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0e5f0 status=0
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0edc0 status=0
[2019-07-04 07:30:41.353][24][debug][filter] [src/envoy/http/mixer/filter.cc:133] Called Mixer::Filter : encodeHeaders 2
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0f0e0 status=0
[2019-07-04 07:30:41.353][24][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:1305] [C119][S9661729384515860777] encoding headers via codec (end_stream=false):
':status', '503'
'content-length', '91'
'content-type', 'text/plain'
'date', 'Thu, 04 Jul 2019 07:30:41 GMT'
'server', 'istio-envoy'
Has anyone encountered such an issue? If you need more info about the configuration, I can provide it.
Thanks for your answer, Manvar. There was no problem with the curl-enabled pod, but thanks for the insight. It was a misconfiguration of our Tomcat port, which was not matching the service/virtualService config.
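Based on that resolution, a quick sanity-check sketch (the app container name here is a guess, and netstat must exist in the image) is to compare the port the application actually listens on with what the Service declares:
# list the ports the app container is really listening on
kubectl exec d-vine-machine-dev-b8df755d6-bpjwl -c d-vine-machine -- netstat -tlnp
# compare with the port/targetPort declared by the Service
kubectl get svc d-vine-machine-dev -o jsonpath='{.spec.ports}'
If the two disagree, envoy's connection to 127.0.0.1:8080 is refused (error 111 in the log above is ECONNREFUSED), which produces exactly this 503.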
When a pod with an Istio sidecar is started, the following things happen:
an init container changes the iptables rules so that all outgoing TCP traffic is routed to the sidecar istio-proxy on port 15001;
the containers of the pod are started in parallel (curl and istio-proxy).
If your curl container executes before istio-proxy is listening on port 15001, you get the error.
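A minimal guard sketch for that race, reusing the redirect port mentioned above (whether nc is available in the image is an assumption):
# block until envoy's outbound listener accepts connections, then run the real request
until nc -z 127.0.0.1 15001; do sleep 1; done
curl -v d-vine-machine-dev:8080/d-vine-machine/swagger-ui.html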

Server-Sent Events with Play: response only received when process killed

I'm trying to get the sample webapp play-streaming-scala to run, and in some circumstances I get weird behavior.
I've got the app running directly on port 80 of some host and I'm checking the output with curl -iv --raw http://somehost/scala/eventSource/liveClock.
What I'm expecting is something like this:
* Hostname was NOT found in DNS cache
* Trying 195.176.3.71...
* Connected to somehost (0.0.0.0) port 80 (#0)
> GET /scala/eventSource/liveClock HTTP/1.1
> User-Agent: curl/7.39.0
> Host: somehost
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Transfer-Encoding: chunked
Transfer-Encoding: chunked
< Content-Type: text/event-stream; charset=utf-8
Content-Type: text/event-stream; charset=utf-8
< Date: Wed, 18 Jan 2017 13:24:55 GMT
Date: Wed, 18 Jan 2017 13:24:55 GMT
<
10
data: 14 24 56
10
data: 14 24 56
10
data: 14 24 56
etc., where I clearly see the chunks appear one after the other as time goes by.
Now, on some machines this works well. On some others on campus, it fails. curl only shows this and then stops:
* Trying 195.176.3.71...
* Connected to somehost (0.0.0.0) port 80 (#0)
> GET /scala/eventSource/liveClock HTTP/1.1
> Host: somehost
> User-Agent: curl/7.43.0
> Accept: */*
>
Now the interesting thing is: if I kill the webapp on the host, curl suddenly “catches up” and spits all the chunks together, closing the connection like this:
10
data: 14 35 20
* transfer closed with outstanding read data remaining
* Closing connection 0
curl: (18) transfer closed with outstanding read data remaining
What can be causing the behavior? What on earth is going on and intercepting these events? Is there any way I can “force flush” something from the Play response?
It turns out the local “hidden” proxy set up automatically by OS X's parental controls system does not forward chunked responses properly, making a system based on Server-Sent Events inoperable. A shame.
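When chasing this kind of buffering, a useful first diagnostic (generic, not Play-specific) is to disable curl's own output buffering with -N; if the chunks still arrive in one burst, the buffering is happening somewhere on the network path rather than in curl:
curl -N -iv --raw http://somehost/scala/eventSource/liveClock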

curl: (6) could not resolve host; 401 Unauthorized on OpenStack Swift (SAIO)

I'm trying to set up a 'Swift All In One' system on an Ubuntu 12.04 VM, following this guide: http://docs.openstack.org/developer/swift/development_saio.html.
I use VMware Workstation 12 Pro on a Win7 64-bit system with 'Host-only' network mode. The VM IP address is 192.168.137.200.
When I run the command on the VM:
curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' http://192.168.137.200/auth/v1.0
It works well.
But when I run the command on the host machine (the Win7 platform), it fails and returns:
* Could not resolve host: test:tester'; Host not found
* Closing connection #0
curl: (6) Could not resolve host: test:tester'; Host not found
* Could not resolve host: testing'; Host not found
* Closing connection #0
curl: (6) Could not resolve host: testing'; Host not found
* About to connect() to 192.168.137.200 port 80 (#0)
* Trying 192.168.137.200... connected
* Connected to 192.168.137.200 (192.168.137.200) port 80 (#0)
> GET /auth/v1.0 HTTP/1.1
> User-Agent: curl/7.20.1 (amd64-pc-win32) libcurl/7.20.1 OpenSSL/0.9.8n zlib/1.2.3
> Host: 192.168.137.200
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Date: Fri, 25 Mar 2016 05:57:24 GMT
< Content-Length: 131
< Content-Type: text/html; charset=UTF-8
< Www-Authenticate: Swift realm="unknown"
< X-Trans-Id: tx081d67bec35b457bb4cb8-0056f4d343
< Vary: Accept-Encoding
<
<html><h1>Unauthorized</h1><p>This server could not verify that you are authorized to access the document you requested.</p></html>
* Connection #0 to host 192.168.137.200 left intact
* Closing connection #0
Then I made another Ubuntu 12.04 VM and ran the command above on the second VM; it works well.
Try using the X-Auth-User and X-Auth-Key headers instead: https://swiftstack.com/docs/cookbooks/swift_usage/auth.html
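Separately, the Could not resolve host: test:tester' errors are a quoting problem, not a Swift problem: cmd.exe on Windows does not treat single quotes as quoting characters, so curl receives the header fragments as extra URL arguments and tries to resolve them as hosts. Using double quotes on the Windows host avoids the (6) errors:
curl -v -H "X-Storage-User: test:tester" -H "X-Storage-Pass: testing" http://192.168.137.200/auth/v1.0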

POST request and nginx

I'm trying to send a lot of POST requests to localhost:80 (an nginx server).
The headers I'm sending are:
POST /LINK HTTP/1.1
User-Agent: User agent
Host: localhost
Accept: */*
Connection: Keep-Alive
Content-Type: application/octet-stream
Content-Length: 16
DATA 16 BYTES
The pseudocode is:
TCPSocket sock("localhost", 80);
for (;;) {
    sock.sendPost();
}
sock.close();
But the server responds only to the first request:
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 16 Apr 2012 14:54:26 GMT
Content-Type: application/json
Content-Length: 92
Connection: close
ANSWER 92 BYTES
So the server doesn't serve any of the subsequent POST requests from the loop.
Why doesn't Connection: Keep-Alive work, and why does the server return Connection: close?
Set keepalive_timeout and keepalive_requests to proper values.
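For example (illustrative values, to be tuned for your workload), in the http block of nginx.conf:
http {
    keepalive_timeout  65;      # keep idle client connections open for 65 seconds
    keepalive_requests 1000;    # allow up to 1000 requests per keep-alive connection
}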