Very long response time for HTTP REST requests on an Extreme OS switch

I'm trying to use the REST API on Extreme OS to retrieve information on interfaces, PoE, etc. It works perfectly fine with Cisco switches, but now I'm trying the same thing on an Extreme Networks switch, and the delay between my HTTP GET request and the answer is very long (~7 minutes). On Cisco it takes ~15 seconds.
I captured the traffic with tcpdump, but I don't know how to interpret the resets and keep-alives (the response starts at frame 1302).
[screenshot: tcpdump capture]
I use the following command to make my HTTP request:
curl -k -v http://<ip>/rest/restconf/data/openconfig_interfaces:interfaces/ -u "<user>:<password>"
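To narrow down where the ~7 minutes go, it may help to time the individual phases of the transfer. A sketch using curl's -w write-out variables (the IP and credentials are placeholders, as above):
# -w prints per-phase timings, so you can see whether the delay is in
# the TCP connect, the wait for the first byte, or the body transfer
curl -k -s -o /dev/null -u "<user>:<password>" \
  -w 'connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
  http://<ip>/rest/restconf/data/openconfig_interfaces:interfaces/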

Related

Looking for debugging advice on SSL errors from EKS using varnish

I know this is a somewhat specific question, but I'm having a problem I can't seem to track down. I have a single pod deployed to EKS - the pod contains a Python app and a Varnish reverse caching proxy. I'm serving chunked JSON (that is, streaming lines of JSON, a la http://jsonlines.org/), and it can be multiple GB of data.
The first time I make a request and it hits the Python server, everything works correctly. It takes (much) longer than the cached version, but the entire set of JSON lines is downloaded. However, now that it's cached in Varnish, if I use curl, I get:
curl: (56) GnuTLS recv error (-110): The TLS connection was non-properly terminated.
or
curl: (56) GnuTLS recv error (-9): A TLS packet with unexpected length was received.
TLS is terminated at the ELB, and when I use curl from the proxy container itself (using curl http://localhost?....), there is no problem.
The hard part of this is that the problem is somewhat intermittent.
If there is any advice in terms of clever varnishlog usage, or anything of the same ilk on AWS, I'd be much obliged.
Thanks!
Because TLS is terminated on your ELB load balancers, the connection between the ELB and Varnish should be plain HTTP.
The error is probably not coming from Varnish, because Varnish currently doesn't handle TLS natively. I'm not sure whether varnishlog can give you better insight into what is actually happening.
Checklist
The only checklist I can give you is the following:
Make sure the certificate you're using is valid
Make sure you're connecting to your target group over HTTP, not HTTPS
If you enable the PROXY protocol on your ELB, make sure Varnish has an additional -a listener that accepts PROXY protocol requests on top of the regular HTTP listener (sketched below).
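For that last point, a minimal sketch of what the listeners could look like (the ports, VCL path, and cache size are placeholders, not your actual configuration):
# One plain-HTTP listener plus one listener that expects the PROXY
# protocol preamble from the ELB
varnishd -a :8080 -a :8443,PROXY -f /etc/varnish/default.vcl -s malloc,256m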
Debugging
Perform top-down debugging:
Increase the verbosity of your cURL calls and try to get more information about the error (see the sketch after this list)
Try accessing the logs of your ELB and get more details there
Get more information from your EKS logs
And finally, run varnishlog -g request -q "ReqUrl eq '/your-url'" to get the full Varnish log for a specific URL
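For the first step, something along these lines (the hostname and path are placeholders); -v prints the request and response headers, and --trace-ascii logs every byte on the wire:
curl -v -o /dev/null https://your-elb-hostname/your-url
curl --trace-ascii curl-trace.log -o /dev/null https://your-elb-hostname/your-url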

How to send HTTP Commands through Port 80

Brief description of what I am trying to accomplish: I am working with Crestron's Simpl+ software. My job is to create a module for a sound masking system called QT Pro. QT Pro has an API where you can control it via HTTP. I need a way to establish a connection with the QT Pro via HTTP (I have everything I need: IP, username, password).
What's the problem? I have just started working with this language. Unfortunately there isn't as much documentation as I would like, otherwise I wouldn't be here. I know I need to create a TCP socket connection on port 80. I just don't know what I'm supposed to send through it.
Here is an example:
http://username:password@address/cmd.htm?cmd=setOneZoneData&ZN=Value&mD=Value
&mN=Value&auxA=Value&auxB=Value&autoR=Value
If I were to put this into the URL box and fill it in correctly, it would change the values that I specify. Am I supposed to send the entire thing? Or just the part after cmd.htm? Or is there some other way I'm supposed to send data? I'd like to stay away from the TCP/IP module so I can keep this all within the same module.
Thanks.
You send
GET /cmd.htm?cmd=setOneZoneData&ZN=Value&mD=Value&mN=Value&auxA=Value&auxB=Value&autoR=Value HTTP/1.1
Host: address
Connection: close
(End with a blank line, i.e. two consecutive CRLF sequences.)
If you need to use HTTP basic authentication, then also include a header like
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
where the gibberish is the base64-encoded version of username:password.
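One way to generate that string yourself (printf rather than echo, to avoid encoding a trailing newline):
# base64-encode the credentials for the Authorization header
printf '%s' 'username:password' | base64
# -> dXNlcm5hbWU6cGFzc3dvcmQ=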
But surely there is some mechanism for opening HTTP connections already there for you? Just blindly throwing out headers like this and hoping the response is what you expect is not robust, to say the least.
To see what is going on with your requests and responses, a great tool is netcat (or telnet, for that matter).
Do nc address 80 to connect to server address on port 80, then paste your HTTP request:
GET /cmd.htm HTTP/1.1
Host: address
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
Connection: close
and see what comes back. SOMETHING should come back. (Remember to terminate with two newlines.)
To see what requests your browser is sending when you do something that works, you can listen like this: nc -l -p 8080.
Then direct your browser to localhost:8080 with the rest of the URL as before, and you'll see the request that was sent. (Then you can type back to see how the browser handles the response.)
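A non-interactive variant of the same exchange, assuming a traditional netcat; printf supplies the CRLF line endings and the blank line that terminates the headers:
printf 'GET /cmd.htm HTTP/1.1\r\nHost: address\r\nAuthorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=\r\nConnection: close\r\n\r\n' \
  | nc address 80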

Perl: most effective way to scan for a particular web server HTTP banner?

So basically I'm trying to scan for web servers that run, for example, Apache 2.2.4. What's the best way of doing this?
Scan an IP range from blah blah to blah blah for hosts with port 80 open and a web server enabled, then make a script that loads the IPs and checks whether they have the server banner I want?
Or is there an alternative, faster way?
Basically I'm trying to make a script like ShodanHQ.
I'm trying to find a large number of web servers running a certain version. Can anybody give me a direction? Thanks, I hope I was clear.
For doing Internet-wide surveys like Shodan or Scans.io, you need very-high-bandwidth access, legal approval (or at least a blind eye turned) from your ISP, and likely an asynchronous scanner like Zmap or masscan. Nmap is a decent alternative with the --min-rate argument. Anything using the default TCP stack on your OS (e.g. curl, netcat, or Perl solutions) will not be able to keep up with the high packet volume and number of targets required.
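For illustration only, a masscan run might look roughly like this (the range, rate, and output file are placeholders, and it needs root):
# --banners grabs service banners such as the Server header;
# --rate is in packets per second
sudo masscan -p80 10.0.0.0/8 --rate 100000 --banners -oX masscan-results.xml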
If, however, you want to scan a smaller network (say a /16 with 65K addresses), then Nmap is up to the job, requires less setup than the asynchronous scanners (since they require firewall settings to prevent the native TCP stack from responding to returned probes), and is widely available. You could get reasonable performance with this command:
sudo nmap -v -T5 -PS80 -p80 -sS --script http-server-header -oA scan-results-%D%T 10.10.0.0/16
This breaks down to:
-v - verbose output
-T5 - Fastest timing options. This may be too much for some networks; try -T4 if you suspect lost results.
-PS80 - Only consider hosts that respond on port 80 (open or closed).
-p80 - Scan port 80 on alive hosts
-sS - Use Nmap's half-open SYN scan, which has the best timing performance
--script http-server-header - This script will grab the Server header from a basic GET request. Alternatively you could use http-headers to get all headers, or use -sV --version-light to do basic version detection from probe responses.
-oA scan-results-%D%T - Output 3 formats into separate timestamped files. You can process results with one of the many tools that imports Nmap XML output.
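As a trivial example of that post-processing, the banner captured by http-server-header can be grepped straight out of the human-readable output file (the file name follows the -oA pattern above; adjust -B to show more surrounding context):
# print hosts whose Server header matched the version you care about
grep -B4 'Apache/2.2.4' scan-results-*.nmap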
You could use curl and sed:
curl -sI 'http://192.0.2.1' | sed -n 's/^Server:[[:blank:]]*//p'
Call it from perl with:
perl -e '$server = `curl -sI "http://192.0.2.1" | sed -n "s/^Server:[[:blank:]]*//p"`; print $server;'
The -I option makes curl issue a HEAD request and print only the HTTP response headers.
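To sweep a range rather than a single address, the same pipeline can be fanned out with xargs. A rough sketch (the subnet, concurrency, and timeout are placeholders):
# probe 192.0.2.1-254 with 32 parallel workers; tr strips the CR that
# terminates each HTTP header line
seq 1 254 | xargs -I{} -P 32 sh -c \
  'h=192.0.2.{}; s=$(curl -sI -m 3 "http://$h" | sed -n "s/^Server:[[:blank:]]*//p" | tr -d "\r"); [ -n "$s" ] && echo "$h: $s"'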

REST server sampler for JMeter

I need to test a black box that sends a POST request to a REST service.
I intend to use JMeter for this.
So my sampler should be a REST server that listens for POSTs from the module under test.
What's the best way? Are there any ready-made solutions around? (It seems JMeter's REST sampler can only act as a REST client.)
Not sure JMeter is the solution you're looking for here. You can always start a JMeter proxy on the black box to catch everything it sends out, but there's not much it can do beyond that.
If watching traffic is all you're doing, I think it would be simpler to ssh into the destination server this black box is transmitting to and run
sudo tcpdump -n dst port 8080 -A
Change the port above to match whatever port the black box POSTs to. You can stream these results to a file and run whatever tests you need on them with a more appropriate testing tool or script.
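To keep the capture for later analysis (the file names are placeholders): -w stores raw packets you can open in Wireshark, while tee keeps the decoded text visible and saved at the same time:
sudo tcpdump -n dst port 8080 -w capture.pcap
sudo tcpdump -n dst port 8080 -A | tee capture.txt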

Perl - creating multiple HTTP servers listening on different ports

An external application will send HTTP POST requests to multiple HTTP/HTTPS servers (e.g. 10 HTTP servers). These HTTP servers may all receive almost the same HTTP POST request. Each HTTP server will analyze the data and send a 200 OK response if the data validation passes.
All of these HTTP servers listen on a single host, on different ports.
Please suggest a way to achieve this.
FYI - this request/response exchange between the application and the HTTP server(s) will happen only once, and then the HTTP server will be shut down.
I am thinking of implementing it by forking HTTP::Daemon 10 times, but I'm looking for a lighter solution.
I would also like to capture the data through a single interface rather than checking the data on all 10 individual HTTP servers.
for PORT in `seq 11111 11120` ; do plackup -Ilib --listen :$PORT app.psgi & done
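For completeness, a minimal app.psgi that the loop above could serve. The validation is a stub and purely an assumption; replace it with whatever checks your data actually needs:
cat > app.psgi <<'EOF'
# Minimal PSGI app: read the POST body, validate it, answer 200 OK
my $app = sub {
    my $env  = shift;
    my $body = '';
    if (my $len = $env->{CONTENT_LENGTH}) {
        $env->{'psgi.input'}->read($body, $len);
    }
    # ... replace with your real validation of $body ...
    return [ 200, [ 'Content-Type' => 'text/plain' ], [ "OK\n" ] ];
};
EOF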