mIRC socket inconsistently not working

I'm currently working on a new uptime command for my Twitch bot and I'm having problems with sockets. I'm trying to use the page http://api.twitch.tv/api/channels/duilio1337 to check whether a streamer is online. Sometimes it works fine, but other times it reads the page headers and not the page body. It seems to be random.
alias uptimecheck {
  echo -a CHECKING CHANNEL $1 STATUS
  %uptimeurl = /api/channels/ $+ $1
  %uptimecheckchan = $1
  sockopen wuptime api.twitch.tv 80
}
on *:sockopen:wuptime: {
  sockwrite -n $sockname GET %uptimeurl HTTP/1.0
  sockwrite -n $sockname User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
  sockwrite -n $sockname Host: api.twitch.tv
  sockwrite -n $sockname Accept-Language: en-us
  sockwrite -n $sockname Accept: */*
  sockwrite -n $sockname
}
on *:sockread:wuptime: {
  if ($sockerr) {
    echo -a UPTIME Connection Error.
    halt
  }
  else {
    sockread -f %uptimeliverail
    if ($sockbr == 0) return
    ;echo -a %uptimeliverail
  }
}
on *:sockclose:wuptime: {
  var %TEMPliverail $mid(%uptimeliverail, $calc($pos(%uptimeliverail, "twitch_liverail_id") + 21), 4)
  if (%TEMPliverail == null) {
    set %uptimelive. $+ %uptimecheckchan 0
  }
  else if (%TEMPliverail isnum) {
    set %uptimelive. $+ %uptimecheckchan 1
  }
}

The content of http://api.twitch.tv/api/channels/duilio1337 is just:
{"display_name":"duilio1337","game":null,"status":"Test","fight_ad_block":false,"_id":47890122,"name":"duilio1337","partner":false,"comscore_id":null,"comscore_c6":null,"twitch_liverail_id":null,"hide_ads_for_subscribers":false,"liverail_id":null,"ppv":false,"video_banner":null,"steam_id":null,"broadcaster_software":"unknown_rtmp","prerolls":true,"postrolls":true,"product_path":""}
It is not an HTML page, so what do you mean by "it reads the page header but not the page"? Maybe you can explain your problem a bit better.
Anyway, I think for your purposes it should be enough to send the GET request line and the Host header in your sockopen event:
on *:sockopen:wuptime: {
  sockwrite -nt $sockname GET %uptimeurl HTTP/1.1
  sockwrite -nt $sockname Host: api.twitch.tv
  sockwrite $sockname $crlf
}
In HTTP/1.0 the Host header is not needed, but it doesn't hurt; in HTTP/1.1 it is required.
The -t switch is recommended so that text starting with & is not treated as a binary variable.
The sockwrite $sockname $crlf is a more explicit way to finish the HTTP request, and is equivalent (IIRC) to your sockwrite -n $sockname (because -n appends a $crlf).
Also, your sockread event in general should look something like:
on *:sockread:wuptime: {
  if ($sockerr) {
    ;error msg code here
    return
  }
  var %data
  sockread %data
  ;your code here
  unset %data
}
Avoid halt; that command immediately stops any further processing. In most cases it is more appropriate to use return.
Avoid unnecessary if/else conditionals.
It is also important to note that the sockread event is triggered each time the server sends a line, so the reply arrives piece by piece.
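For example, here is a minimal sketch of that idea (the %uptime.inbody and %uptimejson names are placeholders, not part of the original script) that skips the HTTP headers and keeps what comes after the blank separator line:
on *:sockread:wuptime: {
  if ($sockerr) { return }
  var %line
  ; -f also returns a final line that is not terminated by a CRLF
  sockread -f %line
  if ($sockbr == 0) { return }
  if (%uptime.inbody) {
    ; past the headers: this is the JSON body
    set %uptimejson %line
  }
  elseif (%line == $null) {
    ; the first empty line marks the end of the HTTP headers
    set %uptime.inbody 1
  }
}
Since this API returns the whole JSON object on a single line, keeping the last line read after the headers is enough; you would then parse %uptimejson in the sockclose event and unset %uptimejson and %uptime.inbody there.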

The server is most likely returning
Vary: Accept-Encoding
in the response headers. Try replacing your Accept: */* line with
sockwrite -n $sockname Accept: gzip
and see what happens. It will most likely return compressed binary data (rather than timing out, or returning nothing at all). The problem is, mIRC can't decompress this and doesn't support gzip decoding. I ran into this exact same problem last night, and here is how I solved it, for anyone who is working with mIRC sockets and needs to fetch gzip'd content.
First, make sure you change the header as shown above, to Accept: gzip.
Second, grab a copy of 7za (7-Zip for the Windows console) from http://www.7-zip.org/a/7za920.zip. Extract 7za.exe and put it in the C:\Program Files (x86)\mIRC\ folder, or wherever your mirc.exe resides.
I've modified the OP's on *:sockread: block to fetch and extract the gzip'd content below:
on *:sockread:wuptime: {
  ; random temp file name for the compressed body
  var %bare $rand(423539,999999)
  var %gz 1
  :play
  var %dg
  ; read header lines as text
  sockread %dg
  :wile
  if (%gz == 2) {
    ; headers are done, read the gzip'd body as binary
    sockread -f &gzd
  }
  if ($sockerr) {
    return
  }
  if (%gz == 2) {
    while ($sockbr) {
      ; append the binary data to the temp file
      bwrite %bare -1 -1 &gzd
      sockread -f &gzd
    }
  }
  if ($sockbr == 0) {
    sockclose $sockname
    echo -at closed
    ; decompress the temp file with 7-Zip (the alias below reads the extracted %bare $+ ~ file)
    run -h 7za.exe e %bare
    ; give 7-Zip a moment, then parse the result
    .timer 1 2 wuptimeparse %bare
    return
  }
  if (%dg == $null) {
    ; blank line = end of the HTTP headers, switch to binary mode
    var %gz 2
    goto wile
  }
  goto play
}
This is probably pretty crude, but it starts bwriting to a random file once the headers have finished arriving, and once $sockbr is empty it runs 7-Zip on the compressed binary data. The alias below will be called a couple of seconds later (time for 7-Zip to do its magic; you can probably do this in a far more elegant way, I'm sure) to spit out the plaintext JSON you're looking for. Be warned: if the JSON in your example is huge (4000+ characters), you will need to read it as a binary variable with bread (see the sketch after the alias below), because the string will be too large to fit into a regular variable.
alias wuptimeparse {
  var %file $1
  var %bare $read(%file $+ ~, 1)
  echo %bare
  ;do something with %bare
  .timer 1 1 remove %file
  .timer 1 1 remove %file $+ ~
}
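For completeness, if the decompressed file is ever too big for $read and a regular %variable, a rough sketch of the binary-variable approach mentioned above could look like this (the alias name and the &json binvar are made up for illustration; it assumes the same tilde-suffixed file produced by 7za):
alias wuptimeparsebig {
  var %file $1 $+ ~
  ; read the whole decompressed file into a binary variable
  bread %file 0 $file(%file).size &json
  ; inspect part of it as text, e.g. the first 500 bytes
  echo -a $bvar(&json, 1, 500).text
  ; or search the binary data directly for a field name
  echo -a twitch_liverail_id found at position: $bfind(&json, 1, twitch_liverail_id)
}
That keeps the data out of a plain variable entirely, which sidesteps the size limit mentioned above.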

Related

Play framework 2.5 will not stream the request into the response

Hi, I have a requirement to use Play Framework 2.5 (Scala) to receive a large request body, transform it, and then stream it straight back out.
So far I've been unable to get the request stream to be sent back out correctly as a chunked response (even untransformed).
Code example:
def endpointA = EssentialAction { requestHeader =>
  Accumulator.source.map { source: Source[ByteString, _] =>
    Ok.chunked(source)
  }
}
POSTing data to the endpoint with curl does not output the posted data as expected and just results in the error below. I confirmed with wireshark that no response body is sent.
curl -v --data 'hello' -H "Connection: Keep-Alive" -H "Keep-Alive: 300" -H "Content-type: text/plain" http://localhost:9584/binding-tariff-admin/upload-csv
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 9584 (#0)
> POST /binding-tariff-admin/endpoint-a HTTP/1.1
> Host: localhost:9584
> User-Agent: curl/7.64.1
> Accept: */*
> Connection: Keep-Alive
> Keep-Alive: 300
> Content-type: text/plain
> Content-Length: 5
>
* upload completely sent off: 5 out of 5 bytes
< HTTP/1.1 200 OK
< Transfer-Encoding: chunked
< Cache-Control: no-cache,no-store,max-age=0
< Content-Security-Policy: default-src 'self' 'unsafe-inline' *.s3.amazonaws.com www.google-analytics.com data:
< X-Permitted-Cross-Domain-Policies: master-only
< Content-Type: application/octet-stream
< Date: Wed, 19 Feb 2020 10:15:36 GMT
<
* transfer closed with outstanding read data remaining
* Closing connection 0
curl: (18) transfer closed with outstanding read data remaining
Also, if I change the code to return a stream I create myself, it works fine:
val testStream: Source[ByteString, NotUsed] = Source(List("hello")).map(ByteString.apply)
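i.e. returning it from an action instead of the request source (a sketch of what "works fine" refers to; the original question only shows the testStream value itself):
// hypothetical variant that streams a fixed source instead of the request body
def endpointB = Action {
  Ok.chunked(testStream)
}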
Is there something fundamentally wrong with what I'm trying to do here? I have seen other Stack Overflow examples where people suggest this should be possible, e.g.
Play Framework Scala: How to Stream Request Body
I also tried using the verbatimBodyParser method described in the link but got the same results.
Thanks!
NFV

play scala 2.4.3 redirect not working

I'm trying the Reverse routing sample code.
Here are my routes:
GET /hello/:name controllers.Application.hello(name)
GET /bob controllers.Application.helloBob
and here is my code:
def helloBob = Action {
  Redirect(routes.Application.hello("Bob"))
}

def hello(name: String) = Action {
  Ok("Hello " + name + "!")
}
I can get the hello response:
$ curl -v localhost:9001/hello/play
Hello play!
But I can't get the "Bob" response after the redirect:
$ curl -v localhost:9001/bob
* Trying ::1...
* Connected to localhost (::1) port 9001 (#0)
> GET /bob HTTP/1.1
> Host: localhost:9001
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 303 See Other
< Location: /hello/Bob
< Date: Fri, 18 Sep 2015 03:19:04 GMT
< Content-Length: 0
<
* Connection #0 to host localhost left intact
The path component of a URI is case-sensitive. Check it.
Try
curl -v localhost:9001/hello/Bob
Update
Your code is correct (verified on my project) and you are showing the correct log: it prints a 303 code. I think you just need to tell curl to follow the redirect, like this:
curl -L localhost:9000/bob
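With the hello action above, following the redirect should then print the greeting, e.g. with your original port 9001:
$ curl -L localhost:9001/bob
Hello Bob!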

curl command line equivalent to this Perl code

I want to write a curl command for a POST request equivalent to this Perl code:
use strict;
use warnings;
use LWP::UserAgent;
my $base = 'http://www.uniprot.org/mapping/';
my $params = {
  from   => 'ACC',
  to     => 'P_REFSEQ_AC',
  format => 'tab',
  query  => 'P13368'
};
my $agent = LWP::UserAgent->new();
push @{$agent->requests_redirectable}, 'POST';
my $response = $agent->post($base, $params);
$response->is_success ?
  print $response->content :
  die 'Failed, got ' . $response->status_line .
      ' for ' . $response->request->uri . "\n";
I tried with this (and many other variants):
curl -X POST -H "Expect:" --form "from=ACC;to=P_REFSEQ_AC;format=tab; query=P13368" http://www.uniprot.org/mapping/ -o out.tab
The Perl code retrieves the expected result, but the curl command line does not: it retrieves the web page from "http://www.uniprot.org/mapping/" instead of the result of the POST request.
I looked for an error in the response header, but didn't find anything suspicious.
> POST http://www.uniprot.org/mapping/ HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: www.uniprot.org
> Accept: */*
> Proxy-Connection: Keep-Alive
> Content-Length: 178
> Content-Type: multipart/form-data; boundary=----------------------------164471d8347f
>
} [data not shown]
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Server: Apache-Coyote/1.1
< Vary: User-Agent
< Vary: Accept-Encoding
< X-Hosted-By: European Bioinformatics Institute
< Content-Type: text/html;charset=UTF-8
< Date: Wed, 05 Aug 2015 20:32:00 GMT
< X-UniProt-Release: 2015_08
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Headers: origin, x-requested-with, content-type
< X-Cache: MISS from localhost
< X-Cache-Lookup: MISS from localhost:3128
< Via: 1.0 localhost (squid/3.1.20)
< Connection: close
<
I spent almost three days looking for a solution on the web, but nothing has worked for me.
It looks like the server expects the data as application/x-www-form-urlencoded and not as multipart/form-data, which is what you send with the --form argument. The following should work:
curl -v -L --data \
"from=ACC&to=P_REFSEQ_AC&format=tab&query=P13368" \
http://www.uniprot.org/mapping/ -o out.tab
With --data you get the expected Content-Type header, but you must do the URL encoding yourself. With -L, curl follows redirects, which is needed here to get the resulting data.
The -X POST option is not needed, since POST is the default method when sending data, and -H "Expect:" is not needed either.
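If any of the values ever need URL encoding, you can also let curl do that part for you with --data-urlencode (same request as above, just with each parameter spelled out separately):
curl -L -o out.tab \
  --data-urlencode "from=ACC" \
  --data-urlencode "to=P_REFSEQ_AC" \
  --data-urlencode "format=tab" \
  --data-urlencode "query=P13368" \
  http://www.uniprot.org/mapping/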

REST interface using nginx HttpLuaModule

I'm just getting started with nginx and the HttpLuaModule. I've created a test configuration file to familiarize myself with how things work.
Now I'm trying to write some logic to accept GET, POST and DELETE requests for a specific type of resource.
I would like to create a "location" entry that matches the following URI / accepts the following curl call:
curl -i -X GET http://localhost/widgets/widget?name=testname&loc=20000 -H "Accept:application/json"
This is what my current nginx.conf looks like:
server {
  listen 80;
  server_name nsps2;
  root /var/www/;
  index index.html index.htm;
  #charset koi8-r;
  #access_log logs/host.access.log main;
  #curl http://localhost/hello?name=johndoe
  location /hello {
    default_type "text/plain";
    content_by_lua '
      local rquri = ngx.var.request_uri;
      ngx.say("the uri is ", rquri ,".")
      local name = ngx.var.arg_name or "Anonymous"
      ngx.say("Hello, ", name, "!")
    ';
  }
  location / {
    root /var/www/;
    index index.html index.htm;
  }
  #curl -i -X GET http://localhost/widgets/widget?name=testname&loc=20000 -H "Accept:application/json"
  location /widgets/widget {
    root /var/www/widgets;
    default_type "text/pain";
    content_by_lua '
      local arga,argb = ngx.arg[1], ngx.arg[2] ;
      ngx.say("the arga is ", arga ,".")
      ngx.say("the argb is ", argb, ".")
    ';
  }
Using the last "location" entry, I'm trying to:
1. prove that the system is getting the GET request
2. prove that I understand how to access the parameters passed in with the GET request.
I'm getting an error right now that looks like this:
2015/02/24 20:18:19 [error] 2354#0: *1 lua entry thread aborted: runtime error: content_by_lua:2: API disabled in the context of content_by_lua*
stack traceback:
coroutine 0:
[C]: in function '__index'
content_by_lua:2: in function <content_by_lua:1>, client: 127.0.0.1, server: nsps2, request: "GET /widgets/widget?name=testname?loc=20000 HTTP/1.1", host: "localhost"
I'm not too sure about what this error means / is trying to tell me.
Any tips would be appreciated.
Thank you.
EDIT 1
I think the problem was a syntax issue. I was reading the manual and found an example to try. I've changed the code to look like this:
location /widgets/widget {
  default_type "text/pain";
  content_by_lua '
    local args = ngx.req.get_uri_args()
    for key, val in pairs(args) do
      if type(val) == "table" then
        ngx.say(key, ": ", table.concat(val, ", "))
      else
        ngx.say(key, ": ", val)
      end
    end
  ';
}
Now when I call the app like this:
mytestdevbox2:/var/www/nsps2# curl -i -X GET http://localhost/widgets/widget?name=testname&loc=20000 -H "Accept:application/json"
-ash: -H: not found
mytestdevbox2:/var/www/nsps2# HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Tue, 24 Feb 2015 21:32:44 GMT
Content-Type: text/pain
Transfer-Encoding: chunked
Connection: keep-alive
name: testname
[1]+ Done curl -i -X GET http://localhost/widgets/widget?name=testname
After the system displays the "name: testname" output, it just sits there until I hit Enter. After I do that, it proceeds to display the [1]+ Done line.
I'm not too sure what it's doing.
EDIT 2:
Adding quotes to the curl call did fix the problem:
mytestdevbox2:/var/www/nsps2# curl -i -X GET 'http://localhost/widgets/widget?name=testname&loc=20000'
HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Wed, 25 Feb 2015 12:59:29 GMT
Content-Type: text/pain
Transfer-Encoding: chunked
Connection: keep-alive
loc: 20000
name: testname
mytestdevbox2:/var/www/nsps2#
The problem described in EDIT 1 doesn't have anything to do with your question about nginx; it is caused by not quoting the parameters for curl when you execute the command:
curl -i -X GET http://localhost/widgets/widget?name=testname&loc=20000
This is executed as two commands separated by '&': curl -i -X GET http://localhost/widgets/widget?name=testname and loc=20000. That's why you see the output after you have already got the prompt back: the first command is now running in the background. The "[1]+ Done" message is confirmation that the background process has terminated; it is just shown after you press Enter.
Wrap the URL with the query string in quotes and you should see the expected behavior.
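Alternatively, you can avoid the shell quoting issue altogether by letting curl build the query string for you, using -G (send the data as a GET query string) together with --data-urlencode:
curl -i -G http://localhost/widgets/widget \
  --data-urlencode "name=testname" \
  --data-urlencode "loc=20000" \
  -H "Accept: application/json"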

Robots Text Blocked

header("Content-Type: text/plain; charset=utf-8");
if ($_SERVER['SERVER_PORT'] == 443) {
echo "User-agent: *\n" ;
echo "Disallow: /\n" ;
} else {
echo "User-agent: *\n" ;
echo "Disallow: \n" ;
}
What does this code do in robots.php?
I found it on my server and it seems to block content from being indexed by search engines.
When you request that page on port 443 (usually reserved for secure connections), e.g. https://yoursite.com/robots.php, the returned content will be as follows:
User-agent: *
Disallow: /
The "User-agent: *" means this section applies to all robots. The "Disallow: /" tells the robot that it should not visit any pages on the site.
Otherwise (when robots.php is visited on any other port, e.g. http://yoursite.com/robots.php), the returned content will be as follows:
User-agent: *
Disallow:
In this case robots can visit any page on the site.
Also, header("Content-Type: text/plain; charset=utf-8"); serves the page content as regular plain text.