REST interface using the nginx HttpLuaModule

I'm just getting started with nginx and the HttpLuaModule, and I've created a test configuration file to familiarize myself with how things work.
Now I'm trying to write some logic to accept GET, POST, and DELETE requests for a specific type of resource.
I would like to create a "location" entry that would match the following URI / accept the following curl calls:
curl -i -X GET http://localhost/widgets/widget?name=testname&loc=20000 -H "Accept:application/json"
This is what my current nginx.conf looks like:
server {
    listen 80;
    server_name nsps2;
    root /var/www/;
    index index.html index.htm;

    #charset koi8-r;
    #access_log logs/host.access.log main;

    # curl http://localhost/hello?name=johndoe
    location /hello {
        default_type "text/plain";
        content_by_lua '
            local rquri = ngx.var.request_uri
            ngx.say("the uri is ", rquri, ".")
            local name = ngx.var.arg_name or "Anonymous"
            ngx.say("Hello, ", name, "!")
        ';
    }

    location / {
        root /var/www/;
        index index.html index.htm;
    }

    # curl -i -X GET http://localhost/widgets/widget?name=testname&loc=20000 -H "Accept:application/json"
    location /widgets/widget {
        root /var/www/widgets;
        default_type "text/pain";
        content_by_lua '
            local arga, argb = ngx.arg[1], ngx.arg[2]
            ngx.say("the arga is ", arga, ".")
            ngx.say("the argb is ", argb, ".")
        ';
    }
}
Using the last "location" entry, I'm trying to
1. prove that the system is getting the GET request
2. prove that I understand how to access the parameters passed in with the GET request.
I'm getting an error right now that looks like this:
2015/02/24 20:18:19 [error] 2354#0: *1 lua entry thread aborted: runtime error: content_by_lua:2: API disabled in the context of content_by_lua*
stack traceback:
coroutine 0:
[C]: in function '__index'
content_by_lua:2: in function <content_by_lua:1>, client: 127.0.0.1, server: nsps2, request: "GET /widgets/widget?name=testname?loc=20000 HTTP/1.1", host: "localhost"
I'm not too sure about what this error means / is trying to tell me.
Any tips would be appreciated.
Thank you.
EDIT 1
I think the problem was that ngx.arg isn't available in a content_by_lua block (hence the "API disabled" error). I was reading the manual and found an example to try. I've changed the code to look like this:
location /widgets/widget {
    default_type "text/pain";
    content_by_lua '
        local args = ngx.req.get_uri_args()
        for key, val in pairs(args) do
            if type(val) == "table" then
                ngx.say(key, ": ", table.concat(val, ", "))
            else
                ngx.say(key, ": ", val)
            end
        end
    ';
}
Now when I call the app like this:
mytestdevbox2:/var/www/nsps2# curl -i -X GET http://localhost/widgets/widget?name=testname&loc=20000 -H "Accept:application/json"
-ash: -H: not found
mytestdevbox2:/var/www/nsps2# HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Tue, 24 Feb 2015 21:32:44 GMT
Content-Type: text/pain
Transfer-Encoding: chunked
Connection: keep-alive
name: testname
[1]+ Done curl -i -X GET http://localhost/widgets/widget?name=testname
After the system displays the "name: testname" output, it just sits there until I hit Enter; only after that does it display the "[1]+ Done" line.
I'm not sure what it's doing.
EDIT 2:
Adding quotes to the curl call did fix the problem:
mytestdevbox2:/var/www/nsps2# curl -i -X GET 'http://localhost/widgets/widget?name=testname&loc=20000'
HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Wed, 25 Feb 2015 12:59:29 GMT
Content-Type: text/pain
Transfer-Encoding: chunked
Connection: keep-alive
loc: 20000
name: testname
mytestdevbox2:/var/www/nsps2#

The problem described in EDIT 1 has nothing to do with your nginx question; it is caused by not quoting the URL when you execute the command:
curl -i -X GET http://localhost/widgets/widget?name=testname&loc=20000
The shell splits this at the unquoted '&' into two commands: curl -i -X GET http://localhost/widgets/widget?name=testname, which the '&' sends to the background, and loc=20000 -H "Accept:application/json", which is a variable assignment followed by an attempt to run -H as a command (hence the -ash: -H: not found message). That's why you see the curl output after you already got the prompt back. The "[1]+ Done" message is the shell's confirmation that the background job finished; it is only shown the next time you press Enter.
Wrap the URL, including the query string, in quotes and you should see the expected behavior.
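The splitting can be reproduced with any command in place of curl; a minimal sketch using echo:

```shell
# Unquoted: the shell splits the line at '&', runs the first part in the
# background, and treats 'loc=20000' as the start of a second command.
echo http://localhost/widgets/widget?name=testname &
loc=20000        # a plain variable assignment, not a curl parameter
wait             # wait for the backgrounded echo to finish

# Quoted: the '&' is just a byte in the argument, so the full URL survives.
echo 'http://localhost/widgets/widget?name=testname&loc=20000'
```

The same quoting fixes the original call: curl -i 'http://localhost/widgets/widget?name=testname&loc=20000' -H 'Accept: application/json'.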

Related

Does Jetty overwrite a custom HTTP status code?

I am trying to develop a REST API for my service in which I set a custom HTTP status code on authorisation failure. But when I test it with curl, I receive 404 instead of 403. I am perplexed as to what might be causing this. Please help.
This is what I see from the curl output (and in the Swagger UI):
root#ubuntu:~# curl -X GET http://localhost:8082/mr/v1/topic/bhakk -v
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8082 (#0)
> GET /mr/v1/topic/bhakk HTTP/1.1
> Host: localhost:8082
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Date: Mon, 07 May 2018 22:00:10 GMT
< Exception: serviceBlockedException
< Content-Type: application/vnd.kafka.v1+json
< Content-Length: 83
< Server: Jetty(9.2.z-SNAPSHOT)
<
* Connection #0 to host localhost left intact
{"error_code":40301,"message":"This service does not have access to the resource."}
Here is the code:
public Collection<String> list(@HeaderParam("x-nssvc-serviceid") String serviceID) {
    Date now = new java.util.Date();
    if (!ctx.getSecurityRestrictions().isServiceAllowed(uri, httpHeaders, "Describe", "Cluster", "kafka-cluster"))
        throw Errors.serviceBlockedException(ctx, httpServletResponse);
    List<String> topicsCopy = new ArrayList<String>(topics);
    for (Iterator<String> iterator = topicsCopy.iterator(); iterator.hasNext();) {
        String topic = iterator.next();
        if (!ctx.getSecurityRestrictions().hasAccess(serviceID, "Describe", "Topic", topic)) {
            iterator.remove();
        }
    }
    return topicsCopy;
}

public static RestException serviceBlockedException(Context ctx, HttpServletResponse httpServletResponse) {
    httpServletResponse.setHeader("Exception", "serviceBlockedException");
    httpServletResponse.setStatus(Status.FORBIDDEN.getStatusCode()); // <-- here I am setting the status code
    return new RestNotFoundException(SERVICE_ID_BLOCKED_MESSAGE, SERVICE_ID_BLOCKED_ERROR_CODE);
}
The 404 is set by rest-utils' RestNotFoundException itself, which overrides the 403 you set on the response.
See: https://github.com/confluentinc/rest-utils/blob/master/core/src/main/java/io/confluent/rest/exceptions/RestNotFoundException.java

Play (Scala) 2.4.3 redirect not working

I'm trying the reverse routing sample code. Here are my routes:
GET     /hello/:name    controllers.Application.hello(name)
GET     /bob            controllers.Application.helloBob
and my code:
def helloBob = Action {
  Redirect(routes.Application.hello("Bob"))
}

def hello(name: String) = Action {
  Ok("Hello " + name + "!")
}
I can get the hello response:
$ curl -v localhost:9001/hello/play
Hello play!
But I can't get the "Bob" response after the redirect:
$ curl -v localhost:9001/bob
* Trying ::1...
* Connected to localhost (::1) port 9001 (#0)
> GET /bob HTTP/1.1
> Host: localhost:9001
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 303 See Other
< Location: /hello/Bob
< Date: Fri, 18 Sep 2015 03:19:04 GMT
< Content-Length: 0
<
* Connection #0 to host localhost left intact
The path component of a URI is case sensitive. Check it.
Try:
curl -v localhost:9001/hello/Bob
Update
Your code is correct (verified in my project) and the log you show is correct too: it prints a 303. I think you just need to tell curl to follow the redirect, like this:
curl -L localhost:9001/bob

curl command line equivalent to this perl code

I want to write a curl command for a POST request equivalent to this Perl code:
use strict;
use warnings;
use LWP::UserAgent;

my $base = 'http://www.uniprot.org/mapping/';
my $params = {
    from   => 'ACC',
    to     => 'P_REFSEQ_AC',
    format => 'tab',
    query  => 'P13368'
};
my $agent = LWP::UserAgent->new();
push @{ $agent->requests_redirectable }, 'POST';

my $response = $agent->post($base, $params);
$response->is_success ?
    print $response->content :
    die 'Failed, got ' . $response->status_line .
        ' for ' . $response->request->uri . "\n";
I tried this (and many other variants):
curl -X POST -H "Expect:" --form "from=ACC;to=P_REFSEQ_AC;format=tab; query=P13368" http://www.uniprot.org/mapping/ -o out.tab
The Perl code retrieves the expected result, but the curl command does not: it just retrieves the web page at http://www.uniprot.org/mapping/ as if the POST data were ignored.
I looked for an error in the response header, but didn't find anything suspicious.
> POST http://www.uniprot.org/mapping/ HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: www.uniprot.org
> Accept: */*
> Proxy-Connection: Keep-Alive
> Content-Length: 178
> Content-Type: multipart/form-data; boundary=----------------------------164471d8347f
>
} [data not shown]
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Server: Apache-Coyote/1.1
< Vary: User-Agent
< Vary: Accept-Encoding
< X-Hosted-By: European Bioinformatics Institute
< Content-Type: text/html;charset=UTF-8
< Date: Wed, 05 Aug 2015 20:32:00 GMT
< X-UniProt-Release: 2015_08
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Headers: origin, x-requested-with, content-type
< X-Cache: MISS from localhost
< X-Cache-Lookup: MISS from localhost:3128
< Via: 1.0 localhost (squid/3.1.20)
< Connection: close
<
I spent almost three days looking for a solution on the web, but nothing worked for me.
It looks like the server expects the data as application/x-www-form-urlencoded and not as multipart/form-data as you do with the --form argument. The following should work:
curl -v -L --data \
"from=ACC&to=P_REFSEQ_AC&format=tab&query=P13368" \
http://www.uniprot.org/mapping/ -o out.tab
With --data you get the expected Content-Type header, but you must do the URL encoding yourself. With -L, curl follows redirects, which is needed here to get the resulting data.
The -X POST option is not needed, since POST is the default method when sending data, and -H "Expect:" is not needed either.
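Since --data sends the body exactly as given, values containing special characters must be percent-encoded by hand (curl's --data-urlencode option can do this per field). A rough bash-only sketch of such an encoder, for illustration; the function name is my own:

```shell
# Percent-encode a single form value (assumes single-byte characters).
urlencode() {
    local s=$1 out= c i
    for ((i = 0; i < ${#s}; i++)); do
        c=${s:i:1}
        case $c in
            [A-Za-z0-9.~_-]) out+=$c ;;                # unreserved: copy as-is
            *) printf -v c '%%%02X' "'$c"; out+=$c ;;  # everything else: %XX
        esac
    done
    printf '%s\n' "$out"
}

urlencode 'P13368 P00533'   # spaces become %20
```

For the query string in this question no encoding is actually needed, which is why the plain --data form above works.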

mIRC socket inconsistently not working

I'm currently working on a new uptime command for my Twitch bot and I'm having problems with sockets. I'm trying to use this page http://api.twitch.tv/api/channels/duilio1337 to check if a streamer is online. It works fine sometimes but other times it reads the page header but not the page. Seems to be random.
alias uptimecheck {
  echo -a CHECKING CHANNEL $1 STATUS
  %uptimeurl = /api/channels/ $+ $1
  %uptimecheckchan = $1
  sockopen wuptime api.twitch.tv 80
}

on *:sockopen:wuptime: {
  sockwrite -n $sockname GET %uptimeurl HTTP/1.0
  sockwrite -n $sockname User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
  sockwrite -n $sockname Host: api.twitch.tv
  sockwrite -n $sockname Accept-Language: en-us
  sockwrite -n $sockname Accept: */*
  sockwrite -n $sockname
}

on *:sockread:wuptime: {
  if ($sockerr) {
    echo -a UPTIME Connection Error.
    halt
  }
  else {
    sockread -f %uptimeliverail
    if ($sockbr == 0) return
    ;echo -a %uptimeliverail
  }
}

on *:sockclose:wuptime: {
  var %TEMPliverail $mid(%uptimeliverail, $calc($pos(%uptimeliverail, "twitch_liverail_id") + 21), 4)
  if (%TEMPliverail == null) {
    set %uptimelive. $+ %uptimecheckchan 0
  }
  else if (%TEMPliverail isnum) {
    set %uptimelive. $+ %uptimecheckchan 1
  }
}
The content of http://api.twitch.tv/api/channels/duilio1337 is just:
{"display_name":"duilio1337","game":null,"status":"Test","fight_ad_block":false,"_id":47890122,"name":"duilio1337","partner":false,"comscore_id":null,"comscore_c6":null,"twitch_liverail_id":null,"hide_ads_for_subscribers":false,"liverail_id":null,"ppv":false,"video_banner":null,"steam_id":null,"broadcaster_software":"unknown_rtmp","prerolls":true,"postrolls":true,"product_path":""}
It is not an HTML page, so what do you mean by "it reads the page header but not the page"?
Maybe you can explain your problem better.
Anyway... I think for your purposes it is enough to send the GET line and the Host header in your sockopen event:
on *:sockopen:wuptime: {
  sockwrite -nt $sockname GET %uptimeurl HTTP/1.1
  sockwrite -nt $sockname Host: api.twitch.tv
  sockwrite $sockname $crlf
}
In HTTP/1.0 the Host header is not needed, but it doesn't hurt; in HTTP/1.1 it is required.
The -t switch is recommended so that text starting with & is not treated as a binary variable.
The sockwrite $sockname $crlf is a more explicit way to finish the HTTP request and is equivalent (IIRC) to your sockwrite -n $sockname (because -n appends a $crlf).
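For reference, the exact bytes those sockwrite lines put on the wire can be sketched with printf; note the CRLF line endings and the empty line that terminates the headers:

```shell
# The minimal HTTP/1.1 request the mIRC socket sends. HTTP requires \r\n
# line endings, and the blank line (the final \r\n\r\n) ends the headers.
printf 'GET /api/channels/duilio1337 HTTP/1.1\r\nHost: api.twitch.tv\r\n\r\n'
```

Piping this into something like nc api.twitch.tv 80 would exercise the same request outside mIRC.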
Also, your sockread event in general should look something like:
on *:sockread:wuptime: {
  if ($sockerr) {
    ;error msg code here
    return
  }
  var %data
  sockread %data
  ;your code here
  unset %data
}
Avoid halt; that command immediately stops all further processing. In most cases it is more appropriate to use return.
Avoid unnecessary if-else conditionals.
It is also important to note that the sockread event fires each time the server sends a line.
The server is most likely returning
Vary: Accept-Encoding
in the response header. Try replacing your Accept: */* line with
sockwrite -n $sockname Accept: gzip
and see what happens. It will most likely return compressed binary data (rather than timing out or returning nothing at all). The problem is that mIRC can't decompress this; it has no gzip support. I ran into this exact same problem last night, and here is how I solved it, for anyone working with mIRC sockets who needs to fetch gzip'd content.
First, make sure you change the header as I have above, to Accept: gzip.
Second, grab a copy of 7za (7-Zip for the Windows console) from http://www.7-zip.org/a/7za920.zip. Extract 7za and put it in the C:\Program Files (x86)\mIRC\ folder, or wherever your mirc.exe resides.
I've modified the OP's on *:sockread: block below to fetch and extract the gzip'd content:
on *:sockread:wuptime: {
  var %bare $rand(423539,999999)
  var %gz 1
  :play
  var %dg
  sockread %dg
  :wile
  if (%gz = 2) {
    sockread -f &gzd
  }
  if ($sockerr) {
    return
  }
  if (%gz = 2) {
    while ($sockbr) {
      bwrite %bare -1 -1 &gzd
      sockread -f &gzd
    }
  }
  if ($sockbr == 0) {
    sockclose $sockname
    echo -at closed
    run -h 7za.exe e %bare
    .timer 1 2 wuptimeparse %bare
    return
  }
  if (%dg = $null) {
    var %gz 2
    goto wile
  }
  goto play
}
This is probably pretty crude, but it starts bwrite-ing to a randomly named file once the headers have finished, and once $sockbr is empty it runs 7-Zip on the compressed binary data. The alias below is called a couple of seconds later (time for 7-Zip to do its magic; I'm sure you could do this far more elegantly) to spit out the plaintext JSON you're looking for. Be warned: if the JSON is huge (4000+ characters), you will need to read it as a binvar with bread, because the string will be too large to fit into a variable.
alias wuptimeparse {
  var %file $1
  var %bare $read(%file $+ ~, 1)
  echo %bare
  ;do something with %bare
  .timer 1 1 remove %file
  .timer 1 1 remove %file $+ ~
}

nginx rewrite rule to remove - and _

I need an nginx rewrite rule for the following problem:
I have URLs that include several hyphens and possibly underscores.
Example request: http://www.example.com/cat/cat2/200-AB---a-12_12-123.312/cat-_-cat/cat/dog---I
would give a 404 error, so I need a 301 redirect to:
http://www.example.com/cat/cat2/200-AB-a-12-12-123.312/cat-cat/cat/dog-I
All underscores should be replaced with hyphens, and there should only be one hyphen at a time.
Short version: replace --- with - and replace _ with -.
But by replacing _ with -, this -_- will become ---, and rule one would have to be applied again.
Is it possible to do that in one rule? And if not, how else can it be done? :) I have absolutely no idea how to do that with nginx.
Any help appreciated :)
% nginx -c $PWD/test.conf
% curl -I localhost:8080/cat/cat2/200-AB---a-12_12-123.312/cat-_-cat/cat/dog---I
HTTP/1.1 301 Moved Permanently
Server: nginx/1.3.13
Date: Wed, 20 Feb 2013 00:09:50 GMT
Content-Type: text/html
Content-Length: 185
Location: http://localhost:8080/cat/cat2/200-AB-a-1212-123.312/cat-cat/cat/dog-I
Connection: keep-alive
% cat test.conf
events { }

#error_log logs/error.log debug;

http {
    server {
        listen 8080;

        location /cat/cat2/ {
            # replace up to 3 nonconsecutive
            # underscores per internal redirect
            rewrite "^(.+?)_+(?:(.+?)_+)?(?:(.+?)_+)?(.+)$" $1$2$3$4 last;

            # replace up to 3 nonconsecutive runs of
            # multiple hyphens per internal redirect
            rewrite "^(.+?-)-+(?:(.+?-)-+)?(?:(.+?-)-+)?(.+)$" $1$2$3$4 last;

            return 301 $uri;
        }
    }
}
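The transformation the question asks for can be checked outside nginx; a quick sed sketch of the same two-step rewrite (underscores to hyphens first, then hyphen runs squeezed to one):

```shell
# Map every underscore to a hyphen, then collapse each run of hyphens.
echo '/cat/cat2/200-AB---a-12_12-123.312/cat-_-cat/cat/dog---I' \
    | sed -e 's/_/-/g' -e 's/--*/-/g'
# prints /cat/cat2/200-AB-a-12-12-123.312/cat-cat/cat/dog-I
```

Note that the config above deletes underscores instead of replacing them, as its Location header shows (1212 rather than 12-12); swapping the order of the two rewrite rules' effects, as sed does here, gives the URL the question asked for.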