I'm trying to POST an image with tcllib's rest package as part of a multipart/form-data document. I believe I just need to format the payload properly.
The url looks like:
POST /api/v1/rawImage/1000?slice=1
I can do this easily with curl (and other things as well) with:
curl -v -X POST -H "Content-Type: multipart/form-data" -F imageFile=@../../images/data/Image_1.bin http://${HOST}/api/v1/rawImage/1000?slice=1
In looking through rest.tcl, I don't see anything that explicitly formats the boundaries for the payload.
Here is what I have to try the POST:
#!/usr/bin/tclsh
package require rest
package require json
# pull in Image data
set fh [open "Image_1.bin"]
fconfigure $fh -translation binary  ;# binary mode: no encoding or newline translation
set filedata [read $fh]
close $fh
puts "filedata length: [string length $filedata]"
# POST request
set url http://localhost:5007/api/v1/rawImage/100?slice=1
set header [list content-type multipart/form-data]
set config [list format json method post headers $header]
set form_data [list rawImage $filedata]  ;# using [list] so $filedata is actually substituted
set res [::rest::simple $url {} $config $form_data]
puts $res
The following HTTP header is seen in Wireshark:
You could take a look at the documentation of the create_interface command, in particular the Google Docs example.
Also, if you want to know more about the MIME boundary string, you could look at the proc mime_multipart in the source code of rest.
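For reference, the multipart body that curl builds for that -F option looks roughly like this on the wire (the boundary value here is made up; each part is delimited by two leading dashes plus the boundary, and the final delimiter gets two trailing dashes as well). This is the framing that a helper like rest's mime_multipart would need to produce:
POST /api/v1/rawImage/1000?slice=1 HTTP/1.1
Host: localhost:5007
Content-Type: multipart/form-data; boundary=XaB03x

--XaB03x
Content-Disposition: form-data; name="imageFile"; filename="Image_1.bin"
Content-Type: application/octet-stream

(raw bytes of Image_1.bin)
--XaB03x--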
I want to send ['1', '2', '3'] as a GET request.
I thought GET requests are used when you are retrieving data, as opposed to POST (when you are modifying/creating data).
Failing to google how to send a list of strings with GET makes me wonder whether it's better to use POST here.
If you intend to perform a GET request, you can send the data in the query string using one of the following approaches:
curl -G http://example.org -d "query=1,2,3"
curl -G http://example.org -d "query=1&query=2&query=3"
Let me highlight that payloads in GET requests are not recommended. Quoting RFC 7231:
A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.
Also bear in mind that GET requests shouldn't be used to modify resources: they are intended to be used for information retrieval only, without side effects. Having said that, GET is both safe and idempotent. You can see more details on these concepts in this answer.
If the data must be sent in the payload (and you intend to modify the resource) then stick to POST. Assuming your payload is a JSON document, you would have something like:
curl -X POST http://example.org \
-H "Content-Type: application/json" \
-d '["1", "2", "3"]'
If you want to send it in the body with curl, you can call your service like this:
curl -X GET --data "['1', '2', '3']" "https://example.com/test.php"
For example, in PHP you can read it from the read-only stream php://input:
<?php
$get_body = file_get_contents('php://input');
A better way would be to assign the array to a parameter, e.g. x:
curl -X GET "https://example.com/test.php?x[]=1&x[]=2&x[]=3"
In PHP you'll receive these values as an array in $_GET['x']:
<?php
print_r($_GET['x']);
Output:
Array
(
[0] => 1
[1] => 2
[2] => 3
)
I am building a web server using Apache and Perl CGI which processes the POST requests sent to it. The processing part requires me to get the completely unprocessed data from the request and verify its signature.
The client sends two different kinds of POST requests: one with the Content-Type set to application/json, and the second with the Content-Type set to application/x-www-form-urlencoded.
I was able to fetch the application/json data using $cgi->param('POSTDATA'). But if I do the same for the application/x-www-form-urlencoded data, i.e. $cgi->param('payload'), I get the data but it's already decoded. I want the data in its original URL-encoded format, i.e. the unprocessed data exactly as it was sent out by the client.
I am doing this to verify requests sent out by Slack.
To handle all cases, including those where the Content-Type is multipart/form-data, read (and put back) the raw data before CGI.pm does.
use strict;
use warnings;
use IO::Handle;
use IO::Scalar;
STDIN->blocking(1); # ensure we read everything
my $cgi_raw = '';
{
local $/;
$cgi_raw = <STDIN>;
my $s;
tie *STDIN, 'IO::Scalar', \$s;
print STDIN $cgi_raw;
tied(*STDIN)->setpos(0);
}
use CGI qw(:standard);
...
I'm not sure which Perl module can handle it all for you, but here is a basic rundown.
Your HTML form should submit to a .cgi file (or any other handler which is properly defined).
The raw request is something similar to this:
POST /script.cgi HTTP/1.1
User-Agent: Mozilla/5.0
Content-Length: 69
Host: 127.0.0.1
(More headers depending on situation and then a single blank line)
(Message Body Containing data)
username=John&password=123J (example)
(See: https://en.wikipedia.org/wiki/List_of_HTTP_header_fields)
What will happen is that this data is made available via the CGI mechanism (not the Perl CGI module, a.k.a. CGI.pm) through environment variables and stdin (header fields are passed via environment variables and the message body via stdin).
In Perl, I think you need this to read those environment variables: http://perldoc.perl.org/Env.html
And this to read stdin: https://perlmaven.com/read-from-stdin
From there on, you can process as needed.
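As a rough sketch of that rundown (plain CGI, no CGI.pm; the echoed output is just for illustration):
#!/usr/bin/perl
use strict;
use warnings;

# Header fields arrive as environment variables, the message body on stdin.
my $method = $ENV{REQUEST_METHOD} // '';
my $type   = $ENV{CONTENT_TYPE}   // '';
my $length = $ENV{CONTENT_LENGTH} // 0;

# Read exactly Content-Length bytes of the body.
my $body = '';
read(STDIN, $body, $length) if $method eq 'POST' && $length > 0;

print "Content-Type: text/plain\n\n";
print "method=$method type=$type bytes=", length($body), "\n";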
BE CAREFUL when reading any of these. You can be sent malformed information, like 100 GB of "valid" data in one of the HTTP headers or in the message body, which can wreak havoc on you, trigger dangerous system calls, etc. Sanitizing is necessary before passing the data on to other places.
I've checked the documentation (https://metacpan.org/pod/Furl)
but can't find how to get a site's base URI while using Furl.
With LWP it's easy:
my $res = $ua->get($url);
my $base_uri = $res->base;
The base function tries to get its value from these header fields:
my $base = (
$self->header('Content-Base'), # used to be HTTP/1.1
$self->header('Content-Location'), # HTTP/1.1
$self->header('Base'), # HTTP/1.0
)[0];
But I couldn't do the same with Furl.
First: note that ( ... )[0] in the quoted code is a list slice, not an anonymous array: it takes the first element of the flattened list. Since header() returns an empty list when a header is absent, the slice effectively selects the first of those three headers that is actually present (Content-Base, then Content-Location, then Base). You can check what each call returns with Data::Dumper.
Second: after reading through the code of the Furl module, I found that there is no exposed method for getting a URL's base, so unless you also check in your own code for the <base> HTML tag and the URI you used to request your response (even after redirects), your code might break on some older sites. HTTP::Response does this checking, and that's what LWP uses.
Citation for hierarchy of base URIs: HTTP::Response - HTTP style response message
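For what it's worth, here is a minimal sketch of how one might approximate that header hierarchy with Furl by hand. It assumes Furl::Response's header method and the URI module, and it deliberately skips the <base> tag and redirect checks mentioned above:
use strict;
use warnings;
use Furl;
use URI;

# Approximate HTTP::Response->base: prefer Content-Base, then
# Content-Location, then fall back to the URL that was requested.
sub furl_base {
    my ($res, $request_url) = @_;
    my $candidate = $res->header('Content-Base')
                 // $res->header('Content-Location');
    return defined $candidate && length $candidate
        ? URI->new_abs($candidate, $request_url)  # resolve relative values
        : URI->new($request_url);
}

my $url = 'http://example.org/';
my $res = Furl->new->get($url);
print furl_base($res, $url), "\n";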
I was trying to do image classification through a web service (the CloudSight API), and
I had no problem using curl to retrieve a token:
curl -i -X POST -H "Authorization:MY Key" -F "image_request[image]=@/path/to/myimage" -F "image_request[locale]=en-US" https://api.cloudsightapi.com/image_requests
However, it was not successful when I tried webwrite; it returned an "HTTP 400" error.
option=weboptions('KeyName','Authorization','KeyValue','mykey')
fid = fopen('/path/to/myimage');
img = fread(fid,Inf,'*uint8');
fclose(fid);
response=webwrite('https://api.cloudsightapi.com/image_requests',...
'image_request[image]',img,...
'image_request[locale]','en-US',option);
I guess it is because webwrite in this form doesn't support "multipart/form-data" and I need to change the media type. Then I tried to send the data as a JSON object:
option=weboptions('KeyName','Authorization','KeyValue','mykey','MediaType','application/json')
data=struct('image_request[image]',img,'image_request[locale]','en-US');
response=webwrite('https://api.cloudsightapi.com/image_requests',data,option)
But "[" is not allowed in a MATLAB struct field name.
Any suggestions?
I'm trying to write a Perl CGI script to handle XML-RPC requests, in which an XML document is sent as the body of an HTTP POST request.
The CGI.pm module does a great job at extracting named params from an HTTP request, but I can't figure out how to make it give me the entire HTTP request body (i.e. the XML document in the XML-RPC request I'm handling).
If not CGI.pm, is there another module that would be able to parse this information out of the request? I'd prefer not to have to extract this information "by hand" from the environment variables. Thanks for any help.
You can get the raw POST data by using the special parameter name POSTDATA.
my $q = CGI->new;
my $xml = $q->param('POSTDATA');
Alternatively, you could read STDIN directly instead of using CGI.pm, but then you lose all the other useful stuff that CGI.pm does.
The POSTDATA trick is documented in the excellent CGI.pm docs here.
Right, one could use POSTDATA, but that only works for content types that CGI.pm does not parse itself.
If the Content-Type is set to multipart/form-data, CGI.pm does its own content processing and POSTDATA is not initialized.
So, other options include $cgi->query_string and/or $cgi->Dump.
The $cgi->query_string returns the contents of the POST in a GET format (param=value&...), and there doesn't seem to be a way to simply get the contents of the POST STDIN as they were passed in by the client.
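To make that limitation concrete, here is a small sketch (the content-type check is my own illustration; note that CGI.pm also parses application/x-www-form-urlencoded bodies itself, so POSTDATA is not set for those either):
use strict;
use warnings;
use CGI;

my $q    = CGI->new;
my $type = $ENV{CONTENT_TYPE} // '';

# POSTDATA exists only when CGI.pm leaves the body unparsed,
# i.e. for content types it does not handle itself (XML, JSON, ...).
if ($type =~ m{^(?:multipart/form-data|application/x-www-form-urlencoded)}i) {
    warn "CGI.pm parses this body itself; POSTDATA will not be set\n";
}
else {
    my $raw = $q->param('POSTDATA');  # the unparsed request body
}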
So to get the actual content of the standard input of a POST request, if modifying CGI.pm is an option for you, you could modify it around line 620 to save the contents of @lines somewhere in a variable, such as:
$self->{standard_input} = join '', @lines;
And then access it through $cgi->{standard_input}.