I know that I can run wget --post-data=key=value, but is there a way to send $_GET data instead, such as through a get-data option?
GET data is transmitted via the URL, like this:
wget http://your.server.net/?key=value
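One thing to watch: if the query string has more than one parameter, quote the URL so the shell does not treat & as its background operator. For example (your.server.net and the parameter names are placeholders):

# Unquoted, the shell would cut this command off at the first '&':
wget "http://your.server.net/?key=value&other=123"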
You can refer to RFC 2616 for details.
I am following the docs here https://docs.github.com/en/rest/actions/artifacts#download-an-artifact to use the GitHub Actions REST API to download artifacts. Given an ARTIFACT_ID (and an access token, if the repo is private), one can call the API via cURL or the GitHub CLI to get a response from GitHub. The response headers contain Location: ..., which provides a temporary URL, valid for one minute, from which the artifact can be downloaded. The artifact can then be downloaded via a second call to cURL.
I would like to know the reason for this design decision on GitHub's part. In particular, why not just return the artifact in response to the first call to cURL? Additionally, given that the first call is intended to return a temporary URL from which the artifact can be retrieved, why not return this temporary URL in the response body rather than only in the header? Other information, such as whether the credentials are bad or the object has moved, is returned as JSON when this cURL command is run, so why can't the temporary URL be included there too?
To help clarify my question, here is some relevant code:
# The initial cURL command looks something like this:
curl -v \
-H "Accept: application/vnd.github+json" \
-H "Authorization: token <TOKEN>" \
https://api.github.com/repos/OWNER/REPO/actions/artifacts/ARTIFACT_ID/ARCHIVE_FORMAT
# the temporary URL, which can be curled to retrieve the artifact, looks something like this
# (quoted so the shell doesn't interpret the '&' characters):
curl "https://pipelines.actions.githubusercontent.com/serviceHosts/<HEXSTRING>/_apis/pipelines/1/runs/16/\
signedartifactscontent?artifactName=<artName>&urlExpires=<date>&urlSigningMethod=HMACV2&urlSignature=<SIGNATURE>"
Additionally, I am currently capturing the standard error of the cURL command and running a regex over it to extract the temporary URL. Is there a better way to do this? For example, is there a flag I could pass to cURL that would give me the value of Location directly?
Additionally, it is stated that the archive_format must be zip. Given that this is the case, what is the benefit of having this parameter? Is it not redundant? If so, what is the benefit of this redundancy?
This is a consequence of a 2011 design decision, described in https://github.blog/2011-08-02-nodeload2-downloads-reloaded/
When implementing a proxy of any kind, you have to deal with clients that can’t read content as fast as you can send it.
When an HTTP server response stream can't accept any more data, write() returns false.
Then, you can pause the proxied HTTP request stream, until the server response emits a drain event.
The drain event means it’s ready to send more data, and that you can now resume the proxied HTTP request stream.
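A minimal sketch of that pause/resume logic in Node.js (the upstream URL and port are placeholders; upstream is the proxied request stream, res the response being sent to the client):

const http = require('http');

http.createServer((req, res) => {
  http.get('http://upstream.example/file', (upstream) => {
    upstream.on('data', (chunk) => {
      // write() returns false once the client's buffer is full...
      if (!res.write(chunk)) {
        upstream.pause();                            // ...so stop reading,
        res.once('drain', () => upstream.resume());  // and resume on drain.
      }
    });
    upstream.on('end', () => res.end());
  });
}).listen(8080);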
To avoid DDoS, it is better to serve that stream from a temporary URL rather than a fixed one.
You can use -D to dump the response headers, but you would still need to post-process that output to get the redirection URL.
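For example (a sketch, assuming the endpoint answers with a 302 as described; the placeholders are the same as above):

# Dump the headers to stdout (-D -), discard the empty body, and grep out Location:
curl -s -D - -o /dev/null \
-H "Accept: application/vnd.github+json" \
-H "Authorization: token <TOKEN>" \
https://api.github.com/repos/OWNER/REPO/actions/artifacts/ARTIFACT_ID/zip \
| grep -i '^location:'

# Or let curl extract it for you with a write-out variable:
curl -s -o /dev/null -w '%{redirect_url}' \
-H "Authorization: token <TOKEN>" \
https://api.github.com/repos/OWNER/REPO/actions/artifacts/ARTIFACT_ID/zip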
I am sending a POST request over HTTPS from Perl. In the request content I need to send two things: an authorization token, and the command to be executed on the server side.
What should the approach be to sending these two things as the content?
Should it be:
$request->content($token);
$request->content($command);
or should it be:
my @content = ($token, $command);
$request->content(\@content);
The module I am using is LWP::UserAgent, with which I create an HTTP::Request object, my $request = HTTP::Request->new(POST => "<url>");, and on this object I set the content.
There is only a single content (request body) for a POST request, so any call to content just replaces the previously set content. Please have a look at the documentation for LWP::UserAgent::post, which clearly defines how to send POST data with multiple values. Also, it might be useful to understand how forms in HTML work, both on the client (browser) and on the server side, because only if you know in detail what the server side expects can you create the proper request.
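A minimal sketch of the post approach, assuming the server expects an ordinary form-encoded body (the URL and the field names token and command are placeholders, not anything your server necessarily expects):

use strict;
use warnings;
use LWP::UserAgent;

my ($token, $command) = ('secret-token', 'restart-service');
my $ua = LWP::UserAgent->new;

# post() builds a single application/x-www-form-urlencoded body from the
# hash reference, so both values travel in the one request content:
my $response = $ua->post(
    'https://your.server.example/endpoint',
    { token => $token, command => $command },
);
die $response->status_line unless $response->is_success;
print $response->decoded_content;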
I was designing a RESTful API when I noticed something strange.
When I make a POST request to create a new record, the form data is sent in the request payload.
But when I make a PUT request to update a record, the form data is appended to the URL, very much like a GET request.
Now, a URL has a certain length limit. So what happens if a PUT request has more data than this limit?
Will the PUT request fail?
Is it unsafe to use PUT instead of POST to update a record having large form data?
EDIT:
I am using a Node.js server, and Restangular (an AngularJS library) to build my PUT request.
Use customPUT to send the form data in the payload:
baseObj.customPUT(newObj).then(function(xyz){});
Have a look at these threads:
Can HTTP PUT request have application/x-www-form-urlencoded as the Content-Type?
PHP multipart form data PUT request?
application/x-www-form-urlencoded or multipart/form-data?
Sounds like you can set a Content-Type: multipart/form-data header and be golden. It basically comes down to configuring the request in Restangular and supporting that content type on the Node.js server.
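A sketch of what that might look like (baseObj and newObj as in the question; customPUT's optional fourth argument sets per-request headers):

// path and query params (arguments two and three) are left undefined here;
// only the headers are overridden.
baseObj.customPUT(newObj, undefined, undefined, {
  'Content-Type': 'multipart/form-data'
}).then(function (response) {
  // handle the updated record
});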
I want to capture how parameters are being sent. Usually what I do is make a request and check on Firebug's Params tab which parameters were sent. However, when I try to do this on the following site (http://www.infraero.gov.br/voos/index_2.aspx), it doesn't work: I can't see what the parameters are, so I can't repeat the request using curl. How can I get them? I'm not sure, but I think cookies are being used.
EDIT
I was able to get the request content, but couldn't understand it. It seems the site uses JavaScript to generate the proper request. How can I reproduce this request via cURL?
Did you see this previous question, cURL post data to asp.net page? That might answer the question right there (all I did was search "ASP.NET cURL"). And this one, Unable to load ASP.NET page using Python urllib2, talks about Python, but it approaches the problem in a way that should translate to cURL.
But for my $0.02, I wouldn't bother trying to untangle ASP.NET's __VIEWSTATE and JavaScript. Is it an absolute requirement that you use cURL?
I think you would be better off using a client that works more like a real browser and understands JavaScript. That's a bit of work, but it isn't as bad as it sounds. I've done this before with http://watirwebdriver.com/ and a short Ruby script. Here's how to do it with Python and Mechanize (probably a bit more lightweight).
http://phantomjs.org/ is another option, which you script using JavaScript. If you Google "Scraping ASP.NET" you will see that this is a common problem.
You didn't say how you want it done, but you can send the request with curl simply with curl -d 'name1=contents1&name2=contents2' [TARGETURL] (quoted so the shell doesn't interpret the &), etc.
Note that you probably first need to fetch the main page, extract the "__VIEWSTATE" form field, and submit that (very large) value back to get your submission accepted.
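A rough sketch of that flow with curl alone (the grep assumes the hidden input is written with id before value, so treat it as a starting point rather than a robust parser; name1 is a placeholder for the real form fields):

URL='http://www.infraero.gov.br/voos/index_2.aspx'

# 1. Fetch the form page, saving any session cookies:
curl -s -c cookies.txt -o page.html "$URL"

# 2. Pull the __VIEWSTATE hidden field out of the HTML:
VIEWSTATE=$(grep -o 'id="__VIEWSTATE" value="[^"]*"' page.html | sed 's/.*value="//;s/"$//')

# 3. Post the form back, URL-encoding the (very large) viewstate:
curl -s -b cookies.txt \
--data-urlencode "__VIEWSTATE=$VIEWSTATE" \
-d 'name1=contents1' \
"$URL"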
My understanding is that Zend HTTP Client is the best way to send (possibly) large files to the user; can anyone confirm this and show me an example? Or a better solution?
Zend_Http_Client is an HTTP client: it lets your PHP code talk to an HTTP server the way a browser or another client would.
In other words, it's a more advanced alternative to file_get_contents() or simply file(); it's for fetching data, not for sending it to the user.
So to send data to the browser, the best way is readfile(), or, if you only have the binary data in a variable, you can simply echo it.
e.g.:
header("Content-Type: image/jpeg");
header("Content-Length: " . filesize("/path/to/the/file.jpg")); // optional, but lets the client know the size
readfile("/path/to/the/file.jpg"); // streams the file in chunks instead of loading it all into memory
exit;