What is the use of -i with the curl command? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 7 years ago.
This is used to check whether an index exists. For example:
curl -XHEAD -i 'http://localhost:9200/twitter'
The HTTP status code indicates if the index exists or not. A 404 means it does not exist, and 200 means it does.
What is the use of the -i option in the above example?

This is a cURL option, so the answer is in the cURL documentation. The related -I/--head option is described there as follows:
Different protocols provide different ways of getting detailed
information about specific files/documents. To get curl to show
detailed information about a single file, you should use the -I/--head
option. It displays all available info on a single file for HTTP and
FTP. The HTTP information is a lot more extensive.
The -i option used in the example is documented here:
-i, --include
(HTTP) Include the HTTP-header in the output. The HTTP-header includes
things like server-name, date of the document, HTTP-version and
more...
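A minimal sketch of why -i matters for the index-existence check: without it, a HEAD response has no body, so curl prints nothing at all; with it, the status line and headers appear in the output and the 200/404 can be read or parsed. The response text below is simulated (no live Elasticsearch assumed here):

```shell
# Simulated "curl -i" output: -i makes curl print the response headers,
# including the status line, ahead of the body.
response='HTTP/1.1 200 OK
content-type: application/json

{"twitter":{}}'

# The status code sits in the second field of the first header line.
status=$(printf '%s\n' "$response" | head -n1 | awk '{print $2}')
echo "$status"
```

In practice, `curl -s -o /dev/null -w '%{http_code}' -X HEAD 'http://localhost:9200/twitter'` is another way to get just the status code without parsing headers yourself.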

Related

What does the -X flag mean in wget? [closed]

Closed 5 months ago.
In the documentation it says:
-X list
But what does it actually mean when I call:
wget -X GET https://www.google.com
Can anybody explain please?
From the man page:
-X list
--exclude-directories=list
Specify a comma-separated list of directories you wish to exclude from download. Elements of list may contain wildcards.
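In other words, wget's -X has nothing to do with the HTTP method (that is curl's -X); it only matters during recursive retrieval (-r), where it prunes the listed directories. The sketch below mimics that matching rule with a hypothetical helper, not wget's actual code:

```shell
# Hypothetical helper mimicking wget's -X semantics: a directory is skipped
# if it matches any pattern in a comma-separated, wildcard-capable list.
excluded() {
  local dir=$1 list=$2 pat
  local IFS=,
  set -f                       # don't glob-expand the patterns themselves
  for pat in $list; do
    # shell pattern match, analogous to wget's wildcard handling
    case $dir in $pat) set +f; return 0 ;; esac
  done
  set +f
  return 1
}

excluded /private "/private,/tmp*" && echo "/private is excluded"
excluded /public  "/private,/tmp*" || echo "/public is downloaded"
```

So `wget -X GET https://www.google.com` quietly treats "GET" as a directory-exclusion list and then does a plain (non-recursive) download, which is why the flag appears to do nothing.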

JWT links in emails coming back corrupt/invalid [closed]

Closed 11 months ago.
A change was made to switch the links in our emails from a token-based format to JWTs. After enabling this feature in production, we started seeing a percentage of errors on our servers about not being able to decode the JWT because it was not valid. The invalid JWTs appear to be totally different from what we sent out (not even a subset of the JWT appears to be the same). Our best guess is that something along the way mangled the base64 encoding of the token parameter in our URL query string.
Every invalid request came from an IP associated with a "Microsoft Corporation" data center, widely spread across the US, not just one or two data centers. The user agent is predominantly Windows, although we have seen one or two Linux. Interestingly, no errors from OS X yet.
Is there some kind of link prefetcher/virus scanner/etc. somewhere in Azure/Microsoft/Outlook/live.com land that I don't know about that may be causing this?

How can I specify a map_remote rule in mitmproxy to redirect from a remote HTTPS URL to an HTTP URL? [closed]

Closed 1 year ago.
I'd like to set up a map_remote rewrite from an https address to my local machine running a service on http only.
The documentation for the option (https://docs.mitmproxy.org/stable/concepts-options/) seems to indicate that I should do this:
mitmproxy --map-remote "|https://foo.bar.com|http://localhost:8081|"
But this doesn't seem to rewrite any requests.
What's the correct syntax to accomplish this?
The problem in your example is the trailing |. Map Remote specifications can either be:
|flow-filter|url-regex|replacement or
|url-regex|replacement
By appending a final | to your two-part spec, you inadvertently use the first form, and https://foo.bar.com is applied as the flow filter rather than as the URL regex. Long story short:
mitmproxy --map-remote "|https://foo.bar.com|http://localhost:8081|" # wrong
mitmproxy --map-remote "|https://foo.bar.com|http://localhost:8081" # correct
You may also find the extended feature documentation at https://docs.mitmproxy.org/stable/overview-features/#map-remote helpful. :)
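The two forms can be told apart by counting the separator-delimited parts. The snippet below is a hypothetical re-creation of the parsing rule for illustration (not mitmproxy's actual code): the first character is taken as the separator and the remainder is split on it.

```shell
# Count the parts mitmproxy sees after the leading separator is stripped.
count_parts() {
  printf '%s' "${1#?}" | awk -F'|' '{print NF}'
}

count_parts "|https://foo.bar.com|http://localhost:8081|"   # three parts: filter|regex|replacement
count_parts "|https://foo.bar.com|http://localhost:8081"    # two parts: regex|replacement
```

The trailing | turns the replacement into an empty third part, so the first URL is consumed as a flow filter and nothing gets rewritten.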

Charles Proxy: Map to GET Request instead of OPTIONS [closed]

Closed 3 years ago.
Is there a way to do a local mapping in Charles based on a specific request? My API makes an OPTIONS request followed by a GET request.
When I do a local mapping in Charles, it maps the response to the OPTIONS request. I would like it to map the response to the GET request instead.
Is there a way around this? I'm new to this tool.
Any help is appreciated. Thanks in advance!
I think this has already been discussed here.
As it is said there, Charles does not provide any way to distinguish between HTTP requests by method, but you may be able to work around it by filtering on requests with an empty body.
It's probably off-topic, but one way to solve this problem is to disable the OPTIONS (preflight) request:
open -na Google\ Chrome --args --disable-web-security --user-data-dir=$HOME/profile-folder-name
Then Chrome and Charles Proxy can work together.

Charles: Export only the request body as a separate JSON file [closed]

Closed 3 years ago.
I use the Charles Proxy tool to monitor requests and responses.
I can use Charles's export feature to save the complete session as a .chls file. That .chls file contains the URL, request headers, request body, response headers, response body, and many other details.
But all I need is the request body, saved as a .json file. Is there any way to automate this process?
Go to Charles > Proxy > Enable the Web Interface:
After completing the Charles session, you can open the http://control.charles/session/export-json URL to get the session in JSON format.
To automate this, you can run a curl command, e.g. from Java:
Runtime.getRuntime().exec("curl -o file.json http://control.charles/session/export-json");
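Once the export is on disk, trimming it down to just the request body can also be scripted. The entry layout in this sketch is an assumption for illustration (Charles's actual export schema may differ across versions), so adapt the filter to what your version emits:

```shell
# Sample export shaped like a list of request/response entries
# (assumed structure, not Charles's documented schema).
cat > session.json <<'EOF'
[{"request": {"body": {"user": "alice", "action": "login"}}}]
EOF

# With jq installed, pull the first request body into its own .json file:
# jq '.[0].request.body' session.json > request-body.json
```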
In Charles v4.2.8, this is pretty easy: just right-click the recorded HTTP request and click "Save Request...".
If the HTTP request's Content-Type is application/json, its body will be saved. Save the file as xxx.json and you're done.
Note: this feature was possibly added earlier than v4.2.8, but I'm not able to find any announcement in Charles's version history.