I am trying to capture the payload and endpoint URL from a particular web site I am using. Since I don't have any documentation, I have to capture the endpoint manually.
I can see the endpoint URL appear in the web inspector for 3-4 seconds when I make a POST request, but it disappears before I can capture all the details. Is there any way I can delay the endpoint's visibility in the browser and capture the information?
This may not be a programming question, but this task is mostly performed by programmers, hence I'm looking for an answer on this site.
Thanks
I was able to resolve it.
In the Chrome DevTools Network tab, check the Preserve log option.
I'm trying to find a way to go to this site, https://www.deadstock.ca, and monitor the backend for when they upload products early.
Hello, I'm trying to find a way to go to this site, https://www.deadstock.ca, and monitor the backend for when they upload products early. To do so I would need to find a hidden endpoint or API. After researching, it turns out the best way to do this is to use Fiddler to capture all the requests sent and look for it; Chrome DevTools won't work because it doesn't show all the requests. But after using Fiddler, I still can't find the hidden endpoint. So what should I do?
I'm expecting to learn what I'm doing wrong, and how to find hidden endpoints on a site.
I'm building a service with API Gateway + Lambda that tracks email link clicks. The links inside the email lead to my endpoint, which gathers the click info and redirects to another URL. However, I've noticed that in some cases some software automatically clicks most of the links, probably to prevent phishing; the usual suspect here is an antivirus. Since I'm targeting only real user clicks, I want to discard the automated ones, but I didn't find anything unusual in the request headers. How would you check that a request comes from a non-user?
In the API Gateway settings, you can turn on CloudWatch logs to see all the request headers. Specifically, you can use the $context and $input variables to log values such as the user agent and source IP, or to log all the headers.
If the bots use the exact same user agent and set of headers, I don't see a way to distinguish them on the API Gateway side.
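As an illustration, an access-log format using those $context variables might look like the following (the field names on the left are arbitrary; API Gateway substitutes the $context values at request time):

```json
{
  "requestId": "$context.requestId",
  "sourceIp": "$context.identity.sourceIp",
  "userAgent": "$context.identity.userAgent",
  "requestTime": "$context.requestTime",
  "httpMethod": "$context.httpMethod",
  "resourcePath": "$context.resourcePath",
  "status": "$context.status"
}
```

Comparing the logged user agents and source IPs against your known click timestamps is usually the quickest way to spot the antivirus pre-fetchers.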
I'm trying to create an application using Ruby on Rails that allows a user to download all of the songs they have uploaded. At this point I've authenticated the user using OmniAuth, and I've managed to pull some data using SoundCloud's Ruby interface to their API.
According to the API, every track has an associated title, artwork_url, and download_url (I have already used the API to get this information). I'd like to display each song showing its title and artwork (if any exists), and then if they choose to download that track, they'll be able to click a button and download that track from the download_url.
So here's what I've realized: for most tracks (or sounds, as they are called in Soundcloud), downloading is disabled by default when you upload it. The thing is, there is also an option that says "Apps Enabled/Disabled." What I'm wondering is if downloading for a song is disabled, can a 3rd party application that has been authenticated still use the download URL to grab the track? I'd like to know if users will need to individually go through all their tracks and enable downloading in order for this to work.
If you need any more detail, please say so. Apologies if this is an obvious question.
Thanks,
Nat
What I'm wondering is if downloading for a song is disabled, can a 3rd party application that has been authenticated still use the download URL to grab the track?
Yes, it can. However, note that in order for the API to actually serve the file, the request has to carry an HTTP Authorization header and a client_id GET parameter.
This means that simply spitting out links like
Download {title}
will not help. As for client_id, you could simply append it to the href, but since the browser doesn't know that your app has been authorised, and can't send HTTP headers with simple link requests, the API won't let the user download tracks that are not publicly downloadable.
Because of that, you'll need to build a proxy in Ruby: point the hrefs at your own local endpoints and handle the connection to the SoundCloud API on the Ruby side, so you can pass the HTTP header with the OAuth token.
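To sketch what that proxy boils down to (CLIENT_ID, the token variable, and the send_data usage below are placeholders for your own app's code, not SoundCloud-mandated names):

```ruby
require "net/http"
require "uri"

# Build the authorised request the browser cannot make itself: the
# client_id goes in the query string, the OAuth token in the
# Authorization header. download_url comes from the track data you
# already fetched via the API.
def build_download_request(download_url, client_id, oauth_token)
  uri = URI(download_url)
  params = uri.query ? URI.decode_www_form(uri.query) : []
  params << ["client_id", client_id]
  uri.query = URI.encode_www_form(params)
  req = Net::HTTP::Get.new(uri)
  req["Authorization"] = "OAuth #{oauth_token}"
  [uri, req]
end

# In a Rails controller action you would then stream the body back:
# uri, req = build_download_request(track.download_url, CLIENT_ID, token)
# res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |h| h.request(req) }
# send_data res.body, filename: "#{track.title}.mp3"
```

The browser only ever talks to your own endpoint, so the OAuth token never leaks into the page.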
Making an HTTP GET request to the API's '/convert' route will return your track ID.
Then you can make a POST request to the API's '/stream' route, passing the track ID and ?client_id=YOURCLIENTID.
From the response you can get the stream URL, which can be used to stream and download every track on SoundCloud. The latter option isn't really legal, so take all of this as an example.
Edit: this can be done with pure JS/jQuery; there's no need to run servers.
I want to delete the records of those people who have removed my app from their applications list. To do this, I entered the URL of my de-authorize callback, where my code deletes the active user's record from my database. But I'm still unable to de-authorize users in my DB.
Edit: See Facebook Deauthorize Callback over HTTPS for what my original problem really was. Summary: Improper web server configuration on my part.
Original answer was:
One potential problem has to do with https based deauthorize callbacks. At least some SSL certificates are not compatible with the Facebook back end servers that send the ping to the deauthorize callback. I was only able to process the data once I implemented a callback on an http based handler.
Some things to check...
That the URL of your server is visible from Facebook's servers (i.e. not 192.168.x.x or 10.0.x.x unless you've got a proper firewall and DNS config).
Try using an anonymous surfing service and browsing to the URL you gave Facebook: do you see a PHP error?
Increase the log level for PHP and Apache/IIS to maximum and see if you get any more information.
We can't do much more unless you share your code...
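For reference, the deauthorize ping carries a signed_request POST parameter of the form "<base64url signature>.<base64url JSON payload>", signed with HMAC-SHA256 using your app secret; it's worth verifying it before deleting anything. A minimal sketch in Ruby (the PHP version is analogous; APP_SECRET and delete_user below are placeholders for your own code):

```ruby
require "openssl"
require "base64"
require "json"

# Verify Facebook's signed_request and return the decoded payload,
# or nil if the signature doesn't match.
def parse_signed_request(signed_request, app_secret)
  encoded_sig, payload = signed_request.split(".", 2)
  return nil if payload.nil?
  sig      = Base64.urlsafe_decode64(encoded_sig)
  expected = OpenSSL::HMAC.digest("SHA256", app_secret, payload)
  return nil unless sig == expected # use a constant-time compare in production
  JSON.parse(Base64.urlsafe_decode64(payload))
end

# In the callback handler you would then do something like:
# data = parse_signed_request(params["signed_request"], APP_SECRET)
# delete_user(data["user_id"]) if data
```

If the payload comes back nil for a genuine ping, log the raw POST body first; that quickly tells you whether the request is reaching your handler at all.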
Say I have a web page that does a permanent redirect to another page. The status code sent should be 301. I would like to test this (i.e. check that the status code is indeed 301), but the browser automatically redirects to the new page and I don't have time to check the status code returned.
Any ideas?
Fiddler is your friend here; it can monitor all web traffic, and you will be able to see the 301 being sent back.
You can download it from http://www.fiddler2.com/fiddler2/
You can check your logs in IIS; it keeps track of requests and the response code sent back.
You can also use a tool like Fiddler, which works with IE and will show you the request/response data.
Other browsers likely have their own tools that will show you this information as well.
I would recommend using a web debugging tool that lets you look at the requests and responses the browser has received. Fiddler is a free, useful tool for seeing these.
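If you'd rather script the check, you can make the request yourself with a client that doesn't follow redirects and inspect the raw status code. A minimal Ruby sketch:

```ruby
require "net/http"
require "uri"

# Fetch a URL and return the raw status code the server sent.
# Net::HTTP does not follow redirects on its own, so a 301 is
# reported as-is instead of being swallowed by the browser.
def status_code(url)
  uri = URI(url)
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
    http.request(Net::HTTP::Get.new(uri))
  end
  res.code # a String, e.g. "301" for a permanent redirect
end
```

From the command line, `curl -I <url>` gives you the same information, since curl also doesn't follow redirects unless you pass -L.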