Getting all requests while loading website - flutter

In my Flutter app...
I was wondering whether it is possible to get all the requests a website makes while loading. I want something like Chrome DevTools offers:
Say I make an HTTP request to a website; I would then receive the requested data along with all the requests that were made while the page loaded.

You need to study how a web server works and the client/server architecture.
In short: the client makes a request to a specific path on the web server, the web server returns a response, the client processes the data and, if necessary, makes further requests to complete its task.
Chrome DevTools only shows you a log/debugger of those requests.
To capture all the requests, you either need to simulate a complete, functional client, or extract the specific information you want from the response.
Here are one package in Dart and one in Node.js to start understanding what you need: web_scraper, puppeteer
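As a minimal, self-contained sketch of the second option (extracting the information you want from the response), here is how you might pull the sub-resource URLs out of a page's HTML. The sample HTML and the regex heuristic are illustrative only; a real crawler would fetch the page with an HTTP client and use a proper HTML parser (or drive a headless browser with puppeteer, which can log every network request directly):

```javascript
// Sketch: list the sub-resources (scripts, styles, images) a page would
// request while loading, by scanning its HTML for src/href attributes.
// A rough heuristic, not a full parser — placeholder HTML below.
function extractResourceUrls(html) {
  const urls = [];
  const attrPattern = /(?:src|href)\s*=\s*["']([^"']+)["']/g;
  let match;
  while ((match = attrPattern.exec(html)) !== null) {
    urls.push(match[1]); // the captured URL value
  }
  return urls;
}

const sampleHtml = `
  <html><head>
    <link href="/styles/main.css" rel="stylesheet">
    <script src="/js/app.js"></script>
  </head>
  <body><img src="/img/logo.png"></body></html>`;

console.log(extractResourceUrls(sampleHtml));
// [ '/styles/main.css', '/js/app.js', '/img/logo.png' ]
```

Note this only finds resources referenced statically in the HTML; requests made by JavaScript after load are exactly why the "simulate a complete client" option (puppeteer) exists.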

Related

Web based open source REST client to integrate in a webapp

I am looking for a web based open source REST client, that I can integrate in my web app. The requirement is to enter the URL, configure GET/POST/PUT/etc, request body, authorization (e.g. Basic, oAuth), parameterize some variables like query, header, etc. I should be able to extend it further as well by forking it.
Maybe a lighter version of the Postman or Insomnia clients.
Any suggestions here would be much appreciated.

RESTful API: how to distinguish users requests from front-end requests?

So, I have a RESTful API (built with Hapi.js) that has endpoints consumed by users and my front-end app (built with Next.js). GET api/candies is one of them, I'll take it as an example.
The front-end asks the list of candies stored in my DB and displays them on a page anyone can access (it has to be this way). The front-end doesn't provide an API token since people could read/use it. But, users who want to get this list of candies (to build whatever they want with it) must provide a valid API token (which they get by creating an account on my front-end app).
How could my API tell if a request for api/candies is from a user or from my front-end app, so it can verify (or not) the validity of their token?
I'm wondering if my problem isn't also about web scraping.
Can anyone help me please? :D
I thought about the same problem a while ago. If your frontend has a client-side REST client (JS + XHR/fetch), then I don't think this can be done reliably: no matter how you identify your frontend REST client, your users will be able to copy it just by checking the HTTP requests in the browser via CTRL+SHIFT+I. There are even automation tools that drive a real browser, e.g. Selenium.
If you have a server-side REST client (e.g. PHP + cURL), then just create a consumer id for the frontend and use a token. Even in this case I can easily write a few lines of code that uses the frontend for the same request. So if you want to sell the same service for money that you provide for free on your frontend, then you are out of luck here. This does not mean that there won't be consumers who are willing to pay for it.
I think your problem is a bad business model.
Your requirement can be addressed by inspecting the different headers sent by different user agents. You can also add custom headers from your front-end and validate them on the backend.
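A rough sketch of that custom-header approach. The header name, the shared secret, and the token handling are all made-up placeholders, and (as the other answer points out) anything a browser sends can be copied by a determined user, so this is obfuscation rather than real security:

```javascript
// Classify an incoming request as frontend, API user, or unauthorized,
// based on a custom header vs. an API token. Header name and secret
// are hypothetical placeholders.
const FRONTEND_HEADER = 'x-client-app';
const FRONTEND_SECRET = 'my-frontend-v1'; // placeholder value

function classifyRequest(headers, validApiTokens) {
  // Front-end requests carry the custom header, so skip token validation
  if (headers[FRONTEND_HEADER] === FRONTEND_SECRET) {
    return 'frontend';
  }
  // Everyone else must present a valid Bearer token
  const token = (headers['authorization'] || '').replace(/^Bearer /, '');
  return validApiTokens.has(token) ? 'api-user' : 'unauthorized';
}

console.log(classifyRequest({ 'x-client-app': 'my-frontend-v1' }, new Set()));
// 'frontend'
console.log(classifyRequest({ authorization: 'Bearer abc' }, new Set(['abc'])));
// 'api-user'
```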

How can I avoid hardcoding URLs in a RESTful client/server web app with deep linking?

I'm working on a SPA which is a client to a RESTful web service. Both the client and server are part of the same project, i.e. I can modify the code for both sides freely. I've been reading up on RESTful API design to try and make sure I'm doing everything the "right" way. One of my takeaways from reading is that a RESTful service should publish hyperlinks so clients can access more information, and that clients should have no hardcoded information about service URLs other than an entry point. Using hyperlinks allows the client to be more flexible in the event that the server makes URL changes.
However I can't figure out how this architecture is supposed to work when users are allowed to link to a specific client state. For example:
One of the views is a list of books available for purchase. The client sets the browser's location to /books/ to identify this page, and the backend data comes from an endpoint /api/books/, retrieved from an API entry point that publishes that URL. The service URL responds with a JSON document like this:
[
  {"title": "The Great Gatsby",
   "id": 24,
   "url": "http://localhost/api/books/24/"},
  < and so on >
]
The client uses this to generate readable links that, when clicked, go to a detailed view of a single book. The browser's location is updated to /books/the-great-gatsby/24/ so users can bookmark this view and link to it.
How does the client handle it when users follow this link directly? How would it know where to get the information for this book without a hardcoded URL?
The best I could come up with is the following sequence of requests:
GET /api/ - view which services are available (to find there are books at all)
OPTIONS /api/books/ - view a description of what operations are available on books (so e.g. it can make sure it can find books by ID)
GET /api/books/?id=24 - See if it can find a book with an ID that matches the ID in the browser's location.
GET /api/books/24/ - Actually retrieve the data
Anything shorter would imply that the client has hardcoded knowledge of the API's URLs. However, from a web app point of view, this seems grossly inefficient.
Is there some trick I'm missing? Is there a way for the client to "know" how to get more detail about book ID 24 without somehow having the /api/books/24/ endpoint hardcoded?
If you request the resource /books/the-great-gatsby/24/ from the server, the server should respond with something specific to that URL. Currently you are probably analyzing window.location, which is a bit of a hack.
If /books/the-great-gatsby/24/ is static content, then you have very little choice: you store the client's current state either explicitly (i.e. /books?data=api/books/24) or implicitly (/books/the-great-gatsby/24/), which then leads to the client having to know how to translate that into an API resource.
The RESTful way is to use hypertext to indicate where any related resources (i.e. the data you need to render) are, which makes a <link> tag an appropriate choice.
I.e. ditch the static content, and render /books/the-great-gatsby/24/ with a <head><link href="api/books/24" ...></head>
However, if you always retain control of your client side and don't plan to publish the API to third parties, you might be more productive ditching RESTful and just go RESTish.
The Resource URL Locator pattern
In this answer: user is (the human interacting with) the internet browser, client is the Single Page Application (SPA) and server is the REST API.
Deep linking is a convenience of the client to the user; the client itself may still not have knowledge of the server's URLs, so the client must start at the root URL of the server. The client uses content negotiation to indicate which media type it needs. The first request of the client to the server when bootstrapping itself could be as follows:
GET /?id=24 HTTP/1.1
Accept: application/vnd.company.book+json
Optionally, the client uses the id querystring parameter as a hint to the server to select the specific resource it is looking for.
When the server has determined which resource the client is looking for it can respond with a redirect to the canonical URL of the resource:
HTTP/1.1 303 See Other
Location: https://example.com/api/books/24
The client can now follow the redirect and get the resource it needs to bootstrap the application.
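Sketching the client side of that exchange: the only things the client hardcodes are the server root, the media type it wants, and the convention that the trailing numeric path segment is the resource id hint. The media type is the one from the example above; the path parsing is an assumption about the deep-link format, and the actual fetch/redirect is left out so the sketch stays offline:

```javascript
// Build the bootstrap request for a deep link. The client never constructs
// the canonical API URL itself — it sends a hint to the root and lets the
// server's 303 redirect supply the real location.
function bootstrapRequestFor(deepLinkPath) {
  // e.g. "/books/the-great-gatsby/24/" -> last non-empty segment is "24"
  const segments = deepLinkPath.split('/').filter(Boolean);
  const last = segments[segments.length - 1];
  return {
    // Only pass the id hint when the trailing segment looks like one
    url: /^\d+$/.test(last) ? `/?id=${last}` : '/',
    headers: { Accept: 'application/vnd.company.book+json' },
  };
}

console.log(bootstrapRequestFor('/books/the-great-gatsby/24/'));
// { url: '/?id=24', headers: { Accept: 'application/vnd.company.book+json' } }
```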
Evert's comment got me thinking. Isn't a deep link or a bookmark just a continuation of application state from a previous point in time? It doesn't really matter how much time has passed since the previous application state transition.
You could say that the 'current' application state in HATEOAS is the last followed link. The current state must be stored somewhere and it might as well be stored in the application URL.
Starting the application at a deep link indicates to the application that it should rebuild the application state by requesting the resource indicated by the application URL. If the resource is no longer available or has moved, the server should respond with a 404 Not Found or 301 Moved Permanently respectively.
With this approach the server is still in control of the URLs. The application follows the hypermedia links in the server's responses and doesn't generate URLs itself.

Programmatically POSTing a form is not doing what my browser is doing. Why?

I'm trying to programmatically submit a form on a web site that I do not own. I'm trying to simulate what I would manually do with a web browser. I am issuing an HTTP POST request using an HTTP library.
For a reason that I don't know I am getting a different result (an error, a different response, ...) when I programmatically submit the form compared to a manual submission in a web browser.
How can that be and how can I find out what mistake I have made?
This question is intentionally language and library agnostic. I'm asking for the general procedure for debugging such issues.
All instances of this problem are equivalent. Here is how to resolve all of them:
The web site you are posting to cannot tell different clients apart. It cannot find out whether you are using a web browser or an HTTP library. Therefore, only what you send matters for the decision of the server on how to react.
If you observe different responses from the server this means that you are sending different requests.
A few important things that you probably have to send correctly:
URL
Verb (GET or POST)
Headers: Host, User-Agent, Content-Length
Cookies (the Cookie and Set-Cookie headers)
The request body
Use an HTTP sniffer like Fiddler to capture what you are programmatically sending and what your browser is sending. Compare the requests for differences. Eliminate the differences one by one to see which one caused the problem. You can drag an HTTP request into the Composer Window to be able to modify and reissue it.
If you have truly eliminated all differences between the manual and the programmatic requests, it is impossible to still get a different result.
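As a sketch of the checklist above, here is one way to assemble the request so each piece is explicit and easy to diff against the sniffer capture. The URL, form fields, and cookie value are placeholders; the point is the shape: a urlencoded body whose Content-Type, Content-Length, and Cookie headers match what the browser actually sent:

```javascript
// Build a request description mirroring a browser form submission.
// All concrete values are hypothetical — substitute what your
// sniffer shows the browser sending.
function buildFormPost(url, fields, cookies) {
  // Browsers send form fields urlencoded in the body by default
  const body = new URLSearchParams(fields).toString();
  return {
    method: 'POST',
    url,
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      'Content-Length': String(Buffer.byteLength(body)),
      // The Cookie header is one string of "name=value" pairs
      'Cookie': Object.entries(cookies).map(([k, v]) => `${k}=${v}`).join('; '),
    },
    body,
  };
}

const req = buildFormPost(
  'https://example.com/login',        // placeholder URL
  { user: 'alice', pass: 's3cret' },  // placeholder form fields
  { session: 'abc123' }               // cookie captured from the browser
);
console.log(req.body); // "user=alice&pass=s3cret"
```

A frequent culprit is a hidden form field (e.g. a CSRF token) that the browser submits automatically but the programmatic request omits — diffing the two bodies makes that jump out.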

How do I call a POST method to a RESTful service in Sencha?

ANSWER: The problem is that I am trying to make a cross-domain call using Ext.Ajax.request().
I'm designing an app that will be making http POST and GET requests to a RESTful service. The service is already in place and if I use a utility like soapUI or the Chrome Rest Client to make the calls they are successful. Someone might ask: "Is this a cross domain call?" My answer: I don't know. I can tell you that the service is not hosted on my own computer but again, if I use soapUI or the Chrome Rest Client plugin for my Chrome browser I can successfully make the calls.
However, if I try to make them using Ext.Ajax.request(), they fail almost immediately.
If I use Ext.util.JSONP.request() it won't let me do a POST. What is the solution?
You can specify the request method in the Ajax call (note that Ext.Ajax.request is a plain method call, not a constructor, so there is no new):
Ext.Ajax.request({ url: 'http://foo', method: 'POST', ... });
See also: http://docs.sencha.com/ext-js/4-0/#!/api/Ext.Ajax
Hope this helps.
The answer is that Ext.Ajax.request() cannot be used to call a service on a different origin. This is called cross-site/cross-domain scripting and is blocked by all browsers by default (the same-origin policy). It is possible to disable this security feature in the browser, but ONLY use that for development purposes.
If you are developing for a mobile device, I recommend developing with browser security disabled on your computer, and then using PhoneGap to package your app. PhoneGap allows cross-domain/cross-site requests, so an Ext.Ajax.request() to an outside service will work when packaged with PhoneGap.
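Note that the server can also opt in to cross-origin calls by sending CORS response headers, in which case the browser allows the Ajax request without any security features being disabled. A sketch of the headers involved; the allowed origin is a placeholder, and a real server would also answer OPTIONS preflight requests with these same headers:

```javascript
// Build the CORS response headers a server would send to permit
// cross-origin Ajax from one specific front-end origin (placeholder).
function corsHeaders(allowedOrigin) {
  return {
    // Which origin may read the response
    'Access-Control-Allow-Origin': allowedOrigin,
    // Which verbs the preflight should approve
    'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE',
    // Which request headers the client may send
    'Access-Control-Allow-Headers': 'Content-Type, Authorization',
  };
}

console.log(corsHeaders('https://my-frontend.example'));
```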