Why does the DevTools "Size" column change between requests?

I understand that the two "Size" values in DevTools represent the data transferred over the wire (top value) and the resource size (bottom value), but what would cause the top value to change between requests with the cache disabled?
My test is a simple endpoint that returns the body "ok". The bottom value in all requests shows 2B as expected. Intermittently, however, the top value switches between 63B and 33B when refreshing. Why does this change? Everything else about each request in DevTools looks identical.

Related

How do I translate the following POST request into ESP8266 AT-command format?

I've got a working local website that takes in HTML form data.
The fields are:
Temperature
Humidity
The server successfully receives the data and spits out a graph updated with the new entries.
Using a browser tool, I was able to capture the actual POST request as follows:
POST http://127.0.0.1:5000/add_data
Content-Length: 30

Temperature=25.4&Humidity=52.2
Now, I want to migrate from using the human interface browser with manual entries to an ESP01 device using AT commands.
According to the ESP AT-commands documentation, a POST request is performed using the following command:
AT+HTTPCPOST=
Find the link below for the full description of the command.
I cannot seem to get this POST request working. The ESP01 device immediately returns an "ERROR" message, as though it did not even try to send the request, which suggests the syntax might be wrong.
Among many variations, the following is my best attempt:
AT+HTTPCPOST="http://MYIPADDR:5000/add_data",30,2,"Temperature: 25.4","Humidity: 52.2"
With MYIPADDR above replaced with my actual IP address.
How do I translate a POST request into ESP01 AT-command format, and are there any prerequisites that need to be in place to perform such a request?
I did connect the ESP01 device to the WiFi network.
Here's the link to the POST AT command description:
https://docs.espressif.com/projects/esp-at/en/release-v2.2.0.0_esp8266/AT_Command_Set/HTTP_AT_Commands.html#cmd-httpcpost
The documentation says:
AT+HTTPCPOST=<"url">,<length>[,<http_req_header_cnt>][,<http_req_header>..<http_req_header>]
Response:
OK
The symbol > indicates that AT is ready for receiving serial data, and you can enter the data now. When the message length determined by the <length> parameter is met, the transmission starts.
...
Parameters
<"url">: HTTP URL.
<length>: HTTP data length to POST. The maximum length is equal to the system allocable heap size.
<http_req_header_cnt>: the number of <http_req_header> parameters.
[<http_req_header>]: you can send more than one request header to the server.
You're sending:
AT+HTTPCPOST="http://MYIPADDR:5000/add_data",30,2,"Temperature: 25.4","Humidity: 52.2"
The length is 30. The problem is that everything after the length is treated as HTTP header fields, but you need to send the variables in the body. So the command is:
AT+HTTPCPOST="http://MYIPADDR:5000/add_data",30
followed, after the ESP-01 sends the > character, by the body on the next line:
Temperature=25.4&Humidity=52.2
Because you passed 30 as the body length, the ESP-01 will read exactly 30 characters after the end of the AT command and send them as the POST body. If the size of that data changes (for instance, a temperature of 2.2 would be one digit less), you'll need to send the new length rather than 30.
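Putting it together, the full exchange over the serial port would look something like this (MYIPADDR is a placeholder, and the exact success lines, such as SEND OK, may vary between ESP-AT firmware versions):

AT+HTTPCPOST="http://MYIPADDR:5000/add_data",30

OK

>
Temperature=25.4&Humidity=52.2

SEND OK

Assuming the ESP01 has already joined the WiFi network (e.g. via AT+CWJAP), the POST can be sent directly like this.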

REST API: allow update of a resource depending on the state of the resource

I have recently read the guide on implementing RESTful APIs in Spring Boot from the official Spring.io tutorials website (link to tutorial: https://spring.io/guides/tutorials/rest/).
However, something in the guide seemed to contradict my understanding of how REST APIs should be built. I am now wondering if my understanding is wrong or if the guide is not of as high a quality as I expected it to be.
My problem is with this implementation of a PUT method to update the status of an order:
@PutMapping("/orders/{id}/complete")
ResponseEntity<?> complete(@PathVariable Long id) {
    Order order = orderRepository.findById(id) //
            .orElseThrow(() -> new OrderNotFoundException(id));

    if (order.getStatus() == Status.IN_PROGRESS) {
        order.setStatus(Status.COMPLETED);
        return ResponseEntity.ok(assembler.toModel(orderRepository.save(order)));
    }

    return ResponseEntity //
            .status(HttpStatus.METHOD_NOT_ALLOWED) //
            .header(HttpHeaders.CONTENT_TYPE, MediaTypes.HTTP_PROBLEM_DETAILS_JSON_VALUE) //
            .body(Problem.create() //
                    .withTitle("Method not allowed") //
                    .withDetail("You can't complete an order that is in the " + order.getStatus() + " status"));
}
From what I read at https://restfulapi.net/rest-put-vs-post/, a PUT method should be idempotent, meaning that you should be able to call it multiple times in a row without causing problems. However, in this implementation only the first PUT request would have an effect, and all further PUT requests to the same resource would result in an error message.
Is this okay for a RESTful API? If not, what would be a better method to use? I don't think POST would be any better.
Also, in the same guide, they use the DELETE method in a similar way to change the status of an order to cancelled:
@DeleteMapping("/orders/{id}/cancel")
ResponseEntity<?> cancel(@PathVariable Long id) {
    Order order = orderRepository.findById(id) //
            .orElseThrow(() -> new OrderNotFoundException(id));

    if (order.getStatus() == Status.IN_PROGRESS) {
        order.setStatus(Status.CANCELLED);
        return ResponseEntity.ok(assembler.toModel(orderRepository.save(order)));
    }

    return ResponseEntity //
            .status(HttpStatus.METHOD_NOT_ALLOWED) //
            .header(HttpHeaders.CONTENT_TYPE, MediaTypes.HTTP_PROBLEM_DETAILS_JSON_VALUE) //
            .body(Problem.create() //
                    .withTitle("Method not allowed") //
                    .withDetail("You can't cancel an order that is in the " + order.getStatus() + " status"));
}
This looks very wrong to me. We are not deleting anything here; it is basically the same as the previous PUT method, just with a different state we want to move to. Am I correct to assume that this part of the tutorial is bogus?
TL;DR: what HTTP method is right to use when you want to advance the status of a resource to the next stage without giving an option of going back to an earlier stage? Basically an update/patch that will invalidate its own pre-conditions.
something in the guide seemed to contradict my understanding of how REST API's should be built. I am now wondering if my understanding is wrong or if the guide is not of as high a quality as I expected it to be.
I wouldn't consider this guide to be a reliable authority - the described resource model has some very questionable choices.
From what I read at https://restfulapi.net/rest-put-vs-post/ a PUT method should be idempotent; meaning that you should be able to call it multiple times in a row without it causing problems. However, in this implementation only the first PUT request would have an effect and all further PUT requests to the same resource would result in an error message.
The authoritative definition of idempotent semantics in HTTP is currently RFC 7231.
A request method is considered "idempotent" if the intended effect on the server of multiple identical requests with that method is the same as the effect for a single such request.
Note: "effect", not "response".
PUT /orders/12345/complete
means "please replace the current representation of /orders/12345/complete with the representation in the payload". In other words "save this file on top of your current copy". Saving the same file two or three times in row produces the same effect as saving the file once, so that's "idempotent".
HTTP does not define exactly how a PUT method affects the state of an origin server beyond what can be expressed by the intent of the user agent request and the semantics of the origin server response. It does not define what a resource might be, in any sense of that word, beyond the interface provided via HTTP. It does not define how resource state is "stored", nor how such storage might change as a result of a change in resource state, nor how the origin server translates resource state into representations. Generally speaking, all implementation details behind the resource interface are intentionally hidden by the server. -- RFC 7231
So in their cURL example:
PUT /orders/4/complete HTTP/1.1
Host: localhost:8080
User-Agent: curl/7.54.0
Accept: */*
The meaning of this message is "replace the current representation of /orders/4/complete with an empty representation". But the origin server gets to choose how to do that, and which standardized responses to return to the client.
So this is fine.
All work is transacted by politely placing documents in in-trays, and then some side effect of placing that document in an in-tray causes some business activity to occur -- Jim Webber, 2011.
In this case, the document we are putting into the "in-tray" happens to be blank.
@DeleteMapping("/orders/{id}/cancel")
I would never approve that choice in a code review. DELETE (like PUT) has semantics in the "transfer of documents over a network domain".
The DELETE method requests that the origin server remove the association between the target resource and its current functionality. In effect, this method is similar to the rm command in UNIX: it expresses a deletion operation on the URI mapping of the origin server rather than an expectation that the previously associated information be deleted.
Trying to hijack the method because its spelling is kind of like the domain action is the wrong heuristic to use in choosing methods.
Relatively few resources allow the DELETE method -- its primary use is for remote authoring environments, where the user has some direction regarding its effect.
The point being that we have a general purpose document manipulation interface, and we are using that interface as a facade that allows us to drive business activity. So we should be using our standardized message semantics the same way every other page on the web does.
@PutMapping would be defensible, using the same justification as we did for /complete.
what HTTP method is right to use when you want to advance the status of a resource to the next stage without giving an option of going back to an earlier stage? Basically an update/patch that will invalidate its own pre-conditions.
PUT, PATCH, and POST are all appropriate methods to use when editing the representation of a resource. Use PUT or PATCH when you are sending a replacement representation for the resource, use POST when you are asking the server to calculate what the edit to the representation should be.
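For illustration, the two styles side by side (the POST message shape here is hypothetical, not from the tutorial). A PUT re-sends the same replacement representation, so repeating it doesn't change the intended effect:

PUT /orders/4/complete HTTP/1.1
Host: localhost:8080

whereas a POST asks the server to work out the edit itself, with no idempotency promise:

POST /orders/4 HTTP/1.1
Host: localhost:8080
Content-Type: application/json

{"status": "COMPLETED"}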

Redirect S3 subfolder to another domain with Cloudfront

I have a static showcase website hosted on S3 and using CloudFront, and an online shop (Prestashop) and a blog (Wordpress), both hosted on OVH servers.
I want to make a hidden redirection on two subfolders of my static website so it acts as though my 3 websites are on the same host, using the following pattern:
mysite.com/ --> normal behaviour
mysite.com/blog/ --> myblog.com/
mysite.com/store/ --> mystore.com/
Of course, I need every request to be handled that way, ending up with something like this:
mysite.com/store/fr/1-myproduct.html
returns what
mystore.com/fr/1-myproduct.html
would have returned.
This seems really tricky, since I've found no real solution to my problem, and at this point I doubt it is even possible to do such a thing.
I considered using a proxy, but wouldn't that be like using a sledgehammer to get rid of a fly?
I have searched for any possible redirection and was only able to find subdomain/domain redirections...
So my question would be "How can I do that?"
But right now I'm wondering "Can one do that?"
P.S.: It's my first post ever. I'm used to searching for a long time before posting, and I always end up finding a solution, except for now. Any suggestion is welcome.
I'll check out proxies since they're my last hope.
Wait.
I have a static showcase website hosted on S3 and using CloudFront
CloudFront is a reverse proxy.
Depending on how much flexibility you have with the other two sites, CloudFront can potentially take you where you want to go, combining multiple independent sites under one hostname.
This is done by creating additional origin servers for your distributions and then creating additional cache behaviors, with path patterns matching the additional paths, such as /blog and /blog/*, that send requests to the alternate origins.
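As a sketch, in CloudFormation terms the extra behaviors might look like this (the origin IDs are assumptions, and the required cache/forwarding settings are omitted for brevity):

CacheBehaviors:
  - PathPattern: "/blog/*"
    TargetOriginId: blog-origin    # additional origin pointing at the blog host
    ViewerProtocolPolicy: redirect-to-https
  - PathPattern: "/store/*"
    TargetOriginId: store-origin   # additional origin pointing at the store host
    ViewerProtocolPolicy: redirect-to-https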
There is, however, a catch. CloudFront can't remove the matched pattern, so mainsite.example.com/blog/hello-world, matching the pattern /blog/* will be forwarded to blog.example.com/blog/hello-world -- not to blog.example.com/hello-world.¹ This will require changes to the other sites in order to integrate them in this way.
Unless...
If you already have unique path patterns, no problem, but if the extra sites' content is in the root of each individual site, you see the issue here. Not insurmountable, but still an issue.
Your only alternative will be a reverse proxy behind CloudFront to rewrite those paths and send the requests on to the alternate servers. Truly not a big deal either, since HAProxy, Nginx, and Varnish all offer such functionality and can handle a large number of proxied requests on surprisingly small hardware.
The recently (2017) released Lambda@Edge service allows you to rewrite paths on the fly, as requests are processed, if necessary.
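For instance, a minimal Lambda@Edge origin-request handler that strips a /blog prefix before the request is forwarded might look like this (a sketch, assuming the /blog/* behavior above, not tested against your setup):

'use strict';

exports.handler = (event, context, callback) => {
    // The CloudFront origin-request event carries the request about to be forwarded
    var request = event.Records[0].cf.request;

    // Drop the leading "/blog" so /blog/hello-world is fetched as /hello-world;
    // fall back to "/" if the URI was exactly "/blog"
    request.uri = request.uri.replace(/^\/blog/, '') || '/';

    return callback(null, request);
};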
But the bottom line is that the reason you have not found a real solution other than a proxy is that there is no alternative -- every path at a given hostname must be handled in one logical place -- one group of one or more identically-configured endpoints. In the case of CloudFront, the logical place is physically distributed globally.
¹ CloudFront, natively, can actually prepend onto the path before forwarding the request, so requests for mainsite.example.com/bar/fizz can be forwarded to foosite.example.com/foo/bar/fizz by setting the origin path to /foo when you configure the origin. But it can't remove path parts or otherwise modify the path without also using Lambda@Edge. In the scenario discussed above, you would leave the origin path blank when configuring the additional origin servers.
Single S3 bucket with the following behavior:
domain.com -> serves the files from the root of the bucket
domain.com/blog -> serves the files from a subfolder in the S3 bucket (this is not default behavior)
How to:
https://aws.amazon.com/ru/blogs/compute/implementing-default-directory-indexes-in-amazon-s3-backed-amazon-cloudfront-origins-using-lambdaedge/
Lambda@Edge code:
'use strict';

exports.handler = (event, context, callback) => {
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;

    // Extract the URI from the request
    var olduri = request.uri;

    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '/index.html');

    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);

    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;

    // Return to CloudFront
    return callback(null, request);
};
Summary of the code above: Lambda@Edge rewrites the path "/blog/" to "/blog/index.html".

Invalid signature returned when previewing 7digital track

I am attempting to preview a track via the 7digital API. I have utilised the reference app to test the endpoint here:
http://7digital.github.io/oauth-reference-page/
I have specified what I consider to be a correctly formatted query, as in:
http://previews.7digital.com/clip/8514023?oauth_consumer_key=MY_KEY&country=gb&oauth_nonce=221946762&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1456932878&oauth_version=1.0&oauth_signature=c5GBrJvxPIf2Kci24pq1qD31U%2Bs%3D
and yet, regardless of what parameters I enter, I always get an invalid signature as a response. I have also incorporated this into my JavaScript code using the same OAuth signature library as the reference page, and yet still get the same invalid signature returned.
Could someone please shed some light on what I may be doing incorrectly?
Thanks.
I was able to sign it using:
url = http://previews.7digital.com/clip/8514023
valid consumer key & consumer secret
field 'country' = 'GB'
Your query-string parameters look a bit out of order. For OAuth, the parameters in the base string used for signing must be in alphabetical order, so country would come first in this case. Once the signature is generated, the order in the final request doesn't matter, but the tool above applies them back in the same order (so country is first).
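For illustration, the signature base string for your request would be built from the alphabetically sorted parameters (shown here with line breaks for readability; the real base string is a single line):

GET&http%3A%2F%2Fpreviews.7digital.com%2Fclip%2F8514023&
country%3Dgb%26
oauth_consumer_key%3DMY_KEY%26
oauth_nonce%3D221946762%26
oauth_signature_method%3DHMAC-SHA1%26
oauth_timestamp%3D1456932878%26
oauth_version%3D1.0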
Can you make sure there aren't any spaces around your key/secret? The tool doesn't appear to strip whitespace.
If you have more specific problems it may be best to get in touch with 7digital directly - https://groups.google.com/forum/#!forum/7digital-api

Get result of REST sampler in JMeter

Please show me how I can get the result of a REST sampler in JMeter. I need it to check whether my sampler is right or wrong.
Thanks,
You can use a View Results Tree listener and uncheck the Errors and Success check boxes in order to show all responses you receive. The Response Data tab will show the response data :) which you can format using the drop-down list in the lower-left part of the View Results Tree component, below the request tree.
When you run a test with a large number of requests, I suggest you check Errors (to record only the requests that failed) to avoid filling up the RAM.
Or even better (for advanced test verification), you can use Assertions to mark the requests/responses that failed (which doesn't have to mean only "response code != 200"; you may want to include your business logic and check an arbitrary response header/param).
Add a 'View Results Tree' listener to your test plan to see how the tests run.
When clicking on a test you can view the request/response data.
Next, you might want to validate that response; add a 'Response Assertion' to your test, where you can match the response against anything you want.