enableviewstatemac=true - asp.net-3.5

Does setting EnableViewStateMac to "true" affect the site's performance? Could you give me some explanation?

Yes, it will affect the site's performance. Straight from MSDN:
A view-state MAC is an encrypted version of the hidden variable that a page's view state is persisted to when sent to the browser. If true, the encrypted view state is checked to verify that it has not been tampered with on the client. Do not set EnableViewStateMac to true if performance is a key consideration.
That check has to do something, and something is more expensive than nothing. The larger the viewstate you're dealing with, the more overhead this will put on your requests. That being said, unless you're running a really high-traffic site or have really large viewstate in your pages, you probably won't notice a thing server-side. On the client, however, they will be getting a larger page, which will probably have more of an impact than anything. That means they're uploading more to the server on postback; that's most likely the pain point caused by enabling this.
Keep in mind just how many things happen when the server executes a page; options like this are "drop in the bucket" scenarios in most cases, though there are of course exceptions. Current servers are powerful enough that settings like this typically don't make any noticeable impact individually, but there are of course cases where they do, for example if you have megabytes of viewstate for some reason.

The EnableViewStateMac property specifies that, on receipt of each client request, a check is performed to ensure that the client has not tampered with the control/hidden data they were served.
This is important because ASP.NET uses a stateless mechanism and relies on the changes that occur on the client side being passed back as instructions to the page on postback to determine what changes/events have fired. If the client were able to tamper with these with impunity, they could potentially alter the page's behaviour to their own ends.
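To make that check concrete, here is a minimal sketch of what MAC validation of a serialized view state conceptually involves, written against Node's built-in crypto module rather than ASP.NET; the key, the choice of HMAC-SHA256, and the payload layout are illustrative assumptions, not ASP.NET's actual algorithm. It also shows where the cost comes from: the hash scales with the size of the state, and the MAC adds a few dozen bytes to the hidden field.

    import { createHmac, timingSafeEqual } from "node:crypto";

    // Hypothetical server-side secret (ASP.NET derives its key from the machineKey configuration).
    const MACHINE_KEY = Buffer.from("0123456789abcdef0123456789abcdef", "hex");

    // Append a MAC when the serialized state is sent to the browser.
    function signViewState(serializedState: Buffer): Buffer {
      const mac = createHmac("sha256", MACHINE_KEY).update(serializedState).digest();
      return Buffer.concat([serializedState, mac]); // state || MAC is roughly what the hidden field carries
    }

    // Re-compute and compare the MAC when the state comes back on postback.
    function verifyViewState(posted: Buffer): Buffer {
      const state = posted.subarray(0, posted.length - 32);
      const mac = posted.subarray(posted.length - 32);
      const expected = createHmac("sha256", MACHINE_KEY).update(state).digest();
      if (!timingSafeEqual(mac, expected)) {
        throw new Error("View state MAC validation failed"); // the hidden field was tampered with
      }
      return state;
    }

Signing on the way out and verifying on the way back in is the "something" the earlier answer refers to; it is cheap per request, but it is work that grows with the size of the viewstate.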

Related

Using GET verb to update in REST API?

I know the use of HTTP verbs is based on a standard specification. But my question is: if I use "GET" for update operations and write code logic to update, does it create issues in any scenario? Apart from the standard, what else could be the reason to use these verbs for a specific purpose only?
my question is: if I use "GET" for update operations and write code logic to update, does it create issues in any scenario?
Yes.
A simple example - suppose the network between the client and the server is unreliable; specifically, for a time, HTTP responses are being lost. A general purpose component (like a web proxy) might time out, and then, noticing that the method token of the request is GET, resend the request a second/third/fourth time, with your server performing its update on every GET request.
Let us further assume that these multiple update operations lead to an undesirable outcome; where do we properly affix blame?
Second example: you send someone a copy of the link to the update operation, so that they can send you a request at the appropriate time. But suppose you send that link to them in an email, and the email client recognizes the uri and (as a performance optimization) pre-fetches the link, triggering your update operation too early. Where do we properly affix the blame?
HTTP does not attempt to require the results of a GET to be safe. What it does is require that the semantics of the operation be safe, and therefore it is a fault of the implementation, not the interface or the user of that interface, if anything happens as a result that causes loss of property -- Fielding, 2002
In these, and other, examples, blame is correctly affixed to your server, because GET has a standardized meaning which includes the constraint that the semantics of the request are safe.
That's not to say that you can't have side effects when handling a GET request; "hit counters" are almost as old as the web itself. You have a lot of freedom in your implementation; so long as you respect the uniform interface, there won't be too much trouble.
Experience report: one of our internal tools uses GET requests to trigger scheduling; in our carefully controlled context (which is not web scale), we get away with it, and have for a very long time.
To borrow your language, there are certainly scenarios that would give us problems; but given our controls we manage to avoid them.
I wouldn't like our chances, though, if requests started coming in from outside of our carefully controlled context.
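If the update semantics do need to be explicit, here is a hedged Express-style sketch (the routes and the in-memory store are invented for illustration) that keeps the unsafe work behind POST, so a replaying proxy or a prefetching mail client can only ever hit the safe, read-only handler:

    import express from "express";

    const app = express();
    app.use(express.json());

    // Hypothetical in-memory store standing in for a real database.
    const orders = new Map<string, { status: string }>([["42", { status: "new" }]]);

    // Safe: a replayed or prefetched GET only re-reads the resource.
    app.get("/orders/:id", (req, res) => {
      const order = orders.get(req.params.id);
      if (!order) return res.sendStatus(404);
      res.json(order);
    });

    // The update lives behind POST, so generic components won't replay or prefetch it blindly.
    app.post("/orders/:id/status", (req, res) => {
      const order = orders.get(req.params.id);
      if (!order) return res.sendStatus(404);
      order.status = String(req.body.status);
      res.json(order);
    });

    app.listen(3000);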
I think it's a decent question. You're asking a hypothetical: is there any value to doing the right thing other than the fact that we agree to use GET for fetching? That is, is there value beyond the fact that it's 'semantically nice'? A similar question in HTML might be: "Is it OK to use a <div> with an onclick instead of a <button>?" (The answer is no.)
There certainly is. Clients, servers and intermediaries all change their behavior depending on what method is used. Even if your server can process GET for updates, and you build a client that uses this, your browser might still get confused.
If you are interested in this subject, don't ask on a forum; read the spec. The HTTP specification tells you what clients, servers and proxies should do when they encounter certain methods, statuses and headers.
Start at RFC 7231.

Should this be a GET or a PATCH request?

If I make a request to get some data from a database without sending any updates, but I'm marking the record in the database to say the data has been fetched, does that make it a PATCH request or a GET?
Short answer: No, it is still a GET.
RFC 7231 defines safe:
Request methods are considered "safe" if their defined semantics are essentially read-only....
This definition of safe methods does not prevent an implementation from including behavior that is potentially harmful, that is not entirely read-only, or that causes side effects while invoking a safe method. What is important, however, is that the client did not request that additional behavior and cannot be held accountable for it. For example, most servers append request information to access log files at the completion of every response, regardless of the method, and that is considered safe even though the log storage might become full and crash the server. Likewise, a safe request initiated by selecting an advertisement on the Web will often have the side effect of charging an advertising account.
So if the client is trying to retrieve a current representation of the resource, the fact that your implementation happens to do a bit of bookkeeping on the side doesn't change the semantics of the request.
Part of the point of an HTTP front end is that clients are completely insulated from the underlying implementation details of the server -- everything looks like a dumb web site from the outside.
The HTTP spec is rather clear on that if you read through the definition of safe:
Request methods are considered "safe" if their defined semantics are essentially read-only; i.e., the client does not request, and does not expect, any state change on the origin server as a result of applying a safe method to a target resource. Likewise, reasonable use of a safe method is not expected to cause any harm, loss of property, or unusual burden on the origin server.
This definition of safe methods does not prevent an implementation from including behavior that is potentially harmful, that is not entirely read-only, or that causes side effects while invoking a safe method. What is important, however, is that the client did not request that additional behavior and cannot be held accountable for it. For example, most servers append request information to access log files at the completion of every response, regardless of the method, and that is considered safe even though the log storage might become full and crash the server. Likewise, a safe request initiated by selecting an advertisement on the Web will often have the side effect of charging an advertising account.
...
So a state change caused by a GET-triggered download is fine as long as the client did not request it and is not aware of it.
In certain situations, though, exposing a state change via GET may be risky. Just think of a crawler that invokes a couple of URIs and thereby orders some pizza or the like. According to the spec this is fine, and the crawler must not be held accountable for that order; the spec is simply telling you that the fault is yours.
With that being said, you can always use POST if you feel uncomfortable with certain HTTP operations, as POST literally allows you to process the request according to the resource's own semantics.
Which leads me to the next point: re-thinking your design. Returning a document that includes its own state is somewhat strange in my opinion. Usually such information is metadata about a document, not the resource itself. Here you could either use HTTP headers to communicate such information to the client, or model the state of that resource as a further resource and give the client a link to look it up if it is interested.
Anyway, while not elegant, performing a state change when retrieving a resource via GET is not forbidden. I would, though, give a bit more thought to whether you want to include the state within the resource itself or expose it via its own resource.
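As a hedged Express-style sketch of that last suggestion (the route, the store, and the Last-Fetched-At header name are invented for illustration): the GET handler returns the representation, records the "fetched" flag purely as server-side bookkeeping the client never asked for, and surfaces that metadata in a response header rather than inside the document itself.

    import express from "express";

    const app = express();

    // Hypothetical store: the document plus bookkeeping metadata kept alongside it.
    const records = new Map<string, { body: string; fetchedAt?: string }>([
      ["1", { body: "some data" }],
    ]);

    app.get("/records/:id", (req, res) => {
      const record = records.get(req.params.id);
      if (!record) return res.sendStatus(404);

      // Side effect the client did not request: note that the record has been fetched.
      record.fetchedAt = record.fetchedAt ?? new Date().toISOString();

      // Expose the metadata out of band instead of baking it into the representation.
      res.set("Last-Fetched-At", record.fetchedAt);
      res.json({ body: record.body });
    });

    app.listen(3000);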

Is Precaching with Workbox mandatory for PWA?

I added a few workbox.routing.registerRoute calls using staleWhileRevalidate to my app, and so far it has passed most Lighthouse tests under PWA. I am not currently using precaching at all. My question is: is it mandatory? What am I missing without precaching? workbox.routing.registerRoute is already caching everything I need. Thanks!
Nothing is mandatory. :-)
Using stale-while-revalidate for all of your assets, as well as for your HTML, is definitely a legitimate approach. It means that you don't have to do anything special as part of your build process, for instance, which could be nice in some scenarios.
Whenever you're using a strategy that reads from the cache, whether it's via precaching or stale-while-revalidate, there's going to be some sort of revalidation step to ensure that you don't end up serving out of date responses indefinitely.
If you use Workbox's precaching, that revalidation is efficient, in that the browser only needs to make a single request for your generated service-worker.js file, and that response serves as the source of truth for whether anything precached actually changed. Assuming your precached assets don't change that frequently, the majority of the time your service-worker.js will be identical to the last time it was retrieved, and there won't be any further bandwidth or CPU cycles used on updating.
If you use runtime caching with a stale-while-revalidate policy for everything, then that "while-revalidate" step happens for each and every response. You'll get the "stale" response back to the page almost immediately, so your overall performance should still be good, but you're incurring extra requests made by your service worker "in the background" to re-fetch each URL, and update the cache. There's an increase in bandwidth and CPU cycles used in this approach.
Apart from using additional resources, another reason you might prefer precaching to stale-while-revalidate is that you can populate your full cache ahead of time, without having to wait for the first time they're accessed. If there are certain assets that are only used on a subsection of your web app, and you'd like those assets to be cached ahead of time, that would be trickier to do if you're only doing runtime caching.
And one more advantage offered by precaching is that it will update your cache en masse. This helps avoid scenarios where, e.g., one JavaScript file was updated by virtue of being requested on a previous page, but then when you navigate to the next page, the newer JavaScript isn't compatible with the DOM provided by your stale HTML. Precaching everything reduces the chances of these versioning mismatches happening. (Especially if you do not enable skipWaiting.)
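For comparison, here is a minimal service-worker sketch assuming the workbox-sw CDN global and a v4-era API (the CDN version, URL patterns, and revision hashes are placeholders): precaching covers the app shell with a single manifest check per service-worker update, while stale-while-revalidate handles everything else at the cost of a background re-fetch on each request.

    // service-worker.ts, compiled into the service worker bundle.
    // Assumes the workbox-sw CDN script has already been loaded, e.g.:
    //   importScripts("https://storage.googleapis.com/workbox-cdn/releases/4.3.1/workbox-sw.js");
    declare const workbox: any; // provided by workbox-sw.js at runtime

    // Precache the app shell: one manifest check when service-worker.js changes,
    // then no per-request revalidation traffic for these URLs.
    workbox.precaching.precacheAndRoute([
      { url: "/index.html", revision: "abc123" }, // placeholder revision hashes
      { url: "/app.js", revision: "def456" },
    ]);

    // Runtime caching: respond from the cache immediately, then re-fetch in the background
    // on every request (extra bandwidth and CPU, but nothing to generate at build time).
    workbox.routing.registerRoute(
      /\.(?:png|jpg|svg|css)$/,
      new workbox.strategies.StaleWhileRevalidate({ cacheName: "runtime-assets" })
    );

In practice you would usually let a tool such as workbox-cli or workbox-build inject the precache manifest (the url/revision pairs) at build time rather than maintaining the revisions by hand.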

Are single use download links in accordance with HTTP spec?

In the context of a restful web service, is it acceptable to have side effects for GET methods?
Single-use download links, for example:
GET /downloads/664d92b3-b373-4dac-a4fb-7a41d015109a
will return 200 and "the thing", and 404 on the next request.
The HTTP spec says GET methods should be safe, and according to https://www.rfc-editor.org/rfc/rfc7231#section-4.2.1:
Request methods are considered "safe" if their defined semantics are essentially read-only; i.e., the client does not request, and does not expect, any state change on the origin server as a result of applying a safe method to a target resource.
and
This definition of safe methods does not prevent an implementation from including behavior that is potentially harmful, that is not entirely read-only, or that causes side effects while invoking a safe method. What is important, however, is that the client did not request that additional behavior and cannot be held accountable for it.
Several clarifying examples are provided which make me think safe methods are not allowed to purposefully remove the resource.
For example, most servers append request information to access log files at the completion of every response, regardless of the method, and that is considered safe even though the log storage might become full and crash the server.
And
Likewise, a safe request initiated by selecting an advertisement on the Web will often have the side effect of charging an advertising account.
And
For example, it is common for Web-based content editing software to use actions within query parameters, such as "page?do=delete". If the purpose of such a resource is to perform an unsafe action, then the resource owner MUST disable or disallow that action when it is accessed using a safe request method.
Single use links are obviously a reality. I just wonder whether they're abusing the spec or I just don't get it.
Having an opinion is fine but having worked on these specs and understanding their subtleties would be most convincing.
What you're suggesting is acceptable in some situations, and not necessarily an abuse of the spec.
Firstly, RFC 2616 says regarding safe methods that they:
SHOULD NOT have the significance of taking an action other than retrieval
And the phrase "SHOULD NOT" is defined as follows (emphasis added):
This phrase, or the phrase "NOT RECOMMENDED" mean that there may exist valid reasons in particular circumstances when the particular behavior is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behavior described with this label.
The new version you linked to (which I think supersedes RFC 2616) doesn't use the term "SHOULD NOT" - but they haven't replaced it with "MUST NOT" either. They also acknowledge that side effects are not ruled out as long as the client is not held responsible. So I think the idea of safe methods is the same.
So since the spec acknowledges that there are situations where it's ok, how do we know if yours is such a situation - and more importantly, how do we stay generally within the "spirit" of the spec i.e. make sure we're not abusing it?
I'd refer to this quote from RFC 7231:
The purpose of distinguishing between safe and unsafe methods is to allow automated retrieval processes (spiders) and cache performance optimization (pre-fetching) to work without fear of causing harm.
If your app is a private intranet app and you're not concerned with the issues mentioned here, your approach is ok. Put another way: taking into consideration all the possible ways that a GET could happen, are you ok with this side effect?
Working outside RESTful guidelines is not always bad. It's just important to make sure you understand the effect it has.
With all that said, if you are looking for a way to implement reliable, consistent one-time delivery of a resource over HTTP, it's well worth reading Bill de hÓra's HTTPLR spec (http://www.dehora.net/doc/httplr/draft-httplr-01.html). This approach relies on the client acknowledging receipt of the message. You might be able to use something like this to allow user agents that are unaware of the one-use policy (spiders etc.) to GET the resource without causing side effects, while still allowing participating clients to cause the resource to become unavailable after one GET.
A transactional approach like this has the added benefit of allowing the client to re-try the download as often as they need to. This is important because otherwise the server cannot know whether the client successfully received the message or not.
If you really need to enforce the once-only policy from the server side for any possible user agent, then your original approach might be best, but bear in mind it's really an "at most once" policy.
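A hedged sketch of that transactional idea (the paths and the in-memory store are invented, and this is far simpler than HTTPLR itself): GET stays safe and repeatable, while a separate, explicit POST acknowledgment is what actually consumes the link, so an unaware crawler or mail scanner cannot burn it.

    import express from "express";

    const app = express();

    // Hypothetical store of one-time download tokens.
    const downloads = new Map<string, { content: string; consumed: boolean }>([
      ["664d92b3-b373-4dac-a4fb-7a41d015109a", { content: "the thing", consumed: false }],
    ]);

    // Safe: can be retried or prefetched any number of times until acknowledged.
    app.get("/downloads/:token", (req, res) => {
      const entry = downloads.get(req.params.token);
      if (!entry || entry.consumed) return res.sendStatus(404);
      res.type("text/plain").send(entry.content);
    });

    // Participating clients acknowledge receipt; only this unsafe request consumes the link.
    app.post("/downloads/:token/ack", (req, res) => {
      const entry = downloads.get(req.params.token);
      if (!entry || entry.consumed) return res.sendStatus(404);
      entry.consumed = true;
      res.sendStatus(204);
    });

    app.listen(3000);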
Sometimes breaking the spec is the only way; an example is web-page visit counters that use a hidden image, which is requested with GET but updates a counter.
However, some things can go wrong. Applications that follow the spec are allowed to presume that making a GET request won't have any side effects. So it is perfectly valid, for example, for some kind of antivirus-enabled email server to follow the links found in an email to make sure everything is safe. If you send this "download-once" link in an email, the recipient might never get to see it. For the same reason, a yes/no answer presented as two different links in an email is hard to deploy. The same goes for web pages: I recall Google crawling the links of a unique-per-user page, known to Google only because there was an analytics script inside; because the page contained these infamous links with side effects, Google was actually changing the answers of the people who visited it...
Fake hits are not really a problem in the case of the hidden-image counter (such counters are not considered very reliable anyway), but in the case of a "download-once" link they could be problematic.

Event sourcing and validation on writes

I'm still pretty new to ES and CQRS, and I understand that they are tightly related to eventual consistency of data.
Eventual consistency can be problematic when we have to perform validation before writing to the store, like checking that an email address isn't already used by an existing user. The only way to do that in a strongly consistent way would be to stop accepting new events, finish processing the remaining events against our view, and then query the view. We obviously don't want to go that far, and Greg Young actually recommends embracing eventual consistency and dealing with the (rare) cases where we break constraints.
Pushing this approach to its limits, my understanding is that this would mean, when developing a web API for example, responding 'OK' to every request, because it is impossible, at the time of the request, to validate it... Am I on the right track, or am I missing something here?
As hinted in my comment above, a RESTful API can return 202 Accepted.
This provides a way for a client to poll for status updates if that's necessary.
The client can monitor for state if that's desirable, but alternatively, it can also simply fire and forget, assuming that if it gets any sort of 200-range response, the command will eventually be applied. This can be a good approach if you have a separate channel on which you can propagate errors. For example, if you know which user submitted the command, and you have that user's email address, you can send an email in the event of a failure to apply the command.
One of the points of a CQRS architecture is that the edge of the system should do whatever it can to validate the correctness of a Command before it accepts it. Based on the known state of the system (as exposed by the Query side), the system can make a strong effort to validate that a given Command is acceptable. If it does that, the only permanent error that should happen if you accept a Command is a concurrency conflict. Depending on how fast your system approaches consistent states, such concurrency conflicts may be so few that e.g. sending the user an email is an appropriate error-handling strategy.
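As a hedged sketch of both points (the endpoint names, the read-model lookup, and the command handling are placeholders, not a prescribed design): the edge validates the command against the read model, rejects what it already knows is wrong, and answers 202 Accepted with a status link for anything it enqueues.

    import express from "express";

    const app = express();
    app.use(express.json());

    // Placeholder read model and command log; a real system would back these with a store and a bus.
    const knownEmails = new Set<string>(["taken@example.com"]);
    const commands = new Map<string, { status: "pending" | "applied" | "rejected" }>();

    app.post("/users", (req, res) => {
      const email = String(req.body.email ?? "");

      // Best-effort validation against the (possibly stale) read model at the edge.
      if (!email.includes("@")) return res.status(400).json({ error: "invalid email" });
      if (knownEmails.has(email)) return res.status(409).json({ error: "email already registered" });

      // Accept the command; a rare concurrency conflict can still surface asynchronously.
      const commandId = Date.now().toString(36);
      commands.set(commandId, { status: "pending" });
      // A real system would now dispatch the command to the write side.
      res.status(202).location(`/commands/${commandId}`).end();
    });

    // Clients that care can poll here; fire-and-forget clients simply never do.
    app.get("/commands/:id", (req, res) => {
      const cmd = commands.get(req.params.id);
      if (!cmd) return res.sendStatus(404);
      res.json(cmd);
    });

    app.listen(3000);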