Argo and Kubernetes "Request entity too large: limit is 3145728" - kubernetes

I've been trying to deploy a workflow in Argo on Kubernetes and I'm getting this error.
Can someone help me find the root cause of the issue?
I've tried several things, but I've been unsuccessful.

The way Argo solves that problem is by compressing the stored entity, but the real question is whether you have to have all 3 MB of that data in a single object, or whether that is merely more convenient for you and the data could be decomposed into separate objects with references between them. The Kubernetes API is not blob storage and shouldn't be treated as one.
The "error": "Request entity too large: limit is 3145728" is probably
the default response from kubernetes handler for objects larger than
3MB, as you can see here at L305 of the source code:
expectedMsgFor1MB := `etcdserver: request is too large`
expectedMsgFor2MB := `rpc error: code = ResourceExhausted desc = trying to send message larger than max`
expectedMsgFor3MB := `Request entity too large: limit is 3145728`
expectedMsgForLargeAnnotation := `metadata.annotations: Too long: must have at most 262144 bytes`
etcd does have a 1.5 MB request limit, and the etcd documentation suggests raising it with the --max-request-bytes flag, but that would have no effect on a GKE cluster because you don't have that kind of access to the master node.
But even if you did, it would not be ideal, because this error usually means you are embedding data in your objects instead of referencing it, which degrades performance.
I highly recommend that you consider these options instead:
- Determine whether your object includes references that aren't used
- Break up your resource
- Consider a volume mount instead (a sketch of this option follows the list)
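For the volume-mount option, here is a minimal sketch using the Python kubernetes client. It assumes the bulky data already lives in a PersistentVolumeClaim called payload-pvc; all of the names below are invented for illustration. The point is only that the pod reads the data from a mounted path instead of carrying it inside the API object.
from kubernetes import client, config

config.load_kube_config()

# Pod that mounts a (pre-existing, hypothetical) PVC instead of embedding the data
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="worker"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[client.V1Container(
            name="main",
            image="busybox",
            command=["cat", "/data/payload.json"],
            volume_mounts=[client.V1VolumeMount(name="payload", mount_path="/data")],
        )],
        volumes=[client.V1Volume(
            name="payload",
            persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(claim_name="payload-pvc"),
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)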
There's a request for a new API resource, File (or BinaryData), that could apply to your case. It's very fresh, but it's worth keeping an eye on.
Partial source for this answer: https://stackoverflow.com/a/60492986/12153576

Related

DELETE different resources with one request - is it OK, or should we merge those resources into one?

Let's assume I have a collection at the /playrequests endpoint. It is a collection (list) of players who want to find another player to start a match.
The server checks this collection periodically, and if it finds two unassigned players, it creates another resource in another collection at the /quickmatchs endpoint and also changes (for example) a field in the playRequests collection for both players to show that they are assigned to a quickMatch.
At this point, players can send a PUT or PATCH request to set the (for example) "ready" field of their related quickMatch resource to true, so the server and each player can find out whether both are ready and the match can be started.
(The issue is in the part below...)
Also, both before a playRequest is assigned to a match and after it is assigned, players can send a DELETE request to the /playrequests endpoint to tell the server that they want to give up the request. If the match hasn't been created yet, that's fine: the resource related to the player is removed from the playRequests collection. But if the player has been assigned to a match, the server must delete the related playRequest and also delete the related quickMatch resource from the /quickmatchs collection. (We should also modify the playRequest of the other player to indicate that they are unassigned now, or we can check and change it later when they check the status of their related resources in both collections. That is not the main issue for now.)
So, my question is: is it OK to change a resource related to the given endpoint and also change another resource accordingly, if necessary? (I mean, is it OK to manipulate resources behind different endpoints in one request? I don't want to send multiple requests.) Or do I need to merge those two collections to avoid such an action?
I know that many things (different strategies) are possible, but I want to know, from a RESTful viewpoint, what is standard/appropriate and what is not. (Consider that I'm fairly new to REST.)
Is it OK to change a resource that is related to the given endpoint and also change another resource accordingly?
Yes, and there are consequences.
Suppose we have two resources, /fizz and /buzz, and the representations of those resources are related via some implementation details on the server. On the client, we have accessed both of these resources, so we have cached copies of each of them.
The potential issue here is that the server changes the representations of these resources together, but as far as the client is concerned, they are separate.
For instance, if the client sends an unsafe request to change /fizz, a successful response message from the server will invalidate the locally cached copy of that representation, but the stale representation of /buzz does not get evicted. In effect, the client now has a view of the world with version 0 /buzz and version 1 /fizz.
Is that OK? "It depends" -- will expensive things happen to you if your clients are in a state of unmatched representations? Can you patch over the "problem" in other ways (for instance, by telling the client to check resources for updates more often)?
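As a toy illustration of that consequence (the URLs and payloads are made up): a naive client-side cache evicts only the resource that was written to, so the other cached representation silently goes stale.
import requests

cache = {}

def get(url):
    # serve from the local cache when we already have a copy
    if url not in cache:
        cache[url] = requests.get(url).json()
    return cache[url]

def put(url, body):
    resp = requests.put(url, json=body)
    resp.raise_for_status()
    cache.pop(url, None)  # invalidate only the resource we wrote to
    return resp

base = "https://api.example.com"
get(base + "/fizz")
get(base + "/buzz")
put(base + "/fizz", {"value": 1})
# the cached /buzz is now "version 0" while /fizz is "version 1"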

Kubernetes resource versioning

Using kubectl get with -o yaml on a resource, I see that every resource is versioned:
kind: ConfigMap
metadata:
  creationTimestamp: 2018-10-16T21:44:10Z
  name: my-config
  namespace: default
  resourceVersion: "163"
I wonder what the significance of this versioning is, and for what purposes it is used (use cases)?
A more detailed explanation that helped me understand exactly how this works:
All the objects you’ve created throughout this book—Pods,
ReplicationControllers, Services, Secrets and so on—need to be
stored somewhere in a persistent manner so their manifests survive API
server restarts and failures. For this, Kubernetes uses etcd, which
is a fast, distributed, and consistent key-value store. The only
component that talks to etcd directly is the Kubernetes API server.
All other components read and write data to etcd indirectly through
the API server.
This brings a few benefits, among them a more robust optimistic
locking system as well as validation; and, by abstracting away the
actual storage mechanism from all the other components, it’s much
simpler to replace it in the future. It’s worth emphasizing that etcd
is the only place Kubernetes stores cluster state and metadata.
Optimistic concurrency control (sometimes referred to as optimistic
locking) is a method where instead of locking a piece of data and
preventing it from being read or updated while the lock is in place,
the piece of data includes a version number. Every time the data is
updated, the version number increases. When updating the data, the
version number is checked to see if it has increased between the time
the client read the data and the time it submits the update. If this
happens, the update is rejected and the client must re-read the new
data and try to update it again. The result is that when two clients
try to update the same data entry, only the first one succeeds.
Marko Luksa, "Kubernetes in Action"
So, all Kubernetes resources include a metadata.resourceVersion field, which clients need to pass back to the API server when updating an object. If the version doesn't match the one stored in etcd, the API server rejects the update.
The main purpose of the resourceVersion on individual resources is optimistic locking. You can fetch a resource, make a change, and submit it as an update, and the server will reject the update with a conflict error if another client has updated it in the meantime (their update would have bumped the resourceVersion, and the value you submit tells the server which version you think you are updating).
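As a rough sketch of that read/modify/write cycle, here is how it might look with the Python kubernetes client against the my-config ConfigMap from the question; the retry-on-conflict loop is my own wrapper, not something the client library provides.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()

def set_key(name, namespace, key, value, retries=5):
    for _ in range(retries):
        cm = v1.read_namespaced_config_map(name, namespace)  # carries metadata.resource_version
        cm.data = dict(cm.data or {}, **{key: value})
        try:
            # the PUT sends resourceVersion back; a 409 means someone else updated it in between
            return v1.replace_namespaced_config_map(name, namespace, cm)
        except ApiException as e:
            if e.status != 409:
                raise
    raise RuntimeError("gave up after repeated conflicts")

set_key("my-config", "default", "greeting", "hello")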

Kubernetes - Using a particular ConfigMap versioning

I have a couple of questions regarding configMap versioning.
Is it possible to use a specific version of a configMap in the deployment file?
I don't see any APIs to get a list of versions. How do I get the list of versions?
Is it possible to compare a configMap between versions?
How do I control the number of versions?
Thanks
Is it possible to use a specific version of a configMap in the deployment file?
Not really.
The closest notion of a "version" is resourceVersion, but that is not for the user to directly act upon.
See API conventions: concurrency control and consistency:
Kubernetes leverages the concept of resource versions to achieve optimistic concurrency. All Kubernetes resources have a "resourceVersion" field as part of their metadata. This resourceVersion is a string that identifies the internal version of an object that can be used by clients to determine when objects have changed.
When a record is about to be updated, its version is checked against a pre-saved value, and if it doesn't match, the update fails with a StatusConflict (HTTP status code 409).
The resourceVersion is changed by the server every time an object is modified. If resourceVersion is included with the PUT operation the system will verify that there have not been other successful mutations to the resource during a read/modify/write cycle, by verifying that the current value of resourceVersion matches the specified value.
The resourceVersion is currently backed by etcd's modifiedIndex.
However, it's important to note that the application should not rely on the implementation details of the versioning system maintained by Kubernetes. We may change the implementation of resourceVersion in the future, such as to change it to a timestamp or per-object counter.
The only way for a client to know the expected value of resourceVersion is to have received it from the server in response to a prior operation, typically a GET. This value MUST be treated as opaque by clients and passed unmodified back to the server.
Clients should not assume that the resource version has meaning across namespaces, different kinds of resources, or different servers.
Currently, the value of resourceVersion is set to match etcd's sequencer. You could think of it as a logical clock the API server can use to order requests.
However, we expect the implementation of resourceVersion to change in the future, such as in the case we shard the state by kind and/or namespace, or port to another storage system.
In the case of a conflict, the correct client action at this point is to GET the resource again, apply the changes afresh, and try submitting again.
This mechanism can be used to prevent races like the following:
Client #1               Client #2
GET Foo                 GET Foo
Set Foo.Bar = "one"     Set Foo.Baz = "two"
PUT Foo                 PUT Foo
When these sequences occur in parallel, either the change to Foo.Bar or the change to Foo.Baz can be lost.
On the other hand, when specifying the resourceVersion, one of the PUTs will fail, since whichever write succeeds changes the resourceVersion for Foo.
resourceVersion may be used as a precondition for other operations (e.g., GET, DELETE) in the future, such as for read-after-write consistency in the presence of caching.
"Watch" operations specify resourceVersion using a query parameter. It is used to specify the point at which to begin watching the specified resources.
This may be used to ensure that no mutations are missed between a GET of a resource (or list of resources) and a subsequent Watch, even if the current version of the resource is more recent.
This is currently the main reason that list operations (GET on a collection) return resourceVersion.
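A minimal sketch of that list-then-watch usage with the Python kubernetes client (the namespace and timeout are arbitrary choices):
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

cms = v1.list_namespaced_config_map("default")
rv = cms.metadata.resource_version  # collection-level resourceVersion from the list

w = watch.Watch()
# start watching from the listed version so nothing between the list and the watch is missed
for event in w.stream(v1.list_namespaced_config_map, "default",
                      resource_version=rv, timeout_seconds=30):
    print(event["type"], event["object"].metadata.name)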

Avoid duplicate POSTs with REST

I have been using POST in a REST API to create objects. Every once in a while, the server will create the object, but the client will be disconnected before it receives the 201 Created response. The client only sees a failed POST request, and tries again later, and the server happily creates a duplicate object...
Others must have had this problem, right? But I google around, and everyone just seems to ignore it.
I have 2 solutions:
A) Use PUT instead, and create the (GU)ID on the client.
B) Add a GUID to all objects created on the client, and have the server enforce their UNIQUE-ness.
A doesn't match existing frameworks very well, and B feels like a hack. How do other people solve this in the real world?
Edit:
With Backbone.js, you can set a GUID as the id when you create an object on the client. When it is saved, Backbone will do a PUT request. Make your REST backend handle PUT to non-existing ids, and you're set.
Another solution that's been proposed for this is POST Once Exactly (POE), in which the server generates single-use POST URIs that, when used more than once, will cause the server to return a 405 response.
The downsides are that 1) the POE draft was allowed to expire without any further progress on standardization, and thus 2) implementing it requires changes to clients to make use of the new POE headers, and extra work by servers to implement the POE semantics.
By googling you can find a few APIs that are using it though.
Another idea I had for solving this problem is that of a conditional POST, which I described and asked for feedback on here.
There seems to be no consensus on the best way to prevent duplicate resource creation in cases where unique URI generation cannot be done on the client (so PUT is unavailable) and hence POST is needed.
I always use B -- detection of dups due to whatever problem belongs on the server side.
Detection of duplicates is a kludge, and can get very complicated. Genuine distinct but similar requests can arrive at the same time, perhaps because a network connection is restored. And repeat requests can arrive hours or days apart if a network connection drops out.
All of the discussion of identifiers in the other answers is with the goal of giving an error in response to duplicate requests, but this will normally just incite a client to get or generate a new id and try again.
A simple and robust pattern to solve this problem is as follows: Server applications should store all responses to unsafe requests, then, if they see a duplicate request, they can repeat the previous response and do nothing else. Do this for all unsafe requests and you will solve a bunch of thorny problems. Repeat DELETE requests will get the original confirmation, not a 404 error. Repeat POSTS do not create duplicates. Repeated updates do not overwrite subsequent changes etc. etc.
"Duplicate" is determined by an application-level id (that serves just to identify the action, not the underlying resource). This can be either a client-generated GUID or a server-generated sequence number. In this second case, a request-response should be dedicated just to exchanging the id. I like this solution because the dedicated step makes clients think they're getting something precious that they need to look after. If they can generate their own identifiers, they're more likely to put this line inside the loop and every bloody request will have a new id.
Using this scheme, all POSTs are empty, and POST is used only for retrieving an action identifier. All PUTs and DELETEs are fully idempotent: successive requests get the same (stored and replayed) response and cause nothing further to happen. The nicest thing about this pattern is its Kung-Fu (Panda) quality. It takes a weakness: the propensity for clients to repeat a request any time they get an unexpected response, and turns it into a force :-)
I have a little Google doc here if anyone cares.
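A bare-bones sketch of that store-and-replay idea (my own illustration, not taken from the linked doc); a real service would persist the response store and expire old entries:
responses = {}  # action id -> (status, body) of the first execution

def handle_unsafe_request(request_id, perform_action):
    if request_id in responses:
        return responses[request_id]      # duplicate: replay the stored response, do nothing else
    status, body = perform_action()       # runs exactly once per id
    responses[request_id] = (status, body)
    return status, body

# a retried POST with the same id gets the original 201 back instead of creating a duplicate
print(handle_unsafe_request("req-42", lambda: (201, {"order": 7})))
print(handle_unsafe_request("req-42", lambda: (201, {"order": 7})))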
You could try a two step approach. You request an object to be created, which returns a token. Then in a second request, ask for a status using the token. Until the status is requested using the token, you leave it in a "staged" state.
If the client disconnects after the first request, they won't have the token and the object stays "staged" indefinitely or until you remove it with another process.
If the first request succeeds, you have a valid token and you can grab the created object as many times as you want without it recreating anything.
There's no reason why the token can't be the ID of the object in the data store. You can create the object during the first request. The second request really just updates the "staged" field.
Server-issued Identifiers
If you are dealing with the case where it is the server that issues the identifiers, create the object in a temporary, staged state. (This is an inherently non-idempotent operation, so it should be done with POST.) The client then has to do a further operation on it to transfer it from the staged state into the active/preserved state (which might be a PUT of a property of the resource, or a suitable POST to the resource).
Each client ought to be able to GET a list of their resources in the staged state somehow (maybe mixed with other resources) and ought to be able to DELETE resources they've created if they're still just staged. You can also periodically delete staged resources that have been inactive for some time.
You do not need to reveal one client's staged resources to any other client; they need exist globally only after the confirmatory step.
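A client-side sketch of that staged/confirm flow, assuming the server returns 201 with a Location header for the staged resource and accepts a PUT that activates it; the URL and field names here are invented:
import requests
from urllib.parse import urljoin

base = "https://api.example.com"

resp = requests.post(base + "/orders", json={"item": "widget"})
resp.raise_for_status()
staged_url = urljoin(base, resp.headers["Location"])  # e.g. /orders/1234, still staged

# confirming is a PUT, so retrying it after a dropped response is harmless
requests.put(staged_url, json={"state": "active"}).raise_for_status()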
Client-issued Identifiers
The alternative is for the client to issue the identifiers. This is mainly useful where you are modeling something like a filestore, as the names of files are typically significant to user code. In this case, you can use PUT to do the creation of the resource as you can do it all idempotently.
The downside of this is that clients are able to create IDs, so you have no control at all over what IDs they use.
There is another variation of this problem. Having a client generate a unique id means we are asking the customer to solve this problem for us. Consider an environment with publicly exposed APIs and hundreds of clients integrating with them. Practically, we have no control over the client code or the correctness of its implementation of uniqueness. Hence, it would probably be better to build in intelligence for recognizing whether a request is a duplicate. One simple approach is to calculate and store a checksum of every request based on attributes from the user input, define a time threshold (x minutes), and compare every new request from the same client against those received in the past x minutes. If the checksum matches, it could be a duplicate request, and you can add a challenge mechanism for the client to resolve it.
If a client is making two different requests with the same parameters within x minutes, it might be worth ensuring that this is intentional, even if the requests come with unique request ids.
This approach may not be suitable for every use case; however, it is useful for cases where the business impact of executing the second call is high and can potentially cost the customer. Consider a payment-processing engine where an intermediate layer retries a failed request, or a customer double-clicks, resulting in the client layer submitting two requests.
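A rough sketch of that checksum-within-a-window idea; the window length and which attributes go into the checksum are application-specific choices:
import hashlib, json, time

WINDOW_SECONDS = 5 * 60
recent = {}  # (client_id, checksum) -> time last seen

def is_probable_duplicate(client_id, payload):
    checksum = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    now = time.time()
    # forget anything older than the window
    for key, ts in list(recent.items()):
        if now - ts > WINDOW_SECONDS:
            del recent[key]
    key = (client_id, checksum)
    seen_before = key in recent
    recent[key] = now
    return seen_before  # the caller can then challenge the client instead of executing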
Design
- Automatic (no need to maintain a manual blacklist)
- Memory optimized
- Disk optimized
Algorithm [solution 1]
- A REST request arrives with a UUID
- The web server checks whether the UUID is in the in-memory blacklist table (if yes, answer 409)
- The server writes the request to the DB (if it was not filtered by ETS)
- The DB checks whether the UUID already exists before writing
- If yes, it answers 409 to the server, which adds the UUID to the blacklist in the memory cache and on disk
- If not, it writes to the DB and answers 200
Algorithm [solution 2]
- A REST request arrives with a UUID
- Save the UUID in the memory-cache table (expires after 30 days)
- The web server checks whether the UUID is in the memory-cache blacklist table [return HTTP 409]
- The server writes the request to the DB [return HTTP 200]
In solution 2, the memory-cache blacklist exists ONLY in memory, so the DB is never checked for duplicates. The definition of 'duplicate' is "any request that arrives within a given period of time". We also replicate the memory-cache table on disk, so we can fill it before starting up the server.
In solution 1, there will never be a duplicate, because we always check the disk ONLY once before writing, and if the request is a duplicate, subsequent round trips are handled by the memory cache. This solution is better for BigQuery, because requests there are not idempotent, but it is also less optimized.
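A condensed sketch of solution 2, in which the UUID is only ever checked against an expiring in-memory table and the DB is never consulted for duplicates; save_to_db stands in for whatever storage you use:
import time

TTL = 30 * 24 * 3600  # 30 days, as in the description
seen = {}             # uuid -> time first seen

def handle(uuid, payload, save_to_db):
    now = time.time()
    if uuid in seen and now - seen[uuid] < TTL:
        return 409                 # duplicate within the window
    seen[uuid] = now
    save_to_db(uuid, payload)      # no duplicate check at the DB layer
    return 200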

A RESTful singleton resource

I'm coming from the RPC world but am currently investigating whether using REST is a good idea for my project. As far as I understand from Wikipedia, the basic idea of RESTful services is to provide access to collections and their individual elements.
In my case the server would be a measuring instrument. I must be able to start, stop, and pause the measurement routine, and read the data at any time.
Currently, I'm considering the following:
POST /measure (start measurement, this continues until stopped by the user)
PUT /measure pause=true/false (pause/unpause)
DELETE /measure (stop)
GET /measure (get measurement data)
However, I'm not sure if this fits the REST model, since I don't really work with collections or elements here.
My question: how would I access a singleton resource, and do the start/stop requests to the server break the REST stateless constraint?
Not RESTful
No, your approach isn't RESTful, because if I understand the workflow, you would delete the resource to stop the measurement and then get the resource to read out the final result. But deleting a resource implies that there would be nothing left to GET.
The fact that your resource is a singleton isn't a problem at all. The problem lies in the way you're mapping verbs and state.
Your description is a bit abstract, so let's be a bit more concrete: let's assume that the instrument in question measures the angular velocity of a fly wheel in radians/sec. This instrument has some cost associated with measurement, so the client needs to be able to disable measurement for some periods of time as a cost-saving measure. If this is roughly analogous to your scenario, then the exposition below should be applicable to your scenario.
Verbs
Now, let's review your verbs.
GET returns a representation of a resource. So when you GET /measure, it should return some data that represents the current measurement.
PUT creates or updates a specific, named resource. The resource is named by its URL. So PUT /measure implies that you're updating the state of a resource called /measure, or creating that resource if it doesn't already exist. In your case, the instrument value is read-only: we can't write a radians/sec value to the instrument. But the paused/active state of the instrument is mutable, so PUT /measure should include a body that modifies the state of the instrument. You could use a lot of different representations here, but one simple approach would be a request body like active=true or active=false to indicate what the instrument's new state should be.
POST is similar to PUT, except that the client does not specify the name of the resource that should be created or updated. In a different API, for example, the client might POST /articles to create a new article. The server would create a resource, give it a name like /articles/1234, and then tell the client about this new name by returning a 201 CREATED HTTP code and adding a Location: /articles/1234 header to tell the client where the new resource is. In your scenario, POST isn't a meaningful verb because you always know what the name of your singleton resource is.
DELETE means you remove a resource, and since a resource is identified by a URL, DELETE /measure implies that /measure no longer exists. A subsequent GET /measure should return either 404 NOT FOUND or 410 GONE. In your case, the client can't actually destroy the instrument, so DELETE isn't a meaningful verb.
Conclusion
So in sum, a RESTful design for your service would be to have PUT /measure with a request body that tells the instrument whether it should be active or not active (paused) and GET /measure to read the current measurement. If you GET /measure on a paused instrument, you should probably return a 409 CONFLICT HTTP status. Your service shouldn't use POST or DELETE at all.
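A client-side sketch of that conclusion, assuming the instrument is reachable at https://instrument.local and accepts a JSON body on PUT; the host and the "active" field name are illustrative:
import requests

base = "https://instrument.local"

requests.put(base + "/measure", json={"active": True}).raise_for_status()   # start / unpause
reading = requests.get(base + "/measure")
if reading.status_code == 409:
    print("instrument is paused")               # per the convention suggested above
else:
    print(reading.json())
requests.put(base + "/measure", json={"active": False}).raise_for_status()  # pause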
You are still working on a resource, and the way you broke it down sounds good to me. Fielding explicitly mentions temporal services in the REST chapter:
The key abstraction of information in REST is a resource. Any
information that can be named can be a resource: a document or image,
a temporal service (e.g. "today's weather in Los Angeles")
Maybe it would make sense to give each measurement a unique id though. That way you can uniquely refer to each measurement (you don't even have to store the old ones, but if someone refers to an old measurement you can tell them, that what they are requesting is not up to date anymore).
Building upon the last answer, here is how you might want to break it down:
measures/ - GET all the measures from the instrument (Paginate/limit if needed based on query params)
measures/:measure_id - GET a particular measure
measures/ - POST - Starts a new measure. This returns the new measure ID which you can deal with later.
measures/:measure_id - DELETE - stop the measure.
measures/:measure_id - PUT - update the measure
measures/last_measure - Last measured resource.
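A quick client-side walk-through of that layout, assuming the server returns the new measure's URL in a Location header; the host and that header choice are my assumptions:
import requests
from urllib.parse import urljoin

base = "https://instrument.local"

created = requests.post(base + "/measures/")              # start a new measure
measure_url = urljoin(base, created.headers["Location"])  # e.g. /measures/17

print(requests.get(measure_url).json())                   # read this particular measure
requests.delete(measure_url)                              # stop it
print(requests.get(base + "/measures/last_measure").json())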