I'm using the Kubernetes Go Operator SDK to implement an operator that manages RabbitMQ queues. I'm wondering whether there's a way for Kubernetes to enforce immutability of particular spec fields on my custom resource. I have the following Go struct, which represents a RabbitMQ queue along with the parameters used to bind it to a RabbitMQ exchange:
type RmqQueueSpec struct {
	VHost        string            `json:"vhost,required"`
	Exchange     string            `json:"exchange,required"`
	RoutingKey   string            `json:"routingKey"`
	SecretConfig map[string]string `json:"secretConfig"`
}
The reason I want immutability, specifically for the VHost field, is that it's a parameter used to namespace a queue in RabbitMQ. If it were changed for an already-deployed queue, the reconciler would fail to query RabbitMQ for the intended queue, since it would be querying under a different vhost (effectively a different namespace). This could cause the creation of a new queue or an update of the wrong queue.
There are a few alternatives I'm considering, such as using the required ObjectMeta.Name field to hold the concatenated vhost and queue name, so that both are immutable for a deployed queue. Or somehow caching older specs within the operator (I haven't figured out exactly how to do this yet) and comparing the old and current spec in the reconciler, returning an error if VHost changes. However, neither approach seems ideal. Ideally, the operator framework would enforce immutability on the VHost field; that would be a simple way to handle this.
This validation is possible today by using a ValidatingAdmissionWebhook, with future support planned via the CRD's OpenAPI validation:
https://github.com/operator-framework/operator-sdk/issues/1587
https://github.com/kubernetes/kubernetes/issues/65973
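For illustration, here is a minimal sketch of what such a webhook could look like in Go, using the admission/v1 types (older clusters use admission/v1beta1). The handler name, the pared-down rmqQueue type (only spec.vhost, matching the question's RmqQueueSpec), and the TLS certificate paths are all assumptions for the example's sake; a real operator would typically use controller-runtime's webhook builder rather than a hand-rolled server.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Pared-down view of the custom resource; we only need spec.vhost.
type rmqQueue struct {
	Spec struct {
		VHost string `json:"vhost"`
	} `json:"spec"`
}

// validateVHost rejects UPDATE requests that change spec.vhost.
func validateVHost(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
		return
	}

	resp := &admissionv1.AdmissionResponse{UID: review.Request.UID, Allowed: true}
	if review.Request.Operation == admissionv1.Update {
		var oldObj, newObj rmqQueue
		if json.Unmarshal(review.Request.OldObject.Raw, &oldObj) == nil &&
			json.Unmarshal(review.Request.Object.Raw, &newObj) == nil &&
			oldObj.Spec.VHost != newObj.Spec.VHost {
			resp.Allowed = false
			resp.Result = &metav1.Status{
				Message: fmt.Sprintf("spec.vhost is immutable (was %q)", oldObj.Spec.VHost),
			}
		}
	}

	review.Response = resp // reply on the same review so apiVersion/kind are preserved
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/validate", validateVHost)
	// Webhook servers must serve TLS; the cert paths here are illustrative.
	http.ListenAndServeTLS(":8443", "/tls/tls.crt", "/tls/tls.key", nil)
}

The webhook is then wired up with a ValidatingWebhookConfiguration whose rules match UPDATE operations on the custom resource.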
AFAIK this is not yet available for CRDs. Our approach is generally to use the object name as the default name of the object being controlled (the vhost name in this case), so it naturally works out okay.
I'm developing microservices that communicate in a RESTful style.
Are there any ready-to-use frameworks for implementing a throttling mechanism?
I use Spring Boot.
Throttling here means reducing the frequency of outgoing requests by filtering out, for example, duplicate requests.
In my case I think I should use a cache (I don't know which one) to filter out duplicate requests that have already been handled.
And how often should the cache be cleaned (daily, hourly, etc.)?
Please give me a hint on where to dig.
Spring provides the @Cacheable and related annotations, which indicate that, and how, a method's result should be cached. For example:

@CachePut(value = "addresses", condition = "#person.name == 'Tom'")
public String getAddress(Person person) { ... }

will cache the result if the person parameter's name is "Tom". This is one way the duplicate-filtering could be implemented.
Since you're using Spring Boot, you can pull in spring-boot-starter-cache as a dependency and it will provide the annotations and a default cache implementation.
It's also possible to control how frequently the cache is evicted, e.g. with Spring's @Scheduled annotation (for example, a scheduled method annotated with @CacheEvict); the specific frequency depends on your use case.
This was just a small glimpse; the following references go into more detail:
https://www.baeldung.com/spring-cache-tutorial
https://www.baeldung.com/spring-boot-evict-cache
I created a custom resource definition (CRD) and its controller in my cluster. Now I can create custom resources, but how do I validate update requests to the CR? E.g., only certain fields should be allowed to change.
The Kubernetes docs on Custom Resources have a section on Advanced features and flexibility (never mind that validating requests should be considered a pretty basic feature 😉). On validation of CRDs, it says:
Most validation can be specified in the CRD using OpenAPI v3.0 validation. Any other validations supported by addition of a Validating Webhook.
The OpenAPI v3.0 validation won't help you accomplish what you're looking for, namely ensuring the immutability of certain fields on your custom resource. It's only useful for stateless validation, where you look at a single instance of an object and decide whether it's valid; you can't compare it against a previous version of the resource and verify that nothing has changed.
You could use Validating Webhooks. It feels like a heavyweight solution, as you will need to implement a server that conforms to the Validating Webhook contract (responding to specific kinds of requests with specific kinds of responses), but you will at least have the data required to make the desired determination, e.g. knowing that it's an UPDATE request and knowing what the old object looked like. For more details, see here. I have not actually tried Validating Webhooks, but it feels like it could work.
An alternative approach I've used is to store the user-provided data within the Status subresource of the custom resource the first time it's created, and then always look at the data there. Any changes to the Spec are ignored, though your controller can notice discrepancies between what's in the Spec and what's in the Status, and embed a warning in the Status telling the user that they've mutated the object in an invalid way and their specified values are being ignored. You can see an example of that approach here and here. As per the relevant README section of that linked repo, this results in the following behaviour:
The AVAILABLE column will show false if the UAA client for the team has not been successfully created. The WARNING column will display a warning if you have mutated the Team spec after initial creation. The DIRECTOR column displays the originally provided value for spec.director and this is the value that this team will continue to use. If you do attempt to mutate the Team resource, you can see your (ignored) user-provided value with the -o wide flag:
$ kubectl get team --all-namespaces -owide
NAMESPACE   NAME   DIRECTOR     AVAILABLE   WARNING   USER-PROVIDED DIRECTOR
test        test   vbox-admin   true                  vbox-admin
If we attempt to mutate the spec.director property, here's what we will see:
$ kubectl get team --all-namespaces -owide
NAMESPACE   NAME   DIRECTOR     AVAILABLE   WARNING                                               USER-PROVIDED DIRECTOR
test        test   vbox-admin   true        API resource has been mutated; all changes ignored   bad-new-director-name
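On the controller side, the heart of this approach is simply: copy the spec into the status the first time you see the object, and from then on act only on the status. Here is a rough Go sketch; the Team types and function names are mine for illustration, not taken from the linked repo:

// Hypothetical types mirroring the Team resource shown above.
type TeamSpec struct {
	Director string `json:"director"`
}

type TeamStatus struct {
	OriginalDirector string `json:"originalDirector,omitempty"`
	Warning          string `json:"warning,omitempty"`
}

type Team struct {
	Spec   TeamSpec   `json:"spec"`
	Status TeamStatus `json:"status"`
}

// reconcile captures the user-provided value in the status on first
// sight and treats the status as the source of truth from then on.
func reconcile(team *Team) {
	switch {
	case team.Status.OriginalDirector == "":
		// First reconcile: persist the original spec value.
		team.Status.OriginalDirector = team.Spec.Director
	case team.Spec.Director != team.Status.OriginalDirector:
		// Spec was mutated after creation: ignore it, surface a warning.
		team.Status.Warning = "API resource has been mutated; all changes ignored"
	}
	// Act only on the captured value, never the (possibly mutated) spec.
	ensureUAAClientForDirector(team.Status.OriginalDirector)
}

func ensureUAAClientForDirector(director string) {
	// Create or refresh the UAA client for this director (elided).
}

In a real controller, the status change would be written back via the status subresource.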
Kubernetes has a mechanism for versioning CRDs; see https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning/. What is not clear to me is how you actually support an evolution from CRD v1 to CRD v2 when you cannot always convert between v1 and v2. Suppose we introduce a new field in v2 that cannot be populated by a webhook conversion; then perhaps all we can do is leave the field null? Furthermore, when you request API version N you always get back an object at version N, even if it was not written as version N, so how can your controller know how to treat the object?
As you can read in Writing, reading, and updating versioned CustomResourceDefinition objects:
If you update an existing object, it is rewritten at the version that is currently the storage version. This is the only way that objects can change from one version to another.
Kubernetes returns the object to you at the version you requested, but the persisted object is neither changed on disk, nor converted in any way (other than changing the apiVersion string) while serving the request.
If you read your object at version v1beta1 and then read it again at version v1, both returned objects are identical except for the apiVersion field. To actually migrate stored data, see "Upgrade existing objects to a new stored version".
The API server also supports webhook conversions that call an external service in case a conversion is required.
The webhook handles the ConversionReview requests sent by the API servers and sends back conversion results wrapped in a ConversionResponse. You can read about webhooks here.
Webhook conversion was introduced in Kubernetes v1.13 as an alpha feature.
When the webhook server is deployed into the Kubernetes cluster as a service, it has to be exposed via a service on port 443.
When deprecating versions and dropping support, devise a storage upgrade procedure.
In the case you describe, I think you are forced to define some kind of 'default' value for the new mandatory field being introduced, so that custom objects created before the change can be converted to the updated spec.
But then it becomes a backward compatible change, so there is no need to define a v2; you can remain on v1.
If it is not possible to define a default value, nor to derive a value for the newly introduced mandatory field from the other fields of an existing v1-compliant custom object, then you should ensure v2 becomes the new stored version and manually update all existing v1-compliant custom objects to set that mandatory field properly.
Or even define a new custom object ...
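To make the 'default value' idea concrete, here is a minimal sketch of the conversion step a webhook would perform when upgrading a v1 object to v2, with the new mandatory field defaulted. The Widget types, field names, group, and default value are all hypothetical; a real webhook wraps this logic in ConversionReview request/response handling:

package main

import "encoding/json"

// v1 shape: no Mode field.
type WidgetV1 struct {
	APIVersion string `json:"apiVersion"`
	Param      string `json:"param"`
}

// v2 shape: Mode is a new mandatory field.
type WidgetV2 struct {
	APIVersion string `json:"apiVersion"`
	Param      string `json:"param"`
	Mode       string `json:"mode"`
}

// convertV1ToV2 upgrades a raw v1 object, defaulting the new field so
// objects created before the change remain convertible.
func convertV1ToV2(raw []byte) ([]byte, error) {
	var v1 WidgetV1
	if err := json.Unmarshal(raw, &v1); err != nil {
		return nil, err
	}
	v2 := WidgetV2{
		APIVersion: "example.com/v2",
		Param:      v1.Param,
		Mode:       "standard", // assumed default; this is what keeps the change convertible
	}
	return json.Marshal(v2)
}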
Let's say you have a CRD specification like the one below.
// API v6.
type Frobber struct {
	Height int    `json:"height"`
	Param  string `json:"param"`
}
Then you add a new field called Width.
// Still API v6.
type Frobber struct {
	Height int    `json:"height"`
	Width  int    `json:"width"`
	Param  string `json:"param"`
}
So as long as you can handle this change in your controller logic, you do not need a version change; the addition is backward compatible.
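For instance, 'handling it in controller logic' can be as simple as treating the zero value, which older objects stored without width will decode to, as 'not set'. A small sketch against the Frobber type above (the default of 10 is illustrative):

// Objects stored before Width existed decode with Width == 0, so the
// controller treats zero as "not set" and applies a default.
func effectiveWidth(f Frobber) int {
	if f.Width == 0 {
		return 10 // illustrative default for pre-Width objects
	}
	return f.Width
}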
But let's say you now change the previous spec as below.
// Internal, soon to be v7beta1.
type Frobber struct {
	Height int
	Width  int
	Params []string
}
Here you change an existing field in a way that is not backward compatible, and you may not be able to handle the change with controller logic alone. For these kinds of changes, it is better to introduce a version upgrade.
For more details about Kubernetes API version changes, refer to the following links:
[1] https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api_changes.md
[2] https://groups.google.com/forum/#!topic/operator-framework/jswZUe1rlho
Using kubectl get with -o yaml on a resource, I see that every resource is versioned:
kind: ConfigMap
metadata:
  creationTimestamp: 2018-10-16T21:44:10Z
  name: my-config
  namespace: default
  resourceVersion: "163"
I wonder what the significance of this versioning is and what it is used for (use cases).
A more detailed explanation, which helped me understand exactly how this works:
All the objects you’ve created throughout this book—Pods,
ReplicationControllers, Services, Secrets and so on—need to be
stored somewhere in a persistent manner so their manifests survive API
server restarts and failures. For this, Kubernetes uses etcd, which
is a fast, distributed, and consistent key-value store. The only
component that talks to etcd directly is the Kubernetes API server.
All other components read and write data to etcd indirectly through
the API server.
This brings a few benefits, among them a more robust optimistic
locking system as well as validation; and, by abstracting away the
actual storage mechanism from all the other components, it’s much
simpler to replace it in the future. It’s worth emphasizing that etcd
is the only place Kubernetes stores cluster state and metadata.
Optimistic concurrency control (sometimes referred to as optimistic
locking) is a method where instead of locking a piece of data and
preventing it from being read or updated while the lock is in place,
the piece of data includes a version number. Every time the data is
updated, the version number increases. When updating the data, the
version number is checked to see if it has increased between the time
the client read the data and the time it submits the update. If this
happens, the update is rejected and the client must re-read the new
data and try to update it again. The result is that when two clients
try to update the same data entry, only the first one succeeds.
Marko Luksa, "Kubernetes in Action"
So, all Kubernetes resources include a metadata.resourceVersion field, which clients need to pass back to the API server when updating an object. If the version doesn't match the one stored in etcd, the API server rejects the update.
The main purpose of the resourceVersion on individual resources is optimistic locking. You can fetch a resource, make a change, and submit it as an update, and the server will reject the update with a conflict error if another client has updated it in the meantime (their update would have bumped the resourceVersion, and the value you submit tells the server which version you think you are updating).
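To see the mechanism from a client's perspective, here is a small client-go sketch (recent client-go versions; the function name and the my-config/default names are assumptions matching the question's example). retry.RetryOnConflict re-runs the read-modify-write loop whenever the update comes back with a 409 Conflict:

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// setKey updates a ConfigMap with optimistic concurrency: on conflict
// (someone else bumped resourceVersion first), re-read and retry.
func setKey(clientset *kubernetes.Clientset) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// GET returns the current object, including its resourceVersion.
		cm, err := clientset.CoreV1().ConfigMaps("default").
			Get(context.TODO(), "my-config", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if cm.Data == nil {
			cm.Data = map[string]string{}
		}
		cm.Data["foo"] = "bar"
		// The update carries the resourceVersion we read; the API server
		// rejects it with 409 Conflict if it changed in the meantime.
		_, err = clientset.CoreV1().ConfigMaps("default").
			Update(context.TODO(), cm, metav1.UpdateOptions{})
		return err
	})
}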
I have a couple of questions regarding ConfigMap versioning.
Is it possible to use a specific version of a ConfigMap in the deployment file?
I don't see any APIs to list versions. How do I get the list of versions?
Is it possible to compare a ConfigMap between versions?
How do I control the number of versions?
Thanks
Is it possible to use a specific version of a configMap in the deployment file?
Not really.
The closest notion of a "version" is resourceVersion, but that is not for the user to directly act upon.
See API conventions: concurrency control and consistency:
Kubernetes leverages the concept of resource versions to achieve optimistic concurrency. All Kubernetes resources have a "resourceVersion" field as part of their metadata. This resourceVersion is a string that identifies the internal version of an object that can be used by clients to determine when objects have changed.
When a record is about to be updated, its version is checked against a pre-saved value, and if it doesn't match, the update fails with a StatusConflict (HTTP status code 409).
The resourceVersion is changed by the server every time an object is modified. If resourceVersion is included with the PUT operation the system will verify that there have not been other successful mutations to the resource during a read/modify/write cycle, by verifying that the current value of resourceVersion matches the specified value.
The resourceVersion is currently backed by etcd's modifiedIndex.
However, it's important to note that the application should not rely on the implementation details of the versioning system maintained by Kubernetes. We may change the implementation of resourceVersion in the future, such as to change it to a timestamp or per-object counter.
The only way for a client to know the expected value of resourceVersion is to have received it from the server in response to a prior operation, typically a GET. This value MUST be treated as opaque by clients and passed unmodified back to the server.
Clients should not assume that the resource version has meaning across namespaces, different kinds of resources, or different servers.
Currently, the value of resourceVersion is set to match etcd's sequencer. You could think of it as a logical clock the API server can use to order requests.
However, we expect the implementation of resourceVersion to change in the future, such as in the case we shard the state by kind and/or namespace, or port to another storage system.
In the case of a conflict, the correct client action at this point is to GET the resource again, apply the changes afresh, and try submitting again.
This mechanism can be used to prevent races like the following:
Client #1                  Client #2
GET Foo                    GET Foo
Set Foo.Bar = "one"        Set Foo.Baz = "two"
PUT Foo                    PUT Foo
When these sequences occur in parallel, either the change to Foo.Bar or the change to Foo.Baz can be lost.
On the other hand, when specifying the resourceVersion, one of the PUTs will fail, since whichever write succeeds changes the resourceVersion for Foo.
resourceVersion may be used as a precondition for other operations (e.g., GET, DELETE) in the future, such as for read-after-write consistency in the presence of caching.
"Watch" operations specify resourceVersion using a query parameter. It is used to specify the point at which to begin watching the specified resources.
This may be used to ensure that no mutations are missed between a GET of a resource (or list of resources) and a subsequent Watch, even if the current version of the resource is more recent.
This is currently the main reason that list operations (GET on a collection) return resourceVersion.
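As an illustration of that list-then-watch pattern in Go with client-go (the clientset, namespace, and function name are assumptions), passing the list's resourceVersion to the watch ensures no mutation between the two calls is missed:

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listThenWatch lists ConfigMaps, then watches from the list's
// resourceVersion so no intervening mutations are missed.
func listThenWatch(clientset *kubernetes.Clientset) error {
	cms, err := clientset.CoreV1().ConfigMaps("default").
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	w, err := clientset.CoreV1().ConfigMaps("default").
		Watch(context.TODO(), metav1.ListOptions{
			ResourceVersion: cms.ResourceVersion, // taken from the list response
		})
	if err != nil {
		return err
	}
	defer w.Stop()
	for event := range w.ResultChan() {
		fmt.Println(event.Type) // ADDED / MODIFIED / DELETED events from that point on
	}
	return nil
}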