Control Loop Implementation - Kubernetes Alternatives

We are in the process of designing a cloud native application that needs a control loop to keep its objects (a few thousand of them) in the desired state. Other than implementing the application as a set of Kubernetes CRDs, we are wondering whether there are any other open source alternatives. If you have developed your own custom implementation of a control loop, can you please let us know the reasons behind that decision (as opposed to using Kubernetes CRDs)?

Your description seems to fit the purpose of a CRD controller.
Check out the Kubebuilder framework: you can bootstrap a controller quickly, and you will only need to implement the reconcile loop.
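To give a feel for the shape of that work, here is a minimal sketch of a controller-runtime reconcile loop of the kind Kubebuilder scaffolds for you. The `Widget` type and its API group are hypothetical placeholders for your own CRD:

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	widgetsv1 "example.com/widgets/api/v1" // hypothetical API group generated by Kubebuilder
)

// WidgetReconciler drives observed state towards the desired state declared in Widget objects.
type WidgetReconciler struct {
	client.Client
}

func (r *WidgetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var widget widgetsv1.Widget
	if err := r.Get(ctx, req.NamespacedName, &widget); err != nil {
		// The object may have been deleted since the event was queued; nothing to do.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Compare widget.Spec (desired state) with what actually exists, and
	// create, update, or delete dependent objects to close the gap.

	return ctrl.Result{}, nil
}

func (r *WidgetReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&widgetsv1.Widget{}).
		Complete(r)
}
```

The framework handles the watching, caching, work queues, and retries for you; a few thousand objects is typically well within what a single controller of this shape can manage.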

Related

Horizontal Scalability with Drools

Drools works with in-memory data. Is there a way to distribute it horizontally across different Drools instances to improve performance when performing CRUD operations on rules, fact types, etc.? I guess the instances would need to be kept in sync with each other in some way, so that they all have the same data in memory or share a knowledge base somehow. I'm fairly new to Drools and am trying to find a way to move a monolith to a cloud environment (GCP) so it can take advantage of load balancing, scaling, etc. I want to know if there is any feature in Drools itself that supports this, or any way to implement it myself. Thanks in advance for any information/documentation/use case on this matter.
I haven't tried anything yet; my goal is to improve performance and availability through automatic scaling, or by supporting multiple instances of my app.
I'm not sure what kind of "CRUD" you're doing on Drools (or how). But if you just want to deploy new rules (for example), then this is identical to pushing any data or application change to a distributed deployment: either your nodes are updated gradually, so that during the upgrade you have some mix of old and new logic/code, or you deploy new instances with the new logic/code and then transition traffic to the new instances and away from the old ones, either all at once or in a controlled blue/green (or similar) fashion.
If you want to split a monolith, I think the best approach for you would be to consider Kogito [1] and a microservice architecture. With microservices, you could even consider a Function-as-a-Service approach: small immutable service instances that are simply executed and disposed of. Kogito mainly targets the Quarkus platform, but there are also some Spring Boot examples. There is also an OpenShift operator available.
As for sharing the working memory, there was a project in the KIE community called HACEP [2]. Unfortunately it is now deprecated, and we are researching other solutions for persisting the working memory.
[1] https://kogito.kie.org/
[2] https://github.com/kiegroup/openshift-drools-hacep
The term "entry point" refers to the fact that a working memory has multiple partitions, and you can choose which one you insert facts into. If you can organize your business logic around different entry points, you can safely process these 'logical partitions' on different machines in parallel. At a glance, Drools entry points give you something like table partitioning in Oracle, which implies the same options.
Use a load balancer with sticky sessions if you can (from a business point of view) partition 'by client'.
Your question looks more like an architecture question.
As a start, I would have a look at the KIE Execution Server component provided with Drools, which helps you create decision microservices based on Drools rulesets.
KIE Execution Server (used in stateless mode by clients) can be deployed across different pods/instances/servers to ensure horizontal scalability.
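As a rough illustration of that stateless usage, a client can POST a batch of commands to KIE Server's REST API and let the load balancer pick any replica. The sketch below is a minimal example in Go; the host, container id (`orders`), credentials, and fact type are all hypothetical, and the payload follows the documented batch-command JSON format:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical KIE Server endpoint and container id; adjust to your deployment.
	url := "http://kie-server:8080/kie-server/services/rest/server/containers/instances/orders"

	// Batch execution: insert a fact, fire the rules, and get the results back.
	// The fact's fully qualified class name and fields are placeholders for your model.
	payload := []byte(`{
	  "commands": [
	    {"insert": {"object": {"com.example.Order": {"amount": 120}}, "out-identifier": "order"}},
	    {"fire-all-rules": {}}
	  ]
	}`)

	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.SetBasicAuth("kieserver", "password") // placeholder credentials

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```

Because each request carries its own facts and the session is stateless, any replica behind the load balancer can serve it, which is what makes this mode horizontally scalable.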
As mentioned by @RoddyoftheFrozenPeas, one of the problems you'll face will be the simultaneous hot deploy of new rulesets across the "swarm" of KIE Servers that host your services.
That will have to be handled with a proper DevOps strategy.
Best
Emmanuel

How to create Kubernetes custom resources or a custom plugin?

Whenever a deployment is created, I need to trigger a custom function or webhook. Does Kubernetes provide any option to do this?
Custom Resources are an extension to the Kubernetes API. Just having them standalone is not going to do anything functionally for you. If you need to perform a specific action upon change or deployment or bare existence of a given custom resource, you will need a custom controller that does that.
One of the possible implementations is an Operator. I specifically mention that because it is fairly easy to create the controller alongside the custom resource definition using the Operator SDK. However, you can also just create a custom resource definition and deploy a custom controller on its own.
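To make the controller part concrete, here is a minimal client-go sketch that watches for newly created Deployments and invokes a custom hook; the hook itself is left as a stub, and the kubeconfig path assumes you are running outside the cluster:

```go
package main

import (
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; inside a cluster you would use rest.InClusterConfig() instead.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	informer := factory.Apps().V1().Deployments().Informer()

	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			d := obj.(*appsv1.Deployment)
			// Call your custom function or webhook here.
			fmt.Printf("deployment created: %s/%s\n", d.Namespace, d.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // keep watching until the process is killed
}
```

Note that on startup the informer also fires AddFunc once for every existing Deployment, so a real controller should be idempotent about what it triggers.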
On a closing note: your question is very broadly formulated, so there is a vast variety of ways to answer it, and this is just one option.

Best way to achieve feature flag based routing in Kubernetes

I would like to set up an infrastructure that enables easy experimentation in the production environment for developers in my team.
For example, let's assume that I have an HTML page that lists purchases for an online retail shop. The production version is implemented in React, but we would like to test some alternative implementations, for example one written in Vue.js and another that is not JS based and instead uses backend rendering.
In this scenario, I would like to flip a feature flag for all the developers who are working on the Vue.js implementation to see the Vue.js page, and for the backend rendering team to see their implementation.
In Kubernetes, each implementation would be a different pod/ReplicaSet/service.
What is the best pattern to implement the above routing scheme in Kubernetes? Is Istio based intelligent HTTP header based routing a good candidate for this task?
From my perspective, a cleaner way is to use a different path/FQDN for each type of backend and manage all of them with any ingress controller. At the very least, it will let your developers access the new version without customizing their requests.
But if you want to use a header as a feature flag and manage routing based on it, then yes, I think content-based routing in Istio will do the job.
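For illustration, a header-based routing rule could be created with Istio's Go client roughly as follows. The `x-feature-flag` header, the `purchases*` service names, and the namespace are all hypothetical, and this assumes the `istio.io/client-go` and `istio.io/api` packages:

```go
package main

import (
	"context"

	networkingv1alpha3 "istio.io/api/networking/v1alpha3"
	clientv1alpha3 "istio.io/client-go/pkg/apis/networking/v1alpha3"
	istioclient "istio.io/client-go/pkg/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ic, err := istioclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Requests carrying "x-feature-flag: vue" go to the Vue.js service;
	// everything else falls through to the React default.
	vs := &clientv1alpha3.VirtualService{
		ObjectMeta: metav1.ObjectMeta{Name: "purchases", Namespace: "default"},
		Spec: networkingv1alpha3.VirtualService{
			Hosts: []string{"purchases"},
			Http: []*networkingv1alpha3.HTTPRoute{
				{
					Match: []*networkingv1alpha3.HTTPMatchRequest{{
						Headers: map[string]*networkingv1alpha3.StringMatch{
							"x-feature-flag": {MatchType: &networkingv1alpha3.StringMatch_Exact{Exact: "vue"}},
						},
					}},
					Route: []*networkingv1alpha3.HTTPRouteDestination{{
						Destination: &networkingv1alpha3.Destination{Host: "purchases-vue"},
					}},
				},
				{
					Route: []*networkingv1alpha3.HTTPRouteDestination{{
						Destination: &networkingv1alpha3.Destination{Host: "purchases-react"},
					}},
				},
			},
		},
	}

	if _, err := ic.NetworkingV1alpha3().VirtualServices("default").Create(
		context.TODO(), vs, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

In practice you would usually just apply the equivalent VirtualService YAML; the point is that the match-on-header rule is ordered before the default route, so flagged developers get the experimental implementation and everyone else is untouched.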

How to create a ReplicaSet through a custom resource?

I want to create a custom resource that is able to create a ReplicaSet when a certain event occurs. What is the best way to accomplish this?
Note that I am aware of Deployments, but a Deployment does not meet my intended use cases.
It seems like you might be looking at building something that more or less fits the operator pattern.
https://coreos.com/operators/
https://coreos.com/blog/introducing-operators.html
https://github.com/coreos/prometheus-operator
Generally, you need to watch some resources (including your custom ones) with a Kubernetes client and act based on the events propagated from the Kubernetes API.
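The action taken on such an event is then just an ordinary API call. For instance, a controller creating a ReplicaSet with client-go might look roughly like the sketch below; the names, labels, and pod spec are placeholders:

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	replicas := int32(3)
	labels := map[string]string{"app": "my-workload"} // placeholder label set

	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-workload", Namespace: "default"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "app", Image: "nginx:1.25"}},
				},
			},
		},
	}

	// In a real controller you would also set an owner reference pointing back to
	// your custom resource, so the ReplicaSet is garbage-collected along with it.
	if _, err := clientset.AppsV1().ReplicaSets("default").Create(
		context.TODO(), rs, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

Wiring this into a watch on your custom resource (as in the operator links above) gives you exactly the "create a ReplicaSet under a certain event" behavior you describe.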

How are CoreOS Kubernetes Operators different from native Kubernetes initializers?

Kubernetes 1.7 has an alpha feature called initializers. CoreOS has the concept of an operator. Both seem to involve deploying code that watches the Kubernetes API server for changes to resources—possibly custom—in the cluster, based on annotations those resources contain and which the code understands.
What's the difference? If initializers are part of the core platform, why would I need to create something new that does what looks to my eyes like the same thing?
Operators are standalone "microservices" that continuously and asynchronously reconcile the system's current state towards the configured desired state. Initializers are synchronous hooks that validate or mutate runtime objects before they are created or updated (see also admission controllers); they are usually baked into some "microservice". If you consider the lifecycle of a runtime object, initializers are the first to act, and they act once; operators then watch runtime objects and keep reconciling the system against their desired definitions.
Kubernetes had the concept of initializers way before 1.7, but then they were a fixed part of the API server. The new initializers feature that you linked to is mainly a decoupling of those parts from the API server:
Today each of these plugins must be compiled into Kubernetes. As Kubernetes grows, the requirement that all policy enforcement beyond coarse grained access control be done through in-tree compilation and distribution becomes unwieldy and limits administrators and the growth of the ecosystem.
(from the design document)