I want to create a custom resource that can create a ReplicaSet in response to a certain event. What is the best way to accomplish this?
Note that I am aware of Deployments, but they do not meet my intended use cases.
It sounds like you are looking to build something that more or less fits the operator pattern.
https://coreos.com/operators/
https://coreos.com/blog/introducing-operators.html
https://github.com/coreos/prometheus-operator
Generally, you need to watch some resources (including your custom ones) with a Kubernetes client and act on the events propagated from the Kubernetes API.
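A minimal sketch of that watch-and-react pattern, assuming client-go and a hypothetical `foos.example.com` custom resource (substitute your own CRD's group, version, and plural name), could look like this:

```go
// Minimal sketch of watching a custom resource with a dynamic informer and
// reacting to its events. The "foos.example.com" group/resource is hypothetical.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Hypothetical custom resource "foos.example.com".
	gvr := schema.GroupVersionResource{Group: "example.com", Version: "v1", Resource: "foos"}
	factory := dynamicinformer.NewDynamicSharedInformerFactory(client, 30*time.Second)
	informer := factory.ForResource(gvr).Informer()

	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			u := obj.(*unstructured.Unstructured)
			// This is where you act on the event, e.g. build an appsv1.ReplicaSet
			// and create it with a typed clientset.
			fmt.Printf("Foo %s/%s created\n", u.GetNamespace(), u.GetName())
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	<-stop
}
```

An Operator is essentially this loop packaged with the CRD it watches, plus the domain logic for keeping the real state in sync with the declared state.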
Whenever a Deployment is created, I need to trigger a custom function or webhook. Does Kubernetes provide any option to do this?
Custom Resources are an extension to the Kubernetes API. Just having them standalone is not going to do anything functionally for you. If you need to perform a specific action upon a change to, the deployment of, or the bare existence of a given custom resource, you will need a custom controller that does that.
One of the possible implementations is an Operator. I mention that specifically because it is fairly easy to create the controller alongside the custom resource definition using the Operator SDK. However, you can also just create a custom resource definition and deploy a custom controller yourself.
On a closing note: your question is formulated very broadly, so there is a vast variety of ways to answer it, and this is just one option.
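To make that concrete, one shape such a custom controller could take is a small process that watches Deployments and calls out to a webhook whenever one appears. This is only a sketch assuming client-go; the webhook URL is a placeholder:

```go
// Rough sketch of a controller that watches Deployments and calls an external
// webhook on creation. The webhook URL is a placeholder.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	informer := factory.Apps().V1().Deployments().Informer()

	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			d := obj.(*appsv1.Deployment)
			body := []byte(fmt.Sprintf(`{"deployment":%q}`, d.Namespace+"/"+d.Name))
			// Placeholder URL: point this at whatever service should react.
			if _, err := http.Post("https://hooks.example.com/deployment-created",
				"application/json", bytes.NewReader(body)); err != nil {
				fmt.Println("webhook call failed:", err)
			}
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	<-stop
}
```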
I need to be able to restrict users from creating new Deployments/ReplicaSets in existing namespaces if they don't match a list of approved apps. I assume a custom admission controller would be the best way, but I'm unsure how to go about this.
Is there any solution to do so?
You're right - if you need to use data like the list of approved apps in an admission control decision you need more than RBAC. You could write a custom admission controller, but a more recommended approach would be to use Open Policy Agent (OPA) for this as it gives you the flexibility you need without having to deal with low-level API server integration concerns.
Check out OPA Gatekeeper for an open source integration of OPA with Kubernetes, or Styra for a commercial solution.
Finally, Kyverno is an alternative to OPA for policy-based admission control on Kubernetes.
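If you do decide to hand-roll it instead, the core of a custom validating admission webhook is just an HTTPS endpoint that receives an AdmissionReview and answers allow or deny. A sketch, with a hard-coded approved-apps list purely for illustration:

```go
// Sketch of a validating admission webhook. The approved-apps list is
// hard-coded for illustration; in practice it would come from a ConfigMap,
// a CRD, or a policy engine such as OPA or Kyverno.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var approvedApps = map[string]bool{"frontend": true, "billing": true} // illustrative

func validate(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	var review admissionv1.AdmissionReview
	if err := json.Unmarshal(body, &review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	name := review.Request.Name
	allowed := approvedApps[name]

	review.Response = &admissionv1.AdmissionResponse{
		UID:     review.Request.UID,
		Allowed: allowed,
	}
	if !allowed {
		review.Response.Result = &metav1.Status{
			Message: fmt.Sprintf("%q is not in the approved app list", name),
		}
	}
	json.NewEncoder(w).Encode(&review)
}

func main() {
	http.HandleFunc("/validate", validate)
	// The API server requires the webhook to serve TLS; cert paths are placeholders.
	log.Fatal(http.ListenAndServeTLS(":8443", "/certs/tls.crt", "/certs/tls.key", nil))
}
```

You would still need a ValidatingWebhookConfiguration pointing the API server at this endpoint for CREATE operations on Deployments/ReplicaSets; Gatekeeper and Kyverno handle that wiring (and the policy language) for you, which is why they are usually the easier route.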
With CRDs we can extend the functionality of Kubernetes, but how can I tell which controller handles a certain CRD? I know there is a CRD named foo registered in my cluster, but how can I find out which controller/pod does the reconciliation for it?
There is no way of knowing just by looking at the CRDs. Several different controllers could be watching the same CRD; it's not like there is a 1-to-1 relationship.
If you really need to know, there are ways of figuring this out, like enabling the audit log and inspecting the calls received by the Kubernetes API.
We are in the process of designing a cloud native application that needs a control loop to keep its objects (few thousands) in desired state. Other than implementing the application as a set of Kubernetes CRDs, we are wondering whether there are any other open source alternatives. If you have developed your own custom implementation of control loop, can you please let us know the reasons behind that decision (as opposed to using Kubernetes CRDs)?
Your description seems to fit the purpose of a CRD controller.
Check out the Kubebuilder framework: you can bootstrap a controller quickly, and you will just need to implement the reconcile loop.
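As a sketch of what that looks like with controller-runtime (the runtime Kubebuilder scaffolds for you), the reconcile loop ends up being a single method. The Foo type and the example.com group here are hypothetical stand-ins for whatever CRD you define:

```go
// Minimal reconcile-loop sketch with controller-runtime. The Foo type and the
// example.com group are hypothetical; Kubebuilder generates the real API types.
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"

	examplev1 "example.com/project/api/v1" // hypothetical, generated by the scaffolding
)

type FooReconciler struct {
	client.Client
}

// Reconcile is called whenever a Foo (or an object it owns) changes; it should
// drive the cluster toward the state declared in the Foo spec.
func (r *FooReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)

	var foo examplev1.Foo
	if err := r.Get(ctx, req.NamespacedName, &foo); err != nil {
		// Object was deleted or is not yet cached; nothing to do.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	logger.Info("reconciling", "name", foo.Name)
	// Compare the spec with the actual state and create/update/delete child objects here.

	return ctrl.Result{}, nil
}

func (r *FooReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&examplev1.Foo{}).
		Complete(r)
}
```

The framework handles watches, caching, work queues, and retries, so even for a few thousand objects the code you own is mostly the body of Reconcile.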
I'm currently looking through an Istio and Kubernetes talk that mentions the management of services along with the use of sidecars. I'm not sure what a sidecar is.
I think of them as helper containers. A pod can have 1 or more containers. A container should do only one thing, like a web server or load balancer. So if you need some extra work to be done inside the pod, like github sync or data processing, you create an additional container AKA sidecar.
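As a toy illustration using the Kubernetes Go API types (the container names and images are made up), a pod with a main web server and a sync sidecar is just two entries in the same container list:

```go
// Toy example of the sidecar idea: one Pod, a main container plus a helper.
// Names and images are placeholders, not real recommendations.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func webWithSyncSidecar() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "web"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				// Main container: serves traffic.
				{Name: "web", Image: "nginx:1.25"},
				// Sidecar: does the "extra work", e.g. keeping content in sync.
				{Name: "content-sync", Image: "example.com/github-sync:latest"},
			},
			// Both containers share the pod's network namespace and can share
			// volumes, which is what makes the pattern work.
		},
	}
}

func main() { _ = webWithSyncSidecar() }
```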
The best (original?) description of the "Sidecar" pattern I know of is from Brendan Burns and David Oppenheimer in their paper "Design Patterns for Container-based Distributed Systems".
Check out the paper + slides here:
https://www.usenix.org/conference/hotcloud16/workshop-program/presentation/burns
There are other design patterns too, like "Ambassador" or "Adapter". I'm not really sure whether the Istio implementation is a sidecar exactly in the way they describe it there, but in any case I think that's where the term originates.