Any way to import Kubernetes Object Schemas into CRD Schemas?

I am looking to create a CRD that has some of the spec fields of an existing k8s object. Is there a way to import the schema and validation checks of the existing spec instead of repeating them manually?
For reference, I am registering the CRD with the API like this - https://gist.github.com/tallclair/2491c8034f62629b224260fb8a1854d9#file-dynamic_crds-go-L56
And I would like to add a PodSpec into this CRD type.

A CRD is managed by a controller specific to that CRD.
Validation of objects belonging to the CRD is achieved through a service that takes a call from the API server; in this case validation would work through a validating admission webhook.
More generally, your CRD does not need to concern itself with a PodSpec per se. The CRD is just some declarative representation of the resource you want your controller to manage.
Extending the k8s API mostly works something like this:
think up some bundled functionality you would like to represent declaratively in one schema (the CRD)
create a controller that handles your CRD
add some validation to make sure the API will reject objects that would confuse the controller you made, and hook it up to the API by way of Dynamic Admission Control
your controller manages the resources required to fulfil the functionality described
I'm sure you could use a PodSpec in your CRD, but I wouldn't. Generally that's an abstraction better left to the controller managing that specific resource.
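For what it's worth, if you do go that route and define your CRD types in Go, you can reuse the upstream type instead of redefining it: embed corev1.PodSpec in your CRD's spec struct and let a generator such as controller-gen derive the OpenAPI validation schema from it. A minimal sketch, where the MyApp type and its fields are hypothetical:

```go
// Hypothetical CRD types; a minimal sketch of reusing the upstream PodSpec.
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MyAppSpec embeds the upstream PodSpec instead of redefining its fields.
type MyAppSpec struct {
	// Pod reuses the k8s PodSpec type; a schema generator can derive
	// the same validation schema a regular Pod's spec gets.
	Pod corev1.PodSpec `json:"pod"`

	// Replicas stands in for fields specific to this CRD.
	Replicas int32 `json:"replicas,omitempty"`
}

// MyApp is the top-level custom resource type.
type MyApp struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec MyAppSpec `json:"spec,omitempty"`
}
```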

Related

How to create kubernetes custom resources or custom plugin?

Whenever a deployment is created, I need to trigger a custom function or webhook. Does Kubernetes provide any option to do this?
Custom Resources are an extension to the Kubernetes API. Just having them standalone is not going to do anything functionally for you. If you need to perform a specific action upon change or deployment or bare existence of a given custom resource, you will need a custom controller that does that.
One of the possible implementations is an Operator. I specifically mention that, as it is fairly easy to create the controller alongside the custom resource definition using the Operator SDK. However, you can also just create a custom resource definition and deploy a custom controller on your own.
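For illustration, the core of such a controller with plain client-go is just an informer plus an event handler. A minimal sketch, where the hook body and resync interval are placeholders:

```go
package main

import (
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Shared informer that resyncs every 30s (the interval is arbitrary here).
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	informer := factory.Apps().V1().Deployments().Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			d := obj.(*appsv1.Deployment)
			// Hypothetical hook: replace with your custom function or webhook
			// call. Note AddFunc also fires for existing objects on startup.
			fmt.Printf("deployment created: %s/%s\n", d.Namespace, d.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // run until killed
}
```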
On a closing note: your question is very broadly formulated, so there is a vast variety of ways to answer it, and this is just one option.

Restrict New Deployments to Kubernetes namespace

I need to be able to restrict users from creating new Deployments/ReplicaSets in existing namespaces if they don't match a list of approved apps. I assume a custom admission controller would be the best way, but I'm unsure how to go about this.
Any solution to do so?
You're right - if you need to use data like the list of approved apps in an admission control decision you need more than RBAC. You could write a custom admission controller, but a more recommended approach would be to use Open Policy Agent (OPA) for this as it gives you the flexibility you need without having to deal with low-level API server integration concerns.
Check out OPA Gatekeeper for an open source integration of OPA with Kubernetes, or Styra for a commercial solution.
Finally, Kyverno is an alternative to OPA for policy-based admission control on Kubernetes.
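For comparison, if you did write the custom admission webhook yourself, its core is a small TLS server answering AdmissionReview requests. A rough sketch, assuming a hardcoded approved-app list and placeholder cert paths:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Hypothetical approved-app list; in practice this could live in a ConfigMap.
var approvedApps = map[string]bool{"frontend": true, "billing": true}

func handleValidate(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
		return
	}

	allowed := true
	msg := ""
	// Only the object's name is checked here; the webhook registration is what
	// scopes this to Deployments/ReplicaSets.
	var deploy appsv1.Deployment
	if err := json.Unmarshal(review.Request.Object.Raw, &deploy); err == nil {
		if !approvedApps[deploy.Name] {
			allowed = false
			msg = fmt.Sprintf("deployment %q is not on the approved list", deploy.Name)
		}
	}

	// Reuse the decoded review so apiVersion/kind are echoed back correctly.
	review.Response = &admissionv1.AdmissionResponse{
		UID:     review.Request.UID,
		Allowed: allowed,
		Result:  &metav1.Status{Message: msg},
	}
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/validate", handleValidate)
	// Admission webhooks must be served over TLS; cert paths are placeholders.
	if err := http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil); err != nil {
		panic(err)
	}
}
```

You would then point a ValidatingWebhookConfiguration for apps/v1 Deployments and ReplicaSets at this endpoint, which is exactly the low-level plumbing OPA Gatekeeper and Kyverno take care of for you.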

How to know which custom controller handles a certain CRD

With CRDs we can extend the functionality of Kubernetes, but how can I know which controller handles a certain CRD? I know there is a CRD named foo registered in my cluster, but how can I find out which controller/pod does the reconciliation for it?
There is no way of knowing just by looking at the CRDs. Several different controllers could be watching the same CRD; it's not like there is a 1:1 relationship.
If you really need to know, there are ways of figuring this out, like enabling the audit log and inspecting the calls received by the k8s API.

How to create a replicaset through a custom resource?

I want to create a custom resource that is able to create a ReplicaSet in response to a certain event. What is the best way to accomplish this?
Note that I am aware of Deployments, but a Deployment does not meet my intended use cases.
It seems like you might be looking to build something that more or less fits the operator pattern.
https://coreos.com/operators/
https://coreos.com/blog/introducing-operators.html
https://github.com/coreos/prometheus-operator
Generally you need to watch some resources, including your custom ones, with a kube client and act based on events propagated from the kube API.
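A rough sketch of that watch-and-act loop, using client-go's dynamic client against a hypothetical foos.example.com custom resource (the group, names, and pod template are placeholders):

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Hypothetical custom resource registered as foos.example.com/v1.
	gvr := schema.GroupVersionResource{Group: "example.com", Version: "v1", Resource: "foos"}
	w, err := dyn.Resource(gvr).Namespace("default").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	for event := range w.ResultChan() {
		if event.Type != watch.Added {
			continue // only react to newly created custom resources
		}
		cr := event.Object.(*unstructured.Unstructured)

		// Create a ReplicaSet derived from the custom resource.
		replicas := int32(1)
		labels := map[string]string{"app": cr.GetName()}
		rs := &appsv1.ReplicaSet{
			ObjectMeta: metav1.ObjectMeta{Name: cr.GetName() + "-rs"},
			Spec: appsv1.ReplicaSetSpec{
				Replicas: &replicas,
				Selector: &metav1.LabelSelector{MatchLabels: labels},
				Template: corev1.PodTemplateSpec{
					ObjectMeta: metav1.ObjectMeta{Labels: labels},
					Spec: corev1.PodSpec{
						Containers: []corev1.Container{{Name: "main", Image: "nginx"}},
					},
				},
			},
		}
		if _, err := clientset.AppsV1().ReplicaSets("default").Create(
			context.TODO(), rs, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
}
```

In practice you would use an informer rather than a raw watch, since watches can be dropped and must be re-established; that bookkeeping is what the operator tooling linked above handles for you.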

How are CoreOS Kubernetes Operators different from native Kubernetes initializers?

Kubernetes 1.7 has an alpha feature called initializers. CoreOS has the concept of an operator. Both seem to involve deploying code that watches the Kubernetes API server for changes to resources—possibly custom—in the cluster, based on annotations those resources contain and which the code understands.
What's the difference? If initializers are part of the core platform, why would I need to create something new that does what looks to my eyes like the same thing?
Operators are standalone "microservices" that continuously and asynchronously reconcile the system's current state towards the configured desired state. Initializers are synchronous hooks that validate or mutate runtime objects before they are created or updated; also see admission controllers. They are usually baked into some "microservice". Considering the lifecycle of a runtime object, initializers are first to act, once per object. Then operators, watching runtime objects, keep reconciling the system towards their desired definitions.
Kubernetes had the concept of initializers well before 1.7, but back then they were a fixed part of the API server. The new initializers feature that you linked to is mainly a decoupling of those parts from the API server:
Today each of these plugins must be compiled into Kubernetes. As Kubernetes grows, the requirement that all policy enforcement beyond coarse grained access control be done through in-tree compilation and distribution becomes unwieldy and limits administrators and the growth of the ecosystem.
(from the design document)