Restrict New Deployments to a Kubernetes Namespace

I need to be able to restrict users from creating new Deployments/ReplicaSets in existing namespaces if they don't match a list of approved apps. I assume a custom admission controller would be the best way, but I'm unsure how to go about this.
Is there a good solution for doing so?

You're right - if you need to use data like a list of approved apps in an admission control decision, you need more than RBAC. You could write a custom admission controller, but a more commonly recommended approach is to use Open Policy Agent (OPA) for this, as it gives you the flexibility you need without having to deal with low-level API server integration concerns.
Check out OPA Gatekeeper for an open source integration of OPA with Kubernetes, or Styra for a commercial solution.
Finally, Kyverno is an alternative to OPA for policy-based admission control on Kubernetes.
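
For example, a Kyverno ClusterPolicy along these lines could enforce an approved-apps list. This is a minimal sketch: the namespace name, label key, and approved values are assumptions, not anything from your setup:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-to-approved-apps
spec:
  validationFailureAction: Enforce   # reject violations rather than just auditing them
  rules:
    - name: require-approved-app-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
                - ReplicaSet
              namespaces:
                - team-apps          # hypothetical existing namespace to protect
      validate:
        message: "Only approved apps may be deployed in this namespace."
        pattern:
          metadata:
            labels:
              # hypothetical label key; '|' means logical OR in Kyverno patterns
              app: "inventory | payments | orders"
```

With a policy engine like this, the approved list lives in the policy rather than in custom webhook code, so it can be changed without redeploying anything.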

Related

How to create kubernetes custom resources or custom plugin?

Whenever a deployment is created, I need to trigger a custom function or webhook. Does Kubernetes provide any option to do this?
Custom Resources are an extension to the Kubernetes API. Just having them standalone is not going to do anything functional for you. If you need to perform a specific action upon the change, deployment, or mere existence of a given custom resource, you will need a custom controller that does that.
One possible implementation is an Operator. I mention that specifically because it is fairly easy to create the controller alongside the custom resource definition using the Operator SDK. However, you can also just create a custom resource definition and deploy a custom controller yourself.
On a closing note: your question is formulated very broadly, so there is a vast variety of ways to answer it, and this is just one option.
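
For reference, a bare CustomResourceDefinition is just a manifest like the sketch below (the group, kind, and schema are made-up examples). Creating Widget objects afterwards stores data in the API server, but nothing acts on it until a controller watches for them:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
```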

Usage of Namespaces in Kubernetes

I have a question regarding namespaces and am seeking your expertise to clear up my doubts.
What I understand about namespaces is that they are there to introduce logical boundaries between teams and projects.
I have also read that namespaces can be used to define different environments within the same cluster,
e.g. Test, UAT and Production.
However, if an organization is developing a solution that consists of X microservices, with dedicated teams looking after each of those services,
should we still use namespaces to separate them, or would they all be deployed in one single namespace reflecting the solution?
E.g. if we are developing an e-commerce application:
Inventory, ShoppingCart, Payment, Orders etc. would be the microservices I can think of. Should we deploy them all under a namespace such as sky-commerce, or should each have a dedicated namespace?
My other question is: if we deploy services in different namespaces, is it possible to access them through an API gateway/Ingress controller?
For instance, I have a front-end SPA application and its BFF (Backend For Frontend). Can the BFF access the other services through the API gateway/Ingress controller?
Please help me to clear these doubts.
Thanks in advance for your prompt reply in this regard.
RSF
Namespaces are cheap, use lots of them. Only ever put two things in the same namespace if they are 100% a single unit (two daemons that are always updated at the same time and are functionally a single deployment) or if you must because a related object is used (such as a Service being in the same ns as Pods it references).
When you create a new Kubernetes namespace, a request is sent to the Kubernetes API server, and provided the requester is authorized to create namespaces, the namespace object is created. The new namespace then acts as a scope for the names of, and the policies applied to, the resources assigned to it.
In regard to your question above: yes, you can keep services in different namespaces, as long as they are able to talk to each other and present themselves to the outside world as one piece.
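
Cross-namespace calls work out of the box via cluster DNS. A minimal sketch (the service and namespace names are made up):

```yaml
# A Service in a hypothetical "payments" namespace...
apiVersion: v1
kind: Service
metadata:
  name: payment
  namespace: payments
spec:
  selector:
    app: payment
  ports:
    - port: 80
      targetPort: 8080
# ...is reachable from a BFF pod in any other namespace at:
#   http://payment.payments.svc.cluster.local
```

Note that an Ingress can only reference Services in its own namespace, so a gateway or BFF typically reaches services in other namespaces through these DNS names instead.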
Since all organizations are different, it is up to you to figure out how best to implement and manage Kubernetes Namespaces. In general, aim to:
Create an effective Kubernetes Namespace structure
Keep namespaces simple and application-specific
Label everything (see the sketch after this list)
Use cluster separation when necessary
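
Following the guidance above, a namespace manifest might look like this minimal sketch (the name, label keys, and values are illustrative assumptions):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: sky-commerce          # simple, application-specific namespace
  labels:
    # recommended well-known label plus team/environment labels
    app.kubernetes.io/part-of: sky-commerce
    team: platform
    environment: production
```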

Swagger multiple definitions best practices

I have an API and a consumer web app, both written in Node and Express. The API is defined by an OpenAPI Specification, with documentation served by swagger-ui-express.
The above web apps are Dockerised and managed in Kubernetes.
The API has a handful of endpoints for managing the lifecycle of a user's registration/application to the service.
Currently, when I need to clear down completed/abandoned applications, or resubmit failed applications, I employ a periodically run CronJob to carry out a database query for the actions mentioned. The CronJob is defined in a Kubernetes YAML config file. This is quickly becoming unmanageable and hard to maintain.
I am looking into having a dedicated endpoint for each of the above tasks. A dedicated CronJob could then periodically send a request to the API endpoint to carry out the complex task. This moves the business logic back into the API and avoids duplication within a cronjob hosted elsewhere. I am ultimately asking whether this is a good approach, or whether there is a better workflow documented somewhere that I could implement.
My thinking is that I could add these new endpoints to the already-existing consumer API, but have the new (housekeeping/management) endpoints separated from the others.
To separate each (current) endpoint into its respective resource, I am defining tags within the specification. Tags don't seem to be sufficient for separating these new "housekeeping" endpoints, though.
Looking through the SwaggerUI documentation, I can see that I can define multiple definitions (via the urls property) to switch between, each powered by an individual specification document. This looks like a very clean way of separating the consumer API from the admin API - is this best practice?
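
For illustration, the separate housekeeping definition I have in mind would be its own small document that the urls property points at; a sketch (all paths, titles, and responses below are placeholders):

```yaml
# housekeeping-api.yaml - served alongside the consumer spec and
# selected in SwaggerUI via the "urls" configuration property
openapi: 3.0.3
info:
  title: Housekeeping API
  version: "1.0"
paths:
  /housekeeping/cleardown:
    post:
      summary: Clear down completed or abandoned applications
      responses:
        "202":
          description: Cleardown task accepted
  /housekeeping/resubmit:
    post:
      summary: Resubmit failed applications
      responses:
        "202":
          description: Resubmission task accepted
```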
Any input would be appreciated on this as I am struggling to find much documentation on this kind of issue.

Best way to achieve feature flag based routing in Kubernetes

I would like to set up an infrastructure that enables easy experimentation in the production environment for developers in my team.
For example, let's assume that I have an HTML page that lists purchases for an online retail shop. The production version is implemented using React, but we would like to test out some alternative implementations, for example one written in Vue.js, and another that is not JS based and instead uses backend rendering.
In this scenario, I would like to flip a feature flag for all the developers who are working on the Vue.js implementation to see the Vue.js page, and for the backend rendering team to see their implementation.
In Kubernetes, each implementation would be a different Pod/ReplicaSet/Service.
What is the best pattern to implement the above routing scheme in Kubernetes? Is Istio based intelligent HTTP header based routing a good candidate for this task?
From my perspective, the cleaner way is to use a different path/FQDN for each type of backend and manage them all with any ingress controller. At the very least, it will allow your developers to access the new version without customizing their requests.
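
For example, a single Ingress resource could expose each implementation under its own hostname (the hostnames and Service names below are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: purchases
spec:
  rules:
    - host: purchases.example.com        # production React version
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: purchases-react
                port:
                  number: 80
    - host: vue.purchases.example.com    # Vue.js experiment
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: purchases-vue
                port:
                  number: 80
```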
But if you want to use a header as a feature flag and manage routing based on it, then yes, I think content-based routing in Istio will work fine.
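
A minimal sketch of such header-based routing with an Istio VirtualService (the header name, flag values, and Service hosts are assumptions; gateway configuration is omitted for brevity):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: purchases-page
spec:
  hosts:
    - purchases.example.com
  http:
    - match:
        - headers:
            x-feature-flag:      # hypothetical flag header set for developers
              exact: vue
      route:
        - destination:
            host: purchases-vue
    - match:
        - headers:
            x-feature-flag:
              exact: ssr
      route:
        - destination:
            host: purchases-ssr
    - route:                     # default: production React implementation
        - destination:
            host: purchases-react
```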

How to create a replicaset through a custom resource?

I want to create a custom resource that can create a ReplicaSet in response to a certain event. What is the best way to accomplish this?
Note that I am aware of Deployments, but they do not meet my intended use cases.
It seems like you are looking to build something that more or less fits the operator pattern.
https://coreos.com/operators/
https://coreos.com/blog/introducing-operators.html
https://github.com/coreos/prometheus-operator
Generally, you need to watch resources (including your custom ones) with a Kubernetes client and act on events propagated from the Kubernetes API.
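
To make that concrete: you define a CRD, deploy a controller that watches instances of it, and have the controller create the ReplicaSet when the triggering event occurs. A purely hypothetical instance of such a custom resource might look like:

```yaml
# Hypothetical custom resource; a controller watching this kind would
# create the described ReplicaSet when the trigger condition is met.
apiVersion: example.com/v1
kind: ReplicaSetTrigger
metadata:
  name: burst-workers
  namespace: default
spec:
  trigger: queue-depth-exceeded   # made-up event name
  template:
    replicas: 5
    image: myapp:1.0
```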