mTLS between services in K8S

I would like to enable mTLS between services in one K8S namespace. I wonder if I can do it without using a service mesh? I considered cert-manager, but all the examples I've seen involved an Ingress resource, which I do not need as my services are not exposed outside of the cluster. Thanks

Using a service mesh like Istio or Linkerd is currently the only general solution for this.
It should be possible to do this with a library in your app as well; the library would typically need to support certificate management. Service meshes typically use Envoy as a sidecar proxy, which implements a set of "control plane" APIs for management called the xDS protocols - this is something your library would typically need to implement too. In addition, you need a control plane to manage the services.
A drawback of doing this in a library is that it is language-dependent. The upside is better performance, since there is no extra proxy hop.
Google has recently taken this route with Traffic Director - a proxyless service mesh.

You need something like SPIRE together with the SPIRE integration operator. Together, they can create mTLS keys and certificates for use within a cluster, where the only configuration you need is pod annotations. The mTLS keypairs are provided as Secrets, which you mount into your pods. SPIRE and the operator automatically handle keypair and CA rotation and update the Secrets accordingly.
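As a rough sketch of what this looks like on the workload side (the annotation key and secret name here are illustrative and depend on the operator version you deploy; the Secret itself follows the usual tls.crt / tls.key / ca.crt layout):

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-service
      annotations:
        # Illustrative annotation; the exact key is defined by the
        # SPIRE integration operator release you install.
        spire-integration.example/tls-secret: my-service-tls
    spec:
      containers:
      - name: app
        image: my-service:latest
        volumeMounts:
        - name: tls
          mountPath: /run/tls
          readOnly: true
      volumes:
      - name: tls
        secret:
          # Secret maintained by the operator; rotated keypairs and the
          # CA bundle are written back into it in place.
          secretName: my-service-tls

Your application then loads the certificate and key from /run/tls and should re-read them when they are rotated.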

Related

Installing knative on existing kind cluster

I have an existing kind k8s cluster with a bunch of running services and an nginx-ingress setup, and I would like to add Knative to it.
Is there a way of doing this with nginx-ingress? It seems like networking for Knative is a bit more complex than a normal service installation.
Knative needs more capabilities out of the HTTP routing layer than are exposed via the Kubernetes Ingress resource. (Percentage splits and header rewrite are two of the big ones.)
Unfortunately, no one has written an adapter from Knative's "KIngress" implementation to the nginx ingress. In the future, the Gateway API (aka "Ingress V2") may provide these capabilities; in the meantime, you'll need to install one of the other network adapters and ingress implementations. Kourier provides the smallest implementation, while Contour also provides an Ingress implementation if you want to switch away from nginx entirely.
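For example, with Kourier: after applying the net-kourier release manifest matching your Knative version, you point Knative Serving at it by setting the ingress class in the config-network ConfigMap (namespace and key as in the standard Knative Serving install):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config-network
      namespace: knative-serving
    data:
      # Tell Knative Serving to program Kourier instead of the
      # default ingress implementation.
      ingress-class: "kourier.ingress.networking.knative.dev"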

Istio service mesh in on premise kubernetes cluster in data center

Currently, we have a k8s cluster in our data center due to compliance reasons. We are running Traefik as the ingress controller. Now we want to add a service mesh for monitoring service-level communication. Can you suggest how I can do this? Should I replace the Traefik ingress controller and run the Istio ingress on a host network setup, or is there a better option that keeps Traefik and adds Istio alongside it?
If you are going to install Istio to get "free" observability features, you need to keep in mind that it doesn't fit every scenario. For example, if you want the latency within a service, that is not possible with Istio: the sidecar only sees traffic between services, not what happens inside one.
I would recommend Istio if you need a service mesh and/or routing in addition to observability, but don't install it just for observability; there are other tools specific to that.
That is without counting the fact that you will spend cluster resources on an extra sidecar container for each service, just for monitoring. Not a good approach, in my opinion.

Deploying and exposing a microservice on Openshift

I'm new to the k8s world and using OpenShift 4.2.18. I want to deploy a microservice on it. What I need is one common IP, with each microservice accessible under its own virtual path.
Like this,
https://my-common-ip/microservice1/
https://my-common-ip/microservice2/
https://my-common-ip/microservice3/
Service and deployment are OK. However, I'm confused by the other terms. Should I use a Route or an Ingress? Should I use a VirtualService like in this link? I have also heard about HAProxy and Istio. What's the best way of doing this? I would appreciate it if you could provide some information about these terms.
Thanks in advance, Best Regards
Route and Ingress are intended to achieve the same end. Originally Kubernetes had no such concept, so in OpenShift the concept of a Route was developed, along with the bits for providing a load-balancing proxy etc. In time it was seen as useful to have something like this in Kubernetes, so, using Route from OpenShift as a starting point for what could be done, Ingress was developed for Kubernetes. The Ingress version went for a more generic, rules-based system, so how you specify them looks different, but the intent is to effectively be able to do the same thing. If you intend to deploy your application on multiple Kubernetes distributions at the same time, then Ingress might be a good option.
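For the layout in the question, a path-based Ingress is enough. A minimal sketch, assuming each microservice has a Service listening on port 8080 (on older clusters such as OpenShift 4.2 the apiVersion is networking.k8s.io/v1beta1 and the backend fields are serviceName/servicePort):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: microservices
    spec:
      rules:
      - http:
          paths:
          - path: /microservice1
            pathType: Prefix
            backend:
              service:
                name: microservice1   # assumed Service name
                port:
                  number: 8080        # assumed Service port
          - path: /microservice2
            pathType: Prefix
            backend:
              service:
                name: microservice2
                port:
                  number: 8080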
VirtualService is part of Istio, a service mesh, which is not necessary for external access to an app. A service mesh brings complexity; unless the capabilities it offers are really needed for your use case, there is no reason to use it.

Is it possible to use Istio without kubernetes or docker?

I have 4 microservices running on my laptop listening at various ports. Can I use Istio to create a service mesh on my laptop so the services can communicate with each other through Istio? All the links on google about Istio include kubernetes but I want to run Istio without Kubernetes. Thanks for reading.
In practice, not really as of this writing, since pretty much all the Istio runbooks and guides are available for Kubernetes.
In theory, yes. Istio components are designed to be 'platform independent'. Quote from the docs:
While Istio is platform independent, using it with Kubernetes (or infrastructure) network policies, the benefits are even greater, including the ability to secure pod-to-pod or service-to-service communication at the network and application layers.
But unless you know the details of each of the components really well (Envoy, Mixer, Pilot, Citadel, and Galley) and are willing to spend a lot of time on it, getting Istio running outside of Kubernetes is not practically feasible.
If you want something less tied to Kubernetes, you can take a look at Consul; although it doesn't have all the functionality Istio has, it overlaps with some of Istio's features.
I did some googling and found that Istio claims to support apps running outside of k8s, such as on VMs, but I have never tried it.
https://istio.io/latest/news/releases/0.x/announcing-0.2/#cross-environment-support
https://jimmysong.io/blog/istio-vm-odysssey/

Is testing on OpenShift Container Platform (OCP) equivalent to testing on OpenShift Origin from a Kubernetes standpoint?

This concerns applications which are programmed to use the Kubernetes API.
Should we assume that OpenShift Container Platform, from a Kubernetes standpoint, matches all the standards that OpenShift Origin (and Kubernetes) does?
Background
Compatibility testing cloud-native apps that are shipped can involve a large matrix. It seems that, as a baseline, if OCP is meant to be a pure Kubernetes distribution with add-ons, then testing against it is trivial, so long as you are only using core Kubernetes features.
Alternatively, if shipping an app with support on OCP means you must test on OCP, that would imply that (1) the app uses OCP functionality or (2) the app uses kube functionality which may not work correctly in OCP, which should be considered a bug.
In practice you should be able to regard OpenShift Container Platform (OCP) as being the same as OKD (previously known as Origin). This is because it is effectively the same software and setup.
In comparing both of these to plain Kubernetes there are a few things you need to keep in mind.
The OpenShift distribution of Kubernetes is set up as a multi-tenant system, with a clear distinction between normal users and administrators. This means RBAC is set up so that a normal user is restricted in what they can do out of the box. A normal user cannot, for example, create arbitrary resources which affect the whole cluster. They also cannot run images that only work when run as root, as they run under a default service account which doesn't have such rights. That default service account also has no access to the REST API. A normal user has no privileges to enable the ability to run such images as root. A user who is a project admin could allow an application to use the REST API, but what it could do via the REST API is restricted to the project/namespace it runs in.
So if you develop an application on Kubernetes with the expectation that you have full admin access and can create any resources you want, or assume there is no RBAC/SCC in place restricting what you can do, you can have issues getting it running.
This doesn't mean you can't get it working, it just means that you need to take extra steps so you or your application is granted extra privileges to do what it needs.
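For example, if the application needs read access to the REST API within its own project, a project admin can bind the built-in view role to the default service account. A minimal sketch (the namespace name is an assumption):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: default-sa-view
      namespace: my-project      # assumed project/namespace name
    subjects:
    - kind: ServiceAccount
      name: default
      namespace: my-project
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: view                 # read-only; the RoleBinding scopes it to this namespace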
This is the main area where people have issues, and it is because OpenShift is set up to be more secure out of the box, to suit a multi-tenant environment with many users, or even to separate different applications so that they cannot interfere with each other.
The next thing worth mentioning is Ingress. When Kubernetes first came out, it had no concept of Ingress. To fill that hole, OpenShift implemented the concept of Routes. Ingress only came much later, and was based in part on what was done in OpenShift with Routes. That said, there are things you can do with Routes which I believe you still can't do with Ingress.
Anyway, obviously, if you use Routes, that only works on OpenShift, as a plain Kubernetes cluster only has Ingress. If you use Ingress, you need to be using OpenShift 3.10 or later. In 3.10 there is an automatic mapping of Ingress to Route objects, so Ingress should work even though OpenShift actually implements it under the covers using Routes and its HAProxy router setup.
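For comparison, a minimal Route looks like this (hostname and Service name are assumptions):

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: my-app
    spec:
      host: my-app.apps.example.com   # assumed hostname
      to:
        kind: Service
        name: my-app                  # assumed Service name
      tls:
        termination: edge             # TLS ends at the router; re-encrypt and passthrough also exist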
There are obviously other differences as well. OpenShift has DeploymentConfig because Kubernetes originally had no Deployment. Again, there are things you can do with DeploymentConfig that you can't do with Deployment, but the Deployment object from Kubernetes is supported. One difference with DeploymentConfig is how it works with ImageStream objects in OpenShift, which don't exist in Kubernetes. Stick with Deployment/StatefulSet/DaemonSet, avoid the OpenShift objects that were created before Kubernetes had such features, and you should be fine.
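A Deployment written for portability then looks the same on both platforms. A sketch, assuming an image that runs as a non-root user (the image name and port are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            # Plain image reference, no ImageStream, so it also works
            # on plain Kubernetes.
            image: quay.io/example/my-app:1.0
            ports:
            - containerPort: 8080   # assumed app port; above 1024, so no root needed to bind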
Do note, though, that OpenShift takes a conservative approach on some resource types, so they may not be enabled by default. This applies to things that are still regarded as alpha or are otherwise in early development and subject to change. You should avoid things which are still in development, even on plain Kubernetes.
That all said, for the core Kubernetes bits, OpenShift is verified for conformance against CNCF tests for Kubernetes. So use what is covered by that and you should be okay.
https://www.cncf.io/certification/software-conformance/