How to create a dedicated service catalog per OpenShift project?

Is it possible to create a dedicated service catalog for each project/namespace in OpenShift? I am hosting a multi-tenant OpenShift cluster. When tenants log in to the OpenShift cluster, they should only be able to see the services that are relevant to them in the service catalog.
For example: Tenant-A should only see MySQL and Apache services, and Tenant-B should only see Elasticsearch and Ruby services. Is it possible to do this kind of isolation?

Related

How can I deploy one entry point for applications running across Kubernetes clusters?

I have two K8s clusters set up, one on AWS EKS and the other on GCP. I set up a Rancher server which is used to manage these two clusters. I have an application (appA) which is packaged in a Helm chart. The application is just a REST API server created with Node.js + Express.
It is deployed to both clusters via Rancher. After deployment, appA runs in the two clusters separately.
I have another application (appB, running outside of K8s) which listens on a database stream, and it needs to call appA (in the K8s clusters) to process the change events.
My question is: how can I deploy an entry point, like nginx, which will forward appB's requests to appA, so that one of the pods from either cluster serves each request?
You can expose appA as a Kubernetes Service of type LoadBalancer.
You can then run nginx outside of K8s, create an upstream, and add both the EKS and GKE load balancer URLs as backend servers.
Follow the guide below:
https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/
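A minimal sketch of the first step, applied in each cluster. The Service name, the `app: appa` selector label, and the Express port are assumptions for illustration, not values from the question:

```yaml
# Expose appA through the cloud provider's load balancer (one per cluster).
apiVersion: v1
kind: Service
metadata:
  name: appa-lb
spec:
  type: LoadBalancer
  selector:
    app: appa            # must match the labels on the appA pods
  ports:
    - port: 80           # port the cloud load balancer listens on
      targetPort: 3000   # port the Express server listens on (assumed)
```

Applying this in both clusters gives you two external hostnames (visible with `kubectl get svc appa-lb`), which you would then list as the backend servers in the external nginx upstream, as described in the guide above.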

CloudSQL Proxy on GKE: Service vs Sidecar

Does anyone know the pros and cons of installing the CloudSQL Proxy (which allows us to connect securely to Cloud SQL) on a Kubernetes cluster as a service, as opposed to making it a sidecar alongside the application container?
I know that it is mostly used as a sidecar. I have used it both ways (in non-production environments), but I never understood why the sidecar is preferable to a service. Can someone enlighten me please?
The sidecar pattern is preferred because it is the easiest and most secure option. Traffic to the Cloud SQL Auth proxy is not encrypted or authenticated, and it relies on the user to restrict access to the proxy (typically by listening only on localhost).
When you run the Cloud SQL proxy, you are essentially saying "I am user X and I'm authorized to connect to the database". When you run it as a service, anyone who can reach that service connects to the database authorized as "user X".
You can see this warning in the Cloud SQL proxy example for running it as a service in K8s, or watch the video on Connecting to Cloud SQL from Kubernetes, which explains the reasoning as well.
The Cloud SQL Auth proxy is the recommended way to connect to Cloud SQL, even when using private IP. This is because the Cloud SQL Auth proxy provides strong encryption and authentication using IAM, which can help keep your database secure.
When you connect using the Cloud SQL Auth proxy, the Cloud SQL Auth proxy is added to your pod using the sidecar container pattern. The Cloud SQL Auth proxy container is in the same pod as your application, which enables the application to connect to the Cloud SQL Auth proxy using localhost, increasing security and performance.
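A rough sketch of that sidecar layout, using the classic v1 proxy invocation. The application image, the instance connection name, and the Postgres port are placeholders, not values from the question:

```yaml
# Sidecar pattern: the proxy shares the Pod's network namespace, so the
# application reaches the database via 127.0.0.1 and nothing outside the
# Pod can talk to the proxy.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: app
      image: myapp:latest                  # placeholder application image
      env:
        - name: DB_HOST
          value: "127.0.0.1"               # the proxy listens on localhost
        - name: DB_PORT
          value: "5432"
    - name: cloudsql-proxy
      image: gcr.io/cloudsql-docker/gce-proxy:1.33
      command: ["/cloud_sql_proxy",
                "-instances=my-project:my-region:my-instance=tcp:5432"]
```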
A sidecar is a container that runs in the same Pod as the application container; because it shares the same volumes and network as the main container, it can "help" or enhance how the application operates. In Kubernetes, a Pod is a group of one or more containers with shared storage and network, and a sidecar is a utility container that is loosely coupled to the main application container.
Sidecar pros: it scales indefinitely as you increase the number of pods, it can be injected automatically, and it is already used by service meshes.
Sidecar cons: it is a bit harder to adopt, as developers can't just deploy their app but must deploy a whole stack in each deployment; it consumes more resources; and it is harder to secure, because every Pod must run its own copy of the helper (for example, a log aggregator pushing logs to a database or queue).
Refer to the documentation for more information.

Deploy an Elastic Kubernetes Cluster with OpenStack

I am working with a cloud provider named Wekeo, which offers only static provisioning of instances. I have access to the Morpheus API and the underlying OpenStack API.
My goal is to deploy an elastic cluster (like EKS), but I'm getting lost among the many concepts and tools I have found, so any guidance would be appreciated!

Usage of custom resource definition (CRD) vs Service Catalog in k8s

I recently started to explore k8s extensions and got introduced to two concepts:
CRDs.
Service catalogs.
They look pretty similar to me. The only difference, to my understanding, is that CRDs are deployed inside the same cluster to be consumed, whereas catalogs expose services that live outside the cluster, for example a database service (a client can order a MySQL cluster that will be accessible from their own cluster).
My query here is:
Is my understanding correct? If yes, can there be any other scenario where I would want to create a catalog and not a CRD?
Yes, your understanding is correct. Taken from the official documentation:

Example use case

An application developer wants to use message queuing as part of their application running in a Kubernetes cluster. However, they do not want to deal with the overhead of setting such a service up and administering it themselves. Fortunately, there is a cloud provider that offers message queuing as a managed service through its service broker.

A cluster operator can set up Service Catalog and use it to communicate with the cloud provider's service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster. The application developer therefore does not need to be concerned with the implementation details or management of the message queue. The application can simply use it as a service.
With a CRD, you are responsible for provisioning the resources, running the backend logic (typically a controller), and so on.
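For contrast, here is a minimal, hypothetical CRD. Note that it only registers a new API type; the controller that watches `Database` objects and actually provisions anything is entirely your responsibility:

```yaml
# Registers a new resource kind; by itself this provisions nothing.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                engine:
                  type: string       # e.g. "mysql"
```

With Service Catalog, by comparison, the provisioning logic lives in the external service broker.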
More info can be found in this KubeCon 2018 presentation.

Load balancing the Spring Cloud Data Flow server

In Spring Cloud Data Flow, as per my understanding, each stream is a microservice but the Data Flow server is not. Am I right?
Is it possible to have multiple instances of the Spring Cloud Data Flow (SCDF) server? How do I load-balance the Data Flow server? I am planning to deploy it on AWS. The official documentation doesn't mention anything about load balancing the Data Flow server. If it is possible, how do the Dashboard and shell work?
The SCDF server is a regular Spring MVC + Spring Boot application that serves the REST APIs, DSL commands, UI, and repository access for stream/task metadata persistence.
On platforms like Cloud Foundry and Kubernetes, when you scale the SCDF server, the platform automatically handles traffic routing and load balancing.
If you were to orchestrate the deployment on your own on AWS, you'd have to put a load balancer in front of the server instances. The shell, UI, and REST APIs would then hit the load balancer to interact with the SCDF server.
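For example, on Kubernetes the scaling amounts to a Deployment with more than one replica behind a Service. This is only an illustrative sketch; the image tag is a placeholder, and all replicas must point at the same external database so they share the stream/task metadata:

```yaml
# Two SCDF server instances behind one entry point.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scdf-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: scdf-server
  template:
    metadata:
      labels:
        app: scdf-server
    spec:
      containers:
        - name: scdf-server
          image: springcloud/spring-cloud-dataflow-server:2.9.6  # placeholder tag
          ports:
            - containerPort: 9393    # SCDF server's default port
---
apiVersion: v1
kind: Service
metadata:
  name: scdf-server
spec:
  type: LoadBalancer                 # single address for shell/UI/REST clients
  selector:
    app: scdf-server
  ports:
    - port: 80
      targetPort: 9393
```

On AWS without Kubernetes, an ELB/ALB in front of the instances plays the same role as this Service.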