Running concurrent major versions of an API with Google Endpoints in Kubernetes

I'm struggling to find documentation on configuring the Extensible Service Proxy (ESP) and Google Endpoints with the correct pattern for deploying multiple versions of an API.
Brief overview - I have Docker building two releases of an API.
They run in separate containers.
I currently have a Kubernetes pod with ESP and APIv1.
Really I want to run a pod with ESP+APIv1 and a pod with ESP+APIv2, but I can't work out how this would work - my external IP and DNS would all point at one pod, and Endpoints doesn't seem to be consulted until the request reaches the ESP service. Is there some mechanism for handing off to another ESP instance? I'm clearly missing something here.
OR - in order to run multiple versions, should I be running a pod with ESP, APIv1, and APIv2 in it? That doesn't seem ideal from a scalability or management point of view.

Unless APIv1 and APIv2 are disjoint, you can probably implement methods supporting both versions in the same Dockerized app. This approach is explained in more detail here:
https://cloud.google.com/endpoints/docs/lifecycle-management
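As a rough sketch of that approach (every name here, including the Deployment, images, ports, and the Endpoints service name, is a placeholder rather than something from your setup), a single ESP sidecar can front one backend container that serves both /v1 and /v2 paths:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      # ESP sidecar: receives external traffic and forwards it to the API container
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args:
        - --http_port=8081
        - --backend=127.0.0.1:8080
        - --service=my-api.endpoints.my-project.cloud.goog   # placeholder Endpoints service name
        - --rollout_strategy=managed
        ports:
        - containerPort: 8081
      # One API container that exposes both /v1/... and /v2/... paths
      - name: api
        image: gcr.io/my-project/my-api:latest                # placeholder image
        ports:
        - containerPort: 8080

A Service of type LoadBalancer pointing at port 8081 of these pods would then expose both versions behind the same external IP and DNS name.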

Related

Deploying multiple apps on a single cluster

I am wondering how to deploy multiple applications, such as a Spring Boot app and a Node.js app, on a single Kubernetes cluster that has a single Istio load balancer.
Is it possible?
I am a beginner in DevOps, so I need some guidance on this.
Thank you for any suggestions.
Yes, it's possible. Moreover, this is the exact purpose of the load balancer: to be a single point of entry for multiple applications.
If you deploy the Istio example application, you will create three versions of the reviews service (reviews-v1, reviews-v2, reviews-v3; as far as K8s and Istio are concerned, those are three different apps). With the use of VirtualServices and DestinationRules, Istio manages traffic between those three applications.
Since you are a beginner, I would strongly recommend a thorough read of the Istio documentation, especially the Tasks and Examples sections.
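As a rough illustration of that traffic management (the gateway, path prefixes, service names, and ports below are placeholders, not anything from your cluster), one VirtualService bound to a single ingress gateway can route to two unrelated apps by path:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: apps
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway            # placeholder Istio Gateway behind the external load balancer
  http:
  - match:
    - uri:
        prefix: /spring   # requests under /spring go to the Spring Boot service
    route:
    - destination:
        host: springboot-app
        port:
          number: 8080
  - match:
    - uri:
        prefix: /node     # requests under /node go to the Node.js service
    route:
    - destination:
        host: nodejs-app
        port:
          number: 3000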

Is testing on OpenShift Container Platform (OCP) equivalent to testing on OpenShift Origin from a Kubernetes standpoint?

This concerns applications which are programmed to use the Kubernetes API.
Should we assume that OpenShift Container Platform, from a Kubernetes standpoint, matches all the standards that OpenShift Origin (and Kubernetes) does?
Background
Compatibility testing for cloud-native apps that you ship can involve a large matrix. It seems that, as a baseline, if OCP is meant to be a pure Kubernetes distribution with add-ons, then testing against it is trivial, so long as you are only using core Kubernetes features.
Alternatively, if shipping an app with support on OCP means you must test on OCP, that would to me imply that either (1) the app uses OCP-specific functionality or (2) the app uses Kubernetes functionality which may not work correctly on OCP, which should be considered a bug.
In practice you should be able to regard OpenShift Container Platform (OCP) as being the same as OKD (previously known as Origin). This is because it is effectively the same software and setup.
In comparing both of these to plain Kubernetes there are a few things you need to keep in mind.
The OpenShift distribution of Kubernetes is set up as a multi-tenant system, with a clear distinction between normal users and administrators. This means RBAC is configured so that a normal user is restricted in what they can do out of the box. A normal user cannot, for example, create arbitrary resources which affect the whole cluster. They also cannot run images that will only work if run as root, as pods run under a default service account which doesn't have such rights. That default service account also has no access to the REST API. A normal user has no privileges to enable the ability to run such images as root. A user who is a project admin could allow an application to use the REST API, but what it could do via the REST API will be restricted to the project/namespace it runs in.
So if you develop an application on Kubernetes where you have an expectation that you have full admin access, and can create any resources you want, or assume there is no RBAC/SCC in place that will restrict what you can do, you can have issues getting it running.
This doesn't mean you can't get it working, it just means that you need to take extra steps so you or your application is granted extra privileges to do what it needs.
This is the main area where people have issues, and it is because OpenShift is set up to be more secure out of the box, both to suit a multi-tenant environment for many users and to separate different applications so that they cannot interfere with each other.
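For example, a cluster admin can grant a pod's service account a more permissive SCC so an image that insists on running as root still works, and a project admin can give that service account read access to the API within its own project. Both commands below are sketches with placeholder service account and project names:

# Allow pods using this service account to run as any UID (e.g. as root)
oc adm policy add-scc-to-user anyuid -z my-app-sa -n my-project

# Give the same service account read-only access to the REST API within its project
oc policy add-role-to-user view -z my-app-sa -n my-project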
The next thing worth mentioning is Ingress. When Kubernetes first came out, it had no concept of Ingress. To fill that hole, OpenShift implemented the concept of Routes. Ingress only came much later, and was based in part on what was done in OpenShift with Routes. That said, there are things you can do with Routes which I believe you still can't do with Ingress.
Anyway, obviously, if you use Routes, that only works on OpenShift, as a plain Kubernetes cluster only has Ingress. If you use Ingress, you need to be using OpenShift 3.10 or later. In 3.10 there is an automatic mapping of Ingress to Route objects, so I believe Ingress should work even though OpenShift actually implements Ingress under the covers using Routes and its HAProxy router setup.
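For reference, a minimal Route might look like the sketch below (the hostname and Service name are made up for this example); on 3.10+ an Ingress with an equivalent rule would be mapped to much the same object:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: web
spec:
  host: web.apps.example.com   # placeholder external hostname
  to:
    kind: Service
    name: web                  # placeholder Service to expose
  tls:
    termination: edge          # TLS terminated at the router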
There are obviously other differences as well. OpenShift has DeploymentConfig because Kubernetes never originally had Deployment. Again, there are things you can do with DeploymentConfig that you can't do with Deployment, but the Deployment object from Kubernetes is supported. One difference with DeploymentConfig is how it works with ImageStream objects in OpenShift, which don't exist in Kubernetes. Stick with Deployment/StatefulSet/DaemonSet, and as long as you don't use the OpenShift objects which were created when Kubernetes didn't have such features, you should be fine.
Do note though that OpenShift takes a conservative approach on some resource types, so they may not be enabled by default. This applies to things that are still regarded as alpha, or are otherwise in very early development and subject to change. You should avoid things which are still in development even if using plain Kubernetes.
That all said, for the core Kubernetes bits, OpenShift is verified for conformance against CNCF tests for Kubernetes. So use what is covered by that and you should be okay.
https://www.cncf.io/certification/software-conformance/

Kubernetes - Load balancing web app access per connection

It's been a long time since I came here, and I hope you're doing fine :)
For now, I have the pleasure of working with Kubernetes! So let's start! :)
[THE EXISTING SETUP]
I have an operational Kubernetes cluster with which I work every day. It consists of several applications, one of which is of particular interest to us: the web management interface.
My cluster currently has one master and four nodes.
For my web application, each pod contains 3 containers: web / mongo / filebeat, and for technical reasons we decided to assign a maximum of 5 users to each web pod.
[WHAT I WANT]
I want to deploy a web pod on each node (web0, web1, web2, web3), which I can already do, and have each session (1 session = 1 user) spread across those pods.
For now, all HTTP requests are processed by web0.
[QUESTIONS]
Am I forced to go through an external load balancer (HAProxy)?
Can I use an internal load balancer by configuring a Service?
Does anyone have experience on the implementation described above?
I thank in advance those who can help me in this process :)
This generally depends on how and where you've deployed your Kubernetes infrastructure, but you can do this natively with a few options.
Firstly, you'll need to scale your web deployment. This is very simple to do:
kubectl scale --current-replicas=2 --replicas=3 deployment/web
If you've deployed into a cloud provider (such as AWS using kops, or GKE) you can use a Service. Just specify the type as LoadBalancer, and the Service will spread the sessions for your users.
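A minimal Service of that type might look like this sketch (assuming, for illustration, that your web pods carry the label app: web and listen on port 8080):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer    # the cloud provider provisions an external load balancer
  selector:
    app: web            # placeholder label carried by the web pods
  ports:
  - port: 80            # port exposed on the external load balancer
    targetPort: 8080    # placeholder port the web container listens on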
Another option is to use an Ingress. In order to do this, you'll need to run an Ingress Controller, such as the nginx-ingress-controller, which is the most full-featured and widely deployed.
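On current Kubernetes versions, a basic Ingress for the nginx controller could be sketched like this (the hostname and backend Service name are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx    # assumes the nginx ingress controller is installed
  rules:
  - host: web.example.com    # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web        # placeholder Service in front of the web pods
            port:
              number: 80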
Both of these options will automatically load-balance your incoming application sessions, but they may not necessarily do it in the order you've described; it will be effectively random across the available web pods.

Google Container Engine Subnets

I'm trying to isolate services from one another.
Suppose ops-human has a bunch of MySQL stores running on Google Container Engine, and dev-human has a bunch of Node apps running on the same cluster. I do NOT want dev-human to be able to access any of ops-human's MySQL instances in any way.
Simplest solution: put both of these in separate subnets. How do I do such a thing? I'm open to other implementations as well.
The Kubernetes Network-SIG team has been working on the network isolation issue for a while, and there is an experimental API in Kubernetes 1.2 to support this. The basic idea is to provide network policy on a per-namespace basis. A third-party network controller can then react to changes to those resources and enforce the policy. See the latest blog post about this topic for the details.
EDIT: This answer is more about open-source Kubernetes, not GKE specifically.
The NetworkPolicy resource is now available in GKE in alpha clusters (see latest blog post).
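On clusters where a network plugin enforces policy (Calico, for example), a sketch using the current NetworkPolicy API that only lets pods in the same namespace reach the MySQL pods could look like this (the namespace and labels are placeholders):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mysql-same-namespace-only
  namespace: ops               # placeholder namespace holding the MySQL pods
spec:
  podSelector:
    matchLabels:
      app: mysql               # placeholder label on the MySQL pods
  ingress:
  - from:
    - podSelector: {}          # empty selector: any pod, but only within this same namespace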

Handling thousands of services in Kubernetes cluster

Some time ago I asked about handling thousands of services in a Kubernetes cluster:
Can Kubernetes handle thousands of services?
At that time Kubernetes was using env vars for service discovery and my question was more oriented to that. Now that Kubernetes has DNS, it sounds like we don't have the problem with env vars anymore; however, the docs still say it won't perform well when handling thousands of services:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#shortcomings
I wanted to know whether the documentation is outdated or whether there are still issues with scaling Kubernetes to thousands of services.
The shortcoming mentioned in the documentation has not changed, because Kubernetes still uses the same mechanism (iptables and a userspace proxy) for proxying traffic sent to a service IP to the pods backing the service.
However, I don't believe we actually know how bad it is. A team member briefly tried testing it early this year and didn't see any impact, but didn't do anything rigorous to verify. It's possible that it'll work fine at a couple thousand services. If you try it, we'd love to hear how it goes via IRC or email.