What's the difference between Kubernetes-native and non-native applications?

From the Kubernetes docs:
For Kubernetes-native applications, Kubernetes offers a simple Endpoints API that is updated whenever the set of Pods in a Service changes. For non-native applications, Kubernetes offers a virtual-IP-based bridge to Services which redirects to the backend Pods.
What is the exact difference between Kubernetes-native and non-native applications?

I found the same section and interpret it as follows:
Native apps are packaged up and run inside k8s as some "kind". All dockerized apps deployed to the cluster should be in that category.
Non-native apps are connected to the k8s cluster infrastructure but not deployed within it. A legacy app, an Oracle cluster, or your backup robot may fall into this category.

From my interpretation, non-native applications are Services without selectors, for which Endpoints are not created automatically (such as applications running in a different namespace, running outside Kubernetes, development databases, etc.).
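As a sketch of that reading, here is a minimal Service without a selector bridging to an external (non-native) backend; the name, IP, and port are hypothetical placeholders. Because there is no selector, Kubernetes creates no Endpoints automatically and you maintain them by hand:

apiVersion: v1
kind: Service
metadata:
  name: legacy-db
spec:
  ports:
    - port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-db   # must match the Service name
subsets:
  - addresses:
      - ip: 10.1.2.3   # hypothetical backend outside the cluster
    ports:
      - port: 5432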

Related

Azure Service Fabric and Kubernetes communication within same network

I am looking at strategies for bidirectional communication between applications hosted on separate clusters. Some of them are hosted in Service Fabric and the others in Kubernetes. One of the options is to use a DNS service on the Service Fabric side and its counterpart on Kubernetes. On the other hand, a reverse proxy seems to be a way to go. After going through the options, I was wondering: what is actually the best way to create microservices that can be deployed either in SF or in K8s without worrying about the communication model, and that require the fewest changes if we suddenly wish to migrate one app from SF to K8s while still keeping it available to the SF apps, and vice versa?

deploying multiple apps on a single cluster

I am wondering how to deploy multiple applications, such as a Spring Boot app, a Node.js app, etc., on a single Kubernetes cluster that has a single Istio load balancer.
Is it possible?
I am a beginner in DevOps, so I need some guidance on this.
Thank you for the suggestions.
Yes, it's possible. Moreover, this is the exact purpose of the LoadBalancer: to be a single point of entry for multiple applications.
If you deploy the example application, you will create three versions of the reviews application (reviews-v1, reviews-v2, reviews-v3; as far as K8s and Istio are concerned, those are three different apps). With the use of Virtual Services and Destination Rules, Istio manages traffic between those three applications, as sketched below.
Since you are a beginner, I would strongly recommend a thorough read of the Istio documentation, especially the Tasks and Examples sections.
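For illustration, a minimal sketch of how Istio splits traffic between such versions, assuming Bookinfo-style reviews pods labelled version: v1/v2/v3 (the weights are arbitrary):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:           # name each version as a subset, keyed on pod labels
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
    - name: v3
      labels:
        version: v3
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:   # 80% of requests go to v1
            host: reviews
            subset: v1
          weight: 80
        - destination:   # 20% go to v3
            host: reviews
            subset: v3
          weight: 20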

multiple environments for websites in Kubernetes

I am a newbie in Kubernetes.
I have 19 LANs with 190 machines in total.
Each of the 19 LANs has 10 machines and 1 exposed IP.
I have different websites/apps, and their environments are assigned to each LAN.
How do I manage my Kubernetes cluster and do setup/housekeeping?
I would like to have a single portal or manager to manage the websites and environments (dev, QA, prod) while keeping isolation.
Is that possible?
I only got a vague idea of what you want to achieve, so here goes nothing.
Since Kubernetes has a lot of convenience tooling for setting up a cluster on a public cloud platform, I'd suggest starting by going through "kubernetes-the-hard-way". It is a guide to setting up a cluster on Google Cloud Platform without any additional scripts or tools, but the instructions can be applied to a local setup as well.
Once you have an operational cluster, the next step should be to set up an Ingress Controller. This gives you the ability to use one or more exposed machines (with public IPs) as gateways for the services running in the cluster. I'd personally recommend Traefik; it has great support for HTTP and Kubernetes.
Once you have the ingress controller set up, your cluster is pretty much ready to use. The process for deploying a service is really specific to that service's requirements, but the rule of thumb is to use a Deployment and a Service for stateless workloads, and a StatefulSet and a headless Service for stateful workloads that need peer discovery. This is obviously very generalized and has many exceptions; see the sketch of the stateless case below.
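A minimal sketch of the stateless case (the image name and ports are hypothetical placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: website
spec:
  replicas: 3
  selector:
    matchLabels:
      app: website
  template:
    metadata:
      labels:
        app: website
    spec:
      containers:
        - name: web
          image: example/website:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: website
spec:
  selector:
    app: website   # routes to the Deployment's pods
  ports:
    - port: 80
      targetPort: 8080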
For managing different environments, you could split your resources into different namespaces.
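A minimal sketch, assuming you name the environments dev, qa, and prod:

# One namespace per environment keeps names, quotas, and RBAC isolated.
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: qa
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod

You would then deploy each environment's resources into its namespace, e.g. kubectl -n dev apply -f ..., and attach resource quotas and RBAC rules per namespace for isolation.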
As for the single portal to manage it all, I don't think anything like that exists, but I might be wrong. Besides, depending on your workflow, you can create your own portal using the Kubernetes API, but that requires a good understanding of Kubernetes itself.

How to configure Active/Standby high availability for an application deployed in Kubernetes

I have to deploy an application in Kubernetes in an Active/Standby high-availability configuration.
How do I do this using Kubernetes concepts?
Thanks
There is no native support for Active/Standby application deployment in Kubernetes.
Here is the feature request on GitHub:
https://github.com/kubernetes/kubernetes/issues/45300
Front the two application instances with HAProxy; HAProxy supports an active/standby mode.
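As a rough sketch of that idea, here is HAProxy's active/standby pattern packaged as a Kubernetes ConfigMap; the Service names app-active and app-standby are hypothetical. HAProxy's backup keyword means the standby server only receives traffic when the active one fails its health check:

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-config
data:
  haproxy.cfg: |
    defaults
      mode tcp
      timeout connect 5s
      timeout client  30s
      timeout server  30s
    frontend app
      bind *:8080
      default_backend app
    backend app
      # "backup" marks the standby: used only while the active server is down
      server active  app-active:8080  check
      server standby app-standby:8080 check backup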

Kubernetes - Load balancing Web App access per connections

It's been a long time since I last came here, and I hope you're doing well :)
So now I have the pleasure of working with Kubernetes! So let's start! :)
[THE EXISTING]
I have an operational Kubernetes cluster that I work with every day. It consists of several applications, one of which is of particular interest to us: the web management interface.
I currently have one master and four nodes in my cluster.
For my web application, the pod contains 3 containers: web / mongo / filebeat, and for technical reasons we decided to allow a maximum of 5 users per web pod.
[WHAT I WANT]
I want to deploy a web pod on each node (web0, web1, web2, web3), which I can already do, and have each session (1 session = 1 user) distributed across those pods as shown in my image.
For now, all HTTP requests are processed by web0.
[QUESTIONS]
Am I forced to go through an external load balancer (HAProxy)?
Can I use an internal load balancer by configuring a Service?
Does anyone have experience on the implementation described above?
I thank in advance those who can help me in this process :)
This generally depends on how and where you've deployed your Kubernetes infrastructure, but you can do this natively with a few options.
Firstly, you'll need to scale your web deployment so there is one replica per node. This is very simple to do:
kubectl scale --replicas=4 deployment/web
(Note that scaling alone doesn't guarantee one pod per node; pod anti-affinity or a DaemonSet can enforce that spread.)
If you're deployed into a cloud provider (such as AWS using kops, or GKE), you can use a Service: just specify the type as LoadBalancer. The Service will spread incoming sessions across the pods.
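A minimal sketch of such a Service, assuming the web pods carry the label app: web and serve on port 80:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # the cloud provider provisions an external load balancer
  selector:
    app: web           # assumed pod label
  ports:
    - port: 80
      targetPort: 80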
Another option is to use an Ingress. In order to do this, you'll need an Ingress Controller, such as the nginx-ingress-controller, which is among the most feature-rich and widely deployed.
Both of these options will automatically load-balance incoming application sessions, but they may not necessarily do it in the order you've described in your image; sessions will be spread across the available web pods.
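For completeness, a minimal Ingress sketch for the nginx-ingress-controller route, assuming the Service above and a hypothetical hostname:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
    - host: web.example.com   # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web     # the Service from the previous sketch
                port:
                  number: 80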