I'm trying to isolate services from one another.
Suppose ops-human has a bunch of mysql stores running on Google Container Engine, and dev-human has a bunch of node apps running on the same cluster. I do NOT want dev-human to be able to access any of ops-human's mysql instances in any way.
Simplest solution: put both of these in separate subnets. How do I do such a thing? I'm open to other implementations as well.
The Kubernetes Network-SIG team has been working on the network isolation issue for a while, and there is an experimental API in Kubernetes 1.2 to support it. The basic idea is to define network policy on a per-namespace basis. A third-party network controller can then react to changes in those resources and enforce the policy. See the latest blog post on this topic for the details.
EDIT: This answer covers open-source Kubernetes rather than GKE specifically.
The NetworkPolicy resource is now also available on GKE in alpha clusters (see the latest blog post).
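For a concrete sense of the model, here is a minimal sketch using the NetworkPolicy resource in the form it later stabilized as (networking.k8s.io/v1); the ops namespace name is a hypothetical stand-in for wherever the mysql instances live. It allows pods in that namespace to receive traffic only from other pods in the same namespace:

```yaml
# Sketch only: namespace and policy names are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ops-only
  namespace: ops
spec:
  podSelector: {}        # apply to every pod in the ops namespace
  ingress:
  - from:
    - podSelector: {}    # allow ingress only from pods in this same namespace
```

Note that the policy only has an effect if the cluster runs a network controller that enforces NetworkPolicy (e.g. Calico); with one in place, a pod in dev-human's namespace would be unable to reach the mysql instances at all.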
I'm new to Kubernetes and trying to point all requests for a domain to another local service.
Both applications are running in the same cluster, under different namespaces.
Example domains:
a.domain.com hosting the first app
b.domain.com hosting the second app
When I do a curl request from the first app to the second app (b.domain.com), it travels over the internet to the second app.
Normally, I could point b.domain.com to localhost in /etc/hosts.
What do we do in this case in Kubernetes?
I was looking into Network Policies, but I'm not sure if that's the correct approach.
Also, as I understand it, we could just call service-name.namespace:port from the first app, but I would like to keep the full URL.
Let me know if you need more details to help me solve this.
The way to do it is by using the Kubernetes Gateway API. Now, it is true that you can deploy your own implementation, since this is an open-source project, but there are already a lot of solutions that implement it, and it would be much easier to learn how to use one of those instead.
For what you want, Istio would fit your needs. If your cluster is hosted in a cloud environment, you can take a look at Anthos Service Mesh, Google's managed version of Istio.
Finally, take a look at the blog post Welcome to the service mesh era: traffic management between services is one of the elements of the service mesh paradigm, alongside others like monitoring and logging.
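As a sketch of how Istio keeps that traffic in-cluster, assuming the second app is exposed by a Service named app-b in namespace team-b (hypothetical names), a VirtualService can map the public hostname onto the internal Service for requests that originate inside the mesh:

```yaml
# Sketch only: Service and namespace names are hypothetical.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: b-domain-internal
  namespace: team-b
spec:
  hosts:
  - b.domain.com                              # the full URL the first app keeps using
  http:
  - route:
    - destination:
        host: app-b.team-b.svc.cluster.local  # resolved inside the cluster
        port:
          number: 80
```

Depending on how the mesh is configured, you may also need a ServiceEntry so the sidecars treat b.domain.com as a known host instead of passing it through to the internet.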
I am wondering how to deploy multiple applications, such as a Spring Boot app, a Node.js app, etc., on a single Kubernetes cluster that has a single Istio load balancer.
Is it possible?
I am a beginner in DevOps, so I need some guidance on this.
Thank you for any suggestions.
Yes, it's possible. Moreover, this is the exact purpose of the LoadBalancer: to be a single point of entry for multiple applications.
If you deploy the Bookinfo example application, you will create three versions of the reviews application (reviews-v1, reviews-v2, reviews-v3; as far as K8s and Istio are concerned, those are three different apps). With the use of Virtual Services and Destination Rules, Istio manages traffic among those three applications.
Since you are a beginner, I would strongly recommend a thorough read of the Istio documentation, especially the Tasks and Examples sections.
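To make that concrete, here is a trimmed sketch in the style of the Bookinfo sample: a DestinationRule names the versions as subsets, and a VirtualService decides how traffic is split between them (the 80/20 weights here are arbitrary):

```yaml
# Sketch only: trimmed to two of the three reviews versions.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1        # pods labeled version=v1 form subset v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 80         # 80% of requests go to v1
    - destination:
        host: reviews
        subset: v2
      weight: 20         # 20% go to v2
```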
On Google Kubernetes Engine, can Kubernetes nodes (or node pools) be configured to have multiple network interfaces?
Unfortunately, they cannot. All of the parameters that you can configure through the Google Kubernetes Engine API are listed here.
If you want to customize the nodes beyond what the API offers, you can create your own instance template, as described in this Stack Overflow answer. The downside is that you will no longer be able to manage the nodes via the Google Kubernetes Engine API (e.g., for upgrades).
I am trying to deploy my Docker images using Kubernetes orchestration tools. While reading about Kubernetes, I have seen documentation and many YouTube tutorials on working with it, but they only cover the creation of pods and services and of the corresponding .yml files. I still have some doubts, listed below:
When I am using Kubernetes, how can I create clusters and nodes?
Can I deploy the image from my current docker-compose build directly using pods only? Why do I need to create a Service .yml file?
I am new to the world of containerization, Docker, and Kubernetes.
My favorite way to create clusters is kubespray, because I find Ansible very easy to read and troubleshoot, unlike more monolithic "run this binary" mechanisms for creating clusters. The kubespray repo has a Vagrant configuration file, so you can even try out a full cluster on your local machine to see what it will do "for real".
But with the popularity of kubernetes, I'd bet that if you ask 5 people you'll get 10 answers to that question, so ultimately pick the one you find easiest to reason about, because almost without fail you will need to debug those mechanisms when something inevitably goes wrong.
The short version, as Hitesh said, is "yes," but the long version is that one will need to be careful because local docker containers and kubernetes clusters are trying to solve different problems, and (as a general rule) one could not easily swap one in place of the other.
As for the second part of your question, a Service in kubernetes is designed to decouple the current provider of some networked functionality from the long-lived "promise" that such functionality will exist and work. That's because in kubernetes, the Pods (and Nodes, for that matter) are disposable and subject to termination at almost any time. It would be severely problematic if the consumer of a networked service needed to constantly update its IP address/ports/etc. to account for the coming and going of Pods. This is actually the exact same problem that AWS's Elastic Load Balancers are trying to solve, and kubernetes will cheerfully provision an ELB to represent a Service if you indicate that is what you would like (with similar behavior for other cloud providers).
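To make that concrete, here is a minimal Service sketch (all names and ports are hypothetical): consumers always reach my-app by its stable name and port, while the set of Pods behind it can change freely.

```yaml
# Sketch only: names and ports are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app          # any healthy Pod with this label becomes a backend
  ports:
  - port: 80             # the stable port consumers connect to
    targetPort: 8080     # the port the containers actually listen on
  # type: LoadBalancer   # on a cloud provider, this provisions an ELB-style LB
```

With this in place, other apps in the cluster address the backends as my-app (or my-app.<namespace>.svc.cluster.local) and never need to track individual Pod IPs.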
If you are not yet comfortable with containers and docker as concepts, then I would strongly recommend starting with those topics, and moving on to understanding how kubernetes interacts with those two things after you have a solid foundation. Otherwise, a lot of the terminology -- and even the problems kubernetes is trying to solve -- may continue to seem opaque.
I'm struggling to find any documentation on the correct pattern for configuring the Extensible Service Proxy and Google Cloud Endpoints to deploy multiple versions of an API.
Brief overview - I have Docker building out two releases of an API.
They run in separate containers.
I currently have a kubernetes pod with ESP and APIv1.
Really I want to run a pod with ESP+APIv1 and a pod with ESP+APIv2, but I can't work out how this would work - my external IP and DNS would all point at one pod, and Endpoints doesn't seem to be contacted until the user reaches the ESP service. Is there some mechanism for passing requests to another ESP instance? I'm clearly missing something here.
OR - in order to run multiple versions, should I be running a pod with ESP, APIv1, and APIv2 in it? That doesn't seem ideal from a scalability or management point of view.
Unless APIv1 and APIv2 are disjoint, you can probably implement methods supporting both versions in the same dockerized app. This approach is explained in more detail here.
https://cloud.google.com/endpoints/docs/lifecycle-management
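As a sketch of what that single-app approach looks like, one OpenAPI (Swagger 2.0) spec can expose both versions under different path prefixes, so a single ESP+backend pod serves v1 and v2 side by side; every name below is hypothetical:

```yaml
# Sketch only: project, host, and paths are hypothetical.
swagger: "2.0"
info:
  title: my-api
  version: "2.0.0"
host: my-api.endpoints.my-project.cloud.goog
paths:
  /v1/echo:
    post:
      operationId: echoV1        # served by the v1 code path
      responses:
        "200":
          description: Echo response (v1 behavior)
  /v2/echo:
    post:
      operationId: echoV2        # served by the v2 code path
      responses:
        "200":
          description: Echo response (v2 behavior)
```

ESP then routes by path, so the question of fronting two separate version pods with one external IP goes away.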