When using Istio with Kubernetes, is an overlay network still required for each node?
I have read the FAQs and the documentation, but cannot see anything that directly addresses this.
Istio is built on top of Kubernetes. It creates sidecar containers in the Kubernetes Pods for routing requests, gathering metrics and so on. But the Pods still need a way to communicate with each other, so a Kubernetes overlay network is still required.
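For context, a minimal sketch of how the sidecars typically get there (assuming a default Istio install with automatic injection; the namespace name is just a placeholder): labelling a namespace tells Istio to inject the Envoy sidecar into new Pods, and those sidecars still talk to each other over the ordinary Pod network provided by your overlay/CNI:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: demo                     # placeholder namespace
      labels:
        istio-injection: enabled     # ask Istio to inject the Envoy sidecar into new Pods here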
For additional information, you can start from the following link.
Related
Will Kubernetes be using the same server, or can we use multiple servers with k8s? If yes, then how will it work?
If one instance is full, would it then create a new instance and route everything to the new server?
If anyone can show a real example of K8s, that would be great!
For this I can suggest starting from the Kubernetes docs, but briefly:
Kubernetes handles resources and networking on the master nodes (the control plane).
Worker nodes simply run kube-proxy and the basic control mechanisms provided by the kubelet service. You still cannot control your cluster from the worker nodes.
And yes, K8s can use multiple servers for load balancing; this is a possibility.
When it comes to K8s, you do not have to work in a single zone, so you do not have to keep all the pods on the same server.
So, in a single zone, if you have one master and multiple worker nodes, you will be using the master's scheduler and load balancer to manage the resources and the traffic as necessary. If you have multiple master nodes, then you will be using their schedulers, and so on.
For a real example of K8s, just search for highly available Kubernetes clusters and switch to the Images section. That way you can get a visual picture of them.
I hope I was a little bit of help, but the docs could be more helpful, I suppose.
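As a rough, hypothetical sketch of the multi-master case (this assumes a kubeadm-based install, which the question does not specify; lb.example.com is a placeholder for a load balancer sitting in front of the master nodes):

    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.27.0                   # placeholder version
    controlPlaneEndpoint: "lb.example.com:6443"  # load balancer fronting all control-plane (master) nodes
    etcd:
      local:
        dataDir: /var/lib/etcd                   # each control-plane node runs a stacked etcd member

Workers join through the same load-balanced endpoint, so losing a single master does not take the cluster API down.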
Is it possible to block egress network access from a sidecar container?
I'm trying to implement the capability to run some untrusted code in a sidecar container, exposed via another trusted container in the same pod that has full network access.
It seems two containers in a pod can't have different network policies. Is there some way to achieve similar functionality?
As a side note, I do control the sidecar image, which provides the runtime for the untrusted code.
You are correct: all containers in a pod share the same networking, so you can't easily differentiate them. In general, Kubernetes is not suitable for running code you assume to be actively malicious. You can build such a system around Kubernetes, but K8s itself is not nearly enough.
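To illustrate the granularity problem, here is a minimal sketch of an egress-deny NetworkPolicy (assuming your CNI plugin enforces NetworkPolicy at all; the name and label are placeholders). The selector matches whole Pods by label, so it would cut off egress for both the trusted and the untrusted container:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: deny-egress            # hypothetical name
    spec:
      podSelector:
        matchLabels:
          app: sandbox             # matches Pods, not individual containers
      policyTypes:
        - Egress                   # no egress rules listed, so all egress from matched Pods is denied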
I was trying to understand how exactly the Kubernetes components interact with etcd. I understand the Kubernetes components are themselves stateless and keep their state in etcd, but I am confused about how they interact with it. I see conflicting texts on this: some say all etcd interactions happen through the API server, and others say all the components talk to etcd directly.
I am looking at the possibility of changing the etcd endpoint and restarting the integration points so that they can work with the new etcd instance.
I do not have time to go look into the code to understand this part, so I am hoping someone here can help me with it.
If a Kubernetes component wants to communicate with etcd, it must know the etcd endpoint.
If you check the spec config of these components, you will find the correct answer: only the API server talks to etcd directly.
All Kubernetes components, such as the kubelet, kube-proxy, the scheduler, the controllers, etc., interact with etcd through the API server; they don't talk to etcd directly.
If you change the etcd endpoint, the same should be updated in the API server configuration.
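As an illustration, on a kubeadm-style cluster the etcd endpoint is usually passed to the API server as flags in its static Pod manifest (typically /etc/kubernetes/manifests/kube-apiserver.yaml); the addresses below are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      containers:
        - name: kube-apiserver
          command:
            - kube-apiserver
            - --etcd-servers=https://127.0.0.1:2379          # point this at the new etcd instance
            - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt  # TLS settings for the etcd client connection
            # ...other flags omitted...

Changing --etcd-servers (and restarting the API server) is the integration point; the other components keep talking to the API server as before.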
I am learning Kubernetes and currently deep diving into high availability. While I understand that I can set up a highly available control plane (API server, controllers, scheduler) with local (or remote) etcd, as well as a highly available set of minions (through Kubernetes itself), I am still not sure where services fit into this concept.
If they live in the control plane: good, I can set them up to be highly available.
If they live on a certain node: OK, but what happens if the node goes down or becomes unavailable in any other way?
As I understand it, services are needed to expose my pods to the internet as well as for load balancing. So without an HA service, I risk that my application won't be reachable (even though it might be super highly available in every other aspect of the system).
A Kubernetes Service is another REST object in the k8s cluster. There are the following types of Services, and each one of them serves a different purpose in the cluster:
ClusterIP
NodePort
LoadBalancer
Headless
Fundamental purposes of Services:
Providing a single point of gateway to the pods
Load balancing across the pods
Inter-pod communication
Providing stability, as pods can die and restart with different IPs
and more
These objects are stored in etcd, as it is the single source of truth in the cluster.
Kube-proxy is responsible for implementing these objects on the nodes. It uses selectors and labels.
For instance, each Pod object has labels, and the Service object has selectors to match these labels. Furthermore, each Pod has endpoints, so kube-proxy essentially maps these endpoints (IP:Port) to the Service's (IP:Port). Kube-proxy uses iptables rules to do this magic.
Kube-proxy is deployed as a DaemonSet, so it runs on every cluster node, and all of its instances stay consistent through the cluster state kept in etcd.
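To make the label/selector matching concrete, here is a minimal sketch (the names, the app: web label and the ports are placeholders): the Service selects any Pod carrying the matching label, and kube-proxy wires the Service's ClusterIP:port to those Pods' endpoints:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-pod                # placeholder name
      labels:
        app: web                   # the label the Service selects on
    spec:
      containers:
        - name: web
          image: nginx             # example image
          ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web-svc                # placeholder name
    spec:
      type: ClusterIP
      selector:
        app: web                   # matches the Pod label above
      ports:
        - port: 80                 # Service port
          targetPort: 80           # Pod endpoint port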
You can think of a service as an internal (and in some cases external) load balancer. The definition is stored in the Kubernetes API server, yet the fact that it exists there means nothing if something does not implement it. The most common component that works with services is kube-proxy, which implements services on nodes using iptables (meaning that every node has every service implemented in its local iptables rules), but there are also e.g. Ingress Controller implementations that use the Service concept from the API to find endpoints and direct traffic to them, effectively skipping the iptables implementation. Finally, there are service mesh solutions like linkerd or istio that can leverage Service definitions on their own.
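For the Ingress Controller case mentioned above, a minimal sketch might look like this (hostname, Service name and port are placeholders); the controller looks up the Service's endpoints and forwards traffic to the Pods, either directly or through the Service, depending on the implementation:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress              # placeholder name
    spec:
      rules:
        - host: example.com          # placeholder hostname
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web-svc    # the Service whose endpoints receive the traffic
                    port:
                      number: 80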
Services load balance between pods in most implementations, meaning that as long as you have one backing pod alive (and with enough capacity) your "service" will respond (so you get HA as well, especially if you implement readiness/liveness probes that, among other things, remove unhealthy pods from services).
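As a small, hypothetical example of the readiness-probe point (the names, image and /healthz path are placeholders): with a probe like the one below, a replica that stops answering is removed from the Service's endpoints until it becomes healthy again:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                      # placeholder name
    spec:
      replicas: 3                    # several replicas give the Service something to fail over to
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx           # example image
              ports:
                - containerPort: 80
              readinessProbe:
                httpGet:
                  path: /healthz     # placeholder health endpoint
                  port: 80
                periodSeconds: 5     # Pods failing this check are taken out of the Service endpoints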
The Kubernetes Service documentation provides pretty good insight into this.
We have some Kubernetes clusters that have been deployed using kops in AWS.
We really like using the upstream/official images.
We have been wondering whether or not there is a good way to monitor the systems without installing software directly on the hosts. Are there Docker containers that can extract this information from the host? I think that we are likely concerned with:
Disk space (this seems to be passed through to Docker via df)
Host CPU utilization
Host memory utilization
Is this host/node level information already available through heapster?
This is not really a question about kops, but about operating Kubernetes. kops stops at the point of having a functional k8s cluster: you have networking, DNS, and nodes that have joined the cluster. From there, your world is your oyster.
There are many different options for monitoring with k8s. If you are a small team I usually recommend offloading monitoring and logging to a provider.
If you are a larger team or have more specific needs then you can look at such options as Prometheus and others. Poke around in the https://github.com/kubernetes/charts repository, as I know there is a Prometheus chart there.
As with any deployment of any form of infrastructure you are going to need Logging, Monitoring, and Metrics. Also, do not forget to monitor the monitoring ;)
I am using https://prometheus.io/; it goes naturally with Kubernetes.
The Kubernetes API already exposes a bunch of metrics in Prometheus format.
https://github.com/kubernetes/ingress-nginx also exposes Prometheus metrics (enable-vts-status: "true"), and you can also install https://github.com/prometheus/node_exporter as a DaemonSet to monitor CPU, disk, etc.
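A rough sketch of running node_exporter as a DaemonSet (namespace, labels and tuning are placeholders; pin an image tag in practice):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-exporter
      namespace: monitoring                # placeholder namespace
    spec:
      selector:
        matchLabels:
          app: node-exporter
      template:
        metadata:
          labels:
            app: node-exporter
        spec:
          hostNetwork: true                # expose host-level metrics on each node's own IP
          hostPID: true
          containers:
            - name: node-exporter
              image: prom/node-exporter    # pin a specific tag in practice
              args:
                - --path.procfs=/host/proc # read host stats from the mounted host filesystems
                - --path.sysfs=/host/sys
              ports:
                - containerPort: 9100      # default node_exporter metrics port
              volumeMounts:
                - name: proc
                  mountPath: /host/proc
                  readOnly: true
                - name: sys
                  mountPath: /host/sys
                  readOnly: true
          volumes:
            - name: proc
              hostPath:
                path: /proc
            - name: sys
              hostPath:
                path: /sys

Prometheus then scrapes port 9100 on every node (for example via kubernetes_sd_configs node discovery).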
I install one prometheus inside the cluster to monitor internal metrics and one outside the cluster to monitor LBs and URLs.
Both send alerts to the same https://github.com/prometheus/alertmanager, which MUST be outside the cluster.
It took me about a week to configure everything properly.
It was worth it.