Kubernetes deployment using shared-disk FC HBA options

I have been looking at the available Kubernetes storage add-ons and have been unable to put together something that would work with our setup. The current situation is several nodes, each with an FC HBA connected to a single LUN. I realize that some sort of cluster FS will need to be implemented, but once that is in place I don't see how I would then connect it to Kubernetes.
We've discussed taking what we have and turning it into an iSCSI or NFS host, but in addition to requiring another dedicated machine, we would lose the advantage of having the storage directly available on each node. Is there any way to make use of our current infrastructure?
Details:
4x Kubernetes nodes (1 master) deployed via kubeadm on Ubuntu 16.04 using flannel as the network add-on; each system has the SAN LUN available as a block device (/dev/sdb)
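(For reference, Kubernetes does ship an in-tree fc (Fibre Channel) volume plugin, though by itself it mounts the LUN with an ordinary single-node filesystem rather than a cluster filesystem, so it does not answer the shared-access part of the question. A minimal sketch of such a PersistentVolume is below; the WWN, LUN number, capacity, and fsType are placeholders, not values from this setup.)

```yaml
# Hypothetical PersistentVolume backed by a shared FC LUN.
# targetWWNs, lun, capacity, and fsType are placeholders; a cluster
# filesystem would still be needed for safe multi-node access.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fc-shared-lun
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteOnce        # the in-tree fc plugin does not provide shared RWX semantics
  fc:
    targetWWNs: ["50060e801049cfd1"]   # placeholder WWN
    lun: 0
    fsType: ext4           # placeholder; not a cluster filesystem
    readOnly: false
```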

Related

How to simulate node joins and failures with a local Kubernetes cluster?

I'm developing a Kubernetes scheduler and I want to test its performance when nodes join and leave a cluster, as well as how it handles node failures.
What is the best way to test this locally on Windows 10?
Thanks in advance!
Unfortunately, you can't add nodes to Docker Desktop with Kubernetes enabled. Docker Desktop is single-node only.
I can think of two possible solutions, off the top of my head:
You could use any of the cloud providers. The major ones (AWS, GCP, Azure) have some kind of free tier (capped by usage or time). Adding nodes in those environments is trivial.
Create a local VM for each node. This is a less-than-perfect solution - very resource intensive. To make adding nodes easier, you could use kubeadm to provision your cluster (a minimal join configuration is sketched after this list).
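As a rough illustration of option 2, a kubeadm join configuration for a test VM might look like the sketch below. The API version, endpoint address, and token are placeholders and depend on the kubeadm release installed in the VMs.

```yaml
# Hypothetical kubeadm JoinConfiguration for a local test VM.
# The endpoint and token are placeholders obtained from the control-plane VM
# (e.g. via `kubeadm token create`).
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "192.168.56.10:6443"   # control-plane VM on the local network
    token: "abcdef.0123456789abcdef"
    unsafeSkipCAVerification: true            # acceptable for throwaway test clusters only
```

Passing a file like this to kubeadm join --config on each new VM makes joins (and simulated failures, by simply powering a VM off) quick to script.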

Does Kubernetes run new virtual machines on new hosts when autoscaling pods?

I'm reading about Kubernetes, but I don't understand whether Kubernetes can run new virtual machines on new hosts and then start pods on them, or whether the set of machines it operates on is fixed and must always be running. I'll use Kubernetes on top of OpenStack. Thanks
Providing an answer without further details, as there are no comments from the author.
First of all, I would recommend reading the Theory of Auto-Scaling article, since you mentioned you will run k8s on top of OpenStack.
The conceptual diagram in that article shows how these pieces fit together in OpenStack. Below are the components that can be controlled with auto-scaling:
Compute Host
VM running on a Compute Host
Container running on a Compute Host
Network Attached Storage
Virtual Network Functions
So, to answer your question: yes.
I would also recommend reading about the Heat component and Autoscaling with Heat; a rough sketch of a Heat scaling group follows below.
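To make the Heat suggestion a bit more concrete, a scaling group of Kubernetes worker VMs could be declared roughly as in the sketch below. The resource names, image, flavor, endpoint, and token are placeholders, and the alarm/metric wiring that actually triggers the policy is omitted.

```yaml
# Hypothetical HOT template fragment: an auto-scaling group of k8s worker VMs.
heat_template_version: 2016-10-14
resources:
  k8s_worker_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 5
      resource:
        type: OS::Nova::Server
        properties:
          image: ubuntu-16.04-k8s-worker   # placeholder image with kubelet/kubeadm preinstalled
          flavor: m1.large                 # placeholder flavor
          user_data: |
            #!/bin/bash
            # placeholder join command; token and endpoint come from the existing cluster
            kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
              --discovery-token-unsafe-skip-ca-verification
  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: k8s_worker_group }
      scaling_adjustment: 1
```

On its own a ScalingPolicy does nothing; it is normally triggered by a Ceilometer/Aodh alarm, which is the part the Autoscaling with Heat documentation walks through.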

Kubernetes in vSphere virtual machines

Dears,
Sorry, this may be a basic question for some of you. If I have a vSphere environment and am allowed to access only 2 virtual machines inside it, can I set up a Kubernetes cluster with 1 VM as master and 1 VM as minion without interacting with the hypervisor or vCenter?
In this case, what are the requirements?
I already set up an environment on my laptop, but there I had to define a host-only network in VirtualBox and define the machines for the host as well. Would it be the same in the case of vSphere?
There are some requirements for a Kubernetes cluster. According to the official documentation, it is necessary to have:
One or more machines running one of:
Ubuntu 16.04+
Debian 9
CentOS 7
RHEL 7
Fedora 25/26 (best-effort)
HypriotOS v1.0.1+
Container Linux (tested with 1800.6.0)
2 GB or more of RAM per machine (any less will leave little room for your apps)
2 CPUs or more
Full network connectivity between all machines in the cluster (public or private network is fine)
Unique hostname, MAC address, and product_uuid for every node. See here for more details.
Certain ports are open on your machines. See here for more details.
Swap disabled. You MUST disable swap in order for the kubelet to work properly.
Also, the IP subnets for Services and for Pods must not overlap with other IP subnets in the same VPC.
To set up Kubernetes cluster it is enough to have SSH access to VMs. Additional network interfaces are not required.
If you already have the VMs, the most convenient tool for cluster creation is kubeadm (a minimal configuration sketch follows below). Please consider reading the following part of the official documentation:
Creating a single master cluster with kubeadm
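As a sketch of how little configuration kubeadm needs for this two-VM case, an init configuration could look roughly like the following. The API version, Kubernetes version, and pod subnet are placeholders (the subnet shown is flannel's default and only an example).

```yaml
# Hypothetical configuration for `kubeadm init --config=...` on the master VM.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: "v1.18.0"        # placeholder version
networking:
  podSubnet: "10.244.0.0/16"        # example CIDR; must not overlap the VM network
```

kubeadm init prints the matching kubeadm join command to run over SSH on the second VM; no access to vCenter or the hypervisor is involved.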

Will the master know the data on workers/nodes in k8s

I am trying to deploy Kubernetes in the cloud, and there are two options: the masters are entrusted to the cloud provider, or maintained by myself.
So I wonder: if the masters are managed by the provider, could they leak the data on the workers?
In short, will the master know the data on the workers/nodes?
The abstractions in Kubernetes are very well defined with clear boundaries. You have to understand the concept of Volumes first. As defined here,
A Kubernetes volume is essentially a directory accessible to all containers running in a pod. In contrast to the container-local filesystem, the data in volumes is preserved across container restarts.
Volumes are attached to the containers in a pod, and there are several types of volumes.
(A diagram of these layers of abstraction is available at the linked source.)
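As a minimal illustration (names and image are placeholders), a pod with a volume shared by its containers looks roughly like this:

```yaml
# Hypothetical pod: an emptyDir volume mounted into a container.
# The volume lives on the node that runs the pod, not on the master.
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: app
    image: busybox          # placeholder image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}
```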
Master to Cluster communication
There are two primary communication paths from the master (apiserver) to the cluster. The first is from the apiserver to the kubelet process which runs on each node in the cluster. The second is from the apiserver to any node, pod, or service through the apiserver’s proxy functionality.
Also, you should check the CCM: the cloud controller manager (CCM) concept (not to be confused with the binary) was originally created to allow cloud-specific vendor code and the Kubernetes core to evolve independently of one another. The cloud controller manager runs alongside other master components such as the Kubernetes controller manager, the API server, and the scheduler. It can also be started as a Kubernetes addon, in which case it runs on top of Kubernetes.
Hope this answers all your questions related to Master accessing the data on Workers.
If you are still looking for more secure ways, check 11 Ways (Not) to Get Hacked
Short answer: yes the control plane can access all of your data.
Longer and more realistic answer: probably don't worry about it. It is far more likely that any successful attack against the control plane would be just as successful as if you were running it yourself. The exact internal details of GKE/AKS/EKS are a bit fuzzy, but all three providers have a lot of experience running multi-tenant systems and it wouldn't be negligent to trust that they have enough protections in place against lateral escalations between tenants on the control plane.

Kubernetes network performance issue: moving a service from a physical machine to Kubernetes halves RPS

I set up a Kubernetes cluster with 2 powerful physical servers (32 cores + 64 GB memory). Everything runs very smoothly except for the poor network performance I observed.
As a comparison: I ran my service directly on such a physical machine (one instance), with a client machine in the same network subnet calling the service; the RPS easily reaches 10k. When I put the exact same service into Kubernetes (version 1.1.7), one pod (instance) of the service is launched and the service is exposed via ExternalIP in the Service yaml file. With the same client, the RPS drops to 4k. Even after I switched kube-proxy to iptables mode, it doesn't seem to help much.
When I search around, I saw this document https://www.percona.com/blog/2016/02/05/measuring-docker-cpu-network-overhead/
It seems Docker port-forwarding is the network bottleneck, while other Docker network modes (--net=host, bridge networking, or containers sharing a network namespace) don't show such a performance drop. Is the Kubernetes team already aware of this kind of network performance drop? Since the Docker containers are launched and managed by Kubernetes, is there any way to tune Kubernetes to use another Docker network mode?
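(For reference, the ExternalIP exposure described above is declared in the Service manifest roughly like the sketch below; the names, ports, and address are placeholders.)

```yaml
# Hypothetical Service exposing the pod via an external IP;
# traffic to that IP is handled by kube-proxy on the node.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service        # placeholder label
  ports:
  - port: 80
    targetPort: 8080       # placeholder ports
  externalIPs:
  - 192.0.2.10             # placeholder address routed to a node
```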
You can configure Kubernetes networking in a number of different ways when configuring the cluster, and a few different ways on a per-pod basis. If you want to try verifying whether the docker networking arrangement is the problem, set hostNetwork to true in your pod specification and give it another try (example here). This is the equivalent of the docker --net=host setting.
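A minimal sketch of that change, with a placeholder image and port:

```yaml
# Hypothetical pod using the node's network namespace directly,
# equivalent to `docker run --net=host`.
apiVersion: v1
kind: Pod
metadata:
  name: my-service
spec:
  hostNetwork: true
  containers:
  - name: my-service
    image: my-service:latest   # placeholder image
    ports:
    - containerPort: 8080      # placeholder port, bound directly on the node
```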