Ceph iSCSI network deployment

I intend to install a Ceph cluster with iSCSI gateways, and I have a few questions about how the gateways should be networked:

1. Should the iSCSI gateways have a separate public network and a private network, i.e. a public network to serve iSCSI clients and a private network to connect to the Ceph public network?
2. If 1 is true, how does a colocated iSCSI gateway work? Does it use the Ceph public network both as the iSCSI client-facing network and as the backend network to reach Ceph?
3. Which is better:
   - two dedicated networks for iSCSI clients plus a separate backend network to the Ceph public network (since the iSCSI gateway is itself a Ceph client), or
   - one aggregated public network for iSCSI clients plus a separate backend network to the Ceph public network?

I think I found the answer. Some clients cannot speak RBD directly but do understand iSCSI; the iSCSI gateways exist for exactly those cases, e.g. Windows clients, or engines like oVirt that support iSCSI but not Ceph RBD natively. In that case no separate network for the iSCSI gateways is needed: the gateway is an extra component sitting on top of Ceph, there just to let non-compatible clients consume Ceph storage indirectly.
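To make the colocated layout concrete, here is a minimal sketch of the relevant ceph-ansible settings; the variable names are the standard group_vars ones, but the subnets are hypothetical examples:

```yaml
# group_vars/all.yml (sketch) -- subnets are hypothetical
public_network: "192.168.10.0/24"   # Ceph public network; colocated iSCSI gateways
                                    # serve their client portals here as well
cluster_network: "192.168.20.0/24"  # Ceph replication network, OSD-to-OSD only
```

The iSCSI clients simply target the gateways' addresses on the public network; only the OSD replication traffic needs its own network.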

Related

Expose Volume of Kubernetes to non-kubernetes apps

I am new to Kubernetes and am working on a computer vision project in which some of the services are deployed in Kubernetes and some run on a cluster of physical servers (Nvidia Jetson boards) that have GPUs. Can the non-Kubernetes services access a Persistent Volume of the K8s environment? Please let me know:

1. How can I expose a Persistent Volume from K8s and mount it as a shared drive on a different physical server?
2. Instead of using a Persistent Volume, can I have a volume on the host machine where K8s is deployed and use it for both k8s and non-k8s services?

Please note that we connect cameras over USB to each of the Jetson boards, so we cannot bring the boards in as nodes under K8s.
Regarding 1: not possible.
Regarding 2: this is the better approach. For example, you can use a NAS to back both the k8s cluster and the Nvidia board cluster; the two clusters can then share files through the NAS-mounted volume. For pods in the k8s cluster, accessing the mount point is as simple as using hostPath (see the sketch below), or a more sophisticated storage driver, depending on your storage architecture.
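A minimal sketch of the hostPath approach, assuming the NAS export is already mounted at /mnt/nas on every node (pod name, image, and paths are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cv-worker                # hypothetical
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: shared-nas
          mountPath: /data       # where the app sees the shared files
  volumes:
    - name: shared-nas
      hostPath:
        path: /mnt/nas           # hypothetical NAS mount point on the node
        type: Directory
```

The Jetson boards mount the same NAS export directly (e.g. over NFS), so both sides read and write the same files.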

Is it possible to apply & maintain the CIS Benchmarks Compliance on Managed Kubernetes Clusters such as Azure Kubernetes Service?

I have a managed Kubernetes cluster on the Azure public cloud. I made some changes on the nodes to satisfy a host-compliance check from the CIS Benchmark Guide for Kubernetes. Then I resized a node, and the host-compliance check failed again: the change had been reset on that node. How do I make my changes persist across the nodes?
I had SSHed into the nodes and made the change directly there, but compliance failed again after the node upgrade.
You can Reconfigure a Node's Kubelet in a Live Cluster, but that only covers kubelet configuration, not arbitrary host changes.
As for changes to the node itself, I recommend reading Security hardening in AKS virtual machine hosts:
AKS clusters are deployed on host virtual machines, which run a security-optimized OS. This host OS is currently based on an Ubuntu 16.04 LTS image with a set of additional security hardening steps applied (see Security hardening details).
The goal of the security hardened host OS is to reduce the surface area of attack and allow the deployment of containers in a secure fashion.
Important
The security hardened OS is NOT CIS benchmarked. While there are overlaps with CIS benchmarks, the goal is not to be CIS-compliant. The goal for host OS hardening is to converge on a level of security consistent with Microsoft’s own internal host security standards.
If you need to make changes of your own, then I would advise setting up a cluster manually using kubeadm: get virtual servers, configure them your way, and follow Creating a single control-plane cluster with kubeadm or any other guide that fits your needs.
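For example, a minimal kubeadm configuration file you could keep in version control and re-apply whenever you rebuild a node (the pod subnet is a hypothetical value that must match your CNI plugin):

```yaml
# kubeadm-config.yaml (sketch)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
networking:
  podSubnet: 10.244.0.0/16   # hypothetical; must match your network addon
```

You would bootstrap the control plane with kubeadm init --config kubeadm-config.yaml, and any host-level hardening can then live in your own provisioning scripts rather than being reset by a managed service.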

How to configure Kubernetes to encrypt the traffic between nodes, and pods?

In preparation for HIPAA compliance, we are transitioning our Kubernetes cluster to use secure endpoints across the fleet (between all pods). Since the cluster is composed of about 8-10 services currently using HTTP connections, it would be super useful to have this taken care of by Kubernetes.
The specific attack vector we'd like to address with this is packet sniffing between nodes (physical servers).
This question breaks down into two parts:
1. Does Kubernetes encrypt the traffic between pods and nodes by default?
2. If not, is there a way to configure it to do so?
Many thanks!
Actually the correct answer is "it depends". I would split the cluster into 2 separate networks.
Control Plane Network
This is the physical network, in other words the underlay network.
The k8s control-plane elements - kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kubelet - talk to each other in various ways. Except for a few endpoints (e.g. metrics), it is possible to configure encryption on all of them.
If you are also pentesting, kubelet authn/authz should be switched on too; encryption alone does not prevent unauthorized access to the kubelet, and that endpoint (port 10250) can be hijacked with ease.
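A minimal sketch of the relevant KubeletConfiguration fields (how the file is delivered to the kubelet depends on your installer):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false   # reject unauthenticated requests on port 10250
  webhook:
    enabled: true    # validate bearer tokens against the API server
authorization:
  mode: Webhook      # authorize requests via SubjectAccessReview
readOnlyPort: 0      # disable the unauthenticated read-only port (10255)
```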
Cluster Network
The cluster network is the one used by the pods, also referred to as the overlay network. Encryption is left to the third-party overlay plugin to implement; failing that, the application has to implement it itself.
The Weave overlay supports encryption. The linkerd service mesh that lukas-eichler suggested in another answer can also achieve this, but on a different networking layer.
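For Weave specifically, encryption is enabled by giving every peer a shared password; with the Kubernetes addon the documented pattern is to store it in a secret and expose it to the weave container as the WEAVE_PASSWORD environment variable (the secret name and key below follow the Weave docs; the value is a placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: weave-passwd
  namespace: kube-system
stringData:
  weave-passwd: "change-me-long-random-string"   # placeholder; generate a strong value
```

The weave-net DaemonSet then references it with a secretKeyRef, and all peer-to-peer traffic is encrypted.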
The replies here seem to be outdated. As of 2021-04-28, at least the following components can provide an encrypted networking layer to Kubernetes:
Istio
Weave
linkerd
cilium
Calico (via Wireguard)
(the list above was compiled from the respective projects' home pages)
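As a concrete example from that list, Calico's WireGuard encryption (available since Calico v3.14, on kernels with WireGuard support) is a single toggle applied with calicoctl:

```yaml
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  wireguardEnabled: true   # encrypt pod traffic between nodes with WireGuard
```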
Does Kubernetes encrypt the traffic between pods and nodes by default?
Kubernetes does not encrypt any traffic.
There are service meshes like linkerd that allow you to easily introduce https communication between your http services.
You would run an instance of the service mesh on each node and all services would talk to the service mesh; the communication inside the service mesh would be encrypted.
Example:
your service -http-> service mesh on localhost -https-> remote node -http-> remote service on localhost
When you run the service-mesh proxy in the same pod as your service, the localhost communication runs on a private virtual network device that no other pod can access.
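Note that with current Linkerd (2.x) the proxy runs as a per-pod sidecar rather than per node; a minimal sketch is just an annotation on the pod template (deployment name and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-http-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-http-service
  template:
    metadata:
      labels:
        app: my-http-service
      annotations:
        linkerd.io/inject: enabled   # inject the proxy sidecar; meshed traffic gets mutual TLS
    spec:
      containers:
        - name: app
          image: example/my-http-service:1.0   # hypothetical
          ports:
            - containerPort: 8080
```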
No, Kubernetes does not encrypt traffic by default.
I haven't personally tried it, but the description of the Calico software-defined network seems oriented toward what you are describing, with the additional benefit of already being Kubernetes-friendly.
I thought that Calico did native encryption, but based on this GitHub issue it seems they recommend a solution like IPsec, set up just as you would on a traditional host.

Kubernetes deployment using shared-disk FC HBA options

I have been looking at the available Kubernetes storage add-ons and have been unable to put together something that would work with our setup. The current situation is several nodes, each with an FC HBA controller connected to a single shared LUN. I realize that some sort of cluster FS will need to be implemented, but once that is in place I don't see how I would then connect it to Kubernetes.
We've discussed taking what we have and making an iSCSI or NFS host, but in addition to requiring another dedicated machine, we would lose the advantage of having the storage directly available on each node. Is there any way to make use of our current infrastructure?
Details:
4x Kubernetes nodes (1 master) deployed via kubeadm on Ubuntu 16.04, using flannel as the network addon; each system has the SAN LUN available as a block device (/dev/sdb)
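For reference, Kubernetes does ship a built-in fc volume plugin that attaches a LUN over the HBA directly; a sketch of a PersistentVolume follows (WWN and size are hypothetical). Note that it mounts an ordinary filesystem as ReadWriteOnce, so on its own it does not solve the shared-access/cluster-FS problem described above:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fc-lun-pv                      # hypothetical
spec:
  capacity:
    storage: 500Gi                     # hypothetical
  accessModes:
    - ReadWriteOnce                    # a plain ext4/xfs LUN must not be mounted by two nodes at once
  persistentVolumeReclaimPolicy: Retain
  fc:
    targetWWNs: ["50060e801049cfd1"]   # hypothetical WWN of the FC target
    lun: 0
    fsType: ext4
```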

Access SkyDNS etcd API on Google Container Engine to Add Custom Records

I'm running a kubernetes cluster on GKE and I would like to discover and access the etcd API from a service pod. The reason I want to do this is to add keys to the SkyDNS hierarchy.
Is there a way to discover (or create/expose) and interact with the etcd service API endpoint on a GKE cluster from application pods?
We have IoT gateway nodes that connect to our cloud services via an SSL VPN to ease management and comms. When a device connects to the VPN I want to update an entry in SkyDNS with the hostname and VPN IP address of the device.
It doesn't make sense to spin up another clustered DNS setup, since SkyDNS will work great for this and all of the pods in the cluster are already automatically configured to query it first.
I'm running a kubernetes cluster on GKE and I would like to discover and access the etcd API from a service pod. The reason I want to do this is to add keys to the SkyDNS hierarchy.
It sounds like you want direct access to the etcd instance that is backing the DNS service (not the etcd instance that is backing the Kubernetes apiserver, which is separate).
Is there a way to discover (or create/expose) and interact with the etcd service API endpoint on a GKE cluster from application pods?
The etcd instance for the DNS service is an internal implementation detail for the DNS service and isn't designed to be directly accessed. In fact, it's really just a convenient communication mechanism between the kube2sky binary and the skydns binary so that skydns wouldn't need to understand that it was running in a Kubernetes cluster. I wouldn't recommend attempting to access it directly.
In addition, this etcd instance won't even exist in Kubernetes 1.3 installs, since skydns is being replaced by a new DNS binary kubedns.
We have IoT gateway nodes that connect to our cloud services via an SSL VPN to ease management and comms. When a device connects to the VPN I want to update an entry in SkyDNS with the hostname and VPN IP address of the device.
If you create a new service, that will cause the cluster DNS to have a new entry created mapping the service name to the endpoints that back the service. What if you programmatically add a service each time a new IoT device registers rather than trying to configure DNS directly?
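To make that concrete: a Service with no selector plus a manually managed Endpoints object gives each device a cluster-DNS name without touching etcd directly (names, port, and address are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: iot-gw-01        # resolves as iot-gw-01.<namespace>.svc.cluster.local
spec:
  ports:
    - port: 443          # hypothetical port used to reach the device
---
apiVersion: v1
kind: Endpoints
metadata:
  name: iot-gw-01        # must match the Service name
subsets:
  - addresses:
      - ip: 10.8.0.42    # the device's VPN IP, set by your registration hook
    ports:
      - port: 443
```

Your VPN-connect hook would create or update these two objects through the Kubernetes API instead of writing to SkyDNS's etcd.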