Connect k8s pod to iscsi - kubernetes

I'm using a terastation's iscsi capabilities to create storage for my k8s 1.14 cluster (kubeadm, ubuntu 18.04). I check the iqn:
iscsiadm --mode node
192.168.2.113:3260,1 iqn.2004-08.jp.buffalo.7403bd2a30a0.drupal-mysql
There's no ":" in the IQN. When I try to use:
volumes:
  - name: iscsi-data
    iscsi:
      targetPortal: 192.168.2.113:3260
      iqn: "iqn.2004-08.jp.buffalo.7403bd2a30a0.drupal-mysql"
      lun: 0
      fsType: xfs
I get the error:
spec.template.spec.volumes[0].iscsi.iqn: Invalid value: "iqn.2004-08.jp.buffalo.7403bd2a30a0.drupal-mysql": must be valid format
I know it's looking for something that ends in ":name", but I can't figure out what that's supposed to be for the life of me. I know the iSCSI drive mounts, because I can see it on my node and was able to format it with xfs. I think I'm missing something really simple.
Thanks

The iSCSI network storage standard is fully documented in RFC 3720 and RFC 3721, including the format for constructing iSCSI names.
An iSCSI qualified name (IQN) corresponds to the following form:
iqn.yyyy-mm.naming-authority:unique-name, where:
iqn – the prefix iqn.
yyyy-mm – the year and month when the naming authority was established. For example: 1992-08.
naming-authority – the organizational naming authority string, usually the reverse syntax of the Internet domain name of the naming authority. For example: com.vmware.
unique-name – any name you want to use, such as the name of your host. For example: host-1
In the above k8s volume spec case you might try to specify IQN like:
iqn: "iqn.2004-08.jp.buffalo:7403bd2a30a0.drupal-mysql"
You can find a related example of iSCSI volume provisioning in a Kubernetes cluster here.
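With the colon separating the naming authority from the unique name, the full volume definition from the question would then look like this (same portal and LUN as above):

```yaml
volumes:
  - name: iscsi-data
    iscsi:
      targetPortal: 192.168.2.113:3260
      # colon after the reversed domain, not a dot:
      iqn: "iqn.2004-08.jp.buffalo:7403bd2a30a0.drupal-mysql"
      lun: 0
      fsType: xfs
```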

Related

fsGroup vs supplementalGroups

I'm running my deployment on OpenShift, and found that I need to have a GID of 2121 to have write access.
I still don't seem to have write access when I try this:
security:
  podSecurityContext:
    fsGroup: 2121
This gives me a 2121 is not an allowed group error.
However, this does seem to be working for me:
security:
  podSecurityContext:
    fsGroup: 100010000 # original fsGroup value
    supplementalGroups: [2121]
I am wondering what the difference of fsGroup and supplementalGroups is.
I've read the documentation here and have also looked at kubectl explain deployment.spec.template.spec.securityContext, but I still can't quite understand the difference.
Could I get some clarification on what are the different use cases?
FSGroup is used to set the group that owns the pod volumes. This group will be used by Kubernetes to change the permissions of all files in volumes, when volumes are mounted by a pod.
The owning GID will be the FSGroup
The setgid bit is set (new files created in the volume will be owned by FSGroup)
The permission bits are OR'd with rw-rw----
If unset, the Kubelet will not modify the ownership and permissions of any volume.
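The OR with rw-rw---- can be checked numerically; a quick sketch (rw-rw---- is octal 0660):

```python
# fsGroup handling ORs a file's permission bits with rw-rw---- (octal 0o660),
# so group read/write is guaranteed without removing any existing bits.
existing_mode = 0o640               # rw-r-----
applied_mode = existing_mode | 0o660
print(oct(applied_mode))            # 0o660, i.e. rw-rw----
```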
Some caveats when using FSGroup:
Changing the ownership of a volume for slow and/or large file systems can cause delays in pod startup.
This can harm other processes using the same volume if their processes do not have permission to access the new GID.
SupplementalGroups - controls which supplemental group ID can be assigned to processes in a pod.
A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container.
Additionally from the OpenShift documentation:
The recommended way to handle NFS access, assuming it is not an option to change permissions on the NFS export, is to use supplemental groups. Supplemental groups in OpenShift Container Platform are used for shared storage, of which NFS is an example. In contrast, block storage such as iSCSI uses the fsGroup SCC strategy and the fsGroup value in the securityContext of the pod.
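Putting both together, a pod-level securityContext can set fsGroup for volume ownership and supplementalGroups for extra group memberships. A sketch using the values from the question (not validated against your OpenShift SCC ranges):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  securityContext:
    fsGroup: 100010000          # group that will own mounted volumes
    supplementalGroups: [2121]  # extra GIDs added to container processes
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
```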

how to configure kubespray DNS for bare-metal

I am relatively new to Kubernetes and have a project for my university class: building a Kubernetes cluster on bare metal.
For this I have set up a PoC environment of 6 machines (of which 3 are KVM machines on one node). All the administration, meaning DHCP and DNS, is handled by MAAS on one machine. I have a DNS zone k8s.example.com delegated to the MAAS DNS server, which contains all the machines. The whole network is in its own VLAN, 10.0.10.0/24, with the MetalLB IP range reserved outside of DHCP.
(A diagram of the simple cluster setup was attached here.)
Software-wise, all hosts are running Ubuntu 20.04, and I use kubespray to deploy everything, meaning Kubernetes, MetalLB and nginx-ingress-controller. My corresponding values for kubespray are:
dashboard_enabled: false
ingress_nginx_enabled: true
ingress_nginx_host_network: true
kube_proxy_strict_arp: true
metallb_enabled: true
metallb_speaker_enabled: true
metallb_ip_range:
- "10.0.10.100-10.0.10.120"
kubeconfig_localhost: true
My problem is that I cannot get DNS resolution from inside the cluster to the Internet to work.
I had a wildcard A record set for *.k8s.example.com, pointing to the nginx-ingress external IP, which worked fine to make every pod accessible from outside.
The problem was that, with it in place, no container inside the cluster could reach the Internet anymore: every request was routed via the ingress. If I tried to reach www.google.net, it would actually resolve www.google.net.k8s.example.com, which makes kind of sense. Only .com domains could be reached without problems (for example www.google.com). After removing the wildcard A record it worked fine again. Pods inside the cluster have no problem reaching each other.
There are several configuration places where tweaking might make sense, yet after 2 weeks I would really prefer a solution that is based on best practice and done right.
I would really love to be able to work with a wildcard A record, but I fear that might not be possible.
I hope I supplied all the information needed to give you enough of an overview to understand my problem.
EDIT:
I used the standard kubespray DNS config, as I was told it would suffice:
DNS configuration.
# Kubernetes cluster name, also will be used as DNS domain
cluster_name: cluster.local
# Subdomains of DNS domain to be resolved via /etc/resolv.conf for hostnet pods
ndots: 2
# Can be coredns, coredns_dual, manual or none
dns_mode: coredns
# Set manual server if using a custom cluster DNS server
# manual_dns_server: 10.x.x.x
# Enable nodelocal dns cache
enable_nodelocaldns: true
nodelocaldns_ip: 169.254.25.10
nodelocaldns_health_port: 9254
# nodelocaldns_external_zones:
# - zones:
# - example.com
# - example.io:1053
# nameservers:
# - 1.1.1.1
# - 2.2.2.2
# cache: 5
# - zones:
# - https://mycompany.local:4453
# nameservers:
# - 192.168.0.53
# cache: 0
# Enable k8s_external plugin for CoreDNS
enable_coredns_k8s_external: false
coredns_k8s_external_zone: k8s_external.local
# Enable endpoint_pod_names option for kubernetes plugin
enable_coredns_k8s_endpoint_pod_names: false
# Can be docker_dns, host_resolvconf or none
resolvconf_mode: docker_dns
# Deploy netchecker app to verify DNS resolve as an HTTP service
deploy_netchecker: false
# Ip address of the kubernetes skydns service
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_addresses|ipaddr('net')|ipaddr(4)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"
What I noticed is that the /etc/resolv.conf of pods looks like this:
/ $ cat /etc/resolv.conf
nameserver 169.254.25.10
search flux-system.svc.cluster.local svc.cluster.local cluster.local k8s.example.com maas
options ndots:5
For example, on the node (which is managed by MAAS) it is:
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0 trust-ad
search k8s.example.com maas
As discussed in comments, the issue is with the resolv.conf on your Kubernetes nodes, and the fact that you are using a wildcard record, that matches one of the names in that resolv.conf search entries.
Any name you may look up, from a Node or a Pod, would first be searched as ${input}.${search-entry}; ${input} itself would only be queried if concatenation with your search entries did not already return some record. Having a wildcard record matching a domain in the search list results in just about any name resolving to that record.
Granted that in this case the k8s.example.com search entry is pushed by MAAS, and that we can't really remove it persistently, the next best solution would be to use another domain to serve your Ingresses - either a subdomain, or something unrelated. Usually, changing an option in your DHCP server should be enough - or, arguably better: don't use DHCP for your Kubernetes nodes.
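The search-list behavior can be sketched with a small simulation of glibc-style name expansion (a hypothetical helper, using the search list and ndots from the pod's resolv.conf above):

```python
def candidate_names(name, search_domains, ndots=5):
    """Return the order in which a glibc-style resolver tries names."""
    if name.endswith("."):
        return [name]                      # absolute name: no search expansion
    expanded = [f"{name}.{d}" for d in search_domains]
    if name.count(".") < ndots:
        return expanded + [name]           # fewer than ndots dots: search list first
    return [name] + expanded               # otherwise: literal name first

search = ["flux-system.svc.cluster.local", "svc.cluster.local",
          "cluster.local", "k8s.example.com", "maas"]

# www.google.net has only 2 dots (< ndots:5), so the search list is tried
# first -- and the wildcard *.k8s.example.com record matches the candidate
# www.google.net.k8s.example.com before the literal name is ever queried.
print(candidate_names("www.google.net", search))
```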

Kubernetes : Force value of "$pvName" with nfs-client-provisioner

I use nfs-client-provisioner inside my kubernetes cluster.
But, the name of the PersistentVolume is random.
cf. doc:
nfs-client-provisioner
--> Persistent volumes are provisioned as ${namespace}-${pvcName}-${pvName}
But where could I change the value of pvName?
Currently it appears random, for example: pvName = pvc-2v82c574-5bvb-491a-bdfe-061230aedd5f
That is the naming convention for the directories created on the NFS server's share, which correspond to the PV names.
The PV name itself, when provisioned dynamically by nfs-provisioner, follows this naming convention:
pvc- + claim.UID
Background information:
According to the design proposal of external storage provisioners (NFS-client belongs to this category), you must not declare volumeName explicitly in PVC spec.
# volumeName: must be empty!
pv.Name MUST be unique. Internal provisioners use name based on claim.UID to produce conflicts when two provisioners accidentally provision a PV for the same claim, however external provisioners can use any mechanism to generate an unique PV name.
In case of nfs-client provisioner, the pv.Name generation is handled by the controller library, and it gets following format:
pvc- + claim.UID
Source
I hope it helps.
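As an illustration, a PVC for such an external provisioner leaves volumeName unset and lets the controller generate pvc-<claim.UID> (the storage class name and size below are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  storageClassName: managed-nfs-storage  # assumed class served by nfs-client-provisioner
  # volumeName: must stay empty -- the provisioner names the PV pvc-<claim.UID>
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```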

What prerequisites do I need for Kubernetes to mount an EBS volume?

The documentation doesn't go into detail. I imagine I would at least need an IAM role.
This is what we have done and it worked well.
I was on kubernetes 1.7.2 and trying to provision storage (dynamic/static) for kubernetes pods on AWS. Some of the things mentioned below may not be needed if you are not looking for dynamic storage classes.
Made sure that the DefaultStorageClass admission controller is enabled on the API server. (DefaultStorageClass is among the comma-delimited, ordered list of values for the --enable-admission-plugins flag of the API server component.)
I have given the options --cloud-provider=aws and --cloud-config=/etc/aws/aws.conf (while starting apiserver, controller-manager and kubelet)
(the file /etc/aws/aws.conf is present on instance with below contents)
$ cat /etc/aws/aws.conf
[Global]
Zone = us-west-2a
Created an IAM policy (as in the link below), added it to a role, created an instance profile for it and attached that to the instances. (NOTE: I initially missed attaching the instance profile and it did not work.)
https://medium.com/@samnco/using-aws-elbs-with-the-canonical-distribution-of-kubernetes-9b4d198e2101
For dynamic provisioning:
Created storage class and made it default.
Let me know if this did not work.
Regards
Sudhakar
This is the one used by kubespray, and is very likely indicative of a rational default:
https://github.com/kubernetes-incubator/kubespray/blob/v2.5.0/contrib/aws_iam/kubernetes-minion-policy.json
with the tl;dr of that link being to create an Allow for the following actions:
s3:*
ec2:Describe*
ec2:AttachVolume
ec2:DetachVolume
route53:*
(although I would bet that s3:* is too wide, I don't have the information handy to provide a more constrained version; similar observation on the route53:*)
All of the Resource keys for those are * except the s3: one which restricts the resource to buckets beginning with kubernetes-* -- unknown if that's just an example, or there is something special in the kubernetes prefixed buckets. Obviously you might have a better list of items to populate the Resource keys to genuinely restrict attachable volumes (just be careful with dynamically provisioned volumes, as would be created by PersistentVolume resources)
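As a sketch of a more constrained variant covering only the volume-attach subset (whether your cluster also needs the s3 and route53 permissions depends on your setup; this is an assumption, not a tested policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Resource": "*"
    }
  ]
}
```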

Kubernetes | kubespray

When deploying a Kubernetes cluster manually we use kubeadm,
kubeadm init ...
passing the parameter --apiserver-cert-extra-sans=<FQDN> to include the FQDN in the generated certificate.
What approach can we use to achieve the same effect using Kubespray/Ansible?
I thought it was supplementary_addresses_in_ssl_keys but seeing it used demonstrates they really mean "IP address" and not the more generic address concept.
So I would suspect one of two paths:
1. Update the openssl.conf.j2 to distinguish between a supplementary_address that is an IP versus one that is a hostname.
2. Cheat, and make the kube-master "hostnames" in the inventory match the SAN name you would like in the cert (since those identifiers in the inventory can be mapped to IP addresses via ansible_ssh_host for the purposes of connecting to the Nodes).
Arguably the change to openssl.conf.j2 should go upstream in a PR, because your request certainly seems like a common and reasonable one
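A sketch of the second path, assuming a hypothetical desired SAN of master.k8s.example.com (the inventory name becomes the certificate hostname, while ansible_ssh_host keeps the real IP for the Ansible connection):

```ini
[kube-master]
master.k8s.example.com ansible_ssh_host=10.0.10.11

[kube-node]
node1.k8s.example.com ansible_ssh_host=10.0.10.21
```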