How to Implement a specific /etc/resolv.conf per Openshift project - kubernetes

I have a use case where each OpenShift project belongs to its own VLAN, which contains more than just OpenShift nodes. Each VLAN has its own independent DNS to resolve all the hosts within that VLAN. The OpenShift cluster hosts several such VLANs at the same time. To get per-project DNS resolution working, project-based DNS resolving has to be implemented.
Is there a way to change a pod's /etc/resolv.conf depending on the OpenShift project it runs in? The cluster runs on RHEL 7.x, OpenShift is 3.11.

Personally, I don't think OpenShift supports configuring DNS per project. But you can consider the CustomPodDNS feature to configure DNS per Pod, and you can use it to give all Pods in a project the same DNS config.
You can enable the CustomPodDNS feature for the OCP cluster by configuring the following parameters in /etc/origin/master/master-config.yaml.
kubernetesMasterConfig:
  apiServerArguments:
    feature-gates:
    - CustomPodDNS=true
  controllerArguments:
    feature-gates:
    - CustomPodDNS=true
You can also enable this feature on a single node host by configuring it in /etc/origin/node/node-config.yaml.
kubeletArguments:
  feature-gates:
  - CustomPodDNS=true
You should restart the related master and node services for the changes to take effect.
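For example, on OCP 3.11 with the default static-pod control plane the restarts would typically look like the following (a hedged sketch; adjust to how your masters and nodes are actually deployed):

# on each master
master-restart api
master-restart controllers
# on each node whose node-config.yaml changed
systemctl restart atomic-openshift-node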
For the Pod configuration, refer to Pod's DNS Config for more details.
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: dns-example
spec:
  containers:
  - name: test
    image: nginx
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 1.2.3.4
    searches:
    - ns1.svc.cluster.local
    - my.dns.search.suffix
    options:
    - name: ndots
      value: "2"
    - name: edns0

Related

How to make Redis work with mTLS enabled Istio cluster?

Summary
I have a simple Istio-enabled k8s cluster consisting of only:
A Java web server.
A Redis master instance.
Normally, the web server can read and write from Redis. However, Kiali shows a disconnected graph similar to (https://kiali.io/documentation/latest/faq/#disconnected-tcp). As a result, I tried to explicitly turn on mTLS by using STRICT mode. However, Kiali seems to continue to show a disconnected graph.
Set up:
Kubernetes version 1.18.0
Minikube version 1.18.0
Istio version 1.9
I followed Istio's Getting Started page to install Istio.
$ istioctl install --set profile=demo -y
$ kubectl apply -f samples/addons
Java Server code snippet (redis.clients.jedis.Jedis)
Jedis redis = new Jedis("redis-master");
redis.set(key, value);
mTLS
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
name: "default"
spec:
mtls:
mode: STRICT
Questions
My understanding is that mTLS should be turned on by default. Is this not the case for non-HTTP TCP traffic?
Is there anything special I need to do to enable mTLS for non-HTTP TCP traffic? (e.g. change the port on the Service to 443 from 6379? Set up a VirtualService?).
According to the Istio documentation you have to configure Redis to make it work with Istio.
Similar to other services deployed in an Istio service mesh, Redis instances need to listen on 0.0.0.0. However, each Redis slave instance should announce an address that can be used by master to reach it, which cannot also be 0.0.0.0.
Use the Redis configuration parameter replica-announce-ip to announce the correct address. For example, set replica-announce-ip to the IP address of each Redis slave instance using these steps:
Pass the pod IP address through an environment variable in the env subsection of the slave StatefulSet definition:
- name: "POD_IP"
valueFrom:
fieldRef:
fieldPath: status.podIP
Also, add the following under the command subsection:
echo "" >> /opt/bitnami/redis/etc/replica.conf
echo "replica-announce-ip $POD_IP" >> /opt/bitnami/redis/etc/replica.conf

How to expose kubernetes service on prem using 443/80

Is it possible to expose a Kubernetes service using port 443/80 on-premise?
I know some ways to expose services in Kubernetes:
1. NodePort - The default port range is 30000 - 32767, so we cannot access the service using 443/80. Changing the port range is risky because of port conflicts, so it is not a good idea.
2. Host network - Force the pod to use the host's network instead of a dedicated network namespace. Not a good idea because we lose kube-dns, etc.
3. Ingress - AFAIK it uses NodePort (so we face the first problem again) or a cloud provider LoadBalancer. Since we run Kubernetes on premise we cannot use this option.
MetalLB, which allows you to create Kubernetes services of type LoadBalancer in clusters that don't run on a cloud provider, is not yet stable enough.
Do you know any other way to expose a service in Kubernetes using port 443/80 on-premise?
I'm looking for a "Kubernetes solution"; Not using external cluster reverse proxy.
Thanks.
IMHO ingress is the best way to do this on prem.
We run the nginx-ingress-controller as a daemonset with each controller bound to ports 80 and 443 on the host network. Nearly 100% of traffic to our clusters comes in on 80 or 443 and is routed to the right service by ingress rules.
Per app, you just need a DNS record mapping your hostname to your cluster's nodes, and a corresponding ingress.
Here's an example of the daemonset manifest:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      component: ingress-controller
  template:
    metadata:
      labels:
        component: ingress-controller
    spec:
      restartPolicy: Always
      hostNetwork: true
      containers:
      - name: nginx-ingress-lb
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
        ports:
        - name: http
          hostPort: 80
          containerPort: 80
          protocol: TCP
        - name: https
          hostPort: 443
          containerPort: 443
          protocol: TCP
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - '--default-backend-service=$(POD_NAMESPACE)/default-http-backend'
Use an ingress controller as the entrypoint to the services in the Kubernetes cluster. Run the ingress controller on port 80 or 443.
You need to define ingress rules for each backend service that you want to access from outside. The ingress controller then allows clients to reach the services based on the paths defined in the ingress rules.
If you need to allow access over https, you need to obtain TLS certificates, load them into secrets and reference those secrets in the ingress rules.
The most popular one is the nginx ingress controller. Traefik and HAProxy ingress controllers are alternative solutions.
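As a sketch, an ingress rule that routes a hostname to a backend service and terminates TLS with a certificate stored in a secret could look like this (the hostname, service name and secret name are placeholders):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-example-com-tls   # secret containing tls.crt and tls.key
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-service
          servicePort: 8080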
The idea of a hostNetwork proxy is actually not bad; the OpenShift router uses it, for example. You define two or three nodes to run the proxy and use DNS load balancing in front of them.
And you can still use kube-dns with hostNetwork, see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
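Concretely, the pod spec fields that make cluster DNS work together with the host network are:

spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet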
You are probably running a kubeadm on-premise Kubernetes setup with an nginx ingress controller on unix/linux hosts and can't safely expose ports in the restricted system port range (0-1023).
You either need to set up your own dedicated load balancer pair (e.g. Linux boxes running HAProxy) or alternatively use an existing load balancer if you are lucky enough to be in a corporate environment that already provides load balancing (e.g. an F5 LB).
Then you will be able to set the load balancers to forward your 443/80 requests to your cluster nodes' 30443/30080 ports, which are handled by your cluster's ingress controller.
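To make those node ports predictable for the load balancer, the ingress controller's NodePort service can pin them explicitly; a sketch (the service name, namespace and labels are placeholders for whatever your ingress controller actually uses):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443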

How to auto update /etc/hosts file entries inside running Pod without entering the pod

How can we auto-update (delete, create, change) entries in the /etc/hosts file of a running Pod without actually entering the pod?
We are working on containerisation of an SAP application server and so far have succeeded in achieving this using Kubernetes.
apiVersion: v1
kind: Pod
spec:
  hostNetwork: true
Since we are using the host network approach, all entries of our VM's /etc/hosts file get copied whenever a new pod is created.
However, once a pod has been created and is in the running state, any changes to the VM's /etc/hosts file are not transferred to the already running pod.
We would like to achieve this for our project requirement.
Kubernetes does have several different ways of affecting name resolution; your request is most similar to the HostAliases mechanism described here and on related pages.
Here is an extract, emphasis mine.
Adding entries to a Pod’s /etc/hosts file provides Pod-level override of hostname resolution when DNS and other options are not applicable. In 1.7, users can add these custom entries with the HostAliases field in PodSpec.
Modification not using HostAliases is not suggested because the file is managed by the kubelet and can be overwritten during Pod creation/restart.
An example Pod specification using HostAliases is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"
One issue here is that you will need to update and restart the Pods with a new set of HostAliases if your network IPs change. That might cause downtime in your system.
Are you sure you need this mechanism and not a service that points to an external endpoint?
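For comparison, a minimal sketch of that alternative: a Service without a selector plus a manually managed Endpoints object pointing at an external address (the name, port and IP below are placeholders). Updating the Endpoints object redirects traffic without touching any running pod's /etc/hosts:

apiVersion: v1
kind: Service
metadata:
  name: sap-backend
spec:
  ports:
  - port: 3200
---
apiVersion: v1
kind: Endpoints
metadata:
  name: sap-backend   # must match the Service name
subsets:
- addresses:
  - ip: 10.1.2.3      # external host; change this when the address changes
  ports:
  - port: 3200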

Expose port 80 on Digital Ocean's managed Kubernetes without a load balancer

I would like to expose my Kubernetes Managed Digital Ocean (single node) cluster's service on port 80 without the use of Digital Ocean's load balancer. Is this possible? How would I do this?
This is essentially a hobby project (I am just getting started with Kubernetes) and I want to keep the cost very low.
You can deploy an Ingress configured to use the host network and port 80/443.
DO's firewall for your cluster doesn't have 80/443 inbound open by default.
If you edit the auto-created firewall the rules will eventually reset themselves. The solution is to create a separate firewall also pointing at the same Kubernetes worker nodes:
$ doctl compute firewall create \
--inbound-rules="protocol:tcp,ports:80,address:0.0.0.0/0,address:::/0 protocol:tcp,ports:443,address:0.0.0.0/0,address:::/0" \
--tag-names=k8s:CLUSTER_UUID \
--name=k8s-extra-mycluster
(Get the CLUSTER_UUID value from the dashboard or the ID column from doctl kubernetes cluster list)
Create the nginx ingress using the host network. I've included the helm chart config below, but you could do it via the direct install process too.
EDIT: The Helm chart in the above link has been DEPRECATED; therefore, as per the new docs, the correct way of installing the chart is:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
After this repo is added & updated
# For Helm 2
$ helm install stable/nginx-ingress --name=myingress -f myingress.values.yml
# For Helm 3
$ helm install myingress stable/nginx-ingress -f myingress.values.yml
#EDIT: The New way to install in helm 3
helm install myingress ingress-nginx/ingress-nginx -f myingress.values.yaml
myingress.values.yml for the chart:
---
controller:
  kind: DaemonSet
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  daemonset:
    useHostPort: true
  service:
    type: ClusterIP
rbac:
  create: true
You should be able to access the cluster on :80 and :443 via any worker node IP, and it'll route traffic to your ingress.
Since node IPs can and do change, look at deploying external-dns to manage DNS entries to point to your worker nodes. Again, using the helm chart and assuming your DNS domain is hosted by DigitalOcean (though any supported DNS provider will work):
# For Helm 2
$ helm install --name=mydns -f mydns.values.yml stable/external-dns
# For Helm 3
$ helm install mydns stable/external-dns -f mydns.values.yml
mydns.values.yml for the chart:
---
provider: digitalocean
digitalocean:
  # create the API token at https://cloud.digitalocean.com/account/api/tokens
  # needs read + write
  apiToken: "DIGITALOCEAN_API_TOKEN"
domainFilters:
  # domains you want external-dns to be able to edit
  - example.com
rbac:
  create: true
Create a Kubernetes Ingress resource to route requests to an existing Kubernetes service:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: testing123-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: testing123.example.com # the domain you want associated
    http:
      paths:
      - path: /
        backend:
          serviceName: testing123-service # existing service
          servicePort: 8000 # existing service port
After a minute or so you should see the DNS records appear and be resolvable:
$ dig testing123.example.com # should return worker IP address
$ curl -v http://testing123.example.com # should send the request through the Ingress to your backend service
(Edit: editing the automatically created firewall rules eventually breaks, add a separate firewall instead).
A NodePort Service can do what you want. Something like this:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    nodePort: 80
    targetPort: 80
This will redirect incoming traffic from port 80 of the node to port 80 of your pod. Publish the node IP in DNS and you're set.
In general exposing a service to the outside world like this is a very, very bad idea, because the single node passing through all traffic to the service is both going to receive unbalanced load and be a single point of failure. That consideration doesn't apply to a single-node cluster, though, so with the caveat that LoadBalancer and Ingress are the fault-tolerant ways to do what you're looking for, NodePort is best for this extremely specific case.

Running kubectl proxy from same pod vs different pod on same node - what's the difference?

I'm experimenting with this, and I'm noticing a difference in behavior that I'm having trouble understanding, namely between running kubectl proxy from within a pod vs running it in a different pod.
The sample configuration runs kubectl proxy and the container that needs it* in the same pod in a daemonset, i.e.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  # ...
spec:
  template:
    metadata:
      # ...
    spec:
      containers:
      # this container needs kubectl proxy to be running:
      - name: l5d
        # ...
      # so, let's run it:
      - name: kube-proxy
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"
When doing this on my cluster, I get the expected behavior. However, I will run other services that also need kubectl proxy, so I figured I'd rationalize that into its own daemon set to ensure it's running on all nodes. I thus removed the kube-proxy container and deployed the following daemon set:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-proxy
  labels:
    app: kube-proxy
spec:
  template:
    metadata:
      labels:
        app: kube-proxy
    spec:
      containers:
      - name: kube-proxy
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"
In other words, the same container configuration as previously, but now running in independent pods on each node instead of within the same pod. With this configuration "stuff doesn't work anymore"**.
I realize the solution (at least for now) is to just run the kube-proxy container in any pod that needs it, but I'd like to know why I need to. Why isn't just running it in a daemonset enough?
I've tried to find more information about running kubectl proxy like this, but my search results drown in results about running it to access a remote cluster from a local environment, i.e. not at all what I'm after.
I include these details not because I think they're relevant, but because they might be even though I'm convinced they're not:
*) a Linkerd ingress controller, but I think that's irrelevant
**) in this case, the "working" state is that the ingress controller complains that the destination is unknown because there's no matching ingress rule, while the "not working" state is a network timeout.
namely between running kubectl proxy from within a pod vs running it in a different pod.
Assuming your cluster has a software-defined network, such as flannel or calico, a Pod has its own IP and all containers within a Pod share the same networking space. Thus:
containers:
- name: c0
  command: ["curl", "127.0.0.1:8001"]
- name: c1
  command: ["kubectl", "proxy", "-p", "8001"]
will work, whereas in a DaemonSet, they are by definition not in the same Pod and thus the hypothetical c0 above would need to use the DaemonSet's Pod's IP to contact 8001. That story is made more complicated by the fact that kubectl proxy by default only listens on 127.0.0.1, so you would need to alter the DaemonSet's Pod's kubectl proxy to include --address='0.0.0.0' --accept-hosts='.*' to even permit such cross-Pod communication. I believe you also need to declare the ports: array in the DaemonSet configuration, since you are now exposing that port into the cluster, but I'd have to double-check whether ports: is merely polite, or is actually required.
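A sketch of how the DaemonSet's proxy container could be adjusted along those lines; the extra args and the ports: entry are the changes, and this is illustrative only, since it exposes an unauthenticated proxy to the API server to the whole cluster network:

containers:
- name: kube-proxy
  image: buoyantio/kubectl:v1.8.5
  args:
  - "proxy"
  - "-p"
  - "8001"
  - "--address=0.0.0.0"
  - "--accept-hosts=.*"
  ports:
  - name: proxy
    containerPort: 8001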