I need to add a second interface to some specific K8s pods on GKE so that they can be reached directly by public users on the Internet. So I used Multus and defined a macvlan CNI configuration like this:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf
spec:
config: '{
"cniVersion": "0.3.1",
"type": "macvlan",
"master": "eth0",
"mode": "bridge",
"ipam": {
"type": "host-local",
"subnet": "10.162.0.0/20",
"rangeStart": "10.162.0.100",
"rangeEnd": "10.162.0.150",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "10.162.0.1"
}
}'
10.162.0.1 is the default gateway of my K8s nodes in GCP, so I assumed that with this configuration the pods would have access to the outside world. But inside the pods there is only one default gateway, the one that routes the internal pod traffic, and I can't add any routes myself because of privilege issues.
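A quick way to see what the pod actually got on the secondary interface (the pod name is just a placeholder, and this assumes the container image ships iproute2):

$ kubectl exec -it my-pod -- ip addr show net1
$ kubectl exec -it my-pod -- ip route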
Question:
Is my expectation wrong? How should I use macvlan to create a public interface for those pods?
Related
I have 2 worker nodes in a Kubernetes cluster. The worker nodes are on the same L2 domain.
$ cat ipvlanconf1.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ipvlanconf1
  namespace: cncf
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "ipvlan",
    "master": "enp1s0.10",
    "mode": "l3",
    "vlan": 10,
    "ipam": {
      "type": "whereabouts",
      "range": "10.1.1.1/24",
      "gateway": "10.1.1.254"
    }
  }'
Pod00 on Worker-node0 is using IPVLAN, so net1 gets 10.1.1.1.
Pod01 on Worker-node1 is using IPVLAN, so net1 gets 10.1.1.2.
I want to be able to ping 10.1.1.1 <---> 10.1.1.2, and the traffic should carry the VLAN header. I don't see any VLAN tags in the tcpdump.
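For reference, a capture like the following on the node's uplink should show whether tagged frames are present at all (interface name taken from the config above):

$ sudo tcpdump -i enp1s0 -e -nn vlan 10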
Questions:
I assumed that the VLAN header is inserted by the pod itself. However, in the IPVLAN CNI I don't see any code where VLAN information is taken from the config. Is my understanding correct?
Should the interfaces in the pod be explicitly configured as VLAN sub-interfaces (net1.10), or should I do it on the worker node (enp1s0.10, as in the sketch below)?
What should I use as the 'master' interface: enp1s0 or enp1s0.10?
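For context, a node-side VLAN sub-interface such as enp1s0.10 is normally created on the worker itself, along these lines (a sketch; interface name and VLAN ID taken from the config above):

$ sudo ip link add link enp1s0 name enp1s0.10 type vlan id 10
$ sudo ip link set enp1s0.10 up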
Thanks
Is it possible to have a service dynamically deployed into every namespace of K8s?
Right now, the glusterFS endpoint (which is namespace-dependent) is deleted by K8s if the port is not in use anymore.
Ex:
{
    "kind": "Endpoints",
    "apiVersion": "v1",
    "metadata": {
        "name": "glusterfs"
    },
    "subsets": [
        {
            "addresses": [
                {
                    "ip": "172.0.0.1"
                }
            ],
            "ports": [
                {
                    "port": 1
                }
            ]
        }
    ]
}
So I made a Service for port 1 that is in use all the time, so I don't end up with a missing/deleted endpoint in any namespace.
apiVersion: v1
kind: Service
metadata:
  name: glusterfs
spec:
  ports:
  - port: 1
It would be interesting to have the above service deployed dynamically every time someone creates a new namespace.
A DaemonSet is used to deploy exactly one replica per node.
Coming to your question: why do you need to create the same service across namespaces?
It is not supported out of the box, though. However, you can create a custom script to achieve it.
K8s doesn't have any replication of services, pods, deployments, secrets, etc. across namespaces out of the box.
Introducing...The Kubernetes Controller/Operator Pattern.
Deploy a controller pod that has a read/list permissions on the namespaces resource. This controller will "watch" the namespaces and deploy whatever resources you want when they show up or change.
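A real controller is the robust way to do this, but as a rough illustration of the watch-and-create idea, a script along these lines would also work (a sketch; it assumes the glusterfs Service manifest shown above is saved as glusterfs-svc.yaml and that the script runs with permission to list/watch namespaces):

#!/bin/bash
# Naive namespace watcher: apply the glusterfs Service into every
# namespace that appears or changes.
kubectl get namespaces --watch -o name | while read -r ns; do
  kubectl apply -n "${ns#namespace/}" -f glusterfs-svc.yaml
done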
To get started building your own operator or controller, please look at kubebuilder: https://book.kubebuilder.io/
Our Kubernetes cluster includes an nginx load balancer that forwards the requests to other pods.
However, nginx sees local source IPs and therefore cannot set the correct X-Real-IP header. I tried setting the externalTrafficPolicy value of the nginx service to "Local", but the IP does not change.
Section of the nginx service config:
"selector": {
"app": "nginx-ingress",
"component": "controller",
"release": "loping-lambkin"
},
"clusterIP": "10.106.1.182",
"type": "LoadBalancer",
"sessionAffinity": "None",
"externalTrafficPolicy": "Local",
"healthCheckNodePort": 32718
Result:
GET / HTTP/1.1
Host: example.com:444
X-Request-ID: dd3310a96bf154d2ac38c8877dec312c
X-Real-IP: 10.39.0.0
X-Forwarded-For: 10.39.0.0
We use a bare-metal cluster with MetalLB.
I found out that Weave Net needs to be configured with NO_MASQ_LOCAL=1 to respect the externalTrafficPolicy property.
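With Weave Net deployed as a DaemonSet, one way to set that is the following (a sketch; the container name weave is an assumption about the standard manifest):

$ kubectl -n kube-system set env daemonset/weave-net -c weave NO_MASQ_LOCAL=1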
This appears to be a bug in the IPVS implementation for services of type LoadBalancer: https://github.com/google/metallb/issues/290
I cannot reach the following Kubernetes service when externalTrafficPolicy: Local is set. I access it directly through the NodePort but always get a timeout.
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "echo",
        "namespace": "default",
        "selfLink": "/api/v1/namespaces/default/services/echo",
        "uid": "c1b66aca-cc53-11e8-9062-d43d7ee2fdff",
        "resourceVersion": "5190074",
        "creationTimestamp": "2018-10-10T06:14:33Z",
        "labels": {
            "k8s-app": "echo"
        }
    },
    "spec": {
        "ports": [
            {
                "name": "tcp-8080-8080-74xhz",
                "protocol": "TCP",
                "port": 8080,
                "targetPort": 3333,
                "nodePort": 30275
            }
        ],
        "selector": {
            "k8s-app": "echo"
        },
        "clusterIP": "10.101.223.0",
        "type": "NodePort",
        "sessionAffinity": "None",
        "externalTrafficPolicy": "Local"
    },
    "status": {
        "loadBalancer": {}
    }
}
I know that for this to work, pods of the service need to be available on the node, because traffic is not routed to other nodes. I checked this.
I'm not sure where you are connecting from, what command you are running to test connectivity, or what your environment looks like. But this is most likely due to this known issue where the node ports are not reachable with externalTrafficPolicy set to Local if kube-proxy cannot find the IP address of the node it is running on.
This link sheds more light on the problem. Apparently --hostname-override on kube-proxy is not working as of K8s 1.10; you have to specify the HostnameOverride option in the kube-proxy ConfigMap. There's also a fix described here that should make it upstream at some point after this writing.
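On kubeadm-based clusters that option lives under the config.conf key of the kube-proxy ConfigMap in kube-system, roughly like this (a sketch; note this ConfigMap is shared by every node, which is why the per-node approach in the next answer is often needed):

$ kubectl -n kube-system edit configmap kube-proxy

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
hostnameOverride: "<node-name>"   # must match the node kube-proxy runs on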
As Jagrut said, the link shared in Rico's answer no longer contains the desired section with the patch, so I'll share a different thread where stacksonstacks' answer worked for me: here. The solution consists of editing kube-proxy.yaml to include the HOST_IP argument.
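Roughly, that fix amounts to patching the kube-proxy DaemonSet so each instance picks up its own node IP (a sketch; the binary path and existing flags may differ per distribution):

containers:
- name: kube-proxy
  command:
  - /usr/local/bin/kube-proxy
  - --config=/var/lib/kube-proxy/config.conf
  - --hostname-override=$(HOST_IP)
  env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP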
In my case, the request has to go to the cluster/public IP of the node that owns the deployment/pod:
spec:
  clusterIP: 10.9x.x.x        # <-- request this IP
  clusterIPs:
  - 10.9x.x.x
  externalTrafficPolicy: Local
or
NAME          STATUS   ...   EXTERNAL-IP
master-node   Ready          3.2x.x.x
worker-node   Ready          13.1x.x.x    <-- request this IP
Additionally, to always be able to request the same node's IP, use a nodeSelector in the Deployment, as in the sketch below.
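A minimal sketch, assuming the target node carries the standard kubernetes.io/hostname label with the value worker-node:

spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker-node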
I'm trying to get going with Kubernetes DaemonSets and not having any luck at all. I've searched for a solution to no avail. I'm hoping someone here can help out.
First, I've seen this ticket. Restarting the controller manager doesn't appear to help. As you can see here, the other kube processes have all been started after the apiserver and the api server has '--runtime-config=extensions/v1beta1=true' set.
kube 31398 1 0 08:54 ? 00:00:37 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://dock-admin:2379 --address=0.0.0.0 --allow-privileged=false --portal_net=10.254.0.0/16 --admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota --runtime-config=extensions/v1beta1=true
kube 12976 1 0 09:49 ? 00:00:28 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://127.0.0.1:8080 --cloud-provider=
kube 29489 1 0 11:34 ? 00:00:00 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://127.0.0.1:8080
However, api-versions only shows v1:
$ kubectl api-versions
Available Server Api Versions: v1
Kubernetes version is 1.2:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
The DaemonSet has been created, but appears to have no pods scheduled (status.desiredNumberScheduled).
$ kubectl get ds -o json
{
    "kind": "List",
    "apiVersion": "v1",
    "metadata": {},
    "items": [
        {
            "kind": "DaemonSet",
            "apiVersion": "extensions/v1beta1",
            "metadata": {
                "name": "ds-test",
                "namespace": "dvlp",
                "selfLink": "/apis/extensions/v1beta1/namespaces/dvlp/daemonsets/ds-test",
                "uid": "2d948b18-fa7b-11e5-8a55-00163e245587",
                "resourceVersion": "2657499",
                "generation": 1,
                "creationTimestamp": "2016-04-04T15:37:45Z",
                "labels": {
                    "app": "ds-test"
                }
            },
            "spec": {
                "selector": {
                    "app": "ds-test"
                },
                "template": {
                    "metadata": {
                        "creationTimestamp": null,
                        "labels": {
                            "app": "ds-test"
                        }
                    },
                    "spec": {
                        "containers": [
                            {
                                "name": "ds-test",
                                "image": "foo.vt.edu:1102/dbaa-app:v0.10-dvlp",
                                "ports": [
                                    {
                                        "containerPort": 8080,
                                        "protocol": "TCP"
                                    }
                                ],
                                "resources": {},
                                "terminationMessagePath": "/dev/termination-log",
                                "imagePullPolicy": "IfNotPresent"
                            }
                        ],
                        "restartPolicy": "Always",
                        "terminationGracePeriodSeconds": 30,
                        "dnsPolicy": "ClusterFirst",
                        "securityContext": {}
                    }
                }
            },
            "status": {
                "currentNumberScheduled": 0,
                "numberMisscheduled": 0,
                "desiredNumberScheduled": 0
            }
        }
    ]
}
Here is my YAML file to create the DaemonSet:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: ds-test
spec:
  selector:
    app: ds-test
  template:
    metadata:
      labels:
        app: ds-test
    spec:
      containers:
      - name: ds-test
        image: foo.vt.edu:1102/dbaa-app:v0.10-dvlp
        ports:
        - containerPort: 8080
Using that file to create the DaemonSet appears to work (I get 'daemonset "ds-test" created'), but no pods are created:
$ kubectl get pods -o json
{
    "kind": "List",
    "apiVersion": "v1",
    "metadata": {},
    "items": []
}
(I would have posted this as a comment, if I had enough reputation)
I am confused by your output.
kubectl api-versions should print out extensions/v1beta1 if it is enabled on the server. Since it does not, it looks like extensions/v1beta1 is not enabled.
But kubectl get ds should fail if extensions/v1beta1 is not enabled. So I cannot figure out whether extensions/v1beta1 is enabled on your server or not.
Can you try GET masterIP/apis and see if extensions is listed there?
You can also go to masterIP/apis/extensions/v1beta1 and see if daemonsets is listed there.
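Assuming the default insecure port 8080 that your controller manager is also pointing at, those checks would look something like:

$ curl http://<masterIP>:8080/apis
$ curl http://<masterIP>:8080/apis/extensions/v1beta1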
Also, I see kubectl version says 1.2, but then kubectl api-versions should not print out the string Available Server Api Versions (that string was removed in 1.1: https://github.com/kubernetes/kubernetes/pull/15796).
I had this issue in my cluster (k8s version: 1.9.7).
DaemonSets are controlled by the DaemonSet controller, not by the scheduler, so I restarted the controller manager and the problem was solved.
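If the controller manager runs under systemd (an assumption about this setup), the restart is the first command below; on static-pod control planes you would instead delete its pod so the kubelet recreates it:

$ sudo systemctl restart kube-controller-manager
$ kubectl -n kube-system delete pod -l component=kube-controller-manager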
But I think this is a Kubernetes issue; some related info:
Bug 1469037 - Sometime daemonset DESIRED=0 even this matched node
v1.7.4 - Daemonset DESIRED 0 (for node-exporter) #51785
I was facing a similar issue, then tried searching for the DaemonSet in the kube-system namespace, as mentioned here: https://github.com/kubernetes/kubernetes/issues/61342
I actually did get the output properly there.
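For reference, listing DaemonSets across all namespaces (rather than only the current one) is just:

$ kubectl get ds --all-namespaces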
For any case where the current state of the pods is not equal to the desired state (whether they were created by a DaemonSet, ReplicaSet, Deployment, etc.), I would first check the kubelet on the relevant node:
$ sudo systemctl status kubelet
Or:
$ sudo journalctl -u kubelet
In many cases pods weren't created in my cluster because of errors like:
Couldn't parse as pod (Object 'Kind' is missing in 'null')
This might occur after editing a resource's YAML in an editor like vim.
Try:
$ kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-
By default the master node cannot accept pods because of the NoSchedule taint; removing it allows the DaemonSet to schedule a pod there.