My understanding is that the AGE shown for a pod when using kubectl get pod shows the time that the pod has been running since the last restart. So, for the pod shown below, my understanding is that it initially restarted 14 times, but hasn't restarted in the last 17 hours. Is this correct, and where is a Kubernetes reference that explains this?
Hope you're enjoying your Kubernetes journey!
In fact, the AGE header in kubectl get pod shows how long ago the pod was created, i.e. how long the pod has existed. But do not confuse the pod with the containers inside it:
The RESTARTS header is backed by the '.status.containerStatuses[0].restartCount' field of the pod manifest. That means this header counts the restarts not of the pod, but of the container inside the pod.
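If you want to read that field directly instead of parsing the table, a jsonpath query works (the pod name here is a placeholder for your own pod):

```shell
# Prints the restart count of the first container in the pod.
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].restartCount}'
```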
Here is an example:
I just deployed a new pod:
NAME READY STATUS RESTARTS AGE
test-bg-7d57d546f4-f4cql 2/2 Running 0 9m38s
If I check the YAML of this pod, we can see the said "restartCount" field in the "status" section:
❯ k get po test-bg-7d57d546f4-f4cql -o yaml
apiVersion: v1
kind: Pod
metadata:
...
spec:
...
status:
  ...
  containerStatuses:
  ...
  - containerID: docker://3f53f140f775416644ea598d554e9b8185e7dd005d6da1940d448b547d912798
    ...
    name: test-bg
    ready: true
    restartCount: 0
    ...
So, to demonstrate what I'm saying, I'm going to exec into my pod and kill the main process it is running:
❯ k exec -it test-bg-7d57d546f4-f4cql -- bash
I have no name!@test-bg-7d57d546f4-f4cql:/tmp$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
1000 1 0.0 0.0 5724 3256 ? Ss 03:20 0:00 bash -c source /tmp/entrypoint.bash
1000 22 1.5 0.1 2966140 114672 ? Sl 03:20 0:05 java -jar test-java-bg.jar
1000 41 3.3 0.0 5988 3592 pts/0 Ss 03:26 0:00 bash
1000 48 0.0 0.0 8588 3260 pts/0 R+ 03:26 0:00 ps aux
I have no name!@test-bg-7d57d546f4-f4cql:/tmp$ kill 22
I have no name!@test-bg-7d57d546f4-f4cql:/tmp$ command terminated with exit code 137
and after this, if I re-execute the "kubectl get pod" command, I get this:
NAME READY STATUS RESTARTS AGE
test-bg-7d57d546f4-f4cql 2/2 Running 1 11m
Then, if I go back to my YAML, we can see that the restartCount field is indeed attached to my container and not to my pod:
❯ k get po test-bg-7d57d546f4-f4cql -o yaml
apiVersion: v1
kind: Pod
metadata:
...
spec:
...
status:
  ...
  containerStatuses:
  ...
  - containerID: docker://3f53f140f775416644ea598d554e9b8185e7dd005d6da1940d448b547d912798
    ...
    name: test-bg
    ready: true
    restartCount: 1
    ...
So, to conclude: the RESTARTS header gives you the restartCount of the container, not of the pod, while the AGE header gives you the age of the pod.
This time, if I delete the pod:
❯ k delete pod test-bg-7d57d546f4-f4cql
pod "test-bg-7d57d546f4-f4cql" deleted
we can see that the restartCount is back to 0, since the replacement is a brand-new pod with a brand-new age:
NAME READY STATUS RESTARTS AGE
test-bg-7d57d546f4-bnvxx 2/2 Running 0 23s
test-bg-7d57d546f4-f4cql 2/2 Terminating 2 25m
For your example, it means that the container restarted 14 times, but the pod itself was created 17 hours ago.
I can't find exact documentation of this, but it is explained here (https://kubernetes.io/docs/concepts/workloads/_print/#working-with-pods):
"Note: Restarting a container in a Pod should not be confused with restarting a Pod. A Pod is not a process, but an environment for running container(s). A Pod persists until it is deleted."
Hope this has helped you better understand.
Here is a little tip from https://kubernetes.io/docs/reference/kubectl/cheatsheet/:
kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'
(to sort your pods by their restartCount number :p)
Bye
OMG, they added a new feature in Kubernetes (I don't know since when), but look:
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-74d589986c-ngvgc 1/1 Running 0 21h
pod/postgres 0/1 CrashLoopBackOff 7 (3m16s ago) 14m
now you can also see how long ago the container last restarted (3m16s ago in my example)!
here is my kubernetes version:
❯ kubectl version --short
Client Version: v1.22.5
Server Version: v1.23.5
Related
I have the following pods on my kubernetes (1.18.3) cluster:
NAME READY STATUS RESTARTS AGE
pod1 1/1 Running 0 14m
pod2 1/1 Running 0 14m
pod3 0/1 Pending 0 14m
pod4 0/1 Pending 0 14m
pod3 and pod4 cannot start because the node has capacity for 2 pods only. When pod1 finishes and quits, then the scheduler picks either pod3 or pod4 and starts it. So far so good.
However, I also have a high priority pod (hpod) that I'd like to start before pod3 or pod4 when either of the running pods finishes and quits.
So I created the following PriorityClass, based on the one in the Kubernetes docs:
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-no-preemption
value: 1000000
preemptionPolicy: Never
globalDefault: false
description: "This priority class should be used for XYZ service pods only."
I've created the following pod yaml:
apiVersion: v1
kind: Pod
metadata:
  name: hpod
  labels:
    app: hpod
spec:
  containers:
  - name: hpod
    image: ...
    resources:
      requests:
        cpu: "500m"
        memory: "500Mi"
      limits:
        cpu: "500m"
        memory: "500Mi"
  priorityClassName: high-priority-no-preemption
Now the problem is that when I start the high-priority pod with kubectl apply -f hpod.yaml, the scheduler terminates a running pod to allow the high-priority pod to start, even though I've set 'preemptionPolicy: Never'.
The expected behaviour would be to postpone starting hpod until a currently running pod finishes. And when it does, then let hpod start before pod3 or pod4.
What am I doing wrong?
Prerequisites:
This solution was tested on Kubernetes v1.18.3, docker 19.03 and Ubuntu 18.
Also, a text editor is required (e.g. sudo apt-get install vim).
In the Kubernetes documentation under How to disable preemption you can find this Note:
Note: In Kubernetes 1.15 and later, if the feature NonPreemptingPriority is enabled, PriorityClasses have the option to set preemptionPolicy: Never. This will prevent pods of that PriorityClass from preempting other pods.
Also under Non-preempting PriorityClass you have information:
The use of the PreemptionPolicy field requires the NonPreemptingPriority feature gate to be enabled.
Later, if you check those Feature Gates, you will find that NonPreemptingPriority is false by default, i.e. disabled.
Output with your current configuration:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-normal 1/1 Running 0 32s
nginx-normal-2 1/1 Running 0 32s
$ kubectl apply -f prio.yaml
pod/nginx-priority created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-normal-2 1/1 Running 0 48s
nginx-priority 1/1 Running 0 8s
To enable preemptionPolicy: Never you need to add --feature-gates=NonPreemptingPriority=true to 3 files:
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
To check whether this feature gate is enabled, use these commands:
ps aux | grep apiserver | grep feature-gates
ps aux | grep scheduler | grep feature-gates
ps aux | grep controller-manager | grep feature-gates
For quite detailed information on why you have to edit those files, please check this Github thread.
$ sudo su
# cd /etc/kubernetes/manifests/
# ls
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
Use your text editor to add the feature gate to those files:
# vi kube-apiserver.yaml
and add - --feature-gates=NonPreemptingPriority=true under spec.containers.command, like in the example below:
spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=NonPreemptingPriority=true
    - --advertise-address=10.154.0.31
And do the same with the 2 other files. After that you can check whether these flags were applied:
$ ps aux | grep apiserver | grep feature-gates
root 26713 10.4 5.2 565416 402252 ? Ssl 14:50 0:17 kube-apiserver --feature-gates=NonPreemptingPriority=true --advertise-address=10.154.0.31
Now you have to redeploy your PriorityClass.
$ kubectl get priorityclass
NAME VALUE GLOBAL-DEFAULT AGE
high-priority-no-preemption 1000000 false 12m
system-cluster-critical 2000000000 false 23m
system-node-critical 2000001000 false 23m
$ kubectl delete priorityclass high-priority-no-preemption
priorityclass.scheduling.k8s.io "high-priority-no-preemption" deleted
$ kubectl apply -f class.yaml
priorityclass.scheduling.k8s.io/high-priority-no-preemption created
Last step is to deploy pod with this PriorityClass.
TEST
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-normal 1/1 Running 0 4m4s
nginx-normal-2 1/1 Running 0 18m
$ kubectl apply -f prio.yaml
pod/nginx-priority created
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-normal 1/1 Running 0 5m17s
nginx-normal-2 1/1 Running 0 20m
nginx-priority 0/1 Pending 0 67s
$ kubectl delete po nginx-normal-2
pod "nginx-normal-2" deleted
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-normal 1/1 Running 0 5m55s
nginx-priority 1/1 Running 0 105s
We have 2 clusters on GKE: dev and production. I tried to run this command on dev cluster:
gcloud beta container clusters update "dev" --update-addons=NodeLocalDNS=ENABLED
And everything went great: node-local-dns pods are running and all works. The next morning I decided to run the same command on the production cluster, and node-local-dns fails to run. I noticed that both PILLAR__LOCAL__DNS and PILLAR__DNS__SERVER in the yaml aren't changed to proper IPs. I tried to change those variables in the config yaml, but GKE keeps overwriting them back to the PILLAR__ placeholders...
The only difference between clusters is that dev runs on 1.15.9-gke.24 and production 1.15.11-gke.1.
Apparently 1.15.11-gke.1 version has a bug.
I recreated it first on 1.15.11-gke.1 and can confirm that node-local-dns Pods fall into CrashLoopBackOff state:
node-local-dns-28xxt 0/1 CrashLoopBackOff 5 5m9s
node-local-dns-msn9s 0/1 CrashLoopBackOff 6 8m17s
node-local-dns-z2jlz 0/1 CrashLoopBackOff 6 10m
When I checked the logs:
$ kubectl logs -n kube-system node-local-dns-msn9s
2020/04/07 21:01:52 [FATAL] Error parsing flags - Invalid localip specified - "__PILLAR__LOCAL__DNS__", Exiting
Solution:
Upgrading to 1.15.11-gke.3 helped. First you need to upgrade your master node and then your node pool. On this version everything runs nice and smoothly:
$ kubectl get daemonsets -n kube-system node-local-dns
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
node-local-dns 3 3 3 3 3 addon.gke.io/node-local-dns-ds-ready=true 44m
$ kubectl get pods -n kube-system -l k8s-app=node-local-dns
NAME READY STATUS RESTARTS AGE
node-local-dns-8pjr5 1/1 Running 0 11m
node-local-dns-tmx75 1/1 Running 0 19m
node-local-dns-zcjzt 1/1 Running 0 19m
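The two upgrade steps (control plane first, then the node pool) can be done with gcloud; the cluster and node-pool names below are placeholders:

```shell
# 1. Upgrade the control plane to the fixed version.
gcloud container clusters upgrade my-cluster --master --cluster-version 1.15.11-gke.3
# 2. Then upgrade the node pool to match.
gcloud container clusters upgrade my-cluster --node-pool default-pool --cluster-version 1.15.11-gke.3
```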
As for manually fixing this particular daemonset's yaml file, I wouldn't recommend it: you can be sure that GKE's auto-repair and auto-upgrade features will overwrite it sooner or later anyway.
I hope it was helpful.
Would there be any reason why a pod I'm trying to run on a k8s cluster stays in 'Completed' status forever but is never ready ('Ready 0/1')? The core-dns, kube-proxy, etc. pods are running and scheduled under each node in the node pool assigned to the cluster, and all the worker nodes seem to be in a healthy state.
This sounds like the pod's lifecycle has ended, probably because your pod finished the task it was meant for.
Something like the next example will not do anything: it will be created successfully, start, pull the image, and then be marked as Completed.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    resources: {}
Here is how it will look:
NAMESPACE NAME READY STATUS RESTARTS AGE
default my-pod 0/1 Completed 1 3s
default testapp-1 0/1 Pending 0 92m
default web-0 1/1 Running 0 3h2m
kube-system event-exporter-v0.2.5-7df89f4b8f-mc7kt 2/2 Running 0 7d19h
kube-system fluentd-gcp-scaler-54ccb89d5-9pp8z 1/1 Running 0 7d19h
kube-system fluentd-gcp-v3.1.1-gdr9l 2/2 Running 0 123m
kube-system fluentd-gcp-v3.1.1-pt8zp 2/2 Running 0 3h2m
kube-system fluentd-gcp-v3.1.1-xwhkn 2/2 Running 5 172m
kube-system fluentd-gcp-v3.1.1-zh29w 2/2 Running 0 7d19h
For this case, I recommend you check your yaml: what kind of pod are you running, and what is it meant for?
Even just for test purposes, you can add a command argument to keep it running.
command: ["/bin/sh","-c"]
args: ["command one; command two && command three"]
something like:
args: ["while true; do echo hello; sleep 10; done"]
The yaml with commands added would look like this:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    resources: {}
    command: ["/bin/sh","-c"]
    args: ["while true; do echo hello; sleep 10; done"]
Here is how it will look:
NAMESPACE NAME READY STATUS RESTARTS AGE
default my-pod 1/1 Running 0 8m56s
default testapp-1 0/1 Pending 0 90m
default web-0 1/1 Running 0 3h
kube-system event-exporter-v0.2.5-7df89f4b8f-mc7kt 2/2 Running 0 7d19h
kube-system fluentd-gcp-scaler-54ccb89d5-9pp8z 1/1 Running 0 7d19h
kube-system fluentd-gcp-v3.1.1-gdr9l 2/2 Running 0 122m
Another thing that will help is a kubectl describe pod $POD_NAME, to analyze this further.
This issue has nothing to do with the k8s cluster or setup; it's most likely your Dockerfile.
For the pod to be in Running state, the container needs a long-running process to execute; usually an ENTRYPOINT that specifies the instructions solves the problem you are facing.
A better approach to debug this particular issue is to run the docker container on a local machine before deploying it onto the k8s cluster.
The Completed status most likely means that Kubernetes started your container, and the container then exited.
In a Dockerfile, the ENTRYPOINT defines the process with PID 1 that the container holds; when that process exits, the container terminates. So whatever the pod is trying to do, you need to check whether the ENTRYPOINT command keeps running and doesn't die.
As a side note, Kubernetes will try to restart the pod once it terminates, but after a few tries the pod status may turn into CrashLoopBackOff and you will see a message similar to Back-off restarting failed container.
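As a minimal sketch of the difference (busybox used purely for illustration):

```dockerfile
FROM busybox
# An ENTRYPOINT that exits immediately (e.g. ["echo", "done"]) ends PID 1,
# so the container terminates and the pod shows Completed.
# Keeping PID 1 alive keeps the pod in Running:
ENTRYPOINT ["sh", "-c", "while true; do echo alive; sleep 10; done"]
```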
I do not want to decrease the number of pods controlled by a StatefulSet, and I think that decreasing pods is a dangerous operation in a production env.
so..., is there some way? thx ~
I'm not sure if this is what you are looking for but you can scale a StatefulSet
Use kubectl to scale StatefulSets
First, find the StatefulSet you want to scale.
kubectl get statefulsets <stateful-set-name>
Change the number of replicas of your StatefulSet:
kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
To show you an example, I've deployed a 2 pod StatefulSet called web:
$ kubectl get statefulsets.apps web
NAME READY AGE
web 2/2 60s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 63s
web-1 1/1 Running 0 44s
$ kubectl describe statefulsets.apps web
Name: web
Namespace: default
CreationTimestamp: Wed, 23 Oct 2019 13:46:33 +0200
Selector: app=nginx
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"name":"web","namespace":"default"},"spec":{"replicas":2,"select...
Replicas: 824643442664 desired | 2 total
Update Strategy: RollingUpdate
Partition: 824643442984
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
...
Now if we do scale this StatefulSet up to 5 replicas:
$ kubectl scale statefulset web --replicas=5
statefulset.apps/web scaled
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 3m41s
web-1 1/1 Running 0 3m22s
web-2 1/1 Running 0 59s
web-3 1/1 Running 0 40s
web-4 1/1 Running 0 27s
$ kubectl get statefulsets.apps web
NAME READY AGE
web 5/5 3m56s
You do not have any downtime in already working pods.
i think that decreasing pods is a dangerous operation in production env.
I agree with you.
As Crou wrote, it is possible to do this operation with kubectl scale statefulsets <stateful-set-name>, but this is an imperative operation, and it is not recommended to do imperative operations in a production environment.
In a production environment it is better to use a declarative operation: e.g. keep the number of replicas in a text file (e.g. stateful-set-name.yaml) and deploy it with kubectl apply -f <stateful-set-name>.yaml. With this way of working, it is easy to store the yaml files in Git, so you have full control of all changes and can revert/rollback to a previous configuration. When you store the declarative files in a Git repository, you can use a CI/CD solution, e.g. Jenkins or ArgoCD, to 1) validate the operation (e.g. not allow a decrease) and 2) first deploy to a test environment and verify it works before applying the changes to the production environment.
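A minimal declarative sketch (all names illustrative): the replica count lives in the file, so scaling is an edit, a commit, and a re-apply:

```yaml
# stateful-set-name.yaml -- change spec.replicas here, commit to Git,
# then run: kubectl apply -f stateful-set-name.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx
  replicas: 5          # the single source of truth for scale
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```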
I recommend the book Kubernetes: Up & Running, 2nd edition, which describes this procedure in Chapter 18 (a new chapter).
I am building a Kubernetes cluster following this tutorial, and I'm having trouble accessing the Kubernetes dashboard. I already created another question about it that you can see here, but while digging into my cluster, I started to think that the problem might be somewhere else, and that's why I'm creating a new question.
I start my master, by running the following commands:
> kubeadm reset
> kubeadm init --apiserver-advertise-address=[MASTER_IP] > file.txt
> tail -2 file.txt > join.sh # I keep this file for later
> kubectl apply -f https://git.io/weave-kube/
> kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-kb2zq 0/1 Pending 0 2m46s
coredns-fb8b8dccf-nnc5n 0/1 Pending 0 2m46s
etcd-kubemaster 1/1 Running 0 93s
kube-apiserver-kubemaster 1/1 Running 0 93s
kube-controller-manager-kubemaster 1/1 Running 0 113s
kube-proxy-lxhvs 1/1 Running 0 2m46s
kube-scheduler-kubemaster 1/1 Running 0 93s
Here we can see that I have two coredns pods stuck in Pending state forever, and when I run the command:
> kubectl -n kube-system describe pod coredns-fb8b8dccf-kb2zq
I can see in the Events part the following Warning :
FailedScheduling: 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
Since it is a Warning and not an Error, and since, as a Kubernetes newbie, taints do not mean much to me, I tried to connect a node to the master (using the previously saved command):
> cat join.sh
kubeadm join [MASTER_IP]:6443 --token [TOKEN] \
--discovery-token-ca-cert-hash sha256:[ANOTHER_TOKEN]
> ssh [USER]@[WORKER_IP] 'bash' < join.sh
This node has joined the cluster.
On the master, I check that the node is connected:
> kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster NotReady master 13m v1.14.1
kubeslave1 NotReady <none> 31s v1.14.1
And I check my pods :
> kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-kb2zq 0/1 Pending 0 14m
coredns-fb8b8dccf-nnc5n 0/1 Pending 0 14m
etcd-kubemaster 1/1 Running 0 13m
kube-apiserver-kubemaster 1/1 Running 0 13m
kube-controller-manager-kubemaster 1/1 Running 0 13m
kube-proxy-lxhvs 1/1 Running 0 14m
kube-proxy-xllx4 0/1 ContainerCreating 0 2m16s
kube-scheduler-kubemaster 1/1 Running 0 13m
We can see that another kube-proxy pod has been created and is stuck in ContainerCreating status.
And when I do a describe again:
kubectl -n kube-system describe pod kube-proxy-xllx4
I can see multiple identical Warnings in the Events part:
Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Get https://k8s.gcr.io/v1/_ping: dial tcp: lookup k8s.gcr.io on [::1]:53: read udp [::1]:43133->[::1]:53: read: connection refused
Here are my repositories :
docker image ls
REPOSITORY TAG
k8s.gcr.io/kube-proxy v1.14.1
k8s.gcr.io/kube-apiserver v1.14.1
k8s.gcr.io/kube-controller-manager v1.14.1
k8s.gcr.io/kube-scheduler v1.14.1
k8s.gcr.io/coredns 1.3.1
k8s.gcr.io/etcd 3.3.10
k8s.gcr.io/pause 3.1
And so, for the dashboard part, I tried to start it with the command
> kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
But the dashboard pod is stuck in Pending state.
kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-kb2zq 0/1 Pending 0 40m
coredns-fb8b8dccf-nnc5n 0/1 Pending 0 40m
etcd-kubemaster 1/1 Running 0 38m
kube-apiserver-kubemaster 1/1 Running 0 38m
kube-controller-manager-kubemaster 1/1 Running 0 39m
kube-proxy-lxhvs 1/1 Running 0 40m
kube-proxy-xllx4 0/1 ContainerCreating 0 27m
kube-scheduler-kubemaster 1/1 Running 0 38m
kubernetes-dashboard-5f7b999d65-qn8qn 1/1 Pending 0 8s
So, even though my problem originally was that I cannot access my dashboard, I guess that the real problem is deeper than that.
I know that I just put a lot of information here, but I am a k8s beginner and I am completely lost on this.
There is an issue I experienced with coredns pods stuck in Pending mode when setting up my own cluster, which I resolved by adding a pod network.
It looks like because there is no network addon installed, the nodes are tainted as not-ready. Installing the addon removes the taint and the pods become schedulable. In my case, adding flannel fixed the issue.
EDIT: There is a note about this in the official k8s documentation - Create cluster with kubeadm:
The network must be deployed before any applications. Also, CoreDNS
will not start up before a network is installed. kubeadm only
supports Container Network Interface (CNI) based networks (and does
not support kubenet).
Actually, it is the opposite of a deep or serious issue; this is a trivial one. Whenever you see a pod stuck in Pending state, it means the scheduler is having a hard time scheduling the pod, mostly because there are not enough resources on the node.
In your case, the node has a taint and your pod doesn't have the matching toleration. What you have to do is describe the node and get the taint:
kubectl describe node | grep -i taints
Note: you might have more than one taint, so you might want to do kubectl describe no NODE, since with grep you will only see one taint.
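Alternatively, a jsonpath query prints every node together with its full taints array, so nothing is hidden by grep:

```shell
# One line per node: node name followed by all of its taints (empty if none).
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.taints}{"\n"}{end}'
```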
Once you get the taint, it will look something like hello=world:NoSchedule, i.e. key=value:effect. You then have to add a tolerations section to your Deployment. This is an example Deployment so you can see how it should look:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
          name: http
      tolerations:
      - effect: NoExecute #NoSchedule, PreferNoSchedule
        key: node
        operator: Equal
        value: not-ready
        tolerationSeconds: 3600
As you can see, there is a tolerations section in the yaml. So if I had a node with the node=not-ready:NoExecute taint, no pod could be scheduled on it unless it had this toleration.
You can also remove the taint if you don't need it. To remove a taint, describe the node, get the key of the taint, and do:
kubectl taint node NODE key-
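For example, if the describe output printed Taints: node-role.kubernetes.io/master:NoSchedule, the key is node-role.kubernetes.io/master, and the removal (using the master node name from the question) would be:

```shell
# The trailing "-" after the key tells kubectl to remove that taint.
kubectl taint node kubemaster node-role.kubernetes.io/master-
```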
Hope it makes sense. Just add this section to your deployment, and it will work.
Set up the flannel network tool.
Running commands:
$ sysctl net.bridge.bridge-nf-call-iptables=1
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml