Can't modify ETCD manifest for Kubernetes static pod - kubernetes

I'd like to modify the etcd pod to listen on 0.0.0.0 (or the host machine IP) instead of 127.0.0.1.
I'm working on a migration from a single-master to a multi-master Kubernetes cluster, but I ran into an issue: after I modified /etc/kubernetes/manifests/etcd.yaml with the correct settings and restarted the kubelet and even the docker daemon, etcd is still listening on 127.0.0.1.
Inside the docker container I still see that etcd was started with --listen-client-urls=https://127.0.0.1:2379 instead of the host IP.
cat /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.22.9:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://192.168.22.9:2380
    - --initial-cluster=test-master-01=https://192.168.22.9:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://192.168.22.9:2379
    - --listen-peer-urls=https://192.168.22.9:2380
    - --name=test-master-01
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=https://[192.168.22.9]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
          --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
          get foo
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
status: {}
[root@test-master-01 centos]# kubectl -n kube-system get po etcd-test-master-01 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/config.hash: c3eef2d48a776483adc00311df8cb940
    kubernetes.io/config.mirror: c3eef2d48a776483adc00311df8cb940
    kubernetes.io/config.seen: 2019-05-24T13:50:06.335448715Z
    kubernetes.io/config.source: file
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: 2019-05-24T14:08:14Z
  labels:
    component: etcd
    tier: control-plane
  name: etcd-test-master-01
  namespace: kube-system
  resourceVersion: "6288"
  selfLink: /api/v1/namespaces/kube-system/pods/etcd-test-master-01
  uid: 5efadb1c-7e2d-11e9-adb7-fa163e267af4
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://127.0.0.1:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://127.0.0.1:2380
    - --initial-cluster=test-master-01=https://127.0.0.1:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379
    - --listen-peer-urls=https://127.0.0.1:2380
    - --name=test-master-01
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
          --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
          get foo

First, check your kubelet option --pod-manifest-path and put your corrected YAML in that path.
To make sure the etcd pod gets deleted, move the YAML file out of the pod-manifest-path and wait until the pod is gone (check with docker ps -a). Then put your corrected YAML file back into the pod-manifest-path.
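For reference, a rough shell sketch of that workflow; the paths below (/var/lib/kubelet/config.yaml, /etc/kubernetes/manifests) are kubeadm defaults and may differ on your setup:
# Find where the kubelet looks for static pod manifests: either the
# --pod-manifest-path flag or staticPodPath in the kubelet config file.
ps aux | grep kubelet | grep -o 'pod-manifest-path=[^ ]*'
grep staticPodPath /var/lib/kubelet/config.yaml

# Move the manifest out, wait for the etcd container to disappear, then move it back.
mv /etc/kubernetes/manifests/etcd.yaml /tmp/etcd.yaml
watch 'docker ps -a | grep etcd'        # wait until no etcd container is listed
mv /tmp/etcd.yaml /etc/kubernetes/manifests/etcd.yaml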

I reviewed my automation scripts step by step and found that I had made a backup of the etcd YAML in the same folder, with a .bak extension. It turns out the kubelet loads every file inside the manifests folder, regardless of its extension.
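In other words, the kubelet treats every file in the static pod directory as a manifest, so a quick check like the following (file names illustrative) is worth doing before restarting anything:
ls /etc/kubernetes/manifests/
# etcd.yaml  etcd.yaml.bak  kube-apiserver.yaml  ...   <- the .bak is loaded too
mkdir -p /root/manifest-backups
mv /etc/kubernetes/manifests/*.bak /root/manifest-backups/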

Related

deploying wazuh-manager and replace ossec.conf after pods running - kubernetes

I'm deploying wazuh-manager on my Kubernetes cluster and I need to disable some security check features in ossec.conf. I'm trying to replace the ossec.conf from the wazuh-manager image with my own config-map version, but if I create the volume mount at /var/ossec/etc/ossec.conf it deletes everything under /var/ossec/etc/ (when the wazuh-manager pod is deployed, it copies all the files the manager needs into that directory).
So I'm thinking of mounting the config at a new path, /wazuh/ossec.conf, and using a lifecycle postStart exec command ("cp /wazuh/ossec.conf > /var/ossec/etc/") to copy it into place, but I'm getting an error that it "cannot find /var/ossec/etc/".
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wazuh-manager
  labels:
    node-type: master
spec:
  replicas: 1
  selector:
    matchLabels:
      appComponent: wazuh-manager
      node-type: master
  serviceName: wazuh
  template:
    metadata:
      labels:
        appComponent: wazuh-manager
        node-type: master
      name: wazuh-manager
    spec:
      volumes:
        - name: ossec-conf
          configMap:
            name: ossec-config
      containers:
        - name: wazuh-manager
          image: wazuh-manager4.8
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "cp /wazuh/ossec.conf >/var/ossec/etc/ossec.conf"]
          resources:
          securityContext:
            capabilities:
              add: ["SYS_CHROOT"]
          volumeMounts:
            - name: ossec-conf
              mountPath: /wazuh/ossec.conf
              subPath: master.conf
              readOnly: true
          ports:
            - containerPort: 8855
              name: registration
  volumeClaimTemplates:
    - metadata:
        name: wazuh-disk
      spec:
        accessModes: ReadWriteOnce
        storageClassName: wazuh-csi-disk
        resources:
          requests:
            storage: 50
error:
$ kubectl get pods -n wazuh
wazuh-1670333556-0 0/1 PostStartHookError: command '/bin/sh -c cp /wazuh/ossec.conf > /var/ossec/etc/ossec.conf' exited with 1: /bin/sh: /var/ossec/etc/ossec.conf: No such file or directory...
Within the wazuh-kubernetes repository you have a file for each of the Wazuh manager cluster nodes:
wazuh/wazuh_managers/wazuh_conf/master.conf for the Wazuh Manager master node.
wazuh/wazuh_managers/wazuh_conf/worker.conf for the Wazuh Manager worker node.
With these files, ConfigMaps are generated in the kustomization.yml:
configMapGenerator:
  - name: indexer-conf
    files:
      - indexer_stack/wazuh-indexer/indexer_conf/opensearch.yml
      - indexer_stack/wazuh-indexer/indexer_conf/internal_users.yml
  - name: wazuh-conf
    files:
      - wazuh_managers/wazuh_conf/master.conf
      - wazuh_managers/wazuh_conf/worker.conf
  - name: dashboard-conf
    files:
      - indexer_stack/wazuh-dashboard/dashboard_conf/opensearch_dashboards.yml
Then, in the deployment manifest, they are mounted to persist the configurations in the ossec.conf file of each cluster node:
wazuh/wazuh_managers/wazuh-master-sts.yaml:
...
spec:
  volumes:
    - name: config
      configMap:
        name: wazuh-conf
...
      volumeMounts:
        - name: config
          mountPath: /wazuh-config-mount/etc/ossec.conf
          subPath: master.conf
...
Note that any configuration file you need to end up under /var/ossec/ must be mounted under /wazuh-config-mount/; the Wazuh manager image entrypoint then copies it into place when the container starts. For example, the configmap is mounted at /wazuh-config-mount/etc/ossec.conf and copied to /var/ossec/etc/ossec.conf at startup.
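Applied to the StatefulSet from the question, that means pointing the existing volumeMount at /wazuh-config-mount/etc/ossec.conf instead of /wazuh/ossec.conf and dropping the postStart hook. One way to verify that the entrypoint performed the copy; the pod name wazuh-manager-0 and the namespace wazuh are assumptions based on the question:
kubectl -n wazuh exec wazuh-manager-0 -- ls -l /wazuh-config-mount/etc/ossec.conf /var/ossec/etc/ossec.conf
kubectl -n wazuh exec wazuh-manager-0 -- diff /wazuh-config-mount/etc/ossec.conf /var/ossec/etc/ossec.conf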

pod events shows persistentvolumeclaim "flink-pv-claim-11" is being deleted in kubernetes but binding success

I am binding a persistentvolumeclaim to my Job's pod, and the pod events show:
persistentvolumeclaim "flink-pv-claim-11" is being deleted
but the persistent volume claim exists and is bound successfully.
The pod also has no log output. What should I do to fix this? This is the job yaml:
apiVersion: batch/v1
kind: Job
metadata:
  name: flink-jobmanager-1.11
spec:
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      restartPolicy: OnFailure
      containers:
        - name: jobmanager
          image: flink:1.11.0-scala_2.11
          env:
          args: ["standalone-job", "--job-classname", "com.job.ClassName", <optional arguments>, <job arguments>] # optional arguments: ["--job-id", "<job id>", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState"]
          ports:
            - containerPort: 6123
              name: rpc
            - containerPort: 6124
              name: blob-server
            - containerPort: 8081
              name: webui
          livenessProbe:
            tcpSocket:
              port: 6123
            initialDelaySeconds: 30
            periodSeconds: 60
          volumeMounts:
            - name: flink-config-volume
              mountPath: /opt/flink/conf
            - name: job-artifacts-volume
              mountPath: /opt/flink/usrlib
            - name: job-artifacts-volume
              mountPath: /opt/flink/data/job-artifacts
          securityContext:
            runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
      volumes:
        - name: flink-config-volume
          configMap:
            name: flink-1.11-config
            items:
              - key: flink-conf.yaml
                path: flink-conf.yaml
              - key: log4j-console.properties
                path: log4j-console.properties
        - name: job-artifacts-volume
          persistentVolumeClaim:
            claimName: flink-pv-claim-11
Make sure the Job tied to this PVC is deleted. If any other Job or Pod is running and using this PVC, you cannot delete the PVC until you delete that Job/Pod.
To see which resources are in use at the moment from the PVC try to run:
kubectl get pods --all-namespaces -o=json | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec.volumes[] | select(has("persistentVolumeClaim")).persistentVolumeClaim.claimName}'
You have two volumeMounts named job-artifacts-volume, which may be causing some confusion.
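The "is being deleted" event usually means the PVC already has a deletionTimestamp and is only being held back by the pvc-protection finalizer. A quick sketch to confirm that (namespace omitted; add -n <namespace> if needed):
kubectl get pvc flink-pv-claim-11 -o jsonpath='{.metadata.deletionTimestamp}{"\n"}{.metadata.finalizers}{"\n"}'
kubectl describe pvc flink-pv-claim-11 | grep -i "mounted by"
# Once nothing mounts it, the deletion completes; recreate the PVC (or point the
# Job at a fresh claim) afterwards.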

Where to execute kube-proxy command?

From this article, I can specify 'userspace' as my proxy-mode, but I don't understand what command I need to run for that, and at what stage. After creating the deployment or the service?
I am running a minikube cluster currently.
kube-proxy is a process that runs on each kubernetes node to manage network connections coming into and out of kubernetes.
You don't run the command as such, but your deployment method (usually kubeadm) configures the options for it to run.
As @Hang Du mentioned, in minikube you can modify its options by editing the kube-proxy configmap and changing mode to userspace.
kubectl -n kube-system edit configmap kube-proxy
Then delete the Pod.
kubectl -n kube-system get pod
kubectl -n kube-system delete pod kube-proxy-XXXXX
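For orientation, this is roughly what you are changing inside that configmap, assuming the kubeadm/minikube-style layout where a KubeProxyConfiguration is embedded as config.conf:
# data.config.conf inside the kube-proxy configmap (fragment):
#   kind: KubeProxyConfiguration
#   ...
#   mode: "userspace"    # an empty string means the platform default (iptables on Linux)

# Instead of looking up the exact pod name, you can also delete by label:
kubectl -n kube-system delete pod -l k8s-app=kube-proxy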
If you are using minikube, you can find a DaemonSet named kube-proxy like the following:
$ kubectl get ds -n kube-system kube-proxy -o yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  ...
  labels:
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  ...
spec:
  ...
  spec:
    containers:
    - command:
      - /usr/local/bin/kube-proxy
      - --config=/var/lib/kube-proxy/config.conf
      - --hostname-override=$(NODE_NAME)
      env:
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: spec.nodeName
      image: k8s.gcr.io/kube-proxy:v1.15.0
      imagePullPolicy: IfNotPresent
      name: kube-proxy
      ...
      volumeMounts:
      - mountPath: /var/lib/kube-proxy
        name: kube-proxy
      - mountPath: /run/xtables.lock
        name: xtables-lock
      - mountPath: /lib/modules
        name: lib-modules
        readOnly: true
    dnsPolicy: ClusterFirst
    ...
    volumes:
    - configMap:
        defaultMode: 420
        name: kube-proxy
      name: kube-proxy
    - hostPath:
        path: /run/xtables.lock
        type: FileOrCreate
      name: xtables-lock
    - hostPath:
        path: /lib/modules
        type: ""
      name: lib-modules
    ...
Look at .spec.template.spec.containers[].command: the container runs the kube-proxy command. You can add the flag --proxy-mode=userspace to the command array.
- command:
  - /usr/local/bin/kube-proxy
  - --config=/var/lib/kube-proxy/config.conf
  - --hostname-override=$(NODE_NAME)
  - --proxy-mode=userspace
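To confirm which proxier was actually picked after the pods restart, checking the kube-proxy logs is usually enough (the exact wording of the log line varies between versions):
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i proxier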

Istio direct Pod to Pod communication

I have a problem with communication to a Pod from a Pod deployed with Istio. I actually need it to make Hazelcast discovery work with Istio, but I'll try to generalize the issue here.
Let's take a sample hello-world service deployed on Kubernetes. The service replies to HTTP requests on port 8000.
$ kubectl create deployment nginx --image=crccheck/hello-world
The created Pod has an internal IP assigned:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
hello-deployment-84d876dfd-s6r5w 1/1 Running 0 8m 10.20.3.32 gke-rafal-test-istio-1-0-default-pool-91f437a3-cf5d <none>
In the job curl.yaml, we can use the Pod IP directly.
apiVersion: batch/v1
kind: Job
metadata:
  name: curl
spec:
  template:
    spec:
      containers:
      - name: curl
        image: byrnedo/alpine-curl
        command: ["curl", "10.20.3.32:8000"]
      restartPolicy: Never
  backoffLimit: 4
Running the job without Istio works fine.
$ kubectl apply -f curl.yaml
$ kubectl logs pod/curl-pptlm
...
Hello World
...
However, when I try to do the same with Istio, it does not work. The HTTP request gets blocked by Envoy.
$ kubectl apply -f <(istioctl kube-inject -f curl.yaml)
$ kubectl logs pod/curl-s2bj6 curl
...
curl: (7) Failed to connect to 10.20.3.32 port 8000: Connection refused
I've played with Service Entries, MESH_INTERNAL, and MESH_EXTERNAL, but with no success. How can I bypass Envoy and make a direct call to a Pod?
EDIT: The output of istioctl kube-inject -f curl.yaml.
$ istioctl kube-inject -f curl.yaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: curl
spec:
  backoffLimit: 4
  template:
    metadata:
      annotations:
        sidecar.istio.io/status: '{"version":"dbf2d95ff300e5043b4032ed912ac004974947cdd058b08bade744c15916ba6a","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
      creationTimestamp: null
    spec:
      containers:
      - command:
        - curl
        - 10.16.2.34:8000/
        image: byrnedo/alpine-curl
        name: curl
        resources: {}
      - args:
        - proxy
        - sidecar
        - --domain
        - $(POD_NAMESPACE).svc.cluster.local
        - --configPath
        - /etc/istio/proxy
        - --binaryPath
        - /usr/local/bin/envoy
        - --serviceCluster
        - curl.default
        - --drainDuration
        - 45s
        - --parentShutdownDuration
        - 1m0s
        - --discoveryAddress
        - istio-pilot.istio-system:15010
        - --zipkinAddress
        - zipkin.istio-system:9411
        - --connectTimeout
        - 10s
        - --proxyAdminPort
        - "15000"
        - --concurrency
        - "2"
        - --controlPlaneAuthPolicy
        - NONE
        - --statusPort
        - "15020"
        - --applicationPorts
        - ""
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: ISTIO_META_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: ISTIO_META_CONFIG_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        image: docker.io/istio/proxyv2:1.1.1
        imagePullPolicy: IfNotPresent
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /healthz/ready
            port: 15020
          initialDelaySeconds: 1
          periodSeconds: 2
        resources:
          limits:
            cpu: "2"
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsUser: 1337
        volumeMounts:
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/certs/
          name: istio-certs
          readOnly: true
      initContainers:
      - args:
        - -p
        - "15001"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - ""
        - -d
        - "15020"
        image: docker.io/istio/proxy_init:1.1.1
        imagePullPolicy: IfNotPresent
        name: istio-init
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
      restartPolicy: Never
      volumes:
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - name: istio-certs
        secret:
          optional: true
          secretName: istio.default
status: {}
---
When a pod with an Istio sidecar is started, the following things happen:
an init container changes the iptables rules so that all outgoing TCP traffic is routed to the sidecar container (istio-proxy) on port 15001,
the containers of the pod are started in parallel (curl and istio-proxy).
If your curl container executes before istio-proxy is listening on port 15001, you get the error.
I started the container with a sleep command instead, exec'd into it, and the curl worked.
$ kubectl apply -f <(istioctl kube-inject -f curl-pod.yaml)
$ k exec -it -n noistio curl -c curl bash
[root@curl /]# curl 172.16.249.198:8000
<xmp>
Hello World
## .
## ## ## ==
## ## ## ## ## ===
/""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~
\______ o _,/
\ \ _,'
`'--.._\..--''
</xmp>
[root@curl /]#
curl-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - name: curl
    image: centos
    command: ["sleep", "3600"]
Make sure that you have configured an ingress Gateway, and after doing that you need to configure a VirtualService. See the link below for a simple example.
https://istio.io/docs/tasks/traffic-management/ingress/#configuring-ingress-using-an-istio-gateway
Once you have deployed the gateway along with the virtual service, you should be able to curl your service from outside the cluster via an external IP.
But if you want to check traffic from INSIDE the cluster, you will need to use Istio's mirroring API to mirror the service (pod) from one pod to another, and THEN use your command (kubectl apply -f curl.yaml) to see the traffic.
See the link below for a mirroring example:
https://istio.io/docs/tasks/traffic-management/mirroring/
Hope this helps.
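For reference, roughly what the Gateway/VirtualService pair from the first link looks like; the names (hello-gateway, hello-vs) and the backing Service called hello on port 8000 are placeholders, since the question only creates a bare Deployment:
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-vs
spec:
  hosts:
  - "*"
  gateways:
  - hello-gateway
  http:
  - route:
    - destination:
        host: hello      # placeholder Service fronting the hello-world pods
        port:
          number: 8000
EOF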

How can I get more debug information out of my kubernetes docker containers

I'm really struggling to get Kubernetes working on CoreOS; my biggest problem is finding the right information to debug the following problem.
(I followed this tutorial: https://coreos.com/kubernetes/docs/latest/getting-started.html)
```
core@helena-coreos ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
423aa16a66e7 gcr.io/google_containers/pause:2.0 "/pause" 2 hours ago Up 2 hours k8s_POD.6059dfa2_kube-controller-manager-37.139.31.151_kube-system_f11896dcf9adf655df092f2a12a41673_ec25db0a
4b456d7cf17d quay.io/coreos/hyperkube:v1.2.4_coreos.1 "/hyperkube apiserver" 3 hours ago Up 3 hours k8s_kube-apiserver.33667886_kube-apiserver-37.139.31.151_kube-system_bfdfe85e7787a05e49ebfe95e7d4a401_abd7982f
52e25d838af3 gcr.io/google_containers/pause:2.0 "/pause" 3 hours ago Up 3 hours k8s_POD.6059dfa2_kube-apiserver-37.139.31.151_kube-system_bfdfe85e7787a05e49ebfe95e7d4a401_411b1a93
```
Apparently there is a problem with the kube-controller-manager. My two main sources of debugging info (as far as I know) are the journal and the docker logs.
The docker logs don't show anything at all.
```
core@helena-coreos ~ $ docker logs 423aa16a66e7
core@helena-coreos ~ $
```
I also tried to log in to this container with docker exec, but that didn't work either.
So my hope was based on the journal.
```
Jul 15 13:16:59 helena-coreos kubelet-wrapper[2318]: I0715 13:16:59.143892 2318 manager.go:2050] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-37.139.31.151_kube-system(f11896dcf9adf655df092f2a12a41673)
Jul 15 13:16:59 helena-coreos kubelet-wrapper[2318]: E0715 13:16:59.143992 2318 pod_workers.go:138] Error syncing pod f11896dcf9adf655df092f2a12a41673, skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-37.139.31.151_kube-system(f11896dcf9adf655df092f2a12a41673)"
```
So the kube-controller-manager failed to start, but I don't know why. How can I resolve this issue?
I dumped my configuration below:
```
core@helena-coreos ~ $ cat /etc/flannel/options.env
FLANNELD_IFACE=37.139.31.151
FLANNELD_ETCD_ENDPOINTS=https://37.139.31.151:2379
FLANNELD_ETCD_CAFILE=/etc/ssl/etcd/ca.pem
FLANNELD_ETCD_CERTFILE=/etc/ssl/etcd/coreos.pem
FLANNELD_ETCD_KEYFILE=/etc/ssl/etcd/coreos-key.pem
```
```
core@helena-coreos ~ $ cat /etc/systemd/system/flanneld.service.d/40-ExecStartPre-symlink.conf
[Service]
ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env
```
```
core@helena-coreos ~ $ cat /etc/systemd/system/docker.service.d/35-flannel.conf
[Unit]
Requires=flanneld.service
After=flanneld.service
```
```
core@helena-coreos ~ $ cat /etc/systemd/system/kubelet.service
[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
Environment=KUBELET_VERSION=v1.2.4_coreos.cni.1
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --api-servers=http://127.0.0.1:8080 \
  --network-plugin-dir=/etc/kubernetes/cni/net.d \
  --network-plugin= \
  --register-schedulable=false \
  --allow-privileged=true \
  --config=/etc/kubernetes/manifests \
  --hostname-override=37.139.31.151 \
  --cluster-dns=10.3.0.10 \
  --cluster-domain=cluster.local
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
```
```
core@helena-coreos ~ $ cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
    -
      command:
        - /hyperkube
        - apiserver
        - "--bind-address=0.0.0.0"
        - "--etcd-servers=https://37.139.31.151:2379"
        - "--etcd-cafile=/etc/kubernetes/ssl/ca.pem"
        - "--etcd-certfile=/etc/kubernetes/ssl/worker.pem"
        - "--etcd-keyfile=/etc/kubernetes/ssl/worker-key.pem"
        - "--allow-privileged=true"
        - "--service-cluster-ip-range=10.3.0.0/24"
        - "--secure-port=443"
        - "--advertise-address=37.139.31.151"
        - "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota"
        - "--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem"
        - "--tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem"
        - "--client-ca-file=/etc/kubernetes/ssl/ca.pem"
        - "--service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem"
        - "--runtime-config=extensions/v1beta1=true,extensions/v1beta1/thirdpartyresources=true"
      image: "quay.io/coreos/hyperkube:v1.2.4_coreos.1"
      name: kube-apiserver
      ports:
        -
          containerPort: 443
          hostPort: 443
          name: https
        -
          containerPort: 8080
          hostPort: 8080
          name: local
      volumeMounts:
        -
          mountPath: /etc/kubernetes/ssl
          name: ssl-certs-kubernetes
          readOnly: true
        -
          mountPath: /etc/ssl/certs
          name: ssl-certs-host
          readOnly: true
  hostNetwork: true
  volumes:
    -
      hostPath:
        path: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
    -
      hostPath:
        path: /usr/share/ca-certificates
      name: ssl-certs-host
```
```
core@helena-coreos ~ $ cat /etc/kubernetes/manifests/kube-proxy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: "quay.io/coreos/hyperkube:v1.2.4_coreos.1"
    command:
    - /hyperkube
    - proxy
    - "--master=http://127.0.0.1:8080"
    - "--proxy-mode=iptables"
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
```
```
core@helena-coreos ~ $ cat /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-controller-manager
    image: "quay.io/coreos/hyperkube:v1.2.4_coreos.1"
    command:
    - /hyperkube
    - controller-manager
    - "--master=http://127.0.0.1:8080"
    - "--leader-elect=true"
    - "--service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem"
    - "--root-ca-file=/etc/kubernetes/ssl/ca.pem"
    livenessProbe:
      httpGet:
        host: "127.0.0.1"
        path: /healthz
        port: 10252
      initialDelaySeconds: 15
      timeoutSeconds: 1
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
```
```
core@helena-coreos ~ $ cat /etc/kubernetes/manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: "quay.io/coreos/hyperkube:v1.2.4_coreos.1"
    command:
    - /hyperkube
    - scheduler
    - "--master=http://127.0.0.1:8080"
    - "--leader-elect=true"
    livenessProbe:
      httpGet:
        host: "127.0.0.1"
        path: /healthz
        port: 10251
      initialDelaySeconds: 15
      timeoutSeconds: 1
```
I didn't set up Calico.
I checked the network config in etcd:
```
core@helena-coreos ~ $ etcdctl get /coreos.com/network/config
{"Network":"10.2.0.0/16", "Backend":{"Type":"vxlan"}}
```
I also checked that the API is working, which it is:
```
core@helena-coreos ~ $ curl http://127.0.0.1:8080/version
{
  "major": "1",
  "minor": "2",
  "gitVersion": "v1.2.4+coreos.1",
  "gitCommit": "7f80f816ee1a23c26647aee8aecd32f0b21df754",
  "gitTreeState": "clean"
}
```
OK, thanks to @Sasha Kurakin I'm one step further :)
```
core@helena-coreos ~ $ ./kubectl describe pods kube-controller-manager-37.139.31.151 --namespace="kube-system"
Name:           kube-controller-manager-37.139.31.151
Namespace:      kube-system
Node:           37.139.31.151/37.139.31.151
Start Time:     Fri, 15 Jul 2016 09:52:19 +0000
Labels:         <none>
Status:         Running
IP:             37.139.31.151
Controllers:    <none>
Containers:
  kube-controller-manager:
    Container ID:       docker://6fee488ee838f60157b071113e43182c97b4217018933453732290a4f131767d
    Image:              quay.io/coreos/hyperkube:v1.2.4_coreos.1
    Image ID:           docker://sha256:2cac344d3116165bd808b965faae6cd9d46e840b9d70b40d8e679235aa9a6507
    Port:
    Command:
      /hyperkube
      controller-manager
      --master=http://127.0.0.1:8080
      --leader-elect=true
      --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
      --root-ca-file=/etc/kubernetes/ssl/ca.pem
    QoS Tier:
      memory:           BestEffort
      cpu:              BestEffort
    State:              Waiting
      Reason:           CrashLoopBackOff
    Last State:         Terminated
      Reason:           Error
      Exit Code:        255
      Started:          Fri, 15 Jul 2016 14:30:40 +0000
      Finished:         Fri, 15 Jul 2016 14:30:58 +0000
    Ready:              False
    Restart Count:      12
    Liveness:           http-get http://127.0.0.1:10252/healthz delay=15s timeout=1s period=10s #success=1 #failure=3
    Environment Variables:
Conditions:
  Type          Status
  Ready         False
Volumes:
  ssl-certs-kubernetes:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  ssl-certs-host:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
No events.
```
Try running kubectl describe pods pod_name --namespace=pods_namespace to get more information.
Doc
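For a container stuck in CrashLoopBackOff, the logs of the previous (crashed) attempt are usually the next thing to look at, for example:
```
./kubectl logs kube-controller-manager-37.139.31.151 --namespace=kube-system --previous
# or straight from docker on the node, using the container ID shown by describe / docker ps -a:
docker logs 6fee488ee838
```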