My goal is to override the default kubelet configuration in the running cluster from
"imageGCHighThresholdPercent": 85,
"imageGCLowThresholdPercent": 80,
to
"imageGCHighThresholdPercent": 60,
"imageGCLowThresholdPercent": 40,
One possible option is to apply a patch to each node.
I'm using the following command to get the kubelet config via kubectl proxy:
curl -sSL "http://localhost:8001/api/v1/nodes/ip-172-31-20-135.eu-west-1.compute.internal/proxy/configz" | python3 -m json.tool
The output is
{
"kubeletconfig": {
....
"imageGCHighThresholdPercent": 85,
"imageGCLowThresholdPercent": 80,
.....
}
}
Here is the command I'm using to update these two values:
kubectl patch node ip-172-31-20-135.eu-west-1.compute.internal -p '{"kubeletconfig":{"imageGCHighThresholdPercent":60,"imageGCLowThresholdPercent":40}}'
Unfortunately, kubectl returns
node/ip-172-31-20-135.eu-west-1.compute.internal patched (no change)
As a result, the change has no effect.
Any thoughts on what I'm doing wrong?
Thanks
Patching the node object is not working because those configuration values are not part of the node object.
The way to achieve this is to update the kubelet config file on the Kubernetes nodes and restart the kubelet process. systemctl status kubelet should tell you whether kubelet was started with a config file and where that file is located.
root@kind-control-plane:/var/lib/kubelet# systemctl status kubelet
kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/kind/systemd/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Tue 2020-04-14 08:43:14 UTC; 2 days ago
Docs: http://kubernetes.io/docs/
Main PID: 639 (kubelet)
Tasks: 20 (limit: 2346)
Memory: 59.6M
CGroup: /docker/f01f57e1ef7aa7a1a8197e0e79be15415c580da33a7d048512e22418a88e0317/system.slice/kubelet.service
└─639 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock --fail-swap-on=false --node-ip=172.17.0.2 --fail-swap-on=false
As can be seen above, in a cluster set up by kubeadm, kubelet was started with a config file located at /var/lib/kubelet/config.yaml.
Edit the config file to add
imageGCHighThresholdPercent: 60
imageGCLowThresholdPercent: 40
Restart kubelet using systemctl restart kubelet.service
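To confirm the new values are active after the restart, you can re-run the configz query from the question (the node name below is just the example used above) and check the two fields:
curl -sSL "http://localhost:8001/api/v1/nodes/ip-172-31-20-135.eu-west-1.compute.internal/proxy/configz" | python3 -m json.tool | grep -i imagegc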
In case the cluster was not started with a kubelet config file, create a new config file and pass it to kubelet at startup via --config.
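A minimal sketch of such a config file, assuming the path /var/lib/kubelet/config.yaml used above and the thresholds from the question:
# /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 60
imageGCLowThresholdPercent: 40
Then start kubelet with --config=/var/lib/kubelet/config.yaml and restart the service as above.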
Since you are using EKS, you have to configure this in the worker node's Amazon Machine Image (AMI). An AMI provides the information required to launch an instance; you must specify an AMI when you launch an instance, and you can launch multiple instances from a single AMI when you need multiple instances with the same configuration.
First create the folder /var/lib/kubelet and put a kubeconfig template file into it, with content as below:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: CERTIFICATE_AUTHORITY_FILE
    server: MASTER_ENDPOINT
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: kubelet
current-context: kubelet
users:
- name: kubelet
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: /usr/bin/heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "CLUSTER_NAME"
Then create the template file /etc/systemd/system/kubelet.service, with content as below:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \
--address=0.0.0.0 \
--authentication-token-webhook \
--authorization-mode=Webhook \
--allow-privileged=true \
--cloud-provider=aws \
--cluster-dns=DNS_CLUSTER_IP \
--cluster-domain=cluster.local \
--cni-bin-dir=/opt/cni/bin \
--cni-conf-dir=/etc/cni/net.d \
--container-runtime=docker \
--max-pods=MAX_PODS \
--node-ip=INTERNAL_IP \
--network-plugin=cni \
--pod-infra-container-image=602401143452.dkr.ecr.REGION.amazonaws.com/eks/pause-amd64:3.1 \
--cgroup-driver=cgroupfs \
--register-node=true \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--feature-gates=RotateKubeletServerCertificate=true \
--anonymous-auth=false \
--client-ca-file=CLIENT_CA_FILE \
--image-gc-high-threshold=60 \
--image-gc-low-threshold=40
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
You have to add the flags --image-gc-high-threshold and --image-gc-low-threshold and specify proper values.
--image-gc-high-threshold int32 The percent of disk usage after which image garbage collection is always run. (default 85)
--image-gc-low-threshold int32 The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. (default 80)
Please take a look: eks-worker-node-ami.
I want to deploy a pod using a Docker image which has been pushed to a private registry.
So far, I've used the following command to install the registry and push the image:
# Build the DockerImage file
DOCKER_IMAGE="truc/tf-http-server:0.1"
cd docker
docker build -t $DOCKER_IMAGE .
cd ..
# Install Registry V2
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# Push image
docker tag $DOCKER_IMAGE localhost:5000/$DOCKER_IMAGE
docker push localhost:5000/$DOCKER_IMAGE
# Add to known repository
sudo bash -c 'cat << EOF > /etc/docker/daemon.json
{
"insecure-registries" : [ "192.168.1.37:5000" ]
}
EOF'
sudo systemctl daemon-reload
sudo systemctl restart docker
Pulling the image works directly from Docker:
$ sudo docker pull 192.168.1.37:5000/truc/tf-http-server:0.1
0.1: Pulling from truc/tf-http-server
Digest: sha256:b09c10375f1e90346f9b0c4bfb2bdfc7df919a4c89aaebfb433f2d845b37a960
Status: Downloaded newer image for 192.168.1.37:5000/truc/tf-http-server:0.1
192.168.1.37:5000/truc/tf-http-server:0.1
When I want to deploy the image from Kubernetes, I got the following error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 29s default-scheduler Successfully assigned default/tf-http-server-nvl9v to worker01
Normal Pulling 16s (x2 over 29s) kubelet Pulling image "192.168.1.37:5000/truc/tf-http-server:0.1"
Warning Failed 16s (x2 over 29s) kubelet Failed to pull image "192.168.1.37:5000/truc/tf-http-server:0.1": rpc error: code = Unknown desc = failed to pull and unpack image "192.168.1.37:5000/truc/tf-http-server:0.1": failed to resolve reference "192.168.1.37:5000/truc/tf-http-server:0.1": failed to do request: Head "https://192.168.1.37:5000/v2/truc/tf-http-server/manifests/0.1": http: server gave HTTP response to HTTPS client
Warning Failed 16s (x2 over 29s) kubelet Error: ErrImagePull
Normal BackOff 3s (x2 over 28s) kubelet Back-off pulling image "192.168.1.37:5000/truc/tf-http-server:0.1"
Warning Failed 3s (x2 over 28s) kubelet Error: ImagePullBackOff
It seems as if access to the repository is forbidden. Is there a way to make it reachable from Kubernetes?
EDIT: To install the Docker registry, run the following commands and follow the accepted answer.
mkdir registry && cd registry && mkdir certs && cd certs
openssl genrsa 1024 > domain.key
chmod 400 domain.key
openssl req -new -x509 -nodes -sha1 -days 365 -key domain.key -out domain.crt -subj "/C=FR/ST=France/L=Lannion/O=TGI/CN=OrangeFactoryBox"
cd .. && mkdir auth
sudo apt-get install apache2-utils -y
htpasswd -Bbn username password > auth/htpasswd
cd ..
docker run -d \
--restart=always \
--name registry \
-v `pwd`/auth:/auth \
-v `pwd`/certs:/certs \
-e REGISTRY_AUTH=htpasswd \
-e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-e REGISTRY_HTTP_ADDR=0.0.0.0:5000 \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
-p 5000:5000 \
registry:2
sudo docker login -u username -p password localhost:5000
Assumption: the Docker server where you tested it and the Kubernetes nodes are on the same private subnet 192.168.1.0/24.
http: server gave HTTP response to HTTPS client
So, apparently your private Docker registry is served over HTTP, not HTTPS. Kubernetes expects the registry to use a valid SSL certificate. On each node in your Kubernetes cluster, you will need to explicitly tell Docker to treat this registry as an insecure registry, and after this change you will have to restart the Docker service as well.
Kubernetes: Failed to pull image. Server gave HTTP response to HTTPS client.
{ "insecure-registries":["192.168.1.37:5000"] }
to the daemon.json file at /etc/docker.
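After editing daemon.json on each node, reload and restart Docker (the same commands used in the question):
sudo systemctl daemon-reload
sudo systemctl restart docker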
You will also need to define the imagePullSecrets in your namespace and use it in your deployment/pod spec
First create the secret from your <path/to/.docker/config.json> using:
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
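Optionally, verify the secret was created as expected (just a sanity check, not part of the required steps):
kubectl get secret regcred --output=yaml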
Then refer to this secret in your pod yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred
Reference: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
Folks,
When trying to increase a GKE cluster from 1 to 3 nodes running in separate zones (us-central1-a, b, c), the following seems apparent:
Pods scheduled on the new nodes cannot access resources on the internet, i.e. they are not able to connect to the Stripe APIs, etc. (potentially kube-dns related; I have not tested traffic attempting to leave without a DNS lookup).
Similarly, I am not able to route between pods in K8s as expected, i.e. it seems cross-AZ calls could be failing. When testing with OpenVPN, I am unable to connect to pods scheduled on the new nodes.
A separate issue I noticed is that the metrics server seems wonky: kubectl top nodes shows unknown for the new nodes.
At the time of writing, master k8s version 1.15.11-gke.9
The settings I am paying attention to:
VPC-native (alias IP) - disabled
Intranode visibility - disabled
gcloud container clusters describe cluster-1 --zone us-central1-a
clusterIpv4Cidr: 10.8.0.0/14
createTime: '2017-10-14T23:44:43+00:00'
currentMasterVersion: 1.15.11-gke.9
currentNodeCount: 1
currentNodeVersion: 1.15.11-gke.9
endpoint: 35.192.211.67
initialClusterVersion: 1.7.8
instanceGroupUrls:
- https://www.googleapis.com/compute/v1/projects/skilful-frame-180217/zones/us-central1-a/instanceGroupManagers/gke-cluster-1-default-pool-ff24932a-grp
ipAllocationPolicy: {}
labelFingerprint: a9dc16a7
legacyAbac:
enabled: true
location: us-central1-a
locations:
- us-central1-a
loggingService: none
....
masterAuthorizedNetworksConfig: {}
monitoringService: none
name: cluster-1
network: default
networkConfig:
network: .../global/networks/default
subnetwork: .../regions/us-central1/subnetworks/default
networkPolicy:
provider: CALICO
nodeConfig:
diskSizeGb: 100
diskType: pd-standard
imageType: COS
machineType: n1-standard-2
...
nodeIpv4CidrSize: 24
nodePools:
- autoscaling: {}
config:
diskSizeGb: 100
diskType: pd-standard
imageType: COS
machineType: n1-standard-2
...
initialNodeCount: 1
locations:
- us-central1-a
management:
autoRepair: true
autoUpgrade: true
name: default-pool
podIpv4CidrSize: 24
status: RUNNING
version: 1.15.11-gke.9
servicesIpv4Cidr: 10.11.240.0/20
status: RUNNING
subnetwork: default
zone: us-central1-a
My next troubleshooting step is creating a new pool and migrating to it. Maybe the answer is staring me right in the face... could it be that nodeIpv4CidrSize is a /24?
Thanks!
In your question, the description of your cluster has the following network policy:
name: cluster-1
network: default
networkConfig:
network: .../global/networks/default
subnetwork: .../regions/us-central1/subnetworks/default
networkPolicy:
provider: CALICO
I deployed a cluster as similar as I could:
gcloud beta container --project "PROJECT_NAME" clusters create "cluster-1" \
--zone "us-central1-a" \
--no-enable-basic-auth \
--cluster-version "1.15.11-gke.9" \
--machine-type "n1-standard-1" \
--image-type "COS" \
--disk-type "pd-standard" \
--disk-size "100" \
--metadata disable-legacy-endpoints=true \
--scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
--num-nodes "1" \
--no-enable-ip-alias \
--network "projects/owilliam/global/networks/default" \
--subnetwork "projects/owilliam/regions/us-central1/subnetworks/default" \
--enable-network-policy \
--no-enable-master-authorized-networks \
--addons HorizontalPodAutoscaling,HttpLoadBalancing \
--enable-autoupgrade \
--enable-autorepair
After that I got the same configuration as yours; I'll point out two parts:
addonsConfig:
networkPolicyConfig: {}
...
name: cluster-1
network: default
networkConfig:
network: projects/owilliam/global/networks/default
subnetwork: projects/owilliam/regions/us-central1/subnetworks/default
networkPolicy:
enabled: true
provider: CALICO
...
In the comments you mention "in the UI, it says network policy is disabled...is there a command to drop calico?". Then I gave you the command, for which you got the error stating that Network Policy Addon is not Enabled.
Which is weird, because it's applied but not enabled. I disabled it on my cluster, and look:
addonsConfig:
networkPolicyConfig:
disabled: true
...
name: cluster-1
network: default
networkConfig:
network: projects/owilliam/global/networks/default
subnetwork: projects/owilliam/regions/us-central1/subnetworks/default
nodeConfig:
...
networkPolicyConfig went from {} to disabled: true, and the networkPolicy section above nodeConfig is now gone. So, I suggest you enable and disable it again to see if it updates the proper resources and fixes your network policy issue. Here is what we will do:
If your cluster is not in production, I'd suggest resizing it back to 1 node, making the changes, and then scaling up again; the update will be quicker. If it is in production, leave it as it is, but the update might take longer depending on your pod disruption budget. (default-pool is the name of my cluster's node pool.) I'll resize it in my example:
$ gcloud container clusters resize cluster-1 --node-pool default-pool --num-nodes 1
Do you want to continue (Y/n)? y
Resizing cluster-1...done.
Then enable the network policy addon itself (this does not activate it, only makes it available):
$ gcloud container clusters update cluster-1 --update-addons=NetworkPolicy=ENABLED
Updating cluster-1...done.
and we enable (activate) the network policy:
$ gcloud container clusters update cluster-1 --enable-network-policy
Do you want to continue (Y/n)? y
Updating cluster-1...done.
Now let's undo it:
$ gcloud container clusters update cluster-1 --no-enable-network-policy
Do you want to continue (Y/n)? y
Updating cluster-1...done.
After disabling it, wait until the pool is ready and run the last command:
$ gcloud container clusters update cluster-1 --update-addons=NetworkPolicy=DISABLED
Updating cluster-1...done.
Scale it back to 3 if you had downscaled:
$ gcloud container clusters resize cluster-1 --node-pool default-pool --num-nodes 3
Do you want to continue (Y/n)? y
Resizing cluster-1...done.
Finally, check the cluster description again to see if it matches the right configuration, and test communication between the pods.
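For example, something like the following (a sketch; the test pod name and the target POD_IP:PORT are placeholders you would substitute):
gcloud container clusters describe cluster-1 --zone us-central1-a
kubectl run net-test --rm -it --restart=Never --image=busybox -- wget -qO- POD_IP:PORT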
Here is the reference for this configuration:
Creating a Cluster Network Policy
If you still got the issue after that, update your question with the latest cluster description and we will dig further.
I want to set up an etcd cluster running on multiple nodes. I have 2 Ubuntu 18.04 machines running on Hyper-V.
I followed this guide on the official kubernetes site:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/
Therefore, I changed the scripts accordingly and executed them on HOST0 and HOST1:
export HOST0=192.168.101.90
export HOST1=192.168.101.91
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/
ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=("infra0" "infra1")
for i in "${!ETCDHOSTS[#]}"; do
HOST=${ETCDHOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta2"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "${HOST}"
        peerCertSANs:
        - "${HOST}"
        extraArgs:
            initial-cluster: ${NAMES[0]}=https://${ETCDHOSTS[0]}:2380,${NAMES[1]}=https://${ETCDHOSTS[1]}:2380
            initial-cluster-state: new
            name: ${NAME}
            listen-peer-urls: https://${HOST}:2380
            listen-client-urls: https://${HOST}:2379
            advertise-client-urls: https://${HOST}:2379
            initial-advertise-peer-urls: https://${HOST}:2380
EOF
done
After that, I executed this command on HOST0
kubeadm init phase certs etcd-ca
I created all the necessary certificates on HOST0:
# cleanup non-reusable certificates
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# No need to move the certs because they are for HOST0
# clean up certs that should not be copied off this host
find /tmp/${HOST1} -name ca.key -type f -delete
After that, I copied the files to the second etcd node (HOST1). Before that, I created a user mbesystem with root access:
USER=mbesystem
HOST=${HOST1}
scp -r /tmp/${HOST}/* ${USER}@${HOST}:
ssh ${USER}@${HOST}
USER@HOST $ sudo -Es
root@HOST $ chown -R root:root pki
root@HOST $ mv pki /etc/kubernetes/
I checked that all the files were there on HOST0 and HOST1.
On HOST0 I started the etcd cluster using:
kubeadm init phase etcd local --config=/tmp/192.168.101.90/kubeadmcfg.yaml
On HOST1 I started it using:
kubeadm init phase etcd local --config=/home/mbesystem/kubeadmcfg.yaml
After I executed:
docker run --rm -it \
--net host \
-v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd:3.4.3-0 etcdctl \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://192.168.101.90:2379 endpoint health --cluster
I discovered my cluster is not healthy; I received a connection refused error.
I can't figure out what went wrong. Any help will be appreciated.
I've looked into it, reproduced what was in the link that you provided (Kubernetes.io: Setup ha etcd with kubeadm), and managed to make it work.
Here are some explanations, troubleshooting steps, tips, etc.
First of all, etcd should be configured with an odd number of nodes. What I mean by that is that it should be created as a 3 or 5 node cluster.
Why an odd number of cluster members?
An etcd cluster needs a majority of nodes, a quorum, to agree on updates to the cluster state. For a cluster with n members, quorum is (n/2)+1. For any odd-sized cluster, adding one node will always increase the number of nodes necessary for quorum. Although adding a node to an odd-sized cluster appears better since there are more machines, the fault tolerance is worse since exactly the same number of nodes may fail without losing quorum but there are more nodes that can fail. If the cluster is in a state where it can't tolerate any more failures, adding a node before removing nodes is dangerous because if the new node fails to register with the cluster (e.g., the address is misconfigured), quorum will be permanently lost.
-- Github.com: etcd documentation
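To make that concrete for your setup: with n = 2 members, quorum is (2/2)+1 = 2, so both nodes must be up and the cluster tolerates zero failures; with n = 3, quorum is (3/2)+1 = 2, so one node can fail without losing the cluster. A 2-node etcd cluster therefore has no fault tolerance at all.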
Additionally here are some troubleshooting steps:
Check if Docker is running. You can check it by running this command (on a systemd-based OS):
$ systemctl show --property ActiveState docker
Check if etcd container is running properly with:
$ sudo docker ps
Check the logs of the etcd container, if it's running, with:
$ sudo docker logs ID_OF_CONTAINER
How I've managed to make it work:
Assuming 2 Ubuntu 18.04 servers with IP addresses of:
10.156.0.15 and name: etcd-1
10.156.0.16 and name: etcd-2
Additionally:
SSH keys configured for root access
DNS resolution working for both of the machines ($ ping etcd-1)
Steps:
Pre-configuration before the official guide.
I did all of the configuration below using the root account.
Configure the kubelet to be a service manager for etcd.
Create configuration files for kubeadm.
Generate the certificate authority.
Create certificates for each member
Copy certificates and kubeadm configs.
Create the static pod manifests.
Check the cluster health.
Pre-configuration before the official guide
Pre-configuration of these machines was done following this StackOverflow post with Ansible playbooks:
Stackoverflow.com: 3 kubernetes clusters 1 base on local machine
You can also follow official documentation: Kubernetes.io: Install kubeadm
Configure the kubelet to be a service manager for etcd.
Run the commands below on etcd-1 and etcd-2 with the root account.
cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
# Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs".
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd
Restart=always
EOF
$ systemctl daemon-reload
$ systemctl restart kubelet
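Optionally, confirm that the drop-in was picked up and kubelet restarted cleanly (just a sanity check, not part of the original guide):
$ systemctl status kubelet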
Create configuration files for kubeadm.
Create your configuration file on your etcd-1 node.
Here is a modified script that will create kubeadmcfg.yaml for only 2 nodes:
export HOST0=10.156.0.15
export HOST1=10.156.0.16
# Create temp directories to store files that will end up on other hosts.
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/
ETCDHOSTS=(${HOST0} ${HOST1})
NAMES=("etcd-1" "etcd-2")
for i in "${!ETCDHOSTS[#]}"; do
HOST=${ETCDHOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta2"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "${HOST}"
        peerCertSANs:
        - "${HOST}"
        extraArgs:
            initial-cluster: ${NAMES[0]}=https://${ETCDHOSTS[0]}:2380,${NAMES[1]}=https://${ETCDHOSTS[1]}:2380
            initial-cluster-state: new
            name: ${NAME}
            listen-peer-urls: https://${HOST}:2380
            listen-client-urls: https://${HOST}:2379
            advertise-client-urls: https://${HOST}:2379
            initial-advertise-peer-urls: https://${HOST}:2380
EOF
done
Take a special look at:
the export HOSTX lines at the top of the script: paste the IP addresses of your machines there.
NAMES=("etcd-1" "etcd-2"): paste the names (hostnames) of your machines there.
Run this script from the root account and check that it created files in the /tmp/IP_ADDRESS directories.
Generate the certificate authority
Run below command from root account on your etcd-1 node:
$ kubeadm init phase certs etcd-ca
Create certificates for each member
Below is the part of the script which is responsible for creating certificates for each member of the etcd cluster. Please modify the HOST0 and HOST1 variables.
#!/bin/bash
HOST0=10.156.0.15
HOST1=10.156.0.16
kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# No need to move the certs because they are for HOST0
Run the above script from the root account and check that there is a pki directory inside /tmp/10.156.0.16/.
There shouldn't be any pki directory inside /tmp/10.156.0.15/ as it's already in place.
Copy certificates and kubeadm configs.
Copy your kubeadmcfg.yaml of etcd-1 from /tmp/10.156.0.15 to root directory with:
$ mv /tmp/10.156.0.15/kubeadmcfg.yaml /root/
Copy the content of /tmp/10.156.0.16 from your etcd-1 to your etcd-2 node to /root/ directory:
$ scp -r /tmp/10.156.0.16/* root@10.156.0.16:
After that, check that the files were copied correctly and have the correct permissions, then copy the pki folder to /etc/kubernetes/ with this command on etcd-2:
$ mv /root/pki /etc/kubernetes/
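If you want to double-check ownership after the move (a quick sketch, not part of the original guide; the chown mirrors what was done in the question):
$ ls -la /etc/kubernetes/pki
$ chown -R root:root /etc/kubernetes/pki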
Create the static pod manifests.
Run below command on etcd-1 and etcd-2:
$ kubeadm init phase etcd local --config=/root/kubeadmcfg.yaml
All should be running now.
Check the cluster health.
Run below command to check cluster health on etcd-1.
docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd:3.4.3-0 etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints https://10.156.0.15:2379 endpoint health --cluster
Modify:
--endpoints https://10.156.0.15:2379 with the correct IP address of etcd-1.
It should give you a message like this:
https://10.156.0.15:2379 is healthy: successfully committed proposal: took = 26.308693ms
https://10.156.0.16:2379 is healthy: successfully committed proposal: took = 26.614373ms
The message above shows that etcd is working correctly, but please be aware of the caveat about running an even number of nodes mentioned earlier.
Please let me know if you have any questions about that.
Stack
Environment: Azure
Type of install: Custom
Base OS: Centos 7.3
Docker: 1.12.5
The first thing I will say is that I have this same install working in AWS with the same configuration files for apiserver, manager, scheduler, kubelet, and kube-proxy.
Here is the kubelet config:
/usr/bin/kubelet \
--require-kubeconfig \
--allow-privileged=true \
--cluster-dns=10.32.0.10 \
--container-runtime=docker \
--docker=unix:///var/run/docker.sock \
--network-plugin=kubenet \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--serialize-image-pulls=true \
--cgroup-root=/ \
--system-container=/system \
--node-status-update-frequency=4s \
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
--v=2
Kube-proxy config:
/usr/bin/kube-proxy \
--master=https://10.240.0.6:6443 \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--proxy-mode=iptables \
--v=2
Behavior:
Log in to any of the pods on any node:
nslookup kubernetes 10.32.0.10
Server: 10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'kubernetes': Try again
What does work is:
nslookup kubernetes.default.svc.cluster.local. 10.32.0.10
Server: 10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default.svc.cluster.local.
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
So I figured out that on Azure, the resolv.conf looked like this:
; generated by /usr/sbin/dhclient-script
search ssnci0siiuyebf1tqq5j1a1cyd.bx.internal.cloudapp.net
nameserver 10.32.0.10
options ndots:5
If I added the search domains default.svc.cluster.local svc.cluster.local cluster.local, everything started working, and I understand why.
However, this is problematic because for every namespace I create, I would need to manage the resolv.conf.
This does not happen when I deploy in Amazon so I am kind of stumped on why it is happening in Azure.
Kubelet has a command-line flag, --cluster-domain, which it looks like you're missing. See the docs.
Add --cluster-domain=cluster.local to your kubelet command start up, and it should start working as expected.
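Based on the kubelet config you posted, the start command would then look something like this (only the --cluster-domain flag is new; everything else is unchanged from your config):
/usr/bin/kubelet \
--require-kubeconfig \
--allow-privileged=true \
--cluster-dns=10.32.0.10 \
--cluster-domain=cluster.local \
--container-runtime=docker \
--docker=unix:///var/run/docker.sock \
--network-plugin=kubenet \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--serialize-image-pulls=true \
--cgroup-root=/ \
--system-container=/system \
--node-status-update-frequency=4s \
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
--v=2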
I'm busy testing out Kubernetes on my local PC using https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md
which launches a dockerized single-node k8s cluster. I need to run a privileged container inside k8s (it runs Docker in order to build images from Dockerfiles). What I've done so far is add a security context with privileged: true to the pod config, which returns forbidden when trying to create the pod. I know that you have to enable privileged mode on the node with --allow-privileged=true, and I've done this by adding the argument to step two (running the master and worker node), but it still returns forbidden when creating the pod.
Anyone know how to enable privileged in this dockerized k8s for testing?
Here is how I run the k8s master:
docker run --privileged --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --api-servers=http://localhost:8080 --v=2 --address=0.0.0.0 --allow-privileged=true --enable-server --hostname-override=127.0.0.1 --config=/etc/kubernetes/manifests
Update: Privileged mode is now enabled by default (both in the apiserver and in the kubelet) starting with the 1.1 release of Kubernetes.
To enable privileged containers, you need to pass the --allow-privileged flag to the Kubernetes apiserver in addition to the Kubelet when it starts up. The manifest file that you use to launch the Kubernetes apiserver in the single node docker example is bundled into the image (from master.json), but you can make a local copy of that file, add the --allow-privileged=true flag to the apiserver command line, and then change the --config flag you pass to the Kubelet in Step Two to a directory containing your modified file.
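A sketch of what that could look like, assuming you copied the bundled manifests (including your edited master.json with --allow-privileged=true added to the apiserver command line) into a host directory such as /etc/kubernetes/manifests-local; the only changes from the command in the question are the extra volume mount and the --config path:
docker run --privileged --net=host -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /etc/kubernetes/manifests-local:/etc/kubernetes/manifests-local \
  gcr.io/google_containers/hyperkube:v1.0.1 \
  /hyperkube kubelet --api-servers=http://localhost:8080 --v=2 --address=0.0.0.0 \
  --allow-privileged=true --enable-server --hostname-override=127.0.0.1 \
  --config=/etc/kubernetes/manifests-local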