I have already downloaded all the Kubernetes binaries; the directory looks like this:
~/vagrant/kubernetes/server/kubernetes/server/bin$ ls
federated-apiserver kubelet
hyperkube kubemark
kube-apiserver kube-proxy
kube-apiserver.docker_tag kube-proxy.docker_tag
kube-apiserver.tar kube-proxy.tar
kube-controller-manager kube-scheduler
kube-controller-manager.docker_tag kube-scheduler.docker_tag
kube-controller-manager.tar kube-scheduler.tar
kubectl
Can I use these binaries directly to create a cluster?
Yes, but unfortunately it is a non-trivial task to start with plain binaries and end up with a fully functional cluster.
To create a cluster, I'd recommend following one of the many supported solutions. If you want to create a cluster without using one of the existing scripts, you can follow the Creating a Custom Cluster from Scratch guide.
I downloaded the tar.gz files (flannel, etcd, kubernetes) and modified download-release.sh to untar the local files directly instead of fetching them with curl. Then I ran kube-up.sh and created a cluster from the downloaded files.
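In other words, a change along these lines (only a sketch; the exact curl invocation and variable names inside download-release.sh differ):
# Sketch only: instead of fetching the release tarball over the network, e.g.
#   curl -L -o kubernetes-server-linux-amd64.tar.gz "$SERVER_BINARY_URL"
# unpack a tarball that is already on disk:
tar xzf /path/to/kubernetes-server-linux-amd64.tar.gz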
I was given a Docker Compose file for Superset which includes volumes mounted from the repo itself:
docker-compose-non-dev.yml
I have to deploy this as containers in a pod in an EKS cluster. I can't figure out how the volumes should be handled, because the files are mounted locally from the repo when we run:
docker-compose up
[ EDIT ]
I just built the container with the files I needed inside it.
Docker Compose is a tool geared towards local deployments (as you may know), so it optimizes its workflows with that assumption. One way to work around this is to wrap the Docker image(s) that Compose brings up with the additional files you have in your local environment. For example, a wrapper Dockerfile would be something like:
FROM <original image>
ADD <local files to new image>
The resulting image is what you would run in the cloud on EKS.
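For instance, a hypothetical wrapper for the Superset case could look like the following (the base image name and the /app/docker path are assumptions; match them to what docker-compose-non-dev.yml actually mounts):
FROM apache/superset:latest          # assumed base image
# Bake the files that Compose mounted from the repo into the image itself
COPY ./docker /app/docker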
Of course there are many other ways to work around it, such as using Kubernetes volumes and (pre-)populating them with the local files, or baking the local files into the original image from the get-go, etc.
All in all, the traditional Compose model of thinking (with local file mappings) isn't very "cloud deployment friendly".
You can convert docker-compose.yaml files with a tool called kompose.
It's as easy as running
kompose convert
in a directory containing a docker-compose.yaml file.
This will create a bunch of files which you can deploy with kubectl apply -f . (or kompose up). You can read more here.
However, even though kompose will generate PersistentVolumeClaim manifests, no PersistentVolumes will be created. You have to create those yourself (the cluster may try to provision PVs by itself based on the PVCs generated by kompose, but I would not rely on that).
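As a minimal sketch, a manually created PersistentVolume to back one of the generated PVCs could look like this (name, size, and hostPath are assumptions; match the capacity and access mode to the PVCs kompose produced):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: superset-home-pv          # hypothetical name
spec:
  capacity:
    storage: 1Gi                  # should cover the PVC's request
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/superset-home     # or whatever storage backend you actually use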
Docker Compose is mainly used for development, testing, and single-host deployments [reference], which is not exactly what Kubernetes was created for (the latter being cloud oriented).
Is there a way to specify a tar file of a docker image in a deployment manifest file for kubernetes? The nodes have access to a mounted network drive that will have the tar file. There's a post where the image is loaded by docker on each node, but I was wondering if there's a way just to specify the tar file and have Kubernetes do the loading and running.
--edit--
To be more exact: say I have a mounted network drive on each node, is there a way, with just the manifest file, to instruct Kubernetes to load that image directly from the tar file and not have to push it to a Docker registry?
In general, no, Kubernetes can only access container images from a registry, not from a network drive, see documentation.
However, you could have a private registry inside your cluster (see docs). You could also have the images locally on the nodes (pre-pulled images) and have Kubernetes access them from there by setting imagePullPolicy to Never (see docs).
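A minimal sketch of the pre-pulled approach, assuming the image has already been loaded on every node (for example with docker load -i /mnt/share/myimage.tar) and is tagged myapp:local:
apiVersion: v1
kind: Pod
metadata:
  name: myapp                   # hypothetical name
spec:
  containers:
    - name: myapp
      image: myapp:local        # the tag the image was loaded under (assumption)
      imagePullPolicy: Never    # never contact a registry; fail if the image is missing locally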
You have provided quite limited information about your environment and what it looks like.
Two things come to mind.
Use initContainer to download this file using wget or similar.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
That way you can be sure that the tar file is downloaded before your application starts. An example can be found here.
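A minimal sketch of that approach, with the URL, image, and paths being assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: app-with-download
spec:
  volumes:
    - name: workdir
      emptyDir: {}                       # scratch space shared by both containers
  initContainers:
    - name: fetch-tar
      image: busybox:1.36
      # download the tar file before the main container starts
      command: ["wget", "-O", "/work/image.tar", "http://fileserver.example.com/image.tar"]
      volumeMounts:
        - name: workdir
          mountPath: /work
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "ls -l /work && sleep 3600"]
      volumeMounts:
        - name: workdir
          mountPath: /work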
Use Mount Volume
In your Deployment, StatefulSet, or Pod (not sure which you are using), you can mount a volume into the pod. After that, the path specified for the volume will be accessible inside the pod. Please keep in mind that you have to use the proper access modes.
To handle the .tar file you can use some bash commands, like in this documentation.
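A sketch of this option, assuming the network drive is exported over NFS (server name and paths are made up):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-share
spec:
  volumes:
    - name: images
      nfs:
        server: fileserver.example.com   # hypothetical NFS server
        path: /exports/images
        readOnly: true
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "ls -l /mnt/images && sleep 3600"]
      volumeMounts:
        - name: images
          mountPath: /mnt/images          # the tar file is visible here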
I'm new to K8s. In the process of configuring OpenStack Cinder as a K8s StorageClass, I have to add some flags to my kube-controller-manager, and that has turned out to be a big problem for me.
I'm using K8s 1.11 in VMs, and my K8s cluster has a kube-controller-manager pod, but I don't know how to add these flags to it.
After hours of searching, I found that there are a lot of tasks that require adding flags to kube-controller-manager, but no document that explains exactly how to do that. Please share the way to get past this.
Thank you.
You can check /etc/kubernetes/manifests dir on your master nodes.
This dir would contain yaml files for master components.
These are also known as static pods.
More Info : https://kubernetes.io/docs/tasks/administer-cluster/static-pod/
Update these files and you will see your changes take effect, as the kubelet should restart the pod when the file changes.
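For the Cinder case, the edit inside /etc/kubernetes/manifests/kube-controller-manager.yaml would look roughly like this (only a sketch; your existing flags and the cloud.conf path will differ):
spec:
  containers:
    - name: kube-controller-manager
      command:
        - kube-controller-manager
        - --kubeconfig=/etc/kubernetes/controller-manager.conf
        - --cloud-provider=openstack                  # added flag
        - --cloud-config=/etc/kubernetes/cloud.conf   # added flag (path is an assumption)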
As a more long-term solution, you will need to incorporate the flags into the tooling that you use to generate your k8s cluster.
I launched a MongoDB replica set on Kubernetes (GKE as well as kubeadm). I faced no problems with the pods accessing the storage.
However, when I used Helm to deploy the same, I face this problem.
When I run this command:
kubectl describe po mongodb-shard1-0 --namespace=kube-system
(Here mongodb-shard1-0 is the first and only pod, of the desired three, that was created.)
I get this error in the Events:
Error: failed to start container "mongodb-shard1-container": Error
response from daemon: error while creating mount source path
'/mongo/data': mkdir /mongo: read-only file system
I noticed one major difference between the two ways of creating the MongoDB cluster (without Helm and with Helm): when using Helm, I had to create a service account and install the Helm chart using that service account. Without Helm, I did not need that.
I used different mongo Docker images and faced the same error every time.
Can anybody help me understand why I am facing this issue?
Docker exports volumes from the filesystem using the -v command line option, e.g. -v /var/tmp:/tmp.
Can you check whether the containers/pods are writing to shared volumes rather than to the root filesystem?
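As a minimal sketch, the pod spec should mount a writable volume at the Mongo data path instead of relying on /mongo/data on the node's read-only root filesystem (image tag and volume type are assumptions):
spec:
  containers:
    - name: mongodb-shard1-container
      image: mongo:4.0                  # hypothetical image tag
      volumeMounts:
        - name: mongo-data
          mountPath: /mongo/data        # Mongo's data path inside the container
  volumes:
    - name: mongo-data
      emptyDir: {}                      # or a PersistentVolumeClaim for durable storage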
I have setup my kubernetes cluster from scratch following this doc: https://kubernetes.io/docs/getting-started-guides/scratch/
My kubernetes master and worker are working correctly, but I didn't find instructions for deploying the DNS addon.
Addons can be deployed through yaml files as well as by using the addon manager. I have already installed the dashboard, monitoring, and DNS manually using the yaml files provided (with small modifications) in this repo.
Please note that addon-manager is pretty special: you should copy all the files into a directory and then run:
./kube-addons.sh
Btw I prefer installing addons manually instead of using addon manager.
DNS addon manual example:
Take kubedns-controller.yaml.sed and replace $DNS_DOMAIN with cluster.local (you should use the domain specified in your setup here). You can also set it as a variable. Please note there are multiple occurrences in this file.
Then:
mv kubedns-controller.yaml.sed kubedns-deployement.yaml
kubectl create -f kubedns-deployement.yaml
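The $DNS_DOMAIN replacement can also be done non-interactively, e.g. (cluster.local is an assumption; use your own cluster domain):
# substitute every occurrence of the $DNS_DOMAIN placeholder in place
sed -i 's/\$DNS_DOMAIN/cluster.local/g' kubedns-controller.yaml.sed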