I have to use my own version of Kubernetes, but I don't know how to tell OpenShift to use that version of Kubernetes.
At first I thought I had to recompile the OpenShift Origin source code, and I did. So, can someone tell me how to configure OpenShift to do what I explained above?
I use CentOS 7 on a CloudStack virtual machine.
Thanks in advance.
OpenShift can either run its own compiled-in Kubernetes components (which is the typical setup), or can run against an external Kubernetes server process. It does not manage launching an external Kubernetes binary.
You can run OpenShift against an external Kubernetes process by giving the OpenShift master a kubeconfig file containing connection information and credentials for an existing Kubernetes API server:
openshift start master --kubeconfig=/path/to/k8s.kubeconfig
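For reference, a kubeconfig is a standard Kubernetes client config file with `clusters`, `users`, and `contexts` sections. Below is a minimal sketch of one, written out with a shell heredoc; the server address, names, and credential paths are hypothetical placeholders, not values from this answer:

```shell
# Sketch of a minimal kubeconfig pointing at an existing Kubernetes API
# server. The server URL, user name, and credential paths are placeholders.
cat > /tmp/k8s.kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: external-k8s
  cluster:
    server: https://10.0.0.5:6443            # external Kubernetes API server
    certificate-authority: /path/to/ca.crt
users:
- name: openshift-master
  user:
    client-certificate: /path/to/client.crt
    client-key: /path/to/client.key
contexts:
- name: default
  context:
    cluster: external-k8s
    user: openshift-master
current-context: default
EOF

# Then point the OpenShift master at it (requires an OpenShift install):
# openshift start master --kubeconfig=/tmp/k8s.kubeconfig
```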
Related
Some of my team members don't have permission to pull Docker images locally on their machines and run local instances of MongoDB and mongo-express.
So I am planning to deploy MongoDB and mongo-express as pods in OpenShift so they can be accessed locally. Can anyone provide the steps to do that in OpenShift, or point me to a proper resource where I can find the information/steps?
I am new to OpenShift.
Can anyone provide the steps to do that in Openshift?
This tutorial explains how apps (containers) are deployed from images in OpenShift.
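As a rough sketch of what such a deployment looks like, the manifest below defines Deployments and Services for MongoDB and mongo-express. The image names are the official Docker Hub ones, but the resource names, ports, and the `ME_CONFIG_MONGODB_SERVER` wiring are illustrative assumptions, not production advice (note also that stock Docker Hub images often assume root and may need a relaxed SCC on OpenShift):

```shell
# Sketch of Deployments/Services for MongoDB and mongo-express.
# Image names are the official Docker Hub images; names/ports are illustrative.
cat > mongo-stack.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels: {app: mongodb}
  template:
    metadata:
      labels: {app: mongodb}
    spec:
      containers:
      - name: mongodb
        image: mongo:4.2
        ports: [{containerPort: 27017}]
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  selector: {app: mongodb}
  ports: [{port: 27017}]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels: {app: mongo-express}
  template:
    metadata:
      labels: {app: mongo-express}
    spec:
      containers:
      - name: mongo-express
        image: mongo-express
        env:
        - name: ME_CONFIG_MONGODB_SERVER
          value: mongodb                  # the mongodb Service name above
        ports: [{containerPort: 8081}]
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express
spec:
  selector: {app: mongo-express}
  ports: [{port: 8081}]
EOF

# Apply with the OpenShift CLI (requires a running cluster and login):
# oc apply -f mongo-stack.yaml
# oc expose service mongo-express   # create a route to reach the UI
```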
Or else proper resource where I can find information/steps
It depends on your needs; the main question is whether you really need container orchestration tools. If you do, you can consider installing them locally:
Docker for Windows
Minikube
or in cloud:
Google Kubernetes Engine aka GKE (it lets you create a basic Kubernetes cluster in a few clicks)
OpenShift (I haven't worked with it yet)
From what I've seen, Kubernetes provides a lot of documentation (with examples, etc.) on the topic.
Last but not least, there is a really nice step-by-step guide on how to create a Kubernetes cluster from scratch, "the hard way", if you need the cluster to be fully managed by you.
I have a Kubernetes cluster set up using Kubernetes Engine on GCP. I have also installed Dask using the Helm package manager. My data are stored in a Google Storage bucket on GCP.
Running kubectl get services on my local machine yields the following output
I can open the dashboard and jupyter notebook using the external IP without any problems. However, I'd like to develop a workflow where I write code in my local machine and submit the script to the remote cluster and run it there.
How can I do this?
I tried following the instructions in Submitting Applications using dask-remote. I also tried exposing the scheduler using kubectl expose deployment with type LoadBalancer, though I do not know if I did this correctly. Suggestions are greatly appreciated.
Yes, if your client and workers share the same software environment then you should be able to connect a client to a remote scheduler using the publicly visible IP.
from dask.distributed import Client

# the distributed scheduler listens on port 8786 by default
client = Client('tcp://REDACTED_EXTERNAL_SCHEDULER_IP:8786')
I am trying to install Kubernetes in a self-hosted production environment running on Ubuntu 16.04. I am not able to find any helpful guide for setting up a production-grade Kubernetes master and connecting worker nodes to it.
Any help is much appreciated.
You can use Kubespray to self-host a production environment.
https://github.com/kubernetes-incubator/kubespray
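The usual Kubespray workflow looks roughly like the script below, saved here for reference; the inventory name and host details are placeholders you would replace with your own Ubuntu 16.04 nodes:

```shell
# Sketch of the typical Kubespray workflow, written to a script for reference.
# The inventory name and node details are placeholders.
cat > deploy-kubespray.sh <<'EOF'
#!/bin/sh
git clone https://github.com/kubernetes-incubator/kubespray.git
cd kubespray
pip install -r requirements.txt            # installs Ansible and dependencies
cp -r inventory/sample inventory/mycluster
# edit inventory/mycluster/hosts.ini to list your master/node machines
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b
EOF
chmod +x deploy-kubespray.sh
```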
It depends on what you mean by "self-host". Most people take it to mean deploying Kubernetes in their own environment.
If you want to compare different approaches to deploy k8s in a custom environment, refer to this article which covers a bunch of options suitable for that.
If you are interested in how to set up an HA Kubernetes cluster using kubeadm, refer to this article.
However, in Kubernetes there is a different definition of "self-hosted": running Kubernetes itself as a workload on Kubernetes. If you are interested in a real self-hosted approach (in a custom environment), refer to this article.
Hope this helps
You can use Typhoon to provision an HA Kubernetes cluster.
Here is a sample configuration which I used to bring up my own home cluster.
A few advantages of Typhoon: you can choose which cloud provider to provision your infrastructure on (done using Terraform), and the fact that it gives you upstream Kubernetes is a big plus too.
Internally, it uses Bootkube to bring up a temporary control plane, which consists of
api-server
controller-manager
scheduler
and once the temporary control plane is up, the permanent control-plane objects are injected into the API server, giving us our k8s cluster.
Have a look at this KubeCon talk given by CoreOS, which explains how this works.
I've followed the instructions in this link to set up Kong in a Kubernetes container on my local machine. I'm able to access APIs behind Kong through the Kubernetes (minikube) IP. Now I have the enterprise edition (trial version) of Kong. Without Kubernetes, I've downloaded the Kong enterprise image and am able to run Kong on my local machine. But my question is how to set up an enterprise Kong installation in a Kubernetes container. I assume that I have to tweak the "image" section in the .yaml to pull the enterprise Kong image, but I'm not sure how to do that. Can you help us with how to go ahead with an enterprise Kong installation in a Kubernetes container?
There are (at least) two answers to your question:
set up a private docker registry -- even one inside your own kubernetes cluster -- and push the image to it, then point the image: at the internal registry
that assumes your enterprise purchase didn't come with access to an authenticated registry hosted by Mashape, which would absolutely be the preferred mechanism for that problem
or I think you can pre-load the Docker image onto the nodes via PodSpec: initContainers: in any number of ways: ftp, http, the S3 API, NFS, ... Because the initContainer runs before the Pod's containers, I would expect the kubelet to delay the image pull of the main container until the initContainer has finished. If I had a working cluster in front of me I'd try it out, so take this one with a grain of salt
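To make the first option concrete, here is a hedged sketch of creating registry credentials and swapping the image in the Kong Deployment's pod spec. The registry address, secret name, credentials, and image tag are all hypothetical:

```shell
# Create a pull secret for the private registry. Shown commented because it
# requires a running cluster and the registry details are hypothetical:
# kubectl create secret docker-registry kong-ee-registry \
#   --docker-server=registry.example.com \
#   --docker-username=REDACTED --docker-password=REDACTED

# The relevant change in the Deployment's pod spec (image + pull secret):
cat > kong-image-patch.yaml <<'EOF'
spec:
  template:
    spec:
      imagePullSecrets:
      - name: kong-ee-registry
      containers:
      - name: kong
        image: registry.example.com/kong-enterprise-edition:latest
EOF

# Apply it to the existing Deployment (requires a cluster):
# kubectl patch deployment kong --patch "$(cat kong-image-patch.yaml)"
```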
While getting familiar with Kubernetes I see tons of tools that are supposed to help me install Kubernetes anywhere, but I don't understand exactly what they do under the hood, and as a result I don't know how to troubleshoot issues.
Can someone point me to a tutorial on how to install Kubernetes without any tools?
There are two good guides on setting up Kubernetes manually:
Kelsey Hightower's Kubernetes the hard way
Kubernetes guide on getting started from scratch
Kelsey's guide assumes you are using GCP or AWS as the infrastructure, while the Kubernetes guide is a bit more agnostic.
I wouldn't recommend running either of these in production unless you really know what you're doing. However, they are great for learning what is going on under the hood. Even if you just read the guides and don't use them to setup any infrastructure you should gain a better understanding of the pieces that make up a Kubernetes cluster. You can then use one of the helpful setup tools to create your cluster, but now you will understand what it is actually doing and can debug when things go wrong.
For simplicity, you can view k8s as three components:
etcd
the k8s master, which includes kube-apiserver, kube-controller-manager and kube-scheduler
the node, which runs kubelet
You can install etcd and the k8s master together on one machine. The procedure is:
Install etcd. Download the etcd package and run it, which is quite simple. Remember the etcd service ports, e.g. 2379/4001, or whatever you set.
Git clone the kubernetes project from GitHub and build it, then find the executable binaries; e.g. for k8s version 1.3, you can find kube-apiserver, kube-controller-manager and kube-scheduler in the src/k8s.io/kubernetes/_output/local/bin/linux/amd64 folder
Then run kube-apiserver, specifying the etcd IP and port (e.g. --etcd_servers=http://127.0.0.1:4001)
Run the scheduler and controller-manager, specifying the apiserver IP and port (e.g. --master=127.0.0.1:8080). The order in which you start the scheduler and controller-manager does not matter
The master is now running. Make sure these processes run without errors: if etcd exits, the apiserver will exit; if the apiserver exits, the scheduler and controller-manager will exit.
On another machine (virtual preferred, network-connected), run kubelet. The kubelet binary can also be found in the previous folder (src/k8s.io/kubernetes/_output/local/bin/linux/amd64); specify the apiserver IP and port (e.g. --api-servers=http://10.10.10.19:8080). You may also install Docker or something similar on the node, to verify that you can create a container.
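Putting the steps above together, the launch commands might look roughly like the scripts below, written out for reference. The IPs and ports are the example values from this answer; the binary path and flag names match the old v1.3-era layout, the `--service-cluster-ip-range` value is an added assumption (the apiserver refuses to start without one), and this is a sketch rather than a tested recipe:

```shell
# Sketch of the start-up sequence described above, saved to scripts.
# Uses the example IPs/ports from the answer; v1.3-era flag names.
BIN=src/k8s.io/kubernetes/_output/local/bin/linux/amd64

cat > start-master.sh <<EOF
#!/bin/sh
# 1. etcd first; apiserver exits if etcd is down
etcd --listen-client-urls http://127.0.0.1:4001 \
     --advertise-client-urls http://127.0.0.1:4001 &
# 2. apiserver, pointed at etcd
$BIN/kube-apiserver --etcd_servers=http://127.0.0.1:4001 \
                    --service-cluster-ip-range=10.0.0.0/24 \
                    --insecure-port=8080 &
# 3. scheduler and controller-manager, in either order
$BIN/kube-scheduler --master=127.0.0.1:8080 &
$BIN/kube-controller-manager --master=127.0.0.1:8080 &
EOF

cat > start-node.sh <<EOF
#!/bin/sh
# On the node machine: kubelet pointed at the master's apiserver
$BIN/kubelet --api-servers=http://10.10.10.19:8080 &
EOF
```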