I've followed the instructions in this link to set up Kong in a Kubernetes container on my local machine. I'm able to access APIs behind Kong through the Kubernetes (minikube) IP. Now I have the Enterprise edition (trial version) of Kong. Without Kubernetes, I downloaded the Kong Enterprise image and was able to run Kong on my local machine. My question is how to set up the Kong Enterprise installation in a Kubernetes container. I assume I have to tweak the "image" section in the .yaml to pull the Kong Enterprise image, but I'm not sure how to do that. Can you help me with how to go ahead with the Kong Enterprise installation in a Kubernetes container?
There are (at least) two answers to your question:
set up a private docker registry (even one inside your own kubernetes cluster), push the image to it, then point the image: at the internal registry, as sketched below
that assumes that your enterprise purchase didn't come with access to an authenticated registry hosted by Mashape, which would absolutely be the preferred mechanism for that problem
or, I think you can pre-load the docker image onto the nodes via PodSpec:initContainers: in any number of ways: ftp, http, s3api, nfs, ... Because the initContainer runs before the Pod's containers, I would expect kubelet to delay the image pull of the main container until the initContainer has finished. If I had a working cluster in front of me I'd try it out, so take this one with a grain of salt
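For the private-registry option, here is a minimal sketch of the .yaml tweak, assuming a hypothetical registry address registry.example.local:5000 and a pull secret named regcred that you create yourself:

```yaml
# Sketch only: the registry host, image tag, and secret name are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-enterprise
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kong-enterprise
  template:
    metadata:
      labels:
        app: kong-enterprise
    spec:
      containers:
      - name: kong
        # Point image: at the Kong Enterprise image pushed to your private registry.
        image: registry.example.local:5000/kong-enterprise:trial
        ports:
        - containerPort: 8000
      # Credentials for the private registry, created beforehand with
      # kubectl create secret docker-registry regcred ...
      imagePullSecrets:
      - name: regcred
```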
Related
I have some team members who don't have permission to pull docker images locally on their machines and run local instances of mongodb and mongo-express.
So I am planning to deploy mongodb and mongo-express as pods in OpenShift so they can access them. Can anyone provide the steps to do that in OpenShift, or point me to a proper resource where I can find the information/steps?
I am new to OpenShift.
Can anyone provide the steps to do that in Openshift?
This tutorial explains how apps (containers) are deployed from images in OpenShift
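As a rough sketch of what that can look like with the oc CLI, assuming the public mongo and mongo-express images from Docker Hub and placeholder credentials (none of this is prescribed by the tutorial):

```sh
# Sketch only: image tags and credentials are assumptions.
# Deploy MongoDB from the public image.
oc new-app mongo:4.4 --name=mongodb \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=changeme

# Deploy mongo-express and point it at the mongodb service.
oc new-app mongo-express --name=mongo-express \
  -e ME_CONFIG_MONGODB_SERVER=mongodb \
  -e ME_CONFIG_MONGODB_ADMINUSERNAME=admin \
  -e ME_CONFIG_MONGODB_ADMINPASSWORD=changeme

# Expose mongo-express so it can be reached from outside the cluster.
oc expose svc/mongo-express
```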
Or else proper resource where I can find information/steps
It depends on your needs; the main question is whether you really need container orchestration tools at all. If you do, you can consider installing them locally:
Docker for Windows
Minikube
or in cloud:
Google Kubernetes Engine aka GKE (it allows you to create a basic Kubernetes cluster in a few clicks; see the command sketch after this list)
OpenShift (I haven't dealt with it yet)
From what I've already seen, Kubernetes provides a lot of documentation (with examples, etc.) on the topic.
Last but not least, there is a really nice step-by-step guide on how to create a Kubernetes cluster from scratch, "the hard way", if you need the cluster to be fully managed by you.
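Coming back to the GKE option above, a hedged example of creating a basic cluster from the command line (the cluster name and zone are placeholders, and this assumes the gcloud CLI is installed and authenticated):

```sh
# Sketch only: cluster name, zone, and node count are assumptions.
gcloud container clusters create demo-cluster \
  --num-nodes=1 \
  --zone=europe-west1-b

# Fetch credentials so kubectl talks to the new cluster.
gcloud container clusters get-credentials demo-cluster --zone=europe-west1-b
```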
What I want to do is deploy a multiple-container application in...
In the RHEL OS
A Red Hat supportable product (if possible)
In a single-node K8s cluster (bare-metal machine)
So I found several ways, but I have concerns about each:
minikube, minishift, OKD, CodeReady Containers
First, they run in a VM, but what I want is to run on the host.
Second, their docs say they are not for production environments.
So, is there any PaaS for a single-node cluster that works as a production environment?
Docker, Docker-compose
The deployment target OS should probably be RHEL 8. I guess it is not a good idea to use docker because Red Hat products are moving away from docker; even in the RHEL 8 repository, there is no docker rpm for el8 yet.
My questions are:
Is there any PaaS for a single-node cluster as a production environment?
If none exists, is docker-compose the best option?
As was already mentioned, you should not use a single-node setup in a production environment.
You should not do that because if your server drops, your service goes offline. There is nothing to switch to, nothing that might continue the work that was in progress.
If you still want to set up a single-node Kubernetes cluster, you can do that using kubeadm. I think this is as close to production grade as you can get.
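A hedged sketch of the kubeadm route on a single machine (the pod network CIDR is just an example, and on older Kubernetes releases the taint key is node-role.kubernetes.io/master instead of control-plane):

```sh
# Sketch only: the pod CIDR and taint key are assumptions for recent versions.
# Initialize the control plane on the single machine.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Point kubectl at the new cluster.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Remove the control-plane taint so regular workloads can schedule
# on this (only) node.
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```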
Other than that, as an alternative, you can play with Installing Kubernetes with Minikube or Install a local Kubernetes with MicroK8s.
It's up to you which one you choose, but you need to remember that this should not run as production; it should be a lab or test environment which, if it works as expected, gets migrated to a production-grade cluster of a few nodes.
As for a PaaS on a single node, there is Dokku.
Docker powered mini-Heroku. The smallest PaaS implementation you've ever seen.
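As a hedged illustration of the Dokku workflow (the host name and app name are placeholders):

```sh
# Sketch only: host and app names are assumptions.
# On the server: create an app.
dokku apps:create myapp

# On the developer machine: push the code; Dokku builds and deploys it.
git remote add dokku dokku@dokku.example.local:myapp
git push dokku main
```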
And if you would consider using a cloud for PaaS, you can choose from AWS Cloud9, Azure App Service or Google App Engine.
A single-node cluster is not recommended for production applications. You need scalability, high availability, and fault tolerance for production apps, and you must have more than one node to get these features.
Our company uses Kubernetes in all our environments, as well as on our local MacBooks using minikube.
We have many microservices, most of them running on the JVM, which requires a large amount of memory. We started to face an issue where we cannot run our stack on minikube because the local machine runs out of memory.
We thought about multiple solutions:
The first was to create a k8s cloud development environment; when a developer works on a single microservice on their local MacBook, they would redirect the outbound traffic into the cloud instead of the local minikube. But this solution creates new problems:
How will a pod inside the cloud dev env send data to the local developer machine? It's not just a single request/response scenario.
We have many developers, and they can overlap each other with different versions of each service that needs to be deployed to the cloud. (We could give each developer a separate namespace, but we would need a huge cluster to support it.)
The second solution was that maybe we should use a tool like skaffold or draft to deploy our current code into the cloud development environment. That would solve issue #1, but again we see problems:
Slow development cycle: building a Java image, pushing it to the remote cloud, and waiting for init takes too much time for a developer to work.
And we are still facing issue #2.
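(For reference, this is the kind of minimal skaffold config we mean; the image name and manifest path here are placeholders rather than our real setup:)

```yaml
# Sketch only: image name and manifest path are assumptions.
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
  # Image skaffold rebuilds and pushes on every code change.
  - image: registry.example.local/our-service
deploy:
  kubectl:
    manifests:
    # Manifests applied to the cloud dev cluster after each build.
    - k8s/*.yaml
```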
Another thought was: Kubernetes supports multiple nodes, so why don't we just add another node, a remote node that sits in the cloud, to our local minikube? The main issue is that minikube is a single-node solution. Also, we didn't find any resources for it on the web.
The last thought was to connect the minikube docker daemon to a remote machine, so we would use minikube on the local machine but docker would run the containers on a remote cloud server. No luck so far: minikube crashes when we attempt this manipulation, and we didn't find any resources for it on the web either.
Any thoughts on how to solve our issue? Thank you!
I'm looking for some clues about which solution can help me with my problem, which is running a Kubernetes cluster across 2 VMs.
I'm getting started with Kubernetes and all its possibilities, but like everybody I started from a minikube single-node cluster to host my 4 containers, respectively hosting mongoDB, redis, rabbitMQ, and minio.
The idea is that I need something like minikube to create a cluster like this:
Moreover, these 2 VMs will run RedHat EL 7, won't be local, and may be hosted on different machines.
Is it possible to build that architecture with kubeadm?
The answer is yes, it can be achieved with kubeadm. It will allow you to aggregate your different VMs/hosts into the cluster, in the same way docker-swarm does, by kind of subscribing them into the cluster.
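A minimal sketch of that flow across the two VMs (the pod network CIDR is an assumption tied to whichever CNI plugin you pick, and kubeadm init prints the exact join command with a real token and hash):

```sh
# Sketch only: the pod CIDR is an assumption.
# On VM 1 (the control plane):
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# kubeadm init prints a join command with a real token and hash; on VM 2:
sudo kubeadm join <control-plane-ip>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```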
See how below (thanks @Jason Stanley).
Some Prerequisites
If, like me, you're on client infrastructure, implementing your solution on-premise in a Red Hat environment, you'll need two or three things.
The proper docker environment
An image registry
An easy way to configure your cluster
Docker environment
The first one, because of a restriction set by Docker and Red Hat, is to use docker-ee, the Enterprise Edition, which allows Red Hat to support it in production; this is why most people pay for a Red Hat subscription.
See the typical architecture using docker-ee and how to install it on Red Hat.
Local image registry
The other thing is to deploy a docker registry. There is an official image (documentation here) provided by the Docker team; the registry will allow your cluster to pull the images that you push into it while setting everything up. Note that this is relevant in a restricted environment where your VMs/hosts can't access the internet or have to use a proxy.
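A minimal sketch of standing up such a registry with the official image (the port mapping, storage path, and example image are placeholders):

```sh
# Sketch only: port mapping, storage path, and example image are assumptions.
# Run the official registry image.
docker run -d -p 5000:5000 --name registry \
  -v /opt/registry-data:/var/lib/registry \
  registry:2

# Tag and push an image into it so the cluster can pull from it.
docker tag mongo:4.4 localhost:5000/mongo:4.4
docker push localhost:5000/mongo:4.4
```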
Cluster setting tool
One handy tool is helm, which allows you to set up your cluster easily by declaring which image provides a given service, on which port, and with which policy.
Helm documentation
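As a hedged example of that workflow, using the public Bitnami chart repository and placeholder names (not something specific to this setup):

```sh
# Sketch only: repository, release name, and override value are assumptions.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a chart; --set overrides values such as credentials or ports.
helm install my-mongo bitnami/mongodb \
  --set auth.rootPassword=changeme
```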
I have to use a version of Kubernetes built by me, but I don't know how to tell OpenShift to use that version of Kubernetes.
At the beginning I thought that I had to recompile the source code of OpenShift Origin, and I did. So, can someone tell me how to configure OpenShift to do what I explained above?
I use CentOS 7 on a CloudStack virtual machine.
Thanks in advance.
OpenShift can either run its own compiled-in Kubernetes components (which is the typical setup), or run against an external Kubernetes server process. It does not manage launching an external Kubernetes binary.
You can run OpenShift against an external Kubernetes process by giving the OpenShift master a kubeconfig file containing connection information and credentials for an existing Kubernetes API server:
openshift start master --kubeconfig=/path/to/k8s.kubeconfig
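The kubeconfig itself is a standard one; here is a minimal sketch, with the server address, names, and credential paths as placeholders:

```yaml
# Sketch only: server address, names, and credential paths are assumptions.
apiVersion: v1
kind: Config
clusters:
- name: external-k8s
  cluster:
    server: https://k8s.example.local:6443
    certificate-authority: /path/to/ca.crt
users:
- name: admin
  user:
    client-certificate: /path/to/admin.crt
    client-key: /path/to/admin.key
contexts:
- name: default
  context:
    cluster: external-k8s
    user: admin
current-context: default
```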