Can we point kubectl to another cluster?

I have asked myself this question and spent time researching it, but I am running out of time. Can someone point me in the right direction?
I have created a Kubernetes cluster on minikube, with its Ingress, Services and Deployments; there is a whole configuration of services in there.
Can I now point kubectl at another provider such as VMware Fusion, AWS, Azure, or Google Cloud?
I know about kops. My understanding is that although this is kops' design goal, it presently only supports AWS.

Yes, you can use different clusters via the context. List them using kubectl config get-contexts and switch between them using kubectl config use-context.
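For example (a minimal sketch; the context name my-gke-cluster is hypothetical and depends on what is defined in your kubeconfig):

# list every context (cluster/user pair) known to your kubeconfig
kubectl config get-contexts

# switch the active context; subsequent kubectl commands go to that cluster
kubectl config use-context my-gke-cluster

# confirm which context is currently active
kubectl config current-context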

I would like to suggest a couple of things based on how I work with Kubernetes; from my local system to production, my environment remains consistent.
I use kubeadm to create a Kubernetes cluster on my local machine, and I maintain all my Kubernetes resources (Services, Pods, Deployments, etc.) as YAML deployment files.
For example, my services and pods are saved in a YAML file such as counter.yaml:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: deployment-counter
  namespace: default
  labels:
    module: log-counter
spec:
  replicas: 1
  selector:
    matchLabels:
      module: log-counter
  template:
    metadata:
      labels:
        module: log-counter
    spec:
      containers:
        - name: container-counter
          image: busybox
          command:
            - "/bin/sh"
            - "-c"
            - 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      tolerations:
        - key: ud_application
          operator: Equal
          value: docxtract
          effect: NoSchedule
        - key: ud_module
          operator: Exists
          effect: NoSchedule
  strategy:
    type: RollingUpdate
On my local Kubernetes cluster provisioned by kubeadm, I deploy it as follows:
kubectl apply -f counter.yaml
On production I have a Kubernetes cluster provisioned by kubeadm as well, and I deploy it the same way:
kubectl apply -f counter.yaml
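If both clusters are registered in my kubeconfig, the target can also be selected explicitly per command via contexts (a minimal sketch; the context names local and production are hypothetical and depend on your kubeconfig):

# deploy to the local kubeadm cluster
kubectl --context=local apply -f counter.yaml

# deploy the same manifest, unchanged, to the production cluster
kubectl --context=production apply -f counter.yaml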
PS:
kubeadm is a tool provided by the Kubernetes project to provision a Kubernetes cluster.
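If you bootstrap a cluster this way, the basic flow looks roughly like this (a hedged sketch of the standard kubeadm workflow; the pod-network CIDR value is an assumption and depends on the network add-on you choose):

# on the machine that will become the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# make kubectl use the new cluster's admin kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config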

Related

How can I run a cli app in a pod inside a Kubernetes cluster?

I have a CLI app written in NodeJS [not by me].
I want to deploy it on a k8s cluster like I have done many times with web servers, but I have not deployed something like this before, so I am at a bit of a loss.
I have worked with dockerized CLI apps [like Terraform] before, and I know how to use them in a CI/CD pipeline.
But how should I deploy them in a pod so that they are always available for use by another app in the cluster?
Or is there a completely different approach I need to consider?
#EDIT#
I am using this at the end of my Dockerfile:
# the main executable
ENTRYPOINT ["sleep", "infinity"]
# a default command
CMD ["mycli help"]
That way the pod does not restart, and the CLI inside is waiting for commands like mycli do this.
Is this a hacky approach that is frowned upon, or a legit solution?
Your edit is one solution. Another, if you do not want to or cannot change the Docker image, is to Define a Command for a Container that loops infinitely; this achieves the same as the Dockerfile ENTRYPOINT but without having to rebuild the image.
Here's an example of such implementation:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
    - name: command-demo-container
      image: debian
      command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 5 ; done"]
  restartPolicy: OnFailure
As for your question about whether this is a legit solution, that is hard to answer; I would say it depends on what your application is designed to do. Kubernetes Pods are designed to be ephemeral, so a good solution is one that runs until the job is completed; for a web server, for example, the job is never completed because it should be constantly listening for requests.
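If the CLI is instead meant to run a task to completion rather than idle in a loop, a Kubernetes Job fits that pattern; here is a hedged sketch (the image name my-cli-image and the command arguments are hypothetical placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: mycli-run
spec:
  backoffLimit: 2              # retry a failed run at most twice
  template:
    spec:
      containers:
        - name: mycli
          image: my-cli-image  # hypothetical image containing the CLI
          command: ["mycli", "do", "this"]
      restartPolicy: Never     # the pod runs once and exits when the CLI finishes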
If your pods are in the same cluster, they are already reachable from other pods through CoreDNS, an internal DNS service which lets you access them by their internal DNS name, something like my-cli-app.my-namespace.svc.cluster.local. See DNS for Services and Pods.
You would then create a deployment file with all your apps. Note that this does not require exposing ports externally, and communication does not go through the internet.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
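To give the pods a stable DNS name like the one mentioned above, you would normally put a Service in front of the Deployment (a sketch; the name my-cli-app, the namespace my-namespace and port 8080 are assumptions, not part of the original answer):

apiVersion: v1
kind: Service
metadata:
  name: my-cli-app
  namespace: my-namespace
spec:
  selector:
    app: nginx          # must match the pod labels of the Deployment above
  ports:
    - port: 8080        # reachable as my-cli-app.my-namespace.svc.cluster.local:8080
      targetPort: 80    # containerPort of the pods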

Kubectl error upon applying agones fleet: ensure CRDs are installed first

I am using minikube (docker driver) with kubectl to test an agones fleet deployment. Upon running kubectl apply -f lobby-fleet.yml (and when I try to apply any other agones yaml file) I receive the following error:
error: resource mapping not found for name: "lobby" namespace: "" from "lobby-fleet.yml": no matches for kind "Fleet" in version "agones.dev/v1"
ensure CRDs are installed first
lobby-fleet.yml:
apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
name: lobby
spec:
replicas: 2
scheduling: Packed
template:
metadata:
labels:
mode: lobby
spec:
ports:
- name: default
portPolicy: Dynamic
containerPort: 7600
container: lobby
template:
spec:
containers:
- name: lobby
image: gcr.io/agones-images/simple-game-server:0.12 # Modify to correct image
I am running this on WSL2, but receive the same error when using the Windows installation of kubectl (through choco). I have minikube installed and running on Ubuntu in WSL2 using Docker.
I am still new to using k8s, so apologies if the answer to this question is clear, I just couldn't find it elsewhere.
Thanks in advance!
In order to create a resource of kind Fleet, you first have to apply the Custom Resource Definition (CRD) that defines what a Fleet is.
I've looked into the YAML installation instructions for Agones, and the install manifest contains the CRDs; you can find them by searching for kind: CustomResourceDefinition.
I recommend you first try to install Agones according to the instructions in the docs.
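Once Agones is installed, you can check that the CRDs actually exist before applying your fleet (a sketch; fleets.agones.dev is my assumption of the CRD name the Agones install manifest registers):

# list the CRDs that the Agones install created
kubectl get crd | grep agones.dev

# the Fleet kind should be backed by a CRD such as fleets.agones.dev
kubectl get crd fleets.agones.dev

# after that, the original manifest should apply without the mapping error
kubectl apply -f lobby-fleet.yml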

Kubernetes: Cannot deploy a simple "Couchbase" service

I am new to Kubernetes. I am trying to mimic the behavior of what I do with docker-compose when I serve a Couchbase database in a Docker container:
couchbase:
  image: couchbase
  volumes:
    - ./couchbase:/opt/couchbase/var
  ports:
    - 8091-8096:8091-8096
    - 11210-11211:11210-11211
I managed to create a cluster on my local machine using a tool called "kind":
kind create cluster --name my-cluster
kubectl config use-context my-cluster
Then I am trying to use that cluster to deploy a Couchbase service
I created a file named couchbase.yaml with the following content (I am trying to mimic what I do with my docker-compose file).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: couchbase
  namespace: my-project
  labels:
    platform: couchbase
spec:
  replicas: 1
  selector:
    matchLabels:
      platform: couchbase
  template:
    metadata:
      labels:
        platform: couchbase
    spec:
      volumes:
        - name: couchbase-data
          hostPath:
            # directory location on host
            path: /home/me/my-project/couchbase
            # this field is optional
            type: Directory
      containers:
        - name: couchbase
          image: couchbase
          volumeMounts:
            - mountPath: /opt/couchbase/var
              name: couchbase-data
Then I start the deployment like this:
kubectl create namespace my-project
kubectl apply -f couchbase.yaml
kubectl expose deployment -n my-project couchbase --type=LoadBalancer --port=8091
However, my deployment never actually starts:
kubectl get deployments -n my-project couchbase
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
couchbase   0/1     1            0           6m14s
And when I look for the logs I see this:
kubectl logs -n my-project -lplatform=couchbase --all-containers=true
Error from server (BadRequest): container "couchbase" in pod "couchbase-589f7fc4c7-th2r2" is waiting to start: ContainerCreating
As the OP mentioned in a comment, the issue was solved using an extra mount, as explained in the documentation: https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts
Here is the OP's comment, formatted so it's more readable:
The error shows up when I run this command:
kubectl describe pods -n my-project couchbase
I could fix it by creating a new kind cluster:
kind create cluster --config cluster.yaml
Passing this content in cluster.yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: inf
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /home/me/my-project/couchbase
        containerPath: /couchbase
In couchbase.yaml, the path then becomes path: /couchbase, of course.
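For reference, the adjusted volumes section of couchbase.yaml would then look roughly like this (a sketch based on the OP's fix; only the hostPath changes, everything else stays as in the original file):

      volumes:
        - name: couchbase-data
          hostPath:
            # path inside the kind node, provided by the extraMounts entry above
            path: /couchbase
            type: Directory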

Methods of Verifying Kubernetes Configuration

I've been working on a small side project to try and learn Kubernetes. I have a relatively simple cluster with two services and an ingress, and I am working on adding a Redis database now. I'm hosting this cluster in Google Kubernetes Engine (GKE), but using Minikube to run the cluster locally and try everything out before I commit any changes and push them to the prod environment in GKE.
During this project, I have noticed that GKE seems to have some slight differences in how it wants the configuration vs what works in Minikube. I've seen this previously with ingresses and now with persistent volumes.
For example, to run Redis with a persistent volume in GKE, I can use:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chatter-db-deployment
  labels:
    app: chatter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chatter-db-service
  template:
    metadata:
      labels:
        app: chatter-db-service
    spec:
      containers:
        - name: master
          image: redis
          args: [
            "--save", "3600", "1", "300", "100", "60", "10000",
            "--appendonly", "yes",
          ]
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: chatter-db-storage
              mountPath: /data/
      volumes:
        - name: chatter-db-storage
          gcePersistentDisk:
            pdName: chatter-db-disk
            fsType: ext4
The gcePersistentDisk section at the end refers to a disk I created using gcloud compute disks create. However, this simply won't work in Minikube as I can't create disks that way.
Instead, I need to use:
volumes:
  - name: chatter-db-storage
    persistentVolumeClaim:
      claimName: chatter-db-claim
I also need to include separate configuration for a PersistentVolume and a PersistentVolumeClaim.
I can easily get something working in either Minikube OR GKE, but I'm not sure what is the best means of getting a config which works for both. Ideally, I want to have a single k8s.yaml file which deploys this app, and kubectl apply -f k8s.yaml should work for both environments, allowing me to test locally with Minikube and then push to GKE when I'm satisfied.
I understand that there are differences between the two environments and that will probably leak into the config to some extent, but there must be an effective means of verifying a config before pushing it? What are the best practices for testing a config? My questions mainly come down to:
Is it feasible to have a single Kubernetes config which can work for both GKE and Minikube?
If not, is it feasible to have a mostly shared Kubernetes config, which overrides the GKE and Minikube specific pieces?
How do existing projects solve this particular problem?
Is the best method to simply make a separate dev cluster in GKE and test on that, rather than bothering with Minikube at all?
Yes, you have found some parts of Kubernetes configuration that were not perfect from the beginning. But there are newer solutions.
Storage abstraction
The idea in newer Kubernetes releases is that your application configuration is a Deployment with Volumes that refer to a PersistentVolumeClaim for a StorageClass, while the StorageClass and PersistentVolume belong more to the infrastructure configuration.
See Configure a Pod to Use a PersistentVolume for Storage for how to configure a PersistentVolume for Minikube. For GKE you configure a PersistentVolume backed by GCEPersistentDisk, or if you want to deploy your app to AWS you may use a PersistentVolume backed by AWSElasticBlockStore.
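A minimal sketch of that split, assuming dynamic provisioning via each environment's default StorageClass (the claim name matches the snippet from the question; the 1Gi size is an assumption):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: chatter-db-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # omitting storageClassName lets Minikube and GKE each fall back to their default StorageClass

The Deployment then only references the claim, exactly as in the question's second snippet, so the same manifest can be applied to both clusters.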
Ingress and Service abstraction
Services of type LoadBalancer and NodePort in combination with Ingress do not work the same way across cloud providers and Ingress controllers. In addition, service mesh implementations like Istio have introduced VirtualService. As I understand it, the plan is to improve this situation with Ingress v2.

Kube-dns does not resolve external hosts on kubeadm bare-metal cluster

I've got a k8s cluster set up on bare-metal Ubuntu 16.04 machines using Weave networking with kubeadm. I'm having a variety of little problems, the most recent of which is that I realized that kube-dns does not resolve external addresses (e.g. google.com). Any thoughts on why? Using kubeadm did not give me a lot of insight into the details of that part of the setup.
The issue turned out to be that a node-level firewall was interfering with the cluster networking. So there was no issue with the DNS setup.
I had the same issue on Kubernetes v1.6, and it was not a firewall issue in my case.
The problem was that I had configured DNS manually in /etc/docker/daemon.json, and those parameters are not used by kube-dns. Instead, you need to create a ConfigMap for kube-dns (pull request here and documentation here), as follows:
Solution
Create a YAML file for the ConfigMap, for example kubedns-configmap.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["<own-dns-ip>"]
Then simply apply it to Kubernetes with:
kubectl apply -f kubedns-configmap.yml
Test 1
On your Kubernetes host node:
dig @10.96.0.10 google.com
Test 2
To test it I use a busybox image with the following resource configuration (busybox.yml):
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    # for arm
    #- image: hypriot/armhf-busybox
    - image: busybox
      command:
        - sleep
        - "3600"
      imagePullPolicy: IfNotPresent
      name: busybox
  restartPolicy: Always
Apply the resource with
kubectl apply -f busybox.yml
And test it with the following:
kubectl exec -it busybox -- ping google.com
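If ICMP is blocked in your environment, a DNS-only check works as well (a sketch; note that nslookup behavior varies slightly between busybox versions):

# resolve an internal name to confirm kube-dns itself answers
kubectl exec -it busybox -- nslookup kubernetes.default

# resolve an external name to confirm upstream forwarding works
kubectl exec -it busybox -- nslookup google.com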