Kubernetes: Cannot deploy a simple "Couchbase" service

I am new to Kubernetes. I am trying to mimic what I do with docker-compose when I serve a Couchbase database in a Docker container:
couchbase:
  image: couchbase
  volumes:
    - ./couchbase:/opt/couchbase/var
  ports:
    - 8091-8096:8091-8096
    - 11210-11211:11210-11211
I managed to create a cluster on my local machine using a tool called "kind":
kind create cluster --name my-cluster
kubectl config use-context my-cluster
Then I am trying to use that cluster to deploy a Couchbase service.
I created a file named couchbase.yaml with the following content (mimicking what I do in my docker-compose file):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: couchbase
  namespace: my-project
  labels:
    platform: couchbase
spec:
  replicas: 1
  selector:
    matchLabels:
      platform: couchbase
  template:
    metadata:
      labels:
        platform: couchbase
    spec:
      volumes:
        - name: couchbase-data
          hostPath:
            # directory location on host
            path: /home/me/my-project/couchbase
            # this field is optional
            type: Directory
      containers:
        - name: couchbase
          image: couchbase
          volumeMounts:
            - mountPath: /opt/couchbase/var
              name: couchbase-data
Then I start the deployment like this:
kubectl create namespace my-project
kubectl apply -f couchbase.yaml
kubectl expose deployment -n my-project couchbase --type=LoadBalancer --port=8091
However, my deployment never actually starts:
kubectl get deployments -n my-project couchbase
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
couchbase   0/1     1            0           6m14s
And when I look for the logs I see this:
kubectl logs -n my-project -lplatform=couchbase --all-containers=true
Error from server (BadRequest): container "couchbase" in pod "couchbase-589f7fc4c7-th2r2" is waiting to start: ContainerCreating

As the OP mentioned in a comment, the issue was solved using an extra mount, as explained in the kind documentation: https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts
Here is the OP's comment, formatted so it's more readable:
the error shows up when I run this command:
kubectl describe pods -n my-project couchbase
I could fix it by creating a new kind cluster:
kind create cluster --config cluster.yaml
Passing this content in cluster.yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: inf
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /home/me/my-project/couchbase
        containerPath: /couchbase
In couchbase.yaml, the path then becomes path: /couchbase, of course.
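For reference, a minimal sketch of the volumes block in couchbase.yaml after that change (only the hostPath changes; indentation as in the pod template spec above):

volumes:
  - name: couchbase-data
    hostPath:
      # directory inside the kind node, mapped from the host via extraMounts
      path: /couchbase
      type: Directory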

Related

kubectl apply -f works on PC but not in Gitlab Runner

I am trying to deploy to Kubernetes using GitLab CI/CD. No matter what I do, kubectl apply -f helloworld-deployment.yml --record in my .gitlab-ci.yml always returns that the deployment is unchanged:
$ kubectl apply -f helloworld-deployment.yml --record
deployment.apps/helloworld-deployment unchanged
Even if I change the tag on the image, or if the deployment doesn't exist at all. However, if I run kubectl apply -f helloworld-deployment.yml --record from my own computer, it works fine: it updates when a tag changes and creates the deployment when no deployment exists. Below is the .gitlab-ci.yml that I'm testing with:
image: docker:dind

services:
  - docker:dind

stages:
  - deploy

deploy-prod:
  stage: deploy
  image: google/cloud-sdk
  environment: production
  script:
    - kubectl apply -f helloworld-deployment.yml --record
Below is helloworld-deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: helloworld
          image: registry.gitlab.com/repo/helloworld:test
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
      imagePullSecrets:
        - name: regcred
Update:
This is what I see if I run kubectl rollout history deployments/helloworld-deployment and there is no existing deployment:
Error from server (NotFound): deployments.apps "helloworld-deployment" not found
If the deployment already exists, I see this:
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=helloworld-deployment.yml --record=true
With only one revision.
I did notice this time that when I changed the tag, the output from my Gitlab Runner was:
deployment.apps/helloworld-deployment configured
However, there were no new pods. When I ran it from my PC, I did see new pods created.
Update:
Running kubectl get pods in the GitLab runner shows two pods that are different from the ones I see on my PC.
I definitely only have one Kubernetes cluster, but kubectl config view shows some differences (the server URL is the same). The output for contexts shows different namespaces. Does this mean I need to set a namespace either in my YAML file or pass it in the command? Here is the output from the GitLab runner:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: URL
  name: gitlab-deploy
contexts:
- context:
    cluster: gitlab-deploy
    namespace: helloworld-16393682-production
    user: gitlab-deploy
  name: gitlab-deploy
current-context: gitlab-deploy
kind: Config
preferences: {}
users:
- name: gitlab-deploy
  user:
    token: [MASKED]
And here is the output from my PC:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: URL
contexts:
- context:
    cluster: do-nyc3-helloworld
    user: do-nyc3-helloworld-admin
  name: do-nyc3-helloworld
current-context: do-nyc3-helloworld
kind: Config
preferences: {}
users:
- name: do-nyc3-helloworld-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - kubernetes
      - cluster
      - kubeconfig
      - exec-credential
      - --version=v1beta1
      - --context=default
      - VALUE
      command: doctl
      env: null
It looks like GitLab adds its own default namespace, of the form:
<project_name>-<project_id>-<environment>
Because of this, I put this in the metadata section of helloworld-deployment.yml:
namespace: helloworld-16393682-production
And then it worked as expected. It was deploying before, but kubectl get pods didn't show it since that command was using the default namespace.
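In other words, the metadata section of helloworld-deployment.yml now reads roughly:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
  namespace: helloworld-16393682-production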
Since GitLab uses a custom namespace, you need to add a namespace flag to your command to display your pods:
kubectl get pods -n helloworld-16393682-production
You can also set the default namespace for kubectl commands (see the Kubernetes documentation on namespaces): you can permanently save the namespace for all subsequent kubectl commands in that context.
In your case it could be:
kubectl config set-context --current --namespace=helloworld-16393682-production
Or, if you are using multiple clusters, you can switch between contexts (each with its own default namespace) using:
kubectl config use-context <context-name>
The same documentation also lists a lot of useful commands and configurations.
I hope it helps! =)

Can we point kubernetes to another cluster

I have asked myself this question and invested time researching it, but I am running out of time. Can someone point me in the right direction?
I have created a Kubernetes cluster on minikube, with its Ingress, Services and Deployments. There is a whole configuration of services in there.
Can I now point kubectl to another provider like VMware Fusion, AWS, Azure, or, not to forget, Google Cloud?
I know about kops. My understanding is that although this is the design goal of kops, it presently only supports AWS.
Yes, you can use different clusters via the context. List them using kubectl config get-contexts and switch between them using kubectl config use-context.
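For example (the context name below is just a placeholder from a hypothetical kubeconfig):

# list all contexts kubectl knows about
kubectl config get-contexts

# switch to another cluster's context, e.g. one created by gcloud, doctl or kops
kubectl config use-context gke_my-project_us-central1_my-cluster

# verify which cluster you are now talking to
kubectl config current-context
kubectl cluster-info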
I would like to suggest a couple of things based on the way I work with Kubernetes; from my local system to production, my environment stays consistent.
I use kubeadm to create a Kubernetes cluster on my local machine, and I maintain all my Kubernetes resources (Services, Pods, Deployments, etc.) in YAML deployment files.
For example, all my services and pods are saved in a YAML file such as counter.yaml:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: deployment-counter
  namespace: default
  labels:
    module: log-counter
spec:
  replicas: 1
  selector:
    matchLabels:
      module: log-counter
  template:
    metadata:
      labels:
        module: log-counter
    spec:
      containers:
        - name: container-counter
          image: busybox
          command:
            - "/bin/sh"
            - "-c"
            - 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      tolerations:
        - key: ud_application
          operator: Equal
          value: docxtract
          effect: NoSchedule
        - key: ud_module
          operator: Exists
          effect: NoSchedule
  strategy:
    type: RollingUpdate
On my local Kubernetes cluster provisioned by kubeadm, I deploy it as follows:
kubectl apply -f counter.yaml
And in production I have a Kubernetes cluster provisioned by kubeadm too, and I deploy it the same way:
kubectl apply -f counter.yaml
PS:
kubeadm is a tool provided by Kubernetes to provision a Kubernetes cluster.
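A rough sketch of what that bootstrap looks like (the pod CIDR and join parameters are placeholders and depend on your network plugin and environment):

# on the control-plane node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# copy the admin kubeconfig so kubectl can talk to the new cluster
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# on each worker node, using the token printed by kubeadm init
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>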

How to connect MongoDB which is running on Kubernetes cluster through UI tools like MongoDB Compass or RoboMongo?

I have multiple instances of MongoDB deployed inside my Kubernetes cluster through Helm packages.
They are running as services of type NodePort.
How do I connect to those MongoDB instances through UI tools like MongoDB Compass and RoboMongo from outside the cluster?
Any help is appreciated.
You can use kubectl port-forward to connect to MongoDB from outside the cluster.
Run kubectl port-forward << name of a mongodb pod >> --namespace << mongodb namespace >> 27018:27018.
Now point your UI tool to localhost:27018 and kubectl will forward all connections to the pod inside the cluster.
Starting with Kubernetes 1.10+ you can also use this syntax to connect to a service (you don't have to find a pod name first):
kubectl port-forward svc/<< mongodb service name >> 27018:27018 --namespace << mongodb namespace>>
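A concrete usage sketch, assuming a Helm release that created a service called my-release-mongodb in the default namespace and that MongoDB listens on its default port 27017 inside the cluster:

# forward local port 27018 to the service's MongoDB port
kubectl port-forward svc/my-release-mongodb 27018:27017 --namespace default

# then point Compass / Robo 3T at:
# mongodb://localhost:27018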
If it is not your production database you can expose it through a NodePort service:
# find mongo pod name
kubectl get pods
kubectl expose pod <<pod name>> --type=NodePort
# find new mongo service
kubectl get services
The last command will output something like:
mongodb-0 10.0.0.45 <nodes> 27017:32151/TCP 30s
Now you can access your mongo instance with mongo <<node-ip>>:32151.
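The same endpoint works from the UI tools: in Compass or Robo 3T, enter the node's IP as the host and the NodePort (32151 in this example) as the port, which corresponds to a connection string of the form mongodb://<node-ip>:32151.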
Fetch the service associated with the mongo db:
kubectl get services -n <namespace>
Port forward using:
kubectl port-forward service/<service_name> -n <namespace> 27018:27017
Open Robomongo on localhost:27018
If that does not resolve it, expose your mongo workload as a LoadBalancer and use the IP provided by the service: copy the LB IP and use it in Robo 3T. If it requires authentication, check my YAML file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          volumeMounts:
            - name: data
              mountPath: "/data/db"
              subPath: "mongodb_data"
          ports:
            - containerPort: 27017
              protocol: TCP
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: xxxx
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: xxxx
      imagePullSecrets:
        - name: xxxx
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: xxx
Set the same values in the authentication tab in Robo 3T.
NOTE: I haven't included the Service section in the YAML since I exposed the deployment as a LoadBalancer directly in the GCP UI itself.
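For reference, a minimal sketch of what an equivalent LoadBalancer Service could look like (assuming the Deployment above; the Service name is arbitrary):

apiVersion: v1
kind: Service
metadata:
  name: mongodb-lb
spec:
  type: LoadBalancer
  selector:
    app: mongodb
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP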

Intercept/capture incoming traffic to pods/services in Kubernetes

I'm using OpenShift and Kubernetes as the cloud platform for my application. For test purposes I need to intercept incoming HTTP requests to my pods. Is it possible to do that with a Kubernetes client library, or can it be configured with YAML?
Simple answer is no, you can't.
One of the ways to overcome this is to exec into your container (kubectl exec -it <pod> bash), install tcpdump and run something like tcpdump -i eth0 -n.
A more reasonable way to solve this at the infrastructure level is to use a tracing tool like Jaeger or Zipkin.
You can try something like the following; it will work. First you need to create a Job, say in a file named tcpdumppod.yaml:
apiVersion: batch/v1
kind: Job
metadata:
  name: tcpdump-capture-job
  namespace: blue
spec:
  template:
    metadata:
      name: "tcpdumpcapture-pod"
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/hostname: "ip-xx-x-x-xxx.ap-south-1.compute.internal"
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: "job-container"
          image: "docker.io/centos/tools"
          command: ["/bin/bash", "-c", "--"]
          args: [ "tcpdump -i any -s0 -vv -n dst host 10.233.6.70 and port 7776 || src 10.233.64.23" ]
      restartPolicy: Never
  backoffLimit: 3
  activeDeadlineSeconds: 460
=> kubectl create -f tcpdumppod.yaml
Then check the logs of the pod created by the Job while the container is running.
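A sketch of fetching those logs (namespace and job name taken from the Job above):

# follow the capture output via the job
kubectl logs -n blue job/tcpdump-capture-job -f

# or find the pod created by the job and tail it directly
kubectl get pods -n blue
kubectl logs -n blue <tcpdump-pod-name> -f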

Kube-dns does not resolve external hosts on kubeadm bare-metal cluster

I've got a Kubernetes cluster set up on bare-metal Ubuntu 16.04 machines using Weave networking, installed with kubeadm. I'm having a variety of little problems, the most recent of which is that I realized that kube-dns does not resolve external addresses (e.g. google.com). Any thoughts on why? Using kubeadm did not give me a lot of insight into the details of that part of the setup.
The issue turned out to be that a node-level firewall was interfering with the cluster networking. So there was no issue with the DNS setup.
I had the same issue on Kubernetes v1.6, and it was not a firewall issue in my case.
The problem was that I had configured DNS manually in /etc/docker/daemon.json, and those parameters are not used by kube-dns. Instead, you need to create a ConfigMap for kube-dns (see the related pull request and documentation), as follows:
Solution
Create a YAML file for the ConfigMap, for example kubedns-configmap.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["<own-dns-ip>"]
Then simply apply it on Kubernetes with:
kubectl apply -f kubedns-configmap.yml
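kube-dns watches this ConfigMap, but if the change does not seem to take effect, restarting the kube-dns pods forces a reload (this assumes the standard k8s-app=kube-dns label used by kubeadm deployments):

kubectl delete pod -n kube-system -l k8s-app=kube-dns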
Test 1
On your Kubernetes host node (10.96.0.10 is typically the cluster DNS service IP in a default kubeadm setup):
dig @10.96.0.10 google.com
Test 2
To test it I use a busybox image with the following resource configuration (busybox.yml):
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    # for arm
    #- image: hypriot/armhf-busybox
    - image: busybox
      command:
        - sleep
        - "3600"
      imagePullPolicy: IfNotPresent
      name: busybox
  restartPolicy: Always
Apply the resource with
kubectl apply -f busybox.yml
And test it with the following:
kubectl exec -it busybox -- ping google.com
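If ICMP is blocked in your environment, a DNS-only check works as well (nslookup ships with the busybox image):

# resolve an external name through the cluster DNS
kubectl exec -it busybox -- nslookup google.com

# resolve an internal service name as a sanity check
kubectl exec -it busybox -- nslookup kubernetes.default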