Error when trying to expose a docker container in minikube - kubernetes

I am using minikube to learn about Docker and Kubernetes, but I have come across a problem.
I am following along with the examples in Kubernetes in Action, and I am trying to expose a pod running an image pulled from my Docker Hub account, but I cannot make this pod visible.
If I run
kubectl get pod
I can see that the pod is present:
NAME READY STATUS RESTARTS AGE
kubia 1/1 Running 1 6d22h
However, when I do the first step to create a service,
kubectl expose rc kubia --type=LoadBalancer --name kubia-http
which the book shows responding with service "kubia-http" exposed, I instead get these errors:
Error from server (NotFound): replicationcontrollers "kubia" not found
Error from server (NotFound): replicationcontrollers "service" not found
Error from server (NotFound): replicationcontrollers "kubia-http" not found
Error from server (NotFound): replicationcontrollers "exposed" not found
Any ideas why I am getting this error and what I need to do to correct it?
I am using minikube v1.13.1 on macOS Mojave (v10.14.6), and I can't upgrade because I am using a company-supplied machine, and all updates are controlled by HQ.

In the book, the command used is kubectl run kubia --image=luksa/kubia --port=8080 --generator=run/v1, which created a ReplicationController back when the book was written; however, that generator is now deprecated.
Nowadays the kubectl run command creates a standalone pod without a ReplicationController, so to expose your pod you should run:
kubectl expose pod kubia --type=LoadBalancer --name kubia-http
To run replicated pods, it is recommended to use a Deployment instead. To create one using the CLI you can simply run
kubectl create deployment <name_of_deployment> --image=<image_to_be_used>
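For instance, using the image from the book (the deployment name kubia is chosen here to match the pod above):
kubectl create deployment kubia --image=luksa/kubia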
This will create a Deployment and one pod, which can then be exposed much like the pod above:
kubectl expose deployment kubia --type=LoadBalancer --name kubia-http
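Note that on minikube a LoadBalancer Service's EXTERNAL-IP stays <pending> unless a tunnel is running; one simple way to reach the Service (assuming the kubia-http name above) is:
minikube service kubia-http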

ReplicationControllers are an older concept; the current recommendation is to create Services and Deployments instead. Check out this answer.
A Service template looks like the following (note that protocol must be uppercase TCP):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: App
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Then, after saving the Service config into a file, run kubectl apply -f <filename>.
Read more at: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
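To verify the Service actually selected your pods, you can inspect its endpoints (a quick check, assuming the my-service name from the template above):
kubectl describe svc my-service | grep -i endpoints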

kubia.yaml
https://kubernetes.io/ko/docs/concepts/workloads/controllers/deployment/
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: seunggab/kubia:latest
        ports:
        - containerPort: 8080
shell
https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel
kubectl apply -f kubia.yaml
kubectl expose deployment kubia --type=LoadBalancer --port 8080 --name kubia-http
minikube tunnel &
curl 127.0.0.1:8080
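If curl cannot connect, first check that the tunnel has actually assigned the Service an external IP (a quick check; -w watches until the EXTERNAL-IP column fills in, then Ctrl-C):
kubectl get svc kubia-http -w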
If you want to change the number of replicas, edit kubia.yaml (3 -> 5):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 5
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: seunggab/kubia:latest
        ports:
        - containerPort: 8080
re-apply
kubectl apply -f kubia.yaml
# as-is
NAME READY STATUS RESTARTS AGE
kubia-5f896dc5d5-qp7wl 1/1 Running 0 20s
kubia-5f896dc5d5-rqqm5 1/1 Running 0 20s
kubia-5f896dc5d5-vqgj9 1/1 Running 0 20s
# to-be
NAME READY STATUS RESTARTS AGE
kubia-5f896dc5d5-fsd49 0/1 ContainerCreating 0 6s
kubia-5f896dc5d5-qp7wl 1/1 Running 0 3m35s
kubia-5f896dc5d5-rqqm5 1/1 Running 0 3m35s
kubia-5f896dc5d5-vqgj9 1/1 Running 0 3m35s
kubia-5f896dc5d5-x84fr 1/1 Running 0 6s
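As an aside, the same scale-up works without editing the file; kubectl's built-in scale command is equivalent to changing replicas to 5 and re-applying:
kubectl scale deployment kubia --replicas=5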
ref: https://github.com/seunggabi/kubernetes-in-action/wiki/2.-Kubernetes

Related

Can a Deployment controller control Pods that weren't created by it?

Say I have a pod YAML such as:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.19.1
And a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
Now I first create the Pod:
$ kubectl apply -f pod.yaml
And only then the Deployment:
$ kubectl apply -f deployment.yaml
I thought that, since the pod.yaml metadata includes an app: nginx label, the Deployment controller would only create 2 nginx:1.17.1 pods, but I see that all 3 are created. Why is that?
In addition to the app: nginx label, the Deployment controller also adds a pod-template-hash label to each pod that it creates.
If we check labels for running pods, we can see pod-template-hash=5d5dd5dd49 label for my-deployment pods:
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
my-deployment-5d5dd5dd49-9tbcx 1/1 Running 0 55s app=nginx,pod-template-hash=5d5dd5dd49
my-deployment-5d5dd5dd49-b88f4 1/1 Running 0 55s app=nginx,pod-template-hash=5d5dd5dd49
my-deployment-5d5dd5dd49-x7n8q 1/1 Running 0 55s app=nginx,pod-template-hash=5d5dd5dd49
nginx 1/1 Running 0 62s app=nginx
According to the official documentation:
The pod-template-hash label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, and in any existing Pods that the ReplicaSet might have.
This is why the Deployment did not adopt the standalone pod: that pod carries only the app: nginx label and lacks the pod-template-hash label, so it does not match the ReplicaSet's selector, and the ReplicaSet still creates all 3 replicas of its own.
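You can confirm that the hash is part of the ReplicaSet's selector (a quick check, using the ReplicaSet name implied by the pod names above):
$ kubectl get rs my-deployment-5d5dd5dd49 -o jsonpath='{.spec.selector.matchLabels}'
# expect both app=nginx and pod-template-hash=5d5dd5dd49 in the output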

Expose every replica pod in a deployment/replicaset with a NodePort service

I have 2 bare-metal worker nodes and a ReplicaSet with 10 pod replicas. How can I expose each pod using a NodePort?
You can use the kubectl expose command to create a NodePort Service for a ReplicaSet.
This is a template that may be useful:
kubectl expose rs <REPLICASET_NAME> --port=<PORT> --target-port=<TARGET_PORT> --type=NodePort
The most important flags are:
--port: The port that the service should serve on. Copied from the resource being exposed, if unspecified.
--target-port: Name or number for the port on the container that the service should direct traffic to. Optional.
--type: Type for this service: ClusterIP, NodePort, LoadBalancer, or ExternalName. Default is 'ClusterIP'.
NOTE: Detailed information on this command can be found in the Kubectl Reference Docs.
I will create an example to illustrate how it works
First, I created a simple app-1 ReplicaSet:
$ cat app-1-rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: app-1
  labels:
    app: app-1
spec:
  replicas: 10
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - name: nginx
        image: nginx
$ kubectl apply -f app-1-rs.yaml
replicaset.apps/app-1 created
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
app-1 10 10 10 93s
Then I used the kubectl expose command to create a NodePort Service for the app-1 ReplicaSet:
### kubectl expose rs <REPLICASET_NAME> --port=<PORT> --target-port=<TARGET_PORT> --type=NodePort
$ kubectl expose rs app-1 --port=80 --type=NodePort
service/app-1 exposed
$ kubectl get svc app-1
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-1 NodePort 10.8.6.92 <none> 80:32275/TCP 18s
$ kubectl describe svc app-1 | grep -i endpoints
Endpoints: 10.4.0.15:80,10.4.0.16:80,10.4.0.17:80 + 7 more...
If you want to make some modifications before creating the Service, you can export the Service definition to a manifest file and apply it after making the modifications:
kubectl expose rs app-1 --port=80 --type=NodePort --dry-run=client -oyaml > app-1-svc.yaml
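For example, you might pin the NodePort to a fixed value instead of a randomly assigned one; a sketch of the edited ports section (30080 is a hypothetical value that must be free and inside the cluster's NodePort range, 30000-32767 by default):
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30080   # hypothetical fixed value; omit to let Kubernetes pick one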
Additionally, it's worth considering if you can use Deployment instead of directly using ReplicaSet. As we can find in the Kubernetes ReplicaSet documentation:
Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don't require updates at all.
You can create a Service whose selector points to the pods and use the NodePort type; also make sure kube-proxy is deployed on every node.
An example YAML is below; set the right selector and port for your pods, then get the Service details to find the assigned nodePort after it is created.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
The official doc covers this well: https://kubernetes.io/docs/concepts/services-networking/service/
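Once the Service exists, a quick way to read back the assigned nodePort and test it (a sketch, using the my-service name from the YAML above; <node-ip> is a placeholder for any worker node's address):
kubectl get svc my-service -o jsonpath='{.spec.ports[0].nodePort}'
curl http://<node-ip>:<nodePort>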

Kubernetes DNS: list all IPs for a service

I have a list of pods like so:
❯ kubectl get pods -l app=test-pod
NAME READY STATUS RESTARTS AGE
test-deployment-674667c867-jhvg4 1/1 Running 0 14m
test-deployment-674667c867-ssx6h 1/1 Running 0 14m
test-deployment-674667c867-t4crn 1/1 Running 0 14m
I have a service
kubectl get services
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default test-service ClusterIP 10.100.4.138 <none> 4000/TCP 15m
I perform a dns query:
❯ kubectl exec -ti test-deployment-674667c867-jhvg4 -- /bin/bash
root@test-deployment-674667c867-jhvg4:/# busybox nslookup test-service
Server: 10.100.0.10
Address: 10.100.0.10:53
Name: test-service.default.svc.cluster.local
Address: 10.100.4.138
My config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: python-http-server
        image: python:2.7
        command: ["/bin/bash"]
        args: ["-c", "echo \" Hello from $(hostname)\" > index.html; python -m SimpleHTTPServer 80"]
        ports:
        - name: http
          containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  selector:
    app: test-pod
  ports:
  - protocol: TCP
    port: 4000
    targetPort: http
How can I instead get a list of all the pods' IP addresses via a DNS query?
Ideally I would like to perform an nslookup of a name and get back a list of all the pods' IPs.
You have to use a headless Service with selectors; for such a Service, the DNS query returns the IP addresses of the pods.
See here:
https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
.spec.clusterIP must be "None"
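A minimal sketch of what that looks like, reusing the test-service definition from the question with only the clusterIP field changed:
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  clusterIP: None   # headless: nslookup now returns one A record per ready pod
  selector:
    app: test-pod
  ports:
  - protocol: TCP
    port: 4000
    targetPort: http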

Kubernetes - how to access statefulset headless service from another namespace?

How do I access headless services from another namespace? If I try to access the service by <service>.<namespace>, I sometimes connect to the replica DB dbhost001-1. I want to connect specifically to the master DB dbhost001-0.
kubectl get pods -n test-db-dev
NAME READY STATUS RESTARTS AGE
dbhost001-0 1/1 Running 0 38m
dbhost001-1 1/1 Running 0 17m
headless-service.yml:
apiVersion: v1
kind: Service
metadata:
  name: dbhost001
  labels:
    app: dbhost001-service
  namespace: test-db-dev
spec:
  selector:
    app: dbhost001 # Metadata label of the deployment pod template or pod metadata label
  clusterIP: None
  ports:
  - name: mysql-port # Optional when there is just one port
    protocol: TCP
    port: 3306
    targetPort: 3306
This assumes you're using a StatefulSet for the DB.
If both the StatefulSet and its governing Service are named dbhost001 and run in the default namespace, you can connect to Pod dbhost001-0 at the address dbhost001-0.dbhost001.default.svc.
Format: <pod name>.<service name>.<namespace>.svc
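Applied to the question's setup (namespace test-db-dev), the master's address becomes dbhost001-0.dbhost001.test-db-dev.svc. For example, a sketch with a MySQL client (credentials are placeholders):
mysql -h dbhost001-0.dbhost001.test-db-dev.svc -P 3306 -u <user> -p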

Will updating a Kubernetes deployment create a new pod?

I have an existing kubernetes deployment which is running fine. Now I want to edit it with some new environment variables which I will use in the pod.
Will editing the deployment delete and create a new pod, or will it update the existing pods in place?
My requirement is I want to create a new pod whenever I edit/update the deployment.
Kubernetes will always recreate your pods when you change or add env vars.
Let's check this together by creating a deployment without any env vars on it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Let's check and note these pod names so we can compare later:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-56db997f77-9mpjx 1/1 Running 0 8s
nginx-deployment-56db997f77-mgdv9 1/1 Running 0 8s
nginx-deployment-56db997f77-zg96f 1/1 Running 0 8s
Now let's edit this deployment and include one env var, making the manifest look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        env:
        - name: STACK_GREETING
          value: "Hello from the MARS"
        ports:
        - containerPort: 80
After we finish editing, let's check our pod names and see if they changed:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-5b4b68cb55-9ll7p 1/1 Running 0 25s
nginx-deployment-5b4b68cb55-ds9kb 1/1 Running 0 23s
nginx-deployment-5b4b68cb55-wlqgz 1/1 Running 0 21s
As we can see, all pod names changed. Let's check if our env var got applied:
$ kubectl exec -ti nginx-deployment-5b4b68cb55-9ll7p -- sh -c 'echo $STACK_GREETING'
Hello from the MARS
The same behavior will occur if you change the var's value or remove it: all pods need to be removed and created again for the change to take effect.
If you want an additional pod rather than replacements, you need to create a new deployment for it; by design, Deployments manage the replicas of the pods that belong to them.
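As an aside, the same env change can be made without editing the manifest; kubectl set env triggers the same rollout that replaces all pods (assuming the nginx-deployment name above):
kubectl set env deployment/nginx-deployment STACK_GREETING='Hello from the MARS'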