How do I access a headless service from another namespace? If I access it via <service>.<namespace>, I sometimes get connected to the replica DB dbhost001-1, but I want to connect specifically to the dbhost001-0 master DB.
kubectl get pods -n test-db-dev
NAME READY STATUS RESTARTS AGE
dbhost001-0 1/1 Running 0 38m
dbhost001-1 1/1 Running 0 17m
headless-service.yml:
apiVersion: v1
kind: Service
metadata:
  name: dbhost001
  labels:
    app: dbhost001-service
  namespace: test-db-dev
spec:
  selector:
    app: dbhost001  # must match the metadata label of the pod (or the deployment's pod template)
  clusterIP: None
  ports:
    - name: mysql-port  # optional when there is only one port
      protocol: TCP
      port: 3306
      targetPort: 3306
Assuming you're using a StatefulSet for the DB:
If both the StatefulSet and its governing Service are named dbhost001 and run in the default namespace, you can reach Pod dbhost001-0 at the address dbhost001-0.dbhost001.default.svc.
Format: <pod name>.<service name>.<namespace>.svc
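For example, from a pod in another namespace you could target the master directly via its fully qualified pod DNS name, using the test-db-dev namespace from the question (a sketch; assumes a MySQL client and valid credentials are available in that pod):
mysql -h dbhost001-0.dbhost001.test-db-dev.svc.cluster.local -P 3306 -u <user> -p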
I was trying to use the Service externalIPs feature on an EKS cluster.
What I did
I created an EKS cluster with eksctl:
eksctl create cluster --name=test --region=eu-north-1 --nodes=1
I opened all security groups to make sure I don't have a firewall issue; the ACLs also allow all traffic.
I took the public IP of the only available worker node and tried to use it with a simple Service + Deployment.
This should be just 1 Deployment with 1 ReplicaSet and 1 nginx Pod, attached to a Service with an external/public IP that everyone can reach.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: nginx
          image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: app
  labels:
    app: app
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: app
  externalIPs:
    - 13.51.55.82
When I apply it, everything seems to work just fine. I can port-forward my app service to localhost and see the output (kubectl port-forward svc/app 9999:80 -> curl localhost:9999).
But the problem is that I cannot reach this service via its public IP.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app ClusterIP 10.100.140.38 13.51.55.82 80/TCP 49m
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 62m
$ curl 13.51.55.82:80
curl: (7) Failed to connect to 13.51.55.82 port 80: Connection refused
Thoughts
To me it looks like the service is not connected to the node itself. When I SSH into the node and set up a simple web server on port 80, it responds immediately.
I know I could use NodePort, but in my case I ultimately want to use the fixed port 4000, and NodePort only allows ports in the range 30000-32768.
Question
I want to be able to curl my service via its public IP on a certain port below 30000 (so NodePort doesn't apply).
How can I make this work with Kubernetes Service externalIPs on an EKS cluster?
Edit I:
FYI: I do not want to use LoadBalancer.
I am using minikube to learn about Docker and Kubernetes, but I have come across a problem.
I am following along with the examples in Kubernetes in Action, and I am trying to expose a pod built from an image on my Docker Hub account, but I cannot make this pod visible.
if I run
kubectl get pod
I can see that the pod is present.
NAME READY STATUS RESTARTS AGE
kubia 1/1 Running 1 6d22h
However, when I run the first step to create a service,
kubectl expose rc kubia --type=LoadBalancer --name kubia-http service "kubia-http" exposed
I am getting this error returned
Error from server (NotFound): replicationcontrollers "kubia" not found
Error from server (NotFound): replicationcontrollers "service" not found
Error from server (NotFound): replicationcontrollers "kubia-http" not found
Error from server (NotFound): replicationcontrollers "exposed" not found
Any ideas why I am getting this error and what I need to do to correct it?
I am using minikube v1.13.1 on macOS Mojave (10.14.6), and I can't upgrade because I am using a company-supplied machine and all updates are controlled by HQ.
The command used in the book, kubectl run kubia --image=luksa/kubia --port=8080 --generator=run/v1, used to create a ReplicationController back when the book was written; however, that generator is now deprecated.
These days kubectl run creates a standalone pod without a ReplicationController, so to expose it you should run:
kubectl expose pod kubia --type=LoadBalancer --name kubia-http
To create replicated pods, it is now recommended to use a Deployment. To create one with the CLI you can simply run:
kubectl create deployment <name_of_deployment> --image=<image_to_be_used>
This will create a Deployment and one pod, which can then be exposed similarly to the pod above:
kubectl expose deployment kubia --type=LoadBalancer --name kubia-http
ReplicationControllers are an older concept than Services and Deployments in Kubernetes; check out this answer.
A service template looks like the following:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: App
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Then, after saving the service config into a file, run kubectl apply -f <filename>.
Check out more at: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
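After applying, a quick way to confirm the selector actually matched some pods is to check the service's endpoints (a sketch, using the my-service name from the template above):
kubectl get svc my-service
kubectl get endpoints my-service
If the Endpoints column shows <none>, the selector doesn't match any pod labels.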
kubia.yaml
https://kubernetes.io/ko/docs/concepts/workloads/controllers/deployment/
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
        - name: kubia
          image: seunggab/kubia:latest
          ports:
            - containerPort: 8080
shell
https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel
kubectl apply -f kubia.yaml
kubectl expose deployment kubia --type=LoadBalancer --port 8080 --name kubia-http
minikube tunnel &
curl 127.0.0.1:8080
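With the tunnel running, the LoadBalancer service is assigned an external IP, which you can check with kubectl get svc (on some minikube drivers it shows up as 127.0.0.1, which is why the curl above targets localhost; if your setup assigns a different EXTERNAL-IP, curl that address instead):
kubectl get svc kubia-http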
If you change replicas:
Change kubia.yaml (replicas: 3 -> 5):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 5
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
        - name: kubia
          image: seunggab/kubia:latest
          ports:
            - containerPort: 8080
re-apply
kubectl apply -f kubia.yaml
# as-is
NAME READY STATUS RESTARTS AGE
kubia-5f896dc5d5-qp7wl 1/1 Running 0 20s
kubia-5f896dc5d5-rqqm5 1/1 Running 0 20s
kubia-5f896dc5d5-vqgj9 1/1 Running 0 20s
# to-be
NAME READY STATUS RESTARTS AGE
kubia-5f896dc5d5-fsd49 0/1 ContainerCreating 0 6s
kubia-5f896dc5d5-qp7wl 1/1 Running 0 3m35s
kubia-5f896dc5d5-rqqm5 1/1 Running 0 3m35s
kubia-5f896dc5d5-vqgj9 1/1 Running 0 3m35s
kubia-5f896dc5d5-x84fr 1/1 Running 0 6s
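Alternatively, the same scale-out works without editing the file, using kubectl's built-in scale subcommand:
kubectl scale deployment kubia --replicas=5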
ref: https://github.com/seunggabi/kubernetes-in-action/wiki/2.-Kubernetes
I am trying to deploy a simple Flask app (Python web framework) on a Kubernetes cluster. I am using minikube.
Here's my Helm 3 stuff:
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app-deployment
  labels:
    app: flask-app
    some: label
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-app-pod
  template:
    metadata:
      labels:
        app: flask-app-pod
    spec:
      containers:
        - name: flask-app-container
          image: flask_app:0.0.1
          imagePullPolicy: Never
          ports:
            - name: app
              containerPort: 5000
              protocol: TCP
          securityContext:  # root access for debugging
            allowPrivilegeEscalation: false
            runAsUser: 0
Service:
apiVersion: v1
kind: Service
metadata:
  name: flak-app-service
  labels:
    service: flask-app-services
spec:
  type: NodePort
  ports:
    - port: 5000
      targetPort: 5000
      protocol: TCP
      name: https
  selector:
    app: flask-app-pod
Chart:
apiVersion: v2
name: flask-app
type: application
version: 0.0.1
appVersion: 0.0.1
I deploy this by doing helm install test-chart/ --generate-name.
Sample output of kubectl get all:
NAME READY STATUS RESTARTS AGE
pod/flask-app-deployment-d94b86cc9-jcmxg 1/1 Running 0 8m19s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/flak-app-service NodePort 10.98.48.114 <none> 5000:30317/TCP 8m19s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d2h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/flask-app-deployment 1/1 1 1 8m19s
NAME DESIRED CURRENT READY AGE
replicaset.apps/flask-app-deployment-d94b86cc9 1 1 1 8m19s
I exec'd into the pod to check if it's listening on the correct port, looks fine (netstat output):
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 1/python3
My Dockerfile should be fine; I can build the image and call the app when running it as a "normal" Docker container.
It must be something stupid. What am I not seeing here?
I would expect to be able to go to https://localhost:30317, which gets forwarded to the service listening on port 5000 internally, which in turn forwards to the pod that also listens on port 5000.
To find where traffic breaks, you can port-forward at each level (the local port 12345 here is arbitrary; the remote port must be the one the app listens on, 5000):
kubectl port-forward pods/flask-app-deployment-d94b86cc9-jcmxg 12345:5000
or
kubectl port-forward deployment/flask-app-deployment 12345:5000
or
kubectl port-forward service/flak-app-service 12345:5000
depending upon where you want to debug.
Also, please validate with netstat -tunlp whether your host is listening on the allotted port or not.
Hope this solves your error; let me know if it does not.
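One more minikube-specific check (an assumption about your setup, since the question doesn't say how you reach the node): NodePort services on minikube are usually reached via the minikube VM's IP rather than localhost, and over plain HTTP unless you configured TLS yourself. minikube can print the exact URL for you:
minikube service flak-app-service --url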
I'm using a ReplicaSet to manage my pods, and I'm trying to expose these pods with a service. The pods created by a ReplicaSet have randomized names.
NAME READY STATUS RESTARTS AGE
master 2/2 Running 0 20m
worker-4szkz 2/2 Running 0 21m
worker-hwnzt 2/2 Running 0 21m
I'm trying to expose these pods with a Service, since some policies restrict me from using hostNetwork=true. I'm able to expose them by creating a NodePort service for each pod with kubectl expose pod worker-xxxxx --type=NodePort.
This is clearly not a flexible way. I wonder how to create a single Service (LoadBalancer type, maybe?) that dynamically reaches all the replicas in my ReplicaSet. If that comes with a Deployment, that would be perfect too.
Thanks for any help and advice!
Edit:
I put a label on my ReplicaSet and created a NodePort-type Service called worker selecting that label. But I'm not able to ping worker from any of my pods. What's the correct way of doing this?
Below is the output of kubectl describe service worker. As the Endpoints show, the pods are picked up.
Name: worker
Namespace: default
Annotations: <none>
Selector: tag=worker
Type: NodePort
IP: 10.106.45.174
Port: port1 29999/TCP
TargetPort: 29999/TCP
NodePort: port1 31934/TCP
Endpoints: 10.32.0.3:29999,10.40.0.2:29999
Port: port2 29996/TCP
TargetPort: 29996/TCP
NodePort: port2 31881/TCP
Endpoints: 10.32.0.3:29996,10.40.0.2:29996
Port: port3 30001/TCP
TargetPort: 30001/TCP
NodePort: port3 31877/TCP
Endpoints: 10.32.0.3:30001,10.40.0.2:30001
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I believe you can improve this a bit by using a Deployment instead of a ReplicaSet (this is now the standard way), i.e. you could have a Deployment as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
Then your service to match this would be:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  # This is the important part, as this is what is used to route to
  # the pods created by your deployment
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
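As a side note on the edit above: a Service's cluster IP is virtual and generally does not answer ICMP, so ping worker failing is expected even when the Service is healthy. Test the actual port instead, e.g. with a throwaway busybox pod against the nginx-service defined here (a sketch):
kubectl run tmp --rm -it --image=busybox -- wget -qO- http://nginx-service:80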
I created a pod with an API and a web Docker container in Kubernetes using a YAML file (see below).
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    purpose: test
spec:
  containers:
    - name: api
      image: gcr.io/test-1/api:latest
      ports:
        - containerPort: 8085
          name: http
          protocol: TCP
    - name: web
      image: gcr.io/test-1/web:latest
      ports:
        - containerPort: 5000
          name: http
          protocol: TCP
It shows my pod is up and running:
NAME READY STATUS RESTARTS AGE
test 2/2 Running 0 5m
but I don't know how to expose it from here.
It seems odd that I would have to run kubectl run .... again, as the pod is already running. It does not show a deployment, though.
If I try something like
kubectl expose deployment test --type=NodePort --port 80 --target-port 5000
it complains about deployments.extensions "test" not found. What is the cleanest way to deploy from here?
To expose a deployment to the public internet, you will want to use a Service. The LoadBalancer service type handles this nicely, as you can just use pod selectors in the YAML file.
So if my deployment.yaml looks like this:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: test-dply
spec:
  selector:
    # Defines the selector that can be matched by a service for this deployment
    matchLabels:
      app: test_pod
  template:
    metadata:
      labels:
        # Puts the label on the pod; this must match the matchLabels selector
        app: test_pod
    spec:
      # Our containers for training each model
      containers:
        - name: mycontainer
          image: myimage
          imagePullPolicy: Always
          command: ["/bin/bash"]
          ports:
            - name: containerport
              containerPort: 8085
Then the service that would link to it is:
kind: Service
apiVersion: v1
metadata:
  # Name of our service
  name: prodigy-service
spec:
  # LoadBalancer type to allow external access to multiple ports
  type: LoadBalancer
  selector:
    # Will deliver external traffic to the pod holding each of our containers
    app: test_pod
  ports:
    - name: sentiment
      protocol: TCP
      port: 80
      targetPort: containerport
You can deploy these two items by using kubectl create -f /path/to/dply.yaml and kubectl create -f /path/to/svc.yaml. Quick note: the service will allocate a public IP address, which you can find using kubectl get services, with output like:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
carbon-relay ClusterIP *.*.*.* <none> 2003/TCP 78d
comparison-api LoadBalancer *.*.*.* *.*.*.* 80:30920/TCP 15d
It can take several minutes to allocate the IP, just a forewarning. But the LoadBalancer's IP is fixed, and you can delete the pod it points to and re-spin it without consequence. So if I want to edit my test-dply, I can do so without worrying about my service being impacted. You should rarely have to spin down services.
You have created a pod, not a deployment.
Then you tried to expose a deployment (and not your pod).
Try:
kubectl expose pod test --type=NodePort --port=80 --target-port=5000
kubectl expose pod test --type=LoadBalancer --port=XX --target-port=XXXX
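With the NodePort variant you can then look up the port Kubernetes assigned and curl it from outside (a sketch; assumes the service takes the pod's name, test, by default, and that the node IP is reachable from your machine):
kubectl get svc test
curl http://<node-ip>:<assigned-node-port>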
If you already have a pod and service running, you can create an Ingress for the service you want to expose to the internet.
If you want to create it through the console, Google Cloud provides a really easy way to create an Ingress from an existing Service: go to the Services & Ingress tab, select the service, click Create Ingress, and fill in the name and other mandatory fields.
Or you can create it using a YAML file:
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
name: "example-ingress"
namespace: "default"
spec:
defaultBackend:
service:
name: "example-service"
port:
number: 8123
status:
loadBalancer: {}
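Then apply it and wait for the Ingress controller to assign an address (the file name here is hypothetical; the address can take a few minutes to appear):
kubectl apply -f example-ingress.yaml
kubectl get ingress example-ingress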