I created a pod with an api and a web Docker container in Kubernetes using a YAML file (see below).
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    purpose: test
spec:
  containers:
  - name: api
    image: gcr.io/test-1/api:latest
    ports:
    - containerPort: 8085
      name: http
      protocol: TCP
  - name: web
    image: gcr.io/test-1/web:latest
    ports:
    - containerPort: 5000
      name: http
      protocol: TCP
It shows my pod is up and running:
NAME   READY   STATUS    RESTARTS   AGE
test   2/2     Running   0          5m
but I don't know how to expose it from here.
It seems odd that I would have to run kubectl run .... again, as the pod is already running. It does not show a deployment, though.
if I try something like
kubectl expose deployment test --type="NodePort" --port 80 --target-port 5000
it complains about deployments.extensions "test" not found. What is the cleanest way to deploy from here?
To expose a deployment to the public internet, you will want to use a Service. The service type LoadBalancer handles this nicely, as you can just use pod selectors in the YAML file.
So if my deployment.yaml looks like this:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: test-dply
spec:
  selector:
    # Defines the selector that can be matched by a service for this deployment
    matchLabels:
      app: test_pod
  template:
    metadata:
      labels:
        # Puts the label on the pod; this must match the matchLabels selector
        app: test_pod
    spec:
      # Our containers for training each model
      containers:
      - name: mycontainer
        image: myimage
        imagePullPolicy: Always
        command: ["/bin/bash"]
        ports:
        - name: containerport
          containerPort: 8085
Then the service that would link to it is:
kind: Service
apiVersion: v1
metadata:
  # Name of our service
  name: prodigy-service
spec:
  # LoadBalancer type to allow external access to multiple ports
  type: LoadBalancer
  selector:
    # Will deliver external traffic to the pod holding each of our containers
    app: test_pod
  ports:
  - name: sentiment
    protocol: TCP
    port: 80
    targetPort: containerport
You can deploy these two items by using kubectl create -f /path/to/dply.yaml and kubectl create -f /path/to/svc.yaml. Quick note: the service will allocate a public IP address, which you can find using kubectl get services, with output like the following:
NAME             TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
carbon-relay     ClusterIP      *.*.*.*      <none>        2003/TCP       78d
comparison-api   LoadBalancer   *.*.*.*      *.*.*.*       80:30920/TCP   15d
It can take several minutes to allocate the IP, just a forewarning. But the LoadBalancer's IP is fixed, and you can delete the pod that it points to and re-spin it without consequence. So if I want to edit my test-dply deployment, I can do so without worrying about my service being impacted. You should rarely have to spin down services.
You have created a pod, not a deployment.
Then you tried to expose a deployment (and not your pod).
Try:
kubectl expose pod test --type=NodePort --port=80 --target-port=5000
kubectl expose pod test --type=LoadBalancer --port=XX --target-port=XXXX
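To verify, list the service that expose created and, if you used the NodePort variant, note the assigned port; a sketch, where <node-ip> and <node-port> come from the output of the commands below:
kubectl get service test
kubectl describe service test
curl http://<node-ip>:<node-port>/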
If you already have a pod and service running, you can create an ingress for the service you want to expose to the internet.
If you want to create it through the console, Google Cloud provides a really easy way to create an ingress from an existing service: go to the Services and Ingress tab, select the service, click Create Ingress, and fill in the name and other mandatory fields.
Or you can create one using a YAML file:
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
name: "example-ingress"
namespace: "default"
spec:
defaultBackend:
service:
name: "example-service"
port:
number: 8123
status:
loadBalancer: {}
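You can then apply it and watch for the ingress to get an address (a sketch; the file name is assumed):
kubectl apply -f ingress.yaml
kubectl get ingress example-ingress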
Related
I have two pods. One pod (main) has an endpoint that queries the other pod (other) to get the result. Both pods have services of type ClusterIP, and the main pod also has an ingress. The main pod is not able to connect to the other pod to query the given endpoint: the / endpoint works, but the /other endpoint fails.
Below are the config files:
# main-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: main-service
  labels:
    name: main-service-label
spec:
  selector:
    app: main # label selector of the pod, not the deployment
  type: ClusterIP
  ports:
  - port: 8001
    protocol: TCP
    targetPort: 8001
# other-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: other-service
  labels:
    name: other-service-label
spec:
  selector:
    app: other # label selector of the pod, not the deployment
  type: ClusterIP
  ports:
  - port: 8002
    protocol: TCP
    targetPort: 8002
All the Docker images, deployment files, ingress, etc. are made available in this repo.
Note:
I entered the other pod using kubectl exec, and I am able to make a curl request to the main pod, but not vice versa. I am not sure what is going wrong.
All pods and services are in the default namespace.
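For reference, the checks described in the note look roughly like this (the pod names are placeholders for whatever kubectl get pods reports):
kubectl exec -it <other-pod> -- curl http://main-service:8001/    # this direction works
kubectl exec -it <main-pod> -- curl http://other-service:8002/    # this direction fails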
I created one pod with the label app: appenv and one service of type NodePort with the selector app: appenv. But when I run kubectl get ep service-name it shows "no endpoints" (meaning the service is not connecting to that pod).
Here are my pod.yaml and service.yaml
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: appenv
spec:
  containers:
  - name: app
    image: aathith/lbt:v1
    ports:
    - name: app-port
      containerPort: 8082
  restartPolicy: Never
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: appenv
  type: NodePort
  ports:
  - name: http
    port: 8082
    targetPort: 8082
    nodePort: 30082
    protocol: TCP
(Screenshots showing the output of kubectl get po --show-labels, kubectl get svc, and kubectl get svc app are omitted here.)
Why am I not able to connect my pod to this service?
How can I change the above files to connect with each other?
Your pod is in the "Completed" state, not "Running"; that is the problem. Why? Because the command in the container finished with exit code 0. Normally, a container's main command should not exit unless it is a Job or CronJob. You see what I mean?
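To illustrate (a sketch, not the asker's actual image): a command that exits immediately leaves the pod Completed, while a long-running foreground process keeps it Running:
containers:
- name: app
  image: aathith/lbt:v1
  # command: ["/bin/sh", "-c", "echo done"]       # exits at once -> pod ends up Completed
  command: ["/bin/sh", "-c", "sleep infinity"]     # stays in the foreground -> pod stays Running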
I have the following pods, hello-abc and hello-def,
and I want to send data from hello-abc to hello-def.
How would pod hello-abc know the IP address of hello-def?
And I want to do this programmatically.
What's the easiest way for hello-abc to find hello-def?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-abc-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-abc
    spec:
      containers:
      - name: hello-abc
        image: hello-abc:v0.0.1
        imagePullPolicy: Always
        args: ["/hello-abc"]
        ports:
        - containerPort: 5000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-def-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-def
    spec:
      containers:
      - name: hello-def
        image: hello-def:v0.0.1
        imagePullPolicy: Always
        args: ["/hello-def"]
        ports:
        - containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: hello-abc-service
spec:
  ports:
  - port: 80
    targetPort: 5000
    protocol: TCP
  selector:
    app: hello-abc
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: hello-def-service
spec:
  ports:
  - port: 80
    targetPort: 5001
    protocol: TCP
  selector:
    app: hello-def
  type: NodePort
Preface
Since you have defined a service that routes to each deployment, and provided you have deployed both services and deployments into the same namespace, in most modern Kubernetes clusters you can take advantage of kube-dns and simply refer to the service by name.
Unfortunately, if kube-dns is not configured in your cluster (although that is unlikely), you cannot refer to it by name.
You can read more about DNS records for services here.
In addition, Kubernetes features "Service Discovery", which exposes the ports and IPs of your services as environment variables in any container deployed into the same namespace.
Solution
This means that to reach hello-def, you can do so like this:
curl http://hello-def-service:${HELLO_DEF_SERVICE_PORT}
based on Service Discovery https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
Caveat: It's very possible that if the Service port changes, only pods created in the same namespace after the change will receive the new environment variables.
External Access
In addition, you can also reach your service externally, since you are using the NodePort feature, as long as your NodePort range is accessible from outside.
This requires you to access your service via node-ip:nodePort.
You can find out the NodePort which was randomly assigned to your service with kubectl describe svc/hello-def-service
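A sketch of that external path, with the placeholders to be filled in from the command output:
kubectl describe svc/hello-def-service   # note the NodePort line
kubectl get nodes -o wide                # note a node's external (or internal) IP
curl http://<node-ip>:<node-port>/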
Ingress
To reach your service from outside, you should implement an ingress controller such as nginx-ingress:
https://github.com/helm/charts/tree/master/stable/nginx-ingress
https://github.com/kubernetes/ingress-nginx
Sidecar
If your two services are tightly coupled, you can include both in the same pod using the Kubernetes sidecar pattern. In this case, both containers in the pod share the same virtual network adapter and are accessible via localhost:$port.
https://kubernetes.io/docs/concepts/workloads/pods/pod/#uses-of-pods
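A minimal sketch of that sidecar layout, reusing the hypothetical hello images from the question; both containers share the pod's network namespace:
apiVersion: v1
kind: Pod
metadata:
  name: hello-sidecar
spec:
  containers:
  - name: hello-abc
    image: hello-abc:v0.0.1
    ports:
    - containerPort: 5000
  - name: hello-def          # reachable from hello-abc at localhost:5001
    image: hello-def:v0.0.1
    ports:
    - containerPort: 5001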
Service Discovery
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. It supports both Docker-links-compatible variables (see makeLinkVariables) and simpler {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the Service name is upper-cased and dashes are converted to underscores.
Read more about service discovery here:
https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
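Applying that naming rule to hello-def-service, the variables visible inside hello-abc's containers would look like this (the IP value is illustrative; the port comes from the Service spec above):
HELLO_DEF_SERVICE_SERVICE_HOST=10.0.0.11
HELLO_DEF_SERVICE_SERVICE_PORT=80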
You should be able to reach hello-def-service from pods in hello-abc via DNS as specified here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
However, kube-dns or CoreDNS has to be configured/installed in your k8s cluster before DNS records can be utilized in your cluster.
Specifically, you should be able to reach hello-def-service via the DNS name hello-def-service for the service running in the same namespace as hello-abc-service.
And you should be able to reach hello-def-service running in another namespace other_namespace via the DNS name hello-def-service.other_namespace.svc.cluster.local.
If, for some reason, you do not have DNS add-ons installed in your cluster, you can still find the virtual IP of hello-def-service via environment variables in the hello-abc pods, as documented here.
Suppose I have a service containing two pods. One of the pods is an HTTP server, and the other pod needs to hit a REST endpoint on this pod. Is there a hostname that the second pod can use to address the first pod?
I'm assuming when you say "service" you aren't referring to the Kubernetes lexicon of a Service object, otherwise your two Pods in the Service would be identical, so let's start by teasing out what a "Service" means in Kubernetes land.
You will have to create an additional Kubernetes object called a Service to get your hostname for your HTTP server's Pod. When you create a Service you will define a .spec.selector that points to a set of labels on the HTTP service's Pod. For the sake of example, let's say the label is app: nginx. The name of that Service object will become the internal DNS record that can be queried by the second Pod.
A simplified example:
apiVersion: v1
kind: Pod
metadata:
  name: http-service
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: my-http-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Now your second Pod can make requests to the HTTP service by the Service name, my-http-service.
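For example, from a shell in the second Pod (the pod name is a placeholder):
kubectl exec -it <second-pod> -- curl http://my-http-service/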
It's also worth mentioning that Kubernetes best practice dictates that these Pods be managed by controllers such as Deployments or ReplicaSets for all sorts of reasons, including high availability of your applications.
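For instance, a sketch of the same nginx Pod managed by a Deployment; the template carries the same app: nginx label, so the Service above keeps matching:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80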
Note that a service is a different concept in Docker than in Kubernetes. The easiest way of getting what you want would be to create the two pods, say pod-1 and pod-2, with a YAML file similar to this one:
apiVersion: v1
kind: Pod
metadata:
  name: NAME
  labels:
    app: LABEL
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
Say NAME and LABEL are nginx and nginx-1, so you now have two pods called nginx and nginx-1, with labels app: nginx and app: nginx-1. Actually, since only one of them is going to be exposed, the other label is irrelevant.
Now you expose the pod either with a YAML file or from the command line.
YAML file:
apiVersion: v1
kind: Service
metadata:
  name: server
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: nginx
Command line:
kubectl expose pod nginx --port 80 --name server
If you now access the second pod (nginx-1) and curl the service directly, you end up hitting the pod behind it (nginx):
nerus:~/workspace $ kubectl exec -it nginx-1 bash
root@nginx-1:/# curl -I server
HTTP/1.1 200 OK
You can expose your pod with kubectl expose deployment <name> --type=NodePort, then use kubectl describe on the service it creates; it will show you the port number. You can then access your pod at http://localhost:<port-number>. Hope it will help.
Ironically, you answered your own question: a Service is a stable name and IP that abstracts over the individual coming-and-going of the Pods to which it will route traffic, as described very well in the fine manual.
If the-http-pod needs to reach the-rest-pod, then create a Service that matches the labels on the PodSpec that created the-rest-pod; from that point forward, the-http-pod can always use ${serviceName}.${serviceNamespace}.svc.cluster.local to reach any Pod that has matching labels.
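Concretely, assuming a Service named the-rest-service in the default namespace (both names hypothetical), the-http-pod could call:
curl http://the-rest-service.default.svc.cluster.local/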
This is the way I understand the flow in question:
1. When requesting a Kubernetes service (via HTTP, for example) I am using port 80.
2. The request is forwarded to a pod (still on port 80).
3. The pod forwards the request to the (Docker) container that exposes port 80.
4. The container handles the request.
However, my container exposes a different port, let's say 3000. How can I make a port mapping like 80:3000 in step 2 or 3?
There are confusing options like targetPort and hostPort in the Kubernetes docs which didn't help me. kubectl port-forward seems to forward only my local (development) machine's port to a specific pod for debugging.
These are the commands I use for setting up a service in Google Cloud:
kubectl run test-app --image=eu.gcr.io/myproject/my_app --port=80
kubectl expose deployment test-app --type="LoadBalancer"
I found that I needed to add some arguments to my second command:
kubectl expose deployment test-app --type="LoadBalancer" --target-port=3000 --port=80
This creates a service which directs incoming http traffic (on port 80) to its pods on port 3000.
A nicer way to do this whole thing is with YAML files, service.yaml and deployment.yaml, and calling
kubectl create -f deployment.yaml
kubectl create -f service.yaml
where the files have these contents
# deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: user-app
        image: eu.gcr.io/myproject/my_app
        ports:
        - containerPort: 3000
and
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: test-app
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer
Note that the selector of the service must match the label of the deployment.
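If the two ever drift apart, the quickest check is whether the service found any pods; an empty ENDPOINTS column means the selector matched nothing:
kubectl get endpoints app-service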