Accessing application running inside pod in browser - kubernetes

I want to access an application running inside pod from the browser.
Yaml file of pod
apiVersion: v1
kind: Pod
metadata:
  name: app-pod6
  labels:
    name: app-pod
    app: containerization
spec:
  containers:
  - name: appimage1
    image: appimage6:1.0
    ports:
    - containerPort: 3000
When I do
curl localhost:3000/app/home
it gives me a response inside the container. Now I want to access it from the browser.
I created a service for exposing application using command:
kubectl expose pod app-pod6 --name=app-svc6 --port=3000 --type=NodePort
And when I describe the service, it gives me NodePort: 31974.
And when I run ip add, I get the Kubernetes node IP as 192.168.102.128.
But I can't access the application at 192.168.102.128:31974.
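For reference, the imperative kubectl expose command above is roughly equivalent to a declarative manifest like the sketch below (the nodePort value is normally assigned by the cluster; 31974 here just mirrors the one reported by kubectl describe):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-svc6
spec:
  type: NodePort
  selector:
    # must match the pod's labels
    name: app-pod
    app: containerization
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 31974
```

With a NodePort service, the application is reached at http://&lt;node-IP&gt;:31974, where the node IP must be the IP of a cluster node; pointing at a different interface or machine is one common reason such an address does not respond.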

Related

How to access app once deployed via Kubernetes?

I have a very simple Python app that works fine when I execute uvicorn main:app --reload. When I go to http://127.0.0.1:8000 on my machine, I'm able to interact with the API. (My app has no frontend, it is just an API built with FastAPI). However, I am trying to deploy this via Kubernetes, but am not sure how I can access/interact with my API.
Here is my deployment.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
        ports:
        - containerPort: 80
When I enter kubectl describe deployments my-deployment in the terminal, I get back a printout of the deployment, the namespace it is in, the pod template, a list of events, etc. So I am pretty sure it is properly deployed.
How can I access the application? What would the URL be? I have tried a variety of localhost + port combinations to no avail. I am new to Kubernetes, so I'm trying to understand how this works.
Update:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: web
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: site
        image: nginx:1.16.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001
Again, when I use the k8s CLI, I'm able to see my deployment, yet when I hit localhost:30001, I get an Unable to connect message.
You have set containerPort: 80, but if your app listens on port 8080, change it to 8080.
There are different ways to access an application deployed on Kubernetes:
Port forward using kubectl port-forward deployment/my-deployment 8080:8080
Create a NodePort service and use http://<NODEIP>:<NODEPORT>
Create a LoadBalancer service. This works only in supported cloud environments such as AWS, GKE, etc.
Use an ingress controller such as nginx to expose the application.
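As an illustration of the NodePort option above, here is a sketch of a declarative equivalent (the service name and nodePort are hypothetical; the app is assumed to listen on 8080 as discussed above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-deployment-nodeport
spec:
  type: NodePort
  selector:
    app: nginx          # must match the Deployment's pod labels
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080     # optional; omit to let the cluster pick one in 30000-32767
```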
By default, Kubernetes applications are exposed only within the cluster. If you want to access one from outside the cluster, you can pick any of the options below:
Expose the Deployment as a NodePort service (kubectl expose deployment my-deployment --name=my-deployment-service --type=NodePort), describe the service and get the node port assigned to it (kubectl describe svc my-deployment-service). Then try http://<node-IP>:<node-port>/
For a production-grade cluster, the best practice is to use the LoadBalancer type (kubectl expose deployment my-deployment --name=my-deployment-service --type=LoadBalancer --target-port=8080). As part of this service you get an external IP, which can be used to access your service at http://EXTERNAL-IP:8080/
You can also see details about the endpoints using kubectl get ep

Not able to access service by service name on kubernetes

I am using the below manifest. I have a simple server which prints the pod name on /hello. I was going through the Kubernetes documentation, and it mentioned that we can access a service via its service name as well. But that is not working for me. As this is a service of type NodePort, I am able to access it using the IP of one of the nodes. Is there something wrong with my manifest?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myhttpserver
  labels:
    day: zero
    name: httppod
spec:
  replicas: 1
  selector:
    matchLabels:
      name: httppod
      day: zero
  template:
    metadata:
      labels:
        day: zero
        name: httppod
    spec:
      containers:
      - name: myappcont
        image: agoyalib/trial:tryit
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: servit
  labels:
    day: zeroserv
spec:
  type: NodePort
  selector:
    day: zero
    name: httppod
  ports:
  - name: mine
    port: 8080
    targetPort: 8090
Edit: I created my own mini k8s cluster and I am doing these operations on the master node.
From what I understand when you say
As this is a service of type NodePort, I am able to access it using IP of one of the nodes
You're accessing your service from outside your cluster. That's why you can't access it using its name.
To access a service using its name, you need to be inside the cluster.
Below is an example where you use a pod based on centos in order to connect to your service using its name :
# Here we're just creating a pod based on centos
$ kubectl run centos --image=centos:7 --generator=run-pod/v1 --command -- sleep infinity
# Now let's connect to that pod
$ kubectl exec -ti centos -- bash
[root@centos /]# curl servit:8080/hello
You need to be inside the cluster, meaning you can access it from another pod.
kubectl run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 --rm -it -- nslookup servit
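Since the --generator flag used above has been removed in newer kubectl versions, a declarative equivalent of such a throwaway test pod is a sketch like this (the pod name dns-test is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sleep", "3600"]
```

Then kubectl exec dns-test -- nslookup servit should resolve the service name from inside the cluster.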

Setting environment variables based on services in Kubernetes

I am running a simple app based on an api and web interface in Kubernetes. However, I can't seem to get the api to talk to the web interface. In my local environment, I just define a variable API_URL in the web interface with eg. localhost:5001 and the web interface correctly connects to the api. As api and web are running in different pods I need to make them talk to each other via services in Kubernetes. So far, this is what I am doing, but without any luck.
I set up a deployment for the API:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: api
  template:
    metadata:
      labels:
        component: api
    spec:
      containers:
      - name: api
        image: gcr.io/myproject-22edwx23/api:latest
        ports:
        - containerPort: 5001
I attach a service to it:
apiVersion: v1
kind: Service
metadata:
  name: api-cluster-ip-service
spec:
  type: NodePort
  selector:
    component: api
  ports:
  - port: 5001
    targetPort: 5001
and then create a web deployment that should connect to this api.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
      - name: web
        image: gcr.io/myproject-22edwx23/web:latest
        ports:
        - containerPort: 5000
        env:
        - name: API_URL
          value: http://api-cluster-ip-service:5001
Afterwards, I add a service for the web interface plus an ingress etc., but that seems irrelevant to the issue. I am wondering whether setting API_URL to http://api-cluster-ip-service:5001 correctly resolves to the host of the api.
Or can I not rely on Kubernetes providing the appropriate DNS for the api, and should the web app call the api via the public internet?
If you want to check API_URL variable value, simply run
kubectl exec -it web-deployment-pod -- env | grep API_URL
The kube-dns service listens for service and endpoint events from the Kubernetes API and updates its DNS records as needed. These events are triggered when you create, update or delete Kubernetes services and their associated pods.
kubelet sets each new pod's search option in /etc/resolv.conf
Still, if you want to make HTTP calls from one pod to another via a cluster service, it is recommended to refer to the service by its cluster DNS name, as follows:
api-cluster-ip-service.default.svc.cluster.local
You should have the service IP assigned to an env variable within your web pod, so there's no need to reinvent it:
sukhoversha#sukhoversha:~/GCP$ kk exec -it web-deployment-675f8fcf69-xmqt8 env | grep -i service
API_CLUSTER_IP_SERVICE_PORT=tcp://10.31.253.149:5001
API_CLUSTER_IP_SERVICE_PORT_5001_TCP=tcp://10.31.253.149:5001
API_CLUSTER_IP_SERVICE_PORT_5001_TCP_PORT=5001
API_CLUSTER_IP_SERVICE_PORT_5001_TCP_ADDR=10.31.253.149
See the Kubernetes documentation to read more about DNS for Services. A service also defines environment variables naming the host and port.
If you want to use environment variables you can do the following:
Example in Python:
import os
API_URL = os.environ['API_CLUSTER_IP_SERVICE_SERVICE_HOST'] + ":" + os.environ['API_CLUSTER_IP_SERVICE_SERVICE_PORT']
Notice that the environment variable is based on your service name. If you want to check all environment variables available in a Pod:
kubectl get pods   # get the pod name
kubectl exec -it {pod_name} -- printenv
P.S. Be careful that a Pod gets its environment variables during its creation and it will not be able to get it from services created after it.
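Given that caveat about creation order, an alternative is to pin the URL to the service's DNS name directly in the Deployment's env section, which does not depend on when the service was created. A sketch, assuming the service and port from the question:

```yaml
env:
- name: API_URL
  value: http://api-cluster-ip-service.default.svc.cluster.local:5001
```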

expose kubernetes pod to internet

I created a pod with an api and a web docker container in Kubernetes using a yaml file (see below).
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    purpose: test
spec:
  containers:
  - name: api
    image: gcr.io/test-1/api:latest
    ports:
    - containerPort: 8085
      name: http
      protocol: TCP
  - name: web
    image: gcr.io/test-1/web:latest
    ports:
    - containerPort: 5000
      name: http
      protocol: TCP
It shows my pod is up and running:
NAME READY STATUS RESTARTS AGE
test 2/2 Running 0 5m
but I don't know how to expose it from here.
It seems odd that I would have to run kubectl run ... again, as the pod is already running. It does not show up as a deployment, though.
if I try something like
kubectl expose deployment test --type="NodePort" --port 80 --target-port 5000
it complains about deployments.extensions "test" not found. What is the cleanest way to deploy from here?
To expose a deployment to the public internet, you will want to use a Service. The service type LoadBalancer handles this nicely, as you can just use pod selectors in the yaml file.
So if my deployment.yaml looks like this:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: test-dply
spec:
  selector:
    # Defines the selector that can be matched by a service for this deployment
    matchLabels:
      app: test_pod
  template:
    metadata:
      labels:
        # Puts the label on the pod; this must match the matchLabels selector
        app: test_pod
    spec:
      # Our containers for training each model
      containers:
      - name: mycontainer
        image: myimage
        imagePullPolicy: Always
        command: ["/bin/bash"]
        ports:
        - name: containerport
          containerPort: 8085
Then the service that would link to it is:
kind: Service
apiVersion: v1
metadata:
  # Name of our service
  name: prodigy-service
spec:
  # LoadBalancer type to allow external access to multiple ports
  type: LoadBalancer
  selector:
    # Will deliver external traffic to the pod holding each of our containers
    app: test_pod
  ports:
  - name: sentiment
    protocol: TCP
    port: 80
    targetPort: containerport
You can deploy these two items by using kubectl create -f /path/to/dply.yaml and kubectl create -f /path/to/svc.yaml. Quick note: The service will allocate a public IP address, which you can find using kubectl get services with the following output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
carbon-relay ClusterIP *.*.*.* <none> 2003/TCP 78d
comparison-api LoadBalancer *.*.*.* *.*.*.* 80:30920/TCP 15d
It can take several minutes to allocate the IP, just a forewarning. But the LoadBalancer's IP is fixed, and you can delete the pod that it points to and re-spin it without consequence. So if I want to edit my test-dply, I can without worrying about my service being impacted. You should rarely have to spin down services.
You have created a pod, not a deployment.
Then you have exposed a deployment (and not your pod).
Try:
kubectl expose pod test --type=NodePort --port=80 --target-port=5000
kubectl expose pod test --type=LoadBalancer --port=XX --target-port=XXXX
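The expose pod commands above can also be written as a manifest; a sketch selecting the pod by its purpose: test label (the service name test-svc is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-svc
spec:
  type: NodePort
  selector:
    purpose: test     # matches the pod's label
  ports:
  - port: 80
    targetPort: 5000  # the web container's port
```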
If you already have a pod and service running, you can create an ingress for the service you want to expose to the internet.
If you want to create it through the console, Google Cloud provides a really easy way to create an ingress from an existing service. Go to the Services and Ingress tab, select the service, click Create Ingress, and fill in the name and other mandatory fields.
Or you can create it using a yaml file:
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
  name: "example-ingress"
  namespace: "default"
spec:
  defaultBackend:
    service:
      name: "example-service"
      port:
        number: 8123
status:
  loadBalancer: {}
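If you need path-based routing rather than a single default backend, the same kind of Ingress can carry rules instead. A sketch, with the host as a placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com          # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 8123
```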

Mapping incoming port in kubernetes service to different port on docker container

This is the way I understand the flow in question:
When requesting a kubernetes service (via http for example) I am using port 80.
The request is forwarded to a pod (still on port 80)
The pod forwards the request to the (docker) container that exposes port 80
The container handles the request
However, my container exposes a different port, let's say 3000.
How can I make a port mapping like 80:3000 in step 2 or 3?
There are confusing options like targetPort and hostPort in the Kubernetes docs, which didn't help me. kubectl port-forward seems to forward only my local (development) machine's port to a specific pod, for debugging.
These are the commands I use for setting up a service in the google cloud:
kubectl run test-app --image=eu.gcr.io/myproject/my_app --port=80
kubectl expose deployment test-app --type="LoadBalancer"
I found that I needed to add some arguments to my second command:
kubectl expose deployment test-app --type="LoadBalancer" --target-port=3000 --port=80
This creates a service which directs incoming http traffic (on port 80) to its pods on port 3000.
A nicer way to do this whole thing is with yaml files service.yaml and deployment.yaml and calling
kubectl create -f deployment.yaml
kubectl create -f service.yaml
where the files have these contents
# deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: user-app
        image: eu.gcr.io/myproject/my_app
        ports:
        - containerPort: 3000
and
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: test-app
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer
Note that the selector of the service must match the label of the deployment.