Call REST API from a different namespace in Kubernetes

I deployed a REST API application in Kubernetes and tried to access that API from another namespace, but it returns an error. How can I access the REST API from a different namespace? Below is my deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configuration
  labels:
    app: configuration
  namespace: restapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: configuration
  template:
    metadata:
      labels:
        app: configuration
    spec:
      containers:
      - name: configuration
        image: global.azurecr.io/config:1
        env:
        - name: AzureFunctionsJobHost__functions__0
          value: configuration
        envFrom:
        - secretRef:
            name: configuration
      imagePullSecrets:
      - name: pull
The URL for API calls from the other namespace is
"http://configuration.restapi:80/api/configuration"
I tried with .restapi in my URL but it's not working.
I can call the REST API within the same namespace.

You can always run a simple test to check whether you can reach the pod from a different namespace.
Let's say you don't have a Service yet.
Then you can reach the pod directly using its IP-based DNS name (here 10-36-0-2.default.pod for pod IP 10.36.0.2 in the default namespace), like below:
kubectl run dnstest --image=busybox:1.28 --restart=Never --rm -ti -- nslookup 10-36-0-2.default.pod
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: 10-36-0-2.default.pod
Address 1: 10.36.0.2
pod "dnstest" deleted
And if you expose the deployment as a service like below,
kubectl expose deployment configuration --port 80 -n restapi
then the result of the same test is:
kubectl run dnstest --image=busybox:1.28 --restart=Never --rm -ti -- nslookup configuration.restapi
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: configuration.restapi
Address 1: 10.105.110.174 configuration.restapi.svc.cluster.local
pod "dnstest" deleted
You can use configuration.restapi, configuration.restapi.svc, or configuration.restapi.svc.cluster.local in a standard Kubernetes environment.
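For reference, a declarative equivalent of that kubectl expose command could look like the sketch below (the targetPort of 80 is an assumption; set it to whatever port the container actually listens on):
apiVersion: v1
kind: Service
metadata:
  name: configuration
  namespace: restapi
spec:
  selector:
    app: configuration  # must match the Deployment's pod labels
  ports:
  - port: 80        # port the Service exposes
    targetPort: 80  # port the container listens on (assumed)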

Assuming configuration is the name of the Service you're trying to access (the name you would use within its own namespace, restapi), from another namespace you would use "configuration.restapi.svc.cluster.local".
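As a quick check from a pod in another namespace (assuming the Service exists and listens on port 80, with the path from the question):
curl http://configuration.restapi.svc.cluster.local/api/configuration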

Related

Accessing application running inside pod in browser

I want to access an application running inside a pod from the browser.
YAML file of the pod:
apiVersion: v1
kind: Pod
metadata:
  name: app-pod6
  labels:
    name: app-pod
    app: containerization
spec:
  containers:
  - name: appimage1
    image: appimage6:1.0
    ports:
    - containerPort: 3000
When I do
curl localhost:3000/app/home
it gives me a response inside the container. Now I want to access it from the browser.
I created a service exposing the application using the command:
kubectl expose pod app-pod6 --name=app-svc6 --port=3000 --type=NodePort
When I describe the service, it gives me NodePort: 31974.
And when I run 'ip add', I get the Kubernetes node IP as 192.168.102.128.
But I can't access the application at 192.168.102.128:31974.

Not able to access service by service name on kubernetes

I am using the manifest below. I have a simple server which prints the pod name on /hello. Going through the Kubernetes documentation, I saw that we can access a service via its service name as well, but that is not working for me. As this is a service of type NodePort, I am able to access it using the IP of one of the nodes. Is there something wrong with my manifest?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myhttpserver
  labels:
    day: zero
    name: httppod
spec:
  replicas: 1
  selector:
    matchLabels:
      name: httppod
      day: zero
  template:
    metadata:
      labels:
        day: zero
        name: httppod
    spec:
      containers:
      - name: myappcont
        image: agoyalib/trial:tryit
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: servit
  labels:
    day: zeroserv
spec:
  type: NodePort
  selector:
    day: zero
    name: httppod
  ports:
  - name: mine
    port: 8080
    targetPort: 8090
Edit: I created my own mini k8s cluster and I am doing these operations on the master node.
From what I understand, when you say
As this is a service of type NodePort, I am able to access it using IP of one of the nodes
you're accessing your service from outside your cluster. That's why you can't access it using its name.
To access a service using its name, you need to be inside the cluster.
Below is an example where you use a pod based on CentOS in order to connect to your service using its name:
# Here we're just creating a pod based on centos
$ kubectl run centos --image=centos:7 --generator=run-pod/v1 --command -- sleep infinity
# Now let's connect to that pod
$ kubectl exec -ti centos -- bash
[root@centos /]# curl servit:8080/hello
You need to be inside the cluster, meaning you can access it from another pod:
kubectl run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 --rm -it -- nslookup servit
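To go one step further than DNS resolution and actually hit the endpoint, a throwaway busybox pod could run wget against the service (port 8080 and the /hello path are taken from the question's manifest):
kubectl run --generator=run-pod/v1 test-wget --image=busybox:1.28 --rm -it -- wget -qO- http://servit:8080/hello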

Find why I am getting 502 Bad Gateway error on Kubernetes

I am using Kubernetes. I have an Ingress service which talks to my container service. We have exposed a web API, which works fine, but we keep getting 502 Bad Gateway errors. I am new to Kubernetes and I have no clue how to go about debugging this issue. The server is a Node.js server connected to a database. Is there anything wrong with the configuration?
My deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-pod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-pod
    spec:
      containers:
      - name: my-pod
        image: my-image
        ports:
        - name: "http"
          containerPort: 8086
        resources:
          limits:
            memory: 2048Mi
            cpu: 1020m
---
apiVersion: v1
kind: Service
metadata:
  name: my-pod-serv
spec:
  ports:
  - port: 80
    targetPort: "http"
  selector:
    app: my-pod
My Ingress Service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: abc.test.com
    http:
      paths:
      - path: /abc
        backend:
          serviceName: my-pod-serv
          servicePort: 80
In your case:
I think you get this 502 gateway error because you don't have the Ingress controller configured correctly.
Please try to do it with an installed Ingress controller as in the example below. It will do everything automatically.
Nginx Ingress step by step:
1) Install helm
2) Install nginx controller using helm
$ helm install stable/nginx-ingress --name nginx-ingress
It will create 2 services. You can get their details via
$ kubectl get svc
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kubernetes                      ClusterIP      10.39.240.1     <none>        443/TCP                      29d
nginx-ingress-controller       LoadBalancer   10.39.243.140   35.X.X.15     80:32324/TCP,443:31425/TCP   19m
nginx-ingress-default-backend  ClusterIP      10.39.252.175   <none>        80/TCP                       19m
nginx-ingress-controller - in short, it handles requests coming in through the Ingress and directs them to the right services
nginx-ingress-default-backend - in short, the default backend is a service which handles all URL paths and hosts that the nginx controller doesn't understand
3) Create 2 deployments (or use yours)
$ kubectl run my-pod --image=nginx
deployment.apps/my-pod created
$ kubectl run nginx1 --image=nginx
deployment.apps/nginx1 created
4) Connect to one of the pods
$ kubectl exec -ti my-pod-675799d7b-95gph bash
And append an additional line to the index page, so we can recognize it when we connect later:
$ echo "HELLO THIS IS INGRESS TEST" >> /usr/share/nginx/html/index.html
$ exit
5) Expose deployments.
$ kubectl expose deploy nginx1 --port 80
service/nginx1 exposed
$ kubectl expose deploy my-pod --port 80
service/my-pod exposed
This will automatically create a service, which will look like:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-pod
  name: my-pod
  selfLink: /api/v1/namespaces/default/services/my-pod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: my-pod
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
6) Now it's time to create Ingress.yaml and deploy it. Each rule in the Ingress needs to be specified. Here I have 2 services; each service specification starts with - host under the rules parameter.
Ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: two-svc-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.pod.svc
    http:
      paths:
      - path: /pod
        backend:
          serviceName: my-pod
          servicePort: 80
  - host: nginx.test.svc
    http:
      paths:
      - path: /abc
        backend:
          serviceName: nginx1
          servicePort: 80
$ kubectl apply -f Ingress.yaml
ingress.extensions/two-svc-ingress created
7) You can check Ingress and hosts
$ kubectl get ingress
NAME              HOSTS                       ADDRESS        PORTS   AGE
two-svc-ingress   my.pod.svc,nginx.test.svc   35.228.230.6   80      57m
8) Explanation of why I installed the Ingress controller.
Connect to the ingress controller pod
$ kubectl exec -ti nginx-ingress-controller-76bf4c745c-prp8h bash
www-data@nginx-ingress-controller-76bf4c745c-prp8h:/etc/nginx$ cat /etc/nginx/nginx.conf
Because I had installed the nginx ingress controller earlier, the nginx-ingress-controller noticed the changes after Ingress.yaml was deployed and automatically added the necessary configuration.
In this file you should be able to find the whole configuration for the two services. I will not copy the configuration, only the headers:
start server my.pod.svc
start server nginx.test.svc
www-data@nginx-ingress-controller-76bf4c745c-prp8h:/etc/nginx$ exit
9) Test
$ kubectl get svc    # to get your nginx-ingress-controller external IP
$ curl -H "HOST: my.pod.svc" http://35.X.X.15/
default backend - 404
$ curl -H "HOST: my.pod.svc" http://35.X.X.15/pod
<!DOCTYPE html>
...
</html>
HELLO THIS IS INGRESS TEST
Please keep in mind that the Ingress needs to be in the same namespace as the services. If you have services in many namespaces, you need to create an Ingress for each namespace.
I would need to set up a cluster in order to test your YAML files.
Just to help you debug, follow these steps (sketched in the commands below):
1- Get the logs of the my-pod container using kubectl logs my-pod-container-name and make sure everything is working.
2- Use port-forward to expose your container and test it locally.
3- Make sure the service is working properly; change its type to LoadBalancer so you can reach it from outside the cluster.
If all three things work, then the problem is with your Ingress configuration.
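A minimal sketch of those three checks, with names taken from the question's manifests:
# 1) Check the container logs (the pod name will carry a generated suffix)
$ kubectl logs deploy/my-pod
# 2) Forward a local port to the service and test it
$ kubectl port-forward svc/my-pod-serv 8080:80
$ curl http://localhost:8080/
# 3) Temporarily switch the service to type LoadBalancer
$ kubectl patch svc my-pod-serv -p '{"spec": {"type": "LoadBalancer"}}'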
I am not sure if I explained it in a detailed way; let me know if something is not clear.

expose kubernetes pod to internet

I created a pod with an API and a web Docker container in Kubernetes using a YAML file (see below).
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    purpose: test
spec:
  containers:
  - name: api
    image: gcr.io/test-1/api:latest
    ports:
    - containerPort: 8085
      name: http
      protocol: TCP
  - name: web
    image: gcr.io/test-1/web:latest
    ports:
    - containerPort: 5000
      name: http
      protocol: TCP
It shows my pod is up and running:
NAME   READY   STATUS    RESTARTS   AGE
test   2/2     Running   0          5m
but I don't know how to expose it from here.
It seems odd that I would have to run kubectl run ... again, as the pod is already running. It does not show a deployment, though.
If I try something like
kubectl expose deployment test --type="NodePort" --port 80 --target-port 5000
it complains about deployments.extensions "test" not found. What is the cleanest way to deploy from here?
To expose a deployment to the public internet, you will want to use a Service. The service type LoadBalancer handles this nicely, as you can just use pod selectors in the YAML file.
So if my deployment.yaml looks like this:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: test-dply
spec:
  selector:
    # Defines the selector that can be matched by a service for this deployment
    matchLabels:
      app: test_pod
  template:
    metadata:
      labels:
        # Puts the label on the pod; this must match the matchLabels selector
        app: test_pod
    spec:
      # Our containers for training each model
      containers:
      - name: mycontainer
        image: myimage
        imagePullPolicy: Always
        command: ["/bin/bash"]
        ports:
        - name: containerport
          containerPort: 8085
Then the service that would link to it is:
kind: Service
apiVersion: v1
metadata:
  # Name of our service
  name: prodigy-service
spec:
  # LoadBalancer type to allow external access to multiple ports
  type: LoadBalancer
  selector:
    # Will deliver external traffic to the pod holding each of our containers
    app: test_pod
  ports:
  - name: sentiment
    protocol: TCP
    port: 80
    targetPort: containerport
You can deploy these two items by using kubectl create -f /path/to/dply.yaml and kubectl create -f /path/to/svc.yaml. Quick note: the service will allocate a public IP address, which you can find using kubectl get services, with output like the following:
NAME             TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
carbon-relay     ClusterIP      *.*.*.*      <none>        2003/TCP       78d
comparison-api   LoadBalancer   *.*.*.*      *.*.*.*       80:30920/TCP   15d
It can take several minutes to allocate the IP, just as a forewarning. But the LoadBalancer's IP is fixed, and you can delete the pod that it points to and re-spin it without consequence. So if I want to edit my test-dply, I can do so without worrying about my service being impacted. You should rarely have to spin down services.
You have created a pod, not a deployment, and then you have tried to expose a deployment (not your pod).
Try (see the sketch after these commands):
kubectl expose pod test --type=NodePort --port=80 --target-port=5000
or
kubectl expose pod test --type=LoadBalancer --port=XX --target-port=XXXX
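For reference, a declarative sketch of what that first expose command generates (the selector comes from the pod's label in the question; port 5000 for the web container is carried over from the question's command):
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: NodePort
  selector:
    purpose: test  # the pod's label from the question
  ports:
  - port: 80
    targetPort: 5000  # the web container's port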
If you already have a pod and service running, you can create an Ingress for the service you want to expose to the internet.
If you want to create it through the console, Google Cloud provides a really easy way to create an Ingress from an existing service: go to the Services and Ingress tab, select the service, click on create ingress, and fill in the name and other mandatory fields.
Or you can create it using a YAML file:
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
name: "example-ingress"
namespace: "default"
spec:
defaultBackend:
service:
name: "example-service"
port:
number: 8123
status:
loadBalancer: {}

Mapping incoming port in kubernetes service to different port on docker container

This is the way I understand the flow in question:
When requesting a Kubernetes service (via HTTP, for example), I am using port 80.
The request is forwarded to a pod (still on port 80).
The pod forwards the request to the (Docker) container that exposes port 80.
The container handles the request.
However, my container exposes a different port, let's say 3000. How can I make a port mapping like 80:3000 in step 2 or 3?
There are confusing options like targetPort and hostPort in the Kubernetes docs, which didn't help me. kubectl port-forward seems to forward only my local (development) machine's port to a specific pod, for debugging.
These are the commands I use for setting up a service on Google Cloud:
kubectl run test-app --image=eu.gcr.io/myproject/my_app --port=80
kubectl expose deployment test-app --type="LoadBalancer"
I found that I needed to add some arguments to my second command:
kubectl expose deployment test-app --type="LoadBalancer" --target-port=3000 --port=80
This creates a service which directs incoming http traffic (on port 80) to its pods on port 3000.
A nicer way to do this whole thing is with the YAML files service.yaml and deployment.yaml, and calling
kubectl create -f deployment.yaml
kubectl create -f service.yaml
where the files have these contents
# deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: user-app
        image: eu.gcr.io/myproject/my_app
        ports:
        - containerPort: 3000
and
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: test-app
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer
Note that the selector of the service must match the label in the deployment's pod template.
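As a quick sanity check that the selector actually matched the pods, you could list the service's endpoints; a non-empty ENDPOINTS column means the service found its pods (the IPs below are placeholders):
$ kubectl get endpoints app-service
NAME          ENDPOINTS                       AGE
app-service   10.x.x.x:3000,10.x.x.x:3000     1m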