I am using the manifest below. I have a simple server that prints the pod name on /hello. While going through the Kubernetes documentation, I read that a service can also be accessed via its service name, but that is not working for me. Since this is a service of type NodePort, I am able to access it using the IP of one of the nodes. Is there something wrong with my manifest?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myhttpserver
  labels:
    day: zero
    name: httppod
spec:
  replicas: 1
  selector:
    matchLabels:
      name: httppod
      day: zero
  template:
    metadata:
      labels:
        day: zero
        name: httppod
    spec:
      containers:
      - name: myappcont
        image: agoyalib/trial:tryit
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: servit
  labels:
    day: zeroserv
spec:
  type: NodePort
  selector:
    day: zero
    name: httppod
  ports:
  - name: mine
    port: 8080
    targetPort: 8090
Edit: I created my own mini k8s cluster and I am doing these operations on the master node.
From what I understand, when you say
As this is a service of type NodePort, I am able to access it using IP of one of the nodes
You're accessing your service from outside your cluster. That's why you can't access it using its name.
To access a service using its name, you need to be inside the cluster.
Below is an example where you use a pod based on centos in order to connect to your service using its name:
# Here we're just creating a pod based on centos
$ kubectl run centos --image=centos:7 --generator=run-pod/v1 --command -- sleep infinity
# Now let's connect to that pod
$ kubectl exec -ti centos -- bash
[root@centos /]# curl servit:8080/hello
You need to be inside the cluster, meaning you can access it from another pod. For example:
kubectl run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 --rm -it -- nslookup servit
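If service DNS is working, the output will look roughly like this (the addresses are illustrative and will differ in your cluster):
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      servit
Address 1: <cluster IP of servit> servit.default.svc.cluster.local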
Related
I am trying to access a Kubernetes NodePort service from the browser on macOS, running on minikube, but I can't.
This is the Service definition file:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30004
  selector:
    app: myapp
And this is the Pod definition file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-2
  labels:
    env: production
    app: myapp
spec:
  containers:
  - name: nginx
    image: nginx
And this is the Deployment definition file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    tier: frontend
    app: myapp
spec:
  selector:
    matchLabels:
      env: production
  replicas: 6
  template:
    metadata:
      name: nginx-2
      labels:
        env: production
    spec:
      containers:
      - name: nginx
        image: nginx
I cannot access the NodePort service using this URL in the browser:
http://localhost:30004
Also, by entering the minikube ip instead of localhost in the URL above, a timeout happens.
And finally, by running the below command:
minikube service myapp-service --url
A sample output like this is generated:
http://127.0.0.1:53751
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
But then the below error is shown to the user:
The connection was reset
Update: The issue was that the label app: myapp was missing from the template section of the Deployment definition file.
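For reference, a minimal sketch of what the fixed template section would look like, so the pod labels match the service selector app: myapp:
  # Corrected Deployment template (sketch): the pod template now carries
  # the app: myapp label that the service selector expects.
  template:
    metadata:
      name: nginx-2
      labels:
        env: production
        app: myapp   # previously missing, so the service matched no pods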
You are not exposing your localhost to the NodePort; you are exposing the NodePort on your Kubernetes cluster.
So you have to access it at http://nodeip:nodePort.
Think about what localhost is.
You have a localhost on your PC.
You have a localhost on the VM (the minikube node).
You have a localhost in each container running inside your cluster.
If you want to use your PC's localhost to access a port inside a container in a Pod, you can do this:
kubectl port-forward svc/serviceName reachablePortFromyourPc:containerPort
For example:
kubectl port-forward svc/serviceName 80:80
This starts a port forwarding session. As long as it is running, you can access the service from your browser.
This is good only for testing.
To access the NodePort, use:
minikubeIp:NodePort
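As a rough sketch for the service above (myapp-service, nodePort 30004); note that with the Docker driver on macOS the node IP is often not reachable directly, which is why the minikube service tunnel is the fallback:
# Option 1: NodePort on the minikube node IP (the IP shown is just an example)
$ minikube ip
192.168.49.2
$ curl http://192.168.49.2:30004

# Option 2: let minikube open a tunnel and print a reachable URL
$ minikube service myapp-service --url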
I have two Ubuntu VMs created using Oracle VirtualBox on my Windows 11 laptop. I set up a k8s cluster using kubeadm with these two Ubuntu VMs; one of them is the master node and the other is a worker node. Both nodes are running Ubuntu 20.04.3 LTS and docker://20.10.7. I deployed my Spring Boot app into the k8s cluster and exposed a NodePort service for it with port 30000, but I am not really sure how to access my NodePort service on the internet, outside my cluster. Could you please help me with this issue?
Following are the IP addresses of the nodes in my k8s cluster: master [192.168.254.94] and worker [192.168.254.95]. I tried the following URLs but none of them worked:
http://192.168.254.94:30000/swagger-ui.html
http://192.168.254.95:30000/swagger-ui.html
The above URLs throw a message which says refused to connect.
http://192.168.9.13:30000/swagger-ui.html
http://192.168.9.14:30000/swagger-ui.html
The above URLs say that the site cannot be reached.
Below is the content of my application.yaml which I used for deploying the Spring Boot app and its corresponding service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dealer-engine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dealer-engine
  template:
    metadata:
      labels:
        app: dealer-engine
    spec:
      containers:
      - name: dealer-engine
        image: moviepopcorn/dealer_engine:0.0.1
        ports:
        - containerPort: 9090
        env:
        - name: MONGO_URL
          value: mongodb://mongo-service:27017/mazda
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: dealer-engine
spec:
  type: NodePort
  selector:
    app: dealer-engine
  ports:
  - port: 9091
    targetPort: 9090
    nodePort: 30000
  externalIPs:
  - 10.0.0.12
I am a beginner with k8s, so please help me understand how I can access my NodePort service outside my k8s cluster.
I created a new simple Spring Boot application which returns "Hello world!!!" to the user when the "/helloWorld" endpoint is invoked. I deployed this Spring Boot app into my k8s cluster using the below YAML configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: moviepopcorn/hello_world:0.0.1
        ports:
        - containerPort: 9091
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - port: 9091
    targetPort: 9091
    nodePort: 30001
After successful deployment, I am able to access the helloWorld endpoint using the following URL: <K8S_MASTER_NODE_IP>:<NODE_PORT (30001)>.
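For example, a request like the one below should return the greeting (placeholder kept as above):
$ curl http://<K8S_MASTER_NODE_IP>:30001/helloWorld
Hello world!!!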
Thank you all for your answers and inputs. Very much appreciated.
Have you installed any CNI plugin, like flannel?
If yes, check your CIDR settings here:
kubectl get node k8s-master -o yaml | grep podCIDR:
kubectl get configmap -n kube-system kube-flannel-cfg -o yaml | grep '"Network":'
Basically yes, a CNI is a must; flannel is the simplest one.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
Download the CNI plugins on every server.
# download cni plugin
wget https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar -xzvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin
Reset your cluster:
kubeadm reset
Init your cluster with a pod CIDR setting that matches the flannel config (default 10.244.0.0/16):
kubeadm init --pod-network-cidr=10.244.0.0/16
Apply the flannel manifest:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
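As a quick sanity check after applying flannel (a rough sketch; the flannel namespace depends on the manifest version):
# Nodes should move to Ready once the CNI is up
kubectl get nodes

# The flannel and CoreDNS pods should be Running
kubectl get pods -A | grep -E 'flannel|coredns'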
I want to access an application running inside a pod from the browser.
YAML file of the pod:
apiVersion: v1
kind: Pod
metadata:
  name: app-pod6
  labels:
    name: app-pod
    app: containerization
spec:
  containers:
  - name: appimage1
    image: appimage6:1.0
    ports:
    - containerPort: 3000
When I do
curl localhost:3000/app/home
it gives me a response inside the container. Now I want to access it from the browser.
I created a service to expose the application using the command:
kubectl expose pod app-pod6 --name=app-svc6 --port=3000 --type=NodePort
When I describe the service, it gives me NodePort: 31974.
And when I do 'ip add', I get the Kubernetes IP as 192.168.102.128.
But I can't access the application at 192.168.102.128:31974.
I deployed a REST API application YAML in Kubernetes and tried to access that API from another namespace, but it shows an error. How can I access the REST API from a different namespace? Below is my deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configuration
  labels:
    app: configuration
  namespace: restapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: configuration
  template:
    metadata:
      labels:
        app: configuration
    spec:
      containers:
      - name: configuration
        image: global.azurecr.io/config:1
        env:
        - name: AzureFunctionsJobHost__functions__0
          value: configuration
        envFrom:
        - secretRef:
            name: configuration
      imagePullSecrets:
      - name: pull
The URL for API calls from the other namespace is
"http://configuration.restapi:80/api/configuration"
I tried with .restapi in my URL but it's not working.
I can call the REST API in the same namespace.
You can always do a simple test to check if you can reach it from a different namespace.
Let's say you don't have a service.
Then you can reach the pod directly using its IP, like below.
kubectl run dnstest --image=busybox:1.28 --restart=Never --rm -ti -- nslookup 10-36-0-2.default.pod
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: 10-36-0-2.default.pod
Address 1: 10.36.0.2
pod "dnstest" deleted
And if you expose the service like below:
kubectl expose deployment configuration --port 80 -n restapi
Then the result of the test is below.
kubectl run dnstest --image=busybox:1.28 --restart=Never --rm -ti -- nslookup configuration.restapi
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: configuration.restapi
Address 1: 10.105.110.174 configuration.restapi.svc.cluster.local
pod "dnstest" deleted
You can use configuration.restapi or configuration.restapi.svc or configuration.restapi.svc.cluster.local in a standard Kubernetes environment.
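Once the name resolves, a call from a pod in another namespace would look roughly like this (the path is taken from the question, and the service is assumed to be exposed on port 80 as above):
curl http://configuration.restapi.svc.cluster.local:80/api/configuration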
Assuming configuration.restapi is the name of the service you're trying to access and is the name you would use within that namespace, you would use "configuration.restapi.othernamespace.svc.cluster.local".
I created a pod with an api and a web Docker container in Kubernetes using a YAML file (see below).
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    purpose: test
spec:
  containers:
  - name: api
    image: gcr.io/test-1/api:latest
    ports:
    - containerPort: 8085
      name: http
      protocol: TCP
  - name: web
    image: gcr.io/test-1/web:latest
    ports:
    - containerPort: 5000
      name: http
      protocol: TCP
It shows my pod is up and running:
NAME   READY   STATUS    RESTARTS   AGE
test   2/2     Running   0          5m
but I don't know how to expose it from here.
It seems odd that I would have to run kubectl run .... again, as the pod is already running. It does not show a deployment though.
If I try something like
kubectl expose deployment test --type="NodePort" --port 80 --target-port 5000
it complains about deployments.extensions "test" not found. What is the cleanest way to deploy from here?
To expose a deployment to the public internet, you will want to use a Service. The service type LoadBalancer handles this nicely, as you can just use pod selectors in the yaml file.
So if my deployment.yaml looks like this:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: test-dply
spec:
  selector:
    # Defines the selector that can be matched by a service for this deployment
    matchLabels:
      app: test_pod
  template:
    metadata:
      labels:
        # Puts the label on the pod, this must match the matchLabels selector
        app: test_pod
    spec:
      # Our containers for training each model
      containers:
      - name: mycontainer
        image: myimage
        imagePullPolicy: Always
        command: ["/bin/bash"]
        ports:
        - name: containerport
          containerPort: 8085
Then the service that would link to it is:
kind: Service
apiVersion: v1
metadata:
  # Name of our service
  name: prodigy-service
spec:
  # LoadBalancer type to allow external access to multiple ports
  type: LoadBalancer
  selector:
    # Will deliver external traffic to the pod holding each of our containers
    app: test_pod
  ports:
  - name: sentiment
    protocol: TCP
    port: 80
    targetPort: containerport
You can deploy these two items by using kubectl create -f /path/to/dply.yaml and kubectl create -f /path/to/svc.yaml. Quick note: The service will allocate a public IP address, which you can find using kubectl get services with the following output:
NAME             TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
carbon-relay     ClusterIP      *.*.*.*      <none>        2003/TCP       78d
comparison-api   LoadBalancer   *.*.*.*      *.*.*.*       80:30920/TCP   15d
It can take several minutes to allocate the IP, just a forewarning. But the LoadBalancer's IP is fixed, and you can delete the pod that it points to and re-spin it without consequence. So if I want to edit my test-dply deployment, I can without worrying about my service being impacted. You should rarely have to spin down services.
You have created a pod, not a deployment.
Then you tried to expose a deployment (and not your pod).
Try:
kubectl expose pod test --type=NodePort --port=80 --target-port=5000
kubectl expose pod test --type=LoadBalancer --port=XX --target-port=XXXX
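After exposing the pod, a rough way to find the assigned NodePort and test it (node IP and port are illustrative placeholders):
# See which NodePort was assigned to the service
kubectl get svc test

# Then hit any node's IP on that port
curl http://<node-ip>:<node-port>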
If you already have a pod and a service running, you can create an Ingress for the service you want to expose to the internet.
If you want to create it through the console, Google Cloud provides a really easy way to create an Ingress from an existing service. Go to the Services & Ingress tab, select the service, click on Create Ingress, and fill in the name and other mandatory fields.
Or you can create it using a YAML file:
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
name: "example-ingress"
namespace: "default"
spec:
defaultBackend:
service:
name: "example-service"
port:
number: 8123
status:
loadBalancer: {}
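A minimal sketch of applying and checking the Ingress above (the file name is an assumption):
# Apply the manifest and wait for an address to appear in the ADDRESS column
kubectl apply -f example-ingress.yaml
kubectl get ingress example-ingress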