I want to deploy a custom nginx app on my Kubernetes cluster.
I have three Raspberry Pis in a cluster. My deployment file looks as follows:
kubepodDeploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: privateRepo/my-nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx
How can I deploy it so that I can access my app by IP address? Which service type do I need?
My service details are:
kubectl describe service my-nginx ~/Project/htmlBasic
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Annotations: Selector: run=my-nginx
Type: NodePort
IP: 10.99.107.194
Port: http 8080/TCP
TargetPort: 80/TCP
NodePort: http 30488/TCP
Endpoints: 10.32.0.4:80,10.32.0.5:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32430/TCP
Endpoints: 10.32.0.4:443,10.32.0.5:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
You cannot access the application on ipaddress:8080 without running a proxy server in front of it or changing iptables rules (not a good idea). The NodePort service type always exposes the service on a port in the range 30000-32767.
So at any point, your service will be reachable on ipaddress:some_higher_port.
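For reference, this is what access via the NodePort itself looks like (a minimal sketch using the http NodePort 30488 from your describe output; <node-ip> is a placeholder for the address of any node):

# list the nodes and their addresses, then hit the service on its NodePort
kubectl get nodes -o wide
curl http://<node-ip>:30488       # replace <node-ip> with any node's address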
You can run a proxy in front that redirects traffic to the node port; since 8080 is your requirement, run the proxy server on port 8080 as well.
Just to add: the proxy server will not be part of the Kubernetes cluster.
If you are on a cloud provider, consider using a LoadBalancer service and accessing your app via a DNS name.
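A minimal sketch of the same service as type LoadBalancer (this assumes a cloud provider, or a bare-metal load balancer implementation such as MetalLB, that can assign an external IP; the app would then be reachable on that IP at port 8080):

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: LoadBalancer
  selector:
    run: my-nginx
  ports:
  - name: http
    port: 8080        # port exposed on the external IP
    targetPort: 80    # port the nginx container listens on
    protocol: TCP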
I have a simple ASP.NET Core Web API. It works locally. I deployed it to Azure AKS using the following YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sa-be
spec:
  selector:
    matchLabels:
      name: sa-be
  template:
    metadata:
      labels:
        name: sa-be
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: sa-be
        image: francotiveron/anapi:latest
        resources:
          limits:
            memory: "64Mi"
            cpu: "250m"
---
apiVersion: v1
kind: Service
metadata:
  name: sa-be-s
spec:
  type: LoadBalancer
  selector:
    name: sa-be
  ports:
  - port: 8080
    targetPort: 80
The result is:
> kubectl get service sa-be-s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
sa-be-s LoadBalancer 10.0.157.200 20.53.188.247 8080:32533/TCP 4h55m
> kubectl describe service sa-be-s
Name: sa-be-s
Namespace: default
Labels: <none>
Annotations: <none>
Selector: name=sa-be
Type: LoadBalancer
IP Families: <none>
IP: 10.0.157.200
IPs: <none>
LoadBalancer Ingress: 20.53.188.247
Port: <unset> 8080/TCP
TargetPort: 80/TCP
NodePort: <unset> 32533/TCP
Endpoints: 10.244.2.5:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I expected to reach the Web API at http://20.53.188.247:32533/, instead it is reachable only at http://20.53.188.247:8080/
Can someone explain:
Is this the expected behaviour?
If yes, what is the use of the NodePort (32533)?
Yes, this is expected.
Explained: Kubernetes Service Ports - please read the full article to understand what is going on in the background.
The LoadBalancer part:
apiVersion: v1
kind: Service
metadata:
  name: sa-be-s
spec:
  type: LoadBalancer
  selector:
    name: sa-be
  ports:
  - port: 8080
    targetPort: 80
port is the port the cloud load balancer will listen on (8080 in our
example) and targetPort is the port the application is listening on in
the Pods/containers. Kubernetes works with your cloud’s APIs to create
a load balancer and everything needed to get traffic hitting the load
balancer on port 8080 all the way back to the Pods/containers in your
cluster listening on targetPort 80.
Now the main part:
Behind the scenes, many implementations create NodePorts to glue the cloud load balancer to the cluster. The traffic flow is usually: client → load balancer on port 8080 → the NodePort (32533 here) on one of the nodes → the Pod's targetPort 80.
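In other words, the public endpoint is the load balancer's IP plus the service port; a quick check using the addresses from your own output:

curl http://20.53.188.247:8080/
# works: the Azure load balancer listens on 8080 and forwards to NodePort 32533 on the nodes
curl http://20.53.188.247:32533/
# not expected to work: the NodePort is only open on the node IPs, not on the load balancer's public IP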
Can someone please help me complete this scenario? I have everything running successfully but without the expected end result. I have a PostgreSQL database and Redis (back-end) Pods along with a front-end API application that I will be using to send API requests. Unfortunately, after setting everything up and making sure it is all running on the Kubernetes dashboard, nothing happens when I send an API request, as if there is no connection between my services. Before desperately posting here I did some research to find a solution to this problem: I tried this tutorial on using an Ingress service to connect my services, but it did not work. I also came across this tutorial that connects my back-end service to my front-end service using some sort of upstream configuration file, but it did not do the trick either.
Here are my YAML configuration files if anyone is interested:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dictionary-project
  labels:
    app: net
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dictionary
      tier: frontend
  template:
    metadata:
      labels:
        app: dictionary
        tier: frontend
    spec:
      hostNetwork: true
      containers:
      - name: dictionaryapi
        image: amin/dictionary_server_api:latest
        ports:
        - containerPort: 400
apiVersion: v1
kind: Service
metadata:
  name: dictionary-service
  labels:
    app: net
spec:
  selector:
    app: dictionary
    tier: frontend
  type: NodePort
  ports:
  - nodePort: 31003
    port: 67
    targetPort: 400
    protocol: TCP
apiVersion: v1
kind: Service
metadata:
  name: postgrebackendservice
  labels:
    run: backend
spec:
  selector:
    app: postgres
    tier: backend
  type: ClusterIP
  ports:
  - port: 5432
    targetPort: 5432
    name: postgresdb
apiVersion: v1
kind: Service
metadata:
  name: reddisbackendservice
  labels:
    run: backend
spec:
  selector:
    app: reddis
    tier: backend
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
Output of all services:
Name: dictionary-service
Namespace: default
Labels: app=net
Annotations: <none>
Selector: app=dictionary,tier=frontend
Type: NodePort
IP: 10.126.146.18
Port: <unset> 67/TCP
TargetPort: 400/TCP
NodePort: <unset> 31003/TCP
Endpoints: 192.168.x.x:400
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Name: postgrebackendservice
Namespace: default
Labels: run=backend
Annotations: Selector: app=postgres,tier=backend
Type: ClusterIP
IP: 10.182.374.13
Port: postgresdb 5432/TCP
TargetPort: 5432/TCP
Endpoints: 192.168.x.x:5432
Session Affinity: None
Events: <none>
Name: reddisbackendservice
Namespace: default
Labels: run=backend
Annotations: Selector: app=reddis,tier=backend
Type: ClusterIP
IP: 10.182.0.60
Port: client 6379/TCP
TargetPort: 6379/TCP
Endpoints: 192.168.x.x:6379
Port: gossip 16379/TCP
TargetPort: 16379/TCP
Endpoints: 192.168.x.x:16379
Session Affinity: None
Events: <none>
I am testing my front-end API application in a web browser by sending a request to http://{workernodeIP}:31003/swagger, but the page is not loading due to no connection to the server.
Cluster information:
Kubernetes version: v1.18.6
Environment being used: bare-metal, 1 VM Master Node and 1 VM Worker Node
Installation method: kubeadm
Host OS: Ubuntu 18.04
CNI and version: calico
CRI and version: Docker 19.03.6
From the docs here:
containerPort:
List of ports to expose from the container. Exposing a port here gives
the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port
here DOES NOT prevent that port from being exposed. Any port which is
listening on the default "0.0.0.0" address inside a container will be
accessible from the network. Cannot be updated.
As you can see, containerPort is informational and does not make the app listen on that port. To actually make the app listen on port 400, you need to change the port in the application code (or its configuration).
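A quick way to double-check what the container is really listening on (a sketch; <dictionary-pod-name> is a placeholder, and the ss/netstat tools may not exist in every image):

kubectl exec -it <dictionary-pod-name> -- sh -c "ss -tlnp || netstat -tlnp"
# Or bypass the Service entirely and probe the Pod directly on the suspected port:
kubectl port-forward pod/<dictionary-pod-name> 8080:400
curl http://localhost:8080/swagger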
I'm trying to access a deployment on our Kubernetes cluster on Azure. This is an Azure Kubernetes Service (AKS). Here are the configuration files for the deployment and the service that should expose the deployment.
Configurations
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mira-api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mira-api
  template:
    metadata:
      labels:
        app: mira-api
    spec:
      containers:
      - name: backend
        image: registry.gitlab.com/izit/mira-backend
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
      imagePullSecrets:
      - name: regcred
apiVersion: v1
kind: Service
metadata:
  name: mira-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    run: mira-api
When I check the cluster after applying these configurations, I see the pod running correctly. The service is also created and has a public IP assigned.
After this deployment I don't see any requests getting handled. I get an error message in my browser saying the site is inaccessible. Any ideas what I could have configured wrong?
Your service selector labels and pod labels do not match.
You have the app: mira-api label in the deployment's pod template but run: mira-api in the service's label selector.
Change your service selector label to match the pod label as follows.
apiVersion: v1
kind: Service
metadata:
  name: mira-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: mira-api
To make sure your service is actually selecting the backend pods, you can run the kubectl describe svc <svc name> command and check whether it has any Endpoints listed.
# kubectl describe svc postgres
Name: postgres
Namespace: default
Labels: app=postgres
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"postgres"},"name":"postgres","namespace":"default"},"s...
Selector: app=postgres
Type: ClusterIP
IP: 10.106.7.183
Port: default 5432/TCP
TargetPort: 5432/TCP
Endpoints: 10.244.2.117:5432 <------- This line
Session Affinity: None
Events: <none>
I've been trying to use the configuration below to expose my application on a public IP. This is being done on Azure. The public IP is generated, but when I browse to it I get nothing.
This is a Django app whose container runs on port 8000. The service runs on port 80 at the moment, but even if I configure the service to run at port 8000 it still doesn't work.
Is there something wrong with the way my service is defined?
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: hmweb
spec:
  ports:
  - port: 80
  selector:
    app: hmweb
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hmweb-deployment
  labels:
    app: hmweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hmweb
  template:
    metadata:
      labels:
        app: hmweb
    spec:
      containers:
      - name: hmweb
        image: nw_webimage
        envFrom:
        - configMapRef:
            name: new-config
        command: ["/bin/sh","-c"]
        args: ["gunicorn saleor.wsgi -w 2 -b 0.0.0.0:8000"]
        ports:
        - containerPort: 8000
      imagePullSecrets:
      - name: key
Output of kubectl describe service web (the name of the service):
Name: web
Namespace: default
Labels: app=hmweb
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"hmweb"},"name":"web","namespace":"default"},"spec":{"ports":[{"port":...
Selector: app=hmweb
Type: LoadBalancer
IP: 10.0.86.131
LoadBalancer Ingress: 13.69.127.16
Port: <unset> 80/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31827/TCP
Endpoints: 10.244.0.112:8000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 8m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 7m service-controller Ensured load balancer
The reason is that your service has two selector labels, app: hmweb and tier: frontend, while your deployment's pods carry only the single label app: hmweb. Hence, when the service is created it cannot find any pods that have both labels, so it doesn't connect to any pods. Also, if your container is running on port 8000 you must define targetPort with the value of the port the container is listening on; otherwise targetPort defaults to the same value as port, i.e. port: 80 as defined in your service.
The correct YAML for your service is:
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: hmweb
spec:
  ports:
  - port: 80
    targetPort: 8000
    protocol: TCP
  selector:
    app: hmweb
  type: LoadBalancer
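After applying the corrected service, one way to confirm that the selector matches and traffic flows is a quick sanity check using the load balancer IP from your describe output (this is illustrative, not part of the fix):

kubectl describe service web | grep Endpoints   # should list a pod IP such as 10.244.0.112:8000
curl http://13.69.127.16/                       # service port 80 now forwards to the container's port 8000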
Hope this helps.
My objective: to expose a Pod's port (running an Angular image) so that I can access it from the host machine's browser.
service.yml:
apiVersion: v1
kind: Service
metadata:
  name: my-frontend-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 4200
Pod's yml:
apiVersion: v1
kind: Pod
metadata:
  name: angular.frontend
  labels:
    app: MyApp
spec:
  containers:
  - name: angular-frontend-demo
    image: angular-frontend-image
    ports:
    - name: nodejs-port
      containerPort: 4200
The weird thing is that kubectl port-forward pod/angular.frontend 8000:4200 works. However, my objective is to achieve this via service.yml.
Use NodePort:
apiVersion: v1
kind: Service
metadata:
  name: my-frontend-service
spec:
  selector:
    app: MyApp
  type: NodePort
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 4200
    nodePort: 30001
Then you can access the service on NodePort 30001 on any node of the cluster.
For example, if the machine name is node01, you can then do curl http://node01:30001.
The service you've defined here is of type ClusterIP (since you haven't set a type in the spec). This means the service is only available and reachable within the cluster. You can use an Ingress to make it accessible from outside the cluster; see for example this post showing how to do that for Minikube.
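A minimal sketch of what such an Ingress could look like, assuming an ingress controller is installed (for example the NGINX ingress controller addon on Minikube); the host name is a placeholder:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-frontend-ingress
spec:
  rules:
  - host: myapp.example.com          # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-frontend-service   # the ClusterIP service defined above
            port:
              number: 8000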