Hello, I am new to Kubernetes and I need some help.
I want to use Kubernetes Ingress paths for my two different Nuxt projects.
The first path, /, works well, but the
second path, /v1, does not load resources like the .css and .js files.
My first deployment and service yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1
  labels:
    app: nginx1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx1
        image: st/itmr:latest  # can't show image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx1-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: nginx1
My second deployment and service yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
  labels:
    app: nginx2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx2
        image: st/itpd:latest  # can't show image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginx2-svc
spec:
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: nginx2
And here is my ingress yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: some.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx1-svc
            port:
              number: 80
      - path: /v1
        pathType: Prefix
        backend:
          service:
            name: nginx2-svc
            port:
              number: 8080
I thought using nginx.ingress.kubernetes.io/rewrite-target: /$1 would work for me, but it does not.
I don't know where the problem is, so please help me.
To clarify, I am posting a community wiki answer.
The problem here was resolved by changing the path configuration in the project itself rather than in the Ingress.
See the documentation on Ingress paths for more details.
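For context, one common way to make a Nuxt app work under a sub-path such as /v1 is to set the app's base path, so that the generated asset URLs (the .css and .js files) include that prefix. Below is a minimal sketch assuming a Nuxt 2 project; the option and value are illustrative and not taken from the question:

// nuxt.config.js (sketch)
export default {
  router: {
    // routes and generated assets are then served under /v1/ (assumed prefix)
    base: '/v1/'
  }
}

With a base path like this in place, the /v1 Ingress rule can usually pass requests through without a rewrite, so asset requests such as /v1/_nuxt/app.js reach the second app unchanged.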
Related
I'm trying to set up ingress to work with a Kubernetes cluster, as seen here: https://www.youtube.com/watch?v=DgVjEo3OGBI. When testing the endpoint in Postman it returns a 404 Not Found. I've tried using both HTTPS and HTTP and I'm at a loss. Thanks!
Edit: I was using localhost for testing and am now trying to use acme.com as the routing URL.
Ingress file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: acme.com
    http:
      paths:
      - path: /api/platforms
        pathType: Prefix
        backend:
          service:
            name: platforms-clusterip-service
            port:
              number: 80
      - path: /api/c/platforms
        pathType: Prefix
        backend:
          service:
            name: commands-clusterip-service
            port:
              number: 80
Deployment and Service files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
      - name: platformservice
        image: revlisc/platformservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-service
spec:
  type: ClusterIP
  selector:
    app: platformservice
  ports:
  - name: platformservice
    protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: commands-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: commandservice
  template:
    metadata:
      labels:
        app: commandservice
    spec:
      containers:
      - name: commandservice
        image: revlisc/commandservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: commands-clusterip-service
spec:
  type: ClusterIP
  selector:
    app: commandservice
  ports:
  - name: commandservice
    protocol: TCP
    port: 80
    targetPort: 80
Service
apiVersion: v1
kind: Service
metadata:
  name: platformnpservice-srv
spec:
  type: NodePort
  selector:
    app: platformservice
  ports:
  - name: platformservice
    protocol: TCP
    port: 80
    targetPort: 80
There was a change I made in ingress.yml and it works for me. Can you test using the manifest below and check if it's working?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
      - name: platformservice
        image: revlisc/platformservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-service
spec:
  type: ClusterIP
  selector:
    app: platformservice
  ports:
  - name: platformservice
    protocol: TCP
    port: 80
    targetPort: 80
There was also an issue with your ingress file, so I have made a small change. Check if this works for you:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/rewrite-target: /$
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: <your-hostname>
    http:
      paths:
      - pathType: Prefix
        path: "/api/platforms"
        backend:
          service:
            name: platforms-clusterip-service
            port:
              number: 80
When I hit hostname/api/platforms I was able to see the output below. I am not sure if this is the expected result. Can you confirm?
[{"id":1,"name":"Dot Net","publisher":"Microsoft","cost":"Free"},{"id":2,"name":"SQL Server Express","publisher":"Microsoft","cost":"Free"},{"id":3,"name":"Kubernetes","publisher":"Cloud Native Computing Foundation","cost":"Free"}]
I have been following the tutorial here:
MS Azure
That part is fine. However, when deploying a local config file I get a "502 Bad Gateway" error, even though this config has been fine and worked as expected before.
Can anyone see anything obvious with this? At this point I don't know where to start.
What I am trying to achieve is to use the Application Gateway ingress controller, then add deployments and apply additional ingress rules.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: one-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: one-api
  template:
    metadata:
      labels:
        run: one-api
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: one-api
        ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: one-api
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: one-api
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: two-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: two-api
  template:
    metadata:
      labels:
        run: two-api
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: two-api
        ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: two-api
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: two-api
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strata-2022
  labels:
    app: my-docker-apps
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: one-api
            port:
              number: 80
      - path: /two-api
        pathType: Prefix
        backend:
          service:
            name: two-api
            port:
              number: 80
Output of: kubectl describe ingress strata-2022
Name: strata-2022
Labels: app=my-docker-apps
Namespace: default
Address: 51.142.191.83
Ingress Class: <none>
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
*
/ one-api:80 (10.224.0.15:80,10.224.0.59:80,10.224.0.94:80)
/two-api two-api:80 (10.224.0.13:80,10.224.0.51:80,10.224.0.82:80)
Annotations: kubernetes.io/ingress.class: azure/application-gateway
Events: <none>
Commands used to create AKS using Azure CLI.
az aks create -n myCluster -g david-tutorial --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name testApplicationGateway --appgw-subnet-cidr "10.225.0.0/16" --generate-ssh-keys
// Get credentials and switch to this context
az aks get-credentials -n myCluster -g david-tutorial
// This line is from the tutorial -- this works as expected
//kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/aspnetapp.yaml
// This is what I ran. It works locally
kubectl apply -f new-deploy.yaml
// Get address
kubectl get ingress
kubectl get configmap
I tried recreating the same setup on my end, and I could identify the following issue right after running the same az aks create command: All the instances in one or more of your backend pools are unhealthy.
Since this appeared to indicate that the backend pools are unreachable, it seemed strange at first, so I looked at the logs of one of the pods based on the hello-app image you are using and noticed this right away:
> kubectl logs one-api-77f9b4b9f-6sv6f
2022/08/12 00:22:04 Server listening on port 8080
Hence, my immediate thought was that maybe nothing in the Docker image you are using is configured to listen on port 80, which is the port you use in your Kubernetes resource definitions.
After updating your Deployment and Service definitions to use port 8080 instead of 80, everything worked perfectly fine and I started getting the following response in my browser:
Hello, world!
Version: 1.0.0
Hostname: one-api-d486fbfd7-pm8kt
Below you can find the updated YAML file that I used to successfully deploy all the resources:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: one-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: one-api
  template:
    metadata:
      labels:
        run: one-api
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: one-api
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: one-api
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: one-api
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: two-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: two-api
  template:
    metadata:
      labels:
        run: two-api
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: two-api
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: two-api
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: two-api
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strata-2022
  labels:
    app: my-docker-apps
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: one-api
            port:
              number: 8080
      - path: /two-api
        pathType: Prefix
        backend:
          service:
            name: two-api
            port:
              number: 8080
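As a quick way to verify the routing after applying the manifests above (a sketch; the address placeholder is whatever kubectl reports, not a value from the original post):

kubectl get ingress strata-2022          # note the ADDRESS column
curl http://<ingress-address>/           # should return the hello-app response from one-api
curl http://<ingress-address>/two-api    # should return the hello-app response from two-api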
I have a Kubernetes cluster with two deployments, ui-service-app and user-service-app. Both deployments are exposed through ClusterIP services, namely ui-service-svc and user-service-svc. In addition, there is an Ingress for accessing both applications from outside the cluster.
Now I want to make an API call from my application inside ui-service-app to user-service-app. Currently I am using ingress-ip/user to do so, but there should be some way to do this internally?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-app
  labels:
    app: user-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-service-app
  template:
    metadata:
      labels:
        app: user-service-app
    spec:
      containers:
      - name: user-service-app
        image: <MY-IMAGE-URL>
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        livenessProbe:
          httpGet:
            path: /ping
            port: 3000
        readinessProbe:
          httpGet:
            path: /ping
            port: 3000
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "user-service-svc"
  namespace: "default"
  labels:
    app: "user-service-app"
spec:
  type: "ClusterIP"
  selector:
    app: "user-service-app"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui-service-app
  labels:
    app: ui-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ui-service-app
  template:
    metadata:
      labels:
        app: ui-service-app
    spec:
      containers:
      - name: ui-service-app
        image: <MY-IMAGE-URL>
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "ui-service-svc"
  namespace: "default"
  labels:
    app: "ui-service-app"
spec:
  type: "ClusterIP"
  selector:
    app: "ui-service-app"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awesome-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: ui-service-svc
      port:
        number: 80
  rules:
  - http:
      paths:
      - path: /login
        pathType: Prefix
        backend:
          service:
            name: ui-service-svc
            port:
              number: 80
      - path: /user(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: user-service-svc
            port:
              number: 80
UPDATE 1:
This is the error page I get when I change the URL in the React app to http://user-service-svc.
Use the service name of the associated service.
From any other pod in the same namespace, the hostname user-service-svc will map to the Service you've defined, so http://user-service-svc would connect you to the web server of the user-service-app Deployment (no port specified, because your Service is mapping port 80 to container port 3000).
From another namespace, you can use the hostname <service>.<namespace>.svc.cluster.local, but that's not relevant to what you're doing here.
See the Service documentation for more details.
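For a quick check from inside the cluster (a sketch; the pod name is a placeholder, and it assumes the container image ships with wget):

# call the Service by name from a pod in the same (default) namespace
kubectl exec -it <ui-service-app-pod> -- wget -qO- http://user-service-svc/ping

# the fully-qualified form, which also works across namespaces
kubectl exec -it <ui-service-app-pod> -- wget -qO- http://user-service-svc.default.svc.cluster.local/ping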
I've set up k3s v1.20.4+k3s1 with Klipper LB and nginx ingress 3.24.0 from the Helm charts.
I'm following this article, but I'm stumbling upon a very weird issue where my ingress hosts point to the wrong service.
Here is my configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-routing
spec:
  rules:
  - host: echo1.stage.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo1
            port:
              number: 80
  - host: echo2.stage.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo2
            port:
              number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echo1
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 2
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echo1
        image: hashicorp/http-echo
        args:
        - "-text=This is echo1"
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: echo2
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo2
spec:
  selector:
    matchLabels:
      app: echo2
  replicas: 1
  template:
    metadata:
      labels:
        app: echo2
    spec:
      containers:
      - name: echo2
        image: hashicorp/http-echo
        args:
        - "-text=This is the new (echo2)"
        ports:
        - containerPort: 5678
And my Cloudflare DNS records (no DNS proxy activated):
;; A Records
api.stage.example.com. 1 IN A 162.15.166.240
echo1.stage.example.com. 1 IN A 162.15.166.240
echo2.stage.example.com. 1 IN A 162.15.166.240
But when I do a curl on echo1.stage.example.com multiple times, here is what I get:
$ curl echo1.stage.example.com
This is echo1
$ curl echo1.stage.example.com
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>
$ curl echo1.stage.example.com
This is the new (echo2)
Sometimes I get a bad gateway, sometimes I get the echo1.stage.example.com domain pointing to the service assigned to echo2.stage.example.com. Is this because of the LB? Or a bad configuration on my end? Thanks!
EDIT: It's not coming from the LB; I just switched to MetalLB and I still get the same issue.
OK, I found the issue. It was actually not related to the config I previously posted, but to my kustomization.yaml config, where I had:
commonLabels:
  app: myapp
Just removing that commonLabels entry solved the issue.
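For anyone hitting the same symptom, a likely explanation (assuming a standard kustomize setup, since the full kustomization.yaml is not shown): kustomize injects commonLabels not only into metadata.labels but also into Service selectors and Deployment selectors, so a shared label like app: myapp can make both echo Services match pods from both Deployments, which produces exactly this kind of cross-routing. A sketch of the problematic configuration:

# kustomization.yaml (sketch; file names and layout are assumed)
commonLabels:
  app: myapp        # added to labels AND selectors of every generated resource
resources:
  - echo.yaml       # hypothetical file containing the manifests above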
I am building a service in EKS that has two deployments, two services (NodePort), and a single ingress.
I am using the aws-alb-ingress-controller.
When I run kubectl port-forward POD 8080:80, it does show me my working pods.
When I look at the endpoints generated by the ALB, I get 502 errors.
When I look at the Registered Targets of the target group, I see the message: Health checks failed with these codes: [502]
Here is my complete yaml.
---
#Example game deployment and service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "example-game"
  namespace: "example-app"
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: "example-game"
    spec:
      containers:
      - image: alexwhen/docker-2048
        imagePullPolicy: Always
        name: "example-game"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-example-game"
  namespace: "example-app"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "example-app"
---
#Example nginxdemo Deployment and Service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "example-nginxdemo"
  namespace: "example-app"
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: "example-nginxdemo"
    spec:
      containers:
      - image: nginxdemos/hello
        imagePullPolicy: Always
        name: "example-nginxdemo"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-example-nginxdemo"
  namespace: "example-app"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "example-app"
---
#Shared ALB ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "example-ingress"
  namespace: "example-app"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    Alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /
    # alb.ingress.kubernetes.io/scheme: internal
    # alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
  labels:
    app: example-app
spec:
  rules:
  - http:
      paths:
      - path: /game/*
        backend:
          serviceName: "service-example-game"
          servicePort: 80
      - path: /nginxdemo/*
        backend:
          serviceName: "service-example-nginxdemo"
          servicePort: 80
I don't know why, but it turns out that the label given to the ingress has to be unique.
When I changed the label from 'example-app' to 'example-app-ingress', it just started working.
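For illustration, the change described above amounts to something like this on the Ingress metadata (only the label value changes; 'example-app-ingress' is the value mentioned in the answer):

metadata:
  name: "example-ingress"
  namespace: "example-app"
  labels:
    app: example-app-ingress   # previously example-app, which was shared with the other resources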