Not able to connect to the deployment after installing the ingress controller and Ingress - Kubernetes

I have deployed the nginx-ingress controller in a Kubernetes cluster hosted on AWS using kOps (Kubernetes Operations).
To test the ingress controller, I created a Deployment, a Service, and an Ingress; the YAML I used is below.
I can see the backends in the Ingress.
When I run a different pod on the same cluster and connect to the Service from it, I can reach the deployment.
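For reference, that in-cluster check looked roughly like this (a throwaway curl pod; the Service name and namespace come from the manifests below, and curl-test is just an illustrative pod name):
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -s http://nginx-deployment.default.svc.cluster.local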
When I expose the same deployment with a NodePort, I can also reach the deployment.
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Service
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-deployment
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
Ingress
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.gopinallani.xyz
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-deployment
            port:
              number: 80
ingress-controller --> https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/aws/deploy.yaml
Kubernetes version:
Client Version: v1.25.0
Kustomize Version: v4.5.7
Server Version: v1.24.4
I have added the DNS entry in the AWS Route 53 zone.
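As a sanity check on the DNS side, I compared the controller's load balancer hostname with the Route 53 record; assuming the default names from the linked deploy.yaml, something like:
kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'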

Related

Localhost kubernetes ingress not exposing services to local machine

I'm running Kubernetes on localhost; the pod is running, and I can access the service when I port-forward:
kubectl port-forward svc/my-service 8080:8080
I can GET/POST etc. against the service on localhost.
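For example, a quick check against the forwarded port looks like this (the path is just a placeholder for one of my endpoints):
curl http://localhost:8080/some-endpoint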
I'm trying to use an Ingress to access it; here is the YAML file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 8080
I've also installed the ingress controller, but it isn't working as expected. Is anything wrong with this?
EDIT: the Deployment and Service that I'm trying to connect to with the Ingress:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - image: test/my-service:0.0.1-SNAPSHOT
        name: my-service
        ports:
        - containerPort: 8080
        # ... other Spring Boot override properties
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  type: ClusterIP
  selector:
    app: my-service
  ports:
  - name: 8080-8080
    port: 8080
    protocol: TCP
    targetPort: 8080
The service is working by itself, though.
EDIT:
It worked when I used https instead of http
Is the Ingress resource in the same namespace as the Service? Can you share the manifest of the Service? Also, what do the logs of the nginx ingress controller show, and what sort of error do you get when hitting the endpoint in the browser?
The Ingress's YAML file looks OK to me, BTW.
I was being stupid. It worked when I used https instead of http
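In other words, a request along these lines succeeded, where -k tells curl to accept the ingress controller's self-signed default certificate (a sketch; host and path depend on your setup):
curl -k https://localhost/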

How do I make a forward proxy server on k8s and an ALB (or NLB)?

I created a forward proxy server on EKS pods behind an ALB (created by the AWS Load Balancer Controller). All pods can get a response on port 8118 through the ALB.
The resources, like the pods and the Ingress, looked good to me. Then I tested whether the proxy server works with
curl -Lx k8s-proxy-sample-domain.ap-uswest-1.elb.amazonaws.com:18118 ipinfo.io
Normally I would get a random IP address back from ipinfo.io, but I didn't... So I also tried port-forward, like this:
kubectl port-forward specifi-pod 8118:8118
Then I retried the proxied access from my host:
curl -Lx localhost:8118 ipinfo.io
In this case, it went well. I can't figure out the reason. What's the difference between going THROUGH the ALB and port-forwarding? Should I use an NLB for some reason, or is something misconfigured?
My environment:
k8s version: v1.18.2
node type: Fargate
Manifest
Here is my manifest.
---
apiVersion: v1
kind: Namespace
metadata:
  name: tor-proxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: tor-proxy
  name: tor-proxy-deployment
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: tor-proxy
  replicas: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/name: tor-proxy
    spec:
      containers:
      - image: dperson/torproxy
        imagePullPolicy: Always
        name: tor-proxy
        ports:
        - containerPort: 8118
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: tor-proxy
  name: tor-proxy-service
  namespace: tor-proxy
spec:
  ports:
  - port: 18118
    targetPort: 8118
    protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: tor-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: tor-proxy
  name: tor-proxy-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 18118}]'
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: tor-proxy-service
          servicePort: 18118
Use an NLB, not an ALB: an ALB is an HTTP (layer-7) load balancer and will not forward arbitrary proxy traffic, while an NLB works at layer 4 and passes the client's connection through the proxy server toward the target site.
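A minimal sketch of that, assuming the tor-proxy Service above and the in-tree AWS cloud provider (the AWS Load Balancer Controller uses its own annotations, and tor-proxy-nlb is just an illustrative name):
apiVersion: v1
kind: Service
metadata:
  name: tor-proxy-nlb   # illustrative name
  namespace: tor-proxy
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  ports:
  - port: 18118
    targetPort: 8118
    protocol: TCP
  selector:
    app.kubernetes.io/name: tor-proxy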

Minikube ingress is not responding

I am trying to redirect traffic to my cluster's Deployment object. My Deployment object runs a Node image that depends on a SQL Server image.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: simptekapi-api-service
          servicePort: 5000
My ClusterIP Service:
apiVersion: v1
kind: Service
metadata:
  name: simptekapi-api-service
spec:
  selector:
    component: api
  type: ClusterIP
  ports:
  - port: 5000
    targetPort: 5000
My Deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simptekapi-api-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: api
  template:
    metadata:
      labels:
        component: api
    spec:
      containers:
      - name: simptekapi-api
        image: nayan2/simptekapi
        ports:
        - containerPort: 5000
My Node image is basically a test API project. I am sharing the sign-in URL (http://Minikube_ip/api/v1/auth/SignIn).
Everything seems OK to me, but as I am still literally a newbie with Kubernetes, it's hard for me to figure out what I am doing wrong.
When I reach my pods through NodePort, everything is perfect; I don't know why I can't reach them through the Ingress.
kubectl get ingress example-ingress ==> Result
kubectl get pods -n ingress-nginx ==> Result
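If it helps: on Minikube the ingress addon has to be enabled, and the request has to go to the Minikube IP. Neither step is shown in the question, so it is worth double-checking:
minikube addons enable ingress
curl http://$(minikube ip)/api/v1/auth/SignIn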

Kubernetes nginx ingress controller throws error when trying to obtain endpoints for service

I am trying to set up microservices on Kubernetes on Google Cloud Platform. I've created Deployment, ClusterIP, and Ingress configuration files.
First, after creating the cluster, I ran this command to install nginx ingress:
helm install my-nginx stable/nginx-ingress --set rbac.create=true
I use Helm v3.
Then I applied the Deployment and ClusterIP configurations:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-production-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      component: app-production
  template:
    metadata:
      labels:
        component: app-production
    spec:
      containers:
      - name: app-production
        image: eu.gcr.io/my-project/app:1.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-production-cluser-ip-service
spec:
  type: ClusterIP
  selector:
    component: app-production
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
My ingress config is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app-production-cluster-ip-service
          servicePort: 80
I get this error from the Google Cloud Platform logs within the ingress controller:
Error obtaining Endpoints for Service "default/app-production-cluster-ip-service": no object matching key "default/app-production-cluster-ip-service" in local store
But when I run the kubectl get endpoints command, the output is this:
NAME                               ENDPOINTS                     AGE
app-production-cluser-ip-service   10.60.0.12:80,10.60.1.13:80   17m
I am really not sure what I'm doing wrong.
The service name referenced in the Ingress does not match the actual Service name (app-production-cluster-ip-service vs. app-production-cluser-ip-service; note the missing "t"). Please recreate the Service with the matching name and check:
apiVersion: v1
kind: Service
metadata:
  name: app-production-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: app-production
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
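Once the names match, the controller should be able to find the endpoints; you can confirm under the corrected name, e.g.:
kubectl get endpoints app-production-cluster-ip-service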

Kubernetes ingress-nginx gives 502 error (Bad Gateway)

I have an EKS cluster for which I want:
- one load balancer per cluster,
- Ingress rules to direct traffic to the right namespace and the right service.
I have been following this guide : https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
My deployments:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: IMAGENAME
        ports:
        - containerPort: 8000
          name: hello-world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bleble
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: bleble
  template:
    metadata:
      labels:
        app: bleble
    spec:
      containers:
      - name: bleble
        image: IMAGENAME
        ports:
        - containerPort: 8000
          name: bleble
The Services for those deployments:
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8000
  selector:
    app: hello-world
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: bleble-svc
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8000
  selector:
    app: bleble
  type: NodePort
My Load balancer:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: internal-lb.aws.com
    http:
      paths:
      - path: /bleble
        backend:
          serviceName: bleble-svc
          servicePort: 80
      - path: /hello-world
        backend:
          serviceName: hello-world-svc
          servicePort: 80
I've set up the Nginx Ingress Controller with this : kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/mandatory.yaml
I am unsure why I get a 503 Service Temporarily Unavailable for one service and a 502 for the other... I would guess it's a problem with ports or namespaces? In the guide, they don't define a namespace for the deployments...
All the resources are created correctly, and I think the Ingress is actually working but is getting confused about where to route.
Thanks for your help!
In general, use externalTrafficPolicy: Cluster instead of Local. You can gain some performance (latency) improvement by using Local, but you need to configure those pod allocations with a lot of effort, and you will hit 5xx errors with misconfigurations. In addition, Cluster is the default option for externalTrafficPolicy.
In your Ingress, double-check that each backend serviceName matches the Service's actual name (bleble-svc, hello-world-svc). More importantly, you need to set servicePort to 8080, since 8080 is the port you exposed in your Service configuration, while the Ingress currently points at port 80.
For an internal service like bleble-svc, a ClusterIP is good enough in your case, as it does not need external access.
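Putting those points together, the rules block would look roughly like this (a sketch against the manifests in the question):
  rules:
  - host: internal-lb.aws.com
    http:
      paths:
      - path: /bleble
        backend:
          serviceName: bleble-svc
          servicePort: 8080
      - path: /hello-world
        backend:
          serviceName: hello-world-svc
          servicePort: 8080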
Hope this helps.
Found it!
The containerPort in the Deployments was set to 8000, and the targetPort of the Services as well, but the person who wrote the Dockerfile for the code exposed port 80, which was the reason for the 502 Bad Gateway!
Thanks a lot as well to @Fei, who has been a fantastic helper!
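So with the container actually listening on 80, the Services' port mapping needed to be the following (alternatively, the image could be rebuilt to listen on 8000):
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80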