I have configured a Kubernetes Ingress, but it only works when the path is /.
I have tried all manner of different values for the path, including:
/*
/servicea
/servicea/
/servicea/*
This is my Ingress configuration (which works):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: boardingservice
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: my.url.com
      http:
        paths:
          - path: /
            backend:
              serviceName: servicea-nodeport
              servicePort: 80
This is my NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: servicea-nodeport
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8081
      nodePort: 30124
  selector:
    app: servicea
And this is my Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: servicea
spec:
  replicas: 1
  template:
    metadata:
      name: servicea
      labels:
        app: servicea
    spec:
      containers:
        - image: 350329402011.dkr.ecr.eu-west-2.amazonaws.com/servicea
          name: servicea
          ports:
            - containerPort: 8080
              protocol: TCP
        - image: 350329402011.dkr.ecr.eu-west-2.amazonaws.com/serviceb
          name: serviceb
          ports:
            - containerPort: 8081
              protocol: TCP
If the path is /, then I can do this: http://my.url.com/api/ping.
But as I will have multiple services, I want to do this instead: http://my.url.com/servicea/api/ping. When I set the path to /servicea, I get a 404.
I am running Kubernetes on AWS with an ingress-nginx ingress controller.
Any ideas?
You are not using Kubernetes Pods as they are intended to be used. A Pod contains one or more application containers which are relatively tightly coupled — in a pre-container world, they would have executed on the same physical or virtual machine.
If you have two applications, servicea and serviceb, they should run in different Pods: one Pod for servicea and another one for serviceb. This has many benefits: you can deploy them separately, scale them independently, etc.
As the docs say:
A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.
These Pods can be created using Deployments, as you were already doing. That's fine and recommended.
Once you have the Deployments running, you'd create a Service for each one that balances traffic across all the Pods of that Deployment.
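For example, splitting your combined Deployment might look like this (a minimal sketch using your existing servicea image; serviceb would get an analogous Deployment and Service of its own, on port 8081):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: servicea
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: servicea
    spec:
      containers:
        - image: 350329402011.dkr.ecr.eu-west-2.amazonaws.com/servicea
          name: servicea
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: servicea
spec:
  type: NodePort
  ports:
    - port: 80
      # servicea listens on 8080 inside the container
      targetPort: 8080
  selector:
    app: servicea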
And finally, you want to hit servicea or serviceb depending on the request URL. That can be done with an Ingress, as you were trying, but mapping each path to a different service. For example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: my.url.com
      http:
        paths:
          - path: /servicea
            backend:
              serviceName: servicea
              servicePort: 80
          - path: /serviceb
            backend:
              serviceName: serviceb
              servicePort: 80
That way, requests going to your ingress controller using the /servicea path would be served by the Pods behind the servicea Service. And requests going to your ingress controller using the /serviceb path would be served by the Pods behind the serviceb Service.
For anyone reading this, my configuration was correct (even though unorthodox, as pointed out by fiunchinho); the error was in the Spring Boot applications running in the containers. I needed to change their context paths to match the Ingress path. I could, of course, have changed the @GetMapping and @PostMapping methods in my Spring controllers, but I opted to change the context path.
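For reference, this is roughly what that context-path change looks like (a sketch assuming Spring Boot 2.x, where the property is server.servlet.context-path; Spring Boot 1.x used server.context-path instead):

# application.yml (hypothetical): serve the whole app under /servicea
server:
  servlet:
    context-path: /servicea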
I have two pods (deployments) running on minikube. Each pod has the same port exposed (say 8081) but uses a different image. Now I want to configure things so that I can access either of the pods using the same external URL, in a load-balanced way. So what I tried to do is put the same matching label in both pods, map them to the same service, and then expose it through a NodePort. Example:
#pod1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep1
  labels:
    apps: dep1
    tier: cloud
spec:
  template:
    metadata:
      name: dep1-pod
      labels:
        app: deployment1
    spec:
      containers:
        - name: cont1
          image: cont1
          ports:
            - containerPort: 8081
  selector:
    matchLabels:
      app: deployment1
Now the second pod:
#pod2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep2
  labels:
    apps: dep2
    tier: cloud
spec:
  template:
    metadata:
      name: dep2-pod
      labels:
        app: deployment1
    spec:
      containers:
        - name: cont2
          image: cont2
          ports:
            - containerPort: 8081
  selector:
    matchLabels:
      app: deployment1
Now the service:
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: NodePort
  ports:
    - port: 8081
      targetPort: 8081
      nodePort: 30169
  selector:
    app: deployment1
Now this does not work as intended: it refuses to connect on IP:30169. However, I can connect if only one of the pods is deployed.
Now I know I can achieve this functionality using replicas and just one image, but in this case I want to do it using 2 images. Any help is much appreciated.
You can use Ingress to achieve it.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
In your situation, Ingress will forward traffic to your services using the same URL; which service is hit depends on the path: URL for the first Pod and URL/v2 for the second Pod. Of course, you can change /v2 to something else.
To begin, you need to enable Ingress on minikube. You can do it using the command below. You can read more about it here.
minikube addons enable ingress
Next, you need to create an Ingress using a YAML file. Here is an example of how to do it step by step.
The Ingress YAML looks as below.
As you can see in this configuration, you can access the first Pod using the URL, and traffic will be forwarded to the first service attached to that Pod. For the second Pod, using URL/v2 will forward traffic to the second service attached to the second Pod.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: hello-world.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 8080
          - path: /v2
            pathType: Prefix
            backend:
              service:
                name: web2
                port:
                  number: 8080
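For that routing to work, each Deployment needs its own Service selecting a distinct pod label (a sketch; the names web and web2 match the Ingress above, while the labels are assumptions: you would give each Deployment's pods a unique label instead of sharing app: deployment1):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: dep1   # assumed unique label on the first Deployment's pods
  ports:
    - port: 8080
      targetPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: web2
spec:
  type: NodePort
  selector:
    app: dep2   # assumed unique label on the second Deployment's pods
  ports:
    - port: 8080
      targetPort: 8081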
I’ve already seen this question; AFAIK I’m doing everything in the answers there.
Using GKE, I’ve deployed a GCP HTTP(S) load balancer-based ingress for a kubernetes cluster containing two almost identical deployments: production and development instances of the same application.
I set up a dedicated port on each pod template to use for health checks by the load balancer so that they are not impacted by redirects from the root path on the primary HTTP port. However, the health checks are consistently failing.
From these docs I added a readinessProbe parameter to my deployments, which the load balancer seems to be ignoring completely.
I’ve verified that the server on :p-ready (9292; the dedicated health check port) is running correctly using the following (in separate terminals):
➜ kubectl port-forward deployment/d-an-server p-ready
➜ curl http://localhost:9292/ -D -
HTTP/1.1 200 OK
content-length: 0
date: Wed, 26 Feb 2020 01:21:55 GMT
What have I missed?
A couple notes on the below configs:
The ${...} variables below are filled by the build script as part of deployment.
The second service (s-an-server-dev) is almost an exact duplicate of the first (with its own deployment), just with -dev suffixes on the names and labels.
Deployment
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "d-an-server"
namespace: "default"
labels:
app: "a-an-server"
spec:
replicas: 1
selector:
matchLabels:
app: "a-an-server"
template:
metadata:
labels:
app: "a-an-server"
spec:
containers:
- name: "c-an-server-app"
image: "gcr.io/${PROJECT_ID}/an-server-app:${SHORT_SHA}"
ports:
- name: "p-http"
containerPort: 8080
- name: "p-ready"
containerPort: 9292
readinessProbe:
httpGet:
path: "/"
port: "p-ready"
initialDelaySeconds: 30
Service
apiVersion: "v1"
kind: "Service"
metadata:
name: "s-an-server"
namespace: "default"
spec:
ports:
- port: 8080
targetPort: "p-http"
protocol: "TCP"
name: "sp-http"
selector:
app: "a-an-server"
type: "NodePort"
Ingress
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: "primary-ingress"
annotations:
kubernetes.io/ingress.global-static-ip-name: "primary-static-ipv4"
networking.gke.io/managed-certificates: "appname-production-cert,appname-development-cert"
spec:
rules:
- host: "appname.example.com"
http:
paths:
- backend:
serviceName: "s-an-server"
servicePort: "sp-http"
- host: "dev.appname.example.com"
http:
paths:
- backend:
serviceName: "s-an-server-dev"
servicePort: "sp-http-dev"
I think what's happening here is that the GKE ingress is not informed about port 9292 at all. You are referring to sp-http in the Ingress, which maps to port 8080.
You need to make sure of the following (see the sketch below):
1. The service's targetPort field must point to the pod port's containerPort value or name.
2. The readiness probe must be exposed on the port matching the servicePort specified in the Ingress.
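For example, one way to satisfy point 2 is to move the probe onto the port the Service and Ingress actually target (a sketch; the /healthz path is an assumption, and your app must answer 200 there without redirecting):

readinessProbe:
  httpGet:
    path: "/healthz"  # assumed endpoint that returns 200 directly
    port: "p-http"    # the containerPort behind sp-http, which the Ingress uses
  initialDelaySeconds: 30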
On GKE, Kubernetes Ingresses are LoadBalancers provided by Compute Engine, which have some cost. For example, over 2 months I paid 16.97€.
In my cluster I have 3 namespaces (default, dev and prod), so to reduce cost I would like to avoid spawning 3 LoadBalancers. The question is how to configure the current one to point to the right namespace?
GKE requires the Ingress's target Service to be of type NodePort, and I am stuck because of that constraint.
I would like to do something like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: dev
  annotations: # enables the SSL certificate
    kubernetes.io/ingress.global-static-ip-name: lb-ip-adress
spec:
  rules:
    - host: dev.domain.com
      http:
        paths:
          - path: /*
            backend:
              # This is the current case; 'dev-service' is a NodePort
              serviceName: dev-service
              servicePort: http
    - host: domain.com
      http:
        paths:
          - path: /*
            backend:
              # This service lives in the 'dev' namespace and is of type ExternalName.
              # Its final purpose is to point to the real target service living in the 'prod' namespace.
              serviceName: prod-service
              servicePort: http
    - host: www.domain.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: prod-service
              servicePort: http
As GKE requires the service to be a NodePort, I am stuck with prod-service.
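For reference, the cross-namespace prod-service described above would look something like this (a sketch with a hypothetical target name; this ExternalName type is exactly the kind of backend the GKE Ingress refuses):

apiVersion: v1
kind: Service
metadata:
  name: prod-service
  namespace: dev
spec:
  type: ExternalName
  # points at the real service living in the 'prod' namespace
  externalName: prod-service.prod.svc.cluster.local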
Any help will be appreciated.
Thanks a lot
OK, here is what I have been doing. I have only one Ingress, with one backend service pointing to nginx.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  backend:
    serviceName: nginx-svc
    servicePort: 80
In your nginx deployment/controller you can then define a ConfigMap with typical nginx configuration. This way you use one Ingress and target multiple namespaces.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
        listen 80;
        listen [::]:80;
        server_name _;
        location / {
            add_header Content-Type text/plain;
            return 200 "OK.";
        }
        location /segmentation {
            proxy_pass http://myservice.mynamespace.svc.cluster.local:80;
        }
    }
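To reach services in further namespaces, you would add more location blocks inside the default.conf above (a sketch; myservice2 and othernamespace are hypothetical names):

location /reports {
    proxy_pass http://myservice2.othernamespace.svc.cluster.local:80;
}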
And the deployment uses the above nginx config via the ConfigMap:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # podAntiAffinity will not let two nginx pods run on the same node
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nginx
              topologyKey: kubernetes.io/hostname
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-configs
              mountPath: /etc/nginx/conf.d
          livenessProbe:
            httpGet:
              path: /
              port: 80
      # Load the configuration files for nginx
      volumes:
        - name: nginx-configs
          configMap:
            name: nginx-config
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - protocol: "TCP"
      nodePort: 32111
      port: 80
This way you can take advantage of Ingress features like TLS/SSL termination (managed by Google, or via cert-manager), and if you want, you can also keep your more complex configuration inside nginx.
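For example, TLS termination on that single Ingress might look like this (a sketch; the host and secret name are assumptions, with the secret created by hand or by cert-manager):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  tls:
    - hosts:
        - domain.com          # assumed host
      secretName: domain-tls  # assumed TLS secret
  backend:
    serviceName: nginx-svc
    servicePort: 80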
Use @Prata's way, but with one change: do not route prod traffic via nginx; route it directly to the service from the load balancer, and use nginx for non-prod traffic, e.g. staging.
The reason is that the Google HTTPS load balancer uses container-native load balancing (link), which routes traffic directly to healthy pods, which saves hops and is efficient, so why not use it for production?
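Concretely, that split might look like this in the Ingress (a sketch; hosts and service names are assumptions, and note that an Ingress can only reference backends in its own namespace):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
    - host: domain.com           # prod: routed straight to the prod service
      http:
        paths:
          - backend:
              serviceName: prod-service
              servicePort: 80
    - host: staging.domain.com   # non-prod: routed via the nginx proxy
      http:
        paths:
          - backend:
              serviceName: nginx-svc
              servicePort: 80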
One alternative (and probably the most flexible GCP-native) solution for HTTP(S) load balancing is to use standalone NEGs. This requires you to set up all parts of the load balancer yourself (such as URL maps, health checks, etc.).
There are multiple benefits, such as:
One load-balancer can serve multiple namespaces
The same load balancer can integrate other backends as well (like other instance groups outside your cluster)
You can still use container native load balancing
One challenge of this approach is that it is not "GKE native", which means that the routes will still exist even if you delete the underlying service. This approach is therefore best maintained through tools like Terraform, which gives you holistic deployment control over GCP.
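As a starting point, standalone NEGs are requested through an annotation on the Service (a sketch; the service name, selector and port mapping are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: myservice
  annotations:
    # ask GKE to create a standalone NEG for port 80
    cloud.google.com/neg: '{"exposed_ports": {"80": {}}}'
spec:
  type: ClusterIP
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080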
I'm currently using the nginx ingress to expose my apps to the outside; my current approach is shown below. My question: is this the best way to do it, and if not, what would be the best practice?
nginx ingress controller service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: LoadBalancer
  ports:
    - port: 80
      name: http
      protocol: "TCP"
    - port: 443
      name: https
      protocol: "TCP"
  selector:
    app: nginx-ingress-lb
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: app-ingress
spec:
  tls:
    - hosts:
        - myservice.example.com
      secretName: sslcerts
  rules:
    - host: myservice.example.com
      http:
        paths:
          - backend:
              serviceName: myservice
              servicePort: 80
            path: /
So basically I have my nginx ingress controller pods running, exposed via a Service of type LoadBalancer, and I have rules defined to determine the routes.
The best approach depends on the environment you will choose to build your cluster, whether you consider using Cloud provider or Bare-metal solutions.
In your example, I guess you are using a cloud provider as the LoadBalancer provisioner, which delivers an external IP address as the entry point for your NGINX Ingress Controller. Therefore, you have quite a good ability to scale your Ingress on demand with various features and options.
I found this article very useful for comparing implementations of the NGINX Ingress Controller on cloud and bare-metal environments.
I am setting up a minimal Kubernetes cluster on localhost on a Linux machine (starting with hack/local-up-cluster from the checked-out repo). In my deployment file I defined an Ingress, which should make the services deployed in the cluster accessible from the outside. Deployment.yml:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: foo-service-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: foo-service
    spec:
      containers:
        - name: foo-service
          image: images/fooservice
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 7778
---
apiVersion: v1
kind: Service
metadata:
  name: foo-service-service
spec:
  ports:
    - port: 7778
  selector:
    app: foo-service
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-gateway-ingress
spec:
  rules:
    - host:
      http:
        paths:
          - path: /foo
            backend:
              serviceName: foo-service-service
              servicePort: 7779
          - path: /bar
            backend:
              serviceName: bar-service-service
              servicePort: 7776
I cannot access the services. kubectl describe shows the following for my ingress:
Name: api-gateway-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/foo foo-service-service:7779 (<none>)
/bar bar-service-service:7776 (<none>)
Annotations:
Events: <none>
Is it because there is no address set for my ingress that it is not visible to the outside world yet?
An Ingress resource is just a definition telling your cluster how to handle ingress traffic. It needs an Ingress controller to actually process these definitions; creating an Ingress resource without having deployed an Ingress controller will have no effect.
From the documentation:
In order for the Ingress resource to work, the cluster must have an Ingress controller running. This is unlike other types of controllers, which typically run as part of the kube-controller-manager binary, and which are typically started automatically as part of cluster creation. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one.
There are several ingress controllers available that you can deploy yourself (typically via a Deployment resource), for example the NGINX ingress controller (which is part of the Kubernetes project) or third-party ingress controllers like Traefik, Envoy, or Voyager.
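Once a controller is deployed, the Ingress should eventually get an address populated, which you can verify (assuming the NGINX ingress controller running in its usual ingress-nginx namespace) with:
kubectl get pods -n ingress-nginx
kubectl get ingress api-gateway-ingress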