Tried creating a Kubernetes Endpoints-backed Service to invoke a resource hosted outside the cluster via static IPs over HTTPS.
Below is the endpoint code:
kind: Service
apiVersion: v1
metadata:
  name: serviceRequest
spec:
  ports:
  - port: 8081
    targetPort: 8094
---
kind: Endpoints
apiVersion: v1
metadata:
  name: serviceRequest
subsets:
- addresses:
  - ip: XX.XX.XX.XX // **external IP which is accessible as https://XX.XX.XX.XX:8094**
  ports:
  - port: 8094
But the above configuration gives a 400 Bad Request with the message "This combination of host and port requires TLS."
The same setup works when the external IP is exposed over HTTP, but not over HTTPS. Could someone please guide me on how to achieve this?
Update 1
This is how the flow is configured.
Ingress->service->endpoints
This is the error message you get when calling an HTTPS endpoint with plain HTTP. Are you sure that whoever is calling your service is calling it with https:// at the beginning?
A Kubernetes Service is no more than a set of forwarding rules in iptables (most often), and it knows nothing about TLS.
If you want to enforce HTTPS redirection you can use an ingress controller for this. All major ingress controllers have this capability.
For example, check for nginx-ingress.
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-side-https-enforcement-through-redirect.
Basically, all you need is to add this annotation to your ingress rule.
nginx.ingress.kubernetes.io/ssl-redirect: "true"
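For context, a minimal Ingress sketch carrying that annotation might look like the following (the Ingress name and host are hypothetical; the backend assumes the serviceRequest Service from the question on port 8081):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service-request-ingress   # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # redirect plain-HTTP clients to HTTPS before they reach the Service
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
  - host: service-request.example.com   # hypothetical host
    http:
      paths:
      - path: /
        backend:
          serviceName: serviceRequest
          servicePort: 8081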
Easy peasy, just add port 443 to the Service; that will make the request TLS/HTTPS:
kind: Service
apiVersion: v1
metadata:
  name: serviceRequest
spec:
  ports:
  - port: 443 # <-- this is the way
    targetPort: 8094
---
kind: Endpoints
apiVersion: v1
metadata:
  name: serviceRequest
subsets:
- addresses:
  - ip: XX.XX.XX.XX # **external IP which is accessible as https://XX.XX.XX.XX:8094**
  ports:
  - port: 8094
So you can reach your serviceRequest from your containers at the https://serviceRequest URL.
Also keep in mind that in YAML the comment sign is the # character, not //.
Related
I have an ingress file where I am forwarding requests to pods using the service name, but I have a scenario where a few requests with path /abc* need to be forwarded to an IP-based URL, say http://10.10.1.1:8080/. How can I do this using an Ingress in Kubernetes? I am using AWS EKS as my Kubernetes platform.
You can create a Service backed by an Endpoints object for that (an Ingress rule pointing /abc at this Service is sketched after the manifests):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
- addresses:
  - ip: 10.10.1.1
  ports:
  - port: 8080
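A hedged sketch of the Ingress rule that would route /abc to that Service (the Ingress name and host are hypothetical, and an nginx ingress controller is assumed; with the AWS ALB controller on EKS the annotations would differ):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: abc-ingress   # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: my-app.example.com   # hypothetical host
    http:
      paths:
      - path: /abc   # requests under /abc go to the selector-less Service above
        backend:
          serviceName: my-service
          servicePort: 8080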
I have a website that needs to be proxied through my web app.
Traditionally we've accomplished it via apache proxy with proxy directives.
The proxy also rewrites some of the headers and adds a couple of new ones.
Now the app has moved to OpenShift (Kubernetes) and I'm trying to avoid deploying another pod with apache.
Can I perform this header rewriting and proxying via a K8s Ingress or Router?
I've tried this approach, but it didn't work.
I also don't know how to get the OpenShift Ingress logs; nothing seems to happen there.
I tried using an external name, but it doesn't work:
kind: Service
apiVersion: v1
metadata:
  name: es3
spec:
  externalName: google.com
  type: ExternalName
---
kind: Route
apiVersion: route.openshift.io/v1
spec:
  host: host.my-cluster-url.net
  to:
    kind: Service
    name: es3
  port:
    targetPort: es3
I also tried using Endpoints, with the same result:
apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  ports:
  - name: app
    port: 80
    protocol: TCP
    targetPort: 80
  clusterIP: None
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mysvc
subsets:
- addresses:
  - ip: my.ip.address
  ports:
  - name: app
    port: 80
    protocol: TCP
You want to proxy a non-Kubernetes service, right? If yes, define an Endpoints object and create a Service from it. I have used this with Kubernetes; my guess is it will work with OpenShift too.
https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/
Is it possible to configure the k8s nginx-ingress as an LB so that a K8s Service actively connects to an external backend hosted on external hosts/ports (where only one backend is enabled at a time, connecting back to the cluster service)?
Similar to envoy proxy? This is on vanilla K8s, on-prem.
So rather than balance load from
client -> cluster -> service.
I am looking for
service -> nginx-ingress -> external-backend.
Define a Kubernetes Service with no selector. Then define an Endpoints object, where you put the IP and port. Normally you do not define Endpoints for Services, but because the Service has no selector you need to provide an Endpoints object with the same name as the Service.
Then you point the Ingress to the Service.
Here's an example that exposes an Ingress on the cluster and sends the traffic to 192.168.88.1 on TCP 8081.
apiVersion: v1
kind: Service
metadata:
  name: router
  namespace: default
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8081
---
apiVersion: v1
kind: Endpoints
metadata:
  name: router
  namespace: default
subsets:
- addresses:
  - ip: 192.168.88.1
  - ip: 192.168.88.2 # As per question below
  ports:
  - port: 8081
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: router
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: my-router.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: router
          servicePort: 80
While defining the Ingress, use the nginx.ingress.kubernetes.io/configuration-snippet annotation. Also enable the PROXY protocol using use-proxy-protocol: "true".
Using this annotation you can add additional configuration to the NGINX location.
Please take a look: ingress-nginx-issue, advanced-configuration-with-annotations.
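As a rough sketch (the header name and value here are placeholders, and the controller ConfigMap name and namespace depend on how ingress-nginx was installed):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: router
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # extra NGINX directives appended to the generated location block
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Custom-Header "some-value";
spec:
  rules:
  - host: my-router.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: router
          servicePort: 80
---
# PROXY protocol is enabled globally in the controller ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"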
I've setup a K8S-cluster in GKE and installed RabbitMQ (from the marketplace) and Istio (via Helm). I can access rabbitMQ from pods until I enable the envoy proxy to be injected into these pods, but after that the traffic will not reach rabbitMQ, and I can't figure out how to enable traffic to the rabbitmq service.
There is a service rabbitmq-rabbitmq-svc (in the rabbitmq namespace) that is of type LoadBalancer.
I've tried a simple busybox pod when I don't have Envoy running, and then I have no trouble telnetting to RabbitMQ (port 5672), but as soon as I try with automatic Envoy injection, Envoy blocks the traffic.
I tried unsuccessfully to add a DestinationRule. (I've added a rule but it makes no difference)
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rabbitmq-rabbitmq-svc
spec:
  host: rabbitmq.rabbitmq.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
It seems like it should be a simple solution, but I can't figure it out... :/
UPDATE
Turns out it was a simple error in the hostname, ended up using this and it works:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rabbitmq-rabbitmq-svc
spec:
  host: rabbitmq-rabbitmq-svc.rabbitmq.svc.cluster.local
Turns out it was a simple error in the hostname, the correct one was rabbitmq-rabbitmq-svc.rabbitmq.svc.cluster.local
The only thing I needed to do to get RabbitMQ clusters to work within Istio was to annotate the RabbitMQ pods as follows:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
spec:
  override:
    statefulSet:
      spec:
        template:
          metadata:
            annotations:
              # annotate rabbitMQ pods to only redirect traffic on ports 15672 and 5672 to Envoy proxy sidecars.
              traffic.sidecar.istio.io/includeInboundPorts: "15672, 5672"
              traffic.sidecar.istio.io/includeOutboundPorts: "15672, 5672"
For some reason the exclude port annotations weren't working so I just flipped it by using include port annotations. In my case, the global Istio config is controlled by another team in the company so perhaps there's a clash when trying to use the exclude port annotations.
I may have encountered the same problem as you before. My app could connect to RabbitMQ through Envoy after I declared epmd (port 4369) in the RabbitMQ Service.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  type: ClusterIP
  ports:
  - port: 5672
    targetPort: 5672
    name: message
  - port: 4369
    targetPort: 4369
    name: epmd
  - port: 15672
    targetPort: 15672
    name: management
  selector:
    app: rabbitmq
(Using Istio 0.5.1, kubectl 1.9.1/1.9.0 for client/server, minikube 0.25.0)
I'm trying to get Istio EgressRules to work with Kubernetes Services, but having some trouble.
I tried to set up EgressRules 3 ways:
- An ExternalName service which points to another domain (like www.google.com)
- A Service with no selector, but an associated Endpoint object (for services that have an IP address but no DNS name)
- (for comparison) No Kubernetes service, just an EgressRule
I figured I could use the FQDN of the kubernetes service as the HTTP-based EgressRule destination service (like ext-service.default.svc.cluster.local), and this is what I attempted for both an ExternalName service as well as a Service with no selectors but an associated Endpoints object.
For the former, I created the following yaml file:
kind: Service
apiVersion: v1
metadata:
  name: ext-service
spec:
  type: ExternalName
  externalName: www.google.com
---
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: ext-egress-rule
spec:
  destination:
    service: ext-service.default.svc.cluster.local
  ports:
  - port: 443
    protocol: https
For the latter, I created this yaml file (I just pinged google and grabbed the IP address):
kind: Endpoints
apiVersion: v1
metadata:
  name: ext-service
subsets:
- addresses:
  - ip: 216.58.198.78
  ports:
  - port: 443
---
kind: Service
apiVersion: v1
metadata:
  name: ext-service
spec:
  ports:
  - protocol: TCP
    port: 443
    targetPort: 443
---
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: ext-service-egress-rule
spec:
  destination:
    service: ext-service.default.svc.cluster.local
  ports:
  - port: 443
    protocol: https
In both cases, in the application code, I access:
http://ext-service.default.svc.cluster.local:443
My assumption is that the traffic will flow like:
[[ app -> envoy proxy -> (tls origination) -> kubernetes service ]] -> external service
where [[ ... ]] is the boundary of the service mesh (and also the Kubernetes cluster)
Results:
The ExternalName Service almost worked as expected, but it brought me to Google's 404 page (and sometimes the response just seemed empty, not sure how to replicate one or the other specifically)
The Service with the Endpoint object did not work, instead printing this message (when making the request via Golang, but I don't think that matters):
Get http://ext-service.default.svc.cluster.local:443: EOF
This also sometimes gives an empty response.
I'd like to use Kubernetes services (even though it's for external traffic) for a few reasons:
You can't use an IP address for the EgressRule's destination service. From Egress Rules configuration: "The destination of an egress rule ... can be either a fully qualified or wildcard domain name".
For external services that don't have a domain name (some on-prem legacy/monolith service without a DNS name), I'd like the application to be able to access them not by IP address but by a kube-dns (or Istio-related similar) name.
(related to previous) I like the additional layer of abstraction that a Kubernetes service provides, so I can change the underlying destination without changing the EgressRule (unless I'm mistaken and this isn't the right way to architect this). Is the EgressRule meant to replace Kubernetes services for external traffic entirely and without creating additional Kubernetes services?
Using https:// in the app code isn't an option because then the request would have to disable TLS verification since the kube-dns name doesn't match any on the certificate. It also wouldn't be observable.
If I use the following EgressRule (without any Kubernetes Services), accessing Google via http://www.google.com:443 works fine, getting the exact html representation that I expect:
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: google-egress-rule
spec:
  destination:
    service: www.google.com
  ports:
  - port: 443
    protocol: https
I saw there's a TCP EgressRule, but I would rather not have to specify rules for each block of IPs. From TCP Egress: "In TCP egress rules as opposed to HTTP-based egress rules, the destinations are specified by IPs or by blocks of IPs in CIDR notation.".
Also, I would still like the HTTP-based observability that comes from L7 instead of L4, so I'd prefer an HTTP-based egress. (With TCP Egresses, "The HTTPS traffic originated by the application will be treated by Istio as opaque TCP").
Any help getting a Kubernetes service as "destination service" of an EgressRule (or help understanding why this isn't necessary if that's the case) is appreciated. Thanks!
The solution is:
Define a Kubernetes ExternalName service to point to www.google.com
Do not define any EgressRules
Create a RouteRule to set the Host header.
In your case, define an ExternalName service with the port and the protocol:
kind: Service
apiVersion: v1
metadata:
  name: ext-service
spec:
  type: ExternalName
  externalName: www.google.com
  ports:
  - port: 80
    # important to set protocol name
    name: http
---
Define an HTTP Rewrite Route Rule to set the Host header:
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: externalgoogle-rewrite-rule
  #namespace: default
spec:
  destination:
    name: ext-service
  rewrite:
    authority: www.google.com
---
Then access it with curl, for example: curl ext-service
Without the Route Rule, the request will arrive to google.com, with the Host header being ext-service. The web server does not know where to forward such a request since google.com does not have such a virtual host. This is what you experienced:
it brought me to Google's 404 page