I am creating an Istio service mesh and then trying to call an external service from a pod inside the mesh.
I followed the steps in
https://istio.io/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/
up to step 2 ("Verify that your ServiceEntry was applied correctly by sending a request to http://edition.cnn.com/politics"), but in place of "edition.cnn.com" I used my own service.
When I run curl inside my pod, I get the error below.
[2020-02-02T10:02:52.465Z] "GET / HTTP/1.1" 503 UF,URX "-" "-" 0 91 150 - "-" "curl/7.58.0" "fafa8680-bdf1-468a-b50f-1a4430707ceb" "service.abc.com" "173.25.13.66:80" outbound|80||service.abc.com - 173.25.13.66:80 10.44.0.6:47544 - default
I can ping service.abc.com, but how do I debug this error and get more logs for analysis? Since the link above did not mention creating mTLS settings or destination rules, I did not create them.
Note: I am not facing any issue with edition.cnn.com; the problem occurs only with my own service, which is external to the mesh and runs on another server within my company network.
Does service.abc.com support only HTTP, only HTTPS, or both? Is it configured to redirect HTTP to HTTPS? If you hit the endpoint over HTTP and it is neither listening on port 80 nor redirecting HTTP to HTTPS, a 503 is expected.
If you follow all the steps through step 5 in the doc, and assuming service.abc.com is an HTTPS service, it should work as expected: at step 5, even though you send an HTTP request, the Istio egress gateway performs TLS origination and converts it to HTTPS before sending the request out to service.abc.com.
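For reference, the later steps of that doc include a ServiceEntry and a TLS-originating DestinationRule. With service.abc.com substituted for edition.cnn.com, they look roughly like the following (a sketch adapted from the doc's shape, not verified against your setup; ports 80/443 are assumptions about your service):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: service-abc
spec:
  hosts:
  - service.abc.com
  ports:
  - number: 80
    name: http-port
    protocol: HTTP
  - number: 443
    name: https-port
    protocol: HTTPS
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-tls-service-abc
spec:
  host: service.abc.com
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE   # originate TLS toward service.abc.com
```

Without the DestinationRule, traffic leaves the mesh as plain HTTP, which would explain a 503 if service.abc.com only serves HTTPS.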
Related
I have an OpenShift 4.6 platform running an application pod.
We use Postman to send requests to the pod.
The application pod returns a 200 HTTP response code, but Postman receives a 502.
So there must be an intermediate component inside OpenShift/K8s that transforms the 200 into a 502.
Is there a way to debug/trace more information on egress?
Thanks
Nicolas
The HTTP 502 error is likely returned by the OpenShift Router that is forwarding your request to your application.
In practice, this usually means the OpenShift Router (HAProxy) sends the request to your application and either receives no answer at all or an unexpected one.
So I would recommend checking your application's logs for errors and verifying that your application returns a valid HTTP response. You can test this by running curl localhost:<port> from inside your application Pod to see whether a response comes back.
I have a microservice architecture (implemented in Spring Boot) deployed in Google Kubernetes Engine. For this microservice architecture I have set up the following:
domain: comanddev.tk (free domain from Freenom)
a certificate for this domain
the following Ingress config:
The problem is that when I invoke a URL that I know should be working, https://comanddev.tk/customer-service/actuator/health, the response I get is ERR_TIMED_OUT. I checked the Ingress Controller and no request reaches the ingress, even though URL forwarding is set.
Update: I tried to set a "glue record" like in the following picture; the response I get is that the certificate is not valid (I have a certificate for comanddev.tk, not dev.comanddev.tk), and I get a 401 after agreeing to access the insecure URL.
I've dug into this a bit.
As I mentioned, when you run $ curl -IL http://comanddev.tk/customer-service/actuator/health you will receive the nginx ingress response.
Since the domain forwarder intercepts the request and redirects it to the destination server, I am not sure there is any point in using TLS.
I would suggest using nameservers instead of URL forwarding: just point an A record at the IP of your Ingress. That way requests go directly to your Ingress. With URL forwarding you are relying on Freenom's redirection, and I am not sure how it is handled on their side.
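Once DNS points straight at the Ingress IP, an Ingress of roughly the following shape should serve the health endpoint over the comanddev.tk certificate. This is only an illustrative sketch (the question's actual Ingress config wasn't shown); the secret name, service name, and port are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: comanddev-ingress
spec:
  tls:
  - hosts:
    - comanddev.tk
    secretName: comanddev-tls      # assumed secret holding the comanddev.tk certificate
  rules:
  - host: comanddev.tk
    http:
      paths:
      - path: /customer-service
        pathType: Prefix
        backend:
          service:
            name: customer-service  # assumed service name
            port:
              number: 8080          # assumed service port
```

Note that the certificate only matches comanddev.tk, so the host rule must use that exact name (not dev.comanddev.tk).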
I'm facing an issue caused by a bad API design: a huge request URI (11361 characters long) with a lot of parameters is being blocked (or maybe badly routed) by the Envoy proxy sidecar, which throws a "connection reset by peer". The request goes from a pod to a service inside the same K8s cluster.
If I remove the Envoy sidecar from the caller pod, the API call works normally, even though the destination is still deployed with its Envoy sidecar.
This is the log that shows up when I curl with that huge URI:
"- - -" 0 UF,URX "-" "-" 0 0 1000 - "-" "-" "-" "-" "10.10.146.112:80" outbound|80||default-http-backend.default.svc.cluster.local
The destination shown in the log is a fallback internal service.
I've tried to add an EnvoyFilter that increases the max header size, but I think those rules apply to incoming calls, not outgoing requests.
The issue shows up on both Istio 1.4.0 and 1.5.8.
Any ideas for a workaround while the dev team does the refactoring?
Thank you very much :)
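If the filter you tried only matched inbound listeners, targeting the SIDECAR_OUTBOUND context may be what's missing. A sketch of such an EnvoyFilter for the Istio 1.4/1.5 API (the name, namespace, and 96 KiB limit are placeholders; verify the typed_config version against your Istio release):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: raise-max-request-headers   # placeholder name
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: SIDECAR_OUTBOUND     # apply to the caller sidecar's outbound listeners
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager"
          max_request_headers_kb: 96   # placeholder limit; the URI counts toward header size
```

Since the request URI is carried in the :path header, Envoy's request-header limit (60 KiB by default at the listener, but lower effective limits can apply) is what an oversized URI runs into on the outbound path.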
I cannot for the life of me get the AWS API Gateway HTTP Proxy to work, i.e. redirect http://<my-domain>.com to https://<my-domain>.com. Here is how I set it up:
Using the Test functionality on the ANY method inside the resource works. But if I simply run curl http://<my-domain>.com or open http://<my-domain>.com in Chrome, it fails to connect; https://<my-domain>.com works just fine. I'm driving myself crazy trying to figure out what I'm missing here; it seems like it should just redirect http://<my-domain>.com to https://<my-domain>.com, but it doesn't (even on different devices).
So, it turns out that API Gateway's HTTP Proxy allows HTTPS traffic to go to an HTTP endpoint, but not the reverse. In fact, API Gateway won't even establish a connection on port 80; from the FAQ:
Q: Can I create HTTPS endpoints?
Yes, all of the APIs created with Amazon API Gateway expose HTTPS
endpoints only. Amazon API Gateway does not support unencrypted (HTTP)
endpoints.
API Gateway doesn't support unencrypted HTTP traffic. Here are possible options to secure your website:
If you have access to the server that hosts the website, install an SSL certificate on the web server.
If the website is hosted on EC2, you can set up a load balancer and let it do the SSL termination.
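For the first option, the usual pattern is to terminate TLS on the web server and do the HTTP-to-HTTPS redirect there yourself. A minimal nginx sketch (the domain and certificate paths are placeholders):

```nginx
server {
    listen 80;
    server_name example.com;                   # placeholder domain
    return 301 https://$host$request_uri;      # redirect all plain HTTP to HTTPS
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.crt;    # placeholder cert path
    ssl_certificate_key /etc/ssl/private/example.key;  # placeholder key path
    # ... site configuration ...
}
```

This keeps port 80 open only to issue the redirect, which is exactly the step API Gateway cannot do for you.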
I'm pretty new to Kubernetes; I hope I can explain myself well, and any resources or suggestions for my problem would be much appreciated.
Let's get straight to the point.
The web app I'm trying to expose accepts only HTTPS connections on the service, so basically I would like the ingress to communicate with my service over HTTPS.
Following a tutorial, I tried to expose a simple web app (one that accepts HTTP connections) through HTTPS, creating a certificate and a secret and adding the following lines to the ingress.yml:
tls:
- secretName: testexample.com
hosts:
- testexample.com
Executing curl -k https://testexample.com, or visiting it in a browser, I can see my web page.
The trouble pops up when the web app accepts only HTTPS connections, and the web app we are moving to Kubernetes does.
I always receive a "404 default backend" message.
I tried to look at some resources/tutorials/previous questions:
Secure communication between Ingress Controller (Traefik) and backend service on Kubernetes
Securing connections from ingress to services in Kubernetes with TLS
but I didn't figure out how to solve the problem.
Any suggestions as mentioned before would be much appreciated.
The source of the error is probably your Ingress rule: either it is not pointing to the correct service and port, or it is not in the same namespace as the service.
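If you are running the NGINX ingress controller (an assumption; the question doesn't say which controller is in use), the standard way to make the ingress talk HTTPS to the backend is the backend-protocol annotation. A minimal sketch using the hostname and secret name from the question, with a placeholder service name and port:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testexample-ingress
  annotations:
    # tell ingress-nginx to use HTTPS when connecting to the backend service
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - testexample.com
    secretName: testexample.com
  rules:
  - host: testexample.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp      # placeholder: your service name
            port:
              number: 443     # placeholder: the port your service serves HTTPS on
```

Without that annotation, ingress-nginx speaks plain HTTP to the backend, which an HTTPS-only app will reject, and a missing or mismatched rule falls through to the "404 default backend" you are seeing.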