How to access a backend microservice (HTTP) from a frontend microservice (HTTPS via AWS ALB) in Kubernetes (EKS)?

Here's the context about resources in my EKS cluster:
The backend microservice is exposed via a Service on port 80.
The frontend microservice is exposed via a Service on port 80.
The frontend calls the backend using the FQDN http://backend-service.backend.svc.cluster.local for REST API calls.
Additionally, I also have an Ingress object that created an Application Load Balancer for the frontend microservice.
The load balancer has an ACM SSL certificate attached.
https://my-website.com sends the traffic to the frontend service, which in turn calls the HTTP backend microservice for REST API calls.
But I'm getting the following error:
The page at https://my-website.com was loaded over HTTPS but requested an insecure XMLHttpRequest endpoint http://backend-service.backend.svc.cluster.local
How do I fix this error? I don't want to expose my backend service to the outside world with a load balancer or anything.
Edit:
My frontend is written in React.js.
My backend is a Node.js Express app.
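One common way to address this (a sketch, not from the original post) is to have the React app call a relative path such as /api on the same HTTPS origin, and let the same ALB route that path to the backend Service, so the cluster-internal HTTP URL never reaches the browser. Assuming the /api prefix and the service names used above, an additional Ingress rule could look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-api
  namespace: backend
  annotations:
    # Reuse the same ALB as the frontend Ingress (group name is an assumption)
    alb.ingress.kubernetes.io/group.name: my-website
spec:
  ingressClassName: alb
  rules:
    - host: my-website.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend-service
                port:
                  number: 80

The React code would then call fetch('/api/...') on the HTTPS origin instead of the cluster-internal FQDN.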

Related

HTTP/2 client preface string missing or corrupt for gRPC client in Kubernetes making call to local service using Telepresence

I am trying to prepare an environment for integration testing of a Spring Boot application running inside a Kubernetes cluster. I am using Telepresence, which intercepts traffic (gRPC APIs) in the Kubernetes cluster and routes it to the application running locally in my IDE (IntelliJ).
The Spring Boot application in Kubernetes listens for gRPC calls on port 9090 and is exposed via a ClusterIP service. I am trying to intercept gRPC traffic to this application and route it to the locally running application, which listens on port 9095, using the Telepresence intercept command below:
telepresence intercept service-name --namespace=ns --port 9095:9090 --env-file C:\Users\SC1063\telepresence\env_files\intercept-config.env
On receiving the gRPC call, my local application throws the following exception:
io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception: HTTP/2 client preface string missing or corrupt. Hex dump for received bytes: 1603010200010001fc0303ffd1d5efdfb5771b509014337a
From the question Spring boot + GRPC Http2Exception I understand that the call from the client application running in Kubernetes is trying to secure the communication with TLS, whereas the non-intercepted gRPC calls within Kubernetes work without any problem.
The application environment uses Istio as the service mesh.
Error observed in the client logs
upstream connect error or disconnect/reset before headers. retried and the latest reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER]
The root cause of the issue is that the client applies TLS before sending the request to the server, whereas the server expects plaintext.
The Istio service mesh secures external outbound traffic (traffic flowing outside the K8s cluster) with TLS unless it is DISABLED.
Create an Istio DestinationRule, which the Envoy proxy uses to DISABLE TLS while routing the traffic:
spec:
  trafficPolicy:
    tls:
      mode: DISABLE
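For reference, a complete DestinationRule might look like the sketch below; the name, namespace, and host are placeholders for the intercepted service:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: service-name-plaintext
  namespace: ns
spec:
  host: service-name.ns.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE   # send plaintext instead of (m)TLS to this host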

AWS APIGateway to Istio ALB to EKS Workloads

We have a set of microservice APIs hosted on AWS EKS behind the Istio Service Mesh (which is exposed as an ALB ingress).
We have two ALB ingresses for Istio, one meant for external traffic (from the internet) and one meant for internal traffic (within the VPC).
The APIs are mostly meant for internal traffic. We also want to create an AWS APIGateway route to the internal Istio ALB for these APIs (the APIGateway will manage authentication).
Here are the steps we have completed:
We are using the AWS HTTP API Gateway. We can't use the REST API Gateway, since that only works with NLBs and we have an ALB for our Istio workloads.
We have created a VPC link to allow the HTTP API Gateway to reach our internal ALBs.
We can see that the request reaches the Istio Envoy service from the API Gateway but is not forwarded further. This is because the API Gateway hits our ALB without passing any Host header, so Istio doesn't know where to send the request.
So, how do we achieve the following:
Have multiple internal APIs hosted over a single ALB routed from AWS APIGateway?
Ensure Istio forwards the request from APIGateway to the appropriate service?
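One possible approach (a sketch, not from the original post) is to let the Istio routing match any host and route on the path prefix instead, so each internal API gets its own prefix behind the single internal ALB. The gateway name, service names, and prefixes below are assumptions:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: internal-apis
  namespace: istio-system
spec:
  hosts:
    - "*"                       # accept whatever Host the API Gateway sends
  gateways:
    - internal-gateway          # the Istio Gateway behind the internal ALB
  http:
    - match:
        - uri:
            prefix: /orders
      route:
        - destination:
            host: orders.apps.svc.cluster.local
            port:
              number: 80
    - match:
        - uri:
            prefix: /payments
      route:
        - destination:
            host: payments.apps.svc.cluster.local
            port:
              number: 80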

Microservices API gateway and Identity Server 4 on Kubernetes

I have microservices and an SPA app. All of them run on Docker with docker-compose. I have an Ocelot API gateway, but the gateway has to know the IP addresses or container names of the microservices to reach them. I added an aggregator service inside the Ocelot app, and I can reach all services from the aggregator service via their IPs.
But I want to move to Kubernetes. I can scale services there, so there is no static IP. How can I configure this?
I also have an identity service. This service knows the clients' IP addresses. Again, the same problem.
I searched for hours and found some keywords: Envoy, Ingress, Consul, Ocelot. Can someone explain these things?
It sounds like your question is related to Service Discovery.
In Kubernetes, the native way an "API Gateway" is implemented is with Ingress resources and Ingress Controllers. If you use a cloud provider, they usually have a product for this, or you can deploy a custom one within the cluster.
Service Discovery the Kubernetes way is done by referring to Service resources; for example, the Ingress resource maps URLs (in your public API) to Services. Your app is deployed as a Deployment resource, and all replicas (instances) are exposed via a Service resource. An app can also send requests to other apps, and it should address such a request to the Service resource of the other app. The Service resource load-balances across the replicas of the receiving app.
You can use the service name to connect to the service instead of the client IP, for example:
curl http://<service-name>.<namespace>.svc.cluster.local
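As an illustration, here is a minimal Service sketch (names are assumed) that makes such a DNS name resolvable inside the cluster and load-balances across the Deployment's replicas:

apiVersion: v1
kind: Service
metadata:
  name: orders-service        # becomes orders-service.<namespace>.svc.cluster.local
  namespace: shop
spec:
  selector:
    app: orders               # matches the pod labels of the Deployment
  ports:
    - port: 80                # port clients connect to
      targetPort: 8080        # container port of the app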
Now, if you are looking for a list of API gateways and identity servers for Kubernetes,
there are several options; it all depends on the requirements.
For basic requirements, the NGINX ingress and other ingress controllers are available, while if you are looking for an API gateway:
Kong API gateway
Ambassador API gateway
Tyk API gateway
A service mesh can also be useful, but not in every scenario, because it is mostly used for managing internal (east-west) traffic.
An API gateway is mostly used for managing edge traffic.
List of identity servers:
Keycloak
Cognito IAM (AWS)
Ingress controllers:
GCE ingress
NGINX ingress controller
Kong ingress controller
Gloo
HAProxy
AKS gateway
Istio ingress

Do API gateways such as Zuul or Nginx require backend services to be exposed externally as well?

We are trying to figure out a microservice architecture where we have an API Gateway (Zuul in this case). Do all the services that Zuul redirects requests to also need to be exposed externally? That seems counter-intuitive, as all these services can have private/local/cluster access and the gateway is the one that should be externally exposed. Is this a correct assessment? In what scenarios would you want these backend services to be exposed externally?
Normally, you would not expose your backend services externally. The gateway (or the Ingress) serves as the external entry point and proxies requests to the internal network.
I am familiar with one use case where I expose some services directly: I do not want to expose some admin services running on my cluster to the external world, but I do want to expose them to my VPN. So I have an Ingress forwarding traffic between the external network and the cluster, and NodePort services that expose the admin apps to my VPN.
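For example, a NodePort Service for such an admin app might look like the sketch below (names and ports are assumptions); it is reachable on every node's IP at the given port from the VPN, without going through the public ingress:

apiVersion: v1
kind: Service
metadata:
  name: admin-dashboard
  namespace: admin
spec:
  type: NodePort
  selector:
    app: admin-dashboard
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080          # reachable as <node-ip>:30080 from the VPN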

Deploy REST API as ClusterIP and web app as NodePort

My REST API is deployed as a ClusterIP service.
The web app consuming the REST API is deployed as a NodePort service.
Both are on the same cluster.
When I run my web app, the connection to the REST API (ClusterIP) fails.
ClusterIP is a virtual IP, so it cannot be resolved from outside your cluster. Your web app is used in the browser, so it needs a public URL for the REST API. You can expose the REST API as a NodePort service as well, or you can add an Ingress to your cluster and then route based on the host/path.
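For instance, keeping the REST API as a ClusterIP service and adding an Ingress that routes a path to it could look like this sketch (host, names, and ingress class are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: restapi
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: restapi-service    # the existing ClusterIP service
                port:
                  number: 80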