EKS admission controller fails to call webhook - kubernetes

Hi, I am attempting to create a set of dynamic admission webhooks (registry whitelisting, mostly for security-context checks). This is the chart that I am using. Everything works fine when deployed to 2 other EKS clusters, but when I deploy it to a more secure cluster that we are setting up (using Bottlerocket OS, among other things), I get the following error:
Error from server (InternalError): Internal error occurred: failed calling webhook "...": failed to call webhook: Post "https://image-admission-controller-webhook.kube-system.svc:443/validate?timeout=2s": context deadline exceeded
I have verified that the service has an endpoint, that the selector label maps to a pod, and that I am able to curl the above URL from a test curl image. What should I do? Thanks!

Needed to add a rule to the security group for the control plane allowing 443 outbound to the RFC1918 ranges, so the API server can reach the webhook pods.
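For reference, a minimal sketch of that rule as CloudFormation YAML (the security group ID and the use of CloudFormation are assumptions here; the same egress rule can be added from the console or the AWS CLI, and repeated for the other RFC1918 blocks):

Resources:
  ControlPlaneWebhookEgress:
    Type: AWS::EC2::SecurityGroupEgress
    Properties:
      GroupId: sg-0123456789abcdef0   # hypothetical ID of the EKS cluster (control plane) security group
      Description: Allow the control plane to call admission webhooks on 443
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443
      CidrIp: 10.0.0.0/8              # also add rules for 172.16.0.0/12 and 192.168.0.0/16 if pods use those ranges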

Related

IBM Cloud: Kubernetes add-on ALB OAuth2 Proxy for App ID integration fails to start

I deployed a containerized app to my IBM Cloud Kubernetes service in a VPC. The app uses App ID for authentication. The deployment pipeline ran successfully. The app seems ready, but when accessing its URL it gives an internal server error (500 status code).
From the Kubernetes dashboard I found that the ALB OAuth2 Proxy add-on is failing. It is deployed, but does not start.
The deployment seems to fail its health checks (the ping is not successful). In the pod logs I found the following as the last (and only) entry:
[provider.go:55] Performing OIDC Discovery...
Other than that, there is not much to go on. Any advice?
Guessing from the missing logs and the failing pings, it seemed related to the network setup. Checking the VPC itself, I found that there was no Public Gateway attached to the subnet. Attaching one allowed outbound traffic, so the OAuth proxy could contact the App ID instance. The app is working as expected now.
Make sure that the VPC subnets allow outbound traffic and have a Public Gateway enabled.

Failed to connect to proxy URL when deploying CloudFormation

I am attempting to deploy a CloudFormation template, but the Internet Gateway resource fails with an encoded error that decodes to:
Failed to connect to proxy URL: "http://127.0.0.1:10080"
What proxy am I missing that would prevent an Internet Gateway from being created?
It turns out the proxy error is misleading. The real issue is that the user deploying the CloudFormation template did not have the correct permissions.
I granted the user the AmazonVPCFullAccess policy, and the template was deployed correctly.
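For context, the failing part of the template is typically just the gateway and its attachment (a minimal sketch; the logical names and the VPC reference are placeholders), and creating or attaching these requires EC2/VPC permissions such as those granted by AmazonVPCFullAccess:

Resources:
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  GatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref MyVPC                     # placeholder reference to the VPC defined elsewhere in the template
      InternetGatewayId: !Ref InternetGateway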

gRPC service inside Kubernetes works but fails with a gRPC protocol error when we use Istio

I have server-to-server calls using gRPC (with .NET Core 5). It works and has been tested locally.
After that, I moved all the services to Kubernetes pods (Docker Desktop) and tested the whole flow there (with a Swagger POST call); it works there too.
Now, for monitoring, I added Istio and added the label "istio-injection=enabled" to my namespace,
then restarted all my pods; each pod now has 2 containers.
I tested the basic services (again via Swagger) and they work. But when it comes to the gRPC call, it fails on the caller side with:
Grpc.Core.RpcException: Status(StatusCode="Unavailable", Detail="upstream connect error or disconnect/reset before headers. reset reason: protocol error")
I checked the logs on the gRPC server side and there is no trace of this call; the service is just running. So I am thinking the error comes from the caller side, which is unable to reach (or make the call to) the gRPC server.
The error detail:
Grpc.Core.RpcException: Status(StatusCode="Unavailable", Detail="upstream connect error or disconnect/reset before headers. reset reason: protocol error")
at Basket.API.GrpcServices.DiscountGrpcService.GetDiscount(String productName) in /src/Services/Basket/Basket.API/GrpcServices/DiscountGrpcService.cs:line 21
at Basket.API.Controllers.BasketController.UpdateBasket(ShoppingCart basket) in /src/Services/Basket/Basket.API/Controllers/BasketController.cs:line 47 at lambda_method7(Closure , Object )
at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.AwaitableObjectResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Obje
Again, I removed Istio and tested, and it started working again (without changing anything). I added Istio back and it started failing again. All other services work with Istio, but not this call (it is the only gRPC call I have).
I found a solution at https://istiobyexample.dev/grpc/, which describes the missing piece.
Istio recommends using the app name and version as labels, but more importantly, when working with gRPC, the Service that exposes the gRPC port needs to have the port named grpc.
I added that, restarted the service, and it started working as expected.
Again, it's not something I resolved myself; all credit goes to https://istiobyexample.dev/grpc/.
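In practice, the fix is just the port name on the Kubernetes Service in front of the gRPC server. A minimal sketch (the service name, labels, and port numbers are placeholders, not the poster's actual manifest):

apiVersion: v1
kind: Service
metadata:
  name: discountgrpc            # placeholder name for the Service fronting the gRPC server
  labels:
    app: discountgrpc
    version: v1                 # the name/version labels Istio recommends
spec:
  selector:
    app: discountgrpc
  ports:
    - name: grpc                # naming the port grpc (or grpc-something) lets Istio treat the traffic as gRPC/HTTP2
      port: 8080
      targetPort: 8080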

Kubernetes and Authorization header stripped

Project:
Deploy a staging API (Symfony) on a Kubernetes cluster on GCloud
with its services (MariaDB, RabbitMQ ...)
Issue:
All Pods and Services start correctly.
Access to the API from outside is problematic:
If I deploy the API via a LoadBalancer Service, the API is accessible, but the "Authorization" header is always stripped, which makes the API unusable.
If I deploy the API via an Nginx Ingress, everything looks correctly wired (the Ingress is properly linked to the Service and the API pods) and I receive an external IP, but when I access this IP the site is unreachable (requests are lost and never arrive at the servers).
If you are using Apache with CGI/FastCGI, then you might get an error message about missing authorization headers. This is because Apache does not, by default, pass authorization headers to PHP.
The Fix
You need to edit your Apache site configuration and add a line inside the <VirtualHost> directive of your vhost.
<VirtualHost>
# ...
SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1
# ...
</VirtualHost>

Kubernetes - HTTPS communication between services

I have a few services running in multiple namespaces.
My deployment is as follows.
Ingress -> Service(ClusterIP) -> Pods
My application runs over HTTPS due to some restrictions, and the Ingress also runs over HTTPS. I have different certificates in the two places.
I am trying to find the different ways of communicating between services.
If both services are running in the same namespace, these are the options I see:
1. Using the Ingress URL - this is meant for connecting from outside the cluster, but it can also be used within the cluster:
https://<INGRESS_NAME>.<NAMESPACE>.ing.lb.<CLUSTER_NAME>.XYZ.com/
2. Using the service URL:
https://<SVC_NAME>.<NAMESPACE>.svc.int.<CLUSTER_NAME>.XYZ.com/
3. Using just the svc name:
https://SVC_NAME:PORT
4. Using the svc name and namespace name:
https://SVC_NAME.NAMESPACE:PORT
Is there any other way of connecting?
Also, my application runs over HTTPS and the Ingress is HTTPS as well. When I connect using https://<SVC_NAME>:<PORT>, I get a certificate error:
Caused by: javax.net.ssl.SSLPeerUnverifiedException: Host name '<SERVICE_NAME>' does not match the certificate subject provided by the peer.
Do I need to include all these names (like URL 2, URL 3, and URL 4) in the cert?
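In general, yes: TLS hostname verification checks the name the caller actually used against the certificate's subject alternative names, so whichever of those URLs callers use must appear in the certificate served by the pod. As a sketch only, assuming the in-cluster certificate were issued with cert-manager (an assumption; the question does not say how certificates are issued), the extra names would go into dnsNames:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-svc-cert                  # placeholder
  namespace: NAMESPACE
spec:
  secretName: my-svc-tls             # Secret the pod mounts and serves
  dnsNames:
    - SVC_NAME                       # URL 3
    - SVC_NAME.NAMESPACE             # URL 4
    - SVC_NAME.NAMESPACE.svc.int.CLUSTER_NAME.XYZ.com   # URL 2
  issuerRef:
    name: my-issuer                  # placeholder issuer
    kind: Issuer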