Istio access to container SSL endpoint - kubernetes

My application is running an SSL Node.js server with mutual authentication.
How do I tell k8s to access the container through HTTPS?
How do I forward the client SSL certificates to the container?
I tried to set up a Gateway and a VirtualService without success; in every configuration I tried I hit a 503 error.

When injected, the Istio sidecar proxy container in the pod automatically handles communication over HTTPS. The application code can continue to use HTTP: the sidecar intercepts the outbound request and "upgrades" it to HTTPS, and the sidecar proxy in the receiving pod then "downgrades" the request back to HTTP for the application container.
Simply put, there is no need to modify any application code. The Istio sidecars proxy requests and responses between Kubernetes pods over TLS/HTTPS.
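If you additionally want to require that all pod-to-pod traffic is encrypted, recent Istio releases (1.5+) can enforce mesh-wide mutual TLS with a PeerAuthentication resource. A minimal sketch (this is the conventional mesh-wide default, not something from the question):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # placing it in the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT   # sidecars reject plaintext traffic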
UPDATE:
If you wish to use HTTPS at the application level, you can tell Istio to exclude certain inbound and outbound ports. To do so, you can add the traffic.sidecar.istio.io/excludeInboundPorts and traffic.sidecar.istio.io/excludeOutboundPorts annotations, respectively, to the Kubernetes deployment YAML.
Example:
...
spec:
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      annotations:
        traffic.sidecar.istio.io/excludeInboundPorts: "443"
        traffic.sidecar.istio.io/excludeOutboundPorts: "443"
      labels:
...

Related

Connect to gRPC service via Kubernetes API server proxy?

Let's say we have a Kubernetes service which serves both a RESTful HTTP API and a gRPC API:
apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: grpc
We want to be able to reach those service endpoints externally, for example from another Kubernetes cluster.
This could be achieved by changing the service type from ClusterIP to LoadBalancer. However, let's assume that this is not desirable, for example because it requires additional public IP addresses.
An alternative approach would be to use the apiserver proxy which
connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
This works with the http endpoint. For example, if the http API exposes an endpoint /api/foo, it can be reached like this:
http://myapiserver/api/v1/namespaces/mynamespace/services/myservice:http/proxy/api/foo
Is it somehow possible to also reach the gRPC service via the apiserver proxy? It would seem that, since gRPC uses HTTP/2, the apiserver proxy won't support it out of the box; e.g., doing something like this on the client side...
grpc.Dial("myapiserver/api/v1/namespaces/mynamespace/services/myservice:grpc/proxy")
... won't work.
Is there a way to connect to a gRPC service via the apiserver proxy?
If not, is there a different way to connect to the gRPC service from external, without using a LoadBalancer service?
You can use a NodePort service. Each of your k8s workers will start listening on a high port, and you can connect to any of the workers; your traffic will be routed to the target service.
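For illustration, here is the question's Service switched to type NodePort (a sketch; the nodePort value is hypothetical and must fall in the cluster's NodePort range, 30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080   # hypothetical; the gRPC API becomes reachable at <any-node-ip>:30080
    protocol: TCP
    name: grpc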
The apiserver-proxy solution looks like a workaround to me and is far from a production-grade solution. You shouldn't route traffic to your services through the k8s API servers (even though it's technically possible). The control plane should be doing only control-plane things, not data-plane things (traffic routing, running workloads, ...).
A LoadBalancer service can typically be configured to create an internal LB (with an internal IP from your VPC) instead of an external LB. This is frankly the only 'correct' solution.
...not to require an additional public IP
A NodePort is not bound to a public IP. That is, your worker node can sit in a private network and be reachable at the node's private IP:nodePort#. In the meantime, you can use kubectl port-forward --namespace mynamespace service/myservice 8080:8080 and connect through localhost.

Minikube service expose to public IP

I am learning Kubernetes and trying to deploy an app using Minikube.
I have managed to expose the service mapped to the nginx pod on the Minikube IP. I can access the nginx service at the URL $(minikube ip):$(serviceport), which is fine; however, I am looking to expose this to the public network. Currently this service is only accessible via my local machine; any other machine on my wifi network is not able to access it, as it is exposed only on the Minikube IP. I don't want to forward the port in my local Linux via iptables, and I am looking for a built-in solution to expose the port to the world (and not just on the Minikube IP). I know it can be achieved, as the Minikube dashboard by default exposes its service on localhost; this implies that Minikube can talk to other network adapters and can register the port, though I am not sure how.
Here is my service yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: nginxservice
  labels:
    app: nginxservice
spec:
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: 80
    nodePort: 32756
  selector:
    app: nginxcontainer
@subudear is right - you need Ingress.
An API object that manages external access to the services in a
cluster, typically HTTP. Ingress may provide load balancing, SSL
termination and name-based virtual hosting.
Ingress exposes HTTP and
HTTPS routes from outside the cluster to services within the cluster.
Traffic routing is controlled by rules defined on the Ingress
resource.
To be able to use Ingress regularly (I'm not talking about minikube right now), it is not enough to simply create an Ingress object. You should first install the related ingress controller.
There are a lot of them; the most popular are:
NGINX Ingress Controller
Kubernetes Nginx Ingress Controller
Traefik
Istio Ingress Controller
The first two are very similar but use completely different annotations; it often happens that people confuse them.
Talking about minikube:
As per the guidelines, in order to install ingress the only thing you have to do is
minikube addons enable ingress
Please note that by default, minikube installs exactly the NGINX Ingress controller:
nginx-ingress-controller-5984b97644-rnkrg 1/1 Running 0 1m
Then you have to create an Ingress.
Follow the steps in this doc - https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
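For reference, a minimal Ingress routing to the nginxservice above could look like this (a sketch using the current networking.k8s.io/v1 API; the hostname is hypothetical and would need to resolve to the Minikube IP, e.g. via /etc/hosts):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: nginx.example.com   # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginxservice
            port:
              number: 80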

How to intercept requests to a service in Kubernetes?

Let's say I define a Service named my-backend in Kubernetes. I would like to intercept every request sent to this service; what is the proper way to do it? For example, another container in the same namespace sends a request through http://my-backend.
I tried to use Admission Controller with a validation Webhook. However, it can intercept the CRUD operations on service resources, but it fails to intercept any connection to a specific service.
There is no direct way to intercept the requests to a service in Kubernetes.
As a workaround, this is what you can do:
Create a sidecar container just to log each incoming request. logging
Run tcpdump -i eth0 -n in your containers and filter out requests
Use Zipkin
Services created on cloud providers will have their own logging mechanisms; for example, a load balancer service on AWS will have its logs generated on S3. aws elb logs
You can use a service mesh such as Istio. An Istio service mesh deploys an Envoy proxy sidecar along with every pod. Envoy intercepts all the incoming requests to the pod and can provide you metrics such as the number of requests. A service mesh also brings in features such as distributed tracing and rate limiting.
The Kubernetes NetworkPolicy object will help with this. A network policy controls how groups of pods can communicate with each other and with other network endpoints. You can allow ingress traffic to the my-backend service only from pods matching a pod selector. Below is an example that allows ingress traffic only from specific pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-only-from-frontend-to-my-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      <my-backend pod label>
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          <Frontend web pod label>
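Filled in with hypothetical labels (app: my-backend and app: frontend are placeholders, not values from the question), the two selectors would read:
spec:
  podSelector:
    matchLabels:
      app: my-backend      # hypothetical label on the backend pods
  ...
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend    # hypothetical label on the permitted client pods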

Kubernetes pods cannot make HTTPS requests after deploying Istio service mesh

I am exploring the Istio service mesh on my k8s cluster hosted on EKS (Amazon).
I tried deploying istio-1.2.2 on a new k8s cluster with the demo.yml file used for the Bookinfo demonstration, and I understood most of the use cases properly.
Then I deployed Istio using the helm default profile (recommended for production) on my existing dev cluster with hundreds of microservices running, and what I noticed is that my services can call http endpoints but are not able to call external secure endpoints (https://www.google.com, etc.).
I am getting:
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
Though I am able to call external https endpoints from my testing cluster.
To verify, I checked the egress policy, and it is mode: ALLOW_ANY in both clusters.
Now I have removed Istio completely from my dev cluster and installed the demo.yml to test, but this is also not working.
I tried to relate my issue to this, but without success:
https://discuss.istio.io/t/serviceentry-for-https-on-httpbin-org-resulting-in-connect-cr-srvr-hello-using-curl/2044
I don't understand what I am missing or what I am doing wrong.
Note: I am referring to this setup: https://istio.io/docs/setup/kubernetes/install/helm/
This is most likely a bug in Istio (see for example istio/istio#14520): if you have any Kubernetes Service object, anywhere in your cluster, that listens on port 443 but whose name starts with http (not https), it will break all outbound HTTPS connections.
The instance of this I've hit involves configuring an AWS load balancer to do TLS termination. The Kubernetes Service needs to expose port 443 to configure the load balancer, but it receives plain unencrypted HTTP.
apiVersion: v1
kind: Service
metadata:
  name: breaks-istio
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:...
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  selector: ...
  ports:
  - name: http-ssl   # <<<< THIS NAME MATTERS
    port: 443
    targetPort: http
When I've experimented with this, changing that name: to either https or tcp-https seems to work. Those name prefixes are significant to Istio, but I haven't immediately found any functional difference between telling Istio the port is HTTPS (even though it doesn't actually serve TLS) vs. plain uninterpreted TCP.
You do need to search your cluster and find every Service that listens on port 443, and make sure the port name doesn't start with http-....
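For example, a fixed ports stanza for the Service above; either rename keeps Istio from parsing port 443 as plain HTTP:
  ports:
  - name: tcp-https   # or "https"
    port: 443
    targetPort: http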

Kubernetes ingress: How to enable HTTPS to backend service

A typical ingress with TLS configuration is like below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: testsecret
  backend:
    serviceName: s1
    servicePort: 80
By default, the load balancer will talk to backend service in HTTP. Can I configure Ingress so that the communication between load balancer and the backend service is also HTTPS?
Update:
Found GLBC talking about enabling HTTPS backend for GCE Ingress. Excerpt from the document:
"Backend HTTPS
For encrypted communication between the load balancer and your Kubernetes service, you need to decorate the service's port as expecting HTTPS. There's an alpha Service annotation for specifying the expected protocol per service port. Upon seeing the protocol as HTTPS, the ingress controller will assemble a GCP L7 load balancer with an HTTPS backend-service with a HTTPS health check."
It is not clear whether the load balancer accepts third-party-signed server certificates, self-signed ones, or both, nor how a CA cert should be configured on the load balancer to do backend server authentication, or whether it bypasses the authentication check entirely.
On ingress-nginx you can use the nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" annotation and point it to the HTTPS port on the service.
This is ingress controller specific and other ingress controllers might not provide the option (or provide an option that works differently)
With this setup, the ingress controller decrypts the traffic. (This allows the ingress controller to control things like ciphers and the certificate presented to the user and do path-based routing, which SSL passthrough does not allow)
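For example, a sketch of an Ingress using that annotation against the Service from the question (it assumes s1 also exposes an HTTPS port 443, and it uses the current networking.k8s.io/v1 API):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: https-backend
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - secretName: testsecret
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: s1
            port:
              number: 443   # the service's HTTPS port, assumed here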
It is possible to configure certificate validation with several other (ingress-nginx-specific) annotations (see the sketch after the list):
Docs
nginx.ingress.kubernetes.io/proxy-ssl-verify (it defaults to "off")
nginx.ingress.kubernetes.io/proxy-ssl-verify-depth
nginx.ingress.kubernetes.io/proxy-ssl-ciphers (ciphers, not validation related)
nginx.ingress.kubernetes.io/proxy-ssl-name (Override the name that the cert is checked against)
nginx.ingress.kubernetes.io/proxy-ssl-protocols (SSL / TLS versions)
nginx.ingress.kubernetes.io/proxy-ssl-server-name (SNI passthrough)
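Putting a few of these together, enabling verification of the backend certificate might look like the following sketch (nginx.ingress.kubernetes.io/proxy-ssl-secret, which names a namespace/name Secret holding the trusted CA certificate, belongs to the same annotation family; the secret name is hypothetical):
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "default/backend-ca"   # hypothetical Secret containing ca.crt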
If you follow the instructions in the "Backend HTTPS" section of GLBC, the GCP HTTP(S) load balancer will establish an HTTPS connection with the backend and the traffic will be encrypted. There is no need to configure a CA certificate on the LB side (actually, you can't), which implies the load balancer skips server certificate authentication.
You should enable the SSL-passthrough config for the ingress or load balancer.
I suggest using nginx ingress and kube-lego for SSL; with this combination, you can use the ssl-passthrough config.
Nginx ingress for k8s
kube-lego for generating SSL certificates on the fly
Guidance for enabling the ssl-passthrough config
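With ingress-nginx specifically, passthrough is requested per Ingress via an annotation, and the controller itself must be started with the --enable-ssl-passthrough flag; a sketch:
  metadata:
    annotations:
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"   # TLS is terminated by the backend pod, not the controller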