Currently I am using the Cloud SQL Auth Proxy as a sidecar to connect to a Postgres Cloud SQL database. When using Istio, however, it introduces its own sidecar, so there end up being two proxies in the pod. So I thought: can the encrypted connection not also be established using Istio?
Basically, it is possible to connect to an external IP using Istio.
It should also be possible to configure a DestinationRule which configures TLS.
And it is also possible to create client certificates for Cloud SQL.
EDIT: might be the same problem: NGINX TLS termination for PostgreSQL
So I ended up with something like
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-db
spec:
  hosts:
  - external-db
  ports:
  - number: 5432
    name: postgres
    protocol: TLS
  location: MESH_EXTERNAL
  resolution: STATIC
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: external-db
spec:
  host: external-db
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/client-cert.pem
      privateKey: /etc/certs/client-key.pem
      caCertificates: /etc/certs/server-ca.pem
---
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  clusterIP: None
  ports:
  - protocol: TCP
    port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db
subsets:
- addresses:
  - ip: 10.171.48.3
  ports:
  - port: 5432
and on the pod template the annotations
sidecar.istio.io/userVolumeMount: '[{"name":"cert", "mountPath":"/etc/certs", "readonly":true}]'
sidecar.istio.io/userVolume: '[{"name":"cert", "secret":{"secretName":"cert"}}]'
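For this to work, the cert secret mounted by the userVolume annotation has to carry the three files the DestinationRule points at. A minimal sketch of that secret (my addition; the PEM contents are placeholders, not real Cloud SQL certificates):
apiVersion: v1
kind: Secret
metadata:
  name: cert
type: Opaque
stringData:
  client-cert.pem: |
    -----BEGIN CERTIFICATE-----
    ...placeholder...
  client-key.pem: |
    -----BEGIN RSA PRIVATE KEY-----
    ...placeholder...
  server-ca.pem: |
    -----BEGIN CERTIFICATE-----
    ...placeholder...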
However, the server rejects the connection. So the question is, can this setup possibly work? And does it even make any sense?
It seems that Postgres negotiates TLS at the application level (a STARTTLS-style exchange inside its own protocol), so Istio/Envoy cannot originate TLS for it in that case:
https://github.com/envoyproxy/envoy/issues/10942
https://github.com/envoyproxy/envoy/issues/9577#issuecomment-606943362
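A possible workaround (my addition, untested here) is to bypass the Envoy sidecar for the database traffic entirely and keep using the Cloud SQL Auth Proxy (or the application's own TLS), e.g. with Istio's traffic-interception annotation on the pod template:
traffic.sidecar.istio.io/excludeOutboundIPRanges: "10.171.48.3/32" # skip Envoy for the Cloud SQL IP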
CONTEXT:
I'm in the middle of planning a migration of Kubernetes services from one cluster to another; the clusters are in separate GCP projects but need to be able to communicate across clusters until all apps are moved across. The projects have VPC peering enabled to allow internal traffic to an internal load balancer (tested and confirmed that's fine).
We run Anthos service mesh (v1.12) in GKE clusters.
PROBLEM:
I need to find a way to do the following:
PodA needs to be migrated, and references a hostname in its ENV which is simply 'serviceA'
Running in the same cluster this resolves fine as the pod resolves 'serviceA' to 'serviceA.default.svc.cluster.local' (the internal kubernetes FQDN).
However, when I run PodA on the new cluster, I need serviceA's hostname to resolve back to the internal load balancer on the other cluster, not to its local cluster (and namespace), since serviceA is still running on the old cluster.
I'm using an istio ServiceEntry resource to try and achieve this, as follows:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: serviceA
  namespace: default
spec:
  hosts:
  - serviceA.default.svc.cluster.local
  location: MESH_EXTERNAL
  ports:
  - number: 50051
    name: grpc
    protocol: GRPC
  resolution: STATIC
  endpoints:
  - address: 'XX.XX.XX.XX' # IP Redacted
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: resources
  namespace: default
spec:
  hosts:
  - 'serviceA.default.svc.cluster.local'
  gateways:
  - mesh
  http:
  - timeout: 5s
    route:
    - destination:
        host: serviceA.default.svc.cluster.local
This doesn't appear to work and I'm getting Error: 14 UNAVAILABLE: upstream request timeout errors on PodA running in the new cluster.
I can confirm that running telnet to the hostname from another pod on the mesh appears to work (i.e. I don't get a connection timeout or connection refused).
Is there a limitation on what you can use in the hosts field of a ServiceEntry? Does it have to be a .com or .org address?
The only way I've got this to work properly is to use a hostAlias in PodA to add a hosts-file entry for the hostname (roughly the snippet below), but I really want to avoid doing this as it means making the same change in lots of files; I would rather use Istio's ServiceEntry to achieve this.
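For reference, that hostAlias workaround looks roughly like this in the pod spec (sketch only, IP redacted as elsewhere in this post):
spec:
  hostAliases:
  - ip: "XX.XX.XX.XX" # internal load balancer IP (redacted)
    hostnames:
    - "serviceA"
    - "serviceA.default.svc.cluster.local"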
Any ideas/comments appreciated, thanks.
Fortunately I came across someone with a similar (but not identical) issue, and the answer in this stackoverflow post gave me the outline of what kubernetes (and istio) resources I needed to create.
I was heading in the right direction, just needed to really understand how istio uses Virtual Services and Service Entries.
The end result was this:
apiVersion: v1
kind: Service
metadata:
  name: serviceA
  namespace: default
spec:
  type: ExternalName
  externalName: serviceA.example.com
  ports:
  - name: grpc
    protocol: TCP
    port: 50051
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: serviceA
  namespace: default
spec:
  hosts:
  - serviceA.example.com
  location: MESH_EXTERNAL
  ports:
  - number: 50051
    name: grpc
    protocol: TCP
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: serviceA
  namespace: default
spec:
  hosts:
  - serviceA.default.svc.cluster.local
  http:
  - timeout: 5s
    route:
    - destination:
        host: serviceA.default.svc.cluster.local
    rewrite:
      authority: serviceA.example.com
I have a website that needs to be proxied through my web app.
Traditionally we've accomplished it via apache proxy with proxy directives.
The proxy also rewrites some of the headers and adds a couple of new ones.
Now the app has moved to OpenShift (Kubernetes) and I'm trying to avoid deploying another pod with apache.
Can I perform this header rewriting and proxying via a K8s Ingress, or the router?
I've tried this approach, but it didn't work.
I also don't know how to get the OpenShift Ingress logs; nothing seems to happen in there.
I tried using an ExternalName service, but it doesn't work:
apiVersion: v1
kind: Service
metadata:
  name: es3
spec:
  externalName: google.com
  type: ExternalName
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: es3
spec:
  host: host.my-cluster-url.net
  to:
    kind: Service
    name: es3
  port:
    targetPort: es3
I also tried using Endpoints, with the same result:
apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  ports:
  - name: app
    port: 80
    protocol: TCP
    targetPort: 80
  clusterIP: None
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mysvc
subsets:
- addresses:
  - ip: my.ip.address
  ports:
  - name: app
    port: 80
    protocol: TCP
You want to proxy a non-Kubernetes service, right? If yes, create Endpoints and a Service on top of those Endpoints. I have used this with Kubernetes; my wild guess is it will work with OpenShift too.
https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/
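Once the selector-less Service and Endpoints from your second attempt exist, a Route pointing at that Service should let the OpenShift router proxy to the external address. A rough sketch, reusing the names from the question (not a tested config):
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: mysvc
spec:
  host: host.my-cluster-url.net # hostname reused from the earlier attempt
  to:
    kind: Service
    name: mysvc
  port:
    targetPort: app # the named port on the mysvc Service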
I have a simple StatefulSet with SFTP features, and a second service tries to connect to it.
When Istio is enabled, the connection is closed by the SFTP service.
We can find this log:
Bad protocol version identification '\026\003\001'
The routing is OK.
The service:
apiVersion: v1
kind: Service
metadata:
  name: foo
  namespace: bar
spec:
  ports:
  - port: 22
    targetPort: 22
    protocol: TCP
    name: tcp-sftp
  selector:
    app.kubernetes.io/instance: foo-bar
I tried to add a VirtualService, with no luck:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-bar-destination-virtualservice
  namespace: bar
spec:
  hosts:
  - foo.bar.svc.cluster.local
  tcp:
  - match:
    - port: 22
    route:
    - destination:
        host: foo.bar.svc.cluster.local
        port:
          number: 22
The workaround for now is to disable the sidecar on the SFTP pod:
sidecar.istio.io/inject: "false"
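For clarity, the annotation goes on the pod template metadata of the StatefulSet; a minimal sketch (the image and exact labels are placeholders, not the real workload):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo-bar
  namespace: bar
spec:
  serviceName: foo
  selector:
    matchLabels:
      app.kubernetes.io/instance: foo-bar
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: foo-bar
      annotations:
        sidecar.istio.io/inject: "false" # disable Istio sidecar injection for this pod
    spec:
      containers:
      - name: sftp
        image: example/sftp:latest # placeholder image
        ports:
        - containerPort: 22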
It seems the Envoy proxy (and Istio by extension) does not support the SFTP/SSH protocol (reference).
Your workaround is currently the only way to make it work.
If you want the auto-discovered services in the mesh to route to/access your SFTP service, you can additionally create a ServiceEntry pointing to it.
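A rough sketch of such a ServiceEntry (my interpretation of the suggestion, not a tested config; since the SFTP pod no longer carries a sidecar, its port is declared as plain TCP):
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: sftp-foo
  namespace: bar
spec:
  hosts:
  - foo.bar.svc.cluster.local
  location: MESH_EXTERNAL # the pod itself is outside the mesh now
  ports:
  - number: 22
    name: tcp-sftp
    protocol: TCP
  resolution: DNS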
I am trying to create a TCP service like this in a Kubernetes cluster, following the official docs:
apiVersion: traefik.containo.us/v1alpha1
kind: TraefikService
metadata:
  name: app-mysql
spec:
  tcp:
    services:
      my-service:
        loadBalancer:
          servers:
          - address: '<private-ip-server-1>:<private-port-server-1>'
          - address: '<private-ip-server-2>:<private-port-server-2>'
and I only see the TraefikService in Lens; in the Traefik dashboard I found nothing:
What should I do to create a TCP service in Traefik 2.2.1?
Assuming you'd like to talk to TCP services running in Kubernetes: for TCP you don't really need a TraefikService; you can just use an IngressRouteTCP resource.
You can see in the docs that the IngressRouteTCP can talk directly to a K8s service.
Similar to the example there, you can have something like this:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: my-ingress-tcp-route
  namespace: default
spec:
  entryPoints:
  - myentrypoint
  routes:
  - match: HostSNI(`mysql.example.com`)
    services:
    - name: app-mysql # K8s Service
      port: 3306
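Note also that the entryPoint referenced above has to be declared in Traefik's static configuration; a minimal sketch of that part (entrypoint name and port are placeholders):
# traefik.yml (static configuration)
entryPoints:
  myentrypoint:
    address: ":3306" # TCP entrypoint the IngressRouteTCP binds to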
Notes:
TraefikService can be used in regular IngressRoute resources, and is not supported in the TCP/UDP case today.
Not sure how you plan to load balance a MySQL service though, as this typically happens at the application level, or you need a particular proxy that handles your reads/writes and data consistency.
✌️
I am trying to set up Istio and I need to whitelist a few ports to allow non-mTLS traffic from the outside world, coming in through specific ports, for a few pods running in a local k8s cluster.
I am unable to find a successful way of doing it.
I tried a ServiceEntry, a Policy, and a DestinationRule and didn't succeed.
Help is highly appreciated.
version.BuildInfo{Version:"1.1.2", GitRevision:"2b1331886076df103179e3da5dc9077fed59c989", User:"root", Host:"35adf5bb-5570-11e9-b00d-0a580a2c0205", GolangVersion:"go1.10.4", DockerHub:"docker.io/istio", BuildStatus:"Clean", GitTag:"1.1.1"}
Service Entry
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-traffic
  namespace: cloud-infra
spec:
  hosts:
  - "*.cluster.local"
  ports:
  - number: 50506
    name: grpc-xxx
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: NONE
You need to add a DestinationRule and a Policy:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: destinationrule-test
spec:
  host: service-name
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
    portLevelSettings:
    - port:
        number: 8080
      tls:
        mode: DISABLE
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: policy-test
spec:
  targets:
  - name: service-name
    ports:
    - number: 8080
  peers:
This has been tested with istio 1.0, but it will probably work for istio 1.1. It is heavily inspired by the documentation https://istio.io/help/ops/setup/app-health-check/
From your question, I understood that you want to control your ingress traffic and allow some ports of the services running in your mesh/cluster to be reached from outside, but your configuration is for egress traffic.
In order to allow outside traffic to reach specific ports of your services, you can follow these steps.
1. Make sure that the containerPort is included in your deployment/pod configuration. For more info.
2. You have to have a Service pointing to your backends/pods (a minimal sketch of steps 1 and 2 follows below). For more info about Kubernetes Services.
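A minimal sketch of steps 1 and 2 (my addition; the labels and image are placeholders chosen to match the examples below):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-service
  namespace: foo-namespace # Use same namespace with backend service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-service
  template:
    metadata:
      labels:
        app: your-service
    spec:
      containers:
      - name: app
        image: your-image:latest # placeholder image
        ports:
        - containerPort: 3000 # matches the Service targetPort below
---
apiVersion: v1
kind: Service
metadata:
  name: your-service
  namespace: foo-namespace
spec:
  selector:
    app: your-service
  ports:
  - name: http
    port: 3000
    targetPort: 3000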
3. Then, in your Istio-enabled cluster, you have to create a Gateway similar to the configuration below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: your-service-gateway
  namespace: foo-namespace # Use same namespace with backend service
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: HTTP
      protocol: HTTP
    hosts:
    - "*"
4. Then configure the route to your service for traffic entering via this gateway by creating a VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: your-service
  namespace: foo-namespace # Use same namespace with backend service
spec:
  hosts:
  - "*"
  gateways:
  - your-service-gateway # define gateway name
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        port:
          number: 3000 # Backend service port
        host: your-service # Backend service name
Hope it helps.