I have a simple stateful service with SFTP features, and a second service tries to connect to it.
When Istio is enabled, the connection is closed by the SFTP service.
We can find this log:
Bad protocol version identification '\026\003\001'
The routing is OK.
The service:
apiVersion: v1
kind: Service
metadata:
  name: foo
  namespace: bar
spec:
  ports:
    - port: 22
      targetPort: 22
      protocol: TCP
      name: tcp-sftp
  selector:
    app.kubernetes.io/instance: foo-bar
I tried to add a VirtualService, with no luck:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-bar-destination-virtualservice
  namespace: bar
spec:
  hosts:
    - foo.bar.svc.cluster.local
  tcp:
    - match:
        - port: 22
      route:
        - destination:
            host: foo.bar.svc.cluster.local
            port:
              number: 22
The workaround for now is to disable the sidecar on the SFTP pod:
sidecar.istio.io/inject: "false"
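For reference, the annotation sits in the pod template metadata of whatever workload runs the SFTP container; a fragment like the following shows the placement (the StatefulSet name foo is an assumption taken from the Service above):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo # assumed workload name
  namespace: bar
spec:
  # ...selector, serviceName and containers omitted...
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false" # disable Istio sidecar injection for this pod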
It seems the Envoy proxy (and, by extension, Istio) does not support the SFTP protocol (reference). The '\026\003\001' in your log is the start of a TLS handshake (bytes 0x16 0x03 0x01), i.e. the sidecar's TLS traffic reaching the SSH daemon, which cannot interpret it.
Your workaround is currently the only way to make it work.
If you want your auto-discovered services in the mesh to route to/access your SFTP service, you can additionally create a ServiceEntry pointing to it, for example:
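A minimal sketch of such a ServiceEntry, reusing the host from the Service above; the name, MESH_EXTERNAL and resolution: DNS are assumptions (the SFTP pod no longer carries a sidecar), so adjust them to your setup:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: sftp-service-entry # assumed name
  namespace: bar
spec:
  hosts:
    - foo.bar.svc.cluster.local # host of the SFTP Service
  location: MESH_EXTERNAL # assumption: the SFTP pod runs without a sidecar
  ports:
    - number: 22
      name: tcp-sftp
      protocol: TCP
  resolution: DNS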
I have a website that needs to be proxied through my web app.
Traditionally we've accomplished it via an Apache proxy with Proxy directives.
The proxy also rewrites some of the headers and adds a couple of new ones.
Now the app has moved to OpenShift (Kubernetes) and I'm trying to avoid deploying another pod with Apache.
Can I perform this header rewriting and proxying via a Kubernetes Ingress or Router?
I've tried this approach, but it didn't work.
I also don't know how to get OpenShift Ingress logs; nothing seems to happen in there.
I tried using an ExternalName Service, but it doesn't work:
apiVersion: v1
kind: Service
metadata:
  name: es3
spec:
  externalName: google.com
  type: ExternalName
---
kind: Route
apiVersion: route.openshift.io/v1
spec:
  host: host.my-cluster-url.net
  to:
    kind: Service
    name: es3
  port:
    targetPort: es3
I also tried using Endpoints, with the same result:
apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  ports:
    - name: app
      port: 80
      protocol: TCP
      targetPort: 80
  clusterIP: None
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mysvc
subsets:
  - addresses:
      - ip: my.ip.address
    ports:
      - name: app
        port: 80
        protocol: TCP
You want to proxy a non-Kubernetes service, right? If yes, use an Endpoints object and create a Service from it. I have used this with Kubernetes; it should work with OpenShift too, but that is my wild guess. A rough sketch is below.
https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/
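A rough sketch of that approach: a selector-less Service backed by a manual Endpoints object, with an OpenShift Route in front of it. The names and the 203.0.113.10 address are placeholders; the Route host is taken from the question:
apiVersion: v1
kind: Service
metadata:
  name: external-web # assumed name; note there is no selector
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-web # must match the Service name
subsets:
  - addresses:
      - ip: 203.0.113.10 # placeholder IP of the external server
    ports:
      - name: http
        port: 80
        protocol: TCP
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: external-web
spec:
  host: host.my-cluster-url.net
  to:
    kind: Service
    name: external-web
  port:
    targetPort: http
Header rewriting would still have to happen elsewhere (for example in the application or a dedicated proxy); this sketch only covers the proxying part.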
Currently I am using the Cloud SQL proxy as a sidecar to connect to a Postgres Cloud SQL database. When using Istio, however, it introduces its own sidecar, which leads to two proxies in the pod. So I thought: can the encrypted connection not also be established using Istio?
Basically, it is possible to connect to an external IP using Istio.
It should also be possible to configure a DestinationRule that configures TLS.
And it should also be possible to create client certificates for Cloud SQL.
EDIT: might be the same problem: NGINX TLS termination for PostgreSQL
So I ended up with something like this:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-db
spec:
  hosts:
    - external-db
  ports:
    - number: 5432
      name: postgres
      protocol: TLS
  location: MESH_EXTERNAL
  resolution: STATIC
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: external-db
spec:
  host: external-db
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/client-cert.pem
      privateKey: /etc/certs/client-key.pem
      caCertificates: /etc/certs/server-ca.pem
---
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  clusterIP: None
  ports:
    - protocol: TCP
      port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db
subsets:
  - addresses:
      - ip: 10.171.48.3
    ports:
      - port: 5432
and in the pod with these annotations:
sidecar.istio.io/userVolumeMount: '[{"name":"cert", "mountPath":"/etc/certs", "readOnly":true}]'
sidecar.istio.io/userVolume: '[{"name":"cert", "secret":{"secretName":"cert"}}]'
However, the server rejects the connection. So the question is, can this setup possibly work? And does it even make any sense?
It seems that Postgres uses application-level protocol negotiation, so Istio/Envoy cannot be used in that case:
https://github.com/envoyproxy/envoy/issues/10942
https://github.com/envoyproxy/envoy/issues/9577#issuecomment-606943362
An SFTP server is not accessible when exposed using a NodePort Service and a Kubernetes Ingress. However, if the same deployment is exposed using a Service of type LoadBalancer, it works fine.
Below is the deployment file for SFTP in GKE using the atmoz/sftp image.
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: sftp
  labels:
    environment: production
    app: sftp
spec:
  replicas: 1
  minReadySeconds: 10
  template:
    metadata:
      labels:
        environment: production
        app: sftp
      annotations:
        container.apparmor.security.beta.kubernetes.io/sftp: runtime/default
    spec:
      containers:
        - name: sftp
          image: atmoz/sftp:alpine
          imagePullPolicy: Always
          args: ["user:pass:1001:100:upload"]
          ports:
            - containerPort: 22
          securityContext:
            capabilities:
              add: ["SYS_ADMIN"]
          resources: {}
If I expose this deployment normally using a Kubernetes Service of type LoadBalancer like below:
apiVersion: v1
kind: Service
metadata:
  labels:
    environment: production
  name: sftp-service
spec:
  type: LoadBalancer
  ports:
    - name: sftp-port
      port: 22
      protocol: TCP
      targetPort: 22
  selector:
    app: sftp
The above Service gets an external IP which I can simply use in the command sftp xxx.xx.xx.xxx to get access with the pass password.
However, when I try to expose the same deployment using a GKE Ingress, it does not work. Below is the manifest for the Ingress:
# First I create a NodePort service to expose the deployment internally
---
apiVersion: v1
kind: Service
metadata:
  labels:
    environment: production
  name: sftp-service
spec:
  type: NodePort
  ports:
    - name: sftp-port
      port: 22
      protocol: TCP
      targetPort: 22
      nodePort: 30063
  selector:
    app: sftp
# The Ingress has the SFTP service as its default backend
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress-2
spec:
  backend:
    serviceName: sftp-service
    servicePort: 22
  rules:
    - http:
        paths:
          # "http-service-sample" is a service exposing a simple hello-world app deployment
          - path: /sample
            backend:
              serviceName: http-service-sample
              servicePort: 80
After an external IP is assigned to the Ingress (I know it takes a few minutes to set up completely), xxx.xx.xx.xxx/sample starts working, but sftp -P 80 xxx.xx.xx.xxx doesn't work.
Below is the error I get from the server:
ssh_exchange_identification: Connection closed by remote host
Connection closed
What am I doing wrong in the above setup? Why is the LoadBalancer Service able to allow access to the SFTP service, while the Ingress fails?
Routing any traffic other than HTTP/HTTPS through a Kubernetes Ingress is currently not supported (see docs).
You can try a workaround as described here: Kubernetes: Routing non HTTP Request via Ingress to Container
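As a sketch of one such workaround: if you run the NGINX ingress controller (not the GCLB-based GKE Ingress from the question), it can forward raw TCP through its tcp-services ConfigMap. The ingress-nginx namespace, the ConfigMap name and the external port 2222 below are assumptions:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services # name referenced by the controller's --tcp-services-configmap flag
  namespace: ingress-nginx
data:
  # "<external port>": "<namespace>/<service name>:<service port>"
  "2222": "default/sftp-service:22"
You would also need to expose the chosen port (2222 here) on the ingress controller's Service or load balancer.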
I've set up a Kubernetes cluster in GKE and installed RabbitMQ (from the marketplace) and Istio (via Helm). I can access RabbitMQ from pods until I enable the Envoy proxy to be injected into these pods; after that the traffic no longer reaches RabbitMQ, and I can't figure out how to enable traffic to the RabbitMQ service.
There is a service rabbitmq-rabbitmq-svc (in the rabbitmq namespace) that is of type LoadBalancer.
I've tried a simple busybox pod when I don't have Envoy running, and then I have no trouble telnetting to RabbitMQ (port 5672), but as soon as I try with automatic Envoy injection, Envoy blocks the traffic.
I tried unsuccessfully to add a DestinationRule (I've added a rule, but it makes no difference):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rabbitmq-rabbitmq-svc
spec:
  host: rabbitmq.rabbitmq.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
It seems like it should be a simple solution, but I can't figure it out... :/
UPDATE
Turns out it was a simple error in the hostname; I ended up using this and it works:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rabbitmq-rabbitmq-svc
spec:
  host: rabbitmq-rabbitmq-svc.rabbitmq.svc.cluster.local
Turns out it was a simple error in the hostname; the correct one was rabbitmq-rabbitmq-svc.rabbitmq.svc.cluster.local.
The only thing I needed to do to get RabbitMQ clusters to work within Istio was to annotate the RabbitMQ pods as follows:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
spec:
  override:
    statefulSet:
      spec:
        template:
          metadata:
            annotations:
              # annotate rabbitMQ pods to only redirect traffic on ports 15672 and 5672 to the Envoy proxy sidecars
              traffic.sidecar.istio.io/includeInboundPorts: "15672, 5672"
              traffic.sidecar.istio.io/includeOutboundPorts: "15672, 5672"
For some reason the exclude-port annotations weren't working, so I just flipped it by using the include-port annotations. In my case, the global Istio config is controlled by another team in the company, so perhaps there's a clash when trying to use the exclude-port annotations.
I may have encountered the same problem as you before. But my app can connect to RabbitMQ through Envoy after declaring epmd with port 4369 in the RabbitMQ Service:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  type: ClusterIP
  ports:
    - port: 5672
      targetPort: 5672
      name: message
    - port: 4369
      targetPort: 4369
      name: epmd
    - port: 15672
      targetPort: 15672
      name: management
  selector:
    app: rabbitmq
I am trying to set up Istio, and I need to whitelist a few ports to allow non-mTLS traffic from the outside world coming in through specific ports to a few pods running in a local Kubernetes cluster.
I am unable to find a successful way of doing it.
I tried a ServiceEntry, a Policy and a DestinationRule and didn't succeed.
Help is highly appreciated.
version.BuildInfo{Version:"1.1.2", GitRevision:"2b1331886076df103179e3da5dc9077fed59c989", User:"root", Host:"35adf5bb-5570-11e9-b00d-0a580a2c0205", GolangVersion:"go1.10.4", DockerHub:"docker.io/istio", BuildStatus:"Clean", GitTag:"1.1.1"}
Service Entry:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-traffic
  namespace: cloud-infra
spec:
  hosts:
    - "*.cluster.local"
  ports:
    - number: 50506
      name: grpc-xxx
      protocol: TCP
  location: MESH_EXTERNAL
  resolution: NONE
You need to add a DestinationRule and a Policy:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: destinationrule-test
spec:
  host: service-name
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
    portLevelSettings:
      - port:
          number: 8080
        tls:
          mode: DISABLE
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: policy-test
spec:
  targets:
    - name: service-name
      ports:
        - number: 8080
  peers:
This has been tested with Istio 1.0, but it will probably work for Istio 1.1. It is heavily inspired by the documentation: https://istio.io/help/ops/setup/app-health-check/
From your question, I understood that you want to control your ingress traffic and allow some ports from the outside world to reach the services running in your mesh/cluster, but your configuration is for egress traffic.
In order to control and allow access to your services' ports from outside, you can follow these steps.
1. Make sure that a containerPort is included in your deployment/pod configuration (a minimal sketch of steps 1 and 2 follows the VirtualService in step 4).
For more info
2. You have to have a Service pointing to your backends/pods. For more info, see Kubernetes Services.
3. Then, in your Istio-enabled cluster, you have to create a Gateway similar to the configuration below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: your-service-gateway
  namespace: foo-namespace # Use the same namespace as the backend service
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
    - port:
        number: 80
        name: HTTP
        protocol: HTTP
      hosts:
        - "*"
4. Then configure a route to your service for traffic entering via this gateway by creating a VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: your-service
  namespace: foo-namespace # Use the same namespace as the backend service
spec:
  hosts:
    - "*"
  gateways:
    - your-service-gateway # define gateway name
  http:
    - match:
        - uri:
            prefix: "/"
      route:
        - destination:
            port:
              number: 3000 # Backend service port
            host: your-service # Backend service name
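For completeness, a minimal sketch of steps 1 and 2; the name your-service, the namespace, port 3000 and the image are assumptions taken from or matching the VirtualService above:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-service
  namespace: foo-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-service
  template:
    metadata:
      labels:
        app: your-service
    spec:
      containers:
        - name: your-service
          image: registry.example.com/your-service:latest # assumed image
          ports:
            - containerPort: 3000 # step 1: containerPort in the pod configuration
---
apiVersion: v1
kind: Service
metadata:
  name: your-service
  namespace: foo-namespace
spec:
  selector:
    app: your-service
  ports:
    - name: http
      port: 3000
      targetPort: 3000
      protocol: TCP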
Hope it helps.