Istio authorization policy not applying on child gateway - kubernetes

What I am trying to achieve: block all traffic to a service, keeping the configuration that does this in the same namespace as the service.
Why: this is the first step in "locking down" a specific service to specific IPs/CIDRs.
I have a primary ingress gateway called istio-ingressgateway, which works for my services.
$ kubectl describe gw istio-ingressgateway -n istio-system
Name: istio-ingressgateway
Namespace: istio-system
Labels: operator.istio.io/component=IngressGateways
operator.istio.io/managed=Reconcile
operator.istio.io/version=1.5.5
release=istio
Annotations:
API Version: networking.istio.io/v1beta1
Kind: Gateway
Metadata:
Creation Timestamp: 2020-08-28T15:45:10Z
Generation: 1
Resource Version: 95438963
Self Link: /apis/networking.istio.io/v1beta1/namespaces/istio-system/gateways/istio-ingressgateway
UID: ae5dd2d0-44a3-4c2b-a7ba-4b29c26fa0b9
Spec:
Selector:
App: istio-ingressgateway
Istio: ingressgateway
Servers:
Hosts:
*
Port:
Name: http
Number: 80
Protocol: HTTP
Events: <none>
I also have another "primary" GW, the K8s ingress GW, to support TLS (thought I'd include this to be as explicit as possible):
k describe gw istio-autogenerated-k8s-ingress -n istio-system
Name: istio-autogenerated-k8s-ingress
Namespace: istio-system
Labels: app=istio-ingressgateway
istio=ingressgateway
operator.istio.io/component=IngressGateways
operator.istio.io/managed=Reconcile
operator.istio.io/version=1.5.5
release=istio
Annotations:
API Version: networking.istio.io/v1beta1
Kind: Gateway
Metadata:
Creation Timestamp: 2020-08-28T15:45:56Z
Generation: 2
Resource Version: 95439499
Self Link: /apis/networking.istio.io/v1beta1/namespaces/istio-system/gateways/istio-autogenerated-k8s-ingress
UID: edd46c17-9975-4089-95ff-a2414d40954a
Spec:
Selector:
Istio: ingressgateway
Servers:
Hosts:
*
Port:
Name: http
Number: 80
Protocol: HTTP
Hosts:
*
Port:
Name: https-default
Number: 443
Protocol: HTTPS
Tls:
Credential Name: ingress-cert
Mode: SIMPLE
Private Key: sds
Server Certificate: sds
Events: <none>
I want to be able to create another GW in the namespace x and have an authorization policy attached to that GW.
If I create the authorization policy in the istio-system namespace, it comes back with RBAC: access denied, which is great - but that applies to all services using the primary GW.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: block-all
namespace: istio-system
spec:
selector:
matchLabels:
app: istio-ingressgateway
action: DENY
rules:
- from:
- source:
ipBlocks: ["0.0.0.0/0"]
What I currently have does not work. Any pointers would be highly appreciated. The following are all created in the x namespace by applying kubectl apply -f files.yaml -n x:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
annotations:
app: x-ingress
name: x-gw
labels:
app: x-ingress
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- x.y.com
port:
name: http
number: 80
protocol: HTTP
tls:
httpsRedirect: true
- hosts:
- x.y.com
port:
name: https
number: 443
protocol: HTTPS
tls:
mode: SIMPLE
privateKey: sds
serverCertificate: sds
credentialName: ingress-cert
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: x
labels:
app: x
spec:
hosts:
- x.y.com
gateways:
- x-gw
http:
- route:
- destination:
host: x
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: x-ingress-policy
spec:
selector:
matchLabels:
app: x-ingress
action: DENY
rules:
- from:
- source:
ipBlocks: ["0.0.0.0/0"]
The above should block all traffic to the gateway, as it matches on the CIDR range 0.0.0.0/0.
Am I entirely misunderstanding the concept of gateways/AuthorizationPolicies, or have I missed something?
Edit
I ended up creating another gateway with the IP restriction attached to it, as Classic Load Balancers on AWS do not forward the original client IP.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: istiocontrolplane
spec:
profile: demo
components:
ingressGateways:
- name: istio-ingressgateway
enabled: true
- name: admin-ingressgateway
enabled: true
label:
istio: admin-ingressgateway
k8s:
serviceAnnotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: deny-all-admin
namespace: istio-system
spec:
selector:
matchLabels:
istio: admin-ingressgateway
action: ALLOW
rules:
- from:
- source:
ipBlocks: ["176.252.114.59/32"]
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
I then used that gateway in my workload that I wanted to lock down.
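For reference, attaching the workload to that dedicated gateway looks roughly like this (a sketch only; the gateway name x-admin-gw and host x.y.com are illustrative, not taken from my actual manifests):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: x-admin-gw
  namespace: x
spec:
  # select the dedicated ingress gateway created by the IstioOperator above
  selector:
    istio: admin-ingressgateway
  servers:
  - hosts:
    - x.y.com
    port:
      name: http
      number: 80
      protocol: HTTP
The workload's VirtualService then lists x-admin-gw in its gateways field, so only traffic admitted by the IP-based policy on admin-ingressgateway reaches it.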

As far as I know, you would rather use an AuthorizationPolicy in one of three ways:
on the ingress gateway
on a namespace
on a specific service
I have tried to make it work on a specific gateway with annotations, like you did, but I couldn't get it to work.
For example, the following authorization policy denies all requests to workloads in the x namespace.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: deny-all
namespace: x
spec:
{}
The following authorization policy denies all requests on the ingress gateway.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: deny-all
namespace: istio-system
spec:
selector:
matchLabels:
app: istio-ingressgateway
The following authorization policy denies all requests to httpbin in the x namespace.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: deny-service-x
namespace: x
spec:
selector:
matchLabels:
app: httpbin
Let's say you want to deny all requests in the x namespace and allow only GET requests to the httpbin service.
Then you would use this AuthorizationPolicy to deny all requests:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: deny-all
namespace: x
spec:
{}
And this AuthorizationPolicy to allow only GET requests:
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
name: "x-viewer"
namespace: x
spec:
selector:
matchLabels:
app: httpbin
rules:
- to:
- operation:
methods: ["GET"]
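A quick way to verify the combination (a sketch, assuming httpbin is exposed through the ingress gateway as in the Istio samples, and INGRESS_HOST/INGRESS_PORT are set as shown further below):
curl -s -o /dev/null -w "%{http_code}\n" "http://$INGRESS_HOST:$INGRESS_PORT/headers"
# expected: 200, GET is allowed by the x-viewer policy
curl -s -o /dev/null -w "%{http_code}\n" -X POST "http://$INGRESS_HOST:$INGRESS_PORT/post"
# expected: 403 RBAC: access denied, everything else is denied by deny-all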
And there is the main issue, which is ipBlocks. There is a related GitHub issue about that.
As mentioned there by @incfly:
I guess the reason why it stops working when in a non-ingress pod is because the sourceIP attribute will not be the real client IP then.
According to https://github.com/istio/istio/issues/22341 (not done yet), this aims at providing better support without setting the k8s externalTrafficPolicy to Local, and it supports CIDR ranges as well.
I have tried this example from the Istio documentation to make it work, but it wasn't working for me, even after I changed externalTrafficPolicy. Then a workaround with an EnvoyFilter came up in the Istio discuss thread mentioned above.
Answer provided by @hleal18 there:
Got an example working successfully using EnvoyFilters, specifically with the remote_ip condition applied on httpbin.
Sharing the manifest for reference.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: httpbin
namespace: foo
spec:
workloadSelector:
labels:
app: httpbin
configPatches:
- applyTo: HTTP_FILTER
match:
context: SIDECAR_INBOUND
listener:
filterChain:
filter:
name: "envoy.http_connection_manager"
subFilter:
name: "envoy.router"
patch:
operation: INSERT_BEFORE
value:
name: envoy.filters.http.rbac
config:
rules:
action: ALLOW
policies:
"ip-premissions":
permissions:
- any: true
principals:
- remote_ip:
address_prefix: xxx.xxx.xx.xx
prefix_len: 32
I have tried the above EnvoyFilter on my test cluster, and as far as I can see it's working.
Take a look at the steps I followed below.
1. I changed the externalTrafficPolicy with:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
2. I created namespace x with istio-injection enabled and deployed httpbin there.
kubectl create namespace x
kubectl label namespace x istio-injection=enabled
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.7/samples/httpbin/httpbin.yaml -n x
kubectl apply -f https://github.com/istio/istio/blob/master/samples/httpbin/httpbin-gateway.yaml -n x
3. I created the EnvoyFilter:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: httpbin
namespace: x
spec:
workloadSelector:
labels:
app: httpbin
configPatches:
- applyTo: HTTP_FILTER
match:
context: SIDECAR_INBOUND
listener:
filterChain:
filter:
name: "envoy.http_connection_manager"
subFilter:
name: "envoy.router"
patch:
operation: INSERT_BEFORE
value:
name: envoy.filters.http.rbac
config:
rules:
action: ALLOW
policies:
"ip-premissions":
permissions:
- any: true
principals:
- remote_ip:
address_prefix: xx.xx.xx.xx
prefix_len: 32
address_prefix is the CLIENT_IP; these are the commands I used to get it:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
curl "$INGRESS_HOST":"$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
CLIENT_IP=$(curl "$INGRESS_HOST":"$INGRESS_PORT"/ip -s | grep "origin" | cut -d'"' -f 4) && echo "$CLIENT_IP"
4. I tested it with curl and my browser:
curl "$INGRESS_HOST":"$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
200
Let me know if you have any more questions, I might be able to help.

Related

Right namespace for VirtualService and DestinationRule Istio resources

I am trying to understand the VirtualService and DestinationRule resources in relation to the namespace in which they should be defined, and whether they are really namespaced resources or can also be considered cluster-wide resources.
I have the following scenario:
The frontend service (web-frontend) access the backend service (customers).
The frontend service is deployed in the frontend namespace
The backend service (customers) is deployed in the backend namespace
There are 2 versions of the backend service customers (2 deployments), one related to the version v1 and one related to the version v2.
The default behavior of the ClusterIP service is to load-balance requests between the 2 deployments (v1 and v2), and my goal is to direct the traffic only to the v1 deployment by creating a DestinationRule and a VirtualService.
What I want to understand is which is the appropriate namespace in which to define such DestinationRule and VirtualService resources. Should I create them in the frontend namespace or in the backend namespace?
In the frontend namespace I have the web-frontend deployment and the related service, as follows:
apiVersion: v1
kind: Namespace
metadata:
name: frontend
labels:
istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-frontend
namespace: frontend
labels:
app: web-frontend
spec:
replicas: 1
selector:
matchLabels:
app: web-frontend
template:
metadata:
labels:
app: web-frontend
version: v1
spec:
containers:
- image: gcr.io/tetratelabs/web-frontend:1.0.0
imagePullPolicy: Always
name: web
ports:
- containerPort: 8080
env:
- name: CUSTOMER_SERVICE_URL
value: 'http://customers.backend.svc.cluster.local'
---
kind: Service
apiVersion: v1
metadata:
name: web-frontend
namespace: frontend
labels:
app: web-frontend
spec:
selector:
app: web-frontend
type: NodePort
ports:
- port: 80
name: http
targetPort: 8080
I have exposed the web-frontend service by defining the following Gateway and VirtualService resources:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway-all-hosts
# namespace: default # Also working
namespace: frontend
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: web-frontend
# namespace: default # Also working
namespace: frontend
spec:
hosts:
- "*"
gateways:
- gateway-all-hosts
http:
- route:
- destination:
host: web-frontend.frontend.svc.cluster.local
port:
number: 80
In the backend namespace I have the customers v1 and v2 deployments and the related service, as follows:
apiVersion: v1
kind: Namespace
metadata:
name: backend
labels:
istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: customers-v1
namespace: backend
labels:
app: customers
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: customers
version: v1
template:
metadata:
labels:
app: customers
version: v1
spec:
containers:
- image: gcr.io/tetratelabs/customers:1.0.0
imagePullPolicy: Always
name: svc
ports:
- containerPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: customers-v2
namespace: backend
labels:
app: customers
version: v2
spec:
replicas: 1
selector:
matchLabels:
app: customers
version: v2
template:
metadata:
labels:
app: customers
version: v2
spec:
containers:
- image: gcr.io/tetratelabs/customers:2.0.0
imagePullPolicy: Always
name: svc
ports:
- containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
name: customers
namespace: backend
labels:
app: customers
spec:
selector:
app: customers
type: NodePort
ports:
- port: 80
name: http
targetPort: 3000
I have created the following DestinationRule and VirtualService resources to send the traffic only to the v1 deployment.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: customers
#namespace: default # Not working
#namespace: frontend # working
namespace: backend # working
spec:
host: customers.backend.svc.cluster.local
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: customers
#namespace: default # Not working
#namespace: frontend # working
namespace: backend # working
spec:
hosts:
- "customers.backend.svc.cluster.local"
http:
## route - subset: v1
- route:
- destination:
host: customers.backend.svc.cluster.local
port:
number: 80
subset: v1
The question is: which is the appropriate namespace in which to define the VirtualService and DestinationRule resources for the customers service?
From my tests I see that I can use either the frontend namespace or the backend namespace. Why can the VirtualService and DestinationRule be created in either the frontend or the backend namespace and work in both cases? Which is the correct one?
Are the DestinationRule and VirtualService resources really namespaced resources, or can they be considered cluster-wide resources?
Are the low-level routing rules of the proxies propagated to all Envoy proxies regardless of the namespace?
For a DestinationRule to actually be applied during a request, it needs to be on the destination rule lookup path:
-> client namespace
-> service namespace
-> the configured meshconfig.rootNamespace namespace (istio-system by default)
In your example, the "web-frontend" client is in the frontend Namespace (web-frontend.frontend.svc.cluster.local), the "customers" service is in the backend Namespace (customers.backend.svc.cluster.local), so the customers DestinationRule should be created in one of the following Namespaces: frontend, backend or istio-system. Additionally, please note that the istio-system Namespace isn't recommended unless the destination rule is really a global configuration that is applicable in all Namespaces.
To make sure that the destination rule will be applied we can use the istioctl proxy-config cluster command for the web-frontend Pod:
$ istioctl proxy-config cluster web-frontend-69d6c79786-vkdv8 -n frontend | grep "customers.backend.svc.cluster.local"
SERVICE FQDN PORT SUBSET DESTINATION RULE
customers.backend.svc.cluster.local 80 - customers.frontend
customers.backend.svc.cluster.local 80 v1 customers.frontend
customers.backend.svc.cluster.local 80 v2 customers.frontend
When the destination rule is created in the default Namespace, it will not be applied during the request:
$ istioctl proxy-config cluster web-frontend-69d6c79786-vkdv8 -n frontend | grep "customers.backend.svc.cluster.local"
SERVICE FQDN PORT SUBSET DESTINATION RULE
customers.backend.svc.cluster.local 80 -
For more information, see the Control configuration sharing in namespaces documentation.
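As a related note, the visibility of such resources can also be narrowed explicitly with the exportTo field. A minimal sketch (the "." value keeps the rule private to its own namespace; this is an illustration, not part of the original setup):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: customers
  namespace: backend
spec:
  host: customers.backend.svc.cluster.local
  # "." means only workloads in the backend namespace will see this rule
  exportTo:
  - "."
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2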

Add header with EnvoyFilter does not work

I am testing Istio 1.10.3 on minikube to add headers, but I am not able to do so.
Istio is installed in the istio-system namespace.
The namespace where the deployment is deployed is labeled with istio-injection=enabled.
In the config_dump I can see the Lua code only when the context is set to ANY. When I set it to SIDECAR_OUTBOUND, the code is not listed:
"name": "envoy.lua",
"typed_config": {
"#type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua",
"inline_code": "function envoy_on_request(request_handle)\n request_handle:headers():add(\"request-body-size\", request_handle:body():length())\nend\n\nfunction envoy_on_response(response_handle)\n response_handle:headers():add(\"response-body-size\", response_handle:body():length())\nend\n"
}
Can someone give me some tips?
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: headers-envoy-filter
namespace: nginx-echo-headers
spec:
configPatches:
- applyTo: HTTP_FILTER
match:
context: SIDECAR_OUTBOUND
listener:
filterChain:
filter:
name: envoy.filters.network.http_connection_manager
subFilter:
name: envoy.filters.http.router
patch:
operation: INSERT_BEFORE
value:
name: envoy.lua
typed_config:
'@type': type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
inline_code: |
function envoy_on_request(request_handle)
request_handle:headers():add("request-body-size", request_handle:body():length())
end
function envoy_on_response(response_handle)
response_handle:headers():add("response-body-size", response_handle:body():length())
end
workloadSelector:
labels:
app: nginx-echo-headers
version: v1
Below is my deployment and Istio configs:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-echo-headers-v1
namespace: nginx-echo-headers
labels:
version: v1
spec:
selector:
matchLabels:
app: nginx-echo-headers
version: v1
replicas: 2
template:
metadata:
labels:
app: nginx-echo-headers
version: v1
spec:
containers:
- name: nginx-echo-headers
image: brndnmtthws/nginx-echo-headers:latest
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: nginx-echo-headers-svc
namespace: nginx-echo-headers
labels:
version: v1
service: nginx-echo-headers-svc
spec:
type: ClusterIP
ports:
- name: http
port: 80
targetPort: 8080
selector:
app: nginx-echo-headers
version: v1
---
# ISTIO GATEWAY
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: nginx-echo-headers-gateway
namespace: istio-system
spec:
selector:
app: istio-ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "api.decchi.com.ar"
# ISTIO VIRTUAL SERVICE
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: nginx-echo-headers-virtual-service
namespace: nginx-echo-headers
spec:
hosts:
- 'api.decchi.com.ar'
gateways:
- istio-system/nginx-echo-headers-gateway
http:
- route:
- destination:
# k8s service name
host: nginx-echo-headers-svc
port:
# Services port
number: 80
# workload selector
subset: v1
## ISTIO DESTINATION RULE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: nginx-echo-headers-dest
namespace: nginx-echo-headers
spec:
host: nginx-echo-headers-svc
subsets:
- name: "v1"
labels:
app: nginx-echo-headers
version: v1
It only works when I configure the context as GATEWAY. The EnvoyFilter is running in the istio-system namespace and the workloadSelector is configured like this:
workloadSelector:
labels:
istio: ingressgateway
But my idea is to configure it in SIDECAR_OUTBOUND.
it is only working when I configure the context in GATEWAY, the envoyFilter is running in the istio-system namespace
That's correct! You should apply your EnvoyFilter in the config root namespace - istio-system in your case.
And the most important part: just omit the context field when matching your configPatches, so that the patch applies to both sidecars and gateways. You can see examples of usage in the Istio docs.
Here is an example I managed to come up with:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: add-x-cluster-client-ip-header
namespace: istio-system
spec:
configPatches:
- applyTo: ROUTE_CONFIGURATION
match:
context: SIDECAR_INBOUND
patch:
operation: MERGE
value:
request_headers_to_add:
- header:
key: 'x-cluster-client-ip'
value: '%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%'
append: false
# the following is used to debug
response_headers_to_add:
- header:
key: 'x-cluster-client-ip'
value: '%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%'
append: false
https://gist.github.com/qudongfang/75cf0230c0b2291006f72cd23d45f297
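Applied to the original Lua filter, that advice would look roughly like this (an untested sketch: the resource moves to istio-system and the context match is omitted; without a workloadSelector it affects every proxy in the mesh, so add one back if that is too broad):
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: headers-envoy-filter
  namespace: istio-system   # config root namespace
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      # context omitted on purpose, so the patch applies to sidecars and gateways alike
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          '@type': type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inline_code: |
            function envoy_on_request(request_handle)
              request_handle:headers():add("request-body-size", request_handle:body():length())
            end
            function envoy_on_response(response_handle)
              response_handle:headers():add("response-body-size", response_handle:body():length())
            end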

Exposing Redis with Ingress Nginx Controller

Hello, when I use a NodePort to expose my Redis service it works fine; I am able to access it.
But if I try to switch to the Ingress Nginx controller, it refuses to connect. Other apps work fine with the ingress.
Here is my service:
apiVersion: v1
kind: Service
metadata:
name: redis-svc
spec:
# type: NodePort
ports:
- name: http
port: 6379
targetPort: 6379
protocol: TCP
# nodePort: 30007
selector:
app: redis
And here is ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: redis-ing
annotations:
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
# nginx.ingress.kubernetes.io/enable-cors: "true"
# nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
# nginx.ingress.kubernetes.io/cors-allow-origin: "https://test.hefest.io"
# nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
tls:
- secretName: letsencrypt-prod
hosts:
- redis-dev.domain.com
rules:
- host: redis-dev.domain.com
http:
paths:
- path: /
backend:
serviceName: redis-svc
servicePort: 6379
Any idea what the issue could be?
I am using this ingress controller: https://github.com/nginxinc/kubernetes-ingress
Redis listens on 6379, which is not an HTTP port (80/443), so you need to enable TCP/UDP support in the NGINX ingress controller. The minikube docs here show how to do it for Redis.
Update the TCP and/or UDP services configmaps
Borrowing from the tutorial on configuring TCP and UDP services with the ingress nginx controller, we will need to edit the configmaps which are installed by default when enabling the minikube ingress addon.
There are 2 configmaps, 1 for TCP services and 1 for UDP services. By default they look like this:
apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
name: udp-services
namespace: ingress-nginx
Since these configmaps are centralized and may contain configurations, it is best if we only patch them rather than completely overwrite them.
Let’s use this redis deployment as an example:
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-deployment
namespace: default
labels:
app: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
imagePullPolicy: Always
name: redis
ports:
- containerPort: 6379
protocol: TCP
Create a file redis-deployment.yaml and paste the contents above. Then install the redis deployment with the following command:
kubectl apply -f redis-deployment.yaml
Next we need to create a service that can route traffic to our pods:
apiVersion: v1
kind: Service
metadata:
name: redis-service
namespace: default
spec:
selector:
app: redis
type: ClusterIP
ports:
- name: tcp-port
port: 6379
targetPort: 6379
protocol: TCP
Create a file redis-service.yaml and paste the contents above. Then install the redis service with the following command:
kubectl apply -f redis-service.yaml
To add a TCP service to the nginx ingress controller you can run the following command:
kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"6379":"default/redis-service:6379"}}'
Where:
6379 : the port your service should listen to from outside the minikube virtual machine
default : the namespace that your service is installed in
redis-service : the name of the service
We can verify that our resource was patched with the following command:
kubectl get configmap tcp-services -n kube-system -o yaml
We should see something like this:
apiVersion: v1
data:
"6379": default/redis-service:6379
kind: ConfigMap
metadata:
creationTimestamp: "2019-10-01T16:19:57Z"
labels:
addonmanager.kubernetes.io/mode: EnsureExists
name: tcp-services
namespace: kube-system
resourceVersion: "2857"
selfLink: /api/v1/namespaces/kube-system/configmaps/tcp-services
uid: 4f7fac22-e467-11e9-b543-080027057910
The only value you need to validate is that there is a value under the data property that looks like this:
"6379": default/redis-service:6379
Patch the ingress-nginx-controller
There is one final step that must be done in order to obtain connectivity from outside the cluster. We need to patch our nginx controller so that it listens on port 6379 and can route traffic to your service. To do this we need to create a patch file.
spec:
template:
spec:
containers:
- name: ingress-nginx-controller
ports:
- containerPort: 6379
hostPort: 6379
Create a file called ingress-nginx-controller-patch.yaml and paste the contents above.
Next apply the changes with the following command:
kubectl patch deployment ingress-nginx-controller --patch "$(cat ingress-nginx-controller-patch.yaml)" -n kube-system
The way I made it work was by enabling ssl-passthrough on the nginx-ingress controller.
Once my nginx-ingress controller was patched, I was able to connect.
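For the community kubernetes/ingress-nginx controller this is done with the --enable-ssl-passthrough flag on the controller container; the relevant part of the controller Deployment then looks roughly like this (a sketch; the container name and existing args may differ in your installation):
spec:
  template:
    spec:
      containers:
      - name: controller
        args:
        - /nginx-ingress-controller
        - --enable-ssl-passthrough
        # ...plus whatever other args the deployment already carries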
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: redis-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
spec:
tls:
- hosts:
- <host_address>
secretName: <k8s_secret_name>
rules:
- host: <host_address>
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: redis-service
port:
number: 6380
Python snippet to connect:
import redis

# Connect through the ingress host on port 443 with TLS;
# with ssl-passthrough the TLS stream is handed straight to Redis.
r = redis.StrictRedis(host='<host_address>',
                      port=443, db=0, ssl=True,
                      ssl_ca_certs='server.pem')
print(r.ping())
I do use Redis with TLS in my case.

Managed Certificate in Ingress, Domain Status is FailedNotVisible

I'm simply following the tutorial here: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#creating_an_ingress_with_a_managed_certificate
Everything works fine until I deploy my certificate and wait 20 minutes for it to show up as:
Status:
Certificate Name: daojnfiwlefielwrfn
Certificate Status: Provisioning
Domain Status:
Domain: moviedecisionengine.com
Status: FailedNotVisible
That domain clearly works, so what am I missing?
EDIT:
Here's the Cert:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
name: moviedecisionengine
spec:
domains:
- moviedecisionengine.com
The Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.gcp.kubernetes.io/pre-shared-cert: mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
ingress.kubernetes.io/backends: '{"k8s-be-31721--1cd1f38313af9089":"HEALTHY"}'
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-showcase-mde-ingress--1cd1f38313af9089
ingress.kubernetes.io/https-forwarding-rule: k8s-fws-default-showcase-mde-ingress--1cd1f38313af9089
ingress.kubernetes.io/https-target-proxy: k8s-tps-default-showcase-mde-ingress--1cd1f38313af9089
ingress.kubernetes.io/ssl-cert: mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
ingress.kubernetes.io/target-proxy: k8s-tp-default-showcase-mde-ingress--1cd1f38313af9089
ingress.kubernetes.io/url-map: k8s-um-default-showcase-mde-ingress--1cd1f38313af9089
kubernetes.io/ingress.global-static-ip-name: 34.107.208.110
networking.gke.io/managed-certificates: moviedecisionengine
creationTimestamp: "2020-01-16T19:44:13Z"
generation: 4
name: showcase-mde-ingress
namespace: default
resourceVersion: "1039270"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/showcase-mde-ingress
uid: 92a2f91f-3898-11ea-b820-42010a800045
spec:
backend:
serviceName: showcase-mde
servicePort: 80
rules:
- host: moviedecisionengine.com
http:
paths:
- backend:
serviceName: showcase-mde
servicePort: 80
- host: www.moviedecisionengine.com
http:
paths:
- backend:
serviceName: showcase-mde
servicePort: 80
status:
loadBalancer:
ingress:
- ip: 34.107.208.110
And lastly, the load balancer:
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2020-01-13T22:41:27Z"
labels:
app: showcase-mde
name: showcase-mde
namespace: default
resourceVersion: "2298"
selfLink: /api/v1/namespaces/default/services/showcase-mde
uid: d5a77d7b-3655-11ea-af7f-42010a800157
spec:
clusterIP: 10.31.251.46
externalTrafficPolicy: Cluster
ports:
- nodePort: 31721
port: 80
protocol: TCP
targetPort: 80
selector:
app: showcase-mde
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 35.232.156.172
For the full output of kubectl describe managedcertificate moviedecisionengine:
Name: moviedecisionengine
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"networking.gke.io/v1beta1","kind":"ManagedCertificate","metadata":{"annotations":{},"name":"moviedecisionengine","namespace...
API Version: networking.gke.io/v1beta1
Kind: ManagedCertificate
Metadata:
Creation Timestamp: 2020-01-17T16:47:19Z
Generation: 3
Resource Version: 1042869
Self Link: /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/moviedecisionengine
UID: 06c97b69-3949-11ea-b820-42010a800045
Spec:
Domains:
moviedecisionengine.com
Status:
Certificate Name: mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
Certificate Status: Provisioning
Domain Status:
Domain: moviedecisionengine.com
Status: FailedNotVisible
Events: <none>
I was successful in using a ManagedCertificate with a GKE Ingress resource.
Let me elaborate on that.
Steps to reproduce:
Create IP address with gcloud
Update the DNS entry
Create a deployment
Create a service
Create a certificate
Create an Ingress resource
Create IP address with gcloud
Invoke the command below to create a static IP address:
$ gcloud compute addresses create example-address --global
Check the newly created IP address with the command below:
$ gcloud compute addresses describe example-address --global
Update the DNS entry
Go to GCP -> Network Services -> Cloud DNS.
Edit your zone, adding an A record with the same address that was created above.
Wait for it to apply.
Check with $ nslookup DOMAIN.NAME if the entry is pointing to the appropriate address.
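The same record can also be added from the CLI (a sketch; ZONE_NAME, DOMAIN.NAME and RESERVED_IP are placeholders, RESERVED_IP being the address returned by gcloud compute addresses describe):
$ gcloud dns record-sets transaction start --zone=ZONE_NAME
$ gcloud dns record-sets transaction add RESERVED_IP --name=DOMAIN.NAME. --ttl=300 --type=A --zone=ZONE_NAME
$ gcloud dns record-sets transaction execute --zone=ZONE_NAME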
Create a deployment
Below is an example deployment which will respond to traffic:
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello
spec:
selector:
matchLabels:
app: hello
version: 1.0.0
replicas: 3
template:
metadata:
labels:
app: hello
version: 1.0.0
spec:
containers:
- name: hello
image: "gcr.io/google-samples/hello-app:1.0"
env:
- name: "PORT"
value: "50001"
Apply it with command $ kubectl apply -f FILE_NAME.yaml
You can change this deployment to suit your application but be aware of the ports that your application will respond to.
Create a service
Use a NodePort service, as in the provided link:
apiVersion: v1
kind: Service
metadata:
name: hello-service
spec:
type: NodePort
selector:
app: hello
version: 1.0.0
ports:
- name: hello-port
protocol: TCP
port: 50001
targetPort: 50001
Apply it with command $ kubectl apply -f FILE_NAME.yaml
Create a certificate
As shown in the guide, you can use the example below to create a ManagedCertificate:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
name: example-certificate
spec:
domains:
- DOMAIN.NAME
Apply it with command $ kubectl apply -f FILE_NAME.yaml
The status FAILED_NOT_VISIBLE indicates that certificate provisioning failed for a domain because of a problem with DNS or the load balancing configuration. Make sure that DNS is configured so that the certificate's domain resolves to the IP address of the load balancer.
-- Google Cloud documentation
Provisioning of this certificate depends on the DNS entry that you provided earlier.
Create an Ingress resource
Below is an example Ingress resource which will use the ManagedCertificate:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: example-address
networking.gke.io/managed-certificates: example-certificate
spec:
rules:
- host: DOMAIN.NAME
http:
paths:
- path: /
backend:
serviceName: hello-service
servicePort: hello-port
Apply it with command $ kubectl apply -f FILE_NAME.yaml
It took about 20-25 minutes for it to fully work.
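You can follow the provisioning with the same command the question used, applied to the certificate above:
$ kubectl describe managedcertificate example-certificate
and wait until the Certificate Status changes from Provisioning to Active and the Domain Status no longer reports FailedNotVisible.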

Istio: Ingress for ACME-challenge not working (503)

We are running Istio 1.1.3 on 1.12.5-gke.10 cluster-nodes.
We use cert-manager to manage our Let's Encrypt certificates.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: certs.ourdomain.nl
namespace: istio-system
spec:
secretName: certs.ourdomain.nl
newBefore: 360h # 15d
commonName: operations.ourdomain.nl
dnsNames:
- operations.ourdomain.nl
issuerRef:
name: letsencrypt
kind: ClusterIssuer
acme:
config:
- http01:
ingressClass: istio
domains:
- operations.ourdomain.nl
Next we see the ACME backend, service (NodePort) and ingress deployed. The auto-generated ingress looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: istio
generateName: cm-acme-http-solver-
generation: 1
labels:
certmanager.k8s.io/acme-http-domain: "1734084804"
certmanager.k8s.io/acme-http-token: "1476005735"
name: cm-acme-http-solver-69vzw
namespace: istio-system
ownerReferences:
- apiVersion: certmanager.k8s.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: Certificate
name: certs.ourdomain.nl
uid: 751011d2-4fc8-11e9-b20e-42010aa40101
spec:
rules:
- host: operations.ourdomain.nl
http:
paths:
- backend:
serviceName: cm-acme-http-solver-fzk8q
servicePort: 8089
path: /.well-known/acme-challenge/dnrcr-LRRMdXhBaUefjqpHQx8ytYuk-feEfXu9gW-Ck
status:
loadBalancer: {}
However, when we try to access the URL operations.ourdomain.nl/.well-known/acme-challenge/dnrcr-LRRMdXhBaUefjqpHQx8ytYuk-feEfXu9gW-Ck, we get a 404.
We do have a loadbalancer for istio:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
labels:
app: istio-ingress
chart: gateways-1.1.0
heritage: Tiller
istio: ingress
release: istio
name: istio-ingress
namespace: istio-system
spec:
selector:
app: istio-ingress
servers:
- hosts:
- operations.ourdomain.nl
#port:
# name: http
# number: 80
# protocol: HTTP
#tls:
# httpsRedirect: true
- hosts:
- operations.ourdomain.nl
port:
name: https
number: 443
protocol: HTTPS
tls:
credentialName: certs.ourdomain.nl
mode: SIMPLE
privateKey: sds
serverCertificate: sds
This interesting article gives good insight into how the acme-challenge is supposed to work. For the purpose of testing we have removed the port 80 server and the redirect to HTTPS in our custom gateway, and added the autogenerated k8s gateway, listening only on port 80.
Istio is supposed to create a VirtualService for the acme-challenge. This seems to be happening, because now, when we request the acme-challenge URL, we get a 503: upstream connect error or disconnect/reset before headers. I believe this means the request reaches the gateway and is matched by a VirtualService, but there is no service / healthy pod to route the traffic to.
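One way to confirm whether the gateway actually has a healthy endpoint for the solver (a sketch; the service name is taken from the auto-generated ingress above and may have changed since):
$ kubectl -n istio-system get endpoints cm-acme-http-solver-fzk8q
$ kubectl -n istio-system get pods -o wide | grep cm-acme-http-solver
If the endpoints list is empty, the 503 is explained by the solver pod not being ready or not being selected by the service.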
We do see some possibly interesting logging in the istio-pilot:
"ProxyStatus": {"endpoint_no_pod":
{"cm-acme-http-solver-l5j2g.istio-system.svc.cluster.local":
{"message": "10.16.57.248"}
I have double-checked, and the service mentioned above does have a pod that it is exposing, so I am not sure whether this line is relevant to the issue.
The acme-challenge pods do not have an istio-sidecar. Could this be the issue? If so, why does it apparently work for others?