Unable to log egress traffic HTTP requests with the istio-proxy - kubernetes

I am following this guide.
Ingress requests are getting logged. Egress traffic control is working as expected, except I am unable to log egress HTTP requests. What is missing?
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: myapp
spec:
  workloadSelector:
    labels:
      app: myapp
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY
  egress:
    - hosts:
        - default/*.example.com
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: example
spec:
  location: MESH_EXTERNAL
  resolution: NONE
  hosts:
    - '*.example.com'
  ports:
    - name: https
      protocol: TLS
      number: 443
---
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
    - providers:
        - name: envoy
Kubernetes 1.22.2, Istio 1.11.4

For ingress traffic logging I am using an EnvoyFilter to set the log format, and it works without any additional configuration. In the egress case, I had to set accessLogFile: /dev/stdout.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: config
  namespace: istio-system
spec:
  profile: default
  meshConfig:
    accessLogFile: /dev/stdout
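As a side note, the log encoding and format can also be set mesh-wide next to accessLogFile, instead of (or in addition to) the EnvoyFilter used on the ingress side. A sketch only; the format string is just a placeholder:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: config
  namespace: istio-system
spec:
  profile: default
  meshConfig:
    accessLogFile: /dev/stdout
    # Optional: structured logs instead of the default TEXT encoding.
    accessLogEncoding: JSON
    # Optional: a custom Envoy format string could go here, e.g.
    # accessLogFormat: "[%START_TIME%] %REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %RESPONSE_CODE%\n"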

AFAIK Istio collects only ingress HTTP logs by default.
In the Istio documentation there is an old article (from 2018) describing how to enable egress traffic HTTP logging.
Please keep in mind that some of the information may be outdated; however, I believe this is the part that you are missing.

Related

How do I point Kubernetes Ingress to the Istio ingress gateway?

I have a currently functioning Istio application. I would now like to add HTTPS using the Google Cloud managed certs. I set up the ingress there like this...
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
  namespace: istio-system
spec:
  domains:
    - mydomain.co
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: managed-cert-ingress
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.global-static-ip-name: managed-cert
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: istio-ingressgateway
      port:
        number: 443
---
But when I try going to the site (https://mydomain.co) I get...
Secure Connection Failed
An error occurred during a connection to earth-615.mydomain.co. Cannot communicate securely with peer: no common encryption algorithm(s).
Error code: SSL_ERROR_NO_CYPHER_OVERLAP
The functioning virtual service/gateway looks like this...
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: ingress-gateway
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.global-static-ip-name: earth-616
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http2
        protocol: HTTP2
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: test-app
  namespace: foo
spec:
  hosts:
    - "*"
  gateways:
    - "istio-system/ingress-gateway"
  http:
    - match:
        - uri:
            exact: /
      route:
        - destination:
            host: test-app
            port:
              number: 8000
Pointing a Kubernetes Ingress at the Istio ingress gateway would add latency and would additionally require the Istio gateway to use SNI passthrough to accept the HTTPS (already TLS-terminated) traffic.
Instead, the best practice here is to use the certificate directly with an Istio Secure Gateway.
You can use the certificate and key issued by a Google CA, e.g. from Certificate Authority Service, and create a Kubernetes secret to hold the certificate and key. Then configure the Istio Secure Gateway to terminate the TLS traffic as documented here.
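A minimal sketch of such a Secure Gateway, assuming the CA-issued cert and key have been stored as a kubernetes.io/tls secret named mydomain-cert in the istio-system namespace (the secret name is illustrative):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: secure-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        # kubernetes.io/tls secret holding the CA-issued cert and key (hypothetical name)
        credentialName: mydomain-cert
      hosts:
        - mydomain.co

The existing VirtualService can then be bound to this gateway as well, so the same routes serve HTTPS.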

Istio rate limit with redisquota not taken into account

I'm trying to use rate limiting with Istio (I've already done it with Envoy, but the project manager wants me to try it that way). I based my config on the Istio tutorial. I tried a few different things but can't make it work, and I don't even know how to debug this. Kiali doesn't give any useful information about quotas, rules, etc. My goal is to limit traffic to a service to a maximum of 2 requests per XX seconds. You can find my code here if you want to give it a try: https://github.com/hagakure/istio_rating.
The first step I did was:
istioctl install --set meshConfig.disablePolicyChecks=false --set values.pilot.policy.enabled=true
as said on the Istio website. Then I added some YAML config:
My service:
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
  namespace: rate-limit
spec:
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Exposed by Istio:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-world-gateway
  namespace: rate-limit
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http-web
        protocol: HTTP
      hosts:
        - '*'
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-world-vs
  namespace: rate-limit
spec:
  hosts:
    - "*"
  gateways:
    - hello-world-gateway
  http:
    - route:
        - destination:
            port:
              number: 80
            host: hello-world-svc.rate-limit.svc.cluster.local
My rate-limiting configuration for istio:
apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
name: requestcount
namespace: rate-limit
spec:
compiledTemplate: quota
params:
dimensions:
destination: destination.labels["app"] | destination.service.host | "unknown"
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
name: quota
namespace: rate-limit
spec:
rules:
- quotas:
- quota: requestcount.instance.rate-limit
charge: 1
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
name: quota-binding
namespace: rate-limit
spec:
quotaSpecs:
- name: quota
namespace: rate-limit
services:
- service: '*'
---
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
name: quotahandler
namespace: rate-limit
spec:
compiledAdapter: redisquota
params:
redisServerUrl: localhost:6379
connectionPoolSize: 10
quotas:
- name: requestcount.instance.rate-limit
maxAmount: 2
validDuration: 30s
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
name: quota-rule
namespace: rate-limit
spec:
actions:
- handler: quotahandler.handler.rate-limit
instances:
- requestcount.instance.rate-limit
But nothing happens; I can curl the service as much as I want, no problem :'(
Istio 1.6.2. I know it's deprecated, but it is still usable, no?
As mentioned in the documentation:
The mixer policy is deprecated in Istio 1.5 and not recommended for production usage.
Consider using Envoy native rate limiting instead of mixer rate limiting. Istio will add support for native rate limiting API through the Istio extensions API.
As far as I know, Mixer no longer exists when you install Istio; the documentation says that
If you depend on specific Mixer features like out of process adapters, you may re-enable Mixer. Mixer will continue receiving bug fixes and security fixes until Istio 1.7.
But I couldn't find proper documentation on how to do that.
There is an older GitHub issue about rate limiting now that Mixer is deprecated.
i've already done it with envoy but the project manager wants me to try it that way
There is a GitHub issue with an Envoy filter rate limiting example, which, as mentioned in the above issue and documentation, should be used now instead of the deprecated rate limiting from the Istio documentation. So I would recommend talking with your project manager about that; this is actually the right way to go.
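For reference, on more recent Istio releases an Envoy-native local rate limit is typically applied with an EnvoyFilter along these lines. This is a sketch only, with values adapted to the 2-requests-per-30-seconds goal; it assumes the hello-world pods have an injected sidecar and that the local_ratelimit filter is available in the Envoy version your Istio ships:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: hello-world-local-ratelimit
  namespace: rate-limit
spec:
  workloadSelector:
    labels:
      app: hello-world   # assumes the hello-world pods have an injected sidecar
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: envoy.filters.network.http_connection_manager
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.local_ratelimit
          typed_config:
            "@type": type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
            value:
              stat_prefix: http_local_rate_limiter
              token_bucket:
                max_tokens: 2        # at most 2 requests...
                tokens_per_fill: 2
                fill_interval: 30s   # ...per 30 seconds
              filter_enabled:
                runtime_key: local_rate_limit_enabled
                default_value:
                  numerator: 100
                  denominator: HUNDRED
              filter_enforced:
                runtime_key: local_rate_limit_enforced
                default_value:
                  numerator: 100
                  denominator: HUNDRED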
Regarding the issue that might occur if you have used an older version of Istio with Mixer, or have somehow enabled it on newer versions, take a look at this GitHub issue.
There were some issues with the command from the documentation you mentioned:
istioctl install --set meshConfig.disablePolicyChecks=false --set values.pilot.policy.enabled=true
Instead you should use
istioctl install --set values.pilot.policy.enabled=true --set values.global.policyCheckFailOpen=true
OR
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
spec:
  values:
    pilot:
      policy:
        enabled: true
    global:
      policyCheckFailOpen: true
Hope you find this information useful.

GKE Managed Certificate not serving over HTTPS

I'm trying to spin up a Kubernetes cluster that I can access securely and can't seem to get that last part working. I am following this tutorial: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
Here are the .yaml files I'm using for my Ingress, NodePort service and ManagedCertificate:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: client-v1-cert
spec:
  domains:
    - api.mydomain.com
---
apiVersion: v1
kind: Service
metadata:
  name: client-nodeport-service
spec:
  selector:
    app: myApp
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: api-v1
    networking.gke.io/managed-certificates: client-v1-cert
spec:
  backend:
    serviceName: client-nodeport-service
    servicePort: 80
No errors that I can see in the GCP console. I can also access my API at http://api.mydomain.com/, but it doesn't work when I try HTTPS. I've been banging my head on this for a few days and am just wondering if there's some little thing I'm missing.
--- UPDATE ---
Output of kubectl describe managedcertificate
Name:         client-v1-cert
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2019-07-01T17:42:43Z
  Generation:          3
  Resource Version:    1136504
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/client-v1-cert
  UID:                 b9b7bec1-9c27-33c9-a309-42284a800179
Spec:
  Domains:
    api.mydomain.com
Status:
  Certificate Name:    mcrt-286cdab3-b995-40cc-9b3a-28439285e694
  Certificate Status:  Active
  Domain Status:
    Domain:  api.mydomain.com
    Status:  Active
  Expire Time:  2019-09-29T09:55:12.000-07:00
Events:  <none>
I figured out a solution to this problem. I went into my GCP console, located the load balancer associated with the Ingress, and noticed that there was only one frontend protocol: HTTP, serving over port 80. So I manually added another frontend protocol for HTTPS, selected the managed certificate from the list, waited about 5 minutes, and everything worked.
I have no idea why my ingress.yaml didn't do that automatically, though. So although the problem is fixed, if anyone out there knows why, I would love to know.
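For anyone reading this on a newer cluster, here is a sketch of the same Ingress on the networking.k8s.io/v1 API (extensions/v1beta1 has since been removed); the GKE annotations are unchanged, and the ManagedCertificate must live in the same namespace as the Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: api-v1
    networking.gke.io/managed-certificates: client-v1-cert
spec:
  defaultBackend:
    service:
      name: client-nodeport-service
      port:
        number: 80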

Istio: Ingress for ACME-challenge not working (503)

We are running Istio 1.1.3 on 1.12.5-gke.10 cluster-nodes.
We use certmanager for managing our let's encrypt certificates.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: certs.ourdomain.nl
  namespace: istio-system
spec:
  secretName: certs.ourdomain.nl
  renewBefore: 360h # 15d
  commonName: operations.ourdomain.nl
  dnsNames:
    - operations.ourdomain.nl
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
  acme:
    config:
      - http01:
          ingressClass: istio
        domains:
          - operations.ourdomain.nl
Next, we see the ACME backend, service (NodePort) and ingress deployed. The auto-generated ingress looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  generateName: cm-acme-http-solver-
  generation: 1
  labels:
    certmanager.k8s.io/acme-http-domain: "1734084804"
    certmanager.k8s.io/acme-http-token: "1476005735"
  name: cm-acme-http-solver-69vzw
  namespace: istio-system
  ownerReferences:
    - apiVersion: certmanager.k8s.io/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: Certificate
      name: certs.ourdomain.nl
      uid: 751011d2-4fc8-11e9-b20e-42010aa40101
spec:
  rules:
    - host: operations.ourdomain.nl
      http:
        paths:
          - backend:
              serviceName: cm-acme-http-solver-fzk8q
              servicePort: 8089
            path: /.well-known/acme-challenge/dnrcr-LRRMdXhBaUefjqpHQx8ytYuk-feEfXu9gW-Ck
status:
  loadBalancer: {}
However, when we try to access the URL operations.ourdomain.nl/.well-known/acme-challenge/dnrcr-LRRMdXhBaUefjqpHQx8ytYuk-feEfXu9gW-Ck, we get a 404.
We do have a loadbalancer for istio:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  labels:
    app: istio-ingress
    chart: gateways-1.1.0
    heritage: Tiller
    istio: ingress
    release: istio
  name: istio-ingress
  namespace: istio-system
spec:
  selector:
    app: istio-ingress
  servers:
    - hosts:
        - operations.ourdomain.nl
      #port:
      #  name: http
      #  number: 80
      #  protocol: HTTP
      #tls:
      #  httpsRedirect: true
    - hosts:
        - operations.ourdomain.nl
      port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        credentialName: certs.ourdomain.nl
        mode: SIMPLE
        privateKey: sds
        serverCertificate: sds
This interesting article gives good insight into how the acme-challenge is supposed to work. For testing purposes we have removed the port 80 server and the HTTPS redirect from our custom gateway, and added the auto-generated k8s gateway, which listens only on port 80.
Istio is supposed to create a virtual service for the acme-challenge. This seems to be happening, because now, when we request the acme-challenge URL, we get a 503: upstream connect error or disconnect/reset before headers. I believe this means the request gets to the gateway and is matched by a virtual service, but there is no service / healthy pod to route the traffic to.
We do see some possibly interesting logging in the istio-pilot:
"ProxyStatus": {"endpoint_no_pod":
  {"cm-acme-http-solver-l5j2g.istio-system.svc.cluster.local":
    {"message": "10.16.57.248"}
I have double checked and the service mentioned above does have a pod it is exposing. So I am not sure whether this line is relevant to this issue.
The acme-challenge pods do not have an istio-proxy sidecar. Could this be the issue? If so, why does it apparently work for others?
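One thing that is sometimes tried in this situation (a sketch only, not a confirmed fix) is to stop the mesh from attempting mTLS towards the sidecar-less solver pod with a DestinationRule; the solver service name changes per challenge, so it would have to match the currently running one:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: acme-solver-no-mtls
  namespace: istio-system
spec:
  # Hypothetical: substitute the name of the currently running cm-acme-http-solver service.
  host: cm-acme-http-solver-fzk8q.istio-system.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE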

How to get Kubernetes Ingress Port 80 working on baremetal single node cluster

I have a bare-metal Kubernetes (v1.11.0) cluster created with kubeadm and working fine without any issues. Networking is with Calico, and I made it a single-node cluster using the kubectl taint nodes command (single node is a requirement).
I need to run the mydockerhub/sampleweb static website image on host port 80. Assume the IP address of the Ubuntu server running this Kubernetes is 192.168.8.10.
How to make my static website available on 192.168.8.10:80 or a hostname mapped to it on local DNS server? (Example: frontend.sampleweb.local:80). Later I need to run other services on different port mapped to another subdomain. (Example: backend.sampleweb.local:80 which routes to a service run on port 8080).
I need to know:
Can I achieve this without a load balancer?
What resources needed to create? (ingress, deployment, etc)
What additional configurations needed on the cluster? (network policy, etc)
Much appreciated if sample yaml files are provided.
I'm new to the Kubernetes world. I got sample Kubernetes deployments (like sock-shop) working end-to-end without any issues. I tried NodePort to access the service, but instead of running it on a different port I need to run it on exactly port 80 on the host. I tried many ingress solutions, but they didn't work.
Screenshot of my setup:
I recently used traefik.io to configure a project with similar requirements to yours.
So I'll show a basic solution with traefik and ingresses.
I dedicated a whole namespace (you could also use kube-system), called traefik, and created a Kubernetes ServiceAccount:
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: traefik
  name: traefik-ingress-controller
The Traefik controller, which is invoked by ingress rules, requires a ClusterRole and its binding:
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    namespace: traefik
    name: traefik-ingress-controller
The Traefik controller will be deployed as a DaemonSet (i.e. by definition one per node in your cluster), and a Kubernetes Service is dedicated to the controller:
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: traefik
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
        - name: traefik-ingress-lb
          image: traefik
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
            - name: admin
              containerPort: 8080
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          args:
            - --api
            - --kubernetes
            - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  namespace: traefik
  name: traefik-ingress-service
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
The final part requires you to create a Service for each microservice in your project; here is an example:
apiVersion: v1
kind: Service
metadata:
  namespace: traefik
  name: my-svc-1
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - port: 80
      targetPort: 8080
and also the ingress (set of rules) that will forward the request to the proper service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: traefik
  name: ingress-ms-1
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: my-address-url
      http:
        paths:
          - backend:
              serviceName: my-svc-1
              servicePort: 80
In this ingress I wrote a host URL; this will be the entry point into your cluster, so you need to resolve the name to your master K8s node. If you have more nodes that could be master, then a load balancer is suggested (in that case the host URL will be the LB).
Take a look at the kubernetes.io documentation to get the Kubernetes concepts clear. traefik.io is also useful.
I hope this helps you.
In addition to the answer of Nicola Ben, you have to define externalIPs in your Traefik service. Just follow the steps of Nicola Ben and add an externalIPs section to the service "my-svc-1":
apiVersion: v1
kind: Service
metadata:
  namespace: traefik
  name: my-svc-1
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - port: 80
      targetPort: 8080
  externalIPs:
    - <IP_OF_A_NODE>
And you can define more than one externalIP.
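To sketch the question's second requirement (backend.sampleweb.local routed to a backend listening on port 8080), a second Service/Ingress pair in the same pattern could look like the following; the name backend-svc and the sampleweb-backend label are placeholders:

apiVersion: v1
kind: Service
metadata:
  namespace: traefik
  name: backend-svc            # placeholder name
spec:
  selector:
    app: sampleweb-backend     # placeholder label for the backend pods
  ports:
    - port: 80
      targetPort: 8080         # the backend container listens on 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: traefik
  name: ingress-backend
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: backend.sampleweb.local
      http:
        paths:
          - backend:
              serviceName: backend-svc
              servicePort: 80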