I've got sequences working in my Knative environment. We're trying to configure the DLQ/dead letter sink and confirm it works, so we can write tests against sequences. I can't for the life of me get Knative to send anything to the dead letter sink. I've approached this two ways. The first was setting up a Broker, Trigger, Services, and a Sequence. In the Broker I defined a service to use for the DLQ. I then set up a service in the sequence that intentionally returns a non-200 status. When I view the logs for the channel dispatcher in the knative-eventing namespace, what I read suggests it registered a failure.
I read some things about the default MT broker maybe not handling the DLQ correctly, so I installed Kafka. I got that all working and, essentially, it appears to do the same thing.
I started to wonder: OK, maybe within a sequence you can't do DLQ. After all, the documentation only talks about the DLQ with subscriptions and brokers, and maybe Knative believes the message was successfully delivered from the broker to the sequence, even if it dies within the sequence. So I manually set up a channel and a subscription and sent the data straight to the channel, and again what I got was essentially the same thing, which is:
The sequence stops on whatever step doesn't return a 2xx status code, but nothing gets sent to the DLQ. I even made the subscription go straight to a service (instead of a sequence); that service returned a 500, and still nothing went to the DLQ.
The log items below are from the channel dispatcher pod running in the knative-eventing namespace. They look basically the same with the in-memory channel or Kafka, i.e. expected 2xx, got 500.
{"level":"info","ts":"2021-11-30T16:01:05.313Z","logger":"kafkachannel-dispatcher","caller":"consumer/consumer_handler.go:162","msg":"Failure while handling a message","knative.dev/pod":"kafka-ch-dispatcher-5bb8f84976-rpd87","knative.dev/controller":"knative.dev.eventing-kafka.pkg.channel.consolidated.reconciler.dispatcher.Reconciler","knative.dev/kind":"messaging.knative.dev.KafkaChannel","knative.dev/traceid":"957c394a-1636-44ad-b024-fb0dde9c8440","knative.dev/key":"kafka/test-sequence-kn-sequence-0","topic":"knative-messaging-kafka.kafka.test-sequence-kn-sequence-0","partition":0,"offset":4,"error":"unable to complete request to http://cetf.kafka.svc.cluster.local: unexpected HTTP response, expected 2xx, got 500"}
{"level":"warn","ts":"2021-11-30T16:01:05.313Z","logger":"kafkachannel-dispatcher","caller":"dispatcher/dispatcher.go:314","msg":"Error in consumer group","knative.dev/pod":"kafka-ch-dispatcher-5bb8f84976-rpd87","error":"unable to complete request to http://cetf.kafka.svc.cluster.local: unexpected HTTP response, expected 2xx, got 500"}
Notes on setup: I deployed literally everything to the same namespace for testing. I essentially followed the guide here to set up my broker for the broker/trigger approach and to deploy Kafka. My broker looked like this:
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  annotations:
    # case-sensitive
    eventing.knative.dev/broker.class: Kafka
  name: default
  namespace: kafka
spec:
  # Configuration specific to this broker.
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config
    namespace: knative-eventing
  # Optional dead letter sink, you can specify either:
  #  - deadLetterSink.ref, which is a reference to a Callable
  #  - deadLetterSink.uri, which is an absolute URI to a Callable (it can potentially be out of the Kubernetes cluster)
  delivery:
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: dlq
        namespace: kafka
When I manually created the channel and subscription, my subscription looked like this:
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: test-sub # Name of the Subscription.
  namespace: kafka
spec:
  channel:
    apiVersion: messaging.knative.dev/v1beta1
    kind: KafkaChannel
    name: test-channel
  delivery:
    backoffDelay: PT1S
    backoffPolicy: linear
    retry: 1
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: dlq
        namespace: kafka
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: cetf
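For completeness, the channel the subscription points at is just a plain KafkaChannel along these lines (a sketch reconstructed from the subscription's channel ref; the spec values are the assumed defaults):

apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
metadata:
  name: test-channel
  namespace: kafka
spec:
  numPartitions: 1      # assumed default
  replicationFactor: 1  # assumed default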
No matter what I do I NEVER see the dlq pod spin up. I've adjusted the retry settings, waited and waited, used the default channel/broker, Kafka, etc. I simply cannot see the pod ever run. Is there something I'm missing? What on earth could be wrong? If I set the subscriber to a junk URI, the DLQ pod spins up, but shouldn't it also spin up if the service it sends events to returns error codes?
Can anyone provide a couple of very basic YAML files to deploy the simplest version of a working DLQ to test with?
There was an issue with dead letter sinks not being propagated in pre-GA releases. Can you make sure you are using Knative 1.0?
This is working for me as expected using the in-memory channel; a minimal sketch of the Broker and Trigger follows the list below:
https://gist.github.com/odacremolbap/f6ce029caf4fa6fbb3cc1e829f188788
- curl producing cloud events to a broker
- broker with DLS configured to an event-display
- event display service as DLS receiver
- trigger from broker to a replier service
- replier service (can ack and nack depending on the incoming event)
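If it helps as a starting point, a minimal sketch of the Broker and Trigger from that setup looks roughly like this (names follow the gist; the event-display and replier Services are assumed to already exist as Knative Services, and the Trigger name is hypothetical):

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
spec:
  delivery:
    # failed deliveries end up at the event-display service
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: event-display
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: replier-trigger   # hypothetical name
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: replier

Sending an event that the replier nacks (non-2xx) should then show up at the event-display service.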
I never found an example of this in the docs, but the API docs for SequenceStep do show a delivery property which, when set, sends failed events to the DLQ.
steps:
  - ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: service-step
    delivery:
      # DLS to an event-display service
      deadLetterSink:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: dlq-service
          namespace: ns-name
It seems odd to have to specify a delivery for EVERY step and not just the sequence as a whole.
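For context, a complete minimal Sequence using that per-step delivery might look like this (a sketch; the channel template, names, and namespace are assumptions based on the question's setup):

apiVersion: flows.knative.dev/v1
kind: Sequence
metadata:
  name: test-sequence
  namespace: kafka
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
  steps:
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: service-step
      delivery:
        # DLS for this step
        deadLetterSink:
          ref:
            apiVersion: serving.knative.dev/v1
            kind: Service
            name: dlq-service
            namespace: ns-name

With this in place, a step that keeps returning non-2xx should have its event delivered to dlq-service.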
I am trying to set up a multi-cluster architecture. I have a Spring Boot API that I want to run on a second cluster (for isolation purposes). I have set that up using the gateway.networking.k8s.io API. I am using a Gateway that has an SSL certificate and matches an IP address registered to my domain in the DNS registry. I am then setting up an HTTPRoute for each service that I am running on the second cluster. That works fine: I can communicate between the clusters and everything works as intended, but there is a problem.
There is a timeout of 30s by default and I cannot change it. I want to increase it because the application on the second cluster uses WebSockets, and I obviously would like the WebSocket connections to stay open for more than 30s at a time. I can see that the backend service created from our HTTPRoute has a timeout of 30s. I found a command to increase it: gcloud compute backend-services update gkemcg1-namespace-store-west-1-8080-o1v5o5p1285j --timeout=86400
When I run that command, the timeout increases and the WebSocket connection stays alive. But after a few minutes the change gets overridden (I suspect because the resource is managed by the YAML file). This is the YAML for the HTTPRoute that creates the backend service:
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: public-store-route
  namespace: namespace
  labels:
    gateway: external-http
spec:
  hostnames:
    - "my-website.example.org"
  parentRefs:
    - name: external-http
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /west
      backendRefs:
        - group: net.gke.io
          kind: ServiceImport
          name: store-west-1
          port: 8080
I have tried to add either a timeout, timeoutSec, or timeoutSeconds under every level with no success. I always get the following error:
error: error validating "public-store-route.yaml": error validating data: ValidationError(HTTPRoute.spec.rules[0].backendRefs[0]): unknown field "timeout" in io.k8s.networking.gateway.v1beta1.HTTPRoute.spec.rules.backendRefs; if you choose to ignore these errors, turn validation off with --validate=false
Surely there must be a way to configure this. But I wasn't able to find anything in the documentation referring to a timeout. Am I missing something here?
How do I configure the timeout?
Edit:
I have found this resource: https://cloud.google.com/kubernetes-engine/docs/how-to/configure-gateway-resources
I have been trying to set up an LBPolicy and attach it to the Gateway, HTTPRoute, Service, or ServiceImport, but nothing has made a difference. Am I doing something wrong, or is this not working how it is supposed to? This is my YAML:
kind: LBPolicy
apiVersion: networking.gke.io/v1
metadata:
  name: store-timeout-policy
  namespace: sandstone-test
spec:
  default:
    timeoutSec: 50
  targetRef:
    name: public-store-route
    group: gateway.networking.k8s.io
    kind: HTTPRoute
Would Kafka need to be installed on the consumer cluster?
Presently, the same-cluster YAML configuration is:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: sample-topic
spec:
  type: bindings.kafka
  version: v1
  metadata:
    # Kafka broker connection setting
    - name: brokers
      value: dapr-kafka.kafka:9092
    # consumer configuration: topic and consumer group
    - name: topics
      value: sample
    - name: consumerGroup
      value: group1
    # publisher configuration: topic
    - name: publishTopic
      value: sample
    - name: authRequired
      value: "false"
On different clusters, does each cluster require only "name: publishTopic" or only "name: consumerGroup", and not the other?
I'm not familiar with Dapr, but Kafka does not need to be installed in k8s, or in any specific location. Your only requirement should be client connectivity to that brokers/bootstrap-servers list.
According to the Kafka Binding spec, consumerGroup is for incoming events and publishTopic is for outgoing events, so they are two different use cases, although one app should be able to handle both event types. If an app only uses incoming or only outgoing events, then use only the appropriate settings for that case.
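Put concretely, a sketch of splitting the component per cluster could look like this (same fields as the question's component; only the relevant metadata entries are kept on each side, and the brokers value must be reachable from each cluster):

# consumer cluster: incoming events only
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: sample-topic
spec:
  type: bindings.kafka
  version: v1
  metadata:
    - name: brokers
      value: dapr-kafka.kafka:9092   # or an external bootstrap address reachable from this cluster
    - name: topics
      value: sample
    - name: consumerGroup
      value: group1
    - name: authRequired
      value: "false"
---
# producer cluster: outgoing events only
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: sample-topic
spec:
  type: bindings.kafka
  version: v1
  metadata:
    - name: brokers
      value: dapr-kafka.kafka:9092   # or an external bootstrap address reachable from this cluster
    - name: publishTopic
      value: sample
    - name: authRequired
      value: "false"

An app that both consumes and publishes would keep all four entries in its component.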
TL;DR: How can we configure Istio sidecar injection/istio-proxy/envoy-proxy/Istio egress gateway to allow long-lived (>3 hours), possibly idle, TCP connections?
Some details:
We're trying to perform a database migration to PostgreSQL, triggered by an application that has Spring Boot + Flyway configured; the migration is expected to last ~3 hours.
Our application is deployed inside our Kubernetes cluster, which has Istio sidecar injection configured. After exactly one hour of running the migration, the connection always gets closed.
We're sure it's istio-proxy closing the connection, as we attempted the migration from a pod without Istio sidecar injection and it ran for longer than one hour. However, that is not an option going forward, as it would imply downtime in production, which we can't accept.
We suspect this should be configurable in istio-proxy by setting the idle_timeout parameter, which was implemented here. However, either this isn't working or we are not configuring it properly; we're trying to configure it during Istio installation by adding --set gateways.istio-ingressgateway.env.ISTIO_META_IDLE_TIMEOUT=5s to our helm template.
If you use an Istio version higher than 1.7, you might try an EnvoyFilter to make it work. There is an answer and example on GitHub provided by @ryant1986.
We ran into the same problem on 1.7, but we noticed that the ISTIO_META_IDLE_TIMEOUT setting was only getting picked up on the OUTBOUND side of things, not the INBOUND. By adding an additional filter that applies to the INBOUND side of the request, we were able to successfully increase the timeout (we used 24 hours):
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: listener-timeout-tcp
  namespace: istio-system
spec:
  configPatches:
    - applyTo: NETWORK_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: envoy.filters.network.tcp_proxy
      patch:
        operation: MERGE
        value:
          name: envoy.filters.network.tcp_proxy
          typed_config:
            '@type': type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy
            idle_timeout: 24h
We also created a similar filter to apply to the passthrough cluster (so that timeouts still apply to external traffic that we don't have service entries for), since the config wasn't being picked up there either.
For the ingress gateway, we use env.ISTIO_META_IDLE_TIMEOUT to set the idle timeout for the TCP or HTTP protocol.
For the sidecar, you can use a similar EnvoyFilter (listener-timeout-tcp) to configure the INBOUND or OUTBOUND direction.
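For reference, a sketch of setting that env var on the ingress gateway via an IstioOperator overlay (assuming an istioctl/Operator-based install rather than the helm flag shown in the question) could look like:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          env:
            - name: ISTIO_META_IDLE_TIMEOUT
              value: "24h"   # same 24h used in the EnvoyFilter above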
A use case where one of the services must be scaled to 10 pods.
BUT, one of the pods must have different env variables (it does certain things like DB actions and trigger handling; I don't want 10 triggers handled instead of 1 for a DB change). For example, 9 pods have the env variable CHANGE=0 but one pod has CHANGE=1.
Also, I am resolving by service name, so changing the service name is not what I am looking for.
It sounds like you're trying to solve an issue with your app using Kubernetes.
The reason I say that is because the whole concept of "replicas" is to have identical instances. What you're actually saying is: "I have 10 identical pods but I want 1 of them to be different", and that's not how Kubernetes works.
So you need to re-think why you need this environment variable to be different and what you use it for. If you want to share the details, maybe I can help you find an idiomatic way of doing this with Kubernetes.
The easiest way to do what you describe is to have two separate Services. One attaches to any "web" pod:
apiVersion: v1
kind: Service
metadata:
  name: myapp-web
spec:
  selector:
    app: myapp
    tier: web
The second attaches to only the master pod(s):
apiVersion: v1
kind: Service
metadata:
  name: myapp-master
spec:
  selector:
    app: myapp
    tier: web
    role: master
Then have two separate Deployments. One runs the single master pod with one replica; the other runs the nine web pods. Your administrative requests go to myapp-master, but general requests go to myapp-web.
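A sketch of those two Deployments (the CHANGE variable comes from the question; the image name and the extra role label on the web pods are assumptions, the latter just to keep the two Deployments' selectors disjoint):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-web
spec:
  replicas: 9
  selector:
    matchLabels:
      app: myapp
      tier: web
      role: web
  template:
    metadata:
      labels:
        app: myapp
        tier: web
        role: web
    spec:
      containers:
        - name: myapp
          image: myapp:latest   # assumed image
          env:
            - name: CHANGE
              value: "0"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      tier: web
      role: master
  template:
    metadata:
      labels:
        app: myapp
        tier: web
        role: master
    spec:
      containers:
        - name: myapp
          image: myapp:latest   # assumed image
          env:
            - name: CHANGE
              value: "1"

Both sets of pods match the myapp-web Service selector (app + tier), while only the master pod matches myapp-master; the role label just keeps the two Deployments from adopting each other's pods.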
As @omricoco suggests, you can come up with a couple of ways to restructure this. A job queue like RabbitMQ has the property that each job is done once (with retries if a job fails), so one setup is to run a queue like this, allow any server to accept administrative requests, but have them just write a job into the queue. Then you can run a worker process (or several) to service these.
Our application uses RabbitMQ with only a single node. It is run in a single Kubernetes pod.
We use durable/persistent queues, but any time that our cloud instance is brought down and back up, and the RabbitMQ pod is restarted, our existing durable/persistent queues are gone.
At first, I thought it was an issue with the volume that the queues were stored on not being persistent, but that turned out not to be the case.
It appears that the queue data is stored in /var/lib/rabbitmq/mnesia/<user@hostname>. Since the pod's hostname changes each time, it creates a new set of data for the new hostname and loses access to the previously persisted queue. I have many sets of files built up in the mnesia folder, all from previous restarts.
How can I prevent this behavior?
The closest answer that I could find is in this question, but if I'm reading it correctly, this would only work if you have multiple nodes in a cluster simultaneously, sharing queue data. I'm not sure it would work with a single node. Or would it?
What helped in our case was to set hostname: <static-host-value>
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 1
  ...
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      ...
      containers:
        - name: rabbitmq
          image: rabbitmq:3-management
          ...
      hostname: rmq-host
How can I prevent this behavior?
By using a StatefulSet, which is intended for the case where Pods have persistent data associated with their "identity". The Helm chart is a fine place to start reading, even if you don't end up using it.
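A minimal sketch of that (names and storage size are assumptions, not from the question), keeping the mnesia directory on a PersistentVolumeClaim tied to the pod's stable identity:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: rabbitmq     # headless Service providing the stable network identity
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3-management
          volumeMounts:
            - name: data
              mountPath: /var/lib/rabbitmq   # keeps the mnesia directory across restarts
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi

Because the pod name (and therefore its hostname) is now stable (rabbitmq-0), the node directory under /var/lib/rabbitmq/mnesia no longer changes between restarts.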
I ran into this issue myself, and the quickest fix I found was to specify the environment variable RABBITMQ_NODENAME="yourapplicationsqueuename" and make sure I only had 1 replica for my pod.
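In Deployment terms, that is just (sketch; the value is the one from this answer):

containers:
  - name: rabbitmq
    image: rabbitmq:3-management
    env:
      - name: RABBITMQ_NODENAME
        value: "yourapplicationsqueuename"   # fixed node name so the mnesia directory stays the same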