Hubot change IRC channel topic

I'd like our Hubot to manage the topic of certain IRC channels. When I have Hubot send "/TOPIC #channel New Topic", that text just ends up in the channel.
I know I can add a listener for IRC topic changes (like irc-topic.coffee) with:
robot.adapter.bot.addListener 'topic', (channel, topic) ->
But is there an interface for setting the topic, or a way to coerce the hubot-irc adapter into sending a raw IRC command?

https://github.com/nandub/hubot-irc/blob/master/src/irc.coffee#L40
Looking at that line, it looks like you just set up the listener and then set topic = "thing the topic should be" inside it.

Related

Kafka Stream automatically read from new topic?

Is there any way to make my Kafka Streams application automatically read from a newly created topic?
Even if the topic is created while the stream application is already running?
Something like having a wildcard in the topic name, like this:
KStream<String, String> rawText = builder.stream("topic-input-*");
Why do I need this?
Right now, I have multiple clients sending data (all with the same schema) to their own topics, and my stream application reads from those topics. My application then does some transformation and writes the result to a single topic.
Although all of the clients could write to the same topic, a misbehaving client could also write on behalf of someone else, so I've created individual topics for each client. The problem is that whenever a new client comes along, I create the new topic and set the ACL for them with a script, but that is not enough. I also have to stop my streaming application, edit the code, add the new topic, compile it, package it, put it on the server and run it again!
Kafka Streams supports pattern subscription:
builder.stream(Pattern.compile("topic-input-.*"));
(I hope the syntax is right; I'm not sure off the top of my head... But the point is that instead of passing in a String, you can use an overload of the stream() method that takes a Pattern, and any topic matching the pattern is included.)
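For illustration, a minimal runnable sketch of this approach (the application id, broker address, and output topic here are assumptions, not from the question):
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class PatternSubscriptionApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "pattern-subscription-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // subscribe by regex; topics created later that match the pattern are
        // picked up automatically when the consumer refreshes its metadata
        KStream<String, String> rawText = builder.stream(Pattern.compile("topic-input-.*"));
        rawText.to("topic-output"); // write the merged stream to a single topic

        new KafkaStreams(builder.build(), props).start();
    }
}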

Kafka topic to multiple Kafka topics dispatcher (same cluster)

My use-case is as follows:
I have a Kafka topic A with messages "logically" belonging to different "services"; I don't control the system sending the messages to A.
I want to read such messages from A and dispatch them to a per-service set of topics on the same cluster (let's call them A_1, ..., A_n), based on one column describing the service (the format is CSV-style, but it doesn't matter).
The set of services is static, I don't have to handle addition/removal at the moment.
I was hoping to use Kafka Connect to perform this task but, surprisingly, there are no Kafka-to-Kafka sources/sinks (I cannot find the tickets, but such proposals have been rejected).
I have seen MirrorMaker2, but it looks like overkill for my (simple) use case.
I also know KafkaStreams but I'd rather not write and maintain code just for that.
My question is: is there a way to achieve this topic dispatching with Kafka-native tools, without writing a Kafka consumer/producer myself?
PS: if anybody thinks that MirrorMaker2 could be a good fit, I am interested too; I don't know the tool very well.
To my knowledge, there is no straightforward way to branch incoming topic messages out to a list of topics based on the message contents. You need to write custom code to achieve this:
Use the Processor API (refer here).
Pass the list of topics into the Processor.
Use logic to identify which topic the message needs to be branched to.
Use context.forward to publish the message to the selected topic, as in the sketch below:
context.forward(key, value, To.child("selected topic"))
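Putting those steps together, a minimal sketch of the Processor API approach (the CSV layout, the service names, and the sink/topic mapping are assumptions for illustration):
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.processor.To;

public class ServiceDispatcher {
    // forwards each record to the sink node named after its service column
    static class DispatchProcessor extends AbstractProcessor<String, String> {
        @Override
        public void process(String key, String value) {
            String service = value.split(",")[0]; // assume the first CSV column holds the service
            context().forward(key, value, To.child("sink-" + service));
        }
    }

    public static Topology build() {
        Topology t = new Topology();
        t.addSource("source", new StringDeserializer(), new StringDeserializer(), "A");
        t.addProcessor("dispatch", DispatchProcessor::new, "source");
        // one sink per service; the To.child(...) names above must match these node names
        t.addSink("sink-svc1", "A_1", new StringSerializer(), new StringSerializer(), "dispatch");
        t.addSink("sink-svc2", "A_2", new StringSerializer(), new StringSerializer(), "dispatch");
        return t;
    }
}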
Mirror Maker is for doing ... mirroring. It's useful when you want to mirror one cluster from one data center to the other with the same topics. Your use case is different.
Kafka Connect is for syncing different systems (data from databases, for example) through Kafka topics, but I don't see it fitting this use case either.
I would use a Kafka Streams application for that.
All the other answers are right; at the time of writing, I did not find any "config-only" solution in the Kafka toolset.
What finally did the trick was to use Logstash, as its "kafka output plugin" supports sprintf-style field references in the topic_id parameter.
So once you have the "target topic name" available in a field (say service_name), it's as simple as this:
output {
  kafka {
    id => "sink"
    codec => [...]
    bootstrap_servers => [...]
    topic_id => "%{[service_name]}"
    [...]
  }
}

Are Kafka message headers the right place to put the event type name?

In a scenario where multiple event types from a single domain are produced to a single topic and only a subset of the event types is consumed by the consumer, I need a good way to read the event type before taking action.
I see 2 options:
Put the event type (for example "ORDER_PUBLISHED") into the message body (payload) itself, which would be a broker-agnostic approach and has other advantages, but would involve parsing every message just to learn its event type.
Utilize Kafka message headers, which would allow consuming messages without extra payload parsing.
The context is event-sourcing. Small commands, small payloads. There are no huge bodies to parse. Golang. All messages are protobufs. gRPC.
What is the typical workflow in such a scenario?
I tried to google this topic, but didn't find much on header use cases and good practices.
It would be great to hear when and how to use Kafka message headers, and when not to use them.
Clearly the same topic should be used for different event types that apply to the same entity/aggregate (reference). Example: BookingCreated, BookingConfirmed, BookingCancelled, etc. should all go to the same topic in order to (excuse the pun) guarantee ordering of delivery (in this case the booking ID is the message key).
When the consumer gets one of these events, it needs to identify the event type, parse the payload, and route to the processing logic accordingly. The event type is the piece of message metadata that allows this identification.
Thus, I think a custom Kafka message header is the best place to indicate the type of event. I'm not alone:
Felipe Dutra: "Kafka allow you to put meta-data as header of your message. So use it to put information about the message, version, type, a correlationId. If you have chain of events, you can also add the correlationId of opentracing"
This GE ERP system has a header labeled "event-type" to show "The type of the event that is published" to a kafka topic (e.g., "ProcessOrderEvent").
This other solution mentions that "A header 'event' with the event type is included in each message" in their Kafka integration.
Headers are new in Kafka. Also, as far as I've seen, Kafka books focus on the 17 thousand Kafka configuration options and Kafka topology. Unfortunately, we don't easily find much on how an event-driven architecture can be mapped with the proper semantics onto elements of the Kafka message broker.
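For illustration, a minimal sketch using the plain Java clients (the question mentions Go, but the header APIs are analogous; the topic, key, and event-type names here are made up):
import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Header;

public class EventTypeHeaders {
    static void publish(KafkaProducer<String, byte[]> producer, String bookingId, byte[] payload) {
        ProducerRecord<String, byte[]> record =
                new ProducerRecord<>("bookings", bookingId, payload);
        // stamp the event type in a custom header instead of the payload
        record.headers().add("event-type", "BookingCreated".getBytes(StandardCharsets.UTF_8));
        producer.send(record);
    }

    static boolean isRelevant(ConsumerRecord<String, byte[]> record) {
        // inspect the header first; skip deserializing payloads we don't care about
        Header type = record.headers().lastHeader("event-type");
        return type != null
                && "BookingCreated".equals(new String(type.value(), StandardCharsets.UTF_8));
    }
}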

Storm KafkaBolt push to multiple Kafka Topics

I have a use case where a message has to be pushed to a number of Kafka topics.
Currently at a high level, that method looks like this:
pushToTopics(String msg) {
    pushToTopicA(msg);
    pushToTopicB(msg);
    pushToTopicC(msg);
    ...
    pushToTopicN(msg);
}
Every pushToTopicX(msg) has a condition which, when fulfilled, should lead to the message being published to the corresponding topic. Right now, all of this logic sits in the terminal bolt, and to push the messages we use KafkaProducer.
I was looking at ways to break this down into topic-specific bolts and, more importantly, to use KafkaBolt to push the messages.
Is this possible with Storm (v1.2.2)? I saw that a PR letting one create custom callbacks was merged very recently, but we don't have that version.
The KafkaBolt can decide which topic to send to based on the tuple. You could just use a splitter bolt to split your message into N messages, each with a different destination topic, and then send all of them to the KafkaBolt.
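For illustration, a minimal sketch of that setup with the storm-kafka-client KafkaBolt (the field names, broker address, and the "unroutable" fallback topic are assumptions):
import java.util.Properties;
import org.apache.storm.kafka.bolt.KafkaBolt;
import org.apache.storm.kafka.bolt.mapper.FieldNameBasedTupleToKafkaMapper;
import org.apache.storm.kafka.bolt.selector.FieldNameTopicSelector;

public class DispatchBoltFactory {
    public static KafkaBolt<String, String> build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // an upstream splitter bolt emits one tuple per destination, carrying
        // "key", "message", and the destination topic name in a "topic" field
        return new KafkaBolt<String, String>()
                .withProducerProperties(props)
                .withTopicSelector(new FieldNameTopicSelector("topic", "unroutable"))
                .withTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper<>("key", "message"));
    }
}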
The way I eventually solved it was to create separate streams, each one bound to a destination topic. Then, via collector.emit on specific streams, I was able to fan the messages out across various bolts, which eventually push to Kafka using KafkaBolt.

How can I send a message to my akka actor system's event stream without addressing the message to any actor in particular?

I'm interested in implementing:
1. an akka actor A that sends messages to an event stream;
2. an akka actor L that listens to messages of a certain type that have been published on the event stream.
If possible, I would like to reuse the actor system's event stream.
I know how to do 2. It is explained here: https://doc.akka.io/docs/akka/2.5/event-bus.html#event-stream
But how can I do 1?
I know how to make A send a message addressed to another actor (an ActorRef), but I do not want to address the message to any particular actor. I just want the message to appear on the event stream and be picked up by whoever is listening for messages of that type. Is this possible somehow?
A side-question: if I implement 2 as described in https://doc.akka.io/docs/akka/2.5/event-bus.html#event-stream, does the listener know who sent the message?
As per the documentation link that you posted, you can publish messages to the EventStream:
system.eventStream.publish(Jazz("Sonny Rollins"))
The message will be delivered to all actors that have subscribed to this message type:
system.eventStream.subscribe(jazzListener, classOf[Jazz])
For the subscribers to know the sender, I suggest you define an ActorRef field in your payload; the sending actor can then put its self reference there when publishing the message. NB: defining the sender's ActorRef explicitly in the message type is how the new akka-typed library handles all actor interactions, so it's a good idea to get used to this pattern.
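A minimal sketch of that pattern, here in Java since the snippets above use the Scala API (the Jazz and Trumpeter types are illustrative):
import akka.actor.AbstractActor;
import akka.actor.ActorRef;

public class Jazz {
    public final String artist;
    public final ActorRef sender; // explicit sender reference carried in the payload

    public Jazz(String artist, ActorRef sender) {
        this.artist = artist;
        this.sender = sender;
    }
}

class Trumpeter extends AbstractActor {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .matchEquals("play", msg ->
                        // publish to the system-wide event stream rather than to any
                        // particular ActorRef; every subscriber to Jazz receives it,
                        // and getSelf() lets them identify the sender
                        getContext().getSystem().eventStream()
                                .publish(new Jazz("Sonny Rollins", getSelf())))
                .build();
    }
}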