I am new to Apache ZooKeeper. I have a distributed application in which each node needs to send messages (some of them heavy) to the other nodes. Would anybody suggest using ZooKeeper as a message exchange channel, or is it meant only for coordination?
I hope I'm not missing anything from the whole picture!
You should consider posting your question to the ZooKeeper user list for a more complete answer. Heavy message passing is not a recommended use of ZooKeeper. ZooKeeper is a coordination service, so it can help you manage where to send messages, or where specific destinations are located, but it probably isn't the right tool for sending heavy messages between processes.
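To make the coordination role concrete, here is a minimal sketch, assuming a ZooKeeper ensemble reachable at zk-host:2181 and an existing /workers parent znode (both placeholders): each node registers its own endpoint as an ephemeral znode, peers discover the live endpoints, and the heavy messages themselves travel over a direct connection rather than through ZooKeeper.

```java
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class NodeRegistry {
    public static void main(String[] args) throws Exception {
        // Connect to the ensemble (address and timeout are placeholders).
        ZooKeeper zk = new ZooKeeper("zk-host:2181", 15000, event -> {});

        // Advertise this node's message endpoint as an ephemeral znode;
        // it disappears automatically when the node's session dies.
        zk.create("/workers/worker-1", "10.0.0.5:9000".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        // Peers look up live endpoints here, then send heavy messages
        // directly to each other over their own connections.
        List<String> workers = zk.getChildren("/workers", false);
        System.out.println("Live workers: " + workers);
    }
}
```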
Related
I am going to implement some sort of home automation system (as my bachelor thesis). I have looked at the MQTT protocol and I have two questions about it.
I have seen this tutorial:
https://www.youtube.com/watch?v=X04yaaydjFo&list=PLeJ_Vi9u6KisKTSNeRRfqsASNZdHSbo8E&index=13
Which has materials (codes etc) here:
https://randomnerdtutorials.com/raspberry-pi-publishing-mqtt-messages-to-esp8266/
My first question is:
Should the logic for handling acquired data/messages (arriving via some topic subscription) live in the broker or in the clients? From the tutorial above, it seems that the message-handling logic sits in the clients. Should it always be there, or is it possible to have it in the broker? Sorry if this question is too "abstract"; I am at the beginning of programming, so I have no concrete example. Basically, I want the client program to be as "light" as possible (the broker will have a lot of memory and computing capacity, while the clients will be very limited in both).
My second question is:
Is it possible to put an ESP8266 (or any client) to sleep and wake it, say, every 5 minutes? Of course this should not be a problem if that client only publishes (and never subscribes) to topics. But what about a client that reads some sensor, sends the reading to the broker on that 5-minute cycle, and can also control some of its outputs? Is there a way to do that? Or if a client is unavailable (some data is published for it while it is sleeping), is the message just thrown away? My thought was: is there a way to ask the broker, after the client wakes, whether anything was published to it while it slept?
Thanks for every info! :)
1. Answer
The message processing logic is to be fully implemented by the client. With MQTT there is no "stream" processing as is possible with e.g. Apache Kafka. However, you can of course have intermediary (non-IoT) clients that subscribe to the original topic, prepare a modified message and publish it to a new topic - the one which the IoT device would then subscribe to.
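As a rough illustration, here is what such an intermediary could look like with the Eclipse Paho Java client; the broker address and topic names are made up for the example.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class Intermediary {
    public static void main(String[] args) throws Exception {
        // Broker address and client id are placeholders.
        MqttClient client = new MqttClient("tcp://broker:1883", "intermediary");
        client.connect();

        // Subscribe to the raw topic, transform, republish on a new topic
        // that the constrained IoT device subscribes to. For heavier work,
        // hand the payload off to another thread before publishing.
        client.subscribe("sensors/raw", (topic, msg) -> {
            String transformed = process(new String(msg.getPayload()));
            client.publish("sensors/processed",
                    new MqttMessage(transformed.getBytes()));
        });
    }

    // The heavy processing lives here, off the constrained device.
    static String process(String raw) {
        return raw.trim().toUpperCase();
    }
}
```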
2. Answer
You can tell the broker to retain a message; however, it will retain at most one message per topic.
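A minimal sketch of setting the retained flag with the Eclipse Paho Java client (broker address, topic, and payload are placeholders): the broker keeps this last retained message per topic and hands it to any client that subscribes later, e.g. an ESP8266 waking from sleep.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class RetainedPublish {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://broker:1883", "controller");
        client.connect();

        // retained = true: the broker stores this as the topic's last known
        // message and delivers it to future subscribers, so the latest
        // command is not lost while the device sleeps.
        MqttMessage msg = new MqttMessage("relay=ON".getBytes());
        msg.setRetained(true);
        client.publish("home/livingroom/relay", msg);
        client.disconnect();
    }
}
```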
P.S. for the future please stick to 1-question-1-post here on SO.
I haven't used Kafka before and wanted to know: if messages are published through Kafka, what are the possible ways to capture that info?
Is consuming via Kafka "Consumers" the only way to receive that info, or can REST APIs also be used here?
Haven't used Kafka before, and while reading up I found that Kafka needs ZooKeeper running too.
I don't need to publish info, just process data received from the Kafka publisher.
Any pointers will help.
Kafka is a distributed streaming platform that allows you to process streams of records in near real-time.
Producers publish records/messages to Topics in the cluster.
Consumers subscribe to Topics and process those messages as they are available.
The Kafka docs are an excellent place to get up to speed on the core concepts: https://kafka.apache.org/intro
Is consuming via Kafka "Consumers" the only way to receive that info, or can REST APIs also be used here?
Kafka uses its own TCP-based protocol; there is no native HTTP interface (assuming that's what you actually mean by REST).
Consumers are the only way to get and subsequently process data; however, plenty of external tooling exists so that you barely need to write any code, if you don't want to, in order to work on that data.
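For reference, here is a minimal consumer sketch with the official Java client (broker address, group id, and topic name are placeholders), showing the poll loop through which all data leaves Kafka:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class EventProcessor {
    public static void main(String[] args) {
        // Connection details are placeholders for the example.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-processors");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events"));
            while (true) {
                // Poll the broker: Kafka consumers pull, they are not pushed to.
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n",
                            record.offset(), record.value());
                }
            }
        }
    }
}
```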
I have a problem that I want to solve using Kafka queues.
I need to process some result, then return it to the user.
As you can see in the picture, the Rest Service requests something from the Calculator Service.
Both services have a Kafka consumer and a Kafka producer.
The Rest Service receives a request, produces a message on the toAdd queue, and then keeps consuming the fromAdd queue until it receives a value.
The Calculator Service keeps consuming the toAdd queue; when a message arrives, it sums the two values and produces a message on the fromAdd queue.
Sometimes the Rest Service receives old messages from the queue, or more than one message.
I found something about idempotent configuration, but I don't know how to implement it correctly.
Is that diagram the right way for two or more services to communicate using Kafka?
Can someone give an example?
Thanks.
Is that diagram the right way for two or more services to communicate using Kafka?
If you mean "Does it make sense to have two or more services communicate indirectly through Kafka?", then yes, it does.
Can someone give an example?
Here are some good pointers including examples:
Build Services on a Backbone of Events, Confluent blog, May 2017
Commander: Better Distributed Applications through CQRS, Event Sourcing, and Immutable Logs, by Bobby Calderwood, StrangeLoop, Sep 2016
Recorded talk
Reference implementation on GitHub
To answer your question: There is no problem with such communication.
Now referring back to other parts...
Keep in mind that this is asynchronous communication, so you should not keep the HTTP connection open and leave the user of the service waiting for the response; that is just not the way to go. You can solve this in many ways: for instance, you can use WebSockets, or you can send the user an email/SMS/Slack message with the reply, and so on.
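One common way to tame the "old or duplicate messages" problem from the question is a correlation id: the requester tags each request, the calculator copies the tag onto its reply, and the requester discards anything that doesn't match. A rough sketch with the Java client; the topic names toAdd/fromAdd come from the question, everything else is illustrative.

```java
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RestSide {
    public static void main(String[] args) {
        // Broker address is a placeholder.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Tag the request; the Calculator Service must echo this key
            // on its fromAdd reply.
            String correlationId = UUID.randomUUID().toString();
            producer.send(new ProducerRecord<>("toAdd", correlationId, "2+3"));
            // When consuming "fromAdd", act on a record only if
            // record.key().equals(correlationId); skip stale or foreign replies.
        }
    }
}
```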
Our application receives events over HTTPS through a HAProxy server, and they should be forwarded to and stored in a Kafka cluster.
What would be the best option for this?
This layer should receive events from HAProxy and produce them to the Kafka cluster in a reliable and efficient way (and it should scale horizontally).
Please suggest.
I'd suggest writing a simple application in Java that just receives events and sends them to Kafka. The Java client for Kafka is the official client and thus the most reliable. The other option is to use an arbitrary language together with the official Kafka REST Proxy.
Every instance of the app should spread messages across the partitions based on some partition key. Then you can run multiple instances of the app, and they don't even need to know about each other.
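A sketch of such a bridge app using the official Java producer; the broker address, topic, key, and payload are placeholders. The partition key keeps related events on the same partition while letting instances run side by side without coordination:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class IngestBridge {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("acks", "all");  // wait for full replication: reliable delivery
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The key (e.g. a device or session id) determines the partition,
            // so any instance can handle any event independently.
            String partitionKey = "device-42";       // hypothetical key
            String event = "{\"temp\": 21.5}";       // event body from HAProxy
            producer.send(new ProducerRecord<>("events", partitionKey, event));
        }
    }
}
```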
Just write a simple application that consumes the messages from the proxy and forwards them to Kafka with a correctly configured producer. If the configuration is done successfully, you can consume the messages coming from the proxy server and see the resulting output in the topic's on-disk log, e.g. /tmp/kafka-logs/topicname/00000000000000.log.
Good Day
Keep Coding
I am a newbie in Storm and have been exploring its features to match our CEP requirements. The different examples I have stumbled upon implement spouts as a polling service pulling from a message broker or database. How do I implement a push-based spout, i.e. a Thrift server running inside a spout? How shall I make my clients aware of where my spouts are running, so that they can push data to them?
Spouts are designed and intended to poll, so you can't push to them. However, what many people do is use things like Redis, Thrift, or Kafka as services that you can push messages to and then your spout can poll them.
The control you have over where and when a spout runs is limited, so it's a bit of a hassle to have external processes communicate directly with spouts. It is certainly possible, but it's not the simplest solution.
The standard solution is to push messages to some external message queue and let your spouts poll this message queue.
There are spout implementations that do exactly this for commonly used message queue services, such as Kafka, Kestrel, and JMS, in storm-contrib.
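To show the polling shape of a spout, here is a hedged sketch (Storm 2.x style) that pulls from a Redis list with Jedis; the Redis host and queue name are placeholders. Producers push onto the list, and nextTuple() polls it:

```java
import java.util.Map;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;
import redis.clients.jedis.Jedis;

// Producers PUSH onto a Redis list; the spout POLLS it in nextTuple().
public class RedisQueueSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private transient Jedis jedis;

    @Override
    public void open(Map<String, Object> conf, TopologyContext context,
                     SpoutOutputCollector collector) {
        this.collector = collector;
        this.jedis = new Jedis("redis-host", 6379);  // placeholder address
    }

    @Override
    public void nextTuple() {
        // Non-blocking pop: emit if something was pushed, otherwise back off.
        String msg = jedis.lpop("events");
        if (msg != null) {
            collector.emit(new Values(msg));
        } else {
            Utils.sleep(50);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("message"));
    }
}
```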
I don't have a whole lot of experience with Storm, Kafka/Kestrel, or CEP in general, but I am looking for a similar solution: pushing to a Storm spout. How about using a load balancer between the event source and the Storm cluster? For my use case of pushing syslog messages from rsyslog to Storm, a load balancer could keep track of which Storm nodes are running a listening spout and which are down, and also distribute the incoming load based on different parameters. I am less inclined to introduce another layer, like a message bus, between the source and the spout.
Edit: I read your blog, and to summarize: if the only problem with a listening spout is how a source would find it, then a message bus might be the wrong answer. There are simpler/better solutions for directing network traffic at a receiver based on simple network status or higher, app-level logic. But yes, if you want all the additional message-bus features, then Kafka/Kestrel would obviously be good options.
It's not a typical usage of Storm; obviously you can't bind multiple instances of the spout on the same machine to the same port. In a distributed setup it would be a good idea to store the API's current IP address and port in, e.g., ZooKeeper, and then use a balancer that forwards requests to your API.
Here's a project with simple REST API on Storm:
https://github.com/timjstewart/restexpress-storm