Elixir Kafka client Elsa

I am trying to create topics dynamically in Kafka, but unfortunately an error occurs. Here is my code:
```
def hello_from_elsa do
  topic = "producer-manager-test"
  connection = :conn

  Elsa.Supervisor.start_link(endpoints: #endpoints,
                             connection: connection)

  Elsa.create_topic(#endpoints, topic)
end
```
As far as I understand, I can connect to the broker itself, but when the create-topic line is executed I get this error:
(MatchError) no match of right hand side value: false
(kafka_protocol) src/kpro_brokers.erl:240: anonymous fn/1 in :kpro_brokers.discover_controller/2
(kafka_protocol) src/kpro_lib.erl:376: :kpro_lib.do_ok_pipe/2
(kafka_protocol) src/kpro_lib.erl:281: anonymous fn/3 in :kpro_lib.with_timeout/2
I am not sure whether I am missing some additional step before creating the topic. But it should be fine, I guess, since I start the supervisor and it's running :/

Hard to say, since the error is coming from the underlying kafka_protocol library and not from Elsa directly, but it looks like no Kafka cluster controller can be found.
Topic management has to be done through a controller node, so the with_connection function that create_topic wraps explicitly passes the atom :controller to establish the connection, and for whatever reason, likely something specific to your cluster, it isn't able to find a controller.
What type of cluster are you testing against? If you use the divo and divo_kafka libraries, you can stand up a single-node Kafka cluster with Docker on your local host to test against, and it should work as expected.
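For reference, here is a minimal sketch of the same flow with the endpoint list written out explicitly instead of the #endpoints placeholder. The [localhost: 9092] keyword-list format and the single local broker are assumptions for illustration, not taken from the question:

```
defmodule ProducerManagerTest do
  # Assumed endpoint list: a single broker listening on localhost:9092.
  @endpoints [localhost: 9092]

  def hello_from_elsa do
    topic = "producer-manager-test"
    connection = :conn

    # Start Elsa's supervision tree for this named connection.
    {:ok, _pid} =
      Elsa.Supervisor.start_link(endpoints: @endpoints, connection: connection)

    # Topic creation talks to the brokers (and their controller) directly,
    # so it takes the endpoint list rather than the connection name.
    Elsa.create_topic(@endpoints, topic)
  end
end
```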

Related

Debezium io with pulsar

I want to understand how Pulsar uses the Debezium IO connector for CDC.
While creating the source using pulsar-admin source create, how can I pass the broker URL and authentication params for the client, similar to what we do when using localrun?
The command I run:
bin/pulsar-admin source localrun --sourceConfigFile debezium-mysql-source-config.yaml --client-auth-plugin --client-auth-params --broker-service-url
Now I want to replace this with a connector that runs in cluster mode.
Localrun is a special mode that simplifies debugging, and it runs outside of the normal cluster. It needs extra parameters to create the client for the local runtime.
In cluster mode, the connector will get its client from the Pulsar connectors runtime, through the function worker configuration. All you need to do is use "bin/pulsar-admin source create ...".
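For illustration, a hedged sketch of what that might look like for the same config file; the tenant, namespace, and name values are placeholders rather than values from the question, and flag spelling can vary between Pulsar versions:

```
bin/pulsar-admin source create \
  --sourceConfigFile debezium-mysql-source-config.yaml \
  --tenant public \
  --namespace default \
  --name debezium-mysql-source
```

Note that no client auth or broker URL flags appear here; in cluster mode those come from the function worker configuration, as described above.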

Directly connecting jaeger client to remote collector using kafka as intermediate buffer

I am trying to connect to the Jaeger collector, which uses Kafka as an intermediate buffer.
Here are my doubts; could anyone please point me to some docs?
QUESTION
1. How do I connect to the collector, skipping the agent, and use Kafka as an intermediate buffer? Please provide a command or configuration.
2. What is the configuration for Kafka to connect to a particular host? When I tried the command below, it still points to localhost and fails:
docker run -e SPAN_STORAGE_TYPE=kafka jaegertracing/jaeger-collector:1.17
```{"level":"fatal","ts":1585063279.3705006,"caller":"collector/main.go:70","msg":"Failed to init storage factory","error":"kafka: client has run out of available brokers to talk to (Is your cluster reachable?)","stacktrace":"main.main.func1\n\tgithub.com/jaegertraci```
Please provide a sample example that I can go through...
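For what it's worth, the broker list the collector uses is configurable; a hedged sketch, assuming the jaeger-collector 1.17 image honors the --kafka.producer.brokers flag and its KAFKA_PRODUCER_BROKERS environment-variable form when SPAN_STORAGE_TYPE=kafka (the host name is a placeholder):

```
docker run \
  -e SPAN_STORAGE_TYPE=kafka \
  -e KAFKA_PRODUCER_BROKERS=my-kafka-host:9092 \
  jaegertracing/jaeger-collector:1.17
```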

Apache Ignite Failover functionality

I have set up Apache Ignite on a cluster of nodes and sent a job to some server node to run. When the connection to that server node is lost, I need to somehow store the result of that node locally (either in a binary file or in some other way). Then, when the connection with that node is established again, push the stored results back to some database server.
I'm working on the .NET platform.
I can use
EventType.EVT_CLIENT_NODE_DISCONNECTED
EventType.EVT_CLIENT_NODE_RECONNECTED
these events, and inside their handlers implement the 'store locally' and 'push to the DB server' functionality, but I wanted to find a ready-made solution.
Is there any ready-made tool with the functionality I mentioned that I can just take and use?
You can take a look at Checkpointing. I'm not sure this is exactly the same as what you described (mainly because it will save the intermediate state on the server side), but I think it can be quite helpful.

How to configure Akka Pub/Sub to run on the same machine?

I am following the Distributed Publish Subscribe in Cluster example in Akka. However, I would like to run all the actors (publisher and subscribers) on the same node (my laptop). I am not sure I understand how to configure that; could somebody help me? Is it possible to use runOn, or should it be declared in a configuration file? Currently,
I run into this error:
Caused by: akka.ConfigurationException: ActorSystem [akka://mySystem]
needs to have a 'ClusterActorRefProvider' enabled in the
configuration, currently uses [akka.actor.LocalActorRefProvider]
Your error is telling you what the problem is. In your application.conf you should set akka.actor.provider = "akka.cluster.ClusterActorRefProvider". If you want to use a one-node cluster on your laptop, you should also set akka.cluster.min-nr-of-members = 1.
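Put together, a minimal application.conf along those lines might look like the following; the remoting hostname/port and the seed-nodes entry are illustrative assumptions for a one-node cluster, not part of the answer above (the system name mySystem is taken from the error message):

```
akka {
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }
  remote {
    netty.tcp {
      hostname = "127.0.0.1"  # assumed local address
      port = 2551             # assumed port
    }
  }
  cluster {
    min-nr-of-members = 1
    # Assumed: the node seeds to itself so a one-node cluster can form.
    seed-nodes = ["akka.tcp://mySystem@127.0.0.1:2551"]
  }
}
```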

Questions Concerning Using Celery with Multiple Load-Balanced Django Application Servers

I'm interested in using Celery for an app I'm working on. It all seems pretty straightforward, but I'm a little confused about what I need to do if I have multiple load-balanced application servers. All of the documentation assumes that the broker will be on the same server as the application. Currently, all of my application servers sit behind an Amazon ELB, and tasks need to be able to come from any one of them.
This is what I assume I need to do:
Run a broker server on a separate instance
Configure each application instance to connect to that broker server
Each application instance will also be a celery worker (running celeryd)?
My only beef with that is: what happens if my broker instance dies? Can I run 2 broker instances somehow so I'm safe if one goes under?
Any tips or information on what to do in a setup like mine would be greatly appreciated. I'm sure I'm missing something or not understanding something.
For future reference, for those who do prefer to stick with RabbitMQ...
You can create a RabbitMQ cluster from 2 or more instances. Add those instances to your ELB and point your celeryd workers at the ELB. Just make sure you connect the right ports and you should be all set. Don't forget to allow your RabbitMQ machines to talk among themselves to run the cluster. This works very well for me in production.
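As a hedged illustration of pointing the workers at the ELB: with the classic Celery settings this is just the broker URL in your Django settings (or celeryconfig); the user, password, and ELB DNS name below are placeholders:

```
# Placeholders: adjust the credentials and the ELB DNS name for your RabbitMQ cluster.
BROKER_URL = "amqp://myuser:mypassword@my-rabbitmq-elb.elb.amazonaws.com:5672//"
```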
One exception here: if you need to schedule tasks, you need a celerybeat process. For some reason, I wasn't able to connect the celerybeat to the ELB and had to connect it to one of the instances directly. I opened an issue about it and it is supposed to be resolved (didn't test it yet). Keep in mind that celerybeat by itself can only exist once, so that's already a single point of failure.
You are correct on all points.
How to make the broker reliable: make a clustered RabbitMQ installation, as described here:
http://www.rabbitmq.com/clustering.html
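Roughly, joining a second broker to the first looks like this (node and host names are placeholders; see the clustering guide above for the authoritative steps):

```
# Run on the second RabbitMQ machine to join it to the first node's cluster.
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@broker1   # "broker1" is a placeholder hostname
rabbitmqctl start_app
```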
Celery beat also doesn't have to be a single point of failure if you run it on every worker node with:
https://github.com/ybrs/single-beat
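For context, single-beat wraps the beat command with a lock (typically held in Redis) so that only one copy actually runs at a time; a hedged sketch of the invocation, with the Redis URL and Celery app name as placeholders:

```
# Placeholders: adjust the Redis URL and the Celery app/module name.
SINGLE_BEAT_REDIS_SERVER="redis://redis-host:6379/0" single-beat celery -A myapp beat
```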