I work on code that uses the reactive-kafka module in Scala. My code uses lines such as:
val kafka = new ReactiveKafka()
kafka.consume(ConsumerProperties(...).readFromEndOfStream())
kafka.publish(ProducerProperties(...))
I have two questions:
I see that my code imports the package com.softwaremill.react.kafka. Can you give me a link to detailed documentation? All the information I have found so far is too concise.
I want to check that the server is still connected to Kafka. How can I do that using the com.softwaremill.react.kafka package?
We have a requirement to send an .avro file as an input request to our APIs. I am really stuck at this point. A detailed example would be much appreciated.
Just use Java interop: https://github.com/intuit/karate#calling-java
You need to write a helper (start with a static method) to convert JSON to Avro and vice versa. I know teams using this for gRPC. Read this thread for tips: https://github.com/intuit/karate/issues/412
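For reference, here is a minimal sketch of such a helper in Scala (methods on a Scala object are exposed to Java callers like Karate as static forwarders). It assumes the Apache Avro library is on the classpath; the object and method names are just illustrative:

import java.io.ByteArrayOutputStream
import org.apache.avro.Schema
import org.apache.avro.generic.{GenericDatumReader, GenericDatumWriter, GenericRecord}
import org.apache.avro.io.{DecoderFactory, EncoderFactory}

object AvroJsonConverter {
  // Decode a JSON string against the schema, then re-encode it as Avro binary.
  def jsonToAvro(json: String, schema: Schema): Array[Byte] = {
    val reader  = new GenericDatumReader[GenericRecord](schema)
    val record  = reader.read(null, DecoderFactory.get().jsonDecoder(schema, json))
    val out     = new ByteArrayOutputStream()
    val encoder = EncoderFactory.get().binaryEncoder(out, null)
    new GenericDatumWriter[GenericRecord](schema).write(record, encoder)
    encoder.flush()
    out.toByteArray
  }

  // Decode Avro binary against the schema, then re-encode it as a JSON string.
  def avroToJson(avro: Array[Byte], schema: Schema): String = {
    val reader  = new GenericDatumReader[GenericRecord](schema)
    val record  = reader.read(null, DecoderFactory.get().binaryDecoder(avro, null))
    val out     = new ByteArrayOutputStream()
    val encoder = EncoderFactory.get().jsonEncoder(schema, out)
    new GenericDatumWriter[GenericRecord](schema).write(record, encoder)
    encoder.flush()
    out.toString("UTF-8")
  }
}

From a Karate feature you would then load the helper via Java interop (Java.type) and call these methods before and after the request.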
There is even a "karate-grpc" project: https://github.com/pecker-io/karate-grpc
Also see:
https://twitter.com/KarateDSL/status/1128170638223364097
https://twitter.com/KarateDSL/status/1417023536082812935
I am following the O'Reilly book "Advanced Analytics with Spark". It seems that the authors expect you to use the command shell (PuTTY) to follow the examples in the book. But I don't want to use that; I'd prefer to use Zeppelin. I'd like to create notebooks, put my own comments into the code, etc.
So, using an Azure subscription, I spin up a Spark cluster and go into Zeppelin. I am able to follow the guide fine for the most part. But there is one bit that trips me up, and it's probably pretty basic.
You are asked to create a Scala file called "StatsWithMissing.scala" with code in it. I do that. I upload it to blob storage at //user/Zeppelin (this is where I expect the Zeppelin user directory to be).
Then it asks you to run the following:
:load StatsWithMissing.scala
At this point it gives the error:
:1: error: illegal start of definition
My first question is: where exactly is this Scala file supposed to be on blob storage for Zeppelin to see it? How do I determine that? Is where I am putting it correct?
And second, what does this message mean? Does it not like the :load statement?
I believe the interpreter set at the top of the page is Livy, and that covers Scala.
Any help would be great.
Regards
Conor
The OffsetRequest class has been deprecated for a while, along with almost all the other classes in kafka.api, and has now been removed in 1.0. However, neither the deprecation message nor the docs explain what can be used instead.
The FAQ is not helpful for this topic.
There are CLI tools for this, but I have not found any advice on doing it programmatically.
The classes in kafka.api are for the old (Scala) clients, which are deprecated and will be removed in a future release.
The new Java clients are using the classes defined in org.apache.kafka.common.requests.
The class org.apache.kafka.common.requests.ListOffsetRequest is the replacement for the old OffsetRequest.
The following methods from KafkaConsumer can be used to retrieve offsets (they all send a ListOffsetRequest from the client); a usage sketch follows the list:
beginningOffsets() : http://kafka.apache.org/10/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#beginningOffsets-java.util.Collection-
endOffsets(): http://kafka.apache.org/10/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#endOffsets-java.util.Collection-
offsetsForTimes(): http://kafka.apache.org/10/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#offsetsForTimes-java.util.Map-
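A minimal sketch of these calls in Scala (the broker address, topic name, and partition are assumed placeholders):

import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition

object OffsetLookup extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092") // assumed broker address
  props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer")
  props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer")

  val consumer = new KafkaConsumer[Array[Byte], Array[Byte]](props)
  try {
    val tp = new TopicPartition("my-topic", 0) // assumed topic and partition

    // Earliest and latest offsets currently available for the partition
    val earliest = consumer.beginningOffsets(Collections.singletonList(tp))
    val latest   = consumer.endOffsets(Collections.singletonList(tp))
    println(s"earliest=${earliest.get(tp)} latest=${latest.get(tp)}")

    // First offset whose timestamp is >= one hour ago
    val oneHourAgo = System.currentTimeMillis() - 3600 * 1000L
    val byTime = consumer.offsetsForTimes(Collections.singletonMap(tp, java.lang.Long.valueOf(oneHourAgo)))
    // offsetsForTimes() returns a null entry when no such offset exists, hence the Option wrapper
    Option(byTime.get(tp)).foreach(ot => println(s"offset=${ot.offset} timestamp=${ot.timestamp}"))
  } finally consumer.close()
}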
While exploring how to unit test a Kafka Stream, I came across ProcessorTopologyTestDriver; unfortunately, this class seems to have broken with version 0.10.1.0 (KAFKA-4408).
Is there a work around available for the KTable issue?
I saw the "Mocked Streams" project but first it uses version 0.10.2.0, while I'm on 0.10.1.1 and second it is Scala, while my tests are Java/Groovy.
Any help on how to unit test a stream without having to bootstrap ZooKeeper/Kafka would be great.
Note: I do have integration tests that use embedded servers; this is for unit tests, i.e. fast, simple tests.
EDIT
Thank you to Ramon Garcia.
For people arriving here from Google searches, please note that the test driver class is now org.apache.kafka.streams.TopologyTestDriver.
This class is in the Maven artifact with groupId org.apache.kafka and artifactId kafka-streams-test-utils.
I found a way around this. I'm not sure it is THE answer, especially after the comment from https://stackoverflow.com/users/4953079/matthias-j-sax. In any case, sharing what I have so far...
I completely copied ProcessorTopologyTestDriver from the 0.10.1 branch (that's the version I'm using).
To address KAFKA-4408, I made the private final MockConsumer<byte[], byte[]> restoreStateConsumer field accessible and moved the task = new StreamTask(... chunk to a separate method, e.g. bootstrap.
In the setup phase of my test I do the following:

// Build the driver, then register the KTable's backing topic and its end
// offset with the mocked restore consumer before creating the StreamTask
driver = new ProcessorTopologyTestDriver(config, builder)

List<PartitionInfo> partitionInfos = new ArrayList<PartitionInfo>()
partitionInfos.add(new PartitionInfo('my_ktable', 1, (Node) null, (Node[]) null, (Node[]) null))
driver.restoreStateConsumer.updatePartitions('my_ktable', partitionInfos)
driver.restoreStateConsumer.updateEndOffsets(Collections.singletonMap(new TopicPartition('my_ktable', 1), Long.valueOf(0L)))

// bootstrap() is the extracted method that now runs task = new StreamTask(...)
driver.bootstrap()
And that's it...
Bonus
I also ran into KAFKA-4461; fortunately, since I copied the whole class, I was able to "cherry-pick" the accepted fix with minor tweaks.
As always, feedback is appreciated. Although apparently not an official test class, this driver has proven super useful!
How can we subscribe to a file present as a resource in a Scala project so that any live change to the file can be detected in the service?
For example, suppose there is Scala code that calculates the sum of the numbers in a text file. How can the code subscribe to that file so that the program can react immediately when new numbers are added to it?
In Scala you can use Java classes and APIs.
You can use the Java Watch Service API in java.nio.file.
You can read about it here.
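A minimal sketch in Scala (the directory path and file name are assumed placeholders; note that a WatchService watches directories, not individual files):

import java.nio.file.{FileSystems, Path, Paths, StandardWatchEventKinds}

object FileWatcher {
  def main(args: Array[String]): Unit = {
    // Watch the directory containing the file; the path is an assumed placeholder.
    val dir = Paths.get("src/main/resources")
    val watcher = FileSystems.getDefault.newWatchService()
    dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY)

    while (true) {
      val key = watcher.take() // blocks until an event is available
      key.pollEvents().forEach { event =>
        val changed = event.context().asInstanceOf[Path]
        if (changed.endsWith("numbers.txt")) {
          // Re-read the file and recompute the sum here.
          println(s"$changed was modified, recomputing sum")
        }
      }
      // Reset the key, or no further events will be delivered.
      if (!key.reset()) return
    }
  }
}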