The OffsetRequest class has been deprecated for a while, along with almost all the other classes in kafka.api, and has now been removed in 1.0. However, neither the deprecation message nor the docs explain what should be used instead.
The FAQ is not helpful on this topic.
There are CLI tools for this, but I have not found any advice on doing it programmatically.
The classes in kafka.api are for the old clients (in Scala), which are deprecated and will be removed in a future release.
The new Java clients use the classes defined in org.apache.kafka.common.requests.
The class org.apache.kafka.common.requests.ListOffsetRequest is the replacement for the old OffsetRequest.
The following methods from KafkaConsumer can be used to retrieve offsets (they all send a ListOffsetRequest from the client):
beginningOffsets(): http://kafka.apache.org/10/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#beginningOffsets-java.util.Collection-
endOffsets(): http://kafka.apache.org/10/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#endOffsets-java.util.Collection-
offsetsForTimes(): http://kafka.apache.org/10/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#offsetsForTimes-java.util.Map-
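For illustration, here is a minimal sketch showing all three calls; the broker address and topic name are hypothetical:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class OffsetLookup {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            List<TopicPartition> partitions = Collections.singletonList(tp);

            // earliest and latest offsets per partition
            Map<TopicPartition, Long> earliest = consumer.beginningOffsets(partitions);
            Map<TopicPartition, Long> latest = consumer.endOffsets(partitions);

            // offset of the first message at or after a timestamp (here: one hour ago)
            Map<TopicPartition, OffsetAndTimestamp> byTime = consumer.offsetsForTimes(
                Collections.singletonMap(tp, System.currentTimeMillis() - 3600_000L));

            System.out.println(earliest + " / " + latest + " / " + byTime);
        }
    }
}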
As the title says, we are getting this error, and Google searching doesn't return anything useful. This rule no longer exists in the specification.
Has anyone seen this before?
As already mentioned, I believe this is not the sbt API. This looks like an akka-stream Sink (I stumbled upon this question while googling exactly that).
In that case, this error basically means that a Publisher created with fanout set to false can only have one subscriber; the error is raised when another subscriber tries to subscribe to the publisher.
I guess the error message is slightly outdated and it should point to rule 1.11 here.
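To illustrate, here is a minimal sketch using the akka-stream Java DSL (the stream contents are arbitrary); the AsPublisher argument to Sink.asPublisher controls whether the materialized Publisher accepts one or many subscribers:

import akka.actor.ActorSystem;
import akka.stream.ActorMaterializer;
import akka.stream.Materializer;
import akka.stream.javadsl.AsPublisher;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import org.reactivestreams.Publisher;

public class FanoutDemo {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("fanout-demo");
        Materializer mat = ActorMaterializer.create(system);

        // fanout = false: exactly one subscriber is allowed; a second
        // subscribe() fails with the "only supports one subscriber" error
        Publisher<Integer> singleUse =
            Source.range(1, 10).runWith(Sink.asPublisher(AsPublisher.WITHOUT_FANOUT), mat);

        // fanout = true: multiple subscribers may attach to the same publisher
        Publisher<Integer> multiUse =
            Source.range(1, 10).runWith(Sink.asPublisher(AsPublisher.WITH_FANOUT), mat);

        system.terminate();
    }
}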
While exploring how to unit test a Kafka Stream, I came across ProcessorTopologyTestDriver. Unfortunately, this class seems to have been broken in version 0.10.1.0 (KAFKA-4408).
Is there a workaround available for the KTable issue?
I saw the "Mocked Streams" project, but first, it uses version 0.10.2.0 while I'm on 0.10.1.1, and second, it is Scala while my tests are Java/Groovy.
Any help on how to unit test a stream without having to bootstrap ZooKeeper/Kafka would be great.
Note: I do have integration tests that use embedded servers; this is for unit tests, i.e. fast, simple tests.
EDIT
Thank you to Ramon Garcia
For people arriving here from Google searches, please note that the test driver class is now org.apache.kafka.streams.TopologyTestDriver.
This class is in the Maven package with groupId org.apache.kafka and artifactId kafka-streams-test-utils.
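As a pointer, here is a minimal sketch of the newer TopologyTestDriver as it appears in kafka-streams-test-utils from Kafka 1.1 onward; the topology and topic names are hypothetical:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.test.ConsumerRecordFactory;

public class TopologyTestDriverSketch {
    public static void main(String[] args) {
        // a trivial topology: uppercase every value
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("input-topic")
               .mapValues(v -> v.toUpperCase())
               .to("output-topic");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "unit-test");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234"); // never contacted
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props);
        ConsumerRecordFactory<String, String> factory =
            new ConsumerRecordFactory<>(new StringSerializer(), new StringSerializer());

        // pipe a record in, read the transformed record out; no broker needed
        driver.pipeInput(factory.create("input-topic", "key", "hello"));
        ProducerRecord<String, String> out =
            driver.readOutput("output-topic", new StringDeserializer(), new StringDeserializer());
        System.out.println(out.value()); // HELLO
        driver.close();
    }
}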
I found a way around this; I'm not sure it is THE answer, especially after the comment from https://stackoverflow.com/users/4953079/matthias-j-sax. In any case, sharing what I have so far...
I completely copied ProcessorTopologyTestDriver from the 0.10.1 branch (that's the version I'm using).
To address KAFKA-4408, I made the field private final MockConsumer<byte[], byte[]> restoreStateConsumer accessible and moved the chunk task = new StreamTask(... to a separate method, e.g. bootstrap.
In the setup phase of my test, I do the following:
driver = new ProcessorTopologyTestDriver(config, builder)

// make the copied driver's restore consumer aware of the KTable's
// changelog partition, so state restoration does not hang (KAFKA-4408)
List<PartitionInfo> partitionInfos = new ArrayList<>()
partitionInfos.add(new PartitionInfo('my_ktable', 1, (Node) null, (Node[]) null, (Node[]) null))
driver.restoreStateConsumer.updatePartitions('my_ktable', partitionInfos)
driver.restoreStateConsumer.updateEndOffsets(Collections.singletonMap(new TopicPartition('my_ktable', 1), Long.valueOf(0L)))

// runs the chunk moved out of the constructor: task = new StreamTask(...)
driver.bootstrap()
And that's it...
Bonus
I also ran into KAFKA-4461; fortunately, since I copied the whole class, I was able to "cherry-pick" the accepted fix with minor tweaks.
As always, feedback is appreciated. Although apparently not an official test class, this driver has proven super useful!
I work on code that uses the reactive-kafka module in Scala. My code uses lines such as:
val kafka = new ReactiveKafka()
kafka.consume(ConsumerProperties(...).readFromEndOfStream())
kafka.publish(ProducerProperties(...))
I have two questions:
I see that my code imports the package com.softwaremill.react.kafka. Can you give me a link to detailed documentation? All the information I have found so far is too concise.
I want to check that the server is still connected to Kafka. How can I do this using the com.softwaremill.react.kafka package?
Cache2k looks like a very promising caching implementation. Unfortunately, the documentation is very limited, which is why I need some help with the following issue. I am using the latest version, 0.26-BETA.
According to the documentation, the cache is supposed to be created like this:
Cache<String,String> c =
CacheBuilder.newCache(String.class, String.class).build();
String val = c.peek("something");
assertNull(val);
c.put("something", "hello");
val = c.get("something");
assertNotNull(val);
c.destroy();
Unfortunately, at least two of these methods are deprecated, including the CacheBuilder class itself. I therefore tried creating the cache like this:
org.cache2k.Cache<String, Object> newCache = Cache2kBuilder.of(String.class, Object.class)
.name(cacheKey.toString())
.entryCapacity(100000)
.eternal(true)
.build();
This, however, throws the "java.lang.UnsupportedOperationException: loader not set" exception.
The question therefore is: how am I supposed to build the cache so that I do not get this exception?
EDIT:
This gives me the same exception:
org.cache2k.Cache<Object, Object> newCache =
CacheBuilder.newCache(Object.class, Object.class)
.eternal(true)
.build();
EDIT #2:
Just one more note: when I copy & paste the code from the wiki page, I get an error, as can be seen in the image below.
With what JDK version are you testing? I'll try just removing the <> that are causing the problem for now.
Thanks very much in advance!
Michael
Cache2k looks like a very promising caching implementation.
Thanks :)
According to the documentation the cache is supposed to be created like this
There are new interfaces in place. The deprecated ones are still there to support users of older cache2k versions. That will get cleaned up in the next few weeks. Sorry for the confusion.
Please take a look here for the latest getting started information:
https://github.com/cache2k/cache2k/blob/master/doc/src/docs/asciidoc/user-guide/sections/_start.adoc
This, however, throws the "java.lang.UnsupportedOperationException: loader not set" exception.
The question therefore is: how am I supposed to build the cache so that I do not get this exception?
Short answer: either use cache.peek() or wait for 0.27, since then it will work with cache.get() transparently.
Longer answer: In our own applications, I use cache.get() only when a loader is defined and cache.peek() when no loader is defined or when I only want to inspect the cache. Reserving cache.get() for read-through usage seemed like a good idea. However, I reasoned that it might be a caveat for new users, so I am changing that behavior and aligning it with other cache solutions.
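To make the distinction concrete, here is a minimal sketch (the cache name is arbitrary) that avoids the exception on 0.26 by sticking to peek()/put() when no loader is configured:

import org.cache2k.Cache;
import org.cache2k.Cache2kBuilder;

public class PeekVsGet {
    public static void main(String[] args) {
        // no loader configured, so use peek()/put();
        // on 0.26, get() without a loader throws "loader not set"
        Cache<String, String> cache = Cache2kBuilder.of(String.class, String.class)
            .name("peekVsGet")
            .eternal(true)
            .build();

        cache.put("something", "hello");
        System.out.println(cache.peek("something")); // hello
        System.out.println(cache.peek("missing"));   // null, no exception

        cache.close();
    }
}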
Answer to Edit 2:
For an untyped cache, use the factory method Cache2kBuilder.forUnknownTypes(). Constructing the anonymous class is only needed for specific types. For example, as a brief sketch (the name is arbitrary):
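import org.cache2k.Cache;
import org.cache2k.Cache2kBuilder;

// untyped cache without constructing an anonymous subclass
Cache<Object, Object> untyped = Cache2kBuilder.forUnknownTypes()
    .name("untypedCache")
    .eternal(true)
    .build();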
I'm using Scala v2.10.2 and Eclipse with the Scala plugin v3.0.1. The full error message is:
error while loading Vector$1, class file 'C:\Program Files\Java\jre7\lib\rt.jar(java/util/Vector$1.class)' is broken (class java.util.NoSuchElementException/key not found: E)
It occurs when attempting to extend java.util.Stack:
import java.util.Stack
class MyStack[T] extends Stack[T]{}
It's worth noting that java.util.Stack is a subclass of java.util.Vector.
java.util.Stack extends the essentially deprecated java.util.Vector and is thus also essentially deprecated (neither is actually deprecated, but the docs recommend using newer alternatives if you're running a newer version of Java). The javadoc for Stack recommends using the java.util.Deque interface instead:
A more complete and consistent set of LIFO stack operations is provided by the Deque interface and its implementations, which should be used in preference to this class. For example: Deque<Integer> stack = new ArrayDeque<Integer>();
Using the Deque interface and java.util.ArrayDeque will probably solve your problem since, referring to pretzels1337's answer, this seems to be a Vector-specific bug.
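For instance, a minimal sketch of the Deque-based replacement (the stack contents are arbitrary):

import java.util.ArrayDeque;
import java.util.Deque;

public class DequeAsStack {
    public static void main(String[] args) {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);
        stack.push(2);
        System.out.println(stack.pop());  // 2 (LIFO order)
        System.out.println(stack.peek()); // 1
    }
}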
This same issue may be part of a larger bug report:
https://issues.scala-lang.org/browse/SI-7455
The report claims it was fixed in Scala 2.10.3-RC1 and Scala 2.11.0-M6.
I'm waiting for the next stable Scala IDE update before verifying the fix (lazy, I know), but a simple workaround in the meantime is to change the class definition to extend scala.collection.mutable.Stack instead.
--
Most people running into this issue are trying to use Swing; for you, I can only recommend trying one of the fixed builds of Scala.