While exploring how to unit test a Kafka Stream I came across ProcessorTopologyTestDriver; unfortunately, this class appears to have been broken since version 0.10.1.0 (KAFKA-4408).
Is there a workaround available for the KTable issue?
I saw the "Mocked Streams" project, but first, it targets version 0.10.2.0 while I'm on 0.10.1.1, and second, it is written in Scala while my tests are Java/Groovy.
Any help on how to unit test a stream without having to bootstrap ZooKeeper/Kafka would be great.
Note: I do have integration tests that use embedded servers; this question is about unit tests, i.e. fast, simple tests.
EDIT
Thank you to Ramon Garcia
For people arriving here from Google searches, please note that the test driver class is now org.apache.kafka.streams.TopologyTestDriver.
This class lives in the Maven artifact with groupId org.apache.kafka and artifactId kafka-streams-test-utils.
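For reference, a minimal sketch of how the newer TopologyTestDriver is used. This assumes a recent Kafka version (2.4+, where createInputTopic/createOutputTopic exist; earlier versions use ConsumerRecordFactory instead), and "input-topic"/"output-topic" are placeholder names:

import java.util.Properties;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TestOutputTopic;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "unit-test-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234"); // never actually contacted

Topology topology = builder.build(); // builder is your StreamsBuilder
try (TopologyTestDriver driver = new TopologyTestDriver(topology, props)) {
    TestInputTopic<String, String> input =
            driver.createInputTopic("input-topic", new StringSerializer(), new StringSerializer());
    TestOutputTopic<String, String> output =
            driver.createOutputTopic("output-topic", new StringDeserializer(), new StringDeserializer());

    input.pipeInput("some-key", "some-value");
    // assert on output.readKeyValue() / output.readKeyValuesToList()
}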
I found a way around this; I'm not sure it is THE answer, especially after Matthias J. Sax's comment (https://stackoverflow.com/users/4953079/matthias-j-sax). In any case, here is what I have so far...
I completely copied ProcessorTopologyTestDriver from the 0.10.1 branch (that's the version I'm using).
To address KAFKA-4408 I made the private final MockConsumer<byte[], byte[]> restoreStateConsumer field accessible and moved the task = new StreamTask(...) block from the constructor into a separate method, e.g. bootstrap().
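A rough outline of what the modified copy might look like; everything elided with "..." stays as in the original 0.10.1 source, and only the pieces described above change:

public class ProcessorTopologyTestDriver {

    // was: private final MockConsumer<byte[], byte[]> restoreStateConsumer;
    final MockConsumer<byte[], byte[]> restoreStateConsumer; // made accessible to the test

    public ProcessorTopologyTestDriver(StreamsConfig config, TopologyBuilder builder) {
        // ... original constructor code, minus the StreamTask creation ...
    }

    // moved out of the constructor so the test can seed restoreStateConsumer first
    public void bootstrap() {
        task = new StreamTask( /* ... original arguments ... */ );
        // ... remainder of the original initialization ...
    }
}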
In the setup phase of my test I do the following:
// the constructor no longer creates the StreamTask; that now happens in bootstrap()
driver = new ProcessorTopologyTestDriver(config, builder)

// tell the mock restore consumer about the KTable's changelog partition and its end offset
ArrayList partitionInfos = new ArrayList();
partitionInfos.add(new PartitionInfo('my_ktable', 1, (Node) null, (Node[]) null, (Node[]) null));
driver.restoreStateConsumer.updatePartitions('my_ktable', partitionInfos);
driver.restoreStateConsumer.updateEndOffsets(Collections.singletonMap(new TopicPartition('my_ktable', 1), Long.valueOf(0L)));

// now let the driver create the StreamTask
driver.bootstrap()
And that's it...
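For completeness, a hedged sketch of how a test can then exercise the topology; process() and readOutput() are the methods the 0.10.1 ProcessorTopologyTestDriver exposes, but verify the exact signatures against your copied class:

driver.process("input_topic", "some-key", "some-value", new StringSerializer(), new StringSerializer());
ProducerRecord<String, String> record =
        driver.readOutput("output_topic", new StringDeserializer(), new StringDeserializer());
// assert on record.key() / record.value()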
Bonus
I also ran into KAFKA-4461; fortunately, since I had copied the whole class, I was able to "cherry-pick" the accepted fix with minor tweaks.
As always, feedback is appreciated. Although apparently not an official test class, this driver has proven super useful!
I am trying to pass a parameter to a ksy file. The parameter's type is another ksy file, because I need to access all the fields of the ksy file passed as the parameter.
Is that possible?
If yes, would you please provide a syntax/code snippet that I can mimic?
If no, what would be another solution?
Thank you.
Affiliate disclaimer: I'm a Kaitai Struct maintainer (see my GitHub profile).
First, I recommend always using the development version of the Kaitai Struct Web IDE (https://ide.kaitai.io/devel/), not the stable one. The stable IDE deployed at https://ide.kaitai.io/ ships the 0.8 compiler, which is indeed the latest stable release but is already two years old at this point. The project is under active development, with bug fixes and improvements landing every week, so the stable Web IDE is fairly outdated. Thanks to a recent infrastructure enhancement, the devel Web IDE is now rebuilt every time the compiler is updated, so you can use even the most recent features.
However, you won't be able to simulate the particular situation you describe in the Web IDE, because it can't currently handle top-level parametric types (there is no hook where you can pass your own values as arguments). It should work in a local environment, though. You can compile the commontype.ksy and pty.ksy specs in the Web IDE to the target language you want to use (the manual shows how to do that). The code putting it together could look like this (Java):
import io.kaitai.struct.ByteBufferKaitaiStream;

// Commontype and Pty are the classes generated from commontype.ksy and pty.ksy
Commontype ct = new Commontype(new ByteBufferKaitaiStream(new byte[] { 80, 75 }));
Pty r = new Pty(
        new ByteBufferKaitaiStream(new byte[] { 80 }), // IO stream
        ct                                             // commonword
);
Note that the actual parameter order of the Pty constructor may differ between languages; in Python, for example, the custom parameters (commonword) come first, followed by the IO object. Check the generated code in your particular language.
OffsetRequest has been deprecated for a while, along with almost all the other classes in kafka.api, and has now been removed in 1.0. However, neither the deprecation message nor the docs explain what can be used instead.
The FAQ is not helpful for this topic.
There are CLI tools for this, but I have not found any advice on doing it programmatically.
The classes in kafka.api are for the old (Scala) clients, which are deprecated and will be removed in a future release.
The new Java clients are using the classes defined in org.apache.kafka.common.requests.
The class org.apache.kafka.common.requests.ListOffsetRequest is the replacement for the old OffsetRequest.
The following methods from KafkaConsumer can be used to retrieve offsets (they all send a ListOffsetRequest from the client); a usage sketch follows the list:
beginningOffsets() : http://kafka.apache.org/10/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#beginningOffsets-java.util.Collection-
endOffsets(): http://kafka.apache.org/10/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#endOffsets-java.util.Collection-
offsetsForTimes(): http://kafka.apache.org/10/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#offsetsForTimes-java.util.Map-
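A minimal sketch of how these calls look in practice (broker address, topic name, partition, and timestamp are placeholders):

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "offset-lookup");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    TopicPartition tp = new TopicPartition("my-topic", 0);
    List<TopicPartition> partitions = Collections.singletonList(tp);

    Map<TopicPartition, Long> earliest = consumer.beginningOffsets(partitions);
    Map<TopicPartition, Long> latest = consumer.endOffsets(partitions);

    // earliest offsets whose record timestamp is at or after one hour ago
    Map<TopicPartition, OffsetAndTimestamp> byTime =
            consumer.offsetsForTimes(Collections.singletonMap(tp, System.currentTimeMillis() - 3600000L));
}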
Cache2k looks like a very promising caching implementation. Unfortunately the documentation is very limited, which is why I need some help with the following issue. I am using the latest version 0.26-BETA.
According to the documentation the cache is supposed to be created like this:
Cache<String, String> c =
    CacheBuilder.newCache(String.class, String.class).build();
String val = c.peek("something");
assertNull(val);
c.put("something", "hello");
val = c.get("something");
assertNotNull(val);
c.destroy();
Unfortunately, at least two of these methods are deprecated, as is the CacheBuilder class itself. I therefore tried creating the cache like this:
org.cache2k.Cache<String, Object> newCache = Cache2kBuilder.of(String.class, Object.class)
        .name(cacheKey.toString())
        .entryCapacity(100000)
        .eternal(true)
        .build();
This however throws the "java.lang.UnsupportedOperationException: loader not set" exception.
The question therefore is: how am I supposed to build the cache so that I do not get this exception?
EDIT:
This gives me the same exception:
org.cache2k.Cache<Object, Object> newCache =
        CacheBuilder.newCache(Object.class, Object.class)
                .eternal(true)
                .build();
EDIT #2:
Just one more note: when I copy & paste the code from the wiki page I get an error, as can be seen in the image below.
What JDK version are you testing with? For now I'll try just removing the <> that are causing the problem.
Thanks very much in advance!
Michael
Cache2k looks like a very promising caching implementation.
Thanks :)
According to the documentation the cache is supposed to be created like this
There are new interfaces in place. The deprecated ones are still there to support users of older cache2k versions; that will get cleaned up in the coming weeks. Sorry for the confusion.
Please take a look here for the latest getting started information:
https://github.com/cache2k/cache2k/blob/master/doc/src/docs/asciidoc/user-guide/sections/_start.adoc
This however throws the "java.lang.UnsupportedOperationException: loader not set" exception.
The question therefore is: how am I supposed to build the cache so that I do not get this exception?
Short answer: either use cache.peek(), or wait for 0.27, where cache.get() will work transparently without a loader.
Longer answer: In our own applications I use cache.get() only when a loader is defined, and cache.peek() when no loader is defined or when I only want to inspect the cache. Reserving cache.get() for read-through usage seemed like a good idea. However, I reasoned that it might be a pitfall for new users, so I am changing that behavior and aligning it with other cache solutions.
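A minimal sketch of the difference, with no loader configured (based on the semantics described above; in 0.26 it is the get() call that requires a loader, from 0.27 on it works without one):

Cache<String, String> cache = Cache2kBuilder.of(String.class, String.class)
    .name("peekVsGetExample")
    .eternal(true)
    .build();

cache.put("something", "hello");
String viaPeek = cache.peek("something"); // never invokes a loader
String viaGet = cache.get("something");   // needs a loader in 0.26, transparent from 0.27 on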
Answer to Edit 2:
For an untyped cache use the factory method Cache2kBuilder.forUnknownTypes(). Constructing the anonymous class is only needed for specific types.
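A short sketch of both variants (cache names are placeholders):

// untyped cache via the factory method
Cache<Object, Object> untyped = Cache2kBuilder.forUnknownTypes()
    .name("untypedCache")
    .build();

// typed cache via the anonymous subclass, which captures the generic type parameters
Cache<String, Long> typed = new Cache2kBuilder<String, Long>() { }
    .name("typedCache")
    .build();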
I am new to JBehave, having started to use it yesterday.
There seems to be a typo in the Getting Started pages that I hope someone can help with.
In the "Developing Stories" section, the example configuration has the line:
addSteps(new InstanceStepsFactory(new TraderSteps(), new BeforeAndAfterSteps()).createCandidateSteps());
However, there is no class called BeforeAndAfterSteps. The nearest I found was BeforeOrAfterSteps, but it requires constructor parameters and I'm not sure what to use.
Thanks
Correct - the InstanceStepsFactory takes your POJOs with JBehave-annotated methods and creates Steps out of them. There are also steps factories that can create the Steps classes from a container (PicoContainer, Guice, Spring, or Weld).
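A hedged example of that wiring (TraderSteps and MySteps stand in for your own annotated POJOs):

import java.util.List;
import org.jbehave.core.configuration.Configuration;
import org.jbehave.core.configuration.MostUsefulConfiguration;
import org.jbehave.core.steps.CandidateSteps;
import org.jbehave.core.steps.InjectableStepsFactory;
import org.jbehave.core.steps.InstanceStepsFactory;

Configuration configuration = new MostUsefulConfiguration();
InjectableStepsFactory stepsFactory =
        new InstanceStepsFactory(configuration, new TraderSteps(), new MySteps());
List<CandidateSteps> candidateSteps = stepsFactory.createCandidateSteps();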
I'm having major problems getting Undercover to work with Maven.
I'm using ScalaTest for unit tests, and that part works perfectly.
When I run Undercover, though, it simply creates empty files.
I think it's probably a problem with the configuration in my pom.xml (the documentation for Undercover is a little sketchy).
Help :)
Thanks
T
At one point I inquired about Emma on a Scala mailing list, and some people told me they had more success with Cobertura. You might want to try that instead.
Up to what stage is this "working correctly", given that empty files are being produced?
Do you have a sample project/POM that demonstrates the problem?