ZooKeeper Recipes Dependencies - apache-zookeeper

I am new to ZooKeeper and am trying to use some of the ZooKeeper recipes implemented here: https://github.com/apache/zookeeper/tree/trunk/src/recipes, so that I don't have to build them myself.
It seems they are not distributed as part of the ZooKeeper libraries in Maven Central etc. What is the convention in the ZooKeeper community? Should I just pull the source and build them myself? Or am I missing something, and they are readily available to consume from a central repo?
Thanks

Not directly answering your question, but...
Perhaps you'd find Curator (https://github.com/Netflix/curator/wiki) useful? We sure have so far. It has many of the base ZooKeeper recipes implemented and is bundled as separate Maven dependencies.
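For a taste of what that looks like, here is a minimal sketch of one of those recipes, the InterProcessMutex distributed lock from the curator-recipes artifact, written against today's Apache Curator packages (the Netflix-era builds used the com.netflix.curator namespace instead); the connection string and lock path are placeholders:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorLockExample {
    public static void main(String[] args) throws Exception {
        // Connect to ZooKeeper with a standard retry policy.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();
        try {
            InterProcessMutex lock = new InterProcessMutex(client, "/locks/my-resource");
            lock.acquire();
            try {
                // Critical section: only one process holds the lock at a time.
            } finally {
                lock.release();
            }
        } finally {
            client.close();
        }
    }
}
```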

Related

Quarkus class auto-generation mechanism from Avro schemas is not working

I created a new Kafka Stream project based on Quarkus 2.1.2 following these two guides:
https://quarkus.io/guides/kafka-streams
https://quarkus.io/guides/kafka-schema-registry-avro
I put some Avro schemas in the src/main/avro/ folder and built the project with Maven.
The build is successful but there aren't any Java classes in the target/generated-sources/avsc directory.
As the guide says, with the new version of Quarkus:
there’s no need to use a specific Maven plugin to process the Avro schema, this is all done for you!
I also double-checked that the generate-code goal was enabled for the quarkus-maven-plugin.
Am I missing something, or is the guide incomplete in some way? It doesn't seem that class generation is automatically managed by Quarkus.
P.S. I'm using Java 11.0.2 and Maven 3.8.1.
Thank you,
Mauro
As suggested by @Ladicek, I tried adding the quarkus-avro dependency, and now I can find the auto-generated classes.
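For reference, the fix was a one-line addition to the pom.xml, sketched here (no version element needed when the Quarkus BOM manages it):

```xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-avro</artifactId>
</dependency>
```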
I think the guide https://quarkus.io/guides/kafka-schema-registry-avro should be amended: the "default" dependency, quarkus-apicurio-registry-avro, has quarkus-avro among its dependencies, while Confluent's kafka-avro-serializer (obviously) doesn't.
I opened an issue to improve the guide.
Thank you very much

Is there documentation, a blog, or an example on creating Kafka sink or source plugins?

We are planning to create our own repo of connectors (sink or source plugins) for Apache Kafka like the one here.
We tried to search for documentation or help on how to create a plugin JAR for Kafka.
There is no mention of developing a plugin in the official documentation from Apache Kafka.
Any help or pointers would be appreciated; we can share the result back with the open community once developed.
Here is a guide on How to Build a Connector.
There is also a Connector Developer Guide.
Developing a connector only requires implementing two interfaces, Connector and Task; a minimal skeleton is sketched below.
Refer to the example source code for a full, simple example.
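As an illustration, here is a hedged skeleton of a source connector written against the org.apache.kafka:connect-api artifact. The class names and the "topic" config key are hypothetical placeholders; a real task would fetch data from an external system in poll():

```java
// MySourceConnector.java -- hypothetical example class
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.Task;
import org.apache.kafka.connect.source.SourceConnector;

public class MySourceConnector extends SourceConnector {
    private Map<String, String> props;

    @Override public String version() { return "0.1.0"; }

    @Override public void start(Map<String, String> props) { this.props = props; }

    @Override public Class<? extends Task> taskClass() { return MySourceTask.class; }

    @Override public List<Map<String, String>> taskConfigs(int maxTasks) {
        // Trivial sketch: give every task an identical copy of the config.
        List<Map<String, String>> configs = new ArrayList<>();
        for (int i = 0; i < maxTasks; i++) {
            configs.add(props);
        }
        return configs;
    }

    @Override public void stop() { }

    @Override public ConfigDef config() {
        return new ConfigDef().define("topic", ConfigDef.Type.STRING,
                ConfigDef.Importance.HIGH, "Topic to produce records to");
    }
}
```

```java
// MySourceTask.java -- hypothetical example class
import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class MySourceTask extends SourceTask {
    @Override public String version() { return "0.1.0"; }

    @Override public void start(Map<String, String> props) { }

    @Override public List<SourceRecord> poll() throws InterruptedException {
        // A real implementation would read from the external system here
        // and convert each item into a SourceRecord.
        return Collections.emptyList();
    }

    @Override public void stop() { }
}
```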
Once you’ve developed and tested your connector, you must package it so that it can be easily installed into Kafka Connect installations. The two techniques described here both work with Kafka Connect’s plugin path mechanism.
If you plan to package your connector and distribute it for others to use, you are obligated to properly license and copyright your own code and to adhere to the licensing and copyrights of all libraries your code uses and that you include in your distribution.
Creating an Archive
The most common approach to packaging a connector is to create a tarball or ZIP archive. The archive should contain a single directory whose name will be unique relative to other connector implementations, and will therefore often include the connector’s name and version. All of the JAR files and other resource files needed by the connector, including third party libraries, should be placed within that top-level directory. Note, however, that the archive should never include the Kafka Connect API or runtime libraries.
To install the connector, a user simply unpacks the archive into the desired location. Having the name of the archive's top-level directory be unique makes it easier to unpack the archive without overwriting existing files. It also makes it easy to place this directory on the plugin path (see Installing Connect Plugins) or, for older Kafka Connect installations, to add the JARs to the CLASSPATH.
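For illustration, a hypothetical archive for a connector named my-connector might unpack to a single top-level directory like this (all names are placeholders; note the absence of any Kafka Connect API or runtime JARs):

```text
my-connector-1.0.0/
    my-connector-1.0.0.jar
    some-third-party-lib-2.3.jar
    LICENSE
    README.md
```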
Creating an Uber JAR
An alternative approach is to create an uber JAR that contains all of the connector's JAR files and other resource files (for example, with the Maven Shade Plugin). No internal directory structure is necessary.
To install, a user simply places the connector's uber JAR into one of the directories on the plugin path (see Installing Connect Plugins).
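As a sketch of one way to build such a JAR, a maven-shade-plugin configuration along these lines shades everything into one artifact during package (the version shown is an illustrative assumption); keep the Kafka Connect API dependency in provided scope so it stays out of the JAR, per the note above:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.4.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```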

What is the Main class in a Lagom/Play application?

I'm trying to figure out how to package and deploy my Lagom app in production. The docs are surprisingly coy about how to actually do this, and when I try to use sbt-native-packager to run universal:packageBin, I get the warning "You have no main class in your project. No start script will be generated."
Has anyone worked through this, and can you point me to a good tutorial or reference?
https://github.com/lagom/lagom/blob/a35fab1ad8a0c4a3d28d6c86ae31a2408da2e340/dev/sbt-plugin/src/main/scala/com/lightbend/lagom/sbt/LagomSettings.scala#L28
Adding that setting to your project will fix it. That said, you generally shouldn't see this warning, because the Lagom plugin should configure it for you. There are two reasons I can think of off the top of my head why you might be seeing it.
The first is that you don't have the Lagom plugin enabled on your project. If that's the case, and you're not doing something advanced where you really know what you're doing (and if you really knew what you were doing, I'd be surprised you had to ask this question), then you probably have a misconfiguration and need to enable the Lagom plugin.
The second is that you're running universal:packageBin on multiple projects, some of which have the Lagom plugin enabled and some of which don't. In that case, you probably only want to build the production artifact for your Lagom service, not for all the other projects (e.g., the API project or the root project). So, just run it for your service (e.g., run my-service-impl/universal:packageBin).

OSGi bundle - Eclipse project bundling with ALL dependencies

I am new to the OSGi world and could use some advice from the experts out there. My aim is to deploy a few servlets along with REST resources into a standard Karaf installation. I am planning to use Grizzly (with Jersey) as the HTTP container.
I am trying to figure out a way to create an Eclipse project in which I can compile my custom code and deploy it, along with all dependencies such as Grizzly, Jersey, and the OSGi frameworks & bundles, as a single archive into Karaf.
The end goal is to have a single deployable entity which includes all my code and the dependencies without needing to manually install dependencies in Karaf.
Is this possible, or am I looking at it the wrong way? I have been reading up on OBR, features, and KAR but am not yet able to put the whole picture together. What would be the best practice with respect to achieving this objective?
Thanks!
To give you the general idea regarding embedding and launching a complete OSGi application, I suggest you check out chapter 13 of this book. It explains things using the Equinox implementation, but the overall approach should look similar elsewhere. If you follow it through, you will see that you can put all your bundles in a folder that the system iterates through to install them.
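Here is a hedged sketch of that install-from-a-folder idea using the standard OSGi launch API (it should work with Equinox or Felix on the classpath); the "bundles" directory name is a placeholder:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.ServiceLoader;

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.launch.Framework;
import org.osgi.framework.launch.FrameworkFactory;

public class EmbeddedLauncher {
    public static void main(String[] args) throws Exception {
        // Locate whichever framework implementation is on the classpath.
        FrameworkFactory factory =
                ServiceLoader.load(FrameworkFactory.class).iterator().next();
        Framework framework = factory.newFramework(new HashMap<String, String>());
        framework.start();

        // Install every bundle JAR found in the folder first, then start
        // them all, so that inter-bundle dependencies can resolve.
        BundleContext context = framework.getBundleContext();
        List<Bundle> installed = new ArrayList<>();
        for (File jar : new File("bundles").listFiles((dir, name) -> name.endsWith(".jar"))) {
            installed.add(context.installBundle(jar.toURI().toString()));
        }
        for (Bundle bundle : installed) {
            bundle.start();
        }

        framework.waitForStop(0); // block until the framework shuts down
    }
}
```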

What's the point of downloading the source JARs in a Grails project?

I've noticed that in Eclipse, if you right-click on a project -> Grails Tools, you have the option to 'Download Source Jars'.
What is the point of this, and what are some common reasons why you would want to do it?
Grails 2.2.3
Edit:
I'm not even sure what Grails does instead of that.
Many (most) libraries (JARs, "artifacts" in Maven terminology) publish a sources archive alongside their binary artifacts in the repositories. This lets Eclipse show you the Javadoc and source code when you're using the library in your projects. As @JonSkeet commented above, it's very useful to have source code available directly in the IDE when using a library.
By default, Grails does not download the sources for artifacts; this option triggers it to do so and attaches the sources to the binary JARs.
Agreed with E-Riz.
Here are the reasons I use the sources:
I want to gain a deeper understanding of how the library works when debugging my own depending code.
I want to find a possible bug in the library, so I can fork it and apply my own patch. I will possibly share this with the maintainers as a pull request if I'm willing to spend that much time on it.
I want to find out which logging systems it uses (which may be poorly documented), so I can better see what the code is doing at runtime when troubleshooting complicated problems.