Zookeeper/Kafka with Tomcat - Possible At All? - apache-kafka

I was wondering if anyone has used ZooKeeper/Kafka embedded within Tomcat. I know that Kafka requires ZooKeeper, but does that mean I have to run Kafka and ZooKeeper as separate instances? So far I cannot find any use case where everything has been bolted in. Could anyone advise?
My question is really about the concept of using ZooKeeper and Kafka as JARs within the same Tomcat web application.

Both Kafka and Zookeeper are meant to be used in a stand-alone fashion, run as separate processes.
They should even be on different machines/vms/containers than the tomcat web application.
You also probably want a Zookeeper cluster of 3-5 machines, rather than a single one, at least for production.
Both of them do have Java clients, though, which you can use from the web application to interact with them, and those are fine to include.
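To illustrate that last point, here is a minimal sketch (not from the answer) of a Tomcat-hosted web application talking to an external Kafka cluster through the plain Kafka Java client. The broker addresses, topic, and class name are placeholders:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Lives inside the web application; the brokers themselves run elsewhere.
public class EventPublisher implements AutoCloseable {

    private final KafkaProducer<String, String> producer;

    public EventPublisher(String bootstrapServers) {
        Properties props = new Properties();
        // e.g. "kafka-1:9092,kafka-2:9092" - the external cluster, not an embedded broker
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        this.producer = new KafkaProducer<>(props);
    }

    public void publish(String topic, String key, String value) {
        // Fire-and-forget send; add a callback or block on the returned Future
        // if you need delivery confirmation.
        producer.send(new ProducerRecord<>(topic, key, value));
    }

    @Override
    public void close() {
        producer.close();
    }
}
```

The application only carries the client JAR; the brokers and the ZooKeeper ensemble stay outside Tomcat.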

Related

Spring Cloud Integration test, embedded kafka vs testcontainers

I have a Spring Cloud Stream application for which I need to write an integration test (specifically, using Cucumber). The application communicates with other services using the Kafka message broker. From what I know, I could make this work using either Kafka Testcontainers or Spring's embedded Kafka, but I don't know which would be the better solution. Is there anything Testcontainers can do that embedded Kafka can't, or the other way around? (Use cases or examples would be appreciated!)
P.S. This integration test should be able to run in a CI/CD pipeline.
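For reference, a minimal sketch of the embedded-broker option using spring-kafka-test's @EmbeddedKafka, assuming the application is a Spring Boot app; the topic name, property name, and test body are illustrative placeholders, not part of the original question:

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.context.EmbeddedKafka;

@SpringBootTest
@EmbeddedKafka(partitions = 1, topics = "orders",
               bootstrapServersProperty = "spring.kafka.bootstrap-servers")
class OrderFlowEmbeddedKafkaTest {

    @Autowired
    EmbeddedKafkaBroker broker;   // the broker runs inside the same JVM as the test

    @Test
    void publishesOrderEvent() {
        // Drive the application (e.g. via its binding or REST endpoint) and
        // assert on records consumed from the embedded broker.
    }
}
```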
It is called embedded for a reason. It can really only be accessed from the process which spawned it. With Testcontainers you can reuse an existing container and access it from another process, but that's probably too exotic.
I guess that with properly configured Testcontainers you can get as close as possible to the production setup you'd deploy your solution to. Embedded Kafka might be limited in some areas, e.g. SSL configuration.
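By way of comparison, a hedged sketch of the Testcontainers option: a real Kafka broker started in Docker, so the test exercises the same wire protocol (and, if needed, SSL/ACL setup) as production. The image tag and property name are assumptions:

```java
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@SpringBootTest
@Testcontainers
class OrderFlowTestcontainersTest {

    @Container
    static KafkaContainer kafka =
            new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"));

    @DynamicPropertySource
    static void kafkaProperties(DynamicPropertyRegistry registry) {
        // Point the application at the containerised broker.
        registry.add("spring.kafka.bootstrap-servers", kafka::getBootstrapServers);
    }

    @Test
    void publishesOrderEvent() {
        // Same assertions as the embedded variant, now against a real broker.
    }
}
```

Both variants run fine in a CI/CD pipeline; the Testcontainers one additionally requires a Docker-capable build agent.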

Should ZooKeeper be run on independent machines in a production environment?

In practice, I would like to build a server using ZooKeeper and Kafka.
However, I heard that ZooKeeper and Kafka should be set up separately, and I wonder why.

Micro services with JBOSS

I am new to JBoss and want to know whether a microservices architecture is the right choice on JBoss. I cannot change the application server, as it was decided by the client's architect, so I have no choice.
I want to know whether we can develop microservices with JBoss as the underlying application server.
I understand Spring Boot comes with an embedded Tomcat container, which makes it flexible to stop and start, and to deploy an individual service with no impact on other services.
However, will that architecture work with JBoss too?
Please suggest.
Thanks,
I actually carried out a feasibility study to investigate the solution you mentioned. My conclusion is that it is entirely viable to apply microservice principles on a JBoss platform.
I used the combination of JBoss / Spring Boot / Netflix to create a successful microservice stack. I personally did that to find a solution to the transaction problem (multiple microservices collaborating) and the fan-out problem, which is caused by excessive network communication and serialization costs.
I also wrote a blog post about the subject; you might find more details there if you like. Here is the link:
Micro Services – Fan Out, Transaction Problems and Solutions with Spring Boot/JBoss and Netflix Eureka
By the definition of what microservices are, then conceptually yes. A microservice is a service that is an independent unit; it can be deployed, updated, and undeployed independently without affecting any unrelated part of your application. So that would mean having multiple instances of JBoss for the microservices, with your application calling them through some sort of gateway or other mechanism depending on your use case. If you plan to deploy all your microservices in the same JBoss instance, then it defeats the very purpose of a microservice. Given that, JBoss wouldn't be the right choice for microservice deployment, because it will only make your deployment infrastructure quite heavy.
Depending on what your client's requirements are, you could possibly keep your web app in JBoss and deploy your microservice containers separately.
It depends on what you want to get out of microservices.
Some of the developers at my organisation looked at Spring Boot but concluded that it's best run as a standalone container rather than in JBoss; otherwise you've effectively got two container frameworks competing (Spring Boot and JBoss) and a range of associated issues.
Deploying microservices in JBoss won't give you the same flexibility as a true container system like Docker. With Docker you create standalone packages for your microservices that contain all the code, system tools, runtime environment, etc. They can be as small or as large as they need to be. JBoss, on the other hand, is a large container running a single JVM, designed to hold multiple applications. The level of isolation is not the same, and it's not efficient to run JBoss as a container for a single microservice, so you have to size each instance appropriately and then deploy enough services to it to make use of the resources it has available.
If you're looking at microservices as a way to gain greater control over service lifecycle management (deployment, versioning, deprecating, etc.) as opposed to an automated, web-scale component deployment model a la Netflix or LinkedIn, you could do this adequately with JBoss.
I'm actually looking to do something along those lines here. It won't be true microservices, but by packaging and deploying individual, properly versioned APIs rather than monolithic applications, and by following most of the other principles of microservice development (componentisation, business-function focus, statelessness, etc.), we will hopefully be better able to manage and benefit from our APIs.
Our APIs will all sit behind an API gateway and load balancer, so we can choose how to distribute the microservices across the JBoss instances and balance resource usage as required. Note that our organisation is relatively small and has relatively low and predictable traffic, so this approach should work fine for us. Your needs, however, may be different.

Running zookeeper from within application

I'd like to use ZooKeeper in one of my applications for distributed configuration management. The application currently runs in a distributed environment, and having to restart nodes for configuration file changes is a headache.
However, we want the ZooKeeper process to be started from within the application. The point is to reduce startup dependencies and operational cost. We already have startup/shutdown scripts for the application, and we need to minimise the impact on the operations team.
Has anyone done something similar? Is this setup recommended, or are there better solutions? Any tips or feedback are appreciated.
I have a blog post that describes how to embed Zookeeper in an application. The Zookeeper developers don't recommend it, though, and I would tend to agree now, though I had the same rationale for embedding it that you do - to reduce the number of moving parts.
You want to keep your ZK cluster stable but you will need to restart your app to do code updates, etc, impacting the ZK cluster stability.
Ultimately you will end up using your ZK cluster for multiple apps and those extra moving parts will be amortized over a number of projects.
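For completeness, if you did embed ZooKeeper anyway, a rough sketch using the standard ZooKeeper server classes might look like the following. The port, data directory, and threading choice are assumptions, and this starts a single, non-replicated node only (no quorum), which is exactly the kind of setup the answer advises against for production:

```java
import java.util.Properties;

import org.apache.zookeeper.server.ServerConfig;
import org.apache.zookeeper.server.ZooKeeperServerMain;
import org.apache.zookeeper.server.quorum.QuorumPeerConfig;

public class EmbeddedZooKeeper {

    // Starts a standalone ZooKeeper server inside the application's JVM.
    public static void start(int clientPort, String dataDir) throws Exception {
        Properties props = new Properties();
        props.setProperty("clientPort", Integer.toString(clientPort)); // placeholder, e.g. 2181
        props.setProperty("dataDir", dataDir);                         // placeholder path
        props.setProperty("tickTime", "2000");

        QuorumPeerConfig quorumConfig = new QuorumPeerConfig();
        quorumConfig.parseProperties(props);

        ServerConfig serverConfig = new ServerConfig();
        serverConfig.readFrom(quorumConfig);

        ZooKeeperServerMain server = new ZooKeeperServerMain();
        // runFromConfig blocks, so run it on its own thread inside the app.
        new Thread(() -> {
            try {
                server.runFromConfig(serverConfig);
            } catch (Exception e) {
                throw new RuntimeException("Embedded ZooKeeper failed", e);
            }
        }, "embedded-zookeeper").start();
    }
}
```

Note that the server's lifecycle is now tied to the application's: every redeploy of the app restarts ZooKeeper, which is the stability problem described above.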

Deploying an application without undeploying previous one and with no downtime?

I use GlassFish, Java, and JSP over MySQL for my web applications. Many people use this web application online, and the web site should not go down.
When I want to deploy a new WAR file, I have to undeploy the old application and deploy the new one on the server.
My question is this:
Is there any technology that would let me just change the appropriate classes, with no need to undeploy my application and redeploy it again?
There are java technologies that would allow you to replace classes on the fly (like JRebel). But since you're using Glassfish already, you should just start using clustering which is built into glassfish. You'll need either 2.1 or 3.1, as 3.0 does not support clustering. With a Glassfish cluster, you have a load balancer (Apache, Sun Web Server, hardware (Big IP, Coyote), etc) distribute the load among your cluster nodes. When you want to upgrade the app, you can technically do it one node at a time. Setting up the cluster is not the easiest thing in the world, but it is doable and it would get you some great benefits. You'll be able to scale the load by adding new hardware and even using Amazon (or whoever) cloud services. You'll be able to keep your site running even if the hardware fails on one of the nodes.
Personally I'm in the middle of converting from Glassfish 2.1 to 3.1. So far I like the management of the Glassfish 3.1 cluster much better, but I can't personally vouch for how it will run in production, though I have high expectations.
http://download.oracle.com/docs/cd/E18930_01/html/821-2432/gktqx.html#gktob
Jim is right; the best solution currently is to use a cluster and perform a manual rolling upgrade.
But there is actually ongoing work to address your needs. We are working on a rolling-upgrade feature for a single standalone instance. To sum up in a nutshell (as the specifications have not been published yet), it will let you switch from one application version to another (see application versioning and the enable command) with no downtime. Stay tuned.