Access the clustered web console - activemq-artemis

I'm using ActiveMQ Artemis in a master/slave cluster with HA. I can access the web console through the host of each master, but when I go through the VIP that fronts those hosts, the console keeps prompting me to log in, in a loop.
Here is an image of the topology that is currently working.
Only access to the console through the VIP does not work. I use the same topology for ActiveMQ and there it works; the load balancer sends me to the master host.
Is it possible to access the ActiveMQ Artemis web console through a single point?

Related

Accessing Kafka in Kubernetes from SvelteKit

I am building a SvelteKit application with Kafka support. I have created a kafka.js file and tested the Kafka support with a local Kafka setup, which was successful. Now, when I replaced the topic name with a topic that is running in the Kafka of our Kubernetes cluster, I am not seeing any response.
How do I test the connection being established between Kafka in the Kubernetes cluster and the JS web application? Any hints would be much appreciated. Console logging alone has not helped so far, because Kafka itself is not being reached.
Any two pods deployed in the same namespace can communicate using local service names. So, do the brokers have a Service resource?
For example, assuming you are in namespace default and you have a Service named kafka-svc, then you'd set bootstrap.servers to kafka-svc.default.svc.cluster.local:<service-port>.
You also need to configure Kafka's advertised.listeners. Related blog - https://strimzi.io/blog/2019/04/17/accessing-kafka-part-1/
But this requires a Node.js (or other language) backend, not the SvelteKit UI itself: browser code cannot open a raw TCP connection to Kafka, it can only talk to some HTTP bridge. Your HTTP options include a custom server, the Confluent REST Proxy, the Strimzi Kafka Bridge, etc. But you were already told this.
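As a hedged sketch of the in-cluster addressing step (the Service name kafka-svc, the default namespace, and port 9092 are assumptions, not taken from the question), the bootstrap address a server-side client such as kafkajs would use can be built like this:

```typescript
// Build the fully qualified in-cluster DNS name for a Kubernetes Service.
// This address only resolves from inside the cluster, which is exactly why
// a server-side backend (not the SvelteKit browser UI) must be the one dialing it.
function clusterBootstrap(service: string, namespace: string, port: number): string {
  return `${service}.${namespace}.svc.cluster.local:${port}`;
}

// Hypothetical names: a Service "kafka-svc" in namespace "default" on port 9092.
console.log(clusterBootstrap("kafka-svc", "default", 9092));
// → kafka-svc.default.svc.cluster.local:9092
```

A server-side client would then pass this string as one entry in its broker list, e.g. kafkajs's `new Kafka({ brokers: [...] })`.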

Exposing Kafka brokers using Google click to deploy environment

I have used the Kafka cluster (with replication) click-to-deploy container from Google on Kubernetes. How do I expose the brokers so I can consume from an external consumer? I'm very new to Kubernetes.
I have tried exposing the broker nodes with a load balancer, but connecting via the external IP it was given does not work.
I opened the port with a firewall rule.
But when connecting from my consumer, it throws an error about being disconnected.
Any help would be great, I can provide more info if asked.
You cannot simply put one load balancer in front of the whole cluster: after the initial bootstrap, Kafka clients must talk directly to individual brokers.
The firewall rule is a good step, but you also need to ensure each broker is exposed properly via advertised.listeners, both within and outside of the VPC. Refer to the blog post.
Alternatively, within GKE, you can run the Strimzi Operator, which manages the Kubernetes resources for the Kafka cluster for you.
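A minimal sketch of what advertised.listeners has to achieve, hedged since I can't see the deployment: one listener for in-cluster traffic and one for external clients, where every address below is a placeholder and each broker needs its own externally reachable address:

```properties
# server.properties sketch — listener names and addresses are placeholders
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9094
advertised.listeners=INTERNAL://broker-0.kafka-headless:9092,EXTERNAL://<external-ip-of-broker-0>:9094
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```

If a client connects to the EXTERNAL port but the broker advertises an address the client cannot reach, you get exactly the "connected, then disconnected" symptom described above.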

How to expand confluent cloud kafka cluster?

I have set up a Confluent Cloud multi-zone cluster, and it was created with just one bootstrap server. There was no setting for choosing the number of servers while creating the cluster, and even after creation I can't edit the number of bootstrap servers.
I want to know how to increase the number of servers in a Confluent Cloud Kafka cluster.
Under the hood, the Confluent Cloud cluster is already running multiple brokers. Depending on your cluster configuration (specifically, whether you're running Standard or Dedicated, and what region and cloud you're in), the cluster will have between six and several dozen brokers.
The way a Kafka client's bootstrap.servers config works is that the client reaches out to a bootstrap server and requests a list of all brokers, then uses those broker endpoints to actually produce to and consume from Kafka (reference: https://jaceklaskowski.gitbooks.io/apache-kafka/content/kafka-properties-bootstrap-servers.html)
In Confluent Cloud, the provided bootstrap server is actually a load balancer in front of all of the brokers; when the client connects to the bootstrap server it'll receive the actual endpoints for all of the actual brokers, and then use that for subsequent connections.
So TL;DR, in your client, you only need to specify the one bootstrap server; under the hood, the Kafka client will connect to the (many) brokers running in Confluent Cloud, and it should all just work.
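For example, a client configuration needs only that single bootstrap endpoint plus credentials; the endpoint below is a made-up placeholder, and Confluent Cloud additionally requires SASL_SSL authentication:

```properties
# client.properties sketch — endpoint and credentials are placeholders
bootstrap.servers=pkc-xxxxx.us-east-1.aws.confluent.cloud:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<API_KEY>" password="<API_SECRET>";
```

The client discovers all the actual brokers from this one endpoint at startup; there is nothing to scale on the bootstrap side.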
Source: I work at Confluent.

Having Kafka connected with ip as well as service name - Openshift

In our OpenShift ecosystem, we have a Kafka instance based on the wurstmeister/kafka image. As of now I am able to have Kafka accessible within the OpenShift system using the below parameters,
KAFKA_LISTENERS=PLAINTEXT://:9092
KAFKA_ADVERTISED_HOST_NAME=kafka_service_name
And of course, the parameters for the port and ZooKeeper are there.
I am able to access Kafka from the pods within the OpenShift system, but I am unable to access the Kafka service from the host machine, even though I can reach the Kafka pod by its IP and can telnet to it using telnet Pod_IP 9092.
When I am trying to connect using the kafka producer from the host machine, I am getting the below error,
[2017-08-07 07:45:13,925] WARN Error while fetching metadata with correlation id 2 : {tls21=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
And when I try to connect from a Kafka consumer on the host machine using the IP, the output is blank.
Note: As of now, its a single openshift server. And the use case is for dev testing.
Maybe you want to take a look at this POC for running Kafka on OpenShift:
https://github.com/EnMasseProject/barnabas
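One likely fix, hedged since the full setup isn't visible here: the wurstmeister/kafka image lets you define two listeners via environment variables, so the broker advertises the Service name inside the cluster and a host-reachable address outside it (the OUTSIDE address and port below are placeholders):

```properties
KAFKA_LISTENERS=INSIDE://:9092,OUTSIDE://:9094
KAFKA_ADVERTISED_LISTENERS=INSIDE://kafka_service_name:9092,OUTSIDE://<host-reachable-ip>:9094
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME=INSIDE
```

With a single advertised host name of kafka_service_name, the host-machine client can complete the TCP connect (hence telnet works) but then receives an address it cannot resolve, which matches the LEADER_NOT_AVAILABLE error above.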

Apache Kafka consumer groups and microservices running on Kubernetes, are they compatible?

So far, I have been using Spring Boot apps (with Spring Cloud Stream) and Kafka running without any supporting infrastructure (PaaS).
Since our corporate platform is running on Kubernetes we need to move those Spring Boot apps into K8s to allow the apps to scale and so on. Obviously there will be more than one instance of every application so we will define a consumer group per application to ensure the unique delivery and processing of every message.
Kafka will be running outside Kubernetes.
Now my doubt is: since the apps deployed on K8s are accessed through a K8s Service that abstracts the underlying pods, and individual application pods can't be accessed directly from outside the K8s cluster, Kafka won't know how to call individual instances of the consumer group to deliver the messages, will it?
How can I make them work together?
Kafka brokers do not push data to clients. Rather, clients poll() and pull data from the brokers. As long as the consumers can connect to the bootstrap servers, and you set the Kafka brokers to advertise an IP and port that the clients can connect to and poll(), it will all work fine.
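The "unique delivery" the question relies on comes from partition ownership: within one consumer group, each partition is assigned to exactly one consumer instance. A simplified round-robin sketch of that assignment (the real broker-side assignors also handle rebalances and stickiness):

```typescript
// Simplified round-robin partition assignment within one consumer group.
// Every partition is owned by exactly one consumer, so every message
// (which lives in exactly one partition) is processed by exactly one pod.
function assignPartitions(partitionCount: number, consumers: string[]): Map<string, number[]> {
  const assignment = new Map<string, number[]>(
    consumers.map((c): [string, number[]] => [c, []])
  );
  for (let p = 0; p < partitionCount; p++) {
    assignment.get(consumers[p % consumers.length])!.push(p);
  }
  return assignment;
}

// Three pods of the same app sharing one group.id, topic with 6 partitions:
console.log(assignPartitions(6, ["pod-a", "pod-b", "pod-c"]));
// → pod-a: [0, 3], pod-b: [1, 4], pod-c: [2, 5]
```

This is why the K8s Service abstraction does not matter for delivery: each pod pulls from its own partitions, and the group rebalances when pods are added or removed.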
Can Spring Cloud Data Flow solve your requirement to control the number of instances deployed?
And there is a community-released Spring Cloud Data Flow server for OpenShift:
https://github.com/donovanmuller/spring-cloud-dataflow-server-openshift