How to get cluster nodeId in Vert.x?

I want to get the nodeId of the current node in the cluster.
I didn't configure the cluster programmatically when I started Vert.x; I just provide a file called cluster.xml on my classpath, so I have no ClusterManager object.
I tried to find the right API on the Vertx object, but I couldn't find it.
So, how can I get the nodeId?

The cluster manager instance is not accessible from the Vert.x public API. But you can cast the Vert.x object to VertxInternal:
VertxInternal vertxInternal = (VertxInternal) vertx; // VertxInternal is internal API and may change between versions
ClusterManager clusterManager = vertxInternal.getClusterManager(); // null if this Vert.x instance is not clustered
String nodeId = clusterManager.getNodeID();
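Putting it together: a minimal sketch of the whole flow, assuming Vert.x 3.x with a cluster manager (e.g. Hazelcast) and cluster.xml on the classpath; the startup wiring around the cast is illustrative, not part of the answer above.
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.impl.VertxInternal;
import io.vertx.core.spi.cluster.ClusterManager;

public class NodeIdExample {
    public static void main(String[] args) {
        // cluster.xml on the classpath is picked up by the default cluster manager
        Vertx.clusteredVertx(new VertxOptions(), ar -> {
            if (ar.succeeded()) {
                VertxInternal internal = (VertxInternal) ar.result();
                ClusterManager cm = internal.getClusterManager();
                System.out.println("This node's id: " + cm.getNodeID());
            } else {
                ar.cause().printStackTrace();
            }
        });
    }
}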

Have you tried:
Config hazelcastConfig = ConfigUtil.loadConfig(); // load config from cluster.xml on the classpath
ClusterManager mgr = new HazelcastClusterManager(hazelcastConfig);
String nodeId = mgr.getNodeID(); // available once this manager has joined the cluster
Note that this creates a new cluster manager rather than fetching the one Vert.x is already using; to make Vert.x use it, pass it in via VertxOptions.setClusterManager(mgr) when starting clustered Vert.x.

Related

Configure Spring Data Redis to perform all operations via Elasticache configuration endpoint?

Description
Is it possible for Spring Data Redis to use Elasticache's configuration endpoint to perform all cluster operations (i.e., reading, writing, etc.)?
Long Description
I have a Spring Boot application that uses a Redis cluster as its data store. The Redis cluster is hosted on AWS Elasticache running with cluster mode enabled. The Elasticache cluster has 3 shards spread out over 12 nodes, and runs Redis version 6.0.
The service isn't correctly writing or retrieving data from the cluster. Whenever performing any of these operations, I get a message similar to the following:
io.lettuce.core.RedisCommandExecutionException: MOVED 16211 10.0.7.254:6379
From searching the internet, it appears that the service isn't correctly configured for a cluster. The fix seems to be to set the spring.redis.cluster.nodes property with a list of all the nodes in the Elasticache cluster (see here and here). I find this rather needless, considering that the Elasticache configuration endpoint is supposed to be used for all read and write operations (see the "Finding Endpoints for a Redis (Cluster Mode Enabled) Cluster" section here).
My question is this: can Spring Data Redis use Elasticache's configuration endpoint to perform all reads and writes, the way the AWS documentation describes? I'd rather not hand over a list of all the nodes if Spring Data Redis can use the configuration endpoint the way it's meant to be used. This seems like a serious limitation to me.
Thanks in advance!
Here is what I found works:
@Bean
public RedisConnectionFactory lettuceConnectionFactory()
{
    LettuceClientConfiguration config =
        LettucePoolingClientConfiguration
            .builder()
            // ... your configuration settings ...
            .build();
    RedisClusterConfiguration clusterConfig = new RedisClusterConfiguration();
    clusterConfig.addClusterNode(new RedisNode("xxx.v1tc03.clustercfg.use1.cache.amazonaws.com", 6379));
    return new LettuceConnectionFactory(clusterConfig, config);
}
where xxx is the name of your Elasticache cluster.
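If MOVED errors persist even though only the configuration endpoint is registered, it can also help to enable Lettuce's cluster topology refresh, so the client re-discovers the shard layout on its own from that single seed node. A minimal sketch of the same bean with refresh options added (the 30-second refresh interval is an arbitrary assumption, not something from the answer above):
import java.time.Duration;
import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;
import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.RedisNode;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Bean
public RedisConnectionFactory lettuceConnectionFactory() {
    // Re-scan the cluster topology periodically and on MOVED/ASK redirects,
    // so the configuration endpoint alone is enough as a seed node.
    ClusterTopologyRefreshOptions refreshOptions = ClusterTopologyRefreshOptions.builder()
            .enablePeriodicRefresh(Duration.ofSeconds(30))
            .enableAllAdaptiveRefreshTriggers()
            .build();
    LettuceClientConfiguration config = LettuceClientConfiguration.builder()
            .clientOptions(ClusterClientOptions.builder()
                    .topologyRefreshOptions(refreshOptions)
                    .build())
            .build();
    RedisClusterConfiguration clusterConfig = new RedisClusterConfiguration();
    clusterConfig.addClusterNode(new RedisNode("xxx.v1tc03.clustercfg.use1.cache.amazonaws.com", 6379));
    return new LettuceConnectionFactory(clusterConfig, config);
}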

Specify more than one endpoint in the Java S3 API for Ceph to connect to a Ceph cluster?

Hello everyone. I have just started to get my hands dirty with Ceph object storage, i.e. radosgateway, and for this purpose have spun up a very basic single-node ceph/daemon Docker container. It works perfectly fine for both s3cmd and the Java S3 API (the mgr dashboard doesn't work, though; the container shuts down when issuing the command ceph mgr module enable dashboard). But one thing I can't seem to figure out is: how can we specify more than one endpoint for our Java S3 client to connect to our cluster? Does it have something to do with HTTP front-ends? I need some pointers, or a sample example would be great. Following is my code to connect to a single-node Ceph cluster built using the ceph/daemon image's Docker container.
String accessKey = "demoKey";
String secretKey = "demoKey";
try {
    ClientConfiguration clientConfig = new ClientConfiguration();
    clientConfig.setProtocol(Protocol.HTTP);
    System.setProperty(SDKGlobalConfiguration.DISABLE_CERT_CHECKING_SYSTEM_PROPERTY, "true");
    if (SDKGlobalConfiguration.isCertCheckingDisabled()) {
        System.out.println("Cert checking is disabled");
    }
    AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
    AmazonS3 conn = new AmazonS3Client(credentials);
    conn.setEndpoint("http://ubuntu:8080"); // more than one endpoint ??
    List<Bucket> buckets = conn.listBuckets();
    for (Bucket bucket : buckets) {
        System.out.println(bucket.getName() + "\t" +
                StringUtils.fromDate(bucket.getCreationDate()));
    }
} catch (Exception ex) {
    ex.printStackTrace();
}
Finally, my Ceph version:
ceph version 14.2.4 nautilus (stable)
The Ceph Object Gateway can have multiple instances, which are combined by some load balancer. You have one endpoint that distributes the load onto the Ceph Object Gateway instances. The load balancer itself can be scaled as well (e.g., round-robin DNS or whatnot).
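In practice that means the Java client keeps pointing at a single load-balanced endpoint rather than a list of nodes. A minimal sketch, assuming a hypothetical load-balancer hostname rgw-lb.example.com in front of several radosgw instances:
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// The client only ever talks to the load balancer; spreading requests
// across the radosgw instances behind it is the balancer's job.
AmazonS3 conn = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(
                new BasicAWSCredentials("demoKey", "demoKey")))
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                "http://rgw-lb.example.com:8080", "us-east-1"))
        .withPathStyleAccessEnabled(true) // radosgw is commonly addressed path-style
        .build();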
I found a nice use case here. Maybe it helps. Have a look at the media storage architecture.

Deploy Lagom Framework as standalone jar/docker

Is it possible to deploy a Lagom Application as a standalone running jar or Docker Container?
And if yes, how?
Yes, it is possible to deploy a Lagom application as a standalone JAR/Docker container. In order to do this, you can follow these steps.
Configure Cassandra Contact Points: If you plan to use dynamic service location for your service but need to statically locate Cassandra (which is the usual case in production), then modify the application.conf of your service. Also, disable Lagom's ConfigSessionProvider and fall back to the one provided in akka-persistence-cassandra, which uses the list of endpoints given in contact-points. Your Cassandra configuration should look something like this:
cassandra.default {
  ## list the contact points here
  contact-points = ["127.0.0.1"]
  ## override Lagom’s ServiceLocator-based ConfigSessionProvider
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
}
cassandra-journal {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}
cassandra-snapshot-store {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}
lagom.persistence.read-side.cassandra {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}
Provide Kafka Broker Settings (if you are using the Kafka message broker): The next step is to provide Kafka broker settings if you plan to use Lagom's streaming service. For this, you need to modify the application.conf of your service if the Kafka service is to be statically located, which is the case when your service acts only as a consumer; otherwise, you do not need to provide the following configuration.
lagom.broker.kafka {
  service-name = ""
  brokers = "127.0.0.1:9092"
  client {
    default {
      failure-exponential-backoff {
        min = 3s
        max = 30s
        random-factor = 0.2
      }
    }
    producer = ${lagom.broker.kafka.client.default}
    producer.role = ""
    consumer {
      failure-exponential-backoff = ${lagom.broker.kafka.client.default.failure-exponential-backoff}
      offset-buffer = 100
      batching-size = 20
      batching-interval = 5 seconds
    }
  }
}
Create an Akka Cluster: At last, we need to form an Akka cluster on our own. Since we are not using ConductR, we have to implement the joining ourselves. This can be done by adding the following lines to application.conf (a programmatic alternative is sketched after this config block).
akka.cluster.seed-nodes = [
  "akka.tcp://MyService@host1:2552",
  "akka.tcp://MyService@host2:2552"]
Now that we know what configuration to provide to our service, let's take a look at the deployment steps. Since we are using just the java -cp command, we need to package our service and run it. To simplify the process, we have created a shell script for it.
For a complete example, you can refer to our GitHub repo - Lagom Scala SBT Standalone project.
I hope it helps!

JBoss 6 Cluster Node register & deregister listener in deployed application

I have a cluster of JBoss 6 AS running in domain mode, with an application deployed on it. My application needs a listener (callback) for when a new node becomes a member of the cluster and also for when one gets removed. Is there a way to get the member node list and to add such a listener?
The simplest way is to define a clustered cache in the configuration and get access to it from your code (see example). With the cache available, you can call cache.getCacheManager().addListener(Object) with a listener that listens for org.infinispan.notifications.cachemanagerlistener.annotation.ViewChanged. See the listener documentation for more info.
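A minimal sketch of such a listener, using the Infinispan annotations named above (class and method names here are illustrative):
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachemanagerlistener.annotation.ViewChanged;
import org.infinispan.notifications.cachemanagerlistener.event.ViewChangedEvent;

@Listener
public class ClusterMembershipListener {

    // Called whenever the cluster view changes, i.e. a node joins or leaves
    @ViewChanged
    public void onViewChanged(ViewChangedEvent event) {
        System.out.println("Old members: " + event.getOldMembers());
        System.out.println("New members: " + event.getNewMembers());
    }
}

// Registration, given a clustered cache obtained from the configuration:
// cache.getCacheManager().addListener(new ClusterMembershipListener());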

Quartz scheduler in cluster environment

I am using
SchedulerFactory schedulerFactory = new StdSchedulerFactory();
scheduler = schedulerFactory.getScheduler();
scheduler.start();
Trigger asapTrigger = getAsapTrigger();
JobDetail asapJob = getAsapJobDetails();
scheduler.scheduleJob(asapJob, asapTrigger);
This is working, but when I move to a clustered environment, 2 threads are running for the same job.
I am using annotations, not a properties file. I want to run only one thread. Can someone help with this? How do I configure it?
My code looks almost like: http://k2java.blogspot.com/2011/04/quartz.html
You have to configure Quartz to run in a clustered environment. Clustering currently only works with the JDBC jobstore, and works by having each node of the cluster share the same database.
Set the org.quartz.jobStore.isClustered property to true if you have multiple instances of Quartz that use the same set of database tables. This property is used to turn on the clustering features.
Set the org.quartz.jobStore.clusterCheckinInterval property (milliseconds), which is the frequency at which this instance checks in with the other instances of the cluster.
Set org.quartz.scheduler.instanceId to AUTO so that each node in the cluster gets a unique instanceId.
Please note that each instance in the cluster should use the same copy of the quartz.properties file. Furthermore, if you use clustering on separate machines, ensure that their clocks are synchronized.
For more information check the official documentation which contains a sample properties file for a clustered scheduler.
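Since the question mentions using annotations rather than a properties file, the same settings can also be passed programmatically to StdSchedulerFactory. A minimal sketch (the datasource name and JDBC settings are placeholder assumptions; point them at the database all nodes share):
import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.SchedulerFactory;
import org.quartz.impl.StdSchedulerFactory;

Properties props = new Properties();
props.setProperty("org.quartz.scheduler.instanceName", "ClusteredScheduler");
props.setProperty("org.quartz.scheduler.instanceId", "AUTO"); // unique id per node
props.setProperty("org.quartz.threadPool.threadCount", "5");
// Clustering requires the JDBC jobstore; all nodes share these tables
props.setProperty("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
props.setProperty("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
props.setProperty("org.quartz.jobStore.dataSource", "quartzDS");
props.setProperty("org.quartz.jobStore.isClustered", "true");
props.setProperty("org.quartz.jobStore.clusterCheckinInterval", "20000"); // ms
// Placeholder datasource; adjust driver/URL/credentials to your database
props.setProperty("org.quartz.dataSource.quartzDS.driver", "com.mysql.jdbc.Driver");
props.setProperty("org.quartz.dataSource.quartzDS.URL", "jdbc:mysql://dbhost:3306/quartz");
props.setProperty("org.quartz.dataSource.quartzDS.user", "quartz");
props.setProperty("org.quartz.dataSource.quartzDS.password", "secret");

SchedulerFactory schedulerFactory = new StdSchedulerFactory(props);
Scheduler scheduler = schedulerFactory.getScheduler();
scheduler.start();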