I'm writing an app that uses Hazelcast. My app is really slow to start because Hazelcast tries to talk to other nodes on the network on startup. This is fine for production, but when I'm doing single-node debugging it really slows down my edit-compile-run-debug cycle.
Is there a Hazelcast setting that tells it that there's only one node so please start quickly and don't bother pinging the rest of the network?
You can disable the join (discovery) mechanisms in Hazelcast's network configuration:
<join>
    <multicast enabled="false"/>
    <tcp-ip enabled="false"/>
    <aws enabled="false"/>
</join>
This should save you at least a few seconds while debugging. You can also pin Hazelcast to the loopback interface so it doesn't probe your other network interfaces:
System.setProperty("hazelcast.local.localAddress", "127.0.0.1");
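For reference, a complete single-node network section in hazelcast.xml might look like the sketch below. Element names follow the Hazelcast 3.x configuration schema; the port and interface values are only examples for local debugging:

```xml
<hazelcast>
    <network>
        <!-- Fixed port, no auto-increment scanning for free ports -->
        <port auto-increment="false">5701</port>
        <join>
            <!-- Disable every discovery mechanism so startup never probes the network -->
            <multicast enabled="false"/>
            <tcp-ip enabled="false"/>
            <aws enabled="false"/>
        </join>
        <interfaces enabled="true">
            <!-- Bind only to loopback for single-node debugging -->
            <interface>127.0.0.1</interface>
        </interfaces>
    </network>
</hazelcast>
```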
Based on the documentation:
Domain Clustered Mode:
Domain mode is a way to centrally manage and publish the configuration for your servers.
Running a cluster in standard mode can quickly become aggravating as
the cluster grows in size. Every time you need to make a configuration
change, you have to perform it on each node in the cluster. Domain
mode solves this problem by providing a central place to store and
publish configurations. It can be quite complex to set up, but it is
worth it in the end. This capability is built into the WildFly
Application Server which Keycloak derives from.
I tried the example setup from the user manual, and it really does simplify the maintenance of multiple configurations.
However, as far as High Availability is concerned, this is not quite resilient: when the master node goes down, the Auth Server will stop functioning, since all the slave nodes listen to the domain controller.
Is my understanding correct here? Or am I missing something?
If this is the case, to ensure High Availability then Standalone-HA is the way to go, right?
WildFly node management and clustering are orthogonal features.
Clustering in Keycloak is in fact just cache replication (all kinds of sessions, login failures, etc.). So if you want fault tolerance for your sessions, you just have to configure cache replication properly (and usually node discovery as well), and to do that you can simply make the owners parameter greater than 1:
<distributed-cache name="sessions" owners="2"/>
<distributed-cache name="authenticationSessions" owners="2"/>
<distributed-cache name="offlineSessions" owners="2"/>
<distributed-cache name="clientSessions" owners="2"/>
<distributed-cache name="offlineClientSessions" owners="2"/>
<distributed-cache name="loginFailures" owners="1"/>
<distributed-cache name="actionTokens" owners="2"/>
Now any new session initiated on the first node will be replicated to another node, so if the first node goes down the end user can be served by another node. For example, you can have three nodes in total and require that at least two replicas of each session be distributed among them.
Now if we compare domain mode with HA mode, the difference is really just how the JBoss/WildFly server configuration is delivered to the target node. In HA mode each node ships with its own configuration; in domain mode the configuration is fetched from the domain controller.
I suggest achieving replication with HA mode first, and then moving to domain mode if required. Also, given the modern approach of containerizing everything, HA mode is more appropriate for containerization: parameterized clustering settings can be injected during the container build, with the ability to alter them at runtime via the environment (e.g. the owners parameter could be derived from a container environment variable).
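As a sketch of that last idea (assuming WildFly's `${...}` expression resolution is available for this attribute, and a hypothetical CACHE_OWNERS environment variable), the cache definition could read:

```xml
<!-- Sketch: CACHE_OWNERS is a hypothetical environment variable; the
     expression resolves to its value, falling back to 2 if it is unset. -->
<distributed-cache name="sessions" owners="${env.CACHE_OWNERS:2}"/>
```

This keeps a single standalone-ha.xml in the container image while letting each deployment tune replication at runtime.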
There have been some articles about clustering on the Keycloak blog, like this.
Also, I suggest checking out the Keycloak Docker container image repository here.
I need to monitor a JDK 8 JVM on a production Red Hat 6 Linux server that allows me no direct access. I'm able to connect to the JMX port using VisualVM.
I can see the Monitor tab: CPU usage; heap and metaspace; loaded and unloaded classes; total number of threads.
However, I can't see dynamic data on the Threads tab. It shows me total number of threads and daemon threads, but no real time status information by thread name.
I don't know how to advise the owners of the server how to configure the JDK so I can see dynamic data on the Threads tab.
It works fine on my local machine. I can see the status of every thread by name, with color coded state information, if I point VisualVM to a Java process running locally.
What JVM setting makes dynamic thread data available on the JMX port?
Update:
I should point out that I'm using JBoss 6.x.
If I look at my local JBoss 6.x standalone.xml configuration, I see the following subsystem entry for JMX:
<subsystem xmlns="urn:jboss:domain:jmx:1.3">
<expose-resolved-model/>
<expose-expression-model/>
<remoting-connector/>
</subsystem>
I can see all dynamic thread information when running on my local machine.
I'm asking the owners of the production instance if the standalone.xml includes this subsystem. I'm hoping they will say that theirs is different. If it is, perhaps modifying the XML will make the data I need available.
Is it possible to configure the startup order when starting up the services?
Service1 has to be running before Service2 can be started.
Clarification:
I didn't mean microservices when I mentioned Service; I meant stateless services like a REST API (Service1) and a WebSocket service (Service2).
So when the solution is deployed, the WebSocket service (Service2) must be up and running before the REST API (Service1)?
Of course you can, because you control when services are created. It's not immediately obvious if you've only ever deployed applications through Visual Studio, because Visual Studio sets you up with Default Services. This is what you see in ApplicationManifest.xml when you create an application through Visual Studio:
<DefaultServices>
<Service Name="Stateless1">
<StatelessService ServiceTypeName="Stateless1Type" InstanceCount="[Stateless1_InstanceCount]">
<SingletonPartition />
</StatelessService>
</Service>
<Service Name="Stateful1">
<StatefulService ServiceTypeName="Stateful1Type" TargetReplicaSetSize="[Stateful1_TargetReplicaSetSize]" MinReplicaSetSize="[Stateful1_MinReplicaSetSize]">
<UniformInt64Partition PartitionCount="[Stateful1_PartitionCount]" LowKey="-9223372036854775808" HighKey="9223372036854775807" />
</StatefulService>
</Service>
</DefaultServices>
This is a nice convenience when you know you always want certain services created a certain way each time you create an application instance. You can define them declaratively here and Service Fabric will create them whenever you create an instance of the application.
But it has some drawbacks. Most notably, in your case, is that you have no control over the order in which the services are created.
It also hides some of the concepts around application and service types and application and service instances, which again can be convenient until you want to do something more advanced, like in your case.
When you "deploy" an application, there are actually several steps:
Create the application package
Copy the package up to the cluster
Register the application type and version
Create an instance of the registered application type and version
Create instances of each registered service type in that application
With Default Services, you skip step 5 because Service Fabric does it for you. Without Default Services though, you get to create your service instances yourself, so you can determine what order to do it in. You can do other things like check if a service is ready before creating the next one. All of these actions are available in Service Fabric's C# SDK and PowerShell cmdlets. Here's a quick PowerShell example:
# Step 2: copy the package up to the cluster's image store
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath C:\temp\MyApp -ImageStoreConnectionString fabric:ImageStore -ApplicationPackagePathInImageStore MyApp
# Step 3: register the application type and version
Register-ServiceFabricApplicationType MyApp
# Step 4: create an instance of the registered application type
New-ServiceFabricApplication -ApplicationName fabric:/MyAppInstance -ApplicationTypeName MyApp -ApplicationTypeVersion 1.0
# Step 5: create the service instances, in whatever order you need
New-ServiceFabricService -ApplicationName fabric:/MyAppInstance -InstanceCount 1 -PartitionSchemeSingleton -ServiceName fabric:/MyAppInstance/MyStatelessService -ServiceTypeName MyStatelessService -Stateless
New-ServiceFabricService -ApplicationName fabric:/MyAppInstance -MinReplicaSetSize 2 -PartitionSchemeSingleton -ServiceName fabric:/MyAppInstance/MyStatefulService -ServiceTypeName MyStatefulServiceType -Stateful -TargetReplicaSetSize 3
Of course, this just applies to creating the service instances. When it comes time to upgrading your services, the "upgrade unit" is actually the application, so you can't pick the order in which services within an application get upgraded, at least not in one single upgrade. You can, however, choose which services get upgraded during an application upgrade, so if you have the same ordering dependency, you can accomplish that by doing two separate application upgrades.
So you get some level of control. But, it's really best if your services are resilient to missing dependent services, because there will likely be times when a service is unavailable for one reason or another.
Edit:
I showed you a lot of PowerShell but I should mention the C# APIs are also just as powerful. You have the full set of management tools at your disposal in C#. For example, you can have a service that creates and manages other services. In your case, if Service A depends on Service B, then you can have Service A create an instance of Service B before Service A itself does any work, and throughout Service A's life it can keep an eye on Service B. Here's an example of a service that creates other applications and services: https://github.com/Azure-Samples/service-fabric-dotnet-management-party-cluster/blob/master/src/PartyCluster.ApplicationDeployService/FabricClientApplicationOperator.cs
In the Service Fabric world this is called Service Affinity (https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-resource-manager-advanced-placement-rules-affinity/).
Anyway, try to avoid these situations in a microservices world.
Good luck.
I started out with a single mongo instance as my database, configured in spring xml:
<mongo:mongo host="localhost" port="27017" />
Recently, I changed my configuration to use a 3 node replica set, configured as:
<mongo:mongo replica-set="${my.replica.set}" />
Everything works great.
My current problem is that for my dev environment I'd like to use the single localhost mongo config, while for the int and prod environments I'd like to use the replica set config. The differing values I will handle via properties files; the question is about the mongo config itself.
Something along the lines of the example below would be ideal...
<mongo:mongo uri="localhost:27017" />
<mongo:mongo uri="localhost:27017,localhost:27018" />
I ran into this example: spring-boot uriCanBeCustomized unitTest
Is there a way to do this in spring config?
I am using spring-data-mongodb-1.7.0.RELEASE.
It looks like the replica set configuration works even if you point it at a standalone mongod. I assumed that wouldn't work, since it specifically sets replica-set, but testing shows that it does.
So in my case, the configuration would just look like
<mongo:mongo replica-set="${mongodbs}" />
where in the dev properties file I'd have
mongodbs=localhost:27017
and for int/prod properties
mongodbs=host1:port1,host2:port2,host3:port3
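An alternative sketch, assuming Spring 3.1+ bean-definition profiles, keeps both definitions in one context file and selects one via spring.profiles.active (note that nested beans elements must come at the end of the configuration file):

```xml
<!-- Sketch: activate with -Dspring.profiles.active=dev (or prod).
     Profile names and the my.replica.set property are illustrative. -->
<beans profile="dev">
    <mongo:mongo host="localhost" port="27017"/>
</beans>
<beans profile="prod">
    <mongo:mongo replica-set="${my.replica.set}"/>
</beans>
```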
When we deployed our Java app using Spring Data + MongoDB, we noticed that the CPU spiked to 100%. After removing the mongo configuration it went back to normal. The application is literally doing nothing with Mongo; we merely have it added to the context file:
<mongo:mongo id="mongo" host="127.0.0.1" port="27017" />
Any ideas what could be causing this? The cpu slowly gets eaten up until it reaches 150%.