I'm trying to deploy an Infinispan cluster (2 machines) in domain mode, but I can't find any working example of the domain.xml and host.xml config files.
This cluster would be used by Keycloak as a cache server.
Has anyone here already worked on this?
You need to download Infinispan 9.4.14 (or any 9.4) and start the bin/domain.sh [bat] script.
That's it: you have a running domain with two servers.
If you want to add a second machine, you need to copy the server distribution to it and start the domain script passing "--host-config=host-slave.xml". You also need to set "jboss.domain.master.address" with "-D" so the process knows where the domain master is.
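For example, on the second machine (the master address below is a placeholder for your first machine's IP):
bin/domain.sh --host-config=host-slave.xml -Djboss.domain.master.address=192.168.0.10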
Another option is to rename host-slave.xml to host.xml and edit the domain-controller discovery-options.
More information can be found here: http://infinispan.org/docs/stable/server_guide/server_guide.html#domain_mode
I have three managed servers running in a WebLogic domain. Now I need to configure Node Manager so that I can stop and start each of the managed servers.
My question is: do I need to define a separate 'Machine' and 'Node Manager port' for each of the managed servers, or can a single 'Machine' and 'Node Manager port' combination be used to start/stop multiple managed servers?
Thanks in advance
Yes, it is possible, but the configuration depends on how your Machines are distributed across your hosts, which determines whether you need to use different ports etc. Oracle provides quite a detailed tutorial on this; its contents are too extensive to replicate on SO.
I recommend you follow the tutorial and then post any specific questions you may have as a new question.
Step 1: Start the WebLogic server, open the WebLogic console in a browser, and log in with the correct credentials.
Step 2: Expand Environment and click the Machines link.
Step 3: Click New to add a new machine and enter the machine name, then click Next.
Step 4: Enter the Listen Address (server IP) and the port on which Node Manager will run, then click Finish.
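The result is a machine entry in the domain's config.xml along these lines (the name, address, and Plain node manager type below are placeholders for whatever you entered):
<machine>
  <name>Machine-1</name>
  <node-manager>
    <nm-type>Plain</nm-type>
    <listen-address>192.168.0.20</listen-address>
    <listen-port>5556</listen-port>
  </node-manager>
</machine>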
See the link below for a more detailed walkthrough:
https://fi-sm.com/blog/how-to-add-new-server-on-admin-server-in-web-logic-12c-server/
I want to manage the servers in our staging pipeline with PowerShell DSC (push model). The servers map to the environments as follows:
Development: 1 server
Test: 2 servers
UAT: 2 servers
Production: 2 servers
The servers within one environment have the same configuration, but the configuration differs between environments. I wanted to go with the push model because I do not have to set up a pull server.
PowerShell DSC offers the option to manage the configuration via configuration data in a separate file. But this comes with the caveat that you need to specify a node name that matches the respective server name. That means I need to copy the configuration data for each server in one environment, and when changing the configuration I need to remember that there is a second place where I need to update the value.
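To illustrate the caveat, configuration data bound to node names looks roughly like this (the server names are made up); every server in the environment repeats the same settings:
$configData = @{
    AllNodes = @(
        @{ NodeName = 'TESTSRV01'; WebsitePort = 8080 }
        @{ NodeName = 'TESTSRV02'; WebsitePort = 8080 }   # identical settings, duplicated per server
    )
}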
Additionally, I do not really care about the server names. If the servers are exchanged tomorrow for new ones, whatever configuration is relevant to the environment should simply be applied.
What is the best practice approach to manage multiple servers within one environment with the same configuration?
Check these links; I think they cover the scenario:
Using A Single DSC Configuration for Multiple Servers
DSC ConfigurationNames with multiple nodes
The MOF file that gets produced does not contain the node name inside it. So as long as you build a generic configuration, you can rename it after the fact at deploy time.
You can create one config for each environment with some generic name, then enumerate the list of servers and make a copy of the config for each one with that server's name.
You can take it a step further: have a share with a folder for each server, named after the server. Copy the MOF for that server into its folder under the name localhost.mof. You can then run Start-DscConfiguration -Path \\server\share\$env:computername from that machine as part of your deployment script, as sketched below.
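A minimal sketch of that approach (the share path, feature, and server names are assumptions, not part of the original answer):

# Generic configuration compiled once per environment; the node is just 'localhost'
Configuration EnvConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node 'localhost' {
        WindowsFeature WebServer {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}
EnvConfig -OutputPath C:\dsc\test          # produces C:\dsc\test\localhost.mof

# Fan the same MOF out to one folder per server on the share
$servers = 'TESTSRV01', 'TESTSRV02'
foreach ($s in $servers) {
    New-Item -ItemType Directory -Path "\\deploy\dsc\$s" -Force | Out-Null
    Copy-Item C:\dsc\test\localhost.mof -Destination "\\deploy\dsc\$s\localhost.mof"
}

# On each target server, as part of the deployment script:
Start-DscConfiguration -Path "\\deploy\dsc\$env:COMPUTERNAME" -Wait -Verbose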
I'm trying to set up a cell and a collective in a WAS for Bluemix service. I've found a few steps online for generic Liberty setup, but nothing specific to a Bluemix collective or cell. Can someone point me in the right direction?
At a high level, you should be able to do the following for a Cell:
Login to the Admin Console as wsadmin
Create a server.
Open all the ports on each host for each server created by running the openFirewallPorts.sh script. Below you will find the standard ports for a new server, given that only one server exists on each host. You may need to open more ports for additional servers on the same host since ports can be unique per server. Try the following:
cd WAS_HOME/virtual/bin
export serverPorts=2810:TCP,2810:UDP,8880:TCP,8880:UDP,9101:TCP,9101:UDP,9061:TCP,9061:UDP,9080:TCP,9080:UDP,9354:TCP,9354:UDP,9044:TCP,9044:UDP,9443:TCP,9443:UDP,5060:TCP,5060:UDP,5061:TCP,5061:UDP,11005:TCP,11005:UDP,11007:TCP,11007:UDP,9633:TCP,9633:UDP,7276:TCP,7276:UDP,7286:TCP,7286:UDP,5558:TCP,5558:UDP,5578:TCP,5578:UDP
sudo ./openFirewallPorts.sh -ports $serverPorts -persist true
Start your server.
Deploy your application.
There are a few slight differences for a Liberty Collective, but again, at a high level, you should be able to try the following:
Switch your user to wsadmin or ssh to your host using wsadmin / password
On each host, create a server and join it to the collective. Be sure to use the full host name of the controller for the --host parameter.
cd WAS_HOME/bin
./server create server
./collective join server --host=yourhostname --port=9443 --user=wsadmin --password=xxxxxxxx --keystorePassword=yyyyyyyy
Accept the chain certificate (y/n) y
Save the output from each join so you can paste it into each host's application server.xml file before deploying your application.
Install the features required by your application on each host. The features listed below are an example.
cd /opt/IBM/WebSphere/Liberty/bin
./featureManager install --acceptLicense ejblite-3.2 websocket-1.0 jsp-2.3 jdbc-4.1 jaxrs-2.0 cdi-1.2 beanValidation-1.1
NOTE: Output from this command will contain messages similar to:
chmod: changing permissions of `/opt/IBM/WebSphere/Liberty/bin/featureManager': Operation not permitted
This is OK. You should see this message upon completion:
Product validation completed successfully.
Update your application's server.xml file with the information saved in Step 2.
Start your server.
Deploy your application.
Verify your application is reachable at http://yourhostname:9080/appname (substitute your host's name).
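A quick check from the command line (the hostname is a placeholder):
curl -I http://yourhostname:9080/appname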
Does anyone know how to deactivate the automatic clustering in JBoss 5.1.0?
We have a JBoss instance running on each developer machine, and because we are all on the same network, they auto-cluster with each other. The problem could be solved if each of us got their own multicast IP, but the network hardware is not capable of that.
Isn't there a switch in JBoss to deactivate this?
Under Eclipse on Windows, you can run the server using the following JVM property (see Open Launch Configuration):
-Djboss.partition.name=${env_var:COMPUTERNAME}
This way each developer machine will have its own cluster (with a single server, if you run only one server). Under Linux, you will need to replace COMPUTERNAME with HOSTNAME.
If you run JBoss AS from the command line, you would use something like -Djboss.partition.name=%COMPUTERNAME% under Windows (not tested).
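For example, from the server's bin directory (the clustered 'all' configuration is an assumption; use whichever configuration you actually run):
run.bat -c all -Djboss.partition.name=%COMPUTERNAME%
and under Linux:
./run.sh -c all -Djboss.partition.name=$HOSTNAME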
Note that using -Djgroups.udp.ip_ttl=0 (as proposed in another answer) has the following drawbacks:
server startup is slower (4 minutes instead of 1 minute in my case);
there are a lot of NAKACK warn/error logs;
the JGroups UDP multicast is limited to the local machine, which could conflict with other applications based on JGroups UDP;
other servers on the same machine with the same configuration will be in the same cluster, which may not be desired.
You can use a different multicast address or partition name to avoid the conflict.
However, if you want to disable clustering in the "production" or "all" configuration, you need to take the following actions:
Remove
farm/
deploy-hasingleton/
deploy/cluster/
In deploy/messaging/*-persistence-service.xml, change Clustered to false:
<attribute name="Clustered">false</attribute>
and remove
<depends optional-attribute-name="ChannelFactoryName">jboss.jgroups:service=ChannelFactory</depends>
In conf/bootstrap/profile.xml, replace
<bean name="BootstrapProfileFactory" class="org.jboss.system.server.profileservice.StaticClusteredProfileFactory">
with
<bean name="BootstrapProfileFactory" class="org.jboss.system.server.profileservice.repository.StaticProfileFactory">
and remove the "farmURIs" property a few lines below that.
Replace deploy/httpha-invoker.sar with http-invoker.sar from the default profile
In the deployers/clustering-deployer-jboss-beans.xml, comment out WebAppClusteringDependencyDeployer.
In SOA-P, if you are removing clustering, you will need to take a few additional steps.
Copy server/default/deploy/jbpm.esb/hibernate.cfg.xml to server/<your-configuration>/deploy/jbpm.esb/hibernate.cfg.xml
Remove server/<your-configuration>/deploy/riftsaw* and cp -R server/default/deploy/riftsaw* server/<your-configuration>/deploy/
You can do this by setting the TTL (time-to-live) on the multicast packets to zero. Clustering will still be enabled, but none of the JBoss servers running on the developer machines will be able to locate each other.
When starting JBoss, set the jgroups.udp.ip_ttl system property, e.g.
-Djgroups.udp.ip_ttl=0
You'll need to hack that into the JBoss startup script, most likely.
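For example (again assuming the 'all' configuration):
./run.sh -c all -Djgroups.udp.ip_ttl=0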
I found a couple of tutorials on how to run multiple instances of JBoss on the same machine.
All of them mention uncommenting the Service Binder and having separate service-binding.xml files for each server.
The question is: why is it done like that? Is there any reason beyond adding an additional layer of indirection?
It looks like the same could be done by modifying the ports in jboss-service.xml for each server. The only restriction would be that there won't be an easy way to switch which instance of JBoss uses which set of ports.
You are right about modifying the ports in jboss-service.xml; that is the straightforward and genuine way to change the ports.
Unfortunately, ports are not only defined in that file, but also in other places like jboss-web's configuration etc.
Catching all those places can be error prone.
So the idea was to have a central file (service-binding.xml) that lives in the root of a server installation. You basically copy the 'default' config to server1, server2, etc., and then pass the server name on the command line when starting, so that the correct port offset for all of the services is taken from service-binding.xml and applied to the resulting runtime configuration.
JBoss AS 7 takes this concept one step further with socket binding groups: the base ports are defined at the domain level, and per server you pick a binding group by name plus just a port offset, so even less work is needed than in AS 4.
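In AS 7 domain mode that looks roughly like this host.xml fragment (server and group names follow the defaults AS 7 ships with):
<servers>
    <server name="server-one" group="main-server-group"/>
    <server name="server-two" group="main-server-group">
        <!-- same socket bindings as server-one, shifted by a fixed offset -->
        <socket-bindings port-offset="150"/>
    </server>
</servers>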