Is there any way to configure JNDI so that the lookup first checks localhost and, if it doesn't find a matching name, performs automatic discovery of other JNDI servers?
My understanding of the documentation is that this is the default behavior when using clustering:
16.2.2. Client configuration
The JNDI client needs to be aware of
the HA-JNDI cluster. You can pass a
list of JNDI servers (i.e., the nodes
in the HA-JNDI cluster) to the
java.naming.provider.url JNDI
setting in the jndi.properties file.
Each server node is identified by its
IP address and the JNDI port number.
The server nodes are separated by
commas (see Section 16.2.3, “JBoss
configuration” on how to configure the
servers and ports).
java.naming.provider.url=server1:1100,server2:1100,server3:1100,server4:1100
When initialising, the JNP client code
will try to get in touch with each
server node from the list, one after
the other, stopping as soon as one
server has been reached. It will then
download the HA-JNDI stub from this
node.
Note - There is no load balancing behavior in the JNP client lookup
process. It just goes through the
provider list and uses the first
available server. The HA-JNDI provider
list only needs to contain a subset of
HA-JNDI nodes in the cluster.
The downloaded smart stub contains the
logic to fail-over to another node if
necessary and the updated list of
currently running nodes. Furthermore,
each time a JNDI invocation is made to
the server, the list of targets in the
stub interceptor is updated (only if
the list has changed since the last
call).
If the property string java.naming.provider.url is empty or
if all servers it mentions are not
reachable, the JNP client will try to
discover a bootstrap HA-JNDI server
through a multicast call on the
network (auto-discovery). See
Section 16.2.3, “JBoss
configuration” on how to configure
auto-discovery on the JNDI server
nodes. Through auto-discovery, the
client might be able to get a valid
HA-JNDI server node without any
configuration. Of course, for the
auto-discovery to work, the client
must reside in the same LAN as the
server cluster (e.g., the web servlets
using the EJB servers). The LAN or WAN
must also be configured to propagate
such multicast datagrams.
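A minimal client-side sketch of that setup, assuming the standard JNP client factory (org.jnp.interfaces.NamingContextFactory) shipped with JBoss and the default HA-JNDI port 1100; the JNDI name used here is hypothetical:

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

public class LocalFirstLookup {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        // JNP client classes ship with JBoss (assumption: jnp-client/jbossall-client on the classpath)
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
        // List only the local HA-JNDI port: if localhost:1100 cannot be reached,
        // the JNP client falls back to multicast auto-discovery as described above.
        env.put(Context.PROVIDER_URL, "localhost:1100");

        Context ctx = new InitialContext(env);
        Object bean = ctx.lookup("SomeBeanJndiName"); // hypothetical JNDI name
        System.out.println("Resolved: " + bean);
    }
}

Note that, per the documentation, the auto-discovery fallback is triggered when the listed servers are unreachable, not when a particular name is missing; if localhost answers but does not contain the binding, the lookup simply fails with a naming exception.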
Related
Is there a way to specify a particular IP address when setting up Orion Context Broker with any of the methods mentioned here? I'm currently running it as a Docker container alongside MongoDB. I tried modifying the docker-compose file, but couldn't find any network settings for Orion.
I recently ran into several difficulties with the Freeboard/OCB connection, and it may be because OCB runs on the default loopback interface. The same thing happened when FIWARE's accumulator server started on that interface; after switching it to another available interface, the connection was established.
You can use the -localIp CLI option to specify which IP interface the broker listens on. By default it listens on all interfaces.
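For example (a sketch; the address below is only illustrative, and when running under docker-compose the option would typically be appended to the orion service's command arguments):

contextBroker -localIp 192.0.2.10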
I'm trying to configure a reliable configuration service that uses the bus to update clients when a config change happens. I started two config servers that monitor the local file system, and two Eureka servers so that clients can discover the config service at startup (i.e., the "Eureka first" config approach). I used RabbitMQ as the AMQP bus.
The current behavior is as follows: if I update a config file and POST to http://config-server1/bus/refresh, the config server sends a notification but only one client picks it up, so to update 3 clients I need to make 3 POSTs.
Question: how can I configure the bus so that one POST to /bus/refresh updates all clients?
Thank you in advance.
Is it necessary to have a separate instance to act as the domain master host controller? Can the same JBoss installation also start up slave host controllers with server groups running multiple server instances on different port offsets?
So if there are 2 VMs that each need to run 3 server instances, can the first VM also be the domain controller? Or is it better to have a separate domain controller running on one of these 2 VMs or on a different one?
Does the domain controller create a single point of failure in controlling the multiple instances? What happens if the domain controller goes down? Does it have to be brought up to start and stop the slave host controllers and their server instances?
You can run several JBoss instances on each host; simply add server elements to the host-master.xml file.
As for the ports, the domain controller (like the host controller, since a domain controller is just a host controller with an extra centralization role) only opens the management ports. Only one controller (host or domain) is present per node. One JBoss server is created for each server listed in the host configuration file's servers section (with the possibility to provide a port offset).
I think giving each VM one thing to run makes it easier to manage (that is what VMs were made for), but if you are limited resource-wise (VMs have overhead), you can use one node as both the domain controller and a host running JBoss instances.
The domain controller isn't (yet?) clusterable, but when it is down the JBoss instances will still run, so you will only lose the central point of configuration. The JBoss instances will in fact fall back to stand-alone mode, and you will still be able to update their configuration, but only by connecting directly to each one of them. When the controller is back, the central point of configuration will be back.
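For reference, a rough sketch of what those server elements look like in the host configuration file (the server names, group and port offsets below are purely illustrative; main-server-group is the group name shipped in the default domain.xml):

<servers>
    <server name="server-one" group="main-server-group" auto-start="true">
        <!-- first instance on this host keeps the default ports -->
        <socket-bindings port-offset="0"/>
    </server>
    <server name="server-two" group="main-server-group" auto-start="true">
        <!-- second instance on the same host, all ports shifted by 150 -->
        <socket-bindings port-offset="150"/>
    </server>
</servers>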
My test bed is 2 servers, both running a service based on jboss-4.0.3sp1. They are configured as a cluster and have HA-JNDI running between the 2 nodes.
Due to a framework change, I need to shut down the service on one node. How can we shut down HA-JNDI?
I cannot update cluster-service.xml to remove the HA-JNDI definition, because that would cause an application start-up error.
thanks,
Emre
Here is an excerpt from the JBoss Clustering documentation:
The java.naming.provider.url JNDI setting can now
accept a comma-separated list of URLs. Example:
java.naming.provider.url=server1:1100,server2:1100,server3:1100,server4:1100
When initialising, the JNP client code will try to get in touch with each
server from the list, one after the other, stopping as soon as one server
has been reached.
So set it to a server that is up.
I hope this helps.
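A minimal jndi.properties sketch for that situation (assuming node2 is the node that stays up and the default HA-JNDI port 1100; the host name is hypothetical):

java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
java.naming.provider.url=node2:1100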
If I understand correctly, the default JNDI service runs on my local AS, right? So if I create an EJB and name it "sth" in jboss.xml (running JBoss), then it is registered in my AS. Correct?
In big projects EJBs might be distributed across many servers: some EJBs doing one thing on one server, others doing something else on another. When calling JNDI lookup() I search only one server, right? So that means I need to know where the EJB is registered... Is that true?
When you cluster your app you will usually configure the cluster so that you have one shared JNDI. In JBoss you do this using HA-JNDI (High Availability - JNDI) or equivalent. This is a centralized service with fail-over. In principle you could imagine having a replicated service for better throughput, but to my knowledge that is not available in JBoss.
In short, you will have only one namespace, so you don't need to know where it is registered.
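As a small illustration of that single namespace (a sketch, assuming the standard JNP client factory and the default HA-JNDI port 1100; the host name is hypothetical and "sth" is the bean name from the question):

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

public class SharedNamespaceLookup {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
        // Point at any node's HA-JNDI port; HA-JNDI resolves names cluster-wide,
        // so the client does not need to know which server actually hosts the EJB.
        env.put(Context.PROVIDER_URL, "someClusterNode:1100"); // hypothetical host name

        Context ctx = new InitialContext(env);
        Object sth = ctx.lookup("sth"); // resolved through the shared HA-JNDI namespace
        System.out.println("Found: " + sth);
    }
}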