I'm using WildFly 8.1.0.Final.
I have RecordingServerHandler configured, and it does get triggered by web service messages. The problem is that LogRecorders are disabled by default.
The Records management article says:
Default processors are not in recording mode upon creation, thus you need to switch them to recording mode through their MBean interfaces (see the Recording flag in the jmx-console).
Enabling them at runtime one by one for each endpoint won't do; I need to enable them globally at "development time".
The same article says:
The recorders can be configured in the stacks bean configuration
<!-- Installed Record Processors-->
<bean name="WSMemoryBufferRecorder" class="org.jboss.wsf.framework.management.recording.MemoryBufferRecorder">
    <property name="recording">false</property>
</bean>
<bean name="WSLogRecorder" class="org.jboss.wsf.framework.management.recording.LogRecorder">
    <property name="recording">false</property>
</bean>
What's "stacks bean configuration"? Does the specified WSLogRecorder name imply that this configuration creates another, non-default, LogRecorder by that name, and that I would need to add it to all endpoints somehow?
Ended up enabling them via JMX at the end of deployment.
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.Attribute;
import javax.management.MBeanServer;
import javax.management.ObjectName;

/* ... */

// Flip the Recording attribute on every LogRecorder MBean registered under jboss.ws
MBeanServer server = ManagementFactory.getPlatformMBeanServer();
Set<ObjectName> recorderNames = server.queryNames(
        new ObjectName("jboss.ws:recordProcessor=LogRecorder,*"), null);
for (ObjectName recorderName : recorderNames) {
    server.setAttribute(recorderName, new Attribute("Recording", true));
}
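If it helps, here is a minimal sketch of running that same JMX call automatically at deployment time from a servlet context listener (the listener class name is my own invention; it assumes a servlet-based web app and that the recorders live on the platform MBean server):

import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.Attribute;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Hypothetical listener: switches every LogRecorder MBean to recording mode once the webapp starts.
@WebListener
public class LogRecorderEnabler implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            Set<ObjectName> recorderNames = server.queryNames(
                    new ObjectName("jboss.ws:recordProcessor=LogRecorder,*"), null);
            for (ObjectName recorderName : recorderNames) {
                server.setAttribute(recorderName, new Attribute("Recording", true));
            }
        } catch (Exception e) {
            throw new RuntimeException("Could not enable WS LogRecorders", e);
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up
    }
}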
We have been experimenting with the number of Ignite server pods to see the impact on performance.
One thing that we have noticed is that if the number of Ignite server pods is increased after client nodes have established communication, the new pod will just fail-loop with the error below.
If, however, the grid is destroyed (all client and server nodes brought down) and then the desired number of server nodes is launched, there are no issues.
Also, the above procedure is not fully dependable for anything other than launching a single Ignite server.
From reading [this Stack Overflow post][1] and [this documentation][2], it looks like the issue may be that we are not launching the "Kubernetes service".
Ignite's KubernetesIPFinder requires users to configure and deploy a special Kubernetes service that maintains a list of the IP addresses of all the alive Ignite pods (nodes).
However, this is the only documentation I have found, and it says that it is no longer current.
Is this information still relevant for Ignite 2.11.1?
If not, is there some more recent documentation?
If this service is indeed needed, are there more concrete examples and information on setting it up?
Error on new Server pod:
[21:37:55,793][SEVERE][main][IgniteKernal] Failed to start manager: GridManagerAdapter [enabled=true, name=o.a.i.i.managers.discovery.GridDiscoveryManager]
class org.apache.ignite.IgniteCheckedException: Failed to start SPI: TcpDiscoverySpi [addrRslvr=null, addressFilter=null, sockTimeout=5000, ackTimeout=5000, marsh=JdkMarshaller [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1#78422efb], reconCnt=10, reconDelay=2000, maxAckTimeout=600000, soLinger=0, forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null, skipAddrsRandomization=false]
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:281)
at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:980)
at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1985)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1331)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2141)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1787)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1172)
at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1066)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:952)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:851)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:721)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:690)
at org.apache.ignite.Ignition.start(Ignition.java:353)
at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:367)
Caused by: class org.apache.ignite.spi.IgniteSpiException: Node with the same ID was found in node IDs history or existing node in topology has the same ID (fix configuration and restart local node) [localNode=TcpDiscoveryNode [id=000e84bb-f587-43a2-a662-c7c6147d2dde, consistentId=8751ef49-db25-4cf9-a38c-26e23a96a3e4, addrs=ArrayList [0:0:0:0:0:0:0:1%lo, 127.0.0.1, fd00:85:4001:5:f831:8cc:cd3:f863%eth0], sockAddrs=HashSet [nkw-mnomni-ignite-1-1-1.nkw-mnomni-ignite-1-1.680e5bbc-21b1-5d61-8dfa-6b27be10ede7.svc.cluster.local/fd00:85:4001:5:f831:8cc:cd3:f863:47500, /0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500], discPort=47500, order=0, intOrder=0, lastExchangeTime=1676497065109, loc=true, ver=2.11.1#20211220-sha1:eae1147d, isClient=false], existingNode=000e84bb-f587-43a2-a662-c7c6147d2dde]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.duplicateIdError(TcpDiscoverySpi.java:2083)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1201)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:473)
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2207)
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:278)
... 13 more
Server DiscoverySpi Config:
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="myNameSpace"/>
<property name="serviceName" value="myServiceName"/>
</bean>
</property>
</bean>
</property>
Client DiscoverySpi Configs:
<bean id="discoverySpi" class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder" ref="ipFinder" />
</bean>
<bean id="ipFinder" class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="shared" value="false" />
<property name="addresses">
<list>
<value>myServiceName.myNameSpace:47500</value>
</list>
</property>
</bean>
Edit:
I have experimented more with this issue. As long as I do not deploy any clients (using the static TcpDiscoveryVmIpFinder above), I am able to scale the server pods up and down without any issue. However, as soon as a single client joins, I am no longer able to scale the server pods up.
I can see that the server pods have ports 47500 and 47100 open, so I am not sure what the issue is. Does the TcpDiscoveryKubernetesIpFinder still need the port to be specified in the client config?
I have tried to change my client config to use the TcpDiscoveryKubernetesIpFinder below, but I am getting a discovery timeout failure (see below).
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="680e5bbc-21b1-5d61-8dfa-6b27be10ede7"/>
<property name="serviceName" value="nkw-mnomni-ignite-1-1"/>
</bean>
</property>
</bean>
</property>
24-Feb-2023 14:15:02.450 WARNING [grid-timeout-worker-#22%igniteClientInstance%] org.apache.ignite.logger.java.JavaLogger.warning Thread dump at 2023/02/24 14:15:02 UTC
Thread [name="main", id=1, state=WAITING, blockCnt=78, waitCnt=3]
Lock [object=java.util.concurrent.CountDownLatch$Sync@45296dbd, ownerName=null, ownerId=-1]
at java.base@17.0.1/jdk.internal.misc.Unsafe.park(Native Method)
at java.base@17.0.1/java.util.concurrent.locks.LockSupport.park(LockSupport.java:211)
at java.base@17.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:715)
at java.base@17.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1047)
at java.base@17.0.1/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:230)
at o.a.i.spi.discovery.tcp.ClientImpl.spiStart(ClientImpl.java:324)
at o.a.i.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2207)
at o.a.i.i.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:278)
at o.a.i.i.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:980)
at o.a.i.i.IgniteKernal.startManager(IgniteKernal.java:1985)
at o.a.i.i.IgniteKernal.start(IgniteKernal.java:1331)
at o.a.i.i.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2141)
at o.a.i.i.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1787)
- locked o.a.i.i.IgnitionEx$IgniteNamedInstance@57ac9100
at o.a.i.i.IgnitionEx.start0(IgnitionEx.java:1172)
at o.a.i.i.IgnitionEx.startConfigurations(IgnitionEx.java:1066)
at o.a.i.i.IgnitionEx.start(IgnitionEx.java:952)
at o.a.i.i.IgnitionEx.start(IgnitionEx.java:851)
at o.a.i.i.IgnitionEx.start(IgnitionEx.java:721)
at o.a.i.i.IgnitionEx.start(IgnitionEx.java:690)
at o.a.i.Ignition.start(Ignition.java:353)
Edit 2:
I also spoke with an admin about opening client-side ports in case that was the issue. He indicated that this should not be needed, as clients should be able to open ephemeral ports to communicate with the server nodes.
[1]: Ignite not discoverable in kubernetes cluster with TcpDiscoveryKubernetesIpFinder
[2]: https://apacheignite.readme.io/docs/kubernetes-ip-finder
It's hard to say precisely what the root cause is, but in general it's something related to the network or domain name resolution.
A public address is assigned to a node on startup and is exposed to other nodes for communication. Other nodes store that address and nodeId in their history. Here is what is happening: a new node tries to enter the cluster; it connects to a random node, and the request is then transferred to the coordinator. The coordinator issues a TcpDiscoveryNodeAddedMessage that must circle the topology ring and be ACKed by all other nodes. That process did not finish within the join timeout, so the new node tries to re-enter the topology by starting the same joining process, but with a new ID. However, other nodes see that this address is already registered to another nodeId, causing the original duplicate-nodeId error.
Some recommendations:
If the issue is reproducible on a regular basis, I'd recommend collecting more information by enabling DEBUG logging for the following package:
org.apache.ignite.spi.discovery (discovery-related events tracing)
Take thread dumps from affected nodes (could be done by kill -3). Check for discovery-related issues. Search for "lookupAllHostAddr".
Check that it's not a DNS issue and that all public addresses for your node, e.g. nkw-mnomni-ignite-1-1-1.nkw-mnomni-ignite-1-1.680e5bbc-21b1-5d61-8dfa-6b27be10ede7.svc.cluster.local, resolve promptly (see the resolution-check sketch after this list). I was asking about the provider because in OpenShift there seems to be a hard limit on DNS resolution time.
Check GC and safepoints.
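As a quick way to verify the DNS point, here is a rough standalone check you could run from inside an affected pod (the host name is taken from the log above; what counts as "slow" is up to you):

import java.net.InetAddress;

public class DnsResolutionCheck {
    // Resolves the pod/service DNS name and reports how long the lookup took.
    public static void main(String[] args) throws Exception {
        String host = "nkw-mnomni-ignite-1-1-1.nkw-mnomni-ignite-1-1.680e5bbc-21b1-5d61-8dfa-6b27be10ede7.svc.cluster.local";
        long start = System.nanoTime();
        InetAddress[] addresses = InetAddress.getAllByName(host);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Resolved " + addresses.length + " address(es) in " + elapsedMs + " ms");
    }
}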
To hide the underlying issue you can play around with the Ignite configuration: network timeout, join timeout, failure detection timeout (a sketch of where those knobs live follows below). But I recommend finding the real root cause instead of treating the symptoms.
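For illustration only, a hedged sketch of setting those timeouts programmatically; the values are placeholders, not recommendations, and the Kubernetes IP finder properties simply mirror the XML from the question:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

public class TimeoutTuningSketch {
    public static void main(String[] args) {
        TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder();
        ipFinder.setNamespace("myNameSpace");     // placeholder, as in the question's XML
        ipFinder.setServiceName("myServiceName"); // placeholder, as in the question's XML

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);
        discoverySpi.setJoinTimeout(60_000);      // default is 0, i.e. keep trying forever

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoverySpi);
        cfg.setNetworkTimeout(10_000);            // default is 5000 ms
        cfg.setFailureDetectionTimeout(30_000);   // default is 10000 ms

        Ignition.start(cfg);
    }
}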
We have a requirement to pause a job before application maintenance. We are using Quartz 2.2.1 in a cluster. The database is Oracle.
I have developed a screen with "Pause" functionality. I observed that "pause" works fine until I restart the server. The moment I start the server again, the TRIGGER_STATE column of the QRTZ_TRIGGERS table resets to "WAITING".
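For reference, the pause button simply calls the Quartz Scheduler API, roughly like this (the job and group names are placeholders for our actual job keys):

import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;

public class PauseJobExample {
    // Pauses every trigger of the given job until it is explicitly resumed.
    public static void pause(Scheduler scheduler, String jobName, String jobGroup)
            throws SchedulerException {
        scheduler.pauseJob(JobKey.jobKey(jobName, jobGroup));
        // scheduler.resumeJob(JobKey.jobKey(jobName, jobGroup)) is the counterpart used by "Resume"
    }
}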
Can anyone please provide a hint?
Thanks a lot in advance.
Rgds - Roy
If you have set overwriteExistingJobs=true (note that the default value is false), then each time the server starts it loads the jobs/triggers from the configuration file and replaces the existing ones (those with the same job/trigger names), thereby overwriting the triggers and their states too, as in your case.
You could try setting overwriteExistingJobs=false in the SchedulerFactoryBean. This, however, may not be convenient for you: if you ever change the job configuration on the server, the existing jobs with the old configuration will remain in the database.
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
....
<property name="overwriteExistingJobs" value="false"/>
<property name="triggers">
<list>
....
</list>
</property>
....
</bean>
I am trying to set up JBoss 6 in a clustered environment and use it to host clustered stateful singleton EJBs.
So far we have successfully installed a singleton EJB within the cluster, where the different entry points to our application (through a website deployed on each node) point to a single environment on which the EJB is hosted (thus maintaining the state of its static variables). We achieved this using the following configuration:
Bean interface:
@Remote
public interface IUniverse {
    ...
}
Bean implementation:
@Clustered @Stateful
public class Universe implements IUniverse {
    private static Vector<String> messages = new Vector<String>();
    ...
}
jboss-beans.xml configuration:
<deployment xmlns="urn:jboss:bean-deployer:2.0">
    <!-- This bean is an example of a clustered singleton -->
    <bean name="Universe" class="Universe">
    </bean>
    <bean name="UniverseController" class="org.jboss.ha.singleton.HASingletonController">
        <property name="HAPartition"><inject bean="HAPartition"/></property>
        <property name="target"><inject bean="Universe"/></property>
        <property name="targetStartMethod">startSingleton</property>
        <property name="targetStopMethod">stopSingleton</property>
    </bean>
</deployment>
The main problem with this implementation is that, after the master node (the one that holds the singleton EJB's state) shuts down gracefully, the singleton's state is lost and reset to its defaults. Please note that everything was constructed following the JBoss 5 clustering documents, as no JBoss 6 documents were found on this subject. Any information on how to solve this problem, or on where to find JBoss 6 documentation on clustering, is appreciated.
I don't think you actually need to make a stateful session bean a singleton: the way to invoke a stateful session bean is to keep the reference you obtained and keep invoking that same instance. The application server cluster will maintain the statefulness of the stateful session bean by maintaining the EJB session across the cluster.
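A rough sketch of what that looks like from the client side (the JNDI name and the business method are placeholders; the real name depends on how the bean is deployed):

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class UniverseClient {

    // Look the bean up once and cache the proxy: every call through this reference
    // reaches the same stateful session bean instance, so its state is preserved.
    private IUniverse universe;

    public void init() throws NamingException {
        InitialContext ctx = new InitialContext();
        universe = (IUniverse) ctx.lookup("Universe/remote"); // placeholder JNDI name
    }

    public void use() {
        universe.addMessage("hello"); // hypothetical business method on IUniverse
        universe.addMessage("again"); // same instance, same conversational state
    }
}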
That said, I agree that you may want to reconsider whether you really need a stateful session bean here.
What is the correct way to turn off the JBoss hot deploy service?
This is a production environment.
Edit: JBoss version 5.1.0 GA
I think deleting the "deploy/hdscanner-jboss-beans.xml" file is the correct way to do this.
From JBoss in Action, ch. 3.1.5:
The deployer is configured via the deployers.xml and profile.xml descriptor files,
both found in the server/xxx/conf directory. This file defines several POJOs that
manage various deployment responsibilities. Table 3.3 identifies each of these POJOs
and highlights some of the more interesting configuration properties provided by
each one. [...]
And the relevant bits from the table:
Bean: HDScanner
Property: scanEnabled - Set this to true (default) to enable the hot deployer and to false to disable it. When set to false, applications are deployed only when the server is started or when the deploy method on the MainDeployer MBean is called.
Property: scanPeriod - The number of milliseconds the hot deployer waits between performing scans. The default is 5000 milliseconds (5 seconds). This value is ignored if scanEnabled is set to false.
Property: scanThreadName - You can use this to change the name of the thread from its default of HDScanner. The thread name enables you to identify the hot deployer thread if you should take a thread dump.
You can disable and expose it with JMX:
<bean name="HDScanner" class="org.jboss.system.server.profileservice.hotdeploy.HDScanner">
<annotation>#org.jboss.aop.microcontainer.aspects.jmx.JMX(name="jboss.deployment:service=HDScanner", exposedInterface=org.jboss.system.server.profileservice.hotdeploy.Scanner, registerDirectly=false)</annotation>
<start method="start" ignored="true" />
<property name="deployer"><inject bean="ProfileServiceDeployer"/></property>
<property name="profileService"><inject bean="ProfileService"/></property>
<property name="scanPeriod">5000</property>
<property name="scanThreadName">HDScanner</property>
<property name="scanEnabled">false</property>
</bean>
The scanEnabled property doesn't exist on JBoss 5.x; it only exists on JBoss 4.x for the Deployment Scanner.
On JBoss 5.x, just delete hdscanner-jboss-beans.xml from the deploy directory and use the MainDeployer MBean to deploy your applications.
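For what it's worth, here is a hedged sketch of calling the MainDeployer over JMX; the jboss.system:service=MainDeployer object name and the deploy(URL) operation are the classic ones, so double-check them against your server version before relying on this:

import java.net.URL;
import java.util.List;
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.ObjectName;

public class ManualDeploySketch {
    // Asks the MainDeployer MBean to deploy an application archive by URL.
    public static void deploy(String archiveUrl) throws Exception {
        // Assumes the JBoss MBeanServer is the first registered server; adjust if needed.
        List<MBeanServer> servers = MBeanServerFactory.findMBeanServer(null);
        MBeanServer server = servers.get(0);
        ObjectName mainDeployer = new ObjectName("jboss.system:service=MainDeployer");
        server.invoke(mainDeployer,
                "deploy",
                new Object[] { new URL(archiveUrl) },
                new String[] { "java.net.URL" });
    }
}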
I have a web-app that requires two settings:
A JDBC datasource
A string token
I desperately want to be able to deploy one .war to various different containers (Jetty, Tomcat, GF3 minimum) and configure these settings at the application level within the container.
My code does this:
InitialContext ctx = new InitialContext();
Context envCtx = (javax.naming.Context) ctx.lookup("java:comp/env");
token = (String) envCtx.lookup("token");
ds = (DataSource) envCtx.lookup("jdbc/datasource");
Let's assume I've used the GlassFish management interface to create two JDBC resources, jdbc/test-datasource and jdbc/live-datasource, which connect to different copies of the same schema on different servers, with different credentials, etc. Say I want to deploy this to GlassFish and point it at the test datasource; I might have this in my sun-web.xml:
...
<resource-ref>
<res-ref-name>jdbc/datasource</res-ref-name>
<jndi-name>jdbc/test-datasource</jndi-name>
</resource-ref>
...
But sun-web.xml goes inside my war, right? Surely there must be a way to do this through the management interface.
Am I even trying to do the right thing? Do other containers make this any easier? I'd be particularly interested in how Jetty 7 handles this, since I use it for development.
EDIT: Tomcat has a reasonable way to do this:
Create $TOMCAT_HOME/conf/Catalina/localhost/webapp.xml with:
<?xml version="1.0" encoding="UTF-8"?>
<Context antiResourceLocking="false" privileged="true">
<!-- String resource -->
<Environment name="token" value="value of token" type="java.lang.String" override="false" />
<!-- Linking to a global resource -->
<ResourceLink name="jdbc/datasource1" global="jdbc/test" type="javax.sql.DataSource" />
<!-- Derby -->
<Resource name="jdbc/datasource2"
type="javax.sql.DataSource"
auth="Container"
driverClassName="org.apache.derby.jdbc.EmbeddedDataSource"
url="jdbc:derby:test;create=true"
/>
<!-- H2 -->
<Resource name="jdbc/datasource3"
type="javax.sql.DataSource"
auth="Container"
driverClassName="org.h2.jdbcx.JdbcDataSource"
url="jdbc:h2:~/test"
username="sa"
password=""
/>
</Context>
Note that override="false" means the opposite. It means that this setting can't be overriden by web.xml.
I like this approach because the file is part of the container configuration not the war, but it's not part of the global configuration; it's webapp specific.
I guess I expect a bit more from glassfish since it is supposed to have a full web admin interface, but I would be happy enough with something equivalent to the above.
For GF v3, you may want to try leveraging the --deploymentplan option of the deploy subcommand of asadmin. It is discussed on the man page for the deploy subcommand.
We had just this issue when migrating from Tomcat to GlassFish 3. Here is what works for us.
In the GlassFish admin console, configure datasources (JDBC connection pools and resources) for DEV/TEST/PROD/etc.
Record your deployment-time parameters (in our case, database connection info) in a properties file. For example:
# Database connection properties
dev=jdbc/dbdev
test=jdbc/dbtest
prod=jdbc/dbprod
Each web app can load the same database properties file.
Look up the JDBC resource as follows:
import java.sql.Connection;
import javax.sql.DataSource;
import java.sql.SQLException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

/**
 * @param resourceName the resource name of the connection pool (eg jdbc/dbdev)
 * @return Connection a pooled connection from the data source
 *         associated with resourceName
 * @throws NamingException will be thrown if resource name is not found
 */
public Connection getDatabaseConnection(String resourceName)
        throws NamingException, SQLException {
    Context initContext = new InitialContext();
    DataSource pooledDataSource = (DataSource) initContext.lookup(resourceName);
    return pooledDataSource.getConnection();
}
Note that this is not the usual two-step process involving a lookup through the naming context "java:comp/env" (sketched below for comparison). I have no idea whether this works in application containers other than GF3, but in GF3 there is no need to add resource descriptors to web.xml when using the above approach.
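For comparison, a minimal sketch of that conventional two-step lookup, assuming a resource-ref named jdbc/dbdev has been declared in web.xml and mapped by the container:

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class CompEnvLookupSketch {
    // Two-step lookup: enter the component environment first, then resolve the logical name.
    public DataSource lookupDataSource() throws NamingException {
        Context initContext = new InitialContext();
        Context envContext = (Context) initContext.lookup("java:comp/env");
        return (DataSource) envContext.lookup("jdbc/dbdev"); // logical name from the web.xml resource-ref
    }
}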
I'm not sure I really understand the question/problem.
As an Application Component Provider, you declare the resource(s) required by your application in a standard, container-agnostic way in web.xml.
At deployment time, the Application Deployer and Administrator is supposed to follow the instructions provided by the Application Component Provider to resolve external dependencies (among other things), for example by creating a datasource at the application server level and mapping its real JNDI name to the resource name used by the application, via an application-server-specific deployment descriptor (e.g. sun-web.xml for GlassFish). Obviously, this is a container-specific step and thus not covered by the Java EE specification.
Now, if you want to change the database an application is using, you'll have to either:
change the mapping in the application server deployment descriptor, or
modify the configuration of the existing datasource to make it point to another database.
Having an admin interface doesn't really change anything. If I missed something, don't hesitate to let me know. And just in case, maybe have a look at this previous answer.