Liberty: Intermediate context does not exist: jms/xyz

I am working on migrating an EAR application to Liberty. It is a web application that uses JMS with the MQ messaging provider.
For example, my stage.config.xml has the following properties:
MQQueue(0).CCSID
MQQueue(0).baseQueueName
MQQueue(0).jndiName
MQQueue(0).name
MQQueueConnectionFactory(0).CCSID
MQQueueConnectionFactory(0).channel
MQQueueConnectionFactory(0).connectionPool.ConnectionPool(0).maxConnections
MQQueueConnectionFactory(0).description
MQQueueConnectionFactory(0).host
MQQueueConnectionFactory(0).jndiName
MQQueueConnectionFactory(0).name
MQQueueConnectionFactory(0).port
MQQueueConnectionFactory(0).provider
MQQueueConnectionFactory(0).queueManager
MQQueueConnectionFactory(0).sessionPool.ConnectionPool(0).maxConnections
MQQueueConnectionFactory(0).transportType
<featureManager>
<feature>jsp-2.3</feature>
<feature>localConnector-1.0</feature>
<feature>jndi-1.0</feature>
<feature>jdbc-4.1</feature>
<feature>samlWeb-2.0</feature>
<feature>wasJmsClient-2.0</feature>
<feature>wasJmsClient-1.1</feature>
<feature>wmqJmsClient-1.1</feature>
<feature>jndi-1.0</feature>
<feature>jmsMdb-3.1</feature>
</featureManager>
<featureManager>
<exclude>jsf-2.2</exclude>
</featureManager>
<variable name="wmqJmsClient.rar.location"
value="${server.config.dir}/wmq/wmq.jmsra.rar"/>
<jmsQueue id="1533A.TRANSPORT.ASSIGNMENT.RESP" jndiName="jms/xyz/queue/transportAssignment/response"></jmsQueue>
<jmsQueue id="1533A.TRANSPORT.ASSIGNMENT.RQST" jndiName="jms/xyz/queue/transportAssignment/request"></jmsQueue>
<jmsQueueConnectionFactory jndiName="jms/xyz" id="xyz_qa_QCF">
<connectionManager maxPoolSize="10"/>
<properties.wmqJms providerVersion="unspecified" transportType="CLIENT" applicationName="xyz" channel="CLIENTS.xyz" hostName="host123.GOT.hst.NET" queueManager="xyz141Q" CCSID="1208"/>
</jmsQueueConnectionFactory>
The exception I get: NameNotFoundException: Intermediate context does not exist: jms/xyz
Can anyone please guide me on what parameters/configuration I have to use in server.xml for this to work? Kindly help.

There are several issues with your server.xml:
duplicated jndi-1.0 feature
mixed wasJmsClient and wmqJmsClient features - if you only use MQ, then remove the wasJmsClient ones
mixed versions of wasJmsClient - use only one, and only if you also need to connect to internal queues
<exclude> in features - where did you find such a construct? I do not believe it is supported
finally, you are using jms/xyz once as the QCF JNDI name and once as a context name. That is incorrect. Change your QCF JNDI name to something different, e.g. jms/xyz/qcf, as in the corrected fragment below.
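A minimal corrected fragment, reusing the connection details from your own config (only the QCF JNDI name changes; jms/xyz/qcf is just an example):

<jmsQueueConnectionFactory jndiName="jms/xyz/qcf" id="xyz_qa_QCF">
    <connectionManager maxPoolSize="10"/>
    <properties.wmqJms providerVersion="unspecified" transportType="CLIENT"
        applicationName="xyz" channel="CLIENTS.xyz" hostName="host123.GOT.hst.NET"
        queueManager="xyz141Q" CCSID="1208"/>
</jmsQueueConnectionFactory>
<jmsQueue id="1533A.TRANSPORT.ASSIGNMENT.RESP" jndiName="jms/xyz/queue/transportAssignment/response"/>
<jmsQueue id="1533A.TRANSPORT.ASSIGNMENT.RQST" jndiName="jms/xyz/queue/transportAssignment/request"/>

This way jms/xyz remains a pure context under which the queue names live, and the connection factory no longer collides with it.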
UPDATE based on comments
Check how you are using JMS classes.
Here is config and code I used for connecting to MQ:
server.xml fragment:
<feature>jms-2.0</feature>
Java code to send message:
import java.util.logging.Logger;
import javax.annotation.Resource;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.jms.JMSConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.JMSException;
import javax.jms.Queue;
import javax.naming.NamingException;
import javax.transaction.Transactional;

@ApplicationScoped
public class JMSHelper {

    private static Logger logger = Logger.getLogger(JMSHelper.class.getName());

    @Inject
    @JMSConnectionFactory("jms/myapp/NotificationQueueConnectionFactory")
    private JMSContext jmsContext;

    @Resource(lookup = "jms/myapp/NotificationQueue")
    private Queue queue;

    @Transactional
    void invokeJMS(Object json) throws JMSException, NamingException {
        String contents = json.toString();
        logger.info("Sending " + contents + " to " + queue.getQueueName());
        jmsContext.createProducer().send(queue, contents);
        logger.info("JMS Message sent successfully!");
    }
}
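For reference, a sketch of the Liberty configuration behind the jms/myapp/* names used above - the MQ connection details (host, port, channel, queue manager, queue name) are placeholders, not values from the original setup:

<jmsQueueConnectionFactory id="notificationQCF" jndiName="jms/myapp/NotificationQueueConnectionFactory">
    <connectionManager maxPoolSize="10"/>
    <properties.wmqJms transportType="CLIENT" hostName="mqhost.example.com" port="1414"
        channel="MYAPP.CHANNEL" queueManager="MYQM"/>
</jmsQueueConnectionFactory>
<jmsQueue id="notificationQueue" jndiName="jms/myapp/NotificationQueue">
    <properties.wmqJms baseQueueName="NOTIFICATION.QUEUE" baseQueueManagerName="MYQM"/>
</jmsQueue>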

I am assuming you will use the resource adapter, so please start by reading "Liberty and the IBM MQ resource adapter" in the IBM documentation.
When you have configured things as documented by IBM and it still does not work, please post the Liberty config and the full exception you get, so we can help you again.

Related

started geode spring boot and save to remote region but failed to start bean gemfireClusterSchemaObjectInitializer

With a simple client app, I make an object and an object repository, connect to a Geode cluster, then run a @Bean ApplicationRunner to put some data into a remote region.
@ClientCacheApplication(name = "Web", locators = @Locator, logLevel = "debug", subscriptionEnabled = true)
@EnableClusterDefinedRegions
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class MyCache {

    private static final Logger log = LoggerFactory.getLogger(MyCache.class);

    @Bean
    ApplicationRunner StartedUp(MyRepository myRepo) {
        log.info("In StartedUp");
        return args -> {
            String guid = UUID.randomUUID().toString().substring(0, 8).toUpperCase();
            MyObject msg = new MyObject(guid, "Started");
            myRepo.save(msg);
            log.info("Out StartedUp");
        };
    }
}
The "save" put fails with
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.springframework.web.client.ResourceAccessException: I/O error on POST request for "https://localhost:7070/gemfire/v1/regions": Connection refused: connect; nested exception is java.net.ConnectException: Connection refused: connect
The question "Problem creating region and persist region to disk Geode Gemfire Spring Boot" helped. The problem is the @EnableClusterConfiguration(useHttp = true) annotation.
This annotation makes the remote cluster appear to be a localhost. If I remove it altogether, then the put works.
If I remove just the useHttp = true, there is another error:
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on #.#.#.#(Web:9408:loner)### The function is not registered for function id CreateRegionFunction
In a nutshell, the SDG @EnableClusterConfiguration annotation (details available here) enables configuration metadata defined and declared on the client (i.e. the Spring [Boot] Data GemFire/Geode application) to be pushed from the client-side to the cluster (of GemFire/Geode servers).
I say "enable" because it depends on the client-side configuration metadata (i.e. the Spring bean definitions you have explicitly or implicitly defined/declared). Explicit configuration is configuration you defined with a bean definition (in XML, or in JavaConfig with @Bean, etc.). Implicit configuration is auto-configuration, or using SDG annotations like @EnableEntityDefinedRegions or @EnableCachingDefinedRegions, etc.
By default, the @EnableClusterConfiguration annotation assumes the cluster of GemFire or Geode servers was configured and bootstrapped with Spring, specifically using the SDG annotation configuration model. When the GemFire or Geode servers are configured and bootstrapped with Spring, SDG goes on to register some provided, canned GemFire Functions that the @EnableClusterConfiguration annotation calls (by default and...) as a fallback.
NOTE: See the appendix in the SBDG reference documentation on configuring and bootstrapping a GemFire or Geode server, or even a cluster of servers, with Spring. This certainly simplifies local development and debugging, as opposed to using Gfsh. You can do all sorts of interesting combinations: a Gfsh Locator with Spring servers, a Spring [embedded|standalone] Locator with both Gfsh and Spring servers, etc.
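To illustrate, here is a minimal sketch of such a Spring-configured/bootstrapped Geode server (the class name and values are hypothetical; @CacheServerApplication, @EnableLocator and @EnableManager are SDG annotations):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.CacheServerApplication;
import org.springframework.data.gemfire.config.annotation.EnableLocator;
import org.springframework.data.gemfire.config.annotation.EnableManager;

// Sketch of a Geode server configured and bootstrapped with Spring (SDG).
// Servers started this way also get SDG's provided, canned Functions
// (e.g. CreateRegionFunction) registered, which @EnableClusterConfiguration calls.
@SpringBootApplication
@CacheServerApplication(name = "SpringGeodeServer")
@EnableLocator               // embedded Locator (default port 10334)
@EnableManager(start = true) // JMX Manager, so Gfsh and clients can connect
public class SpringGeodeServer {

    public static void main(String[] args) {
        SpringApplication.run(SpringGeodeServer.class, args);
    }
}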
Most of the time, users are using Spring on the client and Gfsh to (partially) configure and bootstrap their cluster (of servers). When this is the case, then Spring is generally not on the servers' classpath, and the "provided, canned" Functions I referred to above are not present and automatically registered. In that case, you must rely on GemFire/Geode's internal Management REST API (something I know a thing or 2 about ;-) to send the configuration metadata from the client to the server/cluster. This is why the useHttp attribute on the @EnableClusterConfiguration annotation must be set to true.
This is why you saw the Exception...
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer';
nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on #.#.#.#(Web:9408:loner)###
The function is not registered for function id CreateRegionFunction
The CreateRegionFunction is the canned Function provided by SDG out of the box, but only when Spring is used to both configure and bootstrap the servers in the cluster.
This generally works well for CI/CD environments, and especially our own test infrastructure, since we typically do not have a full installation of either Apache Geode or Pivotal GemFire available to test with in those environments. For one, those artifacts must be resolvable from an artifact repository like Maven Central. The Apache Geode (and especially) Pivotal GemFire distributions are not. The JARs are, but the full distro isn't. Anyway...
Hopefully, all of this makes sense up to this point.
I do have a recommendation if I may.
Given your application class definition begins with...
@ClientCacheApplication(name = "Web", locators = @Locator,
    logLevel = "debug", subscriptionEnabled = true)
@EnableClusterDefinedRegions
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class MyCache { ... }
I would highly recommend simply using Spring Boot for Apache Geode (and Pivotal GemFire), i.e. SBDG, in place of SDG directly.
Your application class could then be simplified to:
@SpringBootApplication
@EnableClusterAware
@EnableClusterDefinedRegions
public class MyCache { ... }
You can then externalize some of the hard coded configuration settings using the Spring Boot application.properties file:
spring.application.name=Web
spring.data.gemfire.cache.log-level=debug
spring.data.gemfire.pool.subscription-enabled=true
NOTE: @EnableClusterAware is a much more robust and capable extension of @EnableClusterConfiguration. See additional details here.
Here are a few resources to get you going:
Project Overview
Getting Started Sample Guide
Use Case Driven Guides/Samples
Useful resources in the Appendix TOC.
Detailed information on SBDG provided Auto-configuration.
Detailed information on Declarative Configuration.
Detailed information on Externalized Configuration.
In general, SBDG, which is based on SDG, SSDG and STDG, is the preferred/recommended starting point for all things Spring for Apache Geode and Pivotal GemFire (or now, Pivotal Cloud Cache).
Hope this helps.

Spring Boot with application managed persistence context

I am trying to migrate an application from EJB3 + JTA + JPA (EclipseLink). Currently, this application makes use of an application-managed persistence context, because the number of databases is unknown at design time.
The application-managed persistence context allows us to control how EntityManagers are created (e.g. supplying different datasource JNDI names to create the proper EntityManager for a specific DB at runtime).
E.g.
Map<String, Object> properties = new HashMap<>();
properties.put(PersistenceUnitProperties.TRANSACTION_TYPE, "JTA");
// the datasource JNDI name comes from configuration, without prior knowledge of the number of databases
// currently, the DB JNDI names are stored in an externalized file
// the datasources are set up by the operations team
properties.put(PersistenceUnitProperties.JTA_DATASOURCE, "datasource-jndi");
properties.put(PersistenceUnitProperties.CACHE_SHARED_DEFAULT, "false");
properties.put(PersistenceUnitProperties.SESSION_NAME, "xxx");
// create the proper EntityManager to connect to the database decided at runtime
EntityManager em = Persistence.createEntityManagerFactory("PU1", properties).createEntityManager();
// query or update the DB
em.persist(entity);
em.createQuery(...).executeUpdate();
When deployed in an EJB container (e.g. WebLogic), with a proper TransactionAttribute (e.g. TransactionAttributeType.REQUIRED), the container takes care of the transaction start/end/rollback.
Now, I am trying to migrate this application to Spring Boot.
The problem I encounter is that no transaction is started, even after I annotate the method with @Transactional(propagation = Propagation.REQUIRED).
The Spring application is packed as an executable JAR file and runs with embedded Tomcat.
When I try to execute those update APIs, e.g. EntityManager.persist(..), EclipseLink always complains:
javax.persistence.TransactionRequiredException: 'No transaction is currently active'
Sample code below:
// for data persistence
@Service
class DynamicServiceImpl implements DynamicService {

    // attempt to start a transaction
    @Transactional(propagation = Propagation.REQUIRED)
    public void saveData(String dbJndi, EntityA entity) {
        // this returns false - no transaction was started
        TransactionSynchronizationManager.isActualTransactionActive();

        // create an EntityManager based on the input JNDI name to dynamically
        // determine which DB to save the data to
        EntityManager em = createEm(dbJndi);

        // save the data
        em.persist(entity);
    }
}
// RESTful service
@RestController
class RestController {

    @Autowired
    DynamicService service;

    @RequestMapping(value = "/saveRecord", method = RequestMethod.POST)
    public @ResponseBody String saveRecord() {
        // save data
        service.saveData(...)
    }
}
// startup application
@SpringBootApplication
class TestApp {
    public static void main(String[] args) {
        SpringApplication.run(TestApp.class, args);
    }
}
persistence.xml
-------------------------------------------
<persistence-unit name="PU1" transaction-type="JTA">
    <properties>
        <!-- commented out for Spring to handle the transaction??? -->
        <!-- <property name="eclipselink.target-server" value="WebLogic_10"/> -->
    </properties>
</persistence-unit>
-------------------------------------------
application.properties (just 3 lines of config)
-------------------------------------------
spring.jta.enabled=true
spring.jta.log-dir=spring-test # Transaction logs directory.
spring.jta.transaction-manager-id=spring-test
-------------------------------------------
My usage pattern does not follow the typical use cases (e.g. with a known number of DBs - Spring + JPA + multiple persistence units: Injecting EntityManager).
Can anybody give me advice on how to solve this issue?
Has anybody ever hit this situation, where the DBs are not known at design time?
Thank you.
I finally got it working with:
enabling Tomcat JNDI and creating the datasource JNDI entries for each DS programmatically (see the sketch below)
adding the transaction dependencies:
com.atomikos:transactions-eclipselink:3.9.3 (my project uses EclipseLink instead of Hibernate)
org.springframework.boot:spring-boot-starter-jta-atomikos
org.springframework:spring-tx
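A minimal sketch of the first step, assuming Spring Boot 2.x (on Boot 1.x the factory class is TomcatEmbeddedServletContainerFactory, same idea) and using an H2 datasource as a stand-in for the real, externally configured databases:

import javax.sql.DataSource;

import org.apache.catalina.Context;
import org.apache.catalina.startup.Tomcat;
import org.apache.tomcat.util.descriptor.web.ContextResource;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.embedded.tomcat.TomcatWebServer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TomcatJndiConfig {

    @Bean
    public TomcatServletWebServerFactory tomcatFactory() {
        return new TomcatServletWebServerFactory() {
            @Override
            protected TomcatWebServer getTomcatWebServer(Tomcat tomcat) {
                tomcat.enableNaming(); // JNDI is disabled by default in embedded Tomcat
                return super.getTomcatWebServer(tomcat);
            }

            @Override
            protected void postProcessContext(Context context) {
                // one ContextResource per database, normally driven by external configuration
                ContextResource resource = new ContextResource();
                resource.setName("jdbc/db1"); // hypothetical JNDI name
                resource.setType(DataSource.class.getName());
                resource.setProperty("driverClassName", "org.h2.Driver");
                resource.setProperty("url", "jdbc:h2:mem:db1");
                context.getNamingResources().addResource(resource);
            }
        };
    }
}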
You have pretty much answered the question yourself: "When deployed in an EJB container (e.g. WebLogic), with a proper TransactionAttribute (e.g. TransactionAttributeType.REQUIRED), the container will take care of the transaction start/end/rollback".
WebLogic is compliant with the Java Enterprise Edition specification, which is probably why it worked before, but now you are using Tomcat (in embedded mode), which is NOT.
So you simply cannot do what you are trying to do.
This statement in your persistence.xml file:
<persistence-unit name="PU1" transaction-type="JTA">
requires an Enterprise Server (WebLogic, Glassfish, JBoss etc.)
With Tomcat you can only do this:
<persistence-unit name="PU1" transaction-type="RESOURCE_LOCAL">
And you need to handle transactions yourself:
myEntityManager.getTransaction().begin();
... // do your transaction stuff
myEntityManager.getTransaction().commit();
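A slightly fuller sketch of that resource-local pattern, with rollback (the persistence unit name PU1 is from the question; entity is a placeholder for your managed entity):

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

EntityManagerFactory emf = Persistence.createEntityManagerFactory("PU1");
EntityManager em = emf.createEntityManager();
try {
    em.getTransaction().begin();
    em.persist(entity); // do your transaction stuff
    em.getTransaction().commit();
} catch (RuntimeException e) {
    if (em.getTransaction().isActive()) {
        em.getTransaction().rollback(); // undo partial work on failure
    }
    throw e;
} finally {
    em.close();
}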

HelloWorld using Drools Workbench & KIE Server

I have KIE Drools Workbench 6.2.0.Final installed inside a local JBoss 7 application server instance, and KIE Server 6.2.0.Final inside a local Tomcat 7 instance.
I am using the web-based KIE Workbench strictly for evaluation purposes (I use it to generate Java-based Maven projects, and am not using a particular IDE such as Eclipse or IntelliJ IDEA):
Created a new repository called testRepo
Created a new project called HelloWorld
Created a new Data Object called HelloWorld with a String property called message:
package demo;

/**
 * This class was automatically generated by the data modeler tool.
 */
public class HelloWorld implements java.io.Serializable {

    static final long serialVersionUID = 1L;

    private java.lang.String message;

    public HelloWorld() {
    }

    public java.lang.String getMessage() {
        return this.message;
    }

    public void setMessage(java.lang.String message) {
        this.message = message;
    }

    public HelloWorld(java.lang.String message) {
        this.message = message;
    }
}
Created a new DRL containing the following contents:
package demo;
import demo.HelloWorld;
rule "hello"
when
HelloWorld(message == "Joe");
then
System.out.println("Hello Joe!");
end
When I deploy it to my Kie Server under this URL:
http://localhost:8080/kie-server-6.2.0.Final-webc/services/rest/server/containers/helloworld
I get the following response when I copy and paste the above URL in Google Chrome:
<response type="SUCCESS" msg="Info for container hello">
<kie-container container-id="hello" status="STARTED">
<release-id>
<artifact-id>Hello</artifact-id>
<group-id>demo</group-id>
<version>1.0</version>
</release-id>
<resolved-release-id>
<artifact-id>Hello</artifact-id>
<group-id>demo</group-id>
<version>1.0</version>
</resolved-release-id>
<scanner status="DISPOSED"/>
</kie-container>
</response>
When I try to do a POST using the following payload (using Postman or SoapUI):
<batch-execution lookup="defaultKieSession">
    <insert out-identifier="message" return-object="true" entrypoint="DEFAULT">
        <demo.HelloWorld>
            <message>Joe</message>
        </demo.HelloWorld>
    </insert>
</batch-execution>
Received the following:
HTTP Status 415 - Cannot consume content type
type Status report
message Cannot consume content type
description The server refused this request because the request entity is in a format not supported by the requested resource for the requested method.
What am I possibly doing wrong? I went to Deploy -> Rule Deployments and registered my kie-server, along with creating a container called helloworld, and as one can see from Step #5, it worked. Perhaps I am not deploying it correctly?
Btw, I used the following Stack Overflow post as a basis (prior to asking this question)...
Most of the search results from Google just explain how to programmatically create Drools projects by setting up Maven based projects. Am evaluating KIE Drools Workbench to see how easily a non-technical person can use KIE Drools Workbench to generate Drools based rules and execute them.
Am I missing a step? Under Tomcat 7, the deployment only contains the following directories under apache-tomcat-7.0.64/webapps/kie-server-6.2.0.Final-webc:
META-INF
WEB-INF
Thanks for taking the time to read this...
What content type are you using in your POST request header?
As far as I remember, that error message happened if you didn't provide a content-type: application/xml in your request's header.
Hope it helps,
Esteban's answer is right, but you should add another header. The header you need to add is "Authorization", and the value of Authorization is the user that you registered in your application realm for your kie-server, converted to Base64, e.g.:
kieserver:system*01
converted to Base64:
a2llc2VydmVyOnN5c3RlbSowMQ==
Anyway, the complete headers of my request look like this:
Authorization : Basic a2llc2VydmVyOnN5c3RlbSowMQ==
Content-Type : application/xml
I hope it was helpful.
Sorry for my English! :)
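Putting the two answers together, a quick way to test outside Postman is a short Java client - a sketch only, assuming Java 11+ for java.net.http; the endpoint is the container URL from the question and the credentials are the illustrative pair above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class KieServerPost {
    public static void main(String[] args) throws Exception {
        // the batch-execution payload from the question, inlined
        String payload = "<batch-execution lookup=\"defaultKieSession\">"
                + "<insert out-identifier=\"message\" return-object=\"true\" entrypoint=\"DEFAULT\">"
                + "<demo.HelloWorld><message>Joe</message></demo.HelloWorld>"
                + "</insert></batch-execution>";
        // Basic auth credentials (illustrative values from the answer above)
        String auth = Base64.getEncoder()
                .encodeToString("kieserver:system*01".getBytes(StandardCharsets.UTF_8));
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/kie-server-6.2.0.Final-webc/"
                        + "services/rest/server/containers/helloworld"))
                .header("Content-Type", "application/xml")
                .header("Authorization", "Basic " + auth)
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + "\n" + response.body());
    }
}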
I got it working using Postman (Chrome app/plugin) with the Authorization tab set to No Auth. Really cool response!
<response type="SUCCESS" msg="Container helloworld successfully called.">
<results>
<![CDATA[<execution-results>
<result identifier="message">
<demo.HelloWorld>
<message>Joe</message>
</demo.HelloWorld>
</result>
<fact-handle identifier="message" external-form="0:4:1864164041:1864164041:4:DEFAULT:NON_TRAIT"/>
</execution-results>]]>
</results>
</response>

NameNotFoundException when calling a EJB in Weblogic 10.3

First of all, I'd like to underline that I've already read other posts on Stack Overflow (example) with similar questions, but unfortunately I didn't manage to solve this problem with the answers posted there. I have no intention of reposting a question that has already been answered, so if that's the case, I apologize, and I'd be thankful to whoever points out where the solution is posted.
Here is my question:
I'm trying to deploy an EJB in WebLogic 10.3.2. The purpose is to use a specific WorkManager to execute work produced in the scope of this component.
With this in mind, I've set up a WorkManager (named ResponseTimeReqClass-0) in my WebLogic configuration, using the web-based interface (Environment > Work Managers > New).
Here is my session bean definition and descriptors:
OrquestratorRemote.java
package orquestrator;

import javax.ejb.Remote;

@Remote
public interface OrquestratorRemote {
    public void initOrquestrator();
}

OrquestratorBean.java
package orquestrator;

import javax.ejb.Stateless;

import com.siemens.ecustoms.orchestration.eCustomsOrchestrator;

@Stateless(name = "OrquestratorBean", mappedName = "OrquestratorBean")
public class OrquestratorBean implements OrquestratorRemote {
    public void initOrquestrator() {
        eCustomsOrchestrator orquestrator = new eCustomsOrchestrator();
        orquestrator.run();
    }
}
META-INF\ejb-jar.xml
<?xml version='1.0' encoding='UTF-8'?>
<ejb-jar xmlns='http://java.sun.com/xml/ns/javaee'
xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
metadata-complete='true'>
<enterprise-beans>
<session>
<ejb-name>OrquestradorEJB</ejb-name>
<mapped-name>OrquestratorBean</mapped-name>
<business-remote>orquestrator.OrquestratorRemote</business-remote>
<ejb-class>orquestrator.OrquestratorBean</ejb-class>
<session-type>Stateless</session-type>
<transaction-type>Container</transaction-type>
</session>
</enterprise-beans>
<assembly-descriptor></assembly-descriptor>
</ejb-jar>
META-INF\weblogic-ejb-jar.xml
(I've placed the work manager configuration in this file, as I've seen in a tutorial on the internet)
<weblogic-ejb-jar xmlns="http://www.bea.com/ns/weblogic/90"
xmlns:j2ee="http://java.sun.com/xml/ns/j2ee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.bea.com/ns/weblogic/90
http://www.bea.com/ns/weblogic/90/weblogic-ejb-jar.xsd">
<weblogic-enterprise-bean>
<ejb-name>OrquestratorBean</ejb-name>
<jndi-name>OrquestratorBean</jndi-name>
<dispatch-policy>ResponseTimeReqClass-0</dispatch-policy>
</weblogic-enterprise-bean>
</weblogic-ejb-jar>
I've compiled this into a JAR and deployed it on WebLogic as a library shared by the administrative server and all cluster nodes in my solution (it's in the "Active" state).
As I've seen in several tutorials and examples, I'm using the following code in my application in order to call the bean:
InitialContext ic = null;
try {
    Hashtable<String, String> env = new Hashtable<String, String>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
    env.put(Context.PROVIDER_URL, "t3://localhost:7001");
    ic = new InitialContext(env);
} catch (Exception e) {
    System.out.println("\n\t Didn't get InitialContext: " + e);
}

try {
    Object obj = ic.lookup("OrquestratorBean");
    OrquestratorRemote remote = (OrquestratorRemote) obj;
    System.out.println("\n\n\t++ Remote => " + remote.getClass());
    System.out.println("\n\n\t++ initOrquestrator()");
    remote.initOrquestrator();
} catch (Exception e) {
    System.out.println("\n\n\t WorkManager Exception => " + e);
    e.printStackTrace();
}
Unfortunately, this doesn't work. It throws an exception at runtime, as follows:
WorkManager Exception => javax.naming.NameNotFoundException: Unable to resolve 'OrquestratorBean'. Resolved '' [Root exception is javax.naming.NameNotFoundException: Unable to resolve 'OrquestratorBean'. Resolved '']; remaining name 'OrquestratorBean'
After seeing this, I've even tried changing this line
Object obj = ic.lookup("OrquestratorBean");
to this:
Object obj = ic.lookup("OrquestratorBean#orquestrator.OrquestratorBean");
but the result was the same runtime exception.
Can anyone please help me figure out what I am doing wrong here? I'm having a bad time debugging this, as I don't know how to check what may be causing this issue...
Thanks in advance for your patience and help.
Your EJB gets bound under the following JNDI name (when deployed as an EJB module), so the lookup should be:
Object obj = ic.lookup("OrquestratorBean#orquestrator.OrquestratorRemote");
Note that I deployed your code (without the weblogic-ejb-jar.xml) as an EJB module, not as a shared library.
It seems like your mapped-name in ejb-jar.xml ("Orquestrator") may be overriding the mappedName = "OrquestratorBean" setting of the bean.
Have you tried ic.lookup for "Orquestrator"?

Correct way to make datasources/resources a deploy-time setting

I have a web-app that requires two settings:
A JDBC datasource
A string token
I desperately want to be able to deploy a single .war to various different containers (Jetty, Tomcat, GF3 at minimum) and configure these settings at the application level within the container.
My code does this:
InitialContext ctx = new InitialContext();
Context envCtx = (javax.naming.Context) ctx.lookup("java:comp/env");
token = (String) envCtx.lookup("token");
ds = (DataSource) envCtx.lookup("jdbc/datasource");
Let's assume I've used the GlassFish management interface to create two JDBC resources: jdbc/test-datasource and jdbc/live-datasource, which connect to different copies of the same schema, on different servers, with different credentials, etc. Say I want to deploy this to GlassFish and point it at the test datasource; I might have this in my sun-web.xml:
...
<resource-ref>
<res-ref-name>jdbc/datasource</res-ref-name>
<jndi-name>jdbc/test-datasource</jndi-name>
</resource-ref>
...
but
sun-web.xml goes inside my war, right?
surely there must be a way to do this through the management interface
Am I even trying to do the right thing? Do other containers make this any easier? I'd be particularly interested in how jetty 7 handles this since I use it for development.
EDIT Tomcat has a reasonable way to do this:
Create $TOMCAT_HOME/conf/Catalina/localhost/webapp.xml with:
<?xml version="1.0" encoding="UTF-8"?>
<Context antiResourceLocking="false" privileged="true">
<!-- String resource -->
<Environment name="token" value="value of token" type="java.lang.String" override="false" />
<!-- Linking to a global resource -->
<ResourceLink name="jdbc/datasource1" global="jdbc/test" type="javax.sql.DataSource" />
<!-- Derby -->
<Resource name="jdbc/datasource2"
type="javax.sql.DataSource"
auth="Container"
driverClassName="org.apache.derby.jdbc.EmbeddedDataSource"
url="jdbc:derby:test;create=true"
/>
<!-- H2 -->
<Resource name="jdbc/datasource3"
type="javax.sql.DataSource"
auth="Container"
driverClassName="org.h2.jdbcx.JdbcDataSource"
url="jdbc:h2:~/test"
username="sa"
password=""
/>
</Context>
Note that override="false" means the opposite: this setting can't be overridden by web.xml.
I like this approach because the file is part of the container configuration, not the war, yet it's not part of the global configuration either; it's webapp specific.
I guess I expect a bit more from GlassFish, since it is supposed to have a full web admin interface, but I would be happy enough with something equivalent to the above.
For GF v3, you may want to try leveraging the --deploymentplan option of the deploy subcommand of asadmin. It is discussed on the man page for the deploy subcommand.
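For instance (file names are illustrative), a deployment plan is an archive of container-specific descriptors supplied at deploy time rather than packaged in the war:

asadmin deploy --deploymentplan myapp-test-plan.jar myapp.war

Here myapp-test-plan.jar would contain, e.g., the sun-web.xml shown earlier, mapping jdbc/datasource to jdbc/test-datasource, so the same war can be pointed at different datasources per environment.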
We had just this issue when migrating from Tomcat to Glassfish 3. Here is what works for us.
In the Glassfish admin console, configure datasources (JDBC connection pools and resources) for DEV/TEST/PROD/etc.
Record your deployment-time parameters (in our case, database connect info) in a properties file. For example:
# Database connection properties
dev=jdbc/dbdev
test=jdbc/dbtest
prod=jdbc/dbprod
Each web app can load the same database properties file.
Lookup the JDBC resource as follows.
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

/**
 * @param resourceName the resource name of the connection pool (eg jdbc/dbdev)
 * @return Connection a pooled connection from the data source
 *         associated with resourceName
 * @throws NamingException will be thrown if resource name is not found
 */
public Connection getDatabaseConnection(String resourceName)
        throws NamingException, SQLException {
    Context initContext = new InitialContext();
    DataSource pooledDataSource = (DataSource) initContext.lookup(resourceName);
    return pooledDataSource.getConnection();
}
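For instance, a hedged usage sketch (the file name database.properties is hypothetical; the dev key is from the example properties file above):

import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.util.Properties;

public Connection getDevConnection() throws Exception {
    // load the externalized database properties and pick this environment's resource name
    Properties dbProps = new Properties();
    try (InputStream in = new FileInputStream("database.properties")) {
        dbProps.load(in);
    }
    return getDatabaseConnection(dbProps.getProperty("dev")); // e.g. jdbc/dbdev
}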
Note that this is not the usual two-step process involving a lookup via the naming context "java:comp/env". I have no idea whether this works in application containers other than GF3, but in GF3 there is no need to add resource descriptors to web.xml when using the above approach.
I'm not sure I really understand the question/problem.
As an Application Component Provider, you declare the resource(s) required by your application in a standard way (container agnostic) in the web.xml.
At deployment time, the Application Deployer and Administrator is supposed to follow the instructions provided by the Application Component Provider to resolve external dependencies (among other things), for example by creating a datasource at the application server level and mapping its real JNDI name to the resource name used by the application through an application-server-specific deployment descriptor (e.g. the sun-web.xml for GlassFish). Obviously, this is a container-specific step, and thus not covered by the Java EE specification.
Now, if you want to change the database an application is using, you'll have to either:
change the mapping in the application server deployment descriptor - or -
modify the configuration of the existing datasource to make it point to another database.
Having an admin interface doesn't really change anything. If I missed something, don't hesitate to let me know. And just in case, maybe have a look at this previous answer.