Recently I began working with Teiid and Wildfly. I have a user defined function (UDF) that adds custom functionality to Teiid, and it works as expected. However, I need to modify it further and would like to use CDI to inject a bean from the Wildfly app server. I know that the UDF isn't managed by the container (it is a Wildfly module with an associated module.xml file deployed as a jar), so I've added (what seemed to be) necessary dependencies to module.xml but it doesn't work.
Is it possible to use CDI in a UDF with Teiid / Wildfly, and if so, how?
No, it is not possible. Although Teiid resides in WildFly, it only uses WildFly's infrastructure for features such as transactions, security, data sources, and administration. It is not a Java EE component itself, so there is no direct way to do this. If you explain what you are trying to accomplish, maybe we can offer further guidance on alternatives.
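One workaround that is sometimes possible (a sketch only, not something Teiid provides out of the box) is to resolve the WildFly-managed bean through a plain JNDI lookup from inside the static UDF method, assuming the bean is bound to a global JNDI name; MyService and "java:global/myapp/MyServiceBean" below are placeholder names:

    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    // Hypothetical sketch: CDI injection is not available inside a Teiid UDF module,
    // so the service is resolved via JNDI instead. The interface and JNDI name are
    // placeholders for whatever your application actually binds.
    public class MyFunctions {

        public interface MyService {
            String enrich(String input);
        }

        public static String enrich(String input) {
            try {
                InitialContext ctx = new InitialContext();
                MyService service = (MyService) ctx.lookup("java:global/myapp/MyServiceBean");
                return service.enrich(input);
            } catch (NamingException e) {
                throw new RuntimeException("JNDI lookup failed", e);
            }
        }
    }

Whether the module actually has JNDI access depends on the dependencies declared in module.xml, so treat this strictly as a starting point.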
Related
We have a webapp that is currently running on one instance of Apache Tomcat with one database instance, but the increase in traffic will soon (probably) force us to resort to load-balancing several webapp instances, and we've run into a problem that seems to have no easy answer.
Currently our JDBC DataSource is configured as Resource-local rather than Transactional, and after some searching, everyone recommends using Transactional, which requires a JTA provider. No real justification is given for why I shouldn't just stick with the current scenario, where a servlet filter catches any unhandled exception and rolls back the active transaction. Besides that, the only standalone JTA provider I've found (i.e. not bundled with five more JEE technologies) that is still maintained is Bitronix. The other alternative is to move off Tomcat and use Glassfish, since it is a full Java EE platform and we also use JavaMail, JPA and JAX-RS.
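For reference, a minimal sketch of that filter approach, assuming a resource-local EntityManager bound to the request thread; PersistenceUtil.currentTransaction() is a hypothetical helper standing in for however the application exposes the active EntityTransaction:

    import java.io.IOException;
    import javax.persistence.EntityTransaction;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    // Sketch of the "roll back on unhandled exception" filter described above.
    // PersistenceUtil is a hypothetical helper that returns the thread-bound,
    // resource-local EntityTransaction.
    public class TransactionRollbackFilter implements Filter {

        public void init(FilterConfig config) { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            EntityTransaction tx = PersistenceUtil.currentTransaction();
            try {
                tx.begin();
                chain.doFilter(req, res);
                tx.commit();
            } catch (RuntimeException e) {
                if (tx.isActive()) {
                    tx.rollback(); // undo the resource-local transaction on any unhandled exception
                }
                throw e;
            }
        }

        public void destroy() { }
    }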
Only one transaction scenario uses Serializable isolation level.
As for the database, we may be looking too far ahead to think of distributed storage like Postgres-XL or pgpool, but if we make the wrong choice now it will be harder to fix later.
My questions are as follows:
Do synchronous database replication tools and JTA complement each other, hinder each other, or just perform the same consistency checks twice?
Do we need JTA if we only have one database, but multiple webapp instances?
Do we need JTA if we have multiple databases and multiple webapp instances?
Should we just switch to Glassfish or something like TomEE?
Supposedly there are ways we can keep using Hibernate as our JPA provider under both. It would be tedious to have to rewrite all our native queries to use positional parameters, because EclipseLink and OpenJPA don't support named parameters in native queries. That little extra feature makes Hibernate worth choosing over the other JPA providers for me.
Pardon me if I can't give more details, but I'm really new to WildFly. I'm using version 9.0.2.
I have deployed jbpm-console, drools, and dashboard with no problems. But when I restart WildFly using the JBoss CLI and log in again, the repositories no longer appear in the web interface or on disk (at least nothing that grep or find will show).
I'm using the H2 database. I'm not even sure where to look; does anyone have any idea?
Thanks in advance!
After enough reading through the docs, it seems that jBPM has to be explicitly configured to persist. From the docs:
"By default, the engine does not save runtime data persistently. This means you can use the engine completely without persistence (so not even requiring an in-memory database) if necessary, for example for performance reasons, or when you would like to manage persistence yourself. It is, however, possible to configure the engine to use persistence by configuring it to do so. This usually requires adding the necessary dependencies, configuring a datasource and creating the engine with persistence configured."
https://docs.jboss.org/jbpm/v5.3/userguide/ch.core-persistence.html
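Based on that chapter of the jBPM 5.x documentation, the persistence setup roughly amounts to creating the session through JPAKnowledgeService with an Environment that carries the EntityManagerFactory. A minimal sketch; the persistence-unit name "org.jbpm.persistence.jpa" is the one used in the docs, and the datasource/transaction-manager wiring shown there is omitted here:

    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    import org.drools.KnowledgeBase;
    import org.drools.KnowledgeBaseFactory;
    import org.drools.persistence.jpa.JPAKnowledgeService;
    import org.drools.runtime.Environment;
    import org.drools.runtime.EnvironmentName;
    import org.drools.runtime.StatefulKnowledgeSession;

    // Sketch following the jBPM 5.x persistence chapter linked above. Assumes a
    // persistence unit named "org.jbpm.persistence.jpa" exists in persistence.xml
    // and points at the configured datasource (H2 in this setup).
    public class PersistentSessionFactory {

        public static StatefulKnowledgeSession createSession(KnowledgeBase kbase) {
            EntityManagerFactory emf =
                    Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");

            Environment env = KnowledgeBaseFactory.newEnvironment();
            env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, emf);

            // Runtime state of this session is stored via JPA instead of only in memory.
            return JPAKnowledgeService.newStatefulKnowledgeSession(kbase, null, env);
        }
    }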
I'm trying to find the best way to programmatically determine whether my program is running on JBoss 5 or JBoss 7 (EAP 6.1). The approaches I've found so far are JBoss 5- or JBoss 7-specific, which doesn't work because the code has to work on both. I tried both solutions from here: How do I programmatically obtain the version in JBoss AS 5.1? and they didn't work. One complained about org.jboss.Main not existing in JBoss 7, the other complained about not finding "jmx/rmi/RMIAdaptor".
The only way I can see is to do a Class.forName for "org.jboss.Version" (which should be found on JBoss 5) and, if that fails, a Class.forName for "org.jboss.util.xml.catalog.Version" (JBoss 7). But that seems like a terrible idea.
The reason I need to know whether the WAR is running on JBoss 5 or 7 is that some custom files are located in different places in each. So it's like "if JBoss 5, execute this piece of code, if JBoss 7, execute the other".
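For reference, a minimal sketch of that Class.forName fallback; the two marker classes are the ones mentioned above and may not be reliable markers on every installation:

    // Sketch of the Class.forName fallback described above. The marker classes come
    // from the question; they may not be reliable across every installation.
    public final class JBossVersionDetector {

        public enum JBossMajor { JBOSS_5, JBOSS_7, UNKNOWN }

        public static JBossMajor detect() {
            if (isPresent("org.jboss.Version")) {                  // present on JBoss AS 5
                return JBossMajor.JBOSS_5;
            }
            if (isPresent("org.jboss.util.xml.catalog.Version")) { // present on JBoss AS 7 / EAP 6
                return JBossMajor.JBOSS_7;
            }
            return JBossMajor.UNKNOWN;
        }

        private static boolean isPresent(String className) {
            try {
                Class.forName(className);
                return true;
            } catch (ClassNotFoundException e) {
                return false;
            } catch (NoClassDefFoundError e) {
                return false;
            }
        }
    }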
OK, I just saw what the problem is.
I would suggest thinking about design issues and refactoring your software.
If you want to provide your software in different environments, separate your logic from technology dependencies.
Build facades and interfaces to meet environment-specific requirements.
In my opinion that is much better than thinking we must support all integration platforms in all their versions, which is practically impossible.
So decouple your business logic and offer specific interfaces. These interfaces (adapters) are much simpler to implement and to maintain.
Hope it helps.
UPDATE DUE TO COMMENT.
I think a solution for JBoss 4 to 6 is to use JBoss's MBean server to look up the registered web application associated with the deployed WAR file.
I suggest first looking up the registered MBean of the web application manually using the JBoss jmx-console. The name of the web application should be found under the "web" or "web-deployment" section of the jmx-console.
Once you have found that name, you can implement your own JMX-based lookup mechanism to check for it.
Here is a tutorial; it is pretty old, but I think it gives you an idea of how to do it. There must be more tutorials on this problem:
http://www.theserverside.com/news/1364648/Using-JMX-to-Manage-Web-Applications
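A rough sketch of such a JMX lookup; the "jboss.web" domain and the Manager pattern are examples, so verify the actual ObjectName in your jmx-console first:

    import java.lang.management.ManagementFactory;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;
    import javax.management.MBeanServer;
    import javax.management.MBeanServerFactory;
    import javax.management.ObjectName;

    // Rough sketch of the JMX-based lookup described above. The "jboss.web" domain and
    // the query pattern are examples; check the real ObjectName in the jmx-console.
    public class WebAppMBeanLookup {

        public static boolean isDeployed(String contextName) throws Exception {
            // On older JBoss versions the web container MBeans may live in JBoss's own
            // MBeanServer rather than the platform one, so check every server in the JVM.
            List<MBeanServer> servers =
                    new ArrayList<MBeanServer>(MBeanServerFactory.findMBeanServer(null));
            if (!servers.contains(ManagementFactory.getPlatformMBeanServer())) {
                servers.add(ManagementFactory.getPlatformMBeanServer());
            }
            ObjectName pattern = new ObjectName("jboss.web:type=Manager,*");
            for (MBeanServer server : servers) {
                Set<ObjectName> names = server.queryNames(pattern, null);
                for (ObjectName name : names) {
                    if (name.getCanonicalName().contains(contextName)) {
                        return true;
                    }
                }
            }
            return false;
        }
    }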
For JBoss 7 I can only give you the hint that it has a completely new, modular architecture (JBoss Modules and MSC services, with optional OSGi support), so to look up other services you should have a look at that mechanism.
In any case, you do not have direct access to the file system and the deployment directory from an application deployed within a JEE container, except through the mechanisms provided by the container: JNDI lookup, the JMX managed bean mechanism, or the Java Connector Architecture (JCA), which makes no sense in your case.
This is not a full answer, just a suggestion, since the implementations are completely different.
One way could be to use interceptors, which are executed during bootstrap and before any EJB invocation; there you have access to the invocation context, in other words the EJB container.
I can't give you a complete example, but this would be an access point to start from.
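A bare-bones sketch of such an interceptor (standard EJB 3 @AroundInvoke; the class name and logging are illustrative):

    import javax.interceptor.AroundInvoke;
    import javax.interceptor.InvocationContext;

    // Bare-bones EJB 3 interceptor sketch as a possible access point. What you do with
    // the invocation context (e.g. branching on the detected server version) is up to you.
    public class ServerAwareInterceptor {

        @AroundInvoke
        public Object inspect(InvocationContext ctx) throws Exception {
            // The context exposes the target bean, the invoked method and its parameters.
            System.out.println("Invoking " + ctx.getMethod().getName()
                    + " on " + ctx.getTarget().getClass().getName());
            return ctx.proceed();
        }
    }

The interceptor is then bound to a bean with @Interceptors(ServerAwareInterceptor.class) or via ejb-jar.xml.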
Another access point is to check for system-wide JMX beans by looking through the administration console of the JBoss server.
You can inject JMX bean state into your application through the context mechanism.
For versions 4 to 6, take a look at the JMX managed bean mechanism. The JMX architecture is the main concept of JBoss 3 to 6, so at that point you can influence and maintain JBoss's behaviour.
Additionally, I think you will see differences between the 4-6.x versions and 7.0, because 7 is a completely new architecture. Since 7.0, the JMX-microkernel-based architecture no longer exists.
As mentioned in the article https://community.jboss.org/wiki/DataSourceConfigurationInAS7, JBoss 7 provides two main ways to configure a data source.
What is the BEST practice for configuring a data source in JBoss AS 7? Is it
As a module?
As a deployment?
(The same question has been asked in the thread https://community.jboss.org/thread/198023, but no one has provided an acceptable answer yet.)
The guide JBoss AS7 DS configuration says the recommended way is to configure the datasource as a deployment.
But according to the discussion at Jboss 7 DS configuration (JBoss Community Discussion), page 54 of the guide mentions that the recommended way to deploy a JDBC driver is the modular approach.
Personally, though, I would say that the better (not necessarily the best) approach to configuring the JDBC driver is to use modules, for three reasons:
The JDBC driver will generally not change.
Reusability: you can use the same module across various applications instead of deploying the jar with each application, which avoids duplication.
Space efficiency: the module approach reduces the size of your EAR/WAR, since you do not need to ship the jar inside the package.
Hence I would argue that the better of the two approaches is via modules.
@Mukul Goel
It's not necessary to include it in the EAR of your application; it's sufficient to put the .jar inside the deployments folder, so:
no need to embed in ear
no need to create a module
Just deploy it in the deployments folder or via the admin console.
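Either way, the packaging choice does not change the application code: the datasource is resolved through its JNDI name. A small sketch, assuming the datasource was registered as java:jboss/datasources/MyDS (an example name):

    import javax.annotation.Resource;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    // Small sketch: whether the JDBC driver is installed as a module or as a deployment,
    // the application resolves the datasource by its JNDI name.
    // "java:jboss/datasources/MyDS" is an example name.
    public class DataSourceClient {

        // Container-managed injection inside a managed component (servlet, EJB, ...).
        @Resource(lookup = "java:jboss/datasources/MyDS")
        private DataSource injectedDs;

        // Manual lookup for code that is not container-managed.
        public static DataSource lookup() throws NamingException {
            return (DataSource) new InitialContext().lookup("java:jboss/datasources/MyDS");
        }
    }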
I'm a beginner in OSGi. My project consists of developing and executing, within an OSGi container (Apache Felix, the distributable jar), a persistence bundle (using JPA) that communicates with a MySQL database through a JPA provider (Hibernate).
I read about the JPA specification for OSGi, so, if I have understood correctly, I must use an OSGi-aware JPA provider that implements the OSGi enterprise JPA specification; this JPA provider will then track registered persistence bundles and create an EntityManagerFactory for each of them?
So what is the difference between using a JPA provider directly to create the EntityManagerFactory (Persistence.createEntityManagerFactory("xx")) and retrieving it from the service registry:
ServiceReference[] serviceReferences = context.getServiceReferences(
        EntityManagerFactory.class.getName(),
        String.format("(%s=%s)",
                EntityManagerFactoryBuilder.JPA_UNIT_NAME,
                persistenceUnit));
I would prefer not to use any container (Apache Karaf, Geronimo, Spring DM, ...). Is it sufficient to install and start in the OSGi container, for example, "org.apache.aries.jpa.api" as an implementation of the OSGi enterprise JPA spec and then simply retrieve the "EntityManagerFactory" service associated with my persistence unit name from the registry, or should I also register a PersistenceProvider such as HibernatePersistence myself so that I can declare it as the "provider" in my persistence.xml file?
I found many discussions on this topic here, but I'm still having trouble.
Thanks
OSGi is about services, and services are extremely easy to consume in OSGi with the proper setup. You show a very old-style example with service references, and I agree that in that model it is a lot easier to just use the old-fashioned JPA way.
However, if you use Declarative Services, using services becomes very lightweight. You get injected with an EntityManagerFactory service that is completely prepared for you. The deployer could have tuned all kinds of settings with Config Admin, connection pools, another JPA provider, etc. It is a clear separation of concerns.
By not really knowing where this thing comes from and who implements it, you make fewer assumptions in your code, and your code will therefore be less error-prone and more reusable. In principle, the fact that you use Hibernate and MySQL is completely irrelevant to most of your code. Yes, I do know that neither JPA nor SQL is very portable in practice, but there are many aspects that are ignorant of the differences. It is the deployer who is then, in the end, responsible for putting together the parts that work.
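For illustration, a minimal DS component along those lines (annotation-based DS 1.3; "my-unit" is an example persistence-unit name, and the target filter on osgi.unit.name is optional if only one unit is deployed):

    import javax.persistence.EntityManagerFactory;

    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    // Minimal Declarative Services sketch: the component simply receives the
    // EntityManagerFactory service that the OSGi JPA provider (e.g. Aries JPA)
    // published for the persistence unit. "my-unit" is an example unit name.
    @Component
    public class PersonRepository {

        @Reference(target = "(osgi.unit.name=my-unit)")
        private EntityManagerFactory emf;

        @Activate
        void activate() {
            // The factory arrives fully configured by the deployer (driver, pool, provider).
            System.out.println("EMF ready: " + emf.getProperties());
        }
    }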
Now, Declarative Services (DS) is of course an extra bundle, but after using OSGi for 15 years I consider any OSGi developer NOT using DS, well, let me not go too deep into that to keep it civil :-) If I could go back to the beginning of OSGi, DS would have been built into the framework; it is suitable even for the lowest level to program with.