I have an application written in Java EE with JPA; I use Payara as the application server.
In my application's persistence.xml I have
<persistence-unit name="XYZ" transaction-type="JTA" >
On the Payara side, I have a JDBC resource called XYZ with a JDBC connection pool.
The database used by the application is dedicated to one customer.
Now I want to deploy this app (unchanged) to other customers (on the same server), and each customer needs a separate database. As it stands, to achieve that I would need to modify the persistence unit name for each customer and recompile the app, and of course define a JDBC resource on the Payara side.
I would like to somehow have this persistence unit name being dynamic, so I don't have to recompile the app for each customer. Is there any simple way to achieve that?
In the worst case, I can just modify persistence.xml in the compiled war file, but I hope a more elegant solution exists.
Please advise how to solve that issue.
You can set up a separate instance inside Payara for every customer and deploy your application to these instances.
Then you are able to set a variable with a different value for each instance.
This variable can then be used inside your persistence.xml.
Using variables is documented here:
https://docs.payara.fish/community/docs/documentation/payara-server/server-configuration/var-substitution/README.html
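As a rough sketch, assuming Payara's variable substitution applies to the data source reference in persistence.xml (CUSTOMER_DB is a hypothetical variable name; check the linked docs for the exact syntax and which files support substitution), each instance could then resolve its own JDBC resource:

```xml
<persistence-unit name="XYZ" transaction-type="JTA">
    <!-- Payara replaces ${ENV=CUSTOMER_DB} with the value of the
         CUSTOMER_DB environment variable set per instance, so each
         customer's deployment resolves to its own JDBC resource -->
    <jta-data-source>jdbc/${ENV=CUSTOMER_DB}</jta-data-source>
</persistence-unit>
```

With that in place, the war itself never changes; only the per-instance variable and the matching JDBC resource differ between customers.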
I am developing a Spring Boot REST API project using JDBC, and the database is PostgreSQL. I added authorization with Keycloak. I want to use User Federation because I would like to authenticate against the users in my PostgreSQL DB. How can I set this up, and what are the alternatives if I don't use User Federation?
I have faced the same problem recently. I have different clients with different RDBMS, so I have decided to address this problem so that I could reuse my solution across multiple clients.
I published my solution as a multi-RDBMS implementation (Oracle, MySQL, PostgreSQL, SQL Server) to solve simple database federation needs, supporting bcrypt and several types of hashes.
Just build and deploy this solution on Keycloak and configure it through the admin console, providing the JDBC connection string, login, password, the required SQL queries and the type of hash used.
Feel free to clone, fork or do whatever you need to solve your issue.
GitHub repo:
https://github.com/opensingular/singular-keycloak-database-federation
I'm doing similar development but with Oracle and JSF.
I created a project with three classes:
one implementing UserStorageProvider, UserLookupProvider and CredentialInputValidator
one implementing UserStorageProviderFactory
one extending AbstractUserAdapter
Then I created another project which creates an ear file containing the jar file generated in the previous project plus the driver jar file (of PostgreSQL in your case) inside a lib folder.
Finally, the ear file is copied into the /opt/jboss/keycloak/standalone/deployments/ folder of the Keycloak server and it gets auto-deployed as an SPI. It is necessary to add this provider in the User Federation section of the Keycloak administration application.
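As an illustration, the resulting EAR might be laid out roughly like this (jar and file names are hypothetical); the provider jar registers its factory through the standard ServiceLoader mechanism that Keycloak scans on deployment:

```
my-user-federation.ear
├── my-user-storage-provider.jar
│   └── META-INF/services/org.keycloak.storage.UserStorageProviderFactory
│       (a text file containing one line: the fully qualified name of your factory class)
└── lib/
    └── postgresql-42.x.jar   (the JDBC driver, PostgreSQL in your case)
```

The driver sits in lib/ so it is on the classpath of the provider jar without being bundled into it.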
Recently I began working with Teiid and Wildfly. I have a user defined function (UDF) that adds custom functionality to Teiid, and it works as expected. However, I need to modify it further and would like to use CDI to inject a bean from the Wildfly app server. I know that the UDF isn't managed by the container (it is a Wildfly module with an associated module.xml file deployed as a jar), so I've added (what seemed to be) necessary dependencies to module.xml but it doesn't work.
Is it possible to use CDI in a UDF with Teiid / Wildfly, and if so, how?
No, it is not possible. Although Teiid is a resident of WildFly, it only uses WildFly's infrastructure for features like transactions, security, data sources, administration etc. It is not part of Java EE or anything like it, so there is no direct way to do this. If you explain what you are trying to accomplish, maybe we can offer further guidance on alternatives.
I have created an "enterprise template" Liberty server with an EAR file application requiring a few SQLDB connections. This is working and I am able to cf push this server to the Bluemix environment.
My question is how to go about packaging the entire content and publishing it to Bluemix in ONE action (i.e., so they will have an instance of the same application running on Liberty with the same SQLDB table setup).
From my quick browsing of the blogs and Q&A, I have only found articles talking about creating the SQLDB ahead of time, packaging the Liberty runtime as a .zip file, and then using cf push to Bluemix. Because the SQLDB was created ahead of time, the DB connections would work.
So is there a way to package the Liberty server with the SQLDB creation as one entity into perhaps one "buildpack"? If so, can someone guide me on the steps involved? (or articles/blogs, anything would help)
You can't do it.
If you want a script that does all the operations in one step, one idea is to create a simple job (in Java, for example) that you launch from your script.
The job should perform these steps:
- connect to the SQLDB Bluemix service using VCAP_SERVICES (for this step you can see the documentation: https://www.ng.bluemix.net/docs/#services/SQLDB/index.html#SQLDB)
- run the DDL (create table, ...) in the job
- close the connection
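A minimal sketch of such a job in plain JDBC (the URL format, credential fields, and DDL are assumptions for illustration; in a real job the host, port and database name would be parsed out of the VCAP_SERVICES JSON, whose exact layout is in the documentation above):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

class SqldbSetupJob {

    // Build a DB2-style JDBC URL from credentials that would normally be
    // read out of the VCAP_SERVICES environment variable.
    static String jdbcUrl(String host, String port, String db) {
        return "jdbc:db2://" + host + ":" + port + "/" + db;
    }

    // Connect, run the one-time DDL, and close the connection
    // (try-with-resources closes both the statement and the connection).
    static void runDdl(String url, String user, String password) throws SQLException {
        try (Connection con = DriverManager.getConnection(url, user, password);
             Statement st = con.createStatement()) {
            st.executeUpdate(
                "CREATE TABLE CUSTOMER (ID INT PRIMARY KEY, NAME VARCHAR(100))");
        }
    }
}
```

The script would then just invoke this job once after the cf push, against the bound SQLDB service.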
Another option is to package a database migration helper (something like Flyway) in the application. Then you can invoke it from Java on application startup (we've had good luck with @Singleton @Startup EJBs for this pattern). The migration will run when needed, but leave the database alone otherwise. Another advantage of this pattern is that you can use the migrations to update the schema of an existing database (as the name suggests).
I'm a beginner in OSGi. My project consists of developing and executing, within an OSGi container (Apache Felix, the distributable jar), a persistence bundle (using JPA) that communicates with the database (MySQL) through a JPA provider (Hibernate).
I read about the JPA specification for OSGi. If I have understood correctly, I must use a JPA provider for OSGi that implements the OSGi JPA enterprise specification; this JPA provider will track registered persistence bundles and create an EntityManagerFactory for each of them?
So what is the difference between using a JPA provider directly to create the EntityManagerFactory (Persistence.createEntityManagerFactory("xx")) and retrieving it from the registry:
serviceReferences = context.getServiceReferences(
        EntityManagerFactory.class.getName(),
        String.format("(%s=%s)",
                EntityManagerFactoryBuilder.JPA_UNIT_NAME,
                persistenceUnit));
I would prefer not to use any container (Apache Karaf, Geronimo, Spring DM, ...). Is it sufficient to install and start in the OSGi container, for example, org.apache.aries.jpa.api as an implementation of the OSGi enterprise JPA spec, and then simply retrieve an EntityManagerFactory service from the registry associated with my persistence unit name? Or should I also register a PersistenceProvider such as HibernatePersistence myself, so I can declare it as the "provider" in my persistence.xml file?
I found many discussions on this topic here, but I am still having trouble.
Thanks
OSGi is about services and services are extremely easy to consume in OSGi with the proper setup. You show a very old style example with service references and I do agree, in that model it is a lot easier to just use the old fashioned JPA way.
However, if you use Declarative Services, using services becomes very lightweight. You get injected with an EntityManagerFactory service that is completely prepared for you. The deployer could have tuned all kinds of settings with Config Admin, connection pools, another JPA provider, etc. It is a clear separation of concerns.
By not knowing where this thing comes from and who implements it, you make fewer assumptions in your code, and your code will therefore be less error prone and more reusable. In principle, the fact that you use Hibernate and MySQL is completely irrelevant to most of your code. Yes, I do know that neither JPA nor SQL is very portable in practice, but there are many aspects that are ignorant of the differences. It is the deployer who is, in the end, responsible for putting together the parts that work.
Now Declarative Services (DS) is of course an extra bundle, but after using OSGi for 15 years now, what I think of any OSGi developer NOT using DS, well, let me not go too deep into that to keep it civil :-) If I could go back to the beginning of OSGi, DS would have been built into the framework; it is the lowest level to program with.
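As an illustration of the DS approach (component and class names here are hypothetical), instead of the lookup code above, the component simply declares a reference to the EntityManagerFactory service in its OSGI-INF descriptor and gets it injected; the `osgi.unit.name` property is what the OSGi JPA spec uses to tag the EMF with its persistence unit:

```xml
<!-- OSGI-INF/persistence-component.xml: DS injects the EMF registered
     for the "xx" persistence unit; no registry lookup in the component -->
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0"
               name="persistence.component">
    <implementation class="com.example.PersistenceComponent"/>
    <reference name="emf"
               interface="javax.persistence.EntityManagerFactory"
               target="(osgi.unit.name=xx)"
               bind="setEntityManagerFactory"/>
</scr:component>
```

The component class only needs a `setEntityManagerFactory(EntityManagerFactory emf)` method; who provides the EMF (Aries JPA, Hibernate, or something else) is decided entirely by the deployer.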
I have an issue in GlassFish with properties when setting up a web application. We are moving from Jetty to a clustered environment set up with GlassFish on Amazon AWS.
Conventionally speaking, when dealing with Servlets you are meant to use a .properties file when you want to pass in environment variables; however, this causes issues in a distributed environment (you would have to place the .properties file on every cluster node). GlassFish has the ability to configure properties of the web container through its Admin Console, which means the properties are automatically distributed through the cluster.
The problem is, I am getting random behavior when retrieving the variables. The first time I ran a test application I was able to retrieve the variables; however, now it no longer works.
Basically I am setting the environment variables through the admin UI. Under Configurations there are 3 configuration settings: one for the cluster (usually named .config), one default-config and one server-config. Under Web Container, I have put a test property called "someVal" in all 3 of them.
I then created a quick Scalatra app in Scala (which uses Servlet 2.5) and I used this line to attempt to get the properties
getServletContext.getInitParameter("someVal")
It always returns null. Any ideas what I am doing incorrectly?
Update
It appears what I was attempting isn't the "correct" way of doing things. So my question is: what is the standard way of providing application-specific settings (outside of the .war and outside of runtime) when dealing with clusters in GlassFish? myfear stated that using a database is the standard approach; however, I use these configuration settings themselves to define the JDBC connection.
I got it. You are referring to the Web Container Settings
http://docs.oracle.com/cd/E18930_01/html/821-2431/abedw.html
I'm afraid that this has never been thought of as a way of providing application-specific configuration, and I strongly believe that you will never be able to access those properties from the servlet context.
So, you could (should) use the servlet init params in web.xml if you are talking about application-specific information. If you use
getServletContext().setInitParameter("param", "value");
you might be able to set them (at least for the runtime of the application). I'm not sure about cluster replication here. The normal way would be to have your configuration settings in the database.
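For settings that can live inside the .war, a context parameter in web.xml (the name someVal mirrors the question) is what getServletContext().getInitParameter("someVal") actually reads:

```xml
<!-- web.xml: read with getServletContext().getInitParameter("someVal") -->
<context-param>
    <param-name>someVal</param-name>
    <param-value>some-value</param-value>
</context-param>
```

Being packaged in the war, this is replicated to every cluster node automatically, though it cannot vary per node or be changed without redeploying.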