Two different data sources in a WebLogic server - Eclipse

I have a WebLogic server configured in Eclipse with a local database as the data source. When debugging issues it would be nice to be able to connect to the database the test group is using. I thought I would be able to clone the default "myserver" in the default mydomain and create new data sources that point to the test group's database. I've done this, but now I'm trying to figure out how to start this new server and deploy my application to it through Eclipse.
I don't really care how it works; I just need to be able to easily switch between the two data sources, either through the WebLogic admin console or through Eclipse via multiple servers. Being able to clone the current server would be nice, since its configuration is rather complex, or to just switch the sources out.
Any ideas on how to accomplish this would be much appreciated.

The JNDI name must be different, because the application connects through the JNDI name: every data source should have a unique JNDI name.
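As a concrete sketch: with two data sources registered under different JNDI names, the application can pick between them by looking up a configurable name. The property name and both JNDI names below are illustrative, not taken from the original setup:

    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public class DataSourceSwitch {
        public static Connection open() throws NamingException, SQLException {
            // Choose the data source at launch, e.g. -Dapp.datasource=jdbc/testGroupDS.
            // Both JNDI names are hypothetical; use the ones configured in WebLogic.
            String jndiName = System.getProperty("app.datasource", "jdbc/localDS");
            DataSource ds = (DataSource) new InitialContext().lookup(jndiName);
            return ds.getConnection();
        }
    }

Switching environments is then a matter of changing one launch property in the Eclipse run configuration rather than editing the server itself.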

Related

Encrypt the Docker images and ship them to the client

We have a Spring Boot application that uses MongoDB.
We need to run this complete application on a machine that is owned by our client and installed at their premises. We need to encrypt the application in such a way that nothing can be extracted from it.
We are planning to do this with Docker. As of now, the plan is to create a docker-compose file and give it to the client, after building the images on our end and pushing them to a repository.
Since anyone can extract the containers and get the data out of them, this approach does not work for us. Is there any way, using Docker itself, to ensure the files cannot be extracted?
The files we need to protect are our jar files and the database.
We have already created a compose file that brings up two containers: one for the Spring Boot application and another for Mongo.
We have also tried extracting the container, and we easily got the jar out of it, along with the DB credentials that we put in the script copied into /docker-entrypoint-initdb.d/.
We need to do something so that the credentials and jar files cannot be extracted.
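For context, a compose file like the one described might look roughly like this; the service names, image names, and credential values are illustrative, and note that everything in it is readable plain text, which is exactly the problem the question raises:

    # docker-compose.yml - sketch of the two-container setup described above
    services:
      app:
        image: registry.example.com/myorg/spring-app:1.0   # pulled from our repository
        depends_on:
          - mongo
        environment:
          SPRING_DATA_MONGODB_URI: mongodb://appuser:apppass@mongo:27017/appdb
      mongo:
        image: mongo:6
        environment:
          MONGO_INITDB_ROOT_USERNAME: root
          MONGO_INITDB_ROOT_PASSWORD: rootpass
        volumes:
          - ./init.js:/docker-entrypoint-initdb.d/init.js   # seed script containing credentials

Anyone who can pull the images or read these files can recover the jar and the credentials.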

Configure Keycloak with PostgreSQL

I'm developing a Spring Boot REST API project using JDBC, with PostgreSQL as the database. I added authorization with Keycloak. I want to use User Federation because I would like to use the users in my PostgreSQL DB. How can I do that, and what alternatives are there that don't use User Federation?
I faced the same problem recently. I have different clients with different RDBMSs, so I decided to address this in a way that lets me reuse the solution across multiple clients.
I published my solution as a multi-RDBMS implementation (Oracle, MySQL, PostgreSQL, SQL Server) that solves simple database federation needs, supporting bcrypt and several types of hashes.
Just build and deploy it on Keycloak and configure it through the admin console, providing the JDBC connection string, login, password, the required SQL queries, and the type of hash used.
Feel free to clone, fork or do whatever you need to solve your issue.
GitHub repo:
https://github.com/opensingular/singular-keycloak-database-federation
I'm doing similar development but with Oracle and JSF.
I created a project with three classes:
one implementing UserStorageProvider, UserLookupProvider and CredentialInputValidator
one implementing UserStorageProviderFactory
one extending AbstractUserAdapter
Then I created another project that builds an ear file containing the jar generated by the first project plus the driver jar (PostgreSQL in your case) inside a lib folder.
Finally, the ear file is copied into the /opt/jboss/keycloak/standalone/deployments/ folder of the Keycloak server, where it gets auto-deployed as an SPI. You then need to add this provider in the User Federation section of the Keycloak administration application.
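A skeletal sketch of the first of those three classes follows. Method signatures changed across Keycloak versions; this follows the older pre-Quarkus SPI, with the lookup and validation logic left as stubs:

    import org.keycloak.component.ComponentModel;
    import org.keycloak.credential.CredentialInput;
    import org.keycloak.credential.CredentialInputValidator;
    import org.keycloak.models.KeycloakSession;
    import org.keycloak.models.RealmModel;
    import org.keycloak.models.UserModel;
    import org.keycloak.models.credential.PasswordCredentialModel;
    import org.keycloak.storage.UserStorageProvider;
    import org.keycloak.storage.user.UserLookupProvider;

    public class DbUserStorageProvider
            implements UserStorageProvider, UserLookupProvider, CredentialInputValidator {

        private final KeycloakSession session;
        private final ComponentModel model;

        public DbUserStorageProvider(KeycloakSession session, ComponentModel model) {
            this.session = session;
            this.model = model;
        }

        @Override
        public UserModel getUserByUsername(String username, RealmModel realm) {
            // Query your users table here and wrap the row in a subclass of
            // AbstractUserAdapter (the third class mentioned above).
            return null; // stub
        }

        @Override
        public UserModel getUserById(String id, RealmModel realm) { return null; }

        @Override
        public UserModel getUserByEmail(String email, RealmModel realm) { return null; }

        @Override
        public boolean supportsCredentialType(String credentialType) {
            return PasswordCredentialModel.TYPE.equals(credentialType);
        }

        @Override
        public boolean isConfiguredFor(RealmModel realm, UserModel user, String credentialType) {
            return supportsCredentialType(credentialType);
        }

        @Override
        public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) {
            // Compare input.getChallengeResponse() against the hash stored in your DB.
            return false; // stub
        }

        @Override
        public void close() {
            // Release any per-request resources here.
        }
    }

The UserStorageProviderFactory counterpart mainly instantiates this class and declares the configuration fields that appear in the admin console.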

Bluemix Liberty SQLDB

I have created an "enterprise template" Liberty server with an EAR application requiring a few SQLDB connections. This is working, and I am able to cf push this server to the Bluemix environment.
My question is: how do I go about packaging the entire content and publishing it to Bluemix in ONE action (i.e., so that others get an instance of the same application running on Liberty with the same SQLDB table setup)?
From my quick browsing of blogs and Q&A, I have only found articles about creating the SQLDB ahead of time, packaging the Liberty runtime as a .zip file, and then using cf push to Bluemix. Because the SQLDB was created ahead of time, the DB connections work.
So is there a way to package the Liberty server together with the SQLDB creation as one entity, perhaps in one "buildpack"? If so, can someone guide me through the steps involved? (Articles, blogs, anything would help.)
You can't do it in one action.
If you want a script that does all the operations in one go, one idea is to create a simple job (in Java, for example) that you can launch from your script.
The job should perform these steps:
1. Connect to the SQLDB Bluemix service using VCAP_SERVICES (for this step, see the documentation: https://www.ng.bluemix.net/docs/#services/SQLDB/index.html#SQLDB)
2. Run the DDL (create table, ...) from the job
3. Close the connection
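A rough sketch of such a job, assuming Gson for JSON parsing; the VCAP_SERVICES field names ("sqldb", "jdbcurl", "username", "password") and the DDL itself are illustrative, so check them against your actual service credentials:

    import com.google.gson.JsonObject;
    import com.google.gson.JsonParser;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SchemaSetupJob {
        public static void main(String[] args) throws Exception {
            // 1. Read the credentials Bluemix injects into the environment.
            JsonObject vcap = JsonParser
                    .parseString(System.getenv("VCAP_SERVICES")).getAsJsonObject();
            JsonObject creds = vcap.getAsJsonArray("sqldb").get(0).getAsJsonObject()
                    .getAsJsonObject("credentials");

            // 2. Connect (the JDBC driver must be on the classpath) and run the DDL.
            try (Connection con = DriverManager.getConnection(
                        creds.get("jdbcurl").getAsString(),
                        creds.get("username").getAsString(),
                        creds.get("password").getAsString());
                 Statement st = con.createStatement()) {
                st.executeUpdate("CREATE TABLE customer ("
                        + "id INT PRIMARY KEY, name VARCHAR(100))");
            } // 3. The connection closes automatically here.
        }
    }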
Another option is to package a database migration helper (something like Flyway) in the application. Then you can invoke it from Java on application startup (we've had good luck with @Singleton @Startup EJBs for this pattern). The migrations will run when needed but leave the database alone otherwise. Another advantage of this pattern is that, as the name suggests, you can use migrations to update the tables of an existing installation.
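A minimal sketch of that startup pattern, assuming Flyway's fluent API and a data source bound at the JNDI name jdbc/myDS (both are assumptions, not details from the original answer); the SQL migration scripts themselves live under db/migration on the classpath by default:

    import javax.annotation.PostConstruct;
    import javax.annotation.Resource;
    import javax.ejb.Singleton;
    import javax.ejb.Startup;
    import javax.sql.DataSource;
    import org.flywaydb.core.Flyway;

    @Singleton
    @Startup
    public class MigrationRunner {

        // The JNDI name is an assumption; use whatever your server configuration defines.
        @Resource(lookup = "jdbc/myDS")
        private DataSource dataSource;

        @PostConstruct
        public void migrate() {
            // Applies any pending migrations on application startup and leaves
            // the database alone when the schema is already up to date.
            Flyway.configure().dataSource(dataSource).load().migrate();
        }
    }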

FuelPHP on remote server

I am having a really hard time trying to get FuelPHP to work with my remote server and remote database. I know it has the oil utility, which generates database tables for you, but I was only able to do this locally, not remotely.
Is there a tutorial, or a better way of deploying or running FuelPHP on my remote server? I have seen a bunch of tutorials that are helpful for a local server, but I have not found anything about creating sites on a remote server.
Maybe I am going about this the wrong way, but I am frustrated and confused. I just want it to work on my remote server the way it does on my local server.
You should use migrations to create and modify your database schema. It allows you to change the schema and rollback if needed, using oil refine migrate.
Using it implies, however, that you have command-line access to your remote server. If it's one of those cheap FTP-only hosting options, you've got a challenge. You can create a controller that uses the Migrate class to run migrations, which you can call from the browser after you've FTP'd the updated code to the server.
If you go this route, make sure you secure it properly!
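For the command-line route, the typical cycle looks something like this (the migration name and fields are only an example, and the generator syntax may differ between FuelPHP versions):

    php oil generate migration create_posts title:varchar[50] body:text
    php oil refine migrate        # apply pending migrations
    php oil refine migrate:down   # roll back the most recent migration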

Building two different versions of a given war with Maven profiles and filtering from Eclipse

I am trying to use maven profiles and filtering in order to produce two different versions of a given web archive (war):
A first one for local deployment to my local machine on localhost
A second one for remote deployment to cloudfoundry
There are a number of properties that differ according to whether the app is deployed to my local machine or to cloudfoundry.
Of course, the difficult bit is that I am trying to do all this from STS/Eclipse and deploy from Eclipse to my local Tomcat and to cloudfoundry...
Can anyone please provide advice, tips or suggestions?
If you are using Spring 3.1+, the "profile" attribute of <beans> in the Spring bean configuration XML is the best choice. Take a look at the doc here: http://docs.cloudfoundry.com/frameworks/java/spring/spring.html#using-spring-profiles-to-conditionalize-cloud-foundry-configuration
Basically you need to specify at least two <beans> elements: one for your local properties (profile="default") and one for the properties used when deployed to CF, defined as <beans profile="cloud">. When running locally, the beans within "cloud" are ignored and those in "default" take effect. When pushed to CF, Cloud Foundry detects the profile named "cloud" and, better still, injects the corresponding datasource connection info of the services provisioned by CF itself. You can find the detailed CF-specific properties in that doc as well.
For more information about the profile attribute, see the doc here: http://blog.springsource.com/2011/02/11/spring-framework-3-1-m1-released/
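A minimal sketch of such a configuration; the bean class, URL, and credentials in the "default" profile are illustrative, and the <cloud:data-source> element assumes the cloudfoundry-runtime library is on the classpath:

    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:cloud="http://schema.cloudfoundry.org/spring"
           xsi:schemaLocation="
               http://www.springframework.org/schema/beans
               http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
               http://schema.cloudfoundry.org/spring
               http://schema.cloudfoundry.org/spring/cloudfoundry-spring.xsd">

        <!-- Used when no profile is active, i.e. running locally. -->
        <beans profile="default">
            <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
                <property name="driverClassName" value="org.postgresql.Driver"/>
                <property name="url" value="jdbc:postgresql://localhost:5432/mydb"/>
                <property name="username" value="dev"/>
                <property name="password" value="dev"/>
            </bean>
        </beans>

        <!-- Cloud Foundry activates the "cloud" profile and injects the
             connection info of the bound database service. -->
        <beans profile="cloud">
            <cloud:data-source id="dataSource"/>
        </beans>
    </beans>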
Consider having a single project per generated artifact: one project producing your local deployment and one producing your cloudfoundry deployment.
Overlays (http://maven.apache.org/plugins/maven-war-plugin/overlays.html) are the officially sanctioned way to bake extra files into an existing WAR file, producing a new WAR artifact. Very useful, but it may be too slow for comfort while developing.
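For reference, a minimal profile-plus-filtering setup along the lines the question describes might look like this pom.xml fragment (the property names and values are illustrative):

    <profiles>
        <profile>
            <id>local</id>
            <activation><activeByDefault>true</activeByDefault></activation>
            <properties>
                <db.url>jdbc:postgresql://localhost:5432/mydb</db.url>
            </properties>
        </profile>
        <profile>
            <id>cloudfoundry</id>
            <properties>
                <db.url>jdbc:postgresql://db.example.com:5432/mydb</db.url>
            </properties>
        </profile>
    </profiles>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <configuration>
                    <!-- Replace ${db.url} placeholders in WEB-INF resources. -->
                    <webResources>
                        <resource>
                            <directory>src/main/webapp/WEB-INF</directory>
                            <targetPath>WEB-INF</targetPath>
                            <filtering>true</filtering>
                        </resource>
                    </webResources>
                </configuration>
            </plugin>
        </plugins>
    </build>

Each variant is then built with mvn package or mvn package -P cloudfoundry; wiring those two commands into separate Eclipse/STS run configurations keeps the one-click deploys the question asks for.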