My database values got wiped out after I manually configured the database in spring boot - spring-data-jpa

I tried to configure a database connection in my Spring Boot application manually using a DataSource, EntityManager, and TransactionManager, and it worked. But when I stopped and re-ran the application, all the values inside the database were wiped out. I had to manually insert new values to test my REST API URL in Postman. After I stopped and re-ran the application again, my database values got wiped out yet again. What could be the reason for that?
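Not part of the original question, but a common cause of this exact symptom: when JPA is configured manually, Hibernate's schema-generation setting can end up as `create` or `create-drop`, which recreates the schema on startup and drops it on shutdown. A sketch of the relevant setting, assuming the standard Spring Boot property (or the equivalent `hibernate.hbm2ddl.auto` entry in manually supplied JPA properties):

```properties
# If this is "create" or "create-drop", Hibernate rebuilds the schema on
# every start (and "create-drop" also drops it on shutdown), wiping all rows.
# "update", "validate", or "none" preserves existing data.
spring.jpa.hibernate.ddl-auto=update
```

When the EntityManagerFactory is built by hand, the same key would be set as `hibernate.hbm2ddl.auto` in the properties passed to the factory bean.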

Related

How do I connect to multiple databases using JPA Glassfish Java EE?

I have an application using JSF, EJB, JPA, and Glassfish. There are multiple (maybe around a thousand or more) clients using my app; however, each client has a separate database. All the databases have the same schema. I would like to determine which database connection to use at the time a user logs into the system.
For example client A enters client code, username, and password and logs in, I determine that client A belongs to database A, grab the connection for database A and continue on my merry way.
I am using JPA as my persistence provider. Is it possible to set the datasource in persistence.xml at runtime? Is there a .java version of persistence.xml available? Is there a better/preferred way to do this? The PersistenceUnit name will be the same for all the connections.
Thanks
If you know all clients in advance, then you can create a separate persistence unit for every client in persistence.xml and use a factory to select the persistence unit name per user. Adding a client then requires modifying persistence.xml and redeploying the application.
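The factory idea from the answer can be sketched as a plain lookup from client code to persistence-unit name; the client codes and unit names below are made up for illustration, and each name would have to match a `<persistence-unit>` in persistence.xml:

```java
import java.util.Map;

// Sketch only: maps a client code to the persistence-unit name declared
// in persistence.xml. Adding a client means adding an entry here AND a
// matching <persistence-unit> element, then redeploying.
public class PersistenceUnitSelector {
    private static final Map<String, String> UNITS = Map.of(
            "clientA", "clientA-PU",
            "clientB", "clientB-PU");

    public static String persistenceUnitFor(String clientCode) {
        String unit = UNITS.get(clientCode);
        if (unit == null) {
            throw new IllegalArgumentException("Unknown client: " + clientCode);
        }
        return unit;
    }

    public static void main(String[] args) {
        // At login time you would then do something like:
        // EntityManagerFactory emf =
        //     Persistence.createEntityManagerFactory(persistenceUnitFor(code));
        System.out.println(persistenceUnitFor("clientA")); // prints "clientA-PU"
    }
}
```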

Wildfly won't deploy when datasource is unavailable

I am using wildfly-8.2.0.Final.
There are several databases that I have to connect to. However, some of them are only used for certain functionality in the web application, and they do not need to be online all the time. So when WildFly starts, some of the datasources may be offline. However, an unavailable datasource causes WildFly to refuse to deploy the .war, and I cannot find any way to solve this problem. Is there a way?
UPDATE:
I have a single table on a remote database server. The user will be able to query the table via my web application. The thing is, I have almost no control over the mentioned database. When the web application starts, it could be offline; however, that would cause my web application to fail to start. What I want is to be able to run queries on the remote database if it is online. If it is offline, the web page can fail or the query can be canceled. The one thing I don't want is for my web application to be limited by a remote database that I may have no control over.
My previous solution was a workaround. I would run queries on the remote database via a local database that has a foreign table pointing to the remote one. However, on PostgreSQL 9.5 the local database reads all the data in the remote table before applying any constraints. As the remote table has a large number of rows and I am using lazy loading, a single query takes so long that it defeats the whole purpose of lazy loading.
I found a similar question, but there is no answer.
On WildFly, you can configure the datasource so that it tries to reconnect periodically when it disconnects. In my case, though, the deployment has to succeed initially for this to be helpful.
The deployment will fail if it references those datasources.
Alternatively, you could define those datasources but leave them disabled.
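A sketch of the second suggestion as a standalone.xml fragment; the JNDI name, driver, and connection details are placeholders, not taken from the question. A datasource defined with `enabled="false"` does not block deployment and can be enabled later (for example via the management CLI) once the remote database is reachable:

```xml
<datasource jndi-name="java:jboss/datasources/RemoteDS"
            pool-name="RemoteDS"
            enabled="false">
    <!-- Placeholder connection details -->
    <connection-url>jdbc:postgresql://remote-host:5432/remotedb</connection-url>
    <driver>postgresql</driver>
    <security>
        <user-name>dbuser</user-name>
        <password>dbpass</password>
    </security>
</datasource>
```

Note that the deployment itself must not hard-require the datasource (e.g. via a resource reference) for this to help.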

Mongo dump and restore to a different cluster and I cannot log in

I moved a MongoDB database from one Atlas cluster to a different account/different cluster.
To do this I did a dump from the source db and a restore into the new account's cluster.
I did NOT have a problem restoring the db - that went fine - I can visually confirm that the password hashes in the new db ARE the same as in the old.
When I try to log in to my app pointed at the source, I get in fine; when I change my db setting and point to the new db, I get a login failure.
The API code is the same - running locally - the only thing that is different is the connection string.
I am using bcrypt to hash the passwords - but because the API is sitting on my local machine, that pretty much takes any application-layer variable off my problem list.
With the exception of the connection string - I was using the 3.1 driver connection string to connect to the 'old' cluster, and I decided to try the 3.6 driver version to connect to the 'new' one.
Can someone confirm that moving a db from one cluster to another using the dump and restore method should not affect hashed password matching?
And maybe offer suggestions on where to look for answers?
so the only difference on the code is this:
// Old
DB_URI=mongodb://u***:p***@dev0-shard-00-00-1xxx.mongodb.net:27017,dev0-shard-00-01-1xxx.mongodb.net:27017,dev0-shard-00-02-1xxx.mongodb.net:27017/db?ssl=true&replicaSet=Dev0-shard-0&authSource=admin
// new
DB_URI=mongodb+srv://n***:h***@prod-xxx.mongodb.net/test?retryWrites=true
Ok, so I finally got around to playing with this, and since the URI was the only change, I switched back to the 3.4 driver syntax (that long ungodly string), and it works fine.
For the record, all my "open" (non-authenticated) API calls, such as signup, forgotten-password requests, and a slew of drop-down lookups, all processed through the API with the 3.6 driver string. I also signed up and logged in fine; the only issue is logging in with an account that was created in the previous cluster while using the new driver connection string.
And as confirmation: now that I have switched the connection string back to 3.4, I cannot log into the account I created with the 3.6 connection string.
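Not in the original post, but one concrete difference between the two strings that is easy to inspect: the old URI names the application database and `authSource=admin` explicitly, while the new SRV-style one points at `/test` and carries no explicit authSource. Whether that matters depends on the driver and Atlas defaults, but it can be checked with plain URI parsing. The credentials and host below are placeholders standing in for the redacted values above:

```java
import java.net.URI;

// Sketch: inspect which default database and options an SRV-style MongoDB
// connection string names. Credentials and host are placeholders.
public class MongoUriCheck {
    public static void main(String[] args) {
        URI uri = URI.create(
                "mongodb+srv://nUser:hPass@prod-xxx.mongodb.net/test?retryWrites=true");
        System.out.println("default db: " + uri.getPath()); // prints "default db: /test"
        System.out.println("options: " + uri.getQuery());   // no authSource present
    }
}
```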

.NET Core Entity Framework on AWS

I've got a web application written in .NET Core using Entity Framework to connect to a SQL Server database. Everything works fine locally. The staging/production environment for the app is AWS Elastic Beanstalk, and I seem to be stuck on how to get the app to connect to the database there.
I've created an RDS (SQL Server) database underneath the Elastic Beanstalk app.
Connection string:
user id=[USER];password=[PASSWORD];Data Source=[SERVER].rds.amazonaws.com:1433;Initial Catalog=Development
Creating the DbContext:
services.AddDbContext<MyDbContext>(o =>
o.UseSqlServer(Configuration["Db:ConnectionString"]));
The app fails to start; the failure points at the following line in Startup.cs:
dbContext.Database.EnsureCreated();
You have to troubleshoot step by step, as below:
Is the DB connection string working or not? It is better to test it with another app, or do a simple connection test first. It is also possible that a firewall is blocking port 1433.
As per your code, `EnsureCreated()` makes .NET Core / Entity Framework create the database with the code-first approach. So you have to make sure the IIS application-pool user account your app runs under has write access to the SQL database. Most probably this is your problem.
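One concrete detail worth adding to the first step (not in the original answer): the classic SqlClient connection-string format expects a comma, not a colon, between the server and the port, so `[SERVER].rds.amazonaws.com:1433` may itself be the problem. A sketch of the corrected string, keeping the placeholders from the question:

```
user id=[USER];password=[PASSWORD];Data Source=[SERVER].rds.amazonaws.com,1433;Initial Catalog=Development
```

On Elastic Beanstalk it is also worth confirming that the RDS security group allows inbound traffic from the app instances on port 1433.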

Entity Framework application crashing on another system

I have made a WPF application using Entity Framework. It runs fine on my computer, but when I run it on another system it crashes. I debugged it on the other system; the exception comes when I try to access the entities' data. What could the problem be?
"The underlying provider failed on Open" usually means either that your connection string is incorrect or that the application doesn't have permission to access the database (either the SQL account specified in the connection string or the current Windows user running the application).
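As an illustration of the two cases in that answer, here is a sketch of an App.config connection string in each style; the server, database, and account names are made up. With Integrated Security, the Windows user running the app needs database access; otherwise the SQL account named in the string does:

```xml
<connectionStrings>
  <!-- Windows authentication: the user running the app must have DB access. -->
  <add name="MyContext"
       connectionString="Data Source=MYSERVER\SQLEXPRESS;Initial Catalog=MyDb;Integrated Security=True"
       providerName="System.Data.SqlClient" />
  <!-- SQL authentication (alternative): the specified account must have DB access.
  <add name="MyContext"
       connectionString="Data Source=MYSERVER\SQLEXPRESS;Initial Catalog=MyDb;User Id=appUser;Password=secret"
       providerName="System.Data.SqlClient" />
  -->
</connectionStrings>
```

On a machine where the app crashes, also confirm that the named SQL Server instance actually exists and is reachable from that machine.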