I have a multi-tenant application built in Laravel 5.1. It uses one main database connection for storing users, roles, and permissions, as well as the jobs and failed_jobs tables. Additionally, every user has their own database.
I use a Job class for sending mail, and when it is executed, the following exception occurs:
[PDOException] SQLSTATE[3D000]: Invalid catalog name: 1046 No database selected
The class uses tables from two database connections (the main connection, and the one associated with the current user).
Thanks in advance, any help is appreciated.
OK, this was easy. For anyone who is interested: I totally forgot to set the database in the second connection.
In fact, the database field in the second DB connection is filled dynamically, depending on the authenticated user. So, each time the job class is executed, the database field has to be initialized:
Config::set('database.connections.second_connection.database', 'user_' . $user_id);
// $user_id is in fact auth()->user()->id, passed as parameter
That solves the problem.
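For completeness, here is a minimal sketch of what the job class could look like (the class name SendTenantMail is made up for illustration; the DB::purge() call is an extra safeguard that drops any cached connection so the new database name actually takes effect):

use App\Jobs\Job;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;

class SendTenantMail extends Job implements ShouldQueue
{
    protected $userId;

    // auth() is not available once the job runs on a queue worker,
    // so the authenticated user's id is passed in explicitly
    public function __construct($userId)
    {
        $this->userId = $userId;
    }

    public function handle()
    {
        // Point the tenant connection at this user's database
        Config::set('database.connections.second_connection.database', 'user_' . $this->userId);

        // Drop any cached connection so the new database name is picked up
        DB::purge('second_connection');

        // ... send the mail using tables from both connections ...
    }
}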
We are running a web application in Azure Web Apps using a database per customer (multiple accounts per customer). When logging in, we connect the user to the correct customer database. This database is also hosted in Azure (in an elastic pool), in the same region (West Europe) as the Web App.
Once the connection is pooled, request times are fast, but the first time a user logs in, the connection still needs to be created, and this takes quite a long time.
The connection string is built using a SqlConnectionStringBuilder.
var csb = new System.Data.SqlClient.SqlConnectionStringBuilder();
csb.DataSource = "tcp:******.database.windows.net,1433";
csb.InitialCatalog = "***-***-***";
csb.UserID = "**-**";
csb.Password = "**********";
csb.MultipleActiveResultSets = true;
csb.Encrypt = true;
csb.PersistSecurityInfo = false;
csb.TrustServerCertificate = false;
csb.ConnectTimeout = 30;
_connectionString = csb.ConnectionString;
// Data Source=tcp:******.database.windows.net,1433;Initial Catalog=***-***-***;Persist Security Info=False;User ID=**-***;Password=******;MultipleActiveResultSets=True;Connect Timeout=30;Encrypt=True;TrustServerCertificate=False
Am I doing anything wrong? Or are there settings in Azure to speed up the connection process?
The trace above shows a customer's first request to the application. It therefore includes the EF Migrations seed, which is why the first 2 queries do not actually go to the database itself, followed by quite a lot of queries (not all shown here) that do hit the database.
Well, I solved my problem eventually. It seems I was matching the wrong queries within Application Insights. I installed Stackify, and this gives just the little bit more information I needed.
It seems Entity Framework does some things with the 'master' database. As the user in the connection string did not have access to 'master', an error is thrown. Handling that error takes up quite some time on the App Service and is what makes the request return slowly; it just doesn't fail.
What EF tries to do is determine whether the database exists by querying the master database, which is faster than connecting to a non-existent database. If that fails because it cannot connect to master, EF just tries to connect to the database itself. If connecting to the database works, it continues normal execution, such as the seed method.
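If you cannot grant the login access to master, one way to avoid the existence check entirely is to disable the database initializer. A minimal sketch, assuming EF6 code-first and that migrations are applied through a separate deployment step; MyContext is a placeholder for your own DbContext:

using System.Data.Entity;

public static class EfBootstrap
{
    // Call once at application startup, before the first DbContext is used.
    public static void DisableInitializer()
    {
        // With a null initializer, EF no longer probes master (or the target
        // database) to decide whether it must create or migrate the database;
        // it assumes the schema is already in place.
        Database.SetInitializer<MyContext>(null);
    }
}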
Information from an SQLite DB is presented to the user through a web server (displayed in an HTML browser). The DB is loaded once and for all by a small application independent of the web server. DB data cannot be changed from the user's browser (this is a read-only service).
As the web server runs under its own user ID, it accesses the SQLite DB file with "other" permissions. For security reasons, I would like to set the DB file permissions to rw-rw-r--.
Unfortunately, with this permission set, I get a warning, attempt to write a readonly database at line xxx, which points to a line containing a SELECT statement (in principle a read-only operation). Of course, I get no result.
If permissions are changed to rw-rw-rw, everything works fine, but that means everybody can tamper with the DB.
Is there any reason why an SQLite DB cannot be accessed read-only?
Is there some "behind-the-scenes" processing which needs write access, even for SELECT statements?
A search on Stack Overflow shows that people usually complain about the opposite situation: read-only permissions preventing them from writing to the DB. My goal is to protect my DB against ANY change attempt.
For the complete story, my web app is written in Perl and uses DBD::SQLite.
You must connect to your SQLite DB in read-only mode.
From the docs:
You can also set sqlite_open_flags (only) when you connect to a database:
use DBD::SQLite;
my $dbh = DBI->connect("dbi:SQLite:$dbfile", undef, undef, {
sqlite_open_flags => DBD::SQLite::OPEN_READONLY,
});
-- https://metacpan.org/pod/DBD::SQLite#Database-Name-Is-A-File-Name
The solution is given in the answer to this question: Perl DBI treats setting SQLite DB cache_size as a write operation when subclassing DBI.
It turns out that AutoCommit cannot be set to 0 with a read-only SQLite DB. Explicitly forcing it to 1 in the read-only case solved the problem.
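Putting both pieces together, a minimal sketch of the read-only connect (RaiseError is optional here, but makes failures visible):

use DBI;
use DBD::SQLite;

# Open the file read-only and force AutoCommit on, since a transaction
# cannot be started on a read-only SQLite database
my $dbh = DBI->connect("dbi:SQLite:dbname=$dbfile", undef, undef, {
    sqlite_open_flags => DBD::SQLite::OPEN_READONLY,
    AutoCommit        => 1,
    RaiseError        => 1,
});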
Thanks to all who gave clues and leads.
I currently assign a MongoDB database to my Meteor app using the environment variable
"MONGO_URL": "mongodb://localhost:27017/dbName" when I start the Meteor instance.
So all data gets written to the mongo database with the name "dbName".
I am looking for a way to individually set the dbName for each customer upon login, in order to separate their data into different databases.
This is generally unsupported, as the database is defined at startup. However, this thread offers a possible solution:
https://forums.meteor.com/t/switch-database-while-meteor-is-running/4361/6
var database = new MongoInternals.RemoteCollectionDriver("<mongo url>");
MyCollection = new Mongo.Collection("collection_name", { _driver: database });
This would allow you to define the database name in the mongo URL, but it would require a fair bit of extra work to redefine your collections on a customer-by-customer basis.
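As a rough sketch of what that extra work could look like (the helper name and the per-tenant database naming scheme are made up for illustration), you could cache one driver per tenant:

var drivers = {};

// Returns a collection bound to the given tenant's database,
// creating (and caching) the remote driver on first use
function tenantCollection(dbName, collectionName) {
  if (!drivers[dbName]) {
    drivers[dbName] = new MongoInternals.RemoteCollectionDriver(
      "mongodb://localhost:27017/" + dbName);
  }
  return new Mongo.Collection(collectionName, { _driver: drivers[dbName] });
}

// e.g. var Orders = tenantCollection("customer_42", "orders");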
Here's another approach that will make your life eternally easier:
Create a generic site with no accounts at mysite.com
When they log in at mysite.com, figure out which site they actually belong to and redirect them to customerName.mysite.com, logging them in there
Run a separate instance of Meteor configured for a different mongo at each site
nginx might help you with the above.
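For example, here is a minimal sketch of an nginx server block for one customer (the names and the port are placeholders; each customer's Meteor instance would listen on its own port):

server {
    server_name customername.mysite.com;

    location / {
        proxy_pass http://127.0.0.1:3001;  # this customer's Meteor instance
        proxy_http_version 1.1;
        # keep WebSockets (DDP) working through the proxy
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}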
"It is generally good practice to run separate DBs when offering a B2B solution."
That's a matter of opinion that depends heavily on the platform. Many SaaS providers would argue that point.
I have a multi-tenant application and I need to change the schema name at runtime, so it is going to be a shared-database, separate-schema SaaS design.
Because creating an EntityManagerFactory is very expensive, I would like to create the EMF application-scoped and set the schema before every DB call after initiating the EntityManager. I am using PostgreSQL 8.1, and because PostgreSQL doesn't support schema selection when setting up the DB connection, I thought the only way to query tables from different schemas is to execute SET search_path = "my.schema" before making the required DB calls.
I have tried:
StringBuilder sb = new StringBuilder();
sb.append("SET search_path TO my.schema");
entityManager_.createNativeQuery(sb.toString()).executeUpdate();
I got an exception saying 'java.lang.IllegalStateException: You cannot call executeUpdate() on this query. It is the incorrect query type'.
I am using EclipseLink as the persistence provider and GlassFish as the application server.
Is there any way I can get this done?
I am open to any suggestions if there is a better way of accomplishing this.
Thanks in advance.
Since you're doing tenant-per-schema, are you also using tenant-specific database login roles (user IDs)? If so, you can bind a default search_path to each user:
ALTER USER thetenant SET search_path = 'thetenant';
If you also:
REVOKE ALL ON SCHEMA thetenant FROM public;
GRANT ALL ON SCHEMA thetenant TO thetenant;
you will isolate the users from each other to a much greater extent, though they'll still see stuff in pg_catalog and INFORMATION_SCHEMA.
This requires you to use one login role per tenant, which can be difficult where connection pooling is in play, because Java connection pools usually can't switch the user ID of a pooled connection and have to keep one pool per user ID. PostgreSQL's SET SESSION AUTHORIZATION statement can be useful here, allowing you to log in as a single master user and then switch to the role you need for a particular job, but I don't know whether any Java pools support it directly. You can use external connection pools like PgBouncer and PgPool-II that are SET SESSION AUTHORIZATION aware, or see if there's a way to write an interceptor that issues SET SESSION AUTHORIZATION on connections as they're checked out of the pool and RESET SESSION AUTHORIZATION when they're checked back in.
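As a sketch of what such an interceptor would issue (note that per-role settings from ALTER USER ... SET are applied only at login, so after switching roles the search_path has to be set explicitly):

-- on checkout from the pool, for a job belonging to 'thetenant':
SET SESSION AUTHORIZATION 'thetenant';
SET search_path = 'thetenant';

-- ... run the tenant's work on the pooled connection ...

-- on check-in, return the connection to the master identity:
RESET SESSION AUTHORIZATION;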
Even if you can't use role-based access, I'd try the same approach with your search path: see if you can do it with the connection pooler's help by trapping connections as they're checked out of the pool for a task, and as they're checked back in at release. How to do this would depend on the connection pool, though, and you may not want to get into the specifics of that.
BTW, why on earth are you using such a prehistoric version of PostgreSQL?
I don't know why EclipseLink is refusing your command. At face value it looks reasonable. Are you using ancient versions of other things, too? Which Glassfish and EclipseLink versions are you using?
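As a workaround, you could bypass the JPA query layer and issue the statement over the raw JDBC connection. A minimal sketch, assuming a resource-local EntityManager (under JTA in GlassFish you would do this inside the container-managed transaction instead); EclipseLink only hands out the Connection while a transaction is active on the EntityManager:

// Hypothetical helper: run SET search_path over the raw JDBC connection
// instead of through createNativeQuery()
void setTenantSchema(javax.persistence.EntityManager em, String schema)
        throws java.sql.SQLException {
    // unwrap() to java.sql.Connection requires an active transaction
    java.sql.Connection connection = em.unwrap(java.sql.Connection.class);
    java.sql.Statement statement = connection.createStatement();
    try {
        // the schema name cannot be bound as a parameter in SET, so it
        // must come from a trusted source, never from user input
        statement.execute("SET search_path TO \"" + schema + "\"");
    } finally {
        statement.close();
    }
}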
Say that I have a User table in my read database (using SQL Server). In a regular read/write database I can put a unique index on the table to make sure that two users aren't added with the same email address.
So if I try to add a user with an email address that already exists for a different user, SQL Server will throw an exception back.
In CQRS I can't do that, since if I decouple the write to my read database from the domain model by putting it on an asynchronous queue, I won't get the exception thrown back to me; I will return "OK" to the UI, and the user will think he has been added to the database, when in fact he will never be added to the read database.
I can do a search in the read database to check whether there is already a user with that email address and, if there is one, throw an exception back to the UI. But if two users press the save button at the same time, I will do two checks against the database, see that there isn't any user with that email address, and send back that it's okay: both go on my queue, and later one will fail (by hitting the unique index).
Am I supposed to load all users from my event source (it's a SQL Server) and do the check on that collection, to see if I have a user that already has this email address? That sounds a bit crazy to me...
How have you people solved it?
The only way I can see is to use not an asynchronous queue but a synchronous one, but that will hurt performance really badly, especially when you have many "read storages" to write to...
Need some help here...
Searching for "CQRS set-based validation" will give you solutions to this issue.
Greg Young posted about the business impact of embracing eventual consistency http://codebetter.com/gregyoung/2010/08/12/eventual-consistency-and-set-validation/
Jérémie Chassaing posted about discovering missing aggregate roots in the domain http://thinkbeforecoding.com/post/2009/10/28/Uniqueness-validation-in-CQRS-Architecture
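One approach that often comes up under that search is a reservation pattern: claim the email address synchronously, in a small strongly-consistent table with a unique constraint, before the command goes on the queue. A hypothetical T-SQL sketch (table and column names are made up):

CREATE TABLE EmailReservations (
    Email      NVARCHAR(254)    NOT NULL PRIMARY KEY,
    UserId     UNIQUEIDENTIFIER NOT NULL,
    ReservedAt DATETIME2        NOT NULL DEFAULT SYSUTCDATETIME()
);

-- In the command handler, this insert either succeeds (the address is
-- reserved for this user) or fails with a duplicate-key error that can
-- be reported to the UI immediately; if two users press save at the
-- same time, exactly one insert wins.
INSERT INTO EmailReservations (Email, UserId) VALUES (@Email, @UserId);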
Related Stack Overflow questions:
How to handle set based consistency validation in CQRS?
CQRS Validation & uniqueness