Can SQLite DB files be made read-only? - perl

Information from an SQLite DB is presented to the user through a web server (displayed in an HTML browser). The DB is loaded once and for all by a small application independent of the web server. DB data cannot be changed from the user's browser (this is a read-only service).
As the web server has its own user ID, it accesses the SQLite DB file with "other" permissions. For security reasons, I would like to set the DB file permissions to rw-rw-r--.
Unfortunately, with this permission set, I get the warning "attempt to write a readonly database" at line xxx, which points to a line with a SELECT statement (which should in principle be read-only). And of course, I get no result.
If permissions are changed to rw-rw-rw-, everything works fine, but that means everybody can tamper with the DB.
Is there any reason why an SQLite DB cannot be accessed read-only?
Are there "behind-the-scenes" processes which need write access, even for SELECT transactions?
A look around StackOverflow shows that people usually complain about the opposite situation: a read-only permission preventing them from writing to the DB. My goal is to protect my DB against ANY change attempt.
For the complete story, my web app is written in Perl and uses DBD::SQLite.

You must connect to your SQLite DB in read-only mode.
From the docs:
You can also set sqlite_open_flags (only) when you connect to a database:
use DBD::SQLite;
my $dbh = DBI->connect("dbi:SQLite:$dbfile", undef, undef, {
    sqlite_open_flags => DBD::SQLite::OPEN_READONLY,
});
-- https://metacpan.org/pod/DBD::SQLite#Database-Name-Is-A-File-Name

The solution is given in the answer to the question "Perl DBI treats setting SQLite DB cache_size as a write operation when subclassing DBI".
It turns out that AutoCommit cannot be set to 0 on a read-only SQLite DB. Explicitly forcing it to 1 in the read-only case solved the problem.
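For reference, a minimal sketch of a connect call that combines the read-only open flag with an explicit AutoCommit (the $dbfile variable and the extra attributes are illustrative):
use DBI;
use DBD::SQLite;

my $dbh = DBI->connect("dbi:SQLite:dbname=$dbfile", undef, undef, {
    sqlite_open_flags => DBD::SQLite::OPEN_READONLY,
    AutoCommit        => 1,   # must stay enabled on a read-only SQLite DB
    RaiseError        => 1,
});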
Thanks to all who gave clues and leads.

Related

Queuing multi-tenant application in Laravel

I have a multi-tenant application built in Laravel 5.1. It uses one main database connection for storing users, roles, permissions, as well as jobs and failed_jobs. Additionally, every user has their own database.
I use a Job class for sending mail, and when it is executed, the following exception occurs:
[PDOException] SQLSTATE[3D000]: Invalid catalog name: 1046 No database selected
The class uses tables from two db connections (the main connection, and the one associated with the current user).
Thanks in advance, any help is appreciated.
OK, this was easy. For anyone who is interested, I had simply forgotten to set the database on the second connection.
In fact, the database field of the second connection is filled dynamically, depending on the authenticated user. So each time the job class is executed, the database field has to be initialized:
Config::set('database.connections.second_connection.database', 'user_' . $user_id);
// $user_id is in fact auth()->user()->id, passed as parameter
That solves the problem.
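One caveat worth adding (an assumption on my part, and only relevant if the second connection was already resolved earlier in the same process): Laravel caches resolved connections, so after changing the config you may need to purge the cached instance and reconnect:
Config::set('database.connections.second_connection.database', 'user_' . $user_id);
DB::purge('second_connection');      // drop the cached connection instance
DB::reconnect('second_connection');  // the next query picks up the new database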

PostgreSQL authorization with Access ODBC Linked Tables

For the impatient - I can summarize this question as:
What practical approach can be used to leverage role-based privileges
in PostgreSQL when using an Access Front End that employs ODBC linked-tables?
And now for the longer version:
I've inherited the unsavory task of upgrading an Access 2000 / PG 7 application to Access 2013 / PG 9. I'm new to PostgreSQL but have used Oracle and Microsoft Access quite a bit.
EDIT: The production server is running PostgreSQL on Mac OS X Lion. My Test machine is running PostgreSQL on Oracle Linux 7.
This Access DB links to tables in the PG database via ODBC, connecting with a single PG login role (application_user). Every user connects with this login role, and only the conditions in the forms / VBA limit the user's rights. If, however, a user can get into the navigation pane, they can access the linked tables directly and bypass all security restrictions. While upgrading this database, I'd like to see if I can tighten that up.
I could set up each user with their own login role in PostgreSQL, but from the way I'm looking at it, that would mean a hefty amount of retooling of the database. I'd rather not make such large changes on a production database; incremental changes are preferred.
Looking at the database's security needs - I can think of only five roles that would be needed.
Order Entry
Customer Entry
Order and Customer Entry
Read-Only
Not Authorized - No Access
I can set up these as group roles in PostgreSQL and give each table the necessary ACL entries for each role.
What I'm missing is how I can go from a single login role (application_user) to all of the above roles.
My initial thought was to give the application_user login role no group memberships (essentially resulting in "Not Authorized - No Access"), and then use a call to a PL/pgSQL function authenticate(Username, MD5PassWord) to authorize and elevate the role. The function would check whether the supplied MD5 hash matches the one stored in the users table, and if so, issue a SET SESSION AUTHORIZATION for the appropriate group role.
If this worked, it would let me track the user names that log in and then, using the pg_backend_pid() function, associate the session back with the user for business logic, logging, or whatever. It also means I don't need to worry if some user gets into the linked tables, because their access would be restricted by whatever role they are currently authorized for in that database session.
So I whipped up a PL/pgSQL function, set its owner to OrderCustomerEntryGroup, and gave it SECURITY DEFINER rights:
CREATE OR REPLACE FUNCTION authenticate(pin_username varchar, pin_pwmd5 text)
RETURNS integer AS $$
DECLARE
    v_Status integer;
BEGIN
    v_Status := 0;
    IF pin_username = 'username' AND MD5('foo') = pin_pwmd5 THEN
        SET SESSION AUTHORIZATION OrderEntryGroup;
        v_Status := 1;
    END IF;
    RETURN v_Status;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
The only problem with my implementation is that
SELECT authenticate('username',MD5('foo'));
gives:
ERROR: cannot set parameter "session_authorization" within security-definer function
SQL state: 42501
Context: SQL statement "SET SESSION AUTHORIZATION OrderEntryGroup"
PL/pgSQL function authenticate(character varying,text) line 7 at SQL statement
So I read up on this, and from what I can tell, you used to be able to do this, but for whatever reason it was removed. I haven't been able to find an alternative, other than using the built-in roles on a per-user level.
So what I'm asking is: what am I missing to make my approach (an easy solution) work, or is there a better way of doing this that won't involve ripping apart the existing Access database?
If you want to restrict access to the database from a direct connection then you'll need to do a certain amount of "retooling" on the back-end regardless. The best approach is almost always to have each user connect with their own credentials and then restrict what that user can do based on the groups (sometimes referred to as "roles") to which they belong in the database.
If you want to avoid having to set up separate database user IDs / passwords for each network user, then you should investigate using integrated Windows authentication (SSPI) as discussed in another question here. You'll still need to define the users (in addition to the groups / roles) at the database level, but you'd have to do most of that work anyway.
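To make that concrete, a minimal sketch of the group-based setup described above (all role, table, and password names are illustrative, not taken from the question):
-- Group roles (no login) carry the privileges
CREATE ROLE order_entry NOLOGIN;
GRANT SELECT, INSERT, UPDATE ON orders TO order_entry;

-- Each person gets a login role that is a member of a group
CREATE ROLE alice LOGIN PASSWORD 'changeme';
GRANT order_entry TO alice;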

How to properly handle mongoose schema migrations?

I'm completely new to MongoDB & Mongoose and can't seem to find an answer as to how to handle migrations when a schema changes.
I'm used to running migration SQL scripts that alter table structure and any underlying data that needs to be changed. This typically involves DB downtime.
How is this typically handled within MongoDB/Mongoose? Any gotcha's that I need to be aware of?
Coming across this, and reasonably understanding how migrations work on a relational database, MongoDB makes this a little simpler. I've come to two ways to break it down. The things to consider when dealing with data migrations in MongoDB (not all that different from relational databases) are:
Ensuring local test environments do not break when a developer merges the latest from the project repository
Ensuring any data is correctly updated on the live version, regardless of whether a user is logged in or out, if authentication is used. (Of course, if everyone is automatically logged out when an upgrade is made, then you only need to worry about when a user logs in.)
1) If your change will log everyone out, or application downtime is expected, then the simple way to do this is to have a migration script that connects to the local or live MongoDB and upgrades the correct data. Example where a user's name is changed from a single string to an object with given and family name (very basic, of course, and it would need to be put into a script to run for all developers):
Using the CLI:
mongo
use myDatabase
db.myUsers.find().forEach( function(user){
    var curName = user.name.split(' '); // needs some more checks..
    user.name = { given: curName[0], family: curName[1] };
    db.myUsers.save( user );
})
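To run it unattended for all developers, the same forEach body can be saved to a file and passed to the shell (file name illustrative; when run from a file, the database is given on the command line instead of via use):
mongo myDatabase rename-user-names.js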
2) You want the application to migrate the schemas up and down based on the application version it is running. This will obviously be less of a burden for a live server and not require downtime, since users are only upgraded when they use the upgraded / downgraded version for the first time.
If you're using middleware in Express.js for Node.js:
Set an app variable in your root app script via app.set('schemaVersion', 1); this will be compared later against the user's schema version.
Now ensure all the user schemas have a schemaVersion property as well, so we can detect a change between the application schema version and the current MongoDB schema for THAT PARTICULAR USER only.
Next, we need to create simple middleware to detect the config and user versions:
app.use( function( req, res, next ){
    // If we're not on an authenticated route
    if( !req.user ){
        next();
        return;
    }
    // retrieving the user info will be server dependent
    if( req.user.schemaVersion === app.get('schemaVersion') ){
        next();
        return;
    }
    // handle upgrade if user version is less than app version
    // handle downgrade if user version is greater than app version
    // save the user version to your session / auth token / MongoDB where necessary
})
For the upgrade / downgrade I would make simple JS files under a migrations directory, each exporting upgrade / downgrade functions that accept the user model and run the migration changes on that particular user in MongoDB; a sketch follows. Lastly, ensure the user's version is updated in your MongoDB so they don't run the changes again unless they move to a different version.
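For illustration, one such per-user migration module might look roughly like this (the file layout, function signatures, and save-based persistence are my assumptions, not a fixed convention):
// migrations/1-split-name.js (illustrative)
exports.up = function (user, done) {
    var parts = (user.name || '').split(' ');
    user.name = { given: parts[0], family: parts.slice(1).join(' ') };
    user.schemaVersion = 1;
    user.save(done);
};

exports.down = function (user, done) {
    user.name = [user.name.given, user.name.family].join(' ');
    user.schemaVersion = 0;
    user.save(done);
};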
If you're used to SQL-style migrations or Rails-like migrations, then you'll find my CLI tool migrate-mongoose the right fit for you.
It allows you to write migrations with an up and a down function and manages the state for you based on success and failure of your migrations.
It also supports ES6 / ES2015 syntax.
You get access to your mongoose models via the this object, making it easy to make the changes you need to your models and schemas.
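A migration file then follows the up / down shape described above; a bare skeleton looks roughly like this (the tool's generated template is authoritative, and the exact model accessor is whatever its docs prescribe):
export function up() {
    // mongoose models are reachable via `this` here (see the tool's docs);
    // return a Promise so the tool can record success or failure
    return Promise.resolve();
}

export function down() {
    // the inverse of up()
    return Promise.resolve();
}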
There are two types of migrations:
Offline: requires you to take your service down for maintenance, then iterate over the entire collection and make the changes you need.
Online: does not require taking your service down for maintenance. When you read a document, you check its version and run a version-specific migration routine for each version between the old and the new. Then you load the result.
Not all services can afford an offline migration, so I recommend the online approach (sketched below).
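A sketch of the read-path dispatch for the online approach (function and field names are mine, not from any particular library):
// one step function per schema version bump (bodies illustrative)
var steps = {
    1: function (doc) { /* v0 -> v1 changes */ return doc; },
    2: function (doc) { /* v1 -> v2 changes */ return doc; }
};

function migrateOnRead(doc, appVersion) {
    var v = doc.schemaVersion || 0;
    while (v < appVersion) {
        v += 1;
        doc = steps[v](doc);   // apply each step between stored and app version
        doc.schemaVersion = v;
    }
    return doc;
}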

EclipseLink: change schema at EntityManager callbacks

I have a multi-tenant application, and I need to change the schema name at runtime; it is going to be a shared-database, separate-schema SaaS design.
Because creating an EntityManagerFactory is very expensive, I would like to create the EMF application-scoped and specify the schema before every DB call, after obtaining the EntityManager. I am using PostgreSQL 8.1, and because PostgreSQL doesn't support schema selection when setting up the DB connection, I thought the only way to query tables from different schemas is to issue 'SET search_path = "my.schema"' before making the required DB calls.
I have tried:
StringBuilder sb = new StringBuilder();
sb.append("SET search_path TO my.schema");
entityManager_.createNativeQuery(sb.toString()).executeUpdate();
I got an exception saying 'java.lang.IllegalStateException: You cannot call executeUpdate() on this query. It is the incorrect query type'.
I am using EclipseLink as the persistence provider and GlassFish as the application server.
Is there any way I can get this done?
I am open to any suggestions if there is a better way of accomplishing this.
Thanks in advance.
Since you're doing tenant-per-schema, are you also using tenant-specific database login roles (user IDs)? If so, you can bind a default search_path to your user:
ALTER USER thetenant SET search_path = 'thetenant';
If you also:
REVOKE ALL ON SCHEMA thetenant FROM public;
GRANT ALL ON SCHEMA thetenant TO thetenant;
you will isolate the users from each other to a much greater extent, though they'll still see stuff in pg_catalog and INFORMATION_SCHEMA.
This requires you to use a login role per tenant. That can be difficult where connection pooling is in play, because Java connection pools usually can't switch the user ID of a pooled connection and have to keep one pool per user ID. PostgreSQL's SET SESSION AUTHORIZATION statement can be useful, allowing you to log in as a single master user and then switch to the user you need for a particular job, but I don't know if any Java pools support it directly. You can use external connection pools like PgBouncer and PgPool-II that are SET SESSION AUTHORIZATION aware, or see if there's any way to write an interceptor so you can issue a SET SESSION AUTHORIZATION on connections as they're checked out from the pool, and RESET SESSION AUTHORIZATION when they're checked back in.
Even if you can't use role-based access, I'd try the same approach with your search path: see if you can do it with the connection pooler's help by trapping connections as they're checked out of the pool for a task and as they're checked back in on release. How to do this depends on the connection pool, though, and you may not want to get into the specifics of that.
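For example, a pool-agnostic sketch of that checkout / release wrapping (class and method names are illustrative, and a real implementation must validate the role name rather than concatenating it):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public final class SessionAuthHelper {
    // Switch the pooled connection to the tenant's role after checkout.
    public static Connection checkout(DataSource pool, String role) throws SQLException {
        Connection con = pool.getConnection();
        try (Statement st = con.createStatement()) {
            // role must come from a trusted whitelist, never from user input
            st.execute("SET SESSION AUTHORIZATION " + role);
        }
        return con;
    }

    // Reset before the connection goes back to the pool.
    public static void release(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            st.execute("RESET SESSION AUTHORIZATION");
        }
        con.close();
    }
}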
BTW, why on earth are you using such a prehistoric version of PostgreSQL?
I don't know why EclipseLink is refusing your command; at face value it looks reasonable. Are you using ancient versions of other things, too? Which GlassFish and EclipseLink versions are you using?
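If the native query keeps being rejected, one possible workaround (a sketch only; it assumes EclipseLink honours unwrap(Connection.class), which returns the JDBC connection only inside an active transaction) is to drop down to raw JDBC and issue the SET there:
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.persistence.EntityManager;

public final class SearchPathHelper {
    // Call inside an active transaction; otherwise unwrap may return null.
    public static void setSearchPath(EntityManager em, String schema) throws SQLException {
        Connection con = em.unwrap(Connection.class);
        try (Statement st = con.createStatement()) {
            // the schema name is quoted; validate it against a whitelist in real code
            st.execute("SET search_path TO \"" + schema + "\"");
        }
    }
}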

SQL Server 2008 schema separation - Schema permissions and database roles

I really hope someone has some insight into this. Just to clarify what I'm talking about up front: when I refer to a schema, I mean the database object used for ownership separation, not the overall database design.
We use SQL Server schema objects to group tables into wholes, where each group belongs to an application. Each application also has its own database login. I've just started introducing database roles in order to fully automate deployment to the test and staging environments. We're using the xSQL Object compare engine. A batch file is run each night to perform the comparison and generate a script change file, which can then be applied to the target database along with code changes.
The issue I'm encountering is as follows. Consider the following database structure:
Database:
    Security/Schemas:
        Core
            CoreRole (owner)
            SchemaARole (select, delete, update)
            SchemaBRole (select)
        SchemaA
            SchemaARole (owner)
        SchemaB
            SchemaBRole (owner)
    Security/Roles/Database Roles:
        CoreRole
            core_login
        SchemaARole
            login_a
        SchemaBRole
            login_b
The set-up works perfectly well for the three applications that use these. The only problem is how to create / generate a script that applies the schema -> role permissions. The owner role gets applied correctly; for example, schema Core gets owner role CoreRole as expected. However, SchemaARole and SchemaBRole do not get applied.
I wasn't able to find an option to turn this on within xSQL Object, nor does an option to script this exist in SQL Server Management Studio. Well, I can't find it, at least.
Am I trying to do the impossible? How does SQL Server manage this relationship, then?
I just fired up SQL Profiler and captured what I think your scenario is. Try this:
GRANT SELECT ON [Core].[TestTable] TO [CoreRole]
GRANT DELETE ON [Core].[TestTable] TO [CoreRole]
GRANT UPDATE ON [Core].[TestTable] TO [CoreRole]
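If you would rather grant at the schema level, which matches the schema -> role layout in the question, SQL Server also supports schema-scoped grants along these lines:
GRANT SELECT, DELETE, UPDATE ON SCHEMA::[Core] TO [SchemaARole]
GRANT SELECT ON SCHEMA::[Core] TO [SchemaBRole]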