Managing users with PostgreSQL

Our system will run on a local network with no more than 50 clients, all connecting to the same local server. We are creating a DB user for each client to take advantage of the PostgreSQL privilege system.
1) Performance-wise, is it OK to have ~50 DB users instead of reimplementing a custom system?
2) (SOLVED) How can a user check (with what SQL statement) what permissions they have on a table?
Solution:
SELECT HAS_TABLE_PRIVILEGE('user','table','insert')
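For example, the two-argument form checks the privilege for the current user, so each client can test its own access without naming itself; a quick sketch against a hypothetical orders table:
-- two-argument form: checks the privilege for the current user
SELECT HAS_TABLE_PRIVILEGE('orders', 'SELECT');
SELECT HAS_TABLE_PRIVILEGE('orders', 'INSERT');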
I prefer not to reimplement the system, since a good security system isn't trivial to implement.

To answer the user/performance question: ~50 DB users is probably not a problem. The only real risk depends on how many users have unique security permissions (for example, if every one of those 50 users had different permissions on each table/schema in the database). In practice this should never happen, and as long as you have a sane group system for permissions (see the sketch below), you should be fine.
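A minimal sketch of such a group system, assuming a hypothetical orders table and a client login named client_01:
-- one group role carries the actual table permissions
CREATE ROLE reporting NOLOGIN;
GRANT SELECT, INSERT ON orders TO reporting;
-- each of the ~50 client users simply joins the group
CREATE ROLE client_01 LOGIN;
GRANT reporting TO client_01;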

Postgres architecture for one machine with several apps

I have one machine on which several applications are hosted. Applications work on separate data and don't interact; each application only needs access to its own data. I want to use PostgreSQL as the RDBMS. Which one of the following is best, and why?
1. One global Postgres server, one global database, one schema per application.
2. One global Postgres server, one database per application.
3. One Postgres server per application.
Feel free to suggest additional architectures if you think they would be better than the ones above.
The question you need to ask yourself: does any application ever need to access data from another application (in the same SQL statement)? If you can answer that with a clear NO, then you should at least go for separate databases. Cross-database queries aren't that straightforward in Postgres, so if the different applications need a lot of each other's data, then solution 1 might be a deployment layout to think about. If this would only concern very few tables, then using foreign data wrappers with separate databases might still be the better solution (a sketch follows below).
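A minimal sketch of the foreign data wrapper route, assuming two databases app_a and app_b and a shared table named shared_table (all names hypothetical):
-- run inside database app_a
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE SERVER app_b_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'app_b');
CREATE USER MAPPING FOR CURRENT_USER SERVER app_b_srv
    OPTIONS (user 'app_b_reader', password 'change-me');
-- expose only the few tables that are actually shared
IMPORT FOREIGN SCHEMA public LIMIT TO (shared_table)
    FROM SERVER app_b_srv INTO public;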
Solution 2 and 3 are more or less the same from the perspective of each application. One thing to keep in mind when deciding between 2 and 3 is availability. Some configuration changes to Postgres require a restart of the service. Is an outage of all applications acceptable in that case, even though the change was only necessary for one?
But you can always start with option 2 and move databases to different servers later.
Another question to ask is whether all applications will always use the same (major) Postgres version. With solution 2 you must make sure that all applications are compatible with a new Postgres version if one of them wants to upgrade, e.g. because of new features that the application wants to use.
Solution 1 is a bad idea: a SQL schema is not a database. Use SQL schemas for one application that has multiple "parts" like "production", "sales", "marketing", "finances"...
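A rough sketch of what that looks like (table and column names are made up): each "part" lives in its own schema, and a single statement can still join across them:
CREATE SCHEMA production;
CREATE SCHEMA sales;
CREATE TABLE production.parts (part_no int PRIMARY KEY);
CREATE TABLE sales.order_lines (part_no int, qty int);
-- one query spans both parts of the same application
SELECT p.part_no, s.qty
FROM production.parts p
JOIN sales.order_lines s ON s.part_no = p.part_no;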
As long as the final volume of the data isn't too heavy and the number of users isn't too high, use only one PG cluster to ease administration tasks.
If the data volume or the number of users increases, it will be time to separate your databases onto distinct new PG clusters.

Speed of DB2 query

I have a DB2 procedure that runs a very lengthy SELECT query (6 CTEs that touch about 5 or 6 different tables, some pivoting, a few joins). I am logged into System i Navigator as an "admin" user. This user account has the authority to do basically everything. My personal username does not. So I log in as this to make it easier for myself.
When I run this procedure (by opening a SQL Script window and typing in CALL Procedure_Name('Param1');), the processing completes in 4 or 5 seconds.
My boss has logged into his i Navigator with his own username. His username has more authority than my personal account, but less than the admin one that I use. When he runs the same procedure in the same way that I do, it takes about 15-20 seconds.
So my question is, does the username you are logged in as affect the speed at which a DB2 query runs? If so, what do I change to make the query run at the same speed for all users, preferably at the speed the admin account gets?
Use "Run & Explain" from Run SQL Scripts (preferably the latest version included with Access Client Solutions (ACS) rather than the older IBM i Access for Windows i Navigator version)
Compare the results from the different user profiles, particularly the INI options section... it sounds as if the optimization goal may be different.
*FIRSTIO means the DB will pick the most efficient plan to return the first few records as quickly as possible; perfect if a user is waiting to see something on a screen. *ALLIO means the DB will pick the most efficient plan for returning all the records; perfect for a batch process (or a client app) that is going to retrieve all the records anyway.
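If the plans differ only because of that setting, one hedged option (assuming DB2 for i; the table and column names here are made up) is to pin the goal in the statement itself so it no longer depends on per-user defaults:
SELECT order_id, status
FROM orders
OPTIMIZE FOR ALL ROWS;   -- or OPTIMIZE FOR 20 ROWS for interactive screens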
Also look at the Environment Information section.
That will show you whether one user is running in a different memory pool and/or workload group, and/or whether there is a significant difference in available memory at the time each user runs the query.

Is it a bad idea to let laptops directly perform CRUD operations on databases?

I have developed an Excel add-in that I pitched to my employer's IT department. The add-in creates SELECT, INSERT, DELETE, and UPDATE SQL statements that are sent to a PostgreSQL database and any results (in the case of a SELECT statement) are returned to Excel to report on.
My team has been very impressed with this, but IT said that they don't allow laptops to perform CRUD operations directly on databases. Instead IT has set up certain environments to do this.
Can someone tell me if IT's concern around laptops directly connecting to a database and performing CRUD operations makes sense? Is this a valid concern?
If the laptops, their users and anybody else with access to them, the network connection, and the client software are all trusted, and you can always immediately push an update to the clients when the database structure inevitably changes in the future, then it's OK.
Otherwise it's not. The standard approach is to put some kind of service between the two that acts as a gatekeeper and defines which operations on the database are allowed and who is allowed to perform them. REST (or, if you're enterprisey, SOAP) are two popular options. And don't send SQL over the wire in those cases.
With some database engines it might be possible in theory to let the users authenticate directly with the database and use the database's permission model to limit what they can do. For instance, you could only allow users to execute certain stored procedures. But in practice that's probably more trouble than it's worth.
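A minimal PostgreSQL sketch of that procedure-only model (the role, table, and function names are all hypothetical):
CREATE TABLE orders (id serial PRIMARY KEY, qty int);
CREATE ROLE excel_user LOGIN;
-- the user gets no direct table access...
REVOKE ALL ON orders FROM excel_user;
-- ...only a vetted function that runs with its owner's rights
CREATE FUNCTION add_order(p_qty int) RETURNS void
LANGUAGE sql SECURITY DEFINER
AS $$ INSERT INTO orders (qty) VALUES (p_qty); $$;
REVOKE EXECUTE ON FUNCTION add_order(int) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION add_order(int) TO excel_user;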
To be honest in practice it's probably not OK. That's too many things to trust at once.
Yes, this is a valid concern. Someone could easily inject a SQL command through your add-in. They might be able to perform an operation that erases the entire database.
Say your software has this coded into it: "SELECT $var1 FROM test WHERE $var2", where the user can modify var1 and var2. If they put "date > 10; DROP TABLE test" into var2, your statement becomes "SELECT $var1 FROM test WHERE date > 10; DROP TABLE test;"
It is a little more complicated than that, but you should read up on SQL Injection.
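The usual defense is to never splice user input into the statement text; values are passed separately as parameters. A sketch of the idea at the SQL level, using PostgreSQL's PREPARE/EXECUTE (the column name is hypothetical; in practice your client driver does this binding for you):
PREPARE recent_rows(date) AS
    SELECT * FROM test WHERE created_on > $1;
EXECUTE recent_rows('2024-01-01');   -- the value is bound, never parsed as SQL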

Firebird with 1 database file to use 2 servers

Is it possible for FirebirdSQL to run two servers sharing one database file (FDB)/repository?
No. The server needs exclusive access to the database files. In the case of the Classic architecture, multiple fb_inet_server processes access the same files, but locks are managed through the fb_lock_mgr process.
Databases on NFS or SMB/CIFS shares are disallowed unless you explicitly disable this protection; firebird.conf includes strong warnings against doing so unless you really know what you are doing.
If you mean: can two servers on different hosts share the same database, then no.
Firebird either requires exclusive access to a database (SuperServer), or coordinates access to the database by different processes on the same host through a lock file (SuperClassic and ClassicServer).
In both cases the server requires certain locking and write-visibility guarantees, and most networked filesystems don't provide those (or don't provide the locking semantics Firebird needs).
If you really, really want to, you can, by changing a setting in firebird.conf, but that is a road to database corruption or other consistency problems, and therefore not something you should want to do.
No SQL server will allow such a configuration. If you want to split the load, you may need to look at a multi-tier architecture. Using this architecture, you can split your SQL query load across many computers.

How to get available space in tablespace for a user (Oracle)

I'm working on a web application where I need to warn the user that they're running out of space in the given db user's tablespace.
The application doesn't know the credentials of the database's system user, so I can't query views like dba_users, dba_free_space, etc.
My question is, is there a way in Oracle for a user to find out how much space there is left for them in their tablespace?
Thanks!
Forgive my ignorance on the subject, for I believed the only views available on data storage were dba_free_space etc.
I realized that for the logged-in user there are corresponding user_free_space views.
A modified version of the query mentioned here turned out to be the answer to my question.
The query is as follows (it gets the space left on the DEFAULT_TABLESPACE of the logged-in user):
SELECT
    ts.tablespace_name,
    TO_CHAR(SUM(NVL(fs.bytes, 0)) / 1024 / 1024, '99,999,990.99') AS mb_free
FROM
    user_free_space fs,
    user_tablespaces ts,
    user_users us
WHERE
    -- old-style (+) outer joins keep the tablespace row even when it has
    -- no free extents, so the result is 0.00 rather than a missing row
    fs.tablespace_name(+) = ts.tablespace_name
    AND ts.tablespace_name(+) = us.default_tablespace
GROUP BY
    ts.tablespace_name;
It returns the free space in MB.
Create a stored package as a user that has the necessary privileges; you may have to create a new user. Grant EXECUTE on the package to any user that needs it. The package needs to have all the procedures and functions required to access the DBA views, but should be coded carefully to avoid exposing "too much" information. You may want to write a second package in the account of a non-privileged user to encapsulate the logic.
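A minimal sketch in that direction, connected as the privileged user (a standalone function rather than a full package, for brevity; the function and grantee names are hypothetical):
-- definer-rights function: callers use the owner's access to DBA_FREE_SPACE
CREATE OR REPLACE FUNCTION free_mb (p_tbs IN VARCHAR2)
RETURN NUMBER
AS
    v_mb NUMBER;
BEGIN
    SELECT NVL(SUM(bytes), 0) / 1024 / 1024
    INTO v_mb
    FROM dba_free_space
    WHERE tablespace_name = p_tbs;
    RETURN v_mb;
END;
/
GRANT EXECUTE ON free_mb TO app_user;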
This is potentially very complex, as it's quite possible for the user to:
Receive an "out of space" error even though the tablespaces that they have privileges on, including their default tablespace, have plenty of space. This could happen when they insert into a table that is owned by a different user which is on a tablespace that your user has no quota on. In this case, your user probably does not have access to the views required to determine whether there is free space or not,
Be able to continue inserting data even though there is no free space on the tablespaces on which they have a quota; they might not even have a quota on their default tablespace.
So unless you have a rather simple case, you really have to understand the way the user interacts with the database at a far deeper level, and look at free space from a more holistic, database-wide viewpoint.