I have a DB2 procedure that runs a very lengthy SELECT query (6 CTEs that touch about 5 or 6 different tables, some pivoting, a few joins). I am logged into System i Navigator as an "admin" user. This user account has the authority to do basically everything; my personal username does not, so I log in as the admin account to make things easier for myself.
When I run this procedure (by opening a SQL Script window and typing in CALL Procedure_Name('Param1');), the processing completes in 4 or 5 seconds.
My boss has logged into his i Navigator as his own username. His username has more authority than my personal account, but less than the admin one that I use. When he runs the same procedure in the same way that I do, it takes about 15-20 seconds.
So my question is: does the username you are logged in as affect the speed at which a DB2 query runs? If so, what do I change to make the query run at the same speed for all users, preferably at the speed at which the admin account runs it?
Use "Run & Explain" from Run SQL Scripts (preferably the latest version included with Access Client Solutions (ACS) rather than the older IBM i Access for Windows i Navigator version)
Compare the results from the different user profiles. Particularly the INI options section...it sounds as if the optimization goal may be different.
*FIRSTIO means the DB will pick the most efficient plan to return the first few records as quickly as possible; perfect if a user is waiting to see something on a screen. *ALLIO means the DB will pick the most efficient plan for returning all the records; perfect for a batch process (or a client app) that's going to retrieve all the records anyway.
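If the optimization goal does turn out to differ, one knob that can be set in the statement itself (rather than per user or connection) is the OPTIMIZE FOR n ROWS clause; a hedged sketch against a hypothetical table, not the original procedure's query:

    -- A small n nudges the optimizer toward returning the first rows quickly (in the spirit of *FIRSTIO);
    -- a large n leans it toward a plan that is efficient for the whole result set (in the spirit of *ALLIO).
    SELECT order_id, order_total
    FROM orders
    ORDER BY order_total DESC
    OPTIMIZE FOR 20 ROWS;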
Also look at the Environment Information section.
That will show you whether one user is running in a different memory pool and/or workload group, and/or whether there is a significant difference in the memory available at the time each user runs.
In order to check whether a new version of the database (in staging) reacts the same way as (or better than) the production database, I would like to capture all requests executed on the production server and replay them against the staging database.
Is there a tool that does this job?
What would be interesting is the ability to compare execution times during the replay and highlight the queries that ran slower.
Otherwise, I thought I would capture the queries by setting log_min_duration_statement to 0 (so that all statements are logged in the PostgreSQL logfile), and then parse the file to extract the requests and replay them on the other server. Is there a better way to do it?
(The current database version is PostgreSQL 9.6, but I'm interested even if it only works on a higher version, for next time.)
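As a sketch of the log-based capture described above (assuming superuser access and that reloading the configuration is acceptable), the relevant settings can be changed from SQL:

    -- Log the text and duration of every statement (0 = no minimum duration).
    ALTER SYSTEM SET log_min_duration_statement = 0;
    -- Make the log lines easier to parse later (timestamp, PID, user, database).
    ALTER SYSTEM SET log_line_prefix = '%m [%p] %u@%d ';
    -- Apply the changes without restarting the server.
    SELECT pg_reload_conf();

Be aware that logging every statement on a busy production server can generate a lot of extra I/O and very large logfiles.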
I want to use SchemaSpy, but my database is used heavily 24/7 and the DBA won't give me access, even read-only. However, I can give the DBA some commands, and he can run them and give me the results.
Is it possible for SchemaSpy to run in offline mode? In other words, can I give it a dump of all the CREATE TABLE and CREATE INDEX statements, plus a list of the table sizes, and have it generate the report from that?
OK. The best thing about SchemaSpy is that it runs automatically, collects all the objects, and, in the case of tables, performs a row count.
In your specific case you can use a workaround as follows.
Ask your DBA for a dump, or even just the creation script for an empty database containing only the structures, and point SchemaSpy at that database, which simulates your production one.
By the way, I have created a Docker image that uses SchemaSpy to document all the databases on a server.
https://github.com/krismorte/database-diagrams
We have a PostgreSQL database, no 3rd-party software, a Linux admin, and a SQL DBA with little PostgreSQL experience.
We need to set up audit/access logging of all transactions on the CC (credit card) tables. We enabled logging, but we are concerned about logging everything; we want to restrict it to specified tables. I have not found a resource that I understand well enough to accomplish this.
A few blogs have mentioned table triggers and logfiles, and I found another that discusses functions. I am just not sure how to proceed. The following is the PCI information I am working from:
1) (Done) Install the pg_stat_statements extension to monitor all queries (SELECT, INSERT, UPDATE, DELETE) (see the sketch after this list)
2) Set up a monitor to find suspicious access on the PAN-holding table
3) Enable connection/disconnection logging
4) Enable web server access logs
5) Monitor Postgres logs for unsuccessful login attempts
6) Automated log analysis & access monitoring using alerts
7) Keep an archive of audit and log history for at least one year, with the last 3 months readily available for analysis
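For item 1), a minimal sketch of what the already-installed pg_stat_statements extension gives you (column names are those of PostgreSQL 9.6; later versions rename total_time to total_exec_time):

    -- Requires shared_preload_libraries = 'pg_stat_statements' and, once per database:
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- Statements with the highest cumulative execution time.
    SELECT query, calls, total_time, rows
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 20;

Note that pg_stat_statements only aggregates statistics; it does not record who touched which row, so it does not replace table-level auditing.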
Update
We also need to apply a password policy to the PostgreSQL DB users:
90-day expirations (there is a place to set a date, but not an interval; see the sketch after this list)
Lock out a user after 6 failed attempts; locked out for 30 minutes or until an administrator re-enables the user ID
Force re-authentication when idle for more than 15 minutes
Passwords/passphrases must meet the following: a minimum length of at least seven characters; contain both numeric and alphabetic characters; cannot be the same as any of the last 4 passwords/passphrases used
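On the expiration point, PostgreSQL only offers a per-role expiry date rather than a rotation interval, so the 90-day cycle has to be driven by an external script or job; a minimal sketch (the role name is hypothetical):

    -- Set an explicit password expiry date for a role (there is no built-in interval).
    ALTER ROLE app_user VALID UNTIL '2020-03-31';
    -- Review the current expiry dates.
    SELECT rolname, rolvaliduntil FROM pg_roles;

Lockout after failed attempts and idle re-authentication are not built into core PostgreSQL either and usually have to be enforced in front of the database.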
2) There is no direct way to log access to tables. The pgAudit extension claims that it can do that, though I have never used it (see the sketch after these numbered points).
3) This can easily be done using log_connections and log_disconnections.
4) This has nothing to do with Postgres.
5) This can be done once connection logging is enabled, by monitoring the logfile.
6) No idea what that is supposed to mean.
7) That is independent of the Postgres setup; you just need to make sure the Postgres logfiles are archived properly.
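For points 2) and 3), a rough sketch of what the configuration could look like, assuming pgAudit has been installed and loaded via shared_preload_libraries, and using cc_data as a hypothetical name for the CC table:

    -- Point 3): log all connections and disconnections.
    ALTER SYSTEM SET log_connections = on;
    ALTER SYSTEM SET log_disconnections = on;

    -- Point 2): object-level auditing with pgAudit.
    CREATE EXTENSION pgaudit;
    CREATE ROLE cc_auditor NOLOGIN;
    ALTER SYSTEM SET pgaudit.role = 'cc_auditor';
    -- Statements against objects this role has privileges on are written to the log,
    -- no matter which user actually runs them.
    GRANT SELECT, INSERT, UPDATE, DELETE ON cc_data TO cc_auditor;

    SELECT pg_reload_conf();

Again, I have not used pgAudit myself, so verify the behaviour against its documentation before relying on it for PCI evidence.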
I have developed an Excel add-in that I pitched to my employer's IT department. The add-in creates SELECT, INSERT, DELETE, and UPDATE SQL statements that are sent to a PostgreSQL database and any results (in the case of a SELECT statement) are returned to Excel to report on.
My team has been very impressed with this, but IT said that they don't allow laptops to perform CRUD operations directly on databases. Instead IT has set up certain environments to do this.
Can someone tell me if IT's concern around laptops directly connecting to a database and performing CRUD operations makes sense? Is this a valid concern?
If the laptops, their users and anybody else with access to them, the network connection, and the client software are all trusted, and you can always immediately push an update to the clients when the database structure inevitably changes in the future, then it's OK.
Otherwise it's not. The standard way would be to put some kind of service between the two that acts as a gatekeeper and defines the allowed operations on the database and who is allowed to perform them. REST (or, if you're enterprisey, SOAP) are the two popular options, and don't send raw SQL over the wire in those cases.
With some database engines it might be possible in theory to let the users directly authenticate with the database and use the database's permission model to limit what they can do. For instance, you could only allow users to execute certain stored procedures (sketched below). But in practice that's probably more trouble than it's worth.
To be honest, in practice it's probably not OK. That's too many things to trust at once.
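If you did go the stored-procedure route in PostgreSQL, a minimal sketch might look like this (the table, function, and role names are hypothetical):

    -- The function runs with its owner's rights, so callers never touch the table directly.
    CREATE FUNCTION add_order(p_customer integer, p_total numeric)
    RETURNS void
    LANGUAGE sql
    SECURITY DEFINER
    AS $$ INSERT INTO orders (customer_id, order_total) VALUES (p_customer, p_total); $$;

    -- By default any role can execute a new function, so lock it down first.
    REVOKE ALL ON FUNCTION add_order(integer, numeric) FROM PUBLIC;
    GRANT EXECUTE ON FUNCTION add_order(integer, numeric) TO excel_user;
    REVOKE ALL ON TABLE orders FROM excel_user;

This still leaves you maintaining the procedures and the client in lockstep, which is part of why it is often more trouble than it is worth.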
Yes, this is a valid concern. Someone could easily inject an SQL command into your database. They might be able to perform an operation that erases the entire database.
Say your software has this coded into it: "SELECT $var1 FROM TEST WHERE $var2", and the user can modify var1 and var2. If they put "date > 10; DROP TABLE TEST; --" into var2, your statement becomes "SELECT $var1 FROM TEST WHERE date > 10; DROP TABLE TEST; --".
It is a little more complicated than that, but you should read up on SQL Injection.
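One common mitigation is to pass user input as parameters rather than splicing it into the statement text. A minimal sketch using PostgreSQL's server-side prepared statements (table and column names are hypothetical; in practice you would use the parameter binding offered by your client library):

    -- The parameter is treated strictly as a value, never as SQL text.
    PREPARE report_query (integer) AS
        SELECT col1 FROM test WHERE date_col > $1;
    EXECUTE report_query (10);
    DEALLOCATE report_query;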
Our system will run on a local network with no more than 50 clients that connect to the same local server. We are creating a DB user for each client, to take advantage of the PostgreSQL privilege system.
1) Performance-wise, is it OK to have ~50 DB users instead of implementing a custom permission system?
2) (SOLVED) How can a user check (with what SQL statement) what permissions he has on a table?
Solution:
SELECT HAS_TABLE_PRIVILEGE('user','table','insert')
I prefer not to reimplement the system, since a good security system isn't trivial to implement.
To answer the user/performance question: 50 DB users is probably not a problem. The only real risk would depend on how many users have unique security permissions (for example, if every one of those 50 users had different permissions on each table/schema in the database). In practice this should never happen, and as long as you have a sane group system for permissions, you should be fine.
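A minimal sketch of the kind of group-based permission setup meant here (role and table names are hypothetical):

    -- One group role per kind of access, instead of per-user grants.
    CREATE ROLE readers NOLOGIN;
    CREATE ROLE writers NOLOGIN;

    GRANT SELECT ON orders TO readers;
    GRANT SELECT, INSERT, UPDATE, DELETE ON orders TO writers;

    -- Individual login users simply inherit from the group roles.
    CREATE ROLE alice LOGIN PASSWORD 'change-me' IN ROLE readers;
    CREATE ROLE bob LOGIN PASSWORD 'change-me' IN ROLE writers;

    -- A user can check their effective privilege on a table:
    SELECT has_table_privilege('alice', 'orders', 'insert');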