We have a PostgreSQL database, no 3rd-party software, a Linux admin, and a SQL DBA with little PostgreSQL experience.
We need to set up audit/access logging of all transactions on the CC tables. We enabled logging, but we are concerned about logging everything; we want to restrict it to specified tables. I am not finding a resource that I understand well enough to accomplish this.
A few blogs have mentioned table triggers and logfiles, and I found another that discusses functions. I am just not sure how to proceed. The following is the PCI information I am working from:
1) (Done) Install the pg_stat_statements extension to monitor all queries (SELECT, INSERT, UPDATE, DELETE)
2) Set up monitoring to detect suspicious access to the PAN-holding table
3) Enable connection/disconnection logging
4) Enable web server access logs
5) Monitor Postgres logs for unsuccessful login attempts
6) Automated log analysis & access monitoring using alerts
7) Keep archived audit and log history for at least one year, with the last 3 months readily available for analysis
Update
We also need to apply a password policy to PostgreSQL DB users:
90-day expirations (there is a place to set a date but not an interval; see the sketch after this list)
Lock out a user after 6 failed attempts; keep the account locked for 30 minutes or until an administrator re-enables the user ID
Force re-authentication when idle for more than 15 minutes
Passwords/passphrases must meet the following: require a minimum length of at least seven characters, contain both numeric and alphabetic characters, and not be the same as any of the last 4 passwords/passphrases used
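Regarding the 90-day expiration item: a role's VALID UNTIL attribute only accepts a literal timestamp, not an interval, but the value can be computed and applied dynamically. A minimal sketch, assuming a hypothetical role named app_user:

-- compute "now + 90 days" and apply it as the role's password expiry
DO $$
BEGIN
  EXECUTE format('ALTER ROLE app_user VALID UNTIL %L', now() + interval '90 days');
END
$$;

Running something like this at each password change keeps the expiry rolling; as far as I know, the lockout and password-history requirements have no built-in equivalent in core PostgreSQL.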
2) There is no direct way to log access to individual tables in core PostgreSQL. The pgAudit extension claims it can do that; I have never used it myself, though.
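Based on its documentation, object-level auditing with pgAudit looks roughly like the following. This is an untested sketch: it assumes the extension is installed and loaded via shared_preload_libraries, and the role and table names are placeholders.

-- pgAudit logs any statement that exercises a privilege granted to the audit role
CREATE EXTENSION pgaudit;
CREATE ROLE cc_auditor NOLOGIN;
ALTER SYSTEM SET pgaudit.role = 'cc_auditor';
SELECT pg_reload_conf();

-- audit only the card-holding table (placeholder name)
GRANT SELECT, INSERT, UPDATE, DELETE ON cc_pan_data TO cc_auditor;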
3) can easily be done using log_connections and log_disconnections
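For example (cluster-wide, takes effect for new sessions after a reload):

ALTER SYSTEM SET log_connections = on;
ALTER SYSTEM SET log_disconnections = on;
SELECT pg_reload_conf();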
4) has nothing to do with Postgres
5) can be done, once connection logging is enabled, by monitoring the log file for failed login messages
6) no idea what that should mean
7) that is independent of the Postgres setup. You just need to make sure the Postgres logfiles are archived properly.
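To make that archiving easy for the Linux admin, the logging collector can write predictable, daily-rotated files; the file name pattern and rotation age below are just examples:

ALTER SYSTEM SET logging_collector = on;   -- requires a server restart
ALTER SYSTEM SET log_filename = 'postgresql-%Y-%m-%d.log';
ALTER SYSTEM SET log_rotation_age = '1d';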
Related
I'm using an Azure Ubuntu instance to store some data every minute in a MongoDB database. I noticed that the data is being wiped approximately once a day. Why is my data being wiped?
I log a count of the DB every minute. Here are two consecutive minutes that show all records being deleted:
**************************************
update at utc: 2022-08-06 10:19:02.393351 local: 2022-08-06 20:19:02.393366
count after insert = 1745
**************************************
update at utc: 2022-08-06 10:20:01.643487 local: 2022-08-06 20:20:01.643544
count after insert = 1
**************************************
You can see the data is wiped, as the count after insert goes from 1745 to 1. Why is my data being wiped?
Short Answer
The data was being deleted in a ransom attack. I wasn't using a MongoDB password, as originally I was only testing MongoDB locally. Then, when I set bindIp to 0.0.0.0 for remote access, it meant anyone could access the database if they guessed the host (this was pretty dumb of me).
Always secure the server with a password, especially if your bindIp is 0.0.0.0. For instructions see https://www.mongodb.com/features/mongodb-authentication
More Detail
To check whether you have been hit by a ransom attack, look for a ransom note. An extra database may appear (check show dbs); in my case the new database containing the ransom note was called "READ__ME_TO_RECOVER_YOUR_DATA":
All your data is a backed up. You must pay 0.05 BTC to 1Kz6v4B5CawcnL8jrUvHsvzQv5Yq4fbsSv 48 hours for recover it. After 48 hours expiration we will leaked and exposed all your data. In case of refusal to pay, we will contact the General Data Protection Regulation, GDPR and notify them that you store user data in an open form and is not safe. Under the rules of the law, you face a heavy fine or arrest and your base dump will be dropped from our server! You can buy bitcoin here, does not take much time to buy https://localbitcoins.com or https://buy.moonpay.io/ After paying write to me in the mail with your DB IP: rambler+1c6l#onionmail.org and/or mariadb#mailnesia.com and you will receive a link to download your database dump.
Another way to check for suspicious activity is the MongoDB service log in /var/log/mongodb/mongod.log. On other systems the filename might be mongodb.log. In my case there is a series of commands around the attack time in the log, the first of which reads:
{"t":{"$date":"2022-08-07T09:54:37.779+00:00"},"s":"I", "c":"COMMAND", "id":20337, "ctx":"conn30393","msg":"dropDatabase - starting","attr":
{"db":"READ__ME_TO_RECOVER_YOUR_DATA"}}
This command starts dropping the database. As suspected, there are no commands to read any data, which means the attacker isn't backing anything up as they claim. Unfortunately, someone actually paid this scammer earlier this month: https://www.blockchain.com/btc/tx/65d035ca4db759a73bd9cb68610e04742ffe0e0b71ecdf88f54c7e464ee80a51
I am migrating an Oracle database to PostgreSQL. During the migration I came across the following query on the Oracle side.
Oracle Query:
SELECT
TRIM(value) AS val
FROM v$parameter
WHERE name = 'job_queue_processes';
I just want to know how we can get the maximum number of job slaves per instance that can be created for execution on the PostgreSQL side.
I have created the pg_cron extension and set up the required jobs so far, but one of the functions uses the above query in Oracle, so I just want to convert it to PostgreSQL.
The documentation is usually a good source of information.
Important: By default, pg_cron uses libpq to open a new connection to the local database.
In this case, there is no specific limit. It would be limited in the same way other user connections are limited: mostly by max_connections, but possibly lowered from that for particular users or particular databases with ALTER ROLE ... CONNECTION LIMIT or ALTER DATABASE ... CONNECTION LIMIT. You could create a user specifically for cron if you wanted to limit its connections separately, then grant that user the privileges of the other roles it will operate on behalf of. I don't know whether pg_cron deals with reaching the limit gracefully or not.
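A minimal sketch of that idea; the role name and limit are made up, not pg_cron defaults:

-- dedicated login role for pg_cron jobs, capped at 5 concurrent connections
CREATE ROLE cron_runner LOGIN PASSWORD 'change-me' CONNECTION LIMIT 5;
-- let it act on behalf of the application's owning role (hypothetical name)
GRANT app_owner TO cron_runner;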
Alternatively, pg_cron can be configured to use background workers. In that case, the number of concurrent jobs is limited by the max_worker_processes setting, so you may need to raise that.
Note that the max number of workers may have to be shared with parallel execution workers and maybe with other extensions.
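So a rough PostgreSQL counterpart of the Oracle v$parameter lookup is to read the relevant limits from pg_settings; which value applies depends on whether pg_cron is using libpq connections or background workers:

SELECT name, setting
FROM pg_settings
WHERE name IN ('max_connections', 'max_worker_processes');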
I have a DB2 procedure that runs a very lengthy SELECT query (6 CTEs that touch about 5 or 6 different tables, some pivoting, a few joins). I am logged into System i Navigator as an "admin" user. This user account has the authority to do basically everything; my personal username does not, so I log in as the admin account to make things easier for myself.
When I run this procedure (by opening a SQL Script window and typing in CALL Procedure_Name('Param1');), the processing completes in 4 or 5 seconds.
My boss has logged into his i Navigator as his own username. His username has more powers than my personal account, but less than the admin one that I use. When he runs the same procedure in the same method that I do, it takes about 15-20 seconds to run it.
So my question is, does the username you are logged in as affect the speed in which a DB2 query runs? If so, what do I change to make the query run at the same speed for all users, preferably at the speed in which the admin account runs it?
Use "Run & Explain" from Run SQL Scripts (preferably the latest version included with Access Client Solutions (ACS) rather than the older IBM i Access for Windows i Navigator version)
Compare the results from the different user profiles, particularly the INI options section; it sounds as if the optimization goal may be different.
*FIRSTIO means the DB will pick the most efficient plan to return the first few records as quickly as possible; perfect if a user is waiting to see something on a screen. *ALLIO means the DB will pick the most efficient plan for returning all the records; perfect for a batch process (or a client app) that's going to retrieve all the records anyway.
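If the goal does turn out to differ between the profiles, one way to pin it down per statement (rather than per environment) is the optimize-clause. A sketch against a hypothetical table, where a large row count steers the optimizer toward an all-rows plan and a small one toward a first-rows plan:

SELECT col1, col2
FROM my_schema.my_table       -- hypothetical table
OPTIMIZE FOR 1000000 ROWS;    -- large n favors an *ALLIO-style plan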
Also look at the Environment Information section. That will show you whether one user is running in a different memory pool and/or workload group, and/or whether there's a significant difference in the memory available at the time each user runs.
How can I obtain the creation date or time of an IBM's DB2 database without connecting to the specified database first? Solutions like:
select min(create_time) from syscat.tables
and:
db2 list tables for schema SYSIBM
require me to connect to the database first, like:
db2 connect to dbname user userName using password
Is there another way of doing this through a DB2 command instead, so I wouldn't need to connect to the database?
Can the db2look command be used for that?
Edit 01: Background Story
Since more than one person asked why I need to do this and for what reasons, here is the background story.
I have a server with the DB2 DBMS where many people and automated scripts create databases for temporary tasks and tests. It's never meant to keep the data for a long time. However, for one reason or another (e.g., a developer not cleaning up after himself, or tests stopping forcefully before they can do the cleanup), some databases never get dropped, and they accumulate until the hard disk eventually fills up. So the idea of the app is to look up the age of each database and drop it if it's older than, say, 6 months.
Our system will run on a local network with no more than 50 clients that connect to the same local server. We are creating a DB user for each client to take advantage of the PostgreSQL privilege system.
1) Performance-wise, is it OK to have ~50 DB users instead of reimplementing a custom privilege system?
2) (SOLVED) How can a user check (with what SQL statement) what permissions he has on a table?
Solution:
SELECT HAS_TABLE_PRIVILEGE('user','table','insert')
I would prefer not to reimplement the system, since a good security system isn't trivial to implement.
To answer the user/performance question: ~50 DB users will probably not cause any performance problem. The only real risk depends on how many users have unique security permissions (for example, if every one of those 50 users had different permissions on each table/schema in the database). In practice this should never happen, and as long as you have a sane group-based system for permissions, you should be fine.
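A minimal sketch of such a group-based setup, with hypothetical role names: privileges go to a handful of NOLOGIN group roles, and each of the ~50 login users just gets membership in the groups it needs.

-- group role that holds the read permissions
CREATE ROLE app_readers NOLOGIN;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO app_readers;

-- one of the ~50 client users; it inherits the group's privileges
CREATE ROLE alice LOGIN PASSWORD 'change-me';
GRANT app_readers TO alice;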