SQL Server 2008 R2 - Audit log - understanding hierarchy

I have searched and not found a decent explanation of the standard SQL Server 2008 audit log output (the basics: SQL Server audit records).
So, first question: does anyone know of such a link?
I have had to set up an audit on a SQL Server 2008 R2 database to capture execute, insert, update and delete at the database level, with dbo as the principal. I have no issues with the setup of the auditing itself, and it is returning the expected large amount of data. What is not clear is how to determine the hierarchy in the output: I need to isolate which entry is the parent object. I was wondering if the session id could be used in conjunction with something else. All sequence numbers are 1.
The overall aim is to remove the db access that utilises dbo and create a role instead. Clearly I want to assign permissions only to the objects that are actually required.
So the main question: does anyone know how to determine the parent object in the audit log?
Thanks folks.
---Extra:
I was attempting to determine which objects were called first, and thus the level at which the permissions for execution are set.
For example, when executing a stored proc which then inserts into a table, or executes functions or other stored procs, the audit stores all the 'actions', from the initial sp exec down to all the table inserts etc. But the permission is only required on the initial sp, not on all the other reported objects (ignoring the dynamic SQL stuff atm).
I was thus hoping to identify the 'top' level so I could assign permissions to a new role. There are a lot of objects in the db, and in order to capture the vast majority of permissions the audit has been set up on a UAT site which has a reduced user base.
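To illustrate what I'm working with: a file-target audit can be read back with sys.fn_get_audit_file and ordered per session. A rough sketch (the file path and the dbo filter are placeholders for my setup):

    SELECT event_time, sequence_number, session_id, action_id,
           database_name, schema_name, object_name, statement
    FROM sys.fn_get_audit_file('D:\AuditLogs\*.sqlaudit', DEFAULT, DEFAULT)
    WHERE database_principal_name = 'dbo'
    ORDER BY session_id, event_time, sequence_number;

As the comments below suggest, the only ordering the log itself provides is date/time within a session, so the first EXECUTE row of a batch is the best candidate for the 'top level' object.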

Not sure what you mean in this case by "hierarchy" or "parent object". There's not really any relationship between audit entries except the sequence dictated by their date/time. Are you trying to determine the table accessed? P.S. I've written a good bit about SQL Audit at www.ultimatewindowssecurity.com/sqlserver.
If I understand it correctly, you want to know which tables, views and stored procedures are being accessed by dbo. Is that correct?

Related

Optimize the trigger to add audit log

I have a local database which is the production database, on which all operations are done in real time. I am storing a log of each action in an audit log table in another database, via a trigger. It basically checks whether any change was made to any of the row's columns, and if so it removes that row and adds it AGAIN (which is not a good way, I think, as it should simply update it, but for certain reasons I need to delete and re-add it).
There are some tables on which operations happen rapidly, with hundreds of rows being added to the database. This slows down saving the data into the audit log table. If the trigger has to delete 100 rows and add 100 again, it will obviously affect performance, and as the number of rows grows the performance will degrade further.
What is the best practice to tackle this? I have been looking into Read Replica and Foreign Data Wrapper, but a Read Replica is read-only for PostgreSQL, and I don't really see how a Foreign Data Wrapper would help me; it was suggested by one of my colleagues.
Hope someone can guide me in the right direction.
A log is append-only by definition. Loggers should never be modifying or removing existing entries.
Audit logs are no different. Audit triggers should INSERT an entry for each change (however you want to define "change"). They should never UPDATE or DELETE anything*.
The change and the corresponding log entry should be written to the same database within the same transaction, to ensure atomicity/consistency; logging directly to a remote database will always leave you with a window where the log is committed but the change is not (or vice versa).
If you need to aggregate these log entries and push them to a different database, you should do it from an external process, not within the trigger itself. If you need this to happen in real time, you can inform the process of new changes via a notification channel.
* In fact, you should revoke UPDATE/DELETE privileges on the audit table from the user inserting the logs. Furthermore, the trigger should ideally be a SECURITY DEFINER function owned by a privileged user with INSERT rights on the log table. The user connecting to the database should not be given permission to write to the log table directly.
This ensures that if your client application is compromised (whether due to a malfunction, or a malicious user e.g. exploiting an SQL injection vulnerability), then your audit log retains a complete and accurate record of everything it changed.
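As a minimal sketch of the above in PL/pgSQL (table, trigger, column and channel names are made up for illustration):

    -- Append-only audit table.
    CREATE TABLE audit_log (
        id         bigserial   PRIMARY KEY,
        changed_at timestamptz NOT NULL DEFAULT now(),
        table_name text        NOT NULL,
        operation  text        NOT NULL,
        old_row    jsonb,
        new_row    jsonb
    );

    -- SECURITY DEFINER: the function runs with its owner's privileges,
    -- so the connecting user needs no direct rights on audit_log.
    CREATE FUNCTION log_change() RETURNS trigger
    LANGUAGE plpgsql SECURITY DEFINER AS $$
    BEGIN
        INSERT INTO audit_log (table_name, operation, old_row, new_row)
        VALUES (TG_TABLE_NAME, TG_OP,
                CASE WHEN TG_OP IN ('UPDATE', 'DELETE') THEN to_jsonb(OLD) END,
                CASE WHEN TG_OP IN ('INSERT', 'UPDATE') THEN to_jsonb(NEW) END);
        -- Optional: tell an external aggregator that a new entry exists.
        PERFORM pg_notify('audit_channel', TG_TABLE_NAME);
        RETURN NULL;  -- return value is ignored for AFTER triggers
    END $$;

    CREATE TRIGGER my_table_audit
        AFTER INSERT OR UPDATE OR DELETE ON my_table
        FOR EACH ROW EXECUTE FUNCTION log_change();  -- EXECUTE PROCEDURE before PostgreSQL 11

    -- Nobody except the owner/superuser can rewrite history.
    REVOKE UPDATE, DELETE, TRUNCATE ON audit_log FROM PUBLIC;

Note this logs whole rows rather than per-column diffs; the INSERT-per-change shape is the point, not the exact payload.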

Is there a way to protect SQL statements from being altered, while still allowing the person to execute the statement?

I work for a large organization that relies heavily on SQL Developer for financial reconciliation. We have only SELECT privileges. Several people have access to the same SQL statements; is there a way to ensure they can't change the code? We need to ensure that the people who have access to run our SQL statements to generate a report do not have the ability to change the code. This forces them to submit change requests if they need the code changed, which helps us create an audit log of the changes made. Our financial audit includes an audit of our SQL statements; with too many people making changes it is hard to track/validate them.
Remove their privileges to SELECT from the tables directly.
Wrap the existing code in a stored procedure (if bind variables are used in the SQL statement then they can be arguments to the stored procedure).
This also allows you to put additional code for verification/auditing inside the stored procedure, so that it is automatically run with the queries that the users require.
Create a ROLE and grant the EXECUTE privilege on the stored procedure to that role.
Give that role only to the people who are required to run that stored procedure.
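A rough sketch of those steps (all names, and the query itself, are placeholders):

    -- 1. Wrap the report query; former bind variables become parameters.
    CREATE OR REPLACE PROCEDURE run_recon_report (
        p_period IN  VARCHAR2,
        p_result OUT SYS_REFCURSOR
    ) AS
    BEGIN
        -- room for extra verification/audit logging here
        OPEN p_result FOR
            SELECT *
            FROM   financial_data
            WHERE  period = p_period;
    END run_recon_report;
    /

    -- 2. Grant execution through a role, not direct table access.
    CREATE ROLE recon_report_role;
    GRANT EXECUTE ON run_recon_report TO recon_report_role;
    GRANT recon_report_role TO report_user;

Since the procedure runs with definer's rights by default, the people holding the role need no SELECT privilege on the underlying tables at all.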

db2 creating proxy user account

SQL server has an option to create proxy user accounts with the statement
CREATE USER proxyUser WITHOUT LOGIN;
I couldn't find much help on the internet on getting the DB2 (v8) equivalent of this. I'm not sure whether this is possible; if it is, please let me know how.
The scenario where I want to use this is as follows.
I have a table with ~8 million records which gets updated daily. Before the inserts happen, a few records are deleted from the table, around 2 million of them. Since these deletes need not be logged, we decided on switching off logging during the deletes. Since our credentials do not have ALTER TABLE rights, we decided to put the ALTER and DELETE statements in a script and execute the script using the proxy account, irrespective of which user executes the SP.
I found this article, which closely describes the scenario I described above. The differences are that I need to do this on DB2, and I need to do the deletes without logging them.
http://www.mssqltips.com/sqlservertip/2583/grant-truncate-table-permissions-in-sql-server-without-alter-table/
Thanks
Arjun
It will work basically in the same manner in DB2, with a few exceptions. Firstly, there's no TRUNCATE TABLE statement in DB2 8.2 (and there's no DB2 version 8 on Linux). Secondly, there are no database users in DB2 -- all users are defined externally in the operating system, so there's no CREATE USER statement either.
All statements in a stored procedure, except dynamic SQL, are executed with the authorization of the procedure creator.
So, using the authorized ID, e.g. the database administrator's ID, create the stored procedure that does what you need (ALTER, DELETE, whatever), then grant the EXECUTE privilege on that procedure to whoever needs to run it.
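A rough sketch of what that could look like on DB2 for LUW (names invented; note that ACTIVATE NOT LOGGED INITIALLY only works if the table was created with the NOT LOGGED INITIALLY attribute):

    -- Created by an ID that has ALTER rights on the table:
    CREATE PROCEDURE purge_daily_rows ()
    LANGUAGE SQL
    BEGIN
        -- Suppress logging for the rest of this unit of work.
        -- Caution: a rollback leaves the table unusable, so keep it short.
        ALTER TABLE sales_fact ACTIVATE NOT LOGGED INITIALLY;
        DELETE FROM sales_fact WHERE load_date < CURRENT DATE - 30 DAYS;
        COMMIT;
    END
    @

    -- Static SQL inside runs with the creator's authorization,
    -- so the batch account needs only:
    GRANT EXECUTE ON PROCEDURE purge_daily_rows TO USER batch_user;

(The @ is an alternate statement terminator, as commonly used in DB2 scripts.)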

report migration from SQL server to Oracle

I have a report in SQL Server and I am migrating it to Oracle.
The approach I used in SQL Server is to load sum(sales) per person for a given month into temporary tables (hash tables) and use those tables to join with other transaction tables to show the details. When it comes to Oracle I am not sure if I can use the same method, because hash tables (temporary tables in SQL Server) are specific to a session, and I want to make sure this does not create any problem with the output. Please advise if there is anything in Oracle which is analogous to that.
I came to know there are global temporary tables in Oracle; do they work in the manner I mentioned above? Also,
if a user has no create/drop table privileges, can they still use global temporary tables?
Please help me.
You'll have to show some code, or at least some pseudo-code of how your process runs, for anyone to help you. Having said that...
One thing that is different in Oracle, compared to temporary tables in other databases, is that you do not create them each time you need them. You create them once, and the data in the table is retained either until you commit/rollback (transaction-based) or until you end your session (session-based global temporary tables). Also, the data in a temporary table is visible only to the session that inserts the data into the table.
If you are generating the output files once and you don't need that data later, then global temporary tables would probably fit in cleanly, with some minor changes.
Since you do not create the temporary tables each time you use them, you don't need the create/drop privilege. All you'd need is the insert/select privilege. A select-only grant will not help, because you cannot read another session's data anyway, so there would be nothing to select.
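A sketch of how that could look (names invented; the DBA creates the definition once, and each session then works with its own private rows):

    -- One-time setup by the DBA: the definition is permanent, the data is not.
    CREATE GLOBAL TEMPORARY TABLE monthly_sales_tmp (
        person      VARCHAR2(100),
        total_sales NUMBER
    ) ON COMMIT PRESERVE ROWS;  -- session-based; use DELETE ROWS for transaction-based

    GRANT SELECT, INSERT, DELETE ON monthly_sales_tmp TO report_user;

    -- In each reporting session:
    INSERT INTO monthly_sales_tmp (person, total_sales)
    SELECT person, SUM(sales)
    FROM   sales
    WHERE  sale_month = :given_month
    GROUP  BY person;

    SELECT t.person, t.total_sales, tr.details
    FROM   monthly_sales_tmp t
    JOIN   transactions tr ON tr.person = t.person;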

How can I audit with the Microsoft SQL Server LDF file?

We need an audit log in the product that we are creating. We use SQL Server 2008 R2. I learned that the LDF file keeps a complete log of all transactions that were made*.
I've found ApexSQL Log; this tool analyses the LDF file and provides a GUI. It's a great demonstration of what's possible, but it's expensive. More info: http://www.apexsql.com/sql_tools_log.aspx
Do you know of other programs that can analyse LDF files? Or perhaps other methods to provide audit-trail functionality? I know that it's possible to create triggers, but if it isn't necessary to add things to my database schema then I would rather not do it.
*Only if you select the full recovery model.
How about the Change Data Capture (CDC) functionality in SQL Server 2008 R2? Doesn't that serve your purpose?
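If CDC does fit, enabling it is only a couple of system procedure calls (the schema and table names below are placeholders; note that CDC shipped with SQL Server 2008 and requires Enterprise or Developer edition):

    -- Enable CDC for the database, then for each table to audit.
    EXEC sys.sp_cdc_enable_db;

    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'MyAuditedTable',
        @role_name     = NULL;  -- or a role that gates access to the change data

    -- Changes are then read back through the generated functions, e.g.
    -- cdc.fn_cdc_get_all_changes_dbo_MyAuditedTable(@from_lsn, @to_lsn, N'all').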
When it comes to the information stored in an LDF file, make sure to form a full log chain. A log chain is a continuous sequence of transaction log backups: it starts with a full database backup, followed by all subsequent log backups up through the auditing point. If the chain becomes broken, only the transactions in the logs up to the last backup before the missing one can be shown with full information (e.g. a schema and object name, or a row history).
Unlike INSERT and DELETE operations, which are fully logged in the LDF files, UPDATE operations are logged minimally: SQL Server doesn't log complete before and after row states, only the incremental change that occurred to the row. For example, if the word "log" was updated to "blog", SQL Server will, in the general case, only log the addition of the letter "b" at index 0. This is enough for its purpose of ensuring ACID, but not enough to easily show the before and after states of the row. So, in order to understand what change really occurred, you have to reconstruct the context in which the change occurred from the rest of the transaction log and/or backup and online database data.