Specify max number of databases a user can own - tsql

Is this possible?
I would like to have a limit for a user, lets say 5 databases.
So when he tries to issue a CREATE query to create a 6th an exception is thrown.

No, you cannot do this - at least not in a declarative way (i.e. simply specifying a maximum number of databases owned for each user).
The closest you could get with standard SQL Server functionality would be to create a DDL trigger for CREATE DATABASE and in that trigger, check to see if the current user already owns five databases, and in that case, let the trigger fail the operation.
Something along the lines of this (adapted from a TechNet sample):
CREATE TRIGGER ddl_trig_database
ON ALL SERVER
FOR CREATE_DATABASE
AS
-- here, check to see if current user already owns five databases
-- and if so, fail the trigger by using RAISERROR
GO
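A fuller sketch of that check might look like the following (the limit of five and the error text are placeholders, and you should verify the ownership check against your own setup):
CREATE TRIGGER ddl_trig_database
ON ALL SERVER
FOR CREATE_DATABASE
AS
BEGIN
    DECLARE @owned int;

    -- count databases owned by the login that issued CREATE DATABASE
    SELECT @owned = COUNT(*)
    FROM sys.databases
    WHERE owner_sid = SUSER_SID();

    -- the trigger fires after the database is created, so the new database
    -- is already included in the count; more than 5 means this is the 6th
    IF @owned > 5
    BEGIN
        RAISERROR('You may not own more than five databases.', 16, 1);
        ROLLBACK;   -- rolling back here undoes the CREATE DATABASE
    END
END;
GO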

Look into DDL triggers; with a trigger like the one below you can trap the CREATE DATABASE statement:
CREATE TRIGGER ddl_trig_database
ON ALL SERVER
FOR CREATE_DATABASE
AS
--do something here
-- select count(*) from sys.sysdatabases where sid = ???
GO

Related

use a db2 trigger to identify the user initiating

When creating a DB2 trigger (DB2 version 10.1 LUW), I am looking to capture the userid that initiated the trigger.
For example, if a user inserts data, the after-insert trigger should write to a log who inserted the data. This is not meant for production purposes - it is just to identify who is updating / inserting test data.
You can obtain the value of the SESSION_USER special register. As an alternative, look at the SYSTEM_USER special register. There are differences if you use features like SET SESSION AUTHORIZATION or TRUSTED CONTEXTS.
Try this as a quick test:
select session_user from sysibm.sysdummy1;
select system_user from sysibm.sysdummy1;
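For the logging part, a minimal sketch (the table names mytable and audit_log are made up) could be:
-- log who inserted each row, using the SESSION_USER special register
CREATE TRIGGER trg_log_inserts
  AFTER INSERT ON mytable
  REFERENCING NEW AS n
  FOR EACH ROW MODE DB2SQL
  INSERT INTO audit_log (changed_by, changed_at)
  VALUES (SESSION_USER, CURRENT TIMESTAMP);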

Create Triggers on SYSUSER table in sql Anywhere 16

I want to call a certain procedure that logs every time a database user is created or deleted in the DB (SQL Anywhere 16).
For this I have written a function that should be called via a trigger when a row is inserted into or deleted from the table SYS.SYSUSER.
However, I am not able to create a trigger on this table.
Am I allowed to create a trigger on this, or is there some other way to get notified whenever a user is created or deleted in the DB?
I am new to Sybase, please help.
Here is my CREATE TRIGGER code:
CREATE TRIGGER myTrigger AFTER INSERT ON sys.sysuser
REFERENCING NEW AS newRecord
FOR EACH ROW
BEGIN
--
END;
You cannot create a trigger on a system table. You can create a handler for "system events", which are described in the online SQL Anywhere documentation. Unfortunately, creation or deletion of users is not a system event that can be handled. I don't believe there's a way short of polling that you can do what you want to do.
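If polling is acceptable, one rough sketch is a scheduled event that compares SYS.SYSUSER against a snapshot table (the names and schedule are made up, and the exact event syntax should be checked against the documentation):
CREATE EVENT ev_audit_users
SCHEDULE sched_audit_users
    START TIME '00:00' EVERY 10 MINUTES
HANDLER
BEGIN
    -- log users present in SYS.SYSUSER but missing from the last snapshot
    INSERT INTO user_audit_log (user_name, change_type, changed_at)
    SELECT u.user_name, 'created', CURRENT TIMESTAMP
      FROM SYS.SYSUSER u
     WHERE NOT EXISTS (SELECT 1 FROM user_snapshot s WHERE s.user_name = u.user_name);

    -- (a similar query with the tables swapped would catch deletions)

    -- refresh the snapshot
    DELETE FROM user_snapshot;
    INSERT INTO user_snapshot (user_name) SELECT user_name FROM SYS.SYSUSER;
END;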
Full disclosure: I work for SAP in SQL Anywhere engineering.

How to fire a DDL trigger while detaching a database in SQL Server 2008 R2?

I know simple DDL triggers like CREATE_TABLE, ALTER_TABLE, and DROP_TABLE; I have worked with these.
Now I want to know about something like this: when a user detaches the database, a trigger should be fired, whether the user is valid or not.
Create Trigger trgNoNewTables
ON Database
For Create_Table
AS
BEGIN
Print 'No Tables Please'
ROLLBACK
END
May I know whether, like the above trigger, there is any trigger for detach and attach?
You can't.
At best you can create trigger(s) for create/alter/drop database:
create trigger foo
on all server
for create_database, drop_database, alter_database
as
print 'triggered!'
go
but sp_detach_db will not fire it (sp_attach_db will).
The proper approach to restricting users from performing actions is security (GRANT/DENY/REVOKE); nothing else will work. I suggest you simply do not grant the permissions if users are not allowed to perform an action.
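For example (the database, user, and role names are placeholders), since sp_detach_db requires membership in the db_owner fixed database role, keeping users out of that role blocks detaches:
USE SomeDatabase;
GO
-- detaching requires db_owner membership, so remove the user from the role
-- (sp_droprolemember is the SQL Server 2008 R2 way to do this)
EXEC sp_droprolemember 'db_owner', 'SomeUser';
GO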

Creating Stored Procedures that can work with different tables

I need to use the same stored procedures against many tables, all with the same structure, in my DB. This is data loaded from customers, with one table per customer, and the data needs calculations/checks run before it's loaded into our Data Warehouse.
So far these are the options and issues I've found and I'm looking for a better pattern/approach.
1. Create a view that points to the table I want to process; the SPs then talk to that view. This works well (especially once I'd worked out how to create views 'automagically' based on their columns - see the sketch after this list), but the view can only be used with one table at a time, forcing the system to deal with one customer at a time.
2. Use dynamic SQL within each SP - this makes the SPs much harder to read/debug and for those reasons has been ruled out.
3. Create a partitioned view across all the tables and then use a parameterised table function to return just the data we're interested in - ah, but then I can't update the data, as the function returns a table that can only be used for SELECT.
4. Use dynamic SQL inside a function (can't be done) to create a view (which also can't be done)... give up.
5. Within the SP, create a temp table over the target table using dynamic SQL - but then the temp table only exists in the session that runs the dynamic SQL, not the 'parent' session that's running the SP... give up.
6. Create a global temp table using dynamic SQL to avoid the scope issue of 5, then run the SP against the global temp table. This still runs into the single-customer issue.
7. Create the view as in 1 within a transaction, then run all the SPs, then commit - works fine for one user, but any others are now blocked trying to create a new view of the same name.
8. Use a temporary view... can't in T-SQL.
9. Move all the code into .NET - but we have environment issues where T-SQL is much easier to host/run.
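For reference, a simplified sketch of the kind of 'automagic' view generation from option 1 (the table and view names are placeholders):
-- build a view over one customer table from its column list,
-- so the SPs always talk to dbo.CurrentCustomer
DECLARE @table sysname = N'Customer_0001';
DECLARE @cols nvarchar(max), @sql nvarchar(max);

SELECT @cols = STUFF((
    SELECT N', ' + QUOTENAME(c.name)
    FROM sys.columns AS c
    WHERE c.object_id = OBJECT_ID(@table)
    ORDER BY c.column_id
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 2, N'');

IF OBJECT_ID(N'dbo.CurrentCustomer', N'V') IS NOT NULL
    EXEC (N'DROP VIEW dbo.CurrentCustomer;');

SET @sql = N'CREATE VIEW dbo.CurrentCustomer AS SELECT ' + @cols
         + N' FROM ' + QUOTENAME(@table) + N';';
EXEC sp_executesql @sql;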
I know I'm not the only person who has this problem. Have any of you good people solved it? Please help.
Maybe your approach is wrong. I will go into details in a while, but it seems that your problem can be solved using SSIS.
-- Updated answer:
First, the big picture:
The most affordable way to process the tables dynamically is to use a script instead of a stored procedure. If the target table is chosen at runtime, you will not get any of the performance advantages of stored procedures (i.e. cached execution plans) anyway. A SQL script can easily be made to target one table at runtime using placeholders that are replaced before executing.
The script can be loaded from the filesystem, a variable, a text column in a table, etc. The loading process consists of reading the script content into a string variable. This step occurs once.
The next step is the preparation stage. This step will be executed for each table to be processed. The main business of this step is to replace the table placeholder with the name of the table currently being processed. It is also possible to set parameter values here - any parameter you need to pass into the SP you already wrote.
The last step is the execution of the script. As it is already loaded into a variable and the placeholders have been replaced with the current table name, you can safely call an Execute SQL Task with the SQL variable as the input. This process, of course, happens for each table you want to process.
Ok. Now let's see this in action.
This is a sample database model:
CREATE TABLE [dbo].[t_n](
[id] [int] IDENTITY(1,1) NOT NULL,
[name] [varchar](50) NOT NULL,
[start] [datetime] NULL,
CONSTRAINT [PK_t_n] PRIMARY KEY CLUSTERED ([id] ASC)
) ON [PRIMARY]
where t_n represents any table (t_1, t_2, t_3, etc).
This is your current stored procedure:
CREATE PROCEDURE SpProcessT_n
AS
BEGIN
SET NOCOUNT ON;
SELECT * FROM [t_1];
END
GO
Now, transform this stored procedure into a SQL script, placing a placeholder instead of the table name:
SET NOCOUNT ON;
SELECT * FROM [$table_name];
I choose to save this in a .sql file in the filesystem to keep the POC as simple as possible.
Next, create a SSIS Package like this:
These are the settings I choose to set up the loop:
And this is the way you can assign the table name to a variable appropriately called table_name.
This is the setup of the script task; here you can see that the variable table_name has read-only access, while a new variable called SqlExec has read/write access:
And this is its Main function:
// Requires "using System.IO;" and "using System.Text.RegularExpressions;" at the top of the script.
public void Main()
{
    // read the table name supplied by the ForEach loop
    String Table_Name = Dts.Variables["table_name"].Value.ToString();
    String SqlScript;

    // the placeholder in the .sql file is "$table_name"
    Regex reg = new Regex(@"\$table_name", RegexOptions.Compiled);

    using (var f = File.OpenText(@"c:\sqlscript.sql"))
    {
        SqlScript = f.ReadToEnd();
        f.Close();
    }

    // substitute the placeholder with the current table name and hand the
    // finished script to the Execute SQL Task via the SqlExec variable
    SqlScript = reg.Replace(SqlScript, Table_Name);
    Dts.Variables["SqlExec"].Value = SqlScript;

    Dts.TaskResult = (int)ScriptResults.Success;
}
You can notice that the DTS variable SqlExec contains the SQL script that will be executed. Now you can set the following options in your Execute SQL Task:
Successfully tested on MSSQL 2008; if you put an INSERT inside the script file you will notice new rows in each table.
Hope this helps!
If your application can afford to be one cut-off day late, then you can have a nightly scheduled job run an SSIS package that consolidates all 150+ tables into one single huge table. Since the results of queries against that huge table will then be one day behind, this solution will not include any rows that have recently been loaded.
You can actually time the running of this package. If it is still amazingly fast, say within 30 minutes, then you can opt to run it every few hours, for example at the start of the work day, during the lunch break, and at the end of the day. This way you have nearly fresh data to query.
Write a partitioned view including table names?
SELECT 'TableName', t.* FROM TableName t
UNION ALL
SELECT 'TableName2', t.* FROM TableName2 t
Then write a single INSTEAD OF trigger which uses dynamic SQL for writing (less testing is involved with that use of dynamic SQL, because you'd just write the simple CRUD operations once for all tables, I'd think).
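A rough sketch of such a trigger (the view, column, and table names are all assumed; only the insert path is shown):
CREATE TRIGGER trgPartitionedView_Insert ON dbo.PartitionedView
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- "inserted" isn't visible inside dynamic SQL, so copy it to a temp table first
    SELECT * INTO #new_rows FROM inserted;

    DECLARE @table sysname, @sql nvarchar(max);

    DECLARE table_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT DISTINCT TableName FROM #new_rows;

    OPEN table_cursor;
    FETCH NEXT FROM table_cursor INTO @table;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- the column list is illustrative; it should match the real table structure
        SET @sql = N'INSERT INTO ' + QUOTENAME(@table) + N' (col1, col2)
                     SELECT col1, col2 FROM #new_rows WHERE TableName = @t;';
        EXEC sp_executesql @sql, N'@t sysname', @t = @table;

        FETCH NEXT FROM table_cursor INTO @table;
    END

    CLOSE table_cursor;
    DEALLOCATE table_cursor;
END;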
I would not do this with SQL. What you are describing sounds like a traditional ETL situation.
Since all of the customer tables are the same, I would create a table in the data warehouse with all the columns from the client table, a surrogate key column, and a type identifier. You have the option to create a "staging" table here that will only have data in it during the ETL process, or to just work on a single "live" table. I would create the staging table.
Then, within an SSIS package (don't worry, you can still schedule it from SQL Server Agent; it hasn't totally left the DB server), start the ETL process...
E(xtract): copy the data from your source into the staging table in the data warehouse. You most likely want to use a sub-package within a foreach loop, changing the name of the table you want to process from an external store (most people would say put this list in the warehouse, but it's up to you).
T(ransform): run the calculations/checks you were talking about, but do it on the whole set...
L(oad): copy it to your real table within the data warehouse.
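As a rough illustration of the load step (all object names here are made up):
-- move rows that passed the checks from staging into the warehouse table,
-- carrying the customer/type identifier assigned during the extract
INSERT INTO dw.CustomerData (CustomerId, Col1, Col2, LoadedAt)
SELECT s.CustomerId, s.Col1, s.Col2, GETDATE()
FROM dw.CustomerData_Staging AS s
WHERE s.PassedChecks = 1;

-- empty the staging table once the load has committed
TRUNCATE TABLE dw.CustomerData_Staging;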
There are a couple of things I would NOT do:
1. Modify the data in the source table.
2. Try to do this in T-SQL. It's just not what T-SQL is good at.
If you need more detail on this approach, I would probably ask the question with some Business Intelligence tags. I'll be traveling for the next week or so, but I will try to look at the comments to clear anything up if you need me to.
I am fairly certain that the standard way to solve this is using dynamic SQL in each sp (your option 2), which has already been ruled out.
Your goal is to make generic, multi-table SQL. I don't see how you intend to accomplish that without sacrificing some efficiency and readability.
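For what it's worth, a minimal sketch of what option 2 typically looks like (the procedure, column, and table names are placeholders):
-- one SP, table name passed in, body built and run as dynamic SQL
CREATE PROCEDURE dbo.SpProcessCustomer
    @table sysname
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @sql nvarchar(max) =
        N'SELECT col1, col2 FROM ' + QUOTENAME(@table) + N';';

    EXEC sp_executesql @sql;
END
GO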

MS-SQL 2000: Turn off logging during stored procedure

Here's my scenario:
I have a simple stored procedure that removes a specific set of rows from a table (we'll say about 30k rows), and then inserts about the same amount of rows. This generally should only take a few seconds; however, the table has a trigger on it that watches for inserts/deletes, and tries to mimic what happened to a linked table on another server.
This process in turn is unbearably slow due to the trigger, and the table is also locked during the process. So here are my two questions:
I'm guessing a decent part of the slowdown is due to the transaction log. Is there a way for me to specify in my stored procedure that I do not want what's in the procedure to be logged?
Is there a way for me to do my 'DELETE FROM' and 'INSERT INTO' commands without me locking the table during the entire process?
Thanks!
edit - Thanks for the answers; I figured that was the case (not being able to do either of the above), but wanted to make sure. The trigger was created a long time ago and doesn't look very efficient, so it looks like my next step will be to go into it and find out what's needed and how it can be improved. Thanks!
1) No; also, you are not doing a minimally logged operation like TRUNCATE or BULK INSERT anyway.
2) No, how would you prevent corruption otherwise?
I wouldn't automatically assume that the performance problem is due to logging. In fact, it's likely that the trigger is written in such a way that is causing your performance problems. I encourage you to modify your original question and show the code for the trigger.
You can't turn off transactional integrity when modifying data. You could ignore locks when you select data by using SELECT * FROM table WITH (NOLOCK); however, you need to be very careful and ensure your application can handle dirty reads.
It doesn't help with your trigger, but the solution to the locking issue is to perform the transactions in smaller batches.
Instead of
DELETE FROM Table WHERE <Condition>
Do something like
WHILE EXISTS (SELECT * FROM Table WHERE <Condition>)
BEGIN
    SET ROWCOUNT 1000
    DELETE FROM Table WHERE <Condition>
    SET ROWCOUNT 0
END
You can temporarily disable the trigger, run your proc, then do whatever the trigger was doing in a more efficient manner.
-- disable trigger
ALTER TABLE [Table] DISABLE TRIGGER [Trigger]
GO
-- execute your proc
EXEC spProc
GO
-- do more stuff to clean up / sync with other server
GO
-- enable trigger
ALTER TABLE [Table] ENABLE TRIGGER [Trigger]
GO