I'm using .NET's SqlBulkCopy to batch-insert data into a database. What I'm trying to do is convert all of the specified string columns to uppercase when the data is inserted into the table. Here is my trigger script:
ALTER TRIGGER [dbo].[EmployeeTable_UpperCase]
ON [dbo].[EmployeeTable]
AFTER INSERT
AS
BEGIN
    UPDATE EmployeeTable SET
        [CompanyCode] = UPPER([CompanyCode])
        ,[EmpId] = UPPER([EmpId])
        ,[EmpCode] = UPPER([EmpCode])
    WHERE Id IN (SELECT Id FROM inserted)
END
The script above doesn't work; none of my data is converted to uppercase at all. Does it have anything to do with using SqlBulkCopy in .NET to do the insertion? How should I make things work in my case?
When you create your SqlBulkCopy object, add SqlBulkCopyOptions.FireTriggers like so:
SqlBulkCopy bulkCopy = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.FireTriggers);
FireTriggers
When specified, cause the server to fire the insert triggers for the rows being inserted into the database.
- SqlBulkCopyOptions Enumeration - MSDN
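By default, a bulk copy operation does not fire insert triggers on the destination table, which is why your trigger never ran. A minimal usage sketch (connectionString and dataTable here stand in for whatever your own code uses):

using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.FireTriggers))
{
    // With FireTriggers set, the uppercase trigger on dbo.EmployeeTable
    // now fires for the inserted rows.
    bulkCopy.DestinationTableName = "dbo.EmployeeTable";
    bulkCopy.WriteToServer(dataTable);
}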
I know that in PL/pgSQL, if you want to refer to the newly inserted row, you can use NEW.
How can I do this in T-SQL (Transact-SQL)?
The following is the trigger I am trying to create:
CREATE TRIGGER setAlertId ON rules_table
FOR INSERT AS
DECLARE @max_id integer
SELECT @max_id = (SELECT MAX(AlertId) FROM rules_table)
NEW.AlertId = @max_id + 1
END
GO
I get the error message:
Incorrect syntax near 'NEW'
Thanks.
inserted and deleted pseudo-tables:
DML trigger statements use two special tables: the deleted table and the inserted table. SQL Server automatically creates and manages these tables. You can use these temporary, memory-resident tables to test the effects of certain data modifications and to set conditions for DML trigger actions. You cannot directly modify the data in the tables.
In your case, why don't you use an IDENTITY on the AlertId field, so it increments itself?
If you want to do it in your trigger, you will need to select your primary key from inserted and then do an UPDATE on rules_table, as in the sketch below.
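A hedged sketch of that trigger-based approach, assuming rules_table has a primary key column named Id (adjust to your actual schema). Note that for a multi-row insert every new row would receive the same value, which is one more reason to prefer an IDENTITY column:

CREATE TRIGGER setAlertId ON rules_table
FOR INSERT AS
BEGIN
    -- Join the inserted pseudo-table back to the real table on the primary key.
    -- Id is an assumed primary key name, not taken from your question.
    UPDATE r
    SET r.AlertId = (SELECT MAX(t.AlertId) FROM rules_table t) + 1
    FROM rules_table r
    JOIN inserted i ON r.Id = i.Id;
END
GO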
I need to modify the SQL generated for an insert/update operation before it is sent to the database. The required modification is very specific, so I was hoping there is a way to simply append a string to the statement.
For example, SQL looks like this (Oracle BTW):
UPDATE TABLE_A
SET DESCRIPTION = 'ABC'
WHERE OBJECTID = 1
But I want to append this line (in the SET part) to update one more field:
SHAPE = sde.st_geometry('point (18 57)', 4326)
I can't add SHAPE column to EF model, because that is unsupported data type.
Now, is there a way I can modify the EF-generated SQL statement?
You could move this update to a simple stored procedure that is mapped into your entity data model, as sketched below.
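The Oracle side might look roughly like this (the procedure and parameter names are invented for illustration, not part of your model):

-- Hypothetical procedure wrapping the update, including the unsupported SHAPE column.
CREATE OR REPLACE PROCEDURE update_table_a (
    p_objectid    IN NUMBER,
    p_description IN VARCHAR2,
    p_point_wkt   IN VARCHAR2,  -- e.g. 'point (18 57)'
    p_srid        IN NUMBER     -- e.g. 4326
) AS
BEGIN
    UPDATE TABLE_A
       SET DESCRIPTION = p_description,
           SHAPE = sde.st_geometry(p_point_wkt, p_srid)
     WHERE OBJECTID = p_objectid;
END;

You could then map it as a function import in the model, or invoke it directly with ObjectContext.ExecuteStoreCommand and the appropriate parameters.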
I need to check, from a function (stored procedure), for a column that is present in a table in another database, using PostgreSQL.
I have a SQL Server query as shown below.
Example:
create procedure sp_testing
as
    if not exists (select ssn from testdb..testtable) /* ssn is a column of testtable, which exists in the testdb database */
    ...
Q: Can i do the same in PostgreSQL?
Your question is not very clear, but if you want to know whether a column by a certain name exists in a table by a certain name in a remote PostgreSQL database, you should first set up a foreign data wrapper, which is a multi-stage process. Then, to test the existence of a certain column in a table, you need to formulate a query that conforms to the standards of the particular DBMS you are connecting to. Use the remote information_schema.columns table for optimal compatibility (it is specified below as remote_tables, which you must have defined with a prior CREATE FOREIGN TABLE command, sketched after the function):
CREATE FUNCTION sp_testing() RETURNS void AS $$
BEGIN
    PERFORM *
    FROM remote_tables
    WHERE table_name = 'testtable'
      AND column_name = 'ssn';
    IF NOT FOUND THEN
        ...
    END IF;
END;
$$ LANGUAGE plpgsql;
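For completeness, the foreign-table setup referenced above might look roughly like this, assuming PostgreSQL 9.3+ with postgres_fdw (the server name, host, and credentials are placeholders):

-- Sketch only: adjust names and options to your environment.
CREATE EXTENSION postgres_fdw;

CREATE SERVER testdb_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remotehost', dbname 'testdb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER testdb_server
    OPTIONS (user 'remote_user', password 'secret');

-- Expose the remote information_schema.columns view as a local foreign table.
CREATE FOREIGN TABLE remote_tables (
    table_name  text,
    column_name text
) SERVER testdb_server
  OPTIONS (schema_name 'information_schema', table_name 'columns');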
If you want to connect to another type of DBMS, you need to write a custom function in, for instance, C or Perl, and then call that from within a PostgreSQL function on your local machine. The test on the column is then best done inside that function, which should therefore take connection parameters, a table name, and a column name as parameters, and return a boolean with the result.
Before you start testing this, make sure you read all the documentation on connecting to remote servers; learning PL/pgSQL first would also be a nice gesture to demonstrate your own efforts before you ask for help.
Is there a way in the SQLAlchemy class of a table to define/create triggers and indexes for that table?
For instance, if I had a basic table like ...
class Customer(DeclarativeBase):
    __tablename__ = 'customers'
    customer_id = Column(Integer, primary_key=True, autoincrement=True)
    customer_code = Column(Unicode(15), unique=True)
    customer_name = Column(Unicode(100))
    search_vector = Column(tsvector)  # not sure how to do this yet in SQLAlchemy either
I now want to create a trigger to update "search_vector":
CREATE TRIGGER customers_search_vector_update BEFORE INSERT OR UPDATE
ON customers
FOR EACH ROW EXECUTE PROCEDURE
tsvector_update_trigger(search_vector,'pg_catalog.english',customer_code,customer_name);
Then I want to add an index on that field as well:
create index customers_search_vector_indx ON customers USING gin(search_vector);
Right now, after any kind of database regeneration from my app, I have to run the ADD COLUMN for the tsvector column, the trigger definition, and then the index statement from psql. Not the end of the world, but it's easy to forget a step. I am all about automation, so if I can get this all to happen during the app's setup, then bonus!
Indexes are straightforward to create. For a single column, pass the index=True parameter like below:
customer_code = Column(Unicode(15),unique=True,index=True)
But if you want more control over the name and options, use the explicit Index() construct:
Index('customers_search_vector_indx', Customer.__table__.c.search_vector, postgresql_using='gin')
Triggers can be created as well, but they still need to be SQL-based and hooked to the DDL events. See Customizing DDL for more info, but the code might look similar to this:
from sqlalchemy import event, DDL
trig_ddl = DDL("""
CREATE TRIGGER customers_search_vector_update BEFORE INSERT OR UPDATE
ON customers
FOR EACH ROW EXECUTE PROCEDURE
tsvector_update_trigger(search_vector,'pg_catalog.english',customer_code,customer_name);
""")
tbl = Customer.__table__
event.listen(tbl, 'after_create', trig_ddl.execute_if(dialect='postgresql'))
Sidenote: I do not know how to configure tsvector datatype: deserves a separate question.
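For what it's worth, a possible way to declare that column, assuming a SQLAlchemy version recent enough (0.9+) to ship the PostgreSQL dialect's TSVECTOR type; verify availability against your installation:

# Sketch: use the PostgreSQL dialect's TSVECTOR type for the column.
from sqlalchemy import Column
from sqlalchemy.dialects.postgresql import TSVECTOR

class Customer(DeclarativeBase):
    __tablename__ = 'customers'
    # ... other columns as above ...
    search_vector = Column(TSVECTOR)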
I am wondering what the best / most efficient / most common way is to add a row to a SQL Server table using C# and ADO.NET. I know, of course, that I could just create an SQL statement for that, but first, the destination table schema might vary, so I want to keep this flexible, and second, there are so many columns that I do not want to code and maintain this manually. So I currently use a SqlCommandBuilder that automatically creates the proper insert statement for me, together with a SqlDataAdapter, like this:
var dataAdapter = new SqlDataAdapter("select * from sometable", _databaseConnection);
new SqlCommandBuilder(dataAdapter);
dataAdapter.Fill(dataTable);
// ... add row to dataTable, fill fields from some external file that
// ... includes column names as well,
//.... add some more field values not from the file, etc. ...
dataAdapter.Update(dataTable);
This seems pretty inefficient, though: it first grabs all the records from the table even though I do not need them for anything (especially considering that there might already be a million records in there). Using a select statement like select * from sometable where 1=2 would work, but it does not seem like a very clean approach. I imagine there is some different solution for this that I am just not aware of.
Thanks,
Timo
I think the best way to insert rows is by using stored procedures through the ADO.NET command object.
If you are inserting massive amounts of data and are using SQL Server 2008, you can pass DataTable objects to a stored procedure by using a user-defined table type.
In SQL:
CREATE TYPE SAMPLE_TABLE_TYPE AS TABLE
(
    field1 VARCHAR(255),
    field2 VARCHAR(255)
);
GO

CREATE PROCEDURE insert_data
    @data SAMPLE_TABLE_TYPE READONLY
AS
BEGIN
    INSERT INTO table1 (field1, field2)
    SELECT field1, field2 FROM @data;
END
In .NET:
DataTable myTable = new DataTable();
myTable.Columns.Add(new DataColumn("field1", typeof(string)));
myTable.Columns.Add(new DataColumn("field2", typeof(string)));
// ... add rows to myTable ...

SqlCommand command = new SqlCommand("insert_data", conn);
command.CommandType = CommandType.StoredProcedure;
SqlParameter parameter = command.Parameters.AddWithValue("@data", myTable);
parameter.SqlDbType = SqlDbType.Structured;
command.ExecuteNonQuery();
If your data also contains updates, you can use the MERGE statement introduced in SQL Server 2008 to efficiently perform both inserts and updates in the same procedure, as sketched below.
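A rough sketch of such a MERGE against the same table type, inside a procedure that receives @data; it assumes field1 acts as the matching key, which is an assumption, not something from your schema:

-- Sketch: upsert the rows carried by the table-valued parameter.
-- Assumes field1 uniquely identifies a row in table1.
MERGE INTO table1 AS target
USING @data AS source
    ON target.field1 = source.field1
WHEN MATCHED THEN
    UPDATE SET target.field2 = source.field2
WHEN NOT MATCHED THEN
    INSERT (field1, field2) VALUES (source.field1, source.field2);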
However, if creating user-defined table types and stored procedures is too much work and you need a completely dynamic solution, I would stick with what you have, with the recommendation of appending
WHERE 1 = 0
to your SQL text.
You can also use a "SELECT TOP(0) * FROM SOMETABLE;" query.
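Along the same lines, a sketch that avoids transferring any rows while still letting the command builder generate the insert: SqlDataAdapter.FillSchema loads only the column definitions (the names here mirror the question's code):

// Sketch: load schema only, so no existing rows come across the wire.
var dataAdapter = new SqlDataAdapter("SELECT * FROM sometable", _databaseConnection);
new SqlCommandBuilder(dataAdapter);
var dataTable = new DataTable();
dataAdapter.FillSchema(dataTable, SchemaType.Source);
// ... add rows to dataTable as before ...
dataAdapter.Update(dataTable);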