We are using SQL Server CE for our integration tests. At the moment, before every test we delete all data from all tables and then re-seed the test data, and we drop the database file when the structure changes.
For data deletion we need to go through every table in the correct order and issue a DELETE FROM statement for each, and that is error-prone: many times I simply forget to add the delete statement when I add new entities. So it would be good if we could automate the data deletion.
I have seen Jimmy Bogard's approach for deleting data in the correct order. I have implemented that for Entity Framework and it works against full-blown SQL Server. But when I try to use it against SQL CE for testing, I get an exception saying:
System.Data.SqlServerCe.SqlCeException : The specified table does not exist. [ ##sys.tables ]
SQL CE does not have the supporting system tables that hold the required information.
Is there a script that works with SQL CE that can delete all data from all tables?
SQL Server Compact does in fact have system tables listing all tables. In my SQL Server Compact scripting API, I have code to list the tables in the "correct" order, which is not a trivial task. I use QuickGraph; it has an extension method for sorting a DataSet. You should be able to reuse some of that in your test code:
public void SortTables()
{
    var _tableNames = _repository.GetAllTableNames();
    try
    {
        var sortedTables = new List<string>();
        var g = FillSchemaDataSet(_tableNames).ToGraph();
        foreach (var table in g.TopologicalSort())
        {
            sortedTables.Add(table.TableName);
        }
        _tableNames = sortedTables;
        //Now iterate _tableNames and issue DELETE statement for each
    }
    catch (QuickGraph.NonAcyclicGraphException)
    {
        _sbScript.AppendLine("-- Warning - circular reference preventing proper sorting of tables");
    }
}
You must add the QuickGraph DLL files (from CodePlex or NuGet), and you can find the implementation of GetAllTableNames and FillSchemaDataSet here: http://exportsqlce.codeplex.com/SourceControl/list/changesets (in Generator.cs and DbRepository.cs)
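Once the tables are sorted, the delete loop itself is short. A minimal sketch of what the test-setup side could look like (the connection string and the SqlCeCommand usage here are mine, not part of the scripting API, and _tableNames is assumed to already be in a delete-safe order):

// requires a reference to System.Data.SqlServerCe
using (var conn = new SqlCeConnection(@"Data Source=|DataDirectory|\Tests.sdf"))   // placeholder path
{
    conn.Open();
    foreach (var tableName in _tableNames)   // sorted so that referencing tables are cleared first
    {
        using (var cmd = new SqlCeCommand("DELETE FROM [" + tableName + "]", conn))
        {
            cmd.ExecuteNonQuery();
        }
    }
}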
MigrateDatabaseToLatestVersion is used, and the database stored in SQL Server Express is updated.
However, when opening a locally stored .sdf file (SQL Server CE database) with a valid path and file name, that file is not updated.
Database.SetInitializer(new MigrateDatabaseToLatestVersion<DTDataContext, Configuration>());
var connection = DTDataContext.GetConnectionSqlServerCE40(fullPathName);
dataBaseContext = new DTDataContext(connection, true);
dataBaseContext.Database.Initialize(true);
The MigrationHistory entries will be made in SQL Server Express and not in the local SQL Server CE database file.
What would be the easiest way to update a local SQL Server CE database file?
After a few experiments, an adequate solution was found (one that fits my purpose).
The question was focused on the old sdf files that were written earlier, against an older model than the current code.
I decided not to migrate the old files (which serve as a kind of backup).
Those files will only be read. Obviously, it is possible that newer sdf files will be read once in the future, but that's not a big deal.
Before reading data for an entity that might not exist (in an sdf), the table is checked via SqlQuery and count(*):
[System.Diagnostics.CodeAnalysis.SuppressMessage( "Microsoft.Design", "CA1031:DoNotCatchGeneralExceptionTypes" )]
private bool TestIfTableExists( string tableName, DTDataContext dataContext )
{
    try
    {
        int cnt = dataContext.Database.SqlQuery<int>( "select count(*) from " + tableName ).First();
        return cnt > 0;
    }
    catch( Exception )
    {
        // available SqlCeException assembly does not fit --- table does not exist
        return false;
    }
}
By the way: when using SqlCeException (v3.5), which could be pulled in as a reference via the assembly search, the situation above would fail with an unhandled exception. I have not tested it with v4 because I want to avoid a 'manual' reference, since everything must be checked in (no need for any path problems on other workstations).
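An alternative sketch that avoids relying on exceptions altogether: SQL Server Compact (like full SQL Server) exposes INFORMATION_SCHEMA.TABLES, so the check can be a metadata query instead. This is my suggestion, not part of the original solution:

private bool TableExists( string tableName, DTDataContext dataContext )
{
    // No try/catch needed: ask the metadata whether the table is there at all.
    return dataContext.Database.SqlQuery<int>(
        "SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = {0}",
        tableName ).First() > 0;
}

Note that, unlike TestIfTableExists above, this returns true for a table that exists but is empty.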
Concerning writing an sdf:
Writing a new sdf with the current model is not a problem at all.
Database.CreateIfNotExists() was applied.
In my case, updating an sdf was not necessary --- and a quick solution for that was not found.
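For reference, a rough sketch of that write path, reusing the DTDataContext and the GetConnectionSqlServerCE40 helper from the question (the file path is a placeholder):

// Create a brand-new .sdf from the current model; no migration is attempted here.
var connection = DTDataContext.GetConnectionSqlServerCE40( @"C:\Data\New.sdf" );   // placeholder path
using ( var context = new DTDataContext( connection, true ) )
{
    context.Database.CreateIfNotExists();   // builds the schema from the current model
    // ... add entities and SaveChanges() as usual ...
}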
I am starting to learn Spring and have run into some issues with spring-jdbc.
First, I tried running the example from https://spring.io/guides/gs/relational-data-access/ and it worked. Then I commented out the lines that drop and re-create the tables (http://pastebin.com/zcJHsL1P), so as not to overwrite the data but just read it from the DB and show it. However, Spring showed me this error:
Table "CUSTOMERS" not found; SQL statement: ...
So, my question is: what should I do to store my database permanently? I don't want to recreate a new database every time; I want to create it once and then update it.
P.S. I used the H2 database. Maybe the problem lies in this DB?
That piece of code looks like you are "prototyping" something, so it's easier to automatically create a new database (schema, tables, data) on the fly, execute and/or test whatever you want to, and finish the execution.
If you want to persist your data and only modify/update it, either use H2 with the file-based layout (a jdbc:h2:file:... URL) or use MySQL, PostgreSQL, etcetera.
By the way, the reason you are getting Table "CUSTOMERS" not found; SQL statement: ... is that you are using H2 as an in-memory database (a jdbc:h2:mem:... URL), so every time you start your application you need to re-create the tables and populate them with data.
I need some help with an SSIS Script Task (SQL 2008 R2) that dynamically creates a package. I am refining a package that copies data from a Sage Timberline (now rebranded to Sage 300) Pervasive SQL environment to a SQL Server data warehouse. I can create a package that opens the connection to Timberline and copies the data to a table in SQL Server. The problem is that for each company in Timberline and each table in SQL, I need to create a separate data flow task. Given the three Timberline company folders and the number of tables in each folder, this would take a lot of time to create and be cumbersome to maintain and troubleshoot.
I am trying to create a package that uses a Foreach Loop to create a package that creates an ADO.NET/ODBC source (Timberline) and an OLE DB destination (SQL), and dynamically handles the column mapping. I found code here that almost does what I need.
I tested this code and it works great using OLE DB SQL source and destinations. What makes this script work is that it dynamically handles the column mapping. So, if you placed it into a Foreach Loop over the 100 or so tables, with each loop it could dynamically create the data flow, map the columns, and then execute the new package.
My problem is that I can only connect to Timberline using ODBC. So, I need to modify the script to create the source connection with ADO.NET (ODBC) instead of OLE DB. I'm having a lot of trouble trying to figure this out. Could someone please help me out with this?
Here are the other things I tried first, other than this approach:
Solution: Set up a linked server to Timberline Pervasive SQL.
Problem: SQL Server is 64-bit and the Timberline driver is 32-bit. Using a linked server returns an architecture mismatch error. I called Sage and they said they have no plans to release a 64-bit driver.
Solution: Use one of the SQL transfer tasks.
Problem: Those only work with SQL Server databases, and this source is a Pervasive SQL database.
Solution: Use an "INSERT ... INTO ..." type script.
Problem: This requires a linked server. See the problem above.
Here’s the section of the original VB .NET code I need help with:
'To Create a package named [Sample Package]
Dim package As New Package()
package.Name = "Sample Package"
package.PackageType = DTSPackageType.DTSDesigner100
package.VersionBuild = 1
'To add Connection Manager to the package
'For source database (OLTP)
Dim OLTP As ConnectionManager = package.Connections.Add("OLEDB")
OLTP.ConnectionString = "Data Source=.;Initial Catalog=OLTP;Provider=SQLNCLI10;Integrated Security=SSPI;Auto Translate=False;"
OLTP.Name = "LocalHost.OLTP"
'To add Load Employee Dim to the package [Data Flow Task]
Dim dataFlowTaskHost As TaskHost = DirectCast(package.Executables.Add("SSIS.Pipeline.2"), TaskHost)
dataFlowTaskHost.Name = "Load Employee Dim"
dataFlowTaskHost.FailPackageOnFailure = True
dataFlowTaskHost.FailParentOnFailure = True
dataFlowTaskHost.DelayValidation = False
dataFlowTaskHost.Description = "Data Flow Task"
'-----------Data Flow Inner component starts----------------
Dim dataFlowTask As MainPipe = TryCast(dataFlowTaskHost.InnerObject, MainPipe)
' Source OLE DB connection manager to the package.
Dim SconMgr As ConnectionManager = package.Connections("LocalHost.OLTP")
' Create and configure an OLE DB source component.
Dim source As IDTSComponentMetaData100 = dataFlowTask.ComponentMetaDataCollection.[New]()
source.ComponentClassID = "DTSAdapter.OLEDBSource.2"
' Create the design-time instance of the source.
Dim srcDesignTime As CManagedComponentWrapper = source.Instantiate()
' The ProvideComponentProperties method creates a default output.
srcDesignTime.ProvideComponentProperties()
source.Name = "Employee Dim from OLTP"
' Assign the connection manager.
source.RuntimeConnectionCollection(0).ConnectionManagerID = SconMgr.ID
source.RuntimeConnectionCollection(0).ConnectionManager = DtsConvert.GetExtendedInterface(SconMgr)
' Set the custom properties of the source.
srcDesignTime.SetComponentProperty("AccessMode", 0)
' Mode 0 : OpenRowset / Table - View
srcDesignTime.SetComponentProperty("OpenRowset", "[dbo].[Employee_Dim]")
' Connect to the data source, and then update the metadata for the source.
srcDesignTime.AcquireConnections(Nothing)
srcDesignTime.ReinitializeMetaData()
srcDesignTime.ReleaseConnections()
Thanks in advance!
The C# code here is what you want if you need a Derived Column transform between the source and destination:
http://bifuture.blogspot.com/2011/01/ssis-adding-derived-column-to-ssis.html
To get the source and destination connections working, there is some secret sauce here for getting things working between COM and .NET:
http://blogs.msdn.com/b/mattm/archive/2008/12/30/api-sample-ado-net-source.aspx
There is a similar page showing what to do for OLE DB connections too.
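Pulling those two links together, here is a hedged C# sketch of the ODBC-specific pieces, reusing the package and dataFlowTask objects from the question's VB code (translated to C#). The "ADO.NET:<assembly-qualified connection type>" creation string follows Matt Masson's post above; the ADO NET Source's custom property names differ from the OLE DB source, so treat those as something to discover from the component itself rather than from my comments:

// ADO.NET connection manager backed by System.Data.Odbc
ConnectionManager odbcCm = package.Connections.Add(
    "ADO.NET:System.Data.Odbc.OdbcConnection, System.Data, " +
    "Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089");
odbcCm.Name = "Timberline ODBC";
odbcCm.ConnectionString = "Dsn=dsnname;Uid=user;Pwd=pwd;";   // DSN-based, as noted further down

// Look up the managed "ADO NET Source" creation name instead of hard-coding a class id.
// (If the name lookup fails, iterate app.PipelineComponentInfos and match on .Name.)
var app = new Microsoft.SqlServer.Dts.Runtime.Application();
string adoNetSourceCreationName = app.PipelineComponentInfos["ADO NET Source"].CreationName;

IDTSComponentMetaData100 source = dataFlowTask.ComponentMetaDataCollection.New();
source.ComponentClassID = adoNetSourceCreationName;
CManagedComponentWrapper srcDesign = source.Instantiate();
srcDesign.ProvideComponentProperties();
source.Name = "Timberline source";

source.RuntimeConnectionCollection[0].ConnectionManagerID = odbcCm.ID;
source.RuntimeConnectionCollection[0].ConnectionManager = DtsConvert.GetExtendedInterface(odbcCm);

// Set the component's table or query property here. The names are NOT the OLE DB source's
// "AccessMode"/"OpenRowset"; dump source.CustomPropertyCollection after
// ProvideComponentProperties to see what this component exposes, then use
// srcDesign.SetComponentProperty(...) before refreshing the metadata below.
srcDesign.AcquireConnections(null);
srcDesign.ReinitializeMetaData();
srcDesign.ReleaseConnections();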
Creating the source tables is easy. The ODBC metadata collections available from the driver can be retrieved with GetSchema("MetaDataCollections"); this returns the list of schema collections that particular ODBC driver supports.
Next, you'll want to look at the data types returned from GetSchema("DataTypes"), so you can correctly interpret the data types of each column retrieved from GetSchema("Columns") when building your SQL Server create table script (which I'm assuming you've done).
To figure out at least which tables have primary keys, you'll need to loop over each table returned from GetSchema("Tables") and query GetSchema("Indexes") for it. There's a quirk that forces you to query the indexes one table at a time; it is easy to Google: pass a string array of restrictions containing the table name, as in GetSchema("Indexes", restrictions).
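A sketch of those metadata calls with System.Data.Odbc (the DSN and the restriction-array layout are assumptions; exactly which slot the table name goes in varies by driver, and GetSchema("Restrictions") will tell you the layout for yours):

// using System.Data; using System.Data.Odbc;
using (var conn = new OdbcConnection("Dsn=dsnname;Uid=user;Pwd=pwd;"))   // placeholder DSN
{
    conn.Open();

    DataTable collections = conn.GetSchema("MetaDataCollections"); // what this driver can describe
    DataTable dataTypes   = conn.GetSchema("DataTypes");           // for mapping to SQL Server types
    DataTable tables      = conn.GetSchema("Tables");
    DataTable columns     = conn.GetSchema("Columns");

    foreach (DataRow table in tables.Rows)
    {
        string tblName = table["TABLE_NAME"].ToString();

        // The indexes have to be fetched one table at a time: pass a restrictions
        // array with the table name in it (commonly catalog, schema, table, ...).
        DataTable indexes = conn.GetSchema("Indexes", new string[] { null, null, tblName });
        // ... look through indexes for the primary key of tblName ...
    }
}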
What I did was get the Tables and Columns collections into object variables in my parent SSIS package. Because Timberline is so fast (not), it seemed more efficient to pull all the columns down and filter them locally, which I do to create the tables in SQL Server, if necessary.
Once that is done, use the local copy of Tables again to manipulate an SSIS package in a Script task in "design mode" (change the source and destination target tables, and redo the column mappings), and execute the now-in-memory SSIS package.
For me it took a while to figure out. Both of the above URLs were required. I found and copied the .NET 2.0 Dts.PipelineWrap and Dts.RuntimeWrap DLLs to the Microsoft.Net\FrameworkV2.0xxxxx folder, then referenced these in each script task wanting to use them, before setting up my "using DtsPW = Microsoft.SqlServer.Dts.Pipeline.Wrapper", etc.
Of note, because Timberline is 32-bit ODBC, I think it's necessary to build the SSIS package to run as x86 and to target the script tasks at the .NET 2.0 framework.
I used the Derived Column code because I needed to copy multiple Timberline DBs into one SQL Server DB; the Derived Column adds a "CompanyID" value to the output pipeline to SQL Server.
In the end, map the destination's virtual input columns to its external metadata columns, based on the pipeline the destination is attached to:
foreach (DtsPW.IDTSVirtualInputColumn100 vColumn in destVirtInput.VirtualInputColumnCollection)
{
    var vCol = destInst.SetUsageType(destInput.ID, destVirtInput, vColumn.LineageID, DtsPW.DTSUsageType.UT_READWRITE);
    destInst.MapInputColumn(destInput.ID, vCol.ID, destInput.ExternalMetadataColumnCollection[vColumn.Name].ID);
}
Anyway, that code will make more sense in the context of the bifuture.blogspot.com page.
The EzApi library could help with this too, but its AdoNet connection source is coded as a virtual class, so you'd need to implement specific classes to use it. My C# kung fu is not strong enough for that in the time I have...
Also, CozyRoc sells a toolset with custom SSIS controls (data flow source and destination controls...) that looks like it does this on-the-fly input-to-output column mapping as well.
My package seems to work well enough now... Oh, and one more thing: I did not have any luck trying to use DSN-less ODBC connections to Timberline; only Dsn=dsnname;Uid=user;Pwd=pwd; worked.
SSIS packages running in 64-bit land cannot see 32-bit DSNs on a 64-bit OS, it seems... at least, it didn't work for me (Win7 64-bit, with a 32-bit Text ODBC DSN).
How can I do a bulk data insert from an array into a Sybase table using .NET? I don't want to use the BCP utilities.
It's a bit untidy:
You have to use sp_dboption to turn the relevant option on (select into/bulkcopy),
then you can use SELECT INTO to get the data in,
then you turn the option back off again.
It's also recommended that you drop all triggers, indexes, etc. before, and put them back after, for any, erm, lengthy operation...
How are you connecting? You might have a bit of fun if you are on ODBC, as it tends to blow up on proprietary stuff unless you turn pass-through on.
Found this after remembering similar troubles way back when, with Delphi and Sybase:
Sybase Manual
You can see this example to see how to execute the insert statement.
Then, you simply need to:
select one row of the Excel data at a time
build the insert command
execute it
or (the best way)
build an INSERT INTO command with several rows (not all! maybe 50 each time)
execute the command (a sketch follows the note below)
One side note: this will take a lot more time than a simple bulk copy!
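A provider-agnostic sketch of that batched approach (the table, the columns, and the excelTable DataTable are placeholders; in real code the values should be escaped or parameterized rather than concatenated):

// using System.Data; using System.Text;
// connection is any open IDbConnection (e.g. the Sybase ASE ADO.NET provider's connection)
const int batchSize = 50;                 // "not all! maybe 50 each time"
var batch = new StringBuilder();
int pending = 0;

foreach (DataRow row in excelTable.Rows)  // placeholder source DataTable
{
    batch.AppendFormat(
        "INSERT INTO MyTable (id, name) VALUES ({0}, '{1}') ",
        row["id"], row["name"]);          // placeholder table/columns
    pending++;

    if (pending == batchSize)
    {
        using (IDbCommand cmd = connection.CreateCommand())
        {
            cmd.CommandText = batch.ToString();
            cmd.ExecuteNonQuery();        // one round trip for the whole batch
        }
        batch.Length = 0;
        pending = 0;
    }
}
// ...flush whatever is left in batch the same way...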
After much investigation, I found that a DataAdapter is able to do batched inserts. It has a batch-size property (UpdateBatchSize, if I recall the name correctly); we can specify the number of rows we want to insert in one round trip. The DataAdapter's InsertCommand must be specified.
There is also an AseBulkCopy class in the namespace Sybase.Data.AseClient, in Sybase.AdoNet2.AseClient.dll:
DataTable dt = SourceDataSet.Tables[0];
using (AseBulkCopy bulkCopy = new AseBulkCopy((AseConnection)conn))
{
    bulkCopy.BatchSize = 10000;
    bulkCopy.NotifyAfter = 5000;
    bulkCopy.AseRowsCopied += new AseRowsCopiedEventHandler(bc_AseRowsCopied);
    bulkCopy.DestinationTableName = DestTableName;
    bulkCopy.ColumnMappings.Add(new AseBulkCopyColumnMapping("id", "id"));
    bulkCopy.WriteToServer(dt);
}
static void bc_AseRowsCopied(object sender, AseRowsCopiedEventArgs e)
{
    Console.WriteLine(e.RowCopied + " copied ....");
}
I have an MSSQL procedure with the following code in it:
SELECT Id, Role, JurisdictionType, JurisdictionKey
FROM dbo.SecurityAssignment WITH(UPDLOCK, ROWLOCK)
WHERE Id = @UserIdentity
I'm trying to move that same behavior into a component that uses OleDb connections, commands, and transactions to achieve the same result. (It's a security component that uses the SecurityAssignment table shown above. I want it to work whether that table is in MSSQL, Oracle, or DB2.)
Given the above SQL, if I run a test using the following code:
Thread backgroundThread = new Thread(
    delegate()
    {
        using (var transactionScope = new TransactionScope())
        {
            Subject.GetAssignmentsHavingUser(userIdentity);
            Thread.Sleep(5000);
            backgroundWork();
            transactionScope.Complete();
        }
    });
backgroundThread.Start();
Thread.Sleep(3000);
var foregroundResults = Subject.GetAssignmentsHavingUser(userIdentity);
Where
Subject.GetAssignmentsHavingUser
runs the SQL above and returns a collection of results, and backgroundWork is an Action that updates rows in the table, like this:
delegate
{
    Subject.UpdateAssignment(newAssignment(user1, role1));
}
Then the foregroundResults returned by the test should reflect the changes made in the backgroundWork action.
That is, I retrieve a list of SecurityAssignment table rows that have UPDLOCK, ROWLOCK applied by the SQL, and subsequent queries against those rows don't return until that update lock is released; thus the foregroundResults in the test include the updates made in the backgroundThread.
This all works fine.
Now I want to do the same with database-agnostic SQL, using OleDb transactions and isolation levels to achieve the same result. And I can't, for the life of me, figure out how to do it. Is it even possible, or does this row-level locking only apply at the DB level?