I'm using a SQL Server database that has stored procedures, and I want to use an in-memory database to unit test my code.
I've looked at a few - including VistaDB, which looks amazing but expensive - and Blackfish seems to be the only possibility so far. Before using it, though, I'd like to know exactly how compatible it is with T-SQL - obviously my existing stored procedures use T-SQL, so it's important that the in-memory DB I use can handle this.
Thanks
Short Answer: Not Very
Long Answer:
Whilst Blackfish is SQL-92 compliant, you're bound to run into stuff that worked on your T-SQL database that won't work on Blackfish.
I'd strongly recommend SQL Server Compact 4.0 (or Express at a pinch). Compact can be easily bundled and has a tiny footprint (roughly a 3 MB installer, around 18 MB on disk).
For instance, T-SQL flow control might differ from Blackfish flow control - not really relevant for plain selects, inserts and updates, but if you have T-SQL control-flow logic in stored procedures, I don't think it will port to Blackfish. Blackfish supports stored procedures, but they are compiled in other native languages (mainly Delphi). A good example from the documentation:
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/bfsql/storedprocedures_xml.html
Very different from the T-SQL procedures used in MS SQL
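To illustrate the flow-control point, here is a hedged sketch (procedure and table names are hypothetical) of the kind of T-SQL that has no direct Blackfish equivalent:

    -- T-SQL IF/ELSE flow control inside a stored procedure; Blackfish
    -- procedures are written in Delphi or other native languages instead,
    -- so logic like this would need rewriting rather than porting.
    CREATE PROCEDURE dbo.AdjustStock
        @ProductId INT,
        @Quantity  INT
    AS
    BEGIN
        IF @Quantity > 0
        BEGIN
            UPDATE dbo.Stock SET OnHand = OnHand + @Quantity
            WHERE ProductId = @ProductId;
        END
        ELSE
        BEGIN
            DELETE FROM dbo.Stock WHERE ProductId = @ProductId;
        END
    END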
Sorry for the potential FAQ/RTFM question. If I understand correctly, transactions cannot be controlled from inside native scripting units (functions, including anonymous DO blocks). What would the PostgreSQL folks recommend as the least unnatural way to combine scripting and transactions?
I think you are talking about autonomous transactions.
If so, you are correct that PostgreSQL doesn't support true stored procedures with autonomous transactions yet. (Feel free to sponsor work or contribute time...)
Your options are:
Use dblink to make connections-to-self and do the discrete units of work that way (see the sketch after this list)
Use an external process that connects to Pg
Use an in-db script with pl/python, pl/perl, etc that connects to the DB using psycopg2 / DBD::Pg / etc, rather than using the SPI, and does the work that way. Essentially you code the script like an externally connecting script, but run it within the DB for convenience.
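A hedged sketch of the dblink option, assuming the dblink extension is available and a loopback connection to the current database is acceptable (the audit_log table is hypothetical):

    -- Install the extension once per database
    CREATE EXTENSION IF NOT EXISTS dblink;

    -- Executed from inside a function, this INSERT runs on its own
    -- connection and commits independently of the caller's transaction -
    -- a poor man's autonomous transaction.
    SELECT dblink_exec(
        'dbname=' || current_database(),
        'INSERT INTO audit_log(message) VALUES (''kept even if the caller rolls back'')'
    );

The price is a second connection per unit of work, so it is worth pooling or batching if this runs frequently.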
I am new to Scala as well as the Play framework with Scala 2.0. I like the idea of writing the SQL code myself and having full control rather than depending on an ORM tool. But do Anorm SQL queries work across different database vendors like MySQL and Oracle? Since I am writing an application which should be capable of working with any relational database, my requirement is to write SQL which works across databases.
Some vendors might have Oracle and some might have MySQL, so my code should be DB agnostic. Is this possible, given that queries which run on MySQL will not run on Oracle?
Thanks in Advance,
Pradeep
Short answer: NO.
Long answer: Anorm is just a library for dispatching your SQL queries to the database through JDBC, retrieving the results and delivering them to you. It does not understand the differences between different databases because it relies on JDBC for connection handling, and on you for writing queries.
You either have to handle different DB engines yourself or have an ORM handle that for you.
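To make the dialect gap concrete, here is a hedged example (table and column names are hypothetical): the same "first 10 rows" query written for each engine, both of which Anorm would pass through verbatim:

    -- MySQL: paging uses LIMIT
    SELECT id, name FROM customers ORDER BY name LIMIT 10;

    -- Oracle (before 12c): there is no LIMIT; ROWNUM is assigned before
    -- ORDER BY applies, so the ordering has to happen in a subquery first
    SELECT id, name FROM (
        SELECT id, name FROM customers ORDER BY name
    ) WHERE ROWNUM <= 10;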
PS: Unless you really need to have a DB-agnostic application (and fully understand its implications), I'd suggest you simply target two or three popular engines and avoid future complications.
Could someone tell me if there are any times when it is more advantageous to use T-SQL over the Entity Framework? I'm aware of the N+1 issue, but are there any other gotchas I should be aware of? For instance, do LINQ-to-EF queries cache as well as stored procedures do? Are there instances where the SQL generated by EF is less than optimal?
Thanks!
Whenever you need to do the work "inside" the DB server and not go back and forth between your code and the server.
Also, when you use stored procedures you can alter the code without recompiling/redeploying, which can be easier in production environments.
IMHO it is sometimes easier to code complex SQL statements in T-SQL than with LINQ.
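As a hedged illustration of the "work inside the DB server" point (table and column names are hypothetical), one set-based T-SQL statement can replace the fetch-modify-save loop that naive EF code would run entity by entity:

    -- One round trip and one statement; the LINQ equivalent would load
    -- every matching Order into memory and issue an UPDATE per entity.
    UPDATE dbo.Orders
    SET    Status = 'Expired'
    WHERE  CreatedAt < DATEADD(DAY, -30, SYSUTCDATETIME());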
I have a database in PostgreSQL with millions of records, and I have to develop a website that will use this database through Entity Framework (using the dotConnect for PostgreSQL driver).
Since SQL Server and .NET are both native to the Windows platform, should I migrate the database from PostgreSQL to SQL Server 2008 R2 for performance reasons?
I have read some blogs comparing the two RDBMSs, but I am still confused about which system I should use.
There is no clear answer here, as it's subjective; however, this is what I would consider:
The overhead of learning a new DBMS and its tools.
The SQL dialects each RDBMS uses and if you are using that dialect currently.
The cost (monetary and time) required to migrate from PostgreSQL to another RDBMS.
Do you or your client have an ongoing budget for the new RDBMS? If not, don't make the mistake of developing an application to use a RDBMS that will never see the light of day.
Personally, if your current database is working well, I wouldn't change it. Why fix what isn't broke?
You need to find out if there is actually a problem, and if moving to SQL Server will fix it before doing any application changes.
Start by ignoring the fact that you've got .NET and are using Entity Framework. Look at the queries your web application is going to make, and try them directly against the database. See if it's returning the information quickly enough.
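A hedged example of what "try them directly" can look like in PostgreSQL (the query, tables and columns are hypothetical):

    -- EXPLAIN ANALYZE executes the query and reports the actual plan and
    -- timings, showing whether the database itself is the bottleneck.
    EXPLAIN ANALYZE
    SELECT c.name, COUNT(*) AS order_count
    FROM   customers c
    JOIN   orders o ON o.customer_id = c.id
    GROUP  BY c.name
    ORDER  BY order_count DESC
    LIMIT  20;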
Only if, after you've tuned indexes and so on, you can't make the answers come back in a time you're happy with, should you decide the database is the problem. At that point it makes sense to try the same tests against a SQL Server database, but don't just assume SQL Server will be faster. You might find that neither can do what you need, and that you need faster disks or more memory.
The mechanism you're using to talk to the database (dotConnect or the Microsoft drivers) is likely to be a very minor performance consideration, since the amount of information flowing (SQL statements in one direction and result sets in the other) will be almost identical for both technologies.
I'm learning some ADO.NET. I noticed that quite a bit of database functionality can also be found in ADO.NET.
I'm kind of confused. Should I use ADO.NET to manage all the interactions, or should I push the work down to the database?
I don't know what should be done in ADO.NET and what should be done at the database level.
Thanks for helping.
If you mean what should be handled in SQL statements issued from ADO.NET versus what should be done in stored procedures stored at the database level: as much as possible in stored procedures, at least that's what I live by. In addition to reducing the chance of SQL injection, stored procedures allow you to modify SQL calls without having to recompile and redeploy your code, and they enable execution-plan reuse by the query optimizer.
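As a hedged sketch of that advice (procedure, table and column names are hypothetical), a parameterized T-SQL procedure like the one below keeps user input out of the SQL text and gives the optimizer a single plan to reuse:

    CREATE PROCEDURE dbo.GetOrdersByCustomer
        @CustomerId INT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- @CustomerId arrives as a typed parameter from ADO.NET
        -- (SqlCommand with CommandType.StoredProcedure), never
        -- concatenated into the SQL text, which blunts injection.
        SELECT OrderId, CreatedAt, Total
        FROM   dbo.Orders
        WHERE  CustomerId = @CustomerId;
    END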