I need to make sure my ADO.NET commands don't return more than 1,000-5,000 rows. Is there an ADO.NET way to do this, or is it only possible in T-SQL?
I'm calling a stored procedure, and I don't control its source code. Hence I was hoping there was an ADO.NET way to do it.
Before LINQ, this was typically done using a TOP N clause in your inline query or stored proc. With LINQ there are some useful methods called "Take" and "Skip" which provide a construct for retrieving and/or skipping N rows. Under the hood, LINQ figures out how to construct the inline query that yields exactly the number of rows you want off the top.
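For illustration, a minimal sketch of Skip/Take paging (MyDataContext and Orders are hypothetical names, not from the question):

using System.Linq;

using (var context = new MyDataContext())
{
    var rows = context.Orders
        .OrderBy(o => o.OrderId)  // Skip/Take need a deterministic order
        .Skip(2000)               // skip the first 2000 rows
        .Take(1000)               // then return at most 1000 rows
        .ToList();                // LINQ builds the TOP/ROW_NUMBER SQL for you
}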
[Edit]
Since you're calling a stored procedure, I'd advise just using a TOP N clause in its SELECT statement. This is the path of least resistance and, IMHO, the simplest to maintain going forward since you already have the stored procedure.
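For reference, the same TOP (N) pattern in an inline ADO.NET query looks like this; a hedged sketch, where dbo.Orders and connectionString are placeholders:

using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT TOP (@maxRows) * FROM dbo.Orders ORDER BY OrderId;", conn))
{
    cmd.Parameters.AddWithValue("@maxRows", 1000);  // cap the result set at the source
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // process at most 1000 rows
        }
    }
}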
I have a PostgreSQL query constructed by a templating mechanism. What I want to do is determine the relations actually hit by the query when it is run and record them in a relation, so this is a very rudimentary lineage problem. Simply looking at the relation names appearing in the query (or parsing the query) would not easily solve the problem, as the queries are somewhat complex and the templating mechanism inserts expressions like WHERE FALSE.
I can of course do it by using EXPLAIN on the query and inserting the relation names I find manually. However, this has two drawbacks:
EXPLAIN actually runs the query. Unfortunately, running the query takes a lot of time, so it is not ideal to run it twice: once for the result and once for the EXPLAIN.
It is manual.
After reading a few documents, I found out that one can log the result of an EXPLAIN automatically to a CSV file and read it back into a relation. But, as far as I understand, this means logging everything to the CSV, which is not an option for me. Also, automatic logging seems to be triggered only when the execution takes longer than a predetermined threshold, and I want to do this for a few specific queries, not for all time-consuming ones.
PS: This does not need to be implemented fully at the database layer. For instance, once I have the result of EXPLAIN in a relation, I can parse it and extract the relations it hits at the application layer.
EXPLAIN does not execute the query; only EXPLAIN ANALYZE does.
You can run EXPLAIN (FORMAT JSON) SELECT ..., which returns the execution plan as JSON. Simply extract all "Relation Name" attributes, and you have a list of the tables scanned.
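To automate the extraction at the application layer, a minimal sketch of the idea, assuming Npgsql and System.Text.Json (the connection string, query, and names are placeholders, not part of the original answer):

using System.Collections.Generic;
using System.Text.Json;
using Npgsql;

static class PlanLineage
{
    // Run EXPLAIN (FORMAT JSON) on the query (without executing it)
    // and collect every "Relation Name" attribute from the plan tree.
    public static List<string> GetRelationsHit(string connString, string query)
    {
        var relations = new List<string>();
        using (var conn = new NpgsqlConnection(connString))
        {
            conn.Open();
            using (var cmd = new NpgsqlCommand("EXPLAIN (FORMAT JSON) " + query, conn))
            using (var doc = JsonDocument.Parse((string)cmd.ExecuteScalar()))
            {
                Collect(doc.RootElement, relations);
            }
        }
        return relations;
    }

    // Recursive walk: plan nodes nest under "Plans" arrays, so visit everything.
    private static void Collect(JsonElement node, List<string> relations)
    {
        if (node.ValueKind == JsonValueKind.Object)
        {
            foreach (var prop in node.EnumerateObject())
            {
                if (prop.Name == "Relation Name")
                    relations.Add(prop.Value.GetString());
                else
                    Collect(prop.Value, relations);
            }
        }
        else if (node.ValueKind == JsonValueKind.Array)
        {
            foreach (var item in node.EnumerateArray())
                Collect(item, relations);
        }
    }
}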
I am just starting to use tSQLt inside Red Gate's SQL Test. I have to deal with quite large tables (with a large number of columns) in a legacy database. What is the best practice for inserting fake data into such tables? The scripted INSERT statements are quite large, and they would make the 'arrange' part of my unit tests literally unreadable. Can I factor out such code? Also, is there a way to not only script the INSERT statement but also fill in some values automagically? Thanks.
I would agree with your comment that you don't need to fill out all of the columns in your insert statement.
tSQLt.FakeTable removes all non-null constraints from the columns, as well as computed columns and identity columns (although these last two can be reinstated using particular parameters to FakeTable).
Therefore, you only need to populate the columns which are relevant to your code under test, which is usually only a small subset of the columns in the table(s).
I wrote about this in a bit more detail in this article, which also contains a few other 'gotchas' you may want to know about.
Additionally, if you have a number of tests which all need the same table faked and the same data inserted, I'd suggest you consider using a SetUp routine. This is a stored procedure in the test class (schema) which is called SetUp, and tSQLt calls it before each test in that schema. SetUp procedures won't show in Red Gate's SQL Test window as yet (I've suggested that as an improvement), but they will still work. This can make the tests harder to follow at a glance, but it does modularise the code, reducing identical, repeated code.
Is there a way to get SQL Compare to output all objects, specifically stored procedures, in a database and not just those that differ?
In your compare results you should see a few different groupings, one being "identical objects". That one is at the bottom of my view. I am using SQL Compare 10.
If I want it to actually script all of the objects (rather than just see them in the results), I usually create an empty database and compare to that, then filter down to just stored procedures and go through the synchronization wizard.
Our app runs on many web servers. The time on these web servers can get skewed over time, as is to be expected. The database is a single separate machine with its own time. We're using EF 5.0 and have a table that needs very precise and consistent times in multiple columns. I would like to be sure the date columns in this table always use the database server's time.
In SQL I would just set the column to GetUtcDate(). Simple: the date is computed and set on the database server, done. But how can I do this with EF on an insert or update? To be clear, I need the SQL generated by EF to set the column to the function GetUtcDate() so that the value comes from the database server. I do not want the date calculated on the web server. Some ideas I've seen and considered, and why they don't work for me:
1) I could use default values on the columns in the schema. But I have many update scenarios where I also need consistent dates, not just inserts.
2) I could use triggers in the database. But we currently have zero logic in our database (we are using an ORM after all) and I don't want to set that precedent if I can avoid it. It also is tricky to determine when to update these columns on the database end.
3) I can get the database server time manually (with a separate query, as in the example below), set the column to that value, then do the update. But this is very inefficient, as it requires an extra call to the database; in a tight loop this is way too much overhead. Plus the time is now less accurate, since I got it milliseconds earlier, though it is at least consistent.
objectContext.CreateQuery<DateTime>("CurrentUtcDateTime()").AsEnumerable().First();
So what is the right way to do this? Or is it even possible to make EF do the right thing here?
This question is really: can I tell EF to get the date/time from the DB / underlying provider? As far as I know, this isn't possible within EF statements, no.
You should run a simple SQL statement first to get the DB time. See the T-SQL GETDATE documentation and choose your preferred date function (GETUTCDATE() here):
// Query the database server's clock once, then reuse the value (needs System.Linq).
DateTime serverDate = context.Database.SqlQuery<DateTime>("SELECT GETUTCDATE();").First();
Now use serverDate in your EF LINQ statement.
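For example (Orders and ModifiedUtc are hypothetical names):

var order = context.Orders.Find(orderId);
order.ModifiedUtc = serverDate;   // same server-sourced timestamp for every row in the batch
context.SaveChanges();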
I want to know: should I use stored procedures with Entity Framework, or just use LINQ to Entities queries?
Say I want to get a customer's orders. One solution is a stored procedure that joins the two tables and, for example, returns the order IDs.
Another solution is a LINQ to Entities query. This solution has an advantage over the first: we can use navigation properties to move easily through the customer's order information, whereas the stored procedure only returns IDs, so it's a little harder to access the order information.
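For illustration, the navigation-property version might look like this (the entity and property names are hypothetical):

// The Customer -> Orders navigation property does the join for you.
var orders = context.Customers
    .Where(c => c.CustomerId == customerId)
    .SelectMany(c => c.Orders)
    .ToList();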
So which is better, considering the second solution's advantage?
Both solutions work - and since you didn't define what "better" means to you, we cannot possibly tell you which solution is "better".
Using stored procedures with Entity Framework is definitely possible - especially with EF4. Stored procedures have their benefits - you don't have to give the user direct table access, you can let a DBA tweak those stored procs for top performance, and you can do things like delete a Customer by just calling a stored proc with the CustomerID to delete (instead of having to first load the whole customer, just to delete it).
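For example, that delete-by-ID might look like this with the DbContext API (dbo.DeleteCustomer is a hypothetical procedure):

using System.Data.SqlClient;

// No need to load the entity first; the proc does the work on the server.
context.Database.ExecuteSqlCommand(
    "EXEC dbo.DeleteCustomer @CustomerId",
    new SqlParameter("@CustomerId", customerId));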
So stored procedures definitely have benefits - the downside for many folks is that they have to write them in T-SQL, and now suddenly part of your application logic is in your C# code, while another part is in the stored procedures' T-SQL code.
So again: as vaguely as you've asked, there's no real good answer to this. Both approaches are valid and both work - it's a bit of a personal preference which one you want to use.