Silverlight WCF RIA Service select from SQL View vs SQL Table - wcf-ria-services

I have arrived at this dilemma via a tortuous and frustrating route, but I'll start with where I am right now. For information I'm using VS2010, Silverlight 5 and the latest versions of the Silverlight and RIA Toolkits, SDKs etc.
I have a view in my database (it's actually now an indexed view, but that has made no difference to the behaviour). For testing purposes (and that includes testing my sanity) I have duplicated the view as a Table (ie identical column names and definitions), and inserted all the view rows into the table. So if I SELECT * from the view or the table in Query Analyzer, I get identical results. So far so good.
I create an EDM (Entity Data Model) in my Silverlight Business Application web project, including all objects.
I create a Domain Service based on the model, and it creates ContextTypes and metadata for both the View and the Table, and associated Query objects.
If I populate a Silverlight ListBox in my Silverlight project via the Table Query, it returns all the data in the table.
If I populate the same ListBox via the View Query, it returns one row only, always the first row in the collection, however it is ordered. In fact, if I delve into the inner workings via the debugger, when it executes the ObjectContext Query in the service, it returns a result set of the correct number of rows, but all the rows are identical! If I order ascending I get n copies of the first row, descending I get n copies of the last row.
Can anyone put me out of my misery here, and tell me why the View doesn't work?
Ade

OK, well that was predictable - nearly every time I ask a question on a forum I stumble across the answer while I'm waiting for responses to flood in!
Despite having been through the metadata and model.designer files and made sure that all "view" and "table" class/method definitions etc were identical, it was still showing the exasperating difference in behaviour between view and table queries. So the problem just had to be caused by the database, right?
Sure enough, I hadn't noticed myself creating NOT NULL columns when I created the "identical" Table version of my view! Even though I was using a SELECT NEWID() to create a unique key column on the view, the database insisted that the ID column in the view was NULLABLE, and it was apparently this which was causing the problem.
To save some storage space I switched from using NEWID() to using ROW_NUMBER() to create my key column, but still had the "NULLABLE" property problem. So I then changed it to
SELECT ISNULL(ROW_NUMBER() OVER (...), -1)
for the ID column, and at last the column in the view was created NOT NULL! Even though neither NEWID() nor ROW_NUMBER() can ever generate NULL output, SQL Server won't infer that for a computed view column - it seems you have to hold its hand and reassure it with ISNULL before it will mark the column as non-nullable.
Having done this, deleted/recreated my model and service files, everything burst into glorious technicolour life without any manual additions of [Key()] properties or anything else. The problem had been with the database all along, and NOT with the Model/Service/Metadata definitions.
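For reference, if you do end up stuck with a nullable view column, the manual workaround would be to mark the key yourself in the domain service metadata class - roughly like this (class and property names below are illustrative, not my actual model):

using System.ComponentModel.DataAnnotations;

// Hypothetical entity generated from the view; only the metadata "buddy" class matters here.
[MetadataTypeAttribute(typeof(MyViewRow.MyViewRowMetadata))]
public partial class MyViewRow
{
    internal sealed class MyViewRowMetadata
    {
        // Metadata classes are never instantiated.
        private MyViewRowMetadata() { }

        [Key]   // tells RIA Services which column uniquely identifies a row of the view
        public int ID { get; set; }

        public string Description { get; set; }
    }
}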
Hope this saves someone some time. Now all I need to do is work out why the original stored procedure method I started with two days ago doesn't work - but at least I now have a hint!
Ade

Related

Why would LINQ group by results be fewer from Visual Studio compared to SQL Server and Linqpad?

There are other questions similar to mine but they didn't help me. I'm performing what should be a simple Linq group by operation, and in SQL Server Management Studio and Linqpad I get 23,859 results from a table containing 36,102 total records. This is what I believe to be the correct result.
For some reason, when I move my query into my Visual Studio application code, I get 22,463 groups - and I cannot for the life of me figure out why.
I need to group this table's rows based on unique combinations of 8 columns. The columns contain account IDs, person IDs, device IDs, premise IDs, and address columns. Basically, a person can have multiple accounts, multiple premises, multiple devices, and each premise can have its own address. I know the table design is lacking... it's customer provided, and there are other columns that necessitate the format - it should not be relevant to the grouping though.
SQL Server: 23859 groups:
SELECT acct_id, per_id, dev_id, prem_id, address, city, state, postal
FROM z_AccountInfo GROUP BY acct_id, per_id, dev_id, prem_id, address, city, state, postal
ORDER BY per_id
Linqpad: 23859 groups:
//Get all rows...
List<z_AccountInfo> zAccounts = z_AccountInfo.ToList();
//Group them...
var zAccountGroups = (from za in zAccounts
group za by new { za.acct_id, za.per_id, za.dev_id, za.prem_id, za.address, za.city, za.state, za.postal } into zaGroups
select zaGroups).OrderBy(zag => zag.Key.per_id).ToList();
Visual Studio: 22463 groups - WRONG?:
//Instantiate a list I can use outside of the Entity Framework context...
List<z_AccountInfo> zAccounts = new List<z_AccountInfo>();
using (Entities db = Entities.CreateEntitiesForSpecificDatabaseName(implementation))
{
//Get all rows. Count verified to be correct...
zAccounts = db.z_AccountInfo.OrderBy(z => z.per_id).ToList();
}
// Group the rows. Doesn't work??? 22463 groups?
var zAccountGroups = (from z_AccountInfo za in zAccounts
group za by new { za.acct_id, za.per_id, za.dev_id, za.prem_id, za.address, za.city, za.state, za.postal } into zag
select zag).ToList();
I'm hoping someone can spot a syntax issue or something else I'm missing. Seems like Visual Studio is grouping something.. but it's off by 1396 groups... that's pretty significant.
UPDATE:
sgmoore's comment below put me on the track of making sure the zAccounts list from Linqpad and Visual Studio match. They do not!?! Querying the table in SQL Server shows this data (account / device / premise)
Inspecting the Visual Studio output in Beyond Compare shows the device ID 6106471 being erroneously repeated / duplicated for the 4 bottom rows... meaning there should be 2 groups here, but my query will only see 1...
Since I'm using Entity Framework to query the table in Visual Studio, this makes me think something is wrong with my model, but I have no idea what it could be. Beyond Compare shows this same issue happening multiple times, which explains why the group counts are off. It's like EF knows there are 8 rows (in this case) - but the field that differentiates them doesn't come through.
I tried truncating the table and re-adding all of the data into it and re-running and the bad behavior persists. Quite confused here - I've never had this kind of issue with Entity Framework before.
I even ran SQL Profiler when VS was executing and trapped the query Entity Framework is firing to populate zAccounts. That query when fired by itself in SQL Server correctly shows the four 7066550 rows. This seems to be squarely on Entity Framework and the ToList() call that populates the full collection - ideas anyone?
Short answer - make sure the table in the Entity Framework model has an Entity Key on a column where the values of the column are unique.
Longer answer - to troubleshoot, I ran SQL Profiler to ensure that the query EF was sending to SQL Server was correct - and it was. I ran that query and inspected the results to see the data I was wanting. The problem was my model. I had an Entity Key set on a field that did not contain unique values. My guess is that EF assumes that, since the field is set as the Entity Key, its values must be unique. Based on that, it caches the first row it materialises for each key value and then projects that row's values into every subsequent result with the same key. That is a bad assumption in my view if there is no validation check on the field marked as the Entity Key. I realize I'm to blame here for telling it to use a non-unique field as the Entity Key - but I don't see a case where this would be a good idea without it throwing at least a warning.
Anyway, to resolve it, I added a proper id column to the table and set its Identity spec and auto-increment so that every row in the table has a unique id. After that, I updated my edmx to use the new column as the Entity Key, re-ran my code, and everything magically started working.
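One extra note that is not part of the fix above, just a way to confirm the diagnosis: with an ObjectContext/edmx model (EF 4), turning off change tracking skips identity-map resolution, so the "duplicated" rows should come back with their real values even while the Entity Key is still wrong. A hedged sketch, reusing the names from the question:

using System.Data.Objects;   // MergeOption lives here in EF 4

// Same block as in the question, with tracking disabled for this read-only query.
using (Entities db = Entities.CreateEntitiesForSpecificDatabaseName(implementation))
{
    db.z_AccountInfo.MergeOption = MergeOption.NoTracking;   // bypass identity resolution
    zAccounts = db.z_AccountInfo.OrderBy(z => z.per_id).ToList();
}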

Using a query to supply an Access form

I have a Microsoft Access form that is being supplied (somehow) by a query. The query contains three tables linked together via their respective primary and foreign keys, and the form displays data quite happily.
The strange thing (as far as I'm concerned) is the ability of the form to then allow data entry using the query.
However, if the user creates a new record, the whole thing seems to have problems due to a required field in the [table2] table.
SELECT [table1].*
,[table2].JobNo
,[table2].PlannedDateOC
,[table3].DateJobStarted
,[table1].PlanNo
FROM (
[table1] LEFT JOIN [table2] ON [table1].PlanNo = [table2].PlanNo
)
LEFT JOIN [table3] ON [table2].JobNo = [table3].JobNo
ORDER BY [table2].PlannedDateOC
,[table1].PlanNo;
According to the users, this form worked perfectly prior to conversion from Access 97/2003 format to Access 2010 (2007 file format).
Could anyone clarify whether this is legitimate (i.e. that it should work), and if so, whether the process would work in either version of Access? The concept of being able to use a query for data entry is quite alien to me.
Let me know if you need further clarification.
NOTE:
One thing of note, here, is that I did move some of the fields into the form header so that they were always visible as the rest of the form scrolls. I don't know if this will have any side effects on the performance of the form.
The above query will allow you to insert data into [table1], provided all NOT NULL fields are given values; just make sure that only [table1] fields are dirtied when a new record is inserted.

What's the best way to temporarily persist results of a long running SP?

I have a TSQL stored procedure that can run for a few minutes and return a few million records, and I need to display that data in an ASP.NET grid (Infragistics WebDataGrid, to be precise). Obviously I don't want to return all the data at once, and I need to set up some kind of paging, so that every time the user selects another page, another portion of data is loaded from the DB. But I can't run the SP every time a new page is requested - it would take too much time.
What would be the best way to persist the data from the SP, so that when the user selects a new page, the new data portion can be loaded by a simple SELECT... WHERE from that temporary data store?
A few options
One:
If the user only pages forward then you could just hold the connection open and use a DataReader. Just .Read() as needed.
Two:
Create a #temp table, using the userID as part of the name, to store the results. I don't like this much: if the user aborts, tables are sometimes left over. There's about a 1/2-second hit to create and drop the #temp. Store the entire result set, or just the PKs, and create the page detail on demand.
Three:
Use a DataReader to read the PKs into a List<>. It is faster than you would guess, and that List only lives in IIS (it doesn't go to the browser). A List can be referenced by ordinal [] and preserves the sort order. Get the detail for a page as required. The catch is that WHERE PK IN (3,9,2,6) will not return rows in that order, so I use a TVP (table-valued parameter) to pass (order, PK) pairs and sort the page by the order column. I do exactly this and get page loads for objects with 20 properties, 40 rows at a time, in under 1/2 second. Do one query per table (NOT one per row), then assemble the objects and assign properties in .NET. Use a DataReader (not a DataTable). You can even run the reader on a BackgroundWorker and pass back the first page of PKs via ProgressChanged. A rough sketch of the approach follows this list.
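The sketch, assuming an int primary key column called Id and a single result table called dbo.MyResults (both made up for illustration) - this shows the shape of the idea rather than production code:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;

static class PkPager
{
    // Run once: pull only the ordered PKs into memory. Even for a few million
    // ints this is a small List and a fast DataReader loop.
    public static List<int> LoadKeys(string connectionString)
    {
        var keys = new List<int>();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Id FROM dbo.MyResults ORDER BY SortColumn", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    keys.Add(reader.GetInt32(0));
            }
        }
        return keys;
    }

    // Run per page: fetch detail rows for just the keys on this page.
    // Note: WHERE Id IN (...) does not preserve order, so re-sort against the key list
    // (or pass the order in via a TVP, as described above).
    public static DataTable LoadPage(string connectionString, List<int> keys, int pageIndex, int pageSize)
    {
        var table = new DataTable();
        var pageKeys = keys.Skip(pageIndex * pageSize).Take(pageSize).ToList();
        if (pageKeys.Count == 0)
            return table;                                  // past the end of the data

        var inList = string.Join(",", pageKeys);           // integer PKs only, so no injection risk
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT * FROM dbo.MyResults WHERE Id IN (" + inList + ")", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                table.Load(reader);
        }
        return table;
    }
}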
Have you looked at Server Side Paging? (The article is from 2005, but the approach works with 2008 and CTEs.) Also - just wondering, is there any reason you are returning that many rows? I can't see much use in a human paging through a million records, even if the page size were 1000.
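For completeness, this is roughly what ROW_NUMBER()-based server-side paging looks like from ADO.NET. The table and sort column below are placeholders, and this is a sketch of the general pattern, not the code from the linked article:

using System.Data;
using System.Data.SqlClient;

static class ServerSidePager
{
    public static DataTable GetPage(string connectionString, int pageNumber, int pageSize)
    {
        const string sql = @"
            WITH Numbered AS
            (
                SELECT *, ROW_NUMBER() OVER (ORDER BY SomeSortColumn) AS RowNum
                FROM dbo.MyBigResultTable   -- placeholder for wherever the SP output is persisted
            )
            SELECT *
            FROM Numbered
            WHERE RowNum BETWEEN (@PageNumber - 1) * @PageSize + 1
                             AND  @PageNumber * @PageSize;";

        var table = new DataTable();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.Add("@PageNumber", SqlDbType.Int).Value = pageNumber;
            cmd.Parameters.Add("@PageSize", SqlDbType.Int).Value = pageSize;
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                table.Load(reader);
        }
        return table;
    }
}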

iPhone Dev - Trying to access every row of a sqlite3 table sequentially

This is my first time using SQL at all, so this might sound basic. I'm making an iPhone app that creates and uses a sqlite3 database (I'm linking against libsqlite3.dylib and importing "sqlite3.h"). I've been able to correctly create the database and a table in it, but now I need to know the best way to get stuff back out of it.
How would I go about retrieving all the information in the table? It's very important that I be able to access each row in the order that it is in the table. What I want to do (if this helps) is get all the info from the various fields in a single row, put all that into one object, and then store the object in an array, and then do the same for the next row, and the next, etc. At the end, I should have an array with the same number of elements as I have rows in my sql table. Thank you.
My SQL is rusty, but I think you can use SELECT * FROM myTable and then iterate through the results. You can also use a LIMIT/OFFSET(1) structure if you do not want to retrieve all elements at one from your table (for example due to memory concerns).
(1) Note that this can perform unexpectedly badly, depending on your use case. Look here for more info...
"How would I go about retrieving all the information in the table? It's very important that I be able to access each row in the order that it is in the table."
That is not how SQL works. Rows are not kept in the table in a specific order as far as SQL is concerned. The order of rows returned by a query is determined by the ORDER BY clause in the query, e.g. ORDER BY DateCreated, or ORDER BY Price.
But SQLite has a rowid virtual column that can be used for this purpose. It reflects the sequence in which the rows were inserted. Except that it might change with a VACUUM. If you make it an INTEGER PRIMARY KEY it should stay constant.
SELECT * FROM myTable ORDER BY rowid

SqlDataAdapter Update

Can anyone help me understand why this error occurs when I update using a SqlDataAdapter with a join query?
Dynamic SQL generation is not supported against multiple base tables.
You have a "join" in your main query for your dataset (The first one in the TableAdapter with a check by it). You can't automatically generate insert/update/delete logic for a TableAdapter when the main query has multiple tables referenced in the query via a join. The designer isn't smart enough to figure out which table you want to send updates to in that case, that is why you get the error message.
Solution. Ensure that your main query only references the table you want the designer to write insert/update/delete code for. Your secondary queries may reference as many tables as you want.
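To illustrate, here is a minimal sketch (using a made-up Orders table, not your schema): when the adapter's main query touches only one base table, SqlCommandBuilder can generate the update logic for you.

using System.Data;
using System.Data.SqlClient;

static class SingleTableUpdate
{
    public static void SaveChanges(string connectionString, DataTable orders)
    {
        using (var conn = new SqlConnection(connectionString))
        // Main query references only dbo.Orders, so update commands can be generated.
        using (var adapter = new SqlDataAdapter("SELECT OrderId, CustomerId, Total FROM dbo.Orders", conn))
        using (var builder = new SqlCommandBuilder(adapter))
        {
            // The builder supplies INSERT/UPDATE/DELETE commands on demand;
            // a join in the SELECT is exactly what makes this step fail.
            adapter.Update(orders);
        }
    }
}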
In my case, I was trying to set a value for the identity column in my DataRow. I simply deleted the code that set the identity column's value, and it worked.
My Scenario:
Database:
uin [primary, identity]
name
address
Whenever I tried to set datarow("uin"), the error occurred. It works fine with datarow("name") and datarow("address").
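A hedged sketch of the same idea with made-up data, in case it helps: leave the identity column alone (optionally marking it auto-increment on the client so placeholder values never clash with real ones) and only assign the other columns.

using System.Data;

static class AddRowWithoutIdentity
{
    public static void AddPerson(DataTable people)
    {
        // Optional: let ADO.NET hand out temporary negative placeholders so new rows
        // never collide with real identity values coming back from SQL Server.
        DataColumn uin = people.Columns["uin"];
        uin.AutoIncrement = true;
        uin.AutoIncrementSeed = -1;
        uin.AutoIncrementStep = -1;

        DataRow row = people.NewRow();
        // row["uin"] is deliberately NOT set - that assignment was what triggered the error.
        row["name"] = "Jane Doe";
        row["address"] = "1 Example Street";
        people.Rows.Add(row);
    }
}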
Hope it works for you too