What's the best way to temporarily persist results of a long-running SP? - tsql

I have a TSQL stored procedure that can run for a few minutes and return a few million records, and I need to display that data in an ASP.NET grid (Infragistics WebDataGrid, to be precise). Obviously I don't want to return all the data at once and need to set up some kind of paging: every time the user selects another page, another portion of data is loaded from the DB. But I can't run the SP every time a new page is requested - it would take too much time.
What would be the best way to persist the data from the SP, so that when the user selects a new page, the new data portion can be loaded with a simple SELECT... WHERE from that temporary data store?

A few options:
One:
If the user only pages forward, you could just hold the connection open and use a DataReader. Just .Read() as needed.
Two:
Create a #temp table, using the userID as part of the name, to store the results. I don't like this, as tables are sometimes left over if the user aborts. There's about a 1/2-second hit to create and drop the #temp. Store the entire result set, or just the PKs, and build the page detail on demand.
Three:
Use a DataReader to read the PKs into a List<>. It is faster than you would guess. That List only goes to IIS (not to the browser). A List can be referenced by ordinal [] and preserves the sort. Get the detail for a page as required. The problem here is that WHERE PK IN (3, 9, 2, 6) will not return the rows in that order. I use a TVP to pass (order, PK) pairs so the page is sorted by that order. I do exactly this and get page loads for objects with 20 properties, 40 rows at a time, in under 1/2 second. Do one query per table (NOT one per row), then assemble and assign properties in .NET. Use a DataReader (not a DataTable). You can even run the reader on a BackgroundWorker and pass back the first page of PKs using ProgressChanged.
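A hedged sketch of that TVP approach (the type and table names are invented for illustration):

-- a table type carrying the desired sort position alongside each key
CREATE TYPE dbo.PageKeys AS TABLE (
    SortOrder int NOT NULL,
    PK        int NOT NULL PRIMARY KEY
);
GO

-- fetch one page of detail rows in the caller-supplied order
CREATE PROCEDURE dbo.GetPageDetail
    @Keys dbo.PageKeys READONLY
AS
SELECT d.*
FROM dbo.DetailTable AS d
JOIN @Keys AS k ON k.PK = d.PK
ORDER BY k.SortOrder;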

Have you looked at Server Side Paging? (The article is from 2005, but it will work with 2008 and CTEs.) Also - just wondering - is there any reason you are returning that many rows? I can't see much use in a human paging through a million records, even if the page size were 1000.
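For reference, a minimal sketch of that technique (dbo.BigTable and its Id key column are invented for illustration):

DECLARE @PageNumber int = 1, @PageSize int = 50;

-- number the rows once, then pick out a single page
WITH Numbered AS (
    SELECT *, ROW_NUMBER() OVER (ORDER BY Id) AS rn
    FROM dbo.BigTable
)
SELECT *
FROM Numbered
WHERE rn BETWEEN (@PageNumber - 1) * @PageSize + 1
             AND @PageNumber * @PageSize;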

Is there any way of simulating an autocommit in functions in Postgres?

I know you can't control transactions in functions or procedures, but I'm wondering if there's something like that or some alternative.
The problem: I have a very expensive function that turns things like a customer id into a nice HTML report. Trouble is - it takes seconds... so I've put something into the function that basically looks at a cache to see if a pre-rendered one exists, returning it if it does - and if it doesn't, it adds to the cache afterwards - so it will only ever render things once.
Now - given most things will never change - I sort of want to do it across everything - but given the time, it will probably take about 1 year to run - which is OK actually - this system has to run for ten. Trouble is - I don't want it to lock anything in the database, so I sort of want it to trickle along, doing one at a time and committing immediately.
I investigated pg_cron, because that seemed an option, but the version of Aurora I am using doesn't support it. Any ideas how I'd do this inside the database?
Whatever you do, don't code that as a function running inside the database. It is fine to do the calculations in the database, but generating a report and iterating over customers belong in client code. That way, committing each report is not a problem.
Add a text column to the customer table to hold the HTML report.
Put trigger(s) on the table(s) whose content influences the report, and have the trigger refresh the report column.
This gives you instant retrieval and only (re)calculates the report when needed.
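A minimal sketch of that idea, shown here for changes to the customer table itself (my_report_proc is from the question; in reality you would put such triggers on every table that feeds the report):

create or replace function refresh_customer_report() returns trigger as $$
begin
    -- re-render the report whenever the row changes
    new.report := my_report_proc(new.id);
    return new;
end;
$$ language plpgsql;

create trigger customer_report_refresh
before insert or update on customer
for each row execute function refresh_customer_report();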
Or if data is stable:
create table customer_report (
    customer_id int not null primary key,
    report      text not null
);

insert into customer_report
select id, my_report_proc(id)
from customer;

Silverlight WCF RIA Service select from SQL View vs SQL Table

I have arrived at this dilemma via a tortuous and frustrating route, but I'll start with where I am right now. For information I'm using VS2010, Silverlight 5 and the latest versions of the Silverlight and RIA Toolkits, SDKs etc.
I have a view in my database (it's actually now an indexed view, but that has made no difference to the behaviour). For testing purposes (and that includes testing my sanity) I have duplicated the view as a table (i.e. identical column names and definitions), and inserted all the view's rows into the table. So if I SELECT * from the view or the table in Query Analyzer, I get identical results. So far so good.
I create an EDM model in my Silverlight Business Application web project, including all objects.
I create a Domain Service based on the model, and it creates ContextTypes and metadata for both the View and the Table, and associated Query objects.
If I populate a Silverlight ListBox in my Silverlight project via the Table Query, it returns all the data in the table.
If I populate the same ListBox via the View Query, it returns one row only, always the first row in the collection, however it is ordered. In fact, if I delve into the inner workings via the debugger, when it executes the ObjectContext Query in the service, it returns a result set of the correct number of rows, but all the rows are identical! If I order ascending I get n copies of the first row, descending I get n copies of the last row.
Can anyone put me out of my misery here, and tell me why the View doesn't work?
Ade
OK, well that was predictable - nearly every time I ask a question on a forum I stumble across the answer while I'm waiting for responses to flood in!
Despite having been through the metadata and model.designer files and made sure that all "view" and "table" class/method definitions etc were identical, it was still showing the exasperating difference in behaviour between view and table queries. So the problem just had to be caused by the database, right?
Sure enough, I hadn't noticed that I was creating NOT NULL columns when I created the "identical" table version of my view! Even though I was using SELECT NEWID() to create a unique key column on the view, the database insisted that the ID column in the view was NULLABLE, and it was apparently this which was causing the problem.
To save some storage space I switched from using NEWID() to using ROW_NUMBER() to create my key column, but still had the "NULLABLE" property problem. So I then changed it to
SELECT ISNULL(ROW_NUMBER() OVER (...), -1)
for the ID column, and at last the column in the view was created NOT NULL! Even though neither NEWID() nor ROW_NUMBER() can ever generate NULL output, it seems you have to hold SQL Server's hand and reassure it by using ISNULL before it will believe itself.
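In view form, the pattern looks something like this (table and column names are invented for illustration):

CREATE VIEW dbo.MyView
AS
SELECT
    -- wrapping ROW_NUMBER() in ISNULL makes SQL Server derive the
    -- column as NOT NULL, so the model can infer a usable entity key
    ISNULL(ROW_NUMBER() OVER (ORDER BY t.SomeColumn), -1) AS ID,
    t.SomeColumn,
    t.OtherColumn
FROM dbo.SourceTable AS t;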
Having done this, deleted/recreated my model and service files, everything burst into glorious technicolour life without any manual additions of [Key()] properties or anything else. The problem had been with the database all along, and NOT with the Model/Service/Metadata definitions.
Hope this saves someone some time. Now all I need to do is work out why the original stored procedure method I started with two days ago doesn't work - but at least I now have a hint!
Ade

APEX - Creating a page with multiple forms linked to multiple related tables... that all submit with one button?

I have two tables in APEX that are linked by their primary key. One table (APEX_MAIN) holds the basic metadata of a document in our system and the other (APEX_DATES) holds important dates related to that document's processing.
For my team I have created a control panel where they can interact with all of this data. The issue is that right now they alter the information in APEX_MAIN on one page, then alter APEX_DATES on another. I would really like to have these forms on the same page and submit updates to their respective tables & rows with a single submit button. I have set this up currently using two different regions on the same page, but I am getting errors both with the initial fetching of the rows (whichever row is fetched second seems to work, but the page items in the form that was fetched first are empty) and with submitting (it gives an error about information in the DB having been altered since the update request was sent). Can anyone help me?
It is a limitation of the built-in Apex forms that you can only have one automated row fetch process per page, unfortunately. You can have more than one form region per page, but you have to code all the fetch and submit processing yourself if you do (not that difficult really, but you need to take care of optimistic locking etc. yourself too).
Splitting one table's form over several regions is perfectly possible, even using the built-in form functionality, because the region itself is just a layout object, it has no functionality associated with it.
Building forms manually is quite straightforward, but a bit more work.
Items
These should have the source set to "Static Text" rather than database column.
Buttons
You will need buttons like Create, Apply Changes and Delete that submit the page. These need unique request values so that you know which table is being processed, e.g. CREATE_EMP. You can make the buttons display conditionally, e.g. show Create only when the PK item is null.
Row Fetch Process
This will be a simple PL/SQL process like:
select ename, job, sal
into :p1_ename, :p1_job, :p1_sal
from emp
where empno = :p1_empno;
It will need to be conditional so that it only fires on entry to the form and not after every page load - otherwise if there are validation errors any edits will be lost. This can be controlled by a hidden item that is initially null but set to a non-null value on page load. Only fetch the row if the hidden item is null.
Submit Process(es)
You could have 3 separate processes for insert, update, delete associated with the buttons, or a single process that looks at the :request value to see what needs doing. Either way the processes will contain simple DML like:
insert into emp (empno, ename, job, sal)
values (:p1_empno, :p1_ename, :p1_job, :p1_sal);
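For the single-process variant, a sketch keyed off the :REQUEST value might look like this (the request names CREATE_EMP, SAVE_EMP and DELETE_EMP are assumed, matching the buttons above):

begin
    case :REQUEST
        when 'CREATE_EMP' then
            insert into emp (empno, ename, job, sal)
            values (:p1_empno, :p1_ename, :p1_job, :p1_sal);
        when 'SAVE_EMP' then
            update emp
            set ename = :p1_ename, job = :p1_job, sal = :p1_sal
            where empno = :p1_empno;
        when 'DELETE_EMP' then
            delete from emp
            where empno = :p1_empno;
    end case;
end;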
Optimistic Locking
I omitted this above for simplicity, but one thing the built-in forms do for you is handle "optimistic locking" to prevent two users updating the same record simultaneously, with one's update overwriting the other's. There are various methods you can use to do this. A common one is to use OWA_OPT_LOCK.CHECKSUM to compare the record as it was when selected with how it is at the point of committing the update.
In fetch process:
select ename, job, sal, owa_opt_lock.checksum('SCOTT','EMP',ROWID)
into :p1_ename, :p1_job, :p1_sal, :p1_checksum
from emp
where empno = :p1_empno;
In submit process for update:
update emp
set job = :p1_job, sal = :p1_sal
where empno = :p1_empno
and owa_opt_lock.checksum('SCOTT','EMP',ROWID) = :p1_checksum;
if sql%rowcount = 0 then
    -- handle the fact that the update failed, e.g. raise_application_error
end if;
Another, easier solution for the fetching part is to create a view with all the fields that you need.
The weak point is that you later need to alter the "submit" code to insert into the tables that are the source of the view's data.

iPhone Dev - Trying to access every row of a sqlite3 table sequentially

This is my first time using SQL at all, so this might sound basic. I'm making an iPhone app that creates and uses a sqlite3 database (I'm linking against libsqlite3.dylib and importing "sqlite3.h"). I've been able to correctly create the database and a table in it, but now I need to know the best way to get stuff back from it.
How would I go about retrieving all the information in the table? It's very important that I be able to access each row in the order that it is in the table. What I want to do (if this helps) is get all the info from the various fields in a single row, put all that into one object, and then store the object in an array, and then do the same for the next row, and the next, etc. At the end, I should have an array with the same number of elements as I have rows in my sql table. Thank you.
My SQL is rusty, but I think you can use SELECT * FROM myTable and then iterate through the results. You can also use a LIMIT/OFFSET(1) structure if you do not want to retrieve all elements at once from your table (for example, due to memory concerns).
(1) Note that this can perform unexpectedly badly, depending on your use case. Look here for more info...
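For example (myTable and the page size of 50 are illustrative):

-- all rows at once
SELECT * FROM myTable;

-- or one page at a time: rows 51-100
SELECT * FROM myTable LIMIT 50 OFFSET 50;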
How would I go about retrieving all the information in the table? It's very important that I be able to access each row in the order that it is in the table.
That is not how SQL works. Rows are not kept in the table in a specific order as far as SQL is concerned. The order of rows returned by a query is determined by the ORDER BY clause in the query, e.g. ORDER BY DateCreated, or ORDER BY Price.
But SQLite has a rowid virtual column that can be used for this purpose. It reflects the sequence in which the rows were inserted. Except that it might change with a VACUUM. If you make it an INTEGER PRIMARY KEY it should stay constant.
order by rowid
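A minimal sketch (the schema is invented for illustration):

-- declaring id as INTEGER PRIMARY KEY makes it an alias for rowid,
-- so the ordering survives a VACUUM
CREATE TABLE items (
    id    INTEGER PRIMARY KEY,
    name  TEXT,
    price REAL
);

-- rows come back in insertion order
SELECT id, name, price FROM items ORDER BY id;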

FileMaker: Best way to set a certain field in every related record

I have a FileMaker script which calculates a value. I have 1 record from table A from which a relation points to n records of table B. What is the best way to set B::Field to this value for each of these n related records?
Doing Set Field [B::Field; $Value] will only set the value of the first of the n related records. What works however is the following:
Go to Related Record [Show only related records; From table: "B"; Using layout: "B_layout" (B)]
Loop
Set Field [B::Field; $Value]
Go To Record/Request/Page [Next; Exit after last]
End Loop
Go to Layout [original layout]
Is there a better way to accomplish this? I dislike the fact that in order to set some value (model) programmatically (controller), I have to create a layout (view) and switch to it, even though the user is not supposed to notice anything like a changing view.
FileMaker always was primarily an end-user tool, so all its scripts are more like macros that repeat user actions. It is nowhere near as flexible as programmer-oriented environments. Going to another layout is, actually, a standard method for manipulating related values. You would have to do this anyway if you, say, wanted to duplicate a related record or print a report.
So:
Your script is quite good, except that you can use the Replace Field Contents script step instead of the loop. Also add a Freeze Window script step at the beginning; it will prevent the screen from updating.
If you have a portal to the related table, you may loop over portal rows.
The FileMaker plug-in API can execute SQL, and there are some plug-ins that expose this functionality. So if you really want, this is also an option.
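For example, with such a plug-in the whole update from the question could be a single statement (the a_id foreign key column is assumed; adjust to your actual relationship):

-- set the field on all records of B related to one record of A
UPDATE B SET "Field" = 'new value' WHERE a_id = 1;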
I myself would prefer the first variant.
Loop through a Portal of Related Records
Looping through a portal that has the related records and setting the field has a couple of advantages over either Replace Field Contents or the go-to-record/set-field loop.
You don't have to leave the layout. The portal can be hidden or placed off screen if it isn't already on the layout.
You can do it transactionally, i.e. you can make sure that either all the records get edited or none of them do. This is important since, in a multi-user networked solution, records may not always be editable. Neither Replace Field Contents nor looping through the records without a portal is transaction-safe.
Here is some info on FileMaker transactions.
You can loop through a portal using Go To Portal Row. Like so:
Go To Portal Row [First]
Loop
Set Field [B::Field; $Value]
Go To Portal Row [Next; Exit after last]
End Loop
It depends on what you're using the value for. If you need to hard-wire a certain field, then it doesn't sound like you've got a very normalised data structure. The simplest way would be a calculation in TableB instead of a stored field; or, if this is something that must be stored, could it be a lookup field instead, set on record creation?
What is the field in TableB being used for and how?