I have a view over a table. It turns out the table gets moved and an updated version of it is created each night. This ensures there is always a table of the expected name present in the database, but I cannot find a way to make my view continue to point to the current version of the table. Whichever table existed when the view was created is the one the view ends up pointing to, even after it moves and goes stale.
ViewA:
select a, b, c from todays_table;
todays_table stays current all day, then at night it gets renamed to todays_table01. ViewA now points to todays_table01, and a new table shows up called todays_table. Again, todays_table is current, but ViewA no longer is.
Is there a way to delay the table name resolution until the view is used? I haven't been able to get EXECUTE IMMEDIATE working for a SELECT statement. I think I could get a dynamic SQL statement working if I used a cursor, but I have never needed one before and I'm not sure if it's the right path. I read about AUTO_REVAL, but I believe this would only delay resolution until the first time the view was used, and it would still go stale that night.
I could, of course, stop using the view and just move the complex query into my program but there are many places it is needed so I would like to eliminate all other solutions before falling back to this.
It would be ideal to eliminate the temporary table and just have the master table receive updates throughout the day but this is beyond my comprehension as I know nothing about RPG II and OCL.
Thanks for reading.
Edit
Per @Mr. Llama's suggestion, I experimented with synonyms and aliases: pointing an alias at todays_table and then having my view select from the alias. Unfortunately, the view uses the alias to resolve the actual table name at creation time, so the view keeps following the original table when it is renamed to todays_table01, even though the alias itself continues to reference todays_table.
Edit 2
I'm accepting @mustaccio's answer because it does work and would be a reasonable approach to this problem if I could get the parameters passed where they need to go. My particular project requires flexibility, so I am actually going to jump on the nightly-process bandwagon and add a program to recreate my views after the process messes with their references, as @danny117 suggested.
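The rebuild program I have in mind is just re-issuing each view definition after the swap, something like this (a sketch only; CREATE OR REPLACE VIEW assumes a release that supports it, otherwise DROP and re-CREATE):

-- Re-run after the nightly table swap so the view points at the new todays_table.
CREATE OR REPLACE VIEW ViewA AS
SELECT a, b, c
FROM todays_table;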
Thanks to everyone who replied though, I learned a lot about how all of these pieces work together.
I think you might be able to achieve what you want by wrapping your view definition in a SQL table function, something like
CREATE FUNCTION insteadofview (<parameters>)
RETURNS TABLE (<columns>)
...
RETURN
SELECT <the rest of your view definition>
Depending on how you query your view, you will probably need to pass search criteria into the function as parameters, otherwise performance will be suboptimal because the function will have to return all rows from the query before search arguments can be applied.
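Filled in, it might look something like this (a sketch only - the parameter, column names, and types are assumptions based on the ViewA example):

-- The table name is resolved when the function is (re)prepared,
-- so it should follow whatever object is currently named todays_table.
CREATE FUNCTION insteadofview (p_a INT)
  RETURNS TABLE (a INT, b VARCHAR(50), c DATE)
  LANGUAGE SQL
  READS SQL DATA
  NO EXTERNAL ACTION
  NOT DETERMINISTIC
RETURN
  SELECT a, b, c
  FROM todays_table
  WHERE a = p_a;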
According to the manual, as you have noticed, views on a table that is renamed continue to point to the original table object. Routines, however, including table functions, will be invalidated and their plans prepared again the next time they are invoked, using the original source table name.
I have no way of testing this though.
Full syntax to create a table function.
Related
I have an Access project that is "linked" to a SQL database and now works like a charm. The last problem I solved was making sure any Boolean fields were turned into bits with a default of 0, and adding a TIMESTAMP column in SQL, due to the fact that Access is not so much of a genius with record locking (so I was told).
Now I am trying to connect directly to SQL Server by using an ADODB.Recordset and setting the form's Recordset to that recordset in the form's OnOpen event (the recordset runs a stored procedure in SQL Server). I get the data fine, but I get the locking error (write conflict) back.
The ADODB.Recordset's CursorLocation is set to adUseClient.
Obviously the form's RecordSource is no longer attached or assigned to the linked SQL table.
Am I missing something? Do I need to assign anything to the form's RecordSource?
The idea is to connect directly through stored procedures instead of linked tables.
Thanks so much for any help.
Adding the timestamp column is a VERY good idea. But do not confuse the name "timestamp" with an actual date/time column - the correct term is "rowversion".
This column has ZERO to do with locking. The REASON you want it added is that Access will use that column to determine when a record is dirty and, more important, to figure out whether the record has been changed. If you omit this column, then Access reverts to a column-by-column testing approach. Not only does this cause more network traffic, but worse, for real (floating point) values, rounding means you can get the dreaded "this record has been changed by another user" error even though it has not been changed.
So, do this for all tables - you even see the option included in SSMA (the Access to SQL migration wizard), and I believe it is a default.
So yes, it is VERY highly recommended that you include/add a rowversion column to all tables - this will help Access in a HUGE way.
And as noted, there is a long-standing issue with bit fields that don't have a default setting. You don't want to allow bit fields to be created with a null value, so ensure that there is a default value of 0 (you set this on the SQL Server side).
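To illustrate (a sketch only - dbo.Invoices and IsPaid are made-up names, so substitute your own tables and bit columns):

-- Add a rowversion column (the type is still spelled "timestamp" on older SQL Server versions).
ALTER TABLE dbo.Invoices ADD RowVer rowversion;

-- Clean up an existing bit column: back-fill NULLs, add a default of 0, then disallow NULLs.
UPDATE dbo.Invoices SET IsPaid = 0 WHERE IsPaid IS NULL;
ALTER TABLE dbo.Invoices ADD CONSTRAINT DF_Invoices_IsPaid DEFAULT 0 FOR IsPaid;
ALTER TABLE dbo.Invoices ALTER COLUMN IsPaid bit NOT NULL;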
Ok, now that we have the above cleared up?
It is not really clear why you want or need to adopt a stored procedure and code to load/fill up the form. You will not see any better performance than if you bind the form DIRECTLY to the linked table. Access will ONLY pull the records you tell that form to load.
So, bind the form directly to the linked table. Then you can launch/open the form to just one record with this:
DoCmd.OpenForm "frmInvoices", , , "InvoiceNum = 123"
Now, you would of course change the above "123" to some variable or some way to prompt the user for what invoice to work on.
The invoice form will then load that ONE record. So, even if the form's bound (linked) table has 2 million rows? Only ONE record will come down the network pipe. So, all that extra work of a stored procedure, creating a recordset and pulling it? You will gain ZERO in terms of performance, you are writing all kinds of code that simply isn't required, and you will not achieve anything superior to the above one line of code, which will automatically filter and ONLY pull down the record that meets the given criteria (in this example the invoice number).
So:
Yes, all tables need a PK
Yes, all tables should have a rowversion (but it is called a timestamp column - nothing to do with actual time).
Yes, all bit fields need a default of 0 - don't allow null values.
And last but not least?
I don't see any gains in performance, or even any advantage, in attempting to code your way through this by adopting stored procedures and introducing recordset code when none is required - and worse, it will not gain you any performance anyway.
This question is sort of a follow-up to this question, but it's a different enough topic that I feel it merits its own discussion. For a bit of background, you can refer to it.
As a part of a new file importing system, I am building an audit system based on this wiki page. But, one of the things that I would like to include in the audit trail is the file name of the file that the data came from (these files are archived for long term storage so if there are questions, I can always go back).
One way I could go is to create an import_batch record, record the name of the file there, and then just stamp records when they update - which is the path I'm going down. But it feels a bit clunky. I've been pondering the idea of having the audit trigger get the import_batch_id without it having to be in the NEW.* record. It seems to me there are at least a couple of ways I might be able to accomplish this.
I could have a function that creates a temp table and stores any information I want in it (such as the batch # or file name or whatever). This seems pretty clean and, as I understand it, it would only live for the duration of the transaction. Also, as I understand it, I wouldn't have to worry about naming collisions. Each transaction would have a temp table named "tmp_import_info".
If I only care about the import_batch_id (which comes from a sequence), I could probably just get the current value of the sequence. I'm not 100% sure how this would behave in a multi-user setting, though. I would think it would be possible for transaction #1 to create import_batch_id #222 and then transaction #2 to start and get #223, and then my audit trail would record the wrong data.
Are there other options that I'm not seeing here? Is there a way to add a transaction/session variable? Basically, something like pg_settings, but one that allows inserts, updates, and deletes of values.
It feels like the best option might be the temp table.
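For what it's worth, the temp-table variant I'm imagining would look roughly like this (a sketch; on commit drop is what would limit it to the transaction, and the sample values are made up):

-- created by the import function at the start of the transaction
create temp table tmp_import_info (
    import_batch_id integer,
    file_name       text
) on commit drop;

insert into tmp_import_info values (222, 'orders_2013-04-01.csv');

-- the audit trigger could then read it with something like:
-- select import_batch_id from tmp_import_info;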
The main good news for variant 2 is - quoting the manual here:
currval
Return the value most recently obtained by nextval for this sequence in the current session. (An error is reported if nextval has never been called for this sequence in this session.) Because this is returning a session-local value, it gives a predictable answer whether or not other sessions have executed nextval since the current session did.
Store your import file names in a table with a serial primary key. You can refer to your last value from the sequence with currval or lastval. Concurrent users cannot interfere. As long as you don't foil this path inside your own transaction yourself, this is safe.
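A minimal sketch of how that could fit together (the table, trigger function, and sequence names here are illustrative; a serial column's default sequence follows the <table>_<column>_seq pattern):

create table import_batch (
    import_batch_id serial primary key,
    file_name       text not null
);

-- the importing transaction registers the file first:
insert into import_batch (file_name) values ('orders_2013-04-01.csv');

-- later in the same session, the audit trigger can pick up that id safely:
create or replace function audit_stamp() returns trigger as $$
begin
    new.import_batch_id := currval('import_batch_import_batch_id_seq');
    return new;
end;
$$ language plpgsql;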
I'm starting to move more logic into the database, using triggers, views, functions, CTEs, etc. When plv8/json comes out for postgres, I can see myself putting lots of logic in there.
I'm having problems with the "standard" way of doing database migrations in Sequel and ActiveRecord. Both Sequel and ActiveRecord let you put arbitrary SQL code into timestamped files. When each file is run, a schema_versions table is updated with the filename (or the timestamp in the filename), which keeps a record of which migrations have been applied to the current database.
If a lot of coding is being done at the database level, that means that modifications to existing views, functions, etc follow the below pattern:
Migration 1 defines a function and a view that uses that function.
-- Migration 1
create function calculate(x int) returns int as $$
select x + 1;
$$ language sql;
create view foos as (
select something, calculate(something) from a_table
);
Requirements change, and I need to change the function's type. In Migration 2 I have to drop all objects that depend on calculate, and recreate them by copying their entire bodies -- even if there weren't any changes in most of the other code!
-- Migration 2
-- Have to drop all views and functions that depend on the
-- `calculate(int)` function.
drop view foos;
create or replace function calculate(x bigint) returns bigint as $$
select x + 1;
$$ language sql;
-- I could do `drop function calculate(int) cascade`,
-- but I might accidentally drop some objects that wouldn't get recreated below.
-- Now I have to recreate foos.
create view foos as (
select something, calculate(something) from a_table
);
If I'm building a system based on views and functions and triggers, my migrations would be filled with duplicated code, and it's difficult to find the latest version of the code. You might say "don't do that!", but for my purposes (e-commerce, shipping, transactions), I'm finding it's a lot easier and faster to have the database ensure the integrity of the data by doing the logic inside the database.
You can (of course) dump the current database schema (which includes all the code definitions), but I think you lose comments. And you wouldn't generally want to edit a giant file that contains the whole schema.
Any ideas on how to solve this problem?
My best idea is to have the sql code contained in its own canonical files (app/sql/orders/shipping.sql, app/sql/orders/creation.sql, etc.). Everyone develops directly against these. Whenever it's time for a release, you'd make a new migration file, look at all the code that changed since the previous release, figure out the dependency chain of the database objects that need to be dropped and recreated, and then copy the sql from the canonical sql files into a new sequel/activerecord migration file. But it's a pain. :/
Thoughts are very welcome. I hope I explained this well enough, I'm cutting back on my caffeine intake and I'm a little groggy atm.
Oh, I asked a similar question on Stack Overflow: Changing the type of a column used in other views. The answer was a function that let me pass in:
sql code to run
database views to drop and recreate
The function would retrieve the view definitions, drop the views, run the sql code, then recreate the view definitions (in reverse order of dropping). Perhaps a system of functions like this would help solve the problem of having to copy/paste sql code into the migration files.
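Something along these lines might work as a starting point (a rough sketch, not the exact function from that answer - the name and the details of saving/restoring the definitions are my own assumptions):

create or replace function run_with_views_dropped(sql_code text, view_names text[])
returns void as $$
declare
    defs text[] := '{}';
    v    text;
    i    int;
begin
    -- save each view's definition, then drop the views in the given order
    foreach v in array view_names loop
        defs := defs || format('create view %s as %s', v, pg_get_viewdef(v::regclass));
        execute format('drop view %s', v);
    end loop;

    -- run the migration code (e.g. changing a function's signature)
    execute sql_code;

    -- recreate the views in reverse order of dropping
    for i in reverse array_length(defs, 1) .. 1 loop
        execute defs[i];
    end loop;
end;
$$ language plpgsql;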
I'd recommend liquibase.
You create files which track the changes to your database, and these will be applied to the database in the correct migration order.
You might find Dave Wheeler's blog-posts interesting starting from here:
http://justatheory.com/computers/databases/simple-sql-change-management.html
My rate of database change is fairly small but I tend to be careless and make small changes to the schema directly, so I've had to come up with a fair bit of infrastructure to catch when I've done so. The basic elements are:
1. A makefile that can rebuild a development database from scratch
2. A set of schema files separated into "modules" (lookups_schema.sql, lookup_data.sql)
3. A set of update files that transition from one revision to the next
4. (I don't usually have the corresponding downgrade scripts; some people do)
5. A script to populate my database with a plausible amount of test data
6. Crucially, a test suite via pgTAP that checks my various functions, views and also the upgrade scripts. The upgrade tests can be run against a live database too.
If you have a separate instance of PostgreSQL set up with fsync turned off / on ramdisk etc then rebuilding the whole DB and populating it can take seconds (if you don't have too much test data).
Start with #1, #2, then add #6 (pgTAP is very cool), then the rest. The crucial thing is a test suite that checks your in-database code.
There are tools that try to automate schema changes for you, but they are really only good at adding a new column to a table and that sort of thing. Once you have code in your db then they're not much help.
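For anyone who hasn't met pgTAP, a test file can be as small as this (illustrative only - the calculate function and expected value are borrowed from the question):

begin;
select plan(2);

-- check the function exists with the expected signature, and that it behaves
select has_function('calculate', array['integer']);
select is(calculate(1), 2, 'calculate() adds one');

select * from finish();
rollback;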
Contrived example:
{
productName: 'Lost Series 67 DVD',
availableFrom: '19/May/2011',
availableTo: '19/Sep/2011'
}
View storeFront/currentlyAvailableProducts basically checks if current datetime is within availableFrom - availableTo and emits the doc.
I would like to force a view to regenerate at 1am every night, i.e. process/map all docs.
At first I had a simple Python script scheduled via crontab that touched each document, causing a new revision and the view to update. However, since CouchDB is append-only this wasn't very efficient - i.e. loads of unnecessary IO and disk space usage followed by compaction; very resource-wasteful on all fronts.
The second solution was to push the view definition again via couchapp push; however, this meant the view was unavailable (or partially unavailable) for several minutes, which was also unacceptable.
Are there any other solutions?
Will's answer is great; but just to get the consensus viewpoint represented here:
Keep one view, and query it differently every day
Determine your time-slice size, for example one day.
Next, for each document, you emit once for every time slice (day) that it is available. So if a document is available from 19 May to 21 May (inclusive), your emit keys would be:
"2011-05-19"
"2011-05-20"
"2011-05-21"
Once that is computed for every document, to find docs available on a certain day you just query the view for that day's key (e.g. today): ?key="2011-05-18".
You never have to update or re-run your views.
If you must never change your query URL for some reason, you might be able to use a _show function to 302 (temporary) redirect to today's correct query.
So your view is not being updated automatically I take it?
New and changed documents are not being added on the fly?
Oh I see, you're cheating. You're using "out of document" information (i.e. the current date) during view creation.
There's no view renaming, but if you were desperate you could use url rewriting.
Simply create a design document "each day": /db/_design/today05172011
Then use some url rewriting to change: GET /db/_design/today/_view/yourview
to: GET /db/_design/today05172011/_view/yourview
Create the view at 11pm server time (tweak it so that "now" is "tomorrow", or whatever).
Then add some more clean up code to later delete the older views.
This way your view builds each night as you like.
Obviously you'll need to front Couch with some other web server/proxy to pull this off.
It's elegant, and inelegant, at the same time.
If we have a view over another view and we drop the parent view, what happens to the other view? After we recreate the base view, will the second view be active again?
From my understanding of views in DB2 (and SQL in general), they act effectively as aliases to SQL select statements. I would expect that in this situation, your child view would still exist, but querying it when the parent was deleted would result in an error.
Since the definition of the child view is stored and remains static, recreating the parent view with the same object names should result in the child view returning the expected result again.
This would be very easy for you to verify yourself, by the way. :-)
Probably the same thing that happens in this question.
Michael Sharek's response is correct; the remaining views that depended on the view that was dropped will remain invalid (VALID='N' for that row in SYSCAT.VIEWS) even after the dropped view is replaced. You will need to reissue the create statements for any view in SYSCAT.VIEWS where VALID='N', but the good news is that you can overwrite an invalid view without dropping it.
What I typically do is use EXPORT to extract a copy of the TEXT column of every view in SYSCAT.VIEWS where VALID = 'N'. Then I execute the DDL statements in that file and the invalid views are all typically replaced on the first pass. However, if you have a more sophisticated hierarchy of interdependent views, you may need to run the file a couple more times. There's no need to filter out the DDL for the views that were made valid during a previous pass; those statements will be safely rejected with a duplicate object error.
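As a sketch of the first step (this queries the DB2 LUW catalog; adapt the output handling - e.g. the EXPORT options - to your environment):

-- List the definition of every invalid view so the DDL can be re-executed.
SELECT VIEWSCHEMA, VIEWNAME, TEXT
FROM SYSCAT.VIEWS
WHERE VALID = 'N';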