How to avoid having views "marked inoperative"? - db2

How can I modify tables/views that impact other views without having those dependent views "marked inoperative"?
We're running DB2 9.5 LUW. I've read Leons Petrazickis' blog post Find a list of views marked inoperative, where he says:
There are also ways to avoid it using transactions, CREATE OR REPLACE statements, and other measures.
Since we can't take advantage of the new features in 9.7, I need someone to elaborate on these other ways that Leons mentions. An example that runs in IBM Data Studio would be great.

The "CREATE OR REPLACE" functionality was added in DB2 9.7. Prior to that, the only way to avoid having views marked inoperative was to drop the views before making the changes to the objects underneath them, and recreate the views afterward.
Or, avoid making changes to the views' dependent objects. :-)
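A sketch of the drop-and-recreate approach; all object names here are hypothetical, and running it with autocommit off keeps the whole change inside one transaction so other sessions never see the views missing:

```sql
-- Hypothetical objects: view v_orders_summary depends on view v_orders.
-- Drop the dependent view first so it is never marked inoperative.
DROP VIEW v_orders_summary;
DROP VIEW v_orders;

-- Make the underlying change (here: redefine the base view)
CREATE VIEW v_orders AS
  SELECT order_id, customer_id, order_date, status
  FROM orders;

-- Recreate the dependent view against the changed definition
CREATE VIEW v_orders_summary AS
  SELECT customer_id, COUNT(*) AS order_count
  FROM v_orders
  GROUP BY customer_id;

COMMIT;
```

Since DDL is transactional in DB2, a failure before the COMMIT rolls the whole sequence back and leaves the original views in place.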


Fetch data from PostgreSQL into word-processing software (Word, InDesign)

We store and manipulate geodata in PostgreSQL (PostGIS) and use QGIS to view and edit it.
My team and I need to build reports around that data. The reports are quite text-heavy and need some good visual formatting. My colleagues are used to MS Word or Adobe InDesign, so we agreed to use one of these two programs to build our final reports.
Our documents will include around 10 InDesign/MS Word tables. The data for these tables lives in dedicated views in the PG database. The tables inside the reports should grow with the underlying data when I do my editing, so that everyone can focus on "their" work and to reduce data redundancy.
What would be the best way to do this?
I looked at some (ODBC) catalog plugins for InDesign, but I'm not really sure if this is the way to go.
I'm no developer. I need to learn/look into the respective technologies involved anyway, so I wanted to define my tech stack first (maybe hire someone to do it properly).
Maybe someone can give me a general idea / point me in the right direction with this.
Thanks!

How to trigger something in Snowflake

I want to trigger some SQL code just before or just after an update to a table.
It seems like Triggers are not supported by Snowflake.
Any workaround will be appreciated.
Regards,
Neeraj
Triggers are indeed not supported by Snowflake, but you can simulate the behaviour by using a combination of streams and tasks:
https://docs.snowflake.net/manuals/user-guide/streams.html - streams are used to track the tables for changes
https://docs.snowflake.net/manuals/user-guide/tasks-intro.html - tasks are used to execute stored procedures
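A minimal sketch of that streams-plus-tasks combination; the warehouse, source table, and audit table names are all made up:

```sql
-- Track row changes on the source table
CREATE OR REPLACE STREAM orders_stream ON TABLE orders;

-- Check every minute, but only run when the stream actually has rows
CREATE OR REPLACE TASK orders_task
  WAREHOUSE = my_wh
  SCHEDULE = '1 minute'
WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
AS
  INSERT INTO orders_audit (order_id, action, changed_at)
  SELECT order_id, METADATA$ACTION, CURRENT_TIMESTAMP()
  FROM orders_stream;

-- Tasks are created suspended; resume to start the schedule
ALTER TASK orders_task RESUME;
```

Note this only approximates an AFTER trigger: the "trigger" code runs on the task's schedule rather than synchronously with the update.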
Snowflake appears to offer some really cool features, so it's unfortunate that a basic tool of every DBA I know is missing. Triggers are great for enforcing business rules upon the application developers.
I've been thinking about what might help cover all use cases, and I'm currently leaning toward moving all insert/update/delete processes into stored procedures (best practice anyway?), then building the "trigger" activity directly into the SP. I dislike that this buries the events, but I think it will get the job done. I also dislike that the number of SPs could grow out of hand.
Now I need to dig deeper to make sure column-level change detection is an option, and/or whether I can chain SPs for code reuse. If chaining works, the trigger logic can go in its own SP.
A workaround at best, but you need to think outside the box.

CQRS with Legacy Systems

I'm looking to convert a relatively new web-based application with a clear domain model over to more of a CQRS style system. My new application is essentially an enhanced replacement of an older existing system.
The existing systems in my organization share a set of common databases, which are updated by an untold number of applications (developed via the Chaos Method) that exist in silos throughout the company. (As it stands, I believe that no single person in the company can identify them all.)
My question is therefore about the read model(s) for my application. Since various status changes, general user data, etc. are updated by other applications outside my control, what's the best way to handle building the read models in such a way that I can deal with outside updates, but still keep things relatively simple?
I've considered the following so far:
Create Views in the database for read models, that read all tables, legacy and new
Add triggers to existing tables to update new read model tables
Add some code to the database (CLR Stored proc/etc [sql server]) to update an outside datastore for read models
Abandon hope
What is the general consensus on how to approach this? Is it folly to think I can bring order to a legacy system without fully rewriting everything from scratch?
I've used option #1 with success. Creating views to denormalize the data into a read model is a viable option, depending on the complexity of the write database(s). Meaning, if it is relatively straightforward joins that most developers can understand, then I would take a closer look to see if it's viable for you. I would be careful with having too much complexity in these views.
Another thing to consider is periodic polling to build and update the read model, similar to a traditional reporting database. Although not optimal in comparison to a notification, depending on how stale your read model can be, this might also be an option to look at.
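A sketch of option #1, a denormalizing view over the write-side tables; all table and column names here are hypothetical:

```sql
-- Read model as a plain view joining the normalized write-side tables
CREATE VIEW customer_order_read_model AS
SELECT c.customer_id,
       c.name,
       o.order_id,
       o.status,          -- may be updated by legacy applications
       o.updated_at
FROM customers c
JOIN orders o ON o.customer_id = c.customer_id;
```

Because it's a plain view, writes made by the legacy applications are visible on the very next read with no synchronization code, at the cost of running the joins on every query.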
I was once in a similar situation; the following steps were how I did it:
To improve the legacy system and achieve a cleaner code base, the key is to take over the write responsibility. But don't be too ambitious, as this may introduce interface/contract changes that make the final deployment risky.
1. If the writes are fired through anything except direct SQL updates, keep them as backward compatible as you can. Treat them as adapters/clients of your newly developed command handlers.
2. Some of the writes are direct SQL updates but out of your control. Ask the team in charge whether they can change to your new interface/contract. If not, see step 3.
3. Ask whether they can tolerate eventual consistency and are willing to replace the SQL updates with database procedures. If yes, put all the SQL updates in the procedures, schedule a deployment, and see step 4. If not, maybe you should include them in your refactoring.
4. Modify the procedures, replacing the SQL updates with event inserts, and develop a backend job to roll through the events and publish them. Have your new application subscribe to these events and fire commands to your command handlers.
5. Emit events from your command handlers and use them to update the tables that other applications used to consume.
6. Move on to the next part of the legacy system.
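The event-insert approach above can be sketched as an outbox-style table that the procedures write to instead of updating the target tables directly; all names and columns here are made up:

```sql
-- Events table that the backend job polls and publishes from
CREATE TABLE outbox_events (
  event_id    BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  event_type  VARCHAR(100)  NOT NULL,
  payload     VARCHAR(4000) NOT NULL,   -- e.g. a JSON document
  published   SMALLINT      NOT NULL DEFAULT 0,
  created_at  TIMESTAMP     NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Inside the procedure that replaces a direct update:
INSERT INTO outbox_events (event_type, payload)
VALUES ('OrderShipped', '{"order_id": 42}');
```

The backend job then selects unpublished rows in event_id order, publishes them, and flips the published flag, which gives the eventual consistency mentioned in step 3.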
If we had an infinitely powerful server, we wouldn't bother with view models and would instead just read from the basic entity tables. View models are meant to improve performance by preparing and maintaining an appropriate dataset for display. If you use a database view as a view model, you haven't really gained much over an ad-hoc query (if you ignore the preplanning that the SQL parser can do for a view).
I've had success with a solution that's less intrusive than @Hippoom's solution, but more responsive than @Derek's. If you have access to the legacy system and can make minor code changes, you can add an asynchronous queue write that generates an event in a queueing system (RabbitMQ, Kafka, or whatever) in the legacy system's repositories, or wherever data is persisted. Making these writes async should not introduce any significant performance cost, and should the queue write fail, it will not affect the legacy system. This change is also fairly easy to get through QA.
Then write an event-driven piece that updates your read models. During the legacy system update phase (which can take a while), or if you only have access to some of the legacy systems that write to these databases, you can have a small utility that puts a new "UpdateViewModel" event in the queue every couple of minutes. That way you get timely events when the legacy systems save something significant, but are also covered for the systems you are not able to update.

Managing Large Database Entity Models

I would like to hear how others are (or are not) working effectively with the Visual Studio Entity Designer when many database tables exist. It seems to me that navigating the Designer to find what you are looking for is tough enough with just a few tables, but what about a database with, say, 100 to 200 tables? When a table change is made at the database level, how is the model updated? Does it overwrite any manual changes you have made to the model? How would you quickly find an entity in the designer to make or inspect a change? It seems unrealistic to be scrolling around looking for a specific entity.
Thanks for your feedback!
I would like to say we have a fairly large number of tables (52), and I don't have major issues navigating the model; I would be more concerned with EF internally not being able to scale.
I don't use the Designer, I use the Model Browser. There is a feature in the Model Browser called "Show in Designer" which can take you to any table you are searching for.
In all honesty, I use the Model Browser more than I do the Designer. From the browser you can set pretty much anything (table mappings/table properties).
When it comes time to update, it does not affect your conceptual model (any manual changes), unless of course you have removed something from your physical model and it can no longer map.

How to move to a new version control system

My employer has tasked me with becoming our new version control admin. We are currently using two different version control systems for two different code bases. The code/functionality in the two code bases overlap in some areas. We will be moving both code bases to a new version control system.
I am soliciting ideas on how to do this. I suppose we could add the two code bases to the new version control as siblings in the new depot's hierarchy, then remove redundancy by gradually promoting code to a third sibling in the hierarchy, ultimately working out of the third sibling exclusively. However, this is just a 30,000 ft view of the problem, not a solution. Any ideas, gotchas, procedures to avoid catastrophe?
Thanks
Git can be set up in such a way that svn, git, and cvs clients can all connect. This way you can move over to a central Git repo, but people who are still used to svn can continue to use it.
It sounds like, in your specific situation with two code bases you want to combine, you should make three repositories and start combining the first two into the third one.
My advice is to experiment with a few "test" migrations. See how it goes and adjust your scripts as necessary.
Then once you're set, you can execute it for real and you're done. Archive your old repos too.
Another place you might find inspiration is OpenOffice.org. They are in the middle of going from SVN to Mercurial. They probably have published information on their migration work.
Issues to consider:
How much history will you migrate?
How long will you need to continue using the old systems for patch-work, etc?
How long will you need to keep the old systems around to access historical information?
Does the new target VCS provide an automatic or quasi-automatic migration method from either of the two old VCS?
How will you reconcile branching systems in the two old VCS with the model used in the new VCS?
Will tagging work properly?
Can tags be transferred (which will not matter if you are not importing much history)?
What access controls are applied to the old VCS that must be reproduced in the new?
What access controls are to be applied to the new VCS?
This is at least a starting point - I've no doubt forgotten many important topics.