How to apply database updates after deployment?

I know this is an often-asked question on these boards, but usually it has been about how to manage the changes being made to the database before you even get around to deploying them. Mostly the answer has been to script the database and keep it under source control, and then save any additional updates as scripts under version control too (e.g. Tool to upgrade SQL Express database after deployment).
My question is: when is it best to apply the database updates, in the installer or when the new version first runs and connects to the database? Note that this is a WinApp deployed to customers, each of whom has their own database.

One thing to add to the script: Back up the database (or at least the tables you're changing!) before applying the changes.
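As a rough illustration of that advice (assuming SQL Server, since this is a WinApp, and made-up database and path names), the upgrade script or installer could start with something like:
    # minimal sketch: take a full backup before any upgrade scripts run
    # "AppDb" and the backup path are placeholders, not from the question
    sqlcmd -S "$SERVER" -d master -b \
      -Q "BACKUP DATABASE AppDb TO DISK = 'C:\Backups\AppDb_preupgrade.bak' WITH INIT"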

As a user I think I'd prefer it to happen during the install, and, going a little further, that the installer can roll itself back in the event of a failure. My thinking here is that if I am installing an update, I'd like to know when the update is done that it actually is done and has succeeded. I don't want a message coming up the next time I run the app informing me that something failed and I've potentially lost all my data. I would assume that a system admin would probably also appreciate install-time feedback (of course, that doesn't matter if your app isn't something that will be installed on a network). Also, as ראובן said, backing up the database would be a nice convenience.

You haven't said much about the architecture of the application, but since an installer is involved I assume it's a client/server application.
If you have a server installer, that's where you want to put it, since the database structure is only going to change once. Since the client installers are going to need to know about the change, it would be nice to have a way to detect the database version change, and for the old client to be able to download the client update from the server automatically and apply it.
If you only have a client installer, I still think it's better to put it there (maybe as a custom action that fires off the executable for updating the database). But it really isn't going to matter, because conceptually one installer or first-time user of the new version is going to have to fire off the changes to the database anyway. The database changes are going to put structural locks on the database so, in practical terms, everyone is going to have to be kicked off the system at that time for the database update to be applied.
Of course, this is all BS if it's not client-server.
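For the installer route, a custom action could fire off a small upgrade step along these lines. This is only a sketch under assumptions not in the question: a hypothetical SchemaVersion table in the database and upgrade scripts named with a numeric prefix (001_..., 002_...).
    # read the current schema version, then apply only the newer scripts, in order
    current=$(sqlcmd -S "$SERVER" -d AppDb -h -1 -W \
      -Q "SET NOCOUNT ON; SELECT MAX(Version) FROM SchemaVersion")
    for script in upgrades/*.sql; do
      n=$(basename "$script" | cut -d_ -f1)
      if [ "$n" -gt "$current" ]; then
        sqlcmd -S "$SERVER" -d AppDb -b -i "$script" || exit 1   # stop on the first failure
      fi
    done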

My Main branch in TFS just disappeared - why?

Our Main branch was apparently just deleted and there's no record of why. (The branch still appears in Source Control Explorer - When I view the history of the branch it's empty). When I get latest on the branch it deletes everything locally. We have numerous children branches that all appear to be fine, but Main is now empty with no record of how/why. Anybody have any idea how we can figure out what happened and recover it? We have a child branch that should be a duplicate so we should be OK, but we'd really like to figure out what happened!
What may have happened
There are a few things I can think of. The most logical in this case is that someone issued tf destroy $/project/Branch/* /recursive, which would have the observed effect.
It could also be that someone renamed the branch; that would not be visible in the history per se, unless you turn on the "Show Deleted Items" option in the Team Foundation Source Control settings.
Your Application Tier's version control cache may have become corrupted. The chance of this happening is very slim, but it may have caused this. Even if this seems to be the case, ensure you have a good backup of your databases; if it isn't the cache, you're going to need the database backup, and the older it is, the less likely it is that data marked for deletion will still be there.
How can you find out what happened?
Check the tbl_command in the Project Collection Database or access the hidden _oi activity log page on the web access server. You may be able to find the command that caused the deletion.
If that doesn't tell you, analyze the transaction logs of the SQL Server (if your server is configured to keep these).
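If you want to query the activity log directly, something along these lines may work. Treat the table and column names as assumptions: the schema of tbl_Command is undocumented and differs between TFS versions, and the collection database name here is a placeholder.
    # look for recent destroy-type commands in the collection's activity log
    sqlcmd -S TFSSQLSERVER -d Tfs_DefaultCollection \
      -Q "SELECT TOP 50 StartTime, IdentityName, Command
          FROM tbl_Command
          WHERE Command LIKE '%Destroy%'
          ORDER BY StartTime DESC"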
What to do now?!
Make a backup of your TFS server or secure the ones you have if you haven't done so
If the version control cache is the culprit, clearing it (on the Application Tier machines) may solve your problem; the cache location is shown in the TFS Administration Console.
The best way to go about this is to stop the TFS services temporarily and then delete the contents of that folder.
There seem to be a few ways out:
Forget about it, take the contents of the most up-to-date branch and use that to repopulate the missing data. Just add them to the empty folder, check them in and then re-merge all other branches and resolve all conflicts.
Pro: Fast
Con: you lose history, and resolving conflicts will be a horrible task.
Restore the project collection database to a previous point in time (warning! may require restore of all project collections to a previous point in time)
Pro: You get all your history back
Con: You lose changes made since the last known good backup, it takes a lot of work, and it will impact all projects in the same collection, possibly all projects on the same server.
Restore the whole server to temporary server and restore the collection with the missing data to the last known good configuration. Use a tool like OpsHub or Team Foundation Migration Toolkit to replay the changes since the disaster.
Pro: You get back to the most up to date point in time
Con: Takes a lot of time and expertise in TFS Migration
Restore the collection database and use the transaction logs to replay as many of the changes to the collection as possible, then skip the transactions that perform the destroy (a minimal restore sketch follows this list). Be careful though: usually the destroy action marks files as deleted, but a job does the actual deletion in the background.
Pro: You get back to the most up to date point in time
Con: Takes a lot of time and expertise in SQL
Contact Microsoft Support and get a Field expert in the house. They may be able to undo the deletion if it was done without immediately triggering the cleanup job.
Pro: You will get back into the best state possible
Con: it will be costly
Whatever you do, make sure you have a backup of your current situation, that allows you to try different tactics, should your first attempts fail.
Consider splitting the project collection to allow other projects to continue working. You will end up in a situation where this one project ends up in an isolated Project Collection on its own, but it will allow you to move forward quickly.
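For the restore-and-replay option above, a point-in-time restore of the collection database might look roughly like this (the collection first has to be stopped or detached; server, database, backup names, and the timestamp are placeholders):
    # restore the full backup, then roll the log forward and stop just before the destroy
    sqlcmd -S TFSSQLSERVER -Q "
      RESTORE DATABASE Tfs_DefaultCollection
        FROM DISK = 'D:\Backups\Tfs_DefaultCollection_full.bak' WITH NORECOVERY, REPLACE;
      RESTORE LOG Tfs_DefaultCollection
        FROM DISK = 'D:\Backups\Tfs_DefaultCollection_log.trn'
        WITH STOPAT = '2013-05-21 09:30:00', RECOVERY;"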
OK - this is one for the record books, because inexplicably the project reappeared later in the day. All of its history is back as well. I would have thought that perhaps the DBAs here did a database restore, but that's not possible since all of the check-ins that have been happening all day are still there.
So if this happens to you in the future, just cross your fingers and wait a few hours!
p.s. I did look in the SQL logs but couldn't find anything. Bizarre!

Liferay deploy backup with changes in database

When I modify or create tables in Liferay 6.1 and deploy to the production server, Liferay automatically makes a backup of each table.
This backup takes a long time when a table has more than 10k records, and an eternity when it has 100k, even though the table hasn't been modified.
What can I do to optimize deployment to the server?
Many thanks in advance,
At the moment I think only two options are available:
(easy way) Set "build.auto.upgrade=false" in /WEB-INF/src/service.properties to avoid any automatic updates, and perform the db changes (if any) manually (see the sketch after these two options).
(hard way) Rewrite Liferay's ServiceBuilder so that it performs an update only on those tables that were changed. This will require EXT development, as it is a very core change, and for every next Liferay version you will need to carefully review and upgrade it.
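For the easy way, the property just needs to end up in the plugin's service.properties before you build and deploy; a trivial sketch (the docroot path assumes a Plugins SDK layout and may differ in your project):
    # disable ServiceBuilder's automatic table upgrade/backup on deploy
    echo "build.auto.upgrade=false" >> docroot/WEB-INF/src/service.properties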
Liferay automatically makes a backup? Where to? This is new to me!
Also, you're describing a "no go" operation: you don't modify any tables in the database. Period. That's what the API is there for. If you do, prepare for disaster, sooner (if you're lucky) or later (according to Murphy, at the moment you've forgotten that you changed things manually; then you'll blame Liferay for the failure that you caused by manipulating the database).
Do you have your own backup routine implemented that runs on every server restart? This is the only thing that I can imagine to happen here - in that case you'll need to modify your backup strategy. Or the database that you use - maybe a transaction log backup makes more sense than duplicating the table content into the same database...

MongoDB - Safeguard against .remove() entire database?

I'm using MongoDB as my database, and as a first-time back-end developer the ease with which I can delete an entire database/collection really bothers me.
Simply typing db.collection.remove() removes all records from that collection!
I know that an effective backup strategy should render this a non-issue, but I occasionally do run .remove() on some collections, and I'd hate to type in the wrong collection name by accident and (a) have to go through a backup restore, and (b) lose whatever data I had gathered between the backup and the restore, especially as my app gathers a lot of user data.
Is there any 'safeguard' I can set up my database to use, even if it's just a warning/confirmation that says
"Yo, are you sure you want to remove everything from <collectionname>? Choose: Yes/No"
User roles won't fix your problem. If your account has permissions to delete one user, you could accidentally delete them all. If your account has permissions to update an attribute for one user, you could accidentally update all of your users.
There's a simple fix for this however.
Step 0: Backup your database. And test your backups regularly. And make sure you get alerted if the backup did not run, or errored. Replica sets are not backups. I know this is obvious, but evidently it's not obvious to everybody.
Step 1: Write a web admin GUI interface for your database. This will only take a day or two -- and it should be simple enough that a secretary or intern could use it without fear for your data. (If you think this will take a long time, find a framework with more bells and whistles. Your admin console doesn't even need to be written in the same language as your app.)
Step 2: Data migrations (maintenance transformations of your database) should always be run from scripts checked into source control and tested on non-prod beforehand. The script could be as simple as mongo -e "foo.update(blah)", but you should run it as a script to avoid cut-n-paste errors. Ideally, you would even have a checklist for all migrations. (Check that you have a recent backup. Check the database log and system load beforehand. Write a before and after query that will tell you if the migration was successful...) A minimal sketch follows these steps.
Step 3: You now no longer need to use the production Mongo console. So don't. It's a useful tool for development, but that's only needed on local development databases.
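As a rough illustration of such a checked-in migration run (database, collection, field, and script names here are invented, not from the question):
    # step 0 of the checklist: a fresh backup right before the migration
    mongodump --db myapp --out "backup-$(date +%F)"
    # run the migration itself from a script that lives in source control
    mongo myapp migrations/2013-05-add-status-field.js
    # "after" check: should print 0 if every user document got the new field
    mongo myapp --eval 'print(db.users.count({status: {$exists: false}}))'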
The above-mentioned Roles might be useful for read-only queries. But you can already do that against the non-master replica set member.
tl;dr: You can go pretty far using cowboy admin techniques, but eventually you're going to figure out that it's better (and not much more work) to automate everything.
There is nothing you can do in the current version to provide this functionality.
In a future version, when user-defined roles are available, you could define a role which allows insert() and update() but not remove() or drop() etc., and therefore make yourself log in as a different, higher-privileged user for destructive work, but that's not available in the current (2.4) version.

What's the best way to update code remotely?

For example, I have a website with various types of information. If that goes down, the users have a copy of the same website on a local web server (Apache or IIS) on the client machine. They use this local version until the Internet version returns. They can have no downtime, in other words.
The problem is that over time the Internet version will change while the client versions will remain the same unless I touch each client's machine to make the updates. I don't want to do that.
Is there a good way to keep my clients up to date so that when I make a change on the server, the clients get a copy they can run locally if need be?
Thank you.
EDIT: do you think maybe using SVN and timely running of the update by the clients would work?
EDIT: they'll never ever submit anything. It's just so I don't have to update the client by hand, manually going to the machine. They're webpages that run in case the main server is down.
I would go for Git over SVN because of its distributed nature: it gives you multiple copies of the code. Use it along with this comment's solution:
Making git auto-commit
to auto-commit.
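On the client side, the simplest complement is a scheduled pull. A minimal sketch, assuming the site lives in a git repository the clients can reach (repository location and branch name are placeholders):
    # run this from cron (e.g. every 15 minutes) so the local copy follows the server
    cd /var/www/localcopy && git pull --ff-only origin master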
Why not use something like HTTrack to make local copies of your actual Internet site on each machine, rather than trying to do a separate deployment? That way you'll automatically stay in sync.
This has the advantage that if, at some point, part of your website is updated dynamically from a database, the user will still be able to have a static copy of the resulting site that is up-to-date.
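A minimal sketch of that approach (URL and output path are placeholders); re-running the same command against the existing folder should refresh the mirror:
    # mirror the live site into a local folder the client's web server can serve
    httrack "http://www.example.com/" -O /var/www/localcopy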
There are tools like rsync which you can use periodically to sync the changes.
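For the rsync route, a nightly cron job on each client along these lines would do it (host and paths are placeholders):
    # pull the current site from the server, deleting files that were removed there
    rsync -az --delete webserver.example.com:/var/www/site/ /var/www/localcopy/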

How to limit the effect of client modifications to production systems

Our shop has developed a few WEB/SMS/DB solutions for a dozen client installations. The applications have some real-time performance requirements and are just good enough to function properly. The problem is that the clients (owners of the production servers) are using the same server/database for customizations that are causing problems with the performance of the applications that we created and deployed.
A few examples of clients' customizations:
Adding large tables with many text datatypes for the columns that get cast to other data types in the queries
No primary keys, indexes, or FK constraints
Use of external scripts that use count(*) from table where id = x, in a loop from the script, to determine how to construct more queries later in the same script. (no bulk actions that the planner can optimize or just do everything in a single pass)
All new code files on the server are created/owned by root, with 0777 permissions
The clients don't take suggestions/criticism well. If we just go ahead and try to port/change the scripts ourselves, the old code can come back, clobbering any changes that we make! Or, with our limited knowledge of their use cases, we break functionality while trying to optimize their changes.
My question is this: how can we limit the resources available to queries/applications other than the ones we create and deploy? Are there any pragmatic options in scenarios like this? We prided ourselves on having an OSS solution, but it seems that it's become a liability.
We use PG 8.3 running on a range of Linux distros. The clients prefer PHP, but shell scripts, Perl, Python, and PL/pgSQL are all used on the system in one form or another.
This problem started about two minutes after the first client was given full access to the first computer, and it hasn't gone away since. Any time someone whose priority is getting business-oriented work done quickly gets involved, they will be sloppy about it and screw things up for everyone. That's just how things work, because proper design and implementation are harder than cheap hacks. You're not going to solve this problem; all you can do is figure out how to make it easier for the client to work with you than against you. If you do it right, it will look like excellent service rather than nagging.
First off, the database side. There's no way to control query resources in PostgreSQL. The main difficulty is that tools like "nice" control CPU usage, but if the database doesn't fit in RAM it may very well be I/O usage that is killing you. See this developer message summarizing the issues here.
Now, if in fact it's CPU the clients are burning through, you can use two techniques to improve that situation:
Install a C function that changes the process priority (example 1, example 2) and make sure that whenever they run something it gets called first (maybe put it into their psql config file; there are other ways).
Write a script that looks for postmaster processes spawned by their userid and renices them; make it run often from cron or as a daemon.
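A minimal sketch of the renice approach, run as root from cron; the database role 'client_user' and database name are placeholders, and on 8.3 the pid column in pg_stat_activity is procpid (it became pid in later releases):
    # find the backends belonging to the client's database role and lower their priority
    psql -U postgres -d yourdb -At \
      -c "SELECT procpid FROM pg_stat_activity WHERE usename = 'client_user'" |
    while read pid; do
      renice +10 -p "$pid"
    done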
It sounds like your problem isn't the particular query processes they're running, but rather other modifications they're making to the larger structure. There's only one way to cope with that: you have to treat the client like they're an intruder and use the approaches of that portion of the computer security field to detect when they screw things up. Seriously! Install an intrusion detection system like Tripwire on the server (there are better tools, that's just the classic example), and have it alert you when they touch anything. New file that's 0777? Should jump right out of a proper IDS report.
On the database side, you can't directly detect the database being modified usefully. You should do a pg_dump of the schema every day into a file (pg_dumpall -g and pg_dump -s), then diff that against the last one you delivered, and again alert yourself when it's changed. If you manage this well, the contact with the client turns into "we noticed you changed on the server...what is it you're trying to accomplish with that?", which makes you look like you're really paying attention to them. That can turn into a sales opportunity, and they may stop fiddling with things as much just knowing you're going to catch it immediately.
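A minimal sketch of that daily snapshot-and-diff, run from cron (paths, database name, and the mail address are placeholders):
    # dump globals plus the schema (no data), then mail yourself any drift from the baseline
    today=/var/backups/schema-$(date +%F).sql
    pg_dumpall -g > "$today"
    pg_dump -s yourdb >> "$today"
    if ! diff -u /var/backups/schema-baseline.sql "$today" > /tmp/schema.diff; then
      mail -s "schema drift on $(hostname)" you@example.com < /tmp/schema.diff
    fi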
The other thing you should start doing immediately is install as much version control software as you can on each client box. You should be able to log in to each system, run the appropriate status/diff tool for the install, and see what's changed. Get that mailed to you regularly too. Again, this works best if combined with something that dumps the schema as a component of what it manages. Not enough people use serious version control approaches on the code that lives in the database.
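And a similarly small sketch for the "mail me what changed" part, assuming each install directory is already a git checkout (path and address are placeholders):
    # report any uncommitted changes in the deployed tree; stay quiet when nothing changed
    cd /opt/yourapp || exit 1
    if [ -n "$(git status --porcelain)" ]; then
      { git status --short; git diff; } | mail -s "changes on $(hostname)" you@example.com
    fi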
That's the main set of technical approaches useful here. The rest of what you've got is a classic consulting client management problem that's far more of a people problem than a computer one. Cheer up, it could be worse--FSM help you if you give them ODBC access and they discover they can write their own queries in Access or something simple like that.