Restoring an Ingres database from one system to another

We want to restore a database that we received from a client as a backup into our development environment, but we are unable to restore it successfully. Can anyone walk us through the steps involved in this restore process? Thanks in advance.

Vijay, if you plan to make a new database out of checkpoints (plus journals) made on another (physical) server, then I must disappoint you: it is going to be a painful process. Follow these instructions: http://docs.actian.com/ingres/10.0/migration-guide/1375-upgrading-using-upgradedb . The process is basically the same as upgradedb. However, if the architecture of the development server is different (say the backup was made on a 32-bit system and the development machine is, say, POWER6-based), then it is impossible to make your development copy of the database using this method.
On top of all this, this method of restoring backups is not officially supported by Actian.
My recommendation is to use the 'unloaddb' tool on the production server, export the database into a directory, SCP that directory to your development server, and then use the generated 'copy.in' file to populate the development database. NOTE: this is the method supported by Actian, and you can find more details on this page: http://docs.actian.com/ingres/10.0/migration-guide/1610-how-you-perform-an-upgrade-using-unloadreload . This is the preferred way of migrating databases across platforms.
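A rough sketch of that flow, assuming the database is named mydb (the generated script names, unload.ing/reload.ing here, vary by platform and Ingres version, so treat this as an outline rather than a recipe):

    # On the production server: export into an empty directory
    mkdir /tmp/mydb_unload && cd /tmp/mydb_unload
    unloaddb mydb
    sh unload.ing            # runs the generated copy.out scripts

    # Ship the directory to the development server
    scp -r /tmp/mydb_unload devserver:/tmp/

    # On the development server: create an empty database and reload it
    createdb mydb
    cd /tmp/mydb_unload
    sh reload.ing            # runs the generated copy.in scripts

Because everything travels as portable copy files, this works across the 32/64-bit and architecture differences that the checkpoint method cannot handle.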

It really depends on how the database has been backed up and provided to you.
In Ingres there is a snapshot (called a checkpoint) that can be restored into a congruent environment, but that can be quite involved.
There is also the output of the copydb and unloaddb commands, which can be reloaded into another database. Things to look out for here are changes in machine architecture, and paths that may have been embedded in the generated scripts.
Do you know how the database was backed up?

Related

Running a standalone postgres instance alongside existing servers

I'm developing an application that heavily relies on Postgres. Currently, users are required to have Postgres set up and place database credentials in a config file that my app reads.
There are a couple of issues with this approach that I am not entirely happy with:
I can't dictate the version of Postgres to use (they may need a certain version for some other task)
Users have to install a separate piece of software before they can use my application.
They would need to provide database credentials to my app (which they may revoke later)
Potential issues with database name conflicts
Is it possible to run a standalone version of Postgres, i.e. running it from a folder, potentially alongside another version of Postgres on the same machine?
My current solution is to run it as a Docker instance; this avoids any conflicts with data and permissions, but does not solve the problem of users needing to install a separate piece of software first.
The application is completely standalone and single-user, and there is never a need for a user to interact with the database directly; even actions like taking backups are handled by the app, which internally calls pg_dump.
(I would have thought something like this would be possible for the people who develop Postgres - I can't imagine they constantly install and remove the software completely from their system in order to test things.)
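One approach that gets close to this (a minimal sketch, assuming you ship the Postgres binaries in a pgsql/ folder next to your app; the data directory, user name, and port are placeholders) is to initialize and run a private instance out of a local folder on a non-default port:

    # One-time setup: initialize a private data directory next to the app
    ./pgsql/bin/initdb -D ./pgdata -U appuser

    # Start on a non-default port so it won't clash with any system-wide
    # Postgres; keep the unix socket in /tmp rather than the system default
    ./pgsql/bin/pg_ctl -D ./pgdata -o "-p 54321 -k /tmp" -w start

    # Connect through the same socket directory and port
    ./pgsql/bin/psql -h /tmp -p 54321 -U appuser postgres

    # Stop the instance when the application exits
    ./pgsql/bin/pg_ctl -D ./pgdata -w stop

This is essentially what the Postgres regression tests do: each run initializes a throwaway cluster in a temporary directory and starts it on a free port, so multiple instances and versions can coexist on one machine.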

WSO2 Identity Manager 5.6: backup and restore procedures

Good morning,
I looked in the forum here and could not find the answer. If I overlooked it, I apologize...
I just joined an existing project team using WSO2 Identity Manager 5.6 and API Gateway.
I understand that WSO2 Identity Manager is made up of several components, among which are OpenLDAP (which contains a Berkeley DB database) and a PostgreSQL database.
The current backup/restore procedure simply 'tar's the whole directory that contains all files related to WSO2 (including the directories that contain database files), without stopping WSO2.
I'm a bit doubtful about this type of process for backing up. Is that the right thing to do?
If not, what would the right procedure be?
If I understand correctly, PostgreSQL is only used to store WSO2 'internal state data', so backing it up may not be useful. So I'm thinking that maybe an export of openLDAP (the slapcat command) would be enough.
Backing up openLDAP is probably not enough. Depending on how the WSO2 components (IS + APIM) are installed, you may also have H2 DBs for the local registry, Solr indexes for the UI, Velocity templates for API deployments, and/or Synapse XMLs for the APIs.
I recommend first comparing your installation directories and files with the vanilla zips, so you know how it is configured before changing your backup process.
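For illustration, a hedged sketch of what a more consistent backup could look like (the install path, database name, and credentials are placeholders for whatever your installation actually uses):

    # Export OpenLDAP to LDIF; slapcat reads the database files directly,
    # so prefer a quiet period, or stop slapd first to be safe
    slapcat -l /backup/ldap-export.ldif

    # pg_dump produces a consistent snapshot even while PostgreSQL runs
    pg_dump -U wso2user -d wso2db -f /backup/wso2db.sql

    # Tar the WSO2 tree for configs, H2 registry DBs, Solr indexes, etc.
    # (stop WSO2 first if you need the embedded H2 files to be consistent)
    tar czf /backup/wso2-home.tar.gz /opt/wso2is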

Can we create a local database for my application without making it dependent on the OS on which it is installed?

I mean, for example, installing a database alongside my web server and then deploying the database with my application without any further configuration: just a portable database. Is that possible? Currently my Mongo database is installed on, and dependent on, my computer's operating system, and I would like to make it portable. Is it possible to build the database on top of my web server and thereby make it portable?
Any hint would be great,
thanks
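If you stay with MongoDB, one way to get close to this (a minimal sketch; the paths and port are placeholders, and it assumes you bundle the mongod binary with your app) is to run a private mongod instance out of a folder inside your application directory:

    # Run a private mongod from the application folder, on a non-default
    # port so it won't collide with any system-wide MongoDB install
    ./bin/mongod --dbpath ./data/db --port 27018 --bind_ip 127.0.0.1

The application then connects to mongodb://127.0.0.1:27018, and moving the whole application folder moves the database with it.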

Solution to back up postgresql alongside the file system

I'm developing an application with a postgresql db. It also stores files in the file system and keeps their paths in the db. I want an open-source solution for backing up the app state, including the database and the file storage.
Mandatory requirements:
Supports backing up a postgresql db while it is running.
Supports backing up a folder.
Supports compression.
Optional requirements:
Can view, create, and restore backups in a web console. (Important)
Supports plugins or custom backup/restore tasks.
Supports other data stores like mysql.
Supports retention policies.
I've looked at projects like Barman and Amanda, but it seems each one solves only part of the problem.
Should I develop the solution myself?
The application is developed in Java, if it matters.
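If you do end up scripting it yourself, the core is small; the web console and plugin system are what the bigger tools add. A minimal sketch covering the mandatory requirements plus retention (paths, names, and credentials are placeholders):

    #!/bin/sh
    # Nightly app-state backup: database + file storage, compressed
    STAMP=$(date +%Y%m%d-%H%M%S)
    DEST="/backups/app-$STAMP"
    mkdir -p "$DEST"

    # pg_dump works against a running server; the custom format (-Fc)
    # is compressed and restorable with pg_restore
    pg_dump -U appuser -d appdb -Fc -f "$DEST/appdb.dump"

    # Archive the file storage folder
    tar czf "$DEST/files.tar.gz" /srv/app/files

    # Simple retention: drop backups older than 30 days
    find /backups -maxdepth 1 -name 'app-*' -mtime +30 -exec rm -rf {} +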

What's the best Drupal deployment strategy? [closed]

I am working on my first Drupal project, on XAMPP on my MacBook. It's a prototype and has received positive feedback from my client.
I am going to deploy the project on a Linux VPS in two weeks. Is there a better way than redoing everything on the server from scratch?
install Drupal
download modules (CCK, Views, Date, Calendar)
create the Contents
...
Thanks
A couple of tips:
Use source control, NOT FTP etc., for the files. It doesn't matter what you use; we tend to spin up an Unfuddle.com subversion account for each client so they have a place to log bugs as well, but the critical first step is getting the full source tree of your site into version control. When changes are made on the testing or staging server, you see if they work, you commit, then you update on the live server. Rollbacks and deployments get a lot simpler. For clusters of multiple webheads you can repeat the process, or rsync from a single 'canonical' server.
If you use SVN, you can also use CVS checkouts of Drupal and other modules/themes, and the SVN and CVS metadata will happily live beside each other.
For bulky folders like the files directory, use a symlink in the 'proper' location to point to a server-side directory outside of the webroot. That lets your source control repo include all the code and a symlink, instead of all the code and all the files users have uploaded.
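For example (the paths are illustrative):

    # Keep user uploads outside the webroot and the repo; the repo only
    # tracks the symlink at Drupal's expected files location
    mkdir -p /var/drupal-files/mysite
    ln -s /var/drupal-files/mysite /var/www/mysite/sites/default/files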
Databases are trickier; cleaning up the dev/staging DB and pushing it to live is easiest for the initial rollout but there are a few wrinkles when doing incremental DB updates if users on the live site are also generating content.
I did a presentation on Drupal deployment best practices last year. Feel free to check the slides out.
Features.module is an extremely powerful tool for managing Drupal configuration changes.
Content Types, CCK settings, Views, Drupal Variables, Contexts, Imagecache presets, Menus, Taxonomies, and Permissions can all be rolled into a feature, which can be checked into version control. From there, deploying a new site, or pushing changes to an existing one, is easily managed with the Features UI or Drush.
Make sure you install Strongarm.module for exporting Drupal config that gets stored in your Variables table. You can also export static content/nodes (i.e. About Us, FAQs, etc.) into Features by installing uuid_features.module.
Hands down, this is the best way to work with other developers on the same site, and to move your site from Development to Testing to Staging and Production.
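With Drush installed, the day-to-day round trip looks roughly like this (the feature name is a placeholder, and the exact command names vary across Drush/Features versions):

    # After exporting a feature (via the Features UI or drush features-export),
    # enable the generated module on the target site
    drush pm-enable my_site_config

    # Push subsequent config changes from code into the database
    drush features-revert my_site_config

    # See which components are out of sync between code and database
    drush features-diff my_site_config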
We've had an extensive discussion on this at my workplace, and the way we finally settled on was pushing code updates (including modules and themes) from development to staging to production. We're using Subversion for this, and it's working well so far.
What's particularly important is that you automate a process for pushing the database back from production, so that your developers can keep their copies of the database as close to production as possible. In a mission-critical environment, you want to be absolutely certain a module update isn't going to hose your database. The process we use is as follows:
Install a module on the development server.
Take note of whatever changes and updates were necessary. If there are any hitches, revert and do again until you have a solid, error-free process.
Test your changes! Repeat your testing process as a normal, logged-in user, and again as an anonymous user.
If the update process involved anything other than running update.php, then write a script to do it.
Copy the production database to your staging server, and perform the same steps immediately. If it fails, diagnose the failure and return to step 1. Otherwise, continue.
Test your changes!
BACK UP YOUR PRODUCTION DATABASE and TAKE NOTE OF THE REVISION YOU HAVE CHECKED OUT FROM SVN.
Put your production Drupal in maintenance mode, run "svn update" on your production tree, and go through your update process.
Take Drupal out of maintenance mode and test everything (as admin, regular user, and anonymous)
And that's it. One thing you can never really expect with a community framework such as Drupal is to be able to move your database from testing to production after you go live. From then on, all database moves are from production to testing, which complicates the deployment process somewhat. Be careful! :)
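For reference, the production half of those steps condenses to something like this (paths and credentials are placeholders; the drush commands are shortcuts for the maintenance-mode setting and update.php mentioned above):

    cd /var/www/drupal

    # Back up the production DB and record the deployed SVN revision
    mysqldump -u produser -p proddb > /backups/proddb-$(date +%F).sql
    svn info . > /backups/deployed-revision.txt

    # Maintenance mode on, pull the new code, run the update process
    drush vset maintenance_mode 1      # Drupal 7; site_offline on Drupal 6
    svn update
    drush updatedb                     # the drush equivalent of update.php

    # Back online, then retest as admin, regular, and anonymous user
    drush vset maintenance_mode 0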
We use the Features module extensively to capture features and then install them easily at the production site.
I'm surprised that no one mentioned the Deployment module. Here is an excerpt from its project page:
... designed to allow users to easily stage content from one Drupal site to another. Deploy automatically manages dependencies between entities (like node references). It is designed to have a rich API which can be easily extended to be used in a variety of content staging situations.
I don't work with Drupal, but I do work with Joomla a lot. I deploy by archiving all the files in the web root (tar and gzip in my case, but you could use zip) and then uploading and expanding that archive on the production server. I then take a SQL dump (mysqldump -u user -h host -p databasename > dump.sql), upload that, and use the reverse command to insert the data (mysql -u produser -h prodDBserver -p prodDatabase < dump.sql). If you don't have shell access you can upload the files one at a time and write a PHP script to import dump.sql.
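Put together, the whole round trip (host names, users, paths, and database names are placeholders):

    # On the dev machine: archive the webroot and dump the database
    tar czf site.tar.gz -C /var/www/html .
    mysqldump -u user -h host -p databasename > dump.sql

    # Ship both to production
    scp site.tar.gz dump.sql user@prodserver:/tmp/

    # On production: unpack and import
    tar xzf /tmp/site.tar.gz -C /var/www/html
    mysql -u produser -h prodDBserver -p prodDatabase < /tmp/dump.sql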
Any version control system (GIT, SVN) + Features module to deploy Drupal code + custom settings (content types, custom fields, module dependencies, views etc.).
As the Deploy module is still in development, you may want to use the Node export module in Drupal 7 to deploy your content/nodes.
If you're new to deployment (and/or Drupal), then be sure to do everything in one lump.
You have to be quite careful once there are users affecting content while you are working on another copy.
It is possible to leave alone the tables that relate to actual content, taxonomy, users, etc., rather than their structure, and push only the ones relating to configuration. However, this adds an order of magnitude of complexity.
Apologies if deployment is old hat to you and this comes across as vaguely insulting.
A good strategy that I have found, and am currently implementing, is to use a combination of the Deploy module to migrate my content, and drush along with dbscripts to merge and update core and modules. It takes care of database merging even if you have live content, plus security and module updates, and I currently have mine set up to work with svn.