Running a standalone postgres instance alongside existing servers - postgresql

I'm developing an application that heavily relies on Postgres. Currently, users are required to have Postgres set up and place database credentials in a config file that my app reads.
There are a few things about this approach that I am not entirely happy with:
I can't dictate the version of Postgres to use (they may need a certain version for some other task)
Users have to install a separate piece of software before they can use my application.
They would need to provide database credentials to my app (which they may revoke later)
Potential issues with database name conflicts
Is it possible to run a standalone version of Postgres, i.e. run it from a folder, potentially alongside another version of Postgres on the same machine?
My current solution is to run it as a Docker container. This avoids any conflicts with data and permissions, but it does not solve the problem of users having to install a separate piece of software first.
The application is completely standalone and single-user, and there is never a need for a user to interact with the database directly; even actions like taking backups are handled by the app, which internally calls pg_dump.
(I would have thought something like this would be possible for the people who develop Postgres - I can't imagine they constantly install and remove the software completely from their system in order to test things.)
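For illustration, here is roughly what running Postgres out of a private folder can look like, assuming the application ships or downloads its own copy of the Postgres binaries; the paths, port, and user name below are made up:

    # one-time setup: initialise a private data directory for the app
    initdb -D ./myapp-data/pg -U myapp --auth=trust

    # start the server on a non-default port with a private socket directory,
    # so it cannot collide with any system-wide Postgres installation
    mkdir -p /tmp/myapp-pg
    pg_ctl -D ./myapp-data/pg -l ./myapp-data/pg.log \
           -o "-p 54321 -k /tmp/myapp-pg" start

    # the app connects only to its own instance
    psql -h /tmp/myapp-pg -p 54321 -U myapp -d postgres -c "select version();"

    # stop the server when the app shuts down
    pg_ctl -D ./myapp-data/pg stop

This is essentially what the Docker approach does behind the scenes, minus the Docker dependency.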

Related

Can we create a local database for my application without making it dependent on the OS on which it is installed?

For example, could I install a database on a web server and then deploy that database together with my application without further configuration - just a portable database? Currently my Mongo database is installed on, and depends on, my computer's system. I would like to make it portable. Is it possible to build the database on my web server and thereby make it portable?
Any hint would be great,
thanks

Automated Table creation for Postgres in Cloud Foundry

We have a Cloud Foundry app that is bound to a PostgreSQL service. Right now we have to manually connect to the PostgreSQL database with pgAdmin and then manually run the queries to create our tables.
Attempted solution:
Do a Cloud Foundry run-task in which I would:
1) Install psql and connect to the remote database
2) Create the tables
The problem I ran into was that cf run-task has limited permissions to install packages.
What is the best way to automate database table creation for a cloud-foundry application?
Your application will run as a non-root user, so it will not have the ability to install packages, at least in the traditional way. If you want to install a package, you can use the Apt Buildpack to install it. This will install the package, but into a location that does not require root access. It then adjusts your environment variables so that binaries & libraries can be found properly.
Also keep in mind that tasks are associated with an application (they both use the same droplet), so to make this work you'd need to do one of two things:
1.) Use multi-buildpacks to run the Apt buildpack plus your standard buildpack. This will produce a droplet that has both your required packages and your app bits. Then you can start your app and kick off tasks to set up the DB.
2.) Use two separate apps. One for your actual app and one for your code that seeds the database.
Either one should work; both are valid ways to seed your database. The other option, which is what I typically do, is to use some sort of tool for this. Some frameworks, like Rails, have this built in. If your framework does not, you could bring your own tool, like Flyway. These tools often also help with the evolution of your DB schema, which can be useful too.
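For illustration, a rough sketch of option 1, assuming the Apt buildpack can supply the Postgres client and that you expose the bound service's connection string to the task yourself (the app name, seed script, and DB_URI variable are made up, and the exact run-task syntax depends on your cf CLI version):

    # apt.yml at the app root tells the Apt buildpack which packages to install:
    #
    #   ---
    #   packages:
    #     - postgresql-client
    #
    # push with both buildpacks (apt first, then your normal buildpack, nodejs here as an example),
    # then kick off a one-off task that seeds the schema
    cf push my-app -b https://github.com/cloudfoundry/apt-buildpack -b nodejs_buildpack
    cf run-task my-app 'psql "$DB_URI" -f db/create_tables.sql' --name seed-db
    cf logs my-app --recent    # check the task output

DB_URI here is a placeholder; in practice you would derive the connection string from VCAP_SERVICES for the bound PostgreSQL service.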

Is there a way to upgrade from a Heroku shared database to a production grade database like Basic or Crane?

I have been using the Heroku shared database for a while now in an app and I would like to upgrade to their new Basic/Crane/etc. production grade databases. I don't however see a clear path to do that.
The options as I see them are:
1. Use db:pull/db:push to migrate the data/schema from the current production database to the new database. I could go into maintenance mode, move the data, then update the config to point to the new database. Not terrible, but I fear that the old schema from the shared database is not v9 compatible. Maybe I'm wrong. This could also take a long time, resulting in some major downtime. Not cool.
2. Use pg:backups to create a backup, and use heroku pg:restore to move the data over. Again, I fear the same schema issues, but this would be much faster.
3. Start with a Basic/Crane database and use their Followers concept. This feels like the right way to do it, but I don't know whether it works with the shared databases. If it does, I don't understand how.
All of these options, I feel, require me to upgrade to Postgres v9 at some point, since all the new databases are v9. Is there a way to do that in the shared environment, so that migrating will be less painful? Maybe.
Any ideas or suggestions?
Their Migrating Between Databases document points out that your option (3) using Followers for a fast changeover doesn't work when you're starting with a shared instance. What you should do here is use pg:backups to make a second database to test your application against and confirm the new database acts correctly there. If that works, turn on maintenance mode and do it again for the real migration.
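The general shape of that rehearse-then-cut-over flow, sketched with present-day Heroku CLI commands (the plan and colour names are placeholders, and the commands have been renamed since the Basic/Crane era - heroku pg:copy did not exist back then):

    # provision the new production-grade database alongside the shared one
    heroku addons:create heroku-postgresql:standard-0 --app myapp

    # rehearsal: copy the data across and test the app against the new database
    heroku pg:copy DATABASE_URL HEROKU_POSTGRESQL_PINK_URL --app myapp

    # the real migration, inside a maintenance window
    # (pg:copy overwrites the target, so you will be asked to confirm)
    heroku maintenance:on --app myapp
    heroku pg:copy DATABASE_URL HEROKU_POSTGRESQL_PINK_URL --app myapp
    heroku pg:promote HEROKU_POSTGRESQL_PINK_URL --app myapp
    heroku maintenance:off --app myapp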
There are very few schema-level incompatibility issues going between PostgreSQL 8.4 and 9.1. That migration doc warns about one of them, the Bytea changes. If you're using Bytea, you probably know it; it's not that common. Most of the other things that changed between those versions are also features you're unlikely to use, such as the PL/pgSQL modifications made in PostgreSQL 9.0. Again, if you're using PL/pgSQL server-side functions, you'll probably know that too, and most people on Heroku don't.
Don't guess whether your database is compatible - test. You can run multiple copies of PostgreSQL locally; they just need different data directories and ports configured. That way you can test against 9.1 and 8.4 at will.
You usually use the pg_dump from 9.1 to dump the 8.4 database - pg_dump knows about older versions, but not newer (obviously).
Unless you're doing something unusual with the database (presumably not, since you're on Heroku) then there's unlikely to be any problems just dumping + restoring between versions.
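A rough sketch of that cross-version dump and restore, always driving it with the newer client tools (the Debian-style binary paths and port numbers are just examples):

    # dump the old 8.4 cluster with the 9.1 pg_dump, in custom format
    /usr/lib/postgresql/9.1/bin/pg_dump -p 5432 -Fc mydb > mydb.dump

    # create the target database on the 9.1 cluster and restore into it
    /usr/lib/postgresql/9.1/bin/createdb -p 5433 mydb
    /usr/lib/postgresql/9.1/bin/pg_restore -p 5433 -d mydb --no-owner mydb.dump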

Restoring Ingres Database from one system to another system

We want to restore a database that we received from the client as a backup into our development environment, but we are unable to restore it successfully. Can anyone help us with the steps involved in this restore process? Thanks in advance.
Vijay, if you plan to make a new database out of checkpoints (+ journals) made on another (physical) server, then I must disappoint you - it is going to be a painful process. Follow these instructions: http://docs.actian.com/ingres/10.0/migration-guide/1375-upgrading-using-upgradedb. The process is basically the same as upgradedb. However, if the architecture of the development server is different (say the backup was made on a 32-bit system and the development machine is, say, POWER6-based), then it is impossible to make your development copy of the database using this method.
On top of all this, this method of restoring backups is not officially supported by Actian.
My recommendation is to use the 'unloaddb' tool on the production server, export the database into a directory, scp that directory to your development server, and then use the generated 'copy.in' file to create the development database. NOTE: this is the way supported by Actian, and you can find more details on this page: http://docs.actian.com/ingres/10.0/migration-guide/1610-how-you-perform-an-upgrade-using-unloadreload. This is the preferred way of migrating databases across various platforms.
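Very roughly, and with made-up database names and paths, that unloaddb round trip looks like this (the generated script names differ per platform, so check what unloaddb actually writes out):

    # on the production server: generate the unload/reload scripts and export the data
    mkdir /tmp/proddb_export && cd /tmp/proddb_export
    unloaddb proddb            # writes the unload/reload command files plus copy.in/copy.out
    sh unload.ing              # run the generated unload script (name is platform-dependent)

    # ship the whole export directory to the development machine
    scp -r /tmp/proddb_export devbox:/tmp/proddb_export

    # on the development server: create an empty database and reload it
    createdb devdb
    cd /tmp/proddb_export
    sh reload.ing              # runs the generated copy.in; edit it first if names or paths differ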
It really depends on how the database has been backed up and provided to you.
In Ingres there is a snapshot (called a checkpoint) that can be restored into a congruent environment, but that can be quite involved.
There is also output from copydb and unloaddb commands which can be reloaded into another database. Things to look out for here are a change in machine architecture or paths that may have been embedded into the scripts.
Do you know how the database was backed up?

What's best Drupal deployment strategy? [closed]

I am working on my first Drupal project, on XAMPP on my MacBook. It's a prototype and has received positive feedback from my client.
I am going to deploy the project on a Linux VPS in two weeks. Is there a better way than redoing everything on the server from scratch?
install Drupal
download modules (CCK, Views, Date, Calendar)
create the Contents
...
Thanks
A couple of tips:
Use source control, NOT FTP/etc., for the files. It doesn't matter what you use; we tend to spin up an Unfuddle.com Subversion account for each client so they have a place to log bugs as well, but the critical first step is getting the full source tree of your site into version control. When changes are made on the testing or staging server, you check that they work, you commit, then you update on the live server. Rollbacks and deployment get a lot, lot simpler. For clusters of multiple webheads you can repeat the process, or rsync from a single 'canonical' server.
If you use SVN, though, you can also use CVS checkouts of Drupal and other modules/themes and the SVN/CVS metadata will be able to live beside each other happily.
For bulky folders like the files directory, use a symlink in the 'proper' location to point to a server-side directory outside of the webroot. That lets your source control repo include all the code and a symlink, instead of all the code and all the files users have uploaded.
Databases are trickier; cleaning up the dev/staging DB and pushing it to live is easiest for the initial rollout but there are a few wrinkles when doing incremental DB updates if users on the live site are also generating content.
I did a presentation on Drupal deployment best practices last year. Feel free to check the slides out.
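A bare-bones sketch of the version-control and files-symlink tips above, with made-up paths and hosts:

    # keep the code in version control, but point the files directory at
    # a server-side location outside the webroot (the symlink itself gets committed)
    mv sites/default/files /var/data/mysite-files
    ln -s /var/data/mysite-files sites/default/files

    # deploy: commit from the staging checkout, then update the live checkout
    svn commit -m "Theme and module updates"
    ssh live-server 'cd /var/www/mysite && svn update'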
Features.module is an extremely powerful tool for managing Drupal configuration changes.
Content Types, CCK settings, Views, Drupal Variables, Contexts, Imagecache presets, Menus, Taxonomies, and Permissions can all be rolled into a feature, which can be checked into version control. From there, deploying a new site, or pushing changes to an existing one, is easily managed with the Features UI or Drush.
Make sure you install Strongarm.module for exporting Drupal config that gets stored in your variables table. You can also export static content/nodes (i.e. about us, FAQs, etc.) into Features by installing uuid_features.module.
Hands down, this is the best way to work with other developers on the same site, and to move your site from Development to Testing to Staging and Production.
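For example, with drush the Features workflow is roughly as follows (the feature name and components here are invented):

    # on the dev site: export selected configuration into a feature module
    drush features-export my_site_config node:article views_view:frontpage

    # commit the generated module, then on the target environment:
    drush en -y my_site_config                 # enable the feature the first time
    drush features-revert -y my_site_config    # apply the config stored in code to the database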
We've had an extensive discussion on this at my workplace, and the way we finally settled on was pushing code updates (including modules and themes) from development to staging to production. We're using Subversion for this, and it's working well so far.
What's particularly important is that you automate a process for pushing the database back from production, so that your developers can keep their copies of the database as close to production as possible. In a mission-critical environment, you want to be absolutely certain a module update isn't going to hose your database. The process we use is as follows:
Install a module on the development server.
Take note of whatever changes and updates were necessary. If there are any hitches, revert and do again until you have a solid, error-free process.
Test your changes! Repeat your testing process as a normal, logged-in user, and again as an anonymous user.
If the update process involved anything other than running update.php, then write a script to do it.
Copy the production database to your staging server, and perform the same steps immediately. If it fails, diagnose the failure and return to step 1. Otherwise, continue.
Test your changes!
BACK UP YOUR PRODUCTION DATABASE and TAKE NOTE OF THE REVISION YOU HAVE CHECKED OUT FROM SVN.
Put your production Drupal in maintenance mode, run "svn update" on your production tree, and go through your update process.
Take Drupal out of maintenance mode and test everything (as admin, regular user, and anonymous)
And that's it. One thing you can never really expect for a community framework such as Drupal is to be able to move your database from testing to production after you go live. From then on, all database moves are from production to testing, which complicates the deployment process somewhat. Be careful! :)
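Assuming a Drupal 7 site with drush available on the server, steps 7-9 above condense to something like this (the backup path is illustrative):

    # step 7: back up the production database and note the deployed revision
    drush sql-dump --result-file=/backups/prod-$(date +%Y%m%d).sql
    svn info | grep Revision

    # steps 8-9: maintenance mode on, update code and database, maintenance mode off
    drush vset maintenance_mode 1
    svn update
    drush updatedb -y
    drush cc all
    drush vset maintenance_mode 0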
We use the Features module extensively to capture features and then install them easily at the production site.
I'm surprised that no one mentioned the Deployment module. Here is an excerpt from its project page:
... designed to allow users to easily stage content from one Drupal site to another. Deploy automatically manages dependencies between entities (like node references). It is designed to have a rich API which can be easily extended to be used in a variety of content staging situations.
I don't work with Drupal, but I do work with Joomla a lot. I deploy by archiving all the files in the web root (tar and gzip in my case, but you could use zip) and then uploading and expanding that archive on the production server. I then take a SQL dump (mysqldump -u user -h host -p databasename > dump.sql), upload that, and use the reverse command to insert the data (mysql -u produser -h prodDBserver -p prodDatabase < dump.sql). If you don't have shell access you can upload the files one at a time and write a PHP script to import dump.sql.
Any version control system (GIT, SVN) + Features module to deploy Drupal code + custom settings (content types, custom fields, module dependencies, views etc.).
As the Deploy module is still in development, you may want to use the Node export module in Drupal 7 to deploy your content/nodes.
If you're new to deployment (and/or Drupal), then be sure to do everything in one lump.
You have to be quite careful once there are users affecting content while you are working on another copy.
It is possible to leave the tables that relate to actual content, taxonomy, users, etc. alone and only push the ones relating to configuration. However, this adds an order of magnitude of complexity.
Apologies if deployment is old hat to you and this comes across as vaguely insulting.
A good strategy that I have found, and am currently implementing, is to use a combination of the Deploy module to migrate my content, and drush along with dbscripts to merge and update core and modules. It takes care of database merging even if you have live content, covers security and module updates, and I currently have mine set up to work with SVN.