How to move Alfresco Community Edition from one Ubuntu machine to another without loss of data? - postgresql

I am new to Alfresco and I am using Alfresco Community Edition 5.0 as a document management system. Up to now I haven't had any problems.
Now my question is how to move Alfresco Community from one system to another without losing data.

I would suggest you look at the Backing up and restoring section of the Alfresco documentation.
To keep it simple, your data is stored in three parts:
The database, which can be dumped and restored into another PostgreSQL instance of your choice.
The contentstore, which contains your system's file data and can be copied over to your new system (of course with the appropriate permissions).
The index, which contains your indexed content to enable powerful search. Transferring this one is optional, since it can be regenerated from the database and contentstore.
Of course, you should do your backup and transfer with Alfresco stopped, since the database, contentstore and index are related.
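For illustration, a minimal backup sketch in Python. The paths, control script and database name are assumptions about a default installer-based Alfresco 5.0 setup, so adjust them to your installation:

```python
import os
import shutil
import subprocess
from datetime import date

# Assumed locations for an installer-based Alfresco 5.0 setup; adjust to yours.
ALFRESCO_HOME = "/opt/alfresco"
CONTENTSTORE = os.path.join(ALFRESCO_HOME, "alf_data", "contentstore")
BACKUP_DIR = "/backup/alfresco-%s" % date.today()

os.makedirs(BACKUP_DIR, exist_ok=True)

# 1. Stop Alfresco first, so database, contentstore and index stay consistent.
subprocess.run([os.path.join(ALFRESCO_HOME, "alfresco.sh"), "stop"], check=True)

# 2. Dump the database (assumes a PostgreSQL database and role named 'alfresco').
with open(os.path.join(BACKUP_DIR, "alfresco.sql"), "w") as dump:
    subprocess.run(["pg_dump", "-U", "alfresco", "alfresco"],
                   stdout=dump, check=True)

# 3. Copy the contentstore, preserving file modes; the index is skipped because
#    it can be regenerated from database + contentstore on the new machine.
shutil.copytree(CONTENTSTORE, os.path.join(BACKUP_DIR, "contentstore"))
```

On the target machine you would restore the dump with psql, copy the contentstore back under alf_data, and let the index be rebuilt on first start.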

Related

Which kind of Google Cloud Platform mobile backend client is appropriate?

THE PROBLEM
I'm writing a mobile app which will allow a user to log in, save some preferences that must be stored in a database, and display congressional bills to the user.
I've only written simple RESTful services with PHP and MySQL in the past. I'd like to take advantage of newer technologies, and am a little lost on general direction.
The bill data (formatted as JSON) can be gathered by running the scrapers found here. Using Docker, I managed to set up a working directory and download the files to my local machine.
I've designed a MySQL database for holding the relevant bill and user data.
I started to mess around in Google Cloud Platform, and read the doc that describes the different models. I'm thinking of a few different ideas, but I'm not familiar with GCP or what I can actually accomplish.
QUESTIONS
1) What are App Engine, Compute Engine, and Container Engine each for? I get the gist that Container Engine holds different instances of stuff you load up with Docker, and that Compute Engine sets up a VM, but I don't really understand the relationships. How should I think of them?
2) When I run those scrapers from the shell, where are the files being stored, and how can I check on them? On my computer, I set a working directory, but how do directories work in GCP? Is it just a directory in the currently selected VM, or is this what Buckets are for?
IDEAS
1) Since my bill data already comes as JSON, should I skip the entire process of building a database for the bills and insert them into Firebase somehow? Is this even possible? If so, am I stuck using Firebase's NoSQL, or can I still set up a relational database?
2) I could schedule the scrapers to run periodically, detect new files, and run a script to parse the JSON and insert new bill data into a database (PostgreSQL? MySQL?). Then I would write an API. (A rough sketch of the parsing step follows this list.)
3) Download the JSON files to a bucket, and write an API that reads from them. Not sure how the performance would compare to using a DB.
I'm open to other suggestions as well.
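For reference, here is what idea 2's parse-and-insert step might look like. It uses sqlite3 purely to stay self-contained (you would swap in a PostgreSQL or MySQL driver), and the directory layout and JSON field names are assumptions about the scraper output:

```python
import json
import sqlite3
from pathlib import Path

# Hypothetical scraper output directory; adjust to where your files land.
SCRAPER_OUTPUT = Path("data/bills")

conn = sqlite3.connect("bills.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS bills (
           bill_id TEXT PRIMARY KEY,
           title   TEXT,
           status  TEXT
       )"""
)

for path in SCRAPER_OUTPUT.glob("**/*.json"):
    bill = json.loads(path.read_text())
    # INSERT OR REPLACE keeps the run idempotent, so a scheduled re-run only
    # picks up new or changed bills. The field names here are assumed.
    conn.execute(
        "INSERT OR REPLACE INTO bills (bill_id, title, status) VALUES (?, ?, ?)",
        (bill["bill_id"], bill["official_title"], bill["status"]),
    )

conn.commit()
conn.close()
```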
For your use case (a stateless web application), App Engine is probably your best choice. The Google documentation has several comparisons of your computing options.
You can use App Engine with PHP and cloud-hosted MySQL if you want, which could be a good way to get your toes wet without going in over your head.

MongoDB - Importing a db from local machine to MongoLabs

I have a decent-sized database on my local machine with a lot of important data that cannot be re-made easily (locally tested user profile information, blog posts, that sort of thing). It's around 50 MB in size.
I'm getting close to making my app live, and I want to bring this database to MongoLabs. I know how to connect to MongoLabs and set up a new database there, but I can't work out (if it's even possible) how to import a database from my local machine to MongoLabs, nor can I find any documentation discussing this.
My questions are:
Is this possible to do?
How do I do it?
If you open your database at mongolab.com and go to the Tools tab, you should see some helpful commands for migrating your data to your new database.
This support article also has more details:
https://support.mongolab.com/entries/20164381
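The usual route is mongodump/mongorestore, as shown on that Tools tab, but for a database this small you can also copy the collections programmatically. A minimal sketch with pymongo, where both connection URIs and the database name are placeholders (use the URI MongoLab shows for your database):

```python
from pymongo import MongoClient

# Placeholder URIs and database name; substitute your own values.
local_db = MongoClient("mongodb://localhost:27017")["myapp"]
remote_db = MongoClient(
    "mongodb://<user>:<password>@ds012345.mongolab.com:12345/myapp"
)["myapp"]

for name in local_db.list_collection_names():
    docs = list(local_db[name].find())
    if docs:
        # insert_many preserves each document's _id, so references stay intact.
        remote_db[name].insert_many(docs)
        print(f"Copied {len(docs)} documents into '{name}'")
```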

Track the Page Versions

Please guide me on how to solve the following scenario using a document-oriented database.
A page called 'Page1' (version V1.0) can be a content page for documents 'Document1', 'Document2' and 'Document3'. If I edit the contents of 'Page1' (the version should change from V1.0 to V1.1), this change should be reflected only in 'Document1' and 'Document2' but should not affect 'Document3'.
Now documents 'Document1' and 'Document2' refer to 'Page1' version V1.1, but document 'Document3' still refers to 'Page1' version V1.0.
In this way I need to keep track of the changes/versions of all the pages.
To keep track of these changes, NoSQL databases should be more useful than relational databases.
So which document-oriented database solves this scenario?
You can use Amazon S3 and keep your documents there. Amazon S3 supports versioning, so whenever you download a document it returns the latest version by default, and you can list all versions and restore any one of them. Amazon S3 takes over the task of keeping records of versions, so your NoSQL database only needs to point at the current version.
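A minimal sketch of that flow with boto3 (the bucket name is hypothetical, and the bucket must already exist):

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-pages-bucket"  # hypothetical, pre-existing bucket

# Enable versioning once; every later PUT to the same key then creates a new
# version instead of overwriting the old one.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Two revisions of the same page, each getting its own VersionId.
v10 = s3.put_object(Bucket=BUCKET, Key="Page1", Body=b"content v1.0")["VersionId"]
v11 = s3.put_object(Bucket=BUCKET, Key="Page1", Body=b"content v1.1")["VersionId"]

# A plain GET returns the latest revision (v1.1 here),
latest = s3.get_object(Bucket=BUCKET, Key="Page1")["Body"].read()

# while pinning a VersionId retrieves an older one, which is how
# Document3 can keep referring to Page1 V1.0.
old = s3.get_object(Bucket=BUCKET, Key="Page1", VersionId=v10)["Body"].read()

# List every stored version of the page.
for v in s3.list_object_versions(Bucket=BUCKET, Prefix="Page1")["Versions"]:
    print(v["VersionId"], v["IsLatest"])
```

Each document in your database then only stores the (key, VersionId) pair it was written against; editing Page1 produces a new VersionId, and you update that reference for Document1 and Document2 while Document3 keeps the old one.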

Sitefinity development environment and source code control

There are some queries we need resolved before we purchase a Sitefinity 5.0 license. I would really appreciate answers to these:
What are the recommended guidelines for setting up the Sitefinity project in source control? If there are 4 to 5 developers working on the project, what should be the starting point for setting up the initial codebase? Does every developer have to create the Sitefinity website and DB on their dev boxes?
Is it recommended to set up a common DB for the Sitefinity website that all the dev machines connect to for development? If not, what is the alternative approach?
Is there any online documentation available on building and releasing Sitefinity web applications, other than publishing from within Visual Studio?
Thanks
Gaurav
We've been developing with Sitefinity since version 2, with multiple developers.
To answer your questions specifically:
Have a single developer (ideally your lead dev) create a clean Sitefinity Visual Studio solution on their local machine. Check it into your source control repository and have each additional developer pull down a copy from there. You're now all in sync.
In terms of database location, two approaches work: either have each person run a local database and, in the web.config, set the connection string's server to . (i.e. local); that way no one needs to check out the web.config to run it. Otherwise, use a common development/testing server for the database. We've found the easiest way is for each dev to have a local DB, unless multiple devs are working on very specific tasks together at the same time.
I have not seen any online documentation related to building outside of Visual Studio. If you have TFS or an MS build server, it should work fine as well.
In general, there is nothing 'special' about Sitefinity's architecture that separates it from any other .NET / MSSQL solution. Best practice that falls under these technologies still applies.
My experience with source control has been one of two options. If you are using SQL Express user-instance databases (that is, an .mdf in the App_Data folder), I've found that versioning everything except this database file and the dataconfig.config file in the configurations folder will allow every developer to run their own copy of the website.
From there you can either do some kind of manual merge of the database or just create a new one for deployment.
This option works best if your developers are simply working on features and don't need to be working on an actual website, modifying content that has to stay in sync.
Alternatively, if they do need to work with live content and it all has to be the same, create the database on a shared server they all have access to, and version everything (since the connection string should be the same for everyone).
This works best if your developers are doing work to support existing content, as opposed to, say, creating modules that manipulate the database (creating tables, columns, etc.), because keep in mind that with this method everyone will be accessing and modifying the same database.
Personally, my preference is option 1, because it allows each developer full control over their environment. The source can then be merged and shadowed to a staging server, so that the main site content is only affected by that one instance.
I hope this is helpful!

Entity Framework - Preserving Data

Using Model First, what is the best way to approach preservation of existing database data when the model changes and the database has to be regenerated?
The Database Power Pack extension no longer works (I've been trying to contact the author). I can't find anything that provides similar functionality.
R.
If Database Power Pack doesn't work, there is no other automatic way. The manual way requires running the generated SQL script on another database, then using the Visual Studio Database tools to create a difference script between the current database and the newly created one.