Transferring MongoDB data

So, as the documentation says, by default it stores data in the data/db/ directory. As far as I can see through the file manager, that folder is empty. I guess the documents are hidden there.
So, if I pull a repository with this folder from another PC, will I be able to access this data through MongoDB?

"I guess the documents are hidden there"
Unlikely. I'm betting that your data dir is set to another value.
"If I pull a repository with this folder from another PC"
This may work, but at best it'll overwrite your local data files. At worst, it'll overwrite your local data files and MongoDB won't start with the new data files.
The recommended/supported way is to use mongodump/mongorestore. Bonus point: you won't have to care where the data files live on either computer.
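A minimal sketch of that round trip (host, port, and the dump directory are example values; adjust for your setup):

# On the source machine: dump every database into ./dump/
mongodump --host localhost --port 27017 --out ./dump
# Copy ./dump to the target machine (scp, USB, etc.), then restore it:
mongorestore --host localhost --port 27017 ./dump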

If you installed MongoDB from the Debian/Ubuntu package, for example, the data directory will be /var/lib/mongodb.
See https://askubuntu.com/questions/982673/where-is-mongo-database-folder-on-the-filesystem
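On such an installation you can confirm the active data directory from the packaged config file (the path below assumes the stock Debian/Ubuntu package):

# The config records the data directory under storage.dbPath:
grep dbPath /etc/mongod.conf
#   dbPath: /var/lib/mongodb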

Related

How to automatically and selectively back up critical files on edit?

I have just accidentally deleted one week of source files, and even testdisk does not restore them. Even the executable jars are gone... I use Ubuntu. I don't want that to happen ever again. How can I sufficiently and efficiently make automatic backups (clones) of selected critical files to a different location, e.g. home?
I use Java, and Eclipse as my IDE, but this could be any file I work with. E.g. I select a certain file because I could accidentally delete it, and this lightweight backup tool would automatically update the copy in the backup location as changes are saved. So if the file is lost in the working directory, as in my case, I can just take it from the backup site on the local machine. Please help. I feel devastated...
cwatch might be the kind of solution I am looking for, but it is too complicated.
P.S. I am aware of the question "Script to perform a local backup of files stored in Google drive"; Google services are not OK for me.
The simplest solution would be to use GitHub or Bitbucket and to regularly push the changes you make to the online repository. You will benefit more from using version control software than from a local backup. You can use either of them for free.
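A minimal sketch of that workflow (the remote URL is a placeholder for your own repository):

# One-time setup inside the project directory:
git init
git add .
git commit -m "Initial snapshot"
git remote add origin git@github.com:youruser/yourproject.git   # placeholder URL
# Then, after each meaningful change:
git add -A
git commit -m "Describe the change"
git push -u origin master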

Moodle 3.3.2+ backup with cron job: which folder?

I succeeded in installing Moodle 3.3.2+ on my VPS. I want to back up Moodle but I don't know which folders I should back up every day...
I think it is not wise to back up every folder in the Moodle installation, since that would be very big.
So could anyone suggest which folders are essential, get updated dynamically, and should be included in my backup plan?
Thanks in advance
Moodle stores uploaded content in a moodledata folder. To check the path to that folder, look in the config.php file in the root Moodle directory and search for $CFG->dataroot. This folder, as well as the database, should go in the backup. The Moodle code itself (unless you have custom changes you want to keep) can be obtained again from moodle.org if necessary.
See the Moodle docs for more info: https://docs.moodle.org/33/en/Site_backup
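A nightly cron job along those lines might look like the sketch below. Every path, database name, and credential is an example; take the real values of $CFG->dataroot and the DB settings from your own config.php.

#!/bin/sh
# Nightly Moodle backup: archive moodledata and dump the database.
MOODLEDATA=/var/moodledata                 # your $CFG->dataroot
BACKUP_DIR=/var/backups/moodle
STAMP=$(date +%F)
mkdir -p "$BACKUP_DIR"
tar czf "$BACKUP_DIR/moodledata-$STAMP.tar.gz" "$MOODLEDATA"
mysqldump -u moodleuser -p'password' moodle > "$BACKUP_DIR/moodle-db-$STAMP.sql"

Saved as e.g. /usr/local/bin/moodle-backup.sh, it can be scheduled with a crontab line like 0 2 * * * /usr/local/bin/moodle-backup.sh.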

Is there added value in a "file-to-file" project transfer vs copying the files directly?

We have been using EA's API ProjectTransfer function to back up our projects automatically (we have some projects on the filesystem as well as one project in a DBMS).
However, there are some caveats to this function: we cannot run our scripts unattended (as a task running daily), meaning the user has to be logged on for the script to run, since EA cannot be run unattended.
Also, we have noticed a bug in which the Accept Windows Authentication option does not carry over with a project transfer.
This is why we decided to move our scripts to simply copying the files for backup (and rely on the DBMS team for backing up the DBMS repository).
Should we simply be copying the files to back up the projects? Or is there something important that ProjectTransfer is doing?
No, there is no added value, as long as you do a file copy. ProjectTransfer is meant more for the RDBMS-to-EAP level, which cannot be done with a simple file copy. For RDBMS transfers between the same database type you can/should also use database backups as the transfer method.
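For the file-based projects, a plain scheduled copy needs no running EA instance at all. A minimal sketch with rsync (paths are examples; on a Windows-only setup the equivalent would be robocopy plus Task Scheduler):

# Copy all .eap project files, preserving the directory tree (example paths):
rsync -av --include='*/' --include='*.eap' --exclude='*' /srv/ea/projects/ /srv/backups/ea/$(date +%F)/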

What is the best deployment practice when using MODX?

It is convenient to have a DEVELOPMENT version of the application on your local machine, deploy it on a STAGE server for testing (optional), and then deploy it on the PRODUCTION server. You can do this relatively easily when the project cleanly separates code and data (for example, if all code and settings are stored in project files and all data in the database).
MODX stores templates, snippets, etc. in the database. Yes, we can move this code to static files and then use a version control system to track changes to these items. But these items still have corresponding rows in the database, which means we must update the database, just as before, whenever we add or remove items.
It also looks like we can run into trouble if we just copy extension files instead of installing them through the package manager (because extensions often have their own tables in the DB).
Another problem is that the applications on DEV and PROD have different settings stored both in files (configs) and in the database (e.g. user accounts).
I still do not see a clear way to organize an iterative DEV-STAGE-PROD development cycle. So, my questions are:
Which files and database tables should (or must) I copy when deploying?
In which mode (replace, ignore) should I do that?
What is the easiest and fastest way to do that?
My biggest concern here is having to deal with the database.
P.S. I'm talking about "Revolution" version of MODX if it matters.
The database should not store any path information at all; previous versions did, in the modx_workspaces table, but that has since disappeared [as of 2.2.4, I believe].
If you are concerned about the URL changes [dev.mysite.com / stage.mysite.com / production...], don't be: this is all in the .htaccess file [there used to be a site_url system setting, but it also seems to have disappeared].
The only file you need to worry about is core/config/config.inc.php: create 3 different files with the different paths, or just replace them when you migrate.
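One way to do the "replace them when you migrate" step (the .prod filename is hypothetical; config.inc.php itself is the real MODX file):

# Keep one config file per environment under version control and swap on deploy:
cp core/config/config.inc.php.prod core/config/config.inc.php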
My process for moving/updating/migrating MODX sites is:
clear the cache!!
tar cvfz httpdocs.tar.gz httpdocs/
mysqldump -u username -p the_database > export.sql
move the files, tar xvfz, & import the database.
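On the target server, the reverse of that last step would look something like this (names match the example commands above):

# Unpack the site files and load the database dump:
tar xvfz httpdocs.tar.gz
mysql -u username -p the_database < export.sql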
It's a good idea to check the modx_workspaces table, and if you have used an older version of Gallery, check that as well, but most plugins & developers seem to be used to NOT storing path information in code & DB tables.
Of course, if you have hardened your installation there are a few more steps, but nothing major [see the "Hardening MODX" article on rtfm.modx.com].
I think what you're looking for is one of these plugins (depending on your version of MODX):
https://github.com/digitalbutter/MODX-Mirror
https://github.com/digitalbutter/FEM
All Chunks, Snippets etc. are located on disk. Any changes made to the files will trigger the appropriate database changes without the need to do a complete SQL import/re-import. This allows for any version control system / distributed development environment / automated deployment.

Keeping SSIS packages under source control

I store all SSIS packages in a Subversion repository, along with their configuration files. The configuration file is almost always stored in the same folder as the package.
The problem is that SSIS always seems to store the path to the configuration file (the one saved in the package itself) as an absolute path.
When someone else checks out the folder with the package to a location different from the one on my development PC, the configuration file is not detected (because my absolute path is stored, and it doesn't exist on the other developer's PC). So the other developer has to remove this configuration and add it again from where it now lives on his local hard drive. Then the changed package is saved, which causes a new version to be committed. When I get that version from SVN, it will no longer match the local path on my PC.
On a related note: another developer may want to change values in the configuration file as well. If I later get the latest version of everything from SVN, the package will no longer work on my PC.
How do you work around these inconveniences?
Another solution is to save your configuration in a database, with an environment variable as the first configuration entry to tell the package which database to look in; that's what we do. We have scripts in source control to populate ssisconfig for each server, but the package uses the actual table data from the database named in the environment variable we are using.
Anyone who has heard my SQL Saturday presentations knows I don't much care for XML and this is one of the reasons. A trick to using XML configuration with varying locations is to use an environment variable (indirect configuration) to direct SSIS where it can look for that resource. The big, big downside to this approach is you'd generally need to create an environment variable for each set of configuration files or have a massive, honking .dtsconfig file which becomes painful for versioning.
The option I prefer, if XML configuration is a must, is that the "variableness" is removed. Developers and admins get together and everyone agrees "there will be a folder everywhere SSIS is done to hold configuration files, and that location is X", and then it's just a matter of solving for X. At a previous job, we used D:\ssisdata\configs.
@HLGEM's approach of a table for configurations is hands down my favorite approach to SSIS configuration (until you get to 2012 and its project deployment model, where configuration is an entirely different animal).
I add a folder called "config" under my project's folder, add it to source control, and maintain the config file in this folder. You can also add it to the SSIS project if you like.
I think it's a good solution because everybody can have this folder and download the config file.
When the package is deployed it will read the config file from the location you specify in the deployment manifest, so this solution won't impact your development.