I am planning to build an AutoCAD extension that will require custom data to be stored outside of the standard .dwg file for AutoCAD drawings. I would like this custom data to live in a local file so that it can be read into AutoCAD or saved from AutoCAD while offline. I have been imagining that each .dwg file would have its own separate database file associated with it, but I am also open to the idea of a single locally stored data file to allow for offline reading/writing of my custom data. Does MongoDB support this type of local data storage? There will be a cloud-based database that the data can be read from and written to, but I want a local storage layer as well, for offline read/write and for speed. I am a bit confused because most resources online address cloud storage, and I am having a hard time understanding how to use MongoDB to implement a reliable local storage system.
It's possible to install the MongoDB Community Server edition locally on your machine.
You can download the installer and find installation instructions on the MongoDB website.
As for where the data is stored: by default there is a single data directory per machine (the dbPath setting), and all of your databases live under it.
You may want a GUI to browse your databases. The Community Edition installer will prompt you to install Compass; I'm using a different tool called Robo 3T for that.
Something like nedb-promises (an embedded, file-based datastore with a MongoDB-like API) may also be of interest for creating a database local to the application.
(I've also been looking into how to use MongoDB locally, so the above is a summary of what I've found so far.)
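To make the "local server" idea concrete, here is a minimal sketch in Python using PyMongo (the same pattern exists in the .NET and Node.js drivers, which may be a better fit for an AutoCAD extension). It assumes the Community Server is running locally on the default port; the database, collection, and field names are placeholders, not anything AutoCAD-specific.

from pymongo import MongoClient

# Connect to the locally installed MongoDB server (no internet connection needed).
client = MongoClient("mongodb://localhost:27017", serverSelectionTimeoutMS=2000)
db = client["autocad_extension"]      # hypothetical database name
drawings = db["drawing_data"]         # e.g. one document per .dwg file

# Write (upsert) custom data keyed by the drawing's file path.
drawings.update_one(
    {"dwg_path": r"C:\projects\plan.dwg"},
    {"$set": {"custom": {"revision": 3, "notes": "site plan"}}},
    upsert=True,
)

# Read it back while offline.
doc = drawings.find_one({"dwg_path": r"C:\projects\plan.dwg"})
print(doc["custom"])

Syncing that local data with your cloud database would still be up to your application logic.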
Good morning,
I looked in the forum here and could not find the answer. If I overlooked it, I apologize...
I just joined an existing project team using WSO2 Identity Server 5.6 and the API Gateway.
I understand that WSO2 Identity Server is made up of several components, among them OpenLDAP (which uses a Berkeley DB backend) and a PostgreSQL database.
The current backup/restore procedure simply 'tar's the whole directory that contains all WSO2-related files (including the directories that hold the database files), without stopping WSO2.
I'm a bit doubtful about this kind of backup process. Is that the right thing to do?
If not, what would the right procedure be?
If I understand correctly, PostgreSQL is only used to store WSO2 'internal state data', so backing it up may not be useful. So I'm thinking that maybe an export of OpenLDAP (with the slapcat command) would be enough.
Backing up OpenLDAP is probably not enough. Depending on how the WSO2 components (IS + APIM) are installed, you may also have H2 DBs for the local registry, Solr indexes for the UI, Velocity templates for API deployments, and/or Synapse XMLs for the APIs.
I recommend that you first compare your installation directories and files with the vanilla zips, so you know how it is configured, before changing your backup process.
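To illustrate the kind of targeted backup being discussed, here is a rough sketch in Python; the paths, database name, and user are placeholders for this example. It takes a consistent LDIF export of OpenLDAP via slapcat, an online pg_dump of the PostgreSQL database, and an archive of the WSO2 installation directory so that the H2 databases, Solr indexes, and Synapse/Velocity artifacts are captured as well (ideally from a quiesced node, given the concern above about tarring live database files).

import datetime
import pathlib
import subprocess
import tarfile

stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
backup_dir = pathlib.Path(f"/backups/wso2-{stamp}")     # placeholder location
backup_dir.mkdir(parents=True)

# 1. Consistent export of the OpenLDAP data instead of copying raw Berkeley DB files.
subprocess.run(["slapcat", "-l", str(backup_dir / "ldap.ldif")], check=True)

# 2. Online dump of the PostgreSQL database (pg_dump works on a running server).
subprocess.run(
    ["pg_dump", "-U", "wso2", "-F", "c",
     "-f", str(backup_dir / "wso2.dump"), "wso2_db"],
    check=True,
)

# 3. Archive the WSO2 installation directory (config, H2 DBs, Solr indexes, Synapse XMLs).
with tarfile.open(backup_dir / "wso2-home.tar.gz", "w:gz") as tar:
    tar.add("/opt/wso2", arcname="wso2")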
I'm trying to launch several instances of Moodle on a Kubernetes-like container platform to improve performance and make my installation reliable. I came across the following requirement:
$CFG->dataroot This MUST be a shared directory where each cluster node is accessing the files directly. It must be very reliable, administrators cannot manipulate files directly.
Which tool can be used to transparently sync this directory across several containers? What is the best way to meet this requirement?
I successfully resolved the issue by using the ObjectFS plugin for S3 storage and by moving sessions to the database instead of the file system.
I download a lot of CSV files via FTP from different sources on a daily basis, and I then upload these files into Google Cloud Storage.
Are there any programs/APIs/tools to automate this?
I'm looking for the best way, if possible, to load these files directly into Google Cloud Storage without having to download them locally first. Ideally something I can deploy on Google Compute Engine, so I don't need to run local programs like FileZilla/CrossFTP. The program/tool would keep checking the remote location on a regular basis, load new files into Google Cloud Storage, and ensure a checksum match.
I apologize in advance if this is too vague/generic a question.
Sorry, no. Automatically importing objects from a remote FTP server is not currently a feature of GCS.
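The do-it-yourself route the question mentions (a small job deployed on a Compute Engine instance) is straightforward to script, though. Below is a rough sketch using Python's ftplib and the google-cloud-storage client; the FTP host, credentials, and bucket name are placeholders, and in practice you would run it on a schedule (e.g. cron) and harden the "already transferred" check.

import io
from ftplib import FTP

from google.cloud import storage  # pip install google-cloud-storage

FTP_HOST, FTP_USER, FTP_PASS = "ftp.example.com", "user", "secret"   # placeholders
BUCKET_NAME = "my-ingest-bucket"                                     # placeholder

client = storage.Client()          # uses the VM's service account on Compute Engine
bucket = client.bucket(BUCKET_NAME)

ftp = FTP(FTP_HOST)
ftp.login(FTP_USER, FTP_PASS)

for name in ftp.nlst():
    if not name.endswith(".csv"):
        continue
    blob = bucket.blob(name)
    if blob.exists():              # crude check that the file was already transferred
        continue
    buf = io.BytesIO()
    ftp.retrbinary(f"RETR {name}", buf.write)   # stream into memory, no local file
    buf.seek(0)
    blob.upload_from_file(buf)
    # For the checksum requirement, the uploaded object's md5_hash/crc32c properties
    # can be compared against a hash computed over buf before moving on.
ftp.quit()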
I'm developing an application with a PostgreSQL database. It also stores files in the file system and keeps their paths in the database. I want an open-source solution for backing up the application state, including the database and the file storage.
Mandatory requirements:
supports backing up the PostgreSQL database while it is running
supports backing up a folder
supports compression
Optional requirements:
can view, create, and restore backups in a web console (important)
supports plugins or custom backup/restore tasks
supports other data stores such as MySQL
supports retention policies
I've seen projects like Barman and Amanda, but it seems each one solves only part of the problem.
Should I develop the solution myself?
The application is developed in Java, if it matters.
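For what it's worth, the mandatory requirements above are small enough to script yourself; here is a rough sketch in Python (paths, the database name, and the user are placeholders) that takes an online pg_dump, archives the file-storage folder with compression, and applies a simple retention rule. The web console, plugin, and multi-engine requirements are where an existing tool such as Barman or Amanda would still have to come in.

import datetime
import pathlib
import shutil
import subprocess
import tarfile

BACKUP_ROOT = pathlib.Path("/backups")   # placeholder locations
FILES_DIR = "/var/app/files"
KEEP = 14                                # number of backups to retain

stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
dest = BACKUP_ROOT / stamp
dest.mkdir(parents=True)

# Online logical dump: pg_dump works against a running PostgreSQL server, and
# -F c writes a compressed custom-format archive restorable with pg_restore.
subprocess.run(
    ["pg_dump", "-U", "app_user", "-F", "c",
     "-f", str(dest / "app_db.dump"), "app_db"],
    check=True,
)

# Compressed archive of the folder that holds the application's files.
with tarfile.open(dest / "files.tar.gz", "w:gz") as tar:
    tar.add(FILES_DIR, arcname="files")

# Simple retention: keep only the most recent KEEP backup directories.
backups = sorted(p for p in BACKUP_ROOT.iterdir() if p.is_dir())
for old in backups[:-KEEP]:
    shutil.rmtree(old)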
We want to restore a database that we received from the client as a backup into our development environment, but we are unable to restore it successfully. Can anyone walk us through the steps involved in this restore process? Thanks in advance.
Vijay, if you plan to make a new database out of checkpoints (+ journals) made on another (physical) server, then I must disappoint you: it is going to be a painful process. Follow these instructions: http://docs.actian.com/ingres/10.0/migration-guide/1375-upgrading-using-upgradedb (the process is basically the same as upgradedb). However, if the architecture of the development server is different (say the backup was made on a 32-bit system and the development machine is, say, POWER6-based), then it is impossible to make your development copy of the database using this method.
On top of all this, this method of restoring backups is not officially supported by Actian.
My recommendation is to use the 'unloaddb' tool on the production server, export the database into a directory, SCP that directory to your development server, and then use the generated 'copy.in' file to create the development database. NOTE: this is the approach supported by Actian, and you can find more details on this page: http://docs.actian.com/ingres/10.0/migration-guide/1610-how-you-perform-an-upgrade-using-unloadreload . This is the preferred way of migrating databases across platforms.
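As a rough illustration only of the reload side of that procedure (the directory, host, and database names are placeholders, and the exact invocation should be verified against the Actian migration guide linked above), the steps could be scripted along these lines:

import subprocess

# Placeholders: the directory produced by running 'unloaddb proddb' on the
# production server, the production host, and the new development database name.
UNLOAD_DIR = "/tmp/proddb_unload"
PROD_HOST = "produser@prodserver"
DEV_DB = "devdb"

# 1. Copy the unload directory from production to the development server.
subprocess.run(["scp", "-r", f"{PROD_HOST}:{UNLOAD_DIR}", "/tmp/"], check=True)

# 2. Create an empty development database to reload into (Ingres createdb).
subprocess.run(["createdb", DEV_DB], check=True)

# 3. Feed the generated copy.in script to the Ingres terminal monitor.
#    (Script name comes from the answer above; check the linked guide for exact usage.)
with open(f"{UNLOAD_DIR}/copy.in") as copy_in:
    subprocess.run(["sql", DEV_DB], stdin=copy_in, cwd=UNLOAD_DIR, check=True)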
It really depends on how the database has been backed up and provided to you.
In Ingres there is a snapshot (called a checkpoint) that can be restored into a congruent environment, but that can be quite involved.
There is also output from the copydb and unloaddb commands, which can be reloaded into another database. Things to look out for here are a change in machine architecture, or paths that may have been embedded in the generated scripts.
Do you know how the database was backed up?