Automatic backup/restore of an embedded OrientDB database - orientdb

I am using OrientDB in embedded mode via the Java API. How do I perform an automatic backup/restore of the database at a certain interval of time? Any help would be highly appreciated.

Just create a TimerTask that runs every X milliseconds and executes database.backup(). Example of a backup that is executed every 10 minutes (600,000 milliseconds):

new Timer(true).schedule(new TimerTask() {
    @Override
    public void run() {
        database.backup();
    }
}, 600000, 600000);
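
For a fuller picture, here is a minimal sketch of a periodic backup using the stream-based backup API from the OrientDB docs. The database URL, credentials, and backup path are placeholders, and the exact backup signature can vary between OrientDB versions, so treat this as a starting point rather than the definitive call:

import com.orientechnologies.orient.core.command.OCommandOutputListener;
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;

import java.io.FileOutputStream;
import java.io.OutputStream;
import java.util.Timer;
import java.util.TimerTask;

public class PeriodicBackup {
    public static void main(String[] args) {
        // Placeholder URL and credentials -- adjust to your embedded database.
        final ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:/data/mydb");
        db.open("admin", "admin");

        new Timer(true).schedule(new TimerTask() {
            @Override
            public void run() {
                try (OutputStream out = new FileOutputStream("/backups/mydb.zip")) {
                    // Bind the database instance to the timer thread (OrientDB 2.1+).
                    db.activateOnCurrentThread();
                    OCommandOutputListener listener = new OCommandOutputListener() {
                        @Override
                        public void onMessage(String text) {
                            System.out.print(text);   // backup progress
                        }
                    };
                    // No extra options or callable; compression level 9, 2048-byte buffer.
                    db.backup(out, null, null, listener, 9, 2048);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }, 600_000, 600_000);   // first run after 10 minutes, then every 10 minutes
    }
}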

Related

Update Firestore database just before internet disconnect

Future<void> setActiveDataToApi(
    String referencePath1, String referencePath2, bool activeState) async {
  await _firestore
      .collection(referencePath1)
      .doc(referencePath2)
      .update({"active": activeState});
}
I want to update the user's active status just before the internet disconnects; how can I do that?
This code updates data in my database. I'll update a timestamp too, but if I update the data periodically, say every 10 seconds, it costs me too much. OnDisconnect()/onDisconnectSetValue() doesn't exist for Firestore, and I think it's not for losing the internet connection anyway, just for closing the app. I use Firebase / Cloud Firestore.
You can use the app lifecycle to achieve this:
https://api.flutter.dev/flutter/widgets/WidgetsBindingObserver-class.html
class _CustomState extends State<WidgetBindingsObserverSample>
    with WidgetsBindingObserver {
  @override
  void initState() {
    super.initState();
    WidgetsBinding.instance.addObserver(this);
  }

  @override
  void didChangeAppLifecycleState(AppLifecycleState state) {
    if (state == AppLifecycleState.paused) {
      _syncStatus(status: <Update DB value to offline>);
      return;
    }
    if (state == AppLifecycleState.resumed) {
      _syncStatus(status: <Update DB value to online>);
      return;
    }
  }
}
Note: this is just an example; it can be further optimised and improved.
If you want to update the status based on whether the user is actually connected to the network or not, you can use connectivity_plus. The process for that is different: you listen for network changes through the API the package provides and call your API depending on which connection status you want to record.
There is no reliable way to detect when the internet connection is about to disappear. Imagine being in a train that drives into the tunnel. The first time the app can know that there's no connection is when that connection is already gone, and by then it's too late to write to the database.
Firebase Realtime Database's onDisconnect handlers send a write operation to the server while the client is still connected, and the server then executes it when it detects that the client is gone. There is no equivalent in Firestore, because the wire protocol Firestore uses is not suited for it.
Since you indicate not wanting to do a periodic update, the easiest way I can think of to do this is to actually use both databases in your app, and then use Cloud Functions to carry an onDisconnect write operation from the Realtime Database over to Firestore. This is in fact exactly the approach outlined in the documented solution for building a presence system on Firestore, so I recommend checking that out.

Run some code 1 hour after a document is updated with Cloud Functions and Firestore

Users of my app can receive messages, and I want them to receive a push notification 1 hour after the message document is created.
Here is my Cloud Function:
export const scheduler = functions.firestore
    .document("profile/{uuid}/message/{messageId}")
    .onCreate((change, context) => {
      // code that will send a push notification in one hour
      return true;
    });
I want to try this (I found that solution at this link: https://firebase.google.com/docs/functions/schedule-functions), but I don't know if I can replace "every 5 minutes" with something that says "one hour after":
exports.scheduledFunction = functions.pubsub.schedule('every 5 minutes').onRun((context) => {
  console.log('This will be run every 5 minutes!');
  return null;
});
The time(s) when scheduled Cloud Functions run must be known at the moment you deploy them. That is not the case for you, since the time depends on each individual document.
In that case you can either periodically run a scheduled Cloud Function that then checks what documents it needs to update, or you can use Cloud Tasks to create a dynamic trigger. For a detailed example of the latter, see Doug's blog post How to schedule a Cloud Function to run in the future with Cloud Tasks (to build a Firestore document TTL).
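The blog post builds the Cloud Tasks trigger in Node, but the core idea is simply creating a task whose scheduleTime lies one hour after the document's creation. As a rough sketch of that idea with the Cloud Tasks Java client library (project, location, queue, and target URL below are placeholder assumptions, not values from the question):

import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.HttpRequest;
import com.google.cloud.tasks.v2.QueueName;
import com.google.cloud.tasks.v2.Task;
import com.google.protobuf.ByteString;
import com.google.protobuf.Timestamp;

import java.nio.charset.StandardCharsets;
import java.time.Instant;

public class ScheduleInOneHour {
    public static void main(String[] args) throws Exception {
        // Placeholder project, location, and queue names.
        String parent = QueueName.of("my-project", "us-central1", "notifications").toString();

        // Run the task one hour from now (e.g. one hour after onCreate fired).
        Instant runAt = Instant.now().plusSeconds(3600);

        try (CloudTasksClient client = CloudTasksClient.create()) {
            Task task = Task.newBuilder()
                    .setScheduleTime(Timestamp.newBuilder()
                            .setSeconds(runAt.getEpochSecond())
                            .build())
                    .setHttpRequest(HttpRequest.newBuilder()
                            // Placeholder endpoint that actually sends the push notification.
                            .setUrl("https://example.com/sendNotification")
                            .setHttpMethod(HttpMethod.POST)
                            .setBody(ByteString.copyFrom("{\"messageId\":\"...\"}",
                                    StandardCharsets.UTF_8))
                            .build())
                    .build();
            client.createTask(parent, task);
        }
    }
}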

Heroku MongoDB scheduled query

I need to run a scheduled task for a MongoDB query once a month. Can anyone point me in the right direction? I didn't write the app, but I need to make some improvements. I'm hoping to do this with an add-on, or something else that doesn't require me to modify the source code of the original application.
I haven't tried too much, because I'm not sure where to start.
This is the command that the web interface would run today if done manually. I need to automate this process. I captured this using Papertrail on Heroku.
Delete records with query: { find: { created_at: { '$lte': '2019-06-05' } }, count: 10 }
Expected result is I can schedule a query to run once a month and it succeeds.

Better solution than TimerTask with Scala in Play Framework

I need to regularly check the database for updated records. I currently use TimerTask, which works fine. However, I've found its efficiency is not good and it consumes a lot of server resources. Is there a solution which can fulfil my requirement but is better?
def checknewmessages() = Action { request =>
  TimerTask(5000) {
    // code to check database
  }
}
I can think of two solutions:
You can use the ReactiveMongo driver for Play, which is completely non-blocking and async, together with a capped collection in MongoDB.
Please see this for an example -
https://github.com/sgodbillon/reactivemongo-tailablecursor-demo
How to listen for changes to a MongoDB collection?
If you are using a database that doesn't support a push mechanism, you can implement this with an Actor that schedules messages to itself at regular intervals (see the sketch below).
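As a minimal sketch of that Actor approach, here is the scheduling idea using Akka's Java API (Play bundles Akka; the actor and system names here are hypothetical, and in a real Play application you would typically use the injected ActorSystem and the Scala API instead):

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import scala.concurrent.duration.Duration;

import java.util.concurrent.TimeUnit;

// Hypothetical actor that checks the database whenever it receives a "tick".
class DbPollActor extends AbstractActor {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .matchEquals("tick", msg -> {
                    // code to check the database goes here
                })
                .build();
    }
}

class PollingSetup {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("polling");
        ActorRef poller = system.actorOf(Props.create(DbPollActor.class), "dbPoller");

        // Deliver a "tick" to the actor every 5 seconds; the work runs on the
        // actor's dispatcher instead of blocking a request thread.
        system.scheduler().schedule(
                Duration.create(0, TimeUnit.SECONDS),   // initial delay
                Duration.create(5, TimeUnit.SECONDS),   // interval
                poller, "tick",
                system.dispatcher(), ActorRef.noSender());
    }
}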
If your logic is in your database (stored procedures etc) you could simply create a cron job.
You could also create a command line script that encapsulates the logic and schedule (cron again).
If you have your logic in your web application, you could again create a cron job that simply makes an API call to your app.

Grails save does not respect flush option

I'm using Grails as a poor man's ETL tool for migrating some relatively small DB objects from one DB to the next. I have a controller that reads data from one DB (MySQL) and writes it into another (PostgreSQL). They use similar domain objects, but not exactly the same ones, due to limitations in the multi-datasource support in Grails 2.1.X.
Below you'll see my controller and service code:
class GeoETLController {
    def zipcodeService

    def migrateZipCode() {
        def zc = zipcodeService.readMysql();
        zipcodeService.writePgSql(zc);
        render(["success": true] as JSON)
    }
}
And the service:
class ZipcodeService {
    def sessionFactory
    def propertyInstanceMap = org.codehaus.groovy.grails.plugins.DomainClassGrailsPlugin.PROPERTY_INSTANCE_MAP

    def readMysql() {
        def zipcode_mysql = Zipcode.list();
        println("read, " + zipcode_mysql.size());
        return zipcode_mysql;
    }

    def writePgSql(zipcodes) {
        List<PGZipcode> zips = new ArrayList<PGZipcode>();
        println("attempting to save, " + zipcodes.size());
        def cntr = 0;
        zipcodes.each({ Zipcode zipcode ->
            cntr++;
            def props = zipcode.properties;
            PGZipcode zipcode_pg = new PGZipcode(zipcode.properties);
            if (!zipcode_pg.save(flush: false)) {
                zipcode_pg.errors.each {
                    println it
                }
            }
            zips.add(zipcode_pg)
            if (zips.size() % 100 == 0) {
                println("gorm begin" + new Date());
                // clear session here.
                this.cleanUpGorm();
                println("gorm complete" + new Date());
            }
        });
        // Save remaining
        this.cleanUpGorm();
        println("Final ." + new Date());
    }

    def cleanUpGorm() {
        def session = sessionFactory.currentSession
        session.flush()
        session.clear()
        propertyInstanceMap.get().clear()
    }
}
Much of this is taken from my own code and then tweaked to try and get similar performance as seen in http://naleid.com/blog/2009/10/01/batch-import-performance-with-grails-and-mysql/
So, in reviewing my code, whenever zipcode_pg.save() is invoked, an insert statement is created and sent to the database. Good for db consistency, bad for bulk operations.
What is the cause of my instant flushes (note: my DataSource and Config Groovy files have NO relevant changes)? At this rate, it takes about 7 seconds to process each batch of 100 (14 inserts per second), which, when you are dealing with tens of thousands of rows, is just a long time...
Appreciate the suggestions.
NOTE: I considered using a pure ETL tool, but with so much domain and service logic already built, I figured using Grails would be a good reuse of resources. However, I didn't imagine bulk operations would perform this poorly.
Without seeing your domain objects, this is just a hunch, but I might try specifying validate:false as well in your save() call. Validate() is called by save(), unless you tell Grails not to do that. For example, if you have a unique constraint on any field in your PGZipcode domain object, Hibernate has to do an insert on every new record to leverage the DBMS's unique function and perform a proper validation. Other constraints might require DBMS queries as well, but only unique jumps to mind right now.
From Grails Persistence: Transaction Write-Behind
Hibernate caches database updates where possible, only actually pushing the changes when it knows that a flush is required, or when a flush is triggered programmatically. One common case where Hibernate will flush cached updates is when performing queries, since the cached information might be included in the query results. But as long as you're doing non-conflicting saves, updates, and deletes, they'll be batched until the session is flushed.
Alternatively, you might try setting the Hibernate session's flush mode explicitly:
sessionFactory.currentSession.setFlushMode(FlushMode.MANUAL);
I'm under the impression the default flush mode might be AUTO.
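
To make the batching suggestion concrete, here is a minimal sketch of the same write-behind idea in plain Hibernate/Java terms (PGZipcode is the domain class from the question; the transaction and session handling here are simplified assumptions, not the exact GORM internals):

import org.hibernate.FlushMode;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

import java.util.List;

// Defer automatic flushing and push inserts to the database in batches of 100.
public class ZipcodeBatchWriter {
    public static void write(SessionFactory sessionFactory, List<PGZipcode> zips) {
        Session session = sessionFactory.getCurrentSession();
        session.setFlushMode(FlushMode.MANUAL);   // no automatic flush on queries
        Transaction tx = session.beginTransaction();

        int count = 0;
        for (PGZipcode zip : zips) {
            session.save(zip);                    // queued in the session, not yet sent
            if (++count % 100 == 0) {
                session.flush();                  // send this batch of inserts
                session.clear();                  // detach instances, keep the session small
            }
        }
        session.flush();                          // push any remainder
        session.clear();
        tx.commit();
    }
}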