Save Workflow in Database - workflow

All
Is there any way I can serialize a Workflow, save it in a database, and load it later on?
How can I save an instance of a workflow in a database in C#?
Thanks

This is called workflow persistence. Here is a simple example to get you started. Essentially, when your workflow idles (there are certain points at which this is possible, e.g. a Delay Activity), if persistence is enabled, your workflow will create a resumption point (i.e. a bookmark), unload and persist to the instance store you've set up (SQL DB, XML file, etc.). When you resume, everything is loaded the way it was before.
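To make that concrete, here is a minimal sketch of the pattern (assuming .NET 4.x with the System.Activities and System.Activities.DurableInstancing assemblies, an instance store database already created with the SQL Workflow Instance Store scripts, and a placeholder connection string):
using System;
using System.Activities;
using System.Activities.DurableInstancing;
using System.Activities.Statements;
using System.Threading;

class Program
{
    static void Main()
    {
        var store = new SqlWorkflowInstanceStore(
            @"Server=.;Database=WorkflowInstanceStore;Integrated Security=True"); // placeholder

        // A workflow that idles on a Delay, which is a persistable point.
        Activity workflow = new Sequence
        {
            Activities =
            {
                new WriteLine { Text = "Before the delay" },
                new Delay { Duration = TimeSpan.FromMinutes(5) },
                new WriteLine { Text = "After the delay" }
            }
        };

        var app = new WorkflowApplication(workflow) { InstanceStore = store };
        var unloaded = new ManualResetEvent(false);

        // When the workflow goes idle at a persistable point, persist and unload it.
        app.PersistableIdle = e => PersistableIdleAction.Unload;
        app.Unloaded = e => unloaded.Set();

        Guid id = app.Id;   // keep this so the instance can be resumed later
        app.Run();
        unloaded.WaitOne();

        // Later, possibly after a process restart: load the same instance
        // from the database and let it continue where it left off.
        var resumed = new WorkflowApplication(workflow) { InstanceStore = store };
        resumed.Load(id);
        resumed.Run();
        Console.ReadLine();
    }
}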

Related

Odoo - Why does Odoo store data of TransientModel like res.config.settings into ir.config_parameter?

The res.config.settings model is a TransientModel, and TransientModels are designed to store temporary data; after some time their records are removed from the database. To save those settings we have to go through the set_param and get_param methods of ir.config_parameter.
So here I want to know why Odoo uses ir.config_parameter to store those setting values. Why can't we make res.config.settings a regular Model instead of a TransientModel and store the data in that model?
I'm not Odoo, so I cannot answer for them, but I can explain why I consider this logic sound.
In odoo, you have the traditional MVC paradigm, but there are places where you don't really have a model: you just want a view and a controller. This is the case for wizards in odoo. Wizards are just helper dialogs to let the user perform hard tasks in an easier way.
However, if they have no model, how do you define the data they use? Where do you attach controller logic? It would be very hard and would require a whole new programming paradigm inside Odoo, so they took an easier solution: transient models.
So in odoo a transient model is basically a wizard. What happens when you open a wizard?
1. Odoo obtains the view definition.
2. It calls the controller to obtain the default data (default_get()).
3. It draws a form for you, but there's no record in the database yet.
4. You can alter the form as usual, and the normal compute and onchange methods work out of the box.
5. You hit some button in the wizard to perform an action.
6. Odoo creates a new record in the wizard table.
7. The method associated with that button is executed on the server.
The next time you open the wizard, it follows the same procedure. As you can imagine, this creates a lot of records, because a wizard record is always used only once, so there's a cron job that deletes that garbage every now and then. That's why they're called "transient".
The purpose of ir.config_parameter is to store and retrieve database-wide configuration. 1 record = 1 parameter. You can create parameters without using a wizard. These are not transient, so it's the proper place to store parameters.
The purpose of res.config.settings is to help you configure the system, but that's more than just storing system parameters: it involves configuring auth methods, installing modules, giving you shortcuts for mail gateways configuration, installing a theme in each website, choosing a chart of accounts for each company, etc.
It would make no sense to merge all of that in ir.config_parameter.

Event logging for auditing with replay

I need to implement an auditing log for GDPR compliance so that we have a record of every consent given or revoked (an event) per user of our system. It has to store how & when it happened alongside things like what the wording of the consent actually was at the time.
So that we can recover from a backup restore, this log will be stored separately from our main DB. We will then need to be able to update the state of the user consent so that it accurately reflects the event log (i.e. the last known value (true/false) of each consent question per user)
I could simply do this using a second postgres instance (our main DB is postgres) with a single table to store the information and then some simple application code to log each event as well as update the main DB. There could also be some simple application logic to find the last known states of each consent from the event log and update the master DB.
To me it seems like a bit of overkill to use Postgres to store this info, though adding a new technology to store it also seems like overkill. Are there any technologies that are more suitable for this sort of thing? It sounds a lot like event sourcing to me.
If you're already running Postgres, it doesn't seem like overkill, given that the log needs to be online and queryable. Something like Kafka is often a natural fit for this kind of problem, but that's even more overkill.
This bears a passing resemblance to event sourcing, but on a really small scale. Event sourcing usually means that all your data is expressed in terms of events, and replayed from beginning to end to materialize the current state.
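For illustration only, a small C# sketch of that replay idea, with a made-up ConsentEvent shape (one entry per consent given or revoked):
using System;
using System.Collections.Generic;
using System.Linq;

// Made-up shape of one audit-log entry (C# 9 record syntax).
public record ConsentEvent(Guid UserId, string Question, bool Granted,
                           string Wording, DateTimeOffset OccurredAt);

public static class ConsentProjection
{
    // Replays the whole event log and materializes the last known
    // answer to each consent question per user.
    public static Dictionary<(Guid UserId, string Question), bool> CurrentState(
        IEnumerable<ConsentEvent> events) =>
        events
            .OrderBy(e => e.OccurredAt)
            .GroupBy(e => (e.UserId, e.Question))
            .ToDictionary(g => g.Key, g => g.Last().Granted);
}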
Could you elaborate on this?:
So that we can recover from a backup restore, this log will be stored separately from our main DB.
Doesn't your main database recover from a backup / restore?

Configuring Quartz.net Tasks

I want to be able to set up one or more Jobs/Triggers when my app runs. The list of jobs and triggers will come from a DB table. I DO NOT care about persisting the jobs to the DB for restarting or tracking purposes; basically I just want to use the DB table as an init device. Obviously I can do this by writing the code myself, but I am wondering if there is some way to use the SQLJobStore to get this functionality without the overhead of keeping the DB updated throughout the life of the app using the scheduler.
Thanks for your help!
Eric
The job store's main purpose is to store the scheduler's state, so there is no built-in way to do what you want. You can always write directly to the tables if you want, and this will give you the results you want, but it isn't really the best way to do it.
The recommended way to do this would be to write some code that reads the data from your table and then connects to the scheduler using remoting to schedule the jobs.
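As a rough sketch of the scheduling part (Quartz.NET 3.x async API assumed; JobDefinition and ReportJob are made-up names, the rows are assumed to have been read from your table already, and with the default in-memory RAMJobStore nothing is persisted):
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

// Made-up shape of one row in the init table.
public record JobDefinition(string Name, string Group, string CronExpression);

public class ReportJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        Console.WriteLine($"Running {context.JobDetail.Key}");
        return Task.CompletedTask;
    }
}

public static class SchedulerBootstrap
{
    // Reads nothing itself: pass in whatever rows you loaded from the DB at startup.
    public static async Task<IScheduler> StartAsync(IEnumerable<JobDefinition> definitions)
    {
        // The default configuration uses the in-memory RAMJobStore, so nothing is persisted.
        IScheduler scheduler = await new StdSchedulerFactory().GetScheduler();
        await scheduler.Start();

        foreach (var def in definitions)
        {
            IJobDetail job = JobBuilder.Create<ReportJob>()
                .WithIdentity(def.Name, def.Group)
                .Build();

            ITrigger trigger = TriggerBuilder.Create()
                .WithIdentity(def.Name + "-trigger", def.Group)
                .WithCronSchedule(def.CronExpression)
                .Build();

            await scheduler.ScheduleJob(job, trigger);
        }

        return scheduler;
    }
}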

Make sure Entity Framework always reads from the database?

I have this application that is actually two applications: a web application and a console application. The console application is used as a scheduled task on the Windows machine and is executed three times a day to do some recurring work. Both applications use the same model and repository, which are placed in a separate project (class library). The problem is that when the console application needs to make some changes to the database, it updates the model entity and saves the changes to the database, but the context in the web application is unaware of this, so the object context is not refreshed with the new/updated data and the users of the application cannot see the changes.
My question is: Is there a way to tell the ObjectContext to always load data from the database, either for the whole ObjectContext or for a specific query?
/Regards Vinblad
I don't think you should have this problem in a web application. The ObjectContext in a web application should be created per request, so only requests being processed during the update should be affected.
Anyway, there are a few methods which can force the ObjectContext to reload data. Queries and load functions allow passing a MergeOption, which should be able to overwrite current data. But the most interesting should be the Refresh method, especially in this application.
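For example, something along these lines (a sketch only; MyContext, Orders and Order are placeholders for your ObjectContext and one of its entity sets, EF 4.x ObjectContext API assumed):
using System.Data.Objects;   // MergeOption, RefreshMode
using System.Linq;

public static class FreshReads
{
    public static Order LoadFreshOrder(int orderId)
    {
        using (var context = new MyContext())
        {
            // Make queries on this set overwrite cached values with database values.
            context.Orders.MergeOption = MergeOption.OverwriteChanges;
            var order = context.Orders.First(o => o.Id == orderId);

            // Alternatively, refresh an already-attached entity, letting the store win.
            context.Refresh(RefreshMode.StoreWins, order);
            return order;
        }
    }
}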
By using a DbSet you can also make use of the .AsNoTracking() method.
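For instance (again a sketch; MyDbContext, Entities and MyEntity are placeholders, EF 4.1+ DbContext API assumed):
using System.Data.Entity;   // AsNoTracking
using System.Linq;

public static class NoTrackingReads
{
    public static MyEntity LoadFresh(int id)
    {
        using (var context = new MyDbContext())
        {
            // AsNoTracking materializes entities straight from the query results,
            // bypassing any copies already cached by the change tracker.
            return context.Entities
                          .AsNoTracking()
                          .FirstOrDefault(e => e.Id == id);
        }
    }
}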
Whenever you run something like
context.Entities.FirstOrDefault()
or whatever query against the context, the data is actually fetched from the database, so you shouldn't be having a problem.
What is your ObjectContext lifetime in the webapp? The ObjectContext is a UnitOfWork, so it should be only created to fetch/write/update data and disposed quickly afterwards.
You can find a similar question here:
Refresh ObjectContext or recreate it to reflect changes made to the database?
FWIW, creating a new (anonymous) object in the query also forces a round trip to the database:
' queries from memory
context.Entities.FirstOrDefault()
' queries from db
context.Entities.Select(Function(x) New With {x.ID, x.Name}).FirstOrDefault()
Please forgive the VB, it's my native language :)

Patterns for accessing remote data with Core Data?

I am trying to write a Core Data application for the iPhone that uses an external data source. I'm not really using Core Data to persist my objects but rather for the object life-cycle management. I have a pretty good idea on how to use Core Data for local data, but have run into a few issues with remote data. I'll just use Flickr's API as an example.
The first thing is that if I need say, a list of the recent photos, I need to grab them from an external data source. After I've retrieved the list, it seems like I should iterate and create managed objects for each photo. At this point, I can continue in my code and use the standard Core Data API to set up a fetch request and retrieve a subset of photos about, say, dogs.
But what if I then want to continue and retrieve a list of the user's photos? Since there's a possibility that these two data sets might intersect, do I have to perform a fetch request on the existing data, update what's already there, and then insert the new objects?
--
In the older pattern, I would simply have separate data structures for each of these data sets and access them appropriately: a recentPhotos set and a usersPhotos set. But since the general pattern of Core Data seems to be to use one managed object context, it seems (I could be wrong) that I have to merge my data with the main pool of data. But that seems like a lot of overhead just to grab a list of photos. Should I create a separate managed object context for the different set? Should Core Data even be used here?
I think that what I find appealing about Core Data is that before (for a web service) I would make a request for certain data and either filter it in the request or filter it in code and produce a list I would use. With Core Data, I can just get list of objects, add them to my pool (updating old objects as necessary), and then query against it. One problem, I can see with this approach, however, is that if objects are externally deleted, I can't know, since I'm keeping my old data.
Am I way off base here? Are there any patterns people follow for dealing with remote data and Core Data? :) I've found a few posts of people saying they've done it, and that it works for them, but little in the way of examples. Thanks.
You might try a combination of two things. This strategy will give you an interface where you get the results of a NSFetchRequest twice: Once synchronously, and once again when data has been loaded from the network.
1. Create your own subclass of NSFetchRequest that takes an additional block property to execute when the fetch is finished. This is for your asynchronous request to the network. Let's call it FLRFetchRequest.
2. Create a class to which you pass this request. Let's call it FLRPhotoManager. FLRPhotoManager has a method executeFetchRequest: which takes an instance of the FLRFetchRequest and...
3. Queues your network request based on the fetch request and passes along the retained fetch request to be processed again when the network request is finished.
4. Executes the fetch request against your CoreData cache and immediately returns the results.
Now when the network request finishes, update your core data cache with the network data, run the fetch request again against the cache, and this time, pull the block from the FLRFetchRequest and pass the results of this fetch request into the block, completing the second phase.
This is the best pattern I have come up with, but like you, I'm interested in other's opinions.
It seems to me that your first instincts are right: you should use fetchrequests to update your existing store. The approach I used for an importer was the following: get a list of all the files that are eligible for importing and store it somewhere. I'm assuming here that getting that list is fast and lightweight (just a name and an url or unique id), but that really importing something will take a bit more time and effort and the user may quit the program or want to do something else before all the importing is done.
Then, on a separate background thread (this is not as hard as it sounds thanks to NSRunLoop and NSTimer, google on "Core Data: Efficiently Importing Data"), get the first item of that list, get the object from Flickr or wherever and search for it in the Core Data database (carefully read Apple's Predicate Programming Guide on setting up efficient, cached NSFetchRequests). If the remote object already lives in Core Data, update the information as necessary, if not insert. When that is done, remove the item from the to-be-imported list and move on to the next one.
As for the problem of objects that have been deleted in the remote store, there are two solutions: periodic syncing or lazy, on-demand syncing. Does importing a photo from Flickr mean importing the original thing and all its metadata (I don't know what the policy is regarding ownership etc) or do you just want to import a thumbnail and some info?
If you store everything locally, you could just run a check every few days or weeks to see if everything in your local store is present remotely as well: if not, the user may decide to keep the photo anyway or delete it.
If you only store thumbnails or previews, then you will need to connect to Flickr each time the user wants to see the full picture. If it has been deleted, you can then inform the user and delete it locally as well, or mark it as not being accessible any more.
For a situation like this you could use Cocoa's archiving facilities to save the photo objects (and an index) to disk between sessions, and just overwrite it all every time the app calls home to Flickr.
But since you're already using Core Data, and like the features it provides, why not modify your data model to include a "source" or "callType" attribute? At the moment you're implicitly creating a bunch of objects with source "Flickr API", but you can just as easily treat the different API calls as unique sources and then store that explicitly.
To handle deletion, the simplest way would be to clear the data store each time it's refreshed. Otherwise you'd need to iterate over everything and only delete the photo objects with filenames that weren't included in the new results.
I'm planning to do something similar to this myself so I hope this helps.
PS: If you're not storing the photo objects between sessions at all, you could just use two different contexts and query them separately. As long as they're never saved, and the central store doesn't have anything in it already, it would work just like you describe.