My question seems similar to this one but the exact details as well as the solution seem to have changed over the years. (For one thing, the ObjectStateManager referenced in the accepted answer is apparently no longer accessible.)
I have a web app that displays an object graph. One of the objects in the graph is a list. The user can edit the items in the list. They can also add new items. When they save their changes, I serialize the whole object graph (including a field indicating whether values were added or modified) and send it to a controller to update the database. I send the added items with uninitialized IDs. In the controller, I deserialize the object graph and pass it to DbContext.Attach(). Then I set the entity states of the attached entries before calling SaveChanges().
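Roughly, the controller flow looks like the following sketch (EF Core assumed; OrdersController, OrderGraph, Items, IsNew, and AppDbContext are placeholder names, not my real types):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    private readonly AppDbContext _context;   // placeholder DbContext type

    public OrdersController(AppDbContext context) => _context = context;

    [HttpPost("save")]
    public async Task<IActionResult> Save([FromBody] OrderGraph graph)
    {
        // Attach the deserialized graph; with several new items sharing the
        // same uninitialized Id, this is the call that complains.
        _context.Attach(graph);

        foreach (var item in graph.Items)
        {
            // A client-supplied flag indicates whether the row is new or edited.
            _context.Entry(item).State = item.IsNew
                ? EntityState.Added
                : EntityState.Modified;
        }

        await _context.SaveChangesAsync();
        return Ok();
    }
}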
This works with any number of changed rows, and it works with a single added row. The database assigns an ID when a row is inserted. But Attach() does not accept multiple added rows with the same uninitialized ID.
Is there a way to suppress the validation of IDs in Attach()? It seems like EF should leave that up to the database. Failing that, how should I go about this?
I'm a researcher in Loren Frank's lab at UCSF using DataJoint and files in the NWB format. I made some changes to our code for defining entries in our ElectrodeGroup table, and was hoping to test those by deleting an entry in the table and regenerating it with the new code. I was able to delete the entry, but cannot repopulate it. In particular, when I run ElectrodeGroup.populate() or ElectrodeGroup.populate({"nwb_file_name": my_file_name}), no changes are made to the table. I confirmed that the electrode group I deleted and am trying to regenerate is defined in the original NWB file. I am seeking input on why the populate command does not seem to be working here. Thanks in advance for any help!
This user also contacted our team through another channel. Sharing the solution below for future users, in reference to this schema. In short, the populate process is driven by unique upstream primary keys.
Since the ElectrodeGroup table's only upstream dependency is Session, the make method will only be called if there are no electrode groups yet for that session. From DataJoint's perspective, the only 'guaranteed' knowledge about what should exist in this table is defined solely by the presence or absence of related upstream records. Because the 'new' primary attribute 'electrode_group_name' is defined by the ElectrodeGroup table itself, DataJoint doesn't know how many copies make will create, so it simply invokes make once per Session and expects that single invocation to fully define all possible electrode_group_name values the table will use. If at least one electrode group already exists for that session, no work needs to be done, so no make() invocation occurs.
There are a couple possible solutions:
Model the electrode group explicitly, with a table that defines the existence of an electrode group (e.g., ElectrodeGroupConfiguration). ElectrodeGroup would then inherit primary keys from both Session and ElectrodeGroupConfiguration. The ElectrodeGroup make function would be adjusted to load the unique keys across the upstream tables.
Adjust the make function to handle the partial insert/update case, and call the make function directly with the desired primary key when these kinds of 'abnormal' updates need to occur (see the sketch below).
Method #1 is 'cleanest' with respect to the DataJoint data model (explicitly modeled data dependencies using make/populate), whereas #2 'escapes' the DataJoint data model slightly, in a controlled way, to achieve a desired schema/data result.
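For illustration, a minimal sketch of option #2; this is a hedged example, not the schema's actual code, and it reuses the key from the question:

# Hedged sketch of option #2: invoke make() directly with the desired key.
# ElectrodeGroup and my_file_name are the table and file name from the question,
# assumed to already be imported/defined in this session.
key = {"nwb_file_name": my_file_name}

# populate() skips a session as soon as any ElectrodeGroup rows exist for it,
# so calling make() directly forces regeneration of the deleted entries.
ElectrodeGroup().make(key)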
I have a CheckBox on a TabPage of my form. When I select the CheckBox, the value is saved in a table field (the table is in my FormDataSource: ParametersTable).
I want to refresh the form when I enter the TabPage, just like pressing F5.
Is it possible?
There is a great article about the different methods of refreshing the form's data here. Here is the basic outline:
1. Refresh
This method basically refreshes the data displayed in the form controls with whatever is stored in the form cache for that particular datasource record. Calling the refresh() method will NOT re-read the record from the database, so if changes happened to the record in another process, they will not be shown after executing refresh().
2. Reread
Calling reread() will query the database and re-read the current record contents into the datasource form cache. This will not display the changes on the form until a redraw of the grid contents happens (for example, when you navigate away from the row or re-open the form).
You should not use it to refresh the form data if you have added or removed records through code.
3. Research
Calling research() will rerun the existing form query against the database, therefore updating the list with new/removed records as well as updating all existing rows. This will honor any existing filters and sorting on the form that were set by the user.
4. ExecuteQuery
Calling executeQuery() will also rerun the query and update/add/delete the rows in the grid. ExecuteQuery should be used if you have modified the query in your code and need to refresh the form to display the data based on the updated query.
I strongly recommend that you read the article. Try to use some of the methods above or some combination of them.
Start with the research() method; it might solve your problem:
formDataSource.research();
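For example, a hedged sketch of wiring that up when the user enters the tab page (AX 2012-style X++ assumed; ParametersTable_ds is the auto-declared datasource variable for your ParametersTable, and the override lives on the TabPage control):

// Override on the TabPage control that holds the CheckBox.
public void pageActivated()
{
    super();

    // Re-run the existing form query, keeping the user's filters and sorting;
    // passing true tries to keep the cursor on the current record.
    ParametersTable_ds.research(true);
}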
I'm trying to build a caching system for a feed reading application. The idea is that each time a new feed is successfully pulled, all stored entities are removed from Core Data and the first twenty items of the feed are stored (this is used as an offline cache).
The issue I'm running into is my managed object context may have hundreds of items in it when a pull to refresh is performed. I'd like to keep those items in the context while removing any stored items from Core Data and then store the twenty items returned from the refresh call.
For what it's worth, I'm using Magical Record. I've tried looking around for this solution, but either I'm using the wrong keywords or the information is hard to find.
I'm not sure what code to show exactly, but here's the handling of the feed call:
for (id dict in feedArray) {
    WFeedItem *item = [WFeedItem feedItemWithAttributes:[dict dictionaryByReplacingNullsWithBlanks]
                                 inManagedObjectContext:[NSManagedObjectContext defaultContext]];
    [parsedArray addObject:item];
}
This gets passed back from the subclassed HTTPClient it's defined in to the view controller that called it. Bear in mind that this all works fine; it's just a matter of deleting stored items while retaining everything I've gathered during this session in the context.
Just use a different context for importing and storing the new records. Your original object context can remain as it is.
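A rough sketch of that idea with MagicalRecord 2.x (method names assume the MR_ prefixed categories; adjust to your setup):

// Import in a separate background context; the objects already loaded in the
// default context are left alone.
[MagicalRecord saveWithBlock:^(NSManagedObjectContext *localContext) {
    // Remove the previously cached items from the store.
    [WFeedItem MR_truncateAllInContext:localContext];

    // Store the first twenty items of the freshly pulled feed.
    NSArray *firstTwenty = [feedArray subarrayWithRange:NSMakeRange(0, MIN(20, feedArray.count))];
    for (id dict in firstTwenty) {
        [WFeedItem feedItemWithAttributes:[dict dictionaryByReplacingNullsWithBlanks]
                   inManagedObjectContext:localContext];
    }
} completion:^(BOOL contextDidSave, NSError *error) {
    // Changes have been pushed to the persistent store here; refresh the UI if needed.
}];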
Is there a way to detect that a record being inserted is the result of a clone operation in a trigger?
As part of a managed package, I'd like to clear out some of the custom fields when Opportunity and OpportunityLineItem records are cloned.
Or is a trigger not the correct place to prevent certain fields being cloned?
I had considered creating dedicated code to invoke sObject.Clone() and excluding the fields that aren't required. This doesn't seem like an ideal solution for a managed package as it would also exclude any other custom fields on Opportunity.
In the Winter '16 release, Apex has two new methods that let you detect if a record is being cloned and from what source record id. You can use this in your triggers.
isClone() - Returns true if an entity is cloned from something, even if the entity hasn’t been saved.
getCloneSourceId() - Returns the ID of the entity from which an object was cloned.
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_methods_system_sobject.htm#apex_System_SObject_getCloneSourceId
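For example, a hedged sketch of using it in a trigger (the custom field names are placeholders):

// Hedged sketch: blank out packaged custom fields when the record comes from a clone.
// Custom_Field_One__c and Custom_Field_Two__c are hypothetical placeholders.
trigger ClearFieldsOnClone on Opportunity (before insert) {
    for (Opportunity opp : Trigger.new) {
        if (opp.isClone()) {
            // opp.getCloneSourceId() is also available here if the source record is needed.
            opp.Custom_Field_One__c = null;
            opp.Custom_Field_Two__c = null;
        }
    }
}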
One approach, albeit kind of kludgy, would be to create a new field, say original_id__c, which gets populated by a workflow (or trigger, depending on your preference regarding the order of execution) when blank, with the Salesforce Id of the record. For new records this field will match the standard Salesforce Id; for cloned records it won't. There are a number of variations on when, how, and what to populate the field with, but the key is to give yourself your own hook to differentiate new and cloned records.
If you're only looking to control the experience for the end user (as opposed to a developer extending your managed package), you can override the standard Clone button with a custom page that clears the values for a subset of fields using URL hacking. There are some caveats, namely that each field must be editable and visible on the page layout for the user who clicked the Clone button. As of this writing I don't believe you can package standard button overrides, but the list of what's possible changes with every release.
You cannot detect a clone operation inside a trigger; it is treated as an "Insert" operation.
You can still use dedicated code to invoke sObject.Clone() and exclude the fields that aren't required. You can ensure that you include all fields by using the sObject describe information to get hold of all fields for that object, and then exclude the fields that are not required.
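A hedged sketch of that idea (the field names to clear are placeholders, and sourceId is assumed to be the Id of the Opportunity being cloned):

// Hedged sketch: query every field via describe, clone, then clear the unwanted ones.
Set<String> fieldsToClear = new Set<String>{'custom_field_one__c', 'custom_field_two__c'};

// Build a query that selects every field on Opportunity.
Map<String, Schema.SObjectField> fieldMap = Schema.SObjectType.Opportunity.fields.getMap();
String soql = 'SELECT ' + String.join(new List<String>(fieldMap.keySet()), ', ')
            + ' FROM Opportunity WHERE Id = :sourceId';
Opportunity source = (Opportunity) Database.query(soql)[0];

// clone(preserveId, isDeepClone, preserveReadonlyTimestamps, preserveAutoNumber)
Opportunity copy = source.clone(false, true, false, false);
for (String fieldName : fieldsToClear) {
    copy.put(fieldName, null);
}
insert copy;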
Hope this makes sense!
Anup
I have a list of ids of rows that should be selected, but not the actual objects that will be selected. For example, I know Users 16 and 25 should be selected, but I don't have an instance representing them. This could be because they're on a different page of data that I haven't loaded yet.
I want to be able to select these users programmatically even though their data is not loaded yet. I'm implementing a function called setSelectedIds() and it's working great: I scan all visible objects, and if an object's id matches one of the ids in my set, I mark it as selected. Likewise, if the user changes a selection through the UI, I catch the SelectionChangeEvent and determine whether an id should be added to or removed from my backing list of ids.
The actual question:
Is there an event that's always fired when data has been loaded via updateRowData()? The only thing missing from my implementation is a way to handle the loading of new data. I need to be notified when new data is loaded, so I can decide whether to select it or not. RangeChangeEvents happen too soon: their handlers are fired before the data is loaded, and selectionModel.getSelected() returns some null objects. RowCountChangeEvents only happen when the total number of rows changes. What am I missing?
Can't you implement your own SelectionModel? When it's asked whether an object is selected, it compares the object's ID with your list of selected IDs. You could even generalize it by using the object's key (given by the ProvidesKey) rather than a hard-coded getId.
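A hedged sketch of such a SelectionModel (built on GWT's SelectionModel.AbstractSelectionModel; the class name and key handling are placeholders, not code from your project):

import com.google.gwt.view.client.ProvidesKey;
import com.google.gwt.view.client.SelectionModel.AbstractSelectionModel;
import java.util.HashSet;
import java.util.Set;

// Selection is backed by a set of keys rather than loaded object instances,
// so rows whose data arrives later are picked up automatically.
public class IdBackedSelectionModel<T> extends AbstractSelectionModel<T> {

    private final Set<Object> selectedKeys = new HashSet<Object>();

    public IdBackedSelectionModel(ProvidesKey<T> keyProvider) {
        super(keyProvider);
    }

    @Override
    public boolean isSelected(T object) {
        // Asked by the CellTable/DataGrid for each rendered row.
        return selectedKeys.contains(getKey(object));
    }

    @Override
    public void setSelected(T object, boolean selected) {
        if (selected) {
            selectedKeys.add(getKey(object));
        } else {
            selectedKeys.remove(getKey(object));
        }
        scheduleSelectionChangeEvent();
    }

    // Select rows by key even if their objects are not loaded yet (e.g. another page).
    public void setSelectedKeys(Set<?> keys) {
        selectedKeys.clear();
        selectedKeys.addAll(keys);
        scheduleSelectionChangeEvent();
    }
}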