I'm trying to write a function to clone a data set - that is, to create an identical copy of an existing data set, but with different primary keys.
I could do this by reading the records and copying the fields one at a time (or by using toArray() and fromArray(), unsetting the primary key and resetting any foreign keys along the way), but I was wondering if there's a built-in method for doing this.
I'm using Zend Framework 1.
Nothing like this is implemented out of the box in ZF1. You should create a custom Rowset class and define a custom clone method that can directly access the Rowset's data and apply whatever transformation you need, e.g. filtering or removing fields.
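For what it's worth, the manual approach is only a few lines. A minimal sketch, assuming a Zend_Db_Table_Abstract subclass (MyTable and the id column are placeholders):

    $table  = new MyTable();              // Zend_Db_Table_Abstract subclass
    $rowset = $table->fetchAll();

    foreach ($rowset as $row) {
        $data = $row->toArray();
        unset($data['id']);               // drop the primary key so a new one is generated
        // adjust any foreign keys here if the copies should point at different parents
        $table->createRow($data)->save();
    }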
We have an entity and a corresponding table in the database with one additional column which contains a digested hash of the entity's fields, calculated programmatically in the application each time. The entity has associations with two additional tables/entities whose fields also take part in the hashing.
Now a decision was made to get rid of one of the fields of the main entity (a boolean flag) and exclude it from hashing, since it makes two otherwise identical entities get different hashes when one entity has the flag set to true while the other has it set to false. Since the hashes are different, both entities get stored in the database, which is not what we want.
Removing the field is simple, but we also need to re-calculate the hashes for entities that are already stored in the database. Since there might be duplicates, we also need to get rid of one of the two duplicated entries. This whole operation must be done once, after the migration.
The stack we use is Quarkus, Flyway, Hibernate with Panache, and PostgreSQL. I have tried to use Flyway callbacks with Event.AFTER_MIGRATE to get all existing entities from the db, but I can't use Panache since it's not initialised yet by the time the callback runs. Using plain java.sql.* Connection and Statement is pretty cumbersome, because I need to fetch data from three tables, build the entity from all of the fields, re-calculate the hash and write it back, while taking care of possible conflicts. Another option would be to create a new REST API endpoint specifically for the job, which the client would have to call after the app has booted, but somehow I don't feel that that is the best solution.
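For reference, the callback approach I tried looks roughly like this (only a sketch against the Flyway 7+ Callback API; the table and column names are made up, and the hash re-calculation and duplicate handling still have to be done with plain JDBC):

    import org.flywaydb.core.api.callback.Callback;
    import org.flywaydb.core.api.callback.Context;
    import org.flywaydb.core.api.callback.Event;

    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class RehashCallback implements Callback {

        @Override
        public boolean supports(Event event, Context context) {
            return event == Event.AFTER_MIGRATE;
        }

        @Override
        public boolean canHandleInTransaction(Event event, Context context) {
            return true;
        }

        @Override
        public String getCallbackName() {
            return "RehashCallback";
        }

        @Override
        public void handle(Event event, Context context) {
            // Panache/Hibernate is not available here, so use the raw connection.
            try (Statement stmt = context.getConnection().createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT id, field_a, field_b FROM main_entity")) {
                while (rs.next()) {
                    // join in the two associated tables, re-compute the hash from the
                    // remaining fields, UPDATE the row, and drop one of any duplicates
                }
            } catch (SQLException e) {
                throw new IllegalStateException(e);
            }
        }
    }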
How do you tackle this kind of situation?
I am working with Sails.js and have generated an API model and controller. I am just wondering if I can post and create many entries of this model in a single request instead of using curl on the command line over and over. Also, does this RESTful interface support a delete method and an update method for multiple rows at once?
Thanks
Most of this information is in the docs http://sailsjs.org/#/documentation/reference/blueprint-api
You can create multiple records at once in a single post by default. Post an array of entries to create.
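For example, something like this (the user model and its attributes are just placeholders) creates three records in one request:

    curl -X POST http://localhost:1337/user \
         -H 'Content-Type: application/json' \
         -d '[{"name":"alice"},{"name":"bob"},{"name":"carol"}]'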
For update and delete, I believe you will need to tweak the blueprints to look for an array of ids. Waterline, the underlying ORM of Sails, supports update and delete on multiple rows, though watch out for breaking associations: http://sailsjs.org/#/documentation/reference/waterline/models/update.html?q=notes
In order to override blueprints, create your own blueprints in api/blueprints/, e.g. api/blueprints/update.js, and make them look for an array of ids. You'll probably want to start with the default blueprints: https://github.com/balderdashy/sails/tree/master/lib/hooks/blueprints/actions.
Also, you'll need to define your own routes, as the update and delete actions are by default bound to PUT 'controller/:id' and DELETE 'controller/:id' respectively, which inherently allow for only a single id.
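As a rough illustration only (not the stock blueprint code; the User model, the parameter names and the batch route are all assumptions), an api/blueprints/update.js that accepts an array of ids could look something like this, bound to a custom route such as 'PUT /user/batch' in config/routes.js:

    // api/blueprints/update.js -- simplified sketch, no validation or error handling
    module.exports = function updateMany(req, res) {
      var ids = req.param('ids');        // array of primary keys to update
      var values = req.param('values');  // attributes to set on every matched record

      User.update({ id: ids }, values).exec(function (err, updated) {
        if (err) return res.serverError(err);
        return res.json(updated);
      });
    };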
I am quite new to JPA. I have a particular repository that uses keys with some parts set by the caller and some values that are automatically calculated from those parts. There is a need for this :)
Since the keys and entities are simple Java classes, it appears to me that I need to put the code that modifies the key (or substitutes it with an internal one carrying the additional values) in the repository implementation. However, I do not think that copying the code from SimpleJpaRepository into my custom repositories is a good idea... I think something should be possible with the entity manager. Basically what I need is a proxy that gets called every time something like find() or delete() is called, takes the entity, updates its key, and passes the call over to the real repository implementation.
Could someone point me to the right direction or an example that does something similar?
Thanks!
In JPA you have a bunch of lifecycle events for this; just choose the one that suits you best. It sounds like you are looking for @PrePersist.
http://www.objectdb.com/api/java/jpa/annotations/callback
That said, if the data in these fields is calculated based only on the data of the other fields, storing it goes against database normalization. A more sensible approach would be to mark the calculated field @Transient and provide only a getter that calculates the value from the persistent fields.
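As a generic sketch of both suggestions (the Invoice entity and its amount fields are invented for illustration, not taken from the question):

    import java.math.BigDecimal;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.PrePersist;
    import javax.persistence.PreUpdate;
    import javax.persistence.Transient;

    @Entity
    public class Invoice {

        @Id
        private Long id;

        private BigDecimal netAmount;
        private BigDecimal taxAmount;

        // Option 1: keep a persistent column and fill it in a lifecycle callback.
        private BigDecimal totalAmount;

        @PrePersist
        @PreUpdate
        void computeTotal() {
            totalAmount = netAmount.add(taxAmount);
        }

        // Option 2: don't store the value at all; derive it from the persistent fields.
        @Transient
        public BigDecimal getTotal() {
            return netAmount.add(taxAmount);
        }
    }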
I'm working with a legacy database that I can't easily create an entity model over, because it uses extension tables with what are effectively composite keys, and EF only supports single-column keys when mapping one entity to multiple tables.
So, what I've decided to do is create updatable views (with INSTEAD OF triggers to handle CRUD operations) on top of the legacy tables (which cannot be touched) and then build my entity model (using either EF or DevExpress XPO) on top of the database views. This will also let me do things like easily adding sub-queries in the select clause to retrieve child counts when fetching a list of parent records in a single query.
However, I don't particularly want to write the SQL for all the views and triggers by hand, so I thought I'd use the data model defined in the .EDMX file and T4 templates to help me generate the bulk of the T-SQL needed to create the views and triggers. I assumed there would be some template I could use as a basis for this, but it seems that's not so easy to find.
Can someone please suggest a T4 template I could use as a basis, where the mappings are retrieved from the .EDMX? Alternatively, can anyone advise how to use the StorageMappingItemCollection to retrieve the mapping information from the EDMX file? I know a few people have said that apparently you can't use it, or that they just use LINQ to XML, but I would have thought it should be possible to use the StorageMappingItemCollection class as a strongly typed way to access this data.
Any examples of how I could use StorageMappingItemCollection to access mapping info would be very helpful. Thanks.
See http://brewdawg.github.io/Tiraggo.Edmx/. You can install it via NuGet within Visual Studio, and it serves up all of the metadata from your EDMX files that Microsoft hides from you. Very simple, works great.
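If you would still rather go through the raw EF metadata classes, here is a rough sketch of loading the CSDL/SSDL/MSL sections of an EDMX file into a StorageMappingItemCollection. It assumes an EDMX v3 file and the EF6 namespaces (System.Data.Entity.Core.*); the file path is a placeholder and older schema versions use different XML namespaces:

    using System.Data.Entity.Core.Mapping;
    using System.Data.Entity.Core.Metadata.Edm;
    using System.Linq;
    using System.Xml.Linq;

    static class EdmxMetadata
    {
        // Splits the EDMX into its conceptual, storage and mapping sections
        // and loads them into the corresponding metadata collections.
        public static StorageMappingItemCollection LoadMappings(string edmxPath)
        {
            var edmx = XDocument.Load(edmxPath);

            XNamespace csdl = "http://schemas.microsoft.com/ado/2009/11/edm";
            XNamespace ssdl = "http://schemas.microsoft.com/ado/2009/11/edm/ssdl";
            XNamespace msl  = "http://schemas.microsoft.com/ado/2009/11/mapping/cs";

            var edmItems = new EdmItemCollection(
                new[] { edmx.Descendants(csdl + "Schema").Single().CreateReader() });
            var storeItems = new StoreItemCollection(
                new[] { edmx.Descendants(ssdl + "Schema").Single().CreateReader() });

            return new StorageMappingItemCollection(
                edmItems, storeItems,
                new[] { edmx.Descendants(msl + "Mapping").Single().CreateReader() });
        }
    }

From there, calling GetItems<EntityContainerMapping>() on the result should expose the entity set and property mappings that a T4 template can walk to emit the view and trigger SQL.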
I've got an application with MVC and Entity Framework. The application uses the Unit of Work and Repository patterns for CRUD operations. But now I have to add a couple of stored procedures that already exist in the database. One of them just retrieves data from one of the entities (which is currently handled by the repository pattern) but adds an extra column to the final result, created and populated in the stored procedure.
I want to integrate these stored procedures into my architecture. I've tried adding the stored procedures to my model, mapping them to the class and using them, but since I have to add an extra column to this entity in the model, I get an error that the field is not mapped.
Should I use my repository for this particular entity just for Add/Edit/Delete and create another entity with the extra field that will be used for just the Get action using the stored procedure?
Thanks.
Should I use my repository for this particular entity just for Add/Edit/Delete and create another entity with the extra field that will be used for just the Get action using the stored procedure
It depends on the use case. It sounds like it's used for a different case, and if so I would create a new entity for it.
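For what it's worth, with EF6 a common way to split it is to keep the mapped entity for Add/Edit/Delete and add an unmapped read model that is filled from the stored procedure via Database.SqlQuery<T>. A sketch, where MyDbContext, CustomerWithBalance and dbo.GetCustomersWithBalance are invented names:

    using System.Collections.Generic;
    using System.Linq;

    // Unmapped read model: the entity's columns plus the extra computed column.
    public class CustomerWithBalance
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Balance { get; set; }   // produced only by the stored procedure
    }

    public class CustomerReadRepository
    {
        private readonly MyDbContext _context;   // your existing DbContext

        public CustomerReadRepository(MyDbContext context)
        {
            _context = context;
        }

        public List<CustomerWithBalance> GetAll()
        {
            // SqlQuery maps the result set onto the read model by column name;
            // the type does not have to be part of the EF model, so no mapping error.
            return _context.Database
                           .SqlQuery<CustomerWithBalance>("EXEC dbo.GetCustomersWithBalance")
                           .ToList();
        }
    }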