First post here, so sorry if I seem like a newb.
I am trying to find a way to use Zend_Auth_Adapter with a data mapper, but I'm struggling. I know I can use Zend_Auth_Adapter_DbTable and associate it with a DB table, but that seems to negate the whole reason for having a data mapper (I think)?! Should I be creating a custom adapter for the mapper, so that I can use the mapper to choose whatever I want as my data source?
Good question. The proper way to do that would be to roll your own Zend_Auth_Adapter. I have done so for Doctrine (my ORM).
I also use the data mapper pattern throughout my application, but I do not use it for my authentication. It adds a lot of needless overhead imo. I just query the database directly using my Auth_Adapter.
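For reference, here is a minimal sketch of what such a custom adapter could look like. Zend_Auth_Adapter_Interface and Zend_Auth_Result are real ZF1 classes; the mapper, its findByUsername() method, and the checkPassword() helper are hypothetical placeholders for your own code.

    <?php
    // Sketch only: swap in your own mapper and domain object.
    class My_Auth_Adapter_Mapper implements Zend_Auth_Adapter_Interface
    {
        private $_mapper;
        private $_username;
        private $_password;

        public function __construct($mapper, $username, $password)
        {
            $this->_mapper   = $mapper;
            $this->_username = $username;
            $this->_password = $password;
        }

        // The only method the interface requires.
        public function authenticate()
        {
            // The mapper decides what the data source is (DB, web service, ...).
            $user = $this->_mapper->findByUsername($this->_username);

            if ($user === null) {
                return new Zend_Auth_Result(
                    Zend_Auth_Result::FAILURE_IDENTITY_NOT_FOUND, null);
            }

            // checkPassword() is assumed to live on your domain object.
            if (!$user->checkPassword($this->_password)) {
                return new Zend_Auth_Result(
                    Zend_Auth_Result::FAILURE_CREDENTIAL_INVALID, null);
            }

            return new Zend_Auth_Result(Zend_Auth_Result::SUCCESS, $user->getId());
        }
    }

You would then pass an instance to Zend_Auth::getInstance()->authenticate(), exactly as you would with Zend_Auth_Adapter_DbTable.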
Related
I am trying to sync data via GraphQL from my backend. For that I use the artemis package to generate my classes. However, I then want to cache the data locally, so I use the sqfentity_gen package to generate classes for saving my data via SQL. I can use a JSON constructor with each framework to convert the data between them. However, I want to encapsulate certain functionality, since I don't want to just save changed data locally but also sync it to the backend and handle certain errors like merge conflicts or a missing network. I am therefore thinking about wrapping the generated classes with another class, since I can't change the generated code. Is this a good idea, or are there other solutions which work better? Would you use a completely different setup? I'd be glad for any suggestions.
Instead of generating and/or regenerating (updating) classes based on DB tables (I assume), you can use an out-of-the-box solution, for example NReco.GraphQL.
It allows you to define the DB schema in a JSON file and just set the DB connection. If you have a lot of tables, you can generate that JSON file from the table metadata.
Storing and updating generated classes is, from my point of view, useless.
I need to read data from an existing database. Is it possible using
compile "org.grails.plugins:db-reverse-engineer:4.0.0"?
My operations are: the user should read data from an existing table, create a new record, create a new column, edit a column name, and edit records.
The view format will be a grid, like an XML grid.
Which technology is best for these operations in Grails? I plan to work in JavaScript using JAX-RS; is that a good approach?
The DB Reverse Engineering plugin (org.grails.plugins:db-reverse-engineer:4.0.0) allows you to generate domain classes from an existing DB. After you generate them, just use GORM to perform the CRUD operations. You can read about GORM here.
You can implement a REST API in Grails using the standard ways; check this answer to get a high-level understanding. If you need JAX-RS, there is a plugin for that.
I have written a wrapper around ADO.NET's DbProviderFactory that I use extensively throughout my applications. I have also written a lot of code that maps IDataReader rows to POCOs. However, as I have tons of classes, the whole thing is getting to be a pain in the ass to maintain.
I have been looking at replacing the whole shebang with a micro-ORM like PetaPoco. I have a few queries though:
I have lots of POCOs that contain other POCOs as properties. How well does PetaPoco support this?
Should I use an ORM like Massive or Simple.Data that returns a dynamic object, and map that to a POCO?
Are there any other approaches I can take to mapping rows to POCOs? I can't really use convention-based tools, as my database isn't particularly consistent in how it is designed.
How about using a text-templating/code generator to build out a lightweight persistence layer? I have a battle-hardened open-source project called TextMetal that generates the necessary persistence layer based on tried-and-true architectural decisions. The only thing lacking is object-to-object relations, but it does support query expressions and works well with poorly designed data schemas.
You can see a real-world project that uses the above tool, called Can Do It For.
Feel free to ask me about any design decisions once you take a look-see.
Simple.Data automagically casts its dynamic type to static types. It will map nested properties as long as they have been eager-loaded using the .With method. So for example
Customer customer = db.Customer.WithOrders().Get(42);
would populate the Orders property of the customer object.
Could you use QueryFirst, or modify it? It takes your SQL and wraps it in vanilla ADO.NET code, generated at design time. You get fresh POCOs from your result schema every time you save your file. Additionally, you can choose to test all queries and regenerate all wrappers via the option in the Tools menu. It's dependent on SQL Server and SqlClient, so unless you do some modification, you'll lose DbProviderFactory.
I know that Core Data should not be considered an ORM, but it still offers functionality similar to an ORM's. Just curious: is it implementing the data mapper pattern? I know that "the Data Mapper is a layer of software that separates the in-memory objects from the database. Its responsibility is to transfer data between the two and also to isolate them from each other" (Martin Fowler). IMHO the managed object context handles all the SQL stuff in one transaction, so it's a very performance-conscious design, and IMHO Core Data might be considered to implement the data mapper pattern.
One year later, I will contribute my two cents.
I am not an ORM expert and only recently started something using a data mapper, but as a long-time Core Data user I can say: no. The main objective of this pattern is a clean separation of the domain objects from all database-related operations.
Once I start writing unit tests, the first thing I notice is that I must load a database; even if it is just some in-memory store, I do have to load one. Also, there are no per-class mappers, and I have no control over how each relation is stored.
Core Data loads lots of meta-information about your object graph and forces some structure onto it. Although you can change the persistent store and bake something of your own, you will have lots of restrictions on how to do it, with a clear "relational" feeling to it.
The idea is good; we might say Core Data is some variation of it. Something that I do love is that the save operation is done by the context, not by the object itself. So there is some kind of separation.
However, look at functions like awakeFromFetch or didSave: both operations are tied to the data store, not to a plain domain object. A proper data mapper pattern would allow you to define those operations for each persistent store, not unify them in a single object.
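To make the contrast concrete, here is a minimal sketch of the pattern itself (in PHP rather than Objective-C, purely as an illustration; all names here are made up): the fetch/save hooks live on a per-store mapper, while the domain object stays plain.

    <?php
    // Plain domain object: no store knowledge, no lifecycle hooks.
    class User
    {
        public $id;
        public $name;
    }

    // One mapper per persistent store; the "awakeFromFetch"/"didSave"-style
    // logic belongs here, not on the domain object.
    interface UserMapperInterface
    {
        public function find($id);
        public function save(User $user);
    }

    class SqlUserMapper implements UserMapperInterface
    {
        public function find($id)
        {
            // Run a SELECT and hydrate a User; post-fetch logic goes here.
            $user = new User();
            $user->id = $id;
            return $user;
        }

        public function save(User $user)
        {
            // Run an INSERT/UPDATE; post-save logic goes here.
        }
    }

An XmlUserMapper or InMemoryUserMapper could then define different fetch/save behaviour without touching User at all.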
UPDATE:
Funnily enough, one day after writing my answer I had to deal with an old Core Data-based project, and I had to come back and improve this answer. To make things clear, I do consider that "seems like a pattern" is not enough. For example, the implementations of the facade and adapter patterns are quite similar, but you name them differently depending on how you use them.
Is Core Data implementing the data mapper pattern?
I must say that my "not quite" should have been "definitely not!"
I have just been very angry because I needed to rename some fields and later add new ones. Although I know quite well how automatic migrations work with Core Data, I had forgotten how annoying they are.
How many times do you need some new field, rename something, or experiment until you get it right... and every single tiny change requires a full-blown database migration? With data mappers this never happens, because the domain objects are perfectly decoupled. You only touch the database to catch up with the domain objects after you finish some new feature. Core Data forces you to bind every single detail of your domain objects at every single moment.
Boy, how sweet life was; I had forgotten that "tiny" annoyance of Core Data being the exact opposite of what you can achieve with data mappers.
I am struggling to understand the correct usage of models. Currently I inherit from Db_Table directly and declare all the business logic there. I know that's not the correct way to do it.
One solution would be to use the Doctrine ORM, but that has a learning curve, and all the components I currently use (paginator and auth) would need to be rewritten. Doctrine 1 also adds another dozen classes which need to be loaded.
So the cleanest implementation I have seen is to use data mapper classes between the so-called model and the DbTable. I haven't implemented this yet, as it seems to head towards writing another ORM. But for an SQL table user, an example could be something like this (rough sketch below):
a class with setters, getters, and the business logic: /model/User.php
a data mapper: /model/mapper/UserMapper.php; the functionality is basically all the update and save actions written here
the data source: /model/DbTable/User.php, which extends Db_Table_Abstract
The problems are with relationships between the models.
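A rough sketch of what I mean (the class names just follow ZF1 conventions; everything else is a placeholder):

    <?php
    // /model/User.php -- plain domain object, business logic only
    class Model_User
    {
        private $_id;
        private $_email;

        public function getId()          { return $this->_id; }
        public function setId($id)       { $this->_id = $id; return $this; }
        public function getEmail()       { return $this->_email; }
        public function setEmail($email) { $this->_email = $email; return $this; }
    }

    // /model/DbTable/User.php -- the data source
    class Model_DbTable_User extends Zend_Db_Table_Abstract
    {
        protected $_name = 'user';
    }

    // /model/mapper/UserMapper.php -- moves data between the two
    class Model_Mapper_User
    {
        private $_table;

        public function __construct()
        {
            $this->_table = new Model_DbTable_User();
        }

        public function save(Model_User $user)
        {
            $data = array('email' => $user->getEmail());

            if ($user->getId() === null) {
                $user->setId($this->_table->insert($data));
            } else {
                $this->_table->update($data, array('id = ?' => $user->getId()));
            }
        }

        public function find($id)
        {
            $row = $this->_table->find($id)->current();
            if (!$row) {
                return null;
            }
            $user = new Model_User();
            $user->setId($row->id)->setEmail($row->email);
            return $user;
        }
    }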
I have found it beneficial to not have my models extend Db_Table, but to use composition instead. That means my model 'has a' Db_Table rather than 'is a' Db_Table.
That way I find it much easier to reference multiple tables in the same model, which is a common requirement. This is enough for a simple project. I am currently developing a more complex application and have used the data mapper pattern, and I have found that it has simplified my code more than I would have believed.
Specifically, I have created a class which provides all access to the database and exposes methods such as getUser(), etc. That way, if the DB changes, or my client wants something daft like storing records in XML, or we split the servers or something, I only have to rewrite one class.
Again, my models do not extend this class, but have an instance of it assigned as a property during construction.
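A minimal sketch of that composition approach (DataGateway, getUser(), and the model class are made-up names for illustration; it assumes a default ZF1 DB adapter has been set up):

    <?php
    // The single class that knows about the database.
    class DataGateway
    {
        private $_userTable;

        public function __construct()
        {
            // Zend_Db_Table accepts the table name; uses the default adapter.
            $this->_userTable = new Zend_Db_Table('user');
        }

        // If storage ever moves to XML or another server, only this class changes.
        public function getUser($id)
        {
            return $this->_userTable->find($id)->current();
        }
    }

    // The model *has a* gateway rather than *is a* Db_Table.
    class UserModel
    {
        private $_gateway;

        public function __construct(DataGateway $gateway)
        {
            $this->_gateway = $gateway;
        }

        public function loadUser($id)
        {
            return $this->_gateway->getUser($id);
        }
    }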
I would say the 'correct' way depends on the situation. Following the YAGNI and KISS principles, it is not good to over-complicate your model setup unless you really believe that it will benefit you in the long run.
What is the application you are developing? How is your current setup of extending Db_Table holding you back?