CakePHP custom data source "READ" return structure - REST

I'm in the process of developing a custom datasource to interface with a REST API. In the example provided on the CakePHP website they return the data from a READ operation in this structure:
Array(
    'ModelName' => Array(/* all the actual data here */)
)
The code looks like:
return array($model->alias => $results);
Is there any specific reason to return the results this way, or can I just return:
return $results;
My concern is that if I don't return data in the CakePHP-specific format I might not be able to use some other built-in functionality. I don't see anything specific about why this structure is needed. Any insight would be appreciated.

It comes down to CakePHP's data structure guidelines. The reason Cake uses the model name as the array key and the results as the value is that it makes it very easy to read when you have multiple tables returned in the same query - after all, Cake models are relational database maps and are built to be associative.
How you use the results from Cake's data results is up to you. Yes, there are times when the model name prefix annoys me and I find it useless, but most of the time it can be very useful to help you distinguish between multiple associated table results in one query.
If you don't think you'll ever need this and don't mind breaking Cake's data structure conventions, there's nothing wrong with breaking away from it - but if I were you, I would build your API interface so that it conforms exactly to the structure that the built-in datasources return (mainly for current and future compatibility).
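For illustration, here is a rough sketch of a read() method that keeps that structure (using the CakePHP 2.x datasource signature); fetchFromApi() is a made-up helper standing in for whatever REST call your datasource actually makes:

    public function read(Model $model, $queryData = array(), $recursive = null) {
        // fetchFromApi() is hypothetical - it represents your HTTP request
        // and should return an array of result rows
        $results = $this->fetchFromApi($queryData);

        // Keying the rows on the model alias keeps the return value consistent
        // with Cake's built-in datasources, e.g.
        // array('Order' => array(array('id' => 1, ...), array('id' => 2, ...)))
        return array($model->alias => $results);
    }

Returning plain $results may still work for your own code, but the alias-keyed form is what Cake's model layer conventions expect.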
More info on creating a REST API datasource is here in the manual.

Related

Breeze NotMapped (Database) properties - not working

This is actually a repost of a question that already exists, but I don't think it was properly understood, and for us it is really important to know if this is possible or if it will be:
https://stackoverflow.com/questions/16079703/how-would-one-go-with-saving-a-complex-object-graph-as-xml-in-sql-database-whil
So, what we would like to know is how to transfer entities/properties that are NOT MAPPED to the DB from Breeze to the server. For example, let's consider XML (I wouldn't want to generate XML in JS, but I do have XML DB columns that need to be populated from complex forms - so we will collect the data in Breeze/KO, transfer it to the server, and on the server we will process it and generate the XML from the non-mapped entities/properties).
P.S.
I see there is already a NoDB approach (http://www.breezejs.com/samples/nodb), so it would be really nice if we were able to make the two approaches work together (EF + NoDB).
As of Breeze v 1.3.6, there is now an EntityInfo.UnmappedValuesMap property available during the save that exposes all of the unmapped properties on any entity being saved.
Provided I'm understanding your question correctly, any properties declared as 'unmapped' on a Breeze entity do get transferred to the server on a save for exactly this purpose. You can intercept and work with this data within the server-side BeforeSaveEntity and BeforeSaveEntities methods.
There is more info here regarding "unmapped" properties:
http://www.breezejs.com/documentation/extending-entities

How to expose a read model from a shared module

I am working on developing a set of assemblies that encapsulate parts of our domain that will be shared by many applications. Using the example of an order management system, one such assembly will contain all of the core operations an application can perform to/with an order. We are applying a simple version of CQS/CQRS so that all operations that change the state of the "system" are represented as public commands, such as CancelOrderCommand, ShipOrderCommand and CreateOrderCommand. The command handlers are internal to the assembly.
The question I am struggling to answer is how to best expose the read model to consuming code?
The read model will be used by consuming code to perform queries. I don't know all of the ways the read model will be used, so the interface needs to be flexible enough to allow any query.
What complicates it for me is that I not only need to expose my aggregate root but there are also several "lookup" lists of related data that client applications may use. For example, each order has an associated OrderType which is data-driven (i.e., not an enum) and contains several properties that will drive some of our business rules that control what operations can/cannot be performed, etc. It is easy inside my module to manage this relationship; however, a client application that allows order creation will most likely need to display the list of possible OrderTypes to the user. As a result, I need to not only expose the list of Order aggregates but the supporting list of OrderTypes (and other lookup lists) from my read model.
How is this typically done?
I'm not sure what else to explain that will help trigger a solution, so please ask away...
I have never seen a CQRS-based implementation expose a full dataset for ad-hoc querying, so this is an interesting situation! In a typical CQRS scenario you would expose very specific queries, because you may want to raise events when they are called (for caching, for example - see this post for more details on that).
However, since this is your design, let's not worry about "typical" or "correct" CQRS; you just need a solution! One of the best new mechanisms I have seen for exposing data for flexible querying is the Open Data Protocol (OData). It will allow consumers to implement their own filtering, sorting and paging over a data source you expose.
Most implementations of this seem to deal with relational data. If you are dealing with a relational data source then OData might be a nice way to go. I suspect from your comment about "expose my aggregate root" that you might be using a document database? If so, there is one example I have seen of OData services on top of MongoDB: http://bloggingabout.net/blogs/vagif/archive/2012/10/11/mongodb-odata-provider-now-supports-arrays-and-nested-collections.aspx.
I hope that helps; OData is definitely worth looking into. It seems to be growing really quickly and is getting good support on both server and client technology platforms.

Is Core Data implementing the Data Mapper pattern?

I know that Core Data should not be considered an ORM, but it still offers functionality that is similar to an ORM. Just curious, is it implementing the Data Mapper pattern? I know that "The Data Mapper is a layer of software that separates the in-memory objects from the database. Its responsibility is to transfer data between the two and also to isolate them from each other." (Martin Fowler). IMHO the context handles all the SQL work in one transaction, so it's a very performance-wise design, and IMHO Core Data might be considered to implement the Data Mapper pattern.
One year later, I will contribute my two cents.
I am not an ORM expert and only recently started something using a Data Mapper, but as a long-time Core Data user I can say that it does not. The main objective of this pattern is having a clear separation of the domain object from all database-related operations.
Once I start writing unit tests, the first thing I notice is that I must load a database, even if it is just some in-memory store - but I must load one. Also, there are no mappers for each class, and I have no control over how each relation is stored.
Core Data loads lots of meta information about your object graph and forces some structure onto it. Although you can change the persistent store and bake something of your own, you will have lots of restrictions about how to do it, with a clear "relational" feeling to it.
The idea is good; we might say Core Data is some variation of it. Something that I do love is that the save operation is done by the context, not the object itself. So there is some type of separation.
However, look at functions like "awakeFromFetch" or "didSave": both operations are related to the data store, not to a plain domain object. A proper Data Mapper pattern would allow you to define those operations for each persistent store, not unify them in a single object.
UPDATE:
Funnily enough, one day after my answer I had to deal with an old Core Data based project and had to come back to improve this answer. To make things clear, I do consider that "seems like a pattern" is not enough. For example, the implementations of the Facade and Adapter patterns are quite similar, but you name them differently depending on how you use them.
Is Core Data implementing the Data Mapper pattern?
I must say that my "not quite" should have been "definitely not!"
I have just been very angry because I needed to rename some fields and later add new ones. Although I know quite well how automatic migrations work with Core Data, I had forgotten how annoying they are.
How many times do you need some new field, rename something, or experiment until you get it right... and every single tiny change requires a full-blown database migration? With Data Mappers this never happens, because domain objects are perfectly decoupled: you only touch the database to catch it up with the domain objects after you finish some new feature. Core Data forces you to bind every single detail of your domain objects at every single moment.
Boy, how sweet life was while I had forgotten that "tiny" annoyance of Core Data being the exact opposite of what you can achieve with Data Mappers.

ZF models: correct use

I am struggling to understand the correct usage of models. Currently I inherit from Db_Table directly and declare all the business logic there. I know this is not the correct way to do it.
One solution would be to use the Doctrine ORM, but this involves a learning curve, and all the components I currently use (paginator and auth) would need to be rewritten. Also, Doctrine 1 adds another dozen classes which need to be loaded.
So the cleanest implementation I have currently seen is to use Data Mapper classes between the so-called model and the Db_Table. I haven't implemented this yet, as it seems to head towards writing another ORM. But for an SQL table User, an example could be something like this:
a class with setters, getters and the business logic: /model/User.php
a data mapper: /model/mapper/UserMapper.php - basically all the update and save actions are written here
the data source: /model/DbTable/User.php, which extends Db_Table_Abstract
The problems are with the relationships between models.
I have found it beneficial to not have my models extend Db_Table, but to use composition instead. That means my model 'has a' Db_Table rather than 'is a' Db_Table.
That way I find it much easier to reference multiple tables in the same model, which is a common requirement. This is enough for a simple project. I am currently developing a more complex application and have used the Data Mapper pattern and have found that it has simplified my code more than I would have believed.
Specifically, I have created a class which provides all access to the database and exposes methods such as getUser(), etc. That way, if the DB changes, or my client wants something daft like storing records in XML, or we split the servers or something, I only have to rewrite one class.
Again, my models do not extend this class, but have an instance of it assigned as a property during construction.
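As a minimal sketch of that composition approach (class and method names like UserGateway and getUser() are only illustrative), it could look something like:

    // All database access goes through this one class; if the storage ever
    // changes (XML files, a web service, a separate server), only this
    // class needs to be rewritten.
    class UserGateway
    {
        private $_table;

        public function __construct(Zend_Db_Table_Abstract $table)
        {
            $this->_table = $table;
        }

        public function getUser($id)
        {
            $row = $this->_table->find($id)->current();
            return $row ? $row->toArray() : null;
        }
    }

    // The model *has a* gateway (composition) rather than *being* a Db_Table.
    class Application_Model_User
    {
        private $_gateway;

        public function __construct(UserGateway $gateway)
        {
            $this->_gateway = $gateway;
        }

        public function load($id)
        {
            return $this->_gateway->getUser($id);
        }
    }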
I would say the 'correct' way depends on the situation. Following the YAGNI and KISS principles, it is not good to over-complicate your model setup unless you really believe that it will benefit you in the long run.
What is the application you are developing? How is your current setup of extending Db_Table holding you back?

Zend_Auth_Adapter using a data mapper

First post here, so sorry if I seem like a newb.
I am trying to find a way to use Zend_Auth_Adapter with a data mapper, but seem to be struggling. I know I can use Zend_Auth_Adapter_DbTable and associate this with a db table, but this seems to negate the whole reason for having a data mapper (I think)?! Should I be creating a custom adapter for the mapper so that I can use the mapper to choose whatever I want as my data source?
Good question. The proper way to do that would be to roll your own Zend_Auth_Adapter. I have done so for Doctrine (my ORM).
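For reference, a bare-bones sketch of such an adapter for ZF1 could look like the following; the mapper, its findByUsername() method and the checkPassword() call are assumptions to be replaced with your own mapper's API:

    class My_Auth_Adapter implements Zend_Auth_Adapter_Interface
    {
        private $_mapper;
        private $_username;
        private $_password;

        public function __construct($mapper, $username, $password)
        {
            $this->_mapper   = $mapper;    // any data mapper exposing a user lookup
            $this->_username = $username;
            $this->_password = $password;
        }

        public function authenticate()
        {
            // findByUsername() and checkPassword() are hypothetical mapper/domain methods
            $user = $this->_mapper->findByUsername($this->_username);

            if ($user === null) {
                return new Zend_Auth_Result(Zend_Auth_Result::FAILURE_IDENTITY_NOT_FOUND, null);
            }
            if (!$user->checkPassword($this->_password)) {
                return new Zend_Auth_Result(Zend_Auth_Result::FAILURE_CREDENTIAL_INVALID, null);
            }
            return new Zend_Auth_Result(Zend_Auth_Result::SUCCESS, $this->_username);
        }
    }

You then pass an instance of it to Zend_Auth::getInstance()->authenticate($adapter), exactly as you would with Zend_Auth_Adapter_DbTable, and the mapper stays free to use whatever it wants as the data source.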
I also use the data mapper pattern throughout my application, but I do not use it for my authentication. It adds a lot of needless overhead imo. I just query the database directly using my Auth_Adapter.