Backbone Project Approach - iPhone

I would like to make an application with Backbone.js. I understand the basics of Backbone, but I don't really know what the right approach to my problem might be.
I have a big JSONP response that is retrieved from the server. The next step would be to put the data from that response into a model. The data is blog-like, containing an image URL, a title, and text.
Now I could start a new model like this:
var modelVar = new BackboneModel();
However, would that mean I need to create a new variable for every post I want to retrieve, or could I let Backbone create a set of models containing the post data?
Any suggestions for books/blogs are welcome.
Thanks

A quick answer would be "no". You can let Backbone load the data into models by using a Backbone Collection.
E.g.
new App.Photos([
    {url: "http://(...)_1.png", title: "photo1"},
    {url: "http://(...)_2.png", title: "photo2"},
    {url: "http://(...)_3.png", title: "photo3"}
]);
You just have to pass an array of objects as the argument when you create your collection. Backbone will automatically create models based on the model attribute defined on the collection object. It is particularly well suited to your needs, because you just have to pass in the parsed JSON response and your models will be created.
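For completeness, here is a minimal sketch of what such a collection definition could look like (App.Photos, the Photo model, and the endpoint URL are assumed names for illustration, not taken from your code), including a JSONP-friendly fetch:

// Hypothetical model and collection; adjust names and URL to your app.
var Photo = Backbone.Model.extend({});

App.Photos = Backbone.Collection.extend({
    model: Photo,                     // each element of the array becomes a Photo
    url: "http://example.com/posts",  // assumed JSONP endpoint

    // Backbone delegates to jQuery.ajax, so JSONP can be requested per sync call.
    sync: function (method, collection, options) {
        options = _.extend({dataType: "jsonp"}, options);
        return Backbone.sync(method, collection, options);
    }
});

var photos = new App.Photos();
photos.fetch();  // one collection, one model instance per post

With this in place you never declare a variable per post; the collection holds one model per element of the response.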
I suggest Backbone.Marionette, which is a good choice for starting a Backbone implementation and picking up best practices.
https://github.com/derickbailey/backbone.marionette


What benefits do I get from using ODataModel vs. JSONModel?

I'm reading data from HANA using a JSONModel and simply passing in the URL to the source and retrieving it as in the following:
var data = new sap.ui.model.json.JSONModel(urlPath);
Then I can bind it to my view: this.getView().setModel(data);
I have also seen the following way of doing it, where an ODataModel is created and then the JSONModel is created from the data.
var oModel = new sap.ui.model.odata.ODataModel(urlPath);
oModelJson = new sap.ui.model.json.JSONModel();
oModel.read("/Items",
    null,
    ["$filter=ImItems eq 'imputParameter'"],
    null,
    function(oData, oResponse) {
        oModelJson.setData(oData);
    },
    null
);
What difference is there between creating the ODataModel first and creating the JSONModel directly? Assuming I'm getting back about 5,000 data points from the database, which approach should I use, or would there be no difference?
JSONModel is a client-side model that holds JSON data and binds it to the view.
ODataModel is a model implementation for the OData protocol.
It allows CRUD operations on OData entities; JSONModel doesn't support Create/Update/Delete/Batch operations.
So, coming to your scenario, I would suggest always using the ODataModel for CRUD operations (including read). You can then use a JSONModel to bind the data to the view.
Note that it's better to have one ODataModel per app and multiple JSONModels bound to views.
Consider using ODataModel V2, and since you have mentioned that you are dealing with 5K data points, you probably don't need all the data in the UI at once. Use setSizeLimit to make sure you have set a proper upper bound.
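As a minimal sketch of that setup (the service URL is a placeholder and the limit is just an example value):

// Assumed OData V2 service URL; replace with your own service.
var oModel = new sap.ui.model.odata.v2.ODataModel("/sap/opu/odata/sap/ZMY_SRV/");
// List bindings only request up to 100 entities by default;
// raise the limit only as far as a binding really needs.
oModel.setSizeLimit(500);
this.getView().setModel(oModel);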
Both models can be used without conflict. In fact, most applications will use both.
You want to use the OData model to send/retrieve data from the server. The OData model will handle building the URLs for you. For instance, if you want to filter, sort, or use paging in your data without the OData model, you will need to build the URL yourself.
yourUrl.com/EntitySet?$filter=Property1 eq 'Value'&$orderby=...&$top=... etc.
This will be difficult without the OData model, and makes the application more difficult to maintain and debug. Let the OData model do that for you:
ODataModel.read("/EntitySet", {
    filters: [new Filter("Property1", "EQ", "Value")]
});
The greatest benefit of the OData model in my opinion, though, is binding directly from the XML views.
<List items="{/EntitySet}">
    <items>
        <StandardListItem title="{objectTitle}"/>
    </items>
</List>
This will automatically call the backend, retrieve the data from the entity set, and bind it to the list. No need to construct any URLs, make any calls, etc.
Using a JSON model to retrieve the data from an OData service will only make things more difficult than they have to be.
But... that being said... the JSON model is a very powerful tool. You can use it to store configuration data or any data you want to hold in the UI and manipulate. You can use the JSON model as sort of a mini-database in your application that can pass data globally across your application.
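For instance, a purely client-side JSONModel might look like this (the model name and properties here are made up for illustration):

// Hypothetical client-side state model.
var oUiState = new sap.ui.model.json.JSONModel({
    busy: false,
    selectedTab: "overview",
    draftComment: ""
});
// Register it as a named model so any view or controller can reach it.
sap.ui.getCore().setModel(oUiState, "uiState");
// Later, from anywhere in the app:
sap.ui.getCore().getModel("uiState").setProperty("/busy", true);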
To summarize, you should use the OData model to get/send data. You should use the JSON model for local data storage. There will not be conflicts trying to use both.
One major difference between both of them is:
A lot of controls in SAPUI5, for instance the SmartTable, bind automatically to OData entities, meaning they dynamically create the columns and rows based on the OData metadata XML file. In this scenario, you cannot use a JSONModel.
IMHO, I would go with OData because of this "automatic binding" that a lot of SAPUI5 components have. But I also ran into scenarios where the OData entities were not structured well, meaning the "automatic binding" of some SAPUI5 components did not work as expected.
In those scenarios, I had to get the JSON out of the OData response, create/remove a few properties, and then bind the result to the SAPUI5 component in question.

CakePHP custom data source "READ" return structure

I'm in the process of developing a custom datasource to interface with a REST API. In the example provided on the CakePHP website they return the data from a READ operation in this structure:
Array(
    'ModelName' => Array(all the actual data here)
)
The code looks like:
return array($model->alias => $results);
Is there any specific reason to return the results this way or can I just return as:
return $results;
My concern is that if I don't return in the CakePHP specific format I might not be able to use some other built in functionality. I don't see anything specific about the need for this structure. Any insight would be appreciated.
It comes down to CakePHP's data structure guidelines. The reason Cake uses the model name as the array key and the results as the value is that it makes the data very easy to read when you have multiple tables returned in the same query - after all, Cake models are relational database maps and are built to be associative.
How you use the results from Cake's data results is up to you. Yes, there are times when the model name prefix annoys me and I find it useless, but most of the time it can be very useful to help you distinguish between multiple associated table results in one query.
If you don't think you'll ever need this and don't mind breaking Cake's data structure conventions, there's nothing wrong with breaking away from it - but if I were you I would create your API interface in a way that conforms exactly to the structure the built-in datasources return (mainly for current and future compatibility reasons).
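As a rough sketch of what that convention looks like in a custom datasource (the class name and the _request() helper are hypothetical), the read() method would wrap the rows under the model alias:

// Hypothetical REST datasource; only the return structure matters here.
class RestSource extends DataSource {
    public function read(Model $model, $queryData = array(), $recursive = null) {
        // Hypothetical helper that calls the REST API and returns an array of rows.
        $results = $this->_request($queryData);

        // Wrap the rows under the model alias so associations and other
        // built-in behaviour find the data where they expect it.
        return array($model->alias => $results);
    }
}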
More info on creating a REST API datasource is here in the manual.

How to update object in Mongo with an immutable Salat case class

I'm working on a project with Scala, Salat, Casbah, Mongo, Play2, BackboneJS... But it's quite a lot of new things to learn at the same time... I'm OK with Scala, but I find my code crappy and I don't really know how to improve it.
Basically my usecase is:
A MongoDB object is sent to the browser's JS code by Play2
The JS code updates the object data (through a Backbone model)
The JS sends the updated JSON back to the server (sent by Backbone's save method, and received by Play with a json body parser)
The JSON received by Play should update the object in MongoDB
Some fields should not be updatable for security reasons (object id, creationDate...)
My problem is the last part.
I'm using case classes with Salat as a representation of the objects stored in MongoDB.
I don't really know how to handle the JSON I receive from the JS code.
Should I bind the JSON into the Salat case class and then ask Mongo to overwrite the previous object data with the full new case class object?
If so is there a way with Play2 or Salat to automatically create back the case class from the received JSON?
Should I handle my JSON fields individually with $set for the fields I want to update?
Should I make the elements of my case class mutable? That's what we actually do in Java with Hibernate, for example: get the object from the DB, change its state, and save it. But it doesn't seem to be the appropriate way to do it in Scala...
If someone can give me some advices for my usecase it would be nice because I really don't know what to do :(
Edit: I asked a related question here: Should I represent database data with immutable or mutable data structures?
Salat handles JSON using lift-json - see https://github.com/novus/salat/wiki/SalatWithPlay2.
Play itself uses Jerkson, which is another way to decode your model objects - see http://blog.xebia.com/2012/07/22/play-body-parsing-with-jerkson/ for an example.
Feel free to make a small sample Github project that demonstrates your issue and post to the Salat mailing list at https://groups.google.com/group/scala-salat for help.
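As a minimal lift-json sketch of that binding (the PostUpdate case class and the JSON string are made up for illustration):

import net.liftweb.json._

// Hypothetical case class describing only the fields the client may send.
case class PostUpdate(title: String, text: String)

implicit val formats = DefaultFormats

// `body` would be the raw JSON text received from Backbone's save().
val body = """{"title": "new title", "text": "updated text"}"""
val update = parse(body).extract[PostUpdate]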
There are really two problems in your question:
How do I use Play Salat?
How do I prevent updates to certain fields?
The answer to your first question lies in the Play Salat documentation. Your second question could be answered a few ways.
a. When the update is pushed to the server from Backbone, you could grab the object id and find it in the database. At that point you have both copies of the object, and you can fire a business rule to make sure the sender didn't attempt to change those fields.
or
b. You could put some of your fields in another document or an embedded document. The client would have access to them for rendering purposes, but your API wouldn't allow them to be pushed back to Mongo.
or
c. You could write a custom update query that ignores the fields you don't want changed.
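For option (c), a minimal Casbah sketch (the collection and field names are made up for illustration) that only touches a whitelist of fields:

import com.mongodb.casbah.Imports._

// Hypothetical collection and field names.
val posts = MongoClient()("blog")("posts")

def updatePost(id: ObjectId, title: String, text: String) =
  posts.update(
    MongoDBObject("_id" -> id),
    // Only $set the fields the client may change; _id, creationDate, etc.
    // are simply never mentioned here, so they cannot be overwritten.
    $set("title" -> title, "text" -> text)
  )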
Actually the answer is pretty simple: I didn't know there was a built-in copy method on case classes that lets you copy an immutable case class while changing some of its data.
I don't have nested case class structures, but Tony Morris's suggestion of using lenses seems nice too.
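For reference, a minimal sketch of that copy approach (the Post case class and its fields are made up): load the stored object, copy the client-editable fields onto it, and keep the protected ones from the original.

import java.util.Date

// Hypothetical case class mirroring the MongoDB document.
case class Post(id: String, creationDate: Date, title: String, text: String)

// `stored` comes from MongoDB, `incoming` was bound from the client's JSON.
def merge(stored: Post, incoming: Post): Post =
  stored.copy(title = incoming.title, text = incoming.text)
  // id and creationDate are taken from `stored`, so the client cannot overwrite them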

How to best store object in a CoreData relationship property that may be of many different types?

I need to store an activity feed in an iOS application. Activity feed items will have a payload field which can be one of many (and I really mean many) types of entities in the system.
What is a good way to implement this payload relationship field on the Activity entity in my CoreData model?
Is it possible to use the id data type, or maybe use an NSManagedObject type?
One way to work around this may be to just store Core Data's entity ID as a string in a special field, but I'd rather avoid that if there is a better way.
Example:
For simplicity, let's say we have a not-so-standard blogging model: User, Blog, BlogPost, Comment, and the following activities may happen:
User may create a new blog.
User may publish a new blog post.
A blog can be commented on.
A comment may be liked.
etc.
Each of these generates a new Activity item on the website, which in turn has a payload relation to the item that was modified or acted on.
Now I need to download, translate, and store these activity feed items from the website in my iPhone application... so how do I mimic this payload field, since it may be pointing to any possible entity?
In my real code, though, there are about 10+ types of entities that could be put into this payload field so I'm looking for a good approach here.
If you don't need to search/query the fields of your objects of variable type, then I suggest using NSCoder to convert them into a binary representation and storing them in a BLOB field of your managed object. You might want to store some type information as well in another field of the same managed object. On the other hand, if you need to search across these variable objects, then you have to create a new managed object type (entity) for each object type. See my answer also here: NSCoding VS Core data
The only thing that you can use is NSManagedObject. So you have to create your model and your relationship, and create new files for Activity and the payload as subclasses of NSManagedObject.
Take a look at the Core Data Programming Guide.
You will find your answers in there.

Saving complex form data in zend framework using doctrine 2

Doctrine 2 integration into ZF seems to make simple things very hard and time consuming (at least for me).
I can't just give a submitted form array to Doctrine to automatically map key/value pairs to Doctrine entities, and it gets very complicated if I have a many-to-many entity and the submitted form has nested arrays.
In Symfony, submitted form keys/values are easily and automatically mapped and saved to Doctrine tables. I don't know how to do that in ZF, especially if I have many-to-many and/or many-to-one Doctrine entities and nested form elements which need multi-level iteration.
I don't want to set every entity explicitly and create every entity object manually.
The pain would be a lot less if I used ZF's native database architecture.
I have done some coding and now it's done half-automatically, but it is not very useful.
I think the best solution is to use PHP's Reflection API to inject/retrieve values from your entities (using smart get/set detection as well). I started a little library called ObjectSerializer to help with the process but never finished. That said, if you look at the logic contained in these two classes you might get a good idea where to start.
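A rough sketch of that Reflection idea (entirely hypothetical, not taken from ObjectSerializer): copy an associative array of form values onto an entity via its setters.

function hydrate($entity, array $values)
{
    $reflection = new ReflectionClass($entity);
    foreach ($values as $key => $value) {
        // e.g. 'title' => setTitle()
        $setter = 'set' . ucfirst($key);
        if ($reflection->hasMethod($setter)) {
            $reflection->getMethod($setter)->invoke($entity, $value);
        }
    }
    return $entity;
}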
You should program this sort of logic into your models.
I usually add this sort of logic to my form, for example:
class Application_Form_SomeModel extends Zend_Form
{
    // the usual stuff

    public function populateModel(\Entity\Model $model)
    {
        $model->setSomething($this->getValue('something'));

        $subForm = $this->getSubForm('name');

        $relatedModel = new \Entity\RelatedModel;
        $relatedModel->setSomething($subForm->getValue('something'));

        $model->getRelatedModels()->add($relatedModel);
        // assuming the related model collection has cascade persist,
        // otherwise you'll need to pass in the entity manager to persist
        // the new model
    }
}
I do it this way because the form's elements are really only known to the form. This allows you to encapsulate everything to do with the form and its element names within the form, without having to assume anything else knows this information.