What are the most notable things that make working with Core Data different from MySQL? - iPhone

One thing I have in mind is that datasets in Core Data (or let's say: managed objects) have no ID like those known from other databases such as MySQL. Also, they are not kept in any specific, guaranteed order.
What else makes Core Data so "special" compared to working with a relational database like MySQL, besides the whole object graph persistence and ORM stuff?
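For what it's worth, here is a minimal Swift sketch of that difference, assuming a hypothetical Note entity with a title attribute: managed objects expose an opaque objectID rather than a numeric primary key, and fetch results come back in no guaranteed order unless you ask for one.

```swift
import CoreData

// A minimal sketch, assuming a hypothetical "Note" entity with a "title" attribute.
func fetchNotes(in context: NSManagedObjectContext) throws -> [NSManagedObject] {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Note")
    // There is no auto-increment primary key column as in MySQL; each managed
    // object carries an opaque, store-managed NSManagedObjectID instead.
    // Results also have no guaranteed order unless you add a sort descriptor:
    request.sortDescriptors = [NSSortDescriptor(key: "title", ascending: true)]
    let notes = try context.fetch(request)
    for note in notes {
        print(note.objectID)   // opaque identifier, not a numeric row ID
    }
    return notes
}
```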

This is a good article explaining the main differences. The big one for me is:
'Core Data cannot operate on data without loading the data into memory'
This alone makes Core Data and MySQL suited to totally different tasks.
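To make that quote concrete, here is a rough Swift sketch; the Order entity and its createdAt/isArchived attributes are made up for the example. A bulk change that would be a single UPDATE statement in MySQL requires faulting each matching object into memory with a plain Core Data fetch.

```swift
import CoreData

// Illustrative only: the "Order" entity and its attributes are assumptions.
// In MySQL this would be one UPDATE ... WHERE statement executed inside the
// database; here every matching object is materialised in memory first.
func archiveOldOrders(before cutoff: Date, in context: NSManagedObjectContext) throws {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Order")
    request.predicate = NSPredicate(format: "createdAt < %@", cutoff as NSDate)
    request.fetchBatchSize = 200   // limits how many rows are faulted in at once
    for order in try context.fetch(request) {
        order.setValue(true, forKey: "isArchived")
    }
    try context.save()
}
```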

The big difference, I would say, is that Core Data is built on an ORM (an object-relational mapping layer), while MySQL is just a relational database. You could actually host Core Data on MySQL if Apple wanted to let you. Instead they use an embedded SQLite store or an XML representation, depending on what you want for your backing store.
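As a small illustration of the backing-store point (the model and store URL here are placeholders for your own setup), this is roughly how a SQLite store is attached; note that the XML store type is only available on macOS, not iOS.

```swift
import CoreData

// Sketch only: `model` and `storeURL` are placeholders.
func makeCoordinator(model: NSManagedObjectModel, storeURL: URL) throws -> NSPersistentStoreCoordinator {
    let coordinator = NSPersistentStoreCoordinator(managedObjectModel: model)
    // SQLite is the usual on-disk backing store on iOS; an XML store
    // (NSXMLStoreType) only exists on macOS.
    try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                       configurationName: nil,
                                       at: storeURL,
                                       options: nil)
    return coordinator
}
```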

Related

Multiple databases in EF6

We are involved in a fairly new development in which we are rebuilding our current web shop platform.
In the current platform we do not use EF6 or any other ORM; we access the database through stored procedures. In the new platform, however, EF6 is what we use.
We have a question regarding the database design of the new platform. In the current platform we use several different databases, depending on their content.
For example, we have dedicated databases to store information for product catalogs and other dedicated databases for handling orders.
Currently all data access is done through stored procedures, so we have no problem with the links between the different databases.
The problem appears now that we have started to use EF6. In this case each database is associated with a context, and it is not possible to relate data from one context to another
unless we implement those relationships directly in the source code using several contexts. It looks like this means we will lose much of the power of EF6.
The questions we have are:
Is it a bad design to maintain different databases for the same application when using EF6?
In case this is a poor design and we choose a single database, will performance still be acceptable even with hundreds of tables (almost 1000) holding several terabytes of information?
On the other hand, if we opt for the design in which several databases remain (which would suit our case much better), what is the best way to handle them in EF6?
Thank you very much for your help!
First of all, EF is not written to be cross-database. You can't write cross-database (cross-context) queries, lazy loading does not work across contexts, and so on.
This is a big limitation in your case.
EF can work with several schemas (I don't use this myself and I don't like it, but that's just my opinion).
You can use your stored procedures with EF, but as I understand it you are planning to stop using them.
In my experience I have written several applications with more than one database, but the use of the different databases was very limited. In those cases I used cross-database views (i.e. one database per company and some common tables, with views in the company databases that select data from the common tables). In your case, if the tables are sharded everywhere, I don't think this is an option you can choose.
So, in my opinion, you could change the approach.
If you have backup problems you could shard the huge tables (I am thinking of fact tables and tables with pictures) and create cross-database views. BTW, cross-database referential integrity is not supported in SQL Server, so you would need to write triggers to enforce it.
If you need to split different application functions (i.e. WMS, CRM and so on) you can use namespaces without worrying about how the tables are stored in the DB.

Jsondb performance

Good day
I'm using QtJsonDb from http://qt-project.org/wiki/Building_QtJsonDb_from_Git as a JsonDb backend NoSQL database.
It used to work very well, but now I have over 10,000 records and it's becoming very, very slow.
I'm saving somewhat complex objects to the db.
1- How fast should the db be when retrieving the details?
2- Is there a 3rd-party application or framework where I can load the JSON files, test the queries on them as well, and see how the performance is there?
Thanks!
Look at MongoDB; it can store data as JSON and it has the ability to add custom indexes for quick retrieval.

EF Inheritance and DBA Concerns

For a new project, our application developers want to use Entity Framework's table-per-type (TPT) inheritance model.
We recently showed this functionality and the resulting table schema to our DBA, and he has expressed concerns; I'm wondering how to address them. Inheritance is an important part of OO, and from a development standpoint it would be great to have the DB and ORM support this concept natively. This functionality is part of EF, so it's not as if we're pulling the design out of left field.
His main concerns are:
We're not using stored procs
The added complexity will make reporting and data updates harder
We've pretty much addressed his stored proc concerns (and we've been using another ORM for 3 years now).
As far as the complexity, I do see his point, but the counterpoints address them (for me):
Reporting should not be performed against transactional tables (we currently do this); views or a transformed reporting DB should be used instead.
Data updates on a flatter structure can still mess up data -- it's the responsibility of the person updating the data to understand the structure. The schema used by EF's table-per-type inheritance model isn't that complex, but it must be adhered to when doing manual updates.
I know we're not the first to run into DBA concerns over DB-backed model inheritance. How have others convinced their DBA that this is a good model?
His main concerns do not address the real problems with TPT.
You can use stored procedures with TPT if you want.
Data updates are not harder. EF will deal with them and ensure the correct order of data modification.
The main problem with TPT is inefficient queries (check the comments as well). TPT in EF has real performance problems because it produces a lot of left joins and unions even when it doesn't need data from the derived tables. Building any reporting on this data structure and accessing report data through EF is a really bad decision.
Edit:
If his concerns are related to other tools working with your database then they are fully legitimate, but at the same time it is only a matter of correctly documenting your database structure.

Synchronizing Core Data data with External Database

I have started working on an iPhone application where I need to synchronize data with an external MySQL database. The current database schema uses GUID/UUID fields as primary keys to maintain relationships between tables. I already have this working between a database app and the MySQL database, so this isn't a question about synchronization per se.
I've started going down the path of using Core Data, but I'm realizing that it maintains relationships between entities using its own schema within the SQLite database.
Am I going down the wrong path by using Core Data? If not, how does one synchronize data between a Core Data store and an external database while still maintaining the data relationships?
All you need to do is write the logic to translate entities from one DB schema to the other. You can fetch objects from the server and convert them to Core Data objects, and fetch objects from Core Data and convert them to MySQL entities when saving to the server. Nothing too difficult involved, really.
I agree with Griffo; simply translate the rows or entities you retrieve from the MySQL database into managed objects (and vice versa).
If I understand what you are trying to do correctly, I would definitely recommend using Core Data. Translating the data between MySQL and Core Data isn't that hard, and if you use an NSFetchedResultsController to display your data in a UITableView, you practically don't have to write any code.
And you can always preserve the original GUIDs as, for example, optional externalIDs on the imported entities. This way you will be able to troubleshoot your data imports more easily and correlate the data between the two types of data store.
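A rough Swift sketch of that idea follows; the Customer entity, its externalID/name attributes, and the shape of the incoming rows are all assumptions made up for the example.

```swift
import CoreData

// Sketch only: "Customer", "externalID", "name" and the row format are assumptions.
func importRows(_ rows: [[String: Any]], into context: NSManagedObjectContext) throws {
    for row in rows {
        guard let guid = row["guid"] as? String else { continue }

        // Look for an existing record with the same server GUID so a re-import
        // updates it instead of creating a duplicate.
        let request = NSFetchRequest<NSManagedObject>(entityName: "Customer")
        request.predicate = NSPredicate(format: "externalID == %@", guid)
        request.fetchLimit = 1

        let customer = try context.fetch(request).first
            ?? NSEntityDescription.insertNewObject(forEntityName: "Customer", into: context)

        customer.setValue(guid, forKey: "externalID")   // preserved MySQL GUID/UUID
        customer.setValue(row["name"] as? String, forKey: "name")
    }
    try context.save()
}
```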

Combining a datastore with Mapkit

Does anyone have any advice on using a datastore with MapKit to provide a database of locations (restaurants) that is queryable by location?
I would like to use Core Data, but importing the information into it seems like a project in itself. If anyone has good advice on converting an existing SQLite/CSV file to a Core Data SQLite store, that would be appreciated.
Is old-fashioned SQLite better than Core Data for this task, or is it a case where I should create a web service for the job?
I would also like to be able to query the locations based on the map zoom.
Thanks if you have any advice on the matter.
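For reference, here is a minimal Swift sketch of the kind of region query being asked about, assuming a hypothetical Restaurant entity with numeric latitude/longitude attributes (and ignoring longitude wrap-around at the date line).

```swift
import CoreData
import MapKit

// Sketch only: "Restaurant" and its latitude/longitude attributes are assumptions.
func restaurants(in region: MKCoordinateRegion,
                 context: NSManagedObjectContext) throws -> [NSManagedObject] {
    let minLat = region.center.latitude - region.span.latitudeDelta / 2
    let maxLat = region.center.latitude + region.span.latitudeDelta / 2
    let minLon = region.center.longitude - region.span.longitudeDelta / 2
    let maxLon = region.center.longitude + region.span.longitudeDelta / 2

    let request = NSFetchRequest<NSManagedObject>(entityName: "Restaurant")
    // Fetch only the rows whose coordinates fall inside the visible map region.
    request.predicate = NSPredicate(
        format: "latitude BETWEEN {%f, %f} AND longitude BETWEEN {%f, %f}",
        minLat, maxLat, minLon, maxLon)
    return try context.fetch(request)
}
```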
If you write your object model correctly, you can just point it at an existing SQLite database and it will read it as if Core Data had generated it in the first place.
For example, suppose you have an existing SQLite db of people with columns like firstName, lastName, phone#, etc. You just create a Core Data model with an entity whose attributes are firstName, lastName, phone#, etc. Spell them the same and make sure they have the right types, then point the NSPersistentStoreCoordinator at the existing database. It will read it in fine.
Core Data is always the way to go for any larger data management task. It makes everything so much easier once you learn it.
Edit01:
Never mind the above. I was thinking of Enterprise Objects. Core Data won't easily import most existing SQLite databases.
Instead, I would export the SQLite data to CSV and then use something like cCSVParse to convert it to a plist. Then you can read that in easily to an array or dictionary and use it to populate the Core Data store.
That will work easily for databases that don't depend on complex relationships. I think the future benefits of having Core Data will eventually pay for the few man-hours spent converting.
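A minimal sketch of that import step, assuming the plist contains an array of dictionaries and a hypothetical Place entity whose attribute names match the plist keys:

```swift
import CoreData

// Sketch only: "Place" and the assumption that plist keys match attribute names
// are made up for this example.
func importPlist(at url: URL, into context: NSManagedObjectContext) throws {
    guard let rows = NSArray(contentsOf: url) as? [[String: Any]] else { return }
    for row in rows {
        let place = NSEntityDescription.insertNewObject(forEntityName: "Place", into: context)
        for (key, value) in row {
            place.setValue(value, forKey: key)   // relies on keys matching attribute names
        }
    }
    try context.save()
}
```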