I use Zend_Db in some projects. I discovered Doctrine a while ago but never actually used it.
What are the advantages of Doctrine over Zend_Db? What are the benefits of using Doctrine?
By the way, is it easy to use Doctrine with Zend Framework 1.10.7? How well does it integrate with the other components? There doesn't seem to be a Doctrine adapter.
Thank you
Doctrine is an ORM. It's meant for persisting a rich domain object model to a database and allow querying effectively while maintaining the results as objects. Zend_Db comprises an implementation of the table and row data gateway design patterns, which provide a simple scheme for querying a single table and manipulating its rows. That makes Zend_Db a kind of a lesser cousin of Doctrine, with the latter vastly more powerful and useful while also more complex and resource intensive. If you have a rich domain model with a lot of interrelations, Doctrine is your solution to managing all the complexity. For simple CRUD on simple tables, by all means go for Zend_Db.
You don't need an adapter for Doctrine, you just use it. Several classes in Zend Framework integrate readily with Zend_Db, though - such as validation based on database row existence - and you'll have to cook up your own equivalents. It will take some work but it's not a complex task, and you may be able to find some implementations readily available on the net.
I am curious to know whether Entity Framework can create tables in databases other than MS SQL Server.
Moreover, is there any provision to create an XML schema through EF?
Under the hood, Entity Framework uses providers that are specific to different databases, so whether EF can create tables depends on the provider. However, I haven't heard of a provider that lacks this capability. The easiest way to be sure is to write a simple program with a few lines of code.
As to the XML schema: are you asking about using XML files instead of a database as the storage for your data? If so, again it depends on the provider. You could theoretically create one that uses XML files, but I haven't tried to do so and I don't think it is a good idea. There are technologies that fit here better (see this question).
I have recently started familiarizing myself with NoSQL (HBase). I am definitely a noob.
I was investigating ORMs and high-level clients that can be used with HBase and came across a few.
Some ORM libraries like Kundera provide SQL-like data query functionality. I find this a little counterintuitive.
Can anyone help me understand why we would again need SQL-like querying if the whole objective was to move away from it?
Also, can anyone comment on their experiences with ORMs for HBase? I looked at a few of them from http://wiki.apache.org/hadoop/SupportingProjects and started looking at Kundera.
Another related question: does querying data with Kundera run MapReduce jobs internally?
Kundera or Spring Data might provide a user-friendly ORM layer over NoSQL databases, but the underlying entity model still has to be NoSQL friendly. This means that NoSQL users should not blindly follow RDBMS modeling strategies, but should design their ORM entities in such a way that all NoSQL capabilities can be used.
As a rule of thumb, Kundera ORM entities should be designed with a query-first strategy: define the queries first so that the primary keys can be derived from them, and keep the relationship model as minimal as possible. Querying on arbitrary columns and full scans should be avoided, so data might have to be replicated across entities to reduce multiple entity lookups. Transaction management also needs to be planned; note that Kundera does not support transactions (beyond the single-row transactions supported by HBase/Cassandra).
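As a hedged illustration of the query-first idea (all entity, table, and column names here are hypothetical, and the exact table/schema mapping depends on your Kundera configuration), such an entity might look like this, with the row key chosen for the main lookup and related data denormalized into plain columns instead of JPA relationships:

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Table;

    // Hypothetical query-first entity: the row key (orderId) matches the primary
    // lookup path, and user data is denormalized into plain columns rather than
    // being modeled as a JPA relationship to a User entity.
    @Entity
    @Table(name = "user_orders")
    public class UserOrder {

        @Id
        private String orderId;        // row key chosen for the main query

        @Column(name = "user_name")
        private String userName;       // denormalized to avoid a second lookup

        @Column(name = "total_amount")
        private double totalAmount;

        // getters and setters omitted for brevity
    }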
Reason for using Kundera:
1) If you are looking for SQL-like support over HBase: Kundera is built on top of the native HBase API, so it simply transforms these SQL-like queries into the corresponding GET or PUT calls.
2) Currently it supports HBase 0.20.6 only. Kundera 2.0.6 will add support for the HBase 0.90.x versions.
3) Kundera does not do anything out of the box to provide MapReduce over SQL-like queries. However, such support will be provided in Kundera 2.0.6 by enabling Hive native queries only!
It is fully JPA compliant, so there is no need to learn anything new. It simply hides complexity from the developer with very minimal effort.
SQL-like querying is there for ease of development, quicker development, fewer errors, and of course reusability!
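Since Kundera exposes the standard JPA API, using it looks like ordinary JPA code. Here is a minimal hedged sketch, assuming a persistence unit named "hbase_pu" configured for Kundera/HBase and the hypothetical UserOrder entity from above:

    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class UserOrderExample {

        public static void main(String[] args) {
            // "hbase_pu" is a hypothetical persistence unit configured for Kundera/HBase
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("hbase_pu");
            EntityManager em = emf.createEntityManager();

            // Standard JPA persist, translated by Kundera into an HBase PUT
            UserOrder order = new UserOrder();
            // ... populate fields via setters ...
            em.persist(order);

            // Standard JPA lookup by primary key, translated into an HBase GET
            UserOrder found = em.find(UserOrder.class, "order-123");
            System.out.println(found);

            em.close();
            emf.close();
        }
    }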
-Vivek
Today I reviewed the PostgreSQL wiki and found that it is an ORDBMS (object-relational database management system), so I want to know whether there is any performance benefit to using PostgreSQL (an ORDBMS) behind JPA (Hibernate, EclipseLink, ...) instead of a plain RDBMS (MySQL, ...).
As you know, JPA uses ORM and JPQL (the Java Persistence Query Language).
Regards
Object-relational data is structured data, i.e. user-defined types stored in the database.
OR data types include:
Structs - structured types
Arrays - array types
These types are defined differently in each database; in Oracle they are OBJECT, VARRAY, NESTED TABLE, and REF types.
JDBC standardizes access to OR data types using the Struct, Array and Ref interfaces.
With OR data types you can have more complex database schemas, such as a TABLE of Employee_Type that has a VARRAY of Phone_Type and a Ref to its manager.
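As a hedged sketch of what that looks like from the Java side (the employee_table layout and the attribute order of Employee_Type are assumptions), the standard JDBC interfaces give generic access to such types:

    import java.sql.Array;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.sql.Struct;

    public class OrTypesExample {

        // Reads rows whose "emp" column is an Employee_Type object
        // (table, column, and attribute layout are hypothetical).
        static void readEmployees(Connection conn) throws Exception {
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT emp FROM employee_table")) {
                while (rs.next()) {
                    // java.sql.Struct exposes the object's attributes generically
                    Struct employee = (Struct) rs.getObject(1);
                    Object[] attrs = employee.getAttributes();
                    String name = (String) attrs[0];       // assumed first attribute

                    // java.sql.Array exposes the VARRAY of Phone_Type values
                    Array phones = (Array) attrs[1];       // assumed second attribute
                    Object[] phoneValues = (Object[]) phones.getArray();

                    System.out.println(name + " has " + phoneValues.length + " phone numbers");
                }
            }
        }
    }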
JPA does not have any direct support for mapping OR data-types, but some providers do.
EclipseLink has support for mapping OR data types, including Structs, Refs, and Arrays. Custom mappings and annotations are used to map these, but the runtime JPA API is the same.
I would not normally recommend using OR data types, as they are less standard than traditional relational tables and do not give much benefit. Some database-defined OR data types, such as spatial data types, do offer advantages, as they have integrated database support.
See,
http://en.wikibooks.org/wiki/Java_Persistence/Advanced_Topics#Structured_Object-Relational_Data_Types
I would say no. JPA is targeted at RDBMSs, and doesn't use the additional capabilities offered by ORDBMSs.
Now, PostgreSQL is also a very good RDBMS (you're not forced to use its object-oriented features, and my guess would be that most of its users don't), and you may use it with JPA without problem.
JPA is the translator between "thinking in objects" (Java) and "thinking in relations" (SQL). Therefore a JPA implementation will always speak to the DB in terms of relations. "Object Relational" stuff is ignored here.
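To make that concrete, here is a minimal hedged sketch (the entity, table, and query are made up): on the database side the mapping is purely relational, and the JPQL query is translated by the provider into plain SQL against a flat table, whichever RDBMS or ORDBMS sits underneath:

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.Table;

    // A plain relational mapping: one entity, one flat table, scalar columns.
    // No object-relational features of the database are involved.
    @Entity
    @Table(name = "customer")
    public class Customer {

        @Id
        private Long id;

        private String name;

        // Typical JPQL query; the provider translates it into ordinary SQL,
        // roughly "SELECT ... FROM customer WHERE name = ?".
        public static List<Customer> findByName(EntityManager em, String name) {
            return em.createQuery(
                    "SELECT c FROM Customer c WHERE c.name = :name", Customer.class)
                    .setParameter("name", name)
                    .getResultList();
        }
    }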
Ignoring JPA and talking directly to the DB in "ORDBMS" speak won't buy you performance benefits in the most common cases, because an ORDBMS is still an RDBMS with some glue logic on top to look a little more object-like. The data is stored in relations, and the access paths are the same as purely relational access paths.
If you really want to see performance benefits from switching not only the database product but also the database technology (or philosophy), you should look at real object databases or even NoSQL.
I'm trying to display the results of a sproc in my MVC 3 web app.
However, the sproc calls into 4 tables on one database and joins them with 5 views (single table views only, thank goodness) on another database. Each (SQL Server) db is on a separate server but that shouldn't matter.
I've read this: http://blogs.msdn.com/b/swiss_dpe_team/archive/2008/02/04/linq-to-sql-returning-multiple-result-sets.aspx
and this:
http://www.codeproject.com/KB/dotnet/linqToSql5.aspx
and still cannot determine whether I should use the dataContext classes or just embed the straight SQL.
Perhaps there is a better way to return my results than LINQ to SQL (15 columns, 3 different data types)? I need to update the tables as well. The user will have the ability to update each value if they choose. Is this a task best suited for the entity framework classes?
I plan on using the repository pattern so I can change data access technology if I must but would rather make the correct decision the 1st go 'round.
I was hoping for a resource that was more up-to-date than say, NerdDinner and more robust than the movie apps for MVC3 that abound, particularly implementing the sproc results inside a view. Any suggestions would surely be appreciated. Thanks.
Once you plan to "update" the data, you are going to have to handle it all through stored procedures. Neither LINQ to SQL nor Entity Framework will help you with this, because they are not able to persist changes to something created from an arbitrary query. You should check very carefully whether you can even trace the data back to the correct record in the correct table. Generally, the result of a stored procedure is mostly for viewing the data; once you want to modify the data, you must work with each table directly or again use a stored procedure that does the task. Working with tables from multiple databases can be pretty complex in Entity Framework (EF doesn't support objects from multiple databases in one entity model).
Also, what do you mean by 15 columns, 3 different data types? Stored procedure support in both LINQ to SQL and Entity Framework will return an enumeration of a single flattened data type containing 15 properties.
I'm not really aware of anything LINQ to SQL can do that Entity Framework can't, so EF seems to be the better solution in this case. You can add a stored procedure to your Entity Framework model as well, so you can just have it call the procedure and deal with whatever comes back.
Since the end goal involves accessing the same databases with either technology, and they will use SQL to retrieve the data either way, it's really a subjective answer.
I would use whatever technology you are most comfortable with and focus more on the implementation. Both data access platforms are built on ADO.NET and are, for the most part, equally powerful.
Regardless of the technology, I would evaluate how the data is accessed and make implementation decisions based on that.
What is faster - ADO.NET or ADO.NET Entity Framework?
Nothing is faster than an ADO.NET DataReader.
Entity Framework also uses one "in the basement".
However, Entity Framework helps you map from the database to objects;
with ADO.NET you have to do that yourself.
How fast it is depends on how you program it.
If you use ADO.NET DataTables as "objects", they are a bit slower and more memory hungry than plain objects.
As Julian de Wit says, nothing is faster than ADO.NET DataReaders.
ADO.NET Entity Framework is a wrapper around the old ADO.NET.
It is a pure ORM and EDL system.
It gives us a lot of benefits that we used to have to hand-craft or "copy & paste" in the past.
Another benefit that comes with it is that it is completely provider independent.
Even if you like the old ADO.NET mechanism, or you are a dinosaur like me (:P), you can use Entity Framework through EntityClient (like SqlClient or MySqlClient) and use the power of Entity SQL, which is provider independent.
I know that with ADO.NET you can write a data access layer and the DataReaders etc. can be "independent", but you still have queries that are provider specific.
On the other hand, in an enterprise application you may never want to change the data provider.
But as the technology grows, new needs always arise, and you may have to alter the database schema.
When that happens with the old ADO.NET framework, we have to refactor a lot of code, which is hardly maintainable no matter how much we reuse the code.
Performance will be affected, but with all the caching technologies out there we can overcome this.
As I always say, "C is fast, assembly even more so... but we use C#/VB.NET/Java".