Thinking of using Cassandra with .NET, Dapper.NET and FluentMigrator instead of PostgreSQL

Basically we are looking at solutions that keep our running costs down while allowing us to keep 3+ nodes in sync.
We were looking at PostgreSQL when Cassandra came on our radar.
Can anyone tell me from experience whether Dapper.NET and FluentMigrator work with Cassandra?
Also, any advice on this kind of setup would be appreciated.
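For reference, here is a minimal sketch of the Dapper + PostgreSQL setup we are currently evaluating (the connection string, table and class names are placeholders, and it assumes the Npgsql ADO.NET provider):

```csharp
using System.Collections.Generic;
using Dapper;   // Dapper extends IDbConnection with Query<T>, Execute, etc.
using Npgsql;   // ADO.NET provider for PostgreSQL

public class Device
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class DeviceRepository
{
    // Hypothetical connection string and table name, for illustration only.
    private const string ConnString =
        "Host=localhost;Database=appdb;Username=app;Password=secret";

    public static IEnumerable<Device> GetAll()
    {
        using (var conn = new NpgsqlConnection(ConnString))
        {
            // Dapper works against any ADO.NET IDbConnection, which is why it
            // pairs so easily with PostgreSQL via Npgsql.
            return conn.Query<Device>("SELECT id, name FROM devices");
        }
    }
}
```

Dapper binds to ADO.NET's IDbConnection, so whether the same pattern carries over to Cassandra depends entirely on the driver you put underneath it - which is exactly the experience report this question is after.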

Related

What is better in my case: Postgres-BDR or Postgres-XL?

I am currently working on setting up a 2-node PostgreSQL cluster on bare-metal cloud servers, and I am very confused about which approach to take.
One option is PostgreSQL BDR (bi-directional replication), which has the benefit that both of my nodes get read and write access. But then I learned about Postgres-XL, which is built around sharding. Can anybody tell me which approach I should go with? Will sharding benefit me or not? I want my Postgres to be highly available and fast; which approach will help me in this regard?
Any other suggestions you want to give me are welcome.
One more thing: I want my cluster to be horizontally scalable.
The best solution in most cases is option (c): neither. Use stock PostgreSQL + active/standby failover.
I say that as a BDR developer. It's a great tool (in my opinion) for workloads that need it. But it comes with some considerable costs, like any multi-master system, and should not be used if you don't actually need it.
Most people who think they need multi-master don't. Or rather, they don't understand the impacts and trade-offs.
Read the BDR documentation on multi-master conflicts.
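To illustrate the stock-PostgreSQL route, here is a hedged sketch of how an application can tell the active node from the standby using the built-in pg_is_in_recovery() function (host names, credentials and the Npgsql client are assumptions, not part of the original answer):

```csharp
using Npgsql;

public static class FailoverProbe
{
    // Hypothetical hosts for a two-node active/standby pair.
    private static readonly string[] Hosts = { "pg-node-1", "pg-node-2" };

    // Returns the first host that is NOT in recovery, i.e. the current primary.
    public static string FindPrimary()
    {
        foreach (var host in Hosts)
        {
            using (var conn = new NpgsqlConnection(
                $"Host={host};Database=appdb;Username=app;Password=secret"))
            {
                conn.Open();
                using (var cmd = new NpgsqlCommand("SELECT pg_is_in_recovery()", conn))
                {
                    // pg_is_in_recovery() is true on a streaming-replication standby.
                    if (!(bool)cmd.ExecuteScalar())
                        return host;
                }
            }
        }
        return null; // no primary reachable
    }
}
```

In practice a connection pooler or load balancer usually does this probing for you; the point is simply that a plain primary plus a streaming-replication standby needs none of the multi-master machinery.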

Experiences with db4o in production environment

We are planning to migrate from Prevayler (http://prevayler.org/) to db4o (http://www.db4o.com/), so we wanted to hear about experiences, pros and cons, and best practices before moving forward. What do you think about it? Is it a good solution? Or would moving to a standard NoSQL solution (such as MongoDB or CouchDB) be better? Thanks!
We use db4o as the main DB in our production environment (both embedded and client/server), so I am going to share some of my experiences.
Pros:
- very easy for development (you just implement data classes - see the sketch below)
- supports both embedded and client/server under the same interface, which makes it easy to unit test
- decent performance for small projects
Cons:
- db4o is no longer developed, so it's pretty much a dead project and you won't get much support for it
- [client/server] every time you change the model you need to redeploy the server (not to mention that you need to host the server app yourself)
- [client/server] performance degrades as more clients connect - it is not possible to scale
Summary: db4o is very good as an embedded DB (mobile apps, desktop-local DBs), but when it comes to server applications you run into trouble.
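A minimal sketch of what "you just implement data classes" looks like with the embedded db4o .NET API (file and class names are placeholders, and the exact API surface may differ slightly between db4o versions; the Java API is analogous):

```csharp
using System;
using Db4objects.Db4o;   // db4o core .NET API

public class Pilot
{
    public string Name;
    public int Points;
}

public static class Db4oEmbeddedExample
{
    public static void Main()
    {
        // Open (or create) a single-file embedded database - no server to host.
        IObjectContainer db = Db4oEmbedded.OpenFile(
            Db4oEmbedded.NewConfiguration(), "pilots.db4o");
        try
        {
            db.Store(new Pilot { Name = "Michael", Points = 100 });

            // Native query: a plain .NET predicate, no separate query language.
            foreach (Pilot pilot in db.Query<Pilot>(p => p.Points > 50))
                Console.WriteLine(pilot.Name);
        }
        finally
        {
            db.Close();
        }
    }
}
```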
Given that I did not receive much feedback, we gave it a try. So far it seemed like a good option for an embedded database, and it makes deployment much easier. So we rewrote the whole persistence layer, with its unit tests, and it seemed to work fine.
Then we tried it with real data and started to get some weird null pointer errors we could not explain. We started reading and found this issue: http://www.gamlor.info/wordpress/2009/09/db4o-activation-update-depth/.
We tried to solve it for a few hours, but then decided not to spend more time on it and looked for another way. CouchDB, OrientDB and MongoDB are still on our list.
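For anyone hitting the same nulls: the linked article is about db4o's activation depth - by default db4o only "activates" (populates) object graphs a few levels deep, and deeper references come back as null until you raise the depth or cascade activation. A hedged sketch of the usual configuration fix (the class name is a placeholder, and the exact config API may vary by db4o version):

```csharp
using Db4objects.Db4o;
using Db4objects.Db4o.Config;

public class Invoice { }   // placeholder class, only to illustrate per-class config

public static class Db4oActivationConfig
{
    public static IObjectContainer Open(string file)
    {
        IEmbeddedConfiguration config = Db4oEmbedded.NewConfiguration();

        // Option 1: raise the global activation depth (the default is low,
        // which is why deeply nested references come back as nulls).
        config.Common.ActivationDepth = 10;

        // Option 2: cascade activation for a specific class so that its whole
        // object graph is populated whenever an instance is read.
        config.Common.ObjectClass(typeof(Invoice)).CascadeOnActivate(true);

        return Db4oEmbedded.OpenFile(config, file);
    }
}
```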

Exporting subset of neo4j database for non-programmer

I've been trying to export a subset of my neo4j database, based on a query, to a CSV file. However, I haven't had much luck. Most of the advice I've found online requires a significant amount of coding - unfortunately, I have only a cursory knowledge of programming languages and have had difficulty getting this to work. Are there any easy methods to do this that don't require a high-level understanding of MySQL or Java? Thanks for your help!

Are ad-hoc queries/updates starting to kill your productivity with MongoDB?

I've been developing an ASP.NET MVC website for almost a year now exclusively on MongoDB, and I've loved it for the most part. Development productivity has been great using a C# MongoDB driver and tools like MongoVUE.
However, I've started to reach a point where there are things I really wish I had a SQL Server database for. Simple tasks like updating a record in the DB, and only mildly complex queries to generate some kind of report, are becoming a pain.
I read an article somewhere arguing that for NoSQL to succeed there needs to be a standard query language for it, and tools developed around it. I'm guessing this is far, far away, so right now I'm stuck trying to deal with these things.
I think eventually I will have to run a dual solution with MongoDB and SQL Server. I don't think I will ever get to the point where I am as productive updating and writing queries for MongoDB as I was with SQL Server.
How are you guys dealing with this when using NoSQL like MongoDB? Are you facing the same issues as me?
One solution you may consider is LINQPad. You can set up a template with a reference to 10gen's drivers and write ad-hoc C# MongoDB queries just like you would in your code. My team and I use this method to address the very problem you mention.
Try it out (it's free) and see if it helps with the simple, day-to-day queries you come up with.
Edit: I also support Chris's suggestion of familiarizing yourself with the native JSON query language. Nothing beats a quick console window for speed, if you know the syntax.
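To make the LINQPad suggestion concrete, this is the kind of ad-hoc snippet you would paste into a query tab (database, collection and field names are placeholders; note the thread predates the current driver, so this sketch uses the present-day MongoDB.Driver API rather than the old 10gen one):

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

public class Order
{
    public ObjectId Id { get; set; }
    public string CustomerId { get; set; }
    public double Total { get; set; }
    public string Status { get; set; }
}

public static class AdHocMongo
{
    public static void Run()
    {
        var client = new MongoClient("mongodb://localhost:27017");
        var orders = client.GetDatabase("shop").GetCollection<Order>("orders");

        // Ad-hoc mini report: open orders over 100, newest first.
        var open = orders
            .Find(Builders<Order>.Filter.Eq(o => o.Status, "open") &
                  Builders<Order>.Filter.Gt(o => o.Total, 100))
            .SortByDescending(o => o.Id)
            .ToList();

        // Ad-hoc single-record fix-up, the kind of task the question calls a pain.
        orders.UpdateOne(
            Builders<Order>.Filter.Eq(o => o.CustomerId, "C-42"),
            Builders<Order>.Update.Set(o => o.Status, "cancelled"));
    }
}
```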
The official C# driver will probably get a LINQ provider at some point in the future, which would give .NET devs a familiar syntax for querying and maybe help with initial productivity. There are also some nice docs that relate MongoDB queries back to SQL:
SQL to Mongo Mapping Chart
SQL to MongoDB (PDF)
These are great for learning, but to get the most out of Mongo it's well worth investing time in getting used to the native JSON query syntax and Mongo-specific concepts like map-reduce.
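To illustrate the mapping those charts describe, the same kind of filter can be written in the native JSON syntax and passed straight through the C# driver (collection and field names are placeholders, and CountDocuments is from the current driver rather than the one the thread used):

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

public static class NativeJsonQuery
{
    public static long CountOpenOrders(IMongoDatabase db)
    {
        var orders = db.GetCollection<BsonDocument>("orders");

        // SQL:   SELECT COUNT(*) FROM orders WHERE status = 'open' AND total > 100
        // Mongo: db.orders.find({ status: "open", total: { $gt: 100 } }).count()
        var filter = BsonDocument.Parse(
            "{ \"status\": \"open\", \"total\": { \"$gt\": 100 } }");

        // A BsonDocument converts implicitly to a filter, so the native JSON
        // form can be used directly from C#.
        return orders.CountDocuments(filter);
    }
}
```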
Since your question asks,
how are you guys dealing with this when using NoSQL like MongoDB?
I thought I'd chime in. I felt your pain when working with another NoSQL database, RavenDB.
I wrote a LINQPad driver specifically for ad-hoc interactions with RavenDB.
https://github.com/ronnieoverby/RavenDB-Linqpad-Driver

Derby vs PostgreSQL Performance Comparison

We are currently researching whether to switch our PostgreSQL DB to an embedded Derby DB. Both would use GlassFish 3 for our data layer. Does anybody have any opinions or knowledge that could help us decide?
Thanks!
Edit: we are writing some performance tests ourselves right now; we're looking for answers based more on experience / first-hand knowledge.
I know I'm late to post an answer here, but I want to make sure nobody makes the mistake of using Derby over any production-quality database in the future. I apologize in advance for how negative this answer is - I'm trying to capture an entire engineering team's ill feelings in a brief Q&A answer.
Our experience using Derby in many small-ish customer deployments has led us to seriously doubt how useful it is for anything but test environments. Some problems we've had:
- Deadlocks caused by lock escalations - this is the biggest one and happens to one customer about once every week or two
- Interrupted I/Os cause Derby to fail outright on Solaris (may not be an issue on other platforms) - we had to build a shim to protect it from these failures
- Can't handle complicated queries which MySQL/PostgreSQL would handle with ease
- Buggy transaction log implementation caused a table corruption which required us to export the database and then re-import it (couldn't just drop the corrupted table), and we still lost the table in the process - thank goodness we had a backup
- No LIMIT syntax
- Low performance for complicated queries
- Low performance for large datasets
Because it's embedded, Derby is more of a competitor to SQLite than to PostgreSQL, which is an extremely mature, production-quality database used to store multi-petabyte datasets by some of the largest websites in the world. If you want to be ready for growth and don't want to get caught debugging someone else's database code, I would recommend not using Derby. I don't have any experience with SQLite, but I can't imagine it being much less reliable than Derby has been for us while still being as popular as it is.
In fact, we're in the process of porting to PostgreSQL now.
Derby is still relatively slow, but... wherever your Java application goes, your database server goes too, completely platform-independent. You don't even need to think about installing a DB server wherever your Java app is being copied to.
I was using MySQL with Java, but having an embedded implementation of your database server sitting right inside your Java app gives stunning and unprecedented productivity, freedom and flexibility.
Always having a DB server included, whenever and wherever, on any platform, is just heaven for me!
I have not compared PostgreSQL to Derby directly. However, having used both in different circumstances, I have found Derby to be highly reliable. You will, though, need to pay attention to Derby's configuration to ensure it suits your application's needs.
When looking at the H2 database's stats site, it's worth reading the follow-up discussion, which comes out in favour of Derby compared to the H2 conclusions: http://groups.google.com/group/h2-database/browse_thread/thread/55a7558563248148?pli=1
Some stats from the H2 database site here:
http://www.h2database.com/html/performance.html
There are a number of performance test suites that are included as part of the Derby source code distribution itself; they are used by Derby developers to conduct their own performance testing of Derby. So if you need examples of performance tests, or want additional ones, you could consider using those. Look in the subdirectory named java/testing/org/apache/derbyTesting/perf in the Derby source distribution.
I'm not sure what you mean that Derby is embedded. Certainly that is an option, but you can also install it as a separate server.