Hi, I want to use HikariCP with SnappyData. I don't want to involve GemFire in between. How is a direct implementation with SnappyData possible?
I tried com.pivotal.gemfirexd.jdbc.ClientDriver, which is working, but com.pivotal.snappydata.jdbc.ClientDriver is not working.
I'm not sure which version of SnappyData you are using, but the client JDBC driver class is io.snappydata.jdbc.ClientDriver. See here. You can download the driver from here.
Also, I'm not sure I follow "Don't want to involve GemFire in between". SnappyData only uses components from GemFire (for things like distributed membership, etc.), and this use is completely hidden.
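For what it's worth, a minimal HikariCP setup against that driver could look something like the sketch below. It assumes the usual jdbc:snappydata:// thin-client URL format; the host, port, and pool size are placeholders for your own locator or server endpoint.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class SnappyDataPool {

    public static HikariDataSource create() {
        HikariConfig config = new HikariConfig();
        // The SnappyData client driver mentioned above, talking directly
        // to the cluster (no separate GemFire setup involved).
        config.setDriverClassName("io.snappydata.jdbc.ClientDriver");
        // Placeholder endpoint; point this at your locator or server.
        config.setJdbcUrl("jdbc:snappydata://localhost:1527/");
        config.setMaximumPoolSize(10);
        return new HikariDataSource(config);
    }
}

Connections obtained from the returned HikariDataSource via getConnection() are then ordinary JDBC connections into SnappyData.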
We are working on an application that will be offered both as a web-based solution and as a cross-platform desktop solution by means of Electron.
Due to customer requirements, the desktop client cannot make use of "the cloud" to store data; all data should be stored on the local machine or, even better, the user should have the option to keep the database/data file on an external HDD so that another user on the same local network can use the same data file.
We've been looking at NeDB, PouchDB, etc., but all of these use either Web SQL or IndexedDB in the browser itself to store the data.
NeDB can theoretically use the file system, but that seems to be possible only for Node-Webkit apps.
Another option is of course MongoDB, but it requires setting up a server. Seeing as our users will set that up on their own machines, that will work for one user only but would make it very hard for them to share the data (note: assume users with little technical know-how).
Is there a way to force NeDB to persist data in a file instead of the in-browser database?
Alternatively, does anyone know of a compact, file-based database that plays well with Electron/Node?
We'd preferably like to use a NoSQL database, but options of file-based SQL databases will be considered as well.
I have some experience with NeDB in an Electron app and I can say it will definitely work on the filesystem.
How are you initializing NeDB (or whatever your database choice is)? Also, are you initializing it in the main process or the renderer process? If you can share that, I think we can trace the problem back to a configuration issue.
This is how you start NeDB with a persistent data-store that saves to disk.
var Datastore = require('nedb');
// Passing a filename makes NeDB persist to that file on disk;
// autoload reads the file's contents into memory at startup.
var db = new Datastore({ filename: 'path/to/datafile', autoload: true });
I think MongoDB is going to be overkill for an Electron app (it's really meant to be a high-performance, distributed database running in the cloud).
Another option you could consider is LevelDB (a key/value store that can persist to the filesystem), which is popular in the Node community. (EDIT 4/17/17: IndexedDB uses LevelDB under the hood, so if you go that route, you may as well just use that.)
One aspect I would definitely evaluate carefully is: how difficult is this database going to be to package and distribute? How do I integrate it into my build system? Level and NeDB can be included simply via npm install, and any native-code compilation is handled seamlessly with node-gyp, which is as simple as it gets. However, bundling Mongo, for example, will require some work to get a working build for each platform.
Please don't mark this question as a duplicate. I read the previous questions, but I am still unable to understand this.
I am currently working on a project written in Java that uses MongoDB for persistence. Due to some performance issues with it, I have been asked to use Memcached, but I am unable to figure out how Memcached can help me here.
While searching, I got more confused by caching services like Memcache and Memcached. Can someone please explain to me how these are different, and why PHP comes up in some answers when Memcached is asked about?
I request everyone to answer clearly and show me with an example how I could use Memcached in my project. What are Memcache, Memcached, JCache, and SpyMemcached?
If possible, please provide a link to a complete Memcached example somewhere.
Memcache and Memcached are the same thing, the "correct name" being Memcached (http://memcached.org/).
JCache is the name of a standard Java API (JSR 107, https://jcp.org/en/jsr/detail?id=107) that provides a generic API for interacting with caching layers/solutions (to simplify: get/set/remove data in a key/value cache).
So if you really want to use a caching layer on top of MongoDB in your Java application, you have to:
Install Memcached somewhere on your infrastructure (if it is not installed already). You can test it quickly with telnet: the default port is 11211, so run telnet localhost 11211 to see if it is working.
Use a JCache implementation for Memcached, for example this one: https://github.com/toelen/spymemcached-jcache . This will allow you to store and get data in a Memcached process running somewhere in your infrastructure.
Since you are talking about JCache, and are therefore using Java, it is also possible to use a Java-based cache that runs directly in your JVM without a third-party cache process (Memcached). You can find many of them, for example Ehcache or JBoss Cache, and most of them expose their functionality through the standard JCache API.
Now you need to code your data access layer to get the data out of MongoDB and put it into the cache using the JCache API. In this code you check whether a value is in the cache; if not, you query MongoDB, put the result in the cache, and use it. Be careful about the eviction strategy.
This document about using JCache in the Google App Engine documentation is interesting for its pseudo code: https://developers.google.com/appengine/docs/java/memcache/usingjcache (your code will be different, but it should help you see what you have to do in your own code).
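To make that read-through logic concrete, here is a rough sketch against the plain JCache API. The cache name, the String value type, and the loadFromMongo helper are made up for illustration; whichever JCache provider is on the classpath (spymemcached-jcache, Ehcache, ...) supplies the actual cache behind it.

import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;

public class DocumentDao {

    private final Cache<String, String> cache;

    public DocumentDao() {
        // Pick up whatever JCache provider is on the classpath.
        CacheManager manager = Caching.getCachingProvider().getCacheManager();
        cache = manager.createCache("documents",
                new MutableConfiguration<String, String>()
                        .setTypes(String.class, String.class));
    }

    public String findDocument(String id) {
        String doc = cache.get(id);      // 1. check the cache first
        if (doc == null) {
            doc = loadFromMongo(id);     // 2. cache miss: query MongoDB
            cache.put(doc != null ? id : id, doc); // 3. populate the cache for next time
        }
        return doc;
    }

    private String loadFromMongo(String id) {
        // Placeholder for the real MongoDB query via the Java driver.
        return "document-" + id;
    }
}

The eviction strategy mentioned above is configured on the provider side (expiry policies on the MutableConfiguration, or Memcached's own LRU behaviour).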
The reason you often see Memcached and PHP together is just that Memcached is the most common caching layer for PHP applications, with very many APIs/frameworks using it. In Java there are many options, from a pure Java layer to Memcached and others...
This is the "overall" approach; however, before doing any of it I would check why you are saying that MongoDB is slow, and solve that issue.
I would like to give a web app with a PostgreSQL database 100% offline functionality. In an ideal case, the database would be completely replicated in the browser per user and synchronized when online, so that the same code can be used to talk to both the offline and online databases. I know this is possible with PouchDB and CouchDB, but I have not found a solution that works with PostgreSQL. Is this at all possible?
Short answer: I don't know of anything like this that currently exists.
However, in theory, this could be made to work... (long answer:)
Write a PostgreSQL backend for levelup (one exists for MySQL: https://github.com/kesla/mysqldown)
Wire up pouch-server to read/write from your PostgreSQL db using pouchdb's existing leveldb adapter (which in turn will have to be configured to use your postgres backend). Congrats, you can now sync data using PouchDB!
Whether an approach like this is practical in reality for your application is a different question you'll have to answer.
You may be wondering, for example, "will I be able to sync an existing complex schema with multiple tables to the client with this approach?" The answer is probably not - the mysqldown implementation of leveldown uses a single MySQL table with three fields: id, key, and value (source), and I imagine any general-purpose PostgreSQL adapter would be similar (nothing says you can't do a special-purpose adapter just for your app though!).
On the other hand, if you were to implement a couchdb-compatible API (or a subset- you may not need attachments, for example) over your existing database schema, there's nothing stopping you from using PouchDB on the client to talk directly to that as if it were an actual CouchDB - just pop in the URL and call replicate()! Implementing the replication protocol might be a fair bit of work, since you'd need to track revisions and so on somewhere - but again, technically not impossible!
There are also implementations of levelup's backend storage that are designed for browsers. See level.js, which could be another way to sync between a server-side Postgres levelup backend and the browser.
TL;DR: There's tons of work being done around JavaScript databases right now. Is syncing with Postgres impossible? Probably not. Would it be a lot of work? Definitely. Worth it? Who knows, but it would be cool.
Without installing PostgreSQL on the client? No. Obviously you can cache data for offline use, but an entire RDBMS plus procedural languages in JavaScript, no.
I've recently come across a need to store a larger number of files in my application, and because the PaaS platform used to host the application provides MongoDB, I would like to use it.
However, because I'm quite inexperienced with MongoDB, I have almost no idea about the current state of MongoDB-related plugins and tools for Grails. What should I use? As I want to keep domain classes in a SQL database and use MongoDB only to store related files (in this case mostly a bunch of PDFs and text documents related to domain instances), the MongoDB ORM [1] plugin seems too "heavy". Unfortunately, the MongoDB ORM plugin is probably the only Grails MongoDB plugin in active development at the moment.
In short, what would be the best plugin/library tool-set for this purpose? The closest match to my needs I've found is the grails-mongo-files plugin [2], which is probably a little outdated, with no further development. So far it seems that I will have to use MongoDB's Java driver (or the GMongo wrapper) and write a storage service and taglib myself (which is not necessarily a bad thing).
[1] http://grails.org/plugin/mongodb
[2] https://github.com/quirklabs/grails-mongo-file
There is also the mongodb gridfs plugin. http://grails.org/plugin/mongodb-gridfs
One thing to consider is that GridFS effectively makes two calls to Mongo: one to retrieve file information and one to retrieve the file itself. So it might not be a good fit if your files are under 16 megabytes, since files below MongoDB's 16 MB document size limit can be embedded directly in an ordinary document and fetched in a single call.
Here is a post on how to do this manually if you want to bypass plugins - http://jameswilliams.be/blog/entry/171
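If you do go the manual route from Java/Groovy, a small sketch with the (now legacy) MongoDB Java driver's GridFS classes might look like this; the database name, bucket name, and file names are placeholders:

import com.mongodb.DB;
import com.mongodb.MongoClient;
import com.mongodb.gridfs.GridFS;
import com.mongodb.gridfs.GridFSDBFile;
import com.mongodb.gridfs.GridFSInputFile;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class PdfStore {

    public static void main(String[] args) throws IOException {
        MongoClient client = new MongoClient("localhost", 27017);
        DB db = client.getDB("myapp");
        // "attachments" is the GridFS bucket (collection prefix) to use.
        GridFS gridFs = new GridFS(db, "attachments");

        // Store a file; GridFS splits it into chunks behind the scenes.
        GridFSInputFile stored = gridFs.createFile(new File("report.pdf"));
        stored.setContentType("application/pdf");
        stored.save();

        // Retrieve it by filename: one call for the metadata, with the
        // chunk reads happening as the content is written out.
        GridFSDBFile found = gridFs.findOne("report.pdf");
        try (FileOutputStream out = new FileOutputStream("copy.pdf")) {
            found.writeTo(out);
        }

        client.close();
    }
}

Wrapping something like this in a Grails service plus a small taglib is essentially what the plugins above do for you.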
It's obvious that I'm not an expert on Cassandra, so this question may sound silly.
Given an existing SQL-based project, is it even possible, and does it give any benefit, to apply a NoSQL database (e.g. Cassandra) as an additional layer between the business logic and the SQL database to speed up our queries or inserts?
It's a relatively new technology, and I'm trying to find its place.
Cassandra will work fine, but if you don't mind having to rebuild your data, memcached will be faster.
But if you want a persistent cache, Cassandra is probably your best option -- reddit started by using Cassandra like this and is working on moving more functionality to it.
I would go with the Windows Server AppFabric (aka Velocity) distributed cache with SQL Server, assuming you are on the .NET platform.
Scott Hanselman has a bunch of posts on AppFabric.