Please don't mark this question as a duplicate. I read the previous questions, but I am still unable to understand this.
I am currently working on a project written in Java that uses MongoDB for persistence. Due to some performance issues, I have been asked to use Memcached, but I am unable to figure out how Memcached can help me with this.
While searching, I got more confused by caching services like Memcache and Memcached. Can someone please explain how these are different, and why PHP comes up in some answers when Memcached is asked about?
I request everyone to answer clearly and show me with an example how I could use Memcached in my project. What are Memcache, Memcached, JCache and SpyMemcached?
If possible, please provide a link to a complete Memcached example somewhere.
Memcache and Memcached are the same thing, the "correct name" being Memcached (http://memcached.org/).
JCache is the name of a standard Java API (JSR 107 - https://jcp.org/en/jsr/detail?id=107) that provides a generic API for interacting with caching layers/solutions (to simplify: get/set/remove data in a key/value cache).
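As for SpyMemcached: it is a widely used Java client library for Memcached (the spymemcached-jcache project mentioned below wraps it behind the JCache API). Purely as an illustration, and assuming a Memcached instance is running on the default local port, direct use of the client might look like this minimal sketch:

import net.spy.memcached.MemcachedClient;

import java.net.InetSocketAddress;

public class SpyMemcachedSketch {
    public static void main(String[] args) throws Exception {
        // Connect to a memcached process on the default port (11211).
        MemcachedClient memcached =
                new MemcachedClient(new InetSocketAddress("localhost", 11211));

        // Store a value with a one-hour (3600 second) expiration.
        memcached.set("greeting", 3600, "hello from spymemcached");

        // Read it back (null if it expired or was evicted).
        Object value = memcached.get("greeting");
        System.out.println(value);

        memcached.shutdown();
    }
}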
So if you really want to use a caching layer on top of MongoDB in your Java application, you have to:
Install Memcached somewhere on your infrastructure (if it is not installed already). You can test it quickly with telnet: the default port is 11211, so you can run telnet localhost 11211 to see if it is working.
Use a JCache implementation for Memcached, for example this one: https://github.com/toelen/spymemcached-jcache . This will allow you to store and get data from a Memcached process running somewhere in your infrastructure.
Since you are talking about JCache you are using Java, so it is also possible to use a Java-based cache that runs directly in your JVM without a third-party cache/process (Memcached). You can find many of them, for example Ehcache or JBoss Cache, and most of them expose their API through the standard JCache API.
Now you need to code your data access layer to get the data out of MongoDB and put it into the cache using the JCache API. In this code you will have to check whether the data is in the cache; if not, query it from MongoDB, put it in the cache, and then use it. Be careful about the eviction strategy. A sketch of this pattern is shown below.
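To make that last step concrete, here is a rough cache-aside sketch combining the standard JCache API with the MongoDB Java driver. It is only an illustration: the provider bootstrap, cache name, database, collection and key are assumptions, and the exact configuration depends on which JCache implementation (spymemcached-jcache, Ehcache, ...) you actually plug in.

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;

import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;

public class UserDao {

    private final Cache<String, String> cache;
    private final MongoCollection<Document> users;

    public UserDao() {
        // Whatever JCache provider is on the classpath (spymemcached-jcache,
        // Ehcache, ...) is picked up here; its configuration is provider-specific.
        CacheManager cacheManager = Caching.getCachingProvider().getCacheManager();
        this.cache = cacheManager.createCache("users",
                new MutableConfiguration<String, String>()
                        .setTypes(String.class, String.class));

        this.users = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("myapp")
                .getCollection("users");
    }

    // Cache-aside read: try the cache first, fall back to MongoDB.
    public String findUserJson(String userId) {
        String cached = cache.get(userId);
        if (cached != null) {
            return cached;                      // cache hit
        }
        Document doc = users.find(Filters.eq("_id", userId)).first();
        if (doc == null) {
            return null;                        // not in MongoDB either
        }
        String json = doc.toJson();
        cache.put(userId, json);                // populate the cache for next time
        return json;
    }

    // Keep the cache consistent on writes (or rely on expiration/eviction).
    public void updateUser(String userId, Document newDoc) {
        users.replaceOne(Filters.eq("_id", userId), newDoc);
        cache.remove(userId);                   // invalidate the stale entry
    }
}

The values are stored as JSON strings here simply because Memcached stores opaque values; any serializable representation works.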
This document about using JCache in the Google App Engine documentation is interesting to see the "pseudo code": https://developers.google.com/appengine/docs/java/memcache/usingjcache (your code will be different, but it should help you see what you have to do in your own code).
The reason you often see Memcached and PHP together is simply that Memcached is the most common caching layer for PHP applications, with many, many APIs/frameworks using it. In Java we have many options, from a pure Java layer to Memcached or others...
However, this is the "overall" approach; before doing any of this I would check why you are saying that MongoDB is slow, and solve that issue first.
I would like to give a web app with a PostgreSQL database 100% offline functionality. In an ideal case, the database would be completely replicated in the browser per user and synchronized when online, so that the same code can be used to talk to both the offline and online database. I know this is possible with PouchDB and CouchDB, but I have not found a solution that works with PostgreSQL. Is this at all possible?
Short answer: I don't know of anything like this that currently exists.
However, in theory, this could be made to work...(long answer:)
Write a PostgreSQL backend for levelup (one exists for MySQL: https://github.com/kesla/mysqldown)
Wire up pouch-server to read/write from your PostgreSQL db using pouchdb's existing leveldb adapter (which in turn will have to be configured to use your postgres backend). Congrats, you can now sync data using PouchDB!
Whether an approach like this is practical in reality for your application is a different question you'll have to answer.
You may be wondering, for example, "will I be able to sync an existing complex schema with multiple tables to the client with this approach?" The answer is probably not - the mysqldown implementation of leveldown uses a single MySQL table with three fields: id, key, and value (source), and I imagine any general-purpose PostgreSQL adapter would be similar (nothing says you can't do a special-purpose adapter just for your app though!).
On the other hand, if you were to implement a couchdb-compatible API (or a subset- you may not need attachments, for example) over your existing database schema, there's nothing stopping you from using PouchDB on the client to talk directly to that as if it were an actual CouchDB - just pop in the URL and call replicate()! Implementing the replication protocol might be a fair bit of work, since you'd need to track revisions and so on somewhere - but again, technically not impossible!
There are also implementations of levelup's backend storage that are designed for browsers. See level.js, which could be another way to sync between a server-side Postgres levelup backend and the browser.
TL;DR: There's tons of work being done around JavaScript databases right now. Is syncing with Postgres impossible? Probably not. Would it be a lot of work? Definitely. Worth it? Who knows, but it would be cool.
Without installing PostgreSQL on the client? No. Obviously you can cache data for offline use, but an entire RDBMS plus procedural languages in JavaScript, no.
I've recently come across a need to store a larger number of files in my application, and because the PaaS platform used to host the application provides Mongo, I would like to use it.
However, because I'm quite inexperienced with Mongo, I have almost no idea of the current state of Mongo-related plugins and tools for Grails. What should I use? As I want to keep domain classes in a SQL database and use Mongo only to store related files (in this case mostly a bunch of PDFs and text documents related to a domain instance), the MongoDB ORM plugin [1] seems too "heavy". Unfortunately, MongoDB ORM is probably the only Mongo plugin for Grails in active development at the moment.
In short, what would be the best plugin/library tool-set for this purpose? The closest match to my need that I've found is the grails-mongo-files plugin [2], which is probably a little outdated, with no further development. So far it seems that I will have to use Mongo's Java driver (or the GMongo wrapper) and write a storage service and taglib myself (which is not necessarily a bad thing).
[1] http://grails.org/plugin/mongodb
[2] https://github.com/quirklabs/grails-mongo-file
There is also the mongodb gridfs plugin. http://grails.org/plugin/mongodb-gridfs
One thing to consider is that gridfs effectively does two calls to mongo, one to retrieve file information and one to retrieve the file. So it might not be a good fit if your files are under 16 megabytes.
Here is a post on how to do this manually if you want to bypass plugins - http://jameswilliams.be/blog/entry/171
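If you do end up going through the Java driver directly, as the question anticipates, a minimal sketch of storing and reading a file with GridFS might look like the following. This assumes the plain MongoDB Java driver; the database name, bucket name and file names are placeholders, and in a Grails app you would wrap this in a service rather than a main method.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.gridfs.GridFSBucket;
import com.mongodb.client.gridfs.GridFSBuckets;
import org.bson.types.ObjectId;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

public class GridFsStorageSketch {
    public static void main(String[] args) throws Exception {
        // Connect to MongoDB (adjust the URI for your PaaS provider).
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoDatabase db = client.getDatabase("myapp");

            // A GridFS "bucket" is the pair of files/chunks collections holding the data.
            GridFSBucket files = GridFSBuckets.create(db, "attachments");

            // Store a PDF and keep its id, e.g. as a field on the related domain instance.
            ObjectId fileId;
            try (InputStream in = new FileInputStream("contract.pdf")) {
                fileId = files.uploadFromStream("contract.pdf", in);
            }

            // Later: stream the file back out, e.g. into an HTTP response.
            try (OutputStream out = new FileOutputStream("contract-copy.pdf")) {
                files.downloadToStream(fileId, out);
            }
        }
    }
}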
I'd like to be able to have an admin app and a client app for my project. Ideally, I'd like to be able to have a shared MongoDB collection. How would I be able to accomplish this?
I tried creating collections with the same name in two different apps, but found that Meteor will keep the data separate. Any idea what I can do? Thanks.
export MONGO_URL=mongodb://localhost:3002/meteor
Then run the Meteor app; this will change the default database Meteor uses, so sharing databases or collections won't be a problem!
For administrative reasons, I would use a separate MongoDB server managed by myself rather than Meteor's internal MongoDB.
A reasonable question, and probably worth more discussion than this answer provides:
The MongoDB connection is handled by the Meteor application process itself, and this is - as far as I have read and understood it - part of Meteor's philosophy, which might be described as: one data source serves the one application it belongs to, but many clients subscribe to it.
With this in mind, combining the "admin" and "client" clients in one application (i.e. your Meteor app) is probably the preferred way.
From a server administration point of view, however, Meteor handles connections such that there is always the default local data source residing in your project directory (.meteor/local/db; try meteor mongo --url to obtain the Mongo connection string while the Meteor application process is running). Nevertheless, one may specify an optional data source string for deployment purposes, as described in these deployment instructions.
So you would need to choose a somewhat creepy kind of "local development deployment" to get your intended setup working. Or you could go and hack the sources and... no, forget it. You probably want your application and clients to take advantage of, e.g., realtime UI updates (publish), and that is why the Meteor application is tied to an "application data source" and vice versa for now. When connecting from another app, events that trigger changes in the model would not be transported across those applications; the MongoDB instance itself, of course, isn't aware of that.
I'm sure the core team won't expose the data source connection to a configuration section for considered reasons unless they extend their architecture with some kind of module concept which provides a common service layer of core Model/Collections abstraction across Meteor instances - at least supporting awareness of publish/subscribe events.
Try this DDP test I hacked together for a way to bridge two apps (server A and B).
Both servers can manipulate data, but data is only stored in one collection on server A.
See this link as well
I'd like to know if it is possible to use Memcached as the default storage for the Varnish cache.
Searching the web I found https://github.com/sodabrew/libvmod-memcached , but the examples I have found so far are just about manually storing/retrieving content in Memcached using VCL rules.
What I'm looking for is memcached as the default storage for Varnish, just like we do with file/memory today.
Is there some way to do this? Thanks in advance.
Please have a look at the Varnish architecture document. You can see the designer had specific ideas about the storage backend (keep everything in memory and let the kernel decide what goes to swap/disk); Memcached doesn't really fit there. Can you explain why Varnish as-is is insufficient and why you want Memcached as the storage backend of Varnish?
If you want a front-end cache based on memcache, there are probably other solutions or you could write one. I wouldn't pick Varnish just for the VCL language, as I think it's a complex language to accomplish proper caching.
I want the approach with the fastest execution time. I'm not comfortable using a web service because I would need to create separate PHP pages and retrieve the data as XML. If you think it's good to use a web service, please tell me why. I want to code my database queries right in my C/Objective-C files.
I've been searching for libraries. I saw Sequel Pro - would I have any problems using this, like licensing issues? I also saw libmysqlclient for Cocoa, but some say it doesn't work well. I've also read about a library developed by Karl Kraft, found here: http://www.karlkraft.com/index.php/2010/06/02/mysql-and-objective-c/ , but I don't know if I can trust it.
I would really appreciate your help.
Definitely build a web service to act as an abstraction layer to your database. Here are some significant reasons in my opinion:
Since you want speed: with a web service you can add caching, so you can (sometimes) essentially eliminate the need to run identical queries.
If you need to change your data model later, you just have to modify the webservice backend and don't have to update your app.
You can better control security by not exposing the database to the world, and keep it safe behind the web service.
Your database credentials should not be stored in an app. What if you needed to change those?
I strongly suggest a web service. Hope this helps.
Connect to your DB with PHP and output the result as JSON.
JSON is much better and faster than XML, and it means less coding if you use a JSON framework.
And never, never try to connect to your DB directly from your iPhone, because it is easy to sniff out the requests coming from the iPhone.
Better safe than sorry - keep that in mind.