Did anyone try monitoring memcached statistics with OpenNMS?
If you did, what did you use?
Since version 1.7.4, there's a memcached monitor bundled with OpenNMS.
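As a quick sanity check, independent of OpenNMS, you can see which statistics memcached actually exposes by querying its plain-text protocol directly (host and port below are the usual defaults; -q support varies by netcat flavor):

    # Ask memcached for its runtime statistics over the text protocol
    # (-q 1 closes the connection one second after stdin EOF)
    echo stats | nc -q 1 127.0.0.1 11211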
I am currently profiling a Tomcat 8.5 application which uses DBCP as the standard connection pool for the Neo4j JDBC driver.
What I do not understand is why the number of sockets opened (and not closed) is so high when I profile the application (with YourKit or JProfiler), while the number is low and stable without profiling. The same behavior is confirmed with the lsof and ls /proc/my_tomcat_pid/fd | wc -l commands.
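For reference, these are the commands I use (my_tomcat_pid is a placeholder for the actual Tomcat process id; the TCP-only lsof variant is just one way to narrow the list down to sockets):

    # Count every open file descriptor of the Tomcat process
    ls /proc/my_tomcat_pid/fd | wc -l
    # List only the open TCP sockets held by that process
    lsof -a -p my_tomcat_pid -i tcp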
So I am wondering if my application is really not releasing Neo4j JDBC connections as it should, or if this is overhead introduced by the profiler.
Any clue?
If you are using the YourKit profiler, you can run the dedicated "Not closed socket connections" inspection (https://www.yourkit.com/docs/java/help/event_inspections.jsp). The profiler will show you where the unclosed sockets were created and opened.
Actually, it was a YourKit bug related to the Neo4j JDBC driver and the Databases probe, and it was quickly fixed by the YourKit team. The fix is available in build 75: https://www.yourkit.com/forum/viewtopic.php?f=3&t=39801
How can I run MongoDB 2.6 and MongoDB 3.0 at the same time on my Linux system?
I need this because I have two projects: one works on MongoDB 2.6 and the other on MongoDB 3.0.
Any help is appreciated!
If you really want to do this, and I recommend you DO NOT, then the best way is to use a container technology; Docker can work quite well, as can LXC or one of the others (see the sketch at the end of this answer).
However, do note that if your server is in the "cloud", there is a high chance it is already virtualised, in which case you will constantly lose power and resources by sub-virtualising everything over and over.
DO NOT put them on the same host without some kind of separation: there is a high chance of resource contention and conflicts that tools like ulimit just cannot solve (well, technically it could, but it would be a hairball).
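For example, with Docker you can run both versions side by side, each in its own container and on its own host port (the container names and the 27018 host port are my own choices; mongo:2.6 and mongo:3.0 are official image tags):

    # MongoDB 2.6 in its own container, reachable on the host at 27017
    docker run -d --name mongo26 -p 27017:27017 mongo:2.6
    # MongoDB 3.0 in a second container, reachable on the host at 27018
    docker run -d --name mongo30 -p 27018:27017 mongo:3.0

Each project then just points at a different port, and the two mongod processes never share data directories or config.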
I have an account with MongoLab for MongoDB, and the constant calls to this remote server from my app slow it down quite a lot. When I run the app locally on my computer against a local mongod instance, it is far, far faster, as would be expected.
When I deploy my app (running on Node/Express), it will run from a VPS on CentOS. I have plenty of storage space available on my VPS; are there any major downsides to running MongoDB locally rather than remotely on MongoLab?
Specs of the VPS:
1024MB RAM
1024MB VSwap
4 CPU Cores @ 3.3GHz+
60GB SSD space
1Gbps Port
3000GB Bandwidth
Nothing apart from the obvious:
You will have to worry about storage yourself. MongoDB does tend to take a lot of disk space, and upgrading storage will probably be harder to manage than letting MongoLab take care of it.
You will have to worry about making sure the Mongo server doesn't crash and is running fine.
You will have scaling issues in the future once the load on your application and your database increases.
Using a "database-as-a-service" like MongoLab frees you from worrying about a lot of hardware/OS/system-level requirements and configuration. Memory optimization? Which file system? Connection limits? Virtualization and IO ops issues? (thanks to Nikolay for pointing that one out)
If your VPS provider doesn't count local traffic against your bandwidth, you can set up another VPS just for MongoDB. That way the server will be closer, so requests will be faster, and it will also have the benefit of being an independent server. It still won't be fully managed like MongoLab, though.
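If you go that route, note that the packaged config typically binds mongod to localhost only. A minimal sketch of the relevant /etc/mongod.conf section (the 10.0.0.5 private address is a made-up example for the interface your app server can reach):

    # /etc/mongod.conf (YAML format, MongoDB 2.6+)
    net:
      port: 27017
      # listen on localhost plus the private interface facing the app server
      bindIp: 127.0.0.1,10.0.0.5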
[ Edit: As Chris pointed out, MongoLab also helps you with your database schema design and bundles MongoDB support with their plans, so that's also nice. ]
Also, this is a good question, but probably not appropriate for Stack Overflow; dba.stackexchange.com and serverfault.com are good places for it.
For my Drupal-based site, I have an architecture with three instances running nginx, PostgreSQL, and Solr, respectively. I'd like to install Memcached. Should I put it on the nginx server or the PostgreSQL server? What are the performance implications?
Memcached is very light on CPU usage, so it is a great candidate to gobble up spare web server RAM. Also, you will scale out your web tier much more than your other tiers, and Memcached clustering can pool that RAM together into one logical cache.
If you have any spare RAM on the DB, it is almost always best for performance to let the DB gobble it up.
TL;DR: let the DB have all of the RAM; colocate Memcached on the web tier.
Source: http://code.google.com/p/memcached/wiki/NewHardware
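For instance, when colocating Memcached on a web box, you can cap exactly how much of the spare RAM it gobbles up (512MB below is an arbitrary figure, and the memcache user is the typical Debian/Ubuntu service account):

    # -d daemonize, -m memory cap in MB, -p TCP port, -u user to run as
    memcached -d -m 512 -p 11211 -u memcache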
The best option is to have a separate server (if you can do that).
Otherwise, it depends on your servers' CPU and memory utilization and on your availability requirements. In general I would avoid running anything extra on a DB server machine, since the DB is the foundation of the system and has to be available and performing well.
If your Solr server does not have high traffic and doesn't use much memory, I'd put it there; Memcached servers are known to be light on CPU. You should also estimate how much memory the Memcached instance will need, to make sure there is enough available on the server.
To be specific, I only have 1GB of free memory and would like to use only 300MB for Redis. How can I configure it so that it uses no more than 300MB of memory?
Out of curiosity, what happens when you try to insert new data and Redis has already used all the memory allocated to it?
maxmemory is the correct configuration option to prevent Redis from using too much RAM.
If an insert causes maxmemory to be exceeded, the insert operation will sometimes fail; Redis will do everything in its power to prevent the operation from failing, though. In newer versions of Redis you can also choose the memory-reclaiming behaviour by setting the maxmemory-policy configuration option.
Also, if you have the virtual memory options turned on, Redis will begin to swap stale data out to disk.
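A minimal redis.conf sketch for the 300MB case from the question (the eviction policy here is just one example; pick whichever fits your data):

    # Cap the memory Redis will use for data
    maxmemory 300mb
    # On hitting the cap, evict least-recently-used keys that have an expiry set
    maxmemory-policy volatile-lru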
More info:
What does Redis do when it runs out of memory?
You can do that using the maxmemory option: maxmemory 314572800 means 300MB.
Since the last answer is from 2011, here is some updated information for users reading in 2019 on Ubuntu 18.04.
The configuration file is located at /etc/redis/redis.conf. If you installed Redis using the default/recommended method, apt install redis-server, the default memory limit is set to 0, which in practice means "no limit". That can be troublesome if you have a limited/small amount of RAM.
To set a custom memory limit, simply edit the configuration file and add maxmemory 1gb as the very first line. Restart the redis service for the change to take effect. To verify the change, use redis-cli config get maxmemory.
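Put together, the steps look like this (1gb is the example limit from above; adjust to taste):

    # Edit the config file and add the line: maxmemory 1gb
    sudo nano /etc/redis/redis.conf
    # Restart the service so the new limit takes effect
    sudo systemctl restart redis-server
    # Verify the running limit
    redis-cli config get maxmemory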
Ubuntu 18.04 users may read more here: How to install and configure REDIS on Ubuntu 18.04