We started using memcached on the test server for our social media project and are having some problems with RAM usage.
We have created a cluster with one server node running a single 128 MB cache bucket, but when we check memcached.exe in Task Manager, its RAM usage rises continuously by 1 MB per second.
Is there any workaround for this?
Thanks!
If you're using our 1.0.3 product (the current version of our Memcached server), there is a known issue where deleting the default bucket causes a memory leak. Can you let me know whether you deleted the default bucket?
Also, we just released beta 4 of our 1.6.0 product, which supports both Membase buckets and Memcached buckets. I would certainly appreciate you taking a look and trying it out. I know it has fixed the memory leak issue.
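In the meantime, one way to check whether the growth is happening inside the cache itself is to poll memcached's own stats for a few minutes; if "bytes" stays flat while the process RSS keeps climbing, that points at the leak rather than normal cache fill. Here's a rough sketch using the python-memcached client (the client choice and the default port 11211 are my assumptions):

```python
# Rough sketch: poll memcached's stats to see whether the cache itself
# thinks it is growing, or whether the growth is outside the cache.
# Assumes the python-memcached client and the default port 11211.
import time
import memcache

mc = memcache.Client(["127.0.0.1:11211"])

for _ in range(10):
    for server, stats in mc.get_stats():
        print(server,
              "bytes used:", stats.get("bytes"),
              "limit:", stats.get("limit_maxbytes"),
              "items:", stats.get("curr_items"))
    time.sleep(5)
```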
Thanks so much.
Perry
I currently have a dedicated VPS with 4GB RAM and a 50GB hard disk, and I have a SaaS solution running on it with more than 1500 customers. Now I'm going to upgrade the project's business plan: there will be 25000 customers, with about 500-1000 customers using the project in real time. Right now it takes 5 seconds to fetch Cassandra database records from the server to the application. Then I came across Redis, and it's said that keeping a copy of the data in Redis will make fetches much faster and lower the server overhead (there's a rough sketch of what I mean after this post).
Am I right about this?
If I need to improve the overall performance, can anybody tell me what I need to upgrade?
Can a server with the configuration above handle Cassandra and Redis together?
Thanks in advance.
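Roughly the pattern I have in mind is cache-aside: try Redis first, fall back to Cassandra on a miss, then store the result in Redis with a TTL. Just a sketch; the keyspace, table, key format and the 300-second TTL are placeholders I made up:

```python
# Cache-aside sketch: Redis in front of Cassandra. Keyspace/table names,
# the key format and the 300-second TTL are placeholders.
import json
import redis
from cassandra.cluster import Cluster

r = redis.Redis(host="127.0.0.1", port=6379)
session = Cluster(["127.0.0.1"]).connect("my_keyspace")

def get_customer(customer_id):
    key = "customer:%s" % customer_id
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: no Cassandra round trip
    row = session.execute(
        "SELECT id, name, plan FROM customers WHERE id = %s",
        (customer_id,),
    ).one()
    if row is None:
        return None
    data = {"id": row.id, "name": row.name, "plan": row.plan}
    r.setex(key, 300, json.dumps(data))    # keep the copy for 5 minutes
    return data
```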
A machine with 4GB of RAM will probably only be single-core, so it's too small for any production workload and only suitable for dev usage where you run 1 or 2 transactions per second, mostly for functional testing.
We generally recommend deploying Cassandra on machines with at least 2 cores and 8GB allocated to the heap (so at least 16GB of RAM) for low production loads. For moderate loads, 4 cores and 32GB of RAM is ideal so you can allocate 16GB to the heap.
If you're just at the proof-of-concept stage, there's a tier on DataStax Astra that's free forever and doesn't require a credit card to create an account. I recommend it to most people because you can launch a cluster in a few clicks and you can quickly focus on developing your app. Cheers!
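If you do spin up an Astra database, connecting from the Python driver looks roughly like this; the secure connect bundle path and the client ID/secret are placeholders you download from the Astra console:

```python
# Minimal connection sketch for DataStax Astra with the Python driver.
# The secure connect bundle path and the credentials are placeholders.
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

cloud_config = {"secure_connect_bundle": "/path/to/secure-connect-mydb.zip"}
auth_provider = PlainTextAuthProvider("client_id", "client_secret")

cluster = Cluster(cloud=cloud_config, auth_provider=auth_provider)
session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())
```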
I have a production environment set up with a Postgres Cloud SQL instance. My database is around 30GB, and I have 8GB of RAM on the master and 16GB on the slave. One weird thing I'm seeing is that the memory usage on both master and slave is stuck at 43%. I'm not sure what the reason for this is. Can anyone help?
I cannot tell exactly what number the graph represents, but I assume it is allocated memory.
Then that would be fine, because the "free" RAM is actually used by the kernel to cache files, and PostgreSQL uses that memory indirectly via the kernel cache.
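If you want to see how much of that 43% Postgres has actually been told to claim for itself (as opposed to memory the kernel is using for its file cache), you can read its memory settings. A minimal sketch with psycopg2, where the connection details are placeholders for your instance:

```python
# Print the memory-related settings Postgres is configured with; compare
# shared_buffers against the instance RAM to see what the graph includes.
# Connection parameters are placeholders.
import psycopg2

conn = psycopg2.connect(host="127.0.0.1", dbname="postgres",
                        user="postgres", password="secret")
cur = conn.cursor()
for name in ("shared_buffers", "effective_cache_size", "work_mem",
             "maintenance_work_mem"):
    cur.execute("SHOW " + name)   # SHOW takes no bind parameters
    print(name, "=", cur.fetchone()[0])
cur.close()
conn.close()
```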
I'm still fighting with MongoDB, and I think this war won't end any time soon.
My database has a size of 15.95 GB:
Objects - 9963099
Data Size - 4.65 GB
Storage Size - 7.21 GB
Extents - 269
Indexes - 19
Index Size - 1.68 GB
Powered by:
Quad-core Xeon E3-1220, 4 × 3.10 GHz / 8GB RAM
A dedicated server is too expensive for me.
On a VPS with 6GB of memory, the database won't import.
Should I migrate to a cloud service?
https://www.dotcloud.com/pricing.html
I tried to pick a plan, but the maximum there is 4GB of memory for MongoDB (USD 552.96/month o_0), and with that I can't even import my database because there isn't enough memory.
Or is there something I don't know about cloud services (I have no experience with them)?
Are cloud services just not suited to a large MongoDB database?
2 x Xeon 3.60 GHz, 2M Cache, 800 MHz FSB / 12Gb
http://support.dell.com/support/edocs/systems/pe1850/en/UG/p1295aa.htm
Will my database work on that server?
Of course this is all fun and good development experience, but it's already beginning to pall... =]
You shouldn't have an issue with a DB of this size. We were running a MongoDB instance on Dotcloud with hundreds of GB of data. The problem may just be that Dotcloud only allows 10GB of HDD space per service by default.
We were able to back up and restore that instance on 4GB of RAM, although it took several hours.
I would suggest you email them directly at support@dotcloud.com to get help increasing the HDD allocation of your instance.
You can also consider ObjectRocket, which is MongoDB as a service. For a 20GB database the price is $149 per month: http://www.objectrocket.com/pricing
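As a sanity check before committing to a server size, you can pull the same numbers you listed from dbStats and compare the index size plus your frequently accessed data against the RAM you're considering. A quick sketch with pymongo; the connection string and database name are placeholders:

```python
# Rough rule of thumb: indexes plus the hot working set should fit in RAM.
# Connection string and database name are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://127.0.0.1:27017")
stats = client["mydb"].command("dbStats")

gb = 1024.0 ** 3
print("dataSize:    %.2f GB" % (stats["dataSize"] / gb))
print("storageSize: %.2f GB" % (stats["storageSize"] / gb))
print("indexSize:   %.2f GB" % (stats["indexSize"] / gb))
```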
There are some discussions about the problem of MongoDB memory caching under OpenVZ, but I was unable to find a practical solution. The issue is related to how OpenVZ accounts for memory, since MongoDB does not consume only free memory. I tried to limit virtual memory with the ulimit command, but the problem is that the MongoDB server shuts down when it hits the virtual memory limit (and if virtual memory is unlimited, it shuts down once it has consumed all of the machine's RAM).
Unfortunately, this is an 18-month-old issue and it doesn't look like any resolution is planned.
Here's the JIRA ticket.
If you take a look at the ticket, there was a post from 2 days ago that seems to have some workarounds.
Your approach of using ulimit will definitely fail. In fact, I've had to manually set ulimit to unlimited on some versions of SuSE just to make it work.
I am using a Postgres database and want to tweak its parameters. The problem is that when I set shared_buffers to a value larger than 20MB, my server won't start. I am running Ubuntu 8.10 with 1GB of RAM. Any ideas?
I imagine you are reaching the limit of shared memory regions.
Check /proc/sys/kernel/shmall and /proc/sys/kernel/shmmax.
More info here:
http://www.redhat.com/docs/manuals/database/RHDB-7.1.3-Manual/admin_user/kernel-resources.html
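If you want to compare those limits against the shared_buffers value you're aiming for, something like this works as a rough check (note that shmmax is in bytes while shmall is in pages, and the 24MB target below is only an example):

```python
# Compare the kernel's shared memory limits with a target shared_buffers
# value. shmmax is bytes per segment, shmall is total pages; the 24MB
# target is only an example.
PAGE_SIZE = 4096  # typical x86 Linux page size

with open("/proc/sys/kernel/shmmax") as f:
    shmmax = int(f.read())
with open("/proc/sys/kernel/shmall") as f:
    shmall_bytes = int(f.read()) * PAGE_SIZE

target = 24 * 1024 * 1024  # desired shared_buffers plus some overhead
print("shmmax: %d bytes, shmall: %d bytes" % (shmmax, shmall_bytes))
print("target fits:", target <= shmmax and target <= shmall_bytes)
```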