shared_buffers > 20MB cannot start database? - postgresql

I am using a Postgres database and want to tweak the parameters. The problem is that when I set shared_buffers to a value larger than 20MB, my server won't start. I am running it on Ubuntu 8.10 with 1GB of RAM. Any ideas?

I imagine you are hitting the kernel's limits on shared memory.
Check /proc/sys/kernel/shmall and /proc/sys/kernel/shmmax.
More info here:
http://www.redhat.com/docs/manuals/database/RHDB-7.1.3-Manual/admin_user/kernel-resources.html
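As a rough sketch (the numbers below are example values for a 1GB machine; shmmax is in bytes and shmall in pages), you could check and raise the limits like this:

# Check the current kernel limits
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmall

# Raise shmmax to 128 MB so a larger shared_buffers fits (example value)
sudo sysctl -w kernel.shmmax=134217728
sudo sysctl -w kernel.shmall=2097152

# Persist the change across reboots
echo "kernel.shmmax = 134217728" | sudo tee -a /etc/sysctl.conf
echo "kernel.shmall = 2097152" | sudo tee -a /etc/sysctl.conf

On PostgreSQL versions of that era the whole buffer pool lives in one System V shared memory segment, so shmmax needs to be comfortably larger than the shared_buffers value you want to set.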

Related

Low RAM usage in a Postgres CloudSQL instance?

I have a production environment set up with a Postgres CloudSQL instance. My database is around 30GB, and I have 8GB of RAM on the master and 16GB on the slave. One weird thing is that memory usage on both the master and the slave is stuck at 43%. I am not sure what the reason for this is. Can anyone help with this?
I cannot tell exactly what number the graph represents, but I assume it is allocated memory.
Then that would be fine, because the "free" RAM is actually used by the kernel to cache files, and PostgreSQL uses that memory indirectly via the kernel cache.
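If you want to see the split from the database side, you can compare the memory PostgreSQL allocates itself with the kernel cache it is told to expect; a minimal sketch, assuming you can connect with psql:

psql -c "SHOW shared_buffers;"         # memory PostgreSQL allocates for its own buffer cache
psql -c "SHOW effective_cache_size;"   # planner hint for how much kernel/file cache is expected

Only the first number shows up as memory "used" by the database; the second is not an allocation at all, just an estimate of the cache the kernel provides.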

Jenkins and PostgreSQL are consuming a lot of memory

We have a data warehouse server running on Debian Linux; we are using PostgreSQL, Jenkins, and Python.
For the past few days, Jenkins and Postgres have been consuming a lot of memory. I have tried everything I could find on Google, but the issue is still there.
Can anyone give me a lead on how to reduce this memory consumption? It would be very helpful.
Below is the output from free -m:
              total        used        free      shared  buff/cache   available
Mem:          63805        9152         429       16780       54223       37166
Swap:             0           0           0
Below is the postgresql.conf file.
Below is the system configuration.
Results from htop:
Please don't post text as images. It is hard to read and process.
I don't see your problem.
Your machine has 64 GB of RAM: 16 GB are used for PostgreSQL shared memory as you configured, 9 GB are private memory used by processes, and 37 GB are free (the available entry).
Linux uses available memory for the file system cache, which boosts PostgreSQL performance. The low value for free just means that the cache is in use.
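If you want to see which processes account for the roughly 9 GB of private memory, a quick sketch (RSS counts shared pages once per process, so summing it over many PostgreSQL backends overstates the total):

# Largest resident-memory consumers first
ps -eo rss,comm --sort=-rss | head -n 15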
For Jenkins, run it with these Java options:
JAVA_OPTS=-Xms200m -Xmx300m -XX:PermSize=68m -XX:MaxPermSize=100m
For Postgres, start it with the option:
-c shared_buffers=256MB
These are the values I use on a small homelab with 8GB of memory; you may want to increase them to match your hardware.
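Where exactly these settings go depends on how the services were installed; the locations below are assumptions for a typical Debian setup, so check your own init/systemd configuration:

# Jenkins (Debian package): /etc/default/jenkins usually defines the JVM arguments.
# Note the PermSize options only apply to Java 7 and earlier; Java 8+ ignores them.
JAVA_ARGS="-Xms200m -Xmx300m"

# PostgreSQL: rather than passing -c on the command line, set the value in
# postgresql.conf and restart the server for it to take effect.
shared_buffers = 256MB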

MongoDB in cloud hosting: benefits

I'm still fighting with MongoDB, and I don't think this war will end soon.
My database has a size of 15.95 GB:
Objects: 9,963,099;
Data Size: 4.65 GB;
Storage Size: 7.21 GB;
Extents: 269;
Indexes: 19;
Index Size: 1.68 GB.
Powered by:
Quad Xeon E3-1220, 4 × 3.10 GHz / 8GB
A dedicated server is too expensive for me.
On a VPS with 6GB of memory, the database cannot be imported.
Should I migrate to a cloud service?
https://www.dotcloud.com/pricing.html
I tried to pick a plan, but the maximum there is 4GB of memory for MongoDB (USD 552.96/month o_0), and with that I cannot even import my database; there is not enough memory.
Or is there something I don't know about cloud services (I have no experience with them)?
Are cloud services not suited to a large MongoDB database?
2 x Xeon 3.60 GHz, 2M Cache, 800 MHz FSB / 12GB
http://support.dell.com/support/edocs/systems/pe1850/en/UG/p1295aa.htm
Will my database work on that server?
This is of course all fun and good experience for development, but it is already beginning to pall... =]
You shouldn't have an issue with a DB of this size. We were running a MongoDB instance on dotCloud with hundreds of GB of data. It may just be because dotCloud only allows 10GB of disk space per service by default.
We were able to back up and restore that instance with 4GB of RAM, albeit that it took several hours.
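For reference, the dump/restore itself is just mongodump and mongorestore; a minimal sketch with placeholder host and database names:

# Dump the database from the current server into a local directory
mongodump --host old-server.example.com --db mydb --out /backup/mydb-dump

# Restore it into the new instance
mongorestore --host new-server.example.com --db mydb /backup/mydb-dump/mydb

mongodump reads documents through the normal query path, so the whole data set does not have to fit in RAM; that is why a restore on 4GB of RAM is slow but possible.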
I would suggest you email them directly at support@dotcloud.com to get help increasing the disk allocation of your instance.
You can also consider ObjectRocket, which is MongoDB as a service. For a 20GB database the price is $149 per month - http://www.objectrocket.com/pricing

What is the node/shard limit for MongoDB on a 64-bit machine?

We are currently setting up a MongoDB database for our environment. We are running into specific collections which will initially be more than 2GB in size.
Our deployment environment is a 64-bit Ubuntu machine.
We are trying to find out what the size limits are for a specific collection and for a shard in a MongoDB sharding environment.
As far as I know, there is no limit to the size of a collection within MongoDB. The only limit would be the amount of disk space available to you. In the case of sharding, it would be the total amount of disk space available on all shards. And according to the docs, you can only have 1000 shards.
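If you want to keep an eye on how close you are to those disk limits, a quick sketch using the mongo shell (database and collection names are placeholders):

# Size, storage size and index size of one collection
mongo mydb --eval "printjson(db.mycollection.stats())"

# How chunks are currently distributed across shards
mongo --eval "sh.status()"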

NorthScale memcached RAM usage problem?

We started using memcached on the test server for our social media project and are having some problems with RAM usage.
We created a cluster with one server node running a single cache bucket sized 128 MB, but when we check memcached.exe's RAM usage in Task Manager, it rises continuously by 1 MB per second.
Is there any workaround for this?
Thanks!
If you're using our 1.0.3 product (the current version of our memcached server), there is a known issue where deleting the default bucket causes a memory leak. Can you let me know whether you deleted the default bucket?
Also, we just released beta 4 of our 1.6.0 product, which supports both Membase and Memcached buckets. I would certainly appreciate you taking a look and trying it out. I know it has fixed the memory leak issue.
Thanks so much.
Perry
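If you want to track the growth without relying on Task Manager, the memcached text protocol exposes memory counters; a sketch using netcat from any machine that can reach the server (the host name is a placeholder):

# "bytes" is memory used for cached items, "limit_maxbytes" is the configured
# bucket size (128 MB here); a process footprint that keeps climbing while
# "bytes" stays flat points at a leak rather than normal cache fill.
printf "stats\nquit\n" | nc my-memcached-host 11211 | grep -E "STAT (bytes|limit_maxbytes) "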