RAM configurations on a computer - upgrade

I have an old computer with 3GB of RAM (3 x 1GB at 400MHz) and I want to add 2GB, but I only have one RAM slot left. Will this configuration work, or do I have to go with 4 x 1GB?
Thanks

From past experience this should work; however, that was with a newer RAM module, so I can't say for certain how older ones will behave. Don't be too surprised if it doesn't work!
Good luck!

Related

Why Won't Heroku Postgres Cache Hit Rate Go Up?

I am migrating a database to Heroku. I am using pg:diagnose to try to ensure that the database will run smoothly.
Yesterday I noted that my "overall cache hit rate" was around 94%, which is lower than the recommended 99%. My database was running on the "Premium 3" tier which has 15 GB of RAM. So I decided to upgrade to a plan with more RAM, hoping this would lead to a higher cache hit rate. I switched to "Standard 4", which has more than double the RAM. The cache hit rate was low at first, but that was because it was cold.
But now it's the next day, the cache is warm, and my "overall cache hit rate" is back to 94%, right where it started! I must have missed something - I doubled the RAM but I'm not getting any more cache hits?
I would consider upgrading to a yet higher plan, but upgrading plans doesn't seem to help. My data size is 38.9 GB, and my current plan has 30.5 GB of RAM.
Thanks in advance to anyone who can help me understand what's going on here!
The cache hit rate you are looking at from pg:diagnose seems to be measured about the same way that PostgreSQL itself would derive it: everything found in shared_buffers counts as a hit, and everything else counts as a miss. But many of those misses could also be served from memory - it would just be the kernel's filesystem cache rather than PostgreSQL's shared_buffers. From a performance perspective those should count as hits too, but there is no mechanism to count them as such.
I don't know how Heroku manages shared_buffers. If shared_buffers stayed the same when you increased the instance size, then you would expect the reported hit rate to also stay the same, even if the true hit rate increased (i.e. more of the buffer misses are being served out of the filesystem cache rather than truly being read from disk).
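For reference, here is a minimal sketch (assuming psycopg2 is installed and a DATABASE_URL connection string is set; the exact query pg:diagnose runs is not published) of computing the shared_buffers-only hit rate from pg_statio_user_tables. Blocks served from the OS filesystem cache still show up as "reads" here, which is why the number can stay flat even after adding RAM.

```python
# Minimal sketch, assuming psycopg2 and a DATABASE_URL environment variable.
# Counts only shared_buffers hits; filesystem-cache hits still appear as reads.
import os
import psycopg2

HIT_RATE_SQL = """
SELECT sum(heap_blks_hit)::float
       / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0)
FROM pg_statio_user_tables;
"""

conn = psycopg2.connect(os.environ["DATABASE_URL"])
try:
    with conn.cursor() as cur:
        cur.execute(HIT_RATE_SQL)
        print("table cache hit rate (shared_buffers only):", cur.fetchone()[0])
finally:
    conn.close()
```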

Can't map file memory-mongo requires 64 bit build for larger datasets

I have a sharded cluster across 3 systems.
While inserting I get the error message:
cant map file memory-mongo requires 64 bit build for larger datasets
I know that 32-bit machines have a data size limit of 2 GB.
I have a few questions to ask.
Is the 2 GB limit per system? My sharding is done across 3 systems, so would the total limit be 2 GB or 6 GB?
Even though sharding is set up, all the data is being stored on a single system instead of being distributed across the three shards. Why?
Does sharding play any role in increasing the data size limit?
Does chunk size play any vital role in performance?
I would not recommend doing anything with 32-bit MongoDB beyond running it on a development machine where you perhaps cannot run 64-bit. Once you hit the limit, the file becomes unusable.
The documentation states "Use 64 bit for production. This is important as if you hit the mmap size limit (exact limit varies but less than 2GB) you will be unable to write to the database (analogous to a disk full condition)."
Sharding is all about scaling out your data set across multiple nodes, so in answer to your question: yes, you have increased the possible size of your data set. Remember, though, that namespaces and indexes also take up space.
You haven't specified where your mongos resides. Where are you seeing the error from - a mongod or the mongos? I suspect it's a mongod, which would seem to indicate that all your data is going to that one mongod. I believe you need to look at pre-splitting the chunks - http://docs.mongodb.org/manual/administration/sharding/#splitting-chunks.
If you have a mongos, what does sh.status() return? Are chunks spread across all of the mongods? (One way to check this from the config database is sketched below.)
For testing, I'd recommend a chunk size of 1 MB. In production, it's best to stick with the default of 64 MB unless you have some really important reason not to and you really know what you are doing. If your chunk size is too small, you will be performing splits far too often.
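For what it's worth, here is a minimal sketch (assuming pymongo is installed and a mongos is reachable on localhost:27017, both assumptions) of counting chunks per shard from the config database - roughly the same breakdown that sh.status() prints.

```python
# Minimal sketch, assuming pymongo and a mongos listening on localhost:27017.
# Groups config.chunks by shard, showing whether chunks (and hence data) are
# actually spread across all shards or piled onto a single mongod.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # connect to the mongos, not a mongod
pipeline = [{"$group": {"_id": "$shard", "chunks": {"$sum": 1}}}]

for doc in client.config.chunks.aggregate(pipeline):
    print(doc["_id"], doc["chunks"])
```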

JVM: How much RAM must be provided for best performance - to avoid using the pagefile

I wonder how much memory should be provided for the best JVM performance.
It's obvious that more is better, but I'm afraid of this kind of situation:
For example:
I have 8 GB of RAM in total.
5 GB is already consumed by the OS.
I give my JVM -Xmx6000m.
So my question is: when the JVM consumes more than 3 GB of its allowed 6 GB, will it start hitting the pagefile and slow down? (The pagefile is on an HDD, and every read is much slower than a RAM access.)
Or is the best decision to provide only 3 GB?
Run JConsole or VisualVM to monitor the actual memory usage of your JVM and start tuning from there.
And yes I would prefer to run everything in physical RAM too, but that is more a matter of gut feeling than actual knowledge ;-).
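If you want a starting number rather than a gut feeling, here is a minimal sketch (the psutil dependency, the 20% headroom, and the "MyApp" main class are all assumptions) that derives -Xmx from the physical RAM that is actually free, so the heap is unlikely to spill into the pagefile.

```python
# Minimal sketch: pick an -Xmx from the physical RAM that is currently free.
# Assumptions: the psutil package is installed, 20% headroom is enough for the
# OS and other processes, and "MyApp" stands in for the real main class or jar.
import subprocess
import psutil

free_mb = psutil.virtual_memory().available // (1024 * 1024)
heap_mb = max(256, int(free_mb * 0.8))   # leave headroom so the heap stays in RAM

subprocess.run(["java", f"-Xmx{heap_mb}m", "MyApp"])
```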

Reduce Membase quota per bucket to 5 MB

On Heroku, I notice that they limit my free Memcached bucket (actually Membase) to 5 MB. However, on my own server I cannot set the bucket quota to less than 64 MB (per node, for the Memcached bucket type). For the Membase bucket type, the minimum is even higher: 100 MB.
My server has a humble amount of RAM, and I only need to allocate a very small amount to Memcached. Please advise.
Heroku is running a slightly modified version of our memcached software that lets them keep the bucket overhead very low. Unfortunately the "productized" version has some limits imposed to prevent the software from getting itself into trouble.
Especially for Membase buckets, we need at least 100 MB in order to run safely.
You may be able to reduce/eliminate these limits if you recompile the source, but that wouldn't be a supported configuration.
Perry
Sorry for the delay in getting back to this...
As with any piece of software, there are internal data structures that need RAM to run...that's what gets allocated immediately with Membase.
If you install memcached, it will use as much RAM as you configure it to use...no more, no less.
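For example, here is a minimal sketch (assuming the stock memcached binary is installed locally; the port choice is arbitrary) of starting a plain memcached instance capped at 5 MB:

```python
# Minimal sketch, assuming the memcached binary is installed locally.
# -m caps item memory in megabytes, -p sets the port, -d daemonizes.
import subprocess

subprocess.run(["memcached", "-m", "5", "-p", "11212", "-d"])
```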

What is the suggested number of bytes to map each time for files too large to be memory-mapped at once?

I am opening files using memory mapping. The files are apparently too big (6 GB, on a 32-bit PC) to be mapped in one go. So I am thinking of mapping part of the file each time and adjusting the offset for the next mapping.
Is there an optimal number of bytes for each mapping or is there a way to determine such a figure?
Thanks.
There is no optimal size. With a 32-bit process, there is only 4 GB of address space in total, and usually only 2 GB is available to user-mode processes. That 2 GB is then fragmented by code and data from the EXE and DLLs, heap allocations, thread stacks, and so on. Given this, you will probably not find more than 1 GB of contiguous space to map a file into memory.
The optimal number depends on your app, but I would be concerned about mapping more than 512 MB into a 32-bit process. Even when limiting yourself to 512 MB, you might run into issues depending on your application. Alternatively, if you can go 64-bit, there should be no problem mapping multiple gigabytes of a file into memory - your address space is so large that this shouldn't cause any issues.
You could use an API like VirtualQuery to find the largest contiguous space, but then you're actually forcing out-of-memory errors to occur, as you are removing large amounts of address space.
EDIT: I just realized my answer is Windows-specific, but you didn't say which platform you are discussing. I presume other platforms have similar limiting factors for memory-mapped files.
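To make the chunked approach concrete, here is a minimal sketch (the 256 MB window size and the newline count as a stand-in for real per-window work are assumptions) that maps a large file one window at a time. Python's mmap works on both Windows and POSIX, so the same idea applies on either platform.

```python
# Minimal sketch: map a file too large for one mapping in fixed-size windows.
# Assumptions: 256 MB windows (well under the contiguous space a 32-bit process
# can usually find); counting newlines stands in for real per-window processing.
import mmap
import os

CHUNK_BYTES = 256 * 1024 * 1024

def count_newlines(path):
    size = os.path.getsize(path)
    total = 0
    with open(path, "rb") as f:
        offset = 0
        while offset < size:
            length = min(CHUNK_BYTES, size - offset)
            # offset must be a multiple of mmap.ALLOCATIONGRANULARITY;
            # CHUNK_BYTES is a multiple of it, so every offset here is too.
            with mmap.mmap(f.fileno(), length, access=mmap.ACCESS_READ, offset=offset) as view:
                total += view.count(b"\n")
            offset += length
    return total
```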
Does the file need to be memory mapped?
I've edited 8 GB video files on a 733 MHz PIII (not pleasant, but doable).