Specman memory configuration

I have a server with 20GB of RAM available.
I need to run a regression with Specman, and wish to optimize it, to run at least 5 tests in parallel.
I know my RTL needs a static 2GB memory size, but testbench size varies.
How can I control Specman so that one test doesn't take the entire memory at the expense of the others?

The way to let all 5 simulations share the server's memory without running out is to set the optimal_process_size to 3-4G.
Specman's automatic GC mechanism will then do the work and make sure that each process doesn't run out of memory.

You can set the optimal_process_size parameter in order to control the amount of memory used by the simulator. This way you take control of the GC process.

Use config mem to specify Specman's optimal and max process size, for example:
config mem -max_process_size=2000M;
If needed, use the GC debug options to determine optimal values for the GC threshold, increments and disk usage.

You can setenv SPECMAN_MEMORY_FULL_DEBUG.
This environment variable sets a memory debug flag.
This way you can explore your test's memory behavior and set the optimal process size.
Also, try to use 32-bit mode. It usually consumes less memory, though it has overall memory limitations compared to 64-bit mode.
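For example, a minimal sketch of those settings for this setup; the 3500M/4000M figures are illustrative assumptions for 5 jobs on a 20GB host, not recommended values. First in the shell, before launching the run:

setenv SPECMAN_MEMORY_FULL_DEBUG    # csh: enable the memory debug log

and then at the Specman prompt:

config mem -optimal_process_size=3500M;       -- 3-4G per process, per the advice above
config mem -max_process_size=4000M;           -- hard cap so one test can't starve the others
config mem -automatic_gc_settings=STANDARD;   -- keep the automatic GC in charge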

Related

Ballpark value for `--jobs` in `pg_restore` command

I'm using pg_restore to restore a database to its original state for load testing. I see it has a --jobs=number-of-jobs option.
How can I get a ballpark estimate of what this value should be on my machine? I know this is dependent on a bunch of factors, e.g. machine, dataset, but it would be great to get a conceptual starting point.
I'm on a MacBook Pro so maybe I can use the number of physical CPU cores:
sysctl -n hw.physicalcpu
# 10
If there is no concurrent activity at all, don't exceed the number of parallel I/O requests your disk can handle. This applies to the COPY part of the dump, which is likely I/O bound. But a restore also creates indexes, which uses CPU time (in addition to I/O). So you should also not exceed the number of CPU cores available.
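As a rough, concrete starting point under those constraints (the database name and dump path below are placeholders, and the dump must be in custom or directory format for --jobs to apply):

JOBS=$(sysctl -n hw.physicalcpu)                               # 10 on this machine
pg_restore --jobs="$JOBS" --dbname=loadtest_db /path/to/dump   # lower it if the disk saturates first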

Specman: debugging un-accessible memory

We have a huge environment, built from sub environments which are maintained by many users.
When we run a test, we see a GC every 10us. When we use "show mem", we see about 3GB of un-accessible memory; after the GC it is freed.
How can we determine what causes this huge consumption in our memory?
Using iprof mem didn't reveal any "big" memory consumer.
Are you using Specman's automatic GC? You can check by running "config mem" at the Specman prompt and verifying that -automatic_gc_settings=STANDARD. If not, try the automatic GC and see if it makes any difference; if it does, you may need to increase the process size. Are you running in 32-bit or 64-bit mode?
To better understand the problem and assist you, it would be best if you run with the SPECMAN_MEMORY_FULL_DEBUG environment variable set and send Cadence support the resulting log.
If you open a case with Cadence support and send me the number, I can assist you further.
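A sketch of those checks; the first two lines are typed at the Specman prompt, the last in a csh shell before rerunning the test:

config mem                          -- confirm -automatic_gc_settings=STANDARD
show mem                            -- compare accessible vs. un-accessible memory before and after a GC
setenv SPECMAN_MEMORY_FULL_DEBUG    # csh: produce the detailed log to send to Cadence support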
Regards,
Semadar
Customer Support Manager #Cadence

Possible to set a maximum of memory consumption on a powershell script in % or megabyte?

I sometimes have scripts that eat up all the memory. Since I don't want to monitor them all the time or set CPU priority to low manually, I am wondering if there is an option to give a specific script a memory limit (maybe in MB).
Does this option exist?
PowerShell doesn't provide any built-in way to control system resources like the memory used by a script.
Windows does provide a way to limit system resources to groups of processes, you can learn more about that here: http://msdn.microsoft.com/en-us/library/windows/desktop/ms684161(v=vs.85).aspx
If your scripts are consuming too much memory, I'd suggest investigating the memory leak. There are many tools that help track memory leaks. Some are low level (e.g. using !dumpheap from SOS in windbg - http://msdn.microsoft.com/en-us/library/bb190764(v=vs.110).aspx). Others are pretty smart, letting you take multiple snapshots and show you just the newly allocated objects between the snapshots. You can search for ".Net memory profiler" to get an idea of what's available.

How to keep 32 bit mongodb memory usage down on changing dataset

I'm using MongoDB on a 32 bit production system, which sucks but it's out of my control right now. The challenge is to keep the memory usage under ~2.5GB since going over this will cause 32 bit systems to crash.
According to the MongoDB team, the best way to track memory usage is to use your operating system's process tracking tools (i.e. ps or htop on Unix systems, Process Explorer on Windows) for the virtual memory size.
The DB mainly consists of one table which is continually cycling data, i.e. receiving data at regular intervals from sensors, and every day a cron job wipes all data from before the last 3 days. Over a period of time, the memory usage slowly increases. I took some notes over time using db.serverStats(), db.lectura.totalSize() and ps, shown in the chart below. Note that the size of the table in question has reduced in the last month but the memory usage increased nonetheless.
Now, there is some scope for adjustment in how many days of data I store. Today I deleted basically half of the data, and then restarted mongodb, and yet the mem virtual / mem mapped and most importantly memory usage according to ps have hardly changed! Why do these not reduce when I wipe data (and restart)? I read some other questions where people said that mongo isn't really using all the memory that it might appear to be using, and that you can't clear the cache or limit memory use. But then how can I ensure I stay under the 2.5GB limit?
Unless there is a way to stem this dataset-size-irrespective gradual increase in memory usage, it seems to me that the 32-bit version of Mongo is unuseable. Note: I don't mind losing a bit of performance if it solves the problem.
To answer regarding why the mapped and virtual memory usage does not decrease with the deletes, the mapped number is actually what you get when you mmap() the entire set of data files. This does not shrink when you delete records, because although the space is freed up inside the data files, they are not themselves reduced in size - the files are just more empty afterwards.
Virtual will include journal files, and connections, and other non-data related memory usage also, but the same principle applies there. This, and more, is described here:
http://www.mongodb.org/display/DOCS/Checking+Server+Memory+Usage
So, the 2GB storage size limitation on 32-bit will actually apply to the data files whether or not there is data in them. To reclaim deleted space, you will have to run a repair. This is a blocking operation and will require the database to be offline/unavailable while it runs. It will also need up to 2x the original size in free disk space, since it essentially rewrites the data files from scratch.
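For reference, a repair can be started either with db.repairDatabase() from the mongo shell or by restarting mongod with the repair flag; the dbpath below is the default /data/db and may differ on your system:

mongod --dbpath /data/db --repair    # offline repair; needs up to 2x the data size in free disk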
This limitation, and the problems it causes, is why the 32-bit version should not be run in production, it is just not suitable. I would recommend getting onto a 64-bit version as soon as possible.
By the way, neither of these figures (mapped or virtual) actually represents your resident memory usage, which is what you really want to look at. The best way to do this over time is via MMS, which is the free monitoring service provided by 10gen - it will graph virtual, mapped and resident memory for you over time as well as plenty of other stats.
If you want an immediate view, run mongostat and check out the corresponding memory columns (res, mapped, virtual).
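For example, a quick one-off check from a Linux shell (assuming a single mongod process; exact column names vary a little between versions):

mongostat --rowcount 5               # watch the mapped / vsize / res columns
ps -o pid,vsz,rss,comm -C mongod     # vsz = virtual and rss = resident, both in KB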
In general, when using 64-bit builds with essentially unlimited storage, the data will usually greatly exceed the available memory. Therefore, mongod will use all of the available memory it can in terms of resident memory (which is why you should always have swap configured, so that the OOM killer does not come into play).
Once that is used, the OS does not stop allocating memory, it will just have the oldest items paged out to make room for the new data (LRU). In other words, the recycling of memory will be done for you, and the resident memory level will remain fairly constant.
Your options for stretching 32-bit are limited, but you can try some things. The thing that you run out of is address space, and the increases in the sizes of additional database files mean that you would like to avoid crossing over the boundary from "n" files to "n+1". It may be worth structuring your data into more or fewer databases so that you can get the maximum amount of actual data into memory and as little as possible "dead space".
For example, if your database named "mydatabase" consists of the files mydatabase.ns (the namespace file) at 16 MB, mydatabase.0 at 64 MB, mydatabase.1 at 128 MB and mydatabase.2 at 256 MB, then the next file created for this database will be mydatabase.3 at 512 MB. If instead of adding to mydatabase you instead created an additional database "mynewdatabase" it would start life with mynewdatabase.ns at 16 MB and mynewdatabase.0 at 64 MB ... quite a bit smaller than the 512 MB that adding to the original database would be. In fact, you could create 4 new databases for less space than would be consumed by adding a new file to the original database, and because the files are smaller they would be easier to fit into contiguous blocks of memory.
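A rough way to see that file progression on disk, assuming the default dbpath of /data/db (substitute your own path and database names):

ls -lh /data/db/mydatabase.*         # .ns at 16 MB, then 64 MB, 128 MB, 256 MB, 512 MB, ...
du -ch /data/db/mynewdatabase.*      # a fresh database starts at just 16 MB + 64 MB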
It is a well-known recommendation that 32-bit should not be used in production.
Use 64-bit systems.
Period.

Can memcached make full use of multi-core?

Is memcached capable of making full use of multi-core? Or is there any way tuning this?
memcached has "-t" option:
-t <threads>
    Number of threads to use to process incoming requests. This option is only
    meaningful if memcached was compiled with thread support enabled. It is
    typically not useful to set this higher than the number of CPU cores on the
    memcached server. The default is 4.
So I believe it can use all your CPU cores, provided it was compiled with the corresponding option.
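For example, an illustrative invocation that pins the thread count to the core count (the flag values are assumptions; nproc reports logical cores on Linux):

memcached -d -m 2048 -p 11211 -t "$(nproc)"    # -t threads, -m cache size in MB, -d run as daemon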
memcached is multi-threaded by default and has no problem saturating many cores. It's a bit harder to saturate all cores on more massively parallel boxes (e.g. a 256-core CMT box) just because it gets harder to get the data in and out of the network.
If you find areas where some sort of contention is preventing you from saturating cores, file a bug or start a discussion.
Based on this research by Intel, Memcached v1.6 beta cannot scale well on a multicore system. Their experiments show that as core counts increase from 1 to 8, maximum throughput (with a median RTT < 1ms SLA) only doubles.
CAREFUL. This terminology is quite confusing. The memcached man page says the -t option is only good up to the number of cores. However, this is odd because threads and processes are VERY different. Threads have NOTHING to do with the number of cores. Processes can definitely run on more than one core, while threads cannot (unless they call into an OS routine; then they can thread-switch and pack in more than 100% CPU usage). Threads share memory and just depend on an instruction pointer to differentiate who is who. Processes share nothing unless it is explicitly declared as shared ahead of time, and sharing occurs via the OS.
Overall, I want MORE CLARITY from the memcached people about whether their app is multiprocessing or multithreaded, and thus whether it can use more than 100% of a CPU.