I'm looking to keep metrics on my Windows Server 2008 R2 box - specifically on some items I watch in Task Manager. I have several processes that appear to stay open and consume a lot of memory.
What's the best way to measure the memory used by these processes and report them?
Thanks in advance!
M
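One possible approach, as a sketch: sample the working set of the processes you care about on a timer and append it to a CSV you can chart later. This assumes Python and psutil are available on the box (any per-process sampler would do), and the process names below are only examples.

# Sketch: periodically record the working set of selected processes to a CSV.
# Process names, file path, and interval are example values.
import csv
import time
from datetime import datetime

import psutil

WATCHED = {"w3wp.exe", "sqlservr.exe"}   # example process names to track
CSV_PATH = "process_memory.csv"
INTERVAL_SECONDS = 60

with open(CSV_PATH, "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        now = datetime.now().isoformat(timespec="seconds")
        for proc in psutil.process_iter(["name", "pid", "memory_info"]):
            if proc.info["name"] in WATCHED:
                # rss corresponds to the working set shown in Task Manager (bytes -> MB)
                rss_mb = proc.info["memory_info"].rss / (1024 * 1024)
                writer.writerow([now, proc.info["name"], proc.info["pid"], round(rss_mb, 1)])
        f.flush()
        time.sleep(INTERVAL_SECONDS)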
Related
I installed MongoDB on a Windows Server 2008 R2 machine. The hotfix KB2731284 is not installed, and I cannot easily restart the server to apply it.
In the hotfix description, I found this message: "You run an application that uses the FlushViewOfFile() function to clean up memory-mapped files from the paged memory pool." (see https://support.microsoft.com/en-us/kb/2731284)
My question is: when is the function FlushViewOfFile() called? My application just writes to a collection and reads data from it. Do I risk any incorrect behavior?
I think you can run MongoDB without applying the hotfix, but I would not recommend it; in the long run you may hit problems. The MongoDB developers have included some fixes in MongoDB to work around the problem.
A detailed description of the problem can be found here and here.
See also this.
On Windows, Memory Mapped File flushes are synchronous operations. When the OS Virtual Memory Manager is asked to flush a memory mapped file, it makes a synchronous write request to the file cache manager in the OS. This causes large I/O stalls on Windows systems with high Disk IO latency, while on Linux the same writes are asynchronous.
The problem becomes critical on high-latency disks like Azure persistent storage (~10 ms). At roughly 10 ms per synchronous write, that caps disk IOPS at around 100, which results in very long background flush times. On low-latency storage (local disks and AWS) the problem is not as visible.
On Windows 7 and Windows Server 2008 R2, applying the hotfix also gives better file allocation performance, which is relevant for MongoDB.
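As for when FlushViewOfFile() actually gets called: MongoDB memory-maps its data files, and flushing such a mapping is what invokes FlushViewOfFile() on Windows, as described above. Here is a rough illustration of the OS mechanism only (not MongoDB's code) using Python's mmap module; the file name is just an example.

# Sketch of the underlying OS mechanism: flushing a memory-mapped file.
# On Windows, CPython's mmap.flush() calls FlushViewOfFile() under the hood.
import mmap

with open("example.dat", "w+b") as f:
    f.write(b"\x00" * 4096)            # give the file some size to map
    f.flush()
    mm = mmap.mmap(f.fileno(), 4096)   # map the file into memory
    mm[:11] = b"hello mmap\n"          # modify pages through the mapping
    mm.flush()                         # synchronous flush -> FlushViewOfFile on Windows
    mm.close()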
I'm using SQL Server 2008 R2. I have a stored procedure that runs bcp via xp_cmdshell. On my laptop with a copy of the database, a job with 50,000 records is almost instant and bcp throughput is 71K rows per second.
I run exactly the same stored procedure on the server and it takes 1 hour 51 minutes, with bcp throughput of 7 rows per second (so roughly 10,000x slower). The query that selects the data runs in under a second on the server, BTW. This happened last week; we restarted the SQL Server instance and it ran pretty quickly again on the server. After about 5 days, the performance got really slow again, but this time restarting the SQL instance didn't help.
My command is:
bcp "exec DBNAME.dbo.SPNAME 224,1 "
queryout "\\Server\path\OUTPUT\11111.txt" -c -t\t -Usa -P"PASSWORD" -SSQLSERVER
If I run activity monitor, I see my stored procedure process and it says RUNNABLE.
The server is on a VM with 4 cores and 28GB RAM.
If I run the same bcp command from a DOS shell, I get the same result.
I'm at a loss where to look now. Anyone got any suggestions?
TIA
Mark
To answer the question of "where to look": because the task you are trying to complete involves distributed resources (I'm assuming so because you are using UNC paths), you have to look into the differences between the environments, and when comparing execution between the server and the laptop, that is just about everything:
Storage (and available storage)
CPU (and available cpu)
Network (and available bandwidth)
Memory (and available memory)
SQL Server version/updates
Maintenance schedules (of which the laptop will likely have none)
Concurrent activity (of which the laptop will likely have none)
The data you seem to have addressed. Can you confirm that the data and database objects are the same? Is the database on the laptop a restore from the server, or, if not, have you manually compared tables and indexes?
If it's not a restore, could the laptop simply have less data?
To troubleshoot, you'll also need much more than Activity Monitor; you'll need Performance Monitor.
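Beyond Performance Monitor, it also helps to see what the bcp session is actually doing while Activity Monitor reports it as RUNNABLE. Here is a rough sketch using Python and pyodbc (an assumption on my part; the same query can be run from SSMS or sqlcmd) that polls sys.dm_exec_requests:

# Sketch: poll the DMVs to see what active sessions are waiting on.
# The connection string, server, and credentials are placeholders.
import time
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=SQLSERVER;DATABASE=DBNAME;UID=sa;PWD=PASSWORD"
)
query = """
SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_time,
       r.cpu_time, r.total_elapsed_time, r.blocking_session_id
FROM sys.dm_exec_requests AS r
WHERE r.session_id <> @@SPID
  AND r.status IN ('running', 'runnable', 'suspended');
"""

for _ in range(10):                          # take a handful of samples
    for row in conn.cursor().execute(query):
        print(tuple(row))
    time.sleep(5)

If wait_type keeps showing something like ASYNC_NETWORK_IO or a PREEMPTIVE_OS_* wait around the xp_cmdshell call, that points toward the UNC path/network side rather than the query itself.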
This is from some time ago (not sure why things like this don't expire on here, but oh well).
We are playing with a multi-tenant architecture based not on partitions but rather on having tons of databases. We decided to run some tests.
Generated 5,000 database schemas, each containing ~100 DB objects: roughly 250k tables and 250k other DB objects (keys, indexes) in total.
Found cons:
Tried to open the list of tables in SQL Management Studio – it took ~10-15 minutes, and Management Studio allocated ~700 MB of RAM.
DB utilities don't work – tried Red Gate, DB Forge, and Adept SQL Diff.
Any advice on managing and running SQL Server like this?
Try using the sqlcmd utility from the command prompt.
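For example (a sketch; the server name, database name, and query are placeholders), the same lookup that hangs Object Explorer can be scripted - shown here wrapped in Python, though running sqlcmd directly works just as well:

# Sketch: run a catalog query through sqlcmd instead of expanding the
# Object Explorer tree. Server and database names are placeholders.
import subprocess

result = subprocess.run(
    ["sqlcmd", "-S", "SQLSERVER", "-d", "tenant_0001", "-E",
     "-Q", "SELECT name FROM sys.tables ORDER BY name;"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)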
You could try writing your own management tool, targeted specifically at what you need using SMO:
Creating SMO Programs - MSDN
That way you could simplify the program, load only what is required, and potentially increase performance.
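SMO itself is a .NET library (the linked MSDN page has C# examples). As a rough sketch of the same idea in Python - load only what you need by querying the catalog views directly instead of letting a GUI enumerate everything - something like this, where pyodbc and the connection details are assumptions:

# Sketch: a targeted "management tool" that pulls only what it needs from the
# catalog views rather than enumerating 250k+ objects up front.
# The connection string is a placeholder.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=SQLSERVER;Trusted_Connection=yes", autocommit=True
)
cur = conn.cursor()

# One row per user database with a count of its tables.
cur.execute("SELECT name FROM sys.databases WHERE database_id > 4 ORDER BY name;")
for (db_name,) in cur.fetchall():
    count_cur = conn.cursor()
    count_cur.execute("SELECT COUNT(*) FROM [" + db_name + "].sys.tables;")
    print(db_name, count_cur.fetchone()[0], "tables")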
Looking for any advice I can get.
I have 16 virtual CPUs all writing to a single remote MongoDB server. The machine that's being written to is a 64-bit machine with 32GB RAM, running Windows Server 2008 R2. After a certain amount of time, all the CPUs stop cold (no gradual performance reduction), and any attempt to get a Remote Desktop Connection hangs.
I'm writing from Python via pymongo, and the insert statement is "[collection].insert([document], safe=True)"
I decided to more actively monitor my server as the distributed write job progressed, remoting in from time to time and checking the Task Manager. What I see is a steady memory creep, from 0.0GB all the way up to 29.9GB, in a fairly linear fashion. My leading theory is therefore that my writes are filling up the memory and eventually overwhelming the machine.
Am I missing something really basic? I'm new to MongoDB, but I remember that when writing to a MySQL database, inserts are typically followed by commits, where it's the commit statement that actually makes sure the record is written. Here I'm not doing any commits...?
Thanks,
Dave
Try it with journaling turned off and see if the problem remains.
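For what it's worth, the insert pattern quoted in the question already behaves like insert-plus-commit: with safe=True (the legacy pymongo 2.x API) each insert waits for the server to acknowledge the write, so there is no separate commit to issue. A minimal sketch of that write loop; the host, database, and collection names are placeholders:

# Sketch of the write loop described above, using the legacy pymongo 2.x API
# quoted in the question. Host, database, and collection names are placeholders.
from pymongo import Connection   # pymongo 2.x; newer versions use MongoClient

conn = Connection("mongodb-host", 27017)
collection = conn["mydb"]["mycollection"]

def write_documents(documents):
    for document in documents:
        # safe=True makes the insert wait for server acknowledgement, so this
        # already acts like "insert + commit"; there is no separate commit call.
        collection.insert(document, safe=True)

# Journaling is a server-side setting (e.g. starting mongod with --nojournal,
# as suggested above), not something toggled from pymongo.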
I have a 4-server ASP.NET farm. I want to use AppFabric as my session state server, but I'm not sure if it will do what I want it to do. Some questions...
1: If some of the nodes crash, is any of the session data lost?
2: Does each server have a copy of the session data in case of failure?
The documentation states that you need to be running Windows Server 2008 Enterprise Edition or above for the "High Availability" features of AppFabric. I am running Windows Server 2008 Standard.
3: Does that mean I need the enterprise edition to have my session data stay safe if some of the nodes fail, or does AppFabric automatically keep the session data copied on all machines in case of failure?
I haven't played much with the session state bits yet, so this is based on AppFabric generally.
If you're not on Enterprise Edition, you can't use high availability :-( Essentially, in a non-HA scenario, each cache is 'tied' to a single node in your cluster, so the answer to your question is - it depends which node crashes. If it's the one that's got the cache on it then yes, you're boned.
If, however, you are in an HA environment, any cache that is created with the Secondaries option switched on has two copies of the cache spread across the nodes, so that if one node goes down, the other copy picks up the load (and another secondary copy is created on another node).
There's quite a good conceptual explanation of HA for AppFabric here.