I'm using Windows 10, and I need to check CPU usage and memory usage for a PowerShell script that is scheduled to run every 3 minutes.
I have created a data collector set with the following details.
I use perfmon. To monitor CPU usage I have added:
\Process(powershell_12345)\% Processor Time
To monitor memory usage I have added:
\Memory\% Committed Bytes In Use
\Memory\Committed Bytes
The problem is that every time the PowerShell script is triggered by the scheduler, a new PID is created, and the process name has the PID appended, like powershell_<PID>.
If I add only that specific PowerShell instance, it is monitored only while that instance is running, not for the entire day.
How do I use perfmon to monitor powershell.exe for a whole day?
This should probably be on Server Fault rather than Stack Overflow, but the easiest way to do this is to monitor all process instances and then go back and pick out the PowerShell instance when you look through the data later. The other option would be to programmatically create the data collector set as the first step in your script (more info on that here). That's a lot more work, but you'll end up with cleaner data, so it depends on what's important to you.
A final option, if you'd like to avoid using perfmon at all, is to take a look at this function I created for my blog about a year ago. In all of my testing its results correlate quite well with those seen in perfmon. If you're interested in going that way, let me know and I can show you how to set it up as a job and keep it running in a loop while your script runs, to generate the data you are looking for.
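If you end up polling the counters yourself with Get-Counter instead of a data collector set, a minimal sketch might look like the following (this is not the blog function mentioned above; the 15-second interval and the output path are just assumptions):

$counters = @(
    '\Process(powershell*)\% Processor Time',
    '\Memory\% Committed Bytes In Use',
    '\Memory\Committed Bytes'
)

# 5760 samples at 15-second intervals covers 24 hours.
Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 5760 -ErrorAction SilentlyContinue |
    ForEach-Object {
        foreach ($sample in $_.CounterSamples) {
            [pscustomobject]@{
                Time    = $_.Timestamp
                Counter = $sample.Path
                Value   = [math]::Round($sample.CookedValue, 2)
            }
        }
    } |
    Export-Csv -Path C:\PerfLogs\powershell-usage.csv -NoTypeInformation -Append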
I thought Task Manager used the WMI performance counters to calculate things like disk usage per process. However, running a PowerShell script that queries Win32_PerfFormattedData_PerfProc_Process, I get the right process and some data filled in, but the relevant values I'm interested in are all zero.
What I want to do is simply get (or calculate) the kind of data that Task Manager shows, and then save the data for graphing and/or decision purposes (e.g., for learning purposes, killing off a program that is using 100% CPU or 16 GB of RAM).
I'm running Windows 10 (Pro). I just tried running as administrator and it made no difference. The PowerShell version is 5.1.
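For reference, the sort of query being described looks roughly like this (the process name is just a placeholder):

Get-CimInstance -ClassName Win32_PerfFormattedData_PerfProc_Process -Filter "Name = 'notepad'" |
    Select-Object Name, IDProcess, PercentProcessorTime, WorkingSet,
                  IOReadBytesPersec, IOWriteBytesPersec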
Q: So why are these values zero? And, if Task Manager is getting these values... what API is it using if not WMI?
We have 4 NetApp filers, each with around 50 volumes. We've been experiencing performance issues and tracked it down to how fragmented the data is. We've run some measures (all coming back at 7+) and have been gradually running the WAFL reallocates manually (starting with our VMStores), which is improving the fragmentation level to around 3-4.
As ever, time is short, and I was wondering if anyone had a script that could handle this process, preferably PowerShell or VBScript.
(We have the Data ONTAP cmdlets installed and enabled.)
I know you can schedule scans, but you can't seem to tell the filer to only run one at a time.
I'd ideally like a script which would:
+ Pull a CSV of volumes
+ Measure each volume sequentially, only starting the next measure when the previous one has completed, recording the score
+ Then reallocate each volume sequentially, only starting the next reallocate when the previous one has completed, recording the new score
For your reference:
https://library.netapp.com/ecmdocs/ECMP1196890/html/man1/na_reallocate.1.html
Any help / guidance in this matter would be very much appreciated!
Are you using 7-mode or cDOT?
Anyway, I only know PowerShell. The script shouldn't be long, and it would go something like this (a rough sketch follows the outline):
connect to the NetApp (using Connect-NaController / Connect-NcController)
get all the volumes (using Get-NaVol / Get-NcVol)
get the measurement for each volume (either using foreach, or perhaps the command can be run once and give the information for all the volumes)
export the output to CSV (using Export-Csv)
a foreach loop iterating over all the volumes:
- if the volume is fragmented beyond a given threshold
- run the reallocation (I do not know which command needs to be used)
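To make that concrete, here is a rough, untested sketch of a 7-mode version that drives the reallocate CLI from the na_reallocate man page linked in the question through Invoke-NaSsh; the filer name, threshold, status parsing, and flags are placeholders you would need to verify against your own filers:

Import-Module DataONTAP

Connect-NaController 'filer01' -Credential (Get-Credential) | Out-Null   # placeholder filer name
$threshold = 4                                                           # placeholder optimization threshold

function Wait-NaReallocate ($path) {
    # Poll 'reallocate status' until the job no longer reports as running.
    # The pattern below is a guess at the output format; verify it on your filer.
    while ((Invoke-NaSsh -Command "reallocate status $path") -match 'Reallocating|Measuring') {
        Start-Sleep -Seconds 60
    }
}

$results = foreach ($vol in Get-NaVol) {
    $path = "/vol/$($vol.Name)"
    Invoke-NaSsh -Command "reallocate measure -o $path" | Out-Null
    Wait-NaReallocate $path
    [pscustomobject]@{
        Volume = $vol.Name
        Status = (Invoke-NaSsh -Command "reallocate status -v $path" | Out-String).Trim()
    }
}

$results | Export-Csv -Path .\reallocate-measure.csv -NoTypeInformation

foreach ($r in $results) {
    $path = "/vol/$($r.Volume)"
    # Parsing the optimization score out of the status text is left to you;
    # the regex here is only a placeholder.
    if ($r.Status -match 'optimization[^\d]*(\d+)' -and [int]$Matches[1] -gt $threshold) {
        Invoke-NaSsh -Command "reallocate start -o $path" | Out-Null
        Wait-NaReallocate $path
    }
}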
If you want this thing to run forever, just put it all inside a while loop. If you are going to schedule it instead, you should rerun the checks each time so you get a new CSV with the new measurements.
Disclaimer:
I am not familiar with the reallocation process nor with the behavior of its PowerShell commands. The post should give you pretty much everything that needs to be done, but I was only using common sense.
Perhaps the reallocation command only starts the reallocation process and lets it run in the background, which would result in all of the reallocations running simultaneously. If so, a while loop is needed inside the if statement, using another command to poll the status until it completes.
You should try running this on a single volume, and then on a list of a few volumes, to make sure it runs the way you want it to.
What's the best solution for using Node.js and Redis to create an uptime monitoring system? Can I use Redis as a queue even though it may not be the best way to store the information (maybe MongoDB is)?
It seems pretty simple, but needing more than one server to confirm that a host is really down, and making everything work together, is not so easy.
To monitor uptime, you would use a cron job on the system. On each run, you would check whether the host is up and how long the check takes, and in that script you would save your data in Redis.
To do this in Node.js, you would create a script that checks the status of the server: just make an HTTP request to the server (or a ping, whatever) and record whether it fails. Then record the result in Redis. How you do it does not matter much, because the script (if you run the cron every 30 seconds) has 30 seconds before the next run, so you don't have to worry about getting your query to the server in time. How you save your data is up to you; in this case even MySQL would work (if you are only monitoring a small number of sites).
More on cron at Wikipedia.
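A minimal sketch of what that check script might look like, using the node-redis v4 client (the target URL, the Redis key, and the cron entry are placeholders):

// crontab entry (every minute): * * * * * /usr/bin/node /path/to/check.js
const http = require('http');
const { createClient } = require('redis');

const TARGET = 'http://example.com/';   // host to monitor (placeholder)

async function check() {
  const redis = createClient();         // assumes Redis on localhost:6379
  await redis.connect();

  const startedAt = Date.now();
  const result = await new Promise((resolve) => {
    const req = http.get(TARGET, (res) => {
      res.resume();                     // drain the response body
      resolve({ up: res.statusCode < 500, ms: Date.now() - startedAt });
    });
    req.on('error', () => resolve({ up: false, ms: Date.now() - startedAt }));
    req.setTimeout(10000, () => req.destroy(new Error('timeout')));
  });

  // One sample per check, appended to a per-host list.
  await redis.rPush('uptime:example.com', JSON.stringify({ t: startedAt, ...result }));
  await redis.quit();
}

check().catch((err) => { console.error(err); process.exit(1); });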
"Can I use Redis as a queue even though it may not be the best way to store the information (maybe MongoDB is)?"
You can (and should) use Redis as your queue. It is going to be extremely fast.
I also think it is going to be a very good option to save the information inside Redis. Unfortunately, Redis does not do any timing (yet). I think you could/should use Beanstalkd to put messages on the queue that get delivered when needed (every x seconds). I also think cron is not a very good idea, because you would need a lot of cron jobs, and with a queue you can also do your work faster (share the load among multiple processes).
Also, I don't think you need that much memory to keep everything in memory (which makes the site fast), because the dataset is going to be relatively simple. Even if you aren't able to fit the entire dataset in memory (it would be smart to get more memory, if you ask me), you can rely on Redis's virtual memory.
"It seems pretty simple, but needing more than one server to confirm that a host is really down, and making everything work together, is not so easy."
Sharding/replication is what I think you should read up on to solve this (hard) problem. Luckily Redis supports replication (sharding can also be achieved), and MongoDB supports sharding/replication out of the box. To be honest, I don't think you need sharding yet, and your dataset is rather simple, so Redis is going to be faster:
http://redis.io/topics/replication
http://www.mongodb.org/display/DOCS/Sharding+Introduction
http://www.mongodb.org/display/DOCS/Replication
http://ngchi.wordpress.com/2010/08/23/towards-auto-sharding-in-your-node-js-app/
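For reference, pointing a second Redis instance at a master is a single directive in redis.conf (called slaveof in the versions current at the time, replicaof in newer releases); the address below is a placeholder:

# redis.conf on the replica
slaveof 192.0.2.10 6379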
I am preparing a small app that will aggregate data on users of my website (via socket.io). I want to insert all the data into my MongoDB every hour.
What is the best way to do that? setInterval(60000) seems to be a little bit lame :)
You can use cron, for example, and run your Node.js app as a scheduled job.
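For example, a crontab entry that runs an aggregation script at the top of every hour could look like this (the node and script paths are placeholders):

0 * * * * /usr/bin/node /path/to/aggregate.js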
EDIT:
In the case where the program has to run continuously, setTimeout is probably one of the few possible choices (and it is quite simple to implement). Otherwise, you can offload your data to some temporary storage system, for example Redis, and then regularly run another Node.js program to move the data; however, this introduces a new dependency on another DB system and increases complexity, depending on your scenario. Redis can also act as a kind of failsafe here, in case your main Node.js app is unexpectedly terminated and would otherwise lose part or all of your data batch.
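A rough sketch of the setTimeout variant, flushing an in-memory buffer to MongoDB once an hour (the database/collection names and connection string are placeholders; assumes the official mongodb driver):

const { MongoClient } = require('mongodb');

const ONE_HOUR = 60 * 60 * 1000;
const buffer = [];                      // filled by your socket.io handlers

function record(event) {
  buffer.push({ ...event, receivedAt: new Date() });
}

async function flush() {
  try {
    if (buffer.length > 0) {
      const client = await MongoClient.connect('mongodb://localhost:27017');
      try {
        const batch = buffer.splice(0, buffer.length);
        await client.db('metrics').collection('user_events').insertMany(batch);
      } finally {
        await client.close();
      }
    }
  } catch (err) {
    console.error('hourly flush failed', err);   // a real app would retry or re-buffer the batch
  } finally {
    setTimeout(flush, ONE_HOUR);                 // re-arm instead of using setInterval
  }
}

setTimeout(flush, ONE_HOUR);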
You should aggregate in real time, not once per hour.
I'd take a look at this presentation by BuddyMedia to see how they are doing real time aggregation down to the minute. I am using an adapted version of this approach for my realtime metrics and it works wonderfully.
http://www.slideshare.net/pstokes2/social-analytics-with-mongodb
Why not just hit the server with a curl request that triggers the database write? You can put the command on an hourly cron job and listen on a local port.
You could have Mongo store the last time you copied your data, and each time any request comes in you could check how long it has been since you last copied it.
Or you could try setInterval(checkRestore, 60000) for once-a-minute checks. checkRestore() would query the server to see if the last update time is more than an hour old. There are a few ways to do that.
An easy way to store the date is to just store it as the value of Date.now() (https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Date) and then check for something like db.logs.find({lastUpdate:{$lt:Date.now()-3600000}}).
I think I confused a few different solutions there, but hopefully something like that will work!
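One way to read that suggestion, as a hedged sketch (the database and collection names are placeholders; assumes the official mongodb driver):

const { MongoClient } = require('mongodb');

const ONE_HOUR = 60 * 60 * 1000;

async function checkRestore(db) {
  const state = await db.collection('logs').findOne({ _id: 'aggregation' });
  if (!state || state.lastUpdate < Date.now() - ONE_HOUR) {
    // ...copy/aggregate the buffered data here...
    await db.collection('logs').updateOne(
      { _id: 'aggregation' },
      { $set: { lastUpdate: Date.now() } },
      { upsert: true }
    );
  }
}

MongoClient.connect('mongodb://localhost:27017').then((client) => {
  const db = client.db('metrics');
  setInterval(() => checkRestore(db).catch(console.error), 60000);
});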
If you're using Node, a nice cron-like tool to use is Forever. It uses the same cron patterns to handle repetition of jobs.
I am wondering how to make a process run at the command line use less processing power. The problem I'm having is that the process is basically taking over the CPU, and taking MySQL and the rest of the server with it. Everything is becoming very slow.
I have used nice before but haven't had much luck with it. If it is the answer, how would you use it?
I have also thought of putting in sleep commands, but it'll still be using up memory so it's not the best option.
Is there another solution?
It doesn't matter to me how long it runs for, within reason.
If it makes a difference, the script is a PHP script, but I'm running it at the command line as it already takes 30+ minutes to run.
Edit: the process is a migration script, so I really don't want to spend too much time optimizing it, as it only needs to be run for testing purposes and once to go live. Just for testing, though, it keeps bringing the server to pretty much a halt, and it's a shared server.
The best you can really do without modifying the program is to change the nice value to the maximum value using nice or renice. Your best bet is probably to profile the program to find out where it is spending most of its time or using most of its memory, and try to find a more efficient algorithm for what you are trying to do. For example, if you are operating on a large result set from MySQL, you may want to process records one at a time instead of loading the entire result set into memory, or perhaps you can optimize your queries or the processing being performed on the results.
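For example, starting the script at the lowest CPU priority, or lowering an already-running process, looks like this (the script name and PID are placeholders):

nice -n 19 php migrate.php
renice -n 19 -p 12345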
You should use nice with a niceness of 19; this makes the process very unlikely to run when there are other processes waiting for the CPU.
nice -n 19 <command>
Be sure that the program does not have busy waits and also check the I/O wait time.
Which process is actually taking up the CPU? PHP or MySQL? If it's MySQL, 'nice' won't help at all (since the server is not 'nice'd up).
If it's MySQL in general you have to look at your queries and MySQL tuning as to why those queries are slamming the server.
Slamming your MySQL server process can show up as "the whole system being slow" if your primary view of the system is through MySQL.
You should also consider whether the command-line process is I/O intensive. That can be adjusted on some Linux distros using the 'ionice' command, though its usage is not nearly as simple as the CPU 'nice' command.
Basic usage:
ionice -n7 cmd
will run 'cmd' using 'best effort' scheduler at the lowest priority. See the man page for more usage details.
Using CPU cycles alone shouldn't take over the rest of the system. You can show this by doing:
while true; do :; done
This is an infinite loop and will use as many CPU cycles as it can get (stop it with ^C). You can use top to verify that it is doing its job. I am quite sure that this won't significantly affect the overall performance of your system to the point where MySQL dies.
However, if your PHP script is allocating a lot of memory, that certainly can make a difference. Linux has a tendency to go around killing processes when the system starts to run out of memory.
I would narrow down the problem and be sure of the cause, before looking for a solution.
You could mount your server's interesting directory/filesystem/whatever on another machine via NFS and run the script there (I know, this means avoiding the problem and is not really practical :| ).