I expected the following code to complete in about one second.
It executes in about 20 seconds:
$i = 0; do{sleep -Milliseconds 1; $i=$i+1}while($i -lt 1000)
Could you please suggest why? I'm not able to find any clues in the docs.
Thanks in advance!
Calling a cmdlet comes at a cost. Just because you use Start-Sleep -Milliseconds 1 doesn't mean it's going to take 1 ms: the cmdlet has overhead it needs to take care of behind the scenes, like setting up the timer, instantiating objects, etc.
Measure-Command { Start-Sleep -Milliseconds 1 }
# TotalMilliseconds : 25.1157
See the above: even though I told it to sleep for only 1 ms, it still took about 25 ms because of the overhead. This overhead won't be exactly the same every time, but you should always expect there to be some.
On my computer, it seems to average about 16ms of overhead per call. So if you run that 1000 times, then on average, it's going to take 16 seconds to run, just for the sleep alone.
I obtained the average by running this a few times:
Measure-Command { 1..100 | % { Start-Sleep -Milliseconds 1 } }
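To turn that into a per-call figure, you can time a batch and divide by the batch size. A quick sketch (the exact numbers will vary from machine to machine and between PowerShell versions):
$batch = 100
$elapsed = Measure-Command { 1..$batch | ForEach-Object { Start-Sleep -Milliseconds 1 } }
# Divide the total by the number of calls to estimate the average per-call overhead.
'{0:N2} ms per call on average' -f ($elapsed.TotalMilliseconds / $batch)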
It's like driving a car. You don't just hop in a car and go, you need to start it up first, and there's things going on behind the scenes the car needs to do in order to start. And that takes a little bit of time.
Related
I have the following script to measure the current CPU clock rate, from this [link][1].
$MaxClockSpeed = (Get-CimInstance CIM_Processor).MaxClockSpeed
$ProcessorPerformance = (Get-Counter -Counter "\Processor Information(_Total)\% Processor Performance").CounterSamples.CookedValue
$CurrentClockSpeed = $MaxClockSpeed*($ProcessorPerformance/100)
I am looking to test the performance of the CPU to see how far it can go in terms of frequency.
The reason for this is that a few of our hundreds of machines are faulty: when you push them a bit, their clock rate doesn't change and they stay at around 20% utilization. Detecting this would allow us to find them easily via our monitoring system.
Is there a way, programmatically via PowerShell, to run an intensive task during or just before capturing the actual clock speed, to see how far it can get? Something like a loop?
[1]: Unable to get current CPU frequency in Powershell or Python
I found something interesting on this website.
So, to make the CPU work at 100% for a short time, I can use background jobs:
$NumberOfLogicalProcessors = Get-WmiObject win32_processor | Select-Object -ExpandProperty NumberOfLogicalProcessors
ForEach ($core in 1..$NumberOfLogicalProcessors) {
    Start-Job -ScriptBlock {
        $result = 1
        foreach ($loopnumber in 1..2147483647) {
            $result = 1
            foreach ($loopnumber1 in 1..2147483647) {
                $result = 1
                foreach ($number in 1..2147483647) {
                    $result = $result * $number
                }
            }
            $result
        }
    }
}
Read-Host "Press any key to exit..."
Stop-Job *
It usually pushes the CPU to 100% utilization. During that time I can easily run the script:
$MaxClockSpeed = (Get-CimInstance CIM_Processor).MaxClockSpeed
$ProcessorPerformance = (Get-Counter -Counter "\Processor Information(_Total)\% Processor Performance").CounterSamples.CookedValue
$CurrentClockSpeed = $MaxClockSpeed*($ProcessorPerformance/100)
For instance, in my case, when I check $ProcessorPerformance while the jobs are running I get a value like "107.998029830411", which shows my processor works fine when I push it.
N.B. I add Start-Sleep -Seconds 20 while the background jobs are running, because on all the machines it takes around 15 seconds until all the logical processors are running at 100%.
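Putting it together, here is a rough end-to-end sketch of the check (it uses Get-CimInstance instead of the older Get-WmiObject and a simpler busy loop inside the jobs, so treat it as illustrative rather than a drop-in replacement):
# Spin up one busy job per logical processor.
$NumberOfLogicalProcessors = (Get-CimInstance Win32_Processor).NumberOfLogicalProcessors
$jobs = foreach ($core in 1..$NumberOfLogicalProcessors) {
    Start-Job -ScriptBlock { while ($true) { $null = [math]::Sqrt(12345) } }
}

# Give the logical processors time to reach ~100% before sampling.
Start-Sleep -Seconds 20

$MaxClockSpeed = (Get-CimInstance CIM_Processor).MaxClockSpeed
$ProcessorPerformance = (Get-Counter -Counter "\Processor Information(_Total)\% Processor Performance").CounterSamples.CookedValue
$CurrentClockSpeed = $MaxClockSpeed * ($ProcessorPerformance / 100)
"Current clock speed: $CurrentClockSpeed MHz"

# Clean up the background jobs afterwards.
$jobs | Stop-Job
$jobs | Remove-Job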
I have to wait until an element has loaded on the page, using Selenium with PowerShell.
As part of the automation I have to load the portal and click the element. An implicit wait is not good practice.
So please suggest how to do explicit waits with Selenium and PowerShell.
This worked for me:
$seleniumWait = New-Object -TypeName OpenQA.Selenium.Support.UI.WebDriverWait($driver, (New-TimeSpan -Seconds 60))
$seleniumWait.Until([OpenQA.Selenium.Support.UI.ExpectedConditions]::ElementIsVisible([OpenQA.Selenium.By]::Id("idp-discovery-username"))) | Out-Null
$driver.FindElementById("idp-discovery-username").SendKeys($userName)
Find-SeElement -Driver $d -Wait -Timeout 10 -Css "input[name='q']"
https://github.com/adamdriscoll/selenium-powershell
Thanks for your response. That's an implicit wait: it will wait 10 seconds for the CSS element to load, and if the element takes a little longer we land in the catch block. I would like an explicit wait: it should wait up to a time limit for the presence of the element. E.g. if I give a timeout of 100 seconds and the element appears in less than 10 seconds, it moves on immediately; otherwise it waits up to 100 seconds and then fails.
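One way to get exactly that behaviour is to wrap the WebDriverWait call from the answer above in a small helper: it returns as soon as the element becomes visible and only throws after the full timeout. A rough sketch, assuming the Selenium .NET support assemblies are already loaded and $driver is your existing driver (Wait-ForElement is just an illustrative name, not part of any module):
function Wait-ForElement {
    param(
        [Parameter(Mandatory)] $Driver,
        [Parameter(Mandatory)] [string] $Id,
        [int] $TimeoutSeconds = 100
    )
    # Polls until the element is visible; returns the element as soon as it
    # appears, or throws a timeout exception after $TimeoutSeconds.
    $wait = New-Object -TypeName OpenQA.Selenium.Support.UI.WebDriverWait -ArgumentList $Driver, (New-TimeSpan -Seconds $TimeoutSeconds)
    $wait.Until([OpenQA.Selenium.Support.UI.ExpectedConditions]::ElementIsVisible([OpenQA.Selenium.By]::Id($Id)))
}

# Waits up to 100 s, but continues immediately once the field shows up.
$userField = Wait-ForElement -Driver $driver -Id "idp-discovery-username"
$userField.SendKeys($userName)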
When are PowerShell data sections evaluated?
Specifically, are they only ever evaluated once at the point of runtime definition/loading? Or are they evaluated on every execution of the containing function, even if it has already been defined/loaded?
I'm assuming that the containing context is a function or advanced function that will be called multiple times in a single session after being defined/loaded, rather than a script file that would have to be reloaded on every invocation (as far as I understand, anyway).
Script to test for both questions:
(get-date).TimeOfDay.ToString()
Start-Sleep -Milliseconds 100
DATA dat -supportedCommand Get-Date {
    get-date
}
Start-Sleep -Milliseconds 100
(get-date).TimeOfDay.ToString()
Start-Sleep -Milliseconds 100
$dat.TimeOfDay.ToString()
Results (note that the time on the second line is the latest):
12:21:23.3191254
12:21:23.5393705
12:21:23.4306211
From this we can conclude that:
- the data section is evaluated immediately, not lazily;
- the data section is evaluated only once, not on every access.
Data sections would be much more useful if we had control over these mechanics, for example reading a large text file only when needed, or refreshing a variable on every access.
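In the meantime, one workaround for the lazy-loading case is to skip the data section and store the expensive work in a scriptblock that you invoke only when the value is actually needed. A minimal sketch (the file path and variable names are placeholders):
# Nothing is read at definition time; only the scriptblock itself is stored.
$lazyData = { Get-Content -Path '.\big-file.txt' -Raw }

# Later, invoke it (with &) only when the content is actually required.
# Invoking it again re-reads the file, which also covers the "refresh on every access" case.
$content = & $lazyData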
My Tk app has many "wait" windows or pauses in functions that allow time for other background commands to do their job. The problem is that using "after 5000" within a function disables all the buttons in the application. I've found a lot of information; the most helpful was at http://wiki.tcl.tk/808. The first lesson learned is that "after" without a script won't process the event loop, and the second is that vwaits nest.
So, I use the following simple "pause" function in place of "after":
proc pause {ms {waitvar WAITVAR}} {
    global $waitvar
    after $ms "set $waitvar 1"
    puts "waiting $ms for $waitvar"
    vwait $waitvar
    puts "pause $ms returned"
}
button .b -text PressMe -command {pause 5000 but[incr i]}; # everyone waits on this
pack .b
after 0 {pause 1000 var1}; pause 3000 var2; # works as expected
after 0 {pause 3000 var3}; pause 1000 var4; # both return after 3 secs
My button is always responsive, but if pressed, all other vwaits are held up for at least another 5 seconds. And a second press within 5 seconds also delays the first one. Understanding that vwaits are nested, this is now expected and not really problematic.
This seems almost too simple a solution, so I'd like to get comments as to what issues I might not have thought of.
You've listed the main issue, that a vwait call will merrily nest inside another vwait call. (They're implemented using a recursive call to the event loop engine, so that's to be expected.) This can be a particular problem when you get something that ends up nesting inside itself; you can blow up the stack this way very easily. The traditional way of fixing this is with careful interlocking, such as disabling the button that invokes this particular callback while you're processing the vwait; that also gives quite a good way to indicate to the user that you're busy.
The other approach (which you might well still use with the button disabling) is to break up the code so that instead of:
proc callback {} {
    puts "do some stuff 1"
    pause 5000
    puts "do some stuff 2"
}
You instead do:
proc callback {} {
    puts "do some stuff 1"
    after 5000 callback2
}

proc callback2 {} {
    puts "do some stuff 2"
}
This allows you to avoid the vwait itself. It's called continuation-passing style programming, and it's pretty common in high-quality Tcl code. It does get a bit tricky though. Consider this looping version:
proc callback {} {
    for {set i 1} {$i <= 5} {incr i} {
        puts "This is iteration $i"
        pause 1000
    }
    puts "all done"
}
In continuation-passing style, you'd do something like this:
proc callback {{i 1}} {
    if {$i <= 5} {
        puts "This is iteration $i"
        after 1000 [list callback [incr i]]
    } else {
        puts "all done"
    }
}
The more local state you've got, the trickier it is to transform the code!
With Tcl/Tk 8.6 you've got some extra techniques.
Firstly, you can use a coroutine to simplify that tricky continuation-passing stuff.
proc callback {} {
    coroutine c[incr ::coroutines] apply {{} {
        for {set i 1} {$i <= 5} {incr i} {
            puts "This is iteration $i"
            after 1000 [info coroutine]
            yield
        }
        puts "all done"
    }}
}
This is a bit longer, but is much easier as the size and complexity of the state increases.
The other new 8.6 facility is the tk busy command, which can be used to make convenient modal dialogs that you can't interact with while some operation is happening (via clever tricks with invisible windows). It's still up to your code to ensure that the user is told that things are busy, again by marking things disabled, etc., but tk busy can make it much easier to implement (and can help avoid the nest of little tricky problems with grab).
I have a strange problem with my PowerShell script: I want to measure the average time it takes to download a page. I wrote a script that runs frequently, but sometimes it returns 0, which would mean it downloaded the site in 0 ms. If I modify the script to save the whole site to a file whenever the download time is about 0 ms, it doesn't save anything. I'm wondering whether I'm doing something wrong, or whether the PowerShell function just isn't accurate enough to measure such "small" times.
P.S. The other "good" results are about 4-9 ms.
Here is the part of my script responsible for measuring the download time:
$StartTime = Get-Date
$PageDownload = $Request.DownloadString("mypage.com")
$TimeTaken = ((Get-Date) - $StartTime).TotalMilliseconds
Get-Date should be as precise as the system clock is.
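If you want finer resolution than Get-Date offers, the .NET Stopwatch class (which is backed by the high-resolution performance counter) is a drop-in alternative for the timing part. A small sketch, reusing the $Request object from your snippet:
$sw = [System.Diagnostics.Stopwatch]::StartNew()
$PageDownload = $Request.DownloadString("http://mypage.com")
$sw.Stop()
$TimeTaken = $sw.Elapsed.TotalMilliseconds   # fractional milliseconds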
There could be web caching going on. Unfortunately, disabling caching for WebClient is not possible, from what I see elsewhere. The "do it right" method is to construct your own HTTP request with the TcpClient class, but that's also pretty complex.
One easy way to make sure you're not being cached is to append an arbitrary value as a query-string parameter. It's a hack, but it is often enough to fool a cache. So, instead of:
"http://mypage.com"
You use:
"http://mypage.com?someUnusedValueName=$([System.Environment]::TickCount)"