MS Exchange PowerShell command exceeds throttling budget

I'm getting a throttling error while retrieving data from Exchange Online with a PowerShell command, and I'm having trouble understanding what exactly exceeds the policy. I haven't found any documentation, and all the numbers in the error message look fine to me.
Category: 1003 Message:This operation exceeds the throttling budget for policy part 'LocalTime', policy value '3000000', Budget type: 'PowerShell'. Suggested backoff time 226271 ms.
PowerShell BudgetType: PowerShell
ActiveRunspaces: 0/20
Balance: -2960943/2160000/-3000000
PowerShellCmdletsLeft: 397/400
ExchangeCmdletsLeft: 197/200
CmdletTimePeriod: 5
DestructiveCmdletsLeft: 120/120
DestructiveCmdletTimePeriod: 60
QueueDepth: 100
MaxRunspacesTimePeriod: 60
RunSpacesRemaining: 20/20
LastTimeFrameUpdate: 6/12/2022 6:45:06 AM
LastTimeFrameUpdateDestructiveCmdlets: 6/12/2022 6:44:57 AM
LastTimeFrameUpdateMaxRunspaces: 6/12/2022 6:44:57 AM
Locked: True
LockRemaining: 00:03:46.2710274 ;
Computer: )
Reason:OverBudgetException
I want the policy changed to avoid these issues, but I'm not sure what should be changed.
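Until the policy can be changed, one workaround is to honor the suggested backoff time from the error and retry. Below is a minimal sketch, assuming an open Exchange Online session; Get-Mailbox is only a stand-in for whatever cmdlet trips the budget, and the wait time is a placeholder based on the 226271 ms suggestion above.
# Retry sketch: back off and retry when the throttling budget is exceeded.
# Get-Mailbox is a stand-in; the 230-second base wait is an assumed placeholder.
$maxAttempts = 5
for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
    try {
        Get-Mailbox -ResultSize Unlimited
        break  # success, stop retrying
    }
    catch {
        if ($_.Exception.Message -notmatch 'throttling budget') { throw }
        Start-Sleep -Milliseconds (230000 * $attempt)  # scale the backoff per attempt
    }
}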

Related

Random 'Login Timeout Expired' errors from SQL 2000 DTS against SQL 2008 R2 databases

I have some jobs running on SQL Server 2000 which call stored procedures or queries against remote SQL Servers (different editions).
Each job calls a DTS package, and it is the DTS package that makes the remote connection and executes the stored procedure or fetches the query results from the remote server.
This has worked without errors for years. I don't know why, but during the last month I've been getting random errors on these kinds of jobs... I've read some other posts and it seems to be related to a security issue, but I repeat: most of the time the jobs work; only some runs fail with the error.
Executed as user: SERVER\user. DTSRun: Loading... DTSRun: Executing...
DTSRun OnStart: DTSStep_DTSDynamicPropertiesTask_2 DTSRun OnError:
DTSStep_DTSDynamicPropertiesTask_2, Error = -2147467259 (80004005)
Error string: Login timeout expired Error source: Microsoft OLE DB
Provider for SQL Server Help file: Help context: 0
Error Detail Records: Error: -2147467259 (80004005); Provider Error: 0 (0)
Error string: Login timeout expired Error source: Microsoft OLE DB Provider for SQL Server
Help file: Help context: 0 DTSRun OnFinish:
DTSStep_DTSDynamicPropertiesTask_2 DTSRun: Package execution complete.
Process Exit Code 1. The step failed.
I really don't know what to check. After rebooting the server the problems are still there. Any help from you guys would be appreciated.
EDIT 2019-02-14 16:15 -------------------------------------------------------------------------------------------
One of the solutions I found was to change the Remote Login Timeout property from the default 20 seconds to 30 seconds, or to 0 (zero means no timeout), by executing the following code:
sp_configure 'remote login timeout', 30 --Or 0 seconds for infinite
go
reconfigure with override
go
From: https://support.microsoft.com/es-es/help/314530/error-message-when-you-execute-a-linked-server-query-in-sql-server-tim
I've tried this solution, changing the value to 30 seconds, but got the same result. Of course I didn't set it to 0, for obvious reasons; the timeouts are there for a reason. I also tried 300 seconds (5 minutes to make a login!) and still the same.
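To check whether it is really the login phase that stalls, a quick probe from PowerShell can time a bare connection attempt. This is only a sketch, not part of the original troubleshooting; REMOTESRV and the 30-second timeout are placeholders.
# Hypothetical probe: time a bare login to the remote server.
$conn = New-Object System.Data.SqlClient.SqlConnection 'Server=REMOTESRV;Integrated Security=SSPI;Connect Timeout=30'
$sw = [System.Diagnostics.Stopwatch]::StartNew()
try { $conn.Open(); "Login OK after $($sw.ElapsedMilliseconds) ms" }
catch { "Login failed after $($sw.ElapsedMilliseconds) ms: $($_.Exception.Message)" }
finally { $conn.Close() }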
EDIT 2019-02-25 11:25 -------------------------------------------------------------------------------------------
Very similar to my problem, still not solved...
https://www.sqlservercentral.com/Forums/Topic1727739-391-1.aspx
For the moment I have a temporary solution: increase the Connect Timeout on the Connection object.
It was blank (probably using its default value).
Since I changed this property (Connection Object > Advanced... > Connect Timeout) to 300, these DTS packages have stopped failing. I left 2 DTS packages without the change to confirm the problem persisted, and those are the only ones that still fail. The changed ones are working fine now.

How to fill data up to a size on multiple disks?

I am creating 4 mount-point disks on Windows. I need to fill them with files up to a threshold value (say 50 GB).
I tried vdbench. It works fine, but it throws an exception at the end.
compratio=4
dedupratio=1
dedupunit=256k
* Host Definition section
hd=default,user=Administator,shell=vdbench,jvms=1
hd=localhost,system=localhost
********************************************************************************
* Storage Definition section
fsd=fsd1,anchor=C:\UnMapTest-Volume1\disk1\,depth=1,width=1,files=1,size=5g
fsd=fsd2,anchor=C:\UnMapTest-Volume2\disk2\,depth=1,width=1,files=1,size=5g
fwd=fwd1,fsd=fsd*,operation=write,xfersize=1m,fileio=sequential,fileselect=random,threads=10
rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=1h,interval=1
Below is the exception from vdbench. Because of it, my calling script fails.
05:29:14.287 Message from slave localhost-0:
05:29:14.289 file=C:\UnMapTest-Volume1\disk1\\vdb.1_1.dir\vdb_f0001.file,busy=true
05:29:14.290 Thread: FwgThread write C:\UnMapTest-Volume1\disk1\ rd=rd1 For loops: None
05:29:14.291
05:29:14.292 last_ok_request: Thu Dec 28 05:28:57 PST 2017
05:29:14.292 Duration: 16.92 seconds
05:29:14.293 consecutive_blocks: 10001
05:29:14.294 last_block: FILE_BUSY File busy
05:29:14.294 operation: write
05:29:14.295
05:29:14.296 Do you maybe have more threads running than that you have
05:29:14.296 files and therefore some threads ultimately give up after 10000 tries?
05:29:14.300 *
05:29:14.301 ******************************************************
05:29:14.302 * Slave localhost-0 aborting: Too many thread blocks *
05:29:14.302 ******************************************************
05:29:14.303 *
05:29:21.235
05:29:21.235 Slave localhost-0 prematurely terminated.
05:29:21.235
05:29:21.235 Slave aborted. Abort message received:
05:29:21.235 Too many thread blocks
05:29:21.235
05:29:21.235 Look at file localhost-0.stdout.html for more information.
05:29:21.735
05:29:21.735 Slave localhost-0 prematurely terminated.
05:29:21.735
java.lang.RuntimeException: Slave localhost-0 prematurely terminated.
at Vdb.common.failure(common.java:335)
at Vdb.SlaveStarter.startSlave(SlaveStarter.java:198)
at Vdb.SlaveStarter.run(SlaveStarter.java:47)
I am using PowerShell on a Windows machine. If some other tool, such as DiskSpd, has a way to fill data up to a threshold, please point me to it.
I found the answer myself.
I did this using diskspd.exe as below.
The following command fills 50 GB of data in the given disk folder:
.\diskspd.exe -c50G -b4K -t2 C:\UnMapTest-Volume1\disk1\testfile1.dat
It is much simpler than vdbench for my requirement.
Caution: it does not write real data, so the disk usage on the array side does not reflect the file size.
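To cover all 4 mount points from the question, the same DiskSpd command can be looped from PowerShell. This is a sketch; only the first two paths appear in the original vdbench config, and the other two are assumed.
# Sketch: create a 50 GB test file on each mount point (disk3/disk4 paths assumed).
$mountPoints = 'C:\UnMapTest-Volume1\disk1',
               'C:\UnMapTest-Volume2\disk2',
               'C:\UnMapTest-Volume3\disk3',
               'C:\UnMapTest-Volume4\disk4'
foreach ($mp in $mountPoints) {
    .\diskspd.exe -c50G -b4K -t2 (Join-Path $mp 'testfile1.dat')
}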

PowerShell crashing due to memory constraints

I've got a server running around 500 PowerShell processes. Each of these processes is designed to make WMI calls across our environment. I've been careful to verify that I do not use up all of the server's available memory or CPU. With all 500 processes running, I'm at around 70% memory usage.
In case anybody is wondering how the individual processes are managed: they are executed by a Gearman job worker, basically a Python shell script that calls a PowerShell script... times 500.
The issue I'm running into is that some of my PowerShell processes crash after running for a few hours.
Some of the errors that I'm getting are:
A new guard page for the stack cannot be created
When I open Event Viewer, I see these events when processes crash:
Fault bucket , type 0
Event Name: PowerShell
Response: Not available
Cab Id: 0
Problem signature:
P1: powershell.exe
P2: 6.3.9600.16394
P3: System.OutOfMemoryException
P4: System.OutOfMemoryException
P5: oft.PowerShell.ConsoleHost.ReportExceptionFallback
P6: lization.EncodingTable.nativeCreateOpenFileMapping
P7: Consol.. main thread
P8:
P9:
P10:
Attached files:
These files may be available here:
C:\path
Analysis symbol:
Rechecking for solution: 0
Report Id: ID
Report Status: 2048
Hashed bucket:
I'm guessing it has something to do with PowerShell running out of memory, but the server's memory is not maxed out, and not all processes crash; it's sporadic.
Any help would be appreciated.
Here are more crash results; the fault module names differ from crash to crash:
Problem Event Name: APPCRASH
Application Name: powershell.exe
Application Version: 6.3.9600.16384
Application Timestamp: 52158733
Fault Module Name: ntdll.dll
Fault Module Version: 6.3.9600.16408
Fault Module Timestamp: 523d45fa
Exception Code: c00000fd
Exception Offset: 00069abb
OS Version: 6.3.9600.2.0.0.272.7
Locale ID: 1033
Additional Information 1: 624b
Additional Information 2: 624b484d3cf74536f98239c741379147
Additional Information 3: a901
Additional Information 4: a901f876e92d1eb79eb3a513defef0c6
Problem signature:
Problem Event Name: APPCRASH
Application Name: powershell.exe
Application Version: 6.3.9600.16384
Application Timestamp: 52158733
Fault Module Name: combase.dll
Fault Module Version: 6.3.9600.16408
Fault Module Timestamp: 523d3001
Exception Code: c00000fd
Exception Offset: 0001a360
OS Version: 6.3.9600.2.0.0.272.7
Locale ID: 1033
Additional Information 1: 81ca
Additional Information 2: 81cae32566783b059420874b47802c3e
Additional Information 3: b637
Additional Information 4: b6375e6f6a866fc9d00393d4649231b8
Have you looked at your max memory allocation per shell?
Get-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB
And if it's too low, change it:
Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 2048
Doesn't .NET have a memory limit?
If you're using Task Manager to check memory usage, try Process Explorer instead. It sometimes gives very different results.
Thanks everyone for the responses. It turns out I had a memory leak in my PowerShell code that caused memory usage to spike every now and then. Since I was not watching the server at every second, I missed the spikes.
An interesting note: it appears that PowerShell will not use more than 80% of a server's available memory before killing its own processes.
I increased the available memory to 56 GB and now I'm not running into any issues whatsoever. I've been running 600 PowerShell processes for a week now and have not had a single crash.
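For anyone chasing a similar leak: since the spikes were easy to miss by eye, a small polling loop that logs per-process working sets can catch them. A sketch only; the one-minute interval and log path are arbitrary.
# Sketch: log the top PowerShell processes by working set once a minute.
while ($true) {
    Get-Process powershell |
        Sort-Object WorkingSet64 -Descending |
        Select-Object -First 10 @{n='Time'; e={Get-Date -Format s}}, Id, @{n='WS_MB'; e={[int]($_.WorkingSet64 / 1MB)}} |
        Export-Csv C:\temp\ps-memory.csv -Append -NoTypeInformation
    Start-Sleep -Seconds 60
}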

JMeter Performance Plugin report always shows 100% errors on a 200 success response code

After the build completes, the error column in the Performance Trend report displays 100% errors, whereas the HTTP response code is 200 (successful).
Expected result: the error column should show 0% errors.
We have Performance Plugin 1.13 on Jenkins 1.607.
My .jtl file contains:
1434631428652,2082,Deactivate_Enrollee,200,OK,setUp Thread Group 1-1,text,true,536,2073
1434631430748,574,Activate_Enrollee,200,OK,setUp Thread Group 1-1,text,true,536,574
1434631431323,315,User_Status,200,OK,setUp Thread Group 1-1,text,true,1317,315
1434631431711,1,Debug Sampler,200,OK,setUp Thread Group 1-1,text,true,807,0
Console output:
Started by user anonymous
Building in workspace /results/jtls
Performance: Percentage of errors greater or equal than 0% sets the build as unstable
Performance: Percentage of errors greater or equal than 0% sets the build as failure
Performance: Recording JMeter reports '*.jtl'
Performance: Parsing JMeter report file APITest_JMeter.jtl
Performance: File APITest_JMeter.jtl reported 100.0% of errors [FAILURE]. Build status is: FAILURE
Build step 'Publish Performance test result report' changed build result to FAILURE
Finished: FAILURE
Can anyone solve this for Jenkins?
It seems to be due to a defect in Performance Plugin version 1.13.
You could try Performance Plugin version 1.9 or below and see whether that resolves the issue.
Your .jtl file is in the wrong format for the plugin; see:
https://github.com/jenkinsci/performance-plugin/blob/master/src/main/java/hudson/plugins/performance/JMeterCsvParser.java#L157
This causes a failure when the value is parsed to a boolean by this code:
sample.setSuccessful(Boolean.valueOf(values[successIdx]));
I think your saveservice configuration is not well suited to the plugin; you should set:
jmeter.save.saveservice.response_message=false
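For illustration only (this line is not from the original post): with response_message disabled, the first sample above should serialize without the OK field, e.g.:
1434631428652,2082,Deactivate_Enrollee,200,setUp Thread Group 1-1,text,true,536,2073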
I think you may be facing the well-known critical bug 28426 in the Performance Plugin.

Error messages when converting Access 2000 to Access 97

When using Access 2003 to convert an Access 2000 database to Access 97, I get the following two error messages, one right after the other:
'invalid argument'
'Some errors happened during the conversion. No converted database was generated.'
Other than this, no information is provided.
Has anyone come across this before, and do you have any clues about how to track down the error?
I compacted/repaired the database, but this had no effect.
This error occurred because, on conversion, the Access 97 database would have exceeded 1 GB, which is the database size limit for that version of Access.
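A quick way to check whether a database is near that limit before converting is to look at the file size; a one-line PowerShell sketch (the path is a placeholder):
# Hypothetical path: report the .mdb size in GB before attempting conversion.
'{0:N2} GB' -f ((Get-Item 'C:\data\mydb.mdb').Length / 1GB)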