I need to export the group memberships of all users in my domain, but I get the error "Error with Get-ADUser: Invalid enumeration context." In the script I target an OU that has 1300 users, yet it only returns approximately 900. How could I solve this? I attach the script.
You are hitting the two-minute time limit. I do not believe there is a way to adjust the timeout, but you can split the search with a filter:
Get-ADObject -Filter 'CN -like "*a*"'
Repeat that for each letter from a to z; together, the results cover everyone (a sketch follows below). You could probably come up with a more creative filter, but this is the answer.
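A rough sketch of that loop, assuming your users live under one OU (the OU distinguished name and output path below are placeholders):

# Split the query per letter so no single page search hits the 2-minute timeout.
# The "*$letter*" patterns overlap (a CN can contain several letters), so
# duplicates are removed at the end; names containing no a-z letters would be
# missed. 97..122 are the character codes for a through z.
$ou = "OU=MyUsers,DC=example,DC=com"
$results = foreach ($letter in [char[]](97..122)) {
    Get-ADUser -SearchBase $ou -Filter "CN -like '*$letter*'" -Properties MemberOf
}
$results | Sort-Object DistinguishedName -Unique |
    Select-Object SamAccountName, @{n='Groups';e={$_.MemberOf -join ';'}} |
    Export-Csv C:\Temp\UserGroups.csv -NoTypeInformation   # placeholder path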
https://learn.microsoft.com/en-us/previous-versions/windows/server/hh531527(v=ws.10)?redirectedfrom=MSDN
From Microsoft:
Timeout Behavior
The following statements specify timeout conditions within the Active Directory module and describe what can be done about a timeout.
The default Active Directory module timeout for all operations is 2 minutes.
For a search operation, the Active Directory module uses paging control with a 2-minute timeout for each page search. Note: Because a search may involve multiple server page requests, the overall search time may exceed 2 minutes.
A TimeoutException error indicates that a timeout has occurred.
For a search operation, you can choose to use a smaller page size, set with the ResultPageSize parameter, if you are getting a TimeoutException error.
If after trying these changes you are still getting a TimeoutException error, consider optimizing your filter using the guidance in the Optimizing Filters section of this topic.
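Following that guidance, a hedged example of lowering the page size (the OU below is a placeholder; 256 is the documented default):

# A smaller page gives each page search less work to finish inside its own
# 2-minute window. Tune the value down until the TimeoutException stops.
Get-ADUser -SearchBase "OU=MyUsers,DC=example,DC=com" -Filter * -Properties MemberOf -ResultPageSize 100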
My Atomist client exposes metrics on commands that are run. Each command is a metric with a username element as well as a status element.
I've been scraping this data for months without resetting the counts.
My requirement is to show the number of active users over a time period, i.e. 1h, 1d, 7d and 30d, in Grafana.
The original query was:
count(count({Username=~".+"}) by (Username))
This is an issue because I don't clear the metrics, so it's always a count since inception.
I then tried this:
count(
  max_over_time(help_command{job="Application Name",Username=~".+"}[1w])
  - max_over_time(help_command{job="Application Name",Username=~".+"}[1w] offset 1w)
  > 0
)
which works, but only for one command; I have about 50 other commands that need to be added to that count.
I then tried:
{__name__=~".+_command",job="app name"}[1w] offset 1w
but this is obviously very expensive (it times out in the browser), and it has issues integrating with max_over_time, which doesn't support it.
Any help would be appreciated. Am I using the metric in the wrong way? Is there a better way to query? My only option at the moment is repeating the working count format above for each command.
Thanks in advance.
To start, I will point out a number of issues with your approach.
First, the Prometheus documentation recommends against using labels with arbitrarily large sets of values (as your usernames are). As you can see (based on your experience with the query timing out), they're not entirely wrong to advise against it.
Second, Prometheus may not be the right tool for analytics (such as active users). Partly due to the above, partly because it is inherently limited by the fact that it samples the metrics (which does not appear to be an issue in your case, but may turn out to be).
Third, you collect separate metrics per command (i.e. help_command, foo_command) instead of a single metric with the command name as a label (i.e. command_usage{command="help"}, command_usage{command="foo"}).
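With that layout, the weekly active-user count becomes a single query over one metric; a sketch, using the hypothetical command_usage and Username names from this discussion:

count(
  sum by(Username) (increase(command_usage[1w])) > 0
)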
To get back to your question though: you don't need max_over_time; you can simply write your query as:
count by(__name__) (
  (
    {__name__=~".+_command",job="Application Name"}
    -
    {__name__=~".+_command",job="Application Name"} offset 1w
  ) > 0
)
This only works, though, because you say that whatever exports the counts never resets them. If that is simply because the exporter has never restarted, and the counts will drop to zero when it does, then you'd need to use increase instead of subtraction, and you'd run into the exact same performance issues as with max_over_time.
count by(__name__)(
increase({__name__=~".+_command",job="Application Name"}[1w]) > 0
)
I have a system which synchronizes users from a data source. The data source consists of user information. When a new user is synchronized, a PowerShell task is triggered which creates or updates the user. This is all fine, but when the number of new/updated users becomes too large, some of the tasks fail with interesting errors such as:
"The server has returned the following error: invalid enumeration
context."
or
"A connection to the directory on which to process the request was
unavailable. This is likely a transient condition."
When troubleshooting, it seems obvious that these errors occur due to a lack of resources: all the simultaneously triggered tasks are importing the module in their own PS sessions.
So I tried some different things, measuring Import-Module speed and so on. I've concluded that it is faster to run Import-Module and then Get-ADUser, for instance, than just Get-ADUser on its own (which would also import the module implicitly).
Measure-Command {Import-Module ActiveDirectory}
Average time 340 ms
Measure-Command {Get-ADUser -Filter *}
Average time 420 ms
Get-ADUser after the module is imported
Average time 10 ms
But these marginal differences are not going to fix the issue, so I had to look further. I found that disabling the default drive might help speed up the process, so I added the following before importing the module:
$Env:ADPS_LoadDefaultDrive = 0
Measure-Command {Import-Module ActiveDirectory}
Average time 85 ms
Four times faster! But the error still persists with a high number of simultaneous tasks (e.g. 50). So I thought about polling availability in the script, or making a do..while loop. Or maybe the system which fires the separate tasks needs to be redesigned to use some sort of queue.
Does anyone recognize this situation? Or have some thoughts they'd like to share on this subject?
All the simultaneously triggered tasks are importing the module in their own PS sessions.
Then you need to make sure this doesn't happen, or at least not so much that you run out of resources. So you have two options:
Limit the number of tasks that get run at any one time (maybe 5 at a time).
Make it one task that can work on several accounts. This way the module only gets loaded once.
I think option 2 is the better solution. For example, rather than triggering the script right away, your synchronization job could just write the username to a file (or in memory even) and once it's done finding all the users, it triggers the PowerShell script and passes the list (or the script could read the file that was written to). You have options there - whatever works best.
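A minimal sketch of option 2, assuming the sync job writes one username per line to a file (the path, filter, and create/update details below are placeholders):

# Import the module once, then handle every queued user in this single session.
$Env:ADPS_LoadDefaultDrive = 0
Import-Module ActiveDirectory

$queueFile = 'C:\Sync\pending-users.txt'   # placeholder: written by the sync job
foreach ($sam in Get-Content $queueFile) {
    $existing = Get-ADUser -Filter "SamAccountName -eq '$sam'"
    if ($existing) {
        Set-ADUser -Identity $sam -Description 'Updated by sync'   # placeholder update
    } else {
        New-ADUser -Name $sam -SamAccountName $sam                 # placeholder create
    }
}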
Update: All of .NET is available in PowerShell, so another option is to change the whole script to use .NET's DirectoryEntry instead of the ActiveDirectory module, which would use much less memory. There are even shortcuts for it in PowerShell. For example, [ADSI]"LDAP://$distinguishedName" would create a DirectoryEntry object for a user. It is a substantial rewrite, but the performance is much better (speed and memory consumption).
Here are some examples of searching with DirectorySearcher (using the [ADSISearcher] shortcut): https://blogs.technet.microsoft.com/heyscriptingguy/2010/08/23/use-the-directorysearcher-net-class-and-powershell-to-search-active-directory/
And here's an example of creating accounts with DirectoryEntry: https://www.petri.com/creating-active-directory-user-accounts-adsi-powershell
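To give a flavor of the rewrite, here is a hedged sketch of the same create-or-update flow using those shortcuts (the filter, OU, and attribute values are placeholders):

# Search with DirectorySearcher via the [ADSISearcher] shortcut; no module import needed.
$searcher = [ADSISearcher]'(&(objectCategory=person)(objectClass=user)(sAMAccountName=jdoe))'
$result = $searcher.FindOne()
if ($result) {
    # Bind to the existing user as a DirectoryEntry and update an attribute.
    $user = $result.GetDirectoryEntry()
    $user.Put('description', 'Updated by sync')   # placeholder attribute/value
    $user.SetInfo()                               # commit the change
} else {
    # Create a new user under a placeholder OU.
    $ou = [ADSI]'LDAP://OU=MyUsers,DC=example,DC=com'
    $new = $ou.Create('user', 'CN=jdoe')
    $new.Put('sAMAccountName', 'jdoe')
    $new.SetInfo()
}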
I've run into a mystifying XMLA timeout error when running an ADOMD.Net command from a .Net application. The Visual Basic routine iterates over a list of mining models residing on a SQL Server Analysis Services 2014 instance and performs a cross-validation test on each one. Whenever the time elapsed on the cross-validation test reaches the 60-minute mark, the XML for Analysis parser throws an error, saying that the request timed out. For any routine operations taking less than one hour, I can use the same ADOMD.Net connections with the same server and application without any hitches. The culprit in such cases is often the ExternalCommandTimeout setting on the server, which defaults to 3600 seconds, i.e. one hour. In this case, however, all of the following timeout properties on the server are set to zero: CommitTimeout, ExternalCommandTimeout, ExternalConnectionTimeout, ForceCommitTimeout, IdleConnectionTimeout, IdleOrphanSessionTimeout, MaxIdleSessionTimeout and ServerTimeout.
There are only three other timeout properties available, none of which is set to one hour: MinIdleSessionTimeout (currently at 2700), DatabaseConnectionPoolConnectTimeout (now at 60 seconds) and DatabaseConnectionPoolTimeout (at 120000). The MSDN documentation lists another three timeout properties that aren't visible with the Advanced Properties checked in SQL Server Management Studio 2017: AdminTimeout, DefaultLockTimeoutMS and DatabaseConnectionPoolGeneralTimeout. The first two default to no timeout and the third defaults to one minute. MSDN also mentions a few "forbidden" timeout properties, like SocketOptions\LingerTimeout, InitialConnectTimeout, ServerReceiveTimeout and ServerSendTimeout, which all carry the warning, "An advanced property that you should not change, except under the guidance of Microsoft support." I do not see any means of setting these through the SSMS 2017 GUI, though.
Since I've literally run out of timeout settings to try, I'm stumped as to how to correct this behavior and allow my .Net app to wait on those cross-validations through ADOMD. Long ago I was able to solve a few arcane SSAS timeout issues by appending certain property settings to the connection strings, such as "Connect Timeout=0;CommitTimeout=0;Timeout=0" and so on. Nevertheless, attempting to assign an ExternalCommandTimeout value through the connection string in this manner results in the XMLA error
"The ExternalCommandTimeout property was not recognized." I have not tested each and every one of the SSAS server timeouts in this manner, but this exception signifies that ADOMD.Net connection strings can only accept a subset of the timeout properties.
Am I missing a timeout setting somewhere? Does anyone have any ideas on what else could cause this kind of esoteric error? Thanks in advance. I've put this issue on the back burner about as long as I can and really need to get it fixed now. I wonder whether ADOMD.Net has its own separate timeout settings, perhaps going by different names, but I can't find any documentation to that effect...
I tracked down the cause of this error: buried deep in the VB.Net code on the front end was a line that set the CommandTimeout property of the ADOMD.Net Command object to 3600 seconds. This overrode the connection string settings mentioned above, as well as all of the server-level settings. The problem was masked by the fact that cross-validation retrieval operations were also timing out in the Visual Studio 2017 GUI. That occurred because the VS instance was only recently installed and the Connection and Query Timeouts hadn't yet been set to 0 under Options menu > Business Intelligence Designers > Analysis Services Designers > General.
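For anyone hitting the same wall, the fix amounts to one line on the command object. A minimal VB.Net sketch (the connection string and query are placeholders; verify that 0 means "no timeout" for your client version):

' The buried CommandTimeout of 3600 overrode the connection string and every
' server-level setting. Setting it to 0 removes the client-side limit.
Dim conn As New Microsoft.AnalysisServices.AdomdClient.AdomdConnection(connectionString)
conn.Open()
Dim cmd As New Microsoft.AnalysisServices.AdomdClient.AdomdCommand(crossValidationQuery, conn)
cmd.CommandTimeout = 0   ' was 3600, i.e. the one-hour wall observed above
cmd.Execute()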
I think the title pretty much explains it all: I want to know what happens when a dynamic value exceeds its field's limits in MS CRM Workflows.
For example:
When using the Create step to create a Task, the Task description field has a limit of 2000 characters. If I am using this task to show an error, maybe a stack overflow, it may sometimes be more than 2000 characters.
What happens in this situation? Is an error thrown or is the dynamic value limited to 2000 characters?
Well, after a few tests I have found my answer. It appears the workflow throws an error and the system job sits in a waiting state.
We are trying to implement a suite of spreadsheets that will handle budget figures for a set of stores. Everything works fine until we try to implement a spreadsheet that collects data from all store spreadsheets and presents statistics. Due to ImportRange's limit of a maximum of 50 uses per spreadsheet doc, we have implemented a Google Apps Script instead to handle the importing of data. But now that we have made a copy of the document to have one for each month, we are getting problems with our time triggers. We have set up a trigger to run the script once every minute, and that results in an error message stating: Service invoked too many times: trigger.
What are the limitations here? And how do we best solve this?
We are also getting some other error messages and would like to know how to solve these:
Document tEHGO48zIBIFYRpb7Xhjwqg is missing (perhaps it was deleted?) (line 191)
Exceeded maximum execution time
Service error: Spreadsheets (line 290)
Where can we find documentation describing the different limitations and error messages?
Quota Limits for many services used with Google Apps Scripts have now been published on the Dashboard at:
https://docs.google.com/macros/dashboard
The same just happened to me. It seems there is a non-published limit.
Premier accounts usually have larger quotas for every limitation.
The reasoning is that such accounts are better verified and less likely to abuse resources.
But neither the regular limitations nor Premier's larger quotas are published by Google, and it seems that Googlers can't state them here in the forums either. The only well-defined GAS limitation is the email quota, accessible through:
MailApp.getRemainingDailyQuota()
Which is 500 for regular accounts and 1500 for Premier.
Source: Google Support forums
Solutions are:
Join several scripts into one big trigger, in case there is a limit on the number of triggers
Optimize code (join loops, refresh only the necessary fields, etc.), in case the limit is based on CPU usage
Move minute-timer triggers to OnEdit or OnOpen triggers whenever possible (see the sketch after this list)
Get a Premier account
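A minimal sketch of the third point, assuming the import logic lives in a function named importStoreData (a hypothetical name):

// Instead of a once-per-minute time-driven trigger, run the import only when
// a store sheet is actually edited. onEdit is a simple trigger that Apps
// Script calls automatically on each user edit.
function onEdit(e) {
  importStoreData(); // hypothetical name for the existing import routine
}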
As for your other errors, I haven't encountered anything similar. You should post some details on the script or publish some code so we can debug it.