Azure AD Graph API: Bulk update of users

Does Azure AD Graph API support batch processing on users? As an example, if I want to update the location for several hundred users in my organization, is there any way I can do that? The only information I could find was what is described here: https://msdn.microsoft.com/en-us/library/azure/ad/graph/howto/azure-ad-graph-api-batch-processing
But as I understand, you can only batch operations on a single user entity in a given batch operation, and even that is limited to 5 operations per changeset. So my only option seems to be to sequentially invoke the API to update every single user in my list. I couldn't find any officially documented rate limiting that may be enforced by Microsoft. So I'm not sure if that approach would even work. Is there a better way to do this?

Yes, the Azure AD Graph API supports batch processing on users. Please refer to this code sample and check the CreateUsersTest function in it. To make that sample work, you need to grant your client app the "Read and write directory data" application permission.
Another way is to use PowerShell to add multiple users with a bulk import process:
First, create a CSV file with the appropriate attributes, for example:
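A sample layout (the column names are assumptions inferred from the New-MsolUser command further below):
UserName,FirstName,LastName,DisplayName,JobTitle,Department,Country
john.doe@contoso.com,John,Doe,John Doe,Engineer,IT,United States
jane.roe@contoso.com,Jane,Roe,Jane Roe,Manager,HR,United States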
Then install the Azure Active Directory PowerShell (MSOnline) module.
Connect to the service:
PS C:\WINDOWS\system32> connect-msolservice
Import the users from the CSV file:
$users = Import-Csv E:\a.csv
Create the users with the New-MsolUser command:
$users | ForEach-Object {New-MsolUser -UserPrincipalName $_.UserName -FirstName $_.FirstName -LastName $_.LastName -DisplayName $_.DisplayName -Title $_.JobTitle -Department $_.Department -Country $_.Country}
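Since the original question is about updating an attribute (such as location) for several hundred existing users, the same CSV-driven pattern should also work with Set-MsolUser; a minimal sketch, assuming the CSV has UserName and City columns (both placeholders):
$updates = Import-Csv E:\updates.csv
$updates | ForEach-Object {Set-MsolUser -UserPrincipalName $_.UserName -City $_.City}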
Update:
Please refer to document : https://msdn.microsoft.com/en-us/library/azure/ad/graph/howto/azure-ad-graph-api-batch-processing
The Graph API supports a subset of the functionality defined by the OData specification:
A single batch can contain a maximum of five queries and/or change sets combined.
A change set can contain a maximum of one source object modification and up to 20 add-link and delete-link operations combined. All operations in the change set must be on a single source entity.
In your scenario, a single source entity means one user entity: you could create a user and then modify that same user within one change set, but you can't create two users in one change set, since they are two different entities.
There doesn't seem to be any document that lists rate limits for batch processing, but I have tested creating 2000+ users with the above code and it works fine.

Related

InputObject Properties Creation

I need some assistance creating a hashtable of users to use with Get-MGBetaUser
On the Microsoft website (https://learn.microsoft.com/en-us/powershell/module/microsoft.graph.users/get-mguser?view=graph-powershell-1.0) they give you the parameter it's looking for (UserID), but I can't find any other articles online with an exact use case such as this.
Currently I can get one object in the hash and have to access it directly by asking for its key:
$Users['UserID'] = @{
UserID = "<IDOfUser>"
}
Get-MGBetaUser -InputObject $Users.UserID
If I pipe this same hash into Get-MGBetaUser, I'll get the error:
Line |
6 | $Users | Get-MGBetaUser
| ~~~~~~~~~~~~~~~~~~~~~~~
| Resource 'System.Collections.Hashtable' does not exist or one of its queried reference-property objects
| are not present.
The hash will have approx. 15-20k userids which will need to be added, and they'll be coming from a CSV
It looks like Microsoft will only accept the Pipeline input through this method with the hash. Everything else I've always done will allow piping an array of IDs into it.
Thank you in advance for any assistance
Thanks all for your responses. Since it seems the answer is that you can't supply the Graph SDK with an array or hash of users as originally intended, I opted to go a different route. The example I gave was to have x number of jobs spin off based on the number of hashes/files I had, so that I could limit each job's scope and create realistic timeframes for data pulls.
I decided to create a hash table of filters to limit each job's exposure. In my case, we have 140k+ students, so I created a filter for surnames ending in A, B, C, and so on. Spinning these into 27 jobs (A-Z), I can get complete results back within 30 minutes. In my testing, trying to do one big pull of students took hours. The reason for doing this is that I am also getting licensing information along with sign-in activity (which requires you to supply a GUID if you use a ForEach).
Allowing Graph to use the filter enables the SDK to get the data in batches, and not the other way around.
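For reference, a minimal sketch of the per-letter job approach described above; the startsWith filter, module name, and app-registration values are assumptions you would adapt to your own surname partitioning and authentication:
$jobs = foreach ($letter in [char[]](65..90)) {    # A..Z
    Start-Job -ArgumentList $letter -ScriptBlock {
        param($letter)
        Import-Module Microsoft.Graph.Beta.Users
        # Each job is a separate process, so it needs its own unattended Graph connection
        Connect-MgGraph -ClientId '<appId>' -TenantId '<tenantId>' -CertificateThumbprint '<thumbprint>'
        # Add -Property for the licensing / sign-in activity fields you need
        Get-MgBetaUser -Filter "startsWith(surname,'$letter')" -All
    }
}
$results = $jobs | Receive-Job -Wait -AutoRemoveJob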
If anyone has further insight into this, or sees a better way (Other than For-Each) feel free to let me know, otherwise I'm marking this closed!

Split Sharepoint Content Database

I have a single SharePoint content database (Sharepoint 2019 - On Premise) that is over 100 GB and I would like to split the SP sites between some new content databases that I will make.
I have created the new content databases but I have no idea on how to move the subsites to them.
Based on research that I have done, it seems I need to:
Create Content Databases
Create site collections in those databases
Move sub collections into new site collections in the new databases.
Question 1 - are the above steps correct or do I have this wrong?
Question 2 - How in the heck do I move subsites out of the almost-full content database into a new content database? Do I move them to the site collection in the new database? If so, how?!?
Thank you for your brainpower and help
Tried moving subsites and failed
Unfortunately, I could not tell whether you wish to transfer just some subsites or a complete site collection, so I will cover both approaches below.
I would strongly suggest that you create a sandbox environment before proceeding with any of the below scripts, just in case you have misunderstood anything.
Before any transfers are performed, you should create the content databases that you will be targeting. You can perform this task either via the Central Administration panel (GUI) or via a PowerShell script; the command would be the following:
#get the web app under which you will create the content db
$WebApp = Get-SPWebApplication http://web-app/
#create the new content database
New-SPContentDatabase "<Name_of_new_Content_db>" -DatabaseServer "<db_server>" -WebApplication $WebApp
#you can also use the variant below, which points directly to the web app by URL
#New-SPContentDatabase "<Name_of_new_Content_db>" -DatabaseServer "<db_server>" -WebApplication http://web-app/
In case you wish to transfer whole site collections, or clone them onto different content databases, there are three ways to achieve this.
Copy Site Collection, use the Copy-SPSite cmdlet to make a copy of a site collection from an implied source content database to a specified destination content database.
The copy of the site collection has a new URL and a new SiteID.
Copy-SPSite http://web-app/sites/original -DestinationDatabase <Name_of_new_Content_db> -TargetUrl http://web-app/sites/copyfromoriginal
Move Site Collection, the Move-SPSite cmdlet moves the data in the specified site collection from its current content database to the content database specified by the DestinationDatabase parameter.
A no-access lock is applied to the site collection to prevent users from altering data within the site collection while the move is taking place.
Once the move is complete, the site collection is returned to its original lock state. The original URL is preserved, in contrast with Copy-SPSite where you generate a new one.
Before executing the command below, each content database was hosting at least one site collection.
Move-SPSite http://web-app/sites/originalbeforemove -DestinationDatabase <Name_of_new_Content_db>
After the execution, the site has been transferred from the last content database to the second one, preserving its original URL.
Backup and Restore Site Collection, this combination will save the site collection on the disk and afterwards restore it onto a new Content Database. The Restore-SPSite cmdlet performs a restoration of the site collection to a location specified by the Identity parameter. A content database may only contain one copy of a site collection. If a site collection is backed up and restored to a different URL location within the same Web application, an additional content database must be available to hold the restored copy of the site collection.
Backup-SPSite http://web-app/sites/original -Path C:\Backup\original.bak
Restore-SPSite http://web-app/sites/originalrestored -Path C:\Backup\original.bak -ContentDatabase <Name_of_new_Content_db>
Once I executed the above commands, a new site was restored in the third content database, which was basically a clone of the original site. Keep in mind that with this approach you preserve the original site and can work on the newly restored copy.
In case you wish to transfer just one subsite onto a different content database, you can follow the strategy below.
Use the -Force flag if you hit the following error:
File C:\Backup\export.cmp already exists. To overwrite the existing file use the -Force parameter.
You can import sites only into sites that are based on the same template as the exported site. This refers to the site collection's template, not the subsite's:
Import-SPWeb : Cannot import site. The exported site is based on the template STS#3 but the destination site is based on the template STS#0. You can import sites only
into sites that are based on same template as the exported site.
#Create Site Collection in targeted Content Database first
New-SPSite http://web-app/sites/subsiterestoration2 -OwnerAlias "DOMAIN\user" -Language 1033 -Template STS#3 -ContentDatabase <Name_of_new_Content_db>
#export Web object, use force to overwrite the .cmp file
Export-SPWeb http://web-app/sites/original/subsitetomove -Path "C:\Backup\export.cmp" -Force
#Create a new Web under the new Site Collection; this is not strictly necessary and you can always restore onto the RootWeb. The new Web object is created here just to preserve the previous architecture.
New-SPWeb http://web-app/sites/subsiterestoration2/subsitemoved -Template "STS#3"
#Finally, import the exported Web Object on to the Targeted Web
Import-SPWeb http://web-app/sites/subsiterestoration2/subsitemoved -Path "C:\Backup\export.cmp" -UpdateVersions Overwrite
Final Notes
Keep in mind that all of these transfers were performed on sites that did not have any kind of customizations on them, like Nintex workflows or custom event receivers. These were just plain sites with several lists and document libraries.
Always make sure that while you are performing the above tasks, users are not altering data within the site collections in question.
To briefly answer your question: yes, you have the correct idea of what needs to be done if you wish to transfer just a subsite, but you must pick whichever of the above methods suits you best.
Also pay attention to the fact that most of these methods alter the URL that points to a subsite; be cautious about this if any third-party automations read or update SharePoint data through those URLs.
I will try to keep this answer updated with ways of transferring a subsite, in case anything else comes up.

Microsoft Graph API query parameters via powershell not recognized

I've registered an app using the Azure AD portal which I am successfully accessing using Invoke-WebRequest from a PowerShell script. The app tries to list sign-ins but returns up to the limit of 1000 sign-in objects. When I try to use query parameters to restrict the fields and amount of data returned, the web request returns an error saying that the AllowedQueryOptions and EnableQueryAttributes need to be updated for the parameters used, e.g. select, top, skip. I've tried the v1.0 and beta APIs without success.
Is there a way to manipulate these options via PowerShell so that the query parameters are recognized? I've seen some references to OAuth and ASP.NET Core for doing this. Is PowerShell the "right" way to automate sign-in retrieval? Will this functionality to manipulate query options ever come to PowerShell?
Cheers,
-Emanuel
As you mentioned in the comments, if you don't strictly need to call the Graph API directly, a PowerShell cmdlet is an option for your reference.
You can use this command in PowerShell:
Get-AzureADAuditSignInLogs
If you want to use query parameters to restrict the fields and amount of data returned, you can refer to the sample as below:
Get-AzureADAuditSignInLogs -Top 5 | Select-Object -Property Id, UserDisplayName
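If you need server-side filtering rather than client-side Select-Object, the cmdlet also exposes a -Filter parameter that takes an OData expression; a hedged example (the property name follows the Graph signIn resource, adjust the date to your needs):
Get-AzureADAuditSignInLogs -Filter "createdDateTime ge 2019-11-01T00:00:00Z" -Top 100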
Hope it helps~

faster way to get AD memberships for millions of AD groups in a network with multiple trusted forests and domains

First, I cannot get into why I need this data and I cannot get into specifics about the network. You'll have to trust me there is no other way to get this data other than a PowerShell script to run LDAP queries.
I am working with a network that has multiple forests and multiple domains. There is a trust between all the forests. I am logged into one domain on one of the forests but because of the trust I can query all of them.
I have a CSV file with millions of AD groups. I need to find all the direct members of every one of the millions of AD groups. A lot of memberships are cross-domain, which means I cannot just use the member property of the AD group and have to, instead, query every domain and check for memberOf.
I have a PowerShell script that gets this data. For various reasons I cannot share my code but here is what it does:
creates an array of System.DirectoryServices.DirectorySearcher objects for all of my domains
iterate through the CSV file that has a list of every AD group and its DN
for each DN, loop over the DirectorySearcher array and find all objects that are a memberOf the AD group in that DirectorySearcher ((memberOf=$adGroupDN))
The code works. But since I'm dealing with an input list with millions of AD groups the script is awfully slow. Based on my test run calculations it will take more than 2 weeks to get all of the data I need.
I'm wondering if there is a better/faster way to do this?
I thought maybe I could use threading or something but I am not sure if that will help nor am I sure where to start.
Any advice is greatly appreciated.
Adding some additional details...
My input list is millions of unique group DNs
I have multiple different forests/domains
My input group DNs are spread across all the forests/domains
The groups that are in my input list of group DNs span different forests/domains (domain1\group1 from my input list has domain2\group2 as a member)
I need to get a complete list of every group that is in the groups from my input list
Because of cross-domain memberships I cannot rely on the member attribute of my input groups. The only way I know to get it is to query every DC/domain for all groups that are memberOf the groups from my input list.
I can only use PowerShell
I do not have the ActiveDirectory module and can only use the .NET DirectorySearcher
At a high level my code looks like this:
$arrayOfDirectorySearcherObjectsForEachDCInMyNetwork = ... code to create an array of System.DirectoryServices.DirectorySearcher objects, one for each DC/domain in my network
Foreach ($groupDN in $inputListOfUniqueGroupDNs)
{
Foreach ($domain in $arrayOfDirectorySearcherObjectsForEachDCInMyNetwork)
{
...
The only way I can think of making it faster is to multi-thread the second for loop where it queries multiple DCs/domains at the same time using runspaces but I cannot figure out how to do this...
Running the script on a domain controller would give you a slight advantage, if it's an option. But otherwise multi-threading is likely your best bet.
Look into using Start-Job. There's an example here.
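A rough sketch of what that could look like for this case, with one background job per domain searcher (the LDAP paths and variable names are placeholders taken from the pseudo-code in the question, and group DNs containing LDAP-special characters would also need escaping in the filter):
$domainLdapPaths = 'LDAP://DC=domain1,DC=com', 'LDAP://DC=domain2,DC=com'   # one entry per domain
$jobs = foreach ($ldapPath in $domainLdapPaths) {
    Start-Job -ArgumentList $ldapPath, $inputListOfUniqueGroupDNs -ScriptBlock {
        param($ldapPath, $groupDNs)
        $root = New-Object System.DirectoryServices.DirectoryEntry($ldapPath)
        $searcher = New-Object System.DirectoryServices.DirectorySearcher($root)
        $searcher.PageSize = 1000   # enable paging so results are not capped at the server limit
        foreach ($dn in $groupDNs) {
            $searcher.Filter = "(memberOf=$dn)"
            foreach ($result in $searcher.FindAll()) {
                [pscustomobject]@{ Group = $dn; Member = $result.Properties['distinguishedname'][0] }
            }
        }
    }
}
$jobs | Receive-Job -Wait -AutoRemoveJob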
That said, I question this:
A lot of memberships are cross-domain which means I cannot just use
the member property of the AD group and have to, instead, query every
domain and check for memberOf.
Group scope is important here. If all of your groups are Universal, then either way shouldn't make a difference (whether you look at member on the group or memberOf on the users).
But it's important to note that memberOf will not show Domain Local groups from a different domain (even in the same forest).
The member attribute on a group is always the authoritative source for the members. Yes, getting the details of an account on a trusted domain is a little tougher, but it can be done.
Here is a PowerShell function that will pull the "domain\username" of every member of a group, including those in nested groups.
function OutputMembers {
    param([string] $groupDn)
    foreach ($m in ([ADSI]("LDAP://" + $groupDn)).member) {
        $member = [ADSI]("LDAP://" + $m)
        if ($member.objectClass -eq "group") {
            #this member is a group so pull the members of that group
            OutputMembers $member.distinguishedName
        } else {
            #"msDS-PrincipalName" is not loaded by default, so we have to tell it to get it
            $member.Invoke("GetInfoEx", @("msDS-PrincipalName"), 0)
            if ([string]::IsNullOrEmpty($member."msDS-PrincipalName")) {
                #member is on a trusted domain, so we have to go look it up
                $sid = New-Object System.Security.Principal.SecurityIdentifier ($member.objectSid[0], 0)
                $sid.Translate([System.Security.Principal.NTAccount]).value
            } else {
                $member."msDS-PrincipalName"
            }
        }
    }
}
With that, you call that function with the distinguishedName of each group, like:
OutputMembers "CN=MyGroup,OU=Groups,DC=domain,DC=com"
I presume that the performance issue is in step 3, as that step presumably has an embedded loop (which might even be recursive if you look for indirect group memberships as well):
Foreach ($UserDN in $UserDNs) {
...
Foreach ($GroupDN in $GroupDNs) {
...
Everything in the inner loop is very important for the performance of your script as that will be invoked $UserDNs.Count * $GroupDNs.Count times!
I suspect that there are a lot of redundant LDAP queries in the inner loop (for users that are in the same group), so I would focus on that and build a kind of custom cache so that every otherwise-redundant query to the server is avoided. Something like:
$MemberCache = @{}
Function GetMembers([String]$GroupDN) {
    If (!$MemberCache.ContainsKey($GroupDN)) {
        $MemberCache[$GroupDN] = @{} # hash tables are much faster than using the Contains method on an array
        # retrieve all members of the AD group in that DirectorySearcher
        ForEach ($Member in $Members) {$MemberCache[$GroupDN].$Member = $True}
    }
    $MemberCache[$GroupDN]
}
Function IsMember([String]$DN, [String]$GroupDN) {
    (GetMembers($GroupDN)).ContainsKey($DN)
}
The general idea is that you should not remotely redo the "find all objects that are a memberOf the AD group in that DirectorySearcher ((memberOf=$adGroupDN))" for the same $adGroupDN (any group you already queried before) but retrieve the required information from a local hash table (cache).
There are several optimizations that can be done here. These optimizations are not related to PowerShell, but to the algorithm itself.
PowerShell is not designed for this kind of task; C/C++ or at least C# should be used instead.
Create and maintain a connection to the RootDSE object of each global catalog you're querying until the whole job is done. That way all AD queries reuse a single cached connection to AD, which significantly improves performance.
Thumbs up to iRon. Create a cache for all queried groups. For example, if you have queried the membership of group A and group A is a member of group B, there is no need to query the membership of group A again. Of course, in your case you cannot simply keep all memberships in memory, so you need some kind of storage where the membership data is saved.
Read groups from the CSV in parallel, using several threads. To combine this with the group cache you need to operate two caches: one for already-queried groups and another for pending groups that threads are querying at the moment (to avoid querying the same group twice in different threads). These caches must be thread safe, of course. If one thread sees that another is querying a group, it can skip that group, query a different one, and come back later.
For foreign security principals (users and groups from trusted domains), use SID binding; you can extract the SID of an FSP from the member DN. Direct binding works much faster than DirectorySearcher (see the sketch after this list).
Be aware of:
Thumbs up to Gabriel Luci. You need to query the member attribute, not memberOf.
Don't forget about nested groups (group A -> group B -> user U). The user is a member of group B as well as of group A. This rule applies to trusted-domain groups too.
Groups may be members of each other, e.g. A -> B -> C -> A or even A -> B -> A. You have to handle this in your script.
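A hedged sketch of the SID-binding idea from the optimization list above; the FSP distinguished name and the trusted domain name are placeholders, and the CN of a foreignSecurityPrincipal object is simply its SID string:
$fspDn = 'CN=S-1-5-21-1111111111-2222222222-3333333333-1234,CN=ForeignSecurityPrincipals,DC=domain1,DC=com'
# The CN component of the FSP DN is the SID, so strip the prefix and bind directly
$sid = ($fspDn -split ',')[0] -replace '^CN='
$foreignObject = [ADSI]"LDAP://domain2.com/<SID=$sid>"   # point the path at the trusted domain that owns the SID
$foreignObject.distinguishedName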

Active Directory / Powershell - How to identify if a server, in a federated cluster, is down

This question is in the context of a service that is running on a DC server (or is accessing the DC remotely) such that the service can access Active Directory, but the service has no awareness of the Active Directory servers, how many servers there should be, what the server addresses are, etc...
Furthermore, the service must be written under the assumption that the Active Directory setup could involve a group of Federated servers.
So to illustrate the problem by way of an example -
Say I'm trying to run a very simple AD query, via PowerShell v2 (or you could use Directory Services), to get all of the AD users:
$users = Get-ADUser -Filter *
Now let's assume that the example company, Contoso, has an AD server in New York (for their NY office), and one in Seattle (for their Seattle office). Also, the service will be pointing to the DC which will be the server in the NY data center.
So for the purposes of simplicity, let's just say that $users returns two user objects with display-name attributes of:
Dan Jump
Jim Wilson
Now let's assume that the Seattle server is down so I run the query again and just get:
Dan Jump
From what I understand, AD will not return an error indicating that the Seattle server is down... it will just return the users it can find.
I know it's possible to detect deleted objects so, if I saved a list of all the users, I could potentially verify that the user was deleted...but that's a bit of overhead especially if I'm interested in more than just a list of users
So is there a way to detect one or more AD servers, in a Federated cluster, are down before I even run my query?
You might like to read this, before you make use of any of the following. S.DS and S.DS.AD abstract a lot of what happens but there's a lot of useful information in there and it might help you to clarify your requirements.
I'm not aware that there's a function to return DCs that are down, but the System.DirectoryServices.ActiveDirectory namespace contains the classes you need to determine domain topology. For example, the Forest class will return a collection of Domain objects (and Sites and many other useful properties). Domain will give you access to a collection of DomainController objects (as well as the Children and Parent domains and many other props and methods).
You could iterate over the domains to get all DCs and then iterate over the DCs and try a ping although this may not work in a well-secured and segmented network. You might consider trying to connect to each DC using S.DS.DirectoryEntry as that should work, from a DC, in any scenario. Of course, if your network guys have been overzealous with their locking-down, even that might not work.
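A rough, hedged sketch of that approach (it assumes the machine is domain-joined and the account can read the forest topology; a failed bind is treated as the DC being unreachable):
Add-Type -AssemblyName System.DirectoryServices
$forest = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()
foreach ($domain in $forest.Domains) {
    foreach ($dc in $domain.DomainControllers) {
        try {
            # Accessing NativeObject forces an LDAP bind to the DC's rootDSE
            $rootDse = New-Object System.DirectoryServices.DirectoryEntry("LDAP://$($dc.Name)/rootDSE")
            $null = $rootDse.NativeObject
            "$($dc.Name) : up"
        } catch {
            "$($dc.Name) : down ($($_.Exception.Message))"
        }
    }
}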
This sounds like a job for the wonderful people of www.serverfault.com
I do not see how this is programmer specific? It sounds like network troubleshooting? Anyways...
IMO, it depends where your federation servers are located. Are they in the cloud? Are they virtual? If so, it's easy to detect when they go down, through simple API calls to your server platform.
Or you could try to implement a server ping mechanism, like the example on this website here:
$servers = Get-Content 'servers.txt'
foreach ($server in $servers) {
    # Ping the machine to see if it's on the network
    $results = Get-WMIObject -Query "select StatusCode from Win32_PingStatus where Address = '$server'"
    $responds = $false
    foreach ($result in $results) {
        # If the machine responds, break out of the result loop and indicate success
        if ($result.statuscode -eq 0) {
            $responds = $true
            break
        }
    }
    if ($responds) {
        # Gather info from the server because it responds
        Write-Output "$server responds"
    } else {
        # Let the user know we couldn't connect to the server
        Write-Output "$server does not respond"
    }
}
** This assumes your servers are "pingable".
You could probably also make use of the Get-ADComputer cmdlet, documented on MS Technet here.
The Get-ADComputer cmdlet gets a computer or performs a search to retrieve multiple computers.
The Identity parameter specifies the Active Directory computer to
retrieve. You can identify a computer by its distinguished name (DN),
GUID, security identifier (SID) or Security Accounts Manager (SAM)
account name. You can also set the parameter to a computer object
variable, such as $ or pass a computer object
through the pipeline to the Identity parameter.
To search for and retrieve more than one computer, use the Filter or
LDAPFilter parameters. The Filter parameter uses the PowerShell
Expression Language to write query strings for Active Directory.
PowerShell Expression Language syntax provides rich type conversion
support for value types received by the Filter parameter. For more
information about the Filter parameter syntax, see
about_ActiveDirectory_Filter. If you have existing LDAP query strings,
you can use the LDAPFilter parameter.
This cmdlet retrieves a default set of computer object properties. To
retrieve additional properties use the Properties parameter. For more
information about the how to determine the properties for computer
objects, see the Properties parameter description.