Having trouble binding to Active Directory with specified credentials - powershell

As part of my current role, I frequently find myself having to work with objects in one of my organisation's resource forests. At the moment in order to do that, I use an RDP session connected to a server within that forest, and authenticate to it with a specific "Admin" account in that forest.
I'm starting to find this tedious, so I've been trying to come up with a profile.ps1 that will get me a DirectoryEntry for the resource forest that I can work with in PowerShell (v2.0) on my local workstation instead, saving me the tedium of constantly re-establishing RDP sessions.
So I've got some code in my profile.ps1 which looks like this:
$resforest = "LDAP://DC=ldap,DC=path,DC=details"
$creds = Get-Credential -credential "RESOURCE_FOREST\my_admin_account"
$username = $creds.username
$password = $creds.GetNetworkCredential().password
$directoryentry = New-Object System.DirectoryServices.DirectoryEntry($resforest,$username,$password)
All of this proceeds fine; however, when I come to actually use the entry like this:
$search = New-Object System.DirectoryServices.DirectorySearcher($directoryentry)
$search.Filter = "(&(anr=something_to_look_for))"
$search.FindAll()
I get a logon failure.
Now, I know the credentials are fine, I can map drives with them from my workstation to machines in the resource forest - and that works fine - so what am I ballsing up here?
PS - Please don't ask me to do anything with Quest's AD cmdlets - they're not allowed here.

Turns out the issue was with the serverless binding I was attempting to do.
If I modify the LDAP path to "LDAP://ldap.path.details/DC=ldap,DC=path,DC=details" then everything works.
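For clarity, that's a one-line change to the profile.ps1 snippet above (a sketch; ldap.path.details stands in for the resource forest's DNS name, and any DC's hostname would also work as the server part):
$resforest = "LDAP://ldap.path.details/DC=ldap,DC=path,DC=details"
# Everything else stays the same - the DirectoryEntry now binds through a specific server
$directoryentry = New-Object System.DirectoryServices.DirectoryEntry($resforest,$username,$password)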
Thanks for everyone who at least looked at the question ;)

Related

Having trouble setting Remote Desktop Services Control permissions?

I am new to PowerShell and am trying to create a script that creates new users in Active Directory. Currently I am having trouble setting the Remote Desktop Services tab of the user. My code is below.
#Set Remote Control Settings Permissions
#I received a "Server is not operational" error. https://learn.microsoft.com/en-us/troubleshoot/windows-server/identity/fail-to-configure-server-using-server-manager I think InvokeSet may not be the method I should use.
$userpath = dsquery user -samid $username
$userpat = "LDAP://$userpath"
$userp = [ADSI]$userpat
$userp.InvokeSet("EnableRemoteControl",2)
$userp.setinfo()
#Remote Desktop Services user profile path set to "\\documentsf\profiles\$username" - THIS IS WHERE IT FAILS: InvokeSet throws an error saying "not specified"
$userp.InvokeSet("terminalservicesprofilepath","\\documentsf\profiles\$username")
$userp.setinfo()
The remote control permissions are a little strange and work best with the ADSI method, which you're close to already. dsquery actually returns a string with quotes inside it, so you'll either need to strip those quotes first or use a different method - I prefer Get-ADUser:
$LdapUser = "LDAP://" + (Get-ADUser $username).distinguishedName
$User = [ADSI]$LdapUser
$User.InvokeSet("EnableRemoteControl",2)
$User.SetInfo()
And to set the remote desktop services profile path for a user:
$User.InvokeSet("terminalservicesprofilepath","\\Server\Share\$username")
$User.SetInfo()
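If the ActiveDirectory module isn't available, the dsquery route from the question can also work once the embedded quotes are stripped, as mentioned above (a sketch, assuming dsquery returns a single matching user):
$userpath = (dsquery user -samid $username).Trim('"')
$User = [ADSI]"LDAP://$userpath"
$User.InvokeSet("EnableRemoteControl",2)
$User.SetInfo()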

Double-Hop Errors when running Skype for Business Cmdlets

I am attempting to automate the Skype for Business Server installation process in PowerShell. I have a script that remotes into specified machines and begins preparing them as Front-End servers. The problem arises when certain SfB cmdlets (SfB commands are all of the form "Verb-Cs...", e.g. Get-CsUser or Get-CsPool) are run in remote sessions: they throw the double-hop error:
Exception: Active Directory error "-2147016672" occurred while searching for domain controllers in domain...
This is after running Enable-CsComputer, which enables the computer's role based on its definition in the topology (the topology was published successfully). The user object is in all required groups (RTCUniversalServerAdmins, Schema Admins, CsAdministrators, plus local Admin rights on all SfB servers). Oddly enough, the command "Import-CsConfiguration -LocalStore" does not throw errors, and it's in the same remote session. There may be other local or domain groups that I need to be in, but I cannot pinpoint exactly which, and I have not seen them documented in the Skype build guides. Skype commands that have parameters to specify targets, or that just pull data, such as Get-CsPool or Get-CsAdForest, do not error because they run in the local scope. Enable-CsComputer has no parameter for the computer name; it has to be executed on that machine itself.
Enabling CredSSP delegation on each server is not an option, and I'm not understanding why there is a "second hop" in this command! If the second hop was a resource on a file server or database, that would make sense, and be easy to solve, but in this case, I can't track it. Can anyone tell me what I may be missing?
Here's a code sample to illustrate. From the jumpbox I get the pool data to create an array, and a session is opened to each machine:
$ServerArray = Get-CsPool -Identity $poolName
$i = 0
$SessionArray = @{}
foreach ($server in $ServerArray.Computers) { $SessionArray[$i] = New-PSSession -ComputerName $server; $i++ }
foreach ($session in $SessionArray.Values) {
Invoke-Command -Session $session -ScriptBlock {
#remote commands:
Import-CsConfiguration -FileName <config file path> -LocalStore; #no errors
Enable-CsReplica; #no errors
Enable-CsComputer; #double hop error here
}}
If I log into that machine and run the same command, it executes fine but the intention of the project is to automate it on an arbitrary number of machines.
It looks like it's just trying to authenticate to a domain controller, which is reasonable. You'll have to approach this like any other double-hop issue.
Microsoft has an article dedicated to the double hop issue, and has a few solutions other than CredSSP that you can look at: Making the second hop in PowerShell Remoting
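For example, the resource-based Kerberos constrained delegation option from that article needs only a one-time change in AD (a sketch; ServerB stands for the machine you remote into and ServerC for the second-hop resource, both placeholder names):
# Allow ServerB to delegate the connecting user's identity to ServerC
$serverB = Get-ADComputer -Identity ServerB
Set-ADComputer -Identity ServerC -PrincipalsAllowedToDelegateToAccount $serverB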

How to get an environment variable in a Powershell script when it is deployed by SCCM?

I've made a script to automatically change and/or create the default Outlook signature of all the employees in my company.
Technically, it gets the username environment variable on the machine where the script is deployed, accesses the staff database to get some information about that user, then creates the three files for the signature by replacing values inside linked docx templates. Quite easy and logical.
After various tests, the script works correctly when launched directly on a computer, whether from PowerShell ISE, from CMD, or in Visual Studio. But when we deploy it the way it actually will be deployed, via SCCM, it can't get any environment variable.
Do any of you have an idea about how to get environment variables in a script when it is deployed by SCCM?
Here is what I've already tried:
$Name = [Environment]::UserName
$EnvVarUserName = Get-Item Env:\USERNAME
Even stuff like this:
# Collect the owner of each explorer.exe process (i.e. the logged-on user)
$proc = gwmi win32_process -Filter "Name = 'explorer.exe'"
$report = @()
ForEach ($p in $proc)
{
$temp = "" | Select User
$temp.User = ($p.GetOwner()).User
$report += $temp
}
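When this runs as SYSTEM, the owner of explorer.exe should be the logged-on user, so a final line like the following (assuming a single interactive session) would extract it:
$LoggedOnUser = ($report | Select-Object -First 1).User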
Thanks in advance and have a nice day, y'all!
[EDIT]:
I've found a way of doing this. It's not the best one, but it works: I get the name of the machine, look it up in the DB that records the user ID and machine whenever a laptop connects to our network, and then pull that user's info from the staff DB.
I will still look into Matt's idea, which is pretty interesting and, in a way, more accurate.
Thank you all!
How are you calling the environment variable? $Env:computername has worked for me in scripts pushed out via SCCM before.
Why don't you enumerate the "%SystemDrive%\Users" folder, exclude certain built-in accounts, and handle them all in one batch?
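A minimal sketch of that idea (the exclusion list is an assumption and may need tuning for your builds):
$exclude = 'Public', 'Default', 'Default User', 'All Users'
Get-ChildItem "$env:SystemDrive\Users" |
Where-Object { $_.PSIsContainer -and $exclude -notcontains $_.Name } |
ForEach-Object { $_.Name } # each remaining folder name is a candidate username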
To use the UserName environment variable the script would have to run as the logged-in user, which also implies that all of your users have at least read access to your staff database, which, at least in our environment, would be a big no-no.

Configure SharePoint 2010 UPS with PowerShell

NOTE: I've only included the parts of my script I believe to be relevant. I can provide other parts as needed.
I'm at my wit's end on this one. Everything seems to work fine, except that the UPS Synchronization service is stuck on "Starting", and I've left it for over 12 hours.
It works fine when it's set up through the GUI, but I'm trying to automate every step possible. While automating I'm trying to include every option available from the GUI so that it's present if it ever needs to be changed.
Here's what I have so far:
$domain = "DOMAIN"
$fqdn = "fully.qualified.domain.name"
$admin_pass = "password"
New-SPManagedPath "personal" -WebApplication "http://portal.$($fqdn):9000/"
$upsPool = New-SPServiceApplicationPool -Name "SharePoint - UPS" -Account "$domain\spsvc"
$upsApp = New-SPProfileServiceApplication -Name "UPS" -ApplicationPool $upsPool -MySiteLocation "http://portal.$($fqdn):9000/" -MySiteManagedPath "personal" -ProfileDBName "UPS_ProfileDB" -ProfileSyncDBName "UPS_SyncDB" -SocialDBName "UPS_SocialDB" -SiteNamingConflictResolution "None"
New-SPProfileServiceApplicationProxy -ServiceApplication $upsApp -Name "UPS Proxy" -DefaultProxyGroup
$upsServ = Get-SPServiceInstance | Where-Object {$_.TypeName -eq "User Profile Service"}
Start-SPServiceInstance $upsServ.Id
$upsSync = Get-SPServiceInstance | Where-Object {$_.TypeName -eq "User Profile Synchronization Service"}
# Arguments: sync machine name, sync service instance ID, farm account, password
$upsApp.SetSynchronizationMachine("Portal", $upsSync.Id, "$domain\spfarm", $admin_pass)
$upsApp.Update()
Start-SPServiceInstance $upsSync.Id
I've tried running each line one at a time by just copying it directly into the shell window after defining the variables, and none of them give an error, but there has to be something the CA GUI does that I'm missing.
For anyone else that happens to come across this problem, have a look-see at this: http://www.harbar.net/archive/2010/10/30/avoiding-the-default-schema-issue-when-creating-the-user-profile.aspx
TL;DR: When you create the UPS through Central Admin, it creates a dbo user and schema on the SQL Server using the farm account. When you do it through PowerShell, however, it creates a schema and user named after the farm account but still tries to manage SQL using the dbo schema, which of course fails terribly.
The workaround is to put my code into its own script file, and then use Start-Process to run the script as the farm account (it's a lot cleaner than the Job method described in the linked article):
# $SecureString is assumed to already hold the farm account password as a secure string
$credential = New-Object System.Management.Automation.PSCredential("$domain\spfarm", $SecureString)
Start-Process -FilePath powershell.exe -ArgumentList "-File C:\upsSync.ps1" -Credential $credential

Creating a file in a user context in PowerShell

I am trying to create a file using PowerShell in a specific user context. For example, I have a user user01 on my local machine, and I want to create a file in its context.
I am doing something like
New-Item c:\file.txt -Credential User01
It works but prompts me for a password, which I don't want. Is there any way I can accomplish this without having it prompt for a password?
The credential parameter on new-item is not actually supported for filesystems, so I'm not sure what you mean by "it works." It does NOT create the file as the passed user. In fact, the filesystem provider will say:
"The provider does not support the use of credentials. Perform the operation again without specifying credentials."
Taking an educated guess, I'd say you're trying to create a file with a different owner. PowerShell cannot do this on its own, so you'll need the following non-trivial script:
http://cosmoskey.blogspot.com/2010/07/setting-owner-on-acl-in-powershell.html
It works by enabling the SeBackup privilege for your security token (but you must already be an administrator.) This allows you to set any arbitrary owner on a file. Normally you can only change owner to administrators or your own account.
Oh, and this script is for powershell 2.0 only.
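For contrast, the unprivileged .NET route looks like this, and it only succeeds when the new owner is your own account or the Administrators group (the path is just an example):
$acl = Get-Acl C:\file.txt
$acl.SetOwner([System.Security.Principal.NTAccount]"BUILTIN\Administrators")
Set-Acl C:\file.txt $acl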
Rather than use a PowerShell cmdlet or .NET scripting on this one, you might take a look at the Windows utility takeown.exe. However, even it requires that you supply the password of the user you're assigning ownership to.
OK, so I start a process in the user context and then create the file. Works like a charm.
Password, FilePath and UserName are passed in as arguments from the command line.
$pw = ConvertTo-SecureString "$Password" -AsPlainText -Force
$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "-default-",$pw
$localArgs = "/c echo>$FilePath"
# Process.Start overload: (fileName, arguments, userName, password, domain) - the last argument is the domain
[System.Diagnostics.Process]::Start("cmd", $localArgs, "$UserName", $credential.Password, "$Computer")
Or just make a call to SUBINACL.EXE? No need for password then.
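Something like the following, assuming SubInACL is installed (the path and account are placeholders):
subinacl /file C:\file.txt /setowner=DOMAIN\User01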