PowerShell script does not run from Task Scheduler

I have a script that pulls some information from AD, inserts rows into a temp table, then calls a SQL script that transforms and upserts rows into a crosswalk table. The script runs fine in the ISE, but fails when run from Task Scheduler, whether started manually or on schedule.
On the 'Action' page, the program is 'powershell.exe' and the arguments are '-executionpolicy bypass C:\scripts\SysManagement\Populate_AD_Xwalk.ps1'. The Last Run Result is (0x1).
Any idea what is wrong?
Thanks
# Invoke-Sqlcmd connection string parameters
$params = @{'server'='xxx'; 'UserName'='xxx'; 'Password'='xxx'; 'Database'='xxx'}
######################
# Function to manipulate the data
Function writeDiskInfo
{
    param($UPN, $EMAIL, $SAM, $ACTIVE)
    $InsertResults = @"
INSERT INTO [xxx].[dbo].[WORK_UPN_Email](UPN, EMAIL, SAM, ACTIVE)
VALUES ('$UPN','$EMAIL','$SAM','$ACTIVE')
"@
    # Call Invoke-Sqlcmd to execute the query (splatting the connection parameters)
    Invoke-Sqlcmd @params -Query $InsertResults
}
#####################
# Query AD objects and store in an array
$dp = Get-ADUser -property 'emailaddress' -Filter *
# Loop through array and insert into WORK table
foreach ($item in $dp)
{
# Call the function to transform the data and prepare the data for insertion
writeDiskInfo $item.UserPrincipalName $item.EmailAddress $item.SamAccountName $item.Enabled
}
# Call SQL procedure to delete rows with blank upns and upsert crosswalk table
Invoke-Sqlcmd @params -Query "EXEC ZZproc_Upsert_AD_Email"
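One caveat about the INSERT above, separate from the scheduling problem: because values are concatenated directly into the SQL text, any value containing a single quote (e.g. a UPN like O'Brien) breaks the statement. A minimal guard is to double embedded single quotes before building the query; this is a sketch, and the helper name is invented for illustration:

```powershell
# Hypothetical helper: double any embedded single quotes so the
# value survives inside a '...' SQL string literal
function Escape-SqlLiteral([string]$value) {
    return $value -replace "'", "''"
}

# Example: O'Brien becomes O''Brien before it is spliced into the INSERT
$UPN = Escape-SqlLiteral $item.UserPrincipalName
```

This only addresses quoting; a parameterized insert via SQL client classes would be more robust, but the escape keeps the question's Invoke-Sqlcmd approach intact.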

The Last Run Result (0x1) could mean it's a privilege issue.
Check which user the task runs as. Is the task configured to run even when the user is not logged on?
I believe you are using an AD user for the SQL operations. Is the task running as that user, or does the account running the task have sufficient DB privileges?
If the task does run as the AD user, check that the user has sufficient privileges on the folder where the PowerShell script resides.
Under the System32 folder there is a "Tasks" folder. Does this user have read and execute privileges on the Tasks folder?
Most importantly, the account running the task needs the "Log on as a batch job" right.
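Another common cause of 0x1 is that the script path in the question's arguments is not passed with -File, so powershell.exe never runs the script at all. A sketch of registering an equivalent task from PowerShell (the task name, trigger time, and service account are placeholders; the script path is the asker's):

```powershell
# Build the action with an explicit -File argument and -NoProfile
$action = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\scripts\SysManagement\Populate_AD_Xwalk.ps1"'

# Placeholder trigger: run daily at 2 AM
$trigger = New-ScheduledTaskTrigger -Daily -At 2am

# Register under an account with the needed AD and DB rights;
# supplying a password lets it run whether the user is logged on or not
Register-ScheduledTask -TaskName 'Populate AD Xwalk' -Action $action -Trigger $trigger `
    -User 'DOMAIN\svc-account' -Password 'xxx' -RunLevel Highest
```

The ScheduledTasks module cmdlets above require Windows 8 / Server 2012 or later; on older systems the same settings can be applied in the Task Scheduler GUI.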

Related

How to catch unexpected error running Liquibase in PowerShell

I have a small CI/CD script written in PowerShell, but I don't know how to stop it when Liquibase raises an unexpected error. The SQL scripts themselves work (with preconditions in place where needed), but I want more control over the CI/CD script. Currently, if Liquibase fails, the script keeps executing. The script updates several schemas, some of which depend on each other, so order is important.
#update first scheme - ETL (tables, global temp tables, packages)
.\liquibase update --defaults-file=import.properties
#how to stop this script, if I get unexpected error running Liquibase?
#update second scheme - data (only tables and roles for data)
.\liquibase update --defaults-file=data.properties
#update third scheme - views, tables and other for export data
.\liquibase update --defaults-file=export.properties
Have you tried this? Note that Start-Process returns a process object only when -PassThru is specified; without it, $result would be empty:
$result = Start-Process -FilePath 'path to liquibase' -ArgumentList "your liquibase arguments go here" -Wait -PassThru
if ($result.ExitCode -ne 0) {
    Write-Host 'something went wrong'
}
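Since the original script invokes .\liquibase directly as a native command rather than through Start-Process, an alternative sketch (using the same launcher and properties files from the question) is to check $LASTEXITCODE after each call and abort on the first failure, which preserves the required ordering:

```powershell
function Invoke-LiquibaseUpdate {
    param([string]$DefaultsFile)
    .\liquibase update --defaults-file=$DefaultsFile
    # $LASTEXITCODE holds the exit code of the last native command
    if ($LASTEXITCODE -ne 0) {
        throw "Liquibase update failed for $DefaultsFile (exit code $LASTEXITCODE)"
    }
}

# The throw stops the script at the first failing scheme
Invoke-LiquibaseUpdate 'import.properties'   # ETL: tables, global temp tables, packages
Invoke-LiquibaseUpdate 'data.properties'     # data: tables and roles
Invoke-LiquibaseUpdate 'export.properties'   # views, tables and other export objects
```

An unhandled throw terminates the script (or can be caught by a surrounding try/catch in the CI/CD pipeline), so later schemas are never touched after a failure.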

PowerShell while loop when function returns value

I have several steps in PowerShell invoked against SQL Server:
backup database
add database to availability group
turn on NAV middle tier
The problem is that the "add to availability group" step just executes the command and does not wait until the database is recovered on the secondary node. I have a function to check the state of the database on the secondary; it should wait until the database is online and only then start the NAV middle tier.
I get the correct database state (ONLINE/RECOVERING/...), but somehow I cannot figure out the WHILE part. My example here loops even when the database is already ONLINE.
What am I doing wrong?
function get-dbstate($db_name_restore){
    $query_check_state = "select state_desc from sys.databases where [name] = '" + $db_name_restore + "'"
    $state = Invoke-Sqlcmd -ServerInstance server2 -Query $query_check_state -ErrorAction Stop
    return $state.state_desc
}
while(get-dbstate($db_name_restore) -ne "ONLINE"){
    Write-Host "...still restoring"
    Start-Sleep -Seconds 2
}
OK, the correct syntax is to wrap the function call in parentheses so it is evaluated before the comparison. Without them, PowerShell parses -ne and "ONLINE" as extra arguments to the function, and the function's non-empty return value always evaluates to true:
while((get-dbstate $db_name_restore) -ne "ONLINE"){
    Write-Host "...still restoring"
    Start-Sleep -Seconds 2
}
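To avoid polling forever if the database never recovers, the same loop can be guarded with a timeout. This is a sketch; the 10-minute limit is an arbitrary choice, and get-dbstate is the function from the question:

```powershell
$timeout   = New-TimeSpan -Minutes 10
$stopwatch = [System.Diagnostics.Stopwatch]::StartNew()

while ((get-dbstate $db_name_restore) -ne "ONLINE") {
    # Give up after the timeout rather than spinning indefinitely
    if ($stopwatch.Elapsed -gt $timeout) {
        throw "Database '$db_name_restore' did not come online within $($timeout.TotalMinutes) minutes"
    }
    Write-Host "...still restoring"
    Start-Sleep -Seconds 2
}
```

The throw surfaces the failure to whatever invoked the script, instead of silently never starting the NAV middle tier.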

PowerShell calling another PS1 in a loop finishes after the first foreach iteration completes

I have two PowerShell scripts: a primary (ServerInfo.ps1) and a secondary wrapper script that launches the primary in a loop, using different credentials on each iteration because the queries go to different AD domains/forests, each of which requires its own domain credentials.
The primary script runs fine on its own when I run it manually and locally from a machine in each respective domain, and does what is needed (grabbing details from remote machines and exporting them to a CSV).
The following is the Wrapper script (domain name examples changed for security reasons).
# This is a Wrapper Script for ServerInfo.ps1
$username = Read-Host -Prompt 'Enter User Account to be used - Do not specify domain'
$Password = Read-Host -Prompt 'Input User Password - NOTE must be the same on all domains' -AsSecureString
$domains = "d1.contoso.com","d2.contoso.com","dev.contosodev1.com","test.contosotest1.com"
$Arguments = "-file c:\serverinfo\ServerInfo.ps1", "-ServerType 'DCs'"
ForEach ($domain in $domains) {
$Credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "$domain\$username", $Password
Start-Process powershell.exe -Credential $credential -NoNewWindow -ArgumentList $Arguments -WorkingDirectory 'c:\Serverinfo\' -Wait
}
Specifically this will be used to query Domain Controllers with an elevated permissions account that is identical on each domain, as the account used on member computers does not have Builtin Admin or AD (Domain/Enterprise Admin) level rights on the Domain Controllers. The intention is also to run the scripts from a Domain member, not locally on a DC.
As the primary script (serverinfo.ps1) is over 1000 lines of code, I will simply say that with the wrapper passing the argument "-ServerType DCS", ServerInfo.ps1 initially grabs all Domain Controller names from AD of the respective domain the account belongs to, and performs things such as WMI & Registry queries of each DC, exporting the output to a CSV file.
For the first domain, this runs fine without any issue and ServerInfo.ps1 does its job, querying every DC in the first domain, but then both PowerShell scripts close/stop without continuing to the second domain; i.e., the "foreach ($Domain in $Domains)" loop does not continue once the first domain completes.
As I don't see any scripting error in the wrapper, and there is no Exit or other termination command in ServerInfo.ps1, I am at a loss as to why the wrapper is not working as expected.
I am pretty sure that your loop works. Just insert a Write-Host "Iteration for domain: $domain" inside the loop and you will see that it does iterate over all the domains.
I am more concerned about the way you call the other script. With Start-Process -Credential, the process is executed in the user space of the provided user. In your scenario, running it on one domain computer for all domains requires trust relationships between your domains. Do you have any?
If not, you have to pass the credentials to the called script in some secure way, so that it can authenticate itself when using remoting cmdlets.
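One way to pass the credentials, sketched under the assumption that ServerInfo.ps1 can be modified: give it an explicit PSCredential parameter (the parameter names below are invented for illustration) and call it in-process from the wrapper instead of through Start-Process:

```powershell
# Assumed change inside ServerInfo.ps1: declare the parameters at the top,
#   param([string]$ServerType,
#         [System.Management.Automation.PSCredential]$Credential)
# and pass -Credential $Credential to its remote WMI/registry calls.

# Wrapper: build a credential per domain and invoke the script directly,
# so the foreach loop is never affected by a child process exiting
ForEach ($domain in $domains) {
    $Credential = New-Object System.Management.Automation.PSCredential ("$domain\$username", $Password)
    & 'C:\serverinfo\ServerInfo.ps1' -ServerType 'DCs' -Credential $Credential
}
```

Calling the script with the & operator keeps everything in one PowerShell session, which also avoids the quoting pitfalls of building a Start-Process argument list.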

Prompt user for input and save that input for future iterations of the script

I built a script that runs on the user's PC every minute via a scheduled task. The scheduled task is created by a batch script that initially runs the PowerShell script and also schedules it to run every minute.
I want the PowerShell script to prompt the user for certain variables (username, email address) and to remember those values each time it runs automatically, until the user next runs the script manually.
I know I can do this:
$email= Read-Host 'What is your email address?'
But how do I make it save the input until it is next opened manually?
One idea I had was to have a batch script that schedules a task to run a batch script every minute. The batch script would then run the PowerShell script every time it is run and automatically and silently respond to the questions asked based on how the user edits the batch script. There has to be a better way than this.
You could do something like this, which stores the configuration as XML in a file under the user's profile, unless that file already exists, in which case it loads it:
$ConfigPath = "$env:USERPROFILE\userconfig.xml"
If (Test-Path $ConfigPath) {
    $Config = Import-Clixml $ConfigPath
} Else {
    $ConfigHash = @{
        email    = Read-Host "What is your email address?"
        username = Read-Host "What is your username?"
    }
    $Config = New-Object -TypeName PSObject -Property $ConfigHash
    $Config | Export-Clixml $ConfigPath
}
You then access the configuration settings as follows:
$Config.email
$Config.username
Explanation
Defines a path to store the config (using the UserProfile environment variable)
Uses Test-Path to check whether the file exists
If it does exist, uses the Import-Clixml cmdlet to load the file into a PowerShell object in the variable $Config
If it does not exist, creates a hashtable, prompting the user for each configuration setting
Uses New-Object to turn that hashtable into a PowerShell object, which is stored in $Config
Writes $Config out to an XML file using Export-Clixml, under the path defined in $ConfigPath
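The question also asks that a manual run re-prompt the user. One sketch of that, building on the cached-file approach above: add a switch (the -Reconfigure name is invented) that the user's manual shortcut passes, while the scheduled task omits it:

```powershell
param([switch]$Reconfigure)

$ConfigPath = "$env:USERPROFILE\userconfig.xml"

# Re-prompt when run manually with -Reconfigure, or on first run
if ($Reconfigure -or -not (Test-Path $ConfigPath)) {
    $Config = New-Object -TypeName PSObject -Property @{
        email    = Read-Host "What is your email address?"
        username = Read-Host "What is your username?"
    }
    # Overwrite the cached answers for subsequent automatic runs
    $Config | Export-Clixml $ConfigPath
} else {
    $Config = Import-Clixml $ConfigPath
}
```

The scheduled task's command line stays unchanged; only the manual shortcut adds -Reconfigure, so automatic runs always reuse the cached values.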

Configure SharePoint 2010 UPS with PowerShell

SOLUTION FOUND: For anyone else who happens to come across this problem, have a look-see at this: http://www.harbar.net/archive/2010/10/30/avoiding-the-default-schema-issue-when-creating-the-user-profile.aspx
TL;DR: When you create the UPS through CA, it creates a dbo user and schema on the SQL server using the farm account. When you do it through PowerShell, it creates a schema and user named after the farm account, but still tries to manage SQL using the dbo schema, which of course fails terribly.
NOTE: I've only included the parts of my script I believe to be relevant. I can provide other parts as needed.
I'm at my wit's end on this one. Everything seems to work fine, except the UPS Synchronization service is stuck on "Starting", and I've left it over 12 hours.
It works fine when it's set up through the GUI, but I'm trying to automate every step possible. While automating I'm trying to include every option available from the GUI so that it's present if it ever needs to be changed.
Here's what I have so far:
$domain = "DOMAIN"
$fqdn = "fully.qualified.domain.name"
$admin_pass = "password"
New-SPManagedPath "personal" -WebApplication "http://portal.$($fqdn):9000/"
$upsPool = New-SPServiceApplicationPool -Name "SharePoint - UPS" -Account "$domain\spsvc"
$upsApp = New-SPProfileServiceApplication -Name "UPS" -ApplicationPool $upsPool -MySiteLocation "http://portal.$($fqdn):9000/" -MySiteManagedPath "personal" -ProfileDBName "UPS_ProfileDB" -ProfileSyncDBName "UPS_SyncDB" -SocialDBName "UPS_SocialDB" -SiteNamingConflictResolution "None"
New-SPProfileServiceApplicationProxy -ServiceApplication $upsApp -Name "UPS Proxy" -DefaultProxyGroup
$upsServ = Get-SPServiceInstance | Where-Object {$_.TypeName -eq "User Profile Service"}
Start-SPServiceInstance $upsServ.Id
$upsSync = Get-SPServiceInstance | Where-Object {$_.TypeName -eq "User Profile Synchronization Service"}
$upsApp.SetSynchronizationMachine("Portal", $upsSync.Id, "$domain\spfarm", $admin_pass)
$upsApp.Update()
Start-SPServiceInstance $upsSync.Id
I've tried running each line one at a time by just copying it directly into the shell window after defining the variables, and none of them give an error, but there has to be something the CA GUI does that I'm missing.
The workaround is to put my code into its own script file, and then use Start-Process to run the script as the farm account (it's a lot cleaner than the Job method described in the linked article):
$SecureString = ConvertTo-SecureString $admin_pass -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ("$domain\spfarm", $SecureString)
Start-Process -FilePath powershell.exe -ArgumentList "-File C:\upsSync.ps1" -Credential $credential