Azure Data Factory Portal Blades Status Cache Reset - powershell

I have a fairly complex Azure Data Factory (ADF) with many datasets covering a variety of time slices, intervals, etc. Some of these datasets throw errors from time to time and need to be fixed. Typically I'll deal with the error behind the scenes, in other Azure services or offline. Then once it's resolved I simply set the errored/failed time slice status manually using PowerShell.
For example:
Set-AzureRmDataFactorySliceStatus `
-ResourceGroupName $ResourceGroup `
-DataFactoryName $ADFName.DataFactoryName `
-DatasetName $Dataset.DatasetName `
-StartDateTime 2017-02-23 `
-EndDateTime 2017-02-24 `
-Status "Waiting" `
-UpdateType "Individual"
However, in doing this, the portal does not react very well when it comes to reporting the actual errors on the main ADF blade. Snippet below.
In the case of the above I certainly do not have 21 errored datasets! How dare it! :-)
So, my question is: do we know of a way to clear the cache or reset these values in the ADF portal blades? They are inaccurate, and it doesn't look great if the customer decides to check on things.
Ideally I'd scrap the portal UI altogether! PowerShell for the win :-)
Many thanks for your time and support.

Related

Create high performance power plan based on existing HP power plan with PowerShell

I have a Powershell script I'm working on to automate some processes of our deployment method for a Windows 10 migration. Part of this script activates the High Performance power plan option using powercfg. It works fine on the desktops we use where the three normal plans (Balanced/HP/Power Saver) are already present/created, but the laptops I'm testing it on only have Balanced present in the "Choose power plan" menu.
Usually I'd click through the GUI to "Create power plan" which asks me if I want to base it off an existing plan. Here I'm able to click the High Performance plan template, then just name it as High Performance and use it that way. I'm looking for a way to do this in the Powershell script so I can run this script on both laptops and desktops alike.
Here's what I have for changing the power plan to High Performance on machines where the HP power plan is already created and in the "Choose..." menu (courtesy of SQL Soldier):
Try {
    # Pull the GUID of the High performance plan from powercfg's list output
    $HighPerf = powercfg -l | %{if($_.contains("High performance")) {$_.split()[3]}}
    # GUID of the currently active plan
    $CurrPlan = $(powercfg -getactivescheme).split()[3]
    # Switch only if High performance isn't already active
    if ($CurrPlan -ne $HighPerf) {powercfg -setactive $HighPerf}
    Write-Host "Power plan has been configured to High performance."
}
Catch {
    Write-Warning -Message "Unable to set power plan to high performance"
}
I have a variable called $Laptop that, when -eq to "y" or "Yes", triggers other actions in the script, so I'd like to wrap this code in an if statement based on $Laptop that would create the power plan and then activate it if the machine is a laptop, and run the above code normally if not (a sketch of this follows below). I realize this code isn't exactly ironclad, but we only really have a select few models, so I'm not trying to make it foolproof.
Please let me know if I can clarify anything.
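For the laptop case, a hedged sketch: powercfg can clone a plan by GUID with -duplicatescheme, and 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c is the well-known GUID of the built-in High performance template (verify it's still resolvable on your laptop images, since some models ship without the stock plans):
Try {
    if ($Laptop -eq "y" -or $Laptop -eq "Yes") {
        # Clone the built-in High performance template; output looks like
        # "Power Scheme GUID: <new-guid>  (High performance)"
        $NewPlan = (powercfg -duplicatescheme 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c).split()[3]
        powercfg -changename $NewPlan "High Performance"
        powercfg -setactive $NewPlan
    }
    else {
        # Desktop path: the plan already exists, so just find and activate it
        $HighPerf = powercfg -l | %{if($_.contains("High performance")) {$_.split()[3]}}
        if ($(powercfg -getactivescheme).split()[3] -ne $HighPerf) {powercfg -setactive $HighPerf}
    }
    Write-Host "Power plan has been configured to High performance."
}
Catch {
    Write-Warning -Message "Unable to set power plan to high performance"
}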

Active Directory transient errors when using ActiveDirectory cmdlets in bulk

I have a system which synchronizes users from a data source. The data source consists of user information. When a new user is synchronized, a PowerShell task is triggered which creates or updates the user. This all works fine, but when the number of new/updated users becomes too large, some of the tasks fail with interesting errors such as:
"The server has returned the following error: invalid enumeration
context."
or
"A connection to the directory on which to process the request was
unavailable. This is likely a transient condition."
When troubleshooting, it seems obvious that these errors occur because of a lack of resources: all the simultaneously triggered tasks are importing the module in their own PowerShell session.
So I tried some different things, measuring Import-Module speed, etc. I concluded that it is faster to run Import-Module and then, for instance, Get-ADUser, than to run just Get-ADUser (which also imports the module implicitly).
Measure-Command {Import-Module ActiveDirectory}
Average time 340 ms
Measure-Command {Get-ADUser -Filter *}
Average time 420 ms
Get-ADUser after the module is imported
Average time 10 ms
But these marginal differences are not going to solve the issue, so I had to look further. I found that disabling the default AD: drive might help speed up the import, so I added the following before importing the module:
$Env:ADPS_LoadDefaultDrive = 0
Measure-Command {Import-Module ActiveDirectory}
Average time 85 ms
Four times faster! But the error still persists with a high number of simultaneous users (e.g. 50 tasks). So I thought about polling availability in the script, or adding a do..while retry loop. Or maybe the system which fires the separate tasks needs to be redesigned to use some sort of queue.
Does anyone recognize this situation? Or have some thoughts they'd like to share on this subject?
"It is because all the simultaneously triggered tasks are importing the module in their own PowerShell session."
Then you need to make sure this doesn't happen, or at least not so much that you run out of resources. So you have two options:
1. Limit the number of tasks that get run at any one time (maybe 5 at a time).
2. Make it one task that can work on several accounts. This way the module only gets loaded once.
I think option 2 is the better solution. For example, rather than triggering the script right away, your synchronization job could just write the usernames to a file (or even keep them in memory) and, once it's done finding all the users, trigger the PowerShell script and pass it the list (or the script could read the file that was written to). You have options there - whatever works best. A sketch of the batched approach follows.
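A minimal sketch of that batched pattern, assuming the sync job hands over one username per line in a file (the path and the create/update logic here are placeholders):
# Process all pending users in a single session so the ActiveDirectory
# module is imported exactly once.
$Env:ADPS_LoadDefaultDrive = 0          # skip mapping the AD: drive (faster import)
Import-Module ActiveDirectory

# C:\Sync\pending-users.txt is a hypothetical hand-off file
Get-Content 'C:\Sync\pending-users.txt' | ForEach-Object {
    $existing = Get-ADUser -Filter "sAMAccountName -eq '$_'" -ErrorAction SilentlyContinue
    if ($existing) {
        # placeholder for the real update logic
        Set-ADUser -Identity $existing -Description "Synced $(Get-Date)"
    }
    else {
        # placeholder for the real creation logic
        New-ADUser -Name $_ -SamAccountName $_
    }
}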
Update: All of .NET is available in PowerShell, so another option is to change the whole script to use .NET's DirectoryEntry instead of the ActiveDirectory module, which uses much less memory. There are even shortcuts for it in PowerShell. For example, [ADSI]"LDAP://$distinguishedName" creates a DirectoryEntry object for a user. It is a substantial rewrite, but the performance is much better (in both speed and memory consumption).
Here are some examples of searching with DirectorySearcher (using the [ADSISearcher] shortcut): https://blogs.technet.microsoft.com/heyscriptingguy/2010/08/23/use-the-directorysearcher-net-class-and-powershell-to-search-active-directory/
And here's an example of creating accounts with DirectoryEntry: https://www.petri.com/creating-active-directory-user-accounts-adsi-powershell
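For reference, a small example of the [ADSISearcher]/DirectoryEntry pattern those links describe (the sAMAccountName and attribute used here are illustrative):
# Find a user with DirectorySearcher - no ActiveDirectory module required
$searcher = [ADSISearcher]'(&(objectCategory=person)(sAMAccountName=jdoe))'
$result = $searcher.FindOne()

if ($result) {
    # Bind to the underlying DirectoryEntry and update an attribute
    $entry = $result.GetDirectoryEntry()
    $entry.Properties['description'].Value = "Synced $(Get-Date)"
    $entry.SetInfo()    # commit the change to AD
}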

Dynamics CRM workflow failing with infinite loop detection - but why?

I want to run a plug-in every 30 minutes, to poll an external system for changes. I am in CRM Online, so I don't have ready access to a scheduling engine.
To run the plug-in, I have a 'trigger' entity with a time-zone independent date field.
Updating the field also triggers a workflow, which in pseudocode has this logic:
If (Trigger_WaitUntil >= [Process-Execution Time])
{
    Timeout until Trigger_WaitUntil
    {
        Set Trigger_WaitUntil to [Process-Execution Time] + 30 minutes
        Stop Workflow with status of: Succeeded
    }
}
If (Trigger_WaitUntil < [Process-Execution Time])
{
    Send email    // Tell an admin that the recurring task has self-terminated
    Stop Workflow with status of: Canceled
}
So the behaviour I expect is that every 30 minutes the WaitUntil field gets updated (and the plug-in and workflow get triggered again), unless the WaitUntil date is before the execution time, in which case the workflow stops.
However, 4 hours or so later (probably 8 executions, although I haven't verified that yet) I get an infinite loop warning: "This workflow job was canceled because the workflow that started it included an infinite loop. Correct the workflow logic and try again. For information about workflow..."
My question is: why? Do workflows have a correlation ID like plug-ins, which is being carried through to the child workflow? If so, is there any way I can prevent this while maintaining the current basic mechanism of using a single trigger record to manage the schedule? (I've seen other solutions in which workflows create new records, but then you've got to go around tidying up the old trigger records as well.)
Yes, this behavior is well known. The only way to implement recurring workflows in Dynamics CRM without infinite-loop issues, using only out-of-the-box features, is to use the Bulk Deletion functionality. This article describes how to implement it: http://www.crmsoftwareblog.com/2012/08/using-the-bulk-deletion-process-to-schedule-recurring-workflows/
Update: if you want to run your code every 30 minutes then you will have to create 48 bulk-delete jobs with corresponding start times (12:00, 12:30, 1:00, ...).
The currently supported method for CRM is to use the Azure Scheduler.
Excerpt:
"Create a Web API application to communicate with CRM and our external provider, running on a shared (free) Azure web site, and also utilize the Azure Scheduler to manage the recurrence pattern. The free version of the Azure Scheduler limits us to execution no more than once an hour and a maximum of 5 jobs. If you have a lot going on, $20 a month will get you executions every minute and up to 50 jobs - which sounds like a pretty good deal."
So if you wanted every 30 minutes, you could create two jobs: one on the half hour, and one on the hour.
The Bulk Deletion approach is an interesting workaround and something we've used before. It creates extra work and maintenance, though, so I try to avoid it if possible.
I would generally recommend building a Windows application and using the Windows Task Scheduler (I know you said you don't have a scheduler available, but this option is often forgotten). This approach works really well and is very easy to troubleshoot. Writing to logs and sending error email alerts makes it reasonably robust. The server doesn't need to be accessible externally; it only needs to reach CRM. If you had CRM on-premises, you could just use the same server.
Azure Scheduler is a great suggestion. This keeps you in the cloud, which is nice.
SSIS is another option if you already have KingswaySoft or CozyRoc in place.
You could build a workflow that creates another record and cleans up after itself; however, this is really using the wrong tool for the job. Also, it's very easy for it to fail and then not initiate the next record.
There is a solution called "Scheduled Workflow Runner". You create a FetchXML query to define a record set to run against, and point it at an on-demand workflow that you want to run on each record.
http://alexanderdevelopment.net/post/2013/05/18/scheduling-recurring-dynamics-crm-workflows-with-fetchxml/

PowerShell launch at specific times

I am building a system to restart computers for patching purposes. Most of the skeleton is there and working; I use workflows and some functions that allow me to capture errors and reboot the systems in a number of ways in case of failures.
One thing I am not sure of is how to set up the timing. I am working on a web interface where people can schedule their reboots, either dynamic (one-time) or regularly scheduled (monthly). The server info and times for the reboots are stored in a SQL database.
The part that I am missing is how to trigger the reboots when scheduled. All I can think of is allowing for whole-hour increments and running a script hourly that checks whether any servers in the db have a reboot time that "matches" the current time. This will likely work, but it is somewhat inflexible.
Can anyone think of a better way? Some sort of daemon?
For instance, user X has 300 servers assigned to him. He wants 200 rebooted at 10 PM on each Friday, and 50 once a month on Saturday at 11 PM. There will be over a dozen users rebooting 3000-4000 computers, sometimes multiple times monthly.
OK, let's say you have a script that takes a date and time as an argument and looks up which computers to reboot based on that specific date and time, or schedule, or whatever it is you're storing in your SQL db that specifies how often to reboot things. For the sake of me not really knowing SQL that well, we'll pretend that this is functional (it would require the PowerShell Community Extensions snap-in for the Invoke-AdoCommand cmdlet):
[cmdletbinding()]
Param([string]$RebootTime)
# Connection string for the reboot-tracking database
$ConStr = 'Data Source=SQLSrv01;Database=RebootTracking;Integrated Security=true;'
# Quote the value, since Schedule is compared against a string
$Query = "Select * from Table1 Where Schedule = '$RebootTime'"
$Data = Invoke-AdoCommand -ProviderName SqlClient -ConnectionString $ConStr -CommandText $Query
$Data | ForEach{ <# Do things to shut down $_.ServerName #> }
You said you already have things set up to reboot the servers, so I didn't even really try there. Then all you have to do is set up a scheduled job for whenever any server is supposed to be rebooted:
Register-ScheduledJob -FilePath \\Srv01\Scripts\RebootServers.ps1 `
    -Trigger @{Frequency="Weekly"; At="10:00PM"; DaysOfWeek="Saturday"; Interval=4} `
    -ArgumentList "Day=Saturday;Weeks=4;Time=22:00"
That's an example, but you could probably work with it to accomplish your needs. That will run it once every 4 Saturdays (about monthly). That one scheduled task will query the SQL server to match a string against a field, and you could format that string any way you want to make the match as specific or general as desired. That way one task could reboot those 200 servers, and another could reboot the other 50, all depending on the users' desires. A more readable variant of the registration follows.
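If the inline hashtable gets hard to read, the same registration can be written with New-JobTrigger; a sketch (the job name here is made up):
# Build the trigger object separately: every 4th Saturday at 10 PM
$trigger = New-JobTrigger -Weekly -DaysOfWeek Saturday -At "10:00 PM" -WeeksInterval 4

Register-ScheduledJob -Name MonthlyRebootBatch `
    -FilePath \\Srv01\Scripts\RebootServers.ps1 `
    -Trigger $trigger `
    -ArgumentList "Day=Saturday;Weeks=4;Time=22:00"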

Remotely running Get-EventLog wipes actual event log

I wrote a script to remotely fetch event logs in PowerShell, but I'm not a fan of the way the script makes its own event log entries.
Specifically this script grabs several types of event IDs including the logon/logoff events. Recently I was asked to run the script to fetch events for another user, and had to have this data fetched in a few hours. Normally I start the script and let it run for the majority of the day (because there is usually a LOT of data here), but this time to speed up the process, I spun up 4 instances of the script to fetch this data faster than usual. Each instance of the script was looking at a different time frame so that all 4 scripts combined were fetching in the time frame I had been asked for.
Over the course of 3 hours or so, I had over a million different login attempts for my user ID on this remote computer. I had so many logins that I ended up overwriting the event log data I was originally asked to fetch.
Lessons learned; I'm now researching how to make this faster, more efficient, and more reliable.
Here's the heart of my code, pretty plain and simple, and works for the most part.
$output = Get-EventLog `
    -InstanceId 4672,4647,4634,4624,4625,4648,4675,4800,4801,4802,4803 `
    -LogName Security `
    -After (Get-Date $inStartDate) `
    -Before (Get-Date $inEndDate).AddDays(1) `
    -Message ("*" + $InUserID + "*") `
    -ComputerName $inPCID
I guess there are several questions that I haven't been able to figure out in my research thus far. Why would Get-EventLog need to make so many connections? Is it because the connection kept dropping or something?
What would be the fastest way to fetch this data: using the native Get-EventLog command with -ComputerName, or something like Enter-PSSession or Invoke-Command?
Will Enter-PSSession and Invoke-Command both have the same problem I'm having with Get-EventLog?
I'd like to avoid using Enter-PSSession and Invoke-Command for the simple fact that I can't guarantee all machines in the company will have remote-execution enabled.
So, the problem was that Get-EventLog was ultimately wiping the remote event logs I was trying to fetch. I'm not sure why, but Get-EventLog made over a million "connections" that appeared as logon/logoff events, thus overwriting my logs.
Per @Richard's comment, I did a bit of research and decided to test Get-WinEvent, the updated and more powerful replacement for Get-EventLog. After testing this for a week, the worst case I ran into was my script creating 4 logon/logoff events. This is completely acceptable and much nicer than over a million events.
A side question I had related to this topic was performance. Because we're gathering many remote logs, we sometimes need this done as fast as possible. My question was whether Get-WinEvent would be fast enough pulling logs, when compared to an Enter-PSSession or an Invoke-Command approach.
For the scope of this question, Get-WinEvent satisfied both the speed requirements and the event requirements, and relying on the cmdlet to perform its own remoting worked great.
My code is pretty simple, but I'll post it for the record:
Get-WinEvent -ComputerName $inPCID -FilterHashtable @{
        LogName   = "Security"
        Id        = 4672,4647,4634,4624,4625,4648,4675,4800,4801,4802,4803
        StartTime = $inStartDate
        EndTime   = $inEndDate
    } |
    Where-Object { $_.Message -like ("*" + $InUserID + "*") } |
    Export-Csv $fullOutputFile
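If you ever want to quantify that speed question yourself, a rough comparison can be done with Measure-Command; this is only a sketch, and it assumes PS Remoting is enabled on the target for the Invoke-Command case:
# Shared filter; -MaxEvents keeps the test short (illustrative only)
$filter = @{ LogName = "Security"; Id = 4624; StartTime = $inStartDate; EndTime = $inEndDate }

# 1) Let Get-WinEvent do its own (RPC-based) remoting
Measure-Command {
    Get-WinEvent -ComputerName $inPCID -FilterHashtable $filter -MaxEvents 1000
}

# 2) Run the query remotely over WinRM and ship the results back
Measure-Command {
    Invoke-Command -ComputerName $inPCID -ScriptBlock {
        param($f)
        Get-WinEvent -FilterHashtable $f -MaxEvents 1000
    } -ArgumentList $filter
}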