Is there any way to rerun Data Factory slices using PowerShell cmdlets? - powershell

Is there any way to rerun failed Azure Data Factory slices using the PowerShell cmdlets?
At the moment I am rerunning the slices manually from the diagram page, but this is not practical as I have more than 500 slices and all of them are scheduled to run every week.
For context: four days ago my database server went down, all the slices failed to execute, and now I want to rerun all of them.
I also wanted to know whether there is any way to get a failure notification; if a slice fails to execute, I would like to receive an email or something similar so that I am notified.
Thanks in advance.

You may also try the script mentioned in the link and let us know how it goes.
For more information, you can refer to the article on monitoring and managing Azure Data Factory pipelines.

Last time a similar issue happened to me I ended up using the "Monitor & Manage" tool from the Azure Portal.
You can use the grid view to select your failed slices, and there is a very useful "Rerun" button on the top left corner of the grid.
To get email alerts when a slice fails, you can add one using the "Metrics and operations" tool.
The setting is quite well hidden, but it exists :)

Related

Is it possible to programmatically create Custom SQL Tableau extracts that can be published and then refreshed on the server?

Given some Custom SQL, I want to create a Tableau Extract programmatically.
Is this possible?
Context of the process is:
1. Generate SQL scripts for each extract (100+)
2. Create the (100+) extracts from Step 1
3. Publish the extracts to Tableau Online
4. Refresh them there on a schedule
Step 2 can be done manually using Tableau Desktop's Custom SQL.
As seen in this help doc: https://help.tableau.com/current/pro/desktop/en-us/customsql.htm
I want to do it (Step 2) programmatically, due to the number of extracts needed and the time it will take.
Yes, you can programmatically create extracts with the Hyper API. You then have the option to use tabcmd, the Tableau REST API, or the Tableau Server Client Python library to publish the extract. If you go with Python to create the extract, then in the same script you could use the Server Client to publish. Instead of Tableau Server refreshing it, you would schedule your script with some sort of task scheduler, like Windows Task Scheduler.
Tableau can do Step 4, you just need to configure it.
Is the problem with steps 1-3 that you have the SQL and just need to automate that part?
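In case it helps, here is a minimal sketch of the Hyper API + Tableau Server Client approach described above (pip install tableauhyperapi tableauserverclient). The column names, file name, site URL, project name, and token values are placeholders, and the rows are assumed to come from running your custom SQL with whatever database driver you already use:

# A sketch under the assumptions above: build a .hyper extract from rows you
# fetched yourself, then publish it to Tableau Online as a data source.
from tableauhyperapi import (
    HyperProcess, Connection, Telemetry, CreateMode,
    TableDefinition, TableName, SqlType, Inserter,
)
import tableauserverclient as TSC

def build_extract(rows, hyper_path="sales_extract.hyper"):
    # Define a single-table extract; adjust the columns to match your SQL output.
    table = TableDefinition(
        table_name=TableName("Extract", "Extract"),
        columns=[
            TableDefinition.Column("region", SqlType.text()),
            TableDefinition.Column("amount", SqlType.double()),
        ],
    )
    with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
        with Connection(endpoint=hyper.endpoint,
                        database=hyper_path,
                        create_mode=CreateMode.CREATE_AND_REPLACE) as conn:
            conn.catalog.create_schema(schema=table.table_name.schema_name)
            conn.catalog.create_table(table)
            with Inserter(conn, table) as inserter:
                inserter.add_rows(rows)
                inserter.execute()
    return hyper_path

def publish_extract(hyper_path):
    # Publish (or overwrite) the extract as a data source on Tableau Online.
    auth = TSC.PersonalAccessTokenAuth("token-name", "token-secret", site_id="mysite")
    server = TSC.Server("https://10ax.online.tableau.com", use_server_version=True)
    with server.auth.sign_in(auth):
        project = next(p for p in TSC.Pager(server.projects) if p.name == "Default")
        datasource = TSC.DatasourceItem(project.id)
        server.datasources.publish(datasource, hyper_path,
                                   mode=TSC.Server.PublishMode.Overwrite)

if __name__ == "__main__":
    rows = [("West", 1234.5), ("East", 987.0)]  # stand-in for your custom SQL results
    publish_extract(build_extract(rows))

Loop that over your 100+ SQL scripts and run the whole thing from a task scheduler; since the extract has no embedded live connection, the "refresh" is simply the next scheduled run of the script rather than a server-side refresh.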

Dataprep jobs running for over 72 hours since 6/20 update. Job status reads complete but not published

I have been running daily Dataprep jobs and since the update last week, approximately half of my jobs are now hanging and not being published. They appear as jobs in progress although when I go to the actual job page, the job appears to be complete. There is no publishing action and the publishing target does not appear updated. Some jobs have now been going on for over 72 hours since Friday.
I've seen traces of other users having the same issue online but have not seen any sort of response or recognition from either Google or Trifacta.
I have tried restarting the jobs without success, and it appears that there is no way to cancel the hanging jobs because, from Google's perspective, the jobs themselves were successful, just not published. This problem appears both on my jobs that publish to BigQuery and on jobs that publish to Google Cloud Storage, as well as on manual and scheduled jobs.
This may only impact jobs that were pushed during the upgrade, and it should be rather cosmetic in nature. Please note that you won't get charged.
Did the exact same job work before with no changes? If so, please contact support and provide them as reference the successful and now failing job ID so it can be investigated further.
Cheers,
Sebastian
I have come across the same problem! The output of the jobs is placed in a temp folder in Cloud Storage, with the output mostly consisting of multiple files without headers...
It is also creating huge issues here. Instead of the normal output file, it places multiple parts of it in a temp folder without headers. That makes new scheduled jobs that rely on these outputs useless, because they do not load the new output.
If you manually merge the files in the temp folder, add headers (in the case of CSV), and place them in the correct folder, the output can be created manually (for CSV).
Also no response from Google yet.
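In case it's useful to anyone else, here is a rough sketch of that manual merge workaround for CSV output, using the google-cloud-storage client library (pip install google-cloud-storage). The bucket name, prefixes, and header line are placeholders for your own job's settings:

# Collect the part files Dataprep left in the temp folder, prepend the expected
# CSV header, and write a single merged file to the intended output path.
from google.cloud import storage

BUCKET = "my-dataprep-output-bucket"     # placeholder
TEMP_PREFIX = "temp/job-12345/"          # folder where the part files landed
FINAL_BLOB = "output/daily_report.csv"   # where the merged file should go
HEADER = "order_id,customer,amount\n"    # header the part files are missing

client = storage.Client()
bucket = client.bucket(BUCKET)

# Download every part file under the temp prefix and concatenate them in order.
parts = sorted(client.list_blobs(BUCKET, prefix=TEMP_PREFIX), key=lambda b: b.name)
body = "".join(b.download_as_text() for b in parts if not b.name.endswith("/"))

# Upload the merged CSV with the header prepended.
bucket.blob(FINAL_BLOB).upload_from_string(HEADER + body, content_type="text/csv")
print(f"Merged {len(parts)} part files into gs://{BUCKET}/{FINAL_BLOB}")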
We're seeing the exact same thing, down to the destinations and job types... it's almost like Dataprep is losing track of the underlying Dataflow job and not finishing on its completion (that's why you see the temp files: that's the output, and Dataprep handles the formatting of the output file separately).
Someone was kind enough to already post this on the issue tracker, so please go star it and add any additional details that may be helpful to the Dataprep team:
https://issuetracker.google.com/issues/135865374

Get list of task names on a push queue on google app engine

I have been looking for the last two days for a way to retrieve a list of tasks on a Google App Engine push queue, either programmatically or with a command-line command (either the gcloud tool or a curl REST API call). I have not been able to find anything in their documentation about this. Does anyone know of a way to retrieve a list of the task names on a push queue? Preferably through a command-line request. Any help or suggestions on where to look are very much appreciated.
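I haven't found a documented listing call in the old App Engine taskqueue tooling either, but App Engine queues are also surfaced through the Cloud Tasks API these days, so something like gcloud tasks list --queue=QUEUE or the Python client sketch below may work (the project, location, and queue name are placeholders):

# pip install google-cloud-tasks
# A minimal sketch, assuming your push queues are visible to the Cloud Tasks API.
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
parent = client.queue_path("my-project", "us-central1", "my-push-queue")  # placeholders

# Each task's name is its full resource path; the last segment is the task ID.
for task in client.list_tasks(parent=parent):
    print(task.name.split("/")[-1])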

AS/400 End User - run keystrokes automatically

I'm a novice with AS/400. I have a bit of coding experience and know that there's always a way into the backend if you're clever enough. But the developers in my organisation said that it's hard to communicate with the server and make it run things remotely.
So I'm wondering if anyone has any ideas on how I can schedule a simple task. I log in to "Personal Communications", which is the client app. Then I go to a certain menu, i.e. I543, enter a parameter "1", and press "ENTER" to run a report which has a file output.
I know there is that "Macro" function within Personal Communications. But that relies on sending keys, which does not work on a locked screen, nor do I want to activate it manually, which really defeats the point of automation.
I was hoping I could schedule a simple CALL command somehow to activate some kind of procedure. I just need to know whether it's possible and where to start looking. Thanks.
Last millennium's AS/400 and today's IBM i both have a basic job scheduler built in.
From a command line, run WRKJOBSCDE.
You need to find out what happens when you select option 1 on menu I543. Assuming it's a simple CALL MYRPT or SBMJOB CMD(CALL MYRPT), then adding a scheduled job to run the report is easy.
However, you probably don't have the authority to do so. Nor should your developers necessarily be able to do so. Your system administrator is the right person. In a small shop, that might be the guy doing development. In a large one, it's another person or team.
But your developers should have at least pointed you toward the admin and the job scheduler.

Azure WebJob Logging/Emailing

I've converted a console app into a scheduled WebJob. All is working well, but I'm having a little trouble figuring out how to accomplish the error logging/emailing I'd like to have.
1.) I am using Console.WriteLine and Console.Error.WriteLine to create log messages. I see these displayed in the portal when I go to WebJob Run Details. Is there any way to have these logs saved to files somewhere? I added my storage account connection string as AzureWebJobsDashboard and AzureWebJobsStorage. But this appears to have just created an "azure-webjobs-dashboard" blob container that only has a "version" file in it.
2.) Is there a way to get line numbers to show up for exceptions in the WebJob log?
3.) What is the best way to send emails from within the WebJob console app? For example, if a certain condition occurs, I may want to have it send me and/or someone else (depending on what the condition is) an email along with logging the condition using Console.WriteLine or Console.Error.WriteLine. I've seen info on triggering emails via a queue or triggering emails on job failure, but what is the best way to just send an email directly in your console app code when it's running as a WebJob?
How is your job being scheduled? It sounds like you're using the WebJobs SDK - are you using the TimerTrigger for scheduling (from the Extensions library)? That extensions library also contains a new SendGrid binding that you can use to send emails from your job functions. We plan on expanding on that to also facilitate failure notifications like you describe, but it's not there yet. Nothing stops you from building something yourself however, using the new JobHostConfiguration.Tracing.Trace to plug in your own TraceWriter that you can use to catch errors/warnings and act as you see fit. All of this is in the beta1 pre-release.
Using that approach of plugging in a custom TraceWriter, I've been thinking of writing one that allows you to specify an error threshold/sliding window, and if the error rate exceeds the threshold, an email or other notification will be sent. All the pieces are there for this, I just haven't done it yet :)
Regarding logging, the job logs (including your Console.WriteLines) are actually written to disk in your Web App (details here). You should be able to see them if you browse your site log directory. However, if you're using the SDK and Dashboard, you can also use the TextWriter/TraceWriter bindings for logging. These logs will be written to your storage account and will show up in the Dashboard Functions page per invocation. Here's an example.
1.) Logs to files: you can use a custom TraceWriter (https://gist.github.com/aaronhoffman/3e319cf519eb8bf76c8f3e4fa6f1b4ae).
2.) Exception stack trace line numbers: you will need to make sure your project is built with debug info set to "full" (more info: http://aaron-hoffman.blogspot.com/2016/07/get-line-numbers-in-exception-stack.html).
3.) Emails: SendGrid, Amazon Simple Email Service (SES), etc.