Get a list of task names on a push queue on Google App Engine (REST)

I have been looking for the last two days for a way to retrieve a list of tasks on a Google App Engine push queue, either programmatically or with a command-line command (either the gcloud tool or a curl call against a REST API). I have not been able to find anything in the documentation about this. Does anyone know of a way to retrieve a list of the task names on a push queue, preferably through a command-line request? Any help or suggestions on where to look are very much appreciated.
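One option worth checking, assuming the queue can be managed through the newer Cloud Tasks API (which fronts App Engine push queues): the tasks can be listed from the command line with something like gcloud tasks list --queue=QUEUE, or from code with the Cloud Tasks client library. Below is a rough Python sketch along those lines; the project, location and queue names are placeholders, and the exact client-library surface may differ by version:

from google.cloud import tasks_v2

# Placeholders - substitute your own project, queue location and queue name.
project = "my-project"
location = "us-central1"
queue = "my-push-queue"

client = tasks_v2.CloudTasksClient()
parent = client.queue_path(project, location, queue)

# list_tasks returns a pager; each task's name includes the full resource path.
for task in client.list_tasks(parent=parent):
    print(task.name)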

Related

Extending Azure multi-stage YAML pipeline logs

I'm trying to log the completion of each stage of a multi-stage YAML pipeline with some custom details.
How can I add custom details to the https://dev.azure.com//_settings/audit logs?
Is there a way to persist this information in SQL DB or any other persistent storage option?
How can I subscribe to these log events?
How can I add custom details to the https://dev.azure.com//_settings/audit logs?
I'm afraid this is not something you can achieve at the moment.
The format of the details text is defined and fixed by our backend classes. Once the corresponding action occurs, the event method is called alongside the action class to generate the entry and record it on the audit page. All of this happens on the backend, and we have not exposed this operation to users so far.
That said, I think this is a good idea that we may consider expanding on, because customized details would make the entries more readable for your company. You can raise your idea here, then vote and comment on it. Our Product Group reviews these suggestions regularly and considers taking them into the development roadmap depending on their priority (votes).
How can I subscribe to these log events?
One important thing I need to let you know first: the audit log is only kept for 90 days. It is cleared after 90 days, including from our backend database. In a nutshell, if you want audit logs older than 90 days, there is unfortunately no way to restore them.
So I suggest you configure a scheduled pipeline with a PowerShell task.
In this PowerShell task, call this API to retrieve the log and then store it in any file format you want, e.g. .csv, .json, etc.
For the schedule you can use any period you like, as long as it is less than 90 days, so that you do not lose any audit event log.
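As a rough illustration of that "call the API and store it" step (the answer suggests a PowerShell task; this is just the same idea sketched in Python), assuming a personal access token that is allowed to read the audit log and using what I believe is the Audit Log query endpoint; the organization name, token and api-version are placeholders:

import json
import requests

# Placeholders - adjust to your organization, PAT and preferred api-version.
organization = "my-org"
pat = "MY_PERSONAL_ACCESS_TOKEN"
url = f"https://auditservice.dev.azure.com/{organization}/_apis/audit/auditlog"

# Azure DevOps accepts a PAT as the password of a basic-auth pair.
response = requests.get(url, params={"api-version": "7.1-preview.1"}, auth=("", pat))
response.raise_for_status()

# Persist the payload (audit entries plus paging info) so it survives the
# 90-day retention window; a scheduled pipeline could publish this file.
with open("audit.json", "w") as f:
    json.dump(response.json(), f, indent=2)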
Is there a way to persist this information in SQL DB or any other persistent storage option?
If you can use a different database, I would suggest considering a document storage solution such as CouchDB, DynamoDB or MongoDB.
Depending on what you actually use, you can run the corresponding import commands from a command line task on a self-hosted agent.
For example, I use MongoDB, and I can run the command below to store the JSON file that the API call above generated:
C:\>mongodb\bin\mongoimport --jsonArray -d mer -c docs --file audit20191231.json

Is there any way to rerun Data Factory slices using PowerShell cmdlets

Is there any way to rerun failed Azure Data Factory slices using the PowerShell cmdlets?
As of now I am rerunning the slices manually from the diagram page, but this is not helping much because I have more than 500 slices, all scheduled to run every week.
Just to give some context: four days ago my database server went down, all the slices failed to execute, and now I want to rerun all the slices again.
I would also like to know whether there is any way to get a failure notification: if a slice fails to execute, I would like to receive an email or something so that I am notified.
Thanks in advance.
You may also try the script mentioned in the link and let us know how it goes.
For more information, you may refer to the article on monitoring and managing Azure Data Factory pipelines.
Last time a similar issue happened to me I ended up using the "Monitor & Manage" tool from the Azure portal.
You can use the grid view to select your failed slices, and there is a very useful "Rerun" button in the top left corner of the grid.
To get email alerts when a slice fails, you can add one using the "Metrics and operations" tool.
The setting is quite well hidden, but it exists :)

Google Cloud SQL operations callback?

I currently have an application which triggers import jobs to Google Cloud SQL using their API:
https://cloud.google.com/sql/docs/admin-api/v1beta4/instances/import
This works great. However, this is only a request to import an SQL file. I have to check that the request was successful a minute or two afterwards.
What I would like, is to somehow register a callback to notify my application when the operation is complete. Then I can delete the bucket item and mark the data as persisted.
I have no idea if this is possible, but would be grateful for any advice. Perhaps the Pub/Sub API could be used for this, but so far I have been unable to find any documentation on how this would be done.
There's currently no out of the box way to do this. You need to poll the operation status to determine when it's finished.
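To illustrate the polling approach, here is a minimal sketch, assuming the import was started through the v1beta4 Admin API (whose import call returns an Operation), the google-api-python-client library is installed, and application-default credentials are available; the project and operation names are placeholders:

import time
from googleapiclient import discovery

# Placeholders - substitute your own project and the operation name returned
# by the instances.import response.
project = "my-project"
operation_name = "OPERATION_NAME_FROM_IMPORT_RESPONSE"

service = discovery.build("sqladmin", "v1beta4")

# Poll until the import operation reports DONE, then do the follow-up work
# (delete the bucket object, mark the data as persisted, etc.).
while True:
    op = service.operations().get(project=project, operation=operation_name).execute()
    if op.get("status") == "DONE":
        if "error" in op:
            raise RuntimeError(f"Import failed: {op['error']}")
        break
    time.sleep(30)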

Azure WebJob Logging/Emailing

I've converted a console app into a scheduled WebJob. All is working well, but I'm having a little trouble figuring out how to accomplish the error logging/emailing I'd like to have.
1.) I am using Console.WriteLine and Console.Error.WriteLine to create log messages. I see these displayed in the portal when I go to WebJob Run Details. Is there any way to have these logs saved to files somewhere? I added my storage account connection string as AzureWebJobsDashboard and AzureWebJobsStorage. But this appears to have just created an "azure-webjobs-dashboard" blob container that only has a "version" file in it.
2.) Is there a way to get line numbers to show up for exceptions in the WebJob log?
3.) What is the best way to send emails from within the WebJob console app? For example, if a certain condition occurs, I may want to have it send me and/or someone else (depending on what the condition is) an email along with logging the condition using Console.WriteLine or Console.Error.WriteLine. I've seen info on triggering emails via a queue or triggering emails on job failure, but what is the best way to just send an email directly in your console app code when it's running as a WebJob?
How is your job being scheduled? It sounds like you're using the WebJobs SDK - are you using the TimerTrigger for scheduling (from the Extensions library)? That extensions library also contains a new SendGrid binding that you can use to send emails from your job functions. We plan on expanding on that to also facilitate failure notifications like you describe, but it's not there yet. Nothing stops you from building something yourself however, using the new JobHostConfiguration.Tracing.Trace to plug in your own TraceWriter that you can use to catch errors/warnings and act as you see fit. All of this is in the beta1 pre-release.
Using that approach of plugging in a custom TraceWriter, I've been thinking of writing one that allows you to specify an error threshold/sliding window, and if the error rate exceeds that threshold, an email or other notification will be sent. All the pieces are there for this, I just haven't done it yet :)
Regarding logging, the job logs (including your Console.WriteLines) are actually written to disk in your Web App (details here). You should be able to see them if you browse your site log directory. However, if you're using the SDK and Dashboard, you can also use the TextWriter/TraceWriter bindings for logging. These logs will be written to your storage account and will show up in the Dashboard Functions page per invocation. Here's an example.
Logs to files: You can use a custom TraceWriter https://gist.github.com/aaronhoffman/3e319cf519eb8bf76c8f3e4fa6f1b4ae
Exception Stack Trace Line Numbers: You will need to make sure your project is built with debug info set to "full" (more info http://aaron-hoffman.blogspot.com/2016/07/get-line-numbers-in-exception-stack.html)
Sending emails: SendGrid, Amazon Simple Email Service (SES), etc.
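For the "send an email directly from the console app" part, both answers come down to calling a mail service such as SendGrid from your own code. The WebJob itself is C#, but the underlying call is just SendGrid's v3 Mail Send REST endpoint, so here is a minimal language-agnostic sketch of that call (shown in Python; the API key and addresses are placeholders):

import requests

# Placeholder - use your own SendGrid API key and addresses.
SENDGRID_API_KEY = "SG.xxxxx"

def send_alert(subject, body):
    # SendGrid v3 Mail Send endpoint; the same call can be made from C# via
    # the SendGrid client library or the WebJobs SendGrid binding.
    payload = {
        "personalizations": [{"to": [{"email": "ops@example.com"}]}],
        "from": {"email": "webjob@example.com"},
        "subject": subject,
        "content": [{"type": "text/plain", "value": body}],
    }
    resp = requests.post(
        "https://api.sendgrid.com/v3/mail/send",
        headers={"Authorization": f"Bearer {SENDGRID_API_KEY}"},
        json=payload,
    )
    resp.raise_for_status()

# Example: call this when a certain condition occurs, alongside Console logging.
# send_alert("WebJob condition hit", "Condition X occurred at ...")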

Pattern for Google Alerts-style service

I'm building an application that is constantly collecting data. I want to provide a customizable alerts system for users where they can specify parameters for the types of information they want to be notified about. On top of that, I'd like the user to be able to specify the frequency of alerts (as they come in, daily digest, weekly digest).
Are there any best practices or guides on this topic?
My instincts tell me queues and workers will be involved, but I'm not exactly sure how.
I'm using Parse.com as my database and will also likely index everything with Lucene-style search. So that opens up the possibility of a user specifying a query string to specify what alerts s/he wants.
If you're using Rails and Heroku and Parse, we've done something similar. We actually created a second Heroku app that does not have a web dyno -- it just has a worker dyno. That one can still access the same Parse.com account and runs all of its tasks in a rake task, as they describe here:
https://devcenter.heroku.com/articles/scheduler#defining-tasks
We have a few classes that can handle the heavy lifting:
class EmailWorker
  def self.send_daily_emails
    # queries Parse for what it needs, loops through the results, sends emails
  end
end
We also have the scheduler.rake in lib/tasks:
require 'parse-ruby-client'

task :send_daily_emails => :environment do
  EmailWorker.send_daily_emails
end
Our scheduler panel in Heroku is something like this:
rake send_daily_emails
We set it to run every night. Note that the public-facing Heroku web app doesn't do this work but rather the "scheduler" version. You just need to make sure you push to both every time you update your code. This way it's free, but if you ever wanted to combine them it's simple, as they're the same code base.
You can also test it by running heroku run rake send_daily_emails from your dev machine.