We have a CloudFormation stack that we want to provide to our clients. When they run the stack, we want to receive some output values directly, i.e. we don't want to require them to send us the output themselves. Our first thought was to use SNS and the notification capabilities of CloudFormation, but it seems the topic must be in the account running the template and can't be in another account. We also considered subscribing to our existing SNS topic as part of the template, but that doesn't cause a message to be sent.
We realize that CloudFormation is a resource-creation tool, but we think there must be a way to get the info relayed to us automatically. It doesn't have to be SNS. Any ideas on how we might be able to do this?
Update your CloudFormation template to contain a Lambda function and a CloudWatch Events rule that runs every 5 minutes on a cron schedule.
Give the Lambda IAM permissions to query the stack and read any output values you require.
When the Lambda triggers, query the data you need and send it to yourself however you see fit, e.g. an HTTP POST to an API you own.
To finish up, the Lambda should call the CloudWatch Events API to disable the rule so this code doesn't run again.
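A hedged sketch of such a self-disabling Lambda (the stack name, rule name and callback URL are assumptions, not real values):

```python
import json
import urllib.request

STACK_NAME = "ClientStack"                          # assumed stack name
RULE_NAME = "report-stack-outputs"                  # assumed CloudWatch rule name
CALLBACK_URL = "https://example.com/stack-outputs"  # hypothetical endpoint you own

def outputs_from_stack(stack):
    """Flatten CloudFormation's Outputs list into a plain dict."""
    return {o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])}

def handler(event, context):
    import boto3  # available in the Lambda runtime

    # Read the stack's output values
    stack = boto3.client("cloudformation").describe_stacks(
        StackName=STACK_NAME)["Stacks"][0]

    # HTTP POST the outputs to an endpoint we control
    req = urllib.request.Request(
        CALLBACK_URL,
        data=json.dumps(outputs_from_stack(stack)).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

    # Disable the rule so the function never reports twice
    boto3.client("events").disable_rule(Name=RULE_NAME)
```

The Lambda's execution role would need `cloudformation:DescribeStacks` and `events:DisableRule` for this to work.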
You should consider whether all this offsets the effort of simply saying to your client "please send us details of x, y and z". If you have 10 clients, probably not; if you have 1000 clients, then possibly.
I need to write a Discord bot using PowerShell. The rules are as follows:
1. The bot must be able to read messages with the prefix 'psdo '
2. The bot must be able to get the reactions of messages
3. The bot must be able to send messages
4. The bot must be able to edit messages
However, there is no way to do so with webhooks as far as I know. I have a second Discord account and could create an application, but I need an entry point: some method to do these things. I need to do this strictly in PowerShell. Is there a way to accomplish all of the rules above? I only need the code for these, and then I can handle the rest myself.
PS - I have a server where I am the owner, so I can do anything in the server I'm making.
Email logs only if there was an exception at any point in the run.
I only want this for a specific robot, not all robots under the Management Console.
I know there is an option under the Management Console, but that emails for every robot and every log. I don't want that. Thanks
Add a Try step as the very first action of your robot. In the lower branch, add any action to log your error (write file, send email).
Configure each action that may fail accordingly.
The special robot property Robot.executionErrors will hold the relevant error message.
That being said, I would rather rely on Kofax RPA's logging capabilities: any error gets logged to the logging database. You can then use another robot to fetch those entries and send email messages. This allows you to use a single robot for sending out email notifications instead of adding the above steps to each one of them.
I have deployed a web service on AWS EC2 instances.
I have also implemented a REST call /getStatus which returns the status of the modules in my service in JSON format, e.g. the connection status of the DB, the ActiveMQ cache status, etc.
I want a way to create an automatic email trigger which will send mail when any issue is found in the response of the /getStatus REST call.
I am looking at whether it's possible using CloudWatch, but any other suggestions are welcome.
One solution is to make the endpoint return an HTTP status code indicating that something isn't correct (like a 500) and then set up a Route53 Health Check with e-mail notifications (using SNS).
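Under that approach the health check reads only the HTTP status code, so the endpoint itself must map module health onto it. A minimal sketch of that mapping (module names and checks are made up):

```python
import json

def check_modules():
    # Hypothetical probes; replace with real DB / ActiveMQ / cache checks
    return {"db": True, "activemq": True, "cache": True}

def get_status_response():
    """Return (http_status_code, json_body) for the /getStatus endpoint."""
    status = check_modules()
    # A 500 here is what makes the Route53 health check register a failure
    code = 200 if all(status.values()) else 500
    return code, json.dumps(status)
```

The JSON body is still there for humans; the monitoring path only needs the code.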
The basic procedure for configuring email alerts is pretty straightforward.
If you want detailed instructions, this guide covers how to set up AWS email alerts upon resource status change and includes a few additional steps to refine the reports to be a bit more user-friendly and sent directly to a third-party messenger service.
The workflow will look like this:
1. Create a Route53 Health Check;
2. Route53 initializes Health Checker nodes in various regions;
3. The Health Checkers ping the specified URL;
4a. Status is OK if a TCP connection is established within 10 seconds and an HTTP status code of 2xx or 3xx is retrieved within 2 seconds;
OR
4b. Status is FAILURE otherwise: the TCP connection fails or times out, the HTTP status code is 4xx or 5xx, or the page is too slow (yes, a slow 200 response can cause failure);
5. The Health Checker nodes retry the endpoint as configured;
6. A CloudWatch alarm is triggered on the Health Check status change;
7. The alarm is delivered to an AWS SNS topic;
8. AWS SNS notifies the topic subscribers.
Advanced configuration may be applied to enhance notification contents and delivery method per above guide.
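Steps 1, 6 and 7 above could be provisioned with boto3 roughly like this (domain, topic name and e-mail address are made up; running it requires AWS credentials):

```python
import uuid

def provision_status_monitoring(domain="service.example.com",
                                alert_email="oncall@example.com"):
    """Wire up: Route53 health check -> CloudWatch alarm -> SNS e-mail."""
    import boto3  # needs AWS credentials when actually run

    route53 = boto3.client("route53")
    cloudwatch = boto3.client("cloudwatch")
    sns = boto3.client("sns")

    # 1. Health check against the /getStatus endpoint
    hc = route53.create_health_check(
        CallerReference=str(uuid.uuid4()),
        HealthCheckConfig={
            "Type": "HTTPS",
            "FullyQualifiedDomainName": domain,
            "ResourcePath": "/getStatus",
            "RequestInterval": 30,
            "FailureThreshold": 3,
        },
    )["HealthCheck"]

    # SNS topic with an e-mail subscriber
    topic_arn = sns.create_topic(Name="service-status-alerts")["TopicArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint=alert_email)

    # CloudWatch alarm on the health-check status metric (1 = healthy)
    cloudwatch.put_metric_alarm(
        AlarmName="service-status-failed",
        Namespace="AWS/Route53",
        MetricName="HealthCheckStatus",
        Dimensions=[{"Name": "HealthCheckId", "Value": hc["Id"]}],
        Statistic="Minimum",
        Period=60,
        EvaluationPeriods=1,
        Threshold=1.0,
        ComparisonOperator="LessThanThreshold",
        AlarmActions=[topic_arn],
    )
```

The console walkthrough in the guide achieves the same wiring by hand.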
I work for the team that develops Axibase Time Series Database (atsd).
I would suggest a CloudWatch event that runs on a schedule you decide (e.g. every 5 minutes).
The event would call a Lambda function, which would make the /getStatus call and decide whether an email needs to be sent. If it does, I would further suggest AWS SES to send a custom-formatted email with the appropriate alerts to the person(s) who are supposed to get them.
Using the above tools would be 'serverless', would cost very little to nothing, and has the benefit of not running on an instance you have to worry about.
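A hedged sketch of that Lambda (the URL, addresses and module names are assumptions; the sender address would have to be verified in SES):

```python
import json
import urllib.request

STATUS_URL = "https://example.com/getStatus"  # assumed service URL
ALERT_FROM = "alerts@example.com"             # must be verified in SES
ALERT_TO = ["oncall@example.com"]

def failed_modules(status):
    """Return only the modules whose check reported a problem."""
    return {k: v for k, v in status.items() if not v}

def handler(event, context):
    import boto3  # available in the Lambda runtime
    try:
        with urllib.request.urlopen(STATUS_URL, timeout=5) as resp:
            status = json.loads(resp.read())
        failures = failed_modules(status)
    except Exception as exc:  # an unreachable endpoint counts as a failure too
        failures = {"endpoint": str(exc)}

    if failures:
        boto3.client("ses").send_email(
            Source=ALERT_FROM,
            Destination={"ToAddresses": ALERT_TO},
            Message={
                "Subject": {"Data": "getStatus alert"},
                "Body": {"Text": {"Data": json.dumps(failures, indent=2)}},
            },
        )
```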
I have been working on AWS for the last month trying to scale our application's email sending; we were using Mailchimp and decided to migrate our servers to Amazon. Our application currently generates between 3000-4000 emails a day (not all at once, and over different time spans). The issue I'm trying to solve is delivering the emails in the least time possible (the SES send rate is 14 mails/s).
What I've been able to do is: Application -> SQS -> Lambda pull (scheduled once per minute, pulling 10 messages) -> SES -> SNS -> Application.
The Lambda schedule is generated with a CloudWatch rule; I've seen that you can target events, but I haven't been able to do it =(
I'm trying to find the correct approach, but I haven't been able to put all my thoughts together.
Can anyone help me? =)
Firstly, if you want to increase the maximum send rate, you may open a case in the Support Center.
Then you could set up a CloudWatch alarm on your SQS NumberOfMessagesSent metric and call an SNS topic that triggers a Lambda. You can trigger this Lambda if NumberOfMessagesSent is greater than a certain value, e.g. 1, 10 or the maximum SES send rate. The Lambda could call SES and send emails for the newly added messages. The method I'm proposing is SNS -> Lambda -> SES. With this method, you don't need to rely on a schedule.
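A sketch of the Lambda stage of that flow (the queue URL, sender address and the `{"to", "subject", "text"}` message format are assumptions):

```python
import json

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/outbound-email"  # assumed
SENDER = "noreply@example.com"  # must be verified in SES

def to_ses_message(body):
    """Map a queued message ({"to", "subject", "text"}) to SES's Message shape."""
    return {
        "Subject": {"Data": body["subject"]},
        "Body": {"Text": {"Data": body["text"]}},
    }

def handler(event, context):
    import boto3  # available in the Lambda runtime
    sqs = boto3.client("sqs")
    ses = boto3.client("ses")
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10)
        messages = resp.get("Messages", [])
        if not messages:
            break  # queue drained
        for msg in messages:
            body = json.loads(msg["Body"])
            ses.send_email(
                Source=SENDER,
                Destination={"ToAddresses": [body["to"]]},
                Message=to_ses_message(body),
            )
            # Only delete once SES has accepted the message
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=msg["ReceiptHandle"])
```

A production version would also need to throttle itself below the account's SES send rate.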
In our design we have something of a paradox. We have a database of projects. Each project has a status. We have a REST api to change a project from “Ready” status to “Cleanup” status. Two things must happen.
update the status in the database
send out an email to the approvers
Currently the REST API does (1) and, if that is successful, does (2).
But sometimes the email fails to send, and since (1) is already committed, it is not possible to roll back.
I don't want to send the email prior to the commit, because I want to make sure the commit is successful before sending the email.
I thought about undoing step (1), but that is very hard. The status change involves adding new records to the history table, so I would need to delete them. And if another person makes other changes concurrently, the undo might get messed up.
So what can I do? If (2) fails, should I return “200 OK” to the client?
Seems like the best option is to return “500 Server Error” with error message that says “The project status was changed. However, sending the email to the approvers failed. Please take appropriate action.”
Perhaps I should not try to do 1 + 2 in a single operation? But that just puts the burden on the client, which is worse!
Just some random thoughts:
You could have a notification-sent status flag along with a datetime of submission. When an email succeeds the flag flips; if not, it stays. Whenever changes are submitted, your code iterates through ALL unsent notifications and tries to send them. I have no idea what backend DB you are using, but I believe many have the functionality to send emails as well. You could have a scheduled job (SQL Server Agent for MSSQL) that runs hourly and tries to send once the datetime of submission has lapsed by a certain amount, or starts setting off alarms if it keeps failing.
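A minimal sketch of that flag-and-retry idea (the outbox pattern) using sqlite3 from the standard library; table and column names are made up, and send_email is a stub for the real mail call:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE notifications (
        id INTEGER PRIMARY KEY,
        body TEXT NOT NULL,
        submitted_at TEXT NOT NULL,
        sent INTEGER NOT NULL DEFAULT 0
    )
""")

def send_email(body):
    # Placeholder for the real mail call; may raise or return False on failure
    return True

def queue_notification(body):
    # Step (1) and the "email pending" row commit in the same transaction
    with conn:
        conn.execute(
            "INSERT INTO notifications (body, submitted_at) "
            "VALUES (?, datetime('now'))",
            (body,),
        )

def flush_unsent():
    # Called after every change and/or from the hourly job: retry everything unsent
    rows = conn.execute(
        "SELECT id, body FROM notifications WHERE sent = 0").fetchall()
    for row_id, body in rows:
        try:
            if send_email(body):
                with conn:
                    conn.execute(
                        "UPDATE notifications SET sent = 1 WHERE id = ?",
                        (row_id,))
        except Exception:
            pass  # row stays unsent; the next run retries it

queue_notification("Project 42 moved to Cleanup")
flush_unsent()
```

The REST call can then return 200 as soon as the row is committed, because delivery is guaranteed to be retried.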
If it is that insanely important, then maybe you could integrate a third-party service such as SendGrid to run as a backup sending mechanism. That of course would be more $$ though...
Traditionally I've always separated functions like this into a backend worker process that handles this kind of administrative tasking across many different applications. Some notifications get sent out every morning, some every 15 minutes, some are weekly summaries. If I run into a crash-and-burn, I light up the event log, and we are (lucky/unlucky) enough to have server-monitoring tools that alert us on specified application events.