How to get SQL Agent job notifications in RDS (CloudWatch) - T-SQL

We are migrating from on-prem SQL Server to RDS. While migrating SQL Agent jobs to RDS, we plan to get notifications on RDS itself, so we want to replace Exec msdb.dbo.sp_send_dbmail.
I was able to push notifications for SQL Agent job errors on RDS (RDS CloudWatch agent).
But I have another scenario: if my query runs for more than 60 minutes, I want to send a notification on RDS stating "your query has been running for more than 60 minutes, please take a look" (the agent job does not necessarily fail; for failed jobs I can already see notifications).
Example scenario:
DECLARE @msg varchar(8000);
SELECT @msg = some_column FROM table_a;  -- populate @msg from the check query (column name illustrative)
IF @msg IS NOT NULL
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @recipients = @email_id,
        @body = @msg;
END
Which I want to replace with something like:
IF @msg IS NOT NULL
BEGIN
    -- something here that sends a notification to RDS CloudWatch
END
I would like to see the notification on RDS.
Thank you all!!
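One possible direction (a minimal sketch, not a verified recipe: it assumes the instance is configured to publish its SQL Server error log to CloudWatch Logs, and that the job's login can use RAISERROR ... WITH LOG, which requires ALTER TRACE):

-- Informational (severity 10) message: it does not fail the job step.
-- WITH LOG writes it to the SQL Server error log, which RDS can export
-- to CloudWatch Logs. LONG_RUNNING_QUERY is a made-up marker token.
IF @msg IS NOT NULL
BEGIN
    RAISERROR ('LONG_RUNNING_QUERY: query has been running for more than 60 minutes: %s',
               10, 1, @msg) WITH LOG;
END

From there, a CloudWatch Logs metric filter matching LONG_RUNNING_QUERY plus an SNS-backed alarm can deliver the notification itself.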


AWS Athena Federated Query - GENERIC_USER_ERROR when running DB query for PostgreSQL

Hi all,
I am trying to execute queries on a PostgreSQL database I created in AWS.
I added a data source to Athena: I created the data source for PostgreSQL and created the Lambda function.
In the Lambda function I set:
the default connection string
spill_bucket and spill_prefix (I set 'athena-spill' for both; in the S3 console I cannot see any athena-spill bucket)
the security group --> I set the security group I created to access the db
the subnet --> I set one of the database subnets
I deployed the Lambda function, but I received an error and had to add a new environment variable containing the connection string, named 'dbname_connection_string'.
After adding this new env variable I am able to see the database in Athena, but when I try to execute any query on this database, such as:
select * from tests_summary limit 10;
I receive this error after running the query:
GENERIC_USER_ERROR: Encountered an exception[com.amazonaws.SdkClientException] from your LambdaFunction[arn:aws:lambda:eu-central-1:449809321626:function:data-production-athena-connector-nina-lambda] executed in context[retrieving meta-data] with message[Unable to execute HTTP request: Connect to s3.eu-central-1.amazonaws.com:443 [s3.eu-central-1.amazonaws.com/52.219.170.25] failed: connect timed out]
This query ran against the "public" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 3366bd80-143e-459c-a4da-5350b5ab4a77
What could be causing the problem?
Thanks a lot!
Root Cause:
The Lambda function's VPC has no internet connection, so the Lambda connector cannot reach S3 and the request times out.
Solution:
Add a VPC Gateway Endpoint for S3 (select com.amazonaws.eu-central-1.s3) in the VPC associated with the Lambda function.

MDS import data queue

I am following this guidance: https://www.mssqltips.com/sqlservertutorial/3806/sql-server-master-data-services-importing-data/
The instructions say after we load data into the staging tables, we go into the MDS integration screen and select "START BATCHES".
Is this a manual override to begin the process, or how do I automatically queue up a batch to begin?
Thanks!
Alternative way to run the staging process
After you load the staging table with the required data, call/execute the staging UDP.
Basically, staging UDPs are separate stored procedures, one per entity in the MDS database (automatically created by MDS), that follow the naming convention:
stg.udp_<EntityName>_Leaf
You have to provide values for some of its parameters. Here is sample code showing how to call these:
USE [MDS_DATABASE_NAME]
GO
EXEC [stg].[udp_entityname_Leaf]
@VersionName = N'VERSION_1',
@LogFlag = 1,
@BatchTag = N'batch1',
@UserName = N'domain\user'
GO
For more details look at:
Staging Stored Procedure (Master Data Services).
Do remember that the @BatchTag value has to match the value that you initially populated in the staging table.
Automating the Staging process
The simplest way for you to do that would be to schedule a job in SQL Agent which would execute something like the code above to call the staging UDP.
Please note that you would need to get creative about figuring out how the Job will know the correct Batch Tag.
That said, a lot of developers just create a single SSIS package which loads the data into the staging table (as step 1) and then executes the staging UDP (as the final step).
This SSIS package is then executed through a scheduled SQL Agent job.
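For illustration, a minimal sketch of such a job step (the Customer entity name, tag format, and service account are hypothetical):

-- Derive the batch tag that the loading step also used, then start staging.
DECLARE @BatchTag nvarchar(50) = N'batch_' + CONVERT(nvarchar(8), GETDATE(), 112);

EXEC [stg].[udp_Customer_Leaf]
    @VersionName = N'VERSION_1',
    @LogFlag = 1,
    @BatchTag = @BatchTag,
    @UserName = N'domain\svc_mds';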

Google Cloud SQL for PostgreSQL `work_mem`

Hi there. I want to tune a Google Cloud SQL for PostgreSQL instance. Currently, I'm trying to eliminate a sorting speed degradation:
Sort Method: external merge Disk: 39592kB
Right now work_mem is set to 4MB, and it seems that is too small. After reading the docs, I didn't find a way to change this setting. It seems impossible both via the web GUI and via the command line:
$ gcloud sql instances patch reporting-dev --database-flags work_mem=128MB
The following message will be used for the patch API method.
{"project": "xxx-153410", "name": "reporting-dev", "settings": {"databaseFlags": [{"name": "work_mem", "value": "128MB"}]}}
WARNING: This patch modifies a value that requires your instance to be
restarted. Submitting this patch will immediately restart your
instance if it's running.
Do you want to continue (Y/n)? Y
ERROR: (gcloud.sql.instances.patch) HTTPError 404: Flag requested cannot be set.
Any thoughts on that?
You can change it per user or per database:
alter database db1 set work_mem='64MB';
alter user stan set work_mem='32MB';
The user-level setting overrides the database-level setting, and the database-level setting overrides postgresql.conf / cluster settings. Both override alter system set ..., which you might not be able to use due to security settings.
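To confirm the effective value, reconnect and check from the session; a per-session override is also possible for one-off heavy sorts (128MB here simply mirrors the flag attempted above):

-- Show the work_mem value in effect for the current session.
SHOW work_mem;

-- Override only for this session, e.g. before one large reporting query.
SET work_mem = '128MB';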

AWS Unload Error: 'The bucket you are attempting to access must be addressed using the specified endpoint.'

I am running the following query in SQL. I am trying to unload data from Redshift to a bucket in my personal S3 account:
UNLOAD ('SELECT * FROM table WHERE
UPPER(description) LIKE \'%something%\'')
TO 's3://mybucketname/sometextname.txt' CREDENTIALS
'aws_access_key_id=xxx;aws_secret_access_key=xxx'
PARALLEL OFF
When I do this, I get the following error:
The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.,Status 301,Error PermanentRedirect,Rid AE9F82CD626A5B05,ExtRid 1hl5HHhv9rkaq0Vw7fB0kpm2WO1uOmy4MmXq
Is my s3 path correct? Do I need to change some permissions for my s3 account or bucket?
Unloading to a bucket in a different region is now supported via the REGION parameter: https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html
unload ('select * from category')
to 's3://your-bucket/your-prefix'
iam_role 'arn:aws:iam::xxxxxxxx:role/redshift-role'
region 'us-west-2';
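Applied to the original statement, the fix would look something like this sketch (us-east-1 is an assumed bucket region; use the region your bucket actually lives in):

-- REGION must name the bucket's region, which may differ from the cluster's.
unload ('select * from table where upper(description) like \'%something%\'')
to 's3://mybucketname/sometextname.txt'
iam_role 'arn:aws:iam::xxxxxxxx:role/redshift-role'
region 'us-east-1'
parallel off;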

Can I CREATE TRIGGER in an RDS DB?

I'm trying to create a trigger on a table in my Amazon RDS database, and I can't seem to make it happen.
I tried to create a trigger on a table in the MySQL client I use (Navicat), and got an error saying that I needed the SUPER privilege to do so. After some searching, I found that you could SET GLOBAL log_bin_trust_function_creators = 1 to get around this. I tried that using these instructions: http://getasysadmin.com/2011/06/amazon-rds-super-privileges/ (and then restarted the DB server for good measure), but no luck.
I also tried creating the trigger and setting the variable via the mysql command line to make sure Navicat wasn't adding anything unwanted to my SQL commands, but that failed too. It also seems from searching that there's no way to grant yourself the SUPER privilege.
So ... is creating a trigger possible in RDS?
It's easy!
Open the RDS web console.
Open the "Parameter Groups" tab.
Create a new Parameter Group. In the dialog, select the MySQL family compatible with your MySQL database version, give it a name, and confirm.
Select the just-created Parameter Group and issue "Edit Parameters".
Look for the parameter 'log_bin_trust_function_creators' and set its value to '1'.
Save the changes.
Open the "Instances" tab. Expand your MySQL instance and issue the "Instance Action" named "Modify".
Select the just-created Parameter Group and enable "Apply Immediately".
Click on "Continue" and confirm the changes.
Don't forget: open the "Instances" tab again, expand your MySQL instance, and issue the "Instance Action" named "Reboot".
Via - http://techtavern.wordpress.com/2013/06/17/mysql-triggers-and-amazon-rds/
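Once the instance has rebooted with the new parameter group, a minimal trigger like the following (table and column names are made up) should be accepted without the SUPER error:

DELIMITER ;;
-- Stamp each new row with its creation time.
CREATE TRIGGER trg_orders_created
BEFORE INSERT ON orders
FOR EACH ROW
BEGIN
    SET NEW.created_at = NOW();
END;;
DELIMITER ;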
No, it is actually not impossible; it just takes far too much extra work.
First off, it seems to be impossible to apply super privileges to a default parameter group.
So what I had to do was create a new DB parameter group, either through the console or the CLI.
What I found was that the key is the region: the default region was not the region I was trying to use, so I had to pass a --region parameter to apply the change to the group in the correct region, where I was deploying my DB instance:
rds-create-db-parameter-group --db-parameter-group-name allow-triggers --description 'parameter group to allow triggers' --region your-region
Next I had to create a DB instance which used that parameter group (again, through the console or the CLI):
rds-create-db-instance
Then I had to modify the parameter group to allow log_bin_trust_function_creators, which is only accomplishable through the CLI:
rds-modify-db-parameter-group --db-parameter-group-name yourgroupname --region yourRegion --parameters 'name=log_bin_trust_function_creators,value=true,method=immediate'
Lastly I had to modify the created DB instance to allow triggers, also CLI-only:
rds-modify-db-instance --db-instance-identifier your-db-instance-id --db-parameter-group-name allow-triggers --apply-immediately
In addition to the parameter group modification that others have already mentioned, there is a further challenge that arises when using a MySQL database dump (via mysqldump) to create triggers in an AWS RDS instance. You may get a message like this:
ERROR 1227 (42000) at line 875: Access denied; you need (at least one of) the SUPER privilege(s) for this operation
This happens because the dump contains "definer" entries with a username different from your RDS master username. One solution is to replace the definer username with your RDS master username. Another solution is not to use mysqldump to create your database.
See this blog post for more information:
http://www.percona.com/blog/2014/07/02/using-mysql-triggers-and-views-in-amazon-rds/
EDIT: It turns out Multi-AZ for MySQL uses "physical replication" and not logical replication, so this may not be correct. At least that's what their documentation says: https://aws.amazon.com/rds/details/multi-az/ - I have asked on their forums what this means, but did not get a reply. What's weird is that my RDS Multi-AZ instance claims it's a "master in a replication setup", even though I have no read replicas.
As the question has already been addressed, this is a comment more than an answer:
I'm surprised nobody takes into account why this feature is not available as a default. Amazon wouldn't disable it just to make people's lives harder.
In a master/slave replication it can be dangerous to use stored procedures and triggers that modify data (as in perform queries other than SELECT).
Please have a read below before disabling this restriction in a master/slave setup, which Amazon RDS is when you use Multi-AZ (and you should, for production at least).
http://dev.mysql.com/doc/refman/5.6/en/stored-programs-logging.html
I followed the above, but it did not work for me. I spent almost a day figuring out why, and now I know. I am listing the steps I followed to make it work.
Created a MySQL parameter group using the AWS web console (make sure it has the same family as the default parameter group; earlier I had created a parameter group with a different family, so it did not work). This is a critical step.
Using the AWS web console, changed the value of log_bin_trust_function_creators to 1.
Applied the new parameter group. This is another critical step:
rds-modify-db-instance -I $AWS_ACCESS_KEY -S $AWS_SECRET_KEY --region $EC2_REGION \
--db-instance-identifier $DB_INSTANCE \
--db-parameter-group-name $DB_GROUPNAME \
--apply-immediately
You need RDSCli from - http://s3.amazonaws.com/rds-downloads/RDSCli.zip
Then verify that the parameter group is associated with your DB instance:
rds-describe-db-instances \
-I $AWS_ACCESS_KEY \
-S $AWS_SECRET_KEY \
--region $EC2_REGION
And then reboot before you try creating the trigger:
rds-reboot-db-instance \
-I $AWS_ACCESS_KEY \
-S $AWS_SECRET_KEY \
--region $EC2_REGION \
--db-instance-identifier $DB_INSTANCE
Remember to set the environment variables below before you try the above commands:
export AWS_ACCESS_KEY='*****'
export AWS_SECRET_KEY='*****'
export EC2_REGION='region'
export AWS_RDS_BIN="$AWS_RDS_HOME/bin"
export PATH=$PATH:$AWS_RDS_BIN
export JAVA_HOME=c:/jdk1.6_25  # in most cases this is already set
Thanks to http://blog.iprofs.nl/2013/03/20/rds-database-triggers-for-mysql/ for the full details.
AWS lays out how to enable functions and triggers in this post:
Create a DB parameter group for your MySQL instance:
Sign in to the AWS Management Console and open the Amazon RDS console.
In the navigation pane, choose Parameter Groups.
Choose Create Parameter Group. The Create Parameter Group window appears.
For Parameter Group Family, choose the parameter group family.
For Group Name, type the name of the new DB parameter group.
For Description, type a description for the new DB parameter group.
Choose Create.
Important
After you create a DB parameter group, you should wait at least 5 minutes before creating your first DB instance that uses that DB parameter group.
For more information about creating a DB parameter group, see Working with DB Parameter Groups - Creating a DB Parameter Group.
Modify the newly created parameter group and set the following parameter:
In the navigation pane, choose Parameter Groups. The available DB parameter groups appear in a list.
In the list, select the parameter group you want to modify.
Choose Edit Parameters and set the following parameter to the specified value:
log_bin_trust_function_creators = 1
Choose Save Changes.
Important
After you modify a DB parameter group, you should wait at least 5 minutes before creating your first DB instance that uses that DB parameter group.
For information about modifying a DB parameter group, see Working with DB Parameter Groups - Modifying Parameters in a DB Parameter Group.
Associate your RDS DB instance with the new or modified DB parameter group:
In the navigation pane, choose Instances.
Select the DB instance you want to associate with a DB parameter group.
On the Instance Actions menu, choose Modify.
In the Modify DB Instance dialog box, under Database Options, choose the parameter group you want to associate with the DB instance. Changing this setting does not result in an outage. The parameter group name changes immediately, but the actual parameter changes are not applied until you reboot the instance without failover.
Apply changes by rebooting the instance.
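After the reboot, a quick check from any MySQL client (not part of the AWS write-up, just a sanity check) confirms the parameter took effect:

-- Should report ON (or 1) once the parameter group change is active.
SHOW GLOBAL VARIABLES LIKE 'log_bin_trust_function_creators';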
For me, it worked as @foxybagga's answer suggests, but I needed to update the generated SQL dump (from MySQL Workbench) to have CURRENT_USER as the DEFINER,
i.e.:
DELIMITER ;;
/*!50003 CREATE*/ /*!50017 DEFINER=CURRENT_USER*/ /*!50003 TRIGGER `sod_db`.`date`
BEFORE INSERT ON `sod_db`.`CashOut`
FOR EACH ROW
BEGIN
SET NEW.created = NOW();
END */;;
DELIMITER ;
/*!50003 SET sql_mode = @saved_sql_mode */ ;
/*!50003 SET character_set_client = @saved_cs_client */ ;
/*!50003 SET character_set_results = @saved_cs_results */ ;
/*!50003 SET collation_connection = @saved_col_connection */ ;
I hope this helps someone having the same problem.