Create RDS DB User CloudFormation - aws-cloudformation

As CloudFormation does not natively support creating a DB User for an RDS Database, I am looking for ways to do this via CustomResource. However, even if I write a CustomResource backed by a Lambda function, I do not see an RDS API endpoint that would allow me to add a user to a database instance.
Could anyone suggest potential ways to create a DB user for an Aurora cluster backed by the PostgreSQL 10 database engine?

I do not see an RDS API endpoint that would allow me to add a user to a database instance.
Usually you would set your custom resource to trigger after the RDS instance is created. You can then pass the RDS endpoint URL to the Lambda function using, for example, environment variables.
In practice, a DependsOn attribute on your custom resource can be used to ensure that it triggers only after the RDS instance has been created successfully. It is not really needed if you pass the RDS URL through environment variables, since referencing the instance there already creates an implicit dependency.
Update: example code for a Lambda function which uses pymysql:
MyLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.lambda_handler
    Role: !Ref ExecRoleArn
    Runtime: python3.7
    Environment:
      Variables:
        DB_HOSTNAME: !Ref DbHostname
        DB_USER: !Ref DbMasterUsername
        DB_PASSWORD: !Ref DbMasterPassword
        DB_NAME: !Ref DbName
    VpcConfig:
      SecurityGroupIds: [!Ref SecurityGroupId]
      SubnetIds: !Ref SubnetIds
    Code:
      ZipFile: |
        import json
        import os
        import logging
        import sys
        import pymysql

        rds_host = os.environ['DB_HOSTNAME']
        rds_user = os.environ['DB_USER']
        rds_password = os.environ['DB_PASSWORD']
        rds_dbname = os.environ['DB_NAME']

        logger = logging.getLogger()
        logger.setLevel(logging.INFO)

        # Connect outside the handler so the connection can be reused across invocations.
        try:
            conn = pymysql.connect(host=rds_host,
                                   user=rds_user,
                                   passwd=rds_password,
                                   db=rds_dbname,
                                   connect_timeout=5)
        except pymysql.MySQLError:
            logger.error("ERROR: Unexpected error: Could not connect to MySQL instance.")
            sys.exit()

        def lambda_handler(event, context):
            print(json.dumps(event))
            with conn.cursor() as cur:
                cur.execute("create table if not exists Employee (EmpID int NOT NULL auto_increment, Name varchar(255) NOT NULL, PRIMARY KEY (EmpID))")
            conn.commit()
            return {
                'statusCode': 200,
                'body': ""
            }
    Timeout: 60
    MemorySize: 128
    Layers:
      - arn:aws:lambda:us-east-1:113088814899:layer:Klayers-python37-PyMySQL:1
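Note that the template above only covers the Lambda function itself; to act as a CloudFormation custom resource the handler must also signal success or failure back to CloudFormation, and for the Aurora PostgreSQL case in the question you would run a CREATE USER statement instead of the MySQL DDL. Below is a minimal, hedged sketch of such a handler, assuming a PostgreSQL driver like pg8000 is provided via a layer; the cfnresponse helper is available automatically for inline (ZipFile) code, and the user name and password shown are placeholders only.

# Sketch of a custom-resource handler that creates a Postgres user and
# reports back to CloudFormation. pg8000 layer assumed; names are placeholders.
import os
import pg8000
import cfnresponse  # bundled automatically for inline (ZipFile) Lambda code

def lambda_handler(event, context):
    try:
        if event['RequestType'] == 'Create':
            conn = pg8000.connect(host=os.environ['DB_HOSTNAME'],
                                  user=os.environ['DB_USER'],
                                  password=os.environ['DB_PASSWORD'],
                                  database=os.environ['DB_NAME'])
            cur = conn.cursor()
            # Placeholder credentials; in practice pass them in through
            # event['ResourceProperties'] or fetch them from Secrets Manager.
            cur.execute("CREATE USER app_user WITH PASSWORD 'change-me'")
            conn.commit()
            conn.close()
        # Update/Delete handling is omitted in this sketch.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {}, 'db-user-app-user')
    except Exception as e:
        cfnresponse.send(event, context, cfnresponse.FAILED, {'Error': str(e)})

The custom resource itself is then declared as a Custom::DbUser (or AWS::CloudFormation::CustomResource) resource whose ServiceToken is the function's ARN, with DependsOn (or a Ref to the cluster) guaranteeing the database exists before the handler runs.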

Related

What is the correct dbt profile for PostgreSQL?

I have a dbt_project.yml like below.
name: 'DataTransformer'
version: '1.0.0'
config-version: 2
profile: 'DataTransformer'
I am using Postgres, hence I have a profile at .dbt/profiles.yml:
DataTransformer:
  target: dev
  outputs:
    dev:
      type: postgres
      host: localhost
      port: 55000
      user: postgres
      pass: postgrespw
      dbname: postgres
      schema: public
      threads: 4
But when I run dbt debug, I get: Credentials in profile "DataTransformer", target "dev" invalid: ['dbname'] is not of type 'string'
I have searched and found several people who hit the same error, but with different databases. I am still not sure why it happens in my case.
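As a quick sanity check on how that YAML is actually parsed, the small sketch below (assumptions on my part: PyYAML is installed, the profile lives at the default ~/.dbt/profiles.yml, and the file contains no Jinja expressions) prints the Python type of every value in the dev target, which helps confirm whether dbname really arrives as a string.

# Diagnostic sketch only: inspect the types the YAML parser produces.
# Assumes PyYAML is installed and profiles.yml contains no Jinja expressions.
import pathlib
import yaml

profiles = yaml.safe_load(pathlib.Path("~/.dbt/profiles.yml").expanduser().read_text())
dev = profiles["DataTransformer"]["outputs"]["dev"]
for key, value in dev.items():
    print(f"{key}: {value!r} ({type(value).__name__})")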

Kinesis data stream to lambda function

I have an existing Kinesis stream and my aim is to connect to it via a Lambda function and process the records.
I created the Lambda using the VS Code aws-toolkit extension via "create new SAM Application". I put some test records on the stream using boto3 in Python. Every time I invoke the Lambda locally in debug mode, the event is an empty object, so there are no records to parse.
I can connect to Kinesis and retrieve records in Python using boto3, which confirms the records exist.
Here is my template.yaml
Globals:
  Function:
    Timeout: 60
Resources:
  KinesisRecors:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: kinesis_records/
      Handler: app.lambda_handler
      Runtime: python3.8
      Events:
        KinesisEvent:
          Type: Kinesis
          Properties:
            Stream: arn:aws:....
            StartingPosition: TRIM_HORIZON
            BatchSize: 10
            Enabled: false
I have also tested with Enabled: true with no success.
The Lambda function:
import base64

def lambda_handler(event, context):
    for record in event['Records']:
        payload = base64.b64decode(record["kinesis"]["data"])
Is it possible to invoke the function locally and get records?
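One way to get records while invoking locally is to hand the function a Kinesis-shaped event yourself: sam local invoke sends an empty event unless you pass one with --event (sam local generate-event kinesis get-records produces a template). The sketch below is only an illustration; it assumes the handler module is importable as app and fills the Kinesis fields with placeholder values.

# Sketch: build a Kinesis-shaped test event and call the handler directly.
# Only the base64-encoded "data" field matters to the handler above;
# the other fields are placeholders.
import base64
import json

from app import lambda_handler  # assumes the SAM handler module is importable

def make_event(payloads):
    records = []
    for payload in payloads:
        records.append({
            "kinesis": {
                "kinesisSchemaVersion": "1.0",
                "partitionKey": "test-key",
                "sequenceNumber": "0",
                "data": base64.b64encode(json.dumps(payload).encode()).decode(),
            },
            "eventSource": "aws:kinesis",
            "eventName": "aws:kinesis:record",
            "eventSourceARN": "arn:aws:kinesis:us-east-1:123456789012:stream/example-stream",
        })
    return {"Records": records}

if __name__ == "__main__":
    lambda_handler(make_event([{"hello": "world"}]), None)

The same event shape can be saved to a JSON file and passed to sam local invoke --event event.json to exercise the packaged function.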

How can I fix 'DB Clusters not supported for engine' when restoring a snapshot on AWS Aurora?

I'm using the AWS API to restore a cluster snapshot. My code is simple enough, and follows the restoreDBClusterFromSnapshot documentation fairly closely:
await rds.restoreDBClusterFromSnapshot({
  DBClusterIdentifier: SNAPSHOT_NAME,
  SnapshotIdentifier: `arn:aws:rds:eu-west-1:ACCOUNT_ID:cluster-snapshot:SNAPSHOT_NAME`,
  Engine: "postgres",
  EngineMode: 'provisioned',
  EngineVersion: '9.6.12',
  Tags: [
    {
      Key: 'Creator',
      Value: USERNAME
    }
  ]
}).promise()
However this call fails with
DB Clusters not supported for engine: postgres
I know that's not true - we're running a Postgres Cluster in production.
How can I restore the cluster snapshot?
The error is a little deceptive - it's not that clusters aren't available for the postgres engine, it's that postgres isn't a valid engine name here.
The correct engine name for Aurora PostgreSQL is aurora-postgresql. I wasn't able to find an explicit mention of this in the AWS documentation, however:
Running rds.describeDBClusters() on the existing cluster made through the Management Console shows the Engine as aurora-postgresql.
The TypeScript info for createDBInstance() mentions:
Valid Values: aurora (for MySQL 5.6-compatible Aurora) aurora-mysql (for MySQL 5.7-compatible Aurora) aurora-postgresql mariadb mysql oracle-ee oracle-se2 oracle-se1 oracle-se postgres sqlserver-ee sqlserver-se sqlserver-ex sqlserver-web
await rds.restoreDBClusterFromSnapshot({
  DBClusterIdentifier: SNAPSHOT_NAME,
  SnapshotIdentifier: `arn:aws:rds:eu-west-1:ID_NUMBER:cluster-snapshot:${SNAPSHOT_NAME}`,
  Engine: "aurora-postgresql",
  EngineMode: 'provisioned',
  EngineVersion: '9.6.12',
  Tags: [
    {
      Key: 'Creator',
      Value: USERNAME
    }
  ]
}).promise()
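If you want to confirm programmatically which engine names a region accepts, the DescribeDBEngineVersions API lists them; here is a small boto3 sketch (Python shown here, but the same operation exists in every SDK):

# Sketch: list the distinct engine names RDS accepts in the current region.
import boto3

rds = boto3.client("rds")
engines = set()
for page in rds.get_paginator("describe_db_engine_versions").paginate():
    for version in page["DBEngineVersions"]:
        engines.add(version["Engine"])
print(sorted(engines))  # expect entries such as 'aurora-postgresql' and 'postgres'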

How to store MongoDB backups on Google Drive using Symfony 3.4

I am trying to upload a MongoDB backup to Google Drive.
I installed the following bundles: dizda/cloud-backup-bundle and Happyr/GoogleSiteAuthenticatorBundle; for adapters I am using cache/adapter-bundle.
Configuration:
dizda_cloud_backup:
    output_file_prefix: '%dizda_hostname%'
    timeout: 300
    processor:
        type: zip # Required: tar|zip|7z
        options:
            compression_ratio: 6
            password: '%dizda_compressed_password%'
    cloud_storages:
        google_drive:
            token_name: 'AIzaSyA4AE21Y-YqneV5f9POG7MPx4TF1LGmuO8' # Required
            remote_path: ~ # Not required, default "/", but you can use path like "/Accounts/backups/"
    databases:
        mongodb:
            all_databases: false # Only required when no database is set
            database: '%database_name%'
            db_host: '%mongodb_backup_host%'
            db_port: '%mongodb_port%'
            db_user: '%mongodb_user%'
            db_password: '%mongodb_password%'

cache_adapter:
    providers:
        my_redis:
            factory: 'cache.factory.redis'

happyr_google_site_authenticator:
    cache_service: 'cache.provider.my_redis'
    tokens:
        google_drive:
            client_id: '85418079755-28ncgsoo91p69bum6ulpt0mipfdocb07.apps.googleusercontent.com'
            client_secret: 'qj0ipdwryCNpfbJQbd-mU2Mu'
            redirect_url: 'http://localhost:8000/googledrive/'
            scopes: ['https://www.googleapis.com/auth/drive']
When I use factory: 'cache.factory.mongodb' I get
You have requested a non-existent service "cache.factory.mongodb"
while running the server, and when running the backup command I get
Something went terribly wrong. We could not create a backup. Read your log files to see what caused this error
In the logs I see Command "--env=prod dizda:backup:start" exited with code "1" {"command":"--env=prod dizda:backup:start","code":1} []
I am not sure which adapter needs to be used or what is going on here.
Can someone help me? Thanks in advance.

Access a database running on an EC2 instance through an AWS Lambda function

I wrote a Lambda function in Python 3.6 to access a PostgreSQL database running on an EC2 instance.
psycopg2.connect(user="<USER NAME>",
                 password="<PASSWORD>",
                 host="<EC2 IP Address>",
                 port="<PORT NUMBER>",
                 database="<DATABASE NAME>")
I created a deployment package with the required dependencies as a zip file and uploaded it to AWS Lambda. To build the dependencies I followed THIS reference guide.
I also configured the VPC (the default one) and included the EC2 instance details, but I couldn't get a connection to the database; trying to connect to the database from Lambda results in a timeout.
Lambda function:
from __future__ import print_function
import json
import ast, datetime
import psycopg2

def lambda_handler(event, context):
    received_event = json.dumps(event, indent=2)
    load = ast.literal_eval(received_event)
    try:
        connection = psycopg2.connect(user="<USER NAME>",
                                      password="<PASSWORD>",
                                      host="<EC2 IP Address>",
                                      # host="localhost",
                                      port="<PORT NUMBER>",
                                      database="<DATABASE NAME>")
        cursor = connection.cursor()
        postgreSQL_select_Query = "select * from test_table limit 10"
        cursor.execute(postgreSQL_select_Query)
        print("Selecting rows from mobile table using cursor.fetchall")
        mobile_records = cursor.fetchall()
        print("Print each row and it's columns values")
        for row in mobile_records:
            print("Id = ", row[0])
    except (Exception,) as error:
        print("Error while fetching data from PostgreSQL", error)
    finally:
        # closing database connection.
        if connection:
            cursor.close()
            connection.close()
            print("PostgreSQL connection is closed")
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!'),
        'dt': str(datetime.datetime.now())
    }
I googled quite a lot, but I couldn't find a workaround for this. Is there any way to accomplish this requirement?
Your configuration would need to be:
A database in a VPC
The Lambda function configured to use the same VPC as the database
A security group on the Lambda function (Lambda-SG)
A security group on the database (DB-SG) that permits inbound connections from Lambda-SG on the relevant database port
That is, DB-SG refers to Lambda-SG; a minimal sketch of that rule follows below.
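For illustration only, here is a small boto3 sketch of that DB-SG rule; the security group IDs and port are placeholders, and in CloudFormation the equivalent is an AWS::EC2::SecurityGroupIngress resource with SourceSecurityGroupId.

# Sketch: allow inbound PostgreSQL traffic to DB-SG from Lambda-SG.
# Both group IDs below are placeholders.
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0db0000000000000",  # DB-SG, attached to the database instance
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": "sg-0la0000000000000"}],  # Lambda-SG
    }],
)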
For Lambda to connect to any resources inside a VPC, it needs to set up ENIs in the related private subnets of the VPC. Have you set up the VPC association and the security groups of the EC2 instance correctly?
You can refer to https://docs.aws.amazon.com/lambda/latest/dg/vpc.html