Is it possible to enable Audit Logging to CloudWatch in Redshift using CloudFormation?

I see that you can configure logging properties as shown in the documentation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-redshift-cluster-loggingproperties.html
But it does not specify whether you can enable audit logging to CloudWatch when building a Redshift cluster with CloudFormation; it only specifies properties for an S3 bucket to write to. Is this something that has to be enabled via the API/CLI/Console instead?
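For reference, this is roughly what I expect I would have to run via the SDK instead, assuming a boto3 version recent enough that EnableLogging accepts a CloudWatch destination (LogDestinationType/LogExports); the cluster identifier is a placeholder:

import boto3

# Sketch only: enable audit logging to CloudWatch via the Redshift API.
# Assumes your boto3 version exposes LogDestinationType and LogExports;
# "my-redshift-cluster" is a placeholder cluster identifier.
redshift = boto3.client("redshift")

response = redshift.enable_logging(
    ClusterIdentifier="my-redshift-cluster",
    LogDestinationType="cloudwatch",  # instead of the S3-only properties
    LogExports=["connectionlog", "userlog", "useractivitylog"],
)
print(response["LoggingEnabled"])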

Related

Alert about DB creation on RDS/Aurora PostgreSQL

I have some Aurora PostgreSQL clusters in our AWS account. Because of some access issues (which we are already working on), several people in other teams create random DBs on these Aurora clusters, and we then have to clean them up.
I wanted to check if there is a way to get alerted (via SNS notifications etc.) whenever a new DB is created on these Aurora PostgreSQL clusters, using AWS tooling itself.
Thanks
You could do it using Aurora Database Activity Streams: they capture all database activity and send it to an Amazon Kinesis data stream. You can then create an AWS Lambda function that reads the Kinesis stream, identifies the events you need (e.g. CREATE DATABASE) and finally publishes a notification to Amazon SNS.
Another option is to enable pgaudit on your Aurora PostgreSQL cluster, send the logs to Amazon CloudWatch Logs, and create a Lambda function that reads the events from CloudWatch and sends an SNS notification (a rough sketch of such a function is shown after the blog link below).
You can find a step-by-step walkthrough in the AWS blog post below:
Part 2: Audit Aurora PostgreSQL databases using Database Activity Streams and pgAudit
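For the second option, a minimal sketch of such a Lambda function might look like the one below. It assumes the pgaudit/PostgreSQL logs are exported to CloudWatch Logs and that the function is invoked through a CloudWatch Logs subscription filter; the ALERT_TOPIC_ARN environment variable is a placeholder for your SNS topic:

import base64
import gzip
import json
import os

import boto3

sns = boto3.client("sns")
# Placeholder: set the real topic ARN in the Lambda environment.
TOPIC_ARN = os.environ["ALERT_TOPIC_ARN"]

def handler(event, context):
    # CloudWatch Logs subscription filters deliver a base64-encoded,
    # gzip-compressed payload under event["awslogs"]["data"].
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )

    for log_event in payload.get("logEvents", []):
        message = log_event["message"]
        # pgaudit writes the executed statement into the log line,
        # so a simple substring match is enough for this sketch.
        if "CREATE DATABASE" in message.upper():
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="New database created on Aurora PostgreSQL",
                Message=message,
            )

You can also put a filter pattern such as "CREATE DATABASE" on the subscription filter itself so the function is only invoked for matching log lines.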

Trigger a dataflow job deployed through Cloud Run on object creation in GCP Storage Bucket

I have created a Dataflow pipeline which reads a file from a GCS bucket and processes it. It works when I execute the job locally.
I deployed the Dataflow job in Cloud Run with a trigger on storage.object.create.
But when I upload a file to the GCS bucket, no trigger message shows up in the logs and the Dataflow job is not executed.
Trigger config
Ingress: Allow traffic
Authentication: Allow authentication
Event source: Cloud Storage
Event type: google.cloud.audit.log.v1.written
Create time: 2021-02-12 (16:05:25)
Receive events from: All regions (global)
Service URL path: /
Service account: sdas-pipeline@sdas-demo-project.iam.gserviceaccount.com
Service name: storage.googleapis.com
Method name: storage.objects.create
What am I missing here? Please suggest.
Your Cloud Run service isn't triggered because no audit logs are being written when an object is created/uploaded to your bucket. An Eventarc trigger fires whenever a matching event is written to the audit logs, and audit logging for Cloud Storage is disabled by default.
The solution is to enable audit logs for Cloud Storage. This can be done in two ways:
Enable them the first time you create an Eventarc trigger.
Or go to IAM & Admin > Audit Logs and make sure that all log types are checked for Cloud Storage.
For reference, audit logs can be seen under Home > Activity.
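Once Data Access logging is enabled for Cloud Storage, one way to confirm that storage.objects.create entries are actually being written (outside the console) is to query them with the Cloud Logging client library. A small sketch, assuming the google-cloud-logging package and using the project ID from the question as a placeholder:

from google.cloud import logging

client = logging.Client(project="sdas-demo-project")  # placeholder project ID

# Data Access audit log entries for object creation in Cloud Storage.
log_filter = (
    'logName="projects/sdas-demo-project/logs/'
    'cloudaudit.googleapis.com%2Fdata_access" '
    'AND protoPayload.methodName="storage.objects.create"'
)

for entry in client.list_entries(filter_=log_filter, page_size=10):
    print(entry.timestamp, entry.log_name)

If this prints entries shortly after you upload a test object, the Eventarc trigger should start receiving events as well.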

How can I use dataproc to pull data from bigquery that is not in the same project as my dataproc cluster?

I work for an organisation that needs to pull data from one of our client's BigQuery datasets using Spark, and given that both the client and we use GCP, it makes sense to use Dataproc to achieve this.
I have read Use the BigQuery connector with Spark, which looks very useful; however, it seems to assume that the Dataproc cluster, the BigQuery dataset and the storage bucket for the temporary BigQuery export are all in the same GCP project, which is not the case for me.
I have a service account key file that allows me to connect to and interact with our client's data stored in BigQuery. How can I use that service account key file in conjunction with the BigQuery connector and Dataproc in order to pull data from BigQuery and work with it in Dataproc? To put it another way, how can I modify the code provided at Use the BigQuery connector with Spark to use my service account key file?
To use service account key file authorization you need to set the mapred.bq.auth.service.account.enable property to true and point the BigQuery connector at a service account JSON key file using the mapred.bq.auth.service.account.json.keyfile property (at the cluster or job level). Note that this property value is a local path, which is why you need to distribute the key file to all cluster nodes beforehand, for example with an initialization action.
Alternatively, you can use any of the authorization methods described here, but you need to replace the fs.gs properties prefix with mapred.bq for the BigQuery connector.
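Putting it together, a rough PySpark adaptation of the example from Use the BigQuery connector with Spark, with the two auth properties added, might look like this. The key file path, project, dataset and table IDs are placeholders, and the key file must already exist at that local path on every node (e.g. distributed via an initialization action):

import json
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("client-bq-read").getOrCreate()
sc = spark.sparkContext

# As in the documented example; substitute your own bucket if this is unset.
bucket = sc._jsc.hadoopConfiguration().get("fs.gs.system.bucket")

conf = {
    # Authenticate with the client's service account key file.
    "mapred.bq.auth.service.account.enable": "true",
    "mapred.bq.auth.service.account.json.keyfile": "/etc/keys/client-sa.json",

    # The client's project/dataset/table to read from (placeholders).
    "mapred.bq.input.project.id": "client-project-id",
    "mapred.bq.input.dataset.id": "client_dataset",
    "mapred.bq.input.table.id": "client_table",

    # Temporary GCS export location in your own project.
    "mapred.bq.project.id": "your-project-id",
    "mapred.bq.gcs.bucket": bucket,
    "mapred.bq.temp.gcs.path": "gs://{}/tmp/bq-export".format(bucket),
}

table_data = sc.newAPIHadoopRDD(
    "com.google.cloud.hadoop.io.bigquery.JsonTextBigQueryInputFormat",
    "org.apache.hadoop.io.LongWritable",
    "com.google.gson.JsonObject",
    conf=conf,
)

# Each record is a (row index, JSON string) pair.
rows = table_data.map(lambda record: json.loads(record[1]))
print(rows.take(5))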

fine grained ACLs in pulumi cloud

It seems that by default a Lambda function created by Pulumi has the AWSLambdaFullAccess policy attached. This access is too broad, and I'd like to replace it with fine-grained ACLs.
For instance, assuming I am creating a cloud.Table in my index.js file, I would like to specify that the lambda endpoint I am creating (in the same file) only has read access to that specific table.
Is there a way to do it without coding the IAM policy myself?
The @pulumi/cloud library currently runs all compute (Lambdas and containerized services) with a single uniform set of IAM policies on AWS.
You can set the policies to use by running:
pulumi config set cloud-aws:computeIAMRolePolicyARNs "arn:aws:iam::aws:policy/AWSLambdaFullAccess,arn:aws:iam::aws:policy/AmazonEC2ContainerServiceFullAccess"
The values above are the defaults. See https://github.com/pulumi/pulumi-cloud/blob/master/aws/config/index.ts#L52-L56.
There are plans to support more fine-grained control over permissions and computing permissions directly from resources being used in @pulumi/cloud - see e.g. https://github.com/pulumi/pulumi-cloud/issues/145 and https://github.com/pulumi/pulumi-cloud/issues/168.
Lower-level libraries (like @pulumi/aws and @pulumi/aws-serverless) provide complete control over the Role and/or Policies applied to Function objects.
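To illustrate that last point, here is a sketch using the lower-level provider SDK (shown in Python with pulumi_aws for brevity; the equivalent exists in @pulumi/aws for Node). It is not @pulumi/cloud: the table, role and function below stand in for what cloud.Table and the endpoint would provision, and the handler package path and runtime are placeholders. The function's role only gets read access to that one table plus basic logging permissions:

import json
import pulumi
import pulumi_aws as aws

# Stand-in for the table that cloud.Table would create.
table = aws.dynamodb.Table(
    "my-table",
    attributes=[aws.dynamodb.TableAttributeArgs(name="id", type="S")],
    hash_key="id",
    billing_mode="PAY_PER_REQUEST",
)

# Role the Lambda function will assume.
role = aws.iam.Role(
    "fn-role",
    assume_role_policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
)

# Fine-grained policy: read-only access to this one table only.
aws.iam.RolePolicy(
    "fn-read-table",
    role=role.id,
    policy=table.arn.apply(lambda arn: json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan"],
            "Resource": arn,
        }],
    })),
)

# Basic execution (CloudWatch Logs) permissions for the function itself.
aws.iam.RolePolicyAttachment(
    "fn-basic-exec",
    role=role.name,
    policy_arn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)

fn = aws.lambda_.Function(
    "read-endpoint",
    role=role.arn,
    runtime="python3.9",
    handler="handler.handler",          # placeholder handler
    code=pulumi.FileArchive("./app"),   # placeholder deployment package
)

This trades the convenience of @pulumi/cloud for explicit control over exactly which actions and resources the function is allowed to touch.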

Set Monitoring Level to Verbose in an Azure Web Role using Powershell

I've created some custom performance counters in our web application deployed to an Azure Web Role. In order to see the values of those performance counters in the dashboard, I have to go to the portal, set the Monitoring Level to Verbose, and add the new metrics to the dashboard.
The problem is that we create the infrastructure by code using PowerShell, and every time we recreate the infrastructure, we lose these settings.
Can I set the Monitoring Level and the Metrics (and possibly alerts) via PowerShell?
It seems that you cannot set the monitoring level and metrics via PowerShell or the REST API. The only thing you are allowed to do via REST is create alerts: http://msdn.microsoft.com/en-us/library/azure/dn510366.aspx
Thanks.