How can one turn on audit logging for RDS via CloudFormation when setting up the RDS instance?
The only way I have seen so far is to set up the instance, then modify it and check the Audit logging box to forward logs to CloudWatch. Can we do this for MySQL when we set up the instance, without having to modify it afterwards?
This is not directly available from CloudFormation; you need to create a custom resource to enable the logs.
I have created a custom resource to enable the logs after the DB instance is created. Here are the CloudFormation template and the Boto3 script.
https://gist.github.com/sudharsans/ab950c43f2086801d19b016f73310832
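The gist above follows this idea; here is a minimal, hypothetical sketch of the Lambda handler behind such a custom resource. It calls the real `modify_db_instance` API with a `CloudwatchLogsExportConfiguration`; the MySQL `audit` log type additionally assumes the instance's option group enables the MariaDB audit plugin, and a production handler would also handle Update/Delete and signal CloudFormation back.

```python
def export_config(log_types):
    """Build the modify_db_instance parameters that enable
    CloudWatch Logs export for the given log types."""
    return {
        "CloudwatchLogsExportConfiguration": {"EnableLogTypes": list(log_types)}
    }

def handler(event, context):
    # Only act on stack creation in this sketch.
    if event.get("RequestType") != "Create":
        return
    import boto3  # imported lazily so the module loads without the SDK
    rds = boto3.client("rds")
    rds.modify_db_instance(
        DBInstanceIdentifier=event["ResourceProperties"]["DBInstanceIdentifier"],
        **export_config(["audit", "error", "general", "slowquery"]),
    )
```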
I deploy AWS Aurora for Postgres using AWS CDK, which creates a cluster admin role, and makes its password available as a secret to other infrastructure, notably Lambdas. I'm looking for a way to also create an unprivileged role in the database, and then disseminate its login credentials to Lambdas etc., to eliminate the risk of accessing the database with superuser credentials by design.
CDK only seems to create a single user account, and from there IaC authors have to fend for themselves. How could CDK be adapted to this scenario?
The CDK, like all IaC tooling (e.g. Terraform), manages the provisioning of infrastructure.
You essentially want to initialise your RDS instance and create a user/role within the database itself, which isn't naturally related to infrastructure provisioning, and thus not to the CDK at all.
While this isn't built into the CDK, you can use AwsCustomResource to create the unprivileged role via a Lambda after the RDS database has been created. Take a look at the official blog post titled Use AWS CDK to initialize Amazon RDS instances for more information on how to get started.
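As a sketch of what the Lambda behind AwsCustomResource might run against the cluster once it is up: the role name, database name, and grant list below are all hypothetical, and in practice the password should be drawn from Secrets Manager rather than passed as plain text.

```python
def unprivileged_role_sql(role, password, database):
    """Build the SQL an initialization Lambda could execute against
    Postgres to create a least-privilege application role.
    All identifiers here are illustrative placeholders."""
    return [
        f"CREATE ROLE {role} LOGIN PASSWORD '{password}';",
        f"GRANT CONNECT ON DATABASE {database} TO {role};",
        f"GRANT USAGE ON SCHEMA public TO {role};",
        f"GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO {role};",
    ]
```

The Lambda would connect with the admin credentials from the CDK-created secret, run these statements, then store the new role's credentials in its own secret for the other Lambdas to consume.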
I have successfully created an ECS cluster (EC2 Linux + Networking). Is it possible to log in to the cluster to perform some administrative tasks? I have not deployed any containers or tasks to it yet. I can't find any hints for it in the AWS console or the AWS documentation.
The "cluster" is just a logical grouping of resources. The "cluster" itself isn't a server you can log into or anything. You would perform actions on the cluster via the AWS console or the AWS API. You can connect to the EC2 servers managed by the ECS cluster individually. You would do that via the standard ssh method you would use to connect to any other EC2 Linux server.
ECS takes care of most of the administrative work for you. You simply have to deploy and manage your applications on ECS. If you set up ECS correctly, you will never have to connect to the instances.
Follow these instructions to deploy your service (docker image): https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html
Also you can use Cloudwatch to store container logs, so that you don't have to connect to instances to check the logs: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
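For example, a container definition in the task definition can wire up the awslogs driver roughly like this (the log group name, region, and prefix are illustrative placeholders):

```json
{
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "my-app:latest",
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/my-app",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}
```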
I created an ECS account name linked to an AWS account name and enabled ECS. Now, when creating a new server group and selecting ECS, the ECS account name appears under the account section, but it won't let me select it.
If you are using Spinnaker < 1.19.X, the AWS ECS provider depends on the AWS EC2 provider and the AWS IAM structure.
Please read the AWS Providers Overview to understand the AWS IAM structure that is required (an AWS managing account and AWS managed accounts, assumed via the AssumeRole action).
Then you can set up an AWS EC2 provider following this easy-to-get-started guide by Armory.
Finally, set up the AWS ECS provider with the legacy instructions found at spinnaker.io.
If you are using Spinnaker > 1.19.X, you must use AWS ECS service-linked roles.
One very important step is tagging the AWS VPC subnets so that Spinnaker can access them.
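For the legacy (< 1.19.X) setup, the clouddriver configuration ends up looking roughly like this sketch: the ECS account references the EC2 account it rides on via `awsAccount`, which is why the EC2 provider must exist first. Account names and the account ID are placeholders; check the spinnaker.io instructions for the exact schema of your version.

```yaml
aws:
  enabled: true
  accounts:
    - name: aws-ec2-account        # placeholder EC2 provider account
      accountId: "123456789012"    # placeholder AWS account ID
      regions:
        - name: us-east-1
ecs:
  enabled: true
  accounts:
    - name: ecs-account            # what appears in the server group dialog
      awsAccount: aws-ec2-account  # must match the EC2 account name above
```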
In an unmanaged cluster, to export the k8s audit logs we can use the AuditSink object and redirect the logs to any webhook we would like. To do so, we have to change the API server configuration.
In a managed cluster the API server is not accessible. Is there any way to send the data to a webhook as well?
If you can add an example it would be great, since I saw the Pub/Sub option of GCP, for example, and it seems that I can't use my own webhook.
Within a managed GKE cluster, the audit logs are sent to Stackdriver Logging. At this time, there is no way to send the logs directly from GKE to a webhook; however, there is a workaround.
You can export the GKE audit logs from Stackdriver Logging to Pub/Sub using a log sink. You will need to define which GKE audit logs you would like to export to Pub/Sub.
Once the logs are exported to Pub/Sub, you will then be able to push them from Pub/Sub using your webhook. Cloud Pub/Sub is highly programmable and you can control the data you exchange. Please take a look at this link for an example about webhooks in Cloud Pub/Sub.
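The two gcloud invocations involved can be sketched as argv lists like this. The sink, topic, subscription, and endpoint names are hypothetical, and the log filter assumes GKE audit entries appear under `resource.type="k8s_cluster"`; adjust the filter to the subset of audit logs you actually want.

```python
def gke_audit_filter(project_id):
    """Stackdriver Logging filter matching GKE audit log entries
    for the given project (assumed resource type: k8s_cluster)."""
    return (
        f'resource.type="k8s_cluster" AND '
        f'logName:"projects/{project_id}/logs/cloudaudit.googleapis.com"'
    )

def sink_command(project_id, sink, topic):
    """argv for creating a log sink that exports matching entries to Pub/Sub."""
    return [
        "gcloud", "logging", "sinks", "create", sink,
        f"pubsub.googleapis.com/projects/{project_id}/topics/{topic}",
        f"--log-filter={gke_audit_filter(project_id)}",
    ]

def push_subscription_command(sub, topic, endpoint):
    """argv for a push subscription that delivers each exported
    entry to your webhook endpoint."""
    return [
        "gcloud", "pubsub", "subscriptions", "create", sub,
        "--topic", topic, f"--push-endpoint={endpoint}",
    ]
```

Remember to grant the sink's writer identity publish permission on the topic after creating the sink.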
I need to do some manual configuration in a Cloud SQL instance. The question is: do I have access to the my.cnf file of a GCP DB instance?
You do not have access to the raw configuration file, but you can configure certain flags in the UI or using gcloud.
https://cloud.google.com/sql/docs/mysql/flags
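For example, to enable the MySQL slow query log without touching my.cnf, you patch the instance's database flags; a small sketch composing the `gcloud sql instances patch` call (the instance name is a placeholder, and the flag names come from the documented flag list linked above):

```python
def patch_flags_command(instance, flags):
    """argv for `gcloud sql instances patch` that sets the given
    database flags, replacing any flags currently set."""
    joined = ",".join(f"{k}={v}" for k, v in flags.items())
    return [
        "gcloud", "sql", "instances", "patch", instance,
        f"--database-flags={joined}",
    ]
```

Note that patching flags replaces the whole flag set and may restart the instance, so include every flag you want to keep in one call.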