Retrieve a VPC to create a security group - Pulumi

I'm trying to get back a VPC and then create a security group with rules in it. I have been following the steps in their docs; however, I need to get a VPC that isn't the default VPC.
I have code like so:
const primaryVpcId = config.require("primaryVpcId");
const primaryVpc = awsx.ec2.Vpc.fromExistingIds("primary", {
    vpcId: primaryVpcId
});
const sg = new awsx.ec2.SecurityGroup("jcsg", { vpc: primaryVpc });
The problem is that the primaryVpc object is empty, so when I run pulumi up it errors saying the subnet IDs are empty. I know there is nothing wrong with the VPC in AWS, so retrieving it must be failing somehow.

Based on the docs, it looks like when using fromExistingIds you have to specify the sub-resource IDs as well. If you're planning on using subnets, you'll have to pass in their IDs too; they don't appear to be auto-discovered.
Get an existing Vpc resource's state with the given name and IDs of its relevant sub-resources. This will not cause a VPC (or any sub-resources) to be created, and removing this Vpc from your pulumi application will not cause the existing cloud resource (or sub-resources) to be destroyed.
const importedVpc = awsx.ec2.Vpc.fromExistingIds('primary', {
    vpcId: 'theId',
    privateSubnetIds: ['id1', 'id2']
});
I imagine you'd have to do the same for any of the properties from ExistingVpcIdArgs (the second parameter to the function) that you plan to use elsewhere in the program.
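Applied to the original snippet, something like this should work (a sketch: the subnet IDs are placeholders you'd swap for your real ones, and publicSubnetIds is only needed if you actually use public subnets):

import * as pulumi from "@pulumi/pulumi";
import * as awsx from "@pulumi/awsx";

const config = new pulumi.Config();
const primaryVpcId = config.require("primaryVpcId");

// Pass the existing subnet IDs explicitly -- they are not auto-discovered.
const primaryVpc = awsx.ec2.Vpc.fromExistingIds("primary", {
    vpcId: primaryVpcId,
    publicSubnetIds: ["subnet-aaa111", "subnet-bbb222"],   // placeholder IDs
    privateSubnetIds: ["subnet-ccc333", "subnet-ddd444"],  // placeholder IDs
});

const sg = new awsx.ec2.SecurityGroup("jcsg", { vpc: primaryVpc });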


How to name a Prisma query so the name can be used in logs

I know that I can add middleware to log query data: https://www.prisma.io/docs/concepts/components/prisma-client/middleware/logging-middleware
But does Prisma have special syntax to add a name to queries, so that I can use those names in the middleware?
For example, I have 3 different queries to get users; I want to give each a specific name and log those names in the logging middleware.
Prisma has recently released support for metrics. Prisma metrics give you a detailed insight into how Prisma Client interacts with your database, and you can use this insight to help diagnose performance issues with your application.
You can add global labels to your metrics to help you group and separate them. Each instance of Prisma Client adds these labels to the metrics that it generates. For example, you can group your metrics by infrastructure region, or by server, with a label like { server: 'us_server1', app_version: 'one' }.
Global labels work with JSON and Prometheus-formatted metrics.
Here's an example:
const metrics = await prisma.$metrics.json({
    globalLabels: { server: 'us_server1', app_version: 'one' },
});
console.log(metrics);
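If you specifically need a per-query name in the logging middleware rather than metrics, one workaround (a sketch, not a built-in Prisma naming feature) is to derive a label from the model and action that the middleware already receives:

import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Log a derived "name" for every query: model + action, plus its duration.
prisma.$use(async (params, next) => {
    const queryName = `${params.model}.${params.action}`; // e.g. "User.findMany"
    const start = Date.now();
    const result = await next(params);
    console.log(`${queryName} took ${Date.now() - start}ms`);
    return result;
});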

KMS KeyPolicy for CloudTrail read/write and EventBridge read?

I have the following resources in a CDK project:
from aws_cdk import (
    aws_cloudtrail as cloudtrail,
    aws_events as events,
    aws_events_targets as targets,
    aws_kms as kms
)
# Create a Customer-Managed Key (CMK) for encrypting the CloudTrail logs
mykey = kms.Key(self, "key", alias="somekey")

# Create a CloudTrail Trail, an S3 bucket, and a CloudWatch Log Group
trail = cloudtrail.Trail(self, "myct", send_to_cloud_watch_logs=True,
                         management_events=cloudtrail.ReadWriteType.WRITE_ONLY)

# Create an EventBridge Rule to do something when certain events get matched
# in the CloudWatch Log Group
rule = events.Rule(self, "rule", event_pattern=events.EventPattern(
    # the contents of the EventPattern don't matter for this example
), targets=[
    # the contents of the targets don't matter either
])
The problem is, if I pass my key to the trail with the encryption_key=mykey parameter, CloudTrail complains that it can't use the key.
I've tried many different KMS policies, but other than making it wide open to the entire world, I can't figure out how to enable my CloudTrail Trail to read/write using the key (it has to put data into the S3 bucket), and allow CloudWatch and EventBridge to decrypt the encrypted data in the S3 bucket.
The documentation on this is very poor, and depending on which source I look at, they use different syntax and don't explain why they do things. Like, here's just one example from a CFT:
Condition:
  StringLike:
    'kms:EncryptionContext:aws:cloudtrail:arn': !Sub 'arn:aws:cloudtrail:*:${AWS::AccountId}:trail/*'
OK, but what if I need to connect up EventBridge and CloudWatch Logs, too? No example, no mention of it, as if this use case doesn't exist.
If I omit the encryption key, everything works fine - but I do need the data encrypted at rest in S3, since it's capturing sensitive operations in my master payer account.
Is there any shorthand for this in CDK, or is there an example in CFT (or even outside of IaC tools entirely) of the proper key policy to use in this scenario?
I tried variations on mykey.grant_decrypt(trail.log_group), mykey.grant_encrypt_decrypt(trail), mykey.grant_decrypt(rule), etc. and all of them throw an inscrutable stack trace saying something is undefined, so apparently those methods just don't work.
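For reference, the closest I've pieced together from the CloudTrail and CloudWatch Logs docs is a pair of key-policy statements along these lines (untested, and I'm not sure it's complete, which is why I'm asking whether there's a shorthand):

- Sid: AllowCloudTrailEncrypt
  Effect: Allow
  Principal:
    Service: cloudtrail.amazonaws.com
  Action:
    - 'kms:GenerateDataKey*'
    - 'kms:DescribeKey'
  Resource: '*'
  Condition:
    StringLike:
      'kms:EncryptionContext:aws:cloudtrail:arn': !Sub 'arn:aws:cloudtrail:*:${AWS::AccountId}:trail/*'
- Sid: AllowCloudWatchLogsUse
  Effect: Allow
  Principal:
    Service: !Sub 'logs.${AWS::Region}.amazonaws.com'
  Action:
    - 'kms:Encrypt*'
    - 'kms:Decrypt*'
    - 'kms:ReEncrypt*'
    - 'kms:GenerateDataKey*'
    - 'kms:Describe*'
  Resource: '*'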

Lookup Subnet ARN By Name in Cloudformation

Is it possible to reference a subnet by tag name in a CloudFormation template? I have VPCs in multiple regions, and each region has subnets with tag names like "app_a", "app_b", "app_c" for application-level subnets in availability zones a, b, and c. Ideally, I would like to avoid putting all the subnet ARNs in a big map in the Mappings section of the template. Assuming I don't have access to the outputs of another template that created the subnets, is there any other way to refer to the subnets by name?
You can add a custom resource where you pass in the AccountId, Region, and VPC name. It can return the VPC ID, subnets, and whatever else you need.
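A sketch of the lookup logic such a custom resource's Lambda could run (Node.js; the cfn-response plumbing that hands the result back to CloudFormation is omitted, and the filter values are illustrative):

import { EC2Client, DescribeSubnetsCommand } from "@aws-sdk/client-ec2";

// Find subnet IDs in a VPC whose Name tag matches a pattern, e.g. "app_*".
export async function lookupSubnets(vpcId: string, namePattern: string) {
    const ec2 = new EC2Client({});
    const result = await ec2.send(new DescribeSubnetsCommand({
        Filters: [
            { Name: "vpc-id", Values: [vpcId] },
            { Name: "tag:Name", Values: [namePattern] },
        ],
    }));
    return (result.Subnets ?? []).map((s) => s.SubnetId);
}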

No Outputs section in cloudformation template

In a CloudFormation template, there is an Outputs section which is used for cross-stack communication.
Is it correct to say that this section should not exist if we are only creating one stack in one AWS account?
The Outputs section can be used for cross-stack references with Export and Fn::ImportValue. It can also be used for general output to the user. A few examples (a template snippet follows the list):
Admin URL like https://123.123.123.123/admin
Credentials for a newly created user
Identifier for any of the resources for easy access
An attribute of a resource, such as an EC2 instance's IP address
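For instance, a minimal Outputs section along these lines (the logical IDs are hypothetical, and Export is only needed for cross-stack use):

Outputs:
  AdminUrl:
    Description: Admin URL for the deployed service
    Value: !Sub 'https://${MyInstance.PublicIp}/admin'
  InstanceIp:
    Value: !GetAtt MyInstance.PublicIp
  BucketName:
    Value: !Ref MyBucket
    Export:
      Name: my-stack-bucket-name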

I'd like to create CloudFormation stack with resources in multiple regions. Is this possible?

Is it possible to create a single Amazon CloudFormation stack template that instantiates an AWS::EC2::Instance in ap-southeast-1 and another AWS::EC2::Instance in us-west-2 for example?
I suspect not, but I've not yet found a definitive yes/no saying that stacks can't have resources spanning multiple regions.
The accepted answer is out of date. It is now possible to create stacks across accounts and regions using CloudFormation StackSets.
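The flow is roughly as follows (the stack-set name, account ID, and template path are placeholders, and the StackSets administration/execution roles must already be set up):

aws cloudformation create-stack-set --stack-set-name my-set --template-body file://template.yaml
aws cloudformation create-stack-instances --stack-set-name my-set \
    --accounts 111111111111 --regions ap-southeast-1 us-west-2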
A very good question, but I don't think you would be able to create resources spread across multiple regions.
The endpoint URL for CloudFormation is region-based, and AFAIK there isn't a place where you can specify region-specific (different-region) information.
As of today you can compose the CloudFormation template in such a way as to make it region-independent, by leveraging the Mappings section and the AWS::Region pseudo parameter; but making one template create resources across multiple regions simultaneously isn't possible, though it can be expected down the line.
Your best bet right now would be to use a CloudFormation Custom Resource that invokes a Lambda function in order to create the resources that are in other regions. When you run the CFN template, it would invoke the Lambda function, whose code (Python, Node.js, or Java) leverages the AWS SDKs to create the resources you need. CFN Custom Resources allow you to pass parameters to the function and get "outputs" back from them, so from a CFN perspective you can treat it just like any other resource.
Here's a walkthrough example from the AWS docs: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources-lambda.html
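A hypothetical handler for such a custom resource might look like this (Node.js; the TargetRegion and TopicName properties are made-up names for this sketch, and Update/Delete handling is omitted):

import * as https from "https";
import { SNSClient, CreateTopicCommand } from "@aws-sdk/client-sns";

export async function handler(event: any, context: any) {
    let status = "SUCCESS";
    let data: Record<string, string> = {};
    try {
        if (event.RequestType === "Create") {
            // Create the resource in whatever region the template asked for.
            const sns = new SNSClient({ region: event.ResourceProperties.TargetRegion });
            const topic = await sns.send(new CreateTopicCommand({
                Name: event.ResourceProperties.TopicName,
            }));
            data = { TopicArn: topic.TopicArn ?? "" };
        }
    } catch (err) {
        status = "FAILED";
    }
    // Signal CloudFormation via the pre-signed URL it provides in the event.
    const body = JSON.stringify({
        Status: status,
        Reason: "See CloudWatch Logs",
        PhysicalResourceId: context.logStreamName,
        StackId: event.StackId,
        RequestId: event.RequestId,
        LogicalResourceId: event.LogicalResourceId,
        Data: data,
    });
    await new Promise<void>((resolve, reject) => {
        const req = https.request(event.ResponseURL, {
            method: "PUT",
            headers: { "content-type": "", "content-length": Buffer.byteLength(body) },
        }, () => resolve());
        req.on("error", reject);
        req.write(body);
        req.end();
    });
}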
You can create a Lambda function that creates a resource in another region, and even have your Lambda function invoke another stack in the other region.
To make your life easy, in this case you can use the cli2cloudformation Lambda (https://github.com/lucioveloso/cli2cloudformation).
Using it, you can execute CLI commands inside your Lambda, and that way you can specify the --region in the command.
It's also interesting because you can set a command to run when your stack is created, updated, and deleted.
"myCustomResource": {
"Type": "Custom::LocationConstraint",
"Properties": {
"ServiceToken": "arn:aws:lambda:eu-west-1:432811670411:function:cli2cfn_proxy2",
"CliCommandCreate": "s3api get-bucket-location --bucket my-test-bucket --region eu-west-1",
"CliCommandUpdate": "",
"CliCommandDelete": ""
}
},