Importing ALBName from StackA to use in StackB - aws-cloudformation

I am creating an ALB in AWS with a stack named StackA, and I export the ALB DNS name as a stack output using
Export=Export(Join("", [Ref("AWS::StackName"), "-ALB"])),
Value=GetAtt(ApplicationElasticLB, "DNSName")
I can see in the AWS Console that the value being exported for the ALB is "internal-alb-test-12345678.us-east-1.elb.amazonaws.com".
So now I want to use this ALB name from StackB while creating ECSService.
I am using it like this
LoadBalancerName=ImportValue('StackA-ALB')
But then AWS throws an error saying
elb name longer than 32. (Service: AmazonECS; Status Code: 400; Error Code: InvalidParameterException)
Am I doing anything wrong here? Please help me understand the cause.

I think that you need to export the ALB ARN, not the ALB name.
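To expand on that: the imported value is the ALB's DNS name, which is well over 32 characters, and LoadBalancerName on an ECS service is meant for Classic ELBs anyway. With an ALB, the service should reference a target group ARN through the LoadBalancers property. A minimal troposphere sketch, assuming a hypothetical target group ("ServiceTargetGroup"), a VpcId parameter, Cluster/TaskDef resources, and a container named "web" in the task definition:

# StackA: export the target group ARN alongside the DNS name
from troposphere import Export, GetAtt, Join, Output, Ref
from troposphere import elasticloadbalancingv2 as elbv2

target_group = elbv2.TargetGroup(
    "ServiceTargetGroup",
    Port=80,
    Protocol="HTTP",
    VpcId=Ref("VpcId"),  # assumes a VpcId parameter exists
)

outputs = [
    Output(
        "TargetGroupArn",
        Value=Ref(target_group),  # Ref on a TargetGroup resolves to its ARN
        Export=Export(Join("", [Ref("AWS::StackName"), "-TargetGroupArn"])),
    ),
    Output(
        "ALBDNSName",
        Value=GetAtt("ApplicationElasticLB", "DNSName"),
        Export=Export(Join("", [Ref("AWS::StackName"), "-ALB"])),
    ),
]

# StackB: point the ECS service at the imported target group ARN
from troposphere import ImportValue
from troposphere import ecs

service = ecs.Service(
    "ECSService",
    Cluster=Ref("Cluster"),
    TaskDefinition=Ref("TaskDef"),
    DesiredCount=1,
    LoadBalancers=[
        ecs.LoadBalancer(
            ContainerName="web",  # hypothetical container name from the task definition
            ContainerPort=80,
            TargetGroupArn=ImportValue("StackA-TargetGroupArn"),
        )
    ],
)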


AWS CDK NetworkLoadBalancer - add target group (add_targets) fails with target.attachToNetworkTargetGroup is not a function

Basically I'm trying to get an NLB (network load balancer) to point to an ALB (application load balancer), but CDK fails at the .add_targets call with the error jsii.errors.JSIIError: target.attachToNetworkTargetGroup is not a function
Here's a snippet of my cdk:
nlb = elbv2.NetworkLoadBalancer(
    stack,
    id="nlb",
    load_balancer_name="my-nlb",
    vpc=vpc,
)
cert = elbv2.ListenerCertificate.from_arn(certificate_arn)
listener_80 = nlb.add_listener("listener", port=80)
alb_target_group = elbv2.ApplicationTargetGroup(
    stack,
    id="alb_target_group",
    target_type=elbv2.TargetType.ALB,
    protocol=elbv2.ApplicationProtocol.HTTP,
    vpc=vpc,
)
listener_80.add_targets(id="target", port=80, targets=[alb_target_group])
I get the following error and it's due to the call to
listener_80.add_targets(id="target", port=80, targets=[alb_target_group])
cdk diff --app "python3 fargate.py"
cluster sec group <class 'NoneType'>
connections <aws_cdk.aws_ec2.Connections object at 0x10c0c91f0>
jsii.errors.JavaScriptError:
TypeError: target.attachToNetworkTargetGroup is not a function
      at NetworkTargetGroup.addTarget (/private/var/folders/v0/6bvb2_m975jd380hx464rtzm0000gq/T/jsii-kernel-wnPJIQ/node_modules/aws-cdk-lib/aws-elasticloadbalancingv2/lib/nlb/network-target-group.js:1:1547)
TypeError: target.attachToNetworkTargetGroup is not a function
I'm using
cdk version 2.20.0 and
python v 3.8.0 and
aws-cli/2.3.4
Any idea why I'm getting the
TypeError: target.attachToNetworkTargetGroup is not a function ?
Thanks!
.add_targets and .add_target_group are different, and in your scenario, you should use .add_target_group.
By the way, in AWS CDK, the constants and classes for ALB and NLB are different.
If your architecture is:
Create a network load balancer
Add a listener for port 80
Create a target group whose target type is ALB
Attach the target group to the listener
Your snippet would be:
nlb = elbv2.NetworkLoadBalancer(
    stack,
    id="nlb",
    load_balancer_name="my-nlb",
    vpc=vpc,
)
listener_80 = nlb.add_listener("listener", port=80)
alb_target_group = elbv2.NetworkTargetGroup(
    stack,
    id="alb_target_group",
    port=80,  # port is required on NetworkTargetGroup
    target_type=elbv2.TargetType.ALB,
    protocol=elbv2.Protocol.TCP,
    vpc=vpc,
)
listener_80.add_target_group("target", alb_target_group)
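As written, the target group has no targets registered, so nothing will actually receive traffic. If it is meant to forward to an existing ALB, the ALB itself can be registered as a target. A sketch, assuming an ApplicationLoadBalancer construct named alb already exists with a listener on port 80 (AlbTarget comes from aws_cdk.aws_elasticloadbalancingv2_targets):

from aws_cdk import aws_elasticloadbalancingv2 as elbv2
from aws_cdk import aws_elasticloadbalancingv2_targets as targets

alb_target_group = elbv2.NetworkTargetGroup(
    stack,
    id="alb_target_group",
    port=80,  # required port for the target group
    protocol=elbv2.Protocol.TCP,
    target_type=elbv2.TargetType.ALB,
    targets=[targets.AlbTarget(alb, 80)],  # register the ALB itself as the target
    vpc=vpc,
)
listener_80.add_target_group("target", alb_target_group)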

AKS cluster creation fails with error 'Security rule has invalid Port range'

We are creating an AKS cluster, but it fails at the deployment stage with the below error,
'Security rule has invalid Port range. Value provided: 22,3389. Value should be an integer OR integer range with '-' delimiter. Valid range 0-65535.. Details: [] (Code: SecurityRuleInvalidPortRange)'
We tried using both UI and CLI, but it fails.
Please let me know if somebody is aware of this issue.
Thanks,
Arun
The problem you have mentioned is clear: the port value you provided, "22,3389", is not valid. A security rule accepts a single integer or an integer range with a '-' delimiter, within 0-65535.
You probably wanted to set destination_port_ranges; it should look like: destination_port_ranges = ["22", "3389"]
Similar problem: invalid-port-range-aks.

How to reconcile the Terraform State with an existing bucket?

Using Terraform 0.11.14
My terraform file contains the following resource:
resource "google_storage_bucket" "assets-bucket" {
name = "${local.assets_bucket_name}"
storage_class = "MULTI_REGIONAL"
force_destroy = true
}
And this bucket has already been created (it exists on the infrastructure based on a previous apply)
However the state (remote on gcs) is inconsistent and doesn't seem to include this bucket.
As a result, terraform apply fails with the following error:
google_storage_bucket.assets-bucket: googleapi: Error 409: You already own this bucket. Please select another name., conflict
How can I reconcile the state? (terraform refresh doesn't help)
EDIT
Following #ydaetskcoR's response, I did:
terraform import module.bf-nathan.google_storage_bucket.assets-bucket my-bucket
The output:
module.bf-nathan.google_storage_bucket.assets-bucket: Importing from ID "my-bucket"...
module.bf-nathan.google_storage_bucket.assets-bucket: Import complete! Imported google_storage_bucket (ID: next-assets-bf-nathan-botfront-cloud)
module.bf-nathan.google_storage_bucket.assets-bucket: Refreshing state... (ID: next-assets-bf-nathan-botfront-cloud)
Error: module.bf-nathan.provider.kubernetes: 1:11: unknown variable accessed: var.cluster_ip in:
https://${var.cluster_ip}
The refreshing step doesn't work. I ran the command from the project's root where a terraform.tfvars file exists.
I tried adding -var-file=terraform.tfvars but no luck. Any idea?
You need to import it into the existing state file. You can do this with the terraform import command for any resource that supports it.
Thankfully the google_storage_bucket resource does support it:
Storage buckets can be imported using the name or project/name. If the project is not passed to the import command it will be inferred from the provider block or environment variables. If it cannot be inferred it will be queried from the Compute API (this will fail if the API is not enabled).
e.g.
$ terraform import google_storage_bucket.image-store image-store-bucket
$ terraform import google_storage_bucket.image-store tf-test-project/image-store-bucket

Splunk with ECS

I am having a problem configuring Splunk to send logs from an ECS cluster.
From the Events tab of the service, this error was shown:
Problem Statement: unable to place a task because no container instance met all of its requirements. The closest matching container-instance xxxx is missing an attribute required by your task.
After doing a deep dive I found that I have to update /etc/ecs/ecs.config and add the entry ECS_AVAILABLE_LOGGING_DRIVERS='["splunk","awslogs"]'.
But this didn't help; I'm still getting the same error.
Can anyone please help?
If you are looking to send container logs to Splunk, you need a logConfiguration block in the container definition of your task definition JSON, with all the required details, like below:
{
    "logConfiguration": {
        "logDriver": "splunk",
        "options": {
            "splunk-token": "",
            "splunk-url": "",
            ...
        }
    }
}
AWS task definition parameters
splunk logging options
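If you prefer to register the task definition programmatically, here is a minimal boto3 sketch; the family name, image, token, and URL below are placeholders, not values from the question:

import boto3

ecs_client = boto3.client("ecs")
ecs_client.register_task_definition(
    family="splunk-logging-demo",  # placeholder family name
    containerDefinitions=[
        {
            "name": "app",
            "image": "nginx:latest",  # placeholder image
            "memory": 512,
            "essential": True,
            "logConfiguration": {
                "logDriver": "splunk",
                "options": {
                    "splunk-token": "<hec-token>",  # placeholder
                    "splunk-url": "https://splunk.example.com:8088",  # placeholder
                },
            },
        }
    ],
)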

ex_modify_instance_attribute and create_node give an AuthFailure error using the apache-libcloud AWS EC2 driver

When I use the AWS EC2 driver to invoke the create_node and ex_modify_instance_attribute APIs, I get this error:
raise InvalidCredsError(err_list[-1])
libcloud.common.types.InvalidCredsError: 'AuthFailure: AWS was not able to validate the provided access credentials'
But the ex_create_subnet / list_nodes API calls succeed, and I'm sure I have the IAM permissions to create EC2 instances.
By the way, I am using the AWS cn-north-1 region.
I found that create_node fails with AuthFailure when it is called with certain parameters.
The code:
node = self.conn.create_node(name=instance_name,
                             image=image,
                             size=size,
                             ex_keyname=ex_keyname,
                             ex_iamprofile=ex_iamprofile,
                             ex_subnet=ex_subnet,
                             ex_security_group_ids=ex_security_group_ids,
                             ex_mincount=ex_mincount,
                             ex_maxcount=ex_mincount,
                             ex_blockdevicemappings=config['block_devices'],
                             ex_assign_public_ip=config['eth0']['need_eip']
                             )
If I just delete some parameters, it works:
node = self.conn.create_node(name=instance_name,
                             image=image,
                             size=size,
                             ex_keyname=ex_keyname,
                             # ex_iamprofile=ex_iamprofile,
                             ex_subnet=ex_subnet,
                             # ex_security_group_ids=ex_security_group_ids,
                             ex_mincount=ex_mincount,
                             ex_maxcount=ex_mincount,
                             # ex_blockdevicemappings=config['block_devices'],
                             # ex_assign_public_ip=config['eth0']['need_eip']
                             )