CloudFormation AWS::RDS::DBInstance Error "Specifying IOPs is not allowed for this engine" - aws-cloudformation

I have this CloudFormation code
DatabasePrimaryInstanceAurora0:
  Type: AWS::RDS::DBInstance
  Condition: IsEnvPRO
  Properties:
    Engine: aurora-postgresql
    DBClusterIdentifier: !Ref Aurora0
    DBInstanceClass: db.t4g.medium
    DBInstanceIdentifier: postgres0-0
    DBSubnetGroupName: rds1
    StorageType: gp3
According to this https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbinstance.html#cfn-rds-dbinstance-storagetype, we should be good to go, but I got this error from CloudFormation:
Resource handler returned message: "Invalid storage type: gp3 (Service: Rds, Status Code: 400, Request ID: 6b75124d-2e57-47fd-bbf3-54ab4f217a82)" (RequestToken: f72ffede-b654-a065-0c5e-cf91c05473d9, HandlerErrorCode: InvalidRequest)
Thanks to #9bO3av5fw5, I added
Iops: 3000
But then I got...
Resource handler returned message: "Specifying IOPs is not allowed for this engine (Service: Rds, Status Code: 400, Request ID: 2a558dd8-9e6c-47c8-bef0-af674f05760b)" (RequestToken: 53f0e15e-53b7-d9f5-4817-9d2dcc34bef3, HandlerErrorCode: InvalidRequest)
My Engine:
Engine: aurora-postgresql
EngineMode: provisioned

The answer is here: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
General Purpose SSD gp3 storage is supported on Single-AZ and Multi-AZ DB instances, but isn't supported on Multi-AZ DB clusters. For more information, see Multi-AZ deployments for high availability and Multi-AZ DB cluster deployments.

Aurora has its own storage layer and doesn't use the same storage types as regular RDS instances, so you can't specify gp3 (or Iops) for Aurora databases. Remove the StorageType and Iops properties from the DB instance.
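In other words, dropping those properties is the fix; the resource from the question, minus StorageType and Iops, should create fine:
DatabasePrimaryInstanceAurora0:
  Type: AWS::RDS::DBInstance
  Condition: IsEnvPRO
  Properties:
    Engine: aurora-postgresql
    DBClusterIdentifier: !Ref Aurora0
    DBInstanceClass: db.t4g.medium
    DBInstanceIdentifier: postgres0-0
    DBSubnetGroupName: rds1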

Related

GKE autopilot cluster creation failure

I am trying to create a Composer environment using Terraform in GCP, and I can see that it is failing in one of the projects while creating the Kubernetes cluster in Autopilot mode; it works fine in the other 2 projects where we deployed in the same way.
So I tried to create an Autopilot Kubernetes cluster manually as well, and we are not able to track down what the issue is, as it only shows the error below:
(screenshot: k8s cluster creation error)
Error while trying it from the command line:
gcloud container clusters create-auto test \
--region europe-west2 \
--project=project-id
Note: The Pod address range limits the maximum size of the cluster. Please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr to learn how to optimize IP address allocation.
Creating cluster test in europe-west2... Cluster is being deployed...done.
ERROR: (gcloud.container.clusters.create-auto) Operation [<Operation
clusterConditions: [<StatusCondition
canonicalCode: CanonicalCodeValueValuesEnum(UNKNOWN, 2)
message: 'Failed to create cluster'>]
detail: 'Failed to create cluster'
endTime: '2022-05-31T20:00:07.8398558Z'
error: <Status
code: 2
details: []
message: 'Failed to create cluster'>
name: 'operation-1654027061293-a14298fa'
nodepoolConditions: []
operationType: OperationTypeValueValuesEnum(CREATE_CLUSTER, 1)
progress: <OperationProgress
metrics: [<Metric
intValue: 12
name: 'CLUSTER_CONFIGURING'>, <Metric
intValue: 12
name: 'CLUSTER_CONFIGURING_TOTAL'>, <Metric
intValue: 9
name: 'CLUSTER_DEPLOYING'>, <Metric
intValue: 9
name: 'CLUSTER_DEPLOYING_TOTAL'>]
stages: []>
selfLink: 'https://container.googleapis.com/v1/projects/projectid/locations/europe-west2/operations/operation-1654027061293-a14298fa'
startTime: '2022-05-31T19:57:41.293067757Z'
status: StatusValueValuesEnum(DONE, 3)
statusMessage: 'Failed to create cluster'
targetLink: 'https://container.googleapis.com/v1/projects/projectid/locations/europe-west2/clusters/test'
zone: 'europe-west2'>] finished with error: Failed to create cluster
The service account “service-xxxxxxxx@container-engine-robot.iam.gserviceaccount.com” needs the role Kubernetes Engine Service Agent (roles/container.serviceAgent); the missing role is what caused the k8s cluster creation to fail. After granting the permission, we were able to create clusters.
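A sketch of how that role can be granted from the command line (PROJECT_ID and PROJECT_NUMBER are placeholders for your own values):
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com" \
  --role="roles/container.serviceAgent"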

Strimzi Kafka Using local Node Storage

I am running Kafka on Kubernetes (deployed on Azure) using Strimzi for a development environment and would prefer to use internal Kubernetes node storage. If I use persistent-claim or jbod, it creates standard disks on Azure storage. However, I prefer to use the internal node storage as I have 16 GB available there. I do not want to use ephemeral as I want the data to be persisted, at least on the Kubernetes nodes.
Following is my deployment.yml:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: kafka-cluster
spec:
  kafka:
    version: 3.1.0
    replicas: 2
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
      - name: external
        type: loadbalancer
        tls: false
        port: 9094
    config:
      offsets.topic.replication.factor: 2
      transaction.state.log.replication.factor: 2
      transaction.state.log.min.isr: 2
      default.replication.factor: 2
      min.insync.replicas: 2
      inter.broker.protocol.version: "3.1"
    storage:
      type: persistent-claim
      size: 2Gi
      deleteClaim: false
  zookeeper:
    replicas: 2
    storage:
      type: persistent-claim
      size: 2Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
The persistent-claim storage as you use it will provision the storage using the default storage class, which in your case I guess creates Azure standard disks.
You have two options for using the local disk space of the worker nodes:
You can use the ephemeral type storage. But keep in mind that this is like a temporary directory: it will be lost on every rolling update, and if you, for example, delete all the pods at the same time, you will lose all data. As such, it is recommended only for short-lived clusters in CI, maybe some short development work, etc., but certainly not for anything where you need reliability.
You can use Local Persistent Volumes, which are persistent volumes bound to a particular node. These are persistent, so the pods will re-use the volume between restarts and rolling updates. However, it binds the pod to the particular worker node the storage is on -> so you cannot easily reschedule it to another worker node. But apart from these limitations, it is something that (unlike the ephemeral storage) can be used with reliability and availability when done right. Local persistent volumes are normally provisioned through a StorageClass as well -> so in the Kafka custom resource in Strimzi you would still use the persistent-claim type storage, just with a different storage class, as sketched below.
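For example, assuming a StorageClass named local-storage backed by local persistent volumes already exists (the class name is just a placeholder), the Kafka storage section could look like:
    storage:
      type: persistent-claim
      size: 2Gi
      class: local-storage   # assumed StorageClass backed by local PVs
      deleteClaim: false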
You should really think about what exactly you want to use and why. From my experience, local persistent volumes are a great option when:
You run on bare metal / on-premise clusters where often good shared block storage is not available
When you require maximum performance (local storage does not depend on network, so it can be often faster)
But in public clouds with good support for high-quality networked block storage, such as Amazon EBS volumes and their Azure or Google counterparts, local storage often brings more problems than advantages because of how it binds your Kafka brokers to a particular worker node.
Some more details about local persistent volumes can be found here: https://kubernetes.io/docs/concepts/storage/volumes/#local ... there are also different provisioners which can help you use them. I'm not sure whether Azure supports anything out of the box.
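A rough sketch of statically provisioned local storage (the StorageClass name, path, and node name are placeholders):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # static provisioning only
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-local-pv-0
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/kafka-0               # placeholder path on the worker node
  nodeAffinity:                            # required for local volumes; pins the PV to one node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - aks-nodepool1-00000000-vmss000000   # placeholder node name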
Sidenote: 2Gi of space is very small for Kafka. I'm not sure how much you will be able to do before running out of disk space. Even 16Gi would be quite small. If you know what you are doing, then fine. But if not, you should be careful.

How Do I Attach an ASG to an ALB Target Group?

In AWS' Cloudformation, how do I attach an Autoscaling Group (ASG) to an Application Load Balancer Target Group?
There does not appear to be any way to do that directly in a CloudFormation Template (CFT), though it is possible using the AWS CLI or API. The AWS::ElasticLoadBalancingV2::TargetGroup resource only offers these target types:
instance. Targets are specified by instance ID.
ip. Targets are specified by IP address.
lambda. The target group contains a single Lambda function.
That is because, apparently, one does not attach an ASG to a target group; instead, one attaches a target group or groups to an ASG.
Seems a little backwards to me, but I'm sure it has to do with the ASG needing to register/deregister its instances as it scales in and out.
See the documentation for the AWS::AutoScaling::AutoScalingGroup resource for details.
Example:
TargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    VpcId: !Ref VPC
    TargetType: instance
    Port: 80
    Protocol: HTTP
AutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    # Two intrinsic functions can't be nested in short form, so use the full
    # form of Fn::GetAZs here.
    AvailabilityZones:
      Fn::GetAZs: !Ref "AWS::Region"
    MaxSize: "3"
    MinSize: "1"
    TargetGroupArns:
      - !Ref TargetGroup
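Note that an Auto Scaling group also needs a LaunchConfigurationName, LaunchTemplate, or MixedInstancesPolicy to actually launch instances; a minimal launch template sketch (the AMI ID and instance type are placeholders):
LaunchTemplate:
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateData:
      ImageId: ami-xxxxxxxxxxxxxxxxx   # placeholder AMI ID
      InstanceType: t3.micro
...referenced from the AutoScalingGroup's Properties:
    LaunchTemplate:
      LaunchTemplateId: !Ref LaunchTemplate
      Version: !GetAtt LaunchTemplate.LatestVersionNumber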

Kubernetes error building cluster, utility subnet can't be found

Why is it that when I try to update a new Kubernetes cluster it gives the following error:
$ kops update cluster --name k8s-web-dev
error building tasks: could not find utility subnet in zone: "us-east-1b"
I have not been able to deploy it into AWS yet. It only creates configs inside S3.
Also, because I have private and public subnets, I am manually updating the k8s config to point to the correct subnet IDs, e.g. the IDs below were added manually.
subnets:
- cidr: 10.0.0.0/19
  id: subnet-3724bb40
  name: us-east-1b
  type: Private
  zone: us-east-1b
- cidr: 10.0.64.0/19
  id: subnet-918a35c8
  name: us-east-1c
  type: Private
  zone: us-east-1c
- cidr: 10.0.32.0/20
  id: subnet-4824bb3f
  name: utility-us-east-1b
  type: Public
  zone: us-east-1b
- cidr: 10.0.96.0/20
  id: subnet-908a35c9
  name: utility-us-east-1c
  type: Public
  zone: us-east-1c
Also, interestingly enough, I made no change to my config, yet when I run kops update once and then once more, I get two different results. How is that possible?
kops update cluster --name $n
error building tasks: could not find utility subnet in zone: "us-east-1c"
and then this
kops update cluster --name $n
error building tasks: could not find utility subnet in zone: "us-east-1b"
Using the --bastion parameter within the kops command-line options assumes that a bastion instance group is already in place. To create the bastion instance group you can use the --role flag:
kops create instancegroup bastions --role Bastion --subnet $SUBNET
Check the kops documentation on bastions for more information.
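For example, with the subnet layout above, $SUBNET would be one of the utility subnets and the cluster name comes from the question (a sketch, not verified against your cluster spec):
kops create instancegroup bastions --role Bastion --subnet utility-us-east-1b --name k8s-web-dev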

Kubernetes cannot attach AWS EBS as volume. Probably due to cloud provider issue

I have a Kubernetes cluster up and running on AWS. Now when I try to attach an AWS EBS volume to a pod, I get a "special device does not exist" problem.
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-xxxxxxx does not exist
I did some research and found that the correct AWS EBS device path should be like this format:
/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-west-2a/vol-xxxxxxxx
My doubt is that it might be because I set up the Kubernetes cluster according to this tutorial and did not set the cloud provider, and therefore the AWS device "does not exist". I wonder if my doubt is correct, and if yes, how to set the cloud provider after the cluster is already up and running.
You need to set the cloud provider to properly mount an EBS volume. To do that after the fact, set --cloud-provider=aws on the following components:
controller-manager
apiserver
kubelet
Restart everything and try mounting again.
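Exactly where the flag goes depends on how the cluster was installed (the tutorial isn't named here); assuming a kubeadm-style layout, a rough sketch would be:
# Assumed kubeadm-style paths; adjust for your installer.
# Control plane: add "--cloud-provider=aws" to the command list in the static pod manifests
#   /etc/kubernetes/manifests/kube-apiserver.yaml
#   /etc/kubernetes/manifests/kube-controller-manager.yaml
# Every node: pass the flag to the kubelet, e.g. via KUBELET_EXTRA_ARGS:
echo 'KUBELET_EXTRA_ARGS=--cloud-provider=aws' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet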
An example pod which mounts an EBS volume explicitly may look like this:
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    # This AWS EBS volume must already exist.
    awsElasticBlockStore:
      volumeID: <volume-id>
      fsType: ext4
The Kubernetes version is an important factor here. EBS mounts were experimental in 1.2.x; I tried it then but without success. I have not tried it again in the latest releases, but be sure to check the IAM roles on the k8s VMs to make sure they have the rights to provision EBS disks.