I have created a brand new Redshift cluster with Terraform. I am able to connect to it and run the COPY command, but even for small CSV files with 10 lines it takes forever. I have rebooted the cluster, but it is the same. There are also no errors in the stl_load_errors table.
Has anyone experienced a similar issue?
Here is the Terraform code for creating the VPC endpoint. Maybe at some point I will create a complete template for Redshift and update this post.
resource "aws_vpc_endpoint" "s3_vpc_endpoint" {
vpc_id = aws_vpc.vpc.id
service_name = "com.amazonaws.${var.region}.s3"
}
resource "aws_vpc_endpoint_route_table_association" "s3_vpc_endpoint_route_table_association" {
route_table_id = aws_route_table.route_table.id
vpc_endpoint_id = aws_vpc_endpoint.s3_vpc_endpoint.id
}
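For context: with enhanced VPC routing enabled, COPY/UNLOAD traffic is forced through the VPC, so without the gateway endpoint above the command hangs instead of failing, which would also explain the empty stl_load_errors. A rough sketch of the cluster side, with illustrative names and variables that are not from my actual template:

resource "aws_redshift_cluster" "example" {
  cluster_identifier        = "example-cluster"                   # illustrative name
  node_type                 = "dc2.large"
  database_name             = "dev"
  master_username           = var.master_username
  master_password           = var.master_password
  cluster_subnet_group_name = aws_redshift_subnet_group.example.name
  iam_roles                 = [aws_iam_role.redshift_s3_read.arn] # role referenced by COPY ... IAM_ROLE
  enhanced_vpc_routing      = true                                # routes COPY/UNLOAD traffic through the VPC, i.e. through the S3 endpoint above
  skip_final_snapshot       = true
}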
I have a cluster in ECS with about 20+ services all happily running in it.
I've just uploaded a new image which I want to set up as a daily task. I can create it as a task and run it - the logs indicate it is running to completion.
I've gone into EventBridge and created a Rule, set the detail and cron, selected the target (AWS service), then selected ECS task, but when I open the Cluster dropdown it is empty - I can't select a cluster; there are none.
Is this a security issue perhaps or am I missing something elsewhere - can't this be done?
Any help would be much appreciated.
Eventually managed to get this to work. The problem was that I was starting the EventBridge creation process in the wrong region - rookie mistake - so it couldn't see the cluster in the other region. D'Oh!
I have a Redshift cluster in a private subnet, and I have attached an IAM role with a policy granting full permissions to S3.
But the COPY or UNLOAD command still gets stuck and eventually times out. Any thoughts?
It seems I have to make some network or connectivity changes.
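If it is the same root cause as the post above, the missing piece would presumably be an S3 gateway endpoint associated with the private subnet's route table - the IAM role alone is not enough, since COPY/UNLOAD still needs a network path to S3. Something roughly like this (resource names are illustrative):

resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main.id                  # illustrative references
  service_name      = "com.amazonaws.${var.region}.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id]     # must be the private subnet's route table
}

Alternatively, a NAT gateway on that route table would also give the cluster a path to S3.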
I am planning to use Postgres as the remote backend instead of S3, as that is our enterprise standard.
terraform {
  backend "pg" {
    conn_str = "postgres://user:pass@db.example.com/schema_name"
  }
}
When we use the Postgres remote backend and run terraform init, we have to provide a schema that is specific to that Terraform folder, as the backend supports only one table and a new record is created with the workspace name.
I am stuck now: I have 50 projects, each with 2 tiers maintained in different folders, so we would need to create 100 schemas in Postgres. It is also difficult to handle so many schemas in automated provisioning.
Can we handle this similarly to S3, where we have one bucket for all projects and multiple entries in the same bucket with a different key specified in each Terraform script? Can we use a single schema for all projects, with multiple tables/records based on a key provided in the backend configuration of each Terraform folder?
You can use a single database, and the pg backend will automatically create the specified schema.
Something like this:
terraform {
  backend "pg" {
    conn_str = "postgres://user:pass@db.example.com/terraform_backend"
    schema   = "fooapp"
  }
}
This keeps the projects unique, at least. You could append a tier to that, too, or use Terraform Workspaces.
If you specify the config on the command line (aka partial configuration), as the backend documentation recommends, it might be easier to set this dynamically for your use case:
terraform init \
  -backend-config="conn_str=postgres://user:pass@db.example.com/terraform_backend" \
  -backend-config="schema=fooapp-prod"
This works pretty well in my scenario, which is similar to yours. Each project has a unique schema in a shared database, and no tasks beyond the initial creation/configuration of the database are needed - the backend creates the schema as specified.
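For completeness, the per-folder side of that partial-configuration pattern can then shrink to an empty backend block, with conn_str and schema supplied at init time as shown above (a sketch, not something you must copy verbatim):

terraform {
  # conn_str and schema are injected via `terraform init -backend-config=...`,
  # so the same folder layout can target a different schema per project/tier.
  backend "pg" {}
}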
I am currently creating an RDS instance per account for several different AWS accounts. I use CloudFormation scripts for this.
When creating these databases I would like them to have a similar structure. I created an SQL script which I can successfully run manually after the stack has been created. I would like, however, to execute it automatically as part of running the script.
My solution so far is to create an EC2 instance with a dependency on the RDS instance, run it once, and then manually delete it later, but this is not a suitable solution. I couldn't find any other way, though.
Is it possible to run a query as part of a CloudFormation script?
FYI: I'm creating a PostgreSQL 11.5 instance.
The proper way is to use custom resources, but that requires some new development. If you already have an EC2 instance that populates the RDS from its UserData, you can automate its termination as follows:
Set InstanceInitiatedShutdownBehavior to terminate.
At the end of UserData, execute shutdown -h now to shut down the instance.
Since your shutdown behavior is terminate, the instance will be terminated automatically.
Currently we have our Kubernetes cluster master set to zonal, and require it to be regional. My idea is to convert the existing cluster and all workloads/nodes/resources to some infrastructure as code - preferably Terraform (but it could be as simple as a set of gcloud commands).
I know that with GCP I can generate raw command lines for commands I'm about to run, but I don't know how (or if I even can) convert existing infrastructure to the same.
Based on my research, it looks like it isn't exactly possible to do what I'm trying to do [in a straight-forward fashion]. So I'm looking for any advice, even if it's just to read some other documentation (for a tool I'm not familiar with maybe).
TL;DR: I'm looking to take my existing Google Cloud Platform Kubernetes cluster and rebuild it in order to change the location type from zonal to regional - I don't actually care how this is done. What is a currently accepted best-practice way of doing this? If there isn't one, what is a quick and dirty way of doing this?
If you require me to specify further, I will - I have intentionally left out linking to specific research I've done.
Creating a Kubernetes cluster with Terraform is very straightforward, because ultimately making a Kubernetes cluster in GKE is straightforward: you just use the google_container_cluster and google_container_node_pool resources, like so:
resource "google_container_cluster" "primary" {
name = "${var.name}"
region = "${var.region}"
project = "${var.project_id}"
min_master_version = "${var.version}"
addons_config {
kubernetes_dashboard {
disabled = true
}
}
maintenance_policy {
daily_maintenance_window {
start_time = "03:00"
}
}
lifecycle {
ignore_changes = ["node_pool"]
}
node_pool {
name = "default-pool"
}
}
resource "google_container_node_pool" "default" {
name = "default"
project = "${var.project_id}"
region = "${var.region}"
cluster = "${google_container_cluster.primary.name}"
autoscaling {
min_node_count = "${var.node_pool_min_size}"
max_node_count = "${var.node_pool_max_size}"
}
management {
auto_repair = "${var.node_auto_repair}"
auto_upgrade = "${var.node_auto_upgrade}"
}
lifecycle {
ignore_changes = ["initial_node_count"]
}
node_config {
machine_type = "${var.node_machine_type}"
oauth_scopes = [
"https://www.googleapis.com/auth/cloud-platform",
]
}
depends_on = ["google_container_cluster.primary"]
}
For a more fully featured experience, there are Terraform modules available, like this one.
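One widely used example (not necessarily the module linked above) is the community terraform-google-modules/kubernetes-engine module; a rough sketch of calling it, with input names from memory that may differ between module versions:

module "gke" {
  source = "terraform-google-modules/kubernetes-engine/google"

  project_id        = "${var.project_id}"
  name              = "my-regional-cluster"      # illustrative name
  region            = "${var.region}"
  network           = "${var.network}"
  subnetwork        = "${var.subnetwork}"
  ip_range_pods     = "${var.ip_range_pods}"     # secondary ranges for a VPC-native cluster
  ip_range_services = "${var.ip_range_services}"
}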
Converting an existing cluster is considerably more fraught. If you want to use terraform import, you can bring the existing cluster into state with something like:
terraform import google_container_cluster.mycluster us-east1-a/my-cluster
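Keep in mind that terraform import only writes to state; a matching resource block must already exist in the configuration for the cluster to map onto. A minimal stub could be as small as this (the stub only needs to satisfy validation; real values come from the imported state, and on newer providers the attribute is location rather than zone):

resource "google_container_cluster" "mycluster" {
  name               = "my-cluster"
  zone               = "us-east1-a" # the existing cluster's zone
  initial_node_count = 1            # may be required by older provider versions to validate
}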
However, in your comment, you mentioned wanting to convert a zonal cluster to a regional cluster. Unfortunately, that's not possible at this time:
You decide whether your cluster is zonal or regional when you create it. You cannot convert an existing zonal cluster to regional, or vice versa.
Your best bet, in my opinion, is to:
Create a regional cluster with terraform, giving the cluster a new name
Back up your existing zonal cluster, either using an etcd backup or a more sophisticated backup using heptio-ark
Restore that backup to your regional cluster
I wanted to achieve exactly that: take existing cloud infrastructure and bring it to infrastructure as code (IaC), i.e. put it in *.tf files.
There were basically 2 options that I found and took into consideration:
terraform import (Documentation)
Because of the following limitation, terraform import did not achieve exactly what I was looking for, because it requires manually creating the resource configuration.
The current implementation of Terraform import can only import resources into the state. It does not generate configuration. A future version of Terraform will also generate configuration.
Because of this, prior to running terraform import it is necessary to write manually a resource configuration block for the resource, to which the imported object will be mapped.
Terraformer (GitHub Repo)
A CLI tool that generates tf/json and tfstate files based on existing infrastructure (reverse Terraform).
This tool is provider-agnostic and follows the same flow as Terraform, i.e. plan and import. It was able to import specific resources or entire workspaces and convert them into *.tf files.