I am using Terraform to script the sharing of private hosted zones with other AWS accounts.
Step 1: (In Account A) Create 3 private hosted zones with Account A's VPC attached
Step 2: (In Account A) Create authorizations for Account B's VPC and Account C's VPC
Step 3: (In Accounts B and C, using an assumed role) Associate each VPC with Account A's private hosted zones
However, in step 3, the following errors occur:
2 error(s) occurred:
* module.assciation_mtc.null_resource.associate_with_remote_zone[1]: Error running command 'aws route53 associate-vpc-with-hosted-zone --hosted-zone-id Z13ZRIFNAA9HJT --vpc VPCRegion=ap-southeast-1,VPCId=vpc-05dc595cd7378171d': exit status 255. Output:
An error occurred (NotAuthorizedException) when calling the AssociateVPCWithHostedZone operation: The VPC: vpc-05dc595cd7378171d has not authorized to associate with your hosted zone.
* module.assciation_mtc.null_resource.associate_with_remote_zone[0]: Error running command 'aws route53 associate-vpc-with-hosted-zone --hosted-zone-id Z2N32DMQZFGH6V --vpc VPCRegion=ap-southeast-1,VPCId=vpc-05dc595cd7378171d': exit status 255. Output:
An error occurred (NotAuthorizedException) when calling the AssociateVPCWithHostedZone operation: The VPC: vpc-05dc595cd7378171d has not authorized to associate with your hosted zone.
I have tried running the exact command with the AWS CLI and it works, but I don't know why it fails when Terraform executes it.
Command tried in Account B and Account C:
aws route53 associate-vpc-with-hosted-zone --hosted-zone-id Z2N32DMQZFGH6V --vpc VPCRegion=ap-southeast-1,VPCId=vpc-05dc595cd7378171d
Terraform Folder Hierarchy:
route53
  create_zone
  authorization_create
  association
/route53/main.tf
//authorize each zone with all vpc
module "authorize_zone_ss" {
  source = "./authorization_create"

  providers {
    aws = "aws.provider_ss"
  }

  zone_id  = "${module.creat_zone.zone_ids[0]}"
  zone_ids = ["${module.creat_zone.zone_ids}"]
  vpc_ids  = ["${var.vpc_ids}"]
}

//associate each vpc to all zone
module "assciation_mtc" {
  source = "./association"

  providers {
    aws = "aws.provider_mtc"
  }

  zone_ids = ["${module.creat_zone.zone_ids}"]
  vpc_id   = "${var.vpc_ids[2]}"
}
/route53/authorization_create/main.tf
data "aws_region" "current" {}
//associate 1 private zone with all account's vpc
resource "null_resource" "create_remote_zone_auth" {
count = "${var.zone_number -1}"
triggers {
vpc_id = "${element(var.vpc_ids, count.index +1)}"
}
provisioner "local-exec" {
command = "aws route53 create-vpc-association-authorization --hosted-zone-id ${var.zone_id} --vpc VPCRegion=${data.aws_region.current.name},VPCId=${element(var.vpc_ids, count.index +1)}"
}
}
/route53/association/main.tf
data "aws_region" "current" {}
//associate this vpc to all route 53 private zone
resource "null_resource" "associate_with_remote_zone" {
count = "${var.vpc_number -1}"
triggers {
zone_id = "${element(var.zone_ids, count.index +1)}"
}
provisioner "local-exec" {
command = "aws route53 associate-vpc-with-hosted-zone --hosted-zone-id ${element(var.zone_ids,count.index)} --vpc VPCRegion=${data.aws_region.current.name},VPCId=${var.vpc_id}"
}
}
Expected Results:
All accounts' VPCs (Accounts A, B, C) are authorized and associated with all the zones.
ie.
Account A zone 1: Associated with Account A/B/C's VPC
Account A zone 2: Associated with Account A/B/C's VPC
Account A zone 3: Associated with Account A/B/C's VPC
Actual Results:
An error occurs when executing the command: associate-vpc-with-hosted-zone
Reference :
https://medium.com/@dalethestirling/managing-route53-cross-account-zone-associations-with-terraform-e1e45de8f3ea
You have to assume the role you want to use before running the local-exec command:
aws sts assume-role --role-arn 'arn-of-role' --role-session-name 'role_session_name' --duration-seconds 3600 --output json
Then export the values (aws_creds holds the JSON output of the assume-role call) so they are picked up by Terraform as environment variables:
export AWS_ACCESS_KEY_ID=$(echo "${aws_creds}" | grep AccessKeyId | awk -F'"' '{print $4}' )
export AWS_SECRET_ACCESS_KEY=$(echo "${aws_creds}" | grep SecretAccessKey | awk -F'"' '{print $4}' )
export AWS_SESSION_TOKEN=$(echo "${aws_creds}" | grep SessionToken | awk -F'"' '{print $4}' )
export AWS_SECURITY_TOKEN=$(echo "${aws_creds}" | grep SessionToken | awk -F'"' '{print $4}' )
Run the above with the relevant details for your environment, and then run the authorization / association afterwards.
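As a sketch (not from the original answer) of wiring this into Terraform itself, the assume-role call can be chained into the same local-exec shell as the association command, so the AWS CLI runs with the assumed credentials. The variable association_role_arn below is hypothetical; zone_id and vpc_id refer to the variables already used in the association module:

resource "null_resource" "associate_with_remote_zone" {
  provisioner "local-exec" {
    command = <<EOF
# Assume the role in the remote account (association_role_arn is a hypothetical variable)
creds=$(aws sts assume-role --role-arn "${var.association_role_arn}" --role-session-name route53-association --duration-seconds 3600 --output json)
export AWS_ACCESS_KEY_ID=$(echo "$creds" | grep AccessKeyId | awk -F'"' '{print $4}')
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | grep SecretAccessKey | awk -F'"' '{print $4}')
export AWS_SESSION_TOKEN=$(echo "$creds" | grep SessionToken | awk -F'"' '{print $4}')
# Run the association with the assumed-role credentials
aws route53 associate-vpc-with-hosted-zone --hosted-zone-id ${var.zone_id} --vpc VPCRegion=${data.aws_region.current.name},VPCId=${var.vpc_id}
EOF
  }
}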
I have found the problem.
Terraform doesn't assume the role in the null_resource block, so the command is executed using the original Terraform credentials.
I am still trying out solutions.
Can anyone help?
Terraform now supports aws_route53_vpc_association_authorization, which authorizes a VPC in a peer account to be associated with a local Route 53 hosted zone.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_vpc_association_authorization
The solution can now be implemented natively in Terraform; there is no need for a null_resource wrapping the CLI.
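For one zone and one remote VPC, a minimal sketch of the native flow could look like the following (the provider aliases aws.account_a / aws.account_b, the zone resource aws_route53_zone.private, and the variable account_b_vpc_id are illustrative names, not taken from the original configuration):

# In Account A: authorize the remote VPC to be associated with the private zone
resource "aws_route53_vpc_association_authorization" "account_b" {
  provider = aws.account_a
  zone_id  = aws_route53_zone.private.zone_id
  vpc_id   = var.account_b_vpc_id
}

# In Account B: associate the authorized VPC with Account A's zone
resource "aws_route53_zone_association" "account_b" {
  provider = aws.account_b
  zone_id  = aws_route53_vpc_association_authorization.account_b.zone_id
  vpc_id   = aws_route53_vpc_association_authorization.account_b.vpc_id
}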
Related
I am trying to assign contributor rights on a resource group to an Azure Active Directory Group using Terraform. The Terraform script I use looks like this:
# Deploy Resource Groups
resource "azurerm_resource_group" "rg" {
  name     = "rg-companyname-syn-env-100"
  location = "westeurope"
}

# Retrieve data for AAD CloudAdmin groups
data "azuread_group" "cloud_admin" {
  display_name     = "AAD-GRP-companyname-CloudAdministrators-env"
  security_enabled = true
}

# Add "Contributor" role to Cloudadmin AAD group
resource "azurerm_role_assignment" "cloud_admin" {
  scope                = azurerm_resource_group.rg.id
  role_definition_name = "Contributor"
  principal_id         = data.azuread_group.cloud_admin.id

  depends_on = [azurerm_resource_group.rg]
}
If I run this I receive the following error:
╷
│ Error: authorization.RoleAssignmentsClient#Create: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client '<application_object_id>' with object id '<application_object_id>' does not have authorization to perform action 'Microsoft.Authorization/roleAssignments/write' over scope '/subscriptions/<subscription_id>/resourceGroups/rg-companyname-syn-env-100/providers/Microsoft.Authorization/roleAssignments/<role_assignment_id>' or the scope is invalid. If access was recently granted, please refresh your credentials."
│
│ with azurerm_role_assignment.cloud_admin["syn"],
│ on rg.tf line 15, in resource "azurerm_role_assignment" "cloud_admin":
│ 15: resource "azurerm_role_assignment" "cloud_admin" {
│
╵
Note the AAD Group (AAD-GRP-companyname-CloudAdministrators-env) already has the Owner role on the subscription used.
Does anybody know a fix for this problem?
I had this issue occur today after someone else on the team changed the service principal my deployments run under to a Contributor rather than an Owner of the subscription. Assign the Owner role on the subscription to the service principal your deployments run under.
Whichever principal you are using (you, a service principal, or a managed identity assigned to a build agent) likely has the built-in Azure Contributor role, which can manage resources but not Role-Based Access Control (RBAC). That is the reason you are getting a 403 Unauthorized response.
The Contributor role is allowed to read role assignments, but not write them. Since you are using Terraform, I would suggest creating a custom role definition that allows write as well as delete, so you can also use terraform destroy.
You can create a custom role definition by clicky-clicking in the portal, with the Azure CLI, or with Terraform (snippet below); it must be executed by someone with the Owner role.
Once you have a custom role definition with the appropriate permissions, assign it to the principal that executes terraform apply.
data "azurerm_client_config" "current" {
}
resource "azurerm_role_definition" "role_assignment_write_delete" {
name = "RBAC Owner"
scope = data.azurerm_client_config.current.subscription_id
description = "Management of role assignments"
permissions {
actions = [
"Microsoft.Authorization/roleAssignments/write",
"Microsoft.Authorization/roleAssignments/delete",
]
not_actions = []
}
assignable_scopes = [
data.azurerm_client_config.current.subscription_id //or management group
]
}
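Once the custom role exists, the assignment itself could look roughly like this sketch (assuming an azurerm_subscription data source for the fully qualified scope, and that azurerm_client_config returns the principal running terraform apply):

data "azurerm_subscription" "current" {}

# Assign the custom role to the principal that runs terraform apply
resource "azurerm_role_assignment" "terraform_rbac_owner" {
  scope              = data.azurerm_subscription.current.id
  role_definition_id = azurerm_role_definition.role_assignment_write_delete.role_definition_resource_id
  principal_id       = data.azurerm_client_config.current.object_id
}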
I am unsure how to restore an AWS documentdb cluster that is managed by terraform.
My terraform setup looks like this:
resource "aws_docdb_cluster" "this" {
cluster_identifier = var.env_name
engine = "docdb"
engine_version = "4.0.0"
master_username = "USERNAME"
master_password = random_password.this.result
db_cluster_parameter_group_name = aws_docdb_cluster_parameter_group.this.name
availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
db_subnet_group_name = aws_docdb_subnet_group.this.name
deletion_protection = true
backup_retention_period = 7
preferred_backup_window = "07:00-09:00"
skip_final_snapshot = false
# Added on 6.25.22 to rollback an incorrect application of the namespace
# migration, which occurred at 2AM EST on June 23.
snapshot_identifier = "...the arn for the snapshot..."
}
resource "aws_docdb_cluster_instance" "this_2a" {
count = 1
engine = "docdb"
availability_zone = "us-east-1a"
auto_minor_version_upgrade = true
cluster_identifier = aws_docdb_cluster.this.id
instance_class = "db.r5.large"
}
resource "aws_docdb_cluster_instance" "this_2b" {
count = 1
engine = "docdb"
availability_zone = "us-east-1b"
auto_minor_version_upgrade = true
cluster_identifier = aws_docdb_cluster.this.id
instance_class = "db.r5.large"
}
resource "aws_docdb_subnet_group" "this" {
name = var.env_name
subnet_ids = module.vpc.private_subnets
}
I added the snapshot_identifier parameter and applied it, expecting a rollback. However, this did not have the intended effect of restoring the DocumentDB state to its settings on June 23rd. (As far as I can tell, nothing changed at all.)
I wanted to avoid the AWS console approach (described here) because that creates a new cluster which won't be tracked by Terraform.
What is the proper way of accomplishing this rollback using terraform?
The snapshot_identifier parameter is only used when Terraform creates a new cluster. Setting it after the cluster has been created just tells Terraform "If you ever have to recreate this cluster, use this snapshot".
To actually get Terraform to recreate the cluster you would need to do something else to make Terraform think the cluster needs to be recreated. Possible options are:
Run terraform taint aws_docdb_cluster.this to signal to Terraform that the resource needs to be recreated. It will then recreate it the next time you run terraform apply.
Delete the cluster through some other means, like the AWS console, and then run terraform apply.
The general approach is this, but I have no experience with DocumentDB. Hope this helps.
0. Take a backup of your Terraform state file: terraform state pull > backup_state_file_timestamp.json
1. Restore through the console to the point in time you want.
2. Remove the old instances and cluster from your Terraform state:
terraform state rm aws_docdb_cluster_instance.this_2a
terraform state rm aws_docdb_cluster_instance.this_2b
terraform state rm aws_docdb_cluster.this
3. Import the manually restored cluster and instances into Terraform:
terraform import aws_docdb_cluster.this cluster_identifier
terraform import aws_docdb_cluster_instance.this_2a identifier
terraform import aws_docdb_cluster_instance.this_2b identifier
(see the Import section at the bottom of https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/docdb_cluster_instance and https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/docdb_cluster)
I need to create a GKE cluster, then create a namespace and install a DB through Helm into that namespace. Right now I have gke-cluster.tf, which creates the cluster with a node pool, and helm.tf, which has the kubernetes provider and a helm_release resource. It first creates the cluster, but then tries to install the DB while the namespace doesn't exist yet, so I have to run terraform apply again and then it works. I want to avoid a multi-folder setup and run terraform apply only once. What's good practice for a situation like this? Thanks for the answers.
The create_namespace argument of the helm_release resource can help you.
create_namespace - (Optional) Create the namespace if it does not yet exist. Defaults to false.
https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release#create_namespace
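A minimal sketch of this, reusing the release and chart names from the alternative shown below, could look like:

resource "helm_release" "arango-crd" {
  name      = "arango-crd"
  chart     = "./kube-arangodb-crd"
  namespace = "prod"

  # create the "prod" namespace as part of the release instead of a separate resource
  create_namespace = true
}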
Alternatively, you can define a dependency between the namespace resource and helm_release like below:
resource "kubernetes_namespace" "prod" {
metadata {
annotations = {
name = "prod-namespace"
}
labels = {
namespace = "prod"
}
name = "prod"
}
}
resource "helm_release" "arango-crd" {
name = "arango-crd"
chart = "./kube-arangodb-crd"
namespace = "prod"
depends_on = [ kubernetes_namespace.prod ]
}
The solution posted by user adp is correct, but I wanted to give more insight on using Terraform for this particular example with regard to running a single command:
$ terraform apply --auto-approve
Based on the following comments:
Can you tell how are you creating your namespace? Is it with kubernetes provider? - Dawid Kruk
resource "kubernetes_namespace" - Jozef Vrana
This setup needs a specific order of execution: first the cluster, then the resources. By default, Terraform will try to create all of the resources at the same time, so it is crucial to use the depends_on = [VALUE] parameter.
The next issue is that the kubernetes provider will try to fetch the credentials from ~/.kube/config at the start of the process. It will not wait for the cluster provisioning to get the actual credentials. It could:
fail when there is no .kube/config
fetch credentials for the wrong cluster.
There is an ongoing feature request to resolve this kind of use case (there are also some workarounds):
Github.com: Hashicorp: Terraform: Issue: depends_on for providers
As an example:
# Create cluster
resource "google_container_cluster" "gke-terraform" {
  project            = "PROJECT_ID"
  name               = "gke-terraform"
  location           = var.zone
  initial_node_count = 1
}

# Get the credentials
resource "null_resource" "get-credentials" {
  depends_on = [google_container_cluster.gke-terraform]

  provisioner "local-exec" {
    command = "gcloud container clusters get-credentials ${google_container_cluster.gke-terraform.name} --zone=europe-west3-c"
  }
}

# Create a namespace
resource "kubernetes_namespace" "awesome-namespace" {
  depends_on = [null_resource.get-credentials]

  metadata {
    name = "awesome-namespace"
  }
}
Assuming that you had earlier configured a cluster to work with and didn't delete it:
Credentials for the Kubernetes cluster are fetched.
Terraform will create a cluster named gke-terraform.
Terraform will run a local command to get the credentials for the gke-terraform cluster.
Terraform will create a namespace (using the old information):
if you had another cluster configured in .kube/config, it will create the namespace in that (previous) cluster
if you deleted your previous cluster, it will try to create the namespace in that (previous) cluster and fail
if you had no .kube/config, it will fail at the start
Important!
The "helm_release" resource seems to fetch the credentials when provisioning the resources, not at the start!
As said, you can use the helm provider to provision the resources on your cluster and avoid the issues I described above.
Example of running a single command to create a cluster and provision resources on it:
variable "zone" {
  type    = string
  default = "europe-west3-c"
}

resource "google_container_cluster" "gke-terraform" {
  project            = "PROJECT_ID"
  name               = "gke-terraform"
  location           = var.zone
  initial_node_count = 1
}

data "google_container_cluster" "gke-terraform" {
  project  = "PROJECT_ID"
  name     = "gke-terraform"
  location = var.zone
}

resource "null_resource" "get-credentials" {
  # do not start before resource gke-terraform is provisioned
  depends_on = [google_container_cluster.gke-terraform]

  provisioner "local-exec" {
    command = "gcloud container clusters get-credentials ${google_container_cluster.gke-terraform.name} --zone=${var.zone}"
  }
}

resource "helm_release" "mydatabase" {
  name  = "mydatabase"
  chart = "stable/mariadb"

  # do not start before the get-credentials resource is run
  depends_on = [null_resource.get-credentials]

  set {
    name  = "mariadbUser"
    value = "foo"
  }

  set {
    name  = "mariadbPassword"
    value = "qux"
  }
}
Using the above configuration will yield:
data.google_container_cluster.gke-terraform: Refreshing state...
google_container_cluster.gke-terraform: Creating...
google_container_cluster.gke-terraform: Still creating... [10s elapsed]
<--OMITTED-->
google_container_cluster.gke-terraform: Still creating... [2m30s elapsed]
google_container_cluster.gke-terraform: Creation complete after 2m38s [id=projects/PROJECT_ID/locations/europe-west3-c/clusters/gke-terraform]
null_resource.get-credentials: Creating...
null_resource.get-credentials: Provisioning with 'local-exec'...
null_resource.get-credentials (local-exec): Executing: ["/bin/sh" "-c" "gcloud container clusters get-credentials gke-terraform --zone=europe-west3-c"]
null_resource.get-credentials (local-exec): Fetching cluster endpoint and auth data.
null_resource.get-credentials (local-exec): kubeconfig entry generated for gke-terraform.
null_resource.get-credentials: Creation complete after 1s [id=4191245626158601026]
helm_release.mydatabase: Creating...
helm_release.mydatabase: Still creating... [10s elapsed]
<--OMITTED-->
helm_release.mydatabase: Still creating... [1m40s elapsed]
helm_release.mydatabase: Creation complete after 1m44s [id=mydatabase]
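As an additional note (my own sketch, not part of the run above), the helm provider can also be configured directly from the cluster's attributes, so it does not depend on ~/.kube/config or the gcloud local-exec step at all:

data "google_client_config" "default" {}

provider "helm" {
  kubernetes {
    host                   = "https://${google_container_cluster.gke-terraform.endpoint}"
    token                  = data.google_client_config.default.access_token
    cluster_ca_certificate = base64decode(google_container_cluster.gke-terraform.master_auth[0].cluster_ca_certificate)
  }
}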
I am trying to provision a GKE cluster with a Windows node_pool using the Google modules. I am calling the module:
source = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster-update-variant"
version = "9.2.0"
I had to define two pools: the Linux pool required by GKE and the Windows one we need. Terraform always succeeds in provisioning the Linux node_pool but fails to provision the Windows one with the following error message:
module.gke.google_container_cluster.primary: Still modifying... [id=projects/uk-xxx-xx-xxx-b821/locations/europe-west2/clusters/gke-nonpci-dev, 24m31s elapsed]
module.gke.google_container_cluster.primary: Still modifying... [id=projects/uk-xxx-xx-xxx-b821/locations/europe-west2/clusters/gke-nonpci-dev, 24m41s elapsed]
module.gke.google_container_cluster.primary: Still modifying... [id=projects/uk-xxx-xx-xxx-b821/locations/europe-west2/clusters/gke-nonpci-dev, 24m51s elapsed]
module.gke.google_container_cluster.primary: Modifications complete after 24m58s [id=projects/xx-xxx-xx-xxx-b821/locations/europe-west2/clusters/gke-nonpci-dev]
module.gke.google_container_node_pool.pools["windows-node-pool"]: Creating...
Error: error creating NodePool: googleapi: Error 400: Workload Identity is not supported on Windows nodes. Create the nodepool without workload identity by specifying --workload-metadata=GCE_METADATA., badRequest
on .terraform\modules\gke\terraform-google-kubernetes-engine-9.2.0\modules\beta-private-cluster-update-variant\cluster.tf line 341, in resource "google_container_node_pool" "pools":
341: resource "google_container_node_pool" "pools" {
I tried many places to set this metadata value but I couldn't get it right.
From the Terraform side:
I tried adding this metadata inside the node_config scope, both in the module itself and in my main.tf file where I call the module. I tried adding it to the windows node_pool entry of the node_pools list, but it was rejected with a message that setting WORKLOAD IDENTITY isn't expected there.
I also tried setting enable_shielded_nodes = false, but this didn't really help much.
To test whether this is doable through the command line at all, this was my command line:
C:\>gcloud container node-pools --region europe-west2 list
NAME MACHINE_TYPE DISK_SIZE_GB NODE_VERSION
default-node-pool-d916 n1-standard-2 100 1.17.9-gke.600
C:\>gcloud container node-pools --region europe-west2 create window-node-pool --cluster=gke-nonpci-dev --image-type=WINDOWS_SAC --no-enable-autoupgrade --machine-type=n1-standard-2
WARNING: Starting in 1.12, new node pools will be created with their legacy Compute Engine instance metadata APIs disabled by default. To create a node pool with legacy instance metadata endpoints disabled, run `node-pools create` with the flag `--metadata disable-legacy-endpoints=true`.
This will disable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
ERROR: (gcloud.container.node-pools.create) ResponseError: code=400, message=Workload Identity is not supported on Windows nodes. Create the nodepool without workload identity by specifying --workload-metadata=GCE_METADATA.
C:\>gcloud container node-pools --region europe-west2 create window-node-pool --cluster=gke-nonpci-dev --image-type=WINDOWS_SAC --no-enable-autoupgrade --machine-type=n1-standard-2 --workload-metadata=GCE_METADATA --metadata disable-legacy-endpoints=true
This will disable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
ERROR: (gcloud.container.node-pools.create) ResponseError: code=400, message=Service account "874988475980-compute@developer.gserviceaccount.com" does not exist.
C:\>gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* tf-xxx-xxx-xx-xxx@xx-xxx-xx-xxx-xxxx.iam.gserviceaccount.com
The service account shown by gcloud auth list is the one I am running Terraform with, but I don't know where the one in the error message comes from. Since creating the Windows node pool through the command line as shown above didn't work either, I am a bit stuck and don't know what to do.
Module 9.2.0 has been stable for us across all the Linux-based clusters we set up before, so I thought it might be too old for a Windows node_pool and tried 11.0.0 instead to see if that would make any difference, but I ended up with a different error:
module.gke.google_container_node_pool.pools["default-node-pool"]: Refreshing state... [id=projects/uk-tix-p1-npe-b821/locations/europe-west2/clusters/gke-nonpci-dev/nodePools/default-node-pool-d916]
Error: failed to execute ".terraform/modules/gke.gcloud_delete_default_kube_dns_configmap/terraform-google-gcloud-1.4.1/scripts/check_env.sh": fork/exec .terraform/modules/gke.gcloud_delete_default_kube_dns_configmap/terraform-google-gcloud-1.4.1/scripts/check_env.sh: %1 is not a valid Win32 application.
on .terraform\modules\gke.gcloud_delete_default_kube_dns_configmap\terraform-google-gcloud-1.4.1\main.tf line 70, in data "external" "env_override":
70: data "external" "env_override" {
Error: failed to execute ".terraform/modules/gke.gcloud_wait_for_cluster/terraform-google-gcloud-1.3.0/scripts/check_env.sh": fork/exec .terraform/modules/gke.gcloud_wait_for_cluster/terraform-google-gcloud-1.3.0/scripts/check_env.sh: %1 is not a valid Win32 application.
on .terraform\modules\gke.gcloud_wait_for_cluster\terraform-google-gcloud-1.3.0\main.tf line 70, in data "external" "env_override":
70: data "external" "env_override" {
This is how I set the node_pools parameters:
node_pools = [
  {
    name               = "linux-node-pool"
    machine_type       = var.nodepool_instance_type
    min_count          = 1
    max_count          = 10
    disk_size_gb       = 100
    disk_type          = "pd-standard"
    image_type         = "COS"
    auto_repair        = true
    auto_upgrade       = true
    service_account    = google_service_account.gke_cluster_sa.email
    preemptible        = var.preemptible
    initial_node_count = 1
  },
  {
    name               = "windows-node-pool"
    machine_type       = var.nodepool_instance_type
    min_count          = 1
    max_count          = 10
    disk_size_gb       = 100
    disk_type          = "pd-standard"
    image_type         = var.nodepool_image_type
    auto_repair        = true
    auto_upgrade       = true
    service_account    = google_service_account.gke_cluster_sa.email
    preemptible        = var.preemptible
    initial_node_count = 1
  }
]

cluster_resource_labels = var.cluster_resource_labels

# health check and webhook firewall rules
node_pools_tags = {
  all = [
    "xx-xxx-xxx-local-xxx",
  ]
}

node_pools_metadata = {
  all = {
    // workload-metadata = "GCE_METADATA"
  }

  linux-node-pool = {
    ssh-keys               = join("\n", [for user, key in var.node_ssh_keys : "${user}:${key}"])
    block-project-ssh-keys = true
  }

  windows-node-pool = {
    workload-metadata = "GCE_METADATA"
  }
}
This is a shared VPC where I provision my cluster; the cluster version is 1.17.9-gke.600.
Check out https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/issues/632 for the solution.
The error message is ambiguous and GKE has an internal bug tracking this issue. We will improve the error message soon.
When I use the AWS EC2 driver to invoke the create_node and ex_modify_instance_attribute APIs, I get this error:
raise InvalidCredsError(err_list[-1])
libcloud.common.types.InvalidCredsError: 'AuthFailure: AWS was not able to validate the provided access credentials'
But the ex_create_subnet / list_nodes API calls succeed, and I'm sure I have the IAM permissions to create EC2 instances.
By the way, I am using the AWS cn-north-1 region.
I found that create_node with certain parameters gets AuthFailure.
The code:
node = self.conn.create_node(name=instance_name,
                             image=image,
                             size=size,
                             ex_keyname=ex_keyname,
                             ex_iamprofile=ex_iamprofile,
                             ex_subnet=ex_subnet,
                             ex_security_group_ids=ex_security_group_ids,
                             ex_mincount=ex_mincount,
                             ex_maxcount=ex_mincount,
                             ex_blockdevicemappings=config['block_devices'],
                             ex_assign_public_ip=config['eth0']['need_eip'])
I just deleted some parameters and it works:
node = self.conn.create_node(name=instance_name,
                             image=image,
                             size=size,
                             ex_keyname=ex_keyname,
                             # ex_iamprofile=ex_iamprofile,
                             ex_subnet=ex_subnet,
                             # ex_security_group_ids=ex_security_group_ids,
                             ex_mincount=ex_mincount,
                             ex_maxcount=ex_mincount,
                             # ex_blockdevicemappings=config['block_devices'],
                             # ex_assign_public_ip=config['eth0']['need_eip'],
                             )