I am trying to use a certificate issued in eu-central-1 for my API Gateway, which is regional and runs in the same region.
My Terraform code is as follows:
//ACM Certificate
provider "aws" {
  region = "eu-central-1"
  alias  = "eu-central-1"
}

resource "aws_acm_certificate" "certificate" {
  provider          = "aws.eu-central-1"
  domain_name       = "*.kumite.xyz"
  validation_method = "EMAIL"
}

//Apigateway
resource "aws_api_gateway_rest_api" "kumite_writer_api" {
  name = "kumite_writer_api"

  endpoint_configuration {
    types = ["REGIONAL"]
  }
}

resource "aws_api_gateway_domain_name" "domain_name" {
  certificate_arn = aws_acm_certificate.certificate.arn
  domain_name     = "recorder.kumite.xyz"

  endpoint_configuration {
    types = ["REGIONAL"]
  }
}
Unfortunately, I constantly get this error:
Error: Error creating API Gateway Domain Name: BadRequestException: Cannot import certificates for EDGE while REGIONAL is active.
What am I missing here? My API Gateway is REGIONAL, not EDGE, so the error makes no sense to me...
Change certificate_arn to regional_certificate_arn.
From the documentation (emphasis mine):
When referencing an AWS-managed certificate, the following arguments are supported:
certificate_arn - (Optional) The ARN for an AWS-managed certificate. AWS Certificate Manager is the only supported source. Used when an edge-optimized domain name is desired. Conflicts with certificate_name, certificate_body, certificate_chain, certificate_private_key, regional_certificate_arn, and regional_certificate_name.
regional_certificate_arn - (Optional) The ARN for an AWS-managed certificate. AWS Certificate Manager is the only supported source. Used when a regional domain name is desired. Conflicts with certificate_arn, certificate_name, certificate_body, certificate_chain, and certificate_private_key.
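In other words, for a REGIONAL domain name the certificate has to be supplied via regional_certificate_arn. A minimal sketch of the corrected resource, reusing the names from the question:

resource "aws_api_gateway_domain_name" "domain_name" {
  # regional_certificate_arn is used for REGIONAL domain names;
  # certificate_arn only applies to EDGE (CloudFront) domain names.
  regional_certificate_arn = aws_acm_certificate.certificate.arn
  domain_name              = "recorder.kumite.xyz"

  endpoint_configuration {
    types = ["REGIONAL"]
  }
}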
Hello, and I wish you a great time!
I've got the following Terraform service account definition:
resource "google_service_account" "gke_service_account" {
project = var.context
account_id = var.gke_account_name
display_name = var.gke_account_description
}
I use it in a GCP Kubernetes node pool:
resource "google_container_node_pool" "gke_node_pool" {
name = "${var.context}-gke-node"
location = var.region
project = var.context
cluster = google_container_cluster.gke_cluster.name
management {
auto_repair = "true"
auto_upgrade = "true"
}
autoscaling {
min_node_count = var.gke_min_node_count
max_node_count = var.gke_max_node_count
}
initial_node_count = var.gke_min_node_count
node_config {
machine_type = var.gke_machine_type
service_account = google_service_account.gke_service_account.email
metadata = {
disable-legacy-endpoints = "true"
}
# Needed for correctly functioning cluster, see
# https://www.terraform.io/docs/providers/google/r/container_cluster.html#oauth_scopes
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/cloud-platform"
]
}
}
However, the current solution requires the prod and dev environments to live in different GCP accounts while using the same image from the prod Artifact Registry.
For now I have a JSON key file for a service account in prod that has access to its registry. Is there a clean way to use the JSON file as a second service account for Kubernetes, or to update the current k8s service account with the JSON file so it gets additional permissions on the remote registry?
I've seen solutions like putting it into a secret or using a cross-account service account.
But that's not how I want to resolve it, since we have some internal restrictions.
I hope someone has faced a similar task and has a solution to share - it'll save me real time.
Thanks in advance!
I've got an Azure Key Vault (KV) that holds shared secrets and a cert that need to be pulled into different deployments.
E.g. DEV, TEST, UAT, and Production all have their own key vaults but need access to the shared KV for the wildcard SSL cert.
I've tried a number of approaches, but each has errors. I'm doing something similar for a KV within the deployment resource group without issues.
Is it possible to have this and then use it as a module? Something like this...
sharedKV.bicep
var kvResourceGroup = 'project-shared-rg'
var subscriptionId = subscription().subscriptionId
var name = 'project-shared-kv'

resource project_shared_kv 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
  name: name
  scope: resourceGroup(subscriptionId, kvResourceGroup)
}
And then use it like:
template.bicep
module shared_kv './sharedKeyVault/template.bicep' = {
  name: 'sharedKeyVault'
}

resource add_secret 'Microsoft.KeyVault/vaults/secrets@2021-06-01-preview' = {
  name: '${shared_kv.name}/mySecretKey'
  properties: {
    contentType: 'string'
    value: 'secretValue'
    attributes: {
      enabled: true
    }
  }
}
If you need to target a different resourceGroup (and/or sub) than the rest of the deployment, the module's scope property needs to target that RG/sub. e.g.
module shared_kv './sharedKeyVault/template.bicep' = {
  scope: resourceGroup(kvSubscription, kvResourceGroupName)
  name: 'sharedKeyVault'
  params: {
    subId: kvSubscription
    rg: kvResourceGroupName
    ...
  }
}
Ideally, the sub/rg for the KV would be passed in to the module rather than hardcoded (which you probably knew, but just in case...)
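For illustration, a rough sketch of what the module itself (the file referenced as ./sharedKeyVault/template.bicep above) could look like with the sub/rg passed in as parameters; the kvName parameter, its default, and the outputs are placeholders I've assumed, not part of the original question:

// sharedKeyVault/template.bicep
param subId string
param rg string
param kvName string = 'project-shared-kv'

// reference the existing shared vault, with its scope taken from the
// parameters instead of hardcoded vars
resource project_shared_kv 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
  name: kvName
  scope: resourceGroup(subId, rg)
}

// expose whatever the calling template needs, e.g. the vault id and URI
output kvId string = project_shared_kv.id
output kvUri string = project_shared_kv.properties.vaultUri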
Googleapi: Error 403: User not authorized to perform this action
provider "google" {
project = "xxxxxx"
region = "us-central1"
}
resource "google_pubsub_topic" "gke_cluster_upgrade_notifications" {
name = "cluster-notifications"
labels = {
foo = "bar"
}
message_storage_policy {
allowed_persistence_regions = [
"region",
]
}
}
# create the storage bucket for our scripts
resource "google_storage_bucket" "source_code" {
  name          = "xxxxxx-bucket-lh05111992"
  location      = "us-central1"
  force_destroy = true
}

# zip up function source code
data "archive_file" "function_script_zip" {
  type        = "zip"
  source_dir  = "./function/"
  output_path = "./function/main.py.zip"
}

# add function source code to storage
resource "google_storage_bucket_object" "function_script_zip" {
  name   = "main.py.zip"
  bucket = google_storage_bucket.source_code.name
  source = "./function/main.py.zip"
}
resource "google_cloudfunctions_function" "gke_cluster_upgrade_notifications" {---
-------
}
The service account has the Owner role attached.
I also tried:
1. export GOOGLE_APPLICATION_CREDENTIALS={{path}}
2. credentials = "${file("credentials.json")}", placing the JSON file in the Terraform root folder.
It seems that the account being used is missing some permissions (e.g. pubsub.topics.create) needed to create the Cloud Pub/Sub topic. The Owner role should be sufficient to create the topic, as it contains the necessary permissions (you can check this here). Therefore, a wrong service account might be set in Terraform.
To address these IAM issues I would suggest:
Use the Policy Troubleshooter.
Impersonate the service account and make the API call using the CLI with the --verbosity=debug flag, which will provide helpful information about the missing permissions.
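If the wrong credentials are indeed what Terraform picks up, one option is to point the google provider explicitly at the intended service account key. A minimal sketch, reusing the credentials.json file name from the question (the key must belong to an account that actually has the required permissions):

provider "google" {
  project = "xxxxxx"
  region  = "us-central1"

  # load the service account key explicitly instead of relying on whatever
  # GOOGLE_APPLICATION_CREDENTIALS or gcloud ADC happens to point at
  credentials = file("credentials.json")
}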
I am trying to create an AKS cluster with a managed identity using Terraform. This is my code so far, pretty basic and standard, taken from a few documentation pages and blog posts I found online.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.79.1"
    }
  }
}

provider "azurerm" {
  features {}
  use_msi = true
}

resource "azurerm_resource_group" "rg" {
  name     = "prod_test"
  location = "northeurope"
}

resource "azurerm_kubernetes_cluster" "cluster" {
  name                = "prod_test_cluster"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "weak"

  default_node_pool {
    name       = "default"
    node_count = "4"
    vm_size    = "standard_ds3_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}
And this is the error message that I can't find a way around. Any thoughts on it?
Error: creating Managed Kubernetes Cluster "prod_test_cluster" (Resource Group "prod_test"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="BadRequest" Message="Feature Microsoft.ContainerService/AutoUpgradePreview is not enabled. Please see https://aka.ms/aks/previews for how to enable features."
│
│ with azurerm_kubernetes_cluster.cluster,
│ on main.tf line 19, in resource "azurerm_kubernetes_cluster" "cluster":
│ 19: resource "azurerm_kubernetes_cluster" "cluster" {
│
I tested it in my environment and faced the same issue.
To give some background on the issue: AutoChannelUpgrade went to public preview in August 2021. As per the Terraform azurerm provider 2.79.0, it by default passes that value as none to the backend, but because we have not registered for the feature, it fails with the error Feature Microsoft.ContainerService/AutoUpgradePreview is not enabled.
To confirm you don't have the feature registered, you can use the below command:
az feature show -n AutoUpgradePreview --namespace Microsoft.ContainerService
You will see that it is not registered.
Now, to overcome this, you can try one of two solutions:
You can use the Terraform azurerm provider 2.78.0 instead of 2.79.1, for example by pinning the version as sketched below.
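A minimal sketch of that pin; only the required_providers block changes compared to the code in the question:

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      # 2.78.0 predates the behaviour that trips over the unregistered
      # AutoUpgradePreview feature
      version = "2.78.0"
    }
  }
}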
The other solution is to register for the feature; then you can use the same code you already have.
You can follow the below steps:
You can use the below command to register the feature (it will take around 5 minutes to register):
az login --identity
az feature register --namespace Microsoft.ContainerService -n AutoUpgradePreview
After the above is done, you can check the registration status with the below command:
az feature registration show --provider-namespace Microsoft.ContainerService -n AutoUpgradePreview
After the feature status becomes Registered, you can run terraform apply on your code.
I tested it using the below code on my VM:
provider "azurerm" {
features {}
subscription_id = "948d4068-xxxxx-xxxxxx-xxxx-e00a844e059b"
tenant_id = "72f988bf-xxxxx-xxxxxx-xxxxx-2d7cd011db47"
use_msi = true
}
resource "azurerm_resource_group" "rg" {
name = "terraformtestansuman"
location = "west us 2"
}
resource "azurerm_kubernetes_cluster" "cluster" {
name = "prod_test_cluster"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
dns_prefix = "weak"
default_node_pool {
name = "default"
node_count = "4"
vm_size = "standard_ds3_v2"
}
identity {
type = "SystemAssigned"
}
}
Reference:
Github Issue
Install Azure CLI if not installed on the VM using Microsoft Installer
I am using Terraform v0.10.6 to spin up a droplet on DigitalOcean. In my Terraform config (copied below) I reference an SSH key and fingerprint that have already been added to DigitalOcean. I am able to log onto existing droplets using this SSH key, but not onto a newly created droplet (SSH simply fails). Any thoughts on how to troubleshoot this, so that when I launch the droplet via Terraform I can log onto it with the key that has already been added on DigitalOcean (and is visible in the DO console)? Currently, the droplet appears in the DigitalOcean admin console, but I am never able to SSH onto the server (the connection gets denied).
test.tf
# add base droplet with name
resource "digitalocean_droplet" "do-mail" {
  image              = "ubuntu-16-04-x64"
  name               = "tmp.validdomain.com"
  region             = "nyc3"
  size               = "1gb"
  private_networking = true

  ssh_keys = [
    "${var.ssh_fingerprint}",
  ]

  connection {
    user        = "root"
    type        = "ssh"
    private_key = "${file(var.private_key)}"
    timeout     = "2m"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/bin",
      "sudo apt-get update",
    ]
  }
}
terraform.tfvars
digitalocean_token = "correcttoken"
public_key = "~/.ssh/id_rsa.pub"
private_key = "~/.ssh/id_rsa"
ssh_fingerprint = "correct:finger:print"
provider.tf
provider "digitalocean" {
token = "${var.digitalocean_token}"
}
## variables used by terraform

# DO token
variable "digitalocean_token" {
  type = "string"
}

# DO public key file location on local server
variable "public_key" {
  type = "string"
}

# DO private key file location on local server
variable "private_key" {
  type = "string"
}

# DO ssh key fingerprint
variable "ssh_fingerprint" {
  type = "string"
}
I was able to set up a new droplet with the SSH key at initialization time when I specified the DigitalOcean token as an environment variable (as opposed to relying on the terraform.tfvars file).
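One way to do that while keeping the config unchanged is to feed the variable from the environment rather than from terraform.tfvars. A sketch, assuming Terraform's standard TF_VAR_ environment-variable convention:

# provider.tf stays exactly as in the question; only the source of the value changes.
# Before running terraform apply, export the token in the shell:
#   export TF_VAR_digitalocean_token="correcttoken"
# Terraform then populates var.digitalocean_token from the environment.
provider "digitalocean" {
  token = "${var.digitalocean_token}"
}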