AKS unable to create Worker Node - kubernetes

Trying to create an AKS cluster that sits behind a proxy. AKS fails to launch the worker nodes in the node pool, failing with a connection timeout error to https://mcr.microsoft.com/ on port 443.
Tried using the arguments below (per the provider docs) but I am getting errors:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#http_proxy_config
resource "azurerm_kubernetes_cluster" "aks_cluster" {
name =
location =
resource_group_name =
dns_prefix =
kubernetes_version =
kubernetes_version =
node_resource_group =
private_cluster_enabled =
http_proxy =
https_proxy =
no_proxy =
╷
│ Error: Unsupported argument
│
│ on aks_cluster.tf line 60, in resource "azurerm_kubernetes_cluster" "aks_cluster":
│ 60: http_proxy = "export http_proxy=http:"
│
│ An argument named "http_proxy" is not expected here.
╵
╷
│ Error: Unsupported argument
│
│ on aks_cluster.tf line 61, in resource "azurerm_kubernetes_cluster" "aks_cluster":
│ 61: https_proxy = "export https_proxy=http://"
│
│ An argument named "https_proxy" is not expected here.
╵
╷
│ Error: Unsupported argument
│
│ on aks_cluster.tf line 62, in resource "azurerm_kubernetes_cluster" "aks_cluster":
│ 62: no_proxy = "localhost,"
│
│ An argument named "no_proxy" is not expected here.
╵
##[error]Terraform command 'validate' failed with exit code
Another error:
│ on aks_cluster.tf line 70, in resource "azurerm_kubernetes_cluster" "aks_cluster":
│ 70: http_proxy_config = "export https_proxy=http:///"
│
│ An argument named "http_proxy_config" is not expected here
I followed: https://learn.microsoft.com/en-us/azure/aks/http-proxy
and checked: https://github.com/hashicorp/terraform-provider-azurerm/pull/14177

You will have to declare http_proxy, https_proxy, and no_proxy inside the http_proxy_config block in the azurerm_kubernetes_cluster resource block.
The code will look like below:
resource "azurerm_kubernetes_cluster" "example" {
name = "ansuman-aks1"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
dns_prefix = "ansumanaks1"
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_D2_v2"
}
identity {
type = "SystemAssigned"
}
http_proxy_config {
http_proxy = "http://myproxy.server.com:8080/"
https_proxy = "https://myproxy.server.com:8080/"
no_proxy = ["localhost","127.0.0.1"]
}
}
Note: Please make sure that you have registered the HTTPProxyConfigPreview feature in your subscription and, after it is registered, re-registered the Microsoft.ContainerService provider for the feature to take effect, as mentioned in the Microsoft documentation. Also, please ensure that you have provided the correct proxy URLs.
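For reference, the registration steps from the AKS http-proxy documentation linked above can be run with the Azure CLI; a sketch (feature registration can take several minutes to complete):

az feature register --namespace "Microsoft.ContainerService" --name "HTTPProxyConfigPreview"
# Poll until "state" shows "Registered"
az feature show --namespace "Microsoft.ContainerService" --name "HTTPProxyConfigPreview"
# Re-register the provider so the feature takes effect
az provider register --namespace Microsoft.ContainerService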

Related

Terraform escape quotes in the string don't work

This is simplified Terraform code, as below:
resource "datadog_monitor" "monitor_error" {
name = "error log"
type = "metric alert"
message = "There is error in the log."
query = "logs(\\\"Error\\\").index(\\\"*\\\").rollup(\\\"count\\\").last(\\\"5m\\\")>0"
}
It passes when running "terraform validate", but it fails when running "terraform apply" with the following error:
│ Error: error validating monitor from https://api.datadoghq.com/api/v1/monitor/validate: 400 Bad Request: {"errors":["The value provided for parameter 'query' is invalid"]}
│
│ with datadog_monitor.monitor_error,
│ on main.tf line 6, in resource "datadog_monitor" "monitor_error":
│ 6: resource "datadog_monitor" "monitor_error" {
│
╵
The debug output of terraform apply is as below:
{"message":"There is error in the log.","name":"error log","options":{"include_tags":true,"new_host_delay":300,"no_data_timeframe":10,"notify_no_data":false,"require_full_window":true,"thresholds":{}},"priority":0,"query":"logs(\\\"Error\\\").index(\\\"*\\\").rollup(\\\"count\\\").last(\\\"5m\\\")\u003e0","tags":[],"type":"metric alert"}:
I also tried using a single backslash, but it failed in the same way.
What I expect is that running "terraform apply" produces no error and the monitor is created.
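For what it's worth, the debug output shows escaped backslashes (\\\") reaching the Datadog API, meaning the query string itself contains literal backslashes. In HCL, \" on its own already yields a literal double quote, so single-level escaping is normally all the provider needs. A sketch (untested against this exact monitor):

resource "datadog_monitor" "monitor_error" {
  name    = "error log"
  type    = "log alert" # logs() queries are normally paired with "log alert", not "metric alert"
  message = "There is error in the log."

  # \" in HCL produces a literal ", so the API receives: logs("Error").index("*")...
  query = "logs(\"Error\").index(\"*\").rollup(\"count\").last(\"5m\") > 0"
}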

Not able to deploy kubernetes resources with terraform in EKS

I am trying to deploy a Kubernetes resource (a Secret) to AWS EKS using Terraform. Here is what my configuration looks like:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.34"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.15.0"
    }
  }
}

data "aws_eks_cluster" "this" {
  name = "shared_eks01"
}

data "aws_eks_cluster_auth" "this" {
  name = "shared_eks01"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.this.token
}

provider "aws" {
  region = "us-west-2"
}

resource "kubernetes_secret" "spacelift" {
  metadata {
    name      = "spacelift123"
    namespace = "spacelift"
  }
  data = {
    "token" = "123"
  }
}
I am unable to deploy the resource, and I am getting the error below:
kubernetes_secret.spacelift: Creating...
╷
│ Error: Post "https://6FC8A63F36709AA...........gr7.us-west-2.eks.amazonaws.com/api/v1/namespaces/spacelift/secrets": dial tcp 127.0.0.1:443: connectex: No connection could be made because the target machine actively refused it.
│
│ with kubernetes_secret.spacelift,
│ on main.tf line 63, in resource "kubernetes_secret" "spacelift":
│ 63: resource "kubernetes_secret" "spacelift" {
I also tried adding load_config_file = false in the provider block, which results in:
An argument named "load_config_file" is not expected here.
Can you please tell me what I am missing?
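The dial tcp 127.0.0.1:443 error typically means the kubernetes provider could not resolve real connection details and fell back to its localhost default. A common remedy (a sketch, not necessarily this thread's accepted fix) is to authenticate through the provider's exec block so a fresh token is fetched from the AWS CLI at apply time:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)

  # Fetch a short-lived token at apply time instead of relying on a possibly stale data source value.
  # Older clusters/CLIs may need api_version "client.authentication.k8s.io/v1alpha1".
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", "shared_eks01"]
  }
}

As for load_config_file: that argument was removed in v2.0 of the kubernetes provider, which is why it is rejected here.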

Error when adding tags to Snowflake resource (role)

I am using:
Terraform v1.2.9
on windows_amd64
provider registry.terraform.io/snowflake-labs/snowflake v0.42.1
My main.tf file is:
terraform {
  required_version = ">= 1.1.7"

  backend "http" {
  }

  required_providers {
    snowflake = {
      source  = "Snowflake-Labs/snowflake"
      version = "~> 0.42"
    }
  }
}

provider "snowflake" {
  username = "xxxxxx"
  account  = "yyyyyy"
  region   = "canada-central.azure"
}
When I add the following tags to a Snowflake role, I get an error. Can you help?
resource "snowflake_role" "operations_manager" {
name = "OPERATIONS_MANAGER"
comment = "A comment"
tag = {
managed-with = "Terraform",
owner = "Joe Smith"
}
}
Error: Unsupported argument
│
│ on functional_roles.tf line 35, in resource "snowflake_role" "operations_manager":
│ 35: tag = {
│
│ An argument named "tag" is not expected here. Did you mean to define a block of type "tag"?

Azure app registration creation error through terraform Azure Devops yml pipeline [duplicate]

This question already has answers here:
json.Marshal(): json: error calling MarshalJSON for type msgraph.Application
(2 answers)
Closed 1 year ago.
I have very simple Terraform code:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.46.0"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = "~> 2.0.0"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "azuread" {
  tenant_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}

terraform {
  backend "azurerm" {
    resource_group_name  = "xxxx"
    storage_account_name = "xxxxxxxxx"
    container_name       = "xxxxxxxxxxxxx"
    key                  = "xxxxxxxxxxxxxxxxx"
  }
}

data "azuread_client_config" "current" {}

resource "azurerm_resource_group" "test" {
  name     = "test-rg-005"
  location = "East US"
}

resource "azuread_application" "example" {
  display_name = "Example-app"
}
However, when I run this through a YAML pipeline on Azure DevOps, I get this error during the apply stage:
Plan: 1 to add, 0 to change, 0 to destroy.
azuread_application.example: Creating...
│ Error: Could not create application
│
│ with azuread_application.example,
│ on terraform.tf line 42, in resource "azuread_application" "example":
│ 42: resource "azuread_application" "example" {
│
│ json.Marshal(): json: error calling MarshalJSON for type
│ msgraph.Application: json: error calling MarshalJSON for type
│ *msgraph.Owners: marshaling Owners: encountered DirectoryObject with nil
│ ODataId
##[error]Error: The process '/opt/hostedtoolcache/terraform/1.0.5/x64/terraform' failed with
exit code 1
Any clue would be helpful; it is not really clear what this error is about.
Thanks.
There is a bug in the Azure Active Directory provider after a Microsoft update. It impacts any azuread provider usage that creates new resources; it seems to still work on already deployed resources, i.e. changing and upgrading the configuration of resources already deployed in Azure AD. The following link tracks the bug:
https://github.com/hashicorp/terraform-provider-azuread/issues/588
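One side note while that bug plays out: the question pins version = "~> 2.0.0", which only permits 2.0.x patch releases. Loosening the constraint lets Terraform pick up whichever newer 2.x release carries the fix once it ships (a sketch; check the linked issue for the exact fixed version):

azuread = {
  source  = "hashicorp/azuread"
  version = "~> 2.0" # allows any 2.x release, unlike "~> 2.0.0" which allows only 2.0.x
}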

Issue creating an RDS PostgreSQL instance with the AWS Terraform module

I am attempting to run this module: https://registry.terraform.io/modules/azavea/postgresql-rds/aws/latest. Here is the main.tf file, created based on the information found there:
provider "aws" {
region = "us-east-2"
access_key = "key_here"
secret_key = "key_here"
}
module postgresql_rds {
source = "github.com/azavea/terraform-aws-postgresql-rds"
vpc_id = "vpc-2470994d"
instance_type = "db.t3.micro"
database_name = "tyler"
database_username = "admin"
database_password = "admin1234"
subnet_group = "tyler-subnet-1"
project = "Postgres-ts"
environment = "Staging"
alarm_actions = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
ok_actions = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
insufficient_data_actions = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
database_identifier = "jl23kj32sdf"
}
I am getting an error:
Error: Error creating DB Instance: DBSubnetGroupNotFoundFault: DBSubnetGroup 'tyler-subnet-1' not found.
│ status code: 404, request id: a95975dd-5456-444a-8f64-440fc4c1782f
│
│ with module.postgresql_rds.aws_db_instance.postgresql,
│ on .terraform/modules/postgresql_rds/main.tf line 46, in resource "aws_db_instance" "postgresql":
│ 46: resource "aws_db_instance" "postgresql" {
I have tried the example from the page, under "Usage", i.e. subnet_group = aws_db_subnet_group.default.name. I have also used the subnet ID from AWS, and I assigned a name to the subnet and used that name ("tyler-subnet-1" in the main.tf above). I am getting the same basic error with all three attempted inputs. Is there something I'm not understanding about the information that is being requested here?
Assuming you have a default subnet group, you can just use it:
subnet_group = "default"
If not, you have to create a custom subnet group using aws_db_subnet_group:
resource "aws_db_subnet_group" "default" {
name = "my-subnet-group"
subnet_ids = [<subnet-id-1>, <subnet-id-2>]
tags = {
Name = "My DB subnet group"
}
}
and use the custom group:
subnet_group = aws_db_subnet_group.default.name
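Putting the two pieces together with the module call from the question (a sketch; the subnet IDs are placeholders and must belong to vpc-2470994d, spanning at least two availability zones, for RDS to accept the group):

resource "aws_db_subnet_group" "default" {
  name       = "my-subnet-group"
  subnet_ids = ["subnet-aaaaaaaa", "subnet-bbbbbbbb"] # placeholder IDs from your VPC

  tags = {
    Name = "My DB subnet group"
  }
}

module "postgresql_rds" {
  source       = "github.com/azavea/terraform-aws-postgresql-rds"
  vpc_id       = "vpc-2470994d"
  subnet_group = aws_db_subnet_group.default.name
  # ...remaining arguments unchanged from the question...
}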