Azure Terraform function app deployment issue - azure-devops

I hope somebody can help me with this issue because I don't understand what I am doing wrong.
I am trying to build an Azure function app and deploy a zip package (timer trigger) to it.
I have written this code:
resource "azurerm_resource_group" "function-rg" {
location = "westeurope"
name = "resource-group"
}
data "azurerm_storage_account_sas" "sas" {
connection_string = azurerm_storage_account.sthriprdeurcsvtoscim.primary_connection_string
https_only = true
start = "2021-01-01"
expiry = "2023-12-31"
resource_types {
object = true
container = false
service = false
}
services {
blob = true
queue = false
table = false
file = false
}
permissions {
read = true
write = false
delete = false
list = false
add = false
create = false
update = false
process = false
}
}
resource "azurerm_app_service_plan" "ASP-rg-hri-prd-scim" {
location = azurerm_resource_group.function-rg.location
name = "ASP-rghriprdeurcsvtoscim"
resource_group_name = azurerm_resource_group.function-rg.name
kind = "functionapp"
maximum_elastic_worker_count = 1
per_site_scaling = false
reserved = false
sku {
capacity = 0
size = "Y1"
tier = "Dynamic"
}
}
resource "azurerm_storage_container" "deployments" {
name = "function-releases"
storage_account_name = azurerm_storage_account.sthriprdeurcsvtoscim.name
container_access_type = "private"
}
resource "azurerm_storage_blob" "appcode" {
name = "functionapp.zip"
storage_account_name = azurerm_storage_account.sthriprdeurcsvtoscim.name
storage_container_name = azurerm_storage_container.deployments.name
type = "Block"
source = "./functionapp.zip"
}
resource "azurerm_function_app" "func-hri-prd-eur-csv-to-scim" {
storage_account_name = azurerm_storage_account.sthriprdeurcsvtoscim.name
storage_account_access_key = azurerm_storage_account.sthriprdeurcsvtoscim.primary_access_key
app_service_plan_id = azurerm_app_service_plan.ASP-rg-hri-prd-scim.id
location = azurerm_resource_group.function-rg.location
name = "func-hri-prd-csv-to-scim"
resource_group_name = azurerm_resource_group.function-rg.name
app_settings = {
"WEBSITE_RUN_FROM_PACKAGE" = "https://${azurerm_storage_account.sthriprdeurcsvtoscim.name}.blob.core.windows.net/${azurerm_storage_container.deployments.name}/${azurerm_storage_blob.appcode.name}${data.azurerm_storage_account_sas.sas.sas}"
"FUNCTIONS_EXTENSION_VERSION" = "~3"
"FUNCTIONS_WORKER_RUNTIME" = "dotnet"
}
enabled = true
identity {
type = "SystemAssigned"
}
version = "~3"
enable_builtin_logging = false
}
resource "azurerm_storage_account" "sthriprdeurcsvtoscim" {
account_kind = "Storage"
account_replication_type = "LRS"
account_tier = "Standard"
allow_blob_public_access = false
enable_https_traffic_only = true
is_hns_enabled = false
location = azurerm_resource_group.function-rg.location
name = "sthriprdeurcsvtoscim"
resource_group_name = azurerm_resource_group.function-rg.name
}
It goes without saying that terraform apply runs without any error. The function app's configuration is correct and points to the right storage account. The storage account has a container with the zip file containing my Azure Function code.
But when I go to the function app -> Functions, I don't see any function there.
Can somebody please help me understand what I am doing wrong here?
The function app is a .NET function on the ~3 runtime.

When you create a function app, it isn't set up for Functions + Terraform. It's set up for a Visual Studio Code + Functions deployment. We need to adjust both the package.json, so that it produces the ZIP file for us, and the .gitignore, so that it ignores the Terraform build files. I use a bunch of helper NPM packages:
azure-functions-core-tools for the func command.
@ffflorian/jszip-cli to ZIP my files up.
mkdirp for creating directories.
npm-run-all and particularly the run-s command for executing things in order.
rimraf for deleting things.
Below is what package.json looks like:
{
  "name": "backend",
  "version": "1.0.0",
  "description": "",
  "scripts": {
    "func": "func",
    "clean": "rimraf build",
    "build:compile": "tsc",
    "build:prune": "npm prune --production",
    "prebuild:zip": "mkdirp --mode=0700 build",
    "build:zip": "jszip-cli",
    "build": "run-s clean build:compile build:zip",
    "predeploy": "npm run build",
    "deploy": "terraform apply"
  },
  "dependencies": {
  },
  "devDependencies": {
    "azure-functions-core-tools": "^2.7.1724",
    "@azure/functions": "^1.0.3",
    "@ffflorian/jszip-cli": "^3.0.2",
    "mkdirp": "^0.5.1",
    "npm-run-all": "^4.1.5",
    "rimraf": "^3.0.0",
    "typescript": "^3.3.3"
  }
}
npm run build will build the ZIP file.
npm run deploy will build the ZIP file and deploy it to Azure.
For complete information check Azure Function app with Terraform.
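If you also want Terraform itself to notice that a new package has been built, one option (a minimal sketch, assuming the build step drops the ZIP at ./build/functionapp.zip, which is not part of the original config) is to embed the file's hash in the blob name, so each build produces a new blob and therefore a new WEBSITE_RUN_FROM_PACKAGE URL:
locals {
  # Assumed output path of the npm build step; adjust to wherever your ZIP actually lands.
  package_path = "./build/functionapp.zip"
  package_hash = filesha256(local.package_path)
}

resource "azurerm_storage_blob" "appcode" {
  # A content-addressed name forces a new blob (and a new app setting value) per build.
  name                   = "functionapp-${local.package_hash}.zip"
  storage_account_name   = azurerm_storage_account.sthriprdeurcsvtoscim.name
  storage_container_name = azurerm_storage_container.deployments.name
  type                   = "Block"
  source                 = local.package_path
}
Because the app setting already interpolates azurerm_storage_blob.appcode.name, the function app's package URL updates automatically whenever the hash changes.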

Related

Automated .kube/config file created with Terraform GKE module doesn't work

We are using the Terraform module below to create the GKE cluster and the local config file, but the kubectl command doesn't work: the terminal just keeps waiting. We then have to manually run gcloud container clusters get-credentials to fetch and set up the local config credentials after the cluster creation, and then kubectl works. It doesn't feel elegant to execute the gcloud command after the Terraform setup, so please help us correct what's wrong.
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 3.42.0"
    }
    google-beta = {
      source  = "hashicorp/google-beta"
      version = ">= 3.0.0"
    }
  }
}

provider "google" {
  credentials = var.gcp_creds
  project     = var.project_id
  region      = var.region
  zone        = var.zone
}

provider "google-beta" {
  credentials = var.gcp_creds
  project     = var.project_id
  region      = var.region
  zone        = var.zone
}

module "gke" {
  source = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"

  project_id                 = var.project_id
  name                       = "cluster"
  regional                   = true
  region                     = var.region
  zones                      = ["${var.region}-a", "${var.region}-b", "${var.region}-c"]
  network                    = module.vpc.network_name
  subnetwork                 = module.vpc.subnets_names[0]
  ip_range_pods              = "gke-pod-ip-range"
  ip_range_services          = "gke-service-ip-range"
  horizontal_pod_autoscaling = true
  enable_private_nodes       = true
  master_ipv4_cidr_block     = "${var.control_plane_cidr}"

  node_pools = [
    {
      name           = "node-pool"
      machine_type   = "${var.machine_type}"
      node_locations = "${var.region}-a,${var.region}-b,${var.region}-c"
      min_count      = "${var.node_pools_min_count}"
      max_count      = "${var.node_pools_max_count}"
      disk_size_gb   = "${var.node_pools_disk_size_gb}"
      auto_repair    = true
      auto_upgrade   = true
    },
  ]
}

# Configure the authentication and authorisation of the cluster
module "gke_auth" {
  source     = "terraform-google-modules/kubernetes-engine/google//modules/auth"
  depends_on = [module.gke]

  project_id   = var.project_id
  location     = module.gke.location
  cluster_name = module.gke.name
}

# Define a local file that will store the necessary info such as certificate, user and endpoint to access the cluster
resource "local_file" "kubeconfig" {
  content  = module.gke_auth.kubeconfig_raw
  filename = "~/.kube/config"
}
UPDATE:
We found that the automated .kube/config file was created with the wrong public IP of the API server, whereas the one created with the gcloud command contains the correct IP. Any guesses why the Terraform-generated one is fetching the wrong public IP?
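One workaround worth trying (a hedged sketch, not a confirmed fix for this exact setup) is to skip the generated kubeconfig file altogether and configure the Kubernetes provider straight from the cluster module's own outputs; endpoint and ca_certificate are the standard outputs of the terraform-google-modules cluster module, and the token comes from the caller's own Google credentials:
# Sketch: point the Kubernetes provider at the endpoint the GKE module reports,
# instead of relying on the kubeconfig written by the auth module.
data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${module.gke.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.gke.ca_certificate)
}
This avoids depending on whichever endpoint ends up inside the generated ~/.kube/config.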

Can someone help me with how to give a specific service principal name for an Azure custom policy in Terraform code

The scenario is: we need to create resources in Azure through Terraform Cloud only, using service principals, and the Azure portal shouldn't allow creation of resources by any other means (no manual creation in the Azure portal, and none via tools such as VS Code).
The following is the code I tried:
resource "azurerm_policy_definition" "policych" {
name = "PolicyTestSP"
policy_type = "Custom"
mode = "Indexed"
display_name = "Allow thru SP"
metadata = <<METADATA
{
"category": "General"
}
METADATA
policy_rule = <<POLICY_RULE
{
"if": {
"not": {
"field": "type",
"equals": "https://app.terraform.io/"
}
},
"then": {
"effect": "deny"
}
}
POLICY_RULE
}
resource "azurerm_policy_assignment" "policych" {
name = "Required servive-principles"
display_name = "Allow thru SP"
description = " Require SP Authentication'"
policy_definition_id = "${azurerm_policy_definition.policych.id}"
scope = "/subscriptions/xxxx-xxxx-xxxx-xxxx"
}

Helm - Kubernetes cluster unreachable: the server has asked for the client to provide credentials

I'm trying to deploy a self-managed EKS cluster with Terraform. While I can deploy the cluster with add-ons, VPC, subnets and all other resources, it always fails at Helm:
Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials
with module.eks-ssp-kubernetes-addons.module.ingress_nginx[0].helm_release.nginx[0]
on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/ingress-nginx/main.tf line 19, in resource "helm_release" "nginx":
resource "helm_release" "nginx" {
This error repeats for metrics_server, lb_ingress, argocd, but cluster-autoscaler throws:
Warning: Helm release "cluster-autoscaler" was created but has a failed status.
with module.eks-ssp-kubernetes-addons.module.cluster_autoscaler[0].helm_release.cluster_autoscaler[0]
on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/cluster-autoscaler/main.tf line 1, in resource "helm_release" "cluster_autoscaler":
resource "helm_release" "cluster_autoscaler" {
My main.tf looks like this:
terraform {
  backend "remote" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.66.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.7.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
  }
}

data "aws_eks_cluster" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

provider "aws" {
  access_key = "xxx"
  secret_key = "xxx"
  region     = "xxx"

  assume_role {
    role_arn = "xxx"
  }
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    token                  = data.aws_eks_cluster_auth.cluster.token
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  }
}
My eks.tf looks like this:
module "eks-ssp" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"
# EKS CLUSTER
tenant = "DevOpsLabs2b"
environment = "dev-test"
zone = ""
terraform_version = "Terraform v1.1.4"
# EKS Cluster VPC and Subnet mandatory config
vpc_id = "xxx"
private_subnet_ids = ["xxx","xxx", "xxx", "xxx"]
# EKS CONTROL PLANE VARIABLES
create_eks = true
kubernetes_version = "1.19"
# EKS SELF MANAGED NODE GROUPS
self_managed_node_groups = {
self_mg = {
node_group_name = "DevOpsLabs2b"
subnet_ids = ["xxx","xxx", "xxx", "xxx"]
create_launch_template = true
launch_template_os = "bottlerocket" # amazonlinux2eks or bottlerocket or windows
custom_ami_id = "xxx"
public_ip = true # Enable only for public subnets
pre_userdata = <<-EOT
yum install -y amazon-ssm-agent \
systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent \
EOT
disk_size = 10
instance_type = "t2.small"
desired_size = 2
max_size = 10
min_size = 0
capacity_type = "" # Optional Use this only for SPOT capacity as capacity_type = "spot"
k8s_labels = {
Environment = "dev-test"
Zone = ""
WorkerType = "SELF_MANAGED_ON_DEMAND"
}
additional_tags = {
ExtraTag = "t2x-on-demand"
Name = "t2x-on-demand"
subnet_type = "public"
}
create_worker_security_group = false # Creates a dedicated sec group for this Node Group
},
}
}
module "eks-ssp-kubernetes-addons" {
  source = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"

  enable_amazon_eks_vpc_cni = true
  amazon_eks_vpc_cni_config = {
    addon_name               = "vpc-cni"
    addon_version            = "v1.7.5-eksbuild.2"
    service_account          = "aws-node"
    resolve_conflicts        = "OVERWRITE"
    namespace                = "kube-system"
    additional_iam_policies  = []
    service_account_role_arn = ""
    tags                     = {}
  }

  enable_amazon_eks_kube_proxy = true
  amazon_eks_kube_proxy_config = {
    addon_name               = "kube-proxy"
    addon_version            = "v1.19.8-eksbuild.1"
    service_account          = "kube-proxy"
    resolve_conflicts        = "OVERWRITE"
    namespace                = "kube-system"
    additional_iam_policies  = []
    service_account_role_arn = ""
    tags                     = {}
  }

  # K8s Add-ons
  enable_aws_load_balancer_controller = true
  enable_metrics_server               = true
  enable_cluster_autoscaler           = true
  enable_aws_for_fluentbit            = true
  enable_argocd                       = true
  enable_ingress_nginx                = true

  depends_on = [module.eks-ssp.self_managed_node_groups]
}
The OP confirmed in a comment that the problem was resolved:
Of course. I think I found the issue. Doing "kubectl get svc" throws: "An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::xxx:user/terraform_deploy is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xxx:user/terraform_deploy"
Solved it by using my actual role, that's crazy. No idea why it was calling itself.
For a similar problem, see also this issue.
I solved this error by adding dependencies to the Helm installations. depends_on waits for the referenced step to complete successfully, and only then does the Helm module run.
module "nginx-ingress" {
depends_on = [module.eks, module.aws-load-balancer-controller]
source = "terraform-module/release/helm"
...}
module "aws-load-balancer-controller" {
depends_on = [module.eks]
source = "terraform-module/release/helm"
...}
module "helm_autoscaler" {
depends_on = [module.eks]
source = "terraform-module/release/helm"
...}
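Another pattern that helps with this particular error (a sketch under assumptions, not something confirmed in this thread) is to let the Helm provider fetch a fresh EKS token at apply time with an exec block, instead of reusing a token that was read during plan and may have expired:
provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

    # Fetch a short-lived token on demand via the AWS CLI instead of caching one.
    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", module.eks-ssp.eks_cluster_id]
    }
  }
}
The same exec block works in the kubernetes provider; the api_version value depends on your AWS CLI version (older CLIs expect v1alpha1).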

PowerShell script with parameter for Windows VM instance on Google Cloud Platform

I am trying to deploy a Windows VM on Google Cloud through Terraform. The VM gets deployed and I am able to execute PowerShell scripts by using windows-startup-script-url.
With this approach, I can only use scripts that are already stored in Google Storage. If the script has parameters/variables, how can I pass them? Any clue?
provider "google" {
project = "my-project"
region = "my-location"
zone = "my-zone"
}
resource "google_compute_instance" "default" {
name = "my-name"
machine_type = "n1-standard-2"
zone = "my-zone"
boot_disk {
initialize_params {
image = "windows-cloud/windows-2019"
}
}
metadata {
windows-startup-script-url = "gs://<my-storage>/<my-script.ps1>"
}
network_interface {
network = "default"
access_config {
}
}
tags = ["http-server", "windows-server"]
}
resource "google_compute_firewall" "http-server" {
name = "default-allow-http"
network = "default"
allow {
protocol = "tcp"
ports = ["80"]
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["http-server"]
}
resource "google_compute_firewall" "windows-server" {
name = "windows-server"
network = "default"
allow {
protocol = "tcp"
ports = ["3389"]
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["windows-server"]
}
output "ip" {
value = "${google_compute_instance.default.network_interface.0.access_config.0.nat_ip}"
}
Terraform doesn't necessarily require startup scripts to be pulled from GCS buckets. The example here shows:
  }

  metadata = {
    foo = "bar"
  }

  metadata_startup_script = "echo hi > /test.txt"

  service_account {
    scopes = ["userinfo-email", "compute-ro", "storage-ro"]
  }
}
More in the official docs for GCE and PowerShell scripting here.
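If the script needs parameters, one option (a sketch; the template file my-script.ps1.tftpl and the variable param1 are made up for illustration) is to render the script with templatefile() and pass it inline via the windows-startup-script-ps1 metadata key instead of pointing at a GCS URL:
resource "google_compute_instance" "default" {
  # ... name, machine_type, zone, boot_disk, network_interface as in the question ...

  metadata = {
    # Renders the PowerShell template with Terraform-supplied values.
    windows-startup-script-ps1 = templatefile("${path.module}/my-script.ps1.tftpl", {
      param1 = "some value"
    })
  }
}
Inside the template the value is referenced as ${param1}, so the rendered script arrives on the VM with the parameter already substituted.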

How to set aws cloudwatch retention via Terraform

I'm using Terraform to deploy API Gateway/Lambda and already have the appropriate logs in CloudWatch. However, I can't seem to find a way to set the retention on the logs via Terraform using my currently deployed resources (below). It looks like the log group resource is where I'd do it, but I'm not sure how to point the log stream from API Gateway at the new log group. I must be missing something obvious... any advice is very much appreciated!
resource "aws_api_gateway_account" "name" {
cloudwatch_role_arn = "${aws_iam_role.cloudwatch.arn}"
}
resource "aws_iam_role" "cloudwatch" {
name = "#{name}_APIGatewayCloudWatchLogs"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "apigateway.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_policy_attachment" "api_gateway_logs" {
name = "#{name}_api_gateway_logs_policy_attach"
roles = ["${aws_iam_role.cloudwatch.id}"]
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs"
}
resource "aws_api_gateway_method_settings" "name" {
rest_api_id = "${aws_api_gateway_rest_api.name.id}"
stage_name = "${aws_api_gateway_stage.name.stage_name}"
method_path = "${aws_api_gateway_resource.name.path_part}/${aws_api_gateway_method.name.http_method}"
settings {
metrics_enabled = true
logging_level = "INFO"
data_trace_enabled = true
}
}
Yes, you can create the log group resource with the Lambda's log group name before you create the Lambda function, or you can import the existing log groups.
resource "aws_cloudwatch_log_group" "lambda" {
name = "/aws/lambda/${var.env}-${join("", split("_",title(var.lambda_name)))}-Lambda"
retention_in_days = 7
lifecycle {
create_before_destroy = true
prevent_destroy = false
}
}
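For the API Gateway side of the question, a hedged sketch (assuming execution logging is enabled on the stage, as in the method settings above): API Gateway writes execution logs to a group named API-Gateway-Execution-Logs_<rest-api-id>/<stage>, so declaring that group yourself lets Terraform manage its retention:
resource "aws_cloudwatch_log_group" "api_gateway_execution" {
  # The name must match the group API Gateway creates for the stage's execution logs.
  name              = "API-Gateway-Execution-Logs_${aws_api_gateway_rest_api.name.id}/${aws_api_gateway_stage.name.stage_name}"
  retention_in_days = 7
}
If API Gateway has already created the group, import it into state first so Terraform doesn't try to create a duplicate.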