Create an Azure PostgreSQL schema using Terraform on an Azure PostgreSQL Database - postgresql

I am able to create an azurerm_postgresql_flexible_server and an azurerm_postgresql_flexible_server_database using Terraform.
I am not able to create a schema using TF, and I have not been able to find much help in the documentation.
I also checked https://registry.terraform.io/providers/cyrilgdn/postgresql/latest/docs/resources/postgresql_schema,
but that uses a different provider. I am not sure what I am missing here.
This is the TF template which creates the Azure PostgreSQL server and DB:
module "common_modules" {
source = "../modules/Main"
}
provider "azurerm" {
features {}
}
locals {
#Construct Tag Data for Resource
resourceTags = {
environment = var.environment
createdBy = var.createdBy
managedBy = var.managedBy
colorBand = var.colorBand
purpose = var.purpose
lastUpdateOn = formatdate("DD-MM-YYYY hh:mm:ss ZZZ", timestamp())
}
}
resource "azurerm_postgresql_flexible_server" "postgreSQL" {
name = var.postgreSQL
location = var.location
resource_group_name = var.ckeditorResorceGroup
administrator_login = var.postgreSQLAdmin
administrator_password = var.password
sku_name = "B_Standard_B1ms"
version = "13"
storage_mb = 32768
backup_retention_days = 7
geo_redundant_backup_enabled = false
tags = local.resourceTags
}
resource "azurerm_postgresql_flexible_server_database" "postgreSQLDB" {
name = var.postgreSQLDB
server_id = azurerm_postgresql_flexible_server.postgreSQL.id
collation = "en_US.utf8"
charset = "utf8"
}
resource "azurerm_postgresql_flexible_server_firewall_rule" "postgreSQLFirewallRule" {
name = "allow_access_to_azure_services"
server_id = azurerm_postgresql_flexible_server.postgreSQL.id
start_ip_address = "0.0.0.0"
end_ip_address = "0.0.0.0"
}

Have a look at https://registry.terraform.io/providers/cyrilgdn/postgresql or https://github.com/cyrilgdn/terraform-provider-postgresql.
It is usable, but you need network connectivity for name resolution (the Azure private DNS zone) and to connect to the PostgreSQL Flexible Server, so the Terraform code should run in the same VNet as the flexible server.
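As a rough sketch (the provider version constraint and the schema name are illustrative, not from the original post), the cyrilgdn/postgresql provider can be pointed at the flexible server created above and then used to declare the schema:

terraform {
  required_providers {
    postgresql = {
      source  = "cyrilgdn/postgresql"
      version = "~> 1.21" # illustrative version constraint
    }
  }
}

# The runner needs a network path to the server (same VNet / private DNS zone,
# or an allowed firewall rule) for this provider to connect.
provider "postgresql" {
  host      = azurerm_postgresql_flexible_server.postgreSQL.fqdn
  port      = 5432
  database  = azurerm_postgresql_flexible_server_database.postgreSQLDB.name
  username  = var.postgreSQLAdmin
  password  = var.password
  sslmode   = "require"
  superuser = false # the Azure admin login is not a true superuser
}

# Creates the schema inside the database defined above.
resource "postgresql_schema" "app" {
  name     = "app" # hypothetical schema name
  database = azurerm_postgresql_flexible_server_database.postgreSQLDB.name
  owner    = var.postgreSQLAdmin
}

Because the provider is configured from attributes of resources in the same configuration, the server usually has to exist already (or the run has to be split into two stages) before the schema resource can be planned.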

Related

AWS RDS for PostgreSQL Connection attempt timed out error

I created a PostgreSQL RDS instance in AWS with Terraform. Checking from the AWS console, everything seems normal, but when I try to connect to the database with DBeaver I can't connect. Likewise, I can't make an SSH connection to the EC2 instance I created, so the two issues may be related.
The Terraform code I wrote:
# postgres-db/main.tf
resource "aws_db_instance" "default" {
allocated_storage = 20
storage_type = "gp2"
engine = var.engine
engine_version = var.engine-version
instance_class = var.instance-class
db_name = var.db-name
identifier = var.identifier
username = var.username
password = var.password
port = var.port
publicly_accessible = var.publicly-accessible
db_subnet_group_name = var.db-subnet-group-name
parameter_group_name = var.parameter-group-name
vpc_security_group_ids = var.vpc-security-group-ids
apply_immediately = var.apply-immediately
skip_final_snapshot = true
}
module "service-db" {
source = "./postgres-db"
apply-immediately = true
db-name = var.service-db-name
db-subnet-group-name = data.terraform_remote_state.server.outputs.db_subnet_group
identifier = "${var.app-name}-db"
password = var.service-db-password
publicly-accessible = true # TODO: True for now, but should be false
username = var.service-db-username
vpc-security-group-ids = [data.terraform_remote_state.server.outputs.security_group_allow_internal_postgres]
}
resource "aws_security_group" "allow_internal_postgres" {
name = "allow-internal-postgres"
description = "Allow internal Postgres traffic"
vpc_id = aws_vpc.vpc.id
ingress {
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_blocks = [aws_vpc.vpc.cidr_block, "0.0.0.0/0"] # TODO: Remove public IP
}
}
In the research I did, the advice was mostly to edit the security group rules or make the instance publicly accessible, and the screenshots of my security group inbound rules and the "Publicly accessible" setting seem to match that.
How can I solve this problem? Can you please help?
I solved my problem by setting the subnet group to public.
module "service-db" {
source = "./postgres-db"
apply-immediately = true
db-name = var.service-db-name
db-subnet-group-name = data.terraform_remote_state.server.outputs.db_subnet_group_public
identifier = "${var.app-name}-db"
password = var.service-db-password
publicly-accessible = true # TODO: True for now, but should be false
username = var.service-db-username
vpc-security-group-ids = [data.terraform_remote_state.server.outputs.security_group_allow_internal_postgres]
}
resource "aws_db_subnet_group" "private" {
name = "${var.server_name}-db-subnet-group-private"
subnet_ids = aws_subnet.private.*.id
tags = {
Name = "${var.server_name} DB Subnet Group Private"
}
}
resource "aws_db_subnet_group" "public" {
name = "${var.server_name}-db-subnet-group-public"
subnet_ids = aws_subnet.public.*.id
tags = {
Name = "${var.server_name} DB Subnet Group Public"
}
}
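As a hedged follow-up to the TODOs above, once access is confirmed another way (e.g. through a bastion host or VPN), publicly-accessible can go back to false, db-subnet-group-name can point at the private subnet group again, and the open CIDR can be dropped from the security group, roughly like this:

resource "aws_security_group" "allow_internal_postgres" {
  name        = "allow-internal-postgres"
  description = "Allow internal Postgres traffic"
  vpc_id      = aws_vpc.vpc.id

  ingress {
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.vpc.cidr_block] # 0.0.0.0/0 removed once public access is no longer needed
  }
}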

Terraform import error retrieving Virtual Machine Scale Set created from an image

I'm trying to import a Linux VM Scale Set that was deployed in the Azure Portal from a custom shared image, also created in the portal. I'm using the following command:
terraform import module.vm_scaleset.azurerm_linux_virtual_machine_scale_set.vmscaleset /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1
Import fails with the following error:
Error: retrieving Virtual Machine Scale Set "vmss1" (Resource Group "myrg"): properties.virtualMachineProfile.osProfile was nil
Below is my VM Scale Set module code:
data "azurerm_lb" "loadbalancer" {
name = var.lbName
resource_group_name = var.rgName
}
data "azurerm_lb_backend_address_pool" "addresspool" {
loadbalancer_id = data.azurerm_lb.loadbalancer.id
name = var.lbAddressPool
}
data "azurerm_shared_image" "scaleset_image" {
provider = azurerm.ist
name = var.scaleset_image_name
gallery_name = var.scaleset_image_gallery
resource_group_name = var.scaleset_image_rgname
}
resource "azurerm_linux_virtual_machine_scale_set" "vmscaleset" {
name = var.vmssName
resource_group_name = var.rgName
location = var.location
sku = var.vms_sku
instances = var.vm_instances
admin_username = azurerm_key_vault_secret.vmssusername.value
admin_password = azurerm_key_vault_secret.vmsspassword.value
disable_password_authentication = false
zones = var.vmss_zones
source_image_id = data.azurerm_shared_image.scaleset_image.id
tags = module.vmss_tags.tags
os_disk {
storage_account_type = var.vmss_osdisk_storage
caching = "ReadWrite"
create_option = "FromImage"
}
data_disk {
storage_account_type = "StandardSSD_LRS"
caching = "None"
disk_size_gb = 1000
lun = 10
create_option = "FromImage"
}
network_interface {
name = format("nic-%s-001", var.vmssName)
primary = true
enable_accelerated_networking = true
ip_configuration {
name = "internal"
load_balancer_backend_address_pool_ids = [data.azurerm_lb_backend_address_pool.addresspool.id]
primary = true
subnet_id = var.subnet_id
}
}
lifecycle {
ignore_changes = [
tags
]
}
}
The source image was created from a Linux RHEL 8.6 VM that included a custom node.js script.
Examination of the Scale Set in the portal does indeed show that the virtualMachineProfile.osProfile is absent.
I haven't been able to find a solution on any forum. Is there any way to ignore the error and import the Scale Set anyway?

Get AKS public IP / load balancer IP of the Kubernetes cluster after applying the Bedrock Terraform templates

Currently I'm applying the following Terraform template in order to create a Kubernetes cluster, and everything works as I expected.
module "subnet" {
source = "git::https://github.com/microsoft/bedrock//cluster/azure/subnet/?ref=master"
subnet_name = var.subnet_name
vnet_name = var.vnet_name
resource_group_name = data.azurerm_resource_group.keyvault.name
address_prefixes = [var.subnet_prefix]
}
module "aks-gitops" {
source = "git::https://github.com/microsoft/bedrock//cluster/azure/aks-gitops/?ref=master"
acr_enabled = var.acr_enabled
agent_vm_count = var.agent_vm_count
agent_vm_size = var.agent_vm_size
cluster_name = var.cluster_name
dns_prefix = var.dns_prefix
flux_recreate = var.flux_recreate
gc_enabled = var.gc_enabled
gitops_ssh_url = var.gitops_ssh_url
gitops_ssh_key_path = var.gitops_ssh_key_path
gitops_path = var.gitops_path
gitops_poll_interval = var.gitops_poll_interval
gitops_label = var.gitops_label
gitops_url_branch = var.gitops_url_branch
kubernetes_version = var.kubernetes_version
resource_group_name = data.azurerm_resource_group.cluster_rg.name
service_principal_id = var.service_principal_id
service_principal_secret = var.service_principal_secret
ssh_public_key = var.ssh_public_key
vnet_subnet_id = module.subnet.subnet_id
network_plugin = var.network_plugin
network_policy = var.network_policy
oms_agent_enabled = var.oms_agent_enabled
}
The next step in Terraform is to configure the CDN/domain setup, and it requires the public IP address (which is already created in the steps above under module "aks-gitops"), but the module outputs don't seem to include that IP address.
Any idea on that? I've dug through all the resources on the internet, and every comment is appreciated.
Thanks, mates!
To retrieve the FQDN which resolves to the public IP of the cluster, create a data resource that references the newly created cluster.
data "azurerm_kubernetes_cluster" "aks-cluster" {
name = var.cluster_name
resource_group_name = data.azurerm_resource_group.cluster_rg.name
}
The address of the newly created cluster can then be accessed via data.azurerm_kubernetes_cluster.aks-cluster.fqdn.
You can follow a similar pattern to retrieve details of a load balancer, or any other resource that is not returned in the module outputs.
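For example, a minimal sketch under the assumption that the load balancer's public IP lives in the AKS-managed node resource group and was created with a known name (the name below is hypothetical):

output "aks_fqdn" {
  value = data.azurerm_kubernetes_cluster.aks-cluster.fqdn
}

# Load balancer public IPs for the cluster live in the AKS-managed node
# resource group; if the IP was created with a known name it can be read
# with the same data-source pattern.
data "azurerm_public_ip" "aks_lb_ip" {
  name                = "aks-ingress-public-ip" # hypothetical resource name
  resource_group_name = data.azurerm_kubernetes_cluster.aks-cluster.node_resource_group
}

output "aks_lb_public_ip" {
  value = data.azurerm_public_ip.aks_lb_ip.ip_address
}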

How to get the public IP of an Azure VM from the Terraform code below

I have Terraform code which needs to retrieve the public IP of a VM; here is my code:
# Create virtual machine
resource "azurerm_virtual_machine" "myterraformvm" {
name = "myTerraformVM"
location = "Central India"
resource_group_name = "rg-mpg-devops-poc"
network_interface_ids = ["/subscriptions/*************/resourceGroups/rg-mpg-devops-poc/providers/Microsoft.Network/networkInterfaces/nic-mpg-devops"]
vm_size = "Standard_DS1_v2"
storage_os_disk {
name = "myOsDisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
os_profile {
computer_name = "myvm"
admin_username = "azureuser"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/azureuser/.ssh/authorized_keys"
key_data = "ssh-rsa *********************"
}}
boot_diagnostics {
enabled = "true"
storage_uri = "https://*******.blob.core.windows.net/"
}}
Here I am using the NIC ID, which already has a public IP assigned to it by default. Can someone help me with this?
You would use data sources for that:
data "azurerm_network_interface" "test" {
name = "acctest-nic"
resource_group_name = "networking"
}
That will give you the NIC object, which has an ip_configuration block, which (in turn) has a public_ip_address_id attribute, and you use that to look up the public IP:
data "azurerm_public_ip" "test" {
name = "name_of_public_ip"
resource_group_name = "name_of_resource_group"
}
output "domain_name_label" {
value = "${data.azurerm_public_ip.test.domain_name_label}"
}
output "public_ip_address" {
value = "${data.azurerm_public_ip.test.ip_address}"
}
You will have to parse the resource ID into the resource group and the name of the resource, obviously, but that can easily be done with split() plus array indexing; see the sketch after the links below.
https://www.terraform.io/docs/providers/azurerm/d/public_ip.html
https://www.terraform.io/docs/providers/azurerm/d/network_interface.html
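Putting the two data sources together, here is a minimal sketch in 0.12+ syntax (the NIC name and resource group are taken from the question; the index positions assume the standard Azure resource ID layout):

data "azurerm_network_interface" "vm_nic" {
  name                = "nic-mpg-devops"
  resource_group_name = "rg-mpg-devops-poc"
}

locals {
  # ID layout: /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/publicIPAddresses/<name>
  public_ip_id_parts = split("/", data.azurerm_network_interface.vm_nic.ip_configuration[0].public_ip_address_id)
}

data "azurerm_public_ip" "vm_public_ip" {
  name                = element(local.public_ip_id_parts, length(local.public_ip_id_parts) - 1)
  resource_group_name = local.public_ip_id_parts[4]
}

output "vm_public_ip_address" {
  value = data.azurerm_public_ip.vm_public_ip.ip_address
}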
I tried this and could not retrieve the public IP. (more than likely pilot error.)
In my case I needed to retrieve an address for installing chef in a later step, so IP or FQDN would work. Here is how I got through this:
When creating my public IP, I added a domain name label. Use this same value when you define your machine name.
resource "azurerm_public_ip" "CSpublicip" {
name = "myPublicIP"
location = "eastus"
resource_group_name = "${azurerm_resource_group.CSgroup.name}"
allocation_method = "Dynamic"
domain_name_label = "csvm${random_integer.server.result}"
When you add the domain name label, Azure creates a reachable FQDN. Once you have that, you can retrieve and use the FQDN:
output "AzurePage" {
value = "${azurerm_public_ip.CSpublicip.fqdn}"

Terraform vsphere network interface bond0 configuration

I have a Terraform config that I want to use to create VMs from a vSphere template (Red Hat 7), but I need to be able to specify which network interface the customizations (static IP, subnet, gateway, DNS) are applied to.
provider "vsphere" {
user = "${var.vsphere_user}"
password = "${var.vsphere_password}"
vsphere_server = "${var.vsphere_server}"
allow_unverified_ssl = true
}
resource "vsphere_virtual_machine" "vm1" {
name = "vm1"
folder = "${var.vsphere_folder}"
vcpu = 2
memory = 32768
datacenter = "dc1"
cluster = "cluster1"
skip_customization = false
disk {
template = "${var.vsphere_folder}/${var.template_redhat}"
datastore = "${var.template_datastore}"
type = "thin"
}
network_interface {
label = "${var.vlan}"
ipv4_address = "10.1.1.1"
ipv4_prefix_length = 16
ipv4_gateway = "10.1.1.254"
}
dns_servers = ["10.1.1.254"]
time_zone = "004"
}
I want to apply the static IP to bond0 instead of eth0. Is this possible to do in Terraform?
Thanks.