Run custom PowerShell script on provisioned Azure VM - powershell

I provisioned a VM with the following C# snippet:
var ssrsVm = new WindowsVirtualMachine("vmssrs001", new WindowsVirtualMachineArgs
{
    Name = "vmssrs001",
    ResourceGroupName = resourceGroup.Name,
    NetworkInterfaceIds = { nic.Id },
    Size = "Standard_B1ms",
    AdminUsername = ssrsLogin,
    AdminPassword = ssrsPassword,
    SourceImageReference = new WindowsVirtualMachineSourceImageReferenceArgs
    {
        Publisher = "microsoftpowerbi",
        Offer = "ssrs-2016",
        Sku = "dev-rs-only",
        Version = "latest"
    },
    OsDisk = new WindowsVirtualMachineOsDiskArgs
    {
        Name = "vmssrs001disk",
        Caching = "ReadWrite",
        DiskSizeGb = 200,
        StorageAccountType = "Standard_LRS",
    }
});
After the VM has been provisioned I would like to run a custom PowerShell script on it to add a firewall rule. I'm now wondering how to do this as part of the Pulumi app.
With plain Azure it looks like I could do this with RunPowerShellScript, but I couldn't find anything about it in the Pulumi docs. Maybe there is a better way to handle my case?
UPDATE
Thanks to Ash's comment I was able to find VirtualMachineRunCommandByVirtualMachine, which seems like it should do what I'm looking for, but unfortunately the following code snippet returns an error:
var virtualMachineRunCommandByVirtualMachine = new VirtualMachineRunCommandByVirtualMachine("vmssrs001-script",
    new VirtualMachineRunCommandByVirtualMachineArgs
    {
        ResourceGroupName = resourceGroup.Name,
        VmName = ssrsVm.Name,
        RunAsUser = ssrsLogin,
        RunAsPassword = ssrsPassword,
        RunCommandName = "enable firewall rule for ssrs",
        Source = new VirtualMachineRunCommandScriptSourceArgs
        {
            Script = @"Firewall AllowHttpForSSRS
            {
                Name = 'AllowHTTPForSSRS'
                DisplayName = 'AllowHTTPForSSRS'
                Group = 'PT Rule Group'
                Ensure = 'Present'
                Enabled = 'True'
                Profile = 'Public'
                Direction = 'Inbound'
                LocalPort = ('80')
                Protocol = 'TCP'
                Description = 'Firewall Rule for SSRS HTTP'
            }"
        }
    });
error
The property 'runCommands' is not valid because the 'Microsoft.Compute/RunCommandPreview' feature is not enabled for this subscription.
It looks like other people are struggling with the same issue here.

You can use a Compute Extension to execute a script against a VM with Pulumi.
This article details some of the options if you were completing the procedure via PowerShell.

As an addition to Ash's answer, here is how I integrated it with Pulumi.
First, I create a blob container for my project's scripts:
var deploymentContainer = new BlobContainer("deploymentscripts", new BlobContainerArgs
{
    ContainerName = "deploymentscripts",
    ResourceGroupName = resourceGroup.Name,
    AccountName = storageAccount.Name,
});
Next, I upload all of my PowerShell scripts to the created blob container with this snippet:
foreach (var file in Directory.EnumerateFiles(Path.Combine(Environment.CurrentDirectory, "Scripts")))
{
    var fileName = Path.GetFileName(file);
    var blob = new Blob(fileName, new BlobArgs
    {
        ResourceGroupName = resourceGroup.Name,
        AccountName = storageAccount.Name,
        ContainerName = deploymentContainer.Name,
        Source = new FileAsset(file),
    });
    deploymentFiles[fileName] = SignedBlobReadUrl(blob, deploymentContainer, storageAccount, resourceGroup);
}
SignedBlobReadUrl is grabbed from the Pulumi examples repo:
private static Output<string> SignedBlobReadUrl(Blob blob, BlobContainer container, StorageAccount account, ResourceGroup resourceGroup)
{
    return Output.Tuple<string, string, string, string>(
        blob.Name, container.Name, account.Name, resourceGroup.Name).Apply(t =>
    {
        (string blobName, string containerName, string accountName, string resourceGroupName) = t;
        var blobSAS = ListStorageAccountServiceSAS.InvokeAsync(new ListStorageAccountServiceSASArgs
        {
            AccountName = accountName,
            Protocols = HttpProtocol.Https,
            SharedAccessStartTime = "2021-01-01",
            SharedAccessExpiryTime = "2030-01-01",
            Resource = SignedResource.C,
            ResourceGroupName = resourceGroupName,
            Permissions = Permissions.R,
            CanonicalizedResource = "/blob/" + accountName + "/" + containerName,
            CacheControl = "max-age=5",
        });
        return Output.Format($"https://{accountName}.blob.core.windows.net/{containerName}/{blobName}?{blobSAS.Result.ServiceSasToken}");
    });
}
And lastly, I create an Extension to run my script:
var extension = new Extension("ssrsvmscript", new Pulumi.Azure.Compute.ExtensionArgs
{
    Name = "ssrsvmscript",
    VirtualMachineId = ssrsVm.Id,
    Publisher = "Microsoft.Compute",
    Type = "CustomScriptExtension",
    TypeHandlerVersion = "1.10",
    Settings = deploymentFiles["ssrsvm.ps1"].Apply(script => @"{
        ""commandToExecute"": ""powershell -ExecutionPolicy Unrestricted -File ssrsvm.ps1"",
        ""fileUris"": [" + "\"" + script + "\"" + "]}")
});
Hope that will save some time for someone else struggling with the problem.

Related

Terraform import error retrieving Virtual Machine Scale Set created from an image

I'm trying to import a Linux VM Scale Set that was deployed in the Azure Portal from a custom shared image, also created in the portal. I'm using the following command:
terraform import module.vm_scaleset.azurerm_linux_virtual_machine_scale_set.vmscaleset /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1
Import fails with the following error:
Error: retrieving Virtual Machine Scale Set "vmss1" (Resource Group "myrg"): properties.virtualMachineProfile.osProfile was nil
Below is my VM Scale set module code
data "azurerm_lb" "loadbalancer" {
  name                = var.lbName
  resource_group_name = var.rgName
}

data "azurerm_lb_backend_address_pool" "addresspool" {
  loadbalancer_id = data.azurerm_lb.loadbalancer.id
  name            = var.lbAddressPool
}

data "azurerm_shared_image" "scaleset_image" {
  provider            = azurerm.ist
  name                = var.scaleset_image_name
  gallery_name        = var.scaleset_image_gallery
  resource_group_name = var.scaleset_image_rgname
}

resource "azurerm_linux_virtual_machine_scale_set" "vmscaleset" {
  name                            = var.vmssName
  resource_group_name             = var.rgName
  location                        = var.location
  sku                             = var.vms_sku
  instances                       = var.vm_instances
  admin_username                  = azurerm_key_vault_secret.vmssusername.value
  admin_password                  = azurerm_key_vault_secret.vmsspassword.value
  disable_password_authentication = false
  zones                           = var.vmss_zones
  source_image_id                 = data.azurerm_shared_image.scaleset_image.id
  tags                            = module.vmss_tags.tags

  os_disk {
    storage_account_type = var.vmss_osdisk_storage
    caching              = "ReadWrite"
    create_option        = "FromImage"
  }

  data_disk {
    storage_account_type = "StandardSSD_LRS"
    caching              = "None"
    disk_size_gb         = 1000
    lun                  = 10
    create_option        = "FromImage"
  }

  network_interface {
    name                          = format("nic-%s-001", var.vmssName)
    primary                       = true
    enable_accelerated_networking = true

    ip_configuration {
      name                                   = "internal"
      load_balancer_backend_address_pool_ids = [data.azurerm_lb_backend_address_pool.addresspool.id]
      primary                                = true
      subnet_id                              = var.subnet_id
    }
  }

  lifecycle {
    ignore_changes = [
      tags
    ]
  }
}
The source image was created from a Linux RHEL 8.6 VM that included a custom node.js script.
Examination of the Scale Set in the portal does indeed show that the virtualMachineProfile.osProfile is absent.
I haven't been able to find a solution on any forum. Is there any way to ignore the error and import the Scale Set anyway?

Create an Azure PostgreSQL schema using Terraform on an Azure PostgreSQL database

I am able to create an azurerm_postgresql_flexible_server and azurerm_postgresql_flexible_server_database using Terraform.
I am not able to create a schema using TF, and could not get much help from the documentation.
I also checked https://registry.terraform.io/providers/cyrilgdn/postgresql/latest/docs/resources/postgresql_schema
but that uses a different provider. I am not sure what I am missing here.
This is the TF template which creates the Azure PostgreSQL server and DB -
module "common_modules" {
  source = "../modules/Main"
}

provider "azurerm" {
  features {}
}

locals {
  # Construct tag data for the resource
  resourceTags = {
    environment  = var.environment
    createdBy    = var.createdBy
    managedBy    = var.managedBy
    colorBand    = var.colorBand
    purpose      = var.purpose
    lastUpdateOn = formatdate("DD-MM-YYYY hh:mm:ss ZZZ", timestamp())
  }
}

resource "azurerm_postgresql_flexible_server" "postgreSQL" {
  name                         = var.postgreSQL
  location                     = var.location
  resource_group_name          = var.ckeditorResorceGroup
  administrator_login          = var.postgreSQLAdmin
  administrator_password       = var.password
  sku_name                     = "B_Standard_B1ms"
  version                      = "13"
  storage_mb                   = 32768
  backup_retention_days        = 7
  geo_redundant_backup_enabled = false
  tags                         = local.resourceTags
}

resource "azurerm_postgresql_flexible_server_database" "postgreSQLDB" {
  name      = var.postgreSQLDB
  server_id = azurerm_postgresql_flexible_server.postgreSQL.id
  collation = "en_US.utf8"
  charset   = "utf8"
}

resource "azurerm_postgresql_flexible_server_firewall_rule" "postgreSQLFirewallRule" {
  name             = "allow_access_to_azure_services"
  server_id        = azurerm_postgresql_flexible_server.postgreSQL.id
  start_ip_address = "0.0.0.0"
  end_ip_address   = "0.0.0.0"
}
Have a look at https://registry.terraform.io/providers/cyrilgdn/postgresql or https://github.com/cyrilgdn/terraform-provider-postgresql
It is usable, but you need network connectivity to resolve names (Azure private DNS zone) and to connect to the PostgreSQL flexible server. The Terraform code should run in the same VNet as the flexible server.
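For completeness, here is a minimal sketch of wiring that provider up against the flexible server defined in the question (the provider connection settings and the schema name "app" are assumptions; as noted, the machine running Terraform must be able to reach the server):

```hcl
terraform {
  required_providers {
    postgresql = {
      source = "cyrilgdn/postgresql"
    }
  }
}

# Connection details are assumptions; adjust to your environment.
provider "postgresql" {
  host      = azurerm_postgresql_flexible_server.postgreSQL.fqdn
  port      = 5432
  database  = var.postgreSQLDB
  username  = var.postgreSQLAdmin
  password  = var.password
  sslmode   = "require"
  superuser = false
}

# Hypothetical schema name "app".
resource "postgresql_schema" "app" {
  name     = "app"
  database = azurerm_postgresql_flexible_server_database.postgreSQLDB.name
}
```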

OCI: Create nodes in Kubernetes nodepool with bastion agent configured

I'm trying to deploy a Kubernetes cluster in Oracle Cloud Infrastructure using Terraform.
I want every node deployed (in a private subnet) to have the Bastion plugin activated in Oracle Cloud Agent.
But I cannot see how to define those details of the instances (setting agent_config on the node pool instances).
My code so far is:
resource "oci_containerengine_cluster" "generated_oci_containerengine_cluster" {
  compartment_id = var.cluster_compartment

  endpoint_config {
    is_public_ip_enabled = "true"
    subnet_id            = oci_core_subnet.oke_public_api.id
  }

  kubernetes_version = var.kubernetes_version
  name               = "josealbarran_labcloudnative_oke"

  options {
    kubernetes_network_config {
      pods_cidr     = "10.244.0.0/16"
      services_cidr = "10.96.0.0/16"
    }
    service_lb_subnet_ids = [oci_core_subnet.oke_public_lb.id]
  }

  vcn_id = var.cluster_vcn
}

# Check doc: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/containerengine_node_pool
resource "oci_containerengine_node_pool" "node_pool01" {
  cluster_id     = oci_containerengine_cluster.generated_oci_containerengine_cluster.id
  compartment_id = var.cluster_compartment

  initial_node_labels {
    key   = "name"
    value = "pool01"
  }

  kubernetes_version = var.kubernetes_version
  name               = "lab_cloud_native_oke_pool01"

  node_config_details {
    size = length(data.oci_identity_availability_domains.ads.availability_domains)

    dynamic "placement_configs" {
      for_each = data.oci_identity_availability_domains.ads.availability_domains[*].name
      content {
        availability_domain = placement_configs.value
        subnet_id           = oci_core_subnet.oke_private_worker.id
      }
    }
  }

  node_shape = "VM.Standard.A1.Flex"

  node_shape_config {
    memory_in_gbs = "16"
    ocpus         = "1"
  }

  node_source_details {
    image_id    = "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaalgodii3qx3mfasp6ai22bja7mabfwsxiwkzxx7lhdfdbbuyqcznq"
    source_type = "IMAGE"
  }

  ssh_public_key = "ssh-rsa AAAAB3xxxxxxxx......."

  timeouts {
    create = "60m"
    delete = "90m"
  }
}
You can use a "cloudinit_config" data source to run custom scripts on an OKE node pool in OCI. Each part's content can be rendered from a template, e.g.:
second_script_template = templatefile("${path.module}/cloudinit/second.template.sh", {})
Multiple scripts are combined like this:
data "cloudinit_config" "worker" {
  gzip          = false
  base64_encode = true

  part {
    filename     = "worker.sh"
    content_type = "text/x-shellscript"
    content      = local.worker_script_template
  }

  part {
    filename     = "second.sh"
    content_type = "text/x-shellscript"
    content      = local.second_script_template
  }

  part {
    filename     = "third.sh"
    content_type = "text/x-shellscript"
    content      = local.third_script_template
  }
}
Refer : https://github.com/oracle-terraform-modules/terraform-oci-oke/blob/main/docs/instructions.adoc#14-configuring-cloud-init-for-the-nodepools
If you just want to edit the default script: https://github.com/oracle-terraform-modules/terraform-oci-oke/blob/main/docs/cloudinit.adoc
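To actually attach the rendered cloud-init to the pool, the linked module passes it through node metadata; a sketch, assuming the `node_metadata` argument with a `user_data` key:

```hcl
resource "oci_containerengine_node_pool" "node_pool01" {
  # ... all of the arguments shown in the question ...

  # Base64-encoded multi-part cloud-init rendered by the data source above.
  node_metadata = {
    user_data = data.cloudinit_config.worker.rendered
  }
}
```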

How to pass a PowerShell script using user_data (AWS and Terraform)?

I am trying to pass a PowerShell script as the file IIS.txt, which is present in the CWD.
I don't see the script running on the server. I am not sure if I am missing something. Any help would be appreciated.
resource "aws_instance" "db1" {
  ami           = "ami-1234567890"
  instance_type = "t3.small"
  subnet_id     = "${aws_subnet.db.0.id}"
  key_name      = "ireland"
  user_data     = "${file("IIS.txt")}"

  tags = {
    Name = "sql node 1"
  }
}
I've used a template_file data source and a local_file resource for this.
data "template_file" "user_data" {
  template = "${file("iis.txt")}"
}

resource "local_file" "user_data" {
  content  = "${data.template_file.user_data.rendered}"
  filename = "user_data-${sha1(data.template_file.user_data.rendered)}.ps"
}
Then point your user_data property at the content of the local_file resource.
resource "aws_instance" "db1" {
  ami           = "ami-1234567890"
  instance_type = "t3.small"
  subnet_id     = "${aws_subnet.db.0.id}"
  key_name      = "ireland"
  user_data     = "${local_file.user_data.content}"

  tags = {
    Name = "sql node 1"
  }
}
This also allows you to get a little fancier: make the script a template, pull TF variables etc. into it, and render it just in time before you deploy.
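A sketch of that template approach (the file iis.ps1.tpl and the site_name variable are hypothetical). One thing worth checking first: on Windows AMIs, EC2Launch only executes user data wrapped in <powershell> tags, so a bare script in IIS.txt may never run at all.

```hcl
# iis.ps1.tpl would contain something like:
#   <powershell>
#   Install-WindowsFeature -Name Web-Server
#   Write-Output "Setting up ${site_name}"
#   </powershell>

resource "aws_instance" "db1" {
  ami           = "ami-1234567890"
  instance_type = "t3.small"

  # Rendered just in time, with Terraform variables injected.
  user_data = templatefile("${path.module}/iis.ps1.tpl", {
    site_name = "MySite"
  })
}
```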

How to get the public IP of an Azure VM from the below Terraform code

I have Terraform code which needs to retrieve the public IP of a VM. Here is my code:
# Create virtual machine
resource "azurerm_virtual_machine" "myterraformvm" {
  name                  = "myTerraformVM"
  location              = "Central India"
  resource_group_name   = "rg-mpg-devops-poc"
  network_interface_ids = ["/subscriptions/*************/resourceGroups/rg-mpg-devops-poc/providers/Microsoft.Network/networkInterfaces/nic-mpg-devops"]
  vm_size               = "Standard_DS1_v2"

  storage_os_disk {
    name              = "myOsDisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
  }

  os_profile {
    computer_name  = "myvm"
    admin_username = "azureuser"
  }

  os_profile_linux_config {
    disable_password_authentication = true
    ssh_keys {
      path     = "/home/azureuser/.ssh/authorized_keys"
      key_data = "ssh-rsa *********************"
    }
  }

  boot_diagnostics {
    enabled     = "true"
    storage_uri = "https://*******.blob.core.windows.net/"
  }
}
Here I am using the NIC ID, which has a public IP associated by default. Can someone help me with this?
You would use a data source for that:
data "azurerm_network_interface" "test" {
  name                = "acctest-nic"
  resource_group_name = "networking"
}
That will give you the NIC object, which has an ip_configuration block, which (in turn) has a public_ip_address_id attribute, and you use that to get the data for the public IP:
data "azurerm_public_ip" "test" {
  name                = "name_of_public_ip"
  resource_group_name = "name_of_resource_group"
}

output "domain_name_label" {
  value = "${data.azurerm_public_ip.test.domain_name_label}"
}

output "public_ip_address" {
  value = "${data.azurerm_public_ip.test.ip_address}"
}
You will have to parse the resource ID into the resource group / name of the resource, obviously, but that can easily be done with split + an array index.
https://www.terraform.io/docs/providers/azurerm/d/public_ip.html
https://www.terraform.io/docs/providers/azurerm/d/network_interface.html
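A minimal sketch of that split + array-index parsing, assuming the standard Azure resource ID layout (the data source name "parsed" is hypothetical):

```hcl
locals {
  # e.g. /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/publicIPAddresses/<name>
  public_ip_id_parts = split("/", data.azurerm_network_interface.test.ip_configuration[0].public_ip_address_id)
}

data "azurerm_public_ip" "parsed" {
  name                = element(local.public_ip_id_parts, 8)
  resource_group_name = element(local.public_ip_id_parts, 4)
}
```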
I tried this and could not retrieve the public IP (more than likely pilot error).
In my case I needed to retrieve an address for installing Chef in a later step, so an IP or FQDN would work. Here is how I got through this:
When creating my public IP, I added the domain label. Use this same value when you define your machine name.
resource "azurerm_public_ip" "CSpublicip" {
  name                = "myPublicIP"
  location            = "eastus"
  resource_group_name = "${azurerm_resource_group.CSgroup.name}"
  allocation_method   = "Dynamic"
  domain_name_label   = "csvm${random_integer.server.result}"
}
When you add the domain label, Azure creates a reachable FQDN. Once you have that, you can use/retrieve the fqdn.
output "AzurePage" {
  value = "${azurerm_public_ip.CSpublicip.fqdn}"
}