Packer error "Failed to send shutdown command: dial tcp 172.29.48.100:22: i/o timeout" - PowerShell

I am trying a Packer build with a "shell-local" provisioner. After a successful OS installation I try to attach a second network adapter, but the build gets stuck with this error. The platform is Hyper-V. The code looks like:
source "hyperv-iso" "build-debian" {
boot_command = ["<wait><wait><wait><esc><wait><wait><wait>",
"/install.amd/vmlinuz ",
"initrd=/install.amd/initrd.gz ", "auto=true ", "interface=eth0 ",
"netcfg/disable_dhcp=true ",
"netcfg/confirm_static=true ", "netcfg/get_ipaddress=172.29.48.100 ",
"netcfg/get_netmask=255.255.255.0 ",
"netcfg/get_gateway=172.29.48.1 ", "netcfg/get_nameservers=8.8.8.8 8.8.4.4 ",
"netcfg/get_domain=domain ",
"netcfg/get_hostname=hostname ", "url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg ",
"vga=788 noprompt quiet --<enter> "]
boot_wait = "10s"
configuration_version = "${var.hyperv_version}"
cpus = "${var.cpus}"
disk_block_size = "${var.hyperv_disk_block_size}"
disk_size = "${var.disk_size}"
memory = "${var.memory}"
generation = "${var.hyperv_generation}"
guest_additions_mode = "disable"
http_directory = "${local.http_directory}"
iso_checksum = "sha256:e307d0e583b4a8f7e5b436f8413d4707dd4242b70aea61eb08591dc0378522f3"
iso_url = "http://debian.mirror.vu.lt/debian-cd/11.5.0/amd64/iso-cd/debian-11.5.0-amd64-netinst.iso"
output_directory = "${var.build_directory}/packer-${local.template}-${var.git_sha}"
shutdown_command = "echo 'vagrant' | sudo -S /sbin/halt -h -p"
ssh_host = "${var.ip_address_eth0}"
ssh_keep_alive_interval = "-1s"
ssh_password = "vagrant"
ssh_port = 22
ssh_timeout = "120m"
ssh_username = "vagrant"
headless = "false"
switch_name = "VmNAT"
vm_name = "${local.template}-${var.git_sha}"
}
build {
  name = "BUILD: Debian v11.5"
  source "hyperv-iso.build-debian" {
  }
  provisioner "shell-local" {
    execute_command    = ["powershell.exe", "{{.Vars}} {{.Script}}"]
    env_var_format     = "$env:%s=\"%s\"; "
    tempfile_extension = ".ps1"
    pause_before       = "60s"
    inline = ["Import-Module Hyper-V",
      "Stop-VM -Name ${local.template}-${var.git_sha}",
      "Timeout /T 20",
      "Add-VMNetworkAdapter -VMName ${local.template}-${var.git_sha} -SwitchName ${var.hyperv_switch} -Name Static -DeviceNaming Off",
      "Start-VM -Name ${local.template}-${var.git_sha}"]
  }
}
packer logs:
Failed to send shutdown command: dial tcp 172.29.48.100:22: i/o timeout
Maybe I'm doing something wrong? Does anyone know how to fix this? Thanks for any help.
EDIT:
I made some changes and I think it is a timeout problem. After the provisioner restarts the VM, Packer tries to reconnect while the VM is still booting, and I get errors like the one above. Is it possible that ssh_timeout only applies to the first boot?
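A workaround I am considering is to make the shell-local step itself wait until the guest's SSH port answers again, so that Packer only reconnects once the VM is actually back up. A sketch of an extra inline entry after the existing Start-VM line (it assumes the static IP from the source block; I have not confirmed this fixes it):

"while (-not (Test-NetConnection -ComputerName 172.29.48.100 -Port 22 -InformationLevel Quiet)) { Start-Sleep -Seconds 5 }"

Test-NetConnection with -InformationLevel Quiet returns $true only once the port accepts connections, so the loop exits when SSH is reachable again.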

Related

Run custom Powershell script on provisioned Azure VM

I provisioned a VM with the following C# snippet:
var ssrsVm = new WindowsVirtualMachine("vmssrs001", new WindowsVirtualMachineArgs
{
    Name = "vmssrs001",
    ResourceGroupName = resourceGroup.Name,
    NetworkInterfaceIds = { nic.Id },
    Size = "Standard_B1ms",
    AdminUsername = ssrsLogin,
    AdminPassword = ssrsPassword,
    SourceImageReference = new WindowsVirtualMachineSourceImageReferenceArgs
    {
        Publisher = "microsoftpowerbi",
        Offer = "ssrs-2016",
        Sku = "dev-rs-only",
        Version = "latest"
    },
    OsDisk = new WindowsVirtualMachineOsDiskArgs
    {
        Name = "vmssrs001disk",
        Caching = "ReadWrite",
        DiskSizeGb = 200,
        StorageAccountType = "Standard_LRS",
    }
});
After the VM has been provisioned, I would like to run a custom PowerShell script on it to add a firewall rule, and I'm wondering how to do this as part of the Pulumi app.
With Azure it looks like I could do this with RunPowerShellScript, but I couldn't find anything about it in the Pulumi docs. Maybe there is a better way to handle my case?
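For reference, outside of Pulumi it looks like RunPowerShellScript is what the Az module's Invoke-AzVMRunCommand drives. A sketch of what I mean (the resource group variable and the script file name are placeholders of mine):

# Run a local script on the provisioned VM via the RunCommand API
Invoke-AzVMRunCommand -ResourceGroupName $resourceGroupName -VMName 'vmssrs001' `
    -CommandId 'RunPowerShellScript' -ScriptPath '.\firewall.ps1'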
UPDATE
Thanks to Ash's comment I was able to find VirtualMachineRunCommandByVirtualMachine, which seems like it should do what I'm looking for, but unfortunately the following code snippet returns an error:
var virtualMachineRunCommandByVirtualMachine = new VirtualMachineRunCommandByVirtualMachine("vmssrs001-script",
    new VirtualMachineRunCommandByVirtualMachineArgs
    {
        ResourceGroupName = resourceGroup.Name,
        VmName = ssrsVm.Name,
        RunAsUser = ssrsLogin,
        RunAsPassword = ssrsPassword,
        RunCommandName = "enable firewall rule for ssrs",
        Source = new VirtualMachineRunCommandScriptSourceArgs
        {
            Script =
                @"Firewall AllowHttpForSSRS
                {
                    Name = 'AllowHTTPForSSRS'
                    DisplayName = 'AllowHTTPForSSRS'
                    Group = 'PT Rule Group'
                    Ensure = 'Present'
                    Enabled = 'True'
                    Profile = 'Public'
                    Direction = 'Inbound'
                    LocalPort = ('80')
                    Protocol = 'TCP'
                    Description = 'Firewall Rule for SSRS HTTP'
                }"
        }
    });
error:
The property 'runCommands' is not valid because the 'Microsoft.Compute/RunCommandPreview' feature is not enabled for this subscription.
Looks like other people are struggling with the same here.
You can use a Compute Extension to execute a script against a VM with Pulumi.
This article details some of the options if you just completed the procedure via PowerShell.
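For the rule itself, the DSC-style fragment in the question can be written as a plain PowerShell script for the extension to run. A minimal sketch using New-NetFirewallRule, with the rule name, port, and profile taken from the question:

# Inbound HTTP rule for SSRS (parameters mirror the fragment in the question)
New-NetFirewallRule -Name 'AllowHTTPForSSRS' -DisplayName 'AllowHTTPForSSRS' `
    -Group 'PT Rule Group' -Description 'Firewall Rule for SSRS HTTP' `
    -Direction Inbound -LocalPort 80 -Protocol TCP -Profile Public -Enabled True -Action Allow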
As an addition to Ash's answer, here is how I integrated it with Pulumi.
First, I create a blob container for my project scripts:
var deploymentContainer = new BlobContainer("deploymentscripts", new BlobContainerArgs
{
    ContainerName = "deploymentscripts",
    ResourceGroupName = resourceGroup.Name,
    AccountName = storageAccount.Name,
});
Next, I upload all of my PowerShell scripts to the created blob container with this snippet:
foreach (var file in Directory.EnumerateFiles(Path.Combine(Environment.CurrentDirectory, "Scripts")))
{
    var fileName = Path.GetFileName(file);
    var blob = new Blob(fileName, new BlobArgs
    {
        ResourceGroupName = resourceGroup.Name,
        AccountName = storageAccount.Name,
        ContainerName = deploymentContainer.Name,
        Source = new FileAsset(file),
    });
    deploymentFiles[fileName] = SignedBlobReadUrl(blob, deploymentContainer, storageAccount, resourceGroup);
}
SignedBlobReadUrl is grabbed from the Pulumi repo:
private static Output<string> SignedBlobReadUrl(Blob blob, BlobContainer container, StorageAccount account, ResourceGroup resourceGroup)
{
    return Output.Tuple<string, string, string, string>(
        blob.Name, container.Name, account.Name, resourceGroup.Name).Apply(t =>
    {
        (string blobName, string containerName, string accountName, string resourceGroupName) = t;
        var blobSAS = ListStorageAccountServiceSAS.InvokeAsync(new ListStorageAccountServiceSASArgs
        {
            AccountName = accountName,
            Protocols = HttpProtocol.Https,
            SharedAccessStartTime = "2021-01-01",
            SharedAccessExpiryTime = "2030-01-01",
            Resource = SignedResource.C,
            ResourceGroupName = resourceGroupName,
            Permissions = Permissions.R,
            CanonicalizedResource = "/blob/" + accountName + "/" + containerName,
            CacheControl = "max-age=5",
        });
        return Output.Format($"https://{accountName}.blob.core.windows.net/{containerName}/{blobName}?{blobSAS.Result.ServiceSasToken}");
    });
}
And lastly, I create an Extension to run my script:
var extension = new Extension("ssrsvmscript", new Pulumi.Azure.Compute.ExtensionArgs
{
    Name = "ssrsvmscript",
    VirtualMachineId = ssrsVm.Id,
    Publisher = "Microsoft.Compute",
    Type = "CustomScriptExtension",
    TypeHandlerVersion = "1.10",
    Settings = deploymentFiles["ssrsvm.ps1"].Apply(script => @" {
        ""commandToExecute"": ""powershell -ExecutionPolicy Unrestricted -File ssrsvm.ps1"",
        ""fileUris"": [" + "\"" + script + "\"" + "]}")
});
Hope that will save some time for someone else struggling with the same problem.

AWS IoT - Connection and publishing operations using paho-mqtt not working

Trying to work with AWS IoT, I have the following code that was working yesterday:
import paho.mqtt.client as mqtt
import ssl, random
from time import sleep

mqtt_url = "XXXXXXXX.iot.us-east-2.amazonaws.com"
root_ca = './certs/iotRootCA.pem'
public_crt = './certs/deviceCert.crt'
private_key = './certs/deviceCert.key'

connflag = False

def on_connect(client, userdata, flags, response_code):
    global connflag
    connflag = True
    print("Connected with status: {0}".format(response_code))

def on_publish(client, userdata, mid):
    print userdata + " -- " + mid
    #client.disconnect()

if __name__ == "__main__":
    print "Loaded MQTT configuration information."
    print "Endpoint URL: " + mqtt_url
    print "Root Cert: " + root_ca
    print "Device Cert: " + public_crt
    print "Private Key: " + private_key

    client = mqtt.Client()
    client.tls_set(root_ca,
                   certfile = public_crt,
                   keyfile = private_key,
                   cert_reqs = ssl.CERT_REQUIRED,
                   tls_version = ssl.PROTOCOL_TLSv1_2,
                   ciphers = None)
    client.on_connect = on_connect
    # client.on_publish = on_publish

    print "Connecting to AWS IoT Broker..."
    client.connect(mqtt_url, port = 8883, keepalive = 60)
    client.loop_start()
    # client.loop_forever()

    while 1 == 1:
        sleep(0.5)
        print connflag
        if connflag == True:
            print "Publishing..."
            ap_measurement = random.uniform(25.0, 150.0)
            client.publish("ActivePower", ap_measurement, qos=1)
            print("ActivePower published: " + "%.2f" % ap_measurement)
        else:
            print "waiting for connection..."
As I said, yesterday this code was working. Today, I am getting the following (there is no connection):
python awsiot-publish.py
Loaded MQTT configuration information.
Endpoint URL: XXXXXXX.iot.us-east-2.amazonaws.com
Root Cert: ./certs/iotRootCA.pem
Device Cert: ./certs/deviceCert.crt
Private Key: ./certs/deviceCert.key
Connecting to AWS IoT Broker...
False
waiting for connection...
False
waiting for connection...
False
waiting for connection...
False
I do not know if there is a problem with AWS IoT... I just think the documentation is deficient: it is not clear how we can use our code...
I believe your problem is that your certificate's policy does not have the proper permissions to connect. If not specified, paho generates a random client_id; you should set the client_id explicitly (it is the first argument to mqtt.Client, e.g. mqtt.Client(client_id="yourClientIdGoesHere")). You also need a policy that allows your certificate to connect using that client id:
{
    "Effect": "Allow",
    "Action": "iot:Connect",
    "Resource": "arn:aws:iot:us-east-2:123456789012:client/yourClientIdGoesHere"
}
It can be useful to set your client_id to the same as your thingname. (This is not necessary though.) You can also set the resource in your policy to * and then connect with any client_id:
{
    "Effect": "Allow",
    "Action": "iot:Connect",
    "Resource": "*"
}

Execute linux command on Centos using dotnet core

I'm running a .NET Core console application on a CentOS box. The code below executes a normal command like uptime, but does not execute lz4 -dc --no-sparse vnp.tar.lz4 | tar xf - Logs.pdf:
try
{
    var connectionInfo = new ConnectionInfo("server", "username", new PasswordAuthenticationMethod("username", "pwd"));
    using (var client = new SshClient(connectionInfo))
    {
        client.Connect();
        Console.WriteLine("Hello World!");
        var command = client.CreateCommand("lz4 -dc --no-sparse vnp.tar | tar xf - Logs.pdf");
        var result = command.Execute();
        Console.WriteLine("yup ! UNIX Commands Executed from C#.net Application");
        Console.WriteLine("Response came from UNIX Shell" + result);
        client.Disconnect();
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
}
The expected output is that the Logs.pdf file is extracted and saved in the current location. Can someone tell me where I'm going wrong?
If the application is running on a Linux machine then you can also try this:
// Wrapped in a helper method (the name is arbitrary) so the snippet compiles as posted.
public static string RunBashCommand(string command)
{
    string result = "";
    using (System.Diagnostics.Process proc = new System.Diagnostics.Process())
    {
        proc.StartInfo.FileName = "/bin/bash";
        // -c runs the whole string through bash, so pipes like "lz4 ... | tar ..." work
        proc.StartInfo.Arguments = "-c \" " + command + " \"";
        proc.StartInfo.UseShellExecute = false;
        proc.StartInfo.RedirectStandardOutput = true;
        proc.StartInfo.RedirectStandardError = true;
        proc.Start();
        result += proc.StandardOutput.ReadToEnd();
        result += proc.StandardError.ReadToEnd();
        proc.WaitForExit();
    }
    return result;
}

Windows Server 2016 Dockerfile install service

I am attempting to install a service in a Docker container on Windows Server 2016.
Simply placing the service binary there and running the following in PowerShell:
New-Service -Name Bob -StartupType Automatic -BinaryPathName .\SVCHost.exe
adds the service. However, in the container I get this result:
PS C:\Program Files\COMPANY\Repository> start-service -Name bob
start-service : Service 'bob (Bob)' cannot be started due to the following error: Cannot start service Bob on computer '.'.
At line:1 char:1
+ start-service -Name bob
+ ~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OpenError: (System.ServiceProcess.ServiceController:ServiceController) [Start-Service], ServiceCommandException
I have attempted creating a user and setting the startup user credentials but same issue.
Looking at https://github.com/Microsoft/sql-server-samples/blob/master/samples/manage/windows-containers/mssql-server-2016-express-windows/dockerfile shows that they use sqlexpress to do the install of the service.
Long story short...
How do I register a service in a Windows Server 2016 Docker container?
Also, look at the Dockerfile for microsoft/iis. The real work in the container is done in the IIS Windows Service, but the entrypoint is a binary called ServiceMonitor.exe. The monitor checks the Windows Service, if the Service fails the exe fails, so Docker knows the container is unhealthy.
Fully qualifying the install name works. Thanks @Elton Stoneman.
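For example, with the path from the prompt shown in the question:

# Fully qualified BinaryPathName instead of the relative .\SVCHost.exe
New-Service -Name Bob -StartupType Automatic -BinaryPathName "C:\Program Files\COMPANY\Repository\SVCHost.exe"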
Alternatively, I figured out this works too in my program:
public static bool Install(string serviceName, string serviceDescription, string logonUsername, string logonPassword, string exeFile)
{
    string managementPath = @"\\.\ROOT\CIMV2:Win32_Service";
    ManagementClass mc = new ManagementClass(managementPath);
    ManagementBaseObject inParams = mc.GetMethodParameters("create");
    inParams["Name"] = serviceName;
    inParams["DisplayName"] = serviceDescription;
    inParams["PathName"] = exeFile + " -name " + "\"" + serviceName + "\"";
    inParams["ServiceType"] = ServiceType.Win32OwnProcess;
    inParams["ErrorControl"] = 0;
    inParams["StartMode"] = ServiceStartMode.Automatic;
    inParams["DesktopInteract"] = false;
    inParams["StartName"] = logonUsername;
    inParams["StartPassword"] = logonPassword;
    inParams["LoadOrderGroup"] = null;
    inParams["LoadOrderGroupDependencies"] = null;
    inParams["ServiceDependencies"] = null;
    ManagementBaseObject outParams = mc.InvokeMethod("create", inParams, null);
    string status = outParams["ReturnValue"].ToString();
    return (status == "0" || status == "23");
}

one node of a cluster does not show up in Ganglia web portal

In Ganglia, I have configured 2 clusters. Cluster A has 2 nodes and cluster B has 13 nodes. Cluster B works well, while cluster A only shows 1 node. The other node has exactly the same gmond.conf, which is shown below:
globals {
  daemonize = yes
  setuid = yes
  user = ganglia
  debug_level = 0
  max_udp_msg_len = 1472
  mute = no
  deaf = no
  host_dmax = 0 /*secs */
  cleanup_threshold = 300 /*secs */
  gexec = no
  send_metadata_interval = 0
}

cluster {
  #name = "unspecified"
  name = "rpt"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}

host {
  location = "unspecified"
}

udp_send_channel {
  #mcast_join = 239.2.11.71
  host = qt-dw-master
  port = 8557
  ttl = 1
}

/*
udp_recv_channel {
  #mcast_join = 239.2.11.71
  port = 8557
  #bind = 239.2.11.71
  #bind = qt-dw-master
}
*/

tcp_accept_channel {
  port = 8557
}
gmetad.conf on qt-dw-master is shown below:
data_source "rpt" 60 rpt0:8557 rpt1-db:8557
I have tried using multicast, but it does not work. I also tried to find the log files of gmond, but failed. Can anyone help with this problem?
Are all the gmond daemons running in cluster A? Use the command service gmond status to confirm it.