I used sysprep on a VM (new portal) and lost connectivity to the machine - powershell

In the new portal, there's an icon that says 'Capture'. I assume this was for capturing an image of a VM (snapshot), but it was greyed out. Doing a little reading, several posts suggested running sysprep to prepare the machine for a capture.
I ran it according to those instructions, the machine appears to reboot, but all connectivity is lost.
Anyone know what's going on or how to fix it? Also, are there any ways to capture a snapshot in the new portal or do we need to use PS scripts?

the machine appears to reboot, but all connectivity is lost.
This is by-design behavior. Before capturing a VM image, you must use sysprep to generalize the VM. Generalizing removes all your personal account information, among other things, and prepares the machine to be used as an image.
After sysprep runs, all connectivity to the VM is lost. When you run sysprep, select the Shutdown option rather than Reboot.
For now, you can't capture a VM image via the new Azure portal. You can capture one with PowerShell; refer to this link.
you could create a virtual machine from an image. I can't find the
same function in the new portal.
You can't use the new Azure portal to create a VM from an image either; you can do it with PowerShell. Refer to the link.
Most important:
Before you capture a VM image, back up your VM's VHD first, because the process deletes the original virtual machine after it's captured.
The latest version of Azure PowerShell is 3.6.0; you can install it from this page.
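As a sketch of the PowerShell route, using the AzureRM cmdlets from that release (the resource-group and VM names below are placeholders, and this assumes you have already signed in with Login-AzureRmAccount and sysprepped the VM):

```powershell
# Placeholder names throughout; the VM should already be sysprepped.
Stop-AzureRmVM -ResourceGroupName "myRG" -Name "myVM" -Force

# Mark the stopped VM as generalized, then save its image
Set-AzureRmVM -ResourceGroupName "myRG" -Name "myVM" -Generalized
Save-AzureRmVMImage -ResourceGroupName "myRG" -Name "myVM" `
    -DestinationContainerName "vmimages" -VHDNamePrefix "template" `
    -Path "C:\temp\myVMTemplate.json"
```

The saved JSON template can then be used to deploy new VMs from the captured image.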

Related

Simple workflow for spinning up an Azure VM from a snapshot in ARM?

I need to move from Azure Classic Portal to ARM for working with VMs by November and am trying to get a jump start on learning the new process.
Here is what I do in the Classic Portal now...
Make a Windows Server VM:
Add some software, make some changes, shut it down and click the 'Capture' button in Classic. Provide a name, and label and now have a Snapshot I can make new VM copies from. Easy!
Make a new VM from snapshot:
Click New, Virtual Machine, From Gallery, My Images, Select Image, Create. So easy!
That's it. That's all I do, and all I need to do.
I make 10-30 VMs at a time that way and it's really quick and easy.
How can I do that same workflow in ARM?
I have tried JSON templates, cmdlets, and the UI in ARM, and cannot for the life of me figure out how to emulate the Classic workflow/functionality in ARM.
Any suggestions?
Thanks in advance!
If I understand correctly, you want to create multiple VMs from one VM image in ARM.
After you have prepared your VM, here is the workflow to copy VMs from an existing VM:
Go to Virtual machines > select your VM > Capture > provide an image name and image label (check the box if you have sysprepped your VM) > OK.
Go to Images > select the image you created > Create VM > provide the necessary Basic information > provide Size information > Settings > Summary > OK. (In this step you can also create a template and use it to create more VMs easily.)
Note: for how to prepare and capture a Windows VM to a generalized image, refer to this official document.
But if you want to create multiple VMs from a Classic image in ARM (usually a Classic image can only create Classic VMs), there is more work to do:
Create a new ARM Storage Account
Copy the Specialized VHD from your Source Storage Account to the new ARM Storage Account
Create your new VM and point the Source VHD (Specialized disk) to the copied VHD
For more details about how to move a Classic image to ARM and use it to create VMs, refer to this link.
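The copy step above can be sketched with the Azure.Storage cmdlets; the account names and keys below are placeholders:

```powershell
# Placeholder account names/keys; performs a server-side copy of the specialized VHD.
$srcCtx = New-AzureStorageContext -StorageAccountName "classicstore" -StorageAccountKey "<src-key>"
$dstCtx = New-AzureStorageContext -StorageAccountName "armstore" -StorageAccountKey "<dst-key>"

Start-AzureStorageBlobCopy -SrcContainer "vhds" -SrcBlob "source-vm.vhd" -Context $srcCtx `
    -DestContainer "vhds" -DestBlob "source-vm.vhd" -DestContext $dstCtx

# Poll until the server-side copy completes
Get-AzureStorageBlobCopyState -Container "vhds" -Blob "source-vm.vhd" -Context $dstCtx -WaitForComplete
```

Once the VHD has landed in the ARM storage account, you can attach it as the source OS disk when creating the new VM.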

Apache CloudStack: No templates showing when adding instance

I have set up Apache CloudStack on a CentOS 6.8 machine following the quick installation guide. The management server and KVM are set up on the same machine. The management server is running without problems. I was able to add a zone, pod, cluster, and primary and secondary storage from the web interface. But when I try to add an instance, no templates are shown in the second stage, as you can see in the screenshot.
However, I am able to see two templates under Templates link in web UI.
But when I select the template and navigate to Zone tab, I see Timeout waiting for response from storage host and Ready field shows no.
When I check the management server logs, it seems there is an error when cloudstack tries to mount secondary storage for use. The below segment from cloudstack-management.log file describes this error.
2017-03-09 23:26:43,207 DEBUG [c.c.a.t.Request] (AgentManager-Handler-14:null) (logid:) Seq 2-7686800138991304712: Processing: { Ans: , MgmtId: 279278805450918, via: 2, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer": {"result":false,"details":"com.cloud.utils.exception.CloudRuntimeException: GetRootDir for nfs://172.16.10.2/export/secondary failed due to com.cloud.utils.exception.CloudRuntimeException: Unable to mount 172.16.10.2:/export/secondary at /mnt/SecStorage/6e26529d-c659-3053-8acb-817a77b6cfc6 due to mount.nfs: Connection timed out
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.getRootDir(NfsSecondaryStorageResource.java:2080)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.execute(NfsSecondaryStorageResource.java:1829)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.executeRequest(NfsSecondaryStorageResource.java:265)
    at com.cloud.agent.Agent.processRequest(Agent.java:525)
    at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:833)
    at com.cloud.utils.nio.Task.call(Task.java:83)
    at com.cloud.utils.nio.Task.call(Task.java:29)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)","wait":0}}] }
Can anyone please guide me how to resolve this issue? I have been trying to figure it out for some hours now and don't know how to proceed further.
Edit 1: Please note that my LAN address was 10.103.72.50, which I assume is not a /24 address. I tried to give CentOS a static IP by making the following settings in the ifcfg-eth0 file:
DEVICE=eth0
HWADDR=52:54:00:B9:A6:C0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.10.2
NETMASK=255.255.255.0
GATEWAY=172.16.10.1
DNS1=8.8.8.8
DNS2=8.8.4.4
But doing this would stop my internet access. As a workaround, I reverted these changes and installed all the packages first. Then I changed the IP to static using the same configuration settings as above and started the CloudStack management server. Everything worked fine until I bumped into this template issue. Please help me figure out what might have gone wrong.
I know I'm late, but for people trying out in the future, here it goes:
I hope you successfully added a host as mentioned in the Quick Install Guide before you changed your IP to static, as that step autoconfigures VLANs for the different traffic types and creates two bridges, generally named 'cloud' or 'cloudbr'. CloudStack uses the Secondary Storage VM (SSVM) for all storage-related operations in each zone and cluster. What seems to be the problem here is that the SSVM is not able to communicate with the management server on port 8250. Also try manually mounting the NFS server's mount points from the SSVM shell. You can SSH into the SSVM using the command below:
ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root@<private or link-local IP address of the SSVM>
I suggest you run /usr/local/cloud/systemvm/ssvm-check.sh after SSHing into the SSVM (assuming it is running and has its private, public, and link-local IP addresses). If that doesn't help much, take a look at the secondary storage troubleshooting docs for CloudStack.
If anyone runs into similar issues in the future, I would recommend the following checks.
First, confirm the SSVM is running and in the "Up" state in the System VMs section of the Infrastructure tab, and that you can open a console session to it from the browser. If that works, run the ssvm-check.sh script mentioned above, which systematically checks every point of operation the SSVM depends on. Even if a console session cannot be opened, you can still SSH in using the SSVM's link-local IP address (shown in the SSVM's details) and then execute the script.
If the script says it cannot communicate with the management server on port 8250, check the iptables rules on the management server and make sure all traffic is allowed on port 8250. A quick way to check is nc -v <mngmnt-server-ip> 8250. A simple search will show how to add port 8250 to your iptables rules if it is not open.
Next, you mentioned you used CentOS 6.8, which probably ships an older version of NFS, so execute exportfs -a on your NFS server to make sure all the NFS shares are properly exported without errors.
I would also recommend waiting for the built-in "CentOS 5.5 no GUI" KVM template to finish downloading and show a Ready status of 'Yes' before you start importing your own templates and ISOs.
Finally, if ssvm-check.sh reports everything is good and the download still does not start, run service cloud restart inside the SSVM and check that the service actually got a PID with service cloud status; older system VM templates sometimes need a manual service cloud start even after the restart command. Restarting the cloud service in the SSVM triggers the download of any remaining templates and ISOs.
Side note: the system VMs use a Debian kernel, in case you want to do more troubleshooting. Hope this helps.
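To summarize the checks above as commands (the SSVM link-local IP and management-server IP are placeholders; run each from the host indicated in the comments):

```shell
# From the management server: SSH into the SSVM, then run the health check
ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root@169.254.1.10
/usr/local/cloud/systemvm/ssvm-check.sh

# From inside the SSVM: can we reach the management server on port 8250?
nc -v 172.16.10.2 8250

# On the NFS server: re-export all shares and verify they are visible
exportfs -a
showmount -e localhost

# Inside the SSVM, if templates still do not download:
service cloud restart
service cloud status    # confirm the service actually got a PID
```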

Creating Powershell script for VMWare using veeam

I am trying to create a script to automate the Veeam backup using PowerShell.
I know in the free version I only have 2 options (VeeamZIP and Quick Backup).
I have a Drobo on the network with the share setup and accessible.
I have gone into all the VMWare Hypervisors and created an account with the proper permissions to run a backup.
I am down to creating the syntax for running the backup.
I am confused when I look at their document. I am not sure if I am supposed to use a copy, replication, a backup job, or what.
If I can get the initial syntax to run a backup of one machine I know I can build the script.
Any information would be greatly appreciated.
This cannot be done with this version of VMware.
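For reference, on a licensed ESXi host (not the free hypervisor) a VeeamZIP job can be started from PowerShell. A sketch, assuming Veeam Backup & Replication 8.0 Update 2 or later with its snap-in available; the VM name and share path are placeholders:

```powershell
# Placeholder names; requires the Veeam PowerShell snap-in and a licensed ESXi host.
Add-PSSnapin VeeamPSSnapin
$vm = Find-VBRViEntity -Name "MyVM"
Start-VBRZip -Entity $vm -Folder "\\drobo\VeeamBackups" -Compression 5
```

Once a single-VM VeeamZIP run works, the same pattern can be looped over a list of VM names to build out the full script.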

VHD (Virtual PC) - how to force a Startup regardless of changed VHD

I am using a virtual machine to automate the execution of integration tests for a server-based product.
I am using "Windows XP Mode and Virtual PC" on a developer machine.
I am doing everything using PowerShell. I wish to:
mount the VHD (diskpart)
copy the release package onto the VHD file system
dismount the VHD (diskpart, again)
Start the VM
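A sketch of those four steps in PowerShell (paths, drive letter, and the VM name are placeholders; the COM ProgID assumes Windows Virtual PC is installed):

```powershell
# Placeholder paths and names throughout.
# 1. Mount the VHD with a diskpart script
"select vdisk file=C:\VMs\Test.vhd`nattach vdisk" | Set-Content mount.txt
diskpart /s mount.txt

# 2. Copy the release package onto the mounted volume (drive letter is an assumption)
Copy-Item C:\build\release\* X:\deploy\ -Recurse

# 3. Dismount the VHD
"select vdisk file=C:\VMs\Test.vhd`ndetach vdisk" | Set-Content dismount.txt
diskpart /s dismount.txt

# 4. Start the VM through the Virtual PC COM API
$vpc = New-Object -ComObject VirtualPC.Application
$vm  = $vpc.FindVirtualMachine("TestVM")
$vm.Startup() | Out-Null
```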
Everything is fine as long as I skip step 2. If I change the VHD file system at all, then step 4 fails silently.
If I then go to the Virtual Machines on the Host and start the VM up using the GUI I get a warning:
"Inconsistency in virtual hard disk time stamp detected"
"The virtual hard disk's parent appears to have been modified"
I suspect there is a security feature in here (would make sense). But in my case this feature is not desirable.
Anyone know how to disable the timestamp checking or set the timestamp after I unmount the VHD (before?) ...?
EDIT: Look at the Startup2() method; it takes one parameter, and one of its options is:
vmStartupOption_FixParentTimestampMismatch = 1
... from:
Microsoft method details
As per my edit: there is a Startup2() method that takes a parameter telling it to ignore the timestamp mismatch.
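In PowerShell that would look roughly like this (the VM name is a placeholder; the value 1 is vmStartupOption_FixParentTimestampMismatch from the linked documentation):

```powershell
# Placeholder VM name; flag 1 = vmStartupOption_FixParentTimestampMismatch
$vpc  = New-Object -ComObject VirtualPC.Application
$vm   = $vpc.FindVirtualMachine("TestVM")
$task = $vm.Startup2(1)   # starts despite the parent-timestamp mismatch
```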

Has anyone encountered "Win32 Error : The network path was not found" trying to copy files with FinalBuilder 6?

I have a FinalBuilder job that, as a final step, deploys the compiled app and DLLs to a network share on another server.
About 50% of the time, it just fails with
Win32 Error : The network path was not found
Changing the target from \\myserver\myshare to \\myserver.mydomain.com\myshare will often fix it temporarily - the first 2-3 runs after modifying the build file will work, after which it'll start failing again.
The FinalBuilder task is running with domain credentials granting admin access on the target box; and copying files to/from shares on that server via Windows Explorer works reliably.
I'm completely stumped.
Finally tracked this down. The target server was a virtual machine, and the Hyper-V host network settings were set to "Virtual Network" instead of "Virtual Teamed Network".
I have no idea what that means, but having changed it to Virtual Teamed Network, it works flawlessly. O_o
The network path was not found.
This is related to DNS/WINS not being able to look up the name. When I have seen this, it was due to problems with our DNS servers.
Adding an entry to the lmhosts file prevents the system from looking in DNS/WINS.
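An lmhosts entry looks like this (the IP and hostname are placeholders; the file lives at %SystemRoot%\System32\drivers\etc\lmhosts, and the #PRE keyword preloads the entry into the NetBIOS name cache):

```
172.16.0.5    MYSERVER    #PRE
```

After editing the file, nbtstat -R reloads the name cache so the entry takes effect without a reboot.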
If that does not work, another option to consider is to increase the number of retries on the Action. This can be done from the "Runtime" tab of the action by clicking on "Timing Properties"