Is there a way to clone or copy a Windows Server 2012 R2 instance in Google Cloud?
I have looked, but I couldn't find a way to clone/copy the server into a new Windows Server 2012 R2 instance.
You can create a Windows image using these instructions. The process requires running sysprep and deleting the existing instance (keep the boot disk so you can relaunch your initial instance from it). You can then launch as many new copies as you need from the same image.
https://cloud.google.com/compute/docs/instances/windows/creating-windows-os-image
You can also use persistent disk snapshots to back up and duplicate the disk data:
https://cloud.google.com/compute/docs/instances/windows/creating-windows-persistent-disk-snapshot
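For reference, a minimal sketch of the flow with the gcloud CLI (instance, disk, image, and zone names are placeholders; run sysprep first as the instructions describe):

# Delete the instance but keep its boot disk
gcloud compute instances delete my-instance --zone us-central1-a --keep-disks boot
# Create a reusable image from that boot disk
gcloud compute images create my-windows-image --source-disk my-instance --source-disk-zone us-central1-a
# Launch as many copies as you need from the image
gcloud compute instances create my-clone --zone us-central1-a --image my-windows-image
# Or, for the snapshot route, back up the disk directly
gcloud compute disks snapshot my-disk --zone us-central1-a --snapshot-names my-backup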
Recently I had some problems with my old hard drive and Windows, so I switched over to a new one. I can still access the files on the old hard drive, and I still have an installation of MongoDB on there with some pretty important data (I should have made a backup at some point).
What would be the smartest way to get this data and transfer it to my new instance? Just copying the data files over? This "old" instance is, by the way, not running, and it's not possible for me to start it again.
You could get the "old" instance running again by running mongod on the new system and pointing --dbpath at the data folder on the old hard drive.
It looks like copying the files over is a valid option though.
See https://docs.mongodb.com/manual/core/backups/#back-up-with-cp-or-rsync
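For example, a rough sketch of the copy approach on Windows (drive letters and paths are assumptions; make sure no mongod is running against either directory while you copy):

# Copy the data files from the old drive into the new instance's dbpath
robocopy E:\mongodb\data C:\data\db /E
# Start mongod against the copied files (assumes mongod is on PATH)
mongod --dbpath C:\data\db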
I want to use PostgreSQL on Windows Server 2012 R2 for one of our projects that requires 24/7 uptime.
I would like to ask the community whether I can have two master instances on two different servers, A and B, that both 'work' on the same database located on shared file storage on the LAN. Only one master instance, on server A, will be online at a time; when it goes offline for some reason, a PowerShell script will (I suppose) detect that the PostgreSQL service has stopped and will start the service on server B. The same script will continuously check that only one service across servers A and B is running, to avoid conflicts.
I'd like to ask whether this is possible, or whether there is a better approach for my configuration.
(I can't use replication, because when server A shuts down, server B is in read-only mode, which I don't want.)
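For illustration, the kind of watchdog script I have in mind would look roughly like this (service and host names are placeholders):

# Naive failover watchdog: keep exactly one PostgreSQL service running
$name = 'postgresql-x64-9.6'   # assumed default service name
while ($true) {
    $a = Get-Service -ComputerName 'ServerA' -Name $name -ErrorAction SilentlyContinue
    $b = Get-Service -ComputerName 'ServerB' -Name $name -ErrorAction SilentlyContinue
    if ($a.Status -eq 'Running' -and $b.Status -eq 'Running') {
        # Both are up: stop B so only one postmaster touches the shared directory
        Invoke-Command -ComputerName 'ServerB' { Stop-Service $using:name }
    }
    elseif ($a.Status -ne 'Running' -and $b.Status -ne 'Running') {
        # A looks down: fail over to B
        Invoke-Command -ComputerName 'ServerB' { Start-Service $using:name }
    }
    Start-Sleep -Seconds 10
}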
If you manage to start two instances of PostgreSQL on the same data directory, serious data corruption will happen.
Normally there is a postmaster.pid file that prevents that, but a PostgreSQL server process on a different machine that accesses the same file system will happily unlink that after spewing some log messages, thinking it was left behind from a crash.
So you are really walking on thin ice with a solution like that.
One other issue you didn't think of is the script that is supposed to check whether the server is still running. What if that script fails because, for example, the network connection between the two servers is down, but the server is still up and running happily? Such a “split brain” scenario will cause data corruption with your setup.
Another word of caution: since you seem to be using Windows (Powershell?), you probably envision a CIFS file system when you are talking of shared storage. A Windows “network share” is not a reliable file system — last time I checked, it did not honor _commit.
Creating a reliable failover cluster is harder than you think, and I'd recommend that you check existing solutions before you try to roll your own.
I created a Windows Server 2016 instance on AWS (free tier), then created a "Cold HDD" volume and attached it to the instance through the Management Console. So far so good.
I am able to RDP into the instance after getting the Administrator password, but I can't see the attached "Cold HDD" volume when I log into the instance.
So I launched "Disk Management" on the instance and brought the new volume online manually.
I googled and learned that we need to run a PowerShell script to enable all attached volumes at the start of the instance.
The script is:
<powershell>
C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeDisks.ps1
</powershell>
But I can't find the C:\ProgramData\Amazon folder at all on the C: drive of the Windows 2016 instance.
I don't know what to do.
https://aws.amazon.com/premiumsupport/knowledge-center/secondary-volumes-windows-server-2016/
The InitializeDisks.ps1 script is part of EC2Launch.
To accommodate the change from .NET Framework to .NET Core, the EC2Config service has been deprecated on Windows Server 2016 AMIs and replaced by EC2Launch. EC2Launch is a bundle of Windows PowerShell scripts that perform many of the tasks performed by the EC2Config service.
This should be installed by default on the Windows 2016 AMI, but note that the C:\ProgramData\Amazon directory is hidden.
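You can check for it from PowerShell without changing any Explorer settings, for example:

# Hidden attributes don't affect these checks
Test-Path 'C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeDisks.ps1'
Get-ChildItem -Force 'C:\ProgramData\Amazon'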
If for some reason it is not installed, you should be able to install it manually as follows:
To download and install the latest version of EC2Launch:
1. If you have already installed and configured EC2Launch on an instance, make a backup of the EC2Launch configuration file. The installation process does not preserve changes in this file. By default, the file is located in the directory C:\ProgramData\Amazon\EC2-Windows\Launch\Config.
2. Download EC2Launch.zip from the following location to a directory on the instance: https://s3.amazonaws.com/ec2-downloads-windows/EC2Launch/latest/EC2-Windows-Launch.zip
3. Download the Install.ps1 PowerShell script from the following location to the same directory where you downloaded EC2Launch.zip: https://s3.amazonaws.com/ec2-downloads-windows/EC2Launch/latest/install.ps1
4. Run Install.ps1.
5. Restore your backup of the EC2Launch configuration file to the C:\ProgramData\Amazon\EC2-Windows\Launch\Config directory.
Source: http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2launch.html
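For convenience, here are those steps as a small PowerShell sketch (the scratch directory is an arbitrary choice; the URLs are the ones from the instructions above):

# Download EC2Launch and its installer to a scratch directory, then run the installer
$dir = 'C:\EC2Launch-install'   # any writable directory works
New-Item -ItemType Directory -Path $dir -Force | Out-Null
Invoke-WebRequest -Uri 'https://s3.amazonaws.com/ec2-downloads-windows/EC2Launch/latest/EC2-Windows-Launch.zip' -OutFile "$dir\EC2-Windows-Launch.zip"
Invoke-WebRequest -Uri 'https://s3.amazonaws.com/ec2-downloads-windows/EC2Launch/latest/install.ps1' -OutFile "$dir\install.ps1"
& "$dir\install.ps1"            # installs under C:\ProgramData\Amazon\EC2-Windows\Launch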
As a test, I just deployed the Windows 2016 Base AMI template and can confirm that C:\ProgramData\Amazon does exist (ProgramData is a hidden directory, so go to View > Show Hidden Files to see it).
I also added a Cold Storage HDD and (as you noted) the following User Data (under the "Advanced Details" section of the "Configure Instance Details" page) to my instance on launch:
<powershell>
C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeDisks.ps1
</powershell>
I can confirm that on boot of the VM the Cold HDD was automatically initialized and made available as the D: drive.
If you didn't add the required User Data when you first launched your instance, you can add it later by selecting your instance and going to Actions > Instance Settings > View/Change User Data (the instance must be stopped before its User Data can be changed).
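Also, since you can already RDP into the instance, you can simply run the script by hand instead of relaunching; the EC2Launch docs also describe a -Schedule switch that makes it run at every boot (check that your installed version supports it):

# Bring all attached volumes online right now
C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeDisks.ps1
# Or register it to run automatically at every boot
C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeDisks.ps1 -Schedule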
In the new portal, there's an icon that says 'Capture'. I assume this is for capturing an image of a VM (a snapshot), but it was greyed out. Doing a little reading, several posts suggested running sysprep to prepare the machine for a capture.
I ran it according to those instructions; the machine appeared to reboot, but all connectivity was lost.
Does anyone know what's going on or how to fix it? Also, is there any way to capture a snapshot in the new portal, or do we need to use PowerShell scripts?
the machine appears to reboot, but all connectivity is lost.
This is by-design behavior. Before capturing a VM image, we should use sysprep to generalize the VM. Generalizing a VM removes all your personal account information, among other things, and prepares the machine to be used as an image.
After we run sysprep, we will lose all connectivity. When running sysprep, we should select the shutdown option.
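If you prefer the command line to the sysprep GUI, the equivalent standard switches are:

# Generalize the image and shut the VM down when finished
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown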
For now, we can't capture a VM image via the new Azure portal. We can use PowerShell to capture a VM image; we can refer to this link.
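As a minimal sketch with the AzureRM PowerShell module (resource group, VM, and container names are placeholders):

# Stop the (sysprepped) VM and mark it as generalized
Stop-AzureRmVM -ResourceGroupName 'myRG' -Name 'myVM' -Force
Set-AzureRmVm -ResourceGroupName 'myRG' -Name 'myVM' -Generalized
# Copy the VHDs into the 'images' container and save a deployment template locally
Save-AzureRmVMImage -ResourceGroupName 'myRG' -Name 'myVM' -DestinationContainerName 'images' -VHDNamePrefix 'template' -Path 'C:\template.json'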
you could create a virtual machine from an image. I can't find the same function in the new portal.
We can't use the new Azure portal to create a VM from an image, but we can use PowerShell to create a VM from an image; we can refer to the link.
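A rough sketch of that, again with the AzureRM module and placeholder names (it assumes an existing resource group, NIC, and the storage account holding the captured VHD):

# Build a VM configuration that boots from the captured image
$vm  = New-AzureRmVMConfig -VMName 'myNewVM' -VMSize 'Standard_DS1_v2'
$vm  = Set-AzureRmVMOperatingSystem -VM $vm -Windows -ComputerName 'myNewVM' -Credential (Get-Credential)
$nic = Get-AzureRmNetworkInterface -ResourceGroupName 'myRG' -Name 'myNic'
$vm  = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
# SourceImageUri is the VHD written by Save-AzureRmVMImage; both URIs must live in the same storage account
$imageUri  = 'https://mystorage.blob.core.windows.net/system/Microsoft.Compute/Images/images/template-osDisk.vhd'
$osDiskUri = 'https://mystorage.blob.core.windows.net/vhds/myNewVM-osdisk.vhd'
$vm  = Set-AzureRmVMOSDisk -VM $vm -Name 'osdisk' -VhdUri $osDiskUri -CreateOption FromImage -SourceImageUri $imageUri -Windows
New-AzureRmVM -ResourceGroupName 'myRG' -Location 'West US' -VM $vm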
Most important:
Before you capture a VM image, you should back up your VM's VHD first, because the process will delete the original virtual machine after it is captured.
The latest version of Azure PowerShell is 3.6.0; you can install it from this page.
This is a follow-up to my Stack Overflow post: how do I mount a page blob as a VHD on a worker role instance? After the drive is mounted, I will pass it as the value of the --dbpath parameter to the mongod instance.
In a nutshell, I'm trying to start a single mongo instance with the data directory on an Azure blob (for durability). I'm building on the HelloWorld example on Azure's site; instead of starting a Tomcat instance, I will start a mongo instance.
I suggest you follow this guide: http://www.codeproject.com/Articles/81413/Windows-Azure-Drives-Part-1-Configure-and-Mounting. It explains how to mount the drive, and it also shows how you can save the drive letter as an environment variable.
This is useful because, when you start the mongo instance, you can just use this environment variable together with --dbpath. It might be best to encapsulate all the code in a console application so that you can simply run it before starting the mongo instance.
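As a sketch of that last step (the variable name MONGO_DRIVE and the paths are assumptions; the mounting code from the guide would be what sets it):

# Read the drive letter saved by the mount step, then point mongod at it
$drive = [Environment]::GetEnvironmentVariable('MONGO_DRIVE', 'Machine')
& "$drive\mongodb\bin\mongod.exe" --dbpath "$drive\data"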
I'm not sure whether you can mount a drive in Java. Currently this feature is not available in the Windows Azure Storage Client for Java: https://github.com/WindowsAzure/azure-sdk-for-java. There's no native (C++) API either, so you may need to use .NET to mount the drive and then start your Java process from your .NET application. For now, you can also submit a feature request at http://www.mygreatwindowsazureidea.com/forums/34192-windows-azure-feature-voting.
Best Regards,
Ming Xu.