While creating the virtual machine I forgot to tick "Allocate all disk space now".
I have already set up the machine, cloned several VMs from it, and made changes. :(
So I am looking for a way to change the machine's thin-provisioned disk to pre-allocated, so that it occupies its full dedicated size.
Using: VMware Workstation 7.1.4
I created the disk without allocating the disk space up front; now I need a fixed, pre-allocated disk.
Any help would be highly appreciated.
To sum up: how do I change a growable disk to pre-allocated in VMware Workstation?
vmware-vdiskmanager -r sourceDisk.vmdk -t 2 targetDisk.vmdk
Run this as an administrator from the Windows command prompt. On Windows 7, for example, open cmd as administrator, cd to the folder where VMware Workstation is installed, and run the command above.
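A sketch of the whole sequence; the install directory below is the typical default for VMware Workstation on 64-bit Windows, and the .vmdk paths are only examples, so adjust both to your setup:

cd "C:\Program Files (x86)\VMware\VMware Workstation"
REM -r converts the source disk; -t 2 writes it as a pre-allocated single-file disk
vmware-vdiskmanager -r "C:\VMs\MyVM\sourceDisk.vmdk" -t 2 "C:\VMs\MyVM\targetDisk.vmdk"

After the conversion finishes, point the VM at the new targetDisk.vmdk (edit the .vmx file or swap the disk in the VM settings).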
I'm not sure that you can change the disk type to pre-allocated. You can, however, expand a growable disk by selecting Utilities -> Expand on the Hardware tab of the VM settings. You can only expand the disk if the VM has no snapshots and is neither a linked clone nor the parent of a linked clone. To make the newly added space available inside the VM, you then have to use a disk management tool to grow the existing partition to match the new, expanded size of the virtual disk.
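For what it's worth, the expand operation mentioned above is also available from the command line; a sketch, where the new size and disk name are examples:

vmware-vdiskmanager -x 40GB myDisk.vmdk

You still need to grow the partition inside the guest afterwards, exactly as described above.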
Cloud platforms like Linode.com often provide hot-pluggable storage volumes that you can easily attach and detach from a Linux virtual machine without restarting it.
I am looking for a way to install Postgres so that its data and configuration end up on a volume that I have mounted on the virtual machine. The end result should allow me to shut down the machine, detach the volume, spin up another machine with an identical version of Postgres already installed, attach the volume, and have Postgres work just like it did on the old machine, with all the data, file system permissions and server-wide configuration intact.
Is such a thing possible? Is there a reliable way to move installations (i.e databases and configuration, not the actual binaries) of Postgres across machines?
CLARIFICATION: the virtual machine has two disks:
the "built-in" one which is created when the VM is created and mounted to /. That's where Postgres gets installed to and you can't move this disk.
the hot-pluggable disk which you can easily attach and detach from a running VM. This is where I want Postgres data and configuration to be so I can just detach the disk (after shutting down the VM to prevent data loss/corruption) and attach it to another VM when I want my data to move so it behaves like it did on the old VM (i.e. no failures to start Postgres, no errors about permissions or missing files, etc).
This works just fine. It is not really any different from starting and stopping PostgreSQL without removing the disk. There are a couple of things to consider, though.
You have to make sure PostgreSQL is stopped and all writes are synced before unmounting the volume. Obvious enough, and I can't believe you'd be able to unmount before the sync completed, but it's worth repeating.
You will want the same version of PostgreSQL, probably on the same version of operating system with the same locales too. Different distributions might compile it with different options.
Although you can put configuration and data in the same directory hierarchy, most distros tend to put config in /etc. If you compile from source yourself this won't be a problem. Alternatively, you can usually override the default locations or, and this is probably simpler, bind-mount the data and config directories into the places your distro expects.
Note that if your storage allows you to connect the same volume to multiple hosts in some sort of "read only" mode, that won't work.
Edit: steps from a comment, moved into the body for easier reading; a shell sketch of the same steps follows the list.
1. Start up PG, create a table, and put one row in it.
2. Stop PG.
3. Mount your volume at /mnt/db.
4. rsync /var/lib/postgresql/NN/main to /mnt/db/pg_data and /etc/postgresql/NN/main to /mnt/db/pg_etc.
5. Rename /var/lib/postgresql/NN/main by adding .OLD to the name, and do the same with the /etc directory.
6. Bind-mount the directories from /mnt into the original locations.
7. Restart PG.
8. Test.
9. Repeat: return to step 8 until you are happy.
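A minimal shell sketch of steps 3 to 7, assuming a Debian/Ubuntu-style layout; NN stands for your PostgreSQL major version, and the device name /dev/sdc is only an example:

sudo mount /dev/sdc /mnt/db                                           # 3. mount the hot-pluggable volume
sudo rsync -a /var/lib/postgresql/NN/main/ /mnt/db/pg_data/           # 4. copy data, preserving ownership/permissions
sudo rsync -a /etc/postgresql/NN/main/ /mnt/db/pg_etc/                #    ...and config
sudo mv /var/lib/postgresql/NN/main /var/lib/postgresql/NN/main.OLD   # 5. move the originals aside
sudo mv /etc/postgresql/NN/main /etc/postgresql/NN/main.OLD
sudo mkdir /var/lib/postgresql/NN/main /etc/postgresql/NN/main        # 6. recreate the mount points
sudo mount --bind /mnt/db/pg_data /var/lib/postgresql/NN/main         #    and bind-mount the copies
sudo mount --bind /mnt/db/pg_etc /etc/postgresql/NN/main
sudo systemctl start postgresql                                       # 7. restart PG, then test

To survive a reboot you would also want matching bind-mount entries in /etc/fstab.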
I am using Minikube to bootstrap a Kubernetes cluster on my local machine (for learning purposes). I am on Windows, and Minikube is installed on the C drive, which is low on disk space because of personal files and other software. According to the Minikube documentation, it requires 20 GB of disk space for its VM. However, when I try to bootstrap the Kubernetes cluster, booting up sometimes fails with a low-disk-space error, even though disk space is available on my other drives.
On which drive does Minikube allocate its space by default? The drive it is installed on? Is there any way to specify which drive Minikube allocates its 20 GB on?
As pointed out in the comments, disk allocation is done by the driver used to create the VM. In my case I was using Hyper-V as my VM driver, so I used the following steps (they may vary slightly depending on your Windows version; I am using Windows 10):
Start ---> Hyper-V Manager ---> Hyper-V Settings ---> change the default folder used to store virtual hard disk files
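If you prefer PowerShell over the GUI, the Hyper-V module exposes the same setting; a sketch, where the D:\ folder is only an example target:

# Run in an elevated PowerShell session with the Hyper-V module available.
Set-VMHost -VirtualHardDiskPath "D:\Hyper-V\Virtual Hard Disks"   # default folder for new virtual hard disk files
Set-VMHost -VirtualMachinePath "D:\Hyper-V"                       # optional: VM configuration files too

This changes the same default folder as the GUI steps above.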
You can find a detailed illustration here.
I would like to write a script that checks that Docker has access to a minimum X amount of memory, on Windows. I need this to work with Docker running on Hyper-V.
Get-VM under Hyper-V reports the memory assigned to the DockerDesktopVM as 0, I assume because it's using dynamic memory allocation. But I know Docker does have a maximum set for the memory available, i.e. the same memory limit discussed in questions like "Docker won't start on Windows - Not enough memory to start Docker".
Is there some way to get the memory limit assigned to the Docker container from within powershell or the command line?
Of course as soon as I asked this I found it.
(Get-VMMemory DockerDesktopVM).Startup
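Building on that, a minimal sketch of the check the question asks for, assuming the Hyper-V VM is named DockerDesktopVM and that 4 GB is the threshold you care about (both are examples):

# Requires the Hyper-V PowerShell module; run elevated.
$minBytes = 4GB                                             # example minimum
$assigned = (Get-VMMemory -VMName DockerDesktopVM).Startup  # startup memory in bytes
if ($assigned -lt $minBytes) {
    Write-Error ("Docker VM has {0:N1} GB assigned; need at least {1:N1} GB." -f ($assigned / 1GB), ($minBytes / 1GB))
    exit 1
}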
I'd like to write a bootloader/OS using UEFI, so naturally I'm using VirtualBox to shorten the feedback loop. Currently I've made a GPT-partitioned image file in my workspace, but now I'd like to hook it up to a virtual machine. Unfortunately GPT is meant to partition the entire device, and I need to do that inside a virtual hard drive. I've looked into VDI (which I don't think I want) and VHD files, where ultimately I'd like to copy the binary into those files and have it work like booting a normal hard drive under EFI... but I'm lost as to where to start.
There are a few other virtual hard drive formats, but I'm not sure which to pick. There is also little documentation on how any of these formats work. What type of virtual hard drive can I use to accomplish this task? And which format has the best documentation?
I'd suggest not going into the details of the virtual disk layout. The best way to achieve this would be either:
mounting your virtual disk format of choice so that it appears as a normal disk on the host OS (Microsoft allows mounting VHD/VHDX disks on Windows Server), or
attaching the disk to a proxy VM; from inside that proxy VM, your virtual disk will appear as a regular disk.
Once you have abstracted the virtual disk as a regular disk, you can write binary data at any offset you wish.
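For example, from inside a Linux proxy VM, a sketch of getting your image onto the attached disk; the device name /dev/sdb and the file names are examples, so double-check the device before running dd:

sudo dd if=gpt.img of=/dev/sdb bs=1M conv=fsync status=progress   # write the whole GPT-partitioned image to the disk
sudo dd if=payload.bin of=/dev/sdb bs=1M seek=1 conv=fsync        # or write a blob at a 1 MiB offset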
Another thing worth noting: not all hypervisors support UEFI booting, so you'll have to choose a hypervisor that supports UEFI booting to complete your end-to-end experiment.
I set up standalone Ironic with the Mitaka version. I created a whole-disk image (Ubuntu 14.04) with virt-install and used a CoreOS PXE image (here) as the deploy kernel and ramdisk. The disk image size is 10 GB, and it deployed successfully on my node. When I logged into the node and checked the disk info, only 10 GB was in use for /dev/sda, while the physical disk of the node is 500 GB.
How do I make my image use the whole disk of the node after deploying? Thanks.
You have a swap partition after the root partition, and that will block online growing of the root FS.
You need to either move the swap to the front of the disk or remove it entirely.
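As a sketch of the manual follow-up once the swap partition is out of the way, assuming the root filesystem is ext4 on /dev/sda1 (device and partition numbers are examples; growpart comes from the cloud-utils package):

sudo swapoff -a             # stop using the swap partition that sits after root
# delete or relocate the swap partition with parted/fdisk and update /etc/fstab accordingly
sudo growpart /dev/sda 1    # extend partition 1 to the end of the disk
sudo resize2fs /dev/sda1    # grow the ext4 filesystem online to fill the partition

Once the swap is no longer in the way, cloud-init's growpart/resizefs modules can also handle this automatically on first boot.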