I am using Minikube to bootstrap a Kubernetes cluster on my local machine (for learning purposes). I am on Windows. Minikube is installed on the C: drive, which is already low on disk space due to personal files and other software. According to the Minikube documentation, it requires 20GB of disk space for its VM. However, when I try to bootstrap the Kubernetes cluster, booting sometimes fails with a low-disk-space error, even though space is available on my other drives.
On which drive does Minikube allocate its space by default? The drive it is installed on? Is there any way to specify which drive Minikube uses for its 20GB allocation?
As pointed out in the comments, disk allocation is done by the driver used to create the VM. In my case I was using hyperv as my VM driver, so I used the following steps. (Your steps may vary slightly depending on your Windows version - I am using Windows 10.)
Start ---> Hyper-V Manager ---> Hyper-V Settings ---> Change the default folder to store virtual hard disk files
You can find a detailed illustration here.
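If you prefer to script the change, the Hyper-V PowerShell module can set the same defaults; a minimal sketch, assuming D:\Hyper-V is where you want the space allocated (run in an elevated PowerShell session on the host):

# Move the default locations for new virtual hard disks and VM configuration files
Set-VMHost -VirtualHardDiskPath "D:\Hyper-V\Virtual Hard Disks" -VirtualMachinePath "D:\Hyper-V"
# Verify the new defaults
Get-VMHost | Select-Object VirtualHardDiskPath, VirtualMachinePath

VMs created afterwards by the hyperv driver will allocate their disks under the new location; existing VMs are not moved.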
For those who have installed Eclipse Che: have you looked at the RAM usage of the core system, before you even start coding?
See What are the minimum requirements to run an Eclipse Che server for 1 user?
I personally run Minishift on Windows 10 in order to play with Red Hat CodeReady Workspaces 1.0.1 (the product version of Che 6.x). I recommend giving Minishift 6GB of RAM, if you can spare it.
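For example, assuming a reasonably current Minishift release, the memory can be passed at start time (6GB being the value I suggest above):

minishift start --memory 6GB

If the Minishift VM already exists, you may need to delete and recreate it for a new memory setting to take effect.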
The resource requirements are detailed in the Che documentation admin guide.
For a multi-user Che deployment, there are 3 containers, which require RAM and storage space for persistent volumes. The absolute minimum resources are:
Che workspace server: 750MB of RAM, 1GB of disk in a PVC
Keycloak: 1GB of RAM, 2 PVCs, 1GB each
PostgreSQL: ~515MB of RAM, 1GB PVC for the database
So just to run the Che server itself, before starting any workspaces, you will need about 3GB of RAM and 4GB of persistent storage.
In addition, workspaces will need ~2GB of RAM each (requirements change based on language runtime and developer tools). Workspaces can use ephemeral storage or persistent storage for source code and work. If using ephemeral storage, you will need to commit your work before the workspace is stopped or auto-suspended. If using persistent storage, you can use one large PVC which is shared by all workspaces, or a separate PVC per workspace (which may take more storage resources, and makes right-sizing more difficult).
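For reference, in recent multi-user Che releases the choice between one shared PVC and per-workspace PVCs is exposed as a server property, set as an environment variable on the Che server deployment. The property name and values below are from memory, so verify them against the admin guide for your version:

# One large PVC shared by all workspaces
CHE_INFRA_KUBERNETES_PVC_STRATEGY=common
# A separate PVC per workspace
CHE_INFRA_KUBERNETES_PVC_STRATEGY=per-workspace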
My problem: I am learning Service Fabric and working through simple tutorials, and the local cluster is filling up my C: drive. I run the projects in Visual Studio. It first creates a cluster in a folder named SfDevCluster, which takes up 842 MB of space. Then it deploys the services and web API sites. Remember, these are trivial tutorials with almost nothing in them. Now I notice that I have a folder with Size = 1.22 TB and Size on Disk = 9.4 GB. I'm not sure how to interpret that, but it consumes the remaining space on my C: drive and sets off alarms.
I have other drives with lots of space. I would love to specify that those be used. Is there a way to do that with the service fabric cluster used by Visual Studio? Or is there a way to constrain the overly ambitious size allocations? And if you understand this, can you explain what these unusual folder sizes mean?
In the old days, I would have a hard drive with lots of space. But now my developer machine has a much faster, but more expensive, SSD, and space is at a premium. So I need more control over the cluster location.
You can set up a local cluster pointing to a non-system drive by running the DevClusterSetup script in PowerShell. You can find the script under %programfiles%\Microsoft SDKs\Service Fabric\ClusterSetup\. The command line you want is:
.\DevClusterSetup.ps1 -PathToClusterDataRoot <desired_app_and_data_location> -PathToClusterLogRoot <desired_tracelog_location>
If you already have a cluster running, this script will remove it and create a new one (note that this will delete any deployed apps and their data). Once you have the new cluster running, Visual Studio will automatically use that when you deploy locally.
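For example, to move both the data and the logs to the D: drive (the exact folder names here are just illustrative):

.\DevClusterSetup.ps1 -PathToClusterDataRoot "D:\SfDevCluster\Data" -PathToClusterLogRoot "D:\SfDevCluster\Log"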
As for the file sizes - this is mostly due to the log file used for replication of state stored in reliable collections. A large, sparse file is preallocated up-front, which is why you see a difference between size and size on disk. We are planning to make these values configurable so that they can be dialed down on local clusters.
In the Service Fabric SDK folder (C:\Program Files\Microsoft SDKs\ServiceFabric), you will find a ClusterSetup folder.
In there you will find ClusterManifestTemplate.json files for the different configurations of the local cluster. These are JSON configuration files used by the PowerShell scripts that create and manage the local Service Fabric cluster.
At the bottom of these files, in "fabricSettings", the values of FabricDataRoot and FabricLogRoot are set based on "%systemDrive%". If you replace this with "D:", it should result in a local cluster on the D: drive.
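For reference, the relevant fragment of the template looks roughly like this - the exact layout varies between SDK versions, so treat it as an illustration rather than a verbatim copy (note the doubled backslashes JSON requires):

"fabricSettings": [
  {
    "name": "Setup",
    "parameters": [
      { "name": "FabricDataRoot", "value": "D:\\SfDevCluster\\Data" },
      { "name": "FabricLogRoot", "value": "D:\\SfDevCluster\\Log" }
    ]
  }
]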
After making these changes, I stopped my local fabric, deleted the existing fabric folders from my C: drive, and rebooted my machine. When I then start a debug session in VS 2017, it creates the local dev fabric on my D: drive and deploys the application there. (I do notice that some empty folders are created on my C: drive, but these are not used.)
What you can also do is reset the local cluster once in a while.
This can be done easily using the Service Fabric Local Cluster Manager application.
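If you prefer scripting the reset, the same ClusterSetup folder mentioned above also contains clean-up scripts; a sketch, assuming the default SDK install path (note this deletes all deployed apps and their data):

# Tear down the current local cluster
& "$env:ProgramFiles\Microsoft SDKs\Service Fabric\ClusterSetup\CleanCluster.ps1"
# Recreate a fresh dev cluster
& "$env:ProgramFiles\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1"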
Resources:
node1: Physical cluster node 1.
node2: Physical cluster node 2.
cluster1: Cluster containing node1 and node2 used to host virtual machines.
san1: Dell md3200 highly available storage device (SAN).
lun1: A lun dedicated to file server storage located on san1.
driveZ: A 100GB hard drive, currently a resource on node1, with drive letter Z:\. This drive is lun1, which resides on san1.
virtual1: A virtual server used as a file server only.
Synopsis / Goals:
I have two nodes/servers on my network. These two nodes (node1 and node2) are part of a cluster (cluster1) that is used for hosting all my virtual machines. There is a SAN involved (san1) that has many LUNs created on it, one of which (lun1) will be used to store all data dedicated to a virtual machine (virtual1). Eventually lun1 is created, given the name "storage", and used strictly by the virtual machine "virtual1" to store and access data.
What I have currently in place:
- I currently have created the SAN (san1), created a disk group with the virtual disk (storage), and assigned a LUN (lun1) to it.
- I have set up two physical servers that are connected to the SAN via SAS cables (multipath).
- I have set up the clustering feature on those two servers and have the Hyper-V role installed on each as well.
- I have created a cluster (cluster1) with server members node1 and node2.
- I have created a virtual server (virtual1) and made it highly available on the cluster (cluster1).
Question:
Is it possible to have lun1 (drive z) brought up and accessed by virtual1?
What I have tried:
I had lun1 (aka driveZ) showing up in node1's Disk Management. I then added it as a resource to the cluster storage area and tried two different things. (1) I tried to add it as a Cluster Shared Volume; shortly after, I realized that only the cluster members could see/access it, not the virtual machines, even though those were created as a service in the cluster. (2) I tried to move the resource (driveZ) to the virtual machine (virtual1) within cluster1. After doing that, I went into the virtual machine settings, added the drive as a SCSI drive (using lun1 # 100GB), and refreshed Disk Management on the virtual machine (virtual1). The drive showed up and allowed me to assign a drive letter, then asked me if I wanted to format it... What about all my data that's on it?? Was that a bust? Anyway, that's where I'm at right now... Ideas?
Thoughts:
Just so I'm clear, all of this is for testing at the moment... the actual sizes of resources in production differ greatly. I was thinking about adding driveZ (lun1) as a Cluster Shared Volume, and then adding a new Hyper-V virtual SCSI drive to my VM (say 50GB, so later I can try to expand it to 100GB, the full size of the physical/SAN drive), storing the fixed VHD (Virtual Hard Disk) inside the Cluster Shared Volume "driveZ". I'm testing it out now... but I have concerns: 1) What happens when I try to create a really large VHD (around 7TB)? 2) Can the fixed-disk VHD be expanded in any way? I plan on making my new SAN virtual disk larger than 7TB in the future... currently it's going to stay at 7TB, but it will expand at some point...
Figured it out!
The correct way to do it is...
Set up a SAN, create a disk group with two virtual disks, and assign LUNs to them.
Set up your two physical servers with Windows Server 2008 R2 and connect them both to the SAN.
Add the Failover Clustering feature and the Hyper-V role to both servers.
For the two drives (from the SAN), bring them online and initialize them both. Create a simple volume on each drive if you wish, even format them if you want.
Create a cluster, then add one of the virtual disks from the SAN as a Cluster Shared Volume. This is where the virtual machines will be stored.
Create a virtual machine and store it on the CSV (e.g., C:\ClusterStorage\Volume1\), then power it up.
Take the second drive offline. It should just be a drive on the host server, and it has to be offline! After you right-click and choose Offline, right-click again and go to Properties. On that page, look for the LUN number and write it down.
Open the VM settings, go down to the SCSI controller, and add a drive. Choose "Physical hard disk" and select the correct LUN number. Hit OK and the disk should show up inside the VM.
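On hosts where the Hyper-V and FailoverClusters PowerShell modules are available (Windows Server 2012 or later; on 2008 R2 you are limited to the GUI steps above), the CSV and pass-through steps can be scripted along these lines - the VM name, cluster disk name, and disk number are examples:

# Add the first SAN disk to the cluster as a Cluster Shared Volume
Add-ClusterSharedVolume -Cluster cluster1 -Name "Cluster Disk 1"
# Find the disk number of the offline drive on the host
Get-Disk | Where-Object IsOffline | Format-Table Number, FriendlyName, Size
# Attach it to the VM as a pass-through SCSI disk
Add-VMHardDiskDrive -VMName virtual1 -ControllerType SCSI -DiskNumber 2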
As a helpful tool check these pages out...
Configuring Disks and Storage
Hyper-V Clustering Video 1
Hyper-V Clustering Video 2
Hyper-V Clustering Video 3
Hyper-V Clustering Video 4
Hyper-V Clustering Video 5
I am using RHEV and I have created two virtual machines. These have too little disk space; how can I edit the disk space of these machines? I cannot find the option anywhere.
You can create new volumes of the desired capacity and simply attach them to the VM through the RHEV Manager web UI.
While creating the virtual machine, I forgot to tick "Allocate all disk space now".
I have already set up the machine, cloned several others from it, and made changes. :(
So I was looking for an option to change my machine's disk from thin-provisioned to pre-allocated, so that it takes up its full dedicated size.
Using: VMware Workstation 7.1.4
I created the disk without allocating its full size; now I need a fixed, pre-allocated disk.
Any help would be highly appreciated.
To sum up: how do I change a growable disk to pre-allocated in VMware Workstation?
vmware-vdiskmanager -r sourceDisk.vmdk -t 2 targetDisk.vmdk
Run this as an administrator from cmd on Windows: for example, on Windows 7, open cmd as administrator, cd to the directory where VMware Workstation is installed, and run the command above.
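A concrete example, assuming a default 64-bit Windows install path and example .vmdk locations (-t 2 produces a single pre-allocated file; -t 3 would produce pre-allocated 2GB split files):

cd "C:\Program Files (x86)\VMware\VMware Workstation"
vmware-vdiskmanager.exe -r "C:\VMs\myVM\growable.vmdk" -t 2 "C:\VMs\myVM\preallocated.vmdk"

Afterwards, point the VM's .vmx file at the new disk (or swap the file names) before powering it on.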
I'm not sure that you can change the disk type to pre-allocated. You can, however, expand a growable disk by selecting Utilities -> Expand in the Hardware tab of the VM settings. You can only expand the disk if the VM has no snapshots and is not a linked clone or the parent of a linked clone. In order to make the newly added space available to your VM, you then have to use a disk management tool inside the guest to grow the existing partition to match the new size of the virtual disk.
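The same expansion is also available from the command line, if that is more convenient; the size and path here are examples:

vmware-vdiskmanager.exe -x 40GB "C:\VMs\myVM\growable.vmdk"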