How to deploy a VM with the Thick Provision Eager Zeroed disk type using the HTML client? - webclient

Up to ESXi 6.0, I was using the legacy C#-based desktop client to deploy my VMs.
For deployment, I used the Thick Provision Eager Zeroed disk type.
Now I am using ESXi 6.5 U1 and the embedded HTML client to manage this host.
During the deployment steps the wizard asks me to choose a thin or thick disk, and I use the thick option to deploy my VM.
My question:
When I use the thick disk type, the VM is deployed but the disk ends up as Thick Provision Lazy Zeroed.
Can anyone tell me how to deploy a VM with the Thick Provision Eager Zeroed disk type?
Thanks in advance!

Assuming you have at least a mid-size environment, the quickest way is to deploy the VM and then perform a Storage vMotion. The Storage vMotion wizard lets you choose the target VMDK format, so the disk is converted by the time the operation is done.

During the 'customize settings' step of the VM deployment process, expand the hard disk(s) and change the provisioning option to the one you prefer.
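If PowerCLI is available, another option is to create the VM from PowerShell and request the eager-zeroed format explicitly; a minimal sketch, where the host name, VM name, datastore, and sizes are only placeholders:
# Sketch: create a VM whose disk is Thick Provision Eager Zeroed (PowerCLI).
# The server, VM name, datastore and sizes below are placeholders for illustration.
Connect-VIServer -Server esxi65-host.example.local
New-VM -Name "TestVM01" `
       -VMHost "esxi65-host.example.local" `
       -Datastore "datastore1" `
       -NumCpu 2 -MemoryGB 4 -DiskGB 40 `
       -DiskStorageFormat EagerZeroedThick   # eager-zeroed thick instead of the default lazy-zeroed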

Does it make sense to run Service Fabric on a single machine?

Service Fabric looks great, but right now I do not have enough demand to justify renting 5 machines (I believe that is the minimum number of nodes for a cluster).
I was thinking of installing the Service Fabric SDK on a single Azure Virtual Machine.
I know that I will not get the main benefits of a Service Fabric application, reliability and scalability, but I would be developing on a framework where I can easily add more machines and scale out later if necessary, without changing anything.
Right now I have 15 microservices and I plan to add 10 more. At present I am using IIS, and deployment and maintenance are not very fast. It seems that Service Fabric could solve that, plus it would be easily scalable.
Does it make sense to use Service Fabric on a single machine, or is it better to stay on IIS?
Technically it is possible, though it doesn't make much sense. A one-node cluster runs with a special configuration, so scaling out that cluster is not supported. You can use a single-node cluster for testing and then create another one for production use.
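If you do want to experiment locally, the SDK's cluster setup script can also create a one-node development cluster; a short sketch, assuming the default SDK install path and an elevated PowerShell prompt:
# Sketch: create a one-node local development cluster with the Service Fabric SDK script.
cd "$env:ProgramFiles\Microsoft SDKs\Service Fabric\ClusterSetup"
.\DevClusterSetup.ps1 -CreateOneNodeCluster
Keep in mind this is still only for development and testing; for production you would provision a separate multi-node cluster.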

How can I specify where my local developer's service fabric cluster is created?

My problem: I am learning Service Fabric, and doing simple tutorials, and the local cluster is filling up my C drive. I run the projects in Visual Studio. It first creates a cluster in a folder SfDevCluster. That takes up 842 MB of space. Then it deploys the services and web api sites. Remember, these are trivial tutorials with almost nothing in them. Now, I notice that I have a folder with a Size = 1.22 TB and Size on Disk of 9.4 GB. I'm not sure how to interpret that. But it consumes the remaining space on my C drive and sets off alarms.
I have other drives with lots of space. I would love to specify that those be used. Is there a way to do that with the service fabric cluster used by Visual Studio? Or is there a way to constrain the overly ambitious size allocations? And if you understand this, can you explain what these unusual folder sizes mean?
In the old days, I would have a hard drive with lots of space. But now, my developer machine has a much faster, but more expensive SSD drive, and space is at a premium. So I need more control of the cluster location.
You can set up a local cluster pointing to a non-system drive by running the DevClusterSetup script in PowerShell. You can find the script under %programfiles%\Microsoft SDKs\Service Fabric\ClusterSetup\. The command line you want is:
.\DevClusterSetup.ps1 -PathToClusterDataRoot <desired_app_and_data_location> -PathToClusterLogRoot <desired_tracelog_location>
If you already have a cluster running, this script will remove it and create a new one (note that this will delete any deployed apps and their data). Once you have the new cluster running, Visual Studio will automatically use that when you deploy locally.
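For example, to put both the data and the logs on a hypothetical D: drive:
# Example invocation; the paths are only placeholders, pick any non-system drive you like.
.\DevClusterSetup.ps1 -PathToClusterDataRoot "D:\SfDevCluster\Data" -PathToClusterLogRoot "D:\SfDevCluster\Log"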
As for the file sizes - this is mostly due to the log file used for replication of state stored in reliable collections. A large, sparse file is preallocated up-front, which is why you see a difference between size and size on disk. We are planning to make these values configurable so that they can be dialed down on local clusters.
In the Service Fabric SDK folder (C:\Program Files\Microsoft SDKs\Service Fabric), you will find a ClusterSetup folder.
In there you will find ClusterManifestTemplate.json files for the different configurations of the local cluster. These are JSON configuration files used by the PowerShell scripts that create and manage the local Service Fabric cluster.
At the bottom of these files, under "fabricSettings", the values of FabricDataRoot and FabricLogRoot are set based on "%systemDrive%". If you replace this with "D:", you should end up with a local cluster on the D: drive.
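If you prefer to script that edit rather than changing each template by hand, something along these lines should work; this is only a sketch and assumes the default SDK install path and D: as the target drive:
# Sketch: point every local-cluster manifest template at D: instead of the system drive.
# Consider backing up the files first so you can revert the change.
$setupDir = "$env:ProgramFiles\Microsoft SDKs\Service Fabric\ClusterSetup"
Get-ChildItem $setupDir -Recurse -Filter "ClusterManifestTemplate*.json" | ForEach-Object {
    (Get-Content $_.FullName -Raw) -replace '%systemDrive%', 'D:' | Set-Content $_.FullName
}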
After making these changes, I stopped my local cluster, deleted the existing fabric folders from my C: drive, and rebooted my machine. When I then start a debug session in VS 2017, it creates the local dev cluster on my D: drive and deploys the application to that location. (I do notice that some empty folders are created on my C: drive, but these are not used.)
You can also reset the local cluster once in a while.
This is easily done with the Service Fabric Local Cluster Manager application in the system tray.
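If you prefer the command line, the ClusterSetup folder mentioned above also contains a cleanup script, so the reset can be scripted as well; a sketch, assuming the default SDK path and an elevated prompt:
# Sketch: tear down and re-create the local dev cluster from PowerShell instead of the tray application.
cd "$env:ProgramFiles\Microsoft SDKs\Service Fabric\ClusterSetup"
.\CleanCluster.ps1        # removes the existing local cluster and its data
.\DevClusterSetup.ps1     # creates a fresh local cluster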

AppFabric setup in a domain

So I am a little confused after reading the documentation.
I want to set up AppFabric caching and hosting.
Can I do the following?
DC
SQL Server
AppFabric1
AppFabric2
All these computers are joined to the DC.
I want AppFabric1 to be the main host but also part of the cache cluster.
What about AppFabric2, or AppFabricX? How can I make them part of the cache cluster?
Do I have to configure AppFabric1 and AppFabric2 in Windows as part of a cluster (i.e., set up the entire environment as a cluster)?
Can I install AppFabric independently on AppFabric1 and 2 and have them cluster together and "make it work"? If so - how?
I see documentation about setting it up in a web farm and also in a workgroup... and that's it. Nothing about computers joined to a domain.
I want to set up AppFabric caching and hosting.
Caching and Hosting are two totally different things and generally don't share the same use cases.
AppFabric Caching provides an in-memory, distributed cache platform for Windows Server, previously named Velocity. The cache cluster is a collection of one or more instances of the Caching Service working together. You can easily add a new cache host without restarting the cluster; the cluster configuration is kept in a "storage location" (an XML file share or SQL Server).
Can I install AppFabric independently on AppFabric1 and 2 and have
them cluster together and "make it work"? If so - how?
Don't worry... this can be done easily during installation. In addition, there are powerful PowerShell modules to do the same thing.
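For instance, the caching administration cmdlets that ship with AppFabric let you inspect and manage the cluster from PowerShell; a rough sketch, where the cache name is just an example:
# Sketch: basic AppFabric Caching administration from PowerShell.
Import-Module DistributedCacheAdministration
Use-CacheCluster                        # connect to the cluster this host is registered with
Get-CacheHost                           # list all cache hosts and their status
Start-CacheCluster                      # start every cache host in the cluster
New-Cache -CacheName "ReferenceData"    # create a named cache (the name is just an example)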
AppFabric Hosting enhances the hosting of WCF and Workflow Foundation services in WAS (auto-start, monitoring of hosted services, workflow persistence, ...). There is no cluster here; basically you just have to configure the monitoring/persistence databases for each server.
Just try it!
When you are adding the second node to the AppFabric cluster, make sure to choose the Join Cluster option (instead of New Cluster) and point it to the path of the share where you stored the configuration (assuming that you used a file share to store the cluster configuration). That share must be accessible from AppFabric2.
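If you'd rather script the join than re-run the configuration wizard, the configuration cmdlets can register and add a host to an existing cluster; a sketch, assuming an XML (file share) configuration store, default ports, and placeholder share and account names:
# Sketch: join AppFabric2 to an existing cache cluster whose configuration lives on a file share.
# The share path and service account are placeholders; the ports shown are the AppFabric defaults.
Import-Module DistributedCacheConfiguration
Register-CacheHost -Provider XML -ConnectionString "\\DC\AppFabricConfig" `
                   -Account "DOMAIN\svc-appfabric" `
                   -CachePort 22233 -ClusterPort 22234 -ArbitrationPort 22235 -ReplicationPort 22236 `
                   -HostName "AppFabric2"
Add-CacheHost -Provider XML -ConnectionString "\\DC\AppFabricConfig" -Account "DOMAIN\svc-appfabric"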

Why does Azure deployment take so long?

I'm trying to understand why it can take from 20-60min to deploy a small application to Azure (using the configuration/package upload method, not from within VS).
I've read through this situation and this one but I'm still a little unclear - is there a weird non-technology ritual that occurs while the instances are distributing, like somebody over at Microsoft lighting a candle or doing a dance?
As a fellow Azure user, I share your pain - deploying isn't "quick"/"painless" - and this hurts especially when you're in a development cycle and want to test dev iterations on Azure. However, in general deployments should take much less than 60 minutes - and less than 20 minutes too.
Steve Marx provided a brief overview of the steps involved in deployment:
http://blog.smarx.com/posts/what-happens-when-you-deploy-on-windows-azure
And he references a deeper level explanation at: http://channel9.msdn.com/blogs/pdc2008/es19
There's a lot that goes on behind the scenes when you deploy an application to the Azure cloud. I don't have any special insight into what's going on behind the curtain, but having worked on the VS tools to upload projects to the Azure cloud, these are my impressions as an outsider looking in:
Among other things:
Hardware must be allocated from the available pool of servers
The VHD of the core OS must be uploaded to the machine
A VM instance must be initialized and booted off that VHD image
Your application package must be copied to the VM and installed
The VM monitor must wait for your service to start up, or fail
The data center load balancer and firewall must be made aware of your application's service endpoints
Once all of that has synchronized, your app is accessible from the web.
The VHD image is probably gigabytes in size, much larger than your app upload. Even on a superfast datacenter network, it takes time to move that much stuff into the VM, unpack it, and boot from it. Also, the load balancer and firewall are probably optimized to make routing requests the highest priority. Reconfiguring the firewall and load balancer is lower priority, and has to be done without interrupting traffic flow.
Also note that all this work only has to be done for a new deployment. Updating an existing deployment rolls out much faster - 2 to 3 minutes instead of 20 to 30 minutes.
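For what it's worth, the same new-versus-update distinction shows up if you script deployments with the classic (service management) Azure PowerShell cmdlets; a sketch where the service name and file paths are placeholders, and a subscription is assumed to be selected already:
# Sketch: first-time deployment of a cloud service package (placeholder names and paths).
New-AzureDeployment -ServiceName "myservice" -Slot Production `
                    -Package ".\MyApp.cspkg" -Configuration ".\ServiceConfiguration.Cloud.cscfg"
# A later update of the same deployment is an in-place upgrade and typically finishes much faster.
Set-AzureDeployment -Upgrade -ServiceName "myservice" -Slot Production `
                    -Package ".\MyApp.cspkg" -Configuration ".\ServiceConfiguration.Cloud.cscfg" -Mode Auto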
Check out this PDC10 video by Mark Russinovich. He goes into great detail on what's going on inside Azure with some insights into the (admittedly slow) deployment process.
Original link is no longer working. Here's another link to a version of the same presentation: https://channel9.msdn.com/events/Build/BUILD2011/SAC-853T

Automatic provisioning of xen in private cloud

I am setting up a private cloud for some experiments, using Xen as the hosting system, but I am facing a problem for which I can't seem to find a solution.
I need some kind of automatic provisioning of VMs based on server load. E.g., if a server of type A reaches, let's say, 60% load, the cloud should spawn another VM instance of the same type to distribute the load (using the NetScaler).
Is there an open-source system that can help me, or how do I go about developing scripts to do this myself?
If I understand you correctly, you want to live-migrate the VMs depending on the load of the host. You can use OpenNebula to help you with this. You can use the advanced scheduler named Haizea with OpenNebula.
While I've never tried it myself, you could use this together with ONE's APIs to create more VMs when a VM gets too much load.
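As a very rough illustration of the idea (not something I have run): poll the load and instantiate another VM from a template when it crosses the threshold. The template ID, the naming scheme, and the way the load is measured are all assumptions; in a real setup the metric would come from your monitoring system rather than a local file.
# Rough sketch of threshold-based provisioning against OpenNebula's CLI; template ID 5 is a placeholder.
# Assumes the OpenNebula CLI tools are on the PATH and that this runs on the front-end host.
while ($true) {
    $load1m = [double](Get-Content '/proc/loadavg').Split(' ')[0]    # 1-minute load average
    if ($load1m / [Environment]::ProcessorCount -gt 0.6) {           # over the 60% threshold
        onetemplate instantiate 5 --name ("web-" + (Get-Date -Format 'yyyyMMddHHmmss'))
    }
    Start-Sleep -Seconds 300
}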
Take a look at http://openstack.org/
It's open source.
OpenStack and OpenNebula have already been mentioned; here are two more open-source IaaS projects:
Eucalyptus
Nimbus
Use Apache CloudStack; it is open source and has tight integration with NetScaler and F5 load balancers. Check the link below for NetScaler LB creation and VM creation. Rules can be set on these, and new VMs can be spawned based on load.
https://cloudstack.apache.org/docs/api/apidocs-4.5/TOC_Root_Admin.html
There is a cloud platform called Nimbo that lets you do this and more out of the box: http://www.hcltech.com/cloud-computing/Nimbo/