Considerations for developing for a VM deployment

I'm setting up a system that uses SQL Server 2005, several custom Windows services, web services, and a few IIS .NET applications. Getting the whole system set up is a somewhat tedious process.
I wondered whether it would be a good idea to set up the whole system in a VM. Could I then just drop the VM onto a new server and get a huge head start on configuration?
What things should I be aware of if I pursue this approach? Is it a viable option? Is a VM a decent unit of deployment?
If the concept is feasible, I'd certainly appreciate specific suggestions about the VM setup.

I frequently use this approach. I'll set up a VM in VMware Workstation, configure it to my liking, and then use VMware Importer to import my virtual machine into an ESX environment. From there, I can turn the virtual machine into a template that I can use over and over again for deploying clones of my server or just as a starting point when creating new servers.
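A hedged sketch of scripting that import step: the ovftool CLI that ships with VMware can convert a Workstation .vmx and push it to an ESX host in one command. The paths and hostname below are placeholders, and ovftool must be installed and on the PATH; treat this as an illustration of the workflow, not the only way to do it.

import subprocess

# All of these values are placeholders for illustration.
VM_SOURCE = "/vms/base-server/base-server.vmx"  # Workstation VM to import
ESX_TARGET = "vi://root@esx01.example.local"    # target ESX host

# ovftool converts the .vmx and uploads it to the ESX host in one step;
# it prompts for the password interactively.
subprocess.run(["ovftool", VM_SOURCE, ESX_TARGET], check=True)

Once the VM is on the host, you can mark it as a template in vSphere and clone from it as described above.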

· A large number of virtual machines (one per customer; see the cloning sketch after this list)
· Fewer physical machines
· Only the VMs we are currently working on have to be up; the others can stay down
· Easy backups: in case of problems we can resume from the exact state the VM was in when we shut it down
· Physical machines have to be configured with the latest hardware, or close to it
· Depending on your development workloads, a physical machine can host between 4 and 8 VMs
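As a rough illustration of the one-VM-per-customer item above, here is a minimal sketch that clones a golden VM once per customer using VMware Workstation's vmrun clone command, driven from Python. The paths, customer names, and the linked-clone choice are all assumptions for illustration.

import subprocess
from pathlib import Path

GOLDEN_VMX = "/vms/golden/golden.vmx"       # assumed template VM
CUSTOMERS = ["acme", "globex", "initech"]   # placeholder customer list

for customer in CUSTOMERS:
    dest = Path(f"/vms/{customer}/{customer}.vmx")
    dest.parent.mkdir(parents=True, exist_ok=True)
    # "linked" clones share the template's disk and are fast to create;
    # use "full" for fully independent copies.
    subprocess.run(
        ["vmrun", "-T", "ws", "clone", GOLDEN_VMX, str(dest),
         "linked", f"-cloneName={customer}"],
        check=True,
    )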

Related

Should XAMPP be installed on an actual physical server?

Is XAMPP just meant for testing and setting up virtual servers? (That's what the wiki says.)
Can it be installed on an actual physical server? Do developers actually do that?
I'm a little confused, because if that were true, why would anyone install a virtual server on a physical server? It's like trying to run Excel in VirtualBox.
XAMPP simulates a typical web-development stack on a local machine. If you have access to an actual physical server, you would typically install things like the web server (such as Apache) and MySQL on the server itself. The developers of XAMPP consider it more of a development tool, because certain security features are deliberately disabled to make development easier.
Virtualisation is used on servers because the actual physical machines are very powerful and would otherwise idle a large amount of the time. Putting those resources to use by creating several virtual servers on top of the host reduces cost and increases operational throughput.
Virtual servers and Docker can be used to test different environments at the same time, or to test beta software for future releases. On machines that have 6 or 8 cores and execute billions of instructions per second, there are plenty of resources to run more than one machine, whether virtual or as a Docker container, so that you can, for example, use different databases without them interfering with each other (sketched below).
Besides, physical hardware costs money to buy and to maintain.
Lastly, virtual machines and Docker containers are just files, which you can simply copy to make a backup. A real machine is rather more work to back up.
But don't use XAMPP on a real machine that is exposed to the world. There are far too many security risks in the standard configuration.
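To make the "different databases without interference" point concrete, here is a small hedged sketch that starts two database containers side by side via the Docker CLI, driven from Python. The container names, images, and throwaway passwords are placeholders; Docker must be installed and running.

import subprocess

def run_db(name, image, host_port, container_port, env):
    """Start a detached database container with its own name and port."""
    args = ["docker", "run", "-d", "--name", name,
            "-p", f"{host_port}:{container_port}"]
    for key, value in env.items():
        args += ["-e", f"{key}={value}"]
    args.append(image)
    subprocess.run(args, check=True)

# Two databases side by side, each isolated in its own container:
run_db("test-mysql", "mysql:8", 3306, 3306, {"MYSQL_ROOT_PASSWORD": "devonly"})
run_db("test-postgres", "postgres:16", 5432, 5432, {"POSTGRES_PASSWORD": "devonly"})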

What could happen if I install all server roles on Windows Azure Pack: Web Sites on one machine?

At work, I'm in the process of installing Windows Azure Pack: Web Sites in a VMWare ESXi lab environment. I have little available RAM and hard drive space on the ESXi.
I originally thought I would be able to do this without using too many resources. The Azure Pack Express variant is advertised as requiring only one machine with 8 GB of RAM. However, after completing the first installation, I discovered that the Azure Pack: Web Sites extension requires no fewer than 7 different server roles installed on 7 different machines, each running Windows Server 2012 R2. I need a separate Cloud Controller, Cloud Management Server, Cloud Front End, Cloud Publisher, Cloud Shared Web Worker, Cloud Reserved Web Worker and Cloud File Server.
I have no way of freeing up that many resources. In the installation guide for Windows Azure Pack, they "advise" me to use separate VMs for each role, but they don't say explicitly that anything else won't work. Is that because multiple server roles on one machine will strain resources, or because the roles are incompatible and will make the system malfunction? In my case, the Azure Pack will only be used for penetration testing by a single user, so I imagine resources should not be a problem.
I'm not a web administrator, and I'm in over my head on this task. If anyone could give me some advice before I proceed on this, that would be much appreciated.
TL;DR: Will there be a critical conflict if I install seven server roles on one machine, or will there just be a strain on resources?

Automatic deployment and provisioning of virtual machines

Lately I have been setting up and configuring more and more VMs daily, with very similar or even identical configurations, and because of the time that consumes, I'm looking for a way to automate the whole process.
I have started looking around and found Vagrant, which could be a very good starting point.
I would like to create a custom build of a VMware VM (a Vagrant box, if I'm not mistaken), and I would like to use that box as my base and deploy it on my servers.
The trouble starts here:
On my servers I use VMware vSphere, and I see that Vagrant can support it via an external plugin; but as I read on, I see that vSphere only supports VMs created from a template or cloned from an existing one.
Is there any chance of running my VMware Workstation boxes with it?
Also, I would be very grateful if you could provide me with some more information on other (maybe better suited) solutions to the same problem.
I know there are also Chef and Puppet, but are they perhaps overkill for my needs? Thank you for your time and help. Best regards.
Have you looked into Ansible? http://www.ansible.com/home
There is an open-source version available, and it is extremely easy to use. It might be what you're looking for.
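For the Vagrant route specifically, here is a minimal sketch of the kind of automation the question describes: driving the Vagrant CLI from Python to stand up several near-identical VMs from one base box. The box name and directory layout are assumptions; Vagrant (plus a vSphere or VMware provider plugin, if you need one) must already be installed.

import subprocess
from pathlib import Path

BOX = "hashicorp/bionic64"  # placeholder box; substitute your custom build

# One directory per VM, each initialised from the same base box.
for name in ["web01", "web02", "db01"]:
    vm_dir = Path(name)
    vm_dir.mkdir(exist_ok=True)
    subprocess.run(["vagrant", "init", BOX], cwd=vm_dir, check=True)
    subprocess.run(["vagrant", "up"], cwd=vm_dir, check=True)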

Simulating a computer cluster on a simple desktop to test parallel algorithms

I want to try and learn MPI as well as parallel programming.
Can a sandbox be created on my desktop PC?
How can this be done?
Linux and Windows solutions are welcome.
If you want to learn MPI, you can definitely do it on a single PC (most modern MPI implementations use shared memory for local communication, so you don't need any additional configuration). So install a popular MPI implementation (MPICH / Open MPI) on a Linux box and get going! If your programs are going to be CPU-bound, I'd suggest only running job sizes that equal the number of processor cores on your machine.
Edit: Since you tagged this as a virtualization question, I wanted to add that you could also run MPI across multiple VMs (in VMware Player or VirtualBox, for example) and run your tests there. This would require inter-VM networking to be configured (the details differ depending on your virtualization software).
Whatever you choose (single PC vs. VMs), it won't change the way you write your MPI programs. Since this is for learning MPI, I'd suggest going with the first approach (running multiple MPI processes on a single PC).
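As a starting point, here is a minimal MPI "hello world". The question doesn't fix a language, so this sketch uses the mpi4py Python bindings; the same rank/size structure applies in C or Fortran. It assumes an MPI implementation (MPICH or Open MPI) and the mpi4py package are installed.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this process's id within the communicator
size = comm.Get_size()  # total number of processes launched

print(f"hello from rank {rank} of {size}")

# Launch several processes on one PC with, for example:
#   mpiexec -n 4 python hello_mpi.py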
You don't need to have VMs running to launch multiple copies of your application that communicate using MPI.
MPI can give you a virtual cluster on a single node by launching multiple copies of your application.
One benefit of running it in a VM, though, is that (as you already mentioned) it provides sandboxing: any issues your application creates remain limited to the VM running that copy of the app.

What kinds of servers have you virtualized lately?

I wonder what types of servers for internal use you have virtualized in the last, say, 6 months. Here's what we have virtualized so far:
mediawiki
bugtracker (mantis)
subversion
We didn't virtualize specialized desktop PCs that run a certain software product which is only used once in a while. Do you plan to get rid of those old machines any time soon?
And which server products do you use? VMware ESX, VMware Server, Xen installations...?
My standard answer to questions like this is, "virtualization is great; be aware of its limitations".
I would never rely on a purely virtual implementation of anything that's an infrastructure-level service (e.g. the authoritative DNS server for your site, or management and monitoring tools).
I work for a company that provides server and network management tools. We are constantly trying to counter the marketing chutzpah of virtualization vendors: infrastructure tools shouldn't live inside the infrastructure they manage.
Virtualization wants to control all of your services. However, there are some things that should always exist on physical hardware.
When something goes wrong with your virtual setup, troubleshooting and recovery can take a long time. If you're still running some of the services your company requires on physical hardware, you're not dead in the water.
Virtualization also introduces clock drift, disk and network I/O latency, and other issues you wouldn't see on physical hardware.
Lastly, the virtualization tool you pick is then in charge of all of the resources under its command for its hosted VMs. That means the hypervisor - not you - decides which VM has priority at any given moment. If you need any tool, service, or function to be guaranteed certain resources, it will have to be on physical hardware.
For anything that "doesn't matter", like web, mail, dhcp, ldap, etc - virtualization is great.
Our build machine, which runs FinalBuilder, is a Windows XP virtual machine running in VMware Server on Linux.
It is very practical to move and also to back up: we just stop the virtual machine and copy the disk image.
Some days ago we needed to change the host PC, and it took less than 2 hours to have our builder up and running on another PC.
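A hedged sketch of that stop-and-copy backup, for a VM managed with VMware's vmrun utility (which ships with VMware Server and Workstation). All paths are placeholders, and the copy step assumes the VM's files live in a single directory.

import shutil
import subprocess
from datetime import datetime

VM_DIR = "/vms/buildmachine"        # assumed directory holding the VM files
VMX = f"{VM_DIR}/buildmachine.vmx"  # assumed VM configuration file
DEST = f"/backups/buildmachine-{datetime.now():%Y%m%d-%H%M%S}"

subprocess.run(["vmrun", "stop", VMX, "soft"], check=True)  # graceful shutdown
shutil.copytree(VM_DIR, DEST)                               # copy disk images
subprocess.run(["vmrun", "start", VMX], check=True)         # bring it back up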
We migrated to a new SBS 2005 domain last month and took the opportunity to create virtual machines for the following servers:
Build Machine
SVN Repository Machine
Bug Tracking Machine (FogBugz)
Testing Databases
I recently had to build an internal network for our training division, enabling the classrooms to be networked and have access to various technologies. Because of the lack of hardware and equipment, and because we were running in a strictly cash-only environment, I decided to go with a virtual solution on the server.
The server itself is running CentOS 5.1 with VMware 1.0.6 loaded as the virtualisation provider. On top of this we have 4 Windows Server 2003 machines running, making up the Active Directory, Exchange, ISA, database, and Windows/AV update components. File sharing and internet routing through the corporate network and ADSL are handled by the CentOS platform.
The setup allows us to expand to physical machines quickly at a later stage, and allows the main server to be replaced with minimal downtime on the network, as that only requires moving the virtual machines and starting them up on the new box.
Project Management (dotProject)
Generic Testing Servers (IIS, PHP, etc.)
Do you plan to get rid of those old machines any time soon? No
And which server products do you use? MS Virtual Server
We use ESX in our labs and lately we've virtualized our document sharing service (KnowledgeTree), the lab management tools and almost all of our department's internal web servers.
We also virtualized almost all of our QA department's test machines, with the exception of the performance and stability testing hardware.
We aren't going to get rid of the hardware any time soon; it will be used to decrease budget needs and increase the number of projects that one lab can handle.
We use VMware ESX 3.5.x exclusively.
We virtualise a copy of a test client and server, so we can deploy to them before sending the files to the customer. They also get used to test bug reports.
We find this is the biggest benefit to virtualisation as we can keep lots of per-customer versions around.
We also run our web server in a VM, and our corporate division has virtualised everything.