Simulating a computer cluster on a simple desktop to test parallel algorithms - virtualization

I want to try and learn MPI as well as parallel programming.
Can a sandbox be created on my desktop PC?
How can this be done?
Linux and Windows solutions are welcome.

If you want to learn MPI, you can definitely do it on a single PC (most modern MPI implementations use shared-memory communication for local ranks, so you don't need any additional configuration). So install a popular MPI implementation (MPICH or Open MPI) on a Linux box and get going! If your programs are going to be CPU-bound, I'd suggest only running job sizes that equal the number of processor cores on your machine.
Edit: Since you tagged this as a virtualization question, I wanted to add that you could also run MPI on multiple VMs (in VMware Player or VirtualBox, for example) and run your tests there. This would need inter-VM networking to be configured (the details differ depending on your virtualization software).
Whatever you choose (single PC vs. VMs), it won't change the way you write your MPI programs. Since this is for learning MPI, I'd suggest going with the first approach (running multiple MPI processes on a single PC).
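For example, a minimal single-PC sandbox might look like the sketch below (package names, the hostfile, and hello.c are placeholders/assumptions, and the exact flags differ between Open MPI and MPICH):

    # Install an MPI implementation (Debian/Ubuntu package names assumed).
    sudo apt-get install openmpi-bin libopenmpi-dev    # or: mpich libmpich-dev

    # Sanity check: start 4 ranks on the local machine.
    mpiexec -n 4 hostname

    # Compile and run your own MPI program (hello.c) with one rank per core.
    mpicc hello.c -o hello
    mpiexec -n $(nproc) ./hello

    # VM variant: list the VMs in a hostfile and let mpiexec spread the ranks
    # across them (needs inter-VM networking and passwordless SSH set up first).
    # Open MPI uses "--hostfile hosts"; MPICH uses "-f hosts".
    mpiexec --hostfile hosts -n 4 ./hello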

You don't need VMs running to launch multiple copies of your application that communicate using MPI.
MPI can give you a virtual cluster on a single node by launching multiple copies of your application.
One benefit of running it in a VM, though, is that (as you already mentioned) it provides sandboxing: any issues your application creates stay confined to the VM that is running that copy of the app.
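If you want a single node to look like a bigger cluster, you can simply launch more ranks than you have cores; a rough sketch, assuming ./hello is your compiled MPI program:

    # Pretend to be a 16-node cluster on one box by oversubscribing the cores.
    # Recent Open MPI versions refuse to start more ranks than slots unless told to;
    # MPICH just time-slices the extra ranks.
    mpiexec --oversubscribe -n 16 ./hello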

Related

Should XAMPP be installed on an actual physical Server?

Is XAMPP just meant for testing and setting up virtual servers? (Because that's what the wiki says.)
Can it be installed on an actual physical server? Do developers actually do that?
I'm a little confused because, if that were true, why would anyone install a virtual server on a physical server? It's like trying to run Excel in VirtualBox.
XAMPP simulates a typical web development stack on a local machine. If you have access to an actual physical server, you would typically install things like the web server (such as Apache) and MySQL on the server itself. The developers of XAMPP consider it more of a development tool, because certain security features are deliberately disabled to make development easier.
Virtualisation is used on servers because the physical machines are very powerful and would otherwise idle a large amount of the time. Putting those resources to use by creating two or more virtual servers on top of the host reduces cost and increases operational throughput.
Virtual servers and Docker can be used to test against different environments at the same time, or to test beta software for future releases. On machines that have 6 or 8 cores running at 3.6 GHz, there are plenty of resources to host more than one machine, virtual or as a Docker container, so that you can, for example, run different databases without them interfering with each other.
Besides, physical hardware costs money to buy and to maintain.
Lastly, virtual machines and Docker containers are just files that you can simply copy to make a backup. A real machine is a little more work to back up.
But don't use XAMPP on a real machine that is exposed to the world. There are far too many security risks in the standard configuration.
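To illustrate the point above about running different databases side by side without interference, here is a minimal Docker sketch (image tags, container names, and passwords are placeholders):

    # Two databases on the same host, each isolated in its own container.
    docker run -d --name dev-mysql    -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret mysql:8
    docker run -d --name dev-postgres -p 5432:5432 -e POSTGRES_PASSWORD=secret   postgres:16

    # Throw them away (and their state) when you are done testing.
    docker rm -f dev-mysql dev-postgres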

For multiple projects using Docker, use multiple VMs or a single host with multiple containers?

Suppose I had three apps that are currently hosted at Digital Ocean or AWS. Each of them uses at least one VM for the database and one or more VMs for the web app.
Now let's say that I wanted to get one dedicated server at OVH with 64GB of RAM and use Docker to deploy these apps. Each project would have its own docker-compose file. I'm thinking of two ways of doing this:
Install VMware ESXi on the server, create one VM for each project and deploy Docker containers for the web app and database.
Just install Ubuntu as the host OS and manage containers for all apps, using separate network entry points (IPs) for each project.
Would I be wasting too much of the server resources going for the first choice?
Would I be overcomplicating my infrastructure by going for the second?
I understand both are valid choices, but what would be the better/suggested way?
Thanks for the help!
With the first option you will need a full VM for each app, with all the memory-sharing and I/O overhead that entails. You may be able to use memory ballooning with VirtualBox (ESXi should have such a feature too, maybe under a different name). And within every VM you'll have the Docker stack included.
If you use a native OS you'll need the Docker stack only once.
What OS do you use within your Docker images? A 200 MB Ubuntu, or a 5 MB Alpine? If you choose Alpine as your host OS and/or your image OS, you'll be able to keep your "container overhead" much smaller.
It also depends on what system services your apps need (cron, upstart, ...), how many resources each app needs, whether it is a JVM-based app that needs its own JVM in every container, and so on.
As a first pass I would plan an Alpine host with Alpine-based Docker images. If there are apps that really need Ubuntu images, you can just use Ubuntu in the image for that specific app.
Also have a look at Docker vs. VM.
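If you go with the single-host option, one way to keep the projects separated is to run each compose file under its own project name, so containers and the default network are namespaced per project. A rough sketch (the paths, project names, and the example IP are assumptions):

    # One compose file per project, each with its own project name and network.
    docker-compose -p shop -f /srv/shop/docker-compose.yml up -d
    docker-compose -p blog -f /srv/blog/docker-compose.yml up -d

    # Inside each compose file you can pin published ports to a dedicated host IP,
    # e.g. "203.0.113.10:80:80", to give every project its own entry point.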

VM automatic installation

I would love to have an idea of how to automatically install a Windows XP virtual machine on VirtualBox/VMware. Is this feasible via a programming language, for example? Or maybe an automated script? I need this to avoid manual installation each time one of my VMs crashes.
I am not asking for a full program that does this; I just need technical hints on how to do it, and I will follow your suggestions myself.
Yes, it's possible.
Can't you just take a snapshot when the VM is working, or at least a "clean install" snapshot that saves you having to reinstall your OS and common applications every time?
Yes, it is possible to do this via a script. Actually, all IaaS cloud companies now try to do the deployment of VMs (and also of physical servers) via automation. First of all, it's cheap and quick, and there is little human factor in it.
Not sure about VirtualBox, but if it works with VMware, KVM, etc., there is no reason it shouldn't work with VirtualBox.
As for the script itself, there is big money in this, so finding something may prove difficult. Try checking OpenStack; AFAIK it should be open source.
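As a concrete sketch of the snapshot/clone idea with VirtualBox's command-line tool (the VM and snapshot names are placeholders, and the VM should be powered off before restoring or cloning):

    # Take a "clean install" snapshot once the VM is set up the way you want it.
    VBoxManage snapshot "WinXP-base" take "clean-install"

    # When a VM breaks, roll it back to the known-good state...
    VBoxManage snapshot "WinXP-base" restore "clean-install"

    # ...or stamp out a fresh copy from that state and boot it.
    VBoxManage clonevm "WinXP-base" --snapshot "clean-install" --name "WinXP-test1" --register
    VBoxManage startvm "WinXP-test1" --type headless

Newer VirtualBox releases also ship a VBoxManage unattended install command for fully scripted OS installs, though whether it covers Windows XP is something you would have to check.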

Considerations for developing for a VM deployment

I'm setting up a system that uses SQL Server 2005, several custom Windows Services, Web Services and a few IIS .NET applications. Getting the whole system setup is a somewhat tedious process.
I wondered whether it would be a good idea to set up the whole system in a VM. Could I then just drop the VM onto a new server and get a huge head start on configuration?
What things should I be aware of if I pursue this approach? Is it a viable option? Is a VM a decent unit of deployment?
If the concept is feasible, I'd certainly appreciate specific suggestions about the VM setup.
I frequently use this approach. I'll set up a VM in VMware Workstation, configure it to my liking, and then use VMware Importer to import my virtual machine into an ESX environment. From there, I can turn the virtual machine into a template that I can use over and over again for deploying clones of my server or just as a starting point when creating new servers.
· A large quantity of virtual machines (one for each customer)
· A smaller quantity of physical machines
· The VMs we are working on have to be up, while the others can be down
· Easy backups, so in case of issues we can resume working from the moment we shut down the VM, etc.
· Physical machines have to be configured with the latest hardware, or close to it.
· Depending on your development workloads, a physical machine can hold between 4 and 8 VMs.

What kind of servers did you virtualize lately?

I wonder what types of servers for internal use you have virtualized in the last, say, 6 months. Here's what we have virtualized so far:
mediawiki
bugtracker (mantis)
subversion
We didn't virtualize specialized desktop PCs that run a certain software product which is only used once in a while. Do you plan to get rid of those old machines any time soon?
And which server products do you use? VMware ESX, VMware Server, Xen installations...?
My standard answer to questions like this is, "virtualization is great; be aware of its limitations".
I would never rely on a purely virtual implementation of anything that's an infrastructure-level service (e.g. the authoritative DNS server for your site, or management and monitoring tools).
I work for a company that provides server and network management tools. We are constantly trying to overcome the marketing chutzpah of virtualization vendors: infrastructure tools shouldn't live inside the infrastructure they are meant to manage.
Virtualization wants to control all of your services. However, there are some things that should always exist on physical hardware.
When something goes wrong with your virtual setup, troubleshooting and recovery can take a long time. If you're still running some of those services you require for your company on physical hardware, you're not dead-in-the-water.
Virtualization also introduces clock lag, disk and network IO lag, and other issues you wouldn't see on physical hardware.
Lastly, the virtualization tool you pick then becomes in charge of all of the resources under its command for its hosted VMs. That translates to the hypervisor - not you - deciding what VM should have priority at any given moment. If you're concerned about any tool, service, or function being guaranteed to have certain resources, it will need to be on physical hardware.
For anything that "doesn't matter", like web, mail, dhcp, ldap, etc - virtualization is great.
Our build machine running FinalBuilder runs on a Windows XP Virtual Machine running in VMWare Server on Linux.
It is very practical to move it and also to backup, we just stop the Virtual Machine and copy the disk image.
A few days ago we needed to change the host PC; it took less than 2 hours to have our builder up and running on another PC.
We migrated to a new SBS 2005 domain last month. We took the opportunity to create virtual machines for the following servers:
Build Machine
SVN Repository Machine
Bug Tracking Machine (FogBugz)
Testing Databases
I recently had to build an internal network for our training division, enabling the classrooms to be networked and have access to various technologies. Because of the lack of hardware and equipment, and because we were running in a strictly cash-only environment, I decided to go with a virtual solution on the server.
The server itself is running CentOS 5.1 with VMware 1.0.6 loaded as the virtualisation provider. On top of this we have 4 Windows Server 2003 machines running, making up the Active Directory, Exchange, ISA, database, and Windows/AV updates components. File sharing and internet routing through the corporate network and ADSL are handled by the CentOS platform.
The setup allows us to expand to physical machines quickly at a later stage, and allows the main server to be replaced with minimal downtime on the network, as this only requires moving the virtual machines and starting them up on the new box.
Project Management (dotProject)
Generic Testing Servers (IIS, PHP, etc)
Do you plan to get rid of those old machines any time soon? No
And which server products do you use? MS Virtual Server
We use ESX in our labs and lately we've virtualized our document sharing service (KnowledgeTree), the lab management tools and almost all of our department's internal web servers.
We also virtualized almost all of our QA department's test machines, with the exception of the performance and stability testing hardware.
We aren't going to get rid of the hardware any time soon, it will be used to decrease the budget needs and increase the number of projects that can be handled by one lab.
We use VMware ESX 3.5.x exclusively.
We virtualise a copy of a test client and server, so we can deploy to them before sending the files to the customer. They also get used to test bug reports.
We find this is the biggest benefit to virtualisation as we can keep lots of per-customer versions around.
We also run our web server in a VM, and the corporate division has virtualised everything.