What kind of servers did you virtualize lately? - virtualization

I wonder what types of servers for internal use you have virtualized in the last, say, 6 months. Here's what we have made virtual so far:
mediawiki
bugtracker (mantis)
subversion
We didn't virtualize specialized desktop PCs that run a certain software product which is only used once in a while. Do you plan to get rid of those old machines any time soon?
And which server products do you use? VMware ESX, VMware Server, Xen installations...?

My standard answer to questions like this is, "virtualization is great; be aware of its limitations".
I would never rely on a purely virtual implementation of anything that's an infrastructure-level service (e.g. the authoritative DNS server for your site, or your management and monitoring tools).
I work for a company that provides server and network management tools. We are constantly pushing back against the marketing chutzpah of virtualization vendors: infrastructure tools shouldn't live inside the infrastructure they manage.
Virtualization wants to control all of your services. However, there are some things that should always exist on physical hardware.
When something goes wrong with your virtual setup, troubleshooting and recovery can take a long time. If you're still running some of those services you require for your company on physical hardware, you're not dead-in-the-water.
Virtualization also introduces clock lag, disk and network IO lag, and other issues you wouldn't see on physical hardware.
Lastly, the virtualization tool you pick takes charge of all of the resources under its command for its hosted VMs. That means the hypervisor - not you - decides which VM has priority at any given moment. If you need any tool, service, or function to be guaranteed certain resources, it will need to be on physical hardware.
For anything that "doesn't matter", like web, mail, dhcp, ldap, etc - virtualization is great.

Our build machine running FinalBuilder runs on a Windows XP virtual machine under VMware Server on Linux.
It is very practical to move and also to back up: we just stop the virtual machine and copy the disk image.
A few days ago we needed to change the host PC; it took less than 2 hours to have our build machine up and running on another PC.
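The stop-and-copy backup described above is easy to script. A minimal sketch, assuming the `vmrun` utility that ships with VMware's VIX tools is on the PATH; the VM name and paths are made-up examples:

```shell
#!/bin/sh
# Stop the build VM cleanly, archive its whole directory, then restart it.
# VMX path and backup location are examples; adjust for your own setup.
VMX="/var/lib/vmware/buildvm/buildvm.vmx"
BACKUP_DIR="/backup/vms/$(date +%Y-%m-%d)"

vmrun stop "$VMX" soft                    # graceful guest shutdown
mkdir -p "$BACKUP_DIR"
cp -a "$(dirname "$VMX")" "$BACKUP_DIR"   # copies .vmx, .vmdk, nvram, etc.
vmrun start "$VMX" nogui                  # bring the build VM back up headless
```

Because the whole VM is just a directory of files, restoring on a different host PC is the same copy in reverse, followed by `vmrun start`.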

We migrated to a new SBS 2005 domain last month. We took the opportunity to create virtual machines for the following servers:
Build Machine
SVN Repository Machine
Bug Tracking Machine (FogBugz)
Testing Databases

I recently had to build an internal network for our training division, enabling the classrooms to be networked and have access to various technologies. Because of the lack of hardware and equipment, and because we operate in a strictly cash-only environment, I decided to go with a virtual solution on the server.
The server itself runs CentOS 5.1 with VMware 1.0.6 loaded as the virtualisation provider. On top of this we have four Windows Server 2003 machines running, making up the Active Directory, Exchange, ISA, database, and Windows/AV update components. File sharing and internet routing through the corporate network and ADSL are handled by the CentOS platform.
The setup allows us to expand to physical machines quickly at a later stage, and allows the main server to be replaced with minimal downtime on the network, as it only requires moving the virtual machines and starting them up on the new box.
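The routing side of a setup like this can be done with plain iptables NAT on the CentOS host. A sketch, assuming eth0 faces the ADSL/corporate link and the guest VMs sit on a subnet behind eth1 (the interface names and the 192.168.50.0/24 subnet are assumptions for illustration):

```shell
# Enable packet forwarding on the CentOS host
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade traffic from the classroom VM subnet out through eth0
iptables -t nat -A POSTROUTING -s 192.168.50.0/24 -o eth0 -j MASQUERADE

# Allow forwarding: outbound from the VMs, and established replies back in
iptables -A FORWARD -i eth1 -o eth0 -s 192.168.50.0/24 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

On CentOS the `ip_forward` setting would normally go in /etc/sysctl.conf and the rules be saved with `service iptables save` so they survive a reboot.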

Project Management (dotProject)
Generic Testing Servers (IIS, PHP, etc)
Do you plan to get rid of those old machines any time soon? No
And which server products do you use? MS Virtual Server

We use ESX in our labs and lately we've virtualized our document sharing service (KnowledgeTree), the lab management tools and almost all of our department's internal web servers.
We also virtualized almost all of our QA department's test machines, with the exception of the performance and stability testing hardware.
We aren't going to get rid of the hardware any time soon, it will be used to decrease the budget needs and increase the number of projects that can be handled by one lab.
We use VMware ESX 3.5.x exclusively.

We virtualise a copy of a test client and server so we can deploy to them before sending the files to the customer. They also get used to verify bug reports.
We find this is the biggest benefit of virtualisation, as we can keep lots of per-customer versions around.
We also VM our web server, and the corporate division has virtualised everything.

Related

Should XAMPP be installed on an actual physical Server?

Is XAMPP just meant for testing and setting up virtual servers? (because that's what the wiki says)
Can it be installed on an actual physical server? Do developers actually do that?
I'm a little confused, because if that were true, why would anyone install a virtual server on a physical server? It's like trying to run Excel on VirtualBox.
XAMPP simulates a typical stack used for web development on a local machine. If you have access to an actual physical server, you would typically install things like the web server (such as Apache) and MySQL on the server itself. The developers of XAMPP consider it more of a development tool due to certain features being disabled to make dev easier.
Virtualisation is used on servers because the actual physical machines are very powerful and so sit idle a large amount of the time. Putting those resources to use by running several virtual servers on top of one host reduces cost and increases operational throughput.
Virtual servers and Docker can be used to test with different environments at the same time, or to test beta software for future releases. On machines that have 6 or 8 cores running at several GHz, there are plenty of resources to host more than one machine, virtual or as a Docker container, so that you can use, for example, different databases without them interfering with each other.
Besides, physical hardware costs money to buy and to maintain.
Lastly, virtual machines and Docker containers are only files that you can simply copy to make a backup. A real machine is a little more work to back up.
But don't use XAMPP on a real machine that is exposed to the world. There are far too many security risks in the standard configuration.
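Running two different databases side by side without interference, as described above, is a one-liner each with Docker. A sketch; the image tags, names, and passwords are just examples:

```shell
# Two isolated database servers on one host, each with its own port,
# filesystem, and process space - they cannot interfere with each other.
docker run -d --name mysql-test \
  -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql:8.0

docker run -d --name pg-test \
  -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:16

# Both containers are just files under Docker's storage directory;
# list them to confirm they are up.
docker ps
```

Tearing an environment down and rebuilding it fresh is equally cheap (`docker rm -f mysql-test pg-test`), which is exactly the property that makes this attractive for testing.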

What could happen if I install all server roles on Windows Azure Pack: Web Sites on one machine?

At work, I'm in the process of installing Windows Azure Pack: Web Sites in a VMWare ESXi lab environment. I have little available RAM and hard drive space on the ESXi.
I originally thought I would be able to do this without spending too much resources. The Azure Pack Express variant is advertised as if it only requires one machine with 8 GBs of RAM. However, after completing the first installation, I discovered that the Azure Pack: Web Sites extension requires no less than 7 different server roles installed on 7 different machines, each with Windows Server 2012 R2. I need a separate Cloud Controller, Cloud Management Server, Cloud Front End, Cloud Publisher, Cloud Shared Web Worker, Cloud Reserved Web Worker and Cloud File Server.
I have no way of freeing up that many resources. In the installation guide for Windows Azure Pack, they "advise" me to use separate VMs for each role, but they don't say explicitly that combining them won't work. Is that because multiple server roles on one machine would strain resources, or because the roles are incompatible and will make the system malfunction? In my case, the Azure Pack will only be used for penetration testing by a single user, so I imagine resources should not be a problem.
I'm not a web administrator, and I'm in over my head on this task. If anyone could give me some advice before I proceed on this, that would be much appreciated.
TL;DR: Will there be a critical conflict if I install all seven server roles on one machine, or will it just strain resources?

Simulating computer cluster on simple desktop to test parallel algorithms

I want to try and learn MPI as well as parallel programming.
Can a sandbox be created on my desktop PC?
How can this be done?
Linux and windows solutions are welcome.
If you want to learn MPI, you can definitely do it on a single PC (most modern MPI implementations use shared-memory based communication locally, so you don't need additional configuration). So install a popular MPI implementation (MPICH / Open MPI) on a Linux box and get going! If your programs are going to be CPU-bound, I'd suggest only running job sizes that equal the number of processor cores on your machine.
Edit: Since you tagged it as a virtualization question, I wanted to add that you could also run MPI on multiple VMs (on VMPlayer or VirtualBox for example) and run your tests. This would need inter-vm networking to be configured (differs based on your virtualization software).
Whatever you choose (single PC vs VMs) it won't change the way you write your MPI programs. Since this is for learning MPI, I'd suggest going with the first approach (run multiple MPI programs on a single PC).
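As a concrete starting point for the single-PC approach, a first run might look like this (the package name is for a Debian-style system; adjust for your distribution):

```shell
# Install an MPI implementation (MPICH here; Open MPI works the same way)
sudo apt-get install mpich

# A minimal MPI program: each rank reports its id
cat > hello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF

mpicc hello.c -o hello
mpiexec -n 4 ./hello    # 4 processes on the local machine, no extra config
```

The `-n 4` is where the "virtual cluster on one node" comes from: the same command line later targets real cluster nodes via a hostfile, without changing the program.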
You don't need to have VMs running to launch multiple copies of your application that communicate using MPI.
MPI can give you a virtual cluster on a single node by launching multiple copies of your application.
One benefit of running in a VM, though, is that (as you already mentioned) it provides sandboxing: any issues your application creates will remain limited to the VM running that copy of the app.

Glassfish 3 EJB app deployment advice?

For a variety of unfortunate management reasons (budget constraints etc.) I, the developer, have been put in the position to deploy the app in a production environment. The catch is that I don't have any experience in production EJB application server deployment. That said, they are aware that there are no guarantees of success.
The context:
The dev server runs the latest version of NetBeans with GlassFish v3, on a Mac
98% / 99% uptime is ok, there are no financial/critical transactions
It is a client/server EJB 3 app, and the web tier, business tier, and resource tiers currently run on the same machine.
I have the liberty to choose the hw/sw infrastructure
Load estimates: 10 simultaneous connections on average, with rare peaks of 200
The outbound public data is text/small pics (it's for iPhone clients), inbound HTTP text only
Basic maintenance will be taken care of (backup, server reboot, etc)
My questions for production deployment:
What are the must haves infrastructure-wise? Minimum system specs etc?
Is it ok to keep Glassfish v3?
Which configuration aspects of the server should I focus on?
Worst case scenario: if I deploy the same software infrastructure (Netbeans/Glassfish v3) as during the development, would the server keep up?
Any piece of advice would be most welcome. Thanks!
For the architecture, you can start small with just a single GlassFish instance with no front web server (GlassFish has one built in that is very capable). If you can wait for the release of GlassFish 3.1 you'll be able to add instances (clustered or standalone) and offer scalability and centralized admin.
Most production instances of GlassFish I've seen run with 1GB-2GB of JVM heap (-Xmx) but your mileage may vary if you load lots of data in memory or if you use some frameworks. If you want better reliability, having them on separate machines is a plus obviously. With two instances on the same machine you can offer continuity of service if one instance fails (but not if the machine fails).
I'd suggest scripting the provisioning of resources (connection pools, JDBC data sources, etc...) and applications as much as possible using the "asadmin" command-line tool, and trying not to use NetBeans on the production platform.
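A provisioning script along those lines might look like the following; the pool/resource names, credentials, and the MySQL datasource are made up for the example:

```shell
# Provision a connection pool and datasource, then deploy the app,
# entirely from the command line - no IDE needed on the production box.
# Note: colons inside asadmin property values must be escaped with \\
asadmin create-jdbc-connection-pool \
  --datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlDataSource \
  --restype javax.sql.DataSource \
  --property user=app:password=secret:url="jdbc\\:mysql\\://localhost/appdb" \
  AppPool

asadmin create-jdbc-resource --connectionpoolid AppPool jdbc/appDS

asadmin deploy --name myapp --contextroot /myapp myapp.war
```

Keeping these commands in a version-controlled script means a replacement server can be provisioned identically in minutes, which also covers the "worst case" question above.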
Benchmarking with simulated load sounds like a wise thing to try to put together before going live and this survival guide will probably come in handy.
You don't mention the database. Isn't there one?
I suggest the following:
I'm not a Mac expert, but I'd say go with 6 GB or more of RAM
HDD space is not a problem these days
I don't know much about Mac processors (whatever the equivalent of dual core is, etc.)
Personally I have not used GlassFish v3 in production, but I hope it's stable now, so you should be OK.
System Architecture:
Receive all HTTP requests on some web server (Apache or Sun Web Server) and load-balance across your GlassFish server(s).
Now, depending on your physical (or virtual) machines, create an instance of the GlassFish application server on each machine. If you just have one machine, then create at least 2 instances of GlassFish. This will let you take one node down for maintenance while the other keeps going.
As far as deployment is concerned, make sure you disable debug logs and fine-tune JPA logging etc.
Use Ant or other scripts to deploy code and to take backups of the existing code.
I hope this helps you kick-start things; the rest you can ask about or solve as you go along.
Good luck.

Considerations for developing for a VM deployment

I'm setting up a system that uses SQL Server 2005, several custom Windows Services, Web Services and a few IIS .NET applications. Getting the whole system setup is a somewhat tedious process.
I wondered whether it would be a good idea to set up the whole system in a VM. Could I then just drop the VM onto a new server and get a huge head start on configuration?
What things should I be aware of if I pursue this approach? Is it a viable option? Is a VM a decent unit of deployment?
If the concept is feasible, I'd certainly appreciate specific suggestions about the VM setup.
I frequently use this approach. I'll set up a VM in VMware Workstation, configure it to my liking, and then use VMware Importer to import my virtual machine into an ESX environment. From there, I can turn the virtual machine into a template that I can use over and over again for deploying clones of my server or just as a starting point when creating new servers.
· Large quantity of virtual machines (one for each customer)
· Smaller quantity of physical machines
· The VMs we are working on have to be up, while the others can be down
· Easy backups: in case of issues, we can get back to the state we had the moment we shut down the VM, etc...
· Physical machines have to be configured with the latest hardware, or nearly so.
· Depending on your workloads, a physical machine can host between 4 and 8 VMs.