What is the best hardware and software approach to cloning my workstation's dev environment? - vmware-workstation

My team plans to hire a couple of part-time contract programmers and interns soon, and I would like to reduce the amount of setup time involved in getting each new hire's dev environment up and running.
Considerations:
They will be working on PC-based laptops or desktops running Windows with VMware Server and Ubuntu, or just Ubuntu. The computers may or may not have identical hardware.
We don't want to spend a ton of money, but enough should be spent to ensure they are not frustrated by slow computers, etc.
The environment includes Ruby on Rails, Git, Passenger, Capistrano, and Memcached.
Any suggestions are welcome. If there is a good way to do this using Apple Mac minis, that is something we would consider too.

I'd recommend getting everyone on Ubuntu and writing a setup script that runs the necessary apt-get install invocations (and possibly wget and dpkg commands) needed to produce the standard environment. Then you simply keep a copy of that script available on an internal website, and you can run it on your interns' or contractors' machines, or have them run it themselves.
If using Windows, it's slightly harder to do, but you could probably write a batch script to install Python and then run a Python script to do the remaining setup (I suggest that simply because trying to do anything even mildly sophisticated in batch is a good way to drive oneself insane).
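If it helps, here is a minimal sketch of such a setup script in Python (my own illustration; the package and gem names below are assumptions based on the stack you listed and would need adjusting):

    #!/usr/bin/env python3
    """Minimal dev-environment bootstrap sketch; package names are assumptions."""
    import subprocess
    import sys

    # Assumed Ubuntu packages and gems for the stack described in the question.
    APT_PACKAGES = ["git", "ruby-full", "memcached", "build-essential"]
    GEMS = ["rails", "capistrano", "passenger"]

    def run(cmd):
        """Echo a command, run it, and stop on the first failure."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def main():
        run(["sudo", "apt-get", "update"])
        run(["sudo", "apt-get", "install", "-y"] + APT_PACKAGES)
        run(["sudo", "gem", "install"] + GEMS)
        print("Base environment installed.")

    if __name__ == "__main__":
        try:
            main()
        except subprocess.CalledProcessError as exc:
            sys.exit("Setup failed: {}".format(exc))

Keeping the whole thing in one script on an internal site means a new hire only has to download and run it once.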


Why can't all applications run on one single OS? [closed]

Before tools like Docker and VMs, bare-metal servers were used to deploy and host applications. But tools like Docker and VMs allow us to have more than one OS on the same machine, whereas a bare-metal server only allows us to have one OS.
Why is this an issue? Why can't all applications run on a single server? Why do some apps need a separate/different server?
Applications are code that has been compiled to execute on a certain system.
Different OSes
Different OSes do things in different ways, and when we code we have to take that into consideration.
For instance, in Windows paths look like c:\this\is\a\path, while in Linux they may look like /this/is/a/path. If my application is just working with paths, I could make it work on any platform, but I would need to consider how I compile it and what language I run it in. If it's written in an interpreted language such as Python or Node.js, I need to ensure that I have written the code in an OS-agnostic way, for example by joining the folder names together rather than second-guessing which OS the machine is running on, as shown below.
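As a small illustration (my own sketch, with made-up file names), joining path components in Python instead of hard-coding separators looks like this:

    from pathlib import Path

    # Build the path from its parts instead of hard-coding "\" or "/".
    # pathlib inserts the right separator for whatever OS the code runs on.
    config_path = Path.home() / "myapp" / "config" / "settings.ini"

    print(config_path)        # e.g. C:\Users\me\myapp\config\settings.ini on Windows
    print(config_path.parts)  # the same components on any platform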
If I compile my code from C# but I want it to run on any machine, will it check the OS at run time and then alter the way it handles paths, etc.?
Also, from my own experience: in my web application I had to check whether a file was an image, and I was using a library which apparently only worked on Windows, so when I deployed my Docker container to my Ubuntu machine I got a runtime exception about a missing library. It was System.Draw or something. So even once you have your app containerized, that may not necessarily be problem solved 🤣
Paths are just one example. Some .NET Framework applications require that the machine has special runtimes installed on it, and these (someone correct me if I'm wrong) won't install on Linux, so then the code wouldn't run.
.NET CORE and Docker
With the advent of .NET Core, this is the direction we are trying to move in. .NET Core is supposed to be runnable on any platform.
Docker containers, likewise, wrap everything that is required to run an application into one package: it doesn't matter what your registry settings look like or whether you're missing this library or that library, because everything the app needs to run is bundled into the container. This means that if it runs a certain way on system A, you can expect it to run the same way on system B.
Architecture
We also have the issue of 32-bit vs 64-bit architecture. This is basically the rawest level of how information gets processed on the machine. When code is compiled, it is compiled into machine instructions which your CPU then executes, and whether your machine and OS are 32-bit or 64-bit affects whether the CPU and OS can run those instructions. 32-bit code can generally run on a 64-bit machine, but not vice versa. And if you have an old Windows game which uses a 16-bit installer, good luck trying to get that to run 😃. I did manage to get an old Windows game running on 64-bit Ubuntu once: it was a 32-bit game, but the installer was a 16-bit installer.
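If you're curious, a tiny Python sketch (my own addition, not part of the original point) can report the bitness of the process it runs in:

    import platform
    import struct

    # The size of a pointer tells you the bitness of the running process:
    # 4 bytes -> 32-bit, 8 bytes -> 64-bit.
    bits = struct.calcsize("P") * 8
    print("This Python process is {}-bit".format(bits))

    # platform.machine() reports the hardware architecture, e.g. 'x86_64' or 'AMD64'.
    print("Machine architecture: {}".format(platform.machine()))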
I'm not expecting this answer to win any awards, but it might do as a nice placeholder until someone provides a better answer 😀
Compiled Languages
Objective-C / Swift - will these only work on Apple devices?
.NET Framework - will mostly work on Windows devices, although some code may work on Linux via Mono.
Java - this is actually cross-platform and runs on the Java Runtime; the bytecode it compiles down to is the same for all machines, and it is the JVM itself that is built separately for each platform.
C++ - is compiled, and what you compile for one OS will not run on another OS.
Interpreted languages
Python runs on any machine, although if you want your script or code to be platform-agnostic you have to take care (see the sketch after this list).
Bash, although primarily a Unix shell, has seen better support on Windows lately; I strongly doubt that every script written for Unix would run first time on Windows without a hitch.
PHP runs on Unix-based systems and on Windows. I'm not sure how much care is required to keep that code OS-agnostic, although I have a feeling some is.
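To illustrate the kind of care mentioned for Python above (my own sketch, with an assumed file name), detecting the platform at run time and leaning on OS-agnostic APIs looks roughly like this:

    import os
    import sys
    import tempfile
    from pathlib import Path

    # sys.platform distinguishes the major platforms at run time.
    if sys.platform.startswith("win"):
        print("Running on Windows")
    elif sys.platform == "darwin":
        print("Running on macOS")
    else:
        print("Running on a Unix-like system")

    # Prefer APIs that already abstract over the OS instead of branching everywhere:
    scratch = Path(tempfile.gettempdir()) / "example.txt"  # no hard-coded /tmp or C:\Temp
    print("Writable scratch location:", scratch)
    print("Path separator on this OS:", os.sep)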

Centos VM vs Centos "real" machine yum package differences

I have two CentOS platforms. Both run "CentOS release 5.10 (Final)". One is a "real" machine and the other is a VM. Both are 64 bit. Call the real machine Prod and the VM Spare.
When I got this gig I was told that the two machines were identical. Spare is supposed to be a hot spare for Prod. It is now obvious that this is not true. The two machines have different yum repo lists, and there are duplicate-looking install packages from different channels. Prod looks like a server; Spare looks like it had been somebody's desktop, with Evolution, OpenOffice and other desktop cruft.
Prod and Spare have similar applications installed, but sourced from different repos, so the available yum update levels differ.
I have tried disabling the non-standard repos and uninstalling the non-standard packages. This has led to tears: removing X-Windows, for example, removed hundreds of dependent modules which in turn have their own dependents, and in the end it left Spare deaf, blind and mute. Blessedly we had a copy of the VM.
My latest idea is to migrate both machines to the latest stable CentOS release and basically have a do-over. The downside (I think) is the downtime for the production machine and unknown issues between our custom software and the new package levels.
My basic question is: what is the best way to make the platforms as identical as possible, and minimize (or, better yet, eliminate) downtime?
How should we maintain packages and other installs across them into the future? I am aware of Puppet, Chef and CFEngine but have not used them before. Are these the way to go for the future? Something else?
This is not really a programming-related question (you might have better luck at https://serverfault.com/).
Your question is quite broad, but essentially you want two machines that are as identical as possible, one production, one VM, correct?
To get machines into a consistent state, you'll need a configuration tool of some sort. Ansible is probably the easiest to get set up and get cracking with; at its most basic, it is essentially a nice wrapper around SSH. With it you can keep servers consistent and easily track changes to them as they happen.
To have a VM you can easily provision, I recommend reading up on Vagrant and Packer: Vagrant to easily create a VM that accurately reflects your production environment, and Packer so you can repeatably create an image for various platforms. Ideally, you can take the configuration tool and use it to provision your VM, meaning you can test your production changes on a VM first.
In general, aim for repeatable, automated configuration that you can easily test; I'd also recommend reading up on the concept of DevOps.
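As a concrete first step before any of those tools, you could simply diff the package lists of the two machines. Here is a rough Python sketch (mine, with assumed file names) that compares rpm -qa output captured from Prod and Spare:

    """Compare installed-package lists captured from two CentOS hosts.

    Capture the lists on each machine first, e.g.:
        rpm -qa --qf '%{NAME}\n' | sort -u > prod_packages.txt    (on Prod)
        rpm -qa --qf '%{NAME}\n' | sort -u > spare_packages.txt   (on Spare)
    """
    from pathlib import Path

    def read_packages(path):
        """Return the set of non-empty package names listed in the file."""
        return {line.strip() for line in Path(path).read_text().splitlines() if line.strip()}

    prod = read_packages("prod_packages.txt")    # assumed file name
    spare = read_packages("spare_packages.txt")  # assumed file name

    print("Only on Prod:")
    for name in sorted(prod - spare):
        print("   " + name)

    print("Only on Spare:")
    for name in sorted(spare - prod):
        print("   " + name)

That gives you a concrete list of what to reconcile before you hand ongoing maintenance over to Ansible or a similar tool.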

Windows Web Platform Installer vs Manual Install?

I am going to be moving all my websites to a Windows Web Server 2008 R2 machine. I have installed it in a virtual machine to test that my websites work with it.
I have noticed that there is a program called Web Platform Installer. I have used it to install a few sites, but I was wondering: is it a security risk to use this? Would it be better for me to manually install the sites (WordPress, Umbraco, etc.)?
Thanks
We push that out to all our customers just for ease of deployment, and I have not seen any security issues with it. However, I would question its reliability, as it fails to install whatever I have selected about 10% of the time. Having said that, when it does work it's a fairly good tool, as it will install any prerequisites that you may not have been aware of (like SMO, or MySQL if you try to install WordPress without it), and it will also keep you up to date on newer versions of software that you have installed.

How can developers make use of Virtualization?

Where can virtualization techniques be applied by an application developer? How can virtualization be applied on a day-to-day basis?
I would like to understand from veteran developers making use of it. I am interested in the following things:
How it helps in development.
How it could be used for testing purposes.
What are the recommended practices.
The main benefit, in my view, is that in a single machine, you can test an application in:
Different OSs, in case your app is multiplatform
Different configurations, like testing a client in one machine and a server in the other, or trying different parameters
Different performance characteristics, such as minimal CPU and RAM versus multiple cores and large amounts of RAM
Additionally, you can provide VM images to distribute applications preconfigured, be it for testing or for running applications in virtualized environments, where it makes sense (for apps which do not demand much power)
Can't say I'm a veteran developer, but I've used virtualization extensively when environments need to be controlled. That goes for:
Development: not only is it really useful to have VMs about for different deployment environments (e.g. browser versions, Windows XP / Vista / 7) but especially for maintenance it's handy to have a VM with the right development tools configured for a particular job.
Testing: this is where VMs really shine: it's great to have different deployment environments that can be set back to a known good configuration and multiple server instances running in parallel to test load balancing.
I've also found it useful to have a standard test image available that I can run locally to verify that a fix works. If it doesn't then I can roll back to the previous snapshot with no problems.
I've been using Virtual PC running Windows XP to test products I'm developing. I have clients who still need XP support while my primary dev environment is Vista (haven't had time to jump to Win7 yet), so having a virtual setup for XP is a big time saver.
Before each client drop, I build and test on my Vista dev machine then fire up VPC with XP, drag the binaries to the XP guest OS (enabled by installing Virtual PC additions on the guest OS) and run my tests there. I use the Undo disk feature of Virtual PC so I can always start with a clean XP image. This process would have been really cumbersome without virtualization.
I can now dump my old PCs at the local PC Recycle with no regrets :)
Some sort of test environment: if you are debugging malware (either writing it or developing a remedy against it), it is not clever to use the real OS. The only possible disadvantage is that viruses can detect that they are being run under virtualization :( One way they can do this is by recognizing the finite set of hardware that VM engines emulate.

I am a long-time Ubuntu Linux user (a developer); what are the benefits of using OpenSolaris?

I am a web developer (J2EE application developer) and just want to expand the tools I use. I want to use OpenSolaris for my personal projects. I have nothing against Linux, and it looks like a lot of the same tools are available on both systems.
Have you jumped to Solaris, was it a good experience?
DTrace, zones, switching between 32-bit and 64-bit mode with a single GRUB switch, ZFS, and stable libraries (I can't really emphasize that last one enough): Solaris 7 software generally runs on OpenSolaris (otherwise known as Solaris 11), whereas glibc changes between minor Linux kernel releases.
Xen is integrated pretty tightly, and setting up lx zones or virtualization to keep your Linux environment is dead simple.
OpenSolaris now has /usr/bin/gnu, where all your favorite utilities can be found.
Expect, though, to end up fighting the ./configure && make && make install cycle a little bit. A lot of developers assume you're running Linux, and don't prepend -m64 for Solaris, among other things. Compiling wxPython is an adventure, for instance.
Edit: I forgot to mention one (possibly important) thing. Package repositories aren't nearly comparable. It's neat that pkg image-update (equivalent to `apt-get update && apt-get upgrade && apt-get dist-upgrade`) makes a ZFS snapshot that you can get back to via GRUB at any point, but you have nowhere near as many packages in IPS as in APT. All the biggies are there, though.
If you're planning to switch, Sun's documentation is fantastic, and the BigAdmin tips of the day are worth reading for a while to get you up to speed.
For J2EE work per se, probably not much. As a more general developer you may appreciate DTrace. As an admin you'll love ZFS & zones. You'll hate the outdated (mostly user-land) utilities, though. FreeBSD is a nice in-between of Linux & Solaris. :)
I guess the underlying OS doesn't matter much for a J2EE developer, as long as you stick to the Java platform and don't make use of native libraries through JNI. Having said that, the most important factors in choosing an OS would be cost and performance. Now, both Linux and OpenSolaris are open source and free to use, but I'm not sure about using OpenSolaris in commercial deployments. I also don't know how Java performance differs from one to the other, but I'm strongly convinced that Sun's implementation for Linux is damn good.
Note: I've never used OpenSolaris and I use mostly Linux.
I'm not certain from your question whether you mean your development desktop or your hosting solution, but I can take a crack at both. About six months ago I got hold of a free year of hosting on OpenSolaris running GlassFish. I hadn't used Solaris before and thought it would be a good learning experience. I built a test server, installed OpenSolaris and GlassFish, and used it to practice. It was very straightforward to configure GlassFish and deploy applications. Managing services in OpenSolaris is also simple once you read the right documentation. I like OpenSolaris and I like GlassFish.
Obviously, I found similarities and differences from previous experience with Java application servers and operating systems. However, I thought so highly of the OS that I switched my desktop over last month. It has been a good experience.
Eclipse is not available on OpenSolaris, unfortunately. If you are an Eclipse user you would have to migrate to NetBeans.