I have two CentOS platforms. Both run "CentOS release 5.10 (Final)". One is a "real" machine and the other is a VM. Both are 64 bit. Call the real machine Prod and the VM Spare.
When I got this gig I was told that the two machines were identical. Spare is supposed to be a hot spare for Prod. It is now obvious that this is not true. The two machines have different yum repo lists, and there are duplicate-looking packages installed from different channels. Prod looks like a server. Spare looks like it had been somebody's desktop, with Evolution, OpenOffice and other desktop cruft.
Prod and Spare have similar applications installed, but they come from different repos, so the available yum update levels are different.
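For reference, here is a rough way to see how far apart the two boxes actually are (a sketch only; the output file paths are arbitrary):

    # dump installed package names (with arch) on each box, then diff the lists
    rpm -qa --queryformat '%{NAME}.%{ARCH}\n' | sort > /tmp/prod-packages.txt   # run on Prod
    rpm -qa --queryformat '%{NAME}.%{ARCH}\n' | sort > /tmp/spare-packages.txt  # run on Spare
    diff /tmp/prod-packages.txt /tmp/spare-packages.txt
    # the enabled repos can be compared the same way
    yum repolist enabled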
I have tried disabling the non-standard repos and uninstalling the non-standard packages. This has ended in tears: removing X Windows, for example, pulled out hundreds of dependent packages that in turn have their own dependents, which, in the end, left Spare deaf, blind and mute. Blessedly we had a copy of the VM.
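In hindsight, a safer approach would have been to preview the blast radius first, for example with repoquery from yum-utils (a sketch; the package name is just an example):

    # show everything that (directly or indirectly) requires a package before removing it
    yum install -y yum-utils
    repoquery --whatrequires --recursive xorg-x11-server-Xorg | sort -u
    # then review the yum remove transaction summary carefully before confirming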
My latest idea is to migrate both machines to the latest stable CentOS release and basically have a do-over. The downsides (I think) are the downtime for the production machine and unknown issues between our custom software and the new package levels.
My basic question is: what is the best way to make the platforms as identical as possible, and to minimize (or, better yet, eliminate) downtime?
How should we maintain packages and other installs across them into the future? I am aware of Puppet, Chef and CFEngine but have not used them before. Are these the way to go for the future? Something else?
This is not really a programming-related question (you might have better luck at https://serverfault.com/).
Your question is quite broad, but essentially you want two machines that are as identical as possible, one production, one VM, correct?
To get machines into a consistent state, you'll need a configuration management tool of some sort. Ansible is probably the easiest to get set up and get cracking with. At its most basic, it is essentially a set of nice wrappers around SSH. With it you can make consistent changes to servers and easily track those changes as they happen.
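For example, once Ansible can SSH to the hosts, a couple of ad-hoc commands already go a long way (a sketch; the inventory file and group name are placeholders):

    # check connectivity to every host listed in a simple inventory file
    ansible all -i hosts.ini -m ping

    # make sure ntp is installed on the 'centos' group, escalating privileges
    ansible centos -i hosts.ini -m yum -a "name=ntp state=present" --become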
To have a VM you can easily provision, I recommend reading up on Vagrant and Packer: Vagrant to easily create a VM that accurately reflects your production environment, and Packer so you can repeatedly build images for various platforms. Ideally, you can use the same configuration tool to provision your VM, which means you can test your production changes on a VM first.
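The basic Vagrant loop is very short (a sketch; the box name below is only an example, so pick one that matches your production OS):

    vagrant init centos/6      # example box name; choose a box matching production
    vagrant up                 # boot the VM
    vagrant provision          # re-run the provisioner (e.g. your Ansible playbook) after changes
    vagrant ssh                # poke around inside
    vagrant destroy -f         # throw it away and start clean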
In general, the goal is repeatable, automated configuration that you can easily test. I'd also recommend reading up on the concept of DevOps.
We use Rackspace for our private cloud and we get to spin up RHEL instances all day, every day, which brings me round to the issue of managing the updates on these systems.
I told everyone to use CentOS until we can get around to managing the licensing, so that I can manage the repos and take control over what packages are available, security updates and the like.
Do I have to use RHN to manage RHEL packages, or can I use an in-house system for the machines to collect packages from?
/* please excuse the naivety of this question - personally, I can't see any reason why I can't use "createrepo", but I thought I'd just ask the community to make sure */
CentOS patches are usually out within 2-3 days of RHEL's, so you should be fine for routine security updates (major point releases are a whole other story!). Depending on your company size, etc., you may want to pick up a few RHEL licenses anyway to help contribute...
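To answer the createrepo question directly: yes, an in-house repo is just a directory of RPMs plus metadata served over HTTP. A minimal sketch (the paths and the hostname repo.example.com are placeholders):

    # on the repo server
    yum install -y createrepo httpd
    mkdir -p /var/www/html/myrepo
    cp /path/to/approved/*.rpm /var/www/html/myrepo/   # drop in the packages you have vetted
    createrepo /var/www/html/myrepo                    # generate the repodata/ metadata
    service httpd start

    # on each client, point yum at it (consider signing packages and setting gpgcheck=1)
    cat > /etc/yum.repos.d/inhouse.repo <<'EOF'
    [inhouse]
    name=In-house packages
    baseurl=http://repo.example.com/myrepo
    enabled=1
    gpgcheck=0
    EOF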
What are the best practices for deploying a Perl application? Assume that you are deploying onto a vanilla box with few CPAN modules installed. What are the ideal build and deploy methods? Module::Build, ExtUtils::MakeMaker, something else? I am looking for best-practice ideas from those who have done this repeatedly for large-scale applications.
The application is being deployed onto a server. It's not a CPAN distribution or a single script. It's actually a PSGI web application, that is, a ton of Perl packages.
I currently have a deployment script that uses Net::SSH::Expect to SSH into new servers, install some tools, configure the server, and then pull down the desired application branch from source control. This feels right, but is it best practice?
The next step is building the application. What are the best practices for tracking and managing dependencies, installing those dependencies from CPAN, and ensuring the application is ready to run?
Thanks
The company that I work at currently builds RPMs for each and every CPAN and internal dependency of an application (quite a lot of packages!) that install into the system site_perl directory. This has a number of problems:
It is time-consuming to keep building RPMs as versions get bumped across the CPAN.
Tying yourself to the system perl means that you are at the mercy of your distribution to make or break your perl (in CentOS 5 we have a maximum perl version of 5.8.8!).
If you have multiple applications deployed to the same host, having a single perl library for all applications means that upgrading dependencies can be dangerous without retesting every application on the host. We deploy quite a lot of separate distributions, all with varying degrees of maintenance attention, so this is a big deal for us.
We are moving away from building RPMs for every dependency and are instead planning to use carton [1] to build a completely self-contained perl library for every application we deploy. We're building these libraries into system packages, but you could just as easily tarball them up and manually copy them around if you don't want to deal with a package manager.
The problem with carton is that you'll need to set up an internal CPAN mirror that you can install your internal dependencies to if your application depends on modules that aren't on the CPAN. If you don't want to deal with that, you could always just manually install the libs you need into a local::lib [2] or perlbrew [3] and package the resulting libraries up for deployment to your production boxes.
With all of the described solutions, be very careful with XS perl libs. You'll need to build your cartons/local::libs/perlbrews on the same architecture as the host you're deploying to, and make sure your production boxes have the same binary dependencies as those you used to build.
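For what it's worth, the carton workflow itself is small (a sketch; paths and the app name are placeholders, and as noted above the build host should match production):

    cpanm Carton                      # install Carton itself (needs cpanminus)
    cd /path/to/myapp                 # application checkout containing a cpanfile
    carton install                    # install deps into ./local and write cpanfile.snapshot
    tar czf myapp-local.tar.gz local/ cpanfile.snapshot   # ship the bundled libs with the app
    carton exec -- plackup -p 5000 app.psgi               # run the PSGI app against ./local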
To answer the update to your question about whether it is best practice to check out from source control and install onto your production host: I personally don't think that it is a good idea. The reason I believe it is risky is that it is hard to be completely sure that the set of libraries you install lines up exactly with the libraries you tested against, so deployments have the potential to be unpredictable. This issue can be exacerbated for webapps, as you are very likely to have the same code deployed to multiple production boxes, which can also get out of sync. While the Perl community does a wonderful job of trying to release good-quality code that is backwards compatible, when things go wrong it is normally quite an effort to figure them out. This is why carton is being developed: it creates a cache of all the distribution tarballs you need to install, frozen at specific versions, so that you can deploy your code predictably.

All of that said, if you are happy to accept that risk and fix things when they break, then installing locally should be fine for you. At the very minimum, though, I would strongly suggest installing into a local::lib so that you can back up the old local lib before installing updates, giving you a rollback point if things get messed up.
[1] Carton
[2] local::lib
[3] perlbrew
If it has some significant CPAN dependencies, then you might want to either write a small script that uses CPAN::Shell to install the necessary modules or edit the Makefile.PL of your application so that it reflects the necessary dependencies in the BUILD_REQUIRES portion of the file.
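For example (a minimal sketch; the module names are just placeholders):

    # install the needed modules straight from the CPAN via CPAN::Shell
    perl -MCPAN -e 'CPAN::Shell->install("Plack", "DBI")'
    # then build and install the application itself in the usual ExtUtils::MakeMaker way
    perl Makefile.PL && make && make test && make install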
You may take a look at sparrowdo, a Perl 6 configuration management tool. It comes with some handy plugins related to Perl 5 deployment, like installing CPAN packages or deploying a PSGI application.
Update: this link https://dev.to/melezhik/deploying-perl5-application-by-sparrowdo-9mb could be useful.
Disclosure - I am the tool author.
My team plans to hire a couple of part-time contract programmers and interns soon, and I would like to reduce the amount of setup time involved in getting each new intern's dev environment up and running.
Considerations:
They will be working on PC-based laptops or desktops running Windows with VMware Server and Ubuntu, or just Ubuntu. The computers may or may not have identical hardware.
Don't want to spend a ton of money, but enough should be spent to ensure they are not frustrated by slow computers, etc.
The environment includes Ruby on Rails, Git, Passenger, Capistrano, and Memcached.
Any suggestions are welcome. If there is a good way to do this using Apple Mac minis, that is something we would consider too.
I'd recommend getting everyone on Ubuntu and writing a setup script that runs the necessary apt-get install invocations (and possibly wget and dpkg commands) needed to produce the standard environment. Then you simply need to keep a copy of that script available on an internal website, and you can run it on your interns' or contractors' machines, or have them run it themselves.
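A minimal sketch of such a script, assuming the stack from the question (exact package and gem names may need adjusting for your Ubuntu release, and the repo URL is a placeholder):

    #!/bin/bash
    set -e
    # base build tools and services
    sudo apt-get update
    sudo apt-get install -y build-essential git-core curl memcached mysql-client libmysqlclient-dev
    # Ruby + Rails toolchain
    sudo apt-get install -y ruby-full rubygems
    sudo gem install rails capistrano passenger bundler
    # grab the project and install its gems
    git clone git://git.example.com/ourapp.git ~/ourapp   # placeholder repo URL
    cd ~/ourapp && bundle install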
If using Windows, it's slightly harder to do, but you could probably write a batch script to install Python and then run a Python script to do the remaining setup (I suggest that simply because trying to do anything that is in any way sophisticated in batch is a good way to drive oneself insane).
Virtual machines hold great promise as a way to distribute hard-to-configure applications. I have been using JeOS vmbuilder (and some bash scripts) to generate my appliances, but I'm looking for something more elegant.
In my case, I'm looking for a solution that will build a Linux-based VM with configured versions of Tomcat and MySQL as a base. Each future release would be a new war file and a SQL update script. It'd be really nice if already deployed VMs could self-update, and if test builds could be pushed to EC2. A sketch of the self-update step such an appliance could run is below (the paths, service name and release URL are assumptions, e.g. a tomcat6 package).
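    #!/bin/bash
    set -e
    RELEASE_URL=http://releases.example.com/myapp/latest   # placeholder release location
    # fetch the new build artifacts
    curl -sO "$RELEASE_URL/myapp.war"
    curl -sO "$RELEASE_URL/update.sql"
    # apply the database migration, then swap in the new war
    mysql -u myapp -p"$DB_PASS" myapp_db < update.sql
    service tomcat6 stop
    cp myapp.war /var/lib/tomcat6/webapps/
    service tomcat6 start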
In my brief search, I've found rPath rBuilder, TurnKey Linux, Vagrant, SUSE Studio, JeOS vmbuilder, and VMware Studio. Rather than try all of these, I figured I'd ask what this community uses to build and distribute appliances...
I use pungi myself.
Where can virtualization techniques be applied by an application developer? How can virtualization be applied on a day-to-day basis?
I would like to hear from veteran developers who make use of it. I am interested in the following things:
How it helps in development.
How it could be used for testing purposes.
What the recommended practices are.
The main benefit, in my view, is that on a single machine you can test an application in:
Different OSs, in case your app is multiplatform
Different configurations, like testing a client in one machine and a server in the other, or trying different parameters
Different performance characteristics, like with minimal CPU and RAM, or with multiple cores and large amounts of RAM
Additionally, you can provide VM images to distribute applications preconfigured, be it for testing or for running them in virtualized environments where that makes sense (for apps which do not demand much power).
Can't say I'm a veteran developer, but I've used virtualization extensively when environments need to be controlled. That goes for:
Development: not only is it really useful to have VMs about for different deployment environments (e.g. browser versions, Windows XP / Vista / 7) but especially for maintenance it's handy to have a VM with the right development tools configured for a particular job.
Testing: this is where VMs really shine: it's great to have different deployment environments that can be set back to a known good configuration and multiple server instances running in parallel to test load balancing.
I've also found it useful to have a standard test image available that I can run locally to verify that a fix works. If it doesn't then I can roll back to the previous snapshot with no problems.
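For example, with VirtualBox this can be scripted (just an illustration; any hypervisor with snapshots works, and "testbox" is a placeholder VM name):

    # take a known-good snapshot of the test VM
    VBoxManage snapshot "testbox" take "clean-install"
    # ...run the tests, make a mess...
    # power off and roll back to the known-good state
    VBoxManage controlvm "testbox" poweroff
    VBoxManage snapshot "testbox" restore "clean-install"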
I've been using Virtual PC running Windows XP to test products I'm developing. I have clients who still need XP support while my primary dev environment is Vista (haven't had time to jump to Win7 yet), so having a virtual setup for XP is a big time saver.
Before each client drop, I build and test on my Vista dev machine, then fire up VPC with XP, drag the binaries to the XP guest OS (enabled by installing Virtual PC Additions on the guest OS) and run my tests there. I use the Undo Disks feature of Virtual PC so I can always start with a clean XP image. This process would have been really cumbersome without virtualization.
I can now dump my old PCs at the local PC Recycle with no regrets :)
Some sort of test environment: if you are debugging malware (either writing it or developing a remedy against it), it is not clever to use the real OS. The only possible disadvantage is that viruses can detect that they are being run inside a VM. :( One way they can do this is that VM engines emulate only a finite set of hardware.