Vagrant Berkshelf - Shelf Path?

Is it possible to set the path where the berkshelf plugin puts the cookbooks it installs? (As in the .berkshelf folder)
I am running Windows 7.
I am currently trying to install a MySQL server on a VM using an Opscode cookbook, and here at work the %HOMEDRIVE% system variable is set to a network drive. So when Berkshelf starts at the beginning of the Vagrantfile, it pushes the cookbooks to the network drive, which is slow and, well, not where they should be. Is there a fix for this?
VirtualBox did this as well, but I fixed it by altering its settings. I tried looking for an equivalent setting for Berkshelf, but the closest I found was for standalone Berkshelf (not the Vagrant plugin), where it appears you can set this environment variable:
ENV['BERKSHELF_PATH']
Found here:
http://www.rubydoc.info/github/RiotGames/berkshelf/Berkshelf#berkshelf_path-class_method
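(I'm guessing that would mean doing something like this before vagrant up - I haven't tried it yet, and I don't know whether the Vagrant plugin even honors it; the folder is just an example:)
set BERKSHELF_PATH=C:\berkshelf
vagrant up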
I need the cookbooks it reads from the Berksfile to be stored on my laptop's local drive instead, because in my scenario I cannot have the mobility of the VM limited to the building by files that are stored on the network.
Any insight would be much appreciated.
Perhaps it's better to use standalone Berkshelf instead of the Vagrant plugin?
Thanks.

If you want that portability - a full chef-repo ready for chef-solo runs - you're better off using standalone Berkshelf instead of the vagrant-berkshelf plugin, which is NOT that flexible.
For complex cookbooks, I prefer standalone Berkshelf, as it lets me run berks install --path chef/cookbooks to copy all the required cookbooks out of ~/.berkshelf/cookbooks; then I can just tar the whole thing and transfer it to other machines for the same chef-solo run. Some people use Capistrano to automate the tar and scp/rsync over the network. I just use rsync/scp ;-)
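Roughly, the flow is just (the path and hostname are placeholders):
berks install --path chef/cookbooks        # vendor everything from ~/.berkshelf/cookbooks
tar czf chef-repo.tar.gz chef/             # bundle the whole chef-repo
scp chef-repo.tar.gz user@othermachine:    # or rsync; untar there and run chef-solo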
HTH

Related

How to make a Solaris system environment the same as another

I have a real host and a VM; they are both Solaris systems.
sjcux-c7build01# uname -a
SunOS sjcux-c7build01 5.8 Generic_Virtual sun4v sparc sun4v
The real host has been used for years. The VM is newly created. For maintenance, we want to use the VM instead of the real host in the future. I need to install all the packages so that the VM can run GNU make builds like the old host.
How can I list all the packages the real host has installed?
pkginfo just shows what's bundled with Solaris.
I noticed that the directory /usr/local/lib on the VM is empty, while on the real host it has many .so files in it.
There must be many other differences. How can I find them? How can I list the packages I need to install?
For example, on the VM I can't use git:
ldd git
libiconv.so.2 => /tools/sw/opt/SunOS/5.8/git/git-2.23.0/lib/libz.so/lib/libiconv.so.2 - Not a directory
libintl.so.8 => /tools/sw/opt/SunOS/5.8/git/git-2.23.0/lib/libz.so/lib/libintl.so.8 - Not a directory
So libiconv needs to be installed.
I want to make the VM the same as the real host. What do I need to do? Can anyone give me some guidance?
It is unrealistic to track them down one by one from the .so files.
One possible way is to create a flash archive of your old machine and install from this archive:
create a repository where the archive will be stored
create a flash archive of the system
check the archive
export the flash archive store via NFS
on the new machine, boot from CD and choose NFS as the installation media
For more detailed instructions you can check this article on my blog.
After creating the new machine you should take care of changing its IP address, or unconfigure it and reconfigure it from scratch (in terms of network and authentication services), because otherwise the two machines will have the same IP.
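A rough sketch of those steps (names and paths are placeholders; check the flarcreate(1M) and share(1M) man pages on your Solaris release for the exact options):
# on the old host
mkdir /flash
flarcreate -n "c7build01-image" -c /flash/c7build01.flar   # create the compressed archive
flar info /flash/c7build01.flar                            # check the archive
share -F nfs -o ro /flash                                  # export the store via NFS
# on the new machine: boot from the install CD, choose a network (NFS) install,
# and point it at oldhost:/flash/c7build01.flar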

Deployment of files to Virtual Machines

During our development process the developers modify code, compile it, and then need to deploy it to a remote machine to test or debug it remotely.
Several manual steps are usually needed - stop one or more services, copy the compiled files to a specific place on the destination machine, and other steps (maybe delete some folder, etc.).
I was wondering if there is a tool that takes as input the IP of a remote machine and a set of predefined steps (stop a service, copy local files to the remote machine, etc.) and just does the deployment automatically for the developer. I'd like to automate this tiring process a bit...
Thanks.
Ant is a common tool for such tasks in Java development. You can use ant to compile your code, use an scp task to copy your binaries to a server and run scripts on that server. The configuration is done by XML and is pretty easy. You should google or search on stackoverflow for some examples.
I use Rundeck to control my deployments. I like its simplicity and the fact that all that's required is SSH access to my servers, enabling me to upload files and run whatever scripts I require.
It has a simple XML configuration file listing the servers in my network. This makes it really easy to integrate with other CM tools.
For windows deployments you're going to require an SSH implementation installed on each node, or a more complicated deployment tool.

Netbeans: Remote project w/source files over SSH?

Is it possible to set up a remote NetBeans C++ project where the source files are only accessible via SSH?
My project needs to build on a Linux box, but I'd like to develop it on a Windows machine.
Checking out the code via SVN to my Windows machine is not an option since there are a few files that differ only by case, and NTFS is not case sensitive (unfortunately, I can not change them).
I'm well aware that Windows can be kind of forced to be case-aware, and that the ideal solution is to just rename those files to something sane.
However, I'm just trying to solve this using NetBeans. Since it's a remote project anyway, why bother keeping any files locally?
Thanks
Currently, no. In general, having program files whose names differ only by case is bad practice.
You can enable case sensitivity in Windows - you may need to have a Professional version or better.
For Windows XP: http://support.microsoft.com/kb/817921
For Windows 7: http://technet.microsoft.com/en-us/library/cc732389.aspx
See also: Windows Services for Unix
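If I remember right, those articles come down to flipping a single registry value and rebooting (this is from memory - double-check against the links above before touching the registry):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\kernel" /v obcaseinsensitive /t REG_DWORD /d 0 /f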
Another solution would be to set up VNC/RDP on the remote Unix system. The overall solution should be to conform to a better file naming convention:
Programmer 1: "Hey man, take a look at noCamelCase.cs - I just rewrote it."
Programmer 2: "Um, nocamelcase.cs is blank."
There are two ways of doing remote builds with Netbeans. The first, the project is stored locally. You just create a regular project and on the 2nd page of the wizard you specify the network directory with the source and the remote build host. I've used this for Solaris client to Linux server, but not from Windows as we don't have the mounts exported by SMB. This uses ssh and some shared lib interposers to get the build info.
The second way is to create a remote project. In this case the project is created on the remote host and data is copied on demand to the client. I've only done a few tests with this, as I preferred the first method for its much better latency.
Lastly, you could either use VNC or install an X server on your Windows machine and do everything on the Linux machine.

Best practices for deploying tools & scripts to production?

I've got a number of batch processes that run behind the scenes for a Linux/PHP website. They are starting to grow in number and complexity, so I want to bring a small amount of process to bear on them.
My source tree has a bunch of cpp files and scripts, organized with development but not deployment in mind. After compiling all the executables, I need to put various scripts and binaries on a cluster of machines. Different machines need different executables, scripts, and config files for their batch processes. I also have a few tools that I've written that belong on every machine. At the moment, this deployment process is manual and error prone.
I'm guessing I'm just going to end up with a script that runs at the root of the source tree and builds a smaller tree of everything necessary for any of the machines. Then, I'll just rsync that to the appropriate machines. But I'm curious how other people are managing this type of problem. Any ideas?
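Something along these lines is what I'm picturing (host names, paths, and file names here are made up):
#!/bin/sh
# build, stage only what one machine needs, then push it out
make all
rm -rf stage && mkdir -p stage/batch01
cp bin/reportgen scripts/nightly.sh conf/batch01.conf stage/batch01/
cp tools/* stage/batch01/                     # the tools that belong on every machine
rsync -avz --delete stage/batch01/ deploy@batch01:/opt/batch/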
There are several categories of tools here. Some people use a combination of tools from these categories. I sometimes use, for example, both Puppet and Capistrano. See Puppet or Capistrano - Use the Right Tool for the Job for a discussion.
Scripting Tools aimed at Deploying an Application:
The general pattern with tools in this category is that you create a script and/or config file, often with sets of commands similar to a Makefile, and the tool will ssh over to your production box, do a checkout of your source, and run whatever other steps are necessary.
Tools in this area usually have facilities for rollback to a previous version. So they'll check out your source to a timestamped directory under releases/ and, if all goes well, point a "current" symbolic link at it. If there's a problem, you can revert to the previous version by running a command that removes "current" and links it to the previous releases/ directory.
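The layout these tools manage looks roughly like this hand-rolled sketch (the repository URL is a placeholder; Capistrano and friends do the equivalent for you):
ts=$(date +%Y%m%d%H%M%S)
svn export http://svn.example.com/app/trunk releases/$ts   # new timestamped release
ln -sfn releases/$ts current                               # repoint "current" at it
# rollback = repoint current at the previous releases/<timestamp> directory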
Capistrano comes from the Rails community but is general-purpose. Users of Capistrano may be interested in deprec, a set of deployment recipes for Capistrano.
Vlad the Deployer is an alternative to Capistrano, again from the Rails community.
Write your own shell script or Makefile.
Options for getting the files to the production box:
Direct checkout from source. Not always possible if your production boxes lack development tools, specifically source code management tools.
Checkout source locally, then tar/zip it up. Use scp or rsync to copy the tarball over. This is sometimes preferred for something like an Amazon EC2 deployment, where a compressed tarball can save time/bandwidth.
Checkout source locally, then rsync it over to the production box.
Packaging Tools
Use your OS's packaging system to generate packages containing the files for your app. Create a master package that has as dependencies the other packages you need. The RubyWorks system is an example of this, used to deploy a Rails stack and sample application. Then it's a matter of using apt, yum/rpm, Windows msi, or whatever to deploy a given version. Rollback involves uninstalling and reinstalling an old version.
General Tools Aimed at Installing Apps/Configs and Maintaining a Set of Systems
These tools do not specifically target the problem of deploying a web app, but rather the more general problem of deploying/maintaining Apps/Configs for a set of servers, or an entire company's workstations. They are aimed more at the system administrator than the web developer, though either can find them useful.
Cfengine is a tool in this category.
Puppet aims to improve on Cfengine. It's got a learning curve but many find it worth the time to figure out how to do the configs. Once you've got it going, each box checks the central server periodically and makes sure everything is up to date. If someone edits a file or changes a permission, this is detected and corrected. So, unlike the deployment tools above, Puppet not only puts files in the right place for you, it ensures they stay that way.
Chef is a little younger than Puppet with a similar approach.
Smartfrog is another tool in this category.
Ansible works with plain YAML files and does not require agents running on the servers it manages
For a comparison of these and many more tools in this category, see the Wikipedia article, Comparison of open source configuration management software.
Take a look at the cfengine tutorial to see if cfengine looks like the right tool for your situation. It may be a little too complicated for a small website, but if it is going to involve more computers and more configuration in the future, at some point you will end up using cfengine or something like that.
Create your own packages in the format your distribution uses, e.g. Debian packages (.deb). These can either be copied to each machine and installed manually, or you can set up your own repository, and add it to your list of sources.
Your packages should be set up so that the scripts they contain consult a configuration file, which is different on each host, depending on what scripts need to be run on each.
To tie it all together, you can create a meta package that just depends on each of the other packages you create. That way, when you set up a new server, you install that one meta package, and the other packages are brought in as dependencies.
Although this process sounds a bit complicated, if you have many scripts and many hosts to deploy them to, it can really pay off in the long run.
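For a rough idea of what creating such a package looks like by hand (the package name and layout are invented; a real package would carry more metadata and probably be built with proper tooling):
mkdir -p mytools-1.0/DEBIAN mytools-1.0/usr/local/bin
cp scripts/* mytools-1.0/usr/local/bin/
cat > mytools-1.0/DEBIAN/control <<'EOF'
Package: mytools
Version: 1.0
Architecture: all
Maintainer: you@example.com
Description: batch scripts and helper tools
EOF
dpkg-deb --build mytools-1.0          # produces mytools-1.0.deb
# install with dpkg -i, or publish it in your own apt repository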
I have to roll out PHP scripts and Apache configurations to several customers on a frequent basis. Since they all run Debian Linux, I've set up a Debian package repository on my server, and all the customer has to do is type apt-get upgrade and they get the latest version.
The first thing to do is get all these scripts into a source control repository (svn or git are good) so that you can track changes to these scripts over time.
If you are interested in Ruby, check out Capistrano; it is well suited to deploying things to multiple machines in a cluster and is fairly easy to set up. It can read files directly from your version control system.
Puppet is another tool that can be used in this situation. It is similar to cfengine - you create a model of the desired deployment and Puppet figures out how to get the environment into that state.

Deploying Perl to a shared-nothing cluster

Does anyone have suggestions for deployment methods for Perl modules to a shared-nothing cluster?
Our current method is very manual.
1. Take down half the cluster
2. Copy the Perl modules (CPAN-style modules) to the downed cluster members
3. ssh to each member and run perl Makefile.PL; make; make install for each module to be installed
4. Confirm the deployment
5. Bring the newly deployed cluster members into service, take the old cluster members out of service, and repeat steps 2 -> 4
This is obviously far from optimal. Does anyone have, or know of, a good tool chain for deploying Perl modules to a shared-nothing cluster?
Take one node offline, install Perl, and then use it to reimage the other nodes.
At least, that's how I imagine you'd want to install software in a shared-nothing cluster. Perl is just the application you happen to be installing.
Assuming all the machines are identical, you should be able to keep one canonical installation and use rsync or something to keep the others updated.
I have, in the past, developed a Perl program which used the Expect module (from CPAN) to automate basically the process you described, automatically sshing to each host, copying any necessary files, and performing the installations. Unfortunately, this was developed on-site for a client, so I do not have access to the code to share. If you're familiar with Expect, it shouldn't be too difficult to set up, though.
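Even without Expect, the same flow as a plain ssh/scp loop looks roughly like this (host names and the module tarball are placeholders; Expect earns its keep once interactive prompts are involved):
for host in node01 node02 node03; do
    scp Some-Module-1.23.tar.gz $host:/tmp/
    ssh $host 'cd /tmp && tar xzf Some-Module-1.23.tar.gz && cd Some-Module-1.23 && perl Makefile.PL && make && make install'
done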
We currently have a clustered Perl application that does data processing. We also have numerous CPAN modules and modules that we've developed that the software depends on. When you say 'shared nothing', I'm assuming you're referring to things like NFS mounts.
If the machines have identical configurations, then you may be able to build your entire application into a single directory structure (e.g. /opt/my-app) and tar it up, and that could become the only thing you need to push to the boxes.
As far as deploying it to the boxes, you might be able to use Capistrano. We developed a couple of our own cluster utilities that piggybacked off of ssh - I've released one form of that utility: parallel-jobs. Its README shows an example of executing multiple parallel ssh commands. It's a small step to extend that program to be able to know about your cluster and then be able to execute the same command across the cluster (as opposed to a series of different commands).
If you are using Debian or Ubuntu you could package your Perl modules - I have open-sourced some code to help with this: Perl module builder. It's still very rough but does work, and it can be made to work on your own code as well as CPAN modules; this then makes deployment much easier.
There is also a project to get Red Hat RPMs for all of CPAN; Dave Cross gave a talk, Perl in RPM-Land, which may be of use.
If you are on some other system which doesn't have packaging, then the rsync option (install on one machine and then rsync to the others) should work as well; note that you can mount a Windows share and rsync to it from Unix if needed.
Using a central manager like Puppet makes creating and maintaining machines in a cluster a lot easier, from installing code to managing users and email configuration. There is also a Perl project in the pipeline to do something similar, but it has not been made public yet.
Capistrano is a tool that allows you to run commands on a group of servers; it is perfectly suited to making your task considerably easier.
Further down the line of automation, but also complexity, is Puppet, which allows you to define a group of servers, give them roles, and then push out sets of code to every machine subscribing to a certain role.
I am not sure exactly what a shared-nothing cluster is, but if it uses some base *nix system like Fedora, Mandriva, or Ubuntu, many of the Perl modules are precompiled for specific architectures and you can easily run these.
If these systems are of the same arch, you can do as someone else said and just copy the compiled modules from system to system; just make sure you have all of the dependencies on the recipient system as well.