Remove obviously unwanted cookbooks in chef - chef-solo

I have started using Chef, but most of the time I am going to use it on a CentOS VPS. This means that community cookbooks such as windows and iis are of no use to me now or in the foreseeable future. I want to be able to customize cookbooks that depend on these cookbooks and their recipes. Is that possible? And if so, can you outline the process?
I understand that these cookbooks are meant to be reusable, but I wish to customize my chef-repo for a very specific need.
Hope there is an easy solution out there! Thanks

Of course you can customize it. Just delete the unneeded cookbooks, and write new ones when the existing ones don't fit your needs.
Note:
What you must keep in your cookbooks directory are:
cookbooks you call in your roles (or Vagrantfile, if you use Vagrant), and
cookbooks that the previously mentioned ones depend on.
Now, how to find out what the dependencies are:
The metadata.rb file within each cookbook contains a dependencies section.
Recipes can also be pulled in via "include_recipe" calls. Sometimes the metadata is not up to date, so you'll have to track dependencies this way as well (see the sketch below).
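As a quick illustration of the two places to look (the cookbook and recipe names here are just placeholders):
# metadata.rb of a hypothetical "myapp" cookbook - these are the lines to scan when pruning
name    "myapp"
depends "apache2"
depends "mysql"
# recipes/default.rb - dependencies can also hide in include_recipe calls
include_recipe "apache2"
include_recipe "mysql::server"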
If an Opscode cookbook doesn't suit your needs (even after changing the attributes), search GitHub for other sources; there's an entire ecosystem around this. If none of that works out, write your own cookbooks. Use the "knife" tool to create an empty cookbook (template). The command would be:
knife cookbook create COOKBOOK_NAME [--cookbook-path PATH_TO_YOUR_COOKBOOK_DIRECTORY]
Besides that, rely on the Opscode manuals. There's plenty of stuff there ;)

Related

How to find a class in a list of Nuget packages

My team is using more and more NuGet packages as a way to break the system into smaller pieces and share things between parts. We have adopted a sort of SRP principle for packaging, creating small and hopefully cohesive packages that do just one thing (logging, auditing, security stuff, etc).
Ideally they should be so cohesive and self-contained that it would be straightforward to know which package will contain what you need. However, we are not there yet, and sometimes it is difficult to know which package you should add to access some functionality.
My question is: is there any way to publish and navigate package content information? Like, for instance, in MSDN you can see what assembly contains a class. Would it be possible to know something like that, at the package level?
Thanks.
It's a very localised version, but there is a package searcher for the ASP.NET 5 packages hosted on NuGet. It might be possible to host a version that looks at a wider scope at some point.
https://packagesearch.azurewebsites.net/
The closest functionality I can think of is implemented in ReSharper. However, it can only search the packages on nuget.org (closed issue on GitHub). Since packages don't expose type info, JetBrains built a custom index, and that's the only data source it can query.

perl: new cpan module maker? local configuration text files and executables, too?

I am writing a Perl program that I want to share with others, eventually via CPAN. It's getting to the point where I should start thinking about this on a bigger scale.
A decade ago, I used the h2xs package maker once. Is this still the most recommended way to get started? There used to be a couple of alternatives. Because I am starting from scratch with very little recollection, anything simple will do at this point.
I need to read a few long text files (not Perl modules) for configuration. Where do I put them and how do I access them, no matter where the module is installed? (FindBin?) __DATA__ is inconvenient.
I need to provide an executable (Linux and OS X). Can putting an executable into the user's PATH be part of the module installation? (How?)
I would like to be able to continue developing it, run it for test purposes, bump to a new version, repack it, and reupload it easily.
Before uploading to CPAN, can I share a CPAN bundle with downloaders and testers for easy local installation?
# cpan < mybundle.cpanbundle
advice appreciated.
regards,
/iaw
If anything I say conflicts with Andy Lester, listen to him instead. He knows more than I ever will.
Module::Starter is a good, simple way to generate module scaffolding. My take is that it's been the default for this sort of thing for a few years now.
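For instance, a minimal run looks something like this (the module name, author, and email are placeholders); it generates the lib/ and t/ skeleton plus the build files for you:
module-starter --module=My::App --author="Your Name" --email=you@example.com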
For configuration/support files, I think you probably want File::ShareDir. Might be worth considering Data::Section if it's just a matter of needing multiple __DATA__ sections though.
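A rough sketch of the File::ShareDir approach (dist and file names are made up): ship the file in the distribution's share directory (the build tool has to be told to install it), then look it up at runtime:
use File::ShareDir qw(dist_file);
# 'My-App' and 'big_config.txt' are placeholder names
my $config = dist_file('My-App', 'big_config.txt');   # resolves to wherever the dist was installed
open my $fh, '<', $config or die "Can't open $config: $!";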
You can certainly put scripts in the bin subdirectory of your distribution; the build tool will put them in the right place at install time.
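For example, with ExtUtils::MakeMaker the script just needs to be listed in EXE_FILES ('bin/myapp' and the dist name are placeholders); Module::Build has an equivalent script_files option:
use ExtUtils::MakeMaker;
WriteMakefile(
    NAME      => 'My::App',
    VERSION   => '0.01',
    EXE_FILES => ['bin/myapp'],   # copied into the user's PATH at install time
);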
A build tool will take care of the work-flow you describe.
Bundles are something different. You make a distribution and share the tarball/archive.
If you set up PERL5LIB appropriately, then you can repeat make test, make install, make dist to your heart's content. For development/sharing purposes a lot of projects do their work on GitHub or similar - that makes it easy to share. They have private accounts for business purposes too. Very useful if you want to rewind and see where/when a problem was introduced.
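The edit-test-release loop then boils down to something like this (paths and dist name are examples):
export PERL5LIB=$HOME/src/My-App/lib   # so scripts you run by hand see the dev copy
perl Makefile.PL && make
make test                              # run the test suite
make dist                              # roll up My-App-0.01.tar.gz for sharing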
If you get a copy of cpanm (simple to install, fairly lightweight) then it can install from a tar.gz file or even direct from a git repository. You can also tell it to install to a local dir (local::lib compatible - another utility that's very useful).
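A few examples of what that looks like (the dist name, repository URL, and target directory are placeholders):
cpanm My-App-0.01.tar.gz                # install from a local tarball
cpanm git://github.com/you/my-app.git   # or straight from a git repository
cpanm -l ~/perl5 My-App-0.01.tar.gz     # install into a local::lib-style directory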
Hopefully that's reasonably up-to-date as of 2014. You may see Dist::Zilla mentioned for module development. My understanding is that it's most useful for those with a large family of CPAN distributions to manage. Oh - if you (or other readers) aren't aware of them, do check out autodie and Try::Tiny around errors and exceptions, Moose (for a full-featured object-oriented framework) and Moo (for a smaller lightweight version).
I think that advice is all reasonably non-controversial. I find cpanm to be much more pleasant than the "full" cpan client, and Moo seems pretty popular nowadays too.
Take a look at Module::Starter and its much more capable (and complex) successor Dist::Zilla.
Whatever you do, don't use h2xs. Module::Starter was created specifically because h2xs was such an inappropriate tool for creating distributions.

How do I add custom module distributions to my local CPAN mirror?

I'm getting ready to set up a full CPAN mirror for internal use at my company. However, we have several internal Module::Build based distributions that I'd like to make available to people from this mirror. These distributions should ONLY be available from our mirror; they are internal libraries only. Essentially, once people have set up their CPAN config file to use the 'cpan.mycompany.com' mirror, I'd like them to be able to do a
cpan install MyCompany::Bundle
cpan install MyCompany::Other::Module
on their command line to install any number of internal, custom module distributions. Ideally, as versions of these module distributions are incremented, all of those versions would be indexed by our internal CPAN mirror and made available, just as previous versions of CPAN modules are made available.
After the initial question, I was able to come up with some other possibilities.
There's CPAN::Inject, but it looks like I can't use it to get a cpan install My::Module syntax.
Then there's MyCPAN::App::DPAN, which also looks interesting, and almost looks like what I need. Does anyone have experience with this tool?
Another one I just came across was CPAN::Site. This seems to also be able to set up a custom CPAN distribution. Any thoughts on this tool?
If you're using CPAN::Mini to create your mirror, then you use CPAN::Mini::Inject to add your own modules to it.
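For example, with its mcpani command-line tool the flow is roughly this (the author ID, module name, and file name are placeholders):
mcpani --add --module MyCompany::Bundle --authorid MYCO --modversion 0.01 --file MyCompany-Bundle-0.01.tar.gz
mcpani --inject    # push everything queued above into the local mirror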
To do this with a full CPAN mirror, CPAN::Site covers this nicely. It lets you make a mirror, and then inject your own libraries right into it, complete with tools to help you manage setting it up and keeping it up to date.
I would like to second the suggestion for CPAN::Site - the author is responsive and will gladly apply fixes if you ask or file a bug report on the CPAN RT.
I've been using it recently to make a "micro-cpan" containing only what a particular application needs and nothing else, along with cpanminus to make installation in any environment dead-simple. However, don't ask me for my solution - miyagawa++ was at YAPC::NA this year and showed off "Carton" which does all that and more, way better than my hacky stuff.
CPAN::Mini::Inject is perhaps a bit too "low-level" in that it requires that you specify a whole lot of information about each dist up-front before injecting into the minicpan - I feel that just about all of that should be auto-detected by analyzing the dist, for example by using CPAN::ParseDistribution.
MyCPAN::App::DPAN is actually quite cool, but it has a bit of a learning curve and may not be the right tool for the job. I've also found it has a tendency to choke on some badly-formed dists, and detecting that involves trawling through the logs (as far as I can tell - maybe there's a better way to do it). However, I'd highly suggest checking it out.
If you're still interested in MyCPAN::App::DPAN, I've just posted how I use it to create a mini CPAN-like directory structure, in (one of) the answers to this question:
Internal CPAN - what module
(I don't know if it's OK to link to my own answer here. Let me know if it isn't.)

What are the strengths/weaknesses of ShipIt vs Dist::Zilla?

I started using Dist::Zilla several months ago. However, at YAPC::NA someone mentioned that they use ShipIt instead. Then today I noticed a .shipit file in miyagawa's cpanminus directory on github, so I decided to look into it some more...
My initial impression is that ShipIt has a subset of what is available with Dist::Zilla, but I don't want to jump to conclusions. So, for those who have had experience with both, what are the strengths/weaknesses of ShipIt vs Dist::Zilla?
crossposted at perlmonks
I'm the author of Dist::Zilla.
I evaluated ShipIt pretty extensively before choosing to go ahead and write Dist::Zilla, and initially they covered almost exactly the same problem space: doing all the boring grunt work of building and uploading a CPAN distribution. All of the features that Dist::Zilla now has beyond ShipIt are later additions, more or less.
If you only need the features of ShipIt, I still advise you to strongly consider Dist::Zilla, for one very simple reason: hackability. If I had been able to not write something new, I would've used ShipIt, but I found it to be underdocumented and difficult to extend. Its plugins were not generic enough and the core behavior made too many assumptions about how you'd like to work.
Dist::Zilla was inspired specifically by this problem: it turned everything into a plugin, and every plugin was given a very, very small interface so that its assumptions would be forcibly limited.
One benefit of ShipIt over Dist::Zilla is that ShipIt has (to the best of my knowledge) no plugins that will alter the way you actually write your code. This means your documentation will still look the same, you will still have a Makefile.PL, and so on. Some hackers don't like that so many DZ-based dists fundamentally change the assumptions of how to test and build CPAN code from its source repository. ShipIt will never change that.
It's possible to avoid using any such plugins with Dist::Zilla, but in general my experience is that people do use them, almost always, in one form or another.
As far as I can tell, my initial impressions were correct.
ShipIt provides functionality for releasing distributions:
keeping track of version numbers
integrating with version control
uploading to CPAN
displaying the changelog file in an editor so that you can edit it before release.
Dist::Zilla, by default, provides the ability to upload distributions to CPAN with a single command (i.e. dzil release). Dist::Zilla also has functionality for creating new distributions (i.e. dzil new My::New::Module). It also automatically generates so many of the files that I used to have to maintain by hand.
Using plugins, Dist::Zilla seems able to provide most, if not all, of the functionality available with ShipIt. It is also relatively easy to add brand new features using plugins.
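To give a feel for it, a minimal dist.ini might look something like this (the name, author, and plugin choices are purely illustrative):
name             = My-New-Module
author           = A. U. Thor <author@example.com>
license          = Perl_5
copyright_holder = A. U. Thor
; the stock bundle: gather files, write the metadata, build, test and upload
[@Basic]
; insert a $VERSION into each module at build time
[PkgVersion]
With that in place, dzil build and dzil release do the rest.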

Version control of deliverables

We need to regularly synchronize many dozens of binary files (project executables and DLLs) between many developers at several different locations, so that every developer has an up-to-date environment to build and test in. Due to the nature of the project, updates must be done often and on demand (overnight updates are not sufficient). This is not pretty, but we are stuck with it for a time.
We settled on using a regular version (source) control system: put everything into it as binary files, get the latest before testing, and check in the updated DLLs after testing.
It works fine, but a version control client has a lot of features which don't make sense for us and people occasionally get confused.
Are there any tools better suited for the task? Or may be a completely different approach?
Update:
I need to clarify that it's not a tightly integrated project - it's more like an extensible system with a heap of "plugins", including third-party ones. We need to make sure those plugin modules work nicely with recent versions of each other and of the core. A centralised build, as was suggested, was considered initially, but it's not an option.
I'd probably take a look at rsync.
Just create a .CMD file that contains the call to rsync with all the correct parameters and let people call that. rsync is very smart in deciding what part of files need to be transferred, so it'll be very fast even when large files are involved.
What rsync doesn't do though is conflict resolution (or even detection), but in the scenario you described it's more like reading from a central place which is what rsync is designed to handle.
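The .CMD wrapper would essentially boil down to one call like this (the host and paths are placeholders; on Windows you'd typically use a cygwin or cwRsync build of rsync):
rsync -avz --delete buildserver:/srv/deliverables/ /cygdrive/c/project/bin/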
Another option is unison
You should look into continuous integration and having some kind of centralised build process. I can only imagine the kind of hell you're going through with your current approach.
Obviously that doesn't help with the keeping your local files in sync, but I think you have bigger problems with your process.
Building the project should be a centralized process in order to allow for better control, or your solution will become chaos in the long run. Anyway, here is what I'd do:
Create the usual repositories for source files, resources, documentation, etc. for each project.
Create a repository for resources. It will hold the latest binary versions for each project as well as any required resources, files, etc. Keep a good folder structure for each project so developers can "reference" the files directly.
Create a repository for final builds which will hold the actual stable release. This will get the stable files, generated in an automatic way (if possible) from the checked-in sources. This will hold the real product, the real version for integration testing and so on.
While far from being perfect, you'll be able to define well-established protocols: check in your latest DLL here, generate the "real" version from the latest source there (see the illustrative layout below).
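An illustrative layout (all names made up) for the three kinds of repositories described above:
repos/
  projectA/        # usual per-project repository: sources, docs, project resources
  resources/       # latest built binaries plus required files, one folder per project
    projectA/
    projectB/
  releases/        # stable, integration-tested builds only
    1.0/
    1.1/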
What about embedding a 'what' string in the executables and libraries? Then you can synchronise the desired list of versions against a manifest.
We tend to use CVS id strings as a part of the what string.
const char cvsid[] = "@(#)INETOPS_filter_ip_$Revision: 1.9 $";
Entering the command
what filter_ip | grep INETOPS
returns
INETOPS_filter_ip_$Revision: 1.9 $
We do this for all deliverables so we can see if the versions in a bundle of libraries and executables match the list in an associated manifest.
HTH.
cheers,
Rob
Subversion handles binary files really well, is pretty fast, and scriptable. VisualSVN and TortoiseSVN make dealing with Subversion very easy too.
You could set up a folder that's checked out from Subversion with all your binary files (that all developers can commit to and update from), then just type "svn update" at the command line, or use TortoiseSVN: right-click on the folder, click "SVN Update" and it'll update all the files and tell you what's changed.
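The day-to-day loop described in the question then reduces to something like this (the file name and message are placeholders):
svn update                                                      # pull the latest binaries before building and testing
svn commit -m "FooPlugin.dll rebuilt and tested against core"   # publish your updated DLL after testing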