Chocolatey: Simple Automatic Upgrades of Local Folder Packages

I have a folder like c:\chocopkg where I put a couple of packages which I can't find on the official repo.
Creating nupkg archives was really simple and fun. By contrast, the Automatic Updater (AU) is too much for me: there is no simple cinst au; you have to clone a git repo, and even set up a new one, just for a local run.
What I need is very simple. I added a script, <package>\tools\chocolateyBeforeUpgrade.ps1, with a trivial Invoke-WebRequest and some regexes. It checks for new versions on the vendor's site and can update chocolateyInstall.ps1.
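A minimal sketch of what such a hook might look like (the vendor URL and the version regex here are hypothetical; the real ones depend on the vendor's download page):

    # chocolateyBeforeUpgrade.ps1 (sketch; vendor URL and regex are hypothetical)
    $page = Invoke-WebRequest -Uri 'https://vendor.example.com/download' -UseBasicParsing
    if ($page.Content -match 'MyTool-(\d+\.\d+\.\d+)\.zip') {
        $latest = $Matches[1]
        $installScript = Join-Path $PSScriptRoot 'chocolateyInstall.ps1'
        # rewrite the version pinned inside chocolateyInstall.ps1
        (Get-Content $installScript) -replace '\d+\.\d+\.\d+', $latest | Set-Content $installScript
    }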
My first question is: is there some config option that makes cup all run a script like this before it checks each package's status?
If this is not possible, it would also be simple to wrap cup in, say, a cup2 that checks for and runs automatic upgrades. But which file should this wrapper edit before handing control to the real cup?
The cup page just says it "upgrades a package or a list of packages", but I don't understand how. I can only speculate that it looks at the .nuspec version; however, on a local share that information isn't available without unzipping the .nupkg file, and for remote packages it would require a possibly large download.
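(Side note: a .nupkg is an ordinary zip archive, so on a local share the .nuspec version can be read in memory without unpacking anything to disk. A sketch, with a hypothetical file name:)

    # read the version straight out of a .nupkg (it is just a zip archive)
    Add-Type -AssemblyName System.IO.Compression.FileSystem
    $zip = [IO.Compression.ZipFile]::OpenRead('c:\chocopkg\mypkg.1.2.3.nupkg')  # hypothetical file
    try {
        $entry  = $zip.Entries | Where-Object Name -like '*.nuspec' | Select-Object -First 1
        $reader = New-Object IO.StreamReader($entry.Open())
        ([xml]$reader.ReadToEnd()).package.metadata.version
    } finally {
        $zip.Dispose()
    }

By convention the version is also embedded in the .nupkg file name itself, which is presumably what keeps scanning a plain folder source cheap.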

AU essentially does what you need, even if it is a little more setup and work. I know it may feel like too much to start with, but you can begin with just the files that run the updates.
What I need is very simple. I added a script, <package>\tools\chocolateyBeforeUpgrade.ps1, with a trivial Invoke-WebRequest and some regexes. It checks for new versions on the vendor's site and can update chocolateyInstall.ps1.
This isn't going to help with upgrades, as it is a chicken-and-egg scenario: you need the updated package first in order to upgrade a package. So putting something into beforeModify or the install script is only going to help you at installation time. BeforeModify only runs from the already-installed package on upgrade/uninstall, so unless a newer, higher-versioned package already exists somewhere, there is nothing for it to upgrade to.
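That suggests what an upgrade-producing wrapper actually has to do: run the vendor check, bump the version in the .nuspec (and in chocolateyInstall.ps1), re-pack, and only then call the real upgrade. A hedged sketch, reusing the folder layout and hook name from the question:

    # cup2.ps1 (sketch) -- produce newer .nupkg files first, then upgrade
    Get-ChildItem 'c:\chocopkg' -Directory | ForEach-Object {
        $hook = Join-Path $_.FullName 'tools\chocolateyBeforeUpgrade.ps1'
        if (Test-Path $hook) {
            & $hook   # must bump the <version> in the .nuspec as well as chocolateyInstall.ps1
        }
        $nuspec = Get-ChildItem $_.FullName -Filter '*.nuspec' | Select-Object -First 1
        choco pack $nuspec.FullName --outputdirectory 'c:\chocopkg'
    }
    choco upgrade all --source 'c:\chocopkg'

Without the version bump and the re-pack, choco upgrade sees no newer package and does nothing, which is exactly the chicken-and-egg problem described above.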

Related

How can I make a FindMyPackage.cmake module fall back to downloading?

I have a simple CMake find module I've written, for a library of mine used by other projects. It's pretty simplistic, with its full text available here. Mainly there's one find_path() and one find_library(), and then some variables are set.
Now, I want CMake, when trying to find my package, to fall back on:
git-cloning or downloading the package/library from its GitHub repository,
Unpacking the archive, if it was a download
Building the package, either by using the running CMake itself somehow (the package has its own CMakeLists.txt), or by running an arbitrary shell command in the directory into which the package was downloaded/cloned
The specifics of what happens post-download are less important to me than actually having a download fall-back.
How can I / how should I make this happen?
Notes:
Of course, if the download/git clone fails, then finding the package has failed.
No need to worry about specific versions at the repo, although you can if you want to.

How to get useful files from a github package?

I've been using GitHub for a while, but every now and then this problem pops up: from a GitHub project, how do I know which files/folders are actually what I need? (Generally, I don't need the files that are only used by developers.)
Take this project (https://github.com/mupchrch/split-diff) for example: I am trying to install this plugin into my Atom editor so I can compare two files, but very little is said about which files are to be installed/copied, how, and to where. This lack of information crops up in many, many GitHub projects I've stumbled upon. Some say npm install xxx, which I am OK with; some say nothing at all, like the example above. So I conclude I must have missed some very important piece of information about how to use a GitHub package, something that goes without saying.
Can anyone help me or give me a hint?
I noticed that there is a package.json file, which must indicate this package can be installed by npm. But I need more specific instructions:
Do I need to download all the files and folders? To where? And where do I launch npm install? Or, as far as I know, the Atom editor has its own install command (apm, is it?); where do I run this apm?
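For this particular example you don't need to pick files by hand at all: split-diff is an Atom package, and apm (Atom Package Manager, installed alongside Atom) fetches and installs it by the name in its package.json, from any shell:

    apm install split-diff

The package lands under ~/.atom/packages and Atom loads it from there; the repository layout itself only matters to the package's developers.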

When creating a NuGet PowerShell script, how can you tell the difference between a new install and upgrade?

I have an install.ps1 script in my NuGet package. This script runs both during a new install (after all the files have been copied) and during an upgrade.
I want to show a Getting Started page during a new install, but for an upgrade I want to show the Release Notes.
I found this great answer that explains how to open a URL, and it works well. But I am stumped trying to tell the difference between a new install and an upgrade.
The best solution I have come up with so far is to add a Release Notes link to the top of the Getting Started page, but that is something that could easily be missed by upgraders, and is an unwanted distraction for new installers.
I don't think it's possible to know whether the current operation is an install or an upgrade. When NuGet upgrades a package, it basically uninstalls the existing package and installs the new one.
I suppose you could do something in install.ps1 that "dirties" the project in some way on the first install and that you deliberately do not clean up in uninstall.ps1. Maybe insert a dummy file into the project (outside of the normal NuGet handling, so the file won't get uninstalled automatically), or add some other dummy element to the project file. Then, when you see those "leftovers" from a previous install, you will know that you are installing an upgrade.
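A sketch of that idea, using the standard parameters NuGet passes to install.ps1 and the $dte.ItemOperations.Navigate() call from the answer linked in the question; the marker file name and both URLs are hypothetical:

    # install.ps1 (sketch)
    param($installPath, $toolsPath, $package, $project)

    # The marker lives next to the project file, outside NuGet's file tracking,
    # and uninstall.ps1 deliberately never removes it.
    $marker = Join-Path (Split-Path $project.FullName) '.mypackage.installed'

    if (Test-Path $marker) {
        # leftovers from a previous install => this is an upgrade
        $dte.ItemOperations.Navigate('https://example.com/release-notes')
    } else {
        New-Item -ItemType File -Path $marker | Out-Null
        $dte.ItemOperations.Navigate('https://example.com/getting-started')
    }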

What is a good way to deploy a Perl application?

I posted this question looking for something similar to Buildout, but for Perl. I think Shipwright is what I'm looking for, but I'm not really sure. I've played around with it: I created a project, imported all of my source and dependencies, and exported everything to a vessel; then the documentation sort of just stopped. What do I do with a Shipwright vessel? Do I do my actual development work in the vessel, or in the shipyard? I'm assuming the vessel is only for deployment, but how do I actually deploy a vessel to a web server (say I'm running Linux, Apache, and just straight CGI)?
Is Shipwright the right thing for what I'm trying to accomplish, or is there something else that would be more appropriate? Ideally, I could use Shipwright the way I use Buildout: to create a nice isolated environment for development, and also, when deploying to live servers, to manage all of my application's dependencies.
EDIT: Here are the highlights of what I can do with Buildout that I would like to be able to do in Perl.
With Buildout, I have a file in my codebase that lists dependencies (which for Perl would be either CPAN modules or other source repositories). I can run a bootstrap script that fetches all of those dependencies and drops them into a directory within my project, WITHOUT installing them at the system level. Buildout also creates utility scripts, which can do anything you want (run tests, other command-line tools, anything really), and those scripts explicitly add the dependencies to the path so that, while my scripts are running, all of my dependencies are available for import.
What this does very well is let me manage my dependencies without ever installing anything at the system level, which makes switching from one version to another very easy. It also allows me to have multiple Buildout projects running on the same system using different versions of the same module. Finally, one huge benefit is that with Buildout's directory structure I can just commit the dependencies to source control, so deploying to a new machine is simply a checkout: all of my dependencies are already satisfied without touching anything installed at the system level.
I don't think you'll find anything exactly like Buildout in Perl, but you could put together a couple of things that would do the trick.
You could use a standard Build.PL script with Module::Build to manage your dependencies and provide commands for running tests, etc.
Then you could use cpanminus to install those dependencies into a local (non-system) directory.
Then you might be able to use Shipwright to do the bundling and deployment of the project with these now-local dependencies.
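As a rough sketch of how those pieces combine from the project root (assuming App::cpanminus is installed and the project ships a Build.PL):

    perl Build.PL
    cpanm --local-lib-contained local --installdeps .   # deps land in ./local, not the system perl
    perl -Ilocal/lib/perl5 Build test                    # run tests against the bundled deps

Committing ./local to source control would then give you something close to the checkout-and-run property you describe with Buildout.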

Best practices for deploying tools & scripts to production?

I've got a number of batch processes that run behind the scenes for a Linux/PHP website. They are starting to grow in number and complexity, so I want to bring a small amount of process to bear on them.
My source tree has a bunch of cpp files and scripts, organized with development but not deployment in mind. After compiling all the executables, I need to put various scripts and binaries on a cluster of machines. Different machines need different executables, scripts, and config files for their batch processes. I also have a few tools that I've written that belong on every machine. At the moment, this deployment process is manual and error prone.
I'm guessing I'm just going to end up with a script that runs at the root of the source tree and builds a smaller tree of everything necessary for any of the machines. Then, I'll just rsync that to the appropriate machines. But I'm curious how other people are managing this type of problem. Any ideas?
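For what it's worth, a sketch of that manifest-driven staging approach (the manifest format, paths, and host names here are all hypothetical):

    # stage.ps1 (sketch) -- build a per-host tree, then rsync it out
    # assumes deploy/manifest.csv with columns: host,source,dest
    Import-Csv 'deploy/manifest.csv' | Group-Object host | ForEach-Object {
        $stage = Join-Path 'stage' $_.Name
        foreach ($row in $_.Group) {
            $target = Join-Path $stage $row.dest
            New-Item -ItemType Directory -Force -Path (Split-Path $target) | Out-Null
            Copy-Item $row.source $target -Recurse -Force
        }
        & rsync -az --delete "$stage/" "$($_.Name):/opt/batch/"   # hypothetical target path
    }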
There are several categories of tools here. Some people use a combination of tools from these categories; I sometimes use, for example, both Puppet and Capistrano. See Puppet or Capistrano - Use the Right Tool for the Job for a discussion.
Scripting Tools aimed at Deploying an Application:
The general pattern with tools in this category is that you create a script and/or config file, often with sets of commands similar to a Makefile, and the tool will ssh over to your production box, do a checkout of your source, and run whatever other steps are necessary.
Tools in this area usually have facilities for rollback to a previous version. So they'll check out your source to a new directory under releases/, and create a symbolic link named "current" pointing at that directory if all goes well. If there's a problem, you can revert to the previous version by running a command that removes "current" and points it at the previous releases/ directory.
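The mechanism itself is tiny. A sketch of the deploy and rollback steps (PowerShell syntax for consistency with the snippets above, with a hypothetical repo URL; on a Linux box the same idea is a checkout plus ln -sfn):

    # deploy: check out into a fresh timestamped release dir, then repoint 'current'
    $release = Join-Path 'releases' (Get-Date -Format 'yyyyMMddHHmmss')
    git clone --depth 1 'https://example.com/app.git' $release
    Remove-Item 'current' -ErrorAction SilentlyContinue
    New-Item -ItemType SymbolicLink -Path 'current' -Target $release | Out-Null   # may need elevation on Windows

    # rollback: repoint 'current' at the previous release
    $previous = (Get-ChildItem 'releases' | Sort-Object Name | Select-Object -Last 2)[0]
    Remove-Item 'current'
    New-Item -ItemType SymbolicLink -Path 'current' -Target $previous.FullName | Out-Null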
Capistrano comes from the Rails community but is general-purpose. Users of Capistrano may be interested in deprec, a set of deployment recipes for Capistrano.
Vlad the Deployer is an alternative to Capistrano, again from the Rails community.
Write your own shell script or Makefile.
Options for getting the files to the production box:
Direct checkout from source. Not always possible if your production boxes lack development tools, specifically source code management tools.
Checkout source locally, then tar/zip it up. Use scp or rsync to copy the tarball over. This is sometimes preferred for something like an Amazon EC2 deployment, where a compressed tarball can save time/bandwidth.
Checkout source locally, then rsync it over to the production box.
Packaging Tools
Use your OS's packaging system to generate packages containing the files for your app. Create a master package that has as dependencies the other packages you need. The RubyWorks system is an example of this, used to deploy a Rails stack and sample application. Then it's a matter of using apt, yum/rpm, Windows msi, or whatever to deploy a given version. Rollback involves uninstalling and reinstalling an old version.
General Tools Aimed at Installing Apps/Configs and Maintaining a Set of Systems
These tools do not specifically target the problem of deploying a web app, but rather the more general problem of deploying/maintaining Apps/Configs for a set of servers, or an entire company's workstations. They are aimed more at the system administrator than the web developer, though either can find them useful.
Cfengine is a tool in this category.
Puppet aims to improve on Cfengine. It's got a learning curve but many find it worth the time to figure out how to do the configs. Once you've got it going, each box checks the central server periodically and makes sure everything is up to date. If someone edits a file or changes a permission, this is detected and corrected. So, unlike the deployment tools above, Puppet not only puts files in the right place for you, it ensures they stay that way.
Chef is a little younger than Puppet with a similar approach.
Smartfrog is another tool in this category.
Ansible works with plain YAML files and does not require agents running on the servers it manages.
For a comparison of these and many more tools in this category, see the Wikipedia article, Comparison of open source configuration management software.
Take a look at the cfengine tutorial to see if cfengine looks like the right tool for your situation. It may be a little too complicated for a small website, but if things are going to involve more computers and more configuration in the future, at some point you will end up using cfengine or something like it.
Create your own packages in the format your distribution uses, e.g. Debian packages (.deb). These can either be copied to each machine and installed manually, or you can set up your own repository, and add it to your list of sources.
Your packages should be set up so that the scripts they contain consult a configuration file, which is different on each host, depending on what scripts need to be run on each.
To tie it all together, you can create a meta package that just depends on each of the other packages you create. That way, when you set up a new server, you install that one meta package, and the other packages are brought in as dependencies.
Although this process sounds a bit complicated, if you have many scripts and many hosts to deploy them to, it can really pay off in the long run.
I have to roll out PHP scripts and Apache configurations to several customers on a frequent basis. Since they all run Debian Linux, I've set up a Debian package repository on my server, and all the customer has to do is type apt-get upgrade to get the latest version.
The first thing to do is get all these scripts into a source control repository (svn or git are good) so that you can track changes to these scripts over time.
If you are interested in Ruby, check out Capistrano; it is well suited to deploying things to multiple machines in a cluster and is fairly easy to set up. It can read files directly from your version control system.
Puppet is another tool that can be used in this situation. It is similar to cfengine: you create a model of the desired deployment, and Puppet figures out how to get the environment into that state.