How can I deploy Kooboo modules non-interactively?

I am currently developing a number of Kooboo modules which receive frequent updates. The modules are built automatically using a CI server.
Is there a way to deploy the modules to a running Kooboo instance after each build? Since there are a large number of modules, with possibly several new versions per day, I would really like to avoid having to upload them manually every time.

Unfortunately, it seems that Kooboo does not provide an API to install modules. The only solution I could find was to copy the files manually (via a custom post-build event) to the Kooboo directories.
Here is what you need to do (all paths are relative to the Kooboo root directory):
1. Unzip the module ZIP file into Areas/<YourAreaName>.
2. Copy the contents of Areas/<YourAreaName>/bin to bin/.
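Automated as a CI post-build step, this might look like the following PowerShell sketch (the Kooboo root, area name, and ZIP path are all assumptions you would adapt):

# Post-build deployment sketch; every path here is hypothetical
$kooboo = 'C:\inetpub\Kooboo'          # assumed Kooboo root directory
$area   = 'MyModule'                   # assumed area/module name
$zip    = "bin\Release\$area.zip"      # assumed CI build artifact

# Step 1: unzip the module into Areas/<YourAreaName>
Expand-Archive -Path $zip -DestinationPath "$kooboo\Areas\$area" -Force

# Step 2: copy the module's binaries into the Kooboo bin/ directory
Copy-Item -Path "$kooboo\Areas\$area\bin\*" -Destination "$kooboo\bin\" -Recurse -Force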
Once you have done this, the module should be deployed to Kooboo. Note that this does not run the install events and views specified in the module.

Related

PowerShell Module Deployment Duplication

I am using Azure DevOps to deploy PowerShell modules to a server. The release task deploys the modules to the directory C:\Windows\System32\WindowsPowerShell\v1.0\Modules\. Once the modules are deployed to this folder, I can use them successfully.
If I modify one of the modules and re-release it, the file in C:\Windows\System32\WindowsPowerShell\v1.0\Modules\ gets updated; however, the old version of the module is still used when running from a batch file via pwsh.
I discovered that the module file also exists in the following paths:
C:\Program Files\PowerShell\Modules\
C:\Program Files\PowerShell\6\Modules\
When the new version is deployed via Azure DevOps, the old versions in the two directories above are not updated. Manually updating the module in those locations fixes the problem.
Why is the module file being copied into those two additional paths?
Should those copies be overwritten when a new version of the module is deployed?
What is the correct way of deploying a module in this scenario?
PowerShell uses several paths to load modules. Use $env:PSModulePath -split ";" to see which paths are being searched.
The paths differ in user scope and intended usage (e.g. custom modules vs. official Windows modules).
By default, PowerShell loads the highest version of each module it finds across all of those paths. The old version is probably being run because your re-deployment does not update the module version in the module manifest; if PowerShell sees two copies with the "same" version, it picks one based on PSModulePath order.
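For example (MyModule is a placeholder name):

# List the module search paths in resolution order
$env:PSModulePath -split ';'

# List every installed copy of a module, with its version and location
Get-Module -ListAvailable MyModule | Select-Object Name, Version, Path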
Take a look at this awesome post for more details: Everything you wanted to know about PowerShell's Module Path
Now to your questions.
Why is the module file being copied into those two additional paths?
This is most likely caused by server configuration or by the deployment script you are using.
Should those copies be overwritten when a new version of the module is deployed?
Not necessarily, as long as versions are maintained correctly. The post linked above explains how to check the versions of each module.
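To make the deployed copy win version resolution, bump the manifest version as part of each release, for example (the manifest path and version number are assumptions):

# Raise the module version in the manifest before publishing
Update-ModuleManifest -Path 'C:\build\MyModule\MyModule.psd1' -ModuleVersion '1.2.0'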

How to make a Dist::Zilla based Perl module (or app) install files into /etc/?

I maintain multiple Perl-written (unix-ish) applications whose current installation process consists of a manually written Makefile that installs configuration files into /etc/.
I'd really like to switch their development to Dist::Zilla, but so far I haven't found any Dist::Zilla plugin or feature that allows me to put given files into /etc/ when make install (or ./Build install, in case of using Module::Build instead of ExtUtils::MakeMaker) is run by the local administrator installing my application.
With pure ExtUtils::MakeMaker, I could define additional make targets in MY::postamble and then let the install target depend on one of them via the depend { install => … } attribute. Doing something similar via dzil build would probably suffice, but I'd appreciate a more obvious way.
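For reference, that ExtUtils::MakeMaker approach might look roughly like this (the distribution name and file names are purely illustrative):

# Makefile.PL sketch: define an extra make target and make 'install' depend on it
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME    => 'My::App',                      # illustrative distribution name
    VERSION => '0.01',
    depend  => { install => 'install_etc' },   # 'make install' now runs install_etc
);

# postamble() output is appended verbatim to the generated Makefile
sub MY::postamble {
    return "install_etc:\n\tcp etc/myapp.conf /etc/myapp.conf\n";
}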
One orthogonal approach would be to make the application not require the files under /etc/ to exist, but that seems like too large a change to the actual code when, for now, I only want to change the build system.
For the curious: the two applications I currently have in mind for switching to Dist::Zilla are xen-tools and unburden-home-dir.
The best thing to do is to avoid installing files into /etc from any Perl distribution. You cannot ensure that the cpan client (or the installing user) has permissions to install there, and there can be multiple Perls installed on a system, so each one of them would clobber the /etc files of another install. You can't really prevent the file from being overwritten by a subsequent install, so you shouldn't put config data there that you don't want to lose.
You could put the config file in /etc/, if the application knows to look for it there, but you should allow for that path to be customized (say on a test system, look for the file in the local directory, or in a user's home directory).
For installing read-only module-specific data, the best practice in Perl is to install into a Perl-install-specific location, and the module to do that is File::ShareDir::Install. You can use it from Dist::Zilla using the [ShareDir] plugin, Dist::Zilla::Plugin::ShareDir. It is even included in the [@Basic] plugin bundle, so if you use [@Basic] in your dist.ini, you don't need to do anything at all, other than drop your data files into the share/ directory in your distribution repository.
To access the contents of the sharedir from code, use File::ShareDir.
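A minimal sketch (the distribution and file names are assumptions):

# Locate a file that was installed from the distribution's share/ directory
use File::ShareDir qw(dist_file);
my $config_path = dist_file('My-App', 'config.sample');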
For porting a complex module installer to Dist::Zilla, I recommend my plugins MakeMaker::Custom or ModuleBuild::Custom, depending on which installer you prefer. These allow you to keep your existing Makefile.PL or Build.PL and just have Dist::Zilla plug in necessary bits like the dependencies.

Version controlled South migrations in virtualenv

I have a Django site placed in folder site/. It's under version control. I use South for schema and data migrations for my applications. Site-specific applications are under folder site/ so they are all version-controlled along with their migrations.
I manage a virtualenv to keep third-party components dry and safe. I install packages via PyPI. The list of installed packages is frozen in requirements.txt so they can easily be installed in another environment. The virtualenv is not under VCS; I think it is good if the virtualenv can be easily deleted and reconstructed at any time. If I need to test my site using another version of the Python interpreter, for instance, I simply activate another virtualenv.
I'd like to use South for third-party packages too, though. Here comes the problem: migration scripts are stored in the application's folder, so they are outside of my site's repository. But I want the migration scripts to be under version control so I can run them on different stages as well.
I don't want to version control the whole virtualenv, just the migration scripts for third-party applications. How can I resolve this conflict? Is there any misconception in my scenario?
The SOUTH_MIGRATION_MODULES setting allows you to put migration modules for specified apps wherever you want them (i.e. inside your project tree).
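A sketch for settings.py (the app label and module path are illustrative):

# Keep migrations for a third-party app inside your own repository
SOUTH_MIGRATION_MODULES = {
    'some_third_party_app': 'mysite.thirdparty_migrations.some_third_party_app',
}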
I think it depends a little bit on your version control system. I recommend using a sparse tree, one that only manages the migration folders of the various packages. Here I see two alternatives:
Make a truly sparse tree for all packages, one that you check out before creating the virtualenv. Then populate the virtualenv, putting stuff into the existing folders.
Collect all migrations into a separate repository, with a folder per project/external dependency. Check this out into the virtualenv and create symlinks linking each project to its migration folder (sketched below).
In either case, I believe you can arrange for the migrations to exist as a separate project, so you can install it with the same process as you install everything else (easy_install/pip/whatever).
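The second alternative might look roughly like this (shown with git, but adapt to your VCS; the repository URL and paths are assumptions):

# Clone the migrations-only repository, then link each package's
# migrations directory to the version-controlled copy
git clone git@example.com:migrations.git ~/migrations
ln -sfn ~/migrations/some_third_party_app \
    "$VIRTUAL_ENV/lib/python2.7/site-packages/some_third_party_app/migrations"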

What is a good way to deploy a Perl application?

I posted this question looking for something similar to Buildout for Perl. I think Shipwright is what I'm looking for, but I'm not really sure. I've played around with it: I created a project, imported all of my source and dependencies, and exported everything to a vessel, and then the documentation sort of just stopped. What do I do with a shipyard vessel? Do I do my actual development work in the vessel, or do I do my development in the shipyard? I'm assuming that the vessel is only for deployment, but how do I actually deploy a vessel to a web server (say I'm using Linux, Apache and just running straight CGI)?
Is Shipwright the right thing for what I'm trying to accomplish or is there something else that would be more appropriate? Ideally I could use Shipwright similar to how I use Buildout. I use Buildout to create a nice isolated environment for my development, and also I use Buildout when deploying to live servers to manage all of my application's dependencies.
EDIT: Here are the highlights of what I can do with Buildout that I would like to be able to do in Perl.
With Buildout, I have a file in my codebase that lists dependencies (which for Perl would either be CPAN modules or other source repositories). I can run a bootstrap script that will fetch all of those dependencies and drop them into a directory within my project and NOT install them at a system level. Buildout also creates utility scripts which can do anything you want (run tests, other command line tools, anything really) and those scripts explicitly add the dependencies to the path so that as my scripts are running all of my dependencies are available to be imported.
What this really does very well is that it allows me to manage my dependencies without having to ever install anything at a system level. Which makes changing from one version to another very easy. Also, it allows me to have multiple Buildout projects running on the same system using different versions of the same module. Finally, one huge benefit is that with Buildout's directory structure, I can just commit the dependencies to source control and to deploy to a new machine I just need to do a checkout and all of my dependencies are already satisfied without having to touch anything installed at a system level.
I don't think you'll find anything exactly like Buildout in Perl, but you could put together a couple of things that would do the trick.
You could use a standard Build.PL script with Module::Build to manage your dependencies and provide commands to run tests, etc.
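A minimal Build.PL sketch (the module name and dependencies are illustrative):

# Build.PL: declare dependencies so they can be resolved automatically
use Module::Build;

Module::Build->new(
    module_name => 'My::App',
    requires    => {
        'DBI'   => '1.600',
        'Moose' => '0',
    },
)->create_build_script;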
Then you could use cpanminus to do the installation of those dependencies into a local (non-system) directory.
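With the dependencies declared, cpanminus can install them into a project-local directory, roughly like this:

# Install declared dependencies into ./local instead of the system perl
cpanm --local-lib ./local --installdeps .

# Make the local library visible to your scripts
export PERL5LIB="$PWD/local/lib/perl5"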
Then you might be able to use Shipwright to do the bundling and deployment of the project with these now-local dependencies.

Best Way to Deploy Zend Web Application

I've read a lot about deploying applications here, but haven't found a suitable answer to our needs yet.
We have a large web application built with the Zend Framework that we want to deploy to a remote server. We want to be able to easily and safely deploy new versions of our application to our production server.
What needs to be done is the following (a sketch of these steps as a script follows the list):
put up a maintenance page on the production application?
export version from SVN
run a shell script to minify the CSS files in a certain directory (shell script is done)
set file permissions on files and directories
copy/sync? files to a production server -> only changed files?
remove maintenance page from the production application?
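Sketched as a single script, those steps could look like this (the host, paths, and SVN URL are all assumptions, and the minify script is the one mentioned above):

#!/bin/sh
# One-click deployment sketch; adapt every path and host to your setup
ssh user@prod 'touch /var/www/app/maintenance.flag'        # enable maintenance page
svn export https://svn.example.com/app/trunk build/        # export version from SVN
./minify-css.sh build/public/css                           # minify CSS (existing script)
chmod -R u=rwX,go=rX build/                                # set file permissions
rsync -az --delete build/ user@prod:/var/www/app/          # sync only changed files
ssh user@prod 'rm /var/www/app/maintenance.flag'           # remove maintenance page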
We use SVN as a code versioning tool and we are running CentOS as our server OS in production.
I've read about:
rsync
fredistrano / capistrano
phing
custom shell scripts
What are your advices for easy one-click deployment?
I export (or check out) a copy of the site under a different name (typically the Subversion revision number and date) and symlink the document root into place:
1000.20100515/
    application/
    public/
    library/
1020.20100621/
current (symlink to 1000.20100515/)
dev (symlink to 1020.20100621/)

# copy whatever 'dev' points to as the new 'current' symlink.
rm current && cp -d dev current
The document root is set in Apache to ../current/public.
With this, I can check out a new version of the site at leisure and put the new version live en masse in a fraction of a second. Rolling back to a previous version of the site is as easy as changing the symlink, should a major problem be found.
Added: The Ruby-based tool Capistrano can be an excellent way to fully automate this across a number of machines (be it one or a dozen), and indeed it's my preferred method of deployment now. Capifony is a plugin for Capistrano that also supports Composer-based projects.
Try Capistrano. It is written in Ruby and you need to have Ruby installed on your computer, but it is not necessary to have it on the target server.
It works with git or svn, and it creates versioned releases on the target server. You can deploy a new version or roll back with a single command.
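With a configured deploy recipe in place, day-to-day use comes down to the standard Capistrano tasks:

cap deploy             # push a new release to the server
cap deploy:rollback    # switch back to the previous release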
I have found this tutorial: http://tfountain.co.uk/blog/2009/5/11/zend-framework-capistrano-deployment
There is a modified version of Capistrano with another tutorial here: http://www.codewithstyle.eu/2011/05/03/deploying-zend-framework-applications-using-capistrano/