Mac installer package - how to optionally install to multiple locations

I am trying to create a Mac installer package with pkgbuild and productbuild that optionally installs one package to four locations, depending on the installer options laid out in the distribution definition XML file.
Unfortunately, I cannot find a way of customising the Distribution.xml to install a package more than once.
The only alternative I can see is bundling 4 identical packages, but with separate install locations; however, this is undesirable as it would make my installer unnecessarily large.
Is there another way of achieving this custom behaviour?

Instead of --component "/Applications/SomeApp.app",
use --root ./basefolder and give basefolder a root-like structure (for example, 'Applications' and 'Library' folders):
/usr/bin/pkgbuild --scripts ./scripts --info PackageInfo --identifier com.app.installer --root ./basefolder "Installer.pkg"
http://s.sudre.free.fr/Stuff/Ivanhoe/FLAT.html is helpful for creating the PackageInfo file.
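For illustration, a rough sketch of how the root layout and the surrounding distribution package might be put together; the folder names, identifier and output file names below are only examples:
# assemble a root-like staging folder containing every target location
mkdir -p basefolder/Applications
mkdir -p "basefolder/Library/Application Support/SomeApp"
cp -R SomeApp.app basefolder/Applications/
# build a single component package from that root
/usr/bin/pkgbuild --scripts ./scripts --identifier com.app.installer --root ./basefolder Installer.pkg
# wrap it with the choices defined in Distribution.xml
/usr/bin/productbuild --distribution Distribution.xml --package-path . Final-Installer.pkg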

How to install a NetBeans plugin via CLI?

Is there a way to install a downloaded NBM (NetBeans Module) into an already installed NetBeans IDE via CLI?
Current setup
NetBeans 12.3 on Windows 10
NetBeans 12.3 on Linux Mint 20.1
Relevant scenario
If you're wondering "why aren't you just using the GUI?" or anything like that, think of the following scenario: you're working on an air-gapped network with 50 computers, and you're the one who has to install NetBeans plugins on all of those PCs. You're able to put files on those PCs and execute commands via the console, and you don't want to run around all the buildings clicking through the process on each machine.
Thank you very much in advance.
I think I found a solution. I'll post it here to document my research, because I never found an answer on Stack Overflow.
When NetBeans is already installed, you can use the --help parameter like this:
C:\Program Files\NetBeans\netbeans\bin\netbeans64.exe --help
This lists a lot of available parameters (which I haven't found documented anywhere on the web), for example (shortened):
General options:
--help show this help
--jdkhome <path> path to JDK
--console new open new console for output
Module reload options:
--reload /path/to/module.jar install or reinstall a module JAR file
Additional module options:
--modules
--refresh Refresh all catalogs
--list Prints the list of all modules, their versions and enablement status
--install <arg1>...<argN> Installs provided JAR files as modules
--disable <arg1>...<argN> Disable modules for specified codebase names
--enable <arg1>...<argN> Enable modules for specified codebase names
--update <arg1>...<argN> Updates all or specified modules
--update-all Updates all modules
Core options:
--fontsize <size> set the base font size of the user interface, in points
--userdir <path> use specified directory to store user settings
--nosplash do not show the splash screen
In my case the solution was to use the --install parameter pointing to the JAR file to install. Be aware that NBM files are just containers holding the JAR file plus some metadata files, such as config XML files; you can open one with 7-Zip, for example. Also, you'll have to take care of all the dependencies yourself.
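Based on that help output, a non-interactive install on each machine could look roughly like this (the plugin JAR path/name is a placeholder, and I'm assuming --install is used together with the --modules option group, as the help layout suggests):
On Windows:
"C:\Program Files\NetBeans\netbeans\bin\netbeans64.exe" --modules --install "C:\plugins\org-example-plugin.jar"
On Linux (the NetBeans install path will differ):
/opt/netbeans/bin/netbeans --modules --install /opt/plugins/org-example-plugin.jar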

Yocto: deploy Debug or Release prebuilt?

I am writing a bitbake recipe to deploy a third party pre-built tool, similar to this wiki page: https://wiki.yoctoproject.org/wiki/TipsAndTricks/Packaging_Prebuilt_Libraries
However, I have Release and Debug pre-built versions of the tool available as *.so files. How do I distinguish inside the recipe which of the two build types to deploy?
Thanks and regards,
Martin
You can have two different recipes providing the same virtual target, each with its own .so file. You then select one in a configuration file (with PREFERRED_PROVIDER_virtual/my-recipe), so either in a machine or in a distro configuration file. This is probably the preferred approach if you are considering having release and debug distros.
A second option is to install the libraries into two different paths, in two different PACKAGES (use FILES_my-package for that), and make them RCONFLICTS_my-package each other to be sure they can't both end up in the rootfs. After that, you could write a pkg_postinst_my-package() function specific to each package that actually moves the library from the "different" path to the intended one. This is run both at build time when creating the rootfs and at runtime on first boot, so you need to make sure to exclude one or the other (this is usually done by checking whether ${D} is set, which it is at build time but not at runtime).
c.f.: http://docs.yoctoproject.org/dev-manual/dev-manual-common-tasks.html#post-installation-scripts
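As a minimal sketch of that second option (the library name and the "different" staging path are invented for illustration), the postinst body is plain shell and could look like this inside the recipe:
pkg_postinst_my-package() {
    # $D is set (and points at the image rootfs) while the image is being
    # built, and is empty on the target, so this guard makes the move
    # happen only at rootfs-creation time
    if [ -n "$D" ]; then
        mv "$D/usr/lib/mytool-staging/libmytool.so" "$D/usr/lib/libmytool.so"
    fi
}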
If you can manage to have both libraries installed in your rootfs and select the one you want with the LIBRARY_PATH environment variable, then a simple recipe with two packages, each placing its library in a different location, will be sufficient.

How to install multiple versions of a compatible package in CentOS with YUM

Is there a way to install multiple versions of the same package in CentOS/RHEL (7/8) if the package installs separate files in each version?
We have an application that we've recently converted to using RPM instead of a home-built package manager based on tar. In order to make atomic-like switches between versions, each version was installed in a separate directory with the version number in its name, and a symlink with the unversioned name pointed to the current (or previous) version at any given moment in time. The application, of course, used the unversioned name to get its init script, configuration files, interpreter version and code. I'm thinking that the alternatives package would be the basis for this, although we wouldn't use the alternatives command to manage the symlinks (although there's no technical reason not to).
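For reference, the kind of switch described above is essentially a symlink swap; a sketch with made-up paths (renaming a staged link over the old one makes the replacement effectively atomic via rename()):
# stage the new symlink, then rename it over the unversioned name
ln -s /opt/myapp-1.3.0 /opt/myapp.new
mv -T /opt/myapp.new /opt/myapp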
Not exactly as you describe.
Some packages allow this (kernel and kernel-devel being two of them), but I believe this is an exception added within the package manager.
Certain applications like PHP and Python, for which it is perfectly acceptable to have multiple versions installed (Python 2.x and 3.x), do this by changing the base name of the application/RPM.
Take a look at: https://rpm.org/user_doc/multiple_versions.html
It gives good insight into how to achieve what you want.
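For what it's worth, the kernel exception mentioned above is driven by yum/dnf's installonlypkgs option, so one possible sketch ("myapp" is a placeholder; check yum.conf(5)/dnf.conf(5) for whether the option replaces or extends the default list on your version) is:
# mark the package as install-only, so yum/dnf installs versions side by side
# instead of upgrading in place; the versions must not share any file paths
echo 'installonlypkgs=kernel kernel-devel myapp' >> /etc/yum.conf
yum install myapp-1.2.0 myapp-1.3.0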

Installing a Perl-based web app in an extremely restricted environment

Because I have a long series of comments with @ikegami, I'm cleaning up the question in the hope that it will be more understandable. Unfortunately, English isn't my first language. :(
Say we have an environment where:
no development tools are installed (no make, nor gcc or the like)
perl is installed with its core packages, nothing more
no outgoing network access is allowed, e.g. the user can't use curl or cpan to download/install Perl dependencies
the user doesn't even have admin (root) rights
but the user wants to install and evaluate some Perl-based web app; let's call it MyApp.
MyApp:
doesn't use any XS-based modules (at least, I hope; in development I'm using plenv and cpanm, so I have never checked the installed dependencies in depth)
is a pure PSGI app; a simple plackup app.psgi works OK
uses some data files which should be included in the "deployment".
The main question is: how do I prepare MyApp, and all the CPAN modules it uses, so that they can be easily installed in such a restricted environment?
The goal is:
I don't need to save my own effort and time,
but I want to save the user's time and minimize the actions needed on their side, so the installation (deployment) should be as simple as possible.
E.g. how to get a running web app onto the user's machine with the minimum possible number of steps.
The simplest thing could be something like:
- copy one file (zip or tarball)
- unpack it
- from the terminal, execute some run.pl in the unpacked directory.
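In other words, roughly (the archive name is just a placeholder):
# after the tarball has been copied to the target machine:
tar xzf myapp.tar.gz
cd myapp
perl run.pl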
To achieve the above simple installation, my idea was the following:
1.) Create a tarball which, after unpacking, will contain 3 folders and 1 Perl script, say:
myapp_repo/
myapp_repo/distlib   # will contain all of MyApp's Perl modules, plus ALL used CPAN modules and their dependencies
myapp_repo/datafiles # will contain app-specific data files and such
myapp_repo/install.pl
myapp_repo/lib       # will contain modules directly used by install.pl
2.) I will develop an install.pl script, to be used as the installer tool, like:
perl install.pl new /path/to/app_root
and it will (should):
create all the needed directories under /path/to/app_root (especially the lib where it will install the Perl modules)
call a "local" cpanm internally (from myapp_repo/lib) to install the app's Perl modules and their CPAN dependencies, using only the distribution files from distlib
generate and install the needed runtime script and the app.psgi into /path/to/app_root/bin
install the needed data files for the app.
3.) So, after this the user should be able to simply run:
/path/to/app_root/bin/plackup /path/to/app_root/bin/app.psgi
In short, the user should use the system-wide perl and the system-wide Perl core modules, while everything else, i.e. the runtime Perl scripts (like plackup) and the required CPAN modules, should be installed into a self-contained directory tree using only local files (no network access).
E.g. install.pl should internally call cpanm to achieve the equivalent of the following cpanm command:
cpanm --mirror file://path/to/myapp_repo/distlib --mirror-only My::App
which should install My::App and all its dependencies without network access, using only the files from myapp_repo/distlib.
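(My current rough idea for populating distlib, untested: let cpanm collect the tarballs during a normal install on my development machine, then build a mirror index over them, e.g. with OrePAN2 or Pinto:)
# on the development machine, with network access, from the MyApp checkout;
# --save-dists copies every downloaded tarball into a CPAN-like
# authors/id/... layout under ./distlib
cpanm --save-dists ./distlib --installdeps .
# a package index (modules/02packages.details.txt.gz) still has to be
# generated over ./distlib before --mirror-only will work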
Some questions:
Is it possible to use cpanm (called as a locally installed module) without make?
For creating the myapp_repo/distlib, I'm thinking about using Pinto. Is it the right tool for achieving the above?
Have I forgotten something? Or, in other words:
Is the above a viable (read: working) approach?
Are there any other tools which I could/should use to simplify the creation of such a distribution tarball?
@ikegami suggests a method along these lines:
- "install everything" into one fresh directory on my machine
- transfer this self-contained directory to the target machine
It sounds very good, because this directory could contain all the needed app-specific data files too; unfortunately, I don't understand the details of how his solution should be carried out.
The FatPacker solution looks interesting too; I need to learn about it.
Don't write your own make or installer. Just copy make from a different machine (which is basically what apt/yum/etc. do anyway, and which you'd have to do even if you wrote your own). You'd be able to use cpan in 5 minutes!
Also, that should allow you to install gcc if you need it (e.g. to install an XS module), although it doesn't sound like you do. If you do install gcc, I'd install my own perl to avoid having to deal with PERL5LIB.
Tools such as minicpan will allow you to install any module from CPAN without internet access. Of course, you can keep using the command you are already using if it mirrors the packages you need.
The above explains how to simply and quickly set up a machine so it can use cpan and thus install any module easily.
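For example, a mirror could be built on a connected machine and carried over; a sketch where the local paths and the module name are only examples:
# on a machine with internet access: build a minimal CPAN mirror
minicpan -l /tmp/minicpan -r http://www.cpan.org/
# copy /tmp/minicpan to the target machine, then install from it offline
cpanm --mirror file:///opt/minicpan --mirror-only Plack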
If you just want to install a specific module and its dependencies, you can completely avoid using cpan on the target machine. First, you need a fresh install of Perl (preferably of the same version as the one on the target system). Then, simply install the module to a fresh dir on your machine, and transfer that dir to the target machine. That's it; nothing else needs to be done. This even works for XS modules if the two machines are similar enough.
This is what ppm (ActiveState's Perl package manager) does.
Unfortunately, while this solution is almost as simple as the one above, it's not nearly as flexible: it doesn't run the test suite of the modules being installed, etc. It does have the advantage of not requiring the transfer of any binary (if you're not installing any XS modules).
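A sketch of this second approach with cpanm, assuming the build machine runs the same Perl version as the target; the directory names below are arbitrary:
# on the build machine, from the MyApp source checkout: install MyApp
# itself plus all of its CPAN dependencies into one self-contained directory
cpanm --local-lib-contained /tmp/myapp-deps --notest .
tar czf myapp-deps.tar.gz -C /tmp myapp-deps
# on the target machine: unpack and point perl at the bundled lib
tar xzf myapp-deps.tar.gz -C /opt
PERL5LIB=/opt/myapp-deps/lib/perl5 /opt/myapp-deps/bin/plackup app.psgi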

How to make a Dist::Zilla based Perl module (or app) install files into /etc/?

I maintain multiple Perl-written (unix-ish) applications whose current installation process consists of a manually written Makefile and installs configuration files into /etc/.
I'd really like to switch their development to use Dist::Zilla, but so far I haven't found any Dist::Zilla plugin or feature which allows me to put given files into /etc/ when the make install (or ./Build install in case of using Module::Build instead of ExtUtils::MakeMaker) is run by the local administrator who's installing my application.
With pure ExtUtils::MakeMaker, I could define additional make targets in MY::postamble and then let the install target depend on one of them via the depend { install => … } attribute. Doing something similar, but via dzil build, would probably suffice, but I'd appreciate a more obvious way.
One orthogonal approach would be to make the application not require the files under /etc/ to exist, but that seems like too much change in the actual code for just switching to Dist::Zilla, given that I only want to change the build system for now.
For the curious: the two applications I currently have in mind for switching to Dist::Zilla are xen-tools and unburden-home-dir.
The best thing to do is to avoid installing files into /etc from any Perl distribution. You cannot ensure that the cpan client (or the installing user) has permissions to install there, and there can be multiple Perls installed on a system, so each one of them would clobber the /etc files of another install. You can't really prevent the file from being overwritten by a subsequent install, so you shouldn't put config data there that you don't want to lose.
You could put the config file in /etc/, if the application knows to look for it there, but you should allow for that path to be customized (say on a test system, look for the file in the local directory, or in a user's home directory).
For installing read-only module-specific data, the best practice in Perl is to install into a Perl-install-specific location, and the module to do that is File::ShareDir::Install. You can use it from Dist::Zilla using the [ShareDir] plugin, Dist::Zilla::Plugin::ShareDir. It is even included in the [@Basic] plugin bundle, so if you use [@Basic] in your dist.ini, you don't need to do anything at all, other than drop your data files into the share/ directory in your distribution repository.
To access the contents of the sharedir from code, use File::ShareDir.
For porting a complex module installer to Dist::Zilla, I recommend my plugins MakeMaker::Custom or ModuleBuild::Custom, depending on which installer you prefer. These allow you to keep your existing Makefile.PL or Build.PL and just have Dist::Zilla plug in necessary bits like the dependencies.