How to abort the installation of a deb package if there is not enough disk space - iPhone

I'm developing a program for jailbroken iPhones. When there is not enough disk space, the installation still continues, so some files get copied while others do not, which leaves the disk in a dirty state.
I've added disk-space checks to the preinst and prerm scripts, which are control files of the deb package; when disk space is insufficient, these scripts exit with a nonzero code. The problem is that when a package is upgraded without enough disk space, dpkg still removes the old files even though the prerm script exits with a nonzero status, so the upgrade turns into a removal, which is not the result I expect.

I don't know much about Cydia specifically, but if it works exactly like dpkg, then this should be solvable. See the activity diagram for package upgrades at http://people.debian.org/~srivasta/MaintainerScripts.html#sec-3.4.3.
That shows a few different paths that could be taken in the course of running prerms and preinsts which lead the system back to a clean, old-version-still-installed state. For example, if the new-preinst fails, then the new-postrm will be run with "abort-upgrade" as the parameter. If that succeeds, then the old-postinst is also run with "abort-upgrade". And if that succeeds, you're back to a clean, installed state.
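For reference, a minimal preinst sketch along those lines might look like the following (the required size and the target filesystem are placeholders, not values from the question); when it exits nonzero during an upgrade, dpkg should take the abort-upgrade path from the diagram rather than removing the old version:
#!/bin/sh
# preinst sketch: abort if less than REQUIRED_KB of free space remains on the target filesystem
# (REQUIRED_KB and the /var mount point are illustrative assumptions)
REQUIRED_KB=10240
AVAILABLE_KB=$(df -Pk /var | awk 'NR==2 {print $4}')
if [ "$AVAILABLE_KB" -lt "$REQUIRED_KB" ]; then
    echo "Not enough free disk space: need ${REQUIRED_KB} kB, have ${AVAILABLE_KB} kB" >&2
    exit 1   # nonzero exit makes dpkg run new-postrm with abort-upgrade
fi
exit 0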


How can I make a FindMyPackage.cmake module fall back to downloading?

I have a simple CMake find module I've written, for a library of mine used by other projects. It's pretty simplistic, with its full text available here. Mainly there's one find_path() and one find_library(), and then some variables are set.
Now, I want CMake, when trying to find my package, to fall back on:
git-cloning or downloading the package/library from its GitHub repository,
Unpacking the archive, if it was a download
Building the package, either by using the running CMake itself somehow (the package has its own CMakeLists.txt), or by running an arbitrary shell command in the directory into which the package was downloaded/cloned
The specifics of what happens post-download are less important to me than actually having a download fall-back.
How can I / how should I make this happen?
Notes:
Of course, if the download/git clone fails, then finding the package has failed.
No need to worry about specific versions at the repo, although you can if you want to.
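For reference, the post-download steps described above amount to roughly the following shell sequence (the repository URL, directories, and install prefix are placeholders); a find-module fallback would have to drive these same steps, for example through CMake's FetchContent or execute_process() machinery:
# hypothetical manual equivalent of the fallback: clone, configure, build, and install
git clone https://github.com/someuser/mypackage.git /tmp/mypackage-src
cmake -S /tmp/mypackage-src -B /tmp/mypackage-build -DCMAKE_INSTALL_PREFIX="$HOME/.local"
cmake --build /tmp/mypackage-build
cmake --install /tmp/mypackage-build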

How to uninstall berrybrew—perlbrew for Windows?

I have been running berrybrew on Windows
(here's the home page and GitHub repository).
I'm having some trouble with it and I want to uninstall and reinstall it, but I can't figure out how to do that.
I am hoping it is as simple as just deleting the directory where it was installed, plus C:\berrybrew, which is where it seems to keep files, but I don't know for sure. The documentation contains installation instructions, but no uninstallation instructions.
Disclaimer: berrybrew author here...
To uninstall and return your system to its default state:
berrybrew off
berrybrew unconfig
then delete the directory you downloaded it to, as well as the installation directory (by default, C:\berrybrew).
Edit your PATH variable to remove any entries that start with C:\berrybrew (or the base install directory if you've changed it from the default). One of the path entries will point to C:\berrybrew\bin, and there may be one more that points to the currently in-use Perl installation (also under C:\berrybrew\...). Technically speaking, there shouldn't be any left after the first two commands are run, but you should always verify.
Essentially, there's really nothing to "uninstall". It comes down to removing the PATH ($ENV{PATH}) entries that point to a) the berrybrew.exe binary itself, and b) the Perl installation you last used.
I will update the documentation to provide more clarity in this regard.

Is it safe to recompile and replace executable, while a program is in execution?

If I run an executable, and during execution I change the executable by recompiling it, is it guaranteed that the program will continue execution as per the old executable? In theory, I understand that page faults can occur, and hence I expect that changing an executable during execution may not be a great idea. I have searched for an answer, but I have not found a satisfactory explanation.
Many systems will not allow you to do this. Systems that page directly from the executable will lock the file and prevent you from modifying it while the program is running.
Systems that do allow it will have loaded everything from the executable into memory or secondary storage, so changing the executable on disk will not affect the already-running program.
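On Linux, for example, attempting to write to an executable that is currently running typically fails with ETXTBSY, while replacing the file via rename succeeds because the running process keeps the old inode mapped. A common redeployment pattern, sketched below with placeholder paths, is therefore to build to a temporary name and rename it over the old binary:
# sketch: replace a running binary without touching the file it is executing from
cc -o /usr/local/bin/myprog.new main.c              # build under a temporary name
mv /usr/local/bin/myprog.new /usr/local/bin/myprog  # rename swaps the directory entry;
                                                    # the running process keeps the old inode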

How can I copy a new version of libc?

I have a computer which is not to be connected to the internet for security reasons. It is running Linux. On a separate computer, I have code and a cross compiler for Linux. When I move the program over to the offline Linux computer, I cannot execute it, due to the error "version `GLIBC_2.17' not found".
After checking /lib, I see that I do indeed only have version 2.13 of libc. On my development computer, I have all the relevant files for the 2.22 version which is being used to compile the program. I would like to somehow copy this version of libc on my development computer over to my offline computer so that I can run my program.
The problem is, I cannot seem to copy the files. Attempting to do so gives an error:
mv: error writing ‘./libc.so.6’: No space left on device
or something similar. I know this is not actually due to there not being space left on the device, because I don't have much of anything ON the device, and I can copy the files to other places in the filesystem, just not to the /lib directory. How can I migrate my newer version of libc over to the offline computer?
I would like to somehow copy this version of libc on my development computer over to my offline computer so that I can run my program.
Incorrectly updating libc on your "offline" computer is a very easy way to render it unbootable. You should not attempt this unless you understand what you are doing (which you clearly do not), and unless you know how to restore the "offline" computer in case of failure.
The best approach is to use the package manager on the "offline" computer to install the libc package correctly (the details depend on which package manager is being used).
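On a Debian-based system, for example, that might mean downloading the matching packages on the connected machine and carrying them over, roughly as follows (the package names and the use of apt/dpkg are assumptions; the packages must match the offline machine's distribution and architecture):
# on the connected machine: fetch the .deb files without installing them
apt-get download libc6 libc-bin
# transfer the .deb files to the offline machine (USB stick, etc.), then on the offline machine:
sudo dpkg -i libc6_*.deb libc-bin_*.deb   # install together so the loader and libc stay in sync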
If the package manager approach doesn't work, you can copy individual files. However, note that the system's ld-linux.so and libc.so.6 must match at all times, or every dynamically linked program (your shell, cp, mv, etc.) will break.
Since you need to update two files simultaneously, how could this be done? You need a statically linked copy of cp. It is best to have statically linked copies of $SHELL and ln as well (in case you make a mistake).
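One way to do the simultaneous update, sketched below under the assumption that a statically linked busybox binary has already been transferred to the machine and the new loader and libc are staged in /root/newlibc (exact paths and file names such as ld-linux-x86-64.so.2 depend on the architecture and distribution), is to copy the files under temporary names and then switch them with atomic renames:
# copy the new files next to the old ones under temporary names
/root/busybox cp /root/newlibc/ld-linux-x86-64.so.2 /lib/ld-linux-x86-64.so.2.new
/root/busybox cp /root/newlibc/libc.so.6 /lib/libc.so.6.new
# switch both with atomic renames, without starting any program in between
/root/busybox mv /lib/ld-linux-x86-64.so.2.new /lib/ld-linux-x86-64.so.2
/root/busybox mv /lib/libc.so.6.new /lib/libc.so.6
# sanity check before leaving the current shell: a dynamically linked binary must still run
/bin/ls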

Improve startup times of a MATLAB executable

I have compiled a MATLAB standalone exe which I can run on any computer that has the MATLAB Compiler Runtime installed.
However, starting the exe takes 20-30 seconds!
How can I measure the time accurately and, more importantly, how can I decrease it to 1-2 seconds?
This is taken from Yair Altman's blog:
A splash wrapper application can alleviate much of the pain of the slow startup of deployed (compiled) Matlab applications. A splash window solution can be found here. While such a splash wrapper is indeed useful, it may also be possible to achieve an actual speedup of the compiled app’s startup using the MCR_CACHE_ROOT environment variable.
Normally, the MCR and the stand-alone executable are unpacked on every startup into the user’s temp dir and deleted when the user logs out. Apparently, when the MCR_CACHE_ROOT environment variable is set, these files are only unpacked once and kept for later reuse. If this report is indeed true, this could significantly speed up the startup time of a compiled application in subsequent invocations.
On Linux:
export MCR_CACHE_ROOT=/tmp/mcr_cache_root_$USER # local to host
mkdir -p $MCR_CACHE_ROOT
./myExecutable
On Windows:
REM set MCR_CACHE_ROOT=%TEMP%
set "MCR_CACHE_ROOT=C:\Documents and Settings\Yair\Matlab Cache"
myExecutable.exe
There are also ways to set this env variable permanently on Windows if needed...
Setting MCR_CACHE_ROOT is especially important when running the executable from a network (NFS) location, since unpacking onto a network location can be quite slow. If the executable is run in parallel on different machines (for example, a computer cluster running a parallel program), this might even cause lock-outs when different cluster nodes try to access the same network location. In both cases, the solution is to set MCR_CACHE_ROOT to a local folder (e.g., /tmp or %TEMP%). If you plan to run the executable again, then perhaps you should not delete the extracted files but reuse them. Otherwise, simply delete the temporary folder after the executable ends. In the following example, $RANDOM is a bash variable that expands to a random number:
export MCR_CACHE_ROOT=/tmp/mcr$RANDOM
./matlab_executable
rm -rf $MCR_CACHE_ROOT
Setting MCR_CACHE_ROOT can also be used to solve other performance bottlenecks in deployed applications, as explained in a MathWorks technical solution and a related article here.
In a related matter, a compiled Matlab executable may fail with a Could not access the MCR component cache error when Matlab cannot write to the MCR cache directory due to missing permission rights. This can be avoided by setting MCR_CACHE_ROOT to a non-existent directory, or to a folder with global write permissions (/tmp or %TEMP% are usually such writable folders) – see related posts here and here.
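On Linux, for example, one way to side-step the permission problem is to point the cache at a per-user directory that is known to be writable (the directory name below is just an illustration):
# work around the MCR component cache permission error with a per-user, writable cache dir
export MCR_CACHE_ROOT="$HOME/.mcrCache"
mkdir -p "$MCR_CACHE_ROOT"
./myExecutable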
If you are using deploytool to compile your code, go to Project > Settings > Toolboxes on Path and uncheck any toolboxes that aren't needed by your executable. I recently had this issue, and the above steps cut the executable file size in half and significantly reduced its start time.