Does use of 'virtualenv' leave my "real" Python installation alone?

I've had many questions about Python for which a suggested answer is often "use virtualenv", but I have a (lovingly maintained and perfectly functioning) Python installation that I'm loath to disturb.
I want to be absolutely sure, so I'll ask twice: Does use of virtualenv in any way disturb my "real" Python installation? Using virtualenv does not in any way modify the files or paths in my "real" installation, right?

Virtualenv creates a separate Python environment. The Python interpreter is linked from whichever system-installed interpreter you choose when creating the virtualenv (the --python command-line switch), and you can optionally choose whether or not to use the system site-packages (--system-site-packages).
All packages that you install inside the virtualenv stay in the virtualenv's own site-packages folder and do not touch the system packages.
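For example, a typical session might look like this (a minimal sketch; the paths and the package name are illustrative):
# Create an isolated environment based on a system interpreter of your choice
virtualenv --python=/usr/bin/python3 ~/envs/myproject
# Activate it; anything installed now goes into ~/envs/myproject only
source ~/envs/myproject/bin/activate
pip install requests
# Deactivate to return to the untouched system Python
deactivate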

Related

Installing a Perl-based web app in an extremely restricted environment

Because I have a long series of comments with @ikegami, I am cleaning up the question in the hope that it will be more understandable. Unfortunately, English isn't my first language. :(
Let's say we have an environment where:
no development tools are installed (no make, gcc, or the like)
perl is installed with its core packages, nothing more
no outgoing network access is allowed - e.g. the user can't use curl nor cpan to download/install Perl dependencies
the user doesn't even have admin (root) rights
but wants to install and evaluate some Perl-based web app; let's call it MyApp
MyApp:
doesn't use any XS-based modules (at least, I hope - during development I'm using plenv and cpanm, so I have never checked the installed dependencies in depth)
is a pure PSGI app; a simple plackup app.psgi works OK
uses some data files which should be included in the "deployment".
The main question is: how do I prepare MyApp, and all the CPAN modules it uses, so that they can be easily installed in such a restricted environment?
The goal is:
I don't need to save my own effort and time,
but I do want to save the user's time and minimize the actions needed on his side, so the installation (deployment) should be as simple as possible.
E.g. how to get a running web app onto the user's machine with the minimum possible steps (on his side).
- the simplest thing could be something like:
- copy one file (a zip or tarball)
- unpack it
- from the terminal, execute some run.pl in the unpacked directory.
To get the above simple installation, my idea was the following:
1.) Create a tarball which, after unpacking, will contain 3 folders and 1 Perl script, say:
myapp_repo/
myapp_repo/distlib #will contain all of MyApp's Perl modules and also ALL used CPAN modules and their dependencies
myapp_repo/datafiles #will contain app-specific data files and such
myapp_repo/install.pl
myapp_repo/lib #will contain modules directly used by the `install.pl`
2.) I will develop an install.pl script which will be used as the installer tool, e.g.
perl install.pl new /path/to/app_root
and it will (should):
create all the needed directories under /path/to/app_root (especially the lib where it will install the Perl modules)
call a "local" cpanm internally (from myapp_repo/lib) to install the app's Perl modules and their CPAN dependencies, using only the distribution files from the distlib
generate and install the needed runtime script and the app.psgi into /path/to/app_root/bin
install the needed data files for the app.
3.) So, after this the user should be able to simply run:
/path/to/app_root/bin/plackup /path/to/app_root/bin/app.psgi
In short, the user should use the system-wide perl and the system-wide Perl core modules; everything else -
the runtime Perl scripts (like plackup)
and the required CPAN modules
- should be installed into a self-contained directory tree using only files (no net access).
E.g. install.pl should internally call cpanm to achieve the equivalent of the following cpanm command:
cpanm --mirror file://path/to/myapp_repo/distlib --mirror-only My::App
which should install My::App and all its dependencies without network access, using only the files from myapp_repo/distlib.
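As a sketch of how the distlib itself could be populated on the development machine: cpanm's --save-dists option and the OrePAN2 indexer used below are my assumptions, not something prescribed by the plan above.
# On the development machine (with network access): download every
# distribution MyApp needs into a CPAN-mirror-style tree
cpanm --save-dists /path/to/myapp_repo/distlib --installdeps .
# Generate the 02packages index that --mirror-only relies on
orepan2-indexer /path/to/myapp_repo/distlib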
Some questions:
Is it possible to use cpanm (invoked as a locally installed module) without make?
For creating the myapp_repo/distlib, I'm thinking about using Pinto. Is it the right tool to achieve the above?
Have I forgotten something? Or, in other words:
Is the above a viable (read: working) approach?
Are there any other tools which I could/should use to simplify the creation of such a distribution tarball?
@ikegami suggested a method:
- "install everything" into one fresh directory on my machine
- transfer this self-contained directory to the target machine
It sounds very good, because this directory could contain all the needed app-specific data files too; unfortunately, I don't understand the details of how his solution should be done.
The FatPacker solution looks interesting too - I need to learn about it.
Don't write your own make or installer. Just copy make from a different machine (which is basically what apt/yum/etc do anyway, and which is what you'd have to do even if you wrote your own). You'd be able to use cpan in 5 minutes!
Also, that should allow you to install gcc if you need it (e.g. to install an XS module), although it doesn't sound like you do. If you do install gcc, I'd install my own perl to avoid having to deal with PERL5LIB.
Tools such as minicpan will allow you to install any module from CPAN without internet access. Of course, you can keep using the command you are already using if your mirror contains the packages you need.
The above explains how to simply and quickly setup a machine so it can use cpan and thus install any module easily.
If you just want to install a specific module and its dependencies, you can completely avoid using cpan on the target machine. First, you need a fresh install of Perl (preferably of the same version as the one on the target system). Then, simply install the module into a fresh dir on your machine, and transfer that dir to the target machine. That's it; nothing else needs to be done. This even works for XS modules if the two machines are similar enough.
This is what ppm (ActiveState's Perl package manager) does.
Unfortunately, while this solution is almost as simple as the one above, it's not nearly as flexible, it doesn't run the test suite of the modules being installed, etc. It does have the advantage of not requiring the transfer of any binary (if you're not installing any XS modules).
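A minimal sketch of the "install into a fresh dir and transfer" workflow described above, using cpanm's local::lib support rather than a full fresh Perl (the directory names and My::App are illustrative):
# On the build machine: install the module and its dependencies
# into a fresh, self-contained directory
cpanm --local-lib /tmp/myapp-deps My::App
tar czf myapp-deps.tar.gz -C /tmp myapp-deps
# On the target machine: unpack and point perl at the bundled libs
tar xzf myapp-deps.tar.gz
export PERL5LIB=$PWD/myapp-deps/lib/perl5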

Install just one package globally on Julia

I have a fresh Julia installation on a machine that I want to use as a number-crunching server for various people in a lab. There seems to be this nice package called JupyterHub which makes the Jupyter Notebook interface available to various clients simultaneously. A web page which I am unable to find again suggested something like "first install IJulia globally, then install JupyterHub...".
I cannot seem to find a nice way to install ONE package globally.
Update
In Julia v0.7+, we need to use JULIA_DEPOT_PATH instead of JULIA_PKGDIR, and the LOAD_PATH looks something like this:
julia> LOAD_PATH
3-element Array{Any,1}:
Base.CurrentEnv()
Any[Base.NamedEnv("v0.7.0"), Base.NamedEnv("v0.7"), Base.NamedEnv("v0"), Base.NamedEnv("default"), Base.NamedEnv("v0.7", create=true)]
"/Users/gnimuc/Codes/julia/usr/share/julia/stdlib/v0.7"
Old Post
"first install IJulia globally, then install JupyterHub..."
I don't know whether this is true or not, but by following the steps below you can install IJulia after you have installed JupyterHub.
Install packages system-wide/globally for every user
This question has already been answered here by Stefan Karpinski, so all we need to do is use his method to install the IJulia.jl package:
There's a Julia variable called LOAD_PATH that is arranged to point at two system directories under your julia installation. E.g.:
julia> LOAD_PATH
2-element Array{Union(ASCIIString,UTF8String),1}:
"/opt/julia-0.3.3/usr/local/share/julia/site/v0.3"
"/opt/julia-0.3.3/usr/share/julia/site/v0.3"
If you install packages under either of those directories, then everyone using that Julia will see them. One way to do this is to run julia as a user who can write to those directories, after doing export JULIA_PKGDIR=/opt/julia-0.3.3/usr/share/julia/site in the shell. That way Julia will use that as its package directory, and normal package commands will allow you to install packages for everyone....
Make IJulia work with JupyterHub
In order to make IJulia and JupyterHub work with each other for all users, you should copy the folder your/user/.local/share/jupyter/kernels/julia/ to /usr/local/share/jupyter/kernels/. I have written down some of the steps that I used in my test Dockerfile. The code is ugly, but it works.
Steps (after you have successfully installed JupyterHub):
Note that you should do the following steps as root; I assume that your Julia was globally installed at /opt/julia_0.4.0/.
make our global package directory and set up JULIA_PKGDIR:
mkdir /opt/global-packages
echo 'push!(LOAD_PATH, "/opt/global-packages/.julia/v0.4/")' >> /opt/julia_0.4.0/etc/julia/juliarc.jl
export JULIA_PKGDIR=/opt/global-packages/.julia/
install "IJulia" using package manager:
julia -e 'Pkg.init()'
julia -e 'Pkg.add("IJulia")'
copy the kernelspecs to /usr/local/share/jupyter/kernels/, so they can be used by any new user added by JupyterHub:
jupyter kernelspec list
cd /usr/local/share/ && mkdir -p jupyter/kernels/
cp -r /home/your-user-name/.local/share/jupyter/kernels/julia-0.4-your-julia-version /usr/local/share/jupyter/kernels/

How to make a Dist::Zilla based Perl module (or app) install files into /etc/?

I maintain multiple Perl-based (unix-ish) applications whose current installation process consists of a manually written Makefile and which install configuration files into /etc/.
I'd really like to switch their development to Dist::Zilla, but so far I haven't found any Dist::Zilla plugin or feature which allows me to put given files into /etc/ when make install (or ./Build install, if using Module::Build instead of ExtUtils::MakeMaker) is run by the local administrator who's installing my application.
With pure ExtUtils::MakeMaker, I could define additional make targets in MY::postamble and then let the install target depend on one of them via the depend { install => … } attribute. Doing something similar via dzil build would probably suffice, but I'd appreciate a more obvious way.
One orthogonal approach would be to make the application not require the files under /etc/ to exist, but that seems like too much change to the actual code just for switching to Dist::Zilla, given that I only want to change the build system for now.
For the curious: the two applications I currently have in mind for switching to Dist::Zilla are xen-tools and unburden-home-dir.
The best thing to do is to avoid installing files into /etc from any Perl distribution. You cannot ensure that the cpan client (or the installing user) has permissions to install there, and there can be multiple Perls installed on a system, so each one of them would clobber the /etc files of another install. You can't really prevent the file from being overwritten by a subsequent install, so you shouldn't put config data there that you don't want to lose.
You could put the config file in /etc/, if the application knows to look for it there, but you should allow for that path to be customized (say on a test system, look for the file in the local directory, or in a user's home directory).
For installing read-only module-specific data, the best practice in Perl is to install into a Perl-install-specific location, and the module to do that is File::ShareDir::Install. You can use it from Dist::Zilla using the [ShareDir] plugin, Dist::Zilla::Plugin::ShareDir. It is even included in the [@Basic] plugin bundle, so if you use [@Basic] in your dist.ini, you don't need to do anything at all, other than drop your data files into the share/ directory in your distribution repository.
To access the contents of the sharedir from code, use File::ShareDir.
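For instance, a quick way to check where the sharedir of an installed dist ended up (the dist name My-App is hypothetical):
# Print the installed sharedir for a distribution from the command line
perl -MFile::ShareDir=dist_dir -E 'say dist_dir("My-App")'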
For porting a complex module installer to Dist::Zilla, I recommend my plugins MakeMaker::Custom or ModuleBuild::Custom, depending on which installer you prefer. These allow you to keep your existing Makefile.PL or Build.PL and just have Dist::Zilla plug in necessary bits like the dependencies.

Installing external packages into Canopy Python using easy_install as non-admin

I am a user of a Windows computer without admin rights and have just installed Canopy Python from Enthought (and I was really excited that I was able to do this without admin rights). I would now like to install an external package (one that is not available in Canopy Python as an academic user). The instructions on the support page from Enthought suggest that to install an external package, we can just open a command window, make sure that Canopy Python is on the SHELL path, and then "follow standard Python installation procedures from the command line," with the suggested approach being to use easy_install. However, as a non-admin, when trying to use easy_install, a dialog box pops up requesting an admin username/password (which I do not have as a regular user). Does anyone know if it is possible to use easy_install as a non-admin, or if there is an alternative solution to install external packages for non-admin users of Canopy Python?
Is it possible that you are picking up the easy_install of another Python distribution on your machine?
The default location of easy_install in Canopy is
C:\Users\YourName\AppData\Local\Enthought\Canopy\User\Scripts\easy_install
Please try using the full path explicitly and see if that works; in theory you should not need admin rights.
Update: The problem is due to one of the heuristics used by Windows UAC to determine whether an application requires privilege escalation: if there is the word "setup" or "install" in the name, it will prompt for elevation. (See the answer below by Mona regarding which files to rename.) It's probably easiest to rename easy_install, use it to install pip (e.g. easy.exe pip after the rename), and use pip instead.
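A sketch of that pip route, reusing the Canopy Scripts path from above (SomePackage is a placeholder):
REM After renaming easy_install.exe to easy.exe (see the steps below),
REM bootstrap pip with it, then use pip for further installs
C:\Users\YourName\AppData\Local\Enthought\Canopy\User\Scripts\easy.exe pip
C:\Users\YourName\AppData\Local\Enthought\Canopy\User\Scripts\pip.exe install SomePackage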
As an update: after searching some more on User Account Control (UAC) on Windows (and coming across something that mentioned that having "install" in the name of a program may cause problems, though I do not know for sure whether this was the cause of the problem in my case), I tried the following "hack", which worked for me (but perhaps someone else can suggest a more elegant solution or provide more feedback as to why this works):
Go to C:\Users\YourName\AppData\Local\Enthought\Canopy\User\Scripts\
Rename easy_install.exe to easy.exe
Rename easy_install-script.py to easy-script.py
Run "C:\Users\YourName\AppData\Local\Enthought\Canopy\User\Scripts\easy.exe PackageName" from the command line.
(Wait for package to be installed and check for success by opening Canopy Python and trying to import the package.)
Some additional comments: I received an error if I didn't perform step #3 above (renaming the .py file as well). Also, I needed to type the full path to easy.exe in this case from the command line.
This worked for me (and I can use the external package), but again, my guess is that there should be a more "official" solution that does not require renaming easy_install.

How to install Perl offline

I have a Linux server that has no access to the internet (access is prevented by a firewall). I would like to install a new Perl. What are my options and what is the best way to do this? The system Perl (included in OS installation) must remain unchanged.
I have been using perlbrew and I think it is the best way to do an online installation. But all the steps involved in perlbrew seem to require internet access: you download it from the net, it downloads new Perl versions from the net, etc., and I haven't found a clue how to make it work offline.
If perlbrew is out of the question, I could build Perl from source into a custom location on the server. I assume that this could end up being complicated, time-consuming and error-prone. And every time I update Perl I would have to make a new build manually.
There may also be other ways to install that I'm not currently aware of. And of course I could stick with the system Perl, but it is an outdated version and I'm already using the new syntax features. Or I could start negotiations to change the firewall policy to allow internet access for perlbrew.
But all the steps involved in perlbrew seem to require internet access
Not if properly configured.
To install perlbrew itself off-line, install the App-perlbrew dist. Following its dependencies manually is a chore, so instead prepare a MiniCPAN mirror (with -p to include Perl dists), take it over to the target machine and configure CPAN to use the local mirror. Run cpan App::perlbrew to install.
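A sketch of that off-line bootstrap (the cpan -M mirror switch is an assumption on my part; on older cpan clients, set o conf urllist instead):
# On a connected machine: build a minicpan that also carries the Perl tarballs
minicpan -p -l /path/to/minicpan -r http://www.cpan.org
# Copy /path/to/minicpan to the target machine, then install perlbrew from it
cpan -M file:///path/to/minicpan App::perlbrew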
After perlbrew is installed, run its mirror command to configure a CPAN mirror into $PERLBREWROOT/Config.pm. Edit this file to change it to the local MiniCPAN mirror. Drop Perl dist tarballs into $PERLBREWROOT/dists/.
Be aware that compiling Perl requires a working C compiler toolchain, and optionally the development files for libdb (BerkeleyDB) and gdbm. (Read the INSTALL file once over, even though perlbrew's autoconfiguration and Perl's configure.SH defaults hide these details from you.)
The compiler toolchain is probably much more difficult to procure off-line, unless the OS installation has already been used before for compiling other C stuff.
There's nothing that special about perlbrew. If you aren't going to use it to download the Perl sources, it's not saving you that much. Once you have the Perl sources, you just need to configure and install it:
% ./Configure -des -Dprefix=/path/to/installation
% make install
Once done, everything for that Perl is under that installation path.
I dislike perlbrew mostly because it hides from people how amazingly simple this task is so they feel like they can't do it on their own.
Have you considered attacking it from a different direction? Keeping this up to date is going to be a pain if you have to request internet access each time. Likewise, if you've missed or misconfigured any packages in your CPAN mirror, it's difficult to correct once you're actually trying to use them.
Perhaps just build a small VM with a cut-down linux + perl + modules. Keep that up-to-date at your end and just take the whole lot in on a USB stick. You'd have a known-working easy-to-setup installation.
What I personally do is use a git checkout when I'm offline (and not on vacation). Once you have the whole git working directory, it's trivial to build any released version by checking out the tags:
git checkout v5.17.4
git clean -f # cleanup previously compiled .o files etc
sh ./Configure ...
Depending on how you can transfer files to your host, this can be handy, since you can also set up a private git repo there so other computers can git push new commits to it.