Vagrant - cache RPMs for faster provisioning - CentOS

I currently have a Vagrant setup with a CentOS box and a shell provisioning script that installs a few RPMs (via yum install). I'm constantly doing vagrant destroy -f && vagrant up, and thus downloading those RPMs every time.
What's the best way to cache the downloaded RPMs and avoid downloading them on each iteration?

Moving the cachedir to the shared folder /vagrant seems to work fine.
To change it, provision /etc/yum.conf with the following edits:
cachedir=/vagrant/tmp/yum/$basearch/$releasever
keepcache=1
Now your cache is preserved outside the VM.
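Since the question mentions a shell provisioning script, here is a minimal sketch of the lines it would need, assuming a stock CentOS /etc/yum.conf that already contains cachedir= and keepcache= entries:
# provision.sh: redirect yum's cache into the synced folder so it survives destroy/up
sed -i 's|^cachedir=.*|cachedir=/vagrant/tmp/yum/$basearch/$releasever|' /etc/yum.conf
sed -i 's|^keepcache=.*|keepcache=1|' /etc/yum.conf
On the next vagrant destroy -f && vagrant up, yum install serves any previously downloaded RPMs from /vagrant/tmp/yum instead of fetching them again.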

Related

ERROR: gcloud crashed (ModuleNotFoundError): No module named 'distutils.spawn'

I have been deploying my service on App Engine for a long time now and never had an issue until today.
Command to Deploy
gcloud app deploy app.yaml
Output
Beginning deployment of service [default]...
Building and pushing image for service [default]
ERROR: gcloud crashed (ModuleNotFoundError): No module named 'distutils.spawn'
I deployed just this morning with no issues, and now, out of nowhere, redeploying gives the above error. Hopefully someone can help figure out what caused this issue.
For info:
app.yaml
runtime: custom
env: flex
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 4
  disk_size_gb: 10
Gcloud version
$ gcloud --version
Google Cloud SDK 341.0.0
alpha 2021.05.14
beta 2021.05.14
bq 2.0.68
core 2021.05.14
gsutil 4.62
minikube 1.20.0
skaffold 1.23.0
I had a similar issue; in my case, this was the solution:
sudo apt-get install python3-distutils
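To check whether that fixed things for the interpreter in question, a quick test (this is the exact import gcloud crashes on):
python3 -c "import distutils.spawn; print(distutils.spawn.__file__)"
If that still raises ModuleNotFoundError, gcloud is probably running against a different Python, as the next answer describes.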
Same exact problem.
Building and pushing image for service [default]
ERROR: gcloud crashed (ModuleNotFoundError): No module named 'distutils.spawn'
This issue seemed to be in the snap install of google-cloud-sdk on Ubuntu 20.04.2 LTS (you can select it pre-installed during the ISO setup: don't).
I was getting this on 18.04 as well.
FINALLY solved it.
But I had to make sure I did not snap install google-cloud-sdk.
I also ran:
sudo apt update
sudo apt upgrade
Then I made sure the snap was not installed (after a fresh install of Ubuntu). Since I use Dockerfiles, it's easy for me to zap a dev environment and get it back.
But I'd imagine that if you can't zap your OS and keep it from installing its snap of google-cloud-sdk, you could snap remove google-cloud-sdk and then hunt down all its configuration files and remove them.
At that point, follow these instructions exactly:
https://cloud.google.com/sdk/docs/install#deb
That FINALLY seemed to work. I used the apt install route they explain, NOT the snap.
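For reference, a sketch of that route; these commands mirror the linked page at the time of writing, so check it for the current steps:
# make sure the snap is gone first
snap list | grep google-cloud-sdk && sudo snap remove google-cloud-sdk
# then the deb route from the linked instructions
sudo apt-get install apt-transport-https ca-certificates gnupg
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
sudo apt-get update && sudo apt-get install google-cloud-sdk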
I tried all the pip install and sudo apt-get install python3-distutils suggestions till I was blue in the face. Nada.
Somehow, the snap being present puts PATH settings in place that use the wrong distutils.
With my box now in a totally fresh OS state, with no snap install, and after going through exactly the cloud.google.com/sdk/docs/install#deb steps, here is distutils everywhere on my box in Ubuntu 20.04.2 LTS:
$ sudo find / -name distutils
/snap/lxd/19188/lib/python2.7/distutils
/snap/core18/1944/usr/lib/python3.6/distutils
/snap/core18/1944/usr/lib/python3.7/distutils
/snap/core18/1944/usr/lib/python3.8/distutils
/usr/lib/python3.8/distutils
/usr/lib/python3.9/distutils
/usr/lib/python2.7/distutils
Note: there's no google-cloud-sdk in the snap!
gcloud app deploy FINALLY works! It gets past the part where it starts to deploy.
But as the others here said, it happened completely at random.
All I can guess is that something clobbered distutils in an update somewhere and started pointing at a garbage path.
Make sure you search for distutils and find out where it is and what's referencing it; somewhere in that mess you can fix it.
One thing I was able to discover is that this problem comes by default with 20.04.2. I downloaded the most recent ISO thinking it was an 18.04 issue, installed it fresh into VirtualBox, and got exactly the same error. So my fix (no snap) is against a totally clean, brand spanking new 20.04.2 Ubuntu LTS VM, default everything.
===============
Regarding the random "one day it worked, the next it didn't":
Here's the thing about snaps in Ubuntu:
https://www.google.com/search?q=Do+snap+packages+update+automatically%3F
"Do snap packages update automatically?
Snaps update automatically, and by default, the snapd daemon checks for updates 4 times a day. Each update check is called a refresh."
So that's how it randomly broke, if you used a snap.
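You can see that refresh schedule on your own box with snapd's CLI:
snap refresh --time
That prints the last refresh and the next scheduled check, which lines up with the "worked yesterday, broken today" experience.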

How should I handle Perl module updates when maintaining docker images?

I'm working on building a docker image to be able to run all of our Perl applications. The applications require hundreds of CPAN modules to be installed. The full build of the docker image takes about an hour to complete.
After doing the initial image, I'm not sure how best to handle ongoing updates.
We could keep a single Dockerfile in git, modify it as required, and push new builds up to Docker Hub. However, if the person doing the build doesn't have all of the intermediate images, adding a single CPAN module could be an extremely tedious process, and it might take an hour before they even know whether the new module installs correctly. It would also download every CPAN module again, which seems a bit risky, as there might be a breaking change in a newer module.
Alternatively, the person doing the build could pull the latest Docker Hub image, install the CPAN module interactively, commit the build, and push the new image to Docker Hub. However, then we only have our Docker Hub images, but no master Dockerfile.
Another option would be to create a Dockerfile for each new build, which references the previous Docker Hub image. That seems overly complicated, though.
The first option seems wrong: I'm fairly sure we don't want to rebuild the entire image from the base OS just to install one additional module. However, being dependent on images without Dockerfiles seems risky as well.
You could use the standard module installer for your underlying OS in your Docker image.
For example, if it's Red Hat, use yum, and fall back to CPAN only for modules that aren't packaged:
FROM centos:centos7
# cpanm itself comes from the perl-App-cpanminus OS package
RUN yum -y install gcc perl perl-App-cpanminus perl-Config-Tiny && yum clean all
# install anything unpackaged from CPAN, then clean cpanm's work directory
RUN cpanm Some::Module && rm -fr /root/.cpanm
taken from here and modified
I would try to have a base image which the actual applications build on.
I would also avoid doing things interactively (i.e. script everything in a Dockerfile), as you want to be able to repeat the build when upstream dependencies change, which Docker Hub does for you.
EDIT
You can convert Perl modules into your own packages using dh-make-perl.
You can load these into your own Ubuntu repo using reprepro, or a paid solution such as Artifactory.
These can then be installed using apt-get when you use your repo as a source from within a Dockerfile; see the sketch below.
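A minimal sketch of that flow, assuming an existing reprepro repo under /srv/repo serving a focal distribution (module name, paths, and codename are all illustrative):
# build a .deb from a CPAN distribution
dh-make-perl --build --cpan Some::Module
# publish the result into the local apt repo
reprepro -b /srv/repo includedeb focal libsome-module-perl_*.deb
The Dockerfile then adds the repo as an apt source and installs libsome-module-perl like any other OS package.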
When I tried a similar thing before, there were a few problems:
Your apps don't work with the latest version of modules.
There are far more dependencies than you expected.
Some modules won't package.
The benefits are:
You keep the build tools (gcc, etc.) off the app servers.
You know much more about your dependencies.

Vagrant Berkshelf - Shelf Path?

Is it possible to set the path where the berkshelf plugin puts the cookbooks it installs? (As in the .berkshelf folder)
I am running Windows 7.
I am currently trying to install a MySQL server to a VM using an Opscode cookbook, and here at work the %HOMEDRIVE% system variable is set to a network drive. So when Berkshelf starts at the beginning of the Vagrantfile, it pushes the cookbooks to the network drive, which makes it slow, and, well, it's not where they should be. Is there a fix for this?
VirtualBox did this as well, but I fixed it by altering the settings. I tried looking for an equivalent setting for Berkshelf, but the closest I found was for standalone Berkshelf (not the Vagrant plugin): it appears you can set this environment variable:
ENV['BERKSHELF_PATH']
Found here:
http://www.rubydoc.info/github/RiotGames/berkshelf/Berkshelf#berkshelf_path-class_method
I need the cookbooks it reads from the Berksfile to be stored on my laptop's local drive instead, as in my scenario I cannot have the mobility of the VM limited to the building because of files stored on the network.
Any insight would be much appreciated.
Perhaps it's better to use actual Berkshelf over the Vagrant plugin?
Thanks.
If you want portability (a full chef-repo ready for chef-solo runs), you are better off using standalone Berkshelf instead of the vagrant-berkshelf plugin, which is not that flexible.
For complex cookbooks, I prefer standalone Berkshelf, as it lets me run berks install --path chef/cookbooks to copy all the required cookbooks out of ~/.berkshelf/cookbooks; I can then tar the whole thing and transfer it to other machines for the same chef-solo run. Some people use Capistrano to automate the tar and scp/rsync over the network. I just use rsync/scp ;-)
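A hedged sketch of both pieces; the local path and target host are placeholders:
# point Berkshelf's shelf at the local drive instead of %HOMEDRIVE%
export BERKSHELF_PATH=/c/berkshelf   # e.g. from Git Bash on Windows
# standalone flow: vendor the cookbooks, archive them, ship them
berks install --path chef/cookbooks
tar czf chef-repo.tar.gz chef/
scp chef-repo.tar.gz user@target:/tmp/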
HTH

How to migrate virtualenv

I have a relatively big project with many dependencies, and I would like to distribute it around, but installing those dependencies is a bit of a pain and takes a very long time (pip install takes quite a while). So I was wondering whether it is possible to migrate a whole virtualenv to another machine and have it running.
I tried copying the whole virtualenv, but whenever I try running something, this virtualenv still uses the path of my old machine. For instance when I run
source activate
pserve development.ini
I get
bash: ../bin/pserve: /home/sshum/backend/bin/python: bad interpreter: No such file or directory
This is my old directory. So, is there a way to have virtualenv reconfigure this path to a new one?
I tried sed -i 's/sshum/dev1/g' * in the bin directory, and it solved that issue. However, I'm now getting a different issue; my guess is that the sed changed something else.
I've confirmed that I have libssl-dev installed, but when I run python I get:
E: Unable to locate package libssl.so.1.0.0
E: Couldn't find any package by regex 'libssl.so.1.0.0'
But when I run aptitude search libssl I see:
i A libssl-dev - SSL development libraries, header files and documentation
I also tried virtualenv --relocatable backend but no go.
Export the virtual environment
From within the virtual environment:
pip freeze > requirements.txt
Once on the new machine, copy requirements.txt into the new project folder, create and activate a fresh virtual environment there, and run:
pip install -r requirements.txt
(No sudo should be needed inside a virtualenv.) Then you should have all the packages that were available in the old virtual environment.
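End to end, the flow looks something like this (directory names are illustrative):
# old machine, inside the activated virtualenv
pip freeze > requirements.txt
# new machine: build a fresh virtualenv and replay the requirements
virtualenv backend
source backend/bin/activate
pip install -r requirements.txt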
When you create a new virtualenv, it is configured for the computer it is running on; I even think it is configured for the specific directory it is created in. So I think you should always create a fresh virtualenv when you move your code. Copying the lib/pythonX.X/site-packages from your virtualenv directory might work, but I don't think that is a particularly good solution.
What may be a better solution is using the pip download cache. That will at least speed up the download part of pip install. Have a look at this thread: How do I install from a local cache with pip?
The clean way seems to be virtualenv --relocatable.
Alternatively, you can do it manually by editing the VIRTUAL_ENV path in bin/activate to reflect the change. If you choose to do so, you must also edit the first line (the #! shebang) of bin/pserve, which holds the interpreter path.
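A sketch of that manual route, using the old and new home directories from the question:
# fix the path baked into the activate script
sed -i 's|/home/sshum/backend|/home/dev1/backend|g' bin/activate
# fix the shebang of each console script, e.g. pserve
sed -i '1s|^#!.*|#!/home/dev1/backend/bin/python|' bin/pserve
This is essentially what the asker's bulk sed over bin/ did, made explicit per file.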

Issues with meteor app on vagrant share

I have a Vagrant VM (VirtualBox) set up with Meteor. My host and guest are both Ubuntu. The VM contains a vboxsf shared folder set up through the Vagrantfile. The behavior I am noticing is similar to an NFS mount.
I am able to create a Meteor project in this shared folder, but when I run the project I get errors pointing to MongoDB.
If I follow instructions on
https://github.com/pixelhandler/vagrant-dev-env/blob/master/README.md
my app works just fine.
Upon further investigation, it seems that MongoDB does not work on NFS shares: http://www.mongodb.org/display/DOCS/NFS
Has anyone else run into this issue? And if so, have you figured out a (non-rsync) solution?
I plan to send a link to this question to 10gen; perhaps someone from their team can answer it.
Not sure what Mongo's plans are regarding running on NFS / vboxsf, but you could work around this by running your own MongoDB outside the shared folder (e.g. use the Ubuntu mongodb package). Use the MONGO_URL environment variable to tell Meteor where to connect. If you set this variable, Meteor will not try to start MongoDB in the Meteor project directory.
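A short sketch of that workaround inside the VM (the database name is an example; mongodb-server was the Ubuntu package name at the time):
# run Mongo from the distro package, with its data dir on the VM's own disk
sudo apt-get install mongodb-server
# point Meteor at it instead of letting it spawn its own mongod
MONGO_URL=mongodb://localhost:27017/myapp meteor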
You can move the data dir somewhere inside the VM and use a symlink from the Vagrant folder:
mkdir -p ~/db
cd /vagrant/.meteor/local
ln -s ~/db db
This means the data will not be shared, but you probably want it git-ignored anyway.
(https://grahamrhay.wordpress.com/2013/06/18/running-meteor-in-a-vagrant-virtualbox/)
grahamrhay's solution will not work with a Vagrant box started on Windows. There is no way to make symbolic links on Windows for Vagrant, at least not without administrator rights.