How to use virtualenv + virtualenvwrapper properly with Vagrant?

I found that the most convenient way of installing virtualenv + virtualenvwrapper is by using virtualenvburrito.
Now I can automate my pip installs during Vagrant provisioning with the following:
Line in Vagrantfile:
config.vm.provision :shell, :path => "bootstrap.sh"
Lines in bootstrap.sh:
curl -s https://raw.github.com/brainsik/virtualenv-burrito/master/virtualenv-burrito.sh | $SHELL
source /root/.venvburrito/startup.sh
cd /vagrant
mkvirtualenv my_project
pip install -r requirements.txt
Then I run vagrant ssh, but I still have to run the following to access my virtual environment:
sudo -i
source /root/.venvburrito/startup.sh
workon my_project
I don't want to have to run sudo -i and source /root/.venvburrito/startup.sh every time; I just want to be able to run workon my_project directly.
But:
(I.) I can't seem to append source /root/.venvburrito/startup.sh to my ~/.profile, and
(II.) even if it were appended to that file, I'd get a permission error. I can't seem to change the permissions of any protected file either.

The best way to deal with (I.) and (II.) is to set the privileged attribute in the Vagrantfile to false.
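Setting the provisioner to non-privileged makes bootstrap.sh run as the vagrant user instead of root, so the virtual environment and startup.sh end up in that user's home and its ~/.profile is writable. A minimal sketch of the changed Vagrantfile line (same bootstrap.sh as above):
config.vm.provision :shell, :path => "bootstrap.sh", :privileged => false
After reprovisioning, append the source .../startup.sh line to the vagrant user's ~/.profile (its exact location, e.g. ~/.venvburrito/startup.sh, is an assumption about the burrito install path), and workon my_project then works directly after vagrant ssh.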

Related

Not able to use an npm CLI in a GitHub workflow

I want to use an npm CLI utility (this one) inside a bash script and run it via a GitHub workflow.
This is my basic script:
folder="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
mkdir -p "$folder"/rawdata
mkdir -p "$folder"/processing
npm list -g --depth=0
rm "$folder"/rawdata/"$reg".png
capture-website --delay 5 --full-page --width 1280 --height 720 --output "$folder"/rawdata/"$reg".png "https://ondata.github.io/vaccinipertutti/?area=SIC"
I run it using this GitHub workflow (on Ubuntu 20.04.2 LTS), in which I set up the capture-website-cli installation this way:
mkdir ~/.npm-global
npm config set prefix '~/.npm-global'
export PATH=~/.npm-global/bin:$PATH
source ~/.profile
NPM_CONFIG_PREFIX=~/.npm-global
npm install -g capture-website
But when the script runs I get this error:
./test.sh: line 13: capture-website: command not found
It seems that it has not been installed in ~/.npm-global.
If I run find /home/runner -executable -name capture-website I get
/home/runner/.npm-global/lib/node_modules/capture-website
Do you have any advice to solve my problem?
You might need to differentiate between:
capture-website, the npm library itself (library means: no executable ends up in .npm-global/bin)
sindresorhus/capture-website-cli, which provides a CLI (command-line interface) way to use that npm library
You need to install the latter so that its executable ends up on your runner's $PATH.
npm install --global capture-website-cli
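As a quick sanity check (a sketch; it assumes the global npm bin directory, whether the default one or the ~/.npm-global/bin prefix set up above, is on $PATH in the step that runs the script):
command -v capture-website    # should print the path of the installed executable
capture-website --help        # prints the CLI usage if the install worked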

How to build a Docker image that has a telnet client

I've been trying to build a Docker image for my Flask app. The application has a script that relies on the telnet command. However, after putting the app into the container, the script stopped working since the telnet command is not found in the container.
How can I make telnet available in the Docker container?
Here is my Dockerfile:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim-buster
EXPOSE 5000
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
ADD requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
ADD . /app
# Switching to a non-root user, please refer to https://aka.ms/vscode-docker-python-user-rights
RUN useradd appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "run:app"]
You can update the package list and install telnet as part of your Dockerfile. For example:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim-buster
# Update the package list and install telnet (non-interactive build, so -y is required)
RUN apt-get update && apt-get install -y telnet
This refreshes the image's package list and then installs telnet into it.
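To verify the change, a hedged sketch (the tag my-flask-app is a placeholder for whatever you name the image):
docker build -t my-flask-app .
docker run --rm my-flask-app which telnet    # should print the path of the telnet binary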

Running Kitura Docker image causes libmysqlclient.so.18 error

After I had a previous problem Dockerising my MySQL Kitura setup here: Docker Build Kitura Sqift Container - Shim.h mysql.h file not found
I am running into a new problem I cannot solve while following the guide at https://www.kitura.io/docs/deploying/docker.html .
After I followed all the steps and also applied the earlier MySQL fix, I was able to run the following command:
docker run -p 8080:8080 -it myapp-run
This, however, leads to the following issue:
error while loading shared libraries: libmysqlclient.so.18: cannot open shared object file: No such file or directory
I assume something again tries to open libmysqlclient from the wrong environment directories?
But how can I fix this issue when building the Docker image... is there any way, and ideally a smart way?
Thanks a lot again for the help.
I was able to update and enhance my Dockerfile. It is now running smoothly and can also be used for CI and CD tasks.
FROM ibmcom/swift-ubuntu-runtime:latest
##FROM ibmcom/swift-ubuntu-runtime:5.0.1
LABEL maintainer="IBM Swift Engineering at IBM Cloud"
LABEL Description="Template Dockerfile that extends the ibmcom/swift-ubuntu-runtime image."
# We can replace this port with what the user wants
EXPOSE 8080
# Default user if not provided
ARG bx_dev_user=root
ARG bx_dev_userid=1000
# Install system level packages
RUN apt-get update && apt-get dist-upgrade -y
RUN apt-get update && apt-get install -y sudo libmysqlclient-dev
# Add utils files
ADD https://raw.githubusercontent.com/IBM-Swift/swift-ubuntu-docker/master/utils/run-utils.sh /swift-utils/run-utils.sh
ADD https://raw.githubusercontent.com/IBM-Swift/swift-ubuntu-docker/master/utils/common-utils.sh /swift-utils/common-utils.sh
RUN chmod -R 555 /swift-utils
# Create user if not root
RUN if [ $bx_dev_user != "root" ]; then useradd -ms /bin/bash -u $bx_dev_userid $bx_dev_user; fi
# Bundle application source & binaries
COPY ./.build /swift-project/.build
# Command to start Swift application
CMD [ "sh", "-c", "cd /swift-project && .build/release/Beautylivery_Server_New" ]

Cannot execute binary file on CentOS?

I am using CentOS 6.9 and want to install XAMPP. But when I run the command in the terminal, it shows an error, i.e. cannot execute binary file. So, how can I fix this problem and successfully install XAMPP? Please help me.
chmod +x xampp-linux-x64-7.0.22-0-installer.run
./xampp-linux-x64-7.0.22-0-installer.run
After this command it shows:
bash: ./xampp-linux-x64-7.0.22-0-installer.run: cannot execute binary file
You're probably running the installer (binary) as a less privileged user. You'll have to use the root user to modify the SELinux settings, like so:
semanage fcontext -a -t httpd_sys_script_exec_t '/<install-location>(/.*)/?'
restorecon -R -v /<install-location>/
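Note that semanage is not available on a default CentOS 6 install; it is provided by the policycoreutils-python package (install it as root as well):
yum install -y policycoreutils-python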

Can't install packages using pip in Virtualenv on remote using Fabric

I have been using Fabric to deploy an app with virtualenv. I was using Fabric 1.4 and upgraded to 1.5.1 last week. My script stopped working.
It can't install the requirements. It seems it's not activating the virtualenv. In my code, I have:
with cd('%(path)s' % env):
    with prefix('source bin/activate'):
        run('pip install -U distribute')
I'm getting a permission denied error: error: could not delete '/usr/local/lib/python2.7/dist-packages/pkg_resources.py': Permission denied
The command being executed is:
Executed: /bin/bash -l -c "cd /var/www/myproject && source bin/activate && export PATH=\"\\$PATH:\\"/var/www/myproject\\" \" && pip install -U distribute"
If I ssh to the remote machine and run cd /var/www/myproject && source bin/activate && pip install -U distribute, it works just fine.
Why is my fabric script not working?
Thanks in advance
Instead of the serial approach with..
source bin/activate
pip install -U distribute
..directly use the pip executable of the virtualenv:
myenv/bin/pip install -U distribute
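Applied to the Fabric task from the question, that would look roughly like this (a sketch; it assumes env.path still points at /var/www/myproject and that bin/pip inside it is the virtualenv's pip):
with cd('%(path)s' % env):
    # the virtualenv's own pip installs into the virtualenv, so no activate step is needed
    run('./bin/pip install -U distribute')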
Although not exactly a solution, fabtools has a number of functions related to virtualenvs that are very handy. They do (almost) all of the hard work for you, and are probably worth using to check it isn't something else going wrong.
# Cut (and modified) from the fabtools documentation
from fabric.api import *
from fabtools import require
import fabtools

@task
def setup():
    # Require a Python package
    with fabtools.python.virtualenv('/home/myuser/env'):
        require.python.package('pyramid')
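The task can then be invoked like any other Fabric task (a sketch; the host and user are placeholders):
fab -H deploy@remotehost setup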