I have created a Python virtual environment to run an application using these instructions:
git clone http://github.com/MediaCrush/MediaCrush && cd MediaCrush
Create a virtual environment
Note: you'll need to use Python 2. If Python 3 is your default python interpreter (python --version), add --python=python2 to the virtualenv command.
virtualenv . --no-site-packages
Activate the virtualenv
source bin/activate
Install pip requirements
pip install -r requirements.txt
Install coffeescript
npm install -g coffee-script
Configure MediaCrush
cp config.ini.sample config.ini
Review config.ini and change any details you like. The default place to store uploaded files is ./storage; you'll need to create it (mkdir storage) and set the storage_folder variable in the config to the absolute path of this folder.
Compile static files
If you make a change to any of the scripts, you will need to run the compile_static.py script.
python compile_static.py
Start the services
You'll want to make sure Redis is running at this point. It's probably best to set it up to run when you boot up the server (systemctl enable redis.service on Arch).
MediaCrush requires the daemon and the website to be running concurrently to work correctly. The website is app.py, and the daemon is celery; the daemon is responsible for handling media processing. Run the daemon, then the website:
celery worker -A mediacrush -Q celery,priority
python app.py
This runs the site in debug mode. If you want to run this on a production server, you'll probably want to run it with gunicorn, and probably behind an nginx proxy like we do.
gunicorn -w 4 app:app
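For reference, a minimal nginx reverse-proxy sketch for the gunicorn setup above (the server name and config path are illustrative assumptions, not part of the official instructions):
# /etc/nginx/sites-available/mediacrush (hypothetical)
server {
    listen 80;
    server_name mediacrush.example.com;
    location / {
        proxy_pass http://127.0.0.1:8000;    # gunicorn's default bind address
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}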
I am trying to set this up on a remote server which is hosting 2 other websites.
I haven't actually got it to work properly yet, but what I don't understand is: does this virtual environment have to be running continuously?
If I close my remote connection, or exit the environment does the application cease to function?
And if not how do I exit the virtual environment and continue to work on the server?
The virtual environment isn't something that needs to be running. It's basically a directory where Python libraries and executables can be installed, and a handful of environment variables to ensure that:
new libraries are installed in the virtual environment;
when a Python program looks for a library, it looks in the virtual environment;
when the system looks for a program to run, it looks in the virtual environment first.
One of the things that happens when you activate the virtual environment is it defines a shell function called deactivate that unsets all the environment variables. So, to get out of the virtual environment, you just type deactivate.
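For example, here is roughly what you'd see before and after (the prompt prefix and paths are illustrative):
$ source bin/activate
(MediaCrush) $ which python     # now resolves to the virtualenv's interpreter
/path/to/MediaCrush/bin/python
(MediaCrush) $ deactivate
$ which python                  # back to the system interpreter
/usr/bin/python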
If I close my remote connection, or exit the environment does the application cease to function?
It depends on how you've started your application. If you are just launching it from the command line, then when you close your connection the application will be stopped. Typically you want to use a service like upstart to start and manage your application (the particular service you choose is typically determined by your server's OS). When you configure that service, you'll want to make sure it runs source $your_environment_dir/bin/activate before starting your app, so that your app will run in the virtual environment.
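As a rough sketch, an upstart job for this app might look like the following (the job name and paths are assumptions; adapt them to your server):
# /etc/init/mediacrush.conf (hypothetical)
description "MediaCrush web app"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
script
    cd /srv/MediaCrush
    . bin/activate               # put the virtualenv's bin/ on the PATH
    exec gunicorn -w 4 app:app
end script
On systemd-based distributions, the equivalent is a unit file whose ExecStart points directly at the virtualenv's executables (e.g. /srv/MediaCrush/bin/gunicorn), which has the same effect as activating the environment first.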
Related
I want to install and set up TYPO3 on my local machine. What's the best practice and fastest way to do so?
For running TYPO3 on a local machine you need a web server running on your machine.
This can be done in different ways:
Native Web Server, PHP and database on a Linux based machine
Virtual Machine (VirtualBox, VMWare, Parallels, etc.)
Vagrant
Docker
Currently the fastest way for a "non power user", in my opinion, is ddev.
ddev is a user-friendly way to run a complete environment for TYPO3 on a Docker base. It runs on Linux, Mac and Windows (minimum version 10, Hyper-V recommended) and brings along all the technologies you need for the best experience.
Install Docker and ddev, see https://ddev.readthedocs.io/en/stable/
Create a folder for your installation, e.g. ~/Websites/my-website/ or C:\Websites\my-website\ and go into it.
Run ddev config and set these three options in the dialog:
Project name (default is your folder name): Whatever you like
Docroot location: public (and say yes when asked to create it)
Project type: typo3
Run ddev start to start the Docker containers, and enter your root password so ddev can set the hosts entry (for accessing the site via a local domain)
Run ddev composer create typo3/cms-base-distribution ^9 and say yes for overwriting
Run ddev config again and just hit enter for every dialog to create a file which provides the DB credentials for your TYPO3 installation
Run ddev exec vendor/bin/typo3cms install:setup --no-interaction --admin-user-name=admin --admin-password=password --site-setup-type=site
That's all, you have a running TYPO3 instance on your local machine.
You can access it by using <project-name>.ddev.site in your browser; in our example it would be http://my-website.ddev.site. To get into the TYPO3 backend, log in with the credentials admin:password at http://my-website.ddev.site/typo3.
For troubleshooting go to:
https://ddev.readthedocs.io/en/stable/users/troubleshooting/
https://docs.typo3.org/typo3cms/InstallationGuide/Troubleshooting/Index.html
https://docs.typo3.org/typo3cms/ContributionWorkflowGuide/Appendix/SettingUpTypo3Ddev.html
I'm trying to create a virtual environment to deploy a Flask app. However, when I try to create a virtual environment using virtualenv, I get this error:
Using base prefix '//anaconda'
New python executable in /Users/sydney/Desktop/ptproject/venv/bin/python
ERROR: The executable /Users/sydney/Desktop/ptproject/venv/bin/python is not functioning
ERROR: It thinks sys.prefix is '/Users/sydney/Desktop/ptproject' (should be '/Users/sydney/Desktop/ptproject/venv')
ERROR: virtualenv is not compatible with this system or executable
I think that I installed virtualenv using conda. When I use which virtualenv, I get this
//anaconda/bin/virtualenv
Is this an incorrect location for virtualenv? I can't figure out what else the problem would be. I don't understand the error log at all.
It turns out that virtualenv just doesn't work correctly with conda. For example:
https://github.com/conda/conda/issues/1367
(A workaround is proposed at the end of that thread, but it looks like you may be seeing a slightly different error, so maybe it won't work for you.)
Instead of deploying your app with virtualenv, why not just use a proper conda environment? Conda environments are more general (and powerful) than those provided by virtualenv.
For example, to create a new environment with python-2.7 and flask in it:
conda create -n my-new-env flask python=2.7
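To work inside the new environment afterwards, a minimal sketch (older conda versions use source activate; newer ones use conda activate):
source activate my-new-env     # newer conda: conda activate my-new-env
python --version               # should report Python 2.7.x
source deactivate              # newer conda: conda deactivate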
I'm trying to imagine a workflow that could be applied on a scientific work environment. My work involves doing some scientific coding, basically with Python, pandas, numpy and friends. Sometimes I have to use some modules that are not common standards in the scientific community and sometimes I have to integrate some compiled code in my chain of simulations. The code I run is most of the time parallelised with IPython notebook.
What do I find interesting about docker?
The fact that I could create a Docker image containing my code and its working environment. I can then send the image to my colleagues without asking them to change their work environment, e.g., install an outdated version of a module, so that they can run my code.
A rough draft of the workflow I have in mind goes something as follows:
Develop locally until I have a version I want to share with somebody.
Build a Docker image, possibly with a hook from a git repo.
Share the image.
Can somebody give me some pointers on what I should take into account to develop this workflow further? A point that intrigues me: can code running in a Docker container launch parallel processes on the several cores of the machine? E.g., an IPython notebook connected to a cluster.
Docker can launch multiple processes/threads on multiple cores. Multiple processes may need the use of a supervisor (see: https://docs.docker.com/articles/using_supervisord/).
You should probably build an image that contains the things you always use and use it as a base for all your projects. (It would save you the pain of writing a complete Dockerfile each time.)
Why not develop directly in a container and use the commit command to save your progress to a local Docker registry? Then share the final image with your colleagues.
How to make a local registry : https://blog.codecentric.de/en/2014/02/docker-registry-run-private-docker-image-repository/
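A sketch of that commit workflow (the image, container, and registry names are hypothetical):
docker run -it --name science-dev my-base-image /bin/bash
# ... work inside the container: install packages, test code ...
docker commit science-dev science:v1                # snapshot the container as an image
docker tag science:v1 localhost:5000/science:v1
docker push localhost:5000/science:v1               # push to the local registry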
Even though you'll have a full container, I think a package manager like conda can still be a solid part of the base image for your workflow.
FROM ubuntu:14.04
RUN apt-get update && apt-get install curl -y

# Install miniconda into /miniconda and put it on the PATH
RUN curl -LO http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh
RUN bash Miniconda-latest-Linux-x86_64.sh -p /miniconda -b
RUN rm Miniconda-latest-Linux-x86_64.sh
ENV PATH=/miniconda/bin:${PATH}

# Keep conda itself up to date
RUN conda update -y conda
(from a nice example showing docker + miniconda + flask)
With regard to doing source activate <env> in the Dockerfile, you need to:
RUN /bin/bash -c "source activate <env> && <do something in the env>"
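For example, continuing from the base image above (the environment name science and the seaborn package are placeholders for whatever you actually need):
# create a named conda environment inside the image
RUN conda create -y -n science python=2.7 numpy pandas
# RUN lines use /bin/sh by default, so wrap environment-dependent steps in bash
RUN /bin/bash -c "source activate science && pip install seaborn"
Each RUN starts a fresh shell, so the activation does not persist between Dockerfile instructions; every step that needs the environment has to re-activate it.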
Is it possible to set the path where the berkshelf plugin puts the cookbooks it installs? (As in the .berkshelf folder)
I am running Windows 7.
I am currently trying to install a MySQL server on a VM using an Opscode cookbook, and here at work the %HOMEDRIVE% system variable is set to a network drive. So when Berkshelf starts at the beginning of the Vagrantfile, it pushes the cookbooks to the network drive, which makes things slow, and it's not where they should be. Is there a fix for this?
VirtualBox did this as well, but I fixed it by altering the settings. I tried looking for an equivalent setting for Berkshelf, but the closest I got was for standalone Berkshelf (that's not the Vagrant plugin); it appears you can set this environment variable:
ENV['BERKSHELF_PATH']
Found here:
http://www.rubydoc.info/github/RiotGames/berkshelf/Berkshelf#berkshelf_path-class_method
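For example, it could presumably be pointed at a local folder, either per-user (C:\berkshelf is just a hypothetical path):
setx BERKSHELF_PATH C:\berkshelf
or at the top of the Vagrantfile, before the plugin loads its configuration:
ENV['BERKSHELF_PATH'] = 'C:/berkshelf'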
I need the cookbooks it reads from the Berksfile to be stored on my laptop's local drive instead, as in my scenario I cannot have the mobility of the VM limited to the building because of files stored on the network.
Any insight would be much appreciated.
Perhaps it's better to use the actual Berkshelf over the Vagrant plugin?
Thanks.
If you want the portability - a full chef-repo ready for chef-solo runs - you're better off using standalone Berkshelf instead of the vagrant-berkshelf plugin, which is NOT that flexible.
For complex cookbooks, I prefer to use standalone Berkshelf, as it allows me to do berks install --path chef/cookbooks to copy all the cookbooks required from ~/.berkshelf/cookbooks. Then I can just tar the whole thing and transfer it to other machines for the same chef-solo run. Some people use Capistrano to automate the tar and scp/rsync over the network; I just use rsync/scp ;-)
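A sketch of that flow (the repo name and target host are hypothetical):
cd my-chef-repo                           # contains the Berksfile
berks install --path chef/cookbooks       # vendor the cookbooks out of ~/.berkshelf
tar czf chef-repo.tar.gz .                # bundle the whole repo
scp chef-repo.tar.gz user@target:/tmp/    # ship it over for the chef-solo run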
HTH
As I understand, all that Capistrano does is ssh into the server and execute the commands we want it to (mostly).
I've used rvm in some past couple of projects, and had to install the rvm-capistrano gem. Otherwise, it failed to find the executables (or so I recall), even though we had a proper .rvmrc file (with the correct ruby and the correct gemset) in the repository.
Similarly, today I was setting up deployment for a project for which I'm using pythonbrew, and a simple "cd #{deploy_to}/current && pythonbrew venv use myenv && gunicorn_django -c gunicorn.py" gave me an error message saying "cannot find the executable gunicorn_django". This, I suppose, is because the virtualenv was not activated correctly. But didn't we activate the environment when we did "pythonbrew venv use myenv"? The complete command works fine if I ssh into the server and execute it on the shell, but it doesn't when I do it via Capistrano.
My question is - why does Capistrano need modifications to play along with programs like rvm and pythonbrew, even though all it's doing is executing a couple of commands over ssh?
That's because Capistrano's ssh'ing in doesn't activate your shell's environment, so it's not picking up the source statements that enable the magic. Just do an rvm use ... before running commands instead of assuming the cd will pick that up automatically. Should be fine then. If you had been using Fabric, there is the prefix() context manager that you could use to make sure that's run before each command.
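To see the difference yourself (the host and paths are hypothetical):
# a bare ssh command runs a non-login shell, so the rvm/pythonbrew init
# lines in your login profile are never sourced
ssh user@server 'cd /app/current && gunicorn_django -c gunicorn.py'   # command not found

# forcing a login shell (bash -l) sources the profile first,
# just like your manual interactive session
ssh user@server "bash -lc 'cd /app/current && pythonbrew venv use myenv && gunicorn_django -c gunicorn.py'"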