How to install the Orange library on a web server?

I usually use Orange, especially the Python Script widget.
For some reason, I want to use a web server to do my data mining job.
How do I install the Orange library on a web server and use it the way I use the Python Script widget?

If you install Orange without the graphical interface, you will have to do everything in scripts (saved workflows will not work).
I have not tested it, but try the following: check out Orange's git repository (https://github.com/biolab/orange3), modify its setup.py file by replacing
requirements = ['requirements-core.txt', 'requirements-gui.txt']
with
requirements = ['requirements-core.txt']
and then try installing with pip install .
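Once the headless install works, you can use Orange from a plain Python script on the server, much like in the Python Script widget. A minimal sketch (the iris sample dataset and logistic regression learner are only illustrative choices):

import Orange

# load a data table (Orange ships the iris sample dataset)
data = Orange.data.Table("iris")

# train a classifier, the same kind of code you would write in the Python Script widget
learner = Orange.classification.LogisticRegressionLearner()
model = learner(data)

# predict on the training data (illustration only)
print(model(data)[:5])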

Related

Getting the install path of a package just installed by Chocolatey in PowerShell

After I install a package in PowerShell by using
"choco install $package", where the package name is taken from a config file and would look like "WinRar" (so I would be running choco install WinRar), how do I get the exact path this package was just installed to?
For example, when I install PhantomJS this way, it gets installed to C:\ProgramData\chocolatey\lib\PhantomJS\tools\phantomjs-2.1.1-windows, and I as the developer know that, but since I need to add this to the env path, the path will differ depending on which version the install command installs. I need to get the exact path so I can set the environment variable to the right place.
PhantomJS is just one example, but a lot of packages get installed into directories where their version is a part of the path, so getting the path from the PowerShell install scripts would really be helpful.
Is there anything like this available for the package manager? I assume figuring out where the package just got installed to should be possible, because I see it displayed in my terminal window; I just don't know how to access it in PowerShell.
Thanks.
Currently there is not a way, but there is a thought to maybe provide back a list of package results with that information (along with more). That is still a feature request, so look for it to be developed in the coming months.
You could parse the Chocolatey output to determine where Chocolatey saw things get installed, and we are working to make that detection even better.
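In the meantime, a rough workaround is to look under Chocolatey's lib folder after the install finishes. This is a sketch only - it assumes the package unpacks a single versioned directory under a tools subfolder, which is a common convention rather than a guarantee:

$package = 'PhantomJS'
choco install $package -y

# Chocolatey unpacks packages under $env:ChocolateyInstall\lib\<package>;
# pick up the versioned tools directory from there (assumption: one directory per package)
$toolsDir = Join-Path $env:ChocolateyInstall "lib\$package\tools"
$installDir = Get-ChildItem $toolsDir -Directory | Select-Object -First 1 -ExpandProperty FullName

# append it to the user PATH
[Environment]::SetEnvironmentVariable('Path', "$env:Path;$installDir", 'User')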

Installing a Perl-based web app in an extremely restricted environment

Because I have a long series of comments with @ikegami, I am cleaning up the question in the hope it will be more understandable. Unfortunately, English isn't my "main" language. :(
Let's say we have an environment where:
no development tools are installed (no make, gcc, or the like)
perl is installed with its core packages, nothing more
no outgoing network access is allowed - e.g. the user can't use curl or cpan to download/install Perl dependencies
the user doesn't even have admin (root) rights
but wants to install and evaluate some Perl-based web app; let's call it MyApp
MyApp:
doesn't use any XS-based module (at least, I hope - during development I'm using plenv and cpanm, so I never checked the installed dependencies in depth)
is a pure PSGI app; a simple plackup app.psgi works OK
uses some data files which should be included in the "deployment".
The main question is: how to prepare MyApp, and all the CPAN modules it uses, so they can be easily installed in such a restricted environment?
The goal is:
I don't need to save my own effort and time,
but I want to save the user's time and minimize the actions needed on his side, so the installation (deployment) should be as simple as possible.
E.g. how to get a running web app onto the user's machine with the minimum possible steps (on his side).
- the simplest thing could be something like:
- copy one file (zip or tarball)
- unpack it
- from the terminal, execute some run.pl in the unpacked directory.
To get the above simple installation, my idea was the following:
1.) Create a tarball which, after unpacking, will contain 3 folders and 1 perl script, say:
myapp_repo/
myapp_repo/distlib #will contain all of MyApp's perl modules and also ALL used CPAN modules and their dependencies
myapp_repo/datafiles #will contain app-specific data files and such
myapp_repo/install.pl
myapp_repo/lib #will contain modules directly used by the install.pl
2.) I will develop an install.pl script which will be used as the installer tool, like:
perl install.pl new /path/to/app_root
and it will (should):
create all the needed directories under /path/to/app_root (especially the lib directory where it will install the perl modules)
call a "local" cpanm internally (from myapp_repo/lib) to install the app's perl modules and their CPAN dependencies, using only distribution files from the distlib
generate and install the needed runtime script and the app.psgi into /path/to/app_root/bin
install the needed data files for the app.
3.) So, after this the user should be able to simply run:
/path/to/app_root/bin/plackup /path/to/app_root/bin/app.psgi
In short, the user should use:
the system-wide perl and the system-wide perl core modules,
while everything else -
runtime perl scripts (like plackup)
and the required CPAN modules -
should be installed into a self-contained directory tree using only files (no network access).
E.g. the install.pl should internally call cpanm to achieve the equivalent of the following cpanm command:
cpanm --mirror file://path/to/myapp_repo/distlib --mirror-only My::App
which should install My::App and all its dependencies without network access, using only the files from myapp_repo/distlib.
Some questions:
Is it possible to use cpanm (called as a locally installed module) without make?
For creating the myapp_repo/distlib, I'm thinking about using Pinto. Is it the right tool for achieving the above?
Did I forget something? Or, in other words:
Is the above a viable (read: working) way?
Are there any other tools which I could/should use to simplify the creation of such a distribution tarball?
@ikegami suggests the following method:
- "install everything" into one fresh directory on my machine
- transfer this self-contained directory to the target machine
It sounds very good, because this directory could contain all the needed app-specific data files too; unfortunately, I don't understand the details of how his solution should be done.
The FatPacked solution looks interesting too - I need to learn about it.
Don't write your own make or installer. Just copy make from a different machine (which is basically what apt/yum/etc do anyway, and which you'd have to do even if you wrote your own). You'd be able to use cpan in 5 minutes!
Also, that should allow you to install gcc if you need it (e.g. to install an XS module), although it doesn't sound like you do. If you do install gcc, I'd install my own perl to avoid having to deal with PERL5LIB.
Tools such as minicpan will allow you to install any module from CPAN without internet access. Of course, you can keep using the command you are already using if it mirrors the packages you need.
The above explains how to simply and quickly set up a machine so it can use cpan and thus install any module easily.
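For illustration, a hedged sketch of that mirroring approach (paths are placeholders; the first command runs on a machine with internet access, and the mirror directory is then copied over):

# on a machine with internet access: build a local CPAN mirror
minicpan -l /srv/minicpan -r http://www.cpan.org/

# on the target machine, after copying /srv/minicpan across
# (assumes cpanm and make are usable there, per the advice above)
cpanm --mirror file:///srv/minicpan --mirror-only My::App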
If you just want to install a specific module and its dependencies, you can completely avoid using cpan on the target machine. First, you need a fresh install of Perl (preferably of the same version as the one on the target system). Then, simply install the module into a fresh dir on your machine, and transfer that dir to the target machine. That's it; nothing else needs to be done. This even works for XS modules if the two machines are similar enough.
This is what ppm (ActiveState's Perl package manager) does.
Unfortunately, while this solution is almost as simple as the one above, it's not nearly as flexible, it doesn't run the test suites of the modules being installed, etc. It does have the advantage of not requiring the transfer of any binaries (if you're not installing any XS modules).
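A minimal sketch of that "install to a fresh dir and transfer it" approach, assuming cpanm's --local-lib layout (all paths below are made up, and My::App stands in for the actual application distribution):

# build machine (has network access); /tmp/myapp_deps is just an illustrative staging dir
cpanm --local-lib /tmp/myapp_deps Plack My::App
tar -czf myapp_deps.tar.gz -C /tmp myapp_deps

# target machine, after unpacking the tarball under /path/to/app_root
export PERL5LIB=/path/to/app_root/myapp_deps/lib/perl5
perl /path/to/app_root/myapp_deps/bin/plackup app.psgi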

How should I handle Perl module updates when maintaining docker images?

I'm working on building a docker image to be able to run all of our Perl applications. The applications require hundreds of CPAN modules to be installed. The full build of the docker image takes about an hour to complete.
After doing the initial image, I'm not sure how best to handle ongoing updates.
We could keep a single Dockerfile in git, modify it as required, and push new builds up to dockerhub. However, if the person doing the build doesn't have all of the intermediate images, then adding a single CPAN module could be an extremely tedious process, and it might take an hour before they even know if the new module installs correctly. It would also download every CPAN module again, which seems a bit risky, as a freshly downloaded module might contain a breaking change.
Alternatively, the person doing the build could pull the latest dockerhub image, install the CPAN module interactively, commit the build, and push the new image to dockerhub. However, then we only have our dockerhub images, but no master Dockerfile.
Or another option would be to create a Dockerfile for each new build, which references the previous dockerhub image. This seems overly complicated though.
Option 1) seems wrong. I'm fairly sure we don't want to be rebuilding the entire image from the base OS just to install one additional module. However being dependent on images without Dockerfiles seems risky as well.
You could use the standard module installer for your underlying OS in your docker image.
For example, if it's RedHat, then use yum and only fall back to CPAN when packages are not available:
FROM centos:centos7
RUN yum -y install gcc perl perl-App-cpanminus perl-Config-Tiny && yum clean all
RUN cpanm Some::Module; rm -fr /root/.cpanm; exit 0
taken from here and modified
I would try to have a base image which the actual applications build on.
I would also avoid doing things interactively (i.e. script everything in a dockerfile instead), as you want to be able to repeat the build when upstream dependencies change, which docker hub does for you.
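As a rough sketch of the base-image idea (the image name, module name, and paths below are made up):

# per-application Dockerfile, building on a shared base image that already
# contains perl, the build tools, and the common CPAN modules
FROM mycompany/perl-base:1.0
RUN cpanm --notest Some::Extra::Module
COPY . /opt/myapp
CMD ["perl", "/opt/myapp/bin/app.pl"]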
EDIT
You can convert perl modules into your own packages using dh-make-perl.
You can load these into your own Ubuntu repo using reprepro, or a paid solution such as Artifactory.
These can then be installed using apt-get when you use your repo as a source from within a dockerfile.
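Roughly, that workflow might look like this (a sketch; the repo path, distribution name, and module/package names are placeholders):

# build a .deb from a CPAN module
dh-make-perl --build --cpan Some::Module

# publish it into an apt repository managed by reprepro
reprepro -b /srv/apt-repo includedeb stable libsome-module-perl_*.deb

# then, in the dockerfile, add that repo as a source and install via apt-get:
# RUN echo "deb http://apt.example.internal stable main" > /etc/apt/sources.list.d/internal.list \
#     && apt-get update && apt-get install -y libsome-module-perl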
When I tried a similar thing before, there were a few problems:
Your apps don't work with the latest version of modules
There are far more dependencies than you expected
Some modules won't package
The benefits are:
You keep the build tools (gcc, etc) off the app servers
You know much more about your dependencies

Deploying meteor app to a webserver

Does anyone know a step-by-step guide to deploying your own meteor app from Windows to a webspace (not xxx.meteor.com)?
I've found some tools like meteor.sh, but I'm a beginner and it's difficult without guidance and without Linux (needed to execute sh files, for example).
Make your project locally
Build your project locally; you can test it using meteor run or even meteor deploy xxx.meteor.com to see if it's working.
Bundle your app
Use meteor bundle deploy.tar.gz to make a file called deploy.tar.gz in your meteor directory containing your project
Upload your file to your server
This depends on what your server/platform is, but you can use a tool to upload it for you (e.g. Transmit on Mac).
Install node.js & fibers on your platform if you don't have it already
This depends a lot on your server platform. Have a look at http://nodejs.org/ for more detailed instructions.
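For example, on a Debian/Ubuntu-style server the install might look roughly like this (a sketch; package names vary by distribution, so check nodejs.org for your platform):

sudo apt-get update
sudo apt-get install -y nodejs npm
node --version && npm --version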
Extract your bundle
If on a *nix platform, you could do the following in the directory where you uploaded your bundle (explanation):
tar -xzvf deploy.tar.gz
Enter the directory and install fibers
Fibers is needed for any meteor project; it lets you use synchronous-style code in server-side JavaScript:
cd bundle/programs/server/node_modules
rm -r fibers
npm install fibers@1.0.1
The first line enters the directory in your bundle where fibers is installed, the second removes it, and the third reinstalls it.
Get MongoDB on another server or use a third party service like mongohq
Meteor production deployments need a separate mongodb. You can either install it on another server or use a third-party service. It's not recommended to install it on the same server you install meteor on.
Finally check if your project is runnable
cd ../../../
MONGO_URL=mongodb://dbuser:dbpassword@dbhost:dbport/meteor ROOT_URL=http://yourwebsite.com node app.js
The first line gets you back to the bundle directory, and the second runs node.js on your project with the environment variables that let it connect to your mongodb database.
Install something to let it run in the background
It depends on which one you want to use; foreverjs is quite easy to use:
npm install forever -g
If you get an error, try using sudo before the npm command (which lets you run it as a superuser).
Then you can run forever:
MONGO_URL=mongodb://dbuser:dbpassword@dbhost:dbport/meteor ROOT_URL=http://yourwebsite.com forever start app.js
And it's done!
Extra notes
While it's not that easy to start from scratch, this should help you get going. You still need to secure your mongodb server if you've used your own servers.
The meteor.sh script does pretty much the same as the above, but very quickly; if you learn to use that instead, it might be faster for deploying updates.
You might not have wget or a couple of other commands you need, which will give you Unknown command errors. Have a go at running yum or apt-get and see which one of the two you have; you can then install the required package using that installer tool, e.g. with yum install wget.
I hope this helps you. It's not that easy to deploy to a server on the first shot, as a couple of things might be missing (files/packages/dependencies), and you might run into other problems with permissions & stuff, but you can always ask on serverfault or here on stackoverflow about whatever you run into.
I recommend Meteoric.
Note that you need to run Meteoric from your development machine.
The script is self-explanatory and works perfectly for me.

Ruby 1.9.1 Installation on Debian

I'm currently having a bit of a nightmare trying to run code on another machine. I've been developing a Sinatra app as part of an internship I'm doing. I'm developing on an Ubuntu 12.04 machine with Ruby 1.9.3 (through RVM). My supervisor wants to run it on his Debian Squeeze machine, the development server. I listed all the necessary gems in the Gemfile and pushed up the initial commit. However, we just can't seem to get it running on the Debian box.
Ruby 1.8 was initially installed, before my supervisor was aware we'd need Ruby 1.9 and up. The ruby1.9.1-full Debian package was installed, but trying to run the Sinatra app with ruby1.9.1 application.rb does nothing. I added some print statements to debug it, and the Ruby interpreter reaches the end of the file - the problem is that it never starts up WEBrick. This exact same code has no problem running on my machine, so why is it being so problematic on Debian?
NOTE: Don't suggest switching to RVM. My supervisor is adamant we only use official packages, so it's beyond my control.
I have my Sinatra apps configured a bit differently. That is, I don't run them with ruby application.rb; rather, I have a config.ru file with instructions for the Rack middleware. When I want to run my app I just run rackup and the server will start.
The minimal example layout as shown in the Sinatra Readme is as follows.
A basic Sinatra application.rb file:
require 'sinatra'
get '/' do
'Hello world!'
end
and the config.ru:
require './application'
run Sinatra::Application
I don't really know if or how this would make a difference in your situation, but it was the first thing that sprung to mind.
P.S.
Now that I think of it, another thing you could try is to use a server other than WEBrick. I think if you add
gem 'thin'
to your Gemfile it should automatically use Thin instead. Remember to re-run bundle install first.
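For reference, a minimal Gemfile for this setup might look like the following (a sketch; pin versions as appropriate for your app):

source 'https://rubygems.org'

gem 'sinatra'
gem 'thin'

With that in place, bundle install followed by rackup should start the app under Thin instead of WEBrick.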