Add external library to an action - ibm-cloud

I'm developing an action in IBM Cloud Functions that is called from a Watson Assistant dialog. This action has to make a SOAP request to a web service. The problem is that when I try to import the suds library, it fails because suds is not among the default Python libraries. How can I add the library?
Thanks in advance.

You can package Python dependencies by using a virtual environment, virtualenv. The virtual environment allows you to link additional packages that can be installed by using pip, for example.
To install dependencies, package them in a virtual environment, and create a compatible OpenWhisk action:
Create a requirements.txt file that contains the pip modules and versions to install.
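For the suds use case in the question, a hypothetical requirements.txt could be as small as a single pinned entry. The package name suds-jurko (a maintained PyPI fork of the original suds) is an assumption here; use whichever distribution of the library you actually need:
suds-jurko==0.6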
Install the dependencies and create a virtual environment. The virtual environment directory must be named virtualenv. To ensure compatibility with the OpenWhisk runtime container, install the packages into the virtual environment by using the Docker image that corresponds to the action's kind:
For kind python:2 use the docker image openwhisk/python2action.
For kind python:3.6 use the docker image ibmfunctions/action-python-v3.6.
For kind python:3.7 use the docker image ibmfunctions/action-python-v3.7.
docker run --rm -v "$PWD:/tmp" ibmfunctions/action-python-v3.7 bash -c "cd /tmp && virtualenv virtualenv && source virtualenv/bin/activate && pip install -r requirements.txt"
Package the virtualenv directory and any additional Python files. The source file that contains the entry point must be named __main__.py.
zip -r helloPython.zip virtualenv __main__.py
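For reference, a minimal sketch of the __main__.py packed into the zip above, written here as a shell heredoc and adapted to the suds question; the wsdl_url parameter and the SomeOperation call are hypothetical placeholders, not part of the original question:
cat > __main__.py <<'EOF'
from suds.client import Client

def main(params):
    # Build a SOAP client from a WSDL URL passed as an action parameter.
    client = Client(params.get("wsdl_url"))
    # Call a hypothetical operation; the name and arguments are placeholders.
    result = client.service.SomeOperation(params.get("value"))
    return {"result": str(result)}
EOF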
Create the action helloPython.
ibmcloud fn action create helloPython --kind python:3.7 helloPython.zip
For more details, refer to the IBM Cloud Functions documentation on packaging Python actions.

Related

Failed to build a Conda package: missing gtkdocize in conda-builder gitlab-ci environment

I am using an automatic package creation pipeline in gitlab-ci to build Conda packages for software we use at my company.
One of the software packages we use relies on gtkdocize, and its configure script checks for it. It is only needed for the build, not for execution. So I am not able to build the package, because the conda-builder image does not contain this program.
I am new to Conda and gitlab-ci, and I imagine conda-builder is a generic Docker image for building Conda packages in general. How can I add a package to "my" conda-builder image?
Or maybe there is a build dependency I am missing in my recipe? I cannot find where gtkdocize can come from.
Any help would be appreciated.
The gtkdocize binary is used to set up an Autotools-based project using gtk-doc for generating the API reference. You will need to install whatever package provides gtkdocize; on Debian/Ubuntu, the package is called gtk-doc-tools, whereas on Fedora it's called gtk-doc.
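If you control the CI job or the image, a minimal sketch of installing the provider package before the build runs; this assumes the conda-builder image is Debian/Ubuntu-based (first line) or Fedora-based (second line):
apt-get update && apt-get install -y gtk-doc-tools   # Debian/Ubuntu
dnf install -y gtk-doc                               # Fedora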

Is there a way to build ZFS for the linux-virt kernel on Alpine?

I want to run ZFS inside of a virtual machine using Alpine Linux. The linux-virt kernel is much smaller and does not have the 200+MB of firmware files listed as dependencies, so that is the kernel I selected for the VM. However, I now find that there is no zfs-virt package, only zfs-vanilla which installs the vanilla kernel and all the firmware as dependencies.
Is there a zfs-virt package available, perhaps in a third-party repository? If not, I am not against building the package myself, but I'm relatively new to Alpine, so I have not yet figured out how its build system works, nor whether it is possible to build against an already-compiled kernel (in my own past experience, the only way I've successfully built kernel modules is by using the source tree of the target kernel).
Just copy the latest zfs-lts package directory to zfs-virt, then run abuild checksum and abuild -r. There is also a prebuilt package in my repository.
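A hedged sketch of that copy-and-abuild workflow; the clone URL and the main/zfs-lts location are assumptions, so adjust them to your aports checkout:
git clone https://gitlab.alpinelinux.org/alpine/aports.git
cp -r aports/main/zfs-lts aports/main/zfs-virt
cd aports/main/zfs-virt
# Edit APKBUILD: rename pkgname to zfs-virt and point it at the virt kernel flavor.
abuild checksum   # regenerate source checksums after the edits
abuild -r         # install build deps and build the package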
Alternatively, install the prebuilt package with:
(cd /etc/apk/keys; sudo curl -LO https://repo.wener.me/alpine/wenermail@gmail.com-5dc8c7cd.rsa.pub)
echo https://repo.wener.me/alpine/v3.11/community | sudo tee -a /etc/apk/repositories
sudo apk add zfs-virt

Can I install and run multiple versions of gcloud (google cloud sdk) on the same machine?

Features and options in gcloud are sometimes deprecated or removed. If CI depends on them and refactoring is not an option, while at the same time we need to use new features that come out in later releases, can we have multiple versions of gcloud installed on the same machines and used concurrently?
There are multiple ways to install the Cloud SDK on your machine. For this, the simplest is probably to download a versioned package from https://cloud.google.com/sdk/downloads#versioned.
For example you can do
gsutil cp gs://cloud-sdk-release/google-cloud-sdk-VERSION-linux-x86_64.tar.gz .
where VERSION is the version you want to get (for example "161.0.0"). You could also use wget or curl, or simply use a browser to download the package for your platform.
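For instance, the same versioned archive is also reachable over plain HTTPS through the storage.googleapis.com front end of that bucket, so a curl download might look like:
curl -O https://storage.googleapis.com/cloud-sdk-release/google-cloud-sdk-161.0.0-linux-x86_64.tar.gz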
Then unzip/untar it into your desired location, for example:
mkdir -p ~/cloudsdk/161.0.0
tar xzf google-cloud-sdk-161.0.0-linux-x86_64.tar.gz -C ~/cloudsdk/161.0.0
Repeat for a different version:
mkdir -p ~/cloudsdk/130.0.0
tar xzf google-cloud-sdk-130.0.0-linux-x86_64.tar.gz -C ~/cloudsdk/130.0.0
Now you can run gcloud via
~/cloudsdk/161.0.0/google-cloud-sdk/bin/gcloud components list
or
~/cloudsdk/130.0.0/google-cloud-sdk/bin/gcloud components list
Note that both versions will share the same config directory. This is generally undesirable, because there could have been changes between versions in how they treat configuration. To make different Cloud SDK versions use different gcloud configurations, set the CLOUDSDK_CONFIG environment variable to point to a different gcloud config directory. For example:
$ CLOUDSDK_CONFIG=~/.config/gcloud-legacy ~/cloudsdk/130.0.0/google-cloud-sdk/bin/gcloud
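As a convenience for CI scripts, a hedged sketch that wraps each pinned SDK in its own shell function with an isolated config directory; the paths follow the layout used above, and the config locations are assumptions:
gcloud161() {
  CLOUDSDK_CONFIG="$HOME/.config/gcloud-161" \
    "$HOME/cloudsdk/161.0.0/google-cloud-sdk/bin/gcloud" "$@"
}
gcloud130() {
  CLOUDSDK_CONFIG="$HOME/.config/gcloud-130" \
    "$HOME/cloudsdk/130.0.0/google-cloud-sdk/bin/gcloud" "$@"
}
gcloud161 components list   # 161.0.0 with its own config
gcloud130 components list   # 130.0.0 with its own config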

Install just one package globally on Julia

I have a fresh Julia installation on a machine that I want to use as a number-crunching server for various people in a lab. There seems to be this nice package called JupyterHub which makes the Jupyter Notebook interface available to various clients simultaneously. A web page which I am unable to find again suggested something like "first install IJulia globally, then install JupyterHub...".
I cannot seem to find a nice way to install ONE package globally.
Update
In Julia v0.7+, we need to use JULIA_DEPOT_PATH instead of JULIA_PKGDIR, and the LOAD_PATH looks something like this:
julia> LOAD_PATH
3-element Array{Any,1}:
Base.CurrentEnv()
Any[Base.NamedEnv("v0.7.0"), Base.NamedEnv("v0.7"), Base.NamedEnv("v0"), Base.NamedEnv("default"), Base.NamedEnv("v0.7", create=true)]
"/Users/gnimuc/Codes/julia/usr/share/julia/stdlib/v0.7"
Old Post
"first install IJulia globally, then install JupyterHub..."
I don't know whether this is true or not, but by following the steps below you can install IJulia after you have installed JupyterHub.
Install packages system-wide/globally for every user
This question has already been answered here by Stefan Karpinski, so all we need to do is use that method to install the IJulia.jl package:
There's a Julia variable called LOAD_PATH that is arranged to point at two system directories under your julia installation. E.g.:
julia> LOAD_PATH
2-element Array{Union(ASCIIString,UTF8String),1}:
"/opt/julia-0.3.3/usr/local/share/julia/site/v0.3"
"/opt/julia-0.3.3/usr/share/julia/site/v0.3"
If you install packages under either of those directories, then everyone using that Julia will see them. One way to do this is to run julia as a user who can write to those directories after doing export JULIA_PKGDIR=/opt/julia-0.3.3/usr/share/julia/site in the shell. That way Julia will use that as its package directory, and normal package commands will allow you to install packages for everyone....
Make IJulia work with JupyterHub
In order to make IJulia and JupyterHub work with each other for all users, you should copy the folder your/user/.local/share/jupyter/kernels/julia/ to /usr/local/share/jupyter/kernels/. Below are some of the steps that I used in my test Dockerfile. The code is ugly, but it works.
Steps (after you have successfully installed JupyterHub):
Note that you should do the following steps as root, and I assume that your Julia was globally installed at /opt/julia_0.4.0/.
Make our global package directory and set up JULIA_PKGDIR:
mkdir /opt/global-packages
echo 'push!(LOAD_PATH, "/opt/global-packages/.julia/v0.4/")' >> /opt/julia_0.4.0/etc/julia/juliarc.jl
export JULIA_PKGDIR=/opt/global-packages/.julia/
install "IJulia" using package manager:
julia -e 'Pkg.init()'
julia -e 'Pkg.add("IJulia")'
Copy the kernelspec to /usr/local/share/jupyter/kernels/ so that it can be used by any new user added by JupyterHub:
jupyter kernelspec list
cd /usr/local/share/ && mkdir -p jupyter/kernels/
cp -r /home/your-user-name/.local/share/jupyter/kernels/julia-0.4-your-julia-version /usr/local/share/jupyter/kernels/

How to migrate virtualenv

I have a relatively big project with many dependencies, and I would like to distribute this project around, but installing these dependencies is a bit of a pain and takes a very long time (pip install takes quite a while). So I was wondering if it is possible to migrate a whole virtualenv to another machine and have it running there.
I tried copying the whole virtualenv, but whenever I try running something, the virtualenv still uses the paths from my old machine. For instance, when I run
source activate
pserve development.ini
I get
bash: ../bin/pserve: /home/sshum/backend/bin/python: bad interpreter: No such file or directory
This is my old directory. So is there a way to have virtualenv reconfigure this path with a new path?
I tried sed -i 's/sshum/dev1/g' * in the bin directory, and it solved that issue. However, I'm getting a different issue now; my guess is that the sed changed something else.
I've confirmed that I have libssl-dev installed, but when I run python I get:
E: Unable to locate package libssl.so.1.0.0
E: Couldn't find any package by regex 'libssl.so.1.0.0'
But when I run aptitude search libssl, I see:
i A libssl-dev - SSL development libraries, header files and documentation
I also tried virtualenv --relocatable backend but no go.
Export the virtual environment
From within the virtual environment, run:
pip freeze > requirements.txt
Once on the new machine and in the new environment, copy the requirements.txt into the new project folder and run:
sudo pip install -r requirements.txt
Then you should have all the packages that were previously available in the old virtual environment.
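Putting it together on the target machine, a hedged end-to-end sketch; the env name backend matches the question, and sudo is unnecessary if you install inside the activated environment:
virtualenv backend               # or: python3 -m venv backend
source backend/bin/activate
pip install -r requirements.txt  # replay the pinned dependencies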
When you create a new virtualenv, it is configured for the computer it is running on. I even think it is configured for the specific directory it is created in. So I think you should always create a fresh virtualenv when you move your code. What might work is copying lib/pythonX.X/site-packages from your old virtualenv directory, but I don't think that is a particularly good solution.
What may be a better solution is using the pip download cache. This will at least speed up the download part of pip install. Have a look at this thread: How do I install from a local cache with pip?
The clean way seems to be with virtualenv --relocatable.
Alternatively, you can do it manually by editing the VIRTUAL_ENV path in bin/activate to reflect the change. If you choose to do so, you must also edit the first line (the #! shebang) of bin/pserve, which indicates the interpreter path.
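A hedged sketch of that manual fix; OLD and NEW are placeholder paths standing in for the actual old and new locations of the environment:
OLD=/home/sshum/backend
NEW=/home/dev1/backend
sed -i "s|$OLD|$NEW|g" "$NEW/bin/activate"   # fix the hard-coded VIRTUAL_ENV path
sed -i "1s|$OLD|$NEW|" "$NEW/bin/pserve"     # fix the #! interpreter line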