How do I install the Aerospike REST Gateway?

I want to use REST with Aerospike because it's said to be language agnostic. I'm on Ubuntu 20.04 and I'm trying to understand the installation instructions here:
https://github.com/aerospike/aerospike-rest-gateway
https://github.com/aerospike/aerospike-rest-gateway/blob/master/docs/installation-and-config.md
but it's very unclear what to do first, and they jump straight to "./gradlew build" at the start. I pasted it into the terminal without thinking, and it shows this; I have no clue:
# ./gradlew build
bash: ./gradlew: No such file or directory

There are a few ways to run the REST Gateway.
You can clone the repo's master branch and build it yourself. You can then run the jar file as shown in the readme.
make build
java -jar build/libs/aerospike-rest-gateway-<VERSION>.jar --aerospike.restclient.hostname=<aerospike-host>
Download the pre-built jar from the download page, or download it using
wget https://download.aerospike.com/artifacts/aerospike-client-rest/<VERSION>/aerospike-client-rest-<VERSION>.tgz
Untar the archive
tar -xzf aerospike-client-rest-<VERSION>.tgz
Run the jar
java -jar aerospike-client-rest-<VERSION>/as-rest-client-<VERSION>.jar --aerospike.restclient.hostname=<aerospike-host>
Use docker:
docker run -itd --rm -p 8080:8080 --name AS_Rest1 -e aerospike_restclient_hostname=<aerospike-host> aerospike/aerospike-rest-gateway:latest
Note 1: These examples assume security is disabled.
Note 2: The REST client was recently renamed the REST Gateway, which is the reason for the differing artifact names.
As for why ./gradlew build is not running: it's hard to tell without more context. Running ./gradlew build assumes you have cloned the repo and that the repo is your current working directory. If you share your CWD and the steps you have followed up to this point, I can help further.
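For reference, a minimal end-to-end sketch of the clone-and-build route (the gradlew wrapper script only exists inside the cloned repo, which is why your command failed):
git clone https://github.com/aerospike/aerospike-rest-gateway.git
cd aerospike-rest-gateway
./gradlew build
java -jar build/libs/aerospike-rest-gateway-<VERSION>.jar --aerospike.restclient.hostname=<aerospike-host>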

Related

Install MongoDB on Manjaro

I'm facing difficulties installing the MongoDB community server on Manjaro Linux.
There isn't official documentation on how to install it on Arch-based systems and Pacman can't find it in the AUR repos.
Has anyone ever tried to install it?
Here is what I did to install it.
As the package is not available in the official Arch repositories and can't be installed using pacman, you need to follow a few steps to install it.
First, you need to get the URL for the repo of prebuilt binaries from the AUR. It can be found here; at the time of writing, it was https://aur.archlinux.org/mongodb-bin.git
Simply clone the repo into your home directory or anywhere else: git clone https://aur.archlinux.org/mongodb-bin.git. Then head into the cloned directory: cd mongodb-bin.
Now all you need to do is run the makepkg -si command to build and install the package. The -s flag handles the dependencies for you, and the -i flag installs the package.
After makepkg finishes its execution, don't forget to start mongodb.service: run systemctl start mongodb and, if needed, enable it with systemctl enable mongodb.
Type mongo in the terminal; if the Mongo shell starts, you are all set.
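For convenience, here is the whole sequence from the steps above in one go (sudo may be required for the systemctl calls):
git clone https://aur.archlinux.org/mongodb-bin.git
cd mongodb-bin
makepkg -si
sudo systemctl start mongodb
sudo systemctl enable mongodb
mongo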
Later edit (8.2.2021): This package is now available in AUR.
It is available in the AUR, so you can view it with pamac using the -a flag, e.g.
pamac search -a mongodb-bin
pamac info -a mongodb-bin
Then build and install it with (this can also be done after manually cloning):
pamac build mongodb-bin
Note that there's also a package named mongodb, but mongodb-bin is a newer release (you can check the version numbers with the search or info arguments).
I've been using mongodb via docker for a couple of years.
In my experience, it's easier than installing it the regular way (assuming you already have Docker installed).
1. Ensure you have docker installed
If you don't already have it, you can install it via pacman/pamac, as it's in the official Arch/Manjaro package repositories. The easiest way is to run the following command:
sudo pacman -S docker
2. Run a single docker command
sudo docker run -d -p 27017:27017 -v ~/mongodb_data:/data/db mongo
This command will run MongoDB on port 27017 and place its data files in the folder ~/mongodb_data.
If you're running this command for the first time, it will also download all the required files.
Now you're successfully running a local instance of MongoDB, and you can connect to it with your favorite DB management tool or from your code.
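If you want to double-check that the instance is up, something like the following should work (the container ID comes from docker ps; newer images ship the mongosh shell, older ones mongo):
sudo docker ps
sudo docker exec -it <container-id> mongo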

nvidia-docker - can cuda_runtime be available while building a container?

While attempting to compile darknet during the build of a Docker container, I constantly run into the error include/darknet.h:11:30: fatal error: cuda_runtime.h: No such file or directory.
I am building the container from the instructions here: https://github.com/NVIDIA/nvidia-docker/wiki/Deploy-on-Amazon-EC2. I have a simple Dockerfile I am testing with - the relevant parts:
FROM nvidia/cuda:9.2-runtime-ubuntu16.04
...
WORKDIR /
RUN apt-get update && apt-get install -y git
RUN git clone https://github.com/pjreddie/darknet.git
WORKDIR /darknet
# Set OpenCV makefile flag
RUN sed -i '/OPENCV=0/c\OPENCV=1' Makefile
RUN sed -i '/GPU=0/c\GPU=1' Makefile
#RUN ln -s /usr/local/cuda-9.2 /usr/local/cuda
# HERE I have been playing with commands to show me the state of the docker image to try to troubleshoot the problem
RUN find / -name "cuda_runtime.h"
RUN ls /usr/local/cuda/lib64/
RUN less /usr/local/cuda/README
RUN make
Most of the documentation I see references using the NVIDIA libraries when running a container, but darknet compiles differently when built with GPU support, so I need cuda_runtime.h to be available at build time.
Perhaps I misunderstand what nvidia-docker is doing. I'm assuming nvidia-docker exists because the NVIDIA code must be installed on the actual host machine, not inside the container, and they use some mechanism to share the "native" code with the containers so the GPU can be managed. Is that correct?
Should I even be trying to build darknet when building my container, or should I install it on the host machine and then somehow make it available to the container? This seems to go against the portability of containers, but I can live with some constraints to get access to the GPU.
FROM nvidia/cuda:9.2-runtime-ubuntu16.04
Your image only has the bits and pieces of CUDA 9.2 needed to run a CUDA app, but not the bits needed to build one.
You need to use the -devel variant.
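In other words, assuming a matching devel tag exists for your CUDA version, change the first line of the Dockerfile to:
FROM nvidia/cuda:9.2-devel-ubuntu16.04
The devel images include the CUDA headers (including cuda_runtime.h) and the compiler toolchain, while the runtime images only carry what is needed to execute an already-built app.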

Add Java in Python Flask Cloud Foundry

I need to run a java command from a Python Flask application that is deployed using cf. How can I make a Java runtime available to this Python Flask app?
I tried using multi-buildpack support, but the java_buildpack expects some JAR or WAR to execute when deploying the application.
Is there any approach that would make Java available to the Python Flask app?
The last buildpack in the chain is responsible for determining the command that starts your application, which is why the Java buildpack expects a JAR/WAR to execute.
The Java buildpack, as of this writing, does not ship a supply script, so it can only run as the last buildpack when using multi-buildpack support. It looks like the Java buildpack will provide a supply script at some point in the future, but this is still being worked out here.
For now, what you can do is use the apt-buildpack and install a JRE/JDK that way.
To do this, add a file called apt.yml to the root of your project folder. In that file, put the following:
---
packages:
- openjdk-8-jre
repos:
- deb http://ppa.launchpad.net/openjdk-r/ppa/ubuntu trusty main
keys:
- https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xEB9B1D8886F44E2A
This tells the apt buildpack to add a PPA for Ubuntu Trusty from which we can get the latest OpenJDK 8. It gets installed under /home/vcap/deps/0, which puts the java executable at /home/vcap/deps/0/lib/jvm/java-8-openjdk-amd64/bin/java.
Note: The java binary is unfortunately not on the PATH, because Ubuntu manages it with update-alternatives, and we can't use that tool to put it on the PATH in the CF app container since we don't have root access.
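One workaround, sketched here on the assumption that the apt buildpack's dependencies land at index 0 (the index depends on buildpack order, so verify the actual path with cf ssh), is a .profile script at the root of your project, which Cloud Foundry sources before starting the app:
export JAVA_HOME=/home/vcap/deps/0/lib/jvm/java-8-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"
Your Python code can then invoke java without hard-coding the full path.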
After setting that up, you'd follow the normal instructions for using multiple buildpacks.
$ cf push YOUR-APP --no-start -b binary_buildpack
$ cf v3-push YOUR-APP -b https://github.com/cloudfoundry/apt-buildpack#v0.1.1 -b python_buildpack
Note: The process to push using multiple buildpacks will likely change in the future and v3-push, which is currently experimental, will go away.
Note: The example above hard codes version v0.1.1 of the apt buildpack. You should use the latest stable release, which you can find here. Using the master branch is not recommended.
One way to achieve your goal of combining Java and Python would be context-based routing. I have an example that combines Python and Node.js, but the approach is the same.
Basically, you have a second app serving one or more paths of a domain / URI.
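As a hypothetical sketch (the app, host, and domain names here are made up), you would push the Java app separately and map a path on the shared host to it:
cf push java-helper --no-route
cf map-route java-helper example.com --hostname myapp --path /java
Requests to myapp.example.com/java then reach the Java app, while all other paths continue to hit the Flask app.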

Snapcraft Placing Default Config Files

I am trying to build an application release using Snapcraft.io, and I almost have it all working. Snapcraft already compiles the source code, generates the .snap file, includes all the dependencies, and so on. However, I am stuck on how to initialize some configuration files in the SNAP_USER_DATA folder after the first install of the app. I do not want to place the files in the default read-only SNAP path, as the default parameters should be modifiable by the user; I also need to generate some additional files, like server certificates. So I need to copy some files and also run a script after the first install. Is this possible?
Thanks.
Because snaps are installed as root, it's impossible to do exactly what you ask at install time: $SNAP_USER_DATA is user-specific, so at install time it would always be root's. However, you can do this at install time with a system-wide directory such as $SNAP_DATA, using the install hook. First, create the snapcraft project:
$ snapcraft init
Created snap/snapcraft.yaml.
Edit the file to your liking or run `snapcraft` to get started
Create the hook. In our case we'll just create a new file in $SNAP_DATA, but you could do whatever you wanted.
$ mkdir -p snap/hooks
$ echo "touch \$SNAP_DATA/foo" >> snap/hooks/install
$ chmod a+x snap/hooks/install
Build the snap.
$ snapcraft
Preparing to pull my-part
Pulling my-part
Preparing to build my-part
Building my-part
Staging my-part
Priming my-part
Snapping 'my-snap-name' |
Snapped my-snap-name_0.1_amd64.snap
Install the snap. This will run the install hook.
$ sudo snap install my-snap-name_0.1_amd64.snap --devmode --dangerous
my-snap-name 0.1 installed
Notice a file was created in $SNAP_DATA.
$ ls /var/snap/my-snap-name/current
foo
The only way to get similar functionality for $SNAP_USER_DATA is to wrap your real command in a script that creates the config. That command is then run by the user, so you get the $SNAP_USER_DATA you intend. Of course, this isn't at install time.
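A minimal sketch of such a wrapper (the file name, config name, and real command here are hypothetical):
#!/bin/sh
# Seed per-user config on first run, then exec the real app
if [ ! -f "$SNAP_USER_DATA/config.ini" ]; then
    cp "$SNAP/default-config.ini" "$SNAP_USER_DATA/config.ini"
fi
exec "$SNAP/bin/my-real-app" "$@"
You would point the command for your app in snapcraft.yaml at this script instead of the real binary.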

Docker workflow for scientific computing

I'm trying to imagine a workflow that could be applied in a scientific work environment. My work involves scientific coding, basically with Python, pandas, NumPy and friends. Sometimes I have to use modules that are not common standards in the scientific community, and sometimes I have to integrate compiled code into my chain of simulations. Most of the time, the code I run is parallelised with IPython notebook.
What do I find interesting about docker?
The fact that I could create a Docker image containing my code and its working environment. I could then send it to my colleagues without asking them to change their work environment, e.g., install an outdated version of a module, so that they can run my code.
A rough draft of the workflow I have in mind goes something as follows:
Develop locally until I have a version I want to share with somebody.
Build a Docker image, possibly with a hook from a git repo.
Share the image.
Can somebody give me some pointers on what I should take into account to develop this workflow further? A point that intrigues me: can code running in a Docker container launch parallel processes on the several cores of the machine, e.g., an IPython notebook connected to a cluster?
Docker can launch multiple processes/threads on multiple cores. Running multiple processes in one container may require a supervisor (see: https://docs.docker.com/articles/using_supervisord/).
You should probably build an image that contains the things you always use, and use it as a base for all your projects. (This would save you the pain of writing a complete Dockerfile each time.)
Why not develop directly in a container and use the commit command to save your progress to a local Docker registry? Then share the final image with your colleagues.
How to make a local registry : https://blog.codecentric.de/en/2014/02/docker-registry-run-private-docker-image-repository/
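A rough sketch of that commit-and-share loop (the container, image, and registry names are made up):
docker commit my-dev-container mysim:latest
docker tag mysim:latest registry-host:5000/mysim:latest
docker push registry-host:5000/mysim:latest
docker pull registry-host:5000/mysim:latest
The last command is what your colleague runs to fetch the image and docker run it as usual.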
Even though you'll have a full container, I think a package manager like conda can still be a solid part of the base image for your workflow.
FROM ubuntu:14.04
RUN apt-get update && apt-get install curl -y
# Install miniconda
RUN curl -LO http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh
RUN bash Miniconda-latest-Linux-x86_64.sh -p /miniconda -b
RUN rm Miniconda-latest-Linux-x86_64.sh
# Put conda on the PATH and bring it up to date
ENV PATH=/miniconda/bin:${PATH}
RUN conda update -y conda
(from a nice example showing docker + miniconda + flask)
With regard to doing source activate <env> in the Dockerfile: RUN uses /bin/sh by default, which doesn't support source, so you need to:
RUN /bin/bash -c "source activate <env> && <do something in the env>"