Can't find dependencies when deploying function in google cloud build - docker-compose

So I'm trying to create a Google Cloud Function that imports a Python package called pdftotext. In order to pip install pdftotext, you first have to install some system dependencies, i.e.:
sudo apt install build-essential libpoppler-cpp-dev pkg-config python3-dev
My solution is to create a requirements.txt and a cloudbuild.yml file, upload them to Google Source Repositories, and use a Cloud Build trigger that listens to the repo and deploys the function whenever something is pushed.
My cloudbuild.yml file looks like this:
steps:
  # Install OS Dependencies
  - name: "docker.io/library/python:3.9"
    id: "OS Dependencies"
    entrypoint: bash
    args:
      - '-c'
      - |
        apt-get update
        apt-get install -y build-essential libpoppler-cpp-dev pkg-config python3-dev
        apt-get install -y pip
        pip3 install -t /workspace/lib -r requirements.txt
  # Deploy Function
  - name: "gcr.io/cloud-builders/gcloud"
    id: "Deploy Function"
    args:
      [
        "functions", "deploy", "pdf_handler",
        "--entry-point", "main",
        "--source", ".",
        "--runtime", "python39",
        "--memory", "256MB",
        "--service-account", "my_service_account",
        "--trigger-http",
        "--timeout", "540",
        "--region", "europe-west1",
      ]
options:
  logging: CLOUD_LOGGING_ONLY
The trigger tries to deploy the function, but I keep getting this error even though I installed the OS dependencies:
"Deploy Function": pdftotext.cpp:3:10: fatal error: poppler/cpp/poppler-document.h: No such file or directory
It seems like the function deployment can't find the location where the dependencies are installed. I've tried installing and deploying in the same step, but I still get the same error.
Any advice is appreciated. Thanks in advance!

When you deploy with Cloud Functions, ONLY your code is taken and packaged (into a container) by the service.
During that packaging, another Cloud Build job is triggered to build the container (with Buildpacks.io) and then deploy it. That build doesn't care that you installed some APT packages in your first build environment; however, your /lib directory is uploaded along with the source to that new Cloud Build.
You should update the requirements.txt of the Cloud Functions code that you deploy to point to the /lib directory, to prevent pip from looking for the external package (and its compilation requirements).
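For example (a sketch, not from the original answer: the wheel-based layout and the --find-links directive are my assumptions), the first build step could build wheels into /workspace/lib instead of installing there, and the requirements.txt shipped with the function source could tell pip to look in that directory first:

# In the "OS Dependencies" step, build wheels instead of installing:
#   pip3 wheel -w /workspace/lib -r requirements.txt
# Then in the requirements.txt uploaded with the function source:
--find-links ./lib
pdftotext

This way pdftotext is compiled once, in the step where poppler is present, and the function's buildpack installs the pre-built wheel instead of trying to compile from source.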

Related

Error in openshift standard_init_linux.go:219: exec user process caused: no such file or directory

I am running 90 microservices in OpenShift, and a few of the services are in CrashLoopBackOff, with the logs showing the following error message.
Error:
oc logs -f:
"standard_init_linux.go:219: exec user process caused: no such file or directory"
oc describe:
Is there an issue with the image? The describe output shows:
"Container image "IMAGE_TAG" already present on machine"
Given the lack of information, it is impossible to say exactly where the problem lies, but I have found some similar errors. Here is one of the solutions that best matches the description of your problem:
There, the key to solving the problem was replacing the argonautica crate with rust-argon2 and modifying the Dockerfile:
FROM rust AS build
WORKDIR /usr/src
RUN apt-get update && apt-get upgrade -y && apt-get install -y build-essential git clang llvm-dev libclang-dev libssl-dev pkg-config libpq-dev brotli

RUN USER=root cargo new loxe-api
WORKDIR /usr/src/loxe-api
COPY Cargo.toml Cargo.lock ./
COPY data ./data
COPY migrations ./migrations
RUN cargo build --release

# Copy the source and build the application.
COPY src ./src
ENV PKG_CONFIG_ALLOW_CROSS=1
ENV OPENSSL_INCLUDE_DIR="/usr/include/openssl"
RUN cargo install --path .

FROM debian:buster-slim
COPY --from=build /usr/local/cargo/bin/loxe-api .
# standard env
COPY .env ./.env
COPY data ./data
COPY migrations ./migrations
RUN apt-get update && apt-get install -y libssl-dev pkg-config libpq-dev brotli
CMD ["/loxe-api"]
See also these similar issues:
second one
third one

Running sbt in a Docker Container

I am trying to use GitHub Actions for my Scala project and created a Docker workflow for it. Basically, I am trying to install sbt into my container and run the project.
Dockerfile looks like this:
FROM centos:centos8
ENV SCALA_VERSION 2.13.1
ENV SBT_VERSION 1.5.2
RUN yum install -y epel-release
RUN yum update -y && yum install -y wget
# INSTALL JAVA
RUN yum install -y java-11-openjdk
# INSTALL SBT
RUN wget http://dl.bintray.com/sbt/rpm/sbt-${SBT_VERSION}.rpm
RUN yum install -y sbt-${SBT_VERSION}.rpm
RUN wget -O /usr/local/bin/sbt-launch.jar http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/$SBT_VERSION/sbt-launch.jar
WORKDIR /root
EXPOSE 8080
RUN sbt compile
CMD sbt run
But when I push anything, I get the following error:
The command '/bin/sh -c wget http://dl.bintray.com/sbt/rpm/sbt-${SBT_VERSION}.rpm' returned a non-zero code: 8
When I check the link manually (by substituting the sbt version), I see that Bintray indeed responds with a 403 Forbidden error, but status.bintray.com says all systems are operational.
Am I doing something wrong or is something wrong with bintray?
Forbidden doesn't mean non-operational. I think that URL is incorrect, as sbt is no longer hosted on Bintray but on JFrog. Please see the section on CentOS in the sbt installation docs, which starts with "remove old Bintray repo file":
https://www.scala-sbt.org/1.x/docs/Installing-sbt-on-Linux.html
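At the time of writing, the CentOS section of that page boils down to something like this (verify against the page itself, since these locations have moved before):

# remove old Bintray repo file
sudo rm -f /etc/yum.repos.d/bintray-rpm.repo
# add the sbt repo now hosted on scala-sbt.org, then install
curl -L https://www.scala-sbt.org/sbt-rpm.repo > sbt-rpm.repo
sudo mv sbt-rpm.repo /etc/yum.repos.d/
sudo yum install sbt

In the Dockerfile, that would replace the two wget lines and the yum install of the downloaded .rpm.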

Ansible and docker-compose pull / up -d

I'm trying to run these commands:
docker-compose pull
docker-compose up -d
docker-compose -f other_file.yaml pull
docker-compose -f other_file.yaml up -d
Here's my Ansible code for this specific task:
- name: Run docker-compose
  docker_compose:
    project_src: "{{ my_project_path }}"
    files:
      - docker-compose.yaml
      - other_file.yaml
I'm getting the error below:
Failed to import the required Python library (Docker SDK for Python: docker (Python >= 2.7) or docker-py (Python 2.6)) on managed's Python /usr/bin/python3.
Please read module documentation and install in the appropriate location.
If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter, for example via `pip install docker` or `pip install docker-py` (Python 2.6).
The error was: No module named 'docker'
The thing is, the Python interpreter is set in ansible.cfg as /usr/bin/python3, which is the correct one. The installed Python 3 version is 3.6.9, and the Python module "docker" is installed.
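One way to double-check is to import the module with the exact interpreter Ansible is configured to use, both with and without privilege escalation (sudo can resolve to a different environment):

/usr/bin/python3 -c "import docker; print(docker.__version__)"
sudo /usr/bin/python3 -c "import docker"   # what a task sees when it runs with become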
Any idea where this error comes from? I've been reading documentation and other posts all day.
Thanks!
I finally understood why the problem occurred.
I was installing the Python library with pip3 install <lib>; the catch is that this will not work if you're using sudo to run some modules in Ansible, because sudo pip3 installs into a different environment than pip3 on its own.
So, the quick solution? sudo pip3 install docker docker-compose
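If you'd rather keep the fix in the playbook, a sketch of the same idea with Ansible's pip module (options as in Ansible 2.9; adjust to your version):

- name: Install Docker SDK for Python for the escalated interpreter
  become: true
  pip:
    name:
      - docker
      - docker-compose
    executable: pip3

Because the task runs with become: true, the libraries land in the same environment that the sudo-run docker_compose module imports from.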

Unable to install ccxt.pro via pipenv - pipenv requires an #egg fragment for version controlled dependencies

I'm trying to install ccxt.pro via pipenv. I usually use the Python venv module to create virtual environments, but I'm trying to work with pipenv too.
According to the ccxt.pro documentation, the package should be installed via pip3 over HTTPS or SSH; with my GitHub user and password I am able to install it.
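With plain pip3, the install that works for me looks like this (it prompts for my GitHub credentials; the SSH variant uses git+ssh:// instead):

pip3 install "git+https://github.com/kroitor/ccxt.pro.git#subdirectory=python"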
When I tried to install this library with pipenv, I received an Installation Failed error:
pipenv install git+https://github.com/kroitor/ccxt.pro.git#subdirectory=python
Installing git+https://github.com/kroitor/ccxt.pro.git#subdirectory=python…
WARNING: pipenv requires an #egg fragment for version controlled dependencies. Please
install remote dependency in the form git+https://github.com/kroitor/ccxt.pro.git#egg=<package-name>.
✘ Installation Failed
I installed all the dependencies from setup.py, but the problem persists. I also tried to apply this, but it gets stuck on Installing….
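From the warning, the form pipenv expects would presumably be something like this (the egg name ccxtpro is my guess; the &-joined fragment is standard pip VCS syntax):

pipenv install "git+https://github.com/kroitor/ccxt.pro.git#egg=ccxtpro&subdirectory=python"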
Questions:
How can I install ccxt.pro via pipenv?
Why am I not able to install it the same way like with pip install command?

Deploying spree on AWS

I'm trying to deploy a Spree application on AWS. After setting up Elastic Beanstalk, I added this .config file to my_project/.ebextensions/:
packages:
  yum:
    git-core: []
container_commands:
  bundle:
    command: "gem install bundle"
  assets:
    command: "bundle exec rake assets:precompile"
  db:
    command: "bundle exec rake db:migrate"
    leader_only: true
Then I use git aws.push to deploy my app, only to get this error message:
Could not find rake-10.1.0 in any of the sources (Bundler::GemNotFound)
Double-checking my gem set with
bundle show rake
gives me:
... /gems/rake-10.1.0
while looking at the log file from AWS I find this error:
sh: git: command not found
Git error: command `git clone 'https://github.com/spree/spree.git'
What am I doing wrong?
You'll need to ensure that git is installed on the server.
Try creating a file called:
.ebextensions/YOUR_APPLICATION_NAME.config
which contains:
packages:
  yum:
    git: []
This will install git with yum as part of your deployment.
Another option is to use spree from a gem instead of sourcing it from git.
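For instance (a sketch; the version constraint is illustrative, not taken from your Gemfile), the Gemfile would reference the released gem rather than the GitHub source:

# use the spree gem published on RubyGems...
gem 'spree', '~> 2.1'
# ...instead of cloning it from GitHub at deploy time:
# gem 'spree', github: 'spree/spree'

That removes the need for git on the instance entirely.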
For more information, check out this article on the AWS Blog about deploying Ruby Applications to Elastic Beanstalk.