GitLab Pages + Doxygen + Graphviz creates graphs with corrupted Unicode characters

I'm using GitLab Pages to host Doxygen-generated API documentation for my project. I also use Graphviz to create dependency graphs. My CI script installs the packages and builds the documentation:
pages:
  stage: build
  image: alpine
  script:
    - apk update && apk add doxygen
    - apk add graphviz
    - doxygen doxy/dox_config
    - mv docs/html/ public/
  artifacts:
    paths:
      - public
  only:
    - master
  dependencies: []
The CI script runs without any errors other than a Doxygen complaint that it can't find LaTeX and dvips, neither of which should affect the Graphviz pictures. In the generated graphs, however, every label is rendered as a string of corrupted characters instead of readable text (image omitted).
I'm not really sure what the problem is or how to fix it. Why are all the characters wrong?

It turns out the issue is with the Docker image. Alpine doesn't include the required fonts, while Debian-based images ship all the prerequisites. There is almost certainly a way to install the fonts on Alpine, but I simply switched to a Debian-based Docker image. Here is a working YML script:
pages:
  stage: build
  image: ubuntu:trusty
  script:
    - export DEBIAN_FRONTEND=noninteractive
    - apt-get -yq update
    - apt-get -yq install graphviz
    - apt-get -yq install doxygen
    - doxygen doxy/dox_config
    - mv docs/html/ public/
  artifacts:
    paths:
      - public

Installing either the ttf-freefont or the ttf-ubuntu-font-family package will fix the problem. Here is my Dockerfile:
FROM alpine:3.6
RUN apk --update add \
        doxygen \
        graphviz \
        ttf-freefont \
    && rm -rf /var/cache/apk/*
ttf-ubuntu-font-family is a narrower font, so your boxes will come out a bit smaller.
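If you would rather keep the stock alpine CI image than maintain a custom Dockerfile, the same fix can be applied directly in the pipeline. A minimal sketch, assuming ttf-freefont is still the package name for your Alpine release:

pages:
  stage: build
  image: alpine
  script:
    # the font package is what fixes the corrupted labels
    - apk update && apk add doxygen graphviz ttf-freefont
    - doxygen doxy/dox_config
    - mv docs/html/ public/
  artifacts:
    paths:
      - public
  only:
    - master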

Related

Can't find dependencies when deploying a function with Google Cloud Build

So I'm trying to create a Google Cloud Function that imports a Python package called pdftotext. In order to pip install pdftotext you first have to install some system dependencies, namely:
sudo apt install build-essential libpoppler-cpp-dev pkg-config python3-dev
My solution is to create a requirements.txt and a cloudbuild.yml file, upload them to Google Source Repositories, and then use a Cloud Build trigger that listens to the repo and deploys the function when something is pushed.
My cloudbuild.yml file looks like this:
steps:
  # Install OS Dependencies
  - name: "docker.io/library/python:3.9"
    id: "OS Dependencies"
    entrypoint: bash
    args:
      - '-c'
      - |
        apt-get update
        apt-get install -y build-essential libpoppler-cpp-dev pkg-config python3-dev
        apt-get install -y pip
        pip3 install -t /workspace/lib -r requirements.txt

  # Deploy Function
  - name: "gcr.io/cloud-builders/gcloud"
    id: "Deploy Function"
    args:
      [
        "functions",
        "deploy",
        "pdf_handler",
        "--entry-point",
        "main",
        "--source",
        ".",
        "--runtime",
        "python39",
        "--memory",
        "256MB",
        "--service-account",
        "my_service_account",
        "--trigger-http",
        "--timeout",
        "540",
        "--region",
        "europe-west1",
      ]

options:
  logging: CLOUD_LOGGING_ONLY
The trigger tries to deploy the function, but I keep getting this error even though I installed the OS dependencies:
"Deploy Function": pdftotext.cpp:3:10: fatal error: poppler/cpp/poppler-document.h: No such file or directory
It seems like the function deployment can't find the location where the dependencies are installed.
I've tried installing and deploying in the same step but still get the same error.
Any advice is appreciated.
Thanks in advance!
When you deploy with Cloud Functions, ONLY your code is taken and packaged (in a container) by the service.
During that packaging, another Cloud Build job is invoked to build the container (with Buildpacks.io) and then deploy it. That build doesn't care which APT packages you installed in your own build environment, but your /workspace/lib directory is uploaded to that new Cloud Build along with the rest of the source.
You should therefore update the requirements.txt of the Cloud Function you deploy so that it points at the /lib directory, which stops pip from looking for the external package (and hitting its compilation requirements).
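As a sketch of what that can look like (the wheel-building step and the --find-links layout below are my assumptions, not part of the original answer): have the first build step produce pre-built wheels inside the source tree, then point pip at that directory from requirements.txt, so the Cloud Functions build installs the binary wheel instead of compiling against poppler:

# In the "OS Dependencies" step, build wheels instead of installing packages:
pip3 wheel -w /workspace/lib -r requirements.txt

# requirements.txt -- make pip look in ./lib before querying PyPI:
--find-links ./lib
pdftotext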

Error in OpenShift: standard_init_linux.go:219: exec user process caused: no such file or directory

I am running 90 microservices in OpenShift, and a few of the services are in CrashLoopBackOff; the logs show the following error message.
Error from oc logs -f:
"standard_init_linux.go:219: exec user process caused: no such file or directory"
From oc describe: is there an issue with the image? The describe output shows:
"Container image "IMAGE_TAG" already present on machine"
Given the lack of information, it is impossible to say exactly where the problem is.
I have found some similar errors; here is the solution that best matches the description of your problem.
There, the key was replacing the argonautica crate with rust-argon2 and modifying the Dockerfile:
FROM rust AS build
WORKDIR /usr/src
RUN apt-get update && apt-get upgrade -y && apt-get install -y build-essential git clang llvm-dev libclang-dev libssl-dev pkg-config libpq-dev brotli
RUN USER=root cargo new loxe-api
WORKDIR /usr/src/loxe-api
COPY Cargo.toml Cargo.lock ./
COPY data ./data
COPY migrations ./migrations
RUN cargo build --release

# Copy the source and build the application.
COPY src ./src
ENV PKG_CONFIG_ALLOW_CROSS=1
ENV OPENSSL_INCLUDE_DIR="/usr/include/openssl"
RUN cargo install --path .

FROM debian:buster-slim
COPY --from=build /usr/local/cargo/bin/loxe-api .

# standard env
COPY .env ./.env
COPY data ./data
COPY migrations ./migrations
RUN apt-get update && apt-get install -y libssl-dev pkg-config libpq-dev brotli
CMD ["/loxe-api"]
See also these similar issues:
second one
third one
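More generally, the "no such file or directory" from standard_init_linux.go means the kernel could not execute the container's entrypoint: the binary itself, its script interpreter (e.g. a shebang line with CRLF endings), or its dynamic linker (e.g. a glibc binary running on a musl-based image) is missing. A quick way to check, sketched here assuming the image contains a shell and the ldd tool, and using the /loxe-api entrypoint from the Dockerfile above:

# Does the entrypoint exist, and can its shared-library dependencies be resolved?
docker run --rm --entrypoint sh IMAGE_TAG -c 'ls -l /loxe-api && ldd /loxe-api'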

Bitbucket Pipelines: Upload from a subdirectory to staging / production server

I am uploading my project using Bitbucket's Pipelines service and it's working fine. However, I only need to upload the files from a specific subdirectory.
My directory structure is as follows:
Repository:
- Appz
  - Android
  - iOS
- Designs
  - Appz
  - Web
- Web
  - Html
  - Laravel
I need to upload the files from Repository/Web only (not from any other directory), but the pipeline service uploads the entire repository to the server.
bitbucket-pipelines.yml
image: php:7.2
pipelines:
  branches:
    master:
      - step:
          script:
            - apt-get update && apt-get install -y unzip git-ftp
            - export PROJECT_NAME=Web/
            - git-ftp init --user $FTP_USERNAME --passwd $FTP_PASSWORD ftp://domain/path
I found the solution. The only command that needs modification is the git-ftp command. I also found that the export command serves no purpose here, so I removed it and everything still worked as required.
Here is how it goes:
- apt-get update && apt-get install -y unzip git-ftp
- git-ftp init --syncroot Web --user $FTP_USERNAME --passwd $FTP_PASSWORD ftp://domain/path
The --syncroot <PATH/TO/DIRECTORY> parameter is all it takes to set the source location from which the pipeline service fetches and uploads files.
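For completeness, the resulting bitbucket-pipelines.yml looks like this (a sketch assembled from the original file above):

image: php:7.2
pipelines:
  branches:
    master:
      - step:
          script:
            - apt-get update && apt-get install -y unzip git-ftp
            # --syncroot limits the upload to the Web/ subdirectory
            - git-ftp init --syncroot Web --user $FTP_USERNAME --passwd $FTP_PASSWORD ftp://domain/path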
I hope this helps.
Thank you.

How to install VS Code in Alpine Linux

I have an operating environment that is Alpine Linux only, and I need to install VS Code. How can VS Code be run on Alpine Linux?
Dockerfile:
FROM node:14.19.0-alpine
RUN set -x \
    && sed -i 's/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g' /etc/apk/repositories \
    && apk upgrade \
    && apk --no-cache add alpine-sdk bash libstdc++ libc6-compat \
    && npm config set python python3
RUN git clone --recursive https://github.com/cdr/code-server.git
# a plain "RUN cd code-server" would not persist across layers; WORKDIR does
WORKDIR /code-server
RUN yarn global add code-server
ENV PASSWORD=changeme
ENTRYPOINT code-server --bind-addr 0:8443
Commands:
docker build . -t vscode
docker run -d -e PASSWORD=111111 -p 8443:8443 vscode:latest
Then open http://hostname:8443 in a browser.
Download it from the Flatpak repos; it will run natively in a GNOME SDK environment.
Alternatively, use a self-hosted environment such as Theia (https://www.theia-ide.org/index.html) or coder.com's editor (https://coder.com/). I've never tried them, I use the Flatpak one, but they seem interesting (you can "build" your own editor in a Node environment).
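For the Flatpak route, a minimal sketch (assuming Flatpak is available in your Alpine repositories and that com.visualstudio.code is still the Flathub application ID):

apk add flatpak
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub com.visualstudio.code
flatpak run com.visualstudio.code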
Apologies for the necrobump, but as Marco suggested, coder.com has moved the software to GitHub. code-server is quite literally VS Code as a web application. I have been using it for about half a year and it is quite well developed. Alpine support is still spotty, but I recall getting a few releases to work well a while back when I ran Alpine as my main system.

How to install a package from Alpine aports

I have been trying to install a package that exists in Alpine aports, specifically that one, but I cannot find out how. Is it even possible? If so, how?
The filebeat package you refer to in the question is located in the edge branch of the testing repository. That repository is not enabled in the Alpine container by default.
To install the filebeat package on Alpine, we need to:
1. Add testing repo:
/ # echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing/" >> /etc/apk/repositories
2. Install the filebeat package:
/ # apk add --no-cache filebeat
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/edge/testing/x86_64/APKINDEX.tar.gz
WARNING: This apk-tools is OLD! Some packages might not function properly.
(1/1) Installing filebeat (5.6.3-r0)
Executing busybox-1.27.2-r7.trigger
OK: 20 MiB in 12 packages
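Note that adding edge/testing as an untagged repository lets apk consider every package in it during any install or upgrade. A common way to narrow that down, sketched here assuming your apk version supports repository tags, is to register the repo under a tag and pin only filebeat to it:

/ # echo "@testing http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
/ # apk add --no-cache filebeat@testing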