Automatically install services from a folder when starting

I'm currently running ArangoDB using Docker and I want to be able to start with a clean slate just by restarting my containers.
I have mounted volumes in Docker where I want the code of my services to be mounted.
How can I have ArangoDB install those services automatically? I want to be able to edit the code in the volume so I can develop my services without having to upload them again. It is also important that I can run VCS directly on the mounted volume from my client machine.

The ArangoDB container has script hooks that can be used in derived containers by placing files in specific directories:
FROM arangodb/testdrivearangodocker
MAINTAINER Frank Celler <info@arangodb.com>
COPY test.js /docker-entrypoint-initdb.d
COPY test.sh /docker-entrypoint-initdb.d
COPY dumps /docker-entrypoint-initdb.d/dumps
COPY verify.js /
As we demonstrate in this test container:
the dumps directory will be restored using arangorestore,
.js files will be executed using arangosh,
.sh files will be executed.
This script mechanism is implemented in this part of the docker entrypoint script.
With ArangoDB 3.3 you can use the old foxx-manager to install services; from ArangoDB 3.4 on you can use foxx-cli for that purpose.
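As a sketch, once the container is up you could install a service and switch it to development mode with foxx-cli. The mount point /my-service and the source directory ./my-service-src are placeholders, and the commands rely on foxx-cli's defaults (server http://localhost:8529, database _system). In development mode the server re-reads the service sources on every request, so if the apps directory (in the official image /var/lib/arangodb3-apps) sits on a mounted volume, edits there take effect without re-installing. The same commands could also be wrapped into a .sh init script as described above.
npm install -g foxx-cli
# install the service sources under the mount point /my-service
foxx install /my-service ./my-service-src
# switch it to development mode so the sources are re-read on each request
foxx set-dev /my-service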

Related

Use init container for running commands in the actual pod

I need to install some libraries in my pod before it starts working as expected.
My use case: I need some libraries that will support SMB (samba), and the image that I have to use does not have it installed.
Unfortunately, exec'ing into the actual pod and running commands does not seem to be a very good idea.
Is there a way by which I can use an init-container to install libsmbclient-dev in my ubuntu pod?
Edit: Some restrictions in my case.
I use a Helm chart to install my app (Nextcloud), so I guess I cannot use a custom image (as far as I know, we cannot use our own images in an existing Helm chart). This would have been the best solution.
I cannot run commands in the Kubernetes values.yaml since I do not use kubectl to install my app. I also need to restart apache2 after I install the library, and unfortunately restarting apache2 restarts the pod, effectively making the whole installation meaningless.
Since the Nextcloud Helm chart allows the use of init containers, I wondered if that could be used, but as far as I understand init containers, this is not possible (?).
You should build your own container image - e.g. with Docker - and push it to a container registry suitable for your cluster, e.g. Docker Hub, AWS ECR, Google Artifact Registry, etc.
First install Docker (https://docs.docker.com/get-docker/).
Create an empty directory and change into it.
Then create a file named Dockerfile with the following content:
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y libsmbclient-dev \
&& rm -rf /var/lib/apt/lists/*
Execute
docker build -t myimage:latest .
This will download the Ubuntu base image and build a new container image in which the commands from the RUN instruction are executed. The image name will be myimage and the tag will be latest.
Then push your image with docker push to the repository of your choice.
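As a sketch, pushing to Docker Hub would look like this (myaccount is a placeholder for your own Docker Hub namespace):
# give the image a name that includes your registry namespace
docker tag myimage:latest myaccount/myimage:latest
docker push myaccount/myimage:latest
# then point your Helm chart's image values at myaccount/myimage:latest (if the chart allows overriding the image)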
See also Docker best practices:
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/

What's the fastest way to install and set up TYPO3 locally?

I want to install and set up TYPO3 on my local machine. What's the best practice and fastest way to do so?
To run TYPO3 locally you need a web server on your machine.
This can be done in different ways:
Native Web Server, PHP and database on a Linux based machine
Virtual Machine (VirtualBox, VMWare, Parallels, etc.)
Vagrant
Docker
Currently the fastest way for a "non power user", in my opinion, is ddev.
ddev is a user-friendly way to run a complete environment for TYPO3 on top of Docker. It runs on Linux, macOS and Windows (minimum version 10, Hyper-V recommended) and brings along all the technologies you need.
Install Docker and ddev, see https://ddev.readthedocs.io/en/stable/
Create a folder for your installation, e.g. ~/Websites/my-website/ or C:\Websites\my-website\ and go into it.
Run ddev config and set these three options in the dialog:
Project name (default is your folder name): Whatever you like
Docroot location: public, and confirm creating it
Project type: typo3
Run ddev start to start the Docker containers and enter your root password so the hosts entry can be set (for accessing the site via a local domain)
Run ddev composer create typo3/cms-base-distribution ^9 and say yes for overwriting
Run ddev config again and just hit enter for every dialog to create a file which provides the DB credentials for your TYPO3 installation
Run ddev exec vendor/bin/typo3cms install:setup --no-interaction --admin-user-name=admin --admin-password=password --site-setup-type=site
That's all, you have a running TYPO3 instance on your local machine.
You can access it by using <project-name>.ddev.site in your browser; in our example it should be http://my-website.ddev.site. To get into the TYPO3 backend you only need to enter the credentials admin:password at http://my-website.ddev.site/typo3.
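Condensed into a single shell session, the steps above look roughly like this (my-website is just an example name, and the ddev flags are assumptions based on the interactive dialog - check ddev config --help for your ddev version):
mkdir -p ~/Websites/my-website && cd ~/Websites/my-website
# non-interactive equivalent of the ddev config dialog
ddev config --project-type=typo3 --docroot=public --create-docroot
ddev start
ddev composer create typo3/cms-base-distribution "^9"
# re-run config so the DB credentials file for TYPO3 gets written
ddev config --project-type=typo3 --docroot=public
ddev exec vendor/bin/typo3cms install:setup --no-interaction --admin-user-name=admin --admin-password=password --site-setup-type=site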
For troubleshooting go to:
https://ddev.readthedocs.io/en/stable/users/troubleshooting/
https://docs.typo3.org/typo3cms/InstallationGuide/Troubleshooting/Index.html
https://docs.typo3.org/typo3cms/ContributionWorkflowGuide/Appendix/SettingUpTypo3Ddev.html

nvidia-docker - can cuda_runtime be available while building a container?

While attempting to compile darknet during the build of a Docker container, I constantly run into the error include/darknet.h:11:30: fatal error: cuda_runtime.h: No such file or directory.
I am building the container from the instructions here: https://github.com/NVIDIA/nvidia-docker/wiki/Deploy-on-Amazon-EC2. I have a simple Dockerfile I am testing with - the relevant parts:
FROM nvidia/cuda:9.2-runtime-ubuntu16.04
...
WORKDIR /
RUN apt-get install -y git
RUN git clone https://github.com/pjreddie/darknet.git
WORKDIR /darknet
# Set OpenCV makefile flag
RUN sed -i '/OPENCV=0/c\OPENCV=1' Makefile
RUN sed -i '/GPU=0/c\GPU=1' Makefile
#RUN ln -s /usr/local/cuda-9.2 /usr/local/cuda
# HERE I have been playing with commands to show me the state of the docker image to try to troubleshoot the problem
RUN find / -name "cuda_runtime.h"
RUN ls /usr/local/cuda/lib64/
RUN less /usr/local/cuda/README
RUN make
Most of the documentation I see references using the NVIDIA libraries when running a container, but darknet compiles differently when built with GPU support, so I need cuda_runtime.h available at build time.
Perhaps I misunderstand what nvidia-docker is doing - I'm assuming that nvidia-docker exists because the NVIDIA code must be installed on the actual host machine and not inside the container, and they use some mechanism to share the "native" code with the containers so the GPU can be managed - is that correct?
Should I even be trying to build darknet when building my container or should I be installing it on the host machine, then making it available somehow to the container? This seems to go against the portability of the containers but I can live with some constraints to get access to the GPU.
FROM nvidia/cuda:9.2-runtime-ubuntu16.04
Your image only has the bits and pieces of CUDA 9.2 needed to run a CUDA app, but not the bits needed to build one.
You need to use the -devel variant.
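A minimal sketch of the change, assuming your Dockerfile sits in the current directory (the image tag darknet-gpu is just an example):
# switch the base image from the runtime variant to the devel variant,
# which ships the CUDA headers and toolchain needed to compile darknet
sed -i 's|FROM nvidia/cuda:9.2-runtime-ubuntu16.04|FROM nvidia/cuda:9.2-devel-ubuntu16.04|' Dockerfile
docker build -t darknet-gpu .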

Vagrant Berkshelf - Shelf Path?

Is it possible to set the path where the berkshelf plugin puts the cookbooks it installs? (As in the .berkshelf folder)
I am running Windows 7.
I am currently trying to install a MySQL server using an Opscode cookbook on a VM, and here at work they have the %HOMEDRIVE% system variable set to a network drive. So when Berkshelf starts at the beginning of the Vagrantfile, it pushes the cookbooks to the network drive, which makes it slow and, well, it's not where it should be. Is there a fix for this?
VirtualBox did this as well, but I fixed it by altering the settings. I tried looking for some sort of equivalent setting for Berkshelf, but the closest I got was for standalone Berkshelf (that's not the Vagrant plugin); it appears you can set this environment variable:
ENV['BERKSHELF_PATH']
Found here:
http://www.rubydoc.info/github/RiotGames/berkshelf/Berkshelf#berkshelf_path-class_method
I need the cookbooks it reads from the Berksfile to be stored on my laptop's local drive instead, as in my scenario I cannot have the mobility of the VM limited to the building because of files that are stored on the network.
Any insight would be much appreciated.
Perhaps it's better to use standalone Berkshelf over the Vagrant plugin?
Thanks.
If you want the portability - a full chef-repo ready for chef-solo runs - you are better off using standalone Berkshelf instead of the vagrant-berkshelf plugin, which is not that flexible.
For complex cookbooks, I prefer standalone Berkshelf as it allows me to run berks install --path chef/cookbooks to copy all required cookbooks from ~/.berkshelf/cookbooks; then I can just tar the whole thing and transfer it to other machines for the same chef-solo run. Some people use Capistrano to automate the tar and scp/rsync over the network; I just use rsync/scp ;-)
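As a rough sketch of that workflow (the BERKSHELF_PATH value and target host are placeholders; on Windows cmd you would use set BERKSHELF_PATH=D:\berkshelf instead of export):
# keep Berkshelf's cache on a local drive instead of %HOMEDRIVE%
export BERKSHELF_PATH=/home/me/.berkshelf
# resolve the Berksfile and copy the cookbooks into the chef-repo
berks install --path chef/cookbooks
# bundle everything up and ship it to the machine that runs chef-solo
tar czf chef-repo.tar.gz chef
scp chef-repo.tar.gz user@target-host:/tmp/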
HTH

Issues with meteor app on vagrant share

I have a Vagrant VM (VirtualBox) set up with Meteor. My host and guest are both Ubuntu. The VM contains a vboxsf shared folder set up through the Vagrantfile. The behavior I am noticing is similar to an NFS mount.
I am able to create a Meteor project in this shared folder, but when I run the project I get errors pointing to MongoDB.
If I follow instructions on
https://github.com/pixelhandler/vagrant-dev-env/blob/master/README.md
my app works just fine.
Upon further investigation it seems that MongoDB does not work on NFS shares: http://www.mongodb.org/display/DOCS/NFS
Has anyone else run into this issue? And if so, have you figured out a (non-rsync) solution?
I plan to send a link to this question to 10gen; perhaps someone from their team can answer it.
Not sure what Mongo's plans are regarding running on NFS / vboxsf, but you could work around this by running your own MongoDB outside the shared folder (e.g. use the Ubuntu mongodb package). Use the MONGO_URL environment variable to tell Meteor where to connect; if you set this variable, Meteor will not try to start MongoDB in the Meteor project directory.
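For example, assuming MongoDB was installed inside the VM from the Ubuntu package and listens on its default port (the database name meteor is just an example):
# point Meteor at the externally running MongoDB instead of spawning its own
export MONGO_URL=mongodb://localhost:27017/meteor
meteor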
You can move the data dir somewhere inside the VM, and use a symlink from the vagrant folder:
# create the real data directory inside the VM, outside the shared folder
mkdir -p ~/db
# replace Meteor's local db directory with a symlink pointing at it
cd /vagrant/.meteor/local
ln -s ~/db db
This means the data will not be shared, but you probably want it git ignored anyway.
(https://grahamrhay.wordpress.com/2013/06/18/running-meteor-in-a-vagrant-virtualbox/)
grahamrhay's solution would not work with the Vagrant box started on Windows. There is no way to create symbolic links in the shared folder on Windows for Vagrant, at least not without administrator privileges.