Alpine Linux PXE boot: specify startup script as kernel parameter

Is there a way to specify a script as a kernel parameter during PXE boot? I want to run a bunch of computers as workers. I want them to use PXE to boot Alpine Linux and then run a script that will load my app and join my cluster.

Change dir:
cd /tmp
Create this directory structure:
.
└── etc
    ├── init.d
    │   └── local.stop
    └── runlevels
        └── default
            └── local.stop -> /etc/init.d/local.stop
mkdir -p ./etc/{init.d,runlevels/default}
Create file ./etc/init.d/local.stop:
#!/sbin/openrc-run

start() {
    # Example payload: fetch a test file on boot; replace with your worker bootstrap
    wget http://172.16.11.8/share/video.mp4 -O /root/video.mp4
}
Make it executable:
chmod +x ./etc/init.d/local.stop
cd /tmp/etc/runlevels/default
Make symlink:
ln -s /etc/init.d/local.stop local.stop
Go back:
cd /tmp
Create archive:
tar -czvf alpine-test-01.tar.gz ./etc/
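Before serving it, you can sanity-check the overlay with tar; every path should be relative and rooted at etc/. The docroot path below is an assumption, adjust it to wherever your webserver serves files from:
tar -tzf alpine-test-01.tar.gz
cp alpine-test-01.tar.gz /var/www/html/share/   # assumed webserver docroot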
Make a pxelinux menu (on your TFTP server):
label install-alpine
    menu label Install Alpine Linux [test]
    kernel alpine-installer/boot/vmlinuz-lts
    initrd alpine-installer/boot/initramfs-lts
    append ip=dhcp alpine_repo=https://dl-cdn.alpinelinux.org/alpine/latest-stable/main modloop=https://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/x86_64/netboot/modloop-lts modules=loop,squashfs,sd-mod,usb-storage apkovl=http://{YOUR_WEBSERVER}/{YOUR_DIR}/alpine-test-01.tar.gz
Boot a machine from this menu entry. My webserver log shows the overlay script ran:
10.10.15.43 172.16.11.8 - [27/Aug/2021:01:15:22 +0300] "GET /share/video.mp4 HTTP/1.1" 200 5853379 "-" "Wget"

Related

tileserver-gl: custom config file docker-compose

I am trying to add a Tileserver GL container to my existing docker-compose setup, using a custom config.json file. Here is the relevant part of the docker-compose.yml:
osm_tile_server:
  image: maptiler/tileserver-gl
  container_name: open_tile_server
  volumes:
    - ./Tile_server/data:/data
  ports:
    - '8081:8080'
    - '5431:5432'
  command:
    - '-c my_config.json'
The data folder structure:
./Tile_server/data/
├── malta.bbox
├── malta.osm.pbf
├── my_config.json
├── quickstart_checklist.chk
├── styles
│   └── my_style.json
└── tiles.mbtiles
When running docker-compose up, the -c my_config.json option is ignored.
However, it works if I simply run docker run -it -v $(pwd)/Tile_server/data:/data -p 8081:80 maptiler/tileserver-gl -c my_config.json and, even more weirdly, if I use --verbose as the command instead of -c my_config.json, the option is executed.
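A likely explanation, though not confirmed against this image: each item in a YAML command list is passed to the container as one argument, so the container receives the single argument "-c my_config.json", whereas the shell splits the docker run line into two. Splitting the list items should match the working invocation:
command:
  - '-c'
  - 'my_config.json'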

psql: error: server closed the connection unexpectedly. This probably means the server terminated abnormally before or while processing the request

I am new to Postgres and I want to containerize it. Below is my Dockerfile:
FROM postgres:13.3-alpine
ENV POSTGRES_USER="postgres"
COPY . /docker-entrypoint-initdb.d
RUN chmod 777 /docker-entrypoint-initdb.d/main.sh
EXPOSE 5432
ENTRYPOINT ["/docker-entrypoint-initdb.d/main.sh"]
And I have some initialization scripts (main.sh) that should run when the container starts; I have placed them inside docker-entrypoint-initdb.d. The files are:
.
├── ddl
│   ├── create_db_ddl.sql
│   ├── create_index_ddl.sql
│   └── create_table_ddl.sql
├── dml
│   ├── insert_emm_cat4_child_que.sql
│   ├── insert_emm_data_cat1.sql
│   ├── insert_emm_data_cat2.sql
│   ├── insert_emm_data_cat3.sql
│   ├── insert_emm_data_cat4.sql
│   ├── insert_emm_template.sql
│   └── insert_master_data.sql
├── Dockerfile
├── init.sql
├── Jenkinsfile
└── main.sh
When I run the container, it throws this error message:
############ Create database and schema if not exist ###########
psql: error: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
That banner is echoed from my main.sh file.
One observation I made: if I don't include ENTRYPOINT ["/docker-entrypoint-initdb.d/main.sh"] in the Dockerfile, and instead docker exec into the postgres container and run ./main.sh manually, it works fine.
My main.sh file looks like this:
#!/bin/sh
## SET password
export PGPASSWORD='sapient#123'
#Set the value of variable
database="survey_platform"
user="postgres"
## execute scripts
echo "############ Create database and schema if not exist ###########"
psql -h <IP> -p 5432 -U $user -f "ddl/create_db_ddl.sql"
echo "############ Create table if not exist ###########"
psql -h <IP> -p 5432 -U $user -d $database -f "ddl/create_table_ddl.sql"
echo "############ Create index if not exist ###########"
psql -h <IP> -p 5432 -U $user -d $database -f "ddl/create_index_ddl.sql"
/bin/sh
I am confused about why I am not able to run main.sh with ENTRYPOINT from the Dockerfile.
You don't need ENTRYPOINT, or even this wrapper script here. If you COPY the *.sql files into /docker-entrypoint-initdb.d, the postgres image will run them, in alphabetical order, with appropriate credentials, the first time the container starts up with an uninitialized database.
FROM postgres:13.3-alpine
COPY ddl/create_db_ddl.sql /docker-entrypoint-initdb.d/01_create_db_ddl.sql
COPY ddl/create_table_ddl.sql /docker-entrypoint-initdb.d/02_create_table_ddl.sql
COPY ddl/create_index_ddl.sql /docker-entrypoint-initdb.d/03_create_index_ddl.sql
# No EXPOSE, ENTRYPOINT, CMD, etc.
Note that these scripts are only run if the database data doesn't exist at all. If you store the database data in a named volume or host directory (and you should), these scripts will not be re-run if there's data there.
Fundamentally a Docker container only runs one command, and when that command completes, the container exits as well. The postgres image has a pretty involved entrypoint script that starts up a temporary non-networked database to run the init scripts; if you specify ENTRYPOINT in a derived Dockerfile, that command runs instead of the standard initialization script or the actual database. Your setup tries to run psql in the container, but since that's running instead of the database, there's nothing for it to connect to.
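To see that behavior concretely, here is a minimal sketch; the image tag my-postgres, the password, and the volume name are placeholders, not from the question:
docker volume create pgdata
docker run -d --name pg -e POSTGRES_PASSWORD=example \
    -v pgdata:/var/lib/postgresql/data my-postgres
# first start on an empty volume: the *.sql files in /docker-entrypoint-initdb.d run once
docker stop pg && docker start pg
# later starts find existing data, so the init scripts are skipped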

sam package too large when deploying custom runtime lambda

So I replicated this project, which uses Swift as a custom Lambda runtime with a makefile as the build method.
Now I created an AWS CodePipeline that packages my project with sam package in CodeBuild and finally deploys it via CloudFormation.
The CodeUri of my lambda is set to the root folder, like you see in the repo I linked above. I think that is how it should be, as the sam documentation shows the same under the custom runtime section. The problem is that sam package packages my entire project, and Lambda complains at deploy time that the zip is too large.
How would I set up the makefile as well as the template.yml so that sam package only packages my lambdas?
So I got it to work with a slightly different strategy. This is for anyone who finds themselves in the same situation.
1. Don't use sam to build your lambda functions.
I am running a set of shell scripts in the /scripts folder to initiate the Swift build.
.
├── Package.resolved
├── Package.swift
├── README.md
├── Sources
│   └── YourFirstLambda
│       ├── main.swift
│       └── requirements.txt
├── buildspec.yml
├── samconfig.toml
├── scripts
│   ├── build-and-package-all.sh
│   ├── build-and-package.sh
│   └── package.sh
└── template.yml
build-and-package-all.sh
Start this shell script from inside the scripts folder. You can change this behavior if you change all the dir paths.
It initiates the build-and-package.sh script for each function defined in the lambdas array.
#!/bin/bash
declare -a lambdas=("YourFirstLambda" "YourSecondLambda")
workspace="$(pwd)/.."

## now loop through the above array
if [ -f /.dockerenv ]; then
    # This is executed if run inside docker
    echo "I'm inside matrix ;("
    for lambda in "${lambdas[@]}"
    do
        # Second parameter is whether we are inside a docker container or not
        ./build-and-package.sh $lambda "FALSE"
    done
else
    echo "I'm living in real world!"
    for lambda in "${lambdas[@]}"
    do
        # Second parameter is whether we are inside a docker container or not
        ./build-and-package.sh $lambda "TRUE"
    done
fi
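As noted above, the batch script is run from inside the scripts folder:
cd scripts
./build-and-package-all.sh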
build-and-package.sh
This script runs swift build and package.sh in a docker container if build-and-package-all.sh was started on a bare-metal machine. This is useful because you can run it on a machine that does not have Swift installed.
On the other hand, we run swift build directly if we are already inside a docker container. This might be the case, as it was for me, when you build your functions with AWS CodeBuild: CodeBuild already runs your build in a docker container, so there is no need to start a docker container inside a docker container.
set -eu

executable=$1
isBareMetal=$2
workspace="$(pwd)/.."

if [ $isBareMetal == "TRUE" ]; then
    echo "-------------------------------------------------------------------------"
    echo "building \"$executable\" lambda"
    echo "-------------------------------------------------------------------------"
    docker run --rm -v "$workspace":/workspace -w /workspace/ codebuild-swift \
        bash -cl "swift build --product $executable -c release"
    echo "done"
    echo "-------------------------------------------------------------------------"
    echo "packaging \"$executable\" lambda"
    echo "-------------------------------------------------------------------------"
    docker run --rm -v "$workspace":/workspace -w /workspace/ codebuild-swift \
        bash -cl "sh scripts/package.sh $executable"
    echo "done"
else
    echo "-------------------------------------------------------------------------"
    echo "building \"$executable\" lambda"
    echo "-------------------------------------------------------------------------"
    cd $workspace
    swift build --product $executable -c release
    echo "done"
    echo "-------------------------------------------------------------------------"
    echo "packaging \"$executable\" lambda"
    echo "-------------------------------------------------------------------------"
    sh $workspace/scripts/package.sh $executable
    echo "done"
fi
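A single function can also be built directly from the scripts folder; per the logic above, the second argument is "TRUE" when you are on bare metal:
cd scripts
./build-and-package.sh YourFirstLambda "TRUE"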
Finally, we package the Swift lambda into a .zip.
package.sh
set -eu
executable=$1
target=".build/lambda/$executable"
rm -rf "$target"
mkdir -p "$target"
cp ".build/release/$executable" "$target/"
# add the target deps based on ldd
ldd ".build/release/$executable" | grep swift | cut -d' ' -f3 | xargs cp -Lv -t "$target"
cd "$target"
ln -s "$executable" "bootstrap"
zip --symlinks lambda.zip *
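The packaging step can also run on its own from the project root (the paths inside the script are relative to it), provided a release build already exists:
sh scripts/package.sh YourFirstLambda
# produces .build/lambda/YourFirstLambda/lambda.zip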
2. Tell sam where to find the zipped lambda
In the template.yml you should have a section that describes your lambda like so:
...
YourLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    Timeout: 5
    Handler: Provided
    Runtime: provided
    MemorySize: 128
    Description: Test Lambda
    Role: !GetAtt Role.Arn
    CodeUri: .build/lambda/YourLambdaFunction/lambda.zip
...
You can now use sam build, sam deploy, or sam package. Sam will only upload the zipped lambda, which should be in the 30 MB range, and probably less for you if you do not have many dependencies.
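For reference, a hedged sketch of the package/deploy step; the bucket and stack names are placeholders for your own:
sam package --template-file template.yml \
    --s3-bucket my-artifact-bucket \
    --output-template-file packaged.yml
sam deploy --template-file packaged.yml \
    --stack-name my-swift-lambdas \
    --capabilities CAPABILITY_IAM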
Side note.
You will need a docker image that has Swift installed. My docker image is tagged codebuild-swift and uses the following Dockerfile. If you name your docker image differently, you will have to update build-and-package.sh:
FROM swift:5.2-amazonlinux2

RUN yum -y install \
    git \
    libuuid-devel \
    libicu-devel \
    libedit-devel \
    libxml2-devel \
    sqlite-devel \
    python-devel \
    ncurses-devel \
    curl-devel \
    openssl-devel \
    tzdata \
    libtool \
    gcc-c++ \
    jq \
    tar \
    zip \
    glibc-static
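Build and tag the image to match what build-and-package.sh expects:
docker build -t codebuild-swift .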
The shell scripts above are all based on this site:
Getting started with Swift AWS Lambda runtime

Postgres Docker: "postgres: could not access the server configuration file "/var/lib/postgresql/data/postgresql.conf": No such file or directory"

I am having weird issues with the official postgres docker image. Most of the time it works fine, but when I shut down the container and launch it again, I sometimes (not every time) get this error:
PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres: could not access the server configuration file "/var/lib/postgresql/data/postgresql.conf": No such file or directory
I am launching the postgres image using this command:
export $(grep -v '^#' .env | xargs) && docker run --rm --name postgres \
    -e POSTGRES_USER=$POSTGRES_USER \
    -e POSTGRES_DB=$POSTGRES_DB \
    -e POSTGRES_PASSWORD=$POSTGRES_PASSWORD \
    -p $POSTGRES_PORT:$POSTGRES_PORT \
    -v $POSTGRES_DEVELOPMENT_DATA:/var/lib/postgresql/data \
    postgres
I keep the variables in a .env file; they look like this:
POSTGRES_USER=custom-db
POSTGRES_DB=custom-db
POSTGRES_PASSWORD=12345678
POSTGRES_PORT=5432
POSTGRES_DEVELOPMENT_DATA=/tmp/custom-db-pgdata
When I echo the variables, the values are there, so I don't think I'm passing null values to the docker env variables.
The directory on my host machine looks something like this:
/tmp/custom-db-pgdata
├── base
│   ├── 1
│   ├── 13407
│   ├── 13408
│   └── 16384
├── global
├── pg_logical
├── pg_multixact
│   ├── members
│   └── offsets
├── pg_notify
├── pg_stat
├── pg_stat_tmp
├── pg_subtrans
├── pg_wal
│   └── archive_status
└── pg_xact
If it's inconsistent between executions on the same machine and in the same session (i.e. without rebooting), then something isn't mapping your directories properly. Finding what's breaking will be difficult, more so since you're on a Mac. Docker on a Mac has the extra bonus of running through a VM: Docker maps your local drive/path into the VM and then maps that into the container image, so there are two different layers where things can go wrong.
Dario has the right idea in his clarifying comments: you shouldn't rely on /tmp, since that also has Mac magic to it. It's actually /var/private/somegarbagestring and is different on every bootup. Try switching to a /Users/$USER/dbpath folder and move your data there, so at least you're debugging with one less layer of magic between data and database.
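A minimal sketch of that move, using the paths from the question (stop the container first):
mkdir -p "/Users/$USER/dbpath"
cp -a /tmp/custom-db-pgdata/. "/Users/$USER/dbpath/"
# then point the mount there, e.g. in .env:
# POSTGRES_DEVELOPMENT_DATA=/Users/$USER/dbpath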

How can I restore my postgresql docker volume?

I use docker-compose to start my application. One container uses PostgreSQL.
I created a script that backs up the container volume into a tar.gz file.
backup.tar.gz
├── base
├── _data
├── ...
├── pg_hba.conf
└── old.txt
If I inspect the files in my volume, there is no old.txt:
sudo tree -L 1 /var/lib/docker/volumes/application_postgresql/_data
├── base
├── _data
├── ...
└── pg_hba.conf
I tried to stop my container (docker-compose stop db), untar the archive into /var/lib/docker/volumes/application_postgresql/_data, and restart my container (docker-compose restart db). But it did not seem to work.
The files look good:
sudo tree -L 1 /var/lib/docker/volumes/application_postgresql/_data
├── base
├── _data
├── ...
├── pg_hba.conf
└── old.txt
but my container doesn't want to start:
db_1 | initdb: directory "/var/lib/postgresql/data" exists but is not empty
db_1 | If you want to create a new database system, either remove or empty
db_1 | the directory "/var/lib/postgresql/data" or run initdb
db_1 | with an argument other than "/var/lib/postgresql/data".
How can I restore my postgresql volume?
I know the solution would be more elegant with a pg_dump, but I want a backup script that doesn't need to know about the environment.
What strategy could I use?
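One common file-level strategy, sketched under two assumptions: the named volume is application_postgresql (as the host path above suggests) and the archive holds the contents of the data directory at its top level. Restoring through a throwaway container keeps ownership and paths inside the volume, and stop/start gives the entrypoint a clean restart:
docker-compose stop db
docker run --rm \
    -v application_postgresql:/volume \
    -v "$(pwd)":/backup \
    alpine sh -c 'rm -rf /volume/* && tar -xzf /backup/backup.tar.gz -C /volume'
docker-compose start db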