Enable mDNS within buildroot - buildroot

I have a working Buildroot build (2017.02.1) and need to add mDNS.
Using the configuration menu I managed to add and build the basic Avahi autoip option.
When I enable the mDNS and libdns_sd sub-options, the build fails (log below).
I have already removed 'dbus' from 'fs_skeleton/etc/passwd', but the build still fails.
I am new to Buildroot, so any pointers would help!
mkdir -p /home/user/buildroot-mywork/buildroot/output/target/etc
( \
echo "NAME=Buildroot"; \
echo "VERSION=2017.02.1-00039-g464795e"; \
echo "ID=buildroot"; \
echo "VERSION_ID=2017.02.1"; \
echo "PRETTY_NAME=\"Buildroot 2017.02.1\"" \
) > /home/user/buildroot-mywork/buildroot/output/target/etc/os-release
>>> Copying overlay /home/user/buildroot-mywork/buildroot/../target/device/myproduct_mx6/production/rootfs_overlay
>>> Executing post-build script /home/user/buildroot-mywork/buildroot/../target/device/myproduct_mx6/production/postbuild.sh
!*!*!*[ POST BUILD ]*!*!*!
>>> Generating root filesystem image rootfs.tar
rm -f /home/user/buildroot-mywork/buildroot/output/build/_fakeroot.fs
rm -f /home/user/buildroot-mywork/buildroot/output/target/THIS_IS_NOT_YOUR_ROOT_FILESYSTEM
rm -f /home/user/buildroot-mywork/buildroot/output/build/_users_table.txt
echo '#!/bin/sh' > /home/user/buildroot-mywork/buildroot/output/build/_fakeroot.fs
echo "set -e" >> /home/user/buildroot-mywork/buildroot/output/build/_fakeroot.fs
echo "chown -h -R 0:0 /home/user/buildroot-mywork/buildroot/output/target" >> /home/user/buildroot-mywork/buildroot/output/build/_fakeroot.fs
printf ' avahi -1 avahi -1 * - - -\n dbus -1 dbus -1 * /var/run/dbus - dbus DBus messagebus user\n mosquitto -1 nogroup -1 * - - - Mosquitto user\n sshd -1 sshd -1 * - - - SSH drop priv user\n\n' >> /home/user/buildroot-mywork/buildroot/output/build/_users_table.txt
PATH="/opt/buildroot-2017.02.1/bin:/opt/buildroot-2017.02.1/sbin:/opt/buildroot-2017.02.1/usr/bin:/opt/buildroot-2017.02.1/usr/sbin:/home/user/bin:/home/user/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin" /home/user/buildroot-mywork/buildroot/support/scripts/mkusers /home/user/buildroot-mywork/buildroot/output/build/_users_table.txt /home/user/buildroot-mywork/buildroot/output/target >> /home/user/buildroot-mywork/buildroot/output/build/_fakeroot.fs
mkusers: user 'dbus' already exists with group 'avahi' (wants 'dbus')
fs/tar/tar.mk:14: recipe for target '/home/user/buildroot-mywork/buildroot/output/images/rootfs.tar' failed
make[2]: *** [/home/user/buildroot-mywork/buildroot/output/images/rootfs.tar] Error 1
Makefile:79: recipe for target '_all' failed
make[1]: *** [_all] Error 2
make[1]: Leaving directory '/home/user/buildroot-mywork/buildroot'
Makefile:120: recipe for target 'all' failed
make: *** [all] Error 2
_users_table.txt:
avahi -1 avahi -1 * - - -
dbus -1 dbus -1 * /var/run/dbus - dbus DBus messagebus user
mosquitto -1 nogroup -1 * - - - Mosquitto user
sshd -1 sshd -1 * - - - SSH drop priv user

This looks weird. Please report this bug to the Buildroot bug tracker, after making sure:
1/ That you can reproduce after a completely clean build (make clean all)
2/ That you include a Buildroot .config file that allows to reproduce the problem.
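For what it's worth, the mkusers message means the target's /etc/passwd already defines a dbus user whose primary GID resolves to the avahi group in /etc/group, while the users table wants the group to be dbus. The conflict can be illustrated with a minimal sketch (the demo passwd/group files and their UIDs/GIDs below are invented for illustration, not taken from the build):

```shell
# Reproduce the kind of mismatch mkusers reports:
# dbus's GID in passwd points at the "avahi" group, not "dbus".
mkdir -p demo/etc
printf 'dbus:x:101:102:DBus:/var/run/dbus:/bin/false\n' > demo/etc/passwd
printf 'avahi:x:102:\ndbus:x:103:\n' > demo/etc/group

# Look up dbus's primary GID, then resolve that GID to a group name.
gid=$(awk -F: '$1=="dbus"{print $4}' demo/etc/passwd)
awk -F: -v g="$gid" '$3==g{print "dbus currently has group:", $1}' demo/etc/group
```

Checking the skeleton's passwd and group files for a stale dbus entry like this is where I would start.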

Related

Permission denied, please try again github

I'd like to have files automatically uploaded to my server when I run the git push command. The problem is that it fails on the keys with an error ( Load key "/home/runner/.ssh/key": invalid format ). The key is added on the hosting side and in the GitHub repository settings too. Has anyone run into something similar? How can this be solved?
UPD: I fixed that error by changing how the key is output, but a new one appeared: access denied on write.
Here is the updated code:
name: Deploy
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Setup key
      - run: set -eu
      - run: mkdir "$HOME/.ssh"
      - run: echo "${{ secrets.key }}" > "$HOME/.ssh/key"
      - run: chmod 600 "$HOME/.ssh/key"
      # Deploy
      - run: rsync -e "ssh -p 1022 -i $HOME/.ssh/key -o StrictHostKeyChecking=no" --archive --compress --delete . *server*:/*link*/public_html/
Error code:
Run rsync -e "ssh -p 1022 -i $HOME/.ssh/key -o StrictHostKeyChecking=no" --archive --compress --delete . *server*:*link*/public_html/
Warning: Permanently added '*server*,[*IP*]:1022' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Received disconnect from *IP* port 1022:2: Too many authentication failures
Disconnected from *IP* port 1022
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(235) [sender=3.1.3]
Error: Process completed with exit code 255.
Try changing the permissions of the key files and the .ssh folder to:
.ssh directory: 700 (drwx------)
public key (.pub file): 644 (-rw-r--r--)
private key (id_rsa): 600 (-rw-------)
Lastly, your home directory should not be writable by the group or others (at most 755 (drwxr-xr-x)).
http://linuxcommand.org/lc3_man_pages/ssh1.html
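Applied on the server, the permissions above amount to a few chmod calls; a sketch (SSH_DIR and the id_rsa/id_rsa.pub names assume the standard key layout):

```shell
# Tighten SSH permissions; sshd rejects keys and directories that are too open.
SSH_DIR="${SSH_DIR:-$HOME/.ssh}"
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"                                                      # .ssh: drwx------
if [ -f "$SSH_DIR/id_rsa" ]; then chmod 600 "$SSH_DIR/id_rsa"; fi         # private key: -rw-------
if [ -f "$SSH_DIR/id_rsa.pub" ]; then chmod 644 "$SSH_DIR/id_rsa.pub"; fi # public key: -rw-r--r--
chmod go-w "$HOME"                                                        # home not group/other-writable
```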

Buildroot problem with libicudata. What's this?

When I try to build a system with Buildroot, I have a problem making libicudata.
make[2]: Entering directory '/home/ser-builder2/buildroot/output/build/icu-64-2/source/data'
/bin/bash ../mkinstalldirs /home/ser-builder2/buildroot/output/host/x86_64-buildroot-linux-gnu/sysroot/usr/lib
LD_LIBRARY_PATH=/home/ser-builder2/buildroot/output/build/host-icu-64-2/source/stubdata:/home/ser-builder2/buildroot/output/build/host-icu-64-2/source/tools/ctestfw:/home/ser-builder2/buildroot/output/build/host-icu-64-2/source/lib:$LD_LIBRARY_PATH /home/ser-builder2/buildroot/output/build/host-icu-64-2/source/bin/pkgdata -O ../data/icupkg.inc -q -c -s /home/ser-builder2/buildroot/output/build/icu-64-2/source/data/out/build/icudt64l -d ../lib -m dll -r 64.2 -e icudt64 -T ./out/tmp -s ./out/build/icudt64l -p icudt64l -L icudata ./out/tmp/icudata.lst -I /home/ser-builder2/buildroot/output/host/x86_64-buildroot-linux-gnu/sysroot/usr/lib
pkgdata: cd ../lib/ && /usr/bin/install -c libicudata.so.64.2 /home/ser-builder2/buildroot/output/host/x86_64-buildroot-linux-gnu/sysroot/usr/lib/libicudata.so.64.2
pkgdata: cd /home/ser-builder2/buildroot/output/host/x86_64-buildroot-linux-gnu/sysroot/usr/lib && rm -f libicudata.so.64 && ln -s libicudata.so.64.2 libicudata.so.64
Segmentation fault (core dumped)
-- return status = 35584
Error creating symbolic links. Failed command: cd /home/ser-builder2/buildroot/output/host/x86_64-buildroot-linux-gnu/sysroot/usr/lib && rm -f libicudata.so.64 && ln -s libicudata.so.64.2 libicudata.so.64
Makefile:180: recipe for target 'install-local' failed
make[2]: *** [install-local] Error 1
make[2]: Leaving directory '/home/ser-builder2/buildroot/output/build/icu-64-2/source/data'
Makefile:153: recipe for target 'install-recursive' failed
make[1]: *** [install-recursive] Error 2
make[1]: Leaving directory '/home/ser-builder2/buildroot/output/build/icu-64-2/source'
package/pkg-generic.mk:278: recipe for target '/home/ser-builder2/buildroot/output/build/icu-64-2/.stamp_staging_installed' failed
make: *** [/home/ser-builder2/buildroot/output/build/icu-64-2/.stamp_staging_installed] Error 2
I don't know what to do about this. I tried the same build on another computer and everything worked fine.
I ran into this problem too and tracked it down to the execution of pkgdata. Somehow LD_LIBRARY_PATH got clobbered, which caused '/usr/bin/rm' to fail. To solve it, I modified Makefile.in and removed the LD_LIBRARY_PATH=... prefix from the invocation line.
diff -r -u a/source/Makefile.in b/source/Makefile.in
--- a/source/Makefile.in	2022-06-02 10:48:31.493046292 +0200
+++ b/source/Makefile.in	2022-06-02 11:25:57.303758900 +0200
@@ -258,7 +258,7 @@
 	@(echo 'TOOLBINDIR=$$(cross_buildroot)/bin' ;\
 	echo 'TOOLLIBDIR=$$(cross_buildroot)/lib' ;\
 	echo "INVOKE=$(LDLIBRARYPATH_ENVVAR)=$(LIBRARY_PATH_PREFIX)"'$$(TOOLLIBDIR):$$(cross_buildroot)/stubdata:$$(cross_buildroot)/tools/ctestfw:$$$$'"$(LDLIBRARYPATH_ENVVAR)" ;\
-	echo "PKGDATA_INVOKE=$(LDLIBRARYPATH_ENVVAR)=$(LIBRARY_PATH_PREFIX)"'$$(cross_buildroot)/stubdata:$$(cross_buildroot)/tools/ctestfw:$$(TOOLLIBDIR):$$$$'"$(LDLIBRARYPATH_ENVVAR) " ;\
+	echo "PKGDATA_INVOKE= " ;\
 	echo ) >> $@
 config/icucross.inc: $(top_builddir)/icudefs.mk $(top_builddir)/Makefile @platform_make_fragment@
I know that this question is quite old now, but I wanted to mention this in case someone else runs into it.
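If you would rather script the edit than patch the file by hand, something like the following should be equivalent (a sketch: the function name is mine, and the path to Makefile.in depends on your Buildroot output directory):

```shell
# Blank out the PKGDATA_INVOKE environment prefix in icu's source/Makefile.in,
# matching the one-line change in the diff above. Edits the file in place.
drop_pkgdata_invoke() {
    sed -i 's/^\([[:space:]]*echo "PKGDATA_INVOKE=\).*/\1 " ;\\/' "$1"
}

# Example invocation (path depends on your setup):
# drop_pkgdata_invoke output/build/icu-64-2/source/Makefile.in
```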

Docker compose install error 'curl: (56) OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104' in Ubuntu

I am trying to install Docker Compose on Ubuntu 18.04.2 LTS.
I tried installing it using the official link here and followed the Docker Compose documentation, but when I run the command
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
then after some time it gives me this error
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   617    0   617    0     0    613      0 --:--:--  0:00:01 --:--:--   613
 24 8280k   24 2056k    0     0    789      0  2:59:06  0:44:27  2:14:39     0
curl: (56) OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104
Kindly help me with this; I have tried many times but it is not working.
I had the same problem. I assume that you are following the Docker docs, which are often outdated. You should use the Docker Compose GitHub releases instead.
Solution
1 - Open a Linux terminal by pressing Ctrl + Alt + T
2 - Install curl:
sudo apt install curl
3 - Get root privileges in the terminal for your user (similar to an administrator account on Windows):
sudo -i
4 - Go to the Docker Compose GitHub releases. There you will find this code; run it in your Linux terminal.
curl -L https://github.com/docker/compose/releases/download/1.25.1-rc1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
5 - Drop the root privileges with:
exit
6 - Check that docker-compose is installed:
docker-compose version
Outcome: in your terminal you should see the docker-compose version number and some other information.

How to dump DB via cron inside container?

I use docker-compose, which brings up a stack.
Relevant part of docker-compose.yml:
db:
  build: ./dockerfiles/postgres
  container_name: postgres-container
  volumes:
    - ./dockerfiles/postgres/pgdata:/var/lib/postgresql/data
    - ./dockerfiles/postgres/backups:/pg_backups
Dockerfile for Postgres:
FROM postgres:latest
RUN mkdir /pg_backups && > /etc/cron.d/pg_backup-cron && echo "00 22 * * * /backup.sh" >> /etc/cron.d/pg_backup-cron
ADD ./backup.sh /
RUN chmod +x /backup.sh
backup.sh
#!/bin/sh
# Dump DBs
now=$(date +"%d-%m-%Y_%H-%M")
pg_dump -h db -U postgres -d postgres > "/pg_backups/db_dump_$now.sql"
# remove all files (type f) modified longer than 30 days ago under /pg_backups
find /pg_backups -name "*.sql" -type f -mtime +30 -delete
exit 0
Cron simply does not launch the script. How to fix that?
FINAL VERSION
Based on @Farhad Farahi's answer, below is the final result:
On host I made a script:
#!/bin/bash
# Creates a cron job which backs up the DB in Docker every day at 22:00 host time
croncmd_backup="docker exec -it postgres-container bash -c '/pg_backups/backup.sh'"
cronjob_backup="00 22 * * * $croncmd_backup"
if [[ $# -eq 0 ]] ; then
    echo -e 'Please provide one of the arguments (example: ./run_after_install.sh add-cron-db-backup):
    1) add-cron-db-backup
    2) remove-cron-db-backup'
# To avoid task duplication in cron, the script checks whether the backup job is already there
elif [[ $1 == add-cron-db-backup ]]; then
    ( crontab -l | grep -v -F "$croncmd_backup" ; echo "$cronjob_backup" ) | crontab -
    echo "==>>> Backup task added to cron"
# Remove the backup job from cron
elif [[ $1 == remove-cron-db-backup ]]; then
    ( crontab -l | grep -v -F "$croncmd_backup" ) | crontab -
    echo "==>>> Backup task removed from cron"
fi
This script adds cron task to host, which launches the script backup.sh (see above) in a container.
For this implementation there is no need for a custom Dockerfile for Postgres, so the relevant part of docker-compose.yml should look like:
version: '2'
services:
  db:
    image: postgres:latest
    container_name: postgres-container
    volumes:
      - ./dockerfiles/postgres/pgdata:/var/lib/postgresql/data
      - ./dockerfiles/postgres/backups:/pg_backups
Things you should know:
The cron service is not started by default in the postgres library image.
When you change the cron config, you need to reload the cron service.
Recommendation:
Use the Docker host's cron and docker exec to launch the periodic tasks.
Advantages of this approach:
Unified configuration for all containers.
Avoids running multiple cron services in multiple containers (better use of system resources as well as less management overhead).
Honors the microservices philosophy.
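The recommendation boils down to a single host-side crontab entry. A sketch (container name and schedule taken from the question; the /backup.sh path assumes the script from the question's Dockerfile, and note that under cron you want docker exec without -t, since cron provides no TTY):

```shell
# Build the host-side cron line for the nightly in-container backup.
croncmd="docker exec postgres-container /backup.sh"
cronline="00 22 * * * $croncmd"
echo "$cronline"
# To install it idempotently (commented out because it modifies your crontab):
#   ( crontab -l 2>/dev/null | grep -vF "$croncmd"; echo "$cronline" ) | crontab -
```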
Based on Farhad's answer, I created a file postgres_backup.sh on the host with the following content:
#!/bin/bash
# Creates a cron job which backs up the DB in Docker every day at 22:00 host time
croncmd_backup="docker exec -it postgres-container bash -c '/db_backups/script/backup.sh'"
cronjob_backup="00 22 * * * $croncmd_backup"
if [[ $# -eq 0 ]] ; then
    echo -e 'Please provide one of the arguments (example: ./postgres_backup.sh add-cron-db-backup):
    1 > add-cron-db-backup
    2 > remove-cron-db-backup'
elif [[ $1 == add-cron-db-backup ]]; then
    ( crontab -l | grep -v -F "$croncmd_backup" ; echo "$cronjob_backup" ) | crontab -
    echo "==>>> Backup task added to local (not container) cron"
elif [[ $1 == remove-cron-db-backup ]]; then
    ( crontab -l | grep -v -F "$croncmd_backup" ) | crontab -
    echo "==>>> Backup task removed from cron"
fi
and I added a file /db_backups/script/backup.sh to the Postgres image with the content:
#!/bin/sh
# Dump DBs
now=$(date +"%d-%m-%Y_%H-%M")
pg_dump -h db -U postgres -d postgres > "/db_backups/backups/db_dump_$now.sql"
# remove all files (type f) modified longer than 30 days ago under /db_backups/backups
find /db_backups/backups -name "*.sql" -type f -mtime +30 -delete
exit 0
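The 30-day retention line in backup.sh can be sanity-checked in isolation; a sketch (uses GNU touch's -d to fake an old file in a temporary directory):

```shell
# Demonstrate that find only deletes *.sql files older than 30 days.
dir=$(mktemp -d)
touch -d '40 days ago' "$dir/old.sql"   # pretend this dump is 40 days old
touch "$dir/new.sql"                    # fresh dump
find "$dir" -name '*.sql' -type f -mtime +30 -delete
ls "$dir"                               # only new.sql remains
```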

How do I get pcp to automatically attach nodes to postgres pgpool?

I'm using postgres 9.4.9, pgpool 3.5.4 on centos 6.8.
I'm having a major hard time getting pgpool to automatically detect when nodes are up (it often detects the first node but rarely detects the secondary), but if I use pcp_attach_node to tell it which nodes are up, then everything is hunky-dory.
So I figured that until I can sort the issue out properly, I would write a little script to check the status of the nodes and attach them as appropriate, but I'm having trouble with the password prompt. According to the documentation, I should be able to issue commands like
pcp_attach_node 10 localhost 9898 pgpool mypass 1
but that just complains
pcp_attach_node: Warning: extra command-line argument "localhost" ignored
pcp_attach_node: Warning: extra command-line argument "9898" ignored
pcp_attach_node: Warning: extra command-line argument "pgpool" ignored
pcp_attach_node: Warning: extra command-line argument "mypass" ignored
pcp_attach_node: Warning: extra command-line argument "1" ignored
it'll only work when I use parameters like
pcp_attach_node -U pgpool -h localhost -p 9898 -n 1
and there's no parameter for the password; I have to enter it manually at the prompt.
Any suggestions for solving this other than using Expect?
You have to create a PCPPASSFILE. Search the pgpool documentation for more info.
Example 1:
Create a PCPPASSFILE for the logged-in user (vi ~/.pcppass); the file content is 127.0.0.1:9897:user:pass (hostname:port:username:password); set the file permissions to 0600 (chmod 0600 ~/.pcppass).
The command should now run without asking for a password:
pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1
Example 2:
Create a PCPPASSFILE (vi /usr/local/etc/.pcppass); the file content is 127.0.0.1:9897:user:pass (hostname:port:username:password); set the file permissions to 0600 (chmod 0600 /usr/local/etc/.pcppass); set the PCPPASSFILE variable (export PCPPASSFILE=/usr/local/etc/.pcppass).
The command should now run without asking for a password:
pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1
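Both examples collapse into a short non-interactive setup; a sketch (host, port, user and password are the placeholders from the examples above, and the file path here is under $HOME as in Example 1):

```shell
# Create the PCP password file and point the pcp_* tools at it.
PCPPASSFILE="$HOME/.pcppass"
printf '127.0.0.1:9897:user:pass\n' > "$PCPPASSFILE"   # hostname:port:username:password
chmod 0600 "$PCPPASSFILE"   # files with looser permissions are ignored by the pcp tools
export PCPPASSFILE
```

With PCPPASSFILE in place, the -w flag tells the pcp commands never to prompt for a password.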
Script to auto-attach the nodes
You can schedule this script with crontab, for example.
#!/bin/bash
# pgpool node status:
# 0 - This state is only used during the initialization. PCP will never display it.
# 1 - Node is up. No connections yet.
# 2 - Node is up. Connections are pooled.
# 3 - Node is down.
source $HOME/.bash_profile
export PCPPASSFILE=/appl/scripts/.pcppass
STATUS_0=$(/usr/local/bin/pcp_node_info -h 127.0.0.1 -U postgres -p 9897 -n 0 -w | cut -d " " -f 3)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] NODE 0 status "$STATUS_0
if (( $STATUS_0 == 3 ))
then
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [WARN] NODE 0 is down - attaching node"
    TMP=$(/usr/local/bin/pcp_attach_node -h 127.0.0.1 -U postgres -p 9897 -n 0 -w -v)
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] "$TMP
fi
STATUS_1=$(/usr/local/bin/pcp_node_info -h 127.0.0.1 -U postgres -p 9897 -n 1 -w | cut -d " " -f 3)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] NODE 1 status "$STATUS_1
if (( $STATUS_1 == 3 ))
then
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [WARN] NODE 1 is down - attaching node"
    TMP=$(/usr/local/bin/pcp_attach_node -h 127.0.0.1 -U postgres -p 9897 -n 1 -w -v)
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] "$TMP
fi
exit 0
Yes, you can trigger execution of this command using a customised failover_command (failover.sh in your /etc/pgpool).
An automated way to bring up a down pgpool node:
Copy this script, with execute permission and postgres ownership, to your desired location on all nodes.
Run the crontab -e command as the postgres user.
Finally, set the script to run every minute in crontab. To execute it every second, you would have to create your own service instead.
#!/bin/bash
# This script brings up all down pgpool nodes
#************************
#******NODE STATUS*******
#************************
# 0 - This state is only used during the initialization.
# 1 - Node is up. No connection yet.
# 2 - Node is up and connection is pooled.
# 3 - Node is down
#************************
#*********SCRIPT*********
#************************
source $HOME/.bash_profile
export PCPPASSFILE=/var/lib/pgsql/.pcppass
server_node_list=(0 1 2)
for server_node in "${server_node_list[@]}"
do
    node_status=$(pcp_node_info -p 9898 -h localhost -U pgpool -n $server_node -w | cut -d ' ' -f 3)
    if [[ $node_status == 3 ]]
    then
        pcp_attach_node -n $server_node -U pgpool -p 9898 -w -v
    fi
done