Logstash service failure CentOS 7 - Some newbie questions

I am really struggling to launch logstash as a service on CentOS 7. Since I cannot figure out where or how to set the
-Djava.io.tmpdir= variable (which apparently would solve my issue), I am trying to create a little script to launch the logstash command line on boot.
The following line works manually for me:
sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d
That successfully loads and opens port 5000.
So I am trying to create a boot time script to run that line and start logstash.
My problem is that I think I need sudo, as the command fails to run without it. Does anyone know how I can get this to work?
I have the following file /etc/systemd/system/mylogstash.service:
[Unit]
After=network.target
[Service]
ExecStart=/usr/local/bin/mylogstashstart.sh
[Install]
WantedBy=default.target
and also /usr/local/bin/mylogstashstart.sh:
#!/bin/bash
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d
To make the script executable I have done:
chmod 744 /usr/local/bin/mylogstashstart.sh
and
chmod 664 /etc/systemd/system/mylogstash.service
It fails to execute as there are insufficient permissions. How do I replicate sudo in the script without storing a password, and do I even need to?
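For reference, a minimal sketch of what the unit could look like, assuming the script path above and a placeholder tmpdir: systemd runs ExecStart as root unless a User= is set, so sudo should not be needed, and recent logstash versions read extra JVM flags from the LS_JAVA_OPTS environment variable, which gives -Djava.io.tmpdir a home:
[Unit]
Description=Launch logstash from a wrapper script at boot
After=network.target
[Service]
# runs as root by default, so no sudo is required inside the script
ExecStart=/usr/local/bin/mylogstashstart.sh
# assumption: /var/tmp/logstash is a placeholder tmpdir, adjust as needed
Environment=LS_JAVA_OPTS=-Djava.io.tmpdir=/var/tmp/logstash
Restart=on-failure
[Install]
WantedBy=multi-user.target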
Can anyone tell me where I have gone wrong please? I'm getting pretty desperate, and no-one likes to see a man desperate...
Thanks,
QR

Related

How to configure telnet service for yocto image

telnet is necessary in order to maintain compatibility with older software in this case. I'm working with the Yocto Rocko 2.4.2 distribution. When I try to telnet to the board I get the oh-so-detailed message "connection refused".
Using the method here and the options here, I modified the busybox configuration as suggested. When the board is booted up and logged in, executing telnet spits out usage info, and a quick directory check shows that telnet is installed at /usr/bin/telnet. My guess is that the telnet client is installed but the telnet server is not running?
I need to get telnetd to start manually at least so I know it will work with an init script in place. The second reference link there suggests that 'telnetd will not be started automatically though...' and that there will need to be an init script. How can I start telnetd manually for testing?
systemctl enable telnetd
returns: Unit telnetd.service could not be found
UPDATE
telnetd is located in /usr/sbin/telnetd. I was able to manually start the telnetd service for testing from there. After manually starting the service, telnet login now works. Looking into writing a systemd init script to auto-start the telnetd service, so I suppose this issue is closed, unless anyone would like to offer up detailed telnet busybox configuration and setup steps as an answer to 'How to configure telnet service for yocto image'.
Update
Perhaps there is something more? I created a unit file that looks like this:
[Unit]
Description=auto start telnetd
[Service]
ExecStart=/usr/sbin/telnetd
[Install]
WantedBy=multi-user.target
On reboot, systemd indicates the process executed and succeeded:
systemctl status telnetd
...
Process: 466 ExecStart=/usr/sbin/telnetd (code=exited, status=0/SUCCESS)
...
The service is not running, however: netstat -l does not list it, and telnet login fails. Is there something I'm missing?
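One possibility, offered as a guess rather than a confirmed diagnosis: busybox telnetd normally forks itself into the background, so with the default Type=simple systemd sees the parent exit immediately and considers the unit done. Keeping the daemon in the foreground (busybox builds usually accept -F for this), or declaring Type=forking, would let systemd track it:
[Unit]
Description=auto start telnetd
After=network.target
[Service]
# -F keeps busybox telnetd in the foreground so the default Type=simple can track it;
# alternatively, keep the plain invocation and add Type=forking
ExecStart=/usr/sbin/telnetd -F
[Install]
WantedBy=multi-user.target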
Last update... I think
So, following this post, I managed to get the telnet.socket service to start up on reboot.
systemctl status telnet.socket
shows that it is running and listening on 23. Now however, when I try to remote in with telnet I'm getting
Connection closed by foreign host
Everything I've read so far talks about the xinetd service (which I do not have...). What is confusing is that if I just navigate to /usr/sbin/ and execute telnetd, the server is up and running and I can telnet into the board, so I do not believe I'm missing any utilities or services (like the above-mentioned xinetd), but something is still not being configured correctly. Any ideas?
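In case it helps, here is a sketch of the usual socket-activation pairing; the unit names are illustrative and it assumes a busybox telnetd built with inetd mode (-i). With Accept=yes, the socket unit needs a matching template service to handle each accepted connection on stdin/stdout; if no such service exists, the connection being closed right away would be the expected symptom:
# /etc/systemd/system/telnet.socket
[Unit]
Description=telnet server socket
[Socket]
ListenStream=23
Accept=yes
[Install]
WantedBy=sockets.target

# /etc/systemd/system/telnet@.service
[Unit]
Description=telnet session for %i
[Service]
# -i runs busybox telnetd in inetd mode, serving one connection on stdin/stdout
ExecStart=/usr/sbin/telnetd -i
StandardInput=socket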

Can't enable service with systemctl

I made this service:
#!/bin/bash
node ../../home/NodeServer/server.js
All it should do is start the server on bootup, so I wanted to do
sudo systemctl enable startServer.service
But I got this error:
startServer.sh.service is not a native service, redirecting to systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable startServer.sh
update-rc.d: error: startServer.sh Default-Start contains no runlevels, aborting.
When I try to do
sudo systemctl start startServer.service
it works as intended.
I had the same problem. I solved it by retyping the file, because it seems there was a strange character that was breaking the parser. Hope this helps!
You want to execute a script, which is not the same as a service.
You can make a file called startServer.service and write the following into it:
[Unit]
Description=Start server that does a thing
[Service]
ExecStart=/usr/bin/node /home/NodeServer/server.js
To make systemd aware of the service, do the following:
sudo ln -s /home/NodeServer/startServer.service /etc/systemd/system/
and now you should be able to start the service.
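As a side note, enabling the unit at boot generally also needs an [Install] section, and ExecStart wants an absolute path to the binary; a fuller sketch, with the paths carried over as assumptions:
[Unit]
Description=Start server that does a thing
After=network.target
[Service]
# adjust the node path to the output of `which node`
ExecStart=/usr/bin/node /home/NodeServer/server.js
Restart=on-failure
[Install]
WantedBy=multi-user.target
Then reload and enable it:
sudo systemctl daemon-reload
sudo systemctl enable --now startServer.service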

Postgres with Docker: Postgres fails to load when persisting data

I'm new to Postgres.
I updated the Dockerfile I use and successfully installed Postgresql on it. (My image runs Ubuntu 16.04 and I'm using Postgres 9.6.)
Everything worked fine until I tried to move the database to a Volume with docker-compose (that was after making a copy of the container's folder with cp -R /var/lib/postgresql /somevolume/.)
The issue is that Postgres just keeps crashing, as witnessed by supervisord:
2017-07-26 18:55:38,346 INFO exited: postgresql (exit status 1; not expected)
2017-07-26 18:55:39,355 INFO spawned: 'postgresql' with pid 195
2017-07-26 18:55:40,430 INFO success: postgresql entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-26 18:55:40,763 INFO exited: postgresql (exit status 1; not expected)
2017-07-26 18:55:41,767 INFO spawned: 'postgresql' with pid 197
2017-07-26 18:55:42,841 INFO success: postgresql entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-26 18:55:43,179 INFO exited: postgresql (exit status 1; not expected)
(and so on…)
Logs
It's not clear to me what's happening as /var/log/postgresql remains empty.
chown?
I suspect it has to do with the user. If I compare the data folder inside the container and the copy I made of it to the volume, the only difference is that the original is owned by postgres while the copy is owned by root.
I tried running chown -R postgres:postgres on the copy. The operation was performed successfully; however, postmaster.pid remains owned by root, and I think that could be the issue.
Questions
How can I get more information about the cause of the crash?
How can I make it so that postmaster.pid is owned by postgres?
Should I consider running postgres as root instead?
Any hint welcome.
EDIT: links to the Dockerfile and the docker-compose.yml.
I'll answer my own question:
Logs & errors
What made matters more complicated was that I was not getting any specific error message.
To change that, I disabled the [program:postgresql] section in supervisord and instead started postgres manually from the command line (thanks to Miguel Marques for setting me on the right track with his comment).
Then I finally got some useful error messages:
2017-08-02 08:27:09.134 UTC [37] LOG: could not open temporary statistics file "/var/run/postgresql/9.6-main.pg_stat_tmp/global.tmp": No such file or directory
Fixing the configuration
I fixed the error above with these commands, eventually adding them to my Dockerfile:
mkdir -p /var/run/postgresql/9.6-main.pg_stat_tmp
chown postgres.postgres /var/run/postgresql/9.6-main.pg_stat_tmp -R
(Kudos to this guy for the fix.)
To make the data permanent, I also had to do this, so that postgres would accept the volume as its data directory (it refuses to start if the permissions are too open):
mkdir -p /var/lib/postgresql/9.6/main
chmod 700 /var/lib/postgresql/9.6/main
I also used initdb to initialize the data directory. BEWARE! This will erase any data found in that folder. Like so:
rm -R /var/lib/postgresql/9.6/main/*
ls /var/lib/postgresql/9.6/main/
/usr/lib/postgresql/9.6/bin/initdb -D /var/lib/postgresql/9.6/main
Testing
After the above, I could finally run postgres properly. I used this command to run it and test from the command line:
su postgres
/usr/lib/postgresql/9.6/bin/postgres -D /var/lib/postgresql/9.6/main -c config_file=/etc/postgresql/9.6/main/postgresql.conf # as per the Docker docs
To test, I kept it running and then, from another prompt, checked that everything ran fine with this:
su postgres
psql
CREATE TABLE cities ( name varchar(80), location point );
INSERT INTO cities VALUES ('San Francisco', '(-194.0, 53.0)');
select * from cities;
…making sure to restart the container and run the select again to check that the data did persist.
And then I finally restored the [program:postgresql] section in supervisord, rebuilt the image, and restarted the container, making sure everything ran fine (in particular supervisord: tail /var/log/supervisor/supervisord.log), which it did.
(The command I used inside of supervisord.conf is also /usr/lib/postgresql/9.6/bin/postgres -D /var/lib/postgresql/9.6/main -c config_file=/etc/postgresql/9.6/main/postgresql.conf, as per this Docker article and other postgres+supervisord examples. Other options would have been using pg_ctl or an init.d script, but it's not clear to me why/when one would use those.)
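For reference, the supervisord stanza for that command would look roughly like this (a sketch only, with the option values assumed):
[program:postgresql]
command=/usr/lib/postgresql/9.6/bin/postgres -D /var/lib/postgresql/9.6/main -c config_file=/etc/postgresql/9.6/main/postgresql.conf
user=postgres
autorestart=true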
I spent a lot of time on this. Hopefully the detailed answer will help someone down the line.
P.S.: I did end up producing a minimal example of my issue. If that can help anyone, here they are: Dockerfile, supervisord.conf and docker-compose.yml.
I do not know if this would be another way to achieve the same result (I'm new to Docker and Postgres too), but have you tried the official repository image for Postgres (https://hub.docker.com/_/postgres/)?
I'm getting the data out of the container by setting the environment variable PGDATA to '/var/lib/postgresql/data/pgdata' and binding it to an external volume in the run command:
docker run --name bd_TEST --network=my_network --restart=always -e POSTGRES_USER="superuser" -e POSTGRES_PASSWORD="myawesomepass" -e PGDATA="/var/lib/postgresql/data/pgdata" -v /var/local/db_data:/var/lib/postgresql/data/pgdata -itd -p 5432:5432 postgres:9.6
When the volume is empty, all the files are created by the image's startup script, and if they already exist, the database starts using them.
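A rough docker-compose equivalent of that run command, sketched with the same names carried over as assumptions:
version: "2"
services:
  db:
    image: postgres:9.6
    restart: always
    environment:
      POSTGRES_USER: superuser
      POSTGRES_PASSWORD: myawesomepass
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - /var/local/db_data:/var/lib/postgresql/data/pgdata
    ports:
      - "5432:5432"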
From past experience I can see what may be a problem. I can't say if this will help but it is worth a try.
I would have added this as a comment, but I can't because my rep isn't high enough.
I've spied a couple of problems with how you have structured the statements in your Dockerfile. You have installed various things multiple times and also updated sporadically throughout the code. In my own files I've noticed that this can lead to somewhat random behaviour of my services and installation because of the different layers.
This may not seem to solve your problem directly, but cleaning up your file as outlined in the best practices has solved many Dockerfile problems for me in the past.
One of the first places to start upon finding such problems is the best practices for RUN. This has helped me solve tricky problems in the past, and I hope it'll solve yours, or at least make it easier.
Pay special attention to this part:
After building the image, all layers are in the Docker cache. Suppose you later modify apt-get install by adding an extra package:
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y curl nginx
Docker sees the initial and modified instructions as identical and reuses the cache from previous steps. As a result the apt-get update is NOT executed because the build uses the cached version. Because the apt-get update is not run, your build can potentially get an outdated version of the curl and nginx packages.
After reading this I would start by consolidating all your dependencies.
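As a concrete sketch of that consolidation, chaining update and install in a single RUN keeps the package index fresh whenever the package list changes:
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y \
    curl \
    nginx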
In my case, having the same error, I debugged it until I found out that the disk was full; increasing the disk space solved it.
(Stupid error, easy fix; maybe reading this here saves someone some time.)
Also linking this question for other options:
Supervisord "exit status 1 not expected" running php script
https://serverfault.com/questions/537773/supervisor-process-exits-with-exit-status-1-not-expected/1076115#1076115

Why doesn't my systemd startup script run?

I'm trying to run a trivial script at bootup on my Debian testing machine. I followed a few guides, but the services do not start. Can someone show me what I'm doing wrong? I'd like to understand systemd before I start whining about it on the internet.
I created /etc/systemd/system/startup-scripts.service (sometimes there, sometimes as a symlink to /lib/systemd/system/my-file.service) and wrote
[Unit]
Description=Sync date at boot up
[Service]
ExecStart=/usr/bin/startup-script.sh
Type=simple
[Install]
WantedBy=multi-user.target
then ran
sudo systemctl enable startup-scripts.service
I also filled out /usr/bin/startup-script.sh, made it executable, and ran it. As far as I could tell the script would run, but my reboots have been fruitless.
I'm guessing the answer will involve journalctl. I'm not really sure what I'm looking at there. I also wouldn't be surprised if multi-user is the wrong target. It was the most reasonable-looking one, but I'm not really confident about what it's for.
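For anyone at the same point, the usual inspection commands would be something like this, with the unit name taken from above:
systemctl status startup-scripts.service
journalctl -u startup-scripts.service -b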

restart openerp 7 server on Xubuntu

I am coding a custom module for OpenERP 7 on Xubuntu 12.04, and today, suddenly (after some modifications in the code, I think), the restart server command stopped affecting my module.
I restart with this command:
sudo /etc/init.d/openerp-server restart
but the compiled (.pyc) files stay unchanged.
If I delete the module from the addons dir, it stops working properly, giving me a message saying that models are absent; that is normal. But why does restarting change nothing, even when I modify the __init__.py or __openerp__.py files?
It seems to me as if restarting with this command now does nothing, while yesterday it did.
So, please, how can I fix this?
You need to have -u modulename in the command line that starts the OpenERP server. So either modify the /etc/init.d/openerp-server script to have it there, or just start the server manually while you are developing.
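For example, a manual start during development could look something like this; the addons path, database name, and module name are placeholders to adjust:
sudo /etc/init.d/openerp-server stop
./openerp-server --addons-path=/path/to/addons -d mydb -u my_module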
Try
sudo /etc/init.d/openerp-server stop
ps aux | grep openerp
to see if the server really stopped.
Start the server with
sudo /etc/init.d/openerp-server start
Look also in the logs (/var/log/openerp/openerp-server.log, for example) to see what happens.