I'm trying to provision a development box with Vagrant and a CentOS 6.5 base box. I want memcached to automatically start at system boot/vagrant up.
I have tried adding memcached -d -l localhost -p11211 to /etc/rc.d/rc.local and this does not work.
I have also tried adding the following to /etc/init/vagrant-mounted.conf:
start on vagrant-mounted
memcached -d -l localhost -p11211
[EDIT]
I've updated /etc/rc.d/rc.local to now use the following
chkconfig memcached on
service memcached start
I'm not seeing anything in /var/log/boot.log, and it looks like rc.local is not being run at all. The file has ugo+x permissions, so it is definitely executable, but it never seems to be executed.
Does memcached -d -l localhost -p11211 exit immediately or spawn a process?
If it keeps running, try: nohup memcached -d -l localhost -p11211 &
Also, try putting it in /etc/rc.local as
memcached -d -l localhost -p11211 >/var/log/memcached.log 2>&1
That will give you a log file with possible errors.
Lastly, does your install of memcached not have an init script in /etc/init.d?
If it does, simply do chkconfig servicename on && service servicename start
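Either way, a quick check from a shell on the box will tell you whether the daemon actually came up at boot; this is a generic sketch using standard CentOS tools and the default memcached port:
pgrep -l memcached               # is a memcached process running at all?
netstat -tlnp | grep 11211       # is anything listening on the default memcached port?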
Recently I downloaded the new Keycloak 17.0.0 Quarkus distribution, unzipped it, and started the Keycloak server by running bin/kc.sh start-dev from my local $KEYCLOAK_HOME directory in a Cygwin Bash window. The server is up and running and I have configured my initial admin user. I am also able to log in to the Keycloak UI.
There is no cloud environment yet, no fancy configuration; it's only the bare standalone Quarkus distribution.
Question: How can I gracefully stop/quit/terminate the Keycloak server process? (Ctrl+C does not help in this case, because it is not really scriptable.)
Before moving to v17 I started my experiments with the v16.1.0 WildFly distribution, where I was using ${KEYCLOAK_HOME}/bin/jboss-cli.sh --connect --commands="shutdown,quit" to terminate the server. But v17 (Quarkus) does not contain the jboss-cli.sh script.
This may not be a graceful shutdown, but we can still use it in a script until we find a better way.
fuser: a utility to identify processes using files or sockets.
If Cygwin Bash supports the Linux fuser command, you can try: fuser -k 8080/tcp
Here is what I'm using on Linux.
If Keycloak is running on its default HTTPS port:
sudo fuser -k 8443/tcp
If Keycloak is running on its default HTTP port:
sudo fuser -k 8080/tcp
If you are running Keycloak on some_custom_port:
sudo fuser -k some_custom_port/tcp
It looks like capturing the PID and killing that later will work in v17 (I'm not sure this was true for v15 and WildFly):
$ ./keycloak-17.0.1/bin/kc.sh start-dev --http-port=8080 > keycloak.stdout 2>&1 & echo "$!" > keycloak.pid
$ cat keycloak.pid | xargs kill -TERM
Stop (WildFly distribution; the Quarkus distribution does not ship jboss-cli):
Linux: ./jboss-cli.sh --connect command=:shutdown
Windows: jboss-cli.bat --connect command=:shutdown
To test streaming replication, I would like to create a second Postgres instance on the same machine. The idea is that if it can be done on the test server, then it should be trivial to set it up on the two production servers.
The instances should use different configuration files and different data directories. I tried following the instructions here http://ubuntuforums.org/showthread.php?t=1431697 but I haven't figured out how to get Postgres to use a different configuration file. If I copy the init script, the scripts are just aliases to the same Postgres instance.
I'm using Postgres 9.3 and the Postgres help pages say to specify the configuration file on the postgres command line. I'm not really sure what this means. Am I supposed to install some client for this to work? Thanks.
I assume you can take it from here using the standard PostgreSQL utilities.
Create the clusters
$ initdb -D /path/to/datadb1
$ initdb -D /path/to/datadb2
Run the instances
$ pg_ctl -D /path/to/datadb1 -o "-p 5433" -l /path/to/logdb1 start
$ pg_ctl -D /path/to/datadb2 -o "-p 5434" -l /path/to/logdb2 start
Test streaming
Now you have two instances running on ports 5433 and 5434. Configuration files for them are in data dirs specified by initdb. Tweak them for streaming replication.
Your default installation remains untouched in port 5432.
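As for the streaming-replication tweak itself, here is a rough sketch for 9.3; the paths and the trust auth rule are only examples, and note that the standby's data directory must be a base backup of the primary (e.g. taken with pg_basebackup), not an independent initdb:
# postgresql.conf on the primary (/path/to/datadb1)
wal_level = hot_standby
max_wal_senders = 3
# pg_hba.conf on the primary
host  replication  postgres  127.0.0.1/32  trust
# postgresql.conf on the standby (/path/to/datadb2)
hot_standby = on
# recovery.conf in the standby's data directory
standby_mode = 'on'
primary_conninfo = 'host=127.0.0.1 port=5433 user=postgres'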
On Debian based distros you could use pg_createcluster instead of initdb:
$ pg_createcluster -u [user] -g [group] -d /path/to/data -l /path/to/log -p 5433 [version] [cluster_name]
Also pg_ctlcluster is an alternative to pg_ctl.
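For example, a cluster created that way could then be started with (version and cluster name matching whatever was passed to pg_createcluster):
$ pg_ctlcluster [version] [cluster_name] start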
Steps to create a new server instance on PostgreSQL 9.5
At the command prompt, run:
initdb -D Instance_Directory_path -U username -W
(it will prompt for a password)
Once the new instance directory is created, open a command prompt as Administrator and run:
pg_ctl register -N service_name -D Instance_Directory_path -o "-p port_no"
After the service is registered, start the server:
pg_ctl start -D Instance_Directory_path -o "-p port_no"
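For instance, with hypothetical values filled in (directory, service name and port are only examples):
initdb -D C:\pgdata\inst2 -U postgres -W
pg_ctl register -N postgresql-9.5-inst2 -D C:\pgdata\inst2 -o "-p 5433"
pg_ctl start -D C:\pgdata\inst2 -o "-p 5433"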
To complement the other answers, here is how to do it on CentOS 6 and 7.
After running something like
$ initdb -D /path/to/newdb
You'll have to change at least the port configuration option and, probably, listen_addresses in the config file postgresql.conf.
Instead of starting this new instance immediately, which has been explained in previous answers, you may want the new instance to run automatically on system start (after a reboot, for example). To do this, since pg_ctl register is Windows-only and CentOS doesn't have it, you'll have to create a new service file and register it so that service or systemctl can start it up automatically.
CentOS 6
Run the following commands to find and copy the service's init file:
[root@machine ~]# service postgresql-9.6 edit
Usage: /etc/init.d/postgresql-9.6 {start|stop|status|restart|upgrade|condrestart|try-restart|reload|force-reload|initdb|promote}
[root@machine ~]# cd /etc/init.d # Now we know where the service file is
[root@machine init.d]# cp -p postgresql-9.6 postgresql-9.6_5433
[root@machine init.d]# vi postgresql-9.6_5433
Now you can change the PGDATA directory to the one where the new instance resides. If you're using a PostgreSQL version prior to 9.4 (which you shouldn't be by the time of this answer), you'll have to change PGPORT too, to the port the new instance listens on.
The name of the new service is up to you. I usually take the original service name and add the port number at the end.
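The relevant part of the copied init script typically looks something like this (variable names can differ slightly between packaged scripts, so treat it as a sketch):
PGDATA=/path/to/newdb
PGPORT=5433   # only needed for PostgreSQL < 9.4; newer scripts take the port from postgresql.conf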
Now you only have to register the new service:
[root@machine init.d]# chkconfig postgresql-9.6_5433 on # service registered!
[root@machine init.d]# service postgresql-9.6_5433 start
Starting postgresql-9.6_5433 service: [ OK ]
[root@machine init.d]# service postgresql-9.6_5433 status
postgresql-9.6_5433 (pid 120993) is running...
CentOS 7
In CentOS 7, services are controlled with systemctl instead of service, and commands and paths change a bit. But the process is the same: create a new service file, edit it with the new location/port, register it and start it:
[root@localhost ~]# locate postgresql.service
/etc/systemd/system/multi-user.target.wants/postgresql.service
/usr/lib/systemd/system/postgresql.service
[root@localhost ~]# cd /usr/lib/systemd/system
[root@localhost ~]# cp -p postgresql.service postgresql_5433.service
[root@localhost ~]# vi postgresql_5433.service
# Change PGDATA and maybe PGPORT if PG version <9.4
[root@localhost ~]# systemctl enable postgresql_5433.service
[root@localhost ~]# systemctl start postgresql_5433.service
[root@localhost ~]# systemctl list-unit-files | grep postgres
postgresql.service enabled
postgresql_5433.service enabled
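Inside the copied unit file, the edit usually comes down to the Environment lines (the exact contents depend on the packaged unit, so this is only a sketch); remember to run systemctl daemon-reload after editing:
# /usr/lib/systemd/system/postgresql_5433.service
Environment=PGDATA=/path/to/newdb
# Environment=PGPORT=5433   # only for PostgreSQL < 9.4; later versions read the port from postgresql.conf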
Context:
I'm testing an Elasticsearch 1.7.1 configuration that's set up by Chef, and testing it in Kitchen.
The Chef script and configuration work, because they're somehow running in production.
Running service elasticsearch start as the elasticsearch user fails, but the actual call it delegates to does not.
From what I've learned, Chef scripts are run as root. So when the test fails (it checks whether Elasticsearch is running by running service elasticsearch status), I log into the Vagrant machine. As root, if I run service elasticsearch start I get an OK (which is incorrect, but that's another issue), and a subsequent service elasticsearch status is met with the error: elasticsearch dead but pid file exists
Digging further, I added debug statements to the init.d script that service runs and saw that the actual command is basically a call to the daemon function from /etc/init.d/functions, which just runs:
runuser -s /bin/bash elasticsearch -c 'ulimit -S -c 0 >/dev/null 2>&1 ; /usr/share/elasticsearch/bin/elasticsearch -p /var/run/elasticsearch/elasticsearch.pid -d -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch/ -Des.default.path.data=/data/elasticsearch/data/ -Des.default.path.work=/tmp/elasticsearch -Des.default.path.conf=/etc/elasticsearch/'
So I tried a sudo su - elasticsearch and then ran the part in quotes:
[elasticsearch@default-centos ~]$ ulimit -S -c 0 >/dev/null 2>&1 ; \
/usr/share/elasticsearch/bin/elasticsearch \
-p /var/run/elasticsearch/elasticsearch.pid -d \
-Des.default.path.home=/usr/share/elasticsearch \
-Des.default.path.logs=/var/log/elasticsearch/ \
-Des.default.path.data=/data/elasticsearch/data/ \
-Des.default.path.work=/tmp/elasticsearch \
-Des.default.path.conf=/etc/elasticsearch/
A subsequent service elasticsearch status shows that elasticsearch is running just fine! I've even set the logging to TRACE, and there's no indication that elasticsearch has crashed.
I'm starting to work with Docker to automate environments, and I'm trying to build a simple LAMP stack, so the Dockerfile is the following:
FROM centos:7
ENV container=docker
RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
RUN yum -y update; yum clean all; \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
RUN yum -y update && yum clean all
RUN yum -y install firewalld httpd mariadb-server mariadb php php-mysql php-gd php-pear php-xml php-bcmath php-mbstring php-mcrypt php-php-gettext
#Enable services
RUN systemctl enable httpd.service
RUN systemctl enable mariadb.service
#start services
RUN systemctl start httpd.service
RUN systemctl start mariadb.service
#Open firewall ports
RUN firewall-cmd --permanent --add-service=http
RUN firewall-cmd --permanent --add-service=https
RUN firewall-cmd --reload
EXPOSE 80
CMD ["/usr/sbin/init"]
so when I build the image
docker build -t myimage .
Then the build fails with the following error:
The command '/bin/sh -c systemctl start httpd.service' returned a non-zero code: 1
When I enter interactive mode (skipping the commands from RUN systemctl start httpd.service onwards and rebuilding the image):
docker run -t -i myimage /bin/bash
And after trying to start the httpd service manually, I get the following error:
Failed to get D-Bus connection: No connection to service manager.
So, what am I doing wrong?
First of all, welcome to Docker! :-) Loads of Docker tutorials and docs are written around Ubuntu containers, but I like CentOS too.
Ok, there are a couple of things to talk about here:
You're running up against a known issue with systemd-based Docker containers where they seem to need extra privileges to run, and even then lots of extra config is required to get them working. The Red Hat team are experimenting with some fixes (mentioned in the comments), but I'm not sure where that's at.
If you wish to try getting it working, these are the best instructions I've found, but I've played with this several times in the last couple of weeks and not got it working yet.
What people might say is "the real issue" here is that a Docker container should not be thought of as a "mini Virtual Machine". Docker is designed to run one "root" process per container, and the container system makes it easy to compose multiple containers together - they are small on disk, light on memory usage and easy to network together.
Here's a blog post from Docker which gives some background on this. There's also the "Docker Fundamentals" docs on Dockerizing applications and Working with containers.
So arguably the best way to proceed with the setup you're attempting to create here (though it might sound more complicated at the beginning) is to break your "stack" up into the services you need, and then use a tool like docker-compose (introduction, documentation) to create single-purpose Docker containers as required.
In your case above, you have two services, a web server and a database server. Therefore two Docker containers should work well, connected together by the database network connection. Here are some examples:
example with Symfony app, nginx and MariaDB
example with MariaDB + NodeJS
If you run one service per Docker container, you don't need to use systemd to manage them, as the Docker daemon manages each container sort of like it is a Unix process. When the process dies, the Docker container dies, and this is important because the Docker server monitors containers and can restart them automatically, or notify you.
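To make that concrete, here is a rough sketch of the two-container setup using plain docker run commands; the image tags, container names and password are only examples, and docker-compose expresses the same thing declaratively:
# database container, from the official MariaDB image
$ docker run -d --name db --restart=always -e MYSQL_ROOT_PASSWORD=secret mariadb:10.1
# web container, from the official PHP+Apache image, linked to the database and published on port 80
$ docker run -d --name web --restart=always --link db:db -p 80:80 php:5.6-apache
# the PHP code can now reach MariaDB at the hostname "db"; --restart=always lets the Docker daemon restart either container if it dies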
This looks like a perfect example of where my docker-systemctl-replacement would fit in. It can easily interpret "systemctl start httpd.service" without an active systemd around. I have done the same for some database services, but not specifically mariadb.service - maybe you could give it a try.
As I said in the title, I've installed PostgreSQL using MacPorts, but cannot access it.
The installation process was:
$ sudo port install postgresql83-server
$ sudo mkdir -p /opt/local/var/db/postgresql83/webcraft
$ sudo chown postgres:postgres /opt/local/var/db/postgresql83/webcraft
$ sudo su postgres -c '/opt/local/lib/postgresql83/bin/initdb -D /opt/local/var/db/postgresql83/webcraft'
$ sudo launchctl load -w /Library/LaunchDaemons/org.macports.postgresql83-server.plist
My PATH is
/opt/local/lib/postgresql83/bin:/opt/local/lib/mysql5/bin:/opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
I try to connect to the server using the psql client:
$ psql
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Here is some info
$ ps ax | grep postgres | grep -v grep
52 ?? Ss 0:00.00 /opt/local/bin/daemondo --label=postgresql83-server --start-cmd /opt/local/etc/LaunchDaemons/org.macports.postgresql83-server/postgresql83-server.wrapper start ; --stop-cmd /opt/local/etc/LaunchDaemons/org.macports.postgresql83-server/postgresql83-server.wrapper stop ; --restart-cmd /opt/local/etc/LaunchDaemons/org.macports.postgresql83-server/postgresql83-server.wrapper restart ; --pid=none
Did you try running:
which psql
I imagine psql is still referencing /usr/bin/psql, and the MacPorts version of psql is suffixed with the version number, in your case psql83. You can alias psql to psql83 as a simple workaround. Better would be to change the default:
sudo port select --set postgresql postgresql83
That will do the proper routing.
There is a very easy solution to this, but it's not well documented in my opinion:
MacPorts encourages installing their *_select ports to manage potentially multiple versions of software (say you want Postgres93 and Postgres94 at the same time). It's a great feature, but it adds an extra step that is for some reason rarely mentioned in the docs:
$ sudo port install postgresql94-server
Many failed attempts at starting the server later..
$ sudo port install postgresql_select
$ sudo port select postgresql
Available versions for postgresql:
none (active)
postgresql94
Well that can't be good!
$ sudo port select postgresql postgresql94
$ sudo port load postgresql94-server
You're kidding me. Now it's running?
Simply installing Postgres doesn't fully set up the symlinks to make it easily runnable. Installing postgresql_select gives MacPorts the information it needs to do that via port select. Once you've selected the active version of your choice, starting the Postgres server via launchctl is as easy as port load postgresqlXX-server.
I know this is a very late answer and doesn't answer your full question, but launchctl will show different results depending on whether you are the superuser or not.
Try doing:
sudo launchctl list | grep postgres
I had exactly the same problem on my MacBook Pro. I could resolve the problem after I read this blog post and all the comments:
http://benscheirman.com/2010/06/installing-postgresql-for-rails-on-mac-os-x
The problem is that Postgres is not really running. I recognized this after I did a port scan of my own machine and realized that nothing was listening on port 5432.
I created a small script "start_pg_server.sh":
#!/bin/sh
sudo su postgres -c 'pg_ctl start -D /opt/local/var/db/postgresql83/defaultdb/'
After executing this script the server was running and I could connect with pgAdmin. I was also able to run my Ruby stuff with rake db:create and rake db:migrate.
After I restored using Time Machine I had the same problem.
The reason was that the permissions were mangled and postgres could not write the pid file.
Running this solved it for me:
sudo chown -R postgres:postgres /opt/local/var/db/postgresql91/
sudo port unload postgresql91-server
sudo port load postgresql91-server
Did you by any chance create your postgres user with a shell of /usr/bin/false? If so, the startup script won't work because it uses su which passes commands you send it through the shell.
If you did set it to /usr/bin/false, try changing it to /bin/bash and that might fix things.
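Assuming MacPorts created the account as postgres (you can check its current shell with dscl . -read /Users/postgres UserShell), one way to change it on macOS is:
sudo dscl . -create /Users/postgres UserShell /bin/bash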