How do I start Hazelcast on Red Hat Linux? While configuring it, systemctl status hazelcast.service shows an error that the main class cannot be found, and I cannot find any documentation. Please help. Thank you
From your comment it's clear you want to run the old Hazelcast IMDG version 3.12.12. This version is not available from native package managers (as 5.1 is).
Still, you can install this old Hazelcast as a standalone app and configure the systemd service on your own. See this example repository: https://github.com/kwart/hazelcast-linux-service/tree/3.12.z/
These would be the steps on RHEL (run them as root):
# Prerequisites
dnf install -y wget curl unzip git rsync java-1.8.0-openjdk-headless
# Clone the repo (with 3.12.z branch)
git clone -b 3.12.z https://github.com/kwart/hazelcast-linux-service.git
cd hazelcast-linux-service
# Create the hazelcast user/group
groupadd -r hazelcast
useradd -r -g hazelcast -d /opt/hazelcast -s /sbin/nologin hazelcast
# Install Hazelcast
HAZELCAST_VERSION=3.12.12
wget https://github.com/hazelcast/hazelcast/releases/download/v$HAZELCAST_VERSION/hazelcast-$HAZELCAST_VERSION.zip
unzip hazelcast-$HAZELCAST_VERSION.zip -d /opt
ln -s /opt/hazelcast-$HAZELCAST_VERSION /opt/hazelcast
# Change owner of the Hazelcast directories and links
chown -R hazelcast:hazelcast /opt/hazelcast /opt/hazelcast-$HAZELCAST_VERSION
# Copy service and config files
rsync -r etc/ /etc
# Start and enable service
systemctl daemon-reload
systemctl start hazelcast.service
systemctl enable hazelcast.service
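For reference, the service installed from the repository's etc/ directory points systemd at the Hazelcast jar. A rough sketch of such a unit follows; treat it as an illustration only - the jar name and main class below assume the 3.12.12 distribution zip layout and may differ from the repo's actual file:
# /etc/systemd/system/hazelcast.service - sketch only
[Unit]
Description=Hazelcast IMDG member
After=network.target

[Service]
Type=simple
User=hazelcast
Group=hazelcast
WorkingDirectory=/opt/hazelcast
# A "cannot find main class" error usually means this classpath/main-class pair is wrong
ExecStart=/usr/bin/java -cp /opt/hazelcast/lib/hazelcast-all-3.12.12.jar com.hazelcast.core.server.StartServer
Restart=on-failure

[Install]
WantedBy=multi-user.target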
Warning: Hazelcast 3.12.z has already reached the end of standard support. It's highly recommended to use an up-to-date version.
I have an ECS container running that is not receiving updates when new files are written to the S3 bucket it mounts.
Meaning, when a new file is written to the S3 bucket, I am unable to see it in the container that mounts the bucket.
Image:
FROM cubejs/cube:v0.29.17
RUN apt-get update
RUN apt-get -y install s3fs
COPY ./entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["cubejs", "server"]
entrypoint.sh:
#!/bin/bash
set -e
bucket=muhbucket
[ ! -d /cube/conf/schema ] && mkdir /cube/conf/schema
s3fs ${bucket} /cube/conf/schema -o ecs
echo "Mounted ${bucket} to /cube/conf/schema"
exec "$#"
s3fs 1.87 and later have a stat_cache_expire default of 900 seconds (15 minutes), which can delay updates. You can reduce this value, although it will make operations like readdir slower. s3fs 1.86 and older cached files forever, which made multi-client updates impossible. Some older Linux distributions like Ubuntu 20.04 still ship these older s3fs versions, so you might accidentally be using one.
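If you are on s3fs 1.87 or later, you can shorten the metadata cache when mounting; the 30-second value below is only an example of the trade-off, not a recommendation:
# Shorter stat cache so new S3 objects show up faster (readdir gets slower)
s3fs ${bucket} /cube/conf/schema -o ecs -o stat_cache_expire=30
# Check which s3fs version the image actually ships
s3fs --version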
I've been trying to install CouchDB on a fresh CentOS 7 DigitalOcean droplet. I get no errors when installing with the following steps:
yum -y update
yum -y groupinstall "Development Tools"
yum -y install libicu-devel curl-devel ncurses-devel libtool libxslt fop java-1.6.0-openjdk java-1.6.0-openjdk-devel unixODBC unixODBC-devel openssl-devel
Step 2 - Installing Erlang
wget http://www.erlang.org/download/otp_src_R16B02.tar.gz
tar -zxvf otp_src_R16B02.tar.gz
cd otp_src_R16B02
./configure && make
make install
Step 3 - Installing the SpiderMonkey JS Engine
wget http://ftp.mozilla.org/pub/mozilla.org/js/js185-1.0.0.tar.gz
tar -zxvf js185-1.0.0.tar.gz
cd js-1.8.5/js/src
./configure && make
make install
Step 4 - Installing CouchDB
wget http://mirror.olnevhost.net/pub/apache/couchdb/source/1.6.1/apache-couchdb-1.6.1.tar.gz
tar -xvf apache-couchdb-1.6.1.tar.gz
cd apache-couchdb-1.6.1
./configure && make
make install
Step 5 - Setting up CouchDB
adduser --no-create-home couchdb
chown -R couchdb:couchdb /usr/local/var/lib/couchdb /usr/local/var/log/couchdb /usr/local/var/run/couchdb
ln -sf /usr/local/etc/rc.d/couchdb /etc/init.d/couchdb
chkconfig --add couchdb
chkconfig couchdb on
vi /usr/local/etc/couchdb/local.ini
Should you need to access CouchDB from the web: in the [httpd] section, look for a setting called bind_address and change it to 0.0.0.0 - this will make CouchDB bind to all available addresses.
[httpd]
port = 5984
bind_address = 0.0.0.0
service couchdb start
/etc/init.d/couchdb status (this has no output)
And I get the following when I try to run:
/usr/local/bin/couchdb
Apache CouchDB 1.6.1 (LogLevel=info) is starting.
{"init terminating in do_boot",{{badmatch,{error,{bad_return,{{couch_app,start,[normal,["/usr/local/etc/couchdb/default.ini","/usr/local/etc/couchdb/local.ini"]]},{'EXIT',{{badmatch,{error,{shutdown,{failed_to_start_child,couch_secondary_services,{shutdown,{",[]},{couch_uuids,new_prefix,0,[{file,"couch_uuids.erl"},{line,84}]},{couch_uuids,state,0,[{file,"couch_uuids.erl"},{line,100}]},{couch_uuids,init,1,[{file,"couch_uuids.erl"},{line,50}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}}}}}}},[{couch_server_sup,start_server,1,[{file,"couch_server_sup.erl"},{line,98}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,269}]}]}}}}}},[{couch,start,0,[{file,"couch.erl"},{line,18}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}
Crash dump was written to: erl_crash.dump
init terminating in do_boot ()
Does anyone know how to get past this?
Note: I get "no such file or directory" when trying the answer from here
Can you check whether erlang-crypto is a separate module that is perhaps not installed?
CouchDB (imho rightfully) doesn't account for distributions splitting up the monolithically released Erlang installation.
Your error is raised in the UUID module, and the only thing I can think of immediately is the crypto dependency that might be missing.
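A quick way to check from the shell (assuming erl is on the PATH): if crypto is present this prints ok, otherwise it crashes with an undef error:
erl -noshell -eval 'io:format("~p~n", [crypto:start()])' -s init stop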
I'm running postgres inside a docker container to limit the amount of system resources it has access to. I'm having some trouble understanding how to make the data persistent. I've read the following articles:
https://www.andreagrandi.it/2015/02/21/how-to-create-a-docker-image-for-postgresql-and-persist-data/
http://container42.com/2013/12/16/persistent-volumes-with-docker-container-as-volume-pattern/
These suggest using a data-only container and having my postgres container link to it. What I'm failing to understand is: what's the advantage of this? As far as I can tell, if the docker-machine shut down for some reason (for example, moving it to a different physical machine), the data-only container stops running and all of its contents are lost? I've tried creating a volume in the postgres container, but it doesn't actually seem to save anything to the disk.
Here's my docker file. What am I doing wrong?
FROM ubuntu
MAINTAINER Andrew Broadbent <andrew.broadbent@manchester.ac.uk>
# Add the PostgreSQL PGP key to verify their Debian packages.
# It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
# Add PostgreSQL's repository. It contains the most recent stable release
# of PostgreSQL, ``9.3``.
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" > /etc/apt/sources.list.d/pgdg.list
# Install ``python-software-properties``, ``software-properties-common`` and PostgreSQL 9.3
# There are some warnings (in red) that show up during the build. You can hide
# them by prefixing each apt-get statement with DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y python-software-properties software-properties-common postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3
# Note: The official Debian and Ubuntu images automatically ``apt-get clean``
# after each ``apt-get``
# Run the rest of the commands as the ``postgres`` user created by the ``postgres-9.3`` package when it was ``apt-get installed``
USER postgres
# Create a PostgreSQL role named ``docker`` with ``docker`` as the password and
# then create a database `docker` owned by the ``docker`` role.
# Note: here we use ``&&\`` to run commands one after the other - the ``\``
# allows the RUN command to span multiple lines.
RUN /etc/init.d/postgresql start &&\
    psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" &&\
    createdb -O docker docker
# Complete configuration
USER root
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
# Expose the PostgreSQL port
EXPOSE 5432
# Add VOLUMEs to allow backup of config, logs and databases
RUN mkdir -p /var/run/postgresql && chown -R postgres /var/run/postgresql
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
# Set the default command to run when starting the container
USER postgres
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
This one answers your question about the data container:
docker mounting volumes on host
Regarding your Dockerfile, I would suggest you either:
1) use the data container pattern, or
2) mount the volume to the host machine by specifying docker run -v [host-path]:[container-path] ..., so that the data is kept in one place on your host and will not be lost after the container is removed. Both options are sketched below.
Ref: https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume
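Briefly sketched, with placeholder paths and image names (not anything from your setup):
# Option 1: a data-only container whose volume the postgres container reuses
docker create -v /var/lib/postgresql --name pg-data ubuntu /bin/true
docker run -d --name pg --volumes-from pg-data -p 5432:5432 my-postgres-image

# Option 2: bind-mount a host directory so the data survives container removal
mkdir -p /srv/pgdata
docker run -d --name pg -p 5432:5432 -v /srv/pgdata:/var/lib/postgresql my-postgres-image
With option 2 the data lives at /srv/pgdata on the host even after docker rm pg.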
Looking down the road at sharding, we would like to be able to have multiple mongos instances. The recommendation seems to be to put mongos on each application server. I was thinking I'd just load balance them on their own servers, but this article http://craiggwilson.com/2013/10/21/load-balanced-mongos/ indicates that there are issues with this.
So I'm back to having it on the application servers. However, we are using Elastic Beanstalk. I could install Mongo on this as a package install, but this creates an issue with mongos: I have not been able to find out how to start mongos using the mongodb.conf file. For replicated servers or config servers, additional entries in the conf file can make it start up the way I want, but I can't do that with mongos. If I install Mongo, it actually starts up as mongod. I need to kill that behaviour and get it to start as mongos, pointed at my config servers.
All I can think of is:
Kill the mongodb startup script that autostarts the database in 'normal' mode.
Create a new upstart script that starts up mongos, pointed at the config servers (roughly sketched below).
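Roughly, the upstart job I have in mind would look something like this (the config server addresses are placeholders):
# /etc/init/mongos.conf - rough sketch only
description "mongos query router"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /usr/bin/mongos --configdb cfg1:27019,cfg2:27019,cfg3:27019 --logpath /var/log/mongo/mongos.log --logappend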
Any thoughts on this? Or does anyone know if I'm just being obtuse, and I can copy a new mongodb.conf file into place on beanstalk that will start up the server as mongos?
We are not planning on doing this right off the bat, but we need to prepare somewhat, as if I don't have the pieces in place, I'll need to completely rebuild my beanstalk servers after the fact. I'd rather deploy ready to go, with all the software installed.
I created a folder called ".ebextensions" and a file called "aws.config". The contents of this file are as follows: -
files:
  "/etc/yum.repos.d/mongodb.repo":
    mode: "000644"
    content: |
      [MongoDB]
      name=MongoDB Repository
      baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
      gpgcheck=0
      enabled=1
container_commands:
  01_enable_rootaccess:
    command: echo Defaults:root \!requiretty >> /etc/sudoers
  02_install_mongo:
    command: yum install -y mongo-10gen-server
    ignoreErrors: true
  03_turn_mongod_off:
    command: sudo chkconfig mongod off
  04_create_mongos_startup_script:
    command: sudo sh -c "echo '/usr/bin/mongos -configdb $MONGO_CONFIG_IPS -fork -logpath /var/log/mongo/mongos.log --logappend' > /etc/init.d/mongos.sh"
  05_update_mongos_startup_permissions:
    command: sudo chmod +x /etc/init.d/mongos.sh
  06_start_mongos:
    command: sudo bash /etc/init.d/mongos.sh
What this file does is: -
Creates a "mongodb.repo" file (see http://docs.mongodb.org/manual/tutorial/install-mongodb-on-red-hat-centos-or-fedora-linux/).
Runs six container commands (these are run after the server is created but before the WAR is deployed). These are: -
Enable root access - this is required for "sudo" commands afaik.
Install Mongo - install Mongo as a service using the yum command. We only need "mongos", but this has not been separated from the Mongo server yet. This may change in future.
Change config for mongod to "off" - this means the mongod program isn't run if the server restarts.
Create a script to run mongos. Note the $MONGO_CONFIG_IPS in step 4 - you can pass these in using the configuration page in Elastic Beanstalk (one way to do this is sketched below). This will run on a server reboot.
Set permissions to execute. The reason I did 4/5 as opposed to putting it into the files: section is that it did not expand the IP addresses from the environment variable.
Run the script created in step 4.
This works for me. My WAR file simply connects to localhost and all the traffic goes through the router. I stumbled about for a couple of days on this as the documentation is fairly slim in both Amazon AWS and MongoDB.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
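For completeness, one way to supply $MONGO_CONFIG_IPS is with an option_settings section in the same .ebextensions file (the addresses below are placeholders - setting it on the Elastic Beanstalk configuration page achieves the same thing):
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: MONGO_CONFIG_IPS
    value: "10.0.1.10:27019,10.0.1.11:27019,10.0.1.12:27019"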
UPDATE: - If you are having problems with my old answer, please try the following - it works with version 3 of Mongo and is currently being used in our production MongoDB cluster.
This version is more advanced in that it uses internal DNS (via AWS Route53) - note the mongo-cfg1.internal .... This is recommended best practice and well worth it: set up your private zone using Route53. It means that if there's an issue with one of the MongoDB config instances, you can replace the broken instance and update the private IP address in Route53, with no updates required in each Elastic Beanstalk environment - which is really cool. However, if you don't want to create a zone, you can simply put the IP addresses in the configDB attribute (like in my first example).
files:
  "/etc/yum.repos.d/mongodb.repo":
    mode: "000644"
    content: |
      [mongodb-org-3.0]
      name=MongoDB Repository
      baseurl=http://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/3.0/x86_64/
      gpgcheck=0
      enabled=1
  "/opt/mongos.conf":
    mode: "000755"
    content: |
      net:
        port: 27017
      operationProfiling: {}
      processManagement:
        fork: "true"
      sharding:
        configDB: mongo-cfg1.internal.company.com:27019,mongo-cfg2.internal.company.com:27019,mongo-cfg3.internal.company.com:27019
      systemLog:
        destination: file
        path: /var/log/mongos.log
container_commands:
  01_install_mongo:
    command: yum install -y mongodb-org-mongos-3.0.2
    ignoreErrors: true
  02_start_mongos:
    command: "/usr/bin/mongos -f /opt/mongos.conf > /dev/null 2>&1 &"
I couldn't get @bobmarksie's solution to work, but thanks to anowak and avinci here for this .ebextensions/aws.config file:
files:
  "/home/ec2-user/install_mongo.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      echo "[MongoDB]
      name=MongoDB Repository
      baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
      gpgcheck=0
      enabled=1" | tee -a /etc/yum.repos.d/mongodb.repo
      yum -y update
      yum -y install mongodb-org-server mongodb-org-shell mongodb-org-tools
commands:
  01install_mongo:
    command: ./install_mongo.sh
    cwd: /home/ec2-user
    test: '[ ! -f /usr/bin/mongo ] && echo "MongoDB not installed"'
services:
  sysvinit:
    mongod:
      enabled: true
      ensureRunning: true
      commands: ['01install_mongo']
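If you want to verify the result on the instance after deployment, a couple of simple checks (assuming SSH access) are:
# Confirm the service is running and responding
service mongod status
mongo --eval 'printjson(db.runCommand({ ping: 1 }))'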