Hazelcast start error "Main process exited, code=exited, status=1/FAILURE" on Linux - Red Hat

How do I start Hazelcast on Red Hat Linux? While configuring it, systemctl status hazelcast.service reports an error that it cannot find the main class, and I cannot find any documentation. Please help. Thank you

From your comment it's clear you want to run the old Hazelcast IMDG version 3.12.12. This version is not distributed through native package managers (unlike 5.1, which is).
Still, you can install this old Hazelcast as a standalone app and configure the systemd service on your own. See this example repository: https://github.com/kwart/hazelcast-linux-service/tree/3.12.z/
These would be the steps on RHEL (run them as root):
# Prerequisites
dnf install -y wget curl unzip git rsync java-1.8.0-openjdk-headless
# Clone the repo (with 3.12.z branch)
git clone -b 3.12.z https://github.com/kwart/hazelcast-linux-service.git
cd hazelcast-linux-service
# Create the hazelcast user/group
groupadd -r hazelcast
useradd -r -g hazelcast -d /opt/hazelcast -s /sbin/nologin hazelcast
# Install Hazelcast
HAZELCAST_VERSION=3.12.12
wget https://github.com/hazelcast/hazelcast/releases/download/v$HAZELCAST_VERSION/hazelcast-$HAZELCAST_VERSION.zip
unzip hazelcast-$HAZELCAST_VERSION.zip -d /opt
ln -s /opt/hazelcast-$HAZELCAST_VERSION /opt/hazelcast
# Change owner of the Hazelcast directories and links
chown -R hazelcast:hazelcast /opt/hazelcast /opt/hazelcast-$HAZELCAST_VERSION
# Copy service and config files
rsync -r etc/ /etc
# Start and enable service
systemctl daemon-reload
systemctl start hazelcast.service
systemctl enable hazelcast.service
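To verify the result (and to see the underlying Java error if the unit still fails), the standard systemd tools are enough; for example:
# Check the unit state and the most recent log lines
systemctl status hazelcast.service
journalctl -u hazelcast.service -n 50 --no-pager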
Warning: Hazelcast 3.12.z has already reached the end of standard support. It's highly recommended to use an up-to-date version.

Related

Where is the Confluent path on Debian?

I installed the Confluent Community Platform following "Manual Install using Systemd on Ubuntu and Debian".
I can start:
sudo systemctl start confluent-zookeeper
sudo systemctl start confluent-kafka
...
but where is my Confluent path?
I do not have the confluent CLI.
Please help.
I encountered the same problem when setting up Elasticsearch for Confluent. On my system, /usr/bin acted as /bin and the remaining setup was successful. So try using /usr as the Confluent path if you are following "Manual Install using Systemd on Ubuntu and Debian".
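If you are unsure where the packages placed their files, dpkg can tell you. The package name below is only an example; list the installed Confluent packages first and substitute one of your own:
# See which Confluent packages are installed, then list one package's files
dpkg -l | grep -i confluent
dpkg -L confluent-kafka-2.12    # example package name, substitute one from the list above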
You can manually install the Confluent CLI:
curl -L https://cnfl.io/cli | sh -s -- -b /<path-to-directory>/bin
Run this script to install the Confluent CLI. This command creates a
bin directory in your designated location (<path-to-directory>/bin).
The location must be in your PATH (e.g. /usr/local/bin).
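As a quick sketch, installing into /usr/local/bin (assuming it is writable and already on your PATH) and then checking the result could look like:
curl -L https://cnfl.io/cli | sh -s -- -b /usr/local/bin
confluent version    # should print the installed CLI version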

Local Kubernetes on CentOS

I am trying to install Kubernetes locally on my CentOS machine. I am following this blog http://containertutorials.com/get_started_kubernetes/index.html, with appropriate changes to match CentOS and the latest Kubernetes version.
The ./kube-up.sh script runs and exits with no errors, but I don't see the server started on port 8080. Is there a way to find out what the error was, and is there another procedure to follow on CentOS 6.3?
The easiest way to install a Kubernetes cluster is with kubeadm. The initial post detailing the setup steps is here, and the detailed kubeadm documentation can be found here. With this you will get the latest Kubernetes release.
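A rough sketch of the kubeadm route on a CentOS 7 host (repository setup and network add-on choice omitted; follow the linked docs for those details):
# After installing docker, kubelet, kubeadm and kubectl from the Kubernetes repo:
systemctl enable --now docker kubelet
kubeadm init --pod-network-cidr=10.244.0.0/16    # CIDR depends on the chosen network add-on
# Make kubectl usable for the current user
mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config
# Verify the node registered
kubectl get nodes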
If you really want to use the script to bring up the cluster, here is what I did:
Install the required packages
yum install -y git docker etcd
Start docker process
systemctl enable --now docker
Install golang
Install the latest Go release; the default CentOS golang package is old, and Kubernetes needs at least go1.7 to compile.
curl -O https://storage.googleapis.com/golang/go1.8.1.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.8.1.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
Setup GOPATH
export GOPATH=~/go
export GOBIN=$GOPATH/bin
export PATH=$PATH:$GOBIN
Download k8s source and other golang dependencies
Note: this might take some time depending on your internet speed
go get -d k8s.io/kubernetes
go get -u github.com/cloudflare/cfssl/cmd/...
Start cluster
cd $GOPATH/src/k8s.io/kubernetes
./hack/local-up-cluster.sh
In new terminal
alias kubectl=$GOPATH/src/k8s.io/kubernetes/cluster/kubectl.sh
kubectl get nodes

CouchDB won't start: badmatch / bad_return error on CentOS 7

I've been trying to install CouchDB on a fresh CentOS 7 DigitalOcean droplet. I get no errors installing with the following steps:
yum -y update
yum -y groupinstall "Development Tools"
yum -y install libicu-devel curl-devel ncurses-devel libtool libxslt fop java-1.6.0-openjdk java-1.6.0-openjdk-devel unixODBC unixODBC-devel openssl-devel
Step 2 - Installing Erlang
wget http://www.erlang.org/download/otp_src_R16B02.tar.gz
tar -zxvf otp_src_R16B02.tar.gz
cd otp_src_R16B02
./configure && make
make install
Step 3 - Installing the SpiderMonkey JS Engine
wget http://ftp.mozilla.org/pub/mozilla.org/js/js185-1.0.0.tar.gz
tar -zxvf js185-1.0.0.tar.gz
cd js-1.8.5/js/src
./configure && make
make install
Step 4 - Installing CouchDB
wget http://mirror.olnevhost.net/pub/apache/couchdb/source/1.6.1/apache-couchdb-1.6.1.tar.gz
tar -xvf apache-couchdb-1.6.1.tar.gz
cd apache-couchdb-1.6.1
./configure && make
make install
Step 5 - Setting up CouchDB
adduser --no-create-home couchdb
chown -R couchdb:couchdb /usr/local/var/lib/couchdb /usr/local/var/log/couchdb /usr/local/var/run/couchdb
ln -sf /usr/local/etc/rc.d/couchdb /etc/init.d/couchdb
chkconfig --add couchdb
chkconfig couchdb on
vi /usr/local/etc/couchdb/local.ini
Should you need to access CouchDB from the web, look for a setting called bind_address in the [httpd] section and change it to 0.0.0.0 - this will make CouchDB bind to all available addresses.
[httpd]
port = 5984
bind_address = 0.0.0.0
service couchdb start
/etc/init.d/couchdb status (this has no output)
And I get the following when I try to run:
/usr/local/bin/couchdb
Apache CouchDB 1.6.1 (LogLevel=info) is starting.
{"init terminating in do_boot",{{badmatch,{error,{bad_return,{{couch_app,start,[normal,["/usr/local/etc/couchdb/default.ini","/usr/local/etc/couchdb/local.ini"]]},{'EXIT',{{badmatch,{error,{shutdown,{failed_to_start_child,couch_secondary_services,{shutdown,{",[]},{couch_uuids,new_prefix,0,[{file,"couch_uuids.erl"},{line,84}]},{couch_uuids,state,0,[{file,"couch_uuids.erl"},{line,100}]},{couch_uuids,init,1,[{file,"couch_uuids.erl"},{line,50}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}}}}}}},[{couch_server_sup,start_server,1,[{file,"couch_server_sup.erl"},{line,98}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,269}]}]}}}}}},[{couch,start,0,[{file,"couch.erl"},{line,18}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}
Crash dump was written to: erl_crash.dump
init terminating in do_boot ()
Does anyone know how to get past this?
Note: I get "no such file or directory" when trying the answer from here.
Can you check whether erlang-crypto is a separate module that is maybe not installed?
CouchDB (imho rightfully) doesn't account for distributions splitting up the monolithically released Erlang installation.
Your error is raised in the UUID module, and the only thing I can think of immediately is the crypto dependency that might be missing.
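A quick way to check whether the crypto application is actually usable in your Erlang build (missing OpenSSL development headers at configure time are a common cause) would be something like:
# Prints "ok" if crypto starts, or an error tuple if it is missing/broken
erl -noshell -eval 'io:format("~p~n", [application:start(crypto)]), init:stop().'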

Docker Lamp Centos7: '/bin/sh -c systemctl start httpd.service' returned a non-zero code: 1

I'm starting to work with Docker to automate environments, so I'm trying to build a simple LAMP stack. The Dockerfile is the following:
FROM centos:7
ENV container=docker
RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
RUN yum -y update; yum clean all; \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
RUN yum -y update && yum clean all
RUN yum -y install firewalld httpd mariadb-server mariadb php php-mysql php-gd php-pear php-xml php-bcmath php-mbstring php-mcrypt php-php-gettext
#Enable services
RUN systemctl enable httpd.service
RUN systemctl enable mariadb.service
#start services
RUN systemctl start httpd.service
RUN systemctl start mariadb.service
#Open firewall ports
RUN firewall-cmd --permanent --add-service=http
RUN firewall-cmd --permanent --add-service=https
RUN firewall-cmd --reload
EXPOSE 80
CMD ["/usr/sbin/init"]
so when I build the image
docker build -t myimage .
the build fails with the following error:
The command '/bin/sh -c systemctl start httpd.service' returned a non-zero code: 1
When I enter interactive mode (skipping the commands after RUN systemctl start httpd.service and rebuilding the image):
docker run -t -i myimage /bin/bash
And after trying to start the httpd service manually, I get the following error:
Failed to get D-Bus connection: No connection to service manager.
So, what am I doing wrong?
First of all, welcome to Docker! :-) Loads of Docker tutorials and docs are written around Ubuntu containers, but I like Centos too.
Ok, there are a couple of things to talk about here:
You're running up against a known issue with systemd-based Docker containers: they seem to need extra privileges to run, and even then a lot of extra config is required to get them working. The Red Hat team is experimenting with some fixes (mentioned in the comments), but I'm not sure where that's at.
If you wish to try getting it working, these are the best instructions I've found, but I've played with this several times over the last couple of weeks and haven't got it working yet.
What people might say is "the real issue" here is that a Docker container should not be thought of as a "mini Virtual Machine". Docker is designed to run one "root" process per container, and the container system makes it easy to compose multiple containers together - they are small on disk, light on memory usage and easy to network together.
Here's a blog post from Docker which gives some background on this. There's also the "Docker Fundamentals" docs on Dockerizing applications and Working with containers.
So arguably the best way to proceed with the setup you're attempting to create here (though it might sound more complicated at the beginning) is to break your "stack" up into the services you need, and then use a tool like docker-compose (introduction, documentation) to create single-purpose Docker containers as required.
In your case above, you have two services, a web server and a database server. Therefore two Docker containers should work well, connected together by the database network connection. Here are some examples:
example with Symfony app, nginx and MariaDB
example with MariaDB + NodeJS
If you run one service per Docker container, you don't need to use systemd to manage them, as the Docker daemon manages each container sort of like it is a Unix process. When the process dies, the Docker container dies, and this is important because the Docker server monitors containers and can restart them automatically, or notify you.
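As a minimal sketch of that split with plain docker commands (the image tags and root password here are placeholders, not from the question):
# User-defined network so the containers can reach each other by name
docker network create lamp
# Database container, using the official mariadb image
docker run -d --name db --network lamp -e MYSQL_ROOT_PASSWORD=changeme mariadb:10
# Web container with Apache + PHP, published on port 80
docker run -d --name web --network lamp -p 80:80 php:apache
# The web container can now reach the database at host "db", port 3306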
This looks like a perfect example of where my docker-systemctl-replacement would fit in. It can easily interpret "systemctl start httpd.service" without an active systemd around. I have done the same for some database services, but not specifically mariadb.service - maybe you could give it a try.
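A rough sketch of how such a replacement is typically wired into the Dockerfile (the script name and path below are assumptions; check the project's README for the exact file to copy):
# Replace the systemctl binary with the script, then let it act as PID 1
COPY systemctl3.py /usr/bin/systemctl
RUN systemctl enable httpd.service mariadb.service
CMD ["/usr/bin/systemctl"]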

sudo service mongodb restart gives "unrecognized service" error on Ubuntu 14.04

I just installed MongoDB on Ubuntu 14.04.
I tried to start the shell but I'm getting a connection refused error.
me@medev:/etc/init.d$ mongo
MongoDB shell version: 2.6.5
connecting to: test
2014-11-10T15:06:28.084-0500 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2014-11-10T15:06:28.085-0500 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed
So I decided to try to restart the service but that's failing too. I get the following error message:
me@medev:/etc/init.d$ sudo service mongodb restart
mongodb: unrecognized service
me@medev:/etc/init.d$
This is what I have in my /var/log/mongodb/mongod.log - http://pastebin.com/MrHt8tce
What I've tried so far:
I found another post here: can't start mongodb as sudo
which had a comment about removing the mongo lock file.
I deleted the lock file and then retried my command but it still fails as you can see below:
me@medev:/var/lib/mongodb$ sudo rm mongod.lock
me@medev:/var/lib/mongodb$ ls
journal local.0 local.ns _tmp
me@medev:/var/lib/mongodb$ sudo service mongodb start
mongodb: unrecognized service
But I can start it using /etc/init.d as you can see below:
me@medev:/var/lib/mongodb$ sudo /etc/init.d/mongod start
Rather than invoking init scripts through /etc/init.d, use the service(8)
utility, e.g. service mongod start
Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the start(8) utility, e.g. start mongod
mongod start/running, process 27469
me@medev:/var/lib/mongodb$ ls
journal local.0 local.ns mongod.lock
me@medev:/var/lib/mongodb$ mongo
MongoDB shell version: 2.6.5
connecting to: test
> db
test
>
Any ideas on why I can't start it using the service command would be appreciated. From what I've read, I should be using sudo service mongodb.
Try this:
Write mongodb instead of mongod
sudo service mongodb status
I got the same error one day. You should use this:
1. Get the status of your mongo service:
/etc/init.d/mongod status
or
sudo service mongod status
2. If it's not started, repair it like this:
sudo rm /var/lib/mongodb/mongod.lock
mongod --repair
sudo service mongodb start
And check again whether the service has started (step 1).
For me the solution was to replace
service mongod start
with
start mongod
You need to make sure the file (ex. /etc/init.d/mongodb) has execute permissions.
chmod +x /etc/init.d/mongodb
For debian, from the 10gen repo, between 2.4.x and 2.6.x, they renamed the init script /etc/init.d/mongodb to /etc/init.d/mongod, and the default config file from /etc/mongodb.conf to /etc/mongod.conf, and the PID and lock files from "mongodb" to "mongod" too. This made upgrading a pain, and I don't see it mentioned in their docs anywhere. Anyway, the solution is to remove the old "mongodb" versions:
update-rc.d -f mongodb remove
rm /etc/init.d/mongodb
rm /var/run/mongodb.pid
diff -ur /etc/mongodb.conf /etc/mongod.conf
Now, look and see what config changes you need to keep, and put them in mongod.conf.
Then:
rm /etc/mongodb.conf
Now you can:
service mongod restart
I installed the mongo server on Debian Jessie using the manual from the official site.
It didn't start after the recommended command sudo service mongod restart, failing with the same error - mongodb: unrecognized service.
After looking into the installed package contents, I noticed that it contains only a systemd service unit, but no SysV init script:
# dpkg -L mongodb-org-server
/.
/usr
/usr/bin
/usr/bin/mongod
/usr/share
/usr/share/lintian
/usr/share/lintian/overrides
/usr/share/lintian/overrides/mongodb-org-server
/usr/share/doc
/usr/share/doc/mongodb-org-server
/usr/share/doc/mongodb-org-server/LICENSE-Community.txt
/usr/share/doc/mongodb-org-server/README
/usr/share/doc/mongodb-org-server/copyright
/usr/share/doc/mongodb-org-server/changelog.gz
/usr/share/doc/mongodb-org-server/GNU-AGPL-3.0.gz
/usr/share/doc/mongodb-org-server/THIRD-PARTY-NOTICES.gz
/usr/share/doc/mongodb-org-server/MPL-2.gz
/usr/share/man
/usr/share/man/man1
/usr/share/man/man1/mongod.1.gz
/etc
/etc/mongod.conf
/lib
/lib/systemd
/lib/systemd/system
/lib/systemd/system/mongod.service
But my system was running on SysV init:
# stat /proc/1/exe
File: '/proc/1/exe' -> '/sbin/init'
So, there are 2 options now:
(Continue on SysV) Write a SysV init script manually, as @khylo mentioned above
(Switch to systemd) and run systemctl start mongod
For me nothing had helped; I ended up with this solution:
Create a /lib/systemd/system/mongod.service file with the following content:
[Unit]
Description=High-performance, schema-free document-oriented database
After=network.target
Documentation=https://docs.mongodb.org/manual
[Service]
User=mongodb
Group=mongodb
ExecStart=/usr/bin/mongod --quiet --config /etc/mongod.conf
[Install]
WantedBy=multi-user.target
Then the start/stop commands should work:
$ sudo service mongod start
For reference - I have Ubuntu 14.04 LTS, MongoDB 3.2.9 installed from
deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.2 multiverse
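If the unit file is created while systemd is already running, a daemon-reload is normally needed before the new service name is recognized; a minimal sequence would be:
sudo systemctl daemon-reload
sudo systemctl enable mongod
sudo systemctl start mongod
sudo systemctl status mongod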
You can use the mongod command instead of mongodb. If you find any issue regarding dbpath in mongo, you can use my answer in the link below:
https://stackoverflow.com/a/53057695/8247133
I think you may have installed the version of MongoDB for the wrong distro.
Take a look at how to install MongoDB on Ubuntu and Debian:
http://docs.mongodb.org/manual/tutorial/install-mongodb-on-debian/
http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
I had a similar problem, and what happened was that I was installing the Ubuntu packages on Debian.
Original Source - https://www.techrepublic.com/article/how-to-install-mongodb-community-edition-on-ubuntu-linux/
If you're on Ubuntu 16.04 and face the unrecognized service error, these instructions will fix it for you:
Open a terminal window.
Issue the command sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
Issue the command sudo touch /etc/apt/sources.list.d/mongodb-org.list
Issue the command sudo gedit /etc/apt/sources.list.d/mongodb-org.list
Copy and paste one of the following lines from below (depending upon your release) into the open file.
For 12.04: deb http://repo.mongodb.org/apt/ubuntu precise/mongodb-org/3.6 multiverse
For 14.04: deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.6 multiverse
For 16.04: deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.6 multiverse
Make sure to edit the version number with the appropriate latest version and save the file.
Installation
Open a terminal window and issue command sudo apt-get update && sudo apt-get install -y mongodb-org
Let the installation complete.
Running MongoDB
To start the database, issue the command sudo service mongodb start. You should now be able to see that MongoDB is running by issuing: systemctl status mongodb
Ubuntu 16.04 solution
If you are using Ubuntu 16.04, you may run into an issue where you see the error mongodb: unrecognized service due to the switch from upstart to systemd. To get around this, you have to follow these steps.
If you added the /etc/apt/sources.list.d/mongodb-org.list, remove it with the command sudo rm /etc/apt/sources.list.d/mongodb-org.list
Update apt with the command sudo apt-get update
Install the official MongoDB version from the standard repositories with the command sudo apt-get install mongodb in order to get the service set up properly
Remove what you just installed with the command sudo apt-get remove mongodb && sudo apt-get autoremove
Now follow steps 1 through 5 listed above to install MongoDB; this should re-install the latest version of MongoDB with the systemd services already in place. When you issue the command systemctl status mongodb you should see that the server is active.
I mostly copy pasted the above (with minor modifications and typo fixes) from here - https://www.techrepublic.com/article/how-to-install-mongodb-community-edition-on-ubuntu-linux/
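Condensed into plain commands, the workaround above is roughly this sequence (the repo line is the 16.04 / 3.6 one from step 5; adjust for your release and version):
sudo rm /etc/apt/sources.list.d/mongodb-org.list
sudo apt-get update
sudo apt-get install mongodb                          # distro package, only to lay down the service files
sudo apt-get remove mongodb && sudo apt-get autoremove
echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.6 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org.list
sudo apt-get update && sudo apt-get install -y mongodb-org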
This is a simple solution that worked for me with the same problem (I think):
mv /var/lib/mongodb /var/lib/mongodb_backup
mkdir /var/lib/mongodb
chmod 700 /var/lib/mongodb
chown mongodb:daemon /var/lib/mongodb
systemctl restart mongodb or service mongod restart
If you're running Ubuntu in WSL (Windows Subsystem for Linux), you will have issues because WSL does not currently support systemd.
The link below explains how to run MongoDB without systemd, and even how to add a script for using the service command with WSL.
https://learn.microsoft.com/en-us/windows/wsl/tutorials/wsl-database#mongodb-init-system-differences
Tutorials may start MongoDB using the operating system's built-in init system, so you might see the command sudo systemctl status mongodb used in tutorials or articles. Currently WSL does not have support for systemd (a service management system in Linux).
You shouldn't notice a difference, but if a tutorial recommends using sudo systemctl, use sudo /etc/init.d/ instead. For example, sudo systemctl status docker for WSL would be sudo /etc/init.d/docker status, or you can also use sudo service docker status.
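Applied to MongoDB on WSL, that translates to something like the following (whether the script is called mongodb or mongod depends on how you installed it):
sudo service mongodb status
sudo service mongodb start
# or, if the service wrapper is not available:
sudo /etc/init.d/mongodb start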