Configure keycloak.service to run Keycloak 18.0.2 as a daemon process in RHEL - keycloak

I am trying to configure keycloak.service in systemd to run Keycloak 18.0.2 as a daemon process. There is a current folder which is a symlink to the kk folder. I am trying to start kk in dev mode on port 8180:
[Unit]
Description=Keycloak
After=network.target
[Service]
Type=idle
User=keycloak
Group=keycloak
ExecStart=/opt/keycloak/current/bin/kc.sh start-dev --http-port=8180
TimeoutStartSec=600
TimeoutStopSec=600
[Install]
WantedBy=multi-user.target
But it didn't work.
Also, if I just run
bin/kc.sh start-dev --http-port=8180
it works correctly, but not as a daemon process.

Solved the problem. The right configuration is:
[Unit]
Description=Keycloak
After=network.target
[Service]
User=keycloak
Group=keycloak
ExecStart=/opt/keycloak/current/bin/kc.sh start-dev --http-port=8180
[Install]
WantedBy=multi-user.target
Make sure that the correct user has all needed rights:
chown keycloak: -R /opt/keycloak
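After installing the unit file, reload systemd and enable the service so it starts at boot (assuming it is saved as /etc/systemd/system/keycloak.service):
sudo systemctl daemon-reload
sudo systemctl enable --now keycloak
journalctl -u keycloak -f   # follow the startup logs to verify it comes up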

You are missing the configuration file for Keycloak to run. Another solution would be to directly copy the file at ~/keycloak/docs/contrib/scripts/systemd/wildfly.service to your /etc/systemd/system/ directory (note this script ships with the legacy WildFly distribution; Keycloak 17+ is Quarkus-based, so it will not exist in an 18.0.2 tarball). Either way, your daemon file should look like this:
[Unit]
Description=The Keycloak Server
After=network.target
[Service]
EnvironmentFile=/etc/keycloak/keycloak.conf
User=keycloak
Group=keycloak
PIDFile=/var/run/keycloak/keycloak.pid
ExecStart=/opt/keycloak/bin/launch.sh $WILDFLY_MODE $WILDFLY_CONFIG $WILDFLY_BIND
StandardOutput=null
[Install]
WantedBy=multi-user.target

You should add ! before the command and it will work, like this:
ExecStart=!/opt/keycloak/current/bin/kc.sh start-dev --http-port=8180
(The ! prefix makes systemd execute this command with full privileges, ignoring privilege restrictions such as User= and Group= for that process.)

Related

How to pull mongodb logs with Wazuh agent?

I made the following settings in /var/ossec/etc/ossec.conf and restarted the agent afterwards, but it's not showing logs on the Kibana dashboard:
<localfile>
<log_format>syslog</log_format>
<location>/var/log/mongodb/mongod.log</location>
I performed a basic installation of Wazuh + MongoDB on agent side with the following results:
MongoDB by default writes inside syslog file located at /var/log/syslog.
Inside /var/log/mongodb/mongod.log there are internal mongo daemon logs that are more specific.
We could monitor such logs on Wazuh agent by:
<localfile>
<log_format>syslog</log_format>
<location>/var/log/syslog</location>
</localfile>
This rule is included by default on the agent, but it is good to remember anyway.
The other one, as you pointed out, is:
<localfile>
<log_format>syslog</log_format>
<location>/var/log/mongodb/mongod.log</location>
</localfile>
I only see that you didn't copy the closing tag </localfile>, but it could be a copy mistake. Either way, it is good to take a look at /var/ossec/logs/ossec.log to find any errors.
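To apply the config change and watch for errors, restart the agent and tail its log (assuming a systemd-managed agent):
sudo systemctl restart wazuh-agent
sudo tail -f /var/ossec/logs/ossec.log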
With that configuration we could receive alerts like this:
** Alert 1595929148.661787: - syslog,access_control,authentication_failed,pci_dss_10.2.4,pci_dss_10.2.5,gpg13_7.8,gdpr_IV_35.7.d,gdpr_IV_32.2,hipaa_164.312.b,nist_800_53_AU.14,nist_800_53_AC.7,tsc_CC6.1,tsc_CC6.8,tsc_CC7.2,tsc_CC7.3,
2020 Jul 28 09:39:08 (ubuntu-bionic) any->/var/log/mongodb/mongod.log
Rule: 2501 (level 5) -> 'syslog: User authentication failure.'
2020-07-28T09:39:07.431+0000 I ACCESS [conn38] SASL SCRAM-SHA-1 authentication failed for root on admin from client 127.0.0.1:52244 ; UserNotFound: Could not find user "root" for db "admin"
This alert appears if we run mongo -u root (with a bad password) on the agent side.

MongoDB connecting to Node on AWS MEAN stack

I have been trying for a few days to connect my MongoDB to the MEAN stack without success. It's running on AWS Lightsail (Bitnami). The website itself is running fine, except for any pages that have an ajax/db call, as the database is not connecting/authenticating my connection string.
I am using the mongo, node, and express parts of the stack; I do not need or know any Angular at present. I thought this would be easier than setting up on NodeJS and then adding MongoDB separately (well, I did try that first, with similar problems). I do intend to learn Angular in future, so this is probably better long term. Server-side setup is currently a weakness.
I am using mongoose for the connection. I can access the database using:
mongo admin --username root -p password via SSH.
I can also access the db via RockMongo over SSH. There is currently only one admin user in the database, i.e. root.
My initial server/startup file is below:
server.js
const app = require('/opt/bitnami/apps/MYAPP/app.js');
require('dotenv').config({ path: 'variables.env' });
const mongoose = require("mongoose");
mongoose.Promise = global.Promise;
mongoose.connect(process.env.DATABASE_CONN);
app.listen(3000, function () {
  console.log("Server has started!");
});
variables.env
DATABASE_CONN = mongodb://root:password@127.0.0.1:27017/MYAPPDATABASE
I have also tried many other connection strings, exchanging root for the bitnami default user, etc.
When I go to my app folder and start the server (npm start or node server.js), the website starts up, but with the MongoDB authentication errors below (only the first section is shown):
> Server has started!
Connection error: { MongoError: Authentication failed.
    at /opt/bitnami/apps/MYAPP/node_modules/mongoose/node_modules/mongodb-core/lib/connection/pool.js:595:61
    at authenticateStragglers (/opt/bitnami/apps/MYAPP/node_modules/mongoose/node_modules/mongodb-core/lib/connection/pool.js:513:16)
    at Connection.messageHandler (/opt/bitnami/apps/MYAPP/node_modules/mongoose/node_modules/mongodb-core/lib/connection/pool.js:549:5)
    at emitMessageHandler (/opt/bitnami/apps/MYAPP/node_modules/mongoose/node_modules/mongodb-core/lib/connection/connection.js:309:10)
    at Socket.<anonymous> (/opt/bitnami/apps/MYAPP/node_modules/mongoose/node_modules/mongodb-core/lib/connection/connection.js:452:17)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at addChunk (_stream_readable.js:263:12)
    at readableAddChunk (_stream_readable.js:250:11)
    at Socket.Readable.push (_stream_readable.js:208:10)
    at TCP.onread (net.js:597:20)
  name: 'MongoError',
  message: 'Authentication failed.',
  ok: 0,
  errmsg: 'Authentication failed.',
  code: 18,
  codeName: 'AuthenticationFailed' }
Any help or direction would be much appreciated. Thank you kindly.
Mos.
OK. Solution found.
In the mongo.conf file I set the dbpath to /data/db.
The mongo shell is pointed to /opt/bitnami/mongodb/tmp/mongodb-27017.sock "$@".
Go to /opt/bitnami/mongodb/bin/mongo
and change /opt/bitnami/mongodb/tmp/mongodb-27017.sock "$@" to /tmp/mongodb-27017.sock "$@".
You can do that using sudo nano /opt/bitnami/mongodb/bin/mongo and then editing the file.
I still have noauth turned on, so the next step is getting my db connection string to authenticate.
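When auth is re-enabled, note that the root user was created in the admin database, so the connection will likely need the authentication database spelled out. A quick way to test from the shell (credentials are placeholders):
mongo 127.0.0.1:27017/MYAPPDATABASE -u root -p password --authenticationDatabase admin
The equivalent in the connection string is appending ?authSource=admin to the URI in variables.env.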
Hope it helps someone.
Thanks
Some improvement: I have edited mongo.conf for now to enable noauth. I then ran mongod, which stated there was no /data/db folder in which it stores data. So I created the folders and ran mongod again. Now all pages work, but the mongo shell command 'mongo' will not work in the terminal.
I think it's because the mongod dbpath is set to /data/db while the mongodb conf file dbpath is set to /opt/bitnami/mongodb/data/db.
So I am trying to update the mongod dbpath, but it doesn't seem to update.

ceph-mon[1437]: warning: unable to create /var/run/ceph: (13) Permission denied

I followed the Ceph documentation for a manual install, using tarballs. The installation process went smoothly, but when I start the service it displays a warning:
Started Ceph cluster monitor daemon.
ceph-mon[1437]: warning: unable to create /var/run/ceph: (13) Permission denied
ceph-mon[1437]: 2018-08-15 12:21:08.625 7f04fa393180 -1 asok(0x55dee6e4c240) AdminSocketConfigObs::init: failed:
So, I ran
chmod 775 -R /var/run/
After that, the ceph-mon service is normal, but when the system is rebooted, the warning appears again.
I tried to change /etc/ceph/ceph.conf. I added:
[client]
admin socket = /tmp/ceph/$cluster-$name.asok
But it didn't work. What should I do?
I solved this question.
There are two methods in total:
1. Modify the ceph-mon@.service file, replacing the ceph user with root:
ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id %i --setuser root --setgroup root
or
2. Modify the ceph.conf file and add:
[mon]
run dir = XXXX (the path you want to use)
Please try:
chown ceph:ceph /var/run/ceph
You can check whether the directory /var/run/ceph exists. If not, create the directory and give ownership to your ceph user:
sudo mkdir /var/run/ceph
sudo chown ceph:ceph /var/run/ceph
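Note that /var/run is normally a tmpfs on modern systems, so a directory created by hand disappears at every reboot, which is why the warning keeps coming back. A sketch of a tmpfiles.d entry that recreates it at boot (file name, mode, and path are assumptions):
# /etc/tmpfiles.d/ceph.conf
d /var/run/ceph 0770 ceph ceph -
Apply it immediately with sudo systemd-tmpfiles --create, or simply reboot.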

Graylog2 mongo profiler plugin won't connect to mongo instance, error: "Exception opening socket" (everything dockerized)

I am trying to play with Graylog's mongo profiler plugin, using Docker to run everything, but I can't get any profiling logs into Graylog.
When I start the mongo input from the Graylog UI, it eventually times out with an error:
Timed out after 30000 ms while waiting for a server that matches WritableServerSelector. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:37017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}].
This is my setup, based on following the Graylog Docker Hub installation and the mongo profiler plugin guide, with a few bits modified:
(1) I bring up graylog, mongo and elastic using a docker-compose file:
version: '2'
services:
  some-mongo:
    image: "mongo:3"
  some-elasticsearch:
    image: "elasticsearch:2"
    command: "elasticsearch -Des.cluster.name='graylog'"
  graylog:
    image: graylog2/server:2.2.1-1
    environment:
      GRAYLOG_PASSWORD_SECRET: somepasswordpepper
      GRAYLOG_ROOT_PASSWORD_SHA2: 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      GRAYLOG_WEB_ENDPOINT_URI: http://127.0.0.1:9000/api
    links:
      - some-mongo:mongo
      - some-elasticsearch:elasticsearch
    ports:
      - "9000:9000"
      - "514:514/udp"
      - "12202:12202"
      - "37017:37017"
That has worked fine so far, and I've been able to send in syslog UDP messages and GELF HTTP messages.
(2) I created a separate mongo docker container with the port mapped, because I worried that if I used 27017, Graylog might look at its own internal MongoDB container:
docker run -d -p 37017:27017 mongo:2.4
I start a mongo session and enable profiling for a "graylog" database:
$ mongo --port 37017
> use graylog
> db.setProfilingLevel(2)
{ "was" : 0, "slowms" : 100, "ok" : 1 }
> db.foo.insert({_id:1})
// Check that profiling data is being written to system.profile:
> db.system.profile.find().limit(1).sort( { ts : -1 } ).pretty()
{
  "op" : "query",
  "ns" : "graylog.foo",
  "query" : {
  },
  "ntoreturn" : 0,
  "ntoskip" : 0,
  ....
  "allUsers" : [ ],
  "user" : ""
}
So it seems like the mongod instance is running and profiling is working.
(3) I download the plugin jar and docker cp it into the plugins dir inside the graylog docker container. Something like:
docker cp graylog-plugin-mongodb-profiler-2.0.1.jar e89a2decda37:/usr/share/graylog/plugin
Then restart graylog.
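For this dockerized setup that is just (using the container ID from the docker cp step):
docker restart e89a2decda37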
I can see that the file is there:
$ docker exec -it e89a2decda37 /bin/sh
# ls /usr/share/graylog/plugin
graylog-plugin-anonymous-usage-statistics-2.2.1.jar graylog-plugin-map-widget-2.2.1.jar
graylog-plugin-beats-2.2.1.jar graylog-plugin-mongodb-profiler-2.0.1.jar
graylog-plugin-collector-2.2.1.jar graylog-plugin-pipeline-processor-2.2.1.jar
graylog-plugin-enterprise-integration-2.2.1.jar
So that part seemed to work fine and I can see an entry "Mongo profiler input" in the list of input types in the graylog UI.
(4) I create a "Mongo profiler input" input with:
hostname: localhost
port: 37017
database: graylog
(5) After I click save, the input tries to start then eventually fails like above. Restarting graylog or trying to restart the input results in the same failures.
I have tried step (2) with different versions of mongo in case there was some driver incompatibility, but they all fail with the same error. I've tried docker images:
mongo:3
mongo:2.6
mongo:2.4
Thanks in advance!
As Thilo suggested above, the hostname for the Graylog input shouldn't be "localhost", as that points Graylog at the docker container hosting it.
So I found the ip of the host machine using:
docker exec -it [CONTAINER ID] /bin/sh
/sbin/ip route|awk '/default/ { print $3 }'
and rewired the input and Bob's your Uncle!
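As an alternative to hard-coding the host IP, the profiled mongo container could be attached to the compose network and addressed by name; a sketch, assuming the compose project's default network is named myproject_default (both names are hypothetical):
# join the profiled mongo to the same network as the graylog container
docker run -d --name profiler-mongo --network myproject_default mongo:2.4
Then the input's hostname would be profiler-mongo and the port 27017.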

Odoo v8 server won't start from Eclipse

I am trying to start the Odoo v8 server from the Eclipse IDE. I have set the debug configurations and have given the config file path in the arguments as
-c /etc/odoo-server.conf
When I do debug as Python run, I do not get any error. The log file also does not show any error. But when I open localhost:8069 from the browser,
I get a "server not found" error. This does not happen when I start the server through the terminal. Can anyone please tell me what could be the problem?
Below is the odoo-server.conf content:
[options]
; This is the password that allows database operations:
admin_passwd = admin
db_host = False
db_port = False
db_user = odoo
db_password = odoo
addons_path = /opt/odoo/addons
logfile = /var/log/odoo/odoo-server.log
Below is the server traceback:
2014-11-15 07:47:06,205 3875 INFO ? openerp: OpenERP version 8.0
2014-11-15 07:47:06,206 3875 INFO ? openerp: addons paths: ['/home/hassan/.local/share/Odoo/addons/8.0', u'/opt/odoo/addons', '/opt/odoo/openerp/addons']
2014-11-15 07:47:06,207 3875 INFO ? openerp: database hostname: localhost
2014-11-15 07:47:06,207 3875 INFO ? openerp: database port: 5432
2014-11-15 07:47:06,207 3875 INFO ? openerp: database user: odoo
2014-11-15 07:47:07,046 3875 INFO ? openerp.service.server: Evented Service (longpolling) running on 0.0.0.0:8072
Try to check whether the configuration file that you have set has appropriate access rights. Also, try not to log errors to a file; instead, let them show on the console in the Eclipse IDE.
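One way to do that is to comment out the logfile entry in /etc/odoo-server.conf; with no logfile set, Odoo logs to the standard output, which Eclipse shows in its console:
; logfile = /var/log/odoo/odoo-server.log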
There's nothing wrong with your "traceback": it is a normal log with only INFO messages, and it tells you that the server is running and waiting for requests on port 8072.
Point a browser at http://localhost:8072 and you should see a login page.
I know my answer is late, but I ran into this problem today and got the server running, so I thought I should share:
Do not start the OpenERP server as: Debug as -> Python run.
That only starts the longpolling service.
Try running it as: Run as -> Python run.
This will start the HTTP service at your defined port, or by default at 8069.
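For comparison, the equivalent terminal invocation would be something like this (the /opt/odoo checkout path is inferred from the config above):
cd /opt/odoo
./odoo.py -c /etc/odoo-server.conf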