MongooseIM 3.6.0 PostgreSQL connection issue - postgresql

Hi, I am new to MongooseIM. I am planning to set up MongooseIM locally with a PostgreSQL database connection. I have installed MongooseIM 3.6.0 on an Ubuntu 14.04 machine, created the database in Postgres, and added the schema for it. Then I made the following changes in the mongooseim.cfg file:
{outgoing_pools, [
{rdbms, global, default, [{workers, 1}], [{server, {psql, "localhost", 5432, "mongooseim", "postgres", "password"}}]}
]}.
And this one:
{rdbms_server_type, pgsql}.
These are the only changes I made to the default config file. When I restart the server, it gives the error below. The Postgres server is running and the user credentials work:
2020-06-05 18:17:46.815 [info] <0.249.0> msg: "Starting reporters with []\n", options: []
2020-06-05 18:17:47.034 [notice] <0.130.0>#lager_file_backend:143 Changed loglevel of /var/log/mongooseim/ejabberd.log to info
2020-06-05 18:17:47.136 [info] <0.43.0> Application mnesia exited with reason: stopped
2020-06-05 18:17:47.621 [error] <0.593.0>#mongoose_rdbms_psql:connect CRASH REPORT Process <0.593.0> with 0 neighbours crashed with reason: call to undefined function mongoose_rdbms_psql:connect({psql,"server",5432,"mongooseim","postgres","password"}, 5000)
2020-06-05 18:17:47.621 [error] <0.592.0>#mongoose_rdbms_psql:connect Supervisor 'wpool_pool-mongoose_wpool$rdbms$global$default-process-sup' had child 'wpool_pool-mongoose_wpool$rdbms$global$default-1' started with wpool_process:start_link('wpool_pool-mongoose_wpool$rdbms$global$default-1', mongoose_rdbms, [{server,{psql,"server",5432,"mongooseim","postgres","password"}}], [{queue_manager,'wpool_pool-mongoose_wpool$rdbms$global$default-queue-manager'},{time_checker,'wpool_pool-mongoose_wpool$rdbms$global$default-time-checker'},...]) at undefined exit with reason call to undefined function mongoose_rdbms_psql:connect({psql,"server",5432,"mongooseim","postgres","password"}, 5000) in context start_error
2020-06-05 18:17:47.622 [error] <0.589.0> Supervisor 'mongoose_wpool$rdbms$global$default' had child 'wpool_pool-mongoose_wpool$rdbms$global$default-process-sup' started with wpool_process_sup:start_link('mongoose_wpool$rdbms$global$default', 'wpool_pool-mongoose_wpool$rdbms$global$default-process-sup', [{queue_manager,'wpool_pool-mongoose_wpool$rdbms$global$default-queue-manager'},{time_checker,'wpool_pool-mongoose_wpool$rdbms$global$default-time-checker'},...]) at undefined exit with reason {shutdown,{failed_to_start_child,'wpool_pool-mongoose_wpool$rdbms$global$default-1',{undef,[{mongoose_rdbms_psql,connect,[{psql,"server",5432,"mongooseim","postgres","password"},5000],[]},{mongoose_rdbms,connect,4,[{file,"/root/deb/mongooseim/_build/prod/lib/mongooseim/src/rdbms/mongoose_rdbms.erl"},{line,668}]},{mongoose_rdbms,init,1,[{file,"/root/deb/mongooseim/_build/prod/lib/mongooseim/src/rdbms/mongoose_rdbms.erl"},{line,431}]},{wpool_process,init,1,[{file,"/root/deb/mongooseim/_build/..."},...]},...]}}} in context start_error
2020-06-05 18:17:47.622 [error] <0.583.0>#mongoose_wpool_mgr:handle_call:105 Pool not started: {error,{{shutdown,{failed_to_start_child,'wpool_pool-mongoose_wpool$rdbms$global$default-process-sup',{shutdown,{failed_to_start_child,'wpool_pool-mongoose_wpool$rdbms$global$default-1',{undef,[{mongoose_rdbms_psql,connect,[{psql,"server",5432,"mongooseim","postgres","password"},5000],[]},{mongoose_rdbms,connect,4,[{file,"/root/deb/mongooseim/_build/prod/lib/mongooseim/src/rdbms/mongoose_rdbms.erl"},{line,668}]},{mongoose_rdbms,init,1,[{file,"/root/deb/mongooseim/_build/prod/lib/mongooseim/src/rdbms/mongoose_rdbms.erl"},{line,431}]},{wpool_process,init,1,[{file,"/root/deb/mongooseim/_build/default/lib/worker_pool/src/wpool_process.erl"},{line,85}]},{gen_server,init_it,2,[{file,"gen_server.erl"},{line,374}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,342}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]}}}}},{child,undefined,'mongoose_wpool$rdbms$global$default',{wpool,start_pool,['mongoose_wpool$rdbms$global$default',[{worker,{mongoose_rdbms,[{server,{psql,"server",5432,"mongooseim","postgres","password"}}]}},{pool_sup_shutdown,infinity},{workers,1}]]},temporary,infinity,supervisor,[wpool]}}}
2020-06-05 18:17:47.678 [warning] <0.615.0>#service_mongoose_system_metrics:report_transparency:129 We are gathering the MongooseIM system's metrics to analyse the trends and needs of our users, improve MongooseIM, and know where to focus our efforts. For more info on how to customise, read, enable, and disable these metrics visit:
- MongooseIM docs -
https://mongooseim.readthedocs.io/en/latest/operation-and-maintenance/System-Metrics-Privacy-Policy/
- MongooseIM GitHub page - https://github.com/esl/MongooseIM
The last sent report is also written to a file /var/log/mongooseim/system_metrics_report.json
2020-06-05 18:17:48.404 [warning] <0.1289.0>#nkpacket_stun:get_stun_servers:231 Current NAT is changing ports!
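For reference, the `undef` in the crash report (`mongoose_rdbms_psql:connect`) suggests MongooseIM derives the driver module name from the first atom of the `server` tuple, and no `mongoose_rdbms_psql` module exists; the PostgreSQL driver atom is `pgsql`. A sketch of the corrected pool entry:

```erlang
{outgoing_pools, [
  %% The driver atom must be pgsql (resolved to module mongoose_rdbms_pgsql),
  %% not psql; everything else stays as in the question.
  {rdbms, global, default, [{workers, 1}],
   [{server, {pgsql, "localhost", 5432, "mongooseim", "postgres", "password"}}]}
]}.
```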

Related

Error while using persistent datasource using mongodb in hyperledger composer

I am trying to use a persistent datasource with MongoDB in Hyperledger Composer on an Ubuntu droplet,
but after starting the REST server and then issuing the command docker logs -f rest I am getting the following error (I have provided a link to the image):
webuser@ubuntu16:~$ docker logs -f rest
[2018-08-29T12:38:31.278Z] PM2 log: Launching in no daemon mode
[2018-08-29T12:38:31.351Z] PM2 log: Starting execution sequence in -fork mode- for app name:composer-rest-server id:0
[2018-08-29T12:38:31.359Z] PM2 log: App name:composer-rest-server id:0 online
WARNING: NODE_APP_INSTANCE value of '0' did not match any instance config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
Discovering types from business network definition ...
(node:15) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
Connection fails: Error: Error trying to ping. Error: Failed to connect before the deadline
It will be retried for the next request.
Exception: Error: Error trying to ping. Error: Failed to connect before the deadline
Error: Error trying to ping. Error: Failed to connect before the deadline
at _checkRuntimeVersions.then.catch (/home/composer/.npm-global/lib/node_modules/composer-rest-server/node_modules/composer-connector-hlfv1/lib/hlfconnection.js:806:34)
at <anonymous>
[2018-08-29T12:38:41.021Z] PM2 log: App [composer-rest-server] with id [0] and pid [15], exited with code [1] via signal [SIGINT]
[2018-08-29T12:38:41.024Z] PM2 log: Starting execution sequence in -fork mode- for app name:composer-rest-server id:0
[2018-08-29T12:38:41.028Z] PM2 log: App name:composer-rest-server id:0 online
WARNING: NODE_APP_INSTANCE value of '0' did not match any instance config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
Discovering types from business network definition ...
(node:40) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
Connection fails: Error: Error trying to ping. Error: Failed to connect before the deadline
It will be retried for the next request.
I don't understand what the problem is or what I am doing wrong, because I have followed all the steps in the Hyperledger Composer documentation successfully.
Is it because I am using it on an Ubuntu droplet? Can anyone help?
EDIT
I followed all the steps mentioned in this tutorial,
but instead of Google authentication I am using GitHub authentication.
I have also changed localhost to the IP of my Ubuntu droplet in the connection.json file, and also in this command:
sed -e 's/localhost:7051/peer0.org1.example.com:7051/' -e 's/localhost:7053/peer0.org1.example.com:7053/' -e 's/localhost:7054/ca.org1.example.com:7054/' -e 's/localhost:7050/orderer.example.com:7050/' < $HOME/.composer/cards/restadmin@trade-network/connection.json > /tmp/connection.json && cp -p /tmp/connection.json $HOME/.composer/cards/restadmin@trade-network/
But still with no success! I get the following error now:
webuser@ubuntu16:~$ docker logs rest
[2018-08-30T05:03:02.916Z] PM2 log: Launching in no daemon mode
[2018-08-30T05:03:02.989Z] PM2 log: Starting execution sequence in -fork mode- for app name:composer-rest-server id:0
[2018-08-30T05:03:02.997Z] PM2 log: App name:composer-rest-server id:0 online
WARNING: NODE_APP_INSTANCE value of '0' did not match any instance config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
Discovering types from business network definition ...
(node:15) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
Discovering the Returning Transactions..
Discovered types from business network definition
Generating schemas for all types in business network definition ...
Generated schemas for all types in business network definition
Adding schemas for all types to Loopback ...
Added schemas for all types to Loopback
SyntaxError: Unexpected string in JSON at position 92
at JSON.parse ()
at Promise.then (/home/composer/.npm-global/lib/node_modules/composer-rest-server/server/server.js:141:34)
at
at process._tickDomainCallback (internal/process/next_tick.js:228:7)
[2018-08-30T05:03:09.942Z] PM2 log: App [composer-rest-server] with id [0] and pid [15], exited with code 1 via signal [SIGINT]
The error Error trying to ping. Error: Failed to connect before the deadline means that the composer-rest-server in the container cannot see/connect to the underlying Fabric at the URLs in the connection.json of the card that you are using to start the REST server.
There are a number of possible reasons:
The Fabric is not started
You are using a Business Network Card that has localhost in the URLs of the connection.json, and localhost just re-directs back into the rest container.
Your rest container is started on a different Docker network bridge to your Fabric containers and cannot connect to the Fabric.
Have you followed this tutorial in the Composer documentation? If followed completely it will avoid the 3 problems mentioned above.
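For the second cause above, the question's own sed rewrite is the right idea: inside the rest container, localhost resolves to the container itself, so every Fabric URL in the card's connection.json must use the Fabric containers' hostnames instead. A minimal sketch of one such rewrite (the peer hostname is the standard fabric-dev-servers one; adjust for your network):

```shell
# Rewrite a localhost peer URL to the peer container's hostname,
# as done for the whole connection.json in the sed command in the question.
url=$(echo 'grpc://localhost:7051' \
  | sed -e 's/localhost:7051/peer0.org1.example.com:7051/')
echo "$url"
```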

kaa data collection doesn't retrieve data mongodb

I installed the Kaa IoT server manually on Ubuntu 16.04 and used the data collection sample to test how it works.
The code runs without any errors, but when I run the commands below, nothing happens:
mongo kaa
db.logs_$my_app_token$.find()
I even commented out bind_ip in mongodb.conf and restarted the mongodb, zookeeper, and kaa-node services, but nothing changed.
I also regenerated the SDK and rebuilt the project, but that didn't help either.
Finally, this is the Kaa log:
2018-06-05 15:03:53,899 [Thread-3] TRACE
o.k.k.s.c.s.l.DynamicLoadManager - DynamicLoadManager recalculate() got 0 redirection rules
2018-06-05 15:03:59,472 [EPS-core-dispatcher-6] DEBUG
o.k.k.s.o.s.a.a.c.OperationsServerActor - Received: org.kaaproject.kaa.server.operations.service.akka.messages.core.stats.StatusRequestMessage#30d61bb1
2018-06-05 15:03:59,472 [EPS-core-dispatcher-6] DEBUG o.k.k.s.o.s.a.a.c.OperationsServerActor - [14fc1a87-8b34-47f6-8f39-d91aff7bfff7] Processing status request
2018-06-05 15:03:59,475 [pool-5-thread-1] INFO o.k.k.s.o.s.l.DefaultLoadBalancingService - Updated load info: {"endpointCount": 0, "loadAverage": 0.02}
2018-06-05 15:03:59,477 [Curator-PathChildrenCache-0] INFO o.k.k.s.c.s.l.DynamicLoadManager - Operations server [-1835393002][localhost:9090] updated
2018-06-05 15:03:59,477 [Curator-PathChildrenCache-4] DEBUG o.k.k.s.o.s.c.DefaultClusterService - Update of node [localhost:9090:1528181889050]-[{"endpointCount": 0, "loadAverage": 0.02}] is pushed to resolver org.kaaproject.kaa.server.hash.ConsistentHashResolver#1d0276a4
2018-06-05 15:04:03,899 [Thread-3] INFO o.k.k.s.c.s.l.LoadDistributionService - Load distribution service recalculation started...
2018-06-05 15:04:03,899 [Thread-3] INFO o.k.k.s.c.s.l.DynamicLoadManager - DynamicLoadManager recalculate() started... lastBootstrapServersUpdateFailed false
2018-06-05 15:04:03,899 [Thread-3] DEBUG o.k.k.s.c.s.l.d.EndpointCountRebalancer - No rebalancing in standalone mode
2018-06-05 15:04:03,899 [Thread-3] TRACE o.k.k.s.c.s.l.DynamicLoadManager - DynamicLoadManager recalculate() got 0 redirection rules
Thank you for your help in fixing this problem.
After lots of searching and checking, I finally found it!
There are multiple reasons this can happen:
If you are using the Kaa Sandbox, make sure that your network setting is set to bridged (not NAT).
Check your iptables rules and find out whether these ports are open: 9888, 9889, 9997, 9999.
If you are using a virtual machine as your server, make sure the host's firewall doesn't block those ports. (This is what happened to me.)
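A quick way to check the port point above from a client machine; this is a sketch that assumes bash (for its /dev/tcp redirection), and KAA_HOST is a placeholder for your server's address:

```shell
# Probe the ports Kaa needs (from the list above) and report each one.
KAA_HOST=${KAA_HOST:-localhost}   # placeholder: set to your Kaa server
checked=0
for port in 9888 9889 9997 9999; do
  if (exec 3<>"/dev/tcp/$KAA_HOST/$port") 2>/dev/null; then
    echo "port $port reachable"
  else
    echo "port $port blocked or closed"
  fi
  checked=$((checked+1))
done
```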

FTPD Server Issue

So I am trying to use my XAMPP server and for the life of me can't understand why ProFTPD will not start. It only became a cause for concern when I saw the word "bogon" in the application log. Can anyone explain what the application log means, and how I might go about troubleshooting the problem?
Stopping all servers...
Stopping Apache Web Server...
/Applications/XAMPP/xamppfiles/apache2/scripts/ctl.sh : httpd stopped
Stopping MySQL Database...
/Applications/XAMPP/xamppfiles/mysql/scripts/ctl.sh : mysql stopped
Starting ProFTPD...
Exit code: 8
Stdout:
Checking syntax of configuration file
proftpd config test fails, aborting
Stderr:
bogon proftpd[3948]: warning: unable to determine IP address of 'bogon'
bogon proftpd[3948]: error: no valid servers configured
bogon proftpd[3948]: Fatal: error processing configuration file '/Applications/XAMPP/xamppfiles/etc/proftpd.conf'
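For what it's worth, the Stderr lines point at name resolution rather than ProFTPD itself: the machine's hostname here is "bogon" (a name likely assigned by the local network's DNS), and ProFTPD cannot resolve it to an IP address, so it cannot build its default server configuration. A hedged sketch of the usual fix is to map that hostname to loopback in /etc/hosts (verify your actual hostname with the `hostname` command first):

```
# /etc/hosts -- map the machine's hostname to loopback so ProFTPD can resolve it
127.0.0.1   localhost bogon
```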

Lagom's embedded Kafka fails to start after killing Lagom process once

I've played around with the lagom-scala-word-count Activator template and was forced to kill the application process. Since then, the embedded Kafka doesn't work: this project and every new one I create is unusable. I've tried:
running sbt clean, to delete embedded Kafka data
creating brand new project (from other activator templates)
restarting my machine.
Despite this, I can't get Lagom to work. During the first launch I get the following lines in the log:
[warn] o.a.k.c.NetworkClient - Error while fetching metadata with correlation id 1 : {wordCount=LEADER_NOT_AVAILABLE}
[warn] o.a.k.c.NetworkClient - Error while fetching metadata with correlation id 2 : {wordCount=LEADER_NOT_AVAILABLE}
[warn] o.a.k.c.NetworkClient - Error while fetching metadata with correlation id 4 : {wordCount=LEADER_NOT_AVAILABLE}
[warn] a.k.KafkaConsumerActor - Consumer interrupted with WakeupException after timeout. Message: null. Current value of akka.kafka.consumer.wakeup-timeout is 3000 milliseconds
[warn] a.k.KafkaConsumerActor - Consumer interrupted with WakeupException after timeout. Message: null. Current value of akka.kafka.consumer.wakeup-timeout is 3000 milliseconds
Next launches result in:
[info] Starting Kafka
[info] Starting Cassandra
....Kafka Server closed unexpectedly.
....
[info] Cassandra server running at 127.0.0.1:4000
I've posted full server.log from lagom-internal-meta-project-kafka at https://gist.github.com/szymonbaranczyk/a93273537b42aafa45bf67446dd41adb.
Is it possible that some corrupted embedded Kafka data is stored globally on my PC and causes this?
For future reference, as James mentioned in the comments, you have to delete the folder lagom-internal-meta-project-kafka in target/lagom-dynamic-projects. I don't know why it doesn't get deleted automatically.
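A sketch of that cleanup (path taken from the answer above; run it from the Lagom project root while the service is stopped):

```shell
# Remove the embedded Kafka state that survives `sbt clean`.
rm -rf target/lagom-dynamic-projects/lagom-internal-meta-project-kafka
```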

Zoomdata error - Connect to localhost:3333 [localhost/127.0.0.1] failed: Connection refused

Good morning.
I am an intern currently working on a proof of concept requiring me to use zoomdata to visualize data from elasticsearch on ubuntu 16.04 LTS.
I managed to connect both on docker - in short, each process ran in a separate, isolated container and communicated through ports - but the performance wasn't good enough (the visualisation encountered bugs for files > 25 mb).
So I decided to install zoomdata on my computer (trial, .deb version)
I also installed mongodb first.
However, when I run zoomdata, I have two issues, and I believe that solving the second might solve the first:
-the first is that when I create a new elasticsearch connection, I enter exactly the same parameters as with docker (I've triple-checked, they are accurate):
node name: elasticsearch (the name of my node)
client type : HTTP node
address: 192.168.10.4 and port: 9200
-the second is when I run zoomdata:
During the initialisation (more precisely, the Spark one) I get an error message (more details below):
c.z.s.c.r.impl.CategoriesServiceClient : Attempt: 1 to send
categories registration failed. Connect to localhost:3333
[localhost/127.0.0.1] failed: Connection refused
followed by the same message over and over again (the attempt number changes) as the rest of the program executes normally.
I took a look at the logs - computer(bug) and docker(works).
Note: the SAML error doesn't stop the docker version from working, so unless fixing it fixes other problems, it's not my priority.
Computer:
2016-06-15 16:04:06.680 ERROR 1 --- [ main]
c.z.core.service.impl.SamlService : Error initializing SAML.
Disabling SAML.
2016-06-15 15:58:12.125 INFO 8149 --- [ main]
com.zoomdata.dao.mongo.KeyValueDao : Inserting into key value
collection com.zoomdata.model.dto.SamlConfig#4f3431a1
2016-06-15 15:58:12.789 INFO 8149 --- [ main]
c.z.core.init.SystemVersionBeanFactory : Server version 2.2.6,
database version 2.2.6, git commit :
0403c951946e5daf03000d83ec45ad85d0ce4a56, built on 06060725
2016-06-15 15:58:17.571 ERROR 8149 --- [actory-thread-1]
c.z.s.c.r.impl.CategoriesServiceClient : Attempt: 1 to send
categories registration failed. Connect to localhost:3333
[localhost/127.0.0.1] failed: Connection refused
2016-06-15 15:58:17.776 ERROR 8149 --- [actory-thread-1]
c.z.s.c.r.impl.CategoriesServiceClient : Attempt: 2 to send
categories registration failed. Connect to localhost:3333
[localhost/127.0.0.1] failed: Connection refused
2016-06-15 15:58:18.537 INFO 8149 --- [ main]
c.z.c.s.impl.SparkContextManager$2 : Running Spark version 1.5.1
Docker:
2016-06-15 16:04:06.680 ERROR 1 --- [ main]
c.z.core.service.impl.SamlService : Error initializing SAML.
Disabling SAML.
2016-06-15 16:04:06.681 INFO 1 --- [ main]
com.zoomdata.dao.mongo.KeyValueDao : Inserting into key value
collection com.zoomdata.model.dto.SamlConfig#43b09564
2016-06-15
16:04:07.209 INFO 1 --- [ main]
c.z.core.init.SystemVersionBeanFactory : Server version 2.2.7,
database version 2.2.7, git commit :
97c9ba3c3662d1646b4e380e7640766674673039, built on 06131356
2016-06-15
16:04:12.117 INFO 1 --- [actory-thread-1]
c.z.s.c.r.impl.CategoriesServiceClient : Registered to handle
categories: [field-refresh, source-refresh]
2016-06-15 16:04:12.427 INFO 1 --- [ main]
c.z.c.s.impl.SparkContextManager$2 : Running Spark version 1.5.1
The server versions are different (2.2.6 for computer, 2.2.7 for docker). I'll try to update the computer one, but I don't have much hope that it will work.
What I tried already:
-use zoomdata as root: zoomdata refused, and the internet told me why it was a bad idea to begin with.
-deactivate all firewalls: didn't do anything.
-search on the internet: didn't manage to solve it.
I'm all out of ideas, and would be grateful for any assistance.
EDIT: I updated to 2.2.7, but the problem persists.
I'm wondering if there may be a problem with authorisations (since the connection is refused).
I also tried disabling SSL on my zoomdata server.
EDIT: after a discussion on the logs and .properties files seen here,
the issue is that the scheduler service is not connecting to MongoDB, although Zoomdata itself does connect.
I filled in the files following the installation guide, but to no avail.
From the logs I can see that the scheduler cannot connect to Mongo because of a socket exception. I found a possible solution here:
stackoverflow.com/questions/15419325/mongodb-insert-fails-due-to-socket-exception
Comment out the line in your mongod.conf that binds the IP to 127.0.0.1 (it is usually set to 127.0.0.1 by default).
For Linux, this config file should be located at /etc/mongod.conf.
Once you comment that out, mongod will accept connections from all interfaces.
This fixed it for me, as I was getting these socket exceptions as well.
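Concretely, the change looks like this in /etc/mongod.conf (older ini-style syntax shown, matching the linked answer; on YAML-style configs the key is net.bindIp instead). Restart mongod afterwards:

```
# /etc/mongod.conf
# bind_ip = 127.0.0.1   <- commented out so mongod listens on all interfaces
```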