Kaa data collection doesn't retrieve data from MongoDB

I installed the Kaa IoT server manually on Ubuntu 16.04 and used the data collection sample to test how it works.
The code runs without any errors, but when I run the commands below, nothing is returned:
mongo kaa
db.logs_$my_app_token$.find()
I even commented out bind_ip in mongodb.conf and restarted the mongodb, zookeeper, and kaa-node services, but nothing changed.
I also regenerated the SDK and rebuilt the project, but that didn't help either.
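(For reference, a quick way to double-check whether the data is landing under a different collection name, a minimal sketch assuming the default MongoDB log appender is configured for the application:)
mongo kaa
// list every collection; the logs collection name embeds the application token
show collections
// equivalent check from the JavaScript shell
db.getCollectionNames()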
Finally, this is the Kaa log:
2018-06-05 15:03:53,899 [Thread-3] TRACE
o.k.k.s.c.s.l.DynamicLoadManager - DynamicLoadManager recalculate() got 0 redirection rules
2018-06-05 15:03:59,472 [EPS-core-dispatcher-6] DEBUG
o.k.k.s.o.s.a.a.c.OperationsServerActor - Received: org.kaaproject.kaa.server.operations.service.akka.messages.core.stats.StatusRequestMessage#30d61bb1
2018-06-05 15:03:59,472 [EPS-core-dispatcher-6] DEBUG o.k.k.s.o.s.a.a.c.OperationsServerActor - [14fc1a87-8b34-47f6-8f39-d91aff7bfff7] Processing status request
2018-06-05 15:03:59,475 [pool-5-thread-1] INFO o.k.k.s.o.s.l.DefaultLoadBalancingService - Updated load info: {"endpointCount": 0, "loadAverage": 0.02}
2018-06-05 15:03:59,477 [Curator-PathChildrenCache-0] INFO o.k.k.s.c.s.l.DynamicLoadManager - Operations server [-1835393002][localhost:9090] updated
2018-06-05 15:03:59,477 [Curator-PathChildrenCache-4] DEBUG o.k.k.s.o.s.c.DefaultClusterService - Update of node [localhost:9090:1528181889050]-[{"endpointCount": 0, "loadAverage": 0.02}] is pushed to resolver org.kaaproject.kaa.server.hash.ConsistentHashResolver#1d0276a4
2018-06-05 15:04:03,899 [Thread-3] INFO o.k.k.s.c.s.l.LoadDistributionService - Load distribution service recalculation started...
2018-06-05 15:04:03,899 [Thread-3] INFO o.k.k.s.c.s.l.DynamicLoadManager - DynamicLoadManager recalculate() started... lastBootstrapServersUpdateFailed false
2018-06-05 15:04:03,899 [Thread-3] DEBUG o.k.k.s.c.s.l.d.EndpointCountRebalancer - No rebalancing in standalone mode
2018-06-05 15:04:03,899 [Thread-3] TRACE o.k.k.s.c.s.l.DynamicLoadManager - DynamicLoadManager recalculate() got 0 redirection rules
Thank you for your help with fixing this problem.

After lots of searching and checking, I finally found it!
There are multiple reasons why this can happen:
If you are using the Kaa Sandbox, make sure your network adapter is set to bridged mode (not NAT).
Check your iptables rules and make sure these ports are open: 9888, 9889, 9997, 9999 (a quick way to check is sketched after this list).
If you are using a virtual machine as your server, make sure the host's firewall doesn't block those ports. (This is what happened to me.)
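A minimal sketch of how to check and, if needed, open those ports with iptables (assuming a standard iptables setup; adapt to whatever firewall manager you use):
# check whether rules for the Kaa ports already exist
sudo iptables -L -n | grep -E '9888|9889|9997|9999'
# check that the Kaa node is actually listening on them
sudo ss -tlnp | grep -E '9888|9889|9997|9999'
# open them if they are being filtered
sudo iptables -A INPUT -p tcp -m multiport --dports 9888,9889,9997,9999 -j ACCEPT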

Related

Failed to establish connection to Neo4j using bolt scheme even after successfully enabling Bolt

I want to connect to a Neo4j database using my credentials. I am tunneling into a machine, and once that is done, I open my browser at the port: localhost:7474.
I tried both the neo4j and bolt schemes to connect at the URL:
bolt://<node_ip>:7687 and neo4j://<node_ip>:7687, but the connection times out.
I checked the logs but only found that the Bolt scheme is enabled:
bash-4.2$ tail -f /logs/debug.log
2021-07-02 21:26:03.323+0000 WARN [o.n.k.a.p.GlobalProcedures] Failed to load `org.apache.commons.logging.impl.LogKitLogger` from plugin jar `/home/sandbox/neo/plugins/apoc-4.2.0.2-all.jar`: org/apache/log/Logger
2021-07-02 21:26:03.946+0000 INFO [c.n.m.g.GlobalMetricsExtension] Sending metrics to CSV file at /home/sandbox/neo/metrics
2021-07-02 21:26:03.973+0000 INFO [o.n.b.BoltServer] Bolt enabled on 0.0.0.0:7687.
2021-07-02 21:26:03.974+0000 INFO [o.n.b.BoltServer] Bolt (Routing) enabled on 0.0.0.0:7688.
2021-07-02 21:26:03.974+0000 INFO [o.n.s.AbstractNeoWebServer$ServerComponentsLifecycleAdapter] Starting web server
2021-07-02 21:26:04.001+0000 INFO [o.n.s.m.ThirdPartyJAXRSModule] Mounted unmanaged extension [n10s.endpoint] at [/rdf]
2021-07-02 21:26:05.341+0000 INFO [c.n.s.e.EnterpriseNeoWebServer] Remote interface available at http://<node_ip>:7474/
2021-07-02 21:26:05.341+0000 INFO [o.n.s.AbstractNeoWebServer$ServerComponentsLifecycleAdapter] Web server started.
2021-07-02 21:35:34.565+0000 INFO [c.n.c.c.c.l.s.Segments] [system/00000000] Pruning SegmentFile{path=raft.log.0, header=SegmentHeader{formatVersion=2, recordOffset=56, prevFileLastIndex=-1, segmentNumber=0, prevIndex=-1, prevTerm=-1}}
2021-07-02 21:35:46.079+0000 INFO [c.n.c.c.c.l.s.Segments] [neo4j/32f6599b] Pruning SegmentFile{path=raft.log.0, header=SegmentHeader{formatVersion=2, recordOffset=56, prevFileLastIndex=-1, segmentNumber=0, prevIndex=-1, prevTerm=-1}}
The query log is empty, as I could not execute any query:
bash-4.2$ tail -f query.log
2021-07-02 21:25:52.510+0000 INFO Query started: id:1 - 1009 ms: 0 B - embedded-session neo4j - - call db.clearQueryCaches() - {} - runtime=pipelined - {}
2021-07-02 21:25:52.580+0000 INFO id:1 - 1080 ms: 112 B - embedded-session neo4j - - call db.clearQueryCaches() - {} - runtime=pipelined - {}
The other articles and answers I read were mostly about misconfiguration (wrong ports), but I don't think that is the case for me, since I verified from the debug.log file that my ports are fine.
FWIW, I am using 3 replicas for my Neo4j, and right now I am connecting to just one pod.
I am tunnelling both ports:
ssh -L 7687:$IP:7687 -L 7474:$IP:7474 domain_name.com -N
Perhaps you've already checked this, but if not, can you ensure that port 7687 is also forwarded? When I tunnelled via the browser, my expectation was that 7474 would be sufficient, but it turned out that forwarding 7687 is also necessary.
So, instead of providing localhost in the connection string, I made the silly mistake of writing down the actual IP, and that was the reason for the connection timeout.
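In other words (a hedged example, assuming both ports are forwarded to localhost as in the ssh command above), the browser URL and the connection string should point at localhost rather than the node's IP:
http://localhost:7474
bolt://localhost:7687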

Fusion Freeswitch Maximum Calls In Progress

I use FusionPBX with a FreeSWITCH core to build my PBX server.
My version:
FreeSWITCH version: 1.10.2-release-14-f7bdd3845a~64bit (-release-14-f7bdd3845a 64bit)
It was working fine until last month, but then user registrations reached about 1000.
I checked the FreeSWITCH log (debug level); FreeSWITCH was still working.
I checked the PostgreSQL log; it was still working too.
But clients got disconnected (WebRTC from the web using SIP.js, and Zoiper using the TCP protocol) and could not reconnect to FreeSWITCH to register, so no calls could be made at that time.
At that point the log shows "Maximum Calls In Progress".
I have tried increasing max sessions to 5000 and sessions per second to 1000, flushing the cache, and restarting FreeSWITCH, but it is still not working.
Here is my switch.conf.xml.
Here is my postgresql.conf.
Here is my log from when the server went down: fs_log
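(For reference, the two settings I increased live in switch.conf.xml; a minimal sketch with the values mentioned above, assuming the stock file layout rather than my actual file:)
<settings>
  <!-- maximum number of concurrent sessions FreeSWITCH will allow -->
  <param name="max-sessions" value="5000"/>
  <!-- throttle on how many new sessions may be created per second -->
  <param name="sessions-per-second" value="1000"/>
</settings>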
You can see I restarted FreeSWITCH in this log:
2020-07-29 14:39:08.291394 [INFO] switch_core.c:2879 Shutting down ca289c03-0617-46bf-a7af-eda4a4fe2fbb
2020-07-29 14:39:08.291394 [NOTICE] switch_core_session.c:407 Hangup sofia/internal/1100365#125.212.xxx.xxx [CS_NEW] [SYSTEM_SHUTDOWN]
Please take a look and help me solve this.

FTPD Server Issue

So I am trying to use my XAMPP server and for the life of me can't understand why my ProFTPD will not turn on. It only became a cause for concern when I saw the word "bogon" in the application log. Can anyone translate what the application log means and maybe suggest how to go about troubleshooting the problem?
Stopping all servers...
Stopping Apache Web Server...
/Applications/XAMPP/xamppfiles/apache2/scripts/ctl.sh : httpd stopped
Stopping MySQL Database...
/Applications/XAMPP/xamppfiles/mysql/scripts/ctl.sh : mysql stopped
Starting ProFTPD...
Exit code: 8
Stdout:
Checking syntax of configuration file
proftpd config test fails, aborting
Stderr:
bogon proftpd[3948]: warning: unable to determine IP address of 'bogon'
bogon proftpd[3948]: error: no valid servers configured
bogon proftpd[3948]: Fatal: error processing configuration file '/Applications/XAMPP/xamppfiles/etc/proftpd.conf'
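What the log is saying is that proftpd cannot resolve the machine's hostname ("bogon") to an IP address, so no valid server block ends up configured and the config test fails. A minimal sketch of a common workaround (assuming the default XAMPP paths; adjust the address to your setup):
# map the hostname in /etc/hosts so it resolves
127.0.0.1   localhost bogon
# or pin an address explicitly in /Applications/XAMPP/xamppfiles/etc/proftpd.conf
DefaultAddress 127.0.0.1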

Cassandra Channel has been closed

We have a small test cluster with 3 nodes on Amazon. Everything seems to be working with cqlsh. But when I try to debug my app from my laptop (outside of Amazon, of course), I get 'Channel has been closed' errors, and it starts retrying forever. I know it's likely caused by the config in cassandra.yaml, as it shows some private IPs in my Eclipse console. I tried many different ways but still get the same problem. I'd appreciate any input on this. How do I get rid of the private IPs 10.251.x.x on the client?
Here is some context:
Versions:
[cqlsh 4.0.1 | Cassandra 2.0.4 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
cassandra-driver-core-2.0.0-rc1.jar
In cassandra.yaml:
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "54.203.x.x,54.203.x.y"
listen_address: 10.251.a.b
broadcast_address: 54.203.x.x
native_transport_port: 9042
endpoint_snitch: Ec2MultiRegionSnitch
In Eclipse console:
DEBUG [main] (ControlConnection.java:145) - [Control connection] Successfully connected to /54.203.x.x
DEBUG [Cassandra Java Driver worker-0] (Session.java:379) - Adding /54.203.x.x to list of queried hosts
DEBUG [Cassandra Java Driver worker-1] (Session.java:379) - Adding /10.251.a.c to list of queried hosts
DEBUG [Cassandra Java Driver worker-1] (Connection.java:103) - [/10.251.a.c-1] Error connecting to /10.251.a.c (connection timed out: /10.251.a.c:9042)
DEBUG [Cassandra Java Driver worker-1] (Session.java:390) - Error creating pool to /10.251.a.c ([/10.251.a.c] Cannot connect)
DEBUG [Cassandra Java Driver worker-1] (Cluster.java:1064) - /10.251.a.c is down, scheduling connection retries
DEBUG [New I/O worker #4] (Connection.java:194) - Defuncting connection to /10.251.a.c
com.datastax.driver.core.TransportException: [/10.251.a.b] Channel has been closed
at com.datastax.driver.core.Connection$Dispatcher.channelClosed(Connection.java:548)
...
It seems that your Java driver is using auto-discovery, calling "describe cluster" to get a list of all nodes in your cluster. In AWS, using Ec2Snitch, that yields private IPs, which obviously won't work from outside of AWS. There is a discussion on this topic here:
https://datastax-oss.atlassian.net/browse/JAVA-145
The last comment got my attention. It says you can use the driver's LoadBalancingPolicy to limit the nodes. Hopefully this includes specifying specific IPs so that it does not auto-discover.
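A minimal sketch of that idea, assuming a driver version that ships WhiteListPolicy (it appeared in 2.0.x releases later than the rc1 jar listed above), so that only the public addresses are ever contacted:
import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.List;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.RoundRobinPolicy;
import com.datastax.driver.core.policies.WhiteListPolicy;

public class PublicIpOnlyClient {
    public static void main(String[] args) {
        // Only the public addresses listed here will ever be contacted,
        // even if auto-discovery reports private 10.251.x.x peers.
        List<InetSocketAddress> whiteList = Arrays.asList(
                new InetSocketAddress("54.203.x.x", 9042),
                new InetSocketAddress("54.203.x.y", 9042));

        Cluster cluster = Cluster.builder()
                .addContactPoint("54.203.x.x")
                .withLoadBalancingPolicy(new WhiteListPolicy(new RoundRobinPolicy(), whiteList))
                .build();

        Session session = cluster.connect();
        // ... run queries ...
        cluster.close();
    }
}
This is a client-side workaround: discovery will still report the private addresses, but the policy keeps the driver from opening connections to them.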

How to enable JBoss server TRACE log?

I have a web application running in JBoss AS 4.2.2.
I observed that the JBoss server automatically shuts down, and the following messages appear in server.log:
14:20:38,048 INFO [Server] Runtime shutdown hook called, forceHalt: true
14:20:38,049 INFO [Server] JBoss SHUTDOWN: Undeploying all packages
I want to enable TRACE for org.jboss.system.server.Server in jboss-log4j.xml, to hopefully get some more information on why the server shuts down.
Please let me know how to enable TRACE for this category in jboss-log4j.xml.
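(For reference, the change amounts to adding a log4j category entry; a minimal sketch, assuming the default conf/jboss-log4j.xml layout and that the appenders' Threshold is not filtering TRACE out:)
<!-- conf/jboss-log4j.xml -->
<category name="org.jboss.system.server.Server">
    <priority value="TRACE"/>
</category>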
I was able to add TRACE for the server logger, and I could see the following output when JBoss AS shuts down automatically:
2010-06-09 19:07:46,631 DEBUG [org.jboss.wsf.stack.jbws.RequestHandlerImpl] END handleRequest: jboss.ws:context=hpnp_lqs,endpoint=APIWebService
2010-06-09 19:07:46,631 DEBUG [org.jboss.ws.core.soap.MessageContextAssociation] popMessageContext: org.jboss.ws.core.jaxws.handler.SOAPMessageContextJAXWS#3290a11e (Thread http-0.0.0.0-8080-1)
2010-06-09 19:07:55,895 INFO [org.jboss.system.server.Server] Runtime shutdown hook called, forceHalt: true
2010-06-09 19:07:55,895 TRACE [org.jboss.system.server.Server] Shutdown caller:
java.lang.Throwable: Here
at org.jboss.system.server.ServerImpl$ShutdownHook.shutdown(ServerImpl.java:1017)
at org.jboss.system.server.ServerImpl$ShutdownHook.run(ServerImpl.java:996)
2010-06-09 19:07:55,895 INFO [org.jboss.system.server.Server] JBoss SHUTDOWN: Undeploying all packages
If anybody has any clue on what might be the cause of the automatic shutdown, please help me.
Thanks!
There's a JBoss wiki page listing log output for various shutdown causes. It looks like yours was caused by a Ctrl-C. I assume you would have known if you had hit Ctrl-C, though.
On Unix-type servers, Ctrl-C generates a TERM signal, which could also come from someone, or from some script running as your jboss user or as root, executing "kill <jboss pid>". If you're on Linux, I'd take a look at this question about the OOM killer.
One possible cause for this behaviour is console logout. We have observed this with our own server.
In brief, by default the Sun JVM listens to the event of the console user logging out, and shuts itself down automatically when that happens. To disable this, start the JVM with the -Xrs parameter.
See here for more details (look for Mysterious shutdowns).
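(A minimal sketch of where that flag usually goes for JBoss AS 4.x, assuming the stock bin/run.conf is used:)
# bin/run.conf
JAVA_OPTS="$JAVA_OPTS -Xrs"   # reduce the JVM's use of OS signals so a console logout does not shut it down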
One possible cause of a forced shutdown is the virtual machine running out of memory.
I had this problem several years ago when a colleague implemented some very nasty bulk loading of objects from a database, which caused JBoss to shut down on certain requests.
Try searching for "memory" or similar keywords in the log file and/or monitor the memory usage of the server.