Kaa, my first application - can't retrieve collected data - MongoDB

I'm working on my first Kaa application and I'm stuck at the "Retrieve collected data" step.
I have built my client project, and kaa-app runs as below:
viettq@viettq:~/Documents/workspace/kaa_example/build$ ./kaa-app
Default sample period: 1 seconds
Sampled temperature: 33
2017/02/17 2:29:10 [WARNING] [kaa_bootstrap_manager.c:612] (-7) - Could not find next Bootstrap access point (protocol: id=0x56C8FF92, version=1)
2017/02/17 2:29:10 [ERROR] [kaa_tcp_channel.c:307] (-7) - Kaa TCP channel [0x929A2016] error notifying bootstrap manager on access point failure
2017/02/17 2:29:10 [ERROR] [kaa_client.c:240] (-7) - Failed to process OUT event for the client socket 3
Sampled temperature: 30
Sampled temperature: 32
Sampled temperature: 26
2017/02/17 2:29:13 [WARNING] [kaa_bootstrap_manager.c:612] (-7) - Could not find next Bootstrap access point (protocol: id=0x56C8FF92, version=1)
2017/02/17 2:29:13 [ERROR] [kaa_tcp_channel.c:307] (-7) - Kaa TCP channel [0x929A2016] error notifying bootstrap manager on access point failure
2017/02/17 2:29:13 [ERROR] [kaa_client.c:240] (-7) - Failed to process OUT event for the client socket 3
Sampled temperature: 31
Sampled temperature: 32
Sampled temperature: 31
Some data is sent to the Kaa Sandbox server.
My SSH session on the Kaa Sandbox server looks like this:
kaa@kaa-sandbox.kaaproject.org:~$ mongo kaa
MongoDB shell version: 2.6.1
connecting to: kaa
> db.logs_80610364736216152939.find()
> db.logs_80610364736216152939.find()
> db.logs_80610364736216152939.find()
>
[2]+ Stopped mongo kaa
kaa@kaa-sandbox.kaaproject.org:~$ mongo kaa
MongoDB shell version: 2.6.1
connecting to: kaa
> db.logs_80610364736216152939.find()
>
>
>
Nothing shows up in the MongoDB shell.
I did everything in full compliance with the official Kaa tutorial:
http://kaaproject.github.io/kaa/docs/v0.10.0/Programming-guide/Your-first-Kaa-application/
but I retrieve nothing from the MongoDB shell.
Please help me solve it.
Thanks in advance!

From the logs you provided:
2017/02/17 2:29:10 [WARNING] [kaa_bootstrap_manager.c:612] (-7) - Could not find next Bootstrap access point (protocol: id=0x56C8FF92, version=1)
2017/02/17 2:29:10 [ERROR] [kaa_tcp_channel.c:307] (-7) - Kaa TCP channel [0x929A2016] error notifying bootstrap manager on access point failure
2017/02/17 2:29:10 [ERROR] [kaa_client.c:240] (-7) - Failed to process OUT event for the client socket 3
The application can't send information to the Kaa Sandbox server. Check that you correctly created all schemas and the log appender, and check that you can connect to the server.
Please also check the Sandbox configuration for the Kaa host on the Management page. It should be set to the IP address of the machine that runs the Kaa Sandbox VM, and that address must be reachable from the host on which you run the application.
Note that each time the Kaa host setting is changed, you should re-generate the Kaa SDK and re-build your application with the new SDK files. Otherwise, the application may fail to connect to the Kaa operations service.
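Once the bootstrap errors above are gone, you can also verify on the Sandbox side that the log appender is actually writing to MongoDB. A minimal sketch using the mongo shell from bash (the collection name logs_80610364736216152939 is taken from the question; yours is derived from your own application token):
# List the collections Kaa created in the kaa database
mongo kaa --eval 'printjson(db.getCollectionNames())'
# Count the stored log records, then print the most recent ones
mongo kaa --eval 'print(db.logs_80610364736216152939.count())'
mongo kaa --eval 'db.logs_80610364736216152939.find().sort({$natural:-1}).limit(3).forEach(printjson)'
If the collection list is empty, the records never reached the server, which points back at the host/SDK configuration described above.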

Related

Unable to start keycloak 19 in production mode

We have Keycloak 14 connected to an Amazon RDS Aurora PostgreSQL instance and are now trying to upgrade to Keycloak 19 with the same DB, but it is failing with the error below:
2022-09-20 09:54:56,959 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (production) mode
2022-09-20 09:54:56,960 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: ISPN000324: Cache 'realms' is in 'STOPPING' state and this is an invocation not belonging to an on-going transaction, so it does not accept new invocations. Either restart it or recreate the cache container.
Any help would be appreciated.
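For anyone landing here: the ISPN000324 message is usually a secondary symptom; the embedded Infinispan caches are stopped because an earlier startup step (typically the database connection or schema migration) failed, so the root cause is normally the first ERROR higher up in the log. A hedged sketch of running Keycloak 19 with an explicit Postgres configuration (all hostnames and credentials below are placeholders, not values from the question):
# Build the optimized server with the Postgres driver
bin/kc.sh build --db postgres
# Start in production mode with explicit DB and hostname settings
bin/kc.sh start \
  --db-url jdbc:postgresql://<aurora-endpoint>:5432/keycloak \
  --db-username <user> \
  --db-password <password> \
  --hostname <public-hostname>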

Hyperledger fabric chaincode connection with peer getting dropped

I have a Hyperledger Fabric network, version 2.4.4, running on Kubernetes. The peers and other components are running behind an Istio ingress. The chaincode runs in a dind (Docker-in-Docker) container and connects to the peer through its URL. The problem is that the chaincode connection is dropped after a few minutes. Below are the logs:
2022-07-14T04:31:13.057Z info [c-api:lib/handler.js] [assetschannel-ddc183b4] Calling chaincode Invoke() succeeded. Sending COMPLETED message back to peer
2022-07-14T04:33:04.197Z error [c-api:lib/handler.js] Chat stream with peer - on error: %j "Error: 14 UNAVAILABLE: Connection dropped\n at Object.callErrorFromStatus (/usr/local/src/node_modules/@grpc/grpc-js/build/src/call.js:31:26)\n at Object.onReceiveStatus (/usr/local/src/node_modules/@grpc/grpc-js/build/src/client.js:391:49)\n at Object.onReceiveStatus (/usr/local/src/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:328:181)\n at /usr/local/src/node_modules/@grpc/grpc-js/build/src/call-stream.js:187:78\n at processTicksAndRejections (node:internal/process/task_queues:78:11)"
I did set the following environment variables in the peer pod to keep the connection alive:
CORE_CHAINCODE_KEEPALIVE: 60000
CORE_PEER_KEEPALIVE_CLIENT_INTERVAL: 600s
CORE_PEER_KEEPALIVE_CLIENT_TIMEOUT: 2s
CORE_PEER_KEEPALIVE_DELIVERYCLIENT_INTERVAL: 20s
CORE_PEER_KEEPALIVE_MININTERVAL: 15s
but this did not resolve the issue.
Any suggestions would be appreciated.
It turned out to be an issue with the AWS ELB. The idle timeout was set to 60s, which broke the connection between chaincode and peer whenever there was no traffic between them. Increasing this timeout fixed the issue.
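If you hit the same symptom, the idle timeout can be raised from the AWS CLI. A minimal sketch for a classic ELB (the load balancer name is a placeholder; for an ALB the equivalent is aws elbv2 modify-load-balancer-attributes with the idle_timeout.timeout_seconds attribute):
# Raise the classic ELB idle timeout from the default 60s to one hour
aws elb modify-load-balancer-attributes \
  --load-balancer-name <elb-name> \
  --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":3600}}"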

Connect static agents to dynamic jenkins (Jenkins running in OpenShift)

We are running Jenkins in OpenShift and it is fully up and running. Now, when trying to add static agents, we are getting a 404 Not Found error.
Agent startup script:
java -jar remoting_dslave.jar -jnlpUrl http://xxx-xxx-xxx.apps.ocp1.uat.dbs.com/computer/xxxxxxxx2a/jenkins-agent.jnlp -secret xxxxxxxxxxxxxxxxxxxxxx -workDir "/dcifent/JenkinsSlaves/ci3_dynamicSlave"
Getting the below error:
WARNING: Connection refused (Connection refused)
Jun 07, 2022 11:17:23 AM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: http:/xxxxxxxxx.apps.ocp1.uat.dbs.com/ provided port:8080 is not reachable
java.io.IOException: http://xxxxxxxxx.apps.ocp1.uat.dbs.com/ provided port:8080 is not reachable
at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:311)
at hudson.remoting.Engine.innerRun(Engine.java:689)
at hudson.remoting.Engine.run(Engine.java:514)
I created a new route in OpenShift for port 8080 and updated the startup script as below:
java -jar agent.jar -jnlpUrl http://routefor8080.apps.ocp1.uat.dbs.com/computer/xxxxxxxx2a/jenkins-agent.jnlp -secret xxxxxxxxxxxxxxxxx -workDir "/dcifent/JenkinsSlaves/ci3_dynamicSlave"
Now I am getting a different error:
Failed to obtain http://routefor8080.apps.ocp1.uat.dbs.com/computer/xxxxxxxx2a/jenkins-agent.jnlp?encrypt=true
java.io.IOException: Failed to load http://routefor8080.apps.ocp1.uat.dbs.com/computer/xxxxxxxx2a/jenkins-agent.jnlp?encrypt=true: 404 Not Found
at hudson.remoting.Launcher.parseJnlpArguments(Launcher.java:517)
at hudson.remoting.Launcher.run(Launcher.java:345)
at hudson.remoting.Launcher.main(Launcher.java:296)
Waiting 10 seconds before retry
How can I connect static agents to a dynamic Jenkins? Can someone please help?
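A note for readers: the first error mentions port 8080 because the agent resolves the inbound (JNLP) TCP port advertised by the controller, and an OpenShift HTTP route cannot carry that plain TCP traffic. A hedged sketch, assuming the controller's fixed inbound agent port is set to 50000 under Manage Jenkins > Security (the service, host, and node names are placeholders):
# Expose the inbound agent TCP port outside the cluster, e.g. via a NodePort
# service (adjust its selector afterwards so it targets the Jenkins controller pod)
oc create service nodeport jenkins-agent --tcp=50000:50000
# On the agent, keep -jnlpUrl pointing at the web route and redirect the TCP
# leg to the exposed endpoint with -tunnel
java -jar agent.jar \
  -jnlpUrl http://<jenkins-route>/computer/<node-name>/jenkins-agent.jnlp \
  -secret <secret> \
  -tunnel <node-host>:<node-port> \
  -workDir "/dcifent/JenkinsSlaves/ci3_dynamicSlave"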

Running Apache Atlas standalone

I am trying to run Apache Atlas in a standalone fashion on Ubuntu, meaning without having to set up Solr and/or HBase.
What I did (according to the documentation: http://atlas.apache.org/0.8.1/InstallationSteps.html) was clone the Git repository and build the Maven project with embedded HBase and Solr:
mvn clean package -Pdist,embedded-hbase-solr
Then I unpacked the resulting tar.gz file and executed bin/atlas_start.py, without having changed any configuration. To my understanding of the documentation, that should actually start up HBase along with Atlas, right?
This is what I find in logs/application.log:
2017-11-30 17:14:24,093 INFO - [main:] ~ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (Atlas:216)
2017-11-30 17:14:24,093 INFO - [main:] ~ Server starting with TLS ? false on port 21000 (Atlas:217)
2017-11-30 17:14:24,093 INFO - [main:] ~ <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< (Atlas:218)
2017-11-30 17:14:27,684 INFO - [main:] ~ No authentication method configured. Defaulting to simple authentication (LoginProcessor:102)
2017-11-30 17:14:28,527 INFO - [main:] ~ Logged in user daniel (auth:SIMPLE) (LoginProcessor:77)
2017-11-30 17:14:31,777 INFO - [main:] ~ Not running setup per configuration atlas.server.run.setup.on.start. (SetupSteps$SetupRequired:189)
2017-11-30 17:14:39,456 WARN - [main-SendThread(localhost:2181):] ~ Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (ClientCnxn$SendThread:110$
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-11-30 17:14:39,594 WARN - [main:] ~ Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = Connecti$
2017-11-30 17:14:40,593 WARN - [main-SendThread(localhost:2181):] ~ Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (ClientCnxn$SendThread:110$
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
...
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-11-30 17:14:56,185 WARN - [main:] ~ Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = Connecti$
2017-11-30 17:14:56,186 ERROR - [main:] ~ ZooKeeper exists failed after 4 attempts (RecoverableZooKeeper:277)
2017-11-30 17:14:56,186 WARN - [main:] ~ hconnection-0x1dba4e060x0, quorum=localhost:2181, baseZNode=/hbase Unable to set watcher on znode (/hbase/hbaseid) (ZKUtil:544)
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:541)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
To me it reads as if no HBase (or ZooKeeper) is started by the script.
Am I missing something?
Thanks for your hints!
OK, meanwhile I figured out the issue. The start script obviously does not execute the script conf/atlas-env.sh, which sets some environment variables, among them MANAGE_LOCAL_HBASE and MANAGE_LOCAL_SOLR. So if you set those two env vars to true (and set JAVA_HOME properly, which is needed for the embedded HBase), then Atlas automatically starts HBase and Solr, and we get a locally running instance of Atlas!
Maybe this helps someone who comes across the same issue in the future!
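In practice that looks like the following minimal sketch (the JAVA_HOME path is an example and must point at your own JDK):
# Make atlas_start.py manage the embedded services itself
export MANAGE_LOCAL_HBASE=true
export MANAGE_LOCAL_SOLR=true
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # example path; needed by the embedded HBase
# Then start Atlas from the unpacked distribution
bin/atlas_start.py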
Update March 2021
There are two ways of running Apache Atlas:
A) Building it from scratch:
git clone https://github.com/apache/atlas
mvn clean install -DskipTests
mvn clean package -Pdist -DskipTests
Running atlas_start.py:
python <atlas-directory>/bin/atlas_start.py
B) Using docker image:
docker-compose.yml:
version: "3.3"
services:
  atlas:
    image: sburn/apache-atlas
    container_name: atlas
    ports:
      - "21000:21000"
    volumes:
      - "./bash_script:/app"
    command: bash -exc "/opt/apache-atlas-2.1.0/bin/atlas_start.py"
docker-compose up
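Either way, once startup has finished (it can take a few minutes) you can check that Atlas answers on port 21000; admin/admin is the default login of the stock distribution and should be changed for anything beyond local testing:
curl -u admin:admin http://localhost:21000/api/atlas/admin/version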

iRODS configuration - Could not start iRODS server during setup

I've installed Postgres as the database and then iRODS on Ubuntu 14.04. Then I started its configuration:
sudo /var/lib/irods/packaging/setup_irods.sh
After the configuration phase, when iRODS starts updating, the first 4 steps go well:
Stopping iRODS server...
-----------------------------
Running irods_setup.pl...
Step 1 of 4: Configuring database user...
Updating user's .pgpass...
Skipped. File already uptodate.
Step 2 of 4: Creating database and tables...
Checking whether iCAT database exists...
[mydb] on [localhost] found.
Updating user's .odbc.ini...
Creating iCAT tables...
Skipped. Tables already created.
Testing database communications...
Step 3 of 4: Configuring iRODS server...
Updating /etc/irods/server_config.json...
Updating /etc/irods/database_config.json...
Step 4 of 4: Configuring iRODS user and starting server...
Updating iRODS user's ~/.irods/irods_environment.json...
Starting iRODS server...
but at the end I get this error:
Could not start iRODS server.
Starting iRODS server...
Traceback (most recent call last):
File "/var/lib/irods/iRODS/scripts/python/get_db_schema_version.py", line 77, in <module>
current_schema_version = get_current_schema_version(cfg)
File "/var/lib/irods/iRODS/scripts/python/get_db_schema_version.py", line 61, in get_current_schema_version
'get_current_schema_version: failed to find result line for schema_version\n\n{}'.format(format_cmd_result(result)))
RuntimeError: get_current_schema_version: failed to find result line for schema_version
return code: [0]
stdout:
stderr:
ERROR: relation "r_grid_configuration" does not exist
LINE 1: ...option_value from R_GRID_CON...
^
Confirming catalog_schema_version... Success
Validating [/var/lib/irods/.irods/irods_environment.json]... Success
Validating [/etc/irods/server_config.json]... Success
Validating [/etc/irods/hosts_config.json]... Success
Validating [/etc/irods/host_access_control_config.json]... Success
Validating [/etc/irods/database_config.json]... Success
(1) Waiting for process bound to port 5432 ... [-]
(2) Waiting for process bound to port 5432 ... [-]
(4) Waiting for process bound to port 5432 ... [-]
Port 5432 In Use ... Not Starting iRODS Server
Install problem:
Cannot start iRODS server.
Found 0 processes:
There are no iRODS servers running.
Abort.
Do you have any ideas on what went wrong?
Because I don't have enough reputation to comment:
Which version of iRODS are you using?
This portion of the output:
Creating iCAT tables...
Skipped. Tables already created.
combined with this portion:
ERROR: relation "r_grid_configuration" does not exist
suggests that the setup ran before but only partially completed, leaving the system in a broken state. I would recommend reinstalling from scratch, which includes:
Uninstalling the iRODS iCAT and database plugin packages:
sudo dpkg -P irods-icat irods-database-plugin-postgres
Note: make sure to use -P, so that the configuration files are removed from dpkg's database.
Dropping and remaking the database
Deleting the following directories:
sudo rm -rf /tmp/irods /etc/irods /var/lib/irods
Reinstalling the packages and running sudo /var/lib/irods/packaging/setup_irods.sh
This portion of the output:
(1) Waiting for process bound to port 5432 ... [-]
(2) Waiting for process bound to port 5432 ... [-]
(4) Waiting for process bound to port 5432 ... [-]
Port 5432 In Use ... Not Starting iRODS Server
suggests that you are using port 5432 as your iRODS server port. This conflicts with the default Postgres port. I recommend using the default iRODS server port of 1247. This value was queried during setup as:
iRODS server's port [1247]:
and is recorded in /etc/irods/server_config.json under the zone_port entry.
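A quick way to confirm what is currently configured (grep is enough, since the setting sits on a single line):
# Show the configured iRODS server port (should be 1247, not Postgres's 5432)
grep zone_port /etc/irods/server_config.json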
iRODS-Chat:
It may be easier to continue this on the iRODS-Chat Google group. Repairing installs can require back-and-forth communication, which may not be in line with standard Stack Overflow usage.