Hello, I'm trying to launch the FIWARE Orion Context Broker against a MongoDB Atlas instance:
mongodb+srv://<user>:****************@<domainid>.mongodb.net/<aut_db>
I have tried multiple variations of the docker run command, but I never get the connection to succeed.
Trace:
$ docker run --name orion -p 1027:1026 fiware/orion:latest -dbhost "<domainid>.mongodb.net/<aut_db>?ssl=true&retryWrites=true&w=majority" -dbuser <user> -dbpwd ****** -logLevel DEBUG
time=2021-12-01T14:38:42.188Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1063]:main | msg=start command line </usr/bin/contextBroker -fg -multiservice -ngsiv1Autocast -disableFileLog -dbhost <domainid>.mongodb.net/<aut_db>?ssl=true&retryWrites=true&w=majority -dbuser <user> -dbpwd ****** -logLevel DEBUG>
time=2021-12-01T14:38:42.188Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1137]:main | msg=Orion Context Broker is running
time=2021-12-01T14:40:23.011Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=mongoConnectionPool.cpp[220]:mongoConnect | msg=Database Startup Error (cannot connect to mongo - doing 100 retries with a 1000 millisecond interval)
time=2021-12-01T14:40:23.011Z | lvl=FATAL | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=MongoGlobal.cpp[142]:mongoInit | msg=Fatal Error (MongoDB error)
time=2021-12-01T14:40:23.012Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[591]:exitFunc | msg=Orion shutdown completed
Does anyone know how I can configure the Orion Context Broker to connect to MongoDB Atlas?
EDIT: I have checked, and I can reach the MongoDB instance from that server with the mongo shell.
Orion currently supports only mongodb:// connection strings, built from the -db, -dbhost, -rplSet, -dbTimeout, -dbuser, -dbpwd, -dbAuthMech, -dbAuthDb, -dbSSL and -dbDisableRetryWrites CLI parameters (or default values, if omitted), as can be seen in the source code.
There is an issue in the Orion repository aimed at making this more flexible, allowing the connection string to be provided directly and overriding the above parameters.
Until that issue is resolved (contributions welcome ;) you could hack the Orion source code cited above to build a specific version for yourself with support for mongodb+srv:// connection strings.
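In the meantime, a possible workaround (a sketch only, not a verified recipe: the shard host names and replica set name below are placeholders you would obtain from the DNS lookup) is to resolve the Atlas SRV record yourself and pass the resulting seed list through the classic mongodb://-style options listed above:

# Resolve the SRV record to find the actual cluster hosts
nslookup -type=SRV _mongodb._tcp.<domainid>.mongodb.net

# Start Orion with the resolved seed list instead of the mongodb+srv:// URI.
# Atlas usually authenticates against the admin database; adjust if yours differs.
docker run --name orion -p 1027:1026 fiware/orion:latest \
  -dbhost "<shard0>:27017,<shard1>:27017,<shard2>:27017" \
  -rplSet <replica-set-name> \
  -dbuser <user> -dbpwd <password> \
  -dbAuthDb admin -dbSSL \
  -logLevel DEBUG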
Related
Can't add the name of an authorized network while using the gcloud sql patch command
stage("Editing Authorized Networks of ${instance_name} CloudSQL") {
steps {
sh "gcloud sql instances patch ${instance_name} --authorized-networks ${network_name}=${ip_range}"
}
}
I tried this:
gcloud sql instances patch instanceid --authorized-networks Rajeev-Home=0.0.0.0/0
and I get this:
ERROR: (gcloud.sql.instances.patch) argument --authorized-networks: Bad value [Rajeev-Home=0.0.0.0/0]: Must be specified in CIDR notation, also known as 'slash' notation (e.g. 192.168.100.0/24).
Usage: gcloud sql instances patch INSTANCE [optional flags]
optional flags may be --activation-policy | --active-directory-domain |
--assign-ip | --async | --audit-bucket-path |
--audit-retention-interval | --audit-upload-interval |
--authorized-gae-apps | --authorized-networks |
--availability-type | --no-backup | --backup-location |
--backup-start-time | --clear-authorized-networks |
--clear-database-flags | --clear-gae-apps |
--clear-password-policy | --connector-enforcement |
--cpu | --database-flags | --database-version |
--deletion-protection |
--deny-maintenance-period-end-date |
--deny-maintenance-period-start-date |
--deny-maintenance-period-time | --diff |
--enable-bin-log | --enable-database-replication |
--enable-google-private-path |
--enable-password-policy |
--enable-point-in-time-recovery | --follow-gae-app |
--gce-zone | --help |
--insights-config-query-insights-enabled |
--insights-config-query-plans-per-minute |
--insights-config-query-string-length |
--insights-config-record-application-tags |
--insights-config-record-client-address |
--maintenance-release-channel | --maintenance-version |
--maintenance-window-any | --maintenance-window-day |
--maintenance-window-hour | --memory | --network |
--password-policy-complexity |
--password-policy-disallow-username-substring |
--password-policy-min-length |
--password-policy-password-change-interval |
--password-policy-reuse-interval | --pricing-plan |
--remove-deny-maintenance-period | --replication |
--require-ssl | --retained-backups-count |
--retained-transaction-log-days | --secondary-zone |
--storage-auto-increase | --storage-size | --tier |
--zone
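Judging from the error message, --authorized-networks accepts only a comma-separated list of CIDR ranges; the NAME=RANGE form used above is, as far as I can tell, not supported by this flag. Dropping the name should work (instance id kept from the question):

gcloud sql instances patch instanceid --authorized-networks=0.0.0.0/0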
Orion version is 2.1.0
Orion is started in HTTPS using the -https option
We use the "https" protocol scheme in the URL of our subscriptions --> "reference" : "https://cygnus.domain.com/notify"
When we insert an Entity matching the subscription, the Entity is created in Orion, but not in STH.
However, the Orion logs report: Notification Successfully Sent to https://cygnus.domain.com:443/notify
If we use the "http" protocol scheme in the subscription URL, it works.
If we use curl to notify Cygnus directly over HTTP or HTTPS, it works.
Orion logs below:
time=Friday 22 Feb 11:24:28 2019.158Z | lvl=INFO | corr=N/A | trans=1550831768-689-00000000058 | from=pending | srv=pending | subsrv=pending | comp=Orion | op=logMsg.h[1832]:lmTransactionStart | msg=Starting transaction to https://cygnus.domain.com:443/notify
time=Friday 22 Feb 11:24:28 2019.159Z | lvl=INFO | corr=6a8319ac-3694-11e9-872e-0242c0a81006 | trans=1550831768-689-00000000056 | from=10.6.11.36 | srv=svctestnca | subsrv=/svcpath/testnca | comp=Orion | op=logMsg.h[1916]:lmTransactionEnd | msg=Transaction ended
time=Friday 22 Feb 11:24:28 2019.177Z | lvl=INFO | corr=N/A | trans=1550831768-689-00000000057 | from=pending | srv=pending | subsrv=pending | comp=Orion | op=httpRequestSend.cpp[615]:httpRequestSendWithCurl | msg=Notification Successfully Sent to https://cygnus.domain.com:443/notify
time=Friday 22 Feb 11:24:28 2019.159Z | lvl=INFO | corr=N/A | trans=1550831768-689-00000000058 | from=pending | srv=pending | subsrv=pending | comp=Orion | op=httpRequestSend.cpp[594]:httpRequestSendWithCurl | msg=Sending message 20 to HTTP server: sending message of 826 bytes to HTTP server
time=Friday 22 Feb 11:24:28 2019.176Z | lvl=INFO | corr=N/A | trans=1550831768-689-00000000058 | from=pending | srv=pending | subsrv=pending | comp=Orion | op=logMsg.h[1916]:lmTransactionEnd | msg=Transaction ended
Thanks for your help.
This problem has been solved.
The subscription reference is https://cygnus.domain.com/notify, but Orion transforms this into https://cygnus.domain.com:443/notify.
We have an HAProxy to load-balance requests. An ACL was present to accept the domain cygnus.domain.com, but no ACL accepted cygnus.domain.com:443.
Modifying the ACL resolved the problem:
Old ACL : acl IS_Cygnus hdr(host) -i cygnus.domain.com
New ACL : acl IS_Cygnus hdr_beg(host) -i cygnus.domain.com
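For context, a minimal sketch of how the fixed ACL might sit in haproxy.cfg (frontend and backend names are hypothetical); hdr_beg(host) matches the beginning of the Host header, so it accepts both cygnus.domain.com and cygnus.domain.com:443:

frontend https_front
    bind *:443 ssl crt /etc/haproxy/certs/cygnus.pem
    # prefix match on Host: also accepts a header with an appended port
    acl IS_Cygnus hdr_beg(host) -i cygnus.domain.com
    use_backend cygnus_back if IS_Cygnus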
I have started an OrientDB Enterprise 2.2.7 cluster with two nodes. Here is how my setup looks.
CONFIGURED SERVERS
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
|# |Name |Status|Connections|StartedOn |Binary |HTTP |UsedMemory |FreeMemory |MaxMemory|
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
|0 |Batman|ONLINE|3 |2016-08-16 15:28:23|10.0.0.195:2424|10.0.0.195:2480|480.98MB (94.49%)|28.02MB (5.51%) |509.00MB |
|1 |Robin |ONLINE|3 |2016-08-16 15:29:40|10.0.0.37:2424 |10.0.0.37:2480 |403.50MB (79.35%)|105.00MB (20.65%)|508.50MB |
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
orientdb {db=SocialPosts3}> clusters
Now I have two Vertex classes, User and Notes, with an edge type Posted. All vertices and edges have properties, and there is a unique index on each Vertex class.
I started pushing data using the Java API:
while (retry++ != MAX_RETRY) {
    try {
        properties.put(uniqueIndexname, uniqueIndexValue);
        // Look up an existing vertex through the unique index.
        Iterable<Vertex> resultset = graph.getVertices(className,
                new String[] { uniqueIndexname }, new Object[] { uniqueIndexValue });
        if (resultset != null) {
            // Use a single iterator rather than creating one per call.
            Iterator<Vertex> it = resultset.iterator();
            vertex = it.hasNext() ? it.next() : null;
        }
        if (vertex == null) {
            // Not found: create a new vertex with all properties.
            vertex = graph.addVertex("class:" + className, properties);
            graph.commit();
            return vertex;
        }
        // Found: update the existing vertex's properties.
        for (String key : properties.keySet()) {
            vertex.setProperty(key, properties.get(key));
        }
        logger.info("Completed upserting vertex " + uniqueIndexValue);
        graph.commit();
        break;
    } catch (ONeedRetryException ex) {
        // Concurrent modification: roll back and retry the whole upsert.
        graph.rollback();
        logger.warn("Retry for exception - " + uniqueIndexValue);
    } catch (Exception e) {
        logger.error("Can not create vertex - " + e.getMessage());
        graph.rollback();
        break;
    }
}
Similarly for the Notes and edges.
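For the edges, a minimal sketch of what "similarly" might look like (variable names are hypothetical; addEdge and commit are the standard Blueprints calls used above):

// Hypothetical sketch: connect an existing User vertex to a Note vertex.
Edge posted = graph.addEdge(null, userVertex, noteVertex, "Posted");
posted.setProperty("postedOn", timestamp);
graph.commit();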
I populated around 200k Users and 3.5M Notes. Now I notice that almost all the data is going to only one node.
On running the "clusters" command, I see that the heavily populated clusters are all owned by the same node (Batman), and hence almost all data is present on only one node:
|#  |NAME     | ID|CLASS  |CONFLICT-STRATEGY|COUNT  |OWNER_SERVER|OTHER_SERVERS|AUTO_DEPLOY_NEW_NODE|
|22 |note | 26|Note | | 75| Robin | [Batman] | true |
|23 |note_1 | 27|Note | |1750902| Batman | [Robin] | true |
|24 |note_2 | 28|Note | |1750789| Batman | [Robin] | true |
|25 |note_3 | 29|Note | | 75| Robin | [Batman] | true |
|26 |posted | 34|Posted | | 0| Robin | [Batman] | true |
|27 |posted_1 | 35|Posted | | 1| Robin | [Batman] | true |
|28 |posted_2 | 36|Posted | |1739823| Batman | [Robin] | true |
|29 |posted_3 | 37|Posted | |1749250| Batman | [Robin] | true |
|30 |user | 30|User | | 102059| Batman | [Robin] | true |
|31 |user_1 | 31|User | | 1| Robin | [Batman] | true |
|32 |user_2 | 32|User | | 0| Robin | [Batman] | true |
|33 |user_3 | 33|User | | 102127| Batman | [Robin] | true |
I also see that the CPU of one node is at ~99% while the other is below 1%.
How can I make sure that data is uniformly distributed across all nodes in the cluster?
Update:
The database is propagated to both nodes. I can log in to Studio on either node and see the database listed, and querying either node gives the same results, so the nodes are in sync.
Server log from one of the nodes; it is almost the same on the other node:
2016-08-18 19:28:49:668 INFO [Robin]<-[Batman] Received new status Batman.SocialPosts3=SYNCHRONIZING [OHazelcastPlugin]
2016-08-18 19:28:49:670 INFO [Robin] Current node started as MASTER for database 'SocialPosts3' [OHazelcastPlugin]
2016-08-18 19:28:49:671 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=2)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+-----------+----------+-------------+
| | | | MASTER |
| | | |SYNCHRONIZING|
+--------+-----------+----------+-------------+
|CLUSTER |writeQuorum|readQuorum| Batman |
+--------+-----------+----------+-------------+
|* | 1 | 1 | X |
|internal| 1 | 1 | |
+--------+-----------+----------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:49:671 INFO [Robin] Saving distributed configuration file for database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3/distributed-config.json [OHazelcastPlugin]
2016-08-18 19:28:49:766 INFO [Robin] Adding node 'Robin' in partition: SocialPosts3 db=[*] v=3 [ODistributedDatabaseImpl$1]
2016-08-18 19:28:49:767 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=3)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+-----------+----------+-------------+-------------+
| | | | MASTER | MASTER |
| | | |SYNCHRONIZING|SYNCHRONIZING|
+--------+-----------+----------+-------------+-------------+
|CLUSTER |writeQuorum|readQuorum| Batman | Robin |
+--------+-----------+----------+-------------+-------------+
|* | 2 | 1 | X | o |
|internal| 2 | 1 | | |
+--------+-----------+----------+-------------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:49:767 INFO [Robin] Saving distributed configuration file for database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3/distributed-config.json [OHazelcastPlugin]
2016-08-18 19:28:49:769 WARNI [Robin]->[[Batman]] Requesting deploy of database 'SocialPosts3' on local server... [OHazelcastPlugin]
2016-08-18 19:28:52:192 INFO [Robin]<-[Batman] Copying remote database 'SocialPosts3' to: /tmp/orientdb/install_SocialPosts3.zip [OHazelcastPlugin]
2016-08-18 19:28:52:193 INFO [Robin]<-[Batman] Installing database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3... [OHazelcastPlugin]
2016-08-18 19:28:52:193 INFO [Robin] - writing chunk #1 offset=0 size=43.38KB [OHazelcastPlugin]
2016-08-18 19:28:52:194 INFO [Robin] Database copied correctly, size=43.38KB [ODistributedAbstractPlugin$3]
2016-08-18 19:28:52:279 WARNI {db=SocialPosts3} Storage 'SocialPosts3' was not closed properly. Will try to recover from write ahead log [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:279 SEVER {db=SocialPosts3} Restore is not possible because write ahead log is empty. [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:279 INFO {db=SocialPosts3} Storage data recover was completed [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:294 INFO {db=SocialPosts3} [Robin] Installed database 'SocialPosts3' (LSN=OLogSequenceNumber{segment=0, position=24}) [OHazelcastPlugin]
2016-08-18 19:28:52:304 INFO [Robin] Reassigning cluster ownership for database SocialPosts3 [OHazelcastPlugin]
2016-08-18 19:28:52:305 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=3)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+----+-----------+----------+-------------+-------------+
| | | | | MASTER | MASTER |
| | | | |SYNCHRONIZING|SYNCHRONIZING|
+--------+----+-----------+----------+-------------+-------------+
|CLUSTER | id|writeQuorum|readQuorum| Batman | Robin |
+--------+----+-----------+----------+-------------+-------------+
|* | | 2 | 1 | X | o |
|internal| 0| 2 | 1 | | |
+--------+----+-----------+----------+-------------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:52:305 INFO [Robin] Distributed servers status:
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+
|Name |Status|Databases |Conns|StartedOn |Binary |HTTP |UsedMemory |
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+
|Batman|ONLINE|GoodBoys=ONLINE (MASTER) |5 |2016-08-16 15:28:23|10.0.0.195:2424|10.0.0.195:2480|426.47MB/509.00MB (83.79%)|
| | |SocialPosts=ONLINE (MASTER) | | | | | |
| | |GratefulDeadConcerts=ONLINE (MASTER)| | | | | |
|Robin*|ONLINE|GoodBoys=ONLINE (MASTER) |3 |2016-08-16 15:29:40|10.0.0.37:2424 |10.0.0.37:2480 |353.77MB/507.50MB (69.71%)|
| | |SocialPosts=ONLINE (MASTER) | | | | | |
| | |GratefulDeadConcerts=ONLINE (MASTER)| | | | | |
| | |SocialPosts3=SYNCHRONIZING (MASTER) | | | | | |
| | |SocialPosts2=ONLINE (MASTER) | | | | | |
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+
I am trying to run a CREATE INDEX CONCURRENTLY command against a Postgres 9.2 database. I implemented a MigrationResolver as shown in issue 655. When this migration step is run via mvn flyway:migrate or similar, the command starts but hangs in a waiting state.
I verified that the command is executing via the pg_stat_activity table:
test_2015_04_13_110536=# select * from pg_stat_activity;
datid | datname | pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | xact_start | query_start | state_change | waiting | state | query
-------+------------------------+-------+----------+----------+------------------+-------------+-----------------+-------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------+---------+---------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
21095 | test_2015_04_13_110536 | 56695 | 16385 | postgres | psql | | | -1 | 2015-04-13 11:10:01.127768-06 | 2015-04-13 11:13:08.936651-06 | 2015-04-13 11:13:08.936651-06 | 2015-04-13 11:13:08.936655-06 | f | active | select * from pg_stat_activity;
21095 | test_2015_04_13_110536 | 56824 | 16385 | postgres | | 127.0.0.1 | | 52437 | 2015-04-13 11:12:55.438927-06 | 2015-04-13 11:12:55.476442-06 | 2015-04-13 11:12:55.487139-06 | 2015-04-13 11:12:55.487175-06 | f | idle in transaction | SELECT "version_rank","installed_rank","version","description","type","script","checksum","installed_on","installed_by","execution_time","success" FROM "public"."schema_version" ORDER BY "version_rank"
21095 | test_2015_04_13_110536 | 56825 | 16385 | postgres | | 127.0.0.1 | | 52438 | 2015-04-13 11:12:55.443687-06 | 2015-04-13 11:12:55.49024-06 | 2015-04-13 11:12:55.49024-06 | 2015-04-13 11:12:55.490241-06 | t | active | CREATE UNIQUE INDEX CONCURRENTLY person_restrict_duplicates_2_idx ON person(name, person_month, person_year)
(3 rows)
An example project that replicates this problem can be found on my GitHub: chrisphelps/flyway-experiment
My suspicion is that the Flyway session that queried the schema_version table, and is now sitting "idle in transaction", is preventing Postgres from proceeding with the index creation: CREATE INDEX CONCURRENTLY must wait for every transaction that could still see the table, so an open idle transaction blocks it indefinitely.
How can I resolve the conflict so that Postgres will proceed with the migration? Has anyone been able to apply this sort of migration to Postgres via Flyway?
In the meantime, there is a Resolver included in Flyway which looks for some magic in the filename.
Just add the prefix 'NT' (for No-Transaction) to your migration file, e.g.:
V01__usual_migration_1.sql
V02__another_migration.sql
NTV03__migration_that_does_not_run_in_transaction.sql
V04__classical_migration_4.sql
etc.
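The NT-prefixed file would then contain the concurrent index creation itself; the statement below is reconstructed from the pg_stat_activity output above. CREATE INDEX CONCURRENTLY cannot run inside a transaction block, which is why this file must be excluded from Flyway's usual transaction handling:

-- NTV03__migration_that_does_not_run_in_transaction.sql
CREATE UNIQUE INDEX CONCURRENTLY person_restrict_duplicates_2_idx
    ON person (name, person_month, person_year);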
For some reason, I cannot get HandlerSocket to start listening when I start MariaDB (version 10.0.14). I am using CentOS 6.5.
my.cnf has the following settings:
handlersocket_port = 9998
handlersocket_port_wr = 9999
handlersocket_address = 127.0.0.1
Calling "SHOW GLOBAL VARIABLES LIKE 'handlersocket%'" from the mariaDb prompt shows:
+-------------------------------+-----------+
| Variable_name | Value |
+-------------------------------+-----------+
| handlersocket_accept_balance | 0 |
| handlersocket_address | 127.0.0.1 |
| handlersocket_backlog | 32768 |
| handlersocket_epoll | 1 |
| handlersocket_plain_secret | |
| handlersocket_plain_secret_wr | |
| handlersocket_port | 9998 |
| handlersocket_port_wr | 9999 |
| handlersocket_rcvbuf | 0 |
| handlersocket_readsize | 0 |
| handlersocket_sndbuf | 0 |
| handlersocket_threads | 16 |
| handlersocket_threads_wr | 1 |
| handlersocket_timeout | 300 |
| handlersocket_verbose | 10 |
| handlersocket_wrlock_timeout | 12 |
+-------------------------------+-----------+
I can start MariaDB successfully, but when I check which ports are actively listening, neither 9998 nor 9999 shows up. I've checked the mysqld.log file, but no errors seem to be occurring.
Answering my own question here:
SELinux needed to be set to permissive mode to get HandlerSocket to start listening.
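For reference, the quick test and a couple of ways to apply it (the semanage alternative is my assumption that the HandlerSocket ports merely need the mysqld port label, rather than SELinux being relaxed entirely):

# Temporary: switch SELinux to permissive until the next reboot
sudo setenforce 0

# Persistent: set SELINUX=permissive in /etc/selinux/config

# More targeted alternative: label the HandlerSocket ports for mysqld
sudo semanage port -a -t mysqld_port_t -p tcp 9998
sudo semanage port -a -t mysqld_port_t -p tcp 9999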