Inserts (Java API) fail after restarting a node in a distributed OrientDB cluster

A two-node distributed OrientDB system, running in embedded mode and using TCP/IP for node discovery. The class event is sharded across four clusters. After restarting one node, exactly half of the inserts on that node fail with the error message:
INFO Local node 'orientdb-lab-node2' is not the owner for cluster 'event_1' (it is 'orientdb-lab-node1'). Reloading distributed configuration for database 'test-db' [ODistributedStorage]
and the stack trace:
com.orientechnologies.orient.server.distributed.ODistributedConfigurationChangedException: Local node 'orientdb-lab-node2' is not the owner for cluster 'event_1' (it is 'orientdb-lab-node1')
DB name="test-db"
DB name="test-db"
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.throwSerializedException(OChannelBinaryAsynchClient.java:437)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.handleStatus(OChannelBinaryAsynchClient.java:388)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.beginResponse(OChannelBinaryAsynchClient.java:270)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.beginResponse(OChannelBinaryAsynchClient.java:162)
at com.orientechnologies.orient.client.remote.OStorageRemote.beginResponse(OStorageRemote.java:2138)
at com.orientechnologies.orient.client.remote.OStorageRemote$6.execute(OStorageRemote.java:548)
at com.orientechnologies.orient.client.remote.OStorageRemote$6.execute(OStorageRemote.java:542)
at com.orientechnologies.orient.client.remote.OStorageRemote$1.execute(OStorageRemote.java:164)
at com.orientechnologies.orient.client.remote.OStorageRemote.baseNetworkOperation(OStorageRemote.java:235)
at com.orientechnologies.orient.client.remote.OStorageRemote.asyncNetworkOperation(OStorageRemote.java:156)
at com.orientechnologies.orient.client.remote.OStorageRemote.createRecord(OStorageRemote.java:528)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.executeSaveRecord(ODatabaseDocumentTx.java:2095)
at com.orientechnologies.orient.core.tx.OTransactionNoTx.saveNew(OTransactionNoTx.java:246)
at com.orientechnologies.orient.core.tx.OTransactionNoTx.saveRecord(OTransactionNoTx.java:179)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.save(ODatabaseDocumentTx.java:2597)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.save(ODatabaseDocumentTx.java:103)
at com.orientechnologies.orient.core.record.impl.ODocument.save(ODocument.java:1802)
at com.orientechnologies.orient.core.record.impl.ODocument.save(ODocument.java:1793)
at lab.orientdb.OrientDbClient.insert(OrientDbClient.java:10)
at lab.orientdb.Main.main(Main.java:24)
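For completeness, the insert itself is just a plain, non-transactional document save through the Java API; a minimal sketch of what lab.orientdb.OrientDbClient.insert does (connection URL, credentials and field name are placeholders):
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.record.impl.ODocument;

// Minimal sketch with assumed details: open the database and save one document of class "event".
ODatabaseDocumentTx db = new ODatabaseDocumentTx("remote:localhost/test-db").open("admin", "admin");
try {
    ODocument doc = new ODocument("event");  // class sharded over event, event_1, event_2, event_3
    doc.field("name", "some value");         // placeholder field
    doc.save();                              // this is the call that fails on the restarted node
} finally {
    db.close();
}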
This is what the cluster configuration looks like from node1.
Nodes 1 and 2 running, 10 inserts on each node:
CLUSTERS (collections)
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|# |NAME | ID|CLASS |CONFLICT-STRATEGY|COUNT| OWNER_SERVER | OTHER_SERVERS |AUTO_DEPLOY_NEW_NODE|
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|5 |event | 17|event | | 8|orientdb-lab-node2|[orientdb-lab-node1]| true |
|6 |event_1 | 18|event | | 3|orientdb-lab-node1|[orientdb-lab-node2]| true |
|7 |event_2 | 19|event | | 2|orientdb-lab-node1|[orientdb-lab-node2]| true |
|8 |event_3 | 20|event | | 7|orientdb-lab-node2|[orientdb-lab-node1]| true |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
| |TOTAL | | | | 20| | | |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
Node 2 stopped
CLUSTERS (collections)
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|# |NAME | ID|CLASS |CONFLICT-STRATEGY|COUNT| OWNER_SERVER | OTHER_SERVERS |AUTO_DEPLOY_NEW_NODE|
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|5 |event | 17|event | | 8|orientdb-lab-node1|[orientdb-lab-node2]| true |
|6 |event_1 | 18|event | | 3|orientdb-lab-node1|[orientdb-lab-node2]| true |
|7 |event_2 | 19|event | | 2|orientdb-lab-node1|[orientdb-lab-node2]| true |
|8 |event_3 | 20|event | | 7|orientdb-lab-node1|[orientdb-lab-node2]| true |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
| |TOTAL | | | | 20| | | |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
Node 2 restarted, 5 successful inserts and 5 failed
CLUSTERS (collections)
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|# |NAME | ID|CLASS |CONFLICT-STRATEGY|COUNT| OWNER_SERVER | OTHER_SERVERS |AUTO_DEPLOY_NEW_NODE|
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
|5 |event | 17|event | | 11|orientdb-lab-node2|[orientdb-lab-node1]| true |
|6 |event_1 | 18|event | | 3|orientdb-lab-node1|[orientdb-lab-node2]| true |
|7 |event_2 | 19|event | | 2|orientdb-lab-node1|[orientdb-lab-node2]| true |
|8 |event_3 | 20|event | | 9|orientdb-lab-node2|[orientdb-lab-node1]| true |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
| |TOTAL | | | | 25| | | |
+----+-----------+----+---------+-----------------+-----+------------------+--------------------+--------------------+
Any tips or advice would be appreciated. Thanks.

This issue has been resolved in OrientDB 2.2.13-SNAPSHOT, so it should be fixed in a release version very soon: https://github.com/orientechnologies/orientdb/issues/6897
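Until a release with the fix is available, one possible workaround (just a sketch, not taken from the linked issue) is to pin inserts on the restarted node to a cluster that node actually owns, using the save(clusterName) overload:
import com.orientechnologies.orient.core.record.impl.ODocument;

// Sketch of a possible workaround (cluster name is an example): save into a cluster owned by the
// local node ("event" or "event_3" on orientdb-lab-node2, per the configuration above) instead of
// letting the cluster selection strategy pick one.
ODocument doc = new ODocument("event");
doc.field("name", "some value");  // placeholder field
doc.save("event_3");              // explicit target cluster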

Related

Spark Scala: fill empty values based on the result of a self-joined dataframe query

I am struggling to write Spark Scala code that fills in rows for which coverage is empty, using a self join with conditions.
This is the data:
+----+--------------+----------+--------+
| ID | date_in_days | coverage | values |
+----+--------------+----------+--------+
| 1 | 2020-09-01 | | 0.128 |
| 1 | 2020-09-03 | 0 | 0.358 |
| 1 | 2020-09-04 | 0 | 0.035 |
| 1 | 2020-09-05 | | |
| 1 | 2020-09-06 | | |
| 1 | 2020-09-19 | | |
| 1 | 2020-09-12 | | |
| 1 | 2020-09-18 | | |
| 1 | 2020-09-11 | | |
| 1 | 2020-09-16 | | |
| 1 | 2020-09-21 | 13 | 0.554 |
| 1 | 2020-09-23 | | |
| 1 | 2020-09-30 | | |
+----+--------------+----------+--------+
Expected result :
+----+--------------+----------+--------+
| ID | date_in_days | coverage | values |
+----+--------------+----------+--------+
| 1 | 2020-09-01 | -1 | 0.128 |
| 1 | 2020-09-03 | 0 | 0.358 |
| 1 | 2020-09-04 | 0 | 0.035 |
| 1 | 2020-09-05 | 0 | |
| 1 | 2020-09-06 | 0 | |
| 1 | 2020-09-19 | 0 | |
| 1 | 2020-09-12 | 0 | |
| 1 | 2020-09-18 | 0 | |
| 1 | 2020-09-11 | 0 | |
| 1 | 2020-09-16 | 0 | |
| 1 | 2020-09-21 | 13 | 0.554 |
| 1 | 2020-09-23 | -1 | |
| 1  | 2020-09-30   | -1       |        |
+----+--------------+----------+--------+
What I am trying to do:
For each distinct ID (dataframe partitioned by ID), sorted by date:
When a row's coverage column is null, call it rowEmptyCoverage.
Find in the DF the first row with date_in_days > rowEmptyCoverage.date_in_days and with coverage >= 0. Call it rowFirstDateGreater.
Then, if rowFirstDateGreater.values > 500, set rowEmptyCoverage.coverage to 0; set it to -1 otherwise.
I am getting lost mixing when, join and where...
I am assuming that you mean values > 0.500 and not values > 500. The logic also remains a little unclear; here I assume you are searching in the order of the date_in_days column, not in the order of the dataframe.
In any case, we can refine the solution to match your exact need. The overall idea is to use a Window to fetch, for each row, the next date for which coverage is not null, check whether values meets the desired criterion, and update coverage accordingly.
It goes as follows:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._  // assumes a SparkSession named spark and the input dataframe df

val win = Window.partitionBy("ID").orderBy("date_in_days")
  .rangeBetween(Window.currentRow, Window.unboundedFollowing)

df
  // creating a struct binding coverage and values
  .withColumn("cov_str", when('coverage.isNull, lit(null))
    .otherwise(struct('coverage, 'values)))
  // finding the first row (starting from the current date, in order of
  // date_in_days) for which the coverage is not null
  .withColumn("next_cov_str", first('cov_str, ignoreNulls = true) over win)
  // updating coverage. We keep the original value if not null, put 0 if values
  // meets the criteria (that you can change) and -1 otherwise.
  .withColumn("coverage", coalesce(
    'coverage,
    when($"next_cov_str.values" > 0.500, lit(0)),
    lit(-1)
  ))
  .show(false)
+---+-------------------+--------+------+-----------+------------+
|ID |date_in_days |coverage|values|cov_str |next_cov_str|
+---+-------------------+--------+------+-----------+------------+
|1 |2020-09-01 00:00:00|-1 |0.128 |null |[0, 0.358] |
|1 |2020-09-03 00:00:00|0 |0.358 |[0, 0.358] |[0, 0.358] |
|1 |2020-09-04 00:00:00|0 |0.035 |[0, 0.035] |[0, 0.035] |
|1 |2020-09-05 00:00:00|0 |null |null |[13, 0.554] |
|1 |2020-09-06 00:00:00|0 |null |null |[13, 0.554] |
|1 |2020-09-11 00:00:00|0 |null |null |[13, 0.554] |
|1 |2020-09-12 00:00:00|0 |null |null |[13, 0.554] |
|1 |2020-09-16 00:00:00|0 |null |null |[13, 0.554] |
|1 |2020-09-18 00:00:00|0 |null |null |[13, 0.554] |
|1 |2020-09-19 00:00:00|0 |null |null |[13, 0.554] |
|1 |2020-09-21 00:00:00|13 |0.554 |[13, 0.554]|[13, 0.554] |
|1 |2020-09-23 00:00:00|-1 |null |null |null |
|1 |2020-09-30 00:00:00|-1 |null |null |null |
+---+-------------------+--------+------+-----------+------------+
You may then use drop("cov_str", "next_cov_str"); I keep them here for clarity.

Postgres select from table and spread evenly

I have two tables. The first table contains information about the object; the second table contains related objects. Objects in the second table have 4 types (let's call them A, B, C, D).
I need a query that does something like this:
|table1 object id | A |value for A|B | value for B| C | value for C|D | value for D|
| 1 | 12| cat | 13| dog | 2 | house | 43| car |
| 1 | 5 | lion | | | | | | |
The column "table1 object id" in real table is multiple columns of data from table 1(for single object its all the same, just repeated on multiple rows because of table 2).
Where 2nd table is in form
|type|value|table 1 object id| id |
|A |cat | 1 | 12|
|B |dog | 1 | 13|
|C |house| 1 | 2 |
|D |car | 1 | 43 |
|A |lion | 1 | 5 |
I hope this makes it clear enough what I want.
I have tried using AND, OR and JOIN. This does not seem like something that can be done with crosstab.
EDIT
Table 2
|type|value|table 1 object id| id |
|A |cat | 1 | 12|
|B |dog | 1 | 13|
|C |house| 1 | 2 |
|D |car | 1 | 43 |
|A |lion | 1 | 5 |
|C |wolf | 2 | 6 |
Table 1
| id | value1 | value 2|value 3|
| 1 | hello | test | hmmm |
| 2 | bye | test2 | hmm2 |
Result
|value1| value2| value3| A| value| B |value| C|value | D | value|
|hello | test | hmmm |12| cat | 13| dog |2 | house | 43| car |
|hello | test | hmmm |5 | lion | | | | | | |
|bye | test2 | hmm2 | | | | |6 | wolf | | |
I hope this explains a bit better what I want to achieve.

OrientDB distributed mode : data not getting distributed across various nodes

I have started OrientDB Enterprise 2.2.7 with two nodes. Here is how my setup looks.
CONFIGURED SERVERS
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
|# |Name |Status|Connections|StartedOn |Binary |HTTP |UsedMemory |FreeMemory |MaxMemory|
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
|0 |Batman|ONLINE|3 |2016-08-16 15:28:23|10.0.0.195:2424|10.0.0.195:2480|480.98MB (94.49%)|28.02MB (5.51%) |509.00MB |
|1 |Robin |ONLINE|3 |2016-08-16 15:29:40|10.0.0.37:2424 |10.0.0.37:2480 |403.50MB (79.35%)|105.00MB (20.65%)|508.50MB |
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
orientdb {db=SocialPosts3}> clusters
Now I have two vertex classes, User and Notes, with an edge type Posted. All vertices and edges have properties. There is also a unique index on each of the vertex classes.
I started pushing data using the Java API:
while (retry++ != MAX_RETRY) {
    try {
        properties.put(uniqueIndexname, uniqueIndexValue);
        // look the vertex up by its unique index
        Iterable<Vertex> resultset = graph.getVertices(className, new String[] { uniqueIndexname },
                new Object[] { uniqueIndexValue });
        if (resultset != null) {
            vertex = resultset.iterator().hasNext() ? resultset.iterator().next() : null;
        }
        if (vertex == null) {
            // not found: create it
            vertex = graph.addVertex("class:" + className, properties);
            graph.commit();
            return vertex;
        } else {
            // found: update its properties
            for (String key : properties.keySet()) {
                vertex.setProperty(key, properties.get(key));
            }
        }
        logger.info("Completed upserting vertex " + uniqueIndexValue);
        graph.commit();
        break;
    } catch (ONeedRetryException ex) {
        logger.warn("Retry for exception - " + uniqueIndexValue);
    } catch (Exception e) {
        logger.error("Can not create vertex - " + e.getMessage());
        graph.rollback();
        break;
    }
}
Similarly for the Notes and edges.
I populate around 200k Users and 3.5M Notes, and I notice that all the data is going to only one node.
Running the "clusters" command shows that the heavily populated clusters are all owned by the same node, and hence almost all data is present on only one node:
|22 |note | 26|Note | | 75| Robin | [Batman] | true |
|23 |note_1 | 27|Note | |1750902| Batman | [Robin] | true |
|24 |note_2 | 28|Note | |1750789| Batman | [Robin] | true |
|25 |note_3 | 29|Note | | 75| Robin | [Batman] | true |
|26 |posted | 34|Posted | | 0| Robin | [Batman] | true |
|27 |posted_1 | 35|Posted | | 1| Robin | [Batman] | true |
|28 |posted_2 | 36|Posted | |1739823| Batman | [Robin] | true |
|29 |posted_3 | 37|Posted | |1749250| Batman | [Robin] | true |
|30 |user | 30|User | | 102059| Batman | [Robin] | true |
|31 |user_1 | 31|User | | 1| Robin | [Batman] | true |
|32 |user_2 | 32|User | | 0| Robin | [Batman] | true |
|33 |user_3 | 33|User | | 102127| Batman | [Robin] | true |
I see the CPU of one node at around 99% and the other at <1%.
How can I make sure that data is uniformly distributed across all nodes in the cluster?
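For reference, the Graph API does allow creating a vertex directly in a specific cluster (a sketch below, where the cluster name is only an assumption), but I would rather have the distribution happen automatically:
import com.tinkerpop.blueprints.impls.orient.OrientGraph;
import com.tinkerpop.blueprints.impls.orient.OrientVertex;

// Sketch only: create a vertex directly in a named cluster of the User class.
// "user_1" is an assumed cluster name; it would need to be one owned by the target node.
OrientGraph graph = new OrientGraph("remote:10.0.0.37/SocialPosts3", "admin", "admin");
try {
    OrientVertex v = graph.addVertex("class:User,cluster:user_1", "name", "example");
    graph.commit();
} finally {
    graph.shutdown();
}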
Update:
The database is propagated to both nodes. I can log in to Studio on either node and see the database listed. Querying either node also gives the same results, so the nodes are in sync.
Server log from one of the nodes (it is almost the same on the other node):
2016-08-18 19:28:49:668 INFO [Robin]<-[Batman] Received new status Batman.SocialPosts3=SYNCHRONIZING [OHazelcastPlugin]
2016-08-18 19:28:49:670 INFO [Robin] Current node started as MASTER for database 'SocialPosts3' [OHazelcastPlugin]
2016-08-18 19:28:49:671 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=2)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+-----------+----------+-------------+
| | | | MASTER |
| | | |SYNCHRONIZING|
+--------+-----------+----------+-------------+
|CLUSTER |writeQuorum|readQuorum| Batman |
+--------+-----------+----------+-------------+
|* | 1 | 1 | X |
|internal| 1 | 1 | |
+--------+-----------+----------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:49:671 INFO [Robin] Saving distributed configuration file for database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3/distributed-config.json [OHazelcastPlugin]
2016-08-18 19:28:49:766 INFO [Robin] Adding node 'Robin' in partition: SocialPosts3 db=[*] v=3 [ODistributedDatabaseImpl$1]
2016-08-18 19:28:49:767 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=3)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+-----------+----------+-------------+-------------+
| | | | MASTER | MASTER |
| | | |SYNCHRONIZING|SYNCHRONIZING|
+--------+-----------+----------+-------------+-------------+
|CLUSTER |writeQuorum|readQuorum| Batman | Robin |
+--------+-----------+----------+-------------+-------------+
|* | 2 | 1 | X | o |
|internal| 2 | 1 | | |
+--------+-----------+----------+-------------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:49:767 INFO [Robin] Saving distributed configuration file for database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3/distributed-config.json [OHazelcastPlugin]
2016-08-18 19:28:49:769 WARNI [Robin]->[[Batman]] Requesting deploy of database 'SocialPosts3' on local server... [OHazelcastPlugin]
2016-08-18 19:28:52:192 INFO [Robin]<-[Batman] Copying remote database 'SocialPosts3' to: /tmp/orientdb/install_SocialPosts3.zip [OHazelcastPlugin]
2016-08-18 19:28:52:193 INFO [Robin]<-[Batman] Installing database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3... [OHazelcastPlugin]
2016-08-18 19:28:52:193 INFO [Robin] - writing chunk #1 offset=0 size=43.38KB [OHazelcastPlugin]
2016-08-18 19:28:52:194 INFO [Robin] Database copied correctly, size=43.38KB [ODistributedAbstractPlugin$3]
2016-08-18 19:28:52:279 WARNI {db=SocialPosts3} Storage 'SocialPosts3' was not closed properly. Will try to recover from write ahead log [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:279 SEVER {db=SocialPosts3} Restore is not possible because write ahead log is empty. [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:279 INFO {db=SocialPosts3} Storage data recover was completed [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:294 INFO {db=SocialPosts3} [Robin] Installed database 'SocialPosts3' (LSN=OLogSequenceNumber{segment=0, position=24}) [OHazelcastPlugin]
2016-08-18 19:28:52:304 INFO [Robin] Reassigning cluster ownership for database SocialPosts3 [OHazelcastPlugin]
2016-08-18 19:28:52:305 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=3)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+----+-----------+----------+-------------+-------------+
| | | | | MASTER | MASTER |
| | | | |SYNCHRONIZING|SYNCHRONIZING|
+--------+----+-----------+----------+-------------+-------------+
|CLUSTER | id|writeQuorum|readQuorum| Batman | Robin |
+--------+----+-----------+----------+-------------+-------------+
|* | | 2 | 1 | X | o |
|internal| 0| 2 | 1 | | |
+--------+----+-----------+----------+-------------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:52:305 INFO [Robin] Distributed servers status:
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+
|Name |Status|Databases |Conns|StartedOn |Binary |HTTP |UsedMemory |
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+
|Batman|ONLINE|GoodBoys=ONLINE (MASTER) |5 |2016-08-16 15:28:23|10.0.0.195:2424|10.0.0.195:2480|426.47MB/509.00MB (83.79%)|
| | |SocialPosts=ONLINE (MASTER) | | | | | |
| | |GratefulDeadConcerts=ONLINE (MASTER)| | | | | |
|Robin*|ONLINE|GoodBoys=ONLINE (MASTER) |3 |2016-08-16 15:29:40|10.0.0.37:2424 |10.0.0.37:2480 |353.77MB/507.50MB (69.71%)|
| | |SocialPosts=ONLINE (MASTER) | | | | | |
| | |GratefulDeadConcerts=ONLINE (MASTER)| | | | | |
| | |SocialPosts3=SYNCHRONIZING (MASTER) | | | | | |
| | |SocialPosts2=ONLINE (MASTER) | | | | | |
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+

PostgreSQL concatenate two tables on each row

I have two tables, as follows (where value in all cases is character varying):
table_one:
|id|value|
|--+-----|
|1 |A |
|2 |B |
|3 |C |
|4 |D |
table_two:
|id|value|
|--+-----|
|11|0 |
|12|1 |
|13|2 |
|14|3 |
|15|4 |
|16|5 |
I want to do a query that results in the following:
result:
|value|
|-----|
|A_0 |
|A_1 |
|A_2 |
|A_3 |
|A_4 |
|A_5 |
|B_0 |
|B_1 |
|B_2 |
|B_3 |
|B_4 |
|B_5 |
|C_0 |
|C_1 |
|C_2 |
|C_3 |
|C_4 |
|C_5 |
|D_0 |
|D_1 |
|D_2 |
|D_3 |
|D_4 |
|D_5 |
This is for a seldom run report on PostgreSQL 8.4.9, so performance isn't critical. If the answer is 9.x specific, upgrading certainly isn't out of the question, so feel free to give 9.x specific answers. Any ideas?
select
    table_one.value || '_' || table_two.value as value
from
    table_one
    cross join table_two
order by table_one.value, table_two.value;
Documentation for cross join.
SQLFiddle

NDepend query methods/types in framework assembly being used by other assemblies/types

I am trying to determine which types or methods in a base framework assembly are being used by other assemblies in the application system. I cannot seem to find a straightforward query to do that.
What I have to do is first determine which assemblies directly use the framework assembly, and then manually list them in a second query:
SELECT TYPES FROM ASSEMBLIES "IBM.Data.DB2"
WHERE IsDirectlyUsedBy "ASSEMBLY:FirstDirectUsedByAssembly"
OR IsDirectlyUsedBy "ASSEMBLY:SecondDirectUsedByAssembly"
OR IsDirectlyUsedBy "ASSEMBLY:ThirdDirectUsedByAssembly"
OR IsDirectlyUsedBy "ASSEMBLY:FourthDirectUsedByAssembly"
Is there a better/faster way to query for this?
Additionally, the query results focus on the matched types only, and the exported dependency graph or matrix only shows details of those. How can I render a graph that shows those types or methods together with the dependent types/methods from other assemblies that are consuming them?
UPDATE
I cannot use a query like
SELECT METHODS/TYPES WHERE IsPublic AND !CouldBeInternal
because it returns very strange results involving obfuscated types within the IBM.Data.DB2 assembly.
SELECT TYPES
FROM ASSEMBLIES "IBM.Data.DB2"
WHERE IsPublic AND !CouldBeInternal
48 items
--------------------------------------------------+--------------+
types |# IL instructi|
|ons |
--------------------------------------------------+--------------+
IBM.Data.DB2.ae+m |0 |
| |
IBM.Data.DB2.ae+x |0 |
| |
IBM.Data.DB2.ae+f |0 |
| |
IBM.Data.DB2.ae+ac |0 |
| |
IBM.Data.DB2.ae+aa |0 |
| |
IBM.Data.DB2.ae+u |0 |
| |
IBM.Data.DB2.ae+z |0 |
| |
IBM.Data.DB2.ae+e |0 |
| |
IBM.Data.DB2.ae+b |0 |
| |
IBM.Data.DB2.ae+g |0 |
| |
IBM.Data.DB2.ae+ab |0 |
| |
IBM.Data.DB2.ae+h |0 |
| |
IBM.Data.DB2.ae+r |0 |
| |
IBM.Data.DB2.ae+p |0 |
| |
IBM.Data.DB2.ae+ad |0 |
| |
IBM.Data.DB2.ae+i |0 |
| |
IBM.Data.DB2.ae+j |0 |
| |
IBM.Data.DB2.ae+t |0 |
| |
IBM.Data.DB2.ae+af |0 |
| |
IBM.Data.DB2.ae+k |0 |
| |
IBM.Data.DB2.ae+l |0 |
| |
IBM.Data.DB2.ae+y |0 |
| |
IBM.Data.DB2.ae+a |0 |
| |
IBM.Data.DB2.ae+q |0 |
| |
IBM.Data.DB2.ae+n |0 |
| |
IBM.Data.DB2.ae+d |0 |
| |
IBM.Data.DB2.ae+c |0 |
| |
IBM.Data.DB2.ae+ae |0 |
| |
IBM.Data.DB2.ae+o |0 |
| |
IBM.Data.DB2.ae+w |0 |
| |
IBM.Data.DB2.ae+s |0 |
| |
IBM.Data.DB2.ae+v |0 |
| |
IBM.Data.DB2.DB2Command |2 527 |
| |
IBM.Data.DB2.DB2Connection |3 246 |
| |
IBM.Data.DB2.DB2DataAdapter |520 |
| |
IBM.Data.DB2.DB2DataReader |4 220 |
| |
IBM.Data.DB2.DB2_UDF_PLATFORM |0 |
| |
IBM.Data.DB2.DB2Enumerator+DB2EnumInstance |19 |
| |
IBM.Data.DB2.DB2Enumerator+DB2EnumDatabase |15 |
| |
IBM.Data.DB2.DB2Error |98 |
| |
IBM.Data.DB2.DB2ErrorCollection |55 |
| |
IBM.Data.DB2.DB2Exception |185 |
| |
IBM.Data.DB2.DB2Parameter |1 853 |
| |
IBM.Data.DB2.DB2ParameterCollection |1 383 |
| |
IBM.Data.DB2.DB2RowUpdatedEventHandler |0 |
| |
IBM.Data.DB2.DB2RowUpdatedEventArgs |14 |
| |
IBM.Data.DB2.DB2Type |0 |
| |
IBM.Data.DB2.DB2XmlReader |500 |
| |
--------------------------------------------------+--------------+
Sum: |14 635 |
| |
Average: |304.9 |
| |
Minimum: |0 |
| |
Maximum: |4 220 |
| |
Standard deviation: |868.22 |
| |
Variance: |753 808 |
| |
--------------------------------------------------+--------------+
Our code does not use those types and enums directly.
This query returns the methods (respectively the types) that are public and could not be made internal. Hence, it returns the methods/types that are indeed used outside of their declaring assembly:
SELECT METHODS/TYPES WHERE IsPublic AND !CouldBeInternal