Null pointer exception when loading into OrientDB from Oracle

I have installed OrientDB V 2.1.11 on a Mac - El Capitan.
I am following the instructions as per the OrientDB documentation.
http://orientdb.com/docs/last/Import-from-DBMS.html
When I run oetl.sh I get a null pointer exception. I assume it is connecting to the Oracle instance.
JSON config:
{
  "config": {
    "log": "error"
  },
  "extractor": {
    "jdbc": {
      "driver": "oracle.jdbc.OracleDriver",
      "url": "jdbc:oracle:thin:#<dbUrl>:1521:<dbSid>",
      "userName": "username",
      "userPassword": "password",
      "query": "select sold_to_party_nbr from customer"
    }
  },
  "transformers": [
    { "vertex": { "class": "Company" } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:../databases/BetterDemo",
      "dbUser": "admin",
      "dbPassword": "admin",
      "dbAutoCreate": true
    }
  }
}
Error:
sharon.oconnor$ ./oetl.sh ../loadFromOracle.json
OrientDB etl v.2.1.11 (build UNKNOWN#rddb5c0b4761473ae9549c3ac94871ab56ef5af2c; 2016-02-15 10:49:20+0000) www.orientdb.com
Exception in thread "main" java.lang.NullPointerException
at com.orientechnologies.orient.etl.transformer.OVertexTransformer.begin(OVertexTransformer.java:53)
at com.orientechnologies.orient.etl.OETLPipeline.begin(OETLPipeline.java:72)
at com.orientechnologies.orient.etl.OETLProcessor.executeSequentially(OETLProcessor.java:465)
at com.orientechnologies.orient.etl.OETLProcessor.execute(OETLProcessor.java:269)
at com.orientechnologies.orient.etl.OETLProcessor.main(OETLProcessor.java:116)
The data in Oracle looks like this:
0000281305
0000281362
0000281378
0000281381
0000281519
0000281524
0000281563
0000281566
0000281579
0000281582
0000281623
0000281633
I have created a Company class with a sold_to_party_nbr string property in the BetterDemo database.
How can I debug further to figure out what is wrong?
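A few hedged things to check, based only on the config and stack trace above rather than a confirmed diagnosis: the vertex transformer needs a graph database, so declaring "dbType": "graph" in the loader may matter; setting "log" to "debug" makes oetl.sh print each stage and usually narrows down where the NPE occurs; and the Oracle thin URL normally uses @ before the host (the # above may just be masking in the post). A minimal sketch of the same config with those changes:

{
  "config": {
    "log": "debug"
  },
  "extractor": {
    "jdbc": {
      "driver": "oracle.jdbc.OracleDriver",
      "url": "jdbc:oracle:thin:@<dbUrl>:1521:<dbSid>",
      "userName": "username",
      "userPassword": "password",
      "query": "select sold_to_party_nbr from customer"
    }
  },
  "transformers": [
    { "vertex": { "class": "Company" } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:../databases/BetterDemo",
      "dbType": "graph",
      "dbUser": "admin",
      "dbPassword": "admin",
      "dbAutoCreate": true
    }
  }
}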

Related

No logging into tables in PostgresDB

In my program, logging to a file works properly, but nothing is written to the tables in the DB. I have created a 'Logs' schema in 'SuperMarketDb'.
I want to store the logs in the 'SuperMarket_Logs' table in the 'Logs' schema.
The connection string for logging is named "Logging" and is defined in appsettings.json along with the "SuperMarketDb" connection string.
? -> Do I have to separately add the connection string as an Arg for "PostgreSQL"?
? -> Should I run a migration or update on the DB?
appsettings.json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "ConnectionStrings": {
    "SuperMarketDb": "Host=localhost;Database=SuperMarketDb;Username=postgres;Password=Admin#123",
    "Logging": "Host=localhost;Database=SuperMarketDb;Username=postgres;Password=Admin#123"
  },
  "KestrelServer": {
    "Endpoints": {
      "Http": {
        "Port": 5050,
        "Scheme": "http"
      }
    }
  },
  "Serilog": {
    "using": [ "Serilog.Sinks.File", "Serilog.Sinks.PostgreSQL" ],
    "Minimumlevel": {
      "Default": "Error"
    },
    "WriteTo": [
      {
        "Name": "File",
        "Args": {
          "Path": "C:\\Users\\muhammed.irshad\\Desktop\\.NET\\SuperMarket\\SuperMarket.Api.Employees\\Logs\\ApiLog-.log",
          "rollingInterval": "Day"
        }
      },
      {
        "Name": "PostgreSQL",
        "Args": {
          "connectionString": "Logging",
          "schemaName": "Logs",
          "tableName": "SuperMarket_Logs",
          "needAutoCreateTable": true,
          "batchPostingLimit": 1
        }
      }
    ]
  }
}
Serilog service in Program.cs
var _logger = new LoggerConfiguration()
    .ReadFrom.Configuration(builder.Configuration)
    .Enrich.FromLogContext()
    .CreateLogger();
builder.Logging.AddSerilog(_logger);
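One hedged thing to try, as an assumption about the sink rather than a confirmed fix: if Serilog.Sinks.PostgreSQL treats the connectionString Arg as a literal Npgsql connection string rather than the name of an entry under ConnectionStrings, then "Logging" never points at a real database, while the file sink keeps working. Putting the full string directly into the Args would rule that out; a sketch of just that sink block, reusing the connection string already shown above:

{
  "Name": "PostgreSQL",
  "Args": {
    "connectionString": "Host=localhost;Database=SuperMarketDb;Username=postgres;Password=Admin#123",
    "schemaName": "Logs",
    "tableName": "SuperMarket_Logs",
    "needAutoCreateTable": true,
    "batchPostingLimit": 1
  }
}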

Connect strapi with mongoose to a MongoDb (mLab)

I tried to connect Strapi to mLab with this database.js config, but it doesn't work. I get the error:
ConnectorError: connector "strapi-hook-mongoose" not found: Cannot find module 'strapi-connector-strapi-hook-mongoose'
Here is my database.js config file:
{
  "defaultConnection": "default",
  "connections": {
    "default": {
      "connector": "strapi-hook-mongoose",
      "settings": {
        "database": "strapi-test",
        "host": "ds131914.mlab.com",
        "srv": false,
        "port": "31914",
        "username": "root",
        "password": "root010101"
      },
      "options": {
        "authenticationDatabase": "strapi-test"
      }
    }
  }
}
What should I do?
After some searching, it appears that this database.js config was from an old tutorial (this one). To solve this problem, you first need to run npm i -S strapi-connector-mongoose in order to install the right connector.
Now you need to change your database.js config for the desired environment. In my case it was production, so edit config/environments/production/database.js like this:
{
  "defaultConnection": "default",
  "connections": {
    "default": {
      "connector": "mongoose",
      "settings": {
        "client": "mongo",
        "host": "ds131914.mlab.com",
        "port": "31914",
        "srv": false,
        "database": "strapi-test",
        "username": "root",
        "password": "root010101"
      },
      "options": {
        "authenticationDatabase": "strapi-test",
        "ssl": false
      }
    }
  }
}
With this, it should work!

Stitch mongoClient auth with AnonymousCredential doesn't continue inside continueWith

I am logging in to run a search with the MongoDB Stitch client, but after the AnonymousCredential authentication it does nothing.
Kotlin code:
val mongoClient = client!!.getServiceClient(RemoteMongoClient.factory, "mongodb-atlas")
client!!.auth
    .loginWithCredential(AnonymousCredential())
    .continueWith { task -> { ...some code... } }
But it never gets to the "some code" part, and I don't know why, because in the Stitch UI logs I get an OK status:
{
  "arguments": [
    {
      "database": "test",
      "collection": "users",
      "query": {
        "id": ""
      },
      "limit": {
        "$numberInt": "1"
      },
      "project": null,
      "sort": null
    }
  ],
  "name": "find",
  "service": "mongodb-atlas"
}
Function Call Location: US-VA
Compute Used: 624980924 bytes•ms
Remote IP Address: 201.124.215.137
SDK: android v0.0
Platform Version: 8.1.0
Rule Performance Metrics:
{
  "test.users": {
    "roles": {
      "default": {
        "matching_documents": 1,
        "evaluated_fields": 0,
        "discarded_fields": 0
      }
    },
    "no_matching_role": 0
  }
}
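A hedged observation on the Kotlin snippet, not a confirmed fix: writing continueWith { task -> { ... } } makes the continuation's body an extra lambda that is created but never invoked, so nothing inside it executes even though the task itself completes. Dropping the inner braces should let the code run when the login task completes:

client!!.auth
    .loginWithCredential(AnonymousCredential())
    .continueWith { task ->
        // ...some code... now actually runs when the login task completes
    }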

Only data from node 1 visible in a 2 node OrientDB cluster

I created a 2-node OrientDB cluster by following the steps below, but when running distributed, only the data present on one of the nodes is accessible. Can you please help me debug this issue? The OrientDB version is 2.2.6.
Steps involved:
Used plocal mode in the ETL tool and stored part of the data on node 1 and the other part on node 2. The stored data belongs to just one vertex class. (On checking from the console, the data has been ingested properly.)
Then ran both nodes in distributed mode; data from only one machine is accessible.
The default-distributed-db-config.json file is specified below:
{
  "autoDeploy": true,
  "readQuorum": 1,
  "writeQuorum": 1,
  "executionMode": "undefined",
  "readYourWrites": true,
  "servers": {
    "*": "master"
  },
  "clusters": {
    "internal": {
    },
    "address": {
      "servers": [ "orientmaster" ]
    },
    "address_1": {
      "servers": [ "orientslave1" ]
    },
    "*": {
      "servers": [ "<NEW_NODE>" ]
    }
  }
}
There are two clusters created for the vertex class address, namely address and address_1. The data on machine orientslave1 is stored into cluster address_1 using the ETL tool; similarly, the data on machine orientmaster is stored into cluster address. (I've ensured that the two cluster IDs are different at creation time.)
However, when these two machines are connected together in distributed mode, only the data in cluster address_1 is visible.
The ETL JSON is attached below:
{
  "source": { "file": { "path": "/home/ubuntu/labvolume1/DataStorage/geo1_5lacs.csv" } },
  "extractor": { "csv": { "columnsOnFirstLine": false, "columns": [ "place:string" ] } },
  "transformers": [
    { "vertex": { "class": "ADDRESS", "skipDuplicates": true } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:/home/ubuntu/labvolume1/orientdb/databases/ETL_Test1",
      "dbType": "graph",
      "dbUser": "admin",
      "dbPassword": "admin",
      "dbAutoCreate": true,
      "wal": false,
      "tx": false,
      "classes": [
        { "name": "ADDRESS", "extends": "V", "clusters": 1 }
      ],
      "indexes": [
        { "class": "ADDRESS", "fields": [ "place:string" ], "type": "UNIQUE" }
      ]
    }
  }
}
Please let me know if there is anything I'm doing wrong.
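One hedged thing to check, based only on the config shown and not a confirmed fix: each address cluster above is listed with a single server, so neither node holds a replica of the other's cluster, and the two databases were also built independently with plocal rather than loaded through the running cluster. If the two-cluster layout is intentional, listing both servers for both clusters (the first entry being the owner) would make each cluster replicated and therefore visible from either node, roughly like this:

"clusters": {
  "internal": {
  },
  "address": {
    "servers": [ "orientmaster", "orientslave1" ]
  },
  "address_1": {
    "servers": [ "orientslave1", "orientmaster" ]
  },
  "*": {
    "servers": [ "<NEW_NODE>" ]
  }
}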

"No nodes configured for partition" after creating a database via ETL

I've just created a custom database using the following ETL config:
{
  "source": { "file": { "path": "./mydata.csv" } },
  "extractor": { "row": {} },
  "transformers": [
    { "csv": {} },
    { "vertex": { "class": "MyClass" } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:/opt/orientdb/databases/MyData",
      "dbUser": "root",
      "dbPassword": "qrefhiuqwriouhwqv",
      "dbType": "graph",
      "classes": [
        { "name": "MyClass", "extends": "V" }
      ]
    }
  }
}
Now, when I go to the web console, I can see I have 433k records of type MyClass created in the database MyData.
When I try to query it with "select from MyClass", I get the error:
2015-04-06 23:56:25:541 SEVERE Internal server error:
com.orientechnologies.orient.server.distributed.ODistributedException:
No nodes configured for partition 'MyClass.[]' request:
id=-1 from=node1428362873334 task=command_sql(select from MyClass) userName= [ONetworkProtocolHttpDb]
What am I doing wrong?
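A hedged guess rather than a confirmed diagnosis: the error suggests the server is running with the distributed plugin enabled but has no node assigned to the clusters of the newly created MyClass, which can happen when the database is built with a plocal URL outside the running server. Restarting the server so it picks up the new database, or loading through the running server with a remote URL so the clusters get registered in the distributed configuration, are the usual things to try. A loader sketch for the remote variant (assumptions: the server is local, and serverUser/serverPassword are the server's root credentials, reusing the password shown above):

"loader": {
  "orientdb": {
    "dbURL": "remote:localhost/MyData",
    "serverUser": "root",
    "serverPassword": "qrefhiuqwriouhwqv",
    "dbUser": "root",
    "dbPassword": "qrefhiuqwriouhwqv",
    "dbType": "graph",
    "classes": [
      { "name": "MyClass", "extends": "V" }
    ]
  }
}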