Gentics Mesh Schema can't be found after just creating it - orientdb

I'm able to create a new schema using the MeshRestClient, and get a successful response back.
However, immediately afterwards, when I try to create a node using that schema, I get an exception about the referenced schema being missing.
Log output from creating a schema -
12:30:13.177 [] INFO [vert.x-worker-thread-9] [JULLogDelegate.java:167] - 127.0.0.1 - POST /api/v1/schemas/f0ee56b03d514a5fae56b03d519a5f04 HTTP/1.1 201 835 - 20 ms
12:30:13.179 [] INFO [main] [MeshService.java:81] - created schema - uuid: f0ee56b03d514a5fae56b03d519a5f04, name: form_definition
Then when creating a new node using that schema reference -
Caused by: com.gentics.mesh.rest.client.MeshRestClientMessageException: Error:404 in POST /api/v1/demo/nodes : Not Found Info: Object with uuid "f0ee56b03d514a5fae56b03d519a5f04" could not be found.
I tried setting both the schema name and the schema reference in the NodeCreateRequest, but both fail with the same error.
public MeshRequest<NodeResponse> saveFormDefinition(Map<String, Object> form) {
    NodeCreateRequest nodeCreateRequest = new NodeCreateRequest()
        .setSchema(formDefinitionSchema.toReference())
        .setLanguage("en")
        .setParentNodeUuid(formsFolderNode);
    String formName = (String) form.get("name");
    nodeCreateRequest.getFields().putString("name", formName);
    return this.client.createNode(this.meshProjectName, nodeCreateRequest);
}
Is there a time period I need to wait before it's available?
Or any other thoughts?
Thanks!

The problem was that I never subscribed to the assignSchemaToProject request -
client.assignSchemaToProject(meshProjectName, response.getUuid())
After subscribing, the request was actually executed and the schema became available to the NodeCreateRequest:
client.assignSchemaToProject(meshProjectName, response.getUuid()).blockingGet();
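In other words, a MeshRequest behaves like a lazy (cold) request: building it performs no I/O, and only subscribing/executing it (e.g. blockingGet()) fires the HTTP call. A minimal self-contained sketch of that pattern, in plain Java rather than the Mesh API (all names here are illustrative):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

// Illustrative only: a deferred "request" that, like MeshRequest,
// does nothing until it is explicitly executed/subscribed.
public class LazyRequestDemo {

    static final class DeferredRequest<T> {
        private final Supplier<T> work;
        DeferredRequest(Supplier<T> work) { this.work = work; }
        T blockingGet() { return work.get(); } // analogous to MeshRequest#blockingGet()
    }

    public static void main(String[] args) {
        AtomicBoolean assigned = new AtomicBoolean(false);

        // Building the request runs no side effects yet...
        DeferredRequest<String> assignSchema =
                new DeferredRequest<>(() -> { assigned.set(true); return "assigned"; });
        System.out.println("after build: " + assigned.get());       // false

        // ...only executing it performs the (here simulated) call.
        assignSchema.blockingGet();
        System.out.println("after blockingGet: " + assigned.get()); // true
    }
}
```

This is why the schema "existed" in the response log but was never actually assigned to the project: the assignment request object was built but never executed.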

KTable to KGroupTable - Schema Not available (State Store ChangeLog Schema not registered)

I have a Kafka topic - let's call it activity-daily-aggregate - and I want to aggregate (add/sub) over it using a KGroupTable.
1. I read the topic using:
final KTable<String, GenericRecord> inputKTable =
    builder.table("activity-daily-aggregate", Consumed.with(new StringSerde(), getConsumerSerde()));
Note: getConsumerSerde returns new GenericAvroSerde(mockSchemaRegistryClient).
2. Next, I group the table:
inputKTable.groupBy(
    (key, value) -> KeyValue.pair(KeyMapper.generateGroupKey(value), new JsonValueMapper().apply(value)),
    Grouped.with(AppSerdes.String(), AppSerdes.jsonNode())
);
Before steps 1 and 2 I have configured the MockSchemaRegistryClient with:
mockSchemaRegistryClient.register("activity-daily-aggregate-key",
    Schema.parse(AppUtils.class.getResourceAsStream("/avro/key.avsc")));
mockSchemaRegistryClient.register("activity-daily-aggregate-value",
    Schema.parse(AppUtils.class.getResourceAsStream("/avro/daily-activity-aggregate.avsc")));
When I run the topology in my test cases, I get an error at step 2:
org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=0_0, processor=KSTREAM-SOURCE-0000000011, topic=activity-daily-aggregate, partition=0, offset=0, stacktrace=org.apache.kafka.common.errors.SerializationException: Error retrieving Avro schema: {"type":"record","name":"FactActivity","namespace":"com.ascendlearning.avro","fields":.....}
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Schema Not Found; error code: 404001
The error goes away when I register the schema with mockSchemaRegistryClient under these subjects:
stream-app-id-activity-daily-aggregate-STATE-STORE-0000000010-changelog-key
stream-app-id-activity-daily-aggregate-STATE-STORE-0000000010-changelog-value
=> /avro/daily-activity-aggregate.avsc
Do we need to do this step? I thought it would be handled automatically by the topology.
From the blog,
https://blog.jdriven.com/2019/12/kafka-streams-topologytestdriver-with-avro/
When you configure the same mock:// URL in both the Properties passed into TopologyTestDriver, as well as for the (de)serializer instances passed into createInputTopic and createOutputTopic, all (de)serializers will use the same MockSchemaRegistryClient, with a single in-memory schema store.
// Configure Serdes to use the same mock schema registry URL
Map<String, String> config = Map.of(
    AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, MOCK_SCHEMA_REGISTRY_URL);
avroUserSerde.configure(config, false);
avroColorSerde.configure(config, false);
// Define input and output topics to use in tests
usersTopic = testDriver.createInputTopic(
    "users-topic",
    stringSerde.serializer(),
    avroUserSerde.serializer());
colorsTopic = testDriver.createOutputTopic(
    "colors-topic",
    stringSerde.deserializer(),
    avroColorSerde.deserializer());
I was not passing the mock schema registry URL to the serdes used for the input/output topics.
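For reference, the two extra subjects come from the internal changelog topic that Kafka Streams creates to back the grouped state: the topic is named <application.id>-<store name>-changelog, and the Avro serde (with the default TopicNameStrategy) then looks up <topic>-key and <topic>-value subjects in the registry. A small sketch of that naming convention (the app id and store name below are taken from the error message above, not computed by Kafka):

```java
// Sketch of the subject names the Avro serde will look up for the internal
// changelog topic. Kafka Streams names the topic
//   <application.id>-<store name>-changelog
// and the default TopicNameStrategy appends "-key" / "-value".
public class ChangelogSubjects {

    static String changelogTopic(String applicationId, String storeName) {
        return applicationId + "-" + storeName + "-changelog";
    }

    public static void main(String[] args) {
        // Values mirror the subjects from the question.
        String topic = changelogTopic("stream-app-id",
                "activity-daily-aggregate-STATE-STORE-0000000010");
        System.out.println(topic + "-key");
        System.out.println(topic + "-value");
    }
}
```

So you can either pre-register the schema under those subjects, or configure every serde used inside the topology with the same mock:// URL so they all share one in-memory registry, as the quoted blog suggests.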

How to connect Orion with the public cosmos.lab.fi-ware.org instance using Cygnus

I am trying to persist my Orion data into the public cosmos.lab.fi-ware.org instance using Cygnus.
Cygnus is up and running and the HDFSSink part of my /usr/cygnus/conf/agent_1.conf looks like this:
# OrionHDFSSink configuration
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
cygnusagent.sinks.hdfs-sink.type = com.telefonica.iot.cygnus.sinks.OrionHDFSSink
cygnusagent.sinks.hdfs-sink.enable_grouping = false
cygnusagent.sinks.hdfs-sink.backend_impl = rest
cygnusagent.sinks.hdfs-sink.hdfs_host = cosmos.lab.fi-ware.org
cygnusagent.sinks.hdfs-sink.hdfs_port = 14000
cygnusagent.sinks.hdfs-sink.hdfs_username = myUsernameInCosmosLabInstance
cygnusagent.sinks.hdfs-sink.hdfs_password = myPasswordInCosmosLabInstance
cygnusagent.sinks.hdfs-sink.oauth2_token = myTokenForCosmosLabInstance
cygnusagent.sinks.hdfs-sink.hive = true
cygnusagent.sinks.hdfs-sink.hive.server_version = 2
cygnusagent.sinks.hdfs-sink.hive.host = cosmos.lablfi-ware.org
cygnusagent.sinks.hdfs-sink.hive.port = 10000
cygnusagent.sinks.hdfs-sink.hive.db_type = default-db
I add a new subscription with Cygnus as the reference endpoint and send an update to a previously created NGSI entity, but nothing appears in my cosmos.lab.fi-ware.org instance.
Looking at /var/log/cygnus/cygnus.log I can't find anything useful, apart from some Java errors.
I am using Orion v. 0.28 and Cygnus v. 0.13.
As the log says:
Could not open connection to jdbc:hive2://cosmos.lablfi-ware.org:10000/default: java.net.UnknownHostException: cosmos.lablfi-ware.org
You must configure the right Hive endpoint:
cygnusagent.sinks.hdfs-sink.hive.host = cosmos.lab.fiware.org
Instead of:
cygnusagent.sinks.hdfs-sink.hive.host = cosmos.lablfi-ware.org
NOTE: You may have noticed I've used cosmos.lab.fiware.org. Both cosmos.lab.fiware.org and cosmos.lab.fi-ware.org are valid, but the first one is preferred.
To find the data that Orion was persisting in my Cosmos global instance:
From the Hive CLI:
# hive
hive> select * from myUsernameInCosmosLabInstance_def_serv_def_servpath_room1_room_column;
Alternatively, directly from HDFS:
# hadoop fs -ls /user/myUsernameInCosmosInstance/def_serv/def_servpath/Room1_Room/Room1_Room.txt

OrientDB unable to open any kind of graph

I am new to OrientDB and have run into a major block even trying to open a simple in-memory database.
Here are my two lines of code (in Java):
OrientGraphFactory factory = new OrientGraphFactory("memory:test").setupPool(1, 10);
// EVERY TIME YOU NEED A GRAPH INSTANCE
OrientGraph g = factory.getTx();
try {
} finally {
    g.shutdown();
}
I get the following error:
Exception in thread "main" com.orientechnologies.orient.core.exception.OStorageException: Cannot open local storage 'test' with mode=rw
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.open(OAbstractPaginatedStorage.java:210)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.open(ODatabaseDocumentTx.java:223)
at com.orientechnologies.orient.core.db.OPartitionedDatabasePool.acquire(OPartitionedDatabasePool.java:287)
at com.tinkerpop.blueprints.impls.orient.OrientBaseGraph.<init>(OrientBaseGraph.java:163)
at com.tinkerpop.blueprints.impls.orient.OrientTransactionalGraph.<init>(OrientTransactionalGraph.java:78)
at com.tinkerpop.blueprints.impls.orient.OrientGraph.<init>(OrientGraph.java:128)
at com.tinkerpop.blueprints.impls.orient.OrientGraphFactory.getTx(OrientGraphFactory.java:74)
Caused by: com.orientechnologies.orient.core.exception.OStorageException: Cannot open the storage 'test' because it does not exist in path: test
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.open(OAbstractPaginatedStorage.java:154)
... 7 more
What 'path' is it talking about? How is a path even relevant when trying to open a simple in-memory database? Furthermore, I have also tried this with plocal:/... and I always get the above error.
Regards,
Bhargav.
Try creating the database first:
OrientGraphNoTx graph = new OrientGraphNoTx("memory:test");
Then use the pool:
OrientGraphFactory factory = new OrientGraphFactory("memory:test").setupPool(1, 10);
By the way, which db version are you using?
In-memory databases need to be created first, and the pool didn't allow that (fixed in the latest snapshot). Try acquiring an instance from the factory without the pool, like:
OrientGraphFactory factory = new OrientGraphFactory("memory:test");
factory.getTx().shutdown(); // AUTO-CREATE THE GRAPH IF NOT EXISTS
factory.setupPool(1, 10);
// EVERY TIME YOU NEED A GRAPH INSTANCE
OrientGraph g = factory.getTx();
try {
} finally {
    g.shutdown();
}

Understanding Esper IO Http example

What is TriggerEvent here?
How do I plug this into the Esper engine to receive events?
What URI should be passed? What should engineURI look like?
Is it the remote location of the Esper engine?
ConfigurationHTTPAdapter adapterConfig = new ConfigurationHTTPAdapter();
// add additional configuration
Request request = new Request();
request.setStream("TriggerEvent");
request.setUri("http://localhost:8077/root");
adapterConfig.getRequests().add(request);
// start adapter
EsperIOHTTPAdapter httpAdapter = new EsperIOHTTPAdapter(adapterConfig, "engineURI");
httpAdapter.start();
// destroy the adapter when done
httpAdapter.destroy();
I changed the stream from TriggerEvent to HttpEvents and get the exception below:
ConfigurationException: Event type by name 'HttpEvents' not found
The "engineURI" is a name for the CEP engine instance and has nothing to do with the EsperIO HTTP transport. It's a name for looking up which engines exist and finding an engine by name. So any text can be used here, and the default CEP engine is named "default" when you allocate the default one.
You should define the event type of the event you expect to receive via HTTP. Sample code is at http://svn.codehaus.org/esper/esper/trunk/esperio-socket/src/test/java/com/espertech/esperio/socket/TestSocketAdapterCSV.java
You need to declare your event type(s) either in Java or through Esper's EPL statements.
The reason you are getting the exception is that your type is not defined.
Then you can start sending events, specifying the type you are sending in the HTTP request. For example, here is a bit of Python code:
import datetime
import urllib

cepurl = "http://localhost:8084"
param = urllib.urlencode({'stream': 'DataEvent',
    'date': datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
    'src': data["ipsrc"],
    'dst': data["ipdst"],
    'type': data["type"]})
# sending event:
f = urllib.urlopen(cepurl + "/sendevent?" + param)
rez = f.read()
In Java this would probably be something like this:
SupportHTTPClient client = new SupportHTTPClient();
client.request(8084, "sendevent", "stream", "DataEvent", "date", "mydate");
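A rough self-contained Java equivalent of the Python sender above (the endpoint path, port, and parameter names mirror the Python snippet and are assumptions about your adapter's configuration, not a fixed Esper API):

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class EsperHttpSender {

    // Build an application/x-www-form-urlencoded query string.
    static String encode(Map<String, String> params) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            sb.append(URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("stream", "DataEvent"); // must match a declared event type
        params.put("src", "10.0.0.1");
        params.put("dst", "10.0.0.2");
        String url = "http://localhost:8084/sendevent?" + encode(params);
        System.out.println(url);

        if (args.length > 0) { // pass any argument to actually send the event
            HttpRequest request = HttpRequest.newBuilder().uri(URI.create(url)).build();
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        }
    }
}
```

As with the Python version, the "stream" parameter must name an event type already declared to the engine, otherwise you get the "Event type ... not found" exception above.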

Exception with creating Edge inside Transaction

I'm really new to OrientDB, so I'm probably doing something very wrong; still, here it goes:
OrientGraphFactory factory = new OrientGraphFactory("remote:localhost/testdb", "root", "12345").setupPool(2, 10);
I created a few VertexTypes and EdgeTypes:
OrientGraphNoTx graph = factory.getNoTx();
graph.createVertexType("Company");
graph.createVertexType("Contract");
graph.createEdgeType("SignedWith");
I also created a few indexes:
graph.createKeyIndex("itemid", Vertex.class, new Parameter<>("class", "Contract" ));
graph.createKeyIndex("itemid", Vertex.class, new Parameter<>("class", "Company"));
Now, while creating, I do the following:
OrientGraph graph = factory.getTx();
Vertex contract = graph.addVertex("class:Contract");
contract.setProperty("itemid", field.longValue());
[... many other properties]
Vertex company = graph.addVertex("class:Company");
company.setProperty("itemid", field.longValue());
[... many other properties]
contract.addEdge("SignedWith", company);
// Also tried this way:
//graph.addEdge(null, contract, company ,"SignedWith" );
And every time I keep getting:
[debug] c.j.n.n.OrientDBUtils - verify contract has id : #16:-2
[debug] c.j.n.n.OrientDBUtils - verify company has id : #12:-2
[error] c.j.n.c.OrientDBIndexerRunnable - NeoIndexerRunnable - indexing problem Contract Id:20
com.orientechnologies.orient.core.exception.ORecordNotFoundException: The record with id '#16:-2' not found
at com.orientechnologies.orient.core.record.ORecordAbstract.reload(ORecordAbstract.java:320) ~[orientdb-core-2.0-M3.jar:2.0-M3]
at com.orientechnologies.orient.core.record.impl.ODocument.reload(ODocument.java:653) ~[orientdb-core-2.0-M3.jar:2.0-M3]
at com.orientechnologies.orient.core.record.impl.ODocument.reload(ODocument.java:69) ~[orientdb-core-2.0-M3.jar:2.0-M3]
at com.orientechnologies.orient.core.record.ORecordAbstract.checkForLoading(ORecordAbstract.java:470) ~[orientdb-core-2.0-M3.jar:2.0-M3]
at com.orientechnologies.orient.core.record.impl.ODocument.rawField(ODocument.java:819) ~[orientdb-core-2.0-M3.jar:2.0-M3]
Caused by: com.orientechnologies.orient.core.exception.ORecordNotFoundException: Record with rid #16:-2 was not found in database
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.reload(ODatabaseDocumentTx.java:1389) ~[orientdb-core-2.0-M3.jar:2.0-M3]
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.reload(ODatabaseDocumentTx.java:123) ~[orientdb-core-2.0-M3.jar:2.0-M3]
at com.orientechnologies.orient.core.record.ORecordAbstract.reload(ORecordAbstract.java:314) ~[orientdb-core-2.0-M3.jar:2.0-M3]
at com.orientechnologies.orient.core.record.impl.ODocument.reload(ODocument.java:653) ~[orientdb-core-2.0-M3.jar:2.0-M3]
at com.orientechnologies.orient.core.record.impl.ODocument.reload(ODocument.java:69) ~[orientdb-core-2.0-M3.jar:2.0-M3]
Am I missing something ?
Running with orientDB 2.0-M3 on Linux.
Thanks so much for your help
This happens when the transaction is rolled back between the creation of the vertices.
I was expecting a more explicit exception when creating a vertex with an OrientGraph object while a transaction isn't available or was rolled back. In fact it creates a new transaction automatically.