Hi, I am new to OrientDB. I searched Google about this and found the following: http://orientdb.com/docs/last/Binary-Data.html
Maybe this question is not valid, but I have a doubt: what will the type of the element storing binary data be:
1. if we try to save an image as a schema-full property?
2. if we try to save an image as a schema-less property?
As mentioned in the document above:
ODocument doc = new ODocument();
doc.field("binary", "Binary data".getBytes());
doc.save();
Where will 'doc' get saved?
Would it be possible to give an example of how to save image/binary data and retrieve it?
The data type for binary properties is OType.BINARY.
If you don't specify a class for the document, it will be saved in the "default" cluster. Then you can query it with SELECT FROM cluster:default WHERE ...
BUT I strongly discourage you from doing that. Please also consider that in v3.0 automatic saving to the default cluster is no longer supported (though you can still call doc.save("default") explicitly).
In general it's much better to create a specific class and save your docs there, e.g.:
//create the schema only the first time of course
OClass imageClass = db.getMetadata().getSchema().createClass("Image"); // "class" is a reserved word in Java
imageClass.createProperty("binary", OType.BINARY); // if you want it schemaful
ODocument doc = db.newInstance("Image");
doc.field("binary", "Binary data".getBytes());
doc.save();
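To read the image back, you query the class and take the bytes out of the same field. A minimal sketch (written in Scala against the same document API, with db being the same database instance as above; the "Image" class and "binary" field follow the example, and the output file name is just for illustration):
import com.orientechnologies.orient.core.record.impl.ODocument
import com.orientechnologies.orient.core.sql.query.OSQLSynchQuery
import scala.collection.JavaConverters._

// Fetch all Image documents and dump each BINARY field to disk
val results: java.util.List[ODocument] =
  db.query(new OSQLSynchQuery[ODocument]("SELECT FROM Image"))
for ((doc, i) <- results.asScala.zipWithIndex) {
  val bytes: Array[Byte] = doc.field("binary") // OType.BINARY maps to byte[]
  java.nio.file.Files.write(java.nio.file.Paths.get(s"image-$i.bin"), bytes)
}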
I want to save data into Elasticsearch using Spark.
I use this connector: https://www.elastic.co/guide/en/elasticsearch/hadoop/master/spark.html#spark-installation
I can save data using the saveToEsWithMeta method on an RDD with a case class. But when I want to set a field named #timestamp, I have a problem: I added an attribute named #timestamp to my case class, but it is saved under the name '$attimestamp' in Elasticsearch instead of '#timestamp'.
I found a workaround using a Map instead of a case class, but do you know a solution using a case class?
Thanks a lot,
Benoît
Maybe try this from the documentation you linked to:
For cases where the id (or other metadata fields like ttl or
timestamp) of the document needs to be specified, one can do so by
setting the appropriate mapping namely es.mapping.id. Following the
previous example, to indicate to Elasticsearch to use the field id as
the document id, update the RDD configuration (it is also possible to
set the property on the SparkConf though due to its global effect it
is discouraged):
EsSpark.saveToEs(rdd, "spark/docs", Map("es.mapping.id" -> "id"))
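By the same pattern, you could keep a legal Scala field name in the case class and point the connector's timestamp mapping at it, instead of trying to name the attribute #timestamp. An untested sketch: it assumes the connector honors an es.mapping.timestamp setting analogous to es.mapping.id, as the quoted passage suggests for metadata fields, and the Doc case class and sc (a SparkContext) are made up for the example:
import org.elasticsearch.spark.rdd.EsSpark

// "ts" is an ordinary field standing in for the #timestamp metadata
case class Doc(id: String, ts: String, body: String)

val rdd = sc.makeRDD(Seq(Doc("1", "2015-05-01T00:00:00Z", "hello")))
EsSpark.saveToEs(rdd, "spark/docs",
  Map("es.mapping.id" -> "id", "es.mapping.timestamp" -> "ts"))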
I want to view the schema of the data stored in a kvstore: what the keys are and their types, and also the values and their types (since Oracle NoSQL is a key-value store). As far as I know we can use the "show schema" command, but it only works if an Avro schema has been added to that particular store, and it only gives information about value names and their types; the key name and its type are still a problem.
So is there any utility I can use to view the structure of the data, like the "describe" command in Oracle SQL?
You are right that 'kv-> show schema' will show you the field names (columns) and their types when you have an Avro schema. When you don't register a schema, the database has no knowledge of what your value object looks like; in that case the client application maintains the schema of the value field (instead of the database).
About the keys: a) keys are always of string type; b) you can view them from the datashell prompt with something like "kv-> get kv -keyonly -all".
I would also like to mention that in the upcoming R3 release we will be introducing a table data model, which will give you an experience much closer to a relational database (in terms of table definitions). You can take a look at a webinar we did on this subject: http://bit.ly/1lPazSZ
Hope that helps,
Anuj
Is it possible to fetch items with a plain SQL query instead of building the query with the DSL in SORM?
For example, is there an API for something like
val metallica = Db.query[Artist].fromString("SELECT * FROM artist WHERE name = ?", "Metallica").fetchOne() // Option[Artist]
instead of
val metallica = Db.query[Artist].whereEqual("name", "Metallica").fetchOne() // Option[Artist]
Since populating an entity with collections and other structured values involves fetching data from multiple tables in an unjoinable way, the API for fetching it directly will most probably never get exposed. However, another approach to this problem is currently being considered.
Here's how it could be implemented:
val artists : Seq[Artist]
= Db.fetchWithSql[Artist]("SELECT id FROM artist WHERE name = ?", "Metallica")
If this issue gets notable support either here or, even better, here, it will probably get implemented in the next minor release.
Update
Implemented in 0.3.1.
If you want to fetch only one object (by two or more arguments), you can also do the following.
By using SORM's Querier:
Db.query[Artist].where(Querier.And(Querier.Equal("name", "Name"), Querier.Equal("surname", "surname"))).fetchOne()
or just
Db.query[Artist].whereEqual("name", "Name").whereEqual("surname", "surname").fetchOne()
I am in the process of migrating a database from MySQL to MongoDB. However, I am running into a problem where MongoDB changes a field's stored type based on the length/value of the string/integer data used to initialize it. Is there a way to prevent this? I want the types to be the same across a collection.
I am new to this technology and apologize if I missed something. I looked around and could not find a solution to this. Any pointers are greatly appreciated.
Thanks,
Asha
If you're writing your migration application in C++, check out the BSONObjBuilder class in "bson/bsonobjbuilder.h". If you create your individual BSON documents using the "append" methods of BSONObjBuilder, the builder will use the static types of the fields to set the appropriate BSON type in the output object.
For example:
#include <string>
#include "bson/bsonobjbuilder.h" // from the MongoDB C++ driver

int count = /*something from a mysql query*/;
std::string name = /*something else from a mysql query*/;

mongo::BSONObjBuilder builder;
builder.append("count", count); // int appends as a 32-bit BSON NumberInt
builder.append("name", name);   // std::string appends as a BSON string
mongo::BSONObj result = builder.obj();
I am using Rogue/Lift Mongo Record to query MongoDB. I am trying to build different queries according to the sort field name, so I have the name of the field I want to sort the results by as a string.
I have tried to use Record.fieldByName in orderAsc:
...query.orderAsc (elem => elem.fieldByName(columnName).open_!)
but I get "no type parameter for orderAsc".
How can I make it work? Honestly, all the type-level programming in Rogue is quite difficult to follow.
Thanks
The problem is that you cannot easily generate a query dynamically with Rogue. As a solution I used Lift Mongo DB, which allows the use of strings (without compile-time checking) for the kind of operations that require dynamic sorting.
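For illustration, here is roughly what that fallback looks like (a sketch with made-up names: ArtistRecord stands in for your MongoRecord meta object, and I'm assuming the findAll overload that takes query and sort DBObjects):
import com.mongodb.BasicDBObject

// columnName is a plain string, so nothing is checked at compile time
def sortedBy(columnName: String) = {
  val sort = new BasicDBObject(columnName, 1) // 1 = ascending, -1 = descending
  ArtistRecord.findAll(new BasicDBObject(), sort)
}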