How to add properties to a TitanVertex in Titan graph 1.0.0

I am using titan 1.0.0-hadoop1. I am trying to add a list of properties to a Vertex that I am creating. In earlier versions such as 0.5.4 you could add a property directly with setProperty, but in the latest API I find it difficult to add a property, and I could not find the right solution on the internet.
Please help me add properties to a Vertex using the Titan Java API.

An example will help:
Vertex vertex = graph.addVertex();
vertex.property("ID", "123"); // creates the "ID" property with value "123"
To query the property:
vertex.property("ID"); // returns the property object
vertex.value("ID"); // returns "123"
vertex.values(); // returns all the values of all the properties
When you have difficulty understanding the Titan API, I recommend looking at the TinkerPop API. Titan implements it, so all TinkerPop commands work with Titan graphs.
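For example, here is a minimal sketch using the plain TinkerPop 3 API that Titan 1.0.0 implements (the "name" key and its value are made-up placeholders, not from the question):
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Vertex;

// "graph" is the TitanGraph you already opened
Vertex v = graph.addVertex();
v.property("name", "alice");        // set a property through the TinkerPop API
graph.tx().commit();

// read the value back with a Gremlin traversal
GraphTraversalSource g = graph.traversal();
Object name = g.V().has("name", "alice").values("name").next();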

I am also using the Titan 1.0.0 graph database with the Cassandra storage backend and had the same problem after upgrading from version 0.5.4. I found a simple generic solution to add any Collection object (Set or List) to a vertex property using this method:
public static void setMultiElementProperties(TitanElement element, String key, Collection collection) {
    if (element != null && key != null && collection != null) {
        // Put item from collection to the property of type Cardinality.LIST or Cardinality.SET
        for (Object item : collection) {
            if (item != null)
                element.property(key, item);
        }
    }
}
The same method implemented with Java 8 syntax:
public static void setMultiElementProperties(TitanElement element, String key, Collection collection) {
    if (element != null && key != null && collection != null) {
        // Put item from collection to the property of type Cardinality.LIST or Cardinality.SET
        collection.stream().filter(item -> item != null).forEach(item -> element.property(key, item));
    }
}
TitanElement is the parent of both TitanVertex and TitanEdge, so you can pass either a vertex or an edge to this method. Of course, you need to declare the property key first with Cardinality.SET or Cardinality.LIST using TitanManagement in order to use a multi-value property.
TitanManagement tm = titanGraph.openManagement();
tm.makePropertyKey(key).cardinality(Cardinality.LIST).make(); // or Cardinality.SET
tm.commit();
To retrieve the collection from the element property you can simply use:
Iterator<Object> collectionIter = element.values(key);
And this is the Java 8 way to iterate over it:
List<Object> myList = new ArrayList<>();
collectionIter.forEachRemaining(item -> myList.add(item));
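Putting it together, a rough end-to-end usage sketch could look like this (assuming the usual Titan and java.util imports; the "tags" key and the sample values are hypothetical placeholders):
// declare a multi-valued property key once
TitanManagement tm = titanGraph.openManagement();
tm.makePropertyKey("tags").cardinality(Cardinality.SET).make();
tm.commit();

// write several values through the helper method above, then read them back
TitanVertex vertex = (TitanVertex) titanGraph.addVertex(); // Titan vertices implement TitanVertex
setMultiElementProperties(vertex, "tags", Arrays.asList("red", "green", "blue"));

List<Object> readBack = new ArrayList<>();
vertex.values("tags").forEachRemaining(readBack::add);

titanGraph.tx().commit();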

MyBatis - ResultMap according to javaType

Hello StackOverflowers,
There is something I don't get about MyBatis resultMap.
The model I'm working on is being updated. We decided to create a new graph of objects which reflects our future DB schema (the current one is awful).
To sum up our problem, here is a simple case:
The current object which is related to the table SITE is org.example.model.SiteModel. We created a new object called org.example.entity.Site. (The package name is temporary.)
The goal now is to reuse the existing SQL requests developed with MyBatis and add a new ResultMap linked to the return type of our method.
Here is an example:
/**
 * Get all sites defined as templates.
 */
@Select("SELECT * FROM SITE WHERE ISTEMPLATE = 'True'")
@ResultMap({"siteResMap", "siteResultMap"})
@Options(statementType = StatementType.CALLABLE)
<T> List<T> findTemplates();
Then, in an XML configuration file, we defined the following mappings:
...
<resultMap id="siteResMap" type="org.example.entity.Site" />
<resultMap id="siteResultMap" type="org.example.model.SiteModel" />
...
And then we call the method from our DAO:
List<Site> site = siteDao.findTemplates();
List<SiteModel> siteMod = siteDao.findTemplates();
What we are expecting from this is a dynamic interpretation from MyBatis, taking the right ResultMap according to the computed return type.
But both lists are shown as List<org.example.entity.Site> in the debugger.
It makes me think that the first ResultMap is taken, ignoring the second one.
Am I missing something? Is there a way to make MyBatis behave this way?
Regards
After a lot of research and code exploration, we found out that the String[] of @ResultMap is not designed to link Java return types to result maps.
This is the function retrieving the result maps (from org.apache.ibatis.executor.resultset.DefaultResultSetHandler):
public List<Object> handleResultSets(Statement stmt) throws SQLException {
    ErrorContext.instance().activity("handling results").object(mappedStatement.getId());
    final List<Object> multipleResults = new ArrayList<Object>();
    int resultSetCount = 0;
    ResultSetWrapper rsw = getFirstResultSet(stmt);
    List<ResultMap> resultMaps = mappedStatement.getResultMaps();
    int resultMapCount = resultMaps.size();
    validateResultMapsCount(rsw, resultMapCount);
    while (rsw != null && resultMapCount > resultSetCount) {
        ResultMap resultMap = resultMaps.get(resultSetCount);
        handleResultSet(rsw, resultMap, multipleResults, null);
        rsw = getNextResultSet(stmt);
        cleanUpAfterHandlingResultSet();
        resultSetCount++;
    }
    String[] resultSets = mappedStatement.getResulSets();
    if (resultSets != null) {
        while (rsw != null && resultSetCount < resultSets.length) {
            ResultMapping parentMapping = nextResultMaps.get(resultSets[resultSetCount]);
            if (parentMapping != null) {
                String nestedResultMapId = parentMapping.getNestedResultMapId();
                ResultMap resultMap = configuration.getResultMap(nestedResultMapId);
                handleResultSet(rsw, resultMap, null, parentMapping);
            }
            rsw = getNextResultSet(stmt);
            cleanUpAfterHandlingResultSet();
            resultSetCount++;
        }
    }
    return collapseSingleResultList(multipleResults);
}
This explains why we always got a List of elements with the type of the first resultMap.
We created a new DAO to map the new object types, as sketched below.
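As a rough illustration of that workaround (the interface names are hypothetical; each method now declares a concrete return type with a single result map):
// existing DAO keeps returning the legacy model
public interface SiteModelDao {
    @Select("SELECT * FROM SITE WHERE ISTEMPLATE = 'True'")
    @ResultMap("siteResultMap") // maps rows to org.example.model.SiteModel
    List<SiteModel> findTemplates();
}

// new DAO returns the new entity through its own result map
public interface SiteDao {
    @Select("SELECT * FROM SITE WHERE ISTEMPLATE = 'True'")
    @ResultMap("siteResMap") // maps rows to org.example.entity.Site
    List<Site> findTemplates();
}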

Saving a Hashtable using Mongo & the Play framework?

I've got a model defined like the following...
@MongoEntity
public class Ent extends MongoModel {
    public Hashtable<Integer, CustomType> fil;
    public int ID;

    public Ent() {
        fil = new Hashtable<Integer, CustomType>();
    }
}
CustomType is a datatype I've created which basically holds a list of items (among other things). At some point in my web application I update the hashtable from a controller and then read back the size of the item I just updated. Like the following...
public static void addToHash(CustomType type, int ID, int key) {
    // First I add an element to the list I'm storing in CustomType.
    Ent ent = Ent.find("byID", ID).first();
    CustomType element = ent.fil.get(key);
    if (element == null) element = new CustomType();
    element.add(type);
    ent.save();
    // Next I reset the variables and read back the value I just stored.
    ent = null;
    ent = Ent.find("byID", ID).first();
    element = ent.fil.get(ID);
    System.out.println("SIZE = " + element.size()); // null pointer here
}
As you can see from the above example, I add the element, save the model, and then attempt to read back what I have just added, and it has not been saved. The above model Ent is a minimal version of the entire model I'm actually using. All other values in the model, including Lists, Strings, Integers etc., update correctly, but this Hashtable I'm storing isn't. Why would this be happening and how could I correct it?
You should probably post on the Play framework forum for better help.
Alternatives for a MongoDB framework are Morphia and Spring Data, which have good documentation.
Not sure how Play maps a hash table to a document value, but it seems it cannot update just the hash table using a Mongo operator.
You should be able to mark the whole document for update, which would work but be slower. A sketch of that idea follows.
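For instance, using only the model from the question (this assumes the plugin rewrites the whole document on save(), and that putting the modified value back into the map is enough to include it in that write; nothing here is a confirmed Play/Mongo API beyond what the question already uses):
Ent ent = Ent.find("byID", ID).first();
CustomType element = ent.fil.get(key);
if (element == null) element = new CustomType();
element.add(type);
ent.fil.put(key, element); // put the value back so the whole map is part of the saved document
ent.save();                // persist the entire document instead of a partial field update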

Using subquery in poco to fill property

I am trying to use a property on a POCO that uses LINQ to Entities to pull the first object out of a HashSet property on the same POCO. My object contains the following:
public virtual HashSet<ScheduleWaypoint> ScheduleWaypoints { get; set; }

public ScheduleWaypoint ArrivalStation {
    get {
        if (this.ScheduleWaypoints != null && this.ScheduleWaypoints.Count() > 0) {
            return this.ScheduleWaypoints.Where(row => row.WaypointType.Type.Trim() == "SA").OrderByDescending(row => row.ScheduledTime).First();
        } else
            return null;
    }
}
If I were working with just one object I can't say for certain if this would work, but I know that it does not work inside other LINQ queries. I don't have access to the ID of the ScheduleWaypoint when creating the object; only after it is populated could I possibly do that. Is there a way that I can get this to work? Right now it is telling me:
The specified type member 'ArivalStation' is not supported in LINQ to
Entities. Only initializers, entity members, and entity navigation
properties are supported.
Is there something I can do to get access to this information on a property rather than constantly doing joins when I need the info?
Thanks.
You cannot use custom properties in a LINQ-to-Entities query. Only properties mapped directly to the database can be used, which means you must have a subquery directly in your LINQ-to-Entities query returning your ArrivalStation. Perhaps it can be wrapped as a simple extension method:
public static ScheduleWaypoint GetArrivalStation(this IQueryable<ScheduleWaypoint> waypoints, int routeId)
{
    return waypoints.Where(w => w.WaypointType.Type.Trim() == "SA" && w.Route.Id == routeId)
                    .OrderByDescending(w => w.ScheduledTime)
                    .FirstOrDefault();
}
Here Route is your principal entity where the waypoints are defined. FirstOrDefault is used because subqueries cannot use just First.

How to get the type of an object in a collection of objects at runtime?

In my code I get the type of an object (a Party object) in a loop and get the property info of a particular property, "firstname". All the objects in the Parties collection return the same type, so I would like to get the type outside of the do-while loop only once and still be able to get the property "firstname" from the correct party object.
Is it possible to do this way? Thanks for any help.
public List<Party> Parties { get; set; }

PropertyInfo info = null;
int i = 1;
do
{
    foreach (Field field in TotalFields)
    {
        info = Parties[i - 1].GetType().GetProperty("firstname");
        // some other code here
    }
    i++;
} while (i <= Parties.Count);
When you get the value for a property through a PropertyInfo object, you need to pass an object instance from which to fetch the value. This means that you can reuse the same PropertyInfo instance for several objects, given that they are of the same type as the PropertyInfo was created for:
// Note how we get the PropertyInfo from the Type itself, not an object
// instance of that type.
PropertyInfo propInfo = typeof(YourType).GetProperty("SomeProperty");
foreach (YourType item in SomeList)
{
    // this assumes that YourType.SomeProperty is a string, just as an example
    string value = (string)propInfo.GetValue(item, null);
    // do something sensible with value
}
Your question is tagged as being C# 3, but for completeness it's worth mentioning that this can be made somewhat simpler in C# 4 by using dynamic:
foreach (dynamic item in SomeList)
{
    string value = item.SomeProperty;
    // do something sensible with value
}

Add index with entity framework code first (CTP5)

Is there a way to get EF CTP5 to create an index when it creates a schema?
Update: See here for how EF 6.1 handles this (as pointed out by juFo below).
You can take advantage of the new CTP5's ExecuteSqlCommand method on the Database class, which allows raw SQL commands to be executed against the database.
The best place to invoke the ExecuteSqlCommand method for this purpose is inside a Seed method that has been overridden in a custom Initializer class. For example:
protected override void Seed(EntityMappingContext context)
{
    context.Database.ExecuteSqlCommand("CREATE INDEX IX_NAME ON ...");
}
As some mentioned in the comments to Morteza's answer, there are CreateIndex/DropIndex methods if you use migrations.
But if you are in "debug"/development mode, changing the schema all the time and recreating the database every time, you can use the example mentioned in Morteza's answer.
To make it a little easier, I have written a very simple extension method to make it strongly typed, as inspiration that I want to share with anyone who reads this question and maybe would like this approach as well. Just change it to fit your needs and way of naming indexes.
You use it like this: context.Database.CreateUniqueIndex<User>(x => x.Name);
public static void CreateUniqueIndex<TModel>(this Database database, Expression<Func<TModel, object>> expression)
{
    if (database == null)
        throw new ArgumentNullException("database");

    // Assumes singular table name matching the name of the Model type
    var tableName = typeof(TModel).Name;
    var columnName = GetLambdaExpressionName(expression.Body);
    var indexName = string.Format("IX_{0}_{1}", tableName, columnName);

    var createIndexSql = string.Format("CREATE UNIQUE INDEX {0} ON {1} ({2})", indexName, tableName, columnName);
    database.ExecuteSqlCommand(createIndexSql);
}

public static string GetLambdaExpressionName(Expression expression)
{
    MemberExpression memberExp = expression as MemberExpression;
    if (memberExp == null)
    {
        // Check if it is an UnaryExpression and unwrap it
        var unaryExp = expression as UnaryExpression;
        if (unaryExp != null)
            memberExp = unaryExp.Operand as MemberExpression;
    }

    if (memberExp == null)
        throw new ArgumentException("Cannot get name from expression", "expression");

    return memberExp.Member.Name;
}
Update: From version 6.1 and onwards there is an [Index] attribute available.
For more info, see http://msdn.microsoft.com/en-US/data/jj591583#Index
This feature should be available in the near future via data annotations and the Fluent API. Microsoft has added it to their public backlog:
http://entityframework.codeplex.com/workitem/list/basic?keywords=DevDiv [Id=87553]
Until then, you'll need to use a Seed method on a custom Initializer class to execute the SQL that creates the unique index. If you're using Code First Migrations, create a new migration for adding the unique index, and use the CreateIndex and DropIndex methods in the migration's Up and Down methods to create and drop the index.
Check my answer here: Entity Framework Code First Fluent Api: Adding Indexes to columns. It allows you to define multi-column indexes by using attributes on properties.