Setting a hierarchy filter via script - unity3d

At the top of the Hierarchy window of the Unity Editor there is a field for filtering the hierarchy:
My question is whether you can set that filter from an editor script, and if so, how. I can barely find anything about this on the web.
Thanks in advance.

UnityEditor.SceneModeUtility.SearchForType seems to be a step in the right direction.
The good news is that you can see the implementation of that method in MonoDevelop.
Taking a closer look at it tells us which methods we'd need.
public static void SearchForType (Type type)
{
    Object[] array = Resources.FindObjectsOfTypeAll (typeof(SceneHierarchyWindow));
    SceneHierarchyWindow sceneHierarchyWindow = (array.Length <= 0) ? null : (array [0] as SceneHierarchyWindow);
    if (sceneHierarchyWindow)
    {
        SceneModeUtility.s_HierarchyWindow = sceneHierarchyWindow;
        if (type == null || type == typeof(GameObject))
        {
            SceneModeUtility.s_FocusType = null;
            sceneHierarchyWindow.ClearSearchFilter ();
        }
        else
        {
            SceneModeUtility.s_FocusType = type;
            if (sceneHierarchyWindow.searchMode == SearchableEditorWindow.SearchMode.Name)
            {
                sceneHierarchyWindow.searchMode = SearchableEditorWindow.SearchMode.All;
            }
            sceneHierarchyWindow.SetSearchFilter ("t:" + type.Name, sceneHierarchyWindow.searchMode, false);
            sceneHierarchyWindow.hasSearchFilterFocus = true;
        }
    }
    else
    {
        SceneModeUtility.s_FocusType = null;
    }
}
And now the bad news: due to their protection level, you can neither access the hierarchy window directly nor call the SetSearchFilter method.
Maybe you could write your own editor window, similar to the hierarchy view, where you have full control and can do whatever you want.
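For what it's worth, a very rough sketch of that idea might look like the following. This is not Unity's hierarchy implementation; the class name, menu path and default filter are made up. It just lists GameObjects that have a component of the typed-in name and selects one when clicked:
using UnityEditor;
using UnityEngine;

public class TypeFilterWindow : EditorWindow
{
    private string typeName = "Camera"; // hypothetical default filter
    private Vector2 scroll;

    [MenuItem("Window/Type Filter (Sketch)")]
    private static void Open()
    {
        GetWindow<TypeFilterWindow>();
    }

    private void OnGUI()
    {
        typeName = EditorGUILayout.TextField("Component type", typeName);
        scroll = EditorGUILayout.BeginScrollView(scroll);
        foreach (GameObject go in Object.FindObjectsOfType<GameObject>())
        {
            // Only show objects that have a component of the given type name.
            if (go.GetComponent(typeName) != null && GUILayout.Button(go.name))
            {
                Selection.activeGameObject = go; // mimic clicking it in the hierarchy
            }
        }
        EditorGUILayout.EndScrollView();
    }
}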

Thanks to d4RK I found out how to do it using Reflection:
public const int FILTERMODE_ALL = 0;
public const int FILTERMODE_NAME = 1;
public const int FILTERMODE_TYPE = 2;
public static void SetSearchFilter(string filter, int filterMode)
{
    // The Hierarchy window, located via reflection below.
    SearchableEditorWindow hierarchy = null;
    SearchableEditorWindow[] windows = (SearchableEditorWindow[])Resources.FindObjectsOfTypeAll(typeof(SearchableEditorWindow));
    foreach (SearchableEditorWindow window in windows)
    {
        if (window.GetType().ToString() == "UnityEditor.SceneHierarchyWindow")
        {
            hierarchy = window;
            break;
        }
    }
    if (hierarchy == null)
        return;
    MethodInfo setSearchType = typeof(SearchableEditorWindow).GetMethod("SetSearchFilter", BindingFlags.NonPublic | BindingFlags.Instance);
    object[] parameters = new object[] { filter, filterMode, false };
    setSearchType.Invoke(hierarchy, parameters);
}
This may not be the most elegant way, but it works like a charm and can easily be extended to apply the same filter to the SceneView.
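For instance, since the SceneView also derives from SearchableEditorWindow, a sketch of that extension (same usings as above, pre-2018 three-parameter signature assumed) could simply apply the filter to every SearchableEditorWindow instead of only the hierarchy:
public static void SetSearchFilterEverywhere(string filter, int filterMode)
{
    MethodInfo setSearchType = typeof(SearchableEditorWindow).GetMethod("SetSearchFilter", BindingFlags.NonPublic | BindingFlags.Instance);
    foreach (SearchableEditorWindow window in Resources.FindObjectsOfTypeAll(typeof(SearchableEditorWindow)))
    {
        // This hits the Hierarchy window and the SceneView alike.
        setSearchType.Invoke(window, new object[] { filter, filterMode, false });
        window.Repaint();
    }
}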

As of Unity 2018, the SetSearchFilter method requires an additional boolean parameter.
So change this line
object[] parameters = new object[]{filter, filterMode, false};
to
object[] parameters = new object[]{filter, filterMode, false, false};
This should resolve the TargetParameterCountException Ugo Hed mentioned.
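If the helper has to run on both older and newer Unity versions, one option (sketch only) is to choose the argument list based on the reflected parameter count instead of hard-coding it. Inside SetSearchFilter above, the last three lines would become:
MethodInfo setSearchType = typeof(SearchableEditorWindow).GetMethod("SetSearchFilter", BindingFlags.NonPublic | BindingFlags.Instance);
object[] parameters = setSearchType.GetParameters().Length == 4
    ? new object[] { filter, filterMode, false, false }  // Unity 2018+
    : new object[] { filter, filterMode, false };         // older versions
setSearchType.Invoke(hierarchy, parameters);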

Related

Plugin dev: How to search only for interfaces in DLTK/PDT plugin?

I develop extension of PDT plugin. I need dialog with interfaces only (not classes). Basic code looks like it:
OpenTypeSelectionDialog2 dialog = new OpenTypeSelectionDialog2(
DLTKUIPlugin.getActiveWorkbenchShell(),
multi,
PlatformUI.getWorkbench().getProgressService(),
null,
type,
PHPUILanguageToolkit.getInstance());
It works fine, but I get classes and interfaces together (the type variable). Is there any way to filter them? I can't find this kind of mechanism in PDT, but classes and interfaces are recognized correctly (icons next to the names).
I don't know if it's the best solution, but it works.
int falseFlags = 0;
int trueFlags = 0;
IDLTKSearchScope scope = SearchEngine.createSearchScope(getScriptFolder().getScriptProject());
trueFlags = PHPFlags.AccInterface;
OpenTypeSelectionDialog2 dialog = new OpenTypeSelectionDialog2(
DLTKUIPlugin.getActiveWorkbenchShell(),
multi,
PlatformUI.getWorkbench().getProgressService(),
scope,
IDLTKSearchConstants.TYPE,
new PHPTypeSelectionExtension(trueFlags, falseFlags),
PHPUILanguageToolkit.getInstance());
And PHPTypeSelectionExtension looks like this:
public class PHPTypeSelectionExtension extends TypeSelectionExtension {
    /**
     * @see PHPFlags
     */
    private int trueFlags = 0;
    private int falseFlags = 0;

    public PHPTypeSelectionExtension() {
    }

    public PHPTypeSelectionExtension(int trueFlags, int falseFlags) {
        super();
        this.trueFlags = trueFlags;
        this.falseFlags = falseFlags;
    }

    @Override
    public ITypeInfoFilterExtension getFilterExtension() {
        return new ITypeInfoFilterExtension() {
            @Override
            public boolean select(ITypeInfoRequestor typeInfoRequestor) {
                if (falseFlags != 0 && (falseFlags & typeInfoRequestor.getModifiers()) != 0) {
                    // Reject anything matching the blacklist flags.
                    return false;
                } else if (trueFlags == 0 || (trueFlags & typeInfoRequestor.getModifiers()) != 0) {
                    // Accept anything matching the whitelist flags; trueFlags == 0 is fine
                    // because the blacklist check has already passed.
                    return true;
                } else {
                    // Everything else is filtered out.
                    return false;
                }
            }
        };
    }

    @SuppressWarnings("restriction")
    @Override
    public ISelectionStatusValidator getSelectionValidator() {
        return new TypedElementSelectionValidator(new Class[] {IType.class, INamespace.class}, false);
    }
}

Generating Cache Keys from IQueryable For Caching Results of EF Code First Queries

I'm trying to implement a caching scheme for my EF Repository similar to the one blogged here. As the author and commenters have reported, the limitation is that the key generation method cannot produce cache keys that vary with a given query's parameters. Here is the cache key generation method:
private static string GetKey<T>(IQueryable<T> query)
{
string key = string.Concat(query.ToString(), "\n\r",
typeof(T).AssemblyQualifiedName);
return key;
}
So the following queries will yield the same cache key:
var isActive = true;
var query = context.Products
.OrderBy(one => one.ProductNumber)
.Where(one => one.IsActive == isActive).AsCacheable();
and
var isActive = false;
var query = context.Products
.OrderBy(one => one.ProductNumber)
.Where(one => one.IsActive == isActive).AsCacheable();
Notice that the only difference is that isActive = true in the first query and isActive = false in the second.
Any suggestions/insight to efficiently generating cache keys which vary by IQueryable parameters would be truly appreciated.
Kudos to Sergey Barskiy for sharing the EF CodeFirst caching scheme.
Update
I took the approach of traversing the IQueryable's expression tree myself with the goal of resolving the values of the parameters used in the query. With maxlego's suggestion, I extended the System.Linq.Expressions.ExpressionVisitor class to visit the expression nodes that we're interested in - in this case, the MemberExpression. The updated GetKey method looks something like this:
public static string GetKey<T>(IQueryable<T> query)
{
var keyBuilder = new StringBuilder(query.ToString());
var queryParamVisitor = new QueryParameterVisitor(keyBuilder);
queryParamVisitor.GetQueryParameters(query.Expression);
keyBuilder.Append("\n\r");
keyBuilder.Append(typeof (T).AssemblyQualifiedName);
return keyBuilder.ToString();
}
And the QueryParameterVisitor class, which was inspired by the answers of Bryan Watts and Marc Gravell to this question, looks like this:
/// <summary>
/// <see cref="ExpressionVisitor"/> subclass which encapsulates logic to
/// traverse an expression tree and resolve all the query parameter values
/// </summary>
internal class QueryParameterVisitor : ExpressionVisitor
{
public QueryParameterVisitor(StringBuilder sb)
{
QueryParamBuilder = sb;
Visited = new Dictionary<int, bool>();
}
protected StringBuilder QueryParamBuilder { get; set; }
protected Dictionary<int, bool> Visited { get; set; }
public StringBuilder GetQueryParameters(Expression expression)
{
Visit(expression);
return QueryParamBuilder;
}
private static object GetMemberValue(MemberExpression memberExpression, Dictionary<int, bool> visited)
{
object value;
if (!TryGetMemberValue(memberExpression, out value, visited))
{
UnaryExpression objectMember = Expression.Convert(memberExpression, typeof (object));
Expression<Func<object>> getterLambda = Expression.Lambda<Func<object>>(objectMember);
Func<object> getter = null;
try
{
getter = getterLambda.Compile();
}
catch (InvalidOperationException)
{
}
if (getter != null) value = getter();
}
return value;
}
private static bool TryGetMemberValue(Expression expression, out object value, Dictionary<int, bool> visited)
{
if (expression == null)
{
// used for static fields, etc
value = null;
return true;
}
// Mark this node as visited (processed)
int expressionHash = expression.GetHashCode();
if (!visited.ContainsKey(expressionHash))
{
visited.Add(expressionHash, true);
}
// Get Member Value, recurse if necessary
switch (expression.NodeType)
{
case ExpressionType.Constant:
value = ((ConstantExpression) expression).Value;
return true;
case ExpressionType.MemberAccess:
var me = (MemberExpression) expression;
object target;
if (TryGetMemberValue(me.Expression, out target, visited))
{
// instance target
switch (me.Member.MemberType)
{
case MemberTypes.Field:
value = ((FieldInfo) me.Member).GetValue(target);
return true;
case MemberTypes.Property:
value = ((PropertyInfo) me.Member).GetValue(target, null);
return true;
}
}
break;
}
// Could not retrieve value
value = null;
return false;
}
protected override Expression VisitMember(MemberExpression node)
{
// Only process nodes that haven't been processed before, this could happen because our traversal
// is depth-first and will "visit" the nodes in the subtree before this method (VisitMember) does
if (!Visited.ContainsKey(node.GetHashCode()))
{
object value = GetMemberValue(node, Visited);
if (value != null)
{
QueryParamBuilder.Append("\n\r");
QueryParamBuilder.Append(value.ToString());
}
}
return base.VisitMember(node);
}
}
I'm still doing some performance profiling on the cache key generation and hoping that it isn't too expensive (I'll update the question with the results once I have them). I'll leave the question open, in case anyone has suggestions on how to optimize this process or a recommendation for a more efficient way of generating cache keys that vary with the query parameters. Although this method produces the desired output, it is by no means optimal.
I suggest using the ExpressionVisitor class:
http://msdn.microsoft.com/en-us/library/bb882521(v=vs.90).aspx
Just for the record, "Caching the results of LINQ queries" works well with EF and handles parameters correctly, so it can be considered a good second-level cache implementation for EF.
While the OP's solution works quite well, I found its performance a little poor.
Key generation took between 300 ms and 1200 ms for my queries.
However, I've found another solution with much better performance (<10 ms).
public static string ToTraceString<T>(DbQuery<T> query)
{
var internalQueryField = query.GetType().GetFields(BindingFlags.NonPublic | BindingFlags.Instance).Where(f => f.Name.Equals("_internalQuery")).FirstOrDefault();
var internalQuery = internalQueryField.GetValue(query);
var objectQueryField = internalQuery.GetType().GetFields(BindingFlags.NonPublic | BindingFlags.Instance).Where(f => f.Name.Equals("_objectQuery")).FirstOrDefault();
var objectQuery = objectQueryField.GetValue(internalQuery) as ObjectQuery<T>;
return ToTraceStringWithParameters(objectQuery);
}
private static string ToTraceStringWithParameters<T>(ObjectQuery<T> query)
{
string traceString = query.ToTraceString() + "\n";
foreach (var parameter in query.Parameters)
{
traceString += parameter.Name + " [" + parameter.ParameterType.FullName + "] = " + parameter.Value + "\n";
}
return traceString;
}
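A cache key can then be built on top of this, for example with a hypothetical helper mirroring the shape of the original GetKey:
public static string GetKey<T>(DbQuery<T> query)
{
    // Same key layout as the original, but based on the parameterized trace string.
    return string.Concat(ToTraceString(query), "\n\r", typeof(T).AssemblyQualifiedName);
}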

Balancing String based Binary Search Tree (For Spellchecking)

Update: I can't get "Balancing" to work, because I cannot get "doAVLBalance" to recognize the member functions "isBalanced()", "isRightHeavy()", "isLeftHeavy()". And I don't know why! I tried Sash's example (3rd answer) exactly, but I get "declaration is incompatible" and couldn't fix that... so I tried doing it my way... and it tells me those member functions don't exist, when they clearly do.
"Error: class "IntBinaryTree::TreeNode" has no member "isRightHeavy"."
I'm stuck after trying for the last 4 hours :(. Updated code below; help would be much appreciated!!
I'm creating a string-based binary search tree and need to make it a "Balanced" tree. How do I do this? Help please!! Thanks in advance!
BinarySearchTree.cpp:
bool IntBinaryTree::leftRotation(TreeNode *root)
{
//TreeNode *nodePtr = root; // Can use nodePtr instead of root, better?
// root, nodePtr, this->?
if(NULL == root)
{return NULL;}
TreeNode *rightOfTheRoot = root->right;
root->right = rightOfTheRoot->left;
rightOfTheRoot->left = root;
return rightOfTheRoot;
}
bool IntBinaryTree::rightRotation(TreeNode *root)
{
if(NULL == root)
{return NULL;}
TreeNode *leftOfTheRoot = root->left;
root->left = leftOfTheRoot->right;
leftOfTheRoot->right = root;
return leftOfTheRoot;
}
bool IntBinaryTree::doAVLBalance(TreeNode *root)
{
if(NULL==root)
{return NULL;}
else if(root->isBalanced()) // Don't have "isBalanced"
{return root;}
root->left = doAVLBalance(root->left);
root->right = doAVLBalance(root->right);
getDepth(root); //Don't have this function yet
if(root->isRightHeavy()) // Don't have "isRightHeavey"
{
if(root->right->isLeftheavey())
{
root->right = rightRotation(root->right);
}
root = leftRotation(root);
}
else if(root->isLeftheavey()) // Don't have "isLeftHeavey"
{
if(root->left->isRightHeavey())
{
root->left = leftRotation(root->left);
}
root = rightRotation(root);
}
return root;
}
void IntBinaryTree::insert(TreeNode *&nodePtr, TreeNode *&newNode)
{
if(nodePtr == NULL)
nodePtr = newNode; //Insert node
else if(newNode->value < nodePtr->value)
insert(nodePtr->left, newNode); //Search left branch
else
insert(nodePtr->right, newNode); //search right branch
}
//
// Displays the number of nodes in the Tree
int IntBinaryTree::numberNodes(TreeNode *root)
{
TreeNode *nodePtr = root;
if(root == NULL)
return 0;
int count = 1; // our actual node
if(nodePtr->left !=NULL)
{ count += numberNodes(nodePtr->left);
}
if(nodePtr->right != NULL)
{
count += numberNodes(nodePtr->right);
}
return count;
}
// Insert member function
void IntBinaryTree::insertNode(string num)
{
TreeNode *newNode; // Poitner to a new node.
// Create a new node and store num in it.
newNode = new TreeNode;
newNode->value = num;
newNode->left = newNode->right = NULL;
//Insert the node.
insert(root, newNode);
}
// More member functions, etc.
BinarySearchTree.h:
class IntBinaryTree
{
private:
struct TreeNode
{
string value; // Value in the node
TreeNode *left; // Pointer to left child node
TreeNode *right; // Pointer to right child node
};
//Private Members Functions
// Removed for shortness
void displayInOrder(TreeNode *) const;
public:
TreeNode *root;
//Constructor
IntBinaryTree()
{ root = NULL; }
//Destructor
~IntBinaryTree()
{ destroySubTree(root); }
// Binary tree Operations
void insertNode(string);
// Removed for shortness
int numberNodes(TreeNode *root);
//int balancedTree(string, int, int); // TreeBalanced
bool leftRotation(TreeNode *root);
bool rightRotation(TreeNode *root);
bool doAVLBalance(TreeNode *root); // void doAVLBalance();
bool isAVLBalanced();
int calculateAndGetAVLBalanceFactor(TreeNode *root);
int getAVLBalanceFactor()
{
TreeNode *nodePtr = root; // Okay to do this? instead of just
// left->mDepth
// right->mDepth
int leftTreeDepth = (left !=NULL) ? nodePtr->left->Depth : -1;
int rightTreeDepth = (right != NULL) ? nodePtr->right->Depth : -1;
return(leftTreeDepth - rightTreeDepth);
}
bool isRightheavey() { return (getAVLBalanceFactor() <= -2); }
bool isLeftheavey() { return (getAVLBalanceFactor() >= 2); }
bool isBalanced()
{
int balanceFactor = getAVLBalanceFactor();
return (balanceFactor >= -1 && balanceFactor <= 1);
}
int getDepth(TreeNode *root); // getDepth
void displayInOrder() const
{ displayInOrder(root); }
// Removed for shortness
};
Programmers commonly use the AVL tree technique to balance binary search trees. It is quite simple, and more information can be found online (the Wikipedia article on AVL trees is a good starting point).
Below is sample code that balances the tree using the AVL algorithm.
Node *BinarySearchTree::leftRotation(Node *root)
{
if(NULL == root)
{
return NULL;
}
Node *rightOfTheRoot = root->mRight;
root->mRight = rightOfTheRoot->mLeft;
rightOfTheRoot->mLeft = root;
return rightOfTheRoot;
}
Node *BinarySearchTree::rightRotation(Node *root)
{
if(NULL == root)
{
return NULL;
}
Node *leftOfTheRoot = root->mLeft;
root->mLeft = leftOfTheRoot->mRight;
leftOfTheRoot->mRight = root;
return leftOfTheRoot;
}
Node *BinarySearchTree::doAVLBalance(Node *root)
{
if(NULL == root)
{
return NULL;
}
else if(root->isBalanced())
{
return root;
}
root->mLeft = doAVLBalance(root->mLeft);
root->mRight = doAVLBalance(root->mRight);
getDepth(root);
if(root->isRightHeavy())
{
if(root->mRight->isLeftHeavy())
{
root->mRight = rightRotation(root->mRight);
}
root = leftRotation(root);
}
else if(root->isLeftHeavy())
{
if(root->mLeft->isRightHeavy())
{
root->mLeft = leftRotation(root->mLeft);
}
root = rightRotation(root);
}
return root;
}
Class Definition
class BinarySearchTree
{
public:
// .. lots of methods
Node *getRoot();
int getDepth(Node *root);
bool isAVLBalanced();
int calculateAndGetAVLBalanceFactor(Node *root);
void doAVLBalance();
private:
Node *mRoot;
};
class Node
{
public:
int mData;
Node *mLeft;
Node *mRight;
bool mHasVisited;
int mDepth;
public:
Node(int data)
: mData(data),
mLeft(NULL),
mRight(NULL),
mHasVisited(false),
mDepth(0)
{
}
int getData() { return mData; }
void setData(int data) { mData = data; }
void setRight(Node *right) { mRight = right;}
void setLeft(Node *left) { mLeft = left; }
Node * getRight() { return mRight; }
Node * getLeft() { return mLeft; }
bool hasLeft() { return (mLeft != NULL); }
bool hasRight() { return (mRight != NULL); }
bool isVisited() { return (mHasVisited == true); }
int getAVLBalanceFactor()
{
int leftTreeDepth = (mLeft != NULL) ? mLeft->mDepth : -1;
int rightTreeDepth = (mRight != NULL) ? mRight->mDepth : -1;
return(leftTreeDepth - rightTreeDepth);
}
bool isRightHeavy() { return (getAVLBalanceFactor() <= -2); }
bool isLeftHeavy() { return (getAVLBalanceFactor() >= 2); }
bool isBalanced()
{
int balanceFactor = getAVLBalanceFactor();
return (balanceFactor >= -1 && balanceFactor <= 1);
}
};
There are many ways to do this, but I'd suggest that you actually not do this at all. If you want to store a BST of strings, there are much better options:
Use a prewritten binary search tree class. The C++ std::set class offers the same time guarantees as a balanced binary search tree and is often implemented as such. It's substantially easier to use than rolling your own BST.
Use a trie instead. The trie data structure is simpler and more efficient than a BST of strings, requires no balancing at all, and is faster than a BST.
If you really must write your own balanced BST, you have many options. Most BST implementations that use balancing are extremely complex and are not for the faint of heart. I'd suggest implementing either a treap or a splay tree, which are two balanced BST structures that are rather simple to implement. They're both more complex than the code you have above and I can't in this short space provide an implementation, but a Wikipedia search for these structures should give you plenty of advice on how to proceed.
Hope this helps!
Unfortunately, we programmers are literal beasts.
make it a "Balanced" tree.
"Balanced" is context dependent. The introductory data structures classes typically refer to a tree being "balanced" when the difference between the node of greatest depth and the node of least depth is minimized. However, as mentioned by Sir Templatetypedef, a splay tree is considered a balancing tree. This is because it can balance trees rather well in cases that few nodes accessed together at one time frequently. This is because it takes less node traversals to get at the data in a splay tree than a conventional binary tree in these cases. On the other hand, its worst-case performance on an access-by-access basis can be as bad as a linked list.
Speaking of linked lists...
Because otherwise without the "Balancing" it's the same as a linked-list I read and defeats the purpose.
It can be as bad, but for randomized inserts it isn't. If you insert already-sorted data, most binary search tree implementations will store it like a bloated, ordered linked list. However, that's only because you're continually building one side of the tree. (Imagine inserting 1, 2, 3, 4, 5, 6, 7, etc. into a binary tree. Try it on paper and see what happens.)
If you have to balance in a guaranteed worst-case sense, I recommend looking up red-black trees. (Google it; the second link is pretty good.)
If you have to balance it in a reasonable way for this particular scenario, I'd go with integer indices and a decent hash function -- that way the balancing will happen probabilistically without any extra code. That is to say, make your comparison function look like hash(strA) < hash(strB) instead of what you've got now. (For a quick but effective hash for this case, look up FNV hashing. First hit on Google. Go down until you see useful code.) You can worry about the details of implementation efficiency if you want to. (For example, you don't have to perform both hashes every single time you compare since one of the strings never changes.)
If you can get away with it, I strongly recommend the latter if you're in a crunch for time and want something fast. Otherwise, red-black trees are worthwhile since they're extremely useful in practice when you need to roll your own height-balanced binary trees.
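To make the FNV suggestion above concrete, here is a minimal sketch of the idea (shown in C# for brevity; the same few lines port directly to C++, and hashing the UTF-16 code units rather than raw bytes is a simplification):
static uint Fnv1aHash(string s)
{
    uint hash = 2166136261;  // standard 32-bit FNV offset basis
    foreach (char c in s)
    {
        hash ^= c;           // simplification: hash each UTF-16 code unit
        hash *= 16777619;    // standard 32-bit FNV prime
    }
    return hash;
}
// Then compare nodes with Fnv1aHash(a) < Fnv1aHash(b) instead of a < b.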
Finally, addressing your code above, see the comments in the code below:
int IntBinaryTree::numberNodes(TreeNode *root)
{
if(root = NULL) // You're using '=' where you want '==' -- common mistake.
// Consider getting used to putting the value first -- that is,
// "NULL == root". That way if you make that mistake again, the
// compiler will error in many cases.
return 0;
/*
if(TreeNode.left=null && TreeNode.right==null) // Meant to use '==' again.
{ return 1; }
return numberNodes(node.left) + numberNodes(node.right);
*/
int count = 1; // our actual node
if (left != NULL)
{
// You likely meant 'root.left' on the next line, not 'TreeNode.left'.
count += numberNodes(TreeNode.left);
// That's probably the line that's giving you the error.
}
if (right != NULL)
{
count += numberNodes(root.right);
}
return count;
}

Serializing Entity Framework problems

Like several other people, I'm having problems serializing Entity Framework objects, so that I can send the data over AJAX in a JSON format.
I've got the following server-side method, which I'm attempting to call using AJAX through jQuery
[WebMethod]
public static IEnumerable<Message> GetAllMessages(int officerId)
{
SIBSv2Entities db = new SIBSv2Entities();
return (from m in db.MessageRecipients
where m.OfficerId == officerId
select m.Message).AsEnumerable<Message>();
}
Calling this via AJAX results in this error:
A circular reference was detected while serializing an object of type \u0027System.Data.Metadata.Edm.AssociationType
This is because of the way the Entity Framework creates circular references to keep all the objects related and accessible server side.
I came across the following code from (http://hellowebapps.com/2010-09-26/producing-json-from-entity-framework-4-0-generated-classes/), which claims to get around this problem by capping the maximum depth for references. I've added the code below, because I had to tweak it slightly to get it to work (all angle brackets are missing from the code on the website).
using System.Web.Script.Serialization;
using System.Collections.Generic;
using System.Collections;
using System.Linq;
using System;
public class EFObjectConverter : JavaScriptConverter
{
private int _currentDepth = 1;
private readonly int _maxDepth = 2;
private readonly List<int> _processedObjects = new List<int>();
private readonly Type[] _builtInTypes = new[]{
typeof(bool),
typeof(byte),
typeof(sbyte),
typeof(char),
typeof(decimal),
typeof(double),
typeof(float),
typeof(int),
typeof(uint),
typeof(long),
typeof(ulong),
typeof(short),
typeof(ushort),
typeof(string),
typeof(DateTime),
typeof(Guid)
};
public EFObjectConverter( int maxDepth = 2,
EFObjectConverter parent = null)
{
_maxDepth = maxDepth;
if (parent != null)
{
_currentDepth += parent._currentDepth;
}
}
public override object Deserialize( IDictionary<string,object> dictionary, Type type, JavaScriptSerializer serializer)
{
return null;
}
public override IDictionary<string,object> Serialize(object obj, JavaScriptSerializer serializer)
{
_processedObjects.Add(obj.GetHashCode());
Type type = obj.GetType();
var properties = from p in type.GetProperties()
where p.CanWrite &&
p.CanWrite &&
_builtInTypes.Contains(p.PropertyType)
select p;
var result = properties.ToDictionary(
property => property.Name,
property => (Object)(property.GetValue(obj, null)
== null
? ""
: property.GetValue(obj, null).ToString().Trim())
);
if (_maxDepth >= _currentDepth)
{
var complexProperties = from p in type.GetProperties()
where p.CanWrite &&
p.CanRead &&
!_builtInTypes.Contains(p.PropertyType) &&
!_processedObjects.Contains(p.GetValue(obj, null)
== null
? 0
: p.GetValue(obj, null).GetHashCode())
select p;
foreach (var property in complexProperties)
{
var js = new JavaScriptSerializer();
js.RegisterConverters(new List<JavaScriptConverter> { new EFObjectConverter(_maxDepth - _currentDepth, this) });
result.Add(property.Name, js.Serialize(property.GetValue(obj, null)));
}
}
return result;
}
public override IEnumerable<System.Type> SupportedTypes
{
get
{
return GetType().Assembly.GetTypes();
}
}
}
However even when using that code, in the following way:
var js = new System.Web.Script.Serialization.JavaScriptSerializer();
js.RegisterConverters(new List<System.Web.Script.Serialization.JavaScriptConverter> { new EFObjectConverter(2) });
return js.Serialize(messages);
I'm still seeing the A circular reference was detected... exception being thrown!
I solved these issues with the following classes:
public class EFJavaScriptSerializer : JavaScriptSerializer
{
public EFJavaScriptSerializer()
{
RegisterConverters(new List<JavaScriptConverter>{new EFJavaScriptConverter()});
}
}
and
public class EFJavaScriptConverter : JavaScriptConverter
{
private int _currentDepth = 1;
private readonly int _maxDepth = 1;
private readonly List<object> _processedObjects = new List<object>();
private readonly Type[] _builtInTypes = new[]
{
typeof(int?),
typeof(double?),
typeof(bool?),
typeof(bool),
typeof(byte),
typeof(sbyte),
typeof(char),
typeof(decimal),
typeof(double),
typeof(float),
typeof(int),
typeof(uint),
typeof(long),
typeof(ulong),
typeof(short),
typeof(ushort),
typeof(string),
typeof(DateTime),
typeof(DateTime?),
typeof(Guid)
};
public EFJavaScriptConverter() : this(1, null) { }
public EFJavaScriptConverter(int maxDepth = 1, EFJavaScriptConverter parent = null)
{
_maxDepth = maxDepth;
if (parent != null)
{
_currentDepth += parent._currentDepth;
}
}
public override object Deserialize(IDictionary<string, object> dictionary, Type type, JavaScriptSerializer serializer)
{
return null;
}
public override IDictionary<string, object> Serialize(object obj, JavaScriptSerializer serializer)
{
_processedObjects.Add(obj.GetHashCode());
var type = obj.GetType();
var properties = from p in type.GetProperties()
where p.CanRead && p.GetIndexParameters().Count() == 0 &&
_builtInTypes.Contains(p.PropertyType)
select p;
var result = properties.ToDictionary(
p => p.Name,
p => (Object)TryGetStringValue(p, obj));
if (_maxDepth >= _currentDepth)
{
var complexProperties = from p in type.GetProperties()
where p.CanRead &&
p.GetIndexParameters().Count() == 0 &&
!_builtInTypes.Contains(p.PropertyType) &&
p.Name != "RelationshipManager" &&
!AllreadyAdded(p, obj)
select p;
foreach (var property in complexProperties)
{
var complexValue = TryGetValue(property, obj);
if(complexValue != null)
{
var js = new EFJavaScriptConverter(_maxDepth - _currentDepth, this);
result.Add(property.Name, js.Serialize(complexValue, new EFJavaScriptSerializer()));
}
}
}
return result;
}
private bool AllreadyAdded(PropertyInfo p, object obj)
{
var val = TryGetValue(p, obj);
return _processedObjects.Contains(val == null ? 0 : val.GetHashCode());
}
private static object TryGetValue(PropertyInfo p, object obj)
{
var parameters = p.GetIndexParameters();
if (parameters.Length == 0)
{
return p.GetValue(obj, null);
}
else
{
//cant serialize these
return null;
}
}
private static object TryGetStringValue(PropertyInfo p, object obj)
{
if (p.GetIndexParameters().Length == 0)
{
var val = p.GetValue(obj, null);
return val;
}
else
{
return string.Empty;
}
}
public override IEnumerable<Type> SupportedTypes
{
get
{
var types = new List<Type>();
//ef types
types.AddRange(Assembly.GetAssembly(typeof(DbContext)).GetTypes());
//model types
types.AddRange(Assembly.GetAssembly(typeof(BaseViewModel)).GetTypes());
return types;
}
}
}
You can now safely make a call like new EFJavaScriptSerializer().Serialize(obj)
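For example, adapted to the question's web method (returning the JSON string directly is a hypothetical variation on the original signature):
[WebMethod]
public static string GetAllMessagesJson(int officerId)
{
    SIBSv2Entities db = new SIBSv2Entities();
    var messages = (from m in db.MessageRecipients
                    where m.OfficerId == officerId
                    select m.Message).AsEnumerable();
    // The converter registered by EFJavaScriptSerializer caps the traversal depth.
    return new EFJavaScriptSerializer().Serialize(messages);
}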
Update: since Telerik v1.3+ you can override the GridActionAttribute.CreateActionResult method, and hence you can easily integrate this serializer into specific controller methods by applying your custom [GridAction] attribute:
[Grid]
public ActionResult _GetOrders(int id)
{
return new GridModel(Service.GetOrders(id));
}
and
public class GridAttribute : GridActionAttribute, IActionFilter
{
/// <summary>
/// Determines the depth that the serializer will traverse
/// </summary>
public int SerializationDepth { get; set; }
/// <summary>
/// Initializes a new instance of the <see cref="GridActionAttribute"/> class.
/// </summary>
public GridAttribute()
: base()
{
ActionParameterName = "command";
SerializationDepth = 1;
}
protected override ActionResult CreateActionResult(object model)
{
return new EFJsonResult
{
Data = model,
JsonRequestBehavior = JsonRequestBehavior.AllowGet,
MaxSerializationDepth = SerializationDepth
};
}
}
and finally..
public class EFJsonResult : JsonResult
{
const string JsonRequest_GetNotAllowed = "This request has been blocked because sensitive information could be disclosed to third party web sites when this is used in a GET request. To allow GET requests, set JsonRequestBehavior to AllowGet.";
public EFJsonResult()
{
MaxJsonLength = 1024000000;
RecursionLimit = 10;
MaxSerializationDepth = 1;
}
public int MaxJsonLength { get; set; }
public int RecursionLimit { get; set; }
public int MaxSerializationDepth { get; set; }
public override void ExecuteResult(ControllerContext context)
{
if (context == null)
{
throw new ArgumentNullException("context");
}
if (JsonRequestBehavior == JsonRequestBehavior.DenyGet &&
String.Equals(context.HttpContext.Request.HttpMethod, "GET", StringComparison.OrdinalIgnoreCase))
{
throw new InvalidOperationException(JsonRequest_GetNotAllowed);
}
var response = context.HttpContext.Response;
if (!String.IsNullOrEmpty(ContentType))
{
response.ContentType = ContentType;
}
else
{
response.ContentType = "application/json";
}
if (ContentEncoding != null)
{
response.ContentEncoding = ContentEncoding;
}
if (Data != null)
{
var serializer = new JavaScriptSerializer
{
MaxJsonLength = MaxJsonLength,
RecursionLimit = RecursionLimit
};
serializer.RegisterConverters(new List<JavaScriptConverter> { new EFJavaScriptConverter(MaxSerializationDepth) });
response.Write(serializer.Serialize(Data));
}
}
}
You can also detach the object from the context, which removes the navigation properties so that it can be serialized. For my data repository classes that are used with JSON, I use something like this.
public DataModel.Page GetPage(Guid idPage, bool detach = false)
{
var results = from p in DataContext.Pages
where p.idPage == idPage
select p;
if (results.Count() == 0)
return null;
else
{
var result = results.First();
if (detach)
DataContext.Detach(result);
return result;
}
}
By default the returned object will have all of the complex/navigation properties; by setting detach = true it will remove those properties and return the base object only. For a list of objects, the implementation looks like this:
public List<DataModel.Page> GetPageList(Guid idSite, bool detach = false)
{
var results = from p in DataContext.Pages
where p.idSite == idSite
select p;
if (results.Count() > 0)
{
if (detach)
{
List<DataModel.Page> retValue = new List<DataModel.Page>();
foreach (var result in results)
{
DataContext.Detach(result);
retValue.Add(result);
}
return retValue;
}
else
return results.ToList();
}
else
return new List<DataModel.Page>();
}
I have just successfully tested this code.
It may be that in your case your Message object is in a different assembly? The overridden SupportedTypes property returns types ONLY from its own assembly, so when Serialize is called the JavaScriptSerializer falls back to the standard JavaScriptConverter.
You should be able to verify this by debugging.
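If that turns out to be the problem, one way to widen the converter (sketch only; assumes your entities live in the same assembly as Message and that System.Linq is imported) is to make SupportedTypes also return the model assembly's types:
public override IEnumerable<Type> SupportedTypes
{
    get
    {
        // The converter's own assembly plus the assembly containing the EF entities.
        return GetType().Assembly.GetTypes()
            .Concat(typeof(Message).Assembly.GetTypes());
    }
}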
Your error occurred due to some "Reference" classes generated by EF for entities with 1:1 relations, which the JavaScriptSerializer failed to serialize.
I've used a workaround by adding a new condition:
!p.Name.EndsWith("Reference")
The code to get the complex properties looks like this:
var complexProperties = from p in type.GetProperties()
where p.CanWrite &&
p.CanRead &&
!p.Name.EndsWith("Reference") &&
!_builtInTypes.Contains(p.PropertyType) &&
!_processedObjects.Contains(p.GetValue(obj, null)
== null
? 0
: p.GetValue(obj, null).GetHashCode())
select p;
Hope this helps you.
I had a similar problem with pushing my view via Ajax to UI components.
I also found and tried to use that code sample you provided. Some problems I had with that code:
SupportedTypes wasn't grabbing the types I needed, so the converter wasn't being called
If the maximum depth is hit, the serialization would be truncated
It threw out any other converters I had on the existing serializer by creating its own new JavaScriptSerializer
Here are the fixes I implemented for those issues:
Reusing the same serializer
I simply reused the existing serializer that is passed into Serialize to solve this problem. This broke the depth hack though.
Truncating on already-visited, rather than on depth
Instead of truncating on depth, I created a HashSet<object> of already-seen instances (with a custom IEqualityComparer that checked reference equality). I simply didn't recurse if I found an instance I'd already seen. This is the same detection mechanism built into the JavaScriptSerializer itself, so it worked quite well.
The only problem with this solution is that the serialization output isn't very deterministic. The order of truncation is strongly dependent on the order in which reflection finds the properties. You could solve this (with a perf hit) by sorting before recursing.
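A minimal sketch of such a reference-equality comparer (the class name is mine) looks like this:
// Compares by reference identity, so two entities that compare Equal but are
// distinct instances are still both visited.
class ReferenceEqualityComparer : IEqualityComparer<object>
{
    public new bool Equals(object x, object y)
    {
        return ReferenceEquals(x, y);
    }

    public int GetHashCode(object obj)
    {
        return System.Runtime.CompilerServices.RuntimeHelpers.GetHashCode(obj);
    }
}
// Usage: var visited = new HashSet<object>(new ReferenceEqualityComparer());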
SupportedTypes needed the right types
My JavaScriptConverter couldn't live in the same assembly as my model. If you plan to reuse this converter code, you'll probably run into the same problem.
To solve this I had to pre-traverse the object tree, keeping a HashSet<Type> of already seen types (to avoid my own infinite recursion), and pass that to the JavaScriptConverter before registering it.
Looking back on my solution, I would now use code generation templates to create a list of the entity types. This would be much more foolproof (it uses simple iteration), and have much better perf since it would produce a list at compile time. I'd still pass this to the converter so it could be reused between models.
My final solution
I threw out that code and tried again :)
I simply wrote code to project onto new types ("ViewModel" types - in your case, it would be service contract types) before doing my serialization. The intention of my code was made more explicit, it allowed me to serialize just the data I wanted, and it didn't have the potential of slipping in queries by accident (e.g. serializing my whole DB).
My types were fairly simple, and I didn't need most of them for my view. I might look into AutoMapper to do some of this projection in the future.
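For instance, projecting the question's messages onto a flat shape before serializing (the scalar property names here are hypothetical) sidesteps the navigation properties entirely:
var messageDtos = (from m in db.MessageRecipients
                   where m.OfficerId == officerId
                   select new
                   {
                       m.Message.MessageId,  // hypothetical scalar properties
                       m.Message.Subject,
                       m.Message.Body
                   }).ToList();
// Anonymous types serialize cleanly because they have no circular references.
return new JavaScriptSerializer().Serialize(messageDtos);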

Can this extension method be improved?

I have the following extension method
public static class ListExtensions
{
public static IEnumerable<T> Search<T>(this ICollection<T> collection, string stringToSearch)
{
foreach (T t in collection)
{
Type k = t.GetType();
PropertyInfo pi = k.GetProperty("Name");
if (pi.GetValue(t, null).Equals(stringToSearch))
{
yield return t;
}
}
}
}
What it does: using reflection, it finds the Name property and then filters the collection down to the records whose Name matches the given string.
This method is called as follows:
List<FactorClass> listFC = new List<FactorClass>();
listFC.Add(new FactorClass { Name = "BKP", FactorValue="Book to price",IsGlobal =false });
listFC.Add(new FactorClass { Name = "YLD", FactorValue = "Dividend yield", IsGlobal = false });
listFC.Add(new FactorClass { Name = "EPM", FactorValue = "emp", IsGlobal = false });
listFC.Add(new FactorClass { Name = "SE", FactorValue = "something else", IsGlobal = false });
List<FactorClass> listFC1 = listFC.Search("BKP").ToList();
It is working fine.
But a closer look at the extension method reveals that
Type k = t.GetType();
PropertyInfo pi = k.GetProperty("Name");
is inside the foreach loop, which isn't needed. I think we can take it outside the loop.
But how?
Please help. (C# 3.0)
Using reflection in this way is ugly to me.
Are you sure you need a 100% generic "T" and can't use a base class or interface?
If I were you, I would consider using the .Where<T>(Func<T, Boolean>) LINQ method instead of writing your own Search function.
An example use is:
List<FactorClass> listFC1 = listFC.Where(fc => fc.Name == "BKP").ToList();
public static IEnumerable<T> Search<T>(this ICollection<T> collection, string stringToSearch)
{
Type k = typeof(T);
PropertyInfo pi = k.GetProperty("Name");
foreach (T t in collection)
{
if (pi.GetValue(t, null).Equals(stringToSearch))
{
yield return t;
}
}
}
There are a couple of things you could do. First, you could constrain the generic type to an interface that has a Name property. If it can only take a FactorClass, then you don't really need a generic type at all; you could make it an extension to an ICollection<FactorClass>. If you go the interface route (or with the non-generic version), you can simply reference the property and won't need reflection. If, for some reason, this doesn't work you can do:
var k = typeof(T);
var pi = k.GetProperty("Name");
foreach (T t in collection)
{
if (pi.GetValue(t, null).Equals(stringToSearch))
{
yield return t;
}
}
using an interface it might look like
public static IEnumerable<T> Search<T>(this ICollection<T> collection, string stringToSearch) where T : INameable
{
foreach (T t in collection)
{
if (string.Equals( t.Name, stringToSearch))
{
yield return t;
}
}
}
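The interface assumed by that version isn't shown in the post; it would just need to expose the property, something like:
public interface INameable
{
    string Name { get; }
}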
EDIT: After seeing @Jeff's comment, this is really only useful if you're doing something more complex than simply checking the value against one of the properties. He's absolutely correct that using Where is a better solution for that problem.
Just get the type of T
Type k = typeof(T);
PropertyInfo pi = k.GetProperty("Name");
foreach (T t in collection)
{
if (pi.GetValue(t, null).Equals(stringToSearch))
{
yield return t;
}
}