I want to create a collection that will use my Source blocks (there are many) as the key element to look up a value. I want to use this collection as part of an inject function that will use the source name to get the value I need.
I imagine it will look something like this:
int injectfunction = collectionName.get(SourceBlockName);
I'm open to hearing about other, more efficient methods.
I've tried using a LinkedHashMap with the key element class set to "Other" and then specified as "Source", and the value element as int. I also tried Source, but in both cases it didn't work.
Create a collection of type LinkedHashMap with key type String and value element class Source.
Then you can populate your collection using this, for example:
// collect all embedded Source blocks, keyed by their name
for (Object o : findAll(getEmbeddedObjects(), b -> b instanceof Source)) {
    collection.put(((Source) o).getName(), (Source) o);
}
Then you can just use the name of the source to inject:
collection.get("source name").inject();
I am new to SWT and RCP and I am trying to use TreeViewer.
By referring to some documents, I came to know there is a method:
treeViewer.Updte(Object , Properties).
I need to know how SWT figures out which data is for which field.
The method is called update:
public void update(Object element, String[] properties)
Here element must be an object that equals one of the objects returned by the content provider for the tree.
If you have called
treeViewer.setUseHashlookup(true);
then a hash table (similar to a HashMap) is used to find the tree item corresponding to the element. Otherwise the tree is simply searched exhaustively to find the element.
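For illustration, a minimal sketch; the Person model class, the rootModel input, and the "name" property are assumptions, and the label provider is expected to derive the label from that property:
treeViewer.setUseHashlookup(true);   // must be called before setInput(...)
treeViewer.setInput(rootModel);
// ... later, after the model changes:
person.setName("New name");
// element must equal an object returned by the content provider;
// only the labels affected by the "name" property are recomputed
treeViewer.update(person, new String[] { "name" });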
Is it possible to use a different type attribute (instead of _class) for each polymorphic collection, as is done in Doctrine (PHP) or Jackson? The current solution allows storing type information in a document field. By default it is the full class name, stored in the field named _class.
We can easily change it to save some custom string (alias) instead of the full class name, and change the default discriminator field name from _class to something else.
In my situation I'm working on a legacy database while the legacy application is still in use. The legacy application uses Doctrine (PHP) ODM as its data layer.
Doctrine allows defining the discriminator field name (_class in Spring Data) via annotation, and it can differ for each collection.
In Spring Data, when I pass a typeKey to DefaultMongoTypeMapper, it is used for all collections.
Thanks.
// MyCustomMongoTypeMapper.java
// ...
@SuppressWarnings("unchecked")
@Override
public <T> TypeInformation<? extends T> readType(DBObject source, TypeInformation<T> basicType) {
    Assert.notNull(basicType);
    Class<? super T> parent = basicType.getType();
    // walk up the hierarchy until a class carrying the discriminator annotation is found
    while (parent != null && parent != java.lang.Object.class) {
        final String discriminatorKey = getDiscriminatorKey(parent); // fetch key from annotation
        if (null == discriminatorKey) {
            parent = parent.getSuperclass();
        } else {
            // use the per-class discriminator field for this read
            accessor.setKey(discriminatorKey);
            return super.readType(source, basicType);
        }
    }
    // no annotated class found: fall back to the default discriminator field
    accessor.resetKey();
    return super.readType(source, basicType);
}
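For illustration only: getDiscriminatorKey is not shown in the question; a typical implementation might read the field name from a custom annotation, e.g. a hypothetical @DiscriminatorField:
private String getDiscriminatorKey(Class<?> type) {
    // read the per-class field name from the (hypothetical) annotation, if present
    final DiscriminatorField annotation = type.getAnnotation(DiscriminatorField.class);
    return annotation != null ? annotation.value() : null;
}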
Something that should work for you is completely exchanging the MongoTypeMapper instance that the MappingMongoConverter uses. As you discovered, the already available implementation assumes a common field name and uses yet another strategy to decide whether to write the fully-qualified class name, an alias, or the like.
However, you should be able to just write your own and particularly focus on the following methods (a wiring sketch follows the list):
void writeType(TypeInformation<?> type, DBObject dbObject) — you basically get the type and have complete control over where and how to put that into the DBObject.
<T> TypeInformation<? extends T> readType(DBObject source, TypeInformation<T> defaultType); — you get the type declared on the reading side (i.e. usually the most common type of the hierarchy) and based on that have to lookup the type from the given source document. I guess that's exactly the inverse of what's to be implemented in the other method.
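A minimal wiring sketch, assuming Spring Data MongoDB 1.x (the DBObject-based API used in the question); MyCustomMongoTypeMapper is the poster's class, and mongoDbFactory / mappingContext are assumed to already exist in your configuration:
// swap the default type mapper for the custom one before the converter is used
MappingMongoConverter converter =
        new MappingMongoConverter(new DefaultDbRefResolver(mongoDbFactory), mappingContext);
converter.setTypeMapper(new MyCustomMongoTypeMapper());
converter.afterPropertiesSet();
MongoTemplate mongoTemplate = new MongoTemplate(mongoDbFactory, converter);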
On a final note, I would strongly recommend against using different type field names for different collections, as on the reading side you might run into places where just Object is declared on the property and then you basically get no clue where to even look in the document.
I know that the MongoDB C# driver doesn't support projections, so I searched a little bit and found that many people use a MongoCursor to perform such queries. I'm trying to select only specific fields and my code is the following:
public T GetSingle<T>(Expression<Func<T, bool>> criteria, params Expression<Func<T, object>>[] fields) where T : class
{
    Collection = GetCollection<T>();
    return Collection.FindAs<T>(Query<T>.Where(criteria))
                     .SetFields(Fields<T>.Include(fields))
                     .SetLimit(1)
                     .SingleOrDefault();
}
I have a custom repository for users on top of that:
public User GetByEmail(string mail, params Expression<Func<User, object>>[] fields)
{
return GetSingle<User>(x=>x.Email==mail,fields);
}
This is the usage:
_repository.GetByEmail(email, x=>x.Id,x=>x.DisplayName,x=>x.ProfilePicture)
But I'm getting not only the fields included in the parameter, but also all the enum, date, and Boolean values that are part of the User class. The values that are strings and not included in the field list are null, so that's fine.
What can I do to avoid that?
By using SetFields, you can specify what goes through the wire. However, you're still asking the driver to return hydrated objects of type T, User in this case.
Now, similar to, say, an int, enums and booleans are value types, so their values can't be null. So this is strictly a C# problem: there is simply no value for these properties that could indicate they don't exist. Instead, they assume a default value (e.g. false for bool and 0 for numeric types). A string, on the other hand, is a reference type, so it can be null.
Strategies
Make the properties nullable. You can use nullable fields in your models, e.g.:
class User {
public bool? GetMailNotifications { get; set; }
}
That way, the value type can have one of its valid values or be null. This can, however, be clumsy to work with, because you'll have to do null checks and use myUser.GetMailNotifications.Value or the myUser.GetMailNotifications.GetValueOrDefault() helper whenever you want to access the property.
Simply include the fields instead. This doesn't answer the question of how to do it, but there are at least three good reasons why it's a good idea to include them:
When passing a User object around, it's desirable that the object is in a valid state. Otherwise, you might pass a partially hydrated object to a method that passes it on further, and at some point someone attempts an operation that doesn't make sense because the object is incomplete.
It's easier to use
The performance benefit is negligible, unless you're embedding huge arrays, which I would suggest refraining from anyway and which isn't the case here.
So the question is: why do you want to make all the effort of excluding certain fields?
I have a simple object that has a name:
public class Foo {
    private String name;
}
Each user on the site may have up to 10 Foos associated with them. Within this context, when a new Foo is created, I would like to validate that a Foo with the same name isn't already associated with that user.
I could create a custom Bean Validation constraint, but annotations require their parameters to be defined at compile time. How would I then pass in the names of the existing Foos?
As suggested in various places, I could use EL expressions as an alternative way to pick up the data. This feels like using a sledgehammer to crack a nut. It also brings in a whole bunch of potential issues to consider, not least the ease of testing.
I could do class-wide validation using a boolean field:
@AssertTrue(message = "Name already exists")
public boolean isNameUnique() {
    // must return true when the name is not already taken
    return !existingNames.contains(name);
}
But the validation message would not show up next to the name field. It is a cosmetic issue and this can be a backup plan. However, it's not ideal.
Which brings me to the question:
Is there a simple way to write a Bean Validation constraint that can check the value against a collection of values at the field level and meet the following restrictions?
Previous values determined at runtime
Not using things like EL expressions
Field-level validation instead of class-level.
EDIT in response to Hardy:
The Foo class is an entity persisted in a database. The instances are picked up and used through a DAO interface.
I could loop through the entities, but that means plugging the DAO into the validator, not to mention that I would need to write the same thing again if another class has the same constraint.
It would help to see how you want to use the Foo class. Can you extend your example code? Are they kept in a list of Foo instances? A custom constraint seems to be a good fit. Why do you need to pass any parameters to the constraint? I would just iterate over the foos and check whether the names are unique.
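To make that concrete, here is a minimal sketch of the iterate-over-the-foos idea as a class-level constraint. The @UniqueFooNames and UserWithFoos names as well as the getFoos()/getName() accessors are assumptions for illustration, not taken from your code:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.HashSet;
import java.util.Set;
import javax.validation.Constraint;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;
import javax.validation.Payload;

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = UniqueFooNamesValidator.class)
public @interface UniqueFooNames {
    String message() default "Name already exists";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

class UniqueFooNamesValidator implements ConstraintValidator<UniqueFooNames, UserWithFoos> {

    @Override
    public void initialize(UniqueFooNames constraint) {
        // nothing to configure
    }

    @Override
    public boolean isValid(UserWithFoos user, ConstraintValidatorContext context) {
        if (user == null || user.getFoos() == null) {
            return true; // nothing to validate here
        }
        // the names must be pairwise distinct
        Set<String> seen = new HashSet<String>();
        for (Foo foo : user.getFoos()) {
            if (!seen.add(foo.getName())) {
                return false; // duplicate name found
            }
        }
        return true;
    }
}
Note that, like the @AssertTrue approach, this reports the violation at the class level rather than next to the name field.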
I need a way to select objects given the name (string) of the object and the ObjectContext, but I don't know how to do this.
I will use this to create a generic lookup drop-down editor template in ASP.NET MVC.
So when a view contains @Html.EditorFor(student => student.School), it will show a drop-down containing the list of schools.
I get the target entity name from relation.ToMember, but I don't know how to query data records with this input.
Currently I have added a custom method which takes a string and returns an IEnumerable, and inside it I have a big switch: case "School": return this.SchooleSet;
Is there a right way to do this?
I also want to add a generic method which allows me to query using syntax like ctx.Select<Teacher>().Where(...).
Again, I have implemented this with a switch, but there should be a better way to do it.
Try the CreateObjectSet method.
var q = ctx.CreateObjectSet<Teacher>().Where(...);