From the Guvnor documentation I know how to define a data enumeration and use it in Guvnor. Is it possible to fetch the data enumeration from my own Java code?
From Guvnor's documentation:
Loading enums programmatically: In some cases, people may want to load their enumeration data entirely from an external data source (such as a relational database). To do this, you can implement a class that returns a Map. The key of the map is a string (which is the Fact.field name as shown above), and the value is a java.util.List of Strings.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SampleDataSource2 {

    public Map<String, List<String>> loadData() {
        // Key is the "Fact.field" name; value is the list of enumeration values.
        Map<String, List<String>> data = new HashMap<String, List<String>>();
        List<String> d = new ArrayList<String>();
        d.add("value1");
        d.add("value2");
        data.put("Fact.field", d);
        return data;
    }
}
And in the enumeration in the BRMS, you put:
=(new SampleDataSource2()).loadData()
The "=" tells it to load the data by executing your code.
Best Regards,
I hope it's not too late to answer this.
To load an enum from your application into Guvnor:
Build the enum class dynamically from a string (in my case the enum values are provided by the user via a GUI).
Add it to a jar and convert the jar to a byte array.
POST it to Guvnor as an asset (model jar) via a REST call (a rough sketch of this step follows below).
Call the save repository operation (a change in the Guvnor source code).
Now the enum will be visible as a fact in your rule window.
Editing/deleting the model jar and validating the rules afterwards is something you have to take care of.
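For the POST step, a minimal sketch using plain HttpURLConnection. The endpoint path, package name, asset name and the missing authentication are assumptions for illustration; check your Guvnor version's REST documentation for the exact URL and headers.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class GuvnorAssetUploader {

    public static void uploadModelJar(byte[] jarBytes) throws Exception {
        // Hypothetical endpoint and package name.
        URL url = new URL("http://localhost:8080/guvnor/rest/packages/myPackage/assets");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        conn.setRequestProperty("Slug", "MyEnumModel.jar"); // asset name header
        OutputStream out = conn.getOutputStream();
        try {
            out.write(jarBytes);
        } finally {
            out.close();
        }
        if (conn.getResponseCode() >= 300) {
            throw new IllegalStateException("Upload failed: HTTP " + conn.getResponseCode());
        }
    }
}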
I'm a beginner in MVVM architecture and I'm stuck on an issue in a product app.
The issue: Mapping a Model to an Entity
Let me explain my code structure:
Domain > Repository > order_repo.dart
This order_repo.dart is an abstract class OrderRepo that declares a function getOrders that returns OrderEntity.
Data > Repo Implementation > order_repo_impl.dart
The order_repo_impl.dart contains a class OrderRepoImpl that defines the function getOrders, returning an OrderModel, which extends OrderEntity.
Domain > Usecase > order_usecase.dart
The order_usecase.dart contains a class OrderUsecase that uses an instance of OrderRepo to call getOrders. Both the getOrders and call functions return OrderEntity.
The problem is that when I call the usecase, I expect it to return an OrderEntity, but the runtimeType is OrderModel. I tried to cast it to OrderModel, but I could not, because I get the warning Unnecessary Cast, since at compile time the compiler is already expecting OrderEntity.
One solution I found is to define a Translator that would convert OrderModel to OrderEntity inside the usecase, but I'm confused about the right place for its definition. I cannot use OrderModel inside the Domain layer, because Clean Architecture requires the Domain to stay independent of other layers, and if I define the Translator inside the Data layer, I still cannot call it in the usecase, for the same reason.
It is bad practice:
"The order_repo_impl.dart contains a class OrderRepoImpl that defines the function getOrders that returns an OrderModel that extends OrderEntity."
A model cannot always extend an entity, e.g. a code-generated model.
The domain layer should not know about the implementation or about the data layer: the implementation can change often, but the business logic will not as long as the business requirements do not change.
"One solution I found is to define a Translator that would convert OrderModel to OrderEntity inside the usecase."
The Translator itself is good practice, but it is better to do the conversion inside the implementation of the repository, because the repository is a kind of binding between the domain and data layers.
Or you can even create a converter class that translates entity to model and vice versa, and pass an instance of it to the constructor of the repository implementation, as sketched below.
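As an illustration of those last two points, here is a minimal sketch (in Java rather than Dart, with all names hypothetical; the Dart shape is analogous). The converter lives in the data layer, and the repository implementation maps before returning, so the usecase only ever sees OrderEntity.

// Domain layer: the abstraction only speaks in entities.
interface OrderRepo {
    OrderEntity getOrder(String id);
}

class OrderEntity {
    final String id;
    OrderEntity(String id) { this.id = id; }
}

// Data layer: the model and the converter are implementation details.
class OrderModel {
    final String id;
    OrderModel(String id) { this.id = id; }
}

class OrderConverter {
    OrderEntity toEntity(OrderModel model) {
        return new OrderEntity(model.id);
    }
}

// The repository implementation converts before returning, so the
// domain layer never sees OrderModel.
class OrderRepoImpl implements OrderRepo {
    private final OrderConverter converter;

    OrderRepoImpl(OrderConverter converter) {
        this.converter = converter;
    }

    public OrderEntity getOrder(String id) {
        OrderModel model = fetchFromApi(id); // hypothetical data-source call
        return converter.toEntity(model);
    }

    private OrderModel fetchFromApi(String id) {
        return new OrderModel(id);
    }
}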
Why it is better to follow this advice:
Implementations change quite often, but abstractions rarely change.
It is easy to switch from one implementation to another.
You follow the SOLID principles.
P.S. I hope I could answer your question. If you have more questions, feel free to ask in the comments.
Is it possible to use a different type attribute (instead of _class) for each polymorphic collection, like it's implemented in the Doctrine (PHP) or Jackson libraries? The current solution allows storing type information in a document field. By default it is the full class name, stored in a field named _class.
We can easily change it to save some custom string (alias) instead of the full class name, and change the default discriminator field name from _class to something else.
In my situation I'm working on a legacy database while the legacy application is still in use. The legacy application uses the Doctrine (PHP) ODM as its data layer.
Doctrine allows defining the discriminator field name (_class in Spring Data) via annotation, and it can differ for each collection.
In Spring Data, when I pass a typeKey to DefaultMongoTypeMapper, it is used for all collections.
Thanks.
// MyCustomMongoTypeMapper.java
// ...
@SuppressWarnings("unchecked")
@Override
public <T> TypeInformation<? extends T> readType(DBObject source, TypeInformation<T> basicType) {
    Assert.notNull(basicType);
    Class<?> documentsTargetType = null;
    Class<? super T> parent = basicType.getType();
    // Walk up the class hierarchy until a class carrying a discriminator
    // annotation is found, then read the type using that key.
    while (parent != null && parent != java.lang.Object.class) {
        final String discriminatorKey = getDiscriminatorKey(parent); // fetch key from annotation
        if (null == discriminatorKey) {
            parent = parent.getSuperclass();
        } else {
            accessor.setKey(discriminatorKey);
            return super.readType(source, basicType);
        }
    }
    accessor.resetKey();
    return super.readType(source, basicType);
}
Something that should work for you is completely exchanging the MongoTypeMapper instance that MappingMongoConverter uses. As you discovered, the already available implementation assumes a common field name and takes yet another strategy to write either the fully-qualified class name, an alias, or the like.
However, you should be able to just write your own and particularly focus on the following methods (wiring it in is sketched after the list):
void writeType(TypeInformation<?> type, DBObject dbObject) — you basically get the type and have complete control over where and how to put that into the DBObject.
<T> TypeInformation<? extends T> readType(DBObject source, TypeInformation<T> defaultType); — you get the type declared on the reading side (i.e. usually the most common type of the hierarchy) and based on that have to lookup the type from the given source document. I guess that's exactly the inverse of what's to be implemented in the other method.
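Once you have your implementation (such as the MyCustomMongoTypeMapper above), wiring it in could look roughly like the following. Method names vary a bit between Spring Data MongoDB versions, and the database name and no-arg mapper constructor are assumptions, so treat this as a sketch.

import com.mongodb.Mongo;
import com.mongodb.MongoClient;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.config.AbstractMongoConfiguration;
import org.springframework.data.mongodb.core.convert.MappingMongoConverter;

@Configuration
public class MongoConfig extends AbstractMongoConfiguration {

    @Override
    protected String getDatabaseName() {
        return "mydb"; // hypothetical database name
    }

    @Override
    public Mongo mongo() throws Exception {
        return new MongoClient();
    }

    @Override
    public MappingMongoConverter mappingMongoConverter() throws Exception {
        // Hand the converter the custom type mapper so the per-collection
        // discriminator logic is used on both read and write.
        MappingMongoConverter converter = super.mappingMongoConverter();
        converter.setTypeMapper(new MyCustomMongoTypeMapper()); // assuming a no-arg constructor
        return converter;
    }
}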
On a final note, I would strongly recommend against using different type field names for different collections, as on the reading side you might run into places where just Object is declared on the property, and then you basically get no clue where to even look in the document.
The title may seem confusing, but it's not easy to describe the question in a few words. Let me explain the situation:
We have a web application project and a calculation engine project. The web application collects user input, uses the engine to generate results, and presents them to the user. The user input, the engine output, and other data are persisted to the DB using JPA.
The engine input and output consist of objects in a tree structure, for example:
class InputA {
    String attrA1;
    List<InputB> inputBs;
}

class InputB {
    String attrB1;
    List<InputC> inputCs;
}

class InputC {
    String attrC1;
}
The engine output is in a similar style.
The web application project handles the data persistence using JPA. We need to persist the engine input and output, as well as some other data related to the input and output. Such data can be seen as extra fields on certain classes. For example:
We want to persist extra fields, so it looks like:
class InputBx extends InputB {
    String attrBx1;
}

class InputCx extends InputC {
    String attrCx1;
}
In the Java OO world this works: we can store a list of InputBx in InputA, and a list of InputCx in InputBx, because of inheritance.
But we ran into trouble when using JPA to persist the extended objects.
First of all, it requires the engine project to turn its classes into JPA entities. The engine works fine by itself: it accepts correct input and generates correct output. It doesn't smell good to force its model to become JPA entities just because another project wants to persist that model.
Second, JPA doesn't accept the inherited objects when using InputA as the entry point. From JPA's point of view, it only knows that InputA contains a list of InputB; it is not possible to persist/retrieve a list of InputBx in an object of InputA.
While trying to solve this, we came up with two ideas, but neither satisfies us:
Idea 1:
Use composition instead of inheritance, so we still persist the original InputA and its tree structure, including InputB and InputC:
class InputBx {
    String attrBx1;
    InputB inputB;
}

class InputCx {
    String attrCx1;
    InputC inputC;
}
So the original input object tree can be retrieved smoothly, and the InputBx and InputCx objects need to be retrieved using the InputB and InputC objects in the tree as references.
The good thing is that no matter what changes are made to the structure of the original input class tree (such as renaming attributes, or adding/removing attributes in the classes), the extended classes InputBx and InputCx and their attributes stay synchronized automatically.
The drawback is that this structure increases the number of calls to the database, and the model is not easy to use in the application (both back end and front end). Whenever we want information related to an InputB or InputC, we have to write code to look up the corresponding InputBx or InputCx object.
Idea 2:
Manually create mirror classes that form a structure similar to the original input classes. So we created:
class InputAx {
    String attrA1;
    List<InputBx> inputBs;
}

class InputBx {
    String attrB1;
    List<InputCx> inputCs;
    String attrBx1;
}

class InputCx {
    String attrC1;
    String attrCx1;
}
We could use this as the model of the web application, and as the JPA entities as well. Here's what we get:
The engine project is now set free: it doesn't need to care how other projects persist these input/output objects. The engine project is independent.
JPA persistence works fluently; no extra calls to the database are required.
The back end and the front end UI use this model to get both the original input objects and the related information with no effort. When we want the engine to perform a calculation, we can use a mapping mechanism to transfer between the original objects and the extended objects, as sketched below.
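A minimal sketch of such a mapping mechanism, using the classes from this example; the copy logic is illustrative, and a real mapper would cover every attribute and both directions.

import java.util.ArrayList;

// Illustrative mapper between the mirror (persistable) tree and the
// engine's original input tree.
class InputMapper {

    InputA toEngineInput(InputAx ax) {
        InputA a = new InputA();
        a.attrA1 = ax.attrA1;
        a.inputBs = new ArrayList<InputB>();
        for (InputBx bx : ax.inputBs) {
            InputB b = new InputB();
            b.attrB1 = bx.attrB1;
            b.inputCs = new ArrayList<InputC>();
            for (InputCx cx : bx.inputCs) {
                InputC c = new InputC();
                c.attrC1 = cx.attrC1;
                b.inputCs.add(c);
            }
            a.inputBs.add(b);
        }
        return a;
    }
}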
The drawbacks are also obvious:
There is duplication in the class structure, which is not desirable from an OO point of view.
When considered as a DTO to reduce database calls, it can be called an anti-pattern, since it is a DTO used for local transfer.
The structure is not automatically synchronized with the original model, so if any changes are made to the original model, we have to update this model manually as well. If a developer forgets to do that, there will be some not-easy-to-find defects.
I'm looking for the following help:
Are there any existing good/best practices or patterns for situations like ours? Or any anti-patterns we should avoid? References to web articles are welcome.
If possible, could you comment on idea 1 and idea 2 from the perspective of OO design, persistence practices, and your own experience?
I will be grateful for your help.
I am very new to GWT.
I am using ext-gwt widgets.
I found many places in the code at my office containing something like:
class A extends BaseModel {
    private UserAccountDetailsDto userAccountDetailsDto = null;
    // setter & getter in the BaseModel way
}
Also, the DTO reference is unused.
public class UserAccountDetailsDto implements Serializable {
    private Long userId = null;
    private String userName = null;
    private String userAccount = null;
    private String userPermissions = null;
    // normal setters & getters
}
Now, I am able to get the result from the GWT server-side code and things work fine, but when I comment out the DTO reference inside class A, I don't get any result.
Please explain to me why that reference is needed.
Thanks
Well, the problem is in the implementation of the GXT BaseModel and GWT-RPC serialization.
BaseModel is built around a special GXT map, RpcMap. This map has special serialization rules defined, which help avoid RPC type explosion, but as a side effect, only some simple types stored in the map will be serialized. E.g. you can put any type inside the map, but if you serialize/deserialize it, only values of type Integer, String, Double, Byte, Float and Short (and arrays of these types) will be present. So the point of putting a reference to the DTO inside the BaseModel is to tell GWT-RPC that this type also has to be serialized.
Detailed explanation
Basically, GWT-RPC works like this:
When you define an interface for a service, GWT-RPC analyzes all the classes used in parameters/return types in order to create serializers/deserializers. If you return something like Map<Object,Object> from your service, GWT-RPC will have to create a serializer for each class which implements the Map and Serializable interfaces, and it will also generate serializers for every class which implements Serializable. In the end it is quite a bad situation, because the size of your compiled JS file will be much bigger. This situation is called GWT-RPC type explosion.
So, in the BaseModel, all values are stored in an RpcMap. And RpcMap has a custom-written serializer (RpcMap_CustomFieldSerializer; you can look at its code if you are interested in how to create such things), so it doesn't cause the problem described above. But since it has a custom serializer, GWT doesn't know which custom classes have been put inside the RpcMap, and it doesn't generate serializers for them. So when you declare a field of your DTO type in your BaseModel class, GWT knows that it might need to serialize this class, and it will generate all the required stuff for it.
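In other words, the seemingly unused field in the question is a serialization hint. A sketch of the pattern; the property name "details" is made up, and BaseModel's get/set store values in the underlying RpcMap:

// The DTO field is never read directly; it only forces the GWT-RPC
// compiler to generate a serializer for UserAccountDetailsDto, so that
// instances stored in the RpcMap survive the RPC round trip.
public class A extends BaseModel {

    @SuppressWarnings("unused")
    private UserAccountDetailsDto userAccountDetailsDto = null;

    public UserAccountDetailsDto getDetails() {
        return get("details"); // stored in the underlying RpcMap
    }

    public void setDetails(UserAccountDetailsDto details) {
        set("details", details);
    }
}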
Porting GXT 2 application code using BaseModel to GXT 3 models is an uphill task. It would be a more or less complete rewrite on the model side, with ModelProviders from GXT 3 providing some flexibility. Any code that relies on the Model's events, store, record, etc. is in for a rewrite.
In my ASP.NET MVC 2 application I follow the strongly typed view pattern with specific viewmodels.
In my application, viewmodels are responsible for converting between models and viewmodels. My viewmodels have a static ToViewModel(...) function which creates a new viewmodel for the corresponding model. So far I'm fine with that.
When I want to edit a model, I send the created viewmodel over the wire and apply the changes back to the model. For this purpose I use a static ToModel(...) method (also declared in the viewmodel). Here are the stubs for clarification:
public class UserViewModel
{
    ...
    public static void ToViewModel(User user, UserViewModel userViewModel)
    {
        ...
    }
    public static void ToModel(User user, UserViewModel userViewModel)
    {
        ???
    }
}
So, now my "problem":
Some models are complex (more than just strings, ints, ...), so persistence logic has to be put somewhere. (By persistence logic I mean decisions such as whether to create a new DB entry or not, ... not just rough CRUD; I use repositories for that.)
I don't think it's a good idea to put it in my repositories, as repositories (in my understanding) should not be concerned with something that comes from the view. I thought about putting it in the ToModel(...) method, but I'm not sure if that's the right approach.
Can you give me a hint?
Best regards,
warappa
Warappa, we use both the repository pattern and viewmodels as well.
However, we have two additional layers:
service
task
The service layer deals with things like persisting relational data (complex object models), etc. The task layer deals with fancy LINQ correlations of the data and any extra manipulation that's required in order to present the correct data to the viewmodel.
Outside the scope of this, we also have a 'filters' class per entity. This allows us to target extension methods per class where required.
simples... :)
In our MVC projects we have a separate location for converters.
We have two types of converter, an IConverter and an ITwoWayConverter (there's a bit more to it than that, but I'm keeping it simple).
The ITwoWayConverter contains two primary methods, ConvertTo and ConvertFrom, which contain the logic for converting a model to a viewmodel and vice versa.
This way you can create specific converters for switching between types, such as:
public class ProductToProductViewModelConverter : ITwoWayConverter<Product,ProductViewModel>
We then inject the relevant converters into our controllers as needed.
This means that your conversion from one type to another is not limited to a single converter (stored inside the model or wherever).
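For illustration, the shape of that converter pair could look like the following, written in Java syntax for consistency with the rest of this page (the original is C#, and all member names here are illustrative rather than the answerer's actual interfaces).

// Two-way converter abstraction: one implementation per model/viewmodel pair.
interface TwoWayConverter<TModel, TViewModel> {
    TViewModel convertTo(TModel model);
    TModel convertFrom(TViewModel viewModel);
}

class Product {
    String name;
}

class ProductViewModel {
    String displayName;
}

class ProductToProductViewModelConverter
        implements TwoWayConverter<Product, ProductViewModel> {

    public ProductViewModel convertTo(Product model) {
        ProductViewModel vm = new ProductViewModel();
        vm.displayName = model.name;
        return vm;
    }

    public Product convertFrom(ProductViewModel viewModel) {
        Product product = new Product();
        product.name = viewModel.displayName;
        return product;
    }
}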