I came across a scenario while using RxJava and I am not quite sure whether I should use an Observable<T> or a final ImmutableList<T>.
Basically, if I import a final and immutable dataset once and never again, should I really expose that as a cold Observable<T>?
public final class StrategyManager {
private static final StrategyManager instance = new StrategyManager();
private final ImmutableList<Strategy> strategies;
private StrategyManager() {
strategies = //import from db
}
public Observable<Strategy> getStrategies() {
return Observable.from(strategies);
}
public static StrategyManager get() {
return instance;
}
}
Or should I just expose it as an ImmutableList<T>?
public final class StrategyManager {
private static final StrategyManager instance = new StrategyManager();
private final ImmutableList<Strategy> strategies;
private StrategyManager() {
strategies = //import from db
}
public ImmutableList<Strategy> getStrategies() {
return strategies;
}
public static StrategyManager get() {
return instance;
}
}
If I expose it as an ImmutableList<T>, the clients have one less monad to deal with for something that will always be constant.
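To make the list option concrete: exposing a plain immutable list doesn't lock reactive clients out, because they can lift it into a stream themselves. A minimal sketch, with invented strategy names, using java.util.stream as a stand-in for Observable.from (RxJava is not assumed on the classpath):

```java
import java.util.List;

public class StrategyClientSketch {
    // Hypothetical constant data set, exposed as a plain immutable list.
    public static final List<String> STRATEGIES = List.of("meanReversion", "momentum");

    public static void main(String[] args) {
        // A client that wants a pipeline can lift the list itself
        // (with RxJava this would be Observable.from(STRATEGIES)).
        long count = STRATEGIES.stream()
                .filter(s -> s.startsWith("m"))
                .count();
        if (count != 2) throw new AssertionError();
    }
}
```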
However, maybe I lose flexibility and should use an Observable<T>. For instance, I can decide to use RxJava-JDBC to query the data directly on each call without any caching. Or I can cache() or even replay() so the data can expire and free up memory.
public final class StrategyManager {
private static final StrategyManager instance = new StrategyManager();
private final Observable<Strategy> strategies;
private final Database db = Database.from(url); // rxjava-jdbc; url supplied elsewhere
private StrategyManager() {
strategies = db.select("SELECT * FROM STRATEGY")
.autoMap(Strategy.class)
.replay(1, TimeUnit.MINUTES)
.autoConnect();
}
public Observable<Strategy> getStrategies() {
return strategies;
}
public static StrategyManager get() {
return instance;
}
}
So my question is, are there situations to not use an Observable? Or in a reactive application, should I always use an Observable even for constant data sets that will not change? Am I right that I should use the latter for flexibility and easily changing behaviors?
I like the question. I suppose there are lots of factors involved in the decision to use a reactive API and no clear yes-or-no answer, just a judgement call on what the future might hold.
should I always use an Observable even for constant data sets that will not change?
If you want maximal flexibility, don't mind burdening the client with using RxJava, don't mind debugging difficulties (you've seen long RxJava stacktraces) then use an Observable. Note that even for a "constant data set that will not change" your memory constraints might change and a large data set might not be suitable for holding in memory any more.
Another thing to keep in mind is that Observables have some processing overhead (volatile reads on every emission for potentially every operator in the chain), so for performance reasons it's sometimes good not to use them.
Your use cases, your data and your benchmarks will really determine which way you go.
Would be interesting to hear from API designers within Netflix (and anywhere else) about their experience.
The versioning API is powerful, but with the pattern of using it everywhere the code quickly gets messy and hard to read and maintain.
Over time, a product needs to move fast to introduce new business requirements. Is there any advice for using this API wisely?
I would suggest using a Global Version Provider design pattern in Cadence/Temporal workflow if possible.
Key Idea
The versioning API is very powerful in that it lets you change the behavior of existing workflow executions in a deterministic (backward-compatible) way. In the real world, you may only care about adding new behavior, and be okay with introducing it only to newly started workflow executions. In this case, you use a global version provider to unify the versioning for the whole workflow.
The key idea is that we are versioning the whole workflow (that's why it's called GlobalVersionProvider). Every time we add a new version, we update the version provider and provide a new version.
Example In Java
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableMap;
import io.temporal.workflow.Workflow;
import java.util.HashMap;
import java.util.Map;
public class GlobalVersionProvider {
private static final String WORKFLOW_VERSION_CHANGE_ID = "global";
private static final int STARTING_VERSION_USING_GLOBAL_VERSION = 1;
private static final int STARTING_VERSION_DOING_X = 2;
private static final int STARTING_VERSION_DOING_Y = 3;
private static final int MAX_STARTING_VERSION_OF_ALL =
STARTING_VERSION_DOING_Y;
// Workflow.getVersion can release a thread and subsequently cause a non-deterministic error.
// We're introducing this map in order to cache our versions on the first call, which should
// always occur at the beginning of a workflow
private static final Map<String, GlobalVersionProvider> RUN_ID_TO_INSTANCE_MAP =
new HashMap<>();
private final int versionOnInstantiation;
private GlobalVersionProvider() {
versionOnInstantiation =
Workflow.getVersion(
WORKFLOW_VERSION_CHANGE_ID,
Workflow.DEFAULT_VERSION,
MAX_STARTING_VERSION_OF_ALL);
}
private int getVersion() {
return versionOnInstantiation;
}
public boolean isAfterVersionOfUsingGlobalVersion() {
return getVersion() >= STARTING_VERSION_USING_GLOBAL_VERSION;
}
public boolean isAfterVersionOfDoingX() {
return getVersion() >= STARTING_VERSION_DOING_X;
}
public boolean isAfterVersionOfDoingY() {
return getVersion() >= STARTING_VERSION_DOING_Y;
}
public static GlobalVersionProvider get() {
String runId = Workflow.getInfo().getRunId();
GlobalVersionProvider instance;
if (RUN_ID_TO_INSTANCE_MAP.containsKey(runId)) {
instance = RUN_ID_TO_INSTANCE_MAP.get(runId);
} else {
instance = new GlobalVersionProvider();
RUN_ID_TO_INSTANCE_MAP.put(runId, instance);
}
return instance;
}
// NOTE: this should be called at the beginning of the workflow method
public static void upsertGlobalVersionSearchAttribute() {
int workflowVersion = get().getVersion();
Workflow.upsertSearchAttributes(
ImmutableMap.of(
WorkflowSearchAttribute.TEMPORAL_WORKFLOW_GLOBAL_VERSION.getValue(),
workflowVersion));
}
// Call this API in each replay test to clear the cache
@VisibleForTesting
public static void clearInstances() {
RUN_ID_TO_INSTANCE_MAP.clear();
}
}
Note that because of a bug in Temporal/Cadence Java SDK, Workflow.getVersion can release a thread and subsequently cause a non-deterministic error.
We're introducing this map in order to cache our versions on the first call, which should
always occur at the beginning of the workflow execution.
Call the clearInstances API in each replay test to clear the cache.
Therefore, in the workflow code:
public class HelloWorldImpl{
private GlobalVersionProvider globalVersionProvider;
@VisibleForTesting
public HelloWorldImpl(final GlobalVersionProvider versionProvider){
this.globalVersionProvider = versionProvider;
}
public HelloWorldImpl(){
this.globalVersionProvider = GlobalVersionProvider.get();
}
@Override
public void start(final Request request) {
if (globalVersionProvider.isAfterVersionOfUsingGlobalVersion()) {
GlobalVersionProvider.upsertGlobalVersionSearchAttribute();
}
...
...
if (globalVersionProvider.isAfterVersionOfDoingX()) {
// doing X here
...
}
...
if (globalVersionProvider.isAfterVersionOfDoingY()) {
// doing Y here
...
}
...
}
}
Best practice with the pattern
How to add a new version
For every new version
Add the new constant STARTING_VERSION_XXXX
Add a new API `public boolean isAfterVersionOfXXX()`
Update MAX_STARTING_VERSION_OF_ALL
Apply the new API into workflow code where you want to add the new logic
Maintain the replay test JSON in a pattern of `HelloWorldWorkflowReplaytest-version-x-description.json`. Make sure to always add a new replay test for every new version you introduce to the workflow. When generating the JSON from a workflow execution, make sure it exercises the new code path; otherwise it won't be able to protect determinism. If it requires more than one workflow execution to exercise all branches, then make multiple JSON files for replay.
How to remove an old version:
To remove an old code path (version), add a new version that stops executing the old code path, then later use a search attribute query like
GlobalVersion>=STARTING_VERSION_DOING_X AND GlobalVersion<STARTING_VERSION_NOT_DOING_X to find out whether any existing workflow executions are still running with certain versions.
Instead of waiting for workflows to close, you can terminate or reset workflows
Example of deprecating a code path DoingX:
Therefore, in the workflow code:
public class HelloWorldImpl implements Helloworld{
...
@Override
public void start(final Request request) {
...
...
if (globalVersionProvider.isAfterVersionOfDoingX() && !globalVersionProvider.isAfterVersionOfNotDoingX()) {
// doing X here
...
}
}
}
Example In Golang (TODO)
Benefits
Prevents the spaghetti code that results from using the native Temporal versioning API everywhere in the workflow code
Provides a search attribute to find workflows of a particular version. This fills the gap of the Temporal Java SDK missing the TemporalChangeVersion feature. Even though the Cadence Java/Golang SDK has CadenceChangeVersion, this global version search attribute is much better for querying, because it's an integer instead of a keyword.
Provides a pattern to maintain replay tests easily
Provides a way to test different versions without this missing feature
Cons
There shouldn't be any cons. Using this pattern doesn't stop you from using the raw versioning API directly in the workflow; you can combine this pattern with others.
This was asked during an interview.
There are different manufacturers of buses. Each bus has got different models and each model has only 2 variants. So different manufacturers have different models with only 2 variants. The interviewer asked me to design a standalone program with just classes. She mentioned that I should not think about databases and I didn't have to code them. For example, it could be a console based program with inputs and outputs.
The manufacturers, models and variants information should be held in memory (hard-coded values were fine for this standalone program). She wanted to observe the classes and my problem solving approach.
She told me to focus on implementing three APIs or methods for this system.
The first one was to get information about a particular bus. Input would be manufacturer name, model name and variant name. Given these three values, the information about a particular bus such as its price, model, year, etc should be shown to the client.
The second API would be to compare two buses and the output would be to list the features side by side, probably in a tabular format. Input would be the same as the one for the first API i.e. manufacturer name, model name and variant name for both the buses.
The third one would be to search the buses by price (>= price) and get the list of buses which satisfy the condition.
She also added that the APIs should be scalable and I should design the solution with this condition on my mind.
This is how I designed the classes:
class Manufacturer {
private String name;
private Set<Model> models;
// some more properties related to manufacturer
}
class Model {
private String name;
private Integer year;
private Set<Variant> variants;
// some more properties related to model
}
class Variant {
private String name;
private BigDecimal price;
// some more properties related to variant
}
class Bus {
private String manufacturerName;
private String modelName;
private String variantName;
private Integer year;
private BigDecimal price;
// some more additional properties as required by client
}
class BusService {
// The first method
public Bus getBusInformation(String manufacturerName, String modelName, String variantName) throws Exception {
Manufacturer manufacturer = findManufacturer(manufacturerName);
//if(manufacturer == null) throw a valid exception
Model model = findModel(manufacturer);
// if(model == null) throw a valid exception
Variant variant = findVariant(model);
// if(variant == null) throw a valid exception
return createBusInformation(manufacturer, model, variant);
}
}
She stressed that there were only 2 variants and there wouldn't be any more variants and it should be scalable. After going through the classes, she said she understood my approach and I didn't have to implement the other APIs/methods. I realized that she wasn't impressed with the way I designed them.
It would be helpful to understand the mistake I made so that I could learn from it.
I interpreted your 3 requirements a bit differently (and I may be wrong). But it sounds like the overall desire is to be able to perform different searches against all Models, correct?
Also, it sounds to me as if all Variants are Models. I suspect different variants would have different options, but there's nothing to confirm that. If so, a variant is just a subclass of a particular model. However, if variants end up having the same set of properties, then a variant isn't anything more than an additional descriptor on the model.
Anyway, going on my suspicions, I'd have made Model the center focus, and gone with:
(base class)
abstract class Model {
private Manufacturer manufacturer;
private String name;
private String variant;
private Integer year;
private BigDecimal price;
// some more properties related to model
}
(manufacturer variants)
abstract class AlphaModel : Model {
AlphaModel() {
this.manufacturer = new Manufacturer { name = "Alpha" };
}
// some more properties related to this manufacturer
}
abstract class BetaModel : Model {
BetaModel() {
this.manufacturer = new Manufacturer { name = "Beta" };
}
// some more properties related to this manufacturer
}
(model variants)
abstract class AlphaBus : AlphaModel {
AlphaBus() {
this.name = "Super Bus";
}
// some more properties related to this model
}
abstract class BetaTruck : BetaModel {
BetaTruck() {
this.name = "Big Truck";
}
// some more properties related to this model
}
(actual instances)
class AlphaBusX : AlphaBus {
AlphaBusX() {
this.variant = "X Edition";
}
// some more properties exclusive to this variant
}
class AlphaBusY : AlphaBus {
AlphaBusY() {
this.variant = "Y Edition";
}
// some more properties exclusive to this variant
}
class BetaTruckF1 : BetaTruck {
BetaTruckF1() {
this.variant = "Model F1";
}
// some more properties exclusive to this variant
}
class BetaTruckF2 : BetaTruck {
BetaTruckF2() {
this.variant = "Model F2";
}
// some more properties exclusive to this variant
}
Then finally:
var data = new Set<Model> {
new AlphaBusX(),
new AlphaBusY(),
new BetaTruckF1(),
new BetaTruckF2()
}
API #1:
var result = data.First(x => x.manufacturer.name == <manufacturer>
&& x.name == <model>
&& x.variant == <variant>);
API #2:
var result1 = API#1(<manufacturer1>, <model1>, <variant1>);
var result2 = API#1(<manufacturer2>, <model2>, <variant2>);
API #3:
var result = data.Where(x => x.price >= <price>);
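Since the rest of this thread uses Java, here is a hedged sketch of the same three APIs with Java streams over a hypothetical flattened Model record; the data, names and prices are all invented:

```java
import java.math.BigDecimal;
import java.util.List;
import java.util.Optional;

// Hypothetical flattened Model record standing in for the hierarchy above.
record Model(String manufacturer, String name, String variant, BigDecimal price) {}

public class BusSearchSketch {
    public static final List<Model> DATA = List.of(
            new Model("Alpha", "Super Bus", "X Edition", new BigDecimal("100000")),
            new Model("Beta", "Big Truck", "Model F1", new BigDecimal("150000")));

    // API #1: exact lookup by manufacturer, model and variant.
    public static Optional<Model> find(String mfr, String model, String variant) {
        return DATA.stream()
                .filter(m -> m.manufacturer().equals(mfr)
                        && m.name().equals(model)
                        && m.variant().equals(variant))
                .findFirst();
    }

    // API #2 would simply be two calls to find(...) rendered side by side.

    // API #3: all buses at or above a given price.
    public static List<Model> atLeast(BigDecimal price) {
        return DATA.stream()
                .filter(m -> m.price().compareTo(price) >= 0)
                .toList();
    }

    public static void main(String[] args) {
        if (find("Alpha", "Super Bus", "X Edition").isEmpty()) throw new AssertionError();
        if (atLeast(new BigDecimal("120000")).size() != 1) throw new AssertionError();
    }
}
```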
I would say your representation of the Bus class is severely limited: Variant, Model and Manufacturer should be hard links to those classes, not strings, with a getter for the name of each.
E.g. from the perspective of Bus, bus1: this.variant.GetName(), or from the outside world: bus1.GetVariant().name.
By limiting your bus to strings naming its held pieces, you're forced to do a lookup even inside the bus class, which performs much more slowly at scale than a simple memory reference.
In terms of your API (while I don't have an example), your one way to get bus info is limited. If the makeup of the bus changes (variant changes, new component classes are introduced), it requires a decent rewrite of that function, and if other functions are written similarly then all of those too.
It would require some thought, but a generic approach that can dynamically grab the info based on the input makes it easier to add/remove component pieces later on. This is the area your interviewer was probably focusing on most in terms of advanced technical and language skills. Implementing generics, delegates, etc. here in the right way can make future upkeep of your API a lot easier. (Sorry I don't have an example.)
I wouldn't say your approach here is necessarily bad, though; the string member variables are probably the only major issue.
Does anyone have any insights into which of the following two pseudo-patterns is the correct way of instantiating/utilizing reliable collections within a stateful Service Fabric service? Specifically, I'm wondering if one approach is more performant, more memory-consuming or more error-prone.
Approach 1 (get instance from StateManager inside method):
public class MyService : IService {
public async Task<string> GetSomeValueAsync(string input){
var reliableDic = await StateManager.GetOrAddAsync<IReliableDictionary<string, string>>(StateKey);
using (var tx = StateManager.CreateTransaction()){
var result = await reliableDic.TryGetValueAsync(tx, input);
return result.HasValue ? result.Value : null;
}
}
}
}
Approach 2 (store collection as member variable on class)
public class MyService : IService {
private bool _isInitialized;
private readonly SemaphoreSlim _lock = new SemaphoreSlim(1, 1);
private IReliableDictionary<string, string> _dictionary;
// C# does not allow await inside a lock statement, so a SemaphoreSlim is used instead.
private async Task Initialize(){
if (_isInitialized){
return;
}
await _lock.WaitAsync();
try{
if (!_isInitialized){
_dictionary = await StateManager.GetOrAddAsync<IReliableDictionary<string, string>>(StateKey);
_isInitialized = true;
}
}
finally{
_lock.Release();
}
}
public async Task<string> GetSomeValueAsync(string input){
await Initialize();
using (var tx = StateManager.CreateTransaction()){
var result = await _dictionary.TryGetValueAsync(tx, input);
return result.HasValue ? result.Value : null;
}
}
}
So, approach 1 fetches the dictionary from the StateManager in each method while approach 2 does a lazy initialization check and then uses class members.
Most samples we see use approach 1, but the idea behind approach 2 is to store the reliable dictionary in an instance field and avoid the StateManager.GetOrAddAsync hit in each method, as well as to centralize the handling of the StateKey, which could be beneficial in larger services with many methods and potentially more reliable collections.
Would love to learn if there are any pitfalls or inefficiencies in either approach (obviously approach 2 is more verbose and uses more lines of code, but that is not the primary concern).
Actually there is no real reason to cache the result of StateManager.GetOrAddAsync except to save the memory allocation of the Task object, or to make the collection available in places where having the StateManager isn't appropriate.
The reason for this is quite simple: the StateManager already caches the instance of IReliableState for you, so it returns the same instance each time you call StateManager.GetOrAddAsync (here is the official answer from Microsoft).
You can also check it yourself with a very simple test (c is true):
var q1 = stateManager.GetOrAddAsync<IReliableDictionary<string, string>>("MyDict")
.GetAwaiter().GetResult();
var q2 = stateManager.GetOrAddAsync<IReliableDictionary<string, string>>("MyDict")
.GetAwaiter().GetResult();
var c = q1 == q2;
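The same-instance behavior described above can be mimicked with a plain map in any language. This Java sketch (names invented) shows why caching the result yourself adds nothing: the lookup already hands back one shared instance per key.

```java
import java.util.HashMap;
import java.util.Map;

// Stdlib analog of the StateManager behavior: GetOrAddAsync acts like
// computeIfAbsent, returning the same instance for the same key every time.
public class GetOrAddSketch {
    static final Map<String, Map<String, String>> STATE = new HashMap<>();

    public static Map<String, String> getOrAdd(String key) {
        return STATE.computeIfAbsent(key, k -> new HashMap<>());
    }

    public static void main(String[] args) {
        Map<String, String> q1 = getOrAdd("MyDict");
        Map<String, String> q2 = getOrAdd("MyDict");
        // Same instance comes back, so a private cache field is redundant.
        if (q1 != q2) throw new AssertionError();
    }
}
```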
Does anyone know the best way to refactor a God-object?
It's not as simple as breaking it into a number of smaller classes, because there is high method coupling. If I pull out one method, I usually end up pulling every other method out.
It's like Jenga. You will need patience and a steady hand, otherwise you have to recreate everything from scratch. Which is not bad, per se - sometimes one needs to throw away code.
Other advice:
Think before pulling out methods: on what data does this method operate? What responsibility does it have?
Try to maintain the interface of the god class at first and delegate calls to the newly extracted classes. In the end the god class should be a pure facade without logic of its own. Then you can keep it for convenience, or throw it away and start using the new classes only
Unit tests help: write tests for each method before extracting it, to ensure you don't break functionality
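The "pure facade" step above can be sketched as follows; the domain (pricing, invoices) and all names are invented for illustration:

```java
// Classes extracted from the hypothetical god class.
class PricingRules {
    double discountFor(int quantity) { return quantity >= 10 ? 0.1 : 0.0; }
}

class InvoicePrinter {
    String print(double total) { return "Total: " + total; }
}

// The former god class: original signatures preserved, bodies now pure delegation.
public class OrderGodClassFacade {
    private final PricingRules pricing = new PricingRules();
    private final InvoicePrinter printer = new InvoicePrinter();

    public double discountFor(int quantity) { return pricing.discountFor(quantity); }
    public String printInvoice(double total) { return printer.print(total); }

    public static void main(String[] args) {
        OrderGodClassFacade facade = new OrderGodClassFacade();
        // Callers are unchanged; the logic now lives in the extracted classes.
        if (facade.discountFor(12) != 0.1) throw new AssertionError();
    }
}
```

Once all callers migrate to PricingRules and InvoicePrinter directly, the facade can be deleted.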
I assume "God Object" means a huge class (measured in lines of code).
The basic idea is to extract parts of its functions into other classes.
In order to find those you can look for
fields/parameters that often get used together. They might move together into a new class
methods (or parts of methods) that use only a small subset of the fields in the class; they might move into a class containing just those fields.
primitive types (int, String, boolean). They are often really value objects waiting to emerge. Once they are value objects, they often attract methods.
look at the usage of the god object. Are different methods used by different clients? Those might go into separate interfaces. Those interfaces might in turn have separate implementations.
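The "primitive types become value objects" bullet can be sketched like this; Money and its methods are invented examples of behavior migrating out of a god class:

```java
import java.math.BigDecimal;

public class MoneySketch {
    // A bare BigDecimal amount becomes a Money type, which then attracts
    // the arithmetic that previously lived in the god class.
    public record Money(BigDecimal amount) {
        public Money add(Money other) { return new Money(amount.add(other.amount)); }
        public boolean isPositive() { return amount.signum() > 0; }
    }

    public static void main(String[] args) {
        Money a = new Money(new BigDecimal("10.00"));
        Money b = new Money(new BigDecimal("2.50"));
        if (!a.add(b).isPositive()) throw new AssertionError();
        if (!a.add(b).amount().equals(new BigDecimal("12.50"))) throw new AssertionError();
    }
}
```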
For actually doing these changes you should have some infrastructure and tools at your command:
Tests: Have a (possibly generated) exhaustive set of tests ready that you can run often. Be extremely careful with changes you do without tests. I do those, but limit them to things like extract method, which I can do completely with a single IDE action.
Version Control: You want to have a version control that allows you to commit every 2 minutes, without really slowing you down. SVN doesn't really work. Git does.
Mikado Method: the idea of the Mikado Method is to try a change. If it works, great. If not, take note of what is breaking and add those items as dependencies of the change you started with. Roll back your changes. In the resulting graph, repeat the process with a node that has no dependencies yet. http://mikadomethod.wordpress.com/book/
According to the book "Object-Oriented Metrics in Practice" by Lanza and Marinescu, the God Class design flaw refers to classes that tend to centralize the intelligence of the system. A God Class performs too much work on its own, delegating only minor details to a set of trivial classes and using the data of other classes.
The detection of a God Class is based on three main characteristics:
They heavily access data of other simpler classes, either directly or using accessor methods.
They are large and complex
They have a lot of non-communicative behavior, i.e., there is low cohesion between the methods belonging to that class.
Refactoring a God Class is a complex task, as this disharmony is often a cumulative effect of other disharmonies that occur at the method level. Therefore, performing such a refactoring requires additional and more fine-grained information about the methods of the class, and sometimes even about its inheritance context. A first approach is to identify clusters of methods and attributes that are tied together and to extract these islands into separate classes.
Split Up God Class method from the book "Object-Oriented Reengineering Patterns" proposes to incrementally redistribute the responsibilities of the God Class either to its collaborating classes or to new classes that are pulled out of the God Class.
The book "Working Effectively with Legacy Code" presents some techniques such as Sprout Method, Sprout Class, Wrap Method to be able to test the legacy systems that can be used to support the refactoring of God Classes.
What I would do, is to sub-group methods in the God Class which utilize the same class properties as inputs or outputs. After that, I would split the class into sub-classes, where each sub-class will hold the methods in a sub-group, and the properties which these methods utilize.
That way, each new class will be smaller and more coherent (meaning that all their methods will work on similar class properties). Moreover, there will be less dependency for each new class we generated. After that, we can further reduce those dependencies since we can now understand the code better.
In general, I would say that there are a couple of different methods according to the situation at hand. As an example, let's say that you have a god class named "LoginManager" that validates user information, updates "OnlineUserService" so the user is added to the online user list, and returns login-specific data (such as Welcome screen and one time offers)to the client.
So your class will look something like this:
import java.util.ArrayList;
import java.util.List;
public class LoginManager {
public void handleLogin(String hashedUserId, String hashedUserPassword){
String userId = decryptHashedString(hashedUserId);
String userPassword = decryptHashedString(hashedUserPassword);
if(!validateUser(userId, userPassword)){ return; }
updateOnlineUserService(userId);
sendCustomizedLoginMessage(userId);
sendOneTimeOffer(userId);
}
public String decryptHashedString(String hashedString){
String userId = "";
//TODO Decrypt hashed string for 150 lines of code...
return userId;
}
public boolean validateUser(String userId, String userPassword){
//validate for 100 lines of code...
List<String> userIdList = getUserIdList();
if(!isUserIdValid(userId,userIdList)){return false;}
if(!isPasswordCorrect(userId,userPassword)){return false;}
return true;
}
private List<String> getUserIdList() {
List<String> userIdList = new ArrayList<>();
//TODO: Add implementation details
return userIdList;
}
private boolean isPasswordCorrect(String userId, String userPassword) {
boolean isValidated = false;
//TODO: Add implementation details
return isValidated;
}
private boolean isUserIdValid(String userId, List<String> userIdList) {
boolean isValidated = false;
//TODO: Add implementation details
return isValidated;
}
public void updateOnlineUserService(String userId){
//TODO updateOnlineUserService for 100 lines of code...
}
public void sendCustomizedLoginMessage(String userId){
//TODO sendCustomizedLoginMessage for 50 lines of code...
}
public void sendOneTimeOffer(String userId){
//TODO sendOneTimeOffer for 100 lines of code...
}}
Now we can see that this class will be huge and complex. It is not a God Class by the book definition yet, since it mostly works on its own data rather than heavily accessing the data of other classes. But for the sake of argument, we can treat it as a God Class and start refactoring.
One solution is to create separate small classes that are used as members in the main class. Another is to separate different behaviors into different interfaces and their respective classes, hide the implementation details in those classes by making their helper methods private, and use the interfaces in the main class to do its bidding.
So in the end, RefactoredLoginManager will look like this:
public class RefactoredLoginManager {
IDecryptHandler decryptHandler;
IValidateHandler validateHandler;
IOnlineUserServiceNotifier onlineUserServiceNotifier;
IClientDataSender clientDataSender;
public void handleLogin(String hashedUserId, String hashedUserPassword){
String userId = decryptHandler.decryptHashedString(hashedUserId);
String userPassword = decryptHandler.decryptHashedString(hashedUserPassword);
if(!validateHandler.validateUser(userId, userPassword)){ return; }
onlineUserServiceNotifier.updateOnlineUserService(userId);
clientDataSender.sendCustomizedLoginMessage(userId);
clientDataSender.sendOneTimeOffer(userId);
}
}
DecryptHandler:
public class DecryptHandler implements IDecryptHandler {
public String decryptHashedString(String hashedString){
String userId = "";
//TODO Decrypt hashed string for 150 lines of code...
return userId;
}
}
public interface IDecryptHandler {
String decryptHashedString(String hashedString);
}
ValidateHandler:
public class ValidateHandler implements IValidateHandler {
public boolean validateUser(String userId, String userPassword){
//validate for 100 lines of code...
List<String> userIdList = getUserIdList();
if(!isUserIdValid(userId,userIdList)){return false;}
if(!isPasswordCorrect(userId,userPassword)){return false;}
return true;
}
private List<String> getUserIdList() {
List<String> userIdList = new ArrayList<>();
//TODO: Add implementation details
return userIdList;
}
private boolean isPasswordCorrect(String userId, String userPassword)
{
boolean isValidated = false;
//TODO: Add implementation details
return isValidated;
}
private boolean isUserIdValid(String userId, List<String> userIdList)
{
boolean isValidated = false;
//TODO: Add implementation details
return isValidated;
}
}
An important thing to note here is that the interfaces only have to include the methods used by other classes. So IValidateHandler looks as simple as this:
public interface IValidateHandler {
boolean validateUser(String userId, String userPassword);
}
OnlineUserServiceNotifier:
public class OnlineUserServiceNotifier implements
IOnlineUserServiceNotifier {
public void updateOnlineUserService(String userId){
//TODO updateOnlineUserService for 100 lines of code...
}
}
public interface IOnlineUserServiceNotifier {
void updateOnlineUserService(String userId);
}
ClientDataSender:
public class ClientDataSender implements IClientDataSender {
public void sendCustomizedLoginMessage(String userId){
//TODO sendCustomizedLoginMessage for 50 lines of code...
}
public void sendOneTimeOffer(String userId){
//TODO sendOneTimeOffer for 100 lines of code...
}
}
Since both methods are accessed in RefactoredLoginManager, the interface has to include both methods:
public interface IClientDataSender {
void sendCustomizedLoginMessage(String userId);
void sendOneTimeOffer(String userId);
}
There are really two topics here:
Given a God class, how can its members be rationally partitioned into subsets? The fundamental idea is to group elements by conceptual coherence (often indicated by frequent co-usage in client modules) and by forced dependencies. Obviously the details of this are specific to the system being refactored. The outcome is a desired partition (a set of groups) of the God class's elements.
Given a desired partition, actually making the change. This is difficult if the code base has any scale. Doing it manually, you are almost forced to retain the God class while you modify its accessors to call the new classes formed from the partitions instead. And of course you need to test, test, test, because it is easy to make a mistake when making these changes by hand. When all accesses to the God class are gone, you can finally remove it. This sounds great in theory, but in practice it takes a long time if you are facing thousands of compilation units, and you have to get the team members to stop adding accesses to the God interface while you do it. One can, however, apply automated refactoring tools to implement this: with such a tool you specify the partition and it modifies the code base in a reliable way. Our DMS can implement this (Refactoring C++ God Classes) and has been used to make such changes across systems of 3,000 compilation units.
I have a singleton IObservable that returns the results of a Linq query. I have another class that listens to the IObservable to structure a message. That class is Exported through MEF, and I can import it and get asynchronous results from the Linq query.
My problem is that after initial composition takes place, I don't get any renotification of changes when the data supplied to the Linq query changes. I implemented INotifyPropertyChanged on the singleton, thinking it would make the exported class requery for a new IObservable, but this doesn't happen.
Maybe I'm not understanding something about the lifetime of MEF containers, or about property notification. I'd appreciate any help.
Below are the singleton and the exported class. I've left out some pieces of code that can be inferred, like the PropertyChanged event handlers and such. Suffice to say, that does work when the underlying Session data changes. The singleton raises a change event for UsersInCurrentSystem, but there is never any request for a new IObservable from the UsersInCurrentSystem property.
public class SingletonObserver: INotifyPropertyChanged
{
private static readonly SingletonObserver _instance = new SingletonObserver();
static SingletonObserver() { }
private SingletonObserver()
{
Session.ObserveProperty(xx => xx.CurrentSystem, true)
.Subscribe(x =>
{
this.RaisePropertyChanged(() => this.UsersInCurrentSystem);
});
}
public static SingletonObserver Instance { get { return _instance; } }
public IObservable<User> UsersInCurrentSystem
{
get
{
var x = from user in Session.CurrentSystem.Users
select user;
return x.ToObservable();
}
}
}
[Export]
public class UserStatus : INotifyPropertyChanged
{
private string _data = string.Empty;
public UserStatus()
{
SingletonObserver.Instance.UsersInCurrentSystem.Subscribe(sender =>
{
//set _data according to information in sender
//raise PropertyChanged for Data
});
}
public string Data
{
get { return _data; }
}
}
My problem is that after initial composition takes place, I don't get any renotification on changes when the data supplied to the Linq query changes.
By default MEF will only compose parts once. When a part has been composed, the same instance will be supplied to all imports. The part will not be recreated unless you explicitly do so.
In your case, if the data of a part change, even if it implements INotifyPropertyChanged, MEF will not create a new one, and you don't need to anyway.
I implemented INotifyPropertyChanged on the singleton, thinking it word make the exported class requery for a new IObservable
No.
Maybe I'm not understanding something about the lifetime of MEF containers, or about property notification.
Property notification allows you to react to a change in the property and has no direct effect on MEF. As for the container's lifetime, it will remain active until it is disposed. While it is still active, the container will keep references to its composed parts. It's actually a little more complex than that, as parts can have different CreationPolicy values that affect how MEF holds the part; I refer you to the following page: Parts Lifetime for more information.
MEF does allow for something called Recomposition. You can set it like so:
[Import(AllowRecomposition=true)]
What this does, though, is allow MEF to recompose parts when new parts become available or existing parts are no longer available. From what I understand, that isn't what you are referring to in your question.