Pattern to bridge the gap between Scala's functional immutable style and JavaFX 2 Properties?

I'm currently working on a GUI application using JavaFX 2 as the framework. I have already used it in Java and know the principles of data binding.
Since functional-style programming in Scala advocates the use of immutable values (vals), there is a gap.
Is there any solution other than having a mutable FX-property-based presentation model for the GUI and an immutable model for the application logic, with a conversion layer between them?
Greets,
Andreas

Since your question is a bit vague, please forgive me if this is largely based on personal opinion: there are, to my knowledge, no other approaches to the mutable property model. I would, however, argue that you don't want one.
First off, functional programming, at least from a purist's point of view, attempts to avoid side effects. User interfaces, however, are exclusively about causing side effects, so there is a slight philosophical mismatch to begin with.
One of the main benefits of immutable data is that you don't have to deal with synchronization constructs to avoid concurrent modification. However, JavaFX's event queue implements a very strict single-threaded approach with implicit control of read and write access. On the other hand, user interface components fit the idea of mutable objects better than most other fields of programming: the node structure is, after all, an inherent hierarchy of stateful components.
Considering this, I think trying to force a functional and immutable paradigm onto JavaFX is not going to work out. Instead, I would recommend building a translation layer based on key-path selections - e.g. binding a Label that displays an (immutable) Person's name to the Person itself, not to a name property, and having a resolver handle the access to the name attribute. Basically, this would mean a combination of Bindings#select and a JavaBeanStringProperty.
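A minimal sketch of such a translation layer, assuming Bindings.select can resolve plain getters below an ObservableValue root (JavaFX 8+ behaviour); the Person class and the attribute name are illustrative, not from the original question:

import javafx.beans.binding.Bindings
import javafx.beans.property.SimpleObjectProperty
import javafx.scene.control.Label
import scala.beans.BeanProperty

// @BeanProperty generates getName(), which Bindings.select resolves reflectively
case class Person(@BeanProperty name: String)

val personProperty = new SimpleObjectProperty[Person](Person("Andreas"))
val label = new Label()

// Bind to the key path "name" on whatever Person the root property holds;
// the label updates whenever a new immutable Person is set on the root.
label.textProperty.bind(Bindings.selectString(personProperty, "name"))

// "Mutating" the model means swapping in a new immutable value:
personProperty.set(personProperty.get.copy(name = "Andrea"))

The single mutable cell is the root property; everything below it stays immutable.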

Related

Scala, GUI and immutability

I created an algorithm that calculates certain things. This can be considered the model. The algorithm is implemented in a fully functional way, so it uses immutable classes only.
Now, using this model, I would like to develop a GUI layer on top of it. However, I do not know anything about the best practices for building a GUI in Scala. I intend to use ScalaFX.
My problem is the following: in ScalaFX (similarly to JavaFX) you can bind values from the GUI to object properties. This clearly violates the functional paradigm, but seems very convenient.
This would require rewriting my classes to use bindable properties, which would feel like the tail wagging the dog: the model would depend on the GUI.
On the other hand, I could have an independent GUI layer. In this case I would need proxy objects to bind to and I would have to create my model objects based on these proxy objects. This would feel more idiomatic but implies a lot of code duplication and extra work. My model and the proxy objects would have to be kept in sync and I would have to take care of copying the attributes.
What is a good way of doing this? A GUI is always full of mutability so functional programming does not feel right here. Nevertheless I love Scala so I would like to keep using it for the GUI, too.
Despite the extra effort, take the second approach. Create small mutable "view" instances for each of your model classes. Bind the views to the widgets and install observers or hooks that update the view proxies based on changes in your model. Don't let the GUI API dictate what your concurrency approach and model should look like.
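A hypothetical sketch of such a view proxy, using plain JavaFX properties (which ScalaFX wraps one-to-one); the Person model and the field names are invented for illustration:

import javafx.beans.property.SimpleStringProperty
import javafx.scene.control.Label

case class Person(name: String, age: Int) // the immutable model

class PersonView {
  val name = new SimpleStringProperty()
  val age  = new SimpleStringProperty()

  // Called by an observer/hook whenever the model produces a new value.
  def update(p: Person): Unit = {
    name.set(p.name)
    age.set(p.age.toString)
  }
}

val view  = new PersonView
val label = new Label()
label.textProperty.bind(view.name) // widgets bind to the proxy, never to the model

view.update(Person("Ada", 36)) // push a new immutable snapshot into the view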
I believe there are a few open-source libraries around that provide a more functional and/or reactive abstraction layer over plain Scala-Swing or ScalaFX.

Functional Programming + Domain-Driven Design

Functional programming promotes immutable classes and referential transparency.
Domain-driven design is composed of Value Objects (immutable) and Entities (mutable).
Should we create immutable Entities instead of mutable ones?
Let's assume the project uses Scala as its main language: how could we write Entities as case classes (thus immutable) without risking stale state when dealing with concurrency?
What is a good practice? Keeping Entities mutable (var fields, etc.) and giving up the great syntax of case classes?
You can effectively use immutable Entities in Scala and avoid the horror of mutable fields and all the bugs that derive from mutable state. Using immutable Entities helps you with concurrency; it doesn't make things worse. Your previous mutable state becomes a series of transformations, each of which creates a new reference.
At a certain level of your application, however, you will need to have mutable state, or your application would be useless. The idea is to push it as far up as you can in your program logic. Let's take the example of a bank account, which can change because of interest rates and ATM withdrawals or deposits.
You have two valid approaches:
You expose methods that can modify an internal property and you manage concurrency on those methods (very few, in fact)
You make the class fully immutable and surround it with a "manager" that can change the account.
Since the first is pretty straightforward, I will detail the second.
case class BankAccount(balance: Double, code: Int)

class BankAccountRef(private var bankAccount: BankAccount) {
  def withdraw(withdrawal: Double): Double = {
    bankAccount = bankAccount.copy(balance = bankAccount.balance - withdrawal)
    bankAccount.balance
  }
}
This is nice, but gosh, you are still stuck with managing concurrency. Well, Scala offers you a solution for that. The problem here is that if you share your BankAccountRef with a background job, you will have to synchronize the calls: you are doing concurrency in a suboptimal way.
The optimal way of doing concurrency: message passing
What if, on the other hand, the different jobs could not invoke methods directly on the BankAccount or a BankAccountRef, but could only notify them that some operations need to be performed? Well, then you have an Actor, the favourite way of doing concurrency in Scala.
import akka.actor.Actor

// The messages the actor understands
case object BalanceRequest
case class Balance(amount: Double)
case class Withdraw(amount: Double)
case class Deposit(amount: Double)

class BankAccountActor(private var bankAccount: BankAccount) extends Actor {
  def receive = {
    case BalanceRequest =>
      sender ! Balance(bankAccount.balance)
    case Withdraw(amount) =>
      bankAccount = bankAccount.copy(balance = bankAccount.balance - amount)
    case Deposit(amount) =>
      bankAccount = bankAccount.copy(balance = bankAccount.balance + amount)
  }
}
This solution is extensively described in the Akka documentation: http://doc.akka.io/docs/akka/2.1.0/scala/actors.html . The idea is that you communicate with an Actor by sending messages to its mailbox, and those messages are processed in the order they are received. As such, you will never have concurrency flaws when using this model.
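For completeness, a minimal usage sketch (Akka 2.x classic API; the system name and initial values are assumed):

import akka.actor.{ActorSystem, Props}

val system  = ActorSystem("bank")
val account = system.actorOf(Props(new BankAccountActor(BankAccount(100.0, 42))))

account ! Withdraw(30.0)  // queued in the mailbox, processed one at a time
account ! Deposit(10.0)
account ! BalanceRequest  // the Balance reply goes to the sender
                          // (use the ask pattern from outside an actor)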
This is sort of an opinion question that is less Scala-specific than you think.
If you really want to embrace FP, I would go the immutable route for all your domain objects and never put any behavior in them.
That is, some people call the above the service pattern, where there is always a separation between behavior and state. This is eschewed in OOP but natural in FP.
It also depends on what your domain is. OOP is sometimes easier with stateful things like UIs and video games. For hardcore backend services like web sites or REST APIs, I think the service pattern is better.
Two really nice things that I like about immutable objects, besides the often-mentioned concurrency benefits, are that they are much more reliable to cache and that they are great for distributed message passing (e.g. protobuf over AMQP), as the intent is very clear.
Also, in FP, people bridge the mutable-to-immutable gap by creating a "language" or "dialogue", aka a DSL (Builders, Monads, Pipes, Arrows, STM, etc.), that lets you mutate and then transform back into the immutable domain. The services mentioned above use such a DSL to make changes. This is more natural than you might think (e.g. SQL is an example "dialogue"). OOP, on the other hand, prefers having a mutable domain and leveraging the existing procedural part of the language.
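As a hypothetical illustration of that bridge, here is a tiny builder "dialogue" in Scala that mutates internally and then hands back an immutable domain value (the Person type is invented):

case class Person(name: String, age: Int)

class PersonBuilder {
  private var name = ""
  private var age  = 0
  def named(n: String): this.type = { name = n; this }
  def aged(a: Int): this.type     = { age = a; this }
  def build: Person               = Person(name, age) // back into the immutable domain
}

val person = new PersonBuilder().named("Ada").aged(36).build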

Deep class inheritance hierarchy -- bad idea?

Hoping a grandmaster can shed some light. The very high-level overview is that I am no beginner to coding, but still new to OOP. This set of message classes is at the heart of a large simulation application we're writing, and I don't want to do it stupidly - this interface cuts the application in half, from sequencer to executor and vice versa.
My question is whether or not it's a bad idea to have an inheritance hierarchy this deep (image is not yet fleshed out, might go 5 or 6 deep in the end). This is as opposed to having some of the child classes just have a directed association to their parent class, instead of inheriting.
I've read that a deep inheritance hierarchy is not a good idea, and that if a child class inherits simply to have the parent's data, you should instead include the parent as data in the child, but I'm having a hard time wrapping my head around why. What bad thing is going to happen to us if I decided to make an inheritance hierarchy 7-deep or something like that? Clearly there's a small performance hit, and changing things at the top of the hierarchy is going to have huge ripples throughout the app, but other than that I don't see an issue. As an aside, I care little about minor differences in performance.
(bonus question: Is there an off-the-shelf package that handles this kind of stuff? We have most of the low level physical simulations handled, but the sequencing program we're going to have to write. I just have this suspicion that what I've laid out is very similar to what about 10,000 simulation developers before me did.)
(bonus question #2: any masters of both simulation systems and OOP programming, that would not hate living in Los Angeles? We're hiring.)
that if a child class is inheriting simply to have the parent's data
This is a bad idea. There's this understanding that you define base classes as the most generic of contracts that a set of (concrete) classes are going to honor. This typically means that your contract is about behavior and not implementation.
What bad thing is going to happen to us if I decided to make an inheritance hierarchy 7-deep or something like that?
The major issues here are mundane:
Fragile base classes (changes to base are a nightmare for the derived)
Increased coupling (with too many base classes comes tight coupling)
Encapsulation weakens
Testing issues (leaf-level overridden methods can't always be tested in a way that correctly reproduces end-user behavior, because of the chained calls scattered up the hierarchy)
Maintenance (comes from strong coupling)
(You may want to peruse this paper on Why Ada isn't popular, particularly Item 6, para 6.)
Is there an off-the-shelf package that handles this kind of stuff?
I'm not sure what you are looking for, but if it's an automated hierarchy simplifier, then I don't know of any. Also, if such a package exists, it will be highly dependent on your language of choice, and you haven't mentioned one.
Note that most of the time such issues can be resolved by looking at alternatives like aggregation, traits, or dependency injection (see the sketch below). These are design-time issues and are typically (IMO) best ironed out on a whiteboard rather than with a compiler and millions of LOC.
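A hypothetical sketch of the aggregation alternative, in Scala for concreteness (all type names invented): instead of a deep "is-a" chain, the leaf message type simply holds the pieces it needs.

case class Header(id: Long, timestamp: Long)
case class Routing(source: String, destination: String)

// Instead of: class SimMessage extends RoutedMessage extends TimedMessage ...
// the leaf aggregates ("has-a") the data it would otherwise have inherited.
case class SimMessage(header: Header, routing: Routing, payload: Vector[Byte])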
Seeing this question quite late, but I have had many thoughts on this and have been bitten by deep inheritance hierarchies. One reason they are bad is that you will inevitably get the classification wrong as you specialize the many subclasses. However, once you have the class structure in place, it will be hard to change, because doing so would break client code.
I blogged about this here.
Old question, but this is still an active issue in software development, and I wanted to add an opinion that may help.
Maintenance overhead becomes hard to estimate when you touch base classes that are wired in via DI. This is a major drawback that recently affected our three-level-deep inheritance structure.
Also, if your base is a template, expect to violate SOL(I)(D) if you have, for example, too many children deriving from just one parent across three levels.
Generally, just to access data, I'd choose an adequate design pattern or pass a pointer to the data if that doesn't violate SOLID. Depending on whether you just read or write, I'd also avoid getters and setters, to avoid quasi-classes. It's a rare case in which child members should default to "protected", and I think a structure relying on that is a candidate for being flawed by design.

NSDictionaries vs. custom objects with properties, what's your take?

I'm writing an app that basically uses 5 business entities: A, B, C, D and E.
A has some properties and holds a list of B's
B has some other properties and a list of C's and a list of D's
C has some other properties and a list of D's and a list of E's
D has only a few properties
E has only a few properties
There is no inheritance between any of them.
There's no real business logic involved: the objects are created, populated, and then accessed read-only; there is no further manipulation.
My natural coding style would be to go object oriented and write classes for each of those entities, use NSArrays for the lists, and have the mentioned properties synthesized.
It would make the code readable.
But another approach seems obvious too: only use NSDictionaries and NSArrays, and work with keys/values instead of properties. This seems more efficient, and somehow "closer" to iPhone-style programming to me... but it obviously leads to less readable code. Another advantage is that no additional custom encoding/decoding is required for serialization (persisting state to disk, using JSON, ...).
So on paper it speaks for the latter approach; on the other hand, it still feels somehow awkward NOT to use custom objects...
Is this really just a matter of taste? Or are there other arguments in favour of or against one of the approaches? Is using only dictionaries better memory- and performance-wise? Is it the preferred "Apple coding style"? (I'm coming from Java/C#.)
I don't see much difference between Java/C# and Cocoa in this area. Your question is equivalently applicable to those platforms as well (the same also applies to key-value stores and relational stores).
In an object-oriented environment, you have to make a trade-off between the flexibility of the key-value approach for storing data and the structured, object-oriented style. I'd go with the key-value approach only when I need the flexibility (e.g. the structure is dynamic and might change per user, or is not known at compile time). Otherwise, taking that route might pull you completely away from OOP conventions and benefits. (By the way, this is the important point: is the hassle of sticking to object-oriented principles worth it in that specific circumstance? I think your question reduces to this one, and to answer it you should analyze your specific situation.)
It largely depends on whether your objects are just collections of data (key/value pairs) or implement their own functionality.
If they're data I'd say go with NSDictionary, it's a lot less code and as you point out you won't have to write serialization routines for each class.
Use a hybrid approach. Store the dictionaries the objects are based on, but expose the most-used values as properties that are either filled when the object is initialized from a dictionary, or whose accessors look into the dictionary for values (less efficient).
Also provide a property to get at the dictionary. This way, if you need to quickly propagate a new value from the dictionary (presumably a new value added by the server) to a specific area of the code, you have that flexibility. Then, if callers are making heavy use of a value, you can migrate it to a true property and get the completion and type checking of a property.
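A hypothetical Scala transliteration of that hybrid idea (the thread is about NSDictionary, but the shape carries over): keep the raw key-value store around and promote the hot keys to typed fields.

class Person(val raw: Map[String, Any]) {
  // Most-used values promoted to real, typed fields at init time.
  val name: String = raw.get("name").fold("")(_.toString)
  val age: Int     = raw.get("age").collect { case n: Int => n }.getOrElse(0)

  // Rarely used values stay behind a generic accessor into the store.
  def value(key: String): Option[Any] = raw.get(key)
}

val p = new Person(Map("name" -> "Ada", "age" -> 36, "nickname" -> "A."))
p.name              // typed fast path
p.value("nickname") // flexible path, no compile-time checking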

What are the important rules in Object Model Design

We are developing an extension (in a C# .NET environment) for a GIS application, which will have predefined types for modeling real-world objects, starting from GenericObject and going to more specific types like Pipe and Road, with their detailed properties and methods like BottomOfPipe, Diameter and so on.
Surely there will be an object model, interfaces, inheritance and lots of other essential parts in the type library, and we have fixed some of them by now. But as you may know, designing an object model is a very ambiguous task and (as far as I know) can be done in many different ways, with many different results and weaknesses.
Are there any distinct rules for designing an object model: the hierarchy, the way of defining interfaces, abstract classes and coclasses, enums?
Any suggestion, reference or practice?
A couple of good ones:
SOLID
Single responsibility principle
Open/closed principle
Liskov substitution principle
Interface segregation principle
Dependency inversion principle
More information and more principles here:
http://mmiika.wordpress.com/oo-design-principles/
Check out Domain-Driven Design: Tackling Complexity in the Heart of Software. I think it will answer your questions.
What they said; plus, it looks like you are modeling real-world entities, so:
restrict your object model to exactly match the real-world entities.
You can use inheritance and components to reduce the code/model, but only in ways that make sense with the underlying domain.
For example, a Pipe class with a Diameter property would make sense, while a DiameterizedObject class (with a Diameter property) with a GeometryType property of GeometryType.Pipe would not. Both models could be made to work, but the former clearly corresponds to the problem domain, while the latter implements an artificial (non-real-world) perspective.
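To make the contrast concrete, a hypothetical sketch (in Scala, for consistency with the rest of this page; the names come from the example above):

// The model that mirrors the domain:
case class Pipe(diameter: Double)

// The artificial generalization to avoid:
sealed trait GeometryType
object GeometryType { case object Pipe extends GeometryType }
case class DiameterizedObject(diameter: Double, geometryType: GeometryType)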
One additional clue: you know you've got the model right when you find yourself discovering new features in the code that you didn't plan from the start - they just 'naturally' fall out of the model. For example, your model may have Pipe and Junction classes (as connectivity adapters) sufficient to solve the immediate problem of (say) joining different-diameter pipes to each other and calculating flow rates, maximum pressures, and structural integrity. You later realize that since you modeled the structural and connectivity properties of the Pipes and Junctions accurately (within the requirements of the domain) you can also create a JungleGym object from connected pipes and correctly calculate how much structural load it will bear.
This is an extreme example, but it should get the point across: correct object models support extension and often manifest beneficial unexpected properties and features (not bugs!).
The Liskov Substitution Principle, often expressed in terms of "is-a".
Many examples of OOP would be better off making use of "has-a" (in C++, private inheritance or explicit composition) rather than public inheritance ("is-a").
Getting inheritance right is hard. Doing so with interfaces (pure virtual classes) is often easier than with base/sub classes.
Check out the "principles" of object-oriented design. These offer guidelines for all the questions you ask.
References:
"Object oriented software construction" by Robert Martin
http://www.objectmentor.com/resources/publishedArticles.html
Check out the "Design Principles" articles at the above site. They are the best references available.
"BottomOfPipe"? Is that another way of saying the depth of the Pipe below the Road?
Any kind of design is difficult and can be done different ways. There are no guarantees that your design will work when you create it.
The advantage that people who design ball bearings and such have is many more years of experience and data to determine what works and what does not. Software doesn't have as much time or hard data.
Here's some advice:
Inheritance means IS-A. If that doesn't hold, don't use inheritance.
A deep hierarchy is probably a sign of trouble.
From Scott Meyers: make non-leaf classes interfaces or abstract (see the sketch after this list).
Prefer composition to inheritance.
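A small Scala sketch of the Meyers guideline from the list above (names invented): non-leaf types stay abstract; only the leaves are concrete.

sealed trait Shape { def area: Double } // non-leaf: abstract
final case class Circle(radius: Double) extends Shape {
  def area: Double = math.Pi * radius * radius
}
final case class Square(side: Double) extends Shape {
  def area: Double = side * side
}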