When to use mutable vs immutable classes in Scala

Much is written about the advantages of immutable state, but are there common cases in Scala where it makes sense to prefer mutable classes? (This is a Scala newbie question from someone with a background in "classic" OOP design using mutable classes.)
For something trivial like a 3-dimensional Point class, I get the advantages of immutability. But what about something like a Motor class, which exposes a variety of control variables and/or sensor readings? Would a seasoned Scala developer typically write such a class to be immutable? In that case, would 'speed' be represented internally as a 'val' instead of a 'var', and the 'setSpeed' method return a new instance of the class? Similarly, would every new reading from a sensor describing the motor's internal state cause a new instance of Motor to be instantiated?
The "old way" of doing OOP in Java or C# using classes to encapsulate mutable state seems to fit the Motor example very well. So I'm curious to know if once you gain experience using the immutable-state paradigm, you would even design a class like Motor to be immutable.

I'll use a different, classic, OO modeling example: bank accounts.
These are used in practically every OO course on the planet, and the design you usually end up with is something like this:
class Account(var balance: BigDecimal) {
  def transfer(amount: BigDecimal, to: Account): Unit = {
    balance -= amount
    to.balance += amount
  }
}
IOW: the balance is data, and the transfer is an operation. (Note also that the transfer is a complex operation involving multiple mutable objects, yet it needs to be atomic, so you need locking etc.)
However, that is wrong. That's not how banking systems are actually designed. In fact, that's not how actual real-world (physical) banking works, either. Actual physical banking and actual banking systems work like this:
class Account(implicit transactionLog: TransactionLog) {
  def balance = transactionLog.reduceLeft(_ + _)
}

class TransactionSlip(from: Account, to: Account, amount: BigDecimal)
IOW: the balance is an operation and the transfer is data. Note that everything here is immutable. The balance is just a left fold of the transaction log.
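To make that concrete, here is a minimal runnable sketch of the same idea (my own elaboration, not code from the original answer); representing the log as a plain Seq[TransactionSlip] and the sign convention are assumptions:

final case class TransactionSlip(from: Account, to: Account, amount: BigDecimal)

final class Account {
  // The balance is derived: a fold over the immutable transaction log.
  def balance(implicit log: Seq[TransactionSlip]): BigDecimal =
    log.foldLeft(BigDecimal(0)) { (sum, slip) =>
      if (slip.to eq this) sum + slip.amount
      else if (slip.from eq this) sum - slip.amount
      else sum
    }
}

// val a = new Account; val b = new Account
// implicit val log: Seq[TransactionSlip] = Seq(TransactionSlip(a, b, BigDecimal(10)))
// b.balance  // 10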
Note also that we didn't even end up with a purely functional, immutable design as an explicit design goal. We just wanted to model the banking system correctly and ended up with a purely functional, immutable design by coincidence. (Well, it's actually not by coincidence. There's a reason why real-world banking works that way, and it has the same benefits as it has in programming: mutable state and side-effects make systems complex and confusing … and in banking that means money disappearing.)
The point here is that the exact same problem can be modeled in very different ways, and depending on the model, you might end up with something that is trivial to make purely immutable or very hard.

I think the short answer is most likely: Yes, immutable data structures are far more usable and efficient than you realize.
The question you've posed is a bit ambiguous because the answer depends less on the motor you've described than on the software system that you haven't described. The great mistake of how OOP is always taught, in my opinion, is recommending bottom-up design of "domain" classes prior to considering how the classes will be used. Maybe your system even needs more than one data structure holding the same information about a motor in different ways.
The "old way" of doing OOP in Java or C# using classes to encapsulate mutable state seems to fit the motor example very well.
The "new way" (arguably), in support of multithreaded systems, is to encapsulate mutable state within actors. An actor that represents the current state of a motor would be mutable. But if you were to take a "snapshot" of the motor's state and pass that information to another actor, the message needs to be immutable.
In that [immutable] case, would 'speed' be represented internally as a 'val' instead of a 'var', and the 'setSpeed' method return a new instance of the class?
Yes, but you don't actually have to write that method if you use a case class. Suppose you have a class defined as case class Motor(speed: Speed, rpm: Int, mass: Mass, color: Color). Using the copy method, you could write something like motor2 = motor1.copy(rpm = 3500, speed = 88.mph).
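For instance, a minimal self-contained sketch along those lines (field types simplified to built-ins, since Speed, Mass and Color aren't defined here, and the numbers are arbitrary):

final case class Motor(speedMph: Double, rpm: Int, massKg: Double, color: String)

val motor1 = Motor(speedMph = 60.0, rpm = 3000, massKg = 12.5, color = "black")

// copy returns a new immutable instance; motor1 is left untouched.
val motor2 = motor1.copy(rpm = 3500, speedMph = 88.0)

// motor1 == Motor(60.0, 3000, 12.5, "black")
// motor2 == Motor(88.0, 3500, 12.5, "black")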

Related

Refactoring an OOP "decorator" to Free monad structure(s)

I have a bit of “legacy” Scala code (Java-like), which does a bit of data access. There’s a decorator which tracks usage of the DAO methods (collecting metrics), like this:
class TrackingDao(tracker: Tracker) extends Dao {
  def fetchById(id: UUID, source: String): Option[String] = {
    tracker.track("fetchById", source) {
      actualFetchLogic(...)
    }
  }
  ...
}
I'm trying to model this as a Free monad. I've defined the following algebra for the DAO operations:
sealed trait DBOp[A]
case class FetchById(id: UUID) extends DBOp[Option[String]]
...
I see two options:
a) I can either make two interpreters that take DBOp, one performs the actual data access, the other does the tracking, and compose them together OR
b) I make Tracking an explicit algebra, and use a Coproduct to use them both in the same for-comprehension OR
c) Something completely different!
The first option looks more like a "decorator" approach, which is tied to DBOp; the second is a more generic solution, but would require calling the 'tracking' algebra explicitly.
In addition, notice the source parameter on the original fetchById call: it's only used for tracking. I would much rather remove it from the API.
Here's the actual question: how do I model the tracking?
It's not totally clear from your question, but if tracking is a sort of ambient effect that should "happen" when you perform db access and source is just an argument for tracking purposes, you may not have to mention it in your Free language at all. You can use the ADT you have now and interpret into (Tracker, Source, OtherStuff) => IO[A] for instance, so what you get back is a function that will produce a program to do DB access once you give it a Tracker and source and whatever else you need (DB connection for instance), and the tracking implementation is entirely private to the interpreter. This lets you write your database program without thinking about tracking at all.
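A minimal sketch of that shape, assuming cats-free and cats-effect (Tracker and actualFetchLogic are the placeholders from the question, and TrackingEnv is my own name for the "Tracker, Source, OtherStuff" bundle):

import java.util.UUID
import cats.~>
import cats.free.Free
import cats.effect.IO

type DB[A] = Free[DBOp, A]
def fetchById(id: UUID): DB[Option[String]] = Free.liftF(FetchById(id))

final case class TrackingEnv(tracker: Tracker, source: String)

// The business program never mentions tracking; the interpreter wraps every op.
def interpreter(env: TrackingEnv): DBOp ~> IO = new (DBOp ~> IO) {
  def apply[A](op: DBOp[A]): IO[A] = op match {
    case FetchById(id) =>
      IO(env.tracker.track("fetchById", env.source) { actualFetchLogic(id) })
  }
}

// fetchById(id).foldMap(interpreter(TrackingEnv(tracker, "api")))  // IO[Option[String]]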
If on the other hand you do need to talk about tracking in your business logic then we probably need more information about what it would mean to have multiple Trackers and sources and how they're introduced and used. A coproduct or extended language or nested language might be necessary to deal with what you need to express.
As with everything in our industry, the straight answer is "it depends" :). Since "tracking" is a vague concept here (I don't know the details of the domain), I would say that you have two possible scenarios (or at least I see two):
a) "tracking" is an element of your business vocabulary
If tracking is a separate concern that is part of the vocabulary used by your business, then I would go with a separate algebra representing that concern. Something similar would be "authentication & authorization": even though it is a "low-level" concern, it is still part of the business language ("As admin I want to...").
b) "tracking" is a mechanism for debugging or logging
If tracking is not part of the language, but an element of the machinery that you keep for maintenance, then I would keep it where it belongs: the machinery. I would go with an interpreter that side-effects with tracking (logging, debugging) on those different calls.
In other words, if right now you don't have a single test that says "if I do this business thingy, then this should be tracked", then I would most definitely go with option b) here.

How to wrap procedural algorithms in OOP language

I have to implement an algorithm which fits perfectly into the procedural design approach. It isn't tied to any particular data structure; it just takes a couple of objects and a bunch of control parameters and performs complicated operations on them, including creating and modifying intermediate temporary data, calling subroutines, and doing many CPU-intensive data transformations. The algorithm is too specific to include in either parameter object as a method.
What is the idiomatic way to wrap such algorithms in an OOP language? Define a static object with a static method that performs the calculation? Define a class that takes all the algorithm parameters as constructor arguments and has a result method to return the result? Any other way?
If you need more specifics, I'm writing in scala. But any general OOP approach is also applicable.
A static method (or a method on a singleton object in the case of Scala -- which I'm just gonna call a static method because that's the most common terminology) can work perfectly fine and is probably the most common approach to this.
There are some reasons to use other approaches, but they aren't strictly necessary, and I'd avoid them unless you actually need an advantage that they give. This is because static methods are the simplest (if least versatile) approach.
Using a non-static method can be useful because you can then utilize design patterns like the factory pattern. For example, you might have an Operator class with a method evaluate. Now you could have different factories create different Operators so that you can swap your algorithm on the fly. Perhaps a calculator might have an AddOperatorFactory, MultiplyOperatorFactory and so on. Obviously this requires that you are able to instantiate an object that represents the algorithm. Of course, you could just pass a function around directly, as Scala and many other languages allow. Classes allow for inheritance, though, which opens the doors for some design patterns and, well, you're asking about OOP, not Scala specifically.
Also useful is the ability to have state with an object. With static methods, your only options for retaining state are either having global state (ew) or making the user of the static methods keep track of this state (more work for the users). With an instance of an object, you can keep that state inside the instance. For example, if your algorithm is a graph search, perhaps you'd want to allow resuming a search after you find the first match (which obviously requires storing state).
It's not much harder to have to do new MyAlgorithm().doStuff() instead of MyAlgorithm.doStuff(), so if in doubt, I would err on the side of avoiding static methods if you think you'll need the functionality that having an instance offers.
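A small sketch of both shapes (the names MyAlgorithm and GraphSearch are made up for illustration, not prescribed by the answer):

// Stateless: a method on a singleton object, Scala's analogue of a static method.
object MyAlgorithm {
  def doStuff(input: Vector[Int], threshold: Int): Vector[Int] =
    input.filter(_ > threshold).map(_ * 2)
}

// Stateful: an instance keeps the search frontier, so the caller can resume it.
final class GraphSearch(graph: Map[Int, List[Int]]) {
  private var frontier: List[Int] = Nil
  private var visited: Set[Int] = Set.empty

  def start(from: Int): Unit = {
    frontier = List(from)
    visited = Set.empty
  }

  // Returns the next matching node, or None when the search is exhausted.
  def nextMatch(p: Int => Boolean): Option[Int] = {
    var found: Option[Int] = None
    while (found.isEmpty && frontier.nonEmpty) {
      val node = frontier.head
      frontier = frontier.tail
      if (!visited(node)) {
        visited += node
        frontier = graph.getOrElse(node, Nil) ::: frontier
        if (p(node)) found = Some(node)
      }
    }
    found
  }
}

// MyAlgorithm.doStuff(Vector(1, 5, 9), threshold = 4)  // Vector(10, 18)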

Swift: why aren't all variables lazy by default?

In comparing these two options for defining an instance property:
var networkManager = NetworkManager.sharedInstance()
lazy var networkManager = NetworkManager.sharedInstance()
Both:
Can evaluate a block to get the value
Can be declared inline (not a block, like above)
Lazy:
Can refer to self
Is not calculated until needed
If you don't use it, it is never calculated
Non-lazy:
No benefits whatsoever
It appears that there is no benefit to ever use a non-lazy variable. So why does the language allow the programmer to make this inferior choice?
(I am NOT asking about the difference between var and let à la Are Swift constants lazy by default?)
One reason might be that laziness is not well suited to situations where you want to control when the evaluation happens. This is relevant in cases where the work being done in the assignment has side effects.
Although this pertains to Clojure, this blog post by Stuart Sierra explains the idea very well, and I think it applies equally in any language.
As others already said, there are several critical scenarios where you want the initialization of the properties to be deterministic.
This is an example (among many others) related to game development.
Often the instances of classes representing items in a game scene/level are created before the level begins.
Initialisation can be a time-expensive task (loading stuff from persistent storage, allocating memory, preparing the instances...), and doing this part before the player begins playing the level avoids CPU overhead.
This is critical because CPU overhead in the middle of a level could cause a drop in the frame rate, which is a nightmare for the user experience.
FYI. My feeling is that Swift wants to become more like a functional language and would like lazy instantiation in more places.
My early assessment of Swift has held up pretty well over time (well, the "not functional" part. I didn't anticipate how much Swift would favor methods over functions in later versions). Swift is not a functional language and does not intend to be one. This has come up often in WWDC talks, on the forums, on Twitter, and in conversations with the Swift team. Originally all maps and filters were lazy. Swift removed that because of the problems it caused. Probably the best talk on that subject is "Building Better Apps with Value Types in Swift". As they say:
We like mutation. We think it's valuable. We think it's easy to use when done correctly.
You don't get much more "non-functional" than that. Swift also embraces immutable data. But functional programming is about pure functions over immutable data, and that's not Swift.
(Of course there are plenty of non-lazy functional languages. Lazy and functional are orthogonal concepts. Haskell just happened to embrace both.)
To the question at hand, though:
I've found the lazy attribute rarely useful in real-world Swift (I'm being generous; I have never encountered a case where I kept it in the code). It doesn't offer anything like the laziness you get in Haskell. It isn't thread safe, so that's a nightmare. It forces you into reference types (or forces your structs to be mutable), so that can be annoying. If I heard they were pulling it from the language and we just had to roll our own, that'd be fine with me. (I'm tempted to write a proposal to do just that.) It implements a specific memo pattern that can occasionally be handy, but often isn't the one you want. So it's a very good thing that it isn't the default.
As you likely know, global variables and class variables are lazy by default, and I think that tends to work out pretty well since there are so many fewer of them, there's a much better chance they won't be accessed in practice, and that laziness is thread safe (which has a cost, but since they're so much rarer, the cost is much lower).
If you have an expensive object (in terms of how long it takes to create), you would like to decide and control when it is created. One could argue that the lazy variable should be the default, though. Maybe it has historical reasons. Lazy properties in ObjC resulted in a lot of boilerplate code.

Classes vs. Functions [closed]

What is the difference between functional programming and object-oriented programming? How should one decide which programming paradigm should be chosen? What are the benefits of one over the other?
Functions are easy to understand even for someone without any programming experience, but with a fair math background. On the other hand, classes seem to be more difficult to grasp.
Let's say I want to make a class/function that calculates the age of a person given his/her birth year and the current year. Should I create a class for this or a function?
Or is the choice dependent on the scenario?
P.S. I am working on Python, but I guess the question is generic.
Create a function. Functions do specific things, classes are specific things.
Classes often have methods, which are functions that are associated with a particular class, and do things associated with the thing that the class is - but if all you want is to do something, a function is all you need.
Essentially, a class is a way of grouping functions (as methods) and data (as properties) into a logical unit revolving around a certain kind of thing. If you don't need that grouping, there's no need to make a class.
Like what Amber says in her answer: create a function. In fact, you don't have to make a class if you have something like:
class Person(object):
    def __init__(self, arg1, arg2):
        self.arg1 = arg1
        self.arg2 = arg2

    def compute(self, other):
        """Example of bad class design, don't care about the result"""
        return self.arg1 + self.arg2 % other
Here you just have a function encapsulated in a class. This just makes the code less readable and less efficient. In fact, the function compute can be written just like this:
def compute(arg1, arg2, other):
    return arg1 + arg2 % other
You should use a class only if it has more than one function and if keeping internal state (with attributes) makes sense. Otherwise, if you just want to group functions, create a module in a new .py file.
You might watch this video (YouTube, about 30 min), which explains my point. Jack Diederich shows why classes are evil in that case and why it's such bad design, especially in things like APIs.
It's quite a long video but it's a must see.
I know it is a controversial topic, and I'll likely get burned now, but here are my thoughts.
For myself, I figured that it is best to avoid classes as long as possible. If I need a complex datatype I use a simple struct (C/C++), dict (Python), JSON (JS), or similar, i.e. no constructor, no class methods, no operator overloading, no inheritance, etc. When using classes, you can get carried away by OOP itself (what design pattern, what should be private, bla bla) and lose focus on the essential stuff you wanted to code in the first place.
If your project grows big and messy, then OOP starts to make sense because some sort of helicopter-view system architecture is needed. "function vs class" also depends on the task ahead of you.
function
purpose: process data, manipulate data, create result sets.
when to use: always code a function if you want to do this: “y=f(x)”
struct/dict/json/etc (instead of class)
purpose: store attr./param., maintain attr./param., reuse attr./param., use attr./param. later.
when to use: if you deal with a set of attributes/params (preferably not mutable)
different languages same thing: struct (C/C++), JSON (js), dict (python), etc.
always prefer simple struct/dict/json/etc over complicated classes (keep it simple!)
class (if it is a new data type)
a simple perspective: is a struct (C), dict (python), json (js), etc. with methods attached.
The method should only make sense in combination with the data/param stored in the class.
my advice: never code complex stuff inside class methods (call an external function instead)
warning: do not misuse classes as fake namespace for functions! (this happens very often!)
other use cases: if you want to do a lot of operator overloading then use classes (e.g. your own matrix/vector multiplication class)
ask yourself: is it really a new “data type”? (Yes => class | No => can you avoid using a class)
array/vector/list (to store a lot of data)
purpose: store a lot of homogeneous data of the same data type, e.g. time series
advice#1: just use what your programming language already has. do not reinvent it
advice#2: if you really want your “class mysupercooldatacontainer”, then overload an existing array/vector/list/etc class (e.g. “class mycontainer : public std::vector…”)
enum (enum class)
I just mention it
advice#1: use enum plus switch-case instead of overcomplicated OOP design patterns
advice#2: use finite state machines
Classes (or rather their instances) are for representing things. Classes are used to define the operations supported by a particular class of objects (its instances). If your application needs to keep track of people, then Person is probably a class; the instances of this class represent particular people you are tracking.
Functions are for calculating things. They receive inputs and produce an output and/or have effects.
Classes and functions aren't really alternatives, as they're not for the same things. It doesn't really make sense to consider making a class to "calculate the age of a person given his/her birthday year and the current year". You may or may not have classes to represent any of the concepts of Person, Age, Year, and/or Birthday. But even if Age is a class, it shouldn't be thought of as calculating a person's age; rather the calculation of a person's age results in an instance of the Age class.
If you are modelling people in your application and you have a Person class, it may make sense to make the age calculation be a method of the Person class. A method is basically a function which is defined as part of a class; this is how you "define the operations supported by a particular class of objects" as I mentioned earlier.
So you could create a method on your person class for calculating the age of the person (it would probably retrieve the birthday year from the person object and receive the current year as a parameter). But the calculation is still done by a function (just a function that happens to be a method on a class).
Or you could simply create a stand-alone function that receives arguments (either a person object from which to retrieve a birth year, or simply the birth year itself). As you note, this is much simpler if you don't already have a class where this method naturally belongs! You should never create a class simply to hold an operation; if that's all there is to the class then the operation should just be a stand-alone function.
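To illustrate (a hypothetical Scala example of my own; the question's context is Python, but the point is language-agnostic):

final case class Person(name: String, birthYear: Int) {
  // The calculation is still a function; it just lives on the class.
  def age(currentYear: Int): Int = currentYear - birthYear
}

// The stand-alone alternative, for when there is no class it naturally belongs to.
def ageOf(birthYear: Int, currentYear: Int): Int = currentYear - birthYear

// Person("Ada", 1815).age(1852)  // 37
// ageOf(1815, 1852)              // 37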
It depends on the scenario. If you only want to compute the age of a person, then use a function since you want to implement a single specific behaviour.
But if you want to create an object that contains a person's date of birth (and possibly other data) and allows you to modify it, then computing the age could be one of many operations related to the person, and it would be sensible to use a class instead.
Classes provide a way to merge together some data and related operations. If you have only one operation on the data then using a function and passing the data as argument you will obtain an equivalent behaviour, with less complex code.
Note that a class of the kind:
class A(object):
    def __init__(self, ...):
        # initialize
    def a_single_method(self, ...):
        # do stuff
isn't really a class, it is only a (complicated) function. A legitimate class should always have at least two methods (without counting __init__).
I'm going to break from the herd on this one (Edit 7 years later: I'm not a lone voice on this anymore, there is an entire coding movement to do just this, called 'Functional Programming') and provide an alternate point of view:
Never create classes. Always use functions.
Edit: Research has repeatedly shown that Classes are an outdated method of programming. Nearly every research paper on the topic sides with Functional Programming rather than Object Oriented Programming.
Reliance on classes has a significant tendency to cause coders to create bloated and slow code. Classes getting passed around (since they're objects) take a lot more computational power than calling a function and passing a string or two. Proper naming conventions on functions can do pretty much everything creating a class can do, and with only a fraction of the overhead and better code readability.
That doesn't mean you shouldn't learn to understand classes though. If you're coding with others, people will use them all the time and you'll need to know how to juggle those classes. Writing your code to rely on functions means the code will be smaller, faster, and more readable. I've seen huge sites written using only functions that were snappy and quick, and I've seen tiny sites that had minimal functionality that relied heavily on classes and broke constantly. (When you have classes extending classes that contain classes as part of their classes, you know you've lost all semblance of easy maintainability.)
When it comes down to it, all data you're going to want to pass can easily be handled by the existing datatypes.
Classes were created as a mental crutch and provide no actual extra functionality, and the overly-complicated code they have a tendency to create defeats the point of that crutch in the long run.
Edit: Update 7 years later...
Recently, a new movement in coding has been validating this exact point I've made. It is the movement to replace Object Oriented Programming (OOP) with functional programming, and it's based on a lot of these exact issues with OOP. There are lots of research papers showing the benefits of functional programming over object-oriented programming. In addition to the points I've mentioned, it makes reusing code much easier and makes bugfixing and unit testing faster and easier. Honestly, with the vast number of benefits, the only reason to go with OOP over functional is compatibility with legacy code that hasn't been updated yet.
Before answering your question:
If you do not have a Person class, first you must consider whether you want to create one. Do you plan to reuse the concept of a Person very often? If so, you should create a Person class. (If not, you just have access to this data in the form of a passed-in variable, and you don't care about being a bit messy and sloppy.)
To answer your question:
You have access to their birthyear, so in that case you likely have a Person class with a someperson.birthdate field. In that case, you have to ask yourself, is someperson.age a value that is reusable?
The answer is yes. We often care about age more than the birthdate, so if the birthdate is a field, age should definitely be a derived field. (A case where we would not do this: if we were calculating values like someperson.chanceIsFemale or someperson.positionToDisplayInGrid or other irrelevant values, we would not extend the Person class; you just ask yourself, "Would another program care about the fields I am thinking of extending the class with?" The answer to that question will determine if you extend the original class, or make a function (or your own class like PersonAnalysisData or something).)
Never create classes. At least the OOP kind of classes in Python being discussed.
Consider this simplistic class:
class Person(object):
    def __init__(self, id, name, city, account_balance):
        self.id = id
        self.name = name
        self.city = city
        self.account_balance = account_balance

    def adjust_balance(self, offset):
        self.account_balance += offset

if __name__ == "__main__":
    p = Person(123, "bob", "boston", 100.0)
    p.adjust_balance(50.0)
    print("done!: {}".format(p.__dict__))
vs this namedtuple version:
from collections import namedtuple

Person = namedtuple("Person", ["id", "name", "city", "account_balance"])

def adjust_balance(person, offset):
    return person._replace(account_balance=person.account_balance + offset)

if __name__ == "__main__":
    p = Person(123, "bob", "boston", 100.0)
    p = adjust_balance(p, 50.0)
    print("done!: {}".format(p))
The namedtuple approach is better because:
namedtuples have more concise syntax and standard usage.
In terms of understanding existing code, namedtuples are basically effortless to understand. Classes are more complex. And classes can get very complex for humans to read.
namedtuples are immutable. Managing mutable state adds unnecessary complexity.
class inheritance adds complexity, and hides complexity.
I can't see a single advantage to using OOP classes, except, obviously, if you are used to OOP or you have to interface with code that requires classes, like Django.
BTW, most other languages have some record type feature like namedtuples. Scala, for example, has case classes. This logic applies equally there.
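For example, a rough Scala equivalent of the namedtuple version above (my own translation, not code from the answer):

final case class Person(id: Int, name: String, city: String, accountBalance: BigDecimal)

def adjustBalance(person: Person, offset: BigDecimal): Person =
  person.copy(accountBalance = person.accountBalance + offset)

// val p = Person(123, "bob", "boston", BigDecimal(100))
// adjustBalance(p, BigDecimal(50))  // Person(123,bob,boston,150)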

Encapsulation in the age of frameworks

At my old C++ job, we always took great care in encapsulating member variables, and only exposing them as properties when absolutely necessary. We'd have really specific constructors that made sure you fully constructed the object before using it.
These days, with ORM frameworks, dependency-injection, serialization, etc., it seems like you're better off just relying on the default constructor and exposing everything about your class in properties, so that you can inject things, or build and populate objects more dynamically.
In C#, it's been taken one step further with Object initializers, which give you the ability to basically define your own constructor. (I know object initializers are not really custom constructors, but I hope you get my point.)
Are there any general concerns with this direction? It seems like encapsulation is starting to become less important in favor of convenience.
EDIT: I know you can still carefully encapsulate members, but I just feel like when you're trying to crank out some classes, you either have to sit and carefully think about how to encapsulate each member, or just expose it as a property and worry about how it is initialized later. It just seems like the easiest approach these days is to expose things as properties and not be so careful. Maybe I'm just flat wrong, but that's just been my experience, especially with the new C# language features.
I disagree with your conclusion. There are many good ways of encapsulating in C# with all of the above-mentioned technologies while maintaining good software coding practices. I would also say that it depends on whose technology demo you're looking at, but in the end it comes down to reducing the state space of your objects so that you can make sure they hold their invariants at all times.
Take object-relational frameworks: most of them allow you to specify how they are going to hydrate the entities; NHibernate, for example, allows you to say access="property" or access="field.camelcase" and similar. This allows you to encapsulate your properties.
Dependency injection works on the other types you have, mostly those which are not entities, even though you can combine AOP+ORM+IOC in some very nice ways to improve the state of these things. IoC is often used from layers above your domain entities if you're building a data-driven application, which I guess you are, since you're talking about ORMs.
They ("they" being application and domain services and other intrinsic classes to the program) expose their dependencies but in fact can be encapsulated and tested in even better isolation than previously since the paradigms of design-by-contract/design-by-interface which you often use when mocking dependencies in mock-based testing (in conjunction with IoC), will move you towards class-as-component "semantics". I mean: every class, when built using the above, will be better encapsulated.
Updated for urig: This holds true for both exposing concrete dependencies and exposing interfaces. First, about interfaces: what I was hinting at above was that services and other application classes which have dependencies can, with OOP, depend on contracts/interfaces rather than specific implementations. In C/C++ and older languages there was no interface construct, and abstract classes can only go so far. Interfaces allow you to tie different runtime instances to the same interface without having to worry about leaking internal state, which is what you're trying to get away from when abstracting and encapsulating. With abstract classes you can still provide a class implementation, just that you can't instantiate it, but inheritors still need to know about the invariants in your implementation, and that can mess up state.
Secondly, about concrete classes as properties: you have to be wary about what types of types ;) you expose as properties. Say you have a List in your instance; then don't expose IList as the property; this will probably leak and you can't guarantee that consumers of the interface don't add things or remove things which you depend on; instead expose something like IEnumerable and return a copy of the List, or even better, do it as a method:
public IEnumerable MyCollection { get { return _List.Enum(); } }
With that, you can be 100% certain to get both the performance and the encapsulation: no one can add to or remove from that IEnumerable, and you still don't have to perform a costly array copy. The corresponding helper method:
static class Ext {
    public static IEnumerable<T> Enum<T>(this IEnumerable<T> inner) {
        foreach (var item in inner) yield return item;
    }
}
So while you can't get 100% encapsulation in, say, creating overloaded equals operators/methods, you can get close with your public interfaces.
You can also use the new features of .Net 4.0 built on Spec# to verify the contracts I talked about above.
Serialization will always be there and has been for a long time. Previously, before the internet era, it was used for saving your object graph to disk for later retrieval; now it's used in web services, in copy semantics and when passing data to e.g. a browser. This doesn't necessarily break encapsulation if you put a few [NonSerialized] attributes or the equivalents on the correct fields.
Object initializers aren't the same as constructors, they are just a way of collapsing a few lines of code. Values/instances in the {} will not be assigned until all of your constructors have run, so in principle it's just the same as not using object initializers.
I guess, what you have to watch out for is deviating from the good principles you've learnt from your previous job and make sure you are keeping your domain objects filled with business logic encapsulated behind good interfaces and ditto for your service-layer.
Private members are still incredibly important. Controlling access to internal object data is always good, and shouldn't be ignored.
Many times private methods I've found to be overkill. Most of the time, if the work you're doing is important enough to break out, you can refactor it in such a way that either a) the private method is trivial, or b) is an integral part of other functions.
In addition, with unit testing, having many methods private makes it very hard to unit test. There are ways around that (making test objects friends, etc), but add difficulties.
I wouldn't discount private methods entirely though. Any time there's important, internal algorithms that really make no sense outside of the class there's no reason to expose those methods.
I think that encapsulation is still important; it helps more in libraries than anything else, IMHO. You can create a library that does X, but you don't need everyone to know how X was created, and you may even want to deliberately obfuscate the way you create X. When I learned about encapsulation, I was also taught that you should always define your variables as private to protect them from misuse, so that someone poking at your code can't access variables they are not supposed to use.