
March 20, 2012


Akash Chopra

So what if we apply SOA concepts at a lower level? I've been thinking along the lines of having only async communication between objects by implementing an in-process message bus and having objects act like mini SOA services.

I'd assumed I'd need the concept of events (tell) and async request (ask), but now I'm not so sure. It comes down to your distinction between functional and OO approaches, both of which seem reasonable for services. An OO service can be as stateful as it likes, because we can only tell it about things, not ask anything of it; an observationally immutable, idempotent service is also appealing for obvious reasons.
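A minimal sketch of the in-process message bus idea: objects only "tell" each other things by publishing fire-and-forget messages; there is no synchronous "ask". All class and topic names here are hypothetical illustrations, not anything from the post.

```python
class MessageBus:
    def __init__(self):
        self._subscribers = {}  # topic -> list of handler callables
        self._queue = []        # pending (topic, payload) messages

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def tell(self, topic, payload):
        # Fire-and-forget: enqueue and return nothing to the sender.
        self._queue.append((topic, payload))

    def drain(self):
        # Deliver queued messages, including any published during delivery.
        while self._queue:
            topic, payload = self._queue.pop(0)
            for handler in self._subscribers.get(topic, []):
                handler(payload)

class QuotaTracker:
    """A stateful mini-service: it can be told things, never asked."""
    def __init__(self, bus, limit):
        self.bus, self.used, self.limit = bus, 0, limit
        bus.subscribe("upload", self.on_upload)

    def on_upload(self, size):
        self.used += size
        if self.used > self.limit:
            self.bus.tell("quota-exceeded", self.used)

bus = MessageBus()
alerts = []
bus.subscribe("quota-exceeded", alerts.append)
QuotaTracker(bus, limit=100)
bus.tell("upload", 60)
bus.tell("upload", 60)
bus.drain()
```

Because nothing can be asked of `QuotaTracker`, it is free to be as stateful as it likes; callers only ever observe the messages it publishes.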

I need to get that blog post written...thanks for the thought provoking article.

Stefan Tilkov

It seems to me you're on to something, and I agree with Akash that it's related to SOA -- but the other way round: high-level services (which I like to call systems) encapsulate both state and logic, and they interact with the outside world through the messages they exchange with it. Within each service's implementation, there'll be a logic layer and some persistence, and FP is a great strategy for modularizing the logic. One convincing detail that supports this is that re-use never happened at the object level; it's always a larger, cohesive unit that gets re-used in a black-box kind of way. Maybe this can be explained by fine-grained re-use requiring freedom from side effects.

Daniel Lyons

One detail that may add some weight to your claim is the direction of the design process. FP is almost always a bottom-up affair; I don't think I've ever seen FP educational material that emphasized top-down design over starting at the bottom and refining successively. OOP, on the other hand, is usually taught with an emphasis on top-down design, patterns, and architecture.


Steve Freeman and Nat Pryce wrote about similar topics under the names peers and internals. They prefer to build up internal object behavior in a functional style of programming while using message passing between objects. Their definition of internals doesn't strictly limit them to the functional approach, but it expresses a preference for it.

Regardless of the programming paradigm, you'll need to employ the Tell, Don't Ask principle to centralize the decision-making process. Disparate components cannot interact in a meaningful way unless they share the same context, which is why context independence is critical to the composability and testability of a system.
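A small illustration of Tell, Don't Ask (all names hypothetical): the object that owns the state makes the decision itself, rather than the caller pulling state out and deciding on its behalf.

```python
class Alarm:
    def __init__(self):
        self.triggered_at = []

    def trigger(self, value):
        self.triggered_at.append(value)

class Monitor:
    def __init__(self, limit, alarm):
        self.limit, self.alarm = limit, alarm

    def record(self, reading):
        # Tell the collaborator what to do; never ask it about its state.
        if reading > self.limit:
            self.alarm.trigger(reading)

alarm = Alarm()
monitor = Monitor(limit=10, alarm=alarm)
monitor.record(5)
monitor.record(12)
```

The decision ("is this reading over the limit?") lives in one place, next to the data it needs, instead of leaking into every caller.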

Alistair Bayley

"Purity ... enables lazy evaluation"

I think you have this backwards. Laziness enforces purity (p24).

Also, have you read Moseley and Marks, "Out of the Tar Pit"?

Avdi Grimm

This is largely how I view things as well. Also, I've been reading through "Growing Object Oriented Software, Guided By Incredibly Long Book Titles", and they say something similar.


In "Growing Object-Oriented Software, Guided by Tests", the authors say the following:

"we find that we tend towards different programming styles at different levels in the code. Loosely speaking, we use the message-passing style we’ve just described between objects, but we tend to use a more functional style within an object, building up behavior from methods and values that have no side effects.

Features without side effects mean that we can assemble our code from smaller components, minimizing the amount of risky shared state. Writing large-scale functional programs is a topic for a different book, but we find that a little immutability within the implementation of a class leads to much safer code and that, if we do a good job, the code reads well too."
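The GOOS passage quoted above might look something like this sketch (the `Invoice`/printer names are hypothetical): message passing between objects on the outside, side-effect-free helpers on the inside.

```python
class Invoice:
    def __init__(self, printer):
        self._lines = []
        self._printer = printer   # collaborator we send messages to

    def add_line(self, quantity, unit_price):
        self._lines.append((quantity, unit_price))

    def send(self):
        # The only side effect: a message to the collaborator.
        self._printer(self._render(self._total(self._lines)))

    @staticmethod
    def _total(lines):
        # Pure internals: values in, values out, no shared state touched.
        return sum(q * p for q, p in lines)

    @staticmethod
    def _render(total):
        return f"TOTAL: {total}"

printed = []
invoice = Invoice(printed.append)
invoice.add_line(2, 3)
invoice.add_line(1, 4)
invoice.send()
```

The pure helpers can be tested as plain functions; only `send` needs a collaborator standing in for the real printer.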


The juxtaposition of "tell, don't ask" and functional programming is an interesting idea - I need to think it over - but I don't quite agree with the "functional programming at the low level" implication. Side-effect-free libraries are useful, but that does not mean they have to be purely functional internally - they just need to be purely functional from the outside. They can have state that is rebuilt anew with each invocation. I think imperative programming is simply easier than functional programming - but if you could enclose it inside modules and have the compiler check that it is properly sealed, then you could use imperative programming inside libraries that look pure from the outside. In a way, this is the same as how functional languages work on imperative random-access machines.
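A tiny example of "pure from the outside, imperative inside": the function below mutates a local dictionary, but since that dictionary is rebuilt on every call, callers can't observe any state.

```python
def word_counts(text):
    # Externally pure: the same input always yields the same output, and
    # no caller-visible state changes. Internally imperative: a local dict
    # is mutated, but it is rebuilt from scratch on every invocation.
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

first = word_counts("a b a")
second = word_counts("a b a")
```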

Stephen Colebourne

I think I've tended to prefer something like this for a while. Part of it is preferring libraries to frameworks for dependencies, since that reduces the overall complexity of the system, IMO. In addition, the lowest-level libraries (like Joda-Time) are immutable/functional.

My suspicion is that one thing preventing more use of immutable structures at higher levels is how awkward they are to use. Specifically, editing a single property of a deeply nested structure is a tricky prospect without language support.
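Stephen's point is easy to see even at two levels of nesting. A sketch using Python's frozen dataclasses (the `Person`/`Address` types are hypothetical):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Address:
    city: str
    postcode: str

@dataclass(frozen=True)
class Person:
    name: str
    address: Address

before = Person("Ada", Address("London", "N1"))
# Changing one leaf means rebuilding every record on the path to it.
# At two levels this is tolerable; at five it cries out for language
# support (or a lens library).
after = replace(before, address=replace(before.address, city="Oxford"))
```

The original value is untouched, which is the whole appeal; the cost is the nested `replace` chain growing with the depth of the structure.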


Have been using this dual approach very successfully with JavaScript for the last year or so.
Functional for actions doable in isolation.
OO to divide the program into responsibilities/capabilities and handle path choices.

I think it works better than using only one style for the whole application. Both objects and functions turn out reasonably simple for their combined complexity. And there is no mixing of styles; each style stays in its own area.

Tracy Harms

Your proposal sounds very similar to what is said about OOP and FP in the "Object Oriented Programming" Lab included with the J programming language. In section 5 of that lab, it says:

"It is a mistake to think of OOP as an alternative to FP. All languages, at the low level, have FP, and at a higher level, have OOP."

I don't wholly agree with that claim, but I do like the way it points in the same direction as your idea: Large-scale modules benefit from OOP techniques, small-scale components benefit from FP techniques.

Putting this idea into practice, we must become willing and able to write very different programs within these two layers. We must also find a way to interface between, i.e. program across, these two layers.

Those difficulties are great enough that most projects don't do it. By and large, management does not recognize that this two-tier approach is an option, but if the option were clear to them managers might often decide not to try it. It simplifies things to have a single tier, even if that does mean experiencing the method-by-method mis-fit of applying OOP to small-scale programming tasks. Perhaps this helps us understand why C++, Java, and C# have their patterns of success.

Rafael de F. Ferreira

This insight is also consonant with the advice given in Peter Van Roy's CTM. The book recommends keeping most of the logic in a purely functional core (actually they recommend a single-assignment dataflow style, but it's close enough) surrounded by a concurrent Actor-style layer.

I'm inclined to agree, but one thing I wouldn't know how to handle is the use of the Separated Interface pattern to trigger side effects. For instance, it's common to have a domain model object call a notification dependency if some condition applies (say, the user exceeded his daily quota); this notification interface could have an implementation that sends an email to the user. Another common use of the Separated Interface pattern is to pass a repository interface to a domain object that may (or may not, depending on some logic) call it to obtain other objects from a database; the repository would be implemented as a sort of data access object in the persistence layer. If we restrict the core domain model to be purely functional, we can no longer apply this modeling technique; I guess an alternative would be to wrap logic that may or may not cause side effects in a monad, but I'm unsure about the gains.
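One alternative to the Separated Interface pattern that keeps the domain core pure: the domain function returns a *description* of the side effects it wants, and a thin impure layer interprets them. All names below are hypothetical, riffing on the quota example.

```python
def record_upload(used, limit, size):
    """Pure domain logic: decides what should happen, performs nothing."""
    new_used = used + size
    effects = []
    if new_used > limit:
        effects.append(("send_email", "daily quota exceeded"))
    return new_used, effects

def run_effects(effects, mailer):
    # The impure boundary: the only place side effects actually occur.
    for kind, message in effects:
        if kind == "send_email":
            mailer(message)

new_used, effects = record_upload(used=90, limit=100, size=20)
sent = []
run_effects(effects, sent.append)
```

This is essentially a poor man's effect monad: the "may or may not notify" decision stays pure and trivially testable, at the cost of threading effect descriptions back out to the caller.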

Maybe the generic objection is how to handle IO side-effects when the decision to produce them is complex and they may be parametrizable.

Sorry for the ramble, perhaps someone has a more concrete idea.

Raul Miller

First off, from my point of view, lazy sequences are the moral equivalent, in data structures, of i/o. While in one sense, they are "pure", they have an inherently time-dependent nature and an external dependency which you need to eliminate before they can be completely valid:

Hypothetically speaking, if you evaluate [1..] it's going to stop sooner or later. It will never reach infinity -- you'll get a machine failure before then if you let it run long enough. The valuable part of this expression, then, is not the generator that counts up indefinitely, it's the (contained) function which maps generator state to values.
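Raul's separation (the valuable pure mapping versus the open-ended generation) shows up directly in a generator sketch:

```python
from itertools import islice

def naturals():
    # The generator that "counts up indefinitely": its real value is the
    # pure mapping from generator state (n) to the next yielded value.
    n = 1
    while True:
        yield n
        n += 1

# Evaluation stops when the consumer stops asking, never at "infinity".
first_five = list(islice(naturals(), 5))
```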

So, anyways, this dichotomy between what I see as the useful parts of a lazy system and what I see as the parts that are i/o related make talking about the relationship between OO and FP a bit stilted for me.

That said: it has been my general experience that pushing regular, simple computations down into my computational "leaf nodes" and bringing irregular, "application specific" computations up towards my computational "root nodes" tends to result in a system that is simple to understand and maintain, and one that performs well.

But, also: in my experience, the "information hiding" aspect of "encapsulation" tends to result in sub-optimal modularity. If nothing else, it's hard to debug hidden information. But sometimes "optimal modularity" is not a worthwhile goal. For example, when work involves "independent administrative entities" you need clear agreements on how they work together, and you need well defined interfaces and standards. And, OO can be a really good model for how to design these interfaces. If you can make FP fit the problem, it can be an even better model (simpler to work with, and simpler to understand), but that depends on the people you are working with and the nature of the problem -- it's not always practical to completely specify all of the relevant information that the system depends on.

Anyways, ultimately the "goodness" of a system is a people issue as much as it's an architectural issue.

Dhananjay Nene

The similarity may not be so obvious, but you come back to essentially describing Erlang's model: autonomous, concurrent, message-passing objects (oops, processes), each implemented using FP constructs, including full immutability. The curious part about Erlang that this doesn't touch on is state. Erlang is able to maintain state despite completely adopting immutable constructs, by keeping it on the stack.
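In Erlang that state lives in the arguments of a tail-recursive receive loop. A rough Python sketch of the same shape, with a queue standing in for a process mailbox (names hypothetical; Python uses a loop where Erlang would use tail recursion):

```python
from queue import Queue
from threading import Thread

def counter(inbox, replies):
    # Erlang-style state: an immutable value rebound on each turn of the
    # receive loop, never a mutated shared structure.
    count = 0
    while True:
        message = inbox.get()
        if message == "stop":
            replies.put(count)
            return
        count = count + message  # rebind the "loop argument"

inbox, replies = Queue(), Queue()
Thread(target=counter, args=(inbox, replies)).start()
for message in (1, 2, 3, "stop"):
    inbox.put(message)
final_count = replies.get()
```

From the outside this is a stateful object you can only tell things to; from the inside it is a pure fold over the stream of messages.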


Amen! Widespread synchronous "message sends" are really a red herring.

Another gradient that agrees is typing. Functional languages generally emphasize simple container types and non-nominal composition (tuples, OCaml/Clojure structural objects/maps, prevalent untyped usage of Lisps). Up close, if you have two lists of strings, say one of first names and one of last names, there isn't much chance of getting them confused. You may even be passing them both (independently) to the same function, as they're both lists of strings (the higher purpose is still within your mind).

However, at the module level, the behavior becomes so complex, and the data structures so opaque (de facto through complexity and de jure through published interfaces), that the *name* is really the only thing that identifies what a type is used for. We don't really want to see a Map(String, (String, String, String)); we *want* the semantic encapsulation and documentation of the abstraction (note that the composite type I gave isn't really that complex, but it's not a stretch to imagine an entire hierarchy of structural values that don't give much clue as to their invariants or purpose).

Regardless of the philosophical reasoning, new ("multi-core") languages will be leading us there anyway. Immutability and actors are both promising solutions to parallel processing, and I can only expect to see more languages incorporating them as first-class constructs. Whether communication is better phrased in terms of dually-owned channels (Rust/Go/Felix/OCaml) or implicit per-object mailboxes (classic object orientation, but asynchronous) remains to be seen.

Dale Schumacher

Alan Kay has said that Actors are closer to his original conception of Objects. Synchronous messaging was an early implementation choice based on limitations of then-current hardware. Messages were always the key concept. As previously noted, asynchronous messaging is closer to the biological metaphor.

Functional and Object/Actor-based programming are actually quite compatible when BOTH have first-class status within a system. I believe that a lot of difficulties have resulted from trying to force everything into one model or the other. For example, when mostly-functional languages introduce "expressions" that cause effects, breaking their purity.

We need both stateful and stateless constructs, cleanly separated from each other, but interoperable. This is the approach I've taken with Humus [1]. Expressions are purely functional, yielding immutable values. Actors encapsulate mutable state, acting like Objects, but using asynchronous messaging (tell, not ask).

Ironically, even pure-functional expression evaluation can be expressed with asynchronous actor messaging [2]. However, this detail can, and often should, be hidden from the programmer. From a conceptual standpoint, we want to reason about pure functional values (below) and asynchronous messages among stateful Actors/Objects (above).

[1] http://www.dalnefre.com/wp/humus/
[2] http://www.dalnefre.com/wp/2010/08/evaluating-expressions-part-1-core-lambda-calculus/

Bob Corrick

This reminds me of "Parameterise from above" as in http://accu.org/index.php/journals/1411


Essentially, isn't this what the IO monad is in Haskell? It forces the separation of pure functionality from impure functionality. You often end up with an impure layer on top to orchestrate things, with a large amount of pure code underneath.


Stock Java (such as Spring or Java EE) gets OO wrong. The open-source Jdon Framework is one of many Java message/event frameworks; it brings "Tell, Don't Ask" to Java developers.

Vincent Toups

It isn't clear that functional programming is necessarily lazy. Bob Harper disagrees: https://existentialtype.wordpress.com/2011/04/24/the-real-point-of-laziness/

Account Deleted

Nice post, Michael! As I read it, it immediately reminded me of a presentation by Tim Sweeney (Unreal Engine tech lead) where he expresses his "wishful thinking" for a functional-OO hybrid language and the contexts in which coding in one paradigm or the other makes sense. Nice to see the ideas converge in a certain way: http://www.slideshare.net/jstrane/tim-sweeneys-invited-talk-at-popl06



Mark Jordan

I'm a big fan of Gary Bernhardt's 'functional core, imperative shell' idea as described in this talk: https://www.youtube.com/watch?v=yTkzNHF6rMs

It proposes something similar as a way of making everything more unit-testable, but it seems to work as a more general architectural design as well.
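A toy version of the "functional core, imperative shell" split from that talk (all names hypothetical): every decision lives in a pure function, every I/O call in the shell.

```python
def next_reply(history, line):
    """Functional core: all the logic, no I/O, inputs never mutated."""
    new_history = history + [line]
    reply = f"echo: {line}" if line else "say something!"
    return new_history, reply

def shell(read_line, write_line, turns):
    # Imperative shell: all the I/O, no decisions worth unit-testing.
    history = []
    for _ in range(turns):
        history, reply = next_reply(history, read_line())
        write_line(reply)
    return history

inputs = iter(["hi", ""])
outputs = []
history = shell(lambda: next(inputs), outputs.append, turns=2)
```

The core is unit-testable with plain values; the shell is thin enough that a couple of integration tests cover it.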
