Adopting New JVM Languages in the Enterprise (Update)

Posted by Dean Wampler Thu, 15 Jan 2009 07:40:00 GMT

(Updated to add Groovy, which I should have mentioned the first time. Also mentioned Django under Python.)

This is an exciting time to be a Java programmer. The pace of innovation for the Java language is slowing down, in part due to concerns that the language is growing too big and in part due to economic difficulties at Sun, which means there are fewer developers assigned to Java. However, the real crown jewel of the Java ecosystem, the JVM, has become an attractive platform for new languages. These languages give us exciting new opportunities for growth, while preserving our prior investment in code and deployment infrastructure.

This post emphasizes practical issues of evaluating and picking new JVM languages for an established Java-based enterprise.

The Interwebs are full of technical comparisons between Java and the different languages, e.g., why language X fixes Java’s perceived issue Y. I won’t rehash those arguments here, but I will describe some language features, as needed.

A similar “polyglot” trend is happening on the .NET platform.

The New JVM Languages

I’ll limit my discussion to these representative (and best known) alternative languages for the JVM.

  1. JRuby – Ruby running on the JVM.
  2. Scala – A hybrid object-oriented and functional language that runs on .NET as well as the JVM. (Disclaimer: I’m co-writing a book on Scala for O’Reilly.)
  3. Clojure – A Lisp dialect.

I picked these languages because they seem to be the most likely candidates for most enterprises considering a new JVM language, although some of the languages listed below could make that claim.

There are other deserving languages besides these three, but I don’t have the time to do them justice. Hopefully, you can generalize the subsequent discussion for these other languages.

  1. Groovy – A dynamically-typed language designed specifically for interoperability with Java. It will appeal to teams that want a dynamically-typed language that is closer to Java than Ruby. With Grails, you have a combination that’s comparable to Ruby on Rails.
  2. Jython – The first non-Java language ported to the JVM, started by Jim Hugunin in 1997. Most of my remarks about JRuby are applicable to Jython. Django is the Python analog of Rails. If your Java shop already has a lot of Python, consider Jython.
  3. Fan – A hybrid object-oriented and functional language that runs on .NET, too. It has a lot of similarities to Scala, like a scripting-language feel.
  4. Ioke – (pronounced “eye-oh-key”) An innovative language developed by Ola Bini and inspired by Io and Lisp. This is the newest language discussed here. Hence, it has a small following, but a lot of potential. The Io/Lisp-flavored syntax will be more challenging to average Java developers than Scala, JRuby, Jython, Fan, and JavaScript.
  5. JavaScript, e.g., Rhino – Much maligned and misunderstood (e.g., due to buggy and inconsistent browser implementations), JavaScript continues to gain converts as an alternative scripting language for Java applications. It is the default scripting language supported by the JDK 6 scripting interface.
  6. Fortress – A language designed as a replacement for high-performance FORTRAN for industrial and academic “number crunching”. This one will interest scientists and engineers…

Note: Like a lot of people, I use the term scripting language to refer to languages with a lightweight syntax, usually dynamically typed. The name reflects their convenience for “scripting”, but that quality is sometimes seen as pejorative; they aren’t seen as “serious” languages. I reject this view.

To learn more about what people are doing on the JVM today (with some guest .NET presentations), a good place to start is the recent JVM Language Summit.

Criteria For Evaluating New JVM Languages

I’ll frame the discussion around a few criteria you should consider when evaluating language choices. I’ll then discuss how each of the languages address those criteria. Since we’re restricting ourselves to JVM languages, I assume that each language compiles to valid byte code, so code in the new language and code written in Java can call each other, at least at some level. The “some level” part will be one criterion. Substitute X for the language you are considering.

  1. Interoperability: How easily can X code invoke Java code and vice versa? Specifically:
    1. Create objects (i.e., call new Foo(...)).
    2. Call methods on an object.
    3. Call static methods on a class.
    4. Extend a class.
    5. Implement an interface.
  2. Object Model: How different is the object model of X compared to Java’s object model? (This is somewhat tied to the previous point.)
  3. New “Ideas”: Does X support newer programming trends:
    1. Functional Programming.
    2. Metaprogramming.
    3. Easier approaches to writing robust concurrent applications.
    4. Easier support for processing XML, SQL queries, etc.
    5. Support internal DSL creation.
    6. Easier presentation-tier development of web and thick-client UI’s.
  4. Stability: How stable is the language, in terms of:
    1. Lack of Bugs.
    2. Stability of the language’s syntax, semantics, and library API’s. (All the languages can call Java API’s.)
  5. Performance: How does code written in X perform?
  6. Adoption: Is X easy to learn and use?
  7. Tool Support: What about editors, IDE’s, code coverage, etc.?
  8. Deployment: How are apps and libraries written in X deployed?
    1. Do I have to modify my existing infrastructure, management, etc.?

The Interoperability point affects ease of adoption and use with a legacy Java code base. The Object Model and Adoption points address the barrier to adoption from the learning point of view. The New “Ideas” point asks what each language brings to development that is not available in Java (or poorly supported) and is seen as valuable to the developer. Finally, Stability, Performance, and Deployment address very practical issues that a candidate production language must address.

Comparing the Languages

JRuby

JRuby is the most popular alternative JVM language, driven largely by interest in Ruby and Ruby on Rails.

Interoperability

Ruby’s object model is a little different than Java’s, but JRuby provides straightforward coding idioms that make it easy to call Java from JRuby. Calling JRuby from Java requires the JSR 223 scripting interface or a similar approach, unless JRuby is used to compile the Ruby code to byte code first. In that case, shortcuts are possible, which are well documented.
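
For example, here is a minimal sketch of the common idioms for calling Java from JRuby (a sketch; details vary slightly across JRuby versions):

    
# run with "jruby example.rb" 

require 'java'

list = java.util.ArrayList.new             # create a Java object
list.add "apple"                           # call instance methods
list.add "banana"
puts list.size                             # => 2

# Static methods work too; JRuby also aliases camelCase names as snake_case.
puts java.lang.System.current_time_millis
    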

Object Model

Ruby’s object model is a little different than Java’s. Ruby supports mixin-style modules, which behave like interfaces with implementations. So, the Ruby object model needs to be learned, but it is straightforward for the Java developer.

New Ideas

JRuby brings closures to the JVM, a much desired feature that probably won’t be added in the forthcoming Java 7. Using closures, Ruby supports a number of functional-style iterative operations, like mapping, filtering, and reducing/folding. However, Ruby does not fully support functional programming.
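
For example, here is a quick sketch of the usual iteration idioms:

    
# run with "ruby example.rb" 

squares = [1, 2, 3, 4].map    { |i| i * i }            # => [1, 4, 9, 16]
evens   = [1, 2, 3, 4].select { |i| i % 2 == 0 }       # => [2, 4]
sum     = [1, 2, 3, 4].inject(0) { |acc, i| acc + i }  # => 10 (reduce/fold)
    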

Ruby uses dynamic typing instead of static typing, which it exploits to provide extensive and powerful metaprogramming facilities.

Ruby doesn’t offer any specific enhancements over Java for safe, robust concurrent programming.

Ruby API’s make XML processing and database access relatively easy. Ruby on Rails is legendary for improving the productivity of web developers and similar benefits are available for thick-client developers using other libraries.

Ruby is also one of the best languages for defining “internal” DSL’s, which are used to great effect in Rails (e.g., ActiveRecord).
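
For example, this sketch of a typical ActiveRecord model (the Order class is hypothetical) is ordinary Ruby, yet it reads like a declarative mini-language:

    
class Order < ActiveRecord::Base
  belongs_to :customer                 # each line is just a class-level method call
  has_many   :line_items
  validates_presence_of :customer_id
end
    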

Stability

JRuby and Ruby are very stable and are widely used in production.

The Ruby syntax and API are undergoing some significant changes in the current 1.9.X release, but migration is not a major challenge.

Performance

JRuby is believed to be the best performing Ruby platform. While it is a topic of hot debate, Ruby and most dynamically-typed languages have higher runtime overhead compared to statically-typed languages. Also, the JVM has some known performance issues for dynamically-typed languages, some of which will be fixed in JDK 7.

As always, enterprises should profile code written in their languages of choice to pick the best one for each particular task.

Adoption

Ruby is very easy to learn, although effective use of advanced techniques like metaprogramming requires some time to master. JRuby-specific idioms are also easy to master and are well documented.

Tool Support

Ruby is experiencing tremendous growth in tool support. IDE support still lags support for Java, but IntelliJ, NetBeans, and Eclipse are working on Ruby support. JRuby users can exploit many Java tools.

Code analysis tools and testing tools (TDD and BDD styles) are now better than Java’s.

Deployment

JRuby applications, even Ruby on Rails applications, can be deployed as jars or wars, requiring no modifications to an existing Java-based infrastructure. Teams use this approach to minimize the “friction” of adopting Ruby, while also getting the performance benefits of the JVM.

Because JRuby code is byte code at runtime, it can be managed with JMX, etc.

Scala

Scala is a statically-typed language that supports an improved object model (with a full mixin mechanism called traits; similar to Ruby modules) and full support for functional programming, following a design goal of the inventor of Scala, Martin Odersky, that these two paradigms can be integrated, despite some surface incompatibilities. Odersky was involved in the design of Java generics (through earlier research languages) and he wrote the original version of the current javac. The name is a contraction of “scalable language”, but the first “a” is pronounced like “ah”, not long as in the word “hay”.

The syntax looks like a cross between Ruby (method definitions start with the def keyword) and Java (e.g., curly braces). Type inferencing and other syntactic conventions significantly reduce the “clutter”, such as the number of explicit type declarations (“annotations”) compared to Java. Scala syntax is very succinct, sometimes even more so than Ruby! For more on Scala, see also my previous blog postings, part 1, part 2, part 3, and this related post on traits vs. aspects.

Interoperability

Scala has the most seamless interoperability with Java of any of the languages discussed here. This is due in part to Scala’s static typing and “closed” classes (as opposed to Ruby’s “open” classes). It is trivial to import and use Java classes, implement interfaces, etc.

Direct API calls from Java to Scala are also supported. The developer needs to know how the names of Scala methods are encoded in byte code. For example, Scala methods can have “operator” names, like “+”. In the byte code, that name will be “$plus”.
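
For example, consider this sketch (the Money class is hypothetical):

    
class Money(val amount: Double) {
  def +(other: Money) = new Money(amount + other.amount)
}

// In byte code, + is encoded as $plus, so a Java caller would write:
//   Money sum = m1.$plus(m2);
    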

Object Model

Scala’s object model extends Java’s model with traits, which support flexible mixin composition. Traits behave like interfaces with implementations. The Scala object model provides other sophisticated features for building “scalable applications”.
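
Here is a minimal sketch (the trait and class names are hypothetical):

    
// run with "scala example.scala" 

trait Logging {
  def log(message: String) = println("LOG: " + message)
}

// Mix the trait into a class; log is inherited, implementation included.
class Service extends Logging {
  def run() = log("running")
}

new Service().run()    // => LOG: running
    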

New Ideas

Scala brings full support for functional programming to the JVM, including first-class functions and closures. Other aspects of functional programming, like immutable variables and side-effect-free functions, are encouraged by the language, but not mandated, as Scala is not a pure functional language. (Functional programming is a very effective strategy for writing thread-safe programs, etc.) Scala’s Actor library is a port of Erlang’s Actor library, a message-based concurrency approach.

In my view, the Actor model is the best general-purpose approach to concurrency. There are times when multi-threaded code is needed for performance, but not for most concurrent applications. (Note: there are Actor libraries for Java, e.g., Kilim.)
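
For example, here is a minimal sketch of a counter actor (using the Scala 2.7-era Actor API):

    
// run with "scala example.scala" 

import scala.actors.Actor._

val counter = actor {
  var sum = 0
  loop {
    react {
      case i: Int  => sum += i                // state is confined to the actor
      case "total" => reply(sum); exit()
    }
  }
}

for (i <- 1 to 3) counter ! i                 // asynchronous sends
println(counter !? "total")                   // synchronous send; => 6
    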

Scala has very good support for building internal DSL’s, although it is not quite as good as Ruby’s features for this purpose. It has a combinator parser library that makes external DSL creation comparatively easy. Scala also offers some innovative API’s for XML processing and Swing development.
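
For example, here is a sketch of a small arithmetic grammar written with the combinator parser library:

    
import scala.util.parsing.combinator._

class Arith extends JavaTokenParsers {
  def expr:   Parser[Any] = term ~ rep("+" ~ term | "-" ~ term)
  def term:   Parser[Any] = factor ~ rep("*" ~ factor | "/" ~ factor)
  def factor: Parser[Any] = floatingPointNumber | "(" ~ expr ~ ")"
}

object ParseExpr extends Arith {
  def main(args: Array[String]) {
    println(parseAll(expr, "2 * (3 + 7)"))    // prints the parse result
  }
}
    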

Stability

Scala is over 5 years old and it is very stable. The API and syntax continue to evolve, but no major, disruptive changes are expected. In fact, the structure of the language is such that almost all changes occur in libraries, not the language grammar.

There are some well-known production deployments, such as back-end services at Twitter.

Performance

Scala provides comparable performance to Java, since it is very close “structurally” to Java code at the byte-code level, given the static typing and “closed” classes. Hence, Scala can exploit JVM optimizations that aren’t available to dynamically-typed languages.

However, Scala will also benefit from planned improvements to support dynamically-typed languages, such as tail-call optimizations (which Scala currently does in the compiler). Hence, Scala probably has marginally better performance than JRuby, in general. If true, Scala may be more appealing than JRuby as a general-purpose, systems language, where performance is critical.

Adoption

Scala is harder to learn and master than JRuby, because it is a more comprehensive language. It not only supports a sophisticated object model, but it also supports functional programming, type inferencing, etc. In my view, the extra effort will be rewarded with higher productivity. Also, because it is closer to Java than JRuby and Clojure, new users will be able to start using it quickly as a “better object-oriented Java”, while they continue to learn the more advanced features, like functional programming, that will accelerate their productivity over the long term.

Tool Support

Scala support in IDE’s still lags support for Java, but it is improving. IntelliJ, NetBeans, and Eclipse now support Scala with plugins. Maven and Ant are widely used as build tools for Scala applications. Several excellent TDD and BDD libraries are available.

Deployment

Scala applications are packaged and deployed just like Java applications, since Scala files are compiled to class files. A Scala runtime jar is also required.

Clojure

Of the three new JVM languages discussed here, Clojure is the least like Java, due to its Lisp syntax and innovative “programming model”. Yet it is also the most innovative and exciting new JVM language for many people. Clojure interoperates with Java code, but it emphasizes functional programming. Unlike the other languages, Clojure does not support object-oriented programming. Instead, it relies on mechanisms like multi-methods and macros to address design problems for which OOP is often used.

One exciting innovation in Clojure is support for software transactional memory, which uses a database-style transactional approach to concurrent modifications of in-memory, mutable state. STM is somewhat controversial. You can google for arguments about its practicality, etc. However, Clojure’s implementation appears to be successful.

Clojure also has other innovative ways of supporting “principled” modification of mutable data, while encouraging the use of immutable data. These features, together with STM, are the basis of Clojure’s approach to robust concurrency.

Finally, Clojure implements several optimizations in the compiler that are important for functional programming, such as optimizing self-recursive tail calls through the explicit recur form (the JVM itself does not perform tail-call elimination).

Disclaimer: I know less about Clojure than JRuby and Scala. While I have endeavored to get the facts right, there may be errors in the following analysis. Feedback is welcome.

Interoperability

Despite the Lisp syntax and functional-programming emphasis, Clojure interoperates with Java. Calling Java from Clojure uses direct API calls, as for JRuby and Scala. Calling Clojure from Java is more involved: you have to create Java proxies on the Clojure side to generate the byte code needed on the Java side. The idioms for doing this are straightforward, however.

Object Model

Clojure is not an object-oriented language. However, in order to interoperate with Java code, Clojure supports implementing interfaces and instantiating Java objects. Otherwise, Clojure offers a significant departure for developers well versed in object-oriented programming but with little functional programming experience.

New Ideas

Clojure brings to the JVM full support for functional programming and popular Lisp concepts like macros, multi-methods, and powerful metaprogramming. It has innovative approaches to safe concurrency, including “principled” mechanisms for supporting mutable state, as discussed previously.

Clojure’s succinct syntax and built-in libraries make processing XML succinct and efficient. DSL creation is also supported using Lisp mechanisms, like macros.

Stability

Clojure is the newest of the three languages profiled here. Hence, it may be the most subject to change. However, given the nature of Lisps, it is more likely that changes will occur in libraries than the language itself. Stability in terms of bugs does not appear to be an issue.

Clojure also has the fewest known production deployments of the three languages. However, industry adoption is expected to happen rapidly.

Performance

Clojure supports type “hints” to assist in optimizing performance. The preliminary discussions I have seen suggest that Clojure offers very good performance.

Adoption

Clojure is more of a departure from Java than is Scala. It will require a motivated team that likes Lisp ;) However, such a team may learn Clojure faster than Scala, since Clojure is a simpler language, e.g., because it doesn’t have its own object model. Also, Lisps are well known for being simple languages, where the real learning comes in understanding how to use it effectively!

However, in my view, as for Scala, the extra learning effort will be rewarded with higher productivity.

Tool Support

As a new language, tool support is limited. Most Clojure developers use Emacs with its excellent Lisp support. Many Java tools can be used with Clojure.

Deployment

Clojure deployment appears to be as straightforward as for the other languages. A Clojure runtime jar is required.

Comparisons

Briefly, let’s review the points and compare the three languages.

Interoperability

All three languages make calling Java code straightforward. Scala interoperates most seamlessly. Scala code is easiest to invoke from Java code, using direct API calls, as long as you know how Scala encodes method names that have “illegal” characters (according to the JVM spec.). Calling JRuby and Clojure code from Java is more involved.

Therefore, if you expect to continue writing Java code that needs to make frequent API calls to the code in the new language, Scala will be a better choice.

Object Model

Scala is closest to Java’s object model. Ruby’s object model is superficially similar to Scala’s, but the dynamic nature of Ruby brings significant differences. Both extend Java’s object model with mixin composition through traits (Scala) or modules (Ruby), that act like interfaces with implementations.

Clojure is quite different, with an emphasis on functional programming and no direct support for object-oriented programming.

New Ideas

JRuby brings the productivity and power of a dynamically-typed language to the JVM, along with the drawbacks. It also brings some functional idioms.

Scala and Clojure bring full support for functional programming. Scala provides a complete Actor model of concurrency (as a library). Clojure brings software transactional memory and other innovations for writing robust concurrent applications. JRuby and Ruby don’t add anything specific for concurrency.

JRuby, like Ruby, is exceptionally good for writing internal DSL’s. Scala is also very good and Clojure benefits from Lisp’s support for DSL creation.

Stability

All the language implementations are of high quality. Scala is the most mature, but JRuby has the widest adoption in production.

Performance

Performance should be comparable for all, but JRuby and Clojure have to deal with some inefficiencies inherent to running dynamic languages on the JVM. Your mileage may vary, so please run realistic profiling experiments on sample implementations that are representative of your needs. Avoid “premature optimization” when choosing a new language. Often, team productivity and “time to market” are more important than raw performance.

Adoption

JRuby is the easiest of the three languages to learn and adopt if you already have some Ruby or Ruby on Rails code in your environment.

Scala has the lowest barrier to adoption because it is the language that most resembles Java “philosophically” (static typing, emphasis on object-oriented programming, etc.). Adopters can start with Scala as a “better Java” and gradually learn the advanced features (mixin composition with traits and functional programming). Scala will appeal the most to teams that prefer statically-typed languages, yet want some of the benefits of dynamically-typed languages, like a succinct syntax.

However, Scala is the most complex of the three languages, while Clojure requires the biggest conceptual leap from Java.

Clojure will appeal to teams willing to explore more radical departures from what they are doing now, with potentially great payoffs!

Deployment

Deployment is easy with all three languages. Scala is most like Java, since you normally compile to class files (there is a limited interpreter mode). JRuby and Clojure code can be interpreted at runtime or compiled.

Summary and Conclusions

All three choices (or comparable substitutions from the list of other languages), will provide a Java team with a more modern language, yet fully leverage the existing investment in Java. Scala is the easiest incremental change. JRuby brings the vibrant Ruby world to the JVM. Clojure offers the most innovative departures from Java.

Video of my RubyConf talk, "Better Ruby through Functional Programming"

Posted by Dean Wampler Thu, 27 Nov 2008 22:09:00 GMT

Confreaks has started posting the videos from RubyConf. Here’s mine on Better Ruby through Functional Programming.

Please ignore the occasional Ruby (and Scala) bugs…

Upcoming Speaking Engagements

Posted by Dean Wampler Tue, 04 Nov 2008 00:13:00 GMT

I’m speaking this Friday at RubyConf on Better Ruby Through Functional Programming. I’ll introduce long-overlooked ideas from FP, why they are important for Ruby programmers, and how to use them in Ruby.

In two weeks, I’m speaking on Wednesday, 11/19 at QCon San Francisco on Radical Simplification Through Polyglot and Poly-paradigm Programming. The idea of this talk is that combining the right languages and modularity paradigms (i.e., objects, functions, aspects) can simplify your code base and reduce the amount of code you have to write and manage, providing numerous benefits.

Back in Chicago, I’m speaking at the Polyglot Programmer’s meeting on The Seductions of Scala, 11/13. It’s an intro to the Scala language, which could become the language of choice for the JVM. I’m repeating this talk at the Chicago Java User’s Group on 12/16. I’m co-writing a book on Scala with Alex Payne. O’Reilly will be the publisher.

Incidentally, Bob Martin is also speaking in Chicago on 11/13 at the APLN Chicago meeting on Software Professionalism.

A Scala-style "with" Construct for Ruby

Posted by Dean Wampler Tue, 30 Sep 2008 03:41:00 GMT

Scala has a “mixin” construct called traits, which are roughly analogous to Ruby modules. They allow you to create reusable, modular bits of state and behavior and use them to compose classes and other traits or modules.

The syntax for using Scala traits is quite elegant. It’s straightforward to implement the same syntax in Ruby and doing so has a few useful advantages.

For example, here is a Scala example that uses a trait to trace calls to a Worker.work method.

    
// run with "scala example.scala" 

class Worker {
    def work() = "work" 
}

trait WorkerTracer extends Worker {
    override def work() = "Before, " + super.work() + ", After" 
}

val worker = new Worker with WorkerTracer

println(worker.work())        // => Before, work, After
    

Note that WorkerTracer extends Worker so it can override the work method. Since Scala is statically typed, you can’t just define an override method and call super unless the compiler knows there really is a “super” method!

Here’s a Ruby equivalent.

    
# run with "ruby example.rb" 

module WorkerTracer
    def work; "Before, #{super}, After"; end
end

class Worker 
    def work; "work"; end
end

class TracedWorker < Worker 
  include WorkerTracer
end

worker = TracedWorker.new

puts worker.work          # => Before, work, After
    

Note that we have to create a subclass, which isn’t required for the Scala case (but can be done when desired).

If you know that you will always want to trace calls to work in the Ruby case, you might be tempted to dispense with the subclass and just add include WorkerTracer in Worker. Unfortunately, this won’t work. Due to the way that Ruby resolves methods, the version of work in the module will not be found before the version defined in Worker itself. Hence the subclass seems to be the only option.
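
To see why, here is a sketch of the failed attempt:

    
module WorkerTracer
  def work; "Before, #{super}, After"; end
end

class Worker 
  include WorkerTracer    # inserted *above* Worker in the ancestor chain...
  def work; "work"; end   # ...so this definition shadows the module's version
end

puts Worker.new.work      # => "work", not "Before, work, After" 
    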

However, we can work around this using metaprogramming. We can use WorkerTracer#append_features(...). What goes in the argument list? If we pass Worker, then all instances of Worker will be affected, but actually we’ll still have the problem with the method resolution rules.

If we just want to affect one object and work around the method resolution rules, then we need to pass the singleton class (or eigenclass or metaclass ...) for the object, which you can get with the following expression.

    
metaclass = class << worker; self; end
    

So, to encapsulate all this and to get back to the original goal of implementing with-style semantics, here is an implementation that adds a with method to Object, wrapped in an rspec example.

    
# run with "spec ruby_with_spec.rb" 

require 'rubygems'
require 'spec'

# Warning, monkeypatching Object, especially with a name
# that might be commonly used is fraught with peril!!

class Object
  def with *modules
    metaclass = class << self; self; end
    modules.flatten.each do |m|
      m.send :append_features, metaclass
    end
    self
  end
end

module WorkerTracer
    def work; "Before, #{super}, After"; end
end

module WorkerTracer1
    def work; "Before1, #{super}, After1"; end
end

class Worker 
    def work; "work"; end
end

describe "Object#with" do
  it "should make no changes to an object if no modules are specified" do
    worker = Worker.new.with
    worker.work.should == "work" 
  end

  it "should override any methods with a module's methods of the same name" do
    worker = Worker.new.with WorkerTracer
    worker.work.should == "Before, work, After" 
  end

  it "should stack overrides for multiple modules" do
    worker = Worker.new.with(WorkerTracer).with(WorkerTracer1)
    worker.work.should == "Before1, Before, work, After, After1" 
  end

  it "should stack overrides for a list of modules" do
    worker = Worker.new.with WorkerTracer, WorkerTracer1
    worker.work.should == "Before1, Before, work, After, After1" 
  end

  it "should stack overrides for an array of modules" do
    worker = Worker.new.with [WorkerTracer, WorkerTracer1]
    worker.work.should == "Before1, Before, work, After, After1" 
  end
end
    

You should carefully consider the warning about monkeypatching Object! Also, note that Module.append_features is actually private, so I had to use m.send :append_features, ... instead.

The syntax is reasonably intuitive and it eliminates the need for an explicit subclass. You can pass a single module, or a list or array of them. Because with returns the object, you can also chain with calls.

A final note: many developers steer clear of metaprogramming and reflection features in their languages, out of fear. While prudence is definitely wise, the power of these tools can dramatically accelerate your productivity. Metaprogramming is just programming. Every developer should master it.

Traits vs. Aspects in Scala

Posted by Dean Wampler Sun, 28 Sep 2008 03:33:00 GMT

Scala traits provide a mixin composition mechanism that has been missing in Java. Roughly speaking, you can think of traits as analogous to Java interfaces, but with implementations.

Aspects, e.g., those written in AspectJ, are another mechanism for mixin composition in Java. How do aspects and traits compare?

Let’s look at an example trait first, then re-implement the same behavior using an AspectJ aspect, and finally compare the two approaches.

Observing with Traits

In a previous post on Scala, I gave an example of the Observer Pattern implemented using a trait. Chris Shorrock and James Iry provided improved versions in the comments. I’ll use James’ example here.

To keep things as simple as possible, let’s observe a simple Counter, which increments an internal count variable by the number input to an add method.

    
package example

class Counter {
    var count = 0
    def add(i: Int) = count += i
}
    

The count field is actually public, but I will only write to it through add.

Here is James’ Subject trait that implements the Observer Pattern.

    
package example

trait Subject {
  type Observer = { def receiveUpdate(subject:Any) }

  private var observers = List[Observer]()
  def addObserver(observer:Observer) = observers ::= observer
  def notifyObservers = observers foreach (_.receiveUpdate(this))
}
    

Effectively, this says that we can use any object as an Observer as long as it matches the structural type { def receiveUpdate(subject:Any) }. Think of structural types as anonymous interfaces. Here, a valid observer is one that has a receiveUpdate method taking an argument of Any type.

The rest of the trait manages a list of observers and defines a notifyObservers method. The expression observers ::= observer uses the List :: (“cons”) operator to prepend an item to the list. (Note, I am using the default immutable List, so a new copy is created every time.)

The notifyObservers method iterates through the observers, calling receiveUpdate on each one. The _ is a placeholder that gets replaced with each observer during the iteration.

Finally, here is a specs file that exercises the code.

    
package example

import org.specs._

object CounterObserverSpec extends Specification {
    "A Counter Observer" should {
        "observe counter increments" in {
            class CounterObserver {
                var updates = 0
                def receiveUpdate(subject:Any) = updates += 1
            }
            class WatchedCounter extends Counter with Subject {
                override def add(i: Int) = { 
                    super.add(i)
                    notifyObservers
                }
            }
            var watchedCounter = new WatchedCounter
            var counterObserver = new CounterObserver
            watchedCounter.addObserver(counterObserver)
            for (i <- 1 to 3) watchedCounter.add(i)
            counterObserver.updates must_== 3
            watchedCounter.count must_== 6
    }
  }
}
    

The specs library is a BDD tool inspired by rspec in Rubyland.

I won’t discuss all the specs-specific details here, but hopefully you’ll get the general idea of what it’s doing.

Inside the "observe counter increments" in {...}, I start by declaring two classes, CounterObserver and WatchedCounter. CounterObserver satisfies our required structural type, i.e., it provides a receiveUpdate method.

WatchedCounter subclasses Counter and mixes in the Subject trait. It overrides the add method, where it calls Counter’s add first, then notifies the observers. No parentheses are used in the invocation of notifyObservers because the method was not defined to take any!

Next, I create an instance of each class, add the observer to the WatchedCounter, and make 3 calls to watchedCounter.add.

Finally, I use the “actual must_== expected” idiom to test the results. The observer should have seen 3 updates, while the counter should have a total of 6.

The following simple bash shell script will build and run the code.

    
SCALA_HOME=...
SCALA_SPECS_HOME=...
CP=$SCALA_HOME/lib/scala-library.jar:$SCALA_SPECS_HOME/specs-1.3.1.jar:bin
rm -rf bin
mkdir -p bin
scalac -d bin -cp $CP src/example/*.scala
scala -cp $CP example/CounterObserverSpec
    

Note that I put all the sources in a src/example directory. Also, I’m using v1.3.1 of specs, as well as v2.7.1 of Scala. You should get the following output.

    
Specification "CounterObserverSpec" 
  A Counter Observer should
  + observe counter increments

Total for specification "CounterObserverSpec":
Finished in 0 second, 60 ms
1 example, 2 assertions, 0 failure, 0 error
    

Observing with Aspects

Because Scala compiles to Java byte code, I can use AspectJ to advise Scala code! For this to work, you have to be aware of how Scala represents its concepts in byte code. For example, object declarations, e.g., object Foo {...}, become static final classes. Also, method names like + become $plus in byte code.

However, most Scala type, method, and variable names can be used as is in AspectJ. This is true for my example.

Here is an aspect that observes calls to Counter.add.

    
package example

public aspect CounterObserver {
    after(Object counter, int value): 
        call(void *.add(int)) && target(counter) && args(value) {

        RecordedObservations.record("adding "+value);
    }
}
    

You can read this aspect as follows: after calling Counter.add (and keeping track of the Counter object that was called and the value passed to the method), call the static method record on RecordedObservations.

I’m using a separate Scala object, RecordedObservations:

    
package example

object RecordedObservations {
    private var messages = List[String]()
    def record(message: String):Unit = messages ::= message
    def count() = messages.length
    def reset():Unit = messages = Nil
}
    

Recall that this is effectively a static final Java class. I need this separate object, rather than keeping information in the aspect itself, because of the simple-minded way I’m building the code. ;) However, it’s generally a good idea with aspects to delegate most of the work to Java or Scala code anyway.

Now, the “spec” file is:

    
package example

import org.specs._

object CounterObserverSpec extends Specification {
    "A Counter Observer" should {
        "observe counter increments" in {
            RecordedObservations.reset()
            var counter = new Counter
            for (i <- 1 to 3) counter.add(i)
            RecordedObservations.count() must_== 3
            counter.count must_== 6
    }
  }
}
    

This time, I don’t need two more classes for adding a mixin trait or defining an observer. Also, I call RecordedObservations.count to ensure record was called 3 times.

The build script is also slightly different to add the AspectJ compilation.

    
SCALA_HOME=...
SCALA_SPECS_HOME=...
ASPECTJ_HOME=...
CP=$SCALA_HOME/lib/scala-library.jar:$SCALA_SPECS_HOME/specs-1.3.1.jar:$ASPECTJ_HOME/lib/aspectjrt.jar:bin
rm -rf bin app.jar
mkdir -p bin
scalac -d bin -cp $CP src/example/*.scala 
ajc -1.5 -outjar app.jar -cp $CP -inpath bin src/example/CounterObserver.aj
aj -cp $ASPECTJ_HOME/lib/aspectjweaver.jar:app.jar:$CP example.CounterObserverSpec
    

The ajc command not only compiles the aspect, but it “weaves” into the compiled Scala classes in the bin directory. Actually, it only affects the Counter class. Then it writes all the woven and unmodified class files to app.jar, which is used to execute the test. Note that for production use, you might prefer load-time weaving.

The output is the same as before (except for the milliseconds), so I won’t show it here.

Comparing Traits with Aspects

So far, both approaches are equally viable. The traits approach obviously doesn’t require a separate language and corresponding tool set.

However, traits have one important limitation with respect to aspects. Aspects let you define pointcuts that are queries over all possible points where new behavior or modifications might be desired. These points are called join points in aspect terminology. The aspect I showed above has a simple pointcut that selects one join point, calls to the Counter.add method.

However, what if I wanted to observe all state changes in all classes in a package? Defining traits for each case would be tedious and error prone, since it would be easy to overlook some cases. With an aspect framework like AspectJ, I can implement observation at all the points I care about in a modular way.

Aspect frameworks support this by providing wildcard mechanisms. I won’t go into the details here, but the * in the previous aspect is an example, matching any type. Also, one of the most powerful techniques for writing robust aspects is to use pointcuts that reference only annotations, a form of abstraction. As a final example, if I add an annotation Adder to Counter.add,

    
package example

class Counter {
    var count = 0
    @Adder def add(i: Int) = count += i
}
    

Then I can rewrite the aspect as follows.

    
package example

public aspect CounterObserver {
    after(Object counter, int value): 
        call(@Adder void *.*(int)) && target(counter) && args(value) {

        RecordedObservations.record("adding "+value);
    }
}
    

Now, there are no type and method names in the pointcut. Any instance method on any visible type that takes one int (or Scala Int) argument and is annotated with Adder will get matched.

Note: Scala requires that you create any custom annotations as normal Java annotations. Also, if you intend to use them with Aspects, use runtime retention policy, which will be necessary if you use load-time weaving.

Conclusion

If you need to mix in behavior in a specific, relatively-localized set of classes, Scala traits are probably all you need and you don’t need another language. If you need more “pervasive” modifications (e.g., tracing, policy enforcement, security), consider using aspects.

Acknowledgements

Thanks to Ramnivas Laddad, whose forthcoming 2nd Edition of AspectJ in Action got me thinking about this topic.

User Stories for Cross-Component Teams

Posted by Dean Wampler Sat, 20 Sep 2008 00:53:00 GMT

I’m working on an Agile Transition for a large organization. They are organized into component teams. They implement features by forming temporary feature teams with representatives from each of the relevant components, usually one developer per component.

Doing User Stories for such cross-component features can be challenging.

Now, it would be nice if the developers just pair-programmed with each other, ignoring their assigned component boundaries, but we’re not quite there yet. Also, there are other issues we are addressing, such as the granularity of feature definitions, etc., etc. Becoming truly agile will take time.

Given where we are, it’s just not feasible to estimate a single story point value for each cross-component user story, because the work for each component varies considerably. A particular story might be the equivalent of 1 point for the UI part, 8 points for the middle-tier part, 2 points for the database part, etc.

So, what we’re doing is treating the user story as an “umbrella”, with individual component stories underneath. We’re estimating and tracking the points for each component story. The total points for the user story is the sum of the component story points, plus any extra we decide is needed for the integration and final acceptance testing work.

This model allows us to track the work more closely, as long as we remember that component points mean nothing from the point of view of delivering customer value!

I prefer this approach to documenting tasks, because it keeps the focus on delivering value to the client of each story. For the component stories, the client will be another component.

Automated Acceptance Tests for Component Stories

Just as for user stories, we are defining automated acceptance tests for each component. We’re using JUnit for them, since we don’t need a customer-friendly specification format, like FitNesse or RSpec.

This is also a (sneaky…) way to get the developers from different components to pair together. Say for example that we have a component story for the midtier and the UI is the client. The UI developer and the midtier developer pair to produce the acceptance criteria for the story.

For each component story, the pair of programmers produce the following:

  1. JUnit tests that define the acceptance criteria for the component story.
  2. One or more interfaces that will be used by the client of the component. They will also be implemented by concrete classes in the component.
  3. A test double that passes the JUnit tests and allows the client to move forward while the component feature is being implemented.

In a sense, the “contract” of the component story is the interfaces, which specify the static structure, and the JUnit tests, which specify the dynamic behavior of the feature.

This model of pair-programming the component interface should solve the common, inefficient communication problems when component interactions need to be changed. You know the scenario; a client component developer or manager tells a server component developer or manager that a change is needed. A developer (probably a different one…) on the server component team makes up an interface, checks it into version control, and waits for feedback from the client team. Meanwhile, the server component developer starts implementing the changes.

A few days before the big drop to QA for final integration testing, the server component developer realizes that the interface is missing some essential features. At the same time, the client component developer finally gets around to using the new interface and discovers a different set of missing essential features. Hilarity ensues…

We’re just getting started with this approach, but so far it is proving to be an effective way to organize our work and to be more efficient.

Configuration Management Systems, Automated Tests, CI, and Complexity

Posted by Dean Wampler Mon, 08 Sep 2008 21:53:00 GMT

I’m working with a client that has a very complex branching structure in their commercial CM system, which will remain nameless. Why is it so complex? Because everyone is afraid to merge to the integration branches.

This is a common symptom in teams that don’t have good automated test coverage and don’t use continuous integration (CI). Fear is their lot in life. They’ll keep lots of little branches and only merge to integration when they’re ready for the “big bang” integration.

I spoke with a manager at the client site today who expressed frustration that no one really knows which branch they should be committing to and when they should merge to the integration branches.

In contrast, projects with good test coverage and CI do almost all their work on the integration branch. They have little fear, because any problems will get caught and fixed quickly. So, the first point of this post is:

Automated test coverage and CI drastically simplify your use of configuration management.

Here’s something else I’ve noticed: open-source CM systems seem to focus on different priorities than commercial systems do. Disclaimer: I know I’m generalizing a bit here.

The commercial systems tend to be really good at managing complex branching, with fancy GUI tools to help manage the big trees of branches, facilitate merges, etc. That seems to be their biggest selling feature, GUI’s to manage the complexity.

In contrast, most of the open-source CM systems have lower-tech GUI’s, if any, but the teams using them don’t seem to care that much. Usually, this is because these teams are also practicing TDD and CI, so they just don’t need the wizardry as much.

The open-source CM systems seem to be better at scalability and performance. Some are pioneering distributed CM, e.g., Git and Mercurial. Git, for example, was designed to manage a massive project called Linux. Maybe you’ve heard of it.

Distributed CM is not easy, but it’s a lot easier to do if you don’t need to worry as much about complex branch hierarchies.

Most of the commercial tools I’ve seen don’t scale well and some require way too much administration. My client is apparently the biggest user of their particular tool and the developers complain all the time about performance. This tool is not designed to scale horizontally. The only hope is to use faster hardware. In this case, the vendor has focused on managing complexity. To be frank, even their GUI tool is an uninspired and slow Java fat client.

So the second point of this post is:

Avoid CM tools that encourage complexity. Pick the ones that scale.

The Liskov Substitution Principle for "Duck-Typed" Languages

Posted by Dean Wampler Sun, 07 Sep 2008 04:48:00 GMT

OCP and LSP together tell us how to organize similar vs. variant behaviors. I blogged the other day about OCP in the context of languages with open classes (i.e., dynamically-typed languages). Let’s look at the Liskov Substitution Principle (LSP).

The Liskov Substitution Principle was coined by Barbara Liskov in Data Abstraction and Hierarchy (1987).

If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2, then S is a subtype of T.

I’ve always liked the elegant simplicity, yet power, of LSP. In less formal terms, it says that if a client (program) expects objects of one type to behave in a certain way, then it’s only okay to substitute objects of another type if the same expectations are satisfied.

This is our best definition of inheritance. The well-known is-a relationship between types is not precise enough. Rather, the relationship has to be behaves-as-a, which unfortunately is more of a mouthful. Note that is-a focuses on the structural relationship, while behaves-as-a focuses on the behavioral relationship. A very useful, pre-TDD design technique called Design by Contract emerges out of LSP, but that’s another topic.

Note that there is a slight assumption that I made in the previous paragraph. I said that LSP defines inheritance. Why inheritance specifically and not substitutability, in general? Well, inheritance has been the main vehicle for substitutability for most OO languages, especially the statically-typed ones.

For example, a Java application might use a simple tracing abstraction like this.

    
public interface Tracer {
    void trace(String message);
}
    

Clients might use this to trace methods calls to a log. Only classes that implement the Tracer interface can be given to these clients. For example,

    
public class TracerClient {
    private Tracer tracer;

    public TracerClient(Tracer tracer) {
        this.tracer = tracer;
    }

    public void doWork() {
        tracer.trace("in doWork():");
        // ...
    }
}
    

However, Duck Typing is another form of substitutability that is commonly seen in dynamically-typed languages, like Ruby and Python.

If it walks like a duck and quacks like a duck, it must be a duck.

Informally, duck typing says that a client can use any object you give it as long as the object implements the methods the client wants to invoke on it. Put another way, the object must respond to the messages the client wants to send to it.

The object appears to be a “duck” as far as the client is concerned.

In our example, clients only care about the trace(message) method being supported. So, we might do the following in Ruby.

    
class TracerClient 
  def initialize tracer 
    @tracer = tracer
  end

  def do_work
    @tracer.trace "in do_work:" 
    # ... 
  end
end

class MyTracer
  def trace message
    p message
  end
end

client = TracerClient.new(MyTracer.new)
    

No “interface” is necessary. I just need to pass an object to TracerClient.initialize that responds to the trace message. Here, I defined a class for the purpose. You could also add the trace method to another type or object.

So, LSP is still essential, in the generic sense of valid substitutability, but it doesn’t have to be inheritance based.

Is Duck Typing good or bad? It largely comes down to your view about dynamically-typed vs. statically-typed languages. I don’t want to get into that debate here! However, I’ll make a few remarks.

On the negative side, without a Tracer abstraction, you have to rely on appropriate naming of objects to convey what they do (but you should be doing that anyway). Also, it’s harder to find all the “tracing-behaving” objects in the system.

On the other hand, the client really doesn’t care about a “Tracer” type, only a single method. So, we’ve decoupled “client” and “server” just a bit more. This decoupling is more evident when using closures to express behavior, e.g., for Enumerable methods. In our case, we could write the following.

    
class TracerClient2 
  def initialize &tracer 
    @tracer = tracer
  end

  def do_work 
    @tracer.call "in do_work:" 
    # ... 
  end
end

client = TracerClient2.new {|message| p "block tracer: #{message}"}
    

For comparison, consider how we might approach substitutability in Scala. As a statically-typed language, Scala doesn’t support duck typing per se, but it does support a very similar mechanism called structural types.

Essentially, structural types let us declare that a method parameter must support one or more methods, without having to say it supports a full interface. Loosely speaking, it’s like using an anonymous interface.

In our Java example, when we declare a tracer object in our client, we would be able to declare that it supports trace, without having to specify that it implements a full interface.

To be explicit, recall our Java constructor for TracerClient.

    
public class TracerClient {
    public TracerClient(Tracer tracer) { ... }
    // ...
}
    
In Scala, a complete example would be the following.
    
class ScalaTracerClient(val tracer: { def trace(message:String) }) {
    def doWork() = { tracer.trace("doWork") }
}

class ScalaTracer() {
    def trace(message: String) = { println("Scala: "+message) }
}

object TestScalaTracerClient {
    def main() {
        val client = new ScalaTracerClient(new ScalaTracer())
        client.doWork();
    }
}
TestScalaTracerClient.main()
    

Recall from my previous blogs on Scala, the argument list to the class name is the constructor arguments. The constructor takes a tracer argument whose “type” (after the ’:’) is { def trace(message:String) }. That is, all we require of tracer is that it support the trace method.

So, we get duck type-like behavior, but statically type checked. We’ll get a compile error, rather than a run-time error, if someone passes an object to the client that doesn’t respond to trace.

To conclude, LSP can be reworded very slightly.

If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2, then S is substitutable for T.

I replaced a subtype of with substitutable for.

An important point is that the idea of a “contract” between the types and their clients is still important, even in a language with duck-typing or structural typing. However, languages with these features give us more ways to extend our system, while still supporting LSP.

The Open-Closed Principle for Languages with Open Classes

Posted by Dean Wampler Fri, 05 Sep 2008 02:42:00 GMT

We’ve been having a discussion inside Object Mentor World Design Headquarters about the meaning of the OCP for dynamic languages, like Ruby, with open classes.

For example, in Ruby it’s normal to define a class or module, e.g.,

    
# foo.rb
class Foo
    def method1 *args
        ...
    end
end
    

and later re-open the class and add (or redefine) methods,

    
# foo2.rb
class Foo
    def method2 *args
        ...
    end
end
    

Users of Foo see all the methods, as if Foo had one definition.

    
foo = Foo.new
foo.method1 :arg1, :arg2
foo.method2 :arg1, :arg2
    

Do open classes violate the Open-Closed Principle? Bertrand Meyer articulated OCP. Here is his definition1.

Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.

He elaborated on it here.

... This is the open-closed principle, which in my opinion is one of the central innovations of object technology: the ability to use a software component as it is, while retaining the possibility of adding to it later through inheritance. Unlike the records or structures of other approaches, a class of object technology is both closed and open: closed because we can start using it for other components (its clients); open because we can at any time add new properties without invalidating its existing clients.

Tell Less, Say More: The Power of Implicitness

So, if one client require’s only foo.rb and only uses method1, that client doesn’t care what foo2.rb does. However, if the client also require’s foo2.rb, perhaps indirectly through another require, problems will ensue unless the client is unaffected by what foo2.rb does. This looks a lot like the way “good” inheritance should behave.

So, the answer is no, we aren’t violating OCP, as long as we extend a re-opened class following the same rules we would use when inheriting from it.

If we use inheritance instead:

    
# foo.rb
class Foo
    def method1 *args
        ...
    end
end
...
class DerivedFoo < Foo
    def method2 *args
        ...
    end
end
...
foo = DerivedFoo.new    # Instantiate different class...
foo.method1 :arg1, :arg2
foo.method2 :arg1, :arg2
    

One notable difference is that we have to instantiate a different class. This is an important difference. While you can often just use inheritance, and maybe you should prefer it, inheritance only works if you have full control over what types get instantiated and it’s easy to change which types you use. Of course, inheritance is also the best approach when you need all behavioral variants simultaneously, i.e., each variant in one or more objects.

Sometimes you want to affect the behavior of all instances transparently, without changing the types that are instantiated. A slightly better example, logging method calls, illustrates the point. Here we use the “famous” alias_method in Ruby.

    
# foo.rb
class Foo
    def method1 *args
        ...
    end
end
# logging_foo.rb
class Foo
    alias_method :old_method1, :method1
    def method1 *args
        p "Inside method1(#{args.inspect})" 
        old_method1 *args
    end
end
...
foo = Foo.new
foo.method1 :arg1, :arg2
    

Foo#method1 behaves like a subclass override, with extended behavior that still obeys the Liskov Substitution Principle (LSP).

So, I think the OCP can be reworded slightly.

Software entities (classes, modules, functions, etc.) should be open for extension, but closed for source modification.

We should not re-open the original source, but adding functionality through a separate source file is okay.

Actually, I prefer a slightly different wording.

Software entities (classes, modules, functions, etc.) should be open for extension, but closed for source and contract modification.

The extra “and contract” is redundant with LSP. I don’t think this kind of redundancy is necessarily bad. ;) The contract is the set of behavioral expectations between the “entity” and its client(s). Just as it is bad to break the contract with inheritance, it is also bad to break it through open classes.

OCP and LSP together are our most important design principles for effective organization of similar vs. variant behaviors. Inheritance is one way we do this. Open classes provide another way. Aspects provide a third way and are subject to the same design issues.

1 Meyer, Bertrand (1988). Object-Oriented Software Construction. Prentice Hall. ISBN 0136290493.

Tag: How Did I Get Started in Software Development

Posted by Dean Wampler Sat, 30 Aug 2008 03:36:00 GMT

Micah Martin tagged me a while ago:

How old were you when you started programming?

I was around 15.

How did you get started programming?

This was in the mid-70’s. In school, we had access to a time-shared computer running Basic. My first “real” programs were in college when I wrote Fortran code for a PDP-11 in a Physics professor’s lab.

What was your first language?

Basic

What languages have you used since you started programming?

Roughly in order of adoption:

Fortran, C, Assembly Language, C++, PL/1, Perl, TCL/TK, Python, Java, HTML, JavaScript, CSS, SQL, Ruby, Scala

What was the first real program you wrote?

Data analysis programs in Fortran for that Physics professor. Later, in graduate school, I started applying OO principles to the massive Fortran simulations I wrote. I needed OO to manage the complex objects!

What was your first professional programming gig?

Writing PL/1 and C code for a 3-dimensional scanning system running on the RMX OS on proprietary Intel mini-computers. The tools were atrocious and we forced our customers to use a green screen terminal interface, because nothing else was available!

If there is one thing you learned along the way that you would tell new developers, what would it be?

Take the initiative to learn from potential mentors. They are too hard to find in our industry, so grab the opportunities when you can. The other recommendation I would make is to pay attention to the business side. Do you really want to do all that hard work for a project that will face-plant once it reaches the marketplace??

What’s the most fun you’ve ever had programming?

Leading a team of C++ developers writing a new user interface for a debugger that worked with in-circuit emulators (ICE’s). I always enjoyed UI development and the technical challenges were good. Unfortunately, it was one of those face-plants…

Tag, you’re it: Bob Koss, Brett Schuchert
