Scala stackable traits

So let's add some print statements that show us the instantiation order:

  trait A {
    print("A")
    def foo(): String = "A"
  }

  trait B extends A {
    print("B")
    abstract override def foo() = "B" + super.foo()
  }

  trait C extends B {
    print("C")
    abstract override def foo() = "C" + super.foo()
  }

  trait D extends A {
    print("D")
    abstract override def foo() = "D" + super.foo()
  }

  class E extends A {
    print("E")
    override def foo() = "E" + super.foo()
  }

  val e = new E with D with C with B
  println()
  println(s"e >> ${e.foo()}")

printed:

  AEDBC
  e >> CBDEA

But what about F?

  class F extends A with D with C with B {
    print("F")
    override def foo() = "F" + super.foo()
  }
  val f = new F()
  println()
  println(s"f >> ${f.foo()}")

printed:

  ADBCF
  f >> FCBDA

As you can see, the linearization is different in the two cases! Mixing a stack of traits into an instance on the fly is not the same as declaring a separate class that extends those same traits.

So the order in which foo() is called differs too, following the linearization. It becomes a bit clearer with super.foo() added to E.
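To make the difference easy to check, here is a condensed sketch of the same hierarchy (print statements dropped), with the two results from above written as assertions in comments:

```scala
trait A { def foo(): String = "A" }
trait B extends A { abstract override def foo() = "B" + super.foo() }
trait C extends B { abstract override def foo() = "C" + super.foo() }
trait D extends A { abstract override def foo() = "D" + super.foo() }

// E gets the traits mixed in at instantiation, so its own override
// sits near the bottom of the stack (only A is below it)
class E extends A { override def foo() = "E" + super.foo() }

// F bakes the traits into its declaration, so its own override
// is the most derived one and runs first
class F extends A with D with C with B { override def foo() = "F" + super.foo() }

val e = new E with D with C with B
val f = new F

// e.foo() == "CBDEA"
// f.foo() == "FCBDA"
```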


The first case, new E with D with C with B, follows directly from the linearization rules. Its linearization (most derived first) is C, B, D, E, A, so when you call e.foo(), it

  • first calls C#foo(),
  • then B#foo(),
  • then D#foo(),
  • then E#foo(),
  • and finally A#foo().

If you make E a trait whose foo() simply returns "E" without calling super, and mix it in last: val d = new D with C with B with E, then d.foo() will return just "E". Mixed in last, trait E lands first in the linearization (it is the most derived member), and since its foo() never calls super, the chain stops right there.
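A minimal sketch of that variant (same traits as above, but E rewritten as a trait that overrides foo without calling super):

```scala
trait A { def foo(): String = "A" }
trait B extends A { abstract override def foo() = "B" + super.foo() }
trait C extends B { abstract override def foo() = "C" + super.foo() }
trait D extends A { abstract override def foo() = "D" + super.foo() }

// E as a trait: a plain override, no super.foo() call
trait E extends A { override def foo() = "E" }

// Mixed in last, E comes first in the linearization: E, C, B, D, A
val d = new D with C with B with E

// d.foo() == "E": E answers first and never calls super,
// so B, C and D never get a chance to contribute
```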

The case of F is different because you define foo as "F" + super.foo(), and super here is A with D with C with B, whose linearization (most derived first) is C, B, D, A. So new F().foo() first yields "F", and then its super.foo() contributes "CBDA", giving "FCBDA" in total.

By the way, notice that "A" ends up last in both results: every chain of super.foo() calls eventually bottoms out at A#foo(). If E's override didn't call super.foo(), the chain would stop at E and "A" would never appear in the result.
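A quick sketch of that: the same stack as in the first example, but with E's override not calling super, so the chain never reaches A#foo():

```scala
trait A { def foo(): String = "A" }
trait B extends A { abstract override def foo() = "B" + super.foo() }
trait C extends B { abstract override def foo() = "C" + super.foo() }
trait D extends A { abstract override def foo() = "D" + super.foo() }

// Same class E as before, except the override drops the super.foo() call
class E extends A { override def foo() = "E" }

val e = new E with D with C with B

// e.foo() == "CBDE": the chain C -> B -> D -> E stops at E,
// so "A" never appears in the result
```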

Tags:

Scala

Traits