LRUCache in Scala?

An LRUCache solution based on Java's LinkedHashMap, exposed as a Scala mutable.Map:

import java.util.Collections.synchronizedMap

import scala.collection.mutable
import scala.jdk.CollectionConverters._ // Scala 2.13+; use scala.collection.JavaConverters on older versions

// accessOrder = true makes LinkedHashMap iterate from least to most recently
// accessed, which is exactly the ordering an LRU eviction policy needs.
class LRUCache[K, V](maxEntries: Int)
  extends java.util.LinkedHashMap[K, V](maxEntries, .75f, true) {

  // Called by LinkedHashMap after every put; returning true evicts the
  // eldest (least recently used) entry once the cache grows past maxEntries.
  override def removeEldestEntry(eldest: java.util.Map.Entry[K, V]): Boolean =
    size > maxEntries
}

object LRUCache {
  // Wrap in a synchronized view for basic thread safety and expose it
  // through the Scala mutable.Map interface.
  def apply[K, V](maxEntries: Int): mutable.Map[K, V] =
    synchronizedMap(new LRUCache[K, V](maxEntries)).asScala
}

When the map size exceeds maxEntries, the least recently used entry is removed.

The third LinkedHashMap constructor parameter, accessOrder, must be set to true for the LRU strategy; otherwise entries are ordered by insertion: LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder)

Usage example (the shouldBe assertions are ScalaTest matchers):

val cache = LRUCache[String, Int](1000)
val key = "key1"
val value = 111

cache.get(key) shouldBe None
cache += key -> value
cache.get(key) shouldBe Some(value)
cache(key) shouldBe value
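
To see the eviction policy itself, here is a small sketch using the LRUCache above (ScalaTest matchers again): in a cache of capacity 2, touching a key protects it, and the least recently used key is the one dropped.

val small = LRUCache[String, Int](2)
small += "a" -> 1
small += "b" -> 2
small("a")          // touch "a"; "b" is now least recently used
small += "c" -> 3   // size exceeds maxEntries, so "b" is evicted
small.get("b") shouldBe None
small.get("a") shouldBe Some(1)
small.get("c") shouldBe Some(3)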

The Spray folks have a spray-caching module that uses Futures. There is a plain LRU version and a version that lets you specify an explicit time-to-live, after which entries expire automatically.
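
From memory of the spray-caching docs, usage looks roughly like this; treat the parameter names and defaults as unverified assumptions and check the actual API:

import scala.concurrent.Future
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import spray.caching.{Cache, LruCache}

// A hypothetical expensive operation whose results we want to cache.
def expensiveOp(key: String): Double = scala.util.Random.nextDouble()

// Plain LRU cache with a time-to-live after which entries expire.
val cache: Cache[Double] = LruCache(maxCapacity = 500, timeToLive = 5.minutes)

// On a miss the block is run and the resulting Future is cached;
// on a hit the already-cached Future is returned immediately.
def cachedOp(key: String): Future[Double] =
  cache(key) {
    expensiveOp(key) // runs only on a cache miss
  }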

The use of Futures obviously allows you to write code that does not block. What is really cool, though, is that it solves the "thundering herd" problem as a bonus. Say, for example, that a bunch of requests come in at once for the same entry, which is not in the cache. In a naive cache implementation, a hundred threads might each get a miss on that entry and then run off to generate the same data for it, even though 99 of those computations are just wasted effort. What you really want is for just one thread to generate the data and all 100 requestors to see the result. This happens quite naturally if your cache contains Futures: the first requestor immediately installs a Future in the cache, so only that first request misses, and all 100 requestors get the same Future for the generated result.
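
The miss-deduplication part of that idea is easy to sketch without any library, e.g. on top of ConcurrentHashMap. A minimal sketch (FutureCache and expensiveLookup are illustrative names, and unlike spray-caching there is no eviction here):

import java.util.concurrent.ConcurrentHashMap
import scala.concurrent.{ExecutionContext, Future}

class FutureCache[K, V](implicit ec: ExecutionContext) {
  private val underlying = new ConcurrentHashMap[K, Future[V]]()

  // computeIfAbsent is atomic: under concurrent misses exactly one caller
  // installs the Future, and every other caller receives that same Future.
  def apply(key: K)(compute: => V): Future[V] =
    underlying.computeIfAbsent(key, _ => Future(compute))
}

implicit val ec: ExecutionContext = ExecutionContext.global

def expensiveLookup(key: String): Int = { Thread.sleep(100); key.length }

val cache = new FutureCache[String, Int]

// 100 concurrent requests for the same missing key trigger one computation;
// all of them complete with the result of that single Future.
val results = (1 to 100).map(_ => cache("key1")(expensiveLookup("key1")))

Note that this sketch also caches failed Futures forever; a real implementation would evict failures and add the LRU or time-to-live policies discussed above.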