Scala Fork-Join-All With Multiple Generic Types and 1 Generic Unit of Work

I'm attempting to write a method which accepts multiple generic types and takes as an argument a unit of work to execute.
The idea is that the unit of work is a common function that itself is generic. For the sake of example, let's say it's something like the following:
def loadModelRdd[T: TypeTag](sc: SparkContext): RDD[T] = {
...
}
loadModelRdd() will construct an RDD of the given type after some internal processing like loading the Model information, etc.
A prototype method I've been hacking on looks something like the following (non-working):
def forkAll[A: Manifest, B: Manifest](work: => RDD[_]): (RDD[A], RDD[B]) = {
  def aFuture = Future { work } // How can I notify that this work call returns type A?
  def bFuture = Future { work } // How can I notify that this work call returns type B?
  val res = for {
    a <- aFuture
    b <- bFuture
  } yield (a.asInstanceOf[A], b.asInstanceOf[B])
  Await.result(res, 10.seconds)
}
This is a shortened version of the code I'm working on as I'm actually looking at accepting as many as 10 different types.
As you can see, the overall goal of the forkAll method is to wrap the unit of work in a Future, fork-join the execution of the unit of work for each type, then return the results as a Tuple'd result. An example consumer statement would be:
val (a, b) = forkAll[ClassA, ClassB](loadModelRdd)
I.e. I want to fork-join at this point and wait for the results, but I want the executions to run in parallel and then be collected back to the Driver (the Spark Driver, to be specific).
The problem is I'm not sure how to coerce the type returned by the unit of work within forkAll when constructing the Future {} blocks. Without the forkAll, the implementation looked like the following:
val resA = loadModelRdd[ClassA](sc)
val resB = loadModelRdd[ClassB](sc)
...
I am looking at doing this for two reasons:
To abstract the details of fork-join for any unit of work which matches this model.
A version of this code, which explicitly states what the unit of work is, is working in Production and was responsible for cutting the execution time of a long-running block by close to half. I have a couple of execution steps where this pattern could be applied.
Is this something that is possible in Scala's type system, or should I look at this problem from a different perspective? I've tried a couple of implementations (including one described here), but I haven't quite found one that fits my current view of the problem.
Please let me know if there is any additional information needed.
Thanks!

Short answer: Scala does not allow functions with type parameters, so what you want is not exactly possible.
You are attempting to pass a method with a type parameter. Although methods are allowed to have type parameters, functions are not. When you pass a method where a function value is expected, it is converted to an anonymous function, so you must fix a concrete type at that point.
However, since methods do allow type parameters, you can take advantage of this by creating an abstract class that will do your fork/join:

abstract class ForkJoin {
  protected def work[T: TypeTag]: RDD[T]

  def apply[A: TypeTag, B: TypeTag]: (RDD[A], RDD[B]) = {
    // Write implementation of fork/join here
    (work[A], work[B])
  }
}
then overriding the generic work method so that it does what you want, such as calling some other predefined method:
val forkJoin = new ForkJoin {
  override protected def work[T: TypeTag]: RDD[T] =
    loadModelRdd[T](sc)
}
val (intRdd, stringRdd) = forkJoin[Int, String]
Check out this for a prototype implementation that compiles and runs without issues.
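For concreteness, here is one hedged sketch of how the fork/join body might be filled in, assuming Spark's RDD is in scope and using the global ExecutionContext and an arbitrary 10-minute timeout (both placeholders, not necessarily what the linked prototype does):

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.reflect.runtime.universe.TypeTag
import org.apache.spark.rdd.RDD

abstract class ForkJoin {
  protected def work[T: TypeTag]: RDD[T]

  def apply[A: TypeTag, B: TypeTag]: (RDD[A], RDD[B]) = {
    val aFuture = Future { work[A] } // fork
    val bFuture = Future { work[B] } // fork
    Await.result(aFuture.zip(bFuture), 10.minutes) // join
  }
}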

Related

How can I provide a custom header to a ZIO during tests

I have a service that returns a ZIO[Has[MyCustomHeader]], and I'm having trouble testing it.
Other services in our organisation are tested by converting the ZIO to a Twitter Future using runtime.unsafeRunToFuture (where runtime is a Runtime[ZEnv]) and then awaiting the future, thus running the tests in blocking mode.
However, this service has a Has[] requirement, and runtime.unsafeRunToFuture doesn't handle those. So far my approach has been to try to convert my ZIO[Has[MyCustomHeader]] to a ZIO[ZEnv], but I've yet to succeed at this.
From what I gather I need to provide a ZLayer via ZIO.provideSomeLayer(), but I'm simply failing to understand how to construct a ZLayer properly.
Am I even on the right path here? And if so, how do I construct a ZLayer with a static value for MyCustomHeader to use in my tests?
This is how far along I am in trying to add a header for testing purposes. It doesn't work, but it might illustrate what I'm trying to achieve... maybe... I'm pretty confused myself:
object effectAwait {
  implicit class ZioEffect[A](private val value: ZIO[Has[EnvironmentHeader], RequestFailure, A]) extends AnyVal {
    final def await(implicit runtime: Runtime[ZEnv] = Runtime.default): A = {
      val zmanaged = ZManaged.fromEffect(value).provide(Has(EnvironmentHeader("test")))
      val layered = value.provideSomeLayer(zmanaged.toLayer)
      val sf = runtime.unsafeRunToFuture(layered)
      Await.result(sf, 10.seconds)
    }
  }
}
This however gives me the error:

could not find implicit value for izumi.reflect.Tag[A]. Did you forget to put on a Tag, TagK or TagKK context bound on one of the parameters in A? e.g. def x[T: Tag, F[_]: TagK] = ...
<trace>:
  deriving Tag for A, dealiased: A:
    could not find implicit value for Tag[A]: A is a type parameter without an implicit Tag!

val layered = value.provideSomeLayer(zmanaged.toLayer)
I think you can just use ZIO.provideLayer (instead of provideSomeLayer) here :)
Also, there's a runtime.unsafeRun that will wait for the result as well, so you don't necessarily have to convert it to a Future. Also, also, instead of relying on an implicit runtime, there's always zio.Runtime.default that you can use anywhere (it's a Runtime[ZEnv] so it should work just as well, unless you've otherwise customized the runtime's behavior)
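For illustration, a minimal sketch of that suggestion against ZIO 1.x, with EnvironmentHeader standing in for the custom header type:

import zio._

case class EnvironmentHeader(value: String)

object TestSupport {
  // A layer carrying a static header; it satisfies the Has[EnvironmentHeader] requirement in tests
  val testHeaderLayer: ULayer[Has[EnvironmentHeader]] =
    ZLayer.succeed(EnvironmentHeader("test"))

  // provideLayer removes the environment requirement, and unsafeRun blocks for the result
  def awaitResult[E, A](value: ZIO[Has[EnvironmentHeader], E, A]): A =
    Runtime.default.unsafeRun(value.provideLayer(testHeaderLayer))
}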

Can Scala infer the actual type from the return type actually expected by the caller?

I have the following question. Our project has a lot of test code in Scala, and there is a lot of code that fills in fields like this:
production.setProduct(new Product)
production.getProduct.setUuid("b1253a77-0585-291f-57a4-53319e897866")
production.setSubProduct(new SubProduct)
production.getSubProduct.setUuid("89a877fa-ddb3-3009-bb24-735ba9f7281c")
Eventually I grew tired of this code, since all those fields are actually subclasses of a base class that has the uuid field, so after thinking for a while I wrote an auxiliary function like this:
def createUuid[T <: GenericEntity](uuid: String)(implicit m: Manifest[T]): T = {
  val constructor = m.runtimeClass.getConstructors()(0)
  val instance = constructor.newInstance().asInstanceOf[T]
  instance.setUuid(uuid)
  instance
}
Now my code is half as long, since I can write something like this:
production.setProduct(createUuid[Product]("b1253a77-0585-291f-57a4-53319e897866"))
production.setSubProduct(createUuid[SubProduct]("89a877fa-ddb3-3009-bb24-735ba9f7281c"))
That's good, but I am wondering if I could somehow implement createUuid so that the last bit would look like this:
// Is that really possible?
production.setProduct(createUuid("b1253a77-0585-291f-57a4-53319e897866"))
production.setSubProduct(createUuid("89a877fa-ddb3-3009-bb24-735ba9f7281c"))
Can the Scala compiler infer that setProduct expects not just a generic entity, but actually something like a Product (or its subclass)? Or is there no way in Scala to make this even shorter?
The Scala compiler won't infer/propagate the type outside-in. You could, however, create implicit conversions like:
implicit def stringToSubProduct(uuid: String): SubProduct = {
  val n = new SubProduct
  n.setUuid(uuid)
  n
}
and then just call
production.setSubProduct("89a877fa-ddb3-3009-bb24-735ba9f7281c")
and the compiler will automatically use stringToSubProduct, because its input and output types fit.
Update: to keep the code better organized, I suggest wrapping the implicit defs in a companion object, like:
case class EntityUUID(uuid: String) {
  // possible uuid format check
  require(uuid.matches("[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"))
}

object EntityUUID {
  implicit def toProduct(e: EntityUUID): Product = {
    val p = new Product
    p.setUuid(e.uuid)
    p
  }

  implicit def toSubProduct(e: EntityUUID): SubProduct = {
    val p = new SubProduct
    p.setUuid(e.uuid)
    p
  }
}
and then you'd do
production.setProduct(EntityUUID("b1253a77-0585-291f-57a4-53319e897866"))
so anyone reading this could have an intuition where to find the conversion implementation.
Regarding your comment about some generic approach (having 30 types): I won't say it's not possible, but I just don't see how to do it. The reflection you used bypasses the type system. If all 30 cases are the same piece of code, maybe you should reconsider your object design. You can still implement the 30 implicit defs by calling a method that uses reflection, similar to what you have already; then you at least have a single place (well, 30 short ones) to change in the future.
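To sketch that last option (assuming, as in the question, that GenericEntity declares setUuid and that every entity has a no-argument constructor), each implicit def becomes a one-liner over a single reflective helper:

object EntityUUID {
  // The single place that uses reflection; every conversion delegates here.
  private def fromUuid[T <: GenericEntity](uuid: String)(implicit m: Manifest[T]): T = {
    val instance = m.runtimeClass.getConstructors()(0).newInstance().asInstanceOf[T]
    instance.setUuid(uuid)
    instance
  }

  implicit def toProduct(e: EntityUUID): Product = fromUuid[Product](e.uuid)
  implicit def toSubProduct(e: EntityUUID): SubProduct = fromUuid[SubProduct](e.uuid)
  // ... one short implicit def per entity type, 30 in total
}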

Scala: Why use implicit on function argument?

I have a following function:
def getIntValue(x: Int)(implicit y: Int): Int = { x + y }
I see the declaration above everywhere. I understand what the function does: it is a curried function that takes two parameter lists. If you omit the second argument, the compiler looks for an implicit Int value in scope and passes it for you, so it feels very similar to defining a default value for the argument.
implicit val temp = 3
scala> getIntValue(3)
res8: Int = 6
I was wondering what are the benefits of above declaration?
Here's my "pragmatic" answer: you typically use currying as more of a "convention" than anything else meaningful. It comes in really handy when your last parameter happens to be a "call by name" parameter (for example: : => Boolean):
def transaction(conn: Connection)(codeToExecuteInTransaction: => Boolean) = {
  conn.startTransaction // start transaction
  val booleanResult = codeToExecuteInTransaction // invoke the code block they passed in
  // deal with errors and roll back if necessary, or commit
  // return the connection to the connection pool
}
What this is saying is "I have a function called transaction, its first parameter is a Connection and its second parameter will be a code-block".
This allows us to use this method like so (using the "I can use curly brace instead of parenthesis rule"):
transaction(myConn) {
  // code to execute in a transaction
  // the code block's last executable statement must be a Boolean as per the second
  // parameter of the transaction method
}
If you didn't curry that transaction method, it would look pretty unnatural doing this:
transaction(myConn, {
  // code block
})
How about implicit? Yes, it can seem like a very ambiguous construct, but you get used to it after a while. The nice thing about implicit definitions is that they have scoping rules: for production you might define an implicit function that gets the database connection from the PROD database, while in your integration test you define an implicit function that supersedes the PROD version and gets a connection from a DEV database instead.
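As a hedged sketch of that scoping idea (ProdConnections, TestConnections and the URLs are hypothetical names, not from the answer above):

import java.sql.{Connection, DriverManager}

object ProdConnections {
  implicit def connection: Connection = DriverManager.getConnection("jdbc:prod-url") // hypothetical URL
}

object TestConnections {
  implicit def connection: Connection = DriverManager.getConnection("jdbc:dev-url") // hypothetical URL
}

// Production code does `import ProdConnections._`; an integration test does
// `import TestConnections._` instead, and that version wins within the test's scope.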
As an example, how about we add an implicit parameter to the transaction method?
def transaction(codeToExecuteInTransaction: => Boolean)(implicit conn: Connection) = {
  ...
}

(Note that the implicit parameter list has to come last, so conn moves to the end.)
Now, assuming I have an implicit function somewhere in my code base that returns a Connection, like so:
implicit def getConnectionFromPool: Connection = { ... }
I can execute the transaction method like so:
transaction {
  // code to execute in transaction
}
and Scala will translate that to:

transaction {
  // code to execute in transaction
}(getConnectionFromPool)
In summary, implicits are a pretty nice way to avoid making the developer provide a value for a required parameter when that parameter is, 99% of the time, going to be the same everywhere the function is used. For the 1% of the time that you need a different Connection, you can pass in a value explicitly instead of letting Scala figure out which implicit provides it.
In your specific example there are no practical benefits. In fact using implicits for this task will only obfuscate your code.
The standard use case of implicits is the Type Class Pattern. I'd say that it is the only use case that is practically useful. In all other cases it's better to have things explicit.
Here is an example of a typeclass:
// A typeclass
trait Show[a] {
  def show(a: a): String
}

// Some data type
case class Artist(name: String)

// An instance of the `Show` typeclass for that data type
implicit val artistShowInstance =
  new Show[Artist] {
    def show(a: Artist) = a.name
  }

// A function that works for any type `a` which has an instance of the `Show` typeclass
def showAListOfShowables[a](list: List[a])(implicit showInstance: Show[a]): String =
  list.view.map(showInstance.show).mkString(", ")

// The following code outputs `Beatles, Michael Jackson, Rolling Stones`
val list = List(Artist("Beatles"), Artist("Michael Jackson"), Artist("Rolling Stones"))
println(showAListOfShowables(list))
This pattern originates from a functional programming language named Haskell and has turned out to be more practical than the standard OO practices for writing modular and decoupled software. Its main benefit is that it allows you to extend already existing types with new functionality without changing them.
There are plenty of details left unmentioned, like syntactic sugar, def instances, and so on. It is a huge subject, and fortunately it has great coverage throughout the web; just google for "scala type class".
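For instance, one piece of that sugar is the context bound, which is equivalent to the implicit parameter list above:

// Same function, written with a context bound; implicitly summons the instance
def showAListOfShowables[a: Show](list: List[a]): String =
  list.view.map(implicitly[Show[a]].show).mkString(", ")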
There are many benefits, outside of your example.
I'll give just one; at the same time, this is also a trick that you can use on certain occasions.
Imagine you create a trait that is a generic container for other values, like a list, a set, a tree or something like that.
trait MyContainer[A] {
  def containedValue: A
}
Now, at some point, you find it useful to iterate over all elements of the contained value.
Of course, this only makes sense if the contained value is of an iterable type.
But because you want your class to be useful for all types, you don't want to restrict A to be of a Seq type, or Traversable, or anything like that.
Basically, you want a method that says: "I can only be called if A is of a Seq type."
And if someone calls it on, say, MyContainer[Int], that should result in a compile error.
That's possible.
What you need is some evidence that A is of a sequence type.
And you can do that with Scala and implicit arguments:
trait MyContainer[A] {
  def containedValue: A

  def aggregate[B](f: (B, B) => B)(implicit ev: A => Seq[B]): B =
    ev(containedValue) reduce f
}
So, if you call this method on a MyContainer[Seq[Int]], the compiler will look for an implicit Seq[Int] => Seq[B].
That's really simple for the compiler to resolve, because Predef puts implicit evidence in scope that every type conforms to itself (conforms, whose type A <:< A extends A => A).
That evidence simply returns whatever argument is passed to it, so B is inferred as Int.
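To see the resolution in action, here is a hypothetical call site:

val ints = new MyContainer[Seq[Int]] { def containedValue = Seq(1, 2, 3) }
ints.aggregate[Int](_ + _) // compiles: the evidence Seq[Int] => Seq[Int] is found

val single = new MyContainer[Int] { def containedValue = 42 }
// single.aggregate[Int](_ + _) // compile error: no implicit Int => Seq[Int]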
I don't know what this pattern is called (can anyone help out?), but I think it's a neat trick that comes in handy sometimes.
You can see a good example of that in the Scala library if you look at the method signature of Seq.sum.
In the case of sum, a different implicit parameter type is used: there, the implicit parameter is evidence that the contained type is numeric, and therefore a sum can be built out of all the contained values.
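Schematically, that signature looks like this (MySeq is a stand-in; the real definition lives in the collections hierarchy):

trait MySeq[A] {
  def foldLeft[B](z: B)(op: (B, A) => B): B

  // sum compiles only when Numeric evidence exists for the element type
  def sum[B >: A](implicit num: Numeric[B]): B = foldLeft(num.zero)(num.plus)
}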
That's not the only use of implicits, and certainly not the most prominent, but I'd say it's an honorable mention. :-)

How can I combine fluent interfaces with a functional style in Scala?

I've been reading about the OO 'fluent interface' approach in Java, JavaScript and Scala and I like the look of it, but have been struggling to see how to reconcile it with a more type-based/functional approach in Scala.
To give a very specific example of what I mean: I've written an API client which can be invoked like this:
val response = MyTargetApi.get("orders", 24)
The return value from get() is a Tuple3 type called RestfulResponse, as defined in my package object:
// 1. Return code
// 2. Response headers
// 3. Response body (Option)
type RestfulResponse = (Int, List[String], Option[String])
This works fine - and I don't really want to sacrifice the functional simplicity of a tuple return value - but I would like to extend the library with various 'fluent' method calls, perhaps something like this:
val response = MyTargetApi.get("customers", 55).throwIfError()
// Or perhaps:
MyTargetApi.get("orders", 24).debugPrint(verbose=true)
How can I combine the functional simplicity of get() returning a typed tuple (or similar) with the ability to add more 'fluent' capabilities to my API?
It seems you are dealing with the client side of a REST-style API, and your get method is what triggers the actual request/response cycle. It looks like you'd have to deal with:

properties of the transport (like credentials, debug level, error handling)
providing data for the input (your id and the type of record (order or customer))
doing something with the results
I think that for the properties of the transport, you can put some of them into the constructor of the MyTargetApi object, but you can also create a query object that stores them for a single query and can be set in a fluent way using a query() method:
MyTargetApi.query().debugPrint(verbose=true).throwIfError()
This would return some stateful Query object that stores the values for log level, error handling, and so on. For providing the input data, you can also use the query object to set those values, but instead of returning the response itself, return a QueryResult:
class Query {
  private var _verbose = false // backing state for the fluent setters

  def debugPrint(verbose: Boolean): this.type = { _verbose = verbose; this }
  def throwIfError(): this.type = { ... }

  def get(tpe: String, id: Int): QueryResult[RestfulResponse] =
    new QueryResult[RestfulResponse] {
      def run(): RestfulResponse = ... // code to make the REST call goes here
    }
}
trait QueryResult[A] { self =>
  def run(): A

  def map[B](f: (A) => B): QueryResult[B] = new QueryResult[B] {
    def run(): B = f(self.run())
  }

  def flatMap[B](f: (A) => QueryResult[B]): QueryResult[B] = new QueryResult[B] {
    def run(): B = f(self.run()).run()
  }
}
Then to eventually get the results you call run. So at the end of the day you can call it like this:
MyTargetApi.query()
  .debugPrint(verbose = true)
  .throwIfError()
  .get("customers", 22)
  .map(resp => resp._3.map(_.length)) // body
  .run()
Which should be a verbose request that will error out on issue, retrieve the customers with id 22, keep the body and get its length as an Option[Int].
The idea is that you can use map to define computations on a result you do not yet have. If we add flatMap to it, then you could also combine two computations from two different queries.
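For example (a sketch, reusing the Query and QueryResult definitions above), two queries can be combined in a for-comprehension before anything actually runs:

val combined: QueryResult[(RestfulResponse, RestfulResponse)] =
  for {
    customer <- MyTargetApi.query().get("customers", 22)
    order    <- MyTargetApi.query().get("orders", 24)
  } yield (customer, order)

val (customerResponse, orderResponse) = combined.run() // both requests happen here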
To be honest, I think you need to feel your way around a little more, because the example is not obviously functional, nor particularly fluent. You might be conflating fluency with non-idempotence, in the sense that your debugPrint method is presumably performing I/O and throwIfError is throwing exceptions. Is that what you mean?
If you are referring to whether a stateful builder is functional, the answer is "not in the purest sense". However, note that a builder does not have to be stateful.
case class Person(name: String, age: Int)
Firstly; this can be created using named parameters:
Person(name="Oxbow", age=36)
Or, a stateless builder:
object Person {
  def withName(name: String) =
    new { def andAge(age: Int) = new Person(name, age) }
}
Hey presto:
scala> Person withName "Oxbow" andAge 36
As for your use of untyped strings to define the query you are making: this is poor form in a statically-typed language. What is more, there is no need for it:
sealed trait Query
case object orders extends Query

def get(query: Query): Result
Hey presto:
api get orders
Although I think this is a bad idea: you shouldn't have a single method which can give you back notionally completely different types of results.
To conclude: I personally think there is no reason whatsoever that fluency and functional cannot mix, since functional just indicates the lack of mutable state and the strong preference for idempotent functions to perform your logic in.
Here's one for you:
args.map(_.toInt)
args map toInt
I would argue that the second is more fluent. It's possible if you define:
val toInt = (_: String).toInt
That is; if you define a function. I find functions and fluency mix very well in Scala.
You could try having get() return a wrapper object that might look something like this:

type RestfulResponse = (Int, List[String], Option[String])

class ResponseWrapper(private val rr: RestfulResponse /* and maybe some flags as additional arguments, or something? */) {
  def get: RestfulResponse = rr

  def throwIfError: RestfulResponse = {
    // Throw your exception if you detect an error
    rr // And return the response if you didn't detect an error
  }

  def debugPrint(verbose: Boolean /* and whatever other parameters you had in mind */): Unit = {
    // All of your debug-printing logic
  }

  // Any and all other methods that you want this API response to be able to execute
}
Basically, this allows you to put your response into a container that has all of these nice methods that you want, and if you simply want the wrapped response, you can just call the wrapper's get method.
Of course, the downside is that you will need to change your API a bit, if that's worrisome to you at all. Actually, you could probably avoid changing your API if you instead created an implicit conversion from RestfulResponse to ResponseWrapper and vice versa. That's something worth considering.
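A sketch of that conversion route, so existing callers that expect a plain RestfulResponse keep compiling:

implicit def wrapResponse(rr: RestfulResponse): ResponseWrapper =
  new ResponseWrapper(rr)

implicit def unwrapResponse(rw: ResponseWrapper): RestfulResponse =
  rw.get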

Scala dependency injection: alternatives to implicit parameters

Please pardon the length of this question.
I often need to create some contextual information at one layer of my code, and consume that information elsewhere. I generally find myself using implicit parameters:
def foo(params)(implicit cx: MyContextType) = ...
implicit val context = makeContext()
foo(params)
This works, but requires the implicit parameter to be passed around a lot, polluting the method signatures of layer after layer of intervening functions, even if they don't care about it themselves.
def foo(params)(implicit cx: MyContextType) = ... bar() ...
def bar(params)(implicit cx: MyContextType) = ... qux() ...
def qux(params)(implicit cx: MyContextType) = ... ged() ...
def ged(params)(implicit cx: MyContextType) = ... mog() ...
def mog(params)(implicit cx: MyContextType) = cx.doStuff(params)
implicit val context = makeContext()
foo(params)
I find this approach ugly, but it does have one advantage though: it's type safe. I know with certainty that mog will receive a context object of the right type, or it wouldn't compile.
It would alleviate the mess if I could use some form of "dependency injection" to locate the relevant context. The quotes are there to indicate that this is different from the usual dependency injection patterns found in Scala.
The start point foo and the end point mog may exist at very different levels of the system. For example, foo might be a user login controller, and mog might be doing SQL access. There may be many users logged in at once, but there's only one instance of the SQL layer. Each time mog is called by a different user, a different context is needed. So the context can't be baked into the receiving object, nor do you want to merge the two layers in any way (like the Cake Pattern). I'd also rather not rely on a DI/IoC library like Guice or Spring. I've found them very heavy and not very well suited to Scala.
So what I think I need is something that lets mog retrieve the correct context object for it at runtime, a bit like a ThreadLocal with a stack in it:
def foo(params) = ...bar()...
def bar(params) = ...qux()...
def qux(params) = ...ged()...
def ged(params) = ...mog()...
def mog(params) = { val cx = retrieveContext(); cx.doStuff(params) }
val context = makeContext()
usingContext(context) { foo(params) }
But that would fall down as soon as an asynchronous actor was involved anywhere in the chain. It doesn't matter which actor library you use: if the code runs on a different thread, it loses the ThreadLocal.
So... is there a trick I'm missing? A way of passing information contextually in Scala that doesn't pollute the intervening method signatures, doesn't bake the context into the receiver statically, and is still type-safe?
The Scala standard library includes something like your hypothetical usingContext, called DynamicVariable. The question "When should we use scala.util.DynamicVariable?" has some information about it. DynamicVariable does use a ThreadLocal under the hood, so many of your issues with ThreadLocal will remain.
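For reference, a sketch of usingContext/retrieveContext on top of DynamicVariable, with MyContextType standing in for the asker's context type:

import scala.util.DynamicVariable

object ContextHolder {
  private val current = new DynamicVariable[Option[MyContextType]](None)

  def usingContext[T](cx: MyContextType)(body: => T): T =
    current.withValue(Some(cx))(body)

  def retrieveContext(): MyContextType =
    current.value.getOrElse(sys.error("no context in scope"))
}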
The reader monad is a functional alternative to explicitly passing an environment (http://debasishg.blogspot.com/2010/12/case-study-of-cleaner-composition-of.html). The Reader monad can be found in Scalaz (http://code.google.com/p/scalaz/). However, the Reader monad does "pollute" your signatures in that their types must change, and in general monadic programming can cause a lot of restructuring to your code; the extra object allocations for all the closures may not sit well if performance or memory is a concern.
Neither of these techniques will automatically share a context over an actor message send.
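For a feel of the Reader approach, here is a minimal hand-rolled sketch (no Scalaz): a Reader is just a function from the environment to a result, with map/flatMap threading the environment along.

case class Reader[E, A](run: E => A) {
  def map[B](f: A => B): Reader[E, B] = Reader(e => f(run(e)))
  def flatMap[B](f: A => Reader[E, B]): Reader[E, B] = Reader(e => f(run(e)).run(e))
}

// mog would declare its need in its return type instead of a parameter list:
//   def mog(params: Params): Reader[MyContextType, Result] = Reader(cx => cx.doStuff(params))
// and the environment is supplied once, at the outermost layer:
//   mog(params).run(makeContext())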
A little late to the party, but have you considered using implicit parameters in your classes' constructors?
class Foo(implicit biz: Biz) {
  def f() = biz.doStuff
}

class Biz {
  def doStuff = println("do stuff called")
}
If you wanted to have a new biz for each call to f() you could let the implicit parameter be a function returning a new biz:
class Foo(implicit biz: () => Biz) {
  def f() = biz().doStuff
}
Now you simply need to provide the context when constructing Foo. Which you can do like this:
trait Context {
  private implicit def biz = () => new Biz
  implicit def foo = new Foo // the implicit parameter biz will be resolved to the biz method above
}

class UI extends Context {
  def render = foo.f()
}
Note that the implicit biz method will not be visible in UI. So we basically hide away those details :)
I wrote a blog post about using implicit parameters for dependency injection which can be found here (shameless self promotion ;) )
I think that the dependency injection from Lift does what you want. See the wiki for details on using the doWith() method.
Note that you can use it as a separate library, even if you are not running lift.
You asked this just about a year ago, but here's another possibility. If you only ever need to call one method:
def fooWithContext(cx: MyContextType)(params) = {
  def bar(params) = ... qux() ...
  def qux(params) = ... ged() ...
  def ged(params) = ... mog() ...
  def mog(params) = cx.doStuff(params)

  ... bar() ...
}
fooWithContext(makeContext())(params)
If you need all the methods to be externally visible:
case class Contextual(cx: MyContextType) {
  def foo(params) = ... bar() ...
  def bar(params) = ... qux() ...
  def qux(params) = ... ged() ...
  def ged(params) = ... mog() ...
  def mog(params) = cx.doStuff(params)
}
Contextual(makeContext()).foo(params)
This is basically the cake pattern, except that if all your stuff fits into a single file, you don't need all the messy trait stuff to combine it into one object: you can just nest them. Doing it this way also makes cx properly lexically scoped, so you don't end up with funny behavior when you use futures and actors and such. I suspect that if you use the new AnyVal, you could even do away with the overhead of allocating the Contextual object.
If you want to split your stuff into multiple files using traits, you only really need a single trait per file to hold everything and put the MyContextType properly in scope, if you don't need the fancy replaceable-components-via-inheritance thing most cake pattern examples have.
// file1.scala
case class Contextual(cx: MyContextType) extends Trait1 with Trait2 {
  def foo(params) = ... bar() ...
  def bar(params) = ... qux() ...
}

// file2.scala
trait Trait1 { self: Contextual =>
  def qux(params) = ... ged() ...
  def ged(params) = ... mog() ...
}

// file3.scala
trait Trait2 { self: Contextual =>
  def mog(params) = cx.doStuff(params)
}

// file4.scala
Contextual(makeContext()).foo(params)
It looks kinda messy in a small example, but remember: you only need to split things off into a new trait when the code gets too big to sit comfortably in one file. By that point your files are reasonably big, so an extra two lines of boilerplate on a 200-500 line file is not so bad really.
EDIT:
This works with asynchronous stuff too
case class Contextual(cx: MyContextType) {
  def foo(params) = ... bar() ...
  def bar(params) = ... qux() ...
  def qux(params) = ... ged() ...
  def ged(params) = ... mog() ...

  def mog(params) = Future { cx.doStuff(params) }
  def mog2(params) = (0 to 100).par.map(x => x * cx.getSomeValue)
  def mog3(params) = Props(new MyActor(cx.getSomeValue))
}
Contextual(makeContext()).foo(params)
It Just Works using nesting. I'd be impressed if you could get similar functionality working with DynamicVariable.
You'd need a special subclass of Future that stores the current DynamicVariable.value when created, and hook into the ExecutionContext's prepare() or execute() method to extract the value and properly set up the DynamicVariable before executing the Future.
Then you'd need a special scala.collection.parallel.TaskSupport to do something similar in order to get parallel collections working. And a special akka.actor.Props in order to do something similar for that.
Every time there's a new mechanism of creating asynchronous tasks, DynamicVariable based implementations will break and you'll have weird bugs where you end up pulling up the wrong Context. Every time you add a new DynamicVariable to keep track of, you'll need to patch all your special executors to properly set/unset this new DynamicVariable. Using nesting you can just let lexical closure take care of all of this for you.
(I think Futures, collections.parallel and Props count as "layers in between that aren't my code")
Similar to the implicit approach, with Scala Macros you can do auto-wiring of objects using constructors - see my MacWire project (and excuse the self-promotion).
MacWire also has scopes (quite customisable, a ThreadLocal implementation is provided). However, I don't think you can propagate context across actor calls with a library - you need to carry some identifier around. This can be e.g. through a wrapper for sending actor messages, or more directly with the message.
Then, as long as the identifier is unique per request/session/whatever your scope is, it's just a matter of looking things up in a map via a proxy (as the MacWire scopes do; there the "identifier" isn't needed explicitly because it is stored in the ThreadLocal).
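A sketch of that lookup idea with hypothetical names, where the identifier travels inside the message and the context is resolved on the receiving side (MyContextType again stands in for the asker's context type):

import scala.collection.concurrent.TrieMap

case class Request(contextId: Long, payload: String) // the id rides along with the message

object ContextStore {
  private val contexts = TrieMap.empty[Long, MyContextType]
  def register(id: Long, cx: MyContextType): Unit = { contexts.put(id, cx); () }
  def lookup(id: Long): Option[MyContextType] = contexts.get(id)
}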