Is there a way to define scopes similar to the Rails ActiveRecord way?
http://datamapper.org/docs/find.html
class Zoo
  # all the keys and property setup here

  def self.open
    all(:open => true)
  end

  def self.big
    all(:animal_count.gte => 1000)
  end
end
big_open_zoos = Zoo.big.open
So there's nothing special about scopes; it's just Ruby.
Suppose that you want to traverse an object graph in a navigational way, similar to the way we traverse file systems.
For example, imagine you have an object graph that supports this expression:
var x = objName.foo.bar.baz.fieldName
We can encode this data access expression as a path as follows:
"objName/foo/bar/baz/fieldName"
By breaking this path into segments, we can easily traverse an object graph in JavaScript because in addition to the traditional dot notation, it also supports the array access notation: objName["foo"]["bar"]["baz"]["fieldName"].
In Java or JVM Scala, we can use reflection to traverse object graphs, but how would you follow these kinds of paths to traverse object graphs of Scala objects in the Scala.js environment?
In other words, given a path similar in form to URIs, how would you walk through Scala.js objects, and fields?
You could surely use a macro to convert a constant String representation to an accessor at compile-time, but I guess you want to be able to do this at run-time for arbitrary strings.
If the goal is to be able to pass around partially constructed paths, then this is just a matter of passing accessor functions around. You could use something like this to make it prettier:
class Path[+A](private val value: () => A) {
  def resolve: A = value()
  def /[B](f: A => B): Path[B] = new Path(() => f(resolve))
}

implicit class PathSyntax[A](val a: A) extends AnyVal {
  def /[B](f: A => B): Path[B] = new Path(() => a) / f
}
object Foo {
  val bar = Bar
}

object Bar {
  val baz = Baz
}

object Baz
The syntax is not exactly as pretty, but this is now typesafe and not too bad looking:
val barPath: Path[Bar.type] = Foo / (_.bar)
val bazPath: Path[Baz.type] = barPath / (_.baz)
Again, this would need more work, e.g. there is no proper equality/comparison between Paths.
If you want to stick with your approach, I'm not aware of a direct solution. However, I would argue that the whole point of using Scala.js is to keep strong types and avoid any pattern that could lead to runtime errors the compiler could have prevented had you let it do its job.
I have this module
defmodule ElixirMeta.LangLoader do
  @external_resource [Path.join([__DIR__, "es.json"]),
                      Path.join([__DIR__, "en.json"])]

  defmacro __using__(_) do
    for lang <- ["es", "en"] do
      {:ok, body} = File.read(Path.join([__DIR__, "#{lang}.json"]))
      {:ok, json} = Poison.decode(body)

      quote do
        def lang(unquote(lang)), do: unquote(Macro.escape(json))
      end
    end
  end
end
defmodule ElixirMeta.Lang do
  use ElixirMeta.LangLoader
end
I know I can define a function like:
def lang(unquote(lang)), do: unquote(Macro.escape(json))
And can be called like this:
Lang.lang("es")
I can even modify its function name, like this:
def unquote(:"lang_#{lang}")(), do: unquote(Macro.escape(json))
And be called like this:
Lang.lang_es
But is it possible to do the same with a module attribute?
And since module attributes are resolved at compile time, I think it's not possible to initialize one from the macro? Maybe I would have to do it within a before_compile hook?
For the purpose of the example, I would like to access Lang.lang_es and Lang.lang_en as @lang_es and @lang_en LangLoader attributes.
Yes, one can achieve that with Module.put_attribute/3 (I have created an MCVE out of your initial code):
defmodule ElixirMeta.LangLoader do
  defmacro __using__(_) do
    [
      (quote do: Module.register_attribute(__MODULE__, :langs, accumulate: true))
      | for lang <- ["es", "en"] do
          quote do
            def lang(unquote(lang)), do: unquote(lang)

            Module.put_attribute(__MODULE__, :"lang_#{unquote(lang)}", unquote(lang))
            Module.put_attribute(__MODULE__, :langs, unquote(lang))
          end
        end
    ]
  end
end
defmodule ElixirMeta.Lang do
  use ElixirMeta.LangLoader

  def test do
    IO.inspect {
      @lang_es,
      Enum.find(@langs, & &1 == "es"),
      lang("es")
    }, label: "Variants"
  end
end
ElixirMeta.Lang.test
#⇒ Variants: {"es", "es", "es"}
The code above declares the accumulated attribute (@attr :foo followed by @attr :bar would produce the value [:bar, :foo], most recent first, instead of overwriting the attribute), the single per-language attributes, and the function.
Please note that there is no way to access a module attribute from outside the module, since module attributes are compile-time entities.
I have some code that requires an Environment Variable to run correctly. But when I run my unit tests, it bombs out once it reaches that point unless I specifically export the variable in the terminal. I am using Scala and sbt. My code does something like this:
class Something {
  val envVar = sys.env("ENVIRONMENT_VARIABLE")
  println(envVar)
}
How can I mock this in my unit tests so that whenever sys.env("ENVIRONMENT_VARIABLE") is called, it returns a string or something like that?
If you can't wrap the existing code, you can modify the unmodifiable map backing System.getenv() for tests (via reflection, so this relies on JDK internals):
def setEnv(key: String, value: String) = {
  val field = System.getenv().getClass.getDeclaredField("m")
  field.setAccessible(true)
  val map = field.get(System.getenv()).asInstanceOf[java.util.Map[java.lang.String, java.lang.String]]
  map.put(key, value)
}
setEnv("ENVIRONMENT_VARIABLE", "TEST_VALUE1")
If you need to test console output, you can use a separate PrintStream and capture what is written to it.
import java.nio.charset.StandardCharsets

val baos = new java.io.ByteArrayOutputStream
val ps = new java.io.PrintStream(baos)

Console.withOut(ps) {
  // your test code
  println(sys.env("ENVIRONMENT_VARIABLE"))
}

// Get output and verify
val output: String = baos.toString(StandardCharsets.UTF_8.toString)
println("Test Output: [%s]".format(output))
assert(output.contains("TEST_VALUE1"))
Ideally, environment access should be rewritten to retrieve the data in a safe manner. Either with a default value ...
scala> scala.util.Properties.envOrElse("SESSION", "unknown")
res70: String = Lubuntu
scala> scala.util.Properties.envOrElse("SECTION", "unknown")
res71: String = unknown
... or as an option ...
scala> scala.util.Properties.envOrNone("SESSION")
res72: Option[String] = Some(Lubuntu)
scala> scala.util.Properties.envOrNone("SECTION")
res73: Option[String] = None
... or both [see envOrSome()].
I don't know of any way to make it look like any/all random env vars are set without actually setting them before running your tests.
You shouldn't test it in a unit test.
Just extract it out:
class F(val param: String) {
  ...
}
In your production code you do:
new F(sys.env("ENVIRONMENT_VARIABLE"))
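With the dependency extracted this way, a unit test can construct the class directly with a stubbed value and never touch the environment (a minimal sketch; the names EnvDependent and describe are illustrative, not from the question):

```scala
// Production code reads the environment exactly once, at the edge:
//   new EnvDependent(sys.env("ENVIRONMENT_VARIABLE"))
class EnvDependent(val envVar: String) {
  def describe: String = s"configured with $envVar"
}

// In a test, just inject a stub value directly
val underTest = new EnvDependent("TEST_VALUE1")
assert(underTest.describe == "configured with TEST_VALUE1")
```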
I would encapsulate the configuration in an abstraction that does not expose the implementation, maybe a class ConfigValue.
I would put the implementation in a class ConfigValueInEnvVar extends ConfigValue
This allows me to test the code that relies on the ConfigValue without having to set or clear environment variables.
It also allows me to test the base implementation of storing a value in an environment variable as a separate feature.
It also allows me to store the configuration in a database, a file or anything else, without changing my business logic.
I select implementation in the application layer.
I put the environment variable logic in a supporting domain.
I put the business logic and the traits/interfaces in the core domain.
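A minimal sketch of this layering (the names ConfigValue and ConfigValueInEnvVar come from the answer above; FixedConfigValue and BusinessLogic are illustrative):

```scala
// Core domain: the trait/interface the business logic depends on
trait ConfigValue { def get: String }

// Supporting domain: the environment-variable implementation
class ConfigValueInEnvVar(name: String) extends ConfigValue {
  def get: String = sys.env(name)
}

// Test double: no environment variables involved
class FixedConfigValue(value: String) extends ConfigValue {
  def get: String = value
}

// Business logic sees only the abstraction
class BusinessLogic(config: ConfigValue) {
  def run(): String = s"running with ${config.get}"
}

// The application layer selects the implementation;
// tests pass the fixed one, production passes the env-var one
assert(new BusinessLogic(new FixedConfigValue("TEST")).run() == "running with TEST")
```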
I seem to be doing the following everywhere in service APIs to create Slick transactions:
db.withTransaction { implicit session =>
  .....
}
I want something more DSL-looking instead of writing db.withTransaction everywhere.
I came up with the following:
def executeInSlickTransaction[T](body: => T) = {
  val db = DataSource.getDb
  db.withTransaction { implicit session =>
    body
  }
}
So now I can call
executeInSlickTransaction{
....
}
But then I also need the implicit session inside executeInSlickTransaction, e.g. something like executeInSlickTransaction { implicit session => ... }, because the implicit session is required by the DAO calls (made from within the executeInSlickTransaction block) that expect it.
Is there a way to get the implicit session back from executeInSlickTransaction?
The val db can be stored somewhere else; it's just configuration.
You cannot get rid of the implicit keyword if you want to pass around the session implicitly. But you don't have to. If you just execute a single query (or a single function session => results) you can just do something like this:
import db.withTransaction
Slick 2.0:
withTransaction{ someQuery.list()(_) }
Slick 2.1:
withTransaction{ someQuery.list(_) }
Or shorten the name of the session variable, as it doesn't really matter if it's implicit:
withTransaction { implicit s =>
  someQuery.list
}
This doesn't seem like much of a "DSL" to me; it's just another method. At best you could shorten the name to something shorter:
import db.{withTransaction => t}
t{ implicit s => q.list }
If you are new to Scala and don't understand some concept used here, it should be explained in about every Scala book, but of course there are lots of things to take in at the same time when you are new.
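For completeness, the helper from the question can supply the session to its caller by taking a function of the session instead of a by-name block (the loan pattern). Session and Database below are self-contained stand-ins for the Slick types, so this is a sketch of the shape, not of the actual Slick API:

```scala
class Session // stand-in for Slick's Session

class Database { // stand-in for Slick's Database
  def withTransaction[T](body: Session => T): T = {
    val session = new Session
    body(session) // real Slick would begin/commit/rollback around this
  }
}

val db = new Database

// The helper simply forwards the session it is loaned to the caller's function
def executeInSlickTransaction[T](body: Session => T): T =
  db.withTransaction(body)

// The caller marks the parameter implicit so DAO calls inside can pick it up
val result = executeInSlickTransaction { implicit session => 42 }
assert(result == 42)
```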
Please pardon the length of this question.
I often need to create some contextual information at one layer of my code, and consume that information elsewhere. I generally find myself using implicit parameters:
def foo(params)(implicit cx: MyContextType) = ...
implicit val context = makeContext()
foo(params)
This works, but requires the implicit parameter to be passed around a lot, polluting the method signatures of layer after layer of intervening functions, even if they don't care about it themselves.
def foo(params)(implicit cx: MyContextType) = ... bar() ...
def bar(params)(implicit cx: MyContextType) = ... qux() ...
def qux(params)(implicit cx: MyContextType) = ... ged() ...
def ged(params)(implicit cx: MyContextType) = ... mog() ...
def mog(params)(implicit cx: MyContextType) = cx.doStuff(params)
implicit val context = makeContext()
foo(params)
I find this approach ugly, but it does have one advantage: it's type-safe. I know with certainty that mog will receive a context object of the right type, or it wouldn't compile.
It would alleviate the mess if I could use some form of "dependency injection" to locate the relevant context. The quotes are there to indicate that this is different from the usual dependency injection patterns found in Scala.
The start point foo and the end point mog may exist at very different levels of the system. For example, foo might be a user login controller, and mog might be doing SQL access. There may be many users logged in at once, but there's only one instance of the SQL layer. Each time mog is called by a different user, a different context is needed. So the context can't be baked into the receiving object, nor do you want to merge the two layers in any way (like the Cake Pattern). I'd also rather not rely on a DI/IoC library like Guice or Spring. I've found them very heavy and not very well suited to Scala.
So what I think I need is something that lets mog retrieve the correct context object for it at runtime, a bit like a ThreadLocal with a stack in it:
def foo(params) = ...bar()...
def bar(params) = ...qux()...
def qux(params) = ...ged()...
def ged(params) = ...mog()...
def mog(params) = { val cx = retrieveContext(); cx.doStuff(params) }
val context = makeContext()
usingContext(context) { foo(params) }
But that would fail as soon as an asynchronous actor was involved anywhere in the chain. It doesn't matter which actor library you use: if the code runs on a different thread, it loses the ThreadLocal.
So... is there a trick I'm missing? A way of passing information contextually in Scala that doesn't pollute the intervening method signatures, doesn't bake the context into the receiver statically, and is still type-safe?
The Scala standard library includes something like your hypothetical usingContext, called DynamicVariable. This question has some information about it: "When should we use scala.util.DynamicVariable?". DynamicVariable uses a ThreadLocal under the hood, so many of your issues with ThreadLocal will remain.
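A minimal sketch of DynamicVariable, which behaves much like the hypothetical usingContext from the question (within a single thread; MyContext here is illustrative):

```scala
import scala.util.DynamicVariable

case class MyContext(name: String)

val currentContext = new DynamicVariable[MyContext](MyContext("default"))

// Deep in the call chain, no parameter needed:
def mog(): String = currentContext.value.name

// withValue plays the role of usingContext: the binding is visible for
// the dynamic extent of the block, then restored afterwards
val inside = currentContext.withValue(MyContext("request-42")) { mog() }
val after = mog()

assert(inside == "request-42")
assert(after == "default")
```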
The Reader monad is a functional alternative to explicitly passing an environment (http://debasishg.blogspot.com/2010/12/case-study-of-cleaner-composition-of.html). It can be found in Scalaz (http://code.google.com/p/scalaz/). However, the Reader monad does "pollute" your signatures in that their types must change, and in general monadic programming can cause a lot of restructuring to your code; plus, the extra object allocations for all the closures may not sit well if performance or memory is a concern.
Neither of these techniques will automatically share a context over an actor message send.
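For reference, a hand-rolled Reader is tiny (this is a sketch of the idea, not the Scalaz API): each layer becomes a function of the environment, for-comprehensions thread it invisibly, and the environment is supplied once at the edge, at the cost of the Reader type appearing in every signature:

```scala
case class Reader[E, A](run: E => A) {
  def map[B](f: A => B): Reader[E, B] = Reader(e => f(run(e)))
  def flatMap[B](f: A => Reader[E, B]): Reader[E, B] = Reader(e => f(run(e)).run(e))
}

case class MyContext(userId: String)

// Each layer returns a Reader instead of taking an implicit parameter
def mog(params: String): Reader[MyContext, String] =
  Reader(cx => s"doStuff($params) for ${cx.userId}")

def ged(params: String): Reader[MyContext, String] =
  for { r <- mog(params) } yield r

// The environment is supplied once, at the edge
val result = ged("p").run(MyContext("alice"))
assert(result == "doStuff(p) for alice")
```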
A little late to the party, but have you considered using implicit parameters to your classes constructors?
class Foo(implicit biz: Biz) {
  def f() = biz.doStuff
}

class Biz {
  def doStuff = println("do stuff called")
}
If you wanted to have a new biz for each call to f() you could let the implicit parameter be a function returning a new biz:
class Foo(implicit biz: () => Biz) {
  def f() = biz().doStuff
}
Now you simply need to provide the context when constructing Foo. Which you can do like this:
trait Context {
  private implicit def biz = () => new Biz

  implicit def foo = new Foo // the implicit parameter biz will be resolved to the biz method above
}

class UI extends Context {
  def render = foo.f()
}
Note that the implicit biz method will not be visible in UI. So we basically hide away those details :)
I wrote a blog post about using implicit parameters for dependency injection, which can be found here (shameless self-promotion ;)).
I think that the dependency injection from Lift does what you want. See the wiki for details on using the doWith() method.
Note that you can use it as a separate library, even if you are not running Lift.
You asked this just about a year ago, but here's another possibility. If you only ever need to call one method:
def fooWithContext(cx: MyContextType)(params) = {
  def bar(params) = ... qux() ...
  def qux(params) = ... ged() ...
  def ged(params) = ... mog() ...
  def mog(params) = cx.doStuff(params)
  ... bar() ...
}
fooWithContext(makeContext())(params)
If you need all the methods to be externally visible:
case class Contextual(cx: MyContextType) {
  def foo(params) = ... bar() ...
  def bar(params) = ... qux() ...
  def qux(params) = ... ged() ...
  def ged(params) = ... mog() ...
  def mog(params) = cx.doStuff(params)
}
Contextual(makeContext()).foo(params)
This is basically the cake pattern, except that if all your stuff fits into a single file, you don't need all the messy trait stuff to combine it into one object: you can just nest them. Doing it this way also makes cx properly lexically scoped, so you don't end up with funny behavior when you use futures and actors and such. I suspect that if you use the new AnyVal, you could even do away with the overhead of allocating the Contextual object.
If you want to split your stuff into multiple files using traits, you only really need a single trait per file to hold everything and put the MyContextType properly in scope, if you don't need the fancy replaceable-components-via-inheritance thing most cake pattern examples have.
// file1.scala
case class Contextual(cx: MyContextType) extends Trait1 with Trait2 {
  def foo(params) = ... bar() ...
  def bar(params) = ... qux() ...
}

// file2.scala
trait Trait1 { self: Contextual =>
  def qux(params) = ... ged() ...
  def ged(params) = ... mog() ...
}

// file3.scala
trait Trait2 { self: Contextual =>
  def mog(params) = cx.doStuff(params)
}
// file4.scala
Contextual(makeContext()).foo(params)
It looks kinda messy in a small example, but remember, you only need to split it off into a new trait if the code is getting too big to sit comfortably in one file. By that point your files are reasonably big, so an extra 2 lines of boilerplate on a 200-500-line file is not so bad really.
EDIT:
This works with asynchronous stuff too
case class Contextual(cx: MyContextType) {
  def foo(params) = ... bar() ...
  def bar(params) = ... qux() ...
  def qux(params) = ... ged() ...
  def ged(params) = ... mog() ...
  def mog(params) = Future { cx.doStuff(params) }
  def mog2(params) = (0 to 100).par.map(x => x * cx.getSomeValue)
  def mog3(params) = Props(new MyActor(cx.getSomeValue))
}
Contextual(makeContext()).foo(params)
It Just Works using nesting. I'd be impressed if you could get similar functionality working with DynamicVariable.
You'd need a special subclass of Future that stores the current DynamicVariable.value when created, and hook into the ExecutionContext's prepare() or execute() method to extract the value and properly set up the DynamicVariable before executing the Future.
Then you'd need a special scala.collection.parallel.TaskSupport to do something similar in order to get parallel collections working. And a special akka.actor.Props in order to do something similar for that.
Every time there's a new mechanism of creating asynchronous tasks, DynamicVariable based implementations will break and you'll have weird bugs where you end up pulling up the wrong Context. Every time you add a new DynamicVariable to keep track of, you'll need to patch all your special executors to properly set/unset this new DynamicVariable. Using nesting you can just let lexical closure take care of all of this for you.
(I think Futures, collections.parallel and Props count as "layers in between that aren't my code")
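The "lexical closure does the work" point can be demonstrated with a plain Future: the context is captured by the closure and travels with the task regardless of which thread runs it (a minimal sketch; MyContextType and doStuff are filled in with illustrative bodies):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

case class MyContextType(value: String) {
  def doStuff(params: String): String = s"$params handled in $value"
}

case class Contextual(cx: MyContextType) {
  // cx is closed over lexically; the Future body may run on any thread
  def mog(params: String): Future[String] = Future { cx.doStuff(params) }
}

val result = Await.result(Contextual(MyContextType("ctx-1")).mog("p"), 5.seconds)
assert(result == "p handled in ctx-1")
```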
Similar to the implicit approach, with Scala Macros you can do auto-wiring of objects using constructors - see my MacWire project (and excuse the self-promotion).
MacWire also has scopes (quite customisable, a ThreadLocal implementation is provided). However, I don't think you can propagate context across actor calls with a library - you need to carry some identifier around. This can be e.g. through a wrapper for sending actor messages, or more directly with the message.
Then, as long as the identifier is unique per request/session/whatever your scope is, it's just a matter of looking things up in a map via a proxy (as the MacWire scopes do; the "identifier" there isn't needed, as it is stored in the ThreadLocal).