I want to build role functionality for my application. So, I thought object would come in handy, because I need a singleton for each of the different roles.
Therefore, I have the following code:
import java.util.UUID

trait Role {
  def id: UUID
  def name: String
}

object Admin extends Role {
  val id = UUID.randomUUID()
  val name = "admin"
}

object Pro extends Role {
  val id = UUID.randomUUID()
  val name = "pro"
}
However, after I persisted these roles in my database and restarted the application, I noticed that the ids of the roles changed, meaning they are no longer the same roles I persisted when I first started the application. So, if a role with the same name has already been stored in the database, I would need to read its id back and set it on the singleton object. I thought I could use parameters to initialize the Admin and Pro objects, but apparently this does not work.
How can this be done?
First, it is difficult to discuss the problem from this code alone, without knowing how you do the database persistence part.
Following your code, the id is initialised by calling randomUUID, so naturally you get a new one on each start. The system works as designed.
Second, I am not sure we would agree on what a singleton is and what the semantics of the two objects are.
To me it looks as if you actually want two different instances of the type Role, rather than one singleton type Admin and a different singleton type Pro, because the two differ only in their attribute values, not in structure.
A singleton object is already an object, indeed the sole object of its type. So the notion of setting its values from outside during some sort of construction, like you would with classes during instantiation, is not really applicable here.
Take a look at the below code:
object Admin extends Role {
  // name must be initialised before id, because id's initialiser uses it
  val name = "admin"
  val id = getPersistedIDFromDatabase(name).getOrElse {
    val newId = UUID.randomUUID()
    persistID(name, newId)
    newId
  }
}
// getPersistedIDFromDatabase executes a `select` query and returns an optional id,
// i.e. Some(id) if the admin already exists, otherwise None.
// persistID inserts the freshly generated id for the given name.
Whenever you restart your application, its memory is wiped out. So it has no way to know about your previous ID.
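To make the lookup-or-create flow above concrete, here is a hedged, self-contained sketch with an in-memory RoleStore standing in for the real database (RoleStore and its method names are illustrative; the real versions would run the select/insert queries):

```scala
import java.util.UUID
import scala.collection.mutable

// Hypothetical persistence layer: in the real application these methods
// would execute SQL against the role table instead of touching a Map.
object RoleStore {
  private val byName = mutable.Map.empty[String, UUID]

  def getPersistedIDFromDatabase(name: String): Option[UUID] = byName.get(name)

  def persistID(name: String, id: UUID): UUID = {
    byName(name) = id
    id
  }
}

trait Role {
  def id: UUID
  def name: String
}

object Admin extends Role {
  val name = "admin"
  // Reuse the stored id if one exists; otherwise generate, persist, and use a new one.
  val id = RoleStore.getPersistedIDFromDatabase(name).getOrElse {
    RoleStore.persistID(name, UUID.randomUUID())
  }
}
```

On first access the id is generated and persisted; every later lookup by name returns the same id, which is exactly the property the question is after across restarts (here simulated within one process).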
So I have a schema that is well defined. The data storage that backs it (MongoDB) will allow for this request.
Let's say I have a User class:

class User
    emailAddress
    name
If I'm merging in data from another source (let's say a map/params, and I can properly identify the source), my intention is to put the unused properties in a structure within the User class.
For example: if I'm importing a User from Facebook, they're going to have all kinds of properties beyond just the emailAddress or the name, but I don't know how to deal with those yet.
My question is: how would I design a domain class so that it can handle all of this on the creation of the object? (I'm willing to put a tracer property in to signify the source, i.e. adding [source: Facebook].)
The info coming back from Facebook would be [name: Jim, email: bo#jim.com, friends: 1000, level: 42]. The resulting class would look, and be serialized, as such:

class User
    emailAddress: bo#jim.com
    name: Jim
    extraProperties: [Facebook, [friends: 1000, level: 42]]
What is the best way of going about this? Would it break the domain class model? Is expando something that would work here?
I think the best way to design your domain class would be to look into saving the additional user properties (extraProperties) as a serialised 'document' type object. If you were to convert the sample map you have into, say, JSON or XML (using Grails Converters) and save that to your database as a document / large nvarchar, you then have the flexibility of different properties for each user source.
You could then add custom getters and setters to your domain object which would convert/slurp the document and present it as a map to your controllers/services:
import grails.converters.JSON
import groovy.json.JsonSlurper

String extraProperties

def setExtraProperties(def properties) {
    this.extraProperties = (properties as JSON)?.toString()
}

def getExtraPropertiesMap() {
    def jsonSlurper = new JsonSlurper()
    def extraProps = jsonSlurper.parseText(this.extraProperties)
    return extraProps // access this using map syntax, e.g. extraProps.Facebook.friends
}
I have a class being used in a Play app as a model:
case class Class1(
  id: Pk[UUID] = NotAssigned,
  addedAt: DateTime = null
)
object Class1 {
//....
}
The field added_at has a default value of now() in the db and cannot be NULL there. But nonetheless, when I want to create an instance of Class1, I have to specify it, even though I don't want to send it to the db because it will be set to now() there:
Class1(addedAt = DateTime.now())
Using null as a default value doesn't seem sensible, and using Option[DateTime] = None doesn't seem to make sense either, because the column can't be NULL in the db.
What do I do about this? I want the field to exist in Class1 because it exists in the db, but at the same time I don't want to have to specify it, because it's not needed.
Imagine you just created an object of this class and haven't sent it to the database yet. What would the result of (new Class1).addedAt be? You need the notion of "not assigned", the same as for id. Since it isn't a primary key, the normal way to express this is Option. If you can ensure that outside of your DB access layer you only work with objects that have already been sent to the database, then keep Option[DateTime] in your DAL, but don't use it directly in the parts of the program that don't need to deal with it being unassigned:
case class Class1DAL(..., addedAt: Option[DateTime])

// must ensure it is never seen with x not sent to DB
class Class1(x: Class1DAL) {
  def id = x.id
  def addedAt: DateTime =
    x.addedAt.getOrElse(throw new IllegalStateException("Class1.addedAt unassigned, which should be impossible"))
}
Initialised vs. not-initialised is definitely one valid understanding of an Option value, and doesn't force you to infer where the thing was initialised.
In this case, if you want to avoid misunderstanding, it would be enough to simply rename the value to addedToDb - or use an equivalent doc comment if renaming isn't an option.
I don't think anyone would have a problem with addedToDb being None for an object that hasn't yet been added to the DB!
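A minimal sketch of that renaming, for illustration only: java.time.Instant stands in here for Joda-Time's DateTime, and a plain Option[Long] id stands in for Pk[UUID].

```scala
import java.time.Instant // illustrative stand-in for Joda-Time's DateTime

case class Class1(
  id: Option[Long] = None,           // simplified stand-in for Pk[UUID]
  addedToDb: Option[Instant] = None  // None until the row exists in the database
)

// A freshly constructed object needs no addedToDb value at all...
val draft = Class1()

// ...and the DB layer fills it in once the row is actually written.
val saved = draft.copy(id = Some(1L), addedToDb = Some(Instant.now()))
```

With the addedToDb name, a None value reads naturally as "not yet added to the DB", rather than suggesting the timestamp column might be nullable.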
Also... Don't use null, it's evil
My application has a class ApplicationUsers that has no mutable members. Upon creation of instances, it reads the entire user database (relatively small) into an immutable collection. It has a number of methods to query the data.
I am now faced with the problem of having to create new users (or modify some of their attributes). My current idea is to use an Akka actor that, at a high level, would look like this:
class UserActor extends Actor {
  var users = new ApplicationUsers

  def receive = {
    case GetUsers => sender ! users
    case SomeMutableOperation =>
      PerformTheChangeOnTheDatabase() // does not alter users (which is immutable)
      users = new ApplicationUsers    // reads the database from scratch into a new immutable instance
  }
}
Is this safe? My reasoning is that it should be: whenever users is replaced by SomeMutableOperation, any other threads using previous instances of users already have a handle to an older version and are not affected. Also, any GetUsers request will not be acted upon until the new instance is safely constructed.
Is there anything I am missing? Is my construct safe?
UPDATE: I probably should be using Agents for this, but the question still holds: is the above safe?
You are doing it exactly right: have immutable data types and reference them via var within the actor. This way you can freely share the data and mutability is confined to the actor. The only thing to watch out for is if you reference the var from a closure which is executed outside of the actor (e.g. in a Future transformation or a Props instance). In such a case you need to make a stack-local copy:
val currentUsers = users
other ? Process(users) recoverWith { case _ => backup ? Process(currentUsers) }
In the first case you just grab the value (which is fine), but asking the backup happens on a different thread, hence the need for val currentUsers.
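The hazard here is ordinary closure capture of a var, and it can be demonstrated without Akka at all. This sketch shows why the stack-local val copy pins the value while a closure over the var races with later updates:

```scala
// The actor's state: a var pointing at an immutable value.
var users = "v1"

// A closure over the var sees whatever the var holds when it eventually runs...
val capturesVar = () => users

// ...while copying into a local val snapshots the value at creation time.
val currentUsers = users
val capturesVal = () => currentUsers

// The actor later replaces the reference (e.g. after SomeMutableOperation).
users = "v2"

capturesVar() // "v2" - raced with the update
capturesVal() // "v1" - safe snapshot
```

Inside an actor this matters whenever the closure runs on another thread, e.g. in a Future callback: capture a val copy, never the var itself.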
Looks fine to me. You don't seem to need Agents here.
I am looking for best practices regarding models and ways to persist objects in the database with Play 2.0. I have studied the Play and Typesafe samples for Play 2.0 using Scala.
What I understand is:

- The model is defined in a case class
- All the insert/update/delete/select operations are defined in the companion object of this case class
So if I need to update my Car object to define a new owner, I will have to do:
val updatedCar = myCar.copy(owner=newOwner)
Car.update(updatedCar)
// or
Car.updateOwner(myCar.id.get, newOwner)
I am wondering why the update or delete statements are not in the case class itself:
case class Car(id: Pk[Long] = NotAssigned, owner: String) {
  // note the `=`: without it this method returns Unit and the copy is discarded
  def updateOwner(newOwner: String): Car = {
    DB.withConnection { implicit connection =>
      SQL(
        """
          update car
          set owner = {newOwner}
          where id = {id}
        """
      ).on(
        'id -> id,
        'newOwner -> newOwner
      ).executeUpdate()
    }
    copy(owner = newOwner)
  }
}
Doing so would make it possible to do:
val updatedCar = myCar.updateOwner(newOwner)
Which is what I used to do with Play 1.X using Java and JPA.
Maybe the reason is obvious and due to my small knowledge of Scala.
I think part of the reason is the favoring of immutability in functional languages like Scala.
In your example, you modify 'this.owner'. What would the equivalent operation look like for a delete, and what happens to "this"?
With a companion object, it seems a bit more clear that the passed object (or ID) is not modified, and the returned object or ID is the relevant result of the operation.
Then also, I think another part of the issue is that your example requires an instance first. When you delete an object, what if you just want to delete by an id you got off a form, and don't want to first build a whole instance of the object you intend to delete?
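Both points can be sketched with a companion object whose operations return new values and whose delete needs only an id. The in-memory table below is a hypothetical stand-in for the real database, and all names are illustrative:

```scala
case class Car(id: Option[Long], owner: String)

// Companion object as the DAO; the Map stands in for the database table.
object Car {
  private var table = Map.empty[Long, Car]

  def save(car: Car): Car = {
    val id = car.id.getOrElse(table.size.toLong + 1)
    val persisted = car.copy(id = Some(id))
    table += id -> persisted
    persisted // caller receives a new value; nothing is mutated in place
  }

  def updateOwner(id: Long, owner: String): Option[Car] =
    table.get(id).map(c => save(c.copy(owner = owner)))

  def delete(id: Long): Unit =
    table -= id // no instance needed, just the id from the form
}
```

Every operation makes it explicit that the passed-in value is untouched and the returned value is the result, which is exactly the clarity argument for keeping these methods out of the case class.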
I've been playing with play2.0 with mongo, and my companion objects look like:
object MyObject extends SalatDAO[MyObject, ObjectId](collection = getCollection("objectcollection"))
These companion objects inherit CRUD like operations from SalatDAO (MyObject.save(), MyObject.find(), etc). I'm not entirely clear on how it is implemented internally, but it works nicely.
I know it's not directly possible to serialize a function/anonymous class to the database but what are the alternatives? Do you know any useful approach to this?
To present my situation: I want to award a user "badges" based on his scores. So I have different types of badges that can be easily defined by extending this class:
class BadgeType(id: Long, name: String, detector: Function1[List[UserScore], Boolean])
The detector member is a function that walks the list of scores and returns true if the user qualifies for a badge of this type.
The problem is that each time I want to add/edit/modify a badge type I need to edit the source code, recompile the whole thing and re-deploy the server. It would be much more useful if I could persist all BadgeType instances to a database. But how to do that?
The only thing that comes to mind is to have the body of the function as a script (e.g. Groovy) that is evaluated at runtime.
Another approach (that does not involve a database) might be to have each badge type in a jar that I can somehow hot-deploy at runtime, which I guess is how a plugin system might work.
What do you think?
My very brief advice is that if you want this to be truly data-driven, you need to implement a rules DSL and an interpreter. The rules are what get saved to the database, and the interpreter takes a rule instance and evaluates it against some context.
But that's overkill most of the time. You're better off having a little snippet of actual Scala code that implements the rule for each badge, giving them unique IDs and storing the IDs in the database.
e.g.:
trait BadgeEval extends Function1[User, Boolean] {
  def badgeId: Int
}

object Badge1234 extends BadgeEval {
  def badgeId = 1234
  def apply(user: User) = {
    user.isSufficientlyAwesome // && ...
  }
}
You can either have a big whitelist of BadgeEval instances:
val weDontNeedNoStinkingBadges = Map(
  1234 -> Badge1234,
  5678 -> Badge5678
  // ...
)

def evaluator(id: Int): Option[BadgeEval] = weDontNeedNoStinkingBadges.get(id)
def doesUserGetBadge(user: User, id: Int) = evaluator(id).map(_(user)).getOrElse(false)
... or if you want to keep them decoupled, use reflection:
def badgeEvalClass(id: Int) =
  Class.forName("com.example.badge.Badge" + id + "$").asInstanceOf[Class[BadgeEval]]
// the singleton instance itself lives in the static MODULE$ field of that class
... and if you're interested in runtime pluggability, try the service provider pattern.
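The rules-DSL-plus-interpreter idea mentioned at the start of this answer can be sketched minimally: rules are plain data (so they can be stored as rows or JSON in the database), and an interpreter evaluates a stored rule against a user. Every name and condition below is illustrative:

```scala
case class User(score: Int, posts: Int)

// A tiny rule AST - this is what would actually be persisted.
sealed trait Rule
case class ScoreAtLeast(n: Int) extends Rule
case class PostsAtLeast(n: Int) extends Rule
case class And(a: Rule, b: Rule) extends Rule
case class Or(a: Rule, b: Rule) extends Rule

// The interpreter: evaluates a stored rule against a concrete user.
def eval(rule: Rule, user: User): Boolean = rule match {
  case ScoreAtLeast(n) => user.score >= n
  case PostsAtLeast(n) => user.posts >= n
  case And(a, b)       => eval(a, user) && eval(b, user)
  case Or(a, b)        => eval(a, user) || eval(b, user)
}

val powerBadge = And(ScoreAtLeast(1000), PostsAtLeast(50))
eval(powerBadge, User(score = 1200, posts = 60)) // true
eval(powerBadge, User(score = 1200, posts = 10)) // false
```

Adding or editing a badge then means inserting or updating a rule row, with no recompile or redeploy, which is precisely what the question asked for, at the cost of the rule language being limited to what the AST can express.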
You can try Scala continuations: they can give you the ability to serialize a computation and run it at a later time, or even on another machine.
Some links:
Continuations
What are Scala continuations and why use them?
Swarm - Concurrency with Scala Continuations
Serialization relates to data rather than behaviour. You cannot serialize functionality, because behaviour lives in class files; object serialization only serializes the fields of an object.
So like Alex says, you need a rule engine.
Try this one if you want something fairly simple, which is string based, so you can serialize the rules as strings in a database or file:
http://blog.maxant.co.uk/pebble/2011/11/12/1321129560000.html
Using a DSL has the same problems unless you interpret or compile the code at runtime.