is SFig language syntax efficient and clear (and better than Spring-Framework's XML DSL)? - inversion-of-control

ADDENDUM EDIT: I have not yet accepted an answer to this, as there has not been any feedback from experienced Spring Framework developers.
I've been working on a replacement DSL to use for Spring-Framework applicationContext.xml files (where bean initialization and dependency relationships are described for loading up into the Spring bean factory).
My motivation is that I just flat out don't like Spring's use of XML for this purpose nor do I really like any of the alternatives that have been devised so far. For various reasons that I won't go into, I want to stay with a declarative language and not some imperative scripting language such as Groovy.
So I grabbed the ANTLR parser tool and have been devising a new bean factory DSL that I've dubbed SFig. Here's a link that talks more about that:
SFig™ - alternative metadata config language for Spring-Framework
And here is the source code repository site:
http://code.google.com/p/sfig/
I'm interested to know how I'm doing on the language syntax so far. Do you think SFig is both efficient and clear to understand? (I'm particularly concerned right now with the multi-line text string):
properties_include "classpath:application.properties";
org.apache.commons.dbcp.BasicDataSource dataSource {
    #scope = singleton;
    #destroy-method = close;
    driverClassName = "${jdbc.driverClassName}";
    url = "${jdbc.url}";
    username = "${jdbc.username}";
    password = "${jdbc.password}";
    defaultAutoCommit = true;
}

org.springframework.orm.ibatis.SqlMapClientFactoryBean sqlMapClient {
    #scope = singleton;
    #init-method = afterPropertiesSet;
    #factory-method = getObject;
    configLocation = "classpath:sqlmap-config.xml";
    dataSource = $dataSource;
}

/* this string will have Java unescape encoding applied */
STRING str = "\tA test\u0020string with \\ escaped character encodings\r\n";

/* this string will remain literal - with escape characters remaining in place */
STRING regexp = #"(\$\{([a-zA-Z][a-zA-Z0-9._]*)\})";

/* multi-line text block - equates to a java.lang.String instance */
TEXT my_multi_line_text = ///
Here is a line of text.
This is yet another. Here is a blank line:

Now picks up again.
///;

/* forward use of 'props' bean */
java.util.HashMap map {
    this( $props );
}

/* equates to a java.util.Properties instance */
PROPERTIES props {
    "James Ward" = "Adobe Flex evangelist";
    "Stu Stern" = "Gorilla Logic - Flex Monkey test automation";
    Dilbert = "character in popular comic strip of same title";
    "App Title Display" = "Application: ${app.name}";
    "${app.desc}" = "JFig processes text-format Java configuration data";
}

/* equates to a java.util.ArrayList instance */
LIST list {
    this( ["dusty", "moldy", "${app.version}", $str] );
    [234, 9798.76, -98, .05, "numbers", $props, ["red", "green", "blue"]];
}

I'll provide a bit of background on Spring and its applicationContext.xml file - that will lend clarity to some of the things going on in SFig syntax.
The applicationContext.xml file is used to express bean initialization for beans that will be managed by the Spring bean factory. So given the example of beans seen in my SFig version of this file, in Java application code one might request the bean factory to make an instance of a bean like so:
SqlMapClient sqlMapClient = (SqlMapClient) getBean("sqlMapClient");
The bean factory takes care of any instantiation and initialization that the bean requires - even to the point of injecting dependencies. In this case, a SqlMapClient bean needs an instance of a dataSource bean (which is also described and referenced in the SFig example).
A bean descriptor relays the following information to the bean factory:
the bean's Java class name
a bean ID by which to request or reference it
bean definition meta attributes (optional)
constructor initialization arguments (optional)
and/or property initializers
The '#' prefix marks bean definition meta attributes. These are attributes used by the bean factory to manage the bean. For instance, #scope = singleton informs the bean factory to make a single instance of the bean, cache it, and hand out references to it when requested. The attributes that can be set are the same ones defined by the Spring-Framework.
If a bean is to be initialized via a constructor, that is expressed in SFig by a syntax that appears to invoke this with arguments in parentheses.
Or a bean can be initialized by setting its properties. Identifiers that are assigned to and not prefixed by '#' are bean properties.
A bean that is a required dependency can be referenced by prefixing its bean ID with '$'. Several examples of this appear in the SFig example.
The ${foo.bar} style of variable appearing in string literals will be replaced by a Java property value. In this case, properties are loaded from the file application.properties via this line:
properties_include "classpath:application.properties";
If a property is not found in any included properties, Java System properties are consulted next. This is a widely followed practice in many Java frameworks. The current XML-based applicationContext.xml file has a way of permitting this usage too.
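The lookup order just described can be sketched in a few lines of Scala (resolve here is a hypothetical helper for illustration, not part of SFig's actual implementation):

```scala
import java.util.Properties

// Consult the included properties first, then fall back to Java System properties.
def resolve(key: String, included: Properties): Option[String] =
  Option(included.getProperty(key))
    .orElse(Option(System.getProperty(key)))

val included = new Properties()
included.setProperty("jdbc.url", "jdbc:h2:mem:example")

println(resolve("jdbc.url", included))     // found in the included file
println(resolve("java.version", included)) // falls through to the System property
```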
Because java.util.Properties are often used to initialize beans, SFig provides the PROPERTIES as a special convenient syntax for declaring a Properties object. Likewise for java.util.List, which has the corresponding SFig LIST. Also, arrays of values can be declared within square brackets [...].
Additionally there is TEXT for declaring blocks of multi-line text. A '#' prefixing a string literal turns off escape processing - a feature borrowed from C#'s verbatim strings (which use '@' instead).
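For comparison (not SFig code): Scala's triple-quoted strings provide a rough analogue of both features in one mechanism, since escape sequences are left unprocessed and literals may span lines:

```scala
// Escape sequences are not interpreted inside triple quotes,
// so the regular expression can be written without doubling backslashes.
val regexp = """(\$\{([a-zA-Z][a-zA-Z0-9._]*)\})"""

// Multi-line literal; stripMargin removes the leading '|' guides.
val text =
  """Here is a line of text.
    |This is yet another.
    |Now picks up again.""".stripMargin

println(regexp)
println(text)
```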
One of the primary design objectives of the SFig DSL is to remain declarative in nature. I am purposely refraining from adding any imperative scripting features. Programming logic embedded in a text configuration file implies the possibility of having to debug it, and I don't want to open yet another dimension of code debugging.

I haven't much experience with the Spring XML you refer to, so you should take the following feedback with a pinch of salt.
As a second and third caveat:
providing a snippet of code gives only a flavour of the language and its semantics. It is difficult to completely understand some of the choices you have already made (and with good reason), so any feedback here may be completely contradictory or impossible in the light of those choices.
language design is as much an art as a science, and so at this stage, any feedback you may get is likely to be quite subjective.
A larger, meta-, question: is this DSL intended for configuring Spring specifically, or a more general class of frameworks?
There: caveat emptor. Now my subjective and incomplete feedback ;)
I'm not sure I understand the reason why you have the # prefix for scope and destroy-method, but not driverClassName. Also the mix of both xml-case and camelCase isn't completely apparent to start with. Is the # prefix a type modifier, or are these keywords in the language?
I'm not completely sure of your intentions about the block header format. You have the class name, then a function of that class; is the intention to specify which class you are going to use for a particular function?
e.g.
sqlMapClient: org.springframework.orm.ibatis.SqlMapClientFactoryBean {
    # body.
}
or even:
sqlMapClient {
    #class = org.springframework.orm.ibatis.SqlMapClientFactoryBean;
    # is there a sensible (perhaps built-in) default if this is missing?
}
I like the variable substitution; I presume the values will come from System properties?
I like being able to specify string literals without escaping, especially for the regular expressions you've shown. However, a multi-character quote or quote modifier seems a little alien. I guess you considered the single quote (shell and Perl use single quotes for literal strings).
On the other hand, I think the triple forward slash for multi-line TEXT is the right approach, but too reminiscent of comments in C-style languages. Python uses a triple " for this purpose. Some shell idioms have a multi-line text convention I would not copy.
I very much like the look of properties and config location, using what looks like a URI notion of addressing. If this is a URI, classpath://file.xml may be clearer. I may have the wrong end of the stick here, however.
I also very much like the notion of list and map literals you have, though I'm not sure where:
this comes into it (I guess a call to a Java constructor)
why some types are capitalized and others are not. Do I take it that there is a default MAP type, which you can override with a more specific type if you wish?
is Dilbert an unquoted string literal?
Finally, I'd point you to another configuration DSL, though perhaps more for sysadmin usage: Puppet.
Go well.

Related

'Happens before' on Scala constructors: final fields

The Java specification says that classes having only final fields have their constructors in a happens-before relation with any thread reading any reference to that object: in other words, it is not possible for the application to see a partially constructed object.
Scala hacks initialization by extracting it to separate methods in order to ensure that 'primary constructor vals' are set before any initializing code in superclasses. This is at least one reason why a Scala final val doesn't always (or ever?) translate to a Java final field.
Is there a way to achieve this, i.e. ensure the happens-before relation between a class's clients and its constructor?
One which is a reasonably stable feature of the compiler?
One which is not writing the class in Java?
Scala hacks initialization by extracting it to separate methods in order to ensure that 'primary constructor vals' are set before any initializing code in superclasses.
In Java that doesn't break final-field guarantees as long as this doesn't escape from the constructor.
("doesn't escape" means that the constructor's code doesn't store this in a variable/collection/etc. which can be read by another thread)
Also, because the JMM is defined for the Java language and not for the JVM, I'm afraid it only works in languages that compile to Java code.

Scala macros: Generate factory or ctor for Java Pojo

I'm currently working with reasonably large code base where new code is written in scala, but where a lot of old Java code remains. In particular there are a lot of java APIs we have to talk to. The old code uses simple Java Pojos with public non-final fields, without any methods or constructors, e.g:
public class MyJavaPojo {
    public String myField1;
    public MyOtherJavaPojo myField2;
}
Note that we don't have the option of adding helper methods or constructors to these types. Instances are currently created like old C structs (pre-named-parameters) like this:
val myPojo = new MyJavaPojo
myPojo.myField1 = ...
myPojo.myField2 = ...
Because of this, it's very easy to forget to assign one of the fields, especially when new fields are suddenly added to the MyJavaPojo class; the compiler won't complain that I've left a field null.
NOTE: We don't have the option of modifying the Java types/adding constructors the normal way. We also don't want to start creating lots and lots of manually written helper functions for object creation - we would really like to find a solution based on Scala macros instead, if possible!
What I would like to do would be to create a macro that generates either a constructor-like method for my Pojos or a macro that creates a factory, allowing for named parameters. (Basically letting a macro do the work instead of creating a gazillion manually written helper methods in scala).
Do you know of any way to do this with scala macros? (I'm certain it's possible, but I've never written a scala macro in my life)
Desired API alternative 1:
val myPojo = someMacro[MyJavaPojo](myField1 = ..., myField2 = ...)
Desired API alternative 2
val factory = someMacro[MyJavaPojo]
val myPojo = factory.apply(myField1 = ..., myField2 = ...)
NOTE/Important: Named parameters!
I'm looking for either a ready-to-use solution or hints as to where I can read up on making one.
All ideas and input appreciated!
Take a look at scala-beanutils.
@beanCompanion[MyJavaPojo] object MyScalaPojo
MyScalaPojo(...)
It probably won't work directly, as your classes are not beans and it's only been made for Scala 2.10, but the source code is < 200 lines and should give you an idea of where to start.

In Scala is there any way to get a parameter's method name and class?

At my work we use a typical heavy enterprise stack of Hibernate, Spring, and JSF to handle our application, but after learning Scala I've wanted to try to replicate much of our functionality within a more minimal Scala stack (Squeryl, Scalatra, Scalate) to see if I can decrease code and improve performance (an Achilles' heel for us right now).
Often my way of doing things is influenced by our previous stack, so I'm open to advice on a way of doing things that are closer to Scala paradigms. However, I've chosen some of what I do based on previous paradigms we have in the Java code base so that other team members will hopefully be more receptive to the work I'm doing. But here is my question:
We have a domain class like so:
class Person(var firstName: String, var lastName: String)
Within a jade template I make a call like:
.section
- view(fields)
The backing class has a list of fields like so:
class PersonBean(val person: Person) {
  val fields: Fields = Fields(person,
    List(
      Text(person.firstName),
      Text(person.lastName)
    ))
}
Fields has a base object (person) and a list of Field objects. Its template prints all its fields' templates. Text extends Field, and its Jade template is supposed to print:
<label for="person:firstName">#{label}</label>: <input type="text" id="person:firstName" value="#{value}" />
Now the #{value} is simply a call to person.firstName. However, to find out the label I reference a ResourceBundle and need to produce a string key. I was thinking of using a naming convention like:
person.firstName.field=First Name
So the problem then becomes, how can I within the Text class (or parent Field class) discover what the parameter being passed in is? Is there a way I can pass in person.firstName and find that it is calling firstName on class Person? And finally, am I going about this completely wrong?
If you want to take a walk on the wild side, there's a (hidden) API in Scala that allows you to grab the syntax tree for a thunk of code - at runtime.
This incantation goes something like:
scala.reflect.Code.lift(f).tree
This should contain all the information you need, and then some, but you'll have your work cut out interpreting the output.
You can also read a bit more on the subject here: Can I get AST from live scala code?
Be warned though... It's rightly classified as experimental, do this at your own risk!
You can never do this anywhere from within Java, so I'm not wholly clear as to how you are just following the idiom you are used to. The obvious reason that this is not possible is that Java is pass-by-value. So in:
public void foo(String s) { ... }
There is no sense that the parameter s is anything other than what it is. It is not person.firstName just because you called foo like:
foo(person.firstName);
Because person.firstName and s are completely separate references!
What you could do is replace the fields (e.g. firstName) with actual objects that have a name attribute.
I did something similar in a recent blog post: http://blog.schauderhaft.de/2011/05/01/binding-scala-objects-to-swing-components/
The property doesn't have a name attribute (yet), but it is a full object and still just as easy to use as a field.
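A minimal sketch of that idea (the names here are illustrative, not the blog post's actual API): each field becomes a small property object that carries its own name, so a resource-bundle key can be derived without reflection.

```scala
// A property object that knows its own name and holds a mutable value.
class Property[A](val name: String, var value: A)

class Person {
  val firstName = new Property("firstName", "Ada")
  val lastName  = new Property("lastName", "Lovelace")
}

val person = new Person

// The template can now build the resource-bundle key from the property itself.
val key = s"person.${person.firstName.name}.field"
println(key) // person.firstName.field
```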
I would not be very surprised if the following is complete nonsense:
Make the type of the parameter that gets passed in not A but Context[A]
create an implicit that turns any A into a Context[A] and while doing so captures the value of the parameter in a call-by-name parameter
then use reflection to inspect the call-by-name parameter that gets passed in
For this to work, you'd need very specific knowledge of how stuff gets turned into call-by-name functions; and how to extract the information you want (if it's present at all).

How do you do dependency injection with the Cake pattern without hardcoding?

I just read and enjoyed the Cake pattern article. However, to my mind, one of the key reasons to use dependency injection is that you can vary the components being used by either an XML file or command-line arguments.
How is that aspect of DI handled with the Cake pattern? The examples I've seen all involve mixing traits in statically.
Since mixing in traits is done statically in Scala, if you want to vary the traits mixed in to an object, create different objects based on some condition.
Let's take a canonical cake pattern example. Your modules are defined as traits, and your application is constructed as a simple Object with a bunch of functionality mixed in
val application =
  new Object
    with Communications
    with Parsing
    with Persistence
    with Logging
    with ProductionDataSource

application.startup
Now all of those modules have nice self-type declarations which define their inter-module dependencies, so that line only compiles if all your inter-module dependencies exist, are unique, and are well-typed. In particular, the Persistence module has a self-type which says that anything implementing Persistence must also implement DataSource, an abstract module trait. Since ProductionDataSource inherits from DataSource, everything's great, and that application construction line compiles.
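A stripped-down sketch of that mechanism (module names shortened and bodies invented for illustration):

```scala
trait DataSource { def query(sql: String): String }

// The self-type `this: DataSource =>` declares that Persistence may only be
// mixed into something that also implements DataSource.
trait Persistence { this: DataSource =>
  def save(value: String): String = query(s"INSERT $value")
}

trait ProductionDataSource extends DataSource {
  def query(sql: String): String = s"prod: $sql"
}

// Compiles because ProductionDataSource satisfies Persistence's self-type;
// dropping it from the mix-in list would be a compile error.
object Application extends Persistence with ProductionDataSource

println(Application.save("x")) // prod: INSERT x
```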
But what if you want to use a different DataSource, pointing at some local database for testing purposes? Assume further that you can't just reuse ProductionDataSource with different configuration parameters, loaded from some properties file. What you would do in that case is define a new trait TestDataSource which extends DataSource, and mix it in instead. You could even do so dynamically based on a command line flag.
val application =
  if (test)
    new Object
      with Communications
      with Parsing
      with Persistence
      with Logging
      with TestDataSource
  else
    new Object
      with Communications
      with Parsing
      with Persistence
      with Logging
      with ProductionDataSource

application.startup
Now that looks a bit more verbose than we would like, particularly if your application needs to vary its construction along multiple axes. On the plus side, you usually only have one chunk of conditional construction logic like that in an application (or at worst one per identifiable component lifecycle), so at least the pain is minimized and fenced off from the rest of your logic.
Scala is also a scripting language. So your configuration XML can be a Scala script. It is type-safe and not a different language.
Simply look at startup:
scala -cp first.jar:second.jar startupScript.scala
is not so different than:
java -cp first.jar:second.jar com.example.MyMainClass context.xml
You can always use DI, but you have one more tool.
The short answer is that Scala doesn't currently have any built-in support for dynamic mixins.
I am working on the autoproxy-plugin to support this, although it's currently on hold until the 2.9 release, when the compiler will have new features making it a much easier task.
In the meantime, the best way to achieve almost exactly the same functionality is by implementing your dynamically added behavior as a wrapper class, then adding an implicit conversion back to the wrapped member.
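A rough sketch of that wrapper approach (all names invented for illustration):

```scala
import scala.language.implicitConversions

trait Module { def foo: Int }
class Impl extends Module { def foo = 1 }

// The dynamically added behavior lives in a wrapper, not in Module itself.
class LoggingModule(val underlying: Module) {
  def fooLogged: Int = {
    println("calling foo")
    underlying.foo
  }
}

// Implicit conversion back to the wrapped member, so a LoggingModule can
// still be used wherever a plain Module is expected.
implicit def unwrap(wrapper: LoggingModule): Module = wrapper.underlying

def useModule(m: Module): Int = m.foo

val wrapped = new LoggingModule(new Impl)
println(wrapped.fooLogged)  // the added behavior
println(useModule(wrapped)) // implicitly unwrapped back to a Module
```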
Until the AutoProxy plugin becomes available, one way to achieve the effect is to use delegation:
trait Module {
  def foo: Int
}

trait DelegatedModule extends Module {
  var delegate: Module = _
  def foo = delegate.foo
}

class Impl extends Module {
  def foo = 1
}

// later
val composed: Module with ... with ... = new DelegatedModule with ... with ...
composed.delegate = choose() // choose is linear in the number of `Module` implementations
But beware, the downside of this is that it's more verbose, and you have to be careful about the initialization order if you use vars inside a trait. Another downside is that if there are path dependent types within Module above, you won't be able to use delegation that easily.
But if there is a large number of different implementations that can be varied, it will probably cost you less code than listing cases with all possible combinations.
Lift has something along those lines built in. It's mostly in scala code, but you have some runtime control. http://www.assembla.com/wiki/show/liftweb/Dependency_Injection

How to workaround the XmlSerialization issue with type name aliases (in .NET)?

I have a collection of objects and for each object I have its type FullName and either its value (as string) or a subtree of inner objects details (again type FullName and value or a subtree).
For each root object I need to create a piece of XML that will be xml de-serializable.
The problem with XmlSerializer is that, e.g., the following object
int age = 33;
will be serialized to
<int>33</int>
At first this seems perfectly OK; however, when working with reflection you will be using System.Int32 as the type name, since int is only an alias, and this
<System.Int32>33</System.Int32>
will not deserialize.
Now additional complexity comes from the fact that I need to handle any possible data type.
So solutions that utilize System.Activator.CreateInstance(..) and casting won't work unless I go down the path of code generation & compilation as a way of achieving this (which I would rather avoid).
Notes:
A quick research session with .NET Reflector revealed that XmlSerializer uses the internal class TypeScope; look at its static constructor to see that it initializes an inner Hashtable with the mappings.
So the question is what is the best (and I mean elegant and solid) way to workaround this sad fact?
I don't exactly see where your problem stems from. XmlSerializer will use the same syntax/mapping for serializing as for deserializing, so there is no conflict when using it for both.
The type tags used are probably some XML-standard thing, but I'm not sure about that.
I guess the problem is more your usage of reflection. Do you instantiate your imported/deserialized objects by calling Activator.CreateInstance?
I would recommend the following instead, if you have some type Foo to be created from the xml in xmlReader:
Foo deserializedObject = (Foo)new XmlSerializer(typeof(Foo)).Deserialize(xmlReader);
Alternatively, if you don't want to switch to the XmlSerializer completely, you could do some preprocessing of your input. The standard way would then be to create an XSLT in which you transform all those type elements to their aliases or vice versa. Then, before processing the XML, you apply your transformation using System.Xml.Xsl.XslCompiledTransform and use your (or the reflector's) mappings for each type.
Why don't you serialize all the field's type as an attribute?
Instead of
<age>
  <int>33</int>
</age>
you could do
<age type="System.Int32">
  33
</age>