Custom wro4j plugin for Scala's Simple Build Tool - scala

I'm in the process of creating my own wro4j plugin for SBT as my project has some special requirements not achievable with xsbt-wro4j-plugin directly.
I checked the source code of xsbt-wro4j-plugin (here) and also the wro4j API documentation to gain some insight into the file-creation process, but I'm a bit puzzled. As far as I can tell, the plugin uses Mockito to produce the necessary resources somehow, but I don't get how it cooperates with wro4j itself. If I'm right, this whole Mockito business is a hack so we can use SBT's caching mechanism.
Question #1 is whether we can avoid this Mockito voodoo without losing caching support.
Question #2: what is responsible for file creation within wro4j? Could I override it?

This is not necessarily an answer to all of your questions, but an explanation of the reason the xsbt-wro4j plugin (and wro4j-maven-plugin) uses Mockito.
wro4j was initially created as a runtime-only solution (a servlet filter) to minimize static resources on the fly. As a result, the internal API is based on the servlet-api (more specifically, the HttpServletRequest & HttpServletResponse objects). Later, when a build-time solution was required, instead of changing the internals of the framework, a suitable workaround was applied: stubbing the servlet-api in a non-servlet (build-time) environment.
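For illustration, here is a minimal sketch of that stubbing idea (hypothetical names, not the plugin's actual code): the build-time caller fabricates the request/response pair that wro4j's servlet-oriented pipeline expects and captures the processed output in a buffer.

    import java.io.{ByteArrayOutputStream, PrintWriter}
    import javax.servlet.http.{HttpServletRequest, HttpServletResponse}
    import org.mockito.Mockito.{mock, when}

    // Fake the request for the resource group we want built.
    val request = mock(classOf[HttpServletRequest])
    when(request.getRequestURI).thenReturn("/wro/all.js")

    // Fake the response so the processed output lands in a buffer
    // instead of going over the wire.
    val buffer = new ByteArrayOutputStream()
    val response = mock(classOf[HttpServletResponse])
    when(response.getWriter).thenReturn(new PrintWriter(buffer))

    // ...hand request/response to wro4j's processing pipeline, then
    // write the buffer's contents to the target file that SBT's cache tracks.

So the mocks are not really about caching as such; they are the bridge between a servlet-free build and a servlet-shaped API, and SBT's caching then operates on the files written from the captured output.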
The long-term approach, as I see it, is to make wro4j servlet-api agnostic and allow build-time solutions like the Maven plugin or the xsbt plugin to work without this workaround. Unfortunately, doing that means changing wro4j's internals, which would require a major release (incompatible with previous versions). Given the amount of work required, it will most probably be delayed.

How seamless will Dotty/Scala 3 integration be with tech like Scala Native and Scala.js?

Are there any limitations we should be aware of? Will it require us to use Scalafix-like tools, or will it work out of the box?
Migration from 2.13 to 3.0 in general:
Dotty uses the 2.13 collections, so there's no need to change things here; as a matter of fact, 2.13 is so close to 3.0 that the maintainers decided to skip the 2.14 release, which was originally supposed to serve as a stepping stone
macros will need to be rewritten; that is the biggest issue, but library maintainers have some time to do it, and some are rewriting things even now (see Quill)
there will be a few dropped features, e.g. the forSome syntax for existential types disappears (see Dropped Features in the documentation); a before/after sketch follows this list
libraries might need to extend themselves to support the new features (union/intersection/opaque types), but until you start using the new things in your code, everything works as before
other than that, old Scala code should work without any changes
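To make the forSome point concrete, here is a tiny before/after sketch:

    // Scala 2: existential type using the dropped forSome syntax
    // def length(xs: List[T] forSome { type T }): Int = xs.size

    // Scala 3: the same thing expressed with a wildcard type
    def length(xs: List[?]): Int = xs.size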
Scalafix is being used in production even now; e.g. Scala Steward is able to apply migrations as it updates libraries to new versions.
Scala.js is already supported as a Dotty backend next to the JVM one.
Recently the Scala Center took over Scala Native, so we should expect Scala Native development to speed up (it was a bit stalled) and eventually land as another supported backend. I cannot tell whether they will manage to deliver before the release of Dotty, but I doubt it. For now, Scala Native would have to get support for 2.12 and/or 2.13 first. Track this issue if you want to know, or ask on Gitter.
Long story short: you will need to wait for the libraries you use to get ported to Dotty, then update your macros if you wrote any; besides that, migration should be pretty much straightforward for the JVM and JS backends. Scala Native will probably take more time.

DI or Service Locator: injecting implementations at run-time (no static binding) in Scala

I have a use case where I would like to offer a simple API for extending the functionality of my Scala application.
I've spent the last couple of days trying to find a Java/Scala DI framework or library that does the following for me:
identifies implementations of an interface/trait on the classpath
instantiates and injects said implementations (important feature: all of them) at a marked site, preferably via an annotation
the above can't happen at compile time, because I need a plugin architecture where the plugins are not introduced until the JVM starts
therefore the above can happen at JVM start (no hot-swap necessary)
I'm gravitating more and more towards OSGi DS, which I'm a big fan of, except that I see it as overkill due to #4.
I looked at Guice, Weld, Scaldi and MacWire, and could not immediately see a simple way to do this. My objective is for the "plugin" authors not to have to be aware of my injection/IoC solution in any way, except for the occasional annotation (preferably JSR-330). At the injection site I am willing to deal with uglier things. :-)
Will I have to roll my own solution here, go with OSGi, or am I missing something trivial in the above-mentioned libraries?
PS: I'm trying to steer clear of OSGi mainly because of its interaction with the application framework I'm using (Akka; I'm not sure the bundle/DS lifecycle mixes well with a single actor system).
If you can afford it, it's probably best (not only for you, but for the entire ecosystem) to go with Peter's suggestion.
Pragmatically speaking though, Java has SPI (the ServiceLoader mechanism), which comes OOTB and may be the simplest way to go in your particular case.
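As a sketch of the SPI route (hypothetical Plugin trait; plugin authors only have to list their implementation class in a META-INF/services/com.example.Plugin file inside their jar):

    import java.util.ServiceLoader
    import scala.jdk.CollectionConverters._

    // The contract plugin authors implement. They stay unaware of any
    // IoC machinery; ServiceLoader finds them via META-INF/services.
    trait Plugin {
      def name: String
    }

    object PluginLoader {
      // Discovers and instantiates ALL implementations on the classpath
      // at JVM start -- no container required.
      def loadAll(): List[Plugin] =
        ServiceLoader.load(classOf[Plugin]).iterator().asScala.toList
    }

One caveat: ServiceLoader instantiates through a no-arg constructor and does no further injection into the plugins, so any dependencies the plugins themselves need must be passed in after loading.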
I had a look at what functionality Scaldi provides at the moment and found it mature enough, so there is nothing hard about using this DI library to achieve your goals. For example, you would only need to implement the code that searches for implementations that are specifically annotated, listed in some configuration file, or marked in some other way.
If you like DS (and it seems eminently suitable for your problem), then why not work through whatever problems come up with Akka? I am pretty sure others will be willing to help out, since it looks like an interesting combination.

Scala Dependency Injection for compile time with separate configuration

Now firstly I realise the title is extremely broad, so let me describe the use case.
Background:
I'm currently teaching myself Scala+Gradle (because I like the flexibility and power of Gradle and its much more legible build files).
As with learning any new language, it's often best to build applications you can actually use, and being primarily a PHP (with Symfony) programmer and formerly a Java programmer, there are many patterns that could carry across from both paradigms.
Use Case:
I'm writing an application where I am experimenting with a provider+interface (trait) layout. The goal is to define traits that encompass all the expected functionality for any particular type of component, e.g. a ConfigReaderTrait with a YamlConfigReader as a provider. Theoretically the advantage of this is that I can switch out core mechanisms or even architectural components with minimal effort, which allows for a great deal of R&D and experimenting.
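For instance, a minimal sketch of that layout (names taken from the question; the YAML parsing is stubbed out):

    // The contract any config reader must satisfy.
    trait ConfigReaderTrait {
      def read(path: String): Map[String, String]
    }

    // One interchangeable provider; a JsonConfigReader or similar could
    // be swapped in without touching the calling code.
    class YamlConfigReader extends ConfigReaderTrait {
      def read(path: String): Map[String, String] =
        Map("source" -> path) // a real implementation would parse the YAML file
    }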
PHP Symfony Influence
Currently I work as a pure PHP dev, and as such I have been introduced to Symfony, which has a brilliant dependency injection framework where the dependencies are defined in YAML files and can be delegated to subdirectories. I like this because, unlike with SBT, I am unfazed by using different languages for different purposes (e.g. Groovy with Gradle for build scripts) and I want to maintain a separation of concerns.
Each type of interface/trait or bundle of related functionality should be able to have its own DI config, and I would prefer it separate from the Scala code itself.
Now for Scala....
Obviously things are not the same across different languages, and if you don't embrace the differences you may as well go back to the previous language and leave things at that.
That said, I am not yet convinced by the DI frameworks I see for Scala.
Guice, for example, is really a modified Java framework (which is fine because Scala can use Java libs, but since the two don't function in entirely the same paradigm, it feels as though Scala's capabilities are not leveraged).
MacWire annoyed me a bit, because you have to define the dependencies in the files where you use them, which does not fit my interface/provider concept.
SubCut so far seems the best suited to what I would expect.
But while going through all of this (and bear in mind this is all in the research phase; I haven't used any of them yet), it seems that DI in Scala is still very scattered and in its infancy. By that I mean there are different implementations with different applications, but not one flexible or powerful enough to compare to Symfony's DI, particularly not for my application.
Comments? Thoughts?
My 5 cents:
I have actually stopped using dependency injection frameworks after switching to Scala from Java. The language allows for a few nice ways of doing injection without a framework (multiple parameter lists and currying, as well as mixins for doing injection the way the 'cake pattern' does), and I find myself more and more just using constructor- or method-parameter-based injection, as it clearly documents what dependencies a given piece of logic has and where it got those dependencies from.
It's also fairly easy to create different modules (sets of implementations, or factories for implementations) using Scala objects and then select between those at runtime. This gives you the guarantee that it won't compile unless an implementation is available, as opposed to the big frameworks in Java-land that fail at runtime, effectively pushing a compile-time problem into runtime.
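A minimal sketch of that module approach, assuming a ConfigReader trait with two stubbed implementations:

    trait ConfigReader {
      def read(path: String): String
    }
    class YamlConfigReader extends ConfigReader {
      def read(path: String): String = s"yaml:$path" // stub
    }
    class JsonConfigReader extends ConfigReader {
      def read(path: String): String = s"json:$path" // stub
    }

    // Modules are plain objects that wire concrete implementations;
    // a missing binding is a compile error, not a runtime failure.
    trait Module {
      def configReader: ConfigReader
    }
    object YamlModule extends Module {
      val configReader = new YamlConfigReader
    }
    object JsonModule extends Module {
      val configReader = new JsonConfigReader
    }

    // Select between modules at runtime, e.g. from an environment variable.
    val module: Module =
      if (sys.env.get("CONFIG_FORMAT").contains("json")) JsonModule else YamlModule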
This also removes the 'magic' of how dependencies are created and wired (reflection, runtime weaving, macros, XML, binding context to thread-locals, etc.). I think this makes it much easier for new developers to jump into a project and understand how the codebase is interconnected.
Regarding declaring implementations in non-code such as XML: I have found that projects rarely or never change those files without making a new release, so they might as well be code, with all the benefits that brings (IDE support, performance, type checking).

What are the use cases for Castle Windsor's logging facility?

I've recently been learning about inversion of control through dependency injection, and using Castle Windsor. I like it. I get it. The lightbulb over my head is burning brightly. But I have a nagging concern about the logging facility and the ILogger interface. What is that really doing there? Is it for me, or just for Windsor itself?
Since ILogger is intended to abstract away the differences between log4net and Nlog and whatever other logging frameworks it supports, it has to represent the lowest common denominator between them. The various frameworks are similar, but not necessarily identical. If there were some fantastic feature in one, but not in the other, it would either have to be left out of ILogger, or the ILogger implementations for the other logging frameworks would have to have no-op implementations of it, or something else that's not very satisfying.
Long before Windsor, I was a fan of log4net, and a lot of my favorite libraries use it, like NHibernate. So if I'm building a new application, I'll use log4net. I'm willing to commit to it, and I consider it a stable dependency -- as stable a dependency as needing System.Web, for example. I would not write my components to use ILogger; I would write them to use ILog. But I get the impression that Windsor expects me to use ILogger for my own logging. Isn't that saddling my project with a dependency on Windsor, when I shouldn't have any dependency on my IoC container?
I see the point of Windsor having the logging facility so it can log its own operations using whichever logging framework the project wants to use. That seems perfectly sensible. But if I don't use ILogger for my own code, and just go straight to log4net's ILog, what am I giving up? Will I regret this?
The obvious response is that I might want to change logging frameworks in six months. But I won't. log4net is mature and stable. It's a project with a limited and very well-defined scope, which it implements nearly perfectly. It can be considered "finished". At most, I might need to write a custom appender to handle messages. (Maybe I want to write them onto a postcard and drop them in the mail for some reason.) But that's easily done within the log4net framework, and I would use it just like any other log4net appender. I would be no more likely to change logging frameworks than I would be to change web platforms.
It seems as if you have already made up your mind to use log4net directly. This is perfectly reasonable, as you consider it a stable dependency. I have just switched from log4net to NLog for our projects, so it does happen that you may change logging frameworks in the future, which is where an abstraction has its advantages.
Another consideration when thinking about using a logging abstraction (other than losing functionality specific to a particular logging framework), is the extra overhead of learning the abstraction. Does this make the code more or less complex for developers to pick up?
In our case, we found NLog was so easy to install and configure directly that we decided to lose our custom logging abstraction and switch from log4net (which we found a bit verbose in its XML configuration compared to NLog).

Added GWT to web app. Other devs complain about compile time. What should I tell them?

I recently added GWT to our project to implement an Ajax feature of our web app. The other devs are complaining about the extra time GWT compile adds to the build and are asking why I didn't use JSON and jQuery instead. What should I tell them?
Try to make the build smarter, if it isn't already: the GWT (client) part should only be recompiled when the client source changes. I assume it's mostly you who changes that source, so the other developers won't experience the pain.
Caveat: this doesn't work, of course, if your client source shares code with the existing project (I assume it's a Java project on the server side?). But maybe you should avoid shared code in your case: even though that violates the DRY (Don't Repeat Yourself) principle, realize that you'd violate it anyway if you didn't use GWT.
However, if you do reuse code from the server project, then you have a good argument for why you used GWT.
If developers have to compile the whole GWT module (all the permutations) in order to develop the application, it is a real pain. Starting from GWT 2 you can configure the webapp project to run in "development mode". It can be started directly from Eclipse (via the Google plugin) thanks to the built-in Jetty container. In this scenario only the requested resources are compiled, and the process is incremental. I find this very convenient; the GWT compilation overhead in our Seam+RichFaces+GWT application is very small during the development cycle.
When it comes to application builds, there are several options which can speed up GWT compilation. Here is a checklist:
disable SOYC reports
enable the draftCompile flag, which skips some optimizations
adjust the localWorkers flag to speed things up a bit when building on a multi-core CPU
compile a limited set of permutations, e.g. only for the browsers used during development and only for one language
Release builds of the webapp should have draftCompile disabled, though, and all the language variants should be enabled. Maven profiles are very useful for parametrizing builds.
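For the permutation point, a hypothetical development-only module file (assumed name Dev.gwt.xml) that inherits the real module and pins a single browser engine and locale, so a dev build compiles one permutation instead of dozens:

    <!-- Dev.gwt.xml: hypothetical dev-only module -->
    <module rename-to="app">
      <inherits name="com.example.App"/>
      <!-- compile for one browser engine only -->
      <set-property name="user.agent" value="safari"/>
      <!-- and for one language only -->
      <set-property name="locale" value="en"/>
    </module>

Point the development launch configuration at Dev.gwt.xml and the release build at the full module.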
What was the reason you used GWT instead of JSON/jQuery?
I would ask the same question, since for what you need, GWT may not be legitimately needed.
In my experience, I totally understand the complaints you are getting. GWT is a wonderful technology, and it has many benefits. It also has downsides, and one of them is the long compile time. The GWT compiler does lots of static code analysis, and there is no solution that would speed it up by an order of magnitude.
As a developer, the most frustrating thing in the world is long development-deploy-test cycles. I know how your developers feel.
You need to make an architectural decision if the technological benefits of GWT are worth it. If they are, your developers will need to get used to the technology, and there are many solutions which can make the development much easier.
If there was a good reason for using GWT instead of pure JavaScript, you should tell them that reason (skills, debugging for a very-hard-to-implement problem, not wanting to deal with browser compatibility, etc.). If there is no good reason, maybe they're right to be upset.
I use GWT myself and I know about this compile time :-)
If you used GWT for an easy-to-implement-in-JavaScript widget or something like that, maybe you should have considered using JavaScript instead.
What tool are you using to compile the project?
A long time ago I used Ant, and it was smart enough to figure out that, when none of the source files for the GWT app (client code) had changed, the GWT compiler task did not need to be called.
However, after that I used Maven, and it was a real pain, because its plugin didn't recognize that the code hadn't changed, so GWT compilation ran over and over, whether it was needed or not.
I would recommend Ant for GWT projects. The alternative would be rewriting the Maven plugin or getting the developers used to the long compile time.