Why doesn't Ehcache propagate expiration events? How do you deal with it?
Here is my situation. I have two nodes that synchronize with each other using RMI. My config is below:
<cacheManagerPeerProviderFactory class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
properties="peerDiscovery=manual, rmiUrls=//localhost:51001/sessionCache|//localhost:51002/sessionCache"/>
<cacheManagerPeerListenerFactory class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
properties="hostName=0.0.0.0, port=51002, socketTimeoutMillis=2000"/>
<diskStore path="java.io.tmpdir"/>
<cache name="sessionCache"
maxEntriesLocalHeap="20000"
maxEntriesLocalDisk="100000"
eternal="false"
diskSpoolBufferSizeMB="20"
timeToIdleSeconds="60"
memoryStoreEvictionPolicy="LRU"
transactionalMode="off">
<persistence strategy="localTempSwap"/>
<cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory" properties="replicateAsynchronously=true" />
<bootstrapCacheLoaderFactory
class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"
properties="bootstrapAsynchronously=false"
propertySeparator=","/>
</cache>
Now imagine the following scenario:
1. A login call to server A creates a session, which is put into the replicated cache.
2. Continuous calls to server A to get the session object from the cache keep resetting its timeToIdleSeconds counter.
3. Keep calling server A for 65 seconds, so that the session expires on server B, which was never called after the session object was put into the cache in step 1.
4. Call server B. Internally it removes the session object from its local cache and calls RegisteredEventListeners.internalNotifyElementExpiry, which ends up calling RMISynchronousCacheReplicator.notifyElementExpired.
In step 4 I expect server B to send RmiEventType.REMOVE to server A. Instead, RMISynchronousCacheReplicator.notifyElementExpired has the following body:
public final void notifyElementExpired(final Ehcache cache, final Element element) {
/*do not propagate expiries. The element should expire in the remote cache at the same time, thus
preseerving coherency.
*/
}
It looks like the creator of RMICacheReplicatorFactory never accounted for the timeToIdleSeconds eviction algorithm.
Is a manual call to cache.replace() right after each cache.get() the only way to reset TTI (timeToIdle) across the cluster?
Is an additional cacheEventListener that calls cache.remove() in notifyElementExpired the only way to remove an expired element from the cluster?
Well, I ended up making a hack. Let me know if there is a more elegant solution.
I created an additional cache event listener which looks like this:
import java.util.Properties
import net.sf.ehcache.{Ehcache, Element}
import net.sf.ehcache.event.CacheEventListener

class MyCacheEventListener(properties: Properties) extends CacheEventListener {
  // Turn a local expiry into a removal notification; the RMI replicator then
  // propagates it to the peers as RmiEventType.REMOVE.
  override def notifyElementExpired(cache: Ehcache, element: Element): Unit = {
    cache.getCacheEventNotificationService.notifyElementRemoved(element, false)
  }
  override def notifyElementRemoved(cache: Ehcache, element: Element): Unit = {}
  override def notifyElementEvicted(cache: Ehcache, element: Element): Unit = {}
  override def notifyRemoveAll(cache: Ehcache): Unit = {}
  override def notifyElementPut(cache: Ehcache, element: Element): Unit = {}
  override def notifyElementUpdated(cache: Ehcache, element: Element): Unit = {}
  override def dispose(): Unit = {}
  // CacheEventListener declares a public clone(), so it needs an override as well
  override def clone(): AnyRef = super.clone()
}
which is responsible for sending a fake RmiEventType.REMOVE to the cluster. It is registered with local scope in the config:
<cacheEventListenerFactory class="com.zzzz.MyCacheEventListenerFactory" listenFor="local" />
For refreshing, I had to do this after each successful cache.get():
cache.getCacheEventNotificationService.notifyElementUpdated(element, false)
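To show how it fits together, here is a minimal sketch of the read path; the ReplicatedSessionStore wrapper and getSession method are my own names, not part of the original setup:
import net.sf.ehcache.Ehcache

// Hypothetical wrapper around the replicated sessionCache: every successful read
// broadcasts a fake UPDATE (remoteEvent = false) so the peer resets its TTI counter too.
class ReplicatedSessionStore(sessionCache: Ehcache) {
  def getSession(sessionId: String): Option[AnyRef] =
    Option(sessionCache.get(sessionId)).map { element =>
      sessionCache.getCacheEventNotificationService.notifyElementUpdated(element, false)
      element.getObjectValue
    }
}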
It's ugly, but I'm not sure what else I could do. Frankly, I think Terracotta could easily implement this in their RMICacheReplicatorFactory.
In my Scalatra routes, I often use halt() to fail fast:
val userId = params("userId")
val user: User = userRepository.getUserById(userId)
  .getOrElse {
    logger.warn(s"Unknown user: $userId")
    halt(404, s"Unknown user: $userId")
  }
As shown in the example, I also want to log a warning in those cases. But I'd like to avoid the code duplication between the halt() and the logger. It would be a lot cleaner to simply do:
val user: User = userRepository.getUserById(params("userId"))
  .getOrElse(halt(404, s"Unknown user: ${params("userId")}"))
What would be the best way of logging all "HaltExceptions" in a cross-cutting manner?
I've considered:
1) Overriding the halt() method in my route:
override def halt[T](status: Integer, body: T, headers: Map[String, String])(implicit evidence$1: Manifest[T]): Nothing = {
logger.warn(s"Halting with status $status and message: $body")
super.halt(status, body, headers)
}
Aside from the weird method signature, I don't really like this approach, because I could be calling the real halt() by mistake instead of the overridden method, for example if I'm halting outside the route. In this case, no warning would be logged.
2) Use trap() to log all error responses:
trap(400 to 600) {
logger.warn(s"Error returned with status $status and body ${extractBodyInSomeWay()}")
}
But I'm not sure it's the best approach, especially since it adds 201 routes to the _statusRoutes Map (one mapping for each integer in the range...). Also, I don't know how to extract the body here.
3) Enable some kind of response logging in Jetty for specific status codes?
What would be the best approach to do this? Am I even approaching this correctly?
The easiest solution is to do it in a servlet filter, like below:
package org.scalatra.example
import javax.servlet._
import javax.servlet.http.HttpServletResponse
class LoggingFilter extends Filter {
override def init(filterConfig: FilterConfig): Unit = ()
override def destroy(): Unit = ()
override def doFilter(request: ServletRequest, response: ServletResponse, chain: FilterChain): Unit = {
chain.doFilter(request, response)
val status = response.asInstanceOf[HttpServletResponse].getStatus
if (status >= 400 && status <= 600) {
// Do logging here!
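// For example (an illustrative sketch, not part of the original answer; in real
// code the logger would normally be a field on the filter):
val req = request.asInstanceOf[javax.servlet.http.HttpServletRequest]
org.slf4j.LoggerFactory.getLogger(getClass)
  .warn(s"${req.getMethod} ${req.getRequestURI} returned status $status")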
}
}
}
Register this filter in your Bootstrap class (or even in web.xml):
package org.scalatra.example
import org.scalatra._
import java.util.EnumSet
import javax.servlet.{DispatcherType, ServletContext}
class ScalatraBootstrap extends LifeCycle {
override def init(context: ServletContext): Unit = {
context.addFilter("loggingFilter", new LoggingFilter())
context.getFilterRegistration("loggingFilter")
.addMappingForUrlPatterns(EnumSet.allOf(classOf[DispatcherType]), true, "/*")
// mount your servlets or filters
...
}
}
In my opinion, Scalatra should essentially provide an easier way to trap halting. In fact, there is a method named renderHaltException in ScalatraBase; at a glance it looks possible to add logging by overriding this method:
https://github.com/scalatra/scalatra/blob/cec3f75e3484f2233274b1af900f078eb15c35b1/core/src/main/scala/org/scalatra/ScalatraBase.scala#L512
However, we can't actually do that, because HaltException is package-private and can be accessed only inside the org.scalatra package. I wonder whether HaltException should be public.
I am currently digging into using Quartz in our Play 2.4 application.
Initially, I tried initializing everything through the Global object, and everything worked perfectly.
Now, I am trying to move away from Global and use the modules infrastructure.
Here is what I have so far.
JobSchedulingService
@Singleton
class JobSchedulingService @Inject()(lifecycle: ApplicationLifecycle) extends ClassLogger {
lazy val schedulerFactory = current.injector.instanceOf[StdSchedulerFactory]
lazy val scheduler = schedulerFactory.getScheduler
/**
* Let's make sure that scheduler shuts down properly
*/
lifecycle.addStopHook{ () =>
Future.successful{
if (scheduler.isStarted) {
scheduler.shutdown(true)
}
}
}
protected def init() : Unit = {
logger.info("Initializing scheduler...")
scheduler.start()
}
init()
}
SchedulerModule - used for initializing the service above.
class SchedulerModule extends AbstractModule{
override def configure(): Unit = {
bind(classOf[JobSchedulingService]).asEagerSingleton
}
}
And in my application.conf I added:
play.modules.enabled += "scheduling.modules.SchedulerModule"
It looks pretty straightforward. However, when the app starts I get an exception:
2016-03-23 00:07:42,173 INFO s.JobSchedulingService - Initializing scheduler...
2016-03-23 00:07:42,213 ERROR application -
! @6pfp72mh6 - Internal server error, for (GET) [/] ->
play.api.UnexpectedException: Unexpected exception[CreationException: Unable to create injector, see the following errors:
1) Error injecting constructor, java.lang.RuntimeException: There is no started application
  at scheduling.JobSchedulingService.<init>(JobSchedulingService.scala:15)
  at scheduling.modules.SchedulerModule.configure(SchedulerModule.scala:11)
  (via modules: com.google.inject.util.Modules$OverrideModule -> scheduling.modules.SchedulerModule)
  while locating scheduling.JobSchedulingService
...
The thing is, in our app the scheduler is backed by persistent job storage and should restart when the application restarts. Again, when I did this through Global, it worked perfectly.
How do I get around this problem? What is the correct way to initialize an instance on startup?
Thanks,
You should probably be using dependency injection for everything. Inject the scheduler factory like this:
class JobSchedulingService @Inject()(lifecycle: ApplicationLifecycle, schedulerFactory: StdSchedulerFactory) extends ClassLogger {
lazy val scheduler = schedulerFactory.getScheduler
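Put together with the rest of the service from the question, the fully injected version would look roughly like this (a sketch; ClassLogger comes from the question, and Guice is assumed to be able to instantiate StdSchedulerFactory via its public no-arg constructor):
import javax.inject.{Inject, Singleton}
import org.quartz.impl.StdSchedulerFactory
import play.api.inject.ApplicationLifecycle
import scala.concurrent.Future

@Singleton
class JobSchedulingService @Inject()(lifecycle: ApplicationLifecycle,
                                     schedulerFactory: StdSchedulerFactory) extends ClassLogger {

  lazy val scheduler = schedulerFactory.getScheduler

  // Make sure the scheduler shuts down cleanly when Play stops
  lifecycle.addStopHook { () =>
    Future.successful {
      if (scheduler.isStarted) scheduler.shutdown(true)
    }
  }

  logger.info("Initializing scheduler...")
  scheduler.start()
}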
Hmmm...
I think I just resolved it.
It seems like the problem was not the actual initialization of a class, but this line:
lazy val schedulerFactory = current.injector.instanceOf[StdSchedulerFactory]
once I changed it to:
lazy val schedulerFactory = new StdSchedulerFactory
it started working.
Maybe that will help somebody.
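If you would rather keep the factory injectable instead of creating it with new inside the service, one option (a sketch, relying on Guice's @Provides methods) is to provide it from the module:
import com.google.inject.{AbstractModule, Provides, Singleton}
import org.quartz.impl.StdSchedulerFactory

class SchedulerModule extends AbstractModule {
  override def configure(): Unit = {
    bind(classOf[JobSchedulingService]).asEagerSingleton()
  }

  // Lets StdSchedulerFactory be @Inject-ed rather than constructed inside the service
  @Provides @Singleton
  def schedulerFactory(): StdSchedulerFactory = new StdSchedulerFactory()
}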
I know that in Play with Scala there is no Http.context available, since the idea is to leverage implicits to pass any data around your stack. However, this seems like a lot of boilerplate to pass through when you need a piece of information available for the entire request context.
More specifically, what I'm interested in is tracking a UUID that is passed in the request header and making it available to any logger, so that each request gets its own unique identifier. I'd like this to be seamless for anyone who calls into a logger (or log wrapper).
Coming from a .NET background, the HTTP context flows with async calls, and the same is possible with the call context in WCF. There you can register a function with the logger that returns the current UUID for the request, based on a logging pattern such as "%requestID%".
Building a larger distributed system, you need to be able to correlate requests across multiple stacks.
But, being new to Scala and Play, I'm not even sure where to look for a way to do this.
What you are looking for in Java is called the Mapped Diagnostic Context or MDC (at least by SLF4J) - here's an article I found that details how to set this up for Play. In the interest of preserving the details for future visitors here is the code used for an MDC-propagating Akka dispatcher:
package monitoring
import java.util.concurrent.TimeUnit
import akka.dispatch._
import com.typesafe.config.Config
import org.slf4j.MDC
import scala.concurrent.ExecutionContext
import scala.concurrent.duration.{Duration, FiniteDuration}
/**
* Configurator for a MDC propagating dispatcher.
* Authored by Yann Simon
* See: http://yanns.github.io/blog/2014/05/04/slf4j-mapped-diagnostic-context-mdc-with-play-framework/
*
* To use it, configure play like this:
* {{{
* play {
* akka {
* actor {
* default-dispatcher = {
* type = "monitoring.MDCPropagatingDispatcherConfigurator"
* }
* }
* }
* }
* }}}
*
* Credits to James Roper for the [[https://github.com/jroper/thread-local-context-propagation/ initial implementation]]
*/
class MDCPropagatingDispatcherConfigurator(config: Config, prerequisites: DispatcherPrerequisites)
extends MessageDispatcherConfigurator(config, prerequisites) {
private val instance = new MDCPropagatingDispatcher(
this,
config.getString("id"),
config.getInt("throughput"),
FiniteDuration(config.getDuration("throughput-deadline-time", TimeUnit.NANOSECONDS), TimeUnit.NANOSECONDS),
configureExecutor(),
FiniteDuration(config.getDuration("shutdown-timeout", TimeUnit.MILLISECONDS), TimeUnit.MILLISECONDS))
override def dispatcher(): MessageDispatcher = instance
}
/**
* A MDC propagating dispatcher.
*
* This dispatcher propagates the MDC current request context if it's set when it's executed.
*/
class MDCPropagatingDispatcher(_configurator: MessageDispatcherConfigurator,
id: String,
throughput: Int,
throughputDeadlineTime: Duration,
executorServiceFactoryProvider: ExecutorServiceFactoryProvider,
shutdownTimeout: FiniteDuration)
extends Dispatcher(_configurator, id, throughput, throughputDeadlineTime, executorServiceFactoryProvider, shutdownTimeout ) {
self =>
override def prepare(): ExecutionContext = new ExecutionContext {
// capture the MDC
val mdcContext = MDC.getCopyOfContextMap
def execute(r: Runnable) = self.execute(new Runnable {
def run() = {
// backup the callee MDC context
val oldMDCContext = MDC.getCopyOfContextMap
// Run the runnable with the captured context
setContextMap(mdcContext)
try {
r.run()
} finally {
// restore the callee MDC context
setContextMap(oldMDCContext)
}
}
})
def reportFailure(t: Throwable) = self.reportFailure(t)
}
private[this] def setContextMap(context: java.util.Map[String, String]) {
if (context == null) {
MDC.clear()
} else {
MDC.setContextMap(context)
}
}
}
You can then set values in the MDC using MDC.put and remove them using MDC.remove (alternatively, take a look at putCloseable if you need to add and remove some context around a set of synchronous calls):
import org.slf4j.MDC
// Somewhere in a handler
MDC.put("X-UserId", currentUser.id)
// Later, when the user is no longer available
MDC.remove("X-UserId")
and add them to your logging output using %mdc{field-name:default-value}:
<!-- an example from the blog -->
<appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%d{HH:mm:ss.SSS} %coloredLevel %logger{35} %mdc{X-UserId:--} - %msg%n%rootException</pattern>
</encoder>
</appender>
There are more details in the linked blog post about tweaking the ExecutionContext that Play uses to propagate the MDC context correctly (as an alternative approach).
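To connect this back to the original goal (a per-request UUID taken from a header), here is a hedged sketch of a Play filter that seeds the MDC. The filter name, the X-Request-Id header, and the requestId MDC key are my assumptions, the apply signature is the pre-Play-2.5 one (no Materializer), and it relies on the MDC-propagating dispatcher above to carry the value across threads:
import java.util.UUID
import org.slf4j.MDC
import play.api.mvc.{Filter, RequestHeader, Result}
import scala.concurrent.{ExecutionContext, Future}

class RequestIdFilter(implicit ec: ExecutionContext) extends Filter {
  def apply(next: RequestHeader => Future[Result])(rh: RequestHeader): Future[Result] = {
    // Take the caller-supplied id if present, otherwise generate one
    val requestId = rh.headers.get("X-Request-Id").getOrElse(UUID.randomUUID().toString)
    MDC.put("requestId", requestId)
    // Clean up once the response has been produced
    next(rh).andThen { case _ => MDC.remove("requestId") }
  }
}
The value can then appear in the logback pattern via %mdc{requestId:--}.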
I would like to design a client that talks to a REST API. I have implemented the bit that actually calls the HTTP methods on the server; I call this layer the API layer. Each operation the server exposes is encapsulated as one method in this layer. This method takes as input a ClientContext, which contains all the information needed to make the HTTP call on the server.
I'm now trying to set up the interface to this layer, let's call it ClientLayer. This interface is the one any users of my client library should use to consume the services. When calling the interface, the user should create the ClientContext and set up the request parameters depending on the operation they want to invoke. With the traditional Java approach, I would keep state on my ClientLayer object representing the ClientContext.
For example:
public class ClientLayer {
    private static final ClientContext context;
    ...
}
I would then have some constructors that would set up my ClientContext. A sample call would look like below:
ClientLayer client = ClientLayer.getDefaultClient();
client.executeMyMethod(client.getClientContext(), new MyMethodParameters(...));
Coming to Scala, any suggestions on how to get the same level of simplicity with respect to the ClientContext instantiation, while avoiding having it as state on the ClientLayer?
I would use the factory pattern here:
object RestClient {
class ClientContext
class MyMethodParameters
trait Client {
def operation1(params: MyMethodParameters)
}
class MyClient(val context: ClientContext) extends Client {
def operation1(params: MyMethodParameters) = {
// do something here based on the context
}
}
object ClientFactory {
val defaultContext: ClientContext = new ClientContext // set it up here
def build(context: ClientContext): Client = {
// builder logic here
// object caching can be used to avoid instantiation of duplicate objects
context match {
case _ => new MyClient(context)
}
}
def getDefaultClient = build(defaultContext)
}
def main(args: Array[String]) {
val client = ClientFactory.getDefaultClient
client.operation1(new MyMethodParameters())
}
}
My target is building a highly concurrent backend for my widgets. I'm currently exposing the backend as a web service, which receives requests to run a specific widget (using Scalatra), fetches the widget's code from the DB and runs it in an actor (using Akka), which then replies with the results. So imagine I'm doing something like:
get("/run/:id") {
...
val actor = Actor.actorOf("...").start
val result = actor !! (("Run",id), 10000)
...
}
Now I believe this is not the best concurrent solution and I should somehow combine listening for requests and running widgets in one actor implementation. How would you design this for maximum concurrency? Thanks.
You can start your actors in an Akka boot file or in your own ServletContextListener, so that they are started without being tied to a servlet.
Then you can look for them with the akka registry.
Actor.registry.actorFor[MyActor] foreach { _ !! (("Run",id), 10000) }
Apart from that, there is no real integration of Akka with Scalatra at this moment.
So for now the best you can do is use blocking requests to a bunch of actors.
I'm not sure, but I wouldn't necessarily spawn an actor for each request; rather, have a pool of widget actors to which you can send those requests. If you use a supervisor hierarchy, then you can use a supervisor to resize the pool if it is too big or too small.
class MyContextListener extends ServletContextListener {
  def contextInitialized(sce: ServletContextEvent) {
    val factory = SupervisorFactory(
      SupervisorConfig(
        OneForOneStrategy(List(classOf[Exception]), 3, 1000),
        Supervise(actorOf[WidgetPoolSupervisor], Permanent) :: Nil))
    factory.newInstance.start
  }
  def contextDestroyed(sce: ServletContextEvent) {
    Actor.registry.shutdownAll()
  }
}
class WidgetPoolSupervisor extends Actor {
self.faultHandler = OneForOneStrategy(List(classOf[Exception]), 3, 1000)
override def preStart() {
(1 to 5) foreach { _ =>
self.spawnLink[MyWidgetProcessor]
}
Scheduler.schedule(self, 'checkPoolSize, 5, 5, TimeUnit.MINUTES)
}
protected def receive = {
case 'checkPoolSize => {
//implement logic that checks how quick the actors respond and if
//it takes too long, add some actors to the pool.
//as a bonus you can keep downsizing the actor pool until it reaches 1
//or until the message starts returning too late.
}
}
}
class ScalatraApp extends ScalatraServlet {
get("/run/:id") {
// the !! construct should not appear anywhere else in your code except
// in the scalatra action. You don't want to block anywhere else, but in a
// scalatra action it's ok as the web request itself is synchronous too and needs
// to wait for the full response to have come back anyway.
Actor.registry.actorFor[MyWidgetProcessor] map {
  _ !! (("Run", id), 10000)
} getOrElse {
  throw new HeyIExpectedAResultException()
}
}
}
Please do regard the code above as pseudo code that happens to look like Scala; I just wanted to illustrate the concept.