How to avoid OutOfMemoryError using Kotlin Coroutines - apache-kafka

I have a Ktor application with two Kafka consumers running in parallel. To achieve parallel execution I'm using coroutines, but eventually I get a java.lang.OutOfMemoryError: Java heap space in one of the consumers. I'm no expert in either Kafka or Kotlin coroutines, so I'm not sure which of them could be causing the issue. I would like to rule out the coroutine implementation first, because the Kafka consumer implementation is far more complex and works fine in other applications without coroutines. The code looks like this:
private val parentJob = Job()

private val embeddedServer = embeddedServer(Netty, config.port) {
    // setting up ktor, routing, etc...
    startKafkaService(kafkaService1, "FirstConsumer", log, parentJob)
    startKafkaService(kafkaService2, "SecondConsumer", log, parentJob)
}

fun <T> CoroutineScope.startKafkaService(
    kafkaService: KafkaService<T, TbotUser>,
    serviceName: String,
    logger: Logger,
    parentJob: CompletableJob
) {
    val handler = CoroutineExceptionHandler { context, exception ->
        val jobName = context[CoroutineName.Key]?.name ?: Thread.currentThread().name
        logger.error("Exception caught in $jobName:\n${exception.stackTraceToString()}")
    }
    launch(parentJob + handler + CoroutineName(serviceName)) {
        while (isActive) kafkaService.startConsuming()
        kafkaService.close()
    }
}

fun main(args: Array<String>) {
    val logger = JsonLogger("...")
    embeddedServer.start(true)
    Runtime.getRuntime().addShutdownHook(object : Thread() {
        override fun run() = runBlocking {
            parentJob.cancelAndJoin()
        }
    })
}
and this is the error log I'm getting:
09:24:07.165 [kafka-coordinator-heartbeat-thread] ERROR org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=clientid, groupId=groupid] Heartbeat thread failed due to unexpected error
java.lang.OutOfMemoryError: Java heap space
at java.base/java.util.Arrays.copyOf(Arrays.java:3537)
at java.base/java.lang.String.encodeUTF8(String.java:1265)
at java.base/java.lang.String.encode(String.java:825)
at java.base/java.lang.String.getBytes(String.java:1783)
at org.apache.kafka.common.message.HeartbeatRequestData.addSize(HeartbeatRequestData.java:239)
at org.apache.kafka.common.protocol.SendBuilder.buildSend(SendBuilder.java:218)
at org.apache.kafka.common.protocol.SendBuilder.buildRequestSend(SendBuilder.java:187)
at org.apache.kafka.common.requests.AbstractRequest.toSend(AbstractRequest.java:101)
at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:524)
at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:500)
at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:460)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.trySend(ConsumerNetworkClient.java:499)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:255)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:306)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1386)
09:24:07.407 [eventLoopGroupProxy-4-1] ERROR ktor.application - Unhandled: GET - /_/health
java.lang.OutOfMemoryError: Java heap space
09:24:08.400 [DefaultDispatcher-worker-2] INFO ktor.application - Responding at http://0.0.0.0:8080
09:24:09.151 [DefaultDispatcher-worker-1] ERROR App - Exception caught in SecondConsumer:
java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1468)
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.base/java.util.Arrays.copyOf(Arrays.java:3537)
at java.base/java.lang.String.encodeUTF8(String.java:1265)
at java.base/java.lang.String.encode(String.java:825)
at java.base/java.lang.String.getBytes(String.java:1783)
at org.apache.kafka.common.message.HeartbeatRequestData.addSize(HeartbeatRequestData.java:239)
at org.apache.kafka.common.protocol.SendBuilder.buildSend(SendBuilder.java:218)
at org.apache.kafka.common.protocol.SendBuilder.buildRequestSend(SendBuilder.java:187)
at org.apache.kafka.common.requests.AbstractRequest.toSend(AbstractRequest.java:101)
at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:524)
at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:500)
at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:460)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.trySend(ConsumerNetworkClient.java:499)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:255)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:306)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1386)
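For what it's worth, here is a minimal variant of startKafkaService that rules out two coroutine-side suspects, assuming startConsuming() blocks on the consumer poll loop (which the other details suggest): running that blocking loop on the default dispatcher, and skipping close() when the job is cancelled.

// Sketch only: Dispatchers.IO keeps the blocking poll loop off the default pool,
// and try/finally closes the consumer even when the coroutine is cancelled.
fun <T> CoroutineScope.startKafkaService(
    kafkaService: KafkaService<T, TbotUser>,
    serviceName: String,
    logger: Logger,
    parentJob: CompletableJob
) {
    val handler = CoroutineExceptionHandler { context, exception ->
        val jobName = context[CoroutineName.Key]?.name ?: Thread.currentThread().name
        logger.error("Exception caught in $jobName:\n${exception.stackTraceToString()}")
    }
    launch(parentJob + handler + CoroutineName(serviceName) + Dispatchers.IO) {
        try {
            while (isActive) kafkaService.startConsuming()
        } finally {
            kafkaService.close()
        }
    }
}

Neither change explains a heap-space error by itself, but with them in place the coroutine wiring is simple enough that the leak is more plausibly in what the consumers buffer per poll.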

Related

Why Spark structured streaming job is not terminating even after raising exception

I am raising a custom exception to test failure in my structured streaming job, as below. I see the query gets terminated, but I'm unable to understand why the driver script does not fail with a non-zero exit code.
streamingDF.writeStream
  .trigger(Trigger.ProcessingTime(10000L))
  .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
    val transformedDF: DataFrame = DoSomeProcessing(batchDF)
    if (batchId == 1) {
      throw new Exception("Custom Exception as batchId is 1")
    }
  }
  .start()
I get the trace below on my console, but the driver script is not exiting and no new logs are printed to the console.
Exception in thread "main" org.apache.spark.sql.streaming.StreamingQueryException: Custom Exception as batchId is 1
=== Streaming Query ===
Identifier: [id = 6f4c3b4c-bc30-46fe-93ef-8378c23380ab, runId = 1241cb37-493b-4882-ab28-9df8a8c6fb1a]
Current Committed Offsets: ...
Current Available Offsets: ...
Current State: ACTIVE
Thread State: RUNNABLE
Logical Plan:
RepartitionByExpression [timestamp#12], 10
...
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:295)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Caused by: java.lang.Exception: Custom Exception as batchId is 1
at MySteamingApp$$anonfun$startSparkStructuredStreaming$1.apply(MySteamingApp.scala:61)
at MySteamingApp$$anonfun$startSparkStructuredStreaming$1.apply(MySteamingApp.scala:57)
at org.apache.spark.sql.execution.streaming.sources.ForeachBatchSink.addBatch(ForeachBatchSink.scala:35)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5$$anonfun$apply$17.apply(MicroBatchExecution.scala:534)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5.apply(MicroBatchExecution.scala:532)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch(MicroBatchExecution.scala:531)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:198)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:160)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
... 1 more
I think the number of allowed task failures is what keeps the job alive; it is configured higher than one:
spark.task.maxFailures (default: 4) — number of failures of any particular task before giving up on the job. The total number of failures spread across different tasks will not cause the job to fail; a particular task has to fail this number of attempts. Should be greater than or equal to 1. Number of allowed retries = this value - 1.
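As a sketch (the builder call below is illustrative; the question's MySteamingApp presumably configures its session elsewhere), lowering it makes the first task failure fail the job:

// Sketch: with spark.task.maxFailures = 1, a single task failure fails the job.
val spark = SparkSession.builder()
  .appName("MySteamingApp")
  .config("spark.task.maxFailures", "1")
  .getOrCreate()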
Further, have a look at: Is there a way to dynamically stop Spark Structured Streaming?

How to handle UnknownProducerIdException

We are having some trouble with Spring Cloud and Kafka: sometimes our microservice throws an UnknownProducerIdException. This happens when the transactional.id.expiration.ms parameter has expired on the broker side.
My question: would it be possible to catch that exception and retry the failed message? If so, what would be the best way to handle it?
I have taken a look at:
- https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=89068820
- Kafka UNKNOWN_PRODUCER_ID exception
We are using Spring Cloud Hoxton.RELEASE and Spring Kafka 2.2.4.RELEASE.
We are using AWS's managed Kafka offering, so we can't set a new value for the property I mentioned before.
Here is some trace of the exception:
2020-04-07 20:54:00.563 ERROR 5188 --- [ad | producer-2] o.a.k.c.p.internals.TransactionManager : [Producer clientId=producer-2] The broker returned org.apache.kafka.common.errors.UnknownProducerIdException: This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producerId are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception. for topic-partition test.produce.another-2 with producerId 35000, epoch 0, and sequence number 8
2020-04-07 20:54:00.563 INFO 5188 --- [ad | producer-2] o.a.k.c.p.internals.TransactionManager : [Producer clientId=producer-2] ProducerId set to -1 with epoch -1
2020-04-07 20:54:00.565 ERROR 5188 --- [ad | producer-2] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='{...}' to topic <some-topic>:
To reproduce this exception:
- I used the Confluent Docker images and set the environment variable KAFKA_TRANSACTIONAL_ID_EXPIRATION_MS to 10 seconds, so I wouldn't have to wait too long for the exception to be thrown.
- In another process, send one message every 10 seconds to the topic the Java app listens on.
Here is a code example:
File Bindings.java:
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;

public interface Bindings {

    @Input("test-input")
    SubscribableChannel testListener();

    @Output("test-output")
    MessageChannel testProducer();
}
File application.yml (don't forget to set the environment variable KAFKA_HOST):
spring:
  cloud:
    stream:
      kafka:
        binder:
          auto-create-topics: true
          brokers: ${KAFKA_HOST}
          transaction:
            producer:
              error-channel-enabled: true
          producer-properties:
            acks: all
            retry.backoff.ms: 200
            linger.ms: 100
            max.in.flight.requests.per.connection: 1
            enable.idempotence: true
            retries: 3
            compression.type: snappy
            request.timeout.ms: 5000
            key.serializer: org.apache.kafka.common.serialization.StringSerializer
          consumer-properties:
            session.timeout.ms: 20000
            max.poll.interval.ms: 350000
            enable.auto.commit: true
            allow.auto.create.topics: true
            auto.commit.interval.ms: 12000
            max.poll.records: 5
            isolation.level: read_committed
          configuration:
            auto.offset.reset: latest
      bindings:
        test-input:
          # contentType: text/plain
          destination: test.produce
          group: group-input
          consumer:
            maxAttempts: 3
            startOffset: latest
            autoCommitOnError: true
            queueBufferingMaxMessages: 100000
            autoCommitOffset: true
        test-output:
          # contentType: text/plain
          destination: test.produce.another
          group: group-output
          producer:
            acks: all
debug: true
The listener handler:
@SpringBootApplication
@EnableBinding(Bindings.class)
public class PocApplication {

    private static final Logger log = LoggerFactory.getLogger(PocApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(PocApplication.class, args);
    }

    @Autowired
    private BinderAwareChannelResolver binderAwareChannelResolver;

    @StreamListener(Topics.TESTLISTENINPUT)
    public void listen(Message<?> in, String headerKey) {
        final MessageBuilder builder;
        MessageChannel messageChannel;
        messageChannel = this.binderAwareChannelResolver.resolveDestination("test-output");
        Object payload = in.getPayload();
        builder = MessageBuilder.withPayload(payload);
        try {
            log.info("Event received: {}", in);
            if (!messageChannel.send(builder.build())) {
                log.error("Something happened trying to send the message! {}", in.getPayload());
            }
            log.info("Commit success");
        } catch (UnknownProducerIdException e) {
            log.error("UnkownProducerIdException catched ", e);
        } catch (KafkaException e) {
            log.error("KafkaException catched ", e);
        } catch (Exception e) {
            System.out.println("Commit failed " + e.getMessage());
        }
    }
}
Regards
} catch (UnknownProducerIdException e) {
    log.error("UnkownProducerIdException catched ", e);
To catch exceptions there, you need to set the sync Kafka producer property (https://cloud.spring.io/spring-cloud-static/spring-cloud-stream-binder-kafka/3.0.3.RELEASE/reference/html/spring-cloud-stream-binder-kafka.html#kafka-producer-properties). Otherwise, the error comes back asynchronously.
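For example, a minimal sketch of that property applied to the test-output binding from the question (nesting per the linked Kafka binder docs):

spring:
  cloud:
    stream:
      kafka:
        bindings:
          test-output:
            producer:
              sync: true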
You should not "eat" the exception there; it must be thrown back to the container so the container will roll back the transaction.
Also,
}catch (Exception e) {
System.out.println("Commit failed " + e.getMessage());
}
The commit is performed by the container after the stream listener returns to the container so you will never see a commit error here; again, you must let the exception propagate back to the container.
The container will retry the delivery according to the consumer binding's retry configuration.
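As a sketch, that retry behavior is tuned on the consumer binding; maxAttempts is already set to 3 in the question's configuration, and the backoff values below are illustrative:

spring:
  cloud:
    stream:
      bindings:
        test-input:
          consumer:
            maxAttempts: 3
            backOffInitialInterval: 1000
            backOffMultiplier: 2.0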
You can probably also use the callback to handle the exception. I'm not sure about the Spring library for Kafka, but if you are using the plain Kafka client you can do something like this:
producer.send(record, new Callback() {
    public void onCompletion(RecordMetadata metadata, Exception e) {
        if (e != null) {
            e.printStackTrace();
            if (e.getClass().equals(UnknownProducerIdException.class)) {
                logger.info("UnknownProducerIdException caught");
                // Naive retry: resend until the retry budget is used up.
                while (--retry >= 0) {
                    send(topic, partition, msg);
                }
            }
        } else {
            logger.info("The offset of the record we just sent is: " + metadata.offset());
        }
    }
});

How to overcome Scalatra initialization issue: NoSuchMethodError: javax.servlet.ServletContext.getFilterRegistration?

This is my first time using Scalatra, and I'm using it outside of SBT (building and running using mill). I get the following error which seems to be about a missing dependency.
2018.05.23 18:26:30 [main] INFO org.scalatra.servlet.ScalatraListener - The cycle class name from the config: ScalatraBootstrap
2018.05.23 18:26:30 [main] INFO org.scalatra.servlet.ScalatraListener - Initializing life cycle class: ScalatraBootstrap
2018.05.23 18:26:30 [main] ERROR org.scalatra.servlet.ScalatraListener - Failed to initialize scalatra application at
java.lang.NoSuchMethodError: javax.servlet.ServletContext.getFilterRegistration(Ljava/lang/String;)Ljavax/servlet/FilterRegistration;
at org.scalatra.servlet.RichServletContext.mountFilter(RichServletContext.scala:162)
at org.scalatra.servlet.RichServletContext.mount(RichServletContext.scala:85)
at org.scalatra.servlet.RichServletContext.mount(RichServletContext.scala:93)
at org.scalatra.servlet.RichServletContext.mount(RichServletContext.scala:90)
at ScalatraBootstrap.init(ScalatraBootstrap.scala:8)
at org.scalatra.servlet.ScalatraListener.configureCycleClass(ScalatraListener.scala:66)
at org.scalatra.servlet.ScalatraListener.contextInitialized(ScalatraListener.scala:22)
at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:890)
at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:558)
at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:853)
at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:370)
at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1497)
at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1459)
at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:785)
at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:287)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
at org.eclipse.jetty.server.Server.start(Server.java:419)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.server.Server.doStart(Server.java:386)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at com.example.JettyLauncher$.main(JettyLauncher.scala:20)
at com.example.JettyLauncher.main(JettyLauncher.scala)
Here are the dependencies I'm using:
val jettyVersion = "9.4.10.v20180503"

def ivyDeps = Agg(
  ivy"org.scalatra::scalatra:2.6.3",
  ivy"javax.servlet:servlet-api:2.5",
  ivy"org.eclipse.jetty:jetty-server:$jettyVersion",
  ivy"org.eclipse.jetty:jetty-servlet:$jettyVersion",
  ivy"org.eclipse.jetty:jetty-webapp:$jettyVersion",
)
My JettyLauncher is a straight copy from the website, so far, except I changed the resourceBase to something that actually exists (but it didn't help):
object JettyLauncher { // this is my entry object as specified in sbt project definition
  def main(args: Array[String]) {
    val port = if (System.getenv("PORT") != null) System.getenv("PORT").toInt else 5001
    val server = new Server(port)
    val context = new WebAppContext()
    context setContextPath "/"
    context.setResourceBase("repeater")
    context.addEventListener(new ScalatraListener)
    context.addServlet(classOf[DefaultServlet], "/")
    server.setHandler(context)
    server.start
    server.join
  }
}
My LifeCycle class is also fairly minimal:
class ScalatraBootstrap extends LifeCycle {
  override def init(context: ServletContext) {
    context mount (new RepeatAll, "/*")
  }
}
UPDATE
I changed to using ScalatraServlet instead of ScalatraFilter, but I get a similar issue:
2018.05.23 18:39:24 [main] ERROR org.scalatra.servlet.ScalatraListener - Failed to initialize scalatra application at
java.lang.NoSuchMethodError: javax.servlet.ServletContext.getServletRegistration(Ljava/lang/String;)Ljavax/servlet/ServletRegistration;
at org.scalatra.servlet.RichServletContext.mountServlet(RichServletContext.scala:127)
at org.scalatra.servlet.RichServletContext.mount(RichServletContext.scala:84)
at org.scalatra.servlet.RichServletContext.mount(RichServletContext.scala:93)
at org.scalatra.servlet.RichServletContext.mount(RichServletContext.scala:90)
at ScalatraBootstrap.init(ScalatraBootstrap.scala:8)
Update 2
Another important part of the stacktrace I missed posting earlier:
2018.05.23 18:39:24 [main] WARN org.eclipse.jetty.webapp.WebAppContext - Failed startup of context o.e.j.w.WebAppContext#3d74bf60{/,file:///home/brandon/workspace/sbh/repeater,UNAVAILABLE}
java.lang.NoSuchMethodError: javax.servlet.ServletContext.getServletRegistration(Ljava/lang/String;)Ljavax/servlet/ServletRegistration;
at org.scalatra.servlet.RichServletContext.mountServlet(RichServletContext.scala:127)
at org.scalatra.servlet.RichServletContext.mount(RichServletContext.scala:84)
at org.scalatra.servlet.RichServletContext.mount(RichServletContext.scala:93)
I tried putting a WEB-INF/web.xml under repeater as specified above, but same result.
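For what it's worth, getFilterRegistration and getServletRegistration only exist as of the Servlet 3.0 API, while the servlet-api:2.5 artifact in the dependency list predates them (Jetty 9.4 itself targets Servlet 3.1), which would produce exactly this NoSuchMethodError. A sketch of the dependency swap, using the Servlet 3.1 API coordinates from Maven Central:

def ivyDeps = Agg(
  ivy"org.scalatra::scalatra:2.6.3",
  // Servlet 3.1 API; replaces servlet-api:2.5, which lacks
  // ServletContext.getFilterRegistration/getServletRegistration (added in 3.0)
  ivy"javax.servlet:javax.servlet-api:3.1.0",
  ivy"org.eclipse.jetty:jetty-server:$jettyVersion",
  ivy"org.eclipse.jetty:jetty-servlet:$jettyVersion",
  ivy"org.eclipse.jetty:jetty-webapp:$jettyVersion",
)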

Logback: ExceptionInInitializerError only on console, not in file

I'm using Logback 1.7.5 with Play Framework 2.2.4, and while the application is running I get this error:
Exception in thread "Thread-5" java.lang.ExceptionInInitializerError
at controllers.db.SyncDBManager.createDeviceCollectionsIfNotExists(SyncDBManager.scala)
at server.impl.logic.controller.DeviceInitializer.checkAuthenticationResponse(DeviceInitializer.java:226)
at server.impl.logic.controller.DeviceInitializer.processReceivedFrames(DeviceInitializer.java:117)
at server.impl.logic.controller.DeviceController.onReceivedPackets(DeviceController.java:77)
at server.impl.logic.io.DeviceReader.run(DeviceReader.java:130)
Caused by: java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:313)
at scala.None$.get(Option.scala:311)
at controllers.base.MongoSyncHelper$class.$init$(MongoSyncHelper.scala:16)
at controllers.db.SyncDBManager$.<init>(SyncDBManager.scala:32)
at controllers.db.SyncDBManager$.<clinit>(SyncDBManager.scala)
... 5 more
But only on the console. How can I catch such an exception and write it to a file?
This looks like an ordinary stack trace printed to the console via sysout. Depending on your design, if you want to log an exception with the Play logger you should do it in a catch block, using one of the logger's methods and passing the exception as a parameter. For example:
try {
  // code
} catch {
  case e: Exception => play.api.Logger.error("An error occurred", e)
}
The logger level is up to you. If you don't have a catch block and want to log an unexpected exception that occurs anywhere in the application, you should log it in the Global object by overriding the onError method:
override def onError(request: RequestHeader, e: Throwable): Future[SimpleResult] = {
  play.api.Logger.error("An error occurred", e)
  super.onError(request, e)
}
Remember that the onError method is called only in production mode.
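Also note that the trace in the question comes from a background thread ("Thread-5"), which never passes through onError. As a sketch, a default uncaught-exception handler would route those to the logger as well:

// Sketch: log uncaught exceptions from any thread via the Play logger.
Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler {
  def uncaughtException(t: Thread, e: Throwable): Unit =
    play.api.Logger.error(s"Uncaught exception in thread ${t.getName}", e)
})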

scala nsc IMain bind() speed and memory issues

We are using tools.nsc.interpreter.IMain's bind() and interpret() methods to execute Scala scripts on a server. This is on Scala 2.9.1 and Java 7u2.
After repeatedly using the same IMain instance, the bind() method suddenly starts to take a very long time (5-6 seconds and even longer). I have tried close() and reset(), but nothing helps. The weird thing is that the sudden slowness only occurs after several uses.
Code snippet (that is executed over and over again):
main.bind("status", status)
try {
main.interpret(prepare(restriction, input))
} catch {
case e: Exception =>
status.setCode("ERR6")
status.setSummary("Error Interpreting Restriction")
status.setType(MetaFileElements.ERROR_VALUE)
status.setValue("Restriction: \"" + restriction + "\", Input: \"" + input + "\"")
}
Another issue: eventually the process crashes with this error:
Exception in thread "main" java.lang.OutOfMemoryError: PermGen space
at java.lang.ClassLoader.findBootstrapClass(Native Method)
at java.lang.ClassLoader.findBootstrapClassOrNull(ClassLoader.java:1061)
at java.lang.ClassLoader.loadClass(ClassLoader.java:412)
at java.lang.ClassLoader.loadClass(ClassLoader.java:410)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
at scala.tools.nsc.util.Exceptional$.unwrap(Exceptional.scala:140)
at scala.tools.nsc.interpreter.IMain$Request$$anonfun$handleException$1$1.apply(IMain.scala:821)
at scala.tools.nsc.interpreter.IMain$Request$$anonfun$handleException$1$1.apply(IMain.scala:818)
at scala.tools.nsc.interpreter.IMain$$anonfun$withoutBindingLastException$2.apply(IMain.scala:228)
at scala.util.control.Exception$Catch.apply(Exception.scala:88)
at scala.tools.nsc.interpreter.IMain.withoutBindingLastException(IMain.scala:226)
at scala.tools.nsc.interpreter.IMain$Request.handleException$1(IMain.scala:818)
at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:838)
at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:471)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:503)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:468)
at scala.tools.nsc.interpreter.IMain.bind(IMain.scala:525)
at scala.tools.nsc.interpreter.IMain.bind(IMain.scala:544)
at scala.tools.nsc.interpreter.IMain.bind(IMain.scala:545)
at com.nomura.fi.spg.kozo.meta.client.helper.RestrictionsHelper$.execute(RestrictionsHelper.scala:22)
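For context, every interpret() call compiles new classes that stay referenced by the interpreter's class loader, so a long-lived IMain tends to fill PermGen on Java 7. A sketch of one common workaround: raise PermGen (run the JVM with e.g. -XX:MaxPermSize=256m) and periodically replace the interpreter instance. The threshold and settings below are illustrative, not from the original question:

import scala.tools.nsc.Settings
import scala.tools.nsc.interpreter.IMain

val settings = new Settings
settings.usejavacp.value = true

var main = new IMain(settings)
var uses = 0

def interpreterForNextUse(): IMain = {
  uses += 1
  if (uses % 100 == 0) { // arbitrary recycle threshold
    main.close()         // lets the old instance's generated classes be collected
    main = new IMain(settings)
  }
  main
}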