Deploying Play Framework 2 to AppFog - memory problems

I am trying to deploy a simple Play Framework Scala app to AppFog. I've created a new Scala application and added the JAR from the AppFog documentation, then followed the steps from the deploying-to-AppFog guide.
The problem is that the application won't start when less than 900 MB of memory is reserved. The error is:
Error: Application [pralab-test] failed to start, logs information below.
====> /logs/stdout.log <====
No database found in Play configuration. Skipping auto-reconfiguration.
Play server process ID is 13276
[warn] play - Plugin [org.cloudfoundry.reconfiguration.play.JPAPlugin] is disabled
[info] play - Application started (Prod)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# pthread_getattr_np
# An error report file with more information is saved as:
# /mnt/var/vcap.local/dea/apps/pralab-test-0-d6bc1b644e85148149d759499e02b409/app/hs_err_pid13276.log
When started with more memory, the application runs and uses only about 140 MB of the declared 900 MB. Is this a startup memory peak of Play, or is there a bug in AppFog?
Do you have any successful deployments of Play applications on AppFog?
EDIT
This runs OK on cloudfoundry.com with 256 MB of memory.

I was having this same problem and couldn't get around it. I just had to do what you did and allocate 1 GB to my app when it was only using around 250 MB. I opened a customer support ticket and got no response.
I believe you are hitting the same problem that they claim to have fixed for "Java" apps but that apparently has not been rolled out to auto-detected Play apps: https://groups.google.com/forum/#!topic/appfog-users/hxBxUe3c4QI
For now, the only option is to live with the extra memory allocation.
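If AppFog lets you override the JVM options for the app (an assumption; check whether it honors a JAVA_OPTS environment variable for auto-detected Play apps), explicitly capping the JVM can keep its initial reservation inside a smaller quota. A minimal sketch using standard HotSpot flags:
JAVA_OPTS="-Xmx192m -Xss512k -XX:MaxPermSize=64m"
-Xmx caps the heap, -Xss shrinks each thread's stack (the pthread_getattr_np line in the error report above suggests the failure happened while setting up a thread), and -XX:MaxPermSize bounds the permanent generation on pre-Java-8 JVMs. The values are placeholders to tune, not recommendations.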

Related

Google Cloud Datalab deployment unsuccessful - sort of

This is a different scenario from the other questions on this topic. My deployment almost succeeded, and I can see the following lines at the end of my log:
Updating module [datalab]...done.
Jul 25 16:22:36 datalab-deploy-main-20160725-16-19-55 startupscript: Deployed module [datalab] to [https://main-dot-datalab-dot-.appspot.com]
Jul 25 16:22:36 datalab-deploy-main-20160725-16-19-55 startupscript: Step deploy datalab module succeeded.
Jul 25 16:22:36 datalab-deploy-main-20160725-16-19-55 startupscript: Deleting VM instance...
The landing page keeps showing a wait bar indicating the deployment is still in progress. I have tried deploying several times in the last couple of days.
About the additions described on the landing page:
An App Engine "datalab" module is added. - when I click on the pop-out url "https://datalab-dot-.appspot.com/" it throws an error page with "404 page not found"
A "datalab" Compute Engine network is added. - Under "Compute Engine > Operations" I can see a create instance for datalab deployment with my id and a delete instance operation with *******-ompute#developer.gserviceaccount.com id. not sure what it means.
Datalab branch is added to the git repo- Yes and with all the components.
I think the deployment is partially successful. When I visit the landing page again, the only option I see is to deploy the datalab again and not to start it. Can someone spot the problem ? Appreciate the help.
I read the other posts on this topic and tried to verify my deployment using "https://console.developers.google.com/apis/api/source/overview?project=", and I get the following message:
The API doesn't exist or you don't have permission to access it
You can try looking at the App Engine dashboard here to verify that there is a "datalab" service deployed.
If that is missing, then you need to redeploy (or switch to the new locally-run version).
If it is present, then you should also be able to see a "datalab" network here, and a VM instance named something like "gae-datalab-main-..." here. If either of those is missing, try going back to the App Engine console, deleting the "datalab" service, and redeploying.
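If you prefer the command line, the same three checks can be scripted with the gcloud CLI (a sketch; assumes the Cloud SDK is installed and YOUR_PROJECT stands in for your project ID):
gcloud app services list --project=YOUR_PROJECT
gcloud compute networks list --project=YOUR_PROJECT
gcloud compute instances list --project=YOUR_PROJECT --filter="name~gae-datalab"
The first command should list a "datalab" service, the second a "datalab" network, and the third a VM named like "gae-datalab-main-...".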

PlayFramework 2.2.6: Advanced HTTP server configuration maxInitialLineLength

We are trying to send GET and POST requests with a length greater than 4096 bytes to our REST API implemented with Play Framework 2.2.6.
After long Google research we have tried nearly everything, and the solution seems to be passing the following two arguments when starting our server via play. We receive no error message about wrong parameters, but when we send a large request to the API we still get the error:
TooLongFrameException: An HTTP line is larger than 4096 Bytes
We are running the server with the following command:
<PathToPlay>\play-2.2.6\play.bat -org.jboss.netty.maxHeaderSize:102400 -org.jboss.netty.maxInitialLineLength:102400 run
First of all, the path you use to start your application seems off. When you create a new Play application, a play.bat or activator.bat file is automatically created in your project root folder, so there is no need to call a specific Play installation outside of your project folder.
The parameters for setting the maximum initial line and header length can be found in the Play documentation.
http.netty.maxInitialLineLength
- The maximum length for the initial line of an HTTP request, defaults to 4096
http.netty.maxHeaderSize
- The maximum size for the entire HTTP header, defaults to 8192
Development mode
To start your application in development mode, call:
/path/to/project/play run -Dhttp.netty.maxInitialLineLength=102400 -Dhttp.netty.maxHeaderSize=102400
If you've used Activator to create your project replace play with activator.
Production mode
After you've packaged your application for production with play dist, you can set the parameters by calling:
/path/to/publishedApp/bin/<nameOfApp> -Dhttp.netty.maxInitialLineLength=102400 -Dhttp.netty.maxHeaderSize=102400
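To check that the new limits are actually in effect, you can fire a request whose request line exceeds the old 4096-byte default, for example (the /api/test route is hypothetical; use any route your app serves):
curl -v "http://localhost:9000/api/test?x=$(printf 'a%.0s' {1..8000})"
With the default limits this triggers the TooLongFrameException in the server log; with the raised limits the request should be served normally.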

Apache memory leak while using camel-server

I am using Camel to configure REST endpoints in my application.
Adding all the files causes no problem, and the project works fine up to the point where I edit camel-server.xml. (Note that I am updating knowledge-service.xml, as this is where the REST endpoint URI should hit.) The moment I update camel-server.xml, deployment of the WAR file in Tomcat starts throwing the following error:
SEVERE: The web application [/app] created a ThreadLocal with key of type [org.drools.core.common.UpgradableReentrantReadWriteLock$1] (value [org.drools.core.common.UpgradableReentrantReadWriteLock$1@3968e529]) and a value of type [int[]] (value [[I@159d0431]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
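No definitive fix is given here, but one common mitigation for this class of Tomcat warning is to dispose the Drools resources explicitly when the webapp is stopped, so their ThreadLocals are released before Tomcat discards the webapp's classloader. A minimal sketch; DroolsSessionHolder is a hypothetical class standing in for wherever your application keeps its knowledge sessions:
// Register this in web.xml with a <listener> entry.
public class DroolsCleanupListener implements javax.servlet.ServletContextListener {

    @Override
    public void contextInitialized(javax.servlet.ServletContextEvent sce) {
        // Nothing to set up; sessions are created elsewhere in the app.
    }

    @Override
    public void contextDestroyed(javax.servlet.ServletContextEvent sce) {
        // dispose() releases a session's internal resources, including
        // thread-local state, before the classloader goes away.
        DroolsSessionHolder.disposeAllSessions(); // hypothetical helper
    }
}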

Load balancing MySQL ndbcluster

I have successfully set up ndbcluster version 7.1.26.
It contains two data nodes [NDBD], two MySQL nodes [MYSQLD], and one management node [MGMD].
Replication works successfully.
My web application is deployed in JBoss 5.0.1 and uses JNDI for connection resources, which are specified in the application-specific ds.xml file in load-balanced URL form, e.g. jdbc:mysql:loadbalance://host1:port1,host2:port2/databaseName.
host1: refers to the first mysqld node, and port1 to the port it is running on.
host2: refers to the second mysqld node, and port2 to the port it is running on.
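For reference, a minimal sketch of what such a load-balanced datasource can look like in a JBoss 5 *-ds.xml (the JNDI name and credentials are placeholders, and the loadBalanceBlacklistTimeout option is an assumption to verify against your Connector/J version):
<datasources>
  <local-tx-datasource>
    <jndi-name>ClusterDS</jndi-name>
    <connection-url>jdbc:mysql:loadbalance://host1:3306,host2:3306/databaseName?loadBalanceBlacklistTimeout=5000</connection-url>
    <driver-class>com.mysql.jdbc.Driver</driver-class>
    <user-name>appuser</user-name>
    <password>secret</password>
  </local-tx-datasource>
</datasources>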
When both of the [MySQLD] nodes are up and running everything works fine and cluster responds well, replicates data, and data retrieval operations also work properly.
But issues arise when either of the [MYSQLD] nodes goes down. Data still gets inserted/updated/replicated, but the application is unable to retrieve data from the cluster, and the web page stays busy, i.e. stuck retrieving data. As soon as the node that was down comes back up, the application responds properly and shows the data retrieved from the cluster.
At JBoss 5.0.1 startup, a NullPointerException was thrown in LoadBalancingConnectionProxy.invoke(LoadBalancingConnectionProxy.java:439). Does this exception play any role in the issues explained above?
If anyone has faced issues like these and has a solution, please let me know.
Thanks and regards.
I have resolved the issue; it was a bug in the Connector/J version.
The project I am working on already had both the buggy jar, mysql-connector-java-5.0.8.jar, and the version in which the issue is resolved, mysql-connector-java-5.1.13-bin.jar, on the classpath.
When I removed mysql-connector-java-5.0.8.jar, my issues were resolved.
The only problem was that the Connector/J driver was being loaded from the buggy jar.
The bug ID and URL that refer to this issue: http://bugs.mysql.com/bug.php?id=31053
Thanks for your consideration.
Are you using different user IDs and passwords for each of the hosts (host1, host2) specified in the connection URL (either directly or via a separate configuration tag)?

Clean Liferay installation for portlet development?

Suppose that I have to develop a simple Liferay portlet. Is it possible to prepare a stripped-down installation that contains only the very basic things? I have erased many of the webapp folders, but Liferay still takes 73 seconds to load. What more can be disabled?
You can delete everything except the ROOT folder under webapps.
To speed things up you can also use an in-memory database and disable some Spring services.
Below is the portal-ext.properties configuration for the database and Spring services that I use for testing.
#In memory database for testing purpose.
jdbc.default.driverClassName=org.hsqldb.jdbcDriver
jdbc.default.url=jdbc:hsqldb:mem:lportal
jdbc.default.username=sa
jdbc.default.password=
ehcache.portal.cache.manager.jmx.enabled=false
value.object.listener.com.liferay.portal.model.LayoutSet=
# Disable the scheduler for Unit testing
scheduler.enabled=false
hibernate.configs=\
META-INF/mail-hbm.xml,\
META-INF/portal-hbm.xml,\
META-INF/ext-hbm.xml
# Comment or uncomment spring configuration files below as needed.
spring.configs=\
META-INF/base-spring.xml,\
META-INF/hibernate-spring.xml,\
META-INF/infrastructure-spring.xml,\
META-INF/management-spring.xml,\
META-INF/util-spring.xml,\
META-INF/jpa-spring.xml,\
# META-INF/audit-spring.xml,\
# META-INF/cluster-spring.xml,\
# META-INF/editor-spring.xml,\
META-INF/jcr-spring.xml,\
# META-INF/ldap-spring.xml,\
META-INF/messaging-core-spring.xml,\
# META-INF/messaging-misc-spring.xml,\
# META-INF/poller-spring.xml,\
# META-INF/rules-spring.xml,\
# META-INF/scheduler-spring.xml,\
# META-INF/scripting-spring.xml,\
# META-INF/search-spring.xml,\
# META-INF/workflow-spring.xml,\
META-INF/counter-spring.xml,\
META-INF/document-library-spring.xml,\
META-INF/mail-spring.xml,\
META-INF/portal-spring.xml,\
META-INF/portlet-container-spring.xml,\
# META-INF/dynamic-data-source-spring.xml,\
# META-INF/shard-data-source-spring.xml,\
# META-INF/memcached-spring.xml,\
# META-INF/monitoring-spring.xml,\
META-INF/ext-spring.xml
How much memory do you have in your computer? What memory settings do you have for Liferay? If the computer is using any swap space during startup, more main memory (or fewer apps in memory) will help most.
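With a Liferay Tomcat bundle those JVM memory settings are typically adjusted via CATALINA_OPTS in tomcat/bin/setenv.sh (or setenv.bat on Windows); the values below are placeholders to tune, not recommendations:
CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx1024m -XX:MaxPermSize=256m"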
And, probably more importantly: what's the reason for you to optimize load time? Typically you rarely start/restart the server, unless you're constantly redeploying your ext plugins.
If you're using the Liferay development tools (Liferay IDE or Liferay Developer Studio) you'll be able to deploy into the running system automatically. The Plugin SDK does the same thing from Ant.
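For example, a Plugin SDK portlet can be hot-deployed into the running server from its project folder (a sketch; the path depends on your SDK layout):
cd liferay-plugins-sdk/portlets/my-portlet
ant deploy
The deploy target builds the WAR and copies it into Liferay's auto-deploy directory, where the running server picks it up without a restart.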