Play Framework Ubuntu Rogue Process - scala

I've packaged my Play application using the sbt-native-packager Debian Plugin. I installed the .deb file using the typical sudo dpkg -i tyrion_1.0-SNAPSHOT_all.deb. Once I did that, it created the daemon user and group and started the process as per the following:
aczerwon@vps57610:~/work/tyrion/target$ sudo dpkg -i tyrion_1.0-SNAPSHOT_all.deb
Selecting previously unselected package tyrion.
(Reading database ... 53135 files and directories currently installed.)
Preparing to unpack tyrion_1.0-SNAPSHOT_all.deb ...
Unpacking tyrion (1.0-SNAPSHOT) ...
Setting up tyrion (1.0-SNAPSHOT) ...
Creating system group: tyrion
Creating system user: tyrion in tyrion with tyrion user-daemon and shell /bin/false
tyrion start/running, process 30525
Processing triggers for ureadahead (0.100.0-16) ...
I see a java process running at 50% - which is nuts because the app should be idle. I'm assuming it's using the application.conf configuration, but I get an ERR_CONNECTION_REFUSED when I try to hit the website.
Process is Starting and Stopping
Watching top, I now see the CPU is pinned because the process is starting and dying over and over. The pid is changing and VisualVM can't see it - it's not listed.

An ERR_CONNECTION_REFUSED error is probably due to a misconfiguration of Play. See the sbt-native-packager docs and the Play PID configuration.
Example configuration
javaOptions in Universal ++= Seq(
  // Since Play uses a separate pid file, we have to provide it with a proper path
  s"-Dpidfile.path=/var/run/${packageName.value}/play.pid",
  // set the HTTP port explicitly
  "-Dhttp.port=9000"
)
For the high CPU utilization I would recommend profiling the app with VisualVM or Mission Control to see what's going on.
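If VisualVM can't attach locally (the app runs as the tyrion daemon user), one option is to expose a JMX port so you can connect remotely. These are standard JVM flags rather than anything specific to the packaging plugin, and the sketch below disables authentication and SSL, so treat it as a debugging-only example (the port number is arbitrary):
javaOptions in Universal ++= Seq(
  // expose JMX so VisualVM / Mission Control can attach remotely (debug only: no auth, no SSL)
  "-Dcom.sun.management.jmxremote",
  "-Dcom.sun.management.jmxremote.port=9010",
  "-Dcom.sun.management.jmxremote.authenticate=false",
  "-Dcom.sun.management.jmxremote.ssl=false"
)
On some setups you may also need -Djava.rmi.server.hostname so the RMI connector advertises a reachable address.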
Update
For Play applications the PID file must be named play.pid, otherwise Play doesn't start up.

Related

Problem setting up some critical paths when I run Wildfly 20 as a service

I have a problem setting up some critical paths when I run Wildfly 20 as a service.
When I install (in "VM1") Wildfly in /home/myuser/ instead of /opt and NOT as a service and run it with the following, I am able to use the Admin console's "Test Connection" to connect to a Sybase SQL Anywhere database using the sajdbc4 driver.
cd ~/wildfly-20.0.1.Final/bin
export LD_LIBRARY_PATH=/home/myuser/wildfly-20.0.1.Final/modules/system/layers/base/com/sybase/main
export CLASSPATH=.:/home/myuser/wildfly-20.0.1.Final/modules/system/layers/base/com/sybase/main/sajdbc4.jar
./standalone.sh
LD_LIBRARY_PATH sets the path to the driver support files.
On the other hand, when I install Wildfly (in "VM2") exactly the same way as before except for installing into /opt and the extra steps to run Wildfly as a service as below, the Admin console's "Test Connection" fails with:
cd ~/wildfly-20.0.1.Final/bin
export LD_LIBRARY_PATH=/opt/wildfly-20.0.1.Final/modules/system/layers/base/com/sybase/main
export CLASSPATH=.:/opt/wildfly-20.0.1.Final/modules/system/layers/base/com/sybase/main/sajdbc4.jar
sudo systemctl start wildfly
2020-08-28 13:13:41,341 INFO [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0183:
Service status report WFLYCTL0184: New missing/unsatisfied dependencies: service jboss.jdbc-driver.sajdbc4_jar (missing) dependents: [service jboss.driver-demander.java:jboss/datasources/TestDB, service org.wildfly.data-source.TestDB]
I can run a simple Java test app on the "VM02" system that connects and dumps a database table with:
cd $HOME/Desktop
export LD_LIBRARY_PATH=/opt/wildfly-20.0.1.Final/modules/system/layers/base/com/sybase/main
export CLASSPATH=.:/opt/wildfly-20.0.1.Final/modules/system/layers/base/com/sybase/main/sajdbc4.jar
java sajdbc4DriverTest.java
This suggests to me that all of the driver files are present at the LD_LIBRARY_PATH location. Note that the launch of WildFly as a service uses the same paths.
Can anyone explain why Wildfly is ignoring the two paths I set prior to starting the service?
Thank you in advance.
Service environment variables are not set this way. And even if they were, using sudo switches to a new user with a new environment.
Instead, if you installed Wildfly as documented in wildfly-20.0.1.Final/docs/contrib/scripts/systemd, add your environment variables in /etc/wildfly/wildfly.conf. Something more like:
# The configuration you want to run
WILDFLY_CONFIG=standalone.xml
# The mode you want to run
WILDFLY_MODE=standalone
# The address to bind to
WILDFLY_BIND=0.0.0.0
# Add Sybase native library dir
LD_LIBRARY_PATH=/opt/wildfly-20.0.1.Final/modules/system/layers/base/com/sybase/main
I don't feel that you need to set CLASSPATH but I don't think it'll hurt either.
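For context, the wildfly.service unit shipped alongside that documentation loads the conf file via an EnvironmentFile directive, which is why variables exported in your shell before sudo systemctl start wildfly never reach the server process. The relevant lines look roughly like this (exact contents vary by WildFly version):
# excerpt from docs/contrib/scripts/systemd/wildfly.service
[Service]
EnvironmentFile=-/etc/wildfly/wildfly.conf
ExecStart=/opt/wildfly/bin/launch.sh $WILDFLY_MODE $WILDFLY_CONFIG $WILDFLY_BIND
After editing /etc/wildfly/wildfly.conf, restart the service (sudo systemctl restart wildfly) so the new environment is picked up.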

Autostart deluge daemon 1.3.10 on boot on Raspberry Pi

I have already been through different tutorials on how to turn a Raspberry Pi into a torrent box, but I think most of the how-to tutorials are outdated.
I have also checked my version of the deluge daemon using this command:
deluge -v
And it returns this:
deluged: 1.3.10
libtorrent: 0.16.18.0
I have followed the How-To Geek tutorial so far.
Link: http://www.howtogeek.com/142044/how-to-turn-a-raspberry-pi-into-an-always-on-bittorrent-box/
After I started to get errors, I fully uninstalled deluge and deleted all of its files.
The tutorial suggests this command:
sudo wget -O /etc/default/deluge-daemon http://cdn5.howtogeek.com/wp-content/uploads/gg/up/sshot5151a8c86fb85.txt
But there is no such file as /etc/default/deluge-daemon; instead there is a file named deluged (maybe short for deluge-daemon in the newer version).
Basically, what the command does is copy the content of http://cdn5.howtogeek.com/wp-content/uploads/gg/up/sshot5151a8c86fb85.txt to the file located at /etc/default/deluge-daemon.
As I can't find deluge-daemon, I chose to do this with /etc/default/deluged instead.
The original content of /etc/default/deluged:
# Defaults for deluged initscript
# sourced by /etc/init.d/deluged
# change to 1 to enable daemon
ENABLE_DELUGED=0
Content provided on the file http://cdn5.howtogeek.com/wp-content/uploads/gg/up/sshot5151a8c86fb85.txt:
# Configuration for /etc/init.d/deluge-daemon
# The init.d script will only run if this variable non-empty.
DELUGED_USER="pi" # !!!CHANGE THIS!!!!
# Should we run at startup?
RUN_AT_STARTUP="YES"
But the two files look different, and the deluge daemon doesn't load on startup.
I managed to resolve this issue using this guide: http://dev.deluge-torrent.org/wiki/UserGuide/Service/systemd.
Follow the instructions from that guide (you may skip the deluge-web instructions); the resulting unit file is sketched below.
Note that the deluge user was created with --home /var/lib/deluge.
Update the auth (set up an account) and core.conf (set the allow_remote flag) files in the home dir of the deluge user (as opposed to the home dir of the pi user that is usually mentioned in other tutorials).
Reboot
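For reference, the deluged.service unit from that guide looks roughly like this (options may differ slightly between versions of the guide; it assumes the deluge user/group created above and a /var/log/deluge directory writable by that user):
# /etc/systemd/system/deluged.service
[Unit]
Description=Deluge Bittorrent Client Daemon
After=network-online.target

[Service]
Type=simple
User=deluge
Group=deluge
UMask=007
ExecStart=/usr/bin/deluged -d -l /var/log/deluge/daemon.log -L warning
Restart=on-failure
TimeoutStopSec=300

[Install]
WantedBy=multi-user.target
Enable it with sudo systemctl enable deluged so it starts at boot.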

Zookeeper startup on system reboot error

I have installed ZooKeeper on my Linux server (Ubuntu 12.04) in a folder like abc/zookeeper/zkserver, and running abc/zookeeper/zkserver/bin/zkServer.sh start works fine and starts the server as expected. But when I put this zkServer.sh file in the /etc/init.d folder and copy it into the rc2.d folder so that ZooKeeper starts on system reboot, running /etc/init.d/zkServer.sh start gives errors like:
JMX enabled by default
Using config: /etc/init.d/../etc/zookeeper/zoo.cfg
grep: /etc/init.d/../etc/zookeeper/zoo.cfg: No such file or directory
mkdir: cannot create directory `': No such file or directory
Starting zookeeper ... STARTED
The zkServer.sh is dependent on a certain directory structure and certain files being present. It is not supposed to be moved in isolation like that. It is also not supposed to be used as an init script.
Check if your zk download comes with the init script. Try looking at src/packages/rpm/init.d/zookeeper or similar, and use that one instead.
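If you still want something under /etc/init.d in the meantime, one option is a thin wrapper that leaves zkServer.sh in its install tree, so it can resolve zkEnv.sh and zoo.cfg relative to its own location instead of relative to /etc/init.d. The path below is illustrative, based on the directory mentioned in the question:
#!/bin/sh
# /etc/init.d/zookeeper -- thin wrapper; the real script stays in its install tree
ZK_HOME=/path/to/abc/zookeeper/zkserver   # adjust to your actual install path
exec "$ZK_HOME/bin/zkServer.sh" "$@"
A proper init script would also need LSB headers before you register it with update-rc.d zookeeper defaults.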

Vagrant & Chef - Postgresql not starting on reboot

I decided to create my own chef script to install Postgres. The installation works perfectly fine, but postgres doesn't start on boot when I vagrant reload
Here's my recipes/default.rb:
include_recipe "apt"

apt_repository 'apt.postgresql.org' do
  uri 'http://apt.postgresql.org/pub/repos/apt'
  distribution node["lsb"]["codename"] + '-pgdg'
  components ['main', node["postgres"]["version"]]
  key 'http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc'
  action :add
end

package 'postgresql-' + node["postgres"]["version"] do
  action :install
end

file "/etc/postgresql/#{node['postgres']['version']}/main/postgresql.conf" do
  action :delete
end

link "/etc/postgresql/#{node['postgres']['version']}/main/postgresql.conf" do
  to node["postgres"]["conf_path"]
  action :create
  notifies :reload, "service[postgresql]", :delayed
end

service "postgresql" do
  action [:enable, :start]
  supports :status => true, :restart => true, :start => true, :stop => true, :reload => true
end
And here's my attributes/default.rb:
default["postgres"]["version"] = "9.3"
default["postgres"]["conf_path"] = "/home/vagrant/postgres/postgresql.conf"
Any help would be greatly appreciated!!
============ EDIT 1 ============
Here is the output when running vagrant up for the first time with chef.log_level = :debug: http://pastebin.com/w8Lp8gzv
Here is /etc/init.d/postgresql: http://pastebin.com/dQ5Zb1yj
Here is /var/log/postgresql/postgresql-9.3-main.log: http://pastebin.com/0Y2RhWvL
============ EDIT 2 ============
I'm now fairly confident that it's my postgresql.conf file, which looks like: http://pastebin.com/rjX89iU0
shared_buffers might be too high...
When you run vagrant reload, is the Chef Client running? I suspect not. Mitchell changed the behavior in a recent version of vagrant to only provision if the machine hasn't already been provisioned. This information is stored in the .vagrant directory in your working directory. In short, since you already provisioned your machine with vagrant up, it is not provisioned when you run vagrant reload.
You run vagrant up - this is actually going to run vagrant up --provision, which executes the Chef Client provisioner on the node, executing your Chef Recipe.
You run vagrant reload - this actually runs vagrant up --no-provision, because the .vagrant directory indicates the machine has already been provisioned. So your machine is rebooted, but the Chef Client provisioner is not executed.
Solution
Run vagrant reload with the --provision flag
vagrant reload --provision
Notes
This still doesn't explain why upstart (or whatever you're using to ensure the postgres service is running at boot) isn't starting the server for you automatically. In order to answer that question, I'll need to see more information. Can you set chef.log_level = :debug in your Vagrantfile and update your question with the output? It would also be helpful to see the init.d script this postgres installer creates, and any log output from /var/log related to postgres.
Alright, it looks like Postgresql doesn't play nice with postgresql.conf being a symbolic link. Copying the file instead did the trick.
Turns out postgresql was starting before the postgresql.conf file was mounted.
If you're starting services with Upstart that depend on something in Vagrant's shared folders, have your upstart conf file listen for the vagrant-mounted event.
# /etc/init/start-postgresql.conf
start on vagrant-mounted
script
# commands to start postgresql...
end script
The vagrant-mounted event is emitted after Vagrant is done setting up shared folders, this way you can restart dependent services after vagrant reload without having to run your provisioners again.

How to deploy a meteor application to my own server?

How to deploy a meteor application to my own server?
flavour 1: the development and deployment server are the same;
flavour 2: the development server is one (maybe my localhost) and the deployment server is another (maybe a VPS in the cloud);
flavour 3: I want to make a "meteor hosting" domain, just like "meteor.com". Is it possible? How?
Update:
I'm running Ubuntu and I don't want to "demeteorize" the application. Thank you.
Meteor documentation currently says:
"[...] you need to provide Node.js 0.8 and a MongoDB server. You can
then run the application by invoking node, specifying the HTTP port
for the application to listen on, and the MongoDB endpoint."
So, among the several ways to install Node.js, I got it up and running following the best advice I found, which is basically unpacking the latest version available directly in the official Node.JS website, already compiled for Linux (64 bits, in my case):
# Does NOT need to be root user:
# create directory
mkdir -p ~/.nodes && cd ~/.nodes
# download latest Node.js distribution
curl -O http://nodejs.org/dist/v0.10.13/node-v0.10.13-linux-x64.tar.gz
# unpack it
tar -xzf node-v0.10.13-linux-x64.tar.gz
# discard it
rm node-v0.10.13-linux-x64.tar.gz
# rename unpacked folder
mv node-v0.10.13-linux-x64 0.10.13
# create symlink
ln -s 0.10.13 current
# add path to PATH
export PATH="~/.nodes/current/bin:$PATH"
# check
node --version
npm --version
And to install MongoDB, I simply followed the instructions in the MongoDB manual available in the Documentation section of its official website:
# Needs to be root user (apply "sudo" if not at root shell)
apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list
apt-get update
apt-get install mongodb-10gen
The server is ready to run Meteor applications! For deployment, the main "issue" is where the "bundle" operation happens. We need to run the meteor bundle command from inside the application source tree. For example:
cd ~/leaderboard
meteor bundle leaderboard.tar.gz
If the deployment will happen in another server (flavour 2), we need to upload the bundle tar.gz file to it, using sftp, ftp, or any other file transfer method. Once the file is there, we follow both Meteor documentation and the README file which is magically included in the root of the bundle tree:
# unpack the bundle
tar -xvzf leaderboard.tar.gz
# discard tar.gz file
rm leaderboard.tar.gz
# rebuild native packages
pushd bundle/programs/server/node_modules
rm -r fibers
npm install fibers@1.0.1
popd
# setup environment variables
export MONGO_URL='mongodb://localhost'
export ROOT_URL='http://example.com'
export PORT=3000
# start the server
node main.js
If the deployment will be on the same server (flavour 1), the bundle tar.gz file is already there, and we don't need to recompile the native packages. (Just skip the corresponding section above.)
Cool! With these steps, I've got the "Leaderboard" example deployed to my custom server, not "meteor.com"... (only to learn and value their services!)
I still have to make it run on port 80 (I plan to use Nginx for this), persist environment variables, start Node.js detached from the terminal, et cetera... I am aware this setup is a "barely naked" one... just the base, the first step, basic foundation stones.
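For the port 80 step, a minimal Nginx reverse-proxy server block could look like the sketch below, assuming the app keeps listening on port 3000 as configured above (the server_name and file path are placeholders):
# /etc/nginx/sites-available/leaderboard (illustrative path)
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        # forward WebSocket upgrade headers so Meteor's DDP/sockjs connection keeps working
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
Symlink it into sites-enabled and reload Nginx to activate it.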
The application has been "manually" deployed, without taking advantage of all the meteor deploy command's magic features... I've seen people publish their "meteor.sh" and "meteoric.sh" and I am following the same path... creating a script to emulate the "single command deploy" feature... aware that in the near future all this stuff will concern only the pioneer Meteor explorers, as Meteor grows into a whole Galaxy and most of these issues become an archaic thing of the past.
Anyway, I am very happy to see how fast the deployed application runs in the cheapest VPS ever, with a surprisingly low latency and almost instant simultaneous updates in several distinct browsers. Fantastic!
Thank you!!!
Try Meteor Up too
With it you can deploy to any Ubuntu server. It uses the meteor build command internally, and it is used by many for deploying production apps.
I created Meteor Up to allow developers to deploy production-quality Meteor apps until Galaxy comes.
I would recommend flavor two with a separate deployment server. Separation of concerns leads to a more stable environment for your code, and it's easier to debug.
To do it, there's the excellent Meteoric bash script that helps you deploy to Amazon's EC2 or your own server.
As for how to roll your own meteor.com, I suggest you break that out into its own Stack Overflow question, as it's not related. Plus, I can't answer it :)
I did this a few days ago. I deployed my Meteor application to my own server on DigitalOcean. I used the Meteor Up tool for managing deploys and Nginx on the server to serve the app.
It's very simple to use. You should install Meteor Up with the command:
npm install -g mup
Then create a folder for the deployment configuration and go into the created directory. Then run the mup init command. It will create two configuration files. We are interested in the mup.json file; it holds the configuration for the deployment process. It looks like this:
{
  // Server authentication info
  "servers": [
    {
      "host": "hostname",
      "username": "root",
      "password": "password",
      // or pem file (ssh based authentication)
      //"pem": "~/.ssh/id_rsa",
      // Also, for non-standard ssh port use this
      //"sshOptions": { "port": 49154 },
      // server specific environment variables
      "env": {}
    }
  ],
  // Install MongoDB on the server. Does not destroy the local MongoDB on future setups
  "setupMongo": true,
  // WARNING: Node.js is required! Only skip if you already have Node.js installed on server.
  "setupNode": true,
  // WARNING: nodeVersion defaults to 0.10.36 if omitted. Do not use v, just the version number.
  "nodeVersion": "0.10.36",
  // Install PhantomJS on the server
  "setupPhantom": true,
  // Show a progress bar during the upload of the bundle to the server.
  // Might cause an error in some rare cases if set to true, for instance in Shippable CI
  "enableUploadProgressBar": true,
  // Application name (no spaces).
  "appName": "meteor",
  // Location of app (local directory). This can reference '~' as the users home directory.
  // i.e., "app": "~/Meteor/my-app",
  // This is the same as the line below.
  "app": "/Users/arunoda/Meteor/my-app",
  // Configure environment
  // ROOT_URL must be set to https://YOURDOMAIN.com when using the spiderable package & force SSL
  // your NGINX proxy or Cloudflare. When using just Meteor on SSL without spiderable this is not necessary
  "env": {
    "PORT": 80,
    "ROOT_URL": "http://myapp.com",
    "MONGO_URL": "mongodb://arunoda:fd8dsjsfh7@hanso.mongohq.com:10023/MyApp",
    "MAIL_URL": "smtp://postmaster%40myapp.mailgun.org:adj87sjhd7s@smtp.mailgun.org:587/"
  },
  // Meteor Up checks if the app comes online just after the deployment.
  // Before mup checks that, it will wait for the number of seconds configured below.
  "deployCheckWaitTime": 15
}
After you fill in all the data fields you can start the setup process with the command mup setup. It will set up your server.
After a successful setup you can deploy your app. Just type mup deploy in the console.
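Putting the Meteor Up flow together, a typical session looks roughly like this (the directory name is just an example):
npm install -g mup          # install Meteor Up
mkdir my-app-deploy && cd my-app-deploy
mup init                    # generates mup.json and settings.json
# edit mup.json with your server and app details
mup setup                   # prepares the server according to mup.json
mup deploy                  # bundles the app locally and deploys it to the server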
Another alternative is to just develop on your own server to start with.
I just created a Digital Ocean box and then connected my Cloud9 IDE account.
Now, I can develop right on the machine in a Cloud IDE and deployment is easy--just copying files.
I created a tutorial that shows exactly how my set up works.
I had a lot of trouble with Meteor Up, so I decided to write my own deploy script. I also added additional info on how to set up nginx and mongodb. Hope it helps!
See the /sh folder in the repository.
What the script meteor-deploy.sh does (a rough sketch of this flow follows the list):
Select environment (./meteor-deploy.sh for staging, ./meteor-deploy.sh prod for production)
Build and bundle production version of the meteor app
Copy bundle to server
SSH into server
Do a mongodump to backup database
Stop the running app
Unpack bundle
Overwrite app files
Re-install app node package dependencies
Start the app (uses forever)
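A minimal sketch of that kind of flow, assuming the app is bundled with meteor build, the server is reachable over SSH, and forever is used as the process manager (the app name, host, and paths below are placeholders, not the author's actual script):
#!/usr/bin/env bash
# Illustrative deploy sketch only -- app name, host and paths are placeholders.
set -e

APP=myapp
SERVER=deploy@example.com
REMOTE_DIR=/opt/$APP

# 1. Build and bundle a production version of the app (run from the app directory)
meteor build ../build --server-only --architecture os.linux.x86_64

# 2. Copy the bundle to the server
scp ../build/$APP.tar.gz "$SERVER:/tmp/"

# 3. On the server: back up the DB, stop the app, unpack, reinstall deps, restart
#    (assumes MONGO_URL / ROOT_URL / PORT are already set in the app's environment)
ssh "$SERVER" bash -s <<EOF
  set -e
  mongodump --out /var/backups/$APP-backup
  forever stop $REMOTE_DIR/bundle/main.js || true
  rm -rf $REMOTE_DIR/bundle
  tar -xzf /tmp/$APP.tar.gz -C $REMOTE_DIR
  (cd $REMOTE_DIR/bundle/programs/server && npm install --production)
  forever start $REMOTE_DIR/bundle/main.js
EOF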
Tested for the following server configurations:
Ubuntu 14.04.4 LTS
meteor --version 1.3.2.4
node --version v0.10.41
npm --version 3.10.3