How to get absolute path with Vert.x?

How can I get the absolute path of a file without hardcoding the path in a String?
So basically I'm asking for vert.x's version of PHP's $_SERVER['DOCUMENT_ROOT']. Does anybody know?
UPDATE
I have the following directory structure:
| app.coffee
| Views
-| foo.html
app.coffee:
vertx = require('vertx')
rm = new vertx.RouteMatcher()
rm.get '/', (req) ->
  req.response.sendFile "Views/foo.html"
vertx.createHttpServer().requestHandler(rm).listen(8080)

It sounds like you are looking for the execution directory of the application. You can get it by looking up the system property 'user.dir' like so:
System.getProperty("user.dir")
So for instance when I serve static files in vertx for testing I use this:
val staticFilePath = container.config().getString("static_files", System.getProperty("user.dir"))
server.requestHandler({ request: HttpServerRequest =>
  request.response().sendFile(staticFilePath + "/something.css")
})
If you are still not getting the desired result, just print out the user.dir property to see where you are trying to serve files from.
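For example, a one-line check at startup (a trivial sketch; println is just the simplest way to see the value):
println("user.dir = " + System.getProperty("user.dir"))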
References:
http://docs.oracle.com/javase/tutorial/essential/environment/sysprop.html

Just use a relative path; it will be resolved against the working directory you launched the vertx command from.
Look at this simple example:
$ ls
index.html verticle.scala
As you can see, I have a verticle and a static file in my (unspecified!) directory.
$ cat index.html
Hello!
$ cat verticle.scala
vertx.createHttpServer().requestHandler({ req: HttpServerRequest =>
  req.response().sendFile("index.html") // <-- relative path!
}).listen(8080)
Now I just run it from here:
$ vertx run verticle.scala
Compiling verticle.scala as Scala script
Starting verticle.scala
Succeeded in deploying verticle
And it just works; no need at all to specify constants like in PHP :)
$ curl localhost:8080
Hello!
Enjoy!

I bet you're trying to do
curl http://localhost:8080/foo.html
That can't work because your route matcher is configured to match / and respond with the contents of foo.html.
The right test case should be instead:
curl http://localhost:8080/
and the expected result should be
Hello!
A route matcher is the wrong tool for serving static files; if that is what you intend to do, refer to the web server module instead.
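That said, if you just want foo.html reachable at its own URL through the route matcher, a dedicated route along these lines should do it (a minimal sketch in the asker's CoffeeScript style; the extra route is illustrative, not part of the original code):
rm.get '/foo.html', (req) ->
  req.response.sendFile "Views/foo.html"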
UPDATE
Mate, this thing works for me in CoffeeScript too. What Vert.x version are you using?
$ vertx version
2.1 (built 2014-05-27 12:39:02)
$ cat app.coffee
vertx = require('vertx')
rm = new vertx.RouteMatcher()
rm.get '/', (req) ->
  req.response.sendFile "Views/foo.html"
vertx.createHttpServer().requestHandler(rm).listen(8080)
$ vertx run app.coffee &
[1] 27731
$ Succeeded in deploying verticle
$ curl localhost:8080
ciao
$ cat Views/foo.html
ciao

Related

Apache Zeppelin cannot deserialize dataset: "NoSuchMethodError"

I am trying to use Apache Zeppelin (0.7.2, net install running locally on a Mac) to explore data loaded from an s3 bucket. The data seems to load just fine, as the command:
val p = spark.read.textFile("s3a://sparkcookbook/person")
gives the result:
p: org.apache.spark.sql.Dataset[String] = [value: string]
However, when I try to call methods on the object p, I get an error. For example:
p.take(1)
results in:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.rdd.RDDOperationScope$
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:225)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:308)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2371)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2765)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2370)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2377)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2113)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2112)
at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2795)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2112)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2327)
My conf/zeppelin-env.sh is the same as the default, except that I have amazon access key and secret key environment variables defined there. In the Spark interpreter in the Zeppelin notebook, I have added the following artifacts:
org.apache.hadoop:hadoop-aws:2.7.3
com.amazonaws:aws-java-sdk:1.7.9
com.fasterxml.jackson.core:jackson-core:2.9.0
com.fasterxml.jackson.core:jackson-databind:2.9.0
com.fasterxml.jackson.core:jackson-annotations:2.9.0
(I think only the first two are necessary). The two commands above work fine in the Spark shell, just not in the Zeppelin notebook (see How to use s3 with Apache spark 2.2 in the Spark shell for how that was set up).
So it seems that there is a problem with one of the Jackson libraries. Maybe I'm using the wrong artifacts above for the Zeppelin interpreter?
UPDATE: Following the advice in the proposed answer below, I removed the jackson jars that came with Zeppelin, and replaced them with the following:
jackson-annotations-2.6.0.jar
jackson-core-2.6.7.jar
jackson-databind-2.6.7.jar
And replaced the artifacts with these, so my artifacts are now:
org.apache.hadoop:hadoop-aws:2.7.3
com.amazonaws:aws-java-sdk:1.7.9
com.fasterxml.jackson.core:jackson-core:2.6.7
com.fasterxml.jackson.core:jackson-databind:2.6.7
com.fasterxml.jackson.core:jackson-annotations:2.6.0
The error I get, however, from running the above commands is the same.
UPDATE 2: As per the suggestion, I removed the jackson libraries from the list of artifacts, since they are now already in the jars/ folder; the only added artifacts are now the aws artifacts above. I then cleaned the classpath by entering the following in the notebook (as per the instructions):
%spark.dep
z.reset()
I get a different error now:
val p = spark.read.textFile("s3a://sparkcookbook/person")
p.take(1)
p: org.apache.spark.sql.Dataset[String] = [value: string]
java.lang.NoSuchMethodError: com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer$.handledType()Ljava/lang/Class;
at com.fasterxml.jackson.module.scala.deser.NumberDeserializers$.<init>(ScalaNumberDeserializersModule.scala:49)
at com.fasterxml.jackson.module.scala.deser.NumberDeserializers$.<clinit>(ScalaNumberDeserializersModule.scala)
at com.fasterxml.jackson.module.scala.deser.ScalaNumberDeserializersModule$class.$init$(ScalaNumberDeserializersModule.scala:61)
at com.fasterxml.jackson.module.scala.DefaultScalaModule.<init>(DefaultScalaModule.scala:20)
at com.fasterxml.jackson.module.scala.DefaultScalaModule$.<init>(DefaultScalaModule.scala:37)
at com.fasterxml.jackson.module.scala.DefaultScalaModule$.<clinit>(DefaultScalaModule.scala)
at org.apache.spark.rdd.RDDOperationScope$.<init>(RDDOperationScope.scala:82)
at org.apache.spark.rdd.RDDOperationScope$.<clinit>(RDDOperationScope.scala)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:225)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:308)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2371)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2765)
UPDATE 3: As per the suggestion in a comment to the proposed answer below, I cleaned the classpath by deleting all the files in the local repo:
rm -rf local-repo/*
I then restarted the Zeppelin server. To check the class path, I executed the following in the notebook:
val cl = ClassLoader.getSystemClassLoader
cl.asInstanceOf[java.net.URLClassLoader].getURLs.foreach(println)
This gave the following output (I include only the jackson libraries from the output here, otherwise the output is too long to paste):
...
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-annotations-2.1.1.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-annotations-2.2.3.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-core-2.1.1.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-core-2.2.3.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-core-asl-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-databind-2.1.1.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-databind-2.2.3.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-jaxrs-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-mapper-asl-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-xc-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/lib/jackson-annotations-2.6.0.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/lib/jackson-core-2.6.7.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/lib/jackson-databind-2.6.7.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/jackson-annotations-2.6.5.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/jackson-core-2.6.5.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/jackson-core-asl-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/jackson-databind-2.6.5.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/jackson-mapper-asl-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-annotations-2.6.5.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-core-2.6.5.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-core-asl-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-databind-2.6.5.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-jaxrs-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-mapper-asl-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-module-paranamer-2.6.5.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-module-scala_2.11-2.6.5.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-xc-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/json4s-jackson_2.11-3.2.11.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/parquet-jackson-1.8.1.jar
...
It seems that multiple versions are fetched from the repo. Should I exclude the older versions? If so, how do I do that?
Use these jar versions:
aws-java-sdk-1.7.4.jar
hadoop-aws-2.6.0.jar
as in this script: https://github.com/2dmitrypavlov/sparkDocker/blob/master/zeppelin.sh
Do not add them as packages; instead download the jars and put them in a path, let's say "/root/jars/", then edit your zeppelin-env.sh.
Then run this command from the zeppelin/conf dir:
echo 'export SPARK_SUBMIT_OPTIONS="--jars /root/jars/mysql-connector-java-5.1.39.jar,/root/jars/aws-java-sdk-1.7.4.jar,/root/jars/hadoop-aws-2.6.0.jar"'>>zeppelin-env.sh
After that, restart Zeppelin.
The code at the link above is pasted below (just in case the link becomes stale):
#!/bin/bash
# Download jars
cd /root/jars
wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.39/mysql-connector-java-5.1.39.jar
cd /usr/share/
wget http://archive.apache.org/dist/zeppelin/zeppelin-0.7.1/zeppelin-0.7.1-bin-all.tgz
tar -zxvf zeppelin-0.7.1-bin-all.tgz
cd zeppelin-0.7.1-bin-all/conf
cp zeppelin-env.sh.template zeppelin-env.sh
echo 'export MASTER=spark://'$MASTERZ':7077'>>zeppelin-env.sh
echo 'export SPARK_SUBMIT_OPTIONS="--jars /root/jars/mysql-connector-java-5.1.39.jar,/root/jars/aws-java-sdk-1.7.4.jar,/root/jars/hadoop-aws-2.6.0.jar"'>>zeppelin-env.sh
echo 'export ZEPPELIN_NOTEBOOK_STORAGE="org.apache.zeppelin.notebook.repo.VFSNotebookRepo, org.apache.zeppelin.notebook.repo.zeppelinhub.ZeppelinHubRepo"'>>zeppelin-env.sh
echo 'export ZEPPELINHUB_API_ADDRESS="https://www.zeppelinhub.com"'>>zeppelin-env.sh
echo 'export ZEPPELIN_PORT=9999'>>zeppelin-env.sh
echo 'export SPARK_HOME=/usr/share/spark'>>zeppelin-env.sh
cd ../bin/
./zeppelin.sh
You are probably using a Jackson version that is too recent. Even Spark 2.3 is still on 2.6.7. Downgrade, and make sure that all your Jackson JARs are consistent.
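For instance, if Jackson has to stay in the interpreter's artifact list at all, one consistent combination is to pin every entry to the version the Spark 2.1.0 jars above already use (2.6.5); this is a sketch of such a list, not a verified configuration:
com.fasterxml.jackson.core:jackson-core:2.6.5
com.fasterxml.jackson.core:jackson-databind:2.6.5
com.fasterxml.jackson.core:jackson-annotations:2.6.5
com.fasterxml.jackson.module:jackson-module-scala_2.11:2.6.5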

Possible to change the package name when generating client code

I am generating the Scala client code for an API using the Swagger Editor. I pasted the JSON and then did Generate Client/Scala. It gives me a default root package of
io.swagger.client
I can't see any obvious way of specifying something different. Can this be done?
Step (1): Create a file config.json with the following lines to define the package names:
{
  "modelPackage" : "com.xyz.model",
  "apiPackage" : "com.xyz.api"
}
Step (2): Now pass the above file to the codegen command with the -c option:
$ java -jar swagger-codegen-cli.jar generate -i path/swagger.json -l java -o Code -c path/config.json
Now it will generate your packages as com.xyz… instead of the default io.swagger.client…
Run the following command to get information about the supported configuration options:
java -jar swagger-codegen-cli.jar config-help -l scala
This will list the configuration options supported by this generator (Scala in this example):
CONFIG OPTIONS
  sortParamsByRequiredFlag
      Sort method arguments to place required parameters before optional parameters. (Default: true)
  ensureUniqueParams
      Whether to ensure parameter names are unique in an operation (rename parameters that are not). (Default: true)
  modelPackage
      package for generated models
  apiPackage
      package for generated api classes
Next, define a config.json file with the above parameters:
{
  "modelPackage": "your package name",
  "apiPackage": "your package name"
}
And supply config.json as input to swagger-codegen using the -c flag.
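For the Scala client from the original question, the same approach should apply with the language flag switched, for example (the paths are placeholders):
java -jar swagger-codegen-cli.jar generate -i path/swagger.json -l scala -o Code -c path/config.json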

How to access sockets from lua in nginx?

I've built nginx from source with Lua support and I'm able to run server-side scripts like this:
http {
  lua_package_path '/usr/local/share/lua/5.1/?.lua;;';
  server {
    listen 80;
    location /hi {
      content_by_lua '
        ngx.header["Content-Type"] = "text/plain;charset=utf-8"
        ngx.say("Hello world!")
        --local s = require("socket")
        ngx.say(_VERSION);
      ';
    }
  }
}
So when I access http://localhost/hi, I get this output:
Hello world!
Lua 5.1
If I uncomment the line local s = require("socket"), I get the following error in my browser:
Unable to load page
Problem occurred while loading the URL http://localhost/hi
Connection terminated unexpectedly
socket.lua is present in this folder:
root#debian:/usr/local/share/lua/5.1# ls
ltn12.lua mime.lua socket socket.lua
UPDATE: adding cpath doesn't help:
lua_package_cpath '/usr/local/lib/lua/5.1/?.so;/usr/local/lib/lua/5.1/socket/?.so;;';
# ls /usr/local/lib/lua/5.1/socket/*.so
/usr/local/lib/lua/5.1/socket/core.so /usr/local/lib/lua/5.1/socket/serial.so /usr/local/lib/lua/5.1/socket/unix.so
# ls /usr/local/share/lua/5.1/
ltn12.lua mime.lua socket socket.lua
How can I fix/diagnose this problem? Thanks!
If socket.lua is the diegonehab/luasocket module, then it requires socket/core.so, so you need to specify lua_package_cpath as well.
cpath is for compiled shared-library modules (.so); path is for plain Lua modules (.lua).
The most common value for cpath is /usr/local/lib/lua/5.1/?.so.
Do not try to use LuaSocket inside nginx. LuaSocket is a blocking library and nginx is non-blocking, so you will have problems. Check out ngx.socket.tcp instead. Its API is compatible with socket.tcp but it is non-blocking.
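A minimal sketch of what that looks like inside content_by_lua (host, port and payload are placeholders):
local sock = ngx.socket.tcp()
local ok, err = sock:connect("127.0.0.1", 6379)  -- placeholder upstream
if not ok then
  ngx.say("connect failed: ", err)
  return
end
sock:send("PING\r\n")
local line = sock:receive()  -- reads one line by default
ngx.say(line)
sock:close()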

Running the ruby plugin foo.rb for logstash-1.5.0beta1 version

I am trying to run the ruby plugin foo.rb as given at this link: http://logstash.net/docs/1.3.3/extending/example-add-a-new-filter
As specified there, I have created the ruby file at /logstash-1.5.0.beta1/lib/logstash/filters/foo.rb
The command for running the conf file is given as:
% logstash --pluginpath . -f example.conf
What exactly should I write in place of 'pluginpath'?
When I specify the path of foo.rb, it gives me the following errors:
Clamp::UsageError: Unrecognised option '--lib/logstash/filters/foo.rb'
signal_usage_error at /home/administrator/Softwares/logstash-1.5.0.beta1/vendor/bundle/jruby/1.9/gems/clamp-0.6.3/lib/clamp/command.rb:103
find_option at /home/administrator/Softwares/logstash-1.5.0.beta1/vendor/bundle/jruby/1.9/gems/clamp-0.6.3/lib/clamp/option/parsing.rb:62
parse_options at /home/administrator/Softwares/logstash-1.5.0.beta1/vendor/bundle/jruby/1.9/gems/clamp-0.6.3/lib/clamp/option/parsing.rb:28
parse at /home/administrator/Softwares/logstash-1.5.0.beta1/vendor/bundle/jruby/1.9/gems/clamp-0.6.3/lib/clamp/command.rb:52
run at /home/administrator/Softwares/logstash-1.5.0.beta1/lib/logstash/runner.rb:155
call at org/jruby/RubyProc.java:271
run at /home/administrator/Softwares/logstash-1.5.0.beta1/lib/logstash/runner.rb:171
call at org/jruby/RubyProc.java:271
initialize at /home/administrator/Softwares/logstash-1.5.0.beta1/vendor/bundle/jruby/1.9/gems/stud-0.0.18/lib/stud/task.rb:12
What should I do? Thanks in advance! :)
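From the layout described above, the plugin path presumably has to point at the directory that contains the logstash/filters/ tree rather than at foo.rb itself, so an invocation along these lines may be what the linked docs intend (an untested sketch based only on the paths shown in the question):
% logstash --pluginpath /home/administrator/Softwares/logstash-1.5.0.beta1/lib -f example.conf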

CoffeeScript: No output after installation

I'm running Ubuntu 13.04, and I installed CoffeeScript using:
$ sudo npm install -g coffee-script
...with this output:
npm http GET https://registry.npmjs.org/coffee-script
npm http 304 https://registry.npmjs.org/coffee-script
/usr/local/bin/coffee -> /usr/local/lib/node_modules/coffee-script/bin/coffee
/usr/local/bin/cake -> /usr/local/lib/node_modules/coffee-script/bin/cake
coffee-script#1.6.3 /usr/local/lib/node_modules/coffee-script
No command yields any result whatsoever:
$ coffee js.coffee
$ coffee -v
$ coffee GiveMeSomeCoffeePlease
I verified that it exists:
$ which coffee
/usr/local/bin/coffee
And the file has some contents:
$ cat `which coffee`
#!/usr/bin/env node
var path = require('path');
var fs = require('fs');
var lib = path.join(path.dirname(fs.realpathSync(__filename)), '../lib');
require(lib + '/coffee-script/command').run();
Also tried version 1.6.1 which works on my laptop. No difference on this computer though. Any ideas?
I finally found the solution. I had installed the package node on Ubuntu, which is something entirely different:
Amateur Packet Radio Node program (transitional package) The
existing node package has been renamed to ax25-node. This transitional
package exists to ease the upgrade path for existing users.
I went ahead and installed the nodejs package. But it seems it didn't quite create the right binding anyway: I could run nodejs but not node. So I made a symlink for it and now CoffeeScript is running just fine!
cd /usr/bin; sudo ln -s nodejs node
Same here. In my expressjs app, instead of running via
node app
now it seems I have to run it via
nodejs app
I'll either create an alias or a symlink like Mika did. I am using Ubuntu 13.10, FYI.
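For reference, the alias route is a single line in ~/.bashrc, though shell aliases are not visible to scripts that invoke node through a shebang, so the symlink above is the more robust fix (a sketch):
alias node=nodejs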