How to run a server JAR file with RMI, directly from within Java code in that same JAR file

I am trying to deploy a jar file on a Tomcat server and initialise RMI. For this I am using the following command:
java -jar -Djava.security.policy=[rmi.policy file path] server.jar [server_ip] [port]
Now, I want the above initialisation (setting the RMI policy file, server IP, and port) to be done directly by Java code within the same JAR file.
How do I do this? Also, can I use a URL instead of the IP address?

java.security.policy is a system property, and can be set with System.setProperty(). You don't need to set command line arguments from within the code, as you are just talking to yourself: just use those values in the appropriate places.
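For example, a minimal sketch along those lines (the remote interface, service name, host value, and policy-file path are placeholders, not part of the original question):

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class ServerMain {
    // Placeholder remote interface standing in for the real service
    public interface Hello extends Remote {
        String greet() throws RemoteException;
    }
    static class HelloImpl implements Hello {
        public String greet() { return "hello"; }
    }

    public static void main(String[] args) throws Exception {
        // Values that previously came from the command line
        String host = "192.168.1.10";   // an IP or a resolvable hostname both work here
        int port = 1099;

        System.setProperty("java.rmi.server.hostname", host);
        System.setProperty("java.security.policy", "rmi.policy"); // path to your policy file
        if (System.getSecurityManager() == null) {
            System.setSecurityManager(new SecurityManager());
        }

        Registry registry = LocateRegistry.createRegistry(port);
        registry.rebind("HelloService", UnicastRemoteObject.exportObject(new HelloImpl(), 0));
        System.out.println("RMI registry listening on " + host + ":" + port);
    }
}

A client would then call Naming.lookup("rmi://" + host + ":" + port + "/HelloService"), so a hostname can stand in wherever the IP was used; a full rmi:// URL only appears on the lookup side, not in the server-side properties.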


Eclipse Path Variable to Output folder

I need to start an external tool in Eclipse, which is started with some arguments. One of them refers to the output file. My problem is that I have more than one build configuration, so I have multiple output folders, but I want to use just one external tool configuration.
External tool config
Is there a way to set the output folder as a dynamic variable that depends on the selected build configuration?
There exists a variable config_name:
https://www.eclipse.org/forums/index.php/t/1076833/
The configuration name of the current project is given by:
${config_name:${ProjName}}
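For example, an external tool argument along these lines could point at the active configuration's build output (the ${project_loc} variable and the output file name are assumptions; adjust them to whatever resolves in your setup):
${project_loc}/${config_name:${ProjName}}/output.hex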

How to create stub/services files with MATLAB grpc plugin?

I'm using MatlabWithProtoV3 to create protoc.exe with matlab_out in a Windows environment.
I was able to create protoc, and when I use
protoc.exe user.proto --matlab_out=./
It only creates MATLAB files for the proto messages (the files can be found in the attachment at the bottom) and does not create MATLAB files for the services (client and server).
Then I read about plugins, added the generator and plugin files to the gRPC source to create a MATLAB plugin, and built grpc_matlab_plugin.exe successfully.
Now, when I execute
protoc.exe user.proto --matlab_out=./ --grpc_out=./ --plugin=protoc-gen-grpc="D:\grpc\cmake\build\Debug\grpc_matlab_plugin.exe"
I'm getting
pb_descriptor_LoginRequest.m: Tried to write the same file twice.
pb_read_LoginRequest.m: Tried to write the same file twice.
pb_descriptor_APIResponse.m: Tried to write the same file twice.
pb_read_APIResponse.m: Tried to write the same file twice.
pb_descriptor_Empty.m: Tried to write the same file twice.
pb_read_Empty.m: Tried to write the same file twice.
as the error message, and no files are created.
In the gRPC repo, for the C++ compiler I could find that cpp_plugin.h has some code to create service-related files, but a similar file is not available for MATLAB here or here.
Can you please let me know how to create Matlab files for services?
Attached are the files created when I execute the above-mentioned commands:
sample_files.zip
Github issue
Thanks
protobuf-matlab is just a protobuf plugin - it generates code to read/write protocol buffer messages.
Unfortunately it does not implement a gRPC plugin, which is what would build the client stub and server.
If you are able to call your MATLAB code from another language, you could host the gRPC server externally, e.g. create a gRPC server in .NET and use COM to call your MATLAB code.

SQLAPI++: Get path to shared library loaded by executable

SQLAPI++ has an unusual feature where you set a string to tell it where to find the ODBC shared library. In my case this is libtdsodbc.so, and my application actually links that library at build time, but at runtime this is not enough for SQLAPI++ to work.
My code is:
SAConnection conn;
conn.setOption("ODBC.LIBS") = "libtdsodbc.so";
conn.Connect("SERVER=...", "", "", SA_ODBC_Client);
ODBC.LIBS is documented like this:
Forces SQLAPI++ Library to use specified ODBC manager library.
The above code works if you set LD_LIBRARY_PATH to a directory containing libtdsodbc.so. But if you don't, Connect() fails:
libtdsodbc.so: cannot open shared object file: No such file or directory
DBMS API Library 'libtdsodbc.so' loading fails
This library is a part of DBMS client installation, not SQLAPI++
Make sure DBMS client is installed and
this required library is available for dynamic loading
Linux/Unix:
1) The directories in the user's LD_LIBRARY_PATH environment variable
2) The list of libraries cached in /etc/ld.so.cache
3) /usr/lib, followed by /lib
It works again if you set ODBC.LIBS to a full path rather than just a filename. But how can the application know which path?
My application (outside of SQLAPI++) finds libtdsodbc.so via its RUNPATH which is set at build time. This path is not a system path like /usr/lib. I'd like to have SQLAPI++ use the same library which is loaded in the application at runtime.
One idea is for the application to inspect its own RUNPATH, search it for libtdsodbc.so, and use that path. But this requires quite a bit of fiddly code that basically reimplements what ld.so already does.
I don't want to bake the path into the executable at build time separately from RUNPATH, because I sometimes edit RUNPATH before deployment (and then I'd need to edit two things).
Ideally I would like to tell SQLAPI++ to just use the library which is already loaded. I can figure this path out by running lsof -p PID | grep libtdsodbc.so but running shell commands from within the executable is not a good solution (and again I would rather not reimplement lsof).
You could either use dl_iterate_phdr (the link also includes sample code which prints out library names) or manually parse /proc/self/maps.
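A minimal sketch of the dl_iterate_phdr approach (glibc-specific; the hard-coded library name and the standalone main() are just for illustration):

#include <link.h>      // dl_iterate_phdr, struct dl_phdr_info (glibc)
#include <cstring>
#include <string>
#include <iostream>

// Called once per loaded shared object; stops when it finds the TDS ODBC driver.
static int find_tds(struct dl_phdr_info *info, size_t, void *data) {
    auto *result = static_cast<std::string *>(data);
    if (info->dlpi_name != nullptr && std::strstr(info->dlpi_name, "libtdsodbc.so")) {
        *result = info->dlpi_name;   // path as resolved by the dynamic loader
        return 1;                    // non-zero return value stops the iteration
    }
    return 0;
}

int main() {
    std::string path;
    dl_iterate_phdr(find_tds, &path);
    if (path.empty()) {
        std::cerr << "libtdsodbc.so is not loaded\n";
        return 1;
    }
    std::cout << path << "\n";
    // In the real application this path would be handed to SQLAPI++, e.g.
    // conn.setOption("ODBC.LIBS") = path.c_str();
    return 0;
}

Parsing /proc/self/maps works similarly: scan each line for libtdsodbc.so and take the trailing path field.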

How do I specify a config file with play 2.4 and activator

I am building a Scala Play 2.4 application which uses the Typesafe Activator.
I would like to run my tests two times, with a different configuration file for each run.
How can I specify alternative config files, or override the config settings?
I currently run tests with the command "./activator test"
You can create different configuration files for different environments/purposes. For example, I have three configuration files for local testing, alpha deployment, and production deployment, as in this project: https://github.com/luongbalinh/play-mongo
You can specify the configuration for running as follows:
activator run -Dconfig.resource=application.conf
where application.conf is the configuration you want to use.
You can create different configuration files for different environments. To specify the configuration to use it with activator run, use the following command:
activator "run -Dconfig.resource=application.conf"
where application.conf is the desired configuration. Without the quotes it did not work for me. This uses the same configuration parameters as when going into production mode, as described here:
https://www.playframework.com/documentation/2.5.x/ProductionConfiguration#Specifying-an-alternate-configuration-file
It is also important to know that config.resource locates the configuration within the conf/ folder, so there is no need to specify that prefix. For full paths that are not among the resources, use config.file. Further reading is in the link above.
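For example (the file names here are illustrative):
activator "run -Dconfig.resource=dev.conf"          # picked up from the conf/ folder
activator "run -Dconfig.file=/etc/myapp/prod.conf"  # full path on the filesystem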
The quotes need to be used because you do not want to pass the -D to activator itself, but to the run command. With the quotes, the activator's JVM gets no -D argument; instead it interprets "run -Dconfig.file=application.conf" and sets the config.file property accordingly, within the activator's JVM.
This was already discussed here: Activator : Play Framework 2.3.x : run vs. start
Since all of the above are partially incorrect, here is my hard-won knowledge from the last weekend.
Use include "application.conf" not include "application" (which Akka does)
Configs must be named .conf or Play will discard them silently
You probably want -Dconfig.file=<file>.conf so you're not classpath dependent
Make sure you provide the full file path (e.g. /opt/configs/prod.conf)
Example
Here is an example of this we run:
#prod.conf
include "application"
akka.remote.hostname = "prod.blah.com"
# Example of passing in S3 keys
s3.awsAccessKeyId="YOUR_KEY"
s3.awsSecretAccessKey="YOUR_SECRET_KEY"
And just pass it in like so:
activator -Dconfig.file=/var/lib/jenkins/jenkins.conf test
or if you fancy SBT:
sbt -Dconfig.file=/var/lib/jenkins/jenkins.conf test
Dev Environment
Also note it's easy to make a developer.conf file as well, to keep all your passwords and local ports, and then add it to .gitignore so devs don't accidentally check them in.
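For instance, a hypothetical developer.conf mirroring the prod.conf above (all values are placeholders):
#developer.conf (listed in .gitignore)
include "application"
akka.remote.hostname = "localhost"
s3.awsAccessKeyId="DEV_KEY"
s3.awsSecretAccessKey="DEV_SECRET_KEY"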
The below command works with Play 2.5
$ activator -Dconfig.resource=jenkins.conf run
https://www.playframework.com/documentation/2.5.x/ProductionConfiguration

"Not A Valid Jar" When trying to run Map Reduce Job

I am trying to run my MapReduce job by building a jar from Eclipse, but while trying to execute the job, I am getting a "Not a valid Jar" error.
I have tried to follow the link Not a valid Jar but that didn't help.
Can anyone please give me instructions on how to build the jar from Eclipse so that it runs on Hadoop?
I am aware of the process of building the jar file from Eclipse; however, I am not sure whether I have to take any special care when building the jar file so that it runs on Hadoop.
When you submit the command, make certain you have the following things on the command line:
When you indicate the jar, make certain you are pointing to it properly. It may be easiest to be certain by using the absolute path. To get the absolute path, navigate to the place where the jar is and run 'readlink -f hist.jar'. So for you, not just hist.jar, but maybe /home/akash_user/jars/hist.jar or wherever it is on your system. If you are using Eclipse, it may be saving the jar somewhere unexpected, so make sure that is not the problem. The jar cannot be run from HDFS storage; it must be run from local storage.
When you name your main class, in your example Histogram, you must use the fully qualified name of the class, including the package. So, usually, if the program/project is named Histogram, and there is a HistogramDriver, HistogramMapper, and HistogramReducer, and your main() is in HistogramDriver, you need to type Histogram.HistogramDriver to get the program running (see the example after this list). (Unless you made your jar runnable, which requires extra steps such as setting Main-Class in the jar's manifest.)
Make sure that the jar you are submitting (hist.jar) is in the current directory from where you are submitting the 'hadoop jar' command.
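Putting those points together, the submission would look roughly like this (the HDFS input/output paths are placeholders):
hadoop jar /home/akash_user/jars/hist.jar Histogram.HistogramDriver /user/akash_user/input /user/akash_user/output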
If the issue is still persisting, please tell the Java, Hadoop and Linux version you are using.
You should not keep the jar file in HDFS when executing the MapReduce job. Make sure the jar is available on the local filesystem. The input path and output directory should be paths in HDFS.