SBT gives my Scala program stale resource files

I have a simple Scala project using SBT. I have a resource file, src/main/resources/cea-builtins.rkt. I open it with val resource = Source.fromResource("cea-builtins.rkt").
Unfortunately, it gives me a stale version of the resource file.
What I expect to happen is:
I edit the file.
I run my program from the sbt command prompt and my program reads the updated file.
What actually happens is:
I edit the resource file.
I run my program from the sbt command prompt and my program reads the old version of the file.
I can clean the project and still get the old version of the file.
If I exit SBT, restart it, and then run my program, I get the updated version of the resource file.
I note that there is a copy of the file in ./target/scala-2.13/<project_name>_2.13-0.1.jar. Updating the resource file and running the program causes that copy to be updated, but I still get the old contents in my program.
If I do lsof -p <sbt_pid> | grep <project_name> I get
<project_root>/target/bg-jobs/sbt_ae437a1d/job-1/target/c224367f/577d7b48/<project_name>_2.13-0.1.jar
I wondered if my program was getting a copy from that file, but that file does not exist in spite of what lsof says.
I've been using Source.fromResource("cea-builtins.rkt") to read the file. That works (except that it's stale). However, if I adapt my code to open the resource with getClass.getResource("cea-builtins.rkt") it fails, as this returns null. I don't understand why.
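For reference, here is a minimal, untested sketch of the two lookups, assuming cea-builtins.rkt sits directly under src/main/resources (the ResourceLookup object is just for illustration). The likely reason getClass.getResource returns null is that, without a leading slash, it resolves the path relative to the class's package, whereas Source.fromResource (and a path starting with "/") resolves against the classpath root.

import scala.io.Source

object ResourceLookup {
  // ClassLoader-based lookup: the name is resolved against the classpath root,
  // so no leading slash is used.
  def viaClassLoader(): String =
    Source.fromResource("cea-builtins.rkt").mkString

  // Class-based lookup: without a leading slash the path is resolved relative
  // to this class's package; prefixing "/" makes it root-relative again.
  def viaClass(): Option[String] =
    Option(getClass.getResource("/cea-builtins.rkt"))
      .map(url => Source.fromURL(url).mkString)
}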
My build.sbt file is absolutely vanilla. It does not change any paths. It contains library dependencies, one resolver, and that's about it. The only plugin is sbt-assembly.
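For concreteness, here is a rough sketch of the kind of "vanilla" build.sbt described above; the project name, resolver URL, and dependencies are placeholders, not the actual values, and the plugin line would live in project/plugins.sbt.

ThisBuild / scalaVersion := "2.13.12"

lazy val root = (project in file("."))
  .settings(
    name := "project-name",                                  // placeholder
    version := "0.1",
    resolvers += "extra-repo" at "https://example.com/repo", // placeholder
    libraryDependencies ++= Seq(
      // project-specific library dependencies go here
    )
  )

// project/plugins.sbt:
// addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "2.1.5")   // version is a guess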
Any help understanding why I get the stale resource unless I exit and restart SBT? Any suggestions on how to fix this?

Related

Flutter blank screen on desktop (Windows)

Running my application on another Windows machine results in a blank window. It works fine on the development machine.
I have included all the DLL files + the data folder + the 3 extra DLLs mentioned on the Flutter website.
I have also run "dependencies" on the resulting .exe file and can't see any missing DLLs.
Compiling the "mydemo" application works fine, so I assume there are some other external files my application needs.
I have tried looking through the output of "flutter run -v" to find any clues of extra files needed, but can't see anything useful.
What is the preferred way to tackle a problem like this? How can I find out what files/resources are missing to distribute my app? Is there a way to use the "debug" version on the other machine instead and bring out the debug console window? I would guess that would show me errors when the app tries to load the missing resources.
Check if any package you depend on has some additional file requirements: for example, I'm using sqflite_common_ffi in some of my projects, which needs an additional DLL file to run. I'm not sure what you mean by running 'dependencies' on the EXE, though.
In any case, when I can't get any good output or errors from a project, I do this: open a Windows command prompt, go to the directory where you have put all the required files, and run
your_exe_file >> logFile.txt 2>&1
which will send both standard output and standard error to the file. The log file name and extension don't really matter; it will be a simple text file.
For example, if I don't put the additional DLL for sqflite_common_ffi in the same folder as the compiled EXE, the output of the command above will specifically mention the name of the DLL that is missing.
If you got your project from a repository to run on another Windows machine, run "flutter pub get" first.

Integrate New Relic in a Flink Scala project

I want to integrate New Relic in my Flink project. I have downloaded the newrelic.yml file from my account and changed only the app name. I have created a folder named newrelic in my project root folder and placed newrelic.yml in it.
I have also added the following dependency in my build.sbt file:
"com.newrelic.agent.java" % "newrelic-api" % "3.0.0"
I am using the following command to run my jar:
flink run -m yarn-cluster -yn 2 -c Main /home/hadoop/test-assembly-0.2.jar
I guess my code is not able to read the newrelic.yml file, because I can't see my app name in New Relic. Do I need to initialize the New Relic agent somewhere (and if so, how)? Please help me with this integration.
You should only need the newrelic.jar and newrelic.yml files to be accessible and have -javaagent:path/to/newrelic.jar passed to the JVM as an argument. You could try putting both newrelic.jar and newrelic.yml into your lib/ directory so they get copied to the job & task managers, then adding this to your conf/flink-conf.yaml:
env.java.opts: -javaagent:lib/newrelic.jar
Both New Relic files should be in the same directory and you ought to be able to remove the New Relic line from your build.sbt file. Also double check that your license key is in the newrelic.yml file.
I haven't tested this, but the main goal is for the .yml and .jar to be accessible in the same directory (the yml can go into a different directory, but other JVM arguments will need to be passed to reference it) and to pass -javaagent:path/to/newrelic.jar as a JVM argument. If you run into issues, try checking for New Relic logs in the log folder of the directory where the .jar is located.
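Since the build.sbt in the question pulls in newrelic-api, here is a small, untested sketch of what optional custom instrumentation could look like once the agent is attached via -javaagent (the object, method, and transaction names are placeholders for illustration):

import com.newrelic.api.agent.{NewRelic, Trace}

object JobInstrumentation {
  // @Trace asks the agent to record this method as (part of) a transaction;
  // it is a no-op when the agent is not attached.
  @Trace(dispatcher = true)
  def processRecord(record: String): Unit = {
    NewRelic.setTransactionName("Custom", "ProcessRecord")              // placeholder names
    NewRelic.addCustomParameter("recordLength", record.length.toString)
    // ... actual record processing goes here ...
  }
}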

"Not A Valid Jar" When trying to run Map Reduce Job

I am trying to run my MapReduce job by building a jar from Eclipse, but while trying to execute the job I get a "Not a valid Jar" error.
I have tried to follow the link Not a valid Jar but that didn't help.
Can anyone please give me instructions on how to build the jar from Eclipse so that it runs on Hadoop?
I am aware of the process of building a jar file from Eclipse; however, I am not sure whether I have to take any special care when building the jar so that it runs on Hadoop.
When you submit the command, make certain your command line includes the following (see the example command after these points):
When you indicate the jar, make certain you are pointing to it properly. It may be easiest to use the absolute path; to get it, navigate to the place where the jar is and run 'readlink -f' on it. So for you, not just hist.jar, but maybe /home/akash_user/jars/hist.jar or wherever it is on your system. If you are using Eclipse, it may be saving the jar somewhere unexpected, so make sure that is not the problem. The jar cannot be run from HDFS storage; it must run from local storage.
When you name your main class, in your example Histogram, you must use the fully qualified name of the class, including the package. So, usually, if the program/project is named Histogram, and there is a HistogramDriver, HistogramMapper, and HistogramReducer, and your main() is in HistogramDriver, you need to type Histogram.HistogramDriver to get the program running. (Unless you made your jar runnable, which requires extra setup up front, such as a manifest.)
Make sure that the jar you are submitting (hist.jar) is in the current directory from where you are submitting the 'hadoop jar' command.
If the issue persists, please tell us the Java, Hadoop, and Linux versions you are using.
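Putting these points together, the submission would look something like this (using the example absolute jar path and driver class from above; the input and output paths are placeholders):
hadoop jar /home/akash_user/jars/hist.jar Histogram.HistogramDriver /input/path /output/path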
You should not keep the jar file in HDFS when executing the MapReduce job. Make sure the jar is available on a local path. The input path and output directory, however, should be HDFS paths.

Hadoop eclipse plugin : Unable to see output on console

I am using hadoop-0.20.2 from http://www.apache.org/dyn/closer.cgi/hadoop/common/ and I'm using the following Eclipse plugin hadoop-0.20.1-eclipse-plugin.jar from http://code.google.com/p/hadoop-eclipse-plugin/.
Using the plugin I'm able to load the file into HDFS and also able to compile the word-count program. It compiles without errors and I get .class files. But when I run the project on Hadoop, I don't see any output on the console.
Please tell me if there is any configuration I need in order to get the output on the console. Even the output file is not generated.
You should check the output file path given in FileOutputFormat.setOutputPath(conf, new Path(args[1]));
Note that the output directory should not exist.
I am running the same example given here, and it works fine and also creates the output folder.
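As an untested illustration of the output-directory point (written in Scala against the Hadoop filesystem API; the object and method names are made up), a job driver can remove a stale output directory before submitting:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object OutputCleanup {
  // Hadoop refuses to start a job whose output directory already exists,
  // so remove it (recursively) if a previous run left it behind.
  def removeIfExists(outputDir: String): Unit = {
    val conf = new Configuration()
    val fs   = FileSystem.get(conf)
    val path = new Path(outputDir)
    if (fs.exists(path)) fs.delete(path, true)
  }
}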

Why does Scala create a ~/tmp directory when I run a script?

When I execute a Scala script from the command line, a directory named "tmp" is created in my home directory. It is always empty, so I simply deleted it without any apparent problem. Of course, when I run another Scala script, it reappears.
Is there a configuration file/flag by which I can change this behavior?
Thanks.
Scala probably compiles the script to bytecode, and the bytecode files are stored temporarily under the tmp directory. That would be my guess.
Early versions of the Scala interpreter wrote generated class files out to disk, in a temporary directory.
Try a newer version.