How to ignore some classes during Scala compilation - scala

I am running sbt in my console, where I can enter commands like compile, run, test, ...
By default these commands compile every single class they find in the project. The thing is, sometimes you want to ignore certain classes because you know they contain errors and you don't want to focus on them right now.
How do I specify which class(es) I want to be part of the compilation when I enter one of these commands, so that I don't see the useless errors from the other classes every time?
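One approach that may help: sbt can be told to leave certain source files out of compilation via an exclude filter. A minimal build.sbt sketch (sbt 0.13-style syntax; the "Broken*.scala" pattern is just a placeholder for whichever files you want to skip):

// build.sbt: skip any source file whose name matches the glob pattern.
// HiddenFileFilter is sbt's default exclude; keep it and add your own pattern.
excludeFilter in unmanagedSources := HiddenFileFilter || "Broken*.scala"

Excluded files are simply not handed to the compiler, so compile, run and test no longer report their errors.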

Related

Scala project-wide static instructions

I'm using a Java lib within my Scala project. I want to call a specific static method adjustLib of that lib as early as possible, for every possible starting point of an execution (e.g. before running the "main" App and before executing tests), to achieve the desired behaviour.
One solution would be to place this statement at the very top of each class that might be executed (this applies to all classes extending App and to all tests).
However, if someone implements a new executable class but forgets adjustLib, things might get weird.
Is there any way to define an object or something similar that executes this adjustLib statement in a "static" manner every time anything in the given project is executed?
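One pattern that can help (a sketch; AdjustableLib stands in for the actual Java class): put the call into the body of a Scala object and reference that object from a trait that every entry point mixes in. The object's initializer runs at most once per JVM, the first time the object is touched.

// Runs adjustLib once, the first time LibSetup is referenced in this JVM
object LibSetup {
  AdjustableLib.adjustLib() // hypothetical name for the Java static method
}

// Entry points mix this in; referencing LibSetup forces its initializer to run
trait UsesLib {
  private val ensureLibSetup = LibSetup
}

object MyApp extends App with UsesLib {
  // application code
}

It still relies on new entry points remembering to mix in the trait, so it does not fully remove the "someone forgets" risk; a common base class for tests that also mixes in UsesLib covers the test side.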

Understanding Task Dependencies

I'm trying to grok how tasks depend on one another in SBT (using 0.13.7). 'inspect' and 'inspect tree' have been lifesavers; however, I still find examples that I can't explain.
For example, I know that 'publishLocal' ends up calling 'copyResources' somehow, but if you run 'inspect tree publishLocal', you don't see the copyResources task in the tree. I can see the 'Copying Resources' output when running with debug logging on and I know that log statement comes from inside the copyResourcesTask function. Is there some other way this is getting invoked? Some other way to see these dependencies?
Some dependencies are dynamic, in the sense that they are computed while running a task. These dependencies cannot be shown by inspect tree, because identifying them would require executing the task. And then again, the dependency could change from one run to the next.
See the documentation about taskDyn.
I don't know of any way to show the actual dependencies, though.
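For illustration, here is a sketch of what such a dynamic dependency can look like in a build.sbt (the task name and the condition are made up): the dependency on compile only exists when the condition holds, so it is only discovered while the task runs and cannot show up in inspect tree.

val maybeCompile = taskKey[Unit]("compiles only for Scala 2.11 builds")

maybeCompile := Def.taskDyn {
  // which task we depend on is decided here, at execution time
  if (scalaVersion.value.startsWith("2.11"))
    Def.task { (compile in Compile).value; () }
  else
    Def.task { () }
}.value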

How to skip error checking when compiling Scala files in IDEA?

The run config option Make, no error check in IntelliJ IDEA does not skip Scala errors; it results in confusing NoClassDefFoundError errors when running the project.
How do I skip error checking when compiling Scala files?
My end goal is for the program to fail at runtime when the classes with errors are accessed, rather than being prevented from running at all before runtime.
As the name suggests, "Make, no error check" will just pretend compile errors don't matter. You then end up with an incomplete classpath that does not contain any of the classes that failed to compile. As a consequence, when you run that project, you should not be surprised to see class-not-found errors.
I don't know how this "feature" interacts with the incremental Scala compiler. I wouldn't expect "Make, no error check" to produce any useful results.
If you want to skip parts of your code that currently don't compile, a good solution is to use the ??? "hole" value, i.e. a placeholder of type Nothing that you can plug into any position where you would have incomplete code. E.g.
def myMethod: String = ??? // implement later
This allows the surrounding code to compile; an error is only produced when the placeholder is actually hit at runtime.
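For example (a small sketch; the class and method names are made up), the class below compiles even though body is unfinished, and only the call that actually reaches the placeholder fails:

object Demo extends App {
  class ReportBuilder {
    def header: String = "Report"
    def body: String = ??? // implement later
  }

  val r = new ReportBuilder
  println(r.header) // prints "Report"
  println(r.body)   // throws scala.NotImplementedError only when this line runs
}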

JUnit Fork-Mode in Java Classes

There's support for forkMode in Ant and Maven, and occasionally we use it with the value perTest. However, the JUnit tests in Eclipse still fail when we run the tests on a class or on a project (Run As -> JUnit Test). Obviously JUnit uses its default settings or behaviour and executes the tests in parallel, causing some red crosses in the JUnit view.
Is there a way to code something into the test-class that lets JUnit behave like the forkMode setting? We don't mind if there's an Eclipse-only solution for this.
Or can this be done with a Run Configuration in Eclipse?
EDIT:
I understand that the problems are caused by data remaining after tests, which makes further tests fail. While this makes sense, please understand that this doesn't answer my question. Think of my situation as being part of some sort of Tiger Team: we have a bunch of issues, and fixing that part of the existing tests is just one of them. Trust me, we will try to cover everything... (I haven't heard that in a while)
Eclipse runs JUnit tests serially, in a single thread, in the same JVM. If you have tests that normally operate in parallel, this should not affect the test behavior. However, if you assume that you can change settings in the VM, like system properties or class static variables, and that the next test will not see those changes, your tests will break.
The rule of thumb is that each test should leave the system (vm, database, filesystem) exactly as it found it so that each test can be run at any time, in any order.
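As a concrete illustration of that rule (a sketch in Scala with JUnit 4 to match the rest of this page; the property and value are arbitrary), each test saves whatever global state it touches and restores it afterwards:

import org.junit.{After, Before, Test}
import org.junit.Assert.assertEquals

class IsolationTest {
  private var savedTmp: String = _

  // remember the JVM-wide state before the test...
  @Before def saveState(): Unit =
    savedTmp = System.getProperty("java.io.tmpdir")

  // ...and put it back afterwards so the next test sees the JVM as it was
  @After def restoreState(): Unit =
    System.setProperty("java.io.tmpdir", savedTmp)

  @Test def usesTempDir(): Unit = {
    System.setProperty("java.io.tmpdir", "/tmp/test-scratch") // arbitrary value
    assertEquals("/tmp/test-scratch", System.getProperty("java.io.tmpdir"))
  }
}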

Import and compile an Axapta 2009 XPO by command line

I'm looking for a way to import an existing XPO export via the command line into the AX 2009 AOT and afterwards compile just this imported XPO. Google tells me how to compile the whole AOT from the command line, which takes quite long.
So is there a way to import an XPO (shared project) and compile just those objects?
What possibilities are available if the objects to be imported are version-controlled by AX and checked in?
I'm hoping for an easy way to automate the optional check-out, the import (avoiding the overwrite? questions), compiling and running ;)
Thanks in advance!
You can make your own startup command:
Make a new class that extends SysStartupCmd.
Change the construct method of SysStartupCmd to call your class.
Do whatever you need there; this includes parsing the parm variable.
You will also have to deal with version control by calling check-in/check-out in your code, handling compile errors, etc.
There is no easy way; this is complicated stuff.
Over the last two years I have introduced and refined a command-line process for deploying XPOs to AX 4.0 with great success. The class SysAutoRun is key, as mentioned above. The following is a brief explanation of the resulting process:
1. Developers export AX objects from the AOT to a corresponding folder (layer), i.e. CUS, VAR, etc. For the most part the file name is the default file name set by AX.
2. Developers commit using SVN in this scenario. This would have to be evaluated to meet your needs.
3. A console application for the build process reads all file names from each directory (layer) and creates corresponding AX project definition files.
4. The console application reads all file names from each directory (again) and creates an import definition file for each corresponding layer (folder). The project definition created above is also instructed to be imported after all other objects are loaded and finally compiled. The import definition contains some specialized elements that are recognized by the SysAutoRun.execCommand(XmlNode _command) method.
5. A call is made to ax32.exe "config.axc" -StartupCmd=AUTORUN_ImportDefinitionMentionedAbove.xml -lazyclassloading -lazytableloading -nocompileonimport -internal=noModalBoxes
6. AX parses this import definition file, invoking customizations as instructed. Logging is added to the process for outputting compilation results to an XML log file. Finally, step 3's project definition file is compiled.
7. The console application validates the outputted XML log and handles it appropriately.
Steps 5-7 are repeated for each folder (layer).
I understand this is very vague. The intent of this post is to get feedback on interest before I invest more time in describing the process. The import definition file is probably of most interest, as it is responsible for loading the objects in the right order, synchronizing the ORM, compiling, repeating, etc.
Thanks M#