I've run into a problem using Set.contains in F# running inside Unity.
(Unity does not officially support F#, but they do support including .DLLs as Plugins to provide externally-compiled code which Scripts can access. This project includes the FSharp.Core.DLL inside Unity and works well. This is the first gotcha I've run into using this approach.)
The problem is that the built-in F# structural equality support seems to fail when running under Unity under certain conditions. The identical code succeeds when run as a console application, outside Unity.
I have a sample project on github which contains a Visual Studio solution with all the pieces necessary to demonstrate the inconsistency: https://github.com/anticrisis/fsharp-unity-test
It's quite simple: when a Set is built containing a struct type, either an F# 4.1 [<Struct>] record or an old-style struct/val/end type, Set.contains returns true when it shouldn't:
[<Struct>]
type V3 = {x: int; y: int; z: int}
let testSet0 = Set.empty<V3>
let testSet1 = testSet0.Add({x = 123; y = 123; z = 123})
// fails under Unity
if testSet1.Contains {x = 0; y = 0; z = 1}
then "FAIL test: testSet1"
else "SUCCESS test: testSet1"
This Contains call should obviously return false, and it does outside of Unity. However, it returns true when run inside Unity.
Luckily, there are two main workarounds: avoid Set.contains and use Set.exists instead, or use a tuple instead of a struct. Both approaches seem to sidestep whatever inconsistency causes F# to fail within Unity.
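A minimal sketch of the Set.exists workaround, using the V3 type from above (`containsV3` is a hypothetical helper name, not part of the project):

```fsharp
[<Struct>]
type V3 = { x: int; y: int; z: int }

let testSet1 = Set.empty |> Set.add { x = 123; y = 123; z = 123 }

// Compare field by field instead of relying on the struct's
// built-in structural equality, which is what misbehaves under Unity.
let containsV3 (v: V3) (s: Set<V3>) =
    s |> Set.exists (fun u -> u.x = v.x && u.y = v.y && u.z = v.z)

containsV3 { x = 0; y = 0; z = 1 } testSet1       // false
containsV3 { x = 123; y = 123; z = 123 } testSet1 // true
```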
The repository includes a bit more information in the README and more tests to demonstrate this problem.
This unexpected behavior is caused by a bug in the Windows version of 64-bit Mono: https://github.com/mono/mono/issues/7177.
For a workaround, consider implementing [<CustomComparison; CustomEquality>] on any [<Struct>] types you define.
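As a sketch of that workaround (the member bodies below are illustrative, not taken from the project), a [<Struct>] record with custom equality and comparison might look like this; Set requires comparison, so IComparable must be implemented as well:

```fsharp
[<Struct; CustomEquality; CustomComparison>]
type V3 =
    { x: int; y: int; z: int }

    // Explicit field-by-field equality, bypassing the generic
    // structural-equality path that trips over the Mono bug.
    override this.Equals(o) =
        match o with
        | :? V3 as v -> this.x = v.x && this.y = v.y && this.z = v.z
        | _ -> false

    override this.GetHashCode() = hash (this.x, this.y, this.z)

    // Set<V3> needs ordering; compare the fields as a tuple.
    interface System.IComparable with
        member this.CompareTo(o) =
            match o with
            | :? V3 as v -> compare (this.x, this.y, this.z) (v.x, v.y, v.z)
            | _ -> invalidArg "o" "not a V3"

let s = Set.empty |> Set.add { x = 123; y = 123; z = 123 }
s.Contains { x = 0; y = 0; z = 1 }  // now routed through CompareTo above
```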
How can I add debug points to statements in a for-comprehension?
for {
a <- sqlQuery1()
b <- sqlQuery2()
} yield {
// output
}
I want to inspect values as each of the above queries is executed, but simply adding debug points does not seem to work. The queries get executed without ever hitting the debug points. I am not sure whether this is a limitation of the library being used, the language, or the IDE.
Here is my setup:
Scala
IntelliJ IDEA 14.1.3
Scala Plugin 1.5.200
Library - Slick 3.0 (Using DBIO Actions)
Update:
Case 1: with slick
Debug points don't work. Confirmed that again.
Case 2: w/o slick
The debugger does stop at debug points, but sometimes it evaluates expressions and sometimes it does not. I have noticed the following problems (in currying / anonymous functions):
Variable value is shown as 'Size = ?'
Variable value is 'Debug info unavailable'
Fails to evaluate expression
Cursor stays on the same line without highlighting anything whenever I step over (it is doing things in the background though)
'Warning: No executable code found at...'
Actually, searching without "slick" as a keyword showed me that a lot of people have similar issues, like here.
I wrote the following query in order to determine the dependencies between my solution and other assemblies. We have a large set of internal NuGet shared libraries which are used extensively, and I want to ensure these are included; hence I'm using 't' below to eliminate certain third-party assemblies while keeping our internal libraries.
This query works great, but I have realised that it only shows dependencies where the dependency is a method call. It doesn't include constants, enums, and structs.
How can I enhance the query below to show us the detail of these and any other dependencies?
let t = Assemblies.WithNameWildcardMatchIn("xunit","RestSharp","NSubstitute","O2S*","EntityFramework","AxInterop*","AutoMapper","Autofac*","ADODB","mscorlib","System*", "Microsoft*","Infra*","Interop*").ToDictionary<IAssembly,String>(c=>c.Name)
from a in Application.Assemblies
from m in a.ChildMethods
from b in m.MethodsCalled
let isThirdParty = t.ContainsKey(b.ParentAssembly.Name)
select new { a,isThirdParty,m.ParentNamespace, m.ParentType,m.Name,DependsOnAssembly=b.ParentAssembly.Name, DependsOnNamespace=b.ParentNamespace,DependsOnParentType=b.ParentType,DependsOnMethod=b.Name}
How about this refactored version of your query:
from a in Application.Assemblies
from m in a.ChildMethods
from b in m.MethodsCalled.Cast<IMember>().Union(m.FieldsUsed.Cast<IMember>())
let isThirdParty = b.IsThirdParty
select new {
a,
isThirdParty,
m.ParentNamespace,
m.ParentType,
m.Name,
DependsOnAssembly=b.ParentAssembly.Name,
DependsOnNamespace=b.ParentNamespace,
DependsOnParentType=b.ParentType,
DependsOnMember=b.Name
}
First, we simplified it greatly by using b.IsThirdParty :)
Second, we do a Union<IMember>() between MethodsCalled and FieldsUsed, so you also get all fields read and/or assigned, in addition to methods called.
Concerning structure usage: as long as you use a member of the structure (constructor, property, field...), the dependency will be listed.
Concerning enums: if a method uses an enumeration, you'll see a dependency toward the instance field EnumName.value__.
However, you won't see usage of constants or enumeration values. The reason is that this information gets lost in the IL code that NDepend analyzes: constants (and enumeration values are also constants) are replaced with their literal values in the IL code.
Hope this helps!
As a side note, isn't the query result more readable from within the NDepend UI if you write instead:
from m in Application.Methods
select new {
m,
thirdPartyMethodsCalled = m.MethodsCalled.Where(m1 => m1.IsThirdParty),
applicationMethodsCalled = m.MethodsCalled.Where(m1 => !m1.IsThirdParty),
thirdPartyFieldsUsed = m.FieldsUsed.Where(m1 => m1.IsThirdParty),
applicationFieldsUsed = m.FieldsUsed.Where(m1 => !m1.IsThirdParty)
}
Why doesn't eclipse show an error when I use a variable without declaring it?
Edit:
AFAIK, dynamic typing only means that the type of a variable is not known until run time. The variable must still be defined (explicitly or implicitly) before being used. For example, Python, which is also a dynamic language, reports this as an error.
Edit2:
How does groovy interpret this code so that it still isn't an error?
Because in dynamic languages like Groovy, one could have implemented methodMissing() / propertyMissing(). So although such a variable or method does not actually exist, it may still not be an error until the program is actually run. Such errors can usually only be detected at runtime, and hence IDEs usually don't complain about them.
As a hint, though, Eclipse does underline such variables when it is not able to statically resolve them.
EDIT :
To explain the concept by code example, just check the method test below. The IDE can't know whether something, that, etc. might actually resolve to a property or method of this class.
This vastly helps in building DSLs in groovy.
class TestClass {
    def test() {
        def a = something.that.didnt.exist()
        // or how about some random statements that make no sense at all
        a = new Parser().doSomething().blah.blah.blah
    }
    def propertyMissing(String name) { println "$name"; return this }
    def methodMissing(String name, args) { println "$name with $args"; return this }
}
new TestClass().test()
You might try using the @CompileStatic annotation on the method.
Then Eclipse will check for errors at compile time, i.e. while you develop.
I don't have Eclipse available to check this right now, so this is just a proposal.
I started using Smalltalk and I am trying to add a new assignment operator :>.
The current assignment operator in Pharo is not a selector, so I started looking into the class Scanner, where the underscore _ can be enabled for assignments. I tried to do the same thing in a similar way, but it did not work.
Do you have any ideas or suggestions about how I can achieve this?
For a start, have a look at the method Scanner>>#xColon. This method is called whenever a colon is encountered in the input. By adding the following snippet to the top of the method, you can make it detect your new assignment token :>:
aheadChar = $> ifTrue: [
self step.
tokenType := #leftArrow.
self step.
^ token := #':=' ]
With this change, :> behaves exactly like the normal assignment. By customising tokenType and token you can pass your new assignment operator to the parser (see Parser>>#expression and Parser>>#assignment:) and build a different AST (e.g., to achieve a different execution behaviour).
If you are interested in more flexibility you might want to look at Helvetia. Helvetia is a language workbench for Pharo that allows you to modify the host language and adapt the tools in a more modular fashion. One of the examples included with the distribution (CUSwapExample) is adding two new assignment operators to Smalltalk.
A Measured value consists of a (typically nonnegative) floating-point number and a unit of measure. The point is to represent real-world quantities and the rules that govern them. Here's an example:
scala> val oneinch = Measure(1.0, INCH)
oneinch : Measure[INCH] = Measure(1.0)
scala> val twoinch = Measure(2.0, INCH)
twoinch : Measure[INCH] = Measure(2.0)
scala> val onecm = Measure(1.0, CM)
onecm : Measure[CM] = Measure(1.0)
scala> oneinch + twoinch
res1: Measure[INCH] = Measure(3.0)
scala> oneinch + onecm
res2: Measure[INCH] = Measure(1.787401575)
scala> onecm * onecm
res3: Measure[CMSQ] = Measure(1.0)
scala> onecm * oneinch
res4: Measure[CMSQ] = Measure(2.54)
scala> onecm * Measure(1.0, LITER)
console>:7: error: conformance mismatch
scala> oneinch * 2 == twoinch
res5: Boolean = true
Before you get too excited: I haven't implemented this, I just mocked up a REPL session, and I'm not even sure of the syntax. I just want to be able to handle things like adding Measured quantities (even with mixed units), multiplying Measured quantities, and so on. Ideally, I'd like Scala's vaunted type system to guarantee at compile time that expressions make sense.
My questions:
Is there extant terminology for this problem?
Has this already been done in Scala?
If not, how would I represent concepts like "length" and "length measured in meters"?
Has this been done in some other language?
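As a sketch of one possible answer to the representation question, units can be encoded as phantom type parameters so that mismatched additions fail to compile. The unit names below are made up for illustration, and this does not handle mixed-unit addition or derived units like CMSQ, which need implicit conversions or type-level arithmetic:

```scala
// Units as phantom types: the type parameter U exists only at compile time.
sealed trait UnitOfMeasure
sealed trait Inch extends UnitOfMeasure
sealed trait Cm extends UnitOfMeasure

final case class Measure[U <: UnitOfMeasure](value: Double) {
  // Addition is only defined between two measures of the same unit U.
  def +(that: Measure[U]): Measure[U] = Measure[U](this.value + that.value)
  def *(k: Double): Measure[U] = Measure[U](this.value * k)
}

val oneInch = Measure[Inch](1.0)
val twoInch = Measure[Inch](2.0)
val sum     = oneInch + twoInch          // Measure[Inch](3.0)
// val bad  = oneInch + Measure[Cm](1.0) // does not compile: unit mismatch
```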
A $330-million Mars probe was lost because the contractor was using yards and pounds and NASA was using meters and newtons. A Measure library would have prevented the crash.
F# has support for this; see for example this link for an introduction. There has been some work done in Scala on units, for example here and here. There is a Scala compiler plugin as well, as described in this blog post. I briefly tried to install it, but with Scala 2.8.1 I got an exception when I started the REPL, so I'm not sure whether this plugin is actively maintained at the moment.
Well, this functionality exists in Java, meaning you can use it directly from Scala: JSR-275, which was moved to Google Code; JScience implements the spec. Here's a good introduction. If you want a better interface, I'd use this as a base and build a wrapper around it.
Your question is fully answered with one word. You can thank me later.
FRINK. http://futureboy.us/frinkdocs/
FYI, I have developed a Scalar class in Scala to represent physical units. I am currently using it for my R&D work in air traffic control, and it is working well for me. It does not check for unit consistency at compile time, but it checks at run time. I have a unique scheme for easily substituting it with basic numeric types for efficiency after the application is tested. You can find the code and the user guide at
http://russp.us/scalar-scala.htm
Here is the summary from the website:
Summary-- A Scala class was designed to represent physical scalars and to eliminate errors involving implicit physical units (e.g., confusing radians and degrees). The standard arithmetic operators are overloaded to provide syntax identical to that for basic numeric types. The Scalar class itself does not define any units but is part of a package that includes a complete implementation of the standard metric system of units and many common non-metric units. The scalar package also allows the user to define a specialized or reduced set of physical units for any particular application or domain. Once an application has been developed and tested, the Scalar class can be switched off at compile time to achieve the execution efficiency of operations on basic numeric types, which are an order of magnitude faster. The scalar class can also be used for discrete units to enforce type checking of integer counts, thereby enhancing the static type checking of Scala with additional dynamic type checking.
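A minimal sketch of the run-time-checking idea the summary describes (the field names and the two base units are my own illustration, not the actual Scalar class): each value carries integer exponents for its base units, multiplication and division combine the exponents, and addition checks them at run time.

```scala
// Run-time dimensional checking: a value plus integer exponents
// for two illustrative base units, meters (m) and seconds (s).
final case class Scalar(value: Double, m: Int = 0, s: Int = 0) {
  def +(that: Scalar): Scalar = {
    // Unit consistency is checked at run time, not compile time.
    require(m == that.m && s == that.s, "unit mismatch in addition")
    Scalar(value + that.value, m, s)
  }
  def *(that: Scalar): Scalar = Scalar(value * that.value, m + that.m, s + that.s)
  def /(that: Scalar): Scalar = Scalar(value / that.value, m - that.m, s - that.s)
}

val length = Scalar(2.0, m = 1)
val time   = Scalar(4.0, s = 1)
val speed  = length / time  // Scalar(0.5, 1, -1): meters per second
// length + time            // throws IllegalArgumentException at run time
```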
Let me clarify my previous post. I should have said: these kinds of errors ("meter/yard conversion errors") are automatically avoided (not "handled") by simply using my Scalar class. All unit conversions are done automatically. That's the easy part.
The harder part is the checking for unit inconsistencies, such as adding a length to a velocity. This is where the issue of dynamic vs. static type checking comes up. I agree that static checking is generally preferable, but only if it can be done without sacrificing usability and convenience.
I have seen at least two "projects" for static checking of units, but I have never heard of anyone actually using them for real work. If someone knows of a case where they were used, please let me know. Until you use software for real work, you don't know what sorts of issues will come up.
As I wrote above, I am currently using my Scalar class (http://russp.us/scalar-scala.htm) for my R&D work in ATC. I've had to make many tweaks along the way for usability and convenience, but it is working well for me. I would be willing to consider a static units implementation if a proven one comes along, but for now I feel that I have essentially 99% of the value of such a thing. Hey, the vast majority of scientists and engineers just use "Doubles," so cut me some slack!
"Yeah, ATC software with run-time type checking? I can see headlines now: "Flight 34 Brought Down By Meter/Yard Conversion"."
Sorry, but you don't know what you're talking about. ATC software is tested for years before it is deployed. That is enough time to catch unit inconsistency errors.
More importantly, meter/yard conversions are not even an issue here. These kinds of errors are automatically handled simply by using my Scalar class. For those kinds of errors, you need neither static nor dynamic checking. The issue of static vs. dynamic checking comes up only for unit inconsistencies, as in adding length to time. These kinds of errors are less common and are typically caught with dynamic checking on the first test run.
By the way, the interface here is terrible.