What is the advantage of global functions when writing functional code - swift

I am a Swift developer and am trying to adopt a functional / reactive style in my code. I have been using ReactiveCocoa in all my projects and I have started giving RAC 3.0 a try. One thing I have noticed is that, in the project, there is heavy use of curried functions that have global scope (i.e. not tied to an instance).
What I am keen to understand is why global functions are a good idea.
Is this something that is unique to curried functions or is it a general functional programming attribute?

In my experience with Haskell (which does not have mutable variables), I usually write all functions (including auxiliary ones) globally, which is convenient for testing. After debugging I usually move them from global scope to local scope, but for library development we can simply choose not to expose the helper functions for external use.
A global function definition just means that you can access the function anywhere (more or less) within the module/file, or export it.
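For comparison with the RAC 3.0 style, here is a minimal Swift sketch of a curried global function (the names are illustrative, not from ReactiveCocoa):

// A global, curried function: not tied to any instance, so it can be
// partially applied and composed freely.
func map<T, U>(_ transform: @escaping (T) -> U) -> ([T]) -> [U] {
    return { values in values.map(transform) }
}

// Partial application: fix the transform now, supply the data later.
let doubleAll = map { (x: Int) in x * 2 }
print(doubleAll([1, 2, 3]))  // [2, 4, 6]

Because the function lives at global scope, any code can partially apply it without needing an instance, which is what makes this point-free style of composition convenient.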

Related

How to write Dart idiomatic utility functions or classes?

I am pondering a few different ways of writing utility classes/functions. By utility I mean a piece of code that is reused in many places in the project, for example a set of formatting functions for date and time handling.
I've got a Java background, where there was a tendency to write
class UtilsXyz {
  public static void doSth() {...}
  public static void doSthElse() {...}
}
which I find hard to unit test because of their static nature. The other way is to inject utility classes without static members here and there.
In Dart you can use both approaches, but I find other techniques more idiomatic:
mixins
Widely used and recommended in many articles for utility functions. But I see their nature as a solution to the infamous diamond problem rather than as a vehicle for utility classes, and they're not very readable. That said, I can imagine more focused utility mixins that pertain only to Widgets, or only to Presenters, UseCases, etc. They seem natural then.
extension functions
It's somehow natural to write '2023-01-29'.formatNicely(), but I'd like to be able to mock utility functions, and you cannot mock extension functions.
global functions
Last but not least, so far I find them the most natural (in terms of idiomatic Dart) way of providing utilities. I can unit test them, they're widely accessible, and they don't look weird like mixins. I can also import them with the as keyword to give the reader a hint about where the function in use actually comes from.
Does anybody have some experience with the best practices for utilities and is willing to share them? Am I missing something?
To write utility functions in an idiomatic way for Dart, your options are either extension methods or global functions.
You can see that Dart has a linter rule addressing this problem:
AVOID defining a class that contains only static members.
Creating classes with the sole purpose of providing utility or otherwise static methods is discouraged. Dart allows functions to exist outside of classes for this very reason.
https://dart-lang.github.io/linter/lints/avoid_classes_with_only_static_members.html.
Extension methods.
but I'd like to unit test some utility functions, and you cannot test extension functions, because they're static.
I did not find any resource stating that extension methods are static, either on Stack Overflow or in the Dart extension documentation, although extensions can have static methods of their own. Also, there is an open issue about supporting static extension members.
So, I think extensions are testable as well.
To test extension methods you have two options:
Import the extension name and use the extension syntax inside the tests.
Write an equivalent global utility function, test that instead, and make the extension method call the global function (I do not recommend this, because if someone changes the extension method, the test will not catch it).
EDIT: as jamesdlin mentioned, the extensions themselves can be tested, but they cannot be mocked, since they are resolved at compile time.
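A sketch of option 1 in Swift terms (XCTest standing in for Dart's test package; the shouted() extension is a made-up example):

import XCTest

extension String {
    // A hypothetical utility extension.
    func shouted() -> String { uppercased() + "!" }
}

final class StringUtilsTests: XCTestCase {
    func testShouted() {
        // Option 1: bring the extension into scope and call it directly.
        XCTAssertEqual("hello".shouted(), "HELLO!")
    }
}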
Global functions.
To test global functions, just import them and test them.
I think global functions are pretty straightforward:
This is the simplest, most idiomatic way to write utility functions; it does not raise any "wtf" flags when someone reads your code (unlike mixins), even for Dart beginners.
This also takes advantage of the Dart top-level functions feature.
That's why I prefer this approach for utility functions that are not attached to any other classes.
And if you are writing a library/package, the @visibleForTesting annotation may be helpful for you (this annotation comes from https://pub.dev/packages/meta).
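As a sketch of the approach (in Swift syntax for consistency with the rest of this page; formatNicely is an invented name):

import Foundation

// A top-level utility function: importable anywhere and trivially testable.
func formatNicely(_ date: Date) -> String {
    let formatter = DateFormatter()
    formatter.dateStyle = .medium
    return formatter.string(from: date)
}

print(formatNicely(Date()))  // e.g. "Jan 29, 2023"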

What do Swift underscore methods and functions do?

While experimenting with Swift I found a lot of useful methods and functions that are prefixed with underscores. For example, strings have a hidden _split() method. For some reason the functions _sin() and _cos() (but not _tan or _sqrt) are also available by default. In fact, the REPL actually suggests these functions when I type sin(2.0). I'm not importing Foundation or anything that imports Foundation.
Why does Swift offer these "hidden" functions, especially the trig functions, which I would have expected to be part of a math module instead of builtins?
Things beginning with underscores are not for public consumption; they are only exposed publicly for internal reasons (e.g. for testing, because the standard library can't use @testable).
— Dave Abrahams

What is the difference between Clojure REPL and Scala REPL?

I've been working with the Scala language for a few months and I've already created a couple of projects in Scala. I've found the Scala REPL (at least its IntelliJ worksheet implementation) quite convenient for quick development: I can write code, see what it does, and it's nice. But I do this only for functions (not a whole program). I can't start my application and change it on the spot, or at least I don't know how (so if you do, you are welcome to give me a piece of advice).
A few days ago my associate told me about the Clojure REPL. He uses Emacs for development and he can change code on the spot and see the results without restarting. For example, he starts the process, and if he changes the implementation of a function, the code changes its behavior without a restart. I would like to have the same thing with Scala.
P.S. I want to discuss neither which language is better nor whether functional programming is better than object-oriented programming. I want to find a good solution. If Clojure is the better language for the task, then so be it.
The short answer is that Clojure was designed to use a very simple, single-pass compiler which reads and compiles a single s-expression or form at a time. For better or worse there is no global type information, no global type inference, and no global analysis or optimization. Clojure uses clojure.lang.Var instances to create global bindings through a series of hashmaps from textual symbols to transactional values. def forms all create bindings at global scope in this global binding map.
So where in Scala a "function" (method) will be resolved to an instance or static method on a given JVM class, in Clojure a "function" (def) is really just a reference to an entry in the table of var bindings. When a function is invoked, there isn't a static link to another class; instead the var is referenced by symbolic name, then dereferenced to get an instance of a clojure.lang.IFn object, which is then invoked.
This layer of indirection means that it is possible to re-evaluate only a single definition at a time, and that the re-evaluation becomes globally visible to all clients of the re-defined var.
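A rough analogy in Swift (a mutable top-level closure standing in for a Clojure var; this sketches only the indirection, not how Clojure actually implements it on the JVM):

// Calls go through a mutable reference, so redefining the function
// is immediately visible to every caller.
var greet: (String) -> String = { name in "Hello, \(name)" }

func caller(_ name: String) -> String {
    // `greet` is looked up at call time, not linked statically.
    return greet(name)
}

print(caller("world"))              // Hello, world
greet = { name in "Hi, \(name)!" }  // "re-evaluate" the definition
print(caller("world"))              // Hi, world!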
In comparison, when a definition in Scala changes, scalac must reload the changed file, macroexpand, type-infer, type-check, and compile. Then, due to the semantics of classloading on the JVM, scalac must also reload all classes which depend on methods in the class which changed. Also, all values which are instances of the changed class become garbage.
Both approaches have their strengths and weaknesses. Obviously Clojure's approach is simpler to implement, but it pays an ongoing performance cost due to continual function lookup operations (to say nothing of the correctness concerns arising from the lack of static types and so forth). This is arguably suitable for contexts in which lots of change is happening in a short timeframe (interactive development) but less suitable for contexts where code is mostly static (deployment; hence Oxcart). Some work I did suggests that the slowdown in Clojure programs from the lack of static method linking is on the order of 16-25%. This is not to call Clojure slow or Scala fast; they just have different priorities.
Scala chooses to do more work up front so that the compiled application will perform better, which is arguably more suitable for application deployment, when little or no reloading will take place, but proves a drag when you want to make lots of small changes.
Some material I have on hand about compiling Clojure code, in more or less chronological order by publication, since Nicholas influenced my GSoC work a lot:
Clojure Compilation [Nicholas]
Clojure Compilation: Full Disclojure [Nicholas]
Why is Clojure bootstrapping so slow? [Nicholas]
Oxcart and Clojure [me]
Of Oxen, Carts and Ordering [me]
Which I suppose leaves me in the unhappy place of simply saying "I'm sorry, Scala wasn't designed for that the way Clojure was" with regard to code hot swapping.

Use of Clojure macros for DSLs

I am working on a Clojure project and I often find myself writing Clojure macros for DSLs, but I was watching a video about how a company uses Clojure in their real work, and the speaker said that in practical use they do not use macros for their DSLs; they only use macros to add a little syntactic sugar. Does this mean I should write my DSL using standard functions and then add a few macros at the end?
Update:
After reading the many varied (and entertaining) responses to this question I have realized that the answer is not as clear cut as I first thought, for many reasons:
There are many different types of API in an application (internal, external)
There are many types of user of the API (business user who just wants to get something done fast, Clojure expert)
Are the macros there to hide boilerplate code?
I will go away and think about the question more deeply, but thanks for your answers, as they have given me lots to think about. I also noticed that Paul Graham thinks the opposite of the Christophe video and argues that macros should be a large part of the codebase (25%):
http://www.paulgraham.com/avg.html
To some extent I believe this depends on the use / purpose of your DSL.
If you are writing a library-like DSL to be used in Clojure code and you want it to be used in a functional way, then I would prefer functions over macros. Functions are "nice" for Clojure users because they can be composed dynamically into higher-order functions, etc. For example, you might be writing a functional web framework like Ring.
If you are writing an imperative DSL that will be used pretty independently of other Clojure code and you have decided that you definitely don't need higher-order functions, then the usage will be pretty similar either way and you can choose whichever makes more sense. For example, you might be creating some kind of business rules engine.
If you are writing a specialised DSL that needs to produce highly performant code, then you will probably want to use macros most of the time, since they will be expanded at compile time for maximum efficiency. For example, you might be writing graphics code that needs to expand to exactly the right sequence of OpenGL calls.
Yes!
Write functions whenever possible. Never write a macro when a function will do. If you write too many macros you end up with something that is much harder to extend. Macros, for example, can't be applied or passed around (see the sketch after the link below).
Christophe Grand: (not= DSL macros)
http://clojure.blip.tv/file/4522250/
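To make the composability point concrete, here is a sketch in Swift (standing in for Clojure, since the argument applies to any language where functions are first-class values; the pricing-rule names are invented):

// DSL pieces as plain functions: they can be stored, passed, and composed.
func discount(_ percent: Double) -> (Double) -> Double {
    return { price in price * (1 - percent / 100) }
}

func addTax(_ percent: Double) -> (Double) -> Double {
    return { price in price * (1 + percent / 100) }
}

// Rules can be collected and applied dynamically at runtime,
// which a compile-time-only construct such as a macro cannot do.
let rules = [discount(10), addTax(20)]
let finalPrice = rules.reduce(100.0) { price, rule in rule(price) }
print(finalPrice)  // 108.0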
No!
Don't be afraid of using macros extensively. When in doubt, write a macro. Functions are inferior for implementing DSLs: they push the burden onto the runtime, whereas macros let you do much of the heavyweight computation at compile time. Just think of the difference between implementing, say, an embedded Prolog as an interpreter function versus as a macro which compiles Prolog into some form of a WAM.
And do not listen to those who say that "macros can't be applied or passed around"; this argument is entirely a strawman. Those people are advocating interpreters over compilers, which is simply ridiculous.
A few tips on how to implement DSLs using macros:
Do it in stages. Define a long chain of languages from your DSL to the underlying Clojure. Keep each transform as simple as possible - this way you'd be able to easily maintain and debug your DSL compiler.
Prepare a toolbox of DSL components that you will reuse when implementing your DSLs. It should include target languages of different semantics (e.g., untyped eager functional, which is Clojure itself; untyped lazy functional; first-order logic; typed imperative; Hindley-Milner typed eager functional; dataflow; etc.). With macros it is trivial to combine the properties of all these target semantics seamlessly.
Maintain a set of compiler-building tools. It should include parser generators (useful even if your DSLs are entirely in S-expressions), term rewriting engines, pattern matching engines, implementations for some common algorithms on graphs (e.g., graph colouring), etc.
Here's an example of a DSL in Haskell that uses functions rather than macros:
http://contracts.scheming.org/
Here is a video of Simon Peyton Jones giving a talk about this implementation:
http://ulf.wiger.net/weblog/2008/02/29/simon-peyton-jones-composing-contracts-an-adventure-in-financial-engineering/
Leverage the characteristics of Clojure and FP before going down the path of implementing your own language. I think SK-logic's tips give you a good indication of what is needed to implement a full-blown language. There are times when it's worth the effort, but those are rare.

Why do dynamic languages like Ruby and Python not have the concept of interfaces like in Java or C#?

As I develop more interest in dynamic languages like Ruby and Python, I find, to my surprise, that while the claim is that they are 100% object oriented, several basic concepts like interfaces, method overloading, and operator overloading seem to be missing. Are these somehow built in under the covers, or do these languages just not need them? If the latter is true, are they 100% object oriented?
EDIT: Based on some answers I see that overloading is available in both Python and Ruby. Is this the case in Ruby 1.8.6 and Python 2.5.2?
Dynamic languages use duck typing. Any code can call methods on any object that supports those methods, so the concept of interfaces is extraneous.
Python does in fact support operator overloading (see section 3.3, "Special method names", in the language reference), as does Ruby.
Anyway, you seem to be focusing on aspects that are not essential to object oriented programming. The main focus is on concepts like encapsulation, inheritance, and polymorphism, which are 100% supported in Python and Ruby.
Thanks to late binding, they do not need it. In Java/C#, interfaces are used to declare that a class has certain methods, and this is checked at compile time; in Python, whether a method exists is checked at runtime.
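For contrast, this is what the compile-time check looks like in a statically typed language (Swift protocols standing in for Java/C# interfaces; the names are invented):

// The "interface" is checked at compile time: a type must declare
// conformance before it can be used where the protocol is expected.
protocol Quacker {
    func quack() -> String
}

struct Duck: Quacker {
    func quack() -> String { "Quack" }
}

func makeNoise(_ animal: Quacker) -> String {
    // The compiler guarantees quack() exists; a dynamic language would
    // discover it (or fail) only when the call happens at runtime.
    return animal.quack()
}

print(makeNoise(Duck()))  // Quack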
Method overriding (which is what this example shows, rather than overloading) works in Python:
>>> class A:
...     def foo(self):
...         return "A"
...
>>> class B(A):
...     def foo(self):
...         return "B"
...
>>> B().foo()
'B'
Are they object-oriented? I'd say yes. It's more of an approach thing than a question of whether any concrete language has feature X or feature Y.
I can only speak for python, but there have been proposals for interfaces as well as home-written interface examples in the past.
However, the way python works with objects dynamically tends to reduce the need for (and the benefit of) interfaces to some extent.
With a dynamic language, type binding happens at runtime. Interfaces are mostly used for compile-time constraints on objects; if binding happens at runtime, that eliminates much of the need for interfaces.
Name-based polymorphism
"For those of you unfamiliar with Python, here's a quick intro to name-based polymorphism. Python objects have an internal dictionary that contains a string for every attribute and method. When you access an attribute or method in Python code, Python simply looks up the string in the dict. Therefore, if what you want is a class that works like a file, you don't need to inherit from file, you just create a class that has the file methods that are needed.
Python also defines a bunch of special methods that get called by the appropriate syntax. For example, a+b is equivalent to a.__add__(b). There are a few places in Python's internals where it directly manipulates built-in objects, but name-based polymorphism works as you expect about 98% of the time."
Python does provide operator overloading, e.g. you can define a method __add__ if you want to overload +.
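The statically typed analogue, for comparison (Swift shown, with an invented Vec2 type):

// Operator overloading in Swift: the counterpart of Python's __add__.
struct Vec2 {
    var x: Double
    var y: Double

    static func + (lhs: Vec2, rhs: Vec2) -> Vec2 {
        Vec2(x: lhs.x + rhs.x, y: lhs.y + rhs.y)
    }
}

let v = Vec2(x: 1, y: 2) + Vec2(x: 3, y: 4)
print(v)  // Vec2(x: 4.0, y: 6.0)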
You typically don't need to provide method overloading, since you can pass arbitrary parameters into a single method. In many cases, that single method can have a single body that works for all kinds of objects in the same way. If you want to have different code for different parameter types, you can inspect the type, or double-dispatch.
Interfaces are mostly unnecessary because of duck typing, as rossfabricant points out. A few remaining cases are covered in Python by ABCs (abstract base classes) or Zope interfaces.