Normally in JavaScript I can do something like this:
var step;

determineStep();

function determineStep() {
  step = 'A';
  asyncCallbackA(function(result) {
    if (result.testForB) performB();
  });
}

function performB() {
  step = 'B';
  asyncCallbackB(function(result) {
    if (result.testForC) performC();
  });
}

function performC() {
  step = 'C';
  ...
}
However, CoffeeScript does not allow named function declarations that get hoisted, so I would have to define each function before calling it. This would result in them being out of order (very confusing). And if any of them have circular dependencies, then it is not possible at all.
In CoffeeScript I am forced to do:
step = null

determineStep = ->
  step = 'A'
  asyncCallbackA (result) ->
    if result.testForB
      step = 'B'
      asyncCallbackB (result) ->
        if result.testForC
          step = 'C'
          asyncCallbackC (result) ->
            ...

determineStep()
If you have multiple steps this can quickly get out of hand.
Is it possible to implement the JavaScript pattern in CoffeeScript? If not, what is the best way to handle this scenario?
I think you're a little confused. When you say:
f = -> ...
the var f is (of course) hoisted to the top of the scope, but the f = function() { ... } definition is left where it is. This means the only ordering constraint is that you need to define all your functions before you call determineStep().
For example, this works just fine:
f = -> g()
g = -> h()
h = -> console.log('h')
f()
In your case:
step = null

determineStep = ->
  step = 'A'
  asyncCallbackA (result) -> performB() if result.testForB

performB = ->
  step = 'B'
  asyncCallbackB (result) -> performC() if result.testForC

performC = ->
  step = 'C'
  ...

determineStep()
should be fine. determineStep can call performB before performB is defined (in source order) because:
The var performB is hoisted.
By the time determineStep executes, the performB = function() { ... } will have been done.
Similarly for the other functions, so you don't have to worry about interdependencies among your functions.
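Roughly, the CoffeeScript above compiles to JavaScript along these lines (an illustrative sketch, not the exact compiler output):
var step, determineStep, performB, performC;  // every assigned name is declared ("hoisted") up here

step = null;

determineStep = function() {
  step = 'A';
  return asyncCallbackA(function(result) {
    if (result.testForB) {
      return performB();
    }
  });
};

performB = function() {
  step = 'B';
  return asyncCallbackB(function(result) {
    if (result.testForC) {
      return performC();
    }
  });
};

// performC follows the same pattern ...

determineStep();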
I am currently trying to do some self-learning in Swift, just for my own interest. The course I bought says that we should create a function similar to this one in order to solve my problem, but I'm blankly staring at it, trying to figure out what this function actually does:
func unknown() -> () -> Int {
    var x = 0
    let z: () -> Int = {
        x += 1
        return x
    }
    return z
}
It is a function that returns another function, which in turn returns an integer that increases every time you call it:
let afunc = unknown()
let value1 = afunc() // 1
let value2 = afunc() // 2
let value3 = afunc() // 3
The interesting part of this is the return type. () -> Int is a function that returns an Int, which means that unknown returns a function rather than something simple, like a number.
z is then a variable of that same type and is assigned a function definition to be returned.
If you assign the result of unknown to a variable, you can then invoke the returned function.
This implementation of a higher-order function is an interesting way of defining generators. An infinite sequence-like class would've achieved the same thing, but with more verbosity:
class MySequence {
    private var x = 0
    func unknown() -> Int {
        x += 1
        return x
    }
}
var seq = MySequence()
let unknown = seq.unknown
print(unknown()) // 1
print(unknown()) // 2
print(unknown()) // 3
// ... and so on
The main difference between the class and the anonymous closure is the storage for x: the closure captures it because it uses the variable within its body, while the class declares explicit storage for the property.
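For example, each call to unknown() builds a fresh closure with its own captured x, so separate counters stay independent (a small sketch reusing the unknown() above; variable names are made up):
let counterA = unknown()
let counterB = unknown()
print(counterA()) // 1
print(counterA()) // 2
print(counterB()) // 1 -- counterB captured its own, separate x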
Some fancy stuff can be done with higher-order functions, like a generator for the Fibonacci numbers:
func fibonacciSequence() -> () -> Int? {
    var a = 0, b = 1
    return { let c = a; a += b; b = c; return c }
}
let fibo = fibonacciSequence()
while let f = fibo() {
    // this will print "forever"
    // (in practice it will trap once a += b overflows Int)
    print(f)
}
In Chapel, we can set the default value of function formal arguments easily, for example,
proc test( a = 1, b = 2.0, c = "hi" ) {
  ...
}
and call the function using keyword arguments as well:
test( 10 ); // a = 10, b = 2.0, c = "hi"
test( b = 3.14 ); // a = 1, b = 3.14, c = "hi"
test( c = "yo" ); // a = 1, b = 2.0, c = "yo"
Here, I am wondering if it is possible to define a keyword argument that does not require a predefined default value. More specifically, I would like to write a function that can optionally receive an array depending on cases (e.g., to save intermediate data). Here, the only requirement is that I can check whether the actual argument is passed or not, and there is no need to give the default array value. I imagined something like
proc test( ..., optional d: [] real ) {
  if present( d ) then ...;
}
or
proc test( ..., d: [] real = None ) {
  if present( d ) then ...;
}
but was not able to find anything similar. At the moment, my workaround is to give some dummy default value and check its properties to determine whether an actual argument was passed.
proc test( arr = empty2Dreal ) { ... } // where "empty2Dreal" is a pre-defined global array
or
proc test( arr = reshape( [0.0], {1..1,1..1} ) ) { ... } // some dummy array
However, I am wondering whether there might be a more elegant(?) or idiomatic(?) approach...
Edit
As suggested in the comment, it is also convenient to overload several functions to get different interfaces, but at some point I guess I need to pass some "dummy" object to the final (full-fledged) routine and ask the latter to check whether the passed object is a "dummy" or not... An MWE is something like this:
const empty1Dint: [1..0] int;

proc test( x: real, arr: [] int )
{
  writeln("test() with 2 args");
  writeln(( x, arr ));
  // here, I need to check whether the passed object is
  // an actual array or not by some predefined rule
  if arr.size > 0 then writeln("got a non-empty array");
}

proc test( x: real )
{
  writeln("test() with 1 arg");
  test( x = x, arr = empty1Dint );
}

var work = [1,2,3,4,5];

test( x = 1.0 );
writeln();
test( x = 1.0, arr = work );
which gives
test() with 1 arg
test() with 2 args
(1.0, )
test() with 2 args
(1.0, 1 2 3 4 5)
got a non-empty array
The corresponding default-value version is
const empty1Dint: [1..0] int;

proc test( x: real, arr: [] int = empty1Dint )
{
  writeln("test() with 2 args");
  writeln(( x, arr ));
  if arr.size > 0 then writeln("got a non-empty array");
}

var work = [1,2,3,4,5];

test( x = 1.0 );
writeln();
test( x = 1.0, arr = work );
which gives
test() with 2 args
(1.0, )
test() with 2 args
(1.0, 1 2 3 4 5)
got a non-empty array
Although the above approach works for arrays, the rule needs to change depending on the type of object used. So I was wondering if there is some systematic way, e.g., passing a "null pointer" or some unique global object, to tell the final routine whether actual data has been passed (but, as noted above, the approach above does work for arrays).
Edit 2
Another approach may be simply to pass an additional flag telling the routine to use the passed array (then there is no need to think much about the nature of the default object, so it may be simpler overall...)
const empty1Dint: [1..0] int;

proc test( x: real, arr: [] int = empty1Dint, use_arr = false )
{
  writeln( "x= ", x );
  if use_arr {
    writeln("working with the passed array...");
    for i in 1..arr.size do arr[ i ] = i * 10;
  }
}

test( x = 1.0 );
writeln();

var work: [1..5] int;
test( x = 2.0, arr = work, use_arr = true );
writeln( "work = ", work );
Edit 3
Following Option 3 in the answer, here is a modified version of my code using _void and void:
proc test( x: real, arr: ?T = _void )
{
  writeln( "\ntest():" );
  writeln( "x = ", x );
  writeln( "arr = ", arr );
  writeln( "arr.type = ", arr.type:string );
  writeln( "T = ", T:string );
  if arr.type != void {
    writeln( "doing some checks" );
    assert( isArray( arr ) );
  }
  if arr.type != void {
    writeln( "writing arr" );
    for i in 1..arr.size do arr[ i ] = i * 10;
  }
}

// no optional arg
test( x = 1.0 );

// use an optional arg
var work: [1..5] int;
test( x = 2.0, arr = work );

writeln( "\nmain> work = ", work );
Result:
test():
x = 1.0
arr =
arr.type = void
T = void
test():
x = 2.0
arr = 0 0 0 0 0
arr.type = [domain(1,int(64),false)] int(64)
T = [domain(1,int(64),false)] int(64)
doing some checks
writing arr
main> work = 10 20 30 40 50
This answer discusses 3 strategies:
1. The strategy discussed in the edit of the question.
2. A strategy using a Box type.
3. A strategy using a generic function with a void default value.
My favorite of these options is Option 3.
Option 1
The proc test( x: real, arr: [] int = empty1Dint, use_arr = false ) strategy described in the question is reasonable, if a little verbose. The main drawback is that you'd need more overloads of test if you didn't want the call sites to have to pass use_arr=true or use_arr=false. Here is a simple program that does that:
proc test(optional, hasOptional:bool) {
  writeln("in test");
  writeln(" optional is ", optional);
  if hasOptional == false then
    writeln(" note: default was used for optional");
}

proc test(optional) {
  test(optional, hasOptional=true);
}

proc test() {
  var emptyArray:[1..0] int;
  test(emptyArray, hasOptional=false);
}
test();
test([1, 2, 3]);
Option 2
Another alternative is to create a class to store the optional argument data, and pass nil by default.
class Box {
  var contents;
}

proc makeArray() {
  var A:[1..2] int;
  return A;
}

proc emptyBox() {
  var A:[1..0] int;
  var ret: owned Box(A.type) = nil;
  return ret;
}

proc test( optional=emptyBox() ) {
  writeln("in test with optional=", optional);
}
test();
test(new owned Box(makeArray()));
Here the main tricky part is that the array types returned by makeArray() and emptyBox() have to match. It'd be possible to use a type alias to have them refer to the same array type, but how exactly that would fit in depends on your application. Another problem with this approach is that it causes the array to be copied when passing such an argument. And one has to think about where the Box will be destroyed: is test going to hang on to the array value (e.g. storing it in a data structure) or just use it temporarily? This is set by the type returned by emptyBox in my example.
It's probably reasonable for the standard library to gain such a Box type but it doesn't have one now.
Option 3
My favorite solution to this problem is a third strategy altogether.
Chapel includes a value of void type called _void. The key is the declaration proc test( optional:?t=_void ). Here test is a generic function: the syntax argument:?t indicates that the argument can have a varied type (which will be available as t within the function). This is necessary to get a generic argument that also has a default value (otherwise the argument would only have the type inferred from the default value).
If no optional argument is provided, it will instantiate with optional having type void. Which makes sense as a way to not pass something. Technically it's not the same as checking if the default value was provided, but I think a call site like test(optional=_void) is reasonably clear at communicating that the value of optional should be ignored (since it's void).
Anyway here is the code:
proc test( optional:?t=_void ) {
  writeln("in test");
  writeln(" optional is ", optional);
  if optional.type == void then
    writeln(" note: default was used for optional");
}
test();
test([1, 2, 3]);
I have two arrays and I need to preserve the order
let a = ["Icon1", "Icon2", "Icon3",]
let b = ["icon1.png", "icon2.png", "icon3.png",]
If I combine the two I get
let c = a + b
// [Icon1, Icon2, Icon3, icon1.png, icon2.png, icon3.png]
How do I get the result below?
[Icon1, icon1.png, Icon2, icon2.png, Icon3, icon3.png]
UPDATE 12/16/2015: Not sure why I didn't recognize that flatMap was a good candidate here. Perhaps it wasn't in the core library at the time? Anyway, the map/reduce can be replaced with one call to flatMap. Also, Zip2 has been renamed to Zip2Sequence. The new solution is:
let c = Zip2Sequence(a,b).flatMap{[$0, $1]}
And if you run this in the swift repl environment:
> let c = Zip2Sequence(a,b).flatMap{[$0, $1]}
c: [String] = 6 values {
[0] = "Icon1"
[1] = "icon1.png"
[2] = "Icon2"
[3] = "icon2.png"
[4] = "Icon3"
[5] = "icon3.png"
}
Original answer below:
Here's one way I whipped together for fun
let c = map(Zip2(a,b), { t in
    [t.0, t.1]
})
let d = c.reduce([], +)
or inlining
let c = map(Zip2(a,b), { t in
    [t.0, t.1]
}).reduce([], +)
The zipping seems unnecessary. I imagine there's a better way of doing that. But basically, I'm zipping them together, then converting each tuple into an array, and then flattening the array of arrays.
Finally, a little shorter:
let c = map(Zip2(a,b)){ [$0.0, $0.1] }.reduce([], +)
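As an aside, assuming a more recent Swift toolchain, Zip2Sequence(a, b) can also be spelled with the zip free function, which reads the same way:
let c = zip(a, b).flatMap { [$0, $1] }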
If both arrays are related to each other and have the same size, you can just append one element from each at a time in a single loop:
let a = ["Icon1", "Icon2", "Icon3"]
let b = ["icon1.png", "icon2.png", "icon3.png"]

var result:[String] = []
for index in 0..<a.count {
    result.append(a[index])
    result.append(b[index])
}
println(result) // "[Icon1, icon1.png, Icon2, icon2.png, Icon3, icon3.png]"
and just for fun, this is how it would look as a function:
func interleaveArrays<T>(array1:[T], _ array2:[T]) -> Array<T> {
    var result:[T] = []
    for index in 0..<array1.count {
        result.append(array1[index])
        result.append(array2[index])
    }
    return result
}
interleaveArrays(a, b) // ["Icon1", "icon1.png", "Icon2", "icon2.png", "Icon3", "icon3.png"]
Maybe it can help you.
let aPlusB = ["Icon1" : "icon1.png" , "Icon2" : "icon2.png" , "Icon3" : "icon3.png"]

for (aPlusBcode, aplusBName) in aPlusB {
    println("\(aPlusBcode),\(aplusBName)")
}
I've got a table like this which is giving me:
'(' expected near 't' at 'errorline'
which means there must be a syntax error, but I can't detect one. Do you have any idea what's wrong with the syntax?
t = {}
t[x] = {
  some = "data",
  foo = function() return "bar" end,
  elements = { -- the class is working 100%, have used it for several projects.
    mon = class:new(param),
    tue = class:new(param2),
    n = class:new(param3),
  },
  function t[x].elements.mon:clicked() -- <<< --- ERRORLINE
    --dosomething
  end,
}
Add the function t[x].elements.mon:clicked() after the table declaration, i.e. after the closing brace of the table:
t = {}
t[x] = {
  some = "data",
  foo = function() return "bar" end,
  elements = { -- the class is working 100%, have used it for several projects.
    mon = class:new(param),
    tue = class:new(param2),
    n = class:new(param3),
  }
}

t[x].elements.mon.clicked = function(self)
  --dosomething
end
EDIT:
As mentioned in the comments, function t[x].elements.mon:clicked() won't work.
The function declaration should be t[x].elements.mon.clicked = function(self).
Note that the first parameter of the function will be self if you call a dot-defined function using the colon syntax, i.e. if you call the function as c = t[x].elements.mon:clicked(a,b) then the function should be
t[x].elements.mon.clicked = function(self,a,b)
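In other words, the colon call syntax simply passes the object to the left of the colon as the first argument, so the two calls in this sketch do the same thing (a and b stand in for whatever arguments you pass):
-- colon call: the object is passed implicitly as self
c = t[x].elements.mon:clicked(a, b)
-- dot call: the object is passed explicitly
c = t[x].elements.mon.clicked(t[x].elements.mon, a, b)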
For the following code:
inc = -> value = (value ? 0) + 1
dec = -> value = (value ? 0) - 1
print = -> console.log value ? 0
How can you make this work properly, so that inc and dec close over value instead of creating separate function-local variables, in a way other than explicitly assigning something to value?
In plain JavaScript, you would just declare var value at the outer scope:
var value;
function inc() { value = (value || 0) + 1; };
function dec() { value = (value || 0) - 1; };
function print() { console.log(value || 0); };
What is the CoffeeScript way to do exactly the same thing?
In CoffeeScript, the way to introduce a local variable is to assign to the variable in the appropriate scope.
This is simply the way CoffeeScript was defined, and as such it is similar to Python or Ruby, which do not require a "variable declaration", except that CoffeeScript also allows forward access. A side effect is that one cannot shadow a lexical variable.
Just as with the placement of var in JavaScript, where this assignment is done (as long as it is in the correct scope) does not affect the scope of the variable.
Given
x = undefined
f = -> x
// JS
var f, x;
x = void 0;
f = function() {
  return x;
};
Given
f = -> x
x = undefined
// JS
var f, x;
f = function() {
  return x;
};
x = void 0;
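Applied to the inc/dec/print functions from the question, a single assignment to value in the outer scope is enough; all three functions then close over the same variable (a minimal sketch):
value = undefined

inc   = -> value = (value ? 0) + 1
dec   = -> value = (value ? 0) - 1
print = -> console.log value ? 0

inc()
inc()
dec()
print()   # logs 1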