Destructuring assignment and null coalescing - CoffeeScript

For a singular assignment in CoffeeScript, you can use the existential operator:
name = obj?.props?.name
This results in a rather lengthy block of code that checks that obj and props are defined:
name = typeof obj !== "undefined" && obj !== null
  ? (_ref2 = obj.props) != null
    ? _ref2.name
    : void 0
  : void 0;
Consider a more complex, destructuring assignment:
{name: name, emails: [primary], age: age} = Person.get(id)
If the object contains no emails property, that code would throw a TypeError. Is there any way to use the existential operator with these kinds of destructuring assignments?
This is the best alternative I have so far:
{name: name, emails: emails, age: age} = Person.get(id)
primary = emails?[0]

In ES6, you can do this:
const {name: name, emails: [primary] = [], age: age} = Person.get(id)
If Person.get(id) returns an empty object, primary will be undefined (no TypeError is thrown).
The forthcoming CoffeeScript 2 supports this as well, which you can try at http://coffeescript.org/v2/#try
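For example, the ES6 default carries over pretty much directly; a rough sketch in CoffeeScript 2 (untested, assuming the same hypothetical Person.get(id) as above):
{name: name, emails: [primary] = [], age: age} = Person.get(id)
# primary is undefined rather than a TypeError when the result has no emails property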

I filed an issue about this back in February. It seems like there is some support for it, but it hasn't been assigned or implemented yet.

Related

The element type 'int' can't be assigned to the map value type 'FieldValue' when trying to assign a new value

I have initial data which works fine.
var data = {field1: FieldValue.increment(1)};
And it is also fine when I add another field to the data.
data.addAll({field2: FieldValue.increment(1)});
But if I set the value to 0, it won't allow me to.
data.addAll({field3: 0});
It will give an error of:
The element type 'int' can't be assigned to the map value type 'FieldValue'.
I tried doing this but I still have the same issue.
data[field3] = 0;
How can I set field3 to a specific value?
Note:
This is the full code.
DocumentReference<Map<String, dynamic>> ref = db.collection('MyCollect').doc(uid);
var data = {field1: FieldValue.increment(1)};
data.addAll({field2: FieldValue.increment(1)});
data.addAll({field3: 0});
ref.set(data, SetOptions(merge: true));
For a better understanding:
You can use the var keyword when you don't want to explicitly give a type; the first value assigned decides the type, and from then on the variable will only accept that type in later operations/assignments.
On the other hand, the dynamic keyword also lets you skip declaring an explicit type, but every other type remains valid for it.
var a = "text";
a = "text2"; // ok
a = 1; // error: 'a' was inferred as String, so an int can't be assigned
dynamic b = "text";
b = "text2"; // ok
b = 1; // also ok
In your case, you're using the var keyword, so the first value assignment determines its type:
var data = {field1: FieldValue.increment(1)}; // takes the Map<String, FieldValue> type
data.addAll({field3: 0}); // 0 is int and FieldValue.increment(1) is FieldValue type, so it throws an error
However, you can fix the problem and let your data variable accept other element types by either using the dynamic keyword:
dynamic data = {field1: FieldValue.increment(1)}; // will accept it.
or by specifying that this is a Map whose values are dynamic:
Map<String, dynamic> data = {field1: FieldValue.increment(1)}; // will accept it also.
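Applied to the full snippet from the question, the fix might look roughly like this (a sketch, not tested; db, uid and the collection name are taken from the question, and the field names are written as string literals here):
DocumentReference<Map<String, dynamic>> ref = db.collection('MyCollect').doc(uid);
Map<String, dynamic> data = {'field1': FieldValue.increment(1)}; // values are dynamic now
data.addAll({'field2': FieldValue.increment(1)});
data.addAll({'field3': 0}); // an int is accepted alongside FieldValue
ref.set(data, SetOptions(merge: true));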
Hope this helps!
Check your Dart type. See: Difference between "var" and "dynamic" type in Dart?
A variable declared with var can't change its type, so check your code.
var data = {field1: FieldValue.increment(1)};
Here data's type is probably fixed as Map<String, FieldValue>. You can try the dynamic type instead.

Migrating to Dart null safety: best practice for migrating ternary operator null checks? Is a monadic approach too unconventional?

I'm migrating a code base to null safety, and it includes lots of code like this:
MyType convert(OtherType value) {
  return MyType(
    field1: value.field1,
    field2: value.field2 != null ? MyWrapper(value.field2) : null,
  );
}
Unfortunately, the ternary operator doesn't support type promotion with null checks, which means I have to add ! to assert that it's not null in order to make it compile under null safety:
MyType convert(OtherType value) {
  return MyType(
    field1: value.field1,
    field2: value.field2 != null ? MyWrapper(value.field2!) : null,
  );
}
This makes the code a bit unsafe; one could easily imagine a scenario where the null check is modified, or some code is copied and pasted into a situation where that ! causes a crash.
So my question is: is there a specific best practice for handling this situation more safely? Rewriting the code to take advantage of flow analysis and type promotion directly is unwieldy:
MyType convert(OtherType value) {
  final rawField2 = value.field2;
  final MyWrapper? field2;
  if (rawField2 != null) {
    field2 = MyWrapper(rawField2);
  } else {
    field2 = null;
  }
  return MyType(
    field1: value.field1,
    field2: field2,
  );
}
As someone who thinks a lot in terms of functional programming, my instinct is to think about nullable types as a monad, and define map accordingly:
extension NullMap<T> on T? {
  U? map<U>(U Function(T) operation) {
    final value = this;
    if (value == null) {
      return null;
    } else {
      return operation(value);
    }
  }
}
Then this situation could be handled like this:
MyType convert(OtherType value) {
  return MyType(
    field1: value.field1,
    field2: value.field2.map((f) => MyWrapper(f)),
  );
}
This seems like a good approach to maintain both safety and concision. However, I've searched long and hard online and I can't find anyone else using this approach in Dart. There are a few examples of packages that define an Optional monad that seem to predate null safety, but I can't find any examples of Dart developers defining map directly on nullable types. Is there a major "gotcha" here that I'm missing? Is there another approach that is both ergonomic and more conventional in Dart?
Unfortunately, the ternary operator doesn't support type promotion with null checks
This premise is not correct. The ternary operator does do type promotion. However, non-local variables cannot be promoted. Also see:
https://dart.dev/tools/non-promotion-reasons
"The operator can’t be unconditionally invoked because the receiver can be null" error after migrating to Dart null-safety.
Therefore you should just introduce a local variable (which you seem to have already realized in your if-else and NullMap examples):
MyType convert(OtherType value) {
  final field2 = value.field2;
  return MyType(
    field1: value.field1,
    field2: field2 != null ? MyWrapper(field2) : null,
  );
}

How to find an element in an array of records in PureScript

Hi everyone.
I'd like to find an element in an array of records in PureScript, but since I'm not familiar with PureScript, I can't solve it.
I have an array banks which contains bank records.
This is the type of a bank record:
type Bank = {
  id :: Int,
  name :: String
}
I want to get a bank in banks whose id is the same as a given search id.
I tried the following:
find (_.id == searchId) banks
but I'm getting this error:
Could not match type
    Int
  with type
    Function
      { id :: t0
      | t1
      }
Please help me with this simple issue.
The expression _.id is a function that takes a Bank and returns its id (a slight oversimplification, but good enough for now).
To illustrate:
getId = _.id
bank = { id: 42, name: "my bank" }
getId bank == 42
And then you take that function and try to compare it with searchId, which I'm assuming is a number.
Well, you can't compare functions with numbers, and that's what the compiler is telling you: "Could not match type Int with type Function"
The function find expects to get as its first argument a function that takes a Bank and returns a Boolean. There are many ways to produce such a function, but the most obvious one would be with a lambda abstraction:
\bank -> bank.id == searchId
So to plug it into your code:
find (\bank -> bank.id == searchId) banks
You can change your code like this:
find (\{id} -> id == searchId) banks
That way you get the matching record.
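Putting the pieces together, a minimal (untested) sketch might look like this; findBank is a made-up name, and note that find returns a Maybe Bank, since there may be no matching record:
module Main where

import Prelude
import Data.Foldable (find)
import Data.Maybe (Maybe)

-- the Bank type from the question
type Bank =
  { id :: Int
  , name :: String
  }

findBank :: Int -> Array Bank -> Maybe Bank
findBank searchId banks = find (\bank -> bank.id == searchId) banks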

Does Mongodb have a special value that's ignored in queries?

My web application runs on MongoDB, using python and pyMongo. I get this scenario a lot - code that reads something like:
from pymongo import Connection
users = Connection().db.users
def findUsers(firstName=None, lastName=None, age=None):
    criteria = {}
    if firstName:
        criteria['firstName'] = firstName
    if lastName:
        criteria['lastName'] = lastName
    if age:
        criteria['age'] = age
    query = users.find(criteria)
    return query
I find it kind of messy how I need an if statement for every optional value to figure out whether it needs to go into the search criteria. If only there were a special query value that mongo ignored in queries. Then my code could look like this:
def findUsers(firstName=<ignored by mongo>, lastName=<ignored by mongo>, age=<ignored by mongo>):
    query = users.find({'firstName': firstName, 'lastName': lastName, 'age': age})
    return query
Now isn't that so much cleaner than before, especially if you have many more optional parameters? Any parameters that aren't specified default to something mongo just ignores. Is there any way to do this? Or at least something more concise than what I currently have?
You're probably better off filtering out your empty values in Python. You don't need a separate if statement for each of your values. The local variables can be accessed via locals(), so you can build the criteria dictionary by filtering out all keys whose value is None.
def findUsers(firstName=None, lastName=None, age=None):
    loc = locals()  # capture the parameters before any other locals are defined
    criteria = {k: loc[k] for k in loc if loc[k] is not None}
    query = users.find(criteria)
    return query
Note that this syntax uses dictionary comprehensions, introduced in Python 2.7. If you're running an earlier version of Python, you need to replace that one line with
criteria = dict((k, loc[k]) for k in loc if loc[k] is not None)
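A quick illustrative check of what ends up in the query (the example values are made up):
findUsers(firstName='Alice')           # criteria == {'firstName': 'Alice'}
findUsers(lastName='Smith', age=30)    # criteria == {'lastName': 'Smith', 'age': 30}
findUsers()                            # criteria == {}, which matches all users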

Filter an array using NSPredicate and obtain a new object composed of some elements in the query

I've got an array like this:
Word array (
    {
        translation = (
            {
                name = Roma;
                lang = it;
            },
            {
                name = Rome;
                lang = en;
            }
        );
        type = provenance;
        value = RMU;
    },
    {
        translation = (
            {
                name = "Milano";
                lang = it;
            },
            {
                name = "Milan";
                lang = en;
            }
        );
        type = destination;
        value = MIL;
    }
)
The idea is to filter it using an NSPredicate and receive an array of dictionaries based on the lang key. I'd like to get something like this, made by filtering for lang == it:
Word array (
    {
        name = Roma;
        lang = it;
        type = provenance;
        value = RMU;
    },
    {
        name = "Milano";
        lang = it;
        type = destination;
        value = MIL;
    }
)
I can't simplify the data because it comes from a "JSON" service.
I've tried different predicates using SUBQUERY, but none of them works; the documentation about SUBQUERY is pretty poor and I'm missing something. Probably the problem is that I'd like to receive an object that is really different from the source.
Of course I'm able to obtain that structure by enumerating; I'm just wondering if there is a shorter solution.
This answer from Dave DeLong (link to SUBQUERY explanation) gave me a lot of hints about SUBQUERY, but I'm not able to find a solution to my problem.
Can someone give me a hint about this?
You can't do this with a predicate. (Well, you could, but it would be stupidly complex, difficult to understand and maintain, and in the end it would be easier to write the code yourself)
NSPredicate is for extracting a subset of data from an existing set. It only* does filtering, because a predicate is simply a statement that evaluates to true or false. If you have a collection and filter it with a predicate, then what happens is the collection starts iterating over its elements and asks the predicate: "does this pass your test?" "does this pass your test?" "does this pass your test?"... Every time that the predicate answers "yes this passes my test", the collection adds that object to a new collection. It is that new collection that is returned from the filter method.
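For instance, a typical (purely illustrative) use on the array above can only pick out whole top-level dictionaries, never reshape them; words stands in for the asker's array:
// keeps the unmodified dictionaries whose type is "destination" -- nothing more
NSPredicate *byType = [NSPredicate predicateWithFormat:@"type == %@", @"destination"];
NSArray *destinations = [words filteredArrayUsingPredicate:byType];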
THUS:
NSPredicate does not (easily) allow for merging two sets of data (which is what you're asking for). It is possible (because you can do pretty much anything with a FUNCTION() expression), but it makes for inherently unreadable predicates.
SO:
Don't use NSPredicate to merge your dataset. Do it yourself.
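For what it's worth, the straightforward enumeration the asker already alluded to might look roughly like this (a sketch against the structure shown above; words and results are made-up names, and only the lang == it case is handled):
NSMutableArray *results = [NSMutableArray array];
for (NSDictionary *word in words) {
    for (NSDictionary *translation in word[@"translation"]) {
        if ([translation[@"lang"] isEqualToString:@"it"]) {
            [results addObject:@{ @"name"  : translation[@"name"],
                                  @"lang"  : translation[@"lang"],
                                  @"type"  : word[@"type"],
                                  @"value" : word[@"value"] }];
        }
    }
}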