Which property of Scala's type-system makes it Turing-complete? [closed]

Scala uses a type-system based on System Fω, which is normally said to be strongly normalizing. Strong normalization implies that a system is not Turing-complete.
Nevertheless, Scala's type-system is Turing-complete.
Which changes/additions/modifications make Scala's type-system Turing-complete, compared to the formal system it is based on?

This isn't a comprehensive answer, but the reason is that you can define recursive types.
I've asked similar questions before (about what a non-Turing-complete language might look like). The answers were of the form: a Turing-complete language must support either arbitrary looping or recursion. Scala's type system supports the latter, as the sketch below illustrates.
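A minimal sketch of what that type-level recursion can look like, assuming Scala 2; the names Nat, Zero, Succ and Sum are illustrative, not from any library. The compiler is made to compute 1 + 1 by recursive implicit resolution:

    // Peano naturals encoded as types (illustrative names, not a library API)
    sealed trait Nat
    sealed trait Zero extends Nat
    sealed trait Succ[N <: Nat] extends Nat

    // Sum[A, B] { type Out } witnesses that A + B = Out at the type level
    trait Sum[A <: Nat, B <: Nat] { type Out <: Nat }

    object Sum {
      type Aux[A <: Nat, B <: Nat, C <: Nat] = Sum[A, B] { type Out = C }

      // Base case: 0 + B = B
      implicit def sumZero[B <: Nat]: Aux[Zero, B, B] =
        new Sum[Zero, B] { type Out = B }

      // Recursive case: Succ(A) + B = Succ(A + B) -- the compiler recurses here
      implicit def sumSucc[A <: Nat, B <: Nat](
          implicit tail: Sum[A, B]): Aux[Succ[A], B, Succ[tail.Out]] =
        new Sum[Succ[A], B] { type Out = Succ[tail.Out] }
    }

    object Demo {
      type One = Succ[Zero]
      type Two = Succ[One]
      // Forces the compiler to compute 1 + 1 = 2 during type checking
      implicitly[Sum.Aux[One, One, Two]]
    }

Each implicit search step peels one Succ off the first operand, so the compiler performs unbounded recursion at compile time; this kind of recursion is exactly what a strongly normalizing calculus rules out.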


What does .collect() do? [closed]

I understand that .collect(pf), where pf is a partial function, is equivalent to .filter(pf.isDefinedAt _).map(pf). What I don't understand is what just .collect() does. Can anyone explain this?
collect without parameters fetches all the data stored in an RDD to the driver. The RDD docs say:
Return an array that contains all of the elements in this RDD.
Note
This method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.
There is no connection to the version that takes a PartialFunction whatsoever; the two are used for completely different things.
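A sketch contrasting the two, assuming an active SparkContext named sc (the data is made up for illustration):

    import org.apache.spark.rdd.RDD

    val rdd: RDD[Int] = sc.parallelize(1 to 10)

    // RDD.collect() -- an action: ships every element to the driver as an Array
    val all: Array[Int] = rdd.collect()

    // RDD.collect(pf) -- a transformation: filter-and-map in one distributed pass
    val evensDoubled: RDD[Int] = rdd.collect { case n if n % 2 == 0 => n * 2 }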

Difference between ReduceByKey and CombineByKey in Spark [closed]

Is there any difference between reduceByKey and combineByKey when it comes to performance in Spark? Any help on this is appreciated.
reduceByKey internally calls combineByKey, so the basic task-execution path is the same for both.
Choose combineByKey over reduceByKey when the input type and the output type are not expected to be the same; combineByKey then carries the extra overhead of converting one type to the other.
If no type conversion is involved, there is no difference at all. A sketch of the type-changing case follows.
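For illustration, a sketch (assuming an active SparkContext named sc) of averaging values per key, where the accumulator type (Int, Int) differs from the value type Int:

    import org.apache.spark.rdd.RDD

    val pairs: RDD[(String, Int)] = sc.parallelize(Seq(("a", 1), ("a", 3), ("b", 5)))

    // combineByKey: accumulator is (sum, count), a different type from the Int values
    val sumCount: RDD[(String, (Int, Int))] = pairs.combineByKey(
      (v: Int) => (v, 1),                                          // createCombiner
      (acc: (Int, Int), v: Int) => (acc._1 + v, acc._2 + 1),       // mergeValue
      (a: (Int, Int), b: (Int, Int)) => (a._1 + b._1, a._2 + b._2) // mergeCombiners
    )
    val avgByKey: RDD[(String, Double)] =
      sumCount.mapValues { case (sum, n) => sum.toDouble / n }

    // When input and output types match, reduceByKey is the simpler equivalent
    val sumByKey: RDD[(String, Int)] = pairs.reduceByKey(_ + _)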
For more detail, see the following links:
http://bytepadding.com/big-data/spark/reducebykey-vs-combinebykey
http://bytepadding.com/big-data/spark/groupby-vs-reducebykey
http://bytepadding.com/big-data/spark/combine-by-key-to-find-max

Which of the most widely used OO metrics cannot be used for Scala? [closed]

Which of the most widely used OO metrics cannot be used for Scala? Why? Which of them would you not expect to work the same way in Scala as they do in Java or C++? Which ones are safe to use in Scala?
See common Java metrics at http://agile.csc.ncsu.edu/SEMaterials/OOMetrics.htm
I would think that coupling-, encapsulation-, or cohesion-related metrics, for example, would be just fine for Scala. This is, however, just an educated guess, so it would be interesting to hear the opinions of developers who have real field experience using OO metrics with Scala.
Most of them are valid, but you may find that the actual numbers vary wildly.
Cyclomatic complexity, for example, should be significantly lower when writing in a functional style (see the sketch after this answer).
Coupling/inheritance metrics are possibly going to be higher, depending on how they're measured. The cake pattern definitely drives up the amount of inheritance.
Implicits will doubtless drive up some numbers as well, and there are no metrics I know of that specifically recognise type classes or the loose-coupling nature of implicits.
You're also going to see much lower metrics for the number of hidden methods/attributes, driven in large part by the use of immutable objects.
So yes... you can take most of the measurements. I'm just not sure if you can interpret them in any meaningful way.
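To make the cyclomatic-complexity point concrete, here is a hypothetical pair of equivalent functions; most counters would score the imperative one higher because of its loop and branch, though exact numbers depend on the tool:

    // Imperative style: a loop plus a branch add decision points
    def sumOfEvenSquaresImperative(xs: List[Int]): Int = {
      var total = 0
      for (x <- xs) {       // decision point: loop
        if (x % 2 == 0) {   // decision point: branch
          total += x * x
        }
      }
      total
    }

    // Functional style: the same logic as a single branch-free pipeline
    def sumOfEvenSquares(xs: List[Int]): Int =
      xs.filter(_ % 2 == 0).map(x => x * x).sum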

Use cases for hstore vs json datatypes in postgresql [closed]

In Postgresql, the hstore and json datatypes seem to have very similar use cases. When would you choose to use one vs. the other? Initial thoughts:
You can nest with json; you can't with hstore
Functions for parsing json won't be available until 9.3
The json type is just a string. There are no built-in functions to parse it. The only thing to be gained by using it is the validity checking.
Edit for those downvoting: this was written when 9.3 still didn't exist. It is correct for 9.2. Also, the question was different; check the edit history.

How does an Antivirus with thousands of signatures scan a file in a very short time? [closed]

What speed-optimization techniques do antiviruses use today to scan a file, given that they have to check all the signatures and also run the behavioral scan?
I'm not an antivirus programmer, but I think the scan engine scans through a file searching for known patterns. The more patterns it has to check for, the longer a scan will take.
Optimization may be similar to database optimization, with pattern indexing; a toy sketch of the idea follows.
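To illustrate what such indexing might look like, here is a toy sketch (hypothetical, not taken from any real AV engine) that buckets signatures by their first two bytes, so each file offset is only compared against a handful of candidates rather than every signature:

    // Toy signature index: not a real AV engine, just the indexing idea
    final case class Signature(name: String, bytes: Array[Byte])

    // Bucket signatures by their first two bytes
    def buildIndex(sigs: Seq[Signature]): Map[(Byte, Byte), Seq[Signature]] =
      sigs.filter(_.bytes.length >= 2).groupBy(s => (s.bytes(0), s.bytes(1)))

    // At each offset, only the bucket for the next two bytes is checked in full
    def scan(data: Array[Byte], index: Map[(Byte, Byte), Seq[Signature]]): Seq[String] =
      (0 until data.length - 1).flatMap { i =>
        index.getOrElse((data(i), data(i + 1)), Nil).collect {
          case sig if data.length - i >= sig.bytes.length &&
                      sig.bytes.indices.forall(j => data(i + j) == sig.bytes(j)) =>
            sig.name
        }
      }.distinct

Real engines use far more sophisticated multi-pattern algorithms (Aho-Corasick-style automata, for instance), but the principle is the same: avoid testing every signature at every position.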