Scala - Project Euler #8

I'm currently learning Scala and I'm trying to solve some of the Project Euler challenges with it.
I'm having trouble getting the right answer to the 8th challenge, and I really don't know where my bug is.
object Product {
  def main(args: Array[String]): Unit = {
    var s = "7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450";
    var len = 13;
    var bestSet = s.substring(0, len);
    var currentSet = "";
    var i = 0;
    var compare = 0;
    for (i <- 1 until s.length - len) {
      currentSet = s.substring(i, i + len);
      compare = compareBlocks(bestSet, currentSet);
      if (compare == 1) bestSet = currentSet;
    }
    println(bestSet);
    var result = 1L;
    var c = ' ';
    for (c <- bestSet.toCharArray) {
      result = result * c.asDigit.toLong;
    }
    println(result);
  }

  def compareBlocks(block1: String, block2: String): Int = {
    var i = 0;
    var v1 = 0;
    var v2 = 0;
    if ((block1 contains "0") && !(block2 contains "0")) return 1;
    if (!(block1 contains "0") && (block2 contains "0")) return -1;
    if ((block1 contains "0") && (block2 contains "0")) return 0;
    var chars = block1.toCharArray;
    for (i <- 0 until chars.length) {
      v1 = v1 + chars(i).asDigit;
    }
    chars = block2.toCharArray;
    for (i <- 0 until chars.length) {
      v2 = v2 + chars(i).asDigit;
    }
    if (v1 < v2) return 1;
    if (v2 < v1) return -1;
    return 0;
  }
}
My result is:
9753697817977 <- digit sequence
8821658160 <- product

Using Project Euler to challenge yourself and learn a new language is a pretty good idea, but just coming up with the correct answer doesn't mean that you're using the language well.
As for the actual bug: compareBlocks ranks windows by their digit sum, and the window with the largest digit sum is not necessarily the window with the largest digit product, which is what the problem asks for.
It's also obvious from your code that you have yet to learn idiomatic Scala. Would it surprise you to learn that the desired product can be calculated from the 1000-digit input string with just one line of code? That one line of code will (see the sketch after this list):
turn each input character into a digit (Int)
slide a fixed size (13-digit) window over all the digits
multiply all the digits within each window
select the maximum from all those products
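Putting those four steps together, one possible shape of that line is (a sketch of the approach, not necessarily any particular site's exact solution):
s.map(_.asDigit.toLong).sliding(13).map(_.product).max  // digits -> 13-wide windows -> product per window -> max
The .toLong matters: thirteen 9s multiply out to well past Int.MaxValue.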
There's a handy little web site that has solved Euler challenges in Scala. I recommend that every time you solve an Euler problem, compare your code with what's found on that site. (But be careful. It's too easy to look ahead at solutions that you haven't tackled yet.)

Related

Optimize "range-join" in plain scala (not Spark!)

I have two ordered sequences: one (large) is a range of positions, the other (small) is a sequence of attributes defined on position_von/position_bis intervals, which I'd like to join.
Currently, for each element of positions I traverse the other sequence, which is not optimal:
def interpolateCurveOnPos(position: Seq[Double], curveAttributes: Seq[CurveSegment]) = {
  position.map { pos =>
    // range join
    val cs = curveAttributes.find(c => pos >= c.position_von && pos < c.position_bis).get
    // interpolate curve attribute
    cs.curve_von + (pos - cs.position_von) * (cs.curve_bis - cs.curve_von) / (cs.position_bis - cs.position_von)
  }
}
What I've tried:
Since the index at which the matching CurveSegment is found always increases, I've introduced some state variables to narrow the search for the correct entry:
def interpolateCurveOnPos(position: Seq[Double], curveAttributes: Seq[CurveSegment]) = {
  var idxSave = 0
  var csSave: CurveSegment = curveAttributes.head
  position.map { pos =>
    // range join, skipping the entries already matched
    val cs = curveAttributes.drop(idxSave).find(c => pos >= c.position_von && pos < c.position_bis).get
    if (cs != csSave) {
      csSave = cs
      idxSave = idxSave + 1
    }
    // interpolate
    cs.curve_von + (pos - cs.position_von) * (cs.curve_bis - cs.curve_von) / (cs.position_bis - cs.position_von)
  }
}
I wonder if there is a more elegant way to do it?
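Since both sequences are ordered, one option is to walk them together with a single buffered iterator, so each segment is visited at most once. A minimal sketch, with a hypothetical CurveSegment shape inferred from the field names above, and assuming every position falls inside some segment (as the .get above already does):

// hypothetical shape, inferred from the fields used above
case class CurveSegment(position_von: Double, position_bis: Double, curve_von: Double, curve_bis: Double)

def interpolateCurveOnPos(position: Seq[Double], curveAttributes: Seq[CurveSegment]): Seq[Double] = {
  val segs = curveAttributes.iterator.buffered
  position.map { pos =>
    // discard segments that end at or before pos; head then peeks at the match without consuming it
    while (segs.hasNext && pos >= segs.head.position_bis) segs.next()
    val cs = segs.head
    cs.curve_von + (pos - cs.position_von) * (cs.curve_bis - cs.curve_von) / (cs.position_bis - cs.position_von)
  }
}

This is O(positions + segments) rather than O(positions × segments), and the interpolation expression stays untouched.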

How to count digits in BigDecimal?

I’m dealing with BigDecimal in Java and I need to make 2 checks against BigDecimal fields in my DTO:
Number of digits of the whole part (before the point) < 15
Total number of digits < 32, including the scale (zeros after the point)
What is the best way to implement this? I really want to avoid toBigInteger().toString() and .toString().
I think this will work.
BigDecimal d = new BigDecimal("921229392299229.2922929292920000");
int fractionCount = d.scale();
System.out.println(fractionCount);
// floor(log10) + 1 counts whole-part digits correctly; ceil(log10) miscounts exact powers of ten
int wholeCount = (int) Math.floor(Math.log10(d.longValue())) + 1;
System.out.println(wholeCount);
I did some testing of the above method vs using indexOf and subtracting lengths of strings. The above seems to be significantly faster, if my testing methodology is reasonable. Here is how I tested it:
Random r = new Random(29);
int nRuns = 1_000_000;
// create a list of 1 million BigDecimals
List<BigDecimal> testData = new ArrayList<>();
for (int j = 0; j < nRuns; j++) {
    String wholePart = r.ints(r.nextInt(15) + 1, 0, 10).mapToObj(String::valueOf).collect(Collectors.joining());
    String fractionalPart = r.ints(r.nextInt(31) + 1, 0, 10).mapToObj(String::valueOf).collect(Collectors.joining());
    BigDecimal d = new BigDecimal(wholePart + "." + fractionalPart);
    testData.add(d);
}
long start = System.nanoTime();
// Using math
for (BigDecimal d : testData) {
    int fractionCount = d.scale();
    int wholeCount = (int) Math.floor(Math.log10(d.longValue())) + 1;
}
long time = System.nanoTime() - start;
System.out.println(time / 1_000_000.);
start = System.nanoTime();
// Using strings
for (BigDecimal d : testData) {
    String sd = d.toPlainString();
    int n = sd.indexOf(".");
    int m = sd.length() - n - 1;
}
time = System.nanoTime() - start;
System.out.println(time / 1_000_000.);
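For what it's worth, the same checks can be done with no logarithms and no strings using precision() and scale(); a sketch in Scala (this thread's language), assuming plain decimals with a non-negative scale like the ones above:

val d = BigDecimal("921229392299229.2922929292920000")
// precision counts all significant digits, scale counts the digits after the point
val wholeDigits = math.max(d.precision - d.scale, 1) // values like 0.25 still have one whole digit, the 0
val totalDigits = wholeDigits + d.scale
val ok = wholeDigits < 15 && totalDigits < 32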

Int() doesn't convert from String to Optional Integer (Swift)

I'm new to programming and started with Swift. The first issue I came across is the following:
I have 4 variables
var a = "345"
var b = "30.6"
var c = "74hf2"
var d = "5"
I need to compute the sum of the integers (if a string is not an integer, the conversion yields nil):
if Int(a) != nil {
    var aNum = Int(ar)!
}
if Int (b) != nil {
    var bNum = Int (b)!
}
and so on..
As far as I understand, Int() should convert each element into an optional integer.
Then I should use forced unwrapping to convert the Int? to Int, and only then can I use it for my purposes. But instead, when I compute the sum of my variables, the compiler concatenates them as strings.
var sum = aNum + bNum + cNum + dNum
Output:
34530.674hf25
Why didn't my variables, which are declared as strings and then converted into optional integers with Int(), work?
Your code has typos that make it hard to tell what you are actually trying to do:
Assuming your 2nd variable should be b, as below:
var a = "345"
var b = "30.6"
var c = "74hf2"
var d = "5"
// Then you can use code like this:
var sum = 0
if let aVal = Int(a) { sum += aVal }
if let bVal = Int(b) { sum += bVal }
if let cVal = Int(c) { sum += cVal }
if let dVal = Int(d) { sum += dVal }
print(sum)
That prints 350 since only 345 and 5 are valid Int values.
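For comparison, the same guard-and-sum pattern in Scala (toIntOption returns None exactly where Swift's Int(_:) returns nil):

val sum = Seq("345", "30.6", "74hf2", "5").flatMap(_.toIntOption).sum
println(sum) // 350, since only "345" and "5" parse as integers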

Functional version of a typical nested while loop

I hope this question may please functional programming lovers. Could I ask for a way to translate the following fragment of code into a pure functional implementation in Scala, with a good balance between readability and execution speed?
Description: for each element in a sequence, produce a sub-sequence containing the elements that come after the current element (including itself) with a distance smaller than a given threshold. Once the threshold is crossed, the remaining elements need not be processed.
def getGroupsOfElements(input: Seq[Element]): Seq[Seq[Element]] = {
  val maxDistance = 10 // put any number you may
  var outerSequence = Seq.empty[Seq[Element]]
  for (index <- 0 until input.length) {
    var anotherIndex = index + 1
    var innerSequence = Seq(input(index))
    // let's assume a separate function computes the distance
    while (anotherIndex < input.length && (input(index) - input(anotherIndex)) < maxDistance) {
      innerSequence = innerSequence ++ Seq(input(anotherIndex))
      anotherIndex = anotherIndex + 1
    }
    outerSequence = outerSequence ++ Seq(innerSequence)
  }
  outerSequence
}
You know, this would be a ton easier if you added a description of what you're trying to accomplish along with the code.
Anyway, here's something that might get close to what you want.
def getGroupsOfElements(input: Seq[Element]): Seq[Seq[Element]] =
  input.tails.filter(_.nonEmpty)
    .map(x => x.takeWhile(y => distance(x.head, y) < maxDistance))
    .toSeq
(The filter drops the empty tail that tails always emits last; otherwise you get one extra empty group.)
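A quick self-contained check of that version, with hypothetical stand-ins: Element is plain Double and distance is just subtraction:

val maxDistance = 10
def distance(a: Double, b: Double): Double = b - a // hypothetical distance function

def getGroupsOfElements(input: Seq[Double]): Seq[Seq[Double]] =
  input.tails.filter(_.nonEmpty)
    .map(x => x.takeWhile(y => distance(x.head, y) < maxDistance))
    .toSeq

getGroupsOfElements(Seq(1.0, 3.0, 9.0, 20.0, 25.0)).foreach(println)
// List(1.0, 3.0, 9.0)
// List(3.0, 9.0)
// List(9.0)
// List(20.0, 25.0)
// List(25.0)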

Calculate IRR (Internal Rate Return) and NPV programmatically in Objective-C

I am developing a financial app and need an IRR calculation (a built-in function of Excel); I found a great tutorial in C here and an answer in C# here.
I implemented the C code from the link above, and it gives a correct result when the IRR is positive, but it does not return a negative value when it should. In Excel, =IRR(values, guessrate) returns negative IRRs as well for some inputs.
I have also looked at the code in the C# link above; it seems to follow a sound procedure, reports errors, and presumably returns negative IRRs too, the same as Excel. But I am not familiar with C#, so I have not been able to port that code to Objective-C or C.
Here is the C code from the link above, which I have implemented:
#define LOW_RATE 0.01
#define HIGH_RATE 0.5
#define MAX_ITERATION 1000
#define PRECISION_REQ 0.00000001

double computeIRR(double cf[], int numOfFlows)
{
    int i = 0, j = 0;
    double m = 0.0;
    double old = 0.00;
    double new = 0.00;
    double oldguessRate = LOW_RATE;
    double newguessRate = LOW_RATE;
    double guessRate = LOW_RATE;
    double lowGuessRate = LOW_RATE;
    double highGuessRate = HIGH_RATE;
    double npv = 0.0;
    double denom = 0.0;

    for (i = 0; i < MAX_ITERATION; i++)
    {
        npv = 0.00;
        for (j = 0; j < numOfFlows; j++)
        {
            denom = pow((1 + guessRate), j);
            npv = npv + (cf[j] / denom);
        }
        /* Stop checking once the required precision is achieved */
        if ((npv > 0) && (npv < PRECISION_REQ))
            break;
        if (old == 0)
            old = npv;
        else
            old = new;
        new = npv;
        if (i > 0)
        {
            if (old < new)
            {
                if (old < 0 && new < 0)
                    highGuessRate = newguessRate;
                else
                    lowGuessRate = newguessRate;
            }
            else
            {
                if (old > 0 && new > 0)
                    lowGuessRate = newguessRate;
                else
                    highGuessRate = newguessRate;
            }
        }
        oldguessRate = guessRate;
        guessRate = (lowGuessRate + highGuessRate) / 2;
        newguessRate = guessRate;
    }
    return guessRate;
}
I have attached the results for some values where Excel and the C code above disagree:
Values: 1 = -18.5, 2 = -18.5, 3 = -18.5, 4 = -18.5, 5 = -18.5, 6 = 32.0
Guess rate: 0.1
Output of Excel: -33.5%
Output of C code: 0.010 (i.e. 1.0%)
Since LOW_RATE and HIGH_RATE are both positive, the bisection can never produce a negative rate. You have to change:
#define LOW_RATE 0.01
to, for example,
#define LOW_RATE -0.5
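For readers following the Scala thread above, here is a minimal sketch of the same fix: a plain sign-based bisection with a lower bound below zero. The bounds, iteration cap, and precision are assumptions mirroring the #defines above:

// Net present value of the cash flows at a given rate
def npv(cf: Array[Double], rate: Double): Double =
  cf.zipWithIndex.map { case (c, t) => c / math.pow(1 + rate, t) }.sum

// Bisection on the rate; low starts below zero so negative IRRs are reachable
def computeIRR(cf: Array[Double], low: Double = -0.99, high: Double = 0.5,
               maxIter: Int = 1000, precision: Double = 1e-8): Double = {
  var lo = low
  var hi = high
  var guess = (lo + hi) / 2
  var i = 0
  while (i < maxIter && math.abs(npv(cf, guess)) >= precision) {
    // For a conventional cash flow, NPV falls as the rate rises,
    // so a positive NPV means the root lies above the current guess
    if (npv(cf, guess) > 0) lo = guess else hi = guess
    guess = (lo + hi) / 2
    i += 1
  }
  guess
}

println(computeIRR(Array(-18.5, -18.5, -18.5, -18.5, -18.5, 32.0))) // roughly -0.335, matching Excel's -33.5%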