Interpolation algorithms for Swift

I need help with an interpolation algorithm for my app.
I have a couple of arrays:
(1) my measurements array (days versus weights)
(2) a reference array with minimum weights
(3) a reference array with maximum weights
(4) a reference array with average weights
First I have to fill the gaps in my measurement array (weights for the missing days).
After that, I want to predict the coming days (day 45 up to 120) using the reference data (arrays 2 through 4). The assumption is that the measured weights will eventually reach the reference level, but may take a couple of days longer.
I included a line graph of what the final results should look like.
Can this be done with Swift or should I use a framework like Accelerate or Upsurge?
My measurements:
[ (0.0 , 25.4) , (5.0 , 30.3) , (6.0 , 33.5) , (9.0 , 51.2) , (12.0 , 83.1) , (16.0 , 143.0) , (21.0 , 238.6) , (24.0 , 311.7) , (25.0 , 322.8) , (29.0 , 460.9) , (31.0 , 520.4) , (35.0 , 642.2) , (36.0 , 694.0) , (43.0 , 988.3) , (44.0 , 1018.4) ]
Reference average:
[ (0.0 , 20.0) , (1.0 , 22.5), (2.0 , 27.0), (3.0 , 32.0), (4.0 , 37.2), (5.0 , 44.1), (6.0 , 68.4), (7.0 , 76.7), (8.0 , 101.4), (9.0 , 117.7), (10.0 , 148.8), (11.0 , 172.6), (12.0 , 212.6), (13.0 , 238.4), (14.0 , 272.3), (15.0 , 304.8), (16.0 , 335.6), (17.0 , 369.8), (18.0 , 405.3), (19.0 , 444.3), (20.0 , 476.3), (21.0 , 509.1), (22.0 , 546.5), (23.0 , 583.7), (24.0 , 620.8), (25.0 , 657.0), (26.0 , 698.2), (27.0 , 735.3), (28.0 , 769.7), (29.0 , 810.3), (30.0 , 848.2), (31.0 , 885.0), (32.0 , 921.2), (33.0 , 956.4), (34.0 , 984.2), (35.0 , 1012.1), (36.0 , 1038.8), (37.0 , 1069.8), (38.0 , 1096.4), (39.0 , 1119.1), (40.0 , 1145.5), (41.0 , 1162.1), (42.0 , 1179.6), (43.0 , 1204.0), (44.0 , 1222.8), (45.0 , 1240.6), (46.0 , 1255.7), (47.0 , 1269.6), (48.0 , 1277.5), (49.0 , 1290.5), (50.0 , 1300.6), (51.0 , 1312.4), (52.0 , 1317.3), (53.0 , 1324.6), (54.0 , 1332.1), (55.0 , 1339.6), (56.0 , 1340.2), (57.0 , 1346.8), (58.0 , 1347.4), (59.0 , 1349.6), (60.0 , 1348.0), (61.0 , 1348.4), (62.0 , 1345.4), (63.0 , 1340.2), (64.0 , 1333.3), (65.0 , 1329.0), (66.0 , 1325.3), (67.0 , 1324.8), (68.0 , 1313.7), (69.0 , 1301.1), (70.0 , 1297.5), (71.0 , 1292.2), (72.0 , 1287.1), (73.0 , 1277.5), (74.0 , 1271.9), (75.0 , 1262.2), (76.0 , 1250.3), (77.0 , 1242.9), (78.0 , 1225.5), (79.0 , 1220.5), (80.0 , 1200.8), (81.0 , 1184.4), (82.0 , 1178.4), (83.0 , 1163.1), (84.0 , 1149.5), (85.0 , 1135.4), (86.0 , 1117.2), (87.0 , 1109.1), (88.0 , 1092.1), (89.0 , 1088.8), (90.0 , 1079.4), (91.0 , 1067.8), (92.0 , 1065.0), (93.0 , 1060.7), (94.0 , 1058.9), (95.0 , 1055.5), (96.0 , 1055.1), (97.0 , 1050.1), (98.0 , 1051.4), (99.0 , 1041.4), (100.0 , 1050.9), (101.0 , 1051.6), (102.0 , 1048.1), (103.0 , 1057.2), (104.0 , 1060.5), (105.0 , 1062.4), (106.0 , 1069.4), (107.0 , 1072.0), (108.0 , 1077.0), (109.0 , 1068.1), (110.0 , 1077.7), (111.0 , 1071.0), (112.0 , 1060.0), (113.0 , 1058.9), (114.0 , 1050.6), (115.0 , 1047.2), (116.0 , 1052.2), (117.0 , 1051.8), (118.0 , 1024.1), (119.0 , 1041.6), (120.0 , 1048.4) ]
The reference minimum and maximum arrays are also available.
I tried to fill the gaps with the following code:
typealias Weights = (Double, Double)
var myArray1: [Weights] = [ (0.0 , 25.4) , (5.0 , 30.3) , (6.0 , 33.5) , (9.0 , 51.2) , (12.0 , 83.1) , (16.0 , 143.0) , (21.0 , 238.6) , (24.0 , 311.7) , (25.0 , 322.8) , (29.0 , 460.9) , (31.0 , 520.4) , (35.0 , 642.2) , (36.0 , 694.0) , (43.0 , 988.3) , (44.0 , 1018.4) ]
var myArray2: [Weights] = []
for i in 0..<45 { myArray2.append( (Double(i), 0.00)) }
let mergedArrays = myArray2.map { calculated -> Weights in
    if let measured = myArray1.first(where: { $0.0 == calculated.0 }) {
        return measured
    } else {
        // interpolate weight??
        return calculated
    }
}
For the calculations, it would be something like:
(1) 30.3 - 25.4 = 4.9
(2) 4.9 / 5 days = 0.98 per day
so:
[ (0.0 , 25.4) , (1.0 , 26.4) , (2.0 , 27.4) , (3.0 , 28.3) , (4.0 , 29.3) , (5.0 , 30.3) ]
(3) move on to the next measured weight and repeat for every remaining gap (every day still holding 0.00)
But how do I implement those calculations?
And then after that... the predictions...
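Plain Swift is enough for this; no framework is needed. Below is a minimal sketch of the gap-filling step, using linear interpolation between the two nearest measurements. It assumes the measurements array is sorted by day; the helper name `fillGaps` is mine, not from the original post.

```swift
typealias Weights = (Double, Double)

/// Fill in the missing days 0..<days by linear interpolation between the
/// two nearest measured neighbours. Assumes `measured` is sorted by day.
func fillGaps(_ measured: [Weights], days: Int) -> [Weights] {
    var result: [Weights] = []
    for day in 0..<days {
        let x = Double(day)
        if let exact = measured.first(where: { $0.0 == x }) {
            result.append(exact)
        } else if let before = measured.last(where: { $0.0 < x }),
                  let after = measured.first(where: { $0.0 > x }) {
            // w = w0 + (w1 - w0) * (x - x0) / (x1 - x0)
            let t = (x - before.0) / (after.0 - before.0)
            result.append((x, before.1 + (after.1 - before.1) * t))
        }
    }
    return result
}

let filled = fillGaps([(0.0, 25.4), (5.0, 30.3), (6.0, 33.5)], days: 7)
// day 1 becomes 25.4 + 0.98 = 26.38, matching the hand calculation above
```

For the prediction step (days 45 to 120), one simple idea, not taken from the original post, is to scale the reference-average curve by the ratio between your last measurement and the reference average at that day, so the predicted curve joins your data smoothly and flattens where the reference flattens.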

Related

Parameterized Postgres Query with IN clause

I have a query param of type array to collect IDs to query from a postgres table. I think I have built everything out appropriately, but the query fails with ERROR: syntax error at or near "$1"
The logs are:
SELECT
professional_leads.first_name
, professional_leads.last_name
, professional_leads.email
, professional_leads.phone_number
, professional_leads.professional_id as proId
, professional_leads.id as proLeadId
, professional_leads.user_id
, professional_leads.interview_offered_at
, professional_leads.sms_enabled
, professional_leads.email_enabled
, professional_leads.resume_pdf_object_key
, professional_leads.created_at
, professional_leads.updated_at
, professional_leads.reschedule_count
, professional_leads.experience_level
, professional_leads.waitlisted_reason
, professional_leads.resume_state
, professional_leads.interview_state
, professional_leads.state
, professional_leads.profession_id
, professional_leads.indicated_specialty_codes
, professional_leads.other_specialties
, professional_leads.professional_id
, professional_leads.license_received_on
, professional_leads.license_expires_on
, professional_leads.region_id
, professional_leads.marketing_channel
, professional_leads.newsletter
, professional_leads.referral_code
, professional_leads.asset_proof_type
, professional_leads.verification_state
, professional_leads.duplicate
FROM
professional_leads
WHERE
id IN :clause
DEBUG 2022-10-01 21:25:45,346 [[MuleRuntime].uber.15: [api-database-sapi].Copy_of_get-Flow.BLOCKING #7f0aedd] [processor: Copy_of_get-Flow/processors/0/processors/0; event: 232aba80-41f1-11ed-b583-f02f4b10a50d] org.mule.db.commons.shaded.internal.domain.executor.AbstractExecutor: Executing query:
SELECT
professional_leads.first_name
, professional_leads.last_name
, professional_leads.email
, professional_leads.phone_number
, professional_leads.professional_id as proId
, professional_leads.id as proLeadId
, professional_leads.user_id
, professional_leads.interview_offered_at
, professional_leads.sms_enabled
, professional_leads.email_enabled
, professional_leads.resume_pdf_object_key
, professional_leads.created_at
, professional_leads.updated_at
, professional_leads.reschedule_count
, professional_leads.experience_level
, professional_leads.waitlisted_reason
, professional_leads.resume_state
, professional_leads.interview_state
, professional_leads.state
, professional_leads.profession_id
, professional_leads.indicated_specialty_codes
, professional_leads.other_specialties
, professional_leads.professional_id
, professional_leads.license_received_on
, professional_leads.license_expires_on
, professional_leads.region_id
, professional_leads.marketing_channel
, professional_leads.newsletter
, professional_leads.referral_code
, professional_leads.asset_proof_type
, professional_leads.verification_state
, professional_leads.duplicate
FROM
professional_leads
WHERE
id IN ?
Parameters:
clause = ('6a379873-93f9-4b16-8752-168aa92c8846','a234570e-a739-4bcc-847a-a875f5202398')
I flatten the array into a var ids:
"(" ++ (attributes.queryParams.*id map "'$'" joinBy ",") ++ ")"
I have my query as follows:
%dw 2.0
output text
---
"SELECT
professional_leads.first_name
, professional_leads.last_name
, professional_leads.email
, professional_leads.phone_number
, professional_leads.professional_id as proId
, professional_leads.id as proLeadId
, professional_leads.user_id
, professional_leads.interview_offered_at
, professional_leads.sms_enabled
, professional_leads.email_enabled
, professional_leads.resume_pdf_object_key
, professional_leads.created_at
, professional_leads.updated_at
, professional_leads.reschedule_count
, professional_leads.experience_level
, professional_leads.waitlisted_reason
, professional_leads.resume_state
, professional_leads.interview_state
, professional_leads.state
, professional_leads.profession_id
, professional_leads.indicated_specialty_codes
, professional_leads.other_specialties
, professional_leads.professional_id
, professional_leads.license_received_on
, professional_leads.license_expires_on
, professional_leads.region_id
, professional_leads.marketing_channel
, professional_leads.newsletter
, professional_leads.referral_code
, professional_leads.asset_proof_type
, professional_leads.verification_state
, professional_leads.duplicate
FROM
professional_leads
WHERE
id IN :clause"
My input parameters in the call are:
{
"clause": vars.ids
}
Grabbing the query and using the bind variable verbatim, the query executes fine.
Is there a limitation with IN and bind variables?
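A JDBC bind variable holds exactly one scalar value, so the whole "('uuid1','uuid2')" string is bound as a single parameter, which Postgres rejects with the syntax error at "$1"; pasting the value verbatim works only because the list then becomes part of the SQL text. A common workaround (a sketch, not verified against your Mule/DataWeave version; the names `placeholders` and `inputParams` are mine) is to generate one named parameter per id plus the matching input-parameters map:

```dataweave
%dw 2.0
output application/java
var ids = attributes.queryParams.*id
---
{
    // splice this into the SQL text as:  WHERE id IN ( ... )
    placeholders: ids map ((id, idx) -> ":id$(idx)") joinBy ", ",
    // pass this object as the query's input parameters
    inputParams: ids map ((id, idx) -> { "id$(idx)": id })
                     reduce ((item, acc) -> acc ++ item)
}
```

Alternatively, if your connector supports array-typed parameters, `WHERE id = ANY(:clause)` with a single array bind avoids building the list dynamically.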

Disable Stock / Index Selection in tradingview

I want to disable the stock selection in tradingview.
Add header_symbol_search to disabled_features:
disabled_features: ["use_localstorage_for_settings"
, "link_to_tradingview"
, "volume_force_overlay"
, "header_interval_dialog_button"
//, "show_dialog_on_snapshot_ready"
, "study_templates"
, "chart_property_page_trading"
, "chart_crosshair_menu"
, "hide_last_na_study_output"
, "header_symbol_search"
],
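For context, disabled_features belongs in the Charting Library widget constructor. A minimal sketch (the container id, symbol, and datafeed are placeholders for your own setup):

```javascript
const widget = new TradingView.widget({
    container_id: "tv_chart_container", // your chart's element id (assumed)
    symbol: "AAPL",
    interval: "D",
    datafeed: myDatafeed,               // your own datafeed implementation
    library_path: "/charting_library/",
    disabled_features: [
        "header_symbol_search",  // hides the symbol search box
        "symbol_search_hot_key", // also disables the search keyboard shortcut
    ],
});
```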

How to convert com.mongodb.BasicDBList to something useful in Scala?

So far, I am able to retrieve data from MongoDB using mongo-hadoop-core 1.4.2. The data I want to manipulate are values inside arrays inside an embedded document inside each document in the collection I am querying, and I need these values as Doubles. The data retrieved from collections has type RDD[(Object, org.bson.BSONObject)], which means each document is a tuple of types (Object, org.bson.BSONObject).
Whenever I want to get an embedded document, I do (working on spark-shell 1.5.1):
import com.mongodb.{BasicDBObject, BasicDBList} // classes I am using here.
// 'documents' already taken from collection.
scala> documents
res4: org.apache.spark.rdd.RDD[(Object, org.bson.BSONObject)] = NewHadoopRDD[0] at newAPIHadoopRDD at <console>:32
// getting one document.
scala> val doc = documents.take(1)(0)
doc: (Object, org.bson.BSONObject) = ( ... _id fields ... , ... lots of fields ...)
// getting an embedded document from the tuple's second element.
scala> val samples = doc._2.get("samp") match {case x: BasicDBObject => x}
samples: com.mongodb.BasicDBObject = (... some fields ...)
// getting an embedded document.
scala> val latency = samples.get("latency") match {case x: BasicDBObject => x}
latency: com.mongodb.BasicDBObject = { "raw" : [ 9.71 , 8.77 , 10.16 , 9.49 , 8.54 , 10.29 , 9.55 , 9.16 , 10.78 , 10.31 , 9.54 , 10.69 , 10.33 , 9.58 , 9.07 , 9.72 , 9.48 , 8.72 , 10.59 , 9.81 , 9.31 , 10.64 , 9.87 , 9.29 , 10.38 , 9.64 , 8.86 , 10.84 , 10.06 , 9.29 , 8.45 , 9.08 , 7.55 , 9.75 , 9.05 , 10.38 , 9.64 , 8.25 , 10.27 , 9.54 , 8.52 , 10.26 , 9.53 , 7.87 , 9.76 , 9.02 , 10.27 , 7.93 , 9.73 , 9 , 10.07 , 9.35 , 7.66 , 13.68 , 11.92 , 14.72 , 14 , 12.55 , 11.77 , 11.02 , 11.59 , 10.87 , 10.4 , 9.13 , 10.28 , 9.55 , 10.43 , 8.33 , 9.66 , 8.93 , 8.05 , 11.26 , 10.53 , 9.81 , 10.2 , 9.42 , 7.73 , 9.76 , 9.04 , 8.29 , 9.34 , 7.21 , 10.05 , 9.32 , 10.28 , 8.59 , 10.15 , 9.53 , 7.88 , 9.9 , 9.15 , 13.96 , 13.19 , 11 , 13.6 , 13.01 , 12.17 , 11.39 , 10.64 , 9.9] , "xtrf" : { "...
// getting a bson array.
scala> val array = latency.get("raw") match {case x: BasicDBList => x}
array: com.mongodb.BasicDBList = [ 9.71 , 8.77 , 10.16 , 9.49 , 8.54 , 10.29 , 9.55 , 9.16 , 10.78 , 10.31 , 9.54 , 10.69 , 10.33 , 9.58 , 9.07 , 9.72 , 9.48 , 8.72 , 10.59 , 9.81 , 9.31 , 10.64 , 9.87 , 9.29 , 10.38 , 9.64 , 8.86 , 10.84 , 10.06 , 9.29 , 8.45 , 9.08 , 7.55 , 9.75 , 9.05 , 10.38 , 9.64 , 8.25 , 10.27 , 9.54 , 8.52 , 10.26 , 9.53 , 7.87 , 9.76 , 9.02 , 10.27 , 7.93 , 9.73 , 9 , 10.07 , 9.35 , 7.66 , 13.68 , 11.92 , 14.72 , 14 , 12.55 , 11.77 , 11.02 , 11.59 , 10.87 , 10.4 , 9.13 , 10.28 , 9.55 , 10.43 , 8.33 , 9.66 , 8.93 , 8.05 , 11.26 , 10.53 , 9.81 , 10.2 , 9.42 , 7.73 , 9.76 , 9.04 , 8.29 , 9.34 , 7.21 , 10.05 , 9.32 , 10.28 , 8.59 , 10.15 , 9.53 , 7.88 , 9.9 , 9.15 , 13.96 , 13.19 , 11 , 13.6 , 13.01 , 12.17 , 11.39 , 10.64 , 9.9]
Converting type Object to BasicDBObject is quite inconvenient, but I need to do it in order to use get(key: String). I could also use .asInstanceOf[BasicDBObject] instead of match {case x: BasicDBObject => x}, but is there any better way?
Getting specific types, like Double, Int, String and Date, is straightforward using methods inherited from the BasicBSONObject class.
As for BasicDBList, there's a get(key: String) method, inherited from BasicBSONList, that returns an Object which can be cast to Double only via an .asInstanceOf[Double] call. There's also a toArray() method, inherited from java.util.ArrayList, that returns an array of Objects which I can't cast to Double, even with .map(_.asInstanceOf[Double]), as I'm doing here:
scala> val arrayOfDoubles = array.toArray.map(_.asInstanceOf[Double])
java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Double
at scala.runtime.BoxesRunTime.unboxToDouble(BoxesRunTime.java:119)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:37)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:37)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:37)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:42)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:44)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:46)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:48)
at $iwC$$iwC$$iwC.<init>(<console>:50)
at $iwC$$iwC.<init>(<console>:52)
at $iwC.<init>(<console>:54)
at <init>(<console>:56)
at .<init>(<console>:60)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1340)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
But sometimes it works: in some documents this cast succeeds, while in others it fails with the error message above. Could this be a problem in the data structure MongoDB hands to Spark, but only in these documents? Smaller arrays, with about 30 values, seem to always work.
my solution so far is this inefficient conversion:
scala> val arrayOfDoubles = array.toArray.map(_.toString.toDouble)
arrayOfDoubles: Array[Double] = Array(9.71, 8.77, 10.16, 9.49, 8.54, 10.29, 9.55, 9.16, 10.78, 10.31, 9.54, 10.69, 10.33, 9.58, 9.07, 9.72, 9.48, 8.72, 10.59, 9.81, 9.31, 10.64, 9.87, 9.29, 10.38, 9.64, 8.86, 10.84, 10.06, 9.29, 8.45, 9.08, 7.55, 9.75, 9.05, 10.38, 9.64, 8.25, 10.27, 9.54, 8.52, 10.26, 9.53, 7.87, 9.76, 9.02, 10.27, 7.93, 9.73, 9.0, 10.07, 9.35, 7.66, 13.68, 11.92, 14.72, 14.0, 12.55, 11.77, 11.02, 11.59, 10.87, 10.4, 9.13, 10.28, 9.55, 10.43, 8.33, 9.66, 8.93, 8.05, 11.26, 10.53, 9.81, 10.2, 9.42, 7.73, 9.76, 9.04, 8.29, 9.34, 7.21, 10.05, 9.32, 10.28, 8.59, 10.15, 9.53, 7.88, 9.9, 9.15, 13.96, 13.19, 11.0, 13.6, 13.01, 12.17, 11.39, 10.64, 9.9)
Am I missing something here, or are things really this inconvenient? Why do all these methods have to return Object or BSONObject? Is there any way to overcome this problem? Where does this java.lang.Integer come from if there are no integers in the array being cast to Double?
First of all, I'd advise you to have a look at Casbah if you haven't yet.
To answer your question: if you import Java conversions:
import scala.collection.JavaConversions._
You should be able to map directly over the collection without the toArray call. If your array contains either Doubles or Integers, you can cast its elements to Number and take the double value, like so:
array.map(_.asInstanceOf[Number].doubleValue)
I don't know what your data source looks like, but given that you occasionally get an Integer where you expect a Double, it's reasonable to assume that round decimal numbers (e.g. 11.0) are stored as integers (e.g. 11).
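A self-contained sketch of that cast, using a plain java.util.ArrayList[Object] as a stand-in for BasicDBList (both are java.util.Lists), so it runs without the MongoDB driver:

```scala
import scala.collection.JavaConversions._

// BSON may hand back a whole number such as 9.0 as a java.lang.Integer,
// so the list can mix Integer and Double boxes; java.lang.Number covers both.
val raw = new java.util.ArrayList[Object]()
raw.add(Double.box(9.71))
raw.add(Integer.valueOf(9)) // a whole number that came back as an Integer

// JavaConversions wraps the Java list as a Scala Buffer, so map works directly.
val doubles = raw.map(_.asInstanceOf[Number].doubleValue)
// a Buffer containing 9.71 and 9.0
```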

Creating a table in pdf file using PDF::Report and PDF::Report::Table

One of my applications needs to generate a PDF file containing report details in the form of a table.
To create the PDF file and write a table into it, I am using the CPAN modules PDF::Report and PDF::Report::Table.
Please find below the code sample:
#!/usr/bin/perl
use strict;
use warnings;
use PDF::Report;
use PDF::Report::Table;
my $pdf = PDF::Report->new( PageSize => 'A4', PageOrientation => 'Portrait' );
my $table = PDF::Report::Table->new( $pdf );
my $data = [
['A1' , 'B1' , 'C1'],
['A2' , 'B2' , 'C2'],
['A3' , 'B3' , 'C3'],
['A4' , 'B4' , 'C4'],
['A5' , 'B5' , 'C5'],
['A6' , 'B6' , 'C6'],
['A7' , 'B7' , 'C7'],
['A8' , 'B8' , 'C8'],
['A9' , 'B9' , 'C9'],
['A10' , 'B10' , 'C10'],
['A11' , 'B11' , 'C11'],
['A12' , 'B12' , 'C12'],
['A13' , 'B13' , 'C13'],
['A14' , 'B14' , 'C14'],
['A15' , 'B15' , 'C15'],
['A16' , 'B16' , 'C16'],
['A17' , 'B17' , 'C17'],
['A18' , 'B18' , 'C18'],
['A19' , 'B19' , 'C19'],
['A20' , 'B20' , 'C20'],
['A21' , 'B21' , 'C21'],
['A22' , 'B22' , 'C22'],
['A23' , 'B23' , 'C23'],
['A24' , 'B24' , 'C24'],
['A25' , 'B25' , 'C25'],
['A26' , 'B26' , 'C26'],
['A27' , 'B27' , 'C27'],
['A28' , 'B28' , 'C28'],
['A29' , 'B29' , 'C29'],
['A30' , 'B30' , 'C30'],
['A31' , 'B31' , 'C31'],
['A32' , 'B32' , 'C32'],
['A33' , 'B33' , 'C33'],
['A34' , 'B34' , 'C34'],
['A35' , 'B35' , 'C35'],
['A36' , 'B36' , 'C36'],
['A37' , 'B37' , 'C37'],
['A38' , 'B38' , 'C38'],
['A39' , 'B39' , 'C39'],
['A40' , 'B40' , 'C40'],
['A41' , 'B41' , 'C41'],
];
$pdf->openpage;
$pdf->setAddTextPos( 50, 50 );
$table->addTable( $data, 400 ); # 400 is table width
$pdf->saveAs( 'table.pdf' );
Result: a PDF is generated with 2 pages, but at the page break one row of data is missing: the row [A37, B37, C37].
(Note: I wasn't able to attach a screenshot of the result.)
Please help me fix this issue.
Thanks in advance for all your help.
Well, when I run your code I get:
commandPrompt > ./makepdf.pl
Useless use of greediness modifier '?' in regex; marked by <-- HERE in m/(\S{20}? <-- HERE )(?=\S)/ at /usr/local/share/perl/5.20.2/PDF/Table.pm line 386.
!!! Warning: !!! Incorrect Table Geometry! Setting bottom margin to end of sheet!
at /usr/local/share/perl/5.20.2/PDF/Report/Table.pm line 94.
!!! Warning: !!! Incorrect Table Geometry! Setting bottom margin to end of sheet!
at /usr/local/share/perl/5.20.2/PDF/Report/Table.pm line 94.
I would think that
Setting bottom margin to end of sheet!
and
!!! Warning: !!! Incorrect Table Geometry!
would have something to do with it.
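One way to sidestep the geometry warning is to paginate the data yourself, so the module never has to break the table across pages. This sketch uses only the calls from the original script; the 35-rows-per-page figure is an assumption you would tune to your row height and margins:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use PDF::Report;
use PDF::Report::Table;

my $pdf   = PDF::Report->new( PageSize => 'A4', PageOrientation => 'Portrait' );
my $table = PDF::Report::Table->new( $pdf );

# Same shape of data as in the question, built programmatically.
my @data = map { [ "A$_", "B$_", "C$_" ] } 1 .. 41;

my $rows_per_page = 35;    # assumed; adjust for your layout
while (@data) {
    # Take one page's worth of rows and draw it on a fresh page.
    my @chunk = splice @data, 0, $rows_per_page;
    $pdf->openpage;
    $pdf->setAddTextPos( 50, 50 );
    $table->addTable( \@chunk, 400 );    # 400 is the table width
}
$pdf->saveAs( 'table.pdf' );
```

Since each chunk is guaranteed to fit, no row should be swallowed at the page boundary.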

Scala list contains vs array contains

Out of interest, why does this work in Scala:
val exceptions = List[Char]('+')
assertTrue(exceptions.contains('+'))
but this not
val exceptions = new Array[Char]('+')
assertTrue(exceptions.contains('+'))
Because you wrote new Array[Char]('+'). Done that way, the argument is the size of the array, and '+' is, rather unfortunately, converted to an Int to give that size. The returned array is full of Char(0).
You should just write Array[Char]('+'); '+' would then be the single element in the array.
Try in the REPL, that makes the answer obvious:
scala> val exceptions = new Array[Char]('+')
exceptions: Array[Char] = Array( , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , )
'+' undergoes Char-to-Int promotion (its code point is 43), so you get a 43-element array.
scala> val exceptions = Array[Char]('+')
exceptions: Array[Char] = Array(+)
scala> exceptions.contains('+')
res3: Boolean = true
This is the equivalent of the List case.
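The difference is easy to check directly; this sketch relies on '+' having code point 43:

```scala
// new Array[Char](n) allocates n NUL characters; '+' is promoted to Int 43.
val sized   = new Array[Char]('+') // 43-element array of '\u0000'
val literal = Array[Char]('+')     // one-element array containing '+'

assert(sized.length == 43)
assert(!sized.contains('+'))
assert(literal.contains('+'))
```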