Why does SwiftUI limit `@ViewBuilder` parameters to 10? - swift

@available(iOS 13.0, macOS 10.15, tvOS 13.0, watchOS 6.0, *)
extension ViewBuilder {
    public static func buildBlock<C0, C1, C2, C3, C4, C5, C6, C7, C8, C9>(_ c0: C0, _ c1: C1, _ c2: C2, _ c3: C3, _ c4: C4, _ c5: C5, _ c6: C6, _ c7: C7, _ c8: C8, _ c9: C9) -> TupleView<(C0, C1, C2, C3, C4, C5, C6, C7, C8, C9)> where C0 : View, C1 : View, C2 : View, C3 : View, C4 : View, C5 : View, C6 : View, C7 : View, C8 : View, C9 : View
}
In the source code, the largest `ViewBuilder.buildBlock` overload is limited to 10 parameters.

Sadly, variadic generics are not yet supported in Swift, so it was not possible to declare this function with variadic input arguments (which would let you pass in any number of subviews).
Most probably 10 seemed like a good rule of thumb, since if you need more subviews you can always break your view builder up into several smaller functions.

Related

Finding recurring pattern between two characters, selecting a specific part from each match (excluding recurring parts)

I constructed this overly specific pattern:
(?:{)([\w\s]+),\s*([\w\s]+)?,\s*([\w\s]+),?\s*([\w\s]+)(?:})
To find, from this piece of text:
{a1, b2, c3, d4} hello there {a1, b2, c3, d4}
The following matches and groups:
match1: {a1, b2, c3, d4}
group1: a1
group2: b2
group3: c3
group4: d4
match2: {a1, b2, c3, d4}
group1: a1
group2: b2
group3: c3
group4: d4
I've tried to simplify the pattern, but it doesn't yield the same result. Instead, it only captures the d4's.
(?:{)(([\w\s]+)(?:,?\s*))+(?:})
match1: {a1, b2, c3, d4}
group1: d4
group2: d4
match2: {a1, b2, c3, d4}
group1: d4
group2: d4
I'm encountering this problem more often, and I repeatedly find myself creating far too specific patterns.
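This is expected: in most regex engines a repeated capturing group only keeps the text of its last repetition, which is why the simplified pattern reports nothing but d4. A common workaround is to capture each whole {...} block with a single group and split its contents outside the regex; here is a minimal illustrative sketch in Scala (not from the original post):
val text = "{a1, b2, c3, d4} hello there {a1, b2, c3, d4}"
val block = """\{([^}]*)\}""".r   // one capture group per {...} block

for (m <- block.findAllMatchIn(text)) {
  val parts = m.group(1).split(",").map(_.trim)   // Array(a1, b2, c3, d4)
  println(parts.mkString(" | "))
}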

Convert multiple columns into a column of map on Spark Dataframe using Scala

I have a dataframe with a variable number of columns, like Col1, Col2, Col3.
I need to combine Col1 and Col2 into one column of data type map, using the code below.
val df_converted = df.withColumn("ConvertedCols", map(lit("Col1"), col("Col1"), lit("Col2"), col("Col2")))
But how can I do it for all columns when I don't know the number and names of the columns?
One approach would be to expand the column list of the DataFrame via flatMap into a Seq(lit(c1), col(c1), lit(c2), col(c2), ...) and apply Spark's map as shown below:
import org.apache.spark.sql.functions._
import spark.implicits._
val df = Seq(
  ("a", "b", "c", "d"),
  ("e", "f", "g", "h")
).toDF("c1", "c2", "c3", "c4")
val kvCols = df.columns.flatMap(c => Seq(lit(c), col(c)))
df.withColumn("ConvertedCols", map(kvCols: _*)).show(false)
// +---+---+---+---+---------------------------------------+
// |c1 |c2 |c3 |c4 |ConvertedCols |
// +---+---+---+---+---------------------------------------+
// |a |b |c |d |Map(c1 -> a, c2 -> b, c3 -> c, c4 -> d)|
// |e |f |g |h |Map(c1 -> e, c2 -> f, c3 -> g, c4 -> h)|
// +---+---+---+---+---------------------------------------+
Another way is to use from_json and to_json to get a map type column:
val df2 = df.withColumn(
  "ConvertedCols",
  from_json(to_json(struct("*")), lit("map<string,string>"))
)
df2.show(false)
+---+---+---+---+------------------------------------+
|c1 |c2 |c3 |c4 |ConvertedCols |
+---+---+---+---+------------------------------------+
|a |b |c |d |[c1 -> a, c2 -> b, c3 -> c, c4 -> d]|
|e |f |g |h |[c1 -> e, c2 -> f, c3 -> g, c4 -> h]|
+---+---+---+---+------------------------------------+
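A third variant, assuming Spark 2.4 or later (map_from_arrays is not available before that), is to zip an array of the column names with an array of the column values. This is an illustrative sketch reusing the df defined above, not part of the original answer:
// Build two parallel arrays, the keys (column names as literals) and the values, then zip them into a map.
val keys = array(df.columns.map(lit): _*)
val vals = array(df.columns.map(col): _*)
df.withColumn("ConvertedCols", map_from_arrays(keys, vals)).show(false)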

Split multiple fields or columns of a single row and create multiple rows using Scala

I have a dataframe with 4 fields, as mentioned below:
Field1 , Field2 , Field3 , Field4
I have values in the fields as below :
A1 , B1 , C1 , D1
A2 , B2,B3 , C2,C3 , D2,D3
A1 , B4,B5,B6 , C4,C5,C6 , D4,D5,D6
I have to convert it into the below format:
A1 , B1 , C1 , D1
A2 , B2 , C2 , D2
A2 , B3 , C3 , D3
A1 , B4 , C4 , D4
A1 , B5 , C5 , D5
A1 , B6 , C6 , D6
Basically, I have to split the comma-separated values in multiple columns and form new rows based on the values in the same order.
You can consider all of them to be of type String. Can you suggest a way to do this splitting and form new rows based on the new values?
I could already see a question similar to this one:
How to flatmap a nested Dataframe in Spark
But this question is different, as in my case I have to split multiple columns and the values should not repeat.
You can convert the DataFrame to a Dataset[(String, Seq[String])] and flatMap:
import scala.util.Try
import org.apache.spark.sql.functions._   // array, explode, udf
import spark.implicits._

val df = Seq(
  ("A1", "B1", "C1", "D1"),
  ("A2", "B2,B3", "C2,C3", "D2,D3"),
  ("A1", "B4,B5,B6", "C4,C5,C6", "D4,D5,D6")
).toDF("x1", "x2", "x3", "x4")
// A simple sequence of expressions which allows us to flatten the results
val exprs = (0 until df.columns.size).map(i => $"value".getItem(i))

df.select($"x1", array($"x2", $"x3", $"x4")).as[(String, Seq[String])].flatMap {
  case (x1, xs) =>
    // Split each field on commas, transpose, and prepend the (unsplit) first column value
    Try(xs.map(_.split(",")).transpose).map(_.map(x1 +: _)).getOrElse(Seq())
}.toDF.select(exprs: _*)
// +--------+--------+--------+--------+
// |value[0]|value[1]|value[2]|value[3]|
// +--------+--------+--------+--------+
// | A1| B1| C1| D1|
// | A2| B2| C2| D2|
// | A2| B3| C3| D3|
// | A1| B4| C4| D4|
// | A1| B5| C5| D5|
// | A1| B6| C6| D6|
// +--------+--------+--------+--------+
or use a UDF:
val splitRow = udf((xs: Seq[String]) =>
  Try(xs.map(_.split(",")).transpose).toOption)

// Same as before but we exclude the first column
val exprs = (0 until df.columns.size - 1).map(i => $"xs".getItem(i))

df
  .withColumn("xs", explode(splitRow(array($"x2", $"x3", $"x4"))))
  .select($"x1" +: exprs: _*)
You can use posexplode to solve this quickly. Refer to http://allabouthadoop.net/hive-lateral-view-explode-vs-posexplode/
So your code will be like below:
select
  Field1,
  f2 as Field2,
  f3 as Field3,
  f4 as Field4
from temp_table
lateral view posexplode(split(Field2, ',')) t2 as p2, f2
lateral view posexplode(split(Field3, ',')) t3 as p3, f3
lateral view posexplode(split(Field4, ',')) t4 as p4, f4
where
  p2 = p3 and p3 = p4
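For reference, the same position-based trick can be sketched directly on the DataFrame API. This is an illustrative sketch, not part of the original answers; it reuses the df with columns x1..x4 and the imports from the first answer, and chains selects because Spark allows only one generator per select:
val result = df
  .select($"x1", posexplode(split($"x2", ",")).as(Seq("p2", "v2")), $"x3", $"x4")
  .select($"x1", $"p2", $"v2", posexplode(split($"x3", ",")).as(Seq("p3", "v3")), $"x4")
  .select($"x1", $"p2", $"v2", $"p3", $"v3", posexplode(split($"x4", ",")).as(Seq("p4", "v4")))
  .where($"p2" === $"p3" && $"p3" === $"p4")   // keep only values from matching positions
  .select($"x1", $"v2", $"v3", $"v4")

result.show()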

Change unix time to arrange the data in diurnal and hourly pattern

I have an input file with 9 columns, where the 6th and 7th columns hold the start and end Unix times. (The overall span of the timeline is 1290895200 to 1291154399, i.e. 3 days.) I am looking for a Perl script which can take the Unix time and give the hour of the day, counting the start date as Day 1 and increasing accordingly. Could this be done by checking the timestamps on each row and generating an output file with the format below (with the rest of the column values remaining unchanged, only the Unix timestamps converted to Hour and Day)?
Hour Day Col3 Col4 Col5 Col6 .....
0 1 .......
1 1 .......
upto
23 3 ....
Assuming no spaces within columns, the Perl '-a' switch (see perldoc perlrun) will be useful. It splits the input into the array @F automatically when combined with -n or -p.
Given the start timestamp (a Unix time), it is easy enough to replace either column 6 or 7 (or both) with an hour/day relative to the start timestamp. The issue is: which did you want at the beginning of the line?
#!/usr/bin/perl -pa
use strict;
use warnings;
my $start = 1290895200;
my $t6_hh = int(($F[5] - $start) / 3600);
my $t6_dd = int($t6_hh / 24) + 1;
$t6_hh = int($t6_hh % 24);
my $t7_hh = int(($F[6] - $start) / 3600);
my $t7_dd = int($t7_hh / 24) + 1;
$t7_hh = int($t7_hh % 24);
printf "%2d/%d - %2d/%d ", $t6_hh, $t6_dd, $t7_hh, $t7_dd;
# Perl prints the input line after this because of the -p option
Using this awk program to generate some data:
awk 'BEGIN{
for (i = 1290895200; i < 1291154400; i += 3200)
{
j = i + rand() * (1291154400 - i);
k = i + 60 * rand();
printf "C1 C2 C2 C4 C5 %d %d C8 C9\n", k, j;
}
}'
The Perl script above generated the output that follows. Column 6 contains a time in your given timestamp range, incrementing every 3200 seconds and randomly perturbed by up to 60 more seconds. Column 7 contains a random time in the range between the value in column 6 and the end time. The output is prefixed with two hour/day values, one for column 6, one for column 7. Tinker with the formatting to your heart's content.
0/1 - 0/1 C1 C2 C2 C4 C5 1290895207 1290895202 C8 C9
0/1 - 6/3 C1 C2 C2 C4 C5 1290898427 1291091834 C8 C9
1/1 - 15/2 C1 C2 C2 C4 C5 1290901613 1291036283 C8 C9
2/1 - 5/1 C1 C2 C2 C4 C5 1290904840 1290916542 C8 C9
3/1 - 2/3 C1 C2 C2 C4 C5 1290908056 1291075378 C8 C9
4/1 - 6/2 C1 C2 C2 C4 C5 1290911231 1291004467 C8 C9
5/1 - 12/3 C1 C2 C2 C4 C5 1290914402 1291113831 C8 C9
6/1 - 9/1 C1 C2 C2 C4 C5 1290917631 1290930259 C8 C9
7/1 - 2/3 C1 C2 C2 C4 C5 1290920800 1291077580 C8 C9
8/1 - 8/2 C1 C2 C2 C4 C5 1290924004 1291012338 C8 C9
8/1 - 11/2 C1 C2 C2 C4 C5 1290927241 1291022052 C8 C9
9/1 - 22/2 C1 C2 C2 C4 C5 1290930455 1291062330 C8 C9
10/1 - 14/3 C1 C2 C2 C4 C5 1290933631 1291120433 C8 C9
11/1 - 17/1 C1 C2 C2 C4 C5 1290936839 1290956811 C8 C9
12/1 - 13/2 C1 C2 C2 C4 C5 1290940042 1291029190 C8 C9
13/1 - 18/3 C1 C2 C2 C4 C5 1290943245 1291135459 C8 C9
14/1 - 5/2 C1 C2 C2 C4 C5 1290946402 1291000990 C8 C9
15/1 - 8/3 C1 C2 C2 C4 C5 1290949619 1291100349 C8 C9
16/1 - 3/3 C1 C2 C2 C4 C5 1290952845 1291080339 C8 C9
16/1 - 23/3 C1 C2 C2 C4 C5 1290956021 1291152621 C8 C9
17/1 - 7/2 C1 C2 C2 C4 C5 1290959258 1291007421 C8 C9
18/1 - 9/3 C1 C2 C2 C4 C5 1290962445 1291101150 C8 C9
19/1 - 5/3 C1 C2 C2 C4 C5 1290965604 1291088606 C8 C9
20/1 - 5/3 C1 C2 C2 C4 C5 1290968853 1291086031 C8 C9
21/1 - 11/2 C1 C2 C2 C4 C5 1290972026 1291021742 C8 C9
22/1 - 12/3 C1 C2 C2 C4 C5 1290975228 1291112555 C8 C9
23/1 - 10/2 C1 C2 C2 C4 C5 1290978416 1291020248 C8 C9
0/2 - 17/2 C1 C2 C2 C4 C5 1290981609 1291043680 C8 C9
0/2 - 23/2 C1 C2 C2 C4 C5 1290984853 1291067313 C8 C9
1/2 - 19/3 C1 C2 C2 C4 C5 1290988003 1291139292 C8 C9
2/2 - 19/3 C1 C2 C2 C4 C5 1290991230 1291138839 C8 C9
3/2 - 2/3 C1 C2 C2 C4 C5 1290994419 1291077006 C8 C9
4/2 - 23/3 C1 C2 C2 C4 C5 1290997629 1291152305 C8 C9
5/2 - 16/2 C1 C2 C2 C4 C5 1291000805 1291041679 C8 C9
6/2 - 21/3 C1 C2 C2 C4 C5 1291004004 1291146543 C8 C9
7/2 - 3/3 C1 C2 C2 C4 C5 1291007223 1291080904 C8 C9
8/2 - 19/2 C1 C2 C2 C4 C5 1291010454 1291050299 C8 C9
8/2 - 5/3 C1 C2 C2 C4 C5 1291013627 1291088188 C8 C9
9/2 - 21/3 C1 C2 C2 C4 C5 1291016803 1291146278 C8 C9
10/2 - 15/3 C1 C2 C2 C4 C5 1291020046 1291122347 C8 C9
11/2 - 17/3 C1 C2 C2 C4 C5 1291023207 1291131809 C8 C9
12/2 - 13/2 C1 C2 C2 C4 C5 1291026441 1291028431 C8 C9
13/2 - 19/3 C1 C2 C2 C4 C5 1291029637 1291137957 C8 C9
14/2 - 15/3 C1 C2 C2 C4 C5 1291032843 1291122324 C8 C9
15/2 - 23/3 C1 C2 C2 C4 C5 1291036053 1291154335 C8 C9
16/2 - 23/2 C1 C2 C2 C4 C5 1291039218 1291066064 C8 C9
16/2 - 3/3 C1 C2 C2 C4 C5 1291042430 1291081713 C8 C9
17/2 - 11/3 C1 C2 C2 C4 C5 1291045650 1291109913 C8 C9
18/2 - 6/3 C1 C2 C2 C4 C5 1291048850 1291092315 C8 C9
19/2 - 3/3 C1 C2 C2 C4 C5 1291052024 1291079578 C8 C9
20/2 - 11/3 C1 C2 C2 C4 C5 1291055228 1291108500 C8 C9
21/2 - 4/3 C1 C2 C2 C4 C5 1291058410 1291085972 C8 C9
22/2 - 2/3 C1 C2 C2 C4 C5 1291061634 1291075865 C8 C9
23/2 - 19/3 C1 C2 C2 C4 C5 1291064801 1291136695 C8 C9
0/3 - 12/3 C1 C2 C2 C4 C5 1291068029 1291114176 C8 C9
0/3 - 22/3 C1 C2 C2 C4 C5 1291071244 1291150686 C8 C9
1/3 - 14/3 C1 C2 C2 C4 C5 1291074453 1291118766 C8 C9
2/3 - 15/3 C1 C2 C2 C4 C5 1291077650 1291125588 C8 C9
3/3 - 6/3 C1 C2 C2 C4 C5 1291080812 1291092558 C8 C9
4/3 - 18/3 C1 C2 C2 C4 C5 1291084007 1291134315 C8 C9
5/3 - 7/3 C1 C2 C2 C4 C5 1291087216 1291093314 C8 C9
6/3 - 6/3 C1 C2 C2 C4 C5 1291090424 1291090591 C8 C9
7/3 - 7/3 C1 C2 C2 C4 C5 1291093642 1291095234 C8 C9
8/3 - 23/3 C1 C2 C2 C4 C5 1291096814 1291150822 C8 C9
8/3 - 11/3 C1 C2 C2 C4 C5 1291100019 1291109840 C8 C9
9/3 - 22/3 C1 C2 C2 C4 C5 1291103239 1291148613 C8 C9
10/3 - 12/3 C1 C2 C2 C4 C5 1291106440 1291113616 C8 C9
11/3 - 16/3 C1 C2 C2 C4 C5 1291109623 1291126884 C8 C9
12/3 - 18/3 C1 C2 C2 C4 C5 1291112808 1291133589 C8 C9
13/3 - 19/3 C1 C2 C2 C4 C5 1291116050 1291138547 C8 C9
14/3 - 19/3 C1 C2 C2 C4 C5 1291119257 1291139971 C8 C9
15/3 - 20/3 C1 C2 C2 C4 C5 1291122408 1291140196 C8 C9
16/3 - 23/3 C1 C2 C2 C4 C5 1291125624 1291153919 C8 C9
16/3 - 17/3 C1 C2 C2 C4 C5 1291128833 1291132430 C8 C9
17/3 - 19/3 C1 C2 C2 C4 C5 1291132029 1291137647 C8 C9
18/3 - 21/3 C1 C2 C2 C4 C5 1291135257 1291144109 C8 C9
19/3 - 20/3 C1 C2 C2 C4 C5 1291138411 1291140416 C8 C9
20/3 - 21/3 C1 C2 C2 C4 C5 1291141637 1291145686 C8 C9
21/3 - 21/3 C1 C2 C2 C4 C5 1291144839 1291146016 C8 C9
22/3 - 23/3 C1 C2 C2 C4 C5 1291148048 1291151978 C8 C9
23/3 - 23/3 C1 C2 C2 C4 C5 1291151228 1291151993 C8 C9
The localtime function gives you a breakdown of a Unix timestamp into a list of values containing, among others, the hour and the number of the day within the current year.
#  0    1    2     3     4    5     6     7     8
($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(your_timestamp);
# the two items of interest here are $hour (index 2) and $yday (index 7, the day of the year)
You can use that to build your format pretty easily.
offset = date - start_date
day = offset / (24*60*60)
hour = (offset % (24*60*60)) / (60*60)
if start_date is 1290895200 and date is 1291154399 then offset is 259199, day is 2 and hour is 23
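As an illustration only (not part of the original answer), here is the same arithmetic in Scala, using the timestamps from the question:
val start  = 1290895200L
val ts     = 1291154399L
val offset = ts - start                         // 259199
val day    = offset / (24 * 60 * 60)            // 2 (add 1 if the start date should count as Day 1)
val hour   = (offset % (24 * 60 * 60)) / 3600   // 23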
Have a look at the DateTime module, which will most probably be of help to you.
#!/usr/bin/env perl
use strict;
use warnings;
use DateTime;
my $time1 = 1290895200;
my $time2 = 1291154399;
my $dt1 = DateTime->from_epoch( 'epoch' => $time1 );
my $dt2 = DateTime->from_epoch( 'epoch' => $time2 );
my $duration = $dt1->subtract_datetime($dt2);
print 'Days: ', $duration->days, "\n";
print 'Hours: ', $duration->hours, "\n";

Multiple level grouping in Crystal Reports

I have a report with 4 columns:
ColumnA|ColumnB|ColumnC|ColumnD
Row1 A1 B1 C1 D1
Row2 A1 B1 C1 D2
Row3 A1 B1 C1 D1
Row4 A1 B1 C1 D2
Row5 A1 B1 C1 D1
I tried grouping based on the 4 columns, but I got output with a space after every row.
But in this report I would like to get the output as:
ColumnA|ColumnB|ColumnC|ColumnD
Row1 A1 B1 C1 D1
Row2 A1 B1 C1 D2
<-------------an empty space ----------->
Row3 A1 B1 C1 D1
Row4 A1 B1 C1 D2
<-------------an empty space ----------->
Row5 A1 B1 C1 D1
How can I achieve the above output?
A standard group by would sort the records like this:
ColumnA|ColumnB|ColumnC|ColumnD
Row1 A1 B1 C1 D1
Row3 A1 B1 C1 D1
Row5 A1 B1 C1 D1
Row2 A1 B1 C1 D2
Row4 A1 B1 C1 D2
Since you don't have a standard grouping, another approach may work. You basically want a blank line after the D2 value. This will only work if you always have D2 values at the end of a group.
Create a new blank detail section under the main section
Detail one
A1 B1 C1 D1
Detail two
<blank>
Then put a conditional suppress expression on detail two
ColumnD <> "D2"
Then, whenever D2 is present, the blank detail section will be displayed.
You can use a formula instead of a field value for grouping:
select Column4
    case D1 : "Group1"
    case D2 : "Group2"
    case D3 : "Group3"
    case D4 : "Group3"
    case D5 : "Group3"
    case D6 : "Group4"
    default : "Group5"
Is that your problem?
The blank lines can be generated as a Group Footer.