How to Replace Variable with Level in tbl_summary - gtsummary

A reviewer asked that, rather than having both genders listed in the table, only one be included. So Gender would be replaced with Female, and the proportion of the gender that was female would appear under each Treatment.
library(gtsummary)
d <- tibble::tribble(
  ~Gender,  ~Treatment,
  "Male",   "A",
  "Male",   "B",
  "Female", "A",
  "Male",   "C",
  "Female", "B",
  "Female", "C"
)
d %>% tbl_summary(by = Treatment)

One way to do this is to drop the first row of table_body and keep only the second row, which holds the information on Female. This matches the table you provided in the comments.
library(gtsummary)
library(dplyr)  # for filter()
d <- tibble::tribble(
  ~Gender,  ~Treatment,
  "Male",   "A",
  "Male",   "B",
  "Female", "A",
  "Male",   "C",
  "Female", "B",
  "Female", "C"
)
t1 <- d %>% filter(Gender == "Female") %>% tbl_summary(by = Treatment)
t1$table_body <- t1$table_body[2, ]
t1
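A less fragile variant of the same idea is gtsummary's modify_table_body() helper, which applies the row filter through the supported API instead of overwriting t1$table_body directly. A minimal sketch, assuming a recent gtsummary version that exports modify_table_body() and the d defined above:
library(gtsummary)
library(dplyr)
t1 <- d %>%
  filter(Gender == "Female") %>%
  tbl_summary(by = Treatment) %>%
  # keep only the "Female" level row of the internal table body
  modify_table_body(~ .x %>% filter(label == "Female"))
t1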

How to get a specific key from jsonb in postgresql?

I have a jsonb column named "lines" with many objects like this:
[
  {
    "a": "1",
    "b": "2",
    "c": "3"
  },
  {
    "a": "4",
    "b": "5",
    "c": "6"
  }
]
This is my query
SELECT *
FROM public.test
WHERE public.test.lines::jsonb ? '[{"c"}]'
In my query I want to get only the rows whose array contains the "c" key, but I get nothing after execution.
A quick solution:
SELECT 'c', *
FROM jsonb_path_query('[{"a": "1", "b": "2", "c": "3"}, {"a": "4", "b": "5", "c": "6"}]', '$[*].c');
 ?column? | jsonb_path_query
----------+------------------
 c        | "3"
 c        | "6"
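Applied to the table from the question, the same jsonpath pulls the c values per row; a sketch, assuming the test table and lines column from above (PostgreSQL 12+):
SELECT jsonb_path_query(lines::jsonb, '$[*].c') AS c_value
FROM test;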
The ? operator only works with strings, not with JSON objects. If you want to test whether any of the array elements contains the key c, you can use a JSON path predicate with the @@ operator:
SELECT *
FROM test
WHERE lines::jsonb @@ '$[*].c != null'
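The @@/jsonpath route requires PostgreSQL 12 or newer. On older versions, one workaround (a sketch, assuming the same schema) is to unnest the array and test each element with the plain ? operator:
SELECT t.*
FROM test t
WHERE EXISTS (
  SELECT 1
  FROM jsonb_array_elements(t.lines::jsonb) AS elem
  WHERE elem ? 'c'
);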

Insert row in UITableViewController without replacing row - SWIFT

In my app I have a data pull from Firebase. I have a UITableViewController and would like to insert a row with text from within the app. The data pull would look like this (please excuse the bad example, but I cannot go into too much detail):
The original data pull:
Row 1: abc
Row 2: def
Row 3: ghi
Row 4: jkl
Row 5: mno
What I would like to achieve:
Row 1: abc
Row 2: def
Row 3: text from the app
Row 4: ghi
Row 5: jkl
Row 6: text from the app
Row 7: mno
How can I achieve this? I was trying to do something like this in cellForRowAt
if indexPath.row % 3 == 0 {
    cell.text = "custom text"
}
But this replaces every 3rd row's content. I would like to put a row in between, so to speak.
You can modify your server data with your local data.
var serverData = ["a","b","c","d","e","f","g","h","i","j","k","l","m"]
let localAppData = ["1","2","3","4","5","6","7","8","9","10"]
var modified = [String]()
var counter = 0
for index in 1...serverData.count {
    let value = serverData[index - 1]
    if index % 3 == 0 && index != 0 {
        if counter < localAppData.count {
            modified.append(localAppData[counter])
        } else {
            modified.append(value)
        }
        counter += 1
    } else {
        modified.append(value)
    }
}
serverData.removeAll()
serverData.append(contentsOf: modified)
print(serverData) // ["a", "b", "1", "d", "e", "2", "g", "h", "3", "j", "k", "4", "m"]
if counter < localAppData.count {
    // Appends the remaining local data to serverData
    serverData.append(contentsOf: localAppData[counter...localAppData.count - 1])
}
print(serverData) // ["a", "b", "1", "d", "e", "2", "g", "h", "3", "j", "k", "4", "m", "5", "6", "7", "8", "9", "10"]
Note: after modifying the datasource you have to reload the tableView.
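For example, a minimal sketch assuming a UITableViewController subclass (so self.tableView exists):
// Reload after mutating the datasource, on the main thread
DispatchQueue.main.async {
    self.tableView.reloadData()
}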
You can update the datasource by inserting the value at every 3rd position and using that datasource in cellForRowAt:
var a = ["a", "b", "c", "d", "e", "f", "g", "h", "i"]
var temp = a
for (ind, _) in a.enumerated() {
    if ind % 3 == 0 && ind != 0 {
        temp.insert("current text", at: ind)
    }
}
print(temp) // Prints ["a", "b", "c", "current text", "d", "e", "current text", "f", "g", "h", "i"]
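A minimal sketch of feeding that merged array to the datasource methods; the "Cell" reuse identifier here is hypothetical:
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return temp.count
}
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    // "Cell" is a hypothetical reuse identifier registered in the storyboard
    let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
    cell.textLabel?.text = temp[indexPath.row]
    return cell
}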

spark: how to merge rows to array of jsons

Input:
id1   id2   name  value        epid
"xxx" "yyy" "EAN" "5057723043" "1299"
"xxx" "yyy" "MPN" "EVBD"       "1299"
I want:
{
  "id1": "xxx",
  "id2": "yyy",
  "item_specifics": [
    { "name": "EAN",  "value": "5057723043" },
    { "name": "MPN",  "value": "EVBD" },
    { "name": "EPID", "value": "1299" }
  ]
}
I tried the following two solutions from How to aggregate columns into json array? and how to merge rows into column of spark dataframe as valid json to write it in mysql:
pi_df.groupBy(col("id1"), col("id2"))
  //.agg(collect_list(to_json(struct(col("name"), col("value"))).alias("item_specifics"))) // => not working
  .agg(collect_list(struct(col("name"), col("value"))).alias("item_specifics"))
But I got:
{ "name":"EAN","value":"5057723043", "EPID": "1299", "id1": "xxx", "id2": "yyy" }
How can I fix this? Thanks.
For Spark < 2.4
You can create 2 dataframes, one with name and value, and the other with "EPID" as the name and the epid value as the value, and union them together. Then aggregate with collect_set and create a json. The code should look like this.
// Creating test data
val df = Seq(("xxx", "yyy", "EAN", "5057723043", "1299"), ("xxx", "yyy", "MPN", "EVBD", "1299"))
  .toDF("id1", "id2", "name", "value", "epid")
df.show(false)
+---+---+----+----------+----+
|id1|id2|name|value |epid|
+---+---+----+----------+----+
|xxx|yyy|EAN |5057723043|1299|
|xxx|yyy|MPN |EVBD |1299|
+---+---+----+----------+----+
val df1 = df.withColumn("map", struct(col("name"), col("value")))
  .select("id1", "id2", "map")
val df2 = df.withColumn("map", struct(lit("EPID").as("name"), col("epid").as("value")))
  .select("id1", "id2", "map")
val jsonDF = df1.union(df2).groupBy("id1", "id2")
  .agg(collect_set("map").as("item_specifics"))
  .withColumn("json", to_json(struct("id1", "id2", "item_specifics")))
jsonDF.select("json").show(false)
+---------------------------------------------------------------------------------------------------------------------------------------------+
|json |
+---------------------------------------------------------------------------------------------------------------------------------------------+
|{"id1":"xxx","id2":"yyy","item_specifics":[{"name":"MPN","value":"EVBD"},{"name":"EAN","value":"5057723043"},{"name":"EPID","value":"1299"}]}|
+---------------------------------------------------------------------------------------------------------------------------------------------+
For Spark >= 2.4
Spark 2.4 provides an array_union method, which might help do this without the union. I haven't tried it, though.
val jsonDF = df.withColumn("map1", struct(col("name"), col("value")))
  .withColumn("map2", struct(lit("epid").as("name"), col("epid").as("value")))
  .groupBy("id1", "id2")
  .agg(collect_set("map1").as("item_specifics1"),
       collect_set("map2").as("item_specifics2"))
  .withColumn("item_specifics", array_union(col("item_specifics1"), col("item_specifics2")))
  .withColumn("json", to_json(struct("id1", "id2", "item_specifics")))
You're pretty close. I believe you're looking for something like this:
val pi_df2 = pi_df
  .withColumn("name", lit("EPID"))
  .withColumnRenamed("epid", "value")
  .select("id1", "id2", "name", "value")
pi_df.select("id1", "id2", "name", "value")
  .union(pi_df2)
  .withColumn("item_specific", struct(col("name"), col("value")))
  .groupBy(col("id1"), col("id2"))
  .agg(collect_list(col("item_specific")).alias("item_specifics"))
  .write.json(...)
The union brings epid back into item_specifics.
Here is what you need to do
import scala.util.parsing.json.JSONObject
import scala.collection.mutable.WrappedArray
// Define a udf that assembles the output json
val jsonFun = udf((id1: String, id2: String, item_specifics: WrappedArray[Map[String, String]], epid: String) => {
  // Add epid to the item_specifics json
  val item_withEPID = item_specifics :+ Map("epid" -> epid)
  val item_specificsArray = item_withEPID
    .map(m => Array(Map("name" -> m.keys.toSeq(0), "value" -> m.values.toSeq(0))))
    .map(m => m.map(mi => JSONObject(mi).toString().replace("\\", "")))
    .flatten
    .mkString("[", ",", "]")
  // Add id1 and id2 to the output json
  val m = Map("id1" -> id1, "id2" -> id2, "item_specifics" -> item_specificsArray.toSeq)
  JSONObject(m).toString().replace("\\", "")
})
val pi_df = Seq(("xxx", "yyy", "EAN", "5057723043", "1299"), ("xxx", "yyy", "MPN", "EVBD", "1299"))
  .toDF("id1", "id2", "name", "value", "epid")
// Add epid to the group-by columns, else the column will not be available after group by and aggregation
val df = pi_df
  .groupBy(col("id1"), col("id2"), col("epid"))
  .agg(collect_list(map(col("name"), col("value")) as "map").as("item_specifics"))
  .withColumn("item_specifics", jsonFun($"id1", $"id2", $"item_specifics", $"epid"))
scala> df.show(false)
+---+---+----+--------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|id1|id2|epid|item_specifics |
+---+---+----+--------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|xxx|yyy|1299|{"id1" : "xxx", "id2" : "yyy", "item_specifics" : [{"name" : "MPN", "value" : "EVBD"},{"name" : "EAN", "value" : "5057723043"},{"name" : "epid", "value" : "1299"}]}|
+---+---+----+--------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Pretty-printed, the content of the item_specifics column / output is:
{
"id1": "xxx",
"id2": "yyy",
"item_specifics": [{
"name": "MPN",
"value": "EVBD"
}, {
"name": "EAN",
"value": "5057723043"
}, {
"name": "epid",
"value": "1299"
}]
}

reactivex repeated skip between

Suppose I have the following stream of data:
1, 2, 3, a, 5, 6, b, 7, 8, a, 10, 11, b, 12, 13, ...
I want to filter out everything between 'a' and 'b' (inclusive), no matter how many times they appear. So the result of the above would be:
1, 2, 3, 7, 8, 12, 13, ...
How can I do this with ReactiveX?
Use scan with initial value b to turn
1, 2, 3, a, 5, 6, b, 7, 8, a, 10, 11, b, 12, 13, ...
into
b, 1, 2, 3, a, a, a, b, 7, 8, a, a, a, b, 12, 13, ...
and then filter out a and b to get
1, 2, 3, 7, 8, 12, 13, ...
In pseudo code:
values.scan('b', (s, v) -> if (v == 'a' || v == 'b' || s != 'a') v else s)
      .filter(v -> v != 'a' && v != 'b');
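In Rx.NET that pseudo code translates almost directly; a minimal sketch, assuming string values and the System.Reactive package:
using System;
using System.Reactive.Linq;

var source = new[] { "1", "2", "3", "a", "5", "6", "b",
                     "7", "8", "a", "10", "11", "b", "12", "13" }
    .ToObservable();
source
    // Carry the last sigil forward; the seed "b" means "outside a region".
    .Scan("b", (state, v) => (v == "a" || v == "b" || state != "a") ? v : state)
    // Drop the sigils themselves.
    .Where(v => v != "a" && v != "b")
    .Subscribe(Console.WriteLine); // 1 2 3 7 8 12 13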
OK, I'm posting this in case anyone else needs an answer to it. The setup is slightly different from the one I described above, just to make it easier to understand.
List<string> values = new List<string>()
{
    "1", "2", "3",
    "a", "5", "6", "b",
    "8", "9", "10", "11",
    "a", "13", "14", "b",
    "16", "17", "18", "19",
    "a", "21", "22", "b",
    "24"
};
var aa =
    // Create an array of CSV strings split on the terminal sigil value
    String.Join(",", values.ToArray())
        .Split(new String[] { "b," }, StringSplitOptions.None)
        // Create the observable from this array of CSV strings
        .ToObservable()
        // Now create an Observable from each element, splitting it up again.
        // It is no longer a CSV string but the original elements up to each terminal value.
        .Select(s => s.Split(',').ToObservable()
            // From each value in each observable take those elements
            // up to the initial sigil
            .TakeWhile(s1 => !s1.Equals("a")))
        // Concat the output of each individual Observable - in order.
        // SelectMany won't work here since it could interleave the
        // output of the different Observables created above.
        .Concat();
aa.Subscribe(s => Console.WriteLine(s));
This prints out:
1
2
3
8
9
10
11
16
17
18
19
24
It is a bit more convoluted than I wanted but it works.
Edit 6/3/17:
I actually found a cleaner solution for my case.
List<string> values = new List<string>()
{
    "1", "2", "3",
    "a", "5", "6", "b",
    "8", "9", "10", "11",
    "a", "13", "14", "b",
    "16", "17", "18", "19",
    "a", "21", "22", "b",
    "24"
};
string lazyABPattern = @"a.*?b";
Regex abRegex = new Regex(lazyABPattern);
var bb = values.ToObservable()
    .Aggregate((s1, s2) => s1 + "," + s2)
    .Select(s => abRegex.Replace(s, ""))
    .Select(s => s.Split(',').ToObservable())
    .Concat();
bb.Subscribe(s => Console.WriteLine(s));
The code is simpler, which makes it easier to follow (even though it uses regexes).
The problem here is that it still isn't really a general solution to the problem of removing 'repeated regions' of a data stream. It relies on converting the stream to a single string, operating on the string, and then converting it back to some other form. If anyone has any ideas on how to approach this in a general way, I would love to hear about it (one streaming possibility is sketched below).
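One possible fully streaming generalization, sketched under the same setup as above: thread an "inside a region" flag through Scan and keep only the values emitted outside it, so nothing is ever concatenated into a single string.
var cc = values.ToObservable()
    .Scan((inside: false, emit: (string)null), (s, v) =>
        v == "a" ? (inside: true,  emit: (string)null)  // region opens: suppress
      : v == "b" ? (inside: false, emit: (string)null)  // region closes: suppress
      : s.inside ? (inside: true,  emit: (string)null)  // inside a region: suppress
                 : (inside: false, emit: v))            // outside: pass through
    .Where(s => s.emit != null)
    .Select(s => s.emit);
cc.Subscribe(Console.WriteLine); // 1 2 3 8 9 10 11 16 17 18 19 24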

Split Set into multiple Sets Scala

I have a Set[String] and a number devider: Int. I need to split the set arbitrarily into pieces, each of which has size devider. Examples:
1.
Set: "a", "bc", "ds", "fee", "s"
devider: 2
result:
Set1: "a", "bc"
Set2: "ds", "fee"
Set3: "s"
2.
Set: "a", "bc", "ds", "fee", "s", "ff"
devider: 3
result:
Set1: "a", "bc", "ds"
Set2: "fee", "s", "ff"
3.
Set: "a", "bc", "ds"
devider: 4
result:
Set1: "a", "bc", "ds"
What is the idiomatic way to do it in Scala?
You probably want something like:
Set("a", "bc", "ds", "fee", "s").grouped(2).toSet
The problem is that a Set, by definition, has no order, so there's no telling which elements will be grouped together.
Set("a", "bc", "ds", "fee", "s").grouped(2).toSet
//res0: Set[Set[String]] = Set(Set(s, bc), Set(a, ds), Set(fee))
To get them grouped in a particular fashion you'll need to change the Set to one of the ordered collections, order the elements as required, group them, and transition everything back to Sets.
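A minimal sketch of that round trip, using sorted order as the (arbitrary) required ordering:
val groups: Set[Set[String]] =
  Set("a", "bc", "ds", "fee", "s")
    .toList
    .sorted          // impose a deterministic order
    .grouped(2)      // pieces of size devider
    .map(_.toSet)
    .toSet
// groups: Set(Set(a, bc), Set(ds, fee), Set(s))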
The grouping is predictable only if it is a List, like:
val pn = List("a", "bc", "ds", "fee", "s").grouped(2).toSet
println(pn)