Implicit mapping based on column values - entity-framework

I'm trying to do an implicit mapping in EF6 that is based on the values of the various columns. I have a simple table like so...
col1 | col2 | col3
with each of the columns nullable. A dependency would run from a row that has all columns filled, e.g.
"bla" | "blub" | "qwer"
to any row that has one or more of the columns null but the others the same, like
"bla" | null | "qwer"
null | "blub" | "qwer"
null | "blub" | null
Is that somehow expressible as a "Map" operation, or do I have to write a custom select for that (and if so, how)?
modelBuilder.Entity<MyDbType>()
    .HasMany(e => e.Dependents)
    .WithMany(e => e.DependsOn)
    .Map(m => m.???)
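Whether this is expressible through .Map() isn't settled in the thread, but as a plain-SQL sketch of the dependency rule described above (table and column names are placeholders based on the example, not from the question's model):

-- dependents of a source row p: every column is either NULL or equal to p's value,
-- and at least one column is NULL
SELECT d.*
FROM MyTable p
JOIN MyTable d
  ON  (d.col1 IS NULL OR d.col1 = p.col1)
  AND (d.col2 IS NULL OR d.col2 = p.col2)
  AND (d.col3 IS NULL OR d.col3 = p.col3)
  AND (d.col1 IS NULL OR d.col2 IS NULL OR d.col3 IS NULL)
WHERE p.col1 = 'bla' AND p.col2 = 'blub' AND p.col3 = 'qwer';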

Related

How to change column type in PostgreSQL from text to array and cast only non-null values?

I have a table in PostgreSQL:
| id | country | type |
|----|---------|------|
| 1  | USA     | FOO  |
| 2  | null    | BAR  |
I want to change the type of the country column from text to an array, casting only the non-null values to the new type, so that the table looks as follows:
| id | country | type |
|----|---------|------|
| 1  | {USA}   | FOO  |
| 2  | null    | BAR  |
So far, I have come up with this expression, but it casts every value to an array, so for the 2nd row I end up with an array containing a null value ({NULL}):
ALTER TABLE my_table
ALTER COLUMN country TYPE TEXT[]
USING ARRAY[country];
How can I use the USING expression to cast only the non-null values?
Simply use string_to_array(), which returns NULL for a NULL input and, with an empty delimiter, treats the whole string as a single array element:
ALTER TABLE my_table
ALTER COLUMN country TYPE TEXT[]
USING string_to_array(country,'');
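If you want to preview the conversion before running the ALTER, a quick check against the question's table:

SELECT id,
       string_to_array(country, '') AS country_as_array  -- {USA} for row 1, NULL for row 2
FROM my_table;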
You can use a CASE expression:
ALTER TABLE my_table
ALTER COLUMN country TYPE TEXT[]
USING CASE
        WHEN country IS NULL THEN NULL
        ELSE ARRAY[country]
      END;

.isin() with a column from a dataframe

How can I query a table using isin() with another dataframe? For example, there is this dataframe, df1:
| id | rank |
|---------|------|
| SE34SER | 1 |
| SEF3445 | 2 |
| 5W4G4F | 3 |
I want to query a table, keeping only the rows where a column of that table is in df1.id. I tried doing so like this:
t = (
    spark.table('mytable')
    .where(sf.col('id').isin(df1.id))
    .select('*')
).show()
However, it errors with:
AttributeError: 'NoneType' object has no attribute 'id'
Unfortunately, you can't pass another dataframe's column to the isin() method. You could collect all the values of that column into a list and pass the list to isin(), but that doesn't scale well. A better option is an inner join between the two dataframes:
df2 = spark.table('mytable')
df2.join(df1.select('id'), df1.id == df2.id, 'inner')
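As a runnable sketch of that idea (table and column names as in the question; the leftsemi variant is my addition, not from the thread — it keeps only mytable's columns, which matches what the original isin() filter was meant to do):

import pyspark.sql.functions as sf

df2 = spark.table('mytable')

# left semi join: keep the rows of df2 whose id appears in df1, without pulling in df1's columns
t = df2.join(df1.select('id'), on='id', how='leftsemi')
t.show()

# alternative for a small df1 only: collect the ids into a Python list and pass that to isin()
ids = [row.id for row in df1.select('id').collect()]
t2 = df2.where(sf.col('id').isin(ids))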

Update jsonb column in CASE statement: "column is of type jsonb but expression is of type text"

I am working in Postgres 11.4. I have a table with a jsonb column:
Table "public.feature_bundle_plan_feature"
      Column       |  Type   | Collation | Nullable |    Default
-------------------+---------+-----------+----------+---------------
 plan_feature_id   | integer |           | not null |
 feature_bundle_id | integer |           | not null |
 value             | jsonb   |           |          | 'true'::jsonb
I can store an object directly in the jsonb value column like this:
insert into feature_bundle_plan_feature(plan_feature_id, feature_bundle_id, value)
values(1, 1, '{"foo":"bar"}');
However, I don't seem to be able to do the same thing inside a CASE statement:
update feature_bundle_plan_feature set value = case
when feature_bundle_id=1 then '{"foo":"bar"}'
end;
This fails with:
ERROR: column "value" is of type jsonb but expression is of type text
LINE 1: update feature_bundle_plan_feature set value = case
HINT: You will need to rewrite or cast the expression.
What am I doing wrong?
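The thread doesn't include an answer here, but following the HINT, a minimal sketch (my own, not from the thread) is to cast the literal — or the whole CASE expression — to jsonb, since inside the CASE the quoted literal is otherwise resolved as text:

update feature_bundle_plan_feature
set value = case
              when feature_bundle_id = 1 then '{"foo":"bar"}'::jsonb
            end;
-- note: as in the original statement, rows where feature_bundle_id <> 1 end up with value = NULL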

Spark Scala Dataframe - replace/join column values with values from another dataframe (but is transposed)

I have a table with ~300 columns filled with characters (stored as String):
valuesDF:
| FavouriteBeer | FavouriteCheese | ...
|---------------|-----------------|--------
| U | C | ...
| U | E | ...
| I | B | ...
| C | U | ...
| ... | ... | ...
I have a Data Summary, which maps the characters onto their actual meaning. It is in this form:
summaryDF:
| Field | Value | ValueDesc |
|------------------|-------|---------------|
| FavouriteBeer | U | Unknown |
| FavouriteBeer | C | Carlsberg |
| FavouriteBeer | I | InnisAndGunn |
| FavouriteBeer | D | DoomBar |
| FavouriteCheese | C | Cheddar |
| FavouriteCheese | E | Emmental |
| FavouriteCheese | B | Brie |
| FavouriteCheese | U | Unknown |
| ... | ... | ... |
I want to programmatically replace the character values of each column in valuesDF with the Value Descriptions from summaryDF. This is the result I'm looking for:
finalDF:
| FavouriteBeer | FavouriteCheese | ...
|---------------|-----------------|--------
| Unknown | Cheddar | ...
| Unknown | Emmental | ...
| InnisAndGunn | Brie | ...
| Carlsberg | Unknown | ...
| ... | ... | ...
As there are ~300 columns, I'm not keen to type out withColumn methods for each one.
Unfortunately I'm a bit of a novice when it comes to programming for Spark, although I've picked up enough to get by over the last 2 months.
What I'm pretty sure I need to do is something along the lines of:
valuesDF.columns.foreach { col => ...... } to iterate over each column
Filter summaryDF on Field using col String value
Left join summaryDF onto valuesDF based on current column
withColumn to replace the original character code column from valuesDF with new description column
Assign new DF as a var
Continue loop
However, trying this gave me a Cartesian product error (I made sure to define the join as "left").
I tried and failed to pivot summaryDF (as there are no aggregations to do??) and then join both dataframes together.
This is the sort of thing I've tried, and I always get a NullPointerException. I know this is really not the right way to do this, and I can see why I'm getting the NullPointerException... but I'm really stuck and reverting to old, silly & bad Python habits in desperation.
var valuesDF = sourceDF

// I converted summaryDF to a broadcast variable
// because it's small and a "constant" lookup table
summaryBroadcast
  .value
  .foreach { x =>
    // searchValue = Value (e.g. `U`),
    // replaceValue = ValueDesc (e.g. `Unknown`)
    val field = x(0).toString
    val searchValue = x(1).toString
    val replaceValue = x(2).toString
    // error catching, as the summary data does not map exactly onto the field names
    // the joys of business people working in Excel...
    try {
      // I'm using regexp_replace because I'm lazy
      valuesDF = valuesDF
        .withColumn(field, regexp_replace(col(field), searchValue, replaceValue))
    } catch {
      case _: Exception => null
    }
  }
Any ideas? Advice? Thanks.
First, we'll need a function that executes a join of valuesDf with summaryDf by Value and the respective pair of Favourite* and Field:
private def joinByColumn(colName: String, sourceDf: DataFrame): DataFrame = {
  sourceDf.as("src") // alias it to help selecting appropriate columns in the result
    // the join
    .join(summaryDf, $"Value" === col(colName) && $"Field" === colName, "left")
    // we do not need the original `Favourite*` column, so drop it
    .drop(colName)
    // select all previous columns, plus the one that contains the match
    .select("src.*", "ValueDesc")
    // rename the resulting column to have the name of the source one
    .withColumnRenamed("ValueDesc", colName)
}
Now, to produce the target result we can iterate on the names of the columns to match:
val result = Seq("FavouriteBeer", "FavouriteCheese").foldLeft(valuesDF) {
  case (df, colName) => joinByColumn(colName, df)
}
result.show()
+-------------+---------------+
|FavouriteBeer|FavouriteCheese|
+-------------+---------------+
| Unknown| Cheddar|
| Unknown| Emmental|
| InnisAndGunn| Brie|
| Carlsberg| Unknown|
+-------------+---------------+
In case a value from valuesDf does not match anything in summaryDf, the resulting cell in this solution will contain null. If you just want to replace it with the value Unknown, then instead of the .select and .withColumnRenamed lines above, use:
    .withColumn(colName, when($"ValueDesc".isNotNull, $"ValueDesc").otherwise(lit("Unknown")))
    .select("src.*", colName)

group_by or distinct with postgres/dbix-class

I have a posts table like so:
+-----+----------+------------+------------+
| id | topic_id | text | timestamp |
+-----+----------+------------+------------+
| 789 | 2 | foobar | 1396026357 |
| 790 | 2 | foobar | 1396026358 |
| 791 | 2 | foobar | 1396026359 |
| 792 | 3 | foobar | 1396026360 |
| 793 | 3 | foobar | 1396026361 |
+-----+----------+------------+------------+
How would I go about "grouping" the results by topic id, while pulling the most recent record (sorting by timestamp desc)?
I've come to the understanding that I might not want "group_by" but rather "distinct on". My postgres query looks like this:
select distinct on (topic_id) topic_id, id, text, timestamp
from posts
order by topic_id desc, timestamp desc;
This works great. However, I can't figure out if this is something I can do in DBIx::Class without having to write a custom ResultSource::View. I've tried various arrangements of group_by with selects and columns, and have tried distinct => 1. If/when a result is returned, it doesn't actually preserve the uniqueness.
Is there a way to write the query I am trying through a resultset search, or is there perhaps a better way to achieve the same result through a different type of query?
Check out the section in the DBIC Cookbook on grouping results.
I believe what you want is something along the lines of this though:
my $rs = $base_posts_rs->search(undef, {
    columns  => [ { topic_id => "topic_id" }, { text => "text" }, { timestamp => "timestamp" } ],
    group_by => ["topic_id"],
    order_by => [ { -desc => "topic_id" }, { -desc => "timestamp" } ],
})
Edit: A quick and dirty way to get around strict SQL grouping would be something like this:
my $rs = $base_posts_rs->search(undef, {
    columns => [
        { topic_id  => \"MAX(topic_id)" },
        { text      => \"MAX(text)" },
        { timestamp => \"MAX(timestamp)" },
    ],
    group_by => ["topic_id"],
    order_by => [ { -desc => "topic_id" }, { -desc => "timestamp" } ],
})
Of course, use the appropriate aggregate function for your need.
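For reference, the plain-SQL shape of what that grouped resultset does is roughly the following (a hand-written equivalent, not the exact SQL DBIC emits):

SELECT MAX(topic_id)  AS topic_id,
       MAX(text)      AS text,
       MAX(timestamp) AS latest
FROM posts
GROUP BY topic_id
ORDER BY topic_id DESC, latest DESC;

Note that MAX(text) is not necessarily the text of the most recent post per topic, which is why the DISTINCT ON query from the question remains the cleaner way to get "latest row per topic".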