Idiomatic replacement of empty string '' with pl.Null (null) in polars - python-polars

I have a polars DataFrame with a number of Series that look like:
pl.Series(['cow', 'cat', '', 'lobster', ''])
and I'd like them to be
pl.Series(['cow', 'cat', pl.Null, 'lobster', pl.Null])
A simple string replacement won't work since pl.Null is not of type PyString:
pl.Series(['cow', 'cat', '', 'lobster', '']).str.replace('', pl.Null)
What's the idiomatic way of doing this for a Series/DataFrame in polars?

Series
For a single Series, you can use the set method.
import polars as pl
my_series = pl.Series(['cow', 'cat', '', 'lobster', ''])
my_series.set(my_series.str.lengths() == 0, None)
shape: (5,)
Series: '' [str]
[
"cow"
"cat"
null
"lobster"
null
]
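As a side note, comparing the Series directly to the empty string should give an equivalent boolean mask for set (a minimal sketch, using the same data as above):
my_series.set(my_series == '', None)
This should produce the same result, with the two empty strings replaced by null.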
DataFrame
For DataFrames, I would suggest using when/then/otherwise. For example, with this data:
df = pl.DataFrame({
    'str1': ['cow', 'dog', '', 'lobster', ''],
    'str2': ['', 'apple', 'orange', '', 'kiwi'],
    'str3': ['house', '', 'apartment', 'condo', ''],
})
df
shape: (5, 3)
┌─────────┬────────┬───────────┐
│ str1 ┆ str2 ┆ str3 │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ str │
╞═════════╪════════╪═══════════╡
│ cow ┆ ┆ house │
├╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ dog ┆ apple ┆ │
├╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ ┆ orange ┆ apartment │
├╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ lobster ┆ ┆ condo │
├╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ ┆ kiwi ┆ │
└─────────┴────────┴───────────┘
We can run a replacement on all string columns as follows:
df.with_columns([
    pl.when(pl.col(pl.Utf8).str.lengths() == 0)
    .then(None)
    .otherwise(pl.col(pl.Utf8))
    .keep_name()
])
shape: (5, 3)
┌─────────┬────────┬───────────┐
│ str1 ┆ str2 ┆ str3 │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ str │
╞═════════╪════════╪═══════════╡
│ cow ┆ null ┆ house │
├╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ dog ┆ apple ┆ null │
├╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ null ┆ orange ┆ apartment │
├╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ lobster ┆ null ┆ condo │
├╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ null ┆ kiwi ┆ null │
└─────────┴────────┴───────────┘
The above should be fairly performant.
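If you prefer a direct comparison over str.lengths, the following sketch should be equivalent (not from the original answer, but the same when/then/otherwise pattern):
df.with_columns([
    pl.when(pl.col(pl.Utf8) == '')
    .then(None)
    .otherwise(pl.col(pl.Utf8))
    .keep_name()
])
It should yield the same output as above.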
If you only want to replace empty strings with null on certain columns, you can provide a list:
only_these = ['str1', 'str2']
df.with_columns([
    pl.when(pl.col(only_these).str.lengths() == 0)
    .then(None)
    .otherwise(pl.col(only_these))
    .keep_name()
])
shape: (5, 3)
┌─────────┬────────┬───────────┐
│ str1 ┆ str2 ┆ str3 │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ str │
╞═════════╪════════╪═══════════╡
│ cow ┆ null ┆ house │
├╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ dog ┆ apple ┆ │
├╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ null ┆ orange ┆ apartment │
├╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ lobster ┆ null ┆ condo │
├╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ null ┆ kiwi ┆ │
└─────────┴────────┴───────────┘
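If the column list is easier to describe by rule than by hand, you can also build it from the schema. A sketch (excluding 'str3' here is just a made-up rule for illustration):
only_these = [
    name for name, dtype in df.schema.items()
    if dtype == pl.Utf8 and name != 'str3'
]
df.with_columns([
    pl.when(pl.col(only_these).str.lengths() == 0)
    .then(None)
    .otherwise(pl.col(only_these))
    .keep_name()
])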

Related

Python Polars: Unique values in each row

I have a Polars dataframe with 150 columns of currency codes. I can identify them with a regex expression df.select(pl.col('^.*cur$')). I am trying to determine the unique set of currency codes in each row. Nulls should be ignored.
df = pl.DataFrame(
    {
        "col1cur": ["EUR", "EUR", "EUR"],
        "col2cur": [None, "EUR", None],
        "col3cur": ["EUR", None, None],
        "col4cur": ["EUR", "GBP", None],
        "target": [["EUR"], ["EUR", "GBP"], ["EUR"]],
    }
)
In pandas, I would do the following. Can anyone help with how I would approach this in Polars?
import pandas as pd

pandas_df = pd.DataFrame(
    {
        "col1cur": ["EUR", "EUR", "EUR"],
        "col2cur": [None, "EUR", None],
        "col3cur": ["EUR", None, None],
        "col4cur": ["EUR", "GBP", None],
    },
    dtype="string",
)
pandas_df["target"] = pandas_df.apply(
    lambda x: pd.Series(x.dropna().unique()).to_list(), axis=1
)
Since polars isn't really good at row operations, I'd start off with a melt.
df.drop('target').with_row_count('i').join(
    df.drop('target').with_row_count('i')
        .melt('i')
        .filter(~pl.col('value').is_null())
        .groupby('i')
        .agg(pl.col('value').unique()),
    on='i'
).sort('i').drop('i')
We use with_row_count to create an index that preserves the identity of the original rows, melt to long format, filter out the nulls, group by what was previously each row and aggregate to the unique values, and finally join the result back to the original columns on the row index.
shape: (3, 5)
┌─────────┬─────────┬─────────┬─────────┬────────────────┐
│ col1cur ┆ col2cur ┆ col3cur ┆ col4cur ┆ value │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ str ┆ str ┆ list[str] │
╞═════════╪═════════╪═════════╪═════════╪════════════════╡
│ EUR ┆ null ┆ EUR ┆ EUR ┆ ["EUR"] │
│ EUR ┆ EUR ┆ null ┆ GBP ┆ ["GBP", "EUR"] │
│ EUR ┆ null ┆ null ┆ null ┆ ["EUR"] │
└─────────┴─────────┴─────────┴─────────┴────────────────┘
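For readability, the same pipeline can be broken into named steps. A sketch, assuming the same df as in the question:
# index the rows so they can be reassembled after the melt
indexed = df.drop('target').with_row_count('i')

# long format: one (row index, column, value) record per cell
melted = indexed.melt('i')

# drop the nulls, then collect the unique values per original row
uniques = (
    melted
    .filter(~pl.col('value').is_null())
    .groupby('i')
    .agg(pl.col('value').unique())
)

# attach the list column back onto the original rows
indexed.join(uniques, on='i').sort('i').drop('i')
As in the one-liner above, the resulting list column is named 'value'.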
Here's the closest I could get:
In [39]: df.with_columns(pl.concat_list(pl.col('*')).arr.unique().alias('target'))
Out[39]:
shape: (3, 5)
┌─────────┬─────────┬─────────┬─────────┬──────────────────────┐
│ col1cur ┆ col2cur ┆ col3cur ┆ col4cur ┆ target │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ str ┆ str ┆ list[str] │
╞═════════╪═════════╪═════════╪═════════╪══════════════════════╡
│ EUR ┆ null ┆ EUR ┆ EUR ┆ [null, "EUR"] │
│ EUR ┆ EUR ┆ null ┆ GBP ┆ ["EUR", null, "GBP"] │
│ EUR ┆ null ┆ null ┆ null ┆ [null, "EUR"] │
└─────────┴─────────┴─────────┴─────────┴──────────────────────┘
I'll update the answer if/when I find a way to exclude nulls.
A slower solution, but one which does exclude nulls:
In [44]: df.with_columns(pl.concat_list(pl.col('*')).apply(lambda x: list(set(i for i in x if i is not None))).alias('target'))
Out[44]:
shape: (3, 5)
┌─────────┬─────────┬─────────┬─────────┬────────────────┐
│ col1cur ┆ col2cur ┆ col3cur ┆ col4cur ┆ target │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ str ┆ str ┆ list[str] │
╞═════════╪═════════╪═════════╪═════════╪════════════════╡
│ EUR ┆ null ┆ EUR ┆ EUR ┆ ["EUR"] │
│ EUR ┆ EUR ┆ null ┆ GBP ┆ ["EUR", "GBP"] │
│ EUR ┆ null ┆ null ┆ null ┆ ["EUR"] │
└─────────┴─────────┴─────────┴─────────┴────────────────┘
I would propose concat_list with arr.eval.
df.drop('target').with_columns(
    pl.concat_list('*')
        .arr.eval(pl.element().unique().drop_nulls(), parallel=True)
        .alias('target')
)
This is similar to what @jqurious proposed in their comment.
Here is the result:
shape: (3, 5)
┌─────────┬─────────┬─────────┬─────────┬────────────────┐
│ col1cur ┆ col2cur ┆ col3cur ┆ col4cur ┆ target │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ str ┆ str ┆ list[str] │
╞═════════╪═════════╪═════════╪═════════╪════════════════╡
│ EUR ┆ null ┆ EUR ┆ EUR ┆ ["EUR"] │
│ EUR ┆ EUR ┆ null ┆ GBP ┆ ["EUR", "GBP"] │
│ EUR ┆ null ┆ null ┆ null ┆ ["EUR"] │
└─────────┴─────────┴─────────┴─────────┴────────────────┘
Edit: added parallel=True to arr.eval to run the evaluation in parallel (suggestion of @jqurious).

How to select the last non-null value from one column and also the value from another column on the same row in Polars?

Below is a non-working example in which I retrieve the last available 'Open', but how do I get the corresponding 'Time'?
sel = self.data.select([
    pl.col('Time'),
    pl.col('Open').drop_nulls().last()
])
For instance, you can use .filter() to select the rows that do not contain null and then take the last row.
Here is an example:
df = pl.DataFrame({
    "a": [1, 2, 3, 4, 5],
    "b": ["cat", None, "owl", None, None]
})
┌─────┬──────┐
│ a ┆ b │
│ --- ┆ --- │
│ i64 ┆ str │
╞═════╪══════╡
│ 1 ┆ cat │
│ 2 ┆ null │
│ 3 ┆ owl │
│ 4 ┆ null │
│ 5 ┆ null │
└─────┴──────┘
df.filter(
    pl.col("b").is_not_null()
).select(pl.all().last())
┌─────┬─────┐
│ a ┆ b │
│ --- ┆ --- │
│ i64 ┆ str │
╞═════╪═════╡
│ 3 ┆ owl │
└─────┴─────┘
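Applied to the original question, a sketch (assuming self.data is a DataFrame with 'Time' and 'Open' columns) could look like:
sel = (
    self.data
    .filter(pl.col('Open').is_not_null())
    .select([pl.col('Time'), pl.col('Open')])
    .tail(1)
)
Because the filter keeps whole rows, the 'Time' value comes from the same row as the last non-null 'Open'.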

How to form dynamic expressions without breaking on types

Is there any way to make dynamic polars expressions not break with errors?
Currently I'm just excluding columns by type, but I'm wondering if there is a better way.
For example, I have a df coming from parquet; if I just execute an expression on all columns, it might break for certain types. Instead I want to contain these errors and possibly return a default value like None or -1.
import polars as pl
df = pl.scan_parquet("/path/to/data/*.parquet")
print(df.schema)
# Prints: {'date_time': <class 'polars.datatypes.Datetime'>, 'incident': <class 'polars.datatypes.Utf8'>, 'address': <class 'polars.datatypes.Utf8'>, 'city': <class 'polars.datatypes.Utf8'>, 'zipcode': <class 'polars.datatypes.Int32'>}
Now if I form a generic expression on top of this, there is a chance it may fail. For example:
# Finding positive count across all columns
# Fails due to: exceptions.ComputeError: cannot compare Utf8 with numeric data
print(df.select((pl.all() > 0).count().prefix("__positive_count_")).collect())
# Finding unique counts across all columns
# Fails due to: pyo3_runtime.PanicException: 'unique_counts' not implemented for datetime[ns] data types
print(df.select(pl.all().unique_counts().prefix("__unique_count_")).collect())
# Finding empty counts across all columns
# Fails due to: exceptions.SchemaError: Series dtype Int32 != utf8
# Note: this could have been avoided by doing an explicit cast to string first
print(df.select((pl.all().str.lengths() > 0).count().prefix("__empty_count_")).collect())
I'll keep to things that work in lazy mode, as it appears that you are working in lazy mode with Parquet files.
Let's use this data as an example:
import polars as pl
from datetime import datetime
df = pl.DataFrame(
    {
        "col_int": [-2, -2, 0, 2, 2],
        "col_float": [-20.0, -10, 10, 20, 20],
        "col_date": pl.date_range(datetime(2020, 1, 1), datetime(2020, 5, 1), "1mo"),
        "col_str": ["str1", "str2", "", None, "str5"],
        "col_bool": [True, False, False, True, False],
    }
).lazy()
df.collect()
shape: (5, 5)
┌─────────┬───────────┬─────────────────────┬─────────┬──────────┐
│ col_int ┆ col_float ┆ col_date ┆ col_str ┆ col_bool │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ f64 ┆ datetime[ns] ┆ str ┆ bool │
╞═════════╪═══════════╪═════════════════════╪═════════╪══════════╡
│ -2 ┆ -20.0 ┆ 2020-01-01 00:00:00 ┆ str1 ┆ true │
├╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ -2 ┆ -10.0 ┆ 2020-02-01 00:00:00 ┆ str2 ┆ false │
├╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ 0 ┆ 10.0 ┆ 2020-03-01 00:00:00 ┆ ┆ false │
├╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 20.0 ┆ 2020-04-01 00:00:00 ┆ null ┆ true │
├╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 20.0 ┆ 2020-05-01 00:00:00 ┆ str5 ┆ false │
└─────────┴───────────┴─────────────────────┴─────────┴──────────┘
Using the col Expression
One feature of the col expression is that you can supply a datatype, or even a list of datatypes. For example, if we want to restrict our queries to floats, we can do the following:
df.select((pl.col(pl.Float64) > 0).sum().suffix("__positive_count_")).collect()
shape: (1, 1)
┌────────────────────────────┐
│ col_float__positive_count_ │
│ --- │
│ u32 │
╞════════════════════════════╡
│ 3 │
└────────────────────────────┘
(Note: (pl.col(...) > 0) creates a series of boolean values that need to be summed, not counted)
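To illustrate the difference, here is a small sketch with made-up suffixes:
df.select(
    [
        (pl.col(pl.Float64) > 0).sum().suffix("__sum"),
        (pl.col(pl.Float64) > 0).count().suffix("__count"),
    ]
).collect()
With the example data, the sum is 3 (the number of positive floats), while the count is 5 (one per value in the column).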
To include more than one datatype, you can supply a list of datatypes to col.
df.select(
    (pl.col([pl.Int64, pl.Float64]) > 0).sum().suffix("__positive_count_")
).collect()
shape: (1, 2)
┌──────────────────────────┬────────────────────────────┐
│ col_int__positive_count_ ┆ col_float__positive_count_ │
│ --- ┆ --- │
│ u32 ┆ u32 │
╞══════════════════════════╪════════════════════════════╡
│ 2 ┆ 3 │
└──────────────────────────┴────────────────────────────┘
You can also combine these into the same select statement if you'd like.
df.select(
    [
        (pl.col(pl.Utf8).str.lengths() == 0).sum().suffix("__empty_count"),
        pl.col(pl.Utf8).is_null().sum().suffix("__null_count"),
        (pl.col([pl.Float64, pl.Int64]) > 0).sum().suffix("_positive_count"),
    ]
).collect()
shape: (1, 4)
┌──────────────────────┬─────────────────────┬──────────────────────────┬────────────────────────┐
│ col_str__empty_count ┆ col_str__null_count ┆ col_float_positive_count ┆ col_int_positive_count │
│ --- ┆ --- ┆ --- ┆ --- │
│ u32 ┆ u32 ┆ u32 ┆ u32 │
╞══════════════════════╪═════════════════════╪══════════════════════════╪════════════════════════╡
│ 1 ┆ 1 ┆ 3 ┆ 2 │
└──────────────────────┴─────────────────────┴──────────────────────────┴────────────────────────┘
The Cookbook has a handy list of datatypes.
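If different dtypes need different operations, another option (not from the original answer, just a sketch) is to build the expression list from the schema, so that each column only receives operations valid for its type:
exprs = []
for name, dtype in df.schema.items():
    if dtype in (pl.Int32, pl.Int64, pl.Float64):
        exprs.append((pl.col(name) > 0).sum().suffix("__positive_count"))
    elif dtype == pl.Utf8:
        exprs.append((pl.col(name).str.lengths() == 0).sum().suffix("__empty_count"))

df.select(exprs).collect()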
Using the exclude expression
Another handy trick is to use the exclude expression. With this, we can select all columns except columns of certain datatypes. For example:
df.select(
    [
        pl.exclude(pl.Utf8).max().suffix("_max"),
        pl.exclude([pl.Utf8, pl.Boolean]).min().suffix("_min"),
    ]
).collect()
shape: (1, 7)
┌─────────────┬───────────────┬─────────────────────┬──────────────┬─────────────┬───────────────┬─────────────────────┐
│ col_int_max ┆ col_float_max ┆ col_date_max ┆ col_bool_max ┆ col_int_min ┆ col_float_min ┆ col_date_min │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ f64 ┆ datetime[ns] ┆ u32 ┆ i64 ┆ f64 ┆ datetime[ns] │
╞═════════════╪═══════════════╪═════════════════════╪══════════════╪═════════════╪═══════════════╪═════════════════════╡
│ 2 ┆ 20.0 ┆ 2020-05-01 00:00:00 ┆ 1 ┆ -2 ┆ -20.0 ┆ 2020-01-01 00:00:00 │
└─────────────┴───────────────┴─────────────────────┴──────────────┴─────────────┴───────────────┴─────────────────────┘
Unique counts
One caution: unique_counts results in Series of varying lengths.
df.select(pl.col("col_int").unique_counts().prefix(
"__unique_count_")).collect()
shape: (3, 1)
┌────────────────────────┐
│ __unique_count_col_int │
│ --- │
│ u32 │
╞════════════════════════╡
│ 2 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 1 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2 │
└────────────────────────┘
df.select(pl.col("col_float").unique_counts().prefix(
"__unique_count_")).collect()
shape: (4, 1)
┌──────────────────────────┐
│ __unique_count_col_float │
│ --- │
│ u32 │
╞══════════════════════════╡
│ 1 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 1 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 1 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2 │
└──────────────────────────┘
As such, these should not be combined into the same result: every column/Series of a DataFrame must have the same length.
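If a single summary number per column is all you need, n_unique returns one value per column and therefore combines cleanly into a single result (a sketch, assuming n_unique is implemented for every dtype involved):
df.select(pl.all().n_unique().prefix("__n_unique_")).collect()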

polars outer join default null value

https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.DataFrame.join.html
Can I specify the default NULL value for outer joins? Like 0?
The join method does not currently have an option for setting a default value for nulls. However, there is an easy way to accomplish this.
Let's say we have this data:
import polars as pl
df1 = pl.DataFrame({"key": ["a", "b", "d"], "var1": [1, 1, 1]})
df2 = pl.DataFrame({"key": ["a", "b", "c"], "var2": [2, 2, 2]})
df1.join(df2, on="key", how="outer")
shape: (4, 3)
┌─────┬──────┬──────┐
│ key ┆ var1 ┆ var2 │
│ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 │
╞═════╪══════╪══════╡
│ a ┆ 1 ┆ 2 │
├╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ b ┆ 1 ┆ 2 │
├╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ c ┆ null ┆ 2 │
├╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ d ┆ 1 ┆ null │
└─────┴──────┴──────┘
To replace the null values with a different default, simply add fill_null:
df1.join(df2, on="key", how="outer").with_column(pl.all().fill_null(0))
shape: (4, 3)
┌─────┬──────┬──────┐
│ key ┆ var1 ┆ var2 │
│ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 │
╞═════╪══════╪══════╡
│ a ┆ 1 ┆ 2 │
├╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ b ┆ 1 ┆ 2 │
├╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ c ┆ 0 ┆ 2 │
├╌╌╌╌╌┼╌╌╌╌╌╌┼╌╌╌╌╌╌┤
│ d ┆ 1 ┆ 0 │
└─────┴──────┴──────┘
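If you want to restrict the fill to specific columns (for example, to avoid touching a string key column in a wider frame), you can name them explicitly. A sketch with the same data:
df1.join(df2, on="key", how="outer").with_columns(
    [pl.col(["var1", "var2"]).fill_null(0)]
)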

Polars: how to add a column in front?

What would be the most idiomatic (and efficient) way to add a column at the front of a polars data frame? Something like .with_column, but adding it at index 0?
You can select the columns in the order you want them in your new DataFrame.
df = pl.DataFrame({
    "a": [1, 2, 3],
    "b": [True, None, False]
})
df.select([
    pl.lit("foo").alias("z"),
    pl.all()
])
shape: (3, 3)
┌─────┬─────┬───────┐
│ z ┆ a ┆ b │
│ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ bool │
╞═════╪═════╪═══════╡
│ foo ┆ 1 ┆ true │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ foo ┆ 2 ┆ null │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌╌╌┤
│ foo ┆ 3 ┆ false │
└─────┴─────┴───────┘
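The same pattern works for a derived column; selecting it before pl.all() puts it first. A sketch (the column name is made up):
df.select([
    (pl.col("a") * 2).alias("a_doubled"),
    pl.all(),
])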