I need to generate all dates between two given dates.
My predicate date_between(DateLow, DateHigh, X) works correctly:
?- date_between(date(2020,2,15), date(2020,2,25), X).
X = date(2020, 2, 15) ;
X = date(2020, 2, 16) ;
....
X = date(2020, 2, 25) .
But I think the predicate is too clumsy. Is there another, more elegant approach that does the same?
Should I translate back and forth between dates and seconds (stamps)?
Do I have to compare dates by converting them to seconds?
Here is my code:
date_between(DateLow, DateHigh, DateLow) :-
    datestd_stamp(DateLow, StampLow),
    datestd_stamp(DateHigh, StampHigh),
    StampLow =< StampHigh.
date_between(DateLow, DateHigh, X) :-
    datestd_stamp(DateLow, StampLow),
    datestd_stamp(DateHigh, StampHigh),
    StampLow < StampHigh,
    DateLow = date(Y,M,D),
    Dnxt is D + 1,
    % date_time_stamp/2 normalizes out-of-range day numbers,
    % so D+1 rolls over month and year boundaries correctly
    date_time_stamp(date(Y,M,Dnxt,0,0,0,0,-,-), StampNext),
    stamp_date_time(StampNext, Dat, 0),
    date_time_value(date, Dat, DateNxt),
    date_between(DateNxt, DateHigh, X).

datestd_stamp(Data, Stamp) :-
    Data = date(Y,M,D),
    date_time_stamp(date(Y,M,D,0,0,0,0,-,-), StampTmp),
    % round/1 is an arithmetic function, so it is used with is/2
    Stamp is round(StampTmp).
I tried to improve the predicate; it has become simpler and faster, and the execution time is clearly reduced.
Old version:
?- time((bagof(X, (date_between(date(2020,1,1), date(2100,12,31), X)), Ls))).
% 680,466 inferences, 0.149 CPU in 0.149 seconds (100% CPU, 4563901 Lips)
Ls = [date(2020, 1, 1), date(2020, 1, 2), date(2020, 1, 3),
New version:
?- time((bagof(X, (date_between2(date(2020,1,1), date(2100,12,31), X)), Ls))).
% 207,106 inferences, 0.066 CPU in 0.066 seconds (100% CPU, 3157900 Lips)
Ls = [date(2020, 1, 1), date(2020, 1, 2), date(2020, 1, 3),
Here is the new version of the predicate:
date_between2(DateLow, DateHigh, DateLow) :-
    % @=< is the standard order of terms; on date(Y,M,D) terms it
    % coincides with chronological order, so no stamp conversion is needed
    DateLow @=< DateHigh.
date_between2(DateLow, DateHigh, X) :-
    DateLow @< DateHigh,
    DateLow = date(Y,M,D),
    Dnxt is D + 1,
    date_time_stamp(date(Y,M,Dnxt,0,0,0,0,-,-), StampNext),
    stamp_date_time(StampNext, Dat, 0),
    date_time_value(date, Dat, DateNxt),
    date_between2(DateNxt, DateHigh, X).
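The stamp round-trip asked about above is essentially the "convert each bound to a number once, then enumerate an integer range" pattern. For illustration only (a Python sketch, with toordinal() playing the role of the stamp):

from datetime import date

def dates_between(low: date, high: date):
    # convert each bound to a day number once, then walk the integer range
    for ordinal in range(low.toordinal(), high.toordinal() + 1):
        yield date.fromordinal(ordinal)

# list(dates_between(date(2020, 2, 15), date(2020, 2, 25)))
# -> [datetime.date(2020, 2, 15), ..., datetime.date(2020, 2, 25)]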
I am trying to perform the following operation in Polars.
Values in column B below 80 should be scaled between 1 and 4, whereas anything at or above 80 should be set to 5.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df_pandas = pd.DataFrame(
    {
        "A": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
        "B": [50, 300, 80, 12, 105, 78, 66, 42, 61.5, 35],
    }
)

test_scaler = MinMaxScaler(feature_range=(1, 4))
df_pandas.loc[df_pandas['B'] < 80, 'Test'] = test_scaler.fit_transform(
    df_pandas.loc[df_pandas['B'] < 80, 'B'].values.reshape(-1, 1)
)
df_pandas = df_pandas.fillna(5)
This is what I did with Polars:
import numpy as np
import polars as pl

df = pl.from_pandas(df_pandas)

# dt is a dictionary of column name -> list of values, for the rows with B < 80
dt = df.filter(
    pl.col('B') < 80
).to_dict(as_series=False)

dt_scale = list(
    test_scaler.fit_transform(
        np.array(dt['B']).reshape(-1, 1)
    ).reshape(-1)  # reshape back to one dimension
)

# reassign the scaled values to dictionary dt
dt['B'] = dt_scale
dt_scale_df = pl.DataFrame(dt)

# join back on "A" and fill the rows that were filtered out with 5
dummy = df.join(
    dt_scale_df, how="left", on="A"
).fill_null(5)
dummy = dummy.rename({"B_right": "Test"})
Result:

 A      B      Test
 1     50.0    2.727273
 2    300.0    5.000000
 3     80.0    5.000000
 4     12.0    1.000000
 5    105.0    5.000000
 6     78.0    4.000000
 7     66.0    3.454545
 8     42.0    2.363636
 9     61.5    3.250000
10     35.0    2.045455
Is there a better approach for this?
Alright, I have three examples for you that should help; the last one should be preferred.
Because you only want to apply your scaler to part of a column, we should ensure we only send that part of the data to the scaler. This can be done by:
window function over a partition
partition_by
when -> then -> otherwise + min_max expression
Window function over partition
This requires a Python function that is applied over the partitions. Inside the function we then have to check which partition we are in and handle it accordingly.
import polars as pl
from sklearn.preprocessing import MinMaxScaler

df = pl.from_pandas(df_pandas)
min_max_sc = MinMaxScaler((1, 4))

def my_scaler(s: pl.Series) -> pl.Series:
    # the partition where B >= 80 is mapped to the constant 5
    if s.len() > 0 and s[0] >= 80:
        out = (s * 0 + 5)
    else:
        out = pl.Series(min_max_sc.fit_transform(s.to_numpy().reshape(-1, 1)).flatten())
    # ensure all types are the same
    return out.cast(pl.Float64)

df.with_column(
    pl.col("B").apply(my_scaler).over(pl.col("B") < 80).alias("Test")
)
partition_by
This partitions the original dataframe into a dictionary holding the different partitions, and we then only modify the partitions that need it. Note that concatenating the partitions back together does not preserve the original row order.
parts = (df
    .with_column((pl.col("B") < 80).alias("part"))
    .partition_by("part", as_dict=True)
)

parts[True] = parts[True].with_column(
    pl.col("B").map(
        lambda s: pl.Series(min_max_sc.fit_transform(s.to_numpy().reshape(-1, 1)).flatten())
    ).alias("Test")
)

parts[False] = parts[False].with_column(
    pl.lit(5.0).alias("Test")
)

pl.concat(list(parts.values())).select(pl.all().exclude("part"))
when -> then -> otherwise + min_max expression
This one I like best. We can write a function that creates a Polars expression implementing the min-max scaling you need. This will give the best performance.
def min_max_scaler(col: str, predicate: pl.Expr):
    x = pl.col(col)
    # min and max are computed only over the rows matching the predicate
    x_min = x.filter(predicate).min()
    x_max = x.filter(predicate).max()
    # * 3 + 1 rescales [0, 1] to the range 1 - 4
    return (x - x_min) / (x_max - x_min) * 3 + 1
predicate = pl.col("B") < 80
df.with_column(
    pl.when(predicate)
    .then(min_max_scaler("B", predicate))
    .otherwise(5).alias("Test")
)
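As a sanity check against the table in the question: the predicate selects x_min = 12 and x_max = 78, so for example B = 50 maps to (50 - 12) / (78 - 12) * 3 + 1 ≈ 2.727, matching the Test column above.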
I am doing calculations on date and time using floating-point numbers. However, I notice that the calculated value is not as expected. Here is the code snippet.
Dim sTemp As Single, sLineDateTime As Single
Dim strTemp As String

strTemp = "2019-02-25"
sLineDateTime = DateValue(DateSerial(Left(strTemp, 4), Mid(strTemp, 6, 2), Right(strTemp, 2)))
strTemp = "21:47:42"
REM TODO Time is being rounded off. Check
sTemp = TimeValue(TimeSerial(Left(strTemp, 2), Mid(strTemp, 4, 2), Right(strTemp, 2)))
sLineDateTime = sLineDateTime + sTemp
The output of the above computation is sLineDateTime = 43521.906250, which when converted to date/time is "Mon 25-Feb-2019 09:45:00 PM". The expected value is 43521.908125, which translates to an error of 0.001875000001746 or, in other words, 2 minutes 42 seconds. Any suggestions on how I can overcome this problem?
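The size of the error matches what a 32-bit Single can represent: near 43521, adjacent Single values are 2^-8 = 0.00390625 days apart (about 5.6 minutes), so the stored value is snapped to the nearest representable one. A quick way to see this, sketched in Python with NumPy rather than Basic:

import numpy as np

expected = 43521.908125              # 2019-02-25 21:47:42 as a date serial
as_single = np.float32(expected)     # what a 32-bit Single actually stores

print(as_single)                     # 43521.906 -- the observed value
print(expected - float(as_single))   # ~0.001875 -> 2 minutes 42 seconds
print(np.spacing(as_single))         # 0.00390625 -- gap between Singles here

A 64-bit type (Double, or the Date type as below) has more than enough precision for second-level resolution.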
I did try this before posting and it did not help. I switched to using the Date type, which is working perfectly for me now. Here is the code.
Dim dLineDateTime As Date
Dim strTemp As String

strTemp = "2019-02-25"
dLineDateTime = DateSerial(Left(strTemp, 4), Mid(strTemp, 6, 2), Right(strTemp, 2))
strTemp = "21:47:42"
dLineDateTime = dLineDateTime + TimeSerial(Left(strTemp, 2), Mid(strTemp, 4, 2), Right(strTemp, 2))
I want to aggregate a monthly series at the quarterly frequency, for which R has ts and aggregate() (see the first answer on this thread) and pandas has df.resample("Q").sum() (see this question). Does Julia offer something similar?
Appendix: my current solution uses a function that maps a date to the first day of its quarter, plus split-apply-combine:
"""
month_to_quarter(date)
Returns the date corresponding to the first day of the quarter enclosing date
# Examples
```jldoctest
julia> Date(1990, 1, 1) == RED.month_to_quarter(Date(1990, 2, 1))
true
julia> Date(1990, 1, 1) == RED.month_to_quarter(Date(1990, 1, 1))
true
julia> Date(1990, 1, 1) == RED.month_to_quarter(Date(1990, 2, 25))
true
```
"""
function month_to_quarter(date::Date)
new_month = 1 + 3 * floor((Dates.month(date) - 1) / 3)
return Date(Dates.year(date), new_month, 1)
end
"""
monthly_to_quarterly(monthly_df)
Aggregates a monthly data frame to the quarterly frequency. The data frame should have a :DATE column.
# Examples
```jldoctest
julia> monthly = convert(DataFrame, hcat(collect([Dates.Date(1990, m, 1) for m in 1:3]), [1; 2; 3]));
julia> rename!(monthly, :x1 => :DATE);
julia> rename!(monthly, :x2 => :value);
julia> quarterly = RED.monthly_to_quarterly(monthly);
julia> quarterly[:value][1]
2.0
julia> length(quarterly[:value])
1
```
"""
function monthly_to_quarterly(monthly::DataFrame)
    # months that open a quarter: 1, 4, 7, 10
    quarter_months = collect(1:3:10)
    # deep copy so the input data frame is left untouched
    monthly_copy = deepcopy(monthly)
    # drop initial rows until the series starts on a quarter month
    while !in(Dates.month(monthly_copy[:DATE][1]), quarter_months)
        # verify that something is left to pop
        @assert 1 <= length(monthly_copy[:DATE])
        monthly_copy = monthly_copy[2:end, :]
    end
    # drop trailing rows until the series ends on the last month of a quarter
    while !in(Dates.month(monthly_copy[:DATE][end]), 2 + quarter_months)
        monthly_copy = monthly_copy[1:end-1, :]
    end
    # change the month of each date to the first month of its quarter
    monthly_copy[:DATE] = month_to_quarter.(monthly_copy[:DATE])
    # split-apply-combine: average the months within each quarter
    quarterly = by(monthly_copy, :DATE, df -> mean(df[:value]))
    # rename the aggregated column
    rename!(quarterly, :x1 => :value)
    return quarterly
end
I couldn't find such a function in the docs. Here's a more DataFrames.jl-ish and more succinct version of your own answer:
using DataFrames

# copy-pasted your own function
function month_to_quarter(date::Date)
    new_month = 1 + 3 * div(Dates.month(date) - 1, 3)
    return Date(Dates.year(date), new_month, 1)
end

# the data
r = collect(1:6)
monthly = DataFrame(date=[Dates.Date(1990, m, 1) for m in r],
                    val=r);

# the functionality
monthly[:quarters] = month_to_quarter.(monthly[:date])
_aggregated = by(monthly, :quarters, df -> DataFrame(S = sum(df[:val])))
@show monthly
@show _aggregated
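For this sample input, the two quarters of 1990 are aggregated separately: _aggregated should hold 1990-01-01 with S = 1 + 2 + 3 = 6 and 1990-04-01 with S = 4 + 5 + 6 = 15.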
I was thinking about using RxJava2Extensions, but for some reason these functions don't return Maybe/Single<Double>, which is a bit strange, IMO.
MathFlowable.averageDouble(Flowable.range(1, 10))
    .test()
    .assertResult(5.5);

Flowable.just(5, 1, 3, 2, 4)
    .to(MathFlowable::min)
    .test()
    .assertResult(1);
I've got a list of integers, e.g. [1, 2, 3, 4, 5, 10].
And I've got another integer N, for example N = 19.
I want to check whether my integer can be represented as a sum of some of the numbers in my list:
19 = 10 + 5 + 4
or
19 = 10 + 4 + 3 + 2
Every number from the list can be used at most once. N can be up to 2,000 or more, and the size of the list can reach 200 integers.
Is there a good way to solve this problem?
Four and a half years later, this question was answered by Jonathan.
I want to post two implementations (brute force and Jonathan's) in Python and their performance comparison.
def check_sum_bruteforce(numbers, n):
    # This brute-force approach could be improved (for some cases) by
    # returning True as soon as the needed sum is found.
    sums = []
    for number in numbers:
        for sum_ in sums[:]:
            sums.append(sum_ + number)
        sums.append(number)
    return n in sums


def check_sum_optimized(numbers, n):
    sums1, sums2 = [], []
    numbers1 = numbers[:len(numbers) // 2]
    numbers2 = numbers[len(numbers) // 2:]
    for sums, numbers_ in ((sums1, numbers1), (sums2, numbers2)):
        for number in numbers_:
            for sum_ in sums[:]:
                sums.append(sum_ + number)
            sums.append(number)
    for sum_ in sums1:
        if n - sum_ in sums2:
            return True
    # also handle sums drawn entirely from one half
    return n in sums1 or n in sums2
assert check_sum_bruteforce([1, 2, 3, 4, 5, 10], 19)
assert check_sum_optimized([1, 2, 3, 4, 5, 10], 19)

import timeit

print(
    "Bruteforce approach (10000 times):",
    timeit.timeit(
        'check_sum_bruteforce([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 200)',
        number=10000,
        globals=globals()
    )
)
print(
    "Optimized approach by Jonathan (10000 times):",
    timeit.timeit(
        'check_sum_optimized([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 200)',
        number=10000,
        globals=globals()
    )
)
Output (the numbers are seconds):
Bruteforce approach (10000 times): 1.830944365834205
Optimized approach by Jonathan (10000 times): 0.34162875449254027
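For the bounds stated in the question (N up to about 2,000 and up to 200 numbers), a classic subset-sum dynamic program avoids the exponential subset enumeration entirely, running in O(len(numbers) * n) time; a minimal sketch, assuming positive integers as in the question (the name check_sum_dp is introduced here for illustration):

def check_sum_dp(numbers, n):
    # reachable[s] is True when sum s can be built from the numbers seen so far
    reachable = [True] + [False] * n
    for number in numbers:
        # iterate downwards so each number is used at most once
        for s in range(n, number - 1, -1):
            if reachable[s - number]:
                reachable[s] = True
    return reachable[n]

assert check_sum_dp([1, 2, 3, 4, 5, 10], 19)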
The brute-force approach requires generating 2^(array_size) - 1 non-empty subsets to be summed and compared against the target N.
The run time can be dramatically improved by splitting the problem in two. Store, in sets, all of the possible sums for one half of the array and for the other half separately. N is representable if some number n in one set has its complement N - n in the other set, or if N itself appears in either set.
This optimization brings the number of generated sums down to approximately 2^(array_size/2) - 1 + 2^(array_size/2) - 1 = 2^(array_size/2 + 1) - 2.
The exponent is roughly halved.
Here is a C++ implementation using this idea.
#include <bits/stdc++.h>
using namespace std;

bool sum_search(vector<int> myarray, int N) {
    //values for splitting the array in two
    int right = myarray.size() - 1, middle = (myarray.size() - 1) / 2;
    set<int> all_possible_sums1, all_possible_sums2;
    //iterate over the first half of the array
    for (int i = 0; i < middle; i++) {
        //buffer set that will hold the new possible sums
        set<int> buffer_set;
        //every value currently in the set is used to make new possible sums
        for (set<int>::iterator it = all_possible_sums1.begin(); it != all_possible_sums1.end(); it++)
            buffer_set.insert(myarray[i] + *it);
        all_possible_sums1.insert(myarray[i]);
        //transfer the buffer into the main set
        for (set<int>::iterator it = buffer_set.begin(); it != buffer_set.end(); it++)
            all_possible_sums1.insert(*it);
    }
    //iterate over the second half of the array
    for (int i = middle; i < right + 1; i++) {
        set<int> buffer_set;
        for (set<int>::iterator it = all_possible_sums2.begin(); it != all_possible_sums2.end(); it++)
            buffer_set.insert(myarray[i] + *it);
        all_possible_sums2.insert(myarray[i]);
        for (set<int>::iterator it = buffer_set.begin(); it != buffer_set.end(); it++)
            all_possible_sums2.insert(*it);
    }
    //handle sums drawn entirely from one half
    if (all_possible_sums1.count(N) || all_possible_sums2.count(N))
        return true;
    //for every element in the first set, check if the second set has the complement that makes N
    for (set<int>::iterator it = all_possible_sums1.begin(); it != all_possible_sums1.end(); it++)
        if (all_possible_sums2.find(N - *it) != all_possible_sums2.end())
            return true;
    return false;
}

int main() {
    cout << boolalpha << sum_search({1, 2, 3, 4, 5, 10}, 19) << endl; //true
}
Ugly and brute force approach:
a = [1, 2, 3, 4, 5, 10]
b = []
# consider combinations of every size from 1 up to the full list
1.upto(a.size) do |c|
  b << a.combination(c).select { |d| d.reduce(&:+) == 19 }
end
puts b.flatten(1).inspect