I'm not sure why this is happening. In PySpark, I read in two dataframes and print out their column names, and they are as expected, but then when I do a SQL join I get an error that it cannot resolve a column name given the inputs. I have simplified the join just to get it working, but I will need to add more join conditions, which is why I'm using SQL (I will be adding: "and b.mnvr_bgn < a.idx_trip_id and b.mnvr_end > a.idx_trip_data"). It appears that the column 'device_id' is being renamed to '_col7' in the df mnvr_temp_idx_prev_temp.
mnvr_temp_idx_prev = mnvr_3.select('device_id', 'mnvr_bgn', 'mnvr_end')
print mnvr_temp_idx_prev.columns
['device_id', 'mnvr_bgn', 'mnvr_end']
raw_data_filtered = raw_data.select('device_id', 'trip_id', 'idx').groupby('device_id', 'trip_id').agg(F.max('idx').alias('idx_trip_end'))
print raw_data_filtered.columns
['device_id', 'trip_id', 'idx_trip_end']
raw_data_filtered.registerTempTable('raw_data_filtered_temp')
mnvr_temp_idx_prev.registerTempTable('mnvr_temp_idx_prev_temp')
test = sqlContext.sql('SELECT a.device_id, a.idx_trip_end, b.mnvr_bgn, b.mnvr_end \
FROM raw_data_filtered_temp as a \
INNER JOIN mnvr_temp_idx_prev_temp as b \
ON a.device_id = b.device_id')
Traceback (most recent call last): AnalysisException: u"cannot resolve 'b.device_id' given input columns: [_col7, trip_id, device_id, mnvr_end, mnvr_bgn, idx_trip_end]; line 1 pos 237"
Any help is appreciated!
I would recommend renaming the field 'device_id' in at least one of the dataframes. I modified your query a bit and tested it (in Scala). The query below works:
test = sqlContext.sql("select * FROM raw_data_filtered_temp a INNER JOIN mnvr_temp_idx_prev_temp b ON a.device_id = b.device_id")
[device_id: string, mnvr_bgn: string, mnvr_end: string, device_id: string, trip_id: string, idx_trip_end: string]
Now, if you do a 'select *' as in the statement above, it will work. But if you try to select 'device_id', you will get the error "Reference 'device_id' is ambiguous". As you can see in the 'test' dataframe definition above, it has two fields with the same name (device_id). So to avoid this, I recommend changing the field name in one of the dataframes.
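To illustrate the ambiguity, here is a minimal sketch (reusing the temp tables registered above; untested, for demonstration only):
test = sqlContext.sql("select * FROM raw_data_filtered_temp a "
                      "INNER JOIN mnvr_temp_idx_prev_temp b "
                      "ON a.device_id = b.device_id")
# The join itself succeeds, but the result now has two 'device_id' columns,
# so the bare name can no longer be resolved:
# test.select('device_id')  # AnalysisException: Reference 'device_id' is ambiguous
# Qualifying the column inside the SQL statement avoids the ambiguity:
ok = sqlContext.sql("select a.device_id, b.mnvr_bgn, b.mnvr_end "
                    "FROM raw_data_filtered_temp a "
                    "INNER JOIN mnvr_temp_idx_prev_temp b "
                    "ON a.device_id = b.device_id")
Renaming the column as recommended removes the collision entirely: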
mnvr_temp_idx_prev = mnvr_3.select('device_id', 'mnvr_bgn', 'mnvr_end') \
    .withColumnRenamed("device_id", "device")
raw_data_filtered = raw_data.select('device_id', 'trip_id', 'idx').groupby('device_id', 'trip_id').agg(F.max('idx').alias('idx_trip_end'))
Now join using either the DataFrame API or sqlContext:
//using dataframes with multiple conditions
val test = mnvr_temp_idx_prev.join(raw_data_filtered,$"device" === $"device_id"
&& $"mnvr_bgn" < $"idx_trip_id","inner")
//in SQL Context
test = sqlContext.sql("select * FROM raw_data_filtered_temp a INNER JOIN mnvr_temp_idx_prev_temp b ON a.device_id = b.device and a. idx_trip_id < b.mnvr_bgn")
The queries above should solve your problem. And if your data set is very large, I would recommend not using '>' or '<' operators in the join condition, as that can cause a cross join, which is a costly operation on a large data set. Use them in a WHERE condition instead.
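Putting it together, a sketch of the full query with the range predicates moved into the WHERE clause (assuming mnvr_temp_idx_prev_temp was re-registered after the rename, and keeping the extra conditions exactly as the question states them; untested):
test = sqlContext.sql('SELECT a.device_id, a.idx_trip_end, b.mnvr_bgn, b.mnvr_end \
FROM raw_data_filtered_temp as a \
INNER JOIN mnvr_temp_idx_prev_temp as b \
ON a.device_id = b.device \
WHERE b.mnvr_bgn < a.idx_trip_id AND b.mnvr_end > a.idx_trip_data')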
Related
Python doesn't like the ampersand below.
I get the error: & is not a supported operation for types str and str. Please review your code.
Any idea how to get this right? I've never tried to join on more than one column with aliased tables. Thx!!
df_initial_sample = df_crm.alias('crm').join(df_cngpt.alias('cng'), on= (("crm.id=cng.id") & ("crm.cpid = cng.cpid")), how = "inner")
Try using it as below -
df_initial_sample = df_crm.alias('crm').join(df_cngpt.alias('cng'), on=["id", "cpid"], how="inner")
Your join condition is overcomplicated. It can be as simple as this:
df_initial_sample = df_crm.join(df_cngpt, on=['id', 'cpid'], how = 'inner')
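For context, the original error arises because "crm.id=cng.id" is a plain Python string, and & is not defined for two strings; join conditions must be Column expressions. If you do need the aliased form (e.g. to add non-equi conditions later), a sketch with explicit Column objects:
import pyspark.sql.functions as F

df_initial_sample = df_crm.alias('crm').join(
    df_cngpt.alias('cng'),
    on=(F.col('crm.id') == F.col('cng.id')) & (F.col('crm.cpid') == F.col('cng.cpid')),
    how='inner')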
I have a DF A_DF which has, among others, two columns, say COND_B and COND_C. Then I have two different DFs: B_DF with a COND_B column and C_DF with a COND_C column.
Now I would like to filter A_DF where the value matches in one OR the other. Something like:
df = A_DF.filter((A_DF.COND_B == B_DF.COND_B) | (A_DF.COND_C == C_DF.COND_C))
But I found out that this is not possible.
EDIT
error: Attribute CON_B#264,COND_C#6 is missing from the schema: [... COND_B#532, COND_C#541 ]. Attribute(s) with the same name appear in the operation: COND_B,COND_C. Please check if the right attribute(s) are used.
It looks like I can only filter within the same DF, because of the #numbers added on the fly.
So I first tried to build lists from B_DF and C_DF and filter based on those, but it was too expensive to use collect() on 100M records.
So I tried:
AB_DF = A_DF.join(B_DF, 'COND_B', 'left_semi')
AC_DF = A_DF.join(C_DF, 'COND_C', 'left_semi')
df = AB_DF.unionAll(AC_DF).dropDuplicates()
I used dropDuplicates() to remove duplicate records where both conditions were true. But even with that I got some unexpected results.
Is there some other, smoother solution to do this simply? Something like an EXISTS statement in SQL?
EDIT2
I tried SQL based on @mck's response:
e.createOrReplaceTempView('E')
b.createOrReplaceTempView('B')
p.createOrReplaceTempView('P')
df = spark.sql("""select * from E where exists (select 1 from B where E.BUSIPKEY = B.BUSIPKEY) or exists (select 1 from P where E.PCKEY = P.PCKEY)""")
my_output.write_dataframe(df)
with error:
Traceback (most recent call last):
File "/myproject/abc.py", line 45, in my_compute_function
df = spark.sql("""select * from E where exists (select 1 from B where E.BUSIPKEY = B.BUSIPKEY) or exists (select 1 from P where E.PCKEY = P.PCKEY)""")
TypeError: sql() missing 1 required positional argument: 'sqlQuery'
Thanks a lot!
Your idea of using exists should work. You can do:
A_DF.createOrReplaceTempView('A')
B_DF.createOrReplaceTempView('B')
C_DF.createOrReplaceTempView('C')
df = spark.sql("""
select * from A
where exists (select 1 from B where A.COND_B = B.COND_B)
or exists (select 1 from C where A.COND_C = C.COND_C)
""")
I have the following select statement in ABAP:
SELECT munic~mandt VREFER BIS AB ZZELECDATE ZZCERTDATE CONSYEAR ZDIMO ZZONE_M ZZONE_T USAGE_M USAGE_T M2MC M2MT M2RET EXEMPTMCMT EXEMPRET CHARGEMCMT
INTO corresponding fields of table GT_INSTMUNIC_F
FROM ZCI00_INSTMUNIC AS MUNIC
INNER JOIN EVER AS EV on
MUNIC~POD = EV~VREFER(9).
"where EV~BSTATUS = '14' or EV~BSTATUS = '32'.
My problem with the above statement is that it does not recognize the substring/offset operation in the ON clause. If I remove the '(9)' then it recognizes the field; otherwise it gives the error:
Field ev~refer is unknown. It is neither in one of the specified tables nor defined by a "DATA" statement.
I have also tried doing something similar in the WHERE clause, receiving a similar error:
LOOP AT gt_instmunic.
clear wa_gt_instmunic_f.
wa_gt_instmunic_f-mandt = gt_instmunic-mandt.
wa_gt_instmunic_f-bis = gt_instmunic-bis.
wa_gt_instmunic_f-ab = gt_instmunic-ab.
wa_gt_instmunic_f-zzelecdate = gt_instmunic-zzelecdate.
wa_gt_instmunic_f-ZZCERTDATE = gt_instmunic-ZZCERTDATE.
wa_gt_instmunic_f-CONSYEAR = gt_instmunic-CONSYEAR.
wa_gt_instmunic_f-ZDIMO = gt_instmunic-ZDIMO.
wa_gt_instmunic_f-ZZONE_M = gt_instmunic-ZZONE_M.
wa_gt_instmunic_f-ZZONE_T = gt_instmunic-ZZONE_T.
wa_gt_instmunic_f-USAGE_M = gt_instmunic-USAGE_M.
wa_gt_instmunic_f-USAGE_T = gt_instmunic-USAGE_T.
temp_pod = gt_instmunic-pod.
SELECT vrefer
FROM ever
INTO wa_gt_instmunic_f-vrefer
WHERE ( vrefer(9) LIKE temp_pod ). " PROBLEM WITH SUBSTRING
"AND ( BSTATUS = '14' OR BSTATUS = '32' ).
ENDSELECT.
WRITE: / sy-dbcnt.
WRITE: / 'wa is: ', wa_gt_instmunic_f.
WRITE: / 'wa-ever is: ', wa_gt_instmunic_f-vrefer.
APPEND wa_gt_instmunic_f TO gt_instmunic_f.
WRITE: / wa_gt_instmunic_f-vrefer.
ENDLOOP.
itab_size = lines( gt_instmunic_f ).
WRITE: / 'Internal table populated with', itab_size, ' lines'.
The basic task I want to implement is to modify a specific field in one table,
pulling values from another. They have a common field ( pod = vrefer(9) ). Thanks in advance for your time.
If you are on a late enough NetWeaver version (it works on 7.51), you can use the OpenSQL function LEFT or SUBSTRING. Your query would look something like this:
SELECT munic~mandt, vrefer, bis, ab, zzelecdate, zzcertdate, consyear, zdimo, zzone_m, zzone_t, usage_m, usage_t, m2mc, m2mt, m2ret, exemptmcmt, exempret, chargemcmt
FROM zci00_instmunic AS munic
INNER JOIN ever AS ev
ON munic~pod EQ left( ev~vrefer, 9 )
INTO CORRESPONDING FIELDS OF TABLE @gt_instmunic_f.
Note that the INTO clause needs to move to the end of the command as well.
field(9) is a substring/offset operation that is processed by the ABAP runtime environment and cannot be translated into a database-level SQL statement (at least not at the moment, but I'd be surprised if it ever will be). Your best bet is either to select the datasets separately and merge them manually (if both are approximately equally large) or to pre-select one and use a FOR ALL ENTRIES / IN clause.
They have a common field ( pod = vrefer(9) )
This is a wrong assumption: they are not both fields, but rather a field and something else (a substring expression over a field).
If you really need to do that task through SQL, I suggest you check native SQL functions like SUBSTRING and see whether you can manage to use them within an EXEC SQL block or (better) the CL_SQL* classes.
Suppose I have an activity table and a subscription table. Each activity has an array of generic references to some other object, and each subscription has a single generic reference to some other object in the same set.
CREATE TABLE activity (
id serial primary key,
ob_refs UUID[] not null
);
CREATE TABLE subscription (
id UUID primary key,
ob_ref UUID,
subscribed boolean not null
);
I want to join with the set-returning function unnest so I can find the "deepest" matching subscription, something like this:
SELECT id
FROM (
    SELECT DISTINCT ON (activity.id)
        activity.id,
        x.ob_ref, x.ob_depth,
        (subscription.subscribed IS NULL OR subscription.subscribed = TRUE)
            AS subscribed
    FROM activity
    LEFT JOIN subscription
        ON activity.ob_refs @> ARRAY[subscription.ob_ref]
    LEFT JOIN unnest(activity.ob_refs)
        WITH ORDINALITY AS x(ob_ref, ob_depth)
        ON subscription.ob_ref = x.ob_ref
    ORDER BY activity.id, x.ob_depth DESC
) sub
WHERE subscribed = TRUE;
But I can't figure out how to do that second join and get access to the columns. I've tried creating a FromClause like this:
act_ref_t = (sa.select(
        [sa.column('unnest', UUID).label('ob_ref'),
         sa.column('ordinality', sa.Integer).label('ob_depth')],
        from_obj=sa.func.unnest(Activity.ob_refs))
    .suffix_with('WITH ORDINALITY')
    .alias('act_ref_t'))
...
query = (query
    .outerjoin(
        act_ref_t,
        Subscription.ob_ref == act_ref_t.c.ob_ref)
    .order_by(Activity.id, act_ref_t.c.ob_depth))
But that results in this SQL with another subquery:
LEFT OUTER JOIN (
    SELECT unnest AS ob_ref, ordinality AS ob_depth
    FROM unnest(activity.ob_refs) WITH ORDINALITY
) AS act_ref_t
ON subscription.ob_ref = act_ref_t.ob_ref
... which fails because of the missing and unsupported LATERAL keyword:
There is an entry for table "activity", but it cannot be referenced from this part of the query.
So, how can I create a JOIN clause for this SRF without using a subquery? Or is there something else I'm missing?
Edit 1 Using sa.text with TextClause.columns instead of sa.select gets me a lot closer:
act_ref_t = (sa.sql.text(
        "unnest(activity.ob_refs) WITH ORDINALITY")
    .columns(sa.column('unnest', UUID),
             sa.column('ordinality', sa.Integer))
    .alias('act_ref'))
But the resulting SQL fails because it wraps the clause in parentheses:
LEFT OUTER JOIN (unnest(activity.ob_refs) WITH ORDINALITY)
AS act_ref ON subscription.ob_ref = act_ref.unnest
The error is syntax error at or near ")". Can I get TextAsFrom to not be wrapped in parentheses?
It turns out this is not directly supported by SA, but the correct behaviour can be achieved with a ColumnClause and a FunctionElement. First, import the ColumnFunction recipe described by zzzeek in this SA issue. Then create a special unnest function that includes the WITH ORDINALITY modifier:
from sqlalchemy.ext.compiler import compiles

class unnest_func(ColumnFunction):
    name = 'unnest'
    column_names = ['unnest', 'ordinality']

@compiles(unnest_func)
def _compile_unnest_func(element, compiler, **kw):
    return compiler.visit_function(element, **kw) + " WITH ORDINALITY"
You can then use it in joins, ordering, etc. like this:
act_ref = unnest_func(Activity.ob_refs)
query = (query
    .add_columns(act_ref.c.unnest, act_ref.c.ordinality)
    .outerjoin(act_ref, sa.true())
    .outerjoin(Subscription, Subscription.ob_ref == act_ref.c.unnest)
    .order_by(act_ref.c.ordinality.desc()))
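A follow-up note: on SQLAlchemy 1.4 and later (if upgrading is an option), the custom recipe is no longer needed, since FunctionElement.table_valued() supports WITH ORDINALITY directly. A sketch under that assumption:
import sqlalchemy as sa

act_ref = sa.func.unnest(Activity.ob_refs).table_valued(
    'unnest', with_ordinality='ordinality')

query = (query
    .outerjoin(act_ref, sa.true())
    .outerjoin(Subscription, Subscription.ob_ref == act_ref.c.unnest)
    .order_by(act_ref.c.ordinality.desc()))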
I have a database in PostgreSQL. I want to read some data from it, but I get an error (column anganridref does not exist) when I execute my command.
Here is my NpgsqlCommand:
cmd.CommandText = "select * from angebot,angebotstatus,anrede where anrid=anganridref and anstaid=anganstaidref";
and my 3 tables
The names of my columns are right, so I don't understand why that error comes up. Can someone explain why it crashes? It's not a problem of upper and lower case.
You are not prefixing your column names in the where clause:
select *
from angebot,
angebotstatus,
anrede
where anrid = anganridref <-- missing tablenames for the columns
and anstaid = anganstaidre
It's also recommended to use an explicit JOIN instead of the old SQL 89 implicit join syntax:
select *
from angebot
join angebotstatus on angebot.aaaa = angebotstatus.bbbb
join anrede on angebot.aaaa = anrede.bbbb