Error while extracting data from two dataframes using SQL - pyspark

I'm trying to extract data by joining two tables in pyspark. My join query looks like:
SELECT COUNT(DISTINCT m.ticker), to_date(m.date)
FROM extractalpha_cam2 m
LEFT OUTER JOIN TOP1000 u ON u.date = to_date(m.date)
GROUP BY m.date
ORDER BY m.date
It throws this error:
Error: Py4JJavaError: An error occurred while calling z:org.apache.zeppelin.spark.ZeppelinContext.showDF
But when I tried extracting the data from each table separately, it worked fine. My single-table queries look like:
SELECT to_date(date) FROM extractalpha_cam2
SELECT date FROM TOP1000
These two queries work fine. Can anyone help me extract the data from both tables with a join?
It would also be really helpful if anyone could share a link that offers guidance on writing efficient queries in pyspark.

I checked and found that this error occurs when the job you are running takes longer than the timeout you have set. In my case it was 300 seconds.
Let me know if anyone has a better answer than this. Thanks.
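If that 300-second limit is Spark's broadcast join timeout (spark.sql.broadcastTimeout defaults to 300 seconds), it can be raised before running the join. A minimal pyspark sketch, assuming an existing SparkSession named spark and that the broadcast timeout is indeed the limit being hit:

# Assumption: the 300s limit is spark.sql.broadcastTimeout (default 300).
spark.conf.set("spark.sql.broadcastTimeout", "1200")

df = spark.sql("""
    SELECT COUNT(DISTINCT m.ticker), to_date(m.date)
    FROM extractalpha_cam2 m
    LEFT OUTER JOIN TOP1000 u ON u.date = to_date(m.date)
    GROUP BY m.date
    ORDER BY m.date
""")
df.show()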

Related

Querying timestamp column in q

I want to count the number of records inserted into a kdb+ database using a q query.
Currently I am using the query below:
count select from executionTable where ingestTimeStamp within 2019.09.07D00:00:00.000000000 2019.09.08D00:00:00.000000000
It works but is not highly performant. Any recommendations for making it more efficient would be highly appreciated.
Thank you for your help.
If you only want the count, then use 'count i' inside the select, like below:
q) select count i from executionTable where ingestTimeStamp within 2019.09.07D00:00:00.000000000 2019.09.08D00:00:00.000000000
This fetches only the count instead of the full data, which is what your query is doing, and that is one of the reasons it takes more time.
And if it is a partitioned database, then add 'date' to the filter, as @Callum Biggs mentioned.
Given the information you have provided, I'm assuming you're querying on-disk data, likely saved in a standard date-partitioned structure. In this case, you should specify a date clause before the time clause; this prevents searching all the date directories.
select from executionTable where date=2019.09.07, ingestTimeStamp within 2019.09.07D00:00:00.000000000 2019.09.08D00:00:00.000000000
I'd suggest reading through the whitepaper on query optimization; it gives guidance on good query structure and on taking advantage of map-reduce in kdb.

Merge two datasets with duplicate BY variables (or: I want to make the following form)

I am a novice SAS programmer.
I have a question about merging two datasets.
The two data sets look like this (see the linked Excel sheet image):
Please let me know the key concepts or code to make this happen!
I have searched for the answer through Googling etc., but there is no site that exactly solves what I want.
(If possible, please tackle the above question without PROC SQL.)
To get the desired result you should do a Cartesian product (cross join), which returns all the rows of both tables: each row in table1 is paired with every row in table2. I have used PROC SQL to do this and am eager to see how it can be done with a data step. Here's what I know:
Proc Sql;
create table test_merge as
select a.*, b.type_rhs, b.rhs1, b.rhs2
from test a, test11 b
where a.yearmonth=b.yearmonth
;
quit;
Again, I am new to SAS as well, and I think this is one of the ways to create the desired output.
When working with huge data, you will see a note in the log that says "The execution of this query involves performing one or more Cartesian product joins that can not be optimized."
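As an aside, the same many-to-many pairing logic can be sketched in Python with pandas (this is not SAS, and the data values here are invented purely for illustration of what the join produces):

import pandas as pd

# Invented stand-ins for the SAS datasets test and test11.
test = pd.DataFrame({"yearmonth": [200901, 200901], "lhs1": [1, 2]})
test11 = pd.DataFrame({"yearmonth": [200901, 200901],
                       "type_rhs": ["a", "b"], "rhs1": [10, 20], "rhs2": [30, 40]})

# merge() on yearmonth is many-to-many: each left row pairs with every
# matching right row, mirroring the PROC SQL join above (4 rows out).
test_merge = test.merge(test11, on="yearmonth")
print(test_merge)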

Converting a complex query with an inner join to Tableau

I have a query like this, which we use to generate data for our custom dashboard (a Rails app):
SELECT AVG(wait_time) FROM (
SELECT TIMESTAMPDIFF(MINUTE,a.finished_time,b.start_time) wait_time
FROM (
SELECT max(start_time + INTERVAL avg_time_spent SECOND) finished_time, branch
FROM mytable
WHERE name IN ('test_name')
AND status = 'SUCCESS'
GROUP BY branch) a
INNER JOIN
(
SELECT MIN(start_time) start_time, branch
FROM mytable
WHERE name IN ('test_name_specific')
GROUP BY branch) b
ON a.branch = b.branch
HAVING avg_time_spent between 0 and 1000)t
GROUP BY week
Now I am trying to port this to Tableau, and I cannot find a way to represent this data in Tableau. I am stuck on how to represent the inner GROUP BY in a calculated field. I could also just use a custom SQL data source, but I am already using another data source.
Columns in mytable:
start_time
avg_time_spent
name
branch
status
I think this could be achieved with the new Level of Detail expressions, but unfortunately I am stuck on version 8.3.
Save custom SQL for rare cases; this doesn't look like one. Let Tableau generate the SQL for you.
If you simply connect to your table, you can usually write calculated fields to get the information you want. I'm not exactly sure why you have test_name in one part of your query but test_name_specific in another, so ignoring that, here is a simplified example for a similar query.
If you define a calculated field called worst_case_test_time as
DATEDIFF('second', MIN([start_time]), DATEADD('second', AVG([avg_time_spent]), MAX([start_time])))
that seems close to what your original query says.
It would help if you explained what exactly you are trying to compute. It appears to be some sort of worst-case bound on average test time. There may be an even simpler formula, but it's hard to know without a little context.
You could filter on status = "Success" and avg_time_spent < 1000, and place branch and WEEK(start_time) on, say, the Rows and Columns shelves.
P.S. Your query seems a little off. Don't you need an aggregate function such as MAX or AVG around the field in your HAVING clause?

Can I have more than 250 columns in the result of a PostgreSQL query?

Note that the PostgreSQL website mentions a limit of between 250 and 1600 columns, depending on column types.
Scenario:
Say I have data in 17 tables, each with around 100 columns, all joinable through primary keys. Would it be okay to select all these columns in a single SELECT statement? The query would be pretty complex but can be generated programmatically. The reason for doing this is to get denormalised data to populate a web page. Please do not ask why though :)
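To illustrate the "generated programmatically" part, here is a minimal Python sketch, assuming hypothetical tables t1..t17 that all share an id primary key (the names and key are illustrative only):

# Hypothetical schema: tables t1..t17, each joinable on a shared "id" key.
tables = [f"t{i}" for i in range(1, 18)]
select_list = ", ".join(f"{t}.*" for t in tables)
joins = " ".join(f"JOIN {t} ON {t}.id = {tables[0]}.id" for t in tables[1:])
query = f"SELECT {select_list} FROM {tables[0]} {joins}"
print(query)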
Quite obviously, if I do create table table1 as (<the complex select statement>), I will hit the limit mentioned on the website. But do plain select queries face the same restriction?
I could probably find this out by doing the exercise myself, and in the next few days I probably will. However, if someone has an idea about this and the problems I might face with a single query, please share the knowledge.
I can't find definitive documentation to back this up, but I have received the following error using JDBC on PostgreSQL 9.1:
org.postgresql.util.PSQLException: ERROR: target lists can have at most 1664 entries
As I say though, I can't find the documentation for that, so it may vary by release.
I've found the confirmation: the maximum is 1664.
This is one of the metrics available for confirmation in the INFORMATION_SCHEMA.SQL_SIZING view.
SELECT * FROM INFORMATION_SCHEMA.SQL_SIZING
WHERE SIZING_NAME = 'MAXIMUM COLUMNS IN SELECT';

iPhone SQLite query optimization

I am currently using this SQLite query in my application. Two tables are used in the query:
UPDATE table1 SET visited = (SELECT COUNT(DISTINCT table1.itemId) FROM table2 WHERE table2.itemId = table1.itemId AND table2.sessionId = 'eyoge2avao');
It works correctly. My problem is that it takes around 10 seconds to execute this query and retrieve the result. I don't know what to do; almost everything else behaves well, so it seems the problem is with how this query is formed.
Please can someone help me optimize this query?
Regards,
Brian
Make sure you have indexes on the following (combinations of) fields; see the sketch after this list for the corresponding CREATE INDEX statements:
table1.itemId
(This will speed up the DISTINCT clause, since the itemId will already be in the correct order).
table2.itemId, table2.sessionId
This will speed up the WHERE clause of your SELECT statement.
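A minimal sketch of creating those indexes with Python's sqlite3 module (the database path and index names are illustrative; in an iPhone app you would issue the same CREATE INDEX statements through whatever SQLite wrapper you use):

import sqlite3

conn = sqlite3.connect("app.db")  # hypothetical database file
# Index for the correlated lookup on table1.itemId.
conn.execute("CREATE INDEX IF NOT EXISTS idx_table1_itemId ON table1(itemId)")
# Composite index covering the WHERE clause on table2.
conn.execute("CREATE INDEX IF NOT EXISTS idx_table2_item_session ON table2(itemId, sessionId)")
conn.commit()
conn.close()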
How many rows are there in these tables?
Also try running EXPLAIN (or EXPLAIN QUERY PLAN) on your SELECT command to see whether it gives you any helpful information.