Cross Apply error with Oracle - oracle-sqldeveloper

When using CROSS APPLY in an Oracle query, I get the error "SQL command not properly ended".
Here is the query:
select *
from Session_Dept
cross apply(select * from Session_Employee
WHERE Session_Employee.Id= Session_Dept.Id);
Any help will be appreciated.
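No answer was posted for this one, so here is my own guess: CROSS APPLY is only accepted by Oracle 12c and later, so on 11g or earlier the parser stops at the APPLY keyword with exactly this error. On older versions, this particular correlated CROSS APPLY (an equality filter on the applied subquery) is equivalent to a plain inner join:

```sql
-- Sketch of an equivalent query for pre-12c Oracle
-- (table and column names taken from the question):
SELECT *
FROM Session_Dept d
JOIN Session_Employee e
  ON e.Id = d.Id;
```

If you are already on 12c or later, the original statement should parse as written.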

Related

Crystal Report Gives me a Different SQL Result

I am getting a different result in Crystal Reports (CR) than when I run the same query in SQL Server.
Here is an example of my situation:
[Tables TB1 and TB2, the desired result, and CR's actual result were shown as images here.]
here is my query:
SELECT * FROM TB1 tb1 LEFT JOIN TB2 tb2 WHERE tb1.ControlNo='IDU 2005.0001' AND tb2.Type = 'Applicant'
This can't be the SQL statement used in the Crystal report.
Are you using a Command as the data source in the report? If so, please show the command in your question.
If you are not using a Command, please show the table joins and the record selection formula.
Most likely, you are not using a Command and the record selection formula in the report design is simply wrong.
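A side note not raised in the thread: the posted query also has no ON clause, and even with one, a filter on the left-joined table placed in WHERE silently turns a LEFT JOIN into an inner join, which is a classic cause of "different results". A syntactically valid form that keeps unmatched TB1 rows would be (the join key is my assumption, since the question doesn't show one):

```sql
SELECT *
FROM TB1 tb1
LEFT JOIN TB2 tb2
  ON  tb2.ControlNo = tb1.ControlNo   -- assumed join key; not shown in the question
  AND tb2.Type = 'Applicant'          -- filter on the nullable side goes in ON, not WHERE
WHERE tb1.ControlNo = 'IDU 2005.0001';
```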

Run inner join query in Grafana where datasource is influx

I need to run an inner join query in Grafana where the datasource is InfluxDB.
In SQL the query would be:
SELECT t.status,count(t.taskName) FROM TasksStatus t
INNER JOIN TasksStatus s
ON t.taskName = s.taskName
WHERE t.modified > s.modified
GROUP BY t.status;
I get the following error when running it in Grafana:
InfluxDB Error: error parsing query: found t, expected ; at line 1, char 71
https://docs.influxdata.com/influxdb/v1.8/query_language/
InfluxQL, the InfluxDB SQL-like query language
It is only SQL-like, not actual SQL. The InfluxQL documentation doesn't mention any support for INNER JOIN, so it is not supported by InfluxQL.

Problem with a query using GROUP BY in PostgreSQL

I'm using the query tool in PGAdmin 4.20 and trying the following query:
select * from metadatavalue group by resource_id order by resource_id;
And I'm getting the following:
ERROR: column "metadatavalue.metadata_value_id" must appear in the GROUP BY clause or be used in an aggregate function
LINE 3: select * from metadatavalue group by resource_id order by re...
SQL state: 42803 Character: 176
The thing is that in another table the same syntax works:
select * from metadatafieldregistry group by metadata_field_id order by metadata_field_id;
Also, I'm not getting all the entries for a given resource_id, only a few. Could these two problems be related?
Please, help!
Thank you in advance.
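No answer was posted here; the following is my own explanation. The second query likely works because metadata_field_id is that table's primary key: since PostgreSQL 9.1, grouping by a primary key makes every other column functionally dependent on it, so select * is allowed. resource_id is presumably not unique in metadatavalue, so the same shortcut is rejected there. GROUP BY is also why rows appear to be missing: it collapses each resource_id group to one row. If the goal is simply all rows in order, drop the GROUP BY:

```sql
-- All rows, ordered; no grouping, so nothing is collapsed.
SELECT *
FROM metadatavalue
ORDER BY resource_id;
```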

Nested SQL Query in Spark [duplicate]

I am running this query in the Spark shell, but it gives me an error:
sqlContext.sql(
"select sal from samplecsv where sal < (select MAX(sal) from samplecsv)"
).collect().foreach(println)
error:
java.lang.RuntimeException: [1.47] failure: ``)'' expected but identifier MAX found
select sal from samplecsv where sal < (select MAX(sal) from samplecsv)
^
at scala.sys.package$.error(package.scala:27)
Can anybody explain this to me? Thanks.
Planned features:
SPARK-23945 (Column.isin() should accept a single-column DataFrame as input).
SPARK-18455 (General support for correlated subquery processing).
Spark 2.0+
Spark SQL should support both correlated and uncorrelated subqueries. See SubquerySuite for details. Some examples include:
select * from l where exists (select * from r where l.a = r.c)
select * from l where not exists (select * from r where l.a = r.c)
select * from l where l.a in (select c from r)
select * from l where a not in (select c from r)
Unfortunately as for now (Spark 2.0) it is impossible to express the same logic using DataFrame DSL.
Spark < 2.0
Spark supports subqueries in the FROM clause (same as Hive <= 0.12).
SELECT col FROM (SELECT * FROM t1 WHERE bar) t2
It simply doesn't support subqueries in the WHERE clause. Generally speaking, arbitrary subqueries (in particular correlated subqueries) can't be expressed in Spark without promoting them to a Cartesian join.
Since subquery performance is usually a significant issue in a typical relational system, and every subquery can be expressed using a JOIN, there is no loss of function here.
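As an illustration (a sketch using the question's own table and column name), the query from the question can be rewritten for Spark < 2.0 as a join against the one-row aggregate; this is technically a Cartesian join, but against a single row it is cheap:

```sql
-- WHERE-clause subquery rewritten as a join, for Spark < 2.0:
SELECT s.sal
FROM samplecsv s
CROSS JOIN (SELECT MAX(sal) AS max_sal FROM samplecsv) m
WHERE s.sal < m.max_sal
```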
https://issues.apache.org/jira/browse/SPARK-4226
There is a pull request to implement that feature; my guess is it might land in Spark 2.0.

How to use joins on Pro*C 10g?

While using an inner join in Pro*C, I am getting the error below:
PCC-S-02201, Encountered the symbol "inner" when expecting one of the following:
I've just used a simple inner join. When I searched for a solution, I was told that 10g doesn't support this kind of syntax and that I should use dynamic SQL instead. Is that true? How can I achieve an inner join using dynamic SQL?
The Pro*C 10g precompiler doesn't accept ANSI inner/outer join syntax. If you want to use it, you will have to upgrade your Pro*C compiler.
If you use 11g, you can use the solution suggested here: http://forums.oracle.com/forums/thread.jspa?threadID=665519
Use the old syntax.
Instead of: SELECT * FROM TABLE1 INNER JOIN TABLE2 ON TABLE1.PK = TABLE2.FK
Use this: SELECT * FROM TABLE1, TABLE2 WHERE TABLE1.PK = TABLE2.FK
For OUTER JOINS just use the (+) sign on the side you want to be nullable:
Instead of: SELECT * FROM TABLE1 LEFT JOIN TABLE2 ON TABLE1.PK = TABLE2.FK
Use this: SELECT * FROM TABLE1, TABLE2 WHERE TABLE1.PK = TABLE2.FK (+)