Redshift failing on column named "MM" - amazon-redshift

I get this error on a column named mm in Redshift:
select sum(mm) mm from foo;
ERROR: syntax error at or near "mm"
LINE 1: select sum(mm) mm from foo;
^
I do not get the same error if I alias the column to something else like select sum(mm) mm2 ...
What's special about mm? It is not on the list of reserved words.

Use AS:
SELECT
sum(mm) AS mm
FROM foo
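One plausible explanation (my note, not part of the original answer): mm is used by Redshift as a date/time abbreviation (for example in datetime format strings), so the parser may be treating the bare alias as part of a datetime expression, and the explicit AS removes the ambiguity. Quoting the alias should avoid the clash as well; a minimal sketch against the same table foo:

-- explicit AS, as in the answer above
SELECT sum(mm) AS mm FROM foo;
-- a quoted alias should also sidestep the keyword clash (assumption)
SELECT sum(mm) AS "mm" FROM foo;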

Related

Spark doesn't recognize the column name in a SQL query but can output it to a dataset

I'm applying a SQL query like this:
s"SELECT * FROM my_table_joined WHERE (timestamp > '2022-01-23' and writetime is not null and acceptTimestamp is not null)"
and I'm getting the error message below.
warning: there was one deprecation warning (since 2.0.0); for details, enable `:setting -deprecation' or `:replay -deprecation'
org.postgresql.util.PSQLException: ERROR: column "accepttimestamp" does not exist
Hint: Perhaps you meant to reference the column "mf_joined.acceptTimestamp".
Position: 103
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2497)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2233)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:310)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:446)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:370)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:149)
at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:108)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:61)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:226)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:344)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:297)
at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:286)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:286)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:221)
at $$$e76229fa87b6865de321c5274e52c2f9$$$$w$getDFFromJdbcSource(<console>:1133)
... 326 elided
If I omit acceptTimestamp, like this:
s"SELECT * FROM my_table_joined WHERE (timestamp > '2022-01-23' and writetime is not null)"
I'm getting the data as below:
+-------------------+----------+----+------------------+-----------------+---+-----+------+----------+---------------+-------+-----------------------+----------+---------+-------------+------------+---------------+---------+-----+-------------------+-----------------------+---------------+--------------+-------------+-------------------+-------------------+---+---+------------------+-----+----+----+------------------+---+
|timestamp |flags |type|lon |lat |alt|speed|course|satellites|digital_twin_id|unit_id|unit_ts |name |unit_type|measure_units|access_level|uid |placement|stale|start |writetime |acceptTimestamp|delayWindowEnd|DiffInSeconds|time |hour |max|min|mean |count|max2|min2|mean2 |rnb|
+-------------------+----------+----+------------------+-----------------+---+-----+------+----------+---------------+-------+-----------------------+----------+---------+-------------+------------+---------------+---------+-----+-------------------+-----------------------+---------------+--------------+-------------+-------------------+-------------------+---+---+------------------+-----+----+----+------------------+---+
Please note acceptTimestamp is there!
So how should I handle this column in my query so that it is taken into account?
From the exception, it seems this is related to Postgres, not Spark. If you look at the error message you got, the column name is folded to lowercase (accepttimestamp), whereas in your query the T is uppercase (acceptTimestamp).
To make the column name case-sensitive for Postgres, you need to use double quotes. Try this:
val query = s"""SELECT * FROM my_table_joined
WHERE timestamp > '2022-01-23'
and writetime is not null
and "acceptTimestamp" is not null"""

ERROR: syntax error at or near "SETS"

I'm getting a syntax error when trying to use the GROUPING SETS function in my Postgres DB.
I've looked at the documentation here and I believe the syntax is correct, but I'm still getting an error.
I believe the syntax should be GROUP BY GROUPING SETS ((COLUMN), (COLUMN), ()) to get the grand total of my SUM() from the SELECT part of my code, so I put in GROUP BY GROUPING SETS ((amount.date), (places.place), ());
What am I doing wrong here?
SELECT
EXTRACT(MONTH FROM amount.date),
places.place,
CASE
WHEN places.place = 'A' THEN SUM(amount.amount)
WHEN places.place = 'B' THEN SUM(amount.amount)
WHEN places.place = 'C' THEN SUM(amount.amount)
ELSE SUM((amount.amount) * 2.5)
END AS "Total"
FROM amount
LEFT JOIN places ON places.id = amount.id
WHERE EXTRACT(YEAR FROM amount.date) = 2017
GROUP BY GROUPING SETS ((amount.date), (places.place), ());
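One thing worth checking (my note, not stated in the question): GROUPING SETS was only added in PostgreSQL 9.5, so on an older server otherwise-correct syntax would likely fail with exactly this kind of syntax error near "SETS". A minimal, self-contained example of the syntax on 9.5+, using made-up toy data:

CREATE TABLE sales (place text, d date, amount numeric);
INSERT INTO sales VALUES ('A', '2017-01-10', 10), ('B', '2017-02-05', 20);

-- per-month totals, per-place totals, and a grand total in one pass
SELECT EXTRACT(MONTH FROM d) AS month, place, SUM(amount) AS total
FROM sales
GROUP BY GROUPING SETS ((EXTRACT(MONTH FROM d)), (place), ());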

How to insert a value to postgres database column which contains double quotes and triple quotes

I want to insert a query string into a Postgres database column in the following format
{"enrolled_time":'''SELECT DISTINCT enrolled_time AT TIME ZONE %s FROM alluser'''}
I tried this:
UPDATE reports SET raw_query = {"enrolled_time":'''SELECT DISTINCT enrolled_time AT TIME ZONE %s FROM alluser'''} WHERE id=37;
It gives an error like:
ERROR: syntax error at or near "{"
LINE 1: UPDATE base_reports SET extra_query = {"enrolled_time":'''SE...
When I try using single quotes, it throws an error like the following:
ERROR: syntax error at or near "SELECT"
LINE 1: ...DATE reports SET raw_query = '{"enrolled_time":'''SELECT DIS...
How can I overcome this situation?
Use dollar quoting:
UPDATE reports
SET raw_query = $${"enrolled_time":'''SELECT DISTINCT enrolled_time AT TIME ZONE %s FROM alluser'''}$$
WHERE id = 37;
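If the stored text itself ever contains $$, a tagged dollar quote avoids the collision; the $q$ tag below is arbitrary:

UPDATE reports
SET raw_query = $q${"enrolled_time":'''SELECT DISTINCT enrolled_time AT TIME ZONE %s FROM alluser'''}$q$
WHERE id = 37;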

InterSystems Caché C# query with datetime

When I use a Caché SQL query in C#, I'm getting an error:
SQLtext1 = "SELECT top 10 * FROM dbo.DAPPLICATIONSTAT where TIMESTAMP = '2015-02-01 00:00:00'"
I would like to use a where clause with a datetime filter.
I am using InterSystems.Data.CacheClient.dll to execute the query.
Error message:
[SQLCODE: <-4>:<A term expected, beginning with one of the following: identifier, constant, aggregate, %ALPHAUP, %EXACT, %MVR, %SQLSTRING, %SQLUPPER, %STRING, %UPPER, $$, :, +, -, (, NOT, EXISTS, or FOR>]
[Cache Error: <<SYNTAX>errdone+2^%qaqqt>] [Details: <Prepare>]
[%msg: < SQL ERROR #4: A term expected, beginning with either of: (, NOT, EXISTS, or FOR^SELECT top :%qpar(1) * FROM dbo . DAPPLICATIONSTAT where TIMESTAMP>
I think you have the reserved word TIMESTAMP there, and that is why you get the error.
Try this SQL query, with the field name TIMESTAMP in double quotes:
SELECT top 10 * FROM dbo.DAPPLICATIONSTAT where "TIMESTAMP" = '2015-02-01 00:00:00'

Error with creating view in PostgreSQL

Here's the code:
CREATE OR REPLACE VIEW skats_4 AS
SELECT count(datums) AS 2014_gada_pieteikumi
FROM pieteikums
WHERE date_part('YEAR', datums) = 2014;
I keep getting an error with the text "syntax error near 2014" (pointing at "2014_gada_pieteikumi"). I don't see what is wrong.
A label must not start with a number. Use double quotes or rename the label:
postgres=# select 10 as 2014_some;
ERROR: syntax error at or near "2014"
LINE 1: select 10 as 2014_some;
^
Time: 0.647 ms
postgres=# select 10 as "2014_some";
2014_some
───────────
10
(1 row)
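Applying that to the original statement, the view definition should then be accepted with just the alias quoted:

CREATE OR REPLACE VIEW skats_4 AS
SELECT count(datums) AS "2014_gada_pieteikumi"
FROM pieteikums
WHERE date_part('YEAR', datums) = 2014;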