MySQL Workbench Generated Column issue - mysql-workbench

In MySQL Workbench, I created a date table using the following code:
CREATE TABLE DIM_DATE (
DATEOFSALE DATE PRIMARY KEY,
YEAROFSALE DOUBLE AS (YEAR (DATEOFSALE)),
QUARTEROFSALE DOUBLE AS (QUARTER(DATEOFSALE)),
MONTHOFSALE INT AS (MONTH(DATEOFSALE))
)
and inserted values using the following data:
INSERT INTO DIM_DATE VALUES
('2005/02/15',2005,1,02),
('2010/01/01',2010,1,01),
('2004/02/04',2004,1,02)
But it throws the following error:
Error code: The value specified for generated column 'YEAROFSALE' in table dim_date is not allowed.
Please help me resolve this issue.
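For what it's worth, MySQL rejects any explicit value other than DEFAULT for a generated column, so a likely fix, sketched against the table above, is to insert only the base DATEOFSALE column and let the generated columns compute themselves:
INSERT INTO DIM_DATE (DATEOFSALE) VALUES
('2005-02-15'),
('2010-01-01'),
('2004-02-04');
Equivalently, the positional form would have to pass DEFAULT for every generated column, e.g. ('2005-02-15', DEFAULT, DEFAULT, DEFAULT).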

Related

Spark 2.4 Unable to Insert Record using variable

I am trying to insert a record into a table using a variable, but it is failing.
command:
val query = "INSERT into TABLE Feed_metadata_s2 values ('LOGS','RUN_DATE',{} )".format(s"$RUN_DATE")
spark.sql(s"query")
spark.sql("INSERT into TABLE Feed_metadata_s2 values ('LOGS','ExtractStartTimestamp',$ExtractStartTimestamp)")
error:
INSERT into TABLE Feed_metadata_s2 values ('SDEDLOGS','ExtractStartTimestamp',$ExtractStartTimestamp)
------------------------------------------------------------------------------^^^
at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:241)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:117)
at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:48)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:69)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
It seems you're confused about string interpolation... you need to put an s before the last query string so that the variable is substituted into it. Also, the first two lines can be simplified:
val query = s"INSERT into TABLE Feed_metadata_s2 values ('LOGS','RUN_DATE',$RUN_DATE)"
spark.sql(query)
spark.sql(s"INSERT into TABLE Feed_metadata_s2 values ('LOGS','ExtractStartTimestamp',$ExtractStartTimestamp)")

How to query from the result of a changed column of a table in postgresql

So I have a string time column in a table and now I want to change that column to a datetime type and then query data for selected dates.
Is there a direct way to do so? One way I could think of is
1) add a new column
2) insert values into it with converted date
3) Query using the new column
Here I am stuck at the 2nd step (the INSERT), so I need help with that:
ALTER TABLE "nds".”unacast_sample_august_2018"
ADD COLUMN new_date timestamp
-- Need correction in select statement that I don't understand
INSERT INTO "nds".”unacast_sample_august_2018” (new_date)
(SELECT new_date from_iso8601_date(substr(timestamp,1,10))
Could someone help me with the correction and, if possible, a better way of doing it?
I also tried another way, doing it in a single step, but it gives the error "column new_date does not exist":
SELECT *
FROM (SELECT from_iso8601_date(substr(timestamp,1,10)) FROM "db_name"."table_name") AS new_date
WHERE new_date > from_iso8601('2018-08-26') limit 10;
AND
SELECT new_date = (SELECT from_iso8601_date(substr(timestamp,1,10)))
FROM "db_name"."table_name"
WHERE new_date > from_iso8601('2018-08-26') limit 10;
Could someone correct these queries?
You don't need those steps, just use a USING CAST clause on your ALTER TABLE:
CREATE TABLE foobar (my_timestamp) AS
VALUES ('2018-09-20 00:00:00');
ALTER TABLE foobar
ALTER COLUMN my_timestamp TYPE timestamp USING CAST(my_timestamp AS TIMESTAMP);
If your string timestamps are in a correct format this should be enough.
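Once the column type has been changed in place like that, querying for selected dates is just an ordinary comparison; a minimal sketch against the hypothetical foobar table above:
SELECT *
FROM foobar
WHERE my_timestamp >= timestamp '2018-08-26'
  AND my_timestamp <  timestamp '2018-08-27';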
Solved as follows:
select *
from
(
SELECT from_iso8601_date(substr(timestamp,1,10)) as day,*
FROM "db"."table"
)
WHERE day > date_parse('2018-08-26', '%Y-%m-%d')
limit 10

Adding values to a newly inserted column in an existing table in PostgreSQL 9.3

I created a table named "collegetable":
create table collegetable (stid integer primary key not null,stname
varchar(50),department varchar(10),dateofjoin date);
provided values for each column, then added a new column to it named "cgpa" and tried to add values for this column in one shot using this code:
WITH col(stid, cgpa) as
( VALUES((1121,8.01),
(1131,7.12),
(1141,9.86))
)
UPDATE collegetable as colldata
SET cgpa = col.cgpa
FROM col
WHERE colldata.stid = col.stid;
and got this error:
ERROR: operator does not exist: integer = record
LINE 9: WHERE colldata.stid = col.stid;
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
Please help in solving this. Thanks in advance.
The with clause only defines the names of the columns, not the data types:
with col (stid, cgpa) as (
...
)
update ...;
For details see the tutorial and the full reference.
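As a sketch of what seems to be going wrong in the question's statement (my reading, not part of the answer above): the extra pair of parentheses around the VALUES list turns it into a single row whose columns are records, so col.stid is a record and comparing it with the integer colldata.stid raises "operator does not exist: integer = record". With one pair of parentheses per row, the update should work:
WITH col (stid, cgpa) AS (
  VALUES (1121, 8.01),
         (1131, 7.12),
         (1141, 9.86)
)
UPDATE collegetable AS colldata
SET cgpa = col.cgpa
FROM col
WHERE colldata.stid = col.stid;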

More than one row error when populating a table

I have created a table using create table with these columns:
create table myschema.mytable(
id serial PRIMARY KEY,
row_num integer,
col_num integer,
pix_centroid geometry,
pix_val double precision
)
When I am trying to populate it:
insert into pixelbased (id, row_num, col_num, pix_centroid, pix_val)
values (
DEFAULT,
(select((ST_PixelAsPolygons(rast, 1)).x) from mytable where rid=3),
(select((ST_PixelAsPolygons(rast, 1)).x) from mytable where rid=3),
(select(ST_Centroid((ST_PixelAsPolygons(rast, 1)).geom)) from rwanda8 where rid=3),
(select(ST_PixelAsPolygons(rast, 1)).val from mytable where rid=3)
)
I get the following error:
ERROR: more than one row returned by a subquery used as an expression.
I know that since I have more than one row for every column it makes sense to get such an error. But I really need to have all the columns calculated as mentioned. Does anyone know what I should do?
In fact I want to insert the result of the following query into the table:
select
(ST_PixelAsPolygons(rast, 1)).val as geomval1,
(ST_PixelAsPolygons(rast, 1)).x as X,
(ST_PixelAsPolygons(rast, 1)).y as Y,
(ST_Centroid((ST_PixelAsPolygons(rast, 1)).geom)) as geom
from rwanda8
where rid=3
Does anyone know what I should do?
Just use the select query instead of the values:
insert into pixelbased (row_num, col_num, pix_centroid, pix_val)
select
(ST_PixelAsPolygons(rast, 1)).val as geomval1,
(ST_PixelAsPolygons(rast, 1)).x as X,
(ST_PixelAsPolygons(rast, 1)).y as Y,
(ST_Centroid((ST_PixelAsPolygons(rast, 1)).geom)) as geom
from rwanda8 where rid=3
Do not insert the id as it is a serial and will generate itself.
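If the generated serial ids are needed afterwards, PostgreSQL's RETURNING clause can be tacked onto the same statement; a sketch (the same INSERT as above, only the last line added):
insert into pixelbased (row_num, col_num, pix_centroid, pix_val)
select
(ST_PixelAsPolygons(rast, 1)).y,
(ST_PixelAsPolygons(rast, 1)).x,
ST_Centroid((ST_PixelAsPolygons(rast, 1)).geom),
(ST_PixelAsPolygons(rast, 1)).val
from rwanda8 where rid=3
returning id;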
One of your subqueries returns more than 1 row, so use LIMIT 1 or something similar to get only one single value per subquery.
If you need more than 1 value per column, you should review your insert algorithm and use a cursor, for instance.

An empty row with null-like values in a not-null field

I'm using postgresql 9.0 beta 4.
After inserting a lot of data into a partitioned table, I found a weird thing. When I query the table, I can see an empty row with null-like values in 'not-null' fields.
In the query result, the 689th row is empty. The first 3 fields (stid, d, ticker) make up the primary key, so they should not be null. The query I used is this:
select * from st_daily2 where stid=267408 order by d
I can even do a group by on this data.
select stid, date_trunc('month', d) ym, count(*) from st_daily2
where stid=267408 group by stid, date_trunc('month', d)
The 'group by' result still has the empty row; its 1st row is empty.
But if I query where 'stid' or 'd' is null, it returns nothing.
Is this a bug of postgresql 9b4? Or some data corruption?
EDIT :
I added my table definition.
CREATE TABLE st_daily
(
stid integer NOT NULL,
d date NOT NULL,
ticker character varying(15) NOT NULL,
mp integer NOT NULL,
settlep double precision NOT NULL,
prft integer NOT NULL,
atr20 double precision NOT NULL,
upd timestamp with time zone,
ntrds double precision
)
WITH (
OIDS=FALSE
);
CREATE TABLE st_daily2
(
CONSTRAINT st_daily2_pk PRIMARY KEY (stid, d, ticker),
CONSTRAINT st_daily2_strgs_fk FOREIGN KEY (stid)
REFERENCES strgs (stid) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE CASCADE,
CONSTRAINT st_daily2_ck CHECK (stid >= 200000 AND stid < 300000)
)
INHERITS (st_daily)
WITH (
OIDS=FALSE
);
The data in this table is simulation results. Multiple multithreaded simulation engines written in C# insert data into the database using Npgsql.
psql also shows the empty row.
You'd better leave a posting at http://www.postgresql.org/support/submitbug
Some questions:
Could you show us the table definitions and constraints for the partitions?
How did you load your data?
Do you get the same result when using another tool, like psql?
The answer to your problem may very well lie in your first sentence:
I'm using postgresql 9.0 beta 4.
Why would you do that? Upgrade to a stable release. Preferably the latest point-release of the current version.
This is 9.1.4 as of today.
I got to the same point: "what in the heck is that blank value?"
No, it's not a NULL, it's a -infinity.
To filter for such a row use:
WHERE
  CASE WHEN mytestcolumn = '-infinity'::timestamp
         OR mytestcolumn = 'infinity'::timestamp
       THEN NULL ELSE mytestcolumn END IS NULL
instead of:
WHERE mytestcolumn IS NULL
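Assuming those blank values really are -infinity, as this answer suggests, a more direct check against the question's st_daily2 table could target the d column, e.g.:
select * from st_daily2 where d = '-infinity'::date;
-- or keep only rows with a finite date:
select * from st_daily2 where isfinite(d);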