I created a MySQL date table. The primary key is a column of (MySQL) type date.
Now I added this date table to Power BI and wanted to mark it as a date table.
The validation passed, but the OK button is inactive, as one can see in the image.
What am I missing, please?
I'm trying to add a generated column to an existing table. I have a transaction table, and in that table I have one column named dateTime (containing date and time in timestamp format). I want to create a virtual column named transactionDate which will contain the date and time derived from the dateTime column.
Below is the query I created:
ALTER TABLE public.transaction
ADD COLUMN "transactionDate" timestamp without time zone GENERATED ALWAYS AS (timestamp("dateTime")::date) STORED;
and I'm getting the error below:
ERROR: syntax error at or near ""dateTime""
LINE 2: ... without time zone GENERATED ALWAYS AS (timestamp("dateTime"...
^
SQL state: 42601
Character: 121
Please help me out.
What are you using timestamp() for?
For a generated column, it should be like this:
ALTER TABLE transaction ADD COLUMN "transactionDate" timestamp without time zone GENERATED ALWAYS AS ("dateTime"::date) STORED;
OR
ALTER TABLE transaction ADD COLUMN "transactionDate" timestamp without time zone GENERATED ALWAYS AS ("dateTime"::timestamp) STORED;
The column dateTime is already a timestamp, so you do not need to create a timestamp from it. So just:
alter table transaction
add column "transactionDate" date
generated always as ("dateTime"::date) stored;
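To sanity-check the result, here is a quick illustrative demo (a minimal version of the table is assumed from the question):
create table transaction (
    id serial primary key,
    "dateTime" timestamp without time zone
);
alter table transaction
    add column "transactionDate" date
    generated always as ("dateTime"::date) stored;
-- the generated column is computed automatically on insert
insert into transaction ("dateTime") values ('2023-05-04 13:22:10');
select "dateTime", "transactionDate" from transaction;
-- 2023-05-04 13:22:10 | 2023-05-04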
I have a few very small tables (a total of ~1000 rows) that I want to back up regularly into the same DB, into a single table. I know it sounds weird, but hear me out.
Let's say the tables I want to back up are named linux_commands and windows_commands. These two tables have roughly these columns: id (pkey), name, definition, config (jsonb), commands.
I want to back these up every day into a table called commands_backup, and I want this new table to have a date field, a field for windows_commands, and another for linux_commands, so three columns in total. Each day, a script would run, write the current date to the date field, then fetch the whole linux_commands table and write it to the related field in a single row, then do the same for windows_commands.
How would you set up something like this? Also, what is the best data type for storing a whole data set in a single item?
In the target table, windows_commands and linux_commands should be of type jsonb.
Then you can use:
INSERT INTO commands_backup VALUES (
current_date,
(SELECT jsonb_agg(to_jsonb(linux_commands)) FROM linux_commands),
(SELECT jsonb_agg(to_jsonb(windows_commands)) FROM windows_commands)
);
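For reference, a target table matching that INSERT could look like the sketch below (the column names, in particular backup_date, are assumptions), and the jsonb_agg output can later be unpacked again with jsonb_to_recordset:
create table commands_backup (
    backup_date      date,
    linux_commands   jsonb,
    windows_commands jsonb
);
-- reading a backup back into rows (column list assumed from the question)
select t.*
from commands_backup b,
     jsonb_to_recordset(b.linux_commands)
         as t(id int, name text, definition text, config jsonb, commands text)
where b.backup_date = current_date;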
How can I check for invalid dates in a date column in DB2?
We have a few date columns in DB2. While fetching the data in an ETL job we are getting ":000-01-01", but when we look at the data in the table it shows proper dates.
Is there any way to find invalid dates in a date column in DB2?
Please let me know if this question is a duplicate...
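Since a DB2 DATE column can only hold valid dates, the ":000-01-01" values are most likely introduced by the ETL tool's formatting rather than stored in the table. Still, a generic sanity check is to scan for values outside the range your application could plausibly have written (table name, column name, and bounds below are placeholders):
SELECT date_col, COUNT(*) AS cnt
FROM my_table
WHERE date_col NOT BETWEEN DATE('1900-01-01') AND CURRENT DATE
GROUP BY date_col
ORDER BY date_col;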
In PostgreSQL, is there a way to add columns that automatically record the creation date and the latest update date of a row?
For the table creation date, look at event triggers.
For insertion, look into a DEFAULT value for a timestamptz column (this works only if you don't explicitly supply a value).
For the last modification, use a trigger FOR EACH ROW before DELETE/UPDATE, as in the sketch below.
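A minimal sketch of the DEFAULT-plus-trigger approach (table and trigger names are illustrative; execute function requires PostgreSQL 11+, use execute procedure on older versions):
create table example_table (
    id         serial primary key,
    payload    text,
    created_at timestamptz not null default now(),  -- set once on insert
    updated_at timestamptz not null default now()   -- maintained by the trigger
);
create function set_updated_at() returns trigger as $$
begin
    new.updated_at := now();
    return new;
end;
$$ language plpgsql;
create trigger example_table_set_updated_at
    before update on example_table
    for each row
    execute function set_updated_at();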
The idea: a robust way of adding created and modified fields for data we add to the database, through DB triggers:
Update modified_by and modified_on (or modified_at) for every DB transaction.
Populate created_on and created_by (or created_at) from the modified details whenever you insert a row into a table.
For a trigger function, check this repo: https://github.com/charan4ks/created_fields.git
I have a query which has a date field with values that look like this in the query results window:
2013-10-01 00:00:00
However, when I save the results to csv, it gets saved like this:
2013-10-01T00:00:00
This is causing a problem when I'm trying to COPY the csv into a table in Redshift, where it gives me an error stating that the value is not a valid timestamp (the field I'm importing into is a timestamp field).
How can I either strip out the time component completely, leaving just the date, or at least have the "T" removed from the results?
I'm exporting results to csv using Aginity SQL Workbench for Redshift.
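Alternatively, COPY itself can be told to accept the ISO 8601 form: with TIMEFORMAT 'auto', Redshift recognizes the "T" separator automatically (bucket, file, and IAM role below are placeholders):
COPY events
FROM 's3://my-bucket/results.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
CSV
TIMEFORMAT 'auto';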
According to this knowledgebase article:
After import, add new TIMESTAMP columns and use the CAST() function to populate them:
ALTER TABLE events ADD COLUMN received_at TIMESTAMP DEFAULT NULL;
UPDATE events SET received_at = CAST(received_at_raw as timestamp);
ALTER TABLE events ADD COLUMN generated_at TIMESTAMP DEFAULT NULL;
UPDATE events SET generated_at = CAST(generated_at_raw as timestamp);
Finally, if you foresee no more imports to this table, the raw VARCHAR timestamp columns may be removed. If you foresee importing more events from S3, do not remove these columns. To remove the columns, run:
ALTER TABLE events DROP COLUMN received_at_raw;
ALTER TABLE events DROP COLUMN generated_at_raw;
Hope that helps...