This question already has answers here:
Upgrade PostgreSQL JSON column to JSONB?
(2 answers)
Closed 6 years ago.
PostgreSQL 9.4 introduced the new JSONB type.
On a live database running PostgreSQL 9.3, I have a JSON column.
I want to migrate it to JSONB.
Assuming I first upgrade the database to 9.4 (using pg_upgrade), what do I do next?
ALTER TABLE table_with_json
ALTER COLUMN my_json
SET DATA TYPE jsonb
USING my_json::jsonb;
In the context of Rails, here is an ActiveRecord migration alternative:
def change
  reversible do |dir|
    dir.up { change_column :models, :attribute, 'jsonb USING CAST(attribute AS jsonb)' }
    dir.down { change_column :models, :attribute, 'json USING CAST(attribute AS json)' }
  end
end
I don't know how this compares to the accepted answer performance-wise, but I tested it on a table with 120,000 records, each with four JSON columns, and the migration took about a minute. Of course, this depends on how complex the JSON structure is.
Also, note that if your existing columns have a default value of {}, you have to add default: {} to the statements above; otherwise you'll end up with jsonb columns, but the default value will remain '{}'::json.
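In plain SQL, resetting the default is a one-line sketch (reusing the hypothetical table and column names from the statement above):

```sql
-- Hypothetical table/column names; adjust to your schema.
-- Without this, the old default would remain '{}'::json.
ALTER TABLE table_with_json
  ALTER COLUMN my_json SET DEFAULT '{}'::jsonb;
```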
I've created a Heroku Postgres database, with one of the columns being db.Integer, i.e. storing integers. However, I realize that I should actually store floating point numbers instead, i.e. db.Float.
Can I change the type of a PostgreSQL database column after it has been created with data?
Yes, you can. Use a statement like the following:
ALTER TABLE $TABLE_NAME ALTER COLUMN $COLUMN_NAME TYPE <float type, e.g. float(64)>;
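As a concrete sketch (the table and column names here are hypothetical): since every integer converts implicitly to a floating-point value, no USING clause is needed for this particular change:

```sql
-- Hypothetical names; integer -> double precision casts implicitly,
-- so no USING clause is required.
ALTER TABLE measurements
  ALTER COLUMN reading TYPE double precision;
```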
This question already has an answer here:
update vachar column to date in postgreSQL
(1 answer)
Closed 2 years ago.
I have a PostgreSQL table with a column of data type 'text' that contains only dates.
I'm trying to convert it to the date type, but it doesn't work.
publication_date
-----------
"9/16/2006"
"9/1/2004"
"11/1/2003"
"5/1/2004"
"9/13/2004"
"4/26/2005"
"9/12/2005"
"11/1/2005"
"4/30/2002"
"8/3/2004"
ALTER TABLE books
ALTER COLUMN publication_date SET DATA TYPE date;
outputs:
ERROR: column "publication_date" cannot be cast automatically to type date
HINT: You might need to specify "USING publication_date::date".
SQL state: 42804
The error message tells you what to do:
ALTER TABLE books
ALTER COLUMN publication_date SET DATA TYPE date
USING to_date(publication_date, 'mm/dd/yyyy');
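Before running the ALTER, it can help to look for rows that would not parse. A minimal sketch, reusing the table and column from the question (the regex only checks the m/d/yyyy shape, not calendar validity):

```sql
-- Find values that don't match the expected m/d/yyyy shape.
-- This is only a shape check; to_date itself will still reject
-- impossible dates such as '13/40/2004'.
SELECT publication_date
FROM books
WHERE publication_date !~ '^\d{1,2}/\d{1,2}/\d{4}$';
```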
This question already has answers here:
Create timestamp index from JSON on PostgreSQL
(1 answer)
Index on Timestamp: Functions in index expression must be marked as IMMUTABLE
(1 answer)
Closed 4 years ago.
Using postgres 10.4.
I have a table my_table1 with a column attr_a declared as jsonb.
Within that field I have:
{"my_attrs": {"UpdatedDttm": "<date as ISO 8601 string>"}}
for example:
{"my_attrs": {"UpdatedDttm": "2018-09-20T17:55:52Z"}}
I use the date in a query that needs to ORDER BY it (and compare it with another date).
Because the dates are stored as strings, I have to convert them to timestamps on the fly using to_timestamp.
But I also need that ordering and filtering expression in an index, so that Postgres can leverage it.
The problem is that my index creation fails with this message:
create index my_table1__attr_a_jb_UpdatedDttm__idx
  ON my_table1 USING btree
  ((to_timestamp(attr_a->'my_attrs'->>'UpdatedDttm',
                 'YYYY-MM-DD"T"HH24:MI:SS.MS"Z"')) DESC);

ERROR: functions in index expression must be marked IMMUTABLE
SQL state: 42P17
Is there a way to make this work (ideally without creating a custom date-formatting function)? I also need to keep the time zone (I cannot drop it).
I checked a similar question,
and I am using the to_timestamp recommended there, without the ::timestamptz typecast.
I still get the error when using the built-in function.
That answer suggests creating my own function, but I was trying to avoid that (as noted above), because it would require rewriting all the SQL statements that use the table.
You could create an INSERT/UPDATE trigger that copies that nested JSONB value out to a top-level column, then index that column normally. You'd then also use that top-level column in your queries, effectively ignoring the nested JSONB value.
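A minimal sketch of that approach, reusing the table and key names from the question (the new column and function names are hypothetical):

```sql
-- Hypothetical column/function names. The cast runs inside the
-- trigger, so the IMMUTABLE restriction on index expressions
-- no longer applies.
ALTER TABLE my_table1 ADD COLUMN updated_dttm timestamptz;

CREATE OR REPLACE FUNCTION sync_updated_dttm() RETURNS trigger AS $$
BEGIN
  NEW.updated_dttm :=
    (NEW.attr_a->'my_attrs'->>'UpdatedDttm')::timestamptz;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table1_sync_updated_dttm
  BEFORE INSERT OR UPDATE ON my_table1
  FOR EACH ROW EXECUTE PROCEDURE sync_updated_dttm();

-- A plain btree index on the real timestamptz column.
CREATE INDEX my_table1__updated_dttm__idx
  ON my_table1 (updated_dttm DESC);
```

Existing rows would need a one-off UPDATE to backfill the new column before the index is useful for them.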
I'm using the latest version of:
% psql --version
psql (PostgreSQL) 9.6.5
So far my Phoenix application has worked fine. The last time I reset my db was about 2-3 weeks ago.
Now, after I've reset it, my custom PostgreSQL function has started throwing an exception related to "integer vs bigint for the ID/primary key column":
DETAIL: Returned type bigint does not match expected type integer in column 1.
But it's always been integer in my app with no problem.
The thing is that I've not changed anything in the migrations related to ID columns.
Have there been any breaking changes in Ecto or PostgreSQL related to ID/primary key data types?
P.S.
In all my old Phoenix applications, all ID columns are integers; that's how Ecto or Phoenix generated them. I haven't reset the db in those apps. However, in this app they're now generated as bigint. Why? Where can I read about this?
Ecto changed the default type for IDs from integer to bigint. Here's the issue and PR on the Ecto GitHub repo, if you want to see the code and read about it:
https://github.com/elixir-ecto/ecto/issues/1879
For general discussion you might want to create a topic at elixirforum.com.
I am mainly from a SQL Server background. Following some issues getting MySQL to work with the Microsoft Sync Framework (namely, it does not cater for snapshots), I am having to look into Postgres and try to get that working with the Sync Framework.
The triggers that are needed include a call to the function @@DBTS, but I am having trouble finding a Postgres equivalent for it.
From the microsoft documentation for this it says:
@@DBTS returns the current database's last-used timestamp value.
A new timestamp value is generated when a row with a timestamp
column is inserted or updated.
In MySQL it was the following:
USE INFORMATION_SCHEMA;
SELECT MAX(UPDATE_TIME) FROM TABLES WHERE UPDATE_TIME < NOW();
Can anyone tell me what this would be in Postgres?
PostgreSQL does not keep track of when a table was last modified, so there is no equivalent for SQL Server's @@DBTS, nor for MySQL's INFORMATION_SCHEMA.TABLES.UPDATE_TIME.
You also might be interested in this discussion:
http://archives.postgresql.org/pgsql-general/2009-02/msg01171.php
which essentially says: "if you need to know when a table was last modified, you have to add a timestamp column to each table that records the last time the row was updated".
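A minimal sketch of that approach (the table, column, and function names here are hypothetical):

```sql
-- Hypothetical names. now() returns the transaction start time.
ALTER TABLE my_table
  ADD COLUMN last_modified timestamptz NOT NULL DEFAULT now();

CREATE OR REPLACE FUNCTION touch_last_modified() RETURNS trigger AS $$
BEGIN
  NEW.last_modified := now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_touch_last_modified
  BEFORE UPDATE ON my_table
  FOR EACH ROW EXECUTE PROCEDURE touch_last_modified();

-- The most recent modification across the table is then:
-- SELECT max(last_modified) FROM my_table;
```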