I have a client table that has IP and macaddr saved as BIGINT.
I was able to convert the IP to text with ::inet; how do I transform the BIGINT to macaddr?
Example of a MAC address as saved: 8796349528980
You can convert your BIGINT to hex with TO_HEX, left-pad it to 12 digits (macaddr expects exactly six bytes), and abuse PostgreSQL's macaddr type to format it, like this:
SELECT LPAD(TO_HEX(8796349528980), 12, '0')::macaddr;
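If the value lives in a table column, the same expression applies; a minimal sketch, where the table and column names (clients, mac_bigint) are assumptions:
-- Left-pad to 12 hex digits so shorter values still parse as macaddr
SELECT lpad(to_hex(mac_bigint), 12, '0')::macaddr AS mac
FROM clients;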
We tried to upload a CSV file with a column 'cpf' to AWS Athena; the cpf field contains numbers like this: '372.088.989-03'.
create external table (
  cpf bigint,
  name string,
  cell bigint
)
Athena doesn't read this field; how can I register it?
We tried registering it as a string and that works, but it doesn't seem correct.
Ah! It's the CPF number - Wikipedia.
It does not follow the rules for numbers (the dots, the dash, and any leading zeros would be lost), and you won't be doing any arithmetic on it, so I would recommend treating the CPF as a string.
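A minimal sketch of the DDL with cpf declared as a string; the table name, delimiter, and S3 location are assumptions:
CREATE EXTERNAL TABLE clients (
  cpf  string,  -- keep formatting like '372.088.989-03' intact
  name string,
  cell bigint
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://your-bucket/clients/';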
I have looked at other solutions but could not work out the problem from their explanations. I am trying to run a Python script that loads data from an OLTP MySQL database (AWS RDS) into an OLAP database on AWS Redshift. I have defined my table in Redshift as below:
create_product = ("""CREATE TABLE IF NOT EXISTS product (
productCode varchar(15) NOT NULL PRIMARY KEY,
productName varchar(70) NOT NULL,
productLine varchar(50) NOT NULL,
productScale varchar(10) NOT NULL,
productVendor varchar(50) NOT NULL,
productDescription text NOT NULL,
buyPrice decimal(10,2) NOT NULL,
MSRP decimal(10,2) NOT NULL
)""")
I am using a Python script to load the data from RDS to Redshift. This is the body of my load function:
for query in dimension_etl_query:
    oltp_cur.execute(query[0])
    items = oltp_cur.fetchall()
    try:
        olap_cur.executemany(query[1], items)
        olap_cnx.commit()
        logger.info("Inserted data with: %s", query[1])
    except sqlconnector.Error as err:
        logger.error('Error %s Couldnt run query %s', err, query[1])
Running the script throws this error:
olap_cur.executemany(query[1], items)
psycopg2.errors.StringDataRightTruncation: value too long for type character varying(256)
I have checked the length of each column in my SQL database, and only productDescription has values longer than 256 characters. However, I am using the text data type in Postgres for that column. I would appreciate any tips on how to find the root cause.
See here:
https://docs.aws.amazon.com/redshift/latest/dg/r_Character_types.html#r_Character_types-text-and-bpchar-types
TEXT and BPCHAR types
You can create an Amazon Redshift table with a TEXT column, but it is converted to a VARCHAR(256) column that accepts variable-length values with a maximum of 256 characters.
You can create an Amazon Redshift column with a BPCHAR (blank-padded character) type, which Amazon Redshift converts to a fixed-length CHAR(256) column.
Looks like you might need VARCHAR with an explicit length, I think. From the same link:
VARCHAR or CHARACTER VARYING
...
If used in an expression, the size of the output is determined using the input expression (up to 65535).
You will have to experiment to see if that works.
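For example, a minimal sketch of the same table with productDescription declared as VARCHAR; the 5000-character size is an assumption, pick whatever covers your longest description (up to 65535):
CREATE TABLE IF NOT EXISTS product (
    productCode varchar(15) NOT NULL PRIMARY KEY,
    productName varchar(70) NOT NULL,
    productLine varchar(50) NOT NULL,
    productScale varchar(10) NOT NULL,
    productVendor varchar(50) NOT NULL,
    productDescription varchar(5000) NOT NULL,  -- was text, which Redshift silently turns into varchar(256)
    buyPrice decimal(10,2) NOT NULL,
    MSRP decimal(10,2) NOT NULL
);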
Or just try to keep everything under 256 characters, even if the column is declared as TEXT.
My table has a varchar(64) column. When I try to convert it to char(36), it doesn't work.
SELECT
CONVERT('00edff66-3ef4-4447-8319-fc8eb45776ab',CHAR(36)) AS A,
CAST('00edff66-3ef4-4447-8319-fc8eb45776ab' AS CHAR(36)) as B
This is the DESC result:
It works like this because it is derived from MySQL, where there was no CAST AS VARCHAR option; in MySQL there was only CAST AS CHAR, which produced a VARCHAR. Here is what the supported options were in MySQL 5.6:
https://dev.mysql.com/doc/refman/5.6/en/cast-functions.html#function_cast
Note that they explicitly mention that "No padding occurs for values shorter than N characters". Later, MariaDB added CAST AS VARCHAR only to make it more cross-platform compatible with systems like Oracle, PostgreSQL, MSSQL, etc.:
https://jira.mariadb.org/browse/MDEV-11283
But CAST AS CHAR and CAST AS VARCHAR are still the same, and arguably they should be: why would you need a fixed-length data type in RAM? The length should only matter when you store the value.
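You can see the "no padding" rule from the docs directly; a quick check, where the literal values are just for illustration:
-- Both return the character count of the value, not the declared length of the cast:
SELECT CHAR_LENGTH(CAST('00edff66-3ef4-4447-8319-fc8eb45776ab' AS CHAR(36))) AS full_len,  -- 36
       CHAR_LENGTH(CAST('test' AS CHAR(36))) AS short_len;                                 -- 4, no padding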
For example, if you have a table with a CHAR column:
CREATE TABLE tbltest(col CHAR(10));
and you insert a value cast as VARCHAR, for example:
INSERT INTO tbltest(col) VALUES(CAST('test' AS VARCHAR(5)));
it will be stored as CHAR(10), because that's what the table column uses.
In a Postgres 9.3 table I have an integer primary key with an automatic sequence to increment it, but I have reached the maximum for integer. How do I convert it from integer to serial?
I tried:
ALTER TABLE my_table ALTER COLUMN id SET DATA TYPE bigint;
But the same does not work with the data type serial instead of bigint. Seems like I cannot convert to serial?
serial is a pseudo data type, not an actual data type. It's an integer underneath with some additional DDL commands executed automatically:
Create a SEQUENCE (with matching name by default).
Set the column NOT NULL and the default to draw from that sequence.
Make the column "own" the sequence.
Details:
Safely rename tables using serial primary key columns
A bigserial is the same, built around a bigint column. You want bigint, but you already achieved that. To transform an existing serial column into a bigserial (or smallserial), all you need to do is ALTER the data type of the column. Sequences are generally based on bigint, so the same sequence can be used for any integer type.
To "change" a bigint into a bigserial or an integer into a serial, you just have to do the rest by hand:
Creating a PostgreSQL sequence to a field (which is not the ID of the record)
The actual data type is still integer / bigint. Some clients like pgAdmin will display the data type serial in the reverse engineered CREATE TABLE script, if all criteria for a serial are met.
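Putting the manual steps together, a minimal sketch; the sequence name my_table_id_seq is an assumption, and the column is assumed to be named id:
-- 1. Create a sequence owned by the column
CREATE SEQUENCE my_table_id_seq OWNED BY my_table.id;
-- 2. Make the column NOT NULL and default to the sequence
ALTER TABLE my_table
    ALTER COLUMN id SET NOT NULL,
    ALTER COLUMN id SET DEFAULT nextval('my_table_id_seq');
-- 3. Start the sequence after the current maximum id
SELECT setval('my_table_id_seq', COALESCE(max(id), 1)) FROM my_table;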
I have a table with a text column representing a MAC address, but I want to use the macaddr type instead. So, I tried:
alter table mytable alter column mac_column type macaddr;
And got this error
ERROR: column "mac_column" cannot be cast automatically to type macaddr
HINT: Specify a USING expression to perform the conversion.
But I don't know what to use as USING expression:
alter table mytable alter column mac_column type macaddr using(????????)
What should I use as USING expression?
Many thanks in advance
You can't simply change the data type because there is already data in the column. Since the existing data is text, PostgreSQL can't implicitly treat it as macaddr, even though the values are valid string representations of MAC addresses. So, as PostgreSQL suggests in the hint, you can use a USING expression to cast your data to macaddr:
ALTER TABLE mytable ALTER COLUMN mac_column TYPE macaddr USING (mac_column::macaddr);
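If some rows might not parse, you can spot them before running the ALTER; a hedged sketch, where the pattern is an assumption that only covers the common colon- or dash-separated form:
-- Rows whose text would make the ::macaddr cast fail
SELECT mac_column
FROM mytable
WHERE mac_column !~ '^([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}$';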