I am getting this error when I try to insert from C# into MySQL.
I tried using a BIGINT, but it didn't work.
MySql.Data.MySqlClient.MySqlException (0x80004005): Out of range value for column 'S_num' at row 1
The value of the C# int is either too big or too small for the column to handle (probably too big).
BIGINT in MySQL translates directly into C#'s Int64 (or long).
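For reference, a C# Int32 always fits inside a true BIGINT column, so it is worth double-checking the column's actual type. A minimal sketch for widening it, where your_table is a placeholder (only the column name S_num comes from the error message):

-- INT holds -2147483648 .. 2147483647 (maps to C#'s Int32)
-- BIGINT holds -9223372036854775808 .. 9223372036854775807 (maps to C#'s Int64 / long)
ALTER TABLE your_table MODIFY COLUMN S_num BIGINT;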
Hope this post helps.
I have looked into other solutions but could not work out the problem from the explanations. I am trying to run a Python script that loads data from an OLTP MySQL database (AWS RDS) into an OLAP database on AWS Redshift. I have defined my table in Redshift as below:
create_product = ("""CREATE TABLE IF NOT EXISTS product (
    productCode varchar(15) NOT NULL PRIMARY KEY,
    productName varchar(70) NOT NULL,
    productLine varchar(50) NOT NULL,
    productScale varchar(10) NOT NULL,
    productVendor varchar(50) NOT NULL,
    productDescription text NOT NULL,
    buyPrice decimal(10,2) NOT NULL,
    MSRP decimal(10,2) NOT NULL
)""")
I am using a Python script to load the data from RDS to Redshift. This is the body of my load function:
for query in dimension_etl_query:
    oltp_cur.execute(query[0])
    items = oltp_cur.fetchall()
    try:
        olap_cur.executemany(query[1], items)
        olap_cnx.commit()
        logger.info("Inserted data with: %s", query[1])
    except sqlconnector.Error as err:
        logger.error("Error %s Couldn't run query %s", err, query[1])
Running the script throws this error:
olap_cur.executemany(query[1], items)
psycopg2.errors.StringDataRightTruncation: value too long for type character varying(256)
I have checked the length of each column in my MySQL database, and only productDescription holds values longer than 256 characters. However, I am using the text datatype in Postgres for that column. I would appreciate any tips on how to find the root cause.
See here:
https://docs.aws.amazon.com/redshift/latest/dg/r_Character_types.html#r_Character_types-text-and-bpchar-types
TEXT and BPCHAR types
You can create an Amazon Redshift table with a TEXT column, but it is converted to a VARCHAR(256) column that accepts variable-length values with a maximum of 256 characters.
You can create an Amazon Redshift column with a BPCHAR (blank-padded character) type, which Amazon Redshift converts to a fixed-length CHAR(256) column.
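You can verify this silent conversion yourself; a quick check using Redshift's pg_table_def system view (the table name t_demo is mine):

CREATE TABLE t_demo (d TEXT);

-- inspect the type Redshift actually stored
SELECT "column", type FROM pg_table_def WHERE tablename = 't_demo';
-- type comes back as: character varying(256)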
Looks like you might need VARCHAR, I think. From the same link:
VARCHAR or CHARACTER VARYING
...
If used in an expression, the size of the output is determined using the input expression (up to 65535).
You will have to experiment to see if that works.
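For example, a minimal sketch of the corrected definition, assuming the 65535 maximum from the docs is acceptable for your data (only the changed column is shown in full):

CREATE TABLE IF NOT EXISTS product (
    productCode        VARCHAR(15) NOT NULL PRIMARY KEY,
    -- ... other columns unchanged ...
    productDescription VARCHAR(65535) NOT NULL  -- was TEXT, which Redshift turns into VARCHAR(256)
);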
Just try to keep everything under 256 characters, even if the column is a text type.
My table has a varchar(64) column. When I try to convert it to char(36), it does not work:
SELECT
    CONVERT('00edff66-3ef4-4447-8319-fc8eb45776ab', CHAR(36)) AS A,
    CAST('00edff66-3ef4-4447-8319-fc8eb45776ab' AS CHAR(36)) AS B
The DESC result shows the type still comes back as varchar rather than char.
It's like this because MariaDB is derived from MySQL, where there was no CAST AS VARCHAR option. In MySQL there was only CAST AS CHAR, which produced VARCHAR. Here are the options that were supported in MySQL 5.6:
https://dev.mysql.com/doc/refman/5.6/en/cast-functions.html#function_cast
Note that they explicitly mention that "No padding occurs for values shorter than N characters". Later, MariaDB added CAST AS VARCHAR only to make it more cross-platform compatible with systems like Oracle, PostgreSQL, MSSQL, etc.:
https://jira.mariadb.org/browse/MDEV-11283
But CAST AS CHAR and CAST AS VARCHAR are still the same, and I guess they should be. Why would you need a fixed-length datatype in RAM? It should matter only when you store the value.
For example if you have a table with CHAR datatype:
CREATE TABLE tbltest(col CHAR(10));
and you insert data cast as VARCHAR, for example:
INSERT INTO tbltest(col) VALUES(CAST('test' AS VARCHAR(5)));
it will be stored as CHAR(10), because that's the datatype the table uses.
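And to see the "no padding" behavior from the MySQL docs quoted above, a quick check (the alias is mine):

-- CAST AS CHAR does not blank-pad: the result length stays 4, not 10
SELECT CHAR_LENGTH(CAST('test' AS CHAR(10))) AS len;  -- returns 4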
I built a DB with the serial type for the PKs. I migrated to another server, and the PK columns are now plain integer; as a result I cannot add new data due to the NOT NULL restriction on a PK. Is there an ALTER command which can fix this?
SERIAL is not a data type in PostgreSQL, just a convenience word when creating tables that makes the column an integer type and adds auto-incrementing. All you have to do is add the auto-incrementing (a sequence) back to the column and make sure its next value is greater than anything already in the table.
This question covers adding serial to an existing column
This answer explains how to reset the counter
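Putting those two together, a minimal sketch assuming hypothetical names mytable, id, and mytable_id_seq:

-- recreate the sequence and tie its lifetime to the column
CREATE SEQUENCE mytable_id_seq OWNED BY mytable.id;

-- make the column auto-increment again
ALTER TABLE mytable ALTER COLUMN id SET DEFAULT nextval('mytable_id_seq');

-- start the sequence just above the current maximum id
SELECT setval('mytable_id_seq', COALESCE(MAX(id), 0) + 1, false) FROM mytable;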
Using EF Core for PostgreSQL, I have an entity with a field of type byte, but decided to change it to type byte[]. When I apply the generated migration, it throws the following exception:
Npgsql.PostgresException (0x80004005): 42804: column "Logo" cannot be
cast automatically to type bytea
I have searched the internet for a solution, but all I saw were similar problems with other datatypes, not byte arrays. Please help.
The error says exactly what is happening... In some cases PostgreSQL allows for column type changes (e.g. int -> bigint), but in many cases where such a change is non-trivial or potentially destructive, it refuses to do so automatically. In this specific case, this happens because Npgsql maps your CLR byte field as PostgreSQL smallint (a 2-byte field), since PostgreSQL lacks a 1-byte data field. So PostgreSQL refuses to cast from smallint to bytea, which makes sense.
However, you can still do a migration by writing the data conversion yourself, from smallint to bytea. To do so, edit the generated migration, find the ALTER COLUMN ... ALTER TYPE statement and add a USING clause. As the PostgreSQL docs say, this allows you to provide the new value for the column based on the existing column (or even other columns). Specifically for converting an int (or smallint) to a bytea, use the following:
ALTER TABLE tab ALTER COLUMN col TYPE BYTEA USING set_byte(E'0', 0, col);
If your existing column happens to contain more than a single byte (should not be an issue for you), it should get truncated. Obviously test the data coming out of this carefully.
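One way to spot-check it, using the same (hypothetical) tab and col names:

-- each value should now be a 1-byte bytea holding the old smallint value
SELECT col, octet_length(col) FROM tab LIMIT 10;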
Which column attribute should I use in order to get the index value from a PostgreSQL sequence? valueNumeric? valueComputed?
As far as I understand, the value of the attribute should be nextval('simple_id_seq').
In PostgreSQL, sequence values are created as INTEGER or BIGINT.
Often this is done by using SERIAL or BIGSERIAL as the column type, which indirectly creates a sequence of int or bigint and sets the default value of the column to nextval(sequence).
In a result set of table data the column contains int or bigint.
Normally there is no need to use nextval(sequence): the column is filled automatically on INSERT (the column should not appear in the INSERT statement).
Refer to http://www.postgresql.org/docs/9.3/static/datatype-numeric.html
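As a sketch of roughly what SERIAL expands to (the names tbl and tbl_id_seq are hypothetical):

CREATE SEQUENCE tbl_id_seq;
CREATE TABLE tbl (
    id integer NOT NULL DEFAULT nextval('tbl_id_seq')
);
ALTER SEQUENCE tbl_id_seq OWNED BY tbl.id;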
If you do not want to use SERIAL or BIGSERIAL as suggested by @double_word_distruptor, use valueComputed.
With valueComputed you are telling Liquibase you are passing a function like nextval('simple_id_seq') and it will not try to parse it as a number or do any quoting.
You may also be able to use valueSequenceNext="simple_id_seq" to gain a little cross-database compatibility.
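For illustration, with valueComputed the INSERT that Liquibase generates ends up calling the function directly; a sketch against a hypothetical table simple with columns id and name:

-- effect of <column name="id" valueComputed="nextval('simple_id_seq')"/>
INSERT INTO simple (id, name) VALUES (nextval('simple_id_seq'), 'example');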