Error "Fixed Length column [MyColumn] data length mismatch" in Oracle 12c

I have an error when doing:
FDTable.Refresh;
The error is the following:
"Fixed Length column [COL3] data length mismatch. Value length - [34], column fixed length - [4]"
The above error does not always occur, and sometimes it goes away when I rename the table in Oracle (RENAME ... TO ...) to any other name.
My table is like the following:
CREATE TABLE "MyDB"."TABLE_A"(
COL1 VARCHAR2(1 CHAR) DEFAULT(' ') COLLATE USING_NLS_COMP NOT NULL ENABLE,
COL2 NUMBER(10, 0) NOT NULL ENABLE,
COL3 NUMBER(10, 0) NOT NULL ENABLE
//other columns
CONSTRAINT TABLE_A1 PRIMARY KEY (COL1, COL2, COL3)
) DEFAULT COLLATION USING_NLS_COMP
I have the following "Map Rule":
with MapRules.Add do begin
  PrecMin:= 10;
  PrecMax:= 10;
  ScaleMin:= 0;
  ScaleMax:= 0;
  SourceDataType:= dtBCD;
  TargetDataType:= dtInt32;
end;
I have observed in "TFDPhysOracleCommand.CreateDefineInfo" that when I activate the table (TFDTable.Active := True), the precision of this column is 10; when it fails, however, the precision picked up in the variable "iPrec" is 38.
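(As a hedged aside, not from the original post: the precision Oracle itself has on record for the column can be checked in the data dictionary; if it reports 10 there, the stray 38 comes from the client-side metadata rather than from the table.)
SELECT column_name, data_type, data_precision, data_scale
FROM all_tab_columns
WHERE owner = 'MyDB'          -- schema owner as in the CREATE TABLE
AND table_name = 'TABLE_A'
AND column_name = 'COL3';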
Information:
Oracle 12c R2.
RAD Studio XE7.
Any suggestion?
Thanks in advance.

Related

sqlalchemy seems to have no support for INSERT in a CTE

Given the table creation statement and query below, it's necessary to get the old values before an update:
CREATE TABLE IF NOT EXISTS products(
id INT GENERATED BY DEFAULT AS IDENTITY NOT NULL PRIMARY KEY,
product_id INT UNIQUE,
image_link CHARACTER VARYING NOT NULL,
additional_image_links CHARACTER VARYING[] NOT NULL
);
WITH temp AS (
INSERT INTO products(product_id, image_link, additional_image_links)
VALUES(1, 'http://www.e1xazm1ple1k113.com',ARRAY['http://www.examkple1113.com','http://www.example2.com'])
ON CONFLICT (product_id) DO UPDATE SET image_link = EXCLUDED.image_link, additional_image_links = EXCLUDED.additional_image_links
WHERE products.image_link != EXCLUDED.image_link OR products.additional_image_links != EXCLUDED.additional_image_links
RETURNING id, image_link, additional_image_links
)
SELECT image_link, additional_image_links FROM products WHERE id IN (SELECT id FROM temp);
If a conflict happens and the new values meet the criteria, a result is produced; however, I need to use the SQLAlchemy machinery for this. An approximate but not working example:
def upsert(table, rows, constraint, update_cols):
    query = insert(table).values(rows)
    return query.on_conflict_do_update(
        constraint=constraint,
        set_={c: getattr(query.excluded, c) for c in update_cols},
        where=getattr(table.c, "additional_image_link") != getattr(query.excluded, "additional_image_link"),
    ).cte("upsert")
Calling it produces this exception:
sesh = session(autocommit=False, autoflush=False, engine=DEFAULT)
sesh.execute(upsert(*args))
sqlalchemy.exc.ArgumentError: Executable SQL or text() construct expected, got <sqlalchemy.sql.selectable.CTE at 0x1042c3f10; upsert>.

PostgreSQL ERROR: invalid input syntax for integer: "1e+06"

The full error message is:
ERROR: invalid input syntax for integer: "1e+06"
SQL state: 22P02
Context: In PL/R function sample
The query I'm using is:
WITH a as
(
SELECT a.tract_id_alias,
array_agg(a.pgid ORDER BY a.pgid) as pgids,
array_agg(a.sample_weight_geo ORDER BY a.pgid) as block_weights
FROM results_20161109.block_microdata_res_joined a
WHERE a.tract_id_alias in (66772, 66773, 66785, 66802, 66805, 66806, 66813)
AND a.bldg_count_res > 0
GROUP BY a.tract_id_alias
)
SELECT NULL::INTEGER agent_id,
a.tract_id_alias,
b.year,
unnest(shared.sample(a.pgids,
b.n_agents,
1 * b.year,
True,
a.block_weights)
) as pgid
FROM a
LEFT JOIN results_20161109.initial_agent_count_by_tract_res_11 b
ON a.tract_id_alias = b.tract_id_alias
ORDER BY b.year, a.tract_id_alias, pgid;
And the shared.sample function I'm using is:
CREATE OR REPLACE FUNCTION shared.sample(ids bigint[], size integer, seed integer DEFAULT 1, with_replacement boolean DEFAULT false, probabilities numeric[] DEFAULT NULL::numeric[])
RETURNS integer[] AS
$BODY$
set.seed(seed)
if (length(ids) == 1) {
  s = rep(ids, size)
} else {
  s = sample(ids, size, with_replacement, probabilities)
}
return(s)
$BODY$
LANGUAGE plr VOLATILE
COST 100;
ALTER FUNCTION shared.sample(bigint[], integer, integer, boolean, numeric[])
OWNER TO "server-superusers";
I'm pretty new to this stuff, so any help would be appreciated.
This is not a problem with the function. As the error message says: the string '1e+06' cannot be cast to integer.
Obviously, the column n_agents in your table results_20161109.initial_agent_count_by_tract_res_11 is not an integer column. Probably type text or varchar? (That info would help in your question.)
Either way, the assignment cast does not work for the target type integer. But it does for numeric:
Does not work:
SELECT '1e+06'::text::int; -- error as in question
Works:
SELECT '1e+06'::text::numeric::int;
If my assumptions hold, you can use this as stepping stone.
Replace b.n_agents in your query with b.n_agents::numeric::int.
It's your responsibility that numbers stay in integer range, or you get the next exception.
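Applied to the query in the question, only the second argument of shared.sample changes (a sketch of the fix above; everything else is unchanged):
unnest(shared.sample(a.pgids,
                     b.n_agents::numeric::int,  -- cast via numeric first, then to int
                     1 * b.year,
                     True,
                     a.block_weights)
      ) as pgid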
If that did not nail it, you need to look into function overloading:
Is there a way to disable function overloading in Postgres
And function type resolution:
PostgreSQL function call
The schema search path is relevant in many related cases, but you did schema-qualify all objects, so we can rule that out.
How does the search_path influence identifier resolution and the "current schema"
Your query generally looks good. I had a look and only found minor improvements:
SELECT NULL::int AS agent_id -- never omit the AS keyword for column alias
, a.tract_id_alias
, b.year
, s.pgid
FROM (
SELECT tract_id_alias
, array_agg(pgid) AS pgids
, array_agg(sample_weight_geo) AS block_weights
FROM ( -- use a subquery, cheaper than CTE
SELECT tract_id_alias
, pgid
, sample_weight_geo
FROM results_20161109.block_microdata_res_joined
WHERE tract_id_alias IN (66772, 66773, 66785, 66802, 66805, 66806, 66813)
AND bldg_count_res > 0
ORDER BY pgid -- sort once in a subquery. cheaper.
) sub
GROUP BY 1
) a
LEFT JOIN results_20161109.initial_agent_count_by_tract_res_11 b USING (tract_id_alias)
LEFT JOIN LATERAL
unnest(shared.sample(a.pgids
, b.n_agents
, b.year -- why "1 * b.year"?
, true
, a.block_weights)) s(pgid) ON true
ORDER BY b.year, a.tract_id_alias, s.pgid;

LIKE with nvarchar is not working correctly?

Using Microsoft SQL Server 2012 - 11.0.5058.0 (X64) Express Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1) (Hypervisor)
with this creation/fill script:
CREATE TABLE tbl (col CHAR(32))
insert into tbl values ('test')
These kinds of statements:
declare @var varchar(32) = 'test'
delete from tbl where col like @var
or
delete from tbl where col like 'test'
actually delete the row, but why does this one:
declare @nvar nvarchar(32) = 'test'
delete from tbl where col like @nvar
not delete the row?
According to this page... https://msdn.microsoft.com/en-us/library/ms179859.aspx
When you use Unicode data (nchar or nvarchar data types) with LIKE,
trailing blanks are significant;
Since you are using CHAR data type with length 32, the actual data stored is "test" + 28 spaces.
In your comparison, you are mixing CHAR and nvarchar. Because of the differing data types, SQL Server converts the CHAR data type to NCHAR to perform the comparison.
If you change the data type of the column to VARCHAR, your code works. You could also change your code to:
delete from tbl where col like @nvar + '%'
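To see the trailing-blank rule in action, here is a small self-contained repro (a sketch using a table variable instead of the original tbl):
declare @t table (col char(32));
insert into @t values ('test');  -- stored as 'test' + 28 trailing spaces
declare @nvar nvarchar(32) = 'test';
select count(*) from @t where col like @nvar;       -- 0: Unicode LIKE, trailing blanks significant
select count(*) from @t where col like @nvar + '%'; -- 1: the wildcard absorbs the padding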

Column is of type timestamp without time zone but expression is of type character

I'm trying to implement an SCD2 on Redshift, but I get an error when inserting records.
The target table's DDL is
CREATE TABLE ditemp.ts_scd2_test (
id INT
,md5 CHAR(32)
,record_id BIGINT IDENTITY
,from_timestamp TIMESTAMP
,to_timestamp TIMESTAMP
,file_id BIGINT
,party_id BIGINT
)
This is the insert statement:
INSERT
INTO ditemp.TS_SCD2_TEST(id, md5, from_timestamp, to_timestamp)
SELECT TS_SCD2_TEST_STAGING.id
,TS_SCD2_TEST_STAGING.md5
,from_timestamp
,to_timestamp
FROM (
SELECT '20150901 16:34:02' AS from_timestamp
,CASE
WHEN last_record IS NULL
THEN '20150901 16:34:02'
ELSE '39991231 11:11:11.000'
END AS to_timestamp
,CASE
WHEN rownum != 1
AND atom.id IS NOT NULL
THEN 1
WHEN atom.id IS NULL
THEN 1
ELSE 0
END AS transfer
,stage.*
FROM (
SELECT id
FROM ditemp.TS_SCD2_TEST_STAGING
WHERE file_id = 2
GROUP BY id
HAVING count(*) > 1
) AS scd2_count_ge_1
INNER JOIN (
SELECT row_number() OVER (
PARTITION BY id ORDER BY record_id
) AS rownum
,stage.*
FROM ditemp.TS_SCD2_TEST_STAGING AS stage
WHERE file_id IN (2)
) AS stage
ON (scd2_count_ge_1.id = stage.id)
LEFT JOIN (
SELECT max(rownum) AS last_record
,id
FROM (
SELECT row_number() OVER (
PARTITION BY id ORDER BY record_id
) AS rownum
,stage.*
FROM ditemp.TS_SCD2_TEST_STAGING AS stage
)
GROUP BY id
) AS last_record
ON (
stage.id = last_record.id
AND stage.rownum = last_record.last_record
)
LEFT JOIN ditemp.TS_SCD2_TEST AS atom
ON (
stage.id = atom.id
AND stage.md5 = atom.md5
AND atom.to_timestamp > '20150901 16:34:02'
)
) AS TS_SCD2_TEST_STAGING
WHERE transfer = 1
To keep things short: I am trying to insert 20150901 16:34:02 into from_timestamp and 39991231 11:11:11.000 into to_timestamp, and I get:
ERROR: 42804: column "from_timestamp" is of type timestamp without time zone but expression is of type character varying
Can anyone please suggest how to solve this issue?
Postgres isn't recognizing 20150901 16:34:02 (your input) as a valid time/date format, so it assumes it's a string.
Use a standard date format instead, preferably ISO 8601: 2015-09-01T16:34:02
SQLFiddle example
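For instance, a minimal sketch against the table from the question (the id and md5 values are dummies for illustration):
INSERT INTO ditemp.ts_scd2_test (id, md5, from_timestamp, to_timestamp)
VALUES (1, 'dummy-md5', '2015-09-01T16:34:02', '3999-12-31T11:11:11');
With ISO-8601 literals, the implicit cast to TIMESTAMP succeeds.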
Just in case someone ends up here trying to insert a timestamp or timestamptz into PostgreSQL from a variable in Groovy or Java via a prepared statement and getting the same error (as I did): I managed to do it by setting the connection property stringtype to "unspecified". According to the documentation:
Specify the type to use when binding PreparedStatement parameters set
via setString(). If stringtype is set to VARCHAR (the default), such
parameters will be sent to the server as varchar parameters. If
stringtype is set to unspecified, parameters will be sent to the
server as untyped values, and the server will attempt to infer an
appropriate type. This is useful if you have an existing application
that uses setString() to set parameters that are actually some other
type, such as integers, and you are unable to change the application
to use an appropriate method such as setInt().
Properties props = [user : "user", password: "password",
driver:"org.postgresql.Driver", stringtype:"unspecified"]
def sql = Sql.newInstance("url", props)
With this property set, you can insert a timestamp as a string variable without the error raised in the question title. For instance:
String myTimestamp = Instant.now().toString()
sql.execute("""INSERT INTO MyTable (MyTimestamp) VALUES (?)""",
    [myTimestamp.toString()])
This way, the type of the timestamp (from a String) is inferred correctly by postgresql. I hope this helps.
Inside apache-tomcat-9.0.7/conf/server.xml, add "?stringtype=unspecified" to the end of the URL address.
For example:
<GlobalNamingResources>
<Resource name="jdbc/??" auth="Container" type="javax.sql.DataSource"
...
url="jdbc:postgresql://127.0.0.1:5432/Local_DB?stringtype=unspecified"/>
</GlobalNamingResources>

How to import data into Teradata tables from a delimited file using BTEQ import?

I am trying to execute the following BTEQ command in a Linux environment but couldn't load the data properly into the Teradata DB server. Can someone please advise me on how to resolve the issue I am facing while loading?
BTEQ command used:
.SET width 64000;
.SET session transaction btet;
.logmech ldap
.logon XXXXXXX/XXXXXXXX,********;
DATABASE corecm;
.PACK 1000
.IMPORT VARTEXT '~' FILE=/v/global/user/application_event_bus_evt
.REPEAT *
USING(APPLICATION_EVENT_ID CHAR(24),BUS_EVT_ID CHAR(24),BUS_EVT_VID BIGINT,BUS_EVT_RESTATE_IN SMALLINT)
insert into corecm.application_event_bus_evt (APPLICATION_EVENT_ID
, BUS_EVT_ID
, BUS_EVT_VID
, BUS_EVT_RESTATE_IN
)
values
( COALESCE(:APPLICATION_EVENT_ID,1)
, COALESCE(:BUS_EVT_ID,1)
, COALESCE(:BUS_EVT_VID,1)
, COALESCE(:BUS_EVT_RESTATE_IN,1)
) ;
.LOGOFF;
.EXIT;
SAMPLE INPUT FILE, DELIMITER "~" [ /v/global/user/application_event_bus_evt ]:
Ckn3gMxLEeOgIQBQVgErYA==~g+GDDtlaY3n7BdUrYshDFA==~1~1
CL1kEcxLEeOgIQBQVgErYA==~qoKoiuGDbClpcGt/z6RKGw==~1~1
oYIVcMxKEeOgIQBQVgErYA==~mfmQiwl7yAteevzJfilMvA==~1~1
5N7ME5bM4xGhM7exj3ykUw==~yFM2FZbM4xGhM7exj3ykUw==~1~0
JLBH4JfM4xGDH9s5+Ds/8w==~doZ/7pfM4xGDH9s5+Ds/8w==~1~0
fGvpoMxKEeOgIQBQVgErYA==~mQUQIK2mY6WIPcszfp5BTQ==~1~1
Table definition:
CREATE MULTISET TABLE CORECM.APPLICATION_EVENT_BUS_EVT ,NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT,
DEFAULT MERGEBLOCKRATIO
(
APPLICATION_EVENT_ID CHAR(26) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
BUS_EVT_ID CHAR(26) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
BUS_EVT_VID BIGINT NOT NULL,
BUS_EVT_RESTATE_IN SMALLINT)
UNIQUE PRIMARY INDEX ( APPLICATION_EVENT_ID ,BUS_EVT_ID ,BUS_EVT_VID )
INDEX APPLICATION_EVENT_BUS_EVT_IDX1 ( APPLICATION_EVENT_ID )
INDEX APPLICATION_EVENT_BUS_EVT_IDX2 ( BUS_EVT_ID ,BUS_EVT_VID );
The result set in the DB server is:
APPLICATION_EVENT_ID BUS_EVT_ID BUS_EVT_VID BUS_EVT_RESTATE_IN
1 Ckn3gMxLEeOgIQBQVgErYA == g+GDDtlaY3n7BdUrYshD 85,849,873,219,141,958 12,544
2 CL1kEcxLEeOgIQBQVgErYA == qoKoiuGDbClpcGt/z6RK 85,849,873,219,155,783 12,544
3 oYIVcMxKEeOgIQBQVgErYA == mfmQiwl7yAteevzJfilM 85,849,873,219,142,006 12,544
4 5N7ME5bM4xGhM7exj3ykUw == JAf0GpbM4xGhM7exj3yk 85,849,873,219,155,797 12,288
5 JLBH4JfM4xGDH9s5+Ds/8w == Du6T7pfM4xGDH9s5+Ds/ 85,849,873,219,155,768 12,288
6 fGvpoMxKEeOgIQBQVgErYA == mQUQIK2mY6WIPcszfp5B 85,849,873,219,146,068 12,544
Looking at the data, we can see two issues:
The first two columns' data length is 24 characters (as per the input file), but the data has been shifted two characters into the next column.
The columns BUS_EVT_VID and BUS_EVT_RESTATE_IN have wrong data, 85,849,873,219,141,958 and 12,544 instead of 1 and 1 respectively (this may be because the first two columns' data got shifted).
I tried the following options to resolve the issue, but none of them worked:
Modified the table definition, i.e. changed the datatypes to CHAR(28), CHAR(24), CHAR(26).
Modified the table definition column datatypes to VARCHAR(24), VARCHAR(26).
Modified the BTEQ command, i.e. altered the datatypes in the line below:
USING(APPLICATION_EVENT_ID CHAR(24),BUS_EVT_ID CHAR(24),BUS_EVT_VID BIGINT,BUS_EVT_RESTATE_IN SMALLINT)
Thanks in advance.
When you define VARTEXT, all input columns must be defined as VARCHAR, but you used CHAR and numeric types.
This should work, with the VARCHAR lengths based on the definition of your target table:
USING(
APPLICATION_EVENT_ID VARCHAR(26),
BUS_EVT_ID VARCHAR(26),
BUS_EVT_VID VARCHAR(19),
BUS_EVT_RESTATE_IN VARCHAR(6)
)
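Putting it together, the import section of the original script would then read (a sketch; everything except the USING clause is unchanged from the question, and Teradata implicitly converts the VARCHAR values to BIGINT and SMALLINT on insert):
.IMPORT VARTEXT '~' FILE=/v/global/user/application_event_bus_evt
.REPEAT *
USING(APPLICATION_EVENT_ID VARCHAR(26),BUS_EVT_ID VARCHAR(26),BUS_EVT_VID VARCHAR(19),BUS_EVT_RESTATE_IN VARCHAR(6))
insert into corecm.application_event_bus_evt (APPLICATION_EVENT_ID, BUS_EVT_ID, BUS_EVT_VID, BUS_EVT_RESTATE_IN)
values ( COALESCE(:APPLICATION_EVENT_ID,1), COALESCE(:BUS_EVT_ID,1), COALESCE(:BUS_EVT_VID,1), COALESCE(:BUS_EVT_RESTATE_IN,1)) ;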