I am using the free version of SymmetricDS to replicate my Firebird database. When I demoed it by creating a new (blank) DB, it worked fine. But when I configured it on my existing DB (which has data), an error occurred.
I use Firebird-2.5.5.26952_0 32-bit and symmetric-server-3.9.5; the OS is Windows Server 2008 Enterprise.
I have searched for a whole day but found nothing that solves this. Any help is appreciated. Thanks for your time.
UPDATE:
During the initial load, SymmetricDS executes this statement to declare a UDF in the Firebird DB:
declare external function sym_hex blob
returns cstring(32660) free_it
entry_point 'sym_hex' module_name 'sym_udf';
It caused an error because my existing DB's charset is UNICODE_FSS, where the maximum length of CSTRING is 10922. When I worked around it by changing the charset to NONE, it worked fine. But that is not a safe solution, so I am still looking for a better one.
One more thing: does anyone know of other open source tools to replicate a Firebird database? I tried many of them, and only SymmetricDS worked.
The problem seems to be a bug in Firebird where the length of CSTRING is supposed to be in bytes, but in reality it is counted in characters. Your database seems to have UTF8 (or UNICODE_FSS) as its default character set, which means each character can take up to 4 bytes (3 for UNICODE_FSS). The maximum length of CSTRING is 32767 bytes, but if Firebird counts characters instead, the maximum suddenly drops to 8191 characters (32764 bytes) for UTF8, or 10922 characters (32766 bytes) for UNICODE_FSS.
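The arithmetic behind those limits can be checked with a short sketch (the assumed inputs are the 32767-byte CSTRING maximum and the per-character byte widths of the two charsets):

```python
# Maximum CSTRING size in bytes, per Firebird's limits.
MAX_CSTRING_BYTES = 32767

# If Firebird (incorrectly) treats the declared length as characters,
# the safe maximum shrinks to floor(32767 / bytes_per_char).
for charset, bytes_per_char in (("UTF8", 4), ("UNICODE_FSS", 3)):
    max_chars = MAX_CSTRING_BYTES // bytes_per_char
    print(charset, max_chars, max_chars * bytes_per_char)

# UTF8:        8191 characters (32764 bytes)
# UNICODE_FSS: 10922 characters (32766 bytes)
```

This is why the declaration of cstring(32660) fails under UNICODE_FSS: 32660 exceeds the 10922-character ceiling.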
The workaround to this problem would be to create a database with a different default character set. Alternatively, you could (temporarily) alter the default character set:
For Firebird 3:
1. Set the default character set to a single-byte character set (e.g. NONE). Use of NONE is preferable to avoid unintended transliteration issues:
alter database set default character set NONE;
2. Disconnect (important: you may need to disconnect all current connections because of metadata caching!)
3. Set up SymmetricDS so it creates the UDF.
4. Set the default character set back to UTF8 (or UNICODE_FSS):
alter database set default character set UTF8;
5. Disconnect again.
When using Firebird 2.5 or earlier, you will need to perform a direct system table update (which is no longer possible in Firebird 3), replacing the ALTER DATABASE statements above. To switch to NONE:
update RDB$DATABASE set RDB$CHARACTER_SET_NAME = 'NONE'
And to switch back:
update RDB$DATABASE set RDB$CHARACTER_SET_NAME = 'UTF8'
The alternative would be for SymmetricDS to change its initialization to
DECLARE EXTERNAL FUNCTION SYM_HEX
BLOB
RETURNS CSTRING(32660) CHARACTER SET NONE FREE_IT
ENTRY_POINT 'sym_hex' MODULE_NAME 'sym_udf';
Or maybe character set BINARY instead of NONE, as that seems closer to the intent of the UDF.
Related
We are seeing an issue with table values populated from DB2 (source) to Postgres (target).
I have included all the job details for each component.
With the above approach, once the data has been populated, we run the queries below in the Postgres DB:
SELECT * FROM VMRCTTA1.VMRRCUST_SUMM where cust_gssn_cd='XY03666699' ;
SELECT * FROM VMRCTTA1.VMRRCUST_SUMM where cust_cntry_cd='847' ;
No records are returned. However, when we run the same queries with trim, as below, they work:
SELECT * FROM VMRCTTA1.VMRRCUST_SUMM where trim(cust_gssn_cd)='XY03666699' ;
SELECT * FROM VMRCTTA1.VMRRCUST_SUMM where trim(cust_cntry_cd)='847' ;
Below are the ways we have tried to overcome this, with no luck:
Used a tmap between the source and target components.
Used trim in the source component under Advanced settings.
Changed the datatype of cust_cntry_cd in the Postgres DB from char(5) to character varying, which allows values without any length restriction.
Please suggest what is missing, as we have this issue in almost all tables with character/varchar columns.
We are using TOS.
The data type is probably character(5) in DB2.
That means that the trailing spaces are part of the column value and are migrated as well. You have to compare with
cust_cntry_cd = '847 '
or cast the right-hand argument to character(5):
cust_cntry_cd = CAST ('847' AS character(5))
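A quick sketch of why the bare literal fails: character(5) blank-pads the stored value, so the comparison sees the trailing spaces. The Python below only simulates the SQL padding behaviour, it is not the database's code:

```python
def char_n(value: str, n: int = 5) -> str:
    """Simulate SQL character(n): blank-pad the value to fixed width n."""
    return value.ljust(n)

stored = char_n("847")             # migrated from DB2 as '847  '
print(stored == "847")             # False: trailing spaces differ
print(stored == "847  ")           # True: padded literal matches
print(stored.strip() == "847")     # True: this is what trim() does
```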
Maybe you could delete all spaces in the advanced settings of the tDB2Input component.
I need to display some mathematical/Greek symbols in the RTE and later in the frontend. Inserting them via copy/paste or the "Insert characters" option works great, but as soon as I save the text, the inserted symbol gets replaced with a question mark and TYPO3 throws the following error:
1: These fields of record 56 in table "tt_content" have not been saved correctly: bodytext! The values might have changed due to type casting of the database.
I think there is an issue with the character set of TYPO3 or my DB, but I don't know where to start looking.
I tested on my 7.6.8 installation and it seems to work OK. When I log in to MySQL and run this query:
SELECT default_character_set_name FROM information_schema.SCHEMATA
WHERE schema_name = "7_6_local_typo3_org";
(7_6_local_typo3_org is the database name) it returns:
+----------------------------+
| default_character_set_name |
+----------------------------+
| utf8 |
+----------------------------+
1 row in set (0.00 sec)
and also collation:
SELECT default_collation_name FROM information_schema.SCHEMATA
WHERE schema_name = "7_6_local_typo3_org";
+------------------------+
| default_collation_name |
+------------------------+
| utf8_general_ci |
+------------------------+
1 row in set (0.00 sec)
I also have the following in my my.cnf (MySQL config file):
character-set-server = utf8
collation-server = utf8_general_ci
I had a similar problem when pasting HTML with UTF icons into a raw-HTML content element in TYPO3 8.7.x, but it works when I encode the symbols as HTML entities, for example:
<span class="menuicon">⌚</span>
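For reference, a sketch of that entity encoding: ⌚ is U+231A, which is 8986 in decimal, so the numeric character reference round-trips like this (shown with Python's html module, not TYPO3's own processing):

```python
import html

# U+231A WATCH is code point 8986 in decimal.
assert ord("⌚") == 8986

# Decimal and hexadecimal numeric entities both decode back to the symbol.
assert html.unescape("&#8986;") == "⌚"
assert html.unescape("&#x231A;") == "⌚"
```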
Possible reasons for the error message
1: These fields of record X in table "tt_content" have not been saved correctly: bodytext! The values might have changed due to type casting of the database.
in a TYPO3 installation (example installation's version: 10.4.20) can be:
the MySQL/MariaDB tables of the TYPO3 installation use an inappropriate/outdated character set and/or collation (Step 1 below);
the TYPO3 installation is not yet configured to use utf8mb4 for the database (Step 2 below).
TYPO3 supports utf8mb4 since at least version 9.5. With it comes proper Unicode support, including emojis, mathematical symbols, and Greek letters (e.g. ⌚∰β) in CKEditor bodytext.
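The distinction matters because MySQL's legacy "utf8" charset (utf8mb3) stores at most 3 bytes per character, while utf8mb4 supports the full 4-byte range. A quick check of the UTF-8 byte widths of the characters mentioned above (plus an emoji, which is the classic 4-byte case):

```python
# MySQL's legacy "utf8" (utf8mb3) stores at most 3 bytes per character;
# 4-byte characters such as emoji require utf8mb4.
for ch in "⌚∰β😀":
    print(ch, len(ch.encode("utf-8")))
# ⌚ and ∰ take 3 bytes, β takes 2, 😀 takes 4 (needs utf8mb4)
```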
I migrated my TYPO3 installation's database and configuration to utf8mb4 in the following way, getting rid of the aforementioned error message and saving and displaying Unicode multibyte characters correctly.
Be sure to apply these migrations in a test environment first, then check existing content and test usual content editing scenarios before applying these migrations on a production system to make sure MySQL/MariaDB converted between the character sets correctly and without data loss (truncation).
Step 1
Update TYPO3 database tables to use utf8mb4 as character set and utf8mb4_unicode_ci as collation.
The following bash one-liner loops over all tables in database typo3 and applies these updates. It assumes MySQL/MariaDB root privileges, a password-less socket connection, and a TYPO3 database (table_schema) named typo3. Adapt accordingly. Tested successfully on
Debian 11 MariaDB Server (10.5.12-MariaDB-0+deb11u1)
Ubuntu 20.04 LTS MySQL Server (8.0.27-0ubuntu0.20.04.1)
for tbl in $(mysql --disable-column-names --batch -e 'select distinct TABLE_NAME from information_schema.tables where table_schema="typo3" and table_type="BASE TABLE";'); do echo "updating table $tbl" && mysql -e "ALTER TABLE typo3.${tbl} CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"; done
To ensure that during this conversion (from a "smaller" encoding to the up-to-four-bytes-per-character utf8mb4 encoding) no (string) data gets lost/truncated, MySQL/MariaDB automatically adapts a text/string column's datatype to a larger text/string datatype, e.g. from TEXT to MEDIUMTEXT.
To restore a TYPO3 (extension) table's column to its specified datatype, visit TYPO3 backend -> Maintenance -> Analyze Database Structure. This tool allows restoring those columns' original (smaller) datatypes. This may cause data truncation. I'm not sure whether TYPO3 warns if truncation actually occurs; however, assuming the TYPO3 (extension) developers had utf8mb4 in mind when specifying/designing a column's datatype and the user-provided content of a particular database cell is not too large, truncation should not happen (overview of text/string datatype sizes).
Step 2
Configure TYPO3 to use utf8mb4. For example, when leveraging typo3conf/AdditionalConfiguration.php, have the following configurations in AdditionalConfiguration.php:
// ...
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['charset'] = 'utf8mb4';
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['tableoptions']['charset'] = 'utf8mb4';
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['tableoptions']['collate'] = 'utf8mb4_unicode_ci';
// ...
Good day,
I have a Sybase ASE 12.5 database on a Windows NT server.
I need to know the character set of some Arabic data stored in the database.
I checked the database default character set: it is "CP850".
But the stored data are Arabic, so they must be stored using another character set.
I tried checking the "master..syscharsets" table, but I can't find any popular Arabic charsets.
Command: select id, csid, name, description from master..syscharsets
Result: http://dc414.2shared.com/download/CCfkf_RW/syscharsets_cropped.jpg?tsid=20140507-130321-3ade23f2
Any ideas how to find out the character set of the data?
I think it uses cp850 multilingual. Try sp_configure "enable unicode conversions" on the server, and also try sp_help tableName.
I use the translate function to make searches accent-insensitive.
To speed up these queries, I've created a matching index:
CREATE INDEX person_lastname_ci_ai_si
ON person
USING btree
(translate(upper(lastname::text), '\303\200\303\201\303\202\303\203\303\204\303\205\303\206\303\207\303\210\303\211\303\212\303\213\303\214\303\215\303\216\303\217\303\221\303\222\303\223\303\224\303\225\303\226\303\230\303\231\303\232\303\233\303\234\303\235\303\237\303\240\303\241\303\242\303\243\303\244\303\245\303\246\303\247\303\250\303\251\303\252\303\253\303\254\303\255\303\256\303\257\303\261\303\262\303\263\303\264\303\265\303\266\303\270\303\271\303\272\303\273\303\274\303\275\303\277'::text, 'AAAAAAACEEEEIIIINOOOOOOUUUUYSaaaaaaaceeeeiiiinoooooouuuuyy'::text)
);
It works fine with Postgres 9.1, but it doesn't seem to work with 9.0.
Postgres 9.0 seems to replace
'\303\200\303\201\303\202\303\203\303\204\303\205\303\206\303\207\303\210\303\211\303\212\303\213\303\214\303\215\303\216\303\217\303\221\303\222\303\223\303\224\303\225\303\226\303\230\303\231\303\232\303\233\303\234\303\235\303\237\303\240\303\241\303\242\303\243\303\244\303\245\303\246\303\247\303\250\303\251\303\252\303\253\303\254\303\255\303\256\303\257\303\261\303\262\303\263\303\264\303\265\303\266\303\270\303\271\303\272\303\273\303\274\303\275\303\277'
by
ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÑÒÓÔÕÖØÙÚÛÜÝßàáâãäåæçèéêëìíîïñòóôõöøùúûüýÿ
Then, because my code performs searches using the ASCII escape codes, it doesn't use the index.
Is there a way to keep Postgres from converting the escape codes to characters when creating the index?
For example :
select '\303\200\303\201\303\202\303\203\303\204\303\205\303\206\303\207\303\210\303\211\303\212\303\213\303\214\303\215\303\216\303\217\303\221\303\222\303\223\303\224\303\225\303\226\303\230\303\231\303\232\303\233\303\234\303\235\303\237\303\240\303\241\303\242\303\243\303\244\303\245\303\246\303\247\303\250\303\251\303\252\303\253\303\254\303\255\303\256\303\257\303\261\303\262\303\263\303\264\303\265\303\266\303\270\303\271\303\272\303\273\303\274\303\275\303\277'
;
Result
ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÑÒÓÔÕÖØÙÚÛÜÝßàáâãäåæçèéêëìíîïñòóôõöøùúûüýÿ
How can I get this result?
\303\200\303\201\303\202\303\203\303\204\303\205\303\206\303\207\303\210\303\211\303\212\303\213\303\214\303\215\303\216\303\217\303\221\303\222\303\223\303\224\303\225\303\226\303\230\303\231\303\232\303\233\303\234\303\235\303\237\303\240\303\241\303\242\303\243\303\244\303\245\303\246\303\247\303\250\303\251\303\252\303\253\303\254\303\255\303\256\303\257\303\261\303\262\303\263\303\264\303\265\303\266\303\270\303\271\303\272\303\273\303\274\303\275\303\277
Starting with version 9.1, the PostgreSQL standard_conforming_strings option defaults to on.
This means that the backslash \ character is treated as-is and not as an escape symbol; this follows SQL standard recommendations and helps prevent SQL injection attacks.
It is still possible to use \ to get special characters, but only within escape string constants (E'...').
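What Postgres 9.0 does to the string can be reproduced outside SQL: with standard_conforming_strings off, '\303\200' is read as two octal byte escapes, 0xC3 0x80, which is 'À' under UTF-8. A sketch in Python (illustrating the encoding, not Postgres itself):

```python
# '\303\200\303\201' as octal escapes -> bytes C3 80 C3 81 -> 'ÀÁ' in UTF-8.
raw = b"\303\200\303\201"          # first two escape pairs of the index string
assert raw.decode("utf-8") == "ÀÁ"

# With standard_conforming_strings ON, the backslashes stay literal text:
literal = r"\303\200"
assert len(literal) == 8 and literal[0] == "\\"
```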
For pre-9.1 versions of PostgreSQL, I suppose these options are possible:
Change the system-wide standard_conforming_strings option to on, but this affects the whole cluster and may give unexpected results in other areas;
Change standard_conforming_strings on a per-user basis, using ALTER ROLE ... SET standard_conforming_strings TO on;, which may also have side effects;
Issue a plain SET standard_conforming_strings TO on; as the first command in your session, before creating the index;
Double all your backslashes so that they are treated as literal \ characters in your CREATE INDEX ... statement.
Let me know if this helps.
I want to create a table whose field name is 100 characters long, but the Postgres limit is 64 characters. How can I change that limit to 100?
example:
Create table Test
(
PatientFirstnameLastNameSSNPolicyInsuraceTicketDetailEMRquestionEMR varchar(10)
)
This table creation fails because the name exceeds 64 characters.
Actually, the limit on names is NAMEDATALEN - 1 bytes (not necessarily characters); the default value of NAMEDATALEN is 64.
NAMEDATALEN is determined at compile time (in src/include/pg_config_manual.h). You have to recompile PostgreSQL with a new NAMEDATALEN value to raise the limit.
However, think about design and compatibility with other servers that use the standard 63-byte limit. It is not common practice to use such long names.
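Note that Postgres typically truncates an over-long identifier (with a NOTICE) rather than erroring outright, cutting it to NAMEDATALEN - 1 bytes. A sketch of that truncation using the column name from the question (Python here only mimics the byte-level cut, it is not Postgres's code):

```python
NAMEDATALEN = 64  # Postgres compile-time default (pg_config_manual.h)

name = "PatientFirstnameLastNameSSNPolicyInsuraceTicketDetailEMRquestionEMR"
truncated = name.encode("utf-8")[: NAMEDATALEN - 1].decode("utf-8", "ignore")

print(len(name))       # longer than the limit
print(len(truncated))  # 63: what Postgres would actually store
```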
It's because of the special name type (see table 8.5), which is used in pg_catalog. It won't accept anything longer than 63 bytes (plus terminator). There is no workaround.