How to easily determine version of .fdb-file (Firebird database)

When looking at a .fdb database from a proprietary piece of software (probably using Firebird Embedded), how can I determine which version of Firebird I need to set up?
The only way I can currently imagine is to look with a hex viewer at the 'ODS version' field, which is part of the page header (and most likely also used as the format of the file header), and then somehow dig through the repository history to find out which Firebird release supports which ODS version. The ODS version, at least nowadays, is encoded as shown below.
Related docs: https://firebirdsql.org/file/documentation/reference_manuals/reference_material/Firebird-Internals.pdf
Related code:
https://github.com/FirebirdSQL/firebird/blob/3dd6a2f5366e0ae3d0e6793ef3da02f0fd05823a/src/jrd/ods.h
and
inline USHORT DECODE_ODS_MAJOR(USHORT ods_version)
{
    return ((ods_version & 0x7FF0) >> 4);
}

inline USHORT DECODE_ODS_MINOR(USHORT ods_version)
{
    return (ods_version & 0x000F);
}
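For illustration, here is a minimal Python sketch of that hex-viewer approach (my own, not an official tool). It assumes the hdr_ods_version USHORT sits at offset 18 of the first page (the 16-byte generic page header plus the 2-byte hdr_page_size, per the header_page layout in ods.h) and that the database comes from a little-endian platform:

import struct
import sys

def read_ods_version(path):
    # Read the start of the header page (page 0) of the .fdb file
    with open(path, "rb") as f:
        header = f.read(20)
    # hdr_ods_version: USHORT at offset 18, little-endian (assumption, see above)
    (ods_version,) = struct.unpack_from("<H", header, 18)
    major = (ods_version & 0x7FF0) >> 4   # DECODE_ODS_MAJOR
    minor = ods_version & 0x000F          # DECODE_ODS_MINOR
    return major, minor

if __name__ == "__main__":
    major, minor = read_ods_version(sys.argv[1])
    print("ODS version %d.%d" % (major, minor))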
Is there really no easier way to determine the required Firebird version, e.g. with some CLI tool?

If you have a Firebird installation at hand, you can use gstat to check the ODS for a database. For example:
gstat -h <path-to-your-database>
If the ODS version of the database is supported by the version of gstat, you'll get something like:
Database "D:\DATA\DB\FB4\FB4TESTDATABASE.FDB"
Gstat execution time Sat Mar 17 18:08:09 2018
Database header page information:
Flags 0
Generation 308
System Change Number 0
Page size 16384
ODS version 13.0
Oldest transaction 393
Oldest active 394
Oldest snapshot 394
Next transaction 395
Sequence number 0
Next attachment ID 150
Implementation HW=AMD/Intel/x64 little-endian OS=Windows CC=MSVC
Shadow count 0
Page buffers 0
Next header page 0
Database dialect 3
Creation date Jan 6, 2017 14:05:48
Attributes force write
Variable header data:
*END*
Here ODS version 13.0 means it is a Firebird 4 database.
If the gstat version does not support the ODS version of the database, you will get an error like this (e.g. in this case, using a Firebird 4 gstat on a Firebird 2.5/ODS 11.2 database):
Wrong ODS version, expected 13, encountered 11
This has its downsides though: it doesn't report the ODS minor version, and, for example, using a Firebird 2.0 (ODS 11.0) or 2.1 (ODS 11.1) gstat to access a Firebird 2.5 (ODS 11.2) database leads to the unhelpful error message:
Wrong ODS version, expected 11, encountered 11
The quickest route is to use a Firebird 2.5 gstat, as this will allow you to pinpoint the exact ODS version between 10 (Firebird 1) and 11.2 (Firebird 2.5), while its error message will let you determine whether you need a newer version (e.g. ODS 12 is Firebird 3, ODS 13 is Firebird 4).
However, you will also need to look at the Implementation line of the gstat output. Firebird database files use platform-specific storage (although this has been reduced since Firebird 2.0). For example, in Firebird 1.5 and earlier (ODS 10), a database from a 32-bit Firebird cannot be accessed by a 64-bit Firebird, and a Firebird database from a little-endian platform (the most common) cannot be read on a big-endian platform (and vice versa).
Within these limitations, a Firebird 2.5 installation can read databases with ODS 10 through 11.2. Firebird 3 can only read ODS 12, and Firebird 4 only ODS 13.
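For quick reference, a rough lookup table for the ODS numbers named in this answer (with the Firebird 1.x minor versions split out) could look like this sketch:

# Rough mapping of (ODS major, ODS minor) to Firebird version, per the versions discussed above
ODS_TO_FIREBIRD = {
    (10, 0): "Firebird 1.0",
    (10, 1): "Firebird 1.5",
    (11, 0): "Firebird 2.0",
    (11, 1): "Firebird 2.1",
    (11, 2): "Firebird 2.5",
    (12, 0): "Firebird 3.0",
    (13, 0): "Firebird 4.0",
}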
If there are platform mismatches (e.g. old 32/64-bit databases or little/big-endian differences) or unsupported ODS versions, you will need a transportable backup (gbak) to convert and/or upgrade the database.
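For example, a typical round trip might look like the following (file names are placeholders): make the backup with a gbak that supports the database's ODS, then restore it with the gbak of the target Firebird version, which creates the database with the new ODS:
gbak -b mydb.fdb mydb.fbk
gbak -c mydb.fbk mydb_new.fdb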
For an overview of ODS versions and accompanying Firebird (or InterBase) version, see All Firebird and InterBase On-Disk-Structure (ODS) versions.

Related

DBVisualizer displays null on date field holding '0001-01-01'

I issued an SQL statement in DbVis:
select vestdate, name from person where vestdate is not null
And got many results where DbVisualizer showed vestdate as (null)!
After investigating, I discovered that the vestdate was '0001-01-01', so the query correctly returned these records, but DbVisualizer displays them as (null).
I have just switched from Windows 8 to Windows 10.
It works on Windows 8 (displays '0001-01-01'), but not on Windows 10 (displays null):
Product: DbVisualizer Pro 11.0.4 [Build #3103]
OS: Windows 8.1
OS Version: 6.3
OS Arch: amd64
Java Version: 1.8.0_252
Java VM: OpenJDK 64-Bit Server VM
Java Vendor: AdoptOpenJDK
Java Home: c:\program files\dbvisualizer\jre
DbVis Home: C:\Program Files\DbVisualizer
User Home: -------
PrefsDir: -------
SessionId: 55
BindDir: -------
Product: DbVisualizer Pro 11.0.5 [Build #3113]
OS: Windows 10
OS Version: 10.0
OS Arch: amd64
Java Version: 1.8.0_252
Java VM: OpenJDK 64-Bit Server VM
Java Vendor: AdoptOpenJDK
Java Home: c:\program files\dbvisualizer\jre
DbVis Home: C:\Program Files\DbVisualizer
User Home: -------
PrefsDir: -------
SessionId: 968
BindDir: -------
Any ideas how to make the program show me the real value, not the interpreted value of null?
The issue is explained in an IBM support document:
Problem
Trying to insert a date value into a date column before 1940 or after 2039 will represent the date as NULL within the respective database table.
Cause
This is caused by a limitation with the IBM i database Toolbox JDBC driver as detailed in the related link:
How does the Toolbox JDBC driver deal with dates before 1940 (or after 2039)?
The IBM i database supports several date formats. The Toolbox JDBC driver uses the date format that is set up as the default on the IBM i system. This default is usually set to "mdy" which only supports dates between 1940 and 2039. You can override the date format by specifying the "date format" property when opening the JDBC connection. The best choice is "iso", which supports a full four-digit date. The easiest way to do this is to add ";date format=iso" to the end of the URL used when connecting to the database.
Resolving The Problem
Appending the ";date format=iso" to the host connection property for the applicable database
via Preferences-> EGL-> SQL Database Connections will then show the respective dates correctly eg:
1939-01-01.
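For illustration, with the IBM Toolbox (jt400) JDBC driver the property can also be appended directly to the connection URL, as the quoted document suggests (the system name below is a placeholder):
jdbc:as400://mysystem;date format=iso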
The issue can be fixed in DBVisualizer by doing the following:
Database -> Edit Database Connection(s)...
Select Properties tab
Select Driver Properties
Edit parameters date format and time format to be iso
Apply Changes
Disconnect and reconnect

How to identify truncated columns in SQL Server 2016

I have been experimenting with the code below, and it seems it does not work.
DBCC TRACEON (460);
DECLARE @aa as TABLE (name varchar(5))
INSERT INTO @aa
SELECT '1234567890'
Error
String or binary data would be truncated
Expected error:
String or binary data would be truncated in table '@aa', column 'name'. Truncated value: '1234567890'
According to https://www.procuresql.com/blog/2018/09/26/string-or-binary-data-get-truncated/, SQL Server 2019 will be able to identify the columns that have been truncated, and the feature can supposedly also be enabled in SQL Server 2016 using trace flag 460.
In terms of roles, I have "public", "processadmin", and "sysadmin".
In sys.messages I think I can see the message for this feature, based on message_id = 2628:
+------------+------------------------------------------------------------------------------------------------------+
| message_id | text |
+------------+------------------------------------------------------------------------------------------------------+
| 2628 | String or binary data would be truncated in table '%.*ls', column '%.*ls'. Truncated value: '%.*ls'. |
| 8152 | String or binary data would be truncated. |
+------------+------------------------------------------------------------------------------------------------------+
Details:
Microsoft SQL Server 2016 Standard (64-bit)
Version : 13.0.5149.0
Is Clustered : False
Is HADR Enabled : False
Is XTP Supported : True
The new error message hasn't yet been back-ported to SQL Server 2016. From this post (emphasis mine):
This new message is also backported ... (and in an upcoming SQL Server 2016 SP2 CU) ...
This CU has not been delivered yet. The most recent, CU5 (13.0.5264.1), was released in January and did not include it.
And just a small correction, you need to opt in to this behavior (via the trace flag) even in the SQL Server 2019 CTPs. The reason is that a different error number is produced, and this could break existing applications and unit tests that behave based on the error number raised. This will be documented as a breaking change when SQL Server 2019 is released, but I'm sure it will still bite some people when they upgrade.

db2set codepage is not working in DB2 on Windows

I am using both DB2 version 9.7 and 10.1. When I execute the command to change the code page, like "db2set db2codepage=1250", it does not show any error, but after that, when I import the data, it throws the following error:
SQLCODE: -332 - SQLSTATE: 57017
*** SQL0332N Character conversion from the source code page "1252" to the target code page "1250" is not supported. SQLSTATE=57017
If the charset is UTF-8, you should set DB2CODEPAGE=1208 on non-Windows systems and 1252 on Windows, and then restart your application.
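For example, reusing the db2set syntax from the question, on a non-Windows client with UTF-8 data that would be the following command, followed by a restart of the application:
db2set DB2CODEPAGE=1208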

Apache solr 5.3.1 out of memory

I'm new to Solr, and I have been struggling for a few days to run a full indexing against a PostgreSQL 9.4 DB on an entity with about 117.000.000 entries.
I'm using Solr 5.3.1 on Windows 7 x64 with 16 GB of RAM. I'm not intending to use this machine as a server; it's just some kind of prototyping I'm doing.
I kept getting this error on an x86 JDK when just starting Solr as solr start without any options. Then I tried:
solr start -m 2g, which results in Solr not coming up at all
solr start -m 1g, which makes Solr start, but after indexing about 87.000.000 entries it dies with an out-of-memory error.
It dies at exactly the same point as without any options, and in the admin dashboard I can see that the JVM heap is full.
So, since Solr warns me anyway to use an x64 JDK, I did, and now use 8u65. I started Solr with a 4g heap and started the full import again. Again, after 87.000.000 entries it threw the same exception, but the heap isn't even full (42%), and neither is RAM or swap.
Does anyone have an idea what could be the reason for this behaviour?
Here is my data-config
<dataConfig>
  <dataSource
      type="JdbcDataSource"
      driver="org.postgresql.Driver"
      url="jdbc:postgresql://localhost:5432/dbname"
      user="user"
      password="secret"
      readOnly="true"
      autoCommit="false"
      transactionIsolation="TRANSACTION_READ_COMMITTED"
      holdability="CLOSE_CURSORS_AT_COMMIT" />
  <entity name="hotel"
      query="select * from someview;"
      deltaImportQuery="select * from someview where solr_id = '${dataimporter.delta.id}'"
      deltaQuery="select * from someview where changed > '${dataimporter.last_index_time}';">
    <field name="id" column="id"/>
    ... etc for all 84 columns
In solrconfig.xml I have defined a RequestProcessorChain to generate a unique key while indexing, which seems to work.
In schema.xml there are again 84 columns with type, indexed, and other attributes.
Here is the exception I'm getting. The messages are in German: "FEHLER: Speicher aufgebraucht" means "ERROR: out of memory", and "Fehler bei Anfrage mit Größe 48" means "Failed on request of size 48".
getNext() failed for query 'select * from someview;':org.apache.solr.handler.dataimport.DataImportHandlerException: org.postgresql.util.PSQLException: FEHLER: Speicher aufgebraucht
Detail: Fehler bei Anfrage mit Größe 48.
at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:62)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:416)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.access$500(JdbcDataSource.java:296)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:331)
at org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:132)
at org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:74)
at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:243)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:461)
Caused by: org.postgresql.util.PSQLException: FEHLER: Speicher aufgebraucht
Detail: Fehler bei Anfrage mit Größe 48.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2182)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1911)
at org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:2113)
at org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1964)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:408)
... 12 more
Thank you in advance
As pointed out by MatsLindh, it was a JDBC error. Meanwhile I worked with Hibernate Search and experienced the same error at exactly the same point (near 87.000.000 indexed entities). The trick was to commit more often.
So in this case I tried several things at once, and it worked (I don't know which option exactly did the trick):
1. Set maxDocs for autoCommit in solrconfig.xml to 100.000. I believe the default is to commit after something like 15 seconds with no new documents added; since new documents are being added all the time, that never happens and the heap fills up.
2. Set batchSize for the PostgreSQL JDBC driver to 100 (the default is 500).
3. Changed the evil 'select * from table' to 'select c1, c2, ..., c85 from table'.
4. Updated the JDBC driver from 9.4.1203 to 9.4.1207.
5. Updated Java to 1.8u74.
I think it worked because of 1. and/or 3.; I will do some further testing and update my post.
While I was trying the indexing with Hibernate Search, I could see that the RAM allocated by the PostgreSQL server was freed at each commit, so RAM was never an issue again. That didn't happen here, and the DB server was at 85 GB of RAM in the end, but it kept on working.

Pyodbc utf-8 bind param error With FreeTDS and unixODBC

FreeTDS version 0.82
unixODBC version 2.3.0
pyodbc version 2.1.8
freetds.conf:
tds version = 7.0
client charset = UTF-8
Using Servername in the odbc.ini (which, for some crazed reason, made a difference in getting unixODBC to recognize the client charset from FreeTDS).
I'm able to pull UTF-8 data correctly and can update with the string inline, i.e.:
UPDATE table
SET col = N'私はトカイ大好き'
WHERE id = 182333369
But
text = u'私はトカイ大好き'
cursor.execute("""
UPDATE table
SET column = ?
WHERE id = 182333369
""", text)
Fails with:
pyodbc.Error: ('HY004', '[HY004] [FreeTDS][SQL Server]
Invalid data type (0) (SQLBindParameter)')
If I add:
text = text.encode('utf-8')
I get the following error:
pyodbc.ProgrammingError: ('42000', '[42000] [FreeTDS][SQL Server]The incoming tabular data stream (TDS) protocol stream is incorrect. The stream ended unexpectedly. (4002) (SQLExecDirectW)')
Any ideas as to where things have gone astray?
Unicode support was reworked in pyodbc 3.0.x. Try testing with the latest source (3.0.2-beta02, etc.)
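For example, with pyodbc 3.x the parameterized update from the question should be able to bind the Unicode string directly, without the manual encode step (a rough sketch; the connection string is a placeholder and the table/column names are taken from the question):

import pyodbc

# Placeholder DSN/credentials; adjust to your FreeTDS/unixODBC setup
conn = pyodbc.connect("DSN=mydsn;UID=user;PWD=secret")
cursor = conn.cursor()

text = u'私はトカイ大好き'
cursor.execute("""
UPDATE table
SET col = ?
WHERE id = 182333369
""", text)
conn.commit()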