How to get the servername/hostname in Firebird 2.5.x

I use Firebird 2.5, and I want to retrieve the following values:
Username:
I used SELECT rdb$get_context('SYSTEM', 'CURRENT_USER') FROM ...
Database name:
I used SELECT rdb$get_context('SYSTEM', 'DB_NAME') FROM ...
But for the server name I did not find any client API. Do you know how I can retrieve the server name with a SELECT statement?

There is nothing built-in in Firebird to obtain the server host name (or host names, as a server can have multiple host names) through SQL.
The closest you can get is by requesting the isc_info_firebird_version information item through the isc_database_info API function. This returns version information that, if connecting through TCP/IP, includes the host name of the server.
But as your primary reason for this is to discern between environments in SQL, it might be better to look for a different solution. Some ideas:
Use an external table
You can create an external table to contain the environment information you need.
In this example I just put in a short, fixed-width name for the environment types, but you could include other information; just be aware the external table format is a binary format. When using CHAR it will look like a fixed-width format, where values shorter than declared need to be padded with spaces.
You can follow these steps:
Configure ExternalFileAccess in firebird.conf (for this example, you'd need to set ExternalFileAccess = Restrict D:\data\DB\exttables).
You can then create a table as
create table environment
external file 'D:\data\DB\exttables\environment.dat' (
environment_type CHAR(3) CHARACTER SET ASCII NOT NULL
)
Next, create the file D:\data\DB\exttables\environment.dat and populate it with exactly three characters (e.g. TST for test, PRO for production, etc.). You can also insert the value instead; the external table file will be created automatically. Inserting might be simpler if you want more columns, or data with varying lengths, etc. Just keep in mind it is binary, but using CHAR for all columns will make it look like text.
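For illustration, populating the table through SQL instead of creating the file by hand could look like this (using the environment table defined above; TST is just an example value):
insert into environment (environment_type) values ('TST');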
Do this for each environment, and make sure the file is read-only to avoid accidental inserts.
After this is done, you can obtain the environment information using
select environment_type
from environment
You can share the same file for multiple databases on the same host, and external files are, by default, not included in a gbak backup (they are only included if you apply the -convert backup option), so this would allow moving databases between environments without dragging this information along.
Use a UDF or UDR
You can write a UDF (User-Defined Function) or a UDR (User-Defined Routine) in a suitable programming language to return the information you want, and define this function in your database. Firebird can then call this function from SQL.
UDFs are considered deprecated, and you should use UDRs, introduced in Firebird 3, instead if possible.
I have never written a UDF or UDR myself, so I can't describe it in detail.
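For illustration only, the registration DDL for a UDR in Firebird 3 looks along these lines (the module name host_udr and the function get_hostname are hypothetical; you would have to write or obtain a UDR library that exports them):
create function get_hostname()
returns varchar(255) character set ascii
external name 'host_udr!get_hostname'
engine udr;
After that, select get_hostname() from rdb$database would return whatever value the UDR produces.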

Related

How to increase the length of a property of an attribute of a class in the ECO Model designer

I am using MDriven build 7.0.0.11347 for a DDD project and have a model designed in an .ecomdl file.
In this file I have a class Job with WorkDone as one of its properties. The backing SQL table has a WorkDone varchar(255) field. I wanted to increase the length of this field, and when I changed the WorkDone property length from 255 to 2000 it modified the code file, but when the application runs EvolveSchema the evolve process doesn't recognize this change, which leads to no scripts being generated. In the end the database never gets the update.
Can you please help me get this change persisted to the database? I thought about increasing it manually in the SQL table, but then whenever the database is set up in a new environment (QA, production) it would have to be done every time, which I don't want to do.
In MDriven we don't evolve attribute changes - we only write a warning (255->2000: this change will not be evolved).
You should take steps to alter the column in the database yourself.
We should fix this in the future, but currently it is a limitation.
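Until the tool handles it, a manual migration script works. As a sketch, assuming the backing database is SQL Server (adjust the syntax for your DBMS):
ALTER TABLE Job ALTER COLUMN WorkDone VARCHAR(2000);
Putting this in your deployment scripts means new environments such as QA and production pick it up automatically.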
To expand on my comment: in older versions of MySQL (before 5.0.3), VARCHAR could only be 0-255 characters.
Using TEXT will allow for non-binary (character) strings, and BLOBs will allow for binary (byte) strings.
Your mileage may vary as to what you can do with them, as I am drawing on MySQL knowledge and knowledge bases (since you don't specify your SQL type).
See below for explanations of the types:
char / varchar
blobs / text
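For example, if the backing database is MySQL and you opt for TEXT, the change could look like this (the Job table and WorkDone column are taken from the question above):
ALTER TABLE Job MODIFY WorkDone TEXT;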

Creating a spectrum table in Matillion for a CSV file with commas inside quotes

I have a scenario for creating a spectrum table in Redshift using Matillion.
My CSV file data is like this:
column1,column2,column3
abc,"qwerty,pqr",xyz
but in the spectrum table I am seeing the data as
column1 column2 column3
abc     qwerty  pqr
Matillion is not treating the quoted value as one field.
Can you please suggest how to achieve this using Matillion's EXTERNAL TABLE component?
Basically you would like to specify a quote parameter for your CSV data.
Redshift has 2 ways of specifying external tables (see Redshift Docs for reference):
using the default built-in SerDes and properties like ROW FORMAT DELIMITED, FIELDS TERMINATED BY
explicitly specifying a SerDe with ROW FORMAT SERDE, WITH SERDEPROPERTIES
I don't think it's possible to specify a quote parameter using the built-in SerDes.
It is possible to specify them using org.apache.hadoop.hive.serde2.OpenCSVSerde (look here for details on its properties), but beware that there are known problems with it, such as the one described in this SO question.
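For reference, a sketch of the second approach in plain Redshift DDL (the schema, table, column sizes and S3 location here are hypothetical):
CREATE EXTERNAL TABLE spectrum_schema.my_table (
    column1 VARCHAR(100),
    column2 VARCHAR(100),
    column3 VARCHAR(100)
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
    'separatorChar' = ',',
    'quoteChar' = '"'
)
STORED AS TEXTFILE
LOCATION 's3://my-bucket/my-prefix/';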
Now for Matillion:
I have never used Matillion, but looking at their Redshift External Table documentation page, it looks like it's only possible to specify the FORMAT and the FIELD TERMINATOR, but not a SerDe and its properties, hence it's not possible to specify the quote parameter for the external table - unless there are some undocumented means to specify a custom SerDe.
Personal note:
We have experienced many problems with ingesting data stored as CSV, and we basically try to avoid it. There's no standard for CSV, each tool implements its own version of support for it, and it's very difficult to convince all your tools to see the data the same way.

Db2 for i: CPYF *NOCHK emulation

In the IBM i system there's a way to copy from a structured file to one without structure using CPYF *NOCHK.
How can it be done with SQL?
The answer may be "You can't", at least not if you are using DDL-defined tables. The problem is that *NOCHK just dumps data into the file like a flat file. Files defined with CRTPF, whether they have source or are program-defined, don't care about bad data until read time, so they can contain bad data. In fact, you can even read bad data out of a file if you use a program definition for that file.
But an SQL table (one defined using DDL) cannot contain bad data. No matter how you write it, the database validates the data at write time. Even the *NOCHK option of the CPYF command cannot coerce bad data into an SQL table.
There really isn't an easy way
Closest would be to just build a big character string using CONCAT...
insert into flatfile
select mycharfld1
       concat cast(myvchar as char(20))
       concat digits(zonedFld3)
from mytable
That works for fixed length, varchar (if cast to char) and zoned decimal...
Packed decimal would be problematic.
I've seen user-defined functions that can return the binary character string that makes up a packed decimal... but it's very ugly.
I question why you think you need to do this.
You can use the QSYS2.QCMDEXC stored procedure to execute OS commands.
Example:
call qsys2.qcmdexc ( 'CPYF FROMFILE(QTEMP/FILE1) TOFILE(QTEMP/FILE2) MBROPT(*replace) FMTOPT(*NOCHK)' )

Firebird error: Attempted update of read-only database

I am using Firebird database version 2.0. When I try to update a row, I get an error message: Attempted update of read-only database.
http://www.firebirdfaq.org/faq359/ suggests the cause may be querying a blob field that uses a different character set than the connection character set.
I do query a blob field, and when the blob field has a value the update causes the error. If there is no value in the blob, the update works fine.
When I use IBConsole to open the Firebird database and check the database metadata, I find the metadata says "Default character set NONE".
To fix the problem I first need to know which character sets are used in my database.
So my questions are:
what is the character set being used for my database (the connection character set)?
the data type of the blob field is MEMOBLOB, and MEMOBLOB is created as Create domain MEMOBLOB as blob sub_type TEXT segment size 80; so what is the character set used for the MEMOBLOB?
No, that is not about queries or BLOBs.
Firebird databases have a number of modes, one of them being "read-only". In this mode no change to the database is allowed.
You can use the gfix utility to change this database mode. You can also use the corresponding menu in IBExpert and other development tools that use the Firebird Services API.
The very link you posted - http://www.firebirdfaq.org/faq359/ - says that:
It does not mean that the database file is read-only, but it (database) contains a read-only mark
gfix -mode read_only /path/to/database.fdb
gfix -mode read_write /path/to/database.fdb
See also https://www.firebirdsql.org/manual/gfix-dbmode.html
See also https://www.ibexpert.net/ibe/pmwiki.php?n=Doc.DatabaseProperties
As for the tangential questions:
character set being used for my database
According to your text, it is NONE.
To be exact, the database itself does not use a charset; it is every textual column (char/varchar/blob sub_type text) that does. But usually the developer does not bother with specifying individual per-column charsets, so they inherit the default one.
Read also: https://www.firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref25-ddl-tbl.html#fblangref25-ddl-tbl-character
metadata says "Default character set NONE"
This is as close to a "database charset" as it gets.
Granted, that is only the default, and you could override it when creating your columns, but I do not think you did. So probably all your textual columns have charset NONE.
That is a rather dangerous setting: it means all texts in such columns are stored as raw byte dumps, in the hope that the application will correctly guess how to convert bytes to letters and letters to bytes.
Read more: https://www.firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref25-datatypes-chartypes.html
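For illustration, such a per-column override looks like this (the table and column names are made up):
create table customer (
    name varchar(50) character set UTF8
);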
FlameRobin does not seem to show the charset by default, but it may in the DDL section.
http://www.flamerobin.org/images/screenshots/0.6.0/gtk2/property_page.png
IBExpert does: https://www.ibexpert.net/ibe/uploads/Doc/dmiles783.gif
(Connection character set)
...is not "the character set used by the database"; it is the character set used by the connection your application (such as IBConsole, FlameRobin or IBExpert) makes to the database.
You have to set it in every application's connection properties. The easiest option would be UTF-8, but when you have NONE-charset columns it might fail...
For example in FR: http://www.flamerobin.org/images/screenshots/0.7.1/winxp/databaseinfo.png
You can use monitoring tables to query for charset id of your CURRENT_CONNECTION, see more at https://www.firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref-appx05-monattach.html
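For example, a query along these lines returns the charset id of the current connection (MON$ATTACHMENTS and CURRENT_CONNECTION are standard Firebird 2.5 features described at the link above):
select mon$character_set_id
from mon$attachments
where mon$attachment_id = current_connection;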
After I added transaction parameters for the relevant tables, the problem was solved.
The parameter values I added are isc_tpb_lock_write and isc_tpb_shared.
Thank you, Arioch 'The.

How does one prevent MS Access from concatenating the schema and table names, thereby taking them over the 64-character limit?

I have been trying to get around this for several days now with no luck. I loaded LibreOffice to see how that would handle it, and its native support for PostgreSQL works wonderfully; I can see the true data structure, which is how I found out I was dealing with more than one table. What I am seeing in MS Access is the two names concatenated together. The concatenation takes them over the 64-character limit that appears to be built into the ODBC driver. I have seen many references to modifying namedatalen on the server side, but my problem is on the ODBC side. Most of the tables are under the 64-character limit even with the concatenation and work fine, so I know everything else is working. The specific error I am getting is
'your_extra_long_schema_name_your_table_name_that_you_want_to_get_data_from'
is not a valid name. Make sure it does not include invalid characters
or punctuation and that it is not too long.
Object names in an Access database are limited to 64 characters (ref: here). When creating an ODBC linked table in the Access UI the default behaviour is to concatenate the schema name and table name with an underscore and use that as the linked table name so, for example, the remote table table1 in the schema public would produce a linked table in Access named public_table1. If such a name exceeds 64 characters then Access will throw an error.
However, we can use VBA to create the table link with a shorter name, like so:
Option Compare Database
Option Explicit

Sub so38999346()
    DoCmd.TransferDatabase acLink, "ODBC Database", "ODBC;DSN=PostgreSQL35W", acTable, _
        "public.hodor_hodor_hodor_hodor_hodor_hodor_hodor_hodor_hodor_hodor", _
        "hodor_linked_table"
End Sub
(Tested with Access 2010.)