Firebird 1.5: Duplicate columns in a table

I noticed that Firebird creates duplicate columns for a single table, so incorrect indices are used in the query, which causes it to be slow. Please see the example below.
I have two tables with the same columns and indices, but when checking the table structure, one table shows duplicate columns.
Table A:
    Name VARCHAR(30)
    Age INTEGER
    BIRTH_DATE TIMESTAMP
    Indices: Name, Birth_date (Asc), Birth_date (Desc)
Table B:
    Name VARCHAR(30)
    Age INTEGER
    BIRTH_DATE TIMESTAMP
    Name VARCHAR(30)
    Age INTEGER
    BIRTH_DATE TIMESTAMP
    Indices: Name, Birth_date (Asc), Birth_date (Desc)
When joining the table with Table C and order by Birth_date, Table A is using the Birth_date index Ordered, but Table B is not.
Please help! What is the cause behind this? Thank you.

I just had a problem where a duplicate column had been allowed to be created. This query
SELECT a.RDB$FIELD_NAME
FROM RDB$RELATION_FIELDS a
WHERE a.RDB$FIELD_NAME like '%COLUMN_NAME%'
was showing two COLUMN_NAME lines. By copy-pasting the fields elsewhere, it became apparent that one column name had trailing whitespace, while the other had a carriage return + line feed (CRLF) followed by trailing whitespace.
The FlameRobin wizard was used to create the column. My take on it is that a copy-paste was used and a CRLF was inserted. Excel and other software can do that to you. FlameRobin, FlameRobin's driver, and Firebird should each guard against that, though.
We dropped the offending column by crafting some DDL which had the offending CRLF in the column name.
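Outside the database, hidden characters like these are easy to expose. A minimal Python sketch (not Firebird-specific; the names are illustrative stand-ins for what a query against RDB$RELATION_FIELDS might return):

```python
# Two column names that look identical when printed, but differ in
# invisible characters -- the situation described above.
clean = "COLUMN_NAME"
dirty = "COLUMN_NAME\r\n   "

print(clean == dirty)   # False: the names do not compare equal
print(repr(clean))      # 'COLUMN_NAME'
print(repr(dirty))      # 'COLUMN_NAME\r\n   ' -- CRLF and trailing spaces exposed

# A quick way to flag suspect names fetched from the system tables:
for name in (clean, dirty):
    if name != name.strip():
        print(f"suspect column: {name!r}")
```

Printing with `repr` instead of `print` is what makes the CRLF visible at all; the two names render identically otherwise.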

Related

I have two columns, I want the second column to have the same values as the first column

I have two columns, I want the second column to have the same values as the first column always, in PostgreSQL.
The columns are landmark_id (integer) and name (varchar); I want the name column to always have the same values (IDs) as landmark_id.
landmark_id (integer) | name (varchar)
--------------------- | --------------
1                     | 1
2                     | 2
3                     | 3
I don't understand why you would want to do that, but I can think of two ways to accomplish your request. One is by using a generated column
CREATE TABLE g (
landmark_id int,
name varchar(100) GENERATED ALWAYS AS (landmark_id::varchar) STORED
)
and the other is by enforcing a constraint
CREATE TABLE c (
landmark_id int,
name varchar(100),
CONSTRAINT equality_cc CHECK (name = landmark_id::varchar)
)
Both approaches will cause the name column to occupy disk space. The first approach will not allow you to specify the name column in INSERT or UPDATE statements. In the latter case, you will be forced to specify both columns when inserting.
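A minimal sketch of the constraint approach, using SQLite via Python for illustration (the DDL above is PostgreSQL; the cast syntax differs, but the behavior is the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE c (
        landmark_id INTEGER,
        name        TEXT,
        CHECK (name = CAST(landmark_id AS TEXT))
    )
""")

# An insert that satisfies the constraint succeeds.
con.execute("INSERT INTO c VALUES (1, '1')")

# An insert where name disagrees with landmark_id is rejected.
try:
    con.execute("INSERT INTO c VALUES (2, 'Eiffel Tower')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Only the first row ends up in the table; the constraint turns any mismatch into a hard error at insert time.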
You could also have used a trigger to update the second column.
Late edit: Others suggested using a view. I agree that it's a better idea than what I wrote.
Create a view, as suggested by @jarlh in the comments. This generates the name column for you on the fly. This is usually preferred to storing essentially the same data multiple times in an actual table, where the data occupies more disk space and can also get out of sync. For example:
CREATE VIEW landmarks_names AS
SELECT landmark_id,
landmark_id::text AS name
FROM landmarks;

PostgreSQL id column not defined

I am new to PostgreSQL and I am working with this database.
I got a file which I imported, and I am trying to get rows with a certain ID. But the ID is not defined, as you can see in this picture:
So how do I access this ID? I want to use an SQL command like this:
SELECT * from table_name WHERE ID = 1;
If any order of rows is ok for you, just add a row number according to the current arbitrary sort order:
CREATE SEQUENCE tbl_tbl_id_seq;
ALTER TABLE tbl ADD COLUMN tbl_id integer DEFAULT nextval('tbl_tbl_id_seq');
The new default value is filled in automatically in the process. You might want to run VACUUM FULL ANALYZE tbl to remove bloat and update statistics for the query planner afterwards. And possibly make the column your new PRIMARY KEY ...
To make it a fully fledged serial column:
ALTER SEQUENCE tbl_tbl_id_seq OWNED BY tbl.tbl_id;
See:
Creating a PostgreSQL sequence to a field (which is not the ID of the record)
What you see are just row numbers that pgAdmin displays, they are not really stored in the database.
If you want an artificial numeric primary key for the table, you'll have to create it explicitly.
For example:
CREATE TABLE mydata (
id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
obec text NOT NULL,
datum timestamp with time zone NOT NULL,
...
);
Then to copy the data from a CSV file, you would run
COPY mydata (obec, datum, ...) FROM '/path/to/csvfile' (FORMAT 'csv');
Then the id column is automatically filled.

Redshift - truncating string when inserting to target

I have two tables. Table A is an operational store and table B is the destination table.
Table A DDL:
Column A Varchar(1000)
Table B DDL:
Column B Varchar(250)
So I'm trying to insert a truncated column A like so:
INSERT INTO table_b SELECT LEFT(table_a.column_a, 249) FROM table_a;
"error: Value too long for character type"
I have also tried SUBSTRING to truncate the text, but to no avail. Please note that there is also Arabic text in Column A, but it hasn't been an issue in Table A.
Any help / suggestions would be much appreciated!
To get around the issue of multi-byte characters, you can cast your field to the desired VARCHAR size using ::VARCHAR(n). For your example, you would do:
INSERT INTO table_b SELECT table_a.column_a::VARCHAR(250) FROM table_a;
The problem is that each Arabic character takes more than one byte, because Redshift stores strings as UTF-8 and VARCHAR lengths are defined in bytes, not characters. So to be on the safe side, you can divide everything by 4.
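The byte-versus-character distinction is easy to verify outside Redshift. A quick Python illustration (the Arabic string is just an example):

```python
# Redshift VARCHAR lengths count bytes, not characters, and non-Latin
# text is stored as multi-byte UTF-8.
latin  = "hello"
arabic = "مرحبا"          # "hello" in Arabic, 5 characters

print(len(latin),  len(latin.encode("utf-8")))   # 5 characters, 5 bytes
print(len(arabic), len(arabic.encode("utf-8")))  # 5 characters, 10 bytes

# A string that fits in VARCHAR(250) by character count can therefore
# still overflow it by byte count; dividing the limit by 4 (the UTF-8
# maximum bytes per character) is the conservative bound mentioned above.
```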

Import column from file with additional fixed fields

Can I somehow import a column or columns from a file, where I specify one or more fields held fixed for all rows?
For example:
CREATE TABLE users(userid int PRIMARY KEY, fname text, lname text);
COPY users (userid,fname) from 'users.txt';
but where lname is assumed to be 'SMITH' for all the rows in users.txt?
My actual setting is more complex, where the field I want to supply for all rows is part of the PRIMARY KEY.
Possibly something of this nature:
COPY users (userid,fname,'smith' as lname) from 'users.txt';
Since I can't find a native solution to this in Cassandra, my solution was to perform a preparation step with Perl so the file contained all the relevant columns prior to calling COPY. This works fine, although I would prefer an answer that avoided this intermediate step.
e.g. adding a column with 'Smith' for every row to users.txt and calling:
COPY users (userid,fname,lname) from 'users.txt';
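For illustration, here is a Python sketch of that preprocessing step (the field names and data are hypothetical stand-ins for users.txt; in-memory buffers stand in for the real files):

```python
import csv
import io

# Append a fixed lname column to every row before handing the
# file to cqlsh COPY.
src = io.StringIO("1,Alice\n2,Bob\n")          # stands in for users.txt
dst = io.StringIO()                            # stands in for the output file

reader = csv.reader(src)
writer = csv.writer(dst)
for userid, fname in reader:
    writer.writerow([userid, fname, "SMITH"])  # same fixed value for all rows

print(dst.getvalue())
```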

Trimming Values in a SQL Server database

I have a SQL Server 2008 database. This database has a table with a column named "Name" whose data is not cleaned up. For instance, there are values like 'Hello ', where there is unnecessary whitespace. I want to remove that whitespace from all of the values in the "Name" column. My table looks like this:
MyTable
-------
ID (int)
Name varchar(50)
Description nvarchar(128)
How do I trim the values in the Name column to remove leading and trailing whitespace?
Thank you!
Use LTRIM and RTRIM:
UPDATE dbo.MyTable SET Name = LTRIM(RTRIM(Name));
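As a sketch of the same fix against an in-memory SQLite database via Python (SQLite's TRIM combines LTRIM and RTRIM; SQL Server 2017 and later also offer TRIM, while older versions need the nested LTRIM(RTRIM(...)) shown above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (ID INTEGER, Name TEXT)")
con.execute("INSERT INTO MyTable VALUES (1, '  Hello '), (2, 'World')")

# Strip leading and trailing whitespace from every Name value.
con.execute("UPDATE MyTable SET Name = TRIM(Name)")

print(con.execute("SELECT Name FROM MyTable ORDER BY ID").fetchall())
# [('Hello',), ('World',)]
```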