Firebird 3.0 CREATE TABLE is slower than 2.5

I have a problem. I use a script which creates roughly 200 tables. On Firebird 3.0 it takes 2 minutes; on Firebird 2.5, 6 seconds. I don't know why. Here are my first 3 tables:
CREATE TABLE defdok_analityka
(id_analityka INTEGER NOT NULL
, id_nagl INTEGER NOT NULL
, nazwa VARCHAR(60)
);
CREATE TABLE d_czas
(dc_id INTEGER NOT NULL
, dc_rok_mc VARCHAR(7)
, dc_rok_kwartal VARCHAR(7)
, dc_rok_tydzien_r VARCHAR(7)
, dc_data DATE
, dc_rok INTEGER
, dc_dzien_r INTEGER
, dc_kwartal INTEGER
, dc_miesiac INTEGER
, dc_dzien_mc INTEGER
, dc_dzien_tg INTEGER
, dc_tydzien_r INTEGER
);
CREATE TABLE d_kontrahent
(dk_id INTEGER NOT NULL
, dk_poczta_kod VARCHAR(20)
, dk_key_konsolid VARCHAR(30)
, dk_kraj_nazwa VARCHAR(35)
, dk_oper_prow VARCHAR(40)
, dk_przed_handl VARCHAR(40)
, dk_woj VARCHAR(50)
, dk_poczta VARCHAR(50)
, dk_firma_ident VARCHAR(50)
, dk_obszar_logist VARCHAR(60)
, dk_gmina VARCHAR(64)
, dk_powiat VARCHAR(64)
, dk_miejscowosc VARCHAR(64)
, dk_adres VARCHAR(90)
, dk_nazwa_skr VARCHAR(100)
, dk_obszar_handl VARCHAR(100)
, dk_nazwa VARCHAR(150)
, dk_src VARCHAR(20) NOT NULL
, dk_nip VARCHAR(35) NOT NULL
, dk_nip_wew VARCHAR(35) NOT NULL
, dk_data_utworzenia DATE
, dk_jdt0 TIMESTAMP
, dk_jdt1 TIMESTAMP
, dk_nr INTEGER
, dk_ver INTEGER
, dk_id_kon INTEGER
, dk_id_kontrah INTEGER
, dk_id_attribute INTEGER
, dk_id_firmy INTEGER NOT NULL
);
I tried a lot of different server configurations and client connections. Even when I run isql directly, FB3 is still slower.
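For reference, this is roughly how the comparison can be reproduced from the command line; the database path, credentials and script name below are placeholders, and SET STATS ON is only needed if a per-statement breakdown is wanted:
-- run with: isql -user SYSDBA -password masterkey localhost:/data/test.fdb -i create_tables.sql
SET STATS ON; -- isql then prints elapsed time, reads, writes and fetches after each statement
CREATE TABLE defdok_analityka
(id_analityka INTEGER NOT NULL
, id_nagl INTEGER NOT NULL
, nazwa VARCHAR(60)
);
-- ... remaining CREATE TABLE statements ...
COMMIT;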

So I found the cause of my problem. My administrator had installed FB 2.5 on an NVMe disk and FB 3.0 on an HDD, so the two installations were never comparable in the first place.

Related

How to get all historical records in KSQL (Kafka)

I have created a stream and I am writing KSQL against that stream.
When data is arriving and I run this query, I can see records; but when no data is arriving and I run it, I do not see any of the older records.
So this is my KSQL:
LOG_DIR=./ksql_logs /usr/local/confluent/bin/ksql http://localhost:8088
CREATE STREAM AUDIT_EVENT ( ID VARCHAR , VERSION VARCHAR , ACTION_TYPE VARCHAR , EVENT_TYPE VARCHAR , CLIENT_ID VARCHAR , DETAILS VARCHAR , OBJECT_TYPE VARCHAR , UTC_DATE_TIME VARCHAR , POINT_IN_TIME_PRECISION VARCHAR , TIME_ZONE VARCHAR , TIMELINE_PRECISION VARCHAR , GROUP_ID VARCHAR , OBJECT_DISPLAY_NAME VARCHAR , OBJECT_ID VARCHAR , USER_DISPLAY_NAME VARCHAR , USER_ID VARCHAR , PARENT_EVENT_ID VARCHAR , NOTES VARCHAR , SUMMARY VARCHAR , AUDIT_EVENT_TO_UTC_DT VARCHAR , AUDIT_EVENT_TO_DATE_PITP VARCHAR , AUDIT_EVENT_TO_DATE_TZ VARCHAR , AUDIT_EVENT_TO_DATE_TP VARCHAR ) WITH (KAFKA_TOPIC='AVRO-AUDIT_EVENT', VALUE_FORMAT='AVRO');
SELECT * FROM "AUDIT_EVENT" WHERE CLIENT_ID='fgh-5d1e-17a2-9749-0e4d00';
I have also created a table and tried that, but in the table I cannot see my older records either.
Is there any way I can see all the records whenever I run this query?
SET 'auto.offset.reset' = 'earliest' before your SELECT query statement.
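For example, in the same KSQL CLI session (reusing the stream and CLIENT_ID from the question), run the SET statement first and then re-issue the query:
-- make queries in this session start from the earliest available offset
SET 'auto.offset.reset' = 'earliest';
-- the query now returns the older records as well as newly arriving ones
SELECT * FROM AUDIT_EVENT WHERE CLIENT_ID = 'fgh-5d1e-17a2-9749-0e4d00';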

NetBeans Generated REST Service works for XML requests, but not JSON

I am using NetBeans to help me build a REST web service. The steps I have taken are the following:
New Project -> Web Application
New File -> Web Service -> RESTFul WebService from Database
Select the Datasource (which is a MySQL DB)
Everything generates, right click on project -> Test RESTFul WebServices
When I test GET (application/xml), I get a result returned fine, and I can also get an XML response from a Jersey client I made after the fact. But when I test the JSON functions, I always seem to get errors:
exception
javax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: Could not initialize class org.eclipse.persistence.jaxb.BeanValidationHelper
root cause
org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: Could not initialize class org.eclipse.persistence.jaxb.BeanValidationHelper
root cause
java.lang.NoClassDefFoundError: Could not initialize class org.eclipse.persistence.jaxb.BeanValidationHelper
My Database schema, which would really be the only thing I've written myself, is the following:
-- ****************** SqlDBM: MySQL ******************;
-- ***************************************************;
DROP TABLE `roles`;
DROP TABLE `orders`;
DROP TABLE `inventory`;
DROP TABLE `users`;
DROP TABLE `warehouses`;
DROP TABLE `inventory`;
DROP TABLE `addresses`;
DROP TABLE `products`;
-- ************************************** `addresses`
CREATE TABLE `addresses`
(
`addressID` INTEGER NOT NULL AUTO_INCREMENT ,
`streetNum` INTEGER NOT NULL ,
`streetName` VARCHAR(45) NOT NULL ,
`unitNum` INTEGER ,
`city` VARCHAR(45) NOT NULL ,
`province` VARCHAR(45) NOT NULL ,
`postalCode` VARCHAR(6) NOT NULL ,
PRIMARY KEY (`addressID`)
);
-- ************************************** `products`
CREATE TABLE `products`
(
`productID` INTEGER NOT NULL AUTO_INCREMENT ,
`productNO` VARCHAR(40) NOT NULL ,
`productName` VARCHAR(80) NOT NULL ,
`productDesc` VARCHAR(160) NOT NULL ,
`cost` REAL NOT NULL ,
`price` REAL NOT NULL ,
PRIMARY KEY (`productID`)
);
-- ************************************** `users`
CREATE TABLE `users`
(
`userID` INTEGER NOT NULL AUTO_INCREMENT ,
`firstName` VARCHAR(20) NOT NULL ,
`lastName` VARCHAR(20) NOT NULL ,
`dateOfBirth` DATE NOT NULL ,
`email` VARCHAR(45) NOT NULL ,
`password` CHAR(64) NOT NULL ,
`addressID` INTEGER NOT NULL ,
PRIMARY KEY (`userID`),
KEY `fkIdx_87` (`addressID`),
CONSTRAINT `FK_87` FOREIGN KEY `fkIdx_87` (`addressID`) REFERENCES `addresses` (`addressID`)
);
-- ************************************** `warehouses`
CREATE TABLE `warehouses`
(
`warehouseID` INTEGER NOT NULL AUTO_INCREMENT ,
`addressID` INTEGER NOT NULL ,
PRIMARY KEY (`warehouseID`),
KEY `fkIdx_58` (`addressID`),
CONSTRAINT `FK_58` FOREIGN KEY `fkIdx_58` (`addressID`) REFERENCES `addresses` (`addressID`)
);
-- ************************************** `inventory`
CREATE TABLE `inventory`
(
`id` INTEGER NOT NULL AUTO_INCREMENT ,
`productID` INTEGER NOT NULL ,
`warehouseID` INTEGER NOT NULL ,
`expiry` DATE NOT NULL ,
`productID_1` INTEGER NOT NULL ,
PRIMARY KEY (`id`),
KEY `fkIdx_31` (`productID_1`),
CONSTRAINT `FK_31` FOREIGN KEY `fkIdx_31` (`productID_1`) REFERENCES `products` (`productID`)
);
-- ************************************** `roles`
CREATE TABLE `roles`
(
`roleID` INTEGER NOT NULL AUTO_INCREMENT ,
`roleName` VARCHAR(20) NOT NULL ,
`userID` INTEGER NOT NULL ,
PRIMARY KEY (`roleID`),
KEY `fkIdx_96` (`userID`),
CONSTRAINT `FK_96` FOREIGN KEY `fkIdx_96` (`userID`) REFERENCES `users` (`userID`)
);
-- ************************************** `orders`
CREATE TABLE `orders`
(
`orderID` INTEGER NOT NULL AUTO_INCREMENT ,
`orderNum` INTEGER NOT NULL ,
`productID` INTEGER NOT NULL ,
`quantity` INTEGER NOT NULL ,
`userID` INTEGER NOT NULL ,
PRIMARY KEY (`orderID`),
KEY `fkIdx_70` (`productID`),
CONSTRAINT `FK_70` FOREIGN KEY `fkIdx_70` (`productID`) REFERENCES `products` (`productID`),
KEY `fkIdx_100` (`userID`),
CONSTRAINT `FK_100` FOREIGN KEY `fkIdx_100` (`userID`) REFERENCES `users` (`userID`)
);
-- ************************************** `inventory`
CREATE TABLE `inventory`
(
`id` INTEGER NOT NULL AUTO_INCREMENT ,
`expiry` DATE ,
`productID` INTEGER NOT NULL ,
`warehouseID` INTEGER NOT NULL ,
PRIMARY KEY (`id`),
KEY `fkIdx_40` (`productID`),
CONSTRAINT `FK_40` FOREIGN KEY `fkIdx_40` (`productID`) REFERENCES `products` (`productID`),
KEY `fkIdx_62` (`warehouseID`),
CONSTRAINT `FK_62` FOREIGN KEY `fkIdx_62` (`warehouseID`) REFERENCES `warehouses` (`warehouseID`)
);

How to insert characters such as "±, ≧, ≦" inside a table in PL/SQL

Here is the SQL I'm using to create my table:
create table LAB_REQUESTS_REQUREMENTS
(
id INTEGER not null,
request_id INTEGER not null,
test_id INTEGER not null,
requrement VARCHAR2(30) not null,
assistant_id INTEGER,
measured_result VARCHAR2(50),
measured_condition VARCHAR2(50),
price_id INTEGER not null,
single_price NUMBER(15,2) default 0 not null,
measured_date DATE,
measure NVARCHAR2(30),
metric_tolerance NVARCHAR2(30)
)
tablespace VIK
pctfree 10
initrans 1
maxtrans 255
storage
(
initial 64K
next 1M
minextents 1
maxextents unlimited
);
How should I go about inserting those characters into the MEASURE column, for example, without going through a conversion when inserting and then reading back from the table?
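For illustration, a sketch of one possible approach (an assumption, with made-up row values): since MEASURE and METRIC_TOLERANCE are NVARCHAR2 columns, such values can be written with UNISTR, which builds the text from Unicode code points (± is 00B1, ≦ is 2266, ≧ is 2267), so the script itself stays plain ASCII:
-- hypothetical row: only the NOT NULL columns plus MEASURE are supplied
INSERT INTO LAB_REQUESTS_REQUREMENTS
  (id, request_id, test_id, requrement, price_id, single_price, measure)
VALUES
  (1, 1, 1, 'demo', 1, 0,
   UNISTR('\00B1') || '5 ' || UNISTR('\2266') || ' x ' || UNISTR('\2267'));
An N'...' literal works as well, provided the client and the script file encoding preserve the characters.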

Postgres error "ERROR: INSERT has more target columns than expressions"

I have the following tables:
CREATE TABLE public.participant_audit
(
participant_audit_id bigint NOT NULL DEFAULT nextval('participant_audit_participant_audit_id_seq'::regclass),
participant_id bigint,
shared_asset_id bigint NOT NULL,
asset_role_type character varying(200) NOT NULL,
user_external_ref_uuid uuid NOT NULL,
user_first_name character varying(200) NOT NULL,
user_last_name character varying(200) NOT NULL,
user_email_address character varying(200) NOT NULL,
deleted_timestamp timestamp(0) with time zone,
row_updated_timestamp timestamp(6) with time zone NOT NULL,
row_created_timestamp timestamp(6) with time zone NOT NULL,
row_created_by_db_user oid NOT NULL,
row_updated_by_db_user oid NOT NULL,
created_by_client uuid,
updated_by_client uuid,
CONSTRAINT participant_audit_pkey PRIMARY KEY (participant_audit_id)
)
WITH (
OIDS=FALSE
);
CREATE TABLE public.participant
(
participant_id bigint NOT NULL DEFAULT nextval('participant_participant_id_seq'::regclass),
shared_asset_id bigint NOT NULL,
asset_role_type_id bigint NOT NULL,
user_external_ref_uuid uuid NOT NULL,
user_first_name character varying(200) NOT NULL,
user_last_name character varying(200) NOT NULL,
user_email_address character varying(200) NOT NULL,
deleted_timestamp timestamp(0) with time zone,
row_updated_timestamp timestamp(6) with time zone NOT NULL,
row_created_timestamp timestamp(6) with time zone NOT NULL,
row_created_by_db_user oid NOT NULL,
row_updated_by_db_user oid NOT NULL,
created_by_client uuid,
updated_by_client uuid,
CONSTRAINT participant_pkey PRIMARY KEY (participant_id),
CONSTRAINT participant_asset_role_type_id_fkey FOREIGN KEY (asset_role_type_id)
REFERENCES public.asset_role_type (asset_role_type_id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION,
CONSTRAINT participant_shared_asset_id_fkey FOREIGN KEY (shared_asset_id)
REFERENCES public.shared_asset (shared_asset_id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
OIDS=FALSE
);
And the following TRIGGER FUNCTION:
-- DROP FUNCTION public.participant_audit();
CREATE OR REPLACE FUNCTION public.participant_audit()
RETURNS trigger AS
$BODY$
BEGIN
insert into participant_audit
(participant_audit_id, participant_id , shared_asset_id , asset_role_type , user_external_ref_uuid,
user_first_name , user_last_name , user_email_address , deleted_timestamp, row_updated_timestamp,
row_created_timestamp , row_created_by_db_user , row_updated_by_db_user , created_by_client,
updated_by_client
)
select NEW.* ;
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER
COST 100;
When I execute the following INSERT
INSERT INTO participant (shared_asset_id,asset_role_type_id,
user_external_ref_uuid,user_first_name,user_last_name,
user_email_address,row_created_by_db_user,
row_updated_by_db_user,created_by_client,updated_by_client)
VALUES (1, 1, 'c9d140ad-b0da-4a9d-a898-8719000c7b7b'::uuid , 'john', 'simpson', 'js#gmail.com', 1::oid,1::oid, '53ed670d-f680-4e81-b53d-59b3d487633f'::uuid, '53ed670d-f680-4e81-b53d-59b3d487633f'::uuid);
I get the following error:
ERROR: INSERT has more target columns than expressions LINE 2:
...user , row_updated_by_db_user , created_by_client,updated_by...
^ QUERY: insert into public.participant_audit
(participant_audit_id, participant_id , shared_asset_id , asset_role_type ,
user_external_ref_uuid,user_first_name , user_last_name ,
user_email_address , deleted_timestamp,
row_updated_timestamp,row_created_timestamp , row_created_by_db_user ,
row_updated_by_db_user , created_by_client,updated_by_client)
select NEW.* CONTEXT: PL/pgSQL function participant_audit() line 3 at SQL statement
********** Error **********
ERROR: INSERT has more target columns than expressions SQL state:
42601 Context: PL/pgSQL function participant_audit() line 3 at SQL
statement
How can I fix this issue?
The problem is in your trigger. Count the columns that you are trying to insert into the audit table here.
insert into participant_audit
(participant_audit_id, participant_id , shared_asset_id , asset_role_type , user_external_ref_uuid,
user_first_name , user_last_name , user_email_address , deleted_timestamp, row_updated_timestamp,
row_created_timestamp , row_created_by_db_user , row_updated_by_db_user , created_by_client,
updated_by_client
)
select NEW.* ;
That's 15 target columns, while NEW.* expands to only the 14 columns of the participant row, so the lists don't line up (note, too, that the audit table has asset_role_type where participant has asset_role_type_id). Inside the trigger, spell out the values explicitly, matching them to the audit columns one by one, and let participant_audit_id be filled by its default.
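Under that reading, a sketch of a corrected trigger body; it assumes the audit column asset_role_type should simply receive NEW.asset_role_type_id cast to text (adjust if the role name is meant to be looked up instead):
CREATE OR REPLACE FUNCTION public.participant_audit()
RETURNS trigger AS
$BODY$
BEGIN
-- participant_audit_id is omitted so its sequence default is used
insert into participant_audit
(participant_id, shared_asset_id, asset_role_type, user_external_ref_uuid,
user_first_name, user_last_name, user_email_address, deleted_timestamp,
row_updated_timestamp, row_created_timestamp, row_created_by_db_user,
row_updated_by_db_user, created_by_client, updated_by_client)
values
(NEW.participant_id, NEW.shared_asset_id, NEW.asset_role_type_id::varchar,
NEW.user_external_ref_uuid, NEW.user_first_name, NEW.user_last_name,
NEW.user_email_address, NEW.deleted_timestamp, NEW.row_updated_timestamp,
NEW.row_created_timestamp, NEW.row_created_by_db_user,
NEW.row_updated_by_db_user, NEW.created_by_client, NEW.updated_by_client);
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER
COST 100;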

How to add check constraints in SQL Server?

I want to add the following constraint to the table named service in my database.
Constraint 1:
Neither DateCompleted nor DueDate can precede StartDate.
CREATE TABLE [Service] (
Service_ID INT NOT NULL PRIMARY KEY
, Invoice_ID INT
, Project_ID INT NOT NULL
, Description CHAR(20) NOT NULL
, Start_Date VARCHAR(10) NOT NULL
, Due_Date VARCHAR(10)
, Planned_Price VARCHAR(10)
, Actual_Price VARCHAR(10)
, Status CHAR(10) NOT NULL
, Date_Completed VARCHAR(10)
);
Another constraint should be added to the table named client.
Constraint 2:
Column Post_Code values should be positive integers with three or four digits.
CREATE TABLE [Client] (
Client_ID INT NOT NULL PRIMARY KEY
, First_Name VARCHAR(15) NOT NULL
, Last_Name VARCHAR(15) NOT NULL
, Street_Address VARCHAR(50)
, Suburb VARCHAR(15) NOT NULL
, State VARCHAR(3) NOT NULL
, Post_Code INT NOT NULL
, Phone_number INT NOT NULL
);
Service:
ALTER TABLE [Service]
ADD CONSTRAINT [service_datecheck]
CHECK ([Start_Date] <= [Date_Completed] AND [Start_Date] <= [Due_Date]);
Client:
ALTER TABLE [Client]
ADD CONSTRAINT [client_postcodecheck]
CHECK ([Post_Code] > 0 AND LEN([Post_Code]) IN (3,4));
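Two caveats, both assumptions about intent: the date check above only orders chronologically because Start_Date, Due_Date and Date_Completed are VARCHAR(10), so it relies on the values being stored in a sortable format such as 'YYYY-MM-DD' (storing them as DATE would be safer); and the postcode rule can also be written purely numerically, avoiding the implicit INT-to-string conversion that LEN() performs:
-- alternative postcode check: three or four digits means a value from 100 to 9999
ALTER TABLE [Client]
ADD CONSTRAINT [client_postcodecheck_range]
CHECK ([Post_Code] BETWEEN 100 AND 9999);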