Unable to drop table in Firebird

I open a command prompt in Ubuntu and then log in to Firebird, like so:
$ isql-fb
SQL> connect "localhost:/var/lib/firebird/2.5/data/reestr.fdb" user 'SYSDBA' password 'root';
Then I list all tables in my database:
> show tables;
ARCHIVE_1_ ...
...
...
Finally, I want to drop one table. I try it this way:
> DROP TABLE ARCHIVE_1_;
... absolutely no reaction, the prompt just sits there waiting for something.
If I log in again and list tables, I see that the table is still there. So, what is wrong with all that?
EDIT
This is what the SET; command at the isql prompt returns:
Print statistics: OFF
Echo commands: OFF
List format: OFF
List Row Count: OFF
Select rowcount limit: 0
Autocommit DDL: ON
Access Plan: OFF
Access Plan only: OFF
Display BLOB type: 1
Column headings: ON
Terminator: ;
Time: OFF
Warnings: ON
Bail on error: OFF

It could be that you have turned off autocommit of DDL statements (the default is on). To check, use the SET; command in isql; it will list the current settings. If autoddl is off, you can turn it on again with SET AUTODDL ON;, or just issue COMMIT; after your DROP statement.
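For example, a minimal sketch of the sequence in isql, using the table name from the question (the COMMIT; is harmless even if autoddl turns out to be on already):
SQL> SET AUTODDL ON;
SQL> DROP TABLE ARCHIVE_1_;
SQL> COMMIT;
SQL> SHOW TABLES;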

Related

PostgreSQL pg_dump creates a SQL script, but it is not a standard SQL script: is there a way to get pg_dump to create one?

I'm running pg_dump to create a script to automate the creation of a system like this:
pg_dump --dbname=postgresql://postgres:ohdsi@127.0.0.1:5432/OHDSI -t webapi.* > webapi.sql
This creates a SQL script, but it is not really a standard SQL script, as it contains code like what is shown below.
When this script is run as a sql script, it fails giving the error shown below.
Is there a way to get pg_dump to create a sql script that is standard sql and can be executed as a sql script?
Code sample from sql generated by pg_dump:
COPY webapi.cohort_version (asset_id, comment, description, version, asset_json, archived, created_by_id, created_date) FROM stdin;
\.
--
-- Data for Name: concept_of_interest; Type: TABLE DATA; Schema: webapi; Owner: ohdsi_admin_user
--
COPY webapi.concept_of_interest (id, concept_id, concept_of_interest_id) FROM stdin;
1 4329847 4185932
2 4329847 77670
3 192671 4247120
4 192671 201340
Error seen when running the script generated by pg_dump:
--
-- Name: penelope_laertes_uni_pivot id; Type: DEFAULT; Schema: webapi; Owner: ohdsi_admin_user
--
ALTER TABLE ONLY webapi.penelope_laertes_uni_pivot ALTER COLUMN id SET DEFAULT nextval('webapi.penelope_laertes_uni_pivot_id_seq'::regclass)
--
-- Data for Name: achilles_cache; Type: TABLE DATA; Schema: webapi; Owner: ohdsi_admin_user
--
COPY webapi.achilles_cache (id, source_id, cache_name, cache) FROM stdin
Error executing: COPY webapi.achilles_cache (id, source_id, cache_name, cache) FROM stdin
. Cause: org.postgresql.util.PSQLException: ERROR: COPY from stdin failed: COPY commands are only supported using the CopyManager API.
Where: COPY achilles_cache, line 1
Exception in thread "main" java.lang.RuntimeException: org.apache.ibatis.jdbc.RuntimeSqlException: Error executing: COPY webapi.achilles_cache (id, source_id, cache_name, cache) FROM stdin
. Cause: org.postgresql.util.PSQLException: ERROR: COPY from stdin failed: COPY commands are only supported using the CopyManager API.
Where: COPY achilles_cache, line 1
at org.yaorma.database.Database.executeSqlScript(Database.java:344)
at org.yaorma.database.Database.executeSqlScript(Database.java:332)
at org.nachc.tools.fhirtoomop.tools.build.postgres.build.A04_CreateAtlasWebApiTables.exec(A04_CreateAtlasWebApiTables.java:29)
at org.nachc.tools.fhirtoomop.tools.build.postgres.build.A04_CreateAtlasWebApiTables.main(A04_CreateAtlasWebApiTables.java:19)
Caused by: org.apache.ibatis.jdbc.RuntimeSqlException: Error executing: COPY webapi.achilles_cache (id, source_id, cache_name, cache) FROM stdin
. Cause: org.postgresql.util.PSQLException: ERROR: COPY from stdin failed: COPY commands are only supported using the CopyManager API.
Where: COPY achilles_cache, line 1
at org.apache.ibatis.jdbc.ScriptRunner.executeLineByLine(ScriptRunner.java:109)
at org.apache.ibatis.jdbc.ScriptRunner.runScript(ScriptRunner.java:71)
at org.yaorma.database.Database.executeSqlScript(Database.java:342)
... 3 more
Caused by: org.postgresql.util.PSQLException: ERROR: COPY from stdin failed: COPY commands are only supported using the CopyManager API.
Where: COPY achilles_cache, line 1
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2675)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2365)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:355)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:329)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:315)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:291)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:286)
at org.apache.ibatis.jdbc.ScriptRunner.executeStatement(ScriptRunner.java:190)
at org.apache.ibatis.jdbc.ScriptRunner.handleLine(ScriptRunner.java:165)
at org.apache.ibatis.jdbc.ScriptRunner.executeLineByLine(ScriptRunner.java:102)
... 5 more
--- EDIT ------------------------------------
The --inserts method in the accepted answer gave me exactly what I needed.
I ended up doing this:
pg_dump --inserts --dbname=postgresql://postgres:ohdsi@127.0.0.1:5432/OHDSI -t webapi.* > webapi.sql
The client tool you are using to restore the dump cannot deal with the data from the (nonstandard) COPY command being mixed into the script. You need psql to restore such a dump.
You can use the --inserts option of pg_dump to create a dump that contains INSERT statements rather than COPY. That will be slower to restore, but will work with more client tools.
However, your wish to get a standard SQL script is hopeless. PostgreSQL extends the standard in many ways, so a database cannot be dumped with a standard SQL script. Note, for example, that indexes are not defined by the SQL standard. If you are looking to transfer a PostgreSQL dump to a different RDBMS, you will be disappointed. That is more difficult.
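If you keep the default COPY format, a minimal sketch of restoring the dump with psql instead (the connection URI mirrors the one in the question and is an assumption here):
psql "postgresql://postgres:ohdsi@127.0.0.1:5432/OHDSI" -f webapi.sql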

Multi-line command (to export .csv) not working in Apache Drill (web interface)

I am trying to use Apache Drill to export a .csv file. This other question indicated that this is achieved by:
use dfs.tmp;
alter session set `store.format`='csv';
create table dfs.tmp.my_output as select * from cp.`employee.json`;
I tried running this block of three commands together in the Apache Drill web interface but got the error below. It somehow is not recognizing the ; or is not taking multiple commands.
I also tried running each line separately, without the ;, but the changes made by the first two commands did not persist (and the export command, the third one, defaulted back to exporting a Parquet file, which is the configured default).
How can I run this in Drill?
Query Failed: An Error Occurred
org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: Encountered ";" at line 1, column 12.
Was expecting one of: <EOF> "." ... "[" ...
SQL Query use dfs.tmp;
                     ^
alter session set `store.format`='csv';
create table dfs.tmp.`elos_cnis` as select * from dfs.tmp.`/bases_parquet/elos_cnis`
[Error Id: 00493fbe-924e-43e9-a684-f7d1abfed04e on sbsb35.ipea.gov.br:31010]
(org.apache.calcite.sql.parser.SqlParseException) Encountered ";" at line 1, column 12. Was expecting one of: <EOF> "." ... "[" ...
    org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.convertException():391
    org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.normalizeException():121
    org.apache.calcite.sql.parser.SqlParser.parseStmt():149
    org.apache.drill.exec.planner.sql.SqlConverter.parse():157
    org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():104
    org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79
    org.apache.drill.exec.work.foreman.Foreman.runSQL():1017
    org.apache.drill.exec.work.foreman.Foreman.run():289
    java.util.concurrent.ThreadPoolExecutor.runWorker():1142
    java.util.concurrent.ThreadPoolExecutor$Worker.run():617
    java.lang.Thread.run():748
Caused By (org.apache.drill.exec.planner.sql.parser.impl.ParseException) Encountered ";" at line 1, column 12. Was expecting one of: <EOF> "." ... "[" ...
    org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.generateParseException():17963
    org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.jj_consume_token():17792
    org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.SqlStmtEof():861
    org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.parseSqlStmtEof():180
    org.apache.drill.exec.planner.sql.parser.impl.DrillParserWithCompoundIdConverter.parseSqlStmtEof():59
    org.apache.calcite.sql.parser.SqlParser.parseStmt():142
    org.apache.drill.exec.planner.sql.SqlConverter.parse():157
    org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():104
    org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79
    org.apache.drill.exec.work.foreman.Foreman.runSQL():1017
    org.apache.drill.exec.work.foreman.Foreman.run():289
    java.util.concurrent.ThreadPoolExecutor.runWorker():1142
    java.util.concurrent.ThreadPoolExecutor$Worker.run():617
    java.lang.Thread.run():748
The Drill Web-UI does not support submitting multiple queries on the same query page. Please try using SqlLine, or submit the statements in the Web-UI one by one. First run
alter system set `store.format`='csv';
to set store.format at the system level (the Web-UI does not keep session options by default), and after that submit the following query:
create table dfs.tmp.my_output as select * from cp.`employee.json`;
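For the SqlLine route, a minimal sketch (assuming a local/embedded Drill installation where bin/sqlline is available; the jdbc:drill:zk=local connect string is the embedded-mode default and is an assumption here):
$ bin/sqlline -u jdbc:drill:zk=local
> use dfs.tmp;
> alter session set `store.format`='csv';
> create table dfs.tmp.my_output as select * from cp.`employee.json`;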

Postgres "Did not find any relation named <tablename>"

I created a new database in Postgres (Ubuntu 18.04) and created a table from the Postgres command line with:
CREATE TABLE TMB01
The command line returned with no error messages. Then I created the columns from the command line (one by one, but I only had four column names to enter).
Now I want to see the names of all tables in my database:
\d+ "TMB01"
"Did not find any relation named "TMB01."
Try it without quotes:
\d+ TMB01
"Did not find any relation named "TMB01."
Then I tried:
select * from TMB01 where false
No error message, cursor returns.
What went wrong with my table creation?
The only reason you didn't get an error with this command:
CREATE TABLE TMB01
is that it wasn't finished yet: there is no ; at the end, so psql was still waiting for the rest of the statement. At a minimum you would need:
CREATE TABLE TMB01 ();
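A minimal sketch of what that looks like at the psql prompt (the database name mydb is a placeholder; the -# continuation prompt shows psql is still waiting for the terminating ;, and an unquoted TMB01 is folded to lowercase tmb01):
mydb=# CREATE TABLE TMB01
mydb-# ();
CREATE TABLE
mydb=# \d+ tmb01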
Try granting access privileges to the postgres user, for example via a grant wizard in your admin tool.
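If privileges really are the issue, a minimal SQL sketch of the same idea (the table and user names follow the question; an unquoted TMB01 is stored as tmb01):
GRANT SELECT, INSERT, UPDATE, DELETE ON tmb01 TO postgres;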

Error while inserting a column in PostgreSQL using a shell script

Below is my shell script. I am trying to insert column values into PostgreSQL using the script, but I am getting the error below.
Script:
i='dcm_account494401_click_2017050511_20170505_093843_556195422.csv.gz'
load_date='2017-05-12'
load_status='Fail'
message="INFO: Load into table 'stg_ft_raw_activity' completed, 362554 record(s) loaded successfully.
INFO: Load into table 'stg_ft_raw_activity' completed, 1 record(s) were loaded with replacements made for ACCEPTINVCHARS. Check 'stl_replacements' system table for details.
"
psql "host=$HOST port=$DBPORT dbname=$DBNAME user=$DBUSER password=$DBPASS" -F --no-align <<EOF
truncate table stg.notification_table;
\set fname $i
\set load_date $load_date
\set load_status $load_status
\set message $message
insert into stg.notification_table values (:'fname', :'load_date', :'load_status',:"message");
EOF
error:
Expanded display is used automatically.
TRUNCATE TABLE and COMMIT TRANSACTION
ERROR: syntax error at or near "INFO"
LINE 1: INFO: Load into table 'stg_ft_raw_activity' completed, 1 re...
^
The message column is a string value and also contains special characters. Is that the reason?
Please help me resolve this.
Thanks,
There is no need to use these variables with \set.
Here is an example:
message="INFO: Load into table 'stg_ft_raw_activity' completed, 362554 record(s) loaded successfully.
INFO: Load into table 'stg_ft_raw_activity' completed, 1 record(s) were loaded with replacements made for ACCEPTINVCHARS. Check 'stl_replacements' system table for details.
"
psql <<EOF
INSERT INTO message_table VALUES (\$\$$message\$\$);
EOF
This makes use of “dollar quoting” for string literals; the $ signs that are not shell variable references are escaped with backslashes so the shell leaves them alone.
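Applied to the script from the question, a minimal sketch using the same shell variables (the simple values are interpolated as ordinary quoted literals; only the multi-line message needs dollar quoting):
psql "host=$HOST port=$DBPORT dbname=$DBNAME user=$DBUSER password=$DBPASS" <<EOF
truncate table stg.notification_table;
insert into stg.notification_table
values ('$i', '$load_date', '$load_status', \$\$$message\$\$);
EOF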

How to log an error into a table from SQL*Plus

I am very new to Oracle, so please bear with me if this is covered elsewhere.
I have an MS SQL box running jobs that call batch files, which run scripts in SQL*Plus to ETL into an Oracle 10g database.
I have an intermittent issue with a script that is causing the ETL to fail, which at the minute, without error logging, is something of an unknown. The current solution highlights the load failure based on row counts taken before and after the script has finished.
I'd like to be able to insert any errors encountered while running the offending script into an error log table on the same database receiving the data loads.
There's nothing too technical about the script; at a high level it performs the following steps, all via SQL code and no procedural calls:
Updates a table with Date and current row counts
Pulls data from a remote source into a staging table
Merges the Staging table into an intermediate staging table
Performs some transformational actions
Merges the intermediate staging table into the final Fact table
Updates a table with new row counts
Please advise whether it is possible to pass error messages, codes, line numbers, etc. via SQL*Plus into a database table, and if so, the easiest method to achieve this.
The first few lines of the script are shown below to give a flavour:
/*set echo off*/
set heading off
set feedback off
set sqlblanklines on
/* ID 1 BATCH START TIME */
INSERT INTO CITSDMI.CITSD_TIMETABLE_ORDERLINE TGT
(TGT.BATCH_START_TIME)
(SELECT SYSDATE FROM DUAL);
COMMIT;
insert into CITSDMI.CITSD_TIMETABLE_ALL_LOADS
(LOAD_NAME, LOAD_CRITICALITY,LOAD_TYPE,BATCH_START_TIME)
values
('ORDERLINE','HIGH','SMART',(SELECT SYSDATE FROM DUAL));
commit;
/* Clear the Staging Tables */
TRUNCATE TABLE STAGE_SMART_ORDERLINE;
Commit;
TRUNCATE TABLE TRANSF_FACT_ORDERLINE;
Commit;
and so it goes on with the rest of the steps.
Any assistance will be greatly appreciated.
Whilst not fully understanding your requirement, here are a couple of pointers.
The WHENEVER command will help you control what sqlplus should do when an error occurs, e.g.
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
WHENEVER OSERROR EXIT FAILURE ROLLBACK
INSERT ...
INSERT ...
This will cause sqlplus to exit with error status 1 if any of the following statements fail.
You can also have WHENEVER SQLERROR CONTINUE ...
Since WHENEVER ... EXIT FAILURE/SUCCESS controls the exit status, the calling script/program will know whether it worked or failed.
Logging to a file
Use SPOOL to spool the output to a file.
Logging to a table
The best way is to wrap your statements in PL/SQL anonymous blocks and use exception handlers to log errors.
So, putting the above together, using a UNIX shell as the invoker:
sqlplus -S /nolog <<EOF
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
CONNECT ${USRPWD}
SPOOL ${SPLFILE}
BEGIN
INSERT INTO the_table ( c1, c2 ) VALUES ( '${V1}', '${V2}' );
EXCEPTION
WHEN OTHERS THEN
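-- log the failure and let the block finish cleanly; add RAISE; after the COMMIT if you still want WHENEVER SQLERROR to see the error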
INSERT INTO the_error_tab ( name, errno, errm ) VALUES ( 'the_script', SQLCODE, SQLERRM );
COMMIT;
END;
/
SPOOL OFF
QUIT
EOF
if [ ${?} -eq 0 ]
then
echo "Success!"
else
echo "Oh dear!! See ${SPLFILE}"
fi
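To keep the log row even when the failing transaction is rolled back, one common pattern is an autonomous-transaction logging procedure called from the exception handler. A minimal sketch (the table and procedure names here are assumptions, not part of the original script):
-- error log table (names are placeholders)
CREATE TABLE etl_error_log (
    logged_at   DATE DEFAULT SYSDATE,
    script_name VARCHAR2(100),
    err_code    NUMBER,
    err_msg     VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE log_etl_error (
    p_script IN VARCHAR2,
    p_code   IN NUMBER,
    p_msg    IN VARCHAR2
) AS
    PRAGMA AUTONOMOUS_TRANSACTION;  -- commits the log row independently of the caller's transaction
BEGIN
    INSERT INTO etl_error_log (script_name, err_code, err_msg)
    VALUES (p_script, p_code, p_msg);
    COMMIT;
END;
/
In the exception handler above you would then call log_etl_error('ORDERLINE_LOAD', SQLCODE, SQLERRM); followed by RAISE; so the error is recorded but still propagates to WHENEVER SQLERROR and the calling batch file.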