The original query is for an MS Access database table and contains a LIKE statement which I cannot run on the same table transferred to PostgreSQL.
The SQL statement:
SELECT Table1.property1, Table1.property2
FROM Table1, Table2
WHERE (((Table2.NAME) Like "*" & [Table1]![property2] & "*"));
property2 is the name of the column. How can I keep the same logic and translate the SQL statement so it works with PostgreSQL?
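For the record, a direct PostgreSQL rendering might look like the sketch below: the Access * wildcard becomes %, the & concatenation operator becomes ||, and ILIKE stands in for Access's case-insensitive LIKE (use LIKE instead if you want case-sensitive matching).
-- A possible PostgreSQL equivalent of the Access query (a sketch, not a drop-in answer).
SELECT t1.property1, t1.property2
FROM Table1 AS t1
JOIN Table2 AS t2
  ON t2.name ILIKE '%' || t1.property2 || '%';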
Currently I'm working on a simple library project using Embarcadero C++Builder 10.3 Community Edition, and Firebird and FlameRobin to create databases.
So far, I have only needed simple queries against a single database. Therefore, I used TFDConnection and TFDPhysFbDriverLink to connect to a .fdb file, then TFDQuery to create SQL commands and TDataSource. It works great.
Unfortunately, now I must join two tables. How do I write this command? I tried this:
SELECT * FROM users_books
join books on
users_books.id_book = books.id
where users_books and books are databases.
I got an error:
SQL error code = -204
Table unknown
BOOKS.
So I think I must connect somehow to these two databases simultaneously. How to do that?
Firebird databases are isolated and don't know about other databases. As a result, it is not possible to join tables across databases with a normal select statement.
What you can do is use PSQL (Procedural SQL), for example in an EXECUTE BLOCK. You can then use FOR EXECUTE STATEMENT ... ON EXTERNAL to loop over the table in the other database, and then 'manually' join the local table using FOR SELECT (or vice versa).
For example (assuming a table user_books in the remote database, and a table books in the current database):
execute block
  returns (book_id integer, book_title varchar(100), username varchar(50))
as
begin
  for execute statement 'select book_id, username from user_books'
      on external 'users_books' /* may need AS USER and PASSWORD clause as well */
      into book_id, username do
  begin
    for select book_title from books where id = :book_id
        into book_title do
    begin
      suspend;
    end
  end
end
I am new to database triggers/PostgreSQL and trying to convert the following SQL trigger to PostgreSQL.
SQL script:
CREATE TRIGGER tr_EmpMerger ON Emp INSTEAD OF INSERT
AS
BEGIN
    MERGE INTO Emp AS Target
    USING ( SELECT * FROM INSERTED ) AS Source
    ON ( Target.EmpId = Source.EmpId )
    WHEN MATCHED THEN UPDATE SET
        EmpName = Source.EmpName,
        Age = Source.Age
    WHEN NOT MATCHED THEN INSERT VALUES
    (
        Source.EmpId,
        Source.EmpName,
        Source.Age
    );
END
GO
Questions:
1) Is there any equivalent of SQL's INSERTED table in PostgreSQL? If not, what is the work around?
2) Does PostgreSQL support Merge triggers? If not, what is the work around?
3) What will be the equivalent PostgreSQL script for the above merge trigger?
EDIT:
Note: in this scenario the insertion of data into the Emp table (as well as other tables) happens through the bulk COPY command of Postgres, so there is no direct INSERT INTO query available for this table.
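For what it's worth: PostgreSQL has no INSERTED pseudo-table; a row-level trigger sees the incoming row as NEW, and COPY does fire row-level INSERT triggers. One common workaround is a BEFORE INSERT trigger that updates the matching row and suppresses the insert when a match exists. A minimal sketch, assuming a table emp(empid integer primary key, empname text, age integer):
-- Sketch only: table and column names are assumptions based on the T-SQL above.
CREATE OR REPLACE FUNCTION tr_emp_merger() RETURNS trigger AS $$
BEGIN
    -- Try to update an existing row with the same key.
    UPDATE emp
       SET empname = NEW.empname,
           age     = NEW.age
     WHERE empid   = NEW.empid;

    IF FOUND THEN
        RETURN NULL;   -- a row was updated: suppress the original INSERT
    END IF;

    RETURN NEW;        -- no match: let the INSERT (or COPY row) proceed
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tr_emp_merger
BEFORE INSERT ON emp
FOR EACH ROW EXECUTE FUNCTION tr_emp_merger();  -- EXECUTE PROCEDURE on PostgreSQL 10 and older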
I have a Postgres DB and inside of it there are many schemas.
Each one of those schemas contains tables. For example:
Schema name personal has tables actions_takes, page_views, etc.
How can I write a SQL query or ActiveRecord query to query a table inside the schema?
Something like:
select * from actions_takes where user_id = 123;
I can create a model for each table and query it that way, but I want to write a script that, given a user, goes over all tables and gets the data for that user.
In the pgAdmin 4 web console you should use double quotation marks, as in the following SELECT statement:
SELECT "col1", "col2"
FROM "schemaName".profile;
Point to a specific table within a given schema using dot notation, schema.table_name. In your case it translates to
select * from personal.actions_takes where user_id = 123;
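If you'd rather not prefix every table name, another option (sketched here assuming the schema is called personal) is to put the schema on the session's search_path so unqualified table names resolve inside it:
-- Make tables in the personal schema visible without a prefix for this session.
SET search_path TO personal, public;

SELECT * FROM actions_takes WHERE user_id = 123;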
For me this query worked: select * from schemaName."Table_Name"
I'm using SQL Query Analyzer to build a report from the database on one machine (A), and I'd like to create a temp table on a database server on another machine(B) and load it with the data from machine A.
To be more specific, I have a report that runs on machine A (machine.a.com), pulling from schema tst. Using SQL Query Analyzer, I log into the server at machine.a.com and then have access to the tst schema:
USE tst;
SELECT *
FROM prospect;
I would like to create a temp table from this query window, only I'd like it built on another machine (call it machine.b.com). What syntax would I use for this? My guess is something like:
CREATE TABLE machine.b.com.#temp_prospect_list(name varchar(45) Not Null, id decimal(10) Not Null);
And then I'd like to load this new table with something like:
INSERT INTO machine.b.com.#temp_prospect_list VALUES (
USE tst;
SELECT *
FROM prospect; );
The syntax to access a remote server in T-SQL is to fully qualify any table name with the following (brackets included when necessary):
[LinkedServer].[RemoteDatabase].[Schema].[Table]
So, for example, to run a SELECT statement on one server that accesses a table on another server:
SELECT * FROM [machine.b.com].tst.dbo.table7;
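Applied to the question above, a sketch might look like the following. It assumes a linked server named [machine.b.com] is already configured with RPC enabled, that the prospect table has name and id columns, and that a regular staging table is acceptable, since a #temp table cannot be created on a remote server through four-part names:
-- Create a staging table on machine B (a regular table, not a #temp table).
EXEC('CREATE TABLE tst.dbo.temp_prospect_list (
          name VARCHAR(45) NOT NULL,
          id   DECIMAL(10) NOT NULL);') AT [machine.b.com];

-- Load it from the local prospect table on machine A.
INSERT INTO [machine.b.com].tst.dbo.temp_prospect_list (name, id)
SELECT name, id
FROM tst.dbo.prospect;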
We've got a system (MS SQL 2008 R2-based) that has a number of "input" databases and one "output" database. I'd like to write a query that will read from the output DB and JOIN it to data in one of the source DBs. However, the source data may live in one or more individual tables :( The name of the source DB is included in the output DB; ideally, I'd like to do something like the following (pseudo-SQL ahoy):
select o.[UID]
,o.[description]
,i.[data]
from [output].dbo.[description] as o
left join (select [UID]
,[data]
from
[output.sourcedb].dbo.datatable
) as i
on i.[UID] = o.[UID];
Is there any way to do something like the above - "dynamically" specify the database and table to be joined on for each row in the query?
Try using EXEC, then specify the SELECT as a string, adding variables for database names and tables where appropriate. A simple example:
DECLARE @dbName VARCHAR(255), @tableName VARCHAR(255), @colName VARCHAR(255)
...
EXEC('SELECT * FROM ' + @dbName + '.dbo.' + @tableName + ' WHERE ' + @colName + ' = 1')
No, the table must be known at the time you prepare the query. Otherwise how would the query optimizer know what indexes it might be able to use? Or whether the table you reference even has a UID column?
You'll have to do this in stages:
Fetch the sourcedb value from your output database in one query.
Build an SQL query string, interpolating the value you fetched in the first query into the FROM clause of the second query.
Be careful to check that this value contains a legitimate database name. For instance, filter out non-alpha characters, apply a regular expression, or look it up in a whitelist. Otherwise you're exposing yourself to a SQL injection risk.
Execute the new SQL string you built with exec() as @user353852 suggests.
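Putting those stages together, a sketch of the whole thing (the sourcedb column and the datatable name come from the pseudo-SQL above; everything else is an assumption):
DECLARE @dbName SYSNAME, @sql NVARCHAR(MAX);

-- Stage 1: fetch the source database name recorded in the output database
-- (add a WHERE clause to pick the row you actually care about).
SELECT TOP (1) @dbName = sourcedb
FROM [output].dbo.[description];

-- Stage 2: make sure it names a real database on this server (simple whitelist check).
IF DB_ID(@dbName) IS NULL
BEGIN
    RAISERROR('Unknown source database: %s', 16, 1, @dbName);
    RETURN;
END

-- Stage 3: build the query, quoting the identifier with QUOTENAME to avoid injection.
SET @sql = N'SELECT o.[UID], o.[description], i.[data]
             FROM [output].dbo.[description] AS o
             LEFT JOIN ' + QUOTENAME(@dbName) + N'.dbo.datatable AS i
               ON i.[UID] = o.[UID];';

-- Stage 4: execute the dynamic SQL.
EXEC sp_executesql @sql;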