I use PostgreSQL Logical Decoding functionality for retrieving WAL contents:
SELECT * FROM pg_create_logical_replication_slot(...)
SELECT * FROM pg_logical_slot_get_changes('<my_slot>', NULL, NULL);
I call these SQL functions from a C/ODBC application.
This works very nicely for me.
Change records are fetched one at a time, in order, as expected.
Yet there are certain cases where I need to reposition the slot to a past LSN.
The logical decoding streaming replication interface provides a means of achieving that:
for example, the pg_recvlogical program has a "--startpos=X/Y" option
that works as I expect.
Is there an equivalent option in the "SQL" interface?
I assume the two interfaces share much in common, so I went through the documentation,
but I did not manage to find any equivalent option in the SQL interface.
As far as I can tell, the SQL interface only moves forward and cannot go back to a
past LSN.
Did I miss something here?
Does anybody have any clue or experience with this?
Kindest regards
Hillel.
According to the replication SQL function docs, it doesn't look possible :-(
http://www.postgresql.org/docs/9.4/static/functions-admin.html#FUNCTIONS-REPLICATION
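One partial workaround worth knowing about: pg_logical_slot_peek_changes() takes the same arguments as pg_logical_slot_get_changes() but returns changes without consuming them, so you can re-read the same window repeatedly and consume it only once you're done with it. Rewinding past the confirmed position isn't possible, because the server may recycle WAL below it. A toy Python model of the get/peek semantics (purely conceptual, with an invented ToySlot class, not the real API):

```python
# Toy model of a logical replication slot, illustrating why the SQL
# interface only moves forward: get_changes() confirms (consumes) what it
# returns, and WAL below the confirmed position may then be recycled.
# This is a conceptual sketch, not the real PostgreSQL API.

class ToySlot:
    def __init__(self, changes):
        # changes: list of (lsn, data) pairs, ordered by LSN
        self.changes = changes
        self.confirmed = 0  # everything at or below this LSN is consumed

    def peek_changes(self):
        """Like pg_logical_slot_peek_changes(): returns without consuming."""
        return [c for c in self.changes if c[0] > self.confirmed]

    def get_changes(self):
        """Like pg_logical_slot_get_changes(): returns and consumes."""
        pending = self.peek_changes()
        if pending:
            self.confirmed = pending[-1][0]
        return pending

slot = ToySlot([(1, "INSERT a"), (2, "INSERT b"), (3, "DELETE a")])

# peek is repeatable...
assert slot.peek_changes() == [(1, "INSERT a"), (2, "INSERT b"), (3, "DELETE a")]
assert slot.peek_changes() == [(1, "INSERT a"), (2, "INSERT b"), (3, "DELETE a")]
# ...but once get has confirmed, there is no way back to LSN 1
assert slot.get_changes() == [(1, "INSERT a"), (2, "INSERT b"), (3, "DELETE a")]
assert slot.get_changes() == []
```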
How do I get the column names and data types returned by a custom query in Postgres? There are built-in functions for tables and views, but not for custom queries. For clarification: I need a Postgres function that takes a SQL string as a parameter and returns the column names and their data types.
I don't think there's any built-in SQL function which does this for you.
If you want to do this purely at the SQL level, the simplest and cheapest way is probably to CREATE TEMP VIEW v AS (<your_query>), dig the column definitions out of the catalog tables, and drop the view when you're done. However, this can have a non-trivial overhead depending on how often you do it (as it needs to write view definitions to the catalogs), can't be run in a read-only transaction, and can't be done on a standby server.
The ideal solution, if it fits your use case, is to build a prepared query on the client side, and make use of the metadata returned by the server (in the form of a RowDescription message passed as part of the query protocol). Unfortunately, this depends very much on which client library you're using, and how much of this information it chooses to expose. For example, libpq will give you access to everything, whereas the JDBC driver limits you to the public methods on its ResultSetMetaData object (though you could probably pull more information from its private fields via reflection, if you're determined enough).
If you want a read-only, low-overhead, client-independent solution, then you could also write a server-side C function to prepare and describe the query via SPI. Writing and building C functions comes with a bit of a learning curve, but you can find numerous examples on PGXN, or within Postgres' own contrib modules.
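To show the prepared-query/metadata approach above in a driver-neutral, runnable form, here is the same idea with Python's standard-library sqlite3 module; in PostgreSQL the equivalent information comes from the RowDescription message (e.g. via libpq's PQnfields/PQfname/PQftype). The table and column names below are invented for the sketch:

```python
import sqlite3

# The driver prepares/executes the query and hands back column metadata in
# cursor.description, the same kind of information PostgreSQL sends in its
# RowDescription message. No rows need to be fetched to get it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")

# An arbitrary query, including a computed column:
cur = conn.execute("SELECT id, name, id * 2 AS doubled FROM person")
columns = [d[0] for d in cur.description]
print(columns)  # column names are available even though the table is empty
```

How much type detail you get back varies by driver: sqlite3 only fills in the name slot of each description tuple, while libpq also gives you the type OID for each column.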
I have checked the documentation (for my version 9.3):
http://www.postgresql.org/docs/9.3/static/sql-notify.html
http://www.postgresql.org/docs/9.3/static/sql-listen.html
I have read multiple discussions and blogs about notify-listen in postgres.
They all use a listening process or interface that is not implemented inside a "classic" stored procedure (which in Postgres is a function anyway); they implement it in a different language and/or environment, external to the Postgres server (e.g. Perl, C#).
My question: is it possible to implement listening inside a Postgres function (language plpgsql)? If not (which I assume, since I was not able to find any such topic or example), can someone explain a bit why it can't be done, or perhaps why it does not make sense to do it that way?
This is a classic use case for a trigger function if you depend on a single table: https://www.postgresql.org/docs/current/plpgsql-trigger.html
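As for why LISTEN itself can't usefully run inside plpgsql: notifications are delivered to listening sessions only between transactions, and a plpgsql function always executes inside a single transaction, so it has no way to block and observe a notification; listening is something a client session does. A toy Python model of that delivery rule (illustrative only, with an invented ToyServer class, not the real protocol):

```python
# Toy model: NOTIFYs are queued during a transaction and delivered to
# listening sessions only at COMMIT. Code running *inside* the transaction
# (like a plpgsql function) therefore never sees them.

class ToyServer:
    def __init__(self):
        self.listeners = {}   # session name -> delivered notifications
        self.pending = []     # NOTIFYs queued in the open transaction

    def listen(self, session):
        self.listeners.setdefault(session, [])

    def notify(self, channel):
        self.pending.append(channel)   # queued, not delivered yet

    def commit(self):
        for inbox in self.listeners.values():
            inbox.extend(self.pending)
        self.pending = []

server = ToyServer()
server.listen("client_session")

server.notify("my_channel")                           # inside the transaction...
inside_txn_view = list(server.listeners["client_session"])
server.commit()                                       # ...delivered only here

assert inside_txn_view == []                          # nothing visible mid-transaction
assert server.listeners["client_session"] == ["my_channel"]
```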
I'm wondering whether there's any way to use or implement a SELECT query with the JavaM API for the GT.M database system. I'm using version 0.1, since I haven't found any other version ( https://github.com/Gadreel/javam/blob/master/README.md ).
If there's no such option yet, could you recommend another Java API for this DBMS? I know there's gtm4j ( http://code.vistaehr.com/gtm4j ), but it relies on the Spring framework, which is not convenient for me.
I'm new to GT.M and just want to test how to connect to it from Java and run some basic queries. Thanks a lot for your advice.
The database side of GT.M is a hierarchical key-value store, so features like SELECT (I'm guessing you want a full SQL SELECT) need to be implemented by some framework (either an existing one or one you create yourself).
From a quick look at the JavaM API, it seems to offer only a Java interface to the features GT.M itself provides. So I think you would have to implement SQL SELECT yourself, in Java.
That said, it is possible that whatever you wanted to use a SQL SELECT for can be done easily with the standard GT.M / JavaM API, in which case there would be no need to implement a full SQL SELECT.
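To make the "implement SELECT yourself over a hierarchical key-value store" point concrete, here is a minimal Python sketch: a GT.M global is essentially a tree of subscripted keys, and a WHERE-style filter is just a scan over that tree. The data layout and all names here are invented for illustration:

```python
# A GT.M global like ^PERSON(id, field) modeled as a nested dict.
person = {
    1: {"name": "Alice", "age": 34},
    2: {"name": "Bob",   "age": 51},
    3: {"name": "Carol", "age": 29},
}

def select(global_tree, columns, where=lambda row: True):
    """A tiny SELECT: project `columns` from every node matching `where`."""
    return [
        tuple(row[c] for c in columns)
        for _, row in sorted(global_tree.items())  # scan in key order
        if where(row)
    ]

# Rough equivalent of: SELECT name FROM person WHERE age > 30
result = select(person, ["name"], where=lambda row: row["age"] > 30)
assert result == [("Alice",), ("Bob",)]
```

A real implementation over GT.M would walk the global with $ORDER-style traversal instead of a dict scan, but the shape of the work is the same.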
Actually, you could use M to write a parser for your SELECT command syntax. It would certainly be easier to do with the GTMJI plug-in for full-duplex GT.M/Java communication that FIS has just released.
I need to run dynamically constructed queries against Informix IDS 9.x; while the WHERE clause is usually quite simple, the projection clause can be quite complicated, with lots of columns and formulas applied to them. Here is one example:
SELECT ((((table.I_ACDTIME + table.I_ACWTIME + table.I_DA_ACDTIME + table.I_DA_ACWTIME +
table.I_RINGTIME))+(table.I_ACDOTHERTIME + table.I_ACDAUXINTIME +
table.I_ACDAUX_OUTTIME)+(table.I_TAUXTIME + table.I_TAVAILTIME +
table.I_TOTHERTIME)+((table.I_AVAILTIME + table.I_AUXTIME)*
((table.MAX_TOT_PERCENTS/100)/table.MAXSTAFFED)))/(table.INTRVL*60))
FROM table
WHERE ...
The problem arises when some of the fields used contain zeroes; Informix predictably throws division by zero error, but the error message is not very helpful:
DBD::Informix::st fetchrow_arrayref failed:
SQL: -1202: An attempt was made to divide by zero.
In this case, it is desirable to return NULL upon failed calculation. Is there any way to achieve this other than parse Projection clause and enclose each and every division attempt in CASE ... END? I would prefer to use some DBD::Informix magic if it's there.
I don't believe you'll be able to solve this with DBD::Informix or any other database client, without resorting to parsing the SQL and rewriting it. There's no option to just ignore the column with the /0 arithmetic: the whole statement fails when the error is encountered, at the engine level.
If it's any help, you can write the code to avoid /0 as a DECODE rather than CASE ... END, which is a little cleaner, i.e.:
DECODE(table.MAXSTAFFED, 0, NULL,
    ((table.MAX_TOT_PERCENTS/100)/table.MAXSTAFFED)/(table.INTRVL*60))
DBD::Informix is an interface to the Informix DBMS, and as thin as possible (which isn't anywhere near as thin as I'd like, but that's another discussion). Such behaviour cannot reasonably be mediated by DBD::Informix (or any other DBD driver accessing a DBMS); it must be handled by the DBMS itself.
IDS does not provide a mechanism to yield NULL in lieu of a divide by zero error. It might be a reasonable feature request - but it would not be implemented until the successor version to Informix 11.70 at the earliest.
Note that Informix Dynamic Server (IDS) 9.x is several years beyond the end of its supported life (10.00 is also unsupported).
From experience working with Informix, I would say you would be lucky to get that kind of functionality within IDS (earlier versions of IDS, not much earlier than yours, had barely any string manipulation functions, never mind anything complicated).
I would save yourself the time and generate the calculations against an in-memory list.
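Following the in-memory suggestion: if you fetch the raw columns and do the arithmetic client-side, the divide-by-zero handling becomes trivial. The question's code is Perl/DBD, but the idea is language-neutral; here is a sketch in Python, with column names taken from the query above and a made-up helper name:

```python
def safe_div(num, den):
    """Return None (the client-side stand-in for SQL NULL) instead of
    raising ZeroDivisionError, and propagate NULL operands."""
    if num is None or den in (None, 0):
        return None
    return num / den

# The risky part of the projection in the question:
# ((MAX_TOT_PERCENTS/100)/MAXSTAFFED) ... /(INTRVL*60)
def staffed_fraction(row):
    inner = safe_div(row["MAX_TOT_PERCENTS"] / 100, row["MAXSTAFFED"])
    return safe_div(inner, row["INTRVL"] * 60)

# Normal row: the arithmetic goes through.
assert staffed_fraction({"MAX_TOT_PERCENTS": 50, "MAXSTAFFED": 5, "INTRVL": 30}) \
    == (50 / 100) / 5 / (30 * 60)
# Zero divisor anywhere in the chain yields None instead of an error.
assert staffed_fraction({"MAX_TOT_PERCENTS": 50, "MAXSTAFFED": 0, "INTRVL": 30}) is None
assert staffed_fraction({"MAX_TOT_PERCENTS": 50, "MAXSTAFFED": 5, "INTRVL": 0}) is None
```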
I have a Postgres 9.04 database table with over 12,000,000 rows.
I need a program to read each row, do some calculations and lookups (against a 2nd table), then write a new row in a 3rd table with the results of these calculations. When done, the 3rd table will have the same number of rows as the 1st table.
Executing serially on a Core i7 720QM processor takes more than 24 hours. It only taxes one of my 8 cores (4 physical cores, but 8 visible to Windows 7 via HTT).
I want to speed this up with parallelism. I thought I could use PLINQ and Npgsql:
NpgsqlDataReader records = new NpgsqlCommand("SELECT * FROM table", conn).ExecuteReader();
var single_record = from row in records.AsParallel()
                    select row;
However, I get an error for records.AsParallel(): Could not find an implementation of the query pattern for source type 'System.Linq.ParallelQuery'. 'Select' not found. Consider explicitly specifying the type of the range variable 'row'.
I've done a lot of Google searches, and I'm just coming up more confused. NpgsqlDataReader inherits from System.Data.Common.DbDataReader, which in turn implements IEnumerable, which has the AsParallel extension, so seems like the right stuff is in place to get this working?
It's not clear to me what I could even do to explicitly specify the type of the range variable. It appears that best practice is not to specify this.
I am open to switching to a DataSet, presuming that's PLINQ compatible, but would rather avoid if possible because of the 12,000,000 rows.
Is this even something achievable with Npgsql? Do I need to use Devart's dotConnect for PostgreSQL instead?
UPDATE: Just found http://social.msdn.microsoft.com/Forums/en-US/parallelextensions/thread/2f5ce226-c500-4899-a923-99285ace42ae, which led me to try this:
foreach (IDataRecord arrest in
         from row in arrests.AsParallel().Cast<IDataRecord>()
         select row)
So far no errors in the IDE, but is this a proper way of constructing this?
This is indeed the solution:
foreach (IDataRecord arrest in
         from row in arrests.AsParallel().Cast<IDataRecord>()
         select row)
This solution was inspired by what I found at http://social.msdn.microsoft.com/Forums/en-US/parallelextensions/thread/2f5ce226-c500-4899-a923-99285ace42ae#1956768e-9403-4671-a196-8dfb3d7070e3. It's not clear to me why the cast and type specification is needed, but it works.
EDIT: While this doesn't cause syntax or runtime errors, it in fact does not make things run in parallel. Everything is still serialized. See PLINQ on ConcurrentQueue isn't multithreading for a superior solution.
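The pattern in that superior solution (one reader feeding a bounded queue while a pool of workers consumes it) is easy to sketch. In .NET the pieces are a BlockingCollection filled by the DataReader loop and drained by PLINQ/Parallel workers; here is the same shape in Python so the sketch stays self-contained, with process() standing in for the per-row calculations. Note that CPython threads only speed this up when the per-row work releases the GIL (e.g. database lookups); for pure CPU work you would use processes instead:

```python
import queue
import threading

def process(row):
    """Stand-in for the per-row calculations and lookups."""
    return row * row

def reader(rows, q, n_workers):
    # Single producer: reads rows serially (like the DataReader) and feeds
    # the queue; one sentinel per worker tells them to stop.
    for row in rows:
        q.put(row)
    for _ in range(n_workers):
        q.put(None)

def worker(q, results):
    while True:
        row = q.get()
        if row is None:
            break
        results.append(process(row))

q = queue.Queue(maxsize=100)  # bounded, like a BlockingCollection
results = []                  # list.append is thread-safe in CPython
n_workers = 4

threads = [threading.Thread(target=worker, args=(q, results))
           for _ in range(n_workers)]
for t in threads:
    t.start()
reader(range(1000), q, n_workers)  # producer runs on the main thread
for t in threads:
    t.join()

# Workers finish in arbitrary order, so compare sorted output.
assert sorted(results) == [i * i for i in range(1000)]
```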
You should consider using Greenplum. It's trivial to accomplish this in a Greenplum database. The free version isn't limited in any way, and it's PostgreSQL at its core.