Configuration parameters for PostgreSQL ODBC driver connection string

I am currently extracting data from PostgreSQL using its own ODBC driver.
The basic parameters described on Connection Strings work so far, but I was not able to find out which other parameters are supported.
The documentation of the Devart ODBC driver shows that it also supports a Schema field, but that field does not seem to work with the driver from the PostgreSQL project.
Last but not least, the documentation of the ODBC driver contains a list of connection keywords, but these do not match the ones on Connection Strings either.
Is there any resource or standard describing the connection string parameters that I have missed?

You should trust the documentation of the product rather than an unrelated site.
If you want to set the search_path with a connection option, you can use the pqopt parameter like this:
pqopt={search_path=myschema,public}
Disclaimer: I didn't test it.
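For context, a complete connection string using that keyword might look like the sketch below. The driver name, host, database, and credentials are placeholders, and the pqopt value is simply the search_path option from above; like the answer itself, this is untested.

Driver={PostgreSQL Unicode};Server=localhost;Port=5432;Database=mydb;Uid=myuser;Pwd=mypassword;pqopt={search_path=myschema,public}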

Related

Why isn't "connect" option in mongo connection string documented?

The issue was that even if I target just one node of my replica set in my connection string, mongo-go-driver always wants to discover and connect to the other nodes.
I found a solution here that basically says I should add the connect option to the connection string.
mongodb://host:27017/authDb?connect=direct
My question is: how good or bad a practice is this, why doesn't mongo have it documented, and are there other values that this option can have?
That option only exists for the Go driver. For all other drivers it is unrecognized, so it is not documented as a general connection string option.
It is documented for the Go Driver at https://godoc.org/go.mongodb.org/mongo-driver/mongo#example-Connect--Direct
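For illustration, here is a minimal Go sketch of such a direct connection, based on that example; the host, port, and database name are placeholders, and the mongo.Connect signature is the one from the driver version documented at that link.

package main

import (
    "context"
    "log"
    "time"

    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    // connect=direct tells the Go driver to talk only to this host instead of
    // discovering and monitoring the rest of the replica set.
    uri := "mongodb://host:27017/authDb?connect=direct" // placeholder host and database
    client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
    if err != nil {
        log.Fatal(err)
    }
    defer client.Disconnect(ctx)

    // Ping confirms that the single target node is reachable.
    if err := client.Ping(ctx, nil); err != nil {
        log.Fatal(err)
    }
}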
How good or bad a practice is this, why doesn't mongo have it documented, and are there other values that this option can have?
As pointed out in the accepted answer, this is documented in the driver documentation. Now for the other part of the question.
Generally speaking, in the replica set context you would want to connect to the topology instead of directly to a specific replica set member, with an exception for administrative purposes. Replication is designed to provide redundancy, and connecting directly to one member (i.e. the primary) is not recommended in case of fail-over.
All of the official MongoDB drivers follow the MongoDB Specifications. With regard to direct connections, the current requirement is in server-discovery-and-monitoring.rst#general-requirements:
Direct connections: A client MUST be able to connect to a single server of any type. This includes querying hidden replica set members, and connecting to uninitialized members (see RSGhost) in order to run "replSetInitiate". Setting a read preference MUST NOT be necessary to connect to a secondary. Of course, the secondary will reject all operations done with the PRIMARY read preference because the slaveOk bit is not set, but the initial connection itself succeeds. Drivers MAY allow direct connections to arbiters (for example, to run administrative commands).
It only specifies that a driver MUST be able to do so, but not how. The MongoDB Go driver is not the only driver that currently supports the direct option approach; the .NET/C# and Ruby drivers do as well.
Currently there is an open PR for the specifications to unify the behaviour. In the future, all drivers will have the same way of establishing a direct connection.

How to Get translateBinary to Work in Rational Application Developer Data Connection

Using Rational Application Developer for WebSphere 9.1.0 to make a data connection to a DB2 iSeries file, the column data displays as hex (I think).
I have added the "translateBinary=true" property to the connection URL but it does not change the displayed results.
jdbc:as400:host;translateBinary=true
DB2 for iSeries uses EBCDIC natively, but the Toolbox JDBC driver will automatically attempt to translate EBCDIC to Unicode for you. Since only some fields are not being translated, it is likely those fields are tagged with CCSID 65535, which tells the Toolbox driver not to translate them. You can either tag those fields with a CCSID that indicates translation, or use the translate binary driver property, which is what you're attempting. The property is not working because you mistyped it. According to this FAQ, it should be ";translate binary=true" instead of what you've tried.
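Applied to the URL from the question, the corrected property would look something like the line below; the host name is a placeholder, and the standard jt400 URL form with the leading // is used.

jdbc:as400://host;translate binary=true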

Is there any way to specify custom connection parameters to scalikejdbc?

When connecting to AWS Athena, a required parameter is s3_staging_dir to specify the output directory of the query. Is there any way to specify this parameter in scalikejdbc? I've tried looking through all of scalikejdbc's docs, but I found nothing of this sort.
Athena doc: http://docs.aws.amazon.com/athena/latest/ug/connect-with-jdbc.html
Scalikejdbc doc: http://scalikejdbc.org/documentation/configuration.html
I just tried to do it using a custom connection pool factory.
I did manage to connect to Athena, but I couldn't execute any SQL, since "prepareStatement" is not implemented in the Athena JDBC driver.
So don't try it; it'll be useless.
Sorry :(

How to use Solr on Postgresql and index a table

I am new to Solr, with the specific need to crawl an existing database table and generate results.
Every online example/tutorial I have found so far only explains how you give Solr documents and they get indexed, with no indication of how to do the same with a database.
Can anyone please explain the steps to achieve this?
Links like this wiki show everything with a JDBC driver and MySQL, so I even doubt whether Solr supports this with .NET at all. My tech boundaries are C# and PostgreSQL.
You have stumbled over the included support for JDBC already, but you have to use the Postgres JDBC driver. The example will be identical to the MySQL one, but you'll have to use the proper URL for Postgres instead and reference the JDBC driver (which will depend on which Postgres JDBC driver you use).
jdbc:postgresql://localhost/test
This is a configuration option in Solr, and isn't related to .NET or other external dependencies.
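As a rough sketch of what the Data Import Handler configuration from the wiki looks like with Postgres swapped in: the URL follows the form above, while the table, columns, and credentials below are invented placeholders you would replace with your own schema.

<dataConfig>
  <!-- Connection details are placeholders; point the URL at your own database -->
  <dataSource driver="org.postgresql.Driver"
              url="jdbc:postgresql://localhost/test"
              user="solr_reader"
              password="secret"/>
  <document>
    <!-- "articles" and its columns are hypothetical; use your own table and fields -->
    <entity name="article" query="SELECT id, title, body FROM articles">
      <field column="id" name="id"/>
      <field column="title" name="title"/>
      <field column="body" name="body"/>
    </entity>
  </document>
</dataConfig>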
However, the other option is to write the indexing code yourself, and this can often be a good solution as it makes it easier to pre-process the content and apply certain logic before storing content in Solr. For .NET you have Solrnet, a Solr client, that'll make it easy to both query from and submit documents to Solr.

Can NpgsqlTsVector/NpgsqlTsQuery from NpgSql Data Provider be used for Full Text Search?

I'm trying to understand PostgreSQL and Npgsql in regards to "Full Text Search". Is there something in the Npgsql project that helps doing those searches on a database?
I found the NpgsqlTsVector.cs/NpgsqlTsQuery.cs classes in the Npgsql source code project. Can they be used for "Full Text Search", and, if so, how?
Yes, since 3.0.0 Npgsql has special support for PostgreSQL's full text search types (tsvector and tsquery).
Make sure to read the PostgreSQL docs and understand the two types and how they work.
Npgsql's support for these types means that it allows you to seamlessly send and receive tsvector and tsquery from PostgreSQL. In other words, you can create an instance of NpgsqlTsVector, populate it with the lexemes you want, and then set it as a parameter in an NpgsqlCommand just like any other parameter type (the same goes for reading a tsvector or tsquery).
For more generic help on using Npgsql to interact with PostgreSQL you can read the Npgsql docs.