Add Content-Type to a Postgres request? - postgresql

I want to use Elasticsearch via the ZomboDB extension, so I run this query in Postgres:
CREATE INDEX idx_zdb_graduated
    ON masterview
 USING zombodb(zdb('masterview', masterview.ctid), zdb(masterview))
  WITH (url='http://localhost:9200/');
but it fails with this error:
> ERROR: rc=406; {"error":"Content-Type header [application/x-www-form-urlencoded] is not supported","status":406}
I googled and found that I should pass -H 'Content-Type: application/json' when connecting to Elasticsearch with curl, but I don't know how I can add a Content-Type header to my query.
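For reference, this is what adding the header looks like when querying Elasticsearch directly with curl (the index name and query body below are just placeholders):

# a direct request Elasticsearch accepts once the header is set explicitly;
# "myindex" and the match_all body are placeholders
curl -H 'Content-Type: application/json' \
     -X POST 'http://localhost:9200/myindex/_search' \
     -d '{"query":{"match_all":{}}}'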

You are probably using an unsupported version of Elasticsearch, PostgreSQL, or your OS. I had the same error with Elasticsearch 6 and fixed it by moving to a supported version. In my Git repository, the combination of Debian 9, postgres:9.5, and elasticsearch:5.6.4 works.
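If you run Elasticsearch in Docker, pinning the image to that version is a quick way to test this combination (a sketch, assuming the stock image and the default port mapping):

# run the 5.6.4 image the combination above was tested with
docker run -d -p 9200:9200 elasticsearch:5.6.4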

Related

You have specified a database that is not empty, please specify an empty database

I'm trying to connect to an RDS cluster in AWS that's an Aurora PostgreSQL database. It's a brand-new database that I created along with the instances that I have Jira deployed to. However, when I try to connect to the instance from the Jira configuration screen I get this error:
> You have specified a database that is not empty, please specify an empty database.
I haven't touched this database at all, so why is it giving me this error? I have one read and one write database in my cluster, and the "hostname" is the endpoint for my write database, which is what the docs say to use. Could this be an issue with the Jira version I'm using?
This is the download link I'm using in my user-data script to install Jira. I'm also using PostgreSQL version 12.11.
https://www.atlassian.com/software/jira/downloads/binary/atlassian-servicedesk-4.19.1-x64.bin
I switched to a different PostgreSQL version and now it's working.
PostgreSQL version 12.11 was giving me the error and switching to version 13.7 works as expected.
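If you want to see what Jira's emptiness check might be tripping over before switching versions, listing the non-system tables is a quick sanity check (a sketch; the endpoint, user, and database names are placeholders):

# list any non-system tables Jira could be complaining about
psql -h my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com -U jira -d jiradb \
     -c "SELECT schemaname, tablename FROM pg_tables WHERE schemaname NOT IN ('pg_catalog', 'information_schema');"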

pg: unknown authentication message response: 10 (Golang) [duplicate]

I'm trying to follow the diesel.rs tutorial using PostgreSQL. When I get to the Diesel setup step, I get an "authentication method 10 not supported" error. How do I resolve it?
You have to upgrade the PostgreSQL client software (in this case, the libpq used by the Rust driver) to a later version that supports the scram-sha-256 authentication method introduced in PostgreSQL v10.
Downgrading password_encryption in PostgreSQL to md5, changing all the passwords and using the md5 authentication method is a possible, but bad alternative. It is more effort, and you get worse security and old, buggy software.
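For completeness, the md5 fallback touches roughly these pieces (a sketch only; "app_user" is a hypothetical role name, and the caveat above still applies):

# switch new password hashing back to md5 (discouraged; see above)
psql -U postgres -c "ALTER SYSTEM SET password_encryption = 'md5'"
psql -U postgres -c "SELECT pg_reload_conf()"
# re-set the password so it is stored as an md5 hash ("app_user" is hypothetical)
psql -U postgres -c "ALTER ROLE app_user WITH PASSWORD 'new-password'"
# then change "scram-sha-256" to "md5" on the matching pg_hba.conf lines and reload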
This isn't a Rust-specific question; the issue applies to any application connecting to a Postgres DB that doesn't support the scram-sha-256 authentication method. In my case it was a problem with the Perl application connecting to Postgres.
These steps are based on a post.
You need to have the latest Postgres client installed.
The client bin directory (SRC) is "C:\Program Files\PostgreSQL\13\bin" in this example. The target (TRG) directory is where my application binary is installed: "C:\Strawberry\c\bin". My application failed while trying to connect to the Postgres DB with the error "... authentication method 10 not supported ...".
set SRC=C:\Program Files\PostgreSQL\13\bin
set TRG=C:\Strawberry\c\bin

rem Inspect the source DLL that will be copied
dir "%SRC%\libpq.dll"
rem Inspect the target DLL; it will be replaced from SRC
dir "%TRG%\libpq__.dll"

rem Copy the new libpq.dll into the target directory
copy "%SRC%\libpq.dll" "%TRG%"
cd /d "%TRG%"

rem Regenerate the import library from the new DLL
pexports libpq.dll > libpq.def
dlltool --dllname libpq.dll --def libpq.def --output-lib ..\lib\libpq.a

rem Back up the ORIGINAL DLL, then give the new DLL the original name
move "%TRG%\libpq__.dll" "%TRG%\libpq__.dll_BUP"
move "%TRG%\libpq.dll" "%TRG%\libpq__.dll"
At this point I was able to successfully connect to Postgres from my Perl script.
The initial post mentioned above also suggested copying other DLLs from source to target:
libiconv-2.dll
libcrypto-1_1-x64.dll
libssl-1_1-x64.dll
libintl-8.dll
However, I was able to resolve my issue without copying these libraries.
Downgrading to PostgreSQL 12 helped.

MongoDB 5.0.3 - mongoexport: BSON field 'saslContinue.mechanism' is an unknown field

I am using the latest MongoDB version, 5.0.3. When I try to export data using the mongoexport command, I get the following error:
> server returned error on SASL authentication step: BSON field 'saslContinue.mechanism' is an unknown field.
Please help me if I am missing anything in the configuration.
Thanks...
I got this error on version 5.0.6.
You can try using a lower version, such as 4.4.6, if you don't have to use the latest one.
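Since MongoDB 4.4 the client tools ship separately from the server (MongoDB Database Tools), so before downgrading it is worth confirming which mongoexport build you actually have (a sketch; the URI, database, and collection names are placeholders):

# check the tools build, then retry the export
mongoexport --version
mongoexport --uri="mongodb://user:pass@localhost:27017/mydb" \
    --collection=mycoll --out=mycoll.json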

unsupported frontend protocol 1234.5680: server supports 2.0 to 3.0

I am running Confluence 7.9.1 and Postgres 10. When we start only the Postgres container, it doesn't produce the log line below:
> unsupported frontend protocol 1234.5680: server supports 2.0 to 3.0
but when we start Confluence 7.9.1, the Postgres container does.
Does anyone know how we can resolve this? We tried PGGSSENCMODE=disable in the environment, but it didn't help.
Regards,
Samurai
We resolved this by replacing postgresql-42.2.16.jar with the newer postgresql-42.2.18.jar,
as suggested here: https://jira.atlassian.com/browse/CONFSERVER-60515
Thank you for your support.
For those who are using postgresql-42.2.16.jar or prior and are looking to quiet this error without upgrading the JDBC jar, you can use the following option in the connection string - note case sensitivity:
gssEncMode=disable
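For example, appended to a typical JDBC URL (host and database name are placeholders):

jdbc:postgresql://dbhost:5432/confluence?gssEncMode=disable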

Anyone using hadoop_fdw with cloudera 5.2.0?

After a painful installation of hadoop_fdw into our running pgsql 9.3.4, I am trying to connect it to a Cloudera 5.2.0 cluster, with no luck.
Is there a way to debug the FDW? After creating the foreign table and selecting from it, I just get an error: ERROR: failed to connect to Hive: No more data to read.
btw: some older version of hadoop_fdw was capable of using a URL (jdbc://server:port/args), but not the recent version; there's just address & port.
hadoop_fdw didn't make it; there's probably something wrong/old/obsolete in hive.c. But with even more effort we managed to make jdbc_fdw work with the Cloudera JDBC drivers. The steps were as follows:
1) install the jdbc_fdw extension
2) merge all the driver jar files into one (see the sketch below)
3) create the server:
CREATE SERVER cloudera2
    FOREIGN DATA WRAPPER jdbc_fdw
    OPTIONS (drivername 'com.cloudera.hive.jdbc4.HS2Driver',
             url 'jdbc:hive2://fqdn:10000;user=hive',
             querytimeout '15',
             jarfile '/opt/cloudera/combined.jar');
Mental note: SET client_min_messages TO debug5; can help you identify where the problem is, e.g. driver not found.
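A minimal sketch of step 2, assuming the individual driver jars live in /opt/cloudera/jdbc (both paths are assumptions; adjust to your layout):

# unpack every driver jar into one directory, then repack as a single jar
mkdir merged && cd merged
for j in /opt/cloudera/jdbc/*.jar; do jar -xf "$j"; done
jar -cf /opt/cloudera/combined.jar .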