MySQL Connector (Mac) returning Hex instead of String - tableau-api

I was screwing around trying to get Excel on the Mac to read MySQL data the other day, and only realized today that Tableau (my primary use of the MySQL connector) is returning my blob columns as hex strings, not the normal text strings it usually returns.
I cannot figure out how to set characterEncoding on the connector side, and nothing changed on the server side. Any help would be much appreciated. I am using MySQL connector 8.00.0020 and iODBC to facilitate use.

I reverted to version 5.3 of the connector and it worked fine.
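If you would rather stay on the 8.0 driver: characterEncoding is a Connector/J (JDBC) option; the connector-side character-set knob for Connector/ODBC is the CHARSET connection option. Below is a sketch of what a ~/.odbc.ini DSN under iODBC on the Mac might look like; the driver path and connection details are assumptions for your installation, and I can't promise this fixes the hex output.

```ini
; Hypothetical ~/.odbc.ini entry for MySQL Connector/ODBC under iODBC on macOS.
; Adjust the driver path and connection details to match your setup.
[MySQL-Tableau]
Driver   = /usr/local/mysql-connector-odbc/lib/libmyodbc8w.so
Server   = 127.0.0.1
Port     = 3306
Database = mydb
; force the client character set instead of relying on the driver default
CHARSET  = utf8mb4
```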

Related

Is there a way in Debezium to stop data serialization? Trying to get values from the source as they are

I have seen many posts on Stack Overflow where people capture data from a source RDBMS using Debezium. I am working with SQL Server. However, since the DECIMAL and TIMESTAMP values are encoded by default, decoding those values back into their original form becomes an overhead.
I was looking for a way to avoid this extra decoding step, but to no avail. Can anyone please tell me how to import data via Debezium as-is, i.e. without this encoding?
I saw some YouTube videos where DECIMAL values were extracted in their original form.
For example, 800.0 from SQL Server is obtained as 800.0 via Debezium and not as "ATiA" (encoded).
But I am not sure how to do this. Can anyone please tell me what configuration is required for this in Debezium? I am using Debezium Server for now, but can work with the Debezium connectors as well if needed.
Any help is appreciated.
Thanks.
It may be a matter of how timestamp and decimal values are represented, as opposed to encoding.
For timestamps, try different values of time.precision.mode, and for decimals use decimal.handling.mode; a sketch of where these go is below.
For MySQL, see the Debezium MySQL connector documentation.
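As an illustration, here is a sketch of how those two properties could appear in a Debezium Server application.properties. Only the two mode properties are the point; the connector class is shown for context, and everything else a real config needs (names, hosts, credentials) is omitted.

```properties
# Hypothetical fragment of a Debezium Server application.properties.
debezium.source.connector.class=io.debezium.connector.sqlserver.SqlServerConnector
# "precise" (the default) emits DECIMAL as Kafka Connect's Decimal logical type,
# which shows up as base64-encoded bytes in JSON (e.g. "ATiA");
# "string" or "double" keeps the value readable (e.g. 800.0).
debezium.source.decimal.handling.mode=string
# "connect" represents temporal columns with Kafka Connect's logical
# timestamp/date/time types instead of the adaptive integer representations.
debezium.source.time.precision.mode=connect
```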

PostgreSQL database causing loss of datetime-values

I have a PostgreSQL database containing a table with several 'timestamp with time zone' fields.
I have a tool (DBSync) that I want to use to transfer the contents of this table to another server/database.
When I transfer the data to a MSSQL server all datetime values are replaced with '1753-01-01'. When I transfer the data to a PostgreSQL database all datetime values are replaced with '0001-01-01'.
The smallest possible date for those systems.
Now I recreated the source table (including contents) in a different database on the same PostgreSQL server. The only difference: the source table is in a different database. Same server, same routing; only the ports are different.
The user is different, but I have the same rights in each database.
How can it be that the database is responsible for an apparently different interpretation of the data? Do PostgreSQL databases have database-specific settings that can cause such behaviour? What database settings can/should I check?
To be clear, I am not looking for another way to transfer data; I have several available. The thing that I am trying to understand is: how can it be that when an application reads datetime info from table A in database Y on server X it gives me the wrong date, while reading the same table from database Z on server X gives me the data as it should be?
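One way to narrow down where the values are being lost is to read a few rows from each database directly, outside the transfer tool, and print both the values and the settings each server reports. A minimal sketch with psycopg2; the connection strings, table, and column names are placeholders for whatever you actually have.

```python
# Hypothetical diagnostic: compare what each PostgreSQL instance actually
# returns for the timestamp column, plus its version and DateStyle setting.
import psycopg2

DSNS = [
    "host=dbhost port=5432 dbname=source_db user=me",  # database Y
    "host=dbhost port=5433 dbname=other_db user=me",   # database Z
]

for dsn in DSNS:
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("SHOW server_version")
            version = cur.fetchone()[0]
            cur.execute("SHOW DateStyle")
            datestyle = cur.fetchone()[0]
            cur.execute("SELECT my_timestamp_col FROM my_table LIMIT 5")
            sample = [row[0] for row in cur.fetchall()]
            print(dsn)
            print("  version:", version, "DateStyle:", datestyle)
            print("  sample values:", sample)
```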
It turns out that the cause is probably the difference in server version. One is Postgres 9 (works OK), the other is Postgres 10 (does not work OK).
They are different instances on the same machine. Somehow I missed that (blush).
By transferring I mean that I am reading records from a source database (PostgreSQL) and inserting them into a target database (MSSQL 2017).
This is done through the application; I am not sure what drivers it is using.
I will work with the people who made the application.
For those wondering: it is this application: https://dbconvert.com/mssql/postgresql/
When a solution is found I will update this answer with the found solution.

PostgreSQL + Delphi XE7 + ADO + ODBC

Our application successfully communicates with various databases (MSSQL, Oracle, Firebird) via ADO, which is why I'm trying to use ADO to add a PostgreSQL option to our software. I use the standard PostgreSQL ODBC provider. All was fine until I ran into a problem with large TEXT fields. When I use the Unicode version of the provider and try to get a TEXT field AsString, the application simply crashes with an EAccessViolation in the method RemoveMediumFreeBlock(). The ANSI version works, but it cuts off the content of the field (I guess the characters beyond the default 8190 LongVarChar limit). Small TEXT fields are read OK.
Could you suggest what to do about this issue?
Is there a better option for working with PostgreSQL via ADO in Delphi?
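One thing that may be worth checking: the 8190 figure matches the psqlODBC driver's default MaxLongVarcharSize, so the truncation in the ANSI driver, at least, looks like a driver setting rather than an ADO limit. Below is a sketch of an odbc.ini DSN with the relevant options; the DSN name, host, and the size value are placeholders, and I can't say whether this also avoids the access violation in the Unicode driver.

```ini
; Hypothetical odbc.ini entry for the psqlODBC driver.
; Only the last two options are the point; adjust the rest to your setup.
[PostgresADO]
Driver             = PostgreSQL Unicode
Servername         = dbhost
Port               = 5432
Database           = mydb
; return TEXT columns as SQL_LONGVARCHAR (the driver default)
TextAsLongVarchar  = 1
; default is 8190, which matches the truncation described above
MaxLongVarcharSize = 1000000
```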

nvarchar(max) columns interfere with each other in SELECT statement, only through ODBC

After a recent update of our run-time engine and SQL Server version (2008 R2 to 2012), I have begun experiencing an issue where largish queries through ODBC come back with blank fields where there should not be any. The same query run directly in SQL Server worked fine.
I started to delete fields from the query and found that it was the five TEXT datatype fields in the query that were giving me trouble. The first TEXT field listed in the SELECT statement would show up fine, and subsequent TEXT fields would not show up. If I deleted all but two fields from the query, the remaining two would come through.
Since the problem is clearly occurring within ODBC, my first thought was to switch my Windows 8 ODBC driver from "SQL Server Native Client 11.0" to "SQL Server". This did not help.
Since TEXT is on the way out of support, I thought it might be the culprit. I converted all the TEXT fields to NVARCHAR(MAX) (I am also looking for Unicode support). This did not fix anything. Next I tried converting the out-of-page datatypes to an in-page format, NVARCHAR(4000). This fixed the problem, but it does not work across the board, because I have some fields that are longer than 4000 characters.
My questions:
What is the limitation of ODBC related to out-of-page data that is causing this issue? My understanding is that nvarchar(max) data is only stored out of page if it is sufficiently long (am I wrong about this?). In the example table that I'm working with, none of the text fields are longer than 255 characters, yet the problem still occurs.
I could probably get by if I could figure out which fields actually need the extra length and leave only those fields in an out-of-page representation. However, the size of the application makes figuring out the exact (and possible) use of every field prohibitively time-consuming. I hope I don't have to go this route.
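For what it's worth, the symptom (only the first long column in the SELECT list coming through) resembles a long-standing ODBC restriction: many drivers fetch TEXT/nvarchar(max) columns with SQLGetData, and depending on the driver's capabilities those columns must come after all other columns in the SELECT list and be read in column order. I can't see what your run-time engine does internally, but a quick way to test the ordering theory outside of it is a small script. A sketch with pyodbc; the DSN, table, and column names are placeholders.

```python
# Hypothetical test: put the nvarchar(max) columns at the end of the SELECT
# list and read the rows in order, to see whether the blanking still occurs
# outside the run-time engine.
import pyodbc

conn = pyodbc.connect("DSN=MySqlServerDsn;Trusted_Connection=yes")
cur = conn.cursor()
cur.execute(
    "SELECT id, status, "                # short, in-page columns first
    "       notes_1, notes_2, notes_3 "  # nvarchar(max) columns last
    "FROM dbo.my_table"
)
for row in cur.fetchall():
    print(row.id, len(row.notes_1 or ""), len(row.notes_2 or ""), len(row.notes_3 or ""))
conn.close()
```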

Load image from Postgres into Report Builder 3.0

I've loaded an image directly into Postgres and I know it's there, as I can do lo_export and extract it. It's a .png in an OID column. I have a connection to Postgres through Report Builder, which is successfully pulling data from my other tables. I can also use the image as an embedded image OK. However, when I use 'database' or 'external' as the image source and select the image field from my table, I only get a red cross when I run the report.
Is there something I'm missing?
Thanks
Thinking through this, here are some things I think would be worth trying. I can't find any discussion of this in the Report Builder 3.0 documentation, which is not surprising since it is designed for SQL Server. I would not be surprised if this is unsupported.
Try storing the image as a bytea instead of as a lob (there is a small sketch of that route at the end of this answer). The lob API is pretty complex, and with bytea all you have to worry about is text vs. binary mode and whether the driver will unescape the results or not.
If it works as a bytea but not as a lob, then your issue is solely with the lob API. Bytea should be fine for images and small files anyway. It's only when you get to the point where seek() is helpful that lobs really shine.
If it does not work as a bytea either, then you may want to look at exporting the lob to your filesystem. Take a look at the PostgreSQL documentation for lo_export.
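To illustrate the bytea route from the first suggestion, here is a minimal sketch with psycopg2 that loads a PNG into a bytea column and reads it back; the table and column names are placeholders. Report Builder would still read the column through its own data source, but this at least confirms the image round-trips as plain bytes without the lob API.

```python
# Hypothetical example: store an image in a bytea column and read it back.
import psycopg2

with psycopg2.connect("host=dbhost dbname=reports user=me") as conn:
    with conn.cursor() as cur:
        with open("logo.png", "rb") as f:
            cur.execute(
                "INSERT INTO report_images (name, image) VALUES (%s, %s)",
                ("logo", psycopg2.Binary(f.read())),
            )
        cur.execute("SELECT image FROM report_images WHERE name = %s", ("logo",))
        png_bytes = bytes(cur.fetchone()[0])  # psycopg2 returns a memoryview
        print(len(png_bytes), "bytes read back")
```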