Encode to Quoted-Printable in TSQL (or FreeMarker)? - tsql

I store the message part of lots of emails in an MS SQL database. Before sending an email with the message, I need to encode it into Quoted-Printable format. I don't encode it before saving it to the db because I want to keep the original message, and I don't want to store both the original and the encoded version in the db.
I'm using third-party software for sending the mails, so my only options are to encode the messages when reading them from the database or to encode them in FreeMarker.
So, does anyone know how to encode the messages from T-SQL or FreeMarker? Preferably a solution that doesn't involve buying a license.

The options you have are as follows:
Select the original email from SQL Server and then encode it in the client application (see the sketch after this list).
Create a stored procedure or function using the CLR.
Create a T-SQL function without using the CLR. In this case you will have to implement all the Quoted-Printable rules yourself; this would be really messy and may not be very efficient either.
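For the first option, a minimal client-side sketch could read the original messages and quote-printable-encode them with Python's standard quopri module before handing them to the mail software. The DSN, credentials, and the EmailMessages table/columns below are assumptions for illustration:

```python
# Sketch of option 1: encode outside the database, so the stored message stays untouched.
import pyodbc
import quopri

conn = pyodbc.connect("DSN=mail_db;UID=app;PWD=secret")  # hypothetical connection
cur = conn.cursor()

for msg_id, body in cur.execute("SELECT Id, Body FROM EmailMessages"):
    # quopri works on bytes; use the charset you will declare in the MIME headers
    encoded = quopri.encodestring(body.encode("utf-8")).decode("ascii")
    # hand `encoded` (with Content-Transfer-Encoding: quoted-printable) to the mail software
    print(msg_id, encoded[:60])
```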

Related

Is there a way in Debezium to stop data serialization? Trying to get values from source as it is

I have seen many posts on Stack Overflow where people are trying to capture data from a source RDBMS and are using Debezium for it. I am working with SQL Server. However, since the DECIMAL and TIMESTAMP values are encoded by default, it becomes an overhead to decode those values back into their original form.
I was looking to avoid this extra decoding step, but to no avail. Can anyone please tell me how to import data via Debezium as-is, i.e. without serializing it?
I saw some YouTube videos where DECIMAL values were extracted in their original form.
For example, 800.0 from SQL Server is obtained as 800.0 via Debezium and not as "ATiA" (encoded).
But I am not sure how to do this. Can anyone please tell me what configuration is required for this in Debezium? I am using Debezium Server for now, but I can work with the Debezium connectors as well if that's needed.
Any help is appreciated.
Thanks.
This may be a matter of how timestamp and decimal values are represented rather than of encoding.
For timestamps, try different values of time.precision.mode, and for decimals use decimal.handling.mode.
For MySQL, these options are described in the connector documentation.
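For Debezium Server, connector options go into application.properties with the debezium.source. prefix. A hedged sketch of the two settings mentioned above (exact values depend on your connector and version, so check the SQL Server connector documentation):

```properties
# Emit decimals as plain strings (or use "double") instead of the default
# "precise" mode, which produces base64-encoded Connect Decimal bytes like "ATiA".
debezium.source.decimal.handling.mode=string
# Represent time/timestamp columns with Kafka Connect's built-in logical types.
debezium.source.time.precision.mode=connect
```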

When inserting Unicode data in an ODBC application, how to determine the encoding in which it should be sent

I have a generic ODBC application reading and writing data via ODBC to some db (it can be MS SQL, MySQL, or anything else). The received and sent data can be Unicode. I'm using SQL_C_WCHAR for my bindings in this case.
So I have two questions here:
Can I determine the encoding in which the data came from the ODBC data source?
In which encoding should I send data to the ODBC data source? I'm running a parameterised insert statement for this purpose.
My research showed that some data sources have connection options to set the encoding, but I want to write a generic application that works with anything.
I couldn't find any ODBC option that tells me the encoding of the data source. Is there something like that? The ODBC docs just say to use SQL_C_WCHAR. Is SQL_C_WCHAR UTF-16?
I did some more research, and both the Microsoft docs and the unixODBC docs seem to indicate that ODBC only supports UCS-2, so I think all data sent or received needs to be UCS-2 encoded.
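As an illustration of this in practice (not the asker's C application): Python's pyodbc binds str parameters as SQL_C_WCHAR and lets you pin down the byte encoding explicitly. A sketch with a hypothetical DSN and table:

```python
# Sketch: making the wide-character encoding explicit in one ODBC client (pyodbc).
# UCS-2 is the BMP-only subset of UTF-16LE, so BMP data round-trips either way.
import pyodbc

cnxn = pyodbc.connect("DSN=anydb;UID=user;PWD=secret")   # hypothetical connection

# Bytes arriving in SQL_C_WCHAR buffers are decoded as UTF-16LE.
cnxn.setdecoding(pyodbc.SQL_WCHAR, encoding="utf-16le")
# str parameters are sent to the driver as UTF-16LE wide characters.
cnxn.setencoding(encoding="utf-16le")

cur = cnxn.cursor()
cur.execute("INSERT INTO names (value) VALUES (?)", "日本語 текст")  # hypothetical table
cnxn.commit()
```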

Database encoding in PostgreSQL

I have recently started using PostgreSQL for creating/updating existing SQL databases. Being rather new to this, I ran into the issue of selecting the correct encoding when creating a new database. UTF-8 (the default) did not work for me, as the data to be stored is in various languages (English, Chinese, Japanese, Russian, etc.) and also includes symbolic characters.
Question: what is the right database encoding to satisfy my needs?
Any help is highly appreciated.
There are four different encoding settings at play here:
The server side encoding for the database
The client_encoding that the PostgreSQL client announces to the PostgreSQL server. The PostgreSQL server assumes that text coming from the client is in client_encoding and converts it to the server encoding.
The operating system default encoding. This is the default client_encoding set by psql if you don't provide a different one. Other client drivers might have different defaults; e.g. PgJDBC always uses UTF-8.
The encoding of any files or text being sent via the client driver. This is usually the OS default encoding, but it might be different: for example, your OS might be set to use UTF-8 by default, but you might be trying to COPY some CSV content that was saved as latin-1.
You almost always want the server encoding set to utf-8. It's the rest that you need to change depending on what's appropriate for your situation. You would have to give more detail (exact error messages, file contents, etc) to be able to get help with the details.
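To make the moving parts concrete, here is a small sketch (psycopg2, hypothetical connection details) showing where the server encoding and client_encoding are visible from a client:

```python
# Sketch: the server converts between the database's server-side encoding and
# whatever client_encoding this session declares.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")   # hypothetical connection
cur = conn.cursor()

cur.execute("SHOW server_encoding")              # fixed when the database was created
print("server_encoding:", cur.fetchone()[0])

conn.set_client_encoding("UTF8")                 # what this client promises to send/receive
cur.execute("SHOW client_encoding")
print("client_encoding:", cur.fetchone()[0])

# If you feed the server a latin-1 file through this connection, either re-encode
# the file to UTF-8 first or set client_encoding to match the file, e.g.
# conn.set_client_encoding("LATIN1")
```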

Can COPY FROM tolerantly consume bad CSV?

I am trying to load text data into a PostgreSQL database via COPY FROM. The data is definitely not clean CSV.
The input data isn't always consistent: sometimes there are excess fields (the separator is part of a field's content), or there are nulls instead of 0s in integer fields.
The result is that PostgreSQL throws an error and stops loading.
Currently I am trying to massage the data into consistency with Perl.
Is there a better strategy?
Can PostgreSQL be asked to be as tolerant as MySQL or SQLite in this respect?
Thanks
PostgreSQL's COPY FROM isn't designed to handle bad data; it is quite strict, and there's little support for tolerating malformed input.
I thought there was little interest in adding any until I saw a proposed patch posted just a few days ago for possible inclusion in PostgreSQL 9.3. The patch was resoundingly rejected, but it shows that there's some interest in the idea; read the thread.
It's sometimes possible to COPY FROM into a staging TEMPORARY table that has all text fields with no constraints, then massage the data using SQL from there. That'll only work if the input is at least well-formed and regular, though, and it doesn't sound like yours is.
If the data isn't clean, you need to pre-process it with a script in a suitable scripting language.
Have that script:
Connect to PostgreSQL and INSERT rows;
Connect to PostgreSQL and use the scripting language's Pg APIs to COPY rows in; or
Write out clean CSV that you can COPY FROM
Python's csv module can be handy for this (see the sketch below). You can use any language you like: Perl, Python, PHP, Java, C, whatever.
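A minimal sketch of the last option (the file name, three-column layout, my_table, and the connection string are assumptions): read the dirty file with Python's csv module, repair each row, and COPY the cleaned rows in:

```python
# Sketch: pre-process inconsistent rows, then COPY the clean CSV into PostgreSQL.
import csv
import io
import psycopg2

EXPECTED_COLS = 3  # assumed width of the target table

clean = io.StringIO()
writer = csv.writer(clean)

with open("dirty.txt", newline="", encoding="utf-8") as src:
    for row in csv.reader(src):
        if len(row) > EXPECTED_COLS:
            # the separator appeared inside the last field: glue the overflow back on
            row = row[:EXPECTED_COLS - 1] + [",".join(row[EXPECTED_COLS - 1:])]
        elif len(row) < EXPECTED_COLS:
            row += [""] * (EXPECTED_COLS - len(row))
        # empty values in the integer column become 0 (column 2 here, purely illustrative)
        if not row[2].strip():
            row[2] = "0"
        writer.writerow(row)

clean.seek(0)
conn = psycopg2.connect("dbname=mydb user=me")  # hypothetical connection
with conn, conn.cursor() as cur:
    cur.copy_expert("COPY my_table FROM STDIN WITH (FORMAT csv)", clean)
```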
If you were enthusiastic you could write it in PL/Perlu or PL/Pythonu, inserting the data as you read it and clean it up. I wouldn't bother.

How to export data with Arabic characters

I had an application that used a Sybase ASA 8 database. However, the application is not working anymore and the vendor went out of business.
Therefore, I've been trying to extract the data from the database, which contains Arabic characters. When I connect to the database and display the contents, the Arabic characters do not display correctly; instead, they look something like ÇáÏãÇã, which is incorrect.
I tried to export the data to a text file, with the same result. I also tried saving the text file with UTF-8 encoding, but to no avail.
I have no idea what collation the tables are set to. Is there a way to export the data correctly, or convert it to the correct encoding?
The problem was solved by exporting the data from the database using "Windows-1252" encoding and then importing it into the other applications with "Windows-1256" encoding (see the sketch below).
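That round trip works because the bytes are really Windows-1256 (Arabic) but were being decoded as Windows-1252. A tiny illustration of the same re-encoding in Python, using the garbled sample from the question:

```python
# Re-interpret the mojibake with the right code page.
garbled = "ÇáÏãÇã"
fixed = garbled.encode("windows-1252").decode("windows-1256")
print(fixed)  # الدمام
```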
When you connect to the database, use the CHARSET=UTF-8 connection parameter. That will tell the server to convert the data to UTF-8 before sending it to the client application. Then you can save the data from the client to a file.
This, of course, is assuming that the data was saved with the correct character set to begin with. If it wasn't, you may be out of luck.
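A sketch of that approach from a client (the DSN, credentials, and table are hypothetical; CHARSET is the ASA connection parameter mentioned above):

```python
# Sketch: ask the ASA server for UTF-8 via the connection string, then write a UTF-8 file.
import pyodbc

conn = pyodbc.connect("DSN=asa8_legacy;UID=dba;PWD=secret;CHARSET=UTF-8")
rows = conn.cursor().execute("SELECT city_name FROM customers").fetchall()  # hypothetical table

with open("export.txt", "w", encoding="utf-8") as out:
    for (city_name,) in rows:
        out.write(city_name + "\n")
```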