COPY HEADER available only in CSV mode - PostgreSQL

When I try to use the COPY command with the HEADER option and text format to export a table in PostgreSQL, I get the following error:
COPY HEADER available only in CSV mode
I understand that I can use the csv format with a delimiter other than , to generate a different file format, but I am wondering why using HEADER with the text format is prohibited.

The default text format of COPY is proprietary to PostgreSQL and not very useful for data exchange with other software. For example, a NULL value is represented as \N.
Since nobody saw a need for having header data in this format, it didn't get implemented.
Use the csv format for data exchange.
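For example, if what you want is a tab-separated file that still has a header line, you can stay in csv format and change only the delimiter. A minimal sketch in Python with psycopg2 (the connection string, table name, and output path are placeholders):

import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder connection
with conn, conn.cursor() as cur:
    with open("/tmp/mytable.tsv", "w") as f:
        # csv format accepts HEADER and any single-byte delimiter,
        # so a tab yields a TSV file with a header line
        cur.copy_expert(
            "COPY mytable TO STDOUT WITH (FORMAT csv, HEADER, DELIMITER E'\\t')",
            f,
        )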

Related

Problem reading column header values to CSV in Robot Framework

When exporting column headers from a web menu to CSV in Robot Framework, the language is Polish and the text shows unknown characters. How do I encode it?
I don't think the problem you are seeing is to do with encoding. The result from
Get column headers from CSV file isn't a list, which is what your error is pointing to:
List Should Contain Sub List expects two lists as arguments.
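If you want to verify outside Robot Framework what the header row actually contains, here is a minimal Python sketch (the file name is a placeholder; utf-8-sig also tolerates a BOM, which often mangles Polish text):

import csv

with open("export.csv", newline="", encoding="utf-8-sig") as f:
    headers = next(csv.reader(f))  # header row as a list of strings

print(headers)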

I am trying to read the time and message value field data as shown below and write it to an Excel file.

Sample data and required Excel layout (shown as images in the original post):
Also, read the Time section as shown in the file and populate the Excel file with that data in a column with the header name Time, as shown above. Likewise, read the message values as shown in the .asc file and populate the Excel file, converting the numbers from hexadecimal to decimal, in columns named Data1, Data2, Data3, …
If your '.asc' file consists of tab-delimited ASCII text, then Excel will allow you to import it into a worksheet.
The following explainer comes from Microsoft's Office support site:
There are two ways to import data from a text file by using Microsoft Excel: you can open the text file in Excel, or you can import the text file as an external data range. To export data from Excel to a text file, use the Save As command.
There are two commonly used text file formats:
Delimited text files (.txt), in which the TAB character (ASCII character code 009) typically separates each field of text.
Comma separated values text files (.csv), in which the comma character (,) typically separates each field of text.
You can change the separator character that is used in both delimited and .csv text files. This may be necessary to make sure that the import or export operation works the way that you want it to.
If neither of those methods works for you and your '.asc' file was generated by MATLAB, then you may be able to use MATLAB to export directly to an Excel worksheet: MATLAB has a function, xlswrite, that writes directly to a Microsoft Excel spreadsheet.
Another option, if you're comfortable writing a little MATLAB code, is to use its textscan function to parse your '.asc' file first (textscan is a MATLAB function, not VBA).
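If you'd rather script the whole conversion, here is a minimal Python sketch under the assumption that the .asc file is tab-delimited with a time value followed by hex message bytes on each line (the file names and column layout are guesses, since the sample data is only pictured):

import csv
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws.append(["Time", "Data1", "Data2", "Data3", "Data4",
           "Data5", "Data6", "Data7", "Data8"])

with open("input.asc", newline="") as f:
    for row in csv.reader(f, delimiter="\t"):
        time, *hex_bytes = row                       # assumed layout
        # convert each hex byte (e.g. "1A") to decimal
        ws.append([float(time)] + [int(h, 16) for h in hex_bytes])

wb.save("output.xlsx")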

SSIS unicode flat file issue "Character not in code page"

I have a text file created in Java using UTF-16 encoding.
When I try to import it, I get a validation failure/error on the flat file source before it even begins to move data. The error is that a character is not in the specified code page.
[Flat File Source [908]] Error: Data conversion failed. The data conversion for column "ACTIVE_INGREDIENT" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page."
In my Flat File connection, I don't have Unicode selected (as that struggles to find my CR LF line terminators), but I have set the code page to 65001 (UTF-8).
In my flat file data source, I have changed all internal and external columns to DT_WSTR in the advanced editor (it seems I can't change the code page; it's stuck on 0 with this option).
I am not doing a data conversion, as I am mapping to NVARCHAR tables (the SSIS job isn't even getting far enough to try to transfer data).
I can't even redirect the rows to a text file to identify them, as I have the same issue trying to output to a flat file destination.
Any help appreciated.
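One mismatch worth checking, given the description above: the file is UTF-16 but the connection's code page is 65001 (UTF-8). A minimal Python sketch to re-encode the file so it matches the configured code page (the file names are placeholders):

# re-encode the Java-produced UTF-16 file as UTF-8 so it matches
# the 65001 code page configured on the flat file connection
with open("input.txt", encoding="utf-16") as src, \
     open("input_utf8.txt", "w", encoding="utf-8", newline="") as dst:
    for line in src:
        dst.write(line)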

Parsing COPY...WITH BINARY results

I'm using this:
COPY (SELECT field1, field2, field3 FROM table)
TO 'C://Program Files/PostgreSql//8.4//data//output.dat'
WITH BINARY
This exports some fields to a file; one of them is a bytea field. Now I need to read that file with a custom-made program.
How can I parse this file?
The general format of a file generated by COPY...BINARY is explained in the documentation, and it's non-trivial.
bytea contents are the easiest to deal with, since they're not encoded.
Every other datatype has its own encoding rules, which are described not in the documentation but in the source code. From the docs:
To determine the appropriate binary format for the actual tuple data you should consult the PostgreSQL source, in particular the *send and *recv functions for each column's data type (typically these functions are found in the src/backend/utils/adt/ directory of the source distribution).
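The overall framing, though, is documented and simple to walk: an 11-byte signature, a 4-byte flags field, a header extension area, then for each tuple a 16-bit field count followed by length-prefixed field values, with a length of -1 marking NULL and a field count of -1 as the trailer. A minimal Python sketch that walks this framing and returns each field as raw bytes (type-specific decoding is left to the caller):

import struct

def read_copy_binary(path):
    with open(path, "rb") as f:
        data = f.read()
    assert data[:11] == b"PGCOPY\n\xff\r\n\x00", "not a COPY BINARY file"
    pos = 11
    flags, ext_len = struct.unpack_from(">II", data, pos)
    pos += 8 + ext_len                      # skip header extension area
    rows = []
    while True:
        (nfields,) = struct.unpack_from(">h", data, pos)
        pos += 2
        if nfields == -1:                   # file trailer
            break
        row = []
        for _ in range(nfields):
            (flen,) = struct.unpack_from(">i", data, pos)
            pos += 4
            if flen == -1:                  # NULL is sent as length -1
                row.append(None)
            else:
                row.append(data[pos:pos + flen])
                pos += flen
        rows.append(row)
    return rows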
It might be easier to use the text format rather than binary (just remove WITH BINARY). The text format is better documented and is designed for interoperability; the binary format is intended more for moving data between PostgreSQL installations, and even there it has version incompatibilities.
The text format will write the bytea field as if it were text, encoding any non-printable characters in \nnn octal representation (except for a few special cases that it encodes with C-style escapes, such as \n and \t). These are listed in the COPY documentation.
The only caveat is that you need to be absolutely sure the character encoding used when saving the file is the same as when reading it, so that the printable characters map to the same numbers. I'd stick to SQL_ASCII, as it keeps things simpler.
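If you do go the text route, the unescaping is easy to do yourself. A minimal Python sketch of those rules, covering the \nnn octal sequences and the C-style escapes (the rarer \x hex form accepted by COPY FROM is omitted):

SIMPLE = {"b": "\b", "f": "\f", "n": "\n", "r": "\r", "t": "\t", "v": "\v"}

def unescape_copy_text(field):
    if field == "\\N":
        return None                          # NULL marker
    out, i = [], 0
    while i < len(field):
        ch = field[i]
        if ch != "\\":
            out.append(ch)
            i += 1
            continue
        nxt = field[i + 1]
        if nxt in SIMPLE:                    # C-style escape
            out.append(SIMPLE[nxt])
            i += 2
        elif nxt in "01234567":              # \nnn octal, up to 3 digits
            j = i + 1
            while j < min(i + 4, len(field)) and field[j] in "01234567":
                j += 1
            out.append(chr(int(field[i + 1:j], 8)))
            i = j
        else:                                # \\ and other escaped chars
            out.append(nxt)
            i += 2
    return "".join(out)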

formatting text in a csv export

I'm having trouble with a .csv export which is being uploaded to a website. There must be some hidden or illegal characters in a description field I have in the database. I'm having a tough time getting the text to format correctly and not break a PHP script.
If I use the GetAs(css) function in a calculation, the text works fine. Obviously this won't work in a production file, but it at least confirms there's something in the formatting of the description field that's breaking the export. I also used Excel's CLEAN(text) function and that fixes the issue as well; I just need to find a way to do this in FileMaker.
Any suggestions? Maybe a custom function that strips out bad characters?
You can filter invalid characters out of text using the Filter function. If you only want a minimal set of ASCII characters, use it like:
Filter(mytable::myfield; "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ 0123456789.!?")
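For comparison, the same whitelist idea expressed in Python (the input string is just an example):

ALLOWED = set(
    "abcdefghijklmnopqrstuvwxyz"
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    " 0123456789.!?"
)

def clean(text):
    # keep only whitelisted characters, like FileMaker's Filter()
    return "".join(ch for ch in text if ch in ALLOWED)

print(clean("Hello\x0bWorld!"))  # -> "HelloWorld!"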