Load image from Postgres into Report Builder 3.0 - postgresql

I've loaded an image directly into Postgres and I know it's there, as I can do lo_export and extract it. It's a .png in an OID column. I have a connection to Postgres through Report Builder, which is successfully pulling data from my other tables. I can also use the image as an embedded image OK. However, when I use 'database' or 'external' as the image source and select the image field from my table, I only get a red cross when I run the report.
Is there something I'm missing?
Thanks

Thinking through this, here are some things I think would be worth trying. I can't find any discussion of this in the Report Builder 3.0 documentation, which is not surprising since it is designed for SQL Server. I would not be surprised if this is unsupported.
Try storing as a bytea instead of as a lob. The lob API is pretty complex and with bytea, all you have to worry about is text vs binary mode and whether the driver will unescape the results or not.
If it works as a bytea but not as a lob, then your issue is solely with the lob API. Bytea should be fine for images and small files anyway. It's only when you get to the point where seek() is helpful that lobs really shine.
If it does not work as bytea, then you may want to look at exporting the lob to your filesystem. Take a look at the PostgreSQL documentation for lo_export.
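For illustration, here's a minimal psycopg2 sketch of the bytea route; the table and column names are made up, and the point is just that bytea round-trips as plain bytes without the lob API:

    import psycopg2

    # Hypothetical table: images(id serial, name text, data bytea)
    conn = psycopg2.connect("dbname=mydb user=me")
    cur = conn.cursor()

    # Store a .png as bytea
    with open("logo.png", "rb") as f:
        cur.execute(
            "INSERT INTO images (name, data) VALUES (%s, %s)",
            ("logo.png", psycopg2.Binary(f.read())),
        )
    conn.commit()

    # Read it back; psycopg2 hands bytea back as a buffer/memoryview
    cur.execute("SELECT data FROM images WHERE name = %s", ("logo.png",))
    png_bytes = bytes(cur.fetchone()[0])
    with open("logo_out.png", "wb") as f:
        f.write(png_bytes)

    cur.close()
    conn.close()

If Report Builder still shows a red cross with a bytea column, the storage format probably isn't the culprit.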

Related

Fixed-width data into Postgres

Looking for a good way to load fixed-width data into Postgres tables. I do this in SAS and Python, not Postgres, and I guess there is no native method. The files are a few GB. The one approach I have seen does not work on my file for some reason (possibly memory issues): there you load everything as one large column and then parse it into tables. I can use psycopg2, but because of memory issues I would rather not. Any ideas or tools that work? Does pgloader work well, or are there native methods?
http://www.postgresonline.com/journal/index.php?/archives/157-Import-fixed-width-data-into-PostgreSQL-with-just-PSQL.html
Thanks
There's no convenient built-in method to ingest fixed-width tabular data in PostgreSQL. I suggest using a tool like Pentaho Kettle or Talend Studio to do the data-loading, as they're good at consuming many different file formats. I don't remember if pg_bulkload supports fixed-width, but suspect not.
Alternately, you can generally write a simple script with something like Python and the psycopg2 module, loading the fixed-width data row by row and sending that to PostgreSQL. psycopg2's support for the COPY command via copy_from makes this vastly more efficient. I didn't find a convenient fixed-width file reader for Python in a quick search but I'm sure they're out there. You can use whatever language you like anyway - Perl's DBI and DBD::Pg do just as well, and there are millions of fixed-width file reader modules for Perl.
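As a rough illustration of that approach, here is a hedged psycopg2 sketch that slices fixed-width lines in Python and streams them to PostgreSQL with copy_from; the column positions, file name, and target table are invented, and for multi-GB files you would feed the buffer in chunks rather than building it all at once:

    import io
    import psycopg2

    # Hypothetical layout: name in columns 0-19, city in 20-34, amount in 35-44
    FIELDS = [(0, 20), (20, 35), (35, 45)]

    def fwf_to_tsv(path):
        """Yield tab-separated rows built from a fixed-width file."""
        with open(path) as f:
            for line in f:
                yield "\t".join(line[a:b].strip() for a, b in FIELDS) + "\n"

    conn = psycopg2.connect("dbname=mydb user=me")
    cur = conn.cursor()

    # Hand the converted rows to COPY via a file-like buffer
    buf = io.StringIO("".join(fwf_to_tsv("data.fwf")))
    cur.copy_from(buf, "target_table", columns=("name", "city", "amount"))
    conn.commit()
    cur.close()
    conn.close()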
The Python pandas library has a function, pandas.read_fwf, which works great. Data can be read in with Python and then written to the Postgres database.
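A minimal pandas sketch, assuming made-up column widths, names, and connection details; read_fwf also takes a chunksize argument if the file is too big to hold in memory:

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical fixed-width layout: three columns of widths 20, 15 and 10
    df = pd.read_fwf("data.fwf", widths=[20, 15, 10],
                     names=["name", "city", "amount"])

    # Write the frame to Postgres; chunksize keeps the inserts in batches
    engine = create_engine("postgresql+psycopg2://me@localhost/mydb")
    df.to_sql("target_table", engine, if_exists="append",
              index=False, chunksize=10000)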

nvarchar(max) columns interfere with each other in SELECT statement, only through ODBC

In a recent update of the run-time engine and SQL Server version (2008 R2 to 2012), I have begun experiencing an issue where largish queries through ODBC come back with blank fields where there should not be any. The same query run directly in SQL Server worked fine.
I started to delete fields from the query and found that it was the five TEXT datatype fields in the query that were giving me trouble. The first TEXT field listed in the SELECT statement would show up fine, and subsequent TEXT fields would not show up. If I deleted all but two fields from the query, the remaining two would come through.
Since the problem is clearly occurring within the ODBC layer, my first thought was to switch my Windows 8 ODBC driver from "SQL Server Native Client 11.0" to "SQL Server". This did not help.
Since TEXT is on the way out of support, I thought it might be the culprit. I converted all the TEXT fields to NVARCHAR(MAX) (I am also looking for Unicode support). This did not fix anything. Next I tried converting the out-of-page datatypes to an in-page format, NVARCHAR(4000). This fixed the problem, but it does not work across the board, because I have some fields that are longer than 4000 characters.
My questions:
What is the limitation of ODBC related to out-of-page data that is causing this issue? My understanding is that nvarchar(max) data is only stored out-of-page if it is sufficiently long (am I wrong about this?). In the example table I'm working with, none of the text data fields are longer than 255 characters, yet the problem still occurs.
I could probably get by if I could figure out which fields need the extra length and only leave those fields in an out-of-page representation. However, the size of the application makes figuring out the exact (and possible) use of every field time-prohibitive. I hope I don't have to go this route.

Can COPY FROM tolerantly consume bad CSV?

I am trying to load text data into a PostgreSQL database via COPY FROM. The data is definitely not clean CSV.
The input data isn't always consistent: sometimes there are excess fields (separator is part of a field's content) or there are nulls instead of 0's in integer fields.
The result is that PostgreSQL throws an error and stops loading.
Currently I am trying to massage the data into consistency via Perl.
Is there a better strategy?
Can PostgreSQL be asked to be as tolerant as mysql or sqlite in that respect?
Thanks
PostgreSQL's COPY FROM isn't designed to handle dodgy data and is quite strict; there's little built-in tolerance for malformed input.
I thought there was little interest in adding any until I saw this proposed patch posted just a few days ago for possible inclusion in PostgreSQL 9.3. The patch has been resoundingly rejected, but shows that there's some interest in the idea; read the thread.
It's sometimes possible to COPY FROM into a staging TEMPORARY table that has all text fields with no constraints. Then you can massage the data using SQL from there. That'll only work if the CSV is at least well-formed and regular, though, and it doesn't sound like yours is.
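For illustration, a hedged psycopg2 sketch of that staging-table route; the table names and clean-up SQL are made up:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=me")
    cur = conn.cursor()

    # Staging table: every column is text and unconstrained, so COPY won't choke on types
    cur.execute("""
        CREATE TEMPORARY TABLE staging (id text, amount text, note text)
    """)

    with open("dirty.csv") as f:
        cur.copy_expert("COPY staging FROM STDIN WITH (FORMAT csv)", f)

    # Massage with SQL: treat empty strings as 0, then cast
    cur.execute("""
        INSERT INTO target (id, amount, note)
        SELECT id::int,
               COALESCE(NULLIF(amount, ''), '0')::int,
               note
        FROM staging
    """)
    conn.commit()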
If the data isn't clean, you need to pre-process it with a script in a suitable scripting language.
Have that script:
Connect to PostgreSQL and INSERT rows;
Connect to PostgreSQL and use the scripting language's Pg APIs to COPY rows in; or
Write out clean CSV that you can COPY FROM
Python's csv module can be handy for this. You can use any language you like: Perl, Python, PHP, Java, C, whatever.
If you were enthusiastic you could write it in PL/Perlu or PL/Pythonu, inserting the data as you read and clean it. I wouldn't bother.
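For the "write out clean CSV" option above, here's a rough sketch using Python's csv module; the expected field count and the null-fixing rule are invented for illustration, and the output can then be loaded with a plain COPY FROM:

    import csv

    EXPECTED_FIELDS = 5  # hypothetical column count of the target table

    with open("dirty.csv", newline="") as src, \
         open("clean.csv", "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        for row in reader:
            # Fold excess fields (stray separators) back into the last column
            if len(row) > EXPECTED_FIELDS:
                row = row[:EXPECTED_FIELDS - 1] + [",".join(row[EXPECTED_FIELDS - 1:])]
            # Replace an empty integer field with 0 (column 3 assumed integer here)
            if len(row) == EXPECTED_FIELDS and row[2] == "":
                row[2] = "0"
            writer.writerow(row)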

Fields on tables linked to Access 2010 from Postgres not becoming Memo?

I have a repeating problem that just feels so basic yet I cannot solve it nor can I find a solution online. Really hoping someone has something simple.
I have multiple situations where I have relatively large tables stored in Postgres (v8.4) and I want to be able to easily display them for my testers to review. The tables always have character varying fields that go well beyond the 255-character max that Access wants to display in a Text field; they should become Memo fields. The data also has every possible separator imaginable already in it (tab, carriage return, semicolon, pipe, etc.), so extracting it to Excel or the like will never work smoothly. The easiest thing WOULD be using ODBC to link the table into an Access DB and viewing it there ... except that when I link or import, Access translates the field to Text. I've tried settings on the ODBC side, but nothing can get those fields to be Memo.
I'll take a cleaner way to extract to Excel, a better way to view it in Access ... just anything that consistently gets the entire table somewhere my testers can review it in a low-level, user-friendly way. Suggestions?
Better late than never..
I just ran into this problem with Access 2010 and Postgres 9.1. I found a setting in the Postgres ODBC driver that you have to change. In the ODBC Data Source Administrator, select the data source that you set up and click the 'Configure...' button.
Click the 'Datasources' button.
Uncheck the 'Text as LongVarChar' checkbox
In Access, you may have to delete the linked tables and re-add them. I tried relinking and one table updated properly and one did not. After deleting and re-adding, I had both working.
Try setting the text data type for all columns that you want to have the Memo Data Type. I checked this with PostgreSQL 9.0 (64-bit) and psqlodbc_09_00_0310 (32-bit, so I created a User DSN under C:\Windows\SysWOW64\odbcad32.exe), and as far as I can see, all columns with the text type become Memo, as opposed to a character(6) column, which gets the Text Data Type in Access.
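If it helps, the type change on the Postgres side is just an ALTER TABLE; the table and column names below are hypothetical, and it's run through psycopg2 only to keep these examples in one language:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=me")
    cur = conn.cursor()
    # Widen a hypothetical character varying column to text so the ODBC
    # driver can expose it to Access as a Memo field
    cur.execute("ALTER TABLE test_results ALTER COLUMN review_notes TYPE text")
    conn.commit()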

Is there any PostgreSQL loader like Oracle has?

How do I use the COPY statement in PostgreSQL to load data from a text file into a PostgreSQL table when the file uses an escape character as the delimiter?
Is there any other way of loading data from a text file into a PostgreSQL table?
pgloader emulates Oracle's SQL*Loader:
http://pgfoundry.org/projects/pgloader/
pg_bulkload is used to load lots of data into an otherwise offline DB. Useful for large data warehouses; fast, and somewhat dangerous and quirky:
http://pgbulkload.projects.postgresql.org/
You should use COPY with the DELIMITER 'xx' option. You probably need to play around a little bit to get it right, but the docs give pretty good information about what to do with each option available to the command.
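For instance, here's a hedged psycopg2 sketch that assumes the delimiter is the ESC control character (0x1b) and uses a made-up table; copy_expert streams the file from the client, so you don't need server-side file access:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=me")
    cur = conn.cursor()
    # DELIMITER must be a single one-byte character; E'\x1b' is the ESC character
    with open("data.txt") as f:
        cur.copy_expert(r"COPY my_table FROM STDIN WITH DELIMITER E'\x1b'", f)
    conn.commit()
    cur.close()
    conn.close()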