Fields on tables linked to Access 2010 from Postgres not becoming Memo? - postgresql

I have a repeating problem that just feels so basic yet I cannot solve it nor can I find a solution online. Really hoping someone has something simple.
I have multiple situations where I have relatively large tables stored in Postgres (v8.4) and I want to be able to easily display them for my testers to review. The tables always have character varying fields that go well beyond the 255-character maximum that Access will display in a Text field; they should become Memo fields. The data also has every possible separator imaginable already in it (tab, carriage return, semicolon, pipe, etc.), so extracting it to Excel or the like will never work smoothly. The easiest thing WOULD be using ODBC to link the table into an Access DB and viewing it there ... except that when I link or import, Access translates the field to Text. I've tried settings on the ODBC side, but nothing can get those fields to be Memo.
I'll take a cleaner way to extract to Excel, a better way to view it in Access ... just anything that consistently gets the entire table somewhere my low-level users can review it in a friendly way. Suggestions?

Better late than never...
I just ran into this problem with Access 2010 and Postgres 9.1. There's a setting in the Postgres ODBC driver that you have to change. In the ODBC Data Source Administrator, select the data source that you set up and click the 'Configure...' button.
Click the 'Datasources' button.
Uncheck the 'Text as LongVarChar' checkbox.
In Access, you may have to delete the linked tables and re-add them. I tried relinking: one table updated properly and one did not. After deleting and re-adding, I had both working.
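If it helps, the same driver option can also be set in a DSN-less connection string instead of through the GUI. A minimal sketch using Python's pyodbc, assuming the driver is registered as 'PostgreSQL Unicode' and that TextAsLongVarchar is the connection-string spelling of that checkbox (both worth verifying against your psqlODBC version):

import pyodbc

# DSN-less connection; server, database and credentials are placeholders.
conn = pyodbc.connect(
    "Driver={PostgreSQL Unicode};Server=localhost;Port=5432;"
    "Database=testdb;Uid=tester;Pwd=secret;"
    "TextAsLongVarchar=0;"  # the 'Text as LongVarChar' checkbox, unchecked
)
cur = conn.cursor()
cur.execute("SELECT version()")
print(cur.fetchone()[0])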

Try setting the text data type for all columns that you want to have the Memo data type. I checked this with PostgreSQL 9.0 (64-bit) and psqlodbc_09_00_0310 (32-bit, so I created a User DSN under C:\Windows\SysWOW64\odbcad32.exe), and as far as I can see, all columns with type text become Memo, as opposed to a character(6) column, which gets the Text data type in Access.
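If some of your columns are currently character varying, converting them is a one-line DDL change per column. A minimal sketch, assuming a hypothetical table docs with a description column (psycopg2 is used here just to run the statement; plain psql works just as well):

import psycopg2

conn = psycopg2.connect(dbname="testdb", user="tester")  # placeholder credentials
cur = conn.cursor()
# Change the column's declared type; existing values are kept.
cur.execute("ALTER TABLE docs ALTER COLUMN description TYPE text")
conn.commit()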

Related

Is it possible to copy just highlighted numbers from Tableau?

I have a Tableau workbook that connects to a database and then has several sheets that reorganize the data into different tables and graphs that I need.
If I make a sheet that has 2 rows and 1 field for example, I can't highlight the numbers and just copy them without also copying the row names for each item.
Is there a way I can copy just the numbers, nothing else?
It does not appear to be possible :(
As can be seen from the following Tableau threads:
Copy data from Text tables to clipboard
Copy single cell from view data
various incarnations of your request have already been put to the development team but have yet to make it into Tableau. I also couldn't find anything in the user documentation that describes a workaround.
There's a way to do this using Python, and probably AutoHotkey, if that's of interest - both options are hackish.
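For what it's worth, the Python route is mostly clipboard munging: copy the table out of Tableau as usual, then strip everything except the numbers before pasting. A minimal sketch, assuming Tableau has put tab-separated text on the clipboard and that the numbers sit in the last column (pyperclip is one common clipboard library):

import pyperclip  # pip install pyperclip

rows = pyperclip.paste().splitlines()
# Keep only the final (numeric) column of each non-empty row; adjust the index to taste.
numbers = [row.split("\t")[-1] for row in rows if row.strip()]
pyperclip.copy("\n".join(numbers))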

SQL Developer returns query results on one computer but not on another

I can run a query on views in SQL Developer 3.1.07 and it returns the results I expect. My co-worker in Mexico, using the same user, can connect to the same database, sees the same views, runs the same query, and gets no results, even from a simple "select * from VIEWNAME" query. The column headers display, but no data. If he selects a view from the Connections window and selects the Data tab, no data displays. This user does not have access to any tables on this specific database.
I'm not certain he is running the same version of SQL Developer, but it's not far off. I have checked as many settings in SQL Developer as I could think might be the issue, but I see no significant difference between his settings and mine.
Connecting to another database allows him to access data in both tables and views.
Any thoughts on what we're missing?
I know I'm a few years late, but check whether the underlying view filters on something that is based on localisation! I just had this issue, and it turned out to be a statement like the following that was causing it:
SELECT *
FROM sometable
WHERE language = userenv('LANG')
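A quick way to confirm this kind of problem is to compare what userenv('LANG') actually returns in each person's session. A minimal sketch using the cx_Oracle driver (connection details are placeholders; running the same SELECT in each SQL Developer session works too):

import cx_Oracle

conn = cx_Oracle.connect("user", "password", "dbhost/service")  # placeholders
cur = conn.cursor()
cur.execute("SELECT userenv('LANG') FROM dual")
# Different output on the two machines would explain the empty result set.
print(cur.fetchone()[0])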
Copy the JDBC folder from your Oracle home over to your co-worker's machine. We had the same issue, and replacing the JDBC folder worked.
I faced the same thing, which got resolved when I checked the 'Skip NLS settings' box. My query was returning zero results earlier, but when I ran the same query again, I could see the table rows.
Since your co-worker is in a different country, most probably the NLS settings (related to the language) are the culprit here.
I was facing the same issue; it turned out that an update to the database from my SQL Developer had not been committed to the main database. That's why I was getting results for that query in my SQL Developer while from AWS it was returning empty results. When I chatted with the DBA, he found stale data. After I committed the data from my SQL Developer, the database was actually updated.

FileMaker Pro Advanced - Scripting import from ODBC with variable target table

I have several tables I'm importing from ODBC using the import script step. Currently, I have an import script for each and every table. This is becoming unwieldy as I now have nearly 200 different tables.
I know I can calculate the SQL statement to say something like "Select * from " & $TableName. However, I can't figure out how to set the target table without specifying it in the script. Please tell me I'm being dense and there is a good way to do this!
Thanks in advance for your assistance,
Nicole Willson
Integrated Research
Unfortunately, the target table of an import has to be hard-coded in FileMaker up through version 12 if you're using the Import Records script step. I can think of a workaround, but it's rather convoluted, and if you're importing a large number of records, it would probably significantly increase the time to import them.
The workaround would be to not use the Import Records script step, but to script the creation of records and the population of data into fields yourself.
First of all, the success of this would depend on how you're using ODBC. As far as I can tell, it would only work if you're using ODBC to create shadow tables within FileMaker, so that FileMaker can access the ODBC database via other script steps. I'm not an expert on FileMaker's other ODBC facilities, so I don't know whether this workaround would help in other cases.
So, if you have a shadow table into the remote ODBC database, then you can use a script something like the following. The basic idea is to have two sets of layouts: one for the shadow tables that information is coming from, and another for the FileMaker tables that the information needs to go to. Loop through that list of source layouts, pulling information from the shadow table into variables (or something like the dictionary library I wrote, which you can find at https://github.com/chivalry/filemaker-dictionary). Then go to the layout linked to the target table, create a record, and populate the fields.
This isn't a novice technique, however. In addition to using variables and loops, you're also going to have to use FileMaker's design functions to determine the source and destination of each field, and Set Field By Name to put the data in the right place. But as far as I can tell, it's the only way to dynamically target tables for importing data. A rough outline follows.
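Something like this, as a sketch (step names as in FileMaker 12; $targetLayout, $targetField, and $value are illustrative variables you'd populate using the design functions mentioned above, e.g. FieldNames ( Get ( FileName ) ; Get ( LayoutName ) )):

Go to Layout ["Shadow Source" (the shadow table's layout)]
Go to Record/Request/Page [First]
Loop
  # gather the current source record's values into variables
  Go to Layout [$targetLayout]
  New Record/Request
  Set Field By Name [$targetField; $value] (repeated once per field)
  Go to Layout ["Shadow Source"]
  Go to Record/Request/Page [Next; Exit after last]
End Loop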

nvarchar(max) columns interfere with each other in SELECT statement, only through ODBC

In a recent update of the run-time engine and SQL Server version (2008 R2 to 2012), I began experiencing an issue where largish queries through ODBC come back with blank fields where there should not be any. The same query run directly in SQL Server worked fine.
I started to delete fields from the query and found that it was the five TEXT datatype fields in the query that were giving me trouble. The first TEXT field listed in the SELECT statement would show up fine, and subsequent TEXT fields would not show up. If I deleted all but two fields from the query, the remaining two would come through.
Since the problem is clearly occurring within the ODBC layer, my first thought was to switch my Windows 8 ODBC driver from "SQL Server Native Client 11.0" to "SQL Server". This did not help.
Since TEXT is on the way out of support, I thought it might be the culprit. I converted all the TEXT fields to NVARCHAR(MAX) (I am also looking for Unicode support). This did not fix anything. Next I tried converting the out-of-page data types to an in-page format, NVARCHAR(4000). This fixed the problem, but it does not work across the board, because I have some fields that are longer than 4000 characters.
My questions:
What is the limitation of ODBC related to out-of-page data that is causing this issue? My understanding is that nvarchar(max) data is only stored out-of-page if it is sufficiently long (am I wrong about this?). In the example table that I'm working with, none of the text data fields is longer than 255 characters, yet the problem still occurs.
I could probably get by if I could figure out which fields need the extra length and leave only those fields in an out-of-page representation. However, the size of the application makes figuring out the exact (and possible) use of every field time-prohibitive. I hope I don't have to go this route.
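One way to separate the driver from the application run-time is to run the same SELECT through plain ODBC from an independent client. A minimal pyodbc sketch (the DSN, table, and column names are placeholders):

import pyodbc

conn = pyodbc.connect("DSN=MySqlServer;Trusted_Connection=yes")  # placeholder DSN
cur = conn.cursor()
cur.execute("SELECT id, long_col1, long_col2, long_col3 FROM dbo.SomeTable")
for row in cur.fetchmany(5):
    # If the later nvarchar(max) columns come back None/empty here as well,
    # the driver rather than the run-time engine is mishandling them.
    print(row[0], [None if v is None else len(v) for v in row[1:]])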

What applications do you use for data entry and retrieval via ODBC?

What apps or tools do you use for data entry into your database? I'm trying to improve our existing (cumbersome) system, which uses a PHP web-based interface for entering data one ... item ... at ... a ... time.
My current solution to this is to use a spreadsheet. It works well with text and numbers that are human-readable, but not with the foreign keys that are used to join to other tables' rows.
Imagine that I want a row of data to include what city someone lives in. The column holding this is id_city, which is keyed to the "city" table which has two columns: id (serial) and name (text).
I envision extending the spreadsheet's capabilities to include dropdown menus for every row of the id_city column, letting the user pick a city by name (displaying the city names as text) while actually storing the chosen city's id. This way, the spreadsheet would:
(1) show a great deal of data on each screen and
(2) could be exported as a csv file and thrown to our existing scripts that manually insert rows into the database.
I have been playing around with MS Excel and Access, as well as OpenOffice's suite, but have not found something that gives me the functionality I mention above.
Other items on my wish-list:
(1) dynamically fetch the name of cities that can be selected by the user.
(2) allow the user to push the data directly into the backend (not via external files/scripts).
(3) If any of the columns of the rows of data gets changed in the backend, the user could refresh the data on the screen to reflect any recent changes.
Do you know how I could improve the process of data entry? What tools do you use? I use PostgreSQL for the backend and have access to MS Office, OpenOffice, as well as web based solutions. I would love a solution that is flexible, powerful, and doesn't require much time to develop or deploy (I know, dream on...)
I know that pgAdmin3 has similar functionality, but from what I have seen, it is more of an administrative tool rather than something for users to use.
As j_random_hacker noted, I've used MS Access for years (since Access 97) to connect to an ODBC Data Source.
You can do this by linking to external tables (in Access 2010):
New -> Blank Database
External Data -> ODBC Database -> Link to Data Source
Machine Data Source -> New -> System Data Source -> Select Driver (Oracle, or whatever) -> Finish
Enter a new name for your DSN, then all of the connection parameters, and click OK.
Select the newly created DSN and hit OK.
You can do so much once Access sees your external table as a linked table, including sorting, filtering, etc. There's one caveat: as far as I can tell, ALL operations happen on the client side unless you're using a pass-through query. That's fine if you're looking at a table with 3000 records. With 2,000,000 records, that hurts. To be clear, all data in the table comes down to the workstation, for all tables being joined, and the join happens client-side, NOT server-side.
There are usually standalone tools for basic database management - e.g., for Oracle and MySQL a free tool called SQL Developer suffices for basic database data entry.
For more complex types (especially involving CLOBs) I can usually knock an application together in Java+SWT in a day if we already have the model and DAOs available on the Java side. Yeah, you have to put some effort in, but if it will be used regularly in the future, then it is probably worth it.
In your case (well, the case where you have bulk imports of data), knocking up some Perl that reads from the CSV and does the city id lookup would be trivial to implement; a sketch of the same idea is below. Maybe a waste for a one-off thing? Depends on the amount of data to import.
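Here's that CSV-with-lookup idea sketched in Python rather than Perl (the person table, people.csv file, and connection details are made up for illustration; the city(id, name) table is the one described in the question):

import csv
import psycopg2

conn = psycopg2.connect(dbname="mydb", user="me")  # placeholder connection details
cur = conn.cursor()

# Build a name -> id lookup from the city table described in the question.
cur.execute("SELECT name, id FROM city")
city_ids = dict(cur.fetchall())

with open("people.csv", newline="") as f:  # hypothetical input file
    for row in csv.DictReader(f):
        cur.execute(
            "INSERT INTO person (name, id_city) VALUES (%s, %s)",
            (row["name"], city_ids[row["city"]]),
        )
conn.commit()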
I would be surprised if MS Access can't do what you're looking for -- this is basically the exact use case for it. Namely, quickly throwing together a nice UI for a simple CRUD DB application that a spreadsheet doesn't quite stretch to.
This is an answer, technically, but not a recommendation:
I've used Excel and SSIS for importing simple data entry files into MS SQL, but it's not adequate - there's very little ability to control the data, and SSIS is so very touchy, especially when working with Excel.
MS Access does not work well with some non-Microsoft databases. There is an open-source equivalent called Apache OpenOffice Base that you may want to try.