SQL Developer returns query results on one computer but not on another - oracle-sqldeveloper

I can run a query on views in SQL Developer 3.1.07 and it returns the results I expect. My co-worker, who is in Mexico and using the same database user, can connect to the same database, sees the same views, runs the same query, and gets no results, even from a simple "select * from VIEWNAME" query. The column headers display, but no data. If he selects a view in the Connections window and opens the Data tab, no data displays. This user does not have access to any tables on this specific database.
I'm not certain he is running the same version of SQL Developer, but it's not far off. I have checked every setting in SQL Developer that I thought could be the issue, but I see no significant difference between his settings and mine.
Connecting to another database allows him to access data in both tables and views.
Any thoughts on what we're missing?

I know I'm a few years late, but check whether the underlying view filters on something that is based on localisation! I just had this issue and it turned out to be a statement like this that was causing it:
SELECT *
FROM sometable
WHERE language = userenv('LANG')
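A quick way to confirm this is to compare what each session reports for its language settings. A minimal check, run from both machines, might be:
-- Compare the output from both sessions; a mismatch would explain why the
-- view filters out every row for one of you.
SELECT userenv('LANG') FROM dual;
SELECT * FROM nls_session_parameters WHERE parameter IN ('NLS_LANGUAGE', 'NLS_TERRITORY');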

Copy the JDBC folder from your Oracle home over to your co-worker's machine. We had the same issue and replacing the JDBC folder fixed it.

I faced the same issue, which was resolved when I checked the 'skip NLS settings' box. My query was returning zero results earlier, but when I ran the same query again I could see the table rows.
Since your co-worker is in a different country, the NLS settings (related to language) are most probably the culprit here.

I was facing the same issue. It turned out that an update to the database from my SQL Developer session had not been committed to the main database; that's why I was getting results in my SQL Developer for that query, but from AWS it was returning empty results. When I chatted with the DBA, he found stale data. After I committed the data from my SQL Developer session, the database was actually updated.
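For reference, a minimal sketch of what was going on (the table and column names are made up): uncommitted DML is visible only to the session that ran it, so every other client keeps seeing the old rows until the change is committed.
-- Visible only inside this SQL Developer session until committed
UPDATE orders SET status = 'SHIPPED' WHERE order_id = 42;
-- Other sessions (e.g. the application on AWS) only see the change after this
COMMIT;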

Related

Tableau Queries with JOINS and check for NULL are failing in ClickHouse

I am running Tableau connected to ClickHouse via the ODBC driver. At first almost any report request was failing. I configured this TDC file https://github.com/yandex/clickhouse-odbc/blob/clickhouse-tbc/clickhouse.tdc and it actually started to work; however, now some of the query requests with JOINs that contain a NULL check in the ON clause are failing because they use IS NULL instead of isNull(id):
JOIN users ON ((users.user_id = t0.user_id) OR ((users.user_id IS NULL) AND (t0.user_id IS NULL)))
This is the correct way that works:
JOIN users ON ((users.user_id = t0.user_id) OR ((isNull(users.user_id) = 1) AND (isNull(t0.user_id) = 1)))
How can I make the Tableau driver send the right request?
Here are a few suggestions:
This post on the Tableau Community looks like it has similar symptoms to what you describe. The suggested resolution is to wrap all fields as IfNull([Dimension], ""), thereby apparently reducing the need to have ClickHouse do the NULL check.
The TDC file from GitHub looks pretty complete, but they might not have taken joins into consideration. The GitHub commit states that the TDC is "untested." I would message the creator of that TDC and see if they've done any work around joins and whether they have any suggestions.
Here is a list of possible ODBC Customizations that can be added to or removed from your TDC file. The combination of which may take some experimentation, but they're well worth researching as a possible solution.
Create an extract before performing complex analysis. If you're able to connect initially, then it should be possible to bring all the data from ClickHouse into an extract.
Custom SQL would probably alleviate any join syntax issue because the query and any joins are purely written by you. After making the initial connection to ClickHouse, instead of choosing a table, select "Custom ODBC" and write a query that will return the joined tables of your choosing.
Finally, the Tableau Ideas Forum is a place to ask for and/or vote on upcoming connectors. I can see there is already an idea in place for ClickHouse. Feel free to vote it up.
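If you go the Custom SQL route mentioned above, a minimal sketch of a ClickHouse-friendly join might look like the following (table and column names mirror the query in the question, so treat them as assumptions, and this relies on your ClickHouse version accepting the OR form the question says works):
SELECT *
FROM t0
JOIN users
ON (users.user_id = t0.user_id)
OR (isNull(users.user_id) = 1 AND isNull(t0.user_id) = 1)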
If you can make sure not to have any NULL values in the data, you can also use this proxy that I wrote for this exact problem.
https://github.com/kfzteile24/clickhouse-proxy
It kinda worked for most cases, but it's not bullet-proof.

Crystal Reports ODBC connection: the database table <tablename> cannot be found

When running (or verifying the database) for a report in Crystal Reports 10, I am getting the message:
"The database table "SomeTable" cannot be found. Proceed to remove this table from the report?"
for multiple tables.
The report used to work fine. The report is getting data from multiple sources, and the missing tables are those that are coming from an ODBC connection to a SQL Server DB. I think the issue may be that when the report was created, the ODBC was pointing at a different instance of the database (same structures, just different location.)
I've checked and the report user has all the required permissions on the new database.
In Crystal, if you ignore the messages the report seems to run fine. However, when the report is deployed to be run from within the Crystal Report Viewer on a website, it throws a File I/O error.
This very handy blog post provides the solution: https://wisdomofsolomon.wordpress.com/2011/06/18/crystal-reports-tables-not-found-during-verify-database/
By running Show SQL Query you can see that the generated query is running SQL like
select * from databasename.dbo.SomeTable
It's the databasename part that seems to be causing the problem (although, as far as I could tell, in my case the DB name isn't any different between the old DB connection and the new one). Amending the table queries to remove the databasename from the SQL solved the problem for me.
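In other words, after the change the SQL shown by Show SQL Query should look like this instead:
select * from dbo.SomeTable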
You can do this as follows:
Go to Database / Set Datasource Location in the menus.
Drill down in the report tree to the tables that are causing a problem.
Under Properties, click Overridden Qualified Table Name:
In the text box, type the name of the table without the database name (e.g. dbo.SomeTable).
Do this for all the tables causing a problem.
(As a comment on that blog post points out, you could also create a new connection and replace the tables with the equivalents from that new datasource, but that leaves you with the fully qualified table name from the new connection - so you might get the same problem again in future.)
In Crystal, under File > Options, on the Database tab, the Data Explorer must have Tables checked. This is not an easy feature to know about.

How to Recover PostgreSQL 8.0 Database

On my PostgreSQL 8.0 database, I started receiving an "ERROR: could not open relation 1663/17269/16691: No such file or directory" message, and now my data is inaccessible.
Any ideas on how to recover at least some of the data? Professional support is an option.
Regards.
RP
If you want your data back in a hurry and it's worth something to you, then the professional support option should be simple enough.
Some things to check, now that you've got a full backup of all your database (that's base, pg_clog, pg_xlog and all the other folders at that level).
Does that file actually exist? It might be a permissions problem rather than the file actually going missing.
Check your anti-virus/security packages - have they mistakenly quarantined the file? If you can exclude PostgreSQL's database directories from scans/active scans that's worthwhile too.
Make a note of everything you can remember about when this happened and what happened just before. This will help with troubleshooting for you or a consultant.
Likewise, check the logs - this error will be logged; find the first occurrence and see if there's anything odd before it.
Double-check you really do have all your existing files backed up, and restart PostgreSQL.
Try connecting as user postgres to database postgres or database template1. If that works then the file is one of your database files rather than the global list of users or some such.
Try creating an empty file with the right name (and permissions - check the other files). If you are really lucky it's just an index. Otherwise it could be a data table you can live without. Then you can dump other tables individually.
OK - if you're here then you can connect to your DB. Those numbers in the file-path are PostgreSQL's OIDs identifying system objects. You can try a couple of useful queries here. These two queries should give you the IDs of the databases and then the object with the missing file. This is useful information for your professional too.
SELECT oid, datname, dattablespace FROM pg_database;
SELECT * FROM pg_class WHERE relfilenode = 16691;
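The relkind column returned by that second query tells you what the missing file backed ('r' = table, 'i' = index, 't' = TOAST table). If it turns out to be an index, you may be able to simply rebuild it once the placeholder file exists; the index name below is hypothetical:
-- Only worth trying if pg_class shows relkind = 'i' for relfilenode 16691
REINDEX INDEX some_index_name;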
Remember to make sure you have the filesystem backup before tinkering.

How to discover what a Postgresql table's usage is?

Our team is working on a Postgresql database with lots of tables and views, without any referential constraints. The project is undocumented and there appears to be a great number of unused/temporary/duplicate tables/views dirtying the schema.
We need to discover what database objects have real value and are actually used and accessed.
My initial thought was to query the catalog / 'data dictionary'.
Is it possible to query the PostgreSQL catalog to find an object's last query time?
Any thoughts, alternative approaches, and/or tool ideas?
Check the Statistics Collector
Not sure about the last query time but you can adjust your postgresql.conf to log all SQL:
log_min_duration_statement = 0
That will at least give you an idea of current activity.
Reset the statistics with pg_stat_reset(), and then check the pg catalog views like pg_stat_user_tables and such to see where activity shows up.
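For example, after a reset and letting the application run for a representative period, something like this shows which tables have seen no scans or writes at all (a sketch against the standard statistics views):
SELECT pg_stat_reset();
-- ...wait while the application runs, then:
SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
ORDER BY seq_scan + coalesce(idx_scan, 0), n_tup_ins + n_tup_upd + n_tup_del;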

Database last updated?

I'm working with SQL 2000 and I need to determine which of these databases are actually being used.
Is there a SQL script I can use to tell me the last time a database was updated? Read? Etc.?
I Googled it, but came up empty.
Edit: the following targets the issue of finding, after the fact, the last access date. With regard to figuring out who is using which databases, this can be definitively monitored with the right filters in SQL Profiler. Beware, however, that profiler traces can get quite big (and hence slow/hard to analyze) when the filters are not adequate.
Changes to the database schema, i.e. addition of table, columns, triggers and other such objects typically leaves "dated" tracks in the system tables/views (can provide more detail about that if need be).
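As a sketch of the kind of "dated" tracks meant here: on SQL 2000, sysobjects records each object's creation date, which at least tells you when tables were last added (though not when they were last modified or read):
-- User tables ordered by creation date (SQL 2000 system table)
SELECT name, crdate
FROM sysobjects
WHERE xtype = 'U'
ORDER BY crdate DESC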
However, unless the data itself includes timestamps of some sort, there are typically very few sure-fire ways of knowing when data was changed, unless the recovery model involves keeping all such changes in the log. In that case you need some tools to "decompile" the log data...
With regards to detecting "read" activity... A tough one. There may be some computer-forensic like tricks, but again, no easy solution I'm afraid (beyond the ability to see in server activity the very last query for all still active connections; obviously a very transient thing ;-) )
I typically run the profiler if I suspect the database is actually used. If there is no activity, then simply set it to read-only or offline.
You can use a transaction log reader to check when data in a database was last modified.
With SQL 2000, I do not know of a way to know when the data was read.
What you can do is put a trigger on logins to the database and track when a login is successful, along with associated variables, to find out who / what application is using the DB.
If your database is fully logged, create a new transaction log backup and check its size. The log backup will have a small, fixed length when no changes were made to the database since the previous transaction log backup, and it will be larger if there were changes.
This is not a very exact method, but it can be easily checked, and might work for you.
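A minimal T-SQL sketch of that check (the database name and backup path are hypothetical):
-- A log backup that stays tiny suggests no changes since the previous log backup
BACKUP LOG SomeDb TO DISK = 'C:\Backups\SomeDb_log.trn'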