I have a dashboard in Tableau which pulls data from about 10 tables in a SQL database.
These tables are refreshed at various times of day. There are occasions when one of them is not available (or has been deleted and is awaiting a rebuild).
However, when I open my Tableau dashboard on the server, it won't let me see any of it. Not seeing the data from the missing table is fine, but the majority of the data, which does not come from that table, is unavailable too.
I get this error
An unexpected error occurred. If you continue to receive this error please contact your Tableau Server Administrator.
TableauException: [Microsoft][SQL Server Native Client 11.0][SQL Server]Invalid object name 'dbo.survey_order_info_fy16_TV_L'. The table "[dbo].[survey_order_info_fy16_TV_L]" does not exist. Unable to connect to the server "dbedwro.vistaprint.net". Check that the server is running and that you have access privileges to the requested database.
"survey_order_info_fy16_TV_L" being the missing table but not one I'm bothered about right now.
Is there an option that might help me see all the other data?
I am not sure if it's possible to avoid this behavior, but there is a workaround: create extracts of these tables and store them on Tableau Server. You can then use these extracts instead of the tables on the DB and refresh them either on a schedule, if you know when the tables become available again, or from the SQL server (e.g. with SSIS, by triggering the refresh once the data is available again).
Advantages of that would be:
you can refresh them independently and always have the latest data
extracts perform better than a live SQL connection
you don't jam your SQL server with connections (in case you have a lot of users accessing it)
you can filter and select fields if you don't want your users to get access to the full dataset
Disadvantages:
you will have to create one extract per table, and replace all data sources in workbooks you already use
It's a matter of creating a workbook, connecting to the source (adding filters or hiding fields) and publishing it to the server. Details of that can be found here:
http://onlinehelp.tableau.com/current/pro/online/mac/en-us/publish_datasources.html
We recently migrated a large DB2 database to a new server. It got trimmed a lot in the migration; for instance, 10 years of data were chopped down to 3. But now I find that I need certain data from the old server until after tax season.
How can I run a UNION query in DBeaver that pulls data from two different connections? What's the proper syntax for the table identifiers in the FROM and JOIN clauses?
I use DBeaver for my regular SQL work, and I cannot figure out how to span a UNION query across two different connections. However, I also use Microsoft Access, and I easily did it there with two pass-through queries feeding a native Microsoft Access union query.
But how do I do it in DBeaver? I can't see how to use two connections at the same time.
For instance, I have two connections, ASP7 and OLD, and I need something like this...
SELECT *
FROM ASP7.F_CERTOB.LDHIST
UNION
SELECT *
FROM OLD.VIPDTAB.LDHIST
...but I get the following error, to which I say "No kidding! That's what I want!", lol... =-)
SQL Error [56023]: [SQL0512] Statement references objects in multiple databases.
How can this be done..?
This is not a feature of DBeaver. DBeaver can only access the data that the database gives it, and that is restricted to a single connection at a time (except for import/export operations). This feature is being considered for development, so keep an eye out for this answer to become outdated sometime in 2019.
You can export data from your OLD database and import it into ASP7 using DBeaver (although vendor tools are typically more efficient for this). Then you can do your union as suggested.
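For instance, once the OLD data has been copied into ASP7 under a hypothetical schema name such as VIPDTAB_OLD, the union runs over a single connection:
-- VIPDTAB_OLD is a made-up schema holding the imported copy of the OLD table
SELECT * FROM F_CERTOB.LDHIST
UNION
SELECT * FROM VIPDTAB_OLD.LDHIST;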
Many RDBMSs offer a way to logically access foreign databases as if they were local, in which case DBeaver would be able to access the data from the OLD database (as far as DBeaver is concerned in this situation, all the data is coming from a single connection). In Postgres, for example, one can use a foreign data wrapper to access foreign data.
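As a rough sketch of the Postgres route, using the postgres_fdw extension (the host, schema, and credential names below are all hypothetical):
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
-- Register the remote database as a foreign server
CREATE SERVER old_server FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'old-host.example.com', port '5432', dbname 'old');
-- Map the local user to credentials on the remote side
CREATE USER MAPPING FOR CURRENT_USER SERVER old_server
    OPTIONS (user 'report_user', password 'secret');
-- Expose the remote schema's tables through a local schema
CREATE SCHEMA old_vipdtab;
IMPORT FOREIGN SCHEMA vipdtab FROM SERVER old_server INTO old_vipdtab;
-- The union now runs over a single connection
SELECT * FROM f_certob.ldhist
UNION
SELECT * FROM old_vipdtab.ldhist;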
I'm not familiar with DB2, but a quick Google search suggests that you can set up foreign connections within DB2 using nicknames or three-part names.
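I haven't tried it, but on DB2 LUW with federation enabled the nickname route might look roughly like this (the server, schema, and credential names are all made up):
-- Requires the instance to run with FEDERATED=YES
CREATE WRAPPER DRDA;
CREATE SERVER OLDSRV TYPE DB2/400 VERSION '7.2' WRAPPER DRDA
    AUTHORIZATION "olduser" PASSWORD "oldpass" OPTIONS (DBNAME 'OLD');
CREATE USER MAPPING FOR USER SERVER OLDSRV
    OPTIONS (REMOTE_AUTHID 'olduser', REMOTE_PASSWORD 'oldpass');
-- A nickname makes the remote table queryable as if it were local
CREATE NICKNAME OLD_LDHIST FOR OLDSRV.VIPDTAB.LDHIST;
SELECT * FROM F_CERTOB.LDHIST
UNION
SELECT * FROM OLD_LDHIST;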
If you check this GitHub issue:
https://github.com/dbeaver/dbeaver/issues/3605
you'll see that the way to solve this is to create a task and execute it against different connections:
https://github.com/dbeaver/dbeaver/issues/3605#issuecomment-590405154
We are trying to connect Talend to our Oracle 12c database using CDC. The tOracleCDC component uses Oracle XStream to do the actual change data capture work. The issue is that when creating the CDC endpoint in Oracle one creates an "Outbound Server" which listens for changes on a number of tables, or even a number of whole schemas.
In Talend, when configuring the tOracleCDC component, one of the required fields is "Table Using CDC", which in the generated Java code is used to filter the incoming change records with something like "TableName".equalsIgnoreCase(...)
This means that we can only get changes for a single table for a given XStream connection (and each connection will require a unique outbound server object in the database).
We must be missing something, how can we pull changes for multiple tables in Talend?
Thanks!
The solution is to use an empty string as the table name in the Table Using CDC field. This will cause the templating engine to not emit the table name check that was causing this problem.
I could not find this documented anywhere, so it might be unsupported, but examining the templates shows that it is the intended behavior.
When running (or verifying the database) for a report in Crystal Reports 10, I am getting the message:
"The database table "SomeTable" cannot be found. Proceed to remove this table from the report?"
for multiple tables.
The report used to work fine. The report is getting data from multiple sources, and the missing tables are those that come from an ODBC connection to a SQL Server DB. I think the issue may be that when the report was created, the ODBC connection was pointing at a different instance of the database (same structure, just a different location).
I've checked and the report user has all the required permissions on the new database.
In Crystal, if you ignore the messages, the report seems to run fine. However, when the report is deployed and run from within the Crystal Report Viewer on a website, it throws a File I/O error.
This very handy blog post provides the solution: https://wisdomofsolomon.wordpress.com/2011/06/18/crystal-reports-tables-not-found-during-verify-database/
By running Show SQL Query you can see that the generated query runs SQL like:
select * from databasename.dbo.SomeTable
It's the databasename part that seems to be causing the problem (although, as far as I could tell, the DB name isn't any different between the old DB connection and the new one in my case). Amending the table queries to remove the databasename from the SQL solved the problem for me.
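In other words, the query shown above needs to end up as:
select * from dbo.SomeTable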
You can do this as follows:
Go to Database / Set Datasource Location in the menus.
Drill down in the report tree to the tables that are causing a problem.
Under Properties, click Overridden Qualified Table Name.
In the text box, type the name of the table without the database name (e.g. dbo.SomeTable).
Do this for all the tables causing a problem.
(As a comment on that blog post points out, you could also create a new connection and replace the tables with the equivalents from that new datasource, but that leaves you with the fully qualified table name from the new connection - so you might get the same problem again in future.)
In Crystal, under File > Options, on the Database tab, the Data Explorer must have Tables checked. This is not an easy feature to know about.
Is it possible to have a MS access backend database (Microsoft JET or Access Database Engine) set up so that whenever entries are inserted/updated those changes are replicated* to a PostgreSQL database?
Two-way synchronization would be nice, but one way would be acceptable.
I know it's popular to link the two and use one as a frontend, but it's essential that both be backend.
Any suggestions?
* i.e. reflected, synchronized, mirrored
Can you use Microsoft SQL Server Express Edition, or do you have to use the Microsoft Access Database Engine? You may have more options using SQL Server Express, like more complete triggers and logging.
Either way, you're going to need a way to accumulate a log of changed rows from the source database engine, and a program to sync them to PostgreSQL by reading the log and converting it into suitable PostgreSQL INSERT, UPDATE and DELETE statements.
You could do this by having audit triggers in MADB/Express insert a row into an audit shadow table for every "real" table whenever it changed, including inserting special "row deleted" audit entries. Then your sync program could connect to both MADB/Express, read the audit tables, apply the changes to PostgreSQL, and empty the audit tables.
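As a minimal sketch of the audit-trigger idea, assuming the SQL Server Express route and a hypothetical Customers table (the table, column, and trigger names are all made up):
-- Hypothetical shadow table capturing every change to dbo.Customers
CREATE TABLE dbo.Customers_Audit (
    AuditId    INT IDENTITY PRIMARY KEY,
    Operation  CHAR(1)       NOT NULL,   -- 'I', 'U', or 'D'
    CustomerId INT           NOT NULL,
    Name       NVARCHAR(100) NULL,
    ChangedAt  DATETIME      NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER dbo.trg_Customers_Audit
ON dbo.Customers
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Rows only in "inserted" are inserts; rows in both pseudo-tables are updates
    INSERT INTO dbo.Customers_Audit (Operation, CustomerId, Name)
    SELECT CASE WHEN d.CustomerId IS NULL THEN 'I' ELSE 'U' END,
           i.CustomerId, i.Name
    FROM inserted i
    LEFT JOIN deleted d ON d.CustomerId = i.CustomerId;
    -- Rows only in "deleted" become the special "row deleted" audit entries
    INSERT INTO dbo.Customers_Audit (Operation, CustomerId, Name)
    SELECT 'D', d.CustomerId, d.Name
    FROM deleted d
    LEFT JOIN inserted i ON i.CustomerId = d.CustomerId
    WHERE i.CustomerId IS NULL;
END;
The sync program would then read Customers_Audit, replay each row as a PostgreSQL INSERT, UPDATE, or DELETE, and clear the rows it has applied.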
I'll be surprised if you find anything that does this out of the box. It's one area where Microsoft SQL Server has a big advantage, because of all the deep Access and MADB engine integration supporting its synchronisation and integration features.
There are some ETL ("Extract, Transform, Load") tools that might be helpful, like Pentaho and Talend. I don't know if you can achieve the desired degree of automation with them though.
I am using two tables: one is from an Access database and the other is from an Oracle database. I am getting an error saying 'SQL ODBC error'. What should I do?
Here is what I would do.
Make a test report.
Link it to one of the 2 databases.
Drag some fields from your source into the report.
Can you preview this test report? If not, that datasource may be the problem.
Do the same thing with the other datasource.
If both reports work fine, make a new test report that joins the 2 datasources. If this doesn't work, then your 2 data sources may not be matching up in some way.
"SQL odbc error" always refers to some kind of connection error via an ODBC connection. (These connections can be managed via the Control Panel -> Administrative Tools -> Data Sources)
Follow PowerUser's steps above, but test the Access connection first, since it is the one most likely to be connected via ODBC. Oracle may be ODBC, or it may be a direct TNS connection, so I would suspect the Access DB first.
Once you identify which of your connections is the problem (it may be both), you need to figure out what it is. The likely culprits are:
The name or IP of your ODBC connection is wrong
The user name in the connection is wrong
The password in the connection is wrong
They are all correct, but the user doesn't have permissions to view the tables the report was built with.