How to make a Cassandra connection in Tableau given the CQL used to create a table? - tableau-api

I am new to Tableau and went through the site before posting this question, but didn't find an answer matching mine.
I have successfully established a connection to Cassandra using the "DataStax Cassandra ODBC driver 64-bit Windows". Everything is fine; I filled in all the details such as the keyspace name and table name as per the documentation available on the DataStax site.
But when I drag the available table onto the canvas it keeps loading for minutes. What the database guy told me about the data is that there are several million rows per day, we have 6 months of data, and the data gets updated every 10 minutes; it's for a reputed wind energy company.
My client has given me the CQL used for creating the table:
create table abc_data_test.machine_data
(machine_id text, tag text, timestamp timestamp, value double,
PRIMARY KEY((machine_id, tag), timestamp))
WITH CLUSTERING ORDER BY(timestamp DESC)
AND compression = { 'sstable_compression' : 'LZ4Compressor' };
Where should I put this code?
I tried to insert it on the connection page and it gives an error. I am getting a New Custom SQL error (I placed the code in "New Custom SQL").
The timer is still running; it shows:
processing request: connecting to datasource, Elapsed time 87:09
The error from New Custom SQL is
An error occurred while communicating with the datasource. [DataStax][CassandraODBC] (10) Error while executing a query in Cassandra:33562624: line 1.11 no viable alternative at input '1' (SELECT [TOP]1...)
I'm using Windows 10 64-bit, DataStax ODBC driver 64-bit version 2.4.1, and DSE 4.8 and later.

You cannot pass DDL into the Custom SQL box. If the Cassandra connection supports the Initial SQL option, you could pass it there; your Custom SQL would then be some sort of SELECT statement. Otherwise, create the table in Cassandra first and then connect to that table from Tableau.
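For example, a minimal sketch of what the Custom SQL could look like (the machine_id and tag values here are made up, and this assumes the DataStax ODBC driver passes the statement through as CQL): a plain SELECT against the existing table, restricted by the partition key columns so Cassandra does not have to scan the whole table.
SELECT machine_id, tag, timestamp, value
FROM abc_data_test.machine_data
WHERE machine_id = 'WT-001' AND tag = 'wind_speed'
LIMIT 10000
Pulling six months of multi-million-row-per-day data into a live connection is what makes the load spin for minutes, so filtering like this (or using an extract) is usually necessary regardless of where the CQL goes.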

Related

Unable to create db2 table with DATE data type

I am using DB2 9.7 (LUW) on a Windows server, on which multiple databases live in a single DB2 instance. I just found that in one of these databases I am unable to add a column with the DATE data type, either during table creation or when altering the table. The column being added gets changed to TIMESTAMP instead.
Any help on this will be welcome.
Check your Oracle compatibility setting.
Depending on that setting, a DATE is interpreted as TIMESTAMP(0), as in your example.
Because the setting only takes effect for databases created after the DB2_COMPATIBILITY_VECTOR registry variable was set, databases in the same instance can show different behaviour.
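As a rough sketch of how to check this (db2set and the instance restart are standard; the exact bit value is from memory, so verify it for your version), the DATE-as-TIMESTAMP(0) behaviour is tied to the 0x40 bit of the vector, and DB2_COMPATIBILITY_VECTOR=ORA turns on all Oracle-compatibility bits:
db2set DB2_COMPATIBILITY_VECTOR
db2set DB2_COMPATIBILITY_VECTOR=
db2stop
db2start
The first line shows the current value (empty output means it is not set), the second clears it, and the restart makes the change effective for the instance. Databases created while the old value was in effect keep their behaviour.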

How to populate a table via Pentaho Data Integration's Table Output step?

I am performing an ETL job via Pentaho 7.1.
The job is to populate the table 'PRO_T_TICKETS' in PostgreSQL 9.2 via Pentaho jobs and transformations.
I have mapped the table fields with respect to the stream fields.
Mapped Fields
My Table PRO_T_TICKETS contains the Schema (Column Names) in UPPERCASE.
Is this the reason I can't populate the table PRO_T_TICKETS with my ETL Job?
I duplicated the step TABLE_OUTPUT to PRO_T_TICKETS and changed the Target table field to 'PRO_T_TICKETS2'. Pentaho created a new table with lowercase schema and populated the data in it.
But I want this data to be uploaded in the table PRO_T_TICKETS only and with the UPPERCASE schema if possible.
I am attaching the whole job here along with the error thrown by Pentaho (see the "Pentaho Error" screenshot). I have also tried my query with double quotes added to the column names, as you can see in the error, but it didn't help.
What do you think I should do?
When you create (or modify) the connection, select Advanced on the left panel and tick Force to upper case, Force to lower case or, even better, Preserve case of reserved words.
To know which option to choose, copy the 4th line of your error log, the line starting with INSERT INTO "public"."PRO_T_TICKETS"("OID"..., into your SQL development tool and change the connection's advanced parameters until it works.
Also, at debug time, don't use batch updates, don't use lazy conversion on previous steps, and try with one (1) field rather than all (25).
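To see why the case options matter, here is a minimal PostgreSQL illustration (the table name is made up): unquoted identifiers are folded to lower case, while double-quoted identifiers keep their exact case, so an INSERT generated with quoted UPPERCASE names only matches a table that was created with exactly those names.
CREATE TABLE DEMO_TICKETS (TICKET_ID INT);              -- stored as demo_tickets(ticket_id)
INSERT INTO demo_tickets (ticket_id) VALUES (1);        -- works
INSERT INTO "DEMO_TICKETS" ("TICKET_ID") VALUES (2);    -- fails: relation "DEMO_TICKETS" does not exist
CREATE TABLE "DEMO_TICKETS" ("TICKET_ID" INT);          -- quoted: case preserved, a separate table
INSERT INTO "DEMO_TICKETS" ("TICKET_ID") VALUES (2);    -- now works, but only when quoted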
Just as a complement: it worked for me following the tips from AlainD, with some specific configurations that I'd like to share. I have a transformation streaming data from MySQL to PostgreSQL using a Table Input and a Table Output. In both DBs I have uppercase objects.
I did the following steps to work in the right way:
In the table input (MySQL) the objects are uppercase too, but I typed in lowercase and it worked and I didn't set any special option in the DB Connection.
In the table output (PostgreSQL) I typed everything in uppercase (schema, table name and columns) and I also set "specify the database fields" (clicking on "Get fields").
In the target DB Connection (PostgreSQL) I put the options (in "Advanced" section): "Quote all in database" and "Preserve case of reserved words".
PS: Ah, the last option is because I found one more problem with my fields: there was a column called "Admin" (yes guys, they created a camel-case column using a reserved word!) and for that reason I had to set "Preserve case of reserved words" and type it as "Admin" (without quotes and in camel case) in the Table Output.

How to check the status of long running DB2 query?

I am running a DB2 query that unions two very large tables. I started the query 10 hours ago and it still hasn't finished.
However, when I check the status of the process using top, it shows status 'S'. Does this mean that my query has stopped running? I couldn't find any error message.
How can I check what is happening to the query?
In DB2 for LUW 11.1 there is a text-based dsmtop utility that allows you to monitor the DB2 instance, down to individual executing statements, in real time. Its pre-11.1 equivalent is called db2top.
There is also a Web-based application, IBM Data Server Manager, which has a free edition with basic monitoring features.
Finally, you can query one of the supplied SQL monitor interfaces, for example, the SYSIBMADM.MON_CURRENT_SQL view:
SELECT session_auth_id,
application_handle,
elapsed_time_sec,
activity_state,
rows_read,
SUBSTR(stmt_text,1,200)
FROM sysibmadm.mon_current_sql
ORDER BY elapsed_time_sec DESC
FETCH FIRST 5 ROWS ONLY
You can try this command as well
db2 "SELECT agent_id,
Substr(appl_name, 1, 20) AS APPLNAME,
elapsed_time_min,
Substr(authid, 1, 10) AS AUTH_ID,
agent_id,
appl_status,
Substr(stmt_text, 1, 30) AS STATEMENT
FROM sysibmadm.long_running_sql
WHERE elapsed_time_min > 0
ORDER BY elapsed_time_min desc
FETCH first 5 ROWS only"

SELECT old data state with DB2

Is it possible to access an old data state in a DB2 database?
Oracle has the clause select ... as of timestamp to do it. Does DB2 have something like it?
Yes, you can select a set of rows that were / will be valid in a past / future time. This is called Time Travel in DB2, but you have to configure / create the table with the extra columns in order to activate this feature. This is new in DB2 10, but I think it is not available in all editions.
For more information, take a look at this: http://www.ibm.com/developerworks/data/library/techarticle/dm-1204db2temporaldata/
Remember, there are two concepts: system time and business time (also called application time); a table that uses both is called bitemporal.
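A minimal sketch of the system-time flavour (table and column names are illustrative; check the exact syntax against the article above for your DB2 10.x edition):
CREATE TABLE policy_info (
  policy_id  CHAR(4)       NOT NULL,
  coverage   INT           NOT NULL,
  sys_start  TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
  sys_end    TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
  ts_id      TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
  PERIOD SYSTEM_TIME (sys_start, sys_end)
);
CREATE TABLE hist_policy_info LIKE policy_info;
ALTER TABLE policy_info ADD VERSIONING USE HISTORY TABLE hist_policy_info;
-- rows as they existed at a point in the past, the DB2 counterpart of Oracle's AS OF TIMESTAMP:
SELECT * FROM policy_info
  FOR SYSTEM_TIME AS OF TIMESTAMP('2023-01-31-10.00.00');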

Why does SQL Server 2000 treat SELECT test.* and SELECT t.est.* the same?

I butter-fingered a query in SQL Server 2000 and added a period in the middle of the table name:
SELECT t.est.* FROM test
Instead of:
SELECT test.* FROM test
And the query still executed perfectly. Even SELECT t.e.st.* FROM test executes without issue.
I've tried the same query in SQL Server 2008 where the query fails (error: the column prefix does not match with a table name or alias used in the query). For reasons of pure curiosity I have been trying to figure out how SQL Server 2000 handles the table names in a way that would allow the butter-fingered query to run, but I haven't had much luck so far.
Any sql gurus know why SQL Server 2000 ran the query without issue?
Update: The query appears to work regardless of the interface used (e.g. Enterprise Manager, SSMS, OSQL) and as Jhonny pointed out below it bizarrely even works when you try:
SELECT TOP 1000 dbota.ble.* FROM dbo.table
Maybe table names are constructed from a naive concatenation of prefix and base name.
't' + 'est' == 'test'
And maybe in the later versions of SQL Server, the distinction was made more semantic/more rigorously.
{ owner = t, table = est } != { table = test }
SQL Server 2005 and up have a "proper" implementation of schemas. SQL 2000 and earlier did not. The details escape me (it's been years since I used SQL 2000); all I recall clearly is that you'd be nuts to create anything that wasn't owned by "dbo". It all ties into users and object ownership, but the 2000-and-earlier model was pretty confusticated. Hopefully someone will read up on BOL, do some experimentation, and post their results here.
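A quick sketch of the 2005+ model this refers to (schema and table names are made up): with real schemas, each dotted prefix has to resolve to an actual schema, table, or alias, so a mis-split name no longer matches.
CREATE SCHEMA sales;
GO
CREATE TABLE sales.test (id INT);
GO
SELECT t.* FROM sales.test AS t;         -- 't' resolves: it is a defined alias
-- SELECT t.est.* FROM sales.test AS t;  -- 2005+: fails, no table or alias named 't.est'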
S-SQL reference manual:
"[dot] Can be used to combine multiple names into a name of the form A.B to refer to a column in a table, or a table in a schema. Note that you calso just use a symbol with a dot in it."
So I think if you referenced tblTest as tblT.est it would work OK as long as there isn't a column called 'est' in tblTest.
If it can't find a column name referenced with the dot I imagine it checks the parent of the object.
I found a reference to it being a bug
Note: as a result of a comparison algorithm bug in SQL Server 2000, dot symbols themselves have no effect on matching, so "dbo.t" will successfully match with tables "dbot", "d.b.o.t", etc.
from Link
It's been fixed in SQL Server 2005. Same link > Changes introduced in SQL Server 2005
Dot-related comparison bug has been fixed.
Is it in the "Open table" view of SSMS or via Enterprise Manager or via an SSMS Query Window?
There is/was a SQL Server 2005 issue with SSMS so how you run the query affects how it behaves.
This is a bug.
It has to do with the internal representation of column names in SQL Server 2000 that leaked out.
You will also not be able to create a table/column pair whose name collides with another table+column concatenation: for example, if you have tables User and UserDetail, you won't be able to have the columns DetailAge and Age in these tables, respectively, because both concatenate to the same name.