Why does my Tableau tooltip get changed when I change my data source?

I am moving from a CSV to a PostgreSQL data source for my Tableau workbook. Both of them have the same field names and the exact same data types. However, when I change my data source, the tooltip gets random text that breaks the filter in the tooltip viz.
I tried replacing the CSV file with the same CSV file and the same thing happened, so I think this is a Tableau issue and not a database issue.
<Sheet name="Tooltip: Level 2 Site Scores" maxwidth="300" maxheight="300" filter="<Site>,<[federated.02mez2l0u2i0o018sk45f0skmrv7].[none:level_2:nk]>"> (This is what happens)
<Sheet name="Tooltip: Level 2 Site Scores" maxwidth="300" maxheight="300" filter="<Level 2>,<Site>"> (This is what I want)
The 'Level 2' field gets mangled for some reason.

If you open a Tableau Desktop file in a text editor, you'll see that it's XML. As an example, I have a file with the following line. Tableau assigns a unique ID to each calculated field I create; here it's the "Calculation_104990228446113793":
<column-instance column='[Calculation_104990228446113793]' derivation='None' name='[none:Calculation_104990228446113793:nk]' pivot='key' type='nominal' />
The same can be seen for data source references.
<datasource caption='my_data_source' inline='true' name='federated.0dbu8r50hqicaj1fm4f2b1r4o814' version='10.5'>
So when you swap a data source, these unique IDs change, which causes the error you're seeing. I'm not sure whether your issue is a bug; you could report it. But this is what is happening in your case.
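If you are comfortable hand-editing the workbook XML, one possible workaround (my assumption, not documented behavior; back up the .twb first) is to restore the friendly field names in the tooltip sheet's filter attribute:
<!-- hypothetical manual fix: put the original field name back in place of the federated ID reference -->
<Sheet name="Tooltip: Level 2 Site Scores" maxwidth="300" maxheight="300" filter="<Level 2>,<Site>">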

Related

Case-insensitive column names break the Data Preview mode in Data Flow of Data Factory

I have a csv file in my ADLS:
a,A,b
1,1,1
2,2,2
3,3,3
When I load this data into a delimited text Dataset in ADF with "first row as header", the data preview looks correct. The schema has the names a, A and b for the columns.
However, when I want to use this dataset in Mapping Data Flow, the Data Preview mode breaks: the second column name (A) is seen as a duplicate and no preview can be loaded.
All other functionality in Data Flow keeps working fine; it is only the Data Preview tab that gives an error. All subsequent transformation nodes also give this error in their Data Preview.
Moreover, if the data contains two exactly identical column names (e.g. a, a, b), the Dataset recognizes the columns as duplicates and appends a "1" and "2" to each name. It is only when the names are unequal case-sensitively but equal case-insensitively that the Dataset raises no error while Data Flow does.
Is this a known error? Is it possible to change a specific column name in the dataset before loading it into Data Flow? Or is there just something I'm missing?
I tested it and got the same error in the source Data Preview.
I asked Azure support for help and they are testing it now. Please wait for my update.
Update:
I sent Azure Support the test.csv file. They tested it and replied to me. If you insist on using "first row as header", Data Factory cannot solve the error; the solution is to re-edit the csv file. Even Azure SQL Database doesn't support creating a table with the same column name twice, because column names are case-insensitive.
For example, this code is not supported:
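The original snippet wasn't preserved here; a minimal sketch of the kind of statement that fails, with an illustrative table name:

-- rejected: "a" and "A" count as the same column name under case-insensitive comparison
CREATE TABLE test_csv (
    a INT,
    A INT,
    b INT
);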
Here's the full email message:
Hi Leon,
Good morning! Thanks for your information.
I have tested the sample file you shared with me and reproduced the issue. The data preview is alright by default when I connect to your sample file.
But I noticed during our troubleshooting session that a, A, b are column names, so you have checked "first row as header" in your source connection. Please confirm that this is right and that you want to use a, A, b as column headers. If so, it will be an error because there's no text transform for "A" in the schema.
Hope you can understand that the column name doesn't influence the data transform, and it's okay to change it to make sure no errors block the data flow.
There are two tips for you to remove the block: either change the column names in your source csv directly, or click the Import schema button in the Schema tab and choose a sample file to redefine the schema, which also allows you to change the column names.
Hope this helps.

FileMaker Pro 14 history tables

With a few solutions I've worked with, I've created temp tables or history tables. Normally I script it to take a handful of fields needed from a main table and copy them over to the other table by setting a variable, then setting a field to the variable for each field in the new table / new record.
I have a situation now where I'm building a history table that needs to copy the current record as is: a snapshot where all fields from that instance of the record are copied to the history table.
Rather than setting a variable and then setting a field to the variable, I'd like to get some input on a quicker way to get this done, where I can do it at the record level and not type it out field by field. Also, if fields are added to both tables, I have to make sure my script gets updated.
I'll keep hunting around... appreciate any help.
-Rich
-Rich
Do you have a sample of copying a record from one table to another, including all fields and setting some fields?
As I suggested in comments, use the Import Records[] script step, and select the same file as the source. If you choose Arrange by: [ matching names ] in the Import Field Mapping dialog, it will automatically map all source fields to their identically named counterparts.
Note that you must establish a found set in the source table before importing.
For "setting some fields", you can define auto-enter options and activate them during the import, or run Replace Field Contents[] immediately after the import.

Kentico Import Toolkit 8.1

I am currently using the Kentico Import Toolkit to create documents in the tree.
At this point, I have imported around 100 documents using the toolkit, and they are all located in the correct place in the tree. The issue/concern I have is that my spreadsheet has been updated since the import, so extra fields and data were added. How do I go about importing this extra data into the currently existing documents? Also, bear in mind that I don't want other fields or data to be affected by this, as some of the documents were updated with other content by the content editors using CMS Desk, which isn't available in the spreadsheet.
Import Toolkit is not the right tool for this task. Even if you select "Import new and overwrite existing pages", it'll overwrite most of your columns: it only preserves the system and ID columns of the existing documents; all other columns get overwritten.
Either you can write a piece of custom code, or you can try the following:
Open SSMS and navigate to the coupled table of your page type (something like CONTENT_MyDocType). This is where your custom columns are stored.
Right click -> Edit top 200 rows
Click "Show SQL Pane"
Adjust the columns, ORDER BY and WHERE clause to match your excel file, then re-run the query (see the query sketch after these steps)
Select the desired rows in your excel file and copy them to the clipboard
Paste the data into SSMS
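The SQL pane query might end up looking roughly like this (the column names and WHERE filter are illustrative for a hypothetical CONTENT_MyDocType table):

SELECT MyDocTypeID, Title, NewColumn1, NewColumn2  -- only the columns that exist in the spreadsheet
FROM CONTENT_MyDocType
WHERE MyDocTypeID BETWEEN 1 AND 100                -- restrict to the imported documents
ORDER BY MyDocTypeID                               -- match the row order of the spreadsheet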
rocky is right; the Import Toolkit is meant for importing complete objects, not partial/continuous updates.
You could map the fields that you know have not changed in the spreadsheet to a SQL query selecting the value from the target database.
To achieve this, just insert #<target> at the beginning of the SQL select statement you will be mapping the field to.
It will be rather laborious, though, and it also requires certain knowledge about the nature of the spreadsheet changes.
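So a field mapping might look roughly like this (the select statement itself is illustrative; only the #<target> prefix placement follows from the description above):

#<target> SELECT DocumentName FROM CONTENT_MyDocType WHERE MyDocTypeID = 42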

Using Select query, nothing merges onto Crystal report

I have a Crystal template that I am modifying in developer because we are changing the datasource from an Access file to our Oracle DB. I created a database connection that successfully connects to Oracle and added a Command with a select statement that pulls a field from a particular table:
select s.field from table s;
On the right-hand side, under Database Fields, I see my Command and can right-click and browse the data, which right now returns two values.
I also made a formula field using an Azalea barcode function that calls the values (I think; this is where stuff is going wrong, I guess).
The formula field is
BarcodeC39ASCII({Command.field})
So this should take the data and format it into the barcode, except that when I use print preview or print the report, no data is merged.
I've tested this by creating a new formula field with just {Command.field}, and still no data is merged. I imagine there is something really obvious that I am missing and would appreciate any input.
So unless I misunderstood your question, you are changing your datasource from an Access DB to an Oracle DB, correct? Assuming that the database structure remains the same, all you should need to do is go into Database -> Set Datasource Location and point the datasource from the Access DB to the Oracle DB, and your existing report should work as it did. You might have to map some fields, but that should be the extent of it. Is that what you are trying to do?
Chris

Crystal Report referring to xsd file

I am trying to create a crystal report, but I want it to use the connection string that I have specified in the Web.Config file. Also, I want to create an xsd file for my crystal report file to refer to. Can someone please direct me to a tutorial or forum to solve my problem?
Thank you all for helping me out.
I am now able to display data in the crystal report file using the XSD file.
The xsd file has fields with exactly the same names as those in the select query that brings the values from the DB.
The following is the part of the xsd file that contains the field names in the xs:element tags.
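(The original snippet was not preserved in this post; here is a minimal reconstruction based on the description, with illustrative field names:)

<xs:schema id="Summary_Report_on_portal" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
  <xs:element name="Summary_Report_on_portal" msdata:IsDataSet="true">
    <xs:complexType>
      <xs:choice minOccurs="0" maxOccurs="unbounded">
        <xs:element name="Summary_Updt">
          <xs:complexType>
            <xs:sequence>
              <!-- one xs:element per column returned by the select query; names must match exactly -->
              <xs:element name="ShipmentId" type="xs:int" minOccurs="0" />
              <xs:element name="ShipmentDate" type="xs:dateTime" minOccurs="0" />
              <xs:element name="Status" type="xs:string" minOccurs="0" />
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:choice>
    </xs:complexType>
  </xs:element>
</xs:schema>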
The datatypes of these fields also need to be specified. The XSD file name can be anything you like.
The names "Summary_Report_on_portal" and "Summary_Updt" do not correspond to any dataset name in the code-behind or to any query fields; they can be chosen as the user wishes.
This XSD file needs to be referenced in the crystal report using the Database Expert in the Field Explorer window. The "Summary_Updt" name is then visible under the new connections and can be added to the crystal report.
The fields mentioned in the xs:element tags are visible for the user to drag and drop into the crystal report.
When the user assigns the data source (the dataset) to the crystal report, the fields in the dataset are matched against the XSD field definitions.
CODE:
// push the first table of the filled dataset into the report; its columns are matched against the XSD fields
objBL.Rpt.SetDataSource(objBL.ds_shipment_info.Tables[0]);
Hope this is detailed enough. Let me know if anyone wants more info
You can also programmatically set the report data source if you need to; I can provide details if you require.
Between calling myReportDocument.Load("myreport.rpt") and myReportDocument.Refresh() (the latter of which actually gets the data from the database), it is possible to add a call to myReportDocument.SetDataSource(myDataSource). SetDataSource takes a data source object, which you can create with a call to its constructors, passing the URL of the data source you wish to use along with its username and password.
Hope this helps
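A minimal sketch of that sequence in C# (the report name and data source are illustrative; assumes a filled DataTable and the CrystalDecisions.CrystalReports.Engine namespace):

using CrystalDecisions.CrystalReports.Engine;

ReportDocument report = new ReportDocument();
report.Load("myreport.rpt");         // load the report definition
report.SetDataSource(myDataTable);   // supply the data before it is fetched
report.Refresh();                    // per the answer above, this then pulls the data in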
A different way would be to call ReportDocument::SetDatabaseLogon(String* user, String* password, String* server, String* database) before Refresh(), if you don't want to reuse a connection. This has the benefit of being simple, but it means that you don't reuse data sources.
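Roughly (the credentials and server name are placeholders):

ReportDocument report = new ReportDocument();
report.Load("myreport.rpt");
report.SetDatabaseLogon("user", "password", "myServer", "myDatabase");  // log on to the report's own connection
report.Refresh();  // fetches the data from the database using that logon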