I have a data source being uploaded to a server environment. The calculated fields I have are fine within my workbook before doing so, but sometimes the = portion of the field name disappears and it turns into a regular dimension field.
Example (screenshots): pre-upload vs. post-upload.
What could be causing that?
I've created a script to mass-produce copies of a Google Sheet from a master sheet. The script changes the name of the documents according to data in a separate sheet.
Within the template sheet, I've set row 2 as a named range, and what I'd like the script to also do is change the data in that row based on data I have in the master sheet.
I have been previously advised that this is possible, but I confess I have no clue how to code this into my script!
Is anyone able to offer any code which might do the job?
Many thanks
Kerry
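Without seeing your existing script it's hard to be exact, but here is a minimal Apps Script sketch of the idea. The template file ID, the master sheet name, and the named-range name ('Row2Values') are all hypothetical placeholders to swap for your own:

```javascript
// Minimal sketch: copy a template spreadsheet once per row of the master
// sheet, name each copy from column A, and write columns B onwards into
// the named range covering row 2 of the copy.
function makeCopies() {
  var template = DriveApp.getFileById('TEMPLATE_FILE_ID');  // hypothetical ID
  var master = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Master');
  var rows = master.getDataRange().getValues();
  for (var i = 1; i < rows.length; i++) {                   // skip header row
    var copy = SpreadsheetApp.openById(template.makeCopy(rows[i][0]).getId());
    var row2 = copy.getRangeByName('Row2Values');           // hypothetical name
    // Write one value per cell of the named range, taken from columns B+
    row2.setValues([rows[i].slice(1, 1 + row2.getNumColumns())]);
  }
}
```

The key call is getRangeByName(): named ranges survive a file copy, so the script can address row 2 of each new document through the name you defined in the template.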
I could really do with some help with ADF. I've recently started trying to use it, thinking it would be similar to SSIS, but wow, am I having a hard time! Over the last few weeks I've built up a fairly complicated pipeline which reads a list of files from a folder and, from within a For Each loop, is supposed to check where the data starts in each file and import it into a SQL table.

I won't bore you with all the issues I've had so far, but at the moment it seems to be working apart from the For Each part: it's importing all the files in the folder on every iteration, and it seems to be the data set configuration that isn't recognising the filename per iteration. Looking through the debugging, I can see it pick up the list of files and set the DSFileName variable to the first of them, but the output of the data flow task is both files. So it seems like I've missed a step somewhere; I've just spent the last 5 hours looking and could really do with some help :(
I believe I've followed the instructions here: https://www.sqlshack.com/how-to-use-iterations-and-conditions-activities-in-azure-data-factory/
Some pictures to show the debugging I've done:
Here it shows it's picking up 2 files (after I filtered out folders and the like).
Here only the first file name is being passed into the first data flow.
Here is the output from it, where it has somehow picked up both files and shows a count of 2.
Here is the Data Set setup, where I believe I've correctly set the variable as the file name to be used.
I just don't know where to start now, to be honest. I believe I've checked everything I can see, and I'm not using any wildcards or anything. I can see it passing the one file name per iteration into that variable, but on each iteration I can see 2x counts of the file going into the table, and the output of each data flow task shows both file counts.
Does anybody have any ideas or know what I've missed?
EDIT 23/07/22: Pics of the source as requested:
Data Source Settings
Data Source Options
So it turns out that adding .name to item() in the dataset parameter makes it use just the current file instead of all of them. I'm confused by this, as all the documentation I've read states that item() references the CURRENT item within the For Each; did I misunderstand?
Adding .name to the dataset here is now importing just the current file per loop iteration
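For anyone else hitting this: assuming the file list comes from a Get Metadata activity's childItems output (as in the walkthrough linked above), each item inside the For Each is a small object, not a bare filename string:

```
@item()        ->  { "name": "File1.csv", "type": "File" }
@item().name   ->  "File1.csv"
```

(The filename here is a made-up example.) So item() really does reference the current item; it is just that the current item is the whole object, and passing that into the dataset's file-name parameter evidently doesn't resolve to a single valid file name, which leaves the dataset reading the whole folder.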
I have a number of Excel files where there is a line of text (and a blank row) above the header row of the table.
What would be the best way to process the file so I can extract the text from that row AND include it as a column when appending multiple files? Is it possible without having to process each file twice?
Example
This file was created on machine A on 01/02/2013
Task|Quantity|ErrorRate
0102|4550|6 per minute
0103|4004|5 per minute
And I want to end up with the data from multiple similar files combined:
Task|Quantity|ErrorRate|Machine|Date
0102|4550|6 per minute|machine A|01/02/2013
0103|4004|5 per minute|machine A|01/02/2013
0467|1264|2 per minute|machine D|02/02/2013
I put together a small, crude sample of how it can be done. I call it crude because (a) it is not dynamic: you can add more files to process, but you need to know how many files in advance of building your job; and (b) it shows the basic concept but would require more work to suit your needs. For example, in my test files I simply have "MachineA" or "MachineB" in the first line; you will need to parse that data out to obtain the machine name and the date.
But here is how my sample works. Each Excel file is set up as two inputs: for the header, the tFileInput_Excel is configured to read only the first line, while the body tFileInput_Excel is configured to start reading at line 4.
In the tMap they are combined (not joined) into the output schema. This is done for the Machine A and Machine B Excel files, then those tMaps are combined with a tUnite for the final output.
As you can see in the log row, the data is combined and includes the header info.
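For what it's worth, if you'd rather script this than build it in a Talend job, here is a minimal pandas sketch of the same two-read idea (one read for the metadata line, one for the body). The folder path, file pattern, and the regex that pulls the machine and date out of the first line are assumptions based on the example above:

```python
# Minimal sketch, assuming .xlsx files in a hypothetical "input" folder,
# the metadata sentence in cell A1, a blank row 2, and headers on row 3.
import glob
import re
import pandas as pd

frames = []
for path in glob.glob("input/*.xlsx"):
    # First read: just the metadata line above the table
    meta = str(pd.read_excel(path, header=None, nrows=1).iloc[0, 0])
    machine, date = re.match(r"This file was created on (.+) on (.+)", meta).groups()
    # Second read: the table body, skipping the text line and the blank row
    df = pd.read_excel(path, skiprows=2)
    df["Machine"] = machine
    df["Date"] = date
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)
print(combined)
```

Like the Talend job, this still reads each file twice. A single pass is possible (read everything with header=None, peel off the first row, and promote row 3 to the header), but the two small reads keep the intent clearer.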
I am pulling data for a couple of brands into Google Sheets with Zapier, and I'm pulling the information from each sheet into Tableau as a separate data source. The formatting across the sheets is uniform; only the values are different.
My objective is to use a completed viz sheet as a template, so that I can duplicate the sheet and replace the data source. However, I am running into a problem.
Generally, when replacing a data source with "Replace Data Source", the change applies workbook-wide, but I need the change to occur at the sheet level.
Is there any way to hook a viz sheet up to a different data source, assuming the new source has the same formatting as the "template" file?
When I need to replace the data source of just one sheet, I copy and paste that sheet into a new workbook, replace the data source there, and copy and paste the sheet back into my original workbook.
A quick look on Google brought up this: https://community.tableau.com/ideas/1156
It shows, first, that there is no "Replace source for current sheet" function as such, but it also gives a workaround:
1. Create a bookmark (details: https://onlinehelp.tableau.com/current/pro/desktop/en-us/save_savework_bookmarks.html).
2. Rename the original data source.
3. Re-import the bookmark; this creates a second instance of the data source for the bookmarked sheet.
4. Change the newly created data source, which is now used only on that one sheet.
I have an annoying issue with CR 2011. We are trying to upgrade from the very old CR 8.5 (DBF files are used as the source(s) for the reports) to CR 2011, and a strange issue has appeared.
There are several fields on the report, and all of them contain data (as can be seen in the DBF file itself and/or in Browse Field Data), but a few of them are never shown on the report. (If, however, I browse the data within the preview in the CR 2011 designer, I can see the data with no problem.)
This report uses two (non-linked) tables.
If I create a blank report, add these two tables, and format the report again, I get what I expect (i.e. all fields shown on the report). But this is not a solution, as we have hundreds of reports.
It does not matter whether I (re)save the report in the latest format.
Everything is shown when using CR 8.5 (designer or "runtime").
Has anyone experienced similar behavior, and/or does anyone have tips on where to look?
Non-linked tables are less well supported than they used to be (support for DBF files as a whole is limited), so the only correct solution is to link the non-linked tables together. In my case the "free" table was a parameter table, so I simply added a "paramid" field to both tables (always set to 0) and performed the linking on it.