How to extract filename from $$FILEPATH in Synapse copy activity? - azure-data-factory

$$FILEPATH here gives me values like 'folder/sub/sub1/filename.ext'
I want to consistently extract filename.ext only. Can I do that in the copy activity itself?
I currently do it with a stored procedure afterwards, but that's a slow update.
update [table-x]
set Filename = right(Filename, charindex('/', reverse(Filename) + '/') - 1)
where Filename like '%/%'
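For reference, the expression keeps everything after the last '/'; a quick standalone check (sample value invented for illustration):
select right(f, charindex('/', reverse(f) + '/') - 1) as FileNameOnly
from (values ('folder/sub/sub1/filename.ext')) as v(f);
-- returns: filename.ext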
Screenshots in this answer (https://stackoverflow.com/a/63170857/300248) by @mjonsing suggest it may be possible, but I don't know exactly how.

Is there a difference in performance between set/save when saving columns to tables?

I have a small utility that checks an intraday HDB for new columns and adds them.
At the moment I am using:
.[set;(pth;?[data;();();cls]);{[p;e] .log.error[.z.h;"Failed to save to path [",string[p],"] with error :",e]}[pth;]]
where pth is:
`:path_to_hdb/2022.03.31/table01/newDummyThree
and
?[data;();();cls] // just an exec statement
Would it make any difference to use save instead:
.[save;(pth;?[data;();();cls]);{[p;e] .log.error[.z.h;"Failed to save to path [",string[p],"] with error :",e]}[pth;]]
Yes. If you are adding entire columns to a table then you might want to store it splayed, i.e. as a directory of column files rather than as a single table file. This means using set rather than save.
https://code.kx.com/q/kb/splayed-tables/
But test with actual example updates to compare.
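A minimal sketch of what splaying looks like (table and paths invented for illustration; symbol columns would first need enumerating, e.g. with .Q.en):
q)data:([]a:1 2;b:3 4)
q)`:/tmp/t set data            / a single table file
`:/tmp/t
q)`:/tmp/db/t/ set data        / trailing slash: a splayed directory of column files
`:/tmp/db/t/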
As mentioned in the documentation for save:
Use set instead to save a variable to a file of a different name, or to save local data.
So set has the advantage of not requiring a global, and you can give the file a different name from that of your in-memory global variable.
There is no difference in how they serialise/write the data. In fact, save uses set under the covers anyway:
q)save
k){$[1=#p:`\:*|`\:x:-1!x;set[x;. *p]; x 0:.h.tx[p 1]#.*p]}'
By the way - you can't use save in the way you've suggested in your post: save takes a symbol as input, and that symbol names the global variable containing the data you want to write.
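A small illustration of that point (names and paths invented):
q)t:([]a:1 2 3)
q)`:/tmp/anyname set t         / set: target path and data are both explicit
`:/tmp/anyname
q)save `:/tmp/t                / save: writes the global whose name matches the file, here t
`:/tmp/t
q)save `:/tmp/anyname          / signals 'anyname - no global of that name exists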

Matlab loop: ignores variable definition when loading something from directories

I am new to MATLAB and am trying to run a loop within a loop. I define a variable ID beforehand, for example ID={'100'}. In my loop, I then want to go to that ID's directory and load the mat-file there. However, whenever I load the mat-file, the ID definition suddenly gets overridden by all possible IDs (all folders in the directory where the ID 100 also lies). Here is my code; I also tried fullfile, but no luck so far:
ID={'100'}
for subno=1:length(ID) % first loop
try
for sessno=1:length(Session) % second loop, for each ID there are two sessions
subj_name = ([ID{subno} '_' Session{sessno} '_vectors_SC.mat']);
cd(['C:\' ID{subno} '\' Session{sessno}]);
load(subj_name) % the problem occurs here, when loading, not before
end
end
end
When I then check the length of ID, it is no longer 1 (one ID, namely 100): it now contains all possible IDs from the directory where 100 also lies, and the loop iterates over all of them (although it should stop after ID 100).
You should always specify an output for load, to prevent overwriting variables in your workspace and polluting it with the entire contents of the file. There is an extensive discussion of some of the strange potential side effects of not doing this here.
Chances are, you have a variable named ID inside the .mat file, and it overwrites the ID variable in your workspace. This can be avoided by using the output of load.
The output of load is a struct which can be used to access your data.
data = load(subj_name);
%// Access variables from file
id_from_file = data.ID;
%// Can still access ID from your workspace!
ID
Side Note
It is generally not ideal to change directories to access data: if a user runs your script, they may start in a directory they want to be in, but when your program returns it dumps them somewhere unexpected.
Instead, you can use fullfile to construct a path to the file and not have to change folders. This also allows your code to work on both *nix and Windows systems.
subj_name = ([ID{subno} '_' Session{sessno} '_vectors_SC.mat']);
%// Construct the file path
filepath = fullfile('C:', ID{subno}, Session{sessno}, subj_name);
%// Load the data without changing directories
data = load(filepath);
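Putting both fixes together, the loop could look like this (a sketch; Session is assumed to be defined in your workspace as in your original code):
ID = {'100'};
Session = {'1','2'}; %// assumed: two sessions per ID
for subno = 1:length(ID)
    for sessno = 1:length(Session)
        subj_name = [ID{subno} '_' Session{sessno} '_vectors_SC.mat'];
        filepath = fullfile('C:', ID{subno}, Session{sessno}, subj_name);
        data = load(filepath); %// file contents stay inside data
        %// ... work with data.<varname>; ID is never overwritten
    end
end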
With the command load(subj_name) you are loading the complete mat-file located there. If this mat-file contains a variable ID, it will overwrite your initial ID.
This should not cause your outer for-loop to execute more than once. The vector 1:length(ID) is created by the initial for-loop and should not get overwritten by subsequent changes to length(ID).
If you insert disp(ID) before and after the load command and post the output it might be easier to help.
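A quick standalone demo of the earlier point that the loop range is fixed when the loop starts:
n = 1;
for k = 1:n
    n = 5; %// growing n inside the loop does not add iterations
end
disp(k) %// prints 1, not 5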

Use CSV values in JMeter as request path

I have a JMeter User Defined Variable holding a comma-separated value: ${countries} = IN,US,CA,ALL.
(I was first trying to get it as a list/array: [IN,US,CA,ALL].)
I want to use the variable to test a web service: GET /${country}/info. Is it possible using a ForEach Controller or a Loop Controller?
The only thing is that I want to save it or read it as IN,US,..,ALL and use the values in the request path.
Thanks
The CSV should be as per the format mentioned in the image attached.
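That image hasn't survived here; presumably the file holds one country per line (an assumption based on the answer), e.g.:
IN
US
CA
ALL
with a CSV Data Set Config pointing at the file and its Variable Names field set to country, so the request path can use /${country}/info.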
Refer to the link on how to use CSV in Jmeter: http://ivetetecedor.com/how-to-use-a-csv-file-with-jmeter/
Thread Group Settings
No. of threads: 1
Ramp-up period: 1
Loop Count: 4
Hope this will help.
CSV config is a red herring, you don't need it.
You can use a Regular Expression Extractor to split the variable into another variable (e.g. MyVar), using something like:
(.+?)[,\n]
This tries to match each item before a , or a newline. It will place the values in variables like MyVar_1, MyVar_2, etc. This is as close to an array as JMeter natively understands.
You can then loop over the matches using MyVar_matchNr, and MyVar_1 to MyVar_n (you will need to use the __V() function to access the 'array' contents).
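A sketch of how the pieces could be wired up (standard component settings; note the regex relies on a trailing delimiter, so if ${countries} does not end in a comma the last value may need a pattern like (.+?)(,|$) instead - an assumption worth testing against your data):
Regular Expression Extractor
  Apply to: JMeter Variables -> countries
  Reference Name: MyVar
  Regular Expression: (.+?)[,\n]
  Match No.: -1
ForEach Controller
  Input variable prefix: MyVar
  Output variable name: country
HTTP Request (inside the ForEach Controller)
  Path: /${country}/info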

Talend How To Pass Last Modified File Into TFileInputDelimited?

I have searched all over, and read this post.
But it doesn't seem complete and doesn't work.
The situation: I need to get the last modified file from a directory on the local machine, and then pass that file into the tFileInputDelimited component.
I currently have:
tFileList --> iterate --> tIterateToFlow --> tSampleRow --> tFlowToIterate --> tFileInputDelimited --> tLogRow (just to make sure it's pulling the right file)
But it doesn't work. I have configured it so that tIterateToFlow has a column called
"FileName" with ((String)globalMap.get("CURRENT_FILE")) as the value,
"FileDirectory" with ((String)globalMap.get("CURRENT_FILEDIRECTORY")) as the value, and
"FileAndDirectory" with ((String)globalMap.get("CURRENT_FILEPATH")) as the value.
The tSampleRow is limited to "1".
The tFlowToIterate is set so that
"FileNameOnly" is the value of "FileName",
"FileDirectoryOnly" is "FileDirectory", and
"FilePathComplete" is "FileAndDirectory".
In the file location field of the tFileInputDelimited, I have ((String)globalMap.get("FilePathComplete")).
When it runs I get an error saying it cannot find the file or path. If I cut out the file input component and send the flow straight to the tLogRow, it shows a single blank entry.
Any ideas?
I'm not sure if you've just slightly misconfigured the job here but it seems to work fine for me.
Here are a few screenshots showing my job design:
The only thing I can think of just by looking at your post is that you might have slightly messed up the key value pair combinations in the tFlowToIterate. I tend to find that the default settings there work fine pretty much all of the time and it makes it a little more obvious what it's doing as well.
EDIT: Actually, it looks like you might be using the wrong values in your tIterateToFlow. The tFileList will put the values for the file paths etc. into the globalMap, but it prefaces them with the unique component name. If you hit Ctrl+Space in the value window it should prompt you with a list of available values (these are also shown in the "Outline" tab of the Studio). It typically makes an implicit conversion to String, but for this you will need to convert explicitly, so use .toString() instead of (String).
Another way to get the last modified file is as below:
tFileList (sorted DESC by file modified date) --> tFixedFlowInput (schema: filename, filenumber) --> tHashOutput
Here, in the tFixedFlowInput:
filename = (String)globalMap.get("tFileList_1_CURRENT_FILEPATH")+"/"+(String)globalMap.get("tFileList_1_CURRENT_FILE")
filenumber = (Integer)globalMap.get("tFileList_1_NB_FILE")
What the above accomplishes is getting a list of all files in the directory with their number/rank, where the last modified file has file number = 1, the next one 2, and so on.
Now on the OnSubjobOk of the above tFileList you can have a tHashInput which reads from the tHashOutput above and filters to only the row where filenumber == 1, which means the last modified file.
tHashInput (linked to tHashOutput) --> tFilterRow (filenumber == 1) --> tLogRow
One reason why you are getting null is probably that you used globalMap.get("CURRENT_FILEPATH") instead of globalMap.get("tFileList_1_CURRENT_FILEPATH").
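Pulled together, the job might look like this (a sketch; the exact option labels in tFileList can differ slightly between Talend versions):
tFileList_1 (directory to scan, ordered by modified date, DESC)
  --iterate--> tFixedFlowInput_1 (schema: filename String, filenumber Integer)
      filename = (String)globalMap.get("tFileList_1_CURRENT_FILEPATH")
      filenumber = (Integer)globalMap.get("tFileList_1_NB_FILE")
  --> tHashOutput_1
OnSubjobOk --> tHashInput_1 --> tFilterRow_1 (filenumber == 1) --> tLogRow_1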
The simple solution for the above problem could be as below:
tFileList (sorted ASC by file modified date) --> tIterateToFlow --> tJava (just to end the subjob).
Then on subjob ok --> tFileInputDelimited (use (String)globalMap.get("tFileList_1_CURRENT_FILE") or (String)globalMap.get("tFileList_1_CURRENT_FILEPATH") as the file name/file path).
Explanation:
Since tFileList iterates over the files in ASC order, the latest file name is what remains in the globalMap after the last iteration. The list is only iterated up to tIterateToFlow, hence after this component (String)globalMap.get("tFileList_1_CURRENT_FILE") will always give the last file name from the iterated list, which in our case is the latest file.
(Screenshots omitted: main flow and component view.)

Powershell controlling MSWord: How to select the entire content and update?

I am dealing with a whole load of Word documents that make heavy use of fields and cross-references (internally and between documents).
To update these and make everything consistent again after a change, I have to open each file, select the entire file's content (the equivalent of hitting Ctrl-A) and update all fields (the equivalent of hitting F9). And I have to do this twice for all files, so that all inter-file cross-references are updated properly as well.
Since this is a rather tedious and lengthy process, I wanted to write a little PowerShell script that does it for me. The relevant function to update a file looks like this:
...
function UpdateDoc([object]$word, [object]$fileHandle) {
Write-Host("Updating: '" + $fileHandle.Name + "' ('" + $fileHandle.FullName + "'):")
# open the document:
$doc = $word.Documents.Open($fileHandle.FullName)
# select the entire document:
???
# update it:
???
# then save it:
$doc.Save()
$doc.Close()
Write-Host("'" + $fileHandle.Name + "' updated.")
}
...
But I am stuck on how to select the file's content and update it all, i.e. what has to go into this code instead of the two ???-markers to achieve what I want?
Did you try:
$doc.Fields | %{$_.Update()}
That should update all the fields.
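Slotted into the function from the question, the whole thing might look like this (a sketch using the same Word COM calls; note that Document.Fields covers the main text story, so fields in headers or footers may need separate handling):
function UpdateDoc([object]$word, [object]$fileHandle) {
    Write-Host("Updating: '" + $fileHandle.Name + "' ('" + $fileHandle.FullName + "'):")
    $doc = $word.Documents.Open($fileHandle.FullName)
    # update every field in the document body (no Ctrl-A selection needed):
    $doc.Fields | ForEach-Object { $_.Update() }
    # then save and close the document:
    $doc.Save()
    $doc.Close()
    Write-Host("'" + $fileHandle.Name + "' updated.")
}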