I am working with an ADF Mapping Data Flow.
The files in the dataset that I need to process are named in a format like this:
SS_Instagram_Posts_2020-11-10T16_45_14.9490665Z
When I set this in the source and look at the Data Preview, I get this error message:
at Source 'source1': java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: SS_Instagram_Posts_2020-11-10T16:44:39.6950865Z.json
I found a Microsoft page that reports that the ':' character in the filename is the issue.
I have a massive number of files in this format, so is there a forthcoming fix, or a workaround that will let me use my current files as they are named, with the timestamp?
File names with these special characters are only supported in data flows running in a Synapse workspace, not in ADF.
I tested a JSON file dataset with the same name as yours, but didn't hit this error.
Source dataset:
Judging from the error message, your file name is SS_Instagram_Posts_2020-11-10T16_45_14.9490665Z.json, but in the expression the file name is SS_Instagram_Posts_2020-11-10T16:44:39.6950865Z.json; the difference in the date format (colons instead of underscores) is what causes the error.
Please rebuild your wildcard paths expression to use the underscore date format.
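To illustrate the colon-free naming convention the answer recommends (outside of ADF expression language), here is a small Python sketch; the timestamp value is a hypothetical example:

```python
from datetime import datetime

# Hypothetical timestamp; the real files also carry 7 fractional digits.
ts = datetime(2020, 11, 10, 16, 45, 14)

# Underscores instead of ':' keep the name URI-safe for the data flow source.
name = "SS_Instagram_Posts_" + ts.strftime("%Y-%m-%dT%H_%M_%S") + "Z.json"

print(name)  # SS_Instagram_Posts_2020-11-10T16_45_14Z.json
```

The key point is that every ':' in the time component is replaced with '_', which matches the format the existing files already use.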
One of our customers has a strange problem with CSV export from Grafana (3.1.1 - we still run this "ancient" version due to other dependencies).
When they export numbers from a graph showing rates as percentages, they repeatedly get strangely formatted results:
2018-09-11T00:00:00.000Z;44.773.054;39.500.635;37.322.795
2018-09-12T00:00:00.000Z;51.743.917;4.409.222;37.691.824
2018-09-13T00:00:00.000Z;1.421.662;4.341.522;3.631.485
Proper results should look like this:
2018-09-11T00:00:00.000Z;4.4773054;3.9500635;3.7322795
2018-09-12T00:00:00.000Z;5.1743917;4.409222;3.7691824
2018-09-13T00:00:00.000Z;1.421662;4.341522;3.631485
As you can see, the digits are generally OK, but the decimal point is gone and each number is formatted as a huge integer with separators for thousands, millions, etc.
The client uses Windows 7 Enterprise with the latest Chrome, and the OS is set to German. Our best guess is that it is caused by locale settings, because German number formatting differs from UK/US formatting. But we are unable to reproduce it on any of our computers.
Has anyone already encountered something like this? I tried to google it but so far have found nothing close enough. Thank you very much.
The CSV is generated in the browser, and the numeric values are formatted by the toLocaleString function, which uses the browser's locale setting. You need to change the browser's locale configuration.
const x = 123456789;
console.log('Original: ' + x);                       // 123456789
console.log('en-US: ' + x.toLocaleString('en-US'));  // 123,456,789
console.log('de-DE: ' + x.toLocaleString('de-DE'));  // 123.456.789
I am using Matlab to do some image processing on a cluster that uses Sun Grid Engine. On my personal laptop the code runs fine, but when I run it on the cluster I get several errors about files that cannot be found. For example, a .nii (NIfTI) file that exists (I can read it when I run Matlab interactively in the shell) is not found. An excerpt from the output log:
Error using load_nii_ext (line 97)
Cannot find file
"/path/imageFile.nii".
I also get errors from an XML-structured file (which needs a .mps extension to be readable by a postprocessing toolbox; all of this worked fine on my own laptop). Another excerpt from the output log:
/path/pointSetFile.mps exists
Error using readpointsmps (line 24)
Failed to read XML file
/path/pointSetFile.mps.
In this second error message, the first line is the output I get from including this in the script:
if exist(strcat(folder, fileName), 'file') == 2
    disp([strcat(folder, fileName) ' exists'])
end
So it's weird that 1) I can see the files, 2) I can open them manually in Matlab, and 3) according to Matlab's exist() function they do indeed exist, but when xmlread() and read_niigz() want to open them, they suddenly can't be found.
As extra information: I run the scripts with the flags -nodisplay -nodesktop -nosplash, and I currently run them as 2 tasks with SGE. Memory should be fine; I give the job 5 GB and all my images combined are about 1.5 GB.
I'm using absolute paths starting at the root /, I have read the paths letter by letter about 200 times now, and I have no clue anymore what's going on.
I have solved the problems now.
@Xiangrui Li pointed out in the comments that the missing .nii files were caused by interference between the unzipping, reading, and deletion of the .nii and .nii.gz files. That was indeed the problem. Thanks!
I found that the second problem was caused by umlauts in the filenames. Apparently the system, Matlab, and the other processes involved encode filenames differently. Removing the characters with umlauts solved the problem.
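A quick way to see the kind of encoding mismatch described above is Unicode normalization: the same umlaut can be stored precomposed (NFC) or decomposed (NFD), and a file created with one spelling will not be found by a path using the other, even though both render identically. A minimal Python sketch (the filename is hypothetical):

```python
import unicodedata

# 'ö' as a single precomposed code point (NFC)...
name_nfc = "K\u00f6ln_pointSet.mps"
# ...versus 'o' followed by a combining diaeresis (NFD)
name_nfd = unicodedata.normalize("NFD", name_nfc)

print(name_nfc == name_nfd)                                 # False: different code points
print(unicodedata.normalize("NFC", name_nfd) == name_nfc)   # True: same text after normalizing
```

This is why `exist()` can succeed (the shell and Matlab happen to agree on one spelling) while a library that received the other spelling fails to open the file.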
This is sort of a general question, as I cannot provide the data I am trying to import. I have exported a CSV file from FileMaker Pro Advanced 15 and I am trying to import it into PostgreSQL using pgAdmin III. When importing, the progress bar fills and I am presented with the option to click "Done" (I presume the import is finished). However, after clicking "Done" and looking in the table, none of the "imported" data is actually there. I am not sure why; I have looked in many places and have also contacted FileMaker. Does anyone here have any suggestions?
Here is the log for this execution:
2016-06-15 17:40:22 QUERY : COPY query (127.0.0.1:5432):
COPY public."Clients"(name,"createdAt","updatedAt",age,birthday,canreceivetxt,phone,dln,ssn,zip,scndcntctprsn,casetype,incidentloc,dol,pdp,pr,incidentfcts,posncar,pdd,ymmov,advinfo,wtnsnfo,advtick,clitick,policy,adjstrnme,adjstrphne,adjstrfx,insclaimnum,advpol,advinsn,advinsp,trnsprtbprmdc,whtcmpny,medtrtmnt,prvdrs,doi,rltdclaims,wglss,source,notes,locov,dls,email,cai,advins,address,gender,scndrycntctprsonphone,scndryctnctprsnrel,inslimit,checksreceived,disbursement,clientinssettlesamnt,casefeepercent,advsettleamnt,"Casecost",lineitemfees,lastcall)
FROM STDIN
(FORMAT 'csv', DELIMITER ',', HEADER, QUOTE '"', ESCAPE '"', ENCODING 'UTF8')
I have a question in Talend:
I need to create a file with a name like "File_" + TalendDate.getDate("CCYY-MM-DD hh:mm:ss") + ".txt", populate it with the result of a SQL query, and separate the columns of each row with "\t".
After that, I need to connect to a FTP (Through tFTPConnection component), and put this file on a folder (Through tFTPPut component)
The main problem I encounter is that I don't know which component I should use to create the text file. Should I use tFileOutputPositional? tFileOutputDelimited? Another component?
Moreover, I have another issue: the FTP connection itself works, but on the tFTPPut component I get this error:
java.net.SocketTimeoutException: Accept timed out
Any idea ?
Thanks
First you need to execute your SQL query.
To generate the file, use tFileOutputDelimited on the row data and change the field separator to a tab ("\t").
Set the filename directly in the tFileOutputDelimited component. Keep in mind that the path should contain only forward slashes, and that ':' is not allowed in Windows file names, so use a colon-free time pattern, e.g.:
"C:/my-folder/File_" + TalendDate.getDate("CCYY-MM-DD hh_mm_ss") + ".txt"
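Outside Talend, what the job should produce can be sketched in Python: a tab-delimited file whose name embeds a colon-free timestamp. The rows here are hypothetical stand-ins for the SQL query result:

```python
import csv
from datetime import datetime

# Hypothetical rows standing in for the SQL query result.
rows = [("1", "Alice"), ("2", "Bob")]

# Colon-free timestamp so the name is also valid on Windows.
filename = "File_" + datetime.now().strftime("%Y-%m-%d %H_%M_%S") + ".txt"

with open(filename, "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")  # tab separator, as with tFileOutputDelimited
    writer.writerows(rows)
```

The tFileOutputDelimited component does the equivalent work inside the job; the sketch just shows the expected shape of the output file.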
Depending on your configuration, it might help to set the FTP connection to passive mode (see https://community.boomi.com/docs/DOC-1643).