I have a tFileList iterating over a set of XML files.
And for testing I have 1 XML file in the folder:
The XML schema has a field called "operationresultdesc":
In my tJavaFlex, I am printing the value of this field, and it comes back as null (the output literally reads "null").
This "NULL" check should fail as I can see values in the field in the XML.
The weird thing is that if I add more XML files to the folder, the null failure "passes" to another file: the file that was failing before is apparently not null any more now that there is more than one XML file in the folder. There never was a null issue in that file, nor is there one in the file now being flagged as null.
For reference, the tJavaFlex code is:
UPDATE: This is based off a repository XML schema driven from the XSD, and I do get the data flowing. The issue is that I get a null when the file is on its own, yet the same file works when I add more than one XML file to the folder...and the null failure passes to something else.
It feels like I am not understanding how tFileList iterates and how tJavaFlex behaves in this case, but it is very strange.
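The tJavaFlex code block did not survive above, so purely for reference, a minimal main-part body along the lines of the description might look like this. It is a sketch only: the row name row1 and the component name tFileList_1 are assumptions, not taken from the original post.

// tJavaFlex "Main code" section: runs once per incoming row.
// row1 is an assumed connection name; operationresultdesc is the
// schema field named in the question.
if (row1.operationresultdesc == null) {
    // Note: tFileList stores the current file under a key prefixed with
    // the component's unique name, e.g. "tFileList_1_CURRENT_FILE";
    // an unprefixed globalMap.get("CURRENT_FILE") would itself return null.
    System.out.println("null operationresultdesc in "
            + globalMap.get("tFileList_1_CURRENT_FILE"));
} else {
    System.out.println("operationresultdesc = " + row1.operationresultdesc);
}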
I have an XML dataset, and I want to parameterize the compression type so the same pipeline can handle both .xml and .xml.gz files:
When I put the value 'gzip' in the compression type, it reads .xml.gz files. I want to know what value I should put to read an uncompressed .xml file, because the parameter does not accept an empty value. It is only able to read a plain .xml file when I delete the compression_type parameter entirely.
You should pass "None" and it should work.
I feel "None" is more of a workaround in this particular case. "None" is still a string value, not empty.
In my scenario right now, I have an Excel dataset. I want to make every parameter as generic as possible, including the file path/name, sheet name, and the range. The value of "Range" under the Connection tab allows an empty value. However, if I specify it as #dataset().DataRange and leave my DataRange parameter empty, I cannot preview the data or submit the pipeline because it complains that the value cannot be empty.
I have been trying to rewrite some logic to use iText 7 that I previously had working with iTextSharp (5.x), but I ran into a roadblock. GetTranslatedFieldName no longer exists, so I did a quick search and found it in SignatureUtilities, but the result doesn't match the actual field name. I assumed that maybe I was misusing it, so I dug around a bit more and found FindFieldName on the AcroForm, but it returns the same incorrect field name.
Is this a bug, or am I still misusing these methods? Is there a native way to retrieve the full field name from a partial name in iText 7?
For reference, this is what my full field name looks like:
someFormX[0].Page1[0].FieldX[0]
And this is what the field name returned by GetTranslatedFieldName and FindFieldName looks like:
someFormX[0].FieldX[0]
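In the meantime, one workaround is to walk the AcroForm's field map yourself, since its keys are the fully qualified names. Below is a minimal sketch using iText 7's Java API (the C# API mirrors it almost method for method); the file name and partial name are placeholders, not from the original post.

import java.util.Map;

import com.itextpdf.forms.PdfAcroForm;
import com.itextpdf.forms.fields.PdfFormField;
import com.itextpdf.kernel.pdf.PdfDocument;
import com.itextpdf.kernel.pdf.PdfReader;

public class FullFieldNameLookup {

    // Returns the first fully qualified field name whose last segment
    // equals the given partial name, or null if there is no match.
    static String findFullFieldName(PdfDocument doc, String partialName) {
        PdfAcroForm form = PdfAcroForm.getAcroForm(doc, false);
        if (form == null) {
            return null;
        }
        for (Map.Entry<String, PdfFormField> entry : form.getFormFields().entrySet()) {
            String fullName = entry.getKey(); // keys are fully qualified names
            if (fullName.equals(partialName) || fullName.endsWith("." + partialName)) {
                return fullName;
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        try (PdfDocument doc = new PdfDocument(new PdfReader("someFormX.pdf"))) {
            // Expected to print e.g. someFormX[0].Page1[0].FieldX[0]
            System.out.println(findFullFieldName(doc, "FieldX[0]"));
        }
    }
}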
When I am exporting data from one environment and importing it into another, I am seeing an ambiguous unique keys error. I did check for ambiguity but did not find anything that would cause this violation.
I get the following error (there are several identical errors, but I am only posting one):
Error Begin

insert_update ABClCMSParagraphComponent;&Item;catalogVersion(catalog(id),version)[unique=true,allownull=true];content[lang=en];creationtime[forceWrite=true,dateformat=dd.MM.yyyy hh:mm:ss];modifiedtime[dateformat=dd.MM.yyyy hh:mm:ss];name;owner(&Item)[allownull=true,forceWrite=true];uid[unique=true,allownull=true]

ABClCMSParagraphComponent,8796158592060,,,Error saving batch in bulk mode [reason: unique keys {catalogVersion=CatalogVersionModel (8796093186649#41), uid=DMparaleftdescrip} for model ABClCMSParagraphComponentModel (8796158657596#1) - found 2 item(s) using the same keys]. Will try line-by-line mode., unique keys {catalogVersion=CatalogVersionModel (8796093186649#41), uid=comp_000003UX} for model ABClCMSParagraphComponentModel (8796158592060#1) - found 2 item(s) using the same keys

;Item111;abcContentCatalog:Staged;"<p>Hello <a href="">world</a></p>";12.09.2017 07:04:12;18.09.2017 09:38:39;Feed Article - Makeup;;comp_000003UX

Error End
What could be the reason it's showing the ambiguous unique keys error?
The logs didn't clearly show the error with the two check boxes that were selected. When I deleted these two columns, owner(&Item) and creationtime, the script imported successfully.
Often, when no specific error is shown, there was an error saving the item. In your case it might be the initial attributes "owner" and "creationtime". If the item is already present, initial attributes cannot be changed.
I have searched all over, and read this post.
But it doesn't seem complete and doesn't work.
The situation: I need to get the last modified file from a directory on the local machine, and then pass that file into the tFileInputDelimited component.
I currently have:
tFileList --> iterate --> tIterateToFlow --> tSampleRow --> tFlowToIterate --> tFileInputDelimited --> tLogRow (just to make sure it's pulling the right file)
But it doesn't work. I have configured it so that the tIterateToFlow has a column called
"FileName" with ((String)globalMap.get("CURRENT_FILE")) as the value,
"FileDirectory" with ((String)globalMap.get("CURRENT_FILEDIRECTORY")) as the value, and
"FileAndDirectory" with ((String)globalMap.get("CURRENT_FILEPATH")) as the value.
The tSampleRow is limited to "1".
The tFlowToIterate is set so that
"FileNameOnly" is the value of "FileName",
"FileDirectoryOnly" is the value of "FileDirectory", and
"FilePathComplete" is the value of "FileAndDirectory".
In the file location field of the tFileInputDelimited, I have ((String)globalMap.get("FilePathComplete")).
When it runs, I get an error saying it cannot find the file or path. If I cut out the tFileInputDelimited component and send the flow straight to the tLogRow, it shows a single blank line.
Any ideas?
I'm not sure if you've just slightly misconfigured the job here but it seems to work fine for me.
Here are a few screenshots showing my job design:
The only thing I can think of just by looking at your post is that you might have slightly messed up the key/value pair combinations in the tFlowToIterate. I tend to find that the default settings there work fine pretty much all of the time, and they make it a little more obvious what the job is doing as well.
EDIT: Actually, it looks like you might be using the wrong values in your tIterateToFlow. The tFileList will put the values for the file paths etc. into the globalMap, but it prefixes them with the unique component name. If you hit Ctrl+Space in the value window, it should prompt you with a list of available values (these are also shown in the "Outline" tab of the Studio). Talend typically makes an implicit conversion to String, but for this you will need to convert explicitly, so use .toString() instead of a (String) cast.
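For example, the value expressions in the tIterateToFlow would then look something like this (a sketch assuming the tFileList is named tFileList_1; adjust the prefix to your component's actual unique name):

// tIterateToFlow value expressions, one per column:
globalMap.get("tFileList_1_CURRENT_FILE").toString()          // FileName
globalMap.get("tFileList_1_CURRENT_FILEDIRECTORY").toString() // FileDirectory
globalMap.get("tFileList_1_CURRENT_FILEPATH").toString()      // FileAndDirectory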
Another way to get the last modified file is as below:
tFileList (sorted DESC by file modified date) ------> tFixedFlowInput (schema: filename, filenumber) -----> tHashOutput
Here, in tFixedFlowInput:
filename = (String)globalMap.get("tFileList_1_CURRENT_FILEDIRECTORY")+"/"+(String)globalMap.get("tFileList_1_CURRENT_FILE")
filenumber = (Integer)globalMap.get("tFileList_1_NB_FILE")
The above will get a list of all files in the directory with their number/rank, where the last modified file will have filenumber = 1, the next one 2, and so on.
Now, on SubjobOK of the above tFileList, you can have a tHashInput that reads from the tHashOutput above and filters only the row where filenumber == 1, i.e. the last modified file.
tHashInput (linked to tHashOutput) ----> tFilterRow (filenumber == 1) ------> tLogRow
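In the tFilterRow you can use advanced mode with a condition like the following (assuming the schema column is named filenumber, as above):

// tFilterRow advanced-mode condition: keep only the most recent file
input_row.filenumber == 1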
One reason why you are getting null is probably that you used globalMap.get("CURRENT_FILEPATH") instead of globalMap.get("tFileList_1_CURRENT_FILEPATH").
A simple solution to the above problem could be as below:
tFileList (sorted ASC by file modified date) --> tIterateToFlow --> tJava (just to end the subjob).
Then, on SubjobOK --> tFileInputDelimited (use (String)globalMap.get("tFileList_1_CURRENT_FILE") or (String)globalMap.get("tFileList_1_CURRENT_FILEPATH") as the file name/file path).
Explanation:
Since tFileList iterates over all the files in ASC order, the latest file name is what remains stored in the globalMap after the last iteration. The list is only iterated up to the tIterateToFlow, so after that subjob finishes, (String)globalMap.get("tFileList_1_CURRENT_FILE") will always give the last file name from the iterated list, which is the latest file in our case.
I'm trying to import a .csv file into phpMyAdmin where several fields are purposely left blank. I need these fields to register as NULL values and not just be left as blank strings.
I know that in the field properties you can select to allow "null" vs. "not null" for each field, but that still doesn't change a cell to a NULL value during import. After the import I can manually check the null box for each field on each record, but that is unrealistic considering the amount of data I'm working with.
Is there a way to get phpMyAdmin to set these blank cells to NULL values on import?
I've been experiencing similar issues.
If you download a phpMyAdmin CSV file with NULL values, you'll notice that NULL doesn't get encapsulated in quotes. So you'll have lines like this:
"1";"2";NULL;NULL
"2";"2";NULL;NULL
etc.
However, if you edit the CSV file in something like OpenOffice Calc, it might change this to put quotes around NULL, like so:
"1";"2";"NULL";"NULL"
"2";"2";"NULL";"NULL"
etc.
What should work is doing a search and replace for ["NULL" = NULL].
In your case, because you have empty (blank) fields, you'll want a search and replace like this:
[,, = ,NULL,]
And probably a second pass for NULL values at the end of a line like so:
[,\n = ,NULL\n]
Ancient question, but in case another MySQL noob like myself comes across it.
The find/replace rigamarole jmbertucci describes is avoidable if you're in charge of creating the CSV file, for example when you're backing up your own databases. In phpMyAdmin, if you select the "custom" export method, you will see "Replace NULL with:" and the default is NULL. Simply change that to "NULL" and you save yourself a step.
I ran into this same problem, and jmbertucci's answer worked great. I did run into one additional problem, though. With a row of data like this:
"hello","world",,,,,,
which has multiple consecutive empty values, doing a search and replace with [,, = ,NULL,] as jmbertucci suggested won't work as intended on the first pass. Instead you'll end up with:
"hello","world",NULL,,NULL,,NULL
You should repeat the search and replace until you end up with 0 occurrences replaced.
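To make that "repeat until zero replacements" step concrete, here is a small standalone sketch of the same loop. The class and method names are illustrative, not from any tool mentioned above; it assumes a comma-delimited file where bare empty fields should become NULL.

public class CsvNullFix {

    // Repeatedly applies the ",," -> ",NULL," replacement until a pass
    // changes nothing, then fills an empty trailing field on each line.
    static String fixEmptyFields(String csv) {
        String out = csv;
        String previous;
        do {
            previous = out;
            // Non-overlapping replacement: ",,,," needs several passes.
            out = out.replace(",,", ",NULL,");
        } while (!out.equals(previous));
        // Second pass for an empty field at the end of a line.
        return out.replaceAll(",(\r?\n|$)", ",NULL$1");
    }

    public static void main(String[] args) {
        String row = "\"hello\",\"world\",,,,,,\n";
        // Prints: "hello","world",NULL,NULL,NULL,NULL,NULL,NULL
        System.out.print(fixEmptyFields(row));
    }
}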