My filter does not work for multi-line text, not even for ASCII characters. Only single-line cells are filtered correctly.
Example of a row with 4 columns:
[1, WELD FIL.. ... MCW430
(REP.. ..bL), 各溶接.. ..と。(..430NbL), X]
The (REP.. ..bL) text is still in the 2nd column, but on a new line
The X character is in the 4th column
Creating the NatTable:
private NatTable createTable(Composite parent, List<TableLine> tLines, String[][] propertyNames,
        PropertyToLabels[] propToLabels, TableParams params, TextMatcherEditor<TableLine> editor, boolean openableParts) {
    // another code: bodyLayerStack, bodyDataLayer
    // another code: set row heights, set column widths
    // another code: columnHeaderDataLayer
    // another code: include table headers
    CompositeLayer composite = null;
    if (propertyNames != null) {
        ColumnHeaderLayer columnHeaderLayer =
                new ColumnHeaderLayer(
                        columnHeaderDataLayer,
                        bodyLayerStack,
                        (SelectionLayer) null);
        columnHeaderLayer.addConfiguration(NatTableLayerConfigurations.getColumnHeaderLayerConfiguration(false));
        SortHeaderLayer<TableLine> sortHeaderLayer =
                new SortHeaderLayer<TableLine>(
                        columnHeaderLayer,
                        new GlazedListsSortModel<TableLine>(
                                bodyLayerStack.getSortedList(),
                                getSortingColumnPropAccessor(propertyNames[0]),
                                configRegistry,
                                columnHeaderDataLayer));
        // another code: setChildLayer, add configurations
    }
    natTable.configure();
    editor.setFilterator(new TextFilterator<TableLine>() {
        @Override
        public void getFilterStrings(List<String> baseList, TableLine element) {
            for (int i = 0; i < element.getLength(); i++) {
                baseList.add("" + element.getObjectByColumn(i));
            }
        }
    });
    editor.setMode(TextMatcherEditor.REGULAR_EXPRESSION);
    bodyLayerStack.getFilterList().setMatcherEditor(editor);
    NatTableContentProvider.addNatTableData(natTable, bodyLayerStack.getSelectionLayer(), bodyLayerStack.getBodyDataProvider());
    return natTable;
}
I figured out that in ca.odell.glazedlists.impl.filter.TextMatchers.matches(List<String>, TextFilterator<? super E>, SearchTerm<E>[], TextSearchStrategy[], E) the condition if(filterString != null && textSearchStrategy.indexOf(filterString.toString()) != -1) is not satisfied when filtering on the character w, even though the string in the 2nd cell clearly starts with W. See the picture.
My search text for filtering on w is (?i).*w.*(?-i), case insensitive.
Is there any workaround or setting for this? Or do I need to transform the data before filtering? If so, how? Those GlazedLists classes are final, so I cannot override them.
Thanks for any comment!
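One possible workaround, as a sketch rather than a verified fix: in java.util.regex the . metacharacter does not match line terminators by default, so a pattern like (?i).*w.*(?-i) cannot match a cell value whose W sits on its own line. Assuming GlazedLists compiles the search text with java.util.regex, you can either add a newline-flattened copy of each cell value in the TextFilterator (TableLine, getLength() and getObjectByColumn() are taken from the question) or enable DOTALL in the pattern:

// Sketch only: same filterator as in the question, but each multi-line cell
// value is also added with its line breaks flattened, so line-based patterns
// such as (?i).*w.*(?-i) can still match it.
editor.setFilterator(new TextFilterator<TableLine>() {
    @Override
    public void getFilterStrings(List<String> baseList, TableLine element) {
        for (int i = 0; i < element.getLength(); i++) {
            String cell = String.valueOf(element.getObjectByColumn(i));
            baseList.add(cell);
            // assumption: replacing the line breaks with spaces is acceptable here
            baseList.add(cell.replace("\r", " ").replace("\n", " "));
        }
    }
});

Alternatively, keep the filterator as it is and turn on DOTALL in the search text, e.g. (?is).*w.*, so that . also matches line terminators.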
I need to capture input from a barcode scanner. Up until now the input has been just simple alphanumeric text, which I have captured in one Text field. I added a ModifyListener to the Text field and am able to see the input arrive. That has worked fine.
I now need to handle a more complex matrix code which contains values for multiple fields. The values are separated by non-printable characters such as RS, GS and EOT (0x1E, 0x1D, 0x04). The complete data stream has a well-defined header and an EOT at the end, so I am hoping that I can detect barcode input as opposed to manual input.
When a barcode is detected, I can use the record separators RS to split the message and insert the values into the relevant Text fields.
However, the standard key handler on the Text controls ignores these non-printable characters, so they do not appear in the control's text. This makes it impossible to proceed as planned.
How could I modify these Text fields to accept and store all characters? Or is there an alternative approach I could use?
This is the code I used to handle the barcode stream.
public class Main
{
    static StringBuilder sb = new StringBuilder();

    public static void main(String[] args)
    {
        Display d = new Display();
        Shell shell = new Shell(d);
        shell.setLayout(new FillLayout());
        Text text = new Text(shell, 0);
        text.addListener(SWT.KeyDown, new Listener()
        {
            @Override
            public void handleEvent(Event e)
            {
                // only accept real characters
                if (e.character != 0 && e.keyCode < 0x1000000)
                {
                    sb.append(e.character);
                    String s = sb.toString();
                    // have start and end idents in buffer?
                    int i = s.indexOf("[)>");
                    if (i > -1)
                    {
                        int eot = s.indexOf("\u0004", i);
                        if (eot > -1)
                        {
                            String message = s.substring(i, eot + 1);
                            handleMessageHere(message);
                            // get ready for next message
                            sb = new StringBuilder();
                        }
                    }
                }
            }
        });
        shell.open();
        while (!shell.isDisposed())
        {
            if (!d.readAndDispatch())
                d.sleep();
        }
    }
}
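The question does not show handleMessageHere, so here is only a rough sketch of how the captured message could be split on the separators described above; which value ends up in which Text control is entirely application-specific:

// Sketch only: strip the trailing EOT (0x04), then split the message into
// records on RS (0x1E) and into values on GS (0x1D).
private static void handleMessageHere(String message)
{
    if (message.endsWith("\u0004"))
        message = message.substring(0, message.length() - 1);
    String[] records = message.split("\u001E");
    for (String record : records)
    {
        String[] values = record.split("\u001D");
        // assumption: assign the values to the relevant Text fields here,
        // e.g. firstNameText.setText(values[0]);
    }
}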
I use iText 7.0.4.0 with my .NET application to generate PDFs, but inner tables overflow when the text is long.
The outer table has 10 columns with a green border and seems to have rendered fine, as per the image below. Each outer table cell contains one table with one cell inside it, but the inner table cell has overflowed where the paragraph text is large.
I use iText in a large forms-building product, so I've recreated the issue with a simple scenario; the code is given below. Please note that the number of columns is not fixed in real usage.
Could anyone please show me the correct path to achieve this?
Here is the C# Code
private Table OuterTable()
{
    var columns = GetTableColumnWidth(10);
    var outerTable = new Table(columns, true);
    outerTable.SetWidthPercent(100);
    for (int index = 0; index < columns.Length; index++)
    {
        Cell outerTableCell = new Cell();
        Table innerTable = new Table(new float[] { 100 });
        innerTable.SetWidthPercent(100);
        Cell innerTableCell = new Cell();
        Paragraph paragraph = new Paragraph("ABCDEFGHIJKL").AddStyle(_fieldValueStyle);
        innerTableCell.Add(paragraph);
        innerTable.AddCell(innerTableCell);
        outerTableCell.Add(innerTable);
        outerTable.AddCell(outerTableCell);
        innerTableCell.SetBorder(new SolidBorder(Color.RED, 2));
        innerTableCell.SetBorderRight(new SolidBorder(Color.BLUE, 2));
        outerTableCell.SetBorder(new SolidBorder(Color.GREEN, 2));
    }
    return outerTable;
}
Thanks mkl for spending your valuable time. I solved my issue with your idea of 'no inner tables'. This does not solve the nested-tables issue mentioned in the question, but it is another way of achieving the result.
I've used "\n" in the paragraph to achieve what I want. Here is the output and the code.
private Table OuterTable()
{
    var columns = GetTableColumnWidth(10);
    var outerTable = new Table(columns, true);
    outerTable.SetWidthPercent(100);
    for (int index = 0; index < columns.Length; index++)
    {
        Cell outerTableCell = new Cell();
        outerTableCell.Add(GetContent());
        outerTable.AddCell(outerTableCell);
    }
    return outerTable;
}

private Paragraph GetContent()
{
    int maxIndex = 3;
    Paragraph paragraph = new Paragraph();
    for (int index = 0; index < maxIndex; index++)
    {
        paragraph.Add(index + " - ABCDEFGHIJKL \n").AddStyle(_fieldValueStyle);
    }
    return paragraph;
}
With the help of some very kind community members here, I managed to programmatically create a function that replaces text inside content controls in a Word document using Open XML. After the document is generated, the formatting of the replaced text is removed.
Any ideas on how I can keep the formatting in Word and still remove the content control tags?
This is my code:
using (var wordDoc = WordprocessingDocument.Open(mem, true))
{
    var mainPart = wordDoc.MainDocumentPart;
    ReplaceTags(mainPart, "FirstName", _firstName);
    ReplaceTags(mainPart, "LastName", _lastName);
    ReplaceTags(mainPart, "WorkPhoe", _workPhone);
    ReplaceTags(mainPart, "JobTitle", _jobTitle);
    mainPart.Document.Save();
    SaveFile(mem);
}

private static void ReplaceTags(MainDocumentPart mainPart, string tagName, string tagValue)
{
    //grab all the tag fields
    IEnumerable<SdtBlock> tagFields = mainPart.Document.Body.Descendants<SdtBlock>()
        .Where(r => r.SdtProperties.GetFirstChild<Tag>().Val == tagName);
    foreach (var field in tagFields)
    {
        //remove all paragraphs from the content block
        field.SdtContentBlock.RemoveAllChildren<Paragraph>();
        //create a new paragraph containing a run and a text element
        Paragraph newParagraph = new Paragraph();
        Run newRun = new Run();
        Text newText = new Text(tagValue);
        newRun.Append(newText);
        newParagraph.Append(newRun);
        //add the new paragraph to the content block
        field.SdtContentBlock.Append(newParagraph);
    }
}
Keeping the style is a tricky problem as there could be more than one style applied to the text you are trying to replace. What should you do in that scenario?
Assuming a simple case of one style (but potentially over many Paragraphs, Runs and Texts) you could keep the first Text element you come across per SdtBlock and place your required value in that element then delete any further Text elements from the SdtBlock. The formatting from the first Text element will then be maintained. Obviously you can apply this theory to any of the Text blocks; you don't have to necessarily use the first. The following code should show what I mean:
private static void ReplaceTags(MainDocumentPart mainPart, string tagName, string tagValue)
{
    IEnumerable<SdtBlock> tagFields = mainPart.Document.Body.Descendants<SdtBlock>()
        .Where(r => r.SdtProperties.GetFirstChild<Tag>().Val == tagName);
    foreach (var field in tagFields)
    {
        //take a snapshot so removing elements does not disturb the enumeration
        IList<Text> texts = field.SdtContentBlock.Descendants<Text>().ToList();
        for (int i = 0; i < texts.Count; i++)
        {
            Text text = texts[i];
            if (i == 0)
            {
                text.Text = tagValue;
            }
            else
            {
                text.Remove();
            }
        }
    }
}
I have been following this tutorial to come up with a simple source code editor. (The feature that I want the most is keyword highlighting.) What I do not understand is the last part:
class Scanner extends RuleBasedScanner {
    public Scanner() {
        WordRule rule = new WordRule(new IWordDetector() {
            public boolean isWordStart(char c) {
                return Character.isJavaIdentifierStart(c);
            }

            public boolean isWordPart(char c) {
                return Character.isJavaIdentifierPart(c);
            }
        });
        Token keyword = new Token(new TextAttribute(Editor.KEYWORD, null, SWT.BOLD));
        Token comment = new Token(new TextAttribute(Editor.COMMENT));
        Token string = new Token(new TextAttribute(Editor.STRING));
        //add tokens for each reserved word
        for (int n = 0; n < Parser.KEYWORDS.length; n++) {
            rule.addWord(Parser.KEYWORDS[n], keyword);
        }
        setRules(new IRule[] {
            rule,
            new SingleLineRule("#", null, comment),
            new SingleLineRule("\"", "\"", string, '\\'),
            new SingleLineRule("'", "'", string, '\\'),
            new WhitespaceRule(new IWhitespaceDetector() {
                public boolean isWhitespace(char c) {
                    return Character.isWhitespace(c);
                }
            }),
        });
    }
}
The instruction is as follows:
For each of the keywords in our little language, we define a word entry in our WordRule. We pass our keyword detector, together with rules for recognizing comments, strings, and white spaces to the scanner. With this simple set of rules, the scanner can segment a stream of bytes into sections and then use the underlying rules to color the sections.
Could someone shed some light on this, please? I do not know what I have to do to set the desired keywords.
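Parser.KEYWORDS in the tutorial code is nothing more than an array holding the reserved words of the language you want to highlight. A minimal sketch, assuming a Parser class of your own; the actual words are placeholders:

// Sketch only: a hypothetical Parser class with the reserved words the
// scanner should highlight. Replace the strings with your language's keywords.
public class Parser {
    public static final String[] KEYWORDS = {
        "if", "else", "while", "return", "function"
    };
}

The for loop in the Scanner constructor then registers each of these words with the WordRule and binds them to the keyword token, so whenever the word detector recognises one of them in the document it is rendered with the bold Editor.KEYWORD attribute.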
Hi all, I have to parse some files to load into a DataSet, and I ran into an issue where the first row value is sometimes blank. When I parse the data, the rows added to the columns are off because there is no value for row[RouteCode].
Example Data
Columns are in the first line (tab-delimited); data rows are in the following lines (tab-delimited).
RouteCode City EmailAddress FirstName
NULL MyCity My-Email MyFirstName
What I am seeing is that all the columns are added fine, but for each row added the first tab value is not detected, so the columns shift (hope I am making sense): in this case the city data is sitting in the RouteCode column and the last column somehow gets the first row value (the tab).
class TextToDataSet
{
    public TextToDataSet()
    { }

    /// <summary>
    /// Converts a given delimited file into a dataset.
    /// Assumes that the first line
    /// of the text file contains the column names.
    /// </summary>
    /// <param name="File">The name of the file to open</param>
    /// <param name="TableName">The name of the
    /// Table to be made within the DataSet returned</param>
    /// <param name="delimiter">The string to delimit by</param>
    /// <returns></returns>
    public static DataSet Convert(string File,
        string TableName, string delimiter)
    {
        //The DataSet to Return
        DataSet result = new DataSet();
        //Open the file in a stream reader.
        using (StreamReader s = new StreamReader(File))
        {
            //Split the first line into the columns
            string[] columns = s.ReadLine().Split(delimiter.ToCharArray());
            //Add the new DataTable to the RecordSet
            result.Tables.Add(TableName);
            //Cycle the columns, adding those that don't exist yet
            //and sequencing the ones that do.
            foreach (string col in columns)
            {
                bool added = false;
                string next = "";
                int i = 0;
                while (!added)
                {
                    //Build the column name and remove any unwanted characters.
                    string columnname = col + next;
                    columnname = columnname.Replace("#", "");
                    columnname = columnname.Replace("'", "");
                    columnname = columnname.Replace("&", "");
                    //See if the column already exists
                    if (!result.Tables[TableName].Columns.Contains(columnname))
                    {
                        //if it doesn't then we add it here and mark it as added
                        result.Tables[TableName].Columns.Add(columnname);
                        added = true;
                    }
                    else
                    {
                        //if it did exist then we increment the sequencer and try again.
                        i++;
                        next = "_" + i;
                    }
                }
            }
            //Read the rest of the data in the file.
            string AllData = s.ReadToEnd();
            //Split off each row at the Carriage Return/Line Feed
            //Default line ending in most windows exports.
            //You may have to edit this to match your particular file.
            //This will work for Excel, Access, etc. default exports.
            string[] rows = AllData.Split("\n".ToCharArray());
            //Now add each row to the DataSet
            foreach (string r in rows)
            {
                //Split the row at the delimiter.
                string[] items = r.Split(delimiter.ToCharArray());
                //Add the item
                result.Tables[TableName].Rows.Add(items);
            }
        }
        //Return the imported data.
        return result;
    }
}
If there aren't supposed to be any missing entries anywhere in the file (i.e. there should always be something between the tabs), then you could use:
string[] columns = s.ReadLine().Split(delimiter.ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
and then check that columns is not an empty array. If it is then read the next line and carry on processing:
while (columns.Length == 0)
{
    // Row is empty so read the next line out of the file
    columns = s.ReadLine().Split(delimiter.ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
}
This will ensure that your data always starts with a filled row. However, it will break down if there is ever an empty entry further down the list.
If there could be empty entries then you'll probably have to check for all columns being empty:
while (columns.All(c => string.IsNullOrEmpty(c)))
{
    // Row is empty so read the next line out of the file
    columns = s.ReadLine().Split(delimiter.ToCharArray());
}