Using JFace Text SingleLineRule - eclipse

Is it possible to use SingleLineRule in a RuleBasedPartitionScanner to detect
whether a partition starts with a letter or a space?

If you have rules for //, /** and /* you don't need a rule to cover the remaining text - that text will be put in the default IDocument.DEFAULT_CONTENT_TYPE partition.
Update:
Neither SingleLineRule nor its parent class PatternRule supports testing for a range of characters. However, you could write your own implementation of IPredicateRule to do this; look at the PatternRule implementation to see how columns and matching are handled.
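For illustration, here is a minimal, untested sketch of such a rule (the class name is invented; a real rule needs the same care PatternRule takes with columns, resuming and EOF). It succeeds only if the first character of the candidate partition is a letter or a space, then consumes the rest of the line:
import org.eclipse.jface.text.rules.ICharacterScanner;
import org.eclipse.jface.text.rules.IPredicateRule;
import org.eclipse.jface.text.rules.IToken;
import org.eclipse.jface.text.rules.Token;

public class LetterOrSpaceStartRule implements IPredicateRule {

    private final IToken successToken;

    public LetterOrSpaceStartRule(IToken successToken) {
        this.successToken = successToken;
    }

    public IToken getSuccessToken() {
        return successToken;
    }

    public IToken evaluate(ICharacterScanner scanner) {
        return evaluate(scanner, false);
    }

    public IToken evaluate(ICharacterScanner scanner, boolean resume) {
        int first = scanner.read();
        if (first == ICharacterScanner.EOF
                || !(Character.isLetter((char) first) || first == ' ')) {
            scanner.unread(); // not ours - put the character back
            return Token.UNDEFINED;
        }
        // Consume to the end of the line so the whole line forms one
        // partition; adapt this to whatever should end the partition.
        int c;
        do {
            c = scanner.read();
        } while (c != ICharacterScanner.EOF && c != '\n');
        if (c == ICharacterScanner.EOF)
            scanner.unread(); // don't run past the document end
        return successToken;
    }
}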

Is it possible to have different styles in one element in Jaspersoft?

Let's say I need to print the full name of a person. What I originally did was to separate the first name and last name into two elements and place them side by side, since they needed different styling (this is just an example):
Lastname, Firstname
However, I found out afterwards that I can't actually make their width dynamic, because the developers made an effort not to allow it. So now I'm wondering if I can present the name with two different styles inside one element. Is this possible? How would I accomplish that? I hope you can help, thanks!
Okay, so apparently you can. All you need to do is set up the markup attribute on the text field.
Refer here for more details: http://jasperreports.sourceforge.net/sample.reference/markup/
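For instance, a sketch of the original Lastname, Firstname case in a single text field (the field names are made up; markup="html" with <b> tags works the same way):
<textField>
    <reportElement x="0" y="0" width="200" height="20"/>
    <textElement markup="styled"/>
    <textFieldExpression><![CDATA[
        "<style isBold=\"true\">" + $F{lastName} + "</style>, " + $F{firstName}
    ]]></textFieldExpression>
</textField>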

Visio ShapeSheet ShapeData: keep two rows in sync

I have two Shape Data rows for a Shape's ShapeSheet:
Shape Data    Label      Prompt       Type   Format                                                        Value                          Invisible
Prop.Type     "Type"     ""           4      "Alpha;Beta;Gamma;Delta;Epsilon;Zeta;Eta;Theta;Iota;Kappa"    INDEX(4,Prop.Type.Format)      False
Prop.Abbrev   "Abbrev"   No Formula   4      "A;B;G;D;E;Z;E;T;I;K"                                         INDEX(4,Prop.Abbrev.Format)    True
The way I intend to use this is to have the user select the Type, say Epsilon, and then have the Abbrev automatically switch to the corresponding value in Prop.Abbrev.Format.
Note: the values used here are placeholders for the actual values for my application, which are not shown here so they don't distract from the real answer I need: how to keep the selections in sync when the first one is chosen or changed.
Thanks for any help you can give!
I don't have Visio on this machine, so I am unable to copy and paste a working solution. The approach is a little complicated, but extremely flexible.
Save your lists in the User section, rather than Prop - this then becomes underlying data for use in properties. If you are using a master stencil then this also helps with managing the fields.
You can now store an index in your data as well - this index points to the appropriate values in your arrays. You can use Actions and side menus to set the index which, when referenced properly, means you can have the full name and/or abbreviation in the side menu and the ShapeSheet does all the work underneath.
The functions you want to look at are:
INDEX (e.g. INDEX(1,User.Type) will return "Beta"; 0-based)
LOOKUP (e.g. LOOKUP("D",User.Abbrev) will return "3"; 0-based)
GetAtRef
SetAtRef
SetAtRefExpr
SetF
I had a similar business problem which relied on setting a background colour based on the value of shape data. Your final solution could end up including formulas like this: =SETF(GetRef(Prop.Type),"GUARD(INDEX(LOOKUP(Prop.X,Prop.X.Format),User.Type))").
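To make that concrete with the placeholder lists from the question, a sketch (the User cell names are invented; each formula goes in the cell named on the left):
User.TypeList = "Alpha;Beta;Gamma;Delta;Epsilon;Zeta;Eta;Theta;Iota;Kappa"
User.AbbrevList = "A;B;G;D;E;Z;E;T;I;K"
Prop.Type.Format = User.TypeList
Prop.Abbrev.Value = GUARD(INDEX(LOOKUP(Prop.Type,User.TypeList),User.AbbrevList))
With this in place, picking Epsilon makes LOOKUP return 4, and INDEX then pulls "E" from the abbreviation list, so the two rows stay in sync without any manual trigger.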
For more in-depth discussion - check out https://superuser.com/questions/1277331/fillforegnd-in-shapesheet-using-wrong-data and the extended discussion at http://visguy.com/vgforum/index.php?topic=8205.15 - the latter link also includes an example file with working shapesheets (well, working to the extent that they exposed my problem).

yadcf autocomplete not working as in original examples

In the older examples of the AutoComplete filter, as soon as you typed in a letter the filtering was applied, reducing the number of rows shown. For example, if I typed the letter "x", only rows with that column containing the letter "x" would be displayed.
{ column_number : 1, filter_type: "auto_complete" },
In the newer examples, the AutoComplete filter does not do this. I want the original functionality for my web page - how do I achieve this? Is there an option I need to pass to the AutoComplete plugin via the filter_plugin_options parameter? Or is there a different filter I should be using?
I can't recall that autocomplete ever worked this way; you need to use filter_type: 'text' instead, see the fourth column in the following showcase page.
IMO, triggering the filter on each keystroke fits the text filter better than the autocomplete one.
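Under that assumption, the switch looks like this (the table selector and column number are placeholders; filter_delay is an optional yadcf parameter if you want to debounce keystrokes):
var table = $('#example').DataTable();
yadcf.init(table, [
    {
        column_number: 1,
        filter_type: 'text',
        filter_delay: 500   // optional: filter 500ms after the last keystroke
    }
]);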

Are Sphinx Excerpts inserted into the data and thus accessible by MySql?

In reading more about Excerpts:
http://sphinxsearch.com/wiki/doku.php?id=php_api_docs#buildexcerpts_documents_index_words_options
I am still unclear whether the inserted tags are used internally by Sphinx to format the final display text or whether they are actually added to the MySQL data:
before_match is a string to insert before each set of matching words. The default is '<b>'.
In other words, if I changed the string to some non-display HTML, e.g. <!-- START --> for before_match and <!-- END --> for after_match, could I then search on those in MySQL, or is the search still just a zone inside the index?
No, BuildExcerpts does not touch the database at all. It does not change the Sphinx index either.
You simply pass it a block of text (or several) and a text query, and it forms a new block of text for each, highlighting the provided terms. The new block(s) of text are returned to you.
If you want to do something with that block of text (e.g. be able to search it later), you would have to save it somewhere yourself.
(The 'inserted tags' you mention are only added to the new block of text, which is generally assumed to be HTML, so it would just be displayed to the end user.)
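To illustrate (a minimal sketch with the PHP API; the index name, text and query are made up):
require('sphinxapi.php');

$cl   = new SphinxClient();
$docs = array('The quick brown fox');   // e.g. text previously fetched from MySQL
$opts = array(
    'before_match' => '<!-- START -->',
    'after_match'  => '<!-- END -->',
);

// Returns a NEW array of highlighted strings; neither the Sphinx
// index nor the MySQL row is modified in any way.
$res = $cl->BuildExcerpts($docs, 'myindex', 'quick fox', $opts);

// To be able to search the markers in MySQL you would have to
// write $res[0] back to a column yourself.
echo $res[0];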

iTextSharp extracts wrapped cell contents into new lines - how do you identify to which column a given wrapped piece of data belongs now?

I am using iTextSharp to extract data from pdfs.
I stumbled across the following problem, depicted by the scenario below:
I created a sample excel file to illustrate. Here is what it looks like:
I converted it to a pdf using one of the many free online converters available out there, which generated a pdf that looks like this (I did not apply the excel styling when generating the pdf):
Now, using iTextSharp to extract the data from the pdf returns the following string as the extracted data:
As you can see, wrapped cell data generates new lines, with each wrapped piece of data separated by a single white space.
The problem: how does one now identify to which column a given piece of wrapped data belongs? If only iTextSharp preserved as many white spaces as columns...
In my example - how can I identify to which column does 111 belong ?
Update 1:
A similar problem occurs whenever a field has more than one word (i.e., contains white spaces). For example, considering the 1st line of the sample above:
say it looked like
---A--- ---B--- ---C--- ---D---
aaaaaaa bb b cccc
iText again would generate the extraction for this one as:
aaaaaaa bb b cccc
Same problem here, in having to determine the borders of each column.
Update 2:
A sample of the real pdf file I am working with:
This is how the pdf data looks:
In addition to Chris' generic answer, some background in iText(Sharp) content parsing...
iText(Sharp) provides a framework for content extraction in the namespace iTextSharp.text.pdf.parser / package com.itextpdf.text.pdf.parser. This framework reads the page content, keeps track of the current graphics state, and forwards information on pieces of content to the IExtRenderListener or IRenderListener / ExtRenderListener or RenderListener the user (i.e. you) provides. In particular, it does not interpret structure into this information.
This render listener may be a text extraction strategy (ITextExtractionStrategy / TextExtractionStrategy), i.e. a special render listener which is predominantly designed to extract a pure text stream without formatting or layout information. And for this special case iText(Sharp) additionally provides two sample implementations, the SimpleTextExtractionStrategy and the LocationTextExtractionStrategy.
For your task you need a more sophisticated render listener which either
exports the text with coordinates (Chris in one of his answers has provided an extended LocationTextExtractionStrategy which can additionally provide positions and bounding boxes of text chunks), allowing you to analyse tabular structures in additional code - see the sketch after this list; or
does the analysis of tabular data itself.
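For the first variant, a minimal, untested sketch (class and member names invented, iTextSharp 5 API): a strategy that records each incoming chunk together with the start point of its baseline, leaving the row/column analysis to follow-up code:
using System.Collections.Generic;
using System.Text;
using iTextSharp.text.pdf.parser;

public class ChunkWithPosition
{
    public string Text;
    public float X;
    public float Y;
}

public class PositionalExtractionStrategy : ITextExtractionStrategy
{
    public readonly List<ChunkWithPosition> Chunks = new List<ChunkWithPosition>();

    public void BeginTextBlock() { }
    public void EndTextBlock() { }
    public void RenderImage(ImageRenderInfo renderInfo) { }

    public void RenderText(TextRenderInfo renderInfo)
    {
        // Record the chunk text with the start of its baseline
        Vector start = renderInfo.GetBaseline().GetStartPoint();
        Chunks.Add(new ChunkWithPosition
        {
            Text = renderInfo.GetText(),
            X = start[Vector.I1],
            Y = start[Vector.I2]
        });
    }

    public string GetResultantText()
    {
        // Raw concatenation only; the useful data is in Chunks
        var sb = new StringBuilder();
        foreach (var chunk in Chunks)
            sb.Append(chunk.Text);
        return sb.ToString();
    }
}
Passing an instance to PdfTextExtractor.GetTextFromPage and then grouping Chunks by similar Y values (rows) and X intervals (columns) is the part you would still have to write.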
I do not have an example for the latter variant because generically recognizing and parsing tables is a whole project in itself. You might want to look into the Tabula project for inspiration; this project is surprisingly good at the task of table extraction.
PS: If you feel more at home with trying to extract structured content from a pure string representation of the content which nonetheless tries to reflect the original layout, you might try something like what is proposed in this answer, a variant of the LocationTextExtractionStrategy working similarly to the pdftotext -layout tool; only the changes to be applied to the LocationTextExtractionStrategy are shown there.
PPS: Extraction of data from very specific PDF tables may be much easier; for example have a look at this answer which demonstrates that after some PDF analysis the specific way a given table is created might give rise to a simple custom render listener for extracting the table data. This can make sense for a single PDF with a table spanning many many pages like in the case of that answer, or it can make sense if you have many PDFs identically created by the same software.
This is why I asked for a representative sample file in a comment to your question
Concerning your comments
Still with the pdf example above, both with an implementation from scratch of ITextExtractionStrategy and with extending LocationTextExtractionStrategy, I see that RenderText is called with the following chunks: Fi, el, d, A, Fi, el, d... and so on. Can this be changed?
The chunks of text you get as separate RenderText calls are not separated by accident or some random decision of iText. They are the very strings drawn separately in the page content!
In your sample "Fi", "el", "d", and "A" come in different RenderText calls because the content stream contains operations in which first "Fi" is drawn, then "el", then "d", then "A".
This may sound weird at first. A common cause for such torn up words is that PDF does not use the kerning information from fonts; to apply kerning, therefore, the PDF generating software has to insert tiny forward or backward jumps between characters which should be farther from or nearer to each other than without kerning. Thus, words often are torn apart between kerning pairs.
So this cannot be changed, you will get those pieces, and it is the job of the text extraction strategy to put them together.
By the way, there are worse PDFs; some PDF generators position each and every glyph separately, foremost generators which primarily build GUIs but can, as a feature, automatically export GUI canvases as PDFs.
I would expect that in entering the realm of "adding my own implementation" I would have control over how to determine what is a "chunk" of text.
You can... well, you have to decide which of the incoming pieces belong together and which don't. E.g. do glyphs with the same y coordinate form a single line? Or do they form separate lines in different columns which just happen to be located next to each other?
So yes, you decide which glyphs you interpret as a single word or as content of a single table cell, but your input consists of the groups of glyphs used in the actual PDF content stream.
Not only that, but in none of the interface's methods can I "spot" how/where it deals with non-text data/images - so I could intercede with the spacing issue (RenderImage is not called)
RenderImage will be called for embedded bitmap images, JPEGs etc. If you want to be informed about vector graphics, your strategy will also have to implement IExtRenderListener which provides methods ModifyPath, RenderPath and ClipPath.
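As a rough sketch (iTextSharp 5.5.x; the class name is invented and the method bodies are placeholders), such a listener adds these members on top of the usual IRenderListener ones:
using iTextSharp.text.pdf.parser;

public class VectorGraphicsListener : IExtRenderListener
{
    // IRenderListener members
    public void BeginTextBlock() { }
    public void EndTextBlock() { }
    public void RenderText(TextRenderInfo renderInfo) { }
    public void RenderImage(ImageRenderInfo renderInfo) { }

    // IExtRenderListener members for vector graphics
    public void ModifyPath(PathConstructionRenderInfo renderInfo)
    {
        // Called for path construction operators (m, l, c, re, ...);
        // table rulings are typically built up here
    }

    public Path RenderPath(PathPaintingRenderInfo renderInfo)
    {
        // Called when the current path is stroked and/or filled;
        // return null unless you want to supply a clipping path
        return null;
    }

    public void ClipPath(int rule)
    {
        // Called when the current path is used for clipping
    }
}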
This isn't really an answer, but I needed a spot to show some things that might help you understand the problem.
First "conversion" from Excel, Word, PowerPoint, HTML or whatever to PDF is almost always going to be a destructive change. The destructive part is very important and it happens because you are taking data from a program that has very specific knowledge of what that data represents (Excel) and you are turning it into drawing commands in a very generic universal format (PDF) that only cares about what the data looks like, not the data itself. Unless the data is "tagged" (and it almost never is these days still) then there is no context for the drawing commands. There are no paragraphs, there are no sentences, there are no columns, rows, tables, etc. There's literally just draw this letter at x,y and draw this word at a,b.
Second, imagine you Excel file had that following data and for some reason that last column was narrower than the others when the PDF was made:
Column A | Column B | Column
C
Data #1 Data #2 Data
#3
You and I have context so we know that the second and fourth lines are really just the continuation of the first and third lines. But since iText doesn't have any context during extraction it doesn't think like that and it sees four lines of text. In fact, since it doesn't have context it doesn't even see columns, just the lines themselves.
Third, although it is a very small thing, you need to understand that you don't draw spaces in PDF. Imagine the three column table below:
Column A | Column B | Column C
                      Yes
If you extracted that from a PDF you'd get this data:
Column A | Column B | Column C
Yes
Inside the PDF, the word "Yes" will just be drawn at a certain x coordinate that you and I consider to be under the third column, and it won't have a bunch of spaces in front of it.
As I said at the beginning, this isn't much of an answer, but hopefully it will explain the problem that you are trying to solve. If your PDF is tagged then it will have context, and you can use that context during extraction. Context isn't universal, however, so there usually isn't just a magic "insert context" checkbox. Excel actually does have a checkbox (if I remember correctly) to make a tagged PDF during export, and it ultimately creates a tagged PDF using HTML-like tags for tables. Very primitive, but it works. However, it will be up to you to parse this context.
Leaving here an alternative strategy for extracting the data. It does not solve the problem of how spaces are treated (or can be treated), but it gives you somewhat more control over the extraction by specifying geometric areas you want to extract text from. Taken from here.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using iTextSharp.text.pdf;
using iTextSharp.text.pdf.parser;

public static System.util.RectangleJ GetRectangle(float distanceInPixelsFromLeft, float distanceInPixelsFromBottom, float width, float height)
{
    return new System.util.RectangleJ(
        distanceInPixelsFromLeft,
        distanceInPixelsFromBottom,
        width,
        height);
}

public static void Strategy2()
{
    // In this example, I'll declare a pageNumber integer variable to
    // only capture text from the page I'm interested in
    int pageNumber = 1;
    var text = new StringBuilder();
    List<Tuple<string, int>> result = new List<Tuple<string, int>>();

    // The PdfReader object implements IDisposable.Dispose, so you can
    // wrap it in the using keyword to automatically dispose of it
    using (var pdfReader = new PdfReader("D:/Example.pdf"))
    {
        float distanceInPixelsFromLeft = 20;
        //float distanceInPixelsFromBottom = 730;
        float width = 300;
        float height = 10;

        // Sweep down the page in 10-pixel bands, extracting whatever
        // text falls inside each band
        for (int i = 800; i >= 0; i -= 10)
        {
            var rect = GetRectangle(distanceInPixelsFromLeft, i, width, height);
            var filters = new RenderFilter[1];
            filters[0] = new RegionTextRenderFilter(rect);
            ITextExtractionStrategy strategy =
                new FilteredTextRenderListener(
                    new LocationTextExtractionStrategy(),
                    filters);
            var currentText = PdfTextExtractor.GetTextFromPage(
                pdfReader,
                pageNumber,
                strategy);
            currentText =
                Encoding.UTF8.GetString(Encoding.Convert(
                    Encoding.Default,
                    Encoding.UTF8,
                    Encoding.Default.GetBytes(currentText)));
            //text.Append(currentText);
            result.Add(new Tuple<string, int>(currentText, currentText.Length));
        }
    }

    // You'll do something else with it, here I write it to a console window
    //Console.WriteLine(text.ToString());
    foreach (var line in result.Distinct().Where(r => !string.IsNullOrWhiteSpace(r.Item1)))
    {
        Console.WriteLine("Text: [{0}], Length: {1}", line.Item1, line.Item2);
    }
    //Console.WriteLine("", string.Join("\r\n", result.Distinct().Where(r => !string.IsNullOrWhiteSpace(r.Item1))));
}
Outputs:
PS: We are still left with the problem of how to deal with spaces/non-text data.