I'm using Lift to generate my web front end.
In the Scala file I have a list: val testList = List("part1","part2","part3")
I'm applying a function to each element of the list. For now I just want to make them bold. I know there is another way to make them bold by changing the HTML code, but that's not the point of this exercise. I'm trying to see if I can generate the HTML in the Scala file as opposed to the .html file.
I defined a function
def formatText(s:String)={
<B> s </B>
}
and I call var testList2 = testList.map(formatText(_))
The problem is that in the output all I see is s s s in bold. If I put quotes around the <B> then the string is escaped, so instead of getting part1 (in bold), I get < B >part1< / B >.
How do I display those strings in bold? Is there a $s to tell Lift/Scala that I mean the variable s and not the character s in formatText?
The XML-literal "escape" characters (for adding variables, expressions, etc) are { and }:
def formatText(s:String)= <B> {s} </B>
Take a look at Programming in Scala, section 26.3 ("XML Literals"), for more details.
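Putting the pieces together, a minimal self-contained sketch of the fix (note that Scala XML preserves tag case; lowercase <b> is used here):

```scala
import scala.xml.Elem

// The corrected function: {s} splices the parameter's value into the
// XML literal, instead of emitting the literal character "s".
def formatText(s: String): Elem = <b>{s}</b>

val testList = List("part1", "part2", "part3")
val testList2 = testList.map(formatText)

println(testList2.mkString)
// <b>part1</b><b>part2</b><b>part3</b>
```

Anything inside { } in an XML literal is evaluated as an ordinary Scala expression, so method calls and arithmetic work there too.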
I have a document I would like to parse one line at a time. My tokens are entire lines:
Pizza Is Great
12
14
17
red
blue
buckle my shoe
PS. I <3 🐈🐈🐈
I could match the above with a grammar (pseudocode) something like:
text → /.*/
int → /[0-9]+/
blank → /^\s+$/
Document → text + blank + int* + blank + text* + blank + text
What I want is to send each line independently into the parser as a token and try to match it, but every solution I have tried so far (scala-parser-combinators, FastParse, etc.) requires me to tediously define each token with the newline attached in order to break it apart correctly. Clearly I don't actually want my grammar to know about the newlines; they should be used to tokenize the input before it ever hits the parser.
Is there a Scala-compatible parsing solution that can work line-by-line in this way, so that the newlines disappear from my grammar definition entirely? (Could someone show me a simple example?)
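One way to keep the newlines out of the grammar entirely (a sketch, not a full parser-combinator solution) is a separate tokenizing pass: split the input into lines first and classify each line as a token, so no pattern ever mentions '\n'. The names here (LineToken, tokenize, etc.) are illustrative, not from any particular library:

```scala
// Line-level tokens matching the pseudocode grammar: text, int, blank.
sealed trait LineToken
case class Text(s: String)   extends LineToken
case class IntLine(n: Int)   extends LineToken
case object Blank            extends LineToken

// Tokenize whole lines up front; the grammar then operates on
// List[LineToken] and never sees a newline character.
def tokenize(doc: String): List[LineToken] =
  doc.linesIterator.map {
    case l if l.trim.isEmpty      => Blank
    case l if l.matches("[0-9]+") => IntLine(l.toInt)
    case l                        => Text(l)
  }.toList
```

With scala-parser-combinators, the same idea can be pushed further by implementing a `Reader[LineToken]` over this token list so the combinators match tokens instead of characters.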
I have a subreport in SSRS which returns text embedded in HTML tags. I would like to know if there is a way of stripping these HTML tags so as to keep only the text. I am using VS 2008.
I have tried using a regex function as below to strip the HTML tags but this does not work:
Shared Function RemoveHtml(ByVal Text As String) As String
    If Text IsNot Nothing Then
        Dim mRemoveTagRegex As New System.Text.RegularExpressions.Regex("<(.|\n)+?>")
        Return mRemoveTagRegex.Replace(Text, "")
    End If
End Function
You could probably just use a combination of the built-in functions provided with SSRS to do what you need. I'd recommend combining Mid with InStr. The following expression will take the value between the last character of the opening HTML tag and the first character of the closing HTML tag.
=Mid(Fields!Field.Value,
     InStr(Fields!Field.Value, ">") + 1,
     InStrRev(Fields!Field.Value, "</")
       - Len(Left(Fields!Field.Value,
                  InStr(Fields!Field.Value, ">") + 1)))
Edit: It got a little more complex than I thought, but this should do the trick.
In order to use if statements in FreeMarker templates, the following syntax is used:
[#if ${numberCoupons} <= 1]
[#assign couponsText = 'coupon']
[/#if]
Is there a way to replace the '#' character with something else? I am trying to integrate it with Drools (a Java-based rule engine), where the '#' character marks the start of a comment, so the formatting breaks.
There isn't anything for that out of the box (FreeMarker uses a JavaCC-generated parser, which is static). But you can write a TemplateLoader that just delegates to another TemplateLoader, replacing the Reader with a FilterReader that replaces [% and [/% and [%-- and --%] with [#, etc. Then you can use % instead of # in the FreeMarker tags. (It's somewhat confusing, though, as error messages will still use #, etc.)
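The Reader-side rewriting described above boils down to a textual substitution. A sketch of that substitution on a plain string (a real implementation would apply the same mapping character-stream-wise inside the FilterReader returned by the delegating TemplateLoader's getReader; the function name here is illustrative):

```scala
// Translate %-style markers back to FreeMarker's #-style square-bracket
// tags. The multi-character comment markers are handled explicitly;
// note FreeMarker's own comment closer is "--]" (no #).
def percentToHash(template: String): String =
  template
    .replace("[%--", "[#--")  // comment open
    .replace("--%]", "--]")   // comment close
    .replace("[/%", "[/#")    // closing tags, e.g. [/%if] -> [/#if]
    .replace("[%", "[#")      // opening tags, e.g. [%if]  -> [#if]
```

This keeps '#' out of the template source that Drools sees, while FreeMarker still receives its normal syntax.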
As @ddekany wrote, you can write code that transforms the template without the pound sign. But notice that it can clash with HTML or XML (and similar) tags, at least from an editor's perspective.
I've got an existing DOORS module which happens to have some rich-text entries; these entries have some symbols in them, such as 'curly' quotes. I'm trying to upgrade a DXL macro which exports a LaTeX source file, and the problem is that these high-number symbols are not considered "standard UTF-8" by Texmaker's import function (and in any case probably won't be processed by XeLaTeX or other converters). I can't simply use the UnicodeString functions in DXL because those break the rest of the rich text, and apparently the character identifier charOf(decimal_number_code) only works over the basic set of characters, i.e. below some numeric code value. For example, charOf(8217) should create a right curly single quote, but when I tried code along the lines of
if (charOf(8217) == one_char)
I never get a match. I did copy the curly quote from the DOORS module and verified via an online Unicode analyzer that it was definitely Unicode decimal value 8217.
So, what am I missing here? I just want to be able to detect any symbol character, identify it correctly, and then replace it with, e.g., \textquoteright in the output stream.
My overall setup works for lower-count chars, since this works:
(c is a single character pulled from a string)
thedeg = charOf(176)
if( thedeg == c )
{
temp += "$\\degree$"
}
Got some help from DXL coding experts over at IBM forums.
Quoting the important stuff (there's some useful code snippets there as well):
Hey, you are right it seems intOf(char) and charOf(int) both do some
modulo 256 and therefore cut anything above that off. Try:
int i=8217;
char c = addr_(i);
print c;
Which then allows comparison of c with any input char.
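The modulo-256 truncation the reply describes is easy to see with ordinary integer arithmetic. A quick illustration (in Scala rather than DXL) of why charOf(8217) can never match the curly quote:

```scala
// charOf/intOf reportedly truncate modulo 256, so code point 8217
// (RIGHT SINGLE QUOTATION MARK) collapses to 8217 % 256 = 25, a control
// character that will never equal the curly quote.
val codePoint = 8217
val truncated = codePoint % 256   // 25 -- what charOf effectively sees
val curly     = '\u2019'          // the actual right curly single quote

assert(truncated != curly.toInt)  // why the charOf(8217) comparison fails
assert(curly.toInt == codePoint)  // comparing full code points does work
```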
Can actions in Lex access individual regex groups?
(NOTE: I'm guessing not, since the group characters - parentheses - are according to the documentation used to change precedence. But if so, do you recommend an alternative C/C++ scanner generator that can do this? I'm not really hot on writing my own lexical analyzer.)
Example:
Let's say I have this input: foo [tagName attribute="value"] bar and I want to extract the tag using Lex/Flex. I could certainly write this rule:
\[[a-z]+[[:space:]]+[a-z]+=\"[a-z]+\"\] printf("matched %s", yytext);
But let's say I would want to access certain parts of the string, e.g. the attribute but without having to parse yytext again (as the string has already been scanned it doesn't really make sense to scan part of it again). So something like this would be preferable (regex groups):
\[[a-z]+[[:space:]]+[a-z]+=\"([a-z]+)\"\] printf("matched attribute %s", $1);
You can separate it into start conditions. Something like this:
%x VALUEPARSE ENDSTATE
%%
    char string_buf[100];

<INITIAL>\[[a-z]+[[:space:]]+[a-z]+=\"  {BEGIN(VALUEPARSE);}
<VALUEPARSE>([a-z]+)  {strncpy(string_buf, yytext, yyleng); string_buf[yyleng] = '\0'; /* capture the value text */ BEGIN(ENDSTATE);}
<ENDSTATE>\"\]  {BEGIN(INITIAL);}
%%
About an alternative C/C++ scanner generator - for this kind of thing I use the Qt class QRegularExpression, which makes it very easy to get a regex group after a match.
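The same group-extraction idea works in most regex libraries. For comparison, a Scala sketch that pulls the attribute value out of the bracketed tag with capture groups (the pattern is widened to [A-Za-z]+ so it actually matches "tagName"; names here are illustrative):

```scala
// Group 1 = tag name, group 2 = attribute name, group 3 = attribute value.
val TagPattern = """\[([A-Za-z]+)\s+([A-Za-z]+)="([A-Za-z]+)"\]""".r

// Return the attribute value of the first tag found, if any.
def extractAttribute(s: String): Option[String] =
  TagPattern.findFirstMatchIn(s).map(_.group(3))
```

For example, extractAttribute("foo [tagName attribute=\"value\"] bar") yields Some("value") without rescanning the matched text by hand.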
Certainly at least some forms of them do.
But the default lex/flex downloadable from SourceForge does not seem to list it in its documentation, and this example leaves the full string in yytext.
From IBM's LEX documentation for AIX:
(Expression)
Matches the expression in the parentheses.
The () (parentheses) operator is used for grouping and causes the expression within parentheses to be read into the yytext array. A group in parentheses can be used in place of any single character in any other pattern.
Example: (ab|cd+)?(ef)* matches such strings as abefef, efefef, cdef, or cddd; but not abc, abcd, or abcdef.