I am trying to extract a message ID from the body of a message using procmail, whether it appears in the message itself or in an attachment, but with the recipes below I only get the first two characters:
:0B
* ^()[a-z]+[0-9]+[^\+]
{ msgID = "$MATCH" }
:0B
* ^Content-Disposition: *attachment.*(($)[a-z0-9].*)*($)($)\/[a-z]+[0-9]+
| base64 --decode
{ msgID = "$MATCH" }
The base64 decoding is not working: either the condition as a whole does not match, or the recipe ends up assigning LASTFOLDER="base64 --decode".
See also my earlier question
You can't have two actions in a single recipe. Where you have
:0B
* ^stuff\/more stuff
{ msgid="$MATCH" }
| base64 --decode
the second action line is a syntax error. I'm guessing you mean something like
:0B
* ^stuff\/more stuff
{
msgid="$MATCH"
:0
| base64 --decode
}
See? If you need more than one action, add a pair of braces around another group of recipes as the action section (and don't forget the closing brace at the end!). See also http://www.iki.fi/era/procmail/quickref.html
But in your case, you apparently want to extract the token base64 decoded, so the code you are looking for would be
:0B
* ^Content-Disposition: *attachment.*(($)[a-z0-9].*)*($)($)\/[a-z]+[0-9]+
{ msgID=`echo "$MATCH" | base64 --decode` }
Notice also that you cannot have spaces around the equals sign. And the regex is incorrect for arbitrary base64: plus and slash are also among the allowed characters, as is equals at the end for padding. Change the part after \/ to [a-z0-9/+]+=* (procmail regexes are case-insensitive by default, so this covers upper case too).
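Putting those fixes together, the recipe would look something like this (an untested sketch, just the answer's recipe with the corrected character class):
:0B
* ^Content-Disposition: *attachment.*(($)[a-z0-9].*)*($)($)\/[a-z0-9/+]+=*
{ msgID=`echo "$MATCH" | base64 --decode` }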
I want to parse the Scala grammar with flex and bison, but I don't know how to handle the newline token in the Scala grammar.
If I parse a newline as a token T_NL, here's Toy.l for example:
...
[a-zA-Z_][a-zA-Z0-9_]* { yylval->literal = strdup(yytext); return T_ID; }
\n { yylval->token = T_NL; return T_NL; }
[ \t\v\f\r] { /* skip whitespaces */ }
...
And here's the Toy.y for example:
function_def: 'def' T_ID '(' argument_list ')' return_expression '=' expression T_NL
;
argument_list: argument
| argument ',' argument_list
;
expression: ...
;
return_expression: ...
;
You can see that I would have to skip T_NL in all the other statements and definitions in Toy.y, which is really tedious.
Please educate me with a source code example!
This is a clear case where bison push-parsers are useful. The basic idea is that the decision to send an NL token (or tokens) can only be made when the following token has been identified (and, in one corner case, the second following token).
The advantage of push parsers is that they let us implement strategies like this, where there is not necessarily a one-to-one relationship between input lexemes and tokens sent to the parser. I'm not going to deal with all the particularities of setting up a push parser (though it's not difficult); you should refer to the bison manual for details. [Note 1]
First, it's important to read the Scala language description with care. Newline processing is described in section 2.13:
A newline in a Scala source text is treated as the special token “nl” if the three following criteria are satisfied:
The token immediately preceding the newline can terminate a statement.
The token immediately following the newline can begin a statement.
The token appears in a region where newlines are enabled.
Rules 1 and 2 are simple lookup tables, which are precisely defined in the following two paragraphs. Rule 2 has just one minor exception, described below:
A case token can begin a statement only if followed by a class or object token.
One hackish possibility to deal with that exception would be to add case[[:space:]]+class and case[[:space:]]+object as lexemes, on the assumption that no-one will put a comment between case and class. (Or you could use a more sophisticated pattern, which allows comments as well as whitespace.) If one of these lexemes is recognised, it could either be sent to the parser as a single (fused) token, or it could be sent as two tokens using two invocations of SEND in the lexer action. (Personally, I'd go with the fused token, since once the pair of tokens has been recognised, there is no advantage to splitting them up; afaik, there's no valid program in which case class can be parsed as anything other than case class. But I could be wrong.)
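For illustration, the fused-token rules might look like this in the lexer (T_CASE_CLASS, T_CASE_OBJECT and T_CASE are hypothetical token names, SEND is the macro defined below, and flex's longest-match rule ensures the fused patterns win over plain case):
case[[:space:]]+class    { SEND(T_CASE_CLASS); }
case[[:space:]]+object   { SEND(T_CASE_OBJECT); }
"case"                   { SEND(T_CASE); }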
To apply rules one and two, then, we need two lookup tables indexed by token number: token_can_end_stmt and token_cannot_start_stmt. (The second one has its meaning reversed because most tokens can start statements; doing it this way simplifies initialisation.)
/* YYNTOKENS is defined by bison if you specify %token-table */
static bool token_can_end_stmt[YYNTOKENS] = {
[T_THIS] = true, [T_NULL] = true, /* ... */
};
static bool token_cannot_start_stmt[YYNTOKENS] = {
[T_CATCH] = true, [T_ELSE] = true, /* ... */
};
We're going to need a little bit of persistent state, but fortunately when we're using a push parser, the scanner does not need to return to its caller every time it recognises a token, so we can keep the persistent state as local variables in the scan loop. (That's another advantage of the push-parser architecture.)
From the above description, we can see that what we're going to need to maintain in the scanner state are:
some indication that a newline has been encountered. This needs to be a count, not a boolean, because we might need to send two newlines:
if two tokens are separated by at least one completely blank line (i.e. a line which contains no printable characters), then two nl tokens are inserted.
A simple way to handle this is to simply compare the current line number with the line number at the previous token. If they are the same, then there was no newline. If they differ by only one, then there was no blank line. If they differ by more than one, then there was either a blank line or a comment (or both). (It seems odd to me that a comment would not trigger the blank line rule, so I'm assuming that it does. But I could be wrong, which would require some adjustment to this scanner.) [Note 2]
the previous token (for rule 1). It's only necessary to record the token number, which is a simple small integer.
some way of telling whether we're in a "region where newlines are enabled" (for rule 3). I'm pretty sure that this will require assistance from the parser, so I've written it that way here.
By centralising the decision about sending a newline into a single function, we can avoid a lot of code duplication. My typical push-parser architecture uses a SEND macro anyway, to deal with the boilerplate of saving the semantic value, calling the parser, and checking for errors; it's easy to add the newline logic there:
// yylloc handling mostly omitted for simplicity
#define SEND_VALUE(token, tag, value) do { \
yylval.tag = value; \
SEND(token); \
} while(0);
#define SEND(token) do { \
int status = YYPUSH_MORE; \
if (yylineno != prev_lineno \
&& token_can_end_stmt[prev_token] \
&& !token_cannot_start_stmt[token] \
&& nl_enabled) { \
status = yypush_parse(ps, T_NL, NULL, &yylloc, &nl_enabled); \
if (status == YYPUSH_MORE \
&& yylineno - prev_lineno > 1) \
status = yypush_parse(ps, T_NL, NULL, &yylloc, &nl_enabled); \
} \
nl_encountered = 0; \
if (status == YYPUSH_MORE) \
status = yypush_parse(ps, token, &yylval, &yylloc, &nl_enabled); \
if (status != YYPUSH_MORE) return status; \
prev_token = token; \
prev_lineno = yylineno; \
} while (0);
Specifying the local state in the scanner is extremely simple; just place the declarations and initialisations at the top of your scanner rules, indented. Any indented code prior to the first rule is inserted directly into yylex, almost at the top of the function (so it is executed once per call, not once per lexeme):
%%
int nl_encountered = 0;
int prev_token = 0;
int prev_lineno = 1;
bool nl_enabled = true;
YYSTYPE yylval;
YYLTYPE yylloc = {0};
Now, the individual rules are pretty simple (except for case). For example, we might have rules like:
"while" { SEND(T_WHILE); }
[[:lower:]][[:alnum:]_]* { SEND_VALUE(T_VARID, str, strdup(yytext)); }
That still leaves the question of how to determine if we are in a region where newlines are enabled.
Most of the rules could be handled in the lexer by just keeping a stack of different kinds of open parentheses, and checking the top of the stack: If the parenthesis on the top of the stack is a {, then newlines are enabled; otherwise, they are not. So we could use rules like:
[{([] { paren_stack_push(yytext[0]); SEND(yytext[0]); }
[])}] { paren_stack_pop(); SEND(yytext[0]); }
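paren_stack_push and paren_stack_pop are not defined in this answer; a minimal sketch (hypothetical names, fixed-depth stack with no error recovery) might be:
#include <assert.h>
#include <stdbool.h>

#define MAX_PAREN_DEPTH 256
static char paren_stack[MAX_PAREN_DEPTH];
static int  paren_depth = 0;

/* Record an opening bracket. */
static void paren_stack_push(char open) {
  assert(paren_depth < MAX_PAREN_DEPTH);
  paren_stack[paren_depth++] = open;
}

/* Discard the innermost bracket. */
static void paren_stack_pop(void) {
  assert(paren_depth > 0);
  --paren_depth;
}

/* Newlines are enabled at top level and when the innermost bracket is '{'. */
static bool newlines_enabled_here(void) {
  return paren_depth == 0 || paren_stack[paren_depth - 1] == '{';
}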
However, that doesn't deal with the requirement that newlines be disabled between a case and its corresponding =>. I don't think it's possible to handle that as another type of parenthesis, because there are lots of uses of => which do not correspond with a case, and I believe some of them can come between a case and its corresponding =>.
So a better approach would be to put this logic into the parser, using lexical feedback to communicate the state of the newline-region stack, which is what is assumed in the calls to yypush_parse above. Specifically, the scanner and the parser share one boolean variable (the scanner passes a pointer to it into the parser). [Note 3] The parser then maintains the value of this variable in mid-rule actions (MRAs) in each rule which matches a region of potentially different newlinedness, using the parse stack itself as a stack. Here's a small excerpt of a (theoretical) parser:
%define api.pure full
%define api.push-pull push
%locations
%parse-param { bool* nl_enabled; }
/* More prologue */
%%
// ...
/* Some example productions which modify nl_enabled: */
/* The actions always need to be before the token, because they need to take
* effect before the next lookahead token is requested.
* Note how the first MRA's semantic value is used to keep the old value
* of the variable, so that it can be restored in the second MRA.
*/
TypeArgs : <boolean>{ $$ = *nl_enabled; *nl_enabled = false; }
'[' Types
{ *nl_enabled = $1; } ']'
CaseClause : <boolean>{ $$ = *nl_enabled; *nl_enabled = false; }
"case" Pattern opt_Guard
{ *nl_enabled = $1; } "=>" Block
FunDef : FunSig opt_nl
<boolean>{ $$ = *nl_enabled; *nl_enabled = true; }
'{' Block
{ *nl_enabled = $3; } '}'
Notes:
Note 1: Push parsers have many other advantages; IMHO they are the solution of choice. In particular, using push parsers avoids the circular header dependency which plagues attempts to build pure parser/scanner combinations.
Note 2: There is still the question of multiline comments with preceding and trailing text:
return /* Is this comment
a newline? */ 42
I'm not going to try to answer that question.
Note 3: It would be possible to keep this flag in the YYLTYPE structure, since only one instance of yylloc is ever used in this example. That might be a reasonable optimisation, since it cuts down on the number of parameters sent to yypush_parse. But it seemed a bit fragile, so I opted for a more general solution here.
I'm hoping this is an easy question, and that I'm just not seeing the forest due to all the trees.
I have a string in Flutter that came from a REST API that looks like this:
"What\u0027s this?"
The \u is causing a problem.
I can't do a string.replaceAll("\", "\") on it, as the single backslash means it's escaping the character after it, which is not what I need.
I tried doing a string.replaceAll(String.fromCharCode(0x92), "") to remove it - that didn't work.
I then tried using a regex to remove it, like string.replaceAll("/(?:\)/", ""), and the same single backslash remains.
So, the question is how to remove that single backslash, so I can add in a double backslash, or replace it with a double backslash?
Cheers
Jase
I found the issue. I was looking for hex 92 (0x92) and it should have been decimal 92.
I ended up solving the issue like this...
String removeUnicodeApostrophes(String strInput) {
// First remove the single backslash.
String strModified = strInput.replaceAll(String.fromCharCode(92), "");
// Now, we can replace the rest of the unicode with a proper apostrophe.
return strModified.replaceAll("u0027", "\'");
}
When the string is read, I assume what's happening is that it's being interpreted literally rather than as what it should be (code points), i.e. each character of \u0027 is a separate character. You may actually be able to fix this depending on how you access the API - see the dart:convert library. If you use utf8.decode on the raw data you may be able to avoid this entire problem.
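For instance, if the body is JSON, then jsonDecode (also in dart:convert) performs exactly this unescaping; a small sketch with a hypothetical payload:
import 'dart:convert';

void main() {
  // Hypothetical raw API body: the \u0027 escape is still literal here.
  const rawBody = '{"message": "What\\u0027s this?"}';
  // jsonDecode interprets \u0027 as a code point, yielding an apostrophe.
  final data = jsonDecode(rawBody) as Map<String, dynamic>;
  print(data['message']); // What's this?
}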
However, if that's not an option there's an easy enough solution for you.
What's happening when you're writing out your regex or replace is that you're not escaping the backslash, so it's essentially becoming nothing. If you use a double backslash, that solves the problem, as it escapes the escape character: "\\" => "\".
The other option is to use a raw string like r"\" which ignores the escape character.
Paste this into https://dartpad.dartlang.org:
String withapostraphe = "What\u0027s this?";
String withapostraphe1 = withapostraphe.replaceAll('\u0027', '');
String withapostraphe2 = withapostraphe.replaceAll(String.fromCharCode(0x27), '');
print("Original encoded properly: $withapostraphe");
print("Replaced with nothing: $withapostraphe1");
print("Using char code for ': $withapostraphe2");
String unicodeNotDecoded = "What\\u0027s this?";
String unicodeWithApostraphe = unicodeNotDecoded.replaceAll('\\u0027', '\'');
String unicodeNoApostraphe = unicodeNotDecoded.replaceAll('\\u0027', '');
String unicodeRaw = unicodeNotDecoded.replaceAll(r"\u0027", "'");
print("Data as read with escaped unicode: $unicodeNotDecoded");
print("Data replaced with apostraphe: $unicodeWithApostraphe");
print("Data replaced with nothing: $unicodeNoApostraphe");
print("Data replaced using raw string: $unicodeRaw");
To see the result:
Original encoded properly: What's this?
Replaced with nothing: Whats this?
Using char code for ': Whats this?
Data as read with escaped unicode: What\u0027s this?
Data replaced with apostraphe: What's this?
Data replaced with nothing: Whats this?
Data replaced using raw string: What's this?
The embedded expressions are not replaced when appended, prepended or surrounded by characters in the following simplified and very basic scenario:
* def jobId = '0001'
* def out =
"""
{
"jobId": "#(jobId)",
"outputMetadata": {
"fileName_OK": "#(jobId)",
"fileName_Fail_1": "some_text_#(jobId)",
"fileName_Fail_2": "#(jobId)-and-some-more-text",
"fileName_Fail_3": "prepend #(jobId) and append"
}
}
"""
* print out
Executing the scenario returns:
{
"jobId": "0001",
"outputMetadata": {
"fileName_OK": "0001",
"fileName_Fail_1": "some_text_#(jobId)",
"fileName_Fail_2": "#(jobId)-and-some-more-text",
"fileName_Fail_3": "prepend #(jobId) and append"
}
}
Is it a feature, a limitation, or a bug? Or, did I miss something?
This is as designed! You can do this:
"fileName_Fail_2": "#(jobId + '-and-some-more-text')"
Any valid JS expression can be stuffed into an embedded expression, so this is not a limitation. Note that embedded expressions work only within JSON string values, or when the entire right-hand side is a string within quotes; this keeps the parsing simple. Hope that helps!
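Applied to the scenario above, the failing lines would become, for example:
"fileName_Fail_1": "#('some_text_' + jobId)",
"fileName_Fail_2": "#(jobId + '-and-some-more-text')",
"fileName_Fail_3": "#('prepend ' + jobId + ' and append')"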
I have this string passed from a JavaScript form:
4 H/M’s
This gets posted to an array called $out, keyed by "blurb".
I use FPDF to output it, with MultiCell, like so:
$pdf->MultiCell(190,4,$out["blurb"]);
However, FPDF outputs this string:
4 H/M’s
I've tried
html_entity_decode($out["blurb"], ENT_QUOTES, "UTF-8")
but it doesn't seem to be working. Any suggestions?
Use iconv like this:
// $str: the string to encode for FPDF
// $myencoding: your current encoding
function fpdf_encode($str,$myencoding='latin1') {
return iconv($myencoding, 'windows-1252', trim(utf8_decode($str)));
}
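Then wrap your string when calling FPDF (assuming the posted form data arrives as UTF-8):
$pdf->MultiCell(190,4,fpdf_encode($out["blurb"]));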
I have several characters that aren't recognized properly.
Characters like:
º
á
ó
(etc..)
This means that the character encoding is not UTF-8, right?
So, can you tell me what character encoding it could be, please?
We don't have nearly enough information to really answer this, but the gist of it is: you shouldn't just guess. You need to work out where the data is coming from, and find out what the encoding is. You haven't told us anything about the data source, so we're completely in the dark. You might want to try Encoding.Default if these are files saved with something like Notepad.
If you know what the characters are meant to be and how they're represented in binary, that should suggest an encoding... but again, we'd need to know more information.
Read this first: http://www.joelonsoftware.com/articles/Unicode.html
There are two encodings: the one that was used to encode the string, and the one that is used to decode it. They must be the same to get the expected result. If they are different, some characters will be displayed incorrectly. We can try to guess if you post the actual and expected results.
I wrote a couple of methods to narrow down the possibilities a while back for situations just like this.
using System.Collections.Generic;
using System.Text;

static void Main(string[] args)
{
Encoding[] matches = FindEncodingTable('Ÿ');
Encoding[] enc2 = FindEncodingTable(159, 'Ÿ');
}
// Locates all Encodings with the specified Character and position
// "CharacterPosition": Decimal position of the character on the unknown encoding table. E.G. 159 on the extended ASCII table
//"character": The character to locate in the encoding table. E.G. 'Ÿ' on the extended ASCII table
static Encoding[] FindEncodingTable(int CharacterPosition, char character)
{
List<Encoding> matches = new List<Encoding>();
byte myByte = (byte)CharacterPosition;
byte[] bytes = { myByte };
foreach (EncodingInfo encInfo in Encoding.GetEncodings())
{
Encoding thisEnc = Encoding.GetEncoding(encInfo.CodePage);
char[] chars = thisEnc.GetChars(bytes);
if (chars[0] == character)
{
matches.Add(thisEnc);
}
}
return matches.ToArray();
}
// Locates all Encodings that contain the specified character
static Encoding[] FindEncodingTable(char character)
{
List<Encoding> matches = new List<Encoding>();
foreach (EncodingInfo encInfo in Encoding.GetEncodings())
{
Encoding thisEnc = Encoding.GetEncoding(encInfo.CodePage);
char[] chars = { character };
byte[] temp = thisEnc.GetBytes(chars);
// GetBytes never returns null; check instead that the character survives a round trip.
char[] roundTrip = thisEnc.GetChars(temp);
if (roundTrip.Length == 1 && roundTrip[0] == character)
matches.Add(thisEnc);
}
return matches.ToArray();
}
Encoding is a way of transforming existing content so that it can be parsed by the required destination protocol.
An example of encoding can be seen when browsing the internet:
The URL you visit, www.example.com, may have a search facility that runs custom searches via the URL address:
www.example.com?search=...
Variables in the URL like this require URL encoding. If you were to write:
www.example.com?search=cat food cheap
The browser wouldn't understand your request, as you have used an invalid character: ' ' (a white space).
To correct this encoding error, you should exchange the ' ' for '%20', forming this URL:
www.example.com?search=cat%20food%20cheap
Different systems use different forms of encoding; in this example I have used standard hex-based percent encoding for a URL. In other applications and instances you may find the need to use other types of encoding.
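In C#, for example, the framework can do this percent-encoding for you; a minimal sketch:
using System;

class UrlEncodeDemo
{
    static void Main()
    {
        // Uri.EscapeDataString percent-encodes reserved characters,
        // turning the space into %20.
        string query = Uri.EscapeDataString("cat food cheap");
        Console.WriteLine("www.example.com?search=" + query);
        // Output: www.example.com?search=cat%20food%20cheap
    }
}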
Good Luck!