In a StringTemplate how to temporarily suppress automatic indentation?
Suppose a template:
fooTemplate() ::= <<
I want this to be indented normally.
# I do not want this line to be indented.
>>
So you can understand the motivation: I am generating C code, and I do not want the preprocessor directives to be indented, e.g.
#if
To be clear, fooTemplate is not the only template.
It is called by other templates (which may nest several levels deep).
Introducing a special character into the template to temporarily disable indentation would be acceptable.
fooTemplate() ::= <<
I want this to be indented normally.
<\u0008># I do not want this line to be indented.
>>

I see that indentation is actually applied by the AutoIndentWriter: https://github.com/antlr/stringtemplate4/blob/master/doc/indent.md
I implemented my own SemiAutoIndentWriter, which looks for a magic character (\b in my case) in the stream.
When the magic character is seen, it sets a suppressIndent switch that suppresses indentation until the next newline.
package org.stringtemplate.v4;

import java.io.IOException;
import java.io.Writer;

/** An AutoIndentWriter that stops indenting for the rest of the current line
 *  whenever it sees the magic character '\b' in the stream. */
public class SemiAutoIndentWriter extends AutoIndentWriter {
    public boolean suppressIndent = false;

    public SemiAutoIndentWriter(Writer out) {
        super(out);
    }

    @Override
    public int write(String str) throws IOException {
        int n = 0;
        int nll = newline.length();
        int sl = str.length();
        for (int i = 0; i < sl; i++) {
            char c = str.charAt(i);
            if ( c=='\b' ) {            // magic character: suppress indent until the next newline
                suppressIndent = true;
                continue;
            }
            // found \n or \r\n newline?
            if ( c=='\r' ) continue;
            if ( c=='\n' ) {
                suppressIndent = false;
                atStartOfLine = true;
                charPosition = -nll;    // set so the write below sets to 0
                out.write(newline);
                n += nll;
                charIndex += nll;
                charPosition += n;      // wrote n more char
                continue;
            }
            // normal character
            // check to see if we are at the start of a line; need indent if so
            if ( atStartOfLine ) {
                if ( !suppressIndent ) n += indent();
                atStartOfLine = false;
            }
            n++;
            out.write(c);
            charPosition++;
            charIndex++;
        }
        return n;
    }
}
Note that '<\b>' is not recognized as a special character by ST4, but '<\u0008>' is.
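For completeness, here is roughly how the custom writer gets plugged in when rendering. This is only a minimal sketch (the group file name and the StringWriter destination are placeholders); it uses ST.write(STWriter) instead of render() so that our writer is used in place of the default AutoIndentWriter:
import java.io.StringWriter;
import org.stringtemplate.v4.*;

STGroup group = new STGroupFile("templates.stg");   // placeholder group file containing fooTemplate
ST st = group.getInstanceOf("fooTemplate");
StringWriter sw = new StringWriter();
st.write(new SemiAutoIndentWriter(sw));             // ST.write(STWriter) declares IOException
System.out.println(sw);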

Related

Bad address error when comparing Strings within BPF

I have an example program I am running here to see if the substring matches the string and then print the matches out. So far, I am having trouble running the program due to a bad address. I am wondering if there is a way to fix this problem? I have attached the entire code, but my problem is mostly related to isSubstring.
#include <uapi/linux/bpf.h>

#define ARRAYSIZE 64

struct data_t {
    char buf[ARRAYSIZE];
};

BPF_ARRAY(lookupTable, struct data_t, ARRAYSIZE);

//char name[20];

//find substring in a string
static bool isSubstring(struct data_t stringVal)
{
    char substring[] = "New York";
    int M = sizeof(substring);
    int N = sizeof(stringVal.buf) - 1;

    /* A loop to slide pat[] one by one */
    for (int i = 0; i <= N - M; i++) {
        int j;
        /* For current index i, check for pattern match */
        for (j = 0; j < M; j++)
            if (stringVal.buf[i + j] != substring[j])
                break;
        if (j == M)
            return true;
    }
    return false;
}

int Test(void *ctx)
{
    #pragma clang loop unroll(full)
    for (int i = 0; i < ARRAYSIZE; i++) {
        int k = i;
        struct data_t *line = lookupTable.lookup(&k);
        if (line) {
            // bpf_trace_printk("%s\n", key->buf);
            if (isSubstring(*line)) {
                bpf_trace_printk("%s\n", line->buf);
            }
        }
    }
    return 0;
}
My Python code is here:
import ctypes
from bcc import BPF

b = BPF(src_file="hello.c")
lookupTable = b["lookupTable"]

#add hello.csv to the lookupTable array
f = open("hello.csv","r")
contents = f.readlines()
for i in range(0,len(contents)):
    string = contents[i].encode('utf-8')
    print(len(string))
    lookupTable[ctypes.c_int(i)] = ctypes.create_string_buffer(string, len(string))
f.close()

b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="Test")
b.trace_print()
Edit: I forgot to add the error. It's really long and can be found here: https://pastebin.com/a7E9L230
I think the most interesting part of the error is near the bottom where it mentions:
The sequence of 8193 jumps is too complex.
And a little bit farther down mentions: Bad Address.
The verifier checks all branches in your program. Each time it sees a jump instruction, it pushes the new branch onto its "stack of branches to check". This stack has a limit (BPF_COMPLEXITY_LIMIT_JMP_SEQ, currently 8192) that you are hitting, as the verifier tells you. "Bad Address" is just the translation of the kernel's errno value, which is set to -EFAULT in that case.
I am not sure how to fix it, though. You could try:
- Using smaller strings, or
- Running on a 5.3+ kernel (which supports bounded loops) without unrolling the loop with clang (I don't know if it would help).

Generating DXL documentation using Doxygen : if is shown as a function

I am trying to generate some DXL documentation using Doxygen, but the results are often not correct. DXL is used as a scripting language and has a C/C++-like syntax with some changes; for example, I can omit the semicolons, which creates some problems while generating the documentation. What should I do to correct this problem? Here is an example of my DXL code:
string replace (string sSource, string sSearch, string sReplace) {
    int iLen = length sSource
    if (iLen == 0) return ""
    int iLenSearch = length(sSearch)
    if (iLenSearch == 0) {
        return ""
    }
    char firstChar = sSearch[0]
    Buffer s = create()
    int pos = 0, d1,d2;
    int i
    while (pos < iLen) {
        char ch = sSource[pos];
        bool found = true
        if (ch != firstChar) {pos ++; s+= ch; continue}
        for (i = 1; i < iLenSearch; i++) {
            if (sSource[pos+i] != sSearch[i]) { found = false; break }
        }
        if (!found) {pos++; s+= ch; continue}
        s += sReplace
        pos += iLenSearch
    }
    string result = stringOf s
    delete s
    return result
}
As I said, the main difference from C, and what may cause Doxygen to interpret this code incorrectly, is that in DXL we don't have to use ";".
Thanks in advance.
You must do three things to apply Doxygen successfully on DXL scripts:
1.) In Doxygen-GUI, 'Wizard' tab, section 'Mode' choose 'Optimize for C or PHP'
2.) The DXL code must be C-conform, i.e. each statement ends with a semicolon ';'
3.) In tab 'Expert' set language mapping for DXL and INC files in section 'Project' under 'EXTENSION_MAPPING':
dxl=C
inc=C
This all tells Doxygen to treat DXL scripts as C code.
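If you drive Doxygen from a plain Doxyfile rather than the GUI, the corresponding settings would look roughly like this (an illustrative excerpt; the FILE_PATTERNS values are an assumption about how the project picks up .dxl/.inc files):
# Doxyfile excerpt: treat DXL/INC sources as C
OPTIMIZE_OUTPUT_FOR_C  = YES
EXTENSION_MAPPING      = dxl=C inc=C
FILE_PATTERNS          = *.dxl *.inc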
Further, for DOORS to recognize a DXL file documented for Doxygen as valid and bind it to a menu item, it must comply with a certain header structure, consisting of a single-line and a multi-line comment, e.g.
// <dxl-file>
/**
 * @file <dxl-file>
 * @copyright (c) ...
 * @author Th. Grosser
 * @date 01 Dec 2017
 * @brief ...
 */

How to make 'ő' and 'ű' work in Java?

String word = inputField.getText();
int wordLength = word.length();
boolean backWord = false;
boolean longWord = false;
String backArray[] = new String[6];
backArray[0] = "a";
backArray[1] = "á";
backArray[2] = "ö";
backArray[3] = "ő";
backArray[4] = "ü";
backArray[5] = "ű";
for (int i = 0; i < wordLength; i++) {
    String character = word.substring(i, i + 1);
    for (int j = 0; j < 5; j++) {
        if (character.equals(backArray[j])) {
            backWord = true;
        }
    }
}
if (backWord) {
    outputField.setText(word + "ban");
}
else {
    outputField.setText(word + "ben");
}
This is the code I wrote for an applet for conjugating Hungarian nouns while taking vowel harmony into consideration. For the unaware, the TL;DR of vowel harmony is that Hungarian has lots of suffixes and you can determine which suffix to use based on the vowels in a word.
This code works fine for all the vowels, except for ő and ű. So if my input is 'szálloda', the output will be 'szállodaban'. However, if my input is 'idő' (weather) the output will be 'időben', though it should be 'időban' according to the code.
I assume this is because Java somehow doesn't recognize these two letters, since the code works fine for the other ones. Is that the problem? And if so, how do I solve it?
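One way to check that assumption (a small diagnostic sketch, not part of the original applet; the sample word is just an example) is to print the code points the program actually sees, e.g. whether 'ő' arrives as the single precomposed character U+0151 or in some decomposed form that equals() would not match:
// Diagnostic sketch: dump the code points of the input word and of the vowel table.
String word = "idő";    // stand-in for inputField.getText()
String[] backArray = { "a", "á", "ö", "ő", "ü", "ű" };
word.codePoints().forEach(cp -> System.out.printf("input: U+%04X%n", cp));
for (String v : backArray) {
    v.codePoints().forEach(cp -> System.out.printf("table: U+%04X%n", cp));
}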

order of execution of forked processes

#include<stdio.h>
#include<stdlib.h>
#include<unistd.h>
#include<sys/sem.h>
#include<sys/ipc.h>

int sem_id;

void update_file(int number)
{
    struct sembuf sem_op;
    FILE* file;
    printf("Inside Update Process\n");

    /* wait on the semaphore, unless it's value is non-negative. */
    sem_op.sem_num = 0;
    sem_op.sem_op = -1; /* <-- Amount by which the value of the semaphore is to be decreased */
    sem_op.sem_flg = 0;
    semop(sem_id, &sem_op, 1);

    /* we "locked" the semaphore, and are assured exclusive access to file. */
    /* manipulate the file in some way. for example, write a number into it. */
    file = fopen("file.txt", "a+");
    if (file) {
        fprintf(file, " \n%d\n", number);
        fclose(file);
    }

    /* finally, signal the semaphore - increase its value by one. */
    sem_op.sem_num = 0;
    sem_op.sem_op = 1;
    sem_op.sem_flg = 0;
    semop( sem_id, &sem_op, 1);
}

void write_file(char* contents)
{
    printf("Inside Write Process\n");
    struct sembuf sem_op;
    sem_op.sem_num = 0;
    sem_op.sem_op = -1;
    sem_op.sem_flg = 0;
    semop( sem_id, &sem_op, 1);

    FILE *file = fopen("file.txt","w");
    if(file)
    {
        fprintf(file,contents);
        fclose(file);
    }

    sem_op.sem_num = 0;
    sem_op.sem_op = 1;
    sem_op.sem_flg = 0;
    semop( sem_id, &sem_op, 1);
}

int main()
{
    //key_t key = ftok("file.txt",'E');
    sem_id = semget( IPC_PRIVATE, 1, 0600 | IPC_CREAT);
    /* here 100 is any arbit number to be assigned as the key of the
       semaphore, 1 is the number of semaphores in the semaphore set */
    if(sem_id == -1)
    {
        perror("main : semget");
        exit(1);
    }
    int rc = semctl( sem_id, 0, SETVAL, 1);

    pid_t u = fork();
    if(u == 0)
    {
        update_file(100);
        exit(0);
    }
    else
    {
        wait();
    }

    pid_t w = fork();
    if(w == 0)
    {
        write_file("Hello!!");
        exit(0);
    }
    else
    {
        wait();
    }
}
If I run the above code as C code, the write_file() function is called after the update_file() function, whereas if I run the same code as C++ code, the order of execution is reversed... why is it so?
Just some suggestions, but it looks to me like it could be caused by a combination of things:
1. The wait() call is supposed to take a pointer argument (that can be NULL). The compiler should have caught this, but you must be picking up another definition somewhere that permits your syntax. You are also missing an include for sys/wait.h. This might be why the compiler isn't complaining as I'd expect it to.
2. Depending on your machine/OS configuration, the forked process may not get to run until after the parent yields. Assuming the "wait()" you are calling isn't working the way we would be expecting, it is possible for the parent to execute completely before the children get to run.
Unfortunately, I wasn't able to duplicate the same temporal behavior. However, when I generated assembly files for each of the two cases (C & C++), I noticed that the C++ version is missing the "wait" system call, but the C version is as I would expect. To me, this suggests that somewhere in the C++ headers this special version without an argument is being #defined out of the code. This difference could be the reason behind the behavior you are seeing.
In a nutshell... add the #include, and change your wait calls to "wait(0)"

Removing comments with JFlex, but keeping line terminators

I'm writing a lexical specification for JFlex (it's like flex, but for Java). I have a problem with TraditionalComment (/* */) and DocumentationComment (/** */). So far I have this, taken from the JFlex User's Manual:
LineTerminator = \r|\n|\r\n
InputCharacter = [^\r\n]
WhiteSpace = {LineTerminator} | [ \t\f]
/* comments */
Comment = {TraditionalComment} | {EndOfLineComment} | {DocumentationComment}
TraditionalComment = "/*" [^*] ~"*/" | "/*" "*"+ "/"
EndOfLineComment = "//" {InputCharacter}* {LineTerminator}
DocumentationComment = "/**" {CommentContent} "*"+ "/"
CommentContent = ( [^*] | \*+ [^/*] )*
{Comment} { /* Ignore comments */ }
{LineTerminator} { return LexerToken.PASS; }
LexerToken.PASS means that later I'm passing line terminators on output. Now, what I want to do is:
Ignore everything which is inside the comment, except new line terminators.
For example, consider such input:
/* Some
* quite long comment. */
In fact it is /* Some\n * quite long comment. */\n. With the current lexer it will be converted to a single line: the output will be a single '\n'. But I would like to have 2 lines, '\n\n'. In general, I would like my output to always have the same number of lines as the input. How can I do that?
After a couple of days I found a solution. I will post it here; maybe somebody will have the same problem.
The trick is, after recognizing that you are inside a comment, to go once more through its body and, if you spot line terminators, pass them on instead of ignoring them:
%{
    public StringBuilder newLines;
%}

// ...

{Comment} {
    char[] ch;
    ch = yytext().toCharArray();
    newLines = new StringBuilder();
    for (char c : ch)
    {
        if (c == '\n')
        {
            newLines.append(c);
        }
    }
    return LexerToken.NEW_LINES;
}
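For completeness, the driver that consumes the tokens can then re-emit exactly the newlines collected from inside the comment. This is only a rough sketch; the generated scanner class name (Yylex), the token type, and the null-at-EOF convention are assumptions that depend on the %class/%type/%eofval declarations, which are not shown above:
Yylex lexer = new Yylex(new java.io.FileReader("input.txt"));   // hypothetical generated scanner
StringBuilder output = new StringBuilder();
LexerToken tok;
while ((tok = lexer.yylex()) != null) {
    if (tok == LexerToken.NEW_LINES) {
        output.append(lexer.newLines);   // the '\n's gathered by the {Comment} action
    } else if (tok == LexerToken.PASS) {
        output.append('\n');             // the plain {LineTerminator} rule
    }
    // ... handle the remaining token types ...
}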