Embedded Perl FREETMPS not cleaning up array of hash references properly - perl

I have a project with several macros that help me write Perl functions and, essentially, "convert" C structs into Perl hashes. Some of these hashes nest one another, but I'll get to the point.
My program's memory usage rises drastically over time. It's a social media frontend written in C and embedded Perl (odd language choice, but it's whatever). I'm 100% certain I've found the root cause in the code: removing it (along with a few related pieces) makes the growth go away, and I've profiled it as well; everything points to my array of hashes.
if (notifs && notifs_len)
{
    // Leak occurs here
    mXPUSHs(newRV_noinc((SV*)perlify_notifications(notifs, notifs_len)));
}
else ARG_UNDEFINED();
// Run function
PERL_STACK_SCALAR_CALL("base_page");
char* dup = PERL_GET_STACK_EXIT;
where PERL_GET_STACK_EXIT is a macro that expands to:
#define PERL_GET_STACK_EXIT savesharedsvpv(POPs); \
    PUTBACK; \
    FREETMPS; \
    LEAVE; \
    perl_unlock()
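The entry side of this sequence isn't shown; presumably the setup and call macros follow the standard perlcall pattern. A minimal sketch under that assumption (PERL_STACK_INIT and perl_lock() are hypothetical names mirroring the macros above; ENTER/SAVETMPS/PUSHMARK/call_pv/SPAGAIN are the stock perlcall API):
// Hypothetical entry/call counterparts, per the perlcall boilerplate
#define PERL_STACK_INIT \
    perl_lock(); \
    dSP; \
    ENTER; \
    SAVETMPS; \
    PUSHMARK(SP)

#define PERL_STACK_SCALAR_CALL(name) \
    PUTBACK; \
    call_pv((name), G_SCALAR); \
    SPAGAIN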
// Macros for storing data
#define hvstores_str(hv, key, val) if (val) { hv_stores((hv), key, newSVpv((val), 0)); }
#define hvstores_int(hv, key, val) hv_stores((hv), key, newSViv((val)))
#define hvstores_ref(hv, key, val) if (val) { hv_stores((hv), key, newRV_noinc((SV*)(val))); }
perlify_notifications is created using another macro.
// Calls one of the `perlify_X` functions for each element in an array
// Note: I was not using av_push before; I only switched temporarily, thinking that was the issue or that I was misreading the docs
// Example: PERLIFY_MULTI(notification, notifications, mstdnt_notification)
#define PERLIFY_MULTI(type, types, mstype) AV* perlify_##types(const struct mstype* const types, size_t len) { \
    if (!(types && len)) return NULL; \
    AV* av = newAV(); \
    for (size_t i = 0; i < len; ++i) \
        av_push(av, newRV_noinc((SV*)perlify_##type(types + i))); \
    return av; \
}
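For reference, the example invocation PERLIFY_MULTI(notification, notifications, mstdnt_notification) expands mechanically to:
AV* perlify_notifications(const struct mstdnt_notification* const notifications, size_t len) {
    if (!(notifications && len)) return NULL;
    AV* av = newAV();
    for (size_t i = 0; i < len; ++i)
        av_push(av, newRV_noinc((SV*)perlify_notification(notifications + i)));
    return av;
}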
Taking it a step further, here is one of the hash functions, perlify_notification from the code above.
// Converts a C struct into a Perl hash
HV* perlify_notification(const struct mstdnt_notification* const notif)
{
    if (!notif) return NULL;
    HV* notif_hv = newHV();
    // Profiler doesn't show much data leaking for these four, so I am uncertain
    hvstores_str(notif_hv, "id", notif->id);
    hvstores_int(notif_hv, "created_at", notif->created_at);
    hvstores_str(notif_hv, "emoji", notif->emoji);
    hvstores_str(notif_hv, "type", mstdnt_notification_t_to_str(notif->type));
    // The functions below seem to leak the most (they store a bunch of data); if I comment
    // them out, most of the memleak is gone (because these output a lot of data). The
    // functions being called look just like this one, so there's nothing special hiding there.
    hvstores_ref(notif_hv, "account", perlify_account(notif->account));
    hvstores_ref(notif_hv, "pleroma", perlify_notification_pleroma(notif->pleroma));
    hvstores_ref(notif_hv, "status", perlify_status(notif->status));
    return notif_hv;
}
Since I'm on Gentoo, I compiled all of Perl from source to debug this further, looked into the FREETMPS internals, and tried to see what was happening. I dumped all the references I could, yet every object had a REFCNT of 1. No luck.
When I ran this under massif, and watched it with top, I would just see the memory increase slowly. Trial and error: it starts at around 9 MB, then slowly climbs to between 15 MB and 25 MB as long as I keep calling these functions over and over. And memory usage grows at the same rate whether I comment FREETMPS out or run the functions with everything else stripped away. I'm still certain this is the root issue.
I'm really at a loss here, am I doing something wrong with how I'm "perlifying" my C structs?

Related

How to parse new line in Scala grammar with flex/bison?

I want to parse the Scala grammar with flex and bison, but I don't know how to handle the newline token.
Suppose I parse a newline as the token T_NL. Here's Toy.l, for example:
...
[a-zA-Z_][a-zA-Z0-9_]* { yylval->literal = strdup(yytext); return T_ID; }
\n { yylval->token = T_NL; return T_NL; }
[ \t\v\f\r] { /* skip whitespaces */ }
...
And here's the Toy.y for example:
function_def: 'def' T_ID '(' argument_list ')' return_expression '=' expression T_NL
;
argument_list: argument
| argument ',' argument_list
;
expression: ...
;
return_expression: ...
;
You can see that I would have to account for T_NL in all the other statements and definitions in Toy.y, which is really tedious.
Please educate me with a source code example!
This is a clear case where bison push-parsers are useful. The basic idea is that the decision to send an NL token (or tokens) can only be made when the following token has been identified (and, in one corner case, the second following token).
The advantage of push parsers is that they let us implement strategies like this, where there is not necessarily a one-to-one relationship between input lexemes and tokens sent to the parser. I'm not going to deal with all the particularities of setting up a push parser (though it's not difficult); you should refer to the bison manual for details. [Note 1]
First, it's important to read the Scala language description with care. Newline processing is described in section 2.13:
A newline in a Scala source text is treated as the special token “nl” if the three following criteria are satisfied:
The token immediately preceding the newline can terminate a statement.
The token immediately following the newline can begin a statement.
The token appears in a region where newlines are enabled.
Rules 1 and 2 are simple lookup tables, which are precisely defined in the following two paragraphs. Rule 2 has just one minor exception, described below:
A case token can begin a statement only if followed by a class or object token.
One hackish possibility to deal with that exception would be to add case[[:space:]]+class and case[[:space:]]+object as lexemes, on the assumption that no-one will put a comment between case and class. (Or you could use a more sophisticated pattern, which allows comments as well as whitespace.) If one of these lexemes is recognised, it could either be sent to the parser as a single (fused) token, or it could be sent as two tokens using two invocations of SEND in the lexer action. (Personally, I'd go with the fused token, since once the pair of tokens has been recognised, there is no advantage to splitting them up; afaik, there's no valid program in which case class can be parsed as anything other than case class. But I could be wrong.)
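A sketch of what those lexer rules could look like, using the fused-token variant (T_CASE_CLASS and T_CASE_OBJECT are hypothetical token names, and this pattern allows only whitespace between the words, not comments):
case[[:space:]]+class { SEND(T_CASE_CLASS); }
case[[:space:]]+object { SEND(T_CASE_OBJECT); }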
To apply rules one and two, then, we need two lookup tables indexed by token number: token_can_end_stmt and token_cannot_start_stmt. (The second one has its meaning reversed because most tokens can start statements; doing it this way simplifies initialisation.)
/* YYNTOKENS is defined by bison if you specify %token-table */
static bool token_can_end_stmt[YYNTOKENS] = {
    [T_THIS] = true, [T_NULL] = true, /* ... */
};
static bool token_cannot_start_stmt[YYNTOKENS] = {
    [T_CATCH] = true, [T_ELSE] = true, /* ... */
};
We're going to need a little bit of persistent state, but fortunately when we're using a push parser, the scanner does not need to return to its caller every time it recognises a token, so we can keep the persistent state as local variables in the scan loop. (That's another advantage of the push-parser architecture.)
From the above description, we can see that what we're going to need to maintain in the scanner state are:
some indication that a newline has been encountered. This needs to be a count, not a boolean, because we might need to send two newlines:
if two tokens are separated by at least one completely blank line (i.e a line which contains no printable characters), then two nl tokens are inserted.
A simple way to handle this is to simply compare the current line number with the line number at the previous token. If they are the same, then there was no newline. If they differ by only one, then there was no blank line. If they differ by more than one, then there was either a blank line or a comment (or both). (It seems odd to me that a comment would not trigger the blank line rule, so I'm assuming that it does. But I could be wrong, which would require some adjustment to this scanner.) [Note 2]
the previous token (for rule 1). It's only necessary to record the token number, which is a simple small integer.
some way of telling whether we're in a "region where newlines are enabled" (for rule 3). I'm pretty sure that this will require assistance from the parser, so I've written it that way here.
By centralising the decision about sending a newline into a single function, we can avoid a lot of code duplication. My typical push-parser architecture uses a SEND macro anyway, to deal with the boilerplate of saving the semantic value, calling the parser, and checking for errors; it's easy to add the newline logic there:
// yylloc handling mostly omitted for simplicity
#define SEND_VALUE(token, tag, value) do { \
yylval.tag = value; \
SEND(token); \
} while(0);
#define SEND(token) do { \
    int status = YYPUSH_MORE; \
    if (yylineno != prev_lineno \
        && token_can_end_stmt[prev_token] \
        && !token_cannot_start_stmt[token] \
        && nl_enabled) { \
      status = yypush_parse(ps, T_NL, NULL, &yylloc, &nl_enabled); \
      if (status == YYPUSH_MORE \
          && yylineno - prev_lineno > 1) \
        status = yypush_parse(ps, T_NL, NULL, &yylloc, &nl_enabled); \
    } \
    nl_encountered = 0; \
    if (status == YYPUSH_MORE) \
      status = yypush_parse(ps, token, &yylval, &yylloc, &nl_enabled); \
    if (status != YYPUSH_MORE) return status; \
    prev_token = token; \
    prev_lineno = yylineno; \
  } while (0);
Specifying the local state in the scanner is extremely simple; just place the declarations and initialisations at the top of your scanner rules, indented. Any indented code prior to the first rule is inserted directly into yylex, almost at the top of the function (so it is executed once per call, not once per lexeme):
%%
    int nl_encountered = 0;
    int prev_token = 0;
    int prev_lineno = 1;
    bool nl_enabled = true;
    YYSTYPE yylval;
    YYLTYPE yylloc = {0};
Now, the individual rules are pretty simple (except for case). For example, we might have rules like:
"while" { SEND(T_WHILE); }
[[:lower:]][[:alnum:]_]* { SEND_VALUE(T_VARID, str, strdup(yytext)); }
That still leaves the question of how to determine if we are in a region where newlines are enabled.
Most of the rules could be handled in the lexer by just keeping a stack of different kinds of open parentheses, and checking the top of the stack: If the parenthesis on the top of the stack is a {, then newlines are enabled; otherwise, they are not. So we could use rules like:
[{([] { paren_stack_push(yytext[0]); SEND(yytext[0]); }
[])}] { paren_stack_pop(); SEND(yytext[0]); }
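The paren_stack_push and paren_stack_pop helpers aren't defined in this excerpt; a minimal sketch (fixed depth, no overflow handling, purely illustrative, with the region check implementing the rule just described):
/* Hypothetical parenthesis stack for tracking newline regions */
static char paren_stack[256];
static int  paren_depth = 0;
static void paren_stack_push(char c) { paren_stack[paren_depth++] = c; }
static void paren_stack_pop(void)    { if (paren_depth > 0) --paren_depth; }
static bool newlines_enabled_here(void) {
    /* Newlines are enabled at the top level and inside braces */
    return paren_depth == 0 || paren_stack[paren_depth - 1] == '{';
}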
However, that doesn't deal with the requirement that newlines be disabled between a case and its corresponding =>. I don't think it's possible to handle that as another type of parenthesis, because there are lots of uses of => which do not correspond with a case, and I believe some of them can come between a case and its corresponding =>.
So a better approach would be to put this logic into the parser, using lexical feedback to communicate the state of the newline-region stack, which is what is assumed in the calls to yypush_parse above. Specifically, the scanner and the parser share one boolean variable (by passing a pointer to the parser). [Note 3] The parser then maintains the value of this variable in mid-rule actions (MRAs) in each rule which matches a region of potentially different newlinedness, using the parse stack itself as a stack. Here's a small excerpt of a (theoretical) parser:
%define api.pure full
%define api.push-pull push
%locations
%parse-param { bool* nl_enabled; }
/* More prologue */
%%
// ...
/* Some example productions which modify nl_enabled: */
/* The actions always need to be before the token, because they need to take
* effect before the next lookahead token is requested.
* Note how the first MRA's semantic value is used to keep the old value
* of the variable, so that it can be restored in the second MRA.
*/
TypeArgs : <boolean>{ $$ = *nl_enabled; *nl_enabled = false; }
             '[' Types
           { *nl_enabled = $1; } ']'
         ;
CaseClause : <boolean>{ $$ = *nl_enabled; *nl_enabled = false; }
               "case" Pattern opt_Guard
             { *nl_enabled = $1; } "=>" Block
           ;
FunDef : FunSig opt_nl
         <boolean>{ $$ = *nl_enabled; *nl_enabled = true; }
         '{' Block
         { *nl_enabled = $3; } '}'
       ;
Notes:
1. Push parsers have many other advantages; IMHO they are the solution of choice. In particular, using push parsers avoids the circular header dependency which plagues attempts to build pure parser/scanner combinations.
2. There is still the question of multiline comments with preceding and trailing text:
return /* Is this comment
a newline? */ 42
I'm not going to try to answer that question.
3. It would be possible to keep this flag in the YYLTYPE structure, since only one instance of yylloc is ever used in this example. That might be a reasonable optimisation, since it cuts down on the number of parameters sent to yypush_parse. But it seemed a bit fragile, so I opted for a more general solution here.

Proper way to call a different method from the same C-extension module?

I'm converting a pure-Python module to a C-extension to familiarize myself with the C API.
The Python implementation is as follows:
_CRC_TABLE_ = [0] * 256

def initialize_crc_table():
    if _CRC_TABLE_[1] != 0:  # Safeguard against re-initialization
        return
    # snip

def calculate_crc(data: bytes, initial: int = 0) -> int:
    if _CRC_TABLE_[1] == 0:  # In case user forgets to initialize first
        initialize_crc_table()
    # snip

# additional non-CRC methods trimmed
My C-extension thus far works:
#include <Python.h>
static Py_ssize_t CRC_TABLE_LEN = 256;
PyObject *_CRC_TABLE_;
static PyObject *method_initialize_crc_table(PyObject *self, PyObject *args) {
    // snip
}
static PyMethodDef module_methods[] = {
    {"initialize_crc_table", method_initialize_crc_table, METH_VARARGS, NULL},
    {NULL, NULL, 0, NULL}
};
void _allocate_table_() {
    _CRC_TABLE_ = PyList_New(CRC_TABLE_LEN);
    PyObject *zero = Py_BuildValue("i", 0);
    for (int i = 0; i < CRC_TABLE_LEN; i++) {
        Py_INCREF(zero); /* PyList_SetItem steals a reference */
        PyList_SetItem(_CRC_TABLE_, i, zero);
    }
    Py_DECREF(zero); /* drop the reference returned by Py_BuildValue */
}
#if PY_MAJOR_VERSION >= 3
static struct PyModuleDef module_utilities = {
    PyModuleDef_HEAD_INIT,
    "utilities",
    NULL,
    -1,
    module_methods,
};
PyMODINIT_FUNC PyInit_utilities() {
    PyObject *module = PyModule_Create(&module_utilities);
    _allocate_table_();
    PyModule_AddObject(module, "_CRC_TABLE_", _CRC_TABLE_);
    return module;
}
#else
PyMODINIT_FUNC initutilities() {
    PyObject *module = Py_InitModule3("utilities", module_methods, NULL);
    _allocate_table_();
    PyModule_AddObject(module, "_CRC_TABLE_", _CRC_TABLE_);
}
#endif
I am able to access utilities._CRC_TABLE_ from the C extension in the interpreter, and the values match the Python equivalent when invoking utilities.initialize_crc_table.
Now I'm trying to call initialize_crc_table at the start of calculate_crc, performing the same check as used in the Python implementation. I'm returning None for now:
static PyObject *method_calculate_crc(PyObject *self, PyObject *args) {
    if (!(uint)PyLong_AsUnsignedLong(PyList_GetItem(_CRC_TABLE_, (Py_ssize_t) 1))) {
        PyObject *call_initialize_crc_table = PyObject_GetAttrString(self, "initialize_crc_table");
        PyObject_CallObject(call_initialize_crc_table, NULL);
        Py_DECREF(call_initialize_crc_table);
    }
    Py_RETURN_NONE;
}
I've added this to module_methods[] and it compiles without warnings or errors. When I run this method in the interpreter, I get a segfault. I assume it's because self isn't the module object.
I can do this as an alternative, which appears to work without issue:
static PyObject *method_calculate_crc(PyObject *self, PyObject *args) {
    if (!(uint)PyLong_AsUnsignedLong(PyList_GetItem(_CRC_TABLE_, (Py_ssize_t) 1))) {
        method_initialize_crc_table(self, NULL);
    }
    Py_RETURN_NONE;
}
However, I am not certain if I should be passing self, NULL, or something else to the method.
What is the proper way of invoking method_initialize_crc_table from method_calculate_crc?
There was a "gotcha" here that I need to clarify. While the code was intended for Python 3, development was initially done in Python 2, as the Python 3 development files were not yet available on the machine I was using. This shed some light on differences in how each version handles things; David's comments helped lead to this clarification.
If a METH_VARARGS method is defined for a module (versus a class), Python 2 does not pass anything for the PyObject *self parameter. This is noted in the documentation but is easy to overlook if you're not careful. Python 3, however, does pass a pointer to the module. As DavidW recommended, I implemented a global variable to hold a reference to the module. Assuming his claim that Python handles the dereferencing at exit is correct, we can safely use this to access the module's global attributes.
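A minimal sketch of that approach (module_ref is my name for the global; the init function otherwise mirrors the one above):
static PyObject *module_ref = NULL; /* global handle to this module */

PyMODINIT_FUNC PyInit_utilities(void) {
    PyObject *module = PyModule_Create(&module_utilities);
    _allocate_table_();
    PyModule_AddObject(module, "_CRC_TABLE_", _CRC_TABLE_);
    module_ref = module; /* keep a handle for intra-module calls */
    return module;
}
Methods can then reach module-level attributes through module_ref, no matter what self happens to be.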
With our issue of PyObject *self solved, we no longer get a segfault. We can then address the question of which approach is (seemingly) more correct for calling a method within the local scope of the module. Do we do this:
if (/* conditional */)
    PyObject_CallMethod(module, "initialize_crc_table", NULL);
Or this:
if (/* conditional */)
    method_initialize_crc_table(self, args);
Benchmarks seem to provide an answer here. Using Python's built-in timeit module, we can see a very clear difference in performance. Note that so far in our implementation, .calculate_crc accesses ._CRC_TABLE_ and checks whether it's initialized, but no other processing occurs. Performance was identical between Python 2 and 3 and is thus not broken out.
The command is as follows:
python3 -m timeit "import utilities; utilities.calculate_crc(0)"
PyObject_CallMethod: 874 nsec per loop
method_initialize_crc_table: 44.3 usec per loop
The PyObject_ call comes out roughly 50x faster, quite a significant difference. Benchmarks alone do not determine what is "more correct", but with no clear guidance otherwise, they may be sufficient justification for our use. Therefore, I will be using PyObject_ calls for this project.
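Putting the pieces together, calculate_crc would then look roughly like this (a sketch combining the earlier code with the module_ref global from the sketch above; error handling elided):
static PyObject *method_calculate_crc(PyObject *self, PyObject *args) {
    if (!(unsigned int)PyLong_AsUnsignedLong(PyList_GetItem(_CRC_TABLE_, (Py_ssize_t)1))) {
        PyObject *result = PyObject_CallMethod(module_ref, "initialize_crc_table", NULL);
        Py_XDECREF(result); /* discard the returned None */
    }
    Py_RETURN_NONE;
}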

Perl XS garbage collection

I had to deal with a really old codebase in my company which had C++ APIs exposed via Perl.
In one of the code reviews, I suggested that it was necessary to free the memory being allocated in C++.
Here is the skeleton of the code:
char* convert_to_utf8(char *src, int length) {
    // ...
    length = get_utf8_length(src);
    char *dest = new char[length];
    // ...
    // No delete
    return dest;
}
Perl XS definition:
PROTOTYPE: ENABLE
char * _xs_convert_to_utf8(src, length)
char *src
int length
CODE:
RETVAL = convert_to_utf8(src, length)
OUTPUT:
RETVAL
So I commented that the memory allocated in the C++ function will not be garbage collected by Perl. Two Java developers think it will crash, because (in their view) Perl will garbage collect the memory allocated by C++. I suggested the following code:
CLEANUP:
delete[] RETVAL
Am I wrong here?
I also ran this code and showed them the difference in memory utilization with and without the CLEANUP section, but they are asking for exact documentation that proves it, and I couldn't find any.
Perl Client:
use ExtUtils::testlib;
use test;
for (my $i = 0; $i < 100000000; $i++) {
    my $a = test::hello();
}
C++ code:
#define PERL_NO_GET_CONTEXT
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
#include "ppport.h"
#include <stdio.h>
char* create_mem() {
    char *foo = (char*)malloc(sizeof(char)*150);
    return foo;
}
XS code:
MODULE = test PACKAGE = test
char * hello()
CODE:
RETVAL = create_mem();
OUTPUT:
RETVAL
CLEANUP:
free(RETVAL);
I'm afraid that the people who wrote (and write) the Perl XS documentation probably consider it too obvious to document explicitly that Perl cannot magically detect memory allocations made in other languages (like C++). There's a bit in the perlguts documentation page saying that all memory to be used via the Perl XS API must be allocated using Perl's macros; that may help you argue.
When you write XS code, you're writing C (or sometimes C++) code. You still need to write proper C/C++, which includes deallocating allocated memory when appropriate.
The glue function you desire XS to create is the following:
void hello() {
    dSP; // Declare and init SP, the stack pointer used by mXPUSHs.
    char* mem = create_mem();
    mXPUSHs(newSVpv(mem, 0)); // Create a scalar, mortalize it, and push it on the stack.
    free(mem); // Free the memory allocated by create_mem().
    XSRETURN(1);
}
newSVpv makes a copy of mem rather than taking possession of it, so the above clearly shows that free(mem) is needed to deallocate mem.
In XS, you could write that as
void hello()
CODE:
{ // A block is needed since we're declaring vars.
char* mem = create_mem();
mXPUSHs(newSVpv(mem, 0));
free(mem);
XSRETURN(1);
}
Or you could take advantage of XS features such as RETVAL and CLEANUP.
SV* hello()
char* mem; // We can get rid of the block by declaring vars here.
CODE:
mem = create_mem();
RETVAL = newSVpv(mem, 0); // Values returned by SV* subs are automatically mortalized.
OUTPUT:
RETVAL
CLEANUP: // Happens after RETVAL has been converted
free(mem); // and the converted value has been pushed onto the stack.
Or you could also take advantage of the typemap, which defines how to convert the returned value into a scalar.
char* hello()
CODE:
RETVAL = create_mem();
OUTPUT:
RETVAL
CLEANUP:
free(RETVAL);
All three of these are perfectly acceptable.
A note on mortals.
Mortalizing is a delayed reference count decrement. If you were to decrement the SV created by hello before hello returns, it would get deallocated before hello returns. By mortalizing it instead, it won't be deallocated until the caller has a chance to inspect it or take possession of it (by increasing its reference count).
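To illustrate with a minimal sketch (standard Perl API calls; not from the original post):
SV* sv = newSVpv("hello", 0); /* reference count 1, owned by us */
sv_2mortal(sv);               /* count still 1, but flagged as a temporary: */
                              /* the next FREETMPS will decrement it, and here free it */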

iOS - libical / const char * - memory usage

I am using the libical library to parse the iCalendar format and read the information I need out of it. It is working absolutely fine so far, but there is one odd thing concerning ical.
This is my code:
icalcomponent *root = icalparser_parse_string([iCalData cStringUsingEncoding:NSUTF8StringEncoding]);
if (root)
{
    icalcomponent *currentEvent = icalcomponent_get_first_component(root, ICAL_VEVENT_COMPONENT);
    while (currentEvent)
    {
        icalproperty *currentProperty = icalcomponent_get_first_property(currentEvent, ICAL_ANY_PROPERTY);
        while (currentProperty)
        {
            icalvalue *value = icalproperty_get_value(currentProperty);
            char *icalString = icalvalue_as_ical_string_r(value); // seems to leak
            NSString *currentValueAsString = [NSString stringWithCString:icalString
                                                                 encoding:NSUTF8StringEncoding];
            icalvalue_free(value);
            //...
            //import data
            //...
            icalString = nil;
            currentValueAsString = nil;
            icalproperty_free(currentProperty);
            currentProperty = icalcomponent_get_next_property(currentEvent, ICAL_ANY_PROPERTY);
        } // end while
        currentEvent = icalcomponent_get_next_component(root, ICAL_VEVENT_COMPONENT);
    } // end while
    icalcomponent_free(root);
}
//...
//...
I used Instruments to check my memory usage and was able to find out that this line seems to leak:
char *icalString = icalvalue_as_ical_string_r(value); // seems to leak
If I copy and paste this line five or six times, my memory usage grows by about 400 KB and never gets released.
There is no dedicated free function for icalvalue_as_ical_string_r, since it just returns a char *.
Any suggestions how to solve this issue? I would appreciate any help!
EDIT
Taking a look at the Apple documentation, I found the following:
To get a C string from a string object, you are recommended to use UTF8String. This returns a const char * using UTF8 string encoding.
const char *cString = [@"Hello, world" UTF8String];
The C string you receive is owned by a temporary object, and will become invalid when automatic deallocation takes place. If you want to get a permanent C string, you must create a buffer and copy the contents of the const char * returned by the method.
But how do I release a char * string properly when using ARC?
I tried adding an @autoreleasepool { ... } block around my while loop, but without any effect. Memory usage still increases...
Careful with the statement "no free method...because it's returning a char*"; that is never something you can just assume.
In the absence of documentation you can look at the source code of the library to see what it does; for example:
http://libical.sourcearchive.com/documentation/0.44-2/icalvalue_8c-source.html
Unfortunately this function can do a lot of different things. There are certainly some cases where calling free() on the returned buffer would be right but maybe that hasn't been ensured in every case.
I think it would be best to request a proper deallocation method from the maintainers of the library. They need to clean up their own mess; the icalvalue_as_ical_string_r() function has at least a dozen cases in a switch that might have different deallocation requirements.
icalvalue_as_ical_string_r returns a char * because it has done a malloc() for your result string. If your pointer is non-NULL, you have to free() it after use.
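So the practical fix is to free the buffer once the NSString copy has been made; a sketch of the loop body above, under that assumption:
char *icalString = icalvalue_as_ical_string_r(value); // caller owns this malloc'd buffer
NSString *currentValueAsString = [NSString stringWithCString:icalString
                                                    encoding:NSUTF8StringEncoding]; // makes its own copy
free(icalString); // release libical's buffer
icalString = NULL;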

Page fault with newlib functions

I've been porting newlib to my very small kernel, and I'm stumped: whenever I include a function that references a system call, my program will page fault on execution. If I call a function that does not reference a system call, like rand(), nothing will go wrong.
Note: By include, I mean as long as the function, e.g. printf() or fopen(), is somewhere inside the program, even if it isn't called through main().
I've had this problem for quite some time now, and have no idea what could be causing this:
I've rebuilt newlib numerous times
Modified my ELF loader to load the code from the section headers instead of the program headers
Attempted to build newlib/libgloss separately (which failed)
Linked the libraries (libc, libnosys) through the ld script using GROUP, gcc, and ld
I'm not quite sure what other information I should include with this, but I'd be happy to include what I can.
Edit: To verify, the page faults occurring are not at the addresses of the failing functions; they are elsewhere in the program. For example, when I call fopen(), located at 0x08048170, I will page fault at 0xA00A316C.
Edit 2:
Relevant code for loading ELF:
int krun(u8int *name) {
    int fd = kopen(name);
    Elf32_Ehdr *ehdr = kmalloc(sizeof(Elf32_Ehdr));
    read(fd, ehdr, sizeof(Elf32_Ehdr));
    if (ehdr->e_ident[0] != 0x7F || ehdr->e_ident[1] != 'E' || ehdr->e_ident[2] != 'L' || ehdr->e_ident[3] != 'F') {
        kfree(ehdr);
        return -1;
    }
    int pheaders = ehdr->e_phnum;
    int phoff = ehdr->e_phoff;
    int phsize = ehdr->e_phentsize;
    int sheaders = ehdr->e_shnum;
    int shoff = ehdr->e_shoff;
    int shsize = ehdr->e_shentsize;
    for (int i = 0; i < pheaders; i++) {
        lseek(fd, phoff + phsize * i, SEEK_SET);
        Elf32_Phdr *phdr = kmalloc(sizeof(Elf32_Phdr));
        read(fd, phdr, sizeof(Elf32_Phdr));
        u32int page = PMMAllocPage();
        int flags = 0;
        if (phdr->p_flags & PF_R) flags |= PAGE_PRESENT;
        if (phdr->p_flags & PF_W) flags |= PAGE_WRITE;
        int pages = (phdr->p_memsz / 0x1000) + 1;
        while (pages >= 0) {
            u32int mapaddr = (phdr->p_vaddr + (pages * 0x1000)) & 0xFFFFF000;
            map(mapaddr, page, flags | PAGE_USER);
            pages--;
        }
        lseek(fd, phdr->p_offset, SEEK_SET);
        read(fd, (void *)phdr->p_vaddr, phdr->p_filesz);
        kfree(phdr);
    }
    // Removed: code block that zeroes .bss; it's already zeroed whenever I check it anyway
    // Removed: code block that creates a thread and adds it to the scheduler
    kfree(ehdr);
    return 0;
}
Edit 3: I've noticed that if I call a system call, such as write(), and then call printf() two or more times, I will get an unknown opcode interrupt. Odd.
Whoops! Figured it out: when I map the virtual address, I should allocate a new page each time, like so:
map(mapaddr, PMMAllocPage(), flags | PAGE_USER);
Now it works fine.
For those curious as to why it didn't work: when I wasn't including printf(), the size of the program was under 0x1000 bytes, so mapping with only one page was okay. When I include printf() or fopen(), the size of the program was much bigger so that's what caused the issue.
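In context, the corrected mapping loop in the loader above becomes (a sketch):
int pages = (phdr->p_memsz / 0x1000) + 1;
while (pages >= 0) {
    u32int mapaddr = (phdr->p_vaddr + (pages * 0x1000)) & 0xFFFFF000;
    map(mapaddr, PMMAllocPage(), flags | PAGE_USER); // a fresh physical frame per page
    pages--;
}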