Support macro invocations with multiple tokens for a single argument.
We provide for this by changing the value of the argument-list
production from a list of strings (string_list_t) to a new
data structure that holds a list of lists of strings
(argument_list_t).
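A minimal sketch of such a data structure in C, (the field and
type names here are illustrative, not necessarily those in the
code):

    typedef struct string_node {
            const char *str;
            struct string_node *next;
    } string_node_t;

    typedef struct string_list {
            string_node_t *head;
            string_node_t *tail;
    } string_list_t;

    /* One string_list_t per argument; all of an invocation's
     * arguments together form a list of those lists. */
    typedef struct argument_node {
            string_list_t *argument;
            struct argument_node *next;
    } argument_node_t;

    typedef struct argument_list {
            argument_node_t *head;
            argument_node_t *tail;
    } argument_list_t;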
Make macro-expansion productions create string-list values rather than printing.
We then print the final string list at the top-level content
production along with all other printing.
Additionally, having macro-expansion productions that create values
will make it easier to solve problems like composed function-like
macro invocations in the future.
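A rough sketch of the shape of such a production in the bison
grammar, (names are illustrative, not the exact rules):

    expansion:
            DEFINED_MACRO {
                    /* Build and return a string list; no
                     * printing happens here. */
                    $$ = _expand_macro (parser, $1);
            }
    ;

where _expand_macro is a hypothetical helper returning a
string_list_t pointer.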
Move most printing to the action in the content production.
Previously, printing was occurring all over the place. Here we
document that it should all be happening at the top-level content
production, and we move the printing of directive newlines.
The printing of expanded macros is still happening in lower-level
productions, but we plan to fix that soon.
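The intended end state, schematically, (illustrative grammar,
not the exact rules):

    content:
            /* empty */
    |       content chunk {
                    /* All printing happens here, at the
                     * top level. */
                    _print_string_list ($2);
            }
    |       content directive NEWLINE {
                    /* Directive newlines are printed here
                     * as well. */
                    printf ("\n");
            }
    ;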
Instead of "parameter_list" and "replacement_list" just use
"parameters" and "replacements". This is consistent with the existing
"arguments" and keeps the line length down in the face of the
now-longer "string_list_t" rather than "list_t".
Make the lexer return SPACE tokens unconditionally.
It seems strange to always return SPACE tokens, but since we already
needed to return a SPACE token in some cases, this actually simplifies
our lexer.
This also allows us to fix two whitespace-handling differences
compared to "gcc -E" so that now the recent modification to the test
suite passes once again.
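In flex terms this reduces to a single unconditional rule,
(sketch):

    [ \t]+  { return SPACE; }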
This shows two minor failures in our current parsing (resulting in
whitespace-only changes, so not that significant):
1. We are inserting extra whitespace between tokens not originally
separated by whitespace in the replacement list of a macro
definition.
2. We are swallowing whitespace separating tokens in the general
content.
Fix parsing of object-like macro with a definition that begins with '('.
Previously our parser was incorrectly treating this case as a
function-like macro. We fix this by conditionally passing a SPACE
token from the lexer, (but only for whitespace appearing immediately
after the identifier that follows #define).
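A sketch of the idea in flex, using a hypothetical space_tokens
flag that the #define handling sets only after lexing the
identifier that follows #define:

    [ \t]+  {
            if (yyextra->space_tokens) {
                    yyextra->space_tokens = 0;
                    return SPACE;
            }
            /* Otherwise, swallow the whitespace. */
    }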
Add test for an object-like macro with a definition beginning with '('.
Our current parser sees "#define foo (" as an identifier token
followed by a '(' token and parses this as a function-like macro.
That would be correct for "#define foo(", but the preprocessor
specification treats this whitespace as significant here, so this test
currently fails.
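A hypothetical test along these lines:

    #define foo (1)
    foo

With the whitespace treated as significant, foo is an object-like
macro and the last line must expand to "(1)"; our parser instead
starts parsing a function-like macro when it sees the '('.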
Eliminate a reduce/reduce conflict in the function-like macro production.
Previously, an empty argument could be parsed as either an "argument_list"
directly or first as an "argument" and then an "argument_list".
We fix this by removing the possibility of an empty "argument_list"
directly.
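Schematically, the conflict and the fix look like this,
(illustrative grammar, not the exact rules):

    /* Before: both "argument_list" and "argument" could be
     * empty, so for an empty argument the parser could not
     * decide which empty rule to reduce first. */
    argument_list:
            /* empty */
    |       argument
    |       argument_list ',' argument
    ;

    argument:
            /* empty */
    |       argument word
    ;

    /* After: "argument_list" is never directly empty; a lone
     * (possibly empty) "argument" covers that case. */
    argument_list:
            argument
    |       argument_list ',' argument
    ;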
Add support for the structure of function-like macros.
We accept the structure of arguments in both macro definition and
macro invocation, but we don't yet expand those arguments. This is
just enough code to pass the recently-added tests, but does not yet
provide any sort of useful function-like macro.
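The accepted structure is roughly, (sketch with illustrative
token and rule names):

    directive:
            DEFINE IDENTIFIER '(' parameter_list ')'
                    replacement_list NEWLINE
    ;

    expansion:
            DEFINED_MACRO '(' argument_list ')'
    ;

The parameters and arguments are parsed and carried along, but
nothing consumes them during expansion yet.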
Add tests for the structure of function-like macros.
These test only the most basic aspect of parsing of function-like
macros. Specifically, none of the definitions of these function-like
macros use the arguments of the function.
No function-like macros are implemented yet, so all of these fail for
now.
Make the lexer distinguish between identifiers and defined macros.
This is just a minor style improvement for now. But the same
mechanism, (having the lexer peek into the table of defined macros),
will be essential when we add function-like macros in addition to the
current object-like macros.
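A sketch of the mechanism in the flex rule for identifiers,
(assuming the parser's table of defined macros is reachable from
the lexer, here via a hypothetical yyextra->defines):

    {IDENTIFIER} {
            yylval->str = xtalloc_strdup (yyextra, yytext);
            if (hash_table_find (yyextra->defines, yytext))
                    return DEFINED_MACRO;
            return IDENTIFIER;
    }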
Remove some redundancy in the top-level production.
Previously we had two copies of all top-level actions, (once in a list
context and once in a non-list context). It is much simpler to instead
have a single list-context production with no action, and to have the
actions only in their own non-list productions.
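Schematically, (illustrative):

    /* Before: every action appeared twice, once per context. */
    input:
            directive               { /* action */ }
    |       input directive        { /* same action again */ }
    ;

    /* After: the list production has no action of its own. */
    input:
            /* empty */
    |       input line
    ;

    line:
            directive               { /* action, exactly once */ }
    ;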
Simplify lexer significantly (remove all stateful lexing).
We are able to remove all state by simply passing NEWLINE through
as a token unconditionally (as opposed to only passing NEWLINE when
on a directive line as we did previously).
This isn't ideal for two reasons:
1. There's a bunch of stateful redundancy in the lexer that should be
cleaned up.
2. The hash table does not provide a mechanism to delete an entry, so
we waste memory to add a new NULL entry in front of the existing
entry with the same key.
But this does at least work, (it passes the recently added undef test
case).
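The workaround in point 2 amounts to shadowing rather than
deleting, roughly, (assuming the hash table places a new entry
in front of an old entry with the same key, as described above):

    /* No way to remove an entry, so "undefine" a macro by
     * inserting a NULL value that lookups will find first. */
    hash_table_insert (parser->defines, NULL, identifier);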
The lexer was previously using strdup (expecting the parser to free),
but is now more consistent, easier to use, and slightly more efficient
by using talloc along with the parser.
Also, we add xtalloc and xtalloc_strdup wrappers around talloc and
talloc_strdup to put all of the out-of-memory-checking code in one
place.
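The wrappers are the usual out-of-memory pattern, roughly:

    #include <stdio.h>
    #include <stdlib.h>
    #include <talloc.h>

    void *
    xtalloc (void *ctx, size_t size)
    {
            void *ret = talloc_size (ctx, size);
            if (ret == NULL) {
                    fprintf (stderr, "Out of memory.\n");
                    exit (1);
            }
            return ret;
    }

    char *
    xtalloc_strdup (void *t, const char *p)
    {
            char *ret = talloc_strdup (t, p);
            if (ret == NULL) {
                    fprintf (stderr, "Out of memory.\n");
                    exit (1);
            }
            return ret;
    }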
Fix defines involving both literals and other defined macros.
We now store a list of tokens in our hash-table rather than a single
string. This lets us replace each macro in the value as necessary.
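Expansion can then walk the stored tokens and substitute each one
independently, roughly, (with a hypothetical _print_expanded
helper for the recursive case):

    string_node_t *node;
    string_list_t *replacement;

    for (node = list->head; node; node = node->next) {
            replacement = hash_table_find (defines, node->str);
            if (replacement)
                    _print_expanded (defines, replacement);
            else
                    printf ("%s", node->str);
    }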
This code adds a link dependency on talloc, which does exactly what
we want in terms of memory management for a parser.
The 3 tests added in the previous commit now pass.
Add tests defining a macro to be a literal and another macro.
These 3 new tests are modeled after 3 existing tests but made slightly
more complex since now, instead of defining a new macro to be an
existing macro, we define it to be replaced with two tokens, (one a
literal, and one an existing macro).
These tests all fail currently because the replacement lookup is
currently happening on the basis of the entire replacement string
rather than on a list of tokens.
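One of the new tests, illustratively, (not the exact test file):

    #define foo 1
    #define bar 2 foo
    bar

where the last line should expand to "2 1": the "2" is the
literal token, and "foo" is the existing macro that must still be
looked up and replaced.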
Add a couple more tests for chained #define directives.
One with the chained defines in the opposite order, and one with the
potential to trigger an infinite-loop bug through mutual
recursion. Both of these tests already pass.
The fix is as simple as adding a loop to continue looking up values
in the hash table until one of the following termination conditions is
met:
1. The token we look up has no definition
2. We get back the original symbol we started with
This second termination condition prevents infinite iteration.
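In code the loop is roughly, (sketch; at this point the hash
table still maps a macro name to a single replacement string):

    const char *token = original;
    const char *replacement;

    while (1) {
            replacement = hash_table_find (defines, token);
            if (replacement == NULL)
                    break;  /* condition 1: no definition */
            if (strcmp (replacement, original) == 0)
                    break;  /* condition 2: back where we
                             * started; stop rather than
                             * iterate forever */
            token = replacement;
    }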