9: Hints for Preparing Specifications
This section contains miscellaneous hints on preparing efficient, easy to change,
and clear specifications.
The individual subsections are more or less independent.
It is difficult to
provide rules with substantial actions
and still have a readable specification file.
The following style hints owe much to Brian Kernighan.
Use all capital letters for token names, all lower case letters for
nonterminal names.
This rule comes under the heading of ``knowing who to blame when
things go wrong.''
Put grammar rules and actions on separate lines.
This allows either to be changed without
an automatic need to change the other.
Put all rules with the same left hand side together.
Put the left hand side in only once, and let all
following rules begin with a vertical bar.
Put a semicolon only after the last rule with a given left hand side,
and put the semicolon on a separate line.
This allows new rules to be easily added.
Indent rule bodies by two tab stops, and action bodies by three
tab stops.
The example in Appendix A is written following this style, as are
the examples in the text of this paper (where space permits).
The user must make up his own mind about these stylistic questions;
the central problem, however, is to make the rules visible through
the morass of action code.
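A short fragment laid out according to these hints might look as follows; the token NAME and the routines assign and print are invented for this illustration:

```yacc
stat	:	NAME '=' expr
			{  assign( $1, $3 );  }
	|	expr
			{  print( $1 );  }
	;
```

The token name is in capitals, the nonterminals are in lower case, each action sits on a line of its own, the continuation rules begin with a vertical bar, and the semicolon closing the group has a line to itself.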
The algorithm used by the Yacc parser encourages so called ``left recursive''
grammar rules: rules of the form
name : name rest_of_rule ;
These rules frequently arise when
writing specifications of sequences and lists:

list : item | list ',' item ;

and

seq : item | seq item ;

In each of these cases, the first rule
will be reduced for the first item only, and the second rule
will be reduced for the second and all succeeding items.
With right recursive rules, such as

seq : item | item seq ;

the parser would be a bit bigger, and the items would be seen, and reduced,
from right to left.
More seriously, an internal stack in the parser
would be in danger of overflowing if a very long sequence were read.
Thus, the user should use left recursion wherever reasonable.
It is worth considering whether a sequence with zero
elements has any meaning, and if so, consider writing
the sequence specification with an empty rule:

seq : /* empty */ | seq item ;

Once again, the first rule would always be reduced exactly once, before the
first item was read,
and then the second rule would be reduced once for each item read.
Permitting empty sequences
often leads to increased generality.
However, conflicts might arise if Yacc is asked to decide
which empty sequence it has seen, when it hasn't seen enough to know!
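A contrived sketch of such a conflict, with invented tokens A and B:

```yacc
%token A B
%%
seq	:	aseq
	|	bseq
	;
aseq	:	/* empty */
	|	aseq A
	;
bseq	:	/* empty */
	|	bseq B
	;
```

On empty input the parser would have to reduce one of the two empty rules before seeing any token at all, so Yacc reports a reduce/reduce conflict for this grammar.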
Some lexical decisions depend on context.
For example, the lexical analyzer might want to
delete blanks normally, but not within quoted strings.
Or names might be entered into a symbol table in declarations,
but not in expressions.
One way of handling this situation is
to create a global flag that is
examined by the lexical analyzer, and set by actions.
For example, suppose a program
consists of 0 or more declarations, followed by 0 or more statements.
Consider:

%{
	int dflag;
%}
... other declarations ...
%%

prog : decls stats ;

decls : /* empty */
		{ dflag = 1; }
	| decls declaration
	;

stats : /* empty */
		{ dflag = 0; }
	| stats statement
	;

... other rules ...

The flag dflag
is now 0 when reading statements, and 1 when reading declarations,
except for the first token in the first statement.
This token must be seen by the parser before it can tell that
the declaration section has ended and the statements have begun.
In many cases, this single token exception does not
affect the lexical scan.
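The lexical analyzer's side of such a tie-in can be sketched in C as follows; the flag name dflag and the routine seen_name are assumptions made for this illustration, not part of any standard interface:

```c
#include <string.h>

/* Set by the parser actions: 1 while reading declarations,
   0 while reading statements (an assumed name for the global flag). */
int dflag = 1;

/* A toy symbol table; a real lexical analyzer would use hashing. */
static char symtab[64][32];
static int nsyms = 0;

/* Called when an identifier is scanned: enter the name into the
   symbol table only in declaration context; return the table size. */
int seen_name(const char *name)
{
	if (dflag && nsyms < 64) {
		strncpy(symtab[nsyms], name, 31);
		symtab[nsyms][31] = '\0';
		nsyms++;
	}
	return nsyms;
}
```

The parser actions flip dflag, and the lexical analyzer merely examines it; neither side needs any other knowledge of the other.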
This kind of ``backdoor'' approach can be elaborated
to a noxious degree.
Nevertheless, it represents a way of doing some things
that are difficult, if not impossible, to do otherwise.
Some programming languages
permit the user to use words like ``if'', which are normally reserved,
as label or variable names, provided that such use does not
conflict with the legal use of these names in the programming language.
This is extremely hard to do in the framework of Yacc;
it is difficult to pass information to the lexical analyzer
telling it ``this instance of `if' is a keyword, and that instance is a variable''.
The user can make a stab at it, using the
mechanism described in the last subsection,
but it is difficult.
A number of ways of making this easier are under advisement.
Until then, it is better that the keywords be reserved;
that is, be forbidden for use as variable names.
There are powerful stylistic reasons for preferring this, anyway.
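With the keywords reserved, the lexical analyzer can simply screen every identifier against a keyword table before returning it as a name. A minimal C sketch; the token values and the routine name screen are invented for this example, and in a real Yacc parser the token codes would come from y.tab.h:

```c
#include <string.h>

/* Assumed token codes for this illustration. */
#define IF    257
#define WHILE 258
#define NAME  259

static const struct { const char *word; int token; } keytab[] = {
	{ "if",    IF    },
	{ "while", WHILE },
};

/* Return the keyword token if s is reserved, otherwise NAME;
   thus ``if'' can never be returned as a variable name. */
int screen(const char *s)
{
	size_t i;
	for (i = 0; i < sizeof keytab / sizeof keytab[0]; i++)
		if (strcmp(s, keytab[i].word) == 0)
			return keytab[i].token;
	return NAME;
}
```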