.TH FLEX 1 "26 May 1990" "Version 2.3"
flex - fast lexical analyzer generator
.B flex [-bcdfinpstvFILT8 -C[efmF] -Sskeleton] [filename ...]
flex is a tool for generating scanners: programs which recognize
lexical patterns in text. flex reads
the given input files, or its standard input if no file names are given,
for a description of a scanner to generate. The description is in
the form of pairs of regular expressions and C code, called rules. flex
generates as output a C source file, lex.yy.c.
This file is compiled and linked with the -lfl
library to produce an executable. When the executable is run,
it analyzes its input for occurrences
of the regular expressions. Whenever it finds one, it executes
the corresponding C code.
First some simple examples to get the flavor of how one uses flex.
The following input specifies a scanner which, whenever it encounters
the string "username", will replace it with the user's login name:

    %%
    username    printf( "%s", getlogin() );
By default, any text not matched by a flex scanner
is copied to the output, so the net effect of this scanner is
to copy its input file to its output with each occurrence
of "username" expanded.
In this input, there is just one rule. "username" is the
pattern and the "printf" is the action.
The "%%" marks the beginning of the rules.
Here's another simple example:
        int num_lines = 0, num_chars = 0;

    %%
    \\n      ++num_lines; ++num_chars;
    .       ++num_chars;

    %%
    main()
        {
        yylex();
        printf( "# of lines = %d, # of chars = %d\\n",
                num_lines, num_chars );
        }
This scanner counts the number of characters and the number
of lines in its input (it produces no output other than the
final report on the counts). The first line
declares two globals, "num_lines" and "num_chars", which are accessible
both inside yylex() and in the main()
routine declared after the second "%%". There are two rules, one
which matches a newline ("\\n") and increments both the line count and
the character count, and one which matches any character other than
a newline (indicated by the "." regular expression).
A somewhat more complicated example:
    %{
    /* scanner for a toy Pascal-like language */

    /* need this for the call to atof() below */
    #include <math.h>
    %}

    DIGIT    [0-9]
    ID       [a-z][a-z0-9]*

    %%
    {DIGIT}+    {
        printf( "An integer: %s (%d)\\n", yytext,
                atoi( yytext ) );
        }

    {DIGIT}+"."{DIGIT}*    {
        printf( "A float: %s (%g)\\n", yytext,
                atof( yytext ) );
        }
if|then|begin|end|procedure|function {
printf( "A keyword: %s\\n", yytext );
{ID} printf( "An identifier: %s\\n", yytext );
"+"|"-"|"*"|"/" printf( "An operator: %s\\n", yytext );
"{"[^}\\n]*"}" /* eat up one-line comments */
[ \\t\\n]+ /* eat up whitespace */
. printf( "Unrecognized character: %s\\n", yytext );
    %%

    main( argc, argv )
    int argc;
    char **argv;
        {
        ++argv, --argc;  /* skip over program name */

        if ( argc > 0 )
            yyin = fopen( argv[0], "r" );
        else
            yyin = stdin;

        yylex();
        }
This is the beginnings of a simple scanner for a language like
Pascal. It identifies different types of tokens
and reports on what it has seen.
The details of this example will be explained in the following
sections.
.SH FORMAT OF THE INPUT FILE
The flex input file consists of three sections, separated by a line with just
"%%" in it:

    definitions
    %%
    rules
    %%
    user code

The definitions section contains declarations of simple name
definitions to simplify the scanner specification, and declarations of
start conditions, which are explained in a later section.
Name definitions have the form:

    name definition
The "name" is a word beginning with a letter or an underscore ('_')
followed by zero or more letters, digits, '_', or '-' (dash).
The definition is taken to begin at the first non-white-space character
following the name and continuing to the end of the line.
The definition can subsequently be referred to using "{name}", which
will expand to "(definition)". For example,

    DIGIT    [0-9]
    ID       [a-z][a-z0-9]*

defines "DIGIT" to be a regular expression which matches a
single digit, and
"ID" to be a regular expression which matches a letter
followed by zero-or-more letters-or-digits.
A subsequent reference to

    {DIGIT}+"."{DIGIT}*

is identical to

    ([0-9])+"."([0-9])*

and matches one-or-more digits followed by a '.' followed
by zero-or-more digits.
The rules section of the input contains a series of rules of the form:

    pattern    action

where the pattern must be unindented and the action must begin
on the same line.
See below for a further description of patterns and actions.
Finally, the user code section is simply copied to lex.yy.c verbatim.
It is used for companion routines which call or are called
by the scanner. The presence of this section is optional;
if it is missing, the second "%%"
in the input file may be skipped, too.
In the definitions and rules sections, any text enclosed in "%{" and "%}"
is copied verbatim to the output (with the %{}'s removed).
The %{}'s must appear unindented on lines by themselves.
In the rules section, any indented or %{} text appearing before the
first rule may be used to declare variables
which are local to the scanning routine and (after the declarations)
code which is to be executed whenever the scanning routine is entered.
Other indented or %{} text in the rule section is still copied to the output,
but its meaning is not well-defined and it may well cause compile-time
errors (this feature is present for POSIX
compliance; see below for other such features).
In the definitions section, an unindented comment (i.e., a line
beginning with "/*") is also copied verbatim to the output up
to the next "*/". Also, any line in the definitions section
beginning with '#' is ignored, though this style of comment is
deprecated and may go away in the future.
The patterns in the input are written using an extended set of regular
expressions. These are:
x match the character 'x'
. any character except newline
[xyz] a "character class"; in this case, the pattern
matches either an 'x', a 'y', or a 'z'
[abj-oZ] a "character class" with a range in it; matches
            an 'a', a 'b', any letter from 'j' through 'o',
            or a 'Z'
[^A-Z] a "negated character class", i.e., any character
but those in the class. In this case, any
character EXCEPT an uppercase letter.
[^A-Z\\n]    any character EXCEPT an uppercase letter or
            a newline
r*          zero or more r's, where r is any regular expression
r+          one or more r's
r?          zero or one r's (that is, "an optional r")
r{2,5}      anywhere from two to five r's
r{2,}       two or more r's
r{4}        exactly 4 r's
{name} the expansion of the "name" definition
"[xyz]\\"foo"    the literal string: [xyz]"foo
\\X          if X is an 'a', 'b', 'f', 'n', 'r', 't', or 'v',
            then the ANSI-C interpretation of \\X.
            Otherwise, a literal 'X' (used to escape
            operators such as '*')
\\123 the character with octal value 123
\\x2a the character with hexadecimal value 2a
(r)         match an r; parentheses are used to override
            precedence (see below)
rs the regular expression r followed by the
regular expression s; called "concatenation"
r/s an r but only if it is followed by an s. The
s is not part of the matched text. This type
            of pattern is called "trailing context".
^r an r, but only at the beginning of a line
r$          an r, but only at the end of a line. Equivalent
            to "r/\\n"
<s>r an r, but only in start condition s (see
below for discussion of start conditions)
<s1,s2,s3>r
            same, but in any of start conditions s1,
            s2, or s3
<<EOF>>     an end-of-file
<s1,s2><<EOF>>
            an end-of-file when in start condition s1 or s2
The regular expressions listed above are grouped according to
precedence, from highest precedence at the top to lowest at the bottom.
Those grouped together have equal precedence. For example,

    foo|bar*

is the same as

    (foo)|(ba(r*))

since the '*' operator has higher precedence than concatenation,
and concatenation higher than alternation ('|'). This pattern
therefore matches either the string "foo" or
the string "ba" followed by zero-or-more r's.
To match "foo" or zero-or-more "bar"'s, use:

    foo|(bar)*

and to match zero-or-more "foo"'s-or-"bar"'s:

    (foo|bar)*
A negated character class such as the example "[^A-Z]"
above will match a newline
unless "\\n" (or an equivalent escape sequence) is one of the
characters explicitly present in the negated character class
(e.g., "[^A-Z\\n]"). This is unlike how many other regular
expression tools treat negated character classes, but unfortunately
the inconsistency is historically entrenched.
Matching newlines means that a pattern like [^"]* can match an entire
input (overflowing the scanner's input buffer) unless there's another
quote in the input.
A rule can have at most one instance of trailing context (the '/' operator
or the '$' operator). The start condition, '^', and "<<EOF>>" patterns
can only occur at the beginning of a pattern, and, as well as with '/' and '$',
cannot be grouped inside parentheses. A '^' which does not occur at
the beginning of a rule or a '$' which does not occur at the end of
a rule loses its special properties and is treated as a normal character.
The following are illegal:

    foo/bar$
    <sc1>foo<sc2>bar

Note that the first of these can be written "foo/bar\\n".
The following will result in '$' or '^' being treated as a normal character:

    foo|(bar$)
    foo|^bar

If what's wanted is a "foo" or a bar-followed-by-a-newline, the following
could be used (the special '|' action is explained below):

    foo      |
    bar$     /* action goes here */
A similar trick will work for matching a foo or a
bar-at-the-beginning-of-a-line.
.SH HOW THE INPUT IS MATCHED
When the generated scanner is run, it analyzes its input looking
for strings which match any of its patterns. If it finds more than
one match, it takes the one matching the most text (for trailing
context rules, this includes the length of the trailing part, even
though it will then be returned to the input). If it finds two
or more matches of the same length, the rule listed first in the
input file is chosen.
Once the match is determined, the text corresponding to the match
is made available in the global character pointer yytext,
and its length in the global integer yyleng.
The action corresponding to the matched pattern is then executed (a more
detailed description of actions follows), and then the remaining
input is scanned for another match.
If no match is found, then the default rule
is executed: the next character in the input is considered matched and
copied to the standard output. Thus, the simplest legal
input is:

    %%

which generates a scanner that simply copies its input (one character
at a time) to its output.
Each pattern in a rule has a corresponding action, which can be any
arbitrary C statement. The pattern ends at the first non-escaped
whitespace character; the remainder of the line is its action. If the
action is empty, then when the pattern is matched the input token
is simply discarded. For example, here is the specification for a program
which deletes all occurrences of "zap me" from its input:

    %%
    "zap me"
(It will copy all other characters in the input to the output since
they will be matched by the default rule.)
Here is a program which compresses multiple blanks and tabs down to
a single blank, and throws away whitespace found at the end of a line:
    %%
    [ \\t]+     putchar( ' ' );
    [ \\t]+$    /* ignore this token */
If the action contains a '{', then the action spans until the balancing '}'
is found, and the action may cross multiple lines. flex
knows about C strings and comments and won't be fooled by braces found
within them, but also allows actions to begin with "%{" and
will consider the action to be all the text up to the next "%}"
(regardless of ordinary braces inside the action).
An action consisting solely of a vertical bar ('|') means "same as
the action for the next rule." See below for an illustration.
Actions can include arbitrary C code, including
statements to return a value to whatever routine called
yylex(). Each time yylex()
is called it continues processing tokens from where it last left
off until it either reaches
the end of the file or executes a return. Once it reaches an end-of-file,
however, then any subsequent call to yylex()
will simply immediately return, unless yyrestart()
is first called (see below).
Actions are not allowed to modify yytext or yyleng.
There are a number of special directives which can be included within
an action.
ECHO copies yytext to the scanner's output.
BEGIN followed by the name of a start condition places the scanner in the
corresponding start condition (see below).
REJECT directs the scanner to proceed on to the "second best" rule which matched the
input (or a prefix of the input). The rule is chosen as described
above in "How the Input is Matched", and yytext and yyleng
are set up appropriately.
It may either be one which matched as much text
as the originally chosen rule but came later in the
input file, or one which matched less text.
For example, the following will both count the
words in the input and call the routine special() whenever "frob" is seen:
    frob        special(); REJECT;
    [^ \\t\\n]+  ++word_count;

Without REJECT,
any "frob"'s in the input would not be counted as words, since the
scanner normally executes only one action per token.
Multiple REJECT's
are allowed, each one finding the next best choice to the currently
active rule. For example, when the following scanner scans the token
"abcd", it will write "abcdabcaba" to the output:
    %%
    a       |
    ab      |
    abc     |
    abcd    ECHO; REJECT;
    .|\\n    /* eat up any unmatched character */

(The first three rules share the fourth's action since they use
the special '|' action.)
REJECT
is a particularly expensive feature in terms of scanner performance;
if it is used in any
of the scanner's actions it will slow down
all
of the scanner's matching. Furthermore, REJECT
cannot be used with the -f or -F options (see below).
Note also that unlike the other special actions, REJECT is a "branch";
code immediately following it in the action will
not be executed.
yymore() tells the scanner that the next time it matches a rule, the corresponding
token should be appended
onto the current value of yytext
rather than replacing it. For example, given the input "mega-kludge"
the following will write "mega-mega-kludge" to the output:

    %%
    mega-    ECHO; yymore();
    kludge   ECHO;
First "mega-" is matched and echoed to the output. Then "kludge"
is matched, but the previous "mega-" is still hanging around at the
beginning of yytext
so the ECHO
for the "kludge" rule will actually write "mega-kludge".
The presence of yymore()
in the scanner's action entails a minor performance penalty in the
scanner's matching speed.
yyless(n) returns all but the first n
characters of the current token back to the input stream, where they
will be rescanned when the scanner looks for the next match.
yytext and yyleng
are adjusted appropriately (e.g., yyleng
will now be equal to n
). For example, on the input "foobar" the following will write out
"foobarbar":

    %%
    foobar    ECHO; yyless(3);
    [a-z]+    ECHO;
An argument of 0 to yyless
will cause the entire current input string to be scanned again. Unless you've
changed how the scanner will subsequently process its input (using
BEGIN, for example), this will result in an endless loop.
unput(c) puts the character c
back onto the input stream. It will be the next character scanned.
The following action will take the current token and cause it
to be rescanned enclosed in parentheses.

    {
    int i;

    unput( ')' );
    for ( i = yyleng - 1; i >= 0; --i )
        unput( yytext[i] );
    unput( '(' );
    }
Note that since each unput()
puts the given character back at the beginning
of the input stream, pushing back strings must be done back-to-front.
input() reads the next character from the input stream. For example,
the following is one way to eat up C comments:

    %%
    "/*"    {
        register int c;

        for ( ; ; )
            {
            while ( (c = input()) != '*' &&
                    c != EOF )
                ;    /* eat up text of comment */

            if ( c == '*' )
                {
                while ( (c = input()) == '*' )
                    ;
                if ( c == '/' )
                    break;    /* found the end */
                }

            if ( c == EOF )
                {
                error( "EOF in comment" );
                break;
                }
            }
        }
(Note that if the scanner is compiled using C++, then input()
is instead referred to as yyinput()
in order to avoid a name clash with the C++ stream by the name of input.)
yyterminate() can be used in lieu of a return statement in an action. It terminates
the scanner and returns a 0 to the scanner's caller, indicating "all done".
Subsequent calls to the scanner will immediately return unless preceded
by a call to yyrestart() (see below).
By default, yyterminate()
is also called when an end-of-file is encountered. It is a macro and
may be redefined.
.SH THE GENERATED SCANNER
The output of flex is the file lex.yy.c,
which contains the scanning routine yylex(),
a number of tables used by it for matching tokens, and a number
of auxiliary routines and macros. By default, yylex()
is declared as follows:

    int yylex()
        {
        ... various definitions and the actions in here ...
        }
(If your environment supports function prototypes, then it will
be "int yylex( void )".) This definition may be changed by redefining
the "YY_DECL" macro. For example, you could use:
#define YY_DECL float lexscan( a, b ) float a, b;
to give the scanning routine the name lexscan,
returning a float, and taking two floats as arguments. Note that
if you give arguments to the scanning routine using a
K&R-style/non-prototyped function declaration, you must terminate
the definition with a semi-colon (;).
Whenever yylex()
is called, it scans tokens from the global input file yyin
(which defaults to stdin). It continues until it either reaches
an end-of-file (at which point it returns the value 0) or
one of its actions executes a
In the former case, when called again the scanner will immediately
return, unless yyrestart() is first called.
In the latter case (i.e., when an action
executes a return), the scanner may then be called again and it
will resume scanning where it left off.
By default (and for purposes of efficiency), the scanner uses
block-reads rather than simple getc()
calls to read characters from yyin.
The nature of how it gets its input can be controlled by redefining the
YY_INPUT macro.
YY_INPUT's calling sequence is "YY_INPUT(buf,result,max_size)". Its
action is to place up to max_size
characters in the character array buf
and return in the integer variable result either the
number of characters read or the constant YY_NULL (0 on Unix systems)
to indicate EOF. The default YY_INPUT reads from the
global file-pointer "yyin".
A sample redefinition of YY_INPUT (in the definitions
section of the input file):
#define YY_INPUT(buf,result,max_size) \\
result = ((buf[0] = getchar()) == EOF) ? YY_NULL : 1;
This definition will change the input processing to occur
one character at a time.
You also can add in things like keeping track of the
input line number this way; but don't expect your scanner to
go very fast.
When the scanner receives an end-of-file indication from YY_INPUT,
it then checks the yywrap() function. If yywrap()
returns false (zero), then it is assumed that the
function has gone ahead and set up yyin
to point to another input file, and scanning continues. If it returns
true (non-zero), then the scanner terminates, returning 0 to its
caller.
The default yywrap()
always returns 1. Presently, to redefine it you must first
"#undef yywrap", as it is currently implemented as a macro. As indicated
by the hedging in the previous sentence, it may be changed to
a true function in the near future.
The scanner writes its ECHO
output to the yyout
global (default, stdout), which may be redefined by the user simply
by assigning it to some other FILE pointer.
.SH START CONDITIONS
flex provides a mechanism for conditionally activating rules. Any rule
whose pattern is prefixed with "<sc>" will only be active when
the scanner is in the start condition named "sc". For example,
<STRING>[^"]* { /* eat up the string body ... */
will be active only when the scanner is in the "STRING" start
condition, and the rule
<INITIAL,STRING,QUOTE>\\. { /* handle an escape ... */
will be active only when the current start condition is
either "INITIAL", "STRING", or "QUOTE".
Start conditions
are declared in the definitions (first) section of the input
using unindented lines beginning with either %s or %x
followed by a list of names.
The former declares inclusive
start conditions, the latter exclusive
start conditions. A start condition is activated using the BEGIN
action. Until the next BEGIN
action is executed, rules with the given start
condition will be active and
rules with other start conditions will be inactive.
If the start condition is inclusive,
then rules with no start conditions at all will also be active.
If it is exclusive, then only
rules qualified with the start condition will be active.
A set of rules contingent on the same exclusive start condition
describe a scanner which is independent of any of the other rules in the
input. Because of this,
exclusive start conditions make it easy to specify "mini-scanners"
which scan portions of the input that are syntactically different
from the rest (e.g., comments).
If the distinction between inclusive and exclusive start conditions
is still a little vague, here's a simple example illustrating the
connection between the two. The set of rules:

    %s example
    %%
    <example>foo    /* do something */

is equivalent to

    %x example
    %%
    <INITIAL,example>foo    /* do something */

Also note that the default rule (which ECHO's
any unmatched character) remains active in start conditions.
BEGIN(0) returns to the original state where only the rules with
no start conditions are active. This state can also be
referred to as the start-condition "INITIAL", so
BEGIN(INITIAL) is equivalent to BEGIN(0).
(The parentheses around the start condition name are not required but
are considered good style.)
BEGIN
actions can also be given as indented code at the beginning
of the rules section. For example, the following will cause
the scanner to enter the "SPECIAL" start condition whenever yylex()
is called and the global variable enter_special
is true:

    int enter_special;

    %x SPECIAL
    %%
        if ( enter_special )
            BEGIN(SPECIAL);

    <SPECIAL>blahblahblah
    ...more rules follow...
To illustrate the uses of start conditions,
here is a scanner which provides two different interpretations
of a string like "123.456". By default it will treat it
as three tokens: the integer "123", a dot ('.'), and the integer "456".
But if the string is preceded earlier in the line by the string
"expect-floats"
it will treat it as a single token, the floating-point number
123.456:
    %{
    #include <math.h>
    %}
    %s expect

    %%
    expect-floats    BEGIN(expect);

    <expect>[0-9]+"."[0-9]+    {
        printf( "found a float, = %f\\n",
                atof( yytext ) );
        }
    <expect>\\n    {
        /* that's the end of the line, so
         * we need another "expect-floats"
         * before we'll recognize any more
         * numbers
         */
        BEGIN(INITIAL);
        }

    [0-9]+    {
        printf( "found an integer, = %d\\n",
                atoi( yytext ) );
        }

    "."    printf( "found a dot\\n" );
Here is a scanner which recognizes (and discards) C comments while
maintaining a count of the current input line.

    %x comment
    %%
        int line_num = 1;

    "/*"    BEGIN(comment);

    <comment>[^*\\n]*       /* eat anything that's not a '*' */
    <comment>"*"+[^*/\\n]*  /* eat up '*'s not followed by '/'s */
    <comment>\\n            ++line_num;
    <comment>"*"+"/"       BEGIN(INITIAL);
Note that start-condition names are really integer values and
can be stored as such. Thus, the above could be extended in the
following fashion:

    %x comment foo
    %%
        int line_num = 1;
        int comment_caller;

    "/*"    {
        comment_caller = INITIAL;
        BEGIN(comment);
        }

    ...

    <foo>"/*"    {
        comment_caller = foo;
        BEGIN(comment);
        }

    <comment>[^*\\n]*       /* eat anything that's not a '*' */
    <comment>"*"+[^*/\\n]*  /* eat up '*'s not followed by '/'s */
    <comment>\\n            ++line_num;
    <comment>"*"+"/"       BEGIN(comment_caller);
One can then implement a "stack" of start conditions using an
array of integers. (It is likely that such stacks will become
a standard flex feature in the future.) Note, though, that
start conditions do not have their own name-space; %s's and %x's
declare names in the same fashion as #define's.
.SH MULTIPLE INPUT BUFFERS
Some scanners (such as those which support "include" files)
require reading from several input streams. As flex
scanners do a large amount of buffering, one cannot control
where the next input will be read from by simply writing a
YY_INPUT which is sensitive to the scanning context. YY_INPUT
is only called when the scanner reaches the end of its buffer, which
may be a long time after scanning a statement such as an "include"
which requires switching the input source.
To negotiate these sorts of problems, flex
provides a mechanism for creating and switching between multiple
input buffers. An input buffer is created by using:
    YY_BUFFER_STATE yy_create_buffer( FILE *file, int size )

which takes a FILE
pointer and a size and creates a buffer associated with the given
file and large enough to hold size
characters (when in doubt, use YY_BUF_SIZE
for the size). It returns a YY_BUFFER_STATE
handle, which may then be passed to other routines:
void yy_switch_to_buffer( YY_BUFFER_STATE new_buffer )
switches the scanner's input buffer so subsequent tokens will
come from new_buffer.
Note that yy_switch_to_buffer()
may be used by yywrap() to set things up for continued scanning, instead
of opening a new file and pointing yyin at it.
void yy_delete_buffer( YY_BUFFER_STATE buffer )
is used to reclaim the storage associated with a buffer.
yy_new_buffer()
is an alias for yy_create_buffer(),
provided for compatibility with the C++ use of new and delete
for creating and destroying dynamic objects.
Finally, the YY_CURRENT_BUFFER
macro returns a YY_BUFFER_STATE
handle to the current buffer.
Here is an example of using these features for writing a scanner
which expands include files (the <<EOF>>
feature is discussed below):
    /* the "incl" state is used for picking up the name
     * of an include file
     */
    %x incl

    %{
    #define MAX_INCLUDE_DEPTH 10
    YY_BUFFER_STATE include_stack[MAX_INCLUDE_DEPTH];
    int include_stack_ptr = 0;
    %}

    %%
    include    BEGIN(incl);

    <incl>[ \\t]*      /* eat the whitespace */
    <incl>[^ \\t\\n]+   { /* got the include file name */
        if ( include_stack_ptr >= MAX_INCLUDE_DEPTH )
            {
            fprintf( stderr, "Includes nested too deeply" );
            exit( 1 );
            }

        include_stack[include_stack_ptr++] =
            YY_CURRENT_BUFFER;

        yyin = fopen( yytext, "r" );

        yy_switch_to_buffer(
            yy_create_buffer( yyin, YY_BUF_SIZE ) );

        BEGIN(INITIAL);
        }

    <<EOF>>    {
        if ( --include_stack_ptr < 0 )
            yyterminate();

        else
            yy_switch_to_buffer(
                include_stack[include_stack_ptr] );
        }
The special rule "<<EOF>>" indicates
actions which are to be taken when an end-of-file is
encountered and yywrap() returns non-zero (i.e., indicates
no further files to process). The action must finish
by doing one of four things: assigning yyin
to point at a new file to process;
executing a return statement;
executing the special yyterminate() action;
or, switching to a new buffer using yy_switch_to_buffer()
as shown in the example above.
<<EOF>> rules may not be used with other
patterns; they may only be qualified with a list of start
conditions. If an unqualified <<EOF>> rule is given, it
applies to all
start conditions which do not already have <<EOF>> actions. To
specify an <<EOF>> rule for only the initial start condition, use

    <INITIAL><<EOF>>
These rules are useful for catching things like unclosed comments.
An example:

    %x quote
    %%

    ...other rules for dealing with quotes...

    <quote><<EOF>>    {
        error( "unterminated quote" );
        yyterminate();
        }
    <<EOF>>    {
        if ( *++filelist )
            yyin = fopen( *filelist, "r" );
        else
            yyterminate();
        }
The macro YY_USER_ACTION
can be redefined to provide an action
which is always executed prior to the matched rule's action. For example,
it could be #define'd to call a routine to convert yytext to lower-case.
The macro YY_USER_INIT
may be redefined to provide an action which is always executed before
the first scan (and before the scanner's internal initializations are done).
For example, it could be used to call a routine to read
in a data table or open a logging file.
In the generated scanner, the actions are all gathered in one large
switch statement and separated using YY_BREAK,
which may be redefined. By default, it is simply a "break", to separate
each rule's action from the following rule's.
This allows, for example, C++ users to
#define YY_BREAK to do nothing (while being very careful that every
rule ends with a "break" or a "return"!) to avoid suffering from
unreachable statement warnings where, because a rule's action ends with
"return", the YY_BREAK is inaccessible.
.SH INTERFACING WITH YACC
yacc parsers expect to call a routine named yylex()
to find the next input token. The routine is supposed to
return the type of the next token as well as putting any associated
value in the global yylval.
To use flex with yacc, one specifies the -d option to yacc
to instruct it to generate the file y.tab.h
containing definitions of all the %tokens appearing in the yacc
input. This file is then included in the flex
scanner. For example, if one of the tokens is "TOK_NUMBER",
part of the scanner might look like:

    %{
    #include "y.tab.h"
    %}

    %%

    [0-9]+    yylval = atoi( yytext ); return TOK_NUMBER;
In the name of POSIX compliance, flex
supports a translation table
for mapping input characters into groups.
The table is specified in the first section, and its format looks like:

    %t
    1    abcd
    2    ABCDEFGHIJKLMNOPQRSTUVWXYZ
    52   0123456789
    6    \\t\\ \\n
    %t
This example specifies that the characters 'a', 'b', 'c', and 'd'
are to all be lumped into group #1, upper-case letters
in group #2, digits in group #52, tabs, blanks, and newlines into
group #6, and
no other characters will appear in the patterns.
The group numbers are actually disregarded by flex;
the %t table
serves, though, to lump characters together. Given the above
table, for example, the pattern "a(AA)*5" is equivalent to "d(ZQ)*0".
They both say, "match any character in group #1, followed by
zero-or-more pairs of characters
from group #2, followed by a character from group #52." Thus %t
provides a crude way for introducing equivalence classes into
the scanner specification.
The -Ce
option (see below) coupled with the equivalence classes which flex
automatically generates take care of virtually all the instances
when one might consider using %t.
But what the hell, it's there if you want it.
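The "lumping" a %t table performs can be sketched as a map from characters to group numbers; two strings are then indistinguishable to such a scanner exactly when their group sequences match. The functions below are illustrative and cover only the groups from the example above:

```c
#include <string.h>

/* Character-to-group map in the spirit of the %t example:
 * 'a'..'d' -> 1, 'A'..'Z' -> 2, '0'..'9' -> 52. */
int group_of( int c )
    {
    if ( c >= 'a' && c <= 'd' ) return 1;
    if ( c >= 'A' && c <= 'Z' ) return 2;
    if ( c >= '0' && c <= '9' ) return 52;
    return 0;    /* not in any declared group */
    }

/* Two strings walk the same groups iff they have the same length
 * and each character pair maps to the same group. */
int same_groups( const char *p, const char *q )
    {
    if ( strlen( p ) != strlen( q ) )
        return 0;
    while ( *p )
        if ( group_of( *p++ ) != group_of( *q++ ) )
            return 0;
    return 1;
    }
```

With this map, "aAA5" and "dZQ0" both walk groups 1, 2, 2, 52, mirroring the claim that "a(AA)*5" and "d(ZQ)*0" are equivalent patterns.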
.SH OPTIONS
flex has the following options:
-b    Generate backtracking information to lex.backtrack.
This is a list of scanner states which require backtracking
and the input characters on which they do so. By adding rules one
can remove backtracking states. If all backtracking states
are eliminated and -f or -F
is used, the generated scanner will run faster (see the -p
flag). Only users who wish to squeeze every last cycle out of their
scanners need worry about this option. (See the section on PERFORMANCE
CONSIDERATIONS below.)
-c    is a do-nothing, deprecated option included for POSIX compliance.
NOTE: in previous releases of flex, -c
specified table-compression options. This functionality is
now given by the -C
flag. To ease the impact of this change, when
flex encounters -c,
it currently issues a warning message and assumes that -C
was desired instead. In the future this "promotion" of -c to -C
will go away in the name of full POSIX compliance (unless
the POSIX meaning is removed first).
-d    makes the generated scanner run in debug
mode. Whenever a pattern is recognized and the global yy_flex_debug
is non-zero (which is the default),
the scanner will write to stderr a line of the form:

    --accepting rule at line 53 ("the matched text")
The line number refers to the location of the rule in the file
defining the scanner (i.e., the file that was fed to flex). Messages
are also generated when the scanner backtracks, accepts the
default rule, reaches the end of its input buffer (or encounters
a NUL; at this point, the two look the same as far as the scanner's concerned),
or reaches an end-of-file.
-f    specifies (take your pick) full table or fast scanner.
No table compression is done. The result is large but fast.
This option is equivalent to -Cf (see below).
-i    instructs flex to generate a case-insensitive
scanner. The case of letters given in the input patterns will
be ignored, and tokens in the input will be matched regardless of case. The
matched text given in yytext
will have the preserved case (i.e., it will not be folded).
-n    is another do-nothing, deprecated option included only for
POSIX compliance.
-p    generates a performance report to stderr. The report
consists of comments regarding features of the
input file which will cause a loss of performance in the resulting scanner.
Note that the use of REJECT
and variable trailing context (see the BUGS section in flex(1))
entails a substantial performance penalty; use of yymore(),
the '^' operator, and the -I
flag entail minor performance penalties.
-s    causes the default rule
(that unmatched scanner input is echoed to stdout)
to be suppressed. If the scanner encounters input that does not
match any of its rules, it aborts with an error. This option is
useful for finding holes in a scanner's rule set.
-t    instructs flex
to write the scanner it generates to standard output instead
of lex.yy.c.
-v    specifies that flex should write to stderr
a summary of statistics regarding the scanner it generates.
Most of the statistics are meaningless to the casual flex
user, but the
first line identifies the version of flex,
which is useful for figuring
out where you stand with respect to patches and new releases,
and the next two lines give the date when the scanner was created
and a summary of the flags which were in effect.
-F    specifies that the fast
scanner table representation should be used. This representation is
about as fast as the full table representation
and for some sets of patterns will be considerably smaller (and for
others, larger). In general, if the pattern set contains both "keywords"
and a catch-all, "identifier" rule, such as in the set:
"switch" return TOK_SWITCH;
"default" return TOK_DEFAULT;
then you're better off using the full table representation. If only
the "identifier" rule is present and you then use a hash table or some such
to detect the keywords, you're better off using
This option is equivalent to -CF (see below).
-I    instructs flex to generate an interactive
scanner. Normally, scanners generated by flex
always look ahead one
character before deciding that a rule has been matched. At the cost of
some scanning overhead, flex
will generate a scanner which only looks ahead
when needed. Such scanners are called interactive
because if you want to write a scanner for an interactive system such as a
command shell, you will probably want the user's input to be terminated
with a newline, and without -I
the user will have to type a character in addition to the newline in order
to have the newline recognized. This leads to dreadful interactive
performance.
If all this seems too confusing, here's the general rule: if a human will
be typing in input to your scanner, use -I,
otherwise don't; if you don't care about squeezing the utmost performance
from your scanner and you
don't want to make any assumptions about the input to your scanner,
use -I.
Note, -I
cannot be used in conjunction with full or fast tables,
i.e., the -f, -F, -Cf, or -CF flags.
-L    instructs flex
not to generate #line
directives. Without this option, flex
peppers the generated scanner
with #line directives so error messages in the actions will be correctly
located with respect to the original flex
input file, and not to
the fairly meaningless line numbers of lex.yy.c.
(Unfortunately flex
does not presently generate the necessary directives
to "retarget" the line numbers for those parts of lex.yy.c
which it generated. So if there is an error in the generated code,
a meaningless line number is reported.)
-T    makes flex run in trace
mode. It will generate a lot of messages to stdout concerning
the form of the input and the resultant non-deterministic and deterministic
finite automata. This option is mostly for use in maintaining flex.
-8    instructs flex
to generate an 8-bit scanner, i.e., one which can recognize 8-bit
characters. On some sites,
is installed with this option as the default. On others, the default
is 7-bit characters. To see which is the case, check the verbose
output for "equivalence classes created". If the denominator of
the number shown is 128, then by default flex
is generating 7-bit characters. If it is 256, then the default is
8-bit characters and the -8
flag is not required (but may be a good idea to keep the scanner
specification portable). Feeding a 7-bit scanner 8-bit characters
will result in infinite loops, bus errors, or other such fireworks,
so when in doubt, use the flag. Note that if equivalence classes
are used, 8-bit scanners take only slightly more table space than
7-bit scanners (128 bytes, to be exact); if equivalence classes are
not used, however, then the tables may grow up to twice their
7-bit size.
-C[efmF]    controls the degree of table compression.
-Ce    directs flex to construct equivalence classes,
i.e., sets of characters
which have identical lexical properties (for example, if the only
appearance of digits in the
input is in the character class
"[0-9]" then the digits '0', '1', ..., '9' will all be put
in the same equivalence class). Equivalence classes usually give
dramatic reductions in the final table/object file sizes (typically
a factor of 2-5) and are pretty cheap performance-wise (one array
look-up per character scanned).
-Cf    specifies that the full
scanner tables should be generated -
flex should not compress the
tables by taking advantage of similar transition functions for
different states.
-CF    specifies that the alternate fast scanner representation (described
above under the -F flag)
should be used.
-Cm    directs flex to construct meta-equivalence classes,
which are sets of equivalence classes (or characters, if equivalence
classes are not being used) that are commonly used together. Meta-equivalence
classes are often a big win when using compressed tables, but they
have a moderate performance impact (one or two "if" tests and one
array look-up per character scanned).
A lone -C
specifies that the scanner tables should be compressed but neither
equivalence classes nor meta-equivalence classes should be used.
The options -Cf or -CF and -Cm
do not make sense together - there is no opportunity for meta-equivalence
classes if the table is not being compressed. Otherwise the options
may be freely mixed.
The default setting is -Cem,
which specifies that flex
should generate equivalence classes
and meta-equivalence classes. This setting provides the highest
degree of table compression. You can trade off
faster-executing scanners at the cost of larger tables with
the following generally being true:
.nf

    slowest & smallest
          -Cem
          -Cm
          -Ce
          -C
          -C{f,F}e
          -C{f,F}
    fastest & largest

.fi
Note that scanners with the smallest tables are usually generated and
compiled the quickest, so
during development you will usually want to use the default, maximal
compression.
.B -Cfe
is often a good compromise between speed and size for production
scanners.
.B -C
options are not cumulative; whenever the flag is encountered, the
previous -C settings are forgotten.
.B -Sskeleton
overrides the default skeleton file from which
.I flex
constructs its scanners. You'll never need this option unless you are doing
.I flex
maintenance or development.
.SH PERFORMANCE CONSIDERATIONS
The main design goal of
.I flex
is that it generate high-performance scanners. It has been optimized
for dealing well with large sets of rules. Aside from the effects
of table compression on scanner speed outlined above,
there are a number of options/actions which degrade performance. These
are, from most expensive to least:
.nf

    REJECT
    pattern sets that require backtracking
    arbitrary trailing context
    '^' beginning-of-line operator
    yymore()

.fi
with the first three all being quite expensive and the last two
being quite cheap.
.I REJECT
should be avoided at all costs when performance is important.
It is a particularly expensive option.
Getting rid of backtracking is messy and often may be an enormous
amount of work for a complicated scanner. In principle, one begins
by using the
.B -b
flag to generate a
.I lex.backtrack
file. For example, on the input
.nf
%%
foo        return TOK_KEYWORD;
foobar     return TOK_KEYWORD;
.fi
the resulting backtracking report includes:
.nf
State #6 is non-accepting -
 associated rule line numbers:
       2       3
 out-transitions: [ o ]
 jam-transitions: EOF [ \\001-n  p-\\177 ]

State #8 is non-accepting -
 associated rule line numbers:
       3
 out-transitions: [ a ]
 jam-transitions: EOF [ \\001-` b-\\177 ]

State #9 is non-accepting -
 associated rule line numbers:
       3
 out-transitions: [ r ]
 jam-transitions: EOF [ \\001-q s-\\177 ]

Compressed tables always backtrack.
.fi
The first few lines tell us that there's a scanner state in
which it can make a transition on an 'o' but not on any other
character, and that in that state the currently scanned text does not match
any rule. The state occurs when trying to match the rules found
at lines 2 and 3 in the input file.
If the scanner is in that state and then reads
something other than an 'o', it will have to backtrack to find
a rule which is matched. With
a bit of headscratching one can see that this must be the
state it's in when it has seen "fo". When this has happened,
if anything other than another 'o' is seen, the scanner will
have to back up to simply match the 'f' (by the default rule).
The comment regarding State #8 indicates there's a problem
when "foob" has been scanned. Indeed, on any character other
than a 'b', the scanner will have to back up to accept "foo".
Similarly, the comment for State #9 concerns when "fooba" has
been scanned.
The final comment reminds us that there's no point going to
all the trouble of removing backtracking from the rules unless
we're using
.B -Cf
or
.B -CF,
since there's no performance gain doing so with compressed scanners.
The way to remove the backtracking is to add "error" rules:
.nf
%%
foo         return TOK_KEYWORD;
foobar      return TOK_KEYWORD;

fooba       |
foob        |
fo          {
            /* false alarm, not really a keyword */
            return TOK_ID;
            }
.fi
Eliminating backtracking among a list of keywords can also be
done using a "catch-all" rule:
.nf
%%
foo         return TOK_KEYWORD;
foobar      return TOK_KEYWORD;
[a-z]+      return TOK_ID;
.fi
This is usually the best solution when appropriate.
Backtracking messages tend to cascade.
With a complicated set of rules it's not uncommon to get hundreds
of messages. If one can decipher them, though, it often
only takes a dozen or so rules to eliminate the backtracking (though
it's easy to make a mistake and have an error rule accidentally match
a valid token. A possible future
feature will be to automatically add rules to eliminate backtracking).
.I Variable
trailing context (where both the leading and trailing parts do not have
a fixed length) entails almost the same performance loss as
.I REJECT
(i.e., substantial). So when possible a rule like:
.nf
%%
mouse|rat/(cat|dog)   run();
.fi
is better written:
.nf
%%
mouse/cat|dog         run();
rat/cat|dog           run();
.fi
or as
.nf
%%
mouse|rat/cat         run();
mouse|rat/dog         run();
.fi
Note that here the special '|' action does
.I not
provide any savings, and can even make things worse (see
Another area where the user can increase a scanner's performance
(and one that's easier to implement) arises from the fact that
the longer the tokens matched, the faster the scanner will run.
This is because with long tokens the processing of most input
characters takes place in the (short) inner scanning loop, and
does not often have to go through the additional work of setting up
the scanning environment (e.g.,
.B yy_start )
for the action. Recall the scanner for C comments:
.nf
%x comment
%%
        int line_num = 1;

"/*"                    BEGIN(comment);

<comment>[^*\\n]*
<comment>"*"+[^*/\\n]*
<comment>\\n             ++line_num;
<comment>"*"+"/"        BEGIN(INITIAL);
.fi
This could be sped up by writing it as:
.nf
%x comment
%%
        int line_num = 1;

"/*"                    BEGIN(comment);

<comment>[^*\\n]*
<comment>[^*\\n]*\\n      ++line_num;
<comment>"*"+[^*/\\n]*
<comment>"*"+[^*/\\n]*\\n ++line_num;
<comment>"*"+"/"        BEGIN(INITIAL);
.fi
Now instead of each newline requiring the processing of another
action, recognizing the newlines is "distributed" over the other rules
to keep the matched text as long as possible. Note that
adding rules does
.I not
slow down the scanner! The speed of the scanner is independent
of the number of rules or (modulo the considerations given at the
beginning of this section) how complicated the rules are with
regard to operators such as '*' and '|'.
A final example in speeding up a scanner: suppose you want to scan
through a file containing identifiers and keywords, one per line
and with no other extraneous characters, and recognize all the
keywords. A natural first approach is:
.nf
    %%
    asm      |
    auto     |
    break    |
    ... etc ...
    volatile |
    while    /* it's a keyword */

    .|\\n     /* it's not a keyword */
.fi
To eliminate the back-tracking, introduce a catch-all rule:
.nf
    %%
    asm      |
    auto     |
    break    |
    ... etc ...
    volatile |
    while    /* it's a keyword */

    [a-z]+   /* it's not a keyword */
    .|\\n     /* it's none of the above */
.fi
Now, if it's guaranteed that there's exactly one word per line,
then we can reduce the total number of matches by a half by
merging in the recognition of newlines with that of the other
tokens:
.nf
    %%
    asm\\n      |
    auto\\n     |
    break\\n    |
    ... etc ...
    volatile\\n |
    while\\n    /* it's a keyword */

    [a-z]+\\n   /* it's not a keyword */
    .|\\n       /* it's none of the above */
.fi
One has to be careful here, as we have now reintroduced backtracking
into the scanner. In particular, while
.I we
know that there will never be any characters in the input stream
other than letters or newlines,
.I flex
can't figure this out, and it will plan for possibly needing backtracking
when it has scanned a token like "auto" and then the next character
is something other than a newline or a letter. Previously it would
then just match the "auto" rule and be done, but now it has no "auto"
rule, only an "auto\\n" rule. To eliminate the possibility of backtracking,
we could either duplicate all rules but without final newlines, or,
since we never expect to encounter such an input and therefore don't
care how it's classified, we can introduce one more catch-all rule, this
one which doesn't include a newline:
.nf
    %%
    asm\\n      |
    auto\\n     |
    break\\n    |
    ... etc ...
    volatile\\n |
    while\\n    /* it's a keyword */

    [a-z]+\\n   |
    [a-z]+     /* it's not a keyword */
    .|\\n       /* it's none of the above */
.fi
Compiled with
.B -Cf,
this is about as fast as one can get a
.I flex
scanner to go for this particular problem.
A final note:
.I flex
is slow when matching NUL's, particularly when a token contains
multiple NUL's.
It's best to write rules which match
.I short
amounts of text if it's anticipated that the text will often include NUL's.
.SH INCOMPATIBILITIES WITH LEX AND POSIX
.I flex
is a rewrite of the AT&T Unix
.I lex
tool (the two implementations do not share any code, though),
with some extensions and incompatibilities, both of which
are of concern to those who wish to write scanners acceptable
to either implementation. At present, the POSIX
.I lex
draft is
very close to the original
.I lex
implementation, so some of these
incompatibilities are also in conflict with the POSIX draft. But
the intent is that except as noted below,
.I flex
as it presently stands will
ultimately be POSIX conformant (i.e., that those areas of conflict with
the POSIX draft will be resolved in
.I flex's
favor). Please bear in
mind that all the comments which follow are with regard to the POSIX
standard of Summer 1989, and not the final document (or subsequent
drafts); they are included so
.I flex
users can be aware of the standardization issues and those areas where
.I flex
may in the near future undergo changes incompatible with
its current definition.
.I flex
is fully compatible with
.I lex
with the following exceptions:
.I lex
does not support exclusive start conditions (%x), though they
are in the current POSIX draft.
When definitions are expanded,
.I flex
encloses them in parentheses. With
.I lex,
the following:
.nf
NAME    [A-Z][A-Z0-9]*
%%
foo{NAME}?      printf( "Found it\\n" );
%%
.fi
will not match the string "foo" because when the macro
is expanded the rule is equivalent to "foo[A-Z][A-Z0-9]*?"
and the precedence is such that the '?' is associated with
"[A-Z0-9]*". With
.I flex,
the rule will be expanded to
"foo([A-Z][A-Z0-9]*)?" and so the string "foo" will match.
Note that because of this, the
.B ^, $, <s>, /,
and
.B <<EOF>>
operators cannot be used in a
.I flex
definition.
The POSIX draft interpretation is the same as
.I flex's.
To specify a character class which matches anything but a right bracket (']'),
one can use "[^]]" with
.I lex,
but with
.I flex
one must use "[^\\]]". The latter works with
.I lex,
too.
The undocumented
.I lex
scanner internal variable
.B yylineno
is not supported. (The variable is not part of the POSIX draft.)
The
.B input()
routine is not redefinable, though it may be called to read characters
following whatever has been matched by a rule. If
.B input()
encounters an end-of-file the normal
.B yywrap()
processing is done. A ``real'' end-of-file is returned by
.B input()
as
.I EOF.
Input is instead controlled by redefining the
.B YY_INPUT
macro.
The
.I flex
restriction that
.B input()
cannot be redefined is in accordance with the POSIX draft, but
.B YY_INPUT
has not yet been accepted into the draft.
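A
.B YY_INPUT
redefinition can be sketched as follows. This is a definitions-section fragment only; the character-at-a-time body using getchar() is illustrative, not the skeleton's default, and the leading #undef is defensive in case the default definition is already in scope:

```lex
%{
/* Illustrative sketch: read input one character at a time from stdin.
   buf, result and max_size are the names YY_INPUT is invoked with. */
#undef YY_INPUT
#define YY_INPUT(buf,result,max_size) \
        { \
        int c = getchar(); \
        result = ( c == EOF ) ? YY_NULL : ( buf[0] = c, 1 ); \
        }
%}
```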
The
.B output()
routine is not supported.
Output from the
.B ECHO
macro is done to the file-pointer
.I yyout
(default stdout).
The POSIX draft mentions that an
routine exists but currently gives no details as to what it does.
The
.I lex
.B -r
(generate a Ratfor scanner) option is not supported. It is not part
of the POSIX draft.
If you are providing your own yywrap() routine, you must include a
"#undef yywrap" in the definitions section (section 1). Note that
the "#undef" will have to be enclosed in %{}'s.
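Putting the points above together, a minimal specification that supplies its own yywrap() might look like this (a sketch; the single ECHO rule is illustrative only):

```lex
%{
/* flex defines yywrap() as a macro in its skeleton; remove that
   definition before supplying our own routine, per the text above. */
#undef yywrap
%}
%%
.|\n    ECHO;
%%
int yywrap()
        {
        /* tell the scanner there is no further input */
        return 1;
        }
```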
The current POSIX draft
specifies that yywrap() is a function and this is unlikely to change; so
.I flex
users are warned that
.B yywrap()
is likely to be changed to a function in the near future.
After a call to
.B yymore(),
.I yytext
and
.I yyleng
are undefined until the next token is matched. This is not the case with
.I lex
or the present POSIX draft.
The precedence of the
.B {}
(numeric range) operator is different.
.I lex
interprets "abc{1,3}" as "match one, two, or
three occurrences of 'abc'", whereas
.I flex
interprets it as "match 'ab'
followed by one, two, or three occurrences of 'c'". The latter is
in agreement with the current POSIX draft.
The precedence of the
.B ^
operator is different.
.I lex
interprets "^foo|bar" as "match either 'foo' at the beginning of a line,
or 'bar' anywhere", whereas
.I flex
interprets it as "match either 'foo' or 'bar' if they come at the beginning
of a line". The latter is in agreement with the current POSIX draft.
To refer to yytext outside of the scanner source file,
the correct definition with
.I flex
is "extern char *yytext" rather than "extern char yytext[]".
This is contrary to the current POSIX draft but a point on which
will not be changing, as the array representation entails a
serious performance penalty. It is hoped that the POSIX draft will
be emended to support the
.I flex
variety of declaration (as this is a fairly painless change to
require).
.I yyin
is initialized by
.I lex
to be
.I stdin;
.I flex,
on the other hand, initializes
.I yyin
to
.I stdin
the first time the scanner is called, providing
.I yyin
has not already been assigned to a non-NULL value. The difference is
subtle, but the net effect is that with
.I flex
scanners,
.I yyin
does not have a valid value until the scanner has been called.
The special table-size declarations such as
.B %a
supported by
.I lex
are not required by
.I flex
scanners;
.I flex
ignores them.
The name
.B FLEX_SCANNER
is #define'd so scanners may be written for use with either
.I flex
or
.I lex.
The following
.I flex
features are not included in
.I lex
or the POSIX draft standard:
comments beginning with '#' (deprecated)
multiple actions on a line
This last feature refers to the fact that with
.I flex
you can put multiple actions on the same line, separated with
semi-colons, while with
.I lex,
the following
.nf
foo    handle_foo(); ++num_foos_seen;
.fi
is (rather surprisingly) truncated to
.nf
foo    handle_foo();
.fi
.I flex
does not truncate the action. Actions that are not enclosed in
braces are simply terminated at the end of the line.
.I reject_used_but_not_detected undefined
or
.I yymore_used_but_not_detected undefined -
These errors can occur at compile time. They indicate that the
scanner uses
.B REJECT
or
.B yymore()
but that
.I flex
failed to notice the fact, meaning that
.I flex
scanned the first two sections looking for occurrences of these actions
and failed to find any, but somehow you snuck some in (via a #include
file, for example). Make an explicit reference to the action in your
.I flex
input file. (Note that previously
.I flex
supported a
.B %used/%unused
mechanism for dealing with this problem; this feature is still supported
but now deprecated, and will go away soon unless the author hears from
people who can argue compellingly that they need it.)
.I flex scanner jammed -
a scanner compiled with
.B -s
has encountered an input string which wasn't matched by
any of its rules.
.I flex input buffer overflowed -
a scanner rule matched a string long enough to overflow the
scanner's internal input buffer (16K bytes by default - controlled by
.B YY_BUF_SIZE
in "flex.skel". Note that to redefine this macro, you must first
.B #undef
it).
.I scanner requires -8 flag -
Your scanner specification includes recognizing 8-bit characters and
you did not specify the -8 flag (and your site has not installed flex
with -8 as the default).
.I too many %t classes! -
You managed to put every single character into its own %t class.
.I flex
requires that at least one of the classes share characters.
.SH SEE ALSO
flex(1), lex(1), yacc(1), sed(1), awk(1).
M. E. Lesk and E. Schmidt,
.I LEX - Lexical Analyzer Generator
.SH AUTHOR
Vern Paxson, with the help of many ideas and much inspiration from
Van Jacobson. Original version by Jef Poskanzer. The fast table
representation is a partial implementation of a design done by Van
Jacobson. The implementation was done by Kevin Gong and Vern Paxson.
Thanks to the many
.I flex
beta-testers, feedbackers, and contributors, especially Casey
Leedom,
Frederic Brehm, Nick Christopher, Jason Coughlin,
Scott David Daniels, Leo Eskin,
Chris Faylor, Eric Goldman, Eric
Hughes, Jeffrey R. Jones, Kevin B. Kenny, Ronald Lamprecht,
Greg Lee, Craig Leres, Mohamed el Lozy, Jim Meyering, Marc Nozell, Esmond Pitt,
Jef Poskanzer, Jim Roskind,
Dave Tallman, Frank Whaley, Ken Yap, and those whose names
have slipped my marginal mail-archiving skills but whose contributions
are appreciated all the same.
Thanks to Keith Bostic, John Gilmore, Craig Leres, Bob
Mulcahy, Rich Salz, and Richard Stallman for help with various distribution
headaches.
Thanks to Esmond Pitt and Earle Horton for 8-bit character support;
to Benson Margulies and Fred
Burke for C++ support; to Ove Ewerlid for the basics of support for
NUL's; and to Eric Hughes for the basics of support for multiple buffers.
Work is being done on extending
.I flex
to generate scanners in which the
state machine is directly represented in C code rather than tables.
state machine is directly represented in C code rather than tables.
These scanners may well be substantially faster than those generated
using -f or -F. If you are working in this area and are interested
in comparing notes and seeing whether redundant work can be avoided,
contact Ove Ewerlid (ewerlid@mizar.DoCS.UU.SE).
This work was primarily done when I was at the Real Time Systems Group
at the Lawrence Berkeley Laboratory in Berkeley, CA. Many thanks to all there
for the support I received.
Send comments to:

Vern Paxson
Computer Science Department