.\" Copyright (c) 1990 The Regents of the University of California.
.\" Redistribution and use in source and binary forms are permitted provided
.\" that: (1) source distributions retain this entire copyright notice and
.\" comment, and (2) distributions including binaries display the following
.\" acknowledgement: ``This product includes software developed by the
.\" University of California, Berkeley and its contributors'' in the
.\" documentation or other materials provided with the distribution and in
.\" all advertising materials mentioning features or use of this software.
.\" Neither the name of the University nor the names of its contributors may
.\" be used to endorse or promote products derived from this software without
.\" specific prior written permission.
.\" THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED
.\" WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
.\" MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
.\" @(#)lex.1 5.10 (Berkeley) 7/24/90
.Nd fast lexical analyzer generator
programs which recognize lexical patterns in text.
the given input files, or its standard input if no file names are given,
for a description of a scanner to generate. The description is in
the form of pairs of regular expressions and C code, called rules.
generates as output a C source file,
This file is compiled and linked with the
library to produce an executable. When the executable is run,
it analyzes its input for occurrences
of the regular expressions. Whenever it finds one, it executes
the corresponding C code.
For full documentation, see
This manual entry is intended for use as a quick reference.
has the following options:
Generate backtracking information to
This is a list of scanner states which require backtracking
and the input characters on which they do so. By adding rules one
can remove backtracking states. If all backtracking states
are eliminated and the -f or -F option is used, the generated scanner will run faster.
is a do-nothing, deprecated option included for POSIX compliance.
specified table-compression options. This functionality is
now given by the -C flag. To ease the impact of this change, when
it currently issues a warning message and assumes that
was desired instead. In the future this "promotion" of
will go away in the name of full POSIX compliance (unless
the POSIX meaning is removed first).
makes the generated scanner run in
mode. Whenever a pattern is recognized and the global
is non-zero (which is the default), the scanner will write to stderr a line of the form:
.Dl --accepting rule at line 53 ("the matched text")
The line number refers to the location of the rule in the file
defining the scanner (i.e., the file that was fed to lex). Messages
are also generated when the scanner backtracks, accepts the
default rule, reaches the end of its input buffer (or encounters
a NUL; the two look the same as far as the scanner's concerned),
or reaches an end-of-file.
specifies (take your pick) full table or fast scanner.
No table compression is done. The result is large but fast.
This option is equivalent to
scanner. The case of letters given in the
input patterns will be ignored, and tokens in the input will be matched regardless of case. The matched text
will have the preserved case (i.e., it will not be folded).
is another do-nothing, deprecated option included only for POSIX compliance.
generates a performance report to stderr. The report
consists of comments regarding features of the
input file which will cause a loss of performance in the resulting scanner.
(that unmatched scanner input is echoed to standard output)
to be suppressed. If the scanner encounters input that does not
match any of its rules, it aborts with an error.
to write the scanner it generates to standard output instead
a summary of statistics regarding the scanner it generates.
scanner table representation should be used. This representation is
about as fast as the full table representation
and for some sets of patterns it will be considerably smaller (and for others, considerably larger).
This option is equivalent to
scanner, that is, a scanner which stops immediately rather than
looking ahead if it knows
that the currently scanned text cannot be part of a longer rule's match.
cannot be used in conjunction with
The default is to generate such directives so error
messages in the actions will be correctly
located with respect to the original
the fairly meaningless line numbers of
mode. It will generate a lot of messages to stderr concerning
the form of the input and the resultant non-deterministic and deterministic
finite automata. This option is mostly for use in maintaining
to generate an 8-bit scanner.
On some sites, this is the default. On others, the default
is 7-bit characters. To see which is the case, check the verbose
output for "equivalence classes created". If the denominator of
the number shown is 128, then by default
is generating 7-bit characters. If it is 256, then the default is 8-bit.
controls the degree of table compression. The default setting is
specifies that the scanner tables should be compressed but neither
equivalence classes nor meta-equivalence classes should be used.
.Em equivalence classes ,
i.e., sets of characters which have identical lexical properties.
Equivalence classes usually give
dramatic reductions in the final table/object file sizes (typically
a factor of 2-5) and are pretty cheap performance-wise (one array
look-up per character scanned).
scanner tables should be generated; the tables are not compressed
by taking advantage of similar transition functions for different states.
specifies that the alternate fast scanner representation (described in
.Em meta-equivalence classes ,
which are sets of equivalence classes (or characters, if equivalence
classes are not being used) that are commonly used together. Meta-equivalence
classes are often a big win when using compressed tables, but they
have a moderate performance impact (one or two "if" tests and one
array look-up per character scanned).
Generate both equivalence classes
and meta-equivalence classes. This setting provides the highest
degree of table compression.
Faster-executing scanners come at the cost of larger tables, with
the following generally being true:
options are not cumulative; whenever the flag is encountered, the
previous -C settings are forgotten.
do not make sense together - there is no opportunity for meta-equivalence
classes if the table is not being compressed. Otherwise the options may be freely mixed.
overrides the default skeleton file from which
constructs its scanners. Useful for
maintenance or development.
.Sh SUMMARY OF LEX REGULAR EXPRESSIONS
The patterns in the input are written using an extended set of regular
any character except newline
a "character class"; in this case, the pattern
matches either an 'x', a 'y', or a 'z'
a "character class" with a range in it; matches
an 'a', a 'b', any letter from 'j' through 'o', or a 'Z'
a "negated character class", i.e., any character
but those in the class. In this case, any
character EXCEPT an uppercase letter.
any character EXCEPT an uppercase letter or a newline
zero or more r's, where r is any regular expression
zero or one r's (that is, "an optional r")
anywhere from two to five r's
the expansion of the "name" definition
if X is an 'a', 'b', 'f', 'n', 'r', 't', or 'v',
then the ANSI-C interpretation of \eX.
Otherwise, a literal 'X' (used to escape operators such as '*')
the character with octal value 123
the character with hexadecimal value 2a
match an r; parentheses are used to override precedence
the regular expression r followed by the
regular expression s; called "concatenation"
an r but only if it is followed by an s. The
s is not part of the matched text. This type
of pattern is called "trailing context".
an r, but only at the beginning of a line
an r, but only at the end of a line. Equivalent to "r/\en".
an r, but only in start condition s (see
below for discussion of start conditions)
same, but in any of start conditions s1, s2, or s3
an end-of-file when in start condition s1 or s2
The regular expressions listed above are grouped according to
precedence, from highest precedence at the top to lowest at the bottom.
Those grouped together have equal precedence.
Negated character classes match newline
unless "\en" (or an equivalent escape sequence) is one of the
characters explicitly present in the negated character class.
A rule can have at most one instance of trailing context (the '/' operator
or the '$' operator). The start condition, '^', and "<<EOF>>" patterns
can only occur at the beginning of a pattern and, like '/' and '$',
cannot be grouped inside parentheses. The following are all illegal:
.Sh SUMMARY OF SPECIAL ACTIONS
In addition to arbitrary C code, the following can appear in actions:
Followed by the name of a start condition places the scanner in the
corresponding start condition.
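As an illustration (a sketch not taken from this page's examples; the condition name is invented), the following rules use this action together with an exclusive start condition declared with %x to discard C comments:

```lex
%x comment
%%
"/*"            BEGIN(comment);  /* enter the start condition */
<comment>"*/"   BEGIN(INITIAL);  /* return to normal scanning */
<comment>.|\n   ;                /* discard the comment body */
```

Rules prefixed with <comment> are active only while the scanner is in that start condition.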
Directs the scanner to proceed on to the "second best" rule which matched the
input (or a prefix of the input).
are set up appropriately. Note that
is a particularly expensive feature in terms of scanner performance;
if it is used in any of the scanner's actions it will slow down
all of the scanner's matching. Furthermore, it cannot be used with the -f or -F options.
Note also that unlike the other special actions,
code immediately following it in the action will not be executed.
tells the scanner that the next time it matches a rule, the corresponding
token should be appended onto the current value of
rather than replacing it.
returns all but the first n
characters of the current token back to the input stream, where they
will be rescanned when the scanner looks for the next match.
are adjusted appropriately (e.g., yyleng will now be equal to n).
back onto the input stream. It will be the next character scanned.
reads the next character from the input stream (this routine is called
yyinput() if the scanner is compiled using C++).
can be used in lieu of a return statement in an action. It terminates
the scanner and returns a 0 to the scanner's caller, indicating "all done".
is also called when an end-of-file is encountered. It is a macro and may be redefined.
is an action available only in <<EOF>> rules. It means "Okay, I've
set up a new input file, continue scanning".
.Tp Fn yy_create_buffer file size
It returns a YY_BUFFER_STATE
handle to a new input buffer large enough to accommodate
size characters and associated with the given file. When in doubt, use YY_BUF_SIZE for the size.
.Tp Fn yy_switch_to_buffer new_buffer
switches the scanner's processing to scan for tokens from
the given buffer, which must be a YY_BUFFER_STATE.
.Tp Fn yy_delete_buffer buffer
deletes the given buffer.
.Sh VALUES AVAILABLE TO THE USER
holds the text of the current token. It may not be modified.
holds the length of the current token. It may not be modified.
is the file which by default the scanner
reads from. It may be redefined, but doing so only makes sense before
scanning begins. Changing it in the middle of scanning will have
unexpected results, since the scanner buffers its input. Once scanning terminates because an end-of-file has been encountered, scanning of a new file can be begun by calling
.Fn yyrestart "FILE *new_file"
is the file to which ECHO actions are done. It can be reassigned by the user.
returns a YY_BUFFER_STATE handle to the current buffer.
.Sh MACROS THE USER CAN REDEFINE
controls how the scanning routine is declared.
By default, it is "int yylex()", or, if prototypes are being
used, "int yylex(void)". This definition may be changed by redefining
the "YY_DECL" macro. Note that
if you give arguments to the scanning routine using a
K&R-style/non-prototyped function declaration, you must terminate
the definition with a semicolon (;).
The nature of how the scanner
gets its input can be controlled by redefining the YY_INPUT macro.
YY_INPUT's calling sequence is "YY_INPUT(buf,result,max_size)". Its
action is to place up to max_size
characters in the character array buf
and return in the integer variable result either the
number of characters read or the constant YY_NULL (0 on Unix systems)
to indicate EOF. The default YY_INPUT reads from the
global file-pointer "yyin".
A sample redefinition of YY_INPUT (in the definitions
section of the input file):
#define YY_INPUT(buf,result,max_size) \\
result = ((buf[0] = getchar()) == EOF) ? YY_NULL : 1;
When the scanner receives an end-of-file indication from YY_INPUT,
it then checks the yywrap function. If yywrap
returns false (zero), then it is assumed that the
function has gone ahead and set up
to point to another input file, and scanning continues. If it returns
true (non-zero), then the scanner terminates, returning 0 to its
caller. The default yywrap
always returns 1. Presently, to redefine it you must first
"#undef yywrap", as it is currently implemented as a macro. It is
expected that it will soon be defined to be a function rather than a macro.
can be redefined to provide an action
which is always executed prior to the matched rule's action.
may be redefined to provide an action which is always executed before the first scan.
In the generated scanner, the actions are all gathered in one large
switch statement and separated using YY_BREAK,
which may be redefined. By default, it is simply a "break", to separate
each rule's action from the following rule's.
backtracking information for
.Em LEX \- Lexical Analyzer Generator
.Tp Li reject_used_but_not_detected undefined
.Tp Li yymore_used_but_not_detected undefined
These errors can occur at compile time. They indicate that the scanner
uses REJECT or yymore() but that the generator
failed to notice the fact, meaning that it
scanned the first two sections looking for occurrences of these actions
and failed to find any, but somehow you snuck some in via a #include file.
Make an explicit reference to the action in your input file.
mechanism for dealing with this problem;
this feature is still supported but now deprecated,
and will go away soon unless the author hears from
people who can argue compellingly that they need it.
.Tp Li lex scanner jammed
has encountered an input string which wasn't matched by any of its rules.
.Tp Li lex input buffer overflowed
a scanner rule matched a string long enough to overflow the
scanner's internal input buffer (16K bytes by default, controlled by YY_BUF_MAX in the skeleton file).
.Tp Li scanner requires \&\-8 flag
Your scanner specification includes recognizing 8-bit characters, but
you did not specify the -8 flag and your site has not installed lex with 8-bit characters as the default.
.Tp Li too many \&%t classes!
You managed to put every single character into its own %t class.
requires that at least one of the classes share characters.
appeared in Version 6 AT&T Unix.
The version this man page describes is
derived from code contributed by Vern Paxson.
Vern Paxson, with the help of many ideas and much inspiration from
Van Jacobson. Original version by Jef Poskanzer.
for additional credits and the address to send comments to.
patterns cannot be properly matched and generate
warning messages ("Dangerous trailing context"). These are
patterns where the ending of the
first part of the rule matches the beginning of the second
part, such as "zx*/xy*", where the 'x*' matches the 'x' at
the beginning of the trailing context. (Note that the POSIX draft
states that the text matched by such patterns is undefined.)
For some trailing context rules, parts which are actually fixed-length are
not recognized as such, leading to the abovementioned performance loss.
In particular, parts using '\&|' or {n} (such as "foo{3}") are always
considered variable-length.
Combining trailing context with the special '\&|' action can result in
fixed trailing context being turned into the more expensive variable
trailing context. This happens in the following example:
invalidates yytext and yyleng.
to push back more text than was matched can
result in the pushed-back text matching a beginning-of-line ('^')
rule even though it didn't come at the beginning of the line
Pattern-matching of NUL's is substantially slower than matching other characters.
does not generate correct #line directives for code internal
to the scanner; thus, bugs in
the skeleton file yield bogus line numbers.
Due to both buffering of input and read-ahead, you cannot intermix
calls to <stdio.h> routines, such as getchar(), with scanner
rules and expect it to work. Call input() instead.
The total table entries listed by the
flag excludes the number of table entries needed to determine
what rule has been matched. The number of entries is equal
to the number of DFA states if the scanner does not use REJECT,
and somewhat greater than the number of states if it does.
Some of the macros, such as yywrap,
may in the future become functions which live in the
library. This will doubtless break a lot of code, but may be
required for POSIX-compliance.
internal algorithms need documentation.