.\" Automatically generated by Pod::Man v1.34, Pod::Parser v1.13
.\"
.\" Standard preamble:
.\" ========================================================================
.de Sh \" Subsection heading
.br
.if t .Sp
.ne 5
.PP
\fB\\$1\fR
.PP
..
.de Sp \" Vertical space (when we can't use .PP)
.if t .sp .5v
.if n .sp
..
.de Vb \" Begin verbatim text
.ft CW
.nf
.ne \\$1
..
.de Ve \" End verbatim text
.ft R
.fi
..
.\" Set up some character translations and predefined strings. \*(-- will
.\" give an unbreakable dash, \*(PI will give pi, \*(L" will give a left
.\" double quote, and \*(R" will give a right double quote. | will give a
.\" real vertical bar. \*(C+ will give a nicer C++. Capital omega is used to
.\" do unbreakable dashes and therefore won't be available. \*(C` and \*(C'
.\" expand to `' in nroff, nothing in troff, for use with C<>.
.tr \(*W-|\(bv\*(Tr
.ds C+ C\v'-.1v'\h'-1p'\s-2+\h'-1p'+\s0\v'.1v'\h'-1p'
.ie n \{\
. ds -- \(*W-
. ds PI pi
. if (\n(.H=4u)&(1m=24u) .ds -- \(*W\h'-12u'\(*W\h'-12u'-\" diablo 10 pitch
. if (\n(.H=4u)&(1m=20u) .ds -- \(*W\h'-12u'\(*W\h'-8u'-\" diablo 12 pitch
. ds L" ""
. ds R" ""
. ds C` ""
. ds C' ""
'br\}
.el\{\
. ds -- \|\(em\|
. ds PI \(*p
. ds L" ``
. ds R" ''
'br\}
.\"
.\" If the F register is turned on, we'll generate index entries on stderr for
.\" titles (.TH), headers (.SH), subsections (.Sh), items (.Ip), and index
.\" entries marked with X<> in POD. Of course, you'll have to process the
.\" output yourself in some meaningful fashion.
.if \nF \{\
. de IX
. tm Index:\\$1\t\\n%\t"\\$2"
..
. nr % 0
. rr F
.\}
.\"
.\" For nroff, turn off justification. Always turn off hyphenation; it makes
.\" way too many mistakes in technical documents.
.hy 0
.if n .na
.\"
.\" Accent mark definitions (@(#)ms.acc 1.5 88/02/08 SMI; from UCB 4.2).
.\" Fear. Run. Save yourself. No user-serviceable parts.
. \" fudge factors for nroff and troff
.if n \{\
. ds #H 0
. ds #V .8m
. ds #F .3m
. ds #[ \f1
. ds #] \fP
.\}
.if t \{\
. ds #H ((1u-(\\\\n(.fu%2u))*.13m)
. ds #V .6m
. ds #F 0
. ds #[ \&
. ds #] \&
.\}
. \" simple accents for nroff and troff
.if n \{\
. ds ' \&
. ds ` \&
. ds ^ \&
. ds , \&
. ds ~ ~
. ds /
.\}
.if t \{\
. ds ' \\k:\h'-(\\n(.wu*8/10-\*(#H)'\'\h"|\\n:u"
. ds ` \\k:\h'-(\\n(.wu*8/10-\*(#H)'\`\h'|\\n:u'
. ds ^ \\k:\h'-(\\n(.wu*10/11-\*(#H)'^\h'|\\n:u'
. ds , \\k:\h'-(\\n(.wu*8/10)',\h'|\\n:u'
. ds ~ \\k:\h'-(\\n(.wu-\*(#H-.1m)'~\h'|\\n:u'
. ds / \\k:\h'-(\\n(.wu*8/10-\*(#H)'\z\(sl\h'|\\n:u'
.\}
. \" troff and (daisy-wheel) nroff accents
.ds : \\k:\h'-(\\n(.wu*8/10-\*(#H+.1m+\*(#F)'\v'-\*(#V'\z.\h'.2m+\*(#F'.\h'|\\n:u'\v'\*(#V'
.ds 8 \h'\*(#H'\(*b\h'-\*(#H'
.ds o \\k:\h'-(\\n(.wu+\w'\(de'u-\*(#H)/2u'\v'-.3n'\*(#[\z\(de\v'.3n'\h'|\\n:u'\*(#]
.ds d- \h'\*(#H'\(pd\h'-\w'~'u'\v'-.25m'\f2\(hy\fP\v'.25m'\h'-\*(#H'
.ds D- D\\k:\h'-\w'D'u'\v'-.11m'\z\(hy\v'.11m'\h'|\\n:u'
.ds th \*(#[\v'.3m'\s+1I\s-1\v'-.3m'\h'-(\w'I'u*2/3)'\s-1o\s+1\*(#]
.ds Th \*(#[\s+2I\s-2\h'-\w'I'u*3/5'\v'-.3m'o\v'.3m'\*(#]
.ds ae a\h'-(\w'a'u*4/10)'e
.ds Ae A\h'-(\w'A'u*4/10)'E
. \" corrections for vroff
.if v .ds ~ \\k:\h'-(\\n(.wu*9/10-\*(#H)'\s-2\u~\d\s+2\h'|\\n:u'
.if v .ds ^ \\k:\h'-(\\n(.wu*10/11-\*(#H)'\v'-.4m'^\v'.4m'\h'|\\n:u'
. \" for low resolution devices (crt and lpr)
.if \n(.H>23 .if \n(.V>19 \
\{\
. ds : e
. ds 8 ss
. ds o a
. ds d- d\h'-1'\(ga
. ds D- D\h'-1'\(hy
. ds th \o'bp'
. ds Th \o'LP'
. ds ae ae
. ds Ae AE
.\}
.rm #[ #] #H #V #F C
.\" ========================================================================
.\"
.IX Title "MatrixReal 3"
.TH MatrixReal 3 "2002-05-15" "perl v5.8.0" "User Contributed Perl Documentation"
.SH "NAME"
Math::MatrixReal \- Matrix of Reals
.PP
Implements the data type "matrix of reals" (and consequently also
"vector of reals").
.SH "DESCRIPTION"
.IX Header "DESCRIPTION"
Implements the data type \*(L"matrix of reals\*(R", which can be used almost
like any other basic Perl type thanks to \fB\s-1OPERATOR\s0 \s-1OVERLOADING\s0\fR, i.e.,
.PP
.Vb 1
\& $product = $matrix1 * $matrix2;
.Ve
.PP
does what you would like it to do (a matrix multiplication).
.PP
Also features many important operations and methods: matrix norm,
matrix transposition, matrix inverse, determinant of a matrix, order
and numerical condition of a matrix, scalar product of vectors, vector
product of vectors, vector length, projection of row and column vectors,
a comfortable way for reading in a matrix from a file, the keyboard or
your code, and many more.
.PP
Allows you to solve linear equation systems using an efficient algorithm
known as \*(L"L\-R\-decomposition\*(R" and several approximate (iterative) methods.
.PP
Features an implementation of Kleene's algorithm to compute the minimal
costs for all paths in a graph with weighted edges (the \*(L"weights\*(R" being
the costs associated with each edge).
.SH "SYNOPSIS"
.IX Header "SYNOPSIS"
.Sh "Constructor Methods And Such"
.IX Subsection "Constructor Methods And Such"
.RE
.IP "\(bu"
\&\f(CW\*(C`use Math::MatrixReal;\*(C'\fR
.PP
Makes the methods and overloaded operators of this module available
to your program.
.RE
.IP "\(bu"
\&\f(CW\*(C`$new_matrix = new Math::MatrixReal($rows,$columns);\*(C'\fR
.PP
The matrix object constructor method. A new matrix of size \f(CW$rows\fR by \f(CW$columns\fR
will be created, with the value \f(CW0.0\fR for all elements.
.PP
Note that this method is implicitly called by many of the other methods
in this module.
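.PP
For example (a minimal sketch; the variable name is illustrative):
.PP
.Vb 2
\&  $zero_matrix = Math::MatrixReal->new(3,4);   # 3 rows, 4 columns
\&  print $zero_matrix;                          # all twelve elements are 0.0
.Ve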
.RE
.IP "\(bu"
\&\f(CW\*(C`$new_matrix = $some_matrix\->\*(C'\fR\f(CW\*(C`new($rows,$columns);\*(C'\fR
.PP
Another way of calling the matrix object constructor method.
.PP
Matrix "\f(CW$some_matrix\fR" is not changed by this in any way.
.RE
.IP "\(bu"
\&\f(CW\*(C`$new_matrix = $matrix\->new_from_cols( [ $column_vector|$array_ref|$string, ... ] )\*(C'\fR
.PP
Creates a new matrix given a reference to an array of any of the following:
.IP "\(bu column vectors ( n by 1 Math::MatrixReal matrices )" 4
.IX Item "column vectors ( n by 1 Math::MatrixReal matrices )"
.PD 0
.IP "\(bu references to arrays" 4
.IX Item "references to arrays"
.ie n .IP "\(bu strings properly formatted to create a column with Math::MatrixReal's ""new_from_string"" command" 4
.el .IP "\(bu strings properly formatted to create a column with Math::MatrixReal's \f(CWnew_from_string\fR command" 4
.IX Item "strings properly formatted to create a column with Math::MatrixReal's new_from_string command"
.PD
.PP
You may mix and match these as you wish. However, all must be of the
same dimension\*(--no padding happens automatically. Example:
.PP
.Vb 2
\& my $matrix = Math::MatrixReal->new_from_cols( [ [1,2], [3,4] ] );
\& print $matrix;
.Ve
.PP
will print
.PP
.Vb 2
\& [ 1.000000000000E+00 3.000000000000E+00 ]
\& [ 2.000000000000E+00 4.000000000000E+00 ]
.Ve
.RE
.IP "\(bu"
\&\f(CW\*(C`new_from_rows( [ $row_vector|$array_ref|$string, ... ] )\*(C'\fR
.PP
Creates a new matrix given a reference to an array of any of the following:
.IP "\(bu row vectors ( 1 by n Math::MatrixReal matrices )" 4
.IX Item "row vectors ( 1 by n Math::MatrixReal matrices )"
.PD 0
.IP "\(bu references to arrays" 4
.IX Item "references to arrays"
.ie n .IP "\(bu strings properly formatted to create a row with Math::MatrixReal's ""new_from_string"" command" 4
.el .IP "\(bu strings properly formatted to create a row with Math::MatrixReal's \f(CWnew_from_string\fR command" 4
.IX Item "strings properly formatted to create a row with Math::MatrixReal's new_from_string command"
.PD
.PP
You may mix and match these as you wish. However, all must be of the
same dimension\*(--no padding happens automatically. Example:
.PP
.Vb 2
\& my $matrix = Math::MatrixReal->new_from_rows( [ [1,2], [3,4] ] );
\& print $matrix;
.Ve
.PP
will print
.PP
.Vb 2
\& [ 1.000000000000E+00 2.000000000000E+00 ]
\& [ 3.000000000000E+00 4.000000000000E+00 ]
.Ve
.RE
.IP "\(bu"
\&\f(CW\*(C`$new_matrix = Math::MatrixReal\->new_diag( $array_ref );\*(C'\fR
.PP
This method allows you to create a diagonal matrix by only specifying
the diagonal elements. Example:
.PP
.Vb 2
\& $matrix = Math::MatrixReal->new_diag( [ 1,2,3,4 ] );
\& print $matrix;
.Ve
.PP
will print
.PP
.Vb 4
\& [ 1.000000000000E+00 0.000000000000E+00 0.000000000000E+00 0.000000000000E+00 ]
\& [ 0.000000000000E+00 2.000000000000E+00 0.000000000000E+00 0.000000000000E+00 ]
\& [ 0.000000000000E+00 0.000000000000E+00 3.000000000000E+00 0.000000000000E+00 ]
\& [ 0.000000000000E+00 0.000000000000E+00 0.000000000000E+00 4.000000000000E+00 ]
.Ve
.RE
.IP "\(bu"
\&\f(CW\*(C`$new_matrix = Math::MatrixReal\->\*(C'\fR\f(CW\*(C`new_from_string($string);\*(C'\fR
.PP
This method allows you to read in a matrix from a string (for
instance, from the keyboard, from a file or from your code).
.PP
The syntax is simple: each row must start with "\f(CW\*(C`[ \*(C'\fR\*(L" and end with
\&\*(R"\f(CW\*(C` ]\en\*(C'\fR\*(L" (\*(R"\f(CW\*(C`\en\*(C'\fR\*(L" being the newline character and \*(R"\f(CW\*(C` \*(C'\fR" a space or
tab) and contain one or more numbers, all separated from each other
by spaces or tabs.
.PP
Additional spaces or tabs can be added at will, but no comments.
.PP
Examples:
.PP
.Vb 3
\& $string = "[ 1 2 3 ]\en[ 2 2 -1 ]\en[ 1 1 1 ]\en";
\& $matrix = Math::MatrixReal->new_from_string($string);
\& print "$matrix";
.Ve
.PP
By the way, this prints
.PP
.Vb 3
\& [ 1.000000000000E+00 2.000000000000E+00 3.000000000000E+00 ]
\& [ 2.000000000000E+00 2.000000000000E+00 -1.000000000000E+00 ]
\& [ 1.000000000000E+00 1.000000000000E+00 1.000000000000E+00 ]
.Ve
.PP
But you can also do this in a much more comfortable way using the
shell-like \*(L"here\-document\*(R" syntax:
.PP
.Vb 9
\& $matrix = Math::MatrixReal->new_from_string(<<'MATRIX');
\& [ 1 0 0 0 0 0 1 ]
\& [ 0 1 0 0 0 0 0 ]
\& [ 0 0 1 0 0 0 0 ]
\& [ 0 0 0 1 0 0 0 ]
\& [ 0 0 0 0 1 0 0 ]
\& [ 0 0 0 0 0 1 0 ]
\& [ 1 0 0 0 0 0 -1 ]
\& MATRIX
.Ve
.PP
You can even use variables in the matrix:
.PP
.Vb 3
\& $c1 = 2 / 3;
\& $c2 = -2 / 5;
\& $c3 = 26 / 9;
.Ve
.PP
.Vb 1
\& $matrix = Math::MatrixReal->new_from_string(<<"MATRIX");
.Ve
.PP
.Vb 3
\& [ 3 2 0 ]
\& [ 0 3 2 ]
\& [ $c1 $c2 $c3 ]
.Ve
.PP
.Vb 1
\& MATRIX
.Ve
.PP
(Remember that you may use spaces and tabs to format the matrix to
your taste)
.PP
Note that this method uses exactly the same representation for a
matrix as the \*(L"stringify\*(R" operator "": this means that you can convert
any matrix into a string with \f(CW\*(C`$string = "$matrix";\*(C'\fR and read it back
in later (for instance from a file!).
.PP
Note however that you may suffer a precision loss in this process
because only 13 digits of the mantissa are printed!
.PP
If the string you supply (or someone else supplies) does not obey
the syntax mentioned above, an exception is raised, which can be
caught by \*(L"eval\*(R" as follows:
.PP
.Vb 14
\& print "Please enter your matrix (in one line): ";
\& $string = <STDIN>;
\& $string =~ s/\e\en/\en/g;
\& eval { $matrix = Math::MatrixReal->new_from_string($string); };
\& if ($@)
\& {
\& print "$@";
\& # ...
\& # (error handling)
\& }
\& else
\& {
\& # continue...
\& }
.Ve
.PP
or as follows:
.PP
.Vb 7
\& eval { $matrix = Math::MatrixReal->new_from_string(<<"MATRIX"); };
\& [ 3 2 0 ]
\& [ 0 3 2 ]
\& [ $c1 $c2 $c3 ]
\& MATRIX
\& if ($@)
\& # ...
.Ve
.PP
Actually, the method shown above for reading a matrix from the keyboard
is a little awkward, since you have to enter a lot of \*(L"\en\*(R"'s for the
newlines.
.PP
A better way is shown in this piece of code:
.PP
.Vb 13
\& while (1)
\& {
\& print "\enPlease enter your matrix ";
\& print "(multiple lines, <ctrl-D> = done):\en";
\& eval { $new_matrix =
\& Math::MatrixReal->new_from_string(join('',<STDIN>)); };
\& if ($@)
\& {
\& $@ =~ s/\es+at\eb.*?$//;
\& print "${@}Please try again.\en";
\& }
\& else { last; }
\& }
.Ve
.PP
Possible error messages of the \*(L"\fInew_from_string()\fR\*(R" method are:
.PP
.Vb 2
\& Math::MatrixReal::new_from_string(): syntax error in input string
\& Math::MatrixReal::new_from_string(): empty input string
.Ve
.PP
If the input string has rows with varying numbers of columns,
the following warning will be printed to \s-1STDERR:\s0
.PP
.Vb 1
\& Math::MatrixReal::new_from_string(): missing elements will be set to zero!
.Ve
.PP
If everything is okay, the method returns an object reference to the
(newly allocated) matrix containing the elements you specified.
.RE
.IP "\(bu"
\&\f(CW\*(C`$new_matrix = $some_matrix\->shadow();\*(C'\fR
.PP
Returns an object reference to a \fB\s-1NEW\s0\fR but \fB\s-1EMPTY\s0\fR matrix
(filled with zero's) of the \fB\s-1SAME\s0 \s-1SIZE\s0\fR as matrix "\f(CW$some_matrix\fR".
.PP
Matrix "\f(CW$some_matrix\fR" is not changed by this in any way.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix1\->copy($matrix2);\*(C'\fR
.PP
Copies the contents of matrix "\f(CW$matrix2\fR" to an \fB\s-1ALREADY\s0 \s-1EXISTING\s0\fR
matrix "\f(CW$matrix1\fR\*(L" (which must have the same size as matrix \*(R"\f(CW$matrix2\fR"!).
.PP
Matrix "\f(CW$matrix2\fR" is not changed by this in any way.
.RE
.IP "\(bu"
\&\f(CW\*(C`$twin_matrix = $some_matrix\->clone();\*(C'\fR
.PP
Returns an object reference to a \fB\s-1NEW\s0\fR matrix of the \fB\s-1SAME\s0 \s-1SIZE\s0\fR as
matrix "\f(CW$some_matrix\fR\*(L". The contents of matrix \*(R"\f(CW$some_matrix\fR" have
\&\fB\s-1ALREADY\s0 \s-1BEEN\s0 \s-1COPIED\s0\fR to the new matrix "\f(CW$twin_matrix\fR\*(L". This
is the method that the operator \*(R"=" is overloaded to when you type
\&\f(CW\*(C`$a = $b\*(C'\fR, when \f(CW$a\fR and \f(CW$b\fR are matrices.
.PP
Matrix "\f(CW$some_matrix\fR" is not changed by this in any way.
.Sh "Matrix Row, Column and Element operations"
.IX Subsection "Matrix Row, Column and Element operations"
.RE
.IP "\(bu"
\&\f(CW\*(C`$row_vector = $matrix\->row($row);\*(C'\fR
.PP
This is a projection method which returns an object reference to
a \fB\s-1NEW\s0\fR matrix (which in fact is a (row) vector since it has only
one row) to which row number "\f(CW$row\fR\*(L" of matrix \*(R"\f(CW$matrix\fR" has
already been copied.
.PP
Matrix "\f(CW$matrix\fR" is not changed by this in any way.
.RE
.IP "\(bu"
\&\f(CW\*(C`$column_vector = $matrix\->column($column);\*(C'\fR
.PP
This is a projection method which returns an object reference to
a \fB\s-1NEW\s0\fR matrix (which in fact is a (column) vector since it has
only one column) to which column number "\f(CW$column\fR\*(L" of matrix
\&\*(R"\f(CW$matrix\fR" has already been copied.
.PP
Matrix "\f(CW$matrix\fR" is not changed by this in any way.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->assign($row,$column,$value);\*(C'\fR
.PP
Explicitly assigns a value "\f(CW$value\fR\*(L" to a single element of the
matrix \*(R"\f(CW$matrix\fR\*(L", located in row \*(R"\f(CW$row\fR\*(L" and column \*(R"\f(CW$column\fR",
thereby replacing the value previously stored there.
.RE
.IP "\(bu"
\&\f(CW\*(C`$value = $matrix\->\*(C'\fR\f(CW\*(C`element($row,$column);\*(C'\fR
.PP
Returns the value of a specific element of the matrix "\f(CW$matrix\fR\*(L",
located in row \*(R"\f(CW$row\fR\*(L" and column \*(R"\f(CW$column\fR".
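.PP
For example (a sketch):
.PP
.Vb 3
\&  $matrix->assign(1,2, 42.0);       # set the element in row 1, column 2
\&  $value = $matrix->element(1,2);   # $value is now 42
\&  $row_1 = $matrix->row(1);         # row 1 as a 1 by n row vector
.Ve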
.RE
.IP "\(bu"
\&\f(CW\*(C`$new_matrix = $matrix\->each( \e&function )\*(C'\fR;
.PP
Creates a new matrix by evaluating a code reference on each element of the
given matrix. The function is passed the element, the row index and the column
index, in that order. The value the function returns ( or the value of the last
executed statement ) is the value given to the corresponding element in \f(CW$new_matrix\fR.
.PP
Example:
.PP
.Vb 2
\& # add 1 to every element in the matrix
\& $matrix = $matrix->each ( sub { (shift) + 1 } );
.Ve
.PP
Example:
.PP
.Vb 4
\& my $cofactor = $matrix->each( sub { my(undef,$i,$j) = @_;
\& ($i+$j) % 2 == 0 ? $matrix->minor($i,$j)->det()
\& : -1*$matrix->minor($i,$j)->det();
\& } );
.Ve
.PP
This code needs some explanation. For each element of \f(CW$matrix\fR, it throws away the actual value
and stores the row and column indexes in \f(CW$i\fR and \f(CW$j\fR. Then it sets element [$i,$j] in \f(CW$cofactor\fR
to the determinant of \f(CW\*(C`$matrix\->minor($i,$j)\*(C'\fR if it is an \*(L"even\*(R" element, or \f(CW\*(C`\-1*$matrix\->minor($i,$j)\*(C'\fR
if it is an \*(L"odd\*(R" element.
.RE
.IP "\(bu"
\&\f(CW\*(C`$new_matrix = $matrix\->each_diag( \e&function )\*(C'\fR;
.PP
Creates a new matrix by evaluating a code reference on each diagonal element of the
given matrix. The function is passed the element, the row index and the column
index, in that order. The value the function returns ( or the value of the last
executed statement ) is the value given to the corresponding element in \f(CW$new_matrix\fR.
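.PP
For example (a sketch, analogous to the \f(CW\*(C`each()\*(C'\fR example above):
.PP
.Vb 2
\&  # add 5 to every diagonal element
\&  $new_matrix = $matrix->each_diag( sub { (shift) + 5 } );
.Ve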
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->swap_col( $col1, $col2 );\*(C'\fR
.PP
This method takes two one-based column numbers and swaps the two columns element by element:
\&\f(CW\*(C`$matrix\->swap_col(2,3)\*(C'\fR would replace column 2 in \f(CW$matrix\fR with column 3, and replace column
3 with column 2.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->swap_row( $row1, $row2 );\*(C'\fR
.PP
This method takes two one-based row numbers and swaps the two rows element by element:
\&\f(CW\*(C`$matrix\->swap_row(2,3)\*(C'\fR would replace row 2 in \f(CW$matrix\fR with row 3, and replace row
3 with row 2.
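.PP
For example (a sketch):
.PP
.Vb 2
\&  $matrix->swap_row(1,3);   # exchange the first and third rows
\&  $matrix->swap_col(1,2);   # likewise for the first and second columns
.Ve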
.Sh "Matrix Operations"
.IX Subsection "Matrix Operations"
.RE
.IP "\(bu"
\&\f(CW\*(C`$det = $matrix\->det();\*(C'\fR
.PP
Returns the determinant of the matrix, without going through
the rigamarole of computing a \s-1LR\s0 decomposition. This method should
be much faster than \s-1LR\s0 decomposition if the matrix is diagonal or
triangular. Otherwise, it is just a wrapper for
\&\f(CW\*(C`$matrix\->decompose_LR\->det_LR\*(C'\fR. If the determinant is zero,
there is no inverse and vice\-versa. Only quadratic matrices have
determinants.
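.PP
For example (a sketch):
.PP
.Vb 2
\&  $m = Math::MatrixReal->new_diag([ 2, 3 ]);
\&  print $m->det(), "\en";   # 6; diagonal matrices take the fast path
.Ve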
.RE
.IP "\(bu"
\&\f(CW\*(C`$inverse = $matrix\->inverse();\*(C'\fR
.PP
Returns the inverse of a matrix, without going through the
rigamarole of computing a \s-1LR\s0 decomposition. If no inverse exists,
undef is returned and an error is printed via \f(CW\*(C`carp()\*(C'\fR.
This is nothing but a wrapper for \f(CW\*(C`$matrix\->decompose_LR\->invert_LR\*(C'\fR.
.RE
.IP "\(bu"
\&\f(CW\*(C`($rows,$columns) = $matrix\->dim();\*(C'\fR
.PP
Returns a list of two items, representing the number of rows
and columns the given matrix "\f(CW$matrix\fR" contains.
.RE
.IP "\(bu"
\&\f(CW\*(C`$norm_one = $matrix\->norm_one();\*(C'\fR
.PP
Returns the \*(L"one\*(R"\-norm of the given matrix "\f(CW$matrix\fR".
.PP
The \*(L"one\*(R"\-norm is defined as follows:
.PP
For each column, the sum of the absolute values of the elements in the
different rows of that column is calculated. Finally, the maximum
of these sums is returned.
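.PP
A worked sketch: for the matrix below, the column sums of absolute values
are 1+3 = 4 and 2+4 = 6, so the \*(L"one\*(R"\-norm is 6.
.PP
.Vb 2
\&  $m = Math::MatrixReal->new_from_rows([ [1,-2], [3,4] ]);
\&  print $m->norm_one(), "\en";   # prints 6
.Ve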
.PP
Note that the \*(L"one\*(R"\-norm and the \*(L"maximum\*(R"\-norm are mathematically
equivalent, although for the same matrix they usually yield a different
value.
.PP
Therefore, you should only compare values that have been calculated
using the same norm!
.PP
Throughout this package, the \*(L"one\*(R"\-norm is (arbitrarily) used
for all comparisons, for the sake of uniformity and comparability,
except for the iterative methods \*(L"\fIsolve_GSM()\fR\*(R", \*(L"\fIsolve_SSM()\fR\*(R" and
\&\*(L"\fIsolve_RM()\fR\*(R" which use either norm depending on the matrix itself.
.RE
.IP "\(bu"
\&\f(CW\*(C`$norm_max = $matrix\->norm_max();\*(C'\fR
.PP
Returns the \*(L"maximum\*(R"\-norm of the given matrix \f(CW$matrix\fR.
.PP
The \*(L"maximum\*(R"\-norm is defined as follows:
.PP
For each row, the sum of the absolute values of the elements in the
different columns of that row is calculated. Finally, the maximum
of these sums is returned.
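.PP
A worked sketch: for the matrix below, the row sums of absolute values
are 1+2 = 3 and 3+4 = 7, so the \*(L"maximum\*(R"\-norm is 7.
.PP
.Vb 2
\&  $m = Math::MatrixReal->new_from_rows([ [1,-2], [3,4] ]);
\&  print $m->norm_max(), "\en";   # prints 7
.Ve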
.PP
Note that the \*(L"maximum\*(R"\-norm and the \*(L"one\*(R"\-norm are mathematically
equivalent, although for the same matrix they usually yield a different
value.
.PP
Therefore, you should only compare values that have been calculated
using the same norm!
.PP
Throughout this package, the \*(L"one\*(R"\-norm is (arbitrarily) used
for all comparisons, for the sake of uniformity and comparability,
except for the iterative methods \*(L"\fIsolve_GSM()\fR\*(R", \*(L"\fIsolve_SSM()\fR\*(R" and
\&\*(L"\fIsolve_RM()\fR\*(R" which use either norm depending on the matrix itself.
.RE
.IP "\(bu"
\&\f(CW\*(C`$norm_sum = $matrix\->norm_sum();\*(C'\fR
.PP
This is a very simple norm which is defined as the sum of the
absolute values of every element.
.RE
.IP "\(bu"
\&\f(CW\*(C`$p_norm = $matrix\->norm_p($n);\*(C'\fR
.PP
This function returns the \*(L"p\-norm\*(R" of a vector. The argument \f(CW$n\fR
must be a number greater than or equal to 1 or the string \*(L"Inf\*(R".
The p\-norm is defined as (sum(|x_i|^p))^(1/p). In words, it raises
the absolute value of each element to the p\-th power, adds them up, and
then takes the p\-th root of that number. If the string \*(L"Inf\*(R" is passed,
the \*(L"infinity\-norm\*(R" is computed, which is really the limit of the
p\-norm as p goes to infinity. It is defined as the maximum of the
absolute values of the elements of the vector. Also, note that the familiar Euclidean distance
between two vectors is just a special case of a p\-norm, when p is
equal to 2.
.PP
Example:
.PP
.Vb 5
\&  $a = Math::MatrixReal->new_from_cols([ [1,2,3] ]);
\&  $p1   = $a->norm_p(1);
\&  $p2   = $a->norm_p(2);
\&  $p3   = $a->norm_p(3);
\&  $pinf = $a->norm_p("Inf");
.Ve
.PP
.Vb 1
\& print "(1,2,3,Inf) norm:\en$p1\en$p2\en$p3\en$pinf\en";
.Ve
.PP
.Vb 2
\& $i1 = $a->new_from_rows([[1,0]]);
\& $i2 = $a->new_from_rows([[0,1]]);
.Ve
.PP
.Vb 2
\& # this should be sqrt(2) since it is the same as the
\& # hypotenuse of a 1 by 1 right triangle
.Ve
.PP
.Vb 2
\& $dist = ($i1-$i2)->norm_p(2);
\& print "Distance is $dist, which should be " . sqrt(2) . "\en";
.Ve
.PP
Output:
.PP
.Vb 5
\& (1,2,3,Inf) norm:
\& 6
\& 3.74165738677394139
\& 3.30192724889462668
\& 3
.Ve
.PP
.Vb 1
\& Distance is 1.41421356237309505, which should be 1.41421356237309505
.Ve
.RE
.IP "\(bu"
\&\f(CW\*(C`$frob_norm = $matrix\->norm_frobenius();\*(C'\fR
.PP
This norm is similar to that of a p\-norm where p is 2, except it
acts on a \fBmatrix\fR, not a vector. Each element of the matrix is
squared, this is added up, and then a square root is taken.
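.PP
A worked sketch: for the matrix below, sqrt(1 + 4 + 9 + 16) = sqrt(30).
.PP
.Vb 2
\&  $m = Math::MatrixReal->new_from_rows([ [1,2], [3,4] ]);
\&  print $m->norm_frobenius(), "\en";   # sqrt(30), about 5.4772
.Ve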
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->spectral_radius();\*(C'\fR
.PP
Returns the maximum of the absolute values of all eigenvalues.
Currently this computes \fBall\fR eigenvalues, then sifts through them
to find the largest in absolute value. Needless to say, this is very
inefficient, and in the future an algorithm that computes only the
largest eigenvalue may be implemented.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix1\->transpose($matrix2);\*(C'\fR
.PP
Calculates the transposed matrix of matrix \f(CW$matrix2\fR and stores
the result in matrix "\f(CW$matrix1\fR\*(L" (which must already exist and have
the same size as matrix \*(R"\f(CW$matrix2\fR"!).
.PP
This operation can also be carried out \*(L"in\-place\*(R", i.e., input and
output matrix may be identical.
.PP
Transposition is a symmetry operation: imagine you rotate the matrix
along the axis of its main diagonal (going through elements (1,1),
(2,2), (3,3) and so on) by 180 degrees.
.PP
Another way of looking at it is to say that rows and columns are
swapped. In fact the contents of element \f(CW\*(C`(i,j)\*(C'\fR are swapped
with those of element \f(CW\*(C`(j,i)\*(C'\fR.
.PP
Note that (especially for vectors) it makes a big difference if you
have a row vector, like this:
.PP
.Vb 1
\& [ -1 0 1 ]
.Ve
.PP
or a column vector, like this:
.PP
.Vb 3
\& [ -1 ]
\& [ 0 ]
\& [ 1 ]
.Ve
.PP
the one vector being the transpose of the other!
.PP
This is especially true for the matrix product of two vectors:
.PP
.Vb 3
\&                 [ -1 ]
\&  [ -1 0 1 ]  *  [  0 ]  =  [ 2 ] ,    whereas
\&                 [  1 ]
.Ve
.PP
.Vb 3
\&  [ -1 ]                    [  1  0 -1 ]
\&  [  0 ]  *  [ -1 0 1 ]  =  [  0  0  0 ]
\&  [  1 ]                    [ -1  0  1 ]
.Ve
.PP
So be careful about what you really mean!
.PP
Hint: throughout this module, whenever a vector is explicitly required
for input, a \fB\s-1COLUMN\s0\fR vector is expected!
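.PP
A minimal usage sketch (assuming \f(CW$m\fR is quadratic):
.PP
.Vb 3
\&  $t = $m->shadow();    # same size as $m
\&  $t->transpose($m);    # $t now holds the transpose of $m
\&  $m->transpose($m);    # in-place transposition works too
.Ve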
.RE
.IP "\(bu"
\&\f(CW\*(C`$trace = $matrix\->trace();\*(C'\fR
.PP
This returns the trace of the matrix, which is defined as
the sum of the diagonal elements. The matrix must be
quadratic.
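.PP
For example (a sketch):
.PP
.Vb 2
\&  $m = Math::MatrixReal->new_from_rows([ [1,2], [3,4] ]);
\&  print $m->trace(), "\en";   # 1 + 4 = 5
.Ve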
.RE
.IP "\(bu"
\&\f(CW\*(C`$minor = $matrix\->minor($row,$col);\*(C'\fR
.PP
Returns the minor matrix corresponding to \f(CW$row\fR and \f(CW$col\fR. \f(CW$matrix\fR must be quadratic.
If \f(CW$matrix\fR is n rows by n cols, the minor of \f(CW$row\fR and \f(CW$col\fR will be an (n\-1) by (n\-1)
matrix. The minor is obtained by crossing out the specified row and column and keeping
the remaining rows and columns as a matrix. This method is used by \f(CW\*(C`cofactor()\*(C'\fR.
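.PP
For example (a sketch):
.PP
.Vb 2
\&  $m = Math::MatrixReal->new_from_rows([ [1,2,3], [4,5,6], [7,8,9] ]);
\&  $minor = $m->minor(1,1);   # the 2 by 2 matrix [ [5,6], [8,9] ]
.Ve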
.RE
.IP "\(bu"
\&\f(CW\*(C`$cofactor = $matrix\->cofactor();\*(C'\fR
.PP
The cofactor matrix is constructed as follows:
.PP
For each element, cross out the row and column that it sits in.
Now, take the determinant of the matrix that is left in the other
rows and columns.
Multiply the determinant by (\-1)^(i+j), where i is the row index,
and j is the column index.
Replace the given element with this value.
.PP
The cofactor matrix can be used to find the inverse of the matrix. One formula for the
inverse of a matrix is the cofactor matrix transposed divided by the original
determinant of the matrix.
.PP
The following two inverses should be exactly the same:
.PP
.Vb 2
\& my $inverse1 = $matrix->inverse;
\& my $inverse2 = ~($matrix->cofactor)->each( sub { (shift)/$matrix->det() } );
.Ve
.PP
Caveat: Although the cofactor matrix gives a simple algorithm for computing the inverse of a matrix, and
can be used with pencil and paper for small matrices, it is comically slower than
the native \f(CW\*(C`inverse()\*(C'\fR function. Here is a small benchmark:
.PP
.Vb 6
\& # $matrix1 is 15x15
\& $det = $matrix1->det;
\& timethese( 10,
\& {'inverse' => sub { $matrix1->inverse(); },
\& 'cofactor' => sub { (~$matrix1->cofactor)->each ( sub { (shift)/$det; } ) }
\& } );
.Ve
.PP
.Vb 3
\& Benchmark: timing 10 iterations of LR, cofactor, inverse...
\& inverse: 1 wallclock secs ( 0.56 usr + 0.00 sys = 0.56 CPU) @ 17.86/s (n=10)
\& cofactor: 36 wallclock secs (36.62 usr + 0.01 sys = 36.63 CPU) @ 0.27/s (n=10)
.Ve
.RE
.IP "\(bu"
\&\f(CW\*(C`$adjoint = $matrix\->adjoint();\*(C'\fR
.PP
The adjoint is just the transpose of the cofactor matrix. This method is
just an alias for \f(CW\*(C` ~($matrix\->cofactor)\*(C'\fR.
.Sh "Arithmetic Operations"
.IX Subsection "Arithmetic Operations"
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix1\->add($matrix2,$matrix3);\*(C'\fR
.PP
Calculates the sum of matrix "\f(CW$matrix2\fR\*(L" and matrix \*(R"\f(CW$matrix3\fR\*(L"
and stores the result in matrix \*(R"\f(CW$matrix1\fR\*(L" (which must already exist
and have the same size as matrix \*(R"\f(CW$matrix2\fR\*(L" and matrix \*(R"\f(CW$matrix3\fR"!).
.PP
This operation can also be carried out \*(L"in\-place\*(R", i.e., the output and
one (or both) of the input matrices may be identical.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix1\->subtract($matrix2,$matrix3);\*(C'\fR
.PP
Calculates the difference of matrix "\f(CW$matrix2\fR\*(L" minus matrix \*(R"\f(CW$matrix3\fR\*(L"
and stores the result in matrix \*(R"\f(CW$matrix1\fR\*(L" (which must already exist
and have the same size as matrix \*(R"\f(CW$matrix2\fR\*(L" and matrix \*(R"\f(CW$matrix3\fR"!).
.PP
This operation can also be carried out \*(L"in\-place\*(R", i.e., the output and
one (or both) of the input matrices may be identical.
.PP
Note that this operation is the same as
\&\f(CW\*(C`$matrix1\->add($matrix2,\-$matrix3);\*(C'\fR, although the latter is
a little less efficient.
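.PP
A minimal sketch (all three matrices must be of equal size):
.PP
.Vb 3
\&  $sum = $a->shadow();
\&  $sum->add($a,$b);          # $sum now holds $a + $b
\&  $sum->subtract($sum,$b);   # in-place: $sum holds $a again
.Ve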
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix1\->multiply_scalar($matrix2,$scalar);\*(C'\fR
.PP
Calculates the product of matrix "\f(CW$matrix2\fR\*(L" and the number \*(R"\f(CW$scalar\fR\*(L"
(i.e., multiplies each element of matrix \*(R"\f(CW$matrix2\fR\*(L" with the factor
\&\*(R"\f(CW$scalar\fR\*(L") and stores the result in matrix \*(R"\f(CW$matrix1\fR\*(L" (which must
already exist and have the same size as matrix \*(R"\f(CW$matrix2\fR"!).
.PP
This operation can also be carried out \*(L"in\-place\*(R", i.e., input and
output matrix may be identical.
.RE
.IP "\(bu"
\&\f(CW\*(C`$product_matrix = $matrix1\->multiply($matrix2);\*(C'\fR
.PP
Calculates the product of matrix "\f(CW$matrix1\fR\*(L" and matrix \*(R"\f(CW$matrix2\fR\*(L"
and returns an object reference to a new matrix \*(R"\f(CW$product_matrix\fR" in
which the result of this operation has been stored.
.PP
Note that the dimensions of the two matrices "\f(CW$matrix1\fR\*(L" and \*(R"\f(CW$matrix2\fR"
(i.e., their numbers of rows and columns) must harmonize in the following
way (example):
.PP
.Vb 3
\&              [ 2 2 ]
\&              [ 2 2 ]
\&              [ 2 2 ]
.Ve
.PP
.Vb 4
\&  [ 1 1 1 ]   [ * * ]
\&  [ 1 1 1 ]   [ * * ]
\&  [ 1 1 1 ]   [ * * ]
\&  [ 1 1 1 ]   [ * * ]
.Ve
.PP
I.e., the number of columns of matrix "\f(CW$matrix1\fR\*(L" has to be the same
as the number of rows of matrix \*(R"\f(CW$matrix2\fR".
.PP
The number of rows and columns of the resulting matrix "\f(CW$product_matrix\fR\*(L"
is determined by the number of rows of matrix \*(R"\f(CW$matrix1\fR\*(L" and the number
of columns of matrix \*(R"\f(CW$matrix2\fR", respectively.
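.PP
For example (a sketch of the dimension rule above):
.PP
.Vb 3
\&  $a = Math::MatrixReal->new(4,3);   # 4 by 3
\&  $b = Math::MatrixReal->new(3,2);   # 3 by 2
\&  $p = $a->multiply($b);             # 4 by 2 (same as $a * $b)
.Ve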
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix1\->negate($matrix2);\*(C'\fR
.PP
Calculates the negative of matrix "\f(CW$matrix2\fR\*(L" (i.e., multiplies
all elements with \*(R"\-1\*(L") and stores the result in matrix \*(R"\f(CW$matrix1\fR\*(L"
(which must already exist and have the same size as matrix \*(R"\f(CW$matrix2\fR"!).
.PP
This operation can also be carried out \*(L"in\-place\*(R", i.e., input and
output matrix may be identical.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix_to_power = $matrix1\->exponent($integer);\*(C'\fR
.PP
Raises the matrix to the \f(CW$integer\fR power. Obviously, \f(CW$integer\fR must
be an integer. If it is zero, the identity matrix is returned. If a negative
integer is given, the inverse will be computed (if it exists) and then raised
to the absolute value of \f(CW$integer\fR. The matrix must be quadratic.
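.PP
For instance (a sketch, \f(CW$m\fR quadratic):
.PP
.Vb 2
\&  $m_squared = $m->exponent(2);   # same as $m * $m
\&  $identity  = $m->exponent(0);   # identity matrix of $m's size
.Ve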
.Sh "Boolean Matrix Operations"
.IX Subsection "Boolean Matrix Operations"
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_quadratic();\*(C'\fR
.PP
Returns a boolean value indicating if the given matrix is
quadratic (also known as \*(L"square\*(R" or \*(L"n by n\*(R"). A matrix is
quadratic if it has the same number of rows as it does columns.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_square();\*(C'\fR
.PP
This is an alias for \f(CW\*(C`is_quadratic()\*(C'\fR.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_symmetric();\*(C'\fR
.PP
Returns a boolean value indicating if the given matrix is
symmetric. By definition, a matrix is symmetric if and only
if (\fBM\fR[\fIi\fR,\fIj\fR]=\fBM\fR[\fIj\fR,\fIi\fR]). This is equivalent to
\&\f(CW\*(C`($matrix == ~$matrix)\*(C'\fR but without memory allocation.
Only quadratic matrices can be symmetric.
.PP
Notes: A symmetric matrix always has real eigenvalues/eigenvectors.
A matrix plus its transpose is always symmetric.
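.PP
A quick sketch illustrating the second note (assuming \f(CW$m\fR is quadratic):
.PP
.Vb 3
\&  $t = $m->shadow();
\&  $t->transpose($m);                            # $t is the transpose of $m
\&  print "ok\en" if ($m + $t)->is_symmetric();   # always true
.Ve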
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_skew_symmetric();\*(C'\fR
.PP
Returns a boolean value indicating if the given matrix is
skew symmetric. By definition, a matrix is skew symmetric if and only
if (\fBM\fR[\fIi\fR,\fIj\fR]=\fB\-M\fR[\fIj\fR,\fIi\fR]). This is equivalent to
\&\f(CW\*(C`($matrix == \-(~$matrix))\*(C'\fR but without memory allocation.
Only quadratic matrices can be skew symmetric.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_diagonal();\*(C'\fR
.PP
Returns a boolean value indicating if the given matrix is
diagonal, i.e. all of the nonzero elements are on the main diagonal.
Only quadratic matrices can be diagonal.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_tridiagonal();\*(C'\fR
.PP
Returns a boolean value indicating if the given matrix is
tridiagonal, i.e. all of the nonzero elements are on the main diagonal
or the diagonals above and below the main diagonal.
Only quadratic matrices can be tridiagonal.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_upper_triangular();\*(C'\fR
.PP
Returns a boolean value indicating if the given matrix is upper triangular,
i.e. all of the nonzero elements not on the main diagonal are above it.
Only quadratic matrices can be upper triangular.
Note: diagonal matrices are both upper and lower triangular.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_lower_triangular();\*(C'\fR
.PP
Returns a boolean value indicating if the given matrix is lower triangular,
i.e. all of the nonzero elements not on the main diagonal are below it.
Only quadratic matrices can be lower triangular.
Note: diagonal matrices are both upper and lower triangular.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_orthogonal();\*(C'\fR
.PP
Returns a boolean value indicating if the given matrix is orthogonal.
An orthogonal matrix has the property that the transpose equals the
inverse of the matrix. Instead of computing each and comparing them, this
method multiplies the matrix by its transpose, and returns true if this
turns out to be the identity matrix, false otherwise.
Only quadratic matrices can be orthogonal.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_binary();\*(C'\fR
.PP
Returns a boolean value indicating if the given matrix is binary.
A matrix is binary if it contains only zeroes or ones.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_gramian();\*(C'\fR
.PP
Returns a boolean value indicating if the given matrix is Gramian.
A matrix \f(CW$A\fR is Gramian if and only if there exists a
square matrix \f(CW$B\fR such that \f(CW\*(C`$A = ~$B*$B\*(C'\fR. This is equivalent to
checking if \f(CW$A\fR is symmetric and has all nonnegative eigenvalues, which
is what Math::MatrixReal uses to check for this property.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_LR();\*(C'\fR
.PP
Returns a boolean value indicating if the matrix is an \s-1LR\s0 decomposition
matrix.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_positive();\*(C'\fR
.PP
Returns a boolean value indicating if the matrix contains only
positive entries. Note that a zero entry is not positive and
will cause \f(CW\*(C`is_positive()\*(C'\fR to return false.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_negative();\*(C'\fR
.PP
Returns a boolean value indicating if the matrix contains only
negative entries. Note that a zero entry is not negative and
will cause \f(CW\*(C`is_negative()\*(C'\fR to return false.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_periodic($k);\*(C'\fR
.PP
Returns a boolean value indicating if the matrix is periodic
with period \f(CW$k\fR. This is true if \f(CW\*(C`$matrix ** ($k+1) == $matrix\*(C'\fR.
When \f(CW\*(C`$k == 1\*(C'\fR, this reduces down to the \f(CW\*(C`is_idempotent()\*(C'\fR
function.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_idempotent();\*(C'\fR
.PP
Returns a boolean value indicating if the matrix is idempotent,
which is defined as the square of the matrix being equal to
the original matrix, i.e., \f(CW\*(C`$matrix ** 2 == $matrix\*(C'\fR.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_row_vector();\*(C'\fR
.PP
Returns a boolean value indicating if the matrix is a row vector.
A row vector is a matrix which is 1xn. Note that the 1x1 matrix is
both a row and column vector.
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->is_col_vector();\*(C'\fR
.PP
Returns a boolean value indicating if the matrix is a col vector.
A col vector is a matrix which is nx1. Note that the 1x1 matrix is
both a row and column vector.
.Sh "Eigensystems"
.IX Subsection "Eigensystems"
.IP "\(bu" 2
\&\f(CW\*(C`($l, $V) = $matrix\->sym_diagonalize();\*(C'\fR
.Sp
This method performs the diagonalization of the quadratic
\&\fIsymmetric\fR matrix \fBM\fR stored in \f(CW$matrix\fR.
On output, \fBl\fR is a column vector containing all the eigenvalues
of \fBM\fR and \fBV\fR is an orthogonal matrix whose columns are the
corresponding normalized eigenvectors.
The primary property of an eigenvalue \fIl\fR and an eigenvector
\&\fBx\fR is of course that: \fBM\fR * \fBx\fR = \fIl\fR * \fBx\fR.
.Sp
The method uses a Householder reduction to tridiagonal form
followed by a \s-1QL\s0 algorithm with implicit shifts on this
tridiagonal. (The tridiagonal matrix is kept internally
in a compact form in this routine to save memory.)
In fact, this routine wraps the \fIhouseholder()\fR and
\&\fItri_diagonalize()\fR methods described below when their
intermediate results are not desired.
The overall algorithmic complexity of this technique
is O(N^3). According to several books, the coefficient
hidden by the 'O' is one of the best possible for general
(symmetric) matrices.
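.Sp
A minimal usage sketch (assuming \f(CW$A\fR is quadratic and symmetric):
.Sp
.Vb 3
\&  ($l, $V) = $A->sym_diagonalize();
\&  # for each i, column i of $V is a normalized eigenvector of $A
\&  # with eigenvalue $l->element($i,1)
.Ve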
.IP "\(bu" 2
\&\f(CW\*(C`($T, $Q) = $matrix\->householder();\*(C'\fR
.Sp
This method performs the Householder algorithm which reduces
the \fIn\fR by \fIn\fR real \fIsymmetric\fR matrix \fBM\fR contained
in \f(CW$matrix\fR to tridiagonal form.
On output, \fBT\fR is a symmetric tridiagonal matrix (only
diagonal and off-diagonal elements are non\-zero) and \fBQ\fR
is an \fIorthogonal\fR matrix performing the transformation
between \fBM\fR and \fBT\fR (\f(CW\*(C`$M == $Q * $T * ~$Q\*(C'\fR).
.IP "\(bu" 2
\&\f(CW\*(C`($l, $V) = $T\->tri_diagonalize([$Q]);\*(C'\fR
.Sp
This method diagonalizes the symmetric tridiagonal
matrix \fBT\fR. On output, \f(CW$l\fR and \f(CW$V\fR are similar to the
output values described for \fIsym_diagonalize()\fR.
.Sp
The optional argument \f(CW$Q\fR corresponds to an orthogonal
transformation matrix \fBQ\fR that should be used additionally
during \fBV\fR (eigenvectors) computation. It should be supplied
if the desired eigenvectors correspond to a more general
symmetric matrix \fBM\fR previously reduced by the
\&\fIhouseholder()\fR method, not a mere tridiagonal. If \fBT\fR is
really a tridiagonal matrix, \fBQ\fR can be omitted (it
will be internally created in fact as an identity matrix).
The method uses a \s-1QL\s0 algorithm (with implicit shifts).
.IP "\(bu" 2
\&\f(CW\*(C`$l = $matrix\->sym_eigenvalues();\*(C'\fR
.Sp
This method computes the eigenvalues of the quadratic
\&\fIsymmetric\fR matrix \fBM\fR stored in \f(CW$matrix\fR.
On output, \fBl\fR is a column vector containing all the eigenvalues
of \fBM\fR. Eigenvectors are not computed (unlike
\&\f(CW\*(C`sym_diagonalize()\*(C'\fR) and this method is more efficient
(even though it uses a similar algorithm with two phases).
However, understand that the algorithmic complexity of this
technique is still also O(N^3). But the coefficient hidden
by the 'O' is better by a factor of..., well, see your
benchmark, it's wiser.
.Sp
This routine wraps the \fIhouseholder_tridiagonal()\fR and
\&\fItri_eigenvalues()\fR methods described below when the
intermediate tridiagonal matrix is not needed.
.IP "\(bu" 2
\&\f(CW\*(C`$T = $matrix\->householder_tridiagonal();\*(C'\fR
.Sp
This method performs the Householder algorithm which reduces
the \fIn\fR by \fIn\fR real \fIsymmetric\fR matrix \fBM\fR contained
in \f(CW$matrix\fR to tridiagonal form.
On output, \fBT\fR is the obtained symmetric tridiagonal matrix
(only diagonal and off-diagonal elements are non\-zero). The
operation is similar to the \fIhouseholder()\fR method, but potentially
a little more efficient as the transformation matrix is not
computed.
.IP "\(bu" 2
\&\f(CW\*(C`$l = $T\->tri_eigenvalues();\*(C'\fR
.Sp
This method computes the eigenvalues of the symmetric
tridiagonal matrix \fBT\fR. On output, \f(CW$l\fR is a vector
containing the eigenvalues (similar to \f(CW\*(C`sym_eigenvalues()\*(C'\fR).
This method is much more efficient than \fItri_diagonalize()\fR
when eigenvectors are not needed.
.Sh "Miscellaneous"
.IX Subsection "Miscellaneous"
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->zero();\*(C'\fR
.PP
Assigns a zero to every element of the matrix "\f(CW$matrix\fR\*(L", i.e.,
erases all values previously stored there, thereby effectively
transforming the matrix into a \*(R"zero\*(L"\-matrix or \*(R"null"\-matrix,
the neutral element of the addition operation in a Ring.
.PP
(For instance the (quadratic) matrices with \*(L"n\*(R" rows and columns
together with matrix addition and multiplication form a Ring. A notable
characteristic of this Ring is that multiplication is not commutative,
i.e., in general, "\f(CW\*(C`matrix1 * matrix2\*(C'\fR\*(L" is not the same as
\&\*(R"\f(CW\*(C`matrix2 * matrix1\*(C'\fR"!)
.RE
.IP "\(bu"
\&\f(CW\*(C`$matrix\->one();\*(C'\fR
.PP
Assigns one's to the elements on the main diagonal (elements (1,1),
(2,2), (3,3) and so on) of matrix "\f(CW$matrix\fR\*(L" and zero's to all others,
thereby erasing all values previously stored there and transforming the
matrix into a \*(R"one"\-matrix, the neutral element of the multiplication
operation in a Ring.
.PP
(If the matrix is quadratic (which this method doesn't require, though),
then multiplying this matrix with itself yields this same matrix again,
and multiplying it with some other matrix leaves that other matrix
unchanged!)
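.PP
A minimal sketch of both methods:
.PP
.Vb 2
\&  $matrix->one();    # one's on the main diagonal, zero's elsewhere
\&  $matrix->zero();   # reset every element to zero
.Ve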
.RE
.IP "\(bu"
\&\f(CW\*(C`$latex_string = $matrix\->as_latex( align=> "c", format => "%s", name => "" );\*(C'\fR
.PP
This function returns the matrix as a LaTeX string. It takes a hash as an
argument which is used to control the style of the output. The hash element \f(CW\*(C`align\*(C'\fR
may be \*(L"c\*(R", \*(L"l\*(R" or \*(L"r\*(R", corresponding to center, left and right, respectively. The
\&\f(CW\*(C`format\*(C'\fR element is a format string that is given to \f(CW\*(C`sprintf\*(C'\fR to control the
number format, such as floating point or scientific notation. The \f(CW\*(C`name\*(C'\fR
element can be used so that a LaTeX string of \*(L"$name = \*(R" is prepended to the string.
.PP
Example:
.PP
.Vb 2
\& my $a = Math::MatrixReal->new_from_cols([[ 1.234, 5.678, 9.1011],[1,2,3]] );
\& print $a->as_latex( ( format => "%.2f", align => "l",name => "A" ) );
.Ve
.PP
Output:
.PP
.Vb 7
\&  $A = $ $
\&  \eleft( \ebegin{array}{ll}
\&  1.23&1.00 \e\e
\&  5.68&2.00 \e\e
\&  9.10&3.00
\&  \eend{array} \eright)
\&  $
.Ve
.RE
.IP "\(bu"
\&\f(CW\*(C`$yacas_string = $matrix\->as_yacas( format => "%s", name => "", semi => 0 );\*(C'\fR
.PP
This function returns the matrix as a string that can be read by Yacas.
It takes a hash as
an argument which controls the style of the output. The
\&\f(CW\*(C`format\*(C'\fR element is a format string that is given to \f(CW\*(C`sprintf\*(C'\fR to control the
number format, such as floating point or scientific notation. The \f(CW\*(C`name\*(C'\fR
element can be used so that \*(L"$name = \*(R" is prepended to the string. The \f(CW\*(C`semi\*(C'\fR element can
be set to 1 so that a semicolon is appended to the output.
.PP
Example:
.PP
.Vb 2
\& $a = Math::MatrixReal->new_from_cols([[ 1.234, 5.678, 9.1011],[1,2,3]] );
\& print $a->as_yacas( ( format => "%.2f", align => "l",name => "A" ) );
.Ve
.PP
Output:
.PP
.Vb 1
\& A := {{1.23,1.00},{5.68,2.00},{9.10,3.00}}
.Ve
.RE
.IP "\(bu"
\&\f(CW\*(C`$matlab_string = $matrix\->as_matlab( format => "%s", name => "", semi => 0 );\*(C'\fR
.PP
This function returns the matrix as a string that can be read by Matlab. It takes a hash as
an argument which controls the style of the output. The
\&\f(CW\*(C`format\*(C'\fR element is a format string that is given to \f(CW\*(C`sprintf\*(C'\fR to control the
number format, such as floating point or scientific notation. The \f(CW\*(C`name\*(C'\fR
element can be used so that \*(L"$name = \*(R" is prepended to the string. The \f(CW\*(C`semi\*(C'\fR element can
be set to 1 so that a semicolon is appended (so Matlab does not print out the matrix).
.PP
Example:
.PP
.Vb 2
\& my $a = Math::MatrixReal->new_from_rows([[ 1.234, 5.678, 9.1011],[1,2,3]] );
\& print $a->as_matlab( ( format => "%.3f", name => "A",semi => 1 ) );
.Ve
.PP
Output:
.PP
.Vb 2
\&  A = [ 1.234 5.678 9.101;
\&   1.000 2.000 3.000];
.Ve
.RE
.IP "\(bu"
\&\f(CW\*(C`$scilab_string = $matrix\->as_scilab( format => "%s", name => "", semi => 0 );\*(C'\fR
.PP
This function is just an alias for \f(CW\*(C`as_matlab()\*(C'\fR, since both Scilab and Matlab have the
same matrix format.
.RE
.IP "\(bu"
\&\f(CW\*(C`$minimum = Math::MatrixReal::min($number1,$number2);\*(C'\fR
.PP
Returns the minimum of the two numbers "\f(CW\*(C`number1\*(C'\fR\*(L" and \*(R"\f(CW\*(C`number2\*(C'\fR".
.RE
.IP "\(bu"
\&\f(CW\*(C`$maximum = Math::MatrixReal::max($number1,$number2);\*(C'\fR
.PP
Returns the maximum of the two numbers "\f(CW\*(C`number1\*(C'\fR\*(L" and \*(R"\f(CW\*(C`number2\*(C'\fR".
.RE
.IP "\(bu"
\&\f(CW\*(C`$minimal_cost_matrix = $cost_matrix\->kleene();\*(C'\fR
.PP
Copies the matrix "\f(CW$cost_matrix\fR\*(L" (which has to be quadratic!) to
a new matrix of the same size (i.e., \*(R"clones" the input matrix) and
applies Kleene's algorithm to it.
.PP
See \fIMath::Kleene\fR\|(3) for more details about this algorithm!
.PP
The method returns an object reference to the new matrix.
.PP
Matrix "\f(CW$cost_matrix\fR" is not changed by this method in any way.
.RE
.IP "\(bu"
\&\f(CW\*(C`($norm_matrix,$norm_vector) = $matrix\->normalize($vector);\*(C'\fR
.PP
This method is used to improve the numerical stability when solving
linear equation systems.
.PP
Suppose you have a matrix \*(L"A\*(R" and a vector \*(L"b\*(R" and you want to find
out a vector \*(L"x\*(R" so that \f(CW\*(C`A * x = b\*(C'\fR, i.e., the vector \*(L"x\*(R" which
solves the equation system represented by the matrix \*(L"A\*(R" and the
vector \*(L"b\*(R".
.PP
Applying this method to the pair (A,b) yields a pair (A',b') where
each row has been divided by (the absolute value of) the greatest
coefficient appearing in that row. So this coefficient becomes equal
to \*(L"1\*(R" (or \*(L"\-1\*(R") in the new pair (A',b') (all others become smaller
than one and greater than minus one).
.PP
Note that this operation does not change the equation system itself
because the same division is carried out on either side of the equation
sign!
.PP
The method requires a quadratic (!) matrix "\f(CW$matrix\fR\*(L" and a vector
\&\*(R"\f(CW$vector\fR" for input (the vector must be a column vector with the same
number of rows as the input matrix) and returns a list of two items
which are object references to a new matrix and a new vector, in this
order.
.PP
The output matrix and vector are clones of the input matrix and vector
to which the operation explained above has been applied.
.PP
The input matrix and vector are not changed by this in any way.
.PP
Example of how this method can affect the result of the methods to solve
equation systems (explained immediately below following this method):
.PP
Consider the following little program:
.PP
.Vb 1
\& #!perl -w
.Ve
.PP
.Vb 1
\& use Math::MatrixReal qw(new_from_string);
.Ve
.PP
.Vb 5
\& $A = Math::MatrixReal->new_from_string(<<"MATRIX");
\& [ 1 2 3 ]
\& [ 5 7 11 ]
\& [ 23 19 13 ]
\& MATRIX
.Ve
.PP
.Vb 5
\& $b = Math::MatrixReal->new_from_string(<<"MATRIX");
\& [ 0 ]
\& [ 1 ]
\& [ 29 ]
\& MATRIX
.Ve
.PP
.Vb 7
\& $LR = $A->decompose_LR();
\& if (($dim,$x,$B) = $LR->solve_LR($b))
\& {
\& $test = $A * $x;
\& print "x = \en$x";
\& print "A * x = \en$test";
\& }
.Ve
.PP
.Vb 1
\& ($A_,$b_) = $A->normalize($b);
.Ve
.PP
.Vb 7
\& $LR = $A_->decompose_LR();
\& if (($dim,$x,$B) = $LR->solve_LR($b_))
\& {
\& $test = $A * $x;
\& print "x = \en$x";
\& print "A * x = \en$test";
\& }
.Ve
.PP
This will print:
.PP
.Vb 16
\& x =
\& [ 1.000000000000E+00 ]
\& [ 1.000000000000E+00 ]
\& [ -1.000000000000E+00 ]
\& A * x =
\& [ 4.440892098501E-16 ]
\& [ 1.000000000000E+00 ]
\& [ 2.900000000000E+01 ]
\& x =
\& [ 1.000000000000E+00 ]
\& [ 1.000000000000E+00 ]
\& [ -1.000000000000E+00 ]
\& A * x =
\& [ 0.000000000000E+00 ]
\& [ 1.000000000000E+00 ]
\& [ 2.900000000000E+01 ]
.Ve
.PP
You can see that in the second example (where \*(L"\fInormalize()\fR\*(R" has been used),
the result is \*(L"better\*(R", i.e., more accurate!
.RE
.IP "\(bu"
\&\f(CW\*(C`$LR_matrix = $matrix\->decompose_LR();\*(C'\fR
.PP
This method is needed to solve linear equation systems.
.PP
Suppose you have a matrix \*(L"A\*(R" and a vector \*(L"b\*(R" and you want to find
out a vector \*(L"x\*(R" so that \f(CW\*(C`A * x = b\*(C'\fR, i.e., the vector \*(L"x\*(R" which
solves the equation system represented by the matrix \*(L"A\*(R" and the
vector \*(L"b\*(R".
.PP
You might also have a matrix \*(L"A\*(R" and a whole bunch of different
vectors \*(L"b1\*(R"..\*(L"bk\*(R" for which you need to find vectors \*(L"x1\*(R"..\*(L"xk\*(R"
so that \f(CW\*(C`A * xi = bi\*(C'\fR, for \f(CW\*(C`i=1..k\*(C'\fR.
.PP
Using Gaussian transformations (multiplying a row or column with
a factor, swapping two rows or two columns and adding a multiple
of one row or column to another), it is possible to decompose any
matrix \*(L"A\*(R" into two triangular matrices, called \*(L"L\*(R" and \*(L"R\*(R" (for
\&\*(L"Left\*(R" and \*(L"Right\*(R").
.PP
\&\*(L"L\*(R" has one's on the main diagonal (the elements (1,1), (2,2), (3,3)
and so so), non-zero values to the left and below of the main diagonal
and all zero's in the upper right half of the matrix.
.PP
\&\*(L"R\*(R" has non-zero values on the main diagonal as well as to the right
and above of the main diagonal and all zero's in the lower left half
of the matrix, as follows:
.PP
.Vb 5
\& [ 1 0 0 0 0 ] [ x x x x x ]
\& [ x 1 0 0 0 ] [ 0 x x x x ]
\& L = [ x x 1 0 0 ] R = [ 0 0 x x x ]
\& [ x x x 1 0 ] [ 0 0 0 x x ]
\& [ x x x x 1 ] [ 0 0 0 0 x ]
.Ve
.PP
Note that "\f(CW\*(C`L * R\*(C'\fR\*(L" is equivalent to matrix \*(R"A" in the sense that
\&\f(CW\*(C`L * R * x = b <==> A * x = b\*(C'\fR for all vectors \*(L"x\*(R", leaving
out of account permutations of the rows and columns (these are taken
care of \*(L"magically\*(R" by this module!) and numerical errors.
.PP
Trick:
.PP
Because we know that \*(L"L\*(R" has one's on its main diagonal, we can
store both matrices together in the same array without information
loss! I.e.,
.PP
.Vb 5
\& [ R R R R R ]
\& [ L R R R R ]
\& LR = [ L L R R R ]
\& [ L L L R R ]
\& [ L L L L R ]
.Ve
.PP
Beware, though, that \*(L"\s-1LR\s0\*(R" and "\f(CW\*(C`L * R\*(C'\fR" are not the same!!!
.PP
Note also that for the same reason, you cannot apply the method \*(L"\fInormalize()\fR\*(R"
to an \*(L"\s-1LR\s0\*(R" decomposition matrix. Trying to do so will yield meaningless
rubbish!
.PP
(You need to apply \*(L"\fInormalize()\fR\*(R" to each pair (Ai,bi) \fB\s-1BEFORE\s0\fR decomposing
the matrix \*(L"Ai'\*(R"!)
.PP
Now what does all this help us in solving linear equation systems?
.PP
It helps us because a triangular matrix is the next best thing
that can happen to us besides a diagonal matrix (a matrix that
has non-zero values only on its main diagonal \- in which case
the solution is trivial, simply divide "\f(CW\*(C`b[i]\*(C'\fR\*(L" by \*(R"\f(CW\*(C`A[i,i]\*(C'\fR\*(L"
to get \*(R"\f(CW\*(C`x[i]\*(C'\fR"!).
.PP
To find the solution to our problem "\f(CW\*(C`A * x = b\*(C'\fR", we divide this
problem in parts: instead of solving \f(CW\*(C`A * x = b\*(C'\fR directly, we first
decompose \*(L"A\*(R" into \*(L"L\*(R" and \*(L"R\*(R" and then solve "\f(CW\*(C`L * y = b\*(C'\fR\*(L" and
finally \*(R"\f(CW\*(C`R * x = y\*(C'\fR" (motto: divide and rule!).
.PP
From the illustration above it is clear that solving "\f(CW\*(C`L * y = b\*(C'\fR\*(L"
and \*(R"\f(CW\*(C`R * x = y\*(C'\fR" is straightforward: we immediately know that
\&\f(CW\*(C`y[1] = b[1]\*(C'\fR. We then deduce swiftly that
.PP
.Vb 1
\& y[2] = b[2] - L[2,1] * y[1]
.Ve
.PP
(and we know "\f(CW\*(C`y[1]\*(C'\fR" by now!), that
.PP
.Vb 1
\& y[3] = b[3] - L[3,1] * y[1] - L[3,2] * y[2]
.Ve
.PP
and so on.
.PP
Having effortlessly calculated the vector \*(L"y\*(R", we now proceed to
calculate the vector \*(L"x\*(R" in a similar fashion: we see immediately
that \f(CW\*(C`x[n] = y[n] / R[n,n]\*(C'\fR. It follows that
.PP
.Vb 1
\& x[n-1] = ( y[n-1] - R[n-1,n] * x[n] ) / R[n-1,n-1]
.Ve
.PP
and
.PP
.Vb 2
\& x[n-2] = ( y[n-2] - R[n-2,n-1] * x[n-1] - R[n-2,n] * x[n] )
\& / R[n-2,n-2]
.Ve
.PP
and so on.
.PP
You can see that \- especially when you have many vectors \*(L"b1\*(R"..\*(L"bk\*(R"
for which you are searching solutions to \f(CW\*(C`A * xi = bi\*(C'\fR \- this scheme
is much more efficient than a straightforward, \*(L"brute force\*(R" approach.
.PP
This method requires a quadratic matrix as its input matrix.
.PP
If you don't have that many equations, fill up with zero's (i.e., do
nothing to fill the superfluous rows if it's a \*(L"fresh\*(R" matrix, i.e.,
a matrix that has been created with \*(L"\fInew()\fR\*(R" or \*(L"\fIshadow()\fR\*(R").
.PP
The method returns an object reference to a new matrix containing the
matrices \*(L"L\*(R" and \*(L"R\*(R".
.PP
The input matrix is not changed by this method in any way.
.PP
Note that you can \*(L"\fIcopy()\fR\*(R" or \*(L"\fIclone()\fR\*(R" the result of this method without
losing its \*(L"magical\*(R" properties (for instance concerning the hidden
permutations of its rows and columns).
.PP
However, as soon as you are applying any method that alters the contents
of the matrix, its \*(L"magical\*(R" properties are stripped off, and the matrix
immediately reverts to an \*(L"ordinary\*(R" matrix (with the values it just happens
to contain at that moment, be they meaningful as an ordinary matrix or not!).
.RE
.IP "\(bu"
\&\f(CW\*(C`($dimension,$x_vector,$base_matrix) = $LR_matrix\*(C'\fR\f(CW\*(C`\->\*(C'\fR\f(CW\*(C`solve_LR($b_vector);\*(C'\fR
.PP
Use this method to actually solve an equation system.
.PP
Matrix "\f(CW$LR_matrix\fR\*(L" must be a (quadratic) matrix returned by the
method \*(R"\fIdecompose_LR()\fR\*(L", the \s-1LR\s0 decomposition matrix of the matrix
\&\*(R"A" of your equation system \f(CW\*(C`A * x = b\*(C'\fR.
.PP
The input vector "\f(CW$b_vector\fR\*(L" is the vector \*(R"b" in your equation system
\&\f(CW\*(C`A * x = b\*(C'\fR, which must be a column vector and have the same number of
rows as the input matrix "\f(CW$LR_matrix\fR".
.PP
The method returns a list of three items if a solution exists or an
empty list otherwise (!).
.PP
Therefore, you should always use this method like this:
.PP
.Vb 8
\& if ( ($dim,$x_vec,$base) = $LR->solve_LR($b_vec) )
\& {
\& # do something with the solution...
\& }
\& else
\& {
\& # do something with the fact that there is no solution...
\& }
.Ve
.PP
The three items returned are: the dimension "\f(CW$dimension\fR\*(L" of the solution
space (which is zero if only one solution exists, one if the solution is
a straight line, two if the solution is a plane, and so on), the solution
vector \*(R"\f(CW$x_vector\fR\*(L" (which is the vector \*(R"x" of your equation system
\&\f(CW\*(C`A * x = b\*(C'\fR) and a matrix "\f(CW$base_matrix\fR" representing a base of the
solution space (a set of vectors which put up the solution space like
the spokes of an umbrella).
.PP
Only the first "\f(CW$dimension\fR" columns of this base matrix actually
contain entries, the remaining columns are all zero.
.PP
Now what is all this stuff with that \*(L"base\*(R" good for?
.PP
The output vector \*(L"x\*(R" is \fB\s-1ALWAYS\s0\fR a solution of your equation system
\&\f(CW\*(C`A * x = b\*(C'\fR.
.PP
But also any vector "\f(CW$vector\fR"
.PP
.Vb 1
\& $vector = $x_vector->clone();
.Ve
.PP
.Vb 1
\& $machine_infinity = 1E+99; # or something like that
.Ve
.PP
.Vb 4
\& for ( $i = 1; $i <= $dimension; $i++ )
\& {
\& $vector += rand($machine_infinity) * $base_matrix->column($i);
\& }
.Ve
.PP
is a solution to your problem \f(CW\*(C`A * x = b\*(C'\fR, i.e., if "\f(CW$A_matrix\fR\*(L" contains
your matrix \*(R"A", then
.PP
.Vb 1
\& print abs( $A_matrix * $vector - $b_vector ), "\en";
.Ve
.PP
should print a number around 1E\-16 or so!
.PP
By the way, note that you can actually calculate those vectors "\f(CW$vector\fR"
a little more efficiently as follows:
.PP
.Vb 1
\& $rand_vector = $x_vector->shadow();
.Ve
.PP
.Vb 1
\& $machine_infinity = 1E+99; # or something like that
.Ve
.PP
.Vb 4
\& for ( $i = 1; $i <= $dimension; $i++ )
\& {
\& $rand_vector->assign($i,1, rand($machine_infinity) );
\& }
.Ve
.PP
.Vb 1
\& $vector = $x_vector + ( $base_matrix * $rand_vector );
.Ve
.PP
Note that the input matrix and vector are not changed by this method
in any way.
.RE
.IP "\(bu"
\&\f(CW\*(C`$inverse_matrix = $LR_matrix\->invert_LR();\*(C'\fR
.PP
Use this method to calculate the inverse of a given matrix "\f(CW$LR_matrix\fR\*(L",
which must be a (quadratic) matrix returned by the method \*(R"\fIdecompose_LR()\fR".
.PP
The method returns an object reference to a new matrix of the same size as
the input matrix containing the inverse of the matrix that you initially
fed into \*(L"\fIdecompose_LR()\fR\*(R" \fB\s-1IF\s0 \s-1THE\s0 \s-1INVERSE\s0 \s-1EXISTS\s0\fR, or an empty list
otherwise.
.PP
Therefore, you should always use this method in the following way:
.PP
.Vb 8
\& if ( $inverse_matrix = $LR->invert_LR() )
\& {
\& # do something with the inverse matrix...
\& }
\& else
\& {
\& # do something with the fact that there is no inverse matrix...
\& }
.Ve
.PP
Note that by definition (disregarding numerical errors), the product
of the initial matrix and its inverse (or vice\-versa) is always a matrix
containing ones on the main diagonal (elements (1,1), (2,2), (3,3) and
so on) and zeros elsewhere.
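.PP
For illustration, a quick numerical check of this property might look like
the following sketch (not part of the module; it assumes "\f(CW$matrix\fR" is
the matrix originally fed into \*(L"\fIdecompose_LR()\fR\*(R" and uses the
\&\*(L"one\*(R"\-norm via the overloaded \*(L"\fIabs()\fR\*(R"):
.PP
.Vb 6
\& $identity = $matrix->shadow(); # new zero matrix of the same size
\& $identity->one();              # turn it into the identity matrix
\& if ( abs( $matrix * $inverse_matrix - $identity ) < 1E-12 )
\& {
\&     # the product is (numerically) the identity matrix...
\& }
.Ve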
.PP
The input matrix is not changed by this method in any way.
.RE
.IP "\(bu"
\&\f(CW\*(C`$condition = $matrix\->condition($inverse_matrix);\*(C'\fR
.PP
In fact this method is just a shortcut for
.PP
.Vb 1
\& abs($matrix) * abs($inverse_matrix)
.Ve
.PP
Both input matrices must be quadratic and have the same size, and the result
is meaningful only if one of them is the inverse of the other (for instance,
as returned by the method \*(L"\fIinvert_LR()\fR\*(R").
.PP
The number returned is a measure of the \*(L"condition\*(R" of the given matrix
"\f(CW$matrix\fR", i.e., a measure of the numerical stability of the matrix.
.PP
This number is always positive, and the smaller its value, the better the
condition of the matrix (the better the stability of all subsequent
computations carried out using this matrix).
.PP
Numerical stability means for example that if
.PP
.Vb 1
\& abs( $vec_correct - $vec_with_error ) < $epsilon
.Ve
.PP
holds, there must be a "\f(CW$delta\fR\*(L" which doesn't depend on the vector
\&\*(R"\f(CW$vec_correct\fR\*(L" (nor \*(R"\f(CW$vec_with_error\fR", by the way) so that
.PP
.Vb 1
\& abs( $matrix * $vec_correct - $matrix * $vec_with_error ) < $delta
.Ve
.PP
also holds.
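.PP
As an illustration, a minimal sketch (assuming the matrix in "\f(CW$A_matrix\fR"
is quadratic and its inverse exists):
.PP
.Vb 6
\& $LR_matrix = $A_matrix->decompose_LR();
\& if ( $inverse_matrix = $LR_matrix->invert_LR() )
\& {
\&     $condition = $A_matrix->condition($inverse_matrix);
\&     print "condition of A = $condition\en";
\& }
.Ve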
.RE
.IP "\(bu"
\&\f(CW\*(C`$determinant = $LR_matrix\->det_LR();\*(C'\fR
.PP
Calculates the determinant of a matrix whose \s-1LR\s0 decomposition matrix
"\f(CW$LR_matrix\fR\*(L" must be given (which must be a (quadratic) matrix
returned by the method \*(R"\fIdecompose_LR()\fR").
.PP
In fact the determinant is a by-product of the \s-1LR\s0 decomposition: It is
(in principle, that is, except for the sign) simply the product of the
elements on the main diagonal (elements (1,1), (2,2), (3,3) and so on)
of the \s-1LR\s0 decomposition matrix.
.PP
(The sign is taken care of \*(L"magically\*(R" by this module)
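.PP
A minimal usage sketch (assuming "\f(CW$A_matrix\fR" holds a quadratic matrix):
.PP
.Vb 3
\& $LR_matrix   = $A_matrix->decompose_LR();
\& $determinant = $LR_matrix->det_LR();
\& print "det(A) = $determinant\en";
.Ve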
.RE
.IP "\(bu"
\&\f(CW\*(C`$order = $LR_matrix\->order_LR();\*(C'\fR
.PP
Calculates the order (called \*(L"Rang\*(R" in German) of a matrix whose
\&\s-1LR\s0 decomposition matrix "\f(CW$LR_matrix\fR\*(L" must be given (which must
be a (quadratic) matrix returned by the method \*(R"\fIdecompose_LR()\fR").
.PP
This number is a measure of the number of linearly independent row
and column vectors (= number of linearly independent equations in
the case of a matrix representing an equation system) of the
matrix that was initially fed into \*(L"\fIdecompose_LR()\fR\*(R".
.PP
If \*(L"n\*(R" is the number of rows and columns of the (quadratic!) matrix,
then \*(L"n \- order\*(R" is the dimension of the solution space of the
associated equation system.
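.PP
For example (a sketch; "\f(CW$A_matrix\fR" is assumed to hold a quadratic matrix,
and the \*(L"\fIdim()\fR\*(R" method is used to obtain \*(L"n\*(R"):
.PP
.Vb 4
\& $LR_matrix = $A_matrix->decompose_LR();
\& $order = $LR_matrix->order_LR();
\& ($rows,$columns) = $A_matrix->dim();
\& print "dimension of solution space = ", $rows - $order, "\en";
.Ve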
.RE
.IP "\(bu"
\&\f(CW\*(C`$rank = $LR_matrix\->rank_LR();\*(C'\fR
.PP
This is an alias for the \f(CW\*(C`order_LR()\*(C'\fR function. The \*(L"order\*(R"
is usually called the \*(L"rank\*(R" in the United States.
.RE
.IP "\(bu"
\&\f(CW\*(C`$scalar_product = $vector1\->scalar_product($vector2);\*(C'\fR
.PP
Returns the scalar product of vector "\f(CW$vector1\fR\*(L" and vector \*(R"\f(CW$vector2\fR".
.PP
Both vectors must be column vectors (i.e., a matrix having
several rows but only one column).
.PP
This is a (more efficient!) shortcut for
.PP
.Vb 2
\& $temp = ~$vector1 * $vector2;
\& $scalar_product = $temp->element(1,1);
.Ve
.PP
or, equivalently, the sum over \f(CW\*(C`i=1..n\*(C'\fR of the products \f(CW\*(C`vector1[i] * vector2[i]\*(C'\fR.
.PP
Provided that neither of the two input vectors is the null vector,
the two vectors are orthogonal, i.e., have an angle of 90 degrees
between them, if and only if their scalar product is zero.
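.PP
For example, to test two such (non-null) vectors for orthogonality
(a sketch; the tolerance \*(L"1E\-12\*(R" is an arbitrary choice):
.PP
.Vb 4
\& if ( abs( $vector1->scalar_product($vector2) ) < 1E-12 )
\& {
\&     # the two vectors are (numerically) orthogonal...
\& }
.Ve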
.RE
.IP "\(bu"
\&\f(CW\*(C`$vector_product = $vector1\->vector_product($vector2);\*(C'\fR
.PP
Returns the vector product of vector "\f(CW$vector1\fR\*(L" and vector \*(R"\f(CW$vector2\fR".
.PP
Both vectors must be column vectors (i.e., a matrix having several rows
but only one column).
.PP
Currently, the vector product is only defined for 3 dimensions (i.e.,
vectors with 3 rows); all other vectors trigger an error message.
.PP
In 3 dimensions, the vector product of two vectors \*(L"x\*(R" and \*(L"y\*(R"
is defined as
.PP
.Vb 3
\& | x[1] y[1] e[1] |
\& determinant | x[2] y[2] e[2] |
\& | x[3] y[3] e[3] |
.Ve
.PP
where the "\f(CW\*(C`x[i]\*(C'\fR\*(L" and \*(R"\f(CW\*(C`y[i]\*(C'\fR\*(L" are the components of the two vectors
\&\*(R"x\*(L" and \*(R"y\*(L", respectively, and the \*(R"\f(CW\*(C`e[i]\*(C'\fR\*(L" are unity vectors (i.e.,
vectors with a length equal to one) with a one in row \*(R"i" and zero's
elsewhere (this means that you have numbers and vectors as elements
in this matrix!).
.PP
This determinant evaluates to the rather simple formula
.PP
.Vb 3
\& z[1] = x[2] * y[3] - x[3] * y[2]
\& z[2] = x[3] * y[1] - x[1] * y[3]
\& z[3] = x[1] * y[2] - x[2] * y[1]
.Ve
.PP
A characteristic property of the vector product is that the resulting
vector is orthogonal to both of the input vectors (provided neither of
them is the null vector; otherwise this is trivial), i.e., the scalar product
of each of the input vectors with the resulting vector is always zero.
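.PP
You can verify this property with a few lines like these (a sketch;
"\f(CW$x_vector\fR" and "\f(CW$y_vector\fR" are assumed to be column vectors
with 3 rows):
.PP
.Vb 3
\& $z_vector = $x_vector->vector_product($y_vector);
\& print $x_vector->scalar_product($z_vector), "\en"; # prints (nearly) zero
\& print $y_vector->scalar_product($z_vector), "\en"; # prints (nearly) zero
.Ve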
.RE
.IP "\(bu"
\&\f(CW\*(C`$length = $vector\->length();\*(C'\fR
.PP
This is actually a shortcut for
.PP
.Vb 1
\& $length = sqrt( $vector->scalar_product($vector) );
.Ve
.PP
and returns the length of a given (column!) vector "\f(CW$vector\fR".
.PP
Note that the \*(L"length\*(R" calculated by this method is in fact the
\&\*(L"two\*(R"\-norm of a vector "\f(CW$vector\fR"!
.PP
The general definition for norms of vectors is the following:
.PP
.Vb 4
\& sub vector_norm
\& {
\& croak "Usage: \e$norm = \e$vector->vector_norm(\e$n);"
\& if (@_ != 2);
.Ve
.PP
.Vb 3
\& my($vector,$n) = @_;
\& my($rows,$cols) = ($vector->[1],$vector->[2]);
\& my($k,$comp,$sum);
.Ve
.PP
.Vb 2
\& croak "Math::MatrixReal::vector_norm(): vector is not a column vector"
\& unless ($cols == 1);
.Ve
.PP
.Vb 2
\& croak "Math::MatrixReal::vector_norm(): norm index must be > 0"
\& unless ($n > 0);
.Ve
.PP
.Vb 2
\& croak "Math::MatrixReal::vector_norm(): norm index must be integer"
\& unless ($n == int($n));
.Ve
.PP
.Vb 8
\& $sum = 0;
\& for ( $k = 0; $k < $rows; $k++ )
\& {
\& $comp = abs( $vector->[0][$k][0] );
\& $sum += $comp ** $n;
\& }
\& return( $sum ** (1 / $n) );
\& }
.Ve
.PP
Note that the case \*(L"n = 1\*(R" is the \*(L"one\*(R"\-norm for matrices applied to a
vector, the case \*(L"n = 2\*(R" is the euclidean norm or length of a vector,
and if \*(L"n\*(R" goes to infinity, you have the \*(L"infinity\*(R"\- or \*(L"maximum\*(R"\-norm
for matrices applied to a vector!
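.PP
With the \f(CW\*(C`vector_norm()\*(C'\fR sub defined as above (called directly, as a
plain subroutine), the following sketch shows that the \*(L"two\*(R"\-norm indeed
agrees with \*(L"\fIlength()\fR\*(R":
.PP
.Vb 2
\& $norm = vector_norm($vector,2);               # the "two"-norm
\& print abs( $vector->length() - $norm ), "\en"; # prints (nearly) zero
.Ve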
.RE
.IP "\(bu"
\&\f(CW\*(C`$xn_vector = $matrix\->\*(C'\fR\f(CW\*(C`solve_GSM($x0_vector,$b_vector,$epsilon);\*(C'\fR
.RE
.IP "\(bu"
\&\f(CW\*(C`$xn_vector = $matrix\->\*(C'\fR\f(CW\*(C`solve_SSM($x0_vector,$b_vector,$epsilon);\*(C'\fR
.RE
.IP "\(bu"
\&\f(CW\*(C`$xn_vector = $matrix\->\*(C'\fR\f(CW\*(C`solve_RM($x0_vector,$b_vector,$weight,$epsilon);\*(C'\fR
.PP
In some cases it might not be practical or desirable to solve an
equation system "\f(CW\*(C`A * x = b\*(C'\fR\*(L" using an analytical algorithm like
the \*(R"\fIdecompose_LR()\fR\*(L" and \*(R"\fIsolve_LR()\fR" method pair.
.PP
In fact, in some cases, due to the numerical properties (the \*(L"condition\*(R")
of the matrix \*(L"A\*(R", the numerical error of the result obtained this way
can be greater than that of an approximate (iterative) algorithm like
one of the three implemented here.
.PP
All three methods, \s-1GSM\s0 (\*(L"Global Step Method\*(R" or \*(L"Gesamtschrittverfahren\*(R"),
\&\s-1SSM\s0 (\*(L"Single Step Method\*(R" or \*(L"Einzelschrittverfahren\*(R") and \s-1RM\s0 (\*(L"Relaxation
Method\*(R" or \*(L"Relaxationsverfahren\*(R"), are fix-point iterations, that is, can
be described by an iteration function "\f(CW\*(C`x(t+1) = Phi( x(t) )\*(C'\fR" which has
the property:
.PP
.Vb 1
\& Phi(x) = x <==> A * x = b
.Ve
.PP
We can define "\f(CWPhi(x)\fR" as follows:
.PP
.Vb 1
\& Phi(x) := ( En - A ) * x + b
.Ve
.PP
where \*(L"En\*(R" is a matrix of the same size as \*(L"A\*(R" (\*(L"n\*(R" rows and columns)
with ones on its main diagonal and zeros elsewhere.
.PP
This function has the required property.
.PP
Proof:
.PP
.Vb 1
\& A * x = b
.Ve
.PP
.Vb 1
\& <==> -( A * x ) = -b
.Ve
.PP
.Vb 1
\& <==> -( A * x ) + x = -b + x
.Ve
.PP
.Vb 1
\& <==> -( A * x ) + x + b = x
.Ve
.PP
.Vb 1
\& <==> x - ( A * x ) + b = x
.Ve
.PP
.Vb 1
\& <==> ( En - A ) * x + b = x
.Ve
.PP
This last step is true because
.PP
.Vb 1
\& x[i] - ( a[i,1] x[1] + ... + a[i,i] x[i] + ... + a[i,n] x[n] ) + b[i]
.Ve
.PP
is the same as
.PP
.Vb 1
\& ( -a[i,1] x[1] + ... + (1 - a[i,i]) x[i] + ... + -a[i,n] x[n] ) + b[i]
.Ve
.PP
qed
.PP
Note that actually solving the equation system "\f(CW\*(C`A * x = b\*(C'\fR" means
to calculate
.PP
.Vb 1
\& a[i,1] x[1] + ... + a[i,i] x[i] + ... + a[i,n] x[n] = b[i]
.Ve
.PP
.Vb 4
\& <==> a[i,i] x[i] =
\& b[i]
\& - ( a[i,1] x[1] + ... + a[i,i] x[i] + ... + a[i,n] x[n] )
\& + a[i,i] x[i]
.Ve
.PP
.Vb 5
\& <==> x[i] =
\& ( b[i]
\& - ( a[i,1] x[1] + ... + a[i,i] x[i] + ... + a[i,n] x[n] )
\& + a[i,i] x[i]
\& ) / a[i,i]
.Ve
.PP
.Vb 5
\& <==> x[i] =
\& ( b[i] -
\& ( a[i,1] x[1] + ... + a[i,i-1] x[i-1] +
\& a[i,i+1] x[i+1] + ... + a[i,n] x[n] )
\& ) / a[i,i]
.Ve
.PP
There is one major restriction, though: a fixed-point iteration is
guaranteed to converge only if the first derivative of the iteration
function has an absolute value less than one in an area around the
point "\f(CWx(*)\fR\*(L" for which \*(R"\f(CW\*(C`Phi( x(*) ) = x(*)\*(C'\fR\*(L" is to be true, and
if the start vector \*(R"\f(CWx(0)\fR" lies within that area!
.PP
This is best verified graphically, which unfortunately is impossible
to do in this textual documentation!
.PP
See literature on Numerical Analysis for details!
.PP
In our case, this restriction translates to the following three conditions:
.PP
There must exist a norm so that the norm of the matrix of the iteration
function, \f(CW\*(C`( En \- A )\*(C'\fR, has a value less than one; the matrix \*(L"A\*(R" must
not have any zero value on its main diagonal; and the initial vector
"\f(CWx(0)\fR\*(L" must be \*(R"good enough\*(L", i.e., \*(R"close enough\*(L" to the solution
\&\*(R"\f(CWx(*)\fR".
.PP
(Remember school math: the first derivative of a straight line given by
"\f(CW\*(C`y = a * x + b\*(C'\fR\*(L" is \*(R"a"!)
.PP
The three methods expect a (quadratic!) matrix "\f(CW$matrix\fR\*(L" as their
first argument, a start vector \*(R"\f(CW$x0_vector\fR\*(L", a vector \*(R"\f(CW$b_vector\fR\*(L"
(which is the vector \*(R"b\*(L" in your equation system \*(R"\f(CW\*(C`A * x = b\*(C'\fR\*(L"), in the
case of the \*(R"Relaxation Method\*(L" (\*(R"\s-1RM\s0\*(L"), a real number \*(R"\f(CW$weight\fR\*(L" best
between zero and two, and finally an error limit (real number) \*(R"\f(CW$epsilon\fR".
.PP
(Note that the weight "\f(CW$weight\fR\*(L" used by the \*(R"Relaxation Method\*(L" (\*(R"\s-1RM\s0")
is \fB\s-1NOT\s0\fR checked to lie within any reasonable range!)
.PP
The three methods first test the first two of the three conditions
listed above and return an empty list if these conditions
are not fulfilled.
.PP
Therefore, you should always test their return value using some
code like:
.PP
.Vb 8
\& if ( $xn_vector = $A_matrix->solve_GSM($x0_vector,$b_vector,1E-12) )
\& {
\& # do something with the solution...
\& }
\& else
\& {
\& # do something with the fact that there is no solution...
\& }
.Ve
.PP
Otherwise, they iterate until \f(CW\*(C`abs( Phi(x) \- x ) < epsilon\*(C'\fR.
.PP
(Beware that theoretically, infinite loops might result if the starting
vector is too far \*(L"off\*(R" the solution! In practice, this shouldn't be
a problem. Anyway, you can always press <ctrl\-C> if you think that the
iteration takes too long!)
.PP
The difference between the three methods is the following:
.PP
In the \*(L"Global Step Method\*(R" (\*(L"\s-1GSM\s0\*(R"), the new vector "\f(CW\*(C`x(t+1)\*(C'\fR\*(L"
(called \*(R"y\*(L" here) is calculated from the vector \*(R"\f(CWx(t)\fR\*(L"
(called \*(R"x" here) according to the formula:
.PP
.Vb 5
\& y[i] =
\& ( b[i]
\& - ( a[i,1] x[1] + ... + a[i,i-1] x[i-1] +
\& a[i,i+1] x[i+1] + ... + a[i,n] x[n] )
\& ) / a[i,i]
.Ve
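.PP
To make the loop structure explicit, one step of this method could be coded
in plain Perl roughly as follows (a sketch using ordinary arrays "\f(CW@a\fR",
"\f(CW@b\fR", "\f(CW@x\fR" and "\f(CW@y\fR" with indices starting at 1, rather than
matrix objects):
.PP
.Vb 10
\& for ( $i = 1; $i <= $n; $i++ )
\& {
\&     $sum = 0;
\&     for ( $j = 1; $j <= $n; $j++ )
\&     {
\&         next if ($j == $i); # skip the diagonal element
\&         $sum += $a[$i][$j] * $x[$j];
\&     }
\&     $y[$i] = ( $b[$i] - $sum ) / $a[$i][$i];
\& }
.Ve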
.PP
In the \*(L"Single Step Method\*(R" (\*(L"\s-1SSM\s0\*(R"), the components of the vector
"\f(CW\*(C`x(t+1)\*(C'\fR" which have already been calculated are used to calculate
the remaining components, i.e.
.PP
.Vb 5
\& y[i] =
\& ( b[i]
\& - ( a[i,1] y[1] + ... + a[i,i-1] y[i-1] + # note the "y[]"!
\& a[i,i+1] x[i+1] + ... + a[i,n] x[n] ) # note the "x[]"!
\& ) / a[i,i]
.Ve
.PP
In the \*(L"Relaxation method\*(R" (\*(L"\s-1RM\s0\*(R"), the components of the vector
"\f(CW\*(C`x(t+1)\*(C'\fR\*(L" are calculated by \*(R"mixing\*(L" old and new value (like
cold and hot water), and the weight \*(R"\f(CW$weight\fR\*(L" determines the
\&\*(R"aperture\*(L" of both the \*(R"hot water tap\*(L" as well as of the \*(R"cold
water tap", according to the formula:
.PP
.Vb 6
\& y[i] =
\& ( b[i]
\& - ( a[i,1] y[1] + ... + a[i,i-1] y[i-1] + # note the "y[]"!
\& a[i,i+1] x[i+1] + ... + a[i,n] x[n] ) # note the "x[]"!
\& ) / a[i,i]
\& y[i] = weight * y[i] + (1 - weight) * x[i]
.Ve
.PP
Note that the weight "\f(CW$weight\fR" should be greater than zero and
less than two (!).
.PP
The three methods are supposed to be of different efficiency.
Experiment!
.PP
Remember that in most cases, it is probably advantageous to first
\&\*(L"\fInormalize()\fR\*(R" your equation system prior to solving it!
.SH "OVERLOADED OPERATORS"
.IX Header "OVERLOADED OPERATORS"
.Sh "\s-1SYNOPSIS\s0"
.IX Subsection "SYNOPSIS"
.IP "\(bu" 2
Unary operators:
.Sp
"\f(CW\*(C`\-\*(C'\fR\*(L", \*(R"\f(CW\*(C`~\*(C'\fR\*(L", \*(R"\f(CW\*(C`abs\*(C'\fR", \f(CW\*(C`test\*(C'\fR, "\f(CW\*(C`!\*(C'\fR", '\f(CW""\fR'
.IP "\(bu" 2
Binary (arithmetic) operators:
.Sp
"\f(CW\*(C`+\*(C'\fR\*(L", \*(R"\f(CW\*(C`\-\*(C'\fR\*(L", \*(R"\f(CW\*(C`*\*(C'\fR\*(L", \*(R"\f(CW\*(C`**\*(C'\fR\*(L",
\&\*(R"\f(CW\*(C`+=\*(C'\fR\*(L", \*(R"\f(CW\*(C`\-=\*(C'\fR\*(L", \*(R"\f(CW\*(C`*=\*(C'\fR\*(L", \*(R"\f(CW\*(C`**=\*(C'\fR"
.IP "\(bu" 2
Binary (relational) operators:
.Sp
"\f(CW\*(C`==\*(C'\fR\*(L", \*(R"\f(CW\*(C`!=\*(C'\fR\*(L", \*(R"\f(CW\*(C`<\*(C'\fR\*(L", \*(R"\f(CW\*(C`<=\*(C'\fR\*(L", \*(R"\f(CW\*(C`>\*(C'\fR\*(L", \*(R"\f(CW\*(C`>=\*(C'\fR"
.Sp
"\f(CW\*(C`eq\*(C'\fR\*(L", \*(R"\f(CW\*(C`ne\*(C'\fR\*(L", \*(R"\f(CW\*(C`lt\*(C'\fR\*(L", \*(R"\f(CW\*(C`le\*(C'\fR\*(L", \*(R"\f(CW\*(C`gt\*(C'\fR\*(L", \*(R"\f(CW\*(C`ge\*(C'\fR"
.Sp
Note that the latter ("\f(CW\*(C`eq\*(C'\fR\*(L", \*(R"\f(CW\*(C`ne\*(C'\fR\*(L", ... ) are just synonyms
of the former (\*(R"\f(CW\*(C`==\*(C'\fR\*(L", \*(R"\f(CW\*(C`!=\*(C'\fR", ... ), defined for convenience
only.
.Sh "\s-1DESCRIPTION\s0"
.IX Subsection "DESCRIPTION"
.IP "'\-'" 5
Unary minus
.Sp
Returns the negative of the given matrix, i.e., the matrix with
all elements multiplied by the factor \*(L"\-1\*(R".
.Sp
Example:
.Sp
.Vb 1
\& $matrix = -$matrix;
.Ve
.IP "'~'" 5
Transposition
.Sp
Returns the transpose of the given matrix.
.Sp
Examples:
.Sp
.Vb 2
\& $temp = ~$vector * $vector;
\& $length = sqrt( $temp->element(1,1) );
.Ve
.Sp
.Vb 1
\& if (~$matrix == $matrix) { # matrix is symmetric ... }
.Ve
.IP "abs" 5
.IX Item "abs"
Norm
.Sp
Returns the \*(L"one\*(R"\-Norm of the given matrix.
.Sp
Example:
.Sp
.Vb 1
\& $error = abs( $A * $x - $b );
.Ve
.IP "test" 5
.IX Item "test"
Boolean test
.Sp
Tests whether there is at least one non-zero element in the matrix.
.Sp
Example:
.Sp
.Vb 1
\& if ($xn_vector) { # result of iteration is not zero ... }
.Ve
.IP "'!'" 5
Negated boolean test
.Sp
Tests whether the matrix contains only zeros.
.Sp
Examples:
.Sp
.Vb 2
\& if (! $b_vector) { # homogeneous equation system ... }
\& else             { # heterogeneous equation system ... }
.Ve
.Sp
.Vb 1
\& unless ($x_vector) { # x_vector is the null-vector! }
.Ve
.ie n .IP "'""""""""'" 5
.el .IP "'``''``'''" 5
\&\*(L"Stringify\*(R" operator
.Sp
Converts the given matrix into a string.
.Sp
Uses scientific representation to keep precision loss to a minimum in case
you want to read this string back in again later with \*(L"\fInew_from_string()\fR\*(R".
.Sp
Uses a 13\-digit mantissa and a 20\-character field for each element so that
lines will wrap nicely on an 80\-column screen.
.Sp
Examples:
.Sp
.Vb 5
\& $matrix = Math::MatrixReal->new_from_string(<<"MATRIX");
\& [ 1 0 ]
\& [ 0 -1 ]
\& MATRIX
\& print "$matrix";
.Ve
.Sp
.Vb 2
\& [ 1.000000000000E+00 0.000000000000E+00 ]
\& [ 0.000000000000E+00 -1.000000000000E+00 ]
.Ve
.Sp
.Vb 3
\& $string = "$matrix";
\& $test = Math::MatrixReal->new_from_string($string);
\& if ($test == $matrix) { print ":-)\en"; } else { print ":-(\en"; }
.Ve
.IP "'+'" 5
Addition
.Sp
Returns the sum of the two given matrices.
.Sp
Examples:
.Sp
.Vb 1
\& $matrix_S = $matrix_A + $matrix_B;
.Ve
.Sp
.Vb 1
\& $matrix_A += $matrix_B;
.Ve
.IP "'\-'" 5
Subtraction
.Sp
Returns the difference of the two given matrices.
.Sp
Examples:
.Sp
.Vb 1
\& $matrix_D = $matrix_A - $matrix_B;
.Ve
.Sp
.Vb 1
\& $matrix_A -= $matrix_B;
.Ve
.Sp
Note that this is the same as:
.Sp
.Vb 1
\& $matrix_S = $matrix_A + -$matrix_B;
.Ve
.Sp
.Vb 1
\& $matrix_A += -$matrix_B;
.Ve
.Sp
(The latter are less efficient, though)
.IP "'*'" 5
Multiplication
.Sp
Returns the matrix product of the two given matrices or
the product of the given matrix and scalar factor.
.Sp
Examples:
.Sp
.Vb 1
\& $matrix_P = $matrix_A * $matrix_B;
.Ve
.Sp
.Vb 1
\& $matrix_A *= $matrix_B;
.Ve
.Sp
.Vb 1
\& $vector_b = $matrix_A * $vector_x;
.Ve
.Sp
.Vb 1
\& $matrix_B = -1 * $matrix_A;
.Ve
.Sp
.Vb 1
\& $matrix_B = $matrix_A * -1;
.Ve
.Sp
.Vb 1
\& $matrix_A *= -1;
.Ve
.IP "'**'" 5
Exponentiation
.Sp
Returns the matrix raised to an integer power. If 0 is passed,
the identity matrix is returned. If a negative integer is passed,
the inverse is computed (if it exists) and then raised to the absolute
value of the integer. The matrix must be quadratic.
.Sp
Examples:
.Sp
.Vb 1
\& $matrix2 = $matrix ** 2;
.Ve
.Sp
.Vb 1
\& $matrix **= 2;
.Ve
.Sp
.Vb 1
\& $inv2 = $matrix ** -2;
.Ve
.Sp
.Vb 1
\& $ident = $matrix ** 0;
.Ve
.IP "'=='" 5
Equality
.Sp
Tests two matrices for equality.
.Sp
Example:
.Sp
.Vb 1
\& if ( $A * $x == $b ) { print "EUREKA!\en"; }
.Ve
.Sp
Note that in most cases, due to numerical errors (due to the finite
precision of computer arithmetic), it is a bad idea to compare two
matrices or vectors this way.
.Sp
It is better to take the norm of the difference of the two matrices you
want to compare and to compare that norm against a small number, like this:
.Sp
.Vb 1
\& if ( abs( $A * $x - $b ) < 1E-12 ) { print "BINGO!\en"; }
.Ve
.IP "'!='" 5
Inequality
.Sp
Tests two matrices for inequality.
.Sp
Example:
.Sp
.Vb 1
\& while ($x0_vector != $xn_vector) { # proceed with iteration ... }
.Ve
.Sp
(Stops when the iteration becomes stationary)
.Sp
Note that (just like with the '==' operator), it is usually a bad idea
to compare matrices or vectors this way. Compare the norm of the difference
of the two matrices with a small number instead.
.IP "'<'" 5
Less than
.Sp
Examples:
.Sp
.Vb 1
\& if ( $matrix1 < $matrix2 ) { # ... }
.Ve
.Sp
.Vb 1
\& if ( $vector < $epsilon ) { # ... }
.Ve
.Sp
.Vb 1
\& if ( 1E-12 < $vector ) { # ... }
.Ve
.Sp
.Vb 1
\& if ( $A * $x - $b < 1E-12 ) { # ... }
.Ve
.Sp
These are just shortcuts for saying:
.Sp
.Vb 1
\& if ( abs($matrix1) < abs($matrix2) ) { # ... }
.Ve
.Sp
.Vb 1
\& if ( abs($vector) < abs($epsilon) ) { # ... }
.Ve
.Sp
.Vb 1
\& if ( abs(1E-12) < abs($vector) ) { # ... }
.Ve
.Sp
.Vb 1
\& if ( abs( $A * $x - $b ) < abs(1E-12) ) { # ... }
.Ve
.Sp
Uses the \*(L"one\*(R"\-norm for matrices and Perl's built-in \*(L"\fIabs()\fR\*(R" for scalars.
.IP "'<='" 5
Less than or equal
.Sp
As with the '<' operator, this is just a shortcut for the same expression
with \*(L"\fIabs()\fR\*(R" around all arguments.
.Sp
Example:
.Sp
.Vb 1
\& if ( $A * $x - $b <= 1E-12 ) { # ... }
.Ve
.Sp
which in fact is the same as:
.Sp
.Vb 1
\& if ( abs( $A * $x - $b ) <= abs(1E-12) ) { # ... }
.Ve
.Sp
Uses the \*(L"one\*(R"\-norm for matrices and Perl's built-in \*(L"\fIabs()\fR\*(R" for scalars.
.IP "'>'" 5
Greater than
.Sp
As with the '<' and '<=' operators, this
.Sp
.Vb 1
\& if ( $xn - $x0 > 1E-12 ) { # ... }
.Ve
.Sp
is just a shortcut for:
.Sp
.Vb 1
\& if ( abs( $xn - $x0 ) > abs(1E-12) ) { # ... }
.Ve
.Sp
Uses the \*(L"one\*(R"\-norm for matrices and Perl's built-in \*(L"\fIabs()\fR\*(R" for scalars.
.IP "'>='" 5
Greater than or equal
.Sp
As with the '<', '<=' and '>' operators, the following
.Sp
.Vb 1
\& if ( $LR >= $A ) { # ... }
.Ve
.Sp
is simply a shortcut for:
.Sp
.Vb 1
\& if ( abs($LR) >= abs($A) ) { # ... }
.Ve
.Sp
Uses the \*(L"one\*(R"\-norm for matrices and Perl's built-in \*(L"\fIabs()\fR\*(R" for scalars.
.SH "SEE ALSO"
.IX Header "SEE ALSO"
Math::VectorReal, Math::PARI, Math::MatrixBool,
DFA::Kleene, Math::Kleene,
Set::IntegerRange, Set::IntegerFast.
.SH "VERSION"
.IX Header "VERSION"
This man page documents Math::MatrixReal version 1.9.
.PP
The latest version can always be found at
http://leto.net/code/Math\-MatrixReal/
.SH "AUTHORS"
.IX Header "AUTHORS"
Steffen Beyer <sb@engelschall.com>, Rodolphe Ortalo <ortalo@laas.fr>,
Jonathan Leto <jonathan@leto.net>.
.PP
Currently maintained by Jonathan Leto; send all bugs/patches to me.
.SH "CREDITS"
.IX Header "CREDITS"
Many thanks to Prof. Pahlings for stoking the fire of my enthusiasm for
Algebra and Linear Algebra at the university (\s-1RWTH\s0 Aachen, Germany), and
to Prof. Esser and his assistant, Mr. Jarausch, for their fascinating
lectures in Numerical Analysis!
.SH "COPYRIGHT"
.IX Header "COPYRIGHT"
Copyright (c) 1996\-2002 by Steffen Beyer, Rodolphe Ortalo, Jonathan Leto.
All rights reserved.
.SH "LICENSE AGREEMENT"
.IX Header "LICENSE AGREEMENT"
This package is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.