=head1 NAME

perluniintro - Perl Unicode introduction

=head1 DESCRIPTION

This document gives a general idea of Unicode and how to use Unicode
in Perl.

=head2 Unicode

Unicode is a character set standard which plans to codify all of the
writing systems of the world, plus many other symbols.

Unicode and ISO/IEC 10646 are coordinated standards that provide code
points for characters in almost all modern character set standards,
covering more than 30 writing systems and hundreds of languages,
including all commercially-important modern languages. All characters
in the largest Chinese, Japanese, and Korean dictionaries are also
encoded. The standards will eventually cover almost all characters in
more than 250 writing systems and thousands of languages.

A Unicode I<character> is an abstract entity. It is not bound to any
particular integer width, especially not to the C language C<char>.
Unicode is language-neutral and display-neutral: it does not encode the
language of the text and it does not define fonts or other graphical
layout details. Unicode operates on characters and on text built from
those characters.

Unicode defines characters like C<LATIN CAPITAL LETTER A> or C<GREEK
SMALL LETTER ALPHA> and unique numbers for the characters, in this
case 0x0041 and 0x03B1, respectively. These unique numbers are called
I<code points>.
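
As a quick illustration (using the C<ord()> function discussed later in
this document), you can recover a character's code point in Perl and
print it in the C<U+> notation:

```perl
# ord() returns the code point of a character;
# sprintf's %04X renders it in the Unicode U+ notation
printf "U+%04X\n", ord("A");        # prints U+0041
printf "U+%04X\n", ord("\x{3B1}");  # GREEK SMALL LETTER ALPHA: U+03B1
```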

The Unicode standard prefers using hexadecimal notation for the code
points. If numbers like C<0x0041> are unfamiliar to
you, take a peek at a later section, L</"Hexadecimal Notation">.
The Unicode standard uses the notation C<U+0041 LATIN CAPITAL LETTER A>,
to give the hexadecimal code point and the normative name of
the character.

Unicode also defines various I<properties> for the characters, like
"uppercase" or "lowercase", "decimal digit", or "punctuation";
these properties are independent of the names of the characters.
Furthermore, various operations on the characters like uppercasing,
lowercasing, and collating (sorting) are defined.

A Unicode character consists either of a single code point, or a
I<base character> (like C<LATIN CAPITAL LETTER A>), followed by one or
more I<modifiers> (like C<COMBINING ACUTE ACCENT>). This sequence of
base character and modifiers is called a I<combining character
sequence>.

Whether to call these combining character sequences "characters"
depends on your point of view. If you are a programmer, you probably
would tend towards seeing each element in the sequences as one unit,
or "character". The whole sequence could be seen as one "character",
however, from the user's point of view, since that's probably what it
looks like in the context of the user's language.

With this "whole sequence" view of characters, the total number of
characters is open-ended. But in the programmer's "one unit is one
character" point of view, the concept of "characters" is more
deterministic. In this document, we take that second point of view:
one "character" is one Unicode code point, be it a base character or
a combining character.

For some combinations, there are I<precomposed> characters.
C<LATIN CAPITAL LETTER A WITH ACUTE>, for example, is defined as
a single code point. These precomposed characters are, however,
only available for some combinations, and are mainly
meant to support round-trip conversions between Unicode and legacy
standards (like the ISO 8859). In the general case, the composing
method is more extensible. To support conversion between
different compositions of the characters, various I<normalization
forms> to standardize representations are also defined.
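
For instance, the L<Unicode::Normalize> module shipped with Perl 5.8.0
implements these normalization forms; a minimal sketch using its
C<NFC()> (compose) and C<NFD()> (decompose) functions:

```perl
use Unicode::Normalize qw(NFC NFD);

# a base character plus a combining accent (two code points) ...
my $decomposed = "A\x{301}";  # A + COMBINING ACUTE ACCENT

# ... composes into the single precomposed code point U+00C1
my $composed = NFC($decomposed);
printf "U+%04X\n", ord($composed);   # prints U+00C1

# NFD() goes the other way, back to two code points
print length(NFD($composed)), "\n";  # prints 2
```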

Because of backward compatibility with legacy encodings, the "a unique
number for every character" idea breaks down a bit: instead, there is
"at least one number for every character". The same character could
be represented differently in several legacy encodings. The
converse is also not true: some code points do not have an assigned
character. Firstly, there are unallocated code points within
otherwise used blocks. Secondly, there are special Unicode control
characters that do not represent true characters.

A common myth about Unicode is that it is "16-bit", that is, that it
is limited to C<0x10000> (or 65536) characters from C<0x0000> to
C<0xFFFF>. B<This is untrue.> Since Unicode 2.0, Unicode
has been defined all the way up to 21 bits (C<0x10FFFF>), and since
Unicode 3.1, characters have been defined beyond C<0xFFFF>. The first
C<0x10000> characters are called the I<Plane 0>, or the I<Basic
Multilingual Plane> (BMP). With Unicode 3.1, 17 planes in all are
defined--but they are nowhere near full of defined characters, yet.

Another myth is that the 256-character blocks have something to
do with languages--that each block would define the characters used
by a language or a set of languages. B<This is also untrue.>
The division into blocks exists, but it is almost completely
accidental--an artifact of how the characters have been and
still are allocated. Instead, there is a concept called I<scripts>,
which is more useful: there is C<Latin> script, C<Greek> script, and
so on. Scripts usually span varied parts of several blocks.
For further information see L<Unicode::UCD>.

The Unicode code points are just abstract numbers. To input and
output these abstract numbers, the numbers must be I<encoded> somehow.
Unicode defines several I<character encoding forms>, of which I<UTF-8>
is perhaps the most popular. UTF-8 is a variable length encoding that
encodes Unicode characters as 1 to 6 bytes (only 4 with the currently
defined characters). Other encodings include UTF-16 and UTF-32 and their
big- and little-endian variants (UTF-8 is byte-order independent).
ISO/IEC 10646 defines the UCS-2 and UCS-4 encoding forms.
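
The variable-length nature of UTF-8 is easy to observe with the
C<Encode> module (described later in this document); each of the
following is a single character, yet its UTF-8 form takes one to four
bytes:

```perl
use Encode 'encode_utf8';

# single characters whose UTF-8 encodings are 1, 2, 3, and 4 bytes long
for my $cp (0x41, 0x3B1, 0x263A, 0x10400) {
    printf "U+%04X: %d byte(s)\n", $cp, length(encode_utf8(chr($cp)));
}
```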

For more information about encodings--for instance, to learn what
I<surrogates> and I<byte order marks> (BOMs) are--see L<perlunicode>.

=head2 Perl's Unicode Support

Starting from Perl 5.6.0, Perl has had the capacity to handle Unicode
natively. Perl 5.8.0, however, is the first recommended release for
serious Unicode work. The maintenance release 5.6.1 fixed many of the
problems of the initial Unicode implementation, but for example
regular expressions still do not work with Unicode in 5.6.1.

B<Starting from Perl 5.8.0, the use of C<use utf8> is no longer
necessary.> In earlier releases the C<utf8> pragma was used to declare
that operations in the current block or file would be Unicode-aware.
This model was found to be wrong, or at least clumsy: the "Unicodeness"
is now carried with the data, instead of being attached to the
operations. Only one case remains where an explicit C<use utf8> is
needed: if your Perl script itself is encoded in UTF-8, you can use
UTF-8 in your identifier names, and in string and regular expression
literals, by saying C<use utf8>. This is not the default because
scripts with legacy 8-bit data in them would break. See L<utf8>.

=head2 Perl's Unicode Model

Perl supports both pre-5.6 strings of eight-bit native bytes, and
strings of Unicode characters. The principle is that Perl tries to
keep its data as eight-bit bytes for as long as possible, but as soon
as Unicodeness cannot be avoided, the data is transparently upgraded
to Unicode.

Internally, Perl currently uses either the native eight-bit character
set of the platform (for example Latin-1) or UTF-8 to encode Unicode
strings. Specifically, if all code points in the string are C<0xFF>
or less, Perl uses the native eight-bit character set. Otherwise, it
uses UTF-8.

A user of Perl does not normally need to know nor care how Perl
happens to encode its internal strings, but it becomes relevant when
outputting Unicode strings to a stream without a PerlIO layer -- one with
the "default" encoding. In such a case, the raw bytes used internally
(the native character set or UTF-8, as appropriate for each string)
will be used, and a "Wide character" warning will be issued if those
strings contain a character beyond 0x00FF.

For example,

    perl -e 'print "\x{DF}\n", "\x{0100}\x{DF}\n"'

produces a fairly useless mixture of native bytes and UTF-8, as well
as a warning:

    Wide character in print at ...

To output UTF-8, use the C<:utf8> output layer. Prepending

    binmode(STDOUT, ":utf8");

to this sample program ensures that the output is completely UTF-8,
and removes the program's warning.

If your locale environment variables (C<LANGUAGE>, C<LC_ALL>,
C<LC_CTYPE>, C<LANG>) contain the strings 'UTF-8' or 'UTF8',
regardless of case, then the default encoding of your STDIN, STDOUT,
and STDERR and of B<any subsequent file open>, is UTF-8. Note that
this means that Perl expects other software to work, too: if Perl has
been led to believe that STDIN should be UTF-8, but then STDIN coming
in from another command is not UTF-8, Perl will complain about the
malformed UTF-8.

All features that combine Unicode and I/O also require using the new
PerlIO feature. Almost all Perl 5.8 platforms do use PerlIO, though:
you can see whether yours is by running "perl -V" and looking for
C<useperlio=define>.

=head2 Unicode and EBCDIC

Perl 5.8.0 also supports Unicode on EBCDIC platforms. There,
Unicode support is somewhat more complex to implement since
additional conversions are needed at every step. Some problems
remain, see L<perlebcdic> for details.

In any case, the Unicode support on EBCDIC platforms is better than
in the 5.6 series, which didn't work much at all for EBCDIC platforms.
On EBCDIC platforms, the internal Unicode encoding form is UTF-EBCDIC
instead of UTF-8. The difference is that UTF-8 is "ASCII-safe" in
that ASCII characters encode to UTF-8 as-is, while UTF-EBCDIC is
"EBCDIC-safe".

=head2 Creating Unicode

To create Unicode characters in literals for code points above C<0xFF>,
use the C<\x{...}> notation in double-quoted strings:

    my $smiley = "\x{263a}";

Similarly, it can be used in regular expression literals:

    $smiley =~ /\x{263a}/;

At run-time you can use C<chr()>:

    my $hebrew_alef = chr(0x05d0);

See L</"Further Resources"> for how to find all these numeric codes.

Naturally, C<ord()> will do the reverse: it turns a character into
a code point.

Note that C<\x..> (no C<{}> and only two hexadecimal digits), C<\x{...}>,
and C<chr(...)> for arguments less than C<0x100> (decimal 256)
generate an eight-bit character for backward compatibility with older
Perls. For arguments of C<0x100> or more, Unicode characters are
always produced. If you want to force the production of Unicode
characters regardless of the numeric value, use C<pack("U", ...)>
instead of C<\x..>, C<\x{...}>, or C<chr()>.

You can also use the C<charnames> pragma to invoke characters
by name in double-quoted strings:

    use charnames ':full';
    my $arabic_alef = "\N{ARABIC LETTER ALEF}";

And, as mentioned above, you can also C<pack()> numbers into Unicode
characters:

    my $georgian_an = pack("U", 0x10a0);

Note that both C<\x{...}> and C<\N{...}> are compile-time string
constants: you cannot use variables in them. If you want similar
run-time functionality, use C<chr()> and C<charnames::vianame()>.
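
For example, the run-time counterpart of C<\N{...}> looks like this;
C<charnames::vianame()> returns the code point for a character name
(see L<charnames>):

```perl
use charnames ':full';

# the name is an ordinary string here, so it could come from a variable
my $code_point = charnames::vianame("GREEK SMALL LETTER ALPHA");
my $alpha      = chr($code_point);
printf "U+%04X\n", $code_point;  # prints U+03B1
```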

Also note that if all the code points for pack "U" are below 0x100,
bytes will be generated, just as if you were using C<chr()>.

    my $bytes = pack("U*", 0x80, 0xFF);

If you want to force the result to Unicode characters, use the special
C<"U0"> prefix. It consumes no arguments but forces the result to be
in Unicode characters, instead of bytes.

    my $chars = pack("U0U*", 0x80, 0xFF);

=head2 Handling Unicode

Handling Unicode is for the most part transparent: just use the
strings as usual. Functions like C<index()>, C<length()>, and
C<substr()> will work on the Unicode characters; regular expressions
will work on the Unicode characters (see L<perlunicode> and L<perlretut>).

Note that Perl considers combining character sequences to be
characters, so for example

    use charnames ':full';
    print length("\N{LATIN CAPITAL LETTER A}\N{COMBINING ACUTE ACCENT}"), "\n";

will print 2, not 1. The only exception is that regular expressions
have C<\X> for matching a combining character sequence.

Life is not quite so transparent, however, when working with legacy
encodings, I/O, and certain special cases:

=head2 Legacy Encodings

When you combine legacy data and Unicode the legacy data needs
to be upgraded to Unicode. Normally ISO 8859-1 (or EBCDIC, if
applicable) is assumed. You can override this assumption by
using the C<encoding> pragma, for example

    use encoding 'latin2'; # ISO 8859-2

in which case literals (string or regular expressions), C<chr()>,
and C<ord()> in your whole script are assumed to produce Unicode
characters from ISO 8859-2 code points. Note that the matching for
encoding names is forgiving: instead of C<latin2> you could have
said C<Latin 2>, or C<iso8859-2>, or other variations. With just

    use encoding;

the environment variable C<PERL_ENCODING> will be consulted.
If that variable isn't set, the encoding pragma will fail.

The C<Encode> module knows about many encodings and has interfaces
for doing conversions between those encodings:

    use Encode 'from_to';
    from_to($data, "iso-8859-3", "utf-8"); # from legacy to utf-8

=head2 Unicode I/O

Normally, writing out Unicode data

    print FH $some_string_with_unicode, "\n";

produces raw bytes that Perl happens to use to internally encode the
Unicode string. Perl's internal encoding depends on the system as
well as what characters happen to be in the string at the time. If
any of the characters are at code points C<0x100> or above, you will get
a warning. To ensure that the output is explicitly rendered in the
encoding you desire--and to avoid the warning--open the stream with
the desired encoding. Some examples:

    open FH, ">:utf8", "file";

    open FH, ">:encoding(ucs2)", "file";
    open FH, ">:encoding(UTF-8)", "file";
    open FH, ">:encoding(shift_jis)", "file";

and on already open streams, use C<binmode()>:

    binmode(STDOUT, ":utf8");

    binmode(STDOUT, ":encoding(ucs2)");
    binmode(STDOUT, ":encoding(UTF-8)");
    binmode(STDOUT, ":encoding(shift_jis)");

The matching of encoding names is loose: case does not matter, and
many encodings have several aliases. Note that the C<:utf8> layer
must always be specified exactly like that; it is I<not> subject to
the loose matching of encoding names.
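
If you are unsure what a name resolves to, C<Encode::resolve_alias()>
(see L<Encode>) returns the canonical name for a known alias, or false
if the name is unknown:

```perl
use Encode;

# several spellings of Latin-1 resolve to the same canonical name
for my $name ("latin1", "LATIN1", "iso-8859-1") {
    print Encode::resolve_alias($name), "\n";  # prints iso-8859-1
}
```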

See L<PerlIO> for the C<:utf8> layer, L<PerlIO::encoding> and
L<Encode::PerlIO> for the C<:encoding()> layer, and
L<Encode::Supported> for many encodings supported by the C<Encode>
module.

Reading in a file that you know happens to be encoded in one of the
Unicode or legacy encodings does not magically turn the data into
Unicode in Perl's eyes. To do that, specify the appropriate
layer when opening files:

    open(my $fh,'<:utf8', 'anything');
    my $line_of_unicode = <$fh>;

    open(my $fh,'<:encoding(Big5)', 'anything');
    my $line_of_unicode = <$fh>;

The I/O layers can also be specified more flexibly with
the C<open> pragma. See L<open>, or look at the following example.

    use open ':utf8'; # input and output default layer will be UTF-8
    open X, ">file";
    print X chr(0x100), "\n";
    close X;
    open Y, "<file";
    printf "%#x\n", ord(<Y>); # this should print 0x100
    close Y;

With the C<open> pragma you can use the C<:locale> layer

    $ENV{LC_ALL} = $ENV{LANG} = 'ru_RU.KOI8-R';
    # the :locale will probe the locale environment variables like LC_ALL
    use open OUT => ':locale'; # russki parusski
    open(O, ">koi8");
    print O chr(0x430); # Unicode CYRILLIC SMALL LETTER A = KOI8-R 0xc1
    close O;
    open(I, "<koi8");
    printf "%#x\n", ord(<I>); # this should print 0xc1
    close I;

or you can also use the C<':encoding(...)'> layer

    open(my $epic,'<:encoding(iso-8859-7)','iliad.greek');
    my $line_of_unicode = <$epic>;

These methods install a transparent filter on the I/O stream that
converts data from the specified encoding when it is read in from the
stream. The result is always Unicode.

The L<open> pragma affects all the C<open()> calls after the pragma by
setting default layers. If you want to affect only certain
streams, use explicit layers directly in the C<open()> call.

You can switch encodings on an already opened stream by using
C<binmode()>; see L<perlfunc/binmode>.

The C<:locale> does not currently (as of Perl 5.8.0) work with
C<open()> and C<binmode()>, only with the C<open> pragma. The
C<:utf8> and C<:encoding(...)> methods do work with all of C<open()>,
C<binmode()>, and the C<open> pragma.

Similarly, you may use these I/O layers on output streams to
automatically convert Unicode to the specified encoding when it is
written to the stream. For example, the following snippet copies the
contents of the file "text.jis" (encoded as ISO-2022-JP, aka JIS) to
the file "text.utf8", encoded as UTF-8:

    open(my $nihongo, '<:encoding(iso2022-jp)', 'text.jis');
    open(my $unicode, '>:utf8', 'text.utf8');
    while (<$nihongo>) { print $unicode }

The naming of encodings, both by the C<open()> and by the C<open>
pragma, is similar to the C<encoding> pragma in that it allows for
flexible names: C<koi8-r> and C<KOI8R> will both be understood.

Common encodings recognized by ISO, MIME, IANA, and various other
standardisation organisations are recognised; for a more detailed
list see L<Encode::Supported>.

C<read()> reads characters and returns the number of characters.
C<seek()> and C<tell()> operate on byte counts, as do C<sysread()>
and C<sysseek()>.
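
The character/byte distinction shows up as soon as a multi-byte
character is read. The sketch below writes one three-byte character to
a scratch file (the filename is made up for the example) and reads it
back:

```perl
my $file = "tell_demo.tmp";  # scratch file for the example

open my $out, ">:utf8", $file or die "open: $!";
print $out "\x{263A}";       # WHITE SMILING FACE: 3 bytes of UTF-8
close $out;

open my $in, "<:utf8", $file or die "open: $!";
read($in, my $char, 1);      # read() counts *characters* ...
print length($char), "\n";   # prints 1
print tell($in),     "\n";   # ... but tell() reports *bytes*: 3
close $in;
unlink $file;
```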

Notice that, because no conversion happens upon input unless a
default layer has been set, it is easy to mistakenly write code that
keeps on expanding a file by repeatedly encoding the data:

    # BAD CODE WARNING
    open F, "file";
    local $/; ## read in the whole file of 8-bit characters
    $t = <F>;
    close F;
    open F, ">:utf8", "file";
    print F $t; ## convert to UTF-8 on output
    close F;

If you run this code twice, the contents of the F<file> will be twice
UTF-8 encoded. A C<use open ':utf8'> would have avoided the bug, as
would explicitly opening the F<file> for input as UTF-8, too.
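
A version of the same copy that is safe to run repeatedly opens both
directions with explicit layers, so input is decoded and output
re-encoded, and the file's bytes end up unchanged:

```perl
# read the file as UTF-8 *characters* ...
open F, "<:utf8", "file" or die "open: $!";
local $/;      ## read in the whole file
my $t = <F>;
close F;

# ... and write the same characters back out as UTF-8
open F, ">:utf8", "file" or die "open: $!";
print F $t;
close F;
```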

B<NOTE>: the C<:utf8> and C<:encoding> features work only if your
Perl has been built with the new PerlIO feature.

=head2 Displaying Unicode As Text

Sometimes you might want to display Perl scalars containing Unicode as
simple ASCII (or EBCDIC) text. The following subroutine converts
its argument so that Unicode characters with code points greater than
255 are displayed as C<\x{...}>, control characters (like C<\n>) are
displayed as C<\x..>, and the rest of the characters as themselves:

    sub nice_string {
        join("",
             map { $_ > 255 ?                   # if wide character...
                   sprintf("\\x{%04X}", $_) :   # \x{...}
                   chr($_) =~ /[[:cntrl:]]/ ?   # else if control character ...
                   sprintf("\\x%02X", $_) :     # \x..
                   chr($_)                      # else as themselves
             } unpack("U*", $_[0]));            # unpack Unicode characters
    }

For example,

    nice_string("foo\x{100}bar\n")

returns:

    "foo\x{0100}bar\x0A"

=head2 Special Cases

=over 4

=item *

Bit Complement Operator ~ And vec()

The bit complement operator C<~> may produce surprising results if
used on strings containing characters with ordinal values above
255. In such a case, the results are consistent with the internal
encoding of the characters, but not with much else. So don't do
that. Similarly for C<vec()>: you will be operating on the
internally-encoded bit patterns of the Unicode characters, not on
the code point values, which is very probably not what you want.

=item *

Peeking At Perl's Internal Encoding

Normal users of Perl should never care how Perl encodes any particular
Unicode string (because the normal ways to get at the contents of a
string with Unicode--via input and output--should always be via
explicitly-defined I/O layers). But if you must, there are two
ways of looking behind the scenes.

One way of peeking inside the internal encoding of Unicode characters
is to use C<unpack("C*", ...)> to get the bytes or C<unpack("H*", ...)>
to display the bytes:

    # this prints c480, the UTF-8 bytes 0xc4 0x80
    print unpack("H*", pack("U", 0x100)), "\n";

Yet another way would be to use the Devel::Peek module:

    perl -MDevel::Peek -e 'Dump(chr(0x100))'

That shows the UTF8 flag in FLAGS and both the UTF-8 bytes
and Unicode characters in C<PV>. See also later in this document
the discussion about the C<is_utf8> function of the C<Encode> module.

=back

=head2 Advanced Topics

=over 4

=item *

String Equivalence

The question of string equivalence turns somewhat complicated
in Unicode: what do you mean by "equal"?

(Is C<LATIN CAPITAL LETTER A WITH ACUTE> equal to
C<LATIN CAPITAL LETTER A>?)

The short answer is that by default Perl compares equivalence (C<eq>,
C<ne>) based only on code points of the characters. In the above
case, the answer is no (because 0x00C1 != 0x0041). But sometimes, any
CAPITAL LETTER As should be considered equal, or even As of any case.

The long answer is that you need to consider character normalization
and casing issues: see L<Unicode::Normalize>, Unicode Technical
Reports #15 and #21, I<Unicode Normalization Forms> and I<Case
Mappings>, http://www.unicode.org/unicode/reports/tr15/ and
http://www.unicode.org/unicode/reports/tr21/

As of Perl 5.8.0, the "Full" case-folding of I<Case
Mappings/SpecialCasing> is implemented.
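
For instance, to treat the precomposed and the decomposed spelling of
the same letter as equal, normalize both sides first; a sketch with
C<NFD()> from L<Unicode::Normalize>:

```perl
use Unicode::Normalize 'NFD';

my $precomposed = "\x{C1}";    # LATIN CAPITAL LETTER A WITH ACUTE
my $decomposed  = "A\x{301}";  # A + COMBINING ACUTE ACCENT

# eq compares code points, so these differ ...
print $precomposed eq $decomposed ? "eq" : "ne", "\n";  # prints ne

# ... but they are equal after normalizing both sides
print NFD($precomposed) eq NFD($decomposed) ? "eq" : "ne", "\n";  # prints eq
```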

=item *

String Collation

People like to see their strings nicely sorted--or as Unicode
parlance goes, collated. But again, what do you mean by collate?

(Does C<LATIN CAPITAL LETTER A WITH ACUTE> come before or after
C<LATIN CAPITAL LETTER A WITH GRAVE>?)

The short answer is that by default, Perl compares strings (C<lt>,
C<le>, C<cmp>, C<ge>, C<gt>) based only on the code points of the
characters. In the above case, the answer is "after", since
C<0x00C1> > C<0x00C0>.

The long answer is that "it depends", and a good answer cannot be
given without knowing (at the very least) the language context.
See L<Unicode::Collate>, and I<Unicode Collation Algorithm>
http://www.unicode.org/unicode/reports/tr10/
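
A minimal L<Unicode::Collate> sketch; the default collation table
orders an accented letter next to its base letter, where plain
C<sort> (code point order) would place it after C<b>:

```perl
use Unicode::Collate;

my $collator = Unicode::Collate->new();

my @words = ("b", "\x{E1}", "a");  # "\x{E1}" is LATIN SMALL LETTER A WITH ACUTE

# code point order puts "\x{E1}" (0x00E1) after "b" (0x0062)
print join(" ", sort @words), "\n";

# the Unicode Collation Algorithm groups it with "a"
print join(" ", $collator->sort(@words)), "\n";
```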

=back

=head2 Miscellaneous

=over 4

=item *

Character Ranges and Classes

Character ranges in regular expression character classes (C</[a-z]/>)
and in the C<tr///> (also known as C<y///>) operator are not magically
Unicode-aware. What this means is that C<[A-Za-z]> will not magically
start to mean "all alphabetic letters" (not that it means that even
for 8-bit characters; for those, you should use C</[[:alpha:]]/>).

For specifying character classes like that in regular expressions,
you can use the various Unicode properties--C<\pL>, or perhaps
C<\p{Alphabetic}>, in this particular case. You can use Unicode
code points as the end points of character ranges, but there is no
magic associated with specifying a certain range. For further
information--there are dozens of Unicode character classes--see
L<perlunicode>.
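
For example, C<[A-Za-z]> rejects a Greek letter that the Unicode
letter property C<\pL> accepts:

```perl
my $alpha = "\x{3B1}";  # GREEK SMALL LETTER ALPHA

# [A-Za-z] knows only the ASCII letters ...
print $alpha =~ /^[A-Za-z]$/ ? "match" : "no match", "\n";  # no match

# ... while \pL matches any Unicode letter
print $alpha =~ /^\pL$/      ? "match" : "no match", "\n";  # match
```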

=item *

String-To-Number Conversions

Unicode does define several other decimal--and numeric--characters
besides the familiar 0 to 9, such as the Arabic and Indic digits.
Perl does not support string-to-number conversion for digits other
than ASCII 0 to 9 (and ASCII a to f for hexadecimal).
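
So a string of, for example, ARABIC-INDIC digits does not numify by
itself; you have to map the digits to ASCII first. A sketch, relying
on the ARABIC-INDIC digits being contiguous from U+0660 (ARABIC-INDIC
DIGIT ZERO):

```perl
no warnings 'numeric';  # silence the "isn't numeric" warning below

my $arabic = "\x{661}\x{662}\x{663}";  # ARABIC-INDIC DIGITS ONE, TWO, THREE
print $arabic + 0, "\n";               # prints 0: not treated as a number

# translate each digit to its ASCII counterpart before converting
(my $ascii = $arabic) =~ s/([\x{660}-\x{669}])/ord($1) - 0x660/ge;
print $ascii + 0, "\n";                # prints 123
```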

=back

=head2 Questions With Answers

=over 4

=item *

Will My Old Scripts Break?

Very probably not. Unless you are generating Unicode characters
somehow, old behaviour should be preserved. About the only behaviour
that has changed and which could start generating Unicode is the old
behaviour of C<chr()> where supplying an argument more than 255
produced a character modulo 255. C<chr(300)>, for example, was equal
to C<chr(45)> or "-" (in ASCII), now it is LATIN CAPITAL LETTER I WITH
BREVE.

=item *

How Do I Make My Scripts Work With Unicode?

Very little work should be needed since nothing changes until you
generate Unicode data. The most important thing is getting input as
Unicode; for that, see the earlier I/O discussion.

=item *

How Do I Know Whether My String Is In Unicode?

You shouldn't care. No, you really shouldn't. No, really. If you
have to care--beyond the cases described above--it means that we
didn't get the transparency of Unicode quite right.

Okay, if you insist:

    use Encode 'is_utf8';
    print is_utf8($string) ? 1 : 0, "\n";

But note that this doesn't mean that any of the characters in the
string are necessarily UTF-8 encoded, or that any of the characters have
code points greater than 0xFF (255) or even 0x80 (128), or that the
string has any characters at all. All the C<is_utf8()> does is to
return the value of the internal "utf8ness" flag attached to the
C<$string>. If the flag is off, the bytes in the scalar are interpreted
as a single byte encoding. If the flag is on, the bytes in the scalar
are interpreted as the (multi-byte, variable-length) UTF-8 encoded code
points of the characters. Bytes added to a UTF-8 encoded string are
automatically upgraded to UTF-8. If mixed non-UTF-8 and UTF-8 scalars
are merged (double-quoted interpolation, explicit concatenation, and
printf/sprintf parameter substitution), the result will be UTF-8 encoded
as if copies of the byte strings were upgraded to UTF-8: for example,

    $a = "ab\x80c";
    $b = "\x{100}";
    print "$a = $b\n";

the output string will be UTF-8-encoded C<ab\x80c = \x{100}\n>, but
note that C<$a> will stay byte-encoded.

Sometimes you might really need to know the byte length of a string
instead of the character length. For that use either the
C<Encode::encode_utf8()> function or the C<bytes> pragma and its only
defined function C<length()>:

    my $unicode = chr(0x100);
    print length($unicode), "\n"; # will print 1
    require Encode;
    print length(Encode::encode_utf8($unicode)), "\n"; # will print 2
    use bytes;
    print length($unicode), "\n"; # will also print 2
                                  # (the 0xC4 0x80 of the UTF-8)

=item *

How Do I Detect Data That's Not Valid In a Particular Encoding?
665 | Use the C<Encode> package to try converting it. | |
666 | For example, | |
667 | ||
    use Encode 'decode_utf8';
    if (eval { decode_utf8($string_of_bytes_that_I_think_is_utf8,
                           Encode::FB_CROAK); 1 }) {
        # valid
    } else {
        # invalid
    }
674 | ||
675 | For UTF-8 only, you can use: | |
676 | ||
677 | use warnings; | |
678 | @chars = unpack("U0U*", $string_of_bytes_that_I_think_is_utf8); | |
679 | ||
If invalid, a C<Malformed UTF-8 character (byte 0x##) in unpack>
warning is produced. The "U0" means "expect strictly UTF-8 encoded
Unicode". Without that, C<unpack("U*", ...)> would also accept
data like C<chr(0xFF)>, similarly to C<pack>, as we saw earlier.
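The warning can be trapped to get a simple yes/no answer. This is
only a sketch, and C<looks_like_utf8()> is a made-up helper name:

```perl
# Sketch: trap the "Malformed UTF-8" warning (or a fatal error on
# newer perls) to test validity; looks_like_utf8() is a made-up name.
use warnings;

sub looks_like_utf8 {
    my $bytes = shift;
    my $bad = 0;
    local $SIG{__WARN__} = sub { $bad = 1 };
    my $ok = eval { my @chars = unpack("U0U*", $bytes); 1 };
    return $ok && !$bad;
}

print looks_like_utf8("\xC4\x80") ? "valid\n" : "invalid\n";   # valid
print looks_like_utf8("\xFF")     ? "valid\n" : "invalid\n";   # invalid
```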
684 | ||
685 | =item * | |
686 | ||
687 | How Do I Convert Binary Data Into a Particular Encoding, Or Vice Versa? | |
688 | ||
689 | This probably isn't as useful as you might think. | |
690 | Normally, you shouldn't need to. | |
691 | ||
In one sense, what you are asking doesn't make much sense: encodings
are for characters, and binary data are not "characters", so converting
"data" into some encoding isn't meaningful unless you know which
character set and encoding the binary data is in, in which case it's
not just binary data, now is it?
697 | ||
698 | If you have a raw sequence of bytes that you know should be | |
699 | interpreted via a particular encoding, you can use C<Encode>: | |
700 | ||
701 | use Encode 'from_to'; | |
702 | from_to($data, "iso-8859-1", "utf-8"); # from latin-1 to utf-8 | |
703 | ||
704 | The call to C<from_to()> changes the bytes in C<$data>, but nothing | |
705 | material about the nature of the string has changed as far as Perl is | |
706 | concerned. Both before and after the call, the string C<$data> | |
707 | contains just a bunch of 8-bit bytes. As far as Perl is concerned, | |
708 | the encoding of the string remains as "system-native 8-bit bytes". | |
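A sketch of this in action: the byte count changes, but the string is
still plain bytes afterwards.

```perl
# Sketch: from_to() rewrites the bytes in place; the result is
# still a plain byte string as far as Perl is concerned.
use Encode 'from_to';

my $data = "caf\xE9";                  # "cafe" with e-acute in Latin-1
from_to($data, "iso-8859-1", "utf-8"); # now the UTF-8 bytes
print length($data), "\n";             # 5: \xE9 became \xC3 \xA9
print utf8::is_utf8($data) ? "flagged" : "plain bytes", "\n"; # plain bytes
```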
709 | ||
710 | You might relate this to a fictional 'Translate' module: | |
711 | ||
712 | use Translate; | |
713 | my $phrase = "Yes"; | |
714 | Translate::from_to($phrase, 'english', 'deutsch'); | |
715 | ## phrase now contains "Ja" | |
716 | ||
The contents of the string change, but not the nature of the string.
Perl doesn't know any more after the call than before that the
contents of the string indicate the affirmative.
720 | ||
721 | Back to converting data. If you have (or want) data in your system's | |
722 | native 8-bit encoding (e.g. Latin-1, EBCDIC, etc.), you can use | |
723 | pack/unpack to convert to/from Unicode. | |
724 | ||
725 | $native_string = pack("C*", unpack("U*", $Unicode_string)); | |
726 | $Unicode_string = pack("U*", unpack("C*", $native_string)); | |
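For instance, a round trip through these two conversions (a sketch; it
works because every Latin-1 code point fits in a single byte):

```perl
# Sketch: Latin-1 bytes -> Unicode characters -> Latin-1 bytes.
my $native  = "caf\xE9";                          # Latin-1 bytes
my $unicode = pack("U*", unpack("C*", $native));  # bytes to characters
my $back    = pack("C*", unpack("U*", $unicode)); # characters to bytes
print $back eq $native ? "round trip ok\n" : "mismatch\n";
```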
727 | ||
728 | If you have a sequence of bytes you B<know> is valid UTF-8, | |
729 | but Perl doesn't know it yet, you can make Perl a believer, too: | |
730 | ||
731 | use Encode 'decode_utf8'; | |
732 | $Unicode = decode_utf8($bytes); | |
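A sketch of the effect: the same data goes from two bytes to one
character.

```perl
# Sketch: decode_utf8() turns raw UTF-8 bytes into characters.
use Encode 'decode_utf8';

my $bytes = "\xC4\x80";          # the UTF-8 encoding of U+0100
print length($bytes), "\n";      # 2, counted as bytes
my $chars = decode_utf8($bytes);
print length($chars), "\n";      # 1, counted as a character
```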
733 | ||
734 | You can convert well-formed UTF-8 to a sequence of bytes, but if | |
735 | you just want to convert random binary data into UTF-8, you can't. | |
736 | B<Any random collection of bytes isn't well-formed UTF-8>. You can | |
737 | use C<unpack("C*", $string)> for the former, and you can create | |
738 | well-formed Unicode data by C<pack("U*", 0xff, ...)>. | |
739 | ||
740 | =item * | |
741 | ||
742 | How Do I Display Unicode? How Do I Input Unicode? | |
743 | ||
744 | See http://www.alanwood.net/unicode/ and | |
745 | http://www.cl.cam.ac.uk/~mgk25/unicode.html | |
746 | ||
747 | =item * | |
748 | ||
749 | How Does Unicode Work With Traditional Locales? | |
750 | ||
In Perl, not very well. Avoid mixing Unicode and locales through the
C<locale> pragma: use either one or the other, not both.
753 | ||
754 | =back | |
755 | ||
756 | =head2 Hexadecimal Notation | |
757 | ||
758 | The Unicode standard prefers using hexadecimal notation because | |
759 | that more clearly shows the division of Unicode into blocks of 256 characters. | |
760 | Hexadecimal is also simply shorter than decimal. You can use decimal | |
761 | notation, too, but learning to use hexadecimal just makes life easier | |
762 | with the Unicode standard. The C<U+HHHH> notation uses hexadecimal, | |
763 | for example. | |
764 | ||
765 | The C<0x> prefix means a hexadecimal number, the digits are 0-9 I<and> | |
766 | a-f (or A-F, case doesn't matter). Each hexadecimal digit represents | |
767 | four bits, or half a byte. C<print 0x..., "\n"> will show a | |
768 | hexadecimal number in decimal, and C<printf "%x\n", $decimal> will | |
769 | show a decimal number in hexadecimal. If you have just the | |
770 | "hex digits" of a hexadecimal number, you can use the C<hex()> function. | |
771 | ||
772 | print 0x0009, "\n"; # 9 | |
773 | print 0x000a, "\n"; # 10 | |
774 | print 0x000f, "\n"; # 15 | |
775 | print 0x0010, "\n"; # 16 | |
776 | print 0x0011, "\n"; # 17 | |
777 | print 0x0100, "\n"; # 256 | |
778 | ||
779 | print 0x0041, "\n"; # 65 | |
780 | ||
781 | printf "%x\n", 65; # 41 | |
782 | printf "%#x\n", 65; # 0x41 | |
783 | ||
784 | print hex("41"), "\n"; # 65 | |
785 | ||
786 | =head2 Further Resources | |
787 | ||
788 | =over 4 | |
789 | ||
790 | =item * | |
791 | ||
792 | Unicode Consortium | |
793 | ||
794 | http://www.unicode.org/ | |
795 | ||
796 | =item * | |
797 | ||
798 | Unicode FAQ | |
799 | ||
800 | http://www.unicode.org/unicode/faq/ | |
801 | ||
802 | =item * | |
803 | ||
804 | Unicode Glossary | |
805 | ||
806 | http://www.unicode.org/glossary/ | |
807 | ||
808 | =item * | |
809 | ||
810 | Unicode Useful Resources | |
811 | ||
812 | http://www.unicode.org/unicode/onlinedat/resources.html | |
813 | ||
814 | =item * | |
815 | ||
816 | Unicode and Multilingual Support in HTML, Fonts, Web Browsers and Other Applications | |
817 | ||
818 | http://www.alanwood.net/unicode/ | |
819 | ||
820 | =item * | |
821 | ||
822 | UTF-8 and Unicode FAQ for Unix/Linux | |
823 | ||
824 | http://www.cl.cam.ac.uk/~mgk25/unicode.html | |
825 | ||
826 | =item * | |
827 | ||
828 | Legacy Character Sets | |
829 | ||
830 | http://www.czyborra.com/ | |
831 | http://www.eki.ee/letter/ | |
832 | ||
833 | =item * | |
834 | ||
835 | The Unicode support files live within the Perl installation in the | |
836 | directory | |
837 | ||
838 | $Config{installprivlib}/unicore | |
839 | ||
840 | in Perl 5.8.0 or newer, and | |
841 | ||
842 | $Config{installprivlib}/unicode | |
843 | ||
844 | in the Perl 5.6 series. (The renaming to F<lib/unicore> was done to | |
845 | avoid naming conflicts with lib/Unicode in case-insensitive filesystems.) | |
The main Unicode data file is F<UnicodeData.txt> (or F<Unicode.301> in
Perl 5.6.1). You can find C<$Config{installprivlib}> by running
848 | ||
849 | perl "-V:installprivlib" | |
850 | ||
You can explore the information in the Unicode data files using
the C<Unicode::UCD> module.
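For example, a small sketch that looks up a single character
(C<Unicode::UCD> ships with Perl 5.8 and later):

```perl
# Sketch: querying the Unicode data files via Unicode::UCD.
use Unicode::UCD 'charinfo';

my $info = charinfo(0x0041);     # code point of "A"
print $info->{name}, "\n";       # LATIN CAPITAL LETTER A
print $info->{script}, "\n";     # Latin
print $info->{category}, "\n";   # Lu (uppercase letter)
```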
853 | ||
854 | =back | |
855 | ||
856 | =head1 UNICODE IN OLDER PERLS | |
857 | ||
858 | If you cannot upgrade your Perl to 5.8.0 or later, you can still | |
859 | do some Unicode processing by using the modules C<Unicode::String>, | |
860 | C<Unicode::Map8>, and C<Unicode::Map>, available from CPAN. | |
861 | If you have the GNU recode installed, you can also use the | |
862 | Perl front-end C<Convert::Recode> for character conversions. | |
863 | ||
The following are fast conversions between ISO 8859-1 (Latin-1) bytes
and UTF-8 bytes; the code works even with older Perl 5 versions.
866 | ||
867 | # ISO 8859-1 to UTF-8 | |
868 | s/([\x80-\xFF])/chr(0xC0|ord($1)>>6).chr(0x80|ord($1)&0x3F)/eg; | |
869 | ||
870 | # UTF-8 to ISO 8859-1 | |
871 | s/([\xC2\xC3])([\x80-\xBF])/chr(ord($1)<<6&0xC0|ord($2)&0x3F)/eg; | |
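As a sketch, applying the first substitution to a Latin-1 string:

```perl
# Sketch: the Latin-1 -> UTF-8 substitution applied to "caf\xE9".
my $s = "caf\xE9";   # e-acute, byte 0xE9, in ISO 8859-1
$s =~ s/([\x80-\xFF])/chr(0xC0|ord($1)>>6).chr(0x80|ord($1)&0x3F)/eg;
print $s eq "caf\xC3\xA9" ? "ok\n" : "not ok\n";   # ok
```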
872 | ||
873 | =head1 SEE ALSO | |
874 | ||
875 | L<perlunicode>, L<Encode>, L<encoding>, L<open>, L<utf8>, L<bytes>, | |
876 | L<perlretut>, L<Unicode::Collate>, L<Unicode::Normalize>, L<Unicode::UCD> | |
877 | ||
878 | =head1 ACKNOWLEDGMENTS | |
879 | ||
880 | Thanks to the kind readers of the perl5-porters@perl.org, | |
881 | perl-unicode@perl.org, linux-utf8@nl.linux.org, and unicore@unicode.org | |
882 | mailing lists for their valuable feedback. | |
883 | ||
884 | =head1 AUTHOR, COPYRIGHT, AND LICENSE | |
885 | ||
886 | Copyright 2001-2002 Jarkko Hietaniemi <jhi@iki.fi> | |
887 | ||
888 | This document may be distributed under the same terms as Perl itself. |