Han unification

An effort by Unicode and ISO 10646 to map Han characters into a single set, disregarding regional variations
Differences for the same Unicode character (U+8FD4) in regional versions of Source Han Sans
Han unification is an effort by the authors of Unicode and the Universal Character Set to map multiple character sets of the Han characters of the so-called CJK languages into a single set of unified characters. Han characters are a feature shared in common by written Chinese (hanzi), Japanese (kanji), and Korean (hanja). Modern Chinese, Japanese and Korean typefaces typically use regional or historical variants of a given Han character. In the formulation of Unicode, an attempt was made to unify these variants by considering them different glyphs representing the same "character", or orthographic unit, hence "Han unification", with the resulting character repertoire sometimes contracted to Unihan.[citation needed] Nevertheless, many characters have regional variants assigned to different code points, such as Traditional 個 (U+500B) versus Simplified 个 (U+4E2A).


Unihan can also refer to the Unihan Database maintained by the Unicode Consortium, which provides information about all of the unified Han characters encoded in the Unicode Standard, including mappings to various national and industry standards, indices into standard dictionaries, encoded variants, pronunciations in various languages, and an English definition. The database is available to the public as text files[1] and via an interactive website.[2][3] The latter also includes representative glyphs and definitions for compound words drawn from the free Japanese EDICT and Chinese CEDICT dictionary projects (which are provided for convenience and are not a formal part of the Unicode Standard).
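The database's text files use a simple line-oriented format: a code point, a field name, and a value, separated by tabs. A minimal parsing sketch in Python, using sample records that follow the published format (the field values here are illustrative, not quoted from the current database):

```python
# Parse Unihan-style records of the form "U+XXXX<TAB>fieldName<TAB>value".
# The real files (e.g. Unihan_Readings.txt) contain tens of thousands of lines.
sample = """\
U+8349\tkDefinition\tgrass, straw, thatch, herbs
U+8349\tkMandarin\tcao3
U+8349\tkJapaneseKun\tKUSA
"""

def parse_unihan(text):
    records = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        cp, field, value = line.split("\t", 2)
        char = chr(int(cp[2:], 16))  # "U+8349" -> '草'
        records.setdefault(char, {})[field] = value
    return records

data = parse_unihan(sample)
print(data["草"]["kDefinition"])  # grass, straw, thatch, herbs
```

The same indexing-by-character approach extends naturally to the variant fields (kSemanticVariant, kZVariant) discussed later in this article.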

Rationale and controversy

The Unicode Standard details the principles of Han unification.[4][5] The Ideographic Research Group (IRG), made up of experts from the Chinese-speaking countries, North and South Korea, Japan, Vietnam, and other countries, is responsible for the process. One possible rationale is the desire to limit the size of the full Unicode character set, where CJK characters as represented by discrete ideographs may approach or exceed 100,000[a] characters. Version 1 of Unicode was designed to fit into 16 bits, and only 20,940 characters (32%) out of the possible 65,536 were reserved for these CJK Unified Ideographs. Unicode was later extended to 21 bits, allowing many more CJK characters (92,865 are assigned, with room for more). The article The secret life of Unicode, located on IBM DeveloperWorks, attempts to illustrate part of the motivation for Han unification:

The problem stems from the fact that Unicode encodes characters rather than "glyphs," which are the visual representations of the characters. There are four basic traditions for East Asian character shapes: traditional Chinese, simplified Chinese, Japanese, and Korean. While the Han root character may be the same for CJK languages, the glyphs in common use for the same characters may not be. For example, the traditional Chinese glyph for "grass" uses four strokes for the "grass" radical [⺿], whereas the simplified Chinese, Japanese, and Korean glyphs [⺾] use three. But there is only one Unicode point for the grass character (U+8349) [草] regardless of writing system. Another example is the ideograph for "one," which is different in Chinese, Japanese, and Korean. Many people think that the three versions should be encoded differently.

In fact, the three ideographs for "one" (一, 壹, or 壱) are encoded separately in Unicode, as they are not considered national variants. The first is the common form in all three countries, while the second and third are used on financial instruments to prevent tampering (they may be considered variants). However, Han unification has also caused considerable controversy, particularly among the Japanese public, who, with the nation's literati, have a history of protesting the culling of historically and culturally significant variants.[6][7] (See Kanji § Orthographic reform and lists of kanji. Today, the list of characters officially recognized for use in proper names continues to expand at a modest pace.) In 1993, the Japan Electronic Industries Development Association (JEIDA) published a pamphlet titled "未来の文字コード体系に私達は不安をもっています" (We are feeling anxious about the future character encoding system, JPNO 20985671), summarizing major criticism against the Han unification approach adopted by Unicode. Aditya Mukerjee criticized the effort as an attempt to create an artificial, limited set of characters, rather than fully acknowledging the diversity of Asian languages, and compared Han unification to a hypothetical unification of European alphabets, including English and Russian, on the grounds of sharing the same roots in Greek. He also pointed to the fast-growing emoji subset of Unicode, leading to an absurd situation where he can type 💩 U+1F4A9 PILE OF POO, a character with its own code point, but cannot properly type his first name in Bengali without resorting to substitute characters.[8]
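The three code points for "one" can be confirmed directly from any Unicode implementation; for example, in Python:

```python
# The everyday form of "one" and the two anti-tampering ("banker's")
# forms each have their own code point.
for ch in "一壹壱":
    print(f"U+{ord(ch):04X}", ch)
# U+4E00 一
# U+58F9 壹
# U+58F1 壱
```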

Graphemes versus glyphs

The Latin lowercase "a" has widely differing glyphs that all represent concrete instances of the same abstract grapheme. Although a native reader of any language using the Latin script recognizes these glyphs as the same grapheme, to others they might appear to be wholly unrelated. A grapheme is the smallest abstract unit of meaning in a writing system. Any grapheme has many possible glyph expressions, but all are recognized as the same grapheme by those with reading and writing knowledge of a particular writing system. Although Unicode typically assigns characters to code points to express the graphemes within a writing system, the Unicode Standard (section 3.4 D7) does caution:

An abstract character does not necessarily correspond to what a user thinks of as a "character" and should not be confused with a grapheme.

However, this quote refers to the fact that some graphemes are composed of several characters. So, for instance, the character U+0061 a LATIN SMALL LETTER A combined with U+030A ◌̊ COMBINING RING ABOVE (i.e. the combination "å") might be understood by a user as a single grapheme while being composed of multiple Unicode abstract characters. In addition, Unicode also assigns some code points to a small number (other than for compatibility reasons) of formatting characters, whitespace characters, and other abstract characters that are not graphemes, but are instead used to control the breaks between lines, words, graphemes and grapheme clusters. With the unified Han ideographs, the Unicode Standard makes a departure from prior practices in assigning abstract characters not as graphemes, but according to the underlying meaning of the grapheme: what linguists sometimes call sememes. This departure therefore is not simply explained by the oft-quoted distinction between an abstract character and a glyph, but is more rooted in the difference between an abstract character assigned as a grapheme and an abstract character assigned as a sememe. In contrast, consider ASCII's unification of punctuation and diacritics, where graphemes with widely different meanings (for example, an apostrophe and a single quotation mark) are unified because the glyphs are the same. For Unihan the characters are not unified by their appearance, but by their definition or meaning. For a grapheme to be represented by various glyphs means that the grapheme has glyph variations that are usually determined by selecting one font or another or by using glyph substitution features where multiple glyphs are included in a single font. Such glyph variations are considered by Unicode a feature of rich text protocols and not properly handled by the plain text goals of Unicode.
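The "å" example above is easy to demonstrate with Python's unicodedata module: the decomposed sequence is two Unicode characters that a user perceives as one grapheme, and NFC normalization composes it into the single precomposed code point:

```python
import unicodedata

decomposed = "a\u030A"  # LATIN SMALL LETTER A + COMBINING RING ABOVE
composed = unicodedata.normalize("NFC", decomposed)

print(len(decomposed))       # 2 -- two code points, one perceived grapheme
print(len(composed))         # 1 -- the precomposed code point
print(composed == "\u00E5")  # True: U+00E5 LATIN SMALL LETTER A WITH RING ABOVE
```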
However, when the change from one glyph to another constitutes a change from one grapheme to another—where a glyph cannot possibly still, for example, mean the same grapheme understood as the small letter "a"—Unicode separates those into distinct code points. For Unihan the same thing is done whenever the abstract meaning changes, but rather than speaking of the abstract meaning of a grapheme (the letter "a"), the unification of Han ideographs assigns a new code point for each different meaning—even if that meaning is expressed by distinct graphemes in different languages. Although a grapheme such as "ö" might mean something different in English (as used in the word "coördinated") than it does in German, it is still the same grapheme and can easily be unified so that English and German can share a common abstract Latin writing system (along with Latin itself). This example also points to another reason that "abstract character" and grapheme as an abstract unit in a written language do not necessarily map one-to-one. In English the combining diaeresis, "¨", and the "o" it modifies may be seen as two separate graphemes, whereas in languages such as Swedish, the letter "ö" may be seen as a single grapheme. Similarly, in English the dot on an "i" is understood as part of the "i" grapheme, whereas in other languages, such as Turkish, the dot may be seen as a separate grapheme added to the dotless "ı". To deal with the use of different graphemes for the same Unihan sememe, Unicode has relied on several mechanisms, especially as they relate to rendering text. One has been to treat it as simply a font issue, so that different fonts might be used to render Chinese, Japanese or Korean. Also, font formats such as OpenType allow for the mapping of alternate glyphs according to language, so that a text rendering system can look to the user's environment settings to determine which glyph to use.
The problem with these approaches is that they fail to meet the goals of Unicode to define a consistent way of encoding multilingual text.[9] So rather than treating the issue as a rich text problem of glyph alternates, Unicode added the concept of variation selectors, first introduced in version 3.2 and supplemented in version 4.0.[10] While variation selectors are treated as combining characters, they have no associated diacritic or mark. Instead, by combining with a base character, they signal that the two-character sequence selects a variation (typically in terms of grapheme, but also in terms of underlying meaning, as in the case of a location name or other proper noun) of the base character. This then is not a selection of an alternate glyph, but the selection of a grapheme variation or a variation of the base abstract character. Such a two-character sequence, however, can easily be mapped to a separate single glyph in modern fonts. Since Unicode has assigned 256 separate variation selectors, it is capable of assigning 256 variations for any Han ideograph. Such variations can be specific to one language or another and enable the encoding of plain text that includes such grapheme variations.
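Mechanically, a variation sequence is nothing more than a base character followed by a variation selector code point. The sketch below counts the 256 selectors mentioned above and builds one such sequence; the particular base-selector pairing is illustrative only, since meaningful pairings are those registered in the Ideographic Variation Database, and whether a distinct glyph actually appears depends entirely on font support:

```python
# Variation selectors: U+FE00..U+FE0F (16 selectors) plus the
# supplementary range U+E0100..U+E01EF (240 selectors) -- 256 in total.
selectors = list(range(0xFE00, 0xFE10)) + list(range(0xE0100, 0xE01F0))
print(len(selectors))  # 256

# An ideographic variation sequence: base character + variation selector.
seq = "\u845B\U000E0100"     # 葛 followed by VARIATION SELECTOR-17
print(len(seq))              # 2 code points, one "character" to the user
print(f"U+{ord(seq[1]):X}")  # U+E0100
```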


Unihan "abstract characters"

Since the Unihan standard encodes "abstract characters", not "glyphs", the graphical artifacts produced by Unicode have been considered temporary technical hurdles, and at most, cosmetic. However, again, particularly in Japan, due in part to the way in which Chinese characters were incorporated into Japanese writing systems historically, the inability to specify a particular variant was considered a significant obstacle to the use of Unicode in scholarly work. For example, the unification of "grass" (explained above) means that a historical text cannot be encoded so as to preserve its peculiar orthography. Instead, for example, the scholar would be required to locate the desired glyph in a specific typeface in order to convey the text as written, defeating the purpose of a unified character set. Unicode has responded to these needs by assigning variation selectors so that authors can select grapheme variations of particular ideographs (or even other characters).[10] Small differences in graphical representation are also problematic when they affect legibility or belong to the wrong cultural tradition. Besides making some Unicode fonts unusable for texts involving multiple "Unihan languages", names or other orthographically sensitive terminology might be displayed incorrectly. (Proper names tend to be especially orthographically conservative—compare this to changing the spelling of one's name to suit a language reform in the US or UK.) While this may be considered primarily a graphical representation or rendering problem to be overcome by more artful fonts, the widespread use of Unicode would make it difficult to preserve such distinctions. The problem of one character representing semantically different concepts is also present in the Latin part of Unicode. The Unicode character for an apostrophe is the same as the character for a right single quotation mark (’).
On the other hand, the capital Latin letter A is not unified with the Greek letter Α or the Cyrillic letter А. This is, of course, desirable for reasons of compatibility, and deals with a much smaller alphabetic character set. While the unification aspect of Unicode is controversial in some quarters for the reasons given above, Unicode itself does now encode a vast number of seldom-used characters of a more-or-less antiquarian nature. Some of the controversy stems from the fact that the very decision to perform Han unification was made by the initial Unicode Consortium, which at the time was a consortium of North American companies and organizations (most of them in California),[11] but included no East Asian government representatives. The initial design goal was to create a 16-bit standard,[12] and Han unification was therefore a critical step for avoiding tens of thousands of character duplications. This 16-bit requirement was later abandoned, making the size of the character set less of an issue today. The controversy later extended to the internationally representative ISO: the initial CJK Joint Research Group (CJK-JRG) favored a proposal (DIS 10646) for a non-unified character set, "which was thrown out in favor of unification with the Unicode Consortium's unified character set by the votes of American and European ISO members" (even though the Japanese position was unclear).[13] Endorsing the Unicode Han unification was a necessary step for the heated ISO 10646/Unicode merger. Much of the controversy surrounding Han unification is based on the distinction between glyphs, as defined in Unicode, and the related but distinct idea of graphemes. Unicode assigns abstract characters (graphemes), as opposed to glyphs, which are particular visual representations of a character in a specific typeface.
One character may be represented by many distinct glyphs, for example a "g" or an "a", both of which may have one loop (ɑ, ɡ) or two (a, g). Yet for a reader of Latin-script-based languages the two variations of the "a" grapheme are both recognized as the same grapheme. Graphemes present in national character code standards have been added to Unicode, as required by Unicode's Source Separation rule, even where they can be composed of characters already available. The national character code standards existing in CJK languages are considerably more involved, given the technological limitations under which they evolved, and so the official CJK participants in Han unification may well have been amenable to reform. Unlike European versions, CJK Unicode fonts, due to Han unification, have large but irregular patterns of overlap, requiring language-specific fonts. Unfortunately, language-specific fonts also make it difficult to access a variant which, as with the "grass" example, happens to appear more typically in another language style. (That is to say, it would be difficult to access "grass" with the four-stroke radical more typical of Traditional Chinese in a Japanese environment, whose fonts would typically depict the three-stroke radical.) Unihan proponents tend to favor markup languages for defining language strings, but this would not ensure the use of a specific variant in the case given, only the language-specific font more likely to depict a character as that variant. (At this point, merely stylistic differences do enter in, as a selection of Japanese and Chinese fonts is not likely to be visually compatible.) Chinese users seem to have fewer objections to Han unification, largely because Unicode did not attempt to unify Simplified Chinese characters with Traditional Chinese characters. (Simplified Chinese characters are used among Chinese speakers in the People's Republic of China, Singapore, and Malaysia.
Traditional Chinese characters are used in Hong Kong and Taiwan (Big5) and they are, with some differences, more familiar to Korean and Japanese users.) Unicode is seen as neutral with regard to this politically charged issue, and has encoded Simplified and Traditional Chinese glyphs separately (e.g. the ideograph for "discard" is 丟 U+4E1F for Traditional Chinese Big5 #A5E1 and 丢 U+4E22 for Simplified Chinese GB #2210). It is also noted that Traditional and Simplified characters should be encoded separately according to Unicode Han unification rules, because they are distinguished in pre-existing PRC character sets. Furthermore, as with other variants, Traditional to Simplified characters is not a one-to-one relationship.
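The separate encoding of such Traditional/Simplified pairs, and their origin in separate legacy standards, can be checked from Python, whose standard distribution ships big5 and gb2312 codecs (a sketch; only the code points and round-trips through each legacy encoding are asserted, not the exact legacy byte values):

```python
trad, simp = "丟", "丢"  # "discard": Traditional and Simplified forms
print(f"U+{ord(trad):04X}")  # U+4E1F
print(f"U+{ord(simp):04X}")  # U+4E22

# Each variant lives in its own legacy standard and round-trips through it.
print(trad.encode("big5").decode("big5") == trad)      # True
print(simp.encode("gb2312").decode("gb2312") == simp)  # True
```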

Alternatives

There are several alternative character sets that are not encoded according to the principle of Han unification, and are thus free from its restrictions. These region-dependent character sets are also seen as unaffected by Han unification because of their region-specific nature:

  • ISO/IEC 2022 (based on escape sequences to switch between Chinese, Japanese, and Korean character sets, hence without unification)
  • Big5 extensions
  • GCCS and its successor HKSCS

However, none of these alternative standards has been as widely adopted as Unicode, which is now the base character set for many new standards and protocols, is internationally adopted, and is built into the architecture of operating systems (Microsoft Windows, Apple macOS, and many Unix-like systems), programming languages (Perl, Python, C#, Java, Common Lisp, APL, C, C++), and libraries (IBM International Components for Unicode (ICU) along with the Pango, Graphite, Scribe, Uniscribe, and ATSUI rendering engines), font formats (TrueType and OpenType), and so on. In March 1989, a (B)TRON-based system was adopted by the Japanese government organization "Center for Educational Computing" as the system of choice for school education, including compulsory education.[14] However, in April, a report titled "1989 National Trade Estimate Report on Foreign Trade Barriers" from the Office of the United States Trade Representative specifically listed the system as a trade barrier in Japan. The report claimed that the adoption of the TRON-based system by the Japanese government was advantageous to Japanese manufacturers, thus excluding US operating systems from the huge new market; specifically, the report lists MS-DOS, OS/2 and UNIX as examples. The Office of the USTR was allegedly under Microsoft's influence, as its former officer Tom Robertson was then offered a lucrative position by Microsoft.[15] While the TRON system itself was subsequently removed from the list of sanctions under Section 301 of the Trade Act of 1974 after protests by the organization in May 1989, the trade dispute caused the Ministry of International Trade and Industry to accept a request from Masayoshi Son to cancel the Center of Educational Computing's selection of the TRON-based system for educational computers.[16] The incident is regarded as a symbolic event for the loss of momentum and eventual demise of the BTRON system, which led to the widespread adoption of MS-DOS in Japan and the eventual adoption of Unicode with its successor Windows.

Unification of all equivalent characters

There has not been any push for full semantic unification of all semantically linked characters, though the idea would treat the respective users of East Asian languages the same, whether they write in Korean, Simplified Chinese, Traditional Chinese, Kyūjitai Japanese, Shinjitai Japanese or Vietnamese. Instead of some variants getting distinct code points while other groups of variants have to share single code points, all variants could be reliably expressed only with metadata tags (e.g., CSS formatting in webpages). The burden would be on all those who use differing versions of 直, 別, 兩, 兔, whether that difference be due to simplification, international variance or intra-national variance. However, for some platforms (e.g., smartphones), a device may come with only one font pre-installed. The system font must make a decision for the default glyph for each code point, and these glyphs can differ greatly, indicating different underlying graphemes. Consequently, relying on language markup across the board as an approach is beset with two major issues. First, there are contexts where language markup is unavailable (code commits, plain text). Second, any solution would require every operating system to come pre-installed with many glyphs for semantically identical characters that have many variants. In addition to the standard character sets in Simplified Chinese, Traditional Chinese, Korean, Vietnamese, Kyūjitai Japanese and Shinjitai Japanese, there also exist "ancient" forms of characters that are of interest to historians, linguists and philologists. Unicode's Unihan database has already drawn connections between many characters. The Unicode database catalogs the connections between variant characters with distinct code points already. However, for characters with a shared code point, the reference glyph image is usually biased toward the Traditional Chinese version.
Also, the decision of whether to classify pairs as semantic variants or z-variants is not always consistent or well-defined, despite rationalizations in the handbook.[17] So-called semantic variants of 丟 (U+4E1F) and 丢 (U+4E22) are examples that Unicode gives as differing in a significant way in their abstract shapes, while Unicode lists 佛 and 仏 as z-variants, differing only in font style. Paradoxically, Unicode considers 兩 and 両 to be near-identical z-variants while at the same time classifying them as significantly different semantic variants. There are also cases of some pairs of characters being simultaneously semantic variants and specialized semantic variants and simplified variants: 個 (U+500B) and 个 (U+4E2A). There are cases of non-mutual equivalence. For example, the Unihan database entry for 亀 (U+4E80) considers 龜 (U+9F9C) to be its z-variant, but the entry for 龜 does not list 亀 as a z-variant, even though 龜 was obviously already in the database at the time that the entry for 亀 was written. Some clerical errors led to doubling of completely identical characters such as 﨣 (U+FA23) and 𧺯 (U+27EAF). If a font has glyphs encoded to both points so that one font is used for both, they should appear identical. These cases are listed as z-variants despite having no variance at all. Intentionally duplicated characters were added to facilitate bit-for-bit round-trip conversion. Because round-trip conversion was an early selling point of Unicode, this meant that if a national standard in use unnecessarily duplicated a character, Unicode had to do the same. Unicode calls these intentional duplications "compatibility variants", as with 漢 (U+FA9A), which names 漢 (U+6F22) as its compatibility variant. As long as an application uses the same font for both, they should appear identical.
Sometimes, as in the case of 車 with U+8ECA and U+F902, the added compatibility character lists the already present version of 車 as both its compatibility variant and its z-variant. The compatibility variant field overrides the z-variant field, forcing normalization under all forms, including canonical equivalence. Despite the name, compatibility variants are actually canonically equivalent and are united in every Unicode normalization form, not only under compatibility normalization. This is similar to how U+212B Å ANGSTROM SIGN is canonically equivalent to the precomposed U+00C5 Å LATIN CAPITAL LETTER A WITH RING ABOVE. Much software (such as the MediaWiki software that hosts Wikipedia) will replace all canonically equivalent characters that are discouraged (e.g. the angstrom symbol) with the recommended equivalent. Despite the name, CJK "compatibility variants" are canonically equivalent characters and not compatibility characters. 漢 (U+FA9A) was added to the database later than 漢 (U+6F22) was, and its entry informs the user of the compatibility information. On the other hand, 漢 (U+6F22) does not have this equivalence listed in its entry. Unicode demands that all entries, once admitted, cannot change compatibility or equivalence, so that normalization rules for already existing characters do not change. Some pairs of Traditional and Simplified characters are also considered to be semantic variants. According to Unicode's definitions, it makes sense that all simplifications (that do not result in wholly different characters being merged for their homophony) will be a form of semantic variant. Unicode classifies 丟 and 丢 as each other's respective traditional and simplified variants and also as each other's semantic variants. However, while Unicode classifies 億 (U+5104) and 亿 (U+4EBF) as each other's respective traditional and simplified variants, Unicode does not consider 億 and 亿 to be semantic variants of each other.
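This canonical equivalence is observable in any conformant normalization implementation; for example, with Python's unicodedata module:

```python
import unicodedata

def nfc(s):
    """Normalize to Normalization Form C (canonical composition)."""
    return unicodedata.normalize("NFC", s)

# Singleton canonical decompositions: every normalization form replaces
# these code points with their unified equivalents.
print(nfc("\u212B") == "\u00C5")  # ANGSTROM SIGN -> Å (U+00C5): True
print(nfc("\uFA9A") == "\u6F22")  # compatibility 漢 -> unified 漢: True
print(nfc("\uF902") == "\u8ECA")  # compatibility 車 -> unified 車: True
```

This is why text passed through a normalizing system (such as MediaWiki) silently loses the compatibility code points.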
Unicode claims that "ideally, there would be no pairs of z-variants in the Unicode Standard."[17] This would make it seem that the goal is to at least unify all minor variants, compatibility redundancies and accidental redundancies, leaving the differentiation to fonts and to language tags. This conflicts with the stated goal of Unicode to take away that overhead, and to allow any number of any of the world's scripts to be on the same document with one encoding system.[improper synthesis?] Chapter One of the handbook states that "With Unicode, the information technology industry has replaced proliferating character sets with data stability, global interoperability and data interchange, simplified software, and reduced development costs. While taking the ASCII character set as its starting point, the Unicode Standard goes far beyond ASCII's limited ability to encode only the upper- and lowercase letters A through Z. It provides the capacity to encode all characters used for the written languages of the world – more than 1 million characters can be encoded. No escape sequence or control code is required to specify any character in any language. The Unicode character encoding treats alphabetic characters, ideographic characters, and symbols equivalently, which means they can be used in any mixture and with equal facility."[9] That leaves us with settling on one unified reference grapheme for all z-variants, which is contentious, since few outside of Japan would recognize 佛 and 仏 as equivalent. Even within Japan, the variants are on different sides of a major simplification called Shinjitai. Unicode would effectively make the PRC's simplification of 侣 (U+4FA3) and 侶 (U+4FB6) a monumental difference by comparison. Such a plan would also eliminate the very visually distinct variations for characters like 直 (U+76F4) and 雇 (U+96C7).
One would expect that all simplified characters would simultaneously also be z-variants or semantic variants with their traditional counterparts, but many are neither. It is easier to explain the strange case that semantic variants can be simultaneously both semantic variants and specialized variants when Unicode's definition is that specialized semantic variants have the same meaning only in certain contexts. Languages use them differently. A pair whose characters are 100% drop-in replacements for each other in Japanese may not be so flexible in Chinese. Thus, any comprehensive merger of recommended code points would have to maintain some variants that differ only slightly in appearance even if the meaning is 100% the same for all contexts in one language, because in another language the two characters may not be 100% drop-in replacements.

Examples of language-dependent glyphs

In each row of the following table, the same character is repeated in all six columns. However, each column is marked (by the lang attribute) as being in a different language: Chinese (simplified and two types of traditional), Japanese, Korean, or Vietnamese. The browser should select, for each character, a glyph (from a font) suitable to the specified language. (Besides actual character variation—look for differences in stroke order, number, or direction—the typefaces may also reflect different typographical styles, as with serif and non-serif alphabets.) This only works for fallback glyph selection if you have CJK fonts installed on your system and the font selected to display this article does not include glyphs for these characters.
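The language-tagging mechanism the table depends on can be sketched in a few lines; the snippet below generates one such row of HTML with standard BCP 47 language tags (which glyphs the browser then picks is outside the markup's control and depends on the installed fonts):

```python
# Build one table row that repeats the same character under several
# language tags, as the table below does.
langs = ["zh-Hans", "zh-Hant", "zh-Hant-HK", "ja", "ko", "vi"]

def lang_cells(char, tags):
    """One <td> per language tag, all containing the same code point."""
    return "".join('<td lang="{}">{}</td>'.format(tag, char) for tag in tags)

char = "\u8349"  # U+8349, the "grass" character discussed earlier
row = "<tr>" + lang_cells(char, langs) + "</tr>"
print(row)
```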

[Table omitted: each row repeated one code point across six columns tagged Simplified Chinese, two Traditional Chinese variants (one for Hong Kong), Japanese, Korean, and Vietnamese; surviving row glosses include "close (simplified) / laugh (traditional)", "knife edge", and "one who does/-ist/-er".]
No character variant that is exclusive to Korean or Vietnamese has received its own code point, whereas almost all Shinjitai Japanese variants and Simplified Chinese variants each have distinct code points and unambiguous reference glyphs in the Unicode standard. In the twentieth century, East Asian countries made their own respective encoding standards. Within each standard, there coexisted variants with distinct code points, hence the distinct code points in Unicode for certain sets of variants. Taking Simplified Chinese as an example, the two character variants of 內 (U+5167) and 内 (U+5185) differ in exactly the same way as do the Korean and non-Korean variants of 全 (U+5168). Each respective variant of the first character has either 入 (U+5165) or 人 (U+4EBA). Each respective variant of the second character has either 入 (U+5165) or 人 (U+4EBA). Both variants of the first character got their own distinct code points. However, the two variants of the second character had to share the same code point. The justification Unicode gives is that the national standards body in the PRC made distinct code points for the two variants of the first character 內/内, whereas Korea never made separate code points for the different variants of 全. There is a reason for this that has nothing to do with how the domestic bodies view the characters themselves. China went through a process in the twentieth century that changed (if not simplified) several characters. During this transition, there was a need to be able to encode both variants within the same document. Korean has always used the variant of 全 with the 入 (U+5165) radical on top. Therefore, it had no reason to encode both variants. Korean-language documents made in the twentieth century had little reason to represent both versions in the same document.
Almost all of the variants that the PRC developed or standardized got distinct code points owing merely to the fortune of the Simplified Chinese transition carrying through into the computing age. This privilege, however, seems to apply inconsistently: while most simplifications performed in Japan and mainland China with code points in national standards, including characters simplified differently in each country, did make it into Unicode as distinct code points, sixty-two Shinjitai "simplified" characters with distinct code points in Japan were merged with their Kyūjitai traditional equivalents, like 海. This can cause problems for the language-tagging strategy. There is no universal tag for the traditional and "simplified" versions of Japanese as there is for Chinese. Thus, a Japanese writer wanting to display the Kyūjitai form of 海 may have to tag the character as "Traditional Chinese" or trust that the recipient's Japanese font uses only the Kyūjitai glyphs, but tags for Traditional Chinese and Simplified Chinese may be necessary to show the two forms side by side in a Japanese textbook. This would, however, preclude using the same font for an entire document. There are two distinct code points for 海 in Unicode, but only for "compatibility reasons": any Unicode-conformant font must display the equivalent Kyūjitai and Shinjitai code points as the same. Unofficially, a font may display 海 differently, with 海 (U+6D77) as the Shinjitai version and 海 (U+FA45) as the Kyūjitai version (which is identical to the traditional version in written Chinese and Korean). The radical 糸 (U+7CF8) is used in characters like 紅/红 in two variants, the second form being simply the cursive form. The radical components of 紅 (U+7D05) and 红 (U+7EA2) are semantically identical, and the glyphs differ only in the latter using a cursive version of the 糸 component.
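The fragility of the compatibility code point for 海 can be demonstrated with Python's `unicodedata` module: U+FA45 has a singleton canonical decomposition to U+6D77, so any Unicode normalization silently folds the Kyūjitai code point into the unified one.

```python
import unicodedata

kyujitai = "\uFA45"   # compatibility ideograph for the Kyūjitai form of 海
shinjitai = "\u6D77"  # CJK unified ideograph 海

print(kyujitai == shinjitai)                                # False: distinct code points
# Normalization maps the compatibility ideograph to the unified one,
# so the glyph distinction does not survive NFC (or NFD/NFKC/NFKD):
print(unicodedata.normalize("NFC", kyujitai) == shinjitai)  # True
```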
However, in mainland China the standards bodies wanted to standardize the cursive form when used in characters like 红. Because this change happened relatively recently, there was a transition period, so both 紅 (U+7D05) and 红 (U+7EA2) got separate code points in the PRC's text encoding standards so that Chinese-language documents could use both versions. The two variants received distinct code points in Unicode as well. The case of the radical 艸 (U+8278) shows how arbitrary the state of affairs is. When used to compose characters like 草 (U+8349), the radical is placed at the top, but it has two different forms. Traditional Chinese and Korean use a four-stroke version: at the top of 草 should be something that looks like two plus signs (⺿). Simplified Chinese, Kyūjitai Japanese, and Shinjitai Japanese use a three-stroke version, like two plus signs sharing their horizontal strokes (⺾, i.e. 草). The PRC's text encoding bodies did not encode the two variants differently. The fact that almost every other change brought about by the PRC, no matter how minor, warranted its own code point suggests that this exception may have been unintentional. Unicode copied the existing standards as is, preserving such irregularities. The Unicode Consortium has recognized errors in other instances: the myriad Unicode blocks for CJK Han ideographs contain redundancies in the original standards, redundancies brought about by flawed importing of the original standards, as well as accidental mergers that were later corrected, providing precedent for dis-unifying characters. For native speakers, variants can be unintelligible or unacceptable in educated contexts. English speakers may understand a handwritten note saying "4P5 kg" as "495 kg", but writing the nine backwards (so it looks like a "P") can be jarring and would be considered incorrect in any school.
Likewise for users of one CJK language reading a document with "foreign" glyphs: variants of 骨 can appear as mirror images, 者 can be missing a stroke or have an extraneous stroke, and 令 may be unreadable or be confused with 今, depending on which variant of 令 (e.g. 令) is used.

Examples of some non-unified Han ideographs

In some cases, often where the changes are the most striking, Unicode has encoded variant characters, making it unnecessary to switch between fonts or lang attributes. However, some variants with arguably minimal differences get distinct code points, while not every variant with arguably significant changes gets a unique code point. As an example, take a character such as 入 (U+5165), for which the only way to display the variants is to change font (or lang attribute) as described in the previous table. On the other hand, for 內 (U+5167) the variant 内 (U+5185) gets a unique code point. For some characters, like 兌/兑 (U+514C/U+5151), either method can be used to display the different glyphs. In the following table, each row compares variants that have been assigned different code points. For brevity, note that Shinjitai variants with different components will usually (and unsurprisingly) take unique code points (for example, 氣/気). They will not appear here, nor will the simplified Chinese characters that take consistently simplified radical components (for example, 紅/红, 語/语). [2] This list is not exhaustive.
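As a sketch of the lang-attribute approach mentioned above, the same code point can be wrapped in differently tagged HTML spans; a renderer with appropriate regional fonts may then select region-specific glyphs. The snippet below only builds the markup strings; the actual glyph choice is up to the rendering engine and installed fonts.

```python
# One code point, U+5165, tagged for different regional renderings.
spans = [
    '<span lang="zh-Hant">\u5165</span>',  # Traditional Chinese glyph
    '<span lang="ja">\u5165</span>',       # Japanese glyph
    '<span lang="ko">\u5165</span>',       # Korean glyph
]
print("\n".join(spans))
```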




[Table not recovered: rows of non-unified variant characters (traditional, simplified, Japanese, and other variant columns) with meanings including "to lose", "two, both", "to ride", "give birth", "to cash", "to leave", "meditation (Zen)", and "to research".]

Sources: MDBG Chinese-English Dictionary

Ideographic Variation Database (IVD)

To resolve issues brought about by Han unification, a Unicode Technical Standard known as the Unicode Ideographic Variation Database has been created to address the problem of specifying particular glyphs in a plain text environment. [18] By registering glyph collections in the Ideographic Variation Database (IVD), it is possible to use Ideographic Variation Selectors to form Ideographic Variation Sequences (IVS) that specify or restrict the appropriate glyph in text processing in a Unicode environment.
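A minimal illustration in Python: an IVS is simply a base ideograph followed by a variation selector from the supplementary range U+E0100–U+E01EF. The character 葛 (U+845B) is used here as an example of a base character that has registered IVD sequences; which glyph a given sequence selects depends on the collections registered in the IVD.

```python
base = "\u845B"       # 葛, a base character with registered IVD sequences
vs17 = "\U000E0100"   # VARIATION SELECTOR-17, first selector in the IVD range
ivs = base + vs17     # an Ideographic Variation Sequence

print(len(ivs))             # 2: one base code point plus one selector
print(f"U+{ord(ivs[1]):X}")  # U+E0100
```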

Unicode ranges

Ideographic characters assigned by Unicode appear in the following blocks:
Unicode includes support for CJKV radicals, strokes, punctuation, marks, and symbols in the following blocks:
Additional compatibility (discouraged use) characters appear in these blocks:
These compatibility characters (excluding the twelve unified ideographs in the CJK Compatibility Ideographs block) are included for compatibility with legacy text handling systems and other legacy character sets. They include forms of characters for vertical text layout and rich text characters that Unicode recommends handling through other means.
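The main block boundaries can be checked programmatically. The sketch below covers only a few of the blocks listed above (not the full set of extensions), using range values taken from the Unicode block charts:

```python
def is_cjk_unified(cp: int) -> bool:
    """Return True if the code point lies in a few of the main
    CJK Unified Ideographs blocks (not an exhaustive list)."""
    ranges = [
        (0x4E00, 0x9FFF),    # CJK Unified Ideographs
        (0x3400, 0x4DBF),    # Extension A
        (0x20000, 0x2A6DF),  # Extension B
    ]
    return any(lo <= cp <= hi for lo, hi in ranges)

print(is_cjk_unified(ord("海")))  # True: U+6D77 is a unified ideograph
print(is_cjk_unified(0xFA45))    # False: lies in the compatibility block
```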

International Ideographs Core

The International Ideographs Core (IICore) is a subset of 9810 ideographs derived from the CJK Unified Ideographs tables, designed to be implemented in devices with limited memory or input/output capability, and in applications where use of the complete ISO 10646 ideograph repertoire is not feasible. [20]

Unihan database files

The Unihan project has always made an effort to make its build database available. [1] The libUnihan project provides a normalized SQLite Unihan database and a corresponding C library. [21] All tables in this database are in fifth normal form. libUnihan is released under the LGPL, while its database, UnihanDb, is released under the MIT License.
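The Unihan data files distributed by the consortium are plain tab-separated text, one property per line (`U+XXXX<tab>field<tab>value`, with `#` comment lines). A sketch of a parser, run here on a small inline sample rather than the real (large) files:

```python
# Inline sample in the Unihan file format (tab-separated, '#' comments).
sample = (
    "# comment lines start with '#'\n"
    "U+4E00\tkMandarin\ty\u012b\n"
    "U+4E00\tkDefinition\tone; a, an; alone\n"
)

records = {}
for line in sample.splitlines():
    if not line or line.startswith("#"):
        continue  # skip blanks and comments
    codepoint, field, value = line.split("\t", 2)
    records.setdefault(codepoint, {})[field] = value

print(records["U+4E00"]["kDefinition"])  # one; a, an; alone
```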

See also

Notes

  1. ^ Most of these are legacy and obsolete characters, however, included per Unicode's objective of encoding every writing system that is or has ever been in use; only 2,000 to 3,000 characters are needed for literacy.

References