22.3. Character Set Support

The character set support in PostgreSQL allows you to store text in a variety of character sets (also called encodings), including single-byte character sets such as the ISO 8859 series and multiple-byte character sets such as EUC (Extended Unix Code), UTF-8, and Mule internal code. All supported character sets can be used transparently by clients, but a few are not supported for use within the server (that is, as a server-side encoding). The default character set is selected while initializing your PostgreSQL database cluster using initdb. It can be overridden when you create a database, so you can have multiple databases each with a different character set.

An important restriction, however, is that each database's character set must be compatible with the database's LC_CTYPE (character classification) and LC_COLLATE (string sort order) locale settings. For C or POSIX locale, any character set is allowed, but for other locales there is only one character set that will work correctly. (On Windows, however, UTF-8 encoding can be used with any locale.)

22.3.1. Supported Character Sets

Table 22-1 shows the character sets available for use in PostgreSQL.

Table 22-1. PostgreSQL Character Sets

Name           | Description                       | Language                       | Server? | Bytes/Char | Aliases
---------------+-----------------------------------+--------------------------------+---------+------------+--------------------------------------
BIG5           | Big Five                          | Traditional Chinese            | No      | 1-2        | WIN950, Windows950
EUC_CN         | Extended UNIX Code-CN             | Simplified Chinese             | Yes     | 1-3        |
EUC_JP         | Extended UNIX Code-JP             | Japanese                       | Yes     | 1-3        |
EUC_JIS_2004   | Extended UNIX Code-JP, JIS X 0213 | Japanese                       | Yes     | 1-3        |
EUC_KR         | Extended UNIX Code-KR             | Korean                         | Yes     | 1-3        |
EUC_TW         | Extended UNIX Code-TW             | Traditional Chinese, Taiwanese | Yes     | 1-3        |
GB18030        | National Standard                 | Chinese                        | No      | 1-2        |
GBK            | Extended National Standard        | Simplified Chinese             | No      | 1-2        | WIN936, Windows936
ISO_8859_5     | ISO 8859-5, ECMA 113              | Latin/Cyrillic                 | Yes     | 1          |
ISO_8859_6     | ISO 8859-6, ECMA 114              | Latin/Arabic                   | Yes     | 1          |
ISO_8859_7     | ISO 8859-7, ECMA 118              | Latin/Greek                    | Yes     | 1          |
ISO_8859_8     | ISO 8859-8, ECMA 121              | Latin/Hebrew                   | Yes     | 1          |
JOHAB          | JOHAB                             | Korean (Hangul)                | No      | 1-3        |
KOI8R          | KOI8-R                            | Cyrillic (Russian)             | Yes     | 1          | KOI8
KOI8U          | KOI8-U                            | Cyrillic (Ukrainian)           | Yes     | 1          |
LATIN1         | ISO 8859-1, ECMA 94               | Western European               | Yes     | 1          | ISO88591
LATIN2         | ISO 8859-2, ECMA 94               | Central European               | Yes     | 1          | ISO88592
LATIN3         | ISO 8859-3, ECMA 94               | South European                 | Yes     | 1          | ISO88593
LATIN4         | ISO 8859-4, ECMA 94               | North European                 | Yes     | 1          | ISO88594
LATIN5         | ISO 8859-9, ECMA 128              | Turkish                        | Yes     | 1          | ISO88599
LATIN6         | ISO 8859-10, ECMA 144             | Nordic                         | Yes     | 1          | ISO885910
LATIN7         | ISO 8859-13                       | Baltic                         | Yes     | 1          | ISO885913
LATIN8         | ISO 8859-14                       | Celtic                         | Yes     | 1          | ISO885914
LATIN9         | ISO 8859-15                       | LATIN1 with Euro and accents   | Yes     | 1          | ISO885915
LATIN10        | ISO 8859-16, ASRO SR 14111        | Romanian                       | Yes     | 1          | ISO885916
MULE_INTERNAL  | Mule internal code                | Multilingual Emacs             | Yes     | 1-4        |
SJIS           | Shift JIS                         | Japanese                       | No      | 1-2        | Mskanji, ShiftJIS, WIN932, Windows932
SHIFT_JIS_2004 | Shift JIS, JIS X 0213             | Japanese                       | No      | 1-2        |
SQL_ASCII      | unspecified (see text)            | any                            | Yes     | 1          |
UHC            | Unified Hangul Code               | Korean                         | No      | 1-2        | WIN949, Windows949
UTF8           | Unicode, 8-bit                    | all                            | Yes     | 1-4        | Unicode
WIN866         | Windows CP866                     | Cyrillic                       | Yes     | 1          | ALT
WIN874         | Windows CP874                     | Thai                           | Yes     | 1          |
WIN1250        | Windows CP1250                    | Central European               | Yes     | 1          |
WIN1251        | Windows CP1251                    | Cyrillic                       | Yes     | 1          | WIN
WIN1252        | Windows CP1252                    | Western European               | Yes     | 1          |
WIN1253        | Windows CP1253                    | Greek                          | Yes     | 1          |
WIN1254        | Windows CP1254                    | Turkish                        | Yes     | 1          |
WIN1255        | Windows CP1255                    | Hebrew                         | Yes     | 1          |
WIN1256        | Windows CP1256                    | Arabic                         | Yes     | 1          |
WIN1257        | Windows CP1257                    | Baltic                         | Yes     | 1          |
WIN1258        | Windows CP1258                    | Vietnamese                     | Yes     | 1          | ABC, TCVN, TCVN5712, VSCII

Not all client APIs support all the listed character sets. For example, the PostgreSQL JDBC driver does not support MULE_INTERNAL, LATIN6, LATIN8, and LATIN10.

The SQL_ASCII setting behaves considerably differently from the other settings. When the server character set is SQL_ASCII, the server interprets byte values 0-127 according to the ASCII standard, while byte values 128-255 are taken as uninterpreted characters. No encoding conversion will be done when the setting is SQL_ASCII. Thus, this setting is not so much a declaration that a specific encoding is in use, as a declaration of ignorance about the encoding. In most cases, if you are working with any non-ASCII data, it is unwise to use the SQL_ASCII setting because PostgreSQL will be unable to help you by converting or validating non-ASCII characters.

22.3.2. Setting the Character Set

initdb defines the default character set (encoding) for a PostgreSQL cluster. For example,

initdb -E EUC_JP

sets the default character set to EUC_JP (Extended Unix Code for Japanese). You can use --encoding instead of -E if you prefer longer option strings. If no -E or --encoding option is given, initdb attempts to determine the appropriate encoding to use based on the specified or default locale.

You can specify a non-default encoding at database creation time, provided that the encoding is compatible with the selected locale:

createdb -E EUC_KR -T template0 --lc-collate=ko_KR.euckr --lc-ctype=ko_KR.euckr korean

This will create a database named korean that uses the character set EUC_KR, and locale ko_KR. Another way to accomplish this is to use this SQL command:

CREATE DATABASE korean WITH ENCODING 'EUC_KR' LC_COLLATE='ko_KR.euckr' LC_CTYPE='ko_KR.euckr' TEMPLATE=template0;

Notice that the above commands specify copying the template0 database. When copying any other database, the encoding and locale settings cannot be changed from those of the source database, because that might result in corrupt data. For more information see Section 21.3.
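The compatibility restriction described at the beginning of this section is enforced when the database is created. As a rough illustration (the database name here is made up, and the exact locale name and error text depend on your platform), pairing a single-byte encoding with a UTF-8 locale is rejected with an error along these lines:

CREATE DATABASE mismatch WITH ENCODING 'LATIN1' LC_COLLATE='en_US.UTF-8' LC_CTYPE='en_US.UTF-8' TEMPLATE=template0;
ERROR:  encoding "LATIN1" does not match locale "en_US.UTF-8"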

The encoding for a database is stored in the system catalog pg_database. You can see it by using the psql -l option or the \l command.

$ psql -l
                                         List of databases
   Name    |  Owner   | Encoding  |  Collation  |    Ctype    |          Access Privileges          
-----------+----------+-----------+-------------+-------------+-------------------------------------
 clocaledb | hlinnaka | SQL_ASCII | C           | C           | 
 englishdb | hlinnaka | UTF8      | en_GB.UTF8  | en_GB.UTF8  | 
 japanese  | hlinnaka | UTF8      | ja_JP.UTF8  | ja_JP.UTF8  | 
 korean    | hlinnaka | EUC_KR    | ko_KR.euckr | ko_KR.euckr | 
 postgres  | hlinnaka | UTF8      | fi_FI.UTF8  | fi_FI.UTF8  | 
 template0 | hlinnaka | UTF8      | fi_FI.UTF8  | fi_FI.UTF8  | {=c/hlinnaka,hlinnaka=CTc/hlinnaka}
 template1 | hlinnaka | UTF8      | fi_FI.UTF8  | fi_FI.UTF8  | {=c/hlinnaka,hlinnaka=CTc/hlinnaka}
(7 rows)
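
Alternatively, because this information lives in the pg_database catalog, you can query it directly; the following sketch uses pg_encoding_to_char() to turn the stored encoding number into its name:

SELECT datname, pg_encoding_to_char(encoding) AS encoding, datcollate, datctype
FROM pg_database ORDER BY datname;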

Important: On most modern operating systems, PostgreSQL can determine which character set is implied by the LC_CTYPE setting, and it will enforce that only the matching database encoding is used. On older systems it is your responsibility to ensure that you use the encoding expected by the locale you have selected. A mistake in this area is likely to lead to strange behavior of locale-dependent operations such as sorting.

PostgreSQL will allow superusers to create databases with SQL_ASCII encoding even when LC_CTYPE is not C or POSIX. As noted above, SQL_ASCII does not enforce that the data stored in the database has any particular encoding, and so this choice poses risks of locale-dependent misbehavior. Using this combination of settings is deprecated and may someday be forbidden altogether.

22.3.3. Automatic Character Set Conversion Between Server and Client

PostgreSQL supports automatic character set conversion between server and client for certain character set combinations. The conversion information is stored in the pg_conversion system catalog. PostgreSQL comes with some predefined conversions, as shown in Table 22-2. You can create a new conversion using the SQL command CREATE CONVERSION.
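
For example, the predefined conversions can be listed straight from the catalog; this sketch again relies on pg_encoding_to_char() to turn the numeric encoding identifiers into names:

SELECT conname,
       pg_encoding_to_char(conforencoding) AS source_encoding,
       pg_encoding_to_char(contoencoding) AS destination_encoding,
       condefault
FROM pg_conversion ORDER BY conname;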

Table 22-2. Client/Server Character Set Conversions

Server Character Set | Available Client Character Sets
---------------------+------------------------------------------------------------
BIG5                 | not supported as a server encoding
EUC_CN               | EUC_CN, MULE_INTERNAL, UTF8
EUC_JP               | EUC_JP, MULE_INTERNAL, SJIS, UTF8
EUC_KR               | EUC_KR, MULE_INTERNAL, UTF8
EUC_TW               | EUC_TW, BIG5, MULE_INTERNAL, UTF8
GB18030              | not supported as a server encoding
GBK                  | not supported as a server encoding
ISO_8859_5           | ISO_8859_5, KOI8R, MULE_INTERNAL, UTF8, WIN866, WIN1251
ISO_8859_6           | ISO_8859_6, UTF8
ISO_8859_7           | ISO_8859_7, UTF8
ISO_8859_8           | ISO_8859_8, UTF8
JOHAB                | JOHAB, UTF8
KOI8R                | KOI8R, ISO_8859_5, MULE_INTERNAL, UTF8, WIN866, WIN1251
KOI8U                | KOI8U, UTF8
LATIN1               | LATIN1, MULE_INTERNAL, UTF8
LATIN2               | LATIN2, MULE_INTERNAL, UTF8, WIN1250
LATIN3               | LATIN3, MULE_INTERNAL, UTF8
LATIN4               | LATIN4, MULE_INTERNAL, UTF8
LATIN5               | LATIN5, UTF8
LATIN6               | LATIN6, UTF8
LATIN7               | LATIN7, UTF8
LATIN8               | LATIN8, UTF8
LATIN9               | LATIN9, UTF8
LATIN10              | LATIN10, UTF8
MULE_INTERNAL        | MULE_INTERNAL, BIG5, EUC_CN, EUC_JP, EUC_KR, EUC_TW, ISO_8859_5, KOI8R, LATIN1 to LATIN4, SJIS, WIN866, WIN1250, WIN1251
SJIS                 | not supported as a server encoding
SQL_ASCII            | any (no conversion will be performed)
UHC                  | not supported as a server encoding
UTF8                 | all supported encodings
WIN866               | WIN866, ISO_8859_5, KOI8R, MULE_INTERNAL, UTF8, WIN1251
WIN874               | WIN874, UTF8
WIN1250              | WIN1250, LATIN2, MULE_INTERNAL, UTF8
WIN1251              | WIN1251, ISO_8859_5, KOI8R, MULE_INTERNAL, UTF8, WIN866
WIN1252              | WIN1252, UTF8
WIN1253              | WIN1253, UTF8
WIN1254              | WIN1254, UTF8
WIN1255              | WIN1255, UTF8
WIN1256              | WIN1256, UTF8
WIN1257              | WIN1257, UTF8
WIN1258              | WIN1258, UTF8

To enable automatic character set conversion, you have to tell PostgreSQL the character set (encoding) you would like to use in the client. There are several ways to accomplish this:

  • Using the \encoding command in psql. \encoding allows you to change client encoding on the fly. For example, to change the encoding to SJIS, type:

    \encoding SJIS

  • libpq (Section 31.9) has functions to control the client encoding.

  • Using SET client_encoding TO. Setting the client encoding can be done with this SQL command:

    SET CLIENT_ENCODING TO 'value';

    Also you can use the standard SQL syntax SET NAMES for this purpose:

    SET NAMES 'value';

    To query the current client encoding:

    SHOW client_encoding;

    To return to the default encoding:

    RESET client_encoding;

  • Using PGCLIENTENCODING. If the environment variable PGCLIENTENCODING is defined in the client's environment, that client encoding is automatically selected when a connection to the server is made. (This can subsequently be overridden using any of the other methods mentioned above.)

  • Using the configuration variable client_encoding. If the client_encoding variable is set, that client encoding is automatically selected when a connection to the server is made. (This can subsequently be overridden using any of the other methods mentioned above.)
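
    For instance, the variable can be attached to a role or a database so that it is applied automatically at connection time. A brief sketch (the role name is made up; korean is the example database created earlier):

    ALTER ROLE report_user SET client_encoding TO 'LATIN1';
    ALTER DATABASE korean SET client_encoding TO 'EUC_KR';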

If the conversion of a particular character is not possible — suppose you chose EUC_JP for the server and LATIN1 for the client, and some Japanese characters are returned that do not have a representation in LATIN1 — an error is reported.
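
You can provoke the same kind of failure explicitly with the conversion functions. A brief sketch, assuming client and server encodings of UTF8 so the Euro sign can be typed literally (LATIN9 contains the Euro sign, LATIN1 does not):

SELECT convert_to('€', 'LATIN9');  -- succeeds, yielding the single LATIN9 byte 0xA4
SELECT convert_to('€', 'LATIN1');  -- fails: the character has no equivalent in LATIN1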

If the client character set is defined as SQL_ASCII, encoding conversion is disabled, regardless of the server's character set. Just as for the server, use of SQL_ASCII is unwise unless you are working with all-ASCII data.

22.3.4. Further Reading

These are good sources to start learning about various kinds of encoding systems.

http://www.i18ngurus.com/docs/984813247.html

An extensive collection of documents about character sets, encodings, and code pages.

CJKV Information Processing: Chinese, Japanese, Korean & Vietnamese Computing

Contains detailed explanations of EUC_JP, EUC_CN, EUC_KR, EUC_TW.

http://www.unicode.org/

The web site of the Unicode Consortium.

RFC 3629

UTF-8 (8-bit UCS/Unicode Transformation Format) is defined here.
