
Data Type Choices and Query Efficiency

This section provides some guidelines about choosing data types that can help queries run more quickly:

Don't use longer columns when shorter ones will do. If you are using fixed-length CHAR columns, don't make them unnecessarily long. If the longest value you store in a column is 40 characters long, don't define it as CHAR(255); define it as CHAR(40). If you can use MEDIUMINT rather than BIGINT, your table will be smaller (less disk I/O), and values can be processed more quickly in computations. If the column is indexed, using shorter values gives you even more of a performance boost. Not only does the index speed up queries, but shorter index values also can be processed more quickly than longer ones.
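
For example, you might define columns like this (a minimal sketch; the table and column names are hypothetical):

CREATE TABLE customer
(
    id   MEDIUMINT UNSIGNED NOT NULL,  # values fit comfortably within MEDIUMINT range
    name CHAR(40) NOT NULL             # longest stored value is 40 characters
);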

If you have a choice about row storage format, use one that is optimal for your storage engine. For MyISAM tables, use fixed-length columns rather than variable-length columns. For example, make all character columns CHAR rather than VARCHAR. The tradeoff is that your table will use more space, but if you can afford the extra space, fixed-length rows can be processed more quickly than variable-length rows. This is especially true for tables that are modified often and therefore more subject to fragmentation.

  • With variable-length rows, a table on which you perform many deletes or updates becomes fragmented because the records differ in size. You'll need to run OPTIMIZE TABLE periodically to maintain performance. This is not an issue with fixed-length rows.

  • Tables with fixed-length rows are easier to reconstruct after a table crash. The beginning of each record can be determined because all records are at positions that are multiples of the fixed record size, something that is not true with variable-length rows. This is not a performance issue with respect to query processing, but it can certainly speed up the table repair process.

Although converting a MyISAM table to use fixed-length columns can improve performance, you should first consider the following issues (an example conversion follows the list):

  • Fixed-length columns are faster but take more space. CHAR(n) columns always take n characters per value (even empty ones) because values are padded with trailing spaces when stored in the table. VARCHAR(n) columns take less space because only as many characters are allocated as necessary to store each value, plus one or two bytes per value to record the length. Thus, if you are choosing between CHAR and VARCHAR columns, the tradeoff is one of time versus space. If speed is your primary concern, use CHAR columns to get the performance benefits of fixed-length columns. If space is at a premium, use VARCHAR columns. As a rule of thumb, you can assume that fixed-length rows will improve performance even though more space is used. But for an especially critical application, you may wish to implement a table both ways and run some tests to determine which alternative actually is better for your particular application.

  • Sometimes you cannot use a fixed-length type, even if you want to. There is no fixed-length type for strings longer than 255 characters, for example.
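
If you decide to perform the conversion, ALTER TABLE can change the column types. Here is a minimal sketch, assuming a hypothetical MyISAM table t with two variable-length columns; note that every variable-length column must be converted, or the table still uses variable-length rows:

ALTER TABLE t
    MODIFY name CHAR(40) NOT NULL,
    MODIFY address CHAR(60) NOT NULL;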

MEMORY tables currently are stored using fixed-length rows, so it doesn't matter whether you use CHAR or VARCHAR columns. Both are treated implicitly as CHAR.

For InnoDB tables, the internal row storage format does not treat fixed-length and variable-length columns differently (all rows use a header containing pointers to the column values), so using fixed-length CHAR columns is not intrinsically simpler than using variable-length VARCHAR columns. Consequently, the primary performance factor is the amount of storage used for rows. Because CHAR on average takes more space than VARCHAR, it's preferable to use VARCHAR to minimize the amount of storage and disk I/O needed to process rows.

For BDB tables, it usually doesn't make much difference whether you use fixed-length or variable-length columns. You can try a table both ways and run some empirical tests to check whether there's a significant difference for your particular system.

Define columns to be NOT NULL. This gives you faster processing and requires less storage. It also simplifies some queries because you don't need to check for NULL as a special case.
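
A minimal sketch (hypothetical table and column names):

CREATE TABLE lookup
(
    code CHAR(2) NOT NULL,   # no NULL checks needed in queries against this column
    name CHAR(30) NOT NULL
);

For an existing column, ALTER TABLE ... MODIFY can add the NOT NULL declaration.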

Consider using ENUM columns. If you have a string column that has low cardinality (contains only a limited number of distinct values), consider converting it to an ENUM column. ENUM values can be processed quickly because they are represented as numeric values internally.
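
For example, assuming a hypothetical status column that ever contains only three values, the conversion might look like this:

ALTER TABLE orders MODIFY status ENUM('new','open','closed') NOT NULL;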

Use PROCEDURE ANALYSE(). Run PROCEDURE ANALYSE() to see what it tells you about the columns in your table:

SELECT * FROM tbl_name PROCEDURE ANALYSE();
SELECT * FROM tbl_name PROCEDURE ANALYSE(16,256);

One column of the output suggests an optimal data type for each column in your table. The second example tells PROCEDURE ANALYSE() not to suggest ENUM types that contain more than 16 values or that take more than 256 bytes (you can adjust these values as you like). Without such restrictions, the output may be very long, and long ENUM definitions are difficult to read.

Based on the output from PROCEDURE ANALYSE(), you may find that your table can be changed to take advantage of a more efficient type. Use ALTER TABLE if you decide to change a column's type.

Use OPTIMIZE TABLE for tables that are subject to fragmentation. Tables that are modified a great deal, particularly those that contain variable-length columns, are subject to fragmentation. Fragmentation is bad because it leads to unused space in the disk blocks used to store your table. Over time, you must read more blocks to get the valid rows, and performance is reduced. This is true for any table with variable-length rows, but the problem is particularly acute for BLOB or TEXT columns because their size can vary so much. Running OPTIMIZE TABLE on a regular basis helps keep the table's performance from degrading.
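
The statement simply names the table to defragment:

OPTIMIZE TABLE tbl_name;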

OPTIMIZE TABLE can be used with MyISAM and BDB tables, but defragments only MyISAM tables. A defragmentation method that works for any storage engine is to dump the table with mysqldump, and then drop and re-create it using the dump file:

% mysqldump --opt db_name tbl_name > dump.sql
% mysql db_name < dump.sql

Pack data into a BLOB or TEXT column. Using a BLOB or TEXT column to store data that you pack and unpack in your application may allow you to get everything with a single retrieval operation rather than with several. This can also be helpful for data values that are not easy to represent in a standard table structure or that change over time. In the discussion of the ALTER TABLE statement in Chapter 2, one of the examples dealt with a table being used to hold results from the fields in a Web-based questionnaire. That example discussed how you could use ALTER TABLE to add columns to the table whenever you add questions to the questionnaire.

Another way to approach this problem is to have the application program that processes the Web form pack the data into some kind of data structure, and then insert it into a single BLOB or TEXT column. For example, you could represent the questionnaire responses using XML and store the XML string in a TEXT column. This adds application overhead on the client side for encoding the data (and decoding it later when you retrieve records from the table), but simplifies the table structure, and eliminates the need to change the table structure when you change your questionnaire.
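
A minimal sketch of this approach, using hypothetical table and column names:

CREATE TABLE questionnaire_response
(
    id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    response TEXT NOT NULL   # XML string packed by the application
);

INSERT INTO questionnaire_response (response)
VALUES ('<response><q1>yes</q1><q2>no</q2></response>');

The application encodes and decodes the XML itself; MySQL treats the column as opaque text.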

On the other hand, BLOB and TEXT values can cause their own problems, especially if you do a lot of DELETE or UPDATE operations. Deleting such values leaves large holes in the table, which later are filled by records that probably differ in size. (The preceding discussion of OPTIMIZE TABLE suggests how you might deal with this.)

Use a synthetic index. Synthetic index columns can sometimes be helpful. One method is to create a hash value based on other columns and store it in a separate column. Then you can find rows by searching for hash values. However, note that this technique is good only for exact-match queries. (Hash values are useless for range searches with operators such as < or >=.) Hash values can be generated by using the MD5() function. Other options are to use SHA1() or CRC32(), or to compute your own hash values using logic within your application. Remember that a numeric hash value can be stored very efficiently. Also, if the hash algorithm might produce string values that have trailing spaces, do not store them in a CHAR or VARCHAR column; those types are subject to trailing space removal.

A synthetic hash index can be particularly useful with BLOB and TEXT columns. It can be much quicker to find BLOB values using a hash as an identifier value than by searching the BLOB column itself.
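
A minimal sketch with hypothetical table and column names; MD5() returns a 32-character hexadecimal string with no trailing spaces, so a CHAR column is safe here:

CREATE TABLE doc
(
    id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    contents BLOB NOT NULL,
    hash_val CHAR(32) NOT NULL,   # MD5() hash of contents
    INDEX (hash_val)
);

INSERT INTO doc (contents, hash_val)
VALUES ('...document contents...', MD5('...document contents...'));

SELECT id FROM doc WHERE hash_val = MD5('...document contents...');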

Avoid retrieving large BLOB or TEXT values unless you must. For example, a SELECT * query isn't a good idea unless you're sure the WHERE clause will restrict the results to just the rows you want. Otherwise, you may pull very large values over the network for no purpose. This is another case where BLOB or TEXT identifier information stored in a synthetic index column can be useful. You can search that column to determine the row or rows you want and then retrieve the BLOB or TEXT values from the qualifying rows.
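
Continuing the hypothetical doc table from the preceding sketch, you can scan the cheap columns first and fetch the large value only for the row you want:

SELECT id, hash_val FROM doc;            # no BLOB values pulled over the network
SELECT contents FROM doc WHERE id = 37;  # 37 stands in for the id found above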

Segregate BLOB or TEXT columns into a separate table. Under some circumstances, it may make sense to move these columns out of a table into a secondary table, if that allows you to convert the table to fixed-length row format for the remaining columns. This reduces fragmentation in the primary table and allows you to take advantage of the performance benefits of fixed-length rows. It also allows you to run SELECT * queries on the primary table without pulling large BLOB or TEXT values over the network.
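
A sketch of the two-table arrangement, again with hypothetical names; the tables share key values so that rows can be matched up:

CREATE TABLE item
(
    id   INT UNSIGNED NOT NULL PRIMARY KEY,
    name CHAR(40) NOT NULL                    # fixed-length columns only
);

CREATE TABLE item_text
(
    id   INT UNSIGNED NOT NULL PRIMARY KEY,   # matches item.id
    info TEXT NOT NULL
);

SELECT item.name, item_text.info
FROM item, item_text
WHERE item.id = item_text.id AND item.id = 37;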
