Since the early 2000s, most consumer hard drive capacities have been grouped into size classes measured in gigabytes; the exact capacity of a given drive is usually some number above or below the class designation. Although most manufacturers of hard disk drives and flash-memory storage devices define 1 gigabyte as 1,000,000,000 bytes, software such as Microsoft Windows reports size in gigabytes by dividing the total capacity in bytes by 1,073,741,824, while still labeling the result with the symbol "GB". This practice is a cause of confusion, as a hard disk with a manufacturer-rated capacity of 400 gigabytes might be reported by the operating system as only "372 GB", for instance. Other software, such as Mac OS X 10.6 and some components of the Linux kernel, measures capacity using decimal units.
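For illustration, a short Python sketch of this conversion (assuming the drive holds exactly 400,000,000,000 bytes and that the reported figure is truncated, as file managers typically do) might look like:

```python
# Sketch: how a manufacturer-rated 400 GB drive (assumed here to hold exactly
# 400,000,000,000 bytes) appears when software divides by 2**30 but labels the result "GB".
capacity_bytes = 400 * 10**9           # advertised capacity, using decimal gigabytes
reported_gb = capacity_bytes // 2**30  # binary-based figure, truncated to a whole number
print(f"Advertised: 400 GB, reported: {reported_gb} GB")  # reported: 372 GB
```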
In the content delivery network (CDN) industry, where billing is often done by the gigabyte, the definition chosen can produce a difference of up to 7% between vendors. Some companies, including 3Crowd and BitGravity, have referenced an agreed-upon definition of 1,024 megabytes multiplied by 1,000 as a "Barrettbyte" (GbB), named after BitGravity co-founder Barrett Lyon, for billing purposes.
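The roughly 7% figure follows directly from the ratio of the decimal and binary gigabyte definitions; a minimal sketch (the 500 TB traffic volume is purely illustrative) might look like:

```python
# Sketch: the same delivered traffic expressed in decimal versus binary
# "gigabytes", illustrating the up-to-7% billing gap mentioned above.
traffic_bytes = 500 * 10**12             # hypothetical 500 TB of delivered traffic

decimal_gb = traffic_bytes / 10**9       # SI gigabyte: 1,000,000,000 bytes
binary_gb  = traffic_bytes / 2**30       # binary gigabyte: 1,073,741,824 bytes

print(f"decimal GB: {decimal_gb:,.0f}")  # 500,000
print(f"binary GB:  {binary_gb:,.0f}")   # ~465,661
print(f"gap: {(decimal_gb - binary_gb) / decimal_gb:.1%}")  # ~6.9%
```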
The JEDEC memory standards use the IEEE 100 nomenclature, which defines a gigabyte as 1,073,741,824 bytes (2³⁰ bytes).
The difference between units based on SI and binary prefixes increases as a semi-logarithmic (linear-log) function; for example, the SI kilobyte is nearly 98% of the kibibyte, a megabyte is under 96% of a mebibyte, and a gigabyte is just over 93% of a gibibyte. This means that a 300 GB (279 GiB) hard disk might be indicated by the operating system as only "279 GB". As storage sizes increase and larger units are used, the difference becomes even more pronounced. Some legal challenges have been brought over this confusion, such as a lawsuit against Western Digital. Western Digital settled the challenge and added explicit disclaimers to its products stating that the usable capacity may differ from the advertised capacity.
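These percentages follow directly from the prefix definitions; a short sketch computing them, along with the 300 GB example above, could be:

```python
# Sketch: ratio of each SI-prefixed unit to its binary counterpart, showing
# how the gap widens with each successive prefix.
for name, power in [("kilobyte/kibibyte", 1), ("megabyte/mebibyte", 2),
                    ("gigabyte/gibibyte", 3), ("terabyte/tebibyte", 4)]:
    print(f"{name}: {1000**power / 1024**power:.2%}")  # 97.66%, 95.37%, 93.13%, 90.95%

# The 300 GB drive example from the text:
print(f"300 GB = {300 * 10**9 / 2**30:.0f} GiB")  # ~279 GiB
```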
Because of its physical design, computer memory is addressed in multiples of base 2, so memory size at the hardware level can always be factored by a power of two. It is therefore convenient to use binary units for non-disk memory devices at the hardware level, for example in DIMM memory boards. Software applications, however, allocate memory, usually virtual memory, in varying degrees of granularity as needed to fulfill data structure requirements, and binary multiples are usually not required. Other computer measurements, such as storage hardware size, data transfer rates, clock speeds, and operations per second, do not depend on an inherent base and are usually presented in decimal units.