2.1.10 Outline the way in which data is represented in the computer


 * To include strings, integers, characters and colours. This should include considering the space taken by data, for instance the relation between the hexadecimal representation of colours and the number of colours available. TOK, INT: Does binary represent an example of a lingua franca? S/E, INT: Comparing the number of characters needed in the Latin alphabet with those in Arabic and Asian languages to understand the need for Unicode.

__Integer__: Each integer is represented in binary. A small number usually fits in a single byte; larger integers take two, four or eight bytes.

__Characters__: Each character is usually one byte, represented in binary. (Table: a list of binary values and the characters they represent.) Unicode is a standardisation that assigns a unique value to each character. This is needed because there are hundreds of different characters across the world's languages, and if each community assigned values independently there would likely be overlaps.

__Strings__: A concatenation of characters, represented in binary as a sequence of 8-bit characters one after another. A short word will therefore take around 16-32 bits.

__Colours__: Represented in the hexadecimal number system as six hexadecimal digits, two for each primary colour, in the order red, green, blue (RGB). For example, pure red is FF0000. When used on screen it usually has a hash before the value, e.g. #FF0000. A maximum of around 16.8 million different colours can be represented (16 to the power of 6, which equals 2 to the power of 24). (Table: over 500 names of colours and their hexadecimal values.)
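The representations above can be explored directly. This is a minimal sketch in Python (the language choice is an assumption, not part of the syllabus) showing the binary form of an integer, a character, a string, and the hexadecimal form of a colour:

```python
# Integer: the value 65 in one byte (8 bits) of binary
n = 65
print(format(n, '08b'))                     # 01000001

# Character: 'A' also has code 65, so its bit pattern is identical
print(format(ord('A'), '08b'))              # 01000001

# String: a concatenation of 8-bit characters, one after another
word = "Hi"
print(' '.join(format(ord(c), '08b') for c in word))

# Colour: two hex digits per primary colour, in RGB order
red = 0xFF0000
print(f"#{red:06X}")                        # #FF0000

# Number of representable colours: 16^6 equals 2^24
print(16 ** 6, 2 ** 24)                     # 16777216 16777216
```

Note that the integer 65 and the character 'A' share the same byte pattern; only the interpretation differs, which is the central idea of data representation.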

__The classical Latin alphabet contained 23 letters.__ It is written from left to right.

__The Arabic alphabet has 28 basic letters.__ It is written from right to left.

__Unicode__ //"A standard for representing characters as integers. Unlike ASCII, which uses 7 bits for each character, Unicode uses 16 bits, which means that it can represent more than 65,000 unique characters. This is a bit of overkill for English and Western-European languages, but it is necessary for some other languages, such as Greek, Chinese and Japanese. Many analysts believe that as the software industry becomes increasingly global, Unicode will eventually supplant ASCII as the standard character coding format."//

http://www.webopedia.com/TERM/U/Unicode.html
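The need for Unicode can be demonstrated by comparing code points across scripts. A short sketch (assuming Python 3, whose `ord` returns a character's Unicode code point) showing that Latin letters fit in 7-bit ASCII while Arabic and Chinese characters do not:

```python
# Code points for a Latin, an Arabic and a Chinese character
for ch in ["A", "\u0639", "\u4e2d"]:        # 'A', Arabic 'ain', Chinese 'zhong'
    print(ch, ord(ch), hex(ord(ch)))

# ASCII covers only code points 0-127
print(ord("A") < 128)                        # True  - representable in ASCII
print(ord("\u0639") < 128)                   # False - requires Unicode
print(ord("\u4e2d") < 128)                   # False - requires Unicode

# Both non-Latin characters still fit in 16 bits
print(ord("\u4e2d") < 2 ** 16)               # True
```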