UTF-8, UTF-16, and UTF-32 can all encode any Unicode character as a byte sequence, ultimately 1's and 0's. UTF-8 supports all languages and is by far the most prevalent. UTF-16 can represent some characters with a single 16-bit code unit where UTF-8 needs three bytes: the euro sign, for example, is the byte sequence E2 82 AC in UTF-8 but just 0x20AC in UTF-16. That makes UTF-16 look more efficient, but the advantage only holds for some characters (plain ASCII doubles in size under UTF-16), and it's not used as much.
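A quick sketch in Python makes the size comparison concrete (using the big-endian variants so no byte-order mark is prepended):

```python
# Compare how the euro sign (code point U+20AC) is encoded
# in each of the three Unicode encodings.
euro = "\u20ac"  # "€"

utf8 = euro.encode("utf-8")       # 3 bytes: e2 82 ac
utf16 = euro.encode("utf-16-be")  # 2 bytes: 20 ac
utf32 = euro.encode("utf-32-be")  # 4 bytes: 00 00 20 ac

print("UTF-8 :", utf8.hex(" "), f"({len(utf8)} bytes)")
print("UTF-16:", utf16.hex(" "), f"({len(utf16)} bytes)")
print("UTF-32:", utf32.hex(" "), f"({len(utf32)} bytes)")

# The flip side: a plain ASCII letter costs 1 byte in UTF-8
# but 2 in UTF-16 and 4 in UTF-32.
print("'a' in UTF-8 :", len("a".encode("utf-8")), "byte")
print("'a' in UTF-16:", len("a".encode("utf-16-be")), "bytes")
```

So UTF-16 only wins for characters in certain ranges; for mostly-ASCII text, UTF-8 is smaller.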
Long story short, UTF-8 should be our standard until someone complains that it needs to be upgraded.