Encoding is both the act and the result of coding. The verb, for its part, can mean modifying the expression of a message, or registering something according to the rules of a code. It can also refer to the creation of a body of laws organized as a system.

To understand what encoding is, therefore, you must first be clear about what a code is: a combination of signs (numbers, letters, etc.) that has a certain value within the framework of a system, or that makes it possible to reformulate and understand a secret message. Codes are also compilations of laws.

Character encoding, in this context, is the transformation of a character from an alphabet or another natural writing system (such as a syllabary) into a symbol belonging to a different system of representation. Through its encoding rules, for example, Morse code allows intermittent telegraph signals to be converted into letters and numbers.
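This idea can be sketched as a simple lookup table. The three-letter subset below is a hypothetical illustration, not a complete Morse table, and the `to_morse` helper is our own name:

```python
# Minimal sketch of an encoding table: a small (incomplete) subset of
# International Morse Code, mapping letters to dot/dash sequences.
MORSE = {
    "S": "...",
    "O": "---",
    "E": ".",
}

def to_morse(text):
    """Encode a string into Morse symbols, separating letters with spaces."""
    return " ".join(MORSE[ch] for ch in text.upper())

print(to_morse("SOS"))  # ... --- ...
```

Decoding is simply the inverse lookup, which is what a telegraph operator performs when converting the signals back into letters.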

The character encoding known as ASCII, commonly pronounced "ASS-kee," can encode a maximum of 128 symbols. This limit exists because only seven binary digits are combined to define characters; the eighth bit of each byte was traditionally reserved for detecting transmission errors.
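A minimal sketch of this scheme, combining a 7-bit ASCII code with an error-detection eighth bit (the even-parity convention and the function name are assumptions for illustration):

```python
def encode_with_parity(ch):
    """Return the 7-bit ASCII code of ch plus an even-parity 8th bit."""
    code = ord(ch)
    assert code < 128, "not a 7-bit ASCII character"
    parity = bin(code).count("1") % 2   # 1 if the code has an odd number of set bits
    return (parity << 7) | code         # parity occupies the high (eighth) bit

# "A" is code 65 = 1000001 (two set bits), so its parity bit is 0.
print(format(encode_with_parity("A"), "08b"))  # 01000001
```

A receiver that counts the set bits of each byte and finds an odd total knows that a transmission error occurred.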

These 128 possibilities are enough to include the entire English alphabet in upper and lower case, along with punctuation marks, numbers and certain control characters (such as the one that tells a printer to start working on the following page). That said, ASCII cannot meet the needs of Spanish, as it does not include accented characters or the inverted question and exclamation marks (¿, ¡), among other symbols needed in various contexts.
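This limitation is easy to observe in practice; the helper name `can_encode_ascii` below is hypothetical:

```python
def can_encode_ascii(text):
    """Return True if every character of text fits in 7-bit ASCII."""
    try:
        text.encode("ascii")
        return True
    except UnicodeEncodeError:
        return False

print(can_encode_ascii("year"))  # True
print(can_encode_ascii("año"))   # False: "ñ" has no ASCII code
```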

These limitations led to the definition of other character encoding systems, including extended ASCII, which uses 8 bits. Even so, 256 positions do not offer enough space to include all the alphabets in the world, so these encodings are divided into several variants (code pages), each used according to the needs of the user.
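The division into code pages means that the same byte value stands for different characters depending on which 8-bit encoding is assumed. A sketch using two real ISO 8859 variants (the choice of byte 0xF1 is just an example):

```python
# One byte, two meanings: 0xF1 is "ñ" in Latin-1 (Western European)
# but "ρ" (Greek rho) in ISO 8859-7 (Greek).
b = bytes([0xF1])
print(b.decode("latin-1"))    # ñ
print(b.decode("iso8859-7"))  # ρ
```

This ambiguity is exactly why a text file is only readable if the sender and receiver agree on which code page was used.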

To solve this capacity problem, in 1991 the use of the standard called Unicode was agreed upon at an international level. It is a table of considerable dimensions, which today assigns a unique code to well over one hundred thousand characters, covering a large number of writing systems, including the ideograms used in Chinese, Korean and Japanese, as well as the characters of all the languages of the European continent.
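Each character's unique code, called a code point, can be inspected directly; the sample characters below are arbitrary examples from different scripts:

```python
# Every character has one Unicode code point, regardless of script.
for ch in ["A", "ñ", "中", "한"]:
    print(ch, hex(ord(ch)))
# A  0x41   (Latin)
# ñ  0xf1   (Latin with diacritic)
# 中 0x4e2d (Chinese ideogram)
# 한 0xd55c (Korean Hangul)
```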

A transmission standard, for its part, defines the way in which encoded characters are transmitted through a communication channel, such as the Internet. Today, messages are sent in packets made up of a whole number of octets (8-bit bytes); error detection is no longer performed with the eighth bit of each byte, but with dedicated octets allocated for that task.
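UTF-8, the dominant transmission format for Unicode on the Internet, illustrates this: each character is serialized into a whole number of octets, with all eight bits of every byte carrying data:

```python
# UTF-8 uses one octet for ASCII characters and more for other scripts.
for ch in ["A", "ñ", "中"]:
    data = ch.encode("utf-8")
    print(ch, len(data), data.hex())
# A  1 41
# ñ  2 c3b1
# 中 3 e4b8ad
```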

According to DigoPaul, an encoding involves the conversion between data systems, making the resulting data equivalent to the original. In the case of digital coding, it consists of translating electrical voltage values into the binary system: the analog signal is thus written as zeros and ones.
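A minimal sketch of that translation, quantizing an analog voltage into binary. The 0-5 V range, 3-bit resolution and the `quantize` name are assumptions chosen for illustration:

```python
def quantize(voltage, v_max=5.0, bits=3):
    """Map a voltage in [0, v_max] to a binary string of the given width."""
    levels = 2 ** bits                                 # 8 levels for 3 bits
    level = min(int(voltage / v_max * levels), levels - 1)
    return format(level, f"0{bits}b")

print(quantize(0.0))  # 000
print(quantize(2.5))  # 100
print(quantize(5.0))  # 111
```

A real analog-to-digital converter does essentially this, only with many more bits and at a fixed sampling rate.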

In the field of law, codifications are legal compilations that are used to administer justice. A civil code and a penal code are the result of a codification process. These codes order and systematize norms and define crimes, eliminating legal loopholes and redundancies.