Episode 13: Notational Systems: Binary, Hexadecimal, Decimal, and Octal
Notational systems define the way numbers and data are represented in computing, and they are fundamental to how all digital devices store, process, and display information. On the CompTIA Tech Plus exam F C zero dash U seven one, you’ll be expected to recognize binary, decimal, hexadecimal, and octal formats, and to understand when and why each is used. These systems are the basis for programming, data processing, and hardware design. In this episode, we’ll focus on identifying each format, converting between them, and understanding their roles in real-world applications.
In information technology, a number system is a structured method of representing numeric values. Every number system is defined by its base, which determines how many unique symbols are used to represent values. Computing systems rely on base systems to store data in memory, transmit information between devices, and perform calculations. Binary, or base two, is the foundational system for all digital devices, but other bases play key roles in making data more readable and manageable for humans.
Binary notation is the simplest yet most fundamental of all the number systems used in computing. It uses only two digits—zero and one—to represent all possible values. Each binary digit, or bit, represents a power of two based on its position in the sequence, with groups of eight bits forming a byte that can represent characters, symbols, or numbers. Understanding binary is essential for decoding low-level operations, reading configuration settings, and interpreting raw system data.
Common use cases for binary appear across every domain of technology. Machine code, the set of instructions executed directly by a processor, is entirely binary. Network configuration tasks, such as subnetting, rely on binary math to determine address ranges. Storage capacities, hardware registers, and device-level settings are often represented in binary form. Binary also naturally models on–off states or true–false logic, which is why it maps so well to how digital electronics operate.
Decimal, or base ten, is the number system most familiar to humans and the one we use in everyday life. It contains ten digits—zero through nine—and each position in a decimal number increases in value by powers of ten. In I T, decimal is often used as the bridge between machine data and human-readable output, providing a format that non-technical users can easily understand. Many technical values are ultimately displayed in decimal form even if they are stored internally in binary.
In the context of the Tech Plus exam, decimal appears in many specifications and measurements. Throughput rates, such as megabits per second, are often expressed in decimal units. Processor speeds, like gigahertz, and memory sizes are frequently listed in base ten. Decimal is also used for numeric IDs, form input fields, and basic file size descriptions. Recognizing decimal values ensures you can interpret performance specifications and user-facing data correctly.
Hexadecimal, or base sixteen, is a compact, human-readable way of representing binary data. It uses digits zero through nine and letters A through F to represent values. Each hex digit corresponds directly to a four-bit binary segment, called a nibble. Because of this mapping, hexadecimal can condense long binary strings into a much shorter form, making it ideal for use in programming, memory addressing, and visual representation of data.
In practice, hexadecimal appears frequently in networking, web design, and system diagnostics. MAC addresses, which uniquely identify network interfaces, are often displayed in hex. HTML and CSS use hex color codes, such as number sign F F zero zero zero zero for red. Memory dumps and low-level diagnostic tools also output system data in hexadecimal. Being able to read and interpret hex strings is an important skill for any technician.
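If you want to see that color-code mapping in code, here is a minimal Python sketch that splits a six-digit hex color into its red, green, and blue values; the hex_color_to_rgb name is just for illustration.

```python
# Split a six-digit hex color code into red, green, and blue components.
# "#FF0000" is the red example mentioned above.
def hex_color_to_rgb(color: str) -> tuple:
    color = color.lstrip("#")    # drop the leading number sign
    r = int(color[0:2], 16)      # first two hex digits -> red
    g = int(color[2:4], 16)      # middle two -> green
    b = int(color[4:6], 16)      # last two -> blue
    return r, g, b

print(hex_color_to_rgb("#FF0000"))   # (255, 0, 0)
```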
Octal, or base eight, uses digits zero through seven and was more common in earlier computing eras. Each octal digit maps to a group of three binary bits, making it easier to read certain values without working directly in binary. While less common today, octal is still used in some contexts, especially in legacy systems and Unix-like operating systems for file permissions. It can appear in exam questions tied to historical or specialized environments.
One of the most visible uses of octal is in file permissions on Linux or Unix systems. A permission setting like seven five five or six four four is expressed in octal, with each digit representing the access rights for the user, the group, and others, respectively. Understanding these octal values helps you correctly interpret and modify permission sets. Even though octal is less prominent today, its role in access control and configuration means it remains relevant for the exam.
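To make the seven five five example concrete, here is a short Python sketch; Python's own octal literals use a zero o prefix, and the stat.filemode call shown assumes a regular file.

```python
import stat

# 0o755 is Python's octal literal for "seven five five":
# user rwx (7), group r-x (5), others r-x (5).
mode = 0o755
print(stat.filemode(stat.S_IFREG | mode))   # -rwxr-xr-x for a regular file

# Each octal digit holds three permission bits: read = 4, write = 2, execute = 1.
print((mode >> 6) & 0o7, (mode >> 3) & 0o7, mode & 0o7)   # 7 5 5
```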
Comparing notational systems shows why each exists. Binary is the underlying language of all digital devices but can be cumbersome for humans to read. Hexadecimal and octal are ways to compress binary for easier readability without losing precision. Decimal is the format most familiar to users and is often used for display and reporting. Recognizing which system is in use prevents costly interpretation errors when working with technical data.
Conversions between bases are a practical skill you will need for both the exam and real-world troubleshooting. Converting binary to decimal involves multiplying each bit by two to the power of its position and adding the results. Decimal to binary uses repeated division by two and recording remainders. Binary to hex or octal involves grouping bits into nibbles or triplets and converting directly. Practicing these conversions improves your speed and accuracy in technical problem-solving.
For more cyber-related content and books, please check out cyber author dot me. Also, there are other PrepCasts on cybersecurity and more at Bare Metal Cyber dot com.
A common exam task is converting binary numbers into decimal form. To do this, you multiply each binary digit by two raised to the power of its position, starting from zero on the right. For example, the binary number one zero one zero equals one times eight plus zero times four plus one times two plus zero times one, giving a total of ten in decimal. This process not only helps you interpret system data but also builds your ability to work through networking problems, memory addresses, and low-level configuration settings.
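Here is that positional method as a minimal Python sketch; binary_to_decimal is an illustrative helper name, and the built-in int call at the end is only a cross-check.

```python
# Binary to decimal: multiply each bit by two raised to its position,
# counting positions from zero on the right, then sum the results.
def binary_to_decimal(bits: str) -> int:
    total = 0
    for position, bit in enumerate(reversed(bits)):
        total += int(bit) * 2 ** position
    return total

print(binary_to_decimal("1010"))   # 10, matching the worked example
print(int("1010", 2))              # Python's built-in parser agrees
```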
The reverse process, converting decimal numbers into binary, involves repeatedly dividing the decimal value by two and recording the remainders. For example, to convert thirteen into binary, divide thirteen by two to get six remainder one, then six by two to get three remainder zero, then three by two to get one remainder one, and finally one by two to get zero remainder one. Reading the remainders from bottom to top gives one one zero one. This skill is frequently used in networking, where binary notation underlies subnet masks and IP address calculations.
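The repeated-division method looks like this in a Python sketch; decimal_to_binary is again just an illustrative name, with the built-in bin shown as a cross-check.

```python
# Decimal to binary: divide by two, record each remainder, and read
# the remainders from last to first.
def decimal_to_binary(value: int) -> str:
    if value == 0:
        return "0"
    remainders = []
    while value > 0:
        remainders.append(str(value % 2))
        value //= 2
    return "".join(reversed(remainders))

print(decimal_to_binary(13))   # 1101, matching the worked example
print(bin(13))                 # 0b1101 via the built-in
```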
Binary and hexadecimal are closely linked because each hexadecimal digit maps exactly to four binary bits. To convert binary to hexadecimal, group the binary digits in sets of four starting from the right, then convert each group into its hex equivalent. For instance, one one zero one becomes the letter D in hex, while one zero one zero becomes A. This compression makes long binary strings more readable, especially when examining MAC addresses, memory dumps, or system log files.
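A Python sketch of the nibble-grouping approach might look like this; the padding step simply extends the leftmost group to a full four bits.

```python
# Binary to hexadecimal: pad to a multiple of four bits, split into
# nibbles, and convert each nibble independently.
def binary_to_hex(bits: str) -> str:
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    nibbles = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(format(int(nibble, 2), "X") for nibble in nibbles)

print(binary_to_hex("1101"))       # D
print(binary_to_hex("1010"))       # A
print(binary_to_hex("11011010"))   # DA, two nibbles giving two hex digits
```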
The opposite conversion, hexadecimal to binary, is equally straightforward. Each hex digit is replaced by its four-bit binary equivalent, ensuring accuracy and consistency when expanding or decoding values. For example, hex A becomes one zero one zero, hex F becomes one one one one, and hex one becomes zero zero zero one. This is a key skill in command-line environments, programming, and when analyzing diagnostic outputs from hardware or software tools.
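The expansion in the other direction fits in a single line of this Python sketch, since each hex digit simply becomes a zero-padded four-bit group.

```python
# Hexadecimal to binary: replace each hex digit with its four-bit
# equivalent, zero-padded so every digit contributes exactly four bits.
def hex_to_binary(hex_string: str) -> str:
    return "".join(format(int(digit, 16), "04b") for digit in hex_string)

print(hex_to_binary("A"))   # 1010
print(hex_to_binary("F"))   # 1111
print(hex_to_binary("1"))   # 0001
```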
Decimal to hexadecimal conversion uses repeated division by sixteen, recording the remainders as hex digits, while converting hex to decimal involves multiplying each digit by sixteen raised to the power of its position and summing the results. These conversions often appear in troubleshooting scenarios, such as interpreting port numbers, memory addresses, or encoded configuration data. While calculator tools can speed up the process, understanding the method ensures you can work through the problem even without a device.
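Both directions can be sketched in a few lines of Python; the value four four three, the well-known HTTPS port, is used here only as a sample input, and the helper names are illustrative.

```python
# Decimal to hex by repeated division by sixteen; hex to decimal by
# positional weights of sixteen. Both mirror the method described above.
HEX_DIGITS = "0123456789ABCDEF"

def decimal_to_hex(value: int) -> str:
    if value == 0:
        return "0"
    digits = []
    while value > 0:
        digits.append(HEX_DIGITS[value % 16])
        value //= 16
    return "".join(reversed(digits))

def hex_to_decimal(hex_string: str) -> int:
    return sum(HEX_DIGITS.index(digit) * 16 ** position
               for position, digit in enumerate(reversed(hex_string.upper())))

print(decimal_to_hex(443))     # 1BB
print(hex_to_decimal("1BB"))   # 443
```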
Octal and binary conversions are also useful to know, even if octal is less common in modern environments. Each octal digit maps to a group of three binary bits. For example, the octal digit seven becomes one one one, while five becomes one zero one. So the octal number seven five becomes the binary string one one one one zero one. Reversing the process compresses binary back into octal, which is helpful when working with file permissions in Unix or Linux environments.
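Here is the same three-bit grouping as a Python sketch; the helper names are illustrative, and the padding step fills out the leftmost group.

```python
# Octal to binary: each octal digit expands to exactly three bits.
def octal_to_binary(octal_string: str) -> str:
    return "".join(format(int(digit, 8), "03b") for digit in octal_string)

# Binary to octal: pad to a multiple of three bits, then convert each group.
def binary_to_octal(bits: str) -> str:
    bits = bits.zfill((len(bits) + 2) // 3 * 3)
    groups = [bits[i:i + 3] for i in range(0, len(bits), 3)]
    return "".join(str(int(group, 2)) for group in groups)

print(octal_to_binary("75"))       # 111101, matching the worked example
print(binary_to_octal("111101"))   # 75
```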
In exam questions, you may be asked to recognize the notation in use before doing any conversions. Many systems use prefixes or suffixes to indicate the format, such as zero x for hexadecimal, zero b for binary, or a trailing h to mark hex values. Misreading these indicators can lead to incorrect answers even when your math is correct, so part of mastering notational systems is learning to spot these visual cues immediately.
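Python happens to use the first two of those prefixes itself, plus zero o for octal, so a quick interpreter session is an easy way to practice spotting them; note that the trailing h convention comes from assembly-style notation and is not valid Python.

```python
# Four literals, one value: the prefix tells the interpreter (and you)
# which notation is in use.
print(0b11111111)   # binary literal      -> 255
print(0o377)        # octal literal       -> 255
print(255)          # decimal literal     -> 255
print(0xFF)         # hexadecimal literal -> 255
```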
Base systems have direct real-world applications in almost every area of I T. Binary math drives network subnetting, IP addressing, and routing logic. Hexadecimal is standard in programming for working with memory locations, color codes, and encryption keys. Systems engineers use both binary and hex when examining system logs or debug output. Even basic administrative tasks, like setting file permissions or encoding text, may require comfort with these number systems.
There are many tools that can assist with conversions, from the programmer mode in Windows Calculator to Linux’s bc command-line calculator, as well as numerous mobile apps. While these tools are helpful for speed and verification, they should not replace the ability to perform simple conversions manually. Being able to work through a problem without tools ensures you can make informed decisions and troubleshoot effectively in any environment.
Some common mistakes occur when working with notational systems, and being aware of them will help you avoid errors. Confusing binary and hexadecimal because of similar-looking patterns, miscounting digit positions during conversions, or forgetting to switch calculator modes between bases can all lead to wrong answers. Visual fatigue can also make it easy to transpose zeros and ones, so careful attention is important when working with large values.
One of the best ways to reinforce your skills is through practice. Using flashcards with examples of binary, decimal, hexadecimal, and octal values helps build quick recall. You can group exercises by conversion type, such as binary to decimal or hex to binary, and complete them in short, focused sessions. Audio reviews, like the guided examples in this PrepCast, also support auditory learning and make it easier to retain the steps for each conversion method.
The key takeaway from this domain is that binary, decimal, hexadecimal, and octal each serve a specific role in computing. Conversions between them are common both on the Tech Plus exam and in practical IT work. Understanding how to move between these systems will make you more accurate in decoding data, troubleshooting configurations, and analyzing technical outputs. Mastery of notational systems strengthens your overall technical fluency and builds a foundation for more advanced topics.
In the next episode, we will cover how digital systems are measured—focusing on units for storage, throughput, and processing speed. You will learn to interpret bits, bytes, megabits per second, gigahertz, and other measurements as they appear in performance specifications, network configurations, and exam scenarios. This understanding will also support your ability to compare and evaluate different technologies effectively.
