Chapter 01 · Unit 1: Foundations

Binary


What does it mean for information to be digital? We start at the very beginning — with fingers, toes, and the surprisingly deep idea of counting itself — and work our way up to the number system that makes modern computing possible.

What is digital computation? First, what is digital? And further — what is a digit?

According to the Oxford English Dictionary, a digit is:

A finger (including the thumb) or a toe.

Okay, there's actually a more immediately relevant definition:

Any of the numerals from 0 to 9, especially when forming part of a number.

But I promised first principles, so let's start with the fingers and toes. Stick with me here.

Note that your fingers and toes are generally viewed as atomic units, especially when used to count. This means they can't be meaningfully divided into smaller parts for the purpose of counting. Of course they can be physically divided — at the joint, into skin and bone and blood, into cells, into molecules, into atoms. But atoms themselves aren't truly atomic either, as J. Robert Oppenheimer would grimly confirm. Some things, it turns out, can always be divided further.

This gets at something fundamental. Some things are infinitely divisible into smaller and smaller parts. We call these things continuous, or smooth. The real numbers possess this trait. Consider:

What's the "next" number after 1.3?

You might say 1.4. But you skipped 1.35. That skipped 1.345. Which skipped 1.3449, and so on, ad infinitum.

Continuous

A set of values is continuous (or smooth) if there is no smallest step between values — you can always subdivide further. The real numbers are continuous. Temperature on an analog thermometer is continuous.

Discrete

A set of values is discrete (or digital) if it consists of distinct, countable units with no values between them. Integers are discrete. Fingers are discrete. You can have 3 fingers or 4 fingers, but not 3.7 fingers.

So: fingers and toes, when used to count the integers, are digits — and that maps directly onto the second Oxford definition. A digit is a discrete counting unit. Digital, then, refers to the representation of information using discrete digits.

That's nothing new — you've been doing it since kindergarten.

Decimal vs. Binary

In principle, a digital computer could use all ten digits (0–9) to represent numbers, just as you and I do. Such a computer would operate in the decimal system — from the Latin decem, meaning "ten." Decimal is also called base-10.

Modern digital computers, however, don't use ten digits. They use only two: 0 and 1. This is the binary system — from the Latin bis, meaning "twice." Binary is also called base-2.

Why binary? We'll answer this in depth in Chapter 2, when we explore how bits exist physically inside a computer. The short version: an electronic switch is either on or off. Two states. That's binary. It's not a design preference — it's a physical reality.

In other words, binary and decimal are both instances of the broader concept of digital representation. Binary just happens to use the smallest possible number of digits: two.

All numbers, pictures, sounds, words, and other kinds of data stored or processed by a computer are ultimately represented using just 0 and 1. For now, let's focus on numbers.

Counting in Decimal

Take out a piece of paper and write the decimal numbers from 0 to 10 in a vertical column. It may seem silly, but the pattern we're about to identify is the whole game.

Notice what happens when you reach 10. You need a new "place." In this new place, you still only get the same digits (0–9), but now they're worth more — ten times more, to be exact. We call this worth-more-ness place value.

10s  1s
10¹  10⁰

  ·   0
  ·   1
  ·   2
  ·   3
  ·   4
  ·   5
  ·   6
  ·   7
  ·   8
  ·   9
  1   0

The Counting Algorithm

Let's make explicit what you've been doing intuitively since childhood:

  1. Start with 0 in the rightmost place.
  2. Increment the rightmost digit until you run out of digits.
  3. When you run out, increment the next place to the left and reset the rightmost place to 0.
  4. Repeat as needed.

That algorithm works for any base system. The only thing that changes is how many digits you get before you "run out" and need a new place.
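
Here's the same algorithm as a minimal Python sketch — nothing in this chapter depends on it, and the function name increment and the list-of-digits representation are just my choices for illustration:

```python
def increment(digits, base):
    """Add 1 to a number stored as a list of digits in the given base."""
    i = len(digits) - 1               # start at the rightmost place
    while i >= 0:
        digits[i] += 1
        if digits[i] < base:          # still a valid digit: done
            return digits
        digits[i] = 0                 # ran out of digits: reset this place...
        i -= 1                        # ...and carry into the next place left
    return [1] + digits               # every place overflowed: open a new place

# Count from 0 to 10 in decimal, then in binary
for base in (10, 2):
    digits = [0]
    sequence = ["".join(map(str, digits))]
    for _ in range(10):
        digits = increment(digits, base)
        sequence.append("".join(map(str, digits)))
    print(f"base {base}:", " ".join(sequence))
```

Run it and you'll see the decimal column you just wrote by hand, followed by the binary column we're about to build.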

Counting in Binary

Now apply the same algorithm with only two digits — 0 and 1 — and place values that are powers of 2 instead of powers of 10.

Below are the binary representations of zero through ten (i.e., "none" through "all your fingers"):

8s  4s  2s  1s
2³  2²  2¹  2⁰

 ·   ·   ·   0
 ·   ·   ·   1
 ·   ·   1   0
 ·   ·   1   1
 ·   1   0   0
 ·   1   0   1
 ·   1   1   0
 ·   1   1   1
 1   0   0   0
 1   0   0   1
 1   0   1   0

The last row — 1010 — is the number we call "ten" (all your fingers). In decimal that's 10. Same number, different representation. The concept of ten doesn't change; only the notation does.
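
If you have Python handy, you can check the table with a one-liner. Here format(n, "4b") renders n in binary, space-padded to four places to mirror the layout above (bin(n) would work too; it just adds a "0b" prefix):

```python
# Print 0 through 10 alongside their binary representations.
for n in range(11):
    print(n, "=", format(n, "4b"))
```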

Translating Binary to Decimal

To translate a binary number to decimal, use place values — exactly as you do in decimal, just with powers of 2.

Let's translate 1100:

 1   1   0   0
8s  4s  2s  1s
2³  2²  2¹  2⁰
  • 1 × 8 = 8
  • 1 × 4 = 4
  • 0 × 2 = 0
  • 0 × 1 = 0

8 + 4 + 0 + 0 = 12

Therefore: 1100 (base-2) = 12 (base-10). There are 1100 eggs in a dozen.

Leading zeros: Just like in decimal, leading zeros have no effect. 0101 = 101 = 5.
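
The procedure is easy to write down in code. Here's a Python sketch — the function name binary_to_decimal is mine, and the built-in int(s, 2) already does the same job in one call:

```python
def binary_to_decimal(bits):
    """Sum each bit times its place value (the rightmost bit is worth 2**0)."""
    total = 0
    for i, bit in enumerate(reversed(bits)):
        total += int(bit) * 2**i
    return total

print(binary_to_decimal("1100"))  # 12
print(binary_to_decimal("0101"))  # 5 -- leading zeros have no effect
print(int("1100", 2))             # 12, via the built-in
```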

Translating Decimal to Binary

Going the other direction uses a simple greedy algorithm: find the largest power of 2 that fits, place a 1 there, subtract, and repeat with the remainder.

The Algorithm

  1. List binary place values (powers of 2) up to your number.
  2. Find the largest place value ≤ your number. Write a 1 there.
  3. Subtract that place value from your number.
  4. Repeat with the remainder.
  5. Fill all unused places with 0.

Let's convert 11 to binary:

 1   0   1   1
8s  4s  2s  1s
2³  2²  2¹  2⁰
  • 11 ≥ 8 → place a 1. Remainder: 11 − 8 = 3.
  • 3 < 4 → place a 0.
  • 3 ≥ 2 → place a 1. Remainder: 3 − 2 = 1.
  • 1 ≥ 1 → place a 1. Remainder: 1 − 1 = 0.

Result: 11 (base-10) = 1011 (base-2).
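
And here's the greedy algorithm as a Python sketch, mirroring the five steps above. The name decimal_to_binary is just my label; the built-in bin() is the one-line shortcut:

```python
def decimal_to_binary(n):
    """Convert a non-negative integer to a binary string, greedily."""
    if n == 0:
        return "0"
    # Step 1: find the largest place value (power of 2) that fits in n
    place = 1
    while place * 2 <= n:
        place *= 2
    # Steps 2-5: walk the places from largest to smallest
    bits = ""
    while place >= 1:
        if n >= place:       # the place value fits: write a 1...
            bits += "1"
            n -= place       # ...and subtract it from the remainder
        else:
            bits += "0"      # otherwise fill the place with 0
        place //= 2
    return bits

print(decimal_to_binary(11))   # 1011
print(decimal_to_binary(255))  # 11111111
print(bin(11))                 # 0b1011, via the built-in
```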

Bits, Bytes, and Bigger Numbers

A single binary digit — a 0 or a 1 — is called a bit (short for binary digit). The examples above used 4 bits. With 4 bits, the largest number you can represent is 1111 = 15.
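
You can see all sixteen 4-bit patterns at once by counting from 0 to 15 and formatting each value in binary:

```python
# With n bits there are 2**n distinct patterns; for n = 4 that's 16,
# running from 0000 (zero) up to 1111 (fifteen).
for n in range(16):
    print(format(n, "04b"), "=", n)
```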

For larger numbers, you simply need more bits. With 8 bits:

  1    1    1    1    1    1    1    1
128s  64s  32s  16s   8s   4s   2s   1s
 2⁷   2⁶   2⁵   2⁴   2³   2²   2¹   2⁰

11111111 = 128+64+32+16+8+4+2+1 = 255. Eight bits all set to 1 gives you 255 — the largest value representable in 8 bits.
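
You can confirm the arithmetic in Python three ways — the sum of the eight place values, the built-in base-2 conversion, and the general formula all agree:

```python
print(128 + 64 + 32 + 16 + 8 + 4 + 2 + 1)  # 255 -- the eight place values
print(int("11111111", 2))                  # 255 -- string-to-int, base 2
print(2**8 - 1)                            # 255 -- n bits max out at 2**n - 1
```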

Eight bits grouped together have a special name: a byte. You'll encounter bytes constantly in IT work — file sizes, memory capacity, network speeds — all measured in bytes or multiples thereof. We'll go deeper on bytes in Chapter 3.

What About Everything Else?

We've said computers represent all information using only 0s and 1s. Numbers, sure — but what about text? Images? Sound?

The short version: computers represent everything as numbers, and then represent those numbers in binary.

For example, in a system called ASCII (American Standard Code for Information Interchange), every English character is assigned a number. The uppercase letter A is ASCII value 65. There's no deep reason for this — humans simply decided that A = 65 when ASCII was designed in the 1960s.

65 in binary? 1000001.

So when you press A on your keyboard, your computer sees: 1000001.
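
Python exposes this mapping directly: ord() gives a character's numeric code, and chr() goes the other way. (Strictly speaking these work on Unicode code points, but for plain English letters they match the ASCII values described above.)

```python
print(ord("A"))               # 65
print(format(ord("A"), "b"))  # 1000001 -- the bit pattern the computer sees
print(chr(65))                # A
```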

But wait — if 65 = 1000001 and "A" = 1000001, how does the computer know which it's dealing with? That's where data types come in — context that tells the computer how to interpret a pattern of bits. We'll cover this in Chapter 3.

Looking ahead: The byte value 255 will reappear constantly. IP addresses, color values, permission masks — all of them involve 8-bit numbers bounded by 0 and 255. Once you know why, you'll never have to memorize it.

Chapter 1 Quiz
1. Convert 1000 (binary) to decimal.
2. Convert 1101 (binary) to decimal.
3. Convert 0111 (binary) to decimal.
4. Convert 9 (decimal) to binary.
5. Convert 14 (decimal) to binary.
6. How many unique values can be represented with 4 bits?
7. The word binary comes from the Latin bis. What does bis mean?
8. Which of the following best describes the difference between a continuous and a discrete system?