Integer
Created by Masashi Satoh | 12/09/2025
- Shaping the spine of the ICT curriculum in Waldorf education
- The History of Computers (Currently being produced)
- Details on Constructing an Adder Circuit Using Relays
- Internet
- Learning Data Models
- Learning Programming and Application Usage Experience (Currently being produced)
- Human Dignity and Freedom in an ICT-Driven Society (Currently being produced)
Introduction
This article supplements the study of “Learning Data Models.”
Numerical range
To introduce integer types, it’s best to start by discussing ranges with students. Focus on the maximum value that can be handled for each number of bits. The way the upper limit of numbers expressible with a specific number of bits increases exponentially as the bit count grows evokes a sense of wonder. The clarity of binary further enhances the fascination of this learning experience.
- 1bit : 1
- 2bit : 3
- 3bit : 7
- 4bit : 15
- 5bit : 31
- 6bit : 63
- 7bit : 127
- 8bit : 255
- 9bit : 511
- 10bit : 1023
- 11bit : 2047
- 12bit : 4095
- 13bit : 8191
- 14bit : 16383
- 15bit : 32767
- 16bit : 65535 (65 thousand)
- 17bit : 131071
- 18bit : 262143
- 19bit : 524287
- 20bit : 1048575
- 21bit : 2097151
- 22bit : 4194303
- 23bit : 8388607
- 24bit : 16777215 (16.8 million)
- 25bit : 33554431
- 26bit : 67108863
- 27bit : 134217727
- 28bit : 268435455
- 29bit : 536870911
- 30bit : 1073741823
- 31bit : 2147483647
- 32bit : 4294967295 (4.3 billion)
- 33bit : 8589934591
- 34bit : 17179869183
- 35bit : 34359738367
- 36bit : 68719476735
- 37bit : 137438953471
- 38bit : 274877906943
- 39bit : 549755813887
- 40bit : 1099511627775 (1.1 trillion)
- 41bit : 2199023255551
- 42bit : 4398046511103
- 43bit : 8796093022207
- 44bit : 17592186044415
- 45bit : 35184372088831
- 46bit : 70368744177663
- 47bit : 140737488355327
- 48bit : 281474976710655 (281 trillion)
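If an interpreter is available, the entire table can be reproduced in a couple of lines. Here is a minimal sketch in Python (any language with large integers would do equally well):

```python
# Largest unsigned value representable with n bits: 2**n - 1
for n in range(1, 49):
    print(f"{n:2d} bit : {2**n - 1}")
```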
Just seeing how the range of numbers they can handle doubles with every additional bit, growing exponentially like the proverbial “doubling game,” is an exciting experience for students. It’s also a good idea to try the game of folding a large piece of paper in half multiple times.
Additionally, I mention to students the importance of anticipating the expected range of information to be handled and reserving memory space accordingly. Since memory is finite, we should avoid unnecessarily occupying large amounts of space.
If you understand memory as a grid with no margins, you should naturally wonder what happens when a calculation result exceeds the prepared range.
Overflow
Considering overflow errors that occur when the result of addition or multiplication exceeds the range is a valuable learning experience. Understanding error handling clarifies the relationship between computers and humans.
The first step is to clearly distinguish between calculations performed on paper and those performed in memory (registers).
On paper, you can simply write the additional digits of the result in the margins. But when performing calculations on memory—a strictly partitioned medium—it doesn’t work the same way. It’s crucial to visualize this.
As shown in the figure below, assume each _ represents 1 bit of memory. If 3 bits each are allocated to the two operands and the result, performing the calculation directly loses 1 bit, because the true sum needs 4 bits.
110
+010
=___ ⇒ 1000
On paper, you can simply keep writing into the margins. But in computers, which handle numbers by storing them in fixed-length containers, designers must plan in advance how to handle such cases. Since computers cannot resolve unexpected situations on their own, human intervention becomes necessary.
There are various ways to handle this, but a typical approach is to generate an error and require human intervention. You could demonstrate how an error occurs to students using some kind of interpreter language.
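As one possible demonstration, the following Python sketch simulates the 3-bit register from the figure above; the function name and the decision to raise an error are my own illustrative choices, not a fixed convention:

```python
BITS = 3
LIMIT = 1 << BITS          # 0b1000, the first value that no longer fits in 3 bits

def add_3bit(a, b):
    """Add two 3-bit values; refuse to continue if the sum needs a fourth bit."""
    result = a + b
    if result >= LIMIT:
        raise OverflowError(f"{a:03b} + {b:03b} = {result:04b} does not fit in {BITS} bits")
    return result

add_3bit(0b110, 0b010)     # raises OverflowError, just like the figure above
```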
This is a fascinating experience. For it is precisely at the moment when a computer’s predictable operation halts due to an error that the identity of its creator is revealed (which, of course, is human spiritual activity).
The execution process of a computer itself is a closed system governed by determinism. Once the initial state is determined, it can only proceed toward a predetermined outcome. A computer cannot roll dice.
Consider, for instance, that many programming languages include functions that generate random numbers, capable of producing patterns that appear random at first glance. However, unless the seed of randomness is provided from outside the system, these functions can only generate the same pattern no matter how many times they are run.
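A short experiment in Python makes this visible (the seed value 42 is arbitrary): with the same seed, the “random” sequence comes out identical every time.

```python
import random

random.seed(42)                          # fix the seed: the outcome is now predetermined
first = [random.random() for _ in range(3)]

random.seed(42)                          # same seed, same closed system
second = [random.random() for _ in range(3)]

print(first == second)                   # True: the pattern repeats exactly
```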
The computer’s seemingly living-like behavior stems from external inputs of real-world information into this closed system, or interrupts occurring in sync with real-world time.
The reason AI behaves like humans is simply because the vast amount of information it accumulates and organizes possesses a system of “human-like qualities.” This is the decisive difference that distinguishes humans — who possess their own intentions and contain an infinitely open spiritual world — from computers.
Thinking this way, it becomes clear that errors are like cracks in the computer’s closed system, presenting an opportunity for the computer’s true nature to be revealed before humans.
Computers began to be used to control various devices, and major accidents started occurring due to errors hidden within their control programs. Among such accidents were those caused by overflow errors. One such incident was the 1996 failure of the European Space Agency’s Ariane 5 rocket. This was a catastrophic failure in which the rocket exploded just under 40 seconds after launch, along with the satellite it was carrying, despite the massive investment of funds.
The cause was reported to be an overflow error. The program controlling the engines based on accelerometer readings was designed assuming the values would remain within the range of 16-bit integers. However, values exceeding this range were transmitted, causing the overflow.
Even such a small error can cause a major accident. This is a topic you should definitely cover when teaching about overflow errors.
There is abundant material on this accident, including publicly available video footage, so you won’t lack sources for lesson preparation. Waldorf education, however, draws a clear line between itself and simplistic object-based teaching. Please refrain from showing students videos; instead, tell the story vividly as if you had witnessed it yourself.
*This case study was shared with me by Mr. Hideomi Asai, who implemented this lesson at Aichi Steiner School. I would like to express my gratitude to him here.
Representation of Negative Integers and Arithmetic Operations
Now, let’s move on to the next step.
I ask the students directly: “So far, we’ve been thinking about how to handle positive numbers. How should we handle negative integers in a computer?”
In our study of “Details on Building an Adder Circuit Using Relays,” we briefly touched on two’s complement, but we didn’t delve into its use for representing negative numbers. Therefore, students will likely come up with various ideas.
Keeping the sign as a separate piece of information from the magnitude, so that values like “-1” and “-250” are stored as a sign plus a positive number, is actually a good idea. In fact, it’s worth exploring this model with students. It’s certainly clear and works well.
However, this method is rarely used in actual computers. The reason lies in the early days of computing, when available memory was severely limited, making compact information handling an absolute priority. Minimizing the cost of computing hardware was also a critical consideration.
This led to the concept of representing negative numbers using two’s complement.
Negative Number Representation Using Two’s Complement
There are numerous explanations of two’s complement, which is defined as the complement of a number in base 2. It can be described as the smallest number that, when added to the original number, produces a carry out of the available digits. Some explanations also touch on its relationship with congruences (modular arithmetic). How deeply to delve into this should be considered in conjunction with the broader mathematics curriculum, so consulting with a math teacher is advisable.
Here, it should suffice to cover at least the following points:
- A representation of negative numbers characterized by the fact that, once the number of significant digits is fixed, the result of a subtraction can be obtained by adding this representation to the minuend and truncating the bits that overflow.
- For negative values, the most significant bit is 1, making it easy to distinguish positive from negative.
- Compared to handling only non-negative numbers, the largest value that can be represented with the same number of bits is roughly halved, since half of the bit patterns are now used for negative values.
- The two’s complement can be obtained through a simple operation: fix the number of significant digits, invert all bits of the number, add 1, and discard any overflowed bit (a short sketch of this procedure follows the list).
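As a sketch of that last point, the procedure can be written out in Python for an assumed 8-bit word (the function name is my own):

```python
def twos_complement(value, bits=8):
    """Invert every bit of the fixed-width word, add 1, and discard any overflow."""
    mask = (1 << bits) - 1
    inverted = value ^ mask             # flip all bits within the word
    return (inverted + 1) & mask        # add 1; the mask drops the overflowed bit

print(f"{twos_complement(0b00000001):08b}")   # 11111111 : -1 in 8-bit two's complement
print(f"{twos_complement(0b00000101):08b}")   # 11111011 : -5
```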
If you have the time, discussing the following points will be helpful when learning programming.
- Explaining the existence of Boolean values and their nature as integer values.
Most programming systems treat 1 as true and 0 as false, but some treat all bits set as true and all bits unset as false. Some students might find it interesting that the latter’s true value, when converted to a number, becomes -1 in two’s complement representation.
It’s also worthwhile to have students consider why each system’s designer chose that particular specification. How would you approach it? Some students might even suggest that consuming one entire word to store binary information is wasteful, arguing that managing each bit individually is more efficient.
Indeed, in low-level interfaces, it’s common to see examples where multiple Boolean values are stored within a single word as flag information. Have students consider what procedure is needed to determine the truth value of a specific bit within such flag information.
Suppose the flag information is the byte below, with each letter standing for one bit (X is the flag we want to examine):
ABCDEXYZ
To check whether a flag is set, you prepare a mask pattern corresponding to that specific bit, perform an AND operation, and determine whether the result is 0.
ABCDEXYZ
00000100 AND
00000X00
Writing to a flag requires an even more complicated procedure. If the value of the flag to be written is 0, you prepare a pattern where only the corresponding flag bit is 0 and all other bits are 1. You then perform an AND operation with the current flag set and write the result.
ABCDEXYZ
11111011 AND
ABCDE0YZ
If the value is 1, you prepare a pattern where only the corresponding flag bit is 1 and perform an OR operation instead.
ABCDEXYZ
00000100 OR
ABCDE1YZ
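The same three procedures can be sketched in Python; the mask 0b00000100 corresponds to the bit labeled X in the figures above, and the names below are my own:

```python
FLAG_X = 0b00000100                    # mask with only the X bit set
WORD_MASK = 0xFF                       # assume an 8-bit flag word

def x_is_set(flags):
    return (flags & FLAG_X) != 0       # read: AND with the mask, test against zero

def write_x(flags, value):
    if value:
        return flags | FLAG_X              # write 1: OR with the mask
    return flags & ~FLAG_X & WORD_MASK     # write 0: AND with the inverted mask

flags = 0b10100010
flags = write_x(flags, 1)
print(f"{flags:08b}", x_is_set(flags))     # 10100110 True
```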
Discussing how to weigh this cost (the memory saved by packing flags against the extra masking operations required) is also worthwhile.
I briefly explained the four points listed above in the “Computer System Overview” section of “Details on Constructing an Adder Circuit Using Relays.” I could explain them again in more detail here.
Using the property described in point 1 above, subtraction becomes possible through the mechanism of addition. Or rather, it is precisely because of this property that the two’s complement representation has become established as the standard for representing negative integers. Rather than getting bogged down in the finer points of two’s complement, it should suffice if this nuance comes across.
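To make this tangible, here is a minimal Python sketch for an assumed 8-bit word, in which subtraction is carried out purely as an addition whose overflowing bit is thrown away:

```python
BITS = 8
MASK = (1 << BITS) - 1

def subtract(a, b):
    """Compute a - b using only addition: add the two's complement of b, drop the carry."""
    neg_b = ((b ^ MASK) + 1) & MASK    # two's complement of b
    return (a + neg_b) & MASK          # the bit that overflows the word is discarded

print(subtract(9, 3))                  # 6
print(f"{subtract(3, 9):08b}")         # 11111010, i.e. -6 in two's complement
```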
Regarding division, we should also mention that division by zero needs to be handled, typically by generating an error.
We will also tackle the modulo operation. Modulo operations are frequently used to extract periodic information from numbers, such as determining even/odd values or calculating days of the week.
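For instance, a day of the week can be read straight off a plain day count with a single modulo operation (assuming here a count of days since January 1, 1970, which fell on a Thursday):

```python
names = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
days_since_1970 = 20000                    # any day count; 1970-01-01 itself is day 0
print(days_since_1970 % 2 == 0)            # even/odd with the same operation
print(names[(days_since_1970 + 4) % 7])    # "Fri"; +4 because day 0 was a Thursday
```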
Additionally, when you want to draw smooth diagonal lines on the screen’s coordinate plane, modulo arithmetic allows you to perform the coordinate corrections for the line’s slope using only integer operations. It might be interesting to think about why that is, then actually write a program and try drawing the line.
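One way to see this is the following illustrative sketch (my own simplified version, not a specific textbook algorithm), which walks a line from (0, 0) to (dx, dy) with dy ≤ dx and uses an integer remainder to decide when the y coordinate must step up:

```python
def line_points(dx, dy):
    """Integer-only points on a line from (0, 0) to (dx, dy), assuming 0 <= dy <= dx."""
    points, y, err = [], 0, 0
    for x in range(dx + 1):
        points.append((x, y))
        err += dy                  # accumulate the fractional slope as an integer
        if err >= dx:              # once a whole unit has accumulated...
            err -= dx              # ...keep only the remainder (as err % dx would)
            y += 1                 # ...and correct the y coordinate by one step
    return points

print(line_points(8, 3))           # [(0, 0), (1, 0), (2, 0), (3, 1), ..., (8, 3)]
```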
The most well-known implementation that internally represents Boolean values as -1 is a language derived from Microsoft’s BASIC. Since Excel and Word also have VBA built-in, you can easily verify this.
Open the VBE development environment, open the Immediate Window, and type the following code:
a = True
Debug.Print a
True
Debug.Print CInt(a)
-1
Debug.Print Hex(a)
FFFF
a = False
Debug.Print CInt(a)
0
Arithmetic operations, comparison and conditional processing
Once you understand the mechanism of subtraction, all four arithmetic operations become possible. At this point, it’s worth touching on shift operations. Shifting all information one bit to the left (toward the higher-order digits) and filling the least significant digit with zeros yields the same result as multiplying by 2 in the binary world. The opposite effect is achieved by shifting in the opposite direction.
These are called left shift and right shift, and they are used to perform multiplication and division. It also plays a major role in the normalization procedure for floating-point types. Students who have previously played with a sequencer connected to an adder should be able to visualize the shift operation realistically.
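A quick check in Python:

```python
x = 0b0101                    # 5
print(bin(x << 1), x << 1)    # 0b1010 10 : left shift = multiply by 2
print(bin(x >> 1), x >> 1)    # 0b10 2    : right shift = integer division by 2
```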
Depending on the time available, review the steps for multiplication and division.
In division operations, a step occurs that checks whether the result of a subtraction has become negative. We carefully explain that this comparison can itself be performed by carrying out a trial subtraction. Whether the result is zero can be determined by taking the OR of all bits of the result. To determine positive or negative, simply check whether the most significant bit of the result is 1 or 0: if it is 1, the result is negative.
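The whole comparison can be mimicked in a few lines of Python (8-bit word assumed, the function name is my own), using nothing but a subtraction, a zero test, and the most significant bit:

```python
BITS = 8
MASK = (1 << BITS) - 1

def compare(a, b):
    """Compare two small non-negative values the way the hardware does: by subtracting."""
    diff = (a - b) & MASK              # subtraction within the fixed-width word
    if diff == 0:                      # "zero": the OR of all result bits would be 0
        return "a == b"
    if (diff >> (BITS - 1)) & 1:       # "negative": the most significant bit is 1
        return "a < b"
    return "a > b"

print(compare(3, 9), compare(9, 3), compare(5, 5))   # a < b  a > b  a == b
```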
Also mention that a mechanism exists to switch the next processing step based on the result, which is the basis for conditional processing.
By carefully tracing these low-level operations and learning the mechanisms of comparison and conditional processing, you can visualize the hidden workings of a computer while learning programming. This is crucial to avoid treating the computer as a black box. It especially serves as a safeguard against understanding conditional processing solely through the concept of “judgment.”
The only events occurring during the comparison process are whether the result of the subtraction is zero, or whether it is positive or negative. The mechanism that switches subsequent procedures based on this result is no different from the mechanism that switches a train’s track points. The meaning of subsequent procedures resulting from that conditional processing belongs to the intent of the person who created the program; the running train itself has no involvement whatsoever in that matter.
The judgment belongs solely to the programmer who is testing the program through thought experiments.
I know I’m repeating myself, but this position is the solid ground that keeps us sane in the face of the tsunami of advanced ICT technologies.
There are many explanations of two’s complement representation, including its mathematical background, but I found the following resource extremely helpful (apologies, it’s in Japanese).
Integers used as IDs
Integers are frequently used as identifiers.
However, it is crucial to understand that their essence in this role lies in being used as unique, non-duplicated patterns, not as numerical values.
Computers cannot directly manipulate concepts. Humans associate specific concepts with IDs. Programs, built on the relationship between concepts and IDs, process the ID pattern. Humans then associate the resulting pattern back with concepts for utilization.
In that sense, integer types themselves are essentially IDs. By assigning the concept of integers to patterns aligned with binary numbers, those patterns were subsequently endowed with ordinal characteristics. This is an intriguing discovery.
In other words, humans entrust all concepts to symbols called IDs, input those IDs into a computer, prepare procedures to process those IDs according to the interrelationships of concepts in the human world, and have the computer execute them.
However, the computer has no means of knowing what concepts those IDs are linked to. The computer simply processes those IDs according to the provided procedures. It has no involvement in what the results signify. It merely returns the results to humans without question.
No matter how advanced the AI, it is no exception to this.
The mathematical properties of integers prove most useful when IDs are treated as ordinal numbers, making it possible to build a mechanism that issues new IDs while guaranteeing uniqueness.
Ask your students: “How can we generate random patterns and guarantee they won’t duplicate?” They’ll quickly realize this is quite challenging. But with ordinal numbers, simply keeping track of the largest value issued ensures duplicates are avoided. Here, the numerical nature of integers used as IDs proves useful.
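A sketch of that mechanism needs nothing more than remembering the last value handed out (the class and method names below are my own):

```python
class IdGenerator:
    """Issue IDs as ordinal numbers; uniqueness follows from remembering the last one."""
    def __init__(self):
        self._last = 0

    def next_id(self):
        self._last += 1
        return self._last

gen = IdGenerator()
print(gen.next_id(), gen.next_id(), gen.next_id())   # 1 2 3, never a duplicate
```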
The ordinal nature of IDs also holds significance in applications that standardize by linking various concepts to fixed IDs. A classic example is character encoding. Character encoding will be covered in detail in the next section.
Additionally, pointer types can be viewed as a kind of ID. While they have the unique characteristics of being variable values that represent specific addresses, they function as IDs in the sense that the pattern points to a specific target.
IDs use unique symbols to manipulate real-world concepts, while pointer types use symbols representing address information to manipulate blocks of information in memory. These roles are very similar.
Following the floating-point types in the next section, the subsequent section will cover character types and string types. Since string manipulation using pointers will be covered in the string type section, it is necessary to briefly introduce pointer types here as a preview.
DateTime type linked to daily life and the movements of the sun and stars
Finally, it would be wonderful to touch upon how the simple integer type is applied as the DateTime type, which connects to our lives and the movements of the stars.
This type itself is merely an integer value, yet it plays a crucial role in considering the connection between the closed system of the computer and the open world, much like overflow errors.
The logic operating inside a computer functions within a time independent of the time we humans experience, tied to the movements of the sun and stars. The process that advances one step at a time according to the clock’s instructions operates within its own inherent time, which could be called “process time.” It is fundamentally impossible for this logic to relate to human time. This is because the computer’s clock signal is a purely self-serving impulse, existing solely to govern the process, with no connection whatsoever to the real world.
To put it more plainly, computers fundamentally know nothing about the relationships between the states of a process before and after its transitions. They merely execute the instruction placed before them. Computers do not understand change, do not understand causality, and thus have no involvement with time.
We must realize that everything we see on a computer monitor is merely the tracks left in memory by a computer, blindfolded like a horse, racing down the tracks of code—patterns in memory. To the computer itself, these hold no meaning. It is the human spirit that finds meaning by linking these patterns to specific concepts.
The first human element introduced into such a world is the real-time clock. By adding a mechanism to a closed computer system that triggers interrupts synchronized with real-world time, computer programs can provide processes responsive to real-world time. This mechanism enables programmers to process “now” within the scale of the time axis.
So, how does the DateTime type represent the concept of time? It begins with each system’s designer arbitrarily deciding the starting point for the real historical time handled by that system.
The most widely adopted UNIX time uses January 1, 1970, at 00:00:00 UTC as its origin. The DateTime type in UNIX systems handles an integer value that counts each second elapsed since this origin time.
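In Python, for example, the step from this bare count of seconds to calendar form can be observed directly; the value below is the “billionth second” mentioned in the article cited further down:

```python
import time

seconds = 1_000_000_000        # one billion seconds after 1970-01-01 00:00:00 UTC
t = time.gmtime(seconds)
print(t.tm_year, t.tm_mon, t.tm_mday, t.tm_hour, t.tm_min, t.tm_sec)
# 2001 9 9 1 46 40  (September 9, 2001, 01:46:40 UTC)
```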
It seems that the person who decided this starting point for time was not a priest overseeing divine oracles in a temple, nor a king, but rather the scientist and engineer Dennis Ritchie, who was developing the UNIX system. In an interview, he stated the following:
“Back then, we didn’t even have tape. Multiple file systems were running, and we were constantly changing the base time,” said Ritchie. “Finally, we decided, ‘Let’s set a starting point that won’t overflow for a while.’ January 1, 1970, seemed like the best choice.”
WIRED: The Y2K problem is “no problem” UNIX, counting from January 1, 1970, 00:00:00, to the billionth second
For such rather worldly reasons, computer time was created. When this is told in the context of “the creation of time,” Waldorf school students will grasp many things.
The process of extracting the numerical values for years, months, days, days of the week, hours, minutes, and seconds from this simple counting sequence of integers, following the structure of our culture, evokes the image of Michelangelo carving the figure of Christ from a block of marble. Within the culture of the calendar lives the spiritual world of ancient people, attuned to the movements of the sun and moon. Here, we encounter once more the worldview of the Greeks who created the Antikythera mechanism, which we first met at the dawn of computer history.
Now that you understand the fundamental concept of the DateTime type, if you have time, it’s worth tackling some tricky date calculations. For example, working out how old a person will be on a given day several years after their registration date, when all you know is their age at registration, is a good exercise in experiencing the unexpected difficulty of treating time as a numerical value.
- Shaping the spine of the ICT curriculum in Waldorf education
- The History of Computers (Currently being produced)
- Details on Constructing an Adder Circuit Using Relays
  - Seesaw Logic Elements
  - Clock and Memory
  - The Origin of the Relay and the Telegraph Apparatus
  - About the sequencer
  - About the Battery Checker (Currently being produced)
- Internet
- Learning Data Models
  - Integer type
  - Floating-point type
  - Character and String Types
  - Pointer type
  - Arrays
- Learning Programming and Application Usage Experience (Currently being produced)
- Human Dignity and Freedom in an ICT-Driven Society (Currently being produced)


