Numbers in a Computer

One of the most fundamental things a computer can do is represent numerical data and compute with that data. This session will give a review of the most common set of tools that computers and programming languages use to describe integers and fractions.

The course covers the material in Steven Frank's excellent and low-priced book How to Count, as well as a few extra topics.


Prerequisites: none. I only assume a knowledge of multiplication and division, fractions, and the number line.

It should be accessible to non-programmers, but it also covers material I hadn't seen or understood until after I had been teaching for several years.


After this course, you'll be able to answer questions like:

  • What is integer division, and why are there so many different ways to do it?
  • What are bits and bytes?
  • What is a decimal, binary, or hexadecimal number?
  • How does the computer represent a number like 17 as a pattern of electrical signals?
  • How can I perform addition, multiplication, and division in different bases?
  • Why does writing something as simple as "x + 1" mean something different in Java, JavaScript, and Python?
  • What are floating point numbers, and what are the benefits and drawbacks of using them to represent fractional or decimal numbers?
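As a small taste of these questions, here is a sketch in Python (one of the three languages mentioned above; the point of the course is that Java and JavaScript would behave differently in some of these cases):

```python
# Integer division: Python's // operator rounds toward negative
# infinity, while C and Java truncate toward zero. This is one of
# the "many different ways to do it".
assert 7 // 2 == 3
assert -7 // 2 == -4   # C or Java would give -3 here

# Binary and hexadecimal are just different notations for the
# same number. The decimal number 17 is 10001 in binary and 11
# in hexadecimal.
assert 0b10001 == 17
assert 0x11 == 17
assert bin(17) == "0b10001"

# Floating point numbers cannot represent most decimal fractions
# exactly, which leads to famously surprising results.
assert 0.1 + 0.2 != 0.3
```

Running this file produces no output: every assertion holds, including the last one, because 0.1 + 0.2 actually evaluates to a number slightly larger than 0.3.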
