Programming 101–3. Ones and zeros

Tom Deneire
5 min read · Sep 27, 2024
Photo by AltumCode on Unsplash

Disclaimer: the Programming 101 series was originally written to be published as a short book. However, since I never got round to finishing it, I’m publishing the chapters here as individual blog posts…

Ones and zeros

Now that we have a theoretical understanding of what a computer and a program are, we can take a more practical standpoint and look at how computers and programs are actually implemented.

Bits and bytes

A computer is an electronic device, which really only “understands” on and off: two states manifested by differences in voltage. Think of how a light goes on and off when you flip the switch. In a way, a computer is basically a giant collection of light switches.

This is why a computer’s brain, so to speak, its CPU or “central processing unit”, can only operate on 0 (off) and 1 (on), called bits. Bits can be combined to represent binary numbers, e.g. 100 in binary equals 4 in decimal. This binary stream is how we feed a computer both instructions and data. In its most basic, binary form, this is called “machine code”.
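To make the binary-to-decimal idea concrete, here is a minimal Python sketch (the example values are my own, not from the original text):

```python
# Binary literals and conversions in Python.
# The bit pattern 100 means 1*4 + 0*2 + 0*1 = 4.
n = 0b100             # binary literal
print(n)              # 4
print(bin(4))         # '0b100'
print(int("100", 2))  # parse the string "100" as base 2 -> 4
```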

It makes sense to group bits into units; otherwise, we would just end up with one long string of ones and zeros and no way to chop it up into meaningful units.
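As a sketch of what such grouping looks like, assuming the standard 8-bit byte as the unit and a made-up sample bit string (it happens to spell “Hi” in ASCII):

```python
# Chop a raw bit stream into 8-bit groups (bytes) and decode each one.
bits = "0100100001101001"  # sample stream: two bytes

# Slice the stream into consecutive 8-bit chunks.
chunks = [bits[i:i + 8] for i in range(0, len(bits), 8)]

for chunk in chunks:
    value = int(chunk, 2)            # interpret the chunk as a binary number
    print(chunk, value, chr(value))  # 01001000 72 H / 01101001 105 i
```

Without the convention of cutting every 8 bits, the same stream could be split in countless ways, each yielding different numbers; the fixed group size is what makes the stream interpretable.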



Written by Tom Deneire

Software engineer, technical writer, IT burnout coach @ https://tomdeneire.be/confident_coding
