Rethinking the “=” (equal sign) in the Context of Programming

Posted by Yomaira Escano on September 25, 2017

We’ve always understood how the “=” (equal sign) is meant to be read in math sentences. We were taught that the equal sign implies actual equality between the two sides.

Merriam-Webster.com gives us the following definition of “equal sign”:

“a sign = indicating mathematical or logical equivalence — called also equality sign, equals sign”

while Dictionary.com defines “equal sign” as follows:

“the symbol (=), used in a mathematical expression to indicate that the terms it separates are equal.”

In my own experience learning Ruby, I’ve had to re-program my brain to think differently about the equal sign and make sure I apply it correctly. This isn’t easy, especially while I’m in the midst of helping my first grader and fifth grader with their math homework.

Anyway, a response on a codecademy.com forum gives a pretty simple explanation of how to look at the “=” and the “==” signs in programming.

You use “=” when you’re assigning a value to a variable:

a = 7

“==” is used to check some condition, for example:

if a == 7:
    print a

which literally means “if a equals 7, print it.”
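
The forum example is written in Python, but the same two uses carry over to Ruby. Here is a minimal sketch of both (the variable name a is just a placeholder):

# Assignment: store the value 7 in the variable a
a = 7

# Comparison: "==" asks whether a equals 7 and returns true or false
if a == 7
  puts a   # prints 7
end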

The concept itself is not complicated. However, writing out this reminder does help me keep in mind that symbols we are accustomed to seeing in the real world may have different meanings in the programming world. I’ve encountered the above while learning Ruby. I’m sure I will have to revisit this topic as I learn other programming languages.