Posts tagged Programming

On the relationship between mathematical functions and program functions

:: Programming, Teaching

By: John Clements

NOTE: this text is a brief overview intended for instructors of a CS course that we’re developing; it is not technical. It tries to make the case that our early courses should steer students toward understanding problems in terms of pure functions. If you have suggestions or feedback, I’d love to hear them/it.


This section of the course introduces functions, a crucial topic in the field of computer science AND in the field of math.

Programmers and mathematicians often think about the term “function” differently; even people who are familiar with both fields tend to assign the word different meanings in the two settings.

The definition of a function in the mathematical domain is fairly well specified, though of course things get a little fuzzy around the edges. We’re going to define functions as the arrows in the category Set, more or less (if that’s not helpful, ignore it). That is, a function has a specified domain and a specified codomain, and it maps every element of the domain to a particular element in the codomain. There’s no requirement that it map every element of the domain to a different element of the codomain (one-to-one) OR that there be some element of the domain that maps to any chosen element of the codomain (onto). This (I claim) is the standard notion of “function” in math.[*]
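
In symbols, that definition reads (this is just a restatement of the prose above, nothing new):

\[ f : A \to B, \qquad \text{where every } a \in A \text{ is assigned exactly one element } f(a) \in B. \]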

Programmers also use and love functions. Nearly every programming language has a notion of functions. Of course, they’re sometimes called “procedures” or even “paragraphs” (I believe that’s COBOL. Yikes.). In programming, functions are often thought of as being elements of abstraction that are designed to allow repetition. And so they are. But it turns out that they also, in most modern programming languages, can be thought of as mathematical functions. Well, some of them can.

For many functions, this is totally obvious. If I consider the function from numbers to numbers that we might write in math class as f(x) = 14x + 2, then I can write that as a function in most programming languages. (If you disagree with me, hold that thought.)
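
For instance, in C (to pick one language) it might look like this; the name f and the choice of double are illustrative, not essential:

/* the mathematical function f(x) = 14x + 2, written as code:
   the result depends only on the argument x */
double f(double x) {
  return 14 * x + 2;
}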

But… things aren’t always so clear. What about a function that doesn’t return at all? What about a function that takes input, or produces output? What about a function that mutates an external variable, or reads from a mutable one? What about a function that signals an error? All of these present problems, some more substantial than others, and none of them has a totally obvious mapping to mathematical functions.

There certainly are ways to fit these functions into mathematical models, but the clearest lesson is this: when there is a natural way to express a problem using functions that map directly to mathematical functions, we should take it. Such functions are generally called “pure” or “purely functional.”

So, why should it matter whether our functions are pure? What benefits do we gain when we express functions in purely functional ways?

The clearest one is predictability, also known as debuggability and testability. When I write a pure function that maps the input “savage speeders” to 17, then I know that it will always map that string to 17; I don’t need to worry that it will work differently when the global foo counter is less than zero, or when it’s run in parallel, or on Wednesday, or when the value in memory location 0x3342a7 is less than the value in memory location 0x3342a8.
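
As a tiny illustration, here is an invented function (not from any library) built so that “savage speeders” really does map to 17; the point is that the assertion below means the same thing every time it runs:

#include <assert.h>
#include <string.h>

/* pure: the result depends only on the string passed in */
int score(const char *s) {
  return (int)strlen(s) + 2;
}

int main(void) {
  /* no global counters, no clock, no memory locations to set up first */
  assert(score("savage speeders") == 17);
  return 0;
}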

Put differently, pure functions allow me to reliably decompose problems into sub-pieces. When I’m debugging and testing, I don’t need to worry about setup and teardown to establish external conditions.

Another way to understand this is to dip into functions that use mutation. If we want to model these as mathematical functions, we need to understand that in addition to their stated inputs, they take additional hidden inputs. In the simplest case, this may be the value of a global counter. Things get much more complex when we allow mutation of data structures; now we need to worry about whether two values are the “same” value; that is, whether mutating one of them will change the other. Worse still, mutating certain values may affect the evaluation of other concurrent procedures.
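
Here is a minimal sketch of that simplest case (the names are invented for this example): the first version’s result depends on state that appears nowhere in its signature, while the second makes the hidden input explicit.

int counter = 0;   /* global, mutable state */

/* impure: reads and mutates the hidden global, so two calls with
   the same argument can return different results */
int nextLabelImpure(int x) {
  counter = counter + 1;
  return x + counter;
}

/* pure: the same computation with the hidden input passed in;
   the caller threads the updated count through explicitly */
int nextLabelPure(int x, int count) {
  return x + (count + 1);
}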

For these reasons and others like them, pure functions are vastly easier to reason about, debug, and maintain. Over time, many of our programming domains and paradigms are migrating toward primarily-pure settings. Examples include the spread of the popular map-reduce frameworks, and the wild explosion of popularity in deep learning networks. In both cases, the purity spreads downward from a mathematical framework.

Note that pure approaches are not always the most natural “first choice” for programmers, especially introductory programmers, for whom a program is often imagined as a “sequence of changes”: do this, then do this, then do this, then you’re done. In this model, the program performs a series of mutations on a larger world. Helping introductory programmers move to a purer model is a challenge, but one with a substantial payoff.

For this reason, this section focuses directly on pure functions, and invites students to conceive of programs using the models that they’ve been taught in elementary and secondary school, most particularly tables mapping inputs to outputs.

[*] The only reason I mention the category Set is to draw attention to the distinction between “codomain” and “range”; every function has a named codomain, regardless of whether its range covers it. For instance, the “times-two” function from the reals to the reals is a different function from the “times-two” function from the integers to the integers, and the “times-two” function from the integers to the reals is yet a third function.
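
In symbols, the three functions in that example are

\[ f : \mathbb{R} \to \mathbb{R},\ f(x) = 2x \qquad g : \mathbb{Z} \to \mathbb{Z},\ g(x) = 2x \qquad h : \mathbb{Z} \to \mathbb{R},\ h(x) = 2x \]

and the rule is the same in all three; what differs is the domain and codomain. In particular, the range of h is the even integers, a proper subset of its codomain, the reals.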

Knowing what’s out there

:: Programming, Teaching

By: John Clements

I’m teaching a class to first-year college students. I just had a quick catch-up session with some of the students that had no prior programming experience, and one of them asked a fantastic question: “How do you know what library functions are available?”

In a classroom setting, teachers can work to prevent this kind of question by ensuring that students have seen all of the functions that they will need, or at least that they’ve seen enough library functions to complete the assignment.

But what about when they’re trying to be creative, and do something that might or might not be possible?

Let’s take a concrete example: a student in a music programming class wants to reverse a sound. How can this be done?
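
If no ready-made library function turns up, one fallback is to do it by hand: a sound is ultimately a sequence of samples, and reversing the sound means reversing that sequence. The sketch below assumes the samples live in a plain array of doubles, which is my assumption rather than the representation used by any particular music library:

/* reverse a buffer of audio samples in place by swapping from both ends */
void reverseSound(double samples[], int len) {
  for (int i = 0, j = len - 1; i < j; i++, j--) {
    double tmp = samples[i];
    samples[i] = samples[j];
    samples[j] = tmp;
  }
}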

Confessions of a Programmer

:: Programming

By: John Clements

I am a programmer.

Here are some of the things that I love.

  • I love programming.
  • I love abstraction and purity.
  • I love complexity, and fiddly details.
  • I love math.
  • I love knowing that I have powers and knowledge that other people don’t.

I want to talk about that last one. It’s the one I’m least comfortable talking about, and I think it’s somewhere near the center of a number of discussions I’ve had with others and with myself about motivation, teaching, and the future of Computer Science.


A few days ago, I picked up G.H. Hardy’s A Mathematician’s Apology again. He is, to be blunt, an unreconstructed bigot in the area of Native Mathematical Ability. He believes that either you got it or you ain’t, and if you do, it is a thing that sets you apart from other men. (Not from other women, because in the 1920s, everyone was a man. Look it up.)

On the one hand, I can see the appalling lack of mental rigor in some of his arguments. I won’t go point by point, but it appears to me that the nailed-down-mathematical-argument part of his brain simply wasn’t engaged when he made arguments like the assertion that in general those with mathematical talent aren’t likely to have any other useful one.

Nevertheless, much of his thinking pervades mine. I think that thinking such as his fuels much of my own drive (such as it is), and indeed is relatively close to my central joy in life.

Let me put it differently, and more positively: Richard Feynman writes about the simple joy of discovering things for oneself. I completely subscribe to this. Yesterday, of my own accord, I examined the notion of continuity on functions from the reals to the reals and the more general one of continuity on topological spaces, and convinced myself that they’re perfectly aligned (on the functions from reals to reals). That’s a really beautiful thing, and I wanted to run around and tell everyone about it (and here I am doing that). I got to figure that out for myself, and that was a beautiful thing.

Let’s dig a little deeper, though, and try to understand why that should be a beautiful thing. Why is it thrilling to know things, or even better to discover them for ourselves?

Ultimately, I think that it comes down to this: it gives us power, and fuels our egos.

That doesn’t sound very nice, does it?

Well, in many ways it’s the same thing that drives us to work hard at anything. Why do we run races? Why do we compete at golf? Why do we work hard for promotions, or try to explore new lands? Why do we … do anything?

Well, on a basic level, we’re trying to survive as a species. In order to do this, every individual is programmed to pull hard, to do her very best work, to train harder than the other guy.

When we put it that way, it doesn’t sound so bad. Survival of the species, and more locally, survival of our own DNA. I want to be the best at something, in order to give my DNA a competitive advantage. This is locally visible as an advantage over other people, but manifests itself globally as a path toward improving the survival of the species.

Looking at it in the global context, though, tempers the drive for individual success; my success benefits the species only when it doesn’t come at the expense of the species as a whole. If I try to ensure the success of my genes by eliminating those around me, I’m not benefitting the species, I’m hurting it, and I won’t be around very long.

So, what do we have? We’ve developed a system where we compete—to a point. We compete in venues where our individual success leads to improvement for our families, our communities, and our species.

But enough about you.

What does this mean for me?

When I was a child, I did well at math. I understood things easily, and none of the concepts gave me trouble. When there was an interesting new concept, I spent a bit of time thinking about it, and then I understood it. I got the same kind of satisfaction from math that you might from a video game: a bit of work, and lots of instant gratification. To be sure: I had a privileged upbringing, and lots of attention from teachers to help me succeed. I have relatively few illusions about the part that my background played in my success.

It’s no surprise, then, that I chose to focus on math; math class was my favorite, and until college, I could convince myself that I was the best in every class (following Hardy, I’ll suggest that some mild egotism is not out of place here; it’s helpful to believe that you’re the best, even when the evidence is weak).

In college, I discovered lots of folks that were better mathematicians than I was.

Hmm… I can see that I’m going astray here; from a discussion intended to be about the motivations of programmers and programming students, I’m drifting over into nature-vs-nurture.

Further Editing Required.

Or, as Piet Hein put it,

Things Take Time.

Is teaching programming like teaching math?

:: Programming, Programming Languages, Teaching

By: John Clements

One of my children is in third grade. As part of a “back-to-school” night this year, I sat in a very small chair while a teacher explained to me the “Math Practices” identified as part of the new Common Core standards for math teaching.

Perhaps the small chair simply made me more receptive, taking me back to third grade myself, but as she ran down the list, I found myself thinking: “gosh, these are exactly the same skills that I want to impart to beginning programmers!”

Here’s the list of Math Practices, a.k.a. “Standards for Mathematical Practice”:

  1. Make sense of problems and persevere in solving them.
  2. Reason abstractly and quantitatively.
  3. Construct viable arguments and critique the reasoning of others.
  4. Model with Mathematics.
  5. Use appropriate tools strategically.
  6. Attend to precision.
  7. Look for and make use of structure.
  8. Look for and express regularity in repeated reasoning.

Holy Moley! Those are incredibly relevant in teaching programming. Furthermore, they sound like they were written by someone intimately familiar with the How To Design Programs or Bootstrap curricula. Indeed, in the remainder of my analysis, I’ll be referring specifically to steps 1–4 of the design recipe proposed by HtDP (as, e.g., “step 2 of DR”).

Let’s take those apart, one by one:

Different ways of summing string lengths

:: Programming

By: John Clements

Three different ways to sum string lengths:

int numChars(char * strings[], int len)
{
   int i, j, sum = 0;

   /* count characters by hand: walk each string, one character at a time */
   for (i = 0; i < len; i++)
   {
      for (j = 0; strings[i][j] != '\0'; j++)
      {
         sum++;
      }
   }
   return sum;
}

#include <string.h>   /* for strlen */

int numChars(char * strings[], int len) {

  int sum = 0, i;

  for (i = 0; i < len; i++) {
    sum += strlen(strings[i]);
  }

  return sum;
}

int numChars(char * strings[], int len) {

  /* schematic: sumOf and lengths are not standard C library functions;
     this is the "compute the lengths, then sum them" shape */
  return sumOf(lengths(strings));

}
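
The third version leaves sumOf and lengths abstract; they are not standard C functions. Here is one hedged sketch of how they might be written, with an explicit len parameter threaded through (the names, signatures, and use of malloc are assumptions made for this sketch, not part of the original):

#include <stdlib.h>
#include <string.h>

/* hypothetical helper: map each string to its length
   (the caller must free the returned array) */
int *lengths(char *strings[], int len) {
  int *result = malloc(len * sizeof(int));
  for (int i = 0; i < len; i++) {
    result[i] = (int)strlen(strings[i]);
  }
  return result;
}

/* hypothetical helper: sum an array of ints */
int sumOf(int *values, int len) {
  int sum = 0;
  for (int i = 0; i < len; i++) {
    sum += values[i];
  }
  return sum;
}

/* the third version, spelled out using the helpers above */
int numCharsComposed(char *strings[], int len) {
  int *ls = lengths(strings, len);
  int sum = sumOf(ls, len);
  free(ls);
  return sum;
}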