
Alternative Patterns to Recursion in Imperative Languages

Introduction

Recursion is one of those patterns which I find to be theoretically interesting, but in practical software engineering it seems to cause more headaches than it solves.

In this post, I'm going to examine the issues with recursive algorithms and present some alternative patterns for solving problems which might otherwise be solved recursively. I assume that the reader is already familiar with recursion and recursive algorithms, though I will make reference to both simple and complex examples. I also presume that the reader has a passing understanding of the difference between stack and heap memory. Lastly, I assume that the reader is aware of FIFO queues, stacks and linked-list data types.

Implementations/demonstrations will be done in Python, since it is close enough to pseudo-code that the patterns should translate well to other languages. Where possible, I will try to avoid Python-specific functionality.

The Problem with Recursion

I should probably start by clarifying that I am referring specifically to imperative rather than functional languages. Obviously, in languages like Haskell recursion is often the only legitimate way to solve a problem, and the compilers contain the necessary optimisations and analyses to make that work.

In imperative languages, the presence of a recursive algorithm can make code brittle. Many languages have some limit on how far they can recurse. These limits can be implicit, as in languages like C, where infinite recursion will eventually lead us to the famed stack overflow. Alternatively, they can be explicit, as in Python, which has a runtime limit on the depth of recursion it will permit before throwing an exception. The result is that we either have to write code which guards against inputs that produce these undesirable levels of recursion, or we have to alter our runtime environments to permit some arbitrarily higher limit.
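
To make the explicit case concrete, here is a small demonstration of Python's limit (the default of 1000 can vary between interpreter builds):

import sys

def count_down(n):
  if n == 0:
    return 0
  return count_down(n - 1)

print(sys.getrecursionlimit())           # typically 1000
count_down(sys.getrecursionlimit() + 1)  # raises RecursionError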

It is worth noting that if the depth of recursion is not dictated by program inputs, it may be possible to know in advance that the recursion will never exceed the maximum limit, in which case it does not represent a problem.

That we have to write code to handle these edge cases is no bad thing at all, but having to alter our runtime environment in order to handle ‘good’ inputs seems to me to be a leaky abstraction. The individual running a program with such a recursive implementation will be met with an error condition for some complexity of input, and will then have to adjust their runtime environment to cope. It would be a cleaner abstraction if, where required, this were an explicit argument to the program being run.
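
As a sketch of what that might look like (the --max-depth flag here is just an illustrative name, not an established convention):

import argparse
import sys

# expose the recursion limit as an explicit, documented argument,
# rather than asking the user to modify the runtime themselves
parser = argparse.ArgumentParser()
parser.add_argument('--max-depth', type=int,
                    default=sys.getrecursionlimit(),
                    help='maximum recursion depth to permit')
args = parser.parse_args()
sys.setrecursionlimit(args.max_depth)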

Some imperative language compilers have facilities for coping with certain kinds of recursive algorithm. In particular, some of these concerns may not apply to algorithms where tail-call optimisation can be applied. However, to the best of my knowledge, Python still doesn't have tail-call optimisation. Moreover, this argument is not very helpful if, for whatever reason, the optimisation does not apply to the specific algorithm under examination.

I will not be considering environments in which recursive algorithms are used as an optimisation because dynamic memory allocation is unavailable, or under other such constraints.

In general, I categorise recursion cases into the following groups: trivial recursion, depth-first recursion over arbitrarily nested structures, and breadth-first recursion. The grouping is empirical rather than theoretical, and is based mostly on what I personally have found to be the simplest way to solve each problem.

I will now explore each of these in turn and demonstrate some alternatives to recursive algorithms. At the end, I will demonstrate a general but naïve pattern which should work for almost any case, but which can be very tricky to implement for anything other than trivial problems.

Trivial Recursion

My current working definition of “trivial” is that the input to the function does not contain any arbitrarily nested structures.

I've never actually encountered a case like this in production code, I suspect because most software engineers would not consider recursion to be the obvious pattern for these kinds of problems. As such, I normally come across them in learning materials for programming, as a way of introducing the concept of recursion.

Nevertheless, I include them here for completeness, and because I am writing this section before I’ve finished my first cup of coffee.

Factorial

The go-to example of this would be the factorial calculation, which is defined for a non-negative integer n as the product of all positive integers less than or equal to n. The factorial of 0 is defined to be 1.
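
For example, factorial(5) is 5 × 4 × 3 × 2 × 1 = 120.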

A recursive implementation of this might look like

def factorial(n):
    # base case
    if n == 0:
        return 1
    # recursive case
    else:
        return n * factorial(n-1)

As we can see, this matches my definition of a trivial case: the input is a single integer with no nested structure, and there is only one argument to the initial function. Incidentally, if you want to prove we are solving a problem that is "real", try running this function with the following:

import sys

factorial(sys.getrecursionlimit()+1)

Such examples are easily unpacked into a for-loop:

def factorial(n):
  # the recursive base case becomes the initial value
  result = 1
  for i in range(1, n+1):
    result = result * i
  return result

The pattern works because we iteratively re-assign to the result variable, which carries the accumulated product from one iteration to the next.

Fibonacci

While we're at it, let's look at another example: the Fibonacci sequence, in which each number is found by adding up the two numbers before it. To find the nth Fibonacci number, we might define a function like this:

def fib(n):
  # first base case
  if n==1:
    return 0
  # second base case
  elif n==2:
    return 1
  # recursive case
  else:
    return fib(n-1) + fib(n-2)
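
With the indexing used here (fib(1) is 0 and fib(2) is 1), the sequence runs 0, 1, 1, 2, 3, 5, 8, …, so, for example, fib(7) returns 8.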

There's a little additional gotcha here in that there are two base cases, but that doesn't really alter our pattern of turning the recursion into a for-loop:

def fib(n):
  previous = [0, 1]
  if n == 1:
    return previous[0]
  elif n == 2:
    return previous[1]
  else:
    for i in range(2, n):
      next_value = previous[0] + previous[1]
      previous[0] = previous[1]
      previous[1] = next_value
    return previous[1]

The additional complexity here really manifests because of the need to keep track of the previous two results so that they can be summed in the loop. Again, the result is preserved, but this time in the list previous, so that it can be iteratively reused.

I could have maintained the previous two results using a FIFO queue if I really wanted to add unnecessary complexity in favour of semantics, but I leave that as an exercise for the reader.

Depth First Recursion

Another typical use of recursive algorithms is processing data stored in structures which are nested to an arbitrary depth. Whether you need or wish to process depth-first is partly a question of problem domain. Apart from in-memory data structures, a very typical example that I come across is processing file and folder structures on a hard drive.

For depth-first processing, I have two patterns which are relevant to slight variations of this problem.

In this particular case, I will use arbitrarily nested (Python) lists containing integers as the target data structure. Let's say that someone requires me to write code which will work through such a list, printing the integers, and using brackets and indentation to indicate sub-lists.

Yes, it’s a very contrived example, but hopefully it will illustrate the salient points.

Recursive Solution

I'll first sketch out the recursive algorithm:

def process(item, nesting_level):
  indent = '  '*nesting_level
  if type(item) is list:
    print(indent + '[')
    for subitem in item:
      # process subitems, incrementing the nesting level
      process(subitem, nesting_level+1)
    print(indent + ']')
  else:
    print(indent + '{}'.format(item))

A quick note on the indent variable for those who are not familiar with Python: it is a string made up of one double-space ('  ') per nesting level, which is then prefixed to the bracket or the string-formatted item being printed.
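
For reference, running this on the list used in the examples below, process([1, [3, 2], 5, [[7]]], 0), prints the following:

[
  1
  [
    3
    2
  ]
  5
  [
    [
      7
    ]
  ]
]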

Depth First Recursion Using a Stack

The first pattern is to keep track of the items we have yet to process. I find that this works well enough for most problems of this type.

Stacks in Python are easy, since the built-in list type provides one. The first thing we will put onto our stack is our top-level list. Because I'm being asked to indent for each nesting level, we should also add some information which indicates that the outermost list is at the 0th nesting level. We'll put these together in a tuple.

stack = []
l = [1, [3, 2], 5, [[7]]]
stack.append(
        (l, 0)
)

Now we need to write code which is going to process our stack. Note the code comments which explain the implementation.

def process(stack):
  # we only need to continue while there are still items on the
  # stack.
  while len(stack) > 0:

    # Get the next item for processing
    next_item = stack.pop()
    # Each item is a tuple; index 0 is the data, index 1 is the
    # indentation level
    if type(next_item[0]) is list:
      # for lists, we add items to the stack for further processing

      # We add an item to the stack which will cause the print
      # of an enclosing bracket.
      stack.append((']', next_item[1]))

      # we have to add the list items to the stack in reverse
      # order, so that when they are popped off the stack we
      # get the results in the original order
      for item in reversed(next_item[0]):
        stack.append((item, next_item[1]+1))

      # Add an item to the stack to print the opening bracket
      stack.append(('[', next_item[1]))
    else:
      # for elements which are not lists, we print them
      # at the indented level
      print('  '*next_item[1] + '{}'.format(next_item[0]))
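
With the stack set up as above, kicking the whole thing off is just a matter of calling the function; it should produce the same output as the recursive version:

process(stack)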

The first key observation here is that the code has only one more functional line than the recursive algorithm, which is pretty cheap for proofing against Python's recursion limit. The drawback, in my opinion, is that it becomes somewhat less obvious what it is you are trying to do; you may have to include additional documentation to explain your function to your colleagues.

Depth First Recursion Using a Linked List

I have found using linked lists for processing string-encoded recursive structures to be extremely useful, if something of a niche application.

Python doesn't have linked lists in its standard library, but there is a third-party module called llist which I will use here to avoid getting caught up in reimplementing that functionality.

Setting up the linked list from the llist library is very straightforward:

from llist import sllist

l = [1, [3, 2], 5, [[7]]]
ll = sllist()
ll.append((l, 0))
process(ll)

It is important to note that the setup of the linked list is not radically different from the set up of the stack in the previous example.

I’m going to just dump the implementation for the process function for the linked list implementation:

def process(linked_list):
  while len(linked_list) > 0:
    next_item = linked_list.first
    if type(next_item.value[0]) is list:
      anchor = linked_list.insertafter(
        ('[', next_item.value[1]),
        next_item
      )

      for item in next_item.value[0]:
        anchor = linked_list.insertafter(
          (item, next_item.value[1]+1), anchor)

      linked_list.insertafter(
        (']', next_item.value[1]), anchor)
    else:
      indent = '  '*next_item.value[1] 
      print(indent + '{}'.format(next_item.value[0]))
    linked_list.popleft()

The reason why I’ve just dropped this with no explanation, is that I want the reader to compare it to the algorithm for the stack-based implementation.

Hopefully, it won't have escaped your notice that the two algorithms share very similar steps. The key difference is that the linked-list version does not require iterating over the list elements in reverse order. This can be extremely useful in some cases. A good example is arbitrarily large data coming from a source that can only be iterated in one direction, like a stream; it allows us to process the information forwards, without needing to cache the whole, arbitrarily large structure in memory.

Breadth First Recursion

The only place I have personally come across breadth-first recursion is in traversing graphs. For this example, I'm going to use a graph without cycles, since I'm focussing on the recursion aspect. It's a very simple graph joining places on the US East Coast. It's also decidedly inaccurate.

The purpose of the algorithm is to find the path with the fewest hops between locations on the graph.

For a quick bit of setup, we're going to need a node class to represent locations and their relationships.

class GraphNode(object):

  def __init__(self, label, neighbours):
    self.label = label
    self.neighbours = neighbours

We can then set up a navigation graph:

boston = GraphNode('Boston', [])
ny = GraphNode('New York City', [boston])
bristol = GraphNode('Bristol', [ny])
reading = GraphNode('Reading', [ny, bristol])
washington_dc = GraphNode('Washington DC', [])
philly = GraphNode('Philadelphia', [reading, washington_dc])

Here is the code for the breadth-first algorithm, with annotations.

import copy
import queue

def fewest_hops(start_node, end_node):
  # create a FIFO queue to manage the exploration
  place_queue = queue.Queue()

  # elements on the queue are tuples, where the first
  # item is a location node to walk out from,
  # and the second item is the list of nodes which make
  # up the path walked so far.
  # hence, the path always begins with the start node
  place_queue.put((start_node, [start_node]))

  # initialise the algorithm's first element
  current_element = place_queue.get()

  # We keep searching the graph until we reach the
  # target destination or run out of places to explore
  while current_element[0] is not end_node:

    for neighbour in current_element[0].neighbours:

      # we have to copy the path because otherwise
      # all of the paths would share the same,
      # mutable list
      path = copy.copy(current_element[1])

      # add the next step in the path
      path.append(neighbour)

      # place these elements on the queue for processing
      new_element = (neighbour, path)
      place_queue.put(new_element)

    # if there is nothing left to explore, there is no path
    if place_queue.empty():
      break

    # next iteration of the loop
    current_element = place_queue.get()

  if current_element[0] is end_node:
    return current_element[1]
  else:
    return None
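
To sanity-check it, we can ask for a route across the graph defined above; the fewest-hops path from Philadelphia to Boston goes via Reading and New York City:

route = fewest_hops(philly, boston)
print([node.label for node in route])
# ['Philadelphia', 'Reading', 'New York City', 'Boston']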

Again, this algorithm bears more than a passing resemblance to the depth-first, stack-based algorithm. I personally find it remarkable (though entirely rational) that swapping a LIFO queue (a stack) for a FIFO queue changes an algorithm from depth-first to breadth-first.

Incidentally, and without getting too bogged down in the detail, this is very closely related to an implementation of Dijkstra's algorithm where all paths are unweighted. Where the paths are weighted, you can use a priority queue instead of a simple FIFO queue.
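
For the curious, here is a minimal sketch of that weighted variation. Note the assumptions: the function name fewest_miles is made up for illustration, and the graph here is hypothetical, with neighbours lists holding (neighbour, distance) pairs rather than the bare nodes used by GraphNode above.

import heapq

def fewest_miles(start_node, end_node):
  # priority queue of (distance so far, tie-breaker, node, path) tuples;
  # the tie-breaker stops heapq from ever comparing node objects
  counter = 0
  heap = [(0, counter, start_node, [start_node])]
  visited = set()

  while heap:
    distance, _, node, path = heapq.heappop(heap)
    if node is end_node:
      return distance, path
    if id(node) in visited:
      continue
    visited.add(id(node))
    # assumes node.neighbours is a list of (neighbour, distance) pairs
    for neighbour, hop_distance in node.neighbours:
      counter += 1
      heapq.heappush(
        heap,
        (distance + hop_distance, counter, neighbour, path + [neighbour]))

  return None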

General Naïve Solution

An assertion I see quite a lot, often presented without example, is the entirely reasonable-sounding line of thought that runs thus:

  1. The reason for recursion limits ultimately revolves around the size limit of the call stack.
  2. Stacks can be built on heap memory.
  3. Therefore, I can solve the recursion problem by using an explicit stack on the heap.

To be fair, this is not a bad line of reasoning, and it will work, but it is quite a lot of effort. Let's revisit the factorial algorithm to see what it would take to re-implement it this way for a trivial case, and then we can evaluate whether it is worth the effort!

Let's restate the recursive factorial algorithm:

def factorial(n):
  # base case
  if n == 0:
    return 1
  # recursive case
  else:
    return n * factorial(n-1)

Now I'm going to step through building our own call stack.

If we're going to make our own call stack, the first thing we will need is a stack data type. Helpfully, Python's built-in list has all of the methods necessary for this:

l = []
# push
l.append(1)
# pop
l.pop()

Our stack contains our steps and information about our progress. We need something to run those steps, so I’ll build that next.

def run_stack(stack):
  result = None
  while len(stack) > 0:
    frame = stack.pop()
    result = frame.run(result)

  # This return could be a print or anything else...
  return result 

Making this little snippet has also given us some detail about the interface we need from our stack frames: there must be a method on the object called run, and it must accept the result from the previous frame that was run.

Our first frame is going to be the replacement for our original call to factorial. Let's build a class to represent that, starting with the run method.

class FactorialFrame(object):

  def run(self, downstream_result):

    # Note that this method doesn't actually use the result of the previously
    # run frame.

    if self.integer > 0:
      # We will want to multiply the result of the next calculation
      # with the integer we were constructed with. We add a frame for that,
      # which we will need to define.
      self.stack.append(MultiplyFrame(self.stack, self.integer))

      # We also need to define the next step in the factorial
      # as the next frame to be executed.
      self.stack.append(FactorialFrame(self.stack, self.integer-1))
    else:
      # The base case always returns one. We know the next frame will be a
      # MultiplyFrame (because we just made sure it was the next item on the
      # stack), so when this method returns 1, that will be passed to the
      # MultiplyFrame, which will roll up the results.
      # There is one scenario where the next frame would not be a multiply
      # frame, which is if this is the only frame. That is the base case of
      # factorial(0), and the return of 1 is the right answer for that
      # scenario as well.
      return 1

Note the explanatory comments for this method; they are essential to understanding the code.

From this, we can see we are going to need to pass in the stack to the class constructor, and we can see that FactorialFrame objects are initialised using the stack and the integer for that step in the factorial calculation. Let’s add in that constructor.

class FactorialFrame(object):

  # for non-python folks, this is effectively the constructor
  def __init__(self, stack, integer):
    self.stack = stack
    self.integer = integer

  def run(self, downstream_result):
    if self.integer > 0:
      self.stack.append(MultiplyFrame(self.stack, self.integer))
      self.stack.append(FactorialFrame(self.stack, self.integer-1))
    else:
      return 1

Good. Now we also need to define the MultiplyFrame class, which should be much simpler to understand:

class MultiplyFrame(object):

  def __init__(self, stack, integer):
    # This object doesn’t actually use the stack, so we discard the
    # reference.
    self.integer = integer

  def run(self, downstream_result):
    # We’ll be multiplying either with the base case output
    # from a FactorialFrame, OR the output of a previous
    # multiplication result
    return self.integer * downstream_result

Right. We now have all of that done, and this re-implements our "trivial" factorial recursion. To run the overall calculation, we run:

stack = []
# Add the initial call to the stack
stack.append(FactorialFrame(stack, 5))
# Call the run loop for the stack
print(run_stack(stack))  # prints 120

That's nearly 50 lines of code for something that, earlier, was done in around five lines (whether we're looking at the iterative or recursive implementation). That's a lot, and you can imagine how complicated it might get for harder problems. I'm sure no one will be surprised or disappointed if I leave the Fibonacci example to the reader! Moreover, this pattern, to me at least, does not seem particularly clear without a lot of explanatory comments for one's fellow engineers.

It is also a valid criticism that I've been fairly pathological in implementing the full stack machinery here (you saw a far simpler use of a stack earlier). It's possible to shorten this significantly by keeping a stack of plain integers and rolling that up, but I wanted to provide an approach that will work even for the most challenging problems.
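
For completeness, here is a minimal sketch of that shortcut, assuming we only care about factorial: push the integers we would otherwise have recursed over onto a plain list, then roll them up.

def factorial(n):
  # push the integers we would have recursed over
  pending = []
  while n > 0:
    pending.append(n)
    n -= 1

  # roll the stack back up into the final result
  result = 1
  while pending:
    result *= pending.pop()
  return result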

That said, this can be a useful technique if you come across a case that none of the other patterns fit, and you need to "brute force" the problem away.

Special Mention: Tower of Hanoi

The Tower of Hanoi is a mathematical puzzle that has, if anything, been over-analysed. There are a number of ways of solving the puzzle, and at the time of writing, the Wikipedia page has a good analysis on the matter.

The reason why this puzzle warrants a specific and special mention is twofold:

  1. The problem has been analysed in enough ways that it can be solved with several of the patterns presented in this post.
  2. The problem also has a non-recursive solution based on the binary representation of the state of the puzzle.

The second point is what makes the problem interesting, but I've not seen another notionally recursive problem with this kind of solution, and I don't know off-hand what properties would make a problem amenable to it. Given that a single instance of a solution does not constitute a pattern, I don't feel it is helpful to restate the whole solution here, but as a curio within the problem domain, I wanted to make reference to it.

Conclusion

In this post, I have demonstrated a number of different patterns for problems where recursion might be applied, without using recursive algorithms. These patterns are helpful for ensuring code will work in the face of large or arbitrarily nested input. They will avoid problems like recursion limits and stack overflows.

Note that I have not discussed guarding these functions against very large or problematic inputs, and therefore the presented examples are still vulnerable to the runtime environment running out of memory. Always check your user inputs.