Wrongly implemented Fibonacci Sequence in Scala


I was at a Scala meetup, discussing the "Scala way" of doing things...

Someone asked a different developer how to implement the Fibonacci sequence in Scala... and the person answered with the following code (and was only told that, while it worked, it was non-optimal):

def fibonacci(n: Int): BigInt = n match {
  case 0 => 0
  case 1 => 1
  case _ => fibonacci(n - 1) + fibonacci(n - 2)
}
  1. What's wrong with this method?

  2. In what ways can the code be improved, the Scala way?

The problem with the function, as described, is the non-tail-recursive calls. This means that the recursion involved here needs a stack to work (in this sample, it's the call stack). In other words, the function is the equivalent of:

import scala.collection.mutable.Stack

def fibonacci(n: Int): BigInt = {
  var result = BigInt(0)
  val stack = Stack.empty[Int]
  stack.push(n)

  while (stack.nonEmpty) {
    val x = stack.pop()
    if (x == 1) {
      result += 1
    }
    else if (x > 1) {
      stack.push(x - 2)
      stack.push(x - 1)
    }
  }

  result
}

As you can see, that's not very efficient, is it? On each iteration the stack's size can grow by 1, and because you can view the calls being made as a tree, a proper binary tree whose size depends on n, the number of leaves is approximately 2^n (actually less, but the constant factors don't matter when n is big). So we are talking about O(2^n) time complexity and O(n) memory complexity (i.e. the needed stack size is proportional to n). That's exponential growth in time and linear growth in memory used. It means it takes a loooong time to process and uses more memory than it should. BTW, it's a good idea for a software developer to reason in terms of big-O notation, because that's the first thing you need to look at when talking about performance or memory consumption.
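To make that exponential blow-up concrete, here's a small illustrative sketch (the countedFib helper and its mutable call counter are mine, not part of the original answer) that simply counts how many times the naive function gets invoked:

// Illustrative sketch only: countedFib is a hypothetical variant of the naive
// function above, instrumented with a counter to show how the call count grows.
var calls = 0L

def countedFib(n: Int): BigInt = {
  calls += 1
  n match {
    case 0 => 0
    case 1 => 1
    case _ => countedFib(n - 1) + countedFib(n - 2)
  }
}

countedFib(30)
println(calls) // on the order of 2.7 million calls for n = 30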

Thankfully, for Fibonacci you don't need recursion at all. Here's a more efficient implementation:

def fibonacci(n: Int): BigInt = {
  var a = BigInt(0)
  var b = BigInt(1)
  var idx = 0

  while (idx < n) {
    val tmp = a
    a = b
    b = tmp + a
    idx += 1
  }

  a
}

This is a plain loop. It doesn't need a stack to work. Its memory complexity is O(1) (meaning it needs a constant amount of memory to work, independent of the input). In terms of time the algorithm is O(n), meaning that to process the result a loop involving n iterations is involved, so the growth in time depends on the input n, but it is linear and not exponential.
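As a quick sanity check (this usage snippet is mine, not from the original answer), the loop version returns the expected small values and handles a large n comfortably, since it only does n iterations with constant extra memory:

// Usage check for the loop-based fibonacci above (illustrative).
assert(fibonacci(0) == BigInt(0))
assert(fibonacci(1) == BigInt(1))
assert(fibonacci(10) == BigInt(55))

// No call stack involved, so large inputs are no problem:
println(fibonacci(1000)) // a 209-digit number, computed almost instantly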

In Scala, you can also describe this with tail recursion:

import annotation.tailrec

def fibonacci(n: Int): BigInt = {
  @tailrec
  def loop(a: BigInt, b: BigInt, idx: Int = 0): BigInt =
    if (idx < n)
      loop(b, a + b, idx + 1)
    else
      a

  loop(0, 1)
}

This is a loop described as a recursive function, but because it's a "tail-recursive call", the compiler rewrites the function into a simple loop. You can also see the presence of the @tailrec annotation. It's not strictly necessary, as the compiler will optimize it into a loop without it, but if you use the annotation, the compiler will emit an error if the described function is not tail-recursive. That's nice, because it's easy to make mistakes when relying on tail recursion to work (i.e. you make a change and, without noticing it, bam, the function is not tail-recursive anymore). So do use the annotation, because the compiler can protect you if you do.
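To illustrate what the annotation protects you from, here's a deliberately broken variant (mine, not from the original answer): the recursive calls are not in tail position because the addition happens after they return, so the compiler rejects it instead of silently compiling a stack-hungry function.

import annotation.tailrec

// This does NOT compile: the compiler reports that the @tailrec-annotated
// method contains a recursive call that is not in tail position.
@tailrec
def badFib(n: Int): BigInt =
  if (n <= 1) BigInt(n)
  else badFib(n - 1) + badFib(n - 2)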

So in this case we work with immutable things (there are no more vars specified), yet it has the same performance characteristics as the while loop. Which version you prefer, well, that's a matter of preferences. I prefer the latter, because it's easier for me to spot the invariants and exit conditions, plus there's the immutability, but other people prefer the former. As for the idiomatic way of doing this, you can get fancy with a lazy stream or iterable (see the sketch below), but nobody in the FP department will complain about tail recursions :-)
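And if you do want that fancier lazy-sequence flavour, one possible sketch (my assumption of what it would look like, using LazyList from Scala 2.13+, or Stream on older versions) is:

// A lazy, memoized sequence of Fibonacci numbers (sketch, assuming Scala 2.13+).
lazy val fibs: LazyList[BigInt] =
  BigInt(0) #:: BigInt(1) #:: fibs.zip(fibs.tail).map { case (a, b) => a + b }

def fibonacci(n: Int): BigInt = fibs(n)

// fibonacci(10) == 55; already-computed elements are cached by the LazyList.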

