# 3. Functional Programming Concepts

Written by Massimo Carli

In the previous chapter, Chapter 2, “Function Fundamentals”, you had a deep dive into what category theory is and how it relates to the concepts of types and functions, which are the pillars of functional programming. You touched on many things along the way. You learned:

• That a function is a way to map elements of a domain set into values of a range set.
• The relation between sets and types.
• The logic category.
• What the Kotlin `Nothing` and `Unit` types really are.

You also had the opportunity to learn the concept of the type of a function. You defined the `typealias` `Fun<A, B>` to represent all the functions mapping values in `A` to values in `B`. For instance, `Fun<Int, String>` represents the set of all functions from `Int` to `String`. On the other hand, not all functions are the same, and what you’ve seen so far is true for a specific kind of function: pure functions. You’ll learn more about this as you read on.

It’s also important to mention that all the functions you use in programming contain some sort of computation, and the output is often calculated from the input as the result of expressions.

In this chapter, you’ll learn some vital concepts about functions, and in particular:

• What a pure function is.

• What an expression is, and what referential transparency means.

• How to prove a function is pure using the substitution model.

• What a side effect is.

• How to handle side effects in a functional way.

• How side effects change the way you compose functions.

You’ll learn all these concepts using the Kotlin language and a bunch of entertaining exercises and challenges.

## Pure functions

In the previous chapter, you learned that a function is a way to map values from a set `A` to values in a set `B`. You use sets to represent values of a specific type. For functions from `A` to `B`, you defined the following `typealias` where `A` is the type for the domain, and `B` is the type for the range:

Note: If you need a reminder about domain and range, skip back to Chapter 2, “Function Fundamentals”.

```kotlin
typealias Fun<A, B> = (A) -> B
```

Note: You’ll find this definition already in the Aliases.kt file in this chapter’s starter project.

Now, consider the following functions and add them to Pure.kt in the same starter project:

```kotlin
fun twice(x: Int): Int = x * 2

fun twiceAndLog(x: Int): Int {
  val result = x * 2
  println("2 * $x = $result")
  return result
}
```

In this code, `twice` and `twiceAndLog` have type `Fun<Int, Int>`, and you can prove this by adding the following code to the same file:

```kotlin
fun main() {
  var f: Fun<Int, Int> = ::twice // 1
  println("Executing twice(10)")
  f(10) // 2
  f = ::twiceAndLog // 3
  println("Executing twiceAndLog(10)")
  f(10) // 4
}
```

In this code, you:

1. Assign `twice`’s reference to a variable `f` of type `Fun<Int, Int>`.
2. Invoke `f(10)`.
3. Change the value of `f`, assigning the reference to `twiceAndLog`.
4. Invoke `f(10)`.

Compile the code and run it. It provides the following output:

```
Executing twice(10)
Executing twiceAndLog(10)
2 * 10 = 20 // HERE
```

Of course, the invocation of `twice` produced no log output but `twiceAndLog` did because of the `println` statement in its body, pointed out with “HERE”. But in terms of functions, they should be exactly the same because they map the same input values to the same output values. So, what’s the difference? The difference is that `twice` is a pure function and `twiceAndLog` is not. So, how would you define what a “pure function” is?

### Pure function definition

A function `f` is pure if the following properties are both true:

1. `f` always maps the same input values to the same output values.
2. The universe doesn’t change after each invocation of `f`.

The first point means that if you invoke the function `f` with specific input parameters, you’ll always get the same output. In other words, the output of `f` depends only on the input parameters and nothing else. Referential transparency gives this concept a formal definition, and you’ll learn all about it very soon.

The second property might look weird. If you consider the universe as a very big object with some properties, it has a state. You can think of the universe as everything else that exists outside the function’s body. As mentioned in the introduction of Chapter 2, “Function Fundamentals”, the state of an object is the set of all its property values — remember the Ferrari and Fiat 500?

If the execution of `f` changes the state of the universe, such as a property on one of those cars, then `f` is not pure. The formal definition says that, in this case, `f` has side effects. You’ll learn a lot about side effects in this chapter as well as the following ones.

More formally, you can now say that a function is pure if:

1. Its body is a referentially transparent expression.
2. It doesn’t have any side effects.

As mentioned, you’ll see what referential transparency and side effects are very soon. For right now, it’s useful to see some examples and describe why pure functions are so important in functional programming.

Note: The term “pure” comes from Lisp, a functional language that also allows side effects. Functions with side effects aren’t considered to be good functions, and so are somehow impure. :]

### Examples of pure and impure functions

Open the Pure.kt file of the starter project in the material for this chapter and look at `twice`, which you met earlier in the chapter:

```kotlin
fun twice(x: Int): Int = x * 2
```

This is a pure function because:

1. Every time you invoke `twice` with the same parameter value for `x`, you always get the same result, `x * 2`, in output.
2. It just returns an output value that is double the value you pass as input. Nothing happens to the universe outside the function body, no matter how many times it’s called.

To get an idea of the first property, add the following code to Pure.kt:

```kotlin
fun main() {
  // ...
  println(twice(10))
  println(twice(10))
  println(twice(10))
  println(twice(10))
  println(twice(10))
}
```

Note: The previous code just gives you an idea of referential transparency: that the same input always maps to the same output. To be rigorous, you should prove that the property is true for all the values in the domain of `twice` and not just for the value `10`. :]

Here, you invoke `twice(10)` five times and, with `println`, you can verify the result is always `20`. Run `main` and check the following output:

```
20
20
20
20
20
```

Now, follow the same process with `twiceAndLog` that you already wrote in the Pure.kt file:

```kotlin
fun twiceAndLog(x: Int): Int {
  val result = x * 2
  println("2 * $x = $result")
  return result
}
```

In this case, `twiceAndLog` is not pure. To understand why, check the aforementioned two properties:

1. Every time you invoke `twiceAndLog` with the same parameter `x`, you always get the same result, `x * 2`, as output. As said, from this point of view, `twiceAndLog` is the same function as `twice`.
2. Unfortunately, returning a result isn’t the only thing this function does. It also changes the universe in the sense that the world now has some output messages the function has written using `println`. The state of the universe, specifically the standard output of your small program, is now different.

Exercise 3.1: Is `inc` a pure function?

```kotlin
var count = 0
fun inc(x: Int): Int = ++count + x
```

Try arguing your answer and check Appendix C to see how you did.

Now, consider the following example, and add it in Pure.kt:

```kotlin
fun randomInc(a: Int): Int = a + Random.nextInt()
```

This function adds a random integer to the value that was passed in. If you check the two properties for a pure function, you see that:

1. Invoking `randomInc` multiple times with the same input parameter, you get different output values because of the random value you get with `Random.nextInt()`. This alone allows you to say that `randomInc` isn’t pure. It’s worth checking the second property regardless.
2. Every time you invoke `randomInc`, you also change the universe. This isn’t obvious because you don’t have any `println`, and apparently, you’re not changing anything outside the function’s body. This actually isn’t true because every time you invoke `Random.nextInt()`, you’re changing the state of `Random`, the object responsible for generating random numbers. `Random` works as a global object, which is part of the universe.

You can easily prove the first point by adding the following code to `main` in Pure.kt:

```kotlin
fun main() {
  // ...
  println(randomInc(10))
  println(randomInc(10))
  println(randomInc(10))
  println(randomInc(10))
}
```
``````

If you run `main`, you’ll get different values like this as output:

```
424226219
-1433325033
-326412163
1941055914
```

Note: Of course, the values in your output will be different because of the randomness of `Random.nextInt()`.

In “Why are pure functions important?”, you’ll learn how to recognize pure and impure functions with other examples. But why is it so important to understand whether a function is pure?

Exercise 3.2: Is `inc2` a pure function?

```kotlin
val count = 0
fun inc2(x: Int): Int = x + count + 1
```

Give it a try and check Appendix C to see how you did.

## Why are pure functions important?

Understanding if a function is pure is a fundamental concept in functional programming. Pure functions provide many significant benefits, and in particular, they are:

• Easy to understand.
• Easy to test and debug.
• Composable in other pure functions.
• Easy to run in parallel.
• Idempotent and easy to recover.
• Memoizable.
• Able to be performed lazily.

It’s worth going over each of these advantages with some small examples. You’ll also have the chance to see each of these in depth in the rest of the book.

### Pure functions are easy to understand

A pure function doesn’t depend on anything other than its input parameters. This means you can understand it simply by following the logic path from the input to the output, which you define using the expression in the function body. If a pure function is too complex, you can decompose it into smaller functions and recompose them later into a final, single result. You don’t need any knowledge of the universe because all you need is in the function body.

Add the following example to Pure.kt:

```kotlin
fun abs(x: Int) = if (x < 0) -x else x
```

This function checks if `x` is negative, changing its sign if it is. Because `x` is all you need, understanding what `abs` does is very simple. If this is too complex, you could replace it like this:

```kotlin
fun negate(x: Int) = -x // 1
fun identity(x: Int) = x // 2
fun abs(x: Int) = if (x < 0) negate(x) else identity(x) // 3
```

In this case, you define:

1. The pure function `negate`, which negates the input value.
2. The `identity` function, which is also pure.
3. `abs`, which applies some logic to understand if it should invoke `identity` or `negate`.

The complexity of a function is in the interaction with the external world. These interactions are called side effects. By definition, side effects aren’t present in pure functions.

### Pure functions are easy to test and debug

The absence of side effects also makes pure functions easier to test. Imagine you want to count how many times you invoke `abs`, which you defined earlier.

A possible approach is the following. Add it to Counter.kt:

```kotlin
var count = 0 // 1

fun countedAbs(x: Int): Int { // 2
  count++
  return abs(x)
}
```

In this simple code, you:

1. Define `count` and initialize it to 0.
2. Create `countedAbs` as a function, which increments the value of `count` and returns the output of `abs`.

From what you learned earlier, this is impure because it changes the state of the universe, and in particular, its property `count`, which is a mutable global variable.

You’ll see in the rest of the book why this solution is far from being the best. But in this case, you just need to understand whether or not it’s easy to test and debug. To test it, create a test using IntelliJ. Select the name `countedAbs` and press OPTION+ENTER to get the drop-down with a Create test option in Figure 3.1:

Select Create test and, after switching to JUnit4 in Testing Library, you’ll see the following dialog:

Select OK and choose the test/kotlin folder, like in Figure 3.3:

Selecting OK for the last time, you’ll end up at an initially empty CounterKtTest.kt file. In this file, write your first test like this:

```kotlin
class CounterKtTest {
  @Test
  fun test100Times() {
    100.invokeTimes { // 1
      countedAbs(it) // 2
    }
    assertThat(count).isEqualTo(100) // 3
  }
}
```

In this test, you simply:

1. Use `invokeTimes`, which is a simple utility method you’ll find in Util.kt. This allows you to run the lambda you pass as a parameter the number of times you use as the receiver. In this case, you invoke the lambda 100 times.
2. Invoke `countedAbs`, passing the current execution number as a parameter. For this test, you don’t really care about the input parameter for `countedAbs`.
3. You assert that the counter value is `100`.

If you run `test100Times`, you’ll get a green result, which means the test succeeded. So far, so good.

Now, suppose you add another test like the following to the same CounterKtTest.kt file:

```kotlin
class CounterKtTest {
  //...
  @Test
  fun test50Times() {
    50.invokeTimes {
      countedAbs(it)
    }
    assertThat(count).isEqualTo(50)
  }
}
```

If you now run the two tests, you’ll get some problems. In particular, you’ll get something like this:

```
value of: getCount()
expected: 100
but was : 150
value of: getCount()
expected: 100
but was : 150
at com.raywenderlich.fp.CounterKtTest.test100Times(CounterKtTest.kt:47)
```

The specific test `test100Times` fails because you were expecting a count of `100`, but you get a count of `150` instead. The reason is obvious: The previous test, `test50Times`, changed the universe and you didn’t run `test100Times` in isolation like every test should be. In this case, the problem is that you need to know what’s already happened to the universe to predict the result of your function. What already happened depends on many things, like the number of times the same `countedAbs` was already executed, making the test difficult to implement.

You might argue that just resetting the value of `count` before each test, like the following, could fix the problem:

```kotlin
@Test
fun test100Times() {
  count = 0 // HERE
  100.invokeTimes {
    countedAbs(it)
  }
  assertThat(count).isEqualTo(100)
}
```

This is true, but only if the tests aren’t run concurrently. If they were, this fix wouldn’t work because of the risk of resetting the value of `count` in the middle of another test’s execution.

How could you solve this problem, then? Would a pure version of `countedAbs` solve this testability problem?

#### Spoiler alert: A pure countedAbs

Remember, a pure function doesn’t change the universe, and it has all it needs in the input parameters. You’ll see these concepts again in the book, but to give you some idea, consider the following function and add it to Counter.kt:

```kotlin
fun countedAbs(count: Int, a: Int): Pair<Int, Int> =
  abs(a) to count + 1
```

As you see, `countedAbs` now has two parameters: `count` is the number of times the function has been invoked before this, and `a` is the input parameter for `abs`. The return type is now very interesting because it’s a `Pair<Int, Int>`. The first value is the result of `abs`, and the second is the new value of `count`, which is the previous `+1`. You can easily verify that `countedAbs` is now a pure function:

• The output depends only on the input parameters.
• Invoking `countedAbs` doesn’t change the universe. It also causes no side effects.
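To see this state threading in action, here’s a self-contained sketch that chains several calls, passing each returned count into the next. It redefines `abs` and `countedAbs` locally so it runs on its own; the `fold` usage is illustrative and not part of the starter project:

```kotlin
fun abs(x: Int) = if (x < 0) -x else x

// The pure version: the counter travels through inputs and outputs.
fun countedAbs(count: Int, a: Int): Pair<Int, Int> =
  abs(a) to count + 1

fun main() {
  val inputs = listOf(-3, 7, -1)
  // Fold over the inputs, carrying the count along explicitly.
  val (results, finalCount) = inputs.fold(emptyList<Int>() to 0) { (acc, c), x ->
    val (value, newCount) = countedAbs(c, x)
    (acc + value) to newCount
  }
  println(results)    // [3, 7, 1]
  println(finalCount) // 3
}
```

Because the count flows explicitly through parameters and return values, no global state is ever touched.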

What about the test, then? Open CounterKtTest.kt and replace the existing tests with the following:

```kotlin
class CounterKtTest {
  //...
  @Test
  fun test50TimesAbs() {
    var c = 0 // 1
    50.invokeTimes {
      val (count, _) = countedAbs(c, it) // 2
      c = count // 3
    }
    assertThat(c).isEqualTo(50) // 4
  }

  @Test
  fun test100TimesAbs() {
    var c = 0 // 1
    100.invokeTimes {
      val (count, _) = countedAbs(c, it) // 2
      c = count // 3
    }
    assertThat(c).isEqualTo(100) // 4
  }
}
```

In this case, you:

1. Use a local variable `c` for the count, which you initialize at 0.
2. Invoke `countedAbs` 50 times in the first test and 100 in the second, passing the value you got from the previous execution for the `count` parameter.
3. Update the value of the local variable `c`.
4. Assert the value of `c` to check it’s what you expected.

Now, run the tests, and everything will pass. Consider also that the tests run in complete isolation. Even if executed in a different thread, they’d be successful. In this case, all of the changes are local to the scope of a single test.

Note: In the previous tests, you invoked `countedAbs` many times, but one invocation would’ve been enough to test that it works! This is because this version of `countedAbs` is pure. :]

### Pure functions are composable

As you know, by definition, the output of a pure function depends only on its input parameters, which makes the composition of pure functions pure as well. You might argue that you can combine impure functions in the same way you combine pure functions. This is true, but composing impure functions doesn’t remove their impurity. Add the following functions to Pure.kt:

```kotlin
var sharedCount = 1 // 1

fun comp1(x: Int): String = "Hello ${x + sharedCount}" // 2

fun comp2(x: String): Int = x.length - sharedCount // 3
```

Here, you:

1. Initialize the global variable `sharedCount` to `1`.
2. Define `comp1` as `Fun<Int, String>`, which returns a formatted `String` using `sharedCount`. This is impure.
3. Define `comp2` as `Fun<String, Int>`, which returns the length of the `String` as input minus `sharedCount`. This is also impure.

Both `comp1` and `comp2` are impure because their output doesn’t depend just on the input parameters but also on `sharedCount`. You can now compose them like this using `after`, which you defined in the previous chapter and is found in Category.kt:

```kotlin
val comp2AfterComp1 = ::comp2 after ::comp1
```

Here, you defined `comp2AfterComp1`, which is still impure because its output depends not just on its input but also on `sharedCount`. If you had instead composed two pure functions, the result would also be pure. In the rest of the book, you’ll see many examples of pure functions composed of other pure functions.
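For contrast, here’s a self-contained sketch of composing two pure functions. The `after` combinator is redefined locally so the snippet runs on its own (mirroring its role in the previous chapter), and `double` and `describe` are illustrative names, not part of the starter project:

```kotlin
// (f after g)(x) == f(g(x)): standard function composition.
infix fun <A, B, C> ((B) -> C).after(g: (A) -> B): (A) -> C =
  { a -> this(g(a)) }

fun double(x: Int): Int = x * 2            // pure
fun describe(x: Int): String = "Value: $x" // pure

fun main() {
  val describeDouble = ::describe after ::double
  // The output depends only on the input, so the composition is pure too.
  println(describeDouble(21)) // Value: 42
}
```

No `sharedCount` appears anywhere, so the composed function’s output is fully determined by its argument.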

### Pure functions can run in parallel

As you’ll see in Section III, thread safety is one of the big advantages of functional programming. This thread safety is mainly because of the use of pure functions.

As you’ve read many times now, the output of a pure function depends only on its input. Another way of saying this is that, given the input values, the function has all it needs to return an output value. Pure functions don’t depend on the external universe and don’t share data with it. As a consequence, pure functions can run in parallel.

Open the Parallel.kt file and add the following code:

```kotlin
fun square(x: Double): Double = x * x // 1

fun root(x: Double): Double = Math.sqrt(x) // 2

fun distance( // 3
  x0: Double,
  x1: Double,
  y0: Double,
  y1: Double
): Double {
  val s1 = square(x1 - x0) // 4
  val s2 = square(y1 - y0) // 4
  return root(s1 + s2) // 5
}
```

In this simple code, you define the following functions:

1. `square`, which calculates the square of the `Double` value you pass as input. This is a pure function.

2. `root`, which calculates the square root of the `Double` value you pass as input. This is a pure function.

3. `distance`, which calculates the distance between two points given their coordinates. This is also a pure function.

Look into the body of `distance`, and you’ll see that you:

4. Calculate the square of the difference of the `x` coordinates, then the square of the difference of the `y` coordinates, and save them in `s1` and `s2`.
5. Return the square root of the sum of `s1` and `s2`.

The interesting thing here is that the order you use to calculate `s1` and `s2` doesn’t matter. You could’ve calculated `s2` first and then `s1`. The important thing is that you invoke `sqrt` when you have both `s1` and `s2`. Because the order in which you calculate `s1` and `s2` doesn’t matter, it means you could calculate them in parallel and then join the result when invoking `sqrt`. All the options in Figure 3.4 are valid:

This is possible because `square` is pure. If it weren’t pure and had side effects, the result of the two executions of `square` would depend on the state of the universe, which in turn depends on the execution order.

The chance to run pure functions in parallel is also essential in the case of distributed systems. You might run the functions on different machines, providing just the input parameters and collecting the result.

Race conditions and deadlocks are two of the main problems when you run functions in a concurrent environment. These especially happen when multiple threads share mutable data. If a function is pure, it has all it needs in its input parameters, so it doesn’t need to share anything.
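As a sketch of the idea, using plain JVM threads rather than anything from the book’s project (`distanceParallel` is an illustrative name), you could compute the two squares concurrently:

```kotlin
import kotlin.concurrent.thread
import kotlin.math.sqrt

fun square(x: Double): Double = x * x // pure

// Compute the two squares on separate threads. Because `square` is pure,
// the execution order can't change the result.
fun distanceParallel(x0: Double, x1: Double, y0: Double, y1: Double): Double {
  var s1 = 0.0
  var s2 = 0.0
  val t1 = thread { s1 = square(x1 - x0) }
  val t2 = thread { s2 = square(y1 - y0) }
  t1.join() // wait for both partial results before combining them
  t2.join()
  return sqrt(s1 + s2)
}

fun main() {
  println(distanceParallel(0.0, 3.0, 0.0, 4.0)) // 5.0
}
```

The two threads share no mutable state with each other, so no locks are needed: each one only writes its own result.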

Note: The code you execute as the body of a pure function must run in a single thread.

### Pure functions are idempotent and easy to recover

Idempotent is a term normally used in the context of RESTful services for requests using the PUT or DELETE methods. An idempotent request has the same effect whether you send it once or multiple times. A pure function is idempotent because it doesn’t have side effects, so invoking it once or many times doesn’t make any difference. Another way of saying this is that pure functions are safe: you can execute them multiple times without any problem besides the CPU usage.

The execution of a function can fail for many reasons. Imagine a function that takes some time, and then, in the middle of the execution, the machine crashes. Because the output depends on the input, you can simply retry the function’s execution, and you’ll be sure to get the result you’re expecting. If the result is very important, you could run the function on multiple machines and collect the result from the first one that completes.
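A minimal sketch of this recovery pattern, assuming a hypothetical `retry` helper that isn’t part of the starter project:

```kotlin
// Hypothetical retry helper: because f is pure, re-running it after a
// failure is safe and always yields the same result for the same input.
fun <A, B> retry(times: Int, input: A, f: (A) -> B): B {
  var lastError: Throwable? = null
  repeat(times) {
    try {
      return f(input)
    } catch (t: Throwable) {
      lastError = t // execution failed mid-way: just try again
    }
  }
  throw lastError ?: IllegalStateException("retry exhausted")
}

fun main() {
  var attempts = 0
  // Simulate an environment that crashes the first two executions.
  val result = retry(3, 10) { x: Int ->
    attempts++
    if (attempts < 3) error("transient failure")
    x * 2
  }
  println(result) // 20
}
```

Retrying is only safe because a pure function has no side effects: partial, failed executions leave no trace behind.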

### Pure functions are easy to memoize

Like in the previous paragraph, imagine you have a function that takes some time to execute. If the function is pure, for the same input value, you’ll always get the same output value. If the function takes some time and you already know the result for a given input, why would you repeat the computation?

You’d always get the same result. The best solution, then, is to save the previous result somewhere and return it if you invoke the function with the same input. This process is called memoization.
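As a taste of what’s coming, here’s a minimal memoization sketch; the cache and function names are illustrative, not from the starter project:

```kotlin
// A cache from input to previously computed output.
val squareCache = mutableMapOf<Int, Int>()

fun slowSquare(x: Int): Int = x * x // imagine this is expensive

// Only compute on a cache miss; otherwise, return the saved result.
fun memoizedSquare(x: Int): Int =
  squareCache.getOrPut(x) { slowSquare(x) }

fun main() {
  println(memoizedSquare(4)) // computes and caches 16
  println(memoizedSquare(4)) // returns the cached 16 without recomputing
}
```

This caching is only safe because `slowSquare` is pure: an impure function could legitimately return different values for the same input, and the cache would silently return stale results.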

Note: In Chapter 5, “Higher-Order Functions”, you’ll see how to implement a higher-order function for adding memoization to any other existing function.

### Pure functions are lazily executable

There’s a definition saying that “a function is a way to describe some code you can maybe run later”. This definition is apt because it makes explicit that you can:

• Execute the function long after you’ve implemented it.
• Completely avoid the execution of the function, saving time and resources.

Because pure functions don’t depend on the universe around them, you can execute them at any time, wait to execute them, or even never execute them if you never need the output value!
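As a small sketch using Kotlin’s standard `lazy` delegate (the function name is illustrative):

```kotlin
fun expensiveTwice(x: Int): Int {
  println("computing...") // visible only when evaluation actually happens
  return x * 2
}

fun main() {
  // Wrap the call in a lazy delegate: nothing runs yet.
  val deferred by lazy { expensiveTwice(21) }
  println("before first use")
  println(deferred) // the computation runs here, once
  println(deferred) // cached: no recomputation, no second "computing..."
}
```

If `deferred` were never read, `expensiveTwice` would never run at all, which is exactly the resource saving laziness promises.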

In the next chapter, Chapter 4, “Expression Evaluation, Laziness & More About Functions”, you’ll learn more about laziness and why it’s an important feature of pure functions.

## Referential transparency and the substitution model

In the previous part of the chapter, you learned why pure functions are important. You also learned that a pure function always returns the same output values when you provide the same input values. As previously mentioned, when you write the body of a pure function, you basically write an expression. You can define an expression as a combination of constants, variables, operators and functions that your computer evaluates to produce another value. The following are some basic examples of expressions:

```kotlin
val exp1 = 1 + 2 * 2 // 1
val exp2 = exp1 * 20 // 2
val exp3 = 2 * (exp2 - exp1) // 3
```

Here, you:

1. Define `exp1` as the sum of `1` and the result of multiplying `2` by `2`.
2. Reuse the result of the previous expression, `exp1`, in a second one, `exp2`, which multiplies the previous value by `20`.
3. Reuse both previously calculated expressions, `exp1` and `exp2`, in a third, slightly more complex one, `exp3`.

To get the result of `exp3`, you basically reduce all the other expressions until you get a single value. In the previous case, you calculate the result like this:

```
exp1 == 1 + 2 * 2 == 5
exp2 == exp1 * 20 == 5 * 20 == 100
exp3 == 2 * (exp2 - exp1) == 2 * (100 - 5) == 2 * 95 == 190
```

An expression like `exp3` is referentially transparent if, in every place it’s used, you can replace it with its final result. More formally, any program using the expression `exp3` shouldn’t change its behavior if you replace `exp3` with `190`.

A function `f` is pure if the expression `f(x)` is referentially transparent for all referentially transparent `x`. If an expression isn’t referentially transparent, you say it’s referentially opaque.

It’s worth seeing some more examples.

### Examples of referentially transparent expressions

Referential transparency is a mandatory property of pure functions. Keep reading for some more examples.

Consider the following expressions:

```kotlin
// 1
val a = expr
val (a1, a2) = a to a

// 2
val (a1, a2) = expr to expr
```

Is the definition in 1 equivalent to the one in 2? The answer is yes, but only if `expr` is referentially transparent. To prove that, copy the following code into ReferentialTransparency.kt:

```kotlin
fun main() {
  val expr1 = { 3 * 10 } // 1
  val (a1, a2) = expr1() to expr1() // 2
  val expr1Eval = expr1() // 3
  val (a1Eval, a2Eval) = expr1Eval to expr1Eval // 4
  assertOrThrow("expr1 is not RT") { // 5
    a1 == a1Eval && a2 == a2Eval
  }
}
```
``````

In this code, you:

1. Define the simple referentially transparent expression `expr1`, multiplying `3` and `10`. Note how the expression is `3 * 10`, but you put it as the body of a lambda for it to be evaluated later.
2. Create a `Pair<Int, Int>` evaluating `expr1` for the first and second values.
3. Evaluate `expr1` once in `expr1Eval`.
4. Create another `Pair<Int, Int>` using `expr1Eval` twice.
5. Assert that the two `Pair<Int, Int>` are the same. You find `assertOrThrow` in Ext.kt in the util package.

Run the code, and you’ll see that no exception is thrown. This means the two expressions are equivalent, so `expr1` is referentially transparent. Can you do the same with a referentially opaque function? To verify that, add the following code to `main`. It differs from the previous example because it uses `expr2`, which isn’t referentially transparent: it changes an outside `count` variable:

```kotlin
var count = 1 // 1
val expr2 = { 3 * 10 * ++count } // 2
val (b1, b2) = expr2() to expr2()
val expr2Eval = expr2()
val (b1Eval, b2Eval) = expr2Eval to expr2Eval
assertOrThrow("expr2 is not RT") {
  b1 == b1Eval && b2 == b2Eval
}
```

In this case, `expr2` isn’t referentially transparent because it returns different values every time you invoke it. That’s because it depends on the value of `count`, which it changes at every execution. When you run the code, you now get:

```
Exception in thread "main" java.lang.AssertionError: expr2 is not RT
```

In this case, the number of times you execute `expr2` changes its output, making it referentially opaque.

Exercise 3.3: Is `expr3`:

```kotlin
val expr3 = 42
val (a1, a2) = expr3 to expr3
```

The same as the following?

```kotlin
val (a1, a2) = 42 to 42
```

Give it a try and check Appendix C to see how you did.

Consider, now, the following expression:

```kotlin
val expr3 = { println("Hello World!") }
```

Is this referentially transparent? The answer is no. But if you add and run the following code in ReferentialTransparency.kt, you don’t get any errors:

```kotlin
val expr3 = { println("Hello World!") }
val (c1, c2) = expr3() to expr3()
val expr3Eval = expr3()
val (c1Eval, c2Eval) = expr3Eval to expr3Eval
assertOrThrow("expr3 is not RT") {
  c1 == c1Eval && c2 == c2Eval
}
```

So, what’s wrong? The previous code just tests if the output of the expression is always the same for the given input. In this case, there’s no input, and the output is the output of `println`, which is `Unit`. To understand why `expr3` isn’t pure, you need to check the definition.

Given:

```kotlin
val expr3 = { println("Hello World!") }
```

Is the program:

```kotlin
val (c1, c2) = expr3() to expr3()
```

Equivalent to the following?

```kotlin
val expr3Eval = expr3()
val (c1Eval, c2Eval) = expr3Eval to expr3Eval
```

The answer, again, is no. The first produces as output:

```
Hello World!
Hello World!
```

The second produces:

```
Hello World!
```

In the second case, you’ve replaced the expression’s invocation with its result value, and the program has changed. This proves that `expr3` isn’t referentially transparent.

In the previous examples, you’ve basically replaced every invocation of a given expression with its result and checked if the resulting program was somehow different. This model of substituting values for expressions is called the substitution model.

Exercise 3.4: Suppose you have the following code:

```kotlin
val CounterIterator = object : Iterator<Int> {

  private var count = 0

  override fun hasNext(): Boolean = true

  override fun next(): Int = count++
}
```

Is the following expression referentially transparent?

```kotlin
val expr4 = { CounterIterator.next() }
```

Give it a try and check Appendix C to see how you did.

### Referentially transparent expressions and Kotlin inline functions

You just learned that you can replace a referentially transparent expression with its result in every place it’s used without breaking the related program. This probably reminds you of the concept of inline functions in Kotlin. The question you want to answer now is: Can a referentially transparent function always be inlined? To find out, consider the following function:

```kotlin
fun cube(x: Int): Int = x * x * x
```

This simple function calculates the cube of the input value. This is clearly a pure function, and its body contains an expression that’s clearly referentially transparent. Now, add the following new function to the same file:

```kotlin
fun doubleCube(x: Int) = cube(x) + cube(x)
```

This is another simple pure function calculating the sum of the values you get from invoking `cube` twice. To calculate the result of `doubleCube` for `2`, do the following:

```
doubleCube(2) ==
cube(2) + cube(2) ==
(2 * 2 * 2) + (2 * 2 * 2) ==
8 + 8 ==
16
```

You apply the substitution model and reduce the expression to the single value `16`. Because the expression in the body of `doubleCube` is referentially transparent, you could replace every call to `doubleCube(2)` with the simple value `16`.

Write and run the following code to verify that the result is `16`:

```kotlin
fun main() {
  assertOrThrow("") {
    doubleCube(2) == 16
  }
}
```

It’s interesting now to look at the code Kotlin generates from the previous code.

In the Tools menu, go to the Kotlin submenu and select the Show Kotlin Bytecode option, like in Figure 3.5:

This will display some bytecode like in Figure 3.6.

Reading Kotlin bytecode can be made simpler with the Decompile button, so press that now.

That leads to a more readable representation of the code the Kotlin compiler generates from your source code, like in Figure 3.7:

Note: Don’t worry if you see some compilation errors in the decompiled code. What you see here isn’t supposed to compile successfully, but it’s useful to have an idea of how the Kotlin compiler works.

As you can see, the generated code isn’t much different from the one you wrote initially. You still have:

1. The same definition of `cube`.
2. `doubleCube`, which still invokes `cube` twice.

As said, `cube` and `doubleCube` are pure functions, and the expressions in their bodies are referentially transparent. What about defining them as inline functions? Change the previous implementations to include the `inline` keyword. They should now look like this:

``````
inline fun cube(x: Int): Int = x * x * x

inline fun doubleCube(x: Int) = cube(x) + cube(x)
``````

The only difference is the use of the `inline` keyword. If you now repeat the process to get the Kotlin bytecode, this time removing what’s not critical, you’ll get something like Figure 3.8:

As you can see, the Kotlin compiler has replicated the expression you wrote in `cube` in every place where the same function has been invoked in `doubleCube`. If you look at the generated code, you also note that:

1. Kotlin generates a variable, `var10000`, with a copy of the expression in `cube`.
2. The return value of `doubleCube` reuses `var10000` and then another copy of the expression in `cube`.

For a complete application of the substitution model, the return value would’ve been:

``````
return x * x * x + x * x * x
``````

You can say, then, that you can inline a pure function, but what’s really happening isn’t a complete application of the substitution model. In addition, when you look carefully at your source code, you’ll see a warning from IntelliJ, like in Figure 3.9:

The message basically says the performance benefit you get by inlining a function like `cube` or `doubleCube` is insignificant. It also says the real benefits appear when a function has parameters of functional types, meaning higher-order functions, which you’ll learn about in detail in Chapter 5, “Higher-Order Functions”.
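To see where inlining does pay off, here’s a minimal sketch, using hypothetical names, of an inline function with a parameter of functional type. Inlining copies both the function body and the lambda into the call site, so no function object needs to be allocated for the lambda:

```kotlin
// A hypothetical inline higher-order function: because `op` has a functional
// type, the compiler copies its body directly into each call site.
inline fun applyTwice(x: Int, op: (Int) -> Int): Int = op(op(x))

fun main() {
  // After inlining, this call compiles roughly to (10 * 10) * (10 * 10),
  // with no object allocated for the lambda.
  println(applyTwice(10) { it * it })
}
```

You can inspect the result with the same Show Kotlin Bytecode tool: after inlining, the lambda disappears from the generated code.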

## Side effects

You know that a pure function returns output values, which depend only on the values you provide as input. The function calculates the output value, resolving the referentially transparent expression you define in its body. You also learned that a pure function doesn’t change the state of the universe. This means you can run it an unlimited number of times without any external effect.

It’s very easy to fall into the trap of thinking that side effects are things you must avoid completely. This isn’t actually true, because side effects are the reason you create and run programs in the first place. For instance, if you want your app to persist some data in a database or fetch some data from the network, you need functions with side effects. Side effects are everywhere. So, what’s the problem?

One of the problems you want to solve with functional programming is to control side effects or isolate and execute them safely. As said, you’ll study side effects much more in the following chapters, but it’s now important to give an example of what it means to have control of side effects. Open SideEffects.kt, and add the following code:

``````
fun shiftLeft(x: Int) = x shl 1 // 1

fun not(x: Int) = x.inv() // 2
``````

These are two basic pure functions that:

1. Shift the input value left by 1 position.
2. Negate the binary representation of the input value.

Using `after`, you can easily compose the two functions like this:

``````
val shiftLeftAndNot = ::not after ::shiftLeft
``````

You can test `shiftLeftAndNot` by running the following code:

``````
fun main() {
  val comp1 = not(shiftLeft(10))
  val comp2 = shiftLeftAndNot(10)
  assertOrThrow("comp1 != comp2") {
    comp1 == comp2
  }
}
``````

So far, so good! Now, suppose you want to log the operation you’re executing in `shiftLeft` and `not`. How could you solve this problem in a functional way?

A first, impure, approach would be the following. Add it to SideEffects2.kt:

``````
fun shiftLeftAndLog(x: Int): Int {
  println("Shift Left of $x") // 1
  return x shl 1
}

fun notAndLog(x: Int): Int {
  println("Negate $x") // 2
  return x.inv()
}
``````

In this code, you:

1. Used `println` to print a message in the body of `shiftLeftAndLog`.
2. Did the same for `notAndLog`.

The new functions `shiftLeftAndLog` and `notAndLog` are not pure because they have side effects! Each function is changing the universe by writing something in the standard output. The side effects aren’t under control, and it’s like each function is firing its own effect in a different direction.

The first consequence involves testability. How can you test `shiftLeftAndLog` and `notAndLog`? Trying to improve testability is a reason to improve the previous example by introducing the `Logger` abstraction. Add the following code in SideEffects3.kt:

``````
interface Logger { // 1
  fun log(msg: String)
}

val StdLogger = object : Logger { // 2
  override fun log(msg: String) {
    println(msg)
  }
}

fun shiftLeftWithLogger(logger: Logger, x: Int): Int { // 3
  logger.log("Shift Left of $x") // 4
  return x shl 1
}

fun notWithLogger(logger: Logger, x: Int): Int { // 3
  logger.log("Negate $x") // 4
  return x.inv()
}
``````

In this code, you:

1. Define `Logger` as the abstraction for the object responsible for logging.
2. Provide `StdLogger` as a `Logger` implementation for the standard output.
3. Add `Logger` as an additional input parameter and create `shiftLeftWithLogger` and `notWithLogger`.
4. Use the `Logger` in the body of `shiftLeftWithLogger` and `notWithLogger`.

`shiftLeftWithLogger` and `notWithLogger` are impure, and they have side effects. However, they are — compared to the previous implementations — easier to test. You just need to provide a `MockLogger` implementation like the following:

``````
class MockLogger : Logger {

  private val internalLog = StringBuilder()
  val log: String
    get() = internalLog.toString()

  override fun log(msg: String) {
    internalLog.append(msg)
  }
}
``````

And test the previous functions by adding the following code to `main` in SideEffects3.kt:

``````
fun main() {
  val mockLogger1 = MockLogger()
  shiftLeftWithLogger(mockLogger1, 10)
  assertOrThrow("Problem testing shiftLeft()") {
    mockLogger1.log == "Shift Left of 10"
  }
  val mockLogger2 = MockLogger()
  notWithLogger(mockLogger2, 10)
  assertOrThrow("Problem testing not()") {
    mockLogger2.log == "Negate 10"
  }
}
``````

Here, you’re basically trying to put the result of the side effect into some sort of container, `MockLogger`, and verifying its content at the end of the test. Run `main` and check that everything is fine without any exceptions.

The biggest problem is that now `shiftLeftWithLogger` and `notWithLogger` aren’t composable, like the original `shiftLeft` and `not` were in SideEffects.kt. The output of `shiftLeftWithLogger` has type `Int`, but the input of `notWithLogger` now needs two parameters: one of type `Logger` and one of type `Int`.

How can you implement the initial `shiftLeft` and `not` implementations with logging capabilities as pure functions, making them easier to test and compose?

### A first look at composition with side effects

You’ll learn how to handle side effects in great detail in the following chapters of this book, particularly in Chapter 8, “Composition”. In this case, it’s worth first seeing an example of how you can implement composition when you have a function with side effects.

In the previous examples, you implemented the side effect, invoking `println` in the body of `shiftLeftAndLog` and `notAndLog` or delegating to a `Logger` in `shiftLeftWithLogger` and `notWithLogger`. Now, use a different approach and write the following code in SideEffects4.kt:

``````
fun shiftLeftWithEffect(x: Int): Pair<Int, String> { // 1
  return x shl 1 to "Shift Left of $x" // 2
}

fun notWithEffect(x: Int): Pair<Int, String> { // 1
  return x.inv() to "Negate $x" // 2
}
``````

In this code, you:

1. Decorated the output value of `shiftLeftWithEffect` and `notWithEffect` with the information you want to use for the log.
2. Returned a `Pair<Int, String>` with the result of the original function as the first value and the log message as the second.

The idea is to include the description of the side effect in the return value of the function. Both `shiftLeftWithEffect` and `notWithEffect` are pure! Unfortunately, they aren’t composable yet: the input of `notWithEffect` is `Int`, but the output of `shiftLeftWithEffect` is `Pair<Int, String>`.
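You can check this purity claim directly: because the log message is now part of the return value, calling the function twice with the same input produces exactly the same `Pair`, and nothing is printed along the way. A small sketch, repeating `shiftLeftWithEffect` so it runs on its own:

```kotlin
fun shiftLeftWithEffect(x: Int): Pair<Int, String> =
  x shl 1 to "Shift Left of $x"

fun main() {
  val first = shiftLeftWithEffect(10)
  val second = shiftLeftWithEffect(10)
  // Same input, same output Pair: the expression is referentially transparent.
  check(first == second)
  println(first) // (20, Shift Left of 10)
}
```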

Again, you’ll see these concepts in greater detail later, but now look at the previous functions and their signatures. Define the following `typealias`:

``````
typealias Writer<A, B> = (A) -> Pair<B, String>
``````

Given this alias, you can say that:

1. `shiftLeftWithEffect` has type `Writer<Int, Int>`.
2. `notWithEffect` has the same type `Writer<Int, Int>`.

You now have two functions of the same type, and you just have to define how composition works for them. In the same SideEffects4.kt file, add the following code:

``````
infix fun <A, B, C> Writer<B, C>.after(
  w: Writer<A, B>
): Writer<A, C> = { a: A -> // 1
  val (b, str) = w(a) // 2
  val (c, str2) = this(b) // 3
  c to "$str\n$str2\n" // 4
}
``````

In the higher-order function `after`, you:

1. Use `Writer<B, C>` as receiver and accept an input parameter of type `Writer<A, B>`. The output type is `Writer<A, C>`.
2. Invoke the `Writer<A, B>` you pass as a parameter on the input `a` of type `A` and, using Kotlin destructuring, save the result in two different local variables: `b` of type `B` and `str` of type `String`.
3. Invoke `Writer<B, C>` using `b` from the previous expression, and destructure the result in `c` of type `C` and `str2` of type `String`.
4. Implement the actual composition, returning a `Pair<C, String>` where `first` is the value `c` you got from the previous step and `second` is the concatenation of the two `String`s.

Now, by adding the following code in the same file, you can test everything:

``````
fun main() {
  val f: Writer<Int, Int> = ::shiftLeftWithEffect // 1
  val g: Writer<Int, Int> = ::notWithEffect // 2
  val shiftLeftAndNotWithEffects = g after f // 3
  println(shiftLeftAndNotWithEffects(10).second) // 4
}
``````

In this example, you:

1. Save in `f` the reference to `::shiftLeftWithEffect`, giving it the explicit type `Writer<Int, Int>`.
2. Do the same for `g` and `::notWithEffect`.
3. Create `shiftLeftAndNotWithEffects` using `after`. This is a composition of `f` and `g`, which is the composition of `shiftLeftWithEffect` and `notWithEffect`.
4. Print `second` in the final result, which is the concatenation of the logs.

Run the previous code, and you’ll get the following output:

``````
Shift Left of 10
Negate 20
``````

The new versions of `shiftLeftWithEffect` and `notWithEffect` are pure functions you can compose into another pure function, getting all the advantages you learned earlier.

Note: Don’t worry if this process isn’t completely clear at this point. What you’ve done in this paragraph is anything but obvious, and you’ll have many opportunities to revisit the same concepts in the following chapters of the book.

### Lifting a function to a writer

Look again at the functions you created in SideEffects4.kt in a previous example:

``````
fun shiftLeftWithEffect(x: Int): Pair<Int, String> {
  return x shl 1 to "Shift Left of $x"
}

/** A second version of not */
fun notWithEffect(x: Int): Pair<Int, String> {
  return x.inv() to "Negate $x"
}
``````

These are two functions of type `Writer<Int, Int>`, but they’re different from the original ones of type `Fun<Int, Int>` you defined in SideEffects.kt:

``````
fun shiftLeft(x: Int): Int = x shl 1

fun not(x: Int): Int = x.inv()
``````

Is there a way to get the former version from the latter? Of course there is! Write the following function in SideEffects5.kt:

``````
fun <A, B> Fun<A, B>.liftW(
  log: (A, B) -> String
): Writer<A, B> = { a: A ->
  val b = this(a)
  b to log(a, b)
}
``````

This is an extension function on the `Fun<A, B>` type, which creates a `Writer<A, B>` by adding a `String` as decoration in the resulting type. The log message you use as decoration is the result of applying the function you pass as an input parameter to the input and output values of the receiver function.

Note: `liftW` is another example of a higher-order function. Don’t worry, you’ll learn all about them in Chapter 5, “Higher-Order Functions”.

As you’ll see in Chapter 9, “Data Types”, you’re lifting a function to a different data type: in this case, `Writer<A, B>`. This allows you to start from a function `f` of type `Fun<A, B>` and lift it to a function of type `Writer<A, B>`. With `liftW`, you can now write the following code:

``````
fun main() {
  val shiftLeftAndLog = ::shiftLeft.liftW { a, _ -> // 1
    "Shift Left $a"
  }
  val notAndLog = ::not.liftW { a, _ -> // 2
    "Negate $a"
  }
  val shiftLeftAndNotAndLog = notAndLog after shiftLeftAndLog // 3
  println(shiftLeftAndNotAndLog(10).second) // 4
}
``````

In this code, you:

1. Define `shiftLeftAndLog`, lifting `shiftLeft` to a `Writer<Int, Int>` and passing a lambda containing the logic that creates the log message used as decoration.
2. Do the same for `not`, creating `notAndLog`.
3. Get `shiftLeftAndNotAndLog` from the composition of `shiftLeftAndLog` and `notAndLog`.
4. Use `shiftLeftAndNotAndLog` and print the log in output.

When you run this code, you’ll get exactly the same output you got earlier:

``````
Shift Left 10
Negate 20
``````

You did all this just using pure functions and composition!

Exercise 3.5: The `Writer<A, B>` data type you defined earlier is a very important concept in functional programming. If you use types as objects and the functions `Writer<A, B>` as morphisms, you get a very special category: the Kleisli category.

Can you prove that by using types as objects and `Writer<A, B>` as morphisms, you get a category? Look back at Chapter 2 if you need a reminder for how to prove this.

Again, give it a try and check the solution in Appendix C to see how you did.
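If you want a hint before opening Appendix C: the interesting part is the identity morphism, a `Writer<A, A>` that leaves the value untouched and contributes an empty log. The sketch below repeats the `Writer` definitions so it runs on its own and introduces a hypothetical `idW`. It shows that the values behave as an identity should, while the exact log format influences whether the identity laws hold to the letter:

```kotlin
typealias Writer<A, B> = (A) -> Pair<B, String>

infix fun <A, B, C> Writer<B, C>.after(w: Writer<A, B>): Writer<A, C> = { a: A ->
  val (b, str) = w(a)
  val (c, str2) = this(b)
  c to "$str\n$str2\n"
}

// A candidate identity morphism: it returns the value unchanged
// and contributes an empty log.
fun <A> idW(): Writer<A, A> = { a -> a to "" }

fun main() {
  val f: Writer<Int, Int> = { x -> x shl 1 to "Shift Left of $x" }
  // The values agree, which is the core of the proof...
  check((f after idW<Int>())(10).first == f(10).first)
  check((idW<Int>() after f)(10).first == f(10).first)
  // ...but compare the log strings too: the newlines `after` adds mean
  // the logs differ, a detail worth thinking about in the exercise.
  println((f after idW<Int>())(10).second)
  println((idW<Int>() after f)(10).second)
}
```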

## Challenges

In this chapter, you learned two of the most important concepts in functional programming: referential transparency and side effects. You also learned how to recognize a pure function. Now it’s time for a few challenges:

### Challenge 1: Pure or impure?

Is `inc3` a pure function? Why or why not?

``````
var count = 0

fun inc3(x: Int): Int {
  val result = x + ++count + 1
  println("Res: $result") // WHAT IF YOU REMOVE THIS?
  --count
  return result
}
``````

What if you remove the `println()` with the comment?

### Challenge 2: Pure or impure?

Is `output` a pure function? Why or why not?

``````
fun output(x: Int): Unit = println("x = $x")
``````

### Challenge 3: Pure or impure?

Is `randomAdd` a pure function? Why or why not?

``````
fun randomAdd(a: Int, b: Int): Int = a + b + Random.nextInt()
``````

## Key points

• A function `f` is pure if its body is a referentially transparent expression and it doesn’t have side effects.
• A pure function always returns the same output values for the same input values.
• A pure function depends only on its input parameters and nothing else.
• Pure functions are easy to test and debug.
• You can run pure functions in parallel.
• Because the output of a pure function for a given input is always the same, you can evaluate its expression only once and memoize it for following invocations.
• Pure functions are lazy. You can defer their execution until you really need them.
• An expression is referentially transparent if replacing every occurrence in a program with its value doesn’t alter the behavior of the program itself.
• The substitution model is a way to test if an expression is referentially transparent.
• An expression that isn’t referentially transparent is referentially opaque.
• If the execution of a function changes the state of the universe, the function is not pure, and you have side effects.
• Side effects aren’t a problem per se, but you need to control them to reduce the complexity of your program.
• The `Writer<A, B>` data type is an example of how you can control side effects.
• Using `Writer<A, B>` as morphisms and types as objects, you define the Kleisli category.
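One of the key points above, memoization, deserves a quick sketch. Because a pure function like `cube` always returns the same output for the same input, you can safely cache its results. The `memoize` helper below is hypothetical, not part of the chapter’s code:

```kotlin
fun cube(x: Int): Int = x * x * x

// A hypothetical memoizing wrapper: caching results is safe
// only because the wrapped function is pure.
fun <A, B> memoize(f: (A) -> B): (A) -> B {
  val cache = mutableMapOf<A, B>()
  return { a -> cache.getOrPut(a) { f(a) } }
}

fun main() {
  val fastCube = memoize(::cube)
  println(fastCube(3)) // evaluated once: 27
  println(fastCube(3)) // served from the cache: 27
}
```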

## Where to go from here?

Wow, congrats! In this chapter, you did a lot of work and learned many fundamental concepts about functions and functional programming in general. You had the opportunity to see examples of pure and impure functions and learned how to recognize them using the substitution model. You’ve achieved a lot already because you’ve encountered side effects for the first time and, with `Writer<A, B>`, you’ve seen for the first time how to control and compose them.

You did a great job, but you’ve still got many concepts to learn — starting with the concepts of strictness, laziness, immutability and tail recursion. You’ll learn about how they’re related to functional programming and how they can help you create better code.

Have a technical question? Want to report a bug? You can ask questions and report bugs to the book authors in our official book forum here.