Day 7 is an optimisation problem. We're given a *loss function* (here the fuel cost) and asked to find the optimal solution that minimises the loss. There are many ways to solve these problems.

- One approach is brute force: the optimal solution has to lie in the range spanned by the submarines, and that's only about 2000 positions, so I could just try all of them and pick the lowest.
- Some optimisation problems have neat closed-form solutions where you just plug the values into some equation and it spits out the answer. Those generally involve mathematically "neat" functions, and the absolute values in this cost make a closed form unlikely here.
- Gradient descent is a common way to solve optimisation problems. It relies on the loss function being "smooth": the loss decreases as you approach the optimal solution and increases as you move away from it.

That's what I did.

The loss function I'm minimising is the fuel cost. I assume, without proof, that the loss is unimodal: every step I take away from the optimal solution increases the loss, and it never decreases as I move away. (With an even number of submarines the part 1 loss can briefly plateau, but only across equally-optimal positions, so that doesn't cause trouble here.)
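That assumption is easy to spot-check. A minimal sketch (using a made-up toy list of submarine positions, not the puzzle input): evaluate a part 1 style loss at every candidate position and watch it fall and then rise.

```haskell
-- Spot-check of the unimodality assumption on a made-up list of
-- submarine positions (not the puzzle input).
lossAt :: [Int] -> Int -> Int
lossAt subs target = sum [abs (target - s) | s <- subs]

main :: IO ()
main = print [lossAt [0, 1, 4, 9, 10] p | p <- [0 .. 6]]
-- prints [24,21,20,19,18,19,20]: strictly falling, then strictly rising
```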

That leads to a simple gradient descent function. I guess a possible solution. I calculate the loss at that position, and the loss one position to each side. If the loss *here* is the lowest, I've found the optimal solution. If not, I move the guess one step towards the side with the lower loss.

```
{-# LANGUAGE MultiWayIf #-}  -- needed for the guarded "if |" syntax

gradientDescend :: ([Int] -> Int -> Int) -> [Int] -> Int -> Int
gradientDescend loss subs guess = 
  if | lossLower < lossHere -> gradientDescend loss subs (guess - 1)
     | lossHigher < lossHere -> gradientDescend loss subs (guess + 1)
     | otherwise -> guess
  where lossHere = loss subs guess
        lossLower = loss subs (guess - 1)
        lossHigher = loss subs (guess + 1)
```

Note that the loss function to use is an argument of `gradientDescend`. The only difference between parts 1 and 2 is the definition of the loss function, so I define those two, ready to be passed into `gradientDescend`.

```
loss1 :: [Int] -> Int -> Int
loss1 subs target = sum $ map diff subs
  where diff s = abs (target - s)

loss2 :: [Int] -> Int -> Int
loss2 subs target = sum $ map triangleDiff subs
  where diff s = abs (target - s)
        triangleDiff s = (diff s * (diff s + 1)) `div` 2
```
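As a quick check (not part of the solution file), the worked example in the puzzle statement — positions 16,1,2,0,4,2,7,1,2,14 — should cost 37 fuel in part 1 and 168 in part 2. A self-contained sketch that repeats the definitions above, with the losses rewritten as list comprehensions:

```haskell
{-# LANGUAGE MultiWayIf #-}

-- Self-contained check of the descent against the puzzle's worked example.
gradientDescend :: ([Int] -> Int -> Int) -> [Int] -> Int -> Int
gradientDescend loss subs guess =
  if | lossLower < lossHere -> gradientDescend loss subs (guess - 1)
     | lossHigher < lossHere -> gradientDescend loss subs (guess + 1)
     | otherwise -> guess
  where lossHere = loss subs guess
        lossLower = loss subs (guess - 1)
        lossHigher = loss subs (guess + 1)

loss1, loss2 :: [Int] -> Int -> Int
loss1 subs target = sum [abs (target - s) | s <- subs]
loss2 subs target = sum [d * (d + 1) `div` 2 | s <- subs, let d = abs (target - s)]

sample :: [Int]
sample = [16, 1, 2, 0, 4, 2, 7, 1, 2, 14]

main :: IO ()
main = do print (loss1 sample (gradientDescend loss1 sample 5))  -- 37
          print (loss2 sample (gradientDescend loss2 sample 5))  -- 168
```

Starting from position 5, the part 1 descent walks down to position 2 (cost 37) and the part 2 descent stays at position 5 (cost 168), matching the puzzle's worked example.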

Finally, read the input, make a vaguely-sensible guess of where to start, and kick off the gradient descent with the correct loss function.

```
{-# LANGUAGE TypeApplications #-}  -- for read @Int

import Data.List.Split (splitOn)  -- from the "split" package

main :: IO ()
main = 
  do text <- readFile "data/advent07.txt"
     let subs = map (read @Int) $ splitOn "," text
     print $ part1 subs
     print $ part2 subs

part1, part2 :: [Int] -> Int
part1 = bestFuel loss1
part2 = bestFuel loss2

bestFuel :: ([Int] -> Int -> Int) -> [Int] -> Int
bestFuel loss subs = loss subs best
  where meanSub = sum subs `div` length subs
        best = gradientDescend loss subs meanSub
```
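As an aside (my observation, not part of the solution above): the mean is a sensible starting guess because part 2's triangle-number cost is roughly quadratic, and a quadratic cost is minimised near the mean. For part 1, the L1 cost is minimised at a median of the positions, so sorting and taking the middle element would answer part 1 with no descent at all. A sketch, using the puzzle's worked example:

```haskell
import Data.List (sort)

-- Aside: the L1 cost of part 1 is minimised at a median, so the
-- median position answers part 1 directly, no descent needed.
medianSub :: [Int] -> Int
medianSub subs = sort subs !! (length subs `div` 2)

main :: IO ()
main = print (medianSub [16, 1, 2, 0, 4, 2, 7, 1, 2, 14])  -- 2
```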

## Code

You can get the code from my locally-hosted Git repo, or from Gitlab.