2009 Round 1A
Multi-base happiness
We’ll skip a brute-force solution for the small input, as it is a sub-problem we encounter when solving the large input.
The "Important Note" in the problem description suggests one strategy: precompute all the answers, since there are only 502 possible inputs, each being a subset of [2..10] containing at least two elements.
However, we still need a little ingenuity to find all the happy numbers in a reasonable time.
The only reasonable way to find the smallest happy number for a given set of bases is to try each number in turn. Even if there were advanced and esoteric mathematical tricks that reveal surprising properties of sums of squares of digits in various bases, it would be unfair to expect a Code Jam contestant to find them within a few hours.
Thus for the worst case, namely the smallest number that is happy in all bases [2..10], since we must try each number one by one, the answer must be less than about 10^10. Such a number has at most 10 decimal digits, so the sum of the squares of its digits is at most 10 * 81 = 810 in base 10, and smaller still in the other bases.
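To double-check that bound, here is a throwaway snippet of our own (the names bound and nDigits are ours; it is not part of the solution) that computes the worst-case digit-square sum for numbers below 10^10 in each base:

-- A number below 10^10 has at most nDigits digits in base b,
-- each at most b - 1, so its digit-square sum is at most bound b.
bound b = nDigits * (b - 1) ^ 2
  where nDigits = head [k | k <- [1..], b ^ k >= 10 ^ 10]

main = print $ maximum $ bound <$> [2..10]  -- prints 810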
Let’s call it an even 1000. For each base in [2..10], we determine by brute force whether each number in [1..1000] is happy, and record the result. Then for any number greater than 1000, a single summation of its squared digits and one array lookup tells us whether it is happy.
To check that 1000 is sufficiently high, we find the minimum happy number for all bases in [2..10], arranging our program to halt with an error message if the limit is exceeded:
import Data.Array

main = print $ minHappy [2..10]

minHappy bs = head $ filter ((`all` bs) . happy) [2..]

next n b = sum $ (^2) <$> rBase n b

lim = 1000

caches = listArray ((2, 1), (10, lim)) $ build [] <$> [2..10] <*> [1..lim]

build _ b 1 = True
build s b n | n `elem` s = False
            | otherwise  = build (n:s) b $ next n b

rBase 0 _ = []
rBase n b = r : rBase q b where (q, r) = divMod n b

happy n b | next n b > lim = error "overflow"
          | otherwise      = caches!(b, next n b)
We compute the digit representation in a given base in reverse, due to our use of Haskell lists. We can skip reversing them, because the sum of their squares is the same either way.
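For example (our own illustration), rBase 2009 10 evaluates to [9,0,0,2] rather than [2,0,0,9], but the sum of the squares of the digits, 85, is the same either way.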
After about a minute and a half on my laptop, this program prints 11814485, whose sum-of-squared-digits is well within our limit.
We can compute a lookup table for all subsets of [2..10] and print it:
table = [(bs, show $ minHappy bs) | bs <- subsequences [2..10]]

genMain = print $ show table
If we then paste the output back into our program as a string named pre, we can solve the large input almost instantly with:
import Data.List
import Data.Maybe
import Jam

main = jam $ fromMaybe undefined . (`lookup` read pre) <$> getints

pre = "..."  -- Replace with the precomputed lookup table.
It turns out we could have avoided this dance. Computing the entire table only takes about two and a half minutes on my laptop, so we could have used the following solution instead:
import Data.Array
import Data.List
import Data.Maybe
import Jam

main = jam $ fromMaybe undefined . (`lookup` table) <$> getints

table = [(bs, show $ minHappy bs) | bs <- subsequences [2..10]]

minHappy bs = head $ filter ((`all` bs) . happy) [2..]

next n b = sum $ (^2) <$> rBase n b

lim = 1000

caches = listArray ((2, 1), (10, lim)) $ build [] <$> [2..10] <*> [1..lim]

build _ b 1 = True
build s b n | n `elem` s = False
            | otherwise  = build (n:s) b $ next n b

rBase 0 _ = []
rBase n b = r : rBase q b where (q, r) = divMod n b

happy n b = caches!(b, next n b)
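Note that subsequences [2..10] produces all 512 subsets, including the empty set and the nine singletons; those ten extra table entries cost almost nothing to compute, and only the 502 valid inputs are ever looked up.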
Crossing the Road
Our instincts may cause us to believe this is a graph search problem, with each corner of each intersection being a node, and the street crossings and sides of blocks being edges. However, the traffic lights mean that the weights of the edges change over time.
We need new ideas. One obvious observation is that we know the minimum time needed to reach our starting point: zero! Also obvious is that the minimum time needed to reach the corner in the direction of the green traffic light from our starting point is 1 minute. We can build on this a little: from the 1-minute corner, the best time for reaching the next corner along the block is 1 + 2 = 3 minutes.
Thus we may wonder what we can learn from knowing the minimum time needed to reach a given location. Unfortunately, we quickly see this is too little information. Even if we can walk immediately to a nearby location, how do we know going via our current location is the quickest way?
However, suppose we alter the question slightly. What can we learn from knowing the minimum time to reach all locations that can be reached within t minutes?
This line of thinking leads us to consider a priority queue where each element is a location p ordered by the minimum known time t needed to reach it. Initially it holds just the starting point and time zero.

Each iteration, we delete the minimum element (t, p). If p has already been visited, that is, if we have already deleted an element containing p, then we ignore it. Otherwise, we record that we have visited p and set the minimum time needed to reach p to t. Next, we compute the time taken to reach the adjacent locations from p, and insert the unvisited ones into the queue.
We can inductively show this method produces the fastest way of reaching each location: when p is first deleted, its associated time t is the minimum in the priority queue, so there can be no faster way to reach p, for all other ways require first reaching a location further along the priority queue, and hence must take at least as long.
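This is Dijkstra's algorithm with lazy deletion. To separate the search loop from the traffic-light arithmetic, here is a stripped-down sketch of our own (neighbours, start, and target are placeholder names), using the same Data.PQueue.Min and Data.Set modules as the full solution below:

import qualified Data.PQueue.Min as PQ
import qualified Data.Set as S

-- A minimal sketch of the search loop described above. `neighbours t p`
-- returns (arrival time, location) pairs for the locations adjacent to p,
-- given that we arrive at p at time t.
dijkstra :: Ord p => (Int -> p -> [(Int, p)]) -> p -> p -> Int
dijkstra neighbours start target = go S.empty (PQ.singleton (0, start))
  where
    go visited queue
      | p == target        = t
      | S.member p visited = go visited queue'
      | otherwise          = go (S.insert p visited) $
          foldr PQ.insert queue' (neighbours t p)
      where ((t, p), queue') = PQ.deleteFindMin queue

The full program has the same shape; there a location is an intersection (x, y) paired with signs (dx, dy) picking out a corner, and the neighbour costs include the time spent waiting at the lights.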
import Data.Array
import Data.Bool
import Data.List
import Data.List.Split
import qualified Data.PQueue.Min as PQ
import qualified Data.Set as S
import Jam

main = jam $ do
  [n, m] <- getints
  a <- listArray ((1, 1), (n, m)) . chunksOf 3 . map read . words . unwords <$> getsn n
  let tgt = ((1, m), (-1, 1))
      f v q
        | p == tgt     = t
        | S.member p v = f v q'
        | otherwise    = f (S.insert p v) $ foldl' (flip PQ.insert) q' nbrs
        where
          ((t, p@((x, y), (dx, dy))), q') = PQ.deleteFindMin q
          [s, w, offset] = a!(x, y)
          tc = mod (t - offset) (s + w)
          nbrs = filter (inRange (bounds a) . fst . snd)
            [ (t + 2, ((x + dx, y), (-dx, dy)))
            , (t + 2, ((x, y + dy), (dx, -dy)))
            , (t + bool (s + w - tc + 1) 1 (tc < s), ((x, y), (-dx, dy)))
            , (t + bool 1 (s - tc + 1) (tc < s), ((x, y), (dx, -dy)))
            ]
  pure $ show $ f S.empty $ PQ.singleton (0, ((n, 1), (1, -1)))
The words . unwords seems redundant at first glance, but we have a list of strings containing spaces, and we must insert spaces between them when concatenating them before passing the whole lot to words.
The first time I attempted this problem, I overlooked that the best solution may involve walking south or west, so I was unable to pass even the small input.
Collecting Cards
Despite being worth the most points, this problem seems to be the easiest to code. Or maybe Haskell happens to be especially suitable for these sorts of problems.
If we have all c cards, then we stop buying booster packs, giving an expected value of 0. Otherwise, if we have k < c cards, then we must buy at least one more booster pack to complete the set.
Let f k be the expected number of packs we must buy, and let p k i be the probability that we have i of the c cards after buying one booster pack, having started out with k cards. Conditioning on the outcome of that one pack gives:
f k = 1 + sum [p k i * f i | i <- [0..c]]
We must possess at least k cards after buying one pack, so p k i = 0 for i <- [0..k-1]. Since f k also appears on the right with coefficient p k k, we can move that term to the left and divide by 1 - p k k to rearrange to:
f k = 1 / (1 - p k k) * (1 + sum [p k (k + d) * f (k + d) | d <- [1..c - k]])
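Spelling out that step (our own expansion of the algebra), dropping the zero terms from the original recurrence leaves

f k = 1 + p k k * f k + sum [p k (k + d) * f (k + d) | d <- [1..c - k]]

so

(1 - p k k) * f k = 1 + sum [p k (k + d) * f (k + d) | d <- [1..c - k]]

and dividing both sides by 1 - p k k, which is nonzero because while k < c a pack of n distinct cards has a positive chance of containing at least one new card, gives the formula above.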
It remains to compute p k (k + d). For each d, we count the number of n-subsets of [1..c] such that d of the cards are new to us (so the other n - d cards are ones we already own). Then p k (k + d) is this count divided by the number of all n-subsets of [1..c], namely choose c n.
Out of the c - k cards we are missing, there are choose (c - k) d different d-subsets, and of the k cards we already own, there are choose k (n - d) different (n - d)-subsets, so the number of different types of booster packs that increase our collection by d is:
choose (c - k) d * choose k (n - d)
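As a quick sanity check with made-up numbers: for c = 3, n = 2 and k = 1, there are choose 3 2 = 3 possible packs; choose 2 1 * choose 1 1 = 2 of them contain exactly one new card and choose 2 2 * choose 1 0 = 1 contains two new cards, so p 1 2 = 2/3 and p 1 3 = 1/3, which sum to 1 as expected.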
Whence:
import Data.Ratio
import Math.Combinatorics.Exact.Binomial
import Jam

main = jam $ do
  [c, n] <- getintegers
  let f k | k == c    = 0 % 1
          | otherwise = 1 / (1 - (choose k n % choose c n)) *
              (1 + sum [ choose (c - k) (i - k) * choose k (n - (i - k)) % choose c n * f i
                       | i <- [k + 1..c]])
  pure $ show (fromRational $ f 0 :: Double)
This suffices for the small input. For the large, we memoize:
import Data.MemoTrie
import Data.Ratio
import Math.Combinatorics.Exact.Binomial
import Jam

main = jam $ do
  [c, n] <- getintegers
  let mf k | k == c    = 0 % 1
           | otherwise = 1 / (1 - (choose k n % choose c n)) *
               (1 + sum [ choose (c - k) (i - k) * choose k (n - (i - k)) % choose c n * f i
                        | i <- [k + 1..c]])
      f = memo mf
  pure $ show (fromRational $ f 0 :: Double)
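The memo function from Data.MemoTrie caches results in a lazy trie keyed on the argument, so even though mf recursively calls the memoized f, each value f i is computed at most once.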
The limits even for the large input are small enough that we could give exact answers in the form of rationals, but the problem asks for a floating-point approximation. To show off, we convert to a double at the last possible moment.