To prepare for the competition environment, we (the IOI team members) were invited to attend a training camp for the IOI at the University of Waterloo, taking place around a week before the actual IOI. This was quite a fun part of the journey, as we got the chance to meet each other in person for the first time after already being acquainted online for a long time. We also had the opportunity to meet some friends from Waterloo, such as Moses, Roger, and Keenan. Over the span of 2 days, we were thrown a series of 5 contests, each about 3 or 4 hours long. This was admittedly a little less fun, as the virtual machines we were required to use had massive input delays from both the keyboard and the mouse, making it virtually impossible to see what you were typing. Combined with the time crunch introduced by the “Speed Rounds,” the slow VMs made for a frustrating but also entertaining experience. During free time after the barrage of contests ended for the day, we played cards and board games (Scrabble :D), which helped us get to know each other better while still winding down from an entire day of intense focus.

Before I knew it, it was already time to head to Pearson Airport and begin the onslaught of flights to Yogyakarta. Since we were travelling during the pandemic, there were seldom any flights to Indonesia. Thus, our itinerary consisted first of a flight from Toronto to Hong Kong, then a flight from Hong Kong to Jakarta, and finally a flight from Jakarta to Yogyakarta. The flights were anything but boring though, as we spent our time in the air either playing cards, competing in trivia, or working out optimal Othello strategies.

Notably, given how the flights were scheduled, we were forced to take an overnight stay in Jakarta. Furthermore, due to some disturbances in the booking of our intended hotel close to the airport, we had to scour Jakarta for local hotels, and after an hour-long drive we eventually settled on a small one located in slightly… suboptimal conditions. The streets were narrow and busy, and since the hotel did not offer complimentary dinner, our only option was to search for nearby restaurants on foot, mere centimetres from the oncoming traffic!

After narrowly avoiding being run over for around 10 minutes, we found our first big street-side restaurant and decided to settle on it to avoid dealing with the traffic any longer. Just when we thought we could take a quick breather, we realized that the menus we were handed were written purely in Indonesian, and the waitress who attended to us couldn’t understand what we were saying either! Luckily, a huge sign at the front of the restaurant had pictures of some of the menu items, and most of us settled on what looked the safest: nasi goreng (fried rice). After a brief wait, the food arrived, and it did not disappoint at all! The rice was packed with flavour from the seasoning and the wok hei, and while it was slightly too spicy for some, I found it to be just spicy enough to pack a nice kick that helped me devour the dish in a couple of minutes. As a bonus, the bill totalled under $10 CAD for all seven of us, so maybe it was actually worth the chaotic trip there and back!

Night soon passed in Jakarta, and after a gruelling 48 hours of travel, we finally set foot in Yogyakarta. The first hotel we visited upon our arrival was the Hyatt Regency hotel, which was nothing short of amazing. The view was spectacular, the rooms were spacious, the reception was immediate, and the food was beyond delicious.

There was absolutely nothing not to like about the place… Except for the fact that we wouldn’t be staying here for much longer! As it turns out, the Hyatt Regency hotel was only for the team leaders, so the contestants were expected to head to another hotel, the Rich Jogja hotel, located in the city mall, without quite as many top-quality amenities as the Hyatt had. Oh well!

The schedule of the IOI was designed so that we’d have a couple of days to rest and adjust to the new environment before jumping into the actual contest. However, before we could roam around the hotel freely, we all had to do a mandatory round of COVID tests, the first of many to come. Having been well-accustomed to my cozy shut-in lifestyle ever since the pandemic started, I had only ever done rapid antigen tests, the kind you could comfortably swirl around in the shallow end of your nostrils and be good to go. Faced now with the infamous “brain-tickling” PCR test, I was naturally quite nervous. Just as I was trying to comfort myself, I noticed that one of the earlier victims of the test had been left hunched over to the side, crying and coughing as if he had just been doused in a gallon of pepper spray. Full panic mode reactivated!

Luckily, when my turn finally came, it was over before I even knew it, defying my expectations of utter horror. After confirming my negative result, it was finally time to drop my luggage off in my room and explore what the hotel had to offer. My roommate Allen and I walked down any path that seemed to lead somewhere interesting, passing through the backyard swimming pool, an abandoned fitness room, the rooftop dining halls, and even the IOI committee rooms. As luck would have it, one wrong turn into a private meeting room introduced us to a hotel staff member, who was quick to guide us past an obscure sliding glass door into the game room. Although the game room would become quite populated in the later days of the competition, we were almost definitely the first contestants to explore the venue, as there were still staff members setting things up in the rooms! The game room was almost completely empty at this point, and the only game that was properly set up was a PS4 booth with two chairs in front of it. *Perfect*.

Even though neither of us had any experience with the games in the catalogue, we nonetheless spent the next two hours bricking free throw after free throw in 2K22, then sending renowned fighters in drunken brawls to the death in UFC 4. Just like that, the first day had already come to an end.

The main event of the second day was the opening ceremony. After being shoved onto buses heading to the Indonesian Institute of the Arts first thing in the morning, we were met on arrival with warm welcomes from the organizing committee and the university band. The opening ceremony itself was nothing too out of the ordinary, as we sat through musical performances, cultural dances, team introductions, and an abundance of welcoming speeches. What made it the most memorable, though, was none other than the IOI theme song. The song was so impactful that you could hear contestants singing the chorus long after the IOI itself had ended. Truly an unforgettable piece of music. I must urge you to have a listen yourself; you will not be disappointed.

To pass time for the rest of the day, we figured we would roam the city mall as a team since it was connected to the back of our hotel anyways. The mall was a huge four-storey rectangular complex, with different stores running along each of the four sides. The view from the very top was spectacular, and the aromas from all of the restaurants were still mildly noticeable as you walked along the hallways. We bought snacks (including bubble tea and frozen pudding) from some of the booths we walked past, and we spent some time shopping for souvenirs in the stores. After dinner, we mainly focused on winding down and reviewing lists of essential topics and tricks, trying to clear our minds for the long day ahead of us. The night passed soon after, and it was time for the show to begin.

I still vividly remember the sheer number of washroom trips I had to go on in the hour before the contest started. Although I had gone on a walk outside around the backyard beforehand, the fact that this was the first high-stakes computing contest I had ever written in person made it nevertheless nerve-wracking. The contest venue was a lavishly decorated meeting hall with all of the onsite contestants lined up in an array of workstations, each equipped with the exact same laptop, mouse, and scratch paper. Besides the coding gear, we were also provided with a bottle of water, a plate of snacks, and a pile of convenient signs which we could use to request in-contest essentials such as washroom breaks, technical assistance, and… more banana. The gentle chimes from the softly swaying chandeliers overhead were accompanied by the faint hum of the giant air conditioners as I sat there in complete silence, watching the time on the announcement screens steadily approach one o’clock… The contest had finally begun.

After ripping open the contest problems envelope and scanning through all of the problem statements, this was the first one that caught my eye. The problem presented a unique algorithmic challenge in which you are to devise a strategy for a group of prisoners to compare two hidden numbers using only a number on a whiteboard. The catch is that each prisoner may only peek at one of the hidden numbers, and may not communicate with the other prisoners directly.

To begin paving the path towards a solution, I first considered the most obvious strategy possible: have the first prisoner look at the first number and write it down on the whiteboard, then simply have the second prisoner compare the second number with the number previously recorded on the whiteboard. This does indeed solve the problem, but the issue is that points are allotted based on the largest number written on the whiteboard! To collect more than just the first subtask, we will have to investigate further.

After thinking for a longer while, I noticed that the largest number allowed is around \(2 \log_2 N\), prompting me to consider strategies based on divide-and-conquer. Eventually, I came up with a solution based on the structure of a segment tree. The general idea is to let the first prisoner be the root node. This prisoner will look at the first number: if it is less than \(N/2\), they will tell us to go to their left child in the segment tree, and otherwise tell us to go to the right child (by writing the index of the corresponding child on the whiteboard). Following the first prisoner, each prisoner follows a similar strategy. For example, the left child of the first prisoner will first inspect the second number. If the number is at least \(N/2\), then they can immediately report that the second number is larger. Otherwise, they will signal which child comes next based on whether the second number is less than \(N/4\). The strategies of all other prisoners are analogous.

Since every non-negative integer written on the whiteboard is the index of a node in the segment tree, the largest number written is simply the number of nodes in the segment tree (minus one, to be precise). Although this would be approximately \(2N\) naively, compressing all nodes in each layer into a single node helps reduce this number drastically to approximately \(2 \log_2 N\). Implementing this solution gave me 60/100 points for just over an hour of work, something I was extremely excited about. At my current rate, I should have enough time to approach a full solution on this problem while also scoring good partials on the other two! Since I had already come so far, I decided to try and go for even more points here, especially since even the most minute optimizations at this point could mean a big score difference.

I continued my work on this problem first by considering whether I could optimize the root node, since it seemed like the least efficient use of information in the network so far. After having no luck with that for a while, I tried optimizing the leaf layers of some asymmetric segment trees. This also proved to be futile, however, since \(N=4096\) results in a perfect binary tree structure. At this point, I briefly considered trying out a different number of children (perhaps 3 or 4 would work better?). However, after somehow convincing myself that \(f(b) = b \log_b N\) is a constant function (it is most certainly not!), I tossed this thought aside as well, never to return to it again. Nothing I thought of seemed to work, so I readily moved on to the next problem.
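In hindsight, a thirty-second sanity check would have exposed the error: \(f(b) = b \log_b N\) is far from constant, and for \(N = 4096\) the integer minimum sits at \(b = 3\). A quick throwaway script (hypothetical, just to illustrate the arithmetic I skipped) makes this obvious:

```python
import math

N = 4096  # the problem's bound on the hidden numbers

# f(b) = b * log_b(N): roughly (children per node) times (tree depth),
# an estimate of how large the whiteboard values of a b-ary strategy get
for b in range(2, 6):
    print(b, b * math.log(N, b))

# b = 2 and b = 4 both give 24, but b = 3 gives about 22.7 -- not constant,
# and exactly the ternary structure the full solution exploits
```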

This was the next problem I tried, since it seemed easier to get started on than Catfish Farm. It appeared to be a traditional heavy data structures task, as the problem asks you to answer several weird range queries on a fixed array. My first reaction is usually to think about Mo’s algorithm when dealing with these types of problems, but the fact that the queries were given online in the form of function calls ruled out this possibility. After a while of failing to come up with any major observations that could significantly simplify the problem, I decided to start with the smaller subtasks and see how far I could take them.

The solution to the first subtask came quickly. After all, it is clear that you cannot select more than one tower from a monotonically increasing or decreasing sequence, so the problem really just boils down to checking if you can take two towers, one from each side of the peak at \(k\).

The next two subtasks were more demanding, since they asked for an algorithm to solve the entire problem for a single query. Initially, I tried to bash out the query conditions using DP, but this did not feel viable in the long term as I couldn’t even begin to think about how one would optimize the transitions to work in linear time, much less sublinear time per query. Now stepping back a bit to think about the problem from a different perspective, I realized that the queries were essentially asking for the longest up-down subsequence in the array where consecutive elements of the subsequence differ by at least \(\delta\). Indeed, such a subsequence corresponds to an optimal selection of towers by selecting only the valleys and using the peaks as intermediary towers. This was a much simpler specification than the one given in the problem statement, with many possible implementations as well. I chose to promptly convert this idea into a solution using monotonic stacks, solving each query in \(\mathcal{O}(N)\) time.
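As a correctness reference for that reformulation, here is a sketch of the single-query check as a simple quadratic DP rather than the \(\mathcal{O}(N)\) monotonic-stack version I actually submitted (the function and variable names are my own):

```python
def max_towers(h, delta):
    """Longest subsequence of 'valleys' in h where every pair of consecutive
    chosen towers has an intermediary tower taller than both by >= delta."""
    n = len(h)
    dp = [1] * n                       # dp[i]: best answer with tower i chosen last
    for i in range(n):
        between = float("-inf")        # max height strictly between j and i
        for j in range(i - 1, -1, -1):
            if between >= h[i] + delta and between >= h[j] + delta:
                dp[i] = max(dp[i], dp[j] + 1)
            between = max(between, h[j])
    return max(dp, default=0)

# Towers 1, 3, 5 (heights 20, 40, 30) can all be chosen with delta = 10,
# using towers 2 and 4 as intermediaries
print(max_towers([10, 20, 60, 40, 50, 30, 70], 10))  # 3
```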

At this point, I had accumulated 27 points on the problem but was confused about how I could take my solution further when faced with multiple queries. My new specification wasn’t as extensible as I hoped it could be, as the optimal sequence could change considerably given different endpoints and would definitely change completely when given different \(\delta\) values. I spent the next half hour playing around with various greedy ideas, but none of them bore any fruit when tested against the full problem data. Now more than halfway through the contest, I decided to finally return to the first problem I read: Catfish Farm.

To be honest, I was glad to see this problem on the contest initially, since its unconventional nature led me to think that it was an ad hoc problem. However, after looking at it for a longer while, I realized that the correct approach was almost definitely DP… by far my weakest problem type! Regardless, I knew I had to squeeze as many points as I could out of this problem after suffering a large delay from Radio Towers.

The first subtask was extraordinarily easy, as it was essentially asking for the sum of all numbers in the grid. The second subtask also seemed like it would be a breeze, since there were only two possible \(X\) values. Obviously building two piers could never be optimal since the shorter pier would just be an obstruction to the taller one, so we could just take the maximum of placing a full pier in column 0 and column 1. Right? Haha, no. After almost half an hour of checking for implementation mistakes, off-by-one errors, and integer overflows, I still couldn’t figure out just what could be causing my solution to get wrong answer on a seemingly random test case in the second batch. Annoyed by the fact that my convincingly correct solution was failing again and again, I decided to ignore this subtask and have a go at the later ones first.

The third subtask was much easier to handle, since the constraint on \(Y\) essentially reduced the problem to a simple 1-D subsequence DP. The key observation was that the states of the relevant catfish (free, covered, or already taken) depend only on the piers in the last two columns, reducing the number of states to just \(4N\). Luckily, this observation carries over to the next couple of subtasks too, since we only need to consider pier heights at which there is a catfish in one of the neighbouring columns. Extending the state to capture all of these new possibilities leads to an \(\mathcal{O}(N^3)\) solution, and I managed to claim subtasks 3, 4, and 5 in one fell swoop (well, disregarding the implementation and debugging, among all else).
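For intuition, when every catfish sits in the bottom row, the problem collapses to a binary choice per column (pier or no pier), with a catfish caught exactly when its own column is empty and a neighbouring column has a pier. A sketch of that 1-D DP over the last two columns' choices, with names of my own choosing:

```python
def max_catch_row0(w):
    """w[c]: total weight of catfish in column c (all at row 0).
    DP state: pier choices in the previous two columns."""
    NEG = float("-inf")
    dp = {(False, False): 0}     # (pier at c-2, pier at c-1) -> best weight so far
    n = len(w)
    for c in range(n + 1):       # c == n is a virtual pier-free column
        choices = (False,) if c == n else (False, True)
        ndp = {}
        for (p2, p1), val in dp.items():
            for p in choices:
                gain = 0
                # catfish in column c-1 caught: no pier there, pier beside it
                if c >= 1 and not p1 and (p2 or p):
                    gain = w[c - 1]
                key = (p1, p)
                if ndp.get(key, NEG) < val + gain:
                    ndp[key] = val + gain
        dp = ndp
    return max(dp.values())

print(max_catch_row0([5, 2, 7]))  # 12: one pier in column 1 catches both 5 and 7
```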

By now there were just over 20 minutes remaining on the clock, and I was reinfused with hope since solving subtask 2 and subtask 7 would net me over half of the total points available from day 1. Both subtasks seemed achievable, since I was sure subtask 2 was just a small bugfix away and I could easily modify my code for subtask 5 to work for subtask 7. However, things did not go as smoothly as I wished they would. I worked on my subtask 7 solution first, and after about 15 minutes of local debugging, it was ready for submission. After running for a minute or so, I was shocked to see that it had gotten wrong answer on one of the cases. Confused, I rushed back to my source code and increased a couple of the constant values in my nested loops to 10 in order to ensure that I was accounting for all of the possible pier heights. The new code ran on the judge until less than 30 seconds were remaining, after which it displayed the most dreadful verdict of them all — Time Limit Exceeded. With too little time to make any further adjustments to my code, I was left with no option but to walk away with only 134 points from day 1 when I knew I could’ve gotten more.

Soon after the contest ended, we gathered as a team outside the contest hall to discuss our performances. To my dismay, I learned from Peter that the full solution to Prisoner Challenge was based on some small optimizations on a ternary segment tree, something I had absolutely considered during the contest! Simply replacing the binary structure with a ternary structure would have netted me multiple extra subtasks, so I was extraordinarily angry at myself for rejecting the idea outright while the contest was still live. Furthermore, later in the day when we were allowed to re-enter the contest hall to make submissions in analysis mode, my heart sank when I learned that just changing all of the 10s in my code to 7s would have been enough for the last submission I made to Catfish Farm to pass subtask 7.

In short, my mistakes during day 1 were twofold. Firstly, I should not have thrown out the base-change idea for Prisoner Challenge so carelessly, without actually verifying my convictions beyond some naïve hand calculations. In doing so, I had cut off the only path toward further progress and tricked myself into believing that I was at my wits’ end. Secondly, I should not have fixated on the weird 6-point subtask from Catfish Farm for so long, overcommitting to finding a bug in my code when no such bug existed. This prevented me from allocating more of my time to the later subtasks, where I had a better idea of what was going on and each subtask was worth more than double the points. Anyways, I sure hope missing those 14 points from Catfish Farm by a couple of seconds doesn’t matter too much at the end of the day…

This day started off in a way that was by no means relaxing. We were once again packed into buses first thing in the morning headed for the Indonesian Institute of the Arts, this time to engage in miscellaneous physical activities as part of an excursion. Our randomly assigned physical activity was traditional Indonesian dancing, and we were put in the same group as the American team and the Japanese team, the two best-performing onsite teams. It was a blast getting to know the other teams better through a common group activity. We all enjoyed chatting with the American team, and Ryan had his own idea of how we should greet the Japanese team (in the form of him shoving me in front of them and forcing me to speak Japanese, and me making a fool of myself by not knowing what to say), although I’m not sure I enjoyed the latter part as much…

After the dance session, we were led to an extravagant dining hall for a special lunch accompanied by live instrumental band music. There, we saw the widest selection of authentic Indonesian cuisine thus far, and I made sure to capitalize on that by taking a small sample of every dish they had to offer. Unfortunately, I failed to realize that it might not be the best idea to mix dozens of different foods together all at once, which led to me spending the next half hour camping out in the nearest washroom. Definitely a learning experience to keep in mind for the future, too.

By 3:00 in the afternoon, we were back at Rich and once again left with an abundance of free time, not knowing what to do. I spent most of that time either lying on my bed, going for random walks in the hotel, or playing Cambio with the team, letting my mind and body relax to maximize my performance for tomorrow. Night soon fell once again, and it was time to face the second half of the contest back in the contest hall.

The problems in this set were just as confusing as those from day 1. Usually, the IOI likes to include one task that most contestants are able to fully solve or at least make significant progress on (e.g. Mutating DNA from 2021 and Connecting Supertrees from 2020), but it seems like they decided to completely skip out on that this year. Alas, it was time for me to head straight to partial farming once again.

Thousands Islands looked to be the hardest problem of the entire set, as it stood out with its simple yet confusing premise. Luckily, its subtasks were much more approachable, so I started the contest with a focus on grabbing those first. The first subtask, \(N=2\), only required the simple observation that at least two canoes must face forwards (since a single canoe cannot be used twice in a row) and one canoe must face backwards to make a full round trip between the two islands. The next subtask had a different setup, with a complete graph where each pair of islands has exactly one canoe in each direction. Although this meant that \(N=2\) was no longer possible, any graph with at least \(3\) islands had a valid trip, since there was a simple maneuver involving four canoes connecting any three islands. Better yet, this maneuver also extended to the general bidirectional setup, since it implies that a valid trip exists as long as there is a path from the starting island leading to an island with at least two unexplored neighbours (necessity is also intuitively obvious, considering the case of a line graph). All in all, this worked out to be the first 31 points of the problem.

The next 24 points were also within my reach. This subtask presented a setup similar to the previous one, except this time both canoes of an edge face the same direction. My new idea was to use any cycle in the graph as a wrap-around for the trip, since any cycle could be reset by traversing all the first canoes, traversing all the second canoes, resetting the first canoes, and then resetting the second canoes. Thus, as long as there was a reachable cycle in the graph, you could simply traverse to the cycle, make a U-turn, and then head back to the starting island. Conversely, no round trip can exist if there are no reachable cycles, since the trip will always get stuck at some island without any outgoing canoe. In short, the subtask boiled down to simple cycle finding in a directed graph, which can be implemented in linear time using a naïve DFS that keeps track of previously visited islands. Easy, right?
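The cycle check itself is standard; here is a minimal sketch (with hypothetical names, and ignoring the trip-reconstruction half of the problem) using a three-colour DFS:

```python
def has_reachable_cycle(n, adj, start=0):
    """True iff some cycle is reachable from `start` in a directed graph.
    adj[u] lists the islands reachable from u by one canoe ride."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = [WHITE] * n

    def dfs(u):
        color[u] = GRAY                  # u is on the current DFS path
        for v in adj[u]:
            if color[v] == GRAY:         # back edge: found a cycle
                return True
            if color[v] == WHITE and dfs(v):
                return True
        color[u] = BLACK                 # fully explored, no cycle through u
        return False

    return dfs(start)

print(has_reachable_cycle(3, [[1], [2], [0]]))  # True  (0 -> 1 -> 2 -> 0)
print(has_reachable_cycle(3, [[1], [2], []]))   # False (no cycle at all)
```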

Unfortunately, this is where problems started occurring again. For a completely unknown reason, I was yet again facing a wrong answer verdict when submitting to this problem, giving me flashbacks to my struggles with subtask 2 of Catfish Farm. Even though the code was also only a couple dozen lines long, I still could not for the life of me figure out what the issue was. Now that the weight of the subtask was even higher, I found it much harder to move on from the code and ended up getting stuck watching over an hour of time dwindle away, unable to determine what could possibly be causing the issue.

Now significantly behind on time, I knew I needed to come up with as many subtask solutions as possible, as quickly as possible, if I wanted a chance at redeeming my performance. Since the scoring distribution on Rarest Insects was not as friendly, I turned to Digital Circuit first in search of salvation. I was actually really surprised to discover that Digital Circuit was a counting problem, since counting problems had never been featured at the IOI before (at least not in the last few decades). By no means was this a pleasant surprise for me though, since it almost definitely meant that the solution had something to do with DP…

The first three subtasks all shared the common constraint \(Q \leq 5\), which essentially means that we only have to work with static trees. This turned out to be relatively simple. If we define \(dp[i][j]\) to be the number of ways for the \(i\)-th gate to have state \(j\), then the transitions only need to account for the number of activated (state \(1\)) children, which can be done with tree knapsack DP. As the author of both this and this, I was pretty comfortable with implementing tree knapsack and managed to collect the first three subtasks on the first go.
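A sketch of that static-tree DP (my own naming, and with the problem's modular arithmetic omitted for brevity): for each threshold gate, a small knapsack over its children counts the ways each number of children can be active, and since a gate with \(c\) inputs outputs \(1\) for exactly \(k\) of its \(c\) parameter choices when \(k\) children are active, the pair \((dp[i][0], dp[i][1])\) follows directly:

```python
def circuit_ways(children, source_state):
    """children[u]: child list of gate u (empty for source gates).
    source_state[u]: fixed 0/1 state of source gate u.
    Returns (#assignments making gate 0 output 0, #making it output 1)."""
    def solve(u):
        if not children[u]:
            s = source_state[u]
            return (1 - s, s)            # one trivial "assignment"
        poly = [1]                       # poly[k]: ways exactly k children are 1
        for v in children[u]:
            w0, w1 = solve(v)
            nxt = [0] * (len(poly) + 1)
            for k, ways in enumerate(poly):
                nxt[k] += ways * w0
                nxt[k + 1] += ways * w1
            poly = nxt
        c = len(children[u])
        # parameter p in 1..c; gate is 1 iff (#active children) >= p,
        # so k active children leave exactly k valid choices for output 1
        ways1 = sum(k * poly[k] for k in range(c + 1))
        ways0 = sum((c - k) * poly[k] for k in range(c + 1))
        return (ways0, ways1)
    return solve(0)

# Gate 0 thresholds over two sources in states 1 and 0:
# only parameter p = 1 makes it output 1
print(circuit_ways([[1, 2], [], []], [0, 1, 0]))  # (1, 1)
```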

The next two subtasks required proper handling of range updates, but had the extra condition that the tree is a perfect binary tree. Clearly, range updates can be performed efficiently in this case using lazy propagation on a segment tree. If we work out the explicit transitions from the earlier DP for only two children, we see that the formulas for \(dp[i][0]\) and \(dp[i][1]\) are actually symmetric with respect to each other. Thus, we can propagate the lazy update flag by simply swapping the two values, and the rest follows naturally. Implementing this idea also only took around 10 minutes, so I ended up earning 34 points from Digital Circuit in the span of around half an hour.
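The lazy-swap trick can be sketched as a segment tree over the source gates, where every internal node stores its subtree's \((ways_0, ways_1)\) pair and a range toggle just swaps pairs lazily (again my own naming, and again leaving out the modulo for clarity):

```python
class ToggleCircuit:
    """Perfect binary tree of 2-input threshold gates over n source gates
    (n a power of two). Supports toggling a range of sources and querying
    the number of parameter assignments making the root output 1."""

    def __init__(self, states):
        self.n = n = len(states)
        self.val = [(0, 0)] * (2 * n)
        self.lazy = [False] * (2 * n)
        for i, s in enumerate(states):
            self.val[n + i] = (1 - s, s)
        for i in range(n - 1, 0, -1):
            self.val[i] = self._combine(self.val[2 * i], self.val[2 * i + 1])

    @staticmethod
    def _combine(L, R):
        # poly[k]: ways exactly k of the two children are active
        p0, p1, p2 = L[0] * R[0], L[1] * R[0] + L[0] * R[1], L[1] * R[1]
        # parameter p in {1, 2}: k active children leave k choices for output 1
        return (2 * p0 + p1, p1 + 2 * p2)

    def _apply(self, i):                 # flip every source below node i
        self.val[i] = (self.val[i][1], self.val[i][0])  # symmetric formulas!
        self.lazy[i] = not self.lazy[i]

    def toggle(self, l, r):              # flip sources in [l, r)
        self._toggle(1, 0, self.n, l, r)

    def _toggle(self, i, lo, hi, l, r):
        if r <= lo or hi <= l:
            return
        if l <= lo and hi <= r:
            self._apply(i)
            return
        if self.lazy[i]:                 # push the pending flip down
            self._apply(2 * i)
            self._apply(2 * i + 1)
            self.lazy[i] = False
        mid = (lo + hi) // 2
        self._toggle(2 * i, lo, mid, l, r)
        self._toggle(2 * i + 1, mid, hi, l, r)
        self.val[i] = self._combine(self.val[2 * i], self.val[2 * i + 1])

    def count(self):
        return self.val[1][1]

tc = ToggleCircuit([1, 1, 0, 0])
print(tc.count())        # 4 of the 8 assignments make the root output 1
tc.toggle(2, 4)          # all four sources are now active
print(tc.count())        # 8
```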

I wanted to push further on this problem given how quickly the foundational ideas came to mind, but the harder subtasks required handling range updates on an arbitrary tree structure, which I was not too familiar with in the context of recomputing DP values. Combining this challenge with my significant time investment in Rarest Insects for only 10 points so far, I decided it would be wisest to dedicate the rest of my time to getting as much as I could out of the latter problem.

This was a problem which I thought about on and off throughout the entire contest window. Its interesting interactive premise had me thoroughly intrigued (I am a huge sucker for good interactive problems), so I had it constantly burning at the back of my mind. In short, the problem allows you to insert and remove insects from a machine, or activate the machine to count the cardinality of the most common insect type in the machine. The goal is to efficiently determine the cardinality of the rarest insect type overall.

The most obvious solution is to figure out the types of each insect relative to each other, and then explicitly count the least frequent type. To do this, for each pair of insects, we can add the pair to the machine and then immediately activate it: if it returns 2, the two insects are of the same type, and if it returns 1, they are different. This approach requires at most \(N(N-1)\) operations of each type, which is enough to earn the first 10 points of the problem.

Beyond the super basic approach, I was stuck when trying to come up with ideas which took significantly fewer operations. I was pretty hooked on trying for a square root-based idea where I can systematically ignore the more common insects, but nothing of value came from that train of thought. The optimal bound of \(3N\) operations also led me to believe there would be some type of constant multi-pass solution to the problem, but it was clear after a while of thinking that this would not be useful beyond simply determining the cardinality of the most common insect type or the number of distinct insect types.

Luckily, inspiration hit during the last hour of the contest when I decided to go all in on this problem. I realized that knowing the number of distinct insect types (obtainable via a single pass through the insects) was conducive to a binary search solution! Indeed, let’s say the number of distinct insect types is \(D\), and we want to determine whether the cardinality of the rarest insect type is at least \(lo\). To do this, we can add the insects one by one to the machine, each time activating the machine to check that the most frequent insect type does not exceed \(lo\) (remove the insect if it does). If by the end of this process we have exactly \(D \times lo\) insects in the machine, then we know that there are at least \(lo\) insects of each of the \(D\) types, so the cardinality of the rarest insect type is at least \(lo\). On the other hand, if we have less than \(D \times lo\) insects in the machine, then we know that some of the insect types have less than \(lo\) insects, so the cardinality of the rarest insect type is less than \(lo\). This check leads directly to a binary search solution terminating in \(\mathcal{O}(\log N)\) iterations. Since each iteration is a single pass through the insects using \(\mathcal{O}(N)\) operations, the solution uses \(\mathcal{O}(N \log N)\) operations of each type overall.
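Putting the whole idea together, here is a sketch against a toy simulator of the machine (the simulator and all of the names here are my own inventions; the real problem communicates through a grader instead):

```python
from collections import Counter

class Machine:
    """Toy stand-in for the grader: knows the hidden insect types."""
    def __init__(self, types):
        self.types = types
        self.inside = set()
    def move_inside(self, i):
        self.inside.add(i)
    def move_outside(self, i):
        self.inside.discard(i)
    def press_button(self):      # cardinality of the most common type inside
        counts = Counter(self.types[i] for i in self.inside)
        return max(counts.values(), default=0)

def min_cardinality(m, n):
    # Pass 1: keep one insect of each type inside to learn D
    kept = []
    for i in range(n):
        m.move_inside(i)
        if m.press_button() > 1:
            m.move_outside(i)
        else:
            kept.append(i)
    D = len(kept)
    for i in kept:
        m.move_outside(i)
    # Binary search the largest `lo` with >= lo insects of every type
    ans, lo, hi = 1, 2, n // D
    while lo <= hi:
        mid = (lo + hi) // 2
        inside = []
        for i in range(n):
            m.move_inside(i)
            if m.press_button() > mid:   # this type already has mid copies
                m.move_outside(i)
            else:
                inside.append(i)
        full = len(inside) == D * mid    # every type contributed mid insects
        for i in inside:
            m.move_outside(i)
        if full:
            ans, lo = mid, mid + 1
        else:
            hi = mid - 1
    return ans

print(min_cardinality(Machine([5, 5, 7, 7, 7, 9]), 6))  # 1 (type 9 appears once)
print(min_cardinality(Machine([5, 5, 7, 7]), 4))        # 2
```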

Directly implementing my binary search idea was short and sweet, boosting me directly to the 50 point mark. I felt like I was close to the full solution idea after spending so much time on this problem, but the fact that there were only 20 minutes remaining on the clock made going for smaller constant optimizations the best bet to realistically increase my score. Checking the edge cases (where the rarest cardinality is \(1\) or \(N\)) separately helped boost my score up to 53.81. To squeeze even more points out of the problem, I decided to do separate checks for when the number of distinct insect types is low. This optimization significantly decreases the range of the binary search and helped increase my score to 57.14. At this point, I was seriously running out of time and did not manage to clinch any further optimizations for submission. This left me with 122.14 total points from day 2, once again a score I did not feel satisfied with at all…

After the contest had fully ended, I was devastated to learn that my overall rank of 91 put me painfully close to, but on the wrong side of, the silver-bronze cutoff. The cutoff for a silver medal turned out to be 257.80, which meant my final score of 256.14 put me a measly 1.66 points away from getting silver! Those 1.66 points could have come from any of my inexplicably silly mistakes, whether it be the second or seventh subtask of Catfish Farm, the fourth subtask of Thousands Islands, or even just a slightly better constant optimization on Rarest Insects. The fact that I had managed to mess up each and every one of them meant that I had to settle for a bronze medal when I knew I had the potential to do much better. That night, I completely lost my usually high energy to the excruciating frustration I felt towards myself and my performance.

So what was my biggest weakness at the IOI? Definitely implementation. Although my ideas for each of the subtasks I missed came quickly, I lost significant time and score due to my poor implementation skills. To be completely honest, I am still unsure what the exact cause of my poor implementation at the IOI was, since I had never identified implementation as one of my weaknesses in previous training contests. My best guess would be my unfamiliarity with the coding environment provided at the IOI, where I had to use Sublime and the terminal to test my code instead of my usual CLion setup. Not being completely comfortable with my coding environment was admittedly far more off-putting than I expected, given that the IOI was the first onsite coding competition I had ever participated in. If I were given the chance to redo my training for the IOI, I would dedicate a lot more effort to ensuring that my practice environment was as similar to the actual contest environment as possible, so that there would be fewer adjustments to make and fewer unpleasant surprises to deal with.

Although I cannot say I am completely happy with my performance, I am still extremely grateful to everyone who helped organize and contribute to the event. The problems were undoubtedly high quality as usual, and I am still especially in love with the full solution to Rarest Insects even though I did not have enough time to come up with it in the actual contest. I sincerely treasure all of the opportunities we were given to connect and bond with like-minded competitors from all across the world, whether that be through an intense match of table tennis in the hotel game room or through running together out of the pouring rain on the many beautiful outdoor excursions.

To my team and my coaches, I want to say thank you for accompanying me on this bizarre week-long trip to Indonesia. You guys are some of the nicest and brightest people I have ever met, and I truly felt our connection as a team. Whether it be pinning memes to Richard’s office, getting yelled at for playing BS poker too loudly on the plane, or laughing uncontrollably at an ice cube melting off a straw, I will never forget all the wild antics we’ve been through together. Now that all but one member of the team is off to university, I’m excited to see what the future holds in store for us and the next generation of Canadian high school competitive programmers. Go Team Canada!

This was, in my opinion, the easiest CCO problem in quite a while. A minute or two after reading the problem statement, I had already made the observation of converting the inequalities to a directed graph where an assignment would be possible if and only if the graph is acyclic. Of course, directly precomputing each of the \(\mathcal{O}(N^2)\) possible ranges would be too slow for full marks. However, I realized that it is possible to binary search on the right endpoint at which the graph becomes cyclic if the left endpoint is fixed, leading to an \(\mathcal{O}(N^2 \log N + Q)\) solution overall. I had ideas to further improve the solution to \(\mathcal{O}(N^2 + Q)\) by replacing the binary searches with a two-pointer scan, but was happy enough to see that the simple binary search solution passed under the time limit after submitting. In the end, problem 1 only took around 13 minutes of my contest time, something I was pretty pleased with. Onto the next one.
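To make the idea concrete, here is a minimal sketch of the binary-search step. This is not my contest code; the edge-list representation and the names `acyclic` and `max_right` are illustrative, with the inequality between positions \(i\) and \(i+1\) modelled as a directed edge \((u, v)\) meaning value \(u\) must be less than value \(v\):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Is the constraint graph induced by positions [l, r] acyclic?
// Edge i (0-indexed) relates positions i+1 and i+2, so the active edges
// for range [l, r] are indices l-1 .. r-2. Acyclicity is checked with
// Kahn's topological sort over the K value labels.
bool acyclic(const vector<pair<int,int>>& edges, int l, int r, int K) {
    vector<vector<int>> adj(K + 1);
    vector<int> indeg(K + 1, 0);
    for (int i = l - 1; i <= r - 2; i++) {
        auto [u, v] = edges[i];
        adj[u].push_back(v);
        indeg[v]++;
    }
    queue<int> q;
    for (int v = 1; v <= K; v++) if (indeg[v] == 0) q.push(v);
    int seen = 0;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        seen++;
        for (int v : adj[u]) if (--indeg[v] == 0) q.push(v);
    }
    return seen == K; // every label popped => no cycle
}

// For each left endpoint l, binary search the largest r with [l, r] acyclic.
// This is monotone: growing the range only adds edges, so once a range is
// cyclic, every superset range is cyclic too.
vector<int> max_right(const vector<pair<int,int>>& edges, int N, int K) {
    vector<int> res(N + 1);
    for (int l = 1; l <= N; l++) {
        int lo = l, hi = N;
        while (lo < hi) {
            int mid = (lo + hi + 1) / 2;
            if (acyclic(edges, l, mid, K)) lo = mid; else hi = mid - 1;
        }
        res[l] = lo;
    }
    return res;
}
```

Answering a query \((l, r)\) is then a single comparison: the range is valid if and only if \(r \le \texttt{max\_right}[l]\), giving the \(\mathcal{O}(Q)\) query term.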

My initial reaction to this problem was actually to construct a flow network that models the people moving towards the bus shelters. Indeed, if we create \(N - 1\) vertices \(X_1, X_2, ..., X_{N-1}\) to represent the markets and \(N\) vertices \(Y_1, Y_2, ..., Y_N\) to represent the bus shelters, then we can construct the following edges:

- An edge from the source vertex \(S\) to each of the market vertices \(X_i\) with capacity \(P_i\).
- An edge from each of the shelter vertices \(Y_i\) to the sink vertex \(T\) with capacity \(B_i\).
- An edge from each market vertex \(X_i\) to the shelter vertices \(Y_i\) and \(Y_{i+1}\) with infinite capacity.
- An edge from each of the market vertices \(X_i\) directly to the sink vertex \(T\) with a capacity of \(U_i\) and a cost of \(1\).

It can now be seen that the answer to the problem is simply the minimum cost maximum flow (MCMF) of the network. The issue, however, is that I have no idea how to implement MCMF! Whenever I needed to use it, it was always during situations where pre-written code was permitted, so I would just copy-paste the MCMF template from KACTL. A little annoyed by the fact that I had a theoretically correct solution I couldn’t implement, I skipped to problem 3 for a while to calm myself down and see if any free partial marks were waiting for me. Sadly, there weren’t, so I returned to problem 2 in hopes of making a little more progress.

Specifically, I realized that my flow model may not have been all for naught. All I had to do was consider efficient ways of sending flow through the network. After all, even the fastest MCMF implementations would stand no reasonable chance in a network with over a million vertices! Slowing down for a moment, I began to consider various greedy approaches to the problem. Firstly, there was the idea of forcing as many people to the left as possible, then to the right, and finally any leftovers to buy umbrellas in the middle, deeming the case impossible if people still remained after that. However, I was quickly able to construct a case where the algorithm dies completely, so I knew I needed to do better. Then, inspired by the key idea of flow algorithms (force as much flow down an edge as possible when creating an augmenting path, but leave yourself the option to “undo” the flow if necessary), I thought of a process that sounded much better than the previous one.

Essentially, we still begin by forcing as many people to the left as possible, then to the right, and then down the middle. However, if there are still people left at this point, the case could still be possible as there may be extra umbrellas on the left! Thus, we should send the remaining people down to the left bus shelter while forcing the extras out to the market on the left, essentially undoing what we previously thought was optimal. Now, we have the exact same situation; we need to assign umbrellas to the people at the market until none are left, and the rest must be forced to the bus shelter on the left, continuing all the way until we reach the first market again. This algorithm is comfortably implemented in \(\mathcal{O}(N^2)\) by doing a DFS backwards whenever necessary, giving me 16 marks. For the remaining 9 marks, I soon realized that saving all the changes for one chain of DFS’s going backwards would give me \(\mathcal{O}(N)\), and a few quick changes to a couple of lines in the code sent me from 16 to 25. All in all, this problem took me a little over an hour, leaving me with over 2 hours to spend on one last problem.

It was now time to pick up where I left off during my brief visit to problem 3 from before. I had formulated many possible DP states, but the main issue with all of them was that they could not detect when the same slide was visited multiple times, resulting in severe issues with overcounting the answer. After a little more investigation, however, I realized that storing the last time I visited the other classroom in the state would solve the issue completely, since I just needed to check whether a new slide has been played since then. This observation gave me a solution that solved the first 2 subtasks, leaving me with a comfortable 15/25 marks.

The next subtask demanded a completely different approach, as my previous DP approach relied on the time values being small. Intuitively, I realized that there aren’t that many important times on the slides. In fact, many visits to slides will be done just as the slide starts being presented! After all, there are only two choices to make after seeing a slide: either start walking to the other classroom immediately or stay here and wait for the next slide to be shown. Using an approach similar to Dijkstra’s algorithm with a priority queue, I managed to snatch the 6 marks from subtask 3 after a moderate amount of debugging along the way. The constraint \(B_{i, j} - A_{i, j} \le 2K\) actually ended up helping a lot here, since my algorithm was not capable of detecting whether a slide had already been visited or not.

Now, only 4 marks away from a perfect score on day 1, I was a little stuck. Actually, scratch that — I was *really* stuck. The last subtask seemed like a huge jump from any of the previous ones. The number of slides was considerably larger so quadratic solutions were no longer permissible, and it wasn’t guaranteed that \(B_{i, j} - A_{i, j} \le 2K\) so a slide could be visited multiple times! I had absolutely no clue where to go from here. Any DP state I could think of was way too large to even declare in my program, and I didn’t know how to extend my priority queue solution to make it faster while also adding detection for multiple visits to the same slide. I spent the entire last hour of the contest dumbfounded, sitting around in my chair waiting for the contest timer to hit 0.

Overall, I must say I was highly satisfied with my day 1 performance, especially when considering it in contrast to my performance from last year. The first two problems were smooth and sweet, with barely any debugging required. I also felt like I performed nearly to the best of my ability when it came to problem 3, wasting little time in securing the crucial partial marks. Perhaps I could’ve spent the last hour of contest time a bit more productively and just tried out random ideas that might have had any chance of working, hoping to stumble upon the genius solution later presented in the solution take up. In any case, as someone who came into day 1 aiming to solve around a problem and a half, I cannot complain about my final score of 71/75. This score also happened to put me in a three-way tie for third place on the unofficial scoreboard, which meant I actually had a shot at the gold medal this time. Exciting!

I mentioned last year that I did a mock CCO contest on the off-day in hopes of improving my performance by any possible amount I could last-minute. This year, I decided to try something completely different. I tried my best to take my mind completely off of competitive programming in order to get the most rest I could for day 2. I was confident enough in the practice that I had done throughout the year that I decided it wasn’t worth tiring out my brain during the rare opportunity I was given for it to rest. In fact, I even decided to go to prom, which with (fortunate?) unfortunate timing was held the night before day 2 of CCO! After returning home, taking a shower, and shaking all the stress off, it was time to catch a good night’s sleep to maximize my performance on day 2.

*P.S. To be fair, I did spend a good portion of my time at prom discussing a Codeforces problem with Allen. However, that was more for personal enjoyment than actual contest preparation!*

The jump in difficulty from day 1 was immediately apparent. I had absolutely no idea where to even start on the first two problems, so the first problem I tackled was the third. It was clearly an ad hoc problem, a category of problems in which I have a good amount of confidence. The first observation I made was that any consecutive sequence of two or more of the same character could be directly removed from the string as necessary (after all, it is not hard to see that any integer no less than 2 can be expressed as the sum of a combination of twos and threes). With this in mind, we can compress the string based on segments sharing the same colour, where each segment is either directly removable (`O`) or not (`X`).

Then, if the string has odd length, it is easy to see that it’s winnable if the middle character is `O`. After all, removing the middle character would merge the two beside it into a middle character that is still `O`. However, what if it’s not `O`? In this case, I decided to try to shift the centre of the string until it becomes an `O`. It is possible to shift the centre to the right by 1 if we remove an `O` from the left half, and vice versa as well. Obviously, we only care about the closest `O` on the right, and we also only care about the closest `O` on the left since that gives us the maximum possible shifting distance. It turns out this strategy provides a correct algorithm to determine whether the string is winnable, and it is easily implemented in \(\mathcal{O}(N)\) by looping from the centre in both directions. However, what if the string has even length?

This case is a little more annoying, since there is no clear centre to work with. To handle this, I tried all possible splitting points so that I am left with two independent strings with odd lengths, and then tried solving the two halves separately. Unfortunately, this means my algorithm is now only \(\mathcal{O}(N^2)\) for strings with even length. Even so, this was solid progress, and I submitted the solution for 16/25 marks.

At this point, I had already spent a lot of time and brainpower on the problem and was not really keen on implementing an entirely different algorithm for full marks. So, I decided to instead try something a little more handwavy. Essentially, I realized that if there is an abundance of `O`s in the string, then it is highly likely for there to be two `O`s close to the centre of the second half of the string. So, I precomputed all splitting points where one half of the string would be immediately ready (with an `O` in the centre), and tried splitting the string at the first 200 of these candidate points. If I still hadn’t found an answer at this point, I would just evaluate the case to be impossible. This reduced my time complexity back down to \(\mathcal{O}(N)\) (admittedly with a rather huge constant factor), and to my surprise, actually passed all the cases! Not complaining at all.

The next most approachable problem for me, at least in terms of subtasks. The second subtask was a pretty straightforward combination of precomputation and binary search, although the various edge cases made it slightly annoying to debug. For the third subtask, I simply tried all \(\mathcal{O}(AB)\) possible pairs of plans. The first subtask was the trickiest out of the first three for me, mainly because I wasn’t entirely sure how to take advantage of the fact that \(N\) was small. After a lot of brainstorming though, I thought about doing something similar to what I did for the second subtask, except using bitsets to explicitly calculate the union of the connected pairs! This gave me a solution that worked in \(\mathcal{O}(N^3 \log N / 64)\), which was actually fast enough in practice to pass the first subtask after some constant optimization. While that submission was judging, I also decided to optimize out the log factor by replacing my binary search with a single monotonic pointer, giving me a solution in \(\mathcal{O}(N^3 / 64)\) just in case, but it ended up being unnecessary as the first solution did just fine on its own.

Stuck. Really stuck. A lot of things were going on in this problem, and I couldn’t think of any ways to simplify it at all. I was even stumped by subtask 2, despite how deceptively easy it looked at first glance. In the end, I gladly walked away from this problem with only the first 3/25 marks by brute forcing literally all the possible scenarios. Probably wasn’t getting much more than that regardless.

I was a little nervous after finishing day 2. I wasn’t sure exactly how well I performed compared to the other contestants, especially considering how hard I bricked on problem 1. However, as the scores started rolling in on the unofficial scoreboard, I was astonished to see that I had actually settled in 4th place! It turns out that, yes, I did brick extremely hard on problem 1, but what made up for that was the surprising lack of solves on problem 3, which was actually in my opinion the easiest problem of day 2! The surge of emotions I felt at that moment was incredible: an intense mix of joy, excitement, nervousness, and relief.

Wow… I’m speechless. At the time I am writing this, it has been officially confirmed that I placed in the gold medalist range this year, and have been selected to represent Team Canada at IOI 2022 in Indonesia. One major takeaway from my performance this year was that I was capable of solving extremely challenging problems under contest conditions when I am completely focused and dedicated to the task. However, the contest also helped me identify some weaknesses and areas I can still improve on, for example the cleverly modelled DP solution to day 2 problem 1 outlined during the solution discussions. Regardless, I’m happy I was able to end my high school career on such a high note. In the near future, I hope to also record my experience at IOI in a similar post to this one. Until then!

In order to solve this problem, you must first know at least one of the two following graph traversal algorithms: Breadth First Search (BFS) or Depth First Search (DFS). If you are not familiar with these, I recommend doing a quick Google search to see an overview of how they work. With that out of the way, let’s proceed to the solution.

For the first subtask, notice that there can be a maximum of \(2\,000 \times 2\,000 = 4\,000\,000\) cells in the grid. If we view each cell of the grid as a vertex and we add edges to the orthogonal neighbours of each cell, we can directly apply either BFS or DFS to the given grid. Either of these algorithms has a time complexity of \(\mathcal{O}(NM)\). You can see a simple implementation of this algorithm below:
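A minimal version of such an implementation might look like the following (the boolean `grid` representation and the name `reachable` are my own illustrative choices; `true` marks a blocked cell):

```cpp
#include <bits/stdc++.h>
using namespace std;

// BFS from (0, 0) to (N-1, M-1) over open cells, moving orthogonally.
// grid[i][j] is true if cell (i, j) is blocked. O(NM) time and memory.
bool reachable(const vector<vector<bool>>& grid) {
    int N = grid.size(), M = grid[0].size();
    if (grid[0][0] || grid[N - 1][M - 1]) return false;
    vector<vector<bool>> vis(N, vector<bool>(M, false));
    queue<pair<int,int>> q;
    q.push({0, 0});
    vis[0][0] = true;
    int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!q.empty()) {
        auto [x, y] = q.front(); q.pop();
        for (int d = 0; d < 4; d++) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= N || ny < 0 || ny >= M) continue;
            if (grid[nx][ny] || vis[nx][ny]) continue;
            vis[nx][ny] = true;
            q.push({nx, ny});
        }
    }
    return vis[N - 1][M - 1];
}
```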

For full marks, note that the algorithm above is simply too inefficient. It would require over \(2 \times 10^{11}\) operations, which is far too much for any modern computer to handle in less than a second. Instead, we need to come up with a more clever application of what we already know. After re-reading the problem constraints, we notice that \(K\) is also suspiciously low. This encourages us to consider an algorithm based not on open cells, but on blocked ones!

Let’s analyze how the patterns formed by the walls (blocked cells) in the grid influence whether there is a path from the top-left to the bottom-right. First of all, we can assert that there is never a path when at least one “chain” of walls goes from the left edge to the right edge, the top edge to the bottom edge, the left edge to the top edge, or the right edge to the bottom edge. We can imagine these chains as walls stretching between two boundaries, making them impassable. If no such chains exist, then we can always “walk around” each segment of walls, and we are never completely restricted by them. Thus, it is sufficient to check that no such “chains” exist.

Since there are only \(K\) walls to traverse as nodes, our new algorithm has a time complexity of \(\mathcal{O}(K)\). However, in order to efficiently store all blocked cells, we may need an array of sets, each set storing the blocked cells in the corresponding row. This adds a small logarithmic factor to the solution, but it should still pass well under the time limit. Below is an implementation of the algorithm:
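A sketch of that implementation (the wall list and the name `path_exists` are illustrative, and I use a single set of coordinates rather than one set per row for brevity; wall chains use 8-directional adjacency, since even diagonally touching walls block orthogonal movement between them):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Walls given as K (row, col) pairs in an N x M grid, 0-indexed.
// A path from (0, 0) to (N-1, M-1) exists iff no 8-connected chain of
// walls joins left-right, top-bottom, left-top, or right-bottom edges.
bool path_exists(long long N, long long M,
                 vector<pair<long long,long long>> walls) {
    set<pair<long long,long long>> wall(walls.begin(), walls.end());
    if (wall.count({0, 0}) || wall.count({N - 1, M - 1})) return false;
    set<pair<long long,long long>> vis;
    for (auto& w : walls) {
        if (vis.count(w)) continue;
        // DFS this wall component, recording which edges it touches.
        bool L = false, R = false, T = false, B = false;
        stack<pair<long long,long long>> st;
        st.push(w);
        vis.insert(w);
        while (!st.empty()) {
            auto [x, y] = st.top(); st.pop();
            T |= (x == 0); B |= (x == N - 1);
            L |= (y == 0); R |= (y == M - 1);
            for (int dx = -1; dx <= 1; dx++)
                for (int dy = -1; dy <= 1; dy++) {
                    pair<long long,long long> nb = {x + dx, y + dy};
                    if (wall.count(nb) && !vis.count(nb)) {
                        vis.insert(nb);
                        st.push(nb);
                    }
                }
        }
        // Any of these chains separates the two corners.
        if ((L && R) || (T && B) || (L && T) || (R && B)) return false;
    }
    return true;
}
```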

Consider a recursive `check` function taking parameters \(l\) and \(r\) for whether the subsequence \(A[l, r]\) is non-boring. For some \(i\) in \([l, r]\), if \(A_i\) is unique (the previous and next appearance of \(A_i\) in \(A\) occur outside of \([l, r]\)), then all subsequences of \(A[l, r]\) “crossing” \(i\) are non-boring. Thus, it suffices to check that both \(A[l, i-1]\) and \(A[i+1, r]\) are non-boring with a recursive call to `check` (note that we only need to recurse for one such \(i\) if it exists; think about why this is).

Naively, the algorithm above runs in \(\mathcal{O}(N^2)\), far too slow for the given constraint of \(N \leq 200\;000\). However, what if we try a different order of looping \(i\) in the `check` function? Instead of looping \(i\) in the order \([l, l+1, l+2, ..., r]\), we will loop \(i\) “outside-in”, in the order \([l, r, l+1, r-1, l+2, r-2, ...]\). As it turns out, this provides us with an \(\mathcal{O}(N \log N)\) algorithm, which is a dramatic improvement over what we thought would be \(\mathcal{O}(N^2)\)!

To prove this, consider the tree formed by our decisions for \(i\), with one node for each \(i\) recursively chosen by `check`. This will be a binary tree, where the number of nodes in the left subtree is the size of the left side of our split \([l, i-1]\), and the number of nodes in the right subtree is the size of \([i+1, r]\). At each step of the algorithm, we are essentially “unmerging” a set of objects into the left and right children, giving each child a number of objects equal to its size. Note that this unmerging happens in time proportional to the size of the smaller child, by nature of us looping outside-in. However, considering the reverse process, this is exactly the process of small-to-large set merging, which is \(\mathcal{O}(N \log N)\)! Thus, we have obtained the correct complexity of our algorithm, and this problem is solved with barely any pain or book-code. Below is a C++ implementation of `check`, where `lst` and `nxt` store the index of the previous and next appearance of \(A_i\) respectively:
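A sketch along those lines (the driver `non_boring`, which builds `lst` and `nxt`, is my own illustrative wrapper around the recursion):

```cpp
#include <bits/stdc++.h>
using namespace std;

int N;
vector<int> A, lst, nxt; // 1-indexed; lst[i] = 0 / nxt[i] = N+1 if absent

// True iff every contiguous subsequence of A[l..r] has a unique element.
bool check(int l, int r) {
    if (l >= r) return true; // empty or single element: trivially non-boring
    // Loop i outside-in (l, r, l+1, r-1, ...) so that the work done before
    // recursing is proportional to the SMALLER side of the split, which is
    // exactly reverse small-to-large merging. The two indices coincide when
    // the range has odd length; checking one twice is harmless.
    for (int k = 0; l + k <= r - k; k++) {
        for (int i : {l + k, r - k}) {
            if (lst[i] < l && nxt[i] > r) // A[i] is unique within [l, r]
                return check(l, i - 1) && check(i + 1, r);
        }
    }
    return false; // no unique element: [l, r] itself is boring
}

// Build lst/nxt for a 0-indexed input array, then run check over it all.
bool non_boring(const vector<int>& a) {
    N = a.size();
    A.assign(N + 1, 0);
    for (int i = 1; i <= N; i++) A[i] = a[i - 1];
    lst.assign(N + 2, 0);
    nxt.assign(N + 2, N + 1);
    map<int,int> last; // value -> most recent index
    for (int i = 1; i <= N; i++) {
        if (last.count(A[i])) {
            lst[i] = last[A[i]];
            nxt[last[A[i]]] = i;
        }
        last[A[i]] = i;
    }
    return check(1, N);
}
```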

In conclusion, it may be worth the time to consider seemingly brute force solutions to some problems, as long as there is a merging or unmerging process that can happen proportional to the size of the smaller set, capitalizing on the small-to-large technique when it seems like the last thing one could do.

In short, Plug DP is a bitmasking technique that allows us to solve complicated problems with relatively simple states and transitions. To illustrate Plug DP in its most primitive form, let’s visit a rather classical problem: **How many ways can we fully tile an \(N \times M\) grid with \(1 \times 2\) dominoes?**

This problem can be solved with a standard row-by-row bitmasking approach, but the transitions for that DP state are annoying and unclear at best. Instead, let’s investigate an approach that uses a slightly different state. Our state, \(dp[i][j][mask]\), will represent the number of possible full tilings of all cells in rows \(i-1\) and earlier, and the first \(j\) cells in row \(i\), with a plug mask of \(mask\). The first two dimensions are relatively straightforward, but what do I mean by “plug mask”?

Let’s consider a concrete example to understand the concept of plug masks. Consider the diagram above, where the first two dimensions \((i, j) = (3, 4)\). The red line denotes the line which separates the cells we’ve already processed and the cells we have yet to consider. This line can be split into \(M+1\) segments of length 1, and each of the arrows on these segments represent a plug. The plug itself can represent a variety of things, but for our purposes here it represents whether we have placed a domino that crosses the plug (i.e. the two halves of the domino lie on separate sides of the plug). The plug will be \(1\) (toggled) if there is a domino laid over it, and \(0\) otherwise. For example, the diagram below depicts one of the tilings that has the plugs with states \([1, 0, 1, 0, 1, 0, 0, 1, 0]\) from left to right. We can obviously represent the set of states of the plugs using a bitmask of length \(M+1\), so the DP state which the tiling below belongs to is \(dp[3][4][101010010_2]\) (I’ve written the binary number in reverse here for readability. Just to be clear, the decimal equivalent of this mask is \(149\) and not \(338\)).

In general, we want to transition from cell \((i, j - 1)\) to cell \((i, j)\) (i.e. across each row). Notice that only 2 plugs change locations when we move horizontally, which is the main reason why Plug DP ends up being so powerful. If we number the plugs from \(0\) to \(M\), then only plugs \(j-1\) and \(j\) change locations. Specifically, \(j-1\) goes from the vertical plug in the previous state to a horizontal plug in the next, while \(j\) goes from a horizontal plug to the vertical plug. For example, the diagram below depicts the differences between the set of plugs for a state at \((3, 3)\) versus the set of plugs for a state at \((3, 4)\). The orange plugs are all shared and do not change during the transition, so we only need to consider how plugs \(3\) and \(4\) change in our transition from \((3, 3)\) to \((3, 4)\). It is convenient to note that if we \(1\)-index the columns and \(0\)-index the plugs, then plug \(j\) will always be the vertical plug when considering a state at column \(j\).

So how do we transition? First, we notice that if both plugs \(j-1\) and \(j\) are toggled from the previous state then it leads to an overlap of 2 dominoes on cell \((i, j)\), so we don’t need to consider this case. Let’s handle the other 3 cases separately.

**Case 1:** neither plug \(j-1\) nor plug \(j\) is toggled.

This means that \((i, j)\) does not have anything covering it, so we must place one end of a domino there to cover. We can either place a horizontal domino going from \((i, j)\) to \((i, j+1)\) toggling plug \(j\), or we can place a vertical domino going from \((i, j)\) to \((i+1, j)\) toggling plug \(j-1\). Note that we cannot place a domino going to \((i, j-1)\) or \((i-1, j)\) since these cells are already occupied by the definition of our state.

**Case 2:** only plug \(j-1\) is toggled.

This means that \((i, j)\) is already covered (by a domino going from \((i, j-1)\) to \((i, j)\)), so all we have to do is untoggle plug \(j-1\) and move on.

**Case 3:** only plug \(j\) is toggled.

Extremely similar to the previous case: this means that \((i, j)\) is already covered (by a domino going from \((i-1, j)\) to \((i, j)\)), so all we have to do is untoggle plug \(j\) and move on.

And that’s really all there is! Now we just need to handle some special procedures and we are good to go.

If you’ve been following along, you may be wondering how we go from one row to the next. It turns out that all we need to do is move some values from one place to another. Specifically, when we first process row \(i\), we will transfer all the values stored in \(dp[i - 1][M][mask]\) to \(dp[i][0][mask << 1]\). It may be confusing as to why we are shifting all bits to the left by 1, but the following diagram should clear things up.

As you may notice, the vertical plug \(0\) on the next row shifts all the plug indices by 1, so we must shift all bits in the mask by 1 to compensate. Also, the vertical plugs here \(0\) and \(M\) should never be toggled since having a domino go outside the grid would be absurd, so we don’t have to worry about the bit we lose from shifting or the new bit introduced.

Our base case will be \(dp[0][M][0] = 1\), and you can see how this easily fits in from the previous section. The final answer will be stored in \(dp[N][M][0]\), since having any plugs toggled at that point would mean having a domino go outside of the grid.

Here, you can find my implementation for the procedure described above. I take all values modulo `MOD` since the number of tilings grows rapidly for larger \(N\) and \(M\). The time complexity is \(\mathcal{O}(NM \cdot 2^{M+1})\), which means we can solve the problem for \(N, M \le 20\) with ease.
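A minimal sketch of the procedure described above (the name `count_tilings` is illustrative; the DP is rolled over a single mask array per cell rather than storing all three dimensions):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;

// Counts full domino tilings of an N x M grid with Plug DP.
// dp[mask] holds counts for the current cell (i, j), where bit p of mask
// is the state of plug p (0 <= p <= M) along the profile line.
long long count_tilings(int N, int M) {
    int P = M + 1; // number of plugs
    vector<long long> dp(1 << P, 0), ndp(1 << P, 0);
    dp[0] = 1; // base case: dp[0][M][0] = 1
    for (int i = 1; i <= N; i++) {
        // Row transition: every plug index shifts by 1, so shift each mask.
        // The vertical plug M is never toggled at the end of a row, so no
        // information is lost by the shift.
        fill(ndp.begin(), ndp.end(), 0);
        for (int mask = 0; mask < (1 << P); mask++)
            if (dp[mask]) ndp[mask << 1] = dp[mask];
        swap(dp, ndp);
        for (int j = 1; j <= M; j++) {
            fill(ndp.begin(), ndp.end(), 0);
            for (int mask = 0; mask < (1 << P); mask++) {
                if (!dp[mask]) continue;
                bool left = mask >> (j - 1) & 1; // plug j-1 before the move
                bool up   = mask >> j & 1;       // plug j before the move
                if (left && up) continue; // two dominoes overlap on (i, j)
                if (!left && !up) {
                    // (i, j) uncovered: horizontal domino to (i, j+1)
                    // toggles plug j (only legal when j < M), vertical
                    // domino to (i+1, j) toggles plug j-1.
                    if (j < M)
                        ndp[mask ^ (1 << j)] =
                            (ndp[mask ^ (1 << j)] + dp[mask]) % MOD;
                    ndp[mask ^ (1 << (j - 1))] =
                        (ndp[mask ^ (1 << (j - 1))] + dp[mask]) % MOD;
                } else {
                    // (i, j) already covered: untoggle whichever plug is set.
                    int nmask = mask & ~((1 << j) | (1 << (j - 1)));
                    ndp[nmask] = (ndp[nmask] + dp[mask]) % MOD;
                }
            }
            swap(dp, ndp);
        }
    }
    return dp[0]; // dp[N][M][0]
}
```

As a sanity check, small cases match the known counts (for example, a \(2 \times 2\) grid has 2 tilings and a \(2 \times 3\) grid has 3).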

And that was a quick overview of Plug DP! With a firm grasp on the concepts we can easily extend this to a variety of other small grid problems, whether it be about domino tilings or counting circuits in a grid. As a small exercise, try solving the problem above when some of the given cells are blocked, or try solving it for when it does not have to be a full tiling. Anyways, that’s all for now.

Ciao 👋

The first fault in my approach at CCO. I had some initial ideas early on, scoring the first 2 subtasks at around 16 minutes. Over the next hour, I developed a solution which runs in square root log time, believing it would pass with ease due to a fatal misreading of the constraints. The solution worked well for the first three subtasks, but for some reason I couldn’t figure out, it kept receiving a `WA` verdict on the last batch. For around another hour, I read over my code countless times and ran multiple fast-slow scripts, but simply couldn’t find where the error lay. After a gruesome 2 hours and 30 minutes on problem 1, I decided to take a peek at the rest of the problemset before it was too late (although way too much time had already been wasted).

I had a quick glance over problem 2, but decided that the subtasks from problem 3 were a lot more approachable. Really, my goal here was to snatch whatever I could and rush back to problem 1, where I had the most potential points yet to be earned. A quick program that printed the path given by a random line graph revealed the pattern of \(1-x-1-y-1-...-1-N-1-N-...\), for which the first idea that came to mind was binary search. This was the smoothest subtask of day 1 for me, taking only around 20 minutes of time in total.

This problem was intimidating at first, with no clear idea on how to proceed. Again, I was simply controlling the damage of my poor problem 1 performance here, so I was only aiming for the subtask. The subtask provided the constraint that the absolute value of any allowed digit is less than the base, which intuitively meant that any change caused by a higher base couldn’t be reverted by lower bases, no matter what values we assign them. This inspired a brute force recursion in descending order of power, ensuring that the number is in the range \((-b^K, b^K)\) when we are done with the \(K\)-th power (of course, \(b\) denotes the given base here). The solution ended up running surprisingly fast (0.1 seconds), but kept getting `WA` on case 7. After around 10 minutes of debugging, I decided it was all or nothing at this point and simply slapped `__int128` into my code, which surprisingly fixed the bug and gave me the first subtask. Overall, this was around 30 minutes spent on 8 marks, which I was relatively pleased with (considering that was more than half of the points I had earned so far).

It was only here that I decided it may be a good idea to reread the statement, and discovered that contrary to my belief that \(Q = 100\;000\), the constraints actually had \(Q = 1\;000\;000\)! A quick 1-line fix to my `MQ` constant resolved the `WA` verdict, but replaced it with the ever so agonizing `TLE` instead. As there were only 30 minutes left at this point, I decided it would be futile to search for new solutions and settled with constant optimizing my square root log. I spammed around 20 different versions of the same code with different block sizes and with/without `pragma` optimizations, but none of them made it through the last batch. In the last 10 minutes I tried to cheese the time limit by handling numbers with small frequencies separately, but that was to no avail either, and the timer hit 0 with only 11 points on problem 1, a problem I had dedicated over 3 hours of contest time to.

Clearly, my strategy of tunnel visioning on problem 1 did not work out in my favour at all. Spending over 3 out of 4 hours of such an important contest for 11 points is something that would pain anyone to see, and I was quite sad over my mediocre day 1 performance. If there is one lesson to be learned from my CCO experience this year, it would be to *read the constraints, and read them carefully*. Also, repeatedly making `WA` submissions and trying to fast-slow for over 30 minutes was just a waste of time, time that could have been better spent going for more partials on problem 3 or even a full solve on problem 2. Finally, it may have been wiser to try to optimize out the log factor from my code instead of spamming flimsy `pragma`s and cheeses, something that seems more than possible with 30 minutes in hindsight. Regardless, I could not redo what had already been done, and it was time to get well rested and prepare for day 2.

My mindset going into day 2 was to mainly control the damage that had been done on day 1. My lousy day 1 performance had already eliminated any chances of going for gold, so it was time to work on securing that silver. I did a mock CCO contest the day before, just to practice waking up, getting into contest mindset, and not choking or getting stuck on any particular problem for too long.

This goes first since it was the problem I eliminated first. After reading all the problem statements right off the bat, problem 3 already seemed concerningly difficult. The best I could come up with was some 2-SAT approach based on clockwise or counterclockwise travel, but the dependencies and conditions simply did not work out. After fiddling with it for a bit longer, I decided that this was probably the killer problem of CCO 2021, and dropped it completely (of course, the first subtask being worth 12 points helped with that realization as well). Back to the other two.

I invested a fair amount of time into this problem. My first impression was “oh hey, I finally found the free, easy template Tarjan’s problem” that previous CCOs all seemed to have (or at least some variation of a template problem). However, the problem quickly shoved my words back into my mouth as I pondered the details for around 30 minutes (hint: it wasn’t Tarjan’s at all). Keeping an open mind, I switched to an approach relying on Dijkstra’s algorithm, which almost passed the first batch after some debugging. As it turned out, replacing the Dijkstra with a simple BFS allowed my solution to pass subtask 1 within exactly the 1 second of allotted time, with some flimsy break statements attached as well. I couldn’t find any easy optimizations with multisource BFS that would lead to a full solution, so I decided to move on to the next problem. I was quite shocked to learn after the contest that simply switching the BFS to a DFS and applying memoization was enough for full marks, but I guess that’s just how it is sometimes :)
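As an aside, the Dijkstra-to-BFS swap works because of a general fact: on a graph whose edges all share the same weight, Dijkstra’s priority queue pops vertices in exactly the order a plain FIFO queue would, so a BFS computes the same distances without the log factor. Here is a minimal sketch of that idea on a toy unweighted graph; the function name and graph are illustrative, not taken from the actual contest problem:

```cpp
#include <cassert>
#include <queue>
#include <vector>

// With uniform edge weights, BFS from `src` yields the same shortest
// distances as Dijkstra's algorithm, in O(V + E) instead of O(E log V).
// Unreached vertices are left at distance -1.
std::vector<int> bfs_dist(const std::vector<std::vector<int>>& adj, int src) {
    std::vector<int> dist(adj.size(), -1);
    std::queue<int> q;
    dist[src] = 0;
    q.push(src);
    while (!q.empty()) {
        int u = q.front();
        q.pop();
        for (int v : adj[u]) {
            if (dist[v] == -1) {      // first visit is the shortest path
                dist[v] = dist[u] + 1;
                q.push(v);
            }
        }
    }
    return dist;
}
```

In contest conditions, dropping the priority queue like this is often the difference between squeaking under the time limit and timing out on the last batch.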

For this problem, the observation of partitioning the nodes into blocks of equal distances was immediately apparent, and a naive \(\mathcal{O}(N^3)\) dynamic programming solution soon followed. Here, perhaps slightly foolishly due to the mindset of redeeming my day 1 performance, I decided to search for an \(\mathcal{O}(N)\) greedy algorithm instead of attempting to optimize my DP to \(\mathcal{O}(N^2)\). My reasoning was that I had done quite a number of difficult problems in which a partial DP solution was eventually converted into full marks with a greedy approach, and I figured this must be one of those as well. To my dismay, the problem was not actually a greedy problem, and I spent the rest of day 2 searching for something that wasn’t even there in the first place.

After day 2, I was quite sure my tragic performance (or so I thought) on both days would place me in the bronze medal range. The mistake of going for a solution that simply did not exist as compensation for a poor performance from before was unwise and panic-induced. However, the one thing I did manage to do well at CCO was ensuring that I didn’t miss out on trivial partials, and giving all the problems at least a slight jab before moving on to the next. To my surprise, the median score on day 2 was only 2 points, which ended up placing me in the silver medalist range.

This marks the conclusion of my first experience at CCO, and how I managed to earn a silver medal without even scoring full points on a single problem. It turns out that only partial farming (unintentionally) for both days can be sufficient to cross that silver cutoff, and I am glad I was able to leave the contest with a lot more experience and ideas than before. I can’t say that I’m completely happy with how I did, but I am thankful that things didn’t go as bad as I thought. I guess this also means I have a lot more to learn and prepare before the next wave of computing competitions. Anyways, that’s it for my first blog.

Ciao 👋
