# My road to gold at the 2022 CCO

*2022-05-28*

Hello again! As promised, the long-awaited sequel to this post is finally here. My experience at CCO this year was vastly different from the last (perhaps due in part to actually being able to solve some problems fully), and I am ecstatic that my practice and hard work over the past year have paid off big-time. With that being said, here’s a rundown of my problem solving journey at CCO. Please enjoy :D

## Day 1

### P1. Alternating Heights

This was, in my opinion, the easiest CCO problem in quite a while. A minute or two after reading the problem statement, I had already made the observation of converting the inequalities to a directed graph where an assignment would be possible if and only if the graph is acyclic. Of course, directly precomputing each of the $$\mathcal{O}(N^2)$$ possible ranges would be too slow for full marks. However, I realized that it is possible to binary search on the right endpoint at which the graph becomes cyclic if the left endpoint is fixed, leading to an $$\mathcal{O}(N^2 \log N + Q)$$ solution overall. I had ideas to further improve the solution to $$\mathcal{O}(N^2 + Q)$$ by replacing the binary searches with a two-pointer scan, but was happy enough to see that the simple binary search solution passed under the time limit after submitting. In the end, problem 1 only took around 13 minutes of my contest time, something I was pretty pleased with. Onto the next one.
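The core subroutine can be sketched independently of the problem-specific details. In the sketch below, the constraint edges for a range are assumed to already be extracted into a list, and the helper names (`acyclic`, `furthest_acyclic`) are illustrative rather than taken from my actual submission; the key point is that adding edges can only create cycles, so the furthest acyclic right endpoint is binary-searchable:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Kahn's algorithm: returns true iff the directed graph on vertices
// 0..K-1 formed by the given edge list is acyclic.
bool acyclic(int K, const vector<pair<int, int>>& edges) {
    vector<vector<int>> adj(K);
    vector<int> indeg(K, 0);
    for (auto& [u, v] : edges) { adj[u].push_back(v); indeg[v]++; }
    queue<int> q;
    for (int v = 0; v < K; v++) if (indeg[v] == 0) q.push(v);
    int removed = 0;
    while (!q.empty()) {
        int u = q.front(); q.pop(); removed++;
        for (int v : adj[u]) if (--indeg[v] == 0) q.push(v);
    }
    return removed == K;  // acyclic iff every vertex was peeled off
}

// For a fixed left endpoint l, binary search the furthest r such that the
// subgraph built from edges e[l..r] is still acyclic. Adding edges can only
// create cycles, so acyclicity is monotone in r.
int furthest_acyclic(int K, const vector<pair<int, int>>& e, int l) {
    int lo = l, hi = (int)e.size() - 1, best = l - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        vector<pair<int, int>> sub(e.begin() + l, e.begin() + mid + 1);
        if (acyclic(K, sub)) { best = mid; lo = mid + 1; }
        else hi = mid - 1;
    }
    return best;
}
```

With one binary search per left endpoint and a linear-time cycle check, this matches the precomputation described above; each query is then answered by a constant-time comparison against the precomputed endpoint.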

### P2. Rainy Markets

My initial reaction to this problem was actually to construct a flow network that models the people moving towards the bus shelters. Indeed, if we create $$N - 1$$ vertices $$X_1, X_2, ..., X_{N-1}$$ to represent the markets and $$N$$ vertices $$Y_1, Y_2, ..., Y_N$$ to represent the bus shelters, then we can construct the following edges:

- An edge from the source vertex $$S$$ to each of the market vertices $$X_i$$ with capacity $$P_i$$.
- An edge from each of the shelter vertices $$Y_i$$ to the sink vertex $$T$$ with capacity $$B_i$$.
- An edge from each market vertex $$X_i$$ to the shelter vertices $$Y_i$$ and $$Y_{i+1}$$ with infinite capacity.
- An edge from each of the market vertices $$X_i$$ directly to the sink vertex $$T$$ with a capacity of $$U_i$$ and a cost of $$1$$.

It can now be seen that the answer to the problem is simply the minimum cost maximum flow (MCMF) of the network. The issue, however, is that I have no idea how to implement MCMF! Whenever I needed to use it, it was always during situations where pre-written code was permitted, so I would just copy-paste the MCMF template from KACTL. A little annoyed by the fact that I had a theoretically correct solution I couldn’t implement, I skipped to problem 3 for a while to calm myself down and see if any free partial marks were waiting for me. Sadly, there weren’t, so I returned to problem 2 in hopes of making a little more progress.

Specifically, I realized that my flow model may not have been all for naught. All I had to do was consider efficient ways of sending flow through the network. After all, even the fastest MCMF implementations would stand no reasonable chance in a network with over a million vertices! Slowing down for a moment, I began to consider various greedy approaches to the problem. Firstly, there was the idea of forcing as many people to the left as possible, then to the right, and finally having any left-overs buy umbrellas in the middle, deeming the case impossible if people still remained after that. However, I was quickly able to construct a case where the algorithm dies completely, so I knew I needed to do better. Then, inspired by the key idea of flow algorithms (force as much flow down an edge as possible when creating an augmenting path, but leave yourself the option to “undo” the flow if necessary), I thought of a process that sounded much better than the previous one.

Essentially, we still begin by forcing as many people to the left as possible, then to the right, and then down the middle. However, if there are still people left at this point, the case could still be possible as there may be extra umbrellas on the left! Thus, we should send the remaining people down to the left bus shelter while forcing the extras out to the market on the left, essentially undoing what we previously thought was optimal. Now, we have the exact same situation; we need to assign umbrellas to the people at the market until none are left, and the rest must be forced to the bus shelter on the left, continuing all the way until we reach the first market again. This algorithm is comfortably implemented in $$\mathcal{O}(N^2)$$ by doing a DFS backwards whenever necessary, giving me 16 marks. For the remaining 9 marks, I soon realized that saving all the changes for one chain of DFS’s going backwards would give me $$\mathcal{O}(N)$$, and a few quick changes to a couple of lines in the code sent me from 16 to 25. All in all, this problem took me a little over an hour, leaving me with over 2 hours to spend on one last problem.

### P3. Double Attendance

It was now time to pick up where I left off during my brief visit to problem 3 from before. I had formulated many possible DP states, but the main issue with all of them was that they could not detect when the same slide was visited multiple times, resulting in severe issues with overcounting the answer. After a little more investigation, however, I realized that storing the last time I visited the other classroom in the state would solve the issue completely, since I just needed to check whether a new slide has been played since then. This observation gave me a solution that solved the first 2 subtasks, leaving me with a comfortable 15/25 marks.

The next subtask demanded a completely different approach, as my previous DP approach relied on the time values being small. Intuitively, I realized that there aren’t that many important times on the slides. In fact, many visits to slides will be done just as the slide starts being presented! After all, there are only two choices to make after seeing a slide: either start walking to the other classroom immediately or stay here and wait for the next slide to be shown. Using an approach similar to Dijkstra’s algorithm with a priority queue, I managed to snatch the 6 marks from subtask 3 after a moderate amount of debugging along the way. The constraint $$B_{i, j} - A_{i, j} \le 2K$$ actually ended up helping a lot here, since my algorithm was not capable of detecting whether a slide had already been visited or not.

Now, only 4 marks away from a perfect score on day 1, I was a little stuck. Actually, scratch that — I was really stuck. The last subtask seemed like a huge jump from any of the previous ones. The number of slides was considerably larger so quadratic solutions were no longer permissible, and it wasn’t guaranteed that $$B_{i, j} - A_{i, j} \le 2K$$ so a slide could be visited multiple times! I had absolutely no clue where to go from here. Any DP state I could think of was way too large to even declare in my program, and I didn’t know how to extend my priority queue solution to make it faster while also adding detection for multiple visits to the same slide. I spent the entire last hour of the contest dumbfounded, sitting around in my chair waiting for the contest timer to hit 0.

### Reflections

Overall, I must say I was highly satisfied with my day 1 performance, especially when considering it in contrast to my performance from last year. The first two problems were smooth and sweet, with barely any debugging required. I also felt like I performed nearly to the best of my ability when it came to problem 3, wasting little time in securing the crucial partial marks. Perhaps I could’ve spent the last hour of contest time a bit more productively and just tried out random ideas that might have had any chance of working, hoping to stumble upon the genius solution later presented during the solution take-up. In any case, as someone who came into day 1 aiming to solve around a problem and a half, I cannot complain about my final score of 71/75. This score also happened to put me in a three-way tie for third place on the unofficial scoreboard, which meant I actually had a shot at the gold medal this time. Exciting!

## Day 1.5

I mentioned last year that I did a mock CCO contest on the off-day in hopes of improving my performance by any possible amount I could last-minute. This year, I decided to try something completely different. I tried my best to take my mind completely off of competitive programming in order to get the most rest I could for day 2. I was confident enough in the practice that I had done throughout the year that I decided it wasn’t worth tiring out my brain during the rare opportunity I was given for it to rest. In fact, I even decided to go to prom, which with (fortunate?) unfortunate timing was held the night before day 2 of CCO! After returning home, taking a shower, and shaking all the stress off, it was time to catch a good night’s sleep to maximize my performance on day 2.

P.S. To be fair, I did spend a good portion of my time at prom discussing a Codeforces problem with Allen. However, that was more for personal enjoyment than actual contest preparation!

## Day 2

### P3. Good Game

The jump in difficulty from day 1 was immediately apparent. I had absolutely no idea where to even start on the first two problems, so the first problem I tackled was the third. It was clearly an ad hoc problem, a category of problems in which I have a good amount of confidence. The first observation I made was that any consecutive sequence of two or more of the same character could be directly removed from the string as necessary (after all, it is not hard to see that any integer no less than 2 can be expressed as the sum of a combination of twos and threes). With this in mind, we can compress the string based on segments sharing the same colour, where each segment is either directly removable (O) or not (X).
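As a small illustration of that compression step (the helper name `compress` is mine; O and X are as defined above):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Compress a string into one character per maximal run of equal characters:
// 'O' if the run is directly removable (length >= 2, since any integer >= 2
// is a sum of twos and threes), and 'X' otherwise.
string compress(const string& s) {
    string out;
    for (size_t i = 0; i < s.size(); ) {
        size_t j = i;
        while (j < s.size() && s[j] == s[i]) j++;  // extend the current run
        out += (j - i >= 2) ? 'O' : 'X';
        i = j;
    }
    return out;
}
```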

Then, if the string has odd length, it is easy to see that it’s winnable if the middle character is O. After all, removing the middle character would merge the two beside it into a middle character that is still O. However, what if it’s not O? In this case, I decided to try to shift the centre of the string until it becomes an O. It is possible to shift the centre to the right by 1 if we remove an O from the left half, and vice versa. Obviously, we only care about the closest O on the right, and we also only care about the closest O on the left, since that gives us the maximum possible shifting distance. It turns out this strategy provides a correct algorithm to determine whether the string is winnable, and it is easily implemented in $$\mathcal{O}(N)$$ by looping from the centre in both directions. However, what if the string has even length?

This case is a little more annoying, since there is no clear centre to work with. To handle this, I tried all possible splitting points so that I am left with two independent strings with odd lengths, and then tried solving the two halves separately. Unfortunately, this means my algorithm is now only $$\mathcal{O}(N^2)$$ for strings with even length. Even so, this was solid progress, and I submitted the solution for 16/25 marks.

At this point, I had already spent a lot of time and brainpower on the problem and was not really keen on implementing an entirely different algorithm for full marks. So, I decided to instead try something a little more handwavy. Essentially, I realized that if there is an abundance of Os in the string, then it is highly likely for there to be two Os close to the centre of the second half of the string. So, I precomputed all splitting points where one half of the string would be immediately ready (with an O in the centre), and tried splitting the string at the first 200 of these candidate points. If I still hadn’t found an answer at this point, I would just evaluate the case to be impossible. This reduced my time complexity back down to $$\mathcal{O}(N)$$ (admittedly with a rather huge constant factor), and to my surprise, actually passed all the cases! Not complaining at all.

### P2. Phone Plans

The next most approachable problem for me, at least in terms of subtasks. The second subtask was a pretty straightforward combination of precomputation and binary search, although the various edge cases made it slightly annoying to debug. For the third subtask, I simply tried all $$\mathcal{O}(AB)$$ possible pairs of plans. The first subtask was the trickiest of the first three for me, mainly because I wasn’t entirely sure how to take advantage of the fact that $$N$$ was small. After a lot of brainstorming though, I thought about doing something similar to what I did for the second subtask, except using bitsets to explicitly calculate the union of the connected pairs! This gave me a solution that worked in $$\mathcal{O}(N^3 \log N / 64)$$, which was actually fast enough in practice to pass the first subtask after some constant optimization. While that submission was judging, I also decided to optimize out the log factor by replacing my binary search with a single monotonic pointer, giving me a solution in $$\mathcal{O}(N^3 / 64)$$ just in case, but it ended up being unnecessary as the first solution did just fine on its own.

### P1. Bi-ing Lottery Treekets

Stuck. Really stuck. A lot of things were going on in this problem, and I couldn’t think of any ways to simplify it at all. I was even stumped by subtask 2, despite how deceptively easy it looked at first glance. In the end, I gladly walked away from this problem with only the first 3/25 marks by brute forcing literally all the possible scenarios. Probably wasn’t getting much more than that regardless.

### Reflections

I was a little nervous after finishing day 2. I wasn’t sure exactly how well I performed compared to the other contestants, especially considering how hard I bricked on problem 1. However, as the scores started rolling in on the unofficial scoreboard, I was astonished to see that I had actually settled in 4th place! It turns out that, yes, I did brick extremely hard on problem 1, but what made up for that was the surprising lack of solves on problem 3, which was actually in my opinion the easiest problem of day 2! The surge of emotions I felt at that moment was incredible: an intense mix of joy, excitement, nervousness, and relief.

## Final Thoughts

Wow… I’m speechless. At the time I am writing this, it has been officially confirmed that I placed in the gold medalist range this year, and have been selected to represent Team Canada at IOI 2022 in Indonesia. One major takeaway from my performance this year was that I was capable of solving extremely challenging problems under contest conditions when I am completely focused and dedicated to the task. However, the contest also helped me identify some weaknesses and areas I can still improve on, for example the cleverly modelled DP solution to day 2 problem 1 outlined during the solution discussions. Regardless, I’m happy I was able to end my high school career on such a high note. In the near future, I hope to also record my experience at IOI in a similar post to this one. Until then!

# Path Finder Editorial

*2022-01-04*

Hi, here’s a blog post I found on my old website that I decided to transfer here. The problem being discussed is Path Finder, one of the problems I authored earlier in my career. Please give it a shot if you haven’t already before you proceed to the solution below!

In order to solve this problem, you must first know at least one of the two following graph traversal algorithms: Breadth First Search (BFS) or Depth First Search (DFS). If you are not familiar with these, I recommend doing a quick Google search for an overview of how they work. With that out of the way, let’s proceed to the solution.

For the first subtask, notice that there can be a maximum of $$2\,000 \times 2\,000 = 4\,000\,000$$ cells in the grid. If we view each cell of the grid as a vertex and add edges to the orthogonal neighbours of each cell, we can directly apply either BFS or DFS to the given grid. Both of these algorithms have a time complexity of $$\mathcal{O}(NM)$$. You can see a simple implementation of this algorithm below:
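Here is a minimal BFS sketch of this idea, assuming for illustration that the grid is given as rows of '.' (open) and '#' (blocked) characters:

```cpp
#include <bits/stdc++.h>
using namespace std;

// BFS flood fill: returns true iff (0, 0) can reach (n-1, m-1) moving in
// the four orthogonal directions over '.' cells ('#' marks a wall).
bool reachable(vector<string> g) {
    int n = g.size(), m = g[0].size();
    if (g[0][0] == '#' || g[n - 1][m - 1] == '#') return false;
    vector<vector<bool>> vis(n, vector<bool>(m, false));
    queue<pair<int, int>> q;
    q.push({0, 0});
    vis[0][0] = true;
    int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    while (!q.empty()) {
        auto [r, c] = q.front(); q.pop();
        for (int d = 0; d < 4; d++) {
            int nr = r + dr[d], nc = c + dc[d];
            if (0 <= nr && nr < n && 0 <= nc && nc < m &&
                g[nr][nc] == '.' && !vis[nr][nc]) {
                vis[nr][nc] = true;
                q.push({nr, nc});
            }
        }
    }
    return vis[n - 1][m - 1];
}
```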

For full marks, note that the algorithm above is simply too inefficient. It would require over $$2 \times 10^{11}$$ operations, which is far too much for any modern computer to handle in less than a second. Instead, we need to come up with a more clever application of what we already know. After re-reading the problem constraints, we notice that $$K$$ is suspiciously low. This encourages us to consider an algorithm based not on open cells, but on blocked ones!

Let’s analyze how the patterns formed by the walls (blocked cells) in the grid influence whether there is a path from the top-left to the bottom-right. First of all, there can never be a path when at least one “chain” of walls goes from the left edge to the right edge, the top edge to the bottom edge, the left edge to the top edge, or the right edge to the bottom edge. We can imagine such a chain as a wall spanning between two boundaries, and therefore impassable. Conversely, if no such chain exists, then we can always “walk around” each segment of walls, and we are never completely blocked off. Thus, it is sufficient to check that no such “chains” exist. Since there are only $$K$$ walls to traverse as nodes, our new algorithm performs $$\mathcal{O}(K)$$ traversal steps. However, in order to efficiently look up blocked cells, we may need an array of sets, each set storing the blocked cells in the corresponding row; this adds a logarithmic factor, but the solution should still pass well under the time limit. Below is an implementation of the algorithm:
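The following sketch assumes, as an illustrative convention, that the $$K$$ walls are given as 1-indexed (row, column) pairs on an $$R \times C$$ grid. Since the path moves in 4 directions, a wall chain blocks it even when its cells touch only diagonally, so the walls themselves are traversed with 8-directional adjacency:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns true iff a 4-directional path of open cells exists from (1, 1)
// to (R, C). A chain of walls blocks the path iff it links {left, right},
// {top, bottom}, {left, top}, or {right, bottom}, so we DFS over the walls
// with 8-directional adjacency and record which edges each chain touches.
bool path_exists(long long R, long long C,
                 vector<pair<long long, long long>> walls) {
    set<pair<long long, long long>> blocked(walls.begin(), walls.end());
    // Guard: the endpoints themselves must be open.
    if (blocked.count({1, 1}) || blocked.count({R, C})) return false;
    set<pair<long long, long long>> seen;
    for (auto& w : walls) {
        if (seen.count(w)) continue;
        bool L = false, Rt = false, T = false, B = false;
        stack<pair<long long, long long>> st;
        st.push(w);
        seen.insert(w);
        while (!st.empty()) {
            auto [r, c] = st.top(); st.pop();
            L |= (c == 1); Rt |= (c == C); T |= (r == 1); B |= (r == R);
            for (int dr = -1; dr <= 1; dr++)
                for (int dc = -1; dc <= 1; dc++) {
                    pair<long long, long long> nb{r + dr, c + dc};
                    if ((dr || dc) && blocked.count(nb) && !seen.count(nb)) {
                        seen.insert(nb);
                        st.push(nb);
                    }
                }
        }
        if ((L && Rt) || (T && B) || (L && T) || (Rt && B)) return false;
    }
    return true;
}
```

Each wall is visited exactly once; only the set lookups cost more than constant time, which is what keeps this feasible on huge grids.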

# A Deeper Look at the Small-to-Large Technique

*2021-09-01*

The small-to-large technique is well known among the competitive programming community, but most problems that require it are straightforward applications of set merging. Here, we introduce a different way to think about the technique so that it may be applicable to a wider variety of problems. Consider the problem Non-boring Sequences. In short, the problem asks to determine whether all consecutive subsequences of a sequence $$A$$ contain a unique element (such sequences are termed “non-boring”). Surely, after a quick read, we can start to brainstorm some segment tree or other data structure based solution, but what if we try some brute force approaches as well?

Consider a recursive check function taking parameters $$l$$ and $$r$$ for whether the subsequence $$A[l, r]$$ is non-boring. For some $$i$$ in $$[l, r]$$, if $$A_i$$ is unique (the previous and next appearance of $$A_i$$ in $$A$$ occurs outside of $$[l, r]$$), then all subsequences of $$A[l, r]$$ “crossing” $$i$$ are non-boring. Thus, it suffices to check that both $$A[l, i-1]$$ and $$A[i+1, r]$$ are non-boring with a recursive call to check (note that we only need to recurse for one such $$i$$ if it exists, think about why this is).

Naively, the algorithm above runs in $$\mathcal{O}(N^2)$$, far too slow for the given constraint of $$N \leq 200\;000$$. However, what if we try a different order of looping $$i$$ in the check function? Instead of looping $$i$$ in the order $$[l, l+1, l+2, ..., r]$$, we will loop $$i$$ “outside-in”, in the order $$[l, r, l+1, r-1, l+2, r-2, ...]$$. As it turns out, this provides us with an $$\mathcal{O}(N \log N)$$ algorithm, which is a dramatic improvement over what we thought would be $$\mathcal{O}(N^2)$$!

To prove this, consider the recursion tree formed by the values of $$i$$ recursively chosen by check. This is a binary tree where the size of the left subtree equals the size of the left part of our split $$[l, i-1]$$, and the size of the right subtree equals the size of $$[i+1, r]$$. At each step of the algorithm, we are essentially “unmerging” a set of objects into the left and right children, giving each child a number of objects equal to its size. Note that this unmerging takes time proportional to the size of the smaller child, by nature of us looping outside-in. However, viewed in reverse, this is exactly the process of small-to-large set merging, which is $$\mathcal{O}(N \log N)$$! Thus, we have obtained the correct complexity of our algorithm, and this problem is solved with barely any pain or book-code. Below is a C++ implementation of check, where lst and nxt store the indices of the previous and next appearances of $$A_i$$ respectively:
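Here is a reconstruction of check along those lines; the wrapper non_boring and the map-based precomputation of lst and nxt are scaffolding added to make the sketch self-contained:

```cpp
#include <bits/stdc++.h>
using namespace std;

int n;
vector<int> a, lst, nxt;  // lst[i]/nxt[i]: previous/next index with the same value

// True iff every contiguous subsequence of a[l..r] contains a unique element.
bool check(int l, int r) {
    if (l >= r) return true;
    // Loop i outside-in: l, r, l+1, r-1, ... so the recursion cost is
    // proportional to the smaller half (small-to-large in reverse).
    for (int k = 0; k <= r - l; k++) {
        int i = (k % 2 == 0) ? l + k / 2 : r - k / 2;
        if (lst[i] < l && nxt[i] > r)  // a[i] is unique within [l, r]
            return check(l, i - 1) && check(i + 1, r);
    }
    return false;  // no unique element: a[l..r] itself is boring
}

bool non_boring(vector<int> v) {
    a = v; n = a.size();
    lst.assign(n, -1);
    nxt.assign(n, n);
    map<int, int> pos;  // last seen position of each value
    for (int i = 0; i < n; i++) {
        if (pos.count(a[i])) { lst[i] = pos[a[i]]; nxt[pos[a[i]]] = i; }
        pos[a[i]] = i;
    }
    return check(0, n - 1);
}
```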

In conclusion, it may be worth the time to consider seemingly brute force solutions to some problems, as long as there is a merging or unmerging process whose cost is proportional to the size of the smaller set, capitalizing on the small-to-large technique even when it seems like the last thing one could do.

# A Soft Introduction to Plug DP

*2021-05-17*

So I realized that there are literally 0 resources written in English on Plug DP, which is one of my favourite DP tricks/techniques that I know of so far. Thus, I hope this serves as a soft introduction for English speakers to this technique, and maybe sheds some light on how powerful it can be. Before we start, I would recommend having a solid understanding of bitmask DP in general in order to get the most out of this blog. Now, let’s begin.

### What is Plug DP?

In short, Plug DP is a bitmasking technique that allows us to solve complicated problems with relatively simple states and transitions. To illustrate Plug DP in its most primitive form, let’s visit a rather classical problem: How many ways can we fully tile an $$N \times M$$ grid with $$1 \times 2$$ dominoes?

This problem can be solved with a standard row-by-row bitmasking approach, but the transitions for that DP state are annoying and unclear at best. Instead, let’s investigate an approach that uses a slightly different state. Our state, $$dp[i][j][mask]$$, will represent the number of possible full tilings of all cells in rows $$i-1$$ and earlier, plus the first $$j$$ cells in row $$i$$, with a plug mask of $$mask$$. The first two dimensions are relatively straightforward, but what do I mean by “plug mask”?

Let’s look at a concrete example to understand the concept of plug masks. Consider the diagram above, where the first two dimensions are $$(i, j) = (3, 4)$$. The red line denotes the line which separates the cells we’ve already processed from the cells we have yet to consider. This line can be split into $$M+1$$ segments of length 1, and each of the arrows on these segments represents a plug. The plug itself can represent a variety of things, but for our purposes here it represents whether we have placed a domino that crosses the plug (i.e. the two halves of the domino lie on separate sides of the plug). The plug will be $$1$$ (toggled) if there is a domino laid over it, and $$0$$ otherwise. For example, the diagram below depicts one of the tilings that has the plugs with states $$[1, 0, 1, 0, 1, 0, 0, 1, 0]$$ from left to right. We can obviously represent the set of states of the plugs using a bitmask of length $$M+1$$, so the DP state which the tiling below belongs to is $$dp[101010010_2]$$ (I’ve written the binary number in reverse here for readability. Just to be clear, the decimal equivalent of this mask is $$149$$ and not $$338$$).

### Transitions

In general, we want to transition from cell $$(i, j - 1)$$ to cell $$(i, j)$$ (i.e. across each row). Notice that only 2 plugs change locations when we move horizontally, which is the main reason why Plug DP ends up being so powerful. If we number the plugs from $$0$$ to $$M$$, then only plugs $$j-1$$ and $$j$$ change locations. Specifically, $$j-1$$ goes from the vertical plug in the previous state to a horizontal plug in the next, while $$j$$ goes from a horizontal plug to the vertical plug. For example, the diagram below depicts the differences between the set of plugs for a state at $$(3, 3)$$ versus the set of plugs for a state at $$(3, 4)$$. The orange plugs are all shared and do not change during the transition, so we only need to consider how plugs $$3$$ and $$4$$ change in our transition from $$(3, 3)$$ to $$(3, 4)$$. It is convenient to note that if we $$1$$-index the columns and $$0$$-index the plugs, then plug $$j$$ will always be the vertical plug when considering a state at column $$j$$.

So how do we transition? First, we notice that if both plugs $$j-1$$ and $$j$$ are toggled from the previous state then it leads to an overlap of 2 dominoes on cell $$(i, j)$$, so we don’t need to consider this case. Let’s handle the other 3 cases separately.

Case 1: neither plug $$j-1$$ nor plug $$j$$ is toggled.

This means that $$(i, j)$$ does not have anything covering it, so we must place one end of a domino there to cover it. We can either place a horizontal domino going from $$(i, j)$$ to $$(i, j+1)$$, toggling plug $$j$$, or we can place a vertical domino going from $$(i, j)$$ to $$(i+1, j)$$, toggling plug $$j-1$$. Note that we cannot place a domino going to $$(i, j-1)$$ or $$(i-1, j)$$ since these cells are already occupied by the definition of our state.

Case 2: only plug $$j-1$$ is toggled.

This means that $$(i, j)$$ is already covered (by a domino going from $$(i, j-1)$$ to $$(i, j)$$), so all we have to do is untoggle plug $$j-1$$ and move on.

Case 3: only plug $$j$$ is toggled.

Extremely similar to the previous case: this means that $$(i, j)$$ is already covered (by a domino going from $$(i-1, j)$$ to $$(i, j)$$), so all we have to do is untoggle plug $$j$$ and move on.

And that’s really all there is! Now we just need to handle some special procedures and we are good to go.

### Going from row $$i-1$$ to row $$i$$

If you’ve been following along, you may be wondering how we go from one row to the next. It turns out that all we need to do is move some values from one place to another. Specifically, when we first process row $$i$$, we will transfer all the values stored in $$dp[i - 1][M][mask]$$ to $$dp[i][0][mask << 1]$$. It may be confusing as to why we are shifting all bits to the left by 1, but the following diagram should clear things up.

As you may notice, the vertical plug $$0$$ on the next row shifts all the plug indices by 1, so we must shift all bits in the mask by 1 to compensate. Also, the vertical plugs here $$0$$ and $$M$$ should never be toggled since having a domino go outside the grid would be absurd, so we don’t have to worry about the bit we lose from shifting or the new bit introduced.

### Final details

Our base case will be $$dp[0][M][0] = 1$$ (we pretend a zeroth row has just been fully processed with no plugs toggled), and you can see how this easily fits in with the row transition from the previous section. The final answer will be stored in $$dp[N][M][0]$$, since having any plugs toggled at that point would mean having a domino go outside of the grid.

### Implementation

Here, you can find my implementation for the procedure described above. I take all values modulo MOD since the number of tilings grows rapidly for larger $$N$$ and $$M$$. The time complexity is $$\mathcal{O}(NM2^{M+1})$$, which means we can solve the problem for $$N, M \le 20$$ with ease.
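For reference, a self-contained sketch of the procedure described above (the row shift, then the three transition cases per cell) might look like the following; the modulus $$10^9 + 7$$ is an arbitrary choice here:

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1000000007;

// Counts domino tilings of an N x M grid using plug DP over M+1 plugs.
long long count_tilings(int N, int M) {
    int FULL = 1 << (M + 1);
    vector<long long> dp(FULL, 0), ndp;
    dp[0] = 1;  // no plugs toggled before the first row
    for (int i = 1; i <= N; i++) {
        // Row transition: shift every mask left by 1 (plug indices move
        // by 1). Masks with bit M set have a domino sticking out of the
        // right edge, so they are dropped.
        ndp.assign(FULL, 0);
        for (int mask = 0; mask < (1 << M); mask++) ndp[mask << 1] = dp[mask];
        dp.swap(ndp);
        for (int j = 1; j <= M; j++) {
            ndp.assign(FULL, 0);
            for (int mask = 0; mask < FULL; mask++) {
                if (dp[mask] == 0) continue;
                bool left = mask >> (j - 1) & 1;  // plug j-1: domino from (i, j-1)
                bool up   = mask >> j & 1;        // plug j:   domino from (i-1, j)
                if (left && up) continue;  // two dominoes overlap on (i, j)
                if (!left && !up) {
                    // Cell uncovered: place a horizontal domino (toggle
                    // plug j) or a vertical one (toggle plug j-1).
                    ndp[mask ^ (1 << j)] = (ndp[mask ^ (1 << j)] + dp[mask]) % MOD;
                    ndp[mask ^ (1 << (j - 1))] =
                        (ndp[mask ^ (1 << (j - 1))] + dp[mask]) % MOD;
                } else {
                    // Cell already covered: untoggle whichever plug is set.
                    int nmask = mask & ~((1 << j) | (1 << (j - 1)));
                    ndp[nmask] = (ndp[nmask] + dp[mask]) % MOD;
                }
            }
            dp.swap(ndp);
        }
    }
    return dp[0];  // no plugs may stick out below the last row
}
```

Note how the three branches mirror the cases above, and how the row shift silently enforces that plugs $$0$$ and $$M$$ never stay toggled.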

### Closing Remarks

And that was a quick overview of Plug DP! With a firm grasp on the concepts we can easily extend this to a variety of other small grid problems, whether it be about domino tilings or counting circuits in a grid. As a small exercise, try solving the problem above when some of the given cells are blocked, or try solving it for when it does not have to be a full tiling. Anyways, that’s all for now.

Ciao 👋

# How I got CCO silver without solving anything

*2021-05-16*

I would like to preface this by saying that you should not voluntarily attempt what I did at CCO if you are serious about your results. My strategy was mainly damage control from day 1, and it was a miracle I even got silver. However, you may find my first CCO experience interesting or maybe even hilarious, which is why I am making this post. Anyways, please enjoy. 🙂

## Day 1

### P1. Swap Swap Sort

The first fault in my approach at CCO. I had some initial ideas early on, scoring the first 2 subtasks at around 16 minutes. Over the next hour, I developed a solution which runs in square root log time, believing it would pass with ease due to a fatal misreading of the constraints. The solution worked well for the first three subtasks, but for some reason I couldn’t figure out, I kept receiving a WA verdict on the last batch. For around another hour, I read over my code countless times and ran multiple fast-slow scripts, but simply couldn’t find where the error lay. After a gruesome 2 hours and 30 minutes on problem 1, I decided to take a peek at the rest of the problemset before it was too late (although it was already way too much time wasted).

### P3. Through Another Maze Darkly

I had a quick glance over problem 2, but decided that the subtasks from problem 3 were a lot more approachable. Really, my goal here was to snatch whatever I could and rush back to problem 1, where I had the most potential points yet to be earned. A quick program that printed the path given by a random line graph revealed the pattern of $$1-x-1-y-1-...-1-N-1-N-...$$, for which the first idea that came to mind was binary search. This was the smoothest subtask of day 1 for me, taking only around 20 minutes of time in total.

### P2. Weird Numeral System

This problem was intimidating at first, with no clear idea on how to proceed. Again, I was simply controlling the damage of my poor problem 1 performance here, so I was only aiming for the subtask. The subtask provided the constraint that the absolute value of any allowed digit is less than the base, which intuitively meant that any change caused by a higher base couldn’t be reverted by lower bases, no matter what values we assign them. This inspired a brute force recursion in descending order of power, ensuring that the number is in the range $$(-b^K, b^K)$$ when we are done with the $$K$$-th power (of course, $$b$$ denotes the given base here). The solution ended up running surprisingly fast (0.1 seconds), but kept getting WA on case 7. After around 10 minutes of debugging, I decided it was all or nothing at this point and simply slapped __int128 into my code, which surprisingly fixed the bug and gave me the first subtask. Overall, this was around 30 minutes spent on 8 marks, which I was relatively pleased with (considering that was more than half of the points I had earned so far).

### The struggle with P1 continues

It was only here that I decided it may be a good idea to reread the statement, and discovered that contrary to my belief that $$Q = 100\;000$$, the constraints actually had $$Q = 1\;000\;000$$! A quick 1 line fix to my MQ constant resolved the WA verdict, but replaced it with the ever so agonizing TLE instead. As there were only 30 minutes left at this point, I decided it would be futile to search for new solutions and settled for constant-optimizing my square root log solution. I spammed around 20 different versions of the same code with different block sizes and with/without pragma optimizations, but none of them made it through the last batch. In the last 10 minutes I tried to cheese the time limit by handling numbers with small frequencies separately, but that was to no avail either, and the timer hit 0 with only 11 points on problem 1, a problem I dedicated over 3 hours of contest time to.

### Reflections

Clearly, my strategy of tunnel-visioning on problem 1 did not work out in my favour at all. Spending over 3 of the 4 hours of such an important contest for 11 points is something that would pain anyone to see, and I was quite sad about my mediocre day 1 performance. If there is one lesson to be learned from my CCO experience this year, it is to read the constraints, and read them carefully. Also, repeatedly firing off WA submissions and stress-testing against a slow brute force for over 30 minutes was just a waste of time, time that could have been spent going for more partials on problem 3 or even a full solve on problem 2. Finally, it may have been wiser to try to optimize the log factor out of my code instead of spamming flimsy pragmas and cheeses, something that seems more than possible with 30 minutes in hindsight. Regardless, I could not redo what had already been done, and it was time to get well rested and prepare for day 2.

## Day 2

### Preparations

My mindset going into day 2 was to mainly control the damage that had been done on day 1. My lousy day 1 performance had already eliminated any chances of going for gold, so it was time to work on securing that silver. I did a mock CCO contest the day before, just to practice waking up, getting into contest mindset, and not choking or getting stuck on any particular problem for too long.

### P3. Loop Town

This goes first since it was the problem I eliminated first. Reading all the problem statements right off the bat, I found problem 3 concerningly difficult. The best I could come up with was some 2-SAT approach based on clockwise or counterclockwise travel, but the dependencies and conditions simply did not work out. After fiddling with it for a bit longer, I decided that this was probably the killer problem of CCO 2021, and dropped it completely (of course, the first subtask alone being worth 12 points helped with that realization as well). Back to the other two.

### P1. Travelling Merchant

I invested a fair amount of time into this problem. My first impression was “oh hey, I finally found the free, easy template Tarjan’s problem” that previous CCOs all had (or at least some variation of a template problem). However, the problem quickly managed to shove my words back into my mouth as I pondered the details for around 30 minutes (hint: it wasn’t Tarjan’s at all). Keeping an open mind, I switched to an approach relying on Dijkstra’s algorithm, which almost passed the first batch after some debugging. As it turned out, replacing the Dijkstra with a simple BFS allowed my solution to pass subtask 1 in exactly the allotted 1 second, with some flimsy break statements attached as well. I couldn’t find any easy optimizations to the multisource BFS that would lead to a full solution, so I decided to move on to the next problem. I was quite shocked to learn after the contest that simply switching the BFS to a DFS and applying memoization was enough for full marks, but I guess that’s just how it is sometimes :)
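The pattern can be illustrated with a toy example (longest path in a DAG, not the actual Travelling Merchant recurrence; `longest_path` is just a name I made up for this sketch): a memoized DFS solves each node’s subproblem exactly once, which is the idea that apparently carried full marks.

```python
import sys
from functools import lru_cache

def longest_path(n, edges):
    """Longest path (counted in edges) of a DAG on nodes 0..n-1.

    A plain BFS/DFS can revisit nodes exponentially many times;
    memoizing the DFS computes each node's answer exactly once,
    for O(N + M) work in total.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)

    sys.setrecursionlimit(max(10_000, n + 10))

    @lru_cache(maxsize=None)
    def dfs(u):
        # Best extension over all outgoing edges; 0 if u is a sink.
        return max((1 + dfs(v) for v in adj[u]), default=0)

    return max(dfs(u) for u in range(n))

print(longest_path(4, [(0, 1), (1, 2), (0, 2), (2, 3)]))  # 3, via 0 -> 1 -> 2 -> 3
```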

### P2. Bread First Search

For this problem, the observation of partitioning the nodes into blocks of equal distances was immediately apparent, and a naive $$\mathcal{O}(N^3)$$ dynamic programming solution soon followed. Here, perhaps slightly foolishly due to the mindset of redeeming my day 1 performance, I decided to search for an $$\mathcal{O}(N)$$ greedy algorithm instead of attempting to optimize my DP to $$\mathcal{O}(N^2)$$. The reasoning was that I had done quite a number of difficult problems where a partial DP solution was converted into full marks by a greedy approach, and I figured this must be one of those as well. To my dismay, the problem was not actually a greedy problem, and I spent the rest of day 2 searching for something that wasn’t even there in the first place.
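The layering observation itself can be sketched generically (a toy unweighted graph; this is only the partition step that the DP was built on, not the contest solution):

```python
from collections import deque

def distance_blocks(n, edges, src=0):
    """Partition the nodes of an unweighted graph into blocks by BFS
    distance from `src` (unreachable nodes land in block -1)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    # Standard BFS to compute distances from the source.
    dist = [-1] * n
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                q.append(v)

    # Group nodes sharing the same distance into one block.
    blocks = {}
    for v, d in enumerate(dist):
        blocks.setdefault(d, []).append(v)
    return blocks

print(distance_blocks(5, [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]))
# {0: [0], 1: [1, 2], 2: [3], 3: [4]}
```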

### Reflections

After day 2, I was quite sure my tragic performance (or so I thought) on both days would place me in the bronze medal range. Chasing a solution that simply did not exist, to compensate for an earlier poor performance, was an unwise and panic-induced mistake. However, the one thing I did manage to do well at CCO was making sure I didn’t miss out on trivial partials, giving every problem at least a slight jab before moving on to the next. To my surprise, the median score on day 2 was only 2 points, which ended up placing me in the silver medalist range.

## Final Thoughts

This marks the conclusion of my first experience at CCO, and of how I managed to earn a silver medal without fully solving a single problem. It turns out that (unintentionally) farming only partials on both days can be enough to cross that silver cutoff, and I am glad I was able to leave the contest with far more experience and ideas than before. I can’t say I’m completely happy with how I did, but I am thankful that things didn’t go as badly as I feared. I guess this also means I have a lot more to learn and prepare before the next wave of computing competitions. Anyways, that’s it for my first blog.

Ciao 👋
