Anipang: A Unity Game Simulation case study
Anipang is huge in South Korea, where it’s been downloaded over 35 million times since launching in 2012. For Anipang 4, the SundayToz team integrated new features, game modes, and storytelling, while designing countless new levels to satisfy player demand. Creating far more content than in the previous game for their voracious fanbase demanded new approaches to both development and live operations. SundayToz turned to Unity Game Simulation to exponentially increase the number of tests they could run and ensure a great gameplay experience.
Create a vast quantity of level content for a fresh update to a beloved classic game
Balance difficulty and detect errors using Unity Game Simulation to let the team stay focused on gameplay
330, including 110 Unity users
Seoul, South Korea
Creating and testing more content, faster
Anipang 4 heightens the popular match-three puzzle mobile game format with adorable animated creatures. Match-three games are famously content-hungry, requiring countless levels for casual players to burn through – and even more so for a rabid fanbase like Anipang’s. To create enough content in time, SundayToz complemented human testing with Unity Game Simulation, delegating the work of balancing difficulty and failure rates so the team could stay focused on creating more awesome content.
- Run 1,000 playtests for each level when new levels are released every week
- Ran over 100 times more tests per level than with previous manual efforts
- Saved one day per week of developer time
- Devoted more team energy to analyzing the fun factor
- Anipang 4 was downloaded over 2.5 million times in the first two months after launch and has over 300,000 DAU
A fresh take on a fan favorite
For Anipang 4, SundayToz wanted to keep what their millions of fans love – cute animals, casual social play, and match-three puzzles – while mixing in 20-player real-time battles, a guild-based social element called Fams, and a storytelling layer to help the game feel fresh and relevant 10 years after its initial release. But more changes mean more work, and the team needed more efficient tools to create and test such large amounts of new content.
More levels in less time
“When it comes to puzzle games, creating levels is the most important part,” explains Donggun Kim, Technical Director at SundayToz. “On average, we do updates every week, during which we add 20 maps. We create about 30 maps internally, then select about 60% of them to include in the update.”
Releasing this much content puts a lot of pressure on the team. First they need to find innovative ways to engage players, then quickly playtest and check each new level before release.
“Within the development team, each member had a different idea about the game,” says producer and director Hyunwoo Lee, “so it was essential to go through a cycle of implementing something fast, testing it and checking the results, and re-implementing a revision.”
Maximizing team time
“One of the biggest challenges to improving game quality is the fact that we have limited time and human resources,” says Kim. The team realized that to produce enough content for a full version release and regular updates, they needed to reduce the time spent testing level difficulty and hunting for errors.
SundayToz began to develop an in-house system, but quickly found that this project monopolized their development resources. They began to look for existing simulation solutions and luckily didn’t have to look far to find Unity Game Simulation. They could delegate QA and difficulty testing to Game Simulation, which then “allows us to focus more on our job instead of having to work out a simulation infrastructure,” as Kim puts it.
“In the game development process, Unity Game Simulation can take over the testing part,” Kim explains. This frees the team to create content and playtest for the fun factor, while tasking simulations with predicting the difficulty of puzzles and detecting data errors.
“We’re hoping to increase the precision of difficulty predictions and reduce the time it takes to detect errors to cut costs,” says Kim. “Generally speaking, for each created level, we conduct about 10 tests to detect errors and gauge whether the difficulty is appropriate. We hope to reduce the time it takes for one test to less than a minute using Auto Play so that we can do hundreds of test iterations through Game Simulation.”
How it works
First, SundayToz uses Game Simulation to tune their Auto Play bot. They run simulations on the bot to compare it to real player data, then tune the bot’s parameters until it performs similarly to players. Using the tuned bot, they run simulations on new levels to determine their difficulty and find errors.
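The bot-tuning step described above amounts to a parameter search: run the bot many times, compare its failure rate against the rate observed from real players, and adjust until the two match. The sketch below is a minimal Python illustration of that idea; the bot model, its single “skill” parameter, and all numbers are invented for demonstration and are not SundayToz’s actual implementation or the Unity Game Simulation API.

```python
import random

# Toy bot model: "skill" controls how likely the bot is to clear a level
# of a given difficulty. Both the model and its numbers are invented.
def failure_rate(skill, level_difficulty, runs=1000, seed=0):
    """Fraction of simulated playthroughs the bot fails."""
    rng = random.Random(seed)
    clears = sum(rng.random() < skill * (1.0 - level_difficulty)
                 for _ in range(runs))
    return 1.0 - clears / runs

def tune_bot(target_failure_rate, level_difficulty, runs=1000):
    """Binary-search the skill parameter until the bot's failure rate on a
    reference level matches the rate observed from real players."""
    lo, hi = 0.0, 1.0
    for _ in range(20):
        skill = (lo + hi) / 2
        if failure_rate(skill, level_difficulty, runs) > target_failure_rate:
            lo = skill   # bot fails too often: raise its skill
        else:
            hi = skill   # bot fails too rarely: lower its skill
    return skill
```

Because the bot’s failure rate falls monotonically as its skill rises, a simple binary search converges on a skill setting whose simulated failure rate matches the real-player target.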
SundayToz tests each level with a set of input parameters, such as the given number of turns, certain block types, mission count, and more. The output metric is the failure rate of each level. After the simulations complete, SundayToz determines whether the output metric is within the desirable range. If not, they adjust the parameters - for example, by reducing the number of turns - and rerun the simulation. Once the failure rate is right, the level’s difficulty is deemed to be appropriate.
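The adjust-and-rerun loop above can be sketched as follows. This is a hypothetical model, assuming a simple per-turn clear chance stands in for the tuned bot playing a real level; the function names and the 20–40% target band are illustrative, not SundayToz’s actual values.

```python
import random

def run_playtests(turns, clear_chance_per_turn, runs=1000, seed=1):
    """Failure rate over `runs` simulated playthroughs of one level."""
    rng = random.Random(seed)
    failures = sum(
        not any(rng.random() < clear_chance_per_turn for _ in range(turns))
        for _ in range(runs))
    return failures / runs

def tune_level(turns, clear_chance_per_turn, target=(0.20, 0.40)):
    """Adjust the given number of turns until the failure rate lands
    inside the desired band, rerunning the simulations after each change."""
    low, high = target
    for _ in range(50):  # safety bound on iterations
        rate = run_playtests(turns, clear_chance_per_turn)
        if rate > high:
            turns += 1   # too hard: give players more turns
        elif rate < low:
            turns -= 1   # too easy: take some turns away
        else:
            return turns, rate
    raise RuntimeError("failure rate never entered the target band")
```

Each pass mirrors the workflow in the text: simulate, check the output metric against the desired range, nudge an input parameter such as the turn count, and rerun until the difficulty is deemed appropriate.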
Trial and errors
Next, the team tests to see whether the new levels have errors. SundayToz flags simulations where the Auto Play stops playing before using all of its given turns. When this happens, SundayToz identifies the cause of the error - for example, an incorrect mission count, wrong object placement, or crashes. After fixing the cause of the error, SundayToz reruns simulations to verify the change.
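This stopped-early check can be expressed as a simple filter over per-run results. The record fields below are invented stand-ins for whatever metrics the simulation batch actually emits:

```python
# Hypothetical per-run result records from a simulation batch;
# the field names and level IDs are invented for illustration.
runs = [
    {"level": 101, "turns_given": 20, "turns_used": 20, "cleared": True},
    {"level": 102, "turns_given": 25, "turns_used": 12, "cleared": False},
    {"level": 102, "turns_given": 25, "turns_used": 25, "cleared": False},
]

def flag_suspect_runs(results):
    """Flag runs where the bot stopped before spending all of its turns
    without clearing the level - a sign of a data error such as an
    impossible mission count, wrong object placement, or a crash."""
    return [r for r in results
            if not r["cleared"] and r["turns_used"] < r["turns_given"]]

# The first run on level 102 stopped after 12 of 25 turns, so it gets
# flagged for investigation; the fail that used all 25 turns does not.
suspects = flag_suspect_runs(runs)
```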
“We use the results from Game Simulation to come up with estimates,” Kim says, explaining how SundayToz uses the solution for difficulty analysis. “Basically, we base our estimation on the user data of already-launched games to establish the range of input values that may affect user behavior and then verify our estimates through Unity Game Simulation.”
Testing improves games’ odds
“Because of the random nature of puzzle games,” says Kim, “out of 10 people who play the same level, it is plausible that about seven of them manage to clear it, while three are stuck due to errors.” Game Simulation vastly increases the number of playthroughs on a given level – and the more plays, the more accurate the average, so the better results reflect a typical player's experience.
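The statistical point – more playthroughs give a more reliable failure-rate estimate – is easy to demonstrate with a quick Monte Carlo sketch. The 30% “true” failure rate and the sample sizes below are illustrative:

```python
import random

TRUE_FAILURE_RATE = 0.3  # invented "true" failure rate of a level

def estimate(n_plays, seed):
    """Failure rate estimated from n_plays simulated playthroughs."""
    rng = random.Random(seed)
    fails = sum(rng.random() < TRUE_FAILURE_RATE for _ in range(n_plays))
    return fails / n_plays

def worst_error(n_plays, trials=200):
    """Largest deviation from the true rate across repeated estimates."""
    return max(abs(estimate(n_plays, seed) - TRUE_FAILURE_RATE)
               for seed in range(trials))

# Ten playthroughs per level (the old manual budget) can land far from
# the true rate; a thousand playthroughs stays consistently close.
print(worst_error(10), worst_error(1000))
```

The estimate’s error shrinks roughly with the square root of the number of plays, which is why 1,000 simulated runs pin down a level’s difficulty far more precisely than 10 manual ones.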
“Before we started using Game Simulation, each team member played a newly created level 10 times for balance-testing. However, we now test about 1,000 or even more times for a newly developed level in order to detect errors and gauge its difficulty,” Kim explains. The result? Fewer problems make it into the final release, dramatically improving players’ experience while freeing up the team’s time.
Focusing on the fun factor
“We can use the time spent on creating puzzle game levels more efficiently than before,” says Kim. “In the past, we had allotted about equal amounts of time to checking the fun factor of the puzzle, difficulty analysis, and error detection. Now, we let Game Simulation take care of difficulty analysis and error detection so that we can spend more time on checking the fun factor.”
“Right now, we’re just focusing on predicting puzzle difficulty and data error detection, but we plan on expanding its use to other areas besides puzzle games,” he says. “Our next mission is to make it so that we can concentrate human resources on testing the functionality of new or important features while we let Game Simulation test the overall functionality.”