- Sep 8, 2019
- 531
- 908
It's honestly less about using modern Java features than about how LT's engine is designed. It's been a while since I've taken a deep dive into the code, so it's possible that some of these issues have been corrected, but many of the performance problems stem from a complete lack of understanding of how the tools work. Then again, even accepting that updating the engine would be a pain in the ass, it is still a decision she made to continue with an outdated version, so...
Memory thrashing was (and as far as I know still is) one serious issue the game has. There are several ways of dealing with large lists. A somewhat naïve but effective approach would be to grab the whole collection and cache it. A better approach would be to load bits of the collection on demand. The LT approach was to retrieve the entire collection every time it's needed. If that happens inside a tight loop, it can cause enormous performance issues. I remember seeing this pattern in a few places:
1. Grab a collection of objects to update (let's call it updateObjects).
2. Get the next item from updateObjects.
3. Grab a large list of supporting data and put it into a new collection.
4. Perform the update on the object.
5. If there are still objects to update, jump to 2. Otherwise, end.
The problem is step #3. It was causing the Java VM to allocate memory for the large collection, populate it, use whatever data is there, and then during the next iteration allocate another new collection, populate it, and so on. Assuming the collection in step 3 is 200MB, if there are 100 objects to update in updateObjects, that's 20GB of allocations. That essentially forces Java to garbage collect (that is, clean up abandoned objects and collections) multiple times in a tight loop, which causes the game to grind to a screeching halt while Java desperately tries to clean up the mess.
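To make the shape of the problem concrete, here's a minimal sketch of that loop and the obvious fix. The names (GameObject, SupportData, loadSupportingData, applyUpdate) are hypothetical placeholders I made up for illustration, not actual LT code; the only point is where the allocation happens.

```java
import java.util.List;

class UpdateLoopSketch {

    // The problematic shape: the supporting collection is rebuilt on every
    // iteration, so each pass allocates (and then abandons) a large list.
    // If each support list is ~200MB and there are 100 objects to update,
    // that's ~20GB of short-lived allocations for one pass.
    static void updateAllNaive(List<GameObject> updateObjects) {
        for (GameObject obj : updateObjects) {
            List<SupportData> support = loadSupportingData(); // step 3: fresh allocation every iteration
            obj.applyUpdate(support);                          // step 4: do the actual work
        }
        // All of those abandoned lists pile up until the GC is forced to run
        // repeatedly inside the loop.
    }

    // The fix described above: load (or cache) the supporting data once and
    // reuse it across the whole loop.
    static void updateAllCached(List<GameObject> updateObjects) {
        List<SupportData> support = loadSupportingData(); // allocated once, reused below
        for (GameObject obj : updateObjects) {
            obj.applyUpdate(support);
        }
    }

    // Hypothetical placeholders so the sketch compiles on its own.
    static List<SupportData> loadSupportingData() { return List.of(); }

    interface SupportData {}

    interface GameObject {
        void applyUpdate(List<SupportData> support);
    }
}
```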
To be clear, if this were done in a non-GC language like C/C++ the game would crash at this point. Runtimes like Java or .NET will do everything in their power to keep applications available at the expense of performance, which is why laymen tend to refer to them as "slow." The JRE and CLR are actually bloody fast for what they are, but they can only do so much under these sorts of conditions. Like I said before: garbage in, garbage out.
This isn't just a hypothetical situation, either. At one point I measured 48GB of memory allocations between turns (with one "turn" being moving a single tile on the map, or finishing a combat or sex action).
I've tried to correct some instances of this pattern, but I recall running into a ton of breakage because there were a lot of unexpected dependencies elsewhere in the code. It felt like every correction I made would lead to hours of hunting down a bunch of random issues, and it really just wasn't worth it for someone else's project, especially given the shaky licensing terms it's released under. Refactoring isn't always easy or straightforward, but it really shouldn't be that bad.