Oh no lol, I thought of the problem quickly and it popped up in my head, not the other way around. Right, it could also be location-based and not time-based.
I'm not an expert in python either. I actually loathe the language. But you work with what you have, not what you wish to have or want.
But... aren't FIFO queues of small pre-determined size already implemented/interpreted as arrays with extra index information in optimized libraries? Or is that not the case in the Python binaries? Genuinely curious.
I don't know 100% for sure, but if something's built on top of arrays, it's not going to be faster than an array. You can't get faster than the thing you're building on. If it carries extra index information, that will slow it down, because it has to process and keep track of that index information. Arrays in ASM, if my memory serves, are literally implemented and moved via memory (RAM) jumps. In computing, the only things faster than RAM jumps and reads are register reads.
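If anyone wants to sanity-check that in Python rather than take either of us at our word, a rough timing sketch like the one below is enough (sizes and repeat counts are arbitrary, and note that CPython's deque is its own C structure, not a wrapper around a Python list):

```python
# Rough sketch: a plain Python list used as a FIFO queue versus
# collections.deque. Numbers will vary by machine and Python build;
# this only shows how you'd check the claim yourself.
from collections import deque
from timeit import timeit

N = 10_000  # queue size for the test; arbitrary

def list_fifo():
    q = []
    for i in range(N):
        q.append(i)        # enqueue at the tail
    while q:
        q.pop(0)           # dequeue from the head (shifts every element)

def deque_fifo():
    q = deque()
    for i in range(N):
        q.append(i)        # enqueue at the tail
    while q:
        q.popleft()        # dequeue from the head (no shifting)

print("list as FIFO :", timeit(list_fifo, number=10))
print("deque as FIFO:", timeit(deque_fifo, number=10))
```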
I should also mention that a predetermined size is a bad idea in this particular case. While good for reads and jumps, you don't want a situation where you constantly have to resize the limited array, as that will usually, and quickly, overshadow any performance gains. And since we don't know the maximum number of all possible events (now and in the future), nor do we want to pre-set an absurdly large size for all eternity, it's better to stick with an expandable array for now.
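And just so "resizing" isn't hand-waving, here's a throwaway sketch (sizes arbitrary) comparing a list that grows by appending against one preallocated to a fixed size. CPython already over-allocates on growth, which is part of why I'd just let the array expand:

```python
# Rough sketch: growing a Python list by appending (amortized resizing)
# versus filling a preallocated list of fixed size. Sizes are arbitrary.
from timeit import timeit

N = 100_000

def grow_by_append():
    items = []
    for i in range(N):
        items.append(i)    # list resizes itself, amortized O(1) per append
    return items

def fill_preallocated():
    items = [None] * N     # fixed size up front
    for i in range(N):
        items[i] = i       # no resizing, but you must know N in advance
    return items

print("append/grow  :", timeit(grow_by_append, number=20))
print("preallocated :", timeit(fill_preallocated, number=20))
```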
With that said, something being compiled to a binary doesn't guarantee its speed. It only, sometimes, guarantees that the interpreter doesn't have to re-translate/compile it down to ASM again (sometimes, because Java binaries still need to run through a Java virtual machine). "Optimized" also doesn't guarantee a speed-up. It just means the code is written in a way that trims down read and/or run time, and/or uses different techniques to try to run the fastest version of an algorithm for the situation.
I'm writing the above just in case you have the common misconception that optimized code can run better than barebones code or a different data structure that's closer to the solution of the problem. I once knew someone who thought that the array optimization in JavaScript would make array searches as fast as, or faster than, a hash lookup. I ran some tests and found that, while it is fast, direct hash lookups are faster. And that guy wasn't dumb, nor a bad programmer, but these misconceptions do pop up in the tech industry.
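The tests I ran back then were in JavaScript, but the same comparison is trivial to reproduce in Python. This is just a rough sketch with arbitrary sizes, so run it yourself before trusting the numbers:

```python
# Rough sketch: linear search in a list versus hash lookup in a set.
# Values and sizes are arbitrary.
from timeit import timeit

N = 50_000
data_list = list(range(N))
data_set = set(data_list)
needle = N - 1              # worst case for the linear search

print("list search:", timeit(lambda: needle in data_list, number=1_000))
print("set lookup :", timeit(lambda: needle in data_set, number=1_000))
```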
I would completely agree with your approach on low-level languages. But even then, you might miss out on potential compiler optimizations or hardware intrinsics unless you plan to use them.
You might, but you might not. And the compiler optimization might not be faster than what you've created from scratch. It's important to know what the optimization is and when to use it. Looking at the queue object you linked, it's not just providing a FIFO queue, but also:
"The queue module implements multi-producer, multi-consumer queues. It is especially useful in threaded programming when information must be exchanged safely between multiple threads. The Queue class in this module implements all the required locking semantics."
All those multi-threading features (locking, guaranteeing consistency, preventing deadlocks, etc.) are extra work on top of adding the data to the queue. So, inherently, if the queue module is built on top of arrays and provides all of those as features, it will be slower than straight arrays. Conversely, if array speed is that much of a concern for the language, it should already have built-in optimizations for array functions. If it doesn't, well, that's on them (the makers and maintainers of Python).
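If you want to see roughly what that locking machinery costs, a single-threaded sketch like the one below is enough (sizes arbitrary; and to be fair to queue.Queue, it exists for thread safety, which a bare deque or list does not give you):

```python
# Rough sketch: queue.Queue (thread-safe, with locking) versus
# collections.deque (no locking) in a single-threaded push/pop loop.
# Not a fair fight -- Queue exists for thread safety -- it just shows
# the cost of that extra machinery when you don't need it.
from collections import deque
from queue import Queue
from timeit import timeit

N = 10_000

def use_queue():
    q = Queue()
    for i in range(N):
        q.put(i)           # acquires a lock on every put
    while not q.empty():
        q.get()            # acquires a lock on every get

def use_deque():
    q = deque()
    for i in range(N):
        q.append(i)        # plain C-level append, no locking
    while q:
        q.popleft()        # plain C-level pop, no locking

print("queue.Queue:", timeit(use_queue, number=20))
print("deque      :", timeit(use_deque, number=20))
```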
We agree here: trying to make your own data structure in a high-level language (Python) usually results in slower performance than using the native libraries, unless we are dealing with large-scale problems (which I assume is not the case here). I am just afraid that if you want to efficiently implement your own data structure and the operations within it, you would have to resort to things like the linked tooling and try to do better than <stdlib.h> in C, and that may be overkill for a game of this size.
Agreed. But I would like to warn you about putting libraries on pedestals. In the end, all computing languages translate their code down to ASM. Just because a library is often used or is native, that doesn't guarantee it'll run better or faster. Again, conversely, if my own implementation of an array is slower than a native library's array, the language has a problem. Something as basic as an array should already have its own optimizations. If it doesn't, that's not a language you want to stick with for very long.
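Just to make concrete what "rolling your own" looks like in pure Python, here's a minimal fixed-capacity ring buffer sketch (class name and capacity are placeholders I picked). The point isn't that anyone should use it; it's that even this much pure-Python bookkeeping will almost certainly lose to C-backed built-ins like collections.deque(maxlen=...):

```python
# Minimal sketch of a hand-rolled FIFO ring buffer in pure Python.
# Purely illustrative -- collections.deque(maxlen=...) does this in C.
class RingBuffer:
    def __init__(self, capacity):
        self._buf = [None] * capacity   # fixed-size backing array
        self._capacity = capacity
        self._head = 0                  # index of the oldest item
        self._size = 0                  # number of items stored

    def push(self, item):
        if self._size == self._capacity:
            raise OverflowError("ring buffer is full")
        tail = (self._head + self._size) % self._capacity
        self._buf[tail] = item
        self._size += 1

    def pop(self):
        if self._size == 0:
            raise IndexError("ring buffer is empty")
        item = self._buf[self._head]
        self._buf[self._head] = None    # drop the reference
        self._head = (self._head + 1) % self._capacity
        self._size -= 1
        return item

buf = RingBuffer(4)
buf.push("a")
buf.push("b")
print(buf.pop())   # -> "a"
```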
I am not a Python specialist, so I will assume you know better
I hope you can help the dev (and that the dev accepts it?), that would be great for the community! Looking forward to what you can do!
It'll depend on me too. As LeatherMax can attest to, I've been trying to get my own stuff done for a while now. Thought it would take a few weeks at most. Life had other plans. There's a lot I would like to make/build/fix. Desmume on the Linux side needs a memory scanner. I know this forum has a PHP dev position that's been open since forever. And I still have other pet projects/ideas after the current 2 I'm working on.