Tool Others F95Checker [WillyJL]

5.00 star(s) 21 Votes

WillyJL

Well-Known Member
Respected User
Mar 7, 2019
1,062
845
A little cosmetic issue... "Size column to fit" only takes the currently visible items into account and not all values in that column
im aware of that, but its for the better. the only solution would be rendering all items even if they are not visible, and trust me that can get VERY messy when you have a lot of games. next to unusable. ill remind you that it is refreshing the screen each frame, so it needs to do as little as possible each frame to be able to draw the whole interface at least 60 times per second. best solution is to only draw the games that are visible, so its much more manageable
 

FaceCrap

Active Member
Oct 1, 2020
885
619
im aware of that, but its for the better. the only solution would be rendering all items even if they are not visible, and trust me that can get VERY messy when you have a lot of games. next to unusable. ill remind you that it is refreshing the screen each frame, so it needs to do as little as possible each frame to be able to draw the whole interface at least 60 times per second. best solution is to only draw the games that are visible, so its much more manageable
That I can understand. uh... actually I don't. Why does it need to constantly refresh 60 times per second?
It's static data, at most it only needs refreshing when something changes, i.e. after/during a scroll, window resize or update check, no?

but... you don't need to have the client deal with this... let the database handle that ;)
e.g. for the Developer field...
Code:
SELECT max(length(developer)) FROM games
limit 1
Or for the name+installed+version trio

Code:
SELECT max(length(name || installed || version)) FROM games
limit 1
Since status is a number and other icons will be showing, you could get around that by adding number of icons*2 to the returned value

oh, btw... the limit 1 makes sure you always get only one row back in case two values tie for the same length (the max() aggregates already return a single row on their own, but it matters for the value-lookup variant).
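As a quick sanity check, here is a minimal sqlite3 sketch of the aggregate query above. The `games` table and its sample rows are hypothetical; the real schema may differ:

```python
import sqlite3

# In-memory sketch of the max(length(...)) query, with a made-up schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE games (name TEXT, developer TEXT, installed TEXT, version TEXT)")
con.executemany(
    "INSERT INTO games VALUES (?, ?, ?, ?)",
    [
        ("Game A", "Shortdev", "1.0", "1.0"),
        ("Game B", "A Much Longer Developer Name", "0.5", "0.6"),
    ],
)

# Aggregate query: yields exactly one row, with or without LIMIT 1.
longest = con.execute("SELECT max(length(developer)) FROM games").fetchone()[0]
print(longest)  # → 28
```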

EDIT: Simplified a bit... gives the same result. The first attempt gave me the full row, but you actually only need the numeric return value, so only the right side of the WHERE was needed for that

but just in case you want to identify the value itself...

Code:
SELECT [field] FROM games
WHERE length([field]) = (SELECT max(length([field])) FROM games)
limit 1
EDIT2: This will of course not work with NOTES if it contains a multiline value.

EDIT3: Solved issue in EDIT2 like this...
Code:
SELECT max(length(notes)), max(length(substr(notes, 1, instr(notes, char(10))))) FROM games
WHERE length(notes) > 0
limit 1
This will return two values: the first for the row with the longest overall NOTES value, the second for the longest FIRST line in NOTES in case it contains multiline values.
Use whichever of the two is larger when the notes column is visible
 
Last edited:
  • Like
Reactions: baloneysammich

FaceCrap

Active Member
Oct 1, 2020
885
619
Hmmm. Been looking through the code and character counts might not be the answer.
imgui.calc_text_size uses the text itself.
With proportional fonts, a string of X characters may still render wider than a string with more than X characters...
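To illustrate the point with made-up per-glyph widths (not real font metrics, those come from whatever `imgui.calc_text_size` consults):

```python
# Hypothetical per-glyph advance widths (pixels) for a proportional font.
WIDTHS = {"i": 4, "l": 4, "W": 14, "M": 13, "a": 8}

def text_width(s):
    # Sum of per-character advances, the way proportional fonts measure text.
    return sum(WIDTHS[c] for c in s)

# Four characters, yet narrower than two:
print(text_width("illi"))  # → 16
print(text_width("WW"))    # → 28
```

So sorting columns by character count alone can pick the wrong "widest" row.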
 
  • Like
Reactions: WillyJL

WillyJL

Well-Known Member
Respected User
Mar 7, 2019
1,062
845
That I can understand. uh... actually I don't. Why does it need to constantly refresh 60 times per second?
It's static data, at most it only needs refreshing when something changes, i.e. after/during a scroll, window resize or update check, no?

but... you don't need to have the client deal with this... let the database handle that ;)
e.g. for the Developer field...
Code:
SELECT max(length(developer)) FROM games
limit 1
Or for the name+installed+version trio

Code:
SELECT max(length(name || installed || version)) FROM games
limit 1
Since status is a number and other icons will be showing, you could get around that by adding number of icons*2 to the returned value

oh, btw... the limit 1 makes sure you always get only one row back in case two values tie for the same length (the max() aggregates already return a single row on their own, but it matters for the value-lookup variant).

EDIT: Simplified a bit... gives the same result. The first attempt gave me the full row, but you actually only need the numeric return value, so only the right side of the WHERE was needed for that

but just in case you want to identify the value itself...

Code:
SELECT [field] FROM games
WHERE length([field]) = (SELECT max(length([field])) FROM games)
limit 1
EDIT2: This will of course not work with NOTES if it contains a multiline value.

EDIT3: Solved issue in EDIT2 like this...
Code:
SELECT max(length(notes)), max(length(substr(notes, 1, instr(notes, char(10))))) FROM games
WHERE length(notes) > 0
limit 1
This will return two values: the first for the row with the longest overall NOTES value, the second for the longest FIRST line in NOTES in case it contains multiline values.
Use whichever of the two is larger when the notes column is visible
you did mostly answer yourself afterwards, the font is not monospaced so its dependent on the text itself, not only its character count...

as for refreshing 60 times per second: traditional interface toolkits like Qt, GTK and others tend to have a retained workflow: you define an interface object, and change its properties as time goes on. for example you create a button, then maybe you change its position and state when needed. the output on screen will be automatically redrawn when a visual change is needed.

imgui, on the other hand, is a so called immediate mode toolkit (the name comes from immediate mode gui). instead of creating interface elements in memory, you define a workflow for how to draw each element in the interface, and it gets rerun many times per second to ensure the interface is responsive. it might seem very counter intuitive, and i do agree to some extent, but i personally really like that style of interface toolkit, and i feel like it makes sense for this program.

for example, with a retained gui i would have to create an individual row for each game, then when sorting settings get changed id have to move them around and force a redraw, then if filtering is applied id have to hide some elements but not others, then say the shown columns are changed, now i have to go through all rows and enable or disable the visibility of the elements. and just imagine what a mess it would be to switch between list and grid mode, the entire layout of the window changes! it ends up being cumbersome to keep user settings and interface state in sync with the actual visual output.

with immediate mode, instead, i have a single function which takes the current state of the interface and produces a coherent output image with visual consistency and complete ease of mind. and another big bonus is that, due to this simplicity, its also much quicker and easier to write an immediate mode gui. the only downside really is the refreshing part.
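A toy sketch of the immediate mode idea in plain Python (not actual imgui code): one function rebuilds the whole "frame" from current state on every call, so there is no retained widget tree to keep in sync:

```python
# Illustration only: state goes in, a complete frame comes out, every call.
def draw_frame(state):
    cells = [f"[{g}]" for g in state["games"]]
    if state["grid_mode"]:
        # Entire layout flips based on a single flag, no widgets to move around.
        return " ".join(cells)
    return "\n".join(cells)

state = {"games": ["A", "B", "C"], "grid_mode": False}
print(draw_frame(state))   # list layout
state["grid_mode"] = True
print(draw_frame(state))   # grid layout: same state dict, same function
```

The cost is that `draw_frame` runs every frame; the benefit is that there is nothing to desync.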

if you want to see for yourself, just open the interface settings section. at the bottom there is a counter for the current framerate (should hover around 90 frames per second), and above that you have a vsync ratio setting, you can read about that in the tooltip

PS: a more concrete example:
1666664201638.png
with imgui i check if a setting is enabled, then either draw the button or not. that easy.

with something like qt the button would be always present, then when the setting is changed id have to hide it.

also in that screenshot you can see exactly what i mentioned earlier:
Python:
if not imgui.is_rect_visible(imgui.io.display_size.x, frame_height):
    # Skip if outside view
    imgui.dummy(0, frame_height)
    continue
frame_height is the height of one row in the game list, and display_size.x is the window width. so basically if the current row it is trying to draw is not visible (either above or below the scroll area that is visible) then it skips this game and doesnt draw it
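The same culling idea can be sketched in plain Python (a hypothetical helper, not the actual F95Checker code): from the scroll offset, viewport height and row height you can compute exactly which rows need drawing:

```python
# Which rows intersect the visible scroll area? Everything outside [first, last)
# gets a dummy spacer instead of a full draw, like the snippet above.
def visible_rows(scroll_y, viewport_h, row_h, n_rows):
    first = max(0, int(scroll_y // row_h))
    last = min(n_rows, int((scroll_y + viewport_h) // row_h) + 1)
    return first, last

# 1000 games, 30px rows, a 600px tall view scrolled down to y=300:
print(visible_rows(300, 600, 30, 1000))  # → (10, 31), so 21 rows drawn, 979 skipped
```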
 
Last edited:

ascsd

Newbie
Jul 26, 2021
73
53
you did mostly answer yourself afterwards, the font is not monospaced so its dependent on the text itself, not only its character count...

as for refreshing 60 times per second: traditional interface toolkits like Qt, GTK and others tend to have a retained workflow: you define an interface object, and change its properties as time goes on. for example you create a button, then maybe you change its position and state when needed. the output on screen will be automatically redrawn when a visual change is needed.

imgui, on the other hand, is a so called immediate mode toolkit (the name comes from immediate mode gui). instead of creating interface elements in memory, you define a workflow for how to draw each element in the interface, and it gets rerun many times per second to ensure the interface is responsive. it might seem very counter intuitive, and i do agree to some extent, but i personally really like that style of interface toolkit, and i feel like it makes sense for this program.

for example, with a retained gui i would have to create an individual row for each game, then when sorting settings get changed id have to move them around and force a redraw, then if filtering is applied id have to hide some elements but not others, then say the shown columns are changed, now i have to go through all rows and enable or disable the visibility of the elements. and just imagine what a mess it would be to switch between list and grid mode, the entire layout of the window changes! it ends up being cumbersome to keep user settings and interface state in sync with the actual visual output.

with immediate mode, instead, i have a single function which takes the current state of the interface and produces a coherent output image with visual consistency and complete ease of mind. and another big bonus is that, due to this simplicity, its also much quicker and easier to write an immediate mode gui. the only downside really is the refreshing part.

if you want to see for yourself, just open the interface settings section. at the bottom there is a counter for the current framerate (should hover around 90 frames per second), and above that you have a vsync ratio setting, you can read about that in the tooltip

PS: a more concrete example:
View attachment 2122717
with imgui i check if a setting is enabled, then either draw the button or not. that easy.

with something like qt the button would be always present, then when the setting is changed id have to hide it.

also in that screenshot you can see exactly what i mentioned earlier:
Python:
if not imgui.is_rect_visible(imgui.io.display_size.x, frame_height):
    # Skip if outside view
    imgui.dummy(0, frame_height)
    continue
frame_height is the height of one row in the game list, and display_size.x is the window width. so basically if the current row it is trying to draw is not visible (either above or below the scroll area that is visible) then it skips this game and doesnt draw it
regarding the constant redraw: in the main loop, im assuming everything in between imgui.new_frame() & imgui.render() is responsible for constructing the state. then render draws it.

could you hook into everything that modifies the state and have a class variable "do_render" that gets set to true?
Then you could have:

Python:
if self.do_render:
    self.do_render = False
    imgui.new_frame()
    ...

imgui.render()
to eliminate recreating the state every time until something has changed?
so its the best of both worlds between traditional guis and immediate gui



even better would be if imgui lets you access a gui element to modify it alone, instead of having to reconstruct everything. but idk if thats possible
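A standalone sketch of that dirty-flag idea (hypothetical names, not imgui API): route all state changes through one setter that marks the interface dirty, and only rebuild the frame when the flag is set:

```python
# Dirty-flag render loop: frames are only rebuilt when state actually changed.
class App:
    def __init__(self):
        self.state = {}
        self.dirty = True        # force the first frame
        self.frames_built = 0

    def set_state(self, key, value):
        # Every mutation goes through here, so nothing is missed.
        if self.state.get(key) != value:
            self.state[key] = value
            self.dirty = True

    def tick(self):
        if self.dirty:
            self.dirty = False
            self.frames_built += 1  # stand-in for new_frame() ... render()

app = App()
app.set_state("filter", "abc")
for _ in range(60):              # 60 ticks, one change: one rebuild
    app.tick()
print(app.frames_built)          # → 1
```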
 
Last edited:

WillyJL

Well-Known Member
Respected User
Mar 7, 2019
1,062
845
regarding the constant redraw: in the main loop, im assuming everything in between imgui.new_frame() & imgui.render() is responsible for constructing the state. then render draws it.

could you hook into everything that modifies the state and have a class variable "do_render" that gets set to true?
Then you could have:

Python:
if self.do_render:
    self.do_render = False
    imgui.new_frame()
    ...

imgui.render()
to eliminate recreating the state every time until something has changed?
so its the best of both worlds between traditional guis and immediate gui



even better would be if imgui lets you access a gui element to modify it alone, instead of having to reconstruct everything. but idk if thats possible
problem is that its quite hard to determine when a redraw is needed. for example the refresh progress bar needs a redraw to move, if you move your mouse then a redraw is needed in case a new element is hovered, same with keyboard to update input boxes, then also there are gif images, so as long as one is visible a redraw is always needed. and probably many more things im forgetting. i feel like it would take much more to check if a redraw is needed than to actually do the redraw... but ill try to experiment with it
 

WillyJL

Well-Known Member
Respected User
Mar 7, 2019
1,062
845
problem is that its quite hard to determine when a redraw is needed. for example the refresh progress bar needs a redraw to move, if you move your mouse then a redraw is needed in case a new element is hovered, same with keyboard to update input boxes, then also there are gif images, so as long as one is visible a redraw is always needed. and probably many more things im forgetting. i feel like it would take much more to check if a redraw is needed than to actually do the redraw... but ill try to experiment with it
ascsd actually it looks like i was too quick to judge... i threw something together and it works fairly well for me. before i got 8-10% cpu idle and 14-16% cpu with 6 animated images visible. now i get 2-4% cpu usage when idle, but the usage with animated images visible is understandably the same. keep in mind those are per core usages, so 16% per core in my case amounts to 1.2% total cpu usage. using the per core usage just gives a more accurate reading is all.
now it looks out for:
- sort and filter changes
- animated images, and images that are still loading, being visible on screen
- if a refresh is ongoing
- mouse movements if inside the window
- mouse wheel, mouse clicks, text input, and keyboard pressed
and when it finds any of these, it will keep redrawing for the next 0.5 seconds (there are some small animations and transitions that require this). otherwise it keeps polling events without redrawing. give it a shot with the beta build
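The 0.5 second grace window can be sketched like this (a hypothetical helper, simplified from the description above):

```python
# After any interesting event (scroll, click, keypress, sort change...),
# keep redrawing for a short grace window so small animations can finish,
# then fall back to polling events without drawing.
REDRAW_WINDOW = 0.5  # seconds

def should_redraw(now, last_event_time):
    return (now - last_event_time) <= REDRAW_WINDOW

print(should_redraw(10.3, 10.0))  # → True  (0.3s after a click, keep drawing)
print(should_redraw(10.8, 10.0))  # → False (window expired, go idle)
```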
 
  • Like
Reactions: ascsd

ascsd

Newbie
Jul 26, 2021
73
53
ascsd actually it looks like i was too quick to judge... i threw something together and it works fairly well for me. before i got 8-10% cpu idle and 14-16% cpu with 6 animated images visible. now i get 2-4% cpu usage when idle, but the usage with animated images visible is understandably the same. keep in mind those are per core usages, so 16% per core in my case amounts to 1.2% total cpu usage. using the per core usage just gives a more accurate reading is all.
now it looks out for:
- sort and filter changes
- animated images, and images that are still loading, being visible on screen
- if a refresh is ongoing
- mouse movements if inside the window
- mouse wheel, mouse clicks, text input, and keyboard pressed
and when it finds any of these, it will keep redrawing for the next 0.5 seconds (there are some small animations and transitions that require this). otherwise it keeps polling events without redrawing. give it a shot with the beta build
Awesome. I thought it might be hard to think about all events to capture, but this seems to do the trick

It displayed a white window for a few seconds the first 2 times I ran the new version, it seemed to have finally drawn when i moved my mouse over it, i tried a few times after and couldn't reproduce. so might not be an actual issue
Resizing, however, causes it to show a black window and doesn't draw until it is interacted with

small thing, when the input box at the bottom is focused, the blinking cursor is stuck in whatever state it was last in while redrawing. and sometimes it would go crazy and just keep blinking really fast

idle cpu utilization went down from 0.9-3% to 0.3-1.2%

I did notice something unrelated to this change.. i have 50 games and as I hover over each game (or scroll through grid mode), my memory usage goes up from 170mb to 670mb. I'm assuming it's storing the thumbnails in memory? which is great for responsiveness of the games currently visible on screen. but im assuming if some people have a lot of games, they could end up loading a huge amount of images in memory just by scrolling while their mouse is hovering over the list
Not sure what the best course of action would be as it depends on how everything works
Also 10mb per game seems like a lot?
 

WillyJL

Well-Known Member
Respected User
Mar 7, 2019
1,062
845
It displayed a white window for a few seconds the first 2 times I ran the new version, it seemed to have finally drawn when i moved my mouse over it, i tried a few times after and couldn't reproduce. so might not be an actual issue
Resizing, however, causes it to show a black window and doesn't draw until it is interacted with

small thing, when the input box at the bottom is focused, the blinking cursor is stuck in whatever state it was last in while redrawing. and sometimes it would go crazy and just keep blinking really fast
This is exactly the kind of attention to details feedback I wanted, thanks! I’ll patch those up later.


I did notice something unrelated to this change.. i have 50 games and as I hover over each game (or scroll through grid mode), my memory usage goes up from 170mb to 670mb. I'm assuming it's storing the thumbnails in memory? which is great for responsiveness of the games currently visible on screen. but im assuming if some people have a lot of games, they could end up loading a huge amount of images in memory just by scrolling while their mouse is hovering over the list
Not sure what the best course of action would be as it depends on how everything works
Also 10mb per game seems like a lot?
Yea I know about that... thing is imgui is not exactly made for using images, and it’s a lower level library, so images get handled as textures by the underlying rendering framework (OpenGL in this case). And the only painless way I found to use those is reading the images and loading the individual pixel data as rgb arrays, and apparently that takes up quite a bit of memory. Still didn’t find a fix for that, not sure where to start. I’ll poke at it a bit more though. Either way it’s expected, even though not ideal. It’s not just a bug or leak on your end. I was also considering a way to turn off images entirely for that very reason (and will add that even if I fix ram cos they take up disk space too)
 

ascsd

Newbie
Jul 26, 2021
73
53
This is exactly the kind of attention to details feedback I wanted, thanks! I’ll patch those up later.



Yea I know about that... thing is imgui is not exactly made for using images, and it’s a lower level library, so images get handled as textures by the underlying rendering framework (OpenGL in this case). And the only painless way I found to use those is reading the images and loading the individual pixel data as rgb arrays, and apparently that takes up quite a bit of memory. Still didn’t find a fix for that, not sure where to start. I’ll poke at it a bit more though. Either way it’s expected, even though not ideal. It’s not just a bug or leak on your end. I was also considering a way to turn off images entirely for that very reason (and will add that even if I fix ram cos they take up disk space too)
Yea i remember the img pipeline, my gui framework is very similar to imgui but instead i use opencv as backend.
a 1080p image should be 6MB (unless alpha channel needs to be used then its 8MB) in ram (less on disk if its stored as compressed jpg)
On my 1080p monitor, with the window fullscreen and a game popup open, the average size of the displayed image is 360p, so you could probably get away with downsampling to 360p and it should be around 0.66MB (0.9MB w alpha).
It'll be a tradeoff for clarity but its probably worth it for the decrease in ram usage as its not that important
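The byte math above checks out: uncompressed size in RAM is just width × height × bytes per pixel (3 for RGB, 4 with an alpha channel):

```python
MB = 1024 * 1024

def frame_bytes(w, h, channels):
    # Raw, uncompressed pixel data: one byte per channel per pixel.
    return w * h * channels

print(frame_bytes(1920, 1080, 3) / MB)  # ≈ 5.9 MB, the "~6MB" 1080p figure
print(frame_bytes(1920, 1080, 4) / MB)  # ≈ 7.9 MB with alpha ("~8MB")
print(frame_bytes(640, 360, 3) / MB)    # ≈ 0.66 MB for a 360p thumbnail
print(frame_bytes(640, 360, 4) / MB)    # ≈ 0.88 MB ("~0.9MB w alpha")
```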

just copy pasting so no idea if this is helpful as ive never used opengl. the guy shows how you could discard a texture.


One way to deal with it that works for both list and grid would be to keep a list of the visible games, and when its no longer visible, discard the texture. then maybe you wont need to downsample?
 
Last edited:

WillyJL

Well-Known Member
Respected User
Mar 7, 2019
1,062
845
Yea i remember the img pipeline, my gui framework is very similar to imgui but instead i use opencv as backend.
a 1080p image should be 6MB (unless alpha channel needs to be used then its 8MB) in ram (less on disk if its stored as compressed jpg)
On my 1080p monitor, with the window fullscreen and a game popup open, the average size of the displayed image is 360p, so you could probably get away with downsampling to 360p and it should be around 0.66MB (0.9MB w alpha).
It'll be a tradeoff for clarity but its probably worth it for the decrease in ram usage as its not that important

just copy pasting so no idea if this is helpful as ive never used opengl. the guy shows how you could discard a texture.


One way to deal with it that works for both list and grid would be to keep a list of the visible games, and when its no longer visible, discard the texture. then maybe you wont need to downsample?
a bit of news:
baseline im using for testing is: startup i get 280mb ram. after switching to grid mode and back (this always loads the same 9 still images) i get 430mb.
what i was doing is: load image from disk, convert to pixel bytearray, then apply the bytearray to the texture. i was keeping only 1 texture id per image, so for animated images id have to reapply the correct pixel data to the texture id every time the frame was supposed to change. also i never thought about it this deep, maybe the bytearray needed to remain alive to draw it, who knows. so i was keeping the bytearray in ram even after applying.

however it turns out that by applying you are sending the pixels to the gpu or whatever other internal component of opengl, so keeping the bytearray was not needed. that saves a bit on still images. for animated images the correct way would have been to make multiple texture ids, apply each frame to a separate one only once, then cycle texture ids when needed. so what im doing now is: load image, convert to bytearray, generate textures, apply pixels, discard the pixel arrays. this way on the python side i am not wasting memory.

now after loading grid view i get 340mb ram: 100 - (340 - 280) / (430 - 280) * 100 = 60% less ram usage. i also tried slowly scrolling through my whole grid view to load all the images i have: before i got 4.5gb ram, with the new system i get 3.4gb ram, 100 - (3400 - 280) / (4500 - 280) * 100 = 26% less wasted ram. either the animated images get less of a benefit, or i dont know. in any case still an improvement.

however im afraid i cant go much further than that, at least not within my capabilities. i tried resizing the image at multiple steps of the loading process, and always got a segmentation fault. i tried using rgb only instead of rgba, but i re-encountered the issue that led me to use rgba initially: some images for some reason show up tilted to the side (almost like it misses a few pixels at the end of each pixel row, so it wraps around and looks like a parallelogram) while others show in grayscale and get random rainbow shimmers.
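On the rgb "parallelogram" issue: this looks like the classic OpenGL row-alignment symptom. By default OpenGL assumes each pixel row is padded to a multiple of 4 bytes (GL_UNPACK_ALIGNMENT), which 4-byte RGBA rows always satisfy but 3-byte RGB rows often don't, so every row ends up shifted and the image skews sideways. Calling glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before the texture upload is the usual fix; this is a guess from the symptoms, not verified against this codebase. The arithmetic:

```python
# How many padding bytes OpenGL expects at the end of each pixel row
# under the default 4-byte unpack alignment.
def row_padding(width, channels, alignment=4):
    row = width * channels
    return (alignment - row % alignment) % alignment

print(row_padding(1920, 3))  # → 0 (5760 bytes, already aligned: no skew)
print(row_padding(1021, 3))  # → 1 (3063 bytes: each row off by one, image skews)
print(row_padding(1920, 4))  # → 0 (RGBA rows are always 4-byte multiples)
```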
what ive been doing is running the loading and conversion to pixels in a thread to not block the main gui, so i tried running in the main thread to see if its a memory leak issue but nope, it actually uses more ram in the main thread!

the link you gave was partially useful, in fact i did use glDeleteTextures to free the texture ids when reloading the image and applying again, but the issue that guy was having is that he was loading, creating and applying the image / texture every frame, so not the issue here unfortunately.

and finally your suggestion of tracking what is visible and what's not: while yeah it would save ram, it would also require reloading each image from disk the next time it becomes visible... id prefer some ram usage to actively destroying my drive xD

last idea that i got while finishing up this reply is to actually keep the pixel arrays on python side, and then use your suggestion, but instead i *apply* them to the textures when visible then discard the textures when not. will report back soon
 

WillyJL

Well-Known Member
Respected User
Mar 7, 2019
1,062
845
last idea that i got while finishing up this reply is to actually keep the pixel arrays on python side, and then use your suggestion, but instead i *apply* them to the textures when visible then discard the textures when not. will report back soon
as expected the usage went down when going back to list mode, but still higher than with the other solution (430mb (as it was at the start) when in grid with images visible, 380 when in list mode with nothing visible). the question is if this is better than my previous solution when we are dealing with more images... curiously enough i ended up with the same 3.4gb usage... so with few images its worse, and with many images its about the same, perhaps with 1000 images it would save maybe 50mb of ram xD but then again this uses some cpu since it has to apply the textures each time theyre shown, so in the end id say my previous attempt was the slightly better one
 

ascsd

Newbie
Jul 26, 2021
73
53
a bit of news:
baseline im using for testing is: startup i get 280mb ram. after switching to grid mode and back (this always loads the same 9 still images) i get 430mb.
what i was doing is: load image from disk, convert to pixel bytearray, then apply the bytearray to the texture. i was keeping only 1 texture id per image, so for animated images id have to reapply the correct pixel data to the texture id every time the frame was supposed to change. also i never thought about it this deep, maybe the bytearray needed to remain alive to draw it, who knows. so i was keeping the bytearray in ram even after applying.

however it turns out that by applying you are sending the pixels to the gpu or whatever other internal component of opengl, so keeping the bytearray was not needed. that saves a bit on still images. for animated images the correct way would have been to make multiple texture ids, apply each frame to a separate one only once, then cycle texture ids when needed. so what im doing now is: load image, convert to bytearray, generate textures, apply pixels, discard the pixel arrays. this way on the python side i am not wasting memory.

now after loading grid view i get 340mb ram: 100 - (340 - 280) / (430 - 280) * 100 = 60% less ram usage. i also tried slowly scrolling through my whole grid view to load all the images i have: before i got 4.5gb ram, with the new system i get 3.4gb ram, 100 - (3400 - 280) / (4500 - 280) * 100 = 26% less wasted ram. either the animated images get less of a benefit, or i dont know. in any case still an improvement.

however im afraid i cant go much further than that, at least not within my capabilities. i tried resizing the image at multiple steps of the loading process, and always got a segmentation fault. i tried using rgb only instead of rgba, but i re-encountered the issue that led me to use rgba initially: some images for some reason show up tilted to the side (almost like it misses a few pixels at the end of each pixel row, so it wraps around and looks like a parallelogram) while others show in grayscale and get random rainbow shimmers.
what ive been doing is running the loading and conversion to pixels in a thread to not block the main gui, so i tried running in the main thread to see if its a memory leak issue but nope, it actually uses more ram in the main thread!

the link you gave was partially useful, in fact i did use glDeleteTextures to free the texture ids when reloading the image and applying again, but the issue that guy was having is that he was loading, creating and applying the image / texture every frame, so not the issue here unfortunately.

and finally your suggestion of tracking what is visible and what's not: while yeah it would save ram, it would also require reloading each image from disk the next time it becomes visible... id prefer some ram usage to actively destroying my drive xD

last idea that i got while finishing up this reply is to actually keep the pixel arrays on python side, and then use your suggestion, but instead i *apply* them to the textures when visible then discard the textures when not. will report back soon
ok that gives me some idea about whats happening in opengl. what im confused about is, if the data is being sent to the gpu and then the pixel array discarded, why is ram still being used? wouldnt it be in the gpu's vram instead?
also, assuming the apply doesnt get sent to gpu vram and is still in normal ram, that would mean its either just referencing it, so ram should not decrease if you discard the python array, or you have 2 copies of the pixel data: 1 in python and 1 in opengl. so wouldnt discarding the python pixel array reduce the ram to 50% exactly?
so im not sure if there will be a benefit in the last part you want to experiment with, but it will be good to see what happens, might give us a better understanding

as for the resize, the segfault leads me to believe you are resizing after the pixel array has been referenced, eg. it has already been applied to a texture. If it was just a numpy array in python then the resize should not lead to any errors. Unless its because the image needs to be larger than the space opengl has to render it in maybe? but i doubt it

Anyways, tomorrow when i get a chance, ill try digging into the code and understand the process a bit more

id prefer some ram usage to actively destroying my drive xD
also 4.5gb is not some ram usage :ROFLMAO: thats more than some laptops have!
 
Last edited:

WillyJL

Well-Known Member
Respected User
Mar 7, 2019
1,062
845
as for the resize, the segfault leads me to believe you are resizing after the pixel array has been referenced, eg. it has already been applied to a texture. If it was just a numpy array in python then the resize should not lead to any errors. Unless its because the image needs to be larger than the space opengl has to render it in maybe? but i doubt it
nono, i was trying to resize way before the pixel array stuff. i load the image with Pillow (PIL) and that has convenient .reduce() and .resize(), and those did work on most images, but on some they dont, maybe its something to do with animated images. i also tried resizing the frame objects instead of the image object, and got even more segfaults. something tells me that Pillow is resizing stuff, but the array does not get updated even though i request it afterwards...? either way seems like a bug in Pillow to me

ok that gives me some idea about whats happening in opengl. what im confused about is, if the data is being sent to the gpu and then the pixel array discarded, why is ram still being used? wouldnt it be in the gpu's vram instead?
thats what id think too, but i said gpu only because searching online most results talk about moving stuff to gpu vram, but clearly thats not (at least entirely) the case. my guess is that its keeping a copy of the array on its side, maybe its the python bindings of opengl that do that for some weird python reason...?

or you have 2 copies of the pixel data. 1 in python and 1 in opengl. so wouldnt discarding the python pixel array reduce the ram to 50% exactly?
thats also a mystery, there are clearly some more weird shenanigans going on here that are beyond my smol brain

so im not sure if there will be a benefit in the last part you want to experiment with, but it will be good to see what happens, might give us a better understanding
i did make a comment on that if you didnt see it, but then i also tried one last thing: i read the image into memory, then converted to pixel arrays and did all the other shenanigans only when the image comes into view... and yeah it got me 1.7gb of ram after scrolling the whole grid, but it stuttered (understandably so) every time an image became visible, to the point of being unusable, so thats also not an option






EDIT: also fixed the resize and text cursor with redrawing, pushed a build
 
Last edited:

FaceCrap

Active Member
Oct 1, 2020
885
619
thats what id think too, but i said gpu only because searching online most results talk about moving stuff to gpu vram, but clearly thats not (atleast entirely) the case. my guess is that its keeping a copy of the array on its side, maybe its the python bindings of opengl that do that for some weird python reason...?
Could also be some weird system optimization thingy... Windows for instance doesn't always release freed memory, to prevent fragmentation in case a program wants to allocate new memory chunks. Typically it only gets really freed when you minimize or defocus an app. e.g. I had build 553 load all images in grid view, hit the 3GB mark (300+ games), switched back to list view and only saw very slowly decreasing ram usage, but the moment I minimized the app, it dropped to almost 400MB; same thing with defocus.
 
Last edited:
  • Wow
Reactions: WillyJL

WillyJL

Well-Known Member
Respected User
Mar 7, 2019
1,062
845
Could also be some weird system optimization thingy... Windows for instance doesn't always release freed memory, to prevent fragmentation in case a program wants to allocate new memory chunks. Typically it only gets really freed when you minimize or defocus an app. e.g. I had build 553 load all images in grid view, hit the 3GB mark, switched back to list view and only saw very slowly decreasing ram usage, but the moment I minimized the app, it dropped to almost 400MB; same thing with defocus
now THATS some very interesting behavior! ill have to test it for myself, but for me on linux nothing like that has ever happened
 

FaceCrap

Active Member
Oct 1, 2020
885
619
Something completely different, and this is more a curiosity than anything else... I use a little tool to allow me to scroll whichever window the mouse is hovering over, even if said window isn't the active one...
For some reason F95checker never reacts if the mouse is over the list/gridview; I always need to activate the window first. That's not the only thing weird with mouse handling. I sometimes access my desktop through RDP from mobile... and it won't ever respond to mouse clicks (the mouse can be controlled through drag/tap/double tap/long tap).
I do have other python based tools and haven't seen this behavior with those... might be a library thing, no idea...

Any idea what may be the reason? (NOTE, this isn't a big issue for me... it's more an annoyance than anything else, and the need to be able to RDP through mobile ain't that big either... FYI, RDP on a laptop is no issue... clicks work fine then)
 

FaceCrap

Active Member
Oct 1, 2020
885
619
ok, test-driving build 564 and it behaves decidedly differently... scrolling through gridview, image loading is much faster. RAM doesn't go higher than around 1GB, same when scrolling through list view so that every image is forced to load by the preview. Going back to listview, minimizing or defocusing has no effect at all, RAM stays at around 1GB... same when minimizing to systray... (and that surprised me)
 