Testing this out now, and from first impressions it's a great start! Love how you easily handshake with the f95 website and how you handle local caching on the library side as well.
The biggest gripes I have currently are bulk importing, install location management and multiple versions.
Currently using Game-List and that's pretty much falling apart; most alternatives are also meh, but this project is QUITE promising so far!
Scanning known folders could be done with regexes (F:\basepathto\Games\{creator}\{title}\{version}\game.exe) and then looking up the f95 thread based on the scraped data.
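A minimal sketch of what that path-scheme scan could look like, assuming a fixed `{creator}\{title}\{version}` layout under a user-chosen base folder (the names and exact scheme here are mine, not XLibrary's):

```typescript
// scanLibrary.ts -- match installed games against a \{creator}\{title}\{version}\*.exe
// scheme using a regex over the path relative to the library root.
// Layout and field names are assumptions, not XLibrary's actual conventions.
// Note: the backslash pattern assumes Windows-style paths as in the example above.
import * as fs from "node:fs";
import * as path from "node:path";

const SCHEME = /^(?<creator>[^\\]+)\\(?<title>[^\\]+)\\(?<version>[^\\]+)\\[^\\]+\.exe$/i;

interface ScannedGame { creator: string; title: string; version: string; exePath: string; }

function* walk(dir: string): Generator<string> {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) yield* walk(full);
    else yield full;
  }
}

export function scanLibrary(baseDir: string): ScannedGame[] {
  const found: ScannedGame[] = [];
  for (const file of walk(baseDir)) {
    const rel = path.relative(baseDir, file);
    const m = SCHEME.exec(rel);
    if (m?.groups) {
      found.push({
        creator: m.groups.creator,
        title: m.groups.title,
        version: m.groups.version,
        exePath: file,
      });
    }
  }
  return found;
}
```

Each hit could then be matched against the f95 scrape data by creator/title to find the thread.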
Sidenote on scanning: Game-List users will have a `GL_Infos.ini` sidecar file next to the exe (or wherever) that contains the thread URL, title and version (sadly not the creator name).
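For that sidecar, something like the following could pull the thread URL, title and version out of `GL_Infos.ini` during the same scan. The key names below are guesses based on what the file is said to contain; check an actual file for the exact spelling:

```typescript
// readGlInfos.ts -- pull thread URL, title and version from a Game-List sidecar.
// The key matching is an assumption about the INI contents, not a verified format.
import * as fs from "node:fs";

interface GlInfos { threadUrl?: string; title?: string; version?: string; }

export function readGlInfos(iniPath: string): GlInfos {
  const out: GlInfos = {};
  const text = fs.readFileSync(iniPath, "utf8");
  for (const line of text.split(/\r?\n/)) {
    const eq = line.indexOf("=");
    if (eq === -1) continue;
    const key = line.slice(0, eq).trim().toLowerCase();
    const value = line.slice(eq + 1).trim();
    if (key.includes("url") || key.includes("link")) out.threadUrl = value;
    else if (key.includes("version")) out.version = value;
    else if (key.includes("name") || key.includes("title")) out.title = value;
  }
  return out;
}
```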
Installing new versions and updates could be done in a really neat and intuitive way since you already have a browser plugin: scan the active or completed downloads coming from f95 and link the archive download to the title somehow. There are multiple ways you could do this, but it would be pretty seamless if it picked up files on its own from the download folder.
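A rough idea of how the extension side could watch for completed downloads and hand the finished archive over, assuming an MV3 background script with the `downloads` permission. `notifyXLibrary` is a placeholder for however the extension already talks to the desktop app (native messaging, a local endpoint, etc.):

```typescript
// background.ts -- watch for finished downloads and hand the archive path over.
// Requires the "downloads" permission; notifyXLibrary() is a hypothetical handoff,
// not an existing API.
const ARCHIVE_RE = /\.(zip|rar|7z)$/i;

chrome.downloads.onChanged.addListener((delta) => {
  if (delta.state?.current !== "complete") return;
  chrome.downloads.search({ id: delta.id }, (items) => {
    const item = items[0];
    if (item && ARCHIVE_RE.test(item.filename)) {
      // item.filename is the absolute local path of the finished file.
      notifyXLibrary({ path: item.filename, sourceUrl: item.url, referrer: item.referrer });
    }
  });
});

function notifyXLibrary(payload: { path: string; sourceUrl: string; referrer?: string }) {
  // Placeholder: forward to the desktop app however XLibrary already communicates.
  console.log("handoff", payload);
}
```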
Obviously local installs will follow a naming scheme the user sets, or one you set, I guess.
For readability outside of programs, doing "\Games\{creator}\{title}\{version}\game.exe" works great.
But for software optimization, doing "\Games\{ThreadID}\{version}\game.exe" might be better... but that's not human-friendly for file navigation, not that you want users to poke through the files anyway.
Outside of the huge, large-scale features I mentioned, having a portable mode where all settings, the database and storage live under the install location would be really good! Easier to back up, easier to control and manage and all that.
Cached images should also be optimized into a more cost-effective image format instead of being saved directly as PNG and JPEG files; you could use WebP or AVIF! In testing for the Atlas F95 managing app, the devs turned 80+ gigs of saved images (banners, previews and so on) down to under 5 gigs IIRC, so there are huge gains even with just WebP.
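Assuming a Node-based cache, the conversion itself is roughly this much with the `sharp` library (the quality value and directory layout are just illustrative):

```typescript
// convertCache.ts -- re-encode cached banners/previews as WebP with sharp.
// Quality setting and cache layout are illustrative, not XLibrary's actual cache.
import * as fs from "node:fs";
import * as path from "node:path";
import sharp from "sharp";

export async function convertCacheToWebp(cacheDir: string): Promise<void> {
  for (const file of fs.readdirSync(cacheDir)) {
    if (!/\.(png|jpe?g)$/i.test(file)) continue;
    const src = path.join(cacheDir, file);
    const dst = path.join(cacheDir, file.replace(/\.(png|jpe?g)$/i, ".webp"));
    await sharp(src).webp({ quality: 80 }).toFile(dst);
    fs.unlinkSync(src); // drop the original once the WebP exists
  }
}
```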
EDIT: Having given the browser plugin some more thought: you do have access to the browser's download system, and you can monitor the URLs the user clicks from an f95 thread, meaning you can map the thread URL to, say, the mediafire URL. This lets you identify which file is being downloaded without parsing the filename at all. On completion you would just hand XLibrary the path to the downloaded archive along with the thread URL for normal scraping and installing after extraction. Fully possible now that you have a browser extension like this.
Not 100% sure whether you would need a DDL whitelist for sites like mediafire or whatever... or just let the extension map this out at all times for every URL leading out of f95.
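One way to do that mapping without any whitelist: a content script on thread pages reports every outbound link the user clicks, and the background script ties the next matching download back to that thread. This is a sketch across two extension files; the message name and in-memory map are made up for illustration:

```typescript
// contentScript.ts -- runs on f95zone.to thread pages, reports clicked outbound links.
document.addEventListener("click", (ev) => {
  const link = (ev.target as Element)?.closest("a");
  if (link && !link.href.includes("f95zone.to")) {
    chrome.runtime.sendMessage({ type: "outbound-click", threadUrl: location.href, targetUrl: link.href });
  }
}, true);

// background.ts -- remember which thread an outbound URL came from, then tag the download.
const clickedFrom = new Map<string, string>(); // targetUrl -> threadUrl

chrome.runtime.onMessage.addListener((msg) => {
  if (msg.type === "outbound-click") clickedFrom.set(msg.targetUrl, msg.threadUrl);
});

chrome.downloads.onCreated.addListener((item) => {
  // The final file URL often differs from the link the user clicked (redirects),
  // so falling back to the referrer is a reasonable extra step.
  const threadUrl = clickedFrom.get(item.url) ?? clickedFrom.get(item.referrer);
  if (threadUrl) {
    console.log("download", item.id, "belongs to thread", threadUrl);
  }
});
```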
It would also be possible to set the zip name as the "version" value, or do fuzzy matching against the latest scraped version value to normalize it down to the scrape data.
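A small sketch of that normalization: strip the noise out of the archive name and check whether the latest scraped version string appears in it. The rules here are illustrative heuristics, not a spec:

```typescript
// matchVersion.ts -- try to confirm an archive belongs to the scraped version.
function normalize(s: string): string {
  return s.toLowerCase().replace(/[^a-z0-9.]+/g, "");
}

export function matchesScrapedVersion(archiveName: string, scrapedVersion: string): boolean {
  const name = normalize(archiveName.replace(/\.(zip|rar|7z)$/i, ""));
  const version = normalize(scrapedVersion);
  // e.g. "MyGame-0.12.3b-pc.zip" vs scraped "v0.12.3b"
  return version.length > 0 && name.includes(version.replace(/^v/, ""));
}
```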
Enabling the Launch config to use run commands would also go pretty far!
This would let Steam, GOG and Itch.io users who bought titles to support the creators launch the game directly without a shortcut-file workaround (steam://rungameid/2413210 for example)! This could also let you list these storefronts in the UI and make them a filter option, for example filtering for titles that have a Steam store page or similar but that the user has not mapped/installed.
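Assuming the launch config just stores a command string, handling URI-style entries could be as simple as handing them to the OS instead of spawning an exe. The `LaunchEntry` shape is hypothetical and the URI branch below is Windows-specific:

```typescript
// launch.ts -- treat "run commands" like steam://rungameid/... as OS-level URIs,
// everything else as a plain executable. LaunchEntry is a hypothetical shape.
import { spawn } from "node:child_process";

interface LaunchEntry { command: string; args?: string[]; }

export function launch(entry: LaunchEntry): void {
  if (/^[a-z][a-z0-9+.-]*:\/\//i.test(entry.command) && !/^file:/i.test(entry.command)) {
    // Hand URI schemes (steam://, itch://, ...) to the default handler via cmd's
    // built-in `start` (Windows only in this sketch).
    spawn("cmd", ["/c", "start", "", entry.command], { detached: true, stdio: "ignore" }).unref();
  } else {
    spawn(entry.command, entry.args ?? [], { detached: true, stdio: "ignore" }).unref();
  }
}

// Example: launch({ command: "steam://rungameid/2413210" });
```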
Also spreading the word about this client through my Discord servers that handle this type of content.
EDIT (I lost count):
On the UI side, tags should probably be clickable: left click to search for that specific tag (this could be any tag or the creator name, for example), and middle click to append it to the current search! Also, instead of ctrl-clicking to open the thread for a title from the gallery view, middle clicking the title or the main banner image should also open the thread.
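For the tag clicks, that split is just a matter of distinguishing the primary button from the middle one, roughly like this (`setSearch`/`appendToSearch` stand in for whatever the UI actually exposes):

```typescript
// tagClicks.ts -- left click replaces the search, middle click appends to it.
// setSearch/appendToSearch are placeholders for the app's real search state.
declare function setSearch(query: string): void;
declare function appendToSearch(term: string): void;

export function wireTag(el: HTMLElement, tag: string): void {
  el.addEventListener("click", () => setSearch(tag)); // left click
  el.addEventListener("auxclick", (ev: MouseEvent) => {
    if (ev.button === 1) { // middle click
      ev.preventDefault();
      appendToSearch(tag);
    }
  });
}
```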
Also, I noticed how updates are checked for and processed, and it seems quite slow and quite weird. I'm not sure what you are using to scrape, but wouldn't it just be easier to load the public JSON list at a large item size and have it scrape from the last scrape point? I mean, you could do that once an hour or more without issues and you would get pretty much everything for the day on one page. IIRC, from testing scraping with other clients, 90 items on that list page is the sweet spot for whatever reason, and nothing stops you from loading more pages for even more (there are only 252 pages at 90 items per page). That would be this JSON:
https://f95zone.to/sam/latest_alpha/latest_data.php?cmd=list&cat=games&page=1&sort=date&rows=90
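A sketch of that incremental pull: fetch page 1 at 90 rows and stop once it hits the last thread already processed. The response shape below (items, `thread_id`, `version` fields) is an assumption and needs to be checked against the real payload:

```typescript
// pollLatest.ts -- pull the public "latest updates" JSON and stop at the last
// thread already seen. The field names below are assumptions; inspect the real
// latest_data.php payload and adjust.
const LIST_URL =
  "https://f95zone.to/sam/latest_alpha/latest_data.php?cmd=list&cat=games&page=1&sort=date&rows=90";

interface LatestItem { thread_id: number; title: string; version: string; } // assumed shape

export async function pollLatest(lastSeenThreadId: number): Promise<LatestItem[]> {
  const res = await fetch(LIST_URL);
  if (!res.ok) throw new Error(`list request failed: ${res.status}`);
  const body = await res.json();
  const items: LatestItem[] = body?.msg?.data ?? body?.data ?? []; // shape is an assumption
  const fresh: LatestItem[] = [];
  for (const item of items) {
    if (item.thread_id === lastSeenThreadId) break; // everything past this is already known
    fresh.push(item);
  }
  return fresh;
}
```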