Vambropy [v1.12]: VaM packages database, management, and a complement to VarBrowser

commondi32

Newbie
Jul 16, 2020
41
46
Thanks doggava and luna.kitty!

I know VarManager; it was actually an inspiration. For the "remove old versions" functionality I still use another great tool, bill prime's. I usually use it to disable preload morphs I do not need and to check for vars that have no dependencies and for older versions. It is really fast and lets you back up, filter, and move by var type... It was another inspiration, but I did not want to reimplement all that functionality because it is already good there. Sure, an all-in-one tool would be great.
I was thinking about a way to identify and mark older versions, though. That would need another column in the database, like "older". For example, when a new version of a plugin gets into the database, it would automatically flag earlier versions with older = true. Maybe a different background color instead, so the column does not have to be visible.

" In future relases a button could be included to do either: a) in Dependencies section, install all (or filtered) external dependencies as symlinks, or b) Install everyhing external symlinked, as 'Available' , so Varbrowser could handle everything from VaM. "
Did I say that? LOL yes, that would be cool but at the moment I do not have much time.
But... since Symlinks are now already supported, maybe I should try extend the new menu option "Install+Deps", adding "Install+Deps symlinked", and that would be close to your idea I think. I gonna try with that one.

The new version fixes a couple of things, like "uninstall all", which had been broken since an earlier update. I also fixed repo3, repo4, and repo5 in config.ini, which were bugged.
 

commondi32

Newbie
Jul 16, 2020
41
46
Maybe this is useful in the meantime: if you need all your vars from external sources symlinked, you can try it in a few steps (there is also a script sketch for step 5 after this list):
  • 1. Filter on External.
  • 2. Go to the first item in the packages table and click on it.
  • 3. Grab the scrollbar, go all the way down, then Shift-click the last item. This works like "select all".
  • 4. Open the context menu and install as symlink.
  • 5. Afterwards, go to AddonPackages/symlinked and move the vars to AllPackages/symlinked.
That way you would have everything available in VarBrowser.
I have tried it on a small batch selected the same way (just a dozen) and it worked. It should scale, but on 10,000s of vars it may take some time.
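For anyone who wants to script step 5, here is a minimal Python sketch. VAM_ROOT and the skip-duplicates policy are my assumptions, not Vambropy behavior; adjust the paths to your install:

```python
from pathlib import Path
import shutil

VAM_ROOT = Path(r"C:\VaM")  # assumption: your VaM install folder
src = VAM_ROOT / "AddonPackages" / "symlinked"
dst = VAM_ROOT / "AllPackages" / "symlinked"
dst.mkdir(parents=True, exist_ok=True)

for var in src.glob("*.var"):
    target = dst / var.name
    if target.exists():
        continue  # keep the existing copy, skip duplicates
    shutil.move(str(var), str(target))
    print(f"moved {var.name}")
```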
 

doggava

Newbie
Dec 16, 2021
38
11
Nice!

I wonder why we can't have a tool that updates the JSONs to always use "latest" for ALL deps instead of a specific version?
 

doggava

Newbie
Dec 16, 2021
38
11
Also, how does the tool's "Unpacking option for clothing and hair" work? Can you do a large batch of vars at the same time, and how are duplicates handled?
 

commondi32

Newbie
Jul 16, 2020
41
46
I considered writing that tool (updating the JSONs) for myself, because with many plugins it is no problem: in some cases a different dependency version (mostly with plugins) does not break a scene. But in other cases it does, and it would be really time-consuming to figure out which is which. On the other hand, plugin bloat is not the cause of slowdowns in VaM; anything that includes clothing, hair, and morphs is. In terms of disk space, plugins represent a very small portion. Sure, we do not like having about 100 Timeline versions installed, but it really does not affect performance. They just sit there for the scenes that need them, and VaM does the job of hiding them for you. VarBrowser even hides them from VaM.

Visually, though, in Vambropy for example, it is kind of ugly to see the same package repeated 100 times. Following what I mentioned before about tagging older versions first, I am thinking of a way to hide them like VaM does, with some "Show older versions" checkbox, something like that. But editing inside vars is another topic entirely, a big community discussion IMHO.
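Just to illustrate the idea doggava asked about, a rough sketch could look like this. It is untested; the regex and the ".latest" rewrite are my assumptions about the reference format, and as explained above it can break scenes that rely on a specific version:

```python
import re

# Matches Creator.Package.123 just before the ':' of an in-scene
# package reference, e.g. "AcidBubbles.Timeline.283:/Custom/...".
REF = re.compile(r"([\w-]+\.[\w-]+)\.(\d+)(?=:)")

def force_latest(scene_json_text: str) -> str:
    """Rewrite every versioned package reference to use .latest."""
    return REF.sub(r"\1.latest", scene_json_text)
```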

I think that once the whole VaM ecosystem winds down, a community could appear to rescue all the content and repair it to work optimally, as happens with some old games where a labor of love keeps a game alive and optimized, together with all the content its community created, which would no longer exist without that great work. In that scenario we could see complete collections relaunched with all the repairs (i.e. all the ispinox scenes running with the last known version of each plugin). Today everything keeps changing and nothing is definitive, so almost no creator bothers to review all their content and update it to the latest dependencies. Who knows: if VaM2 ends up replacing VaM, we will probably forget about all this, or maybe not. Time will tell.

Unpacking clothing is something I did manually and then automated with scripts, even editing vars and removing duplicate clothing from looks and scenes. It required several steps and was prone to errors (and editing vars is dangerous if your files are not organized, because you may end up with an archived var that is no longer 100% original). It also required reviewing, in a temporary directory first, where all the content would go, because you find clothing with assets, clothing with sound (!), clothing with scenes and presets, scripts and textures mainly for presentation purposes, and files placed anywhere, sometimes in locations that have nothing to do with VaM conventions. So it is a bit complicated to unpack blindly if you care about the result.

I remember trying some collections and ended up unpacking only what we liked, so each item appears in gray, as a local item, in the VaM clothing browser. And it is a bit faster that way. Scenes and appearances have no problem with it, because when a package is not found, VaM tries the UID and finds the item anyway. But you still see the missing dependency, and that is the reason for the "dummyvar": an empty package with only a meta.json file, which I put in a subdirectory so it does not mix with the real packages. That turned into a feature in Vambropy, but I have limited the whole unpacking thing to clothing and hair packages without scenes, to avoid bloating the Custom folder with unconventional stuff and to keep it simple.

So a conventionally unpacked clothing item is a Custom/Clothing folder, maybe a Custom/Atom/Person/Clothing folder if it has presets, maybe an asset bundle if it has some CUA, but that is enough. Vambropy can unpack any package whose tags read "clothing", "clothing, hair", "clothing, assets", "Hair", or "Hair, assets" (see the sketch below). Of course you can edit the Python script and extend it, but I consider that unnecessary in terms of optimization. With a few personal favorites unpacked (a hundred, couple of hundred maybe?), simple clothing and hair items are enough to give you an extra boost of performance. Having a whole collection unpacked defeats the purpose of VarBrowser, I think. I try to find some kind of balance, but anyone can have a different view on it; that is the hard part of deciding on a feature.
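The tag rule above could be expressed like this; the function name and tag format are illustrative, not Vambropy's actual code:

```python
# Simple clothing/hair cases considered safe to unpack.
UNPACKABLE_TAGS = {
    "clothing",
    "clothing, hair",
    "clothing, assets",
    "hair",
    "hair, assets",
}

def is_unpackable(tags: str) -> bool:
    """True if a package's tag string matches the allowlist
    (case-insensitive)."""
    return tags.strip().lower() in UNPACKABLE_TAGS
```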

All the Vambropy menu functions are scripts that iterate over each selected package, so if you select a large batch it should work. If there are invalid items in it (like a clothing package with a scene), those should be skipped while the rest unpack.

If, because of some still unknown bug, you end up with an item colored as unpacked when it really is not, you can fix it by deleting it from the database. On refresh it will reappear clean with the correct color (make sure the related dummyvar is deleted manually).

Reverting an unpack works by checking that the real package contents are present in the Custom tree. If they are not, if even a single file is missing, it will not revert and you will need to delete the subdirectories with the clothing/hair items manually (a sketch of the check is below). That is another reason for sticking to simple clothing: if you delete a scene that was installed by unpacking a clothing package, the clothing uninstall feature would not work. So this whole unpacking concept is maybe a little advanced for most users, an experimental thing most VaM users do not need to worry about.
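A rough sketch of that revert safety check, illustrative rather than the actual Vambropy code:

```python
import zipfile
from pathlib import Path

def can_revert(var_path: str, vam_root: str = ".") -> bool:
    """Only allow reverting if every Custom/ file from the var
    still exists on disk."""
    root = Path(vam_root)
    with zipfile.ZipFile(var_path) as z:
        for name in z.namelist():
            if name.startswith("Custom/") and not name.endswith("/"):
                if not (root / name).exists():
                    return False  # one missing file blocks the revert
    return True
```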
 

qube21

New Member
Apr 13, 2025
6
0
commondi32, your manager is the best I've used so far, congratulations, and thank you!

I hope you don't mind, I've added a new column to the UI with var_size information. Sorting works as well, and performance is not impacted, from what I could tell.

[screenshot: the new var_size column]

I only made changes to "ui.py" and "db_functions.py". I'm an amateur Python programmer, so beware if you plan to use these changes :D (also, Grok 3 was used...). Most likely I've broken some functionality as well (progress percentage??), but I've deleted the packages.db file and it recreates it without issues.

You're welcome to implement the changes to the main build if you find them useful.

Modified module files attached (ui.py, db_functions.py).

I really like the direction this app is going, very good progress so far! And thanks for the source ;)
 

commondi32

Newbie
Jul 16, 2020
41
46
Great! I'll merge it as soon as I have some time with the code. Feel free to try new stuff in it; that's a way to learn too. I tried to build it in a way that allows expanding functionality without breaking anything, but sometimes things get complicated and a new feature means changing several functions at once.

I like having file size info. I was thinking about that, and about adding older-version info too: maybe just a tag with older = True, or a list holding the versions found, to avoid repeated records, like a record with name: "AcidBubbles.Timeline" and another field version: "283, 275, 234, 230, 223, 211...". But that one would need heavier code changes.

Let us think about what other data would require new database fields, so it is all added together in one release, to avoid people having to recreate their database too often. For example, recent releases store morph hash info; the program is not using it yet, but it was added with future plans in mind. We can add the needed fields now and work on them later. At some point a DB rebuild will be needed anyway.

I remember having more fields displayed, but I had a problem with the way the treeview resized that I could not control, and that is why only the fields that fit are shown. I wanted to show more fields and use the horizontal scrollbar to reach them, but (for some reason I do not remember now) I could not make it work within that frame size. But yes, the idea is to have all that useful info and use the scrollbar to reach the hidden fields. And I think a custom field order and custom column widths would be cool too.
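One common fix for that, sketched here under the assumption that the table is a ttk.Treeview (this is not the actual ui.py code): disable column stretching and attach a horizontal scrollbar, so extra columns overflow instead of being squeezed into the frame:

```python
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
frame = ttk.Frame(root)
frame.pack(fill="both", expand=True)

cols = ("name", "version", "size", "tags", "creator", "date")
tree = ttk.Treeview(frame, columns=cols, show="headings")
for c in cols:
    tree.heading(c, text=c)
    tree.column(c, width=160, stretch=False)  # fixed width -> overflow scrolls

xbar = ttk.Scrollbar(frame, orient="horizontal", command=tree.xview)
tree.configure(xscrollcommand=xbar.set)
xbar.pack(side="bottom", fill="x")
tree.pack(fill="both", expand=True)
root.mainloop()
```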

Thanks qube21 for the contributions and more ideas are always welcome!
 

qube21

New Member
Apr 13, 2025
6
0
I've made a couple of UI adjustments that should help with managing large libraries:
  1. Click+drag selection.
  2. Shift+click+drag selection (for multiple groups).
  3. Ctrl+A (select all filtered; see the sketch after this list).
  4. Number of selected rows shown in the title bar.
  5. Total size of the selection summed.
Performance is snappy on a 1TB library, where other managers struggle.
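For anyone curious, item 3 comes down to a one-line Treeview binding. A minimal standalone sketch, assuming a ttk.Treeview rather than the actual modified ui.py:

```python
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
tree = ttk.Treeview(root, columns=("name", "size"), show="headings")
tree.pack(fill="both", expand=True)
for i in range(20):
    tree.insert("", "end", values=(f"Creator.Package.{i}", f"{i} MB"))

def select_all(event):
    # get_children() returns only the rows currently attached,
    # i.e. the filtered view, so this selects "all filtered".
    tree.selection_set(tree.get_children())
    return "break"  # stop the default Ctrl+A handling

tree.bind("<Control-a>", select_all)
root.mainloop()
```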

[GIF: selection features demo]
 

qube21

New Member
Apr 13, 2025
6
0
adding older-version info too: maybe just a tag with older = True, or a list holding the versions found, to avoid repeated records, like a record with name: "AcidBubbles.Timeline" and another field version: "283, 275, 234, 230, 223, 211...". But that one would need heavier code changes.
Can you expand a bit more on this feature? I have something in the works, but I'm not sure if it's exactly that or whether I implemented it correctly. What I have at the moment is a new DB field, var_outdated (meaning more recent versions are available on hand); it's a basic 1/0 flag.

[GIF: var_outdated flag demo]
 

commondi32

Newbie
Jul 16, 2020
41
46
Wow, that is fantastic! And it is really good to know how it performs at that scale, as I had no reference for it. Thanks!

About older versions: yes, a boolean field, just as you describe, 1/0 in the database. The other way I was considering would be more complex, because it requires changing the Name format (removing the version number from it) and adding all version numbers to a list-type field. The metadata would be updated to the latest version only, and previous versions would not exist in the database as records, just numbers indicating that they exist. But it would be complicated to manage, because every refresh would have to check all the versions, converting and iterating over each list... I think it would increase processing time. The simpler way just accumulates older packages in the database, but that is not much of a problem, since SQLite can handle thousands of records very fast. A filtering checkbox would help keep it neat visually, showing only the latest version when you want. I think the simpler way is better, just as you have done.
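A sketch of how the flag could be refreshed in one pass; the table and column names (packages, creator, name, version, var_outdated) are assumptions about the schema, and it assumes version is stored numerically:

```python
import sqlite3

con = sqlite3.connect("packages.db")
con.execute("""
    UPDATE packages
       SET var_outdated = CASE
           WHEN version < (SELECT MAX(p2.version)
                             FROM packages p2
                            WHERE p2.creator = packages.creator
                              AND p2.name    = packages.name)
           THEN 1 ELSE 0
       END
""")
con.commit()
con.close()
```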

In a couple of days I will get into it and update with all your new stuff. Thanks for the help!
 

qube21

New Member
Apr 13, 2025
6
0
I've made a few UI adjustments that make better use of the screen space and reduce blank areas (there's still room for improvement).

This is the original UI, with lots of empty space when resizing.
[GIF: original UI while resizing]

And this is the changed UI, with an option to detach the thumbnail window. That makes it possible to keep the previews on a second screen if desired.
[GIF: new UI with detachable thumbnail window]

Changes attached as a .zip.
 

qube21

New Member
Apr 13, 2025
6
0
One more note about performance. Since I have a large .var collection (1TB+), I ran some benchmarks on building packages.db and generating the thumbnails.

Building the DB of 18,000 vars (packages.db size = 20.6 MB): 29 seconds to complete. (db_functions.py is quite optimized already and I couldn't improve its performance; every run took 29 seconds, with packages.db removed manually before each run.)

Building thumbnails with the original code: 2 min 40 sec to complete. 16,776 images created, 975 MB in total.

Building thumbnails with the refactored code (image_tools.py): 1 min 40 sec, a 60% speed increase. 16,776 images created, 911 MB in total.

Of course, different system configs will give different results, but on my system the improvement for initial thumbnail creation is very good. My system: 5950X, 64GB 3600MT/s, Gen4 NVMe SSD.

This is Grok's explanation of why the original code was slower:
The bottleneck in the original code was the inefficient JPG filtering logic. Specifically, the original version used nested loops to process files in a zip archive. For each JPG file, it checked every other file in the archive to see if there was another file with the same base name (e.g., to identify duplicates or related files). If you have m JPGs and n total files in the zip, this approach had a time complexity of O(m × n). In the worst case, if most files are JPGs (i.e., m is close to n), this could approach O(n²), meaning the number of operations grows quadratically with the number of files. For example, with 500 JPGs and 1000 total files, that’s up to 500,000 checks—pretty slow!
The refactored code tackled this bottleneck head-on by optimizing the filtering process. Instead of nested loops, it used a Counter (a highly efficient Python dictionary-like object) to count the base names of all files in one pass, which takes O(n) time. Then, for each JPG, it simply checks the precomputed count, which takes O(m) time. Since m is less than or equal to n, the total time complexity becomes O(n + m), or effectively O(n). Using the same example (1000 files, 500 JPGs), this reduces the work to about 1500 operations—a massive improvement over 500,000!
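To make the before/after concrete, here is an illustrative version of the Counter approach. It is not the actual image_tools.py, and the exact keep/skip rule in Vambropy may differ; here, JPGs that share a base name with another file are kept:

```python
from collections import Counter
from pathlib import PurePosixPath

def related_jpgs(names: list[str]) -> list[str]:
    """One O(n) pass to count base names, then an O(m) pass over
    the JPGs, instead of the old O(m*n) nested loops."""
    stems = Counter(PurePosixPath(n).stem for n in names)
    return [
        n for n in names
        if n.lower().endswith(".jpg")
        and stems[PurePosixPath(n).stem] > 1  # shares a base name
    ]
```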

Refactored code attached as .zip
 

doggava

Newbie
Dec 16, 2021
38
11
"One more note about performance. Since I have a large .var collection (1TB+), I ran some benchmarks on building packages.db and generating the thumbnails. [...] Refactored code attached as .zip"
Could commondi32 integrate all your improvements into the tool as one package?
 

qube21

New Member
Apr 13, 2025
6
0
Could @commondi32 integrate all your improvements into the tool as one package?
Yes, absolutely. The code changes are attached to my replies and can already be run from source: just replace the original .py modules and run.

Since this app is commondi32's creation, I'm waiting for him to apply the changes and tweak them if needed.

If anyone has suggestions for improving the app, just write them here. The underlying concept is really good and can be built upon with new features.
 

commondi32

Newbie
Jul 16, 2020
41
46
Could commondi32 integrate all your improvements into the tool as one package?
I will, doggava... just give me some time! lol
I am working on exactly what qube21 said: applying most of the changes and tweaking some things here and there. There are many little changes, all for the better. I want to make sure the UI code is clearer and ready for new improvements and features.
 

Lin-X

Newbie
Mar 5, 2024
52
18
After the first launch, and on every launch since, I get this error after about 20 minutes...

[screenshot: error message]

In the "AllPackages" folder I have 45,000+ files ;-)
In the "thumbnails" folder I have 41,947 thumbnails.
(ram 64Gb, ssd 2Tb)