Tool Others F95Checker [WillyJL]

5.00 star(s) 23 Votes

evilution382

Member
Oct 17, 2018
119
283
i need help. i keep on getting this error
We've had to impose stricter rate limiting site wide. When multiple users' checks coincide it can effectively cause a DDoS; the most recent downtime two days ago was directly attributed to this.

As WhiteVanDaycare suggested, lowering your worker count and minimizing the number of full rechecks should allow you to stay under the rate limiting.
Not much to really do about it atm
 
  • Like
Reactions: antonymorato

LucasG

Newbie
Jun 28, 2023
20
13
i need help. i keep on getting this error
Error 429, the main theme of the last few pages, with even Sam, our SysAdmin, explaining where it comes from, literally two (2) posts before yours -> here

And no, there is no known method to prevent, avoid or circumvent it so far. It can happen to anyone due to the high number of requests hitting the server (all of them, not only yours alone)
 

blackop

Newbie
Apr 14, 2022
27
16
Here is my custom build for linux and windows:


Includes (all are mine, but the 429 status fix isn't uploaded to the repo):
  • rate limiter: "API RPM" under the "Refresh" settings section - you can try your luck and use it instead of the next one; otherwise leave it at 0 (unlimited)
  • retries on a 429 response: "Retry on 429" under the "Refresh" settings section, right under the previous one; so far it has worked for me

I suggest making a copy/backup of your current f95checker settings directory, since my build will modify the settings table in the SQLite file.

- WIN: C:\Users\Username\AppData\Roaming\f95checker
- Linux (at least on my manjaro): /home/username/.config/f95checker/


Upd:
Uploaded changes for the last two - . The version of asynciolimiter used in requirements.txt is also slightly modified.
You should remove .dev69 from the end if you want to build it yourself, but expect an AssertionError from time to time due to .
I replaced those lines with a plain print to stderr (which is probably not working for some reason, since I never saw it in the console, but at least it's not blocking the update process anymore).

Upd. 2: This workaround only applies to full rechecks; a fast one may still fail, but it's easy to restart it yourself.

Upd. 3: Updated the archives in MEGA. Added this, fixed a bug in my changes to asynciolimiter, and slightly rewrote the debug-related logic - now a message about the pause will be printed to the console when running the F95Checker-Debug binary from a terminal/cmd/etc.
 
Last edited:

LucasG

Newbie
Jun 28, 2023
20
13
Here is my custom build for linux and windows:


Includes (all are mine, but the 429 status fix isn't uploaded to the repo):
  • rate limiter: "API RPM" under the "Refresh" settings section - you can try your luck and use it instead of the next one; otherwise leave it at 0 (unlimited)
  • retries on a 429 response: "Retry on 429" under the "Refresh" settings section, right under the previous one; so far it has worked for me

I suggest making a copy/backup of your current f95checker settings directory, since my build will modify the settings table in the SQLite file.

- WIN: C:\Users\Username\AppData\Roaming\f95checker
- Linux (at least on my manjaro): /home/username/.config/f95checker/
Just testing it on Devuan daedalus (stable) with python 3.11.2 installed.
Backed up the folders ~/.config/f95checker and ~/.f95checker, but the latter seems to be from older versions
(Locally used version is 10.2, as I hate how GitHub wants to verify per email at every login)

Unable to import OpenGL.arrays.numpymodule.NumpyHandler: No numpy module present: No module named 'numpy'
Got this line with the debug binary, but I didn't see anything strange coming from it.

Used the "Retry on 429" option and ran it on my 453 entries (outdated for months, I was lazy)
It was slow as expected due to a few waiting times, but nothing I would call "as slow as a Debian update" ;)
 

blackop

Newbie
Apr 14, 2022
27
16
Unable to import OpenGL.arrays.numpymodule.NumpyHandler: No numpy module present: No module named 'numpy'
Got this line with the debug binary, but I didn't see anything strange coming from it.
Yeah, I always get the same. Not sure if it actually affects anything; I think I had it with the last official release too.
 

harem.king

Engaged Member
Aug 16, 2023
3,583
6,165
Lately I have been getting errors due to too many connections.
I dropped it to 5, then ended up dropping it to 1 concurrent connection to actually fix it.
I think F95 changed their server settings to restrict multiple parallel connections
 

GrammerCop

Well-Known Member
Donor
Mar 15, 2020
1,831
1,792
Lately I have been getting errors due to too many connections.
I dropped it to 5, then ended up dropping it to 1 concurrent connection to actually fix it.
I think F95 changed their server settings to restrict multiple parallel connections
Try reading this thread. This is all we have been discussing for the last few days.
 

WillyJL

Veni, vidi, vici
Donor
Respected User
Mar 7, 2019
1,141
956
The quick checks aren't a problem as they're hitting a cached API.

It's the 'full' checks that are the problem, as they make requests to Xenforo.

The solution would be to space out the requests more, and to sleep when a 429 is encountered then retry.
seems like a band-aid fix, and from what i understand any single thread request is quite heavy on your backend.
might be time to make my own heavily cached api that does full checks to f95zone slowly every now and then and keeps results to serve quickly to everyone.
would you be ok with something like this?

general idea im thinking of is:
- keep a server of mine monitoring the latest updates page
- when an update is up then full check it and save parsed data
- when a user asks for a thread info serve the cached info
- if the cached info is like more than 6 or 12 hours old, then recheck to f95zone
this way, each thread will be fetched from f95zone at most once per 6 hours across the whole f95checker userbase, since users don't reach out to f95zone anymore.
would make the dedicated checker api you made useless.
only thing that would reach to f95zone directly from users would be notification checking basically

for the record i have no clue when i will have enough time to implement this, let alone make it stable, but at least having a plan is a start
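a rough sketch of what that cache layer could look like (ThreadCache, fetch_thread and TTL are names i just made up for illustration, not an actual api):

```python
import time

# Illustrative sketch only: ThreadCache, fetch_thread and TTL are made-up
# names, not an existing F95Checker API. The idea: serve stored thread data
# unless it is older than the TTL, in which case do one real recheck.
TTL = 6 * 60 * 60  # 6 hours, per the proposal above

class ThreadCache:
    def __init__(self, fetch_thread, now=time.time, ttl=TTL):
        self._fetch = fetch_thread   # slow path: one real request to f95zone
        self._now = now              # injectable clock, handy for testing
        self._ttl = ttl
        self._store = {}             # thread_id -> (timestamp, parsed_data)

    def get(self, thread_id):
        entry = self._store.get(thread_id)
        if entry and self._now() - entry[0] < self._ttl:
            return entry[1]          # fresh enough: f95zone is not touched
        data = self._fetch(thread_id)
        self._store[thread_id] = (self._now(), data)
        return data
```

with something like this, no matter how many users ask for the same thread, f95zone sees at most one fetch per thread per TTL window.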
 
Last edited:

WillyJL

Veni, vidi, vici
Donor
Respected User
Mar 7, 2019
1,141
956
  • rate limiter: "API RPM" under "Refresh" settings section - you can try your luck and use it instead of next one, otherwise leave it at 0 (unlimited)
  • retries for 429 response: "Retry on 429" under "Refresh" settings section, right under previous one, so far it worked for me
this is actually making the problem even worse. when you get a 429, continuing to retry will only worsen the situation. the server is saying "you're going too fast, i can't handle this" and you go "ok let me keep trying another 20 times very quickly".
as sam said, if we want to fix this in place, the solution is sleeping when a 429 is received until it's cleared.

Sam do you have a rough number for how long the 429 lasts? i'm thinking of detecting a 429 and sleeping for twice that time. at least as a temporary solution, until i make the dedicated cache api i outlined above (if you agree with that proposal)
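the sleep-then-retry idea could look something like this (fetch and the delay value are placeholders here; the real delay depends on sam's answer):

```python
import asyncio

async def fetch_with_429_backoff(fetch, url, max_retries=3, delay=60):
    """Instead of hammering retries, sleep out the rate-limit window
    whenever a 429 comes back, then try again."""
    for attempt in range(max_retries + 1):
        status, body = await fetch(url)
        if status != 429:
            return status, body      # success or a non-rate-limit error
        if attempt < max_retries:
            await asyncio.sleep(delay)  # wait out (roughly) the 429 window
    return status, body              # still rate limited after all retries
```

the key point vs. blind retries: every 429 costs a full sleep before the next attempt, so the checker never fires a burst of doomed requests.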
 
  • Like
Reactions: effninefivehuman

blackop

Newbie
Apr 14, 2022
27
16
this is actually making the problem even worse. when you get a 429, continuing to retry will only worsen the situation. the server is saying "you're going too fast, i can't handle this" and you go "ok let me keep trying another 20 times very quickly".
as sam said, if we want to fix this in place, the solution is sleeping when a 429 is received until it's cleared.

Sam do you have a rough number for how long the 429 lasts? i'm thinking of detecting a 429 and sleeping for twice that time. at least as a temporary solution, until i make the dedicated cache api i outlined above (if you agree with that proposal)
Why? I agree there is a chance that X out of N games will still be updated after the first 429 response (not that I see more than 2 errors in the console at a time though), but after that there will be a delay (modules/api.py:612) before retrying. It's not like I'm shooting requests one after another without any delays until the refresh is finished.

Yeah, stopping the refresh process completely and continuing after a delay is better, but keeping the update status in the bottom right corner and on the "refresh" button is a bit tricky. If I just cancel all tasks, re-add them to the queue and sleep before the next gather, the counter will drop to zero, which ain't cool

Upd: finally understood why I never saw more than 2 errors in the console - the semaphore for full checks is "workers / 10", which is 2 in my case. So the worst scenario with the current implementation is "successfully updated X games, got 429, issued at most W / 10 additional requests, slept for 1 minute, continued". I still agree that no pointless requests after a 429 plus a delay is better, but unless you have over 100 workers it's not that big of a problem.
 
Last edited:

blackop

Newbie
Apr 14, 2022
27
16
Maybe something like this is better (s429_sem is asyncio.Semaphore(1))
Python:
        # If another task is already sleeping out a 429, wait here until it
        # releases the semaphore before issuing a new request.
        if s429_sem.locked():
            await s429_sem.acquire()
            s429_sem.release()
        async with request("GET", game.url, timeout=globals.settings.request_timeout * 2) as (res, req):
            if req.status == 429 and globals.settings.retry_on_429:
                async with s429_sem:  # hold the semaphore so other tasks pause at the check above
                    if globals.debug:
                        print(f"[{datetime.now()}] Got 429 error during \"{game.name}\" update, retry in 1 minute", file=sys.stderr)
                    await asyncio.sleep(60)
                    # await the retry, otherwise an unawaited coroutine is returned
                    return await full_check(game, version, limiter)
As I see it, it might help to limit pointless requests in some cases, but if X tasks have already passed that point and gone into the async with request, this logic won't help. The same (again, I might be wrong) applies to cancelling all tasks on a 429 status - there is no guarantee that some of them haven't already started their request.

WillyJL, would appreciate your thoughts on this
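To make the gating behaviour concrete, here is a small self-contained simulation of the pattern (server, refresh and the short sleep are stand-ins of mine; the real code sleeps 60 s and calls the actual request helper):

```python
import asyncio

async def full_check(name, gate, server, log):
    # If another task is already sleeping out a 429, wait for the gate first.
    if gate.locked():
        await gate.acquire()
        gate.release()
    status = await server(name)
    if status == 429:
        async with gate:  # hold the gate so other tasks pause at the check above
            log.append(f"429 on {name}, pausing")
            await asyncio.sleep(0.01)  # stand-in for the 60 s sleep
        return await full_check(name, gate, server, log)  # retry after the pause
    log.append(f"ok {name}")
    return status

async def refresh(server, log, n=3):
    gate = asyncio.Semaphore(1)  # the s429_sem from the snippet above
    return await asyncio.gather(*(full_check(f"game{i}", gate, server, log)
                                  for i in range(n)))
```

With a server that answers 429 once and 200 afterwards, only the first task pays the pause; the other tasks block at the gate check instead of firing more doomed requests while the limit is active.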
 
Last edited:

antonymorato

Member
Jun 11, 2020
131
26
Error 429, the main theme on the last pages with even Sam , our SysAdmin, explaining where it comes from, literally two (2) posts before yours -> here

And no, no known method to prevent, avoid or circumvent it so far. Can happen to everyone due to high numbers of requests on the server (all of them, not only yours alone)
Thanks, it kind of fixed itself
 

blackop

Newbie
Apr 14, 2022
27
16
Maybe something like this is better (s429_sem is asyncio.Semaphore(1))
Python:
        # If another task is already sleeping out a 429, wait here until it
        # releases the semaphore before issuing a new request.
        if s429_sem.locked():
            await s429_sem.acquire()
            s429_sem.release()
        async with request("GET", game.url, timeout=globals.settings.request_timeout * 2) as (res, req):
            if req.status == 429 and globals.settings.retry_on_429:
                async with s429_sem:  # hold the semaphore so other tasks pause at the check above
                    if globals.debug:
                        print(f"[{datetime.now()}] Got 429 error during \"{game.name}\" update, retry in 1 minute", file=sys.stderr)
                    await asyncio.sleep(60)
                    # await the retry, otherwise an unawaited coroutine is returned
                    return await full_check(game, version, limiter)
As I see it, it might help to limit pointless requests in some cases, but if X tasks have already passed that point and gone into the async with request, this logic won't help. The same (again, I might be wrong) applies to cancelling all tasks on a 429 status - there is no guarantee that some of them haven't already started their request.

WillyJL, would appreciate your thoughts on this
WillyJL tested it - no more requests after a 429 until the timeout is over.

Also caught the situation I mentioned above - one of the games issued a request before the 429 (on another game) and got its response after the 429.
The changes are already included in the MEGA builds in the original post.
 

LucasG

Newbie
Jun 28, 2023
20
13
WillyJL tested it - no more requests after a 429 until the timeout is over.

Also caught the situation I mentioned above - one of the games issued a request before the 429 (on another game) and got its response after the 429.
The changes are already included in the MEGA builds in the original post.
Made any updates on your system between builds?
Seems to be a case of "everything is faster than a Debian update, even glaciers"
 

blackop

Newbie
Apr 14, 2022
27
16
Made any updates on your system between builds?
Seems to be a case of "everything is faster than a Debian update, even glaciers"
Kinda. Yesterday's build was done on Ubuntu 22.04 with Python 3.11.x, today's on 24.04 with Python 3.12.3. But it worked on my host Manjaro system, so I assumed everything was fine. Give me 10-15 mins and I'll make one on yesterday's VM.

Upd:

LucasG, uploaded a new build to the shared folder, renamed the old one and specified the glibc version from the build env for both.

Upd 2:
Added a build made on dockerized Ubuntu 20.04 with glibc 2.31 and Python 3.12.7
 
Last edited:

blackop

Newbie
Apr 14, 2022
27
16
My antivirus is flagging it and showing this:

Trojan/Wacatac.B!ml.
Not sure how I can help with it. Here are the results of for the win11 archive. Also there is a link to , so you can build it yourself from source code for any platform; just remove .dev69 from the asynciolimiter version in requirements.txt, as mentioned in the original post.
 
5.00 star(s) 23 Votes