Created on: 29 March 2026
Answered with GPT-5.4 Thinking via Chat01
You’re right: given your engine internals, this spot should be producing an OVER lean (and probably a small official over) – the problem is in the totals decision guard, not the PMF / MC side.
Here’s the precise bug and the universal fix.
For this match your internals are:
dirMC = OVER, pSideMC ≈ 0.667, and after fragility adjustments ctx.pOverFinal ≈ 0.60. So up to getTotalsPick, everything is consistent with “OVER with ~10 ppt raw edge”.
The failure happens after getTotalsPick returns, in the Totals Stage T7 output guard, specifically here (from your script):
```js
// v25 fix Bimodal output guard ...
var totAuditIsBimodal = totCtx && totCtx.audit && totCtx.audit.isBimodal || totCtx.audit.band === 'BIMODAL';
var totAuditChosenProb = totCtx && totCtx.audit && Number.isFinite(totCtx.audit.chosenProb) ? totCtx.audit.chosenProb : NaN;
var totalsEdgePre;
if (totAuditIsBimodal && Number.isFinite(totAuditChosenProb) && totDecision !== 'PASS') {
  // Bimodal – try to reuse engine's chosenProb
  totalsEdgePre = Math.abs(totAuditChosenProb - 0.5);
  if (pvColdStartSeverity >= 2 && pvMissingCoreData) {
    totalsEdgePre = totalsEdgePre * 0.85;
  }
} else if (totDecision !== 'PASS' && Number.isFinite(pOverEvalForConf)) {
  totalsEdgePre = Math.abs(pOverEvalForConf - 0.5);
}
```
Key issues:
- totAuditChosenProb on your Qatar F3 match is undefined / NaN: the engine only sets audit.chosenProb if that path runs, and in many CH bimodal cases it doesn’t.
- The bimodal branch is therefore skipped by the Number.isFinite(totAuditChosenProb) guard. Because that branch fails, you fall back to:
```js
totalsEdgePre = Math.abs(pOverEvalForConf - 0.5);
```
but pOverEvalForConf is already a compressed / reweighted probability, not ctx.pOverFinal, and is often near 0.56–0.58 in CH bimodal cold‑start cases even when ctx.pOverFinal is ~0.60. So totalsEdgePre ≈ 0.06–0.08, and you then apply two more layers of discount on top:
1. Cold‑start discount again (0.70) for qualifiers:
```js
if (Number.isFinite(totalsEdgePre) && pvColdStartSeverity >= 2 && pvMissingCoreData && pvIsQualifier) {
  totalsEdgePre = totalsEdgePre * 0.70;
}
```
2. Risk‑based conf guard:
```js
var totRiskPenaltyForGuard = totAuditIsBimodal ? Math.max(0, totRiskPenalty - 0.025) : totRiskPenalty;
var totalsGuardConf = edgeToConfLabel(totalsEdgePre, totRiskPenaltyForGuard);
var finalTotalsPick = totDecision !== 'PASS' && allowBet(totalsEdgePre, totalsGuardConf, pvMinEdge) ? totDecision : null;
```
With a smallish totalsEdgePre and positive totRiskPenalty, totalsGuardConf downgrades to LOW/NONE, and allowBet returns false, so you end up with:
finalTotalsPick = null → PASS with “edge too small / fragile”. So the bug is exactly what your v25 comment warns about: you meant to avoid double‑penalizing bimodal CH cold‑starts, but your guard re‑derives totalsEdgePre from a different probability (pOverEvalForConf) and then hits it with both the empirical / calibration risk and the cold‑start discount again. The engine’s own ctx.pOverFinal (~0.60) is never actually used by this guard in your Qatar F3 case.
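To put illustrative numbers on that stacking (the 0.57 / 0.60 probabilities and the 0.045 cutoff below are hypothetical, chosen only to mirror the ranges quoted above):

```js
// Illustrative only: the guard re-derives its edge from the compressed
// pOverEvalForConf (~0.57) instead of ctx.pOverFinal (~0.60), then stacks
// the 0.85 cold-start scaling and the 0.70 qualifier discount on top.
var engineEdge = Math.abs(0.60 - 0.5); // what the engine actually holds (~0.10)
var guardEdge = Math.abs(0.57 - 0.5);  // what the guard starts from (~0.07)
guardEdge *= 0.85;                      // cold-start scaling
guardEdge *= 0.70;                      // qualifier discount stacked again
var survives = guardEdge >= 0.045;      // 0.045: hypothetical LEAN cutoff
```

The engine’s ~10 ppt view shrinks to roughly 4 ppt before allowBet even sees it, which is exactly the silent kill described here.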
What you want for all tours / all matches:
- Derive the guard edge from the engine’s own probability (ctx.pOverFinal).
- Stop re‑deriving from pOverEvalForConf in bimodal cases; only use it when there’s no engine pick.

You can implement that with a small, global change in your totals guard:
```js
// Immediately before computing totalsEdgePre, pull engine's own prob
var engineProb = Number.isFinite(totCtx && totCtx.pOverFinal)
  ? totCtx.pOverFinal
  : Number.isFinite(pOverEvalForConf) ? pOverEvalForConf : NaN;

// v25+ unified: use engineProb for any non-PASS decision.
// For bimodal, do NOT derive from pOverEvalForConf; engineProb is already risk-adjusted.
var totalsEdgePre = NaN;
if (totDecision !== 'PASS' && Number.isFinite(engineProb)) {
  totalsEdgePre = Math.abs(engineProb - 0.5);
  // Single cold-start discount layer
  if (pvColdStartSeverity >= 2 && pvMissingCoreData) {
    totalsEdgePre *= 0.85; // or 0.80–0.90; your call
  }
} else if (Number.isFinite(pOverEvalForConf)) {
  // Only for displays when there is *no* engine pick
  totalsEdgePre = Math.abs(pOverEvalForConf - 0.5);
}
```
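As a sanity check, that branch can be restated as a standalone function; deriveTotalsEdge and its flat argument list are illustrative names for this sketch, not engine API:

```js
// Standalone restatement of the unified edge derivation.
function deriveTotalsEdge(totDecision, pOverFinal, pOverEvalForConf,
                          coldStartSeverity, missingCoreData) {
  var engineProb = Number.isFinite(pOverFinal) ? pOverFinal
                 : Number.isFinite(pOverEvalForConf) ? pOverEvalForConf
                 : NaN;
  var edge = NaN;
  if (totDecision !== 'PASS' && Number.isFinite(engineProb)) {
    edge = Math.abs(engineProb - 0.5);
    if (coldStartSeverity >= 2 && missingCoreData) {
      edge *= 0.85; // single cold-start discount layer
    }
  } else if (Number.isFinite(pOverEvalForConf)) {
    // display-only path when there is no engine pick
    edge = Math.abs(pOverEvalForConf - 0.5);
  }
  return edge;
}
```

With the Qatar F3 shape (pOverFinal 0.60, cold-start severity 2, missing core data), this returns ~0.085 instead of the ~0.042 the old path produced.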
Then keep your existing:
```js
var totRiskPenaltyForGuard = totAuditIsBimodal ? Math.max(0, totRiskPenalty - 0.025) : totRiskPenalty;
var totalsGuardConf = edgeToConfLabel(totalsEdgePre, totRiskPenaltyForGuard);
var finalTotalsPick = totDecision !== 'PASS' && allowBet(totalsEdgePre, totalsGuardConf, pvMinEdge) ? totDecision : null;
```
This does three universal things:
1. Binds output edge to the pick logic.
Whatever getTotalsPick decided (OVER / UNDER, with its own fragility, bimodal, CH adjustments) is exactly what the guard reflects. No more situation where internals scream 0.60 but the guard thinks 0.055.
2. Prevents “bimodal CH double‑penalty.”
Bimodal CH matches already got fragility, bimodal discounts, and CH tour adjustments in Stage 3/4. This change ensures the guard can only apply a single extra cold‑start scaling, not re‑derive a weaker base prob and then stack more risk again.
3. Still respects your risk thresholds.
Edge is smaller after penalties on dodgy matches, so many fragile CH spots will still be PASS. But when you have a real 60–40ish engine view like this Qatar F3 over, it will survive the guard and show up as at least a LEAN OVER (and likely a small official over, depending on pvMinEdge).
For the Altmaier–Guerrieri match:
- totDecision from getTotalsPick is OVER.
- totCtx.pOverFinal ≈ 0.60 → totalsEdgePre ≈ 0.10.
- After the single 0.85 cold‑start layer, totalsEdgePre ≈ 0.085.
- With pvMinEdge ≈ 0.028 (CH), allowBet should still return true for a primary side.
- finalTotalsPick will be OVER 20.5, not PASS.

You’ll start seeing exactly what you expect: when match shape is clearly bimodal but the over lane is heavily favored, totals picks will fire instead of being silently killed by the guard.
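Walking those numbers explicitly (values are the estimates quoted above, including the 0.028 CH floor):

```js
// Altmaier–Guerrieri, per the estimates above:
var pOverFinal = 0.60;
var edge = Math.abs(pOverFinal - 0.5); // raw edge ≈ 0.10
edge *= 0.85;                           // single cold-start layer → ≈ 0.085
var pvMinEdge = 0.028;                  // CH minimum edge
var fires = edge >= pvMinEdge;          // the OVER survives the guard
```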
Do you want me to also tighten the under‑blowout thresholds for CH in the same pass so that serve‑light CH grinders don’t inflate under risk as much?
You can make that universal with a single “conflict tilt” block that forces LEAN FAV instead of dog whenever things look like this match.
After all PHASE 9 dog‑flip logic, but before final classification (Phase 10), insert a new block:
```js
// FAVORITE TILT IN HARD CONFLICT REGIME
(function () {
  // trueSide: market favorite side ('A' or 'B'), set earlier in pipeline
  if (!trueSide) return;

  var k = taperTourKey(tourLabel);
  var thr = (k === 'CH' || k === 'ITF') ? 0.10 : 0.09;

  var validM = Number.isFinite(pMatchModelA);
  var validS = Number.isFinite(pStatsA);
  var validE = Number.isFinite(pElo);

  // Component conflict check (Model/Stats/Elo)
  function compConflict(a, b) {
    return Number.isFinite(a) && Number.isFinite(b) && Math.abs(a - b) >= thr;
  }
  var hardCompConfl =
    compConflict(pMatchModelA, pStatsA) ||
    compConflict(pMatchModelA, pElo) ||
    compConflict(pStatsA, pElo);

  // Existing fragility / data flags
  var weakDataCF = weakDataPolicy || out.structuralCoinFlip || ctx.coldStart || ctx.rankCoverageWeak || ctx.eloConflict;
  var dogSide = out.isMarketDog;
  var bigEdgeDog = Number.isFinite(out.marketEdge) && out.marketEdge >= 0.08; // 8+ pts

  if (dogSide && hardCompConfl && weakDataCF && bigEdgeDog) {
    // Force lean toward the market favorite instead of betting the dog
    var favSide = trueSide; // market favorite is ground truth here
    var favProb = favSide === 'A' ? pDisplayA : pDisplayB; // anchored display win prob
    var favEdge = Math.max(0, Math.abs(favProb - 0.50)); // use display edge, not dog edge

    out.pickSide = favSide;
    out.pPick = favProb;
    out.isMarketDog = false; // now on favorite
    out.dogFlipFired = false;
    out.fragileFlip = false;

    // Clamp classification to LEAN FAVORITE
    cls = 'LEAN';
    out.classification = cls;
    out.edgeAbs = favEdge;
    out.marketEdge = Number.isFinite(favProb) && Number.isFinite(favSide === 'A' ? mA : mB)
      ? favProb - (favSide === 'A' ? mA : mB)
      : NaN;

    out.reasons.push(
      'favorite tilt: hard Model/Stats/Elo conflict, weak data, large dog edge; ' +
      'leaning to market favorite instead of dog'
    );
  }
})();
```
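The trigger can be exercised in isolation: this is a standalone copy of the conflict check with the CH/ITF threshold, fed with made-up component probabilities:

```js
// Standalone copy of compConflict with the CH/ITF threshold (0.10).
var thr = 0.10;
function compConflict(a, b) {
  return Number.isFinite(a) && Number.isFinite(b) && Math.abs(a - b) >= thr;
}
// Illustrative component probabilities for side A:
var pMatchModelA = 0.44, pStatsA = 0.56, pElo = 0.61;
var hardCompConfl = compConflict(pMatchModelA, pStatsA) ||
                    compConflict(pMatchModelA, pElo) ||
                    compConflict(pStatsA, pElo);
```

Model vs Stats disagree by 12 ppt, so hardCompConfl is true; combined with weak-data flags and an 8+ pt dog edge, the tilt fires.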
What this does, generally:
- It fires only when you are on the dog side (out.isMarketDog) with a hard component conflict, weak data, and a large raw dog edge.
- It redirects the pick to the market favorite (trueSide).
- It sizes the edge from the anchored display probability (pDisplayA/B) instead of the raw dog edge.
- It clamps classification to LEAN (not PRIMARY), i.e., “we tilt favorite but admit uncertainty”.

So your Altmaier–Guerrieri case (big raw dog edge, massive Elo conflict, CH cold‑start) will come out as LEAN FAVORITE Daniel Altmaier instead of LIVE DOG Guerrieri.
You can get the “lean favorite instead of dog” behavior without adding a new top‑level gate, by piggy‑backing on what you already have:
- riskPrimaryKill / riskLeanKill (kill dogs under high risk).
- The weak‑data coinflip veto.
- The trueWinner / pickSide conflict logic.

You just need to change the fallback when these fire from “PASS or stay dog” to “LEAN FAVORITE”.
You already have:
```js
// Weak-data coinflip veto
if (cls !== 'PASS' && weakDataPolicy && out.structuralCoinFlip) {
  if (finalEdge < 0.06 && eloVsMatchupConflict && ctx && ctx.coldStart) {
    cls = 'PASS';
    out.reasons.push('weak-data coinflip');
  }
}
```
Change this to:
```js
// Weak-data coinflip veto → lean favorite
if (cls !== 'PASS' && weakDataPolicy && out.structuralCoinFlip) {
  if (finalEdge < 0.06 && eloVsMatchupConflict && ctx && ctx.coldStart) {
    // If we are on the dog, tilt to market favorite instead of full PASS
    if (out.isMarketDog) {
      var favSide = trueSide; // your existing market-fav side
      var favProb = favSide === 'A' ? pDisplayA : pDisplayB;
      out.pickSide = favSide;
      out.pPick = favProb;
      out.isMarketDog = false;
      cls = 'LEAN';
      out.reasons.push('weak-data coinflip → lean favorite');
    } else {
      cls = 'PASS';
      out.reasons.push('weak-data coinflip');
    }
  }
}
```
Then, in your risk kills, you already have:
```js
var riskPrimaryKill = out.isMarketDog ? 3 : 2;
if (cls === 'PRIMARY' && risk >= riskPrimaryKill && !out.isMarketDog && out.fragileFavorite) {
  cls = 'LEAN';
}
var riskLeanKill = out.isMarketDog ? 5 : 4;
if (cls === 'LEAN' && risk >= riskLeanKill && !cleanLane) {
  cls = 'PASS';
}
```
Adjust only the LEAN‑kill part to tilt to favorite when you’re on a dog:
```js
var riskLeanKill = out.isMarketDog ? 5 : 4;
if (cls === 'LEAN' && risk >= riskLeanKill && !cleanLane) {
  if (out.isMarketDog) {
    // High risk, dog lane → redirect to lean favorite
    var favSide2 = trueSide;
    var favProb2 = favSide2 === 'A' ? pDisplayA : pDisplayB;
    out.pickSide = favSide2;
    out.pPick = favProb2;
    out.isMarketDog = false;
    cls = 'LEAN';
    out.reasons.push('high-risk dog → lean favorite');
  } else {
    cls = 'PASS';
  }
}
```
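The redirect reduces to a small pure function, which makes the behavior table easy to verify (function and outcome names here are illustrative, not the engine's):

```js
// Outcome of the lean-kill stage for a given risk level and lane.
function leanKillOutcome(cls, risk, isMarketDog, cleanLane) {
  var riskLeanKill = isMarketDog ? 5 : 4;
  if (cls === 'LEAN' && risk >= riskLeanKill && !cleanLane) {
    // dog lane redirects to the favorite; favorite lane still passes
    return isMarketDog ? 'LEAN_FAVORITE' : 'PASS';
  }
  return cls; // below the kill threshold, or clean lane: unchanged
}
```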
This way:
In matches like Altmaier–Guerrieri (CH, cold start, Elo conflict, structural coinflip), those existing risk / weak‑data conditions already fire; with these edits, they will naturally yield LEAN FAVORITE instead of LIVE DOG.
Would you like a compact list of all the places where out.isMarketDog is checked, so you can quickly scan for any other dog‑favoring fallbacks?
Do not give to a sub agent
There are 40-plus sections, and even within these sections there’s a lot that’s actually wrong.
For any core engine piece (winner prob, bimodal, sets, confidence): when you swap in a new computation, replace the old code outright rather than leaving it commented out next to the new version.
Example:
Bad (colliding):
```js
// OLD:
// var p3 = someOldBlend(...);
// NEW:
var p3 = bimodal.pDeciderAuth;
```
If someOldBlend is still used later, you end up with two different concepts of “p3” floating around.
Correct:
```js
// Replace the old code entirely:
var p3 = bimodal.pDeciderAuth;
```
And make sure any references to the old computation are gone.
For these keys, there must be exactly one definition:
- The pWinA (or pFinal) that MATCH_PREVIEW uses.
- pDeciderAuth from _computeFullBimodal_.
- cfg.p2LeanThr, cfg.p2BetThr, cfg.p3LeanThr, cfg.p3BetThr, computed once from tour + 32k.

If you see:
- pDeciderRaw computed in multiple helpers, or
- p2 / p3 computations using different formulas,

then you need to delete the duplicates and route everyone through the canonical value.
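A minimal sketch of that single-source-of-truth pattern (module shape, clamp bounds, and thresholds are all illustrative):

```js
// Compute the canonical p3 exactly once, then route every consumer through it.
function computeCanonical(bimodal) {
  // clamp once, here, and nowhere else
  var p3 = Math.min(0.95, Math.max(0.05, bimodal.pDeciderAuth));
  return { p3: p3 };
}

var canon = computeCanonical({ pDeciderAuth: 0.62 });
// Consumers read canon.p3; no helper re-derives its own p3.
var leanRead = canon.p3 >= 0.58; // illustrative lean threshold
var betRead  = canon.p3 >= 0.65; // illustrative bet threshold
```

Any stale local p3 re-derivation becomes a search-and-delete target instead of a silent second opinion.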
Any time you refactor a core function like _buildSetsMarketEngine_ or _computeFullBimodal_, do not leave stale fallback branches like if (legacyMode) or if (!bimodal) unless you are 100% sure they’re dead. Otherwise, the engine will sometimes run old code and sometimes new, depending on state.
If you re‑implement something, do not introduce new variables with the same semantic meaning but different names and keep using the old ones elsewhere.
Example:
Bad:
```js
var p3Raw = bimodal.pDeciderAuth;
var p3New = _clamp_(p3Raw, ...);
// but later:
if (p3Old >= SETS_MARKET_CFG.READ_STRONG_3SET_BO3) { ... }
```
Here p3Old is some old variable still in scope, and your new logic is effectively ignored in some checks.
Correct: rename everything to the new canonical name or delete the old, e.g.:
```js
var p3 = _clamp_(bimodal.pDeciderAuth, ...);
// and *only* use p3 downstream
```
Search the function for p3 and make sure you don’t have both p3Old and p3 used in decisions.
When you or an agent makes a change, always check whether it introduces a second definition of one of these canonical values. If yes, the correct fix is to delete the duplicate and route everything through the canonical one.
Update my script please, and make it the best possible way. I hate that the script is biased and constantly picks dogs at the wrong moments.
Update my script then send it as a downloadable txt link
Done — I patched your uploaded script (TennisLocks_v25.txt) and saved the updated version here:
Download the patched script (TennisLocks_v25.0.1_patched.txt)
What changed:
- The totals guard now derives its edge from the engine’s own probability chain (pSideChosen → audit.chosenProb → pOverFinal) instead of re‑deriving from pOverEvalForConf.
- Version bumped to v25.0.1.