# 1. Winner-probability stack (inputs to `MATCH_PREVIEW`)
Created: March 28, 2026
Answered with GPT-5.4 Thinking (Chat01)
To fix the core win probability, we must locate and correct exactly how probabilities from simulation, stats, and Elo are blended, how the market anchor is applied, and how opponent-quality adjustments are capped. Follow these steps in the script:
Find the blending function (winner probability): In your code (likely in MATCH_PICK or a similar decision helper), locate the block where pFinal is first set to 0.5 and then weighted by model/stats/Elo (you’ll see THRESHOLDS.BLEND_W_MODEL, BLEND_W_STATS, BLEND_W_ELO used). It typically starts like:
```js
var pFinal = 0.5;
var _hasElo = (Number.isFinite(pElo) && pElo > 0 && pElo < 1);
if (_hasElo) {
  var _wM = THRESHOLDS.BLEND_W_MODEL;
  var _wS = THRESHOLDS.BLEND_W_STATS;
  var _wE = THRESHOLDS.BLEND_W_ELO;
  // … final blend happens here
}
```
Within this if (_hasElo) block, replace the existing blend logic with the exact weighted-average pattern:
```js
var wM = THRESHOLDS.BLEND_W_MODEL; // 0.42
var wS = THRESHOLDS.BLEND_W_STATS; // 0.42
var wE = THRESHOLDS.BLEND_W_ELO;   // 0.16
var wSum = wM + wS + wE;
if (wSum <= 0) { wM = 0.5; wS = 0.5; wE = 0; wSum = 1; }
var pBlendA = (wM * pSimA + wS * pKernelA + wE * pEloA) / wSum;
pBlendA = clamp01(pBlendA);
```
Here pSimA is the simulation-based win-prob, pKernelA is the stats/kernel-based win-prob, and pEloA is the Elo-based prob. Use these variables (or their equivalents in your code) in place of your current computation. This enforces the 0.42/0.42/0.16 weights and clamps the result between 0 and 1.
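As a sanity check, the blend can be exercised standalone. The input probabilities below are invented for illustration, and `clamp01` is restated so the snippet runs on its own:

```javascript
// Standalone sketch of the 0.42/0.42/0.16 blend with hypothetical inputs.
function clamp01(p) { return Math.min(1, Math.max(0, p)); }

var pSimA = 0.64;    // simulation win-prob (illustrative)
var pKernelA = 0.60; // stats/kernel win-prob (illustrative)
var pEloA = 0.55;    // Elo win-prob (illustrative)

var wM = 0.42, wS = 0.42, wE = 0.16;
var wSum = wM + wS + wE;
var pBlendA = clamp01((wM * pSimA + wS * pKernelA + wE * pEloA) / wSum);
// 0.42*0.64 + 0.42*0.60 + 0.16*0.55 = 0.2688 + 0.2520 + 0.0880 ≈ 0.6088
```

Note how the Elo term moves the result only slightly; the 0.16 weight keeps Elo as a tiebreaker rather than a driver.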
Remove any alternative overrides: After blending, make sure no other assignment overrides this probability. Search for any later lines like pFinal = pMarkovA, pFinal = pEloA, or similar. These should be removed or treated as logging only. The final win probability (pFinal or whatever your script uses) must come only from the weighted blend (and subsequent anchor/damp steps), not replaced by a single source.
Verify clamp and defaults: Ensure that after blending you do pBlendA = clamp01(pBlendA). If your code uses clamp01 or a similar helper, keep it. Also check the zero-weight fallback: if wM + wS + wE is ever zero, set the weights to 0.5/0.5/0 as above to avoid NaN.
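If `clamp01` is missing from your script, a minimal version would look like this (the 0.5 fallback for non-finite input is my assumption, not something from your code):

```javascript
// Minimal clamp helper: pin a probability into [0, 1].
function clamp01(p) {
  if (!Number.isFinite(p)) return 0.5; // assumed fallback for NaN/Infinity
  return Math.min(1, Math.max(0, p));
}
```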
In the same decision function, find the market anchor block. It usually follows the blend, and will have variables like marketDivergence_pick, pull_pick, etc. For example, you might see code that computes an implied market probability (often from devigOdds) and then applies a “pull” to the blend.
Swap the divergence sign: The current code may compute marketDivergence = pMktA - pBlendA. Invert this so that diff = pBlendA - pMktA (model minus market). In other words, wherever you see something like
```js
marketDivergence_pick = pMktA - pBlendA;
```
change it to
```js
var diff = pBlendA - pMktA;
```
Then compute the sign from it: `sign = (diff > 0) ? 1 : (diff < 0) ? -1 : 0`.
Apply capped pull in correct direction: Use a hard cap (8% = 0.08) on the pull and move toward the market without crossing it. The exact replacement code should be:
```js
var maxPull = 0.08;
var pull = Math.min(Math.abs(diff), maxPull);
var anchored = pBlendA - pull * sign;
// Prevent crossing: if the pull crosses the market, set to the market prob
if ((pBlendA > pMktA && anchored < pMktA) || (pBlendA < pMktA && anchored > pMktA)) {
  anchored = pMktA;
}
pFinal = clamp01(anchored);
```
In summary: pull = min(|model–market|, 0.08), then subtract that amount (in the sign direction) from the model probability. This ensures favorites are not pulled past fair odds.
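Pulled together, the anchor step behaves like this. The function name `anchorToMarket` and the sample inputs are mine, for illustration only:

```javascript
// Capped, direction-aware market anchor (illustrative helper name).
function anchorToMarket(pBlendA, pMktA, maxPull) {
  var diff = pBlendA - pMktA;                      // model minus market
  var sign = (diff > 0) ? 1 : (diff < 0) ? -1 : 0;
  var pull = Math.min(Math.abs(diff), maxPull);
  var anchored = pBlendA - pull * sign;            // move toward the market
  // Never cross the market price.
  if ((pBlendA > pMktA && anchored < pMktA) ||
      (pBlendA < pMktA && anchored > pMktA)) {
    anchored = pMktA;
  }
  return Math.min(1, Math.max(0, anchored));
}

// anchorToMarket(0.75, 0.60, 0.08) → ≈0.67 (pull capped at 8 points)
// anchorToMarket(0.62, 0.60, 0.08) → ≈0.60 (small gaps land on the market)
```

The second case shows why the no-crossing guard matters: without it, rounding could nudge the result past the market price.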
Check for existing pull logic: If your code uses something like _dir_pull_pick or other constructs, ensure its logic matches the pattern above. For example, if you see code doing pFinal = clamp01(pFinal + pull * dir), verify that dir is computed as (pBlendA > pMktA ? -1 : +1) or similar. The key is that a positive diff (model > market) must move the probability down toward the market.
Audit any anchor flags: Some code may set flags like marketAnchorApplied. These can remain (they may be for auditing), but make sure they reflect the new logic. The important fix is the sign and clamp logic above.
Find the function that applies opponent-quality boosts to stats, usually named _adjustStatsByOppQuality_ (or similar). This function will adjust hold %, serve/return points, etc., based on how weak the opponent is. We need to cap how much those stats can shift (in logit space) so that beating weak foes doesn’t inflate win-prob too much.
Inside _adjustStatsByOppQuality_, after it computes an adjusted stat (e.g. adjHold, adjSPW, adjRPW or similar) from the raw stat, do the following for each rate: hold %, SPW, RPW, etc. Instead of directly assigning the adjusted value, do:
```js
// Example for a single adjusted rate
var rawRate = /* original stat from stats object */;
var adjRate = /* the computed adjusted stat before capping */;
var logRaw = _logit_(rawRate);
var logAdj = _logit_(adjRate);
var delta = logAdj - logRaw;
var maxShift = 0.35; // corresponding to about ±8–10 percentage points
if (delta > maxShift) delta = maxShift;
if (delta < -maxShift) delta = -maxShift;
var finalRate = _sigmoid_(logRaw + delta);
// Now overwrite the stat with finalRate
stats.hold = finalRate; // or stats.SPWin = finalRate, etc.
```
In practice, locate where the code says something like stats.hold = someValue; (or sets SPW, RPW). Replace that assignment with the logit-based cap. In each case use the _logit_ and _sigmoid_ functions (which should be defined in the script) to translate to log-odds, cap the change (delta), and convert back. This ensures no stat moves more than about 8–10 percentage points up or down from the raw value.
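The cap is easiest to see numerically. Below is a minimal logit/sigmoid pair matching the assumed `_logit_`/`_sigmoid_` signatures, plus one worked case; the 0.65 → 0.80 boost is invented for illustration:

```javascript
// Log-odds helpers (assumed to match the script's _logit_/_sigmoid_).
function _logit_(p) { return Math.log(p / (1 - p)); }
function _sigmoid_(x) { return 1 / (1 + Math.exp(-x)); }

var rawRate = 0.65;  // e.g. raw hold % (hypothetical)
var adjRate = 0.80;  // opponent-quality boost before capping (hypothetical)
var maxShift = 0.35;

var delta = _logit_(adjRate) - _logit_(rawRate); // ≈ 0.77, over the cap
if (delta > maxShift) delta = maxShift;
if (delta < -maxShift) delta = -maxShift;
var finalRate = _sigmoid_(_logit_(rawRate) + delta);
// finalRate ≈ 0.725: the 15-point boost is held to roughly 8 points
```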
This part fixes how the script handles 3-set matches (“deciders”) and ensures consistency between totals, sets, and coinflip language.
In _computeFullBimodal_(…), the code calculates a “decider probability” (pDeciderAuth or pDeciderDisplay). We must use that one value everywhere instead of recomputing or blending it again later.
In _computeFullBimodal_, ensure you return a single field named something like pDeciderAuth (often already there or as pDeciderDisplay) as part of the result object. For example, append:
```js
return {
  // … existing fields …
  pDeciderAuth: pDeciderValue,
  // … other fields …
};
```
(Replace pDeciderValue with whatever your code currently uses as the authoritative decider prob.)
In _computeDeciderSetProb_(sim, 3), instead of recomputing from setScores or holds, use the bimodal result if available. For example, if that function has access to the bimodal object, have it return bimodal.pDeciderAuth (clamped between any min/max if needed). If not easily possible, at least avoid blending p2 vs p3 there. The goal is to rely on the already-calculated decider.
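A minimal sketch of that preference order, assuming the bimodal object can be threaded into `_computeDeciderSetProb_` as an extra parameter (both the parameter and the 0.5 fallback are my assumptions, not your current signature):

```javascript
// Prefer the canonical decider prob; only fall back when it is unavailable.
function _computeDeciderSetProb_(sim, bestOf, bimodal) {
  if (bimodal && Number.isFinite(bimodal.pDeciderAuth)) {
    return Math.min(1, Math.max(0, bimodal.pDeciderAuth)); // clamp to [0, 1]
  }
  // Fallback: keep whatever recomputation the script already does here.
  return 0.5; // placeholder for this sketch only
}
```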
In _buildSetsMarketEngine_(ctx), find where it sets up BO3 config. Locate the lines where p3Raw is defined. Change it to use the bimodal auth value. For example:
```js
// Before: maybe computed from DISP/RAW/HOLD, etc.
var p3Raw = bimodal.pDeciderAuth; // use the canonical decider
```
Then clamp it into empirical tour bounds:
```js
var tourK = tourInfo.tour; // e.g. 'ATP', 'WTA', 'CH', 'ITF'
var cfgT = SETS_MARKET_CFG.TOUR[tourK] || SETS_MARKET_CFG.TOUR.DEFAULT;
var p3 = _clamp_(p3Raw, cfgT.minP3, cfgT.maxP3);
var p2 = 1 - p3;
```
This ensures the same decider probability drives all decisions (total points language, coinflip mentions, sets pick).
In the BO3 section of _buildSetsMarketEngine_, after you load the 32k-match calibration (var cal = get32kCalibration(tourK); this yields cal.mvtDecHigh, cal.mvtDecMid, and cal.mvtDecLow), set the pick thresholds from it. For example, right after retrieving cal, insert:
```js
// Use the 32k calibration to set BO3 pick thresholds
cfg.p3BetThr = Math.max(cfgT.maxP3 - 0.02, cal.mvtDecHigh - 0.02);
cfg.p3LeanThr = Math.max(cfgT.baseP3 + 0.06, cal.mvtDecMid);
cfg.minDeciderThr = cal.mvtDecLow - 0.03;
cfg.p2LeanThr = 1 - cfg.p3LeanThr;
cfg.p2BetThr = 1 - cfg.p3LeanThr + 0.04;
```
This replaces any hard-coded 0.60/0.54/0.44/0.38 thresholds. After this, make sure the logic uses cfg.p3BetThr, cfg.p3LeanThr, cfg.p2BetThr, cfg.p2LeanThr for deciding bets/leans. In the decision rules (like where you compute can2Bet, can2Lean, can3Bet, can3Lean), it should now look like:
```js
var can2Bet = !deciderVeto2 && (p2Final >= cfg.p2BetThr + riskPenalty2) &&
              sup2Count >= 2 && (agreementLevel !== 'DISAGREE' || p2Final >= 0.66);
var can2Lean = !deciderVeto2 && (p2Final >= cfg.p2LeanThr + riskPenalty2) &&
               sup2Count >= 1;
var can3Bet = !hardSweep3Kill && !minDecider3Kill &&
              (p3Final >= cfg.p3BetThr + riskPenalty3) && sup3Count >= 2 &&
              (agreementLevel !== 'DISAGREE' || p3Final >= 0.66);
var can3Lean = !hardSweep3Kill && !minDecider3Kill &&
               (p3Final >= cfg.p3LeanThr + riskPenalty3) && sup3Count >= 1;
```
(Here the riskPenaltyN variables are your existing risk penalties.) The key is to use the cfg thresholds instead of any fixed constants (SETS_MARKET_CFG.READ_*). This integrates the calibration.
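With made-up calibration numbers, the wiring produces thresholds like these. Every value below is hypothetical, not your real tour config or 32k calibration:

```javascript
// Hypothetical tour config and calibration, for illustration only.
var cfgT = { maxP3: 0.62, baseP3: 0.40 };
var cal = { mvtDecHigh: 0.58, mvtDecMid: 0.47, mvtDecLow: 0.36 };

var cfg = {};
cfg.p3BetThr = Math.max(cfgT.maxP3 - 0.02, cal.mvtDecHigh - 0.02); // ≈ 0.60
cfg.p3LeanThr = Math.max(cfgT.baseP3 + 0.06, cal.mvtDecMid);       // ≈ 0.47
cfg.minDeciderThr = cal.mvtDecLow - 0.03;                          // ≈ 0.33
cfg.p2LeanThr = 1 - cfg.p3LeanThr;                                 // ≈ 0.53
cfg.p2BetThr = 1 - cfg.p3LeanThr + 0.04;                           // ≈ 0.57
```

Notice that each threshold is the tighter of the tour bound and the calibration value, so a stricter calibration can only raise the bar for a pick, never lower it below the tour defaults.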
To prevent low-confidence “BET 2 SETS” picks on matches that are mixed or fragile, add a veto flag.
In _computeFullBimodal_, before returning its result, add:
```js
var vetoSets = false;
// Veto if the match is BIMODAL or FRAGILE with a moderate decider probability
if ((shape === 'BIMODAL' || hookFlag === 'FRAGILE') &&
    pDeciderAuth > 0.30 && pDeciderAuth < 0.45) {
  vetoSets = true;
}
// Return this in the object
return {
  // … existing fields …
  vetoSets: vetoSets
};
```
In _buildSetsMarketEngine_, after you determine the final pick and conf for the sets market (2 sets vs 3 sets), add:
```js
if (bimodal.vetoSets && conf !== 'HIGH') {
  pick = '';
  conf = 'NONE';
}
```
This means that if vetoSets is true (i.e. a mixed match) and our confidence isn't very high, the sets pick is cancelled. Now "BET 2 SETS" won't appear on fragile/mismatched decider probabilities.
Finally, clean up how confidence labels (“HIGH”, “MID”, “LOW”) are assigned, and add a final guard against weak edges.
Find where you assign a confidence string based on the winner edge (often using something like riskScore * 0.025). Replace that with a function of the actual probability edge:
Define a helper function, e.g. near the top of your script or in a utils area:
```js
function confFromEdge(edge, riskPenalty) {
  var effEdge = Math.max(0, edge - riskPenalty);
  if (effEdge >= 0.10) return 'HIGH';
  if (effEdge >= 0.06) return 'MID';
  if (effEdge >= 0.03) return 'LOW';
  return 'NONE';
}
```
Then use it consistently:
Winner: the edge is the model-vs-market gap. Compute `winnerEdge = Math.abs(pModelA - impliedMarketProbA)`, then `winnerConf = confFromEdge(winnerEdge, riskScore * 0.025)`.
Totals: `edge = |pOver - 0.5|` (distance from 50%).
Sets: `edge = Math.abs(p3Final - 0.5)` (for the 3-set bet, or similarly for 2-set), then `setsConf = confFromEdge(setsEdge, riskPenalty)`.
Remove any hard-coded "if edge > X then HIGH" logic. Now all confidences come from this unified mapping.
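The mapping can be exercised standalone; the helper is restated here so the snippet runs on its own, and the 0.58/0.50 and 0.62 probabilities are invented inputs:

```javascript
// Unified edge-to-confidence mapping, as described above.
function confFromEdge(edge, riskPenalty) {
  var effEdge = Math.max(0, edge - riskPenalty);
  if (effEdge >= 0.10) return 'HIGH';
  if (effEdge >= 0.06) return 'MID';
  if (effEdge >= 0.03) return 'LOW';
  return 'NONE';
}

var winnerEdge = Math.abs(0.58 - 0.50);                  // 0.08 raw edge
var winnerConf = confFromEdge(winnerEdge, 0.025);        // penalty drops it to 'LOW'
var setsConf = confFromEdge(Math.abs(0.62 - 0.5), 0.0);  // 0.12 edge → 'HIGH'
```

The risk penalty only ever demotes a pick; it never promotes one, which is the point of subtracting it before the tier checks.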
At the very end of your MATCH_PREVIEW function (just before assembling the output text), insert a universal guard. Define a function:
```js
function _guardPick(pSide, edge, confLabel) {
  if (!pSide) return '';
  if (!Number.isFinite(edge) || edge < THRESHOLDS.MIN_PICK_CONF) return '';
  if (confLabel === 'NONE') return '';
  return pSide;
}
```
Then apply it before printing each pick. For example, replace direct usage of winnerPick, totalsPick, setsPick with guarded versions:
```js
var finalWinnerPick = _guardPick(winnerSide, winnerEdge, winnerConf);
var finalTotalsPick = _guardPick(totalsSide, totalsEdge, totalsConf);
var finalSetsPick = _guardPick(setsSide, setsEdge, setsConf);
```
And in the preview output, use finalWinnerPick etc. If _guardPick returns '', it means omit the “BET” line for that market entirely (leave just neutral commentary). This ensures no weak edges (below MIN_PICK_CONF) slip through on any tour or market.
To systematically apply these changes, do them in this sequence:
1. In `_computeFullBimodal_`, add the `vetoSets` flag at the end; in `_buildSetsMarketEngine_`, insert the veto check after picks are chosen.
2. In `_buildSetsMarketEngine_` (BO3 path only):
   - Set `var p3Raw = bimodal.pDeciderAuth;` and clamp it as shown.
   - After `cal32k = get32kCalibration(tourK)`, assign `cfg.p3BetThr`, `cfg.p3LeanThr`, `cfg.minDeciderThr`, `cfg.p2LeanThr`, and `cfg.p2BetThr` using the formulas above.
   - Make sure the `can2Bet`/`can3Bet`/etc. logic uses these cfg thresholds (with risk penalties), not the old constants.
   - Apply the `vetoSets` logic to clear the pick if needed (as in the veto step above).
3. In `MATCH_PREVIEW`, before printing picks, define `_guardPick` and wrap each market's pick. Replace any direct output of `winnerSide`/`totalsSide`/`setsSide` with the guarded result.

After these fixes, your preview text will align with the corrected probabilities and thresholds. Each change above should be made by directly editing the specified functions in your script. Double-check each section for typos, and test on a known example (like the Altmaier–Guerrieri match) to ensure "BET 2 SETS" no longer appears erroneously.
Update my script, then send it as a downloadable txt. Do not write new code on top of the old code; otherwise the old code will still run. You have to replace the old code.
I updated the script by replacing the target blocks in your uploaded file, not by appending duplicate logic on top of the old paths. The updated file is based on your uploaded script (TennisLocks_v22_FIXED.gs.txt).
I replaced the core winner blend, market anchor, canonical decider flow, BO3 threshold wiring, veto handling, confidence mapping, opponent-quality caps, and final preview pick guards. I did not fully runtime-test it in Google Apps Script, so treat it as a patched source file ready for your own validation pass.