The FPS Engineering Methodology.
A defined four-stage process applied to every engagement. Forensic measurement before any intervention. Validation after every change. No assumptions, no guesswork.
Performance engineering is not optimisation by intuition. It is a structured process of measurement, controlled change, and empirical validation. Every decision is data-driven. Every change is verified against benchmark output before it is kept.
This removes the guesswork that makes most "optimisation" advice either ineffective or actively harmful. We do not apply a list of tweaks. We apply a methodology.
Forensic Baseline Measurement
Before any configuration is changed, the system's current state is fully documented. This creates the baseline against which every subsequent intervention is measured. Without a baseline, there is no way to determine whether a change helped, harmed, or had no effect.
The baseline captures frametime data at the workload level — not synthetic benchmarks, but real-world performance in the games, applications, or workflows that matter to the client.
A GPU running at 65% utilisation while the system is performance-limited is not a GPU problem. It is a CPU, configuration, or thermal problem. Without measurement, this misdiagnosis leads to hardware upgrades that do not resolve the actual constraint. Baseline measurement prevents this.
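As a sketch of the kind of headline metrics a baseline frametime recording yields, the snippet below derives average FPS and a 1% low figure from per-frame frametimes. The trace is synthetic and the percentile definition (FPS implied by the 99th-percentile frametime) is one common convention, not necessarily the exact formula any particular tool uses.

```python
import statistics

def frametime_metrics(frametimes_ms):
    """Return (average FPS, 1% low FPS) for a frametime trace.

    '1% low' here is the FPS implied by the 99th-percentile
    frametime -- one common definition among several.
    """
    avg_fps = 1000 / statistics.mean(frametimes_ms)
    p99_ms = sorted(frametimes_ms)[int(len(frametimes_ms) * 0.99)]
    return avg_fps, 1000 / p99_ms

# Hypothetical trace: mostly 7 ms frames with occasional 25 ms spikes.
trace = [7.0] * 990 + [25.0] * 10
avg, low = frametime_metrics(trace)
# The spikes barely move the average but dominate the 1% low.
```

The gap between the two numbers is the signal: a healthy average with a collapsed 1% low points at stutter, scheduling, or thermal problems rather than raw horsepower.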
Controlled Refinement
Changes are applied one intervention at a time, in order of risk-adjusted impact. Highest expected gain, lowest risk — first. This ensures that each change's individual effect is measurable, and that any instability introduced can be traced to a specific intervention and reverted.
Every change is explained to the client before it is applied. The reasoning, expected effect, risk level, and reversion method are all stated. Nothing happens as a black box.
BIOS configuration
XMP/EXPO profile activation, Resizable BAR, fan curve profiles, power limits. High impact, low risk, immediately reversible.
OS and power management
Power plan, process scheduling, background service reduction, Game Mode, hardware-accelerated GPU scheduling.
GPU driver configuration
NVIDIA Control Panel or AMD Software settings: power management mode, shader cache, anisotropic filtering, frame pacing controls.
Game and application settings
In-game API selection, frame cap configuration, resolution scaling, VRR setup. Aligned with the specific workload and hardware ceiling.
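The risk-adjusted ordering described above can be sketched as a simple sort. The intervention names echo the categories listed here, but the gain and risk scores are invented for illustration; in practice they come from the baseline data and the specific hardware.

```python
# Illustrative scores only -- real values are derived per system.
interventions = [
    {"name": "XMP/EXPO profile",        "expected_gain": 8, "risk": 1},
    {"name": "Power plan",              "expected_gain": 3, "risk": 1},
    {"name": "GPU driver settings",     "expected_gain": 4, "risk": 2},
    {"name": "In-game frame cap / VRR", "expected_gain": 5, "risk": 1},
]

# Highest expected gain first; ties broken by lowest risk.
queue = sorted(interventions,
               key=lambda i: (-i["expected_gain"], i["risk"]))
```

Working through the queue one entry at a time is what keeps each change's effect attributable: if instability appears, the most recent intervention is the suspect.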
Thermal and Stability Validation
After each intervention block, the system is validated under representative load. Temperature headroom is confirmed against throttle thresholds. Memory stability is checked. Frametime consistency is verified before the next intervention proceeds.
Thermal throttling is one of the most common and least visible causes of performance degradation. A CPU throttling at 95°C under sustained gaming load may show little sign of it in average-FPS monitoring, but the 1% lows reveal it clearly in frametime analysis.
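The masking effect described above can be shown with two synthetic traces that share the same average FPS but differ sharply in their 1% lows. The numbers are invented for illustration, and the 1%-low definition (FPS implied by the 99th-percentile frametime) is one common convention.

```python
import statistics

def one_percent_low(frametimes_ms):
    # FPS implied by the 99th-percentile frametime.
    p99_ms = sorted(frametimes_ms)[int(len(frametimes_ms) * 0.99)]
    return 1000 / p99_ms

def average_fps(frametimes_ms):
    return 1000 / statistics.mean(frametimes_ms)

# Two hypothetical traces with identical average frametime (8 ms):
steady    = [8.0] * 1000                # consistent pacing
throttled = [7.0] * 950 + [27.0] * 50   # periodic throttle spikes

# average_fps(steady) == average_fps(throttled) == 125 FPS,
# yet the throttled trace's 1% low collapses to ~37 FPS.
```

This is why the methodology validates with frametime percentiles rather than an FPS counter: two systems can report the same average while delivering very different experiences.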
A system restore point is created before any changes are applied. If any intervention introduces instability — crashes, frametime regression, or thermal problems — it is reverted immediately. Stability is a prerequisite for any further optimisation work.
Empirical Verification
The final stage is identical to the first: a CapFrameX frametime recording under the same conditions as the baseline. The before-and-after data are compared across every relevant metric. If the numbers do not improve, the change is reverted.
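A minimal sketch of that keep-or-revert decision, comparing baseline and post-change recordings metric by metric. The metric names, values, and the 1% regression tolerance are illustrative assumptions, not fields from any specific tool's report.

```python
# Hypothetical before/after metrics (higher is better for all three).
baseline = {"avg_fps": 132.4, "p1_low_fps": 88.1, "p0_2_low_fps": 61.0}
after    = {"avg_fps": 141.9, "p1_low_fps": 97.6, "p0_2_low_fps": 70.2}

def verdict(baseline, after, tolerance=0.01):
    """Keep a change only if no metric regresses beyond the tolerance.

    A change that lifts the average but hurts the lows is a net loss
    for frametime consistency, so any regression triggers a revert.
    """
    regressions = [name for name in baseline
                   if after[name] < baseline[name] * (1 - tolerance)]
    return ("keep" if not regressions else "revert"), regressions

decision, regressed = verdict(baseline, after)
```

Running every kept change through this gate is what turns the final report into evidence rather than assertion.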
The comparison is delivered as part of the written report — not as a claim, but as measurable evidence. This is what separates structured engineering from intuition-based optimisation.
What we will not do.
Regardless of potential gain.
The methodology applies in full to every engagement. There are no shortcuts by tier.