Engineering Framework

The FPS Engineering Methodology.

A defined four-stage process applied to every engagement. Forensic measurement before any intervention. Validation after every change. No assumptions, no guesswork.

Core principle

Performance engineering is not optimisation by intuition. It is a structured process of measurement, controlled change, and empirical validation. Every decision is data-driven. Every change is verified against benchmark output before it is kept.

This removes the guesswork that makes most "optimisation" advice either ineffective or actively harmful. We do not apply a list of tweaks. We apply a methodology.

What this is not
Tweak packs or batch scripts applied blindly
Disabling security features for marginal gains
Reckless overvolting or aggressive overclocking
Registry modification without validation
Promises based on YouTube tutorials
Changes that can't be cleanly reverted
01
Measurement

Forensic Baseline Measurement

Before any configuration is changed, the system's current state is fully documented. This creates the baseline against which every subsequent intervention is measured. Without a baseline, there is no way to determine whether a change helped, harmed, or had no effect.

The baseline captures frametime data at the workload level — not synthetic benchmarks, but real-world performance in the games, applications, or workflows that matter to the client.

Baseline data captured
CapFrameX frametime recording — average FPS, 1% lows, 0.1% lows, frametime variance
HWiNFO hardware monitoring — CPU and GPU temperatures, clock speeds, power draw, utilisation
Configuration snapshot — BIOS settings, XMP status, power plan, driver versions, overlay load
Stability assessment — crash history, throttling indicators, frametime consistency
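The frametime-derived metrics above can be illustrated with a short calculation. This is a minimal sketch in plain Python, not CapFrameX's own analysis pipeline; the "slowest 1% averaged" definition of 1% lows is one common convention, and the sample data is invented for illustration.

```python
import statistics

def frametime_metrics(frametimes_ms):
    """Derive headline metrics from a list of per-frame render times in ms."""
    fps = sorted(1000.0 / ft for ft in frametimes_ms)  # per-frame FPS, ascending
    n = len(fps)

    def low(pct):
        # One common convention: average the slowest pct% of frames,
        # expressed as FPS. (Percentile-based definitions also exist.)
        k = max(1, int(n * pct / 100))
        return sum(fps[:k]) / k

    return {
        "avg_fps": 1000.0 * n / sum(frametimes_ms),  # time-weighted average
        "1%_low": low(1),
        "0.1%_low": low(0.1),
        "frametime_variance_ms2": statistics.pvariance(frametimes_ms),
    }

# Illustrative run: 1000 frames, mostly 5 ms (200 FPS) with ten 12.5 ms stutters
sample = [5.0] * 990 + [12.5] * 10
m = frametime_metrics(sample)
# Average stays near 197 FPS, but the 1% low collapses to 80 FPS -
# exactly the kind of gap averages hide and frametime analysis reveals.
```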
Why this matters

A GPU running at 65% utilisation while the system is performance-limited is not a GPU problem. It is a CPU, configuration, or thermal problem. Without measurement, this misdiagnosis leads to hardware upgrades that do not resolve the actual constraint. Baseline measurement prevents this.

02
Refinement

Controlled Refinement

Changes are applied one intervention at a time, in order of risk-adjusted impact. Highest expected gain, lowest risk — first. This ensures that each change's individual effect is measurable, and that any instability introduced can be traced to a specific intervention and reverted.

Every change is explained to the client before it is applied. The reasoning, expected effect, risk level, and reversion method are all stated. Nothing happens as a black box.

Intervention categories (in priority order)
A

BIOS configuration

XMP/EXPO profile activation, Resizable BAR, fan curve profiles, power limits. High impact, low risk, immediately reversible.

B

OS and power management

Power plan, process scheduling, background service reduction, Game Mode, hardware-accelerated GPU scheduling.

C

GPU driver configuration

NVIDIA Control Panel or AMD Software settings: power management mode, shader cache, anisotropic filtering, frame pacing controls.

D

Game and application settings

In-game API selection, frame cap configuration, resolution scaling, VRR setup. Aligned with the specific workload and hardware ceiling.

03
Validation

Thermal and Stability Validation

After each intervention block, the system is validated under representative load. Temperature headroom is confirmed against throttle thresholds. Memory stability is checked. Frametime consistency is verified before the next intervention proceeds.

Thermal throttling is one of the most common and least visible causes of performance degradation. A CPU throttling at 95°C under sustained gaming load will show no indication of this in standard FPS monitoring — but the 1% lows will reveal it clearly in frametime analysis.

Thermal headroom to throttle threshold
<5°C — active throttling likely under peak load
<10°C — thermal headroom marginal
15°C+ — adequate headroom for stable boost
Stability protocol

A system restore point is created before any changes are applied. If any intervention introduces instability — crashes, frametime regression, or thermal problems — it is reverted immediately. Stability is a prerequisite for any further optimisation work.

04
Verification

Empirical Verification

The final stage is identical to the first: a CapFrameX frametime recording under the same conditions as the baseline. The before and after data is compared across every relevant metric. If the numbers don't improve, the change is reverted.

The comparison is delivered as part of the written report — not as a claim, but as measurable evidence. This is what separates structured engineering from intuition-based optimisation.

Metrics compared
Average FPS — the headline number, but not the only metric
1% low FPS — the stability indicator that affects perceived smoothness
0.1% low FPS — the spike ceiling that defines worst-case experience
Frametime variance — consistency over time, not just average output
CPU and GPU temperature under load — confirming thermal improvements
A system averaging 200 FPS with 80 FPS 1% lows feels worse than one averaging 140 FPS with 120 FPS 1% lows. Average FPS alone is not the correct metric.
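The keep-or-revert decision described above can be sketched as a metric-by-metric comparison. This is an illustrative simplification, not the actual report logic; the 2% noise tolerance is an assumed value, and the example numbers mirror the 200/80 versus 140/120 case.

```python
def verify_improvement(baseline, after, metrics=("avg_fps", "1%_low", "0.1%_low")):
    """Compare post-change metrics against the baseline.

    Keep rule (a sketch): no tracked metric may regress beyond
    run-to-run noise, mirroring 'if the numbers don't improve,
    the change is reverted'.
    """
    tolerance = 0.02  # assumed 2% run-to-run noise allowance
    deltas = {m: (after[m] - baseline[m]) / baseline[m] for m in metrics}
    keep = all(d >= -tolerance for d in deltas.values())
    return deltas, keep

baseline = {"avg_fps": 140.0, "1%_low": 120.0, "0.1%_low": 95.0}
candidate = {"avg_fps": 200.0, "1%_low": 80.0, "0.1%_low": 60.0}
deltas, keep = verify_improvement(baseline, candidate)
# Higher average but collapsed lows: keep is False, the change is reverted.
```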
Fixed Constraints

What we will not do.
Regardless of potential gain.

No security feature removal
Disabling Spectre/Meltdown mitigations, Windows Defender, or Secure Boot can marginally improve benchmark numbers. The security cost is not worth any performance benefit for any use case we serve. We will not do this.
No reckless voltage manipulation
CPU and GPU voltage adjustments can damage hardware, void warranties, and introduce instability without measurable performance benefit for most workloads. Any voltage work — where applicable — is treated as advanced scope and requires explicit client consent after risk disclosure.
No bulk registry modification
Registry tweak scripts apply dozens of undocumented changes simultaneously. This makes it impossible to identify what improved, what degraded, and what had no effect. It is not engineering — it is guesswork at scale.
No unattended remote access
Every guided session uses screen share with the client present and in control throughout. Remote desktop access, where required for advanced scope, is used only with the client's explicit consent, is fully visible, and can be terminated by the client instantly.

The methodology applies in full to every engagement. There are no shortcuts by tier.