== gpt-5-mini ==

=== User: How was the cpu/gpu time split determined? ===

How was the cpu/gpu time split determined?

=== Assistant: The CPU/GPU split comes from practical observation, pipeline theory, and the architecture of your target system, not ar… ===

The CPU/GPU split comes from practical observation, pipeline theory, and the architecture of your target system, not arbitrary numbers. Let me break it down clearly.

==== You target 60 FPS → 16.67 ms per frame. ====

This is your hard upper limit for all game logic + rendering.

==== CPU ====

* Responsible for:
** Game logic
** AI
** Physics
** Animation evaluation
** Command buffer submission
* All of this prepares work for the GPU, but cannot execute pixel shaders.

Rule of thumb from real-world engines (Unity, Unreal, proprietary):

* Mid-tier CPU-bound system: ~40–50% of frame budget spent on CPU
* Mid-tier GPU-bound system: ~50–60% on GPU

Why? Because even a "GPU-heavy" frame needs the CPU to feed commands and update game state. You cannot ignore the CPU.

For your system (i7-3770K):

* 4 cores, ~3.5 GHz
* Modern GPU is strong (RX 6600 XT)
* CPU becomes the bottleneck first in open-world, logic-heavy scenarios

→ ~7–8 ms for CPU, remaining ~9–10 ms for GPU. This is what I used in the previous table.

==== GPU ====

* Handles pixel processing, shaders, post-processing, draw calls, etc.
* A mid-range GPU like the 6600 XT has ~10–12 ms to render a full 1080p frame without stuttering.
* If your frame budget is 16.67 ms and the CPU takes 7–8 ms, the GPU can consume ~9–10 ms, leaving a small margin for OS overhead, swapchain, etc.
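That rule of thumb can be written as a tiny calculation. A minimal sketch in Python, where the 45% CPU share is the assumed rule-of-thumb figure from this discussion, not a measured constant:

```python
# Frame-budget split sketch. The 45% CPU share is the rule-of-thumb
# assumption discussed above, not a measured constant.

TARGET_FPS = 60
frame_budget_ms = 1000.0 / TARGET_FPS            # ~16.67 ms hard limit

cpu_share = 0.45                                 # assumed CPU-heavy share
cpu_budget_ms = frame_budget_ms * cpu_share      # ~7.5 ms for logic, AI, physics
gpu_budget_ms = frame_budget_ms - cpu_budget_ms  # ~9.17 ms left for rendering

print(f"frame {frame_budget_ms:.2f} ms | "
      f"CPU {cpu_budget_ms:.2f} ms | GPU {gpu_budget_ms:.2f} ms")
```

Changing `cpu_share` is how you would model a more GPU-bound title on the same 16.67 ms budget.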
==== In real engine design, the split is determined by: ====

# Profiling similar hardware / engines
#* Look at existing games on similar GPUs + CPUs
#* Identify CPU-heavy and GPU-heavy workloads
# Pipeline theory
#* Frame time = max(CPU_frame_time, GPU_frame_time)
#* You can't push the GPU faster than the CPU feed rate (the CPU becomes limiting), and vice versa
# Hardware knowledge
#* CPU: limited cores, cache, memory bandwidth → limits AI/physics
#* GPU: thousands of cores, high memory bandwidth → can handle visuals more comfortably at 1080p
# Safety margin
#* Always leave ~10–15% of the frame budget as headroom for spikes, OS overhead, and streaming

<syntaxhighlight>
Target: 60 FPS → 16.67 ms/frame

Estimate CPU-heavy systems take ~45% of budget:
16.67 × 0.45 ≈ 7.5 ms  → CPU
16.67 − 7.5  ≈ 9.17 ms → GPU
</syntaxhighlight>

* Then you break the CPU/GPU budgets into subsystems (physics, AI, draw calls, shaders) to guide design.

==== Time split is a guideline, not a rule. ====

* You aim for CPU ≤ 7–8 ms, GPU ≤ 10 ms.
* The final profile may shift depending on scene, complexity, and LOD.
* Engines use adaptive techniques to respect these budgets at runtime.

If you want, I can make a diagram showing exactly which parts of the frame go to CPU vs GPU, including game logic, physics, rendering submission, and pixel shading, for your 3770K + 6600 XT target. It's very illustrative.

=== User: Ok ===

Ok

=== Assistant: Here's a clear diagram of how a single frame splits between CPU and GPU for your 3770K + RX 6600 XT system targeting 60… ===

Here's a clear diagram of how a single frame splits between CPU and GPU for your 3770K + RX 6600 XT system targeting 60 FPS (16.67 ms per frame). I've kept the numbers realistic and rounded slightly for clarity.
<syntaxhighlight>
┌──────────────────────────────────────────────────────────────┐
│ TOTAL FRAME: 16.67 ms                                        │
├───────────┬───────────┬──────────────────────────────────────┤
│ CPU       │ GPU       │ Notes                                │
├───────────┼───────────┼──────────────────────────────────────┤
│ 7.5 ms    │ 9.2 ms    │ CPU + GPU must run in parallel;      │
│           │           │ frame time is max(CPU, GPU)          │
└───────────┴───────────┴──────────────────────────────────────┘
</syntaxhighlight>

# CPU-limited mechanics
#* Cap the number of active AI/NPCs to fit 3–4 ms total AI + physics per frame.
#* Spread expensive updates over multiple frames (tick-scheduling).
# GPU-limited mechanics
#* Keep triangle count ≤ ~10–15M on screen.
#* Avoid >2k × 2k shadow maps per frame.
#* Limit full-screen post-process passes to ≤ 2 ms.
# Scaling
#* LOD reduces geometry & animation cost with distance.
#* Streaming reduces memory footprint; chunk updates are amortized over frames.

If you want, I can overlay this diagram with memory/VRAM budgets so you get a full CPU+GPU+RAM/VRAM constraint table for every mechanic on this exact system. That gives a complete "design on paper" blueprint.

=== User: Ok ===

Ok

=== Assistant: Perfect. ===

Perfect. Let's combine time (CPU/GPU) and space (RAM/VRAM) budgets into a single practical design blueprint for your 3770K + 16 GB RAM + RX 6600 XT system. This gives you a "paper-ready" set of constraints for each game mechanic.
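Before that, the two rules just used, frame time equals max(CPU, GPU) because the two processors run in parallel, and capping entity counts so a mechanic fits its budget slice, can be sketched in a few lines. The 30 µs-per-NPC cost is a purely illustrative assumption:

```python
# Frame time is gated by the slower processor: CPU and GPU work in parallel.
def frame_time_ms(cpu_ms: float, gpu_ms: float) -> float:
    return max(cpu_ms, gpu_ms)

# Cap active NPCs so AI + physics stay inside their ~3 ms CPU slice.
# Costs are kept in whole microseconds so the division stays exact;
# the 30 us per NPC figure is an illustrative assumption, not a measurement.
AI_BUDGET_US = 3000       # ~3 ms slice of the CPU budget
COST_PER_NPC_US = 30
max_npcs = AI_BUDGET_US // COST_PER_NPC_US

print(frame_time_ms(7.5, 9.2))  # 9.2 -> this frame is GPU-bound
print(max_npcs)                 # 100 NPCs fit the slice
```

The same pattern (budget slice divided by per-unit cost) gives caps for physics bodies, particles, or any other countable mechanic.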
==== Target system ====

* Frame rate: 60 FPS → 16.67 ms/frame
* CPU: i7-3770K (4C/8T, ~3.5 GHz)
* System RAM: 16 GB DDR3
* GPU: RX 6600 XT (8 GB VRAM)
* Resolution: 1080p
* Storage: SATA SSD

Total: CPU ~7.5 ms, GPU ~9.2 ms

==== When designing each feature, write per-unit budgets: ====

# Cap entity counts based on CPU time and RAM usage.
# Cap polygon counts & textures based on VRAM and GPU time.
# Spread expensive updates over frames (AI, physics, streaming).
# Use headroom in CPU/GPU budgets to handle spikes.
# Target worst-case scenarios for mechanics, not average.

This gives you a fully constrained design-on-paper template. You can now assign explicit CPU, RAM, and GPU budgets to every mechanic before coding a line.

If you want, I can make a visual "budget map" diagram that shows CPU, GPU, RAM, and VRAM all in one frame; it's very helpful for quick reference while designing.

=== User: Ok lets look at it the opposite way, how do i calculate the hardware cost of my game design on paper ===

Ok lets look at it the opposite way, how do i calculate the hardware cost of my game design on paper

=== Assistant: Got it – now we're flipping the problem. ===

Got it – now we're flipping the problem. Instead of "given hardware, what can my game do," you want "given a game design, what hardware specs are required to run it at target performance." This is exactly what studios do for "minimum" and "recommended" specs. Here's a step-by-step method.

==== 1. Define your design targets ====

Before estimating hardware, you must know:

* Frame rate, e.g., 60 FPS → 16.67 ms/frame
* Resolution, e.g., 1080p
* Maximum simultaneous entities: NPCs, physics objects, particles
* Maximum texture & geometry fidelity: polygons, texture resolution
* Maximum post-processing effects: shadows, lighting, screen-space effects

These numbers are the design inputs for the calculation.

==== 2. Estimate CPU cost per mechanic ====

For each system, estimate time per unit per frame.

| Mechanic          | CPU per unit (ms) | Units | Total CPU (ms/frame) |
| ----------------- | ----------------- | ----- | -------------------- |
| NPC AI            | 0.025             | 100   | 2.5                  |
| Physics body      | 0.002             | 500   | 1.0                  |
| Animation eval    | 0.010             | 150   | 1.5                  |
| Game logic        | –                 | –     | 1.0                  |
| Scripting / audio | –                 | –     | 0.5                  |

Sum → CPU time per frame. Example: 6.5 ms → the required CPU must deliver at least this much per core (or the multi-threaded equivalent).

==== 3. Estimate GPU cost per mechanic ====

Same approach, but in GPU milliseconds or complexity units:

| Mechanic                     | GPU cost per unit | Units | Total GPU cost |
| ---------------------------- | ----------------- | ----- | -------------- |
| Character geometry & shading | 0.05 ms           | 50    | 2.5 ms         |
| Environment props            | 0.005 ms          | 500   | 2.5 ms         |
| Particles                    | 0.01 ms           | 200   | 2.0 ms         |
| Post-processing              | –                 | –     | 2.0 ms         |
| Shadows                      | –                 | –     | 1.0 ms         |

Sum → GPU frame time. Example: 10 ms → the GPU must be able to execute that workload in ≤ 10 ms at the target resolution.

==== 4. Convert CPU/GPU time to hardware class ====

Once you have total CPU & GPU time per frame:

# CPU:
#: <code>Required CPU performance = total CPU workload / frame time per core</code>
#* Adjust for multithreading and per-core efficiency.
#* Use benchmark numbers of real CPUs: e.g., if your design uses 6.5 ms CPU per frame on one "simulation" core, a CPU like the i7-3770K (~3.5 GHz) can handle it. If >10 ms, you need a stronger CPU.
# GPU:
#* Convert total GPU ms → GPU throughput needed:
#*: Vertex throughput (Mvertices/s)
#*: Pixel throughput (MPixels/s)
#*: Shader performance (GFLOPS)
#* Compare to benchmarked GPUs. Example: total shading cost = 10 ms at 1080p → a 6600 XT-class GPU can handle it. 4K would need a stronger GPU.

==== 5. Estimate memory requirements ====

===== RAM =====

# Sum all gameplay, physics, AI, audio, and streaming buffers.
# Add OS overhead (3–4 GB for modern Windows).
# Add a safety margin (~10–15%).

Example: the design requires 8 GB + 4 GB OS + 10% ≈ 13.2 GB → recommend 16 GB RAM minimum.

===== VRAM =====

# Sum all textures, geometry buffers, render targets, shadows, and post effects.
# Add headroom (0.5–1 GB).

Example: 6 GB total → recommend an 8 GB GPU.

==== 6. Factor in bandwidth ====

* CPU: RAM bandwidth may limit AI/physics-heavy systems.
* GPU: memory bandwidth may limit high-resolution textures or full-screen effects.
* Include headroom (~10–20%) in both cases.

==== 7. Define minimum / recommended / high-end specs ====

* Minimum: can just hit your frame target at the lowest settings.
* Recommended: comfortable frame rate, medium-high settings.
* High-end: maxed-out settings, higher resolution / FPS.

| Tier        | CPU                 | RAM      | GPU                     | Notes              |
| ----------- | ------------------- | -------- | ----------------------- | ------------------ |
| Minimum     | ~i5/i7 3rd gen      | 16 GB    | 6600 XT / GTX 1060      | 1080p 60 FPS       |
| Recommended | 6–8 cores, 3.5 GHz+ | 16–32 GB | RTX 3060 / 6600 XT high | 1080p Ultra 60 FPS |
| High-end    | 8+ cores            | 32 GB    | RTX 4070+ / 6800 XT     | 1440p 60–120 FPS   |

===== Key principle =====

* Start with the frame budget and mechanics cost.
* Sum per-unit costs to get total CPU/GPU time and memory.
* Compare to benchmarked hardware → pick a hardware tier.
* Iterate: tweak mechanic complexity or entity counts until the budget fits the desired tier.
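The memory arithmetic from step 5 can be wrapped in a small helper. This is a sketch using the example figures above (8 GB game footprint, 4 GB OS overhead, 10% margin); the rounding-up to common retail stick sizes is an assumption for turning the raw number into a spec-sheet value:

```python
def required_ram_gb(game_gb: float, os_gb: float = 4.0, margin: float = 0.10) -> float:
    """Game footprint + OS overhead, plus a safety margin."""
    return (game_gb + os_gb) * (1.0 + margin)

needed_gb = required_ram_gb(8.0)  # (8 + 4) * 1.10 = 13.2 GB
# Round up to the next common capacity to get the spec-sheet number.
recommended_gb = next(size for size in (8, 16, 32, 64) if size >= needed_gb)

print(f"need {needed_gb:.1f} GB -> recommend {recommended_gb} GB RAM")
```

The VRAM estimate works the same way, with the 0.5–1 GB headroom taking the place of the OS overhead.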
If you want, I can make a worked example calculation for one specific system, like your 3770K + 6600 XT, showing exactly how to calculate minimum CPU cores, GPU power, RAM, and VRAM from a hypothetical open-world game design. That turns theory into a step-by-step "on-paper hardware calculator."
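As a first step toward such a calculator, the per-mechanic CPU table from step 2 can be summed and checked against the ~7.5 ms CPU budget used throughout. All per-unit costs here are the illustrative figures from that table, not profiled data:

```python
# Illustrative per-mechanic CPU costs from the step 2 table: (ms per unit, unit count).
per_unit = {
    "NPC AI":         (0.025, 100),
    "Physics body":   (0.002, 500),
    "Animation eval": (0.010, 150),
}
flat = {"Game logic": 1.0, "Scripting / audio": 0.5}  # fixed per-frame costs

total_cpu_ms = sum(ms * n for ms, n in per_unit.values()) + sum(flat.values())

CPU_BUDGET_MS = 7.5  # CPU share of the 16.67 ms frame, from earlier
verdict = "fits" if total_cpu_ms <= CPU_BUDGET_MS else "over budget"
print(f"total CPU: {total_cpu_ms:.1f} ms/frame ({verdict})")
```

Extending the dictionaries with GPU, RAM, and VRAM columns would turn this into the full on-paper hardware calculator described above.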