Choosing a video card wisely for high-end architectural visualization work
Demystifying the GPU's role in 3D viewport performance
The video card, or GPU, is an essential piece of hardware in high-end architectural visualization work. However, a more powerful video card in most cases doesn't result in better graphics performance or higher FPS rates in a professional 3D software's viewport, such as 3ds Max's or Cinema 4D's. My observations below may help you better understand how your computer navigates the 3D viewport using both the CPU and the GPU, and, as a result, help you make wiser decisions when choosing a video card for your desktop or workstation. These observations apply to 3ds Max and Cinema 4D, software I've been using for many years, but are likely true for other professional 3D applications, such as Blender. Please note that my suggestions assume you use a CPU-based renderer like Corona Renderer or V-Ray, not a GPU-based renderer like Octane Render or Unreal Engine.
/…/ a more powerful video card in most cases doesn't result in better graphics performance or higher FPS rates in a professional 3D software's viewport, such as 3ds Max's or Cinema 4D's.
In many cases, a 3D software's viewport may become sluggish as your arch-viz scene grows, no matter how hard you try to optimize your resources. Several primary factors may slow down viewport performance, such as a large number of polygons or, quite simply, a large number of objects, among many others. At some point, you may feel you can't optimize your scene any further and will start to suspect your video card as the culprit for poor viewport performance.
Inspecting the CPU as the bottleneck in viewport performance
However, in 99 percent of cases the bottleneck won't be your GPU but your CPU – and more specifically your 3D software, which can't yet utilize your video card's full power to deliver better viewport performance. Each time you interact with your 3D software – like 3ds Max or Cinema 4D – a single thread of your CPU must prepare the viewport for the GPU to render. In most arch-viz scenes, the 'preparation' time spent on a single CPU thread is likely much longer than the time spent on the GPU itself. Moreover, the process drives that single CPU thread at full, 100-percent activity, while hardly utilizing more than a fraction of the GPU's performance. You can verify this yourself by monitoring Task Manager, and read more on the subject here, where the above is applied to a Quadro M4000 graphics card.
/…/ in 99 percent of cases the bottleneck won't be your GPU but your CPU – and more specifically your 3D software, which can't yet utilize your video card's full power to deliver better viewport performance.
As a result, if you confirm the above happens on your workstation, a more powerful video card won't improve 3D viewport performance at all. In general, even an entry-level gaming video card, like the current GeForce GTX 1650 or its equivalent in future generations, will maximize viewport performance in most 3D applications when paired with a CPU that has high single-thread performance. Until 3D software vendors better optimize GPU utilization, the observations above will remain true. All you can do to improve viewport speed is switch to a CPU with higher single-thread performance, or optimize your scene further.
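The reasoning above is essentially Amdahl's law: as long as per-frame viewport time is dominated by a serial, single-threaded CPU 'preparation' step, speeding up only the GPU barely moves the FPS. Here is a minimal sketch in Python; the millisecond figures and function names are my own illustration, not measurements from 3ds Max or Cinema 4D:

```python
# Toy model of viewport frame time: a serial, single-threaded CPU 'prep' step
# followed by the GPU draw. All numbers below are illustrative, not benchmarks.

def frame_time_ms(cpu_prep_ms: float, gpu_draw_ms: float, gpu_speedup: float = 1.0) -> float:
    """Per-frame time: CPU prep is serial; only the GPU part scales with a faster card."""
    return cpu_prep_ms + gpu_draw_ms / gpu_speedup

def fps(cpu_prep_ms: float, gpu_draw_ms: float, gpu_speedup: float = 1.0) -> float:
    """Frames per second for the frame time above."""
    return 1000.0 / frame_time_ms(cpu_prep_ms, gpu_draw_ms, gpu_speedup)

# A heavy arch-viz scene: 40 ms of CPU prep per frame, 10 ms of GPU draw.
base       = fps(40, 10)       # 20 FPS
double_gpu = fps(40, 10, 2.0)  # ~22 FPS: a 2x faster GPU buys only ~11%
double_cpu = fps(20, 10)       # ~33 FPS: 2x single-thread CPU speed buys ~67%
```

With numbers like these, doubling GPU power raises the frame rate by roughly a tenth, while doubling single-thread CPU performance raises it by two-thirds – which is exactly why a faster CPU, not a faster card, is the lever here.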
Choosing an AMD or NVIDIA, consumer- or professional-grade GPU for your workstation
In terms of GPU vendors, choosing an AMD or an NVIDIA graphics card – or, more generally, a consumer- or professional-grade video card – won't make much of a difference either. However, new GPU-based innovations, if vendor-specific, will likely be compatible with NVIDIA cards only – like the NVIDIA OptiX denoiser – so choosing an NVIDIA card may always be the safer option. Also, in 2019 NVIDIA introduced its 'Creator Ready', later renamed 'Studio', drivers, so 3D-software-specific optimization is now available for the consumer-grade GeForce line-up too, at a great value.
Taking advantage of the highest possible GPU performance in GPU-based rendering, as an exception
In specific cases, especially if you use your GPU not only to navigate your 3D viewport but for rendering too, you will of course greatly benefit from buying the most powerful video card you can get, whether it's a consumer- or professional-grade product. GPU renderers are built to utilize the video card at 100 percent and require much less CPU power to run the process. If you use software that utilizes the CPU for rendering, though, like Corona Renderer or V-Ray, the GPU is of secondary importance in your professional work.
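To see why the advice flips for GPU renderers, consider the same kind of toy model as the viewport case, now for a final-frame render: the GPU does nearly all of the work, so total render time scales almost linearly with card speed. The seconds below are made-up illustrative numbers, not benchmarks of any particular renderer:

```python
# Toy model of a GPU-renderer final frame: a small, fixed CPU overhead
# (scene export, kernel setup) plus GPU work that scales with card speed.
# All numbers are illustrative, not measurements.

def render_time_s(gpu_work_s: float, cpu_overhead_s: float = 5.0, gpu_speedup: float = 1.0) -> float:
    """Total render time: fixed CPU overhead + GPU work scaled by card speed."""
    return cpu_overhead_s + gpu_work_s / gpu_speedup

base       = render_time_s(120.0)                   # 125 s
double_gpu = render_time_s(120.0, gpu_speedup=2.0)  # 65 s: nearly twice as fast
```

Unlike the viewport case, here the serial CPU portion is a small fraction of the total, so doubling GPU power almost halves the render time – which is why GPU-renderer users should buy the most powerful card they can afford.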