Graphical Computing — How Video Games Elevate Science
Game developers have always tried to push graphics to the limit. The foundation of gaming graphics began with the commercial success of Pong — two simple paddles and a ball that bounced back and forth. These days, players can see even the finest details, such as threads in fabric or strands of hair. Advancements in computing power allow millions of pixels to be processed in a fraction of a second using Graphics Processing Units (GPUs). GPUs were specifically designed to perform enormous numbers of math operations in parallel — something Central Processing Units (CPUs) don't do nearly as efficiently. Other industries have taken advantage of cheap, fast GPUs to create 3D models. The biotech industry, for example, models molecules in programs like PyMOL to visualize the microscopic world. The impact of GPU development on biotech is best understood by following its progression through the gaming industry.

Rise of the GPU
IBM created some of the first graphical display machines with the release of the IBM 2250 in 1965. The 2250 used a cathode ray tube and let operators manipulate images on the screen with a light pen — the original touch screen. Drawing done on the screen was processed through specific subroutines that returned characters to the display. The character set was limited to 63 characters, but it could be reprogrammed. The price tag of $280,000 put it well outside the range of most gamers, however. The first mass-market personal computers with graphical interfaces — the Apple Lisa and Macintosh — appeared in the early 80s, using Random Access Memory (RAM) to handle their 2D graphics. Pixel changes were entered through a mouse or keyboard, processed in RAM, and drawn to the screen. Unlike the 2250, which was used for writing characters, the Macintosh displayed a "desktop" with graphical representations of file folders and sheets of paper in windows — the interface we still recognize today. The CPU performed all of these computations, which heavily taxed the machine and prevented it from doing much additional work.

IBM's Professional Graphics Adapter (PGA) was the first graphics card to carry its own onboard processor, uncoupling display work from the main CPU. It was not created for the personal computer market, however, and remained an outlandishly expensive component. It wasn't until Microsoft partnered with IBM to incorporate its Graphical User Interface (GUI) software, AKA Windows, that the PC market was exposed to dedicated display processing. This led to consumer demand for more powerful graphics hardware that worked independently of the CPU. Application Programming Interfaces (APIs) such as OpenGL were introduced in the early 90s, giving software developers access to both the 2D and 3D functionality of a workstation's graphics processor. The market grew so popular that independently manufactured GPUs like 3dfx's Voodoo1 became commercially affordable for everyday consumers in the late 90s. This card marked the beginning of a new era — that of consumer-level 3D graphics cards.
Despite their commercial success, Voodoo cards suffered from a few major drawbacks: they handled only 3D graphics, so a separate card was still needed for standard 2D output, and they did not natively support the OpenGL API. Competitors like ATI Technologies introduced their own lines of graphics cards, starting with the 3D Rage, and partnered with game developer id Software around the release of its highly successful titles Doom and Quake. This allowed ATI to quickly eclipse 3dfx as the foremost producer of graphics cards.
Nvidia entered the market with the GeForce 256 — the first consumer card to compute vertex transformation (converting 3D coordinates into on-screen positions) and lighting on the chip itself. It implemented the full fixed-function pipeline in hardware — the fixed sequence of operations that 3D polygons pass through in order to be rendered as pixels on the screen. While this approach yielded high-performance GPUs, it lacked the flexibility to perform computations that were not related to graphics. The GPU's driving force was gaming, which requires rendering thousands to millions of polygons in a fraction of a second, and the demand for ever more realistic scenes in each new generation of games called for tremendous computational performance. To keep up, Nvidia released the GeForce3, which housed the first chip to let programmers run small custom programs inside the pipeline. These programs, called shaders, are run many times on different input data, simultaneously, in separate parts of the chip. This made it possible to compute things completely unrelated to graphics! It became clear to the scientific community that GPUs could be utilized for high-performance calculations.
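To make the shader idea concrete, here is a minimal sketch of that execution model written in CUDA (CUDA itself arrived several years after the GeForce3, so this illustrates the concept rather than GeForce3-era shader code). The same tiny program runs on thousands of GPU threads at once, each thread working on a different element of the data, and nothing about the computation needs to involve graphics.

```cuda
// Minimal sketch: the shader execution model applied to non-graphics data.
// Every GPU thread runs the same small program on a different element --
// here, squaring a number instead of shading a pixel.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void squareAll(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique index for this thread
    if (i < n) {
        out[i] = in[i] * in[i];                     // same operation, different data
    }
}

int main() {
    const int n = 1 << 20;                          // about a million values
    const size_t bytes = n * sizeof(float);

    float *h_in = (float *)malloc(bytes);
    float *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    float *d_in, *d_out;
    cudaMalloc((void **)&d_in, bytes);
    cudaMalloc((void **)&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;       // enough blocks to cover every element
    squareAll<<<blocks, threads>>>(d_in, d_out, n);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("out[1000] = %.1f\n", h_out[1000]);      // expect 1000000.0

    cudaFree(d_in); cudaFree(d_out);
    free(h_in); free(h_out);
    return 0;
}
```

Each thread derives its own index from its block and thread IDs, which is essentially how a pixel or vertex shader figures out which fragment it is responsible for; swap the multiplication for a physics or statistics calculation and the hardware neither knows nor cares.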

Scientific Application
The scientific community realized that GPUs excelled at performing the same operation hundreds of thousands of times with remarkable speed and accuracy. This was perfect for the Human Genome Project, which became a dramatic scientific race between private companies and the public effort to align a library of DNA sequences, and GPUs easily out-matched multi-core CPU machines at these duties. As the gaming community continued to call for better graphics processing, Nvidia released its Tesla series of GPUs, which quadrupled the double-precision performance of their GeForce equivalents, a capability well suited to scientific computation. Seeing a new market in scientific research, AMD designed its Radeon Instinct GPU series around deep learning, optimizing its MIOpen machine intelligence libraries for the hardware. AMD also developed a competitive edge in multi-threaded CPU processing with its EPYC server processors, which offer up to 32 cores and 64 threads! AMD's OpenCL driver package allows its GPUs and CPUs to share an OpenCL platform — a simple approach to optimizing utilization. At Macromoltek, we take advantage of both the deep-learning capabilities of Nvidia's CUDA technology and the multi-threaded performance of AMD's hardware to perform in-depth calculations for simulated annealing, molecular dynamics simulation, and molecule interaction.
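As a rough illustration of the kind of molecule-interaction workload that maps well onto a GPU, the sketch below counts, for every atom in a structure, how many other atoms lie within a cutoff distance. The coordinates, the 4.5 Angstrom cutoff, and the kernel itself are invented for this example; it is a toy sketch, not Macromoltek's production code.

```cuda
// Hypothetical sketch of a molecule-interaction building block: count, for each
// atom, the neighboring atoms within a cutoff distance. One GPU thread per atom.
// Coordinates and cutoff are made up for illustration.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

struct Atom { float x, y, z; };

__global__ void countContacts(const Atom *atoms, int n, float cutoff, int *contacts) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float c2 = cutoff * cutoff;
    int count = 0;
    for (int j = 0; j < n; ++j) {                   // compare this atom against all others
        if (j == i) continue;
        float dx = atoms[i].x - atoms[j].x;
        float dy = atoms[i].y - atoms[j].y;
        float dz = atoms[i].z - atoms[j].z;
        if (dx * dx + dy * dy + dz * dz < c2) ++count;
    }
    contacts[i] = count;
}

int main() {
    const int n = 10000;                            // toy system of 10,000 atoms
    Atom *h_atoms = (Atom *)malloc(n * sizeof(Atom));
    for (int i = 0; i < n; ++i) {                   // fake coordinates on a flat 100x100 grid
        h_atoms[i].x = (float)(i % 100);
        h_atoms[i].y = (float)(i / 100);
        h_atoms[i].z = 0.0f;
    }

    Atom *d_atoms; int *d_contacts;
    cudaMalloc((void **)&d_atoms, n * sizeof(Atom));
    cudaMalloc((void **)&d_contacts, n * sizeof(int));
    cudaMemcpy(d_atoms, h_atoms, n * sizeof(Atom), cudaMemcpyHostToDevice);

    int threads = 128;
    int blocks = (n + threads - 1) / threads;
    countContacts<<<blocks, threads>>>(d_atoms, n, 4.5f, d_contacts);  // 4.5 Angstrom cutoff

    int *h_contacts = (int *)malloc(n * sizeof(int));
    cudaMemcpy(h_contacts, d_contacts, n * sizeof(int), cudaMemcpyDeviceToHost);
    printf("atom 0 has %d contacts\n", h_contacts[0]);

    cudaFree(d_atoms); cudaFree(d_contacts);
    free(h_atoms); free(h_contacts);
    return 0;
}
```

Because every atom's count is independent of every other atom's, the GPU can work on all of them simultaneously — the same property that lets neighbor searches, distance matrices, and per-atom energy terms in molecular dynamics parallelize so naturally.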
Today, scientific research utilizes GPUs on an everyday basis. Data analysis that once took humans years and CPUs weeks to complete now takes GPUs only a few hours. Interactive visualization of microscopic objects in 3D software such as PyMOL would not be possible without GPUs and their graphics APIs. Viewing molecular shapes and comparative sizes in interactive 3D reveals new insights into the interactions and behavior of these molecules, and whole new fields have sprung up around the use of GPUs. We cannot forget the gaming industry's tireless work and ongoing contributions to advancing computational power. Though its work is aimed at entertainment, the scientific benefits are undeniable.

Links and Citations
- Game On, Science https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0057990
- The History of the Modern Graphics Processor https://www.techspot.com/article/650-history-of-the-gpu/
- Graphics Processing Units in Bioinformatics https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5862309/

Looking for more information about Macromoltek, Inc? Visit our website at www.Macromoltek.com
Interested in molecular simulations, biological art, or learning more about molecules? Follow us on Twitter and Instagram!









