Definition: CUDA


(Compute Unified Device Architecture) A platform from NVIDIA that enables the GPUs on its graphics cards to be used for general-purpose parallel processing. The CUDA programming interface (API) exposes the GPU's parallel processing capabilities to the developer.
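As an illustration only (a minimal sketch, not part of the dictionary entry itself), the CUDA C++ program below shows the shape of that API: a kernel is ordinary C++ code marked __global__, and the <<<blocks, threads>>> launch syntax spreads the work, one multiply-and-add per element, across thousands of GPU threads at once. The kernel name saxpy and the sizes chosen here are arbitrary choices for the example.

   #include <cstdio>
   #include <cuda_runtime.h>

   // Kernel: each GPU thread handles one array element, performing one
   // multiply-and-add (y[i] = a*x[i] + y[i]).
   __global__ void saxpy(int n, float a, const float *x, float *y)
   {
       int i = blockIdx.x * blockDim.x + threadIdx.x;
       if (i < n)
           y[i] = a * x[i] + y[i];
   }

   int main()
   {
       const int n = 1 << 20;            // one million elements
       float *x, *y;
       cudaMallocManaged(&x, n * sizeof(float));   // memory visible to CPU and GPU
       cudaMallocManaged(&y, n * sizeof(float));
       for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

       // Launch enough 256-thread blocks to cover all n elements.
       int threads = 256;
       int blocks = (n + threads - 1) / threads;
       saxpy<<<blocks, threads>>>(n, 2.0f, x, y);
       cudaDeviceSynchronize();          // wait for the GPU to finish

       printf("y[0] = %f\n", y[0]);      // expect 4.0
       cudaFree(x);
       cudaFree(y);
       return 0;
   }

Built with NVIDIA's nvcc compiler (for example, nvcc saxpy.cu -o saxpy), the single source file carries both the host code and the GPU kernel.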

CUDA was introduced in 2007, when NVIDIA's current GPU was the GeForce 8. Unlike a CPU, which may contain a dozen or more cores for general data processing, a GPU can contain thousands of CUDA cores, each of which performs a single multiply-and-accumulate calculation, so that thousands run in parallel for 3D graphics rendering, high-performance computing and AI applications. See GeForce.

CUDA and Tensor Cores
NVIDIA GPUs may also include Tensor cores, which likewise operate in parallel but work on small matrices rather than individual values (see Tensor core). See GPGPU and PhysX.
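As a rough sketch outside the scope of this entry, the warp-level wmma API in CUDA's <mma.h> header is one way Tensor cores can be programmed directly; the kernel name tile_mma and the 16x16x16 tile shape below are illustrative choices, not anything prescribed by the entry.

   #include <mma.h>
   #include <cuda_fp16.h>
   using namespace nvcuda;

   // One warp (32 threads) cooperatively multiplies a 16x16 half-precision
   // tile A by tile B on the Tensor cores, accumulating into a 16x16
   // single-precision tile C.
   __global__ void tile_mma(const half *a, const half *b, float *c)
   {
       wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
       wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
       wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

       wmma::fill_fragment(c_frag, 0.0f);       // start the accumulator at zero
       wmma::load_matrix_sync(a_frag, a, 16);   // load the 16x16 input tiles
       wmma::load_matrix_sync(b_frag, b, 16);
       wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // one Tensor-core matrix op
       wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
   }

A host program would launch this with a single warp, for example tile_mma<<<1, 32>>>(dA, dB, dC); it requires a Tensor core-equipped GPU (compute capability 7.0 or later) and a matching compiler flag such as -arch=sm_70.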

CUDA C/C++ and CUDA Fortran
CUDA operations are typically programmed in C or C++ and compiled with NVIDIA's nvcc compiler. A CUDA Fortran compiler was developed by the Portland Group (PGI), which was acquired by NVIDIA. See GPU and CUDA core.