Parallel and Distributed Computing Unit - 2 MCQ

1. CUDA stands for
(A) Compute Unified Device Architecture
(B) Compute Unit Device Architecture
(C) Compute Unified Disk Architecture
(D) Computer Unified Device Architecture

Correct option is A


2. CUDA is a
(A) Parallel computing platform
(B) Programming model
(C) Both A & B
(D) None of these

Correct option is C


3. CUDA is developed by
(A) NVIDIA
(B) AMD
(C) INTEL
(D) RAPIDS

Correct option is A


4. CUDA is used to enable the ___
(A) Graphics Processing Unit
(B) Graphical Processing Unit
(C) Graphics Processed Unit
(D) Graphical Processed Unit

Correct option is A


5. The CUDA platform is designed to work with programming languages such as
(A) C
(B) C++
(C) Fortran
(D) All of the above

Correct option is D


6. Which of the following statements are true with regard to compute capability in CUDA
(A) Code compiled for hardware of one compute capability will not need to be re-compiled to run on hardware of another
(B) Different compute capabilities may imply a different amount of local memory per thread
(C) Compute capability is measured by the number of FLOPS a GPU accelerator can compute.

Correct option is B


7. Which of the following correctly describes a GPU kernel
(A) All thread blocks involved in the same computation use the same kernel
(B) A kernel is part of the GPU's internal micro-operating system, allowing it to act as an independent host
(C) A kernel may contain a mix of host and GPU code
(D) None of these

Correct option is A
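
To make the idea behind question 7 concrete, here is a minimal CUDA sketch (the kernel name and launch sizes are invented for illustration): one kernel function is written once, and every thread block launched for the computation executes that same function on its own portion of the data.

__global__ void scale(float *data, float factor, int n)
{
    // Every block launched for this computation runs this same kernel body;
    // only the data each thread touches differs.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

// A launch such as  scale<<<32, 256>>>(d_data, 2.0f, n);  creates 32 blocks
// of 256 threads, and all 32 blocks execute this one kernel.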


8. Which of the following is not a form of parallelism supported by CUDA


(A) Vector parallelism - Floating point computations are executed in parallel on wide vector units
(B) Thread level task parallelism - Different threads execute different tasks
(C) Block and grid level parallelism - Different blocks or grids execute different tasks
(D) Data parallelism - Different threads and blocks process different parts of data in memory

Correct option is A


9. The style of parallelism supported on GPUs is best described as


(A) SISD - Single Instruction Single Data
(B) MISD - Multiple Instruction Single Data
(C) SIMT - Single Instruction Multiple Thread
(D) None of these

Correct option is C


10. Shared memory in CUDA is accessible to
(A) All threads associated with a single kernel
(B) All threads in a single block
(C) Both the host and GPU
(D) None of these

Correct option is B


11. Parallel portions of an application are executed on the device as
(A) Kernels
(B) Thread
(C) Both A & B
(D) None of these

Correct option is A


12. A CUDA kernel is executed by an _____
(A) array of batches
(B) array of threads
(C) single thread
(D) single batches

Correct option is B
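
A hedged sketch of what "executed by an array of threads" means in practice (the array size, kernel name and launch configuration below are assumptions for illustration): the launch configuration creates a grid of threads, and each thread uses the built-in variables blockIdx, blockDim and threadIdx to find the element it owns.

#include <cstdio>

__global__ void addOne(int *a, int n)
{
    // Global index of this thread within the whole array of launched threads.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        a[i] += 1;
}

int main()
{
    const int n = 256;
    int h[n];
    for (int i = 0; i < n; ++i) h[i] = i;

    int *d = nullptr;
    cudaMalloc(&d, n * sizeof(int));
    cudaMemcpy(d, h, n * sizeof(int), cudaMemcpyHostToDevice);

    // 2 blocks of 128 threads: an array of 256 threads executes the kernel.
    addOne<<<2, 128>>>(d, n);
    cudaDeviceSynchronize();

    cudaMemcpy(h, d, n * sizeof(int), cudaMemcpyDeviceToHost);
    printf("h[0] = %d, h[255] = %d\n", h[0], h[255]);   // expected: 1 and 256
    cudaFree(d);
    return 0;
}

The same file also touches questions 13 and 15 below: the data is split across threads (data parallelism), and host code (main) and device code (the kernel) coexist in one CUDA source file.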


13. _____ is a form of parallelization which relies on splitting the computation by subdividing data across multiple processors.
(A) Data parallelism
(B) Task parallelism
(C) Function parallelism
(D) Object parallelism

Correct option is A


14. Data parallelism performs ____ computation, while task parallelism performs ____ computation.
(A) Synchronous, Asynchronous Computation
(B) Synchronous, Synchronous Computation
(C) Asynchronous, Synchronous Computation
(D) Asynchronous, Asynchronous Computation

Correct option is A


15. A CUDA source file can have a mixture of
(A) Host Code
(B) Device Code
(C) Both A & B
(D) None of these

Correct option is C


16. Declares a function that is executed on the device and is callable from the device only
(A) __device__
(B) __global__
(C) __host__
(D) None of these

Correct option is A


17. What is(are) true about __global__?
(A) Declares a function as kernel
(B) Must have void return type
(C) The function is only executable on the device, callable from the host and device
(D) All of these

Correct option is D
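
As a rough sketch of the qualifiers in questions 16 and 17 (function names are invented for illustration): a __device__ function runs on the device and can be called only from device code, while __global__ marks a kernel, which must have a void return type and is launched from the host.

__device__ float square(float x)                 // device-only helper, callable from device code
{
    return x * x;
}

__global__ void squareAll(float *data, int n)    // kernel: note the void return type
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = square(data[i]);               // device code calling the __device__ function
}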


18. What is(are) true about the variable type qualifier __device__?
(A) Resides in global memory (DRAM)
(B) Is accessible from all the threads within the grid
(C) Is accessible from the host through the runtime library
(D) All of these

Correct option is D
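
A small sketch of a __device__ variable, matching the three properties listed in question 18 (the variable name and launch size are assumptions): it resides in device global memory, is visible to every thread in the grid, and the host reaches it through the runtime library.

#include <cstdio>

__device__ int counter;                          // resides in global memory (DRAM)

__global__ void bump()
{
    atomicAdd(&counter, 1);                      // accessible from all threads in the grid
}

int main()
{
    int zero = 0;
    cudaMemcpyToSymbol(counter, &zero, sizeof(int));   // host access via the runtime library
    bump<<<4, 64>>>();
    cudaDeviceSynchronize();

    int result = 0;
    cudaMemcpyFromSymbol(&result, counter, sizeof(int));
    printf("counter = %d\n", result);                  // expected: 256
    return 0;
}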


19. What is(are) true about the variable type qualifier __shared__?
(A) Resides in the shared memory of a thread block
(B) Is accessible only from the threads within the block
(C) Both A & B
(D) None of these

Correct option is C
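
A minimal sketch of a __shared__ variable, assuming a launch of 128 threads per block (all names and sizes are illustrative): the array lives in the shared memory of each thread block and is visible only to the threads of that block.

__global__ void reverseInBlock(int *data)
{
    __shared__ int tile[128];                    // one copy per thread block, in shared memory

    int t = threadIdx.x;
    int base = blockIdx.x * blockDim.x;

    tile[t] = data[base + t];
    __syncthreads();                             // wait until every thread of the block has written

    data[base + t] = tile[blockDim.x - 1 - t];   // only this block's threads see this tile
}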


20. What strategy does the GPU employ if the threads within a warp diverge in their execution?
(A) All possible execution paths are run by all threads in a warp serially so that thread instructions do not diverge
(B) Threads are moved to different warps so that divergence does not occur within a single warp
(C) Threads are allowed to diverge
(D) None of these

Correct option is A
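
A short sketch of warp divergence, the situation question 20 asks about (the kernel is illustrative): when threads of one warp take different branches, the hardware executes each branch path serially and masks off the threads that did not take it, so no thread runs an instruction from the wrong path.

__global__ void divergent(int *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    if (threadIdx.x % 2 == 0)                    // even and odd lanes of the same warp diverge here
        out[i] = 2 * i;                          // path executed first, odd lanes temporarily inactive
    else
        out[i] = 3 * i;                          // path executed next, even lanes temporarily inactive
}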


 
