Title:
|
GPU USAGE ESTIMATION OF DEEP LEARNING TRAINING FUNCTION FOR SERVERLESS COMPUTING |
Author(s):
|
Chei-Yol Kim and Gyu-Il Cha |
ISBN:
|
978-989-8533-82-1 |
Editors:
|
Pedro Isaías and Hans Weghorn |
Year:
|
2018 |
Edition:
|
Single |
Keywords:
|
Serverless Computing, Micro Service Architecture, Deep Learning, GPGPU Sharing |
Type:
|
Full Paper |
First Page:
|
101 |
Last Page:
|
108 |
Language:
|
English |
Paper Abstract:
|
Serverless computing has recently come into the spotlight as a new form of cloud computing, and deep learning applications are among the software domains attracting the most interest in recent years. Current serverless computing environments, however, are still CPU-based only, because GPU devices, unlike CPUs, cannot be shared by different processes at the same time. To support deep learning applications in serverless computing at low cost, GPU resource sharing is essential. Nvidia supports MPS with execution resource provisioning on the latest Volta architecture GPUs. In order to apply MPS and resource provisioning to GPU-based serverless computing, it is necessary to know the accurate GPU usage of long-running deep learning functions. In this paper, we propose a technique to predict the GPU usage of a long-running deep learning training function without observing its complete execution. The proposed technique is composed of a sliding-window method and coverage-based usage estimation. Through the proposed technique, deep learning training functions can be effectively applied to serverless computing with GPU sharing. |
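The abstract names a sliding-window method combined with coverage-based usage estimation but does not spell out the algorithm. The following is a minimal Python sketch of one way such an estimator could work, assuming GPU utilization readings polled during the first training iterations; the function name, window size, and coverage threshold are illustrative assumptions and do not reproduce the paper's actual method.

# Hypothetical sketch: sliding-window, coverage-based estimation of a training
# function's GPU usage from an early prefix of utilization samples. The names
# estimate_gpu_usage, window_size and coverage_threshold are illustrative
# assumptions, not parameters defined in the paper.

from collections import deque
from typing import Iterable, Optional

def estimate_gpu_usage(samples: Iterable[float],
                       window_size: int = 50,
                       coverage_threshold: float = 0.95) -> Optional[float]:
    """Predict steady-state GPU utilization without running the whole job.

    `samples` is a stream of per-interval GPU utilization readings (0-100),
    e.g. polled while the first training iterations execute. A sliding
    window of recent readings is kept; once the window's value range covers
    most of the range observed so far (i.e. one full cycle of the periodic
    training pattern fits inside the window), the window average is
    reported as the usage estimate.
    """
    window = deque(maxlen=window_size)
    seen_min, seen_max = float("inf"), float("-inf")

    for reading in samples:
        window.append(reading)
        seen_min = min(seen_min, reading)
        seen_max = max(seen_max, reading)

        if len(window) < window_size or seen_max == seen_min:
            continue  # window not full yet, or no variation observed so far

        # Coverage: fraction of the overall utilization range spanned by
        # the current window of readings.
        coverage = (max(window) - min(window)) / (seen_max - seen_min)
        if coverage >= coverage_threshold:
            return sum(window) / len(window)

    return None  # stream ended before a confident estimate was reached

if __name__ == "__main__":
    # Synthetic trace: a repeating compute/idle pattern of one training loop.
    trace = ([90.0] * 40 + [20.0] * 10) * 5
    print(estimate_gpu_usage(trace))  # prints 76.0 for this synthetic trace

In this sketch the estimate becomes available after roughly one window of samples, which is the point of predicting usage without observing the complete execution of a long-running training function.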
|
|
|