VGG-19 on Keras/PlaidML backend
PlaidML, which is rumored to be faster than HIP-TensorFlow
Introduction
Hello!
HIP-TensorFlow is a port of TensorFlow that runs its CUDA code on AMD GPUs through the HIP compatibility layer. However, because it is still under development and based on an older version of TensorFlow, it is noticeably slower than the latest NVIDIA + TensorFlow combination for deep learning. It also runs at the same speed on an RX 580 as on superior GPUs like the Vega 56 and Vega 64, so it is still an immature library in the sense that it cannot demonstrate the potential of the Vega series. (Mar/13th/2018)
However, PlaidML is a library that can compensate for this, and its distinguishing feature is that it can be used as a Keras backend in place of TensorFlow.
ROCm-TensorFlow and PlaidML library stack
| ROCm-TensorFlow | PlaidML |
|---|---|
| Keras or something | Keras or something |
| TensorFlow | PlaidML |
| MIOpen (CUDA simulation layer) | PlaidML |
| ROCm (GPU computing driver) | ROCm or AMDGPU-PRO or CUDA SDK |
| AMD-GPU driver | AMD-GPU driver / NVIDIA-GPU driver |
| OS | OS |
| Native GPU | Native GPU |
As can be seen from the library stack, PlaidML is used as Keras' backend. Other well-known backend options for Keras include Google's TensorFlow, Microsoft's CNTK, and Université de Montréal's Theano.
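For reference, the stock backends are normally selected through the KERAS_BACKEND environment variable or ~/.keras/keras.json, whereas PlaidML injects itself programmatically. A minimal sketch (assuming plaidml-keras is already installed):
# Stock backends (TensorFlow/CNTK/Theano) are chosen via KERAS_BACKEND
# or ~/.keras/keras.json, e.g.:  KERAS_BACKEND=theano python my_script.py
# PlaidML instead installs itself as the backend at import time:
import plaidml.keras
plaidml.keras.install_backend()  # must be called before `import keras`
import keras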
Installation
We will install PlaidML on our AMD GPU-based Ubuntu 16.04 instance. First, we install the AMD GPU driver; in this case it is simple because the instance is ROCm-based.
wget -qO - http://repo.radeon.com/rocm/apt/debian/rocm.gpg.key | sudo apt-key add -
sudo sh -c 'echo deb [arch=amd64] http://repo.radeon.com/rocm/apt/debian/ xenial main > /etc/apt/sources.list.d/rocm.list'
sudo apt update
sudo apt install -y libnuma-dev rocm-dkms rocm-opencl-dev
sudo usermod -a -G video $LOGNAME
Please run the following OpenCL command to confirm the installation:
/opt/rocm/opencl/bin/x86_64/clinfo
root@C-639ab3c2-c201-401e-9cc2-08dc90fef661-1:~# /opt/rocm/opencl/bin/x86_64/clinfo
Number of platforms: 1
Platform Profile: FULL_PROFILE
Platform Version: OpenCL 2.1 AMD-APP.internal (2545.0)
Platform Name: AMD Accelerated Parallel Processing
Platform Vendor: Advanced Micro Devices, Inc.
Platform Extensions: cl_khr_icd cl_amd_object_metadata cl_amd_event_callback
Platform Name: AMD Accelerated Parallel Processing
Number of devices: 1
Device Type: CL_DEVICE_TYPE_GPU
Vendor ID: 1002h
Board name: Device 6861
Device Topology: PCI[ B#5, D#0, F#0 ]
Max compute units: 64
Max work items dimensions: 3
Max work items[0]: 1024
Max work items[1]: 1024
Max work items[2]: 1024
Max work group size: 256
Preferred vector width char: 4
Preferred vector width short: 2
Preferred vector width int: 1
Preferred vector width long: 1
Preferred vector width float: 1
Preferred vector width double: 1
Native vector width char: 4
Native vector width short: 2
Native vector width int: 1
Native vector width long: 1
Native vector width float: 1
Native vector width double: 1
Max clock frequency: 1500Mhz
Address bits: 64
Max memory allocation: 14588628172
Image support: Yes
Max number of images read arguments: 128
Max number of images write arguments: 8
Max image 2D width: 16384
Max image 2D height: 16384
Max image 3D width: 2048
Max image 3D height: 2048
Max image 3D depth: 2048
Max samplers within kernel: 26721
Max size of kernel argument: 1024
Alignment (bits) of base address: 1024
Minimum alignment (bytes) for any datatype: 128
Single precision floating point capability
Denorms: Yes
Quiet NaNs: Yes
Round to nearest even: Yes
Round to zero: Yes
Round to +ve and infinity: Yes
IEEE754-2008 fused multiply-add: Yes
Cache type: Read/Write
Cache line size: 64
Cache size: 16384
Global memory size: 17163091968
Constant buffer size: 14588628172
Max number of constant args: 8
Local memory type: Scratchpad
Local memory size: 65536
Max pipe arguments: 16
Max pipe active reservations: 16
Max pipe packet size: 1703726284
Max global variable size: 14588628172
Max global variable preferred total size: 17163091968
Max read/write image args: 64
Max on device events: 0
Queue on device max size: 0
Max on device queues: 0
Queue on device preferred size: 0
SVM capabilities:
Coarse grain buffer: Yes
Fine grain buffer: Yes
Fine grain system: No
Atomics: No
Preferred platform atomic alignment: 0
Preferred global atomic alignment: 0
Preferred local atomic alignment: 0
Kernel Preferred work group size multiple: 64
Error correction support: 0
Unified memory for Host and Device: 0
Profiling timer resolution: 1
Device endianess: Little
Available: Yes
Compiler available: Yes
Execution capabilities:
Execute OpenCL kernels: Yes
Execute native function: No
Queue on Host properties:
Out-of-Order: No
Profiling : Yes
Queue on Device properties:
Out-of-Order: No
Profiling : No
Platform ID: 0x7f16fc2423f0
Name: gfx900
Vendor: Advanced Micro Devices, Inc.
Device OpenCL C version: OpenCL C 2.0
Driver version: 2545.0 (HSA1.1,LC)
Profile: FULL_PROFILE
Version: OpenCL 1.2
Extensions: cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_subgroups cl_khr_depth_images cl_amd_copy_buffer_p2p
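If you prefer to run the same check from Python, pyopencl (an assumption here; install it with pip install pyopencl) can enumerate the platforms and devices that clinfo reports:
import pyopencl as cl

# Enumerate every OpenCL platform and device the ROCm runtime exposes.
for platform in cl.get_platforms():
    print("Platform:", platform.name)
    for device in platform.get_devices():
        print("  Device:", device.name)                    # e.g. gfx900
        print("  Global memory:", device.global_mem_size)  # in bytes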
Next, we install PlaidML.
The official installation method is described at the following URL: https://github.com/plaidml/plaidml
This time, however, we are using our company's AMD GPU instance with ROCm, so please run the following commands.
sudo add-apt-repository universe && sudo apt update
sudo apt install python-pip
sudo pip install -U plaidml-keras h5py
plaidml-setup
The installation is now complete.
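As a quick smoke test (a minimal sketch of ours, not part of the official instructions), a tiny Keras model will force PlaidML to open the GPU; you should see the same INFO:plaidml:Opening device line that appears in the VGG-19 run below:
import numpy as np
import plaidml.keras
plaidml.keras.install_backend()  # must run before any keras import
from keras.models import Sequential
from keras.layers import Dense

# A trivial one-layer model; predicting forces PlaidML to open the device.
model = Sequential([Dense(4, input_shape=(8,))])
model.compile(optimizer='sgd', loss='mse')
print(model.predict(np.zeros((1, 8))))  # expect: INFO:plaidml:Opening device "gfx900.0"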
We will run VGG-19 according to the official example. Please save the following official code as vgg.py.
#!/usr/bin/env python
import numpy as np
import time

# Install the PlaidML backend before importing keras
import plaidml.keras
plaidml.keras.install_backend()

import keras
import keras.applications as kapp
from keras.datasets import cifar10

(x_train, y_train_cats), (x_test, y_test_cats) = cifar10.load_data()
batch_size = 8
x_train = x_train[:batch_size]
# Upscale the 32x32 CIFAR-10 images to the 224x224 input VGG-19 expects
x_train = np.repeat(np.repeat(x_train, 7, axis=1), 7, axis=2)
model = kapp.VGG19()
model.compile(optimizer='sgd', loss='categorical_crossentropy',
              metrics=['accuracy'])

print("Running initial batch (compiling tile program)")
y = model.predict(x=x_train, batch_size=batch_size)

# Now start the clock and run 10 batches
print("Timing inference...")
start = time.time()
for i in range(10):
    y = model.predict(x=x_train, batch_size=batch_size)
print("Ran in {} seconds".format(time.time() - start))
We run the script and confirm the output:
root@C-639ab3c2-c201-401e-9cc2-08dc90fef661-1:~/vgg# python vgg.py
/usr/local/lib/python2.7/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Downloading data from http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170491904/170498071 [============================>.] - ETA: 0s
INFO:plaidml:Opening device "gfx900.0"
Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg19_weights_tf_dim_ordering_tf_kernels.h5
574627840/574710816 [============================>.] - ETA: 0s
Running initial batch (compiling tile program)
INFO:plaidml:Analyzing Ops: 44 of 195 operations complete
INFO:plaidml:Analyzing Ops: 100 of 195 operations complete
INFO:plaidml:Analyzing Ops: 162 of 195 operations complete
Timing inference...
Ran in 0.758494853973 seconds
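For a rough sense of throughput, the timed loop ran 10 batches of 8 images:
# 10 batches x 8 images in 0.758 seconds
images = 10 * 8
seconds = 0.758494853973
print("{:.0f} images/sec".format(images / seconds))  # ~105 images/sec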
It turned out to be easier to install than expected, and it ran without any problems. Keras is a library commonly used in deep learning, and since GitHub also hosts a lot of recent Keras-based code, a cross-platform backend like PlaidML is very handy.
Great job, vertex.ai!
Are you interested in working with us?
We are actively looking for new members to help develop and improve the GPUEater cloud platform. For more information, please check here.
The world’s first AMD GPU-based Deep Learning Cloud.
GPU EATER https://gpueater.com
References
- HIP-TensorFlow https://github.com/ROCmSoftwarePlatform/hiptensorflow
- ROCm https://github.com/RadeonOpenCompute/ROCm
- MIOpen https://gpuopen.com/compute-product/miopen/
- vertex.ai official site http://vertex.ai/
- vertex.ai PlaidML http://vertex.ai/blog/announcing-plaidml
- PlaidML Github https://github.com/plaidml/plaidml