GPUEATER Developers

Posts

March 20, 2018

Benchmarks on MATRIX MULTIPLICATION | TitanV TensorCore (FP16=>FP32)

Introduction: This continues from the previous article. I’ll be writing about matrix-multiplication benchmarks, including for the TensorCore (FP16=>FP32) units first introduced with the Volta architecture. Matrix Multiplication with TensorCore: the NVIDIA TITAN V’s TensorCore (FP16=>FP32) against FP32 benchmarks. I used the same setup as before: Ubuntu 16.04 with Python 3.5, driver 390.30, CUDA 9.0, cuDNN 7, and TensorFlow 1.6. https://devblogs.nvidia.com/programming-tensor-cores-cuda-9/ *Strictly speaking, TensorCore (FP16) should be compared against FP16 rather than FP32; comparing against FP32, as NVIDIA’s official site does, exaggerates the speedup. Please note that the graph here nevertheless compares FP32 against TensorCore (FP16).
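As a rough illustration of what the FP16=>FP32 mode computes, here is a sketch (not the benchmark code from the article): it emulates FP16 inputs with FP32 accumulation in NumPy on the CPU, whereas the real benchmark runs on the TITAN V through TensorFlow. The matrix size is an assumption.

```python
import numpy as np

n = 256  # hypothetical size; the article's benchmark sweeps larger matrices
rng = np.random.default_rng(0)

a32 = rng.standard_normal((n, n)).astype(np.float32)
b32 = rng.standard_normal((n, n)).astype(np.float32)

# TensorCore-style GEMM: quantize the inputs to FP16 (what the
# TensorCore path is fed), then accumulate the products in FP32
# (what the FP16=>FP32 mode produces).
c_tc = a32.astype(np.float16).astype(np.float32) @ \
       b32.astype(np.float16).astype(np.float32)

# Reference: full FP32 GEMM on the unquantized inputs.
c_ref = a32 @ b32

# Quantizing the inputs to FP16 introduces a small relative error.
rel_err = np.abs(c_tc - c_ref).max() / np.abs(c_ref).max()
print(f"max relative error from FP16 inputs: {rel_err:.2e}")
```

This is why the FP16=>FP32 mode is usable for training at all: the accumulator stays in FP32, so only the input quantization loses precision.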
March 12, 2018

VGG-19 on Keras/PlaidML backend

PLAIDML, which is rumored to be faster than HIP-TENSORFLOW. Introduction: Hello! HIP-TensorFlow is a TensorFlow implementation that emulates CUDA via AMD’s HIP, but because it is still under development and based on an older version of TensorFlow, it is slower than the latest NVIDIA + TensorFlow stack for deep learning. Also, since it runs at the same speed on the RX 580 as on superior GPUs like the Vega 56 and Vega 64, it is still an immature library in that it cannot realize the potential of the Vega series.
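For reference, switching Keras onto the PlaidML backend only takes one environment variable set before Keras is first imported (a minimal sketch; it assumes the `plaidml-keras` package is installed and a device has been selected with `plaidml-setup`):

```python
import os

# Must be set BEFORE the first `import keras`, or Keras will fall
# back to its default backend.
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

# From here on, Keras ops run through PlaidML (OpenCL), e.g.:
#   import keras
#   from keras.applications.vgg19 import VGG19
#   model = VGG19(weights=None)  # the network benchmarked in this post
print(os.environ["KERAS_BACKEND"])
```

Because PlaidML sits behind the standard Keras backend interface, a VGG-19 benchmark script needs no other changes to move between TensorFlow and PlaidML.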
March 7, 2018

Benchmarks on MATRIX MULTIPLICATION | A comparison between AMD Vega and NVIDIA GeForce series

Introduction: ACUBE Corp. graciously allowed us to borrow a Radeon Pro WX 9100, so we have decided to report on the card and record the results here on our company blog. We would like to extend our heartfelt gratitude to ACUBE Corp. for this opportunity. This report focuses on the Radeon Pro WX 9100 and compares it with the Radeon RX 560/580 and Radeon Vega 56/64/Frontier Edition from the same manufacturer, as well as with the GeForce series from NVIDIA.
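The numbers in reports like this come from timed matrix multiplications. A minimal CPU sketch of that kind of measurement (NumPy stands in for the GPU libraries actually benchmarked, and the matrix size and repeat count are assumptions):

```python
import time
import numpy as np

def gemm_gflops(n: int, repeats: int = 5) -> float:
    """Time an n x n single-precision matmul and return GFLOP/s."""
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n)).astype(np.float32)
    b = rng.standard_normal((n, n)).astype(np.float32)
    a @ b  # warm-up, so one-time setup cost isn't measured
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    elapsed = time.perf_counter() - start
    flops = 2.0 * n**3 * repeats  # one multiply + one add per inner step
    return flops / elapsed / 1e9

print(f"{gemm_gflops(512):.1f} GFLOP/s")
```

On a GPU the same idea applies, except the timing loop must also synchronize the device before reading the clock, or the launch queue makes the GEMM look faster than it is.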
March 1, 2018

Wednesday, March 7, 2018, The World's First AMD GPU-based Cloud Instances for Deep Learning

IRVINE, CALIF. (PRWEB) MARCH 06, 2018 California startup Pegara, Inc. launched the world’s first set of deep learning instances based on GPUs from American chip maker AMD through its “GPU EATER” heterogeneous cloud computing service. Although much of today’s deep learning research is conducted using GPUs (graphics processing units) from NVIDIA, in conjunction with libraries it provides, such as CUDA and cuDNN, the revision of the company’s EULA (End User License Agreement) content for its consumer graphics drivers around December 2017 has raised strong voices of concern among researchers and developers at domestic and overseas universities and enterprises about potential termination of research projects and delays in their practical application.
© 2024 GPUEATER Developers