AI gives the FPGA a new lease of life: where CPU+GPU falls short for machine learning, Microsoft and Baidu are turning to FPGAs

At this year's Hot Chips 2017 conference, Microsoft unveiled Brainwave, an FPGA-based, ultra-low-latency computing platform for accelerating deep learning in the cloud. The platform uses Intel's Stratix 10 FPGA; in Microsoft's tests, Brainwave sustained 39.5 teraflops on a large GRU model without any batching.
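For context, one step of a GRU (gated recurrent unit) is dominated by six matrix products, which is why it is a natural benchmark for a matrix-oriented accelerator. Below is a minimal NumPy sketch of the standard GRU equations; the dimensions and weights are illustrative placeholders, not Microsoft's model, and biases are omitted for brevity:

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step (standard formulation); biases omitted for brevity."""
    z = 1 / (1 + np.exp(-(Wz @ x + Uz @ h)))   # update gate
    r = 1 / (1 + np.exp(-(Wr @ x + Ur @ h)))   # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde           # new hidden state

# At batch size 1 the six products above are matrix-vector products:
# little data reuse, so a GPU spends its time streaming weights from
# memory, while an FPGA can pin the weights on-chip.
d_in, d_h = 256, 256  # hypothetical sizes for illustration
rng = np.random.default_rng(0)
W = [rng.standard_normal((d_h, d_in)) * 0.01 for _ in range(3)]
U = [rng.standard_normal((d_h, d_h)) * 0.01 for _ in range(3)]
h = np.zeros(d_h)
x = rng.standard_normal(d_in)
h = gru_step(x, h, W[0], U[0], W[1], U[1], W[2], U[2])
print(h.shape)
```

Each step costs roughly 6 · d_in · d_h multiply-adds, so hitting tens of teraflops at batch size 1 means keeping thousands of multipliers fed every cycle.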
Microsoft integrates its DNN processing unit (DPU) into the FPGA, hoping that continued research into deep learning networks will let its cloud service infrastructure adapt more quickly and meet near-real-time processing requirements.
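The "no batching" point matters for near-real-time serving because batching is how GPUs amortize weight loads: early requests sit in a queue waiting for the batch to fill. A toy model of that trade-off, with all numbers hypothetical:

```python
def batched_latency_ms(batch, compute_ms_per_batch, arrival_gap_ms):
    """Worst-case latency of the first request in a batch: it waits for
    (batch - 1) further arrivals, then the whole batch runs together."""
    return (batch - 1) * arrival_gap_ms + compute_ms_per_batch

# Hypothetical numbers: requests arrive 2 ms apart; a batch takes 5 ms
# to compute regardless of size (weight-load dominated).
for b in (1, 8, 32):
    print(b, batched_latency_ms(b, compute_ms_per_batch=5, arrival_gap_ms=2))
```

Under these made-up figures, batch size 1 answers in 5 ms while batch 32 makes its first request wait 67 ms; an accelerator that is efficient at batch 1 avoids that queueing delay entirely.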
Also at Hot Chips, Baidu released XPU, a 256-core, FPGA-based cloud computing accelerator built in partnership with the well-known FPGA manufacturer Xilinx. XPU aims to strike a balance between performance and efficiency in order to handle diverse computing tasks, and the FPGA itself excels at certain kinds of computation.
Baidu researcher Jay Goulding said: "FPGAs are efficient and can focus on specific computing tasks. Traditional CPUs are good at general-purpose computing, especially rule-based tasks, and are highly flexible. GPUs target parallel computing and thus deliver very strong performance. XPU focuses on diverse, compute-intensive, rule-based computing tasks, pursuing higher efficiency and performance while retaining CPU-like flexibility."
The wave of artificial intelligence over the past two years has brought the once-unfashionable FPGA back to life. In fact, the big technology firms have long been aware of the FPGA's advantages in AI and have been positioning themselves accordingly.
As early as 2015, Microsoft proposed using FPGAs to accelerate the Bing search engine, putting CPU+FPGA hybrid machines into production to speed up Bing's page ranking. Last year, Amazon AWS introduced the FPGA-based cloud instance EC2 F1, and Baidu, Aliyun, Tencent Cloud and other giants have likewise launched FPGA cloud services.
What brought the FPGA industry-wide attention was Intel's largest-ever acquisition in 2015: the $16.7 billion purchase of Altera, a major FPGA maker. Intel estimates that CPU+FPGA heterogeneous computing will account for a third of the data center market by 2020.
Compared with traditional CPUs and GPUs, the FPGA's architecture, with no instruction stream and no shared memory, can achieve higher computational efficiency, and it performs quite well on compute-intensive tasks such as matrix operations, image processing, machine learning, compression, and Bing search ranking. The FPGA also has lower power consumption: the FPGA version of Baidu Brain, deployed in Baidu's online services, delivers more than ten times the performance of the Tianhe-2 supercomputer at the same power consumption.
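Claims like the one above are usually stated as performance per watt. A back-of-the-envelope sketch of that yardstick, with deliberately made-up round numbers (the article itself gives only the "ten times at equal power" ratio, not absolute figures):

```python
# Illustrative performance-per-watt arithmetic; all figures below are
# hypothetical placeholders, not measured numbers for any real device.
def perf_per_watt(teraflops, watts):
    """Throughput per unit power, in TFLOPS/W."""
    return teraflops / watts

fpga = perf_per_watt(teraflops=1.0, watts=25)    # assumed FPGA card
gpu = perf_per_watt(teraflops=5.0, watts=250)    # assumed GPU card
print(f"FPGA {fpga:.3f} TFLOPS/W vs GPU {gpu:.3f} TFLOPS/W "
      f"-> {fpga / gpu:.1f}x advantage")
```

The point of the metric: a device can have far lower raw throughput yet still win on efficiency, which is what matters when power and cooling dominate data center costs.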
Moore's law no longer matches the current pace of technological development, while machine learning and web services are growing exponentially in scale. This rapid growth of technology and industry demands processors that can be reprogrammed to adapt to new kinds of computing tasks. The FPGA is exactly such a reconfigurable architecture, which is why the technology giants value it so highly.