Global Technical Service E-mail: [email protected] | Europe Technical Service E-mail: [email protected]

Leading AI Total Solutions Provider



Inspur provides the world's leading GPU/MIC/FPGA AI computing platforms, comprehensive AI system software, powerful AI application optimization capabilities, and end-to-end vertical solutions.

What's new

Innovation in computing drives strong momentum for AI development. Inspur is committed to providing our users with exclusive AI computing platforms, and we are the only manufacturer able to provide a complete set of turnkey AI solutions. Inspur is an important partner and supplier of world-leading CSPs such as Baidu, Tencent, and Alibaba, and Inspur's AI platforms are widely adopted in autonomous driving, image recognition, and voice recognition.


In addition to large volumes of labeled data samples and well-designed deep learning models and algorithms, a high-performance system platform is crucial to the success of deep learning. Deep learning involves two stages: offline training and online identification.

For offline training, a high-performance cluster architecture can be adopted that combines GPU/KNM accelerators, an IB/10GE/25GE high-speed network, and distributed parallel storage.

Because the number of training samples keeps growing, high-performance parallel storage with large capacity and high bandwidth is required to store and quickly access sample data, such as hundred-megapixel images or 100,000 hours of voice recordings, amounting to PB-level data volumes.

The long training period requires not only GPU acceleration but also parallel processing across large-scale cluster systems.

For some models, the number of parameters reaches the billions, which demands a network with high bandwidth and low latency to ensure fast parameter updates between nodes and the convergence of the models.
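A back-of-the-envelope calculation illustrates why billion-parameter models demand a high-bandwidth fabric. The sketch below is illustrative only: the parameter count, bytes per gradient, link speed, and the factor-of-two all-reduce traffic estimate are assumptions, not measurements of any specific Inspur system.

```python
def sync_time_per_step(params, bytes_per_param=4.0, link_gbps=100.0):
    """Estimate seconds spent exchanging gradients in one training step.

    Assumes FP32 gradients and a naive all-reduce that moves roughly
    twice the gradient volume per node over a link of `link_gbps`
    gigabits per second (e.g. EDR InfiniBand). Real collectives overlap
    communication with compute, so this is an upper-bound sketch.
    """
    gradient_bytes = params * bytes_per_param      # FP32: 4 bytes/parameter
    traffic_bytes = 2 * gradient_bytes             # ring all-reduce volume
    link_bytes_per_s = link_gbps * 1e9 / 8         # Gb/s -> bytes/s
    return traffic_bytes / link_bytes_per_s

# A 1-billion-parameter model over a 100 Gb/s fabric:
print(sync_time_per_step(1_000_000_000))  # roughly 0.64 s of pure communication
```

At tens of steps per second, even this idealized figure shows why a 10GE link (an eighth of the bandwidth) would leave the GPUs idle waiting on the network.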

As for online identification, thousands of nodes are needed to provision external services, which poses a serious power-consumption challenge.

Building the online identification platform on a low-power FPGA architecture can help solve this problem.

Inspur Deep Learning System Platform Architecture


Inspur has created a holistic system solution, illustrated below, with a focus on deep learning: high-performance parallel storage is connected to computational acceleration nodes through a high-speed network to provide data services.

Computational acceleration nodes used for offline training adopt high-power GPUs with strong single-precision floating-point performance, or KNM accelerator cards where available.

Computational acceleration nodes used for online identification, on the other hand, adopt low-power GPUs with strong INT8 performance, or low-power FPGAs customized with recognition logic.
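The INT8 capacity mentioned above matters because inference can tolerate reduced numeric precision. The following is a minimal sketch of symmetric INT8 quantization, the kind of precision reduction that lets low-power hardware trade FP32 throughput for much cheaper INT8 arithmetic; the scheme and values are illustrative assumptions, not Inspur's actual toolchain.

```python
def quantize_int8(weights):
    """Map FP32 weights onto [-127, 127] with a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate FP32 values; the difference is quantization noise."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.003, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

Each weight is now a single byte instead of four, and the matrix multiplies that dominate inference run in integer units, which is where the power savings come from.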

The compute nodes run deep learning frameworks such as TensorFlow, Caffe, and CNTK, while the AIStation management platform provides task management, a login interface, parameter tuning, and other services.

AIStation also performs state monitoring and scheduling for nodes and computational acceleration components.

As a whole, the platform supports upper-layer artificial intelligence applications.


In the future, the offline training and online identification stages of deep learning will be integrated: online data will feed directly into offline training, and models trained offline will be used to update the models serving online.

One likely path toward this online-offline unification is a high-performance, low-power system architecture that combines GPU and FPGA accelerators with an IB high-speed network and distributed parallel storage.



Resource Encapsulation of Deep Learning Framework


Current open-source deep learning frameworks depend heavily on third-party libraries and are sensitive to their versions, which makes framework deployment and AI application development cumbersome, especially when versions iterate quickly. Frequent updates of the operating system and third-party libraries create a great deal of unnecessary extra work for developers.
Inspur encapsulates a deep learning framework together with the libraries it depends on into a single image, which can then be loaded at any time on any Inspur platform that supports resource encapsulation. Because the working environment is completely consistent with the original one, users can start working immediately, which effectively improves productivity.

With support for distributed mapped storage within the image, along with image storage, scheduling, management, and monitoring, Inspur's deep learning system solutions use resource encapsulation to improve both the efficiency of framework deployment and the productivity of application development, while offering optimized integration of resource encapsulation technology with the overall system solution.
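The version drift that encapsulation eliminates can be made concrete with a small sketch. The helper below, `environment_manifest`, is hypothetical and assumes nothing about Inspur's actual tooling; it simply records the exact interpreter and library versions that an image build would pin, which is the information that otherwise drifts between a developer's machine and the deployment platform.

```python
import sys
import importlib.metadata

def environment_manifest(packages):
    """Record the interpreter and library versions an image would pin.

    Packages missing from the current environment are recorded as None,
    which is exactly the kind of drift a pre-built framework image is
    meant to eliminate.
    """
    manifest = {"python": sys.version.split()[0]}
    for name in packages:
        try:
            manifest[name] = importlib.metadata.version(name)
        except importlib.metadata.PackageNotFoundError:
            manifest[name] = None
    return manifest

# Hypothetical dependency list for a framework image build.
print(environment_manifest(["tensorflow", "numpy"]))
```

Baking such a manifest into an image means every load of the image reproduces the same answers, so "works on my machine" and "works on the cluster" become the same statement.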



End-to-End System Delivery Service


Inspur deep learning system solutions provide not only a comprehensive set of hardware, but also end-to-end delivery services for system solutions.

●Consultation on application scenarios and design of system solutions

Inspur AI solution experts work with AI end users to discuss deep learning application scenarios, jointly analyze computational hotspots and bottlenecks, and design system solutions suited to those scenarios.

●Porting and optimization of application code

Inspur's heterogeneous computing experts help clients analyze the characteristics of their CPU code, determine whether migration to heterogeneous acceleration components is appropriate, and collaborate on porting and optimizing code hotspots to improve application efficiency and reduce runtime.

●Holistic solutions that integrate software and hardware

Inspur offers not only a comprehensive server product line, high-speed networking, and parallel storage products for deep learning, but also the AIStation management platform and the Teye characteristic-analysis tool. These holistic solutions, integrating software and hardware, bring the performance of deep learning frameworks into full play.

●Horizontal performance assessment of computational acceleration components

Inspur's well-developed horizontal assessments of GPUs, FPGAs, KNM, and other mainstream heterogeneous acceleration components inform the choice of solution.

●Implementation and deployment of mainstream deep learning frameworks

For mainstream deep learning frameworks such as Caffe, TensorFlow, and CNTK, Inspur encapsulates the framework code and the files of its required third-party libraries into images that can be quickly deployed to platforms. Deployment is easy to carry out and to learn, with no need to master complex procedures.






Copyright © 2018 Inspur. All Rights Reserved.
