Leading AI Total Solutions Provider

Inspur provides the world's leading AI total solutions: cutting-edge hardware, comprehensive AI system software, and powerful AI application optimization capabilities.

Solutions

Inspur provides the world's leading GPU/MIC/FPGA AI computing platforms, comprehensive AI system software, powerful AI application optimization capabilities, and end-to-end vertical solutions.

What's new

Innovations in computing drive strong momentum for AI development. Inspur is committed to providing our users with exclusive AI computing platforms, and is the only manufacturer able to provide a complete set of turnkey solutions for AI. Inspur is an important partner and supplier of world-leading CSPs such as Baidu, Tencent, and Alibaba. Inspur's AI platforms are widely adopted for autonomous driving, image recognition, and voice recognition.

Features

In addition to large volumes of labeled data samples and well-designed deep learning models and algorithms, a high-performance system platform is crucial to the success of deep learning. Deep learning involves two stages: offline training and online identification.

For offline training, a high-performance cluster architecture can be adopted that combines GPU or KNM accelerators, an InfiniBand/10GE/25GE high-speed network, and distributed parallel storage.

Because the number of training samples keeps growing, high-performance parallel storage with large capacity and high bandwidth is required to store and quickly access the sample data, such as image sets at the hundred-million scale or 100,000 hours of voice recordings, amounting to PB-level data volumes.
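
As an illustration only (the file layout, paths, and use of TensorFlow's tf.data API are assumptions, not details from this document), a training job might stream such a sample set from parallel storage rather than loading it into memory:

    import tensorflow as tf

    # Hypothetical location on the parallel file system; the sample data is
    # assumed to be sharded into many TFRecord files so reads can run in parallel.
    FILE_PATTERN = "/mnt/parallel_fs/imagenet/train-*.tfrecord"

    files = tf.data.Dataset.list_files(FILE_PATTERN, shuffle=True)
    dataset = (
        files.interleave(                        # read many shards concurrently
            tf.data.TFRecordDataset,
            cycle_length=16,
            num_parallel_calls=tf.data.AUTOTUNE)
        .shuffle(buffer_size=10_000)
        .batch(256)
        .prefetch(tf.data.AUTOTUNE)              # overlap storage I/O with GPU compute
    )

High read bandwidth from the storage layer is what keeps such a pipeline, and therefore the accelerators, busy.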

The long training period requires not only GPU acceleration but also parallel processing across large-scale cluster systems.

For some models, the number of parameters reaches the billion level, requiring a network with high bandwidth and low latency to ensure fast parameter updates between nodes and convergence of the models.
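
One common source of this node-to-node parameter traffic is synchronous data-parallel training, in which gradients are all-reduced across workers at every step, so traffic per step grows with the parameter count. Below is a minimal sketch using TensorFlow's MultiWorkerMirroredStrategy; TensorFlow is mentioned later in this document, but the cluster addresses, model, and settings here are assumptions for illustration:

    import json, os
    import tensorflow as tf

    # Hypothetical cluster description; in practice a scheduler fills this in.
    os.environ["TF_CONFIG"] = json.dumps({
        "cluster": {"worker": ["node1:12345", "node2:12345",
                               "node3:12345", "node4:12345"]},
        "task": {"type": "worker", "index": 0},
    })

    # Gradients are synchronized (all-reduced) across workers after every step,
    # so parameter updates traverse the cluster network each iteration.
    strategy = tf.distribute.MultiWorkerMirroredStrategy()

    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(4096, activation="relu", input_shape=(2048,)),
            tf.keras.layers.Dense(1000, activation="softmax"),
        ])
        model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")

    # model.fit(dataset, epochs=10)  # requires the input pipeline and all workers running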

For online identification, thousands of nodes are needed to provision external services, which poses a serious power consumption challenge.

Using a low-power FPGA-based architecture to build the online identification platform can help solve this problem.

Inspur Deep Learning System Platform Architecture

Inspur has created a holistic system solution focused on deep learning, which connects high-performance parallel storage to computational acceleration nodes through a high-speed network and provides data services.

Computational acceleration nodes used for offline training adopt high-power GPUs with strong single-precision floating-point performance, or KNM acceleration cards where available.

Computational acceleration nodes used for online identification, on the other hand, adopt low-power GPUs with strong INT8 performance or low-power FPGAs customized with recognition programs.
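
This document does not specify the toolchain used to produce INT8 models for those nodes; as a general illustration of the idea, post-training quantization with TensorFlow Lite converts a trained FP32 model into an INT8 model calibrated on representative inputs (model paths and input shapes below are assumptions):

    import tensorflow as tf

    def representative_data_gen():
        # Small calibration set standing in for real inputs; the shape is assumed.
        for _ in range(100):
            yield [tf.random.normal([1, 224, 224, 3])]

    converter = tf.lite.TFLiteConverter.from_saved_model("/models/resnet50_fp32")  # assumed path
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data_gen
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    with open("/models/resnet50_int8.tflite", "wb") as f:
        f.write(converter.convert())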

Deep learning frameworks such as TensorFlow, Caffe, and CNTK run on the computation nodes, while the AIStation management platform provides task management, a login interface, parameter tuning, and other services.

AIStation also performs state monitoring and scheduling for nodes and computational acceleration components.

Together, this platform provides support for upper-layer artificial intelligence applications.

In the future, the offline training and online identification stages of deep learning will be integrated: data collected online will feed directly into offline training, and models trained offline will be used to update the models serving online.

One likely path toward this online-offline unification for deep learning is a high-performance, low-power system architecture that combines GPUs, FPGAs, an InfiniBand high-speed network, and distributed parallel storage.

Resource Encapsulation of Deep Learning Frameworks

Current open-source deep learning frameworks depend heavily on specific versions of third-party libraries, which makes framework deployment and AI application development difficult, especially when versions iterate quickly. Frequent updates of the operating system and third-party libraries create a great deal of unnecessary extra work for developers.
Inspur encapsulates a deep learning framework and the libraries it depends on into a single image, which can then be loaded at any time on any Inspur platform that supports resource encapsulation. Because the working environment is completely consistent with the original environment, users can start working immediately and productivity improves noticeably.

The images support distributed mapped storage, and the platform handles image storage, scheduling, management, and monitoring. Inspur deep learning system solutions use this resource encapsulation technology to speed up framework deployment and application development, and the technology is integrated and optimized together with the rest of the system solution.
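
The document does not name a particular container technology, so the sketch below uses the Docker Python SDK purely to illustrate the workflow of loading an encapsulated framework image and running a job in a consistent environment; the image name and mount paths are assumptions:

    import docker

    client = docker.from_env()

    # The encapsulated image is assumed to bundle the framework and all of its
    # third-party libraries, so the job sees the same environment on every node.
    container = client.containers.run(
        "tensorflow/tensorflow:2.15.0-gpu",
        command="python /workspace/train.py",
        volumes={
            "/mnt/parallel_fs/datasets":  {"bind": "/data", "mode": "ro"},
            "/mnt/parallel_fs/workspace": {"bind": "/workspace", "mode": "rw"},
        },
        device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
        detach=True,
    )
    print(container.logs().decode())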

End-to-End System Delivery Service

Inspur deep learning system solutions provide not only a comprehensive set of hardware, but also end-to-end delivery services for the complete system solution.

● Consultation on application scenarios and design of system solutions

Inspur AI solution experts and AI end users discuss deep learning application scenarios and jointly analyze computation hotspots and bottlenecks in order to design system solutions suited to those scenarios.

● Porting and optimization of application code

Inspur heterogeneous computing experts help clients analyze the characteristics of their CPU code, determine whether migration to heterogeneous acceleration components is appropriate, and collaborate on porting and optimizing code hotspots to improve application efficiency and reduce run time; a minimal sketch of moving one such hotspot to a GPU follows this list.

● Holistic solutions that integrate software and hardware

Inspur offers not only a comprehensive server product line, high-speed networking, and parallel storage products for deep learning, but also the AIStation management platform and the Teye feature analysis tool. Holistic solutions that integrate software and hardware bring the performance of deep learning frameworks into full play.

● Horizontal assessment of the performance of computation acceleration components

Inspur's well-developed horizontal benchmarking of GPU, FPGA, KNM, and other mainstream heterogeneous acceleration components informs the choice of solution.

● Implementation and deployment of mainstream deep learning frameworks

Inspur focuses on mainstream deep learning frameworks such as Caffe, TensorFlow, and CNTK, encapsulating the framework code and the required third-party libraries into images that can be quickly deployed to the platform. The frameworks are easy to adopt and learn, without users having to master complex deployment procedures.
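
Relating back to the code porting service above, the following minimal sketch shows the kind of hotspot migration involved: a NumPy computation moved from CPU to GPU with CuPy. CuPy is not named in this document and is used here only as an illustrative stand-in for a heterogeneous acceleration target:

    import numpy as np
    import cupy as cp  # GPU array library with a NumPy-like interface

    def pairwise_distances_cpu(x):
        # CPU hotspot: O(n^2 * d) pairwise distance computation with NumPy
        diff = x[:, None, :] - x[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1))

    def pairwise_distances_gpu(x_np):
        # Same computation offloaded to the GPU by moving the array to device memory
        x = cp.asarray(x_np)
        diff = x[:, None, :] - x[None, :, :]
        return cp.asnumpy(cp.sqrt((diff ** 2).sum(axis=-1)))  # copy result back to host

    x = np.random.rand(1024, 128).astype(np.float32)
    d_cpu = pairwise_distances_cpu(x)
    d_gpu = pairwise_distances_gpu(x)
    assert np.allclose(d_cpu, d_gpu, atol=1e-4)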
