Intel® Server System S9200WK Product Family
71% of IT organizations cite legacy infrastructure as the biggest barrier to business transformation.
Organizations that modernize report less revenue loss from unplanned downtime.
Companies that modernized their IT infrastructure enjoy a 6x faster rate of product innovation and time to market.
After committing to IT modernizations, organizations reported an increase of 70% in new customer acquisition.
End of Support. Extended support ends on July 9, 2019 for SQL Server 2008 and 2008 R2 and
January 14, 2020 for Windows Server 2008 and 2008 R2.
The Case for IT Modernization. The relative performance of an IT asset declines by 22% in year
three, by 33% in year four, and 59% in year seven. (IDC)
Modernize IT and Unlock Your Data’s Power
Modern software runs best on modern hardware. Transform your data centers ahead of the coming end of support for SQL Server 2008 and SQL Server 2008 R2. Get ready for the future with modern software from Microsoft, optimized for Intel® Xeon® Scalable processor-based platforms.
Introducing the High Performance Computing (HPC) and Artificial Intelligence (AI) Intel® Server System S9200WK Product Family Featuring Intel® Xeon® Platinum 9200 Processors.
Intel® Server System S9200WK
The Intel® Server System S9200WK product family is a purpose-built, performance-optimized data center block ideal for use in High Performance Computing (HPC) and Artificial Intelligence (AI) applications. Designed for Intel® Xeon® Platinum 9200 series processors, with up to 24 DDR4 DIMM slots per compute module, the S9200WK family maximizes processor and memory bandwidth to provide leadership performance for the most demanding compute requirements.
Today, companies around the world are deploying HPC and AI applications to disrupt entire industries. But integrating, validating and deploying the powerful infrastructure required by those workloads is complex and time consuming for IT. Intel Server System S9200WK enables you to resolve those challenges and quickly expand your HPC and AI capabilities. These highly-integrated, pre-validated, workload-optimized data center blocks offer an unprecedented combination of density and performance; greatly simplifying and accelerating HPC and AI infrastructure deployment to help you innovate and grow faster.
Download the Product Brief to learn more about how the Intel Server System S9200WK Product Family is the ultimate density-optimized system solution for today’s broadest variety of HPC and AI applications and use cases.
Unleash Data-Centric Innovation
Data is everywhere, creating unprecedented opportunities for business transformation. But businesses cannot compete on old infrastructure. Now is the time to modernize IT, updating both software and hardware to eliminate bottlenecks and unleash performance, efficiency, and security. Transform and deploy faster with optimized Intel® Select Solutions built on 2nd Gen Intel® Xeon® Scalable processors.
Need More Convincing?
As noted above, 71% of IT organizations cite legacy infrastructure as their biggest barrier to business transformation. While updating your software is a critical first step, there are a multitude of reasons why your infrastructure needs to be current as well. This infographic explains why.
“4 Reasons to Modernize Your IT Infrastructure”
Why Advanced HPC
Advanced HPC customers get the best of both worlds – a partner that understands their industry needs and one that has the proven capabilities to develop, deploy and support the right HPC or AI solution unique to those needs. That’s why the likes of Stanford University, Qualcomm, NASA, Memorial Sloan Kettering Cancer Center and truck automation start-up TuSimple turn to Advanced HPC, again and again.
The Intel® Server System S9200WK product family is ideal for your HPC and AI initiatives. Advanced HPC is equally ideal to transform those initiatives into reality by providing:
- HPC and AI workload integration.
- Best-in-class building blocks.
- Access to the “latest and greatest” technology.
- “White Glove” support.
- Best flops-per-dollar/capacity in the industry.
- Thoroughly tested and certified solutions before shipping to customers.
- Reliability with the track record to prove it.
- Responsive sales and service teams at every step.
- Each system is qualified by Advanced HPC engineers to ensure compatibility.
- Lifetime technical support.
- Single point of contact.
“Intel boosts class-leading speed with Optane SSD 905P drives” (ZDNet, 5-6-18)
“ . . . New Optane 905P is the Fastest SSD Ever” (Tom’s Hardware, 5-2-18)
“ . . . Optane 905P is a Ludicrously Fast SSD” (Tech Radar, 5-2-18)
“ . . . Blazing Fast Optane 905P SSDs” (Tech Spot, 5-3-18)
“. . . New Fastest SSDs in the World” (Game Debate, 5-3-18)
Intel® Optane™ SSD 905P delivers breakthrough performance to meet today’s most demanding storage workloads. (And did we mention it’s really fast?)
The Intel® Optane™ SSD 905P is designed for the most demanding storage workloads, delivering high random read/write performance coupled with ultra-low latency and industry-leading endurance. Built with Intel Optane technology, a revolutionary class of non-volatile memory, the Intel Optane SSD 905P is empowering everyone from enterprise users to university researchers to extract greater platform performance.
While the SSD 905P comes in U.2 or HHHL (half-height half-length) expansion card form factors like its workhorse predecessor in the 900 family, it doubles the storage capacity to 480GB and 960GB, respectively. Plus, the SSD 905P provides an industry-leading 10 DWPD, making it the highest endurance SSD in the market today.
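The 10 DWPD (drive writes per day) rating above can be turned into total-bytes-written arithmetic. The sketch below assumes a 5-year warranty period, which is an assumption for illustration and not stated in this article:

```python
# Endurance math for the 960GB drive at its rated 10 DWPD.
# The 5-year warranty window is an assumed figure for illustration.
capacity_gb = 960
dwpd = 10
warranty_days = 5 * 365  # assumed warranty length

total_writes_gb = capacity_gb * dwpd * warranty_days
total_writes_tb = total_writes_gb / 1000
print(f"Total writes over warranty: {total_writes_tb:,.0f} TB")  # 17,520 TB
```

In other words, the rating allows roughly 17.5 petabytes of writes over such a period, which is why frequent drive replacement stops being a concern.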
Professionals with the most demanding storage workloads and largest data sets can now tackle even bigger projects with no worry – and no angst about the hassle of frequent drive replacements.
The performance, resilience and responsiveness of the SSD 905P means the processor can spend less time waiting and more time computing, resulting in greatly increased efficiency.
905P Notable Features:
- Provides random storage performance of up to 575K/550K IOPS (4K random reads/writes).
- High throughput at low queue depth.
- 3D XPoint™ memory media.
- 960GB drive available in the add-in card (AIC) form factor.
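The headline random-read figure above can be converted into an equivalent bandwidth with simple arithmetic, assuming each operation moves a full 4 KiB block:

```python
# Converting the 4K random-read IOPS spec into bandwidth.
read_iops = 575_000
block_bytes = 4 * 1024  # 4 KiB per random read

bandwidth_bytes = read_iops * block_bytes
print(f"{bandwidth_bytes / 1e9:.2f} GB/s of 4K random reads")  # 2.36 GB/s
```

That is random-access throughput approaching what many drives only reach on sequential workloads, which is the practical meaning of the "fastest SSD" headlines quoted above.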
Download the Intel® Optane™ SSD 905P Product Brief below for complete feature information.
Intel® Optane™ SSD 905P Series (960GB)
- 960 GB Capacity
- HHHL (CEM3.0) Form Factor
- PCIe NVMe 3.0 x4 Interface
Intel® Optane™ SSD 905P Series (480GB)
- 480 GB Capacity
- U.2 15mm Form Factor
- PCIe NVMe 3.0 x4 Interface
Intel® SSD DC S4600 and DC S4500 Series Meets Growing Data Center Needs with Inventive Design
Now available in the revolutionary “ruler” form factor
The Intel® SSD DC P4500 Series—a member of the Intel 3D NAND SSD family—delivers an all-new design to support cloud storage and software-defined infrastructures. The Intel SSD DC P4500 Series combines high performance, capacity, manageability, and reliability to help data centers fast-track their business. To meet data centers’ exacting needs for growing capacity, easy serviceability, and thermal efficiency, the DC P4500 is now available in the revolutionary “ruler” form factor.
The “ruler” form factor, so-called for its long, ruler-like shape, shifts storage from the legacy 2.5-inch and 3.5-inch form factors that follow traditional hard disk drives – and the add-in card form factor, which takes advantage of PCIe card slots – and delivers on the promise of non-volatile storage technologies to eliminate constraints on shape and size. The form factor delivers the most storage capacity for a server, with the lowest required cooling and power needs. Next-generation “ruler” form factor SSDs using Intel® 3D NAND technology enable up to 1PB in a 1U server – enough storage for 300,000 HD movies, or about 70 years of nonstop entertainment.
The Intel SSD DC S4500 and S4600 Series combine a new Intel-developed SATA controller, innovative SATA firmware and the industry’s highest density 32-layer 3D NAND. These storage-inspired SSDs preserve legacy infrastructure, ensuring a simple transition from hard disk drives to SSDs, while enabling data centers to reduce storage cost, increase server efficiency and minimize service disruptions.
Intel: World-Class AI Systems and Solutions
Early adopters of artificial intelligence (AI) across industries and organizations are uncovering significant breakthroughs based on deep information within data. AI is paving the way to solve highly complex medical challenges, advance scientific research, and better predict events and human behavior. Intel has the industry’s most comprehensive AI building-block ecosystem – from hardware infrastructure and interconnect technology to far-ranging AI platforms, development software and an unparalleled global partner network; uncommon assets that together deliver increasingly extensive capabilities and genuinely inventive approaches for today’s AI applications and tomorrow’s even more complex AI ambitions.
Intel AI Portfolio
- Intel® Xeon® Scalable Processors
- Intel® Nervana™ Neural Network Processor (NNP)
- Intel® FPGA
- Intel® Movidius™ Myriad™ VPU
- Intel® Saffron™ AI Solutions
Intel® Xeon® Scalable Processors
Powerful silicon to handle the broadest range of AI workloads, including deep learning.
- Synergy among compute, network, and storage is built in.
- Intel® Xeon® Scalable processors optimize interconnectivity with a focus on speed without compromising data security.
Optimize Performance. New features such as Intel® Advanced Vector Extensions 512 (Intel® AVX-512) deliver workload-optimized performance and throughput increases for advanced analytics, high performance computing (HPC) applications, and data compression.
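The throughput gain from wider vectors is easy to see with lane arithmetic. The sketch below is an illustrative back-of-envelope calculation, not Intel's benchmarking methodology; per-core FMA unit counts vary by processor SKU:

```python
def simd_lanes(register_bits: int, element_bits: int) -> int:
    """Number of data elements processed by one vector instruction."""
    return register_bits // element_bits

# AVX-512 registers are 512 bits wide, vs. 256 bits for AVX2.
fp32_avx512 = simd_lanes(512, 32)  # 16 single-precision floats at once
fp32_avx2 = simd_lanes(256, 32)    # 8 single-precision floats at once

# A fused multiply-add counts as 2 floating-point operations per lane,
# so one AVX-512 FMA unit can sustain 16 * 2 = 32 fp32 flops per cycle.
print(fp32_avx512, fp32_avx2, fp32_avx512 * 2)
```

Doubling the lanes per instruction is where the per-clock throughput advantage for vectorizable analytics and compression kernels comes from.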
Accelerate Critical Workloads. Speed up data compression and cryptography with integrated Intel® QuickAssist Technology (Intel® QAT).
Operate More Efficiently. High-speed Integrated Intel Ethernet (up to 4x10GbE) helps reduce total system cost. It also lowers power consumption and improves transfer latency of large storage blocks and virtual machine migration.
Improve Security. Deploy hardware-enhanced security to protect data and system operations without compromising performance.
Intel® Nervana™ Neural Network Processor (NNP)
A New Class of Hardware that is AI by Design
A new architecture built from the ground up for neural networks, the Intel® Nervana™ Neural Network Processor (NNP) provides the needed flexibility to support all deep learning primitives while making core hardware components as efficient as possible. The NNP is designed to eliminate the limitations imposed by existing hardware not explicitly designed for AI.
To solve a large data problem with a neural network, it is necessary for the designer to iterate quickly on different possible neural networks using large data sets. There are multiple important factors to achieve this, including:
- maximizing compute utilization
- easily scaling to more compute nodes
- doing so with as little power as possible
The NNP architecture provides innovative solutions to these problems and will give neural network designers powerful tools for solving larger and more difficult problems.
Some of the many AI-acceleration benefits provided by the Intel Nervana NNP include:
- Blazingly-fast Data Access. The Intel Nervana NNP leverages high-capacity, high-speed High Bandwidth Memory to provide the maximum level of on-chip storage and blazingly-fast memory access. It utilizes separate pipelines for computation and data management, making new data available faster.
- High Degree of Numerical Parallelism. To achieve higher degrees of throughput for neural network workloads, Intel created Flexpoint, a new numerical data format for the Intel Nervana NNP that delivers higher speed and higher compute density than conventional numerical formats. Flexpoint enables a vast increase in parallelism on a die while simultaneously decreasing power per computation.
- Achieve New Levels of Scalability. Designed with high speed on- and off-chip interconnects, the Intel Nervana NNP enables massive bi-directional data transfer distributed across multiple chips. This makes multiple chips act as one large virtual chip that can accommodate larger models, allowing customers to capture more insight from their data.
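The core idea behind a shared-exponent format like Flexpoint can be sketched in a few lines: a whole block of values shares one exponent, so each value is stored as a cheap fixed-point integer. This is an illustrative toy (function names, 15-bit mantissa width, and rounding behavior are assumptions, not Intel's specification):

```python
import math

def quantize_block(values, mantissa_bits=15):
    """Store a block with one shared exponent and integer mantissas.

    Toy illustration of a Flexpoint-style shared-exponent format;
    real hardware formats handle overflow and rounding more carefully.
    """
    max_abs = max(abs(v) for v in values)
    exponent = math.ceil(math.log2(max_abs)) if max_abs > 0 else 0
    scale = 2 ** (mantissa_bits - exponent)
    return [round(v * scale) for v in values], exponent

def dequantize_block(ints, exponent, mantissa_bits=15):
    scale = 2 ** (mantissa_bits - exponent)
    return [i / scale for i in ints]

ints, exp = quantize_block([0.5, -1.25, 3.0])
print(ints, exp)  # integer mantissas plus one shared exponent
print(dequantize_block(ints, exp))
```

Because the multiply-accumulate hardware only sees small integers, far more of them fit on a die per watt than full floating-point units, which is the parallelism and power argument made above.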
At Intel’s AI DevCon conference in San Francisco, Intel announced that it will launch its first commercial NNP product, the Intel Nervana NNP-L1000 (code-named “Spring Crest”), in 2019. The company anticipates that the Nervana NNP-L1000 will achieve “3-4 times the training performance” of its first-generation Lake Crest product.
At the conference, Intel also asserted its support for bfloat16, a numerical format being adopted industrywide for neural networks, in the Intel Nervana NNP-L1000. Over time, Intel will be extending bfloat16 support across all of its AI product lines, including Intel Xeon processors and Intel FPGAs. This is part of a cohesive and comprehensive strategy to bring leading AI training capabilities to Intel’s silicon portfolio.
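What makes bfloat16 attractive is its simplicity: it keeps float32's sign bit and full 8-bit exponent but only 7 mantissa bits, so a float32 can be converted by taking its top 16 bits. A minimal sketch (this truncating conversion is the simplest variant; production hardware typically rounds to nearest):

```python
import struct

def to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to its top 16 bits: the bfloat16 encoding."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return bits >> 16

def from_bfloat16_bits(b: int) -> float:
    """Widen bfloat16 back to float32 by zero-filling the low 16 bits."""
    (x,) = struct.unpack(">f", struct.pack(">I", b << 16))
    return x

print(hex(to_bfloat16_bits(1.0)))               # 0x3f80
print(from_bfloat16_bits(to_bfloat16_bits(3.14159265)))  # 3.140625
```

Keeping the full float32 exponent range is why bfloat16 trains neural networks reliably where older 16-bit formats with narrower exponents often overflow or underflow.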
Naveen Rao, VP and GM, Artificial Intelligence Products Group (AIPG) for Intel, presents at the 2018 Intel AI DevCon
Intel® FPGA
Real-time, programmable acceleration for deep learning inference workloads.
Intel offers an expansive range of FPGA devices – from the high-performance Stratix series to the flexible MAX 10.
The Stratix Series of FPGAs and SoC FPGAs is optimized for the most demanding systems where performance is paramount. Stratix 10 FPGAs and SoC FPGAs are the latest product within the Stratix family. Fabricated on Intel 14nm Tri-Gate technology, Stratix 10 FPGAs and SoCs offer industry-leading capacity, performance, and architectural innovation for the most challenging computing, signal-processing, and software-defined networking applications.
The Arria Series of FPGAs and SoC FPGAs deliver a balance of performance and power efficiency. Arria 10 FPGAs and SoC FPGAs are the latest product within the Arria family. Arria 10 FPGAs and SoC FPGAs at 20 nm feature a unique combination of speed, DSP performance, capacity and power efficiency, and are the only 20 nm FPGA to integrate an embedded processor system.
The MAX Series of programmable logic devices features a non-volatile architecture and provides a mix of low cost and low power. MAX devices are broadly used for general-purpose and power-sensitive designs in a wide variety of market segments to perform functions that include I/O expansion, interface bridging, power management, and FPGA configuration control. MAX 10 FPGAs are the latest product within the MAX family, delivering a unique blend of small footprint, instant-on operation, and FPGA flexibility, including more than enough capacity for soft CPU cores.
The Cyclone Series of FPGAs and SoC FPGAs are optimized for low-cost, high-volume systems. Cyclone V FPGAs and SoC FPGAs deliver capacity, performance, and IP ideal for the majority of embedded applications used in the industrial and automotive markets.
The Enpirion product line offers the industry’s most compact, energy-efficient, and sophisticated DC-DC converters for meeting the power requirements of FPGAs. When power rails demand programmable on/off, fast transient response and extra-low noise, Enpirion devices go where no switching regulator has gone before.
Intel® Movidius™ Myriad™ VPU
Cutting edge solutions for deploying on-device neural networks and computer vision applications at ultra-low power
In March of 2018, Microsoft introduced Windows ML, which enables developers to perform machine learning tasks in the Windows OS. Windows ML efficiently uses hardware for any given artificial intelligence (AI) workload and intelligently distributes work across multiple hardware types – now including Intel Vision Processing Units (VPU). The Intel VPU, a purpose-built chip for accelerating AI workloads at the edge, will allow developers to build and deploy the next generation of deep neural network applications on Windows clients.
The Intel Movidius Myriad X VPU is the industry’s first system-on-chip shipping with a dedicated neural compute engine for hardware acceleration of deep learning inference at the edge. By coupling highly parallel programmable compute with workload-specific hardware acceleration, and co-locating these components on a common intelligent memory fabric, Movidius achieves a unique balance of power efficiency and high performance. The Windows ML and Intel VPU combination has the potential to enable more intelligent customer applications and core OS features, such as personal assistants, enhanced biometric security, smart music, and photo search and recognition.
Intel® Saffron™ AI Solutions
Cognitive and machine-reasoning systems for associative learning
With associative memory learning and reasoning at its core, the Intel Saffron AI Quality and Maintenance Decision Support Suite identifies connections, patterns and trends across data sets to build transparent and auditable recommendations that drive faster and more confident human decisions.
Freed from the constraints of traditional training models, it learns from highly variable and even incomplete data. Both structured data from enterprise systems and unstructured text written by employees, partners and customers are ingested as they come in, delivering fast insight without the need for long test-learn-deploy cycles.
The suite comprises two software applications:
Similarity Advisor. Find the closest match to the issue under review, across both resolved and open cases, identifying paths to resolution from previous cases and surfacing duplicates to reduce backlogs.
Classification Advisor. Automatically classifies work orders into preset categories, whether regulatory-mandated or self-defined, to speed up reporting, improve its accuracy, and support operations planning.
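The closest-match idea behind the Similarity Advisor can be illustrated with simple keyword-overlap scoring. This toy is not Saffron's associative-memory technology (the case IDs, descriptions, and Jaccard metric are all illustrative assumptions), but it shows the shape of the workflow: score a new issue against every open and resolved case, then surface the nearest one:

```python
def jaccard(a: str, b: str) -> float:
    """Keyword-overlap similarity between two issue descriptions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# Hypothetical case backlog (illustrative data only):
cases = {
    "C-101": "coolant pump pressure drop on line 3",
    "C-102": "conveyor belt motor overheating in bay 7",
}

def closest_case(new_issue: str) -> str:
    """Return the case ID most similar to a newly reported issue."""
    return max(cases, key=lambda cid: jaccard(new_issue, cases[cid]))

print(closest_case("pressure drop in coolant pump"))  # C-101
```

Surfacing the nearest prior case in this way is what lets an investigator reuse an earlier resolution path or flag a duplicate instead of starting from scratch.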
One of the best features of the Saffron™ AI Solutions is that it offers a wide range of applications for a diverse slate of industries, from manufacturing and aerospace to software development and financial services.
Intel Saffron for Manufacturing
Resolve manufacturing issues faster and more efficiently with data‑driven decision support.
Key Manufacturing Benefits
- Decrease issue backlogs
- Improve issue resolution efficiency
- Boost product quality
- Optimize supply chain
- Eliminate duplicate work
Intel Saffron for Aerospace
Empowering tech ops teams to make confident AI-augmented decisions for faster and more efficient problem resolution and reporting
Key Aerospace Benefits
- Increase aircraft uptime
- Faster issue resolution time
- Optimize supply chain and inventory
- Help meet key FAA regulatory requirements
- Lower risk and liability
Intel Saffron for Software Development
Enabling engineering teams to leverage AI for faster and more efficient bug resolution
Key Software Development Benefits
- Reduce cost by eliminating duplicated bugs
- Propagate tribal knowledge across teams and regions
- Improve product quality
- Faster time to market
Intel Saffron for Financial Services Industry: Intel Saffron AML Advisor
Derive hidden insights from your data to fight financial crime without replacing existing analytics tools
Key Financial Services Industry Benefits
- Uses explainable Associative Memory AI to accelerate path to decision for investigators and analysts.
- Unifies structured and unstructured data linked into a 360-degree view at the individual entity level, to make sense of the patterns found across boundaries wherever the data is stored.
- Reduces the human cognitive burden by automating thought processes that work with and for the investigator, allowing them to focus on higher-value activities.
- Explains the rationale behind its recommendations to help banks meet regulatory compliance requirements, mitigating fines and countless hours reworking reports.
Scientists, engineers, and data analysts need ever-increasing performance from advanced computing solutions to speed time to results, handle unprecedented data volumes, and improve the accuracy and precision of their applications. Intel® architecture is designed to address the heavy demands of high performance computing at every level, making Intel one of Advanced HPC’s Elite Partners.
Latest Intel Whitepapers
Select the Best Infrastructure Strategy to Support Your AI Solution
For many organizations, the question is not whether, but when, and most importantly how, to deploy Artificial Intelligence (AI). The purpose of this whitepaper is to help decision-makers select an AI infrastructure approach that accelerates adoption and enables experience to be gained without embedding cost or creating longer-term issues. Read More…