
0 PRIMERGY CX Multi-Node Servers
New Granularity for Scale-Out Server Deployment Copyright 2011 FUJITSU

1 FJ remark: NO, only 3Gb/s supported
Intel NDA Rules
Platform: 2S Server & 2S Workstation
Product Name: Intel® Xeon® processor E5 family (Sandy Bridge), Intel® C600 series chipset (Patsburg), Intel® Server boards and associated Server Systems featuring the Intel® C600 chipset (S2600GZ, S2600GL, S2600JP, S2600IP, S2600CP4, S2600WP, S2600CO4, W2600CR)
Product Introduction Date: 6 Mar 2012
Sales and Advertising Dates (to end users):
- Advertising: NO forms of advertising or promotion to end users (print or web) until Product Introduction Date.
- Sales, Shipments: NO sales or shipments to end users until Product Introduction Date; allowed only to distributors/retail/resellers under NDA to enable sales at Product Introduction Date, plus other previously approved early-ship accounts under NDA.
Intel Activity/Timing: To Be Defined
Disclosure Guidelines (unless otherwise stated, restrictions will be lifted on Product Introduction Date):
- Benchmarks: NO disclosure of benchmark results or demos of performance-measuring software until Product Introduction Date, except for approved disclosures at the SC'11 event in November.
- Demos: NO public demos, except with the specific approval of Daniel W Brown or Eric C Fox, until Product Introduction Date.
Press Guidelines (unless otherwise stated, restrictions will be lifted on Product Introduction Date):
- Demos: Demos to press/analysts under NDA are allowed one month prior to Product Introduction Date; however, NO public releases, reviews, articles, or write-ups are allowed until Product Introduction Date.
- Seeding: System samples to press for review under NDA are allowed one month prior to Product Introduction Date; however, NO public releases, review articles, or write-ups are allowed until Product Introduction Date.
- Press Release: Any press release must follow the What's Public & Disclosure Guidelines; PR activities (including under embargo) should be coordinated with and approved by Intel PR.
How should this product be referenced in public: Use the Product Name on Product Introduction Date; prior to that, reference it as "future Intel® Xeon® processor E5 family".
What's Public (OEMs ALLOWED to reference): OEM's system name and system features, price ranges for these future systems, intent to offer future systems, market availability in early 2012.
Already Public Information: 8C, 16T per socket (16C, 32T per 2S E5 system); integrated 6Gb/s SAS; integrated PCIe (NO mention of version); support for Intel® Advanced Vector Extensions, Intel® Trusted Execution Technology, and Intel® AES New Instructions; low-voltage DIMM (LVDIMM) and load-reduced DIMM (LRDIMM) support.
NOT OK for public disclosure: references to these Intel products (e.g. name/processor number, chipset name/number, frequency, pricing, performance, features not already disclosed by Intel, product code name), or OEM system availability timing.
FJ remark: NO, only 3Gb/s supported Copyright 2011 FUJITSU

2 What are the use cases for this type of server?
The PRIMERGY CX400 within the PRIMERGY CX portfolio; the overall advantages; High Performance Computing; clustered configurations, many-HDD needs, and "hyper-scale" scenarios; what analysts tell us; what we expect Copyright 2011 FUJITSU

3 PRIMERGY CX – the portfolio
The PRIMERGY CX1000 and the PRIMERGY CX400 both belong to the Cloud Extension server family.
The PRIMERGY CX400: "4 servers in 2U", a new granularity for massive deployments
- Deployment of 4 independent servers
- Optimized granularity
The PRIMERGY CX1000: "Scale Big" with 38 nodes in 1 rack
- Reduced floor footprint
- Centralized cooling
PRIMERGY CX400 Multi-Node Servers: High compactness, a shared power and cooling infrastructure, and ease of use aimed at decreased capital and operational expenses are the central design characteristics of the PRIMERGY CX400 architecture. As part of the PRIMERGY CX CloudExtension server line, it provides a new, condensed granularity for scale-out server deployment by packing up to 4 half-wide dual-socket server nodes into an only-2U chassis in standard 19" rack form factor. The CX400 thus makes it easy to scale out with up to 4 independent server nodes packaged into a single multi-node enclosure, with shared power and cooling for lower energy budgets. Its new "4 in 2U" granularity not only provides enhanced flexibility through individual node serviceability and configuration; it also enables a fine-tuned scale-out granularity plus easy rack-wide team play with the servers and storage that are part of the current datacenter infrastructure. Hot-plugging of server nodes, power supplies, and local storage drives provides enhanced availability and greater ease of use. The half-width PRIMERGY CX2xx server nodes address the deployment of scale-out server platforms for HPC, terminal or storage virtualization farms, hosting, and Virtual Client Computing. With their optional GPGPU graphics engines, even the highest performance demands are easily achievable. The fine granularity for scale-out computing, plus conformity to conventional 19" rack infrastructure, makes the PRIMERGY CX400 Multi-Node server a perfect fit for scaling out as demand dictates.
PRIMERGY CX1000 Copyright 2011 FUJITSU

4 PRIMERGY CX400 S1 – Global Advantages
- 2U rack chassis with conventional airflow and rear-side connectivity: optimized installation and operating costs (existing racks and air conditioning)
- Efficient power supplies and fans: reduced energy consumption vs. rack servers
- Hot-plug server nodes, hot-plug redundant power supplies, hot-plug disk drives: increased availability and ease of use
- No shared fabrics, I/O, or management components: reduced complexity vs. blade servers
- Half-width server nodes offering double the density: better use of floor space
- 24 2.5"/3.5" SATA or SAS drives, eco, business-critical, and enterprise class: flexible choice to meet every need; 3-year warranty
- New Intel® Xeon® processors with high memory and I/O bandwidth: best performance, with no memory or I/O bottleneck
Copyright 2011 FUJITSU

5 PRIMERGY CX400 - HPC Design
HPC needs are covered:
- Half-width servers double the density
- Choice of server nodes without (1U) or with (2U) GPGPU (General Purpose Computing on Graphics Processing Units), depending on the applications
- Intel® Xeon® E5 processors with 4, 6, or 8 cores; new server nodes based on the E5 will launch in mid-2012
- Redundant, hot-plug power supplies improve availability and reduce maintenance effort
- High memory bandwidth (1,600 MHz)
- Optional FDR InfiniBand interconnect (56 Gbit/s) offers maximum bandwidth with reduced latency
- Optional NVIDIA® Tesla™ 20-series cards, the highest-performing parallel processor for HPC
Copyright 2011 FUJITSU

6 PRIMERGY CX400 - Hyper-Scale Server
- Large storage capacity: 24 drives in a CX400 chassis
- SATA/SAS technology for hard disks and SSDs
- Flexible assignment of drives to the 4 server nodes, from 3 drives per node up to 12 drives (24 drives with a "special release")
- Half-width server nodes for greater density
- Redundant, hot-plug power supplies improve availability and reduce maintenance effort
- A wide variety of applications can be served: those needing heavy I/O, large storage capacity, or a large number of compute nodes. All environments are covered, from SMBs to large datacenters.
Copyright 2011 FUJITSU

7 Gartner's Differentiation Approach
Gartner's classification describes architectures like the PRIMERGY CX400 as "skinless servers". Source: Gartner Data Center & IT Operations Summit, Andrew Butler, Carl Claunch: Does the Server Have a Role in the Fabric-Enabled Data Center? Nov 2011 Copyright 2011 FUJITSU

8 IDC's x86 Server Forecast
Hyper-Scale servers like PRIMERGY CX400 show the highest CAGR between 2011 and 2015. Copyright 2011 FUJITSU

9 PRIMERGY CX400 – Sales Expectations
Although the new systems are still in the development phase, some promising prospects are being pursued – up to 8K units of potential. Currently India, Japan, and the UK are the frontrunners (plus Australia, Germany, and the USA).
India
- One of the currently three Indian HPC prospects concerns statistics research
- Besides some rack servers (PRIMERGY RX300 and RX350), roughly 100 PRIMERGY CX400 chassis, fully equipped with four CX250 nodes per chassis
- All Indian projects are expected to use the PRIMERGY CX250
Japan
- Three Japanese HPC-related prospects promise more than 1,700 server nodes
- Three other prospects are public- and private-cloud related (volume still not fixed)
- One prospect, scheduled for the end of 2012, is Hadoop related (server nodes with many storage drives)
UK
- The main UK prospects (HPC) involve 2 different customers requiring several hundred PRIMERGY CX400 chassis, each fully equipped with 1U or 2U nodes
- Customer 1: financial services, PRIMERGY CX250 w/o GPGPU -> WON. The challenge: delivery in March 2012
- Customer 2: scientific organization, PRIMERGY CX250 plus CX270 incl. GPGPUs (existing customer)
- One of these should – in case we win – be usable as a reference near launch time; a case study would serve best for this purpose
Copyright 2011 FUJITSU

10 PRIMERGY CX400 S1 – The Ingredients
One chassis for 3 server node models Copyright 2011 FUJITSU

11 PRIMERGY CX400 – overview
The PRIMERGY CX400 is a 2U chassis that can host 4 half-width server nodes and up to 24 drives.
The CX400 S1 chassis
+ 2 power supplies, 4 fans
+ a disk module (12 3.5" drives or 24 2.5" drives)
+ 2 or 4 server nodes, i.e. 8 processors, 72 memory modules, and 2 I/O controllers + GPGPU
= 64 cores, 2.048 TB of RAM, and 72 TB of storage
1U hot-plug dual-socket server nodes:
- The CX210 S1 is an entry-level node with Intel® Xeon® E5-2400 processors
- The CX250 S1 is an HPC-oriented compute node with Intel® Xeon® E5-2600 processors
2U hot-plug dual-socket server nodes with GPGPU:
- The CX270 S1 is an HPC-oriented compute node that accepts GPGPUs, with Intel® Xeon® E5-2600 processors
The PRIMERGY CX400 S1 represents the first generation of multi-node systems for large datacenters as well as small and medium-sized environments. It is a 2U rack-optimized enclosure to be filled with all the infrastructure components necessary for flexible operation of a wide range of application types. Thanks to its conventional front-to-back cooling and rear-side external connectivity, it enables cost-effective installation and operation in existing rack infrastructures and air conditioning. A low level of complexity is ensured by avoiding shared fabrics, I/O, or management components. The main ingredients - server nodes, power supplies, and disk drives - are hot-plug enabled for enhanced availability and lower servicing effort. The approach to minimized energy consumption is underlined by up to two redundant, highly efficient (92%) 80Plus Gold PSUs, providing power for all built-in parts. Efficient cooling is guaranteed by the four centrally installed fan units. The system offers a free choice between server nodes without (1U) and with (2U) GPGPU (General Purpose Computing on Graphics Processing Units), depending on the specific application's characteristics.
All server nodes are half-wide, so two of them may be positioned beside each other; up to four 1U or two 2U nodes is the maximum per PRIMERGY CX400 enclosure, so density may be doubled compared to standard rack servers. Great flexibility is given through the different types of local disk drives installed at the enclosure front side. Up to twelve 3.5" or twenty-four 2.5" storage drives are usable, be they HDDs or SSDs, with either SATA or SAS interfaces. The drives are assigned to the installed server nodes group-wise, adaptable to any demand and wallet. The wide variety offered by the PRIMERGY CX400 multi-node system makes it an ideal base for use in HPC environments, virtual client computing areas, medium-sized application deployments, and many more. Copyright 2011 FUJITSU
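As a quick check of the "64 cores, 2.048 TB of RAM, 72 TB of storage" figure above, here is a minimal arithmetic sketch. It assumes the maxima implied elsewhere in this deck: top-bin 8-core E5 CPUs, 512 GB of RAM per node, and 3 TB as the largest drive capacity listed; the slide counts 24 bays at that capacity.

```python
# Maximum capacity of one fully populated CX400 S1 enclosure,
# using the per-node maxima quoted in this presentation.
NODES = 4                 # four half-width 1U nodes per 2U chassis
SOCKETS_PER_NODE = 2      # dual-socket nodes
CORES_PER_CPU = 8         # top-bin Intel Xeon E5 (assumption)
RAM_PER_NODE_GB = 512     # CX250/CX270 maximum (16 x 32 GB LRDIMM)
DRIVE_BAYS = 24           # 2.5" disk module
DRIVE_TB = 3              # largest 3.5" capacity listed (assumption)

total_cores = NODES * SOCKETS_PER_NODE * CORES_PER_CPU   # 64 cores
total_ram_tb = NODES * RAM_PER_NODE_GB / 1000            # 2.048 TB (decimal TB)
total_storage_tb = DRIVE_BAYS * DRIVE_TB                 # 72 TB

print(total_cores, total_ram_tb, total_storage_tb)
```

The decimal division (by 1000 rather than 1024) reproduces the slide's "2.048 To" figure exactly.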

12 PRIMERGY CX400 S1 – In Detail
Multi-node server chassis available with 2 disk-cage formats, 2.5" or 3.5":
- CX400 with 12 3.5" drives
- CX400 with 24 2.5" drives
The PRIMERGY CX400 S1 includes as standard:
- 2 redundant, hot-plug 1400 W power supplies with 92% efficiency (80Plus Gold)
- 4 80 mm fans, non-hot-plug, non-redundant
- 12 3.5" or 24 2.5" hot-plug drives, SAS/SATA HDD or SSD
Copyright 2011 FUJITSU

13 PRIMERGY CX400 S1 – 2.5" Drives
Assignment of the 24 2.5" drives: fixed allocation per server node
- Standard: 6 or 12 2.5" drives per node
- Via special release: all 24 2.5" drives for a single node
Copyright 2011 FUJITSU

14 PRIMERGY CX400 S1 – 3.5" Drives
Assignment of the 12 3.5" drives: fixed allocation per server node
- Standard: 3 or 6 3.5" drives per node
- Via "special release": all 12 3.5" drives for a single node
Copyright 2011 FUJITSU
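The fixed-allocation rule on the last two slides can be sketched as a small validation helper. This is hypothetical illustration code, not Fujitsu tooling; it models the 12-bay 3.5" cage, where standard configurations give each node 3 or 6 drives and routing all 12 bays to one node needs a "special release".

```python
# Hypothetical check of a per-node drive assignment against the
# CX400 S1 allocation rules described on the slides above.
def classify_assignment(drives_per_node, total_bays=12):
    """Return 'standard' or 'special release', or raise on invalid splits."""
    counts = list(drives_per_node.values())
    if sum(counts) > total_bays:
        raise ValueError("more drives assigned than bays available")
    if len(counts) == 1 and counts[0] == total_bays:
        return "special release"          # all bays to a single node
    if all(n in (3, 6) for n in counts):
        return "standard"                 # fixed 3- or 6-drive groups
    raise ValueError("unsupported per-node drive count")

even_split = classify_assignment({"node1": 3, "node2": 3, "node3": 3, "node4": 3})
single_node = classify_assignment({"node1": 12})
print(even_split, single_node)
```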

15 PRIMERGY CX210 S1 - Features
The CX210 S1 is an economical 1U server node integrating the latest processor technology with sufficient memory capacity and I/O.
- Chipset: Intel® C600 with QPI link
- Processor: 2x Intel® Xeon® E5-24xx, 2-/4-/6-/8-core, with HT + TB
- Memory: UDIMM / RDIMM (LV + Std.), max. 192 GB DDR3 RDIMM, 3 channels x 2 DIMMs per CPU, ECC; mirroring, sparing, SDDC (except UDIMM)
- I/O: PCIe Gen3, 1x x16 riser slot, 1x x16 mezzanine slot for SAS RAID (internal) or 10 GbE Intel® 82599
- Drives: 2.5": 6x / 3.5": 3x (SATA/SAS HDD)
- Rear-side connectors: optional VGA, 2x USB, 2x LAN + 1x service, 1x COM port
- LoM (LAN on board): 2x 1GbE Intel I350 (Powerville)
- BMC: AST2300; dedicated management port: 10/100 Mbit (SMSC LAN9303); shared network: 10/100/1000 Mbit (Intel I350)
4 nodes in one CX400 S1 chassis Copyright 2011 FUJITSU

16 PRIMERGY CX210 S1 - Appearance
Diagram callouts: 2x CPU socket; 12x DDR3 DIMM slots; 6x SATA ports; SAS/10GbE mezzanine board connector; PCIe Gen3 x16 riser slot; rear I/O: 2x USB, 2+1 LOM, COM/VGA, power button/LED Copyright 2011 FUJITSU

17 PRIMERGY CX210 S1 – IO Form Factors
Interposer; PCIe low-profile card (PCIe x16); SAS mezzanine card (PCIe x8) Copyright 2011 FUJITSU

18 PRIMERGY CX210 S1 – Block Diagram
For PCIe Riser For SAS Mezzanine Copyright 2011 FUJITSU

19 Fujitsu CX250 S1 – At a Glance: Intel® Xeon® E5-2600 family
- 8-core processors with Turbo Boost 2.0 and low latency: meets current and future needs; at least 30% more performance; optimized for HPC and virtualization scenarios
- Increased scalability: 16 DIMMs, 512 GB DDR3, 24 drives (CX400), 2x PCIe Gen3: investment protection; maximum scalability for future needs; maximum I/O bandwidth (virtualization & HPC)
- Operation: 2 hot-plug power supplies with 92% efficiency: cost-efficient operation; reduced energy consumption
Copyright 2011 FUJITSU

20 PRIMERGY CX250 S1 - Features
The CX250 S1 is a high-performance 1U server node offering powerful processors, high memory capacity, and fast I/O.
- Chipset: Intel® C600 with QPI
- Processor: 2x Intel® Xeon® E5-26xx, 2-/4-/6-/8-core, with HT + TB
- Memory: UDIMM / RDIMM / LRDIMM (LV + Std.), max. 512 GB DDR3 LRDIMM, 4 channels x 2 DIMMs per CPU, ECC; mirroring, sparing, SDDC (not UDIMM)
- I/O: PCIe Gen3, 1x x16 riser slot, 1x x16 mezzanine slot for SAS RAID (internal) or 10 GbE Intel® 82599
- Drives: 2.5": 6x, 3.5": 3x (SAS/SATA HDD & SSD)
- Rear-side connectors: optional VGA, 2x USB, 1x internal UFM support, 2x LAN + 1x service, 1x COM port
- LoM (LAN on board): 2x 1GbE Intel I350 (Powerville)
- BMC: AST2300; dedicated management port: 10/100 Mbit (SMSC LAN9303); shared network: 10/100/1000 Mbit (Intel I350)
4 nodes in one PRIMERGY CX400 S1 Copyright 2011 FUJITSU

21 PRIMERGY CX250 S1 – Parts View
Diagram callouts: 2x CPU socket; 16x DDR3 DIMM slots; 6x SATA ports; SAS/10GbE mezzanine board connector; PCIe Gen3 x16 riser slot; rear I/O: 2x USB, 2+1 LOM, COM/VGA, power button/LED Copyright 2011 FUJITSU

22 PRIMERGY CX250 S1 – IO Cards
Optional SAS RAID or dual 10GbE mezzanine card: SAS 6G mezzanine (option), 10GbE mezzanine (option) Copyright 2011 FUJITSU

23 PRIMERGY CX250 S1 – IO Form Factors
PCI-e LP card Mezzanine card Copyright 2011 FUJITSU

24 PRIMERGY CX250 S1 – internal view
Copyright 2011 FUJITSU

25 PRIMERGY CX250 S1 – Block Diagram
For PCIe Riser SMSC LAN9303 Copyright 2011 FUJITSU

26 PRIMERGY CX270 S1 - Features
The CX270 S1 is a high-performance 2U server node integrating the most powerful processors, high memory capacity, and the power of a GPGPU.
- Chipset: Intel® C600 with QPI
- Processor: 2x Intel® Xeon® E5-26xx, 2-/4-/6-/8-core, with HT + TB
- Memory: UDIMM / RDIMM / LRDIMM (LV + Std.), max. 512 GB DDR3 LRDIMM, 4 channels x 2 DIMMs per CPU, ECC; mirroring, sparing, SDDC (not UDIMM)
- I/O: PCIe Gen3, 1x x16 riser card for 2x x8 LP, 1x x16 riser card for GPGPU
- Drives: 2.5": 12x, 3.5": 6x (SAS/SATA HDD & SSD)
- Rear-side connectors: optional VGA, 2x USB, 1x internal UFM support, 2x LAN + 1x service, 1x COM port
- LoM (LAN on board): 2x 1GbE Intel I350 (Powerville)
- BMC: AST2300; dedicated management port: 10/100 Mbit (SMSC LAN9303); shared network: 10/100/1000 Mbit (Intel I350)
2 nodes in one PRIMERGY CX400 S1 Copyright 2011 FUJITSU

27 PRIMERGY CX270 S1 – IO Cards: GPGPU Card Location
GPGPU PCIe x16 riser card; PCIe x16 riser card supporting two x8 PCIe slots for LP PCIe cards (standard LP PCIe IB card / standard LP PCIe SAS RAID card) Copyright 2011 FUJITSU

28 PRIMERGY CX270 S1 – IO Riser Forms
Additional riser cards for CX270 S1 PCIe x16 Riser Card GPGPU Riser Card Copyright 2011 FUJITSU

29 PRIMERGY CX2x0 S1 – Summary (1)
Models: CX250 S1 (rel. 4/2012), CX270 S1 (rel. 5/2012), CX210 S1 (rel. 8/2012)
Type: dual-socket server node; CX250/CX210: 1U half-width; CX270: 2U half-width
Slots needed in CX400 chassis: 1U nodes: 1 slot (4 units max); 2U nodes: 2 slots (2 units max)
Processors: 2x Intel® Xeon® E5-2600 (135 W max) for CX250/CX270; 2x Intel® Xeon® E5-2400 (95 W max) for CX210
Memory: CX250/CX270: 2x 4 channels with 2 banks (16 slots), using 2/4 GB UDIMM, 4/8/16 GB RDIMM, or 16/32 GB LRDIMM; CX210: 2x 3 channels with 2 banks (12 slots), using 2/4 GB UDIMM or 4 – 192 GB of 4/8/16 GB RDIMM
Memory RAS: ECC; RDIMM & LRDIMM: SDDC (Chipkill), mirroring, sparing
Memory specs: low-voltage UDIMM 1,600 MHz (2 GB: 1 rank / 4 GB: 2 ranks); low-voltage RDIMM 1,600 MHz (4 GB: 1 rank / 8+16 GB: 2 ranks / 4 GB w/o SDDC: 2 ranks); low-voltage LRDIMM 1,333 MHz (4 ranks)
Max. operational frequencies (MHz): UDIMM: / RDIMM: / LRDIMM:
I/O slots (PCI / mezzanine): CX250: 1 slot PCIe Gen3 x16 & 1 PCIe Gen3 x16 mezzanine slot; CX270: 2 slots PCIe Gen3 x8 via x16 riser & 1 slot PCIe Gen3 x16 GPGPU; CX210: 1 slot PCIe Gen3 x16 mezzanine
Copyright 2011 FUJITSU

30 PRIMERGY CX2x0 S1 – Summary (2)
Models: CX250 S1 (rel. 4/2012), CX270 S1 (rel. 5/2012), CX210 S1 (rel. 8/2012)
SW RAID (integrated): FJ LSI SW RAID 0/1/10 onboard¹ (chipset) for up to 4 SATA disk drives (6 drives w/o RAID)
HW RAID for SAS or > 4 RAID drives: SAS 2.0 RAID 0/1/10/5 mezzanine card (LSI 2108/QCI) with 512 MB cache, optional BBU; PCIe LP SAS 2.0 RAID 0/1/10 (LSI 2008/Lynx) or RAID 0/1/10/5/6/50/60 (LSI 2108/Cougar) with cache, optional BBU; SAS 2.0 RAID 0/1/10/5 mezzanine card (LSI 2108/QCI) with 512 MB cache, optional BBU
Max. hot-plug drives: 1U nodes: 3x 3.5" or 6x 2.5"; 2U node: 6x 3.5", 12x 2.5", or 24x 2.5" (special release)
Drive types, formats, capacities: 3.5" ECO SATA 6G 7.2K HDDs; 3.5" BC SATA 6G 7.2K HDDs: 500 – 3000 GB; 2.5" BC SATA 6G 7.2K HDDs: 250 – 1000 GB; 2.5" EP SAS 6G 10K HDDs: 300 – 900 GB; 2.5" EP SAS 6G 15K HDDs; 2.5" mainstream SATA 3G SSDs; 2.5" performance SAS 6G SSDs: 100 – 400 GB
Power supplies: 2x redundant, hot-plug shared PSUs, 92% efficiency (in the CX400 S1 enclosure)
OS support: Microsoft® Windows® Server 2008 R2, RHEL 5 (KVM/XEN)/6 (KVM), SLES 10 (XEN), 11 (XEN), VMware ESXi (≥ 4.1, 5.0), Citrix XenServer, CentOS, Microsoft® Windows® HPC Server 2008 R2
¹ Onboard with 3 Gbps
Copyright 2011 FUJITSU

31 PRIMERGY CX2x0 S1 – I/O Controllers
Models: CX250 S1, CX270 S1, CX210 S1
Mezzanine cards (1 per node): 10 Gbps Ethernet dual port PCIe x8, SFP+ (Intel® Niantic based) – 6E/2012
PCIe cards (1x x16 or 2x x8):
- 1 Gbps Ethernet: dual port PCIe 2.0 x4 D2735-2, FH, Cu (Intel® Kawela based); quad port PCIe x4 D2745, FH, Cu (Intel® Barton Hills based) – 4/2012
- 10 Gbps Ethernet: dual port PCIe 2.0 x8 D2755-2, LP/FH, SFP+ (Intel® Niantic based)
- 10 Gbps Ethernet CNA (FCoE): dual port PCIe 2.0 x8, 170 mm, SFP+ (Emulex® OCe10102); dual port PCIe 3.0 x8, 170 mm, SFP+ (Emulex® OCe11102) – tbd
- 40 Gbps QDR InfiniBand: single port PCIe x8, LP, QSFP (Mellanox ConnectX-2); dual port PCIe 2.0 x8, LP, QSFP (Mellanox ConnectX-2)
- 56 Gbps FDR InfiniBand: single port PCIe 3.0 x8, LP, QSFP (Mellanox ConnectX-3); dual port PCIe 3.0 x8, LP, QSFP (Mellanox ConnectX-3)
GPGPU: PY NVIDIA Tesla M2075 computing processor (448 cores), FH – 5/2012; PY NVIDIA Tesla M2090 computing processor (512 cores), FH – 6/2012
Copyright 2011 FUJITSU

32 PRIMERGY CX400 S1 – Competition: Preliminary Overview
HP ProLiant DL2000 Multi Node Server; HP ProLiant SL6000 / SL6500; Dell PowerEdge C6100 rack server; Supermicro 2U Twin Copyright 2011 FUJITSU

33 HP ProLiant DL2000 Multi Node Server
2U chassis e2000 G6 with 2 or 4 independent, rear-serviceable 2S server nodes
- Doubles the density in an industry-standard design
- Ideal for high-density performance computing, memory-intensive applications, and scale-out grid computing
- Shared power supplies (redundant, hot-plug) and fans (non-redundant, non-hot-plug)
- Up to 8/12x 3.5" or 16/24x 2.5" hot-plug disk drives; cooling-optimized HDD positioning
- 2x 2U DL2x170e G6 or 4x 1U DL170e G6 server nodes; 16 DIMM slots; 2x 1GbE RJ45
- PCIe 1U: 1x x16 LP; PCIe 2U options: 3x x8, or 1x x16 + 2x x4, or 1x x16 + 1x x8
- Current CPUs only <= 95W; 1U version: 1x PCIe only
- DL2000 with 8x 3.5" disks or 2.5" disks; 4-node and 2-node configurations (rear side)
Copyright 2011 FUJITSU

34 HP ProLiant SL6000 Scalable System
CHASSIS z6000 WITHDRAWN
- Ideal for scale-out, Web 2.0, and HPC environments
- The ProLiant SL6000 Scalable System consists of the z6000 G6 2U chassis + two 1U server nodes: HP ProLiant SL160z G6, SL170z G6, SL165z G7, SL2x170z G6
- Shared PSUs and fans per 2U
- Up to 6x 3.5" SAS/SATA or 2.5" SAS/SATA/SSD per 1U
- Each system board is half-wide, and the DL170h is identical to the SL170z
- Up to 4 CPUs per 1U with the SL2x170z G6
- Entirely withdrawn
Copyright 2011 FUJITSU

35 HP ProLiant SL6500 – Scalable System
Highest compute density, power efficiency, GPU density, and single-node serviceability
- Chassis s6500 4U with up to 8 independent, front-serviceable 2S server nodes of 1, 2, or 4U; left and right halves are different; each system board is a half-width board
- SL170s G6: 8x 1U, 16 DIMM slots; PCIe: 1x x16 LP; 2x 3.5" or 4x 2.5" non-hot-plug disk drives; standard: 2x 1 GbE RJ45
- SL390s G7, 12 DIMM slots:
  - 8x 1U: PCIe: 1x x16 LP; 2x 3.5" or 4x 2.5" non-hot-plug disk drives
  - 4x 2U: PCIe: 1x x8 LP; 3x x16 for 3 GPUs; 4x 2.5" hot-plug + 2x non-hot-plug disk drives
  - 2x 4U: PCIe: 1x x8 LP; 8x x16 for 8 GPUs; 8x 2.5" hot-plug disk drives
  - Standard: 2x 1 GbE RJ45 + 1x 10 GbE SFP+; option: 1x 40 Gb QDR InfiniBand
- 2-4 shared 94% PSUs (hot-plug), 750 or 1200 W; 8 fans, optionally redundant and hot-plug
- 1-year warranty; required rack depth: 1,200 mm (no standard)
Copyright 2011 FUJITSU

36 Dell PowerEdge C610x Multi-Node Servers
Hyper-scale-inspired server designed to bring the most compute power in the least amount of space
- 24x 2.5" or 12x 3.5" hot-plug HDD bays
- C610x: 1U half-width multi-node servers (4 nodes), similar to the HP SL170e
- C6100: Intel Xeon 5500/5600; C6105: AMD Opteron 4000
- Up to 12 DIMM slots; dual-port GbE Intel® 82576
- 1x PCIe x16 riser slot and 1x mezzanine slot for InfiniBand, 10GbE, and/or a SAS RAID card (LSI 2008)
- 2x 1100 W PSU, redundant, hot-plug
- SLES 11 SP1 x6 / RHEL 5.5 / MS Windows Server 2008 R2 Enterprise / HPC Server 2008 R2; optionally embedded: Citrix® XenServer Enterprise 5.6, VMware® ESX V4.1, or MS Windows Server Hyper-V
- Current CPUs only <= 95W
Copyright 2011 FUJITSU

37 Supermicro 2U Twin
Very similar to the PRIMERGY CX400
- Totally cable-less design: better reliability, enhanced airflow
- Memory up to 192 GB DDR3 Reg. ECC
- Intel® Xeon® 5500/5600 series CPUs supported
- 4 hot-swappable compute nodes in 2U of rack-mount space: space saving, power saving
- Mellanox ConnectX QDR InfiniBand onboard (selected module)
- IPMI KVM with dedicated LAN, SuperDoctor® III, watchdog
- PCI-Express 2.0 x16 (low profile)
- Gold-level, high-efficiency 1400 W redundant power supply
- 6 hot-swap 2.5" SAS/SATA per node or 3 hot-swap 3.5" SAS/SATA per node
Copyright 2011 FUJITSU

38 PRIMERGY CX Roadmap 2011/12 – Cloud Optimized Infrastructure
Cloud / Multi Node Server Infrastructure:
- CX400 S1: multi-node server enclosure for up to 4x CX2n0 nodes plus 24x 2.5" or 12x 3.5" disk drives
- CX1000 S1: rack and infrastructure for up to 38x CX1x0 cloud servers; S2: Intel® Xeon® processor E5 v2 family
Multi Node Server:
- CX270 S1: 2U / 2x E5 / 16x DIMM / GPGPU / half-wide board
- CX250 S1: 1U / 2x E5 / 16x DIMM / half-wide board
- CX210 S1: 1U / 2x E5 / 12x DIMM / half-wide board
Cloud Server:
- CX122 S2: 2x Sandy Bridge-EN 8C, 12 DIMM, for customer projects, tbd
- CX122 S1: 1U / 2x /6 cores / 18x DIMM / full-wide board
- CX120 S1: 1U / 2x /6 cores / 8x DIMM / full-wide board
Copyright 2011 FUJITSU

39 PRIMERGY CX400 S1 launch schedule
Model | Press announcement | Price list | General availability (MS70)
CX400, CX250 | March 2012 | 01 March 2012 | 28 March 2012
CX270 | | 01 April 2012 | 15 May 2012
CX210 | | 01 July 2012 | 15 August 2012
Collaterals on PRIMERGY Preview (https://partners.ts.fujitsu.com/teams/d/mkt/esb-preview/cx400) Copyright 2011 FUJITSU

40 Copyright 2011 FUJITSU

41 Backup Copyright 2011 FUJITSU

42 PRIMERGY Server Portfolio
High Availability Scalability Price/Performance Density RX900 S2 BX960 S1 BX924 S2 RX600 S6 BX922 S2 BX920 S2 TX300 S6 RX300 S6 CX400 S1 RX200 S6 TX150 S7 TX120 S3 TX140 S1 CX122 S1 TX200 S6 CX120 S1 RX100 S7 TX100 S3 MX130 S1 Copyright 2011 FUJITSU

43 Fast Track Multi Node Servers
Highly SCALE-OUT server deployments: High Performance Computing; WEB services & private clouds / XaaS; hosting
Common key needs: highly efficient power and cooling, optimized DC density, lower management effort, lower CAPEX and OPEX than "conventional form factor" platforms
Specific key needs:
- HPC: utmost performance per socket; InfiniBand; next-gen CNA fabric; water-cooling option; "very large" scale-out (1 shot); GPGPU support; LINUX & derivatives vendor support; lowered DC footprint; rip & replace HW service; minimum size = 40 units
- Web services & clouds: flexible storage connections; lower energy/cooling costs; easy management & fast deployment; certified cloud s/w stacks; stepwise scaling; lowest cost per VM slice; lowest power-consumption costs (esp. idle power consumption); container option; "endless" scale-out; collect & return service model; minimum size = 200 units
- Hosting: highest density per sqm; Hadoop-ready HW (12 disk drives & redundancy options); minimum size = 20 units
Copyright 2011 FUJITSU

44 Multi Node Server Differentiation
Fujitsu's answer to MNS demands: expanding Fujitsu's offerings with an MNS named PY CX400, plus adding GPGPU cards for HPC usage and offerings for many-HDD server usage in a high-density design
- Less complex than blades: no common switch and management environment needed
- Less flexible than rack servers: in terms of number of PCIe slots and PSU options
- Less power-efficient than the CX1000 design for massive scale-out: operates in standard rack environments
- ServerView Suite support limited to target-market needs
The "average customer perception" of "multi-node" / "slim line" servers is: not as complex as blades, not as flexible as rack servers, but makes some sense in high scale-out environments Copyright 2011 FUJITSU

45 PRIMERGY CX400 S1 – Disk Options
SAS zoning options; RAID / non-RAID options Copyright 2011 FUJITSU

46 Disk Zoning - Block Diagram
HDD1 – HDD12 and HDD13 – HDD24 connect through an LSI SAS2 36-port SAS expander (plus a Cougar 2 in the CX270 S1); a switch enables/disables zoning; x6 links run to the MB4 and MB2 slots (CX270 S1) Copyright 2011 FUJITSU

47 Dual Node Zoning - Block Diagram
With zoning enabled, the LSI SAS2 36-port SAS expander splits the bays: HDD1 – HDD12 route to one node slot and HDD13 – HDD24 to the other (MB4 and MB2 slots), via x6 links Copyright 2011 FUJITSU

48 Single Node Configuration - Block Diagram
With zoning disabled, the LSI SAS2 36-port SAS expander routes all 24 bays (HDD1 – HDD24) to a single node slot; special release required Copyright 2011 FUJITSU
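The three block diagrams above can be modeled as a small routing function. This is an illustrative sketch only; in particular, which bay range maps to which motherboard slot (MB2 vs. MB4) is an assumption, since the diagrams do not make the pairing explicit.

```python
# Sketch of the CX400 SAS expander zoning: zoning enabled splits the
# 24 bays between two node slots; zoning disabled (special release)
# routes every bay to a single node.
def route_bay(bay, zoning_enabled=True):
    """Return the node slot ('MB2' or 'MB4') that sees a given bay (1-24)."""
    if not 1 <= bay <= 24:
        raise ValueError("CX400 has 24 drive bays")
    if not zoning_enabled:
        return "MB2"                     # single node owns every bay (assumed slot)
    return "MB2" if bay <= 12 else "MB4" # assumed mapping of halves to slots

print(route_bay(5), route_bay(13), route_bay(13, zoning_enabled=False))
```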

49 Raid / Non-Raid Options
Option table (Node / EN / EP / HDD / RAID support / SAS expander / iBBU support IBBU08):
- 1U onboard (Patsburg A): 6x SATA, RAID for 4x (2x AHCI & 4x SCU), SW RAID (0/1/10), no expander
- 1U mezzanine: 6x SAS/SATA, HW RAID (0/1/5/6/10), expander yes (12-24 drives), iBBU yes
- 2U: SW RAID (0/1/10) or LP PCIe controller
Note: the CX node design (2x AHCI & 4x SCU) allows 3 Gbps with Patsburg; SW RAID supports up to 4x SATA HDDs via the SCU; non-RAID configurations support up to 6x SATA HDDs.
Concept: Patsburg A runs with Fujitsu LSI SW RAID (4x SATA via SCU); all SAS runs with HW RAID (mezzanine or PCIe controller); any RAID with more than 4 HDDs, or RAID 5/6, uses HW RAID (6 Gbps).
Copyright 2011 FUJITSU
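The "Concept" rules above reduce to a simple decision: onboard SW RAID only covers up to 4 SATA drives at RAID 0/1/10 (3 Gbps via the SCU); anything with SAS drives, more than 4 RAID drives, or RAID 5/6 needs a 6 Gbps HW RAID card. A hypothetical selection helper (not Fujitsu tooling) makes the rule explicit:

```python
# Sketch of the CX400 controller-selection rules from the slide above.
def pick_controller(drive_type, raid_drives, raid_level):
    """Choose SW or HW RAID per the CX400 'Concept' rules.

    drive_type: 'SATA' or 'SAS'; raid_drives: number of drives in the
    array; raid_level: 0, 1, 5, 6, 10, ...
    """
    needs_hw = (
        drive_type == "SAS"       # all SAS runs with HW RAID
        or raid_drives > 4        # SW RAID handles at most 4 drives
        or raid_level in (5, 6)   # parity RAID needs the HW card
    )
    return "HW RAID 6Gbps" if needs_hw else "SW RAID 3Gbps (onboard SCU)"

print(pick_controller("SATA", 4, 10))
print(pick_controller("SAS", 2, 1))
```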

50 Copyright 2011 FUJITSU

