VERITAS Software Partner Morning - La Défense, February 9, 2005


VERITAS Software Partner Morning - La Défense, February 9, 2005. Optimizing the Storage Infrastructure: VERITAS Storage Foundation.

Agenda
8:30 Welcome - Breakfast
9:00 Primary storage management - The foundational layers of a storage infrastructure with VERITAS Storage Foundation
9:45 Break
10:00 Innovations and advanced technologies - Cluster File System, data sharing, quality of service, application integration...
11:00 Case study: VERITAS Storage Foundation applied to Microsoft Windows 2003
11:45 Conclusion - Questions & Answers
12:00 End

VERITAS Software today: the #1 storage software vendor worldwide; $2.042 billion in FY2004 revenue; ~7,400 employees; present in 40 countries; 99% of the Fortune 500 are customers.

Primary Storage Management. Philippe Nicolas, Senior Solutions Marketing Manager EMEA, Philippe.Nicolas@veritas.com

Data and Storage Management - Organization and fast access to data
The VERITAS Storage Foundation™ 4.0 product family: VERITAS Volume Manager™ + VERITAS File System™, available in QS, Standard, and Enterprise editions. Features: 64-bit support (files and file systems up to 10 million TB); Quality of Storage Service (QoSS) with multi-device/multi-volume file systems; heterogeneous sequential data sharing (Portable Data Containers); raw-to-file-system migration. VERITAS Storage Foundation™ adds features (QoSS, data sharing, ...) on top of VERITAS File System™ and VERITAS Volume Manager™, the industry standard for more than 10 years!
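Since Storage Foundation is VxVM plus VxFS, a minimal provisioning flow looks roughly like the sketch below. Disk names, the "datadg" group, and sizes are illustrative assumptions; exact options vary by platform and release.

    # initialize two disks and create a disk group (Solaris-style device names assumed)
    vxdisksetup -i c1t1d0
    vxdisksetup -i c1t2d0
    vxdg init datadg disk01=c1t1d0 disk02=c1t2d0
    # create a 10 GB mirrored volume and put a VxFS file system on it
    vxassist -g datadg make datavol 10g layout=mirror
    mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
    mount -F vxfs /dev/vx/dsk/datadg/datavol /data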

Data and Storage Management - Organization and fast access to data
Features (continued): Storage Provisioning Templates; File Change Log (FCL); HA and concurrent data-sharing options (VERITAS Storage Foundation HA; VERITAS Storage Foundation Cluster File System (HA) for scale-out clusters; VERITAS Storage Foundation for Oracle RAC); VERITAS FlashSnap™, storage-independent snapshots (clones or Storage Checkpoints™); VERITAS SAN Volume Manager, at the heart of the storage network, installed on a server or embedded in an intelligent switch (Cisco or Brocade).

The different VERITAS Storage Foundation offerings

Data and Storage Management - Organization and fast access to data
[diagrams: online resizing; hot relocation and online file system reorganization; logging; replication over IP between file systems; multi-array mirroring; snapshots]
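All of these operations run against mounted file systems. As a sketch, assuming the volume and mount point created earlier, online growth looks like this:

    # grow the volume and its VxFS file system together, while mounted
    vxresize -g datadg datavol +5g
    # or resize just the file system (the new size is given in sectors by default)
    fsadm -F vxfs -b 41943040 /data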

Data and Storage Management - Database solutions
Optimized, unified management, access, and data formats for all applications built on Oracle, Sybase, or DB2. Everything on VxFS/VxVM and everything online: resizing, defragmentation, snapshots, backup... Performance gains that exceed raw-device performance. Support for single-instance, HA, and multi-instance configurations (parallel with 9i RAC). Adapts to changing needs and reduces downtime.

Data and Storage Management - Database solutions
VERITAS Storage Foundation for Databases: "the best of both worlds".

File Change Log (FCL) - Tracking operations on files
Tracks operations on files and directories: it logs the operations, not the data. Ideal for applications that scan the file system: backup, indexing, search... Tunable parameters; an API is available. [diagram: write(fd, buf, 1024) and creat("newfile", ...) calls being recorded by the VxFS File Change Log]
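As a heavily hedged sketch of how an administrator might turn the log on for a scanning application: the fcladm utility and these subcommands are an assumption based on later VxFS releases, so verify them against the manual pages for your version.

    # assumed syntax: enable the File Change Log on a mounted VxFS file system
    fcladm on /data
    # assumed syntax: print recorded changes for an indexer or incremental backup
    fcladm print /data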

Storage flexibility - Migrating from raw devices to the file system
Files mean management flexibility; raw devices mean performance. Conversion takes a few minutes, and growing the database's space becomes simple. [diagram: tablespaces on raw devices converted to files under a /dbfs file system]
(Speaker notes) Conversion vs. encapsulation. Conversion is time-consuming and painful, and may have kept you from moving to a file system in the past: add storage, configure the storage and file system(s), shut down the database, "dd" from raw to files, ALTER DATABASE RENAME FILE 'source' TO 'destination', start up the database, remove the raw volumes, remove the storage. Encapsulation facilitates the move to a file system, so you can suddenly benefit from all of the manageability we offer; we are making it easier: create a small volume for inodes, then encapsulate the storage.
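The conversion steps from the notes, written out as a sketch. Volume, path, and datafile names are illustrative; the SQL statement follows Oracle's documented ALTER DATABASE RENAME FILE form.

    # after shutting the database down, build the target file system
    mkfs -F vxfs /dev/vx/rdsk/datadg/dbvol
    mount -F vxfs /dev/vx/dsk/datadg/dbvol /dbfs
    # copy each raw datafile into the file system
    dd if=/dev/vx/rdsk/datadg/users_raw of=/dbfs/users01.dbf bs=1024k
    # then, in SQL*Plus:
    #   ALTER DATABASE RENAME FILE '/dev/vx/rdsk/datadg/users_raw' TO '/dbfs/users01.dbf';
    # restart the database, verify, and reclaim the raw volumes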

Example scenarios
Case 1 - Sun Cluster and RAC on raw: RAC runs on raw devices with Sun Cluster; the customer wants to move to SF for Oracle RAC and benefit from a CFS, I/O fencing, ODM...
Case 2 - Sybase on raw volumes: the customer wants to move to SF for Databases because Sybase has (finally) validated Quick I/O, and above all to benefit from easy file-system-based management. Minimal downtime and reduced administration effort (backup, mkfs, restore, ...).

Simple storage allocation
A storage allocation model based on configuration templates: volumes are configured automatically from templates, the original configuration is remembered, and templates can be exported to other servers. Examples: an "Oracle Database Template" (EMC storage, 3 mirrors, one mirror in BLDG A, two mirrors in BLDG B) and a "Marketing Template" (JBOD, no mirrors, resides in the intern's office).

VERITAS FlashSnap - Consistent, persistent point-in-time images of data
Eliminates unplanned downtime, shrinks the backup window, offloads processing to limit the impact on production, and resynchronizes only the changed blocks.
(Speaker notes) In addition, we offer other tools that help you reduce planned downtime for events like backup. For example, using the VERITAS FlashSnap option with Storage Foundation you can take a third mirror of a set of data, break it off, and deport it to another host for backup or other off-host processing, then simply re-introduce it to the volume. FlashSnap synchronizes only the changed bits of data, decreasing the time it takes to sync back up, so you can snap off volumes for off-host processing more quickly.
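A sketch of the off-host flow described in the notes, using the classic FlashSnap commands; disk group and volume names are illustrative.

    # prepare a third mirror and break it off as a snapshot volume
    vxassist -g datadg snapstart datavol          # attach and synchronize a snapshot plex
    vxassist -g datadg snapshot datavol snapvol   # break it off as volume 'snapvol'
    vxdg split datadg snapdg snapvol              # move the snapshot into its own disk group
    vxdg deport snapdg                            # hand it to the backup host
    # on the backup host: vxdg import snapdg; mount; back up; unmount; vxdg deport snapdg
    # back on production:
    vxdg import snapdg
    vxdg join snapdg datadg
    vxassist -g datadg snapback snapvol           # FastResync copies only changed blocks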

Standard recovery - Backup without snapshots
[timeline: full backup at 0:00, transactions all day, failure at 15:00; restore from the backup, replay transactions from 0:00 to 15:00, then return to service]
(Speaker notes) Usually you run your backups or incremental backups nightly. If your business runs all day and you then have a failure at 3:00 PM, you have to go back and recover from your last backup. This can be a very lengthy and tedious process that costs your organization a lot of downtime by the time you finally recover. System administrators need the ability to recover quickly from data corruption; VERITAS Volume Manager's FlashSnap option lets administrators recover immediately from corruption caused by user error or viruses.

Fast recovery - VERITAS Storage Checkpoints
[timeline: full backup at 0:00, on-disk snapshots at 10:00, 12:00, and 14:00, failure at 15:00; restore from the 14:00 snapshot, replay transactions from 14:00 to 15:00, then a VERY FAST return to service]
(Speaker notes) With snapshots taken at regular intervals (here every two hours), a failure at 3:00 PM only requires going back to the 2:00 PM snapshot and recovering from that point forward. This is an ideal, painless way to recover from logical errors and makes the business more resilient to unplanned downtime.

VERITAS Storage Checkpoints - Space-efficient images of data
No full copy: only changed blocks are kept (copy-on-write). The images are persistent and survive reboots. A snapshot is stored in the file system's free space, images can be rotated, and a snapshot "follows" the file system if the file system is moved. Snapshots can contain data and metadata or metadata only, snapshots are related to one another, and they can be mountable or non-mountable, in read-only or read-write mode.
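A sketch of day-to-day Storage Checkpoint handling; checkpoint and mount-point names are illustrative, and the checkpoint mount syntax may vary by platform.

    # create a point-in-time, copy-on-write image of the mounted file system
    fsckptadm create ckpt_0800 /db01
    fsckptadm list /db01
    # mount the checkpoint read-only for backup or file-level restore
    mount -F vxfs -o ckpt=ckpt_0800 /dev/vx/dsk/datadg/dbvol:ckpt_0800 /db01_0800
    # rotate old images when space is needed
    fsckptadm remove ckpt_0800 /db01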

VERITAS Storage Foundation - The GiantLoop benchmark

The GiantLoop/Solaris benchmark
GiantLoop is a US services company specializing in data management: assessment, architecture, testing, deployment, and administration. Its GiantLoop Testing and Certification (GTAC) lab is open to customers and vendors, with qualifications available for disk arrays, long-distance replication, and more. The Solaris benchmark compares the performance of the standard Sun/Solaris 9 tools against VERITAS Storage Foundation. www.giantloop.com

The GiantLoop/Solaris benchmark - Objectives
Ease of use, measured when creating or re-creating VxFS and UFS file systems. Availability, measured as recovery time (return to service) after a failure. Performance under load simulating heavy Internet-server activity (web, email, ...), generated and measured with the public PostMark* benchmark. *www.netapp.com/tech_library/3022.html

The GiantLoop/Solaris benchmark - Configuration
Test procedures were kept as standard as possible, with no particular tuning or vendor recommendations. Server: a standard Sun SF4810 (8 CPUs, 8 GB RAM), Solaris 9 + generic patch 11223-01, VERITAS Foundation Suite 3.5 (Storage Foundation) with DMP enabled, PostMark* 1.5. Storage: an EMC Symmetrix 8730 RAID array (1 TB) and a Eurologic SANbloc JBOD (500 GB). Connectivity: McData for the EMC array, Brocade for the Eurologic. *www.netapp.com/tech_library/3022.html
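A PostMark run shaped like this workload might look like the sketch below. The parameter names are PostMark's own; the values are illustrative and do not claim to reproduce GiantLoop's exact setup (the slide gives directory and file counts but not the transaction count).

    postmark <<'EOF'
    set location /vxfs_mount
    set subdirectories 1000
    set number 20000
    set transactions 50000
    run
    quit
    EOF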

The GiantLoop/Solaris benchmark - Results

    Function tested                            Solaris 9 (native tools)          Solaris 9 + VERITAS Storage Foundation
    FS creation (144 GB)                       30 s per FS; 30 min total         near-instant per FS; 5 min total
    Availability (recovery after               UFS 362.71 s                      VxFS 39.4 s
      a power failure)                         UFS + Log 274.71 s
    Performance (PostMark stress: 1000         JBOD: 181 ops/s                   JBOD: 2334 and 2552 ops/s
      directories, 20000 files per             EMC Symmetrix: 302.4 ops/s        EMC Symmetrix: 1640 and 1515.7 ops/s
      directory; 1, 4, 8, 12, and 16
      concurrent PostMarks = users)
    Run time (16 PostMarks)                    2497.3 s                          273.4 and 244.7 s
    CPU utilization (active/total              ~20%                              12%
      CPU time, 16 PostMarks)

VERITAS: 6x faster at file system creation; 7x better availability (against the best Solaris case, 9x in the best VERITAS case with VxFS + QuickLog); 15x faster on JBOD and 5.4x faster on the EMC Symmetrix.

The GiantLoop/Solaris benchmark - Results [charts]

The GiantLoop/Solaris benchmark - The financial angle
On standard Sun servers and configurations (Sun Fire 4810 with VERITAS Storage Foundation 3.5), roughly 10% of additional cost for VERITAS buys 15x performance (+1400%)* and 7x availability (+600%)**. www.giantloop.com/gtac_vxfs_abstract.shtml In other words, 10% more cost yields a 1400% gain in performance and a 600% gain in availability, without even counting the cost of downtime. *15x (+1400%) on JBOD (5.4x with EMC Symmetrix). **7x (+600%) versus UFS with logging (9x versus plain UFS). VERITAS Storage Foundation: the fundamental solution for the storage infrastructure.

Break - 15 minutes

Copyright © 2002 VERITAS Software Corporation. All rights reserved. VERITAS, the VERITAS logo, and all other VERITAS product names and slogans are trademarks or registered trademarks of VERITAS Software Corporation. VERITAS and the VERITAS Logo Reg. U.S. Pat. & Tm. Off. Other product names and/or slogans mentioned herein may be trademarks or registered trademarks of their respective companies.

Agenda
8:30 Welcome - Breakfast
9:00 Primary storage management - The foundational layers of a storage infrastructure with VERITAS Storage Foundation
9:45 Break
10:00 Innovations and advanced technologies - Cluster File System, data sharing, quality of service, application integration...
11:00 Case study: VERITAS Storage Foundation applied to Microsoft Windows 2003
11:45 Conclusion - Questions & Answers
12:00 End

Innovations and Advanced Technologies

Data and Storage Management - Quality of Storage Service (QoSS)
Writes land on disk space according to the configured storage classes; over time, data (files) is moved across the other storage classes according to the chosen criteria. This is transparent to the application: the file's path is unchanged. [diagram: Storage Foundation 4.0, with VERITAS File System (VxFS) and VERITAS Volume Manager (VxVM) spanning storage classes 1, 2, and 3]
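QoSS rests on multi-volume file systems built from VxVM volume sets. A minimal sketch, assuming two volumes representing two storage classes (names are illustrative; the placement criteria themselves were configured through the QoSS tooling rather than these commands):

    # two volumes on different storage classes
    vxassist -g datadg make tier1vol 50g
    vxassist -g datadg make tier2vol 200g
    # combine them into a volume set and create one file system across both
    vxvset -g datadg make datavset tier1vol
    vxvset -g datadg addvol datavset tier2vol
    mkfs -F vxfs /dev/vx/rdsk/datadg/datavset
    mount -F vxfs /dev/vx/dsk/datadg/datavset /data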

Data and Storage Management - QoSS applied to the data lifecycle
Data hops between primary and secondary storage: from class 1 through classes 2 and 3 to other classes, that is, from primary storage down to secondary storage. [diagram]

Data and Storage Management - Storage efficiency: data sharing
Today, Solaris, HP-UX, AIX, and Linux hosts each hold a copy: four times the storage, three of which are useless, which is TOO EXPENSIVE; and copy time grows with the amount of data, which is TOO SLOW.

Data and Storage Management - Storage efficiency: data sharing* (*sequential)
With Storage Foundation 4.0 on every host, an example processing sequence: Solaris records the data; HP-UX runs some batch jobs; AIX loads the data into the data warehouse; Linux backs the data up. This simplifies sequential processing, enables quick and easy migrations, reduces hardware dependence, and eliminates useless redundancy.

Example scenarios
Case 1 - File services: the customer runs HP-UX on Itanium for file services and could run Linux on the same server. The customer wants to be sure the future migration will be quick and simple. If the customer uses VxFS/OJFS, VxVM, and external storage, future migrations are easy: to prepare, simply upgrade to SF 4 (disk layout version 6 and disk group version 110), and that's it.
Case 2 - Telco billing: three servers, with Solaris for data entry and imports, HP-UX for the nightly batch jobs, and AIX for the nightly load into the data warehouse. The storage sits on a SAN, but each server owns its own storage, and NFS transfers the day's data for the batch run. Solution: deport the volume from Solaris, import it on HP-UX, process, deport; import on AIX, process, deport; then return it to Solaris. One third of the storage volume is used, processing is faster with no copies, and it beats NFS with no network overhead.
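The deport/import loop from case 2, sketched for one hop. Group and volume names are illustrative, and the fscdsconv line is an assumed syntax for the cross-endian case, so check the fscdsconv manual page for your release.

    # on Solaris, once the nightly load completes
    umount /feed
    vxdg deport feeddg
    # on HP-UX, pick up the same LUNs with no copy
    vxdg import feeddg
    vxrecover -g feeddg -s                       # start the volumes
    mount -F vxfs /dev/vx/dsk/feeddg/feedvol /feed
    # if the platforms differ in endianness, convert first (assumed syntax):
    #   fscdsconv -f /var/tmp/recov -t os_name=HP-UX /dev/vx/rdsk/feeddg/feedvol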

Sharing at the file system level - Cluster File System
Also called a Shared Data Cluster in the industry; very few offerings exist on the market. All nodes "mount" the file system and access the data, and every node understands the file system's structure.

Sharing at the file system level - Cluster File System: definition and architecture
A cluster framework manages the cluster and provides its services: node join and leave, node loss, and inter-node dialogue (caches, locks, ...). A DLM, a lock manager extended across the cluster. No metadata server, unlike SAN file systems. Single File System Image (SFSI). I/O fencing.

Sharing at the file system level - VERITAS Cluster File System
Available in the VERITAS Storage Foundation Cluster File System (HA) suite and in VERITAS Storage Foundation for Oracle RAC: VERITAS Cluster Server for the cluster layer (membership, ...), VERITAS Cluster Volume Manager, and VERITAS Cluster File System. Primary/secondary architecture for the metadata.

Sharing at the file system level - VERITAS Cluster File System
One server (the primary) accesses and updates the file system's metadata, and accesses the data directly; all nodes access the data directly over the SAN. If the original primary is lost, a new primary is elected automatically from among the secondaries. [diagram: SAN, LAN, central metadata server]

Sharing at the file system level - VERITAS Cluster File System
Integration with a cluster layer: inter-node messaging, membership, and the GLM (cluster-wide lock manager) sit on top of the Global Atomic Broadcast protocol (GAB) and the Low Latency Transport protocol (LLT). [diagram: SAN, LAN, GAB, LLT, GLM, central metadata server]
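On each node, a CFS mount is the shared variant of an ordinary VxFS mount, as in this sketch (names are illustrative; CVM and the cluster stack must already be running):

    # import the disk group as shared (CVM), then cluster-mount on every node
    vxdg -s import appdg
    mount -F vxfs -o cluster /dev/vx/dsk/appdg/appvol /shared
    # the same file system is now writable on all nodes; GLM coordinates locks over GAB/LLT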

Sharing at the file system level - VERITAS Cluster File System: examples
Web or ftp server farms (for example, all nodes read-only and one node read-write); scientific computing (HPC) and multimedia applications; parallel database applications such as Oracle RAC or partitioned DB2.

Sharing at the file system level - VERITAS Cluster File System: web/ftp farm
[diagram: web/ftp servers on a first and second host with heartbeat and lock management between them; the Cluster File System runs over the Cluster Volume Manager (an optional layer), a cluster storage network, and shared disks]

Parallel applications - Concurrent access
Multiple instances access a single database, so there is always an active instance. Homogeneous data sharing: simultaneous read/write access from several systems. The highest level of performance (access paths are multiplied and aggregated), availability (redundant access paths), and administration (a consistent image of the data across systems, SFSI). [diagram: LAN, RAC instances on VERITAS CFS]

Parallel applications - Concurrent access
Cluster File System mode with Oracle RAC, I/O fencing, and the interconnect. Tested and certified by Oracle. Available on Solaris, HP-UX, AIX, and Linux, and can be coupled with replication for a complete DR model. The first and only data-sharing CFS supported and validated by Oracle on Solaris! Products: VERITAS Storage Foundation Cluster File System, VERITAS Storage Foundation for Oracle RAC.

Fault tolerance - VERITAS solutions for Oracle 9i RAC*
[diagram: CRM, ERP, and SCM workloads on RAC instances from the 1st to the Nth server, with heartbeat and lock management, VERITAS SF for Oracle RAC, and a storage network] Available on AIX, HP-UX, Solaris, and Linux. *RAC stands for Real Application Clusters.

VERITAS Storage Foundation - More value by combining technologies
[diagram: Storage Foundation 4.0 nodes combining CFS, Portable Data Containers (PDC), and QoSS]

VERITAS Storage Foundation™ 4.2 for Windows. Philippe Bonbonnelle, Presales Engineer, Philippe.Bonbonnelle@veritas.com

Storage Solutions for Windows 4.2 - Storage Foundation base + options
Storage Foundation for Windows; Storage Foundation HA for Windows (SFW plus VERITAS Cluster Server). Options: Dynamic Multi-Pathing Option; FlashSnap Option; Volume Replicator Option; Cluster Option for MSCS; Global Cluster Option; Cluster Agent for Microsoft SQL; Cluster Agent for Microsoft Exchange; Cluster Agent for EMC SRDF; Cluster Agent for IBM PPRC; Cluster Agent for Hitachi TrueCopy.

Objectives of Storage Foundation for Windows
Advanced volume management for Windows 2003 and 2000: improved availability, performance, and manageability in Windows 2003 and 2000 environments; easier online management of stored data; fewer planned and unplanned outages; a common management interface across different Windows and Unix environments; independence from storage hardware choices.
(Speaker notes) The key to higher availability is reducing downtime, both planned and unplanned. As we will discuss later, the logical disk management capability in Windows Server 2003 is a light version of VERITAS' volume management technology. VERITAS Storage Foundation for Windows enables enhanced levels of availability, performance, and manageability in Windows Server 2003 and Windows 2000 environments. Everything in today's presentation can be done online; if you only remember one thing, remember the word "online". By virtualizing the storage, VSFW removes the physical limitations of hardware arrays; once virtualized, the storage can be managed much more flexibly, staying online during many operations that previously required a server reboot. In heterogeneous environments, administrators otherwise need a different tool for each array; with VERITAS, administrators manage all storage from one easy-to-use console and do not have to learn different tools for each environment. Online administration of heterogeneous environments increases IT staff productivity, and that plus reduced downtime reduces TCO.

Storage Foundation: a leader in the industry
VERITAS Volume Manager technology is used by 1,600 of the Global 2000 to keep their data available. Microsoft used VERITAS to develop the disk management in Windows 2000 and 2003; the basic version of the product is known as the "Logical Disk Manager" (LDM). Storage Foundation for Windows is jointly developed by VERITAS and Microsoft. Industry LEADER in storage virtualization (source: Gartner Dataquest 2002).
(Speaker notes) VERITAS volume management technology is the undisputed leader in storage virtualization and was chosen by users "as a product they love". A light version of VSFW is built into every version of Microsoft Windows 2000 and Windows Server 2003; this light version is called Logical Disk Manager, or LDM. We have a large engineering team in Redmond, Washington, about one mile down the road from Microsoft, and our engineers worked on the Microsoft campus in the joint development of LDM. Also mention support for Solaris, HP-UX, AIX, and Linux.

Storage Foundation for Windows certification: an assurance of compatibility
The same certification level applies to Microsoft and to ISVs; all tests are performed by VeriTest, with three certification levels, one for each edition of Windows 2003.
(Speaker notes) With the release of Windows Server 2003, Microsoft updated the "Certified for Windows" program. Certification assures customers that these products were specifically built for and tested on the new OS platforms; these stringent requirements are the same ones Microsoft's own applications have to meet. Certification is performed by a third-party company, VeriTest. For Windows Server 2003, VSFW is Datacenter certified, which also means it is certified on the Server and Enterprise editions. Volume Manager for Windows 2000 is certified on Windows 2000 Server, Advanced Server, and Datacenter.

Storage Foundation for Windows - Additional features for Windows
Feature comparison across NT4 LDM, Windows 2000 LDM, Windows Server 2003 LDM, and VSFW (the per-column support ratings were lost in transcription): heterogeneous storage management console; proactive storage resource monitoring and notification; command-line support; RAID 0, 1, 5; online growth of simple/spanned volumes; online growth of RAID 0, 1, 5, and mirrored stripes; drag-and-drop storage moves; easy storage migration using disk groups; never run out of storage; better performance with striping across hardware mirrors; hot spares, hot relocation, and undo hot relocation; RAID-5 and mirror logging for quick recovery; online performance monitoring and tuning; Dynamic Multi-Pathing (DMP) for SANs; private disk group protection for SANs; cluster support (dynamic volumes); FlashSnap option.
(Speaker notes) Why upgrade from LDM to VSFW? This table compares the features of the Logical Disk Manager (LDM) that ships with Windows Server 2003 (and Windows 2000) against Storage Foundation for Windows. VERITAS developed LDM in cooperation with Microsoft; the fully featured VSFW is an upgrade that replaces LDM and extends its capabilities. Points to call out: LDM can only grow simple and spanned volumes online; online performance monitoring and tuning; DMP has full capabilities with VERITAS but limited ones with LDM (MPIO is a DDK and self-test kit made available to Microsoft partners for developing multipath drivers, and no multipathing support ships with Windows Server 2003); LDM does not support clustering, a point to really stress in an MSCS environment; for snapshots, Windows Server 2003 offers VSS, discussed later, with an API for taking snapshots and copy-on-write. In the original legend, filled circles meant fully supported, bold open circles partial or limited support, and light open circles no real support, as with MPIO.

Storage virtualization - How it works
Physical LUNs are placed into logical "dynamic disks"; a disk-style view is preserved, and all manipulations happen inside the logical disk. [diagram: a host server control console on the LAN, with data flowing to virtualized disks backed by a brand "A" array, a brand "B" array, and a JBOD]
(Speaker notes) SFW lets system administrators manage environments more efficiently by virtualizing storage with logical volume management. Virtualization takes multiple heterogeneous physical storage devices and combines them into logical (virtual) storage devices that are allocated to applications and users at will. Once virtualized, storage can be managed much more flexibly and kept online during many operations for which the server previously had to be taken offline. With virtualization, you can do everything online without taking systems down or disrupting access to data.

Storage virtualization
VERITAS Storage Foundation automates and simplifies the administration of complex storage estates. Benefits: hardware independence; an intuitive GUI and a CLI to manage storage performance, availability, and utilization; proactive management, monitoring, and notification; online volume modification; data migration with a simple drag-and-drop. [diagram: the I/O data path through the storage virtualization layer]

Storage virtualization
Availability: software RAID (mirrored, striped, mirrored-striped, RAID-5); online volume capacity management; drag-and-drop data migration. Performance: VxCache volume-level caching; bottleneck detection and data reallocation to reduce hot spots; Dynamic Multi-Pathing and MPIO; preferred-read plex for mirrored volumes.
(Speaker notes) As discussed earlier, a light version of VERITAS volume management technology ships in the Windows 2000 and Windows Server 2003 operating systems. Moving up to the advanced storage management of VERITAS Storage Foundation for Windows, you can create and grow RAID volumes, easily perform storage migration, monitor your environment to increase performance, and enable local-area disaster recovery with stretch mirroring. Beyond the management and availability benefits, there are performance benefits from striping and load balancing across arrays to accelerate reads and writes; VSFW lets you select which mirror to read from, which can bring significant gains by reading from the highest-performing disks. Performance improvement with the new VxCache feature is discussed next.

Storage Foundation for Windows - Availability: drag-and-drop migration of dynamic disks
Storage consolidation; DAS-to-SAN migration; performance; array upgrades; hardware changes. Additional operations: sub-disk join and sub-disk split.

Storage Foundation for Windows - Maximizing performance
Increase Exchange performance by 25-40% with VERITAS VxCache block-level caching. PAE memory support up to 64 GB on Windows 2000/2003 Advanced/Enterprise and Datacenter editions; up to 4 GB on non-PAE systems. [diagram: VxCache in the I/O data path in front of system files, Exchange databases, and Exchange logs]

Storage Foundation for Windows - Maximizing performance
Performance optimization through on-the-fly sub-disk migration maintains balanced performance across the disks of a SAN. I/O is tracked over time at the sub-disk and dynamic-volume levels: read requests/s, write requests/s, read blocks/s, write blocks/s, average time per read block, average time per write block.
(Speaker notes) VSFW lets you optimize storage performance based on your usage patterns: it identifies storage bottlenecks and lets you migrate data to other devices while applications and their data remain online and available. Use I/O statistics tracking to identify hot spots, monitoring the six I/O metrics above at both the sub-disk and dynamic-volume levels, then use online sub-disk move to relocate the area of high I/O to an area with more available bandwidth, all while users are online and even accessing the data being moved.

Storage Foundation for Windows - Snapshot functions
Recover data in minutes: snapshots enable near-instantaneous data recovery and are simple and efficient to use through the GUI. Snapshots can be consumed by backup tools (NetBackup, Backup Exec, ...). Volume snapshots of Exchange storage groups or SQL databases; losing the original volume does not affect the snapshot. Application specifics: for Exchange, integration with Windows 2003 Volume Shadow Copy Service (VSS); for SQL, integration with the SQL Virtual Device Interface.
(Speaker notes) With the 4.0 release we introduced support for the Windows Server 2003 Volume Shadow Copy Service. FlashSnap's VxSnap allows easy creation of VSS-enabled snapshots for quick recovery and off-host processing. By leveraging VSS, Microsoft's approved and supported technology for taking snapshots, SFW can create exact point-in-time copies of data to be used instantly for backup, quick recovery, or other off-host processing.

Storage Foundation for Windows - Integration with the Windows Server 2003 storage APIs
Automated System Recovery (ASR): ASR backs up and restores the Storage Foundation core, dynamic disks, and volume layouts. Virtual Disk Service (VDS): a VDS software provider. Volume Shadow Copy Service (VSS): a VSS requester and provider for FlashSnap. Multi-Path I/O (MPIO): coexistence with MPIO.
(Speaker notes) VSFW integrates (and, for MPIO, coexists) with the new Windows Server 2003 storage management technologies. VSFW supports ASR for backup and recovery: when an ASR backup completes, the VSFW binaries, disk layouts, and disk group layouts are backed up, and an ASR recovery restores VSFW and all of the disk layouts as part of the process. VSFW is exposed as a VDS software provider, so VDS applications such as the Windows Server 2003 Disk Administrator GUI and the diskpart CLI utility can access servers running VSFW. VSFW/FlashSnap is exposed as a VSS software provider, so VSS requesters (the VxSnap CLI in VSFW, BE 9.x, NBU 5.0, CommVault, Legato, etc.) can use FlashSnap split-mirror snapshot capabilities via the VSS infrastructure. VSFW allows DMP to coexist with any MPIO multipath drivers; since no MPIO drivers were available from any hardware array vendor, the VSFW team wrote a prototype MPIO multipath driver for testing purposes.

Storage Foundation™ 4.2 for Windows & iSCSI

Storage Foundation & iSCSI
Integrated in SFW 4.2: the Microsoft iSCSI initiator and the QLogic QLA4010 iSCSI HBA. Dynamic disk support over iSCSI (not available with LDM) across SFW, SFW HA, MSCS, VVR, and FlashSnap. DMP requires an MPIO DSM for certain storage vendors.
(Speaker notes) IP storage is a new storage technology that is growing in popularity. Many organizations find that IP storage opens an alternative for moving Windows servers into a SAN environment with lower hardware and management costs. IP storage and iSCSI SANs are increasingly being deployed with servers in areas not served by fibre channel SANs, and storage industry analysts predict that IP storage and iSCSI SAN use will grow significantly in the next few years. The entire Windows 4.2 stack (SFW, SFW HA/VCS, and options) supports both Microsoft's iSCSI initiator and hardware iSCSI initiators. The Microsoft iSCSI initiator alone supports only basic disks, which is a huge advantage for VERITAS. A comprehensive white paper discusses VERITAS Foundation support for iSCSI in more detail.

iSCSI vs. Fibre Channel
[comparison table of FC vs. iSCSI on performance, availability, management, cost, remote access, and security; the per-cell ratings were lost in transcription] iSCSI fits less-critical servers, remote sites, and customers without a SAN infrastructure.
(Speaker notes) Many organizations are beginning to use the enormous potential of SANs to keep server applications available, but fibre channel SANs have drawbacks that have slowed adoption outside high-end datacenter servers: they are very expensive to purchase and require specialized SAN equipment and specialized training. iSCSI SANs overcome many of these disadvantages: they are based on standard Ethernet networking equipment, so they are easy to learn, and they bring lower infrastructure and hardware costs, faster learning curves, and lower-cost management, leveraging existing Ethernet investments and expertise with less SAN complexity. iSCSI SANs work over local and wide area networks, so storage resources can be accessed anywhere in an organization's infrastructure, enabling longer-distance replication, disaster recovery, and centralized backup. They also provide better security than FC SANs, as Ethernet's security capabilities are mature. The one drawback is performance: FC SANs today run at 2 Gbit/s, double the 1 Gbit/s of Gigabit Ethernet. While iSCSI SANs are expected to match or exceed FC performance over time, this is a consideration for high-I/O applications and servers, especially in the datacenter.

High Availability Architecture
(Speaker notes) Good morning. It's a little hard to follow that whimsical view of business, but I hope it gave you a view of the issues most of you face in your business today. What I'd like to do now is take you a little deeper into how VERITAS can offer you a strategy for increasing the availability of your business information.

Storage Foundation with the MSCS option
Dynamic disk support with Microsoft Cluster Server (MSCS): Storage Foundation for Windows is the only tool that offers full dynamic-disk support with MSCS or VERITAS Cluster Server (VCS). [diagram: two Windows servers failing over against VSFW-managed shared storage, with separate disk volumes for each server organized into disk groups 1 and 2] SAN and cluster disk management with shared storage; multiple disk groups, which are critical in clustering and SAN applications; support for dynamic volumes with VERITAS Cluster Server and MSCS.

Removing the single point of failure in MSCS - Mirroring the quorum
MSCS supports only basic disks, and loss of the quorum is an MSCS weakness. Storage Foundation for Windows allows the creation of a dynamic quorum resource spanning several disks, so the quorum can be mirrored (including across a stretched cluster), offering an alternative to Microsoft MNS (Majority Node Set).
(Speaker notes) "Microsoft does not provide support for dynamic disks in a server cluster (MSCS) environment. You can use the add-on product from Veritas to add the dynamic disk features to a server cluster." http://support.microsoft.com/support/kb/articles/Q237/8/53.ASP Additional fault tolerance is provided by mirroring the MSCS quorum disk, something Microsoft LDM cannot do. Standard MSCS supports a basic quorum on only a single disk, a single point of failure: MSCS fails if the basic quorum disk fails. VSFW allows the creation of a dynamic quorum resource and can mirror the quorum disk, which removes the single point of failure. On its web pages, Microsoft recommends using VERITAS volume management in MSCS environments.
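As a heavily hedged sketch (the SFW command-line forms below are assumptions modeled on the Unix VxVM CLI; on Windows this is normally done from the management GUI), creating and mirroring a dynamic quorum volume might look like:

    rem assumed SFW CLI syntax; verify against the SFW 4.2 documentation
    rem create a dynamic disk group from two disks, a small volume, and a mirror
    vxdg -g QuorumDG init Harddisk2 Harddisk3
    vxassist -g QuorumDG make QuorumVol 1000
    vxassist -g QuorumDG mirror QuorumVol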

Cluster architectures with Storage Foundation & the MSCS option: Local Clustering (LAN); Metro DR with Remote Mirroring & MSCS.
Key message: companies need the flexibility to scale from local high availability to disaster recovery. VERITAS provides a solution that scales from LAN to MAN to WAN as simply as turning on license keys. Integration between Storage Foundation and Cluster Server ensures that in an outage not only is the data protected, but applications are kept running. With a single solution, VERITAS Cluster Server, a company can go from high availability within a single datacenter to wide-area disaster recovery; with VERITAS Storage Foundation, data can be protected from within a single storage array up to wide-area DR and data protection over IP. It's as simple as that!
(Speaker notes) More information on what each configuration provides, with advantages and disadvantages:
Local clustering. Shared-storage clusters are considered second-generation clusters and are today the most prevalent (over shared-nothing) for providing HA through application failover, primarily for RDBMS applications such as Oracle, DB2, and Sybase. Questions to qualify a customer: Do you have a storage area network (SAN) infrastructure? Is your datacenter in one location, or are components near campus or off site? Are you satisfied with local availability? Architectural characteristics: a redundant server, network, and storage architecture for application and data availability, linking multiple servers with shared storage; systems linked with private heartbeats, usually Ethernet, to communicate state (VCS uses a fast proprietary protocol, GAB/LLT); each system in the cluster can access the storage of any other system, with no replication or mirroring of data (as opposed to a shared-nothing or stretch cluster); a SAN facilitates clusters larger than two nodes and is typically present (switches or hubs); all cluster components, servers, SAN infrastructure, and storage are co-located on a single site. Products: VCS and Storage Foundation. Advantages: applications can easily be migrated from one server in the cluster to any other, improving uptime; redundant components prevent single points of failure; the SAN enables data access and sharing. Disadvantages: complexity and cost.
Metro DR with remote mirroring. Typically deployed when customers want DR over short distances and have a SAN infrastructure in place. Many VERITAS customers in the Wall Street area have set up campus clusters with Volume Manager mirroring to separate their datacenters by several miles, providing DR against disasters such as terrorist attacks, though not long-distance DR against a natural disaster such as an earthquake. Characteristics: a single VCS cluster spanning multiple locations, with multiple VCS nodes possible at each site (two sites maximum); VxVM creates a mirror with plexes in two locations; no host or array replication is involved. With new data switches using DWDM, distances up to 100 km have been claimed, and VCS is testing with some of these. Requires Professional Services to implement; the separation range depends on the infrastructure provider. Products: Storage Foundation and Cluster Server (or any third-party array vendor that provides data mirroring). Advantages: an effective disaster recovery configuration at low cost; quick restore in the event of a disaster (the remote mirror is always in sync); maximum use of the fibre infrastructure the company already has in place; most disasters are localized, and this configuration protects against most of them. Disadvantage: the cost of the DWDM/fibre infrastructure. The key to this architecture slide is that a customer who purchases Volume Manager and Cluster Server (or Foundation Suite/HA) achieves local availability as well as metropolitan disaster recovery at no additional cost. Metropolitan DR (also known as campus clustering or stretch clustering) has been prominent with companies whose service-level agreements do not require a DR site across the country: rather than investing in a site thousands of miles away, the customer invests in a DR datacenter in the same city, with tremendous cost savings. The minimal requirement is a SAN infrastructure; the distance limit depends on the latency the customer is willing to afford, and VERITAS recommends staying within 100 km. The remote mirroring technology in Volume Manager replicates the data between the two sites synchronously; if the customer has already purchased Volume Manager and Cluster Server, this is a bonus that is part of the feature set. A customer example: a large NYC bank with fibre under the Hudson and its remote site in New Jersey, sufficient to meet its DR needs at an affordable price.
Metro DR with replication. A VCS shared-nothing configuration using replication between nodes to allow geographical separation of the cluster, providing both HA and DR benefits. Due to latency, the separation permitted is not as extensive as in a wide-area TCP/IP solution, but it provides a straightforward, single-cluster solution for many DR scenarios. The replication must be synchronous; currently VVR and SRDF replication are supported. The solution supports a cluster of up to 32 nodes, with some nodes at the primary site and some at the secondary; in the event of a failure, VCS first attempts to fail the service group over to a node at the primary site before failing over to the remote site. When should this be positioned over the campus cluster with VM mirroring? As a general rule, a customer who does not want to invest in a SAN can run private Ethernet networks for VCS heartbeating and set up Metro DR with replication; a customer with a SAN infrastructure in place would implement Metro DR with remote mirroring. This configuration is supported with VVR, SRDF, and TrueCopy in a Solaris environment, and with SRDF only in a Windows environment. There are distance limitations: the distance can be greater than in a campus cluster architecture but less than in a wide-area DR architecture, because of the LLT (heartbeat) connections; the configuration can be stretched as far as the network latency for LLT remains acceptable, generally no more than a 500 ms round-trip response between the two sites.
Wide-area disaster recovery. GCM is both a management solution and a DR solution involving multiple VCS clusters; of the HA and DR solutions offered by VERITAS, it is the only one involving multiple clusters (the stretch clusters use multiple sites but always a single cluster). GCM is used for DR where unlimited geographical separation is called for, or when management from a single console is needed for an enterprise's clusters worldwide. The management solution (base option) does not implement replication; the DR option, which builds on the base option, always uses replication, either host-based from VERITAS or array-based from third-party vendors such as EMC (SRDF) or HDS (TrueCopy). It can support any distance, multiple replication solutions, multiple clusters, and multiple operating systems.
Storage Foundation with MSCS - Windows 2003 & 2000

Storage Foundation HA (VERITAS Cluster Server)
A solution that scales up to 32 nodes, with easy wizard-driven installation, configuration, and administration. SAN/NAS architectures. Broad hardware support (AIX, Sun, HP, Intel, AMD, EMC, Hitachi, NetApp, ...) and broad software support (AIX, Solaris, HP-UX, Linux Red Hat & SUSE, Windows 2000, Windows 2003). A heterogeneous strategy: one clustering solution for all environments, with a single Java GUI, web console, and CLI.
(Speaker notes) VCS is the most scalable clustering solution available to date, supporting everything from two-node up to thirty-two-node clusters. It is architected for the SAN, with broad support for pre-configured application agents that speed deployment, and broad support for third-party disks. It is a key component of our GEOCluster product, and it is designed from a common source base to make porting to new platforms simple. VERITAS currently has the best heterogeneous strategy available on the market.

A cluster with an internal view of resource groups
A "service group" gathers the elements an application needs in order to run; agents monitor the state of each resource.
(Speaker notes, animated slide) An application service is the service the end user perceives when accessing a particular network address. It is typically composed of multiple resources, some hardware- and some software-based, all cooperating to produce a single service: for example, a database service may be composed of logical network (IP) addresses, the DBMS software, underlying file systems, logical volumes, and a set of physical disks managed by the volume manager. If this service, typically called a service group, needs to migrate to another node for recovery, all of its resources migrate together to re-create the service on the other node without affecting other service groups. A single large node may host any number of service groups, each providing a discrete service to networked clients and each monitored and managed independently: a service group can be automatically failed over or manually idled (for administrative or maintenance reasons) without necessarily impacting the others. Of course, if the entire server crashes (as opposed to just a software failure or hang), then all the service groups on that node must be failed over elsewhere. At the most basic level, fault management means monitoring a service group and, when a failure is detected, restarting it automatically, either locally or on another node depending on the type of failure; in the case of a local restart, perhaps only a single resource within the group needs restarting to restore the application service. Agents have three functions, each resource-specific: start, stop, and monitor the resource. Some agents are built into VERITAS Cluster Server (disk, file, file share, print share, IIS, IP, service group/application); others are distributed separately, for databases (Oracle, MySQL, DB2, etc.) and for applications (Exchange, SAP, Siebel, PeopleSoft, Apache, NetBackup, etc.). See VNET for the complete agent listing.
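A minimal main.cf sketch of such a service group. Node names, paths, the application itself, and the NIC name are illustrative assumptions; the resource types and attributes follow the standard VCS bundled agents.

    // hypothetical two-node service group: disk group -> mount -> application, plus an IP
    group AppSG (
        SystemList = { nodeA = 0, nodeB = 1 }
        AutoStartList = { nodeA }
        )

        DiskGroup app_dg (
            DiskGroup = appdg
            )

        Mount app_mnt (
            MountPoint = "/app"
            BlockDevice = "/dev/vx/dsk/appdg/appvol"
            FSType = vxfs
            FsckOpt = "-y"
            )

        IP app_ip (
            Device = eth0
            Address = "192.168.1.10"
            NetMask = "255.255.255.0"
            )

        Application app (
            StartProgram = "/app/bin/start"
            StopProgram = "/app/bin/stop"
            MonitorProcesses = { "/app/bin/appd" }
            )

        // dependencies: agents bring resources online bottom-up and offline top-down
        app_mnt requires app_dg
        app requires app_mnt
        app requires app_ip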

Cross-platform cluster management
Manage all clusters from a single console, with a choice of web console, Java GUI, or CLI.
(Speaker notes) The GUI console enables IT managers to monitor their clusters closely and see which clusters are online or offline, the health of applications, and so on. On the left, the cross-platform cluster management tool monitors multiple clusters on multiple platforms from a single GUI; in this case the environment consists of Windows SQL, Windows Exchange, Linux, and Solaris. On the right, a window monitors each service group; error logs are tracked and viewable by users with the appropriate privileges.
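The CLI equivalents of the console operations look like this sketch (group, resource, and node names are illustrative):

    hastatus -sum                      # summary of cluster, system, and group states
    hagrp -state AppSG                 # state of one service group
    hagrp -switch AppSG -to nodeB      # planned migration of a group to another node
    hares -state app_ip                # state of an individual resource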

Cross-platform cluster management
(Speaker notes) This is a view from a personalized web page specific to a user and his or her privileges. This window, also called myVCS, provides a personalized, scalable, web-based view of VCS (the specific environment the person has access to), wizard-based configuration, a set of extensible pre-designed layouts, and group- and system-level selection. The views consist of different blocks of information, such as consolidated object states, per-system object states, resource dependency graphs, and recent critical log messages. *NNTP virtual servers are not supported in this release.

Storage Foundation HA for Windows - Application agents
Agents included in Storage Foundation HA: file servers, print servers, generic services & applications, IIS, MSVirtualServer, NetBackup. Optional cluster agents for Microsoft SQL, Microsoft Exchange, and Oracle 8i & 9i. Installation wizards make the agents easy to deploy.

Storage Foundation HA for Windows - Third-party replication support option
Enterprise agents: EMC SRDF, IBM PPRC, Hitachi TrueCopy, VERITAS Volume Replicator. [diagram: replication from Site A to Site B] Replication role reversal as well as DNS updates.
(Speaker notes) New enterprise agents for VCS, similar to the EMC SRDF agent: IBM PPRC (version?) and Hitachi TrueCopy (version?). VPI integration and documentation for supporting third-party replication with SFW HA 4.2; supports replication management similar to the EMC SRDF support.

Storage Foundation HA for Windows - Architecture: SFW HA for Microsoft Virtual Server 2005
[diagram: two x86 IA-32 servers, each running Windows Server 2003 and Microsoft Virtual Server 2005 hosting Windows virtual machines on virtual hardware, with VERITAS Cluster Server on both]
(Speaker notes) Benefit: comprehensive system availability and management, minimal downtime, and a reduction in IT operational costs using existing computing infrastructure. VERITAS Cluster Server, found in Storage Foundation HA for Windows, is the industry's leading open-systems clustering solution, providing virtual server availability across local and remote datacenter configurations. VERITAS clustering technology automates three key components of virtual server availability: bringing the storage to a usable state, bringing up the Microsoft Virtual Server 2005 virtual guests, and bringing up the network identity of the virtual guests known by clients. MVS agent: comprehensive management of Microsoft Virtual Server 2005 virtual guests and deep-level monitoring of the guests, helping ensure the cluster is fully aware of the state of each underlying device. VCS enables failover of Microsoft Virtual Server 2005 virtual guests independently within the local host, the local cluster, and globally dispersed clusters.

Storage Foundation HA for Windows: Bundled Agent for Microsoft Virtual Server 2005, 'MSVirtualServer'
Failover of the physical machine with SFW HA; failover of Windows 2000/2003 virtual servers. This is a new bundled agent in VCS 4.2, called 'MSVirtualServer'. It supports failover of virtual machines, either among virtual machines on physical hosts or between physical hosts. It does not imply support for VCS running within the virtual machines (that will be tested in a future release).
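A hypothetical main.cf fragment showing how such a virtual-guest resource might sit in a service group. 'MSVirtualServer' is the agent name given above, but the attribute name (VirtualMachineName) and all values are invented for illustration and must be checked against the agent documentation:

    group VM_Group (
        SystemList = { HostA = 0, HostB = 1 }
        )

        // Hypothetical attribute: the MSVS 2005 guest to monitor and fail over
        MSVirtualServer vm_guest (
            VirtualMachineName = "W2K3-Guest1"
            )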

Reduce Complexity with a Single Clustering Solution
Local Clustering (LAN); Metropolitan Disaster Recovery (MAN), either Metro DR with Remote Mirroring or Metro DR with Replication; Wide Area Disaster Recovery (WAN).
Key Message: It is important for companies to have the flexibility to scale from local high availability to disaster recovery. VERITAS provides a solution that scales from LAN to MAN to WAN as simply as turning on license keys. Integration between Storage Foundation and Cluster Server ensures not only that data is protected in the event of an outage but also that applications keep running. With a single solution, VERITAS Cluster Server, a company can go from providing high availability to applications in a single datacenter to providing disaster recovery over a wide area. With VERITAS Storage Foundation, data can be protected anywhere from within a single storage array to disaster recovery and data protection over a wide area using IP. It's as simple as that!
More information on what each configuration provides, and the advantages and disadvantages of each:
Local Clustering
Shared-storage clusters are considered second-generation clusters, and today are the most prevalent (over shared-nothing) for providing HA through application failover, primarily for RDBMS applications, e.g., Oracle, DB2, Sybase, etc.
Questions to ask your customer to see if this infrastructure is appropriate for their environment: Do you have a storage area network (SAN) infrastructure? Is your data center in one location, or are other components of your data center in another location near campus or off site? Are you satisfied with local availability?
Architectural characteristics of this configuration include: a redundant server, network, and storage architecture for application and data availability through the linking of multiple servers with shared storage; systems linked with private heartbeats, usually Ethernet, which they use to communicate state (VCS uses a fast proprietary protocol, GAB/LLT, to communicate status; a heartbeat configuration sketch follows these notes); each system in the cluster can access the storage of any other system; no replication or mirroring of data, as opposed to a shared-nothing or stretch cluster; a SAN facilitates larger clusters (> 2 nodes) and is typically present in all clusters, i.e., switches or hubs are used; all cluster components (servers, SAN infrastructure, storage) are co-located on a single site.
What products can be offered to support this solution? VCS and Storage Foundation.
Advantages: applications can easily be migrated from one server in the cluster to any other, facilitating application uptime; redundant components prevent single points of failure; use of a SAN enables data access and sharing. Disadvantages: complexity; cost.
Metro DR with Remote Mirroring
This architecture typically gets deployed when customers want DR over short distances and have a SAN infrastructure in place. Many VERITAS customers in the Wall Street area have set up campus clusters with VM mirroring to separate their data centers by several miles, thus providing DR against disasters such as terrorist attacks. This would not provide long-distance DR against a natural disaster such as an earthquake.
Characteristics include: a single VCS cluster spanning multiple locations; multiple VCS nodes possible at each site (2 sites maximum); VxVM used to create a mirror with plexes in two locations; no host or array replication involved. With new data switches using DWDM, support for distances up to 100 km has been claimed; VCS is testing with some of these. Requires Professional Services to implement. Separation range depends on the infrastructure provider. Products: Storage Foundation and Cluster Server (or any 3rd-party array vendor that provides data mirroring).
Advantages: an effective configuration for disaster recovery at a low cost; quick restore in the event of a disaster (remote mirroring is quick, always in sync); maximum use of infrastructure, i.e., effective use of the fibre infrastructure the company already has in place; most disasters are localized, and this configuration would protect against most of them. Disadvantages: cost of the DWDM/fibre infrastructure.
The key to this architecture slide is to emphasize that a customer who purchases Volume Manager and Cluster Server (or Foundation Suite/HA) can achieve local availability as well as metropolitan disaster recovery at no additional cost. Metropolitan Disaster Recovery (also known as Campus Clustering or Stretch Clustering) has been a prominent deployment for several companies whose service level agreements don't require a disaster recovery site across the country. Rather than investing in a disaster recovery site thousands of miles away, a customer can invest in a disaster recovery data center in the same city; the cost savings are tremendous! The minimal requirement is that the customer has a SAN infrastructure. The distance limitations depend on the latency that the customer is willing to accept; VERITAS recommends staying within 100 km. The remote mirroring technology found in Volume Manager is used to replicate the data between the two sites synchronously. If the customer has already purchased Volume Manager and Cluster Server, this is a bonus that is part of the feature set. A customer example using metropolitan disaster recovery is a large bank in NYC with fiber under the Hudson; their remote site is in New Jersey. This is sufficient to meet their disaster recovery needs at an affordable price.
Metro DR with Replication
Metro DR with Replication is a shared-nothing VCS configuration using replication between nodes to allow geographical separation of the cluster, thus providing both HA and DR benefits. Due to latency, the separation permitted is not as extensive as with a wide-area TCP/IP solution, but RDC does provide a straightforward, single-cluster solution for many DR scenarios. The replication must be synchronous. Currently, VVR and SRDF replication are supported. The solution supports a cluster with 32 nodes: X nodes can be at the primary site and Y nodes at the secondary site. In the event of a failure, VCS will attempt to fail over the service group to a node at the primary site before failing over to the remote site. A question associated with this architecture is when to position it over the campus cluster solution with VM mirroring. The general rule: if a customer does not want to invest in a SAN, they can simply run private Ethernet networks for VCS heartbeating and set up Metro DR with Replication; if the customer has a SAN infrastructure in place, they would implement Metro DR with Remote Mirroring.
This configuration is supported with VVR, SRDF, and TrueCopy in a Solaris environment, and with SRDF only in a Windows environment. There are distance limitations to this configuration: the distance can be greater than in a campus cluster architecture but less than in a DR (WAN) architecture. This is due to the LLT (heartbeat) connections. The configuration can be stretched as far as the network latency for LLT is acceptable, which generally means no more than a 500 ms (1/2 second) round-trip response between the two sites.
Wide Area Disaster Recovery
GCM is both a management solution and a DR solution involving multiple VCS clusters. Of the HA and DR solutions offered by VERITAS, it is the only one that involves multiple clusters; the stretch clusters use multiple sites but always a single cluster. GCM is used for DR where unlimited geographical separation is called for, or when management from a single console is needed to manage an enterprise's clusters worldwide. The management solution (Base Option) does not implement replication. The DR option, which builds on the Base Option, always uses replication, either host-based from VERITAS or array-based from 3rd-party vendors like EMC (SRDF) or HDS (TrueCopy). Supports any distance, multiple replication solutions, multiple clusters, multiple OSes. Storage Foundation HA Global Cluster Option; license-key enabled.
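As referenced in the Local Clustering notes above, VCS nodes exchange heartbeats over private links using LLT and GAB. A minimal sketch of the configuration files for a two-node cluster; the node name, cluster ID, and interface names (Solaris-style qfe0/qfe1) are illustrative, and the exact syntax should be checked against the VCS installation guide:

    # /etc/llttab on node thor1: low-latency transport over two private NICs
    set-node thor1
    set-cluster 42
    link qfe0 /dev/qfe:0 - ether - -
    link qfe1 /dev/qfe:1 - ether - -

    # /etc/gabtab: start GAB and wait for 2 members before seeding the cluster
    /sbin/gabconfig -c -n2

In the wide-area (GCM/global cluster) case, a service group can likewise be switched to a remote cluster from the command line, e.g. hagrp -switch <group> -any -clus <remote_cluster> in VCS 4.x global clusters.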

Storage Foundation™ 4.2 for Windows: List Prices

Storage Foundation™ + FlashSnap, 4 Servers: List Prices
Storage Foundation on 4 MSCS nodes with Windows 2003 EE. Platform installed with Microsoft Windows Server 2003 Enterprise Edition (or Windows 2000 Advanced Server). For all four servers, the solution includes: VERITAS Volume Manager and VERITAS FlashSnap.

SF™ HA + FlashSnap, 4 Servers: List Prices
VERITAS geographic cluster, 4 nodes with Windows 2003 SE. Platform installed with Microsoft Windows Server 2003 Standard Edition (or Windows 2000 Server). For all four servers, the solution includes: VERITAS Volume Manager & VERITAS Cluster Server, and VERITAS FlashSnap.

Conclusion

The Different VERITAS Storage Foundation Offerings

Conclusion
VERITAS Storage Foundation: the de facto standard for more than 10 years; the foundational layer of storage; a universal, all-in-one solution; technological innovations that deliver real financial gains. Real opportunities for license and services sales: demanding application and compute environments; consolidation and migration projects…

QUESTIONS & ANSWERS www.veritas.com/van

Copyright © 2002 VERITAS Software Corporation. All rights reserved. VERITAS, the VERITAS logo, and all other VERITAS product names and slogans are trademarks or registered trademarks of VERITAS Software Corporation. VERITAS and the VERITAS Logo Reg. U.S. Pat. & Tm. Off. Other product names and/or slogans mentioned herein may be trademarks or registered trademarks of their respective companies.