Test Automation: What Do You Do About Your Data?
Olivier Jouannic
Test Automation: What Do You Do About Your Data?

Multiply the effectiveness of test automation:
- How far can a campaign be replayed "all other things being equal"?
- Are the data-management steps accounted for in the test plans?
- Is the "data test case" built as carefully as the procedural functional test case?
- Can all of this be driven from the test-management tool (Rational Functional Tester)?

Creating, duplicating, refreshing and recycling realistic, consistent and representative test data can be a challenge for test, development and acceptance teams. Most often the difficulty is worked around by:
- cloning the production database several times,
- creating data test cases from scratch, or
- building home-grown extractors.

In every case, have the impacts really been examined?
- Cost
- Data consistency
- Data exposure
- Reuse / automation

Optim Test Data Management also covers non-regression testing at the data level: comparing data at different points in time.
Why care about test DATA?

- Legal constraints: protection of personal, confidential or regulated data.
- Non-regression testing: preserving reference test datasets, reusing them, comparing expected vs. actual results.
- Quality: "rotten" test data, poorly identified test cases.
- Approach: automated tests, parallelism, tunnel effect, multi-project work.
- Storage: reducing disk space, fitting into a cost-control project.
IBM Rational Test and Optim: a typical case

1. Design and manage the test campaign (Rational Quality Manager).
2. Initiate the data extract scripts (Rational Functional / Performance Tester).
3. Extract production data for testing (Optim).
4. Browse/edit the data (Optim).
5. Execute the automated test routines (Rational Functional / Performance Tester).
6. Refresh the test data; compare before & after data (Optim).
7. Go production.
Enterprise architecture

Why is it important to have a single, scalable solution that supports all the applications, databases and environments in your enterprise? Because every company, whether large or small, has a mixed or "heterogeneous" infrastructure: some combination of mission-critical applications running on a variety of databases and operating systems. These systems interoperate with one another to form your key business processes, like booking a sales order, shipping a product or posting a payment. By leveraging a single, scalable and interoperable EDM solution, you get a central repository to define and store all the policies for managing, moving, copying, retaining and ultimately disposing of your application business records. You achieve the greatest consistency with the fewest resources at the lowest cost, and you protect and leverage your investment in enterprise applications and data assets.

OPTIM: a SINGLE, ADAPTABLE, INTEROPERABLE solution. It provides a CENTRAL point of control for deploying processes that extract, retain, move and protect data from its creation to its disposal.
Complete business object = test case?

What is a "complete business object," and why is it important to capture one? Business objects are the fundamental building blocks of your application data records. From a business perspective, a business object could be an order, a payment, an invoice, a paycheck or a customer record. From a database perspective, it is a group of related rows from related tables across one or more applications, together with its related metadata (information about the structure of the database and about the data itself). EDM solutions that capture and process the complete business record create a valid snapshot of your business activity at the time the transaction took place: an historical reference snapshot. When you archive a complete business object, you create a standalone repository of transaction history. If you are ever asked to prove your business activity (for example, under an audit or e-discovery request), your archive represents a "single version of the truth": you can simply query it to locate information or generate reports.

Some vendor solutions do not archive a complete business object. Instead, they split it into two parts, the transaction line detail and the associated master and reference information, and move the transaction detail records to a separate database. A database of transaction detail records does not represent a "single version of the truth," because the details are separated from their master information and have no business context. To generate meaningful business reports, the user must re-associate the archived transaction details with their related parent rows from the original production system. A lot can go wrong in this scenario; for example, the links between transaction detail and parent rows are sometimes broken, leaving orphan rows in your database.

Federated object support means the ability to capture a complete business object from multiple related applications, databases and platforms. For example, Optim can extract a "customer" record from Siebel together with related detail on purchased items from a legacy DB2 inventory management system. Federated data capture ensures that your data management operations accurately reflect a complete, end-to-end business process. Only Optim provides federated object support.

A complete business object:
- represents a coherent set of data: order, customer, shipment, payment, …
- is a referentially INTACT subset of data and metadata;
- allows point-in-time snapshots of complex business objects;
- can span HETEROGENEOUS and DISTRIBUTED sources (databases, applications, platforms).
Scope/Action approach

Optim brings a FUNCTIONAL view of the data. Once a BUSINESS OBJECT is defined, functions can then be applied to it:
- Extract
- Map
- Mask
- Insert
- Compare
- Restore
- …

With the complete business object as its fundamental building block, the EDM solution must then offer the capabilities to:
- Extract, or copy, the data;
- Store the data in any desired format, on any hardware device or storage tier;
- Port, or move, the data as its business value changes over time, perhaps to a different storage tier, or even to a different DBMS;
- Protect the data as it moves out of secure production systems into open test environments, by applying contextual, application-aware, persistent data masking routines.
Optim™ Test Data Management

Production (or a clone of production) → Extract → extract files → Load / Insert / Update into the Dev, QA and Test environments → Compare.

- Create targeted, right-sized test data environments, faster and at lower cost than cloning production.
- Maintain, refresh and reset the test environments.
- Compare data to detect anomalies and regressions, for better test quality.
- Accelerate deployment by shortening the test cycle.

We begin with a production system or a clone of production. Optim extracts the desired data records, based on user specifications, and safely copies them to a compressed file. IT loads the file into the target Development, Test or QA environment. After running tests, IT can compare the results against the baseline data to validate results and identify any errors. The database can be refreshed simply by re-inserting the extract file, ensuring consistency.
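Optim's extract and insert engines are proprietary, but the extract → load → refresh cycle described above can be sketched with plain SQL. This is a minimal illustration, not Optim's file format: the customers/orders schema and the choice of extracting one customer's referentially intact subset are assumptions made for the example.

```python
import sqlite3

def make_db(conn):
    conn.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY,
                             cust_id INTEGER REFERENCES customers(id),
                             total REAL);
    """)

def extract_subset(prod, cust_ids):
    """Extract a referentially intact subset: the chosen customers plus their orders."""
    qmarks = ",".join("?" * len(cust_ids))
    return {
        "customers": prod.execute(
            f"SELECT id, name FROM customers WHERE id IN ({qmarks})", cust_ids).fetchall(),
        "orders": prod.execute(
            f"SELECT id, cust_id, total FROM orders WHERE cust_id IN ({qmarks})",
            cust_ids).fetchall(),
    }

def refresh(test, subset):
    """Reset the test environment by wiping it and re-inserting the extract."""
    test.execute("DELETE FROM orders")        # delete children first
    test.execute("DELETE FROM customers")
    test.executemany("INSERT INTO customers VALUES (?, ?)", subset["customers"])
    test.executemany("INSERT INTO orders VALUES (?, ?, ?)", subset["orders"])

prod = sqlite3.connect(":memory:"); make_db(prod)
prod.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Acme"), (2, "Globex"), (3, "Initech")])
prod.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(10, 1, 99.0), (11, 1, 25.0), (12, 2, 40.0)])

test_env = sqlite3.connect(":memory:"); make_db(test_env)
# Right-sized extract: one customer and its orders, not a full production clone.
refresh(test_env, extract_subset(prod, [1]))
```

Because the extract is a self-contained snapshot, re-running `refresh` with the same subset restores the test environment to a known state before each campaign.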
Optim™ Data Masking

Production (EBS / Oracle, Custom / Sybase, Siebel / UDB) → Test (EBS / Oracle, Custom / Sybase, Siebel / UDB)

Functional, consistent masking of confidential data:
- Substitute fictional but functionally valid values for confidential information.
- Deploy a wide range of masking algorithms.
- Keep data consistent across environments and across test phases.
- Make it possible to send data for off-shore testing.
- Protect personal information in non-production environments.

Production databases contain personally identifying information (PII) about clients and employees, and other confidential corporate information, such as pricing schedules and financial data, that only authorized individuals are permitted to see. By providing contextual, application-aware and persistent data masking routines, Optim lets companies create "safe" test environments: it substitutes realistic but fictionalized data for confidential information, so that testers can achieve accurate results while still complying with privacy protection programs.
Optim and Rational Functional Tester Integration

Test data in a quality approach: integrating Optim and Rational Functional Tester
Optim use cases

- Targeted extraction of test cases to obtain a reference dataset: from production… or from a clone!
- Masking / anonymization of private… or sensitive data!
- Test data comparison: before/after (identifying database impacts) and campaign-to-campaign (non-regression testing).
Deploying the integrated products

All Optim functions:
- targeted (parameterized) extraction,
- masking,
- insert/refresh,
- snapshot extraction,
- comparison,
- …
can be driven directly from a test manager, because every Optim function can be run in BATCH mode (from the command line). The Optim return code is handled by the test manager's script, which reports the status, along with the refresh, backup and comparison reports, back to the test repository.
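The point here is that any batch-capable tool can be orchestrated from a test manager's script through its exit code. Below is a minimal sketch of such a wrapper; the step names are hypothetical and a Python subprocess stands in for the real Optim command line, whose syntax is documented in the product manuals and not reproduced here.

```python
import subprocess
import sys

def run_batch_step(argv, step_name, report):
    """Run one batch data-management step; map its exit code to a test status.

    The status and the tool's own output (stdout) would be forwarded to the
    test repository by the test manager's script."""
    result = subprocess.run(argv, capture_output=True, text=True)
    status = "PASS" if result.returncode == 0 else "FAIL"
    report.append({"step": step_name, "rc": result.returncode, "status": status})
    return result.returncode == 0

report = []
# Placeholder commands: a Python one-liner simulates a tool exiting 0 or 3.
ok = run_batch_step([sys.executable, "-c", "raise SystemExit(0)"],
                    "refresh_test_data", report)
bad = run_batch_step([sys.executable, "-c", "raise SystemExit(3)"],
                     "compare_before_after", report)
```

A non-zero return code from any step lets the driving script abort the campaign early and attach the failing step's report to the test run.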
A typical non-regression test scenario

1. Refresh the test data.
2. Run the tests.
3. Snapshot the actual result for version N+1.
4. Compare with the expected result (the actual result of version N).
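These four steps can be sketched as a minimal baseline-and-compare harness. It assumes results are serialized to a JSON file; the file name and row shape are illustrative, not part of any Optim format.

```python
import json
import pathlib
import tempfile

def snapshot_and_compare(actual_rows, baseline_path):
    """First run: save the actual result as the expected baseline (version N).
    Later runs: compare the new actual result (version N+1) against it."""
    path = pathlib.Path(baseline_path)
    if not path.exists():
        path.write_text(json.dumps(actual_rows))
        return {"status": "baseline-created", "differences": []}
    expected = json.loads(path.read_text())
    diffs = [(e, a) for e, a in zip(expected, actual_rows) if e != a]
    if len(expected) != len(actual_rows):
        diffs.append(("row-count", (len(expected), len(actual_rows))))
    return {"status": "pass" if not diffs else "fail", "differences": diffs}

workdir = tempfile.mkdtemp()
baseline = pathlib.Path(workdir) / "process_A_result.json"
v_n = [["INV-1", 100.0], ["INV-2", 250.0]]          # version N actual result
first = snapshot_and_compare(v_n, baseline)          # creates the baseline
rerun = snapshot_and_compare(v_n, baseline)          # identical data: pass
v_n1 = [["INV-1", 100.0], ["INV-2", 999.0]]          # version N+1 regresses
check = snapshot_and_compare(v_n1, baseline)         # fail, one row differs
```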
Enterprise application: snapshot at time T

Development: unit testing of V3. QA: non-regression and functional acceptance of V2. Production: Version 1.

Let's begin with a look at a typical enterprise application. It might be a financial accounting application, an order entry system, or a claims processing application; in any case, it powers the day-to-day operations of the business, so it must be accurate and reliable at all times. At any given point in its lifecycle, several versions of the application exist. A typical scenario is shown here: Version 1 is in production, managing the daily business operations; at the same time, Quality Assurance and business users are testing Version 2 to verify new features that will be released in a few weeks; meanwhile, Development is beginning work on Version 3, developing and unit-testing the functionality that will be released next year. We will discuss best practices that organizations can use to manage this kind of parallel development, and introduce techniques that allow the IT organization to reuse the work products from Version 1, such as test data, test cases and test results, across successive versions of the application. These techniques save time and money, while ensuring consistency, improving application quality and speeding the overall time to market. To demonstrate them, we'll step back to the beginning, when Version 1 was under development.
Version 1: unit testing

Production sampling → Extract → Test Case Library (Test Case Cust1_V1, Cust2_V1, Cust3_V1) → Version 1 test database (Development environment).

Here's a snapshot of Version 1 during its development. To perform unit testing, developers create "basic" data test cases in the application database, using a relational editor such as Relational Tools Access or Edit. They also create error and boundary conditions, such as backdated transactions or negative values, to verify error handling. Finally, they enter some more complicated functional test cases through the newly built application graphical user interface (GUI). Once these test cases have been entered, we copy them to the Test Case Library so that we can reuse them later in the development cycle: we extract the test cases, using the Relational Tools selective-extract function, into a file or set of files that can be stored on the network.
Version 1: non-regression testing

Production sampling → Insert Test Case Cust1_V1 into the QA Version 1 test database → run Process "A" → Extract Result Cust1_V1 into the test-results library (QA environment).

As we progress to regression testing for Version 1, we begin to leverage the work done in unit test: we reuse the test cases defined in the unit-test phase, simply inserting the test case data from the Library into the quality assurance database with the Relational Tools' Insert function.

What is the benefit of reusing the test cases? First, we save the time and expense of recreating our work from scratch. Second, our development processes become more scalable: because we are reusing our units of work, we have the flexibility to reallocate resources as priorities change, for example to concentrate on newly identified strategic initiatives, and we reduce our dependence on any particular individual, because once a domain expert defines a unit of work, other technicians can use it. Third, our development processes become more consistent: test results are more reliable, defects are easier to identify and correct, and overall application quality improves.

At this point we run the regression tests: a test of Process "A" on test case Cust1, then Cust2, and so on. Process A could, for example, include ordering a new item and generating the associated invoice. We then use the Relational Tools to take a snapshot of the results, that is, the data that resulted from running the process test, and save each snapshot for use later in the development cycle. The snapshots are saved to a file in a test-data-results library, which could simply reside on a network drive, or be stored together with the associated test case data in a version control program.
Version 1: user acceptance testing

Production sampling; reuse the test cases from the Test Case Library; snapshot the test results for later comparison (Insert Test Case Cust1_V1 into the Version 1 test database, Extract Result Cust1_V1; QA environment).

As Version 1 progresses to user acceptance testing, we again leverage the work created in earlier phases: we insert the test cases from the regression test back into the database for acceptance testing, and again take successive snapshots of the results and store them for later comparison.
Version 2: unit testing

Production sampling → Insert Test Case Cust1_V1 into the Version 2 test database (Development environment).
- Map the V1 test cases onto the V2 model.
- Insert the V1 test cases into the V2 database.
- Extract/insert additional cases from production.

Since Version 1 has migrated from user acceptance to production, we can turn our attention to Version 2, now in unit test. Remember that the developers saved their original Version 1 test cases; they will use them again to test Version 2. In the meantime, however, the application data model has changed: we've added some new tables, added some new columns and changed the data format of some columns in existing tables. Does that mean the original test cases are now invalid? No. The Relational Tools can translate between successive versions of a data model through the Table Map and Column Map features: once we have identified and mapped the differences, we can insert the test cases into the Version 2 test database. Additionally, because Version 1 has been in production for some time, the production database contains fresh data reflecting the most current business practices and conditions; the Relational Tools can identify, extract, map and insert some of these new test cases into Version 2 for broader and more realistic test coverage.
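A column map, reduced to its essence, is a rename-and-default table applied to every row. The column names below (CUST_NAME renamed to CUSTOMER_NAME, a new LOYALTY_TIER column) are hypothetical examples, not the actual V1/V2 schemas, and this sketch is not the Relational Tools' Column Map feature itself.

```python
# Hypothetical V1 -> V2 column map: two renamed columns plus one column added in V2.
COLUMN_MAP = {"CUST_NAME": "CUSTOMER_NAME", "PHONE": "PHONE_NUMBER"}
NEW_COLUMNS = {"LOYALTY_TIER": "NONE"}   # new in V2, filled with a default value

def map_row(v1_row):
    """Re-key a V1 row to the V2 model, keeping unmapped columns as-is,
    and supply defaults for columns that did not exist in V1."""
    v2_row = {COLUMN_MAP.get(col, col): val for col, val in v1_row.items()}
    for col, default in NEW_COLUMNS.items():
        v2_row.setdefault(col, default)
    return v2_row

v1_case = {"CUST_ID": 27645, "CUST_NAME": "Durand", "PHONE": "0102030405"}
v2_case = map_row(v1_case)
```

Applying `map_row` to every row of a V1 extract yields a V2-shaped test case, which is exactly what lets a test-case library survive a data-model change.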
Version 2: unit testing (continued)

Production sampling → Insert Test Case Cust1_V1 → Version 2 test database → Extract Test Case Cust1_V2 (Development environment).

At this point, we have mapped the new data model and populated the Version 2 unit-test database. We now extract a copy of the Version 2 test cases and save them for reuse during regression testing (and later, for testing Version 3). Then we perform our unit tests of individual functions.
Version 2: non-regression testing

Production sampling → Insert Test Case Cust1_V2 → Version 2 test database → Extract Result Cust1_V2 (Development environment).

The next step for Version 2 is the regression test. We've already saved our unit-test cases, and there are typically no further data model changes at this stage, so we simply insert the unit-test cases to create the regression test database. As you can see, reusing the test cases saves the time and effort of recreating the test database from scratch. It also provides more reliable test results, because we are testing against the same conditions at each successive phase: we have a better idea of the results to expect, and can more easily identify and diagnose errors in the code. Now we run the regression tests and, for each process tested, save a snapshot of the results, just as we did for Version 1, to a results-library file on the network or to the version control program if we are using one.
Version 2: regression testing of Process "A"

Compare Result Cust1_V1 with Result Cust1_V2, Result Cust2_V1 with Result Cust2_V2, Result Cust3_V1 with Result Cust3_V2.

Here is where we leverage the test-data results libraries we have created. We use the Relational Tools' Compare function to identify the differences in results between Version 1 and Version 2. Specifically, we identify all of the related tables within a defined test case. Continuing with the example of the Customers, Orders and Details tables, we compare the data results for each individual customer and note any differences in the related Orders and Details tables. On this slide, we compare the test data results for Customer 1 in Version 1 to the same Customer 1 in Version 2. Note that we are not looking at the application interface, but at the underlying data.
Analyzing the test results

Version 1 invoices: The English Patient $80.00, Casablanca $20.00. Version 2 invoices: The English Patient $50.00, Casablanca $50.00. Two invoices, each totaling $100, but with different compositions. Could we have missed this error?

What do we learn by comparing the test-data results between Version 1 and Version 2? Why take this extra step instead of simply confirming the behavior of the application interface? The answers are in the data: even if the GUI behaves correctly, the underlying application data may be invalid. In the example on the slide, our functional test has calculated the invoice total for Customer 27645. Version 1, the production version, shows an order total of $100; Version 2, in regression test, also shows an order total of $100. Looking only at the application interface, you might see the $100 total and believe it to be correct. But examining the data behind the interface reveals the difference: the unit prices for the items have changed, probably as the result of an application error. We need to investigate and correct the error before rolling Version 2 to production; had we missed it during regression testing, a customer might find it in production, where it would be far more difficult and costly to resolve.
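The before/after comparison can be illustrated with the slide's own invoice data. This is a generic sketch of keyed row comparison, not Optim's Compare function; the point it demonstrates is that a line-level compare catches what the matching $100 totals hide.

```python
def compare_rows(before, after, key):
    """Classify rows the way a data-compare report does:
    equal, different, or present in only one version."""
    b = {row[key]: row for row in before}
    a = {row[key]: row for row in after}
    diff = {"equal": [], "different": [], "only_before": [], "only_after": []}
    for k in b.keys() | a.keys():
        if k not in a:
            diff["only_before"].append(k)
        elif k not in b:
            diff["only_after"].append(k)
        elif b[k] == a[k]:
            diff["equal"].append(k)
        else:
            diff["different"].append(k)
    return diff

v1 = [{"item": "The English Patient", "price": 80.0},
      {"item": "Casablanca", "price": 20.0}]
v2 = [{"item": "The English Patient", "price": 50.0},
      {"item": "Casablanca", "price": 50.0}]

# Both invoices total $100.00, yet every line differs.
result = compare_rows(v1, v2, key="item")
```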
Detailed view

We can find out more by viewing the details of the Compare File. Shown here are the rows of data from the two versions of the Orders table. Consider the two columns on the left. Column 1, labeled "Change," shows the status of corresponding rows: whether they are equal, different, or exist only in one of the two versions. The red ovals indicate rows that differ between the two versions; in rows 25 and 26, and again in rows 51 and 52, the freight charges differ. By reviewing the detailed differences, we can better understand where application errors may exist; these are the areas that will require further review and testing. Column 2, labeled "Source," indicates whether the row originated from source file 1 or source file 2, or whether it exists in both versions.
Filtering the details

Imagine you are looking at a very large table, with millions of rows of data. In that case, you may want to filter the display to highlight only the rows that differ between Versions 1 and 2, or to view only the rows added or deleted between the two versions. Shown here is the Exclude function, which eliminates the equal (common) rows so you can review just the actual changes. At any point while viewing the data, you may also print the rows on display.
Summary

We have built a library of test cases (Test Case Cust1_V1, Cust2_V1, Cust3_V1):
- sets of related data composing a business object that satisfies particular criteria;
- test cases reused from one phase to the next and from one version to the next;
- table maps and column maps that handle changes in model and content.

Table Maps and Column Maps make it easy to accommodate changes in the data model while retaining the referential integrity of the data. Column Maps also provide a variety of transformation functions for masking personal and financial data in the testing environment, as needed.
Benefits of reusing test cases

- Productivity: compared with rebuilding the test cases or databases for every test.
- Flexibility and adaptability: resources can be redeployed as needed.
- Consistency: tests run on identical, repeatable cases, with a reliable, predictable "expected" result.

Because we test against the required conditions every time, test results are more reliable, defects are easier to identify and correct, and overall application quality improves. We also retain the artifacts of our testing program, to comply with audit requirements or QA standards such as ISO 9001.
In a nutshell

We have also built a library of test-data results (for example, the results of Process A run on Result Cust1_V1, Cust2_V1, Cust3_V1). A typical results file contains the result of Process A, run on test cases B, C and D, in application version n. Most importantly, those sets of results can be compared across versions, from Version 1 in production to Version N in test.
Benefits of automated comparison

- Rapidly locate data differences between versions, table to table or across sets of related tables.
- Identify, investigate and resolve errors.
- Prevent errors from propagating to production: they are easier and cheaper to fix in test.

Automated data comparison matters because it rapidly locates differences in data across parallel development versions, whether for a simple table-to-table comparison or a more complex comparison spanning multiple sets of related tables. The Relational Tools' Compare function makes it easy to identify, investigate and resolve application errors during the test phases, when problems are faster, easier and less expensive to fix, improving application reliability.
Test data in a quality approach

Masking
Overview: protecting personal data

Payment cards, business data, and more. Production environments are secured, but development and test (QA) environments are just as important, yet lightly secured and heavily used.

Today it is common for government institutions and companies to store and readily consult personal and private information about citizens and customers. Organizations typically collect this information for a legitimate purpose, such as healthcare or social services. But with the multiplication of interconnections between IT systems and the Internet, personal data can be captured and misused. Given the breadth of data-aggregation possibilities and the extraction tools available, personal information can be manipulated for purposes we would never have imagined before, such as direct marketing. All of this should make us aware that our private lives can be exposed. For this reason, the governments, companies and bodies that control this data are looking for ways to protect confidential information wherever it is used.
Common legislative themes

Laws protect consumers and citizens:
- USA: Sarbanes-Oxley
- Europe: the Personal Data Protection Directive
- France: the law of 6 January 1978 on information technology, data files and civil liberties, amended by the law of 23 January 2006 (Journal officiel of 24 January 2006)
- PCI standards (often addressed at the same time as de-identification)

The challenges of compliance:
- changes to business processes: outsourcing,
- adoption of new technologies: databases,
- fines and penalties incurred in case of fraud.

Privacy laws across countries share a number of traits. First, these laws are enacted to protect individuals (citizens and consumers) against the misappropriation and misuse of personal information. Second, they are often complex and require changes to policies and procedures as well as the use of new technologies; consequently, a number of companies and organizations are generally behind and need time to become compliant. Third, although enforcement may focus first on education and mediation, the laws generally specify substantial penalties for non-compliance, particularly in cases of fraudulent or criminal use.
What do the industry analysts say?

Giga Group (2002): "…it is worth noting that IT's own access to customer and personnel data must be examined – strictly speaking, none should actually be necessary. Test data must be 'anonymized….'"

Industry analysts recognize that anonymizing personal data in application test environments is essential, and that masking the data is a viable approach.

Source: Martha Bennett, Giga Information Group, Planning Assumption, "Data Protection in the European Union, Part 2: ways to minimize the risk of violations," September 2002.
De-identifying test data

Remove, mask or transform the elements that could be used to identify an individual (name, address, phone, social security number) and the confidential data that must be protected (medical, banking, financial, commercial). Masked or transformed data must remain consistent:
- conforming format (alpha to alpha),
- permitted values.

De-identification of test data is simply the systematic process of removing, masking or transforming the information elements that could be used to identify an individual. Data that has been manipulated this way is generally considered acceptable for use in a test environment. It is important, however, that the result of the transformation be appropriate to the application context; that is, the transformed data must make sense to the person checking the test results. For example, fields containing letters must be substituted with other letters of appropriate value. In addition, the transformed data must lie within the range of possible values: if your diagnostic codes are four-digit numbers with values from 0001 to 1000, a masked value of 2000 is not appropriate.
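Format-preserving substitution of the kind described above can be sketched in a few lines. This toy routine only illustrates the "alpha to alpha, digit to digit" rule; it is not one of Optim's masking algorithms, and per-character hashing like this should not be relied on for real privacy protection.

```python
import hashlib
import string

def mask_value(value, secret="demo-key"):
    """Deterministically substitute letters for letters and digits for digits,
    preserving length, case and punctuation so the masked value keeps a
    functionally valid format. Determinism keeps masking consistent across
    environments: the same input always masks to the same output."""
    out = []
    for i, ch in enumerate(value):
        h = int(hashlib.sha256(f"{secret}:{i}:{ch}".encode()).hexdigest(), 16)
        if ch.isdigit():
            out.append(string.digits[h % 10])
        elif ch.isalpha():
            alphabet = string.ascii_uppercase if ch.isupper() else string.ascii_lowercase
            out.append(alphabet[h % 26])
        else:
            out.append(ch)  # keep separators such as spaces or hyphens as-is
    return "".join(out)

masked = mask_value("1 85 05 78 006 048")  # a social-security-shaped input keeps its shape
```

Because the substitution is keyed on a secret and is deterministic, the same customer identifier masks to the same fictional value in every test environment, which preserves join consistency between tables.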
IBM Rational software: www.ibm.com/software/rational

© Copyright IBM Corporation. All rights reserved. The information contained in these materials is provided for informational purposes only, and is provided AS IS without warranty of any kind, express or implied. IBM shall not be responsible for any damages arising out of the use of, or otherwise related to, these materials. Nothing contained in these materials is intended to, nor shall have the effect of, creating any warranties or representations from IBM or its suppliers or licensors, or altering the terms and conditions of the applicable license agreement governing the use of IBM software. References in these materials to IBM products, programs, or services do not imply that they will be available in all countries in which IBM operates. Product release dates and/or capabilities referenced in these materials may change at any time at IBM's sole discretion based on market opportunities or other factors, and are not intended to be a commitment to future product or feature availability in any way. IBM, the IBM logo, Rational, the Rational logo, Telelogic, the Telelogic logo, and other IBM products and services are trademarks of the International Business Machines Corporation, in the United States, other countries or both. Other company, product, or service names may be trademarks or service marks of others.