
1 This document contains information that is the property of France Télécom. Acceptance of this document by its recipient implies acknowledgement of the confidential nature of its contents and a commitment to make no reproduction, no transmission to third parties, no disclosure and no commercial use of it without the prior written agreement of France Télécom R&D. Distribution of this document is subject to authorisation by FTR&D. © France Télécom

VTHD PROJECT (Very High Broadband Network Service): a French NGI initiative
C. GUILLEMOT, FT / BD / FTR&D / RTA

2 Presentation Overview
- VTHD: French NGI initiative
  - project objectives
  - partnership
  - VTHD network
- QoS engineering
  - rationale
  - service model
  - implementation issues
- Provisioning & traffic engineering
  - dynamic provisioning with optical networks
  - interworking of IP and cross-connected WDM networks
  - layer 2 traffic engineering
- Conclusion

3 VTHD Project objectives
- To set up a strong partnership with higher education and research institutions within the framework of the French RNRT and European IST networking development programmes.
  - Open Internet R&D
- To deploy and operate a high-performance network that provides nationwide high-capacity interconnection facilities among laboratories at the IP level,
  - that supports experiments with new designs for networking,
  - with actual traffic levels consistent with interconnection capacity.
- To experiment with optical internetworking, with two joint technological objectives:
  - to assess scalable capacity-upgrading techniques
  - to assess the traffic management tools necessary to operate a QoS-capable test-bed.
- To develop new applications and to ensure that they can be put to use in the broader global Internet.

4 Partnership & Applications (1)
- Partnership:
  - France Telecom / FTR&D
  - INRIA (the French national institute for research in computer science) & the European G. Pompidou Hospital
  - Telecommunications engineering schools: ENST, ENST-Br, INT
  - Institut EURECOM (ENST + EPFL, Switzerland)
- Data applications:
  - Grid computing (INRIA): middleware platform for distributed computing; high-performance simulation & monitoring
  - 3D virtual environment (INRIA)
  - Database recovery, data replication (FTR&D)
  - Distributed caching (Institut EURECOM)

5 Partnership & Applications (2)
- Real-time applications
  - Tele-education (telecommunications engineering schools): distance learning, cooperative educational environments, digital libraries
  - Tele-medicine (INRIA + G. Pompidou hospital): distant analysis & processing of high-definition medical images; surgery training under distant control
  - Voice over IP (FTR&D): PABX interconnection (E1 2 Mb/s emulation); adaptive VoIP (hierarchical coding)
  - Video conferencing (FTR&D)
- Video streaming
  - Video-on-demand, scheduled live transmission, TV broadcasting (FTR&D)
    - MPEG-1: ~1 Mb/s
    - MPEG-4: <~1 Mb/s (adaptive video streaming, multicast)
    - MPEG-2: ~6 Mb/s, high-quality video → TV over IP

6 VTHD network
- 8 points of presence
  - interconnected by an IP/WDM backbone
  - aggregating traffic from campuses
  - using Gigabit Ethernet point-to-point access links.
- Transmission resources (access fibres, long-haul WDM optical channels) supplied by the France Telecom Network Division out of spare resources.
- VTHD network management carried out by FT operational IP network staff on a "best effort" basis.
- VTHD network usage
  - No survivability commitment (neither for link nor for router faults)
  - Acceptable Usage Policy: notifiable "experiments"; partners are committed to having a commercial Internet access.

7 Network Architecture
[Figure: backbone and access routers; 8 POPs connected to 18 campuses across Paris, Grenoble, Lannion, Rennes, Sophia, Lyon, Nancy, Rouen and Caen, linked by WDM, with a back office and peering towards the Atrium network.]
A weakly meshed topology, moving towards larger POP connectivity and peering with the IST Atrium network.

8 VTHD Routers & DWDM systems
VTHD: a multi-supplier infrastructure
[Figure: Cisco GSR and 6509, Juniper M40 and M20, and Avici TSR routers interconnected by 2.5 Gb/s STM-16 POS links and a 4-channel STM-16 WDM ring, with Gigabit Ethernet and STM-1/OC-3 access to the FTR&D, INRIA, EURECOM, HEGP, ENST, INT and FT/BD sites.]

9 VTHD: Routing
[Figure: the VTHD AS runs IS-IS and I-BGP4 internally; static routing and E-BGP4 towards the partner sites (INRIA, Eurécom, ENST, INT, HEGP, FTR&D); protection by IP rerouting (~10 s); external peering with RENATER.]

10 QoS engineering: rationale
- Context
  - VTHD: an experimental & operational network that encompasses the core network, the CPEs and the dedicated (V)LANs, and that will progressively gain reachability to FTR&D operational hosts (VPN engineering permitting).
  - Traffic: the VTHD network interconnects distributed communities (FTR&D, INRIA, telecommunications engineering schools) and supports bandwidth-demanding applications for bulk traffic (metacomputing, web traffic, database backup).
  - VTHD supports applications that need QoS guarantees: VoIP, E1 virtual leased lines, 3D virtual environments, video conferencing.
  - Traffic load is expected to remain low in the VTHD core network, with occasional congestion events: a context indicative of actual ISP backbones.
- Objective
  - To experiment with a differentiated, QoS-capable platform involving all architectural components, even if their functionalities are basic.

11 Expected VTHD bulk traffic
- Bulk traffic is data traffic:
  - "Web traffic": INRIA's WAGON tool
    - WAGON is a software tool generating web requests.
    - Web-browsing user behaviour is simulated using a stochastic process, starting from data traces of actual web servers.
    - Web servers generate actual back-traffic in response to the virtual users' requests.
    - WAGON's primary objective is web-server architecture improvement.
    - Traffic per server: ≈160 Mbit/s (CPU-limited), 7 servers.
  - Database recovery (FTR&D)
    - 80-gigabyte transfers (~a few 100 Mb/s ?)
  - Grid computing (INRIA):
    - Parallel computing using a distributed shared memory between 16 (soon 32) PC clusters.
    - Processes (computing, data transfers) are synchronised by the grid middleware.
    - Data transfers are built on independent PC-to-PC file transfers.
    - Mean traffic level per cluster transfer: ≈500 Mbit/s
[Figure: web servers, 42 web clients and grid clusters attached over 1 Gb/s links.]
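WAGON itself is only described at a high level above; as a purely illustrative sketch (parameter names and values are assumptions, not WAGON's model), the kind of stochastic browsing process it describes, with exponential think times between requests and heavy-tailed object sizes fitted from server traces, could look like:

```python
import random

def browsing_session(n_requests, mean_think_s=5.0, size_alpha=1.3, size_min_kb=4):
    """Simulate one virtual web user: exponential think times between
    requests and Pareto (heavy-tailed) response sizes, as commonly used
    to mimic traces of real web servers."""
    t = 0.0
    requests = []
    for _ in range(n_requests):
        t += random.expovariate(1.0 / mean_think_s)               # think time
        size_kb = random.paretovariate(size_alpha) * size_min_kb  # object size
        requests.append((t, size_kb))
    return requests

session = browsing_session(100)
```

Many independent sessions of this kind, replayed against real servers, produce the "actual back traffic" the slide mentions.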

12 Actual VTHD bulk traffic

13 QoS Architecture components
Building blocks integral to the QoS engine:
- VTHD service model (PHB, admission control)
- Performance metering (QoS parameter measurement)
- Modelling (traffic matrix, correlation engine)
- Policy-based management (policies, COPS protocol)
[Figure: QoS manager, policy servers and correlation engine in the back office, driving the VTHD PEs and CPEs (Cisco 7206); SLA, directory (FTR&D, VTHD, VTHD BO), DNS/DHCP, OSS/IP and measurement components feed the traffic-matrix modelling and PHB/AC engineering; FE/GE switches provide the operational interconnection facility.]

14 VTHD backbone service model (1)
- 3 service classes mapped to the EF and AF DiffServ classes, both for admission control and for service differentiation in the core network.
- The scheme is applied at the PEs' ingress interfaces.
- CPEs are in charge of flow classification, traffic conditioning and packet marking.
  - Class 1: Expedited Forwarding
    - intended for stream traffic
    - traffic descriptor: aggregated peak rate
    - QoS guarantees: bounded delay, low jitter, low packet-loss rate
    - admission control: token bucket (peak rate, low bucket capacity)
      - suitable for high-speed links: an individual flow's peak rate is a small fraction of the link rate, so variations in the combined input rate remain low
  - Class 3: Best Effort
    - intended for elastic traffic
    - no traffic descriptor, no admission control
    - best-effort delivery
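The EF policing rule above (a token bucket with a committed peak rate and a low bucket capacity, non-conforming traffic dropped) can be sketched as follows; the parameter values are illustrative, not the VTHD configuration:

```python
class TokenBucket:
    """Token-bucket meter: packets conforming to (rate, bucket capacity)
    pass; excess traffic is dropped (absolute dropper for the EF class)."""
    def __init__(self, rate_bps, bucket_bits):
        self.rate = rate_bps          # token fill rate = committed peak rate
        self.capacity = bucket_bits   # low capacity => little tolerated burst
        self.tokens = bucket_bits
        self.last = 0.0

    def conforms(self, now_s, packet_bits):
        # refill tokens for the elapsed time, capped at the bucket capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now_s - self.last) * self.rate)
        self.last = now_s
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False

tb = TokenBucket(rate_bps=1_000_000, bucket_bits=12_000)  # 1 Mb/s, ~1 MTU burst
# offer 8 kb packets every 1 ms (8 Mb/s): only a fraction conforms
accepted = [tb.conforms(t * 0.001, 8_000) for t in range(20)]
```

The small bucket is what keeps the delay bound: only a short burst above the peak rate is ever admitted.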

15 VTHD backbone service model (2)
  - Class 2: Assured Forwarding
    - intended for elastic traffic that needs a minimum throughput guarantee
    - traffic descriptor: ?
    - QoS guarantees: minimum throughput
    - admission control: based on the number of active flows & TCP
      - whatever the traffic profile, fair sharing of the dedicated bandwidth among flows ensures that flow throughput never decreases below some minimum acceptable level for admitted flows (after J.W. Roberts)
      - assumes that TCP flow control is a good approximation of fair sharing
      - the RED algorithm may improve fair sharing by penalising aggressive flows
- Admission control should keep the cumulative EF & AF traffic load below congestion, and low enough for the closed-loop feedback to take place properly.
[Figure: DiffServ VTHD node: classifier and meter feeding an EF queue through an absolute dropper, an AF1 queue through remarking and an algorithmic dropper, and a best-effort queue with an algorithmic dropper and counter, all served by a scheduler; feedback paths close the loop.]
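RED, cited above as a way to improve fair sharing by penalising aggressive flows, drops arriving packets with a probability that rises with the smoothed (EWMA) queue length, so flows sending faster see proportionally more drops before the queue overflows. A minimal sketch, with illustrative thresholds:

```python
import random

class RedQueue:
    """Random Early Detection sketch: the drop probability rises linearly
    from 0 at min_th to max_p at max_th, computed on the exponentially
    weighted moving average (EWMA) of the queue length."""
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.weight = weight      # EWMA weight for the average queue size
        self.avg = 0.0
        self.queue = []

    def enqueue(self, packet):
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg >= self.max_th:
            return False                       # forced drop
        if self.avg >= self.min_th:
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if random.random() < p:
                return False                   # early (probabilistic) drop
        self.queue.append(packet)
        return True

q = RedQueue()
results = [q.enqueue(i) for i in range(100)]   # sustained load, no dequeue
```

Because the average, not the instantaneous length, drives the decision, short bursts pass while persistent overload is throttled early.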

16 Closed-loop operation
- Loose traffic engineering
  - admission control: hose model
    - based on the local traffic profile and per-interface SLAs, not on global network status
    - the local traffic profile per egress/destination is unknown
- Traffic dynamics
  - Topology changes may require admission control & the service model to be re-engineered to meet SLAs.
  - The relevant time scales (minutes to hours) are not consistent with capacity planning.

17 Implementation issues
- Admission control:
  - EF class: PIRC only supported on GE line cards on the Cisco GSR
    - PIRC is a lightweight CAR: no access-group, DSCP or qos-group matching is available; the rule matches *all* traffic inbound on the interface.
  - AF class: status information on active flows is not available (classification and filtering-rule enforcement at flow granularity with the Internet II Juniper processor)
    - AF flow-aggregate filtering based on a token-bucket descriptor: what are appropriate token-bucket parameters?
- Performance metering
  - Off-the-shelf tools for passive measurements at the backbone border are not available at Gb/s rates.
- Policy-based management
  - The COPS protocol is not supported by the Cisco GSR, Juniper M40 or Avici TSR.
- And many other issues to be addressed: QoS policies, SLA/SLS definition, correlation engine, ...

18 Dynamic provisioning & optical networks
- IP pervasiveness & WDM optical technologies are key drivers for high bandwidth demand & lower transmission costs, which in turn lead to exponential traffic growth and huge deployment of transport capacity.
- The exponential nature of traffic growth shifts the capacity-planning paradigm from fine network dimensioning to coarse network dimensioning for pre-provisioned transport networks.
- Coarse network dimensioning and elastic demand for networking services shift the business model from demand-driven to supply-driven, which in turn calls for:
  - new service velocity: fast lambda provisioning
  - arbitrary transport architectures for scalability & flexibility: a shift from ring-based to meshed topologies
  - efficient and open management systems
  - wider SLA capability
  - rapid response to dynamic network traffic and failure conditions

19 MP(Lambda)S optical networks
[Figure: "optical" cross-connects carrying wavelengths λ1, λ2, λ3, λ4, ..., interconnected by an out-of-band control channel forming an IP control network.]
- Software-centric architecture leveraging IP protocols
  - Distributed link-state routing protocol: OSPF, (PNNI)
  - Signalling: Multi-Protocol Label Switching (MPLS) / CR-LDP (RSVP-TE): LDP queries OSPF for the optimal route, and resources are checked prior to path set-up.
  - The IP control-plane interconnection facility is decoupled from the data plane: one IP router address (control) plus one "IP" switch address (data) per cross-connect.
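The path-computation step described above (the signalling protocol querying the link-state database for an optimal route, with resources checked before path set-up) amounts to a constrained shortest-path computation: prune links that lack the requested bandwidth, then run Dijkstra on what remains. A sketch over a toy topology (link costs and bandwidths are invented):

```python
import heapq

def constrained_shortest_path(links, src, dst, bw_needed):
    """CSPF sketch. links = {(a, b): (cost, available_bw)}, bidirectional.
    Links without enough spare bandwidth are pruned before Dijkstra."""
    adj = {}
    for (a, b), (cost, bw) in links.items():
        if bw >= bw_needed:
            adj.setdefault(a, []).append((b, cost))
            adj.setdefault(b, []).append((a, cost))
    dist = {src: (0, [src])}
    heap = [(0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        for nxt, cost in adj.get(node, []):
            nd = d + cost
            if nxt not in dist or nd < dist[nxt][0]:
                dist[nxt] = (nd, path + [nxt])
                heapq.heappush(heap, (nd, nxt, path + [nxt]))
    return None  # no route satisfies the constraint

links = {("Paris", "Rouen"): (1, 2500), ("Rouen", "Caen"): (1, 600),
         ("Paris", "Lyon"): (2, 2500), ("Lyon", "Caen"): (2, 2500)}
path = constrained_shortest_path(links, "Paris", "Caen", bw_needed=1000)
```

With 1000 units requested, the cheap Rouen-Caen link (600 available) is pruned and the longer Lyon route is selected; with a smaller request the direct route wins.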

20 VTHD Configuration
- Sycamore opaque LSA features
  - Switch Capability LSA
    - switch IP address
    - minimum grooming unit supported by the node
    - identified user groups that have reserved and available grooming resources
    - user-group resources that are pre-emptable
    - software revision
  - Trunk Group LSA
    - administrative cost of the trunk group
    - protection strategy for individual trunks within the trunk group
    - user-group assignment of the trunk group
    - conduit through which the trunks run
    - available bandwidth of the trunk group
    - trunks allocated for preemption
[Figure: Avici TSRs at Rennes and Paris attached to a Sycamore cross-connected network spanning Paris STL, Paris MSO, Paris AUB and Rouen.]
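Purely as an illustration of the information such an advertisement carries (the field names below are descriptive, not Sycamore's wire format), the Trunk Group LSA contents listed above could be modelled as a simple record:

```python
from dataclasses import dataclass

@dataclass
class TrunkGroupLSA:
    """Illustrative record of the Trunk Group LSA fields listed above."""
    admin_cost: int            # administrative cost of the trunk group
    protection_strategy: str   # per-trunk protection within the group
    user_group: str            # user-group assignment
    conduit_id: str            # conduit through which the trunks run
    available_bw_mbps: int     # available bandwidth of the trunk group
    preemptable_trunks: int    # trunks allocated for preemption

lsa = TrunkGroupLSA(admin_cost=10, protection_strategy="1+1",
                    user_group="vthd", conduit_id="Paris-Rouen",
                    available_bw_mbps=2500, preemptable_trunks=1)
```

A path-computation element would filter and rank trunk groups on exactly these fields when routing a new lightpath.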

21 Dynamic provisioning for trunks
- TSR composite links: bundling of STM-16 links
  - A composite link is presented as a single PPP connection to IP and MPLS.
  - IP traffic is load-balanced over the member links based on a hash function.
  - Link failures are rerouted over surviving member links in under 45 ms, which may be faster than restoration at the optical level.
  - Decoupling of the IP routing topology (software/control plane) from router throughput (hardware/data plane).
    - Relevant to an IP/WDM backbone router: the number of line cards scales with the number of wavelengths times the number of fibres.
- Dynamic provisioning for composite-link capacity upgrading
  - pre-provisioned transport network: capacity pool
  - standard or diversely routed additional links (packet-ordering preservation)
  - needs signalling between the router & the optical cross-connect.
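The hash-based balancing described above can be sketched as follows: hashing flow identifiers onto a member link keeps all packets of one flow on one link, preserving ordering, while different flows spread across the bundle. The key format and hash choice here are illustrative, not the TSR's actual scheme:

```python
import zlib

def member_link(src_ip, dst_ip, src_port, dst_port, n_links):
    """Pick a member link of the composite link by hashing flow
    identifiers; the same flow always maps to the same link, so
    per-flow packet ordering is preserved."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % n_links

# all packets of a flow map to one link; after a member-link failure the
# hash is simply taken modulo the surviving links
link4 = member_link("10.0.0.1", "10.0.0.2", 1234, 80, 4)
link3 = member_link("10.0.0.1", "10.0.0.2", 1234, 80, 3)
```

Rehashing over the survivors is what makes the sub-45 ms reroute possible: no routing protocol needs to converge, only the modulus changes.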

22 O-UNI signalling
- UNI signalling:
  - OIF draft
  - signalling protocols: RSVP-TE or CR-LDP
  - Avici & Sycamore first release scheduled next June
  - VTHD experiment: Avici/FTR&D/Sycamore partnership
- UNI functions:
  - connection creation, deletion, status enquiry
  - modification of connection properties: end points, service bandwidth, protection/restoration requirements
  - neighbour discovery: bootstrap the IP control channel, establish the basic configuration, discover port connectivity
  - address resolution: registration, query; client address types: IPv4, IPv6, ITU-T E.164, ANSI DCC (ATM End System Address, NSAP)
- COPS usage for UNI, for outsourcing policy provisioning within the optical domain
[Figure: UNI clients (UNI-C) attached through the UNI to network-side agents (UNI-N) of Optical Network Elements (ONEs), with neighbour discovery (ND) on each interface and internal connectivity inside the optical network.]
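The UNI functions listed above (creation, deletion, status enquiry, modification of properties) can be pictured as a small client-side (UNI-C) model. Class and method names here are hypothetical, not the OIF message set, and no real signalling is performed:

```python
from dataclasses import dataclass

@dataclass
class OpticalConnection:
    """Client-side view of one optical connection; the fields mirror the
    modifiable properties listed above."""
    endpoints: tuple
    bandwidth_mbps: int
    protection: str = "unprotected"
    state: str = "requested"

class UniClient:
    """Illustrative UNI-C: tracks the connections it has asked the optical
    network (UNI-N) to set up."""
    def __init__(self):
        self.connections = {}
        self._next_id = 0

    def create(self, endpoints, bandwidth_mbps, protection="unprotected"):
        self._next_id += 1
        conn = OpticalConnection(endpoints, bandwidth_mbps, protection, "up")
        self.connections[self._next_id] = conn
        return self._next_id

    def status(self, conn_id):
        return self.connections[conn_id].state

    def delete(self, conn_id):
        self.connections.pop(conn_id)

uni = UniClient()
cid = uni.create(("Rennes", "Paris AUB"), bandwidth_mbps=2500)
```

In the real architecture each of these operations would be carried by RSVP-TE or CR-LDP messages across the UNI reference point.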

23 Conclusion
- Where we stand now
  - French partnership kernel formed.
  - IP network deployment completed.
  - Partner usage and related applications ramping up.
  - Sycamore platform lab tests.
- What's to come
  - VPN service provisioning (first IPsec-based, then MPLS-based) to enable secure usage from "regular" hosts.
  - QoS-capable test-bed.
  - IPv6 service provisioning.
  - Support for new applications/services within the RNRT/RNTL or IST frameworks?

24 Thank you!

