
1 VTHD PROJECT (Very High Broadband Network Service): French NGI initiative
C. GUILLEMOT FT / BD / FTR&D / RTA

2 Presentation Overview
VTHD: French NGI initiative
- project objectives
- partnership
- VTHD network
QoS engineering
- rationale
- service model
- implementation issues
Provisioning & traffic engineering
- dynamic provisioning with optical networks
- interworking of IP and X-connected WDM networks
- layer 2 traffic engineering
Conclusion

3 VTHD Project objectives
To set up a strong partnership with higher education and research institutions within the framework of the French RNRT and European IST networking development programmes ("open Internet R&D").
To develop new applications and to ensure that they can be put to use in the broader global Internet.
To experiment with optical internetworking, with two joint technological objectives:
- to assess scalable capacity upgrading techniques
- to assess the traffic management tools necessary to operate a QoS-capable test-bed.
To deploy and operate a high-performance network that:
- provides nationwide high-capacity interconnection facilities among laboratories at the IP level
- supports experiments with new networking designs
- carries actual traffic levels consistent with the interconnection capacity.

4 Partnership & Applications (1)
Partners: France Telecom/FTR&D; INRIA (French national institute for computer science research); Georges Pompidou European Hospital; graduate telecommunications engineering schools: ENST, ENST-Br, INT; Institut EURECOM (ENST + EPFL, Switzerland).
Data applications:
- Grid computing (INRIA): middleware platform for distributed computing, high-performance simulation & monitoring
- 3D virtual environment (INRIA)
- Database recovery, data replication (FTR&D)
- Distributed caching (Institut EURECOM)

5 Partnership & Applications (2)
Video streaming: video-on-demand, scheduled live transmission, TV broadcasting (FTR&D)
- MPEG-1: ~1 Mb/s
- MPEG-4: <~1 Mb/s (adaptive video streaming, multicast)
- MPEG-2: ~6 Mb/s, high-quality video (TV over IP)
Real-time applications:
- Tele-education (telecommunications engineering schools): distance learning, cooperative educational environments, digital libraries
- Tele-medicine (INRIA + G. Pompidou hospital): remote analysis and processing of high-definition medical images, surgery training under remote control
- Voice over IP (FTR&D): PABX interconnection (E1 2 Mb/s emulation), adaptive VoIP with hierarchical coding
- Video conferencing (FTR&D)

6 VTHD network
8 points of presence interconnected by an IP/WDM backbone, aggregating traffic from campuses over Gigabit Ethernet point-to-point access links.
Transmission resources (access fibres, long-haul WDM optical channels) supplied by the France Telecom Network Division out of spare resources.
VTHD network management carried out by FT operational IP network staff on a "best effort" basis.
VTHD network usage:
- no survivability commitment (neither for link nor for router faults)
- acceptable usage policy: notifiable "experiments"
- partners are committed to keeping a commercial Internet access.

7 Network Architecture
[Backbone map: POPs at Paris, Rouen, Caen, Lannion, Rennes, Nancy, Lyon, Grenoble and Sophia interconnected over the WDM back-office, each with backbone and access routers.]
A weakly meshed topology, moving towards larger POP connectivity and peering with the IST Atrium network. 8 POPs connected to 18 campuses.

8 VTHD: A multi-supplier infrastructure
[Diagram of the multi-supplier infrastructure: routers & DWDM systems from several vendors (Cisco 12000, Juniper M40, Cisco 6509, Juniper M20, Avici TSR); Gigabit Ethernet and STM-1/OC-3 access links; 2.5 Gb/s STM-16 POS backbone links plus a 4-channel STM-16 ring; sites labelled FTR&D, FT/BD, INRIA, ENST, INT, EURECOM and HEGP.]

9 VTHD: Routing / Protection by IP rerouting
[Diagram: IS-IS and I-BGP4 within AS VTHD; static routing and E-BGP4 towards partner sites (ENST, FTR&D, INRIA, INT, Eurécom, HEGP) and RENATER; protection by IP rerouting (~10 s).]

10 QoS engineering: rationale
Context
- VTHD is an experimental and operational network that encompasses the core network, the CPEs and the dedicated (V)LANs, and that will progressively gain reachability to FTR&D operational hosts (VPN engineering permitting).
- Traffic: the VTHD network interconnects distributed communities (FTR&D, INRIA, telecommunications engineering schools) and supports bandwidth-demanding applications for bulk traffic (metacomputing, web traffic, database backup).
- VTHD also supports applications that need QoS guarantees: VoIP, E1 virtual leased lines, 3D virtual environments, video conferencing.
- Traffic load is expected to remain low in the VTHD core network, with occasional congestion events: a context representative of actual ISP backbones.
Objective
- To experiment with a differentiated, QoS-capable platform involving all architectural components, even if their functionalities are basic.

11 Expected VTHD bulk traffic
Bulk traffic is data traffic.
"Web traffic": INRIA WAGON tool
- WAGON is a software tool that generates web requests.
- Web-browsing user behaviour is simulated with a stochastic process, starting from data traces of actual web servers.
- Web servers generate actual return traffic in response to the virtual users' requests.
- WAGON's first objective is web server architecture improvement.
- Traffic per server: ~160 Mbit/s (CPU limited), 7 servers; 1 Gb/s links between the grid cluster, the web servers and 42 web clients.
Grid computing (INRIA)
- Parallel computing using a distributed shared memory between 16 (soon 32) PC clusters.
- Processes (computing, data transfers) are synchronized by the grid middleware.
- Data transfers are built on independent PC-to-PC file transfers.
- Mean traffic level per cluster transfer: ~500 Mbit/s.
Database recovery (FTR&D)
- 80 gigabyte transfers (~a few 100 Mb/s?)
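The request-generation idea behind WAGON can be illustrated with a minimal sketch. The exponential think times, Pareto object sizes and all default parameters below are illustrative assumptions, not WAGON's actual model or its trace-derived values.

```python
import random

def generate_web_session(mean_think_time_s=5.0, mean_pages=10.0,
                         mean_object_kb=12.0, rng=random):
    """One simulated browsing session as a list of (think_time_s, object_kb) pairs.

    Distributions and defaults are illustrative assumptions, not WAGON's model.
    """
    n_pages = max(1, int(rng.expovariate(1.0 / mean_pages)))
    session = []
    for _ in range(n_pages):
        think_time = rng.expovariate(1.0 / mean_think_time_s)  # user "reads" the page
        size_kb = rng.paretovariate(1.2) * mean_object_kb      # heavy-tailed object size
        session.append((think_time, size_kb))
    return session

if __name__ == "__main__":
    session = generate_web_session()
    print(f"{len(session)} requests, {sum(s for _, s in session):.0f} kB transferred")
```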

12 Actual VTHD bulk traffic

13 QoS Architecture components
[Architecture diagram, labelled components: FTR&D directory, DNS/DHCP, FE/GE switches, operational interconnection facility, OSS/IP policy manager, policy servers, VTHD directory, VTHD CPE (Cisco 7206), measurements, SLA, VTHD backbone, PHB/AC engineering, traffic matrix modelling, correlation engine, QoS manager, VTHD back-office directory, PE.]
Building blocks of the QoS engine:
- VTHD service model (PHBs, admission control)
- performance metering (measurement of QoS parameters)
- modelling (traffic matrix, correlation engine)
- policy-based management (policies, COPS protocol)
- SLAs

14 VTHD backbone Service model (1)
3 service classes, mapped to the EF and AF DiffServ classes, both for admission control and for service differentiation in the core network. The scheme is applied at the PE ingress interfaces; CPEs are in charge of flow classification, traffic conditioning and packet marking.
Class 1: Expedited Forwarding
- intended for stream traffic
- traffic descriptor: aggregated peak rate
- QoS guarantees: bounded delay, low jitter, low packet loss rate
- admission control: token bucket (peak rate, small bucket capacity), suitable for high-speed links: the individual flow peak rate is a small fraction of the link rate, so variations in the combined input rate remain low
Class 3: Best Effort
- intended for elastic traffic
- no traffic descriptor, no admission control, best-effort delivery
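A minimal sketch of the token-bucket check described for the EF class. The peak rate and bucket size are illustrative; a real implementation would run per ingress interface on the PE rather than in Python.

```python
import time

class TokenBucket:
    """Token-bucket policer for the EF class: peak rate with a small bucket,
    as on the slide. Rates in bytes/s; parameter values are illustrative."""

    def __init__(self, peak_rate_bps, bucket_bytes):
        self.rate = peak_rate_bps
        self.capacity = bucket_bytes
        self.tokens = bucket_bytes
        self.last = time.monotonic()

    def conforms(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens for the elapsed interval, capped at the bucket size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True   # in profile: forward as EF
        return False      # out of profile: drop or remark
```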

15 VTHD backbone Service model (2)
Class 2: Assured Forwarding
- intended for elastic traffic that needs a minimum throughput guarantee
- traffic descriptor: ?
- QoS guarantees: minimum throughput
- admission control: based on the number of active flows and on TCP: whatever the traffic profile, fair sharing of the dedicated bandwidth among flows ensures that flow throughput never drops below some minimum acceptable level for admitted flows (after J.W. Roberts); this assumes that TCP flow control is a good approximation of fair sharing; a RED algorithm may improve fair sharing by penalizing aggressive flows.
[Diagram of the data path in a VTHD node: classifier, meter, remarking, EF / AF1 / BE (queue 3) queues, absolute dropper, counter and scheduling algorithm, with a feedback loop for conforming traffic.]
Admission control should keep the cumulative EF & AF traffic load below congestion, and low enough to let the closed-loop feedback operate properly.
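A minimal sketch of the flow-count-based admission idea for the AF class (after J.W. Roberts, as summarised above). The dedicated bandwidth and minimum-throughput figures are assumptions for illustration only.

```python
def af_admission_decision(active_flows, af_bandwidth_bps, min_throughput_bps):
    """Admit a new AF flow only if fair sharing of the AF bandwidth still leaves
    every flow at least the minimum acceptable throughput. Values illustrative."""
    fair_share_after = af_bandwidth_bps / (active_flows + 1)
    return fair_share_after >= min_throughput_bps

# Example: 1 Gb/s dedicated to AF, 5 Mb/s minimum per flow -> at most 200 flows.
print(af_admission_decision(150, 1_000_000_000, 5_000_000))  # True
print(af_admission_decision(200, 1_000_000_000, 5_000_000))  # False
```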

16 Closed loop operation
Loose traffic engineering
- admission control follows the hose model: based on the local traffic profile and on per-interface SLAs, not on global network status; the local traffic profile per egress/destination is unknown.
Traffic dynamics
- topology changes may require the admission control and service model to be re-engineered to keep meeting the SLAs
- the relevant time scales (minutes to hours) are not consistent with capacity planning.
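A sketch of why hose-model admission is "loose": each ingress only checks its own contracted SLAs against its interface, with no knowledge of where the traffic will exit. The overbooking factor is an assumed knob, not something the slide specifies.

```python
def hose_ingress_check(contracted_rates_bps, interface_capacity_bps,
                       overbooking_factor=1.0):
    """Hose-model admission at one ingress interface: only the locally
    contracted SLAs are known, so the check is simply that their sum fits the
    interface, whatever the egress/destination split turns out to be."""
    return sum(contracted_rates_bps) <= interface_capacity_bps * overbooking_factor

# Example: three CPE SLAs of 200, 300 and 400 Mb/s on a 1 Gb/s ingress interface.
print(hose_ingress_check([200e6, 300e6, 400e6], 1e9))  # True: 900 Mb/s fits
```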

17 Implementation issues
Admission control
- EF class: PIRC is only supported on GE line cards on the Cisco GSR; PIRC is a lightweight CAR: no access-group, DSCP or qos-group matching is available; the rule matches *all* traffic inbound on the interface.
- AF class: status information on active flows is not available (classification and filtering rules can only be enforced at flow granularity with the Juniper Internet Processor II); AF flow aggregates are therefore filtered with a token-bucket descriptor. What are appropriate token-bucket parameters?
Performance metering
- off-the-shelf tools for passive measurements at the backbone border are not available at Gb/s rates.
Policy-based management
- the COPS protocol is not supported by the Cisco GSR, Juniper M40 or Avici TSR.
And many other issues to be addressed: QoS policies, SLA/SLS definition, correlation engine, ...

18 Dynamic provisioning & optical networks
IP pervasiveness and WDM optical technologies are key drivers for high bandwidth demand and lower transmission costs, which in turn lead to exponential traffic growth and massive deployment of transport capacity.
The exponential nature of traffic growth shifts the network capacity planning paradigm from fine network dimensioning to coarse network dimensioning of pre-provisioned transport networks.
Coarse network dimensioning and the elastic demand for networking services shift the business model from demand-driven to supply-driven, which in turn calls for:
- new service velocity: fast lambda provisioning
- arbitrary transport architectures for scalability & flexibility: a shift from ring-based to meshed topologies
- efficient and open management systems
- wider SLA capability
- rapid response to dynamic network traffic and failure conditions.

19 MP(Lambda)S optical networks
Software-centric architecture leveraging IP protocols:
- distributed link-state routing protocol: OSPF (PNNI)
- signaling: Multi-Protocol Label Switching (MPLS) with CR-LDP (or RSVP-TE): LDP queries OSPF for the optimal route, and resources are checked prior to path set-up.
The IP control-plane interconnection facility is decoupled from the data plane: each cross-connect has an IP router address (control) plus an "IP" switch address (data).
[Diagram: "optical" cross-connects linked by an out-of-band control channel forming the IP control network.]
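A minimal sketch of the constraint-based route computation implied by "LDP queries OSPF for the optimal route, resources are checked prior to path set-up": prune links that cannot carry the requested bandwidth, then take the shortest remaining path. The topology, costs and bandwidth figures are made up for illustration.

```python
import heapq

def cspf(links, src, dst, demand_bw):
    """Constrained shortest path: drop links with too little free bandwidth,
    then run Dijkstra on what remains. `links` maps (a, b) -> (cost, avail_bw)."""
    graph = {}
    for (a, b), (cost, avail) in links.items():
        if avail >= demand_bw:                       # resource check before set-up
            graph.setdefault(a, []).append((b, cost))
            graph.setdefault(b, []).append((a, cost))
    best = {src: 0}
    queue = [(0, src, [src])]
    while queue:
        d, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nxt not in best or nd < best[nxt]:
                best[nxt] = nd
                heapq.heappush(queue, (nd, nxt, path + [nxt]))
    return None                                      # no feasible route

# Example topology (costs and free bandwidth in Gb/s are made up):
links = {("Paris", "Rouen"): (1, 2.5), ("Rouen", "Rennes"): (1, 2.5),
         ("Paris", "Rennes"): (3, 1.0)}
print(cspf(links, "Paris", "Rennes", demand_bw=2.5))  # ['Paris', 'Rouen', 'Rennes']
```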

20 Sycamore Xconnected network
[Diagram of the VTHD configuration: Avici TSRs at Rennes, Rouen and Paris (AUB, STL, MSO) attached to the Sycamore cross-connected network, with 1 to 3 lambdas per trunk.]
Sycamore opaque LSA features:
- Switch Capability LSA: switch IP address; minimum grooming unit supported by the node; identified user groups with their reserved and available grooming resources; user-group resources that can be preempted; software revision.
- Trunk Group LSA: administrative cost of the trunk group; protection strategy for individual trunks within the trunk group; user-group assignment of the trunk group; conduit through which the trunks run; available bandwidth of the trunk group; trunks allocated for preemption.
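The two attribute lists above can be read as small data structures; the sketch below just mirrors them, with field names and types that are assumptions rather than Sycamore's actual LSA encoding.

```python
from dataclasses import dataclass, field

@dataclass
class SwitchCapabilityLSA:
    """Fields advertised in the switch-capability opaque LSA, per the slide."""
    switch_ip: str
    min_grooming_unit: str                 # e.g. "STM-1" (illustrative)
    user_group_resources: dict = field(default_factory=dict)  # group -> (reserved, available)
    preemptable_groups: list = field(default_factory=list)
    software_revision: str = ""

@dataclass
class TrunkGroupLSA:
    """Fields advertised in the trunk-group opaque LSA, per the slide."""
    admin_cost: int
    protection_strategy: str               # per-trunk protection within the group
    user_group: str
    conduit_id: str                        # conduit through which the trunks run
    available_bandwidth_gbps: float
    trunks_for_preemption: int
```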

21 Dynamic provisioning for λ trunks
TSR composite links: bundling of STM-16 links
- a composite link is presented as a single PPP connection to IP and MPLS
- IP traffic is load-balanced over the member links using a hash function
- link failures are rerouted over the surviving member links in under 45 ms, which may be faster than restoration at the optical level
- decoupling of the IP routing topology (software/control plane) from router throughput (hardware/data plane).
Relevant to an IP/WDM backbone router: the number of line cards scales with the number of lambdas x the number of fibres.
Dynamic λ provisioning for composite-link capacity upgrading:
- pre-provisioned transport network: capacity pool
- additional link routed along the standard path or diversely (packet ordering preservation)
- needs signaling between the router and the optical cross-connect.
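A minimal sketch of the hash-based load balancing over composite-link members; the CRC32 hash and the member-link names are illustrative choices, not the TSR's actual algorithm.

```python
import zlib

def pick_member_link(flow_5tuple, member_links):
    """Map a flow to one member link of a composite link. The same flow always
    hashes to the same member, which preserves packet ordering within the flow."""
    key = "|".join(str(f) for f in flow_5tuple).encode()
    return member_links[zlib.crc32(key) % len(member_links)]

members = ["stm16-0", "stm16-1", "stm16-2", "stm16-3"]
flow = ("10.1.1.1", "10.2.2.2", 6, 51234, 80)   # src, dst, proto, sport, dport
print(pick_member_link(flow, members))
# On a member-link failure, drop it from the list and flows rehash onto survivors.
print(pick_member_link(flow, members[:3]))
```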

22 O-UNI signaling
[Diagram: a UNI client (UNI-C) attached to an optical network element (ONE, UNI-N) inside the optical network; ND = UNI neighbour discovery; internal connectivity within the optical network.]
UNI signaling
- OIF draft: UNI signaling protocols are RSVP-TE or CR-LDP
- first Avici & Sycamore release scheduled for next June
- VTHD experiment: Avici/FTR&D/Sycamore partnership.
UNI functions
- connection creation, deletion, status enquiry
- modification of connection properties: end points, service bandwidth, protection/restoration requirements
- neighbour discovery: bootstrap the IP control channel, establish the basic configuration, discover port connectivity
- address resolution: registration and query of client addresses (types: IPv4, IPv6, ITU-T E.164, ANSI DCC ATM End System Address, NSAP)
- COPS usage for the UNI, for outsourcing policy provisioning within the optical domain.
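The connection-creation parameters listed above can be summarised in a small structure; the sketch below is an illustration of what a UNI-C would hand to the optical network, not the OIF-defined message format.

```python
from dataclasses import dataclass

@dataclass
class UniConnectionRequest:
    """Parameters for a UNI connection-creation request, mirroring the UNI
    functions listed above. Field names and types are illustrative."""
    source_endpoint: str   # client address: IPv4, IPv6, E.164, NSAP, ...
    dest_endpoint: str
    bandwidth: str         # e.g. "STM-16"
    protection: str        # protection/restoration requirement, e.g. "unprotected"

request = UniConnectionRequest("uni-c-rennes", "uni-c-paris-aub", "STM-16", "unprotected")
print(request)
```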

23 Conclusion
Where we stand now
- French partnership kernel established
- IP network deployment completed
- partners' usage and related applications ramping up
- Sycamore platform lab tests.
What's to come
- VPN service provisioning (first IPsec-based, then MPLS-based) to enable secure usage from "regular" hosts
- QoS-capable test-bed
- IPv6 service provisioning
- new applications/services supported within the RNRT/RNTL or IST frameworks?

24 Thank you!

