Slide 1: VTHD PROJECT (Very High Broadband Network Service): French NGI Initiative
C. Guillemot, FT / BD / FTR&D / RTA (christian.guillemot@francetelecom.com)
Slide 2: Presentation Overview
- VTHD: French NGI initiative
  - project objectives
  - partnership
  - VTHD network
- QoS engineering
  - rationale
  - service model
  - implementation issues
- Provisioning & traffic engineering
  - dynamic provisioning with optical networks
  - interworking of IP and cross-connected WDM networks
  - layer 2 traffic engineering
- Conclusion
Slide 3: VTHD Project Objectives
- To set up a strong partnership with higher education and research institutions within the framework of the French RNRT and European IST networking development programmes.
- Open Internet R&D: to develop new applications and to ensure that they can be put to use in the broader global Internet.
- To experiment with optical internetworking, with two joint technological objectives:
  - to assess scalable capacity-upgrading techniques,
  - to assess the traffic management tools necessary to operate a QoS-capable test-bed.
- To deploy and operate a high-performance network that
  - provides nationwide high-capacity interconnection facilities among laboratories at the IP level,
  - supports experiments with new designs for networking,
  - carries actual traffic levels consistent with the interconnection capacity.
Slide 4: Partnership & Applications (1)
- France Telecom / FTR&D
- INRIA (French national institute for research in computer science) & European G. Pompidou Hospital
- Telecommunications engineering schools: ENST; ENST-Br; INT
- Institut EURECOM (ENST + EPFL, Switzerland)
Data applications:
- Grid computing (INRIA)
  - Middleware platform for distributed computing
  - High-performance simulation & monitoring
- 3D virtual environment (INRIA)
- Database recovery, data replication (FTR&D)
- Distributed caching (Institut EURECOM)
Slide 5: Partnership & Applications (2)
- Video streaming: video-on-demand, scheduled live transmission, TV broadcasting (FTR&D)
  - MPEG-1: ~1 Mb/s
  - MPEG-4: <~1 Mb/s (adaptive video streaming, multicast)
  - MPEG-2: ~6 Mb/s, high-quality TV over IP
- Real-time applications
  - Tele-education (telecommunications engineering schools): distance learning, educational cooperative environments, digital libraries
  - Tele-medicine (INRIA + G. Pompidou Hospital): distant analysis & processing of high-definition medical images; surgery training under distant control
  - Voice over IP (FTR&D): PABX interconnection with E1 (2 Mb/s) emulation; adaptive VoIP with hierarchical coding
  - Video conferencing (FTR&D)
Slide 6: VTHD Network
- 8 points of presence interconnected by an IP/WDM backbone
  - aggregating traffic from campuses,
  - using Gigabit Ethernet point-to-point access links.
- Transmission resources (access fibers, long-haul WDM optical channels) supplied by the France Telecom Network Division out of spare resources.
- VTHD network management carried out by FT operational IP network staff on a "best effort" basis.
- VTHD network usage
  - No survivability commitment (neither for link nor for router faults)
  - Acceptable Usage Policy: notifiable "experiments"
  - Partners are committed to keeping a commercial Internet access.
Slide 7: Network Architecture
(Map: POPs at Paris, Grenoble, Lannion, Rennes, Sophia, Lyon, Nancy, Rouen and Caen over the WDM back-office, showing access routers and backbone routers, plus the Atrium peering.)
- A weakly meshed topology moving towards
  - larger POP connectivity,
  - peering with the IST Atrium network.
- 8 POPs connected to 18 campuses.
Slide 10: QoS Engineering: Rationale
- Context
  - VTHD is an experimental & operational network
    - that encompasses the core network, the CPEs and the dedicated (V)LANs,
    - that will progressively gain reachability to FTR&D operational hosts (VPN engineering permitting).
  - Traffic: the VTHD network
    - interconnects distributed communities (FTR&D, INRIA, telecommunications engineering schools),
    - supports bandwidth-demanding applications generating bulk traffic (metacomputing, web traffic, database backup),
    - supports applications that need QoS guarantees: VoIP, E1 virtual leased lines, 3D virtual environments, video conferencing.
  - Traffic load is expected to remain low in the VTHD core network, with occasional congestion events: a context representative of actual ISP backbones.
- Objective
  - To experiment with a differentiated, QoS-capable platform involving all architectural components, even if their functionalities are basic.
Slide 11: Expected VTHD Bulk Traffic
Bulk traffic is data traffic:
- "Web traffic": INRIA WAGON tool
  - WAGON is a software tool generating web requests.
  - Web-browsing user behaviour is simulated using a stochastic process, starting from data traces of actual web servers (see the sketch below).
  - Web servers generate actual return traffic in response to the virtual users' requests.
  - WAGON's first objective is web server architecture improvement.
  - Traffic per server: 160 Mbit/s (CPU-limited); 7 servers, 42 web clients, 1 Gb/s links to the grid cluster and web servers.
- Grid computing (INRIA)
  - Parallel computing using a distributed shared memory between 16 (soon 32) PC clusters.
  - Processes (computing, data transfers) are synchronized by the grid middleware.
  - Data transfers are built on independent PC-to-PC file transfers.
  - Mean traffic level per cluster transfer: 500 Mbit/s.
- Database recovery (FTR&D): 80-gigabyte transfers (~a few 100 Mb/s?)
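The stochastic user model can be pictured with a short sketch. What follows is a minimal, hypothetical simplification of WAGON-style load generation (the real INRIA tool is considerably richer); the function name, parameters and trace format are illustrative assumptions, not WAGON's actual interface:

```python
# Hypothetical sketch: virtual users alternate exponential "think times"
# and requests, with request targets drawn from a trace of a real server.
import random

def simulate_user(trace_urls, mean_think_time_s=5.0, n_requests=10, seed=None):
    """Yield (time, url) pairs for one simulated browsing session."""
    rng = random.Random(seed)
    t = 0.0
    for _ in range(n_requests):
        t += rng.expovariate(1.0 / mean_think_time_s)  # exponential think time
        yield t, rng.choice(trace_urls)                # URL drawn from the trace

if __name__ == "__main__":
    trace = ["/index.html", "/news", "/images/logo.gif"]
    for when, url in simulate_user(trace, seed=42):
        print(f"t={when:6.1f}s  GET {url}")
```

Replaying many such sessions against real servers produces actual return traffic, which is what lets the tool exercise server architecture rather than just the network.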
Slide 13: QoS Architecture Components
(Diagram: the VTHD backbone with CPEs (Cisco 7206) and PEs, policy servers, the FTR&D, VTHD and back-office directories, DNS/DHCP, FE/GE switches, the operational interconnection facility, the OSS/IP policy manager, the QoS manager and correlation engine, tied together by SLAs, measurements, traffic matrix modelling and PHB/AC engineering.)
Building blocks integral to the QoS engine:
- VTHD service model (PHB, admission control)
- Performance metering (QoS parameter measurements)
- Modelling (traffic matrix, correlation engine)
- Policy-based management (policies, COPS protocol)
- SLA
Slide 14: VTHD Backbone Service Model (1)
- 3 service classes mapped to the EF and AF DiffServ classes, both for admission control and for service differentiation in the core network.
  - The scheme is applied at PE ingress interfaces.
  - CPEs are in charge of flow classification, traffic conditioning and packet marking.
- Class 1: Expedited Forwarding
  - intended for stream traffic
  - traffic descriptor: aggregated peak rate
  - QoS guarantees: bounded delay, low jitter, low packet loss rate
  - admission control: token bucket (peak rate, low bucket capacity); a sketch follows below
  - suitable for high-speed links: each individual flow's peak rate is a small fraction of the link rate, so variations in the combined input rate remain low
- Class 3: Best Effort
  - intended for elastic traffic
  - no traffic descriptor, no admission control
  - best-effort delivery
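As a rough illustration of the Class 1 policing scheme, here is a minimal token-bucket meter sketch. Only the (peak rate, low bucket capacity) profile comes from the slide; the class name, the example values and the drop-versus-remark policy are assumptions:

```python
class TokenBucket:
    """Peak-rate policer: a small bucket refilled at the contracted rate."""
    def __init__(self, peak_rate_bps, bucket_bytes):
        self.rate = peak_rate_bps / 8.0       # refill rate, bytes per second
        self.capacity = float(bucket_bytes)   # low capacity = tight peak-rate policing
        self.tokens = float(bucket_bytes)
        self.last = 0.0                       # arrival time of previous packet, seconds

    def conforms(self, now, pkt_bytes):
        """True if a pkt_bytes packet arriving at time `now` fits the profile."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False  # out of profile: drop (or remark out of EF)

# e.g. a 2 Mb/s aggregate policed with a 3000-byte bucket (assumed values):
tb = TokenBucket(peak_rate_bps=2e6, bucket_bytes=3000)
print(tb.conforms(0.000, 1500), tb.conforms(0.001, 1500), tb.conforms(0.002, 1500))
```

The small bucket is the point: it tolerates only brief bursts above the contracted peak rate, which is what makes the aggregated-peak-rate descriptor meaningful for delay-sensitive EF traffic.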
Slide 15: VTHD Backbone Service Model (2)
- Class 2: Assured Forwarding
  - intended for elastic traffic that needs a minimum throughput guarantee
  - traffic descriptor: ?
  - QoS guarantees: minimum throughput
  - admission control: based on the number of active flows & TCP (see the sketch below)
    - Whatever the traffic profile, fair sharing of the dedicated bandwidth among flows ensures that flow throughput never decreases below some minimum acceptable level for admitted flows (after J.W. Roberts).
    - This assumes that TCP flow control is a good approximation of fair sharing.
    - The RED algorithm may improve fair sharing by penalizing aggressive flows.
(Diagram: a DS VTHD node: classifier, meter, EF/AF1/BE paths with remarking, absolute dropper, scheduler and counter-queue feedback.)
- Admission control should keep the cumulative EF & AF traffic load below congestion, and low enough for the closed-loop feedback to take place properly.
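The flow-count-based admission test (after J.W. Roberts) can be stated in a few lines: under fair sharing, each of n admitted flows gets roughly class_bandwidth / n, so a new flow is admitted only while the post-admission share stays above the minimum acceptable throughput. A minimal sketch with illustrative numbers (the function name and values are assumptions):

```python
def admit_af_flow(active_flows, class_bandwidth_bps, min_throughput_bps):
    """Admit a new AF flow iff the fair share after admission stays acceptable."""
    fair_share = class_bandwidth_bps / (active_flows + 1)
    return fair_share >= min_throughput_bps

# e.g. 400 Mbit/s of AF bandwidth and a 1 Mbit/s minimum per flow:
assert admit_af_flow(350, 400e6, 1e6)      # share ~1.14 Mb/s -> admit
assert not admit_af_flow(400, 400e6, 1e6)  # share < 1 Mb/s   -> reject
```

Note how this sidesteps the open traffic-descriptor question: the test needs only a count of active flows, not a per-flow profile, which is precisely why the missing per-flow status information becomes the implementation obstacle discussed on slide 17.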
Slide 16: Closed-Loop Operation
- Loose traffic engineering
- Admission control: hose model
  - based on the local traffic profile and the per-interface SLA,
  - not on global network status;
  - the local traffic profile per egress/destination is unknown.
- Traffic dynamics
  - Topology changes may require admission control & the service model to be re-engineered to meet SLAs.
  - The relevant time scales (minutes to hours) are not consistent with capacity planning.
Slide 17: Implementation Issues
- Admission control
  - EF class: PIRC is only supported on GE line cards on the Cisco GSR.
    - PIRC is a lightweight CAR: no access-group, DSCP or qos-group matching is available; the rule matches *all* traffic inbound on the interface.
  - AF class: status information on active flows is not available.
    - (Classification and filtering-rule enforcement at flow granularity is expected with the Internet Processor II on Juniper routers.)
    - AF flow aggregates are filtered based on a token bucket descriptor: what are appropriate token bucket parameters?
- Performance metering
  - Off-the-shelf tools for passive measurements at the backbone border are not available at Gb/s rates.
- Policy-based management
  - The COPS protocol is not supported by the Cisco GSR, Juniper M40 or Avici TSR.
- ...and many other issues to be addressed: QoS policies, SLA/SLS definition, correlation engine, ...
Slide 18: Dynamic Provisioning & Optical Networks
- IP pervasiveness & WDM optical technologies are key drivers of high bandwidth demand & lower transmission costs, which in turn lead to exponential traffic growth and huge deployments of transport capacity.
- The exponential nature of traffic growth shifts the capacity-planning paradigm from fine network dimensioning to coarse network dimensioning for pre-provisioned transport networks.
- Coarse network dimensioning and elastic demand for networking services shift the business model from demand-driven to supply-driven, which in turn calls for:
  - new service velocity: fast lambda provisioning
  - arbitrary transport architecture for scalability & flexibility: a shift from ring-based to meshed topologies
  - efficient and open management systems
  - wider SLA capability
  - rapid response to dynamic network traffic and failure conditions
Slide 19: MP(lambda)S Optical Networks
- Software-centric architecture leveraging IP protocols
  - Distributed link-state routing protocol: OSPF (PNNI)
  - Signaling: Multi-Protocol Label Switching (MPLS) / CR-LDP (RSVP-TE). LDP queries OSPF for the optimal route; resources are checked prior to path set-up (see the sketch below).
- IP control plane interconnection facility decoupled from the data plane.
  - One IP router address (control) + one "IP" switch address (data) per cross-connect.
(Diagram: "optical" cross-connects linked by an out-of-band control channel to the IP control network.)
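The "query for the optimal route, check resources prior to set-up" step amounts to constraint-based routing. A minimal sketch, assuming the link-state database is reduced to a {(node, node): (cost, available bandwidth)} map; the data layout and function name are assumptions, standing in for the CR-LDP/RSVP-TE machinery, not reproducing it:

```python
import heapq

def cspf(links, src, dst, bandwidth):
    """links: {(a, b): (cost, available_bw)} per directed link.
    Returns the lowest-cost feasible path as a list of nodes, or None."""
    adj = {}
    for (a, b), (cost, bw) in links.items():
        if bw >= bandwidth:            # constraint: prune links lacking the bandwidth
            adj.setdefault(a, []).append((b, cost))
    heap, seen = [(0, src, [src])], set()   # Dijkstra on the pruned topology
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (dist + cost, nxt, path + [nxt]))
    return None                        # no feasible route: reject the set-up request
```

Pruning before the shortest-path search is what turns plain OSPF routing into resource-aware path selection: an infeasible request fails at computation time instead of during signaling.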
Slide 20: Sycamore Cross-Connected Network (VTHD Configuration)
(Diagram: Avici TSRs at Rennes, Rouen, Paris AUB, Paris STL and Paris MSO interconnected through the Sycamore cross-connected network by 1-λ, 2-λ and 3-λ trunks.)
Sycamore opaque LSA features (modeled as data structures below):
- Switch Capability LSA
  - Switch IP address
  - Minimum grooming unit supported by the node
  - Identified user groups that have reserved and available grooming resources
  - User-group resources that are preemptable
  - Software revision
- Trunk Group LSA
  - Administrative cost of the trunk group
  - Protection strategy for individual trunks within the trunk group
  - User-group assignment of the trunk group
  - Conduit through which the trunks run
  - Available bandwidth of the trunk group
  - Trunks allocated for preemption
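To make the advertised information concrete, the two opaque LSA types above can be modeled as plain data structures. Field names paraphrase the bullet lists and all types are assumptions; this is not Sycamore's actual LSA encoding:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SwitchCapabilityLSA:
    """Per-node capabilities flooded as an opaque LSA (illustrative model)."""
    switch_ip: str
    min_grooming_unit: str                 # e.g. "STM-1" (assumed unit naming)
    user_groups: List[str] = field(default_factory=list)       # groups with resources
    preemptable_user_groups: List[str] = field(default_factory=list)
    software_revision: str = ""

@dataclass
class TrunkGroupLSA:
    """Per-trunk-group state flooded as an opaque LSA (illustrative model)."""
    admin_cost: int
    protection_strategy: str               # protection of individual trunks in the group
    user_group: str                        # user-group assignment of the trunk group
    conduit_id: str                        # conduit through which the trunks run
    available_bandwidth_mbps: float
    trunks_allocated_for_preemption: int = 0
```

Flooding conduit identity alongside cost and bandwidth is what lets path computation route diversely: two trunks in the same conduit share fate, however different their costs.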
Slide 21: Dynamic Provisioning for λ Trunks
- TSR composite links: bundling of STM-16 links
  - A composite link is presented as a single PPP connection to IP and MPLS.
  - IP traffic is load-balanced over member links based on a hash function (see the sketch below).
  - Traffic on failed links is rerouted over surviving member links in under 45 ms, which may be faster than restoration at the optical level.
- Decoupling of the IP routing topology (software/control plane) from router throughput (hardware/data plane).
  - Relevant to IP/WDM backbone routers: the number of line cards scales with the number of λ × the number of fibres.
- λ dynamic provisioning for composite-link capacity upgrading
  - pre-provisioned transport network: a capacity pool
  - standard or diversely routed additional links (packet-ordering preservation)
  - needs signaling between the router & the optical cross-connect.
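A minimal sketch of the hash-based member selection described above (the hash choice and link naming are assumptions; a real TSR hashes in the forwarding hardware): the same 5-tuple always maps to the same member link, which preserves packet ordering per flow, and a failure simply re-partitions the hash space over the survivors:

```python
import zlib

def pick_member(five_tuple, member_links):
    """Hash a flow's 5-tuple onto one member link of the composite link."""
    key = "|".join(str(f) for f in five_tuple).encode()
    return member_links[zlib.crc32(key) % len(member_links)]

members = ["stm16-0", "stm16-1", "stm16-2", "stm16-3"]
flow = ("10.0.0.1", "10.0.1.9", 6, 32768, 80)  # src, dst, proto, sport, dport
print(pick_member(flow, members))              # stable while membership is stable
members.remove("stm16-1")                      # simulated member-link failure
print(pick_member(flow, members))              # survivors re-partition the hash space
```

The same property explains the capacity-upgrading bullet: adding a newly provisioned λ as an extra member changes only the modulus, so the bundle grows without touching the IP or MPLS view of the link.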
Slide 22: O-UNI Signaling
(Diagram: a UNI client (UNI-C) attached over the UNI to an Optical Network Element (UNI-N), with internal connectivity across the optical network; ND = UNI neighbor discovery, ONE = Optical Network Element.)
- UNI signaling
  - OIF draft; signaling protocols: RSVP-TE or CR-LDP
  - Avici & Sycamore first release scheduled for next June
  - VTHD experiment: Avici/FTR&D/Sycamore partnership
- UNI functions
  - Connection creation, deletion, status enquiry
  - Modification of connection properties: end points, service bandwidth, protection/restoration requirements (a sketch of the request parameters follows below)
  - Neighbor discovery: bootstrap the IP control channel, establish the basic configuration, discover port connectivity
  - Address resolution: registration, query; client address types: IPv4, IPv6, ITU-T E.164, ANSI DCC ATM End System Address, NSAP
  - COPS usage over the UNI for outsourcing policy provisioning within the optical domain
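As an illustration only, the connection-creation parameters listed above can be gathered into a plain structure. This sketches the information a UNI-C would hand to the UNI-N, not the actual OIF message encoding; all field names and the example values are assumptions:

```python
from dataclasses import dataclass

@dataclass
class UNIConnectionRequest:
    """Parameters of a UNI connection-creation request (illustrative model)."""
    src_endpoint: str       # client address: IPv4, IPv6, E.164, DCC AESA or NSAP
    dst_endpoint: str
    bandwidth_mbps: float   # requested service bandwidth
    protection: str         # protection/restoration requirement

# e.g. an unprotected STM-16-sized connection between two IPv4 endpoints:
req = UNIConnectionRequest("192.0.2.1", "192.0.2.9",
                           bandwidth_mbps=2488.32, protection="unprotected")
print(req)
```

Deletion, status enquiry and property modification would then operate on an identifier returned for such a request, which is what keeps the optical network's internal topology hidden from the client.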
Slide 23: Conclusion
- Where do we stand now?
  - French partnership kernel.
  - IP network deployment completed.
  - Partner usage and related applications ramping up.
  - Sycamore platform lab tests.
- What's to come?
  - VPN service provisioning (first IPsec-based, then MPLS-based) to enable secure usage from "regular" hosts.
  - QoS-capable test-bed.
  - IPv6 service provisioning.
  - New applications/services supported within the RNRT/RNTL or IST frameworks?