
10G and beyond

14 March 2008

Data centres face a number of challenges to cope with growth in demand, globalisation, power costs, resilience and technological advances. Francis Daniels provides a technical briefing on the key drivers in this sector

THE STATISTICS AROUND DATA CENTRES ARE AWESOME. In the US, data centres accounted for 60bn kWh of electricity consumption in 2006, some 1.5 per cent of total US power usage. In the UK, BT, which operates 55 data centres globally, is the country's biggest user of electricity, consuming 0.7 per cent of total grid output. New data centre growth here is rocketing at 35 per cent CAGR, with 11 per cent growth expected in the Asia Pacific region.

The centralisation of operations is driving the storage industry. And storage, security and access to information are driving centralisation. Data centre challenges include:

● Increasing power and cooling requirements
● The consolidation of multiple smaller data centres into large data centres. Typical is Hewlett-Packard’s move of 81 centres into six strategic sites
● The virtualisation of servers and storage
● A greater focus on the Uptime Institute’s Tier level ratings to meet service level agreements
● The life cycle of cabling (typically 20 years) vs that of electronics (typically up to 24 months for equipment hardware) poses redundancy issues
● Automation of processes to maximise uptime and remove wherever possible the potential for human error.

It is well known that the deployment of 1U and blade servers increases the server count per rack, which leads directly to denser, higher-power racks and gives rise to power and cooling issues. Added to this, racks now consume double the power they did in 2006, and data centre power usage is increasing by 8 per cent a year. More than 50 per cent of cooling capacity is wasted due to poor air flow. Power and cooling costs now equal the cost of new servers and are still rising. Cable plant is therefore critical in the design of a good data centre – network infrastructure placement can have a positive impact on the amount of energy these centres consume.

Energy consumption
Data centre energy consumption has doubled in the last five years and will double again in the next five. Initiatives are under way concerning energy efficiency in data centres, including the Energy Efficient Ethernet work within IEEE 802.3 addressing the power used by servers during idle time.
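A doubling every five years can be translated into an annual growth rate. As a rough sketch (the 2006 US baseline of 60bn kWh is from the figures above; the projection itself is an illustrative assumption):

```python
# Sketch: implied annual growth when energy use doubles every five years.
# The 60bn kWh 2006 US baseline comes from the article; the projection
# logic is an illustrative assumption, not a published forecast.

baseline_kwh = 60e9            # US data centre consumption, 2006
doubling_period_years = 5

# Doubling every 5 years implies a compound annual growth rate of 2^(1/5) - 1.
cagr = 2 ** (1 / doubling_period_years) - 1
print(f"Implied annual growth: {cagr:.1%}")   # ≈ 14.9%

# Projected consumption five years on, under the same assumption.
projected = baseline_kwh * (1 + cagr) ** doubling_period_years
print(f"Projected consumption: {projected / 1e9:.0f}bn kWh")  # ≈ 120bn kWh
```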

Cable placement and thermal management are linked. Spaghetti-like cable arrangements under floor tiling can never be managed well. For every 10°C above 21°C, the mean time to failure of active equipment is reduced by 50 per cent. The average large data centre requires 2.7 times more cooling than necessary due to poor airflow management, and more than 50 per cent of cooling capability is wasted as a result. Some problems can be attributed to the placing of underfloor cabling: in the hot aisles, for example, it is vital not to impede the flow of cold air to equipment in the racks; cables should be run in proper cable trays without exceeding tray capacity; and copper and fibre need to be run separately.
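The halving rule above is easy to express numerically. A minimal sketch, where the 100,000-hour baseline MTTF is a hypothetical figure chosen purely for illustration:

```python
# Sketch: the rule of thumb that mean time to failure (MTTF) of active
# equipment halves for every 10°C above 21°C. The baseline MTTF value
# is a hypothetical figure for illustration only.

def mttf_at(temp_c: float, baseline_hours: float = 100_000.0) -> float:
    """Estimate MTTF at a given inlet temperature, halving per 10°C over 21°C."""
    excess = max(0.0, temp_c - 21.0)
    return baseline_hours * 0.5 ** (excess / 10.0)

for t in (21, 31, 41):
    print(f"{t}°C: MTTF ≈ {mttf_at(t):,.0f} hours")
# 21°C: 100,000 h; 31°C: 50,000 h; 41°C: 25,000 h
```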

Should cabling be run underfloor or overhead? Underfloor offers a clean appearance and improved security, but it may impede cold air flow and is difficult to maintain and upgrade. The overhead approach is easier to maintain and upgrade and has no effect on underfloor cooling, but as the cables are exposed, security and appearance become issues.

A design with switches centralised to simplify administration has been proposed. The advantages are simpler administration and security, higher throughput to access switches – while the disadvantages are seen as less flexibility and scalability, and difficulty in upgrading capability.

The industry is seeing more modular switches distributed within zones in data centres, with all the associated attributes of scalability, flexibility, modularity and efficient use of cabling following TIA 942 recommendations. The disadvantage is that administration and security of localised switches are more difficult.

Should a data centre manager opt for copper or fibre? The cost is apparently roughly the same: 24 Cat 6A copper cables cost about the same as a 48-fibre indoor cable. 10G fibre electronics have been available since the late 1990s, while 10G copper electronics are scheduled for June 2008 standardisation.

10G copper electronic interfaces will be available in 2008 at 40 per cent less cost than fibre interfaces. Indeed, the word in technical seminars and analyst briefings is that investing in Cat 6A cabling makes sound financial sense. Companies including Cisco, CommScope, Solarflare, Foundry Networks, Teranetics, Intel and Chelsio are all busy in this area.

Moving to 10G Base-T sees 40 times more processing required than with 100 Base-T. There are issues with (electronic) noise from cables – all of which has been anticipated and can be accommodated through design engineering.

A new approach is to prepare pre-terminated copper cables made off-site to length, brought onto site, snapped into place and then operational. Operators are in and out of the data centre quickly with minimum installation time.

● OM1/OM2 cabling costs less but does not support 10G over practical distances
● SM cabling itself costs less, but its 1G and 10G optics carry a big premium
● LRM and LX4 optics are more costly, with LRM limited to distances of 220m
● OM3 plus 10GBase-S is the most economical choice for 10G data centres.
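The trade-offs in the list above come down to reach versus optics cost. As a sketch, using the standard 10GBase-S reach figures for OM1/OM2/OM3 and LRM; the cost notes are illustrative assumptions, not vendor pricing:

```python
# Sketch: fibre media options for 10G links. Reach figures are the
# standard values for these interfaces; cost notes are assumptions.

FIBRE_OPTIONS = {
    # medium: (max 10G reach in metres, relative cost note)
    "OM1 + 10GBase-S": (33,     "low cable cost, impractical 10G reach"),
    "OM2 + 10GBase-S": (82,     "low cable cost, short 10G reach"),
    "OM3 + 10GBase-S": (300,    "most economical for 10G data centres"),
    "OM1/OM2 + LRM":   (220,    "costlier LRM optics, 220m limit"),
    "SM + 10GBase-L":  (10_000, "cheap cable, expensive optics"),
}

def options_for(distance_m: int) -> list[str]:
    """Return the media that reach the required link distance."""
    return [m for m, (reach, _) in FIBRE_OPTIONS.items() if reach >= distance_m]

print(options_for(150))
# ['OM3 + 10GBase-S', 'OM1/OM2 + LRM', 'SM + 10GBase-L']
```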

Of the trends in data centres, server virtualisation is attracting a lot of attention. The driving force behind the trend is the fact that the average usage rate of stand-alone servers is about 20 per cent. Virtualisation allows multiple applications to run on one server and thus provides better utilisation of the servers in the centre. The key point is that servers must be able to handle the increased traffic loads – this is where 10G cabling can assist.
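The consolidation arithmetic behind virtualisation is straightforward. A minimal sketch, where the 20 per cent stand-alone utilisation figure comes from the paragraph above and the 80 per cent target ceiling is a hypothetical assumption:

```python
# Sketch: server consolidation arithmetic. The 20% stand-alone utilisation
# is from the article; the 80% virtualised ceiling is a hypothetical figure.

standalone_utilisation = 0.20   # average usage rate of a dedicated server
target_utilisation = 0.80       # assumed safe ceiling for a virtualised host

# Typical workloads one physical host can absorb before hitting the ceiling.
consolidation_ratio = target_utilisation / standalone_utilisation
print(f"~{consolidation_ratio:.0f} workloads per physical server")  # ~4

# So 100 stand-alone servers collapse onto:
hosts_needed = 100 / consolidation_ratio
print(f"{hosts_needed:.0f} virtualised hosts")  # 25
```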

A pre-terminated fibre array cabling system provides an easy upgrade from duplex to parallel connectivity, and overhead fibre containment is better.

To help meet today's IT challenges, real-time information management in the data centre can prove a real boon. One company among the many populating this technology space is CommScope, which collects data for the configuration management database at OSI Layer 1.

Real-time infrastructure management in the data centre provides real-time capacity management, automated circuit provisioning, a work order system, critical disconnect alarms, rack visualisations, support for network and storage area network environments, and integrated server provisioning. The system can be, and is, applied to data centres on all continents. Data centre designs now and in future have to allow for operational reliability; quick changes, including additions and rapid expansion; online monitoring and status; life cycle management; customer access; physical security; and the rapid detection, identification and resolution of faults.

More info

● Francis Daniels is a freelance writer
