
Command To Control

18 September 2008

How can the information generated at different levels in the data centre be connected together to map how the centre is delivering the business processes and enterprise resilience? Frank Booty explores the possibilities

DIFFERENT LANGUAGES EXIST IN DATA CENTRES and the translation between them is imperfect. The physical data centre is the resilient physical structure provided by the FM that forms a secure and complex M&E environment to host electronic platforms. The IT layer, or technology infrastructure, is provided by the Chief Technology Officer and server, storage and network professionals, who translate the logical business into physical technology by engineering and deploying a business-specific processing platform. The applications and business process layer is serviced by the business leaders, Chief Information Officer and application professionals, who understand the business and move the process from brain power to silicon by developing business-specific application code.

But who does what? The business level obtains telemetry at the business process and transaction level, with measurements in transactions and customer interactions per second. The IT layer obtains telemetry at switch, server and processor level, with measurements in megabytes per second, MIPS, FLOPS and gigabytes per allocation. The data centre level obtains telemetry at component and integrated-system level, with measurements in kWh, gpm (gallons/min), cfm (cubic ft/min) and kWh/sq ft/year.

The issue is to take the different pieces of information and tie them together into a map of how the centre is delivering the business processes – the core point being how to build this end-to-end map.
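To make the idea concrete, here is a toy sketch in Python of the join such a map implies. Everything in it is invented for illustration; no vendor tool works exactly this way. The point is that each layer's telemetry must carry tags linking it to the layer above.

```python
# Purely illustrative: all names, tags and figures below are invented.

# Business layer: transactions per second, per business process
business_telemetry = {"online-orders": {"tps": 420}}

# IT layer: per-server throughput (MB/s), tagged with the process served
it_telemetry = [
    {"server": "srv-01", "process": "online-orders", "mb_per_sec": 95},
    {"server": "srv-02", "process": "online-orders", "mb_per_sec": 88},
]

# Facilities layer: power draw (kW) per rack, tagged with hosted servers
facility_telemetry = [
    {"rack": "A1", "servers": ["srv-01", "srv-02"], "kw": 7.5},
]

def map_process_to_power(process):
    """Trace one business process down to the racks, and kW, that host it."""
    servers = {t["server"] for t in it_telemetry if t["process"] == process}
    kw = sum(r["kw"] for r in facility_telemetry if servers & set(r["servers"]))
    return {"process": process,
            "tps": business_telemetry[process]["tps"],
            "servers": sorted(servers),
            "kw": kw}

print(map_process_to_power("online-orders"))
# {'process': 'online-orders', 'tps': 420, 'servers': ['srv-01', 'srv-02'], 'kw': 7.5}
```

In practice the hard part is keeping those linking tags accurate across three groups who, as Dow notes below, speak different languages.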

“Each group speaks a different language, uses different metrics, and has very different motivations and rewards,” says James Dow, Principal – technical systems architecture, CS Technology. “If you look at
resiliency, there will be different views from the business owner, user, CIO, CTO, data centre and enterprise.”

Enterprise resiliency is determined by the service distribution and application design within the data centre portfolio. Data centre resiliency is site specific only; enterprise resiliency is not the sum of the resiliency of individual sites. Command and control brings many benefits, Dow points out, by providing visibility into resiliency configurations across the stack, visibility into efficiency configurations across the stack, and financial efficiency by driving slack out of the system.

There could be light at the end of this particular tunnel. There are 19 vendors currently contributing to command and control, including, in no particular order, IBM, Rittal, APC, Knurr, Emerson, Active Power, Perle and Trane. Dow recommends asking such vendors where their solution provides visibility into the stack and how they help to provide an integrated view.

Standards
There are currently no standards on ‘command and control’, although the Green Grid has picked up the data centre baton in the US and ASHRAE has published a booklet for the data centre space. Issues include who owns the data and where it flows. IT manages the data, and data will flow ever closer to the business. Despite the two disciplines moving closer together, an IT/FM divide remains in the data centre.

Ed Ansett, MD EMEA, EYP Mission Critical Facilities, comments: “A BMS is very expensive, and 95 per cent of what a BMS will do can be done by IT, tying it up with the software side. Up to 40 per cent of data centre managers will see the power bill, and then it becomes a major driver towards energy efficiency and automating controls.” Many of the systems in a data centre environment are bespoke. Ansett continues:
“There are different controls, UPS, generators, etc that all need to be tied together.

Convergence is required. Integration of this arrangement is the panacea but there is a skills shortage in this sector of mission-critical facilities. The situation in developing countries is worse.”

APC by Schneider Electric recently announced plans to integrate the APC and TAC software management platforms with IBM Tivoli Monitoring energy management software. APC and TAC, both part of global energy management company Schneider Electric, offer physical infrastructure and building management systems, respectively, to customers worldwide. The integration, currently in development, will, it is claimed, allow APC’s InfraStruXure Central management platform, TAC’s building management system and IBM Tivoli Monitoring software to share key data points including alarm notifications, historical data and asset tracking information. The combined solution is expected to deliver the visibility, control and automation needed for a more efficient enterprise by enabling optimisation of data centre physical infrastructure and building systems while maintaining IT service levels.
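What such an integration implies in practice is a translation layer between facilities and IT telemetry. The sketch below is a guess at the general shape only; it reflects neither APC’s nor IBM Tivoli’s actual interfaces, and every name, field and threshold in it is hypothetical.

```python
# Hypothetical shapes only -- not APC, TAC or IBM Tivoli data formats.

facilities_alarm = {        # an invented BMS-side alarm record
    "source": "CRAC-07",
    "type": "high_return_temp",
    "value_c": 29.4,
    "threshold_c": 27.0,
}

def to_it_event(alarm):
    """Re-express a facilities alarm as a generic IT-monitoring event."""
    margin = alarm["value_c"] - alarm["threshold_c"]
    return {
        "severity": "critical" if margin >= 5.0 else "warning",
        "resource": alarm["source"],
        "message": f"{alarm['type']}: {alarm['value_c']}C "
                   f"(threshold {alarm['threshold_c']}C)",
    }

print(to_it_event(facilities_alarm))
# {'severity': 'warning', 'resource': 'CRAC-07',
#  'message': 'high_return_temp: 29.4C (threshold 27.0C)'}
```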

According to Kevin Brown, APC’s VP Marketing, Data Centre Solutions, “While successful demos have proved the concept, production quantities are expected in the first half of 2009. With availability being the key
‘must-have’ out of data centres, a major thrust is the elimination of waste through adoption of different design methodologies. Our agreement with IBM Tivoli is one step in that direction.”

Greenhouse gas emissions from data centres have reportedly passed the output of Argentina, and are due to pass those of all airlines by 2020. McKinsey & Company and the Uptime Institute have published a report which pushes a new metric called Corporate Average Data Efficiency (CADE) that combines both IT and facilities costs to monitor energy use. They have called for ‘energy tsar’ positions to be created to manage energy efficiency.

Digital Realty Trust, a key owner and manager of corporate and Internet gateway data centres, announced in May that it had published the industry’s first energy efficiency data about its data centre facilities. Digital Realty Trust has opted to use the PUE metric as the methodology for measuring and reporting energy efficiency across its portfolio of facilities in North America and Europe. It is the first company to provide customers with detailed and actionable information about how data centre operations are meeting their corporate green IT objectives. The company will also publish benchmarks that will support industry-wide initiatives to make data centres greener and reduce expenditure on energy.

APC prefers to push DCiE as “the only metric that is compliant with the metric recommendations of the August 2007 US Environmental Protection Agency report to Congress on data centre efficiency.”

Visual management
In the UK, Sainsbury’s is using Aperture Technologies’ Aperture Vista 500 to improve visibility into the physical elements of its data centres nationwide. Vista helps organisations visually manage the complex physical environment of the data centre and assists with capacity planning and utilisation. Managers can view infrastructure variables, including space, power, cooling, network and storage, to keep data centres operating at peak efficiency. GDCM’s nlyte product provides data centre managers with a web-enabled graphical interface which allows them to locate equipment, understand utilisation levels and address resource management, resulting in savings on power bills from its deployment. Customers include HBOS in the UK, lawyers’ firm Clifford Chance (global) and online publisher ProQuest in the US.

Recently GDCM released Workflow v2.1, an upgrade to nlyte Workflow v1.1. The nlyte Workflow application leverages nlyte’s physical and logical data centre configuration information, allowing organisations to build a customisable and integrated workflow process and methodology. As an example, the nlyte Workflow engine can be used to document, audit and implement an ITIL-based process, covering all groups, personnel, policies, reports, forms, actions and existing support systems.

Data growth
Data centre growth is causing significant strain on IT budgets and will continue to do so as companies add more servers unless there is a key focus on increasing energy efficiency. The McKinsey report stressed the point that “we need to know better what’s coming into and out of our data centres”.

CADE measures total data centre efficiency as a percentage: the higher the result, the more energy efficient the data centre. It takes into consideration average CPU utilisation, total IT load, facility capacity and the total energy consumed by a data centre. However, some elements are still missing, such as the average energy efficiency of servers, storage and networking equipment. It is a metric that combines facilities and IT. Each constituency tends to look only at its own part of the problem, and viewed separately none of it seems too bad; put the parts together, though, and it is glaringly obvious that something should be done.

All of which refocuses attention on the different languages spoken in today’s data centres.

Should the data centre industry be left to get its own house in order rather than relying on external legislators? There will be ‘green’ data centres, but resilience will not be sacrificed to energy efficiency. There are competing challenges in achieving an efficient physical infrastructure: power reliability and availability, TCO power costs, legacy facilities, high-density IT hardware, dynamic loading, cooling and airflow, scalability, and uptime redundancy. FMs may not like this to be said, but the greater the degree of IT control over the physical data centre, the better.

Data Centre Efficiency Metrics
DCiE - Data Centre Infrastructure Efficiency - measures the electrical efficiency of a data centre as the fraction of the total electrical power supplied to the data centre that is ultimately delivered to the IT load. A DCiE rating of 0.5 means that only 50% of the power consumed goes to the IT equipment.

PUE - Power Usage Effectiveness - divides the amount of power entering a data centre by the power used to run the computer infrastructure within it. The result is expressed as a ratio, with overall efficiency improving as the quotient decreases towards 1.
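A minimal worked sketch, with made-up figures, showing how the two metrics relate (DCiE is simply the reciprocal of PUE):

```python
# Hypothetical figures only: 1,000 kW arrives at the utility feed,
# of which 500 kW reaches the IT load.
total_facility_kw = 1000.0
it_load_kw = 500.0

pue = total_facility_kw / it_load_kw    # 2.0 -- improves as it falls towards 1
dcie = it_load_kw / total_facility_kw   # 0.5 -- i.e. 50% of power reaches IT

print(f"PUE = {pue:.2f}, DCiE = {dcie:.0%}")  # PUE = 2.00, DCiE = 50%
```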

SI-POM - Site Infrastructure Power Overhead Multiplier - a ratio defined as data centre power consumption at the utility meter divided by total hardware AC power consumption at the plug for all IT equipment.

SI-POM captures all the conversion losses in the transformers, UPS and PDUs, and in critical power distribution, as well as cooling systems, lights and other minor building loads.
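The same kind of arithmetic, again with invented figures:

```python
# Hypothetical figures: site draw at the utility meter versus the summed
# at-the-plug AC consumption of all IT equipment.
utility_meter_kw = 1200.0
it_plug_loads_kw = [310.0, 295.0, 280.0]   # per-room IT plug loads

si_pom = utility_meter_kw / sum(it_plug_loads_kw)
print(f"SI-POM = {si_pom:.2f}")  # ~1.36: roughly 36% overhead in conversion,
                                 # cooling and other building loads
```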

CADE - Corporate Average Data Efficiency - rates the overall energy efficiency of an organisation's data centres. Introduced in the Uptime Institute/McKinsey report as a single KPI used to compare the energy consumption of one data centre against another. CADE combines measurements of the energy efficiency and utilisation of IT equipment and facilities into a single percentage. A higher CADE indicates a more energy efficient data centre.
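A hedged sketch of how CADE is commonly read from the Uptime Institute/McKinsey report: facility efficiency and utilisation multiplied by IT utilisation, all as fractions. The decomposition and figures below are illustrative assumptions, not the report’s worked numbers:

```python
# Illustrative assumptions only -- not figures from the McKinsey report.
facility_energy_efficiency = 0.50  # share of incoming power reaching IT (the DCiE)
facility_utilisation = 0.80        # actual IT load versus facility capacity
it_utilisation = 0.20              # e.g. average CPU utilisation

cade = facility_energy_efficiency * facility_utilisation * it_utilisation
print(f"CADE = {cade:.0%}")  # 8% -- low IT utilisation drags the single KPI down
```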

● Frank Booty is a freelance writer

