The First Steps Toward Data Center Optimization

IT needs to take these steps before a company can benefit from a virtualized infrastructure.

A software-defined data center (SDDC), in which infrastructure is virtualized and delivered as a service, can provide numerous benefits. The availability of pooled server, storage and network hardware, for instance, reduces the need for specialized components and servers in favor of affordable, off-the-shelf hardware that is easier to maintain.

Once IT professionals stop focusing on wiring things together and begin using software to manage operations, they can enjoy the benefits of greater efficiency, enhanced reliability and lower costs. In addition to the virtualization of infrastructure such as servers, storage and networking, data centers can implement orchestration and automation solutions to automate routine tasks and reduce the burden on IT staff.

A Practical Approach to Data Center Optimization

There is a right way and a wrong way to begin the journey toward data center optimization via virtualization. The wrong way is to sit back and let events — such as system overloads, equipment failures and vendor updates — dictate decisions. The right way to move forward is with proactive planning.

A growing number of organizations are recognizing the importance of virtualization. A recent survey by the consulting firm Protiviti found virtualization to be a top priority for IT decision-makers. More than half of the respondents said they are currently undergoing a major IT transformation that will last a year or more. About 64 percent said their IT transformation projects are aimed at simplifying existing systems and reducing costs, while 55 percent cited enabling new functionality as a driving force.

The first step toward data center optimization is to create a strategy that combines best practices, addresses current requirements and anticipates future needs and technology advancements.

A good next step for any data center optimization project is to assess the current state of efficiency. Metrics created with the help of key performance indicators (KPIs) in areas such as equipment mean time between failure, average data center rack utilization and data center floor usage will provide valuable benchmarks for estimating future efficiency gains.
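As an illustrative sketch of how such baseline KPIs might be calculated (all function names and figures here are hypothetical examples, not taken from the article):

```python
# Illustrative baseline KPI calculations for a data center assessment.
# All names and figures are hypothetical examples.

def mtbf_hours(total_operating_hours, failure_count):
    """Mean time between failures for a pool of equipment, in hours."""
    return total_operating_hours / failure_count

def rack_utilization_pct(used_rack_units, total_rack_units):
    """Average rack utilization as a percentage."""
    return 100.0 * used_rack_units / total_rack_units

def floor_usage_pct(occupied_sq_ft, total_sq_ft):
    """Share of data center floor space currently in use."""
    return 100.0 * occupied_sq_ft / total_sq_ft

# Hypothetical baseline: 50 servers running 8,760 hours each, 6 failures
print(mtbf_hours(50 * 8760, 6))        # 73000.0
print(rack_utilization_pct(504, 840))  # 60.0
print(floor_usage_pct(7200, 12000))    # 60.0
```

Recording numbers like these before any changes are made gives the optimization project a concrete baseline against which later efficiency gains can be measured.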

Data center agility is achieved by recognizing, understanding and deploying effective new technologies, such as server and network virtualization as well as cloud and hyperscale computing. KPIs and other metrics designed to reveal the benefits and costs of such technologies should be calculated early in the planning process.

Understanding Risk

Next, any promising data center optimization technology, process or approach must be evaluated not only for its potential benefit to the organization, but also for the level of risk it presents. Before any optimization initiative begins, a risk assessment should be conducted to detect any lurking vulnerabilities or threats that could potentially harm the organization or its IT systems. There are four aspects to risk assessment:

  • Vulnerability: an error or a weakness in the design, implementation or operation of a system
  • Threat: a technical, natural or human source with the potential to exploit a system vulnerability
  • Impact: the consequences that could result if a vulnerability is exploited
  • Probability: the chances of a risk becoming reality
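One common way to combine these aspects is a simple score that multiplies impact by probability for each identified risk, so the assessment can rank what to address first. A minimal sketch (the risk entries and weightings below are hypothetical examples):

```python
# Simple risk-scoring sketch: score = impact x probability.
# The entries and ratings are hypothetical examples.

risks = [
    # (description, impact rated 1-5, probability 0.0-1.0)
    ("Unpatched hypervisor vulnerability", 5, 0.3),
    ("Misconfigured VM resource limits", 2, 0.6),
    ("Single-homed network uplink failure", 4, 0.2),
]

# Rank risks by score, highest first, to prioritize remediation
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for desc, impact, prob in ranked:
    print(f"{desc}: score {impact * prob:.1f}")
```

Even a coarse ranking like this helps ensure that remediation effort goes to the vulnerabilities with the greatest combined impact and likelihood, rather than the ones that happen to be most visible.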

As the number of organizations embracing virtualization grows, so does the number of virtualization options. Comparing and evaluating various hardware and software platforms to ensure a perfect fit and optimal performance requires time and research. KPIs alone won't cut it. An experienced partner can leverage its real-world experience and staff expertise to ensure that all virtualization components meet or exceed their benchmarks and work together seamlessly.

“Where to begin?” That’s the first question many IT leaders ask themselves as they contemplate a data center optimization initiative. Because a massive, sweeping data center overhaul may not be possible due to cost and other limitations, the logical approach is to target areas that promise the most significant impact in terms of enhanced performance, efficiency improvements and cost savings. KPIs can be used to pinpoint the servers, storage devices, networking equipment and other resources that are prime candidates for virtualization.

Learn more about SDDCs by downloading the white paper, "The Modern Data Center for a Digital World."

May 16 2016