Mar 09 2016

Data Center Optimization Leads to Better Business

Organizations that optimize their data centers to improve performance and become more energy efficient also save money and space.

The National Fish and Wildlife Foundation awards hundreds of grants each year for conservation projects designed to protect and restore wildlife habitats in forests, wetlands and waterways.

But in 2015, to ensure that NFWF would continue to run smoothly, the nonprofit organization had to update something much closer to home: its own data center.

NFWF upgraded to a converged infrastructure. The new setup allowed the organization to virtualize all of its servers and improve the delivery and quality of its IT services, which include allowing organizations to apply for grants online.

“As we grow and the number of grants increases, we have to make sure we have high availability,” says Dave Radomsky, the foundation’s CIO. “We built out the new data center to make sure we’re solid for the next four to five years.”

As IT infrastructure ages or new business requirements arise, companies and nonprofits invest in new technology to improve their data center operations, support growth, boost productivity and ensure business continuity.

Some, like NFWF, modernize data centers with converged infrastructure, where server, storage and networking components work as an integrated unit; others choose hyperconverged systems, which combine the separate elements into a single appliance. These unified systems support virtualization and consolidation, are simpler to deploy and manage, and are more energy-efficient.

“These products are appealing because they have a much more efficient footprint and allow organizations to scale up without requiring a massive data center,” says analyst Amy DeCarlo of Current Analysis. “It’s the quest for greater efficiency, improving performance and driving down costs.”

Alternatively, organizations that don’t want to do everything in-house can fine-tune their data center operations by using colocation facilities or offloading some or all of their IT needs to managed service providers.

Legacy Data Center Infrastructure Wears Out Its Welcome

NFWF, a nonprofit organization headquartered in Washington, D.C., collaborates with the public and private sectors to fund conservation projects that total hundreds of millions of dollars a year.

In 2013, the foundation took charge of a five-year, $2.5 billion fund to benefit Gulf Coast natural resources, part of a settlement among the federal government, British Petroleum and Transocean over the 2010 Deepwater Horizon oil spill.

Last year, the organization’s IT infrastructure began to strain under the increased workload. NFWF’s 125-plus employees, who depend on a grants management system, a document management system, email and financial software, noticed that applications ran slower and files took longer to access.

“Data was growing exponentially,” Radomsky says. “As the number of grants increases, the amount of documentation needed to support those grants grows as well. Emails, maps, videos and other large files must be scanned in and put into our system.”

The culprits were aging servers, rising bandwidth usage that taxed the network, and a storage system nearing 85 percent of capacity.

A Data Center That's More Efficient

NFWF needed more storage to comply with data retention requirements and to virtualize its remaining servers; at the time, only 60 percent of them were virtualized.

To solve these issues, Radomsky purchased a FlexPod converged infrastructure solution: a prevalidated, preintegrated, scalable stack that includes five Cisco Systems UCS B-Series blade servers, a 50-terabyte NetApp FAS2552 storage area network (SAN), VMware virtualization software and Cisco Nexus 5000 Series switches. The new switches upgraded the core network speed from 4 gigabits per second to 10Gbps.

The three-person IT department installed the new equipment last spring with the help of CDW engineers. The new technology occupies just one rack, compared with the two and a half racks of the previous setup, and it reduces power and cooling needs, Radomsky says.

The new data center has improved productivity. Users see faster application performance and greater stability, while the IT staff finds the streamlined infrastructure much easier to manage, Radomsky says.

Because the FlexPod equipment is tightly integrated, the IT staff centrally manages the technology. And if IT administrators need help, they call one phone number, and Cisco, NetApp and VMware engineers collaborate to resolve issues, says Pablo Blasi, NFWF’s senior network and systems engineer.

The new infrastructure also saves IT staff time. IT administrators can deploy a new virtual server within five minutes and have it ready for production in 30 minutes. Before, it took four to six hours to configure a new physical server, Blasi says.
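
That kind of turnaround typically comes from deploying virtual machines from prebuilt templates rather than racking and configuring hardware. As a rough illustration of the pattern (not NFWF’s actual scripts), the Python sketch below uses VMware’s pyVmomi SDK to clone a new virtual server from a template; the vCenter address, credentials, template and cluster names are placeholders.

import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def find_object(content, vimtype, name):
    # Walk the vCenter inventory and return the first object of the given
    # type (e.g., vim.VirtualMachine) whose name matches.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)


def clone_vm_from_template(si, template_name, new_vm_name, cluster_name):
    # Clone a template into a new, powered-on virtual server.
    content = si.RetrieveContent()
    template = find_object(content, vim.VirtualMachine, template_name)
    cluster = find_object(content, vim.ClusterComputeResource, cluster_name)
    datacenter = content.rootFolder.childEntity[0]  # assumes a single datacenter

    relocate = vim.vm.RelocateSpec(pool=cluster.resourcePool)
    spec = vim.vm.CloneSpec(location=relocate, powerOn=True)
    return template.CloneVM_Task(folder=datacenter.vmFolder,
                                 name=new_vm_name, spec=spec)


if __name__ == "__main__":
    context = ssl._create_unverified_context()  # lab use only; validate certificates in production
    si = SmartConnect(host="vcenter.example.org", user="administrator",
                      pwd="password", sslContext=context)
    try:
        task = clone_vm_from_template(si, "win2012-template",
                                      "grants-app-01", "Production")
        print("Clone task submitted:", task.info.key)
    finally:
        Disconnect(si)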

In addition, with all the servers virtualized, the IT staff can patch a physical server by simply moving its virtual machines to another one, without downtime, Radomsky says.
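
The same SDK can drive that patching workflow: live-migrate (vMotion) every running virtual machine off a host, patch and reboot the host, then let the VMs move back. The sketch below is again a hedged illustration rather than NFWF’s tooling; it assumes a connected ServiceInstance (si) like the one in the previous example, and the host names are placeholders.

from pyVmomi import vim


def evacuate_host(si, source_host_name, target_host_name):
    # Live-migrate every powered-on VM from the source host to the target host
    # so the source can be patched and rebooted without downtime.
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    hosts = {host.name: host for host in view.view}
    source = hosts[source_host_name]
    target = hosts[target_host_name]

    tasks = []
    for vm in source.vm:
        if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
            tasks.append(vm.MigrateVM_Task(
                host=target,
                priority=vim.VirtualMachine.MovePriority.defaultPriority))
    # Once these migration tasks complete, the source host can enter
    # maintenance mode, be patched and rebooted, and then rejoin the cluster.
    return tasks


# Example call with placeholder host names:
# evacuate_host(si, "esxi-01.example.org", "esxi-02.example.org")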

IT That Is Truly Disaster Ready

The NFWF IT staff also improved disaster recovery as part of the data center upgrade. But rather than build a secondary data center, the foundation saves money by using the cloud.

The foundation’s most important virtual servers are replicated to a cloud service provider. If the main data center goes down, the provider can bring up the most critical applications within four to eight hours, Radomsky says.

That’s a huge improvement from the organization’s previous disaster recovery strategy, which was to make backups to tape and send them offsite.

Midwest Family Mutual Insurance in Chariton, Iowa, also needed to bolster its disaster recovery strategy by establishing a secondary data center, and it turned to a new colocation provider to house the equipment.

The company, which offers personal and commercial insurance, previously contracted with a colocation provider for its one and only data center in Minnesota. For disaster recovery, the firm’s IT staff had backed up applications and data to a second SAN in another location, but if the data center went down, it would take several weeks to bring services back up, says Benjamin Harwood, a systems administrator for the firm.

Ninety-five percent of the company’s 110 employees telecommute, and they need access to the Voice over IP phone system, email and a Citrix virtual desktop to do their work. “We’re a business that’s growing and expanding, and having the company down for several weeks is not an option,” Harwood says.


To better ensure uptime, Midwest Family’s executives wanted two data centers: one near its headquarters in Iowa and another near a company office in Minnesota with a fast Internet connection between them.

It was a requirement the firm’s original colocation provider couldn’t meet, so the IT staff found a new provider that fit the bill and spent most of 2015 migrating the main data center and setting up a second data center at the new facilities.

The firm’s existing hardware, including two NetApp FAS8020 SANs, was still current, so the IT department needed to buy new servers and networking equipment only for the second data center.

They purchased five Cisco UCS B-Series blades and Cisco networking equipment that included Cisco Nexus 9000 Series switches.

“We have Cisco, NetApp, Citrix and VMware in both data centers, so they are almost identical,” Harwood says.

A Beneficial Investment in Backup

The firm’s IT department designed and installed the equipment with the assistance of CDW’s engineers. Critical applications, including phone services and email, run in an active-active state: if the main data center in Eden Prairie, Minn., goes down, they immediately fail over to the secondary data center in Des Moines, Iowa, with no downtime. Everything else runs active-standby, meaning it would take about four hours to bring those services back up, Harwood says.
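
For the active-standby tier, recovery starts with detecting that the primary site is actually down. The Python sketch below is a generic illustration of that kind of health check, not Midwest Family’s actual monitoring; the endpoint and the promote_standby() hook are hypothetical stand-ins for whatever DNS or orchestration change redirects traffic to the Des Moines site.

import socket
import time

PRIMARY = ("app.primary-site.example.com", 443)  # placeholder endpoint
FAILURE_THRESHOLD = 3                            # consecutive failed probes before failing over
PROBE_INTERVAL_SECONDS = 30


def primary_is_reachable(endpoint, timeout=5):
    # Return True if a TCP connection to the primary site succeeds.
    try:
        with socket.create_connection(endpoint, timeout=timeout):
            return True
    except OSError:
        return False


def promote_standby():
    # Hypothetical hook: repoint DNS or start services at the secondary site.
    print("Primary unreachable; beginning active-standby failover...")


def monitor():
    failures = 0
    while True:
        if primary_is_reachable(PRIMARY):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                promote_standby()
                break
        time.sleep(PROBE_INTERVAL_SECONDS)


if __name__ == "__main__":
    monitor()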

Overall, the data center optimization project cost Midwest Family about $750,000, but the investment has proved worthwhile, Harwood says.

“It’s a large investment, but the benefits are even larger,” he says. “We can survive a catastrophic failure, and we are not going to lose any data.”

Why the Hyperconverged Move Makes Sense

Sevier County Bank in Tennessee recently upgraded its data center with hyperconverged infrastructure, and the decision is paying off.

The bank’s core customer account processing systems, which run on mainframes, were never at risk. But last year, the bank’s data center, which runs email, file servers and check imaging, was showing its age. Its 15 servers were five to seven years old and ran Windows Server 2003, which Microsoft planned to stop supporting. The bank also didn’t have a backup data center; it relied on tape backups.

The bank, which wanted to adopt virtualization and build a secondary data center, considered a more traditional approach with separate servers, storage and networking equipment, but the $180,000 price tag was too steep.

Josh Carr, Sevier County Bank’s IT manager and systems administrator, discovered that Scale Computing’s HC3 appliance provided the functionality the bank needed at one-third the cost.

Hyperconverged Infrastructure Offers the Total Package

The HC3 appliance combines server, storage and the open-source KVM hypervisor for virtualization. Carr purchased a three-node Scale HC2000 cluster for the primary site and added a three-node Scale HC1000 cluster for the secondary site. He also installed two network switches at each site for redundancy.

Carr says the hyperconverged infrastructure met the bank’s requirements: ease of use, scalability and high availability. “The all-in-one package is a great option for small to medium-sized businesses,” he says. “It doesn’t take a lot of technical know-how and is extremely easy to use.”

Of all the data center options the bank considered, the hyperconverged equipment was the most efficient and cost-effective, adds bank CEO Matthew Converse.

“Banks are among the least adventurous organizations because of security, confidentiality and redundancy needs,” Converse says. “But going the route of new technology, being a little adventurous, and seeing how it might fit our industry paid great dividends.”
