Applications Rule, Zettabytes Are Coming – BizTech Quick Take
Applications Will Run the World
There’s a new sheriff in town, and its name is application. Applications have been buzzed about in the cloud computing world for much of 2011, as IT staff try to figure out how to build application-centric infrastructures to support their organizations.
The Wisdom of Clouds blogger James Urquhart attended a conference recently and realized that the true power of cloud computing is that it shifts monitoring and management tasks to the application rather than the underlying infrastructure.
In a post from June 30, Urquhart breaks down three choices IT staff have for building an application-centric cloud infrastructure. But he also makes clear that moving to an application-centric model doesn’t mean that data center management goes away:
In addition to application-centric operations, someone has to deliver the service that the application is deployed to (or the application itself, in the case of software as a service) and the infrastructure that supports the service.
Data center operations and managed services do not go away. Rather, they become the responsibility of the cloud provider (public or private), not the end user.
Ultimately, I think this new separation of concerns in operations is at the heart of the difficult cultural change that many IT organizations face. However, the result of that change is the ability for business units to focus on business functionality.
Read the full post on Urquhart’s The Wisdom of Clouds blog.
The Zettabyte Era Is Near
Consumption of rich media is set to move the computing world into a new era by 2015: the Zettabyte era.
What’s a zettabyte? A zettabyte is equal to 1,000 exabytes, but Cisco breaks it down further into real-world terms, stating that a zettabyte “has the capacity to hold over 36,000 years’ worth of HD-quality video … or stream the entire Netflix catalog more than 3,000 times. A zettabyte is equivalent to about 250 billion DVDs.”
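For a quick sanity check on that last figure: a zettabyte is 10^21 bytes, and at roughly 4 GB per DVD, 10^21 ÷ (4 × 10^9) works out to about 250 billion discs, right in line with Cisco's math.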
To make sense of what a sextillion bytes of data means for IT, Cisco has put together a detailed infographic explaining zettabytes. Check it out in this post on Cisco’s blog.
Stay on Top of Unruly Mailboxes
Keeping track of overstuffed mailboxes in an organization can be an unpleasant but necessary task for IT staff. Windows PowerShell, thankfully, makes the task easy with the Get-MailboxStatistics cmdlet and its TotalItemSize property.
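As a quick sketch of the idea, run from the Exchange Management Shell (the database name below is a placeholder), a pipeline like this surfaces the ten largest mailboxes:

    # List the ten largest mailboxes in a database, biggest first
    Get-MailboxStatistics -Database "Mailbox Database 01" |
        Sort-Object TotalItemSize -Descending |
        Select-Object DisplayName, ItemCount, TotalItemSize -First 10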
Exchange Server Pro has put together a simple how-to on pulling this data for Exchange 2010 organizations, which should come in handy when IT staff have to crack the whip on e-mail hoarders.
Read the full guide on how to calculate mailbox sizes in Exchange 2010 on Exchange Server Pro.
Is Your DR Plan Aligned with the Business?
Planning for disasters is challenging because you never know when or how one will strike. Most people know that having a disaster plan is essential for IT, but is your disaster plan really more of an IT asset protection plan?
In a June 29 blog post, John Parkinson of CIO Insight’s Biz-Tech 3.0 blog reflects on the numerous disaster recovery plans he’s come across, arguing that far too many of them are not aligned with the economics of the business.
A good disaster recovery plan includes a business impact analysis (BIA), accounts for varying levels of disaster, maps the dependencies between infrastructure and applications, and reflects the current operational environment.
"Fundamentally … these were IT asset protection plans — ways for IT to survive, even if the business doesn't. Which clearly makes no sense," writes Parkinson.
Read Parkinson’s post about disaster recovery on the Biz-Tech 3.0 blog.
Don’t Expose Your IT Secrets to Google
Google has done a bang-up job of making itself the world’s No. 1 search engine. And most good IT staffs make sure to keep sensitive or test information from escaping into the wild. But beware: forgetting to flag a single piece of content to be excluded from search engine indexing can lead to disaster.
David Schwartzberg of Sophos’ Naked Security blog did some deep diving in Google waters and found that over 100 private PGP encryption keys were indexed and available on the search engine.
This means those organizations had taken the wise step of encrypting their data but had left the keys to unlock that encryption out in the open. It’s the cyber equivalent of installing a fancy security system in your car and then leaving the key in the door.
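For the record, the no-indexing flag is usually a robots.txt rule or a noindex directive. Here is a minimal robots.txt sketch (the directory name is hypothetical) that asks compliant crawlers to skip a directory, though truly sensitive files such as private keys shouldn’t be reachable from the web at all:

    # Ask well-behaved crawlers, Googlebot included, to stay out of this directory
    User-agent: *
    Disallow: /private-keys/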
Read about Schwartzberg’s thorough analysis on the dangers of exposed keys on Google in the Naked Security blog.
Network Breaches Are Real
When it comes to IT security, some organizations lull themselves into an “it’ll never happen to us” mentality. But that doesn’t play out in the real world.
A June 2011 survey conducted by the Ponemon Institute asked respondents, “How many times has your company’s network security been successfully breached over the past 12 months?” and found that 80 percent reported at least one breach.
In an assessment of this survey, Michael Thelander, product marketing manager for Tripwire, implored organizations to harden their security, monitor their infrastructures through automation and make sure their networks are more secure than their neighbors’ networks.
Read more of Thelander’s advice and thoughts on the Ponemon survey on the Tripwire blog.
The Acronym Soup of Failure Metrics
For IT staff, is there a more dreaded phrase than “The ____ is down”?
Whether it’s the phone system, the server or the website, IT staff are the emergency team called upon to bring systems back up when they’re down.
Given that system failures are no one’s idea of a good time, it’s unfortunate that the terminology for the metrics of failure has become convoluted and confusing.
IT consultant Stephen Foskett attempts to clear up the ambiguity among the phrases “mean time to failure” (MTTF), “mean time between failures” (MTBF) and “mean time to repair” (MTTR) so IT staff can figure out how best to measure failures.
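As a rough illustration of one common convention (terminology really does vary, which is Foskett’s point), here is a PowerShell sketch with a made-up outage log showing how the two averages fall out:

    # Hypothetical outage log for one system: when each failure began and ended
    $outages = @(
        @{ Start = Get-Date "2011-01-10 09:00"; End = Get-Date "2011-01-10 10:30" },
        @{ Start = Get-Date "2011-03-02 14:00"; End = Get-Date "2011-03-02 14:45" },
        @{ Start = Get-Date "2011-06-20 01:00"; End = Get-Date "2011-06-20 03:00" }
    )
    $window   = (Get-Date "2011-07-01") - (Get-Date "2011-01-01")   # observation period
    $downtime = ($outages | ForEach-Object { ($_.End - $_.Start).TotalHours } |
        Measure-Object -Sum).Sum
    $mttr = $downtime / $outages.Count                          # mean hours to repair a failure
    $mtbf = ($window.TotalHours - $downtime) / $outages.Count   # mean uptime hours between failures
    "MTTR: {0:N1} hours; MTBF: {1:N1} hours" -f $mttr, $mtbf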
Read Foskett’s full article on making sense of failures on his blog.
Protect Your Website from SQL Injections
Dynamic websites have transformed the way we build and consume information on the Internet, but they also expose sites to vulnerabilities that weren’t present in the earlier, static days of the web.
Because dynamic sites use databases to store and serve data when it’s requested, hackers attack unprotected sites with a technique known as SQL injection, which abuses applications that build database queries from unchecked user input.
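The classic defense is to bind user input as a query parameter rather than splicing it into the SQL string. Here’s a minimal sketch using .NET’s SqlClient from PowerShell; the connection string, table and column names are all hypothetical:

    # Hostile input that would break out of a naively concatenated query
    $userInput = "x' OR '1'='1"
    $conn = New-Object System.Data.SqlClient.SqlConnection "Server=dbserver;Database=Shop;Integrated Security=True"
    $cmd = $conn.CreateCommand()
    # Binding the input as a parameter means the database treats it strictly
    # as data, never as SQL to execute
    $cmd.CommandText = "SELECT Name, Price FROM Products WHERE Name = @name"
    [void]$cmd.Parameters.AddWithValue("@name", $userInput)
    $conn.Open()
    $reader = $cmd.ExecuteReader()
    while ($reader.Read()) { "{0}: {1}" -f $reader["Name"], $reader["Price"] }
    $conn.Close()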
Mike Chapple explains how SQL injections work and how businesses can protect their sites from SQL injection attacks in this BizTech article.
Find great content from the bloggers listed here and other IT blogs by checking out our 50 Must-Read IT Blogs.