Experience is the best teacher, they say. And if you’ve spent the past decade working in enterprise IT, you know your way around a wide area network or two.
If you ask the world’s sharpest CIOs to name some of the most common mistakes they have come across in IT, you’re bound to hear a few “classics.”
Here are some IT blunders that your organization should avoid.
For far too many IT departments, network management is a reactive discipline. IT workers wait for problems to arise and trouble tickets to be logged, and only then set about isolating and resolving network, server or application issues.
To meet the challenges of today’s IT environment, IT workers must learn to be increasingly proactive by deploying tools that will alert them to potential problems long before there is an actual trouble ticket in hand.
Additionally, consideration should be given to training and process development in the five major categories of network management defined by the International Organization for Standardization: fault, configuration, accounting, performance and security. The unfortunate truth is that many companies have existing tools and technologies for these key areas; they just fail to use them in a proactive manner.
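The shift from reactive to proactive management can be as simple as alerting on trends before users notice them. Below is a minimal sketch of threshold-based alerting; it assumes latency samples are already collected by some existing poller, and the host names and threshold are purely illustrative.

```python
# Minimal sketch of proactive alerting: flag hosts whose average latency
# is creeping up, so staff can investigate before a ticket is ever filed.
# (Host names, sample data and the 200 ms threshold are illustrative.)

def latency_alerts(samples, threshold_ms=200.0):
    """Return (host, avg_latency) pairs exceeding the threshold,
    sorted worst-first."""
    alerts = []
    for host, readings in samples.items():
        avg = sum(readings) / len(readings)
        if avg > threshold_ms:
            alerts.append((host, round(avg, 1)))
    return sorted(alerts, key=lambda a: a[1], reverse=True)

if __name__ == "__main__":
    polled = {
        "core-router": [12.0, 15.0, 11.0],
        "mail-server": [240.0, 310.0, 280.0],  # degrading before users complain
        "web-frontend": [95.0, 102.0, 99.0],
    }
    for host, avg in latency_alerts(polled):
        print(f"ALERT: {host} averaging {avg} ms")
```

In practice this logic lives inside a monitoring platform rather than a hand-rolled script; the point is that the data and thresholds usually already exist and simply go unused.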
A lack of network redundancy and incorrect engineering for high availability is a common (and often costly) issue.
Adequate redundancy requires a careful analysis of costs and risks. Failures in the IT infrastructure and services are inevitable; therefore, the careful addition of fault-tolerant technologies at all layers of the network is critical. Unfortunately, many organizations fail to properly study their own needs and miss the mark when it comes to implementation.
High availability can be very difficult to plan, design, implement and monitor. This is one area of IT where underestimation and poor planning can really leave an organization up a creek without a paddle. All it takes is one natural disaster or a high-profile outage to learn this painful lesson.
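The cost/risk analysis behind redundancy rests on simple arithmetic: a system of independent redundant copies fails only when every copy fails at once. The sketch below works through that math with illustrative availability figures.

```python
# Back-of-the-envelope availability math for redundant components.
# Assumes copies fail independently, which real designs must verify
# (shared power, shared switches, etc. break the assumption).

def parallel_availability(component_availability, copies):
    """Availability of `copies` independent redundant components:
    the system is down only if every copy is down simultaneously."""
    return 1.0 - (1.0 - component_availability) ** copies

if __name__ == "__main__":
    single = 0.99                               # one server: ~3.6 days down/year
    paired = parallel_availability(single, 2)   # two servers: ~53 min down/year
    print(f"one copy:  {single:.4%}")
    print(f"two copies: {paired:.4%}")
```

Two 99%-available servers yield roughly 99.99% availability on paper, which is why skipping this analysis (or ignoring the independence assumption) leaves organizations so exposed.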
In today’s increasingly complex IT world, change is more frequent than ever. A lack of strict and well-regulated change control policies can spell disaster for an organization.
Without these safeguards in place, changes may be implemented too quickly, without proper documentation and rollback procedures. Such changes can disrupt portions of the network or its services and ultimately lead to tremendous (and embarrassing) operational and financial losses when projects fail or fall behind.
A firm policy dictating how change management works and who can authorize a change, along with an easily accessible control system for governing configuration modifications, is the best approach to keeping things in order and functioning correctly.
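The core of such a control system is a gate that refuses any change lacking documentation, a rollback plan and an authorized approver. A minimal sketch of that gate follows; the field names and approver roles are hypothetical, standing in for whatever the organization's policy actually defines.

```python
# Hypothetical change-control gate: a change request is applied only if it
# is documented, reversible, and approved by someone the policy authorizes.
# (Field names and the approver list are illustrative, not a real standard.)

REQUIRED_FIELDS = ("description", "rollback_plan", "approved_by")
AUTHORIZED_APPROVERS = {"network-manager", "cio"}

def can_apply(change):
    """Return (allowed, reason) for a change-request dict."""
    missing = [f for f in REQUIRED_FIELDS if not change.get(f)]
    if missing:
        return False, "missing: " + ", ".join(missing)
    if change["approved_by"] not in AUTHORIZED_APPROVERS:
        return False, "unauthorized approver: " + change["approved_by"]
    return True, "ok"
```

Commercial change-management tools enforce the same idea with workflows and audit trails; what matters is that the check happens before the change, not after the outage.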
Paralysis in the face of emerging technology and innovation can mean swift obsolescence for many organizations. Virtualization technology is an obvious example. For years, many IT workers were frightened by the trend and clung to physical hardware with all of its inefficiencies. This meant that their organizations missed out while nimbler, more adventurous companies reaped the massive cost savings of a virtualized environment.
What many IT workers fail to realize is that, with most new technologies, there is no need for a forklift upgrade or a complete rip-and-replace approach. Virtualization, for example, can be implemented gradually and carefully, improving the organization in phases.
It’s one thing to be cautious. After all, deploying untested technology in mission-critical areas of a company’s operations is an unwise gamble. But refusing to even dip a toe in the innovation waters guarantees you’ll be left high and dry.
Many organizations today fail to define and document a comprehensive company security policy. Today's IT workers might be able to quickly and easily configure a firewall or access control lists, but unless they know what is required by law or by the corporation's C-level executives, they can run into major problems. Worse still, they could risk losing customer data.
The security of corporate data is far too important to leave to chance. IT departments need to do a better job of raising security awareness and educating users on what they are and aren’t responsible for.
Edward Snowden has taught us the dangers of a lax or leaky security policy. IT must inform company employees of what is and isn't allowed (for example, should transporting company data on USB memory sticks be banned?) and enforce these policies consistently. Otherwise, there will be no happily ever after if the organization is hit by a data breach.
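Consistent enforcement starts with writing the policy down as data rather than folklore. This is a minimal sketch of that idea with a default-deny rule; the action names and decisions are purely illustrative.

```python
# Hypothetical sketch: the written security policy encoded as data, so every
# enforcement point gives the same answer. Anything the policy does not
# mention is denied by default. (Action names and rules are illustrative.)

POLICY = {
    "usb_storage_transport": "deny",
    "approved_cloud_backup": "allow",
    "personal_email_attachments": "deny",
}

def is_allowed(action):
    """Default-deny lookup against the documented policy."""
    return POLICY.get(action, "deny") == "allow"
```

Real deployments push these rules into endpoint-management and DLP tooling, but the principle is the same: one documented policy, applied the same way everywhere.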