The former CIO of the Federal Communications Commission, David Bray, is proud to say his IT team is spending less of its time on network hardware and more on delivering mission results. His proof is an empty server room in Washington, D.C., that once housed 70 racks filled to the brim with servers, storage, switches, cabling and other network infrastructure.
The FCC, like many organizations, has adopted cloud and software-driven strategies that reduce the need for floor-to-ceiling racks and rows of networking gear in its own data centers. Where once Bray's team was responsible for the "day-to-day care and feeding" of physical network hardware boxes, today it's a different story. A service provider now manages the FCC's network hardware components — minus a few high-capacity switches for internet access — and the agency subscribes, where possible, to software as a service.
The transfer of functions from on-premises hardware to server farms housed in external data centers comes as businesses look for ways to cut costs and streamline operations. At the same time, technological innovations in hardware and chip design have made it possible for service providers to pack more differentiated services into a single box, which reduces the need to maintain a large number of devices for different purposes. The result is a networking environment in which enterprises can entrust more of their hardware management to service providers, allowing IT managers to align their departments more closely with business needs and keep only selected networking hardware in their own data centers.
While on-premises network hardware components will never disappear completely, “What you can expect to see is fewer physical boxes hosting more differentiated virtualized network functions,” said John Burke, CIO and principal research analyst at Nemertes Research. Gone will be the separate appliances to handle load balancing, firewalls and WAN optimization. And if forecasts are correct, a growing percentage of that functionality will be software-driven within high-availability gear, located in racks operated by service providers.
In its research on IT organizations, Nemertes has found that more than 40% of work is already being done outside enterprise data centers. Enterprises are showing a greater willingness to put services in the cloud, and Nemertes expects vital network functions to eventually follow suit.
That said, shifting reliance from on-site hardware to third-party providers will take years to unfold. Nemertes expects it will be another six to seven years before the percentage of enterprise workloads managed by cloud providers eclipses 70%. Some services will simply stay housed in boxes on site, Burke said.
“A large enterprise might have 10,000 custom applications running inside, and some of them are absolutely business-critical, written in COBOL and running on a mainframe. There is no impetus to develop them on a modern platform,” Burke said.
At the FCC, being in sync with the business has meant a move to virtual desktop infrastructure (VDI), wiping out the need for IT to manage a data center. More than three-quarters of the FCC’s workforce uses VDI in-house, and 100% use it on the road. A small percentage of employees require PCs with more processing power to handle computational and graphical applications.
Bray, who became executive director of the People-Centered Internet coalition to promote global internet access in October, acknowledged that the network hardware components didn't disappear overnight. Over several years at the FCC, IT stopped refreshing end-of-life hardware and consolidated what was left. "We rationalized our application portfolio, looking at redundancies, converging them and virtualizing where possible," he said. The fewer the apps, the less hardware and the easier it was to migrate to a cloud environment or SaaS. He was also able to shrink IT staffing because he no longer needed contractors on site to manage and maintain the physical gear.
City of Angels deploys (mostly) in the cloud
In Los Angeles, the city is in the midst of determining which applications and services will remain on-premises and which will migrate to the cloud, according to the city’s CIO, Ted Ross.
“The idea of 100% cloud can be very compelling. But if you own a data center and have an investment in on-premises equipment, there is a lot to be said for optimizing the balance between on premises and cloud,” Ross said.
Already, the city has reduced the use of physical stand-alone servers and has virtualized 93% of its server load. “That’s quite high, especially for government,” he said. At the same time, the city is reducing its physical storage footprint as well, despite increasing storage requirements, by using cloud-based storage. Ross can also repurpose hardware to testing and development environments or to other enterprise applications.
Two metrics of success in reducing networking hardware components, he said, are power consumption, which has decreased 30% over the last two years, and data center uninterruptible power supply (UPS) utilization, which is just under 50%. “So we’re increasing total capacity, but decreasing power usage,” Ross said.
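The two metrics Ross cites can be illustrated with a minimal sketch. The figures below are assumed for the example, not the city's actual readings; only the 30% decrease and the just-under-50% UPS utilization come from the article.

```python
# Sketch of the two data center metrics Ross describes:
# percentage change in power consumption and UPS capacity utilization.

def percent_change(old: float, new: float) -> float:
    """Percentage change from old to new (negative means a decrease)."""
    return (new - old) / old * 100

def ups_utilization(load_kw: float, ups_capacity_kw: float) -> float:
    """Share of UPS capacity currently drawn, as a percentage."""
    return load_kw / ups_capacity_kw * 100

# Hypothetical readings chosen to match the reported trends.
power_two_years_ago_kw = 200.0
power_today_kw = 140.0     # a 30% decrease
ups_capacity_kw = 300.0

print(percent_change(power_two_years_ago_kw, power_today_kw))  # -30.0
print(ups_utilization(power_today_kw, ups_capacity_kw))        # ~46.7, just under 50%
```

The point of the pairing is that the second number can fall even as total data center capacity grows, which is what Ross means by increasing capacity while decreasing power usage.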
He anticipates certain sensitive workloads, such as the police department's applications and storage, along with other data-intensive jobs, will remain on premises and, therefore, require a certain amount of infrastructure.
Virtualized network reality
Melissa Handy, technology director at a K-12 school system in the western U.S. that asked not to be identified, said being 100% hardware-free just isn't realistic, even in her smaller environment. She is trying to get close, however, especially since she has only two full-time employees dedicated to network support for more than 800 students — each with multiple devices, including school-provided laptops.
“Just the networking stuff — hardware, security cameras, firewalls, audio and visual — was taking up so much of our time,” she said. “It was impossible to have an internal team with the skill set to stay on top of that and manage all the switches, do firmware upgrades and make sure the security on the servers is in good shape, let alone rip and replace equipment.”
Like the FCC, the school spent time plotting what hardware in the network could be managed in the cloud and what had to stay on premises. Space limitations, wet weather and an aging power grid in the region also factored into the decision of what to move off-site and what to bring back in-house. For example, the school had moved to hosted VoIP a few years earlier, but Handy found that regional power outages necessitated an on-premises VoIP system that would allow for classroom-to-classroom calls even if the internet was down.
For other school resources, however, the cloud offered a more stable platform, including the learning management system, which was moved to SaaS, and email, which was moved to Gmail. Print jobs are handled through a software client that sends information over local data lines, instead of back and forth from the cloud server across the VPN. That approach saves bandwidth, particularly during print jobs that could be thousands of graphics-heavy pages.
Her main hardware responsibility now is overseeing a fleet of Extreme Networks’ B5 edge series switches, S-Series switches in the core and wireless AP3825 access points.
To accommodate the transfer of applications and services to the cloud, Handy boosted the district's internet capacity from 100 Mbps to 500 Mbps, with the possibility to expand to 1 Gbps when needed. The main campus uses a dedicated fiber line, with branch locations connecting via a VPN. "I'm paying much less to manage the network now that it's right-sized, but as a trade-off, we needed a bigger pipe to the internet," she said, adding that the $70,000 once allotted to capital expenses can now be used for operational expenses.
Aligning business needs with networking needs
The popularity of cloud hosting and cloud-based applications has pushed enterprises to re-evaluate their approach to hardware, according to Andre Kindness, a principal analyst at Forrester Research. This is proving difficult because most have no long-term strategy, he said.
“Infrastructure operations and networking are usually so short-sighted, working project by project and disconnected from the organization’s five-year plan,” he said. “The first step in reducing hardware and moving to the cloud should be for IT to align networking — not just applications — with the business.”
With the business roadmap in mind, IT can make better cost justifications for keeping or removing network hardware components. For instance, when an application moves to SaaS, IT could set aside the retired server cluster for an upcoming internet of things project rather than losing the investment altogether. "The network strategy should be fully in sync with the business," Kindness said.
That strategy has paid dividends at the FCC, where IT’s profile has risen even as the amount of networking hardware it manages has decreased.
“Historically, somewhere along the line, IT became resigned to the role of geeks in the basement and CIOs as chief infrastructure officers,” the FCC’s Bray said. “With hardware out of the picture, CIOs can be what they should be — dual-hatted chief information officers and chief innovation officers talking about new capabilities for the business that align to the mission.”
Hardware will never disappear from enterprise networking. But there's no slowing the trend of service providers and companies finding new ways to move more functions to the cloud, even as organizations weigh the balance between on-premises and cloud services. As a result, IT managers will find themselves spending more time on business functions and less time on network plumbing.