Kurt Marko

Contributing Editor


Solid-State Storage: On The Road To Datacenter Domination

Solid-state storage is marching through the datacenter, displacing disks in everything from servers to standalone storage arrays. The reasons are clear: significantly lower power consumption, faster access times -- particularly for reads -- and, most importantly, price points that make SSDs a technically feasible and fiscally preferable alternative to mechanical disks for a growing range of applications.

SSDs are certainly becoming more common in enterprise datacenters, according to InformationWeek's 2014 State of Enterprise Storage Survey. It found 40% of respondents using SSDs in disk arrays, up 8 points from last year, while 39% now deploy SSDs in servers, up 10 points. Deployments are still broad, but not deep: nearly two-thirds of respondents outfit 20% or fewer of their servers with SSDs, and just 48% have SSDs in more than 20% of their storage arrays.


While enterprise SSD use is clearly on the rise, the big driver of SSD adoption is cloud service providers (CSPs) like Apple, Facebook, Google and Microsoft using SSDs in ways few in the industry would have predicted. Early solid-state deployments focused on high-end, transaction-heavy applications where their I/O throughput meant one or two SSDs could replace a shelf full of expensive 15K rpm SAS HDDs. Today, rapid price erosion -- particularly for consumer-grade flash memory -- means CSPs are now turning to SSDs for bulk data storage and caching -- what Kevin Dibelius, director of enterprise storage at Micron, calls "read often, write few" applications.

Read-dominant applications are a good fit for cheaper consumer-grade drives since they don't exacerbate the most significant shortcoming of NAND flash devices: durability. As Nimble Storage Marketing VP Radhika Krishnan points out, there's an inherent tradeoff between flash capacity and reliability. Higher density is achieved using tighter process geometries, multi-level memory cells and less error correction data, all of which make the device less durable and reliable. This doesn't bother CSPs since they are increasingly using flash for cold, archival storage on highly redundant and distributed file systems, where a drive or even system failure isn't catastrophic.
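The density/endurance tradeoff Krishnan describes can be made concrete with some rough numbers. The program/erase (P/E) figures below are illustrative assumptions only -- they vary widely by vendor and process generation and are not from the article:

```python
# Rough, generation-dependent P/E endurance by NAND cell type.
# Illustrative assumptions, not datasheet values.
PE_CYCLES = {
    "SLC": 100_000,  # 1 bit/cell: lowest density, highest endurance
    "MLC": 3_000,    # 2 bits/cell: the consumer-grade sweet spot discussed here
    "TLC": 1_000,    # 3 bits/cell: denser still, least durable
}

# More bits per cell buys capacity at the cost of write endurance --
# exactly the tradeoff that makes cheap drives fine for read-mostly data.
for cell, cycles in PE_CYCLES.items():
    print(f"{cell}: ~{cycles:,} P/E cycles")
```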

The result is a dramatic change in flash requirements. In the past, when high-IOPS, transaction-oriented workloads were the predominant SSD application, devices were typically specified to sustain 10 drive fills per day for five years, Dibelius said. That's 10 complete writes of every memory cell, every day for five years, or almost 20,000 write cycles. Today, customers often need products good for only one fill per day, or less -- specs that are in line with consumer-grade MLC drives, he said. Indeed, he said MLC is appropriate for about 90% of Micron's new customer inquiries.
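Dibelius's endurance arithmetic is easy to check. A minimal sketch (the helper function is hypothetical, just to make the math explicit):

```python
def write_cycles(drive_fills_per_day: float, years: float) -> float:
    """Total program/erase cycles each cell sees if the whole drive
    is filled `drive_fills_per_day` times a day for `years` years."""
    return drive_fills_per_day * 365 * years

# The legacy transactional spec quoted above: 10 fills/day for 5 years.
print(write_cycles(10, 5))  # 18250 cycles -- "almost 20,000"

# The newer cloud bulk-storage spec: one fill per day.
print(write_cycles(1, 5))   # 1825 cycles, in line with consumer MLC
```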

In fact, Dibelius notes that some CSPs have even asked to buy off-spec NAND chips -- those that have failed Micron's QA testing -- so they can roll their own flash storage systems at an even lower price. They can do this because their cloud infrastructure is sufficiently redundant that a high rate of drive failures doesn't compromise data integrity. Although Micron hasn't yet sold any of these testing-room rejects, out of concern over the long-term customer support implications, it's clear that for some flash buyers, price and capacity are far more important than performance, reliability and write endurance.

Even with MLC SSDs crashing through the $1/GB barrier, there's still quite a price and capacity gap between flash and hard disks. However, some flash advocates, such as John Scaramuzzo, SVP and GM of SanDisk's enterprise group, argue that solid-state memory technology is evolving so much faster than magnetic hard disk technology that the gap is rapidly closing. The fact that HDD manufacturers are resorting to increasingly abstruse and expensive techniques, such as shingled magnetic recording and helium-filled drives, illustrates Scaramuzzo's point that HDDs "are running out of gas."

[Read why Howard Marks thinks spinning disks still will be the better bargain through 2020 in "SSDs Cheaper Than Hard Drives? Not In This Decade."]

Meanwhile, NAND flash technology marches on. Scaramuzzo predicts 4 and 8 TB drives by year end and 16 TB next year. Dibelius sets the bar even higher, claiming that Micron's new 16 nm process technology and 16-die stacks should allow the company to reach capacities of 25 TB before it needs to move on to the next so-called 1y (sub-16 nm) node in a year or two. These will of course be MLC devices, but the upshot is that more and more bulk storage applications will become feasible -- and actually preferable -- to run on flash systems.

In the near-term, hybrid flash-HDD systems like those from Avere Systems, Nimble, Tegile and most of the major storage vendors can deliver all-flash performance with hard-disk economics. They do this by dynamically adjusting the size of solid state caches and storage partitions in ways that are transparent and non-disruptive to applications.
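The flash tier in such a hybrid array acts, in essence, as a large read cache in front of the disk tier: hot blocks are served from flash, cold ones fall through to disk. A toy LRU sketch of that idea (purely illustrative -- not any vendor's actual implementation):

```python
from collections import OrderedDict

class FlashReadCache:
    """Toy LRU read cache: hot blocks served from a 'flash' tier,
    misses fall through to the 'disk' backing store."""

    def __init__(self, backing: dict, capacity: int):
        self.backing = backing      # stands in for the HDD tier
        self.capacity = capacity    # flash tier size, in blocks
        self.cache = OrderedDict()  # stands in for the SSD tier

    def read(self, block: int):
        if block in self.cache:                # flash hit: refresh recency
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.backing[block]             # miss: read from disk
        self.cache[block] = data               # promote block to flash
        if len(self.cache) > self.capacity:    # evict the coldest block
            self.cache.popitem(last=False)
        return data
```

Real hybrid arrays add write handling, block-level heat statistics, and background migration, but the caching principle is the same: applications see flash latency on the working set while the bulk of the data sits on cheap disk.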

Much as flash has gradually displaced hard disks in most consumer devices, it is marching through the data center and taking a growing slice of the storage pie. Although flash still holds a relatively small share of total storage, its emergence as a viable bulk cold-storage medium for CSPs, coupled with the rapid pace of technology improvement, means that predominantly- or all-flash data centers will be a reality for many organizations within this decade.

Kurt Marko is an IT pro with broad experience, from chip design to IT systems.
