Jim O'Reilly

Consultant


Big Data Means Big Pipes

Crunching big data continues to challenge the industry to deliver faster and bigger solutions. Not only is data growing, but the Internet of Things and data-rich streams, such as those generated by retail store traffic-pattern monitoring, are accelerating both the need to move data and the requirement to analyze it rapidly.

The IT industry is responding. For example, Intel has created a reference design for a memory-oriented system with 6TB of DRAM. This is aimed at extending the reach of in-memory databases, which are currently the leading edge of database performance.

In just a year or two, Hybrid Memory Cube (HMC) memories will arrive with similar capacity but much higher bandwidth, reaching terabyte-per-second levels. This type of memory will apply to high-performance computing (HPC) as well as big data applications, and will tie into very high-core-count CPUs. There are also plans for GPU clusters with even more compute power using HMC.

With such a rapid evolution of engine power, there's a real risk that storage and networking systems will fall behind, in some ways returning us to the state of play of a few years ago. Both the LAN and the storage network need to evolve substantially.

First, there’s LAN speed. Ethernet LANs are the primary in-feed mechanism for systems. That’s where the Internet of Things delivers sensor data, for example. It may be that data will terminate in the big data servers, or it may end up being staged to fast storage. Either way, these will be fat streams, and the LAN speed will need to be higher than today’s performance standard, which is 10 gigabit Ethernet. At just 5% of the putative DRAM bandwidth for HMC, we are looking at 400Gb/sec!
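
As a rough back-of-the-envelope check, here is the arithmetic behind that 5% figure, sketched in Python and assuming HMC lands at roughly 1 terabyte per second, the level cited above (actual parts will vary):

```python
# Back-of-the-envelope check of the 5% figure. Assumes HMC delivers
# roughly 1 TB/sec of memory bandwidth; real implementations will differ.
hmc_bandwidth_tb_per_sec = 1.0
hmc_bandwidth_gbit_per_sec = hmc_bandwidth_tb_per_sec * 8 * 1000  # 8,000 Gbit/sec

lan_share = 0.05  # feed the LAN at just 5% of memory bandwidth
required_lan_gbps = hmc_bandwidth_gbit_per_sec * lan_share

print(f"Required LAN bandwidth: {required_lan_gbps:.0f} Gbit/sec")  # ~400
```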

[Read about a new generation of cloud-based log aggregation and analysis services that promises to help IT teams filter gigabytes of log data in "Big Data Analytics, Cloud Services, And IT Security."]

The real determinant of need will be the churn rate of the in-memory data. This could be high, as in retail tracking or surveillance systems, or low, as in genome data crunching. In the high-churn cases, a single 400GbE link might not cut it, while 40GbE might be enough for something like genome mining.
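
To make that concrete, here is a hypothetical sizing sketch in Python; the dataset size and churn windows are illustrative assumptions, not figures from any particular deployment:

```python
# Hypothetical sizing: how fast a link is needed to refresh part of an
# in-memory working set within a given time window. All inputs are
# illustrative assumptions.
def required_link_gbps(dataset_tb, churn_fraction, window_sec):
    """Gbit/sec needed to replace churn_fraction of a dataset every window_sec seconds."""
    bits_moved = dataset_tb * 8e12 * churn_fraction  # terabytes -> bits, scaled by churn
    return bits_moved / window_sec / 1e9             # bits/sec -> Gbit/sec

# High churn (retail tracking, surveillance): half of a 6TB dataset per minute
print(required_link_gbps(dataset_tb=6, churn_fraction=0.5, window_sec=60))   # ~400 Gbit/sec
# Low churn (genome mining): 5% of a 6TB dataset per minute
print(required_link_gbps(dataset_tb=6, churn_fraction=0.05, window_sec=60))  # ~40 Gbit/sec
```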

Note that these scenarios are based on a single server node. With servers deployed in clusters, network backbones will need all the speed they can get, which suggests that backbones built from multiple 100GbE or 400GbE links will be needed for this work within two or three years.
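
A simple aggregation sketch in Python shows why; the node count, per-node feed, and oversubscription ratio below are assumptions for illustration, not measured values:

```python
import math

# Illustrative backbone sizing for a small cluster. All inputs are assumptions.
nodes = 16
per_node_gbps = 100        # assumed sustained ingest per server node
oversubscription = 4       # assumed leaf/spine oversubscription ratio

backbone_gbps = nodes * per_node_gbps / oversubscription
print(backbone_gbps)                      # 400.0 Gbit/sec of backbone capacity needed
print(math.ceil(backbone_gbps / 400))     # 1 x 400GbE uplink
print(math.ceil(backbone_gbps / 100))     # or 4 x 100GbE uplinks
```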

Storage adds to the complexity of the picture. If data is not completely transient, it has to be stored somewhere; surveillance footage and raw retail sensor data are examples. Most large bulk storage arrays achieve around 100Gb/sec of total disk bandwidth, but that's with HDDs. If SSDs are used, the figure rises to around 500Gb/sec. One might argue that SSDs are unlikely in this context, but prices have dropped a good deal, and performance requirements may make SSD or all-flash arrays the right choice.
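
Those HDD and SSD figures can be sanity-checked with a simple drive-count estimate; the drive counts and per-drive rates below are assumptions chosen only to land near the numbers above:

```python
# Rough array bandwidth from drive count and per-drive streaming rate.
# Drive counts and MB/sec figures are illustrative assumptions.
def array_gbps(drives, mb_per_sec_each):
    return drives * mb_per_sec_each * 8 / 1000  # MB/sec -> Gbit/sec, summed over drives

print(array_gbps(drives=100, mb_per_sec_each=130))  # HDD array: ~104 Gbit/sec
print(array_gbps(drives=120, mb_per_sec_each=500))  # SSD array: ~480 Gbit/sec
```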

Taken together, these trends mean the need for links in the 400Gb/sec range for storage is fast approaching. We'll see the leading edge of a demand wave before 2017. By then, 100Gb/sec single-fibre links should be in production, with multiple lanes ganged together for higher bandwidth. It won't be cheap, but if we keep our eye on the ball, we should have the networks to keep up.

