Storage Networking World, Day One

So far the convention is going well. The morning was spent getting registered (painless) and then sitting in on lectures on HPC storage.

Lecture 1: Next Generation File Services

I didn't learn a great deal from the first lecture, as it was more of an introduction to storage for the HPC arena.

Lecture 2: Accelerating Applications with Infiniband Connected Servers

The slides for this lecture were good, and I will have to download them. The lecture also covered a number of the economic considerations around InfiniBand.

Lecture 3: Solid State Drives

Solid state drives appear to be gaining traction, but I don’t see them being used in everything for a couple of years, at least. It is a very young technology that needs some time to mature. There is still wide variation in the quality of the products, both in access performance (particularly write performance) and in the number of write cycles a given block on the media can endure.

Lecture 4: Fibre Channel Technologies: Current and Future

About the only thing I really got out of this lecture was that 8Gb/sec FC is on its way and should be available for general consumption in the next quarter or two. It also appears that 8Gb/sec FC will actually come in at a lower price point than 4Gb/sec, which is useful information, particularly if we decide to get new storage at work in about a year.

Lecture 5: IP Storage Protocols: iSCSI

About the only thing that I really took from this was that iSCSI PDUs are not aligned with IP packet boundaries, which means the underlying TCP/IP stack is responsible for breaking the iSCSI PDUs up for transmission.

Lecture 6: Comparing Server I/O Consolidation Solutions: iSCSI, Infiniband, and FCoE

This was a useful lecture, though I am now second-guessing whether I should have attended the lecture on AMD Bridge Bay technology instead. Anyway, there were some very interesting comparisons, with 10GbE apparently drawing between 4 and 8W of power per port, compared to significantly less for the other interfaces. A newer variant of 10GbE using copper twinax connections does look somewhat promising, however, though cable lengths appear to be limited to about 10m.

I need to pull these slides as well, as there are some references to things like CBA that looked really cool. Apparently Cisco has been able to do some booting over IB as well, though they did not go into exactly HOW they managed to do this.

Lecture 7: SAS & SATA Combine to Change the Storage Market

Another good lecture. Apparently SAS and SATA drives share a compatible physical interface, but only in one direction: SATA drives can be plugged into SAS backplanes without any trouble, whereas SAS drives cannot be plugged into SATA-only backplanes.

SAS also allows for link aggregation, which means it is possible to get 1120MB/sec on a 4x wide link. PCI-E x8 connections are capable of 1600MB/sec. This looks to be a very nice alternative to FC connections. Apparently it takes between 14 and 19 SATA 3.5″ drives to saturate a 4x SAS link.
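As a rough sanity check on those numbers (my own back-of-the-envelope, not from the slides), here is a quick sketch assuming first-generation 3Gb/sec SAS lanes with 8b/10b encoding and a sustained throughput somewhere around 60-80MB/sec per SATA 3.5″ drive:

```python
# Back-of-the-envelope check on 4x SAS link saturation.
# Assumptions (mine, not from the lecture): 3Gb/s SAS lanes,
# 8b/10b encoding, and 60-80MB/s sustained per SATA 3.5" drive.

LANE_RATE_GBPS = 3.0        # raw line rate per SAS lane
ENCODING_EFFICIENCY = 0.8   # 8b/10b: 8 data bits per 10 line bits
LANES = 4                   # 4x wide port

# Usable bandwidth of the wide link in MB/s (1 byte = 8 bits)
link_mb_s = LANE_RATE_GBPS * 1000 / 8 * ENCODING_EFFICIENCY * LANES
print(f"4x link payload bandwidth: ~{link_mb_s:.0f} MB/s")  # ~1200 MB/s

# Drives needed to saturate the link at the assumed per-drive rates
for drive_mb_s in (80, 60):
    print(f"at {drive_mb_s} MB/s per drive: ~{link_mb_s / drive_mb_s:.0f} drives")
```

That comes out at roughly 1200MB/sec and 15-20 drives, which is close to the 1120MB/sec and 14-19 drive figures quoted, with the remaining gap presumably protocol overhead.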

There is also a trend at the moment to move to 2.5″ drives in the server market. While this will increase spindle count and reduce power and space requirements, it sacrifices capacity per drive.

One tool that was mentioned, and that I am going to have to take a look at, is SQLIO, a Microsoft utility that measures I/O performance against a filesystem using SQL Server-style I/O patterns (despite the name, it doesn't actually issue SQL queries).

That is about it for day one. Hopefully tomorrow there will be more to write about.