• Tag Archives: Storage Networking World
  • Storage Networking World, Day 4

    The last day of the conference… here are the sessions that I went to:

    Session 1: Infiniband Technology Overview

    A couple of things in this session were of interest to me. The first was NFSoRDMA, which could be a useful solution assuming that NFS can handle the traffic; we have seen problems with machines trying to serve too many hosts over NFS. The other was iSER (iSCSI Extensions for RDMA), which carries iSCSI over the InfiniBand fabric using RDMA rather than plain IPoIB. It is starting to gain some ground, though I am not exactly sure how we would put it to use. I suppose if we were to get some iSCSI targets, we could serve them over InfiniBand using iSER.

    One question that came up in my mind during this session was how we might make use of QoS on the InfiniBand fabric, and whether doing so would even be worthwhile. It is something for us to look at.

    Session 2: Fibre Channel Over Ethernet (FCoE)

    This was an interesting session, as I didn’t know a lot about FCoE. It requires switches that support it, which means added cost, and it is intended to stay within the datacenter rather than go out over the WAN. That limitation comes from its requirement for lossless Ethernet, which is hard to provide on a WAN. Normal Ethernet traffic can share these links, of course, so the same network essentially gets double use.

    A minimum of one FCoE-capable switch is required at the edge; more can be used, but at least one is needed. I am wondering whether this might have a use in the Tier 2 group that we have here.

    Session 3: PCI Express and Storage

    This was a great session. I didn’t write down very much because the slides were pretty self-explanatory, but the information on them is impressive. Basically, gen 2.0 of the PCI Express architecture is twice as fast as gen 1.0, and gen 3.0 will be twice as fast as 2.0. Since gen 2.0 started making its way into the wild in the fourth quarter of 2007, I expect it is showing up in the server board market now.
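
    As a back-of-the-envelope check on those doubling claims, here is a quick sketch of per-lane and x8 throughput. The transfer rates and encoding overheads (2.5 and 5 GT/s with 8b/10b, 8 GT/s with 128b/130b) are my own recollection of the PCI Express specs, not figures from the slides.

        # Rough PCI Express throughput per lane and for an x8 slot.
        # Transfer rates and encodings below are recalled from the specs,
        # not numbers taken from the session slides.
        GENS = {
            "gen 1.0": (2.5e9, 8 / 10),     # transfers/sec, encoding efficiency
            "gen 2.0": (5.0e9, 8 / 10),
            "gen 3.0": (8.0e9, 128 / 130),
        }

        for name, (rate, efficiency) in GENS.items():
            per_lane = rate * efficiency / 8 / 1e6      # bits -> bytes -> MB/s
            print(f"{name}: {per_lane:,.0f} MB/s per lane, "
                  f"{per_lane * 8:,.0f} MB/s on an x8 slot")

    Interestingly, gen 3.0 gets its doubling partly from the leaner 128b/130b encoding rather than from doubling the raw transfer rate yet again.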

    That was all of the sessions I went to on Thursday. After that third session, I got in the car and drove home, making a pit-stop at my mom’s house where we went and ate dinner.  🙂


  • Storage Networking World, Day 3

    Day three consisted mostly of some big general sessions in the morning, which I went and attended…

    Session 1: Doing More with Less: The Future of Digital Preservation in a Constrained Fiscal Environment

    An interesting talk from Laura Campbell, the CIO of the Library of Congress. What I wondered about was whether this is a possible use for the NLR/FLR networks, in terms of transmitting data from regional libraries to the Library of Congress. They have a website: digitalpreservation.gov. I also wonder whether I should talk to Ben about this a bit.

    Session 2: The Greening of IT at Marriott

    One of the things I took away from this was a question: would using SSDs in place of normal hard drives in a cluster pay off in the long run through power savings? Obviously speed would be helped quite a bit, but would the power savings be worth it? Since we don’t pay for power ourselves it is a bit of a moot point, but perhaps the University as a whole would be interested in paying the difference in cost in order to realize the savings.
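
    For my own reference, here is a rough sketch of that power-savings calculation. Every number in it (drive count, per-drive wattage, electricity price) is an assumption for the sake of illustration, not a figure from the talk.

        # Hypothetical SSD vs. hard drive power-cost comparison. All of the
        # numbers here are assumptions for illustration, not from the talk.
        N_DRIVES       = 200     # drives in a hypothetical cluster
        HDD_WATTS      = 10.0    # assumed average draw per spinning drive
        SSD_WATTS      = 2.0     # assumed average draw per SSD
        PRICE_PER_KWH  = 0.10    # assumed electricity price in dollars
        HOURS_PER_YEAR = 24 * 365

        def yearly_cost(watts_per_drive):
            kwh = N_DRIVES * watts_per_drive * HOURS_PER_YEAR / 1000
            return kwh * PRICE_PER_KWH

        savings = yearly_cost(HDD_WATTS) - yearly_cost(SSD_WATTS)
        print(f"estimated power savings: ${savings:,.0f} per year")

    With numbers like these, the power savings alone would take a long while to cover the price premium on that many SSDs, which is probably why the question keeps coming back to who actually pays the power bill.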

    Session 3: Next Generation Information Infrastructure

    Some more musings about SSDs during this lecture. One fact that popped up: data growth is running at approximately 57% a year, which is really huge.
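
    To put that growth rate in perspective, here is what 57% a year looks like when it compounds. The 57% figure is from the talk; the 100 TB starting point is just an arbitrary assumption.

        # Compounding 57% annual data growth. The 57% figure is from the talk;
        # the 100 TB starting point is an arbitrary assumption.
        capacity_tb = 100.0
        growth = 0.57

        for year in range(1, 6):
            capacity_tb *= 1 + growth
            print(f"year {year}: {capacity_tb:,.0f} TB")

    That works out to nearly a tenfold increase in five years.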

    Session 4: The Greening of the Data Center

    As you can see, a major focus of this year’s conference was green computing. The only thing here that really caught my attention was the concept of a MAID (Massive Array of Idle Disks).

    Session 5: Transform Your Data Center: The Path to a Transparent Infrastructure

    I didn’t really get anything out of this, but as my mind wandered I mused on how the University would know how much power our group is using at any given time if it decided to start charging the different groups around campus for power. There aren’t really any meters that measure this sort of thing, so if they went to that kind of model, would each group have to pay for a meter or something?

    Also, is there something that allows for NFS on an ESX server?

    Session 6: NERSC – Extreme Storage and Computation for Science

    While this talk was interesting to me, I noticed a large exodus from the audience as it progressed. I figure this happened mainly because the talk, given by William Kramer of the National Energy Research Scientific Computing Center, was not focused properly for the group in attendance. It described what the center was doing without really going into how they were using the technology or how it could benefit the attendees. In addition, the slides William was using were very dense and had entirely too much information on them, something academics seem to be consistently guilty of. While the slides may be useful to a select few, most of the information on them was lost on that audience.

    The rest of the day I spent in the showroom learning more about what was available. I left around dinner time and found an Indian restaurant near Downtown Disney that was quite enjoyable.


  • Storage Networking World, Day One

    So far the convention is going well. The morning was spent getting registered (painless) and then sitting in on lectures for HPC storage.

    Lecture 1: Next Generation File Services

    I didn’t learn a whole lot in the first lecture, as it seemed to be more of an introduction to storage for the HPC arena.

    Lecture 2: Accelerating Applications with Infiniband Connected Servers

    The slides for this lecture were good, and I will have to download them. A number of economic considerations for InfiniBand were mentioned.

    Lecture 3: Solid State Drives

    Solid state drives appear to be gaining ground, but I don’t see them being used everywhere for at least a couple of years. It is a very young technology that needs time to mature. There is still wide variation in product quality, both in access performance (particularly write performance) and in the number of writes a given block on the media can endure.

    Lecture 4: Fibre Channel Technologies: Current and Future

    About the only thing I really got out of this lecture was that 8Gb/sec FC is on its way and should be available for general consumption in the next quarter or two. It also appears that 8Gb/sec FC will actually come in at a lower price point than 4Gb/sec, which is useful information, particularly if we decide at work to get new storage in about a year.

    Lecture 5: IP Storage Protocols: iSCSI

    About the only thing I really took from this was that iSCSI PDUs are not aligned with the underlying TCP/IP packets, so the transport layer is responsible for breaking the PDUs up for transmission.
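
    A small illustration of that alignment point: to the transport an iSCSI PDU is just part of the byte stream, so a PDU bigger than one segment gets split across several packets, and PDU boundaries need not line up with packet boundaries. The header and data-segment sizes below are assumptions for illustration, not numbers from the lecture.

        # An iSCSI PDU larger than one TCP segment gets split across several
        # packets, and PDU boundaries need not line up with packet boundaries.
        # The sizes below are illustrative assumptions.
        PDU_SIZE = 48 + 8192     # bytes: basic header plus an 8 KB data segment
        MSS      = 1460          # typical TCP payload per Ethernet frame

        full_segments, remainder = divmod(PDU_SIZE, MSS)
        print(f"a {PDU_SIZE}-byte PDU spans {full_segments} full segments "
              f"plus {remainder} bytes in a final partial one")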

    Lecture 6: Comparing Server I/O Consolidation Solutions: iSCSI, Infiniband, and FCoE

    This was a useful lecture, though I am now second-guessing whether I should have attended the lecture on AMD Bridge Bay technology instead. Anyway, there were some very interesting comparisons, with 10GbE apparently using between 4 and 8 W of power per port, compared to significantly less for the other interfaces. A newer flavor of 10GbE over copper Twinax connections does look somewhat promising, however, though it appears to be limited to cable runs of about 10 m.

    I need to pull these slides as well, as there are some references to things like CBA that looked really cool. Apparently Cisco has been able to do some booting over IB as well, though they did not go into exactly HOW they managed to do this.

    Lecture 7: SAS & SATA Combine to Change the Storage Market

    Another good lecture. Apparently SAS and SATA drives share a compatible physical interface, but the compatibility only runs one way: SATA drives can be plugged into SAS backplanes without any trouble, while SAS drives cannot be plugged into SATA backplanes unless the backplane also has SAS capability.

    SAS also allows for link aggregation, which means it is possible to get 1120 MB/sec on a 4x wide link, while a PCI-E x8 connection is capable of 1600 MB/sec. This looks to be a very nice alternative to FC connections. Apparently it takes between 14 and 19 3.5″ SATA drives to saturate a 4x SAS link.
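
    That 14-19 figure checks out if you assume roughly 60-80 MB/sec of sustained throughput per 3.5″ SATA drive (my assumption, not a number from the slides):

        # Sanity check on "14-19 SATA drives to saturate a 4x SAS link",
        # assuming 60-80 MB/s sustained per drive (my assumption).
        X4_SAS_LINK_MBPS = 1120          # aggregated 4x link, from the session

        for per_drive in (60, 80):
            drives = X4_SAS_LINK_MBPS / per_drive
            print(f"at {per_drive} MB/s per drive: "
                  f"~{drives:.0f} drives to fill the link")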

    There is also a trend at the moment toward 2.5″ drives in the server market. While this increases spindle count and reduces power and space requirements, it sacrifices capacity.

    A tool that was mentioned, and one I am going to have to take a look at, is SQLIO, which measures I/O performance against a filesystem using SQL Server-style workloads.

    That is about it for day one. Hopefully tomorrow there will be more to write about.


  • More project work

    Just got over a major hurdle on my kd-tree project. It was segfaulting and I had no idea why. Now I know, and figuring out what is going wrong is usually the biggest hurdle; the fix itself is much easier to deal with once you know.

    The project was going well until I hit this snag… at which point I got discouraged and stopped working on it for a good five days. Now I can finally work on it again, knowing that the problems I run into from here should be small things, not big whoppers like this one.

    In other news, I am heading to Storage Networking World 2008 down in Orlando next month. It should be quite interesting, as the subject matter is right up my alley. There is some work to finish before heading down there, such as gathering some facts about our work at the HPC Center so that I have something to talk about.