In the heyday of Unix workstations, the late 1980s, a common configuration was to have one or more large (at least by the standards of the day) NFS fileservers, which provided home and other directories to a farm of workstations. Many of these workstations might, in fact, be diskless: they would boot from the network, and all of their filesystems and swap space would be provided by NFS (and, before NFS, by ND, the Network Disk protocol).
There were several reasons for configurations like this, some of which are obvious and some perhaps not so obvious:
The first three reasons are probably fairly self-evident; the final one needs some further explanation. There were three classes of disks in that era.
What may not be obvious is how much faster the big drives were than the toy workstation SCSI drives: I don't know exactly what the differences were, but they were large, in bandwidth and especially in latency. And workstations were connected by (thickwire) ethernet, which could shift data at about 1MB/s on a good day, a good deal faster than a workstation SCSI disk could realistically manage. So a sensible approach was to keep your data on the big disks and use them over the network. It was faster, and your data got backed up as well, because no one in their right mind backed up the workstation disks. (You couldn't use RAID to make lots of SCSI disks look like one big disk, because RAID was only just being invented.)
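To put some very rough numbers on that: I don't have real figures from the time, so everything below is an assumption chosen for illustration rather than a measurement, but the shape of the comparison is about right.

    # Back-of-the-envelope late-1980s transfer rates.
    # Every figure is an illustrative assumption, not a measurement.

    ethernet_mb_s = 1.0            # thickwire (10BASE5) ethernet, "on a good day"
    workstation_scsi_mb_s = 0.5    # assumed sustained rate of a small SCSI drive
    server_drive_mb_s = 2.5        # assumed sustained rate of a big server drive

    print(f"network vs local disk:      {ethernet_mb_s / workstation_scsi_mb_s:.1f}x")
    print(f"server drive vs local disk: {server_drive_mb_s / workstation_scsi_mb_s:.1f}x")

Even with made-up numbers like these, the conclusion comes out the way it did then: the big drives over the wire beat the little local disk.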
It was only one small step from this to a workstation which had no local disk at all.
That world is now long gone. Things have changed almost beyond recognition: quite literally so in the case of enterprise disks. A fair number of changes, not all of them technological, have happened in the last 20 years or so.
These factors led, by the mid to late 1990s, to a world where large machines were attached to large storage arrays using SCSI, where smaller systems typically had a few internal disks, also using SCSI, and where desktops and laptops had their own large internal disk, or occasionally disks, using some IDE-derived interface.
Directly attached SCSI storage has some fairly serious problems, however: cable lengths are severely limited; the cables themselves are thick and unwieldy; it is not especially fast; and storage tied to individual hosts is awkward to manage, back up, and provision for disaster recovery.
Enter fibre channel, and SANs.
Fibre channel, in its various forms (which can include copper as well as fibre-optic cables), solved the first three problems: it could have long cable lengths (up to several miles); the cables were much smaller than SCSI cables; and finally it was significantly faster than SCSI.
SANs claimed to solve the final problem by building a network out of fibre channel, to which large storage arrays and hosts could be connected. This would allow storage to be managed efficiently, as well as simplifying backup and DR provision.
Of course, what has happened here is that storage has migrated back onto a network, except now the network is based on a fibre channel "fabric", and the protocols in use are block-based rather than filesystem-based. And it's much, much more expensive than an ethernet-based network: both because it uses completely different protocols so you need a different group of people to run it, and because the network hardware is extremely expensive.
While this has been happening, something interesting has been happening to ethernet. From 10Mb/s in the mid 90s, it's moved to 100Mb/s, 1Gb/s and now (mid 2008) 10Gb/s. Ethernet trunking is also now widely supported, allowing networks to be configured with multiples of these speeds. The rate of performance increase of ethernet has far outstripped that of fibre channel in the last 10 years. In terms of price/performance the story is even worse for fibre channel.
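The arithmetic behind that claim is crude but telling. The dates and speeds below are rough, and the fibre channel ones in particular I have not checked carefully, so treat this as a sketch rather than a data set.

    # Approximate link speeds in Mb/s; the exact dates matter less than the ratio.
    ethernet      = {"mid 1990s": 10,     "mid 2008": 10_000}
    fibre_channel = {"late 1990s": 1_000, "mid 2008": 8_000}

    def growth(speeds):
        values = list(speeds.values())
        return values[-1] / values[0]

    print(f"ethernet grew ~{growth(ethernet):.0f}x")         # ~1000x
    print(f"fibre channel grew ~{growth(fibre_channel):.0f}x")  # ~8x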
So, to anyone whose head is not buried in the sand (most people involved in SANs have their heads quite deep in the sand), it's been apparent for a while that fibre channel is going to get displaced by ethernet and TCP/IP. Protocols such as iSCSI allow TCP/IP networks to do everything fibre channel networks could do. Ethernet is also much cheaper, both because the hardware is cheaper and because you now only need one group of network people to look after it.
So, obviously, SANs will start to become ethernet-based, using iSCSI over TCP/IP.
But something else is lurking in the wings: NFS. For years, NFS has been a standing performance joke, especially in the kind of environments where SANs thrive, which typically run large databases that, as everyone knows, need block-based storage protocols to get the performance they require. Except, look at what this person is saying, and in particular at this. Before you dismiss him as some mad internet person, look at who he works for, and what he does. He probably does know what he's talking about.
NFS is actually not as bad as you might think.
Obviously it will take some time, especially given the huge investment in fibre channel SANs over the last ten years, but I think it's now pretty clear that the future is going to be quite similar to the past: network storage running over ethernet, using NFS.
And perhaps it will be even more similar to the past than that. Serious systems have supported SAN booting for some time. But it's not widely used, because SANs are so fiddly and difficult to set up, and because SAN-based storage is far more expensive than local disks; for smaller systems it's so expensive that no one would ever consider it. But local boot disks are a pain for the same reasons they always were a pain: they go wrong, they're hard to back up, and they make it hard to replace one system with another (or, as we should say now, to "virtualise" systems).
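They also cost real money just by existing. Some rough arithmetic, in which every figure is an assumption picked for illustration rather than a measurement:

    # What a farm of machines pays just to keep local boot disks spinning.
    # All figures are illustrative assumptions, not measurements.

    machines         = 200
    watts_per_disk   = 10       # assumed draw of one boot disk, powered 24/7
    annual_fail_rate = 0.03     # assumed fraction of boot disks failing per year
    rebuild_hours    = 2        # assumed time to replace a disk and rebuild the OS

    hours_per_year = 24 * 365
    kwh_per_year   = machines * watts_per_disk * hours_per_year / 1000
    failures       = machines * annual_fail_rate
    lost_hours     = failures * rebuild_hours

    print(f"{kwh_per_year:,.0f} kWh/year spent spinning boot disks")
    print(f"~{failures:.0f} failed disks/year, ~{lost_hours:.0f} hours spent rebuilding")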
But systems can also boot using NFS, and in a world where networked storage is provided over NFS there is no reason at all why they should not, once again, be diskless. Boot disks don't even need very high performance, so this could easily happen before the wholesale replacement of fibre channel SANs with ethernet & NFS. The savings in power, maintenance, & management costs would be quite compelling.

July 2008