What is a SAN and how does it differ from NAS?
A storage area network (SAN) is a dedicated, high-speed network that provides access to block-level storage. SANs were adopted to improve application availability and performance by segregating storage traffic from the rest of the LAN.
SANs enable enterprises to more easily allocate and manage storage resources, achieving better efficiency. “Instead of having isolated storage capacities across different servers, you can share a pool of capacity across a bunch of different workloads and carve it up as you need. It’s easier to protect, it’s easier to manage,” says Scott Sinclair, senior analyst with Enterprise Strategy Group.
What is in a SAN?
A SAN consists of interconnected hosts, switches and storage devices. The components can be connected using a variety of protocols. Fibre Channel is the original transport protocol of choice. Another option is Fibre Channel over Ethernet (FCoE), which lets organizations move Fibre Channel traffic across existing high-speed Ethernet, converging storage and IP protocols onto a single infrastructure. Other options include Internet Small Computer System Interface (iSCSI), commonly used in small and midsize organizations, and InfiniBand, commonly used in high-performance computing environments.
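To make the iSCSI path concrete, here is a minimal sketch of how a Linux host might discover and log in to a target, driving the open-iscsi `iscsiadm` utility from Python. The portal address and target IQN are placeholders, and the commands assume a Linux initiator with the open-iscsi package installed and root privileges.

```python
# Minimal sketch: discover and log in to an iSCSI target from a Linux
# initiator, using open-iscsi's iscsiadm via subprocess. Requires root
# and the open-iscsi package; the portal IP and IQN are placeholders.
import subprocess

PORTAL = "192.168.10.50"                    # hypothetical array portal
TARGET = "iqn.2001-05.com.example:array01"  # hypothetical target IQN

# SendTargets discovery: ask the portal which targets it exposes.
subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
    check=True,
)

# Log in; the target's LUNs then show up as local block devices.
subprocess.run(
    ["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"],
    check=True,
)
```

After the login, the array's LUN appears to the host as an ordinary local disk, which is exactly the block-level illusion a SAN is meant to provide.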
Vendors offer entry-level and midrange SAN switches for rack settings, as well as high-end enterprise SAN directors for environments that require greater capacity and performance. Key vendors in the enterprise SAN market include Dell EMC, Hewlett Packard Enterprise, Hitachi, IBM, NetApp, and Pure Storage.
“A SAN consists of two tiers: The first tier — the storage-plumbing tier — provides connectivity between nodes in a network and transports device-oriented commands and status. At least one storage node must be connected to this network. The second tier — the software tier — uses software to provide value-added services that operate over the first tier,” says research firm Gartner in its definition of SAN.
How is NAS different from a SAN?
SAN and network-attached storage (NAS) are both network-based storage solutions. A SAN typically uses Fibre Channel connectivity, while NAS typically ties into the network through a standard Ethernet connection. A SAN stores data at the block level, while NAS accesses data as files. To a client OS, a SAN typically appears as a disk and exists as its own separate network of storage devices, while NAS appears as a file server.
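The distinction is easiest to see from the client side. In the sketch below, the device and mount paths are placeholders (/dev/sdb standing in for a SAN-presented LUN, /mnt/nas for an NFS or SMB mount): the SAN LUN is addressed by byte offset like a local disk, while the NAS share is simply a mounted path holding named files.

```python
# Client-side contrast between SAN (block) and NAS (file) access.
# Paths are placeholders: /dev/sdb stands in for a SAN-presented LUN,
# /mnt/nas for an NFS or SMB mount. Raw device reads require root.
import os

# SAN: the host sees a raw disk and addresses it by byte offset,
# typically formatting it with its own filesystem.
fd = os.open("/dev/sdb", os.O_RDONLY)
first_sector = os.pread(fd, 512, 0)  # read the first 512-byte sector
os.close(fd)
print("first sector:", first_sector[:16].hex())

# NAS: the storage system owns the filesystem; the client just
# opens named files over the network.
with open("/mnt/nas/report.txt") as f:
    print(f.read())
```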
SAN is associated with structured workloads such as databases, while NAS is generally associated with unstructured data such as video and medical images. “Most organizations have both NAS and SAN deployed in some capacity, and often the decision is based on the workload or application,” Sinclair says.
What is unified storage?
Unified storage – also known as multiprotocol storage – grew out of the desire to stop procuring SAN and NAS as two separate storage platforms and instead combine block and file storage in one system. With unified storage, a single system can support Fibre Channel and iSCSI block storage as well as file protocols such as NFS and SMB. NetApp is generally credited with the development of unified storage, though many vendors offer multiprotocol options.
Today, the majority of midrange enterprise storage arrays tend to be multiprotocol, Sinclair says. “Instead of buying a box for SAN storage and a box for NAS storage, you can buy one box that supports all four protocols – it could be Fibre Channel, iSCSI, SMB, NFS, whatever you want,” he says. “The same physical storage can be allocated to either NAS or SAN.”
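One rough way to observe multiprotocol support in practice is that iSCSI, NFS, and SMB each listen on a well-known TCP port (3260, 2049, and 445, respectively), so a single unified array can answer on all three. The Python sketch below probes those ports on a hypothetical array address; Fibre Channel rides its own fabric rather than TCP/IP, so it won't appear here, and an open port only proves reachability, not that the protocol is configured.

```python
# Probe the well-known TCP ports a unified (multiprotocol) array can
# expose. The address is hypothetical; a closed port simply means the
# protocol isn't enabled there, and Fibre Channel won't appear at all.
import socket

ARRAY = "192.168.10.50"  # hypothetical unified-storage array
PORTS = {"iSCSI": 3260, "NFS": 2049, "SMB": 445}

for proto, port in PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(2)
        reachable = sock.connect_ex((ARRAY, port)) == 0
    print(f"{proto:>5} (tcp/{port}): {'reachable' if reachable else 'closed'}")
```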
What’s new with enterprise SANs?
Storage vendors continue to add features to improve scalability, manageability and efficiency. On the performance front, a key innovation is flash storage. Vendors offer hybrid arrays that combine spinning disks with flash drives, as well as all-flash SANs.
In the enterprise storage world, flash so far is making greater inroads in SAN environments because the structured data workloads in a SAN tend to be smaller and easier to migrate than massive unstructured NAS deployments. Flash is impacting both SAN and NAS environments, “but it’s predominantly on the SAN side first, and then it’s working its way to the NAS side,” Sinclair says.
Artificial intelligence is also influencing SAN product development. Vendors are looking to ease management by building artificial intelligence for IT operations (AIOps) capabilities into their monitoring and support toolsets. AIOps uses machine learning and analytics to help enterprises monitor system logs, streamline storage provisioning, troubleshoot congestion, and optimize workload performance, for example.
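As a toy illustration of the kind of analytics underneath AIOps (not any vendor's actual implementation), the sketch below uses a simple z-score against a recent latency baseline to flag outliers, the sort of early congestion signal an AIOps toolset might surface. All of the sample values are invented.

```python
# Toy AIOps-style anomaly check: flag I/O-latency samples that sit far
# outside a recent baseline. Real AIOps tooling uses far richer models;
# every number here is invented for illustration.
from statistics import mean, stdev

baseline_ms = [1.1, 1.3, 1.2, 1.0, 1.4, 1.2, 1.3, 1.1]  # healthy window
new_samples = [1.2, 9.8, 1.3, 11.2]                      # incoming metrics

center = mean(baseline_ms)
spread = stdev(baseline_ms)

for sample in new_samples:
    z = (sample - center) / spread
    if z > 3.0:  # well beyond normal variation
        print(f"{sample} ms looks anomalous (z = {z:.1f})")
```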
In its most recent Magic Quadrant for Primary Storage, Gartner includes AIOps features among the key storage capabilities to consider when choosing a platform for structured data workloads. AIOps capabilities can target operational needs, “such as cost optimization and capacity management, proactive support, workload simulation and placement, forecast growth rates, and/or asset management strategies,” Gartner writes.
Impact of hyperconverged infrastructure
While converged arrays and appliances blurred the lines between SAN and NAS, hyperconverged infrastructure (HCI) took the consolidation of storage options even further.
HCI combines storage, computing and networking into a single system in an effort to reduce data center complexity and increase scalability. Hyperconverged platforms include a hypervisor for virtualized computing, software-defined storage, and virtualized networking, and they typically run on standard, off-the-shelf servers.
HCI can contain any type of storage – block, object, and file storage can be combined in a single platform, and multiple nodes can be clustered to create pools of shared storage capacity. The benefits of shared storage are resonating with enterprises, particularly as many modern applications rely on file and object storage, and the growth of unstructured data continues to outpace the growth of structured data. HCI isn’t a replacement for all SAN deployments, but enterprises may opt for HCI depending on the cost, scalability and performance requirements of certain workloads.
Consumption-based IT is a growing trend
Another trend impacting the evolution of traditional SAN storage is the movement toward consumption-based IT. Pay-per-use hardware models are designed to deliver cloud-like pricing structures for on-premises infrastructure. Hardware is deployed on site, and it’s essentially rented from vendors via a variable monthly subscription that’s based on hardware utilization.
Enterprises are looking for alternatives to buying equipment outright, and research firm IDC reports that 61% of enterprises plan to aggressively shift toward paying for infrastructure on a consumption basis. By 2024, half of data-center infrastructure will be consumed as a service, IDC predicts.
Uptake of consumption-based IT is stronger in storage than in compute. Gartner estimates that by 2025, more than 70% of corporate, enterprise-grade storage capacity will be deployed as consumption-based offerings. That’s up significantly from less than 40% in 2021.
Dell’s Apex line and HPE’s GreenLake platform are examples of consumption-based IT, and both include options for procuring storage on a pay-per-use basis. Dell’s Apex Data Storage Services, for example, offer enterprises a choice of three performance tiers of block and file storage. Subscriptions are available in one- or three-year terms, and capacity starts as low as 50 terabytes.
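The billing arithmetic behind such subscriptions is straightforward to model. The sketch below computes a hypothetical monthly invoice from metered capacity against a committed minimum; the rate and usage figures are invented for illustration and do not reflect any vendor's actual pricing.

```python
# Hypothetical pay-per-use storage invoice. The rate, commitment, and
# metered usage are all made up; real programs publish their own rate
# cards, tiers, and minimums.
RATE_PER_TB_MONTH = 25.00  # assumed $/TB-month
COMMITTED_TB = 50          # contractual floor (cf. the 50 TB entry point)

daily_usage_tb = [48, 52, 55, 61, 58, 49, 47]  # metered capacity samples

avg_used = sum(daily_usage_tb) / len(daily_usage_tb)
billable = max(avg_used, COMMITTED_TB)  # pay for use, never below the floor

print(f"average used: {avg_used:.1f} TB")
print(f"billable:     {billable:.1f} TB")
print(f"invoice:      ${billable * RATE_PER_TB_MONTH:,.2f} for the month")
```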