Storage networking is a tricky animal... My brush with networked storage platforms started when we needed a few hundred megabytes of shared storage to build a cluster for database and email consolidation in the late 90's.
The essential character of block-based network storage continues: the goal is to protect and consolidate mission-critical workloads.
Networked storage built on traditional FC SANs is getting more complicated in the quest for speed and functionality.
In my view, there is a need to simplify networked storage to reduce risk, shorten mean time to repair and cut costs.
In my experience, more complicated SAN subsystems and network elements are harder to understand, harder to troubleshoot and more expensive to deploy.
Personally, I have moved away from SAN implementations with FC front-end networks after attempting to use FC-to-iSCSI routers and putting up with the complexity of the network layout and the provisioning challenges.
The approach I took was to select medium-to-high performance native iSCSI storage arrays with FC-disk back-end networks, and to deploy the front-end network (for connecting to servers) on redundant, layer-3-capable Gigabit Ethernet switches to mimic an FC fabric for multipathing and fault tolerance.
This allowed me to maintain the essential performance posture (with FC disk back ends) while keeping the front-end simplicity of iSCSI networking.
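As a rough sketch of what that front end looks like from the host side, the snippet below (a Linux example using open-iscsi and dm-multipath; the target IQN, portal addresses and switch assignment are hypothetical) logs the initiator into the same target through one portal per redundant Gigabit Ethernet switch, so the operating system's multipath layer sees two independent paths.

#!/usr/bin/env python
"""Sketch: open two iSCSI sessions to one target, one per redundant GigE
switch, and let dm-multipath aggregate them. Assumes a Linux host with
open-iscsi and multipath-tools installed; all names are examples."""
import subprocess

TARGET = "iqn.2001-05.com.example:array01"    # hypothetical target IQN
PORTALS = ["10.10.1.20:3260",                 # portal reached via switch A
           "10.10.2.20:3260"]                 # portal reached via switch B

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

for portal in PORTALS:
    # Discover the target records published behind this portal...
    run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal])
    # ...and open a session through it, giving one path per switch.
    run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", portal, "--login"])

# Show the aggregated multipath device with both paths listed.
run(["multipath", "-ll"])

With both sessions up, losing either switch or NIC leaves the LUN reachable over the surviving path, which is the behaviour a dual-fabric FC SAN is normally deployed to provide.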
There is still a question of performance in activities like synchronous replication, where FC-SAN technologies are superior, but these needs are becoming less acute.
Applications like MS Exchange support varied data replication topologies (Cluster Continuous Replication - CCR, Local Continuous Replication - LCR, Standby Continuous Replication - SCR).
Databases (Oracle, MS SQL, MySQL, Postgres) support high availability using grid/federation, mirroring and master/slave models.
Advances in storage awareness in modern operating systems (Windows, Linux and Solaris) provide native multipathing for iSCSI.
Support for features like VSS and VDS on Windows, GFS on Linux and ZFS on Solaris allows for less stringent, application-aware, iSCSI-friendly asynchronous replication between storage systems.
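As one concrete illustration, ZFS on Solaris reduces a basic asynchronous replication step to snapshotting the source filesystem and streaming that snapshot to a remote receiver. The sketch below is only a minimal example of the idea; the pool, filesystem and host names are hypothetical.

#!/usr/bin/env python
"""Sketch: one asynchronous ZFS replication pass -- snapshot the source
filesystem and stream the snapshot to a standby host over ssh.
Pool, filesystem, host and snapshot names are hypothetical examples."""
import subprocess
import time

SRC_FS   = "tank/mail"                # local ZFS filesystem to protect
DST_HOST = "standby.example.com"      # remote host receiving the replica
DST_FS   = "tank/mail"                # destination filesystem on the remote pool
snap = "%s@repl-%d" % (SRC_FS, int(time.time()))

# Take a point-in-time snapshot on the source side.
subprocess.check_call(["zfs", "snapshot", snap])

# Stream the snapshot to the standby host; -F lets the receiver discard
# local changes so the incoming stream can be applied cleanly.
send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
subprocess.check_call(["ssh", DST_HOST, "zfs", "receive", "-F", DST_FS],
                      stdin=send.stdout)
send.stdout.close()
send.wait()

After the first full stream, incremental sends (zfs send -i previous-snapshot current-snapshot) keep each replication pass small, and the whole cycle can be driven from a scheduler to match the recovery-point objective the application needs.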
Another major challenge is the emergence of virtual machines and their impact on the performance and availability of networked storage.
FC storage has an inherent disadvantage in VM-based environments due to the multiple layers of device drivers and the in-memory indirection of I/O calls.
The practical impact on processor usage in hypervisor-based VM setups using FC SANs is evident, owing to the lack of transparent, TOE-style offload solutions for FC.
The ability of a virtual machine to use raw Ethernet adapters with TOE-capable drivers results in little or no loss of storage I/O performance, and practically zero impact on processor performance from the use of networked storage.
In conclusion, with the emergence of relatively inexpensive 10 GigE switch and HBA solutions and the continued sophistication of operating systems and applications, the time is right to start adopting iSCSI for your mission-critical storage networking needs.