No you couldn't. You can't provide the features of FAST VP, you won't be able to cleanly fail over between storage processors, and you won't be able to provide anywhere near the level of support EMC can. And that's assuming he's only using it as a basic SAN and not doing anything extra like replication.
We have a brand-new VNX5300 waiting to be configured, and I need to plan out the network infrastructure before the EMC tech arrives. It provides 4x 1Gbit iSCSI ports per SP (8 ports in total), and I'd like to get the most out of the performance until we jump over to 10Gbit iSCSI.
From what I can gather from the documentation, the recommendation is to use only two ports per SP, with 1 active and 1 passive. Why is this? It seems kind of pointless to have quad-port I/O modules and then recommend not using more than two of them.
Furthermore - I'm a bit unsure about the zoning. The best practices guide states that you should separate each port on each SP from every other port onto different logical networks. Does this mean that I have to create 4 logical networks to be able to use all 8 ports?
It also gives the following example:
Does this mean that A0 and B0 should sit on the same physical switch as well? Won't this make all traffic go through one switch (if both A1 and B1 are passive)?
Edit: Another brain puzzle
I don't get it - each host (as in server) should not have more iSCSI bandwidth available than the storage processor. Why on earth does this matter? If serverA has 1Gbit and serverB has 100Mbit, then the resulting bandwidth between them is 100Mbit. How can this result in some kind of oversubscription?
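To make the disagreement concrete, here's a minimal Python sketch (my own illustration, not from the EMC docs) of the two readings: a single flow is indeed capped by its slowest hop, but the warning in the docs presumably concerns aggregate demand from many hosts hitting one SP at once. All numbers below are hypothetical.

```python
def path_bandwidth_mbit(*hops_mbit):
    """A single flow can't exceed its slowest link."""
    return min(hops_mbit)

def oversubscription_ratio(host_links_mbit, sp_links_mbit):
    """Aggregate host bandwidth divided by aggregate SP bandwidth."""
    return sum(host_links_mbit) / sum(sp_links_mbit)

# serverA at 1Gbit talking to a 100Mbit target: the flow caps at
# 100Mbit, exactly as argued above.
print(path_bandwidth_mbit(1000, 100))  # 100

# But ten 1Gbit hosts sharing one SP's 4x 1Gbit ports can, in
# aggregate, demand 2.5x what the SP can deliver.
print(oversubscription_ratio([1000] * 10, [1000] * 4))  # 2.5
```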
Edit4: Wait, what. Active and passive ports? The VNX runs in an ALUA configuration with asymmetric active/active. There shouldn't be any passive ports, only preferred ones.
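For reference, a toy sketch of the ALUA distinction being made here - under asymmetric active/active, paths through the non-owning SP are still active, just non-optimized (port names and the owning SP are hypothetical):

```python
def alua_state(port_sp, owning_sp):
    """ALUA: no passive ports, only optimized vs non-optimized paths."""
    if port_sp == owning_sp:
        return "active/optimized"
    return "active/non-optimized"

# A LUN owned by SP A, seen through ports on both SPs:
for port in ("A0", "A1", "B0", "B1"):
    print(port, alua_state(port[0], "A"))
```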
pauska
3 Answers
What EMC's documentation seems to be describing is having two separate IP broadcast domains - two separate fabrics on different hardware - so that a misconfiguration in a given switch, or a switching loop, or some such doesn't bring down all storage connectivity.
Along these lines:
I personally think it's a little nuts to keep creating additional fabrics for each port per SP, though - I'd say just split them up evenly among the storage fabrics; SP A's other two ports would be 10.168.10.9 for the one connected to fabric 1, and 10.168.11.9 for the one plugged into fabric 2.
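As a sketch of that even split - the 10.168.10.9 / 10.168.11.9 addresses come from the paragraph above; the alternating assignment rule is my hypothetical reading of it:

```python
def assign_to_fabrics(ports):
    """Alternate an SP's ports between fabric 1 and fabric 2."""
    return {port: 1 + i % 2 for i, port in enumerate(ports)}

layout = assign_to_fabrics(["A0", "A1", "A2", "A3"])
# Two ports land on each fabric, so SP A's "other two" ports (A2, A3)
# end up on fabric 1 (10.168.10.x) and fabric 2 (10.168.11.x).
print(layout)  # {'A0': 1, 'A1': 2, 'A2': 1, 'A3': 2}
```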
The client's multipathing should be what handles all the load balancing and failover. And how the heck are you supposed to put a client with two HBAs into 4 VLANs, anyway? Clients can handle two targets visible from a given initiator just fine.
(no idea on the 'oversubscription' issue.)
Shane Madden♦
No, no. You want all 8 ports on the same subnet. You never want iSCSI traffic to cross subnets - it'll just slow down while going through routers. You want both SPs connected to each switch: E0 and E2 should be connected to Switch0, and E1 and E3 should be connected to Switch1.
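A sketch of that cabling rule, assuming ports E0-E3 on each SP (switch names as given above, everything else hypothetical):

```python
def switch_for(port_index):
    """Even-numbered ports go to Switch0, odd-numbered to Switch1."""
    return "Switch0" if port_index % 2 == 0 else "Switch1"

cabling = {f"SP{sp}-E{i}": switch_for(i) for sp in "AB" for i in range(4)}
# Each switch ends up with two ports from each SP, so losing one
# switch still leaves paths to both storage processors.
print(cabling["SPA-E0"], cabling["SPA-E1"])  # Switch0 Switch1
```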
Not really sure what you're seeing in the Edit2 screenshot that makes you want to assault your salesperson with a watermelon (can I watch while you do that)? Slots and ports are different things.
You'll want to set up PowerPath (a software package you purchase) so that you get the best possible MPIO setup.
mrdenny
It sounds like the best solution would be two separate networks, à la fibre channel, that don't route to each other. Use all the ports - maybe 2 active and 2 passive if that's a config requirement, otherwise all active via ALUA if possible.
Sven♦
MateLover