A Deeper Look into the Inner Workings and Hidden Mechanisms of FICON Performance
• David Lytle, BCAF • Brocade Communications Inc. • Thursday February 7, 2013 -- 9:30am to 10:30am
• Session Number - 13009
© 2012-2013 Brocade - For San Francisco Spring SHARE 2013 Attendees 1
Legal Disclaimer • All or some of the products detailed in this presentation may still be under
development and certain specifications, including but not limited to, release dates, prices, and product features, may change. The products may not function as intended and a production version of the products may never be released. Even if a production version is released, it may be materially different from the pre-release version discussed in this presentation.
• NOTHING IN THIS PRESENTATION SHALL BE DEEMED TO CREATE A WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, STATUTORY OR OTHERWISE, INCLUDING BUT NOT LIMITED TO, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NONINFRINGEMENT OF THIRD-PARTY RIGHTS WITH RESPECT TO ANY PRODUCTS AND SERVICES REFERENCED HEREIN.
• Brocade, Fabric OS, File Lifecycle Manager, MyView, and StorageX are registered trademarks and the Brocade B-wing symbol, DCX, and SAN Health are trademarks of Brocade Communications Systems, Inc. or its subsidiaries, in the United States and/or in other countries. All other brands, products, or service names are or may be trademarks or service marks of, and are used to identify, products or services of their respective owners.
• There are slides in this presentation that use IBM graphics.
Notes as part of the online handouts: I have saved the PDF files for my presentations in such a way that all of the audience notes are available as you read the PDF file that you download. If there is a little balloon icon in the upper left-hand corner of a slide, hover your cursor over the balloon and you will see the notes I have made about that slide. These usually give you more information than the slide alone contains. I hope this helps in your educational efforts!
A deeper look into the Inner Workings and Hidden Mechanisms of FICON Performance
This technical session goes into a fairly deep discussion of some of the design considerations of a FICON infrastructure. • Among the topics this session will focus on are:
• Congestion and Backpressure in FC fabrics
• How Buffer Credits get initialized
• How FICON utilizes buffer credits
• Oversubscription and Slow Draining Devices
• Determining Buffer Credit Requirements
• FICON RMF Reporting
NOTE: Please check for the most recent copy of this presentation at the SHARE website as I make frequent updates.
This Section • Congestion and Backpressure Overview
Road can handle up to 800 cars/trucks per minute at the traffic speed limit
Congestion and Backpressure Overview
• Congestion occurs at the point of restriction
• Backpressure is the effect felt by the environment leading up to the point of restriction
• These two conditions are not the same thing
I will use an Interstate highway example to demonstrate these concepts
Congestion and Backpressure Overview
• No Congestion and No Backpressure: the highway handles up to 800 cars/trucks per minute and fewer than 800 cars/trucks per minute are arriving
• Time spent in queue (behind slower traffic) is minimal
• Cut-through routing (zipping along from point A to point B) works well
Road can handle up to 800 cars/trucks per minute so the traffic can run up to the speed limit
No Congestion and No Backpressure
10am – 3pm
• Congestion: the highway handles up to 800 cars/trucks per minute but more than 800 cars/trucks per minute are arriving
• Latency time and buffer credit space consumed both increase
• Cut-through routing cannot decrease the problem
• Backpressure is experienced by cars slowing down and queuing up
Congestion and Backpressure Overview
Congestion and Backpressure
Road can only handle up to 800 cars/trucks per minute, so traffic speed is reduced (congested)
Road can handle up to 800 cars/trucks per minute, so the traffic is running at the speed limit
3pm – 6pm
This Section
• Very basic flow for the Build Fabric process and how Buffer Credits get initialized
Build Fabric Process
Assume:
• A fiber cable will be attached between switch A and switch B
• This will create an ISL (E_Port) between these two devices
Example Topology
Switch A – Domain ID: 1
Switch B – Domain ID: 2
Fiber cable
Build Fabric Process
• Cable connected
• Link Speed Auto-Negotiation
• Link is now in an Active state
• One credit is granted by default to allow the port logins to occur
• Exchange Link Parms (ELP) contains the "requested" buffer credit information for the sender – assume 8 credits are being granted for this example
• Responder Accepts – then does its own ELP, which contains the "requested" buffer credit information for the responder – assume 8 credits are being granted for this example
• Link becomes an E_Port
• Send Link Resets (LR) – initializes Sender credit values
• Link Reset Response (LRR) – initializes Responder credit values
Link Initialization
[Diagram: link initialization between Switch A (Domain ID: 1) and Switch B (Domain ID: 2) – CABLE CONNECTED, Speed Negotiation, Active State with 1 credit each, ELP / Accept granting 8 credits in each direction, link becomes an E_Port, LR / LRR exchanges, then IDLE]
• Ready for I/O to start flowing
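The link-initialization sequence above can be sketched as a simple walkthrough. This is an illustrative model only (not Brocade code): the function name, the step strings, and the assumption that each side's Tx credits end up equal to what the far side granted are mine, following the slide's 8-credit example.

```python
# Illustrative sketch of the Build Fabric link-initialization steps:
# one default credit so port logins can occur, ELP/Accept advertising
# each side's requested credits, then LR/LRR loading the final values.

def initialize_link(requested_a=8, requested_b=8):
    """Walk the link-init steps; return each side's final Tx credits plus a log."""
    log = ["CABLE CONNECTED", "Speed Negotiation", "Active State (1 credit)"]
    credits_a = credits_b = 1            # default credit for the port logins

    log.append(f"ELP: A requests {requested_a} credits")
    log.append(f"Accept + ELP: B requests {requested_b} credits")
    log.append("Link becomes an E_Port")

    log.append("LR -> initializes sender credit values")
    credits_a = requested_b              # A may have as many frames in flight as B can buffer
    log.append("LRR -> initializes responder credit values")
    credits_b = requested_a

    log.append("IDLE - ready for I/O")
    return credits_a, credits_b, log

a, b, steps = initialize_link()
print(a, b)   # both sides end with 8 Tx credits in this example
```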
This Section • How FICON uses Buffer-to-Buffer Credits
• Data Encoding
• Forward Error Correction (FEC)
How Buffer Credits Work
FICON X8/8S CHPID
Brocade Dir. Port
A Fibre Channel link is a PAIR of paths: a path from "this" transmitter to the "other" receiver, and a path from the "other" transmitter to "this" receiver.
The "buffer" resides on each receiver, and that receiver tells the linked transmitter how many BB_Credits are available.
Sending a frame through the transmitter decrements the B2B Credit Counter.
Receiving an R_RDY or VC_RDY through the receiver increments the B2B Credit Counter.
The DCX family has a buffer credit recovery capability.
[Diagram: a FICON Express8/8S CHPID and a Brocade Director port joined by a fiber cable, transmit/receive in each direction; the CHPID's receiver states "I have 40 buffer credits" and the Director port's receiver states "I have 8 buffer credits", each side keeping its own B2B Credit Count and BC Pool]
Each receiver on the fiber cable can state a different value! Once established, it is transmit (write) connections that will typically run out of buffer credits.
FICON Express CHPID buffer credits (fixed per CHPID):
• Express = fixed 64 BC • Express2 = fixed 107 BC • Express4 = fixed 200 BC • Express8 = fixed 40 BC • Express8S = fixed 40 BC
Switches have variable BCs – DASD has fixed BCs – old tape had variable BCs
Buffer-to-Buffer Credits
• After initialization, each port knows how many buffers are available in the queue at the other end of the link. This value is known as Transmit (Tx) Credit.
Buffer-to-Buffer flow control
[Diagram: Buffer-to-Buffer flow control – an 8Gbps CHPID with 40 BCs connected to a switch F_Port with 8 BCs; frames flow one way and R_RDYs return over the FICON fiber cable. The F_Port's BCs are held as Tx Credit in the CHPID, and the N_Port CHPID's BCs are held in the F_Port]
Buffer-to-Buffer Credits
• Tx Credit is decremented by one for every frame sent from the CHPID
Buffer-to-Buffer flow control Example
[Diagram: the same CHPID-to-F_Port link; as frames 1 through 8 are sent, the CHPID's Tx Credit counts down 8, 7, 6 … 0]
• No frames may be transmitted after Tx Credit reaches zero
• Tx Credit is incremented by one for each R_RDY received from the F_Port
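The decrement-on-send / increment-on-R_RDY mechanics above can be modeled in a few lines. A minimal sketch, not Brocade code – the class name and the 8-credit value (mirroring the F_Port in the example) are illustrative.

```python
# Minimal model of buffer-to-buffer flow control: each transmitted
# frame decrements Tx Credit, each R_RDY received increments it, and
# transmission stalls when Tx Credit reaches zero.

class BBCreditPort:
    def __init__(self, tx_credit):
        self.tx_credit = tx_credit       # advertised by the far receiver

    def send_frame(self):
        if self.tx_credit == 0:
            return False                 # no credits -> no frames may be sent
        self.tx_credit -= 1
        return True

    def receive_r_rdy(self):
        self.tx_credit += 1              # a far-end buffer was freed

chpid = BBCreditPort(tx_credit=8)
sent = sum(chpid.send_frame() for _ in range(10))   # tries 10, only 8 go out
chpid.receive_r_rdy()                                # one R_RDY frees one slot
print(sent, chpid.tx_credit)                         # -> 8 1
```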
BB Credit Droop
[Diagram: BB Credit droop – R_RDY acknowledgements do not use BCs. At the same distance, an 8G link with ample BB Credits streams frames continuously; a 2G link needs 4x fewer BB Credits and is still ample; an 8G link with not enough BB Credits goes idle waiting for R_RDYs]
Buffer Credits Working Ideally
[Diagram: frames flow toward a disk shelf capable of sustaining 800MB/sec and a tape drive capable of sustaining 160MB/sec uncompressed; as each frame is sent, the available BB_Credits count down from 16/16 (15/16, 14/16, …) and each returned BB_Credit token counts them back up]
Buffer Credits are a "Flow Control" mechanism to assure that frames are sent correctly (engineers call this "Frame Pacing"). In an ideal FC network, all devices can process frames at the same rate and negotiate equal levels of BB_Credits.
Data Encoding – Patented by IBM: 8b/10b compared to 64b/66b
[Diagram: FC frame – S-of-F, payload area of up to 2112 bytes of data, E-of-F]
8b/10b (in use since the 1950s but patented in 1983):
• 1/2/4 and 8Gbps will always use 8b/10b data encoding
• 10Gbps and 16Gbps will always use 64b/66b data encoding
• Each 8-bit byte becomes a 10-bit byte – 20% overhead
64b/66b (available since 2003):
• Two check bits are added after every 8 bytes – 3% overhead
• At the end of every 32 eight-byte groups, 32 hi-order check bits have been collected
• This forms a 32-bit check sum that enables Forward Error Correction to clean up links
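The overhead figures quoted above are easy to verify. A back-of-the-envelope sketch (my own arithmetic, using the bit ratios from the slide):

```python
# Encoding overhead check: 8b/10b sends 10 line bits per 8-bit byte;
# 64b/66b sends 66 line bits per 64-bit (8-byte) block.

overhead_8b10b = (10 - 8) / 10        # 20% of the line rate is encoding
overhead_64b66b = (66 - 64) / 66      # ~3% of the line rate

print(f"8b/10b:  {overhead_8b10b:.0%} overhead")
print(f"64b/66b: {overhead_64b66b:.1%} overhead")

# Line bits needed to carry a full 2112-byte payload area:
bits_on_wire_8b10b = 2112 * 10           # 10 line bits per byte
bits_on_wire_64b66b = (2112 // 8) * 66   # 66 line bits per 8-byte block
print(bits_on_wire_8b10b, bits_on_wire_64b66b)   # 21120 vs 17424
```

This is why a 10G or 16G link delivers proportionally more payload per line bit than an 8G link running 8b/10b.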
FICON Cascading: Forward Error Correction (FEC) and Proactive Transmission Error Correction
• ASIC-based functionality that is enabled by default on Brocade 16G ports and allows them to fix bit errors in a 10G or 16Gbps data stream
• FEC is a standard 16Gbps mechanism
• Enabled via the 64b/66b data encoding process
• Available on Brocade 16Gbps Gen5 switches
• Works on Frames and on Primitives
• Corrects up to 11 bit errors per each 2,112 bits in a payload transmission (11 bit corrections per 264 bytes of payload)
• Used on E_Ports that are 16G-to-16G ASIC connections – not for F_Ports or N_Ports
• Does slightly increase frame latency, by ~400 nanoseconds per frame
• Significantly enhances the reliability of frame transmissions across an I/O network
Some of my favorite photos In Technical Sessions, Your Brain Should Be Allowed To Take A Break!
Brain Interlude Is Over….
Back to Work!
Loch Lomond, Scotland, from Loch Lomond Castle
Big Sur - California
Alesund, Norway
Bryce Canyon, USA
This Section
• ISL Oversubscription
NOTE: There is also fabric oversubscription and link oversubscription. In this session I think that ISL oversubscription will best demonstrate how serious a concern oversubscription really can be to the enterprise.
ISL Oversubscription – Design Architecture
<=50% of Bandwidth
<=50% of Bandwidth
But each fabric really needs to run at no more than 45% busy so that if a failover occurs, the remaining fabric can pick up and handle the full workload.
Provides a potential five-9s of availability environment.
z/OS's IOS automatically load balances the FICON I/O across all of the paths in a Path Group (up to 8 channels in a PG).
Path Group
• Although this potentially provides a five-9s environment, when a fabric does fail, how devastating will it be to you if another event occurs and the remaining fabric also fails?
• Especially when humans get involved, multiple failures too often do occur!
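The 45% rule above generalizes: with N fabrics each loaded to utilization u, losing one pushes the survivors to u x N / (N - 1). A hedged sketch of that arithmetic (the function name and even-spread assumption are mine, not from the slides):

```python
# Failover-headroom rule of thumb: utilization on each surviving
# fabric after one of n_fabrics fails, assuming the failed fabric's
# traffic spreads evenly across the survivors.

def post_failover_utilization(n_fabrics, utilization):
    return utilization * n_fabrics / (n_fabrics - 1)

print(post_failover_utilization(2, 0.45))   # 0.9  - two fabrics at 45% busy
print(post_failover_utilization(4, 0.25))   # ~0.33 - four fabrics at 25% busy
```

This is why a redundant pair must be capped near 45% busy, while four or eight fabrics leave far more headroom.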
ISL Oversubscription – Design Architecture
A little consideration needs to be given to how busy all of the fabrics are, but the remaining fabrics should easily pick up and handle the full workload.
Each set <=25% of Bandwidth
Provides a five-9s of availability environment
Path Group
Having 4 fabrics servicing a Path Group protects your enterprise's I/O availability!
ISL Oversubscription – Design Architecture
Not much consideration needs to be given to how busy all of the fabrics are, as the remaining fabrics can easily pick up and handle the full workload.
Each set <=12.5% of Bandwidth
Provides a five-9s of availability environment
Path Group
Large, bandwidth-sensitive and failure-averse mainframe customers might benefit from configurations like this!
Having 8 fabrics servicing a Path Group completely protects your enterprise's I/O availability!
ISL Oversubscription – Design Architecture
<=25% of Bandwidth
<=25% of Bandwidth
• Risk of Loss of Bandwidth is the motivator for deploying FICON fabrics like this
• In this case, 2 paths from an 8 path Path Group are deployed across four FICON fabrics to limit bandwidth loss to no more than 25% if a FICON fabric were to fail
• Each fabric needs to run at no more than ~50-60% busy so that if a failover occurs, the remaining fabrics can pick up and handle the full workload without over-utilization and with some extra utilization to spare per fabric
• If an ISL link in a single fabric fails then that fabric runs at only 50% capability
Path Group
<=50% per ISL link
X
<=25% of Bandwidth
ISL Oversubscription – After a Failure: Demand May Exceed Capacity
• In this case, if a switching device fails …or… if the long distance links in a fabric fail, then the frame traffic that was traveling across the now-broken links will be rerouted through the other fabrics to reach the storage device
• Those remaining fabrics must have enough reserve capacity in order to pick up all of the rerouted traffic while maintaining performance
• Congestion and potential back pressure could occur if all fabrics are running at high utilization levels – again, probably above 50% or 60% utilization
• Customers should manage their fabrics to allow for rerouted traffic
Path Group
<=33.3% of Bandwidth
<=33.3% of Bandwidth
<=33.3% of Bandwidth
ISL Oversubscription Creating Backpressure Problems on an ISL
• In This Example:
• Each 8G CHPID / ISL is capable of 760MBps send/receive (800 * .95 = 760)
• Two CHPIDs per mainframe (1520MBps) and 4 mainframes (6080MBps)
• About 42% of I/O activity is across the ISLs and requires 2550MBps
• Four ISLs provide 3040MBps – (760MBps * 4) – and a little extra
[Diagram: four mainframes with 8G CHPIDs cascading across four 8G ISLs; one ISL marked failed (X); backpressure arrows point back toward the CHPIDs]
After failure of an ISL link, demand may exceed capacity:
• One ISL fails, leaving only 2280MBps – (760MBps * 3) – not enough redundancy
• 2280MBps / 2550MBps = 89% of what is required (congestion will occur)
• Each CHPID experiences backpressure as the remaining 3 ISLs become congested and unable to handle all of the I/O traffic
Remote Data Replication
• Ingress and egress buffer credits can get consumed
• This now affects all traffic across the ISL links
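The slide's arithmetic can be checked in a few lines. This just replays the slide's own numbers; the 0.95 factor is the slide's allowance for protocol overhead on an 8G link, and the 42% ISL share is the slide's stated workload mix:

```python
# ISL oversubscription example, in code.

link_MBps = 800 * 0.95            # usable MBps per 8G CHPID / ISL -> 760
per_mainframe = 2 * link_MBps     # two CHPIDs per mainframe -> 1520
total_demand = 4 * per_mainframe  # four mainframes -> 6080
isl_demand = total_demand * 0.42  # ~42% of I/O crosses the ISLs -> ~2554

capacity_4_isls = 4 * link_MBps   # 3040 MBps: enough, with a little extra
capacity_3_isls = 3 * link_MBps   # 2280 MBps after one ISL fails

print(capacity_3_isls / isl_demand)   # ~0.89 -> congestion will occur
```

With only 89% of the required ISL bandwidth remaining, the queuing and backpressure described above are unavoidable.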
This Section • Slow Draining Devices
Slow Draining Devices
• Slow draining devices are receiving ports that have more data flowing to them than they can consume.
• This causes external frame flow mechanisms to back up their frame queues and potentially deplete their buffer credits.
• A slow draining device can exist at any link utilization level.
• It's very important to note that the condition can spread into the fabric and can slow down unrelated flows in the fabric.
• What causes slow draining devices?
• The most common cause is within the storage device or the server itself. That often happens because a target port has a slower link rate than the I/O source port(s), or the Fan-In from the rest of the environment overwhelms the target port.
This is potentially a very poorly performing infrastructure!
DASD is about 90% read, 10% write. So, in this case the "drain" of the pipe is the 4Gb CHPID and the "source" of the pipe is the 8Gb storage port.
The Source can outperform the Drain!
This can cause congestion and backpressure toward the CHPID. The switch port leading to the CHPID becomes a slow draining device.
8G Source – 4G Drain
Slow Draining Devices – DASD (Revisited from Session 13010) (Adding 8G DASD with only 4G CHPIDs)
This is a simple representation of a single CHPID connection. Of course that won't be true in a real configuration, and the results could be much worse and more dire for configuration performance.
Source Drain
The Effects of Link Rates (watch out for ISLs!)
Once an ISL begins ingress port queuing then quite a bit more backpressure can build up in the fabric because of multi-exchange congestion building up on the ISL link(s)
[Diagram: an 8G I/O exchange starts, crosses an 8Gbps ISL, and drains onto 4Gbps links; queues (Q) build at the ingress ports ahead of the slower links – "The Bottleneck Starts Here!"]
• Buffer Credit queuing will create backpressure on the channel paths.
• Now a 4G I/O Exchange: data is being sent faster than it can be received, so after the first few frames are received, queuing begins.
Slow Draining Devices – Real Tape
For 4G tape this is OK – tape is about 90% write and 10% read on average.
• The maximum bandwidth a tape can accept and compress (@ 2:1 compression) is about 360MBps for Oracle (T1000C) and about 375MBps for IBM (TS1140)
• A FICON Express8S CHPID in Command Mode FICON can do about 620MBps
• A 4G tape channel can carry about 380MBps (400 * .95 = 380MBps)
• So a single CHPID attached to a 4G tape interface:
Can run a single IBM tape drive (620 / 375 = 1.65)
Can run a single Oracle (STK) tape drive (620 / 360 = 1.72)
Drain Source (~375MBps max bandwidth)
BUT… if 8G tape runs at 375MBps (2X), it will take 750MBps to feed it, and the best 8G CHPID does <=620MBps
[Diagram labels: 620 MBps CHPID; 360-375 MBps tape at 2:1 compression]
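The drives-per-CHPID arithmetic above, replayed in code. All figures come straight from the slide (the variable names are mine):

```python
# Tape bandwidth check: how many drives can one CHPID feed?

chpid_MBps = 620                  # FICON Express8S in Command Mode FICON
tape_4g_link = 400 * 0.95         # 380 MBps usable on a 4G tape channel
ibm_ts1140 = 375                  # max MBps at 2:1 compression
oracle_t1000c = 360

print(chpid_MBps / ibm_ts1140)    # ~1.65 -> one IBM drive per CHPID
print(chpid_MBps / oracle_t1000c) # ~1.72 -> one Oracle drive per CHPID

# An 8G tape interface running at 2x (750 MBps in) would outrun even
# the best 8G CHPID, making the CHPID the slow-drain bottleneck:
print(2 * ibm_ts1140 > chpid_MBps)   # True
```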
This Section
• Determining Buffer Credits Required
• RMF Reports for Switched-FICON
• Brocade’s Buffer Credit Calculation Spreadsheet
Buffer Credits: Why FICON Never Averages a Full Frame Size
• There are three things required to determine the number of buffer credits needed across a long distance link:
• The speed of the link
• The cable distance of the link
• The average frame size
• Average frame size is the hardest to obtain
• Use the RMF 74-7 records report "FICON Director Activity Report"
• You will find that FICON just never averages full frame size
• Below is a simple FICON 4K write that demonstrates average frame size
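The three inputs above combine into a common rule-of-thumb estimate: enough credits to cover the round-trip time at the link's serialization rate. This is a hedged sketch, not Brocade's Buffer Credit Calculator – the ~5 microseconds/km propagation figure and the per-speed data rates are standard approximations, and the function name is mine:

```python
# Rough buffer-credit estimator for a long-distance link: credits
# needed = round-trip time / serialization time of an average frame.

import math

DATA_RATE_MBps = {2: 200, 4: 400, 8: 800, 16: 1600}   # usable MBps per FC speed

def required_bb_credits(speed_gbps, distance_km, avg_frame_bytes):
    round_trip_us = 2 * distance_km * 5.0             # ~5 us/km each way in fiber
    frame_time_us = avg_frame_bytes / DATA_RATE_MBps[speed_gbps]
    return math.ceil(round_trip_us / frame_time_us)

# Smaller average frames need far more credits at the same distance:
print(required_bb_credits(8, 10, 2048))   # 40 credits with near-full frames
print(required_bb_credits(8, 10, 512))    # 157 credits with small frames
```

This illustrates the slide's point: because FICON never averages full frame size, a link needs considerably more credits than a full-frame calculation would suggest.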
Buffer Credits for Long Distance The Impact of Average Frame Size on Buffer Credits
Created by using Brocade's Buffer Credit Calculator
Buffer Credit Starvation Why not just saturate each port with BCs?
• If a malfunction occurs in the fabric… or…
• If a CHPID or device is having a problem…
• It is certainly possible that some or all of the I/O will time out
• If ANY I/O does time out, then:
• All frames and buffers for that I/O (buffer credits) must be discarded
• All frames and buffers for subsequently issued I/Os (frames and buffer credits) in that exchange must be discarded – remember, queued I/O will often drive exchanges ahead of time
• The failing I/O must be re-driven
• Subsequent I/O must be re-driven
• The recovery effort for the timed-out I/O gets more and more complex – and more prone to also failing – when an overabundance of buffer credits is used on ports
Buffer Credit Starvation: Detecting Problems with FICON BCs
Produce the FICON Director Activity Report by creating the RMF 74-7 records – but this is only available when utilizing switched-FICON!
• A FICON Management Server (FMS) license per switching device enables the switch's Control Unit Port (CUP) – always FE – to provide information back to RMF at its request
• Analyze the column labeled AVG FRAME PACING for non-zero numbers. Each of these represents the number of times a frame was waiting for 2.5 microseconds or longer while the BC count was at zero, so the frame could not be sent
FICON Director Activity Report With Frame Delay
Fabric with zHPF Enabled
Using Buffer Credits is how FC does Flow Control, also called "Frame Pacing".
Indicators of Buffer Credit Starvation: in the last 15 minutes this port had a frame to send but did not have any Buffer Credits left to use to send it – and this happened 270 times during the interval. And this is an ISL link!
We have a BC Calculator that you can use!
Ask your local Brocade SE to provide this to you free of charge!
Brocade Proudly Presents… Our Industry's ONLY FICON Certification
• Certification for Brocade Mainframe-centric Customers
• Available since September 2008
• Updated for 8Gbps in June 2010
• Updated for 16Gbps in November 2012
• This certification tests the attendee’s ability to understand IBM System z I/O concepts, and demonstrate knowledge of Brocade FICON Director and switching infrastructure components
• Brocade would be glad to provide a free 2-day BCAF certification class for Your Company or in Your City!
• Ask me how to make that happen for you!
Brocade FICON Certification
Industry Recognized Professional Certification We Can Schedule A Class In Your City – Just Ask!
SAN Sessions at SHARE this week
Thursday 1300 – Session 13012: Buzz Fibrechannel - To 16G and Beyond
Mainframe Resources For You To Use
http://community.brocade.com/community/brocadeblogs/mainframe Visit Brocade’s Mainframe Blog Page at:
Also Visit Brocade’s Mainframe Communities Page at: http://community.brocade.com/community/forums/products_and_solutions/mainframe_solutions
You can also find us on Facebook at: https://www.facebook.com/groups/330901833600458/
Please Fill Out Your Evaluation Forms!!
This was session:
13009
And Please Indicate On Those Forms If There Are Other Presentations You Would Like To See In This Track At SHARE.
Thank You For Attending Today!
5 = “Aw shucks. Thanks!” 4 = “Mighty kind of you!” 3 = “Glad you enjoyed this!” 2 = “A Few Good Nuggets!” 1 = “You Got a nice nap!”