Self-optimizing transactional data grids for elastic cloud environments

Paolo Romano
Cloudviews 2011, Porto, Portugal, Nov. 4 2011
About me
• Master and PhD from “Sapienza” University of Rome
• Researcher at Distributed Systems Group, INESC-ID, Lisbon (since 2008)
  – Best INESC-ID Young Researcher 2011
• Invited professor at Instituto Superior Técnico, Lisbon (since 2011)

Some international projects in which I am currently involved:
• Coordinator of the FP7 Cloud-TM Project (Jun 2010 - Jun 2012)
  – 4 international partners from industry and academy
• Coordinator of the COST Action Euro-TM (fall 2010 - fall 2013)
  – Pan-European Research network on Transactional Memories
  – 65 experts, 42 institutions, 12 countries
Talk overview
• Cloud-TM overview:
  – key goals
  – background on Transactional Memories
  – progresses so far
• Self-optimizing transactional data grids:
  – methodologies explored so far
  – case studies
• Open research questions & future work
Cloud-TM at a glance
Partners: INESC-ID (PT), Algorithmica (IT), C.I.N.I. (IT), Red Hat (IE)
Project coordinator: Paolo Romano, INESC-ID (PT)
Duration: from June 2010 to May 2013
Program: FP7-ICT-2009-5, Objective 1.2
Further information: http://www.cloudtm.eu
Key Goals
Develop a transactional data platform for the Cloud:
1) Providing a simple and intuitive programming model:
   hide the complexity of distribution, persistence, fault-tolerance
2) Minimizing administration and monitoring costs:
   automate elastic resource provisioning based on the applications' QoS requirements
3) Minimize operational costs via self-tuning:
   maximize efficiency, adapting consistency mechanisms upon changes of workload and allocated resources

TRANSACTIONAL MEMORIES (Background on the Cloud-TM Programming Paradigm)
Transactional Memories
• Transactional Memories (TM):
  – replace locks with atomic transactions in the programming language
  – hide away synchronization issues from the programmer
    • avoid deadlocks, priority inversions, debugging nightmares
    • simpler to reason about, verify, compose
  – simplify the development of parallel applications
…to Distributed Transactional Memories…
• Distributed Transactional Memories (DTM):
  – extend the TM abstraction over the boundaries of a single machine:
    • enhance scalability
    • durability via in-memory replication
  – minimize communication overhead via:
    • speculation
    • batching consistency actions at commit-time

Cloud Computing experts group, Sept. 28 2011, Bruxelles
…to Cloud-TM
Open-source DTM middleware providing:
• Language-level support for:
  – object-oriented domain model
  – highly scalable abstractions
• Elastic scale-up and scale-down of the DTM platform:
  – automatic scaling based on user-defined QoS & cost constraints
• Self-optimization as a pervasive feature:
  – pursue maximum efficiency via cross-layer self-tuning
PROGRESSES SO FAR
Main achievements
• Architecture specification
• Development of a preliminary prototype
• Innovative transactional replication schemes
• Platform self-tuning
[Architecture diagram. Autonomic Manager: QoS/cost specification API, Workload & QoS Monitor, Workload Analyzer, Adaptation Manager, Elastic Scaling Manager, and resource provisioning & QoS negotiation towards the IaaS Provider(s). Data Platform: Data Platform Programming APIs, Object Grid Mapper, Search API, Distributed Execution Framework, Reconfigurable Distributed Software Transactional Memory, Reconfigurable Storage System; steered by the Data Platform Optimizer via Data Platform Reconfiguration & Tuning.]
Architecture Specification
Preliminary prototype
• 1st version of the Data Platform already available: http://www.cloudtm.eu
• Integration/extension of mainstream open source projects:
  – focus on innovation & avoid reinventing the wheel
  – maximize the project's visibility & facilitate exploitation
Innovative transactional replication schemes
• Several approaches have been pursued:
  – overlap processing and communication via speculation [SPAA2010, ISPA2010, NCA2010, SYSTOR2011, SRDS2011]
  – asynchronous leases to reduce communication overhead [Middleware2010]
  – weaker consistency models [PRDC2011]
• Different approaches with the same common goal:
Data-grid self-optimization
• Self-tuning/performance forecasting of several platform layers:
  – Software Transactional Memory layer [CMG10], [PEVA11]
  – Replication manager [Middleware2011]
  – Group communication system [SASO10], [Performance2011], [ICNC12]
Talk overview
• Cloud-TM overview:
  – key goals
  – background on Transactional Memories
  – progresses so far
• Self-optimizing transactional data grids:
  – methodologies explored so far
  – case studies
• Open research questions & future work
Methodologies explored so far
• Analytical modeling:
  – queuing theory, Markov processes
  – stochastic techniques
• Machine learning:
  – off-line techniques:
    • decision trees, neural networks, support vector machines
  – on-line techniques (reinforcement learning):
    • UCB algorithm
Analytical modeling
• White-box approach:
  – requires detailed knowledge of the internal dynamics
• Good extrapolation power:
  – allows forecasting the system behavior in unexplored regions of its parameters' space
• Minimal learning time:
  – basically parameter instantiation
• Complex and expensive to design/validate
• Subject to unavoidable approximation errors
Machine learning
• Black-box approach:
  – observe inputs, context and outputs of a system
  – use statistical methods to identify patterns/rules
• Good accuracy in already explored regions of the parameters' space
• …but poor extrapolation power
• Learning time grows exponentially with the number of features:
  – but eventually outperforms analytical models (typically!)
Hybrid techniques (idea: get the best of the two worlds)

Two alternative approaches so far:
1) Divide-and-conquer:
  • AM for well-specified sub-components
  • ML for sub-components that are:
    – too complex to model explicitly, or
    – whose internal dynamics are only partially specified
2) Use AM to initialize ML knowledge:
  • reduce the learning time of ML techniques
  • correct AM using feedback from the operational system
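The second approach can be sketched in a few lines: seed a learner's per-configuration estimates with the analytical model's predictions, then correct them with feedback from the running system. Everything below (model functions, constants) is an invented illustration, not Cloud-TM code:

```python
class BootstrappedEstimator:
    """ML estimator whose prior is seeded by an analytical model (AM).

    Estimates are kept per discrete configuration (e.g. a batching level)
    and refined with an exponential moving average of observed samples.
    """

    def __init__(self, analytical_model, configs, alpha=0.3):
        # Seed every configuration with the AM prediction: zero learning
        # time at startup, at the price of the AM's approximation error.
        self.est = {c: analytical_model(c) for c in configs}
        self.alpha = alpha

    def predict(self, config):
        return self.est[config]

    def observe(self, config, measured):
        # Feedback from the operational system gradually corrects the AM.
        self.est[config] += self.alpha * (measured - self.est[config])


# Hypothetical analytical model of latency vs. configuration, deliberately
# biased by 20% to mimic the AM's approximation error.
true_latency = lambda c: 1.0 + 0.5 * c
am_latency = lambda c: 1.2 * (1.0 + 0.5 * c)

e = BootstrappedEstimator(am_latency, configs=range(1, 5))
for _ in range(50):              # operational feedback for config c=2
    e.observe(2, true_latency(2))

# The estimate converges to the truth for the explored configuration,
# while unexplored configurations still fall back on the AM prior.
assert abs(e.predict(2) - true_latency(2)) < 1e-3
assert e.predict(3) == am_latency(3)
```

The point of the bootstrap is visible in the two assertions: no cold-start for configurations the system has never visited, yet the AM's bias is washed out wherever real measurements arrive.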
Case studies
• Dynamic selection and switching of data replication protocols:
  – total order based replication protocols (Case study 1):
    • purely based on Machine Learning techniques
  – single-master vs multi-master (Case study 2):
    • hybrid ML-AM solution: divide-et-impera
• Group Communication System self-optimization:
  – batching in total order protocols (Case study 3):
    • hybrid ML-AM: ML bootstrapped with AM knowledge
SELF-OPTIMIZING DATA CONSISTENCY PROTOCOLS
The search for the holy grail: transactional data consistency protocols
[Taxonomy of transactional replication protocols: single master (primary-backup) vs. multi master; multi master: 2PC-based vs. total order-based; total order-based: state machine replication vs. certification; certification: non-voting, voting, BFC]
• low traffic • read-dominated • low conflict
• low #resources: - minimum costs • primary-backup: - low % write: low load on primary
• multi-master: - hi % write primary overwhelmed • higher scalability
• auto-scale up: - new nodes hired for read-only requests • primary-backup: - low % write: primary stands the load
• hi traffic • read-dominated • low conflict
• hi traffic • write dominated • low conflict
• auto-scale down: - minimum costs • switch back to primary-backup
• low traffic • read dominated • low conflict
: node processing read&write requests : node processing read-only requests
[Animation annotations: low/high % of write transactions, low/high traffic, time]
The Cloud-TM vision
Self-optimizing data replication: key challenges
1) Allow efficient switch among multiple replication protocols:
  – avoid blocking transaction processing during transitions
2) Determine the optimal replication strategy given the current workload characteristics:
  – machine learning methods (black box)
  – analytical models (white box)
  – hybrid analytical/statistical approaches (gray box)
Case studies
• Dynamic selection and switching between replication protocols:
  – total order based replication protocols (Case study 1):
    • purely based on Machine Learning techniques
  – Two-phase commit vs primary-backup (Case study 2):
    • hybrid ML-AM solution: divide-et-impera
• Group Communication System self-optimization:
  – batching in total order protocols (Case study 3):
    • hybrid ML-AM: ML bootstrapped with AM knowledge
POLYCERT: POLYMORPHIC SELF-OPTIMIZING REPLICATION FOR IN-MEMORY TRANSACTIONAL GRIDS
Maria Couceiro, Paolo Romano, Luis Rodrigues. ACM/IFIP/USENIX 12th International Middleware Conference (Middleware 2011)
Where they fit in the picture
[Taxonomy: single master (primary-backup) vs. multi master; multi master: 2PC-based vs. total order-based; total order-based: state machine replication vs. certification (non-voting, voting, BFC)]
Certification (a.k.a. deferred update)
• A transaction is executed independently at a single replica until its commit phase:
  – minimize network traffic
• Distributed certification is run to detect conflicts with transactions executed concurrently at different replicas
• Certification is typically much more lightweight than full transaction execution:
  – good scalability also in write-intensive workloads
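Stripped of all distribution details, the commit-time check of such a certification scheme boils down to intersecting the read-set of the committing transaction with the write-sets of concurrently committed transactions. A minimal sketch with invented data structures, not the actual Cloud-TM code:

```python
def certify(tx_readset, tx_start_ts, committed_log):
    """Validate a transaction in a deferred-update replication scheme.

    committed_log: list of (commit_ts, writeset) pairs, in the total
    order in which transactions were delivered and committed.
    A transaction passes certification only if no concurrent transaction
    (one committed after it started) wrote an item it has read.
    """
    for commit_ts, writeset in committed_log:
        if commit_ts > tx_start_ts and writeset & tx_readset:
            return False  # read-write conflict detected: abort
    return True  # no conflicts: commit


log = [(1, {"x"}), (2, {"y"})]
assert certify({"x", "z"}, tx_start_ts=1, committed_log=log)  # saw commit 1
assert not certify({"y"}, tx_start_ts=1, committed_log=log)   # missed commit 2
```

Because every replica applies the same check against the same totally ordered log, all replicas reach the same commit/abort verdict without any extra coordination.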
Certification
• Three different approaches in the literature:
  – Non-voting algorithm
  – Voting algorithm
  – BFC
• One key commonality:
  – reliance on Total Order Broadcast (TOB) to avoid distributed deadlocks
  – TOB ensures agreement on the delivery order of broadcast messages in a non-blocking fashion
Classic Replication Protocols
• Focus on full replication protocols
[Taxonomy: single master (primary-backup) vs. multi master; multi master: 2PC-based vs. AB-based; AB-based: state machine replication vs. certification (non-voting, voting, BFC)]
Non-voting
[Message diagram, non-voting certification: replicas R1, R2, R3. T1 and T2 execute at different replicas; the TOB of T1's read & write set and of T2's read & write set is delivered at all replicas, where local validation commits T1 and aborts T2.]
+ only validation executed at all replicas: high scalability with write-intensive workloads
- need to also send the readset: often very large!
Voting
[Message diagram, voting certification: replicas R1, R2. R1 executes transaction T1; T1's TOB carries only the write set; T1 is validated at R1, which then broadcasts its vote (commit), while the other replicas wait for R1's vote.]
+ sends only the write-set (normally much smaller than read-sets)
- additional communication phase to disseminate the decision (vote)
Bloom Filter Certification (BFC)
• Use a single TOB as in non-voting, but encode the readset in a Bloom filter
  – Bloom filters:
    • space-efficient, probabilistic data structure for membership testing
    • compression is a function of a (tunable) false-positive rate
• A transaction T is certified successfully only if:
  – none of the items written by concurrent transactions is present in the BF used to encode T's readset
• Strongly reduce network traffic at the cost of a negligible abort increase:
  – 1% false positives yield up to 30x compression
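The compression figure is consistent with textbook Bloom-filter sizing: at a target false-positive rate p, an optimally configured filter needs about -ln(p)/ln²(2) bits per element (roughly 9.6 bits at p = 1%), against, say, a 32-byte item identifier sent verbatim. The 32-byte key size here is an assumed example, not a Cloud-TM constant:

```python
import math

def bloom_bits_per_item(p):
    # Classic optimum: m/n = -ln(p) / (ln 2)^2 bits per inserted element.
    return -math.log(p) / (math.log(2) ** 2)

p = 0.01                       # 1% false positives (the tunable knob)
bits = bloom_bits_per_item(p)  # ~9.6 bits per read-set item
raw_bits = 32 * 8              # hypothetical 32-byte item identifier
compression = raw_bits / bits  # ~27x, same order as the reported "up to 30x"

assert 9 < bits < 10
assert compression > 25
```

Lowering p (fewer spurious aborts) costs bits per item, so the false-positive rate is exactly the traffic/abort trade-off knob the slide describes.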
BFC vs Voting vs Non-Voting
Fig. 2. Throughput of three certification strategies with different read-set sizes.
3 System Architecture
The system architecture is depicted in the diagram shown in Figure 3. At the topmost layer it exposes the API of an object-oriented STM, which is however fully replicated across a number of distributed nodes. The API provided to applications is a transparent extension of JVSTM's API, a state-of-the-art STM relying on an efficient Multi-Version Concurrency Control (MVCC) algorithm [3]; a strong advantage of JVSTM is that read-only transactions are never required to block. The JVSTM programming paradigm requires that the programmer encapsulates any shared mutable state within VBoxes, which are then managed by JVSTM's MVCC to ensure transactional atomicity and isolation. This allows separating the transactional and non-transactional state of the application, ensuring strong atomicity [19] at no additional cost. We modularly extended JVSTM by augmenting it with what we have named a Polymorphic Replication Manager (PRM).

The PRM is in charge of triggering the execution of a certification protocol for each of the locally executed transactions, and of participating in the certification of transactions that have been executed at remote nodes. A unique feature of the PRM, with regard to existing replication managers, is that it is able to determine, for each locally executing transaction, which certification algorithm is more appropriate, given the characterization of the transaction. The logic for determining the certification scheme is encapsulated by the abstraction of a Replication Protocol Selector Oracle (RPSO), whose interface exports two main functionalities:

– Given a local transaction, it selects the most appropriate certification protocol to be executed by the PRM. In Section 4.1 we will present and evaluate two performance forecasting methods which are based on alternative machine learning techniques.
Self-optimizing data replication: key challenges
1) Allow efficient switch among multiple replication protocols:
  – coexistence of multiple certification schemes via the Polymorphic Certification (PolyCert) protocol
2) Determine the optimal replication strategy given the current workload characteristics:
  – entirely based on machine learning techniques:
    • off-line: decision trees, neural networks, SVM
    • on-line: reinforcement learning (UCB)
PolyCert
• Polymorphic Self-Optimizing Certification
• Co-existence of the 3 certification schemes:
  – exploit the common reliance on total order broadcast
• Machine-learning techniques to determine the optimal certification strategy per transaction
Replication Protocol Selector Oracle
• Two implementations:
  – off-line machine learning techniques
  – on-line reinforcement learning
Off-line Machine Learning Techniques
• For each transaction:
  – determine the size of the exchanged messages for each certification scheme
  – forecast the AB latency for each message size. We evaluated several ML approaches:
    • regression decision trees (best results)
    • neural networks
    • support vector machines
Off-line Machine Learning Techniques
• Uses up to 63 monitored system attributes:
  – CPU
  – memory
  – network
  – time-series
• Requires a computationally intensive feature selection and training phase
On-line Reinforcement Learning
• Each replica builds on-line expectations on the rewards of each protocol:
  – no assumption on the rewards' distributions
• Upper Confidence Bound (UCB) algorithm:
  – lightweight and provably optimal solution to the exploration-exploitation dilemma:
    • did I test this option sufficiently in this scenario?
On-line Reinforcement Learning
• Distinguishes the workload scenario solely based on the read-set's size:
  – exponential discretization intervals to minimize the training time
• Replicas exchange statistical information periodically to boost learning
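A minimal sketch of such an oracle, assuming a UCB1 bandit per read-set-size bucket; the reward definition, bucketing scheme and constants below are illustrative assumptions rather than the exact algorithm evaluated in the paper:

```python
import math

PROTOCOLS = ["NVC", "VC", "BFC"]

class UCBOracle:
    """UCB1 bandit per workload bucket (bucket = log2 of the read-set size)."""

    def __init__(self):
        self.n = {}     # (bucket, protocol) -> times selected
        self.mean = {}  # (bucket, protocol) -> mean observed reward

    def bucket(self, readset_size):
        # Exponential discretization intervals minimize the training time.
        return int(math.log2(max(readset_size, 1)))

    def choose(self, readset_size):
        b = self.bucket(readset_size)
        for p in PROTOCOLS:                     # play every arm once first
            if (b, p) not in self.n:
                return p
        total = sum(self.n[(b, p)] for p in PROTOCOLS)
        # UCB1 index: empirical mean plus an exploration bonus.
        return max(PROTOCOLS,
                   key=lambda p: self.mean[(b, p)]
                   + math.sqrt(2 * math.log(total) / self.n[(b, p)]))

    def reward(self, readset_size, protocol, r):
        b = self.bucket(readset_size)
        self.n[(b, protocol)] = self.n.get((b, protocol), 0) + 1
        m = self.mean.get((b, protocol), 0.0)
        self.mean[(b, protocol)] = m + (r - m) / self.n[(b, protocol)]

# Toy workload: for very large read-sets the voting protocol (smaller
# messages) yields the best reward; the oracle converges onto it.
oracle = UCBOracle()
for _ in range(300):
    size = 1 << 20
    p = oracle.choose(size)
    oracle.reward(size, p, 0.9 if p == "VC" else 0.2)

assert oracle.n[(20, "VC")] > oracle.n[(20, "NVC")]
assert oracle.n[(20, "VC")] > oracle.n[(20, "BFC")]
```

Note how the oracle matches the slide's two ingredients: the bucket key implements the exponential discretization of read-set sizes, while the UCB1 bonus term keeps probing the other protocols just often enough to notice a workload shift.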
Chasing the optimum!
Fig. 5. Normalized throughput of the adaptive and non-adaptive protocols (Bank benchmark).
transactions, thus allowing us to focus on the performance of the transactions that require the activation of a commit-time certification phase. As shown in Figure 1, around 5% of the transactions (namely the so-called long traversal transactions) in this benchmark have read-set sizes larger than 500K items. As a consequence, when using either NVC or BFC, this benchmark generates a very high traffic volume that, in all our runs, eventually determined the saturation and the collapse of the GCS. This is the reason why in Figure 8 we only report the throughput of VC, DT, UCB and DistUCB (normalized with respect to the throughput of the optimal non-adaptive protocol, namely VC). In this scenario, the adaptive protocols clearly outperform the non-adaptive VC scheme, thanks to their ability to use the more efficient NVC and BFC protocols to handle transactions with smaller read-set sizes. The speed-up of PolyCert when using the three alternative oracles ranges from 25% to 35%, with the best performance also in this case achieved by DistUCB.

Overall, our experimental data demonstrated the effectiveness and viability of the proposed self-tuning polymorphic replication technique, highlighting in particular the efficiency of the DistUCB oracle which, not needing any time-consuming off-line training phase, and being totally parameter-free, is extremely convenient for deployment in real-life practical scenarios. Interestingly, PolyCert does not only provide benefits in terms of performance, but also in terms of robustness, avoiding saturating the GCS in the presence of transactions with extremely large read-sets, a main source of instability for BFC and, in particular, NVC.

6 Related Work

Our work is clearly related to the vast literature on replication of transactional systems, and in particular to the more recent works relying on AB to achieve a replica-wide
…and beating it!
Fig. 8. Normalized throughput of the adaptive and VC protocols. NVC and BFC not reported as they caused the collapse of the GCS layer. (STMBench7 benchmark)
6. Cachopo, J., Rito-Silva, A.: Versioned boxes as the basis for memory transactions. Science of Computer Programming 63(2), 172–185 (2006)
7. Carvalho, N., Romano, P., Rodrigues, L.: Asynchronous lease-based replication of software transactional memory. In: Middleware 2010, Lecture Notes in Computer Science, vol. 6452, pp. 376–396. Springer Berlin / Heidelberg (2010)
8. Couceiro, M., Romano, P., Carvalho, N., Rodrigues, L.: D2STM: Dependable distributed software transactional memory. In: Proceedings of the 15th Pacific Rim International Symposium on Dependable Computing (PRDC 09). Shanghai, China (Nov 2009)
9. Couceiro, M., Romano, P., Rodrigues, L.: A machine learning approach to performance prediction of total order broadcast protocols. In: SASO'10: Proceedings of the 4th IEEE International Conference on Self-Adaptive and Self-Organizing Systems. Budapest, Hungary (Sep 2010)
10. Erman, J., Mahanti, A., Arlitt, M., Cohen, I., Williamson, C.: Offline/realtime traffic classification using semi-supervised learning. Perform. Eval. 64(9-12), 1194–1213 (2007)
11. Garces-Erice, L.: Admission control for distributed complex responsive systems. In: Proc. ISPDC '09. pp. 226–233. IEEE Computer Society, Washington, DC, USA (2009)
12. Gray, J., Helland, P., O'Neil, P., Shasha, D.: The dangers of replication and a solution. In: Proceedings of the 1996 ACM SIGMOD International Conference on Management of Data. pp. 173–182. SIGMOD '96, ACM, New York, NY, USA (1996), http://doi.acm.org/10.1145/233269.233330
13. Guerraoui, R., Kapalka, M., Vitek, J.: STMBench7: a benchmark for software transactional memory. SIGOPS Operating Systems Review 41(3), 315–324 (2007)
14. Guerraoui, R., Rodrigues, L.: Introduction to Reliable Distributed Programming. Springer (2006)
15. Guyon, I., Elisseeff, A.: An introduction to variable and feature selection. J. Mach. Learn. Res. 3, 1157–1182 (2003)
16. Haykin, S.: Neural Networks: A Comprehensive Foundation. Prentice Hall PTR, Upper Saddle River, NJ, USA (1994)
Case studies
• Dynamic selection and switching between replication protocols:
  – total order based replication protocols (Case study 1):
    • purely based on Machine Learning techniques
  – single-master vs multi-master (Case study 2):
    • hybrid ML-AM solution: divide-and-conquer
• Group Communication System self-optimization:
  – batching in total order protocols (Case study 3):
    • hybrid ML-AM: ML bootstrapped with AM knowledge
SINGLE VS MULTI-MASTER
Diego Didona, Sebastiano Peluso, Paolo Romano and Francesco Quaglia, Self-tuning replication of elastic in-memory transactional data platforms, INESC-ID Tech. Rep. 26/2011, May 2011.
Classic Replication Protocols
[Taxonomy: single master (primary-backup) vs. multi master; multi master: 2PC-based vs. total order based; total order based: state machine replication vs. certification (non-voting, voting)]
• Focus on full replication protocols
Single Master
• Write transactions are executed entirely in a single replica (the primary)
• If a write transaction is ready to commit, coordination is required to update all the other replicas (the backups)
• Read transactions can be executed on the backup replicas
+ No distributed deadlocks
+ No distributed coordination during commit
- Throughput of write txs doesn't scale up with the number of nodes
2PC-based replication
• Transactions executed at all nodes w/o coordination till commit time
• Acquire atomically locks at all nodes using two phase commit (2PC): • 2PC materializes conflicts among concurrent remote transactions
generating:
DISTRIBUTED DEADLOCKS
+ good scalability at low conflict
- thrashes at high conflict
Goals
• Autonomically select the best-suited protocol, i.e. one that:
  – minimizes transactions' service time
  – maximizes achievable throughput
• Automate elastic scaling:
  – scale up if the system needs more computational power
  – scale down if the system is oversized
Self-optimizing data replication: key challenges
1) Allow efficient switch among multiple replication protocols:
  – here coexistence of the 2 schemes is impossible:
    • design of a non-blocking protocol-switching strategy
2) Determine the optimal replication strategy given the current workload characteristics:
  – analytical models of the effects of data contention
  – machine learning methods to predict hw-dependent latencies
    • CPU execution time, network RTT
Key Technical Problem
• How to forecast:
  – the performance of protocol B while running protocol A?
  – the performance with X nodes while running on Y nodes?
given that replication protocol/scale changes affect:
  – the transaction conflict probability
  – the transport layer latency
Methodology
Joint usage of analytical modeling and machine learning techniques:
– analytical model of the replication algorithm dynamics:
  • lock contention, distributed deadlock probability
  • message exchange pattern
– machine learning to forecast the performance of the group communication layer:
  • RTT as a function of message size, throughput, #nodes
Analytical Model
• Distributed lock contention dynamics captured via an analytical model:
  – the replication algorithms' behavior is fully specified
  – it is possible to mathematically model them (…although not easily, but that's another story!)
• Key methodologies:
  – mean value analysis & queuing theory
• Rely on machine learning to forecast hardware-dependent metrics, in particular network latencies…
Machine learning techniques
• Resource virtualization makes mathematical modeling unfeasible:
  – no knowledge of the actual load
  – no knowledge of the actual physical resources
• Transport layer latency (RTT) of the two protocols predicted via decision tree regressors
Inputs for the ML
• Number of nodes
• RTT in the current configuration
• Size of exchanged messages
• Throughput of the current configuration
  – Unknown! Guessed using the analytical model (more next)
Statistical Model Accuracy
• Correlation between 0.95 and 0.98
• Relative error between 0.19 and 0.22
Models Coupling
The analytical model forecasts the data grid throughput, taking as input the RTT in the target configuration.

Machine learning forecasts the RTT, taking as input the data grid throughput in the target configuration.

A fixed point solution is found using recursion.
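The coupling can be sketched as a plain fixed-point iteration between the two predictors; the two functions below are invented stand-ins for the analytical model and for the RTT regressor:

```python
def fixed_point(am_throughput, ml_rtt, rtt0=1.0, max_iter=100, tol=1e-9):
    """Iterate AM(rtt) -> throughput and ML(throughput) -> rtt
    until the pair stops changing (the fixed point)."""
    rtt = rtt0
    for _ in range(max_iter):
        thr = am_throughput(rtt)   # analytical model: throughput given RTT
        new_rtt = ml_rtt(thr)      # ML regressor: RTT given throughput
        if abs(new_rtt - rtt) < tol:
            break
        rtt = new_rtt
    return thr, rtt

# Illustrative stand-ins: throughput falls as RTT grows, RTT grows with load.
am = lambda rtt: 10_000 / (1.0 + rtt)   # tx/sec
ml = lambda thr: 0.5 + thr / 20_000     # ms

thr, rtt = fixed_point(am, ml)
# At the fixed point the two predictions are mutually consistent.
assert abs(am(rtt) - thr) < 1e-3
assert abs(ml(thr) - rtt) < 1e-3
```

Convergence holds here because the composed map is a contraction; for less well-behaved model pairs a damped update or bounded search would be needed, which is beyond this sketch.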
Global Model Accuracy
[Figure: predicted ("est") vs. measured throughput (tx/sec) as a function of the number of nodes (4 to 10), for 2PC and PB replication under low- and high-conflict workloads]
…and now in action!
[Figure: throughput (tx/sec) over time (0 to 900 s); annotated phases: low conflict, low conflict, high conflict. The platform switches replication protocol as the conflict level changes.]
Case studies
• Dynamic selection and switching between replication protocols:
  – total order based replication protocols (Case study 1):
    • purely based on Machine Learning techniques
  – single-master vs multi-master (Case study 2):
    • hybrid ML-AM solution: divide-and-conquer
• Group Communication System self-optimization:
  – batching in total order protocols (Case study 3):
    • hybrid ML-AM: ML bootstrapped with AM knowledge
BATCHING IN TOTAL ORDER BROADCAST PROTOCOLS
Paolo Romano and Matteo Leonetti, Self-tuning Batching in Total Order Broadcast Protocols via Analytical Modelling and Reinforcement Learning, IEEE International Conference on Computing, Networking and Communications (ICNC'12), Jan. 2012
Sequencer based TOB (STOB)
• Total Order Broadcast (TOB) algorithms rely on a special process to ensure total order:
[Diagram: processes P2 and P3 invoke TOB(m); the message m is diffused to all processes; the sequencer assigns the total order and diffuses a seq message; message ordering follows.]
Batching in STOB protocols
• STOB protocols have theoretically optimal latency:
  – 2 comm. steps, independently of the number of processes
• …but the sequencer becomes the bottleneck at high throughput
• Batching at the sequencer process:
  – wait for several msgs and order them altogether:
    • amortize the sequencing cost across multiple messages
  – the optimal waiting time depends on the message arrival rate:
    • very effective at high throughput…
    • very bad at low throughput!
Analytical model
• Using queuing theory arguments we can determine the optimal batching time, b*, as a function of the current load, m:
We model the STOB self-delivery latency as the response time of the queue:

\[ T(b,m) = \frac{1}{\mu(b,m) - \lambda(b,m)} \tag{1} \]

subject to the stability constraint \(0 \le \lambda(b,m) < \mu(b,m)\). We take into account the effects of batching by expressing \(\lambda(b,m)\) and \(\mu(b,m)\) in Equation 1 as follows:

\[ \lambda(b,m) = \frac{m}{b} \tag{2} \]

where we denoted with \(m\) the average message arrival rate at the sequencer, and accounted for the fact that, when using a batching value \(b\), the sequencer processes batches at a rate inversely proportional to \(b\).

\[ \mu(b,m) = \frac{1}{T_{1st} + \frac{b-1}{2m} + T_{add}(b-1)} \tag{3} \]
which expresses \(\mu(b,m)\) as the inverse of the sum of three components: i) the CPU time, denoted as \(T_{1st}\), required to generate a sequencing message for a single incoming message or, equivalently, for a batch containing a single message; ii) the time required to receive the \(b-1\) remaining messages of the batch; iii) the CPU time required to process the \(b-1\) remaining messages of the batch. This typically corresponds to unmarshalling each incoming message and buffering it in memory, but it does not include the actual sending of the sequencing message (which is accounted for by \(T_{1st}\) in our model). Note that, in practice, the CPU time required to include an additional message into an existing batch (\(T_{add}\)) is much lower than the one associated with sending the sequencing message for a message (\(T_{1st}\)). In the following we will therefore assume that \(T_{add} < T_{1st}\).
By merging Equations (1-3) we can derive the average self-delivery latency as a function of the batching level and of the average message arrival rate:

\[ T(b,m) = \frac{1}{\frac{1}{T_{1st} + \frac{b-1}{2m} + T_{add}(b-1)} - \frac{m}{b}} \tag{4} \]

subject to the constraint:

\[ \frac{1}{T_{1st} + \frac{b-1}{2m} + T_{add}(b-1)} - \frac{m}{b} > 0 \tag{5} \]

By letting \(b\) tend to infinity in the above constraint, we can easily determine an upper bound on the maximum throughput, \(m^{*}\), sustainable by the sequencer:

\[ m^{*} = \lim_{b\to\infty} \frac{b+1}{2T_{1st} + 2T_{add}(b-1)} = \frac{1}{2T_{add}} \tag{6} \]
Finally, by imposing \(\partial T(b,m)/\partial b = 0\) and \(\partial^{2} T(b,m)/\partial b^{2} > 0\) we can derive the optimal batching value as a function of the message arrival rate \(m\), as reported in Equation 7, where we used the shorthand \(\sigma = 1/T_{1st}\). The optimal batching value is predicted via a piece-wise function. For low values of the arrival rate, as expectable, the model predicts that batching harms performance, rather than improving it. As the load increases, as long as it remains lower than the maximum sustainable throughput, the optimal batching value grows non-linearly and has a vertical asymptote at \(m = m^{*}\).
$$b^*(m) = \begin{cases}
1, & \text{if } m < \dfrac{T_{add}\,\sigma^2}{2} + \dfrac{1}{2}\sqrt{\dfrac{4\sigma^2 + 2T_{add}^2\sigma^4}{2}} \\[2ex]
\dfrac{2m - \sigma - 2mT_{add}\sigma}{\sigma - 2mT_{add}\sigma} + \sqrt{\dfrac{2\left(\sigma + 2m\,(T_{add}\sigma - 1)\right)^2}{(2mT_{add}-1)^2\,(1 + 2mT_{add})\,\sigma^2}}, & \text{if } \dfrac{T_{add}\,\sigma^2}{2} + \dfrac{1}{2}\sqrt{\dfrac{4\sigma^2 + 2T_{add}^2\sigma^4}{2}} < m < m^*
\end{cases} \quad (7)$$
B. Determining the Model’s Parameters
In order to solve our analytical model, it is first necessary
to determine the value of the parameters T1st and Tadd. Even
though the semantics of T1st and Tadd might not appear
immediately obvious, it is in practice possible to estimate
their values via the following, very quick, training phase. To
compute T1st it suffices to observe that, for b = 1, Equation
4 reduces to:

$$T(1,m) = \frac{1}{\frac{1}{T_{1st}} - m}, \qquad \text{where } m < \frac{1}{T_{1st}}$$
In other words, T1st is simply the inverse of the maximum
throughput sustainable by the system when b = 1, and its value
can therefore be easily computed (in an approximated manner
of course) by setting the batching level to 1 and injecting traffic
at an increasing rate, until saturating the GCS.
The identification of the saturation point of the system can
be seen as a search problem and, as such, one may rely on the
wide range of classic efficient search methods and heuristics.
In our system we use a simple variant of classic binary search
which operates in two distinct phases. We start by generating
traffic with a low rate m0 (e.g. m0 = 100 msgs/sec) to obtain
the reference value of the self-delivery latency at low load,
say T1,low. Then, at regular intervals of 1 second, we double
the message arrival rate (i.e. mt+1 = 2mt) until we find a
message rate mt† at which the self-delivery latency falls in
the range [50 T1,low, 100 T1,low]. By aggressively doubling the
message generation rate at each interval, this method allows
us to quickly narrow the search interval for the saturation point.
We found, however, that it is prone to lead the GCS to thrash
by forcing it to work at excessively high loads.
In order to avoid this issue, we let the system idle for one
time interval to allow it to recover from the overload. Then a
second phase begins, during which we inject traffic starting
from mt†−1, and prudently increase it at each time interval
at the finer-grained pace of 10% (i.e. mt+1 = 1.1 mt),
until we find the message rate, which we denote as
m*1, at which the self-delivery latency is, again, in the range
[50 T1,low, 100 T1,low]. Finally, we use 1/m*1 to estimate T1st.
In order to derive the value of the Tadd parameter we exploit
Equation 6. To this end, we set b to the maximum value that we
intend to use in our self-tuning system, which we denote with
bmax, and identify, using a technique analogous to the one just
described, the maximum throughput sustainable by the system,
denoted as m*bmax. In all our experiments we found that
the throughput gains achievable by increasing the batching
value beyond 128 become negligible, thus we used bmax = 128.
Using Equation 6, we can then set:

$$T_{add} = \frac{1}{2\,m^*_{b_{max}}}$$
Model Accuracy

[Figure 3. Validating the accuracy of the analytical model: optimal batching value vs. average message arrival rate (msgs/sec), for exhaustive manual tuning and for the analytical model; the two plots report the same data, with linear and logarithmic y-axis.]
C. Model accuracy and efficiency
In order to determine the accuracy of the presented an-
alytical performance model, we used the data collected for
the experiment described in Section IV to manually identify
the optimum batching value as a function of the message
arrival rate. In Figure 3 we compare the optimal batching value
predicted by the analytical model with the one manually found
via exhaustive exploration of the parameters' space. The two
plots report the same data, but the one on the bottom uses a
log scale on the y-axis. By looking at the top plot, we observe
that globally the model captures quite closely the dynamics of
the real system. The bottom plot, on the other hand,
allows us to better visualize that the analytical model tends to
underestimate the optimal setting of the batching value at
medium loads (approx. 3000-6000 msgs/sec). The fact that
the proposed model fails under some circumstances comes
as no surprise, as real systems are characterized by
some degree of uncertainty that no model can escape, and the
proposed model relies on assumptions whose correctness can
only be hypothesized.
In order to assess the actual impact of the model's error
on the system's performance, we inject traffic using traces
collected by a real system, namely FenixEDU, the Web
application responsible for the management of the
whole campus of one of the main universities in Portugal,
the Instituto Superior Técnico of Lisbon. The traces we used
report the number of messages in input to the cluster hosting
the FenixEDU system during September 3, 2010. On this day,
[Figure 4. Traffic at the FenixEDU system (Sept. 3 2010): average message arrival rate (msgs/sec) over the 24 hours of the day.]
[Figure 5. Self-tuning based on the analytical model: message arrival rate (msgs/sec, bottom) and self-delivery latency (msec, log scale, top), 16:00-22:00.]
at 18:00, the enrolment of students for the following semester
started, and the FenixEDU system was subject to a spike of
load lasting several hours, see Figure 4.
We injected traffic according to the traces from 16:00 to
22:00, and used the analytical model's predictions to dynamically
adapt the batching level, with a frequency of 1Hz. Figure
5 shows in the bottom plot the message arrival rate, and in the
top plot the self-delivery latency at the sequencer node (in log
scale). Note that the self-delivery latency is subject to two
spikes: one during the ramp-up front of the traffic surge and
one during the ramp-down front, both occurring as the average
message arrival rate is around 5000-6000 messages per second.
As already discussed, at this load pressure the analytical model
underestimates the optimal batching level, which leads the
system to thrash. Interestingly, since the ramp-up front is faster
than the ramp-down, the system is able to recover from the
thrashing caused by the incorrect choice of the batching level
during the ramp-up. As soon as the ramp-up is completed, at
high load, the analytical model correctly predicts the optimal
level of batching, preventing the GCS from crashing. Conversely,
since the ramp-down lasts much longer (several hours vs.
around 30 minutes), the prolonged permanence in a state in
which the batching level is erroneously tuned (namely,
configured with an excessively low value) causes the GCS to
eventually collapse.
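The adaptation mechanism used in this experiment is a simple 1 Hz control loop, which can be sketched as below. The hooks estimate_rate and set_batching_level are hypothetical stand-ins for, respectively, the GCS's arrival-rate statistic and the sequencer's reconfiguration interface; policy would be the closed form of Equation 7.

```python
# Hypothetical sketch of the 1 Hz self-tuning loop driving the sequencer.
def control_loop(estimate_rate, set_batching_level, policy, ticks):
    """Once per tick (1 second in the real deployment): read the current
    average arrival rate, ask the model for the optimal batching level,
    and reconfigure the sequencer accordingly."""
    applied = []
    for _ in range(ticks):
        m = estimate_rate()              # current average arrival rate
        b = max(1, round(policy(m)))     # never batch less than 1 message
        set_batching_level(b)            # hypothetical reconfiguration hook
        applied.append(b)
    return applied

# replaying a tiny synthetic trace with a toy linear policy
rates = iter([1000, 5000, 12000])
levels = control_loop(lambda: next(rates), lambda b: None,
                      lambda m: m / 100, 3)
```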
Cloudviews 2011, Porto, Portugal, Nov. 4 2011

Model underestimates optimal batching value at "medium" load!
Peak period analysis
Ramp-up & ramp-down transition through the "problematic" areas:
– ramp-up is sufficiently short: system "struggles", but recovers
– ramp-down is longer: the system eventually collapses
What about a pure ML approach?
• Problem:
– ML techniques need to explore different solutions (batching values) to identify the optimal one:
• low load: useless additional latency
• medium-high load: insufficient batching values lead very rapidly to instability and thrashing
Combining the two approaches
1. Initialize ML's knowledge with the predictions of the analytical model:
– reduce frequency of obviously wrong explorations
2. Let ML update the initial reward values:
– correct model's prediction errors exploiting feedback from the system
Concerning the computational efficiency of the analytical model, we implemented it in Java, pre-computing every expression of Equation 7 that is independent of the message arrival rate m. We measured the time required to solve the model by passing as input parameter m the whole set of integer values in the range [1, 14000], and repeated the process 100 times. On a machine equipped with an Intel Core 2 Duo at 2.53GHz running Mac OS X 10.6.6, the average time to determine the optimal batching value was on the order of 20 nanoseconds.
As a final note, it is worth highlighting (even though for space constraints it is not possible to report detailed experimental data) that we have also experimented with analytical models more complex than the one described above. These include models in which the (average) time to sequence a batch of size b at load m (essentially the denominator of Equation 3) is expressed as a generic polynomial in b. This kind of approach could in theory capture more complex non-linear dynamics. However, it requires a complex and time-consuming phase of identification of the model parameters via non-linear fitting techniques [25]. Also, in our experiments, their accuracy was only marginally better than that achievable by the model presented in Section V-A. These considerations ultimately led us to opt for a simpler, and consequently easier to instantiate and solve, analytical model, whose errors are dynamically compensated via an RL technique, as we discuss in the following.
VI. COMBINING AN RL APPROACH
In order to compensate for the errors of the analytical model, our system relies on an RL technique that dynamically updates the initial knowledge provided by the model based on the feedback gathered by observing the consequences of the self-tuning choices. To this end, we cast the problem of deciding the optimal batching level given the current system load as a classical RL problem, namely the multi-armed bandit [26]. In this problem, a gambling agent is faced with a bandit (a slot machine) with k arms, each associated with an unknown reward distribution. The gambler iteratively plays one arm per round and observes the associated reward, adapting its strategy in order to maximize the average reward. Formally, each arm i of the bandit, for 1 ≤ i ≤ k, is associated with a sequence of random variables X_{i,n} representing the reward of arm i, where n is the number of times the lever has been used. The goal of the agent is to learn which arm i maximizes the criterion

$$\mu_i = \lim_{n\to\infty} \frac{1}{n}\sum_{\tau=1}^{n} X_{i,\tau},$$

that is, achieves maximum average reward. To this purpose, the learning algorithm needs to try different arms in order to estimate their average reward. On the other hand, each suboptimal choice of an arm i costs, on average, µ* − µ_i, where µ* is the average reward obtained by the optimal lever. Several algorithms have been studied that minimize the regret r, defined as

$$r = \mu^* n - \sum_{i=1}^{K} \mu_i\, \mathbb{E}[T_i(n)],$$

where T_i(n) is the number of times arm i has been chosen. In our system we leverage a recent result by Auer et al. [10], who introduced an algorithm, UCB, that achieves a logarithmic bound on the number of suboptimal trials not only in the limit, but also for any finite sequence. Building on the idea of confidence bounds, this algorithm creates an overestimation of the reward of each possible decision, and lowers it as more samples are drawn. Implementing the principle of optimism in the face of uncertainty, the algorithm picks the option with the highest current bound.

[Figure 6. Self-tuning combining RL and the analytical model: message arrival rate (msgs/sec) and self-delivery latency (msec, log scale), 16:00-22:00, comparing "Model" and "Model+RL".]
In particular, assuming that rewards lie in [0,1], each arm is associated with the value:

$$\mu_i = \bar{x}_i + \sqrt{\frac{\log n}{n_i}\,\min\left\{\tfrac{1}{4},\, V_i(n_i)\right\}} \quad (8)$$

where $\bar{x}_i$ is the current estimated reward for arm i, n is the number of the current trial, n_i is the number of times lever i has been tried, and:

$$V_i(s) = \left|\frac{1}{s}\sum_{\tau=1}^{s} X_{i,\tau}^2 - \bar{x}_i^2\right| + \sqrt{\frac{2\log n}{s}}$$
The right-hand part of the sum in Eq. 8 is an upperconfidence bound that decreases as more information on eachoption is acquired. By choosing, at any time, the option withmaximum µi, the algorithm searches for the option with thehighest reward, while minimizing the regret along the way.
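The rule of Equation 8 can be sketched as follows; the two reward functions are illustrative stand-ins for the (normalized) rewards that, in our system, are derived from observed self-delivery latencies.

```python
import math
import random

random.seed(42)

def ucb_tuned(reward_fns, rounds=2000):
    """Multi-armed bandit playing, at each round, the arm with the
    highest optimistic reward estimate (Equation 8)."""
    k = len(reward_fns)
    counts = [0] * k        # n_i: number of times lever i was tried
    sums = [0.0] * k        # running sum of rewards of arm i
    sq_sums = [0.0] * k     # running sum of squared rewards of arm i

    def pull(i):
        x = reward_fns[i]()
        counts[i] += 1
        sums[i] += x
        sq_sums[i] += x * x

    for i in range(k):      # initialise by playing each arm once
        pull(i)
    for n in range(k + 1, rounds + 1):
        def bound(i):       # upper confidence bound of Eq. 8
            mean = sums[i] / counts[i]
            v = abs(sq_sums[i] / counts[i] - mean ** 2) \
                + math.sqrt(2.0 * math.log(n) / counts[i])   # V_i(n_i)
            return mean + math.sqrt(math.log(n) / counts[i] * min(0.25, v))
        pull(max(range(k), key=bound))   # optimism under uncertainty
    return counts

# two illustrative arms; the second has the higher average reward,
# so it should attract the overwhelming majority of the plays
counts = ucb_tuned([lambda: random.uniform(0.2, 0.4),
                    lambda: random.uniform(0.6, 0.8)])
```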
In order to apply this technique, we discretized the parameter space, defined by the cartesian product b × m, as follows. We considered k = 8 different batching levels, denoted as b_0, ..., b_{k−1}, such that b_i = 2^i for 0 ≤ i < 8. We split the message arrival rate into l = 15 intervals, having endpoints in L = {0, 10, 100} ∪ {n · 1000 | 1 ≤ n ≤ 10} ∪ {12000, 14000, 16000}, expressed in messages per second. Each message arrival rate interval m_l
is associated with an instance of the bandit problem with k arms, where each arm is associated with a different batching level. Since in UCB rewards are bounded in the [0,1] interval, given an observed self-delivery latency t, we use the following function to define its reward R(t):

$$R(t) = \frac{maxLatency - \min\{maxLatency,\, t\}}{maxLatency}$$
where maxLatency is a parameter defining the maximum self-delivery latency observable by the sequencer, which we set, consistently with Section V-B, to 100 msec (a threshold above which our GCS started thrashing severely).
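The discretization and the reward mapping just described can be sketched as:

```python
import bisect

# Endpoints L of the arrival-rate intervals (msgs/sec), from the text above.
ENDPOINTS = [0, 10, 100] + [n * 1000 for n in range(1, 11)] \
            + [12000, 14000, 16000]
MAX_LATENCY = 100.0   # msec; threshold above which our GCS thrashed severely

def load_interval(m):
    """Index of the arrival-rate interval containing rate m, i.e. which
    instance of the bandit problem an observation is routed to."""
    return bisect.bisect_right(ENDPOINTS, m) - 1

def reward(t):
    """R(t): maps an observed self-delivery latency t (msec) into [0,1]."""
    return (MAX_LATENCY - min(MAX_LATENCY, t)) / MAX_LATENCY
```

For example, a rate of 5500 msgs/sec falls in the interval with endpoints 5000 and 6000, and any latency at or above maxLatency is mapped to reward 0.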
As already mentioned, the original UCB technique does notrely on the availability of initial knowledge on the arms’ re-
Talk structure
• Cloud-TM Overview:
– key goals
– background on Transactional Memories
– progress so far
• Self-optimizing transactional data grids:
– self-optimizing methodologies explored so far
– case studies
• Open research questions & future work
Open research issues
• Holistic approaches to self-optimization:
– understand effects of self-optimizing multiple, mutually dependent layers
• QoS-aware programming paradigms:
– methodologies and tools to allow providers to assess the feasibility of fulfilling SLAs
• Highly scalable data consistency models:
– beyond eventual consistency
Future work
• Focus on elastic scaling, taking into account data grid dynamics:
– consistency costs, transaction conflicts
• Study effects of self-tuning multiple, mutually dependent layers of the data grid
• Highly scalable quasi-serializable protocols
THANKS FOR YOUR ATTENTION