
Improving Natural Language Understanding via Contrastive Learning Methods

by

Pengyu Cheng

Department of Electrical and Computer Engineering
Duke University

Date:
Approved:

Lawrence Carin, Advisor

Yiran Chen

Rong Ge

Ricardo Henao Giraldo

Vahid Tarokh

Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy

in the Department of Electrical and Computer Engineering in the Graduate School of

Duke University

2021


ABSTRACT

Improving Natural Language Understanding via Contrastive Learning Methods

by

Pengyu Cheng

Department of Electrical and Computer Engineering
Duke University

Date:
Approved:

Lawrence Carin, Advisor

Yiran Chen

Rong Ge

Ricardo Henao Giraldo

Vahid Tarokh

An abstract of a dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy

in the Department of Electrical and Computer Engineering in the Graduate School of

Duke University

2021


Copyright © 2021 by Pengyu Cheng

All rights reserved


Abstract

Natural language understanding (NLU) is an essential but challenging task in Natural Language Processing (NLP), which aims to automatically extract and understand semantic information from raw text or voice data. Among previous NLU solutions, representation learning methods, which map textual data into low-dimensional vector spaces for downstream tasks, have recently become the mainstream. With the development of deep neural networks, text representation learning has achieved state-of-the-art performance in many NLP scenarios.

Although text representation learning methods with large-scale network encoders have shown significant empirical gains, many essential properties of the text encoders remain unexplored, which hinders the models' further application to real-world scenarios: (1) the high computational complexity of large-scale deep networks limits text encoders from being deployed on a broader range of devices, especially those with low computational capacity; (2) the mechanism of the networks is opaque, limiting control of the latent representations for downstream tasks; (3) representation learning methods are data-driven, leading to inherent social bias problems when trained on unbalanced data.

To address the problems above in deep text encoders, I propose a series of effective contrastive learning methods, which supervise the encoders by enlarging the difference between positive and negative data sample pairs. In this thesis, I first present a theoretical contrastive learning tool, which bridges contrastive learning methods and mutual information in information theory. Then, I apply contrastive learning to several NLU scenarios to improve the text encoders' effectiveness, interpretability, and fairness.


Acknowledgements

Looking back on the past four years of my Ph.D. life, I find every moment treasured, especially those with challenges and difficulties. My mind is full of appreciation for the people I cooperated with, was accompanied by, and grew together with on this unforgettable journey.

First, I would like to express my gratitude to my advisor, Dr. Lawrence Carin, who coached me with constructive instruction and provided me with ample freedom to explore my research interests. His profound knowledge, rigorous scholarship, and impressive diligence continuously inspired me to overcome challenges and surpass myself. More importantly, he set an example that taught me the importance of leadership, cooperation, and communication in teamwork.

Besides, I would like to thank my dissertation committee, Dr. Yiran Chen, Dr. Vahid Tarokh, Dr. Ricardo Henao, and Dr. Rong Ge, for their generous help and valuable feedback on this thesis. I also appreciate the suggestions from Dr. David Carlson, Dr. Galen Reeves, and Dr. Henry Pfister, who served as my qualifying exam committee at the beginning of my research.

I had two memorable internship experiences during my graduate study. I deeply appreciate the reception from my internship hosts: Martin Renqiang Min at NEC Labs America, and Jingjing Liu, Zhe Gan, Yu Cheng, and Shuohang Wang at Microsoft. They helped me settle into new working environments and broadened my research from an industry perspective.

Collaborating and discussing with many intelligent and hardworking people has been a great pleasure. I want to thank my colleagues: Chang Liu, Hongteng Xu, Dixin Luo, Chenyang Tao, Dong Wang, Lei Zhang, Yulai Cong, Shijing Si, Yunchen Pu, Yizhe Zhang, Chunyuan Li, Wenlin Wang, Dinghan Shen, Yitong Li, Xinyuan Zhang, Kevin Liang, Gregory Spell, Guoyin Wang, Liqun Chen, Shuyang Dai, Ruiyi Zhang, Jianqiao Li, Paidamoyo Chapfuwa, Siyang Yuan, Hao Fu, Weituo Hao, Jiachang Liu, Rui Wang, Yuan Li, Bai Li, Serge Assad, Nikhil Mehta, Dhanasekar Sundararaman, Jianyi Zhang, and Hao Zhang. The time we spent together will always remain a precious memory.

Studying abroad is never an easy experience for international students like me, especially during the difficult time of the Covid-19 pandemic. Thanks go to a group of truehearted friends, who accompanied me with firm mental support. I would like to express my appreciation to Ming Yang, Fengze Liu, Jidong Li, Dongruo Zhou, Lang Liu, Ke Bai, Xiang Wang, Xiao Peng, Simeng Deng, Yan Zhang, Bingyuan Liu, Moquan Jiang, and Minghao Hu.

Finally, I want to express my sincerest gratitude to my dear parents, Zhiyi Cheng and Xiaoyan Zhang. No matter how many ups and downs I have to face, their unconditional love and tolerance are always my strongest backing.


Contents

Abstract
Acknowledgements
List of Figures
List of Tables

1 Introduction
    1.1 Natural Language Understanding
    1.2 Contrastive Learning Methods
    1.3 Mutual Information Estimation

2 Improving Efficiency of Text Representations
    2.1 Introduction
    2.2 Related Work
    2.3 Proposed Approach
        2.3.1 Hard Threshold
        2.3.2 Random Projection
        2.3.3 Principal Component Analysis
        2.3.4 Autoencoder Architecture
        2.3.5 Semantic-preserving Regularizer
    2.4 Experimental Setup
        2.4.1 Pre-trained Continuous Embeddings
        2.4.2 Training Details
        2.4.3 Evaluation
        2.4.4 Baselines
    2.5 Experimental Results
        2.5.1 Task Transfer Evaluation
        2.5.2 Nearest Neighbor Retrieval
        2.5.3 Ablation Study
    2.6 Conclusion

3 Contrastive Log-ratio Upper Bound of Mutual Information
    3.1 Introduction
    3.2 Background
    3.3 Proposed Method
        3.3.1 CLUB with p(y|x) Known
        3.3.2 CLUB with Conditional Distributions Unknown
        3.3.3 CLUB in MI Minimization
    3.4 Experiments
        3.4.1 MI Estimation Quality
        3.4.2 Time Efficiency of MI Estimators
        3.4.3 MI Minimization in Information Bottleneck
        3.4.4 MI Minimization in Domain Adaptation
    3.5 Conclusions

4 Improving Representation Disentanglement for Text Data
    4.1 Introduction
    4.2 Preliminary
        4.2.1 Mutual Information Variational Bounds
        4.2.2 Variation of Information
    4.3 Method
        4.3.1 Theoretical Justification of the Objective
        4.3.2 MI Variational Lower Bound
        4.3.3 MI Sample-based Upper Bound
        4.3.4 Encoder-Decoder Framework
    4.4 Related Work
        4.4.1 Disentangled Representation Learning
        4.4.2 Mutual Information Estimation
    4.5 Experiments
        4.5.1 Datasets
        4.5.2 Experimental Setup
        4.5.3 Embedding Disentanglement Quality
        4.5.4 Embedding Representation Quality
    4.6 Conclusions

5 Improving Representation Disentanglement for Voice Data
    5.1 Introduction
    5.2 Background
    5.3 Proposed Model
        5.3.1 MI-based Disentangling Objective
        5.3.2 MI Lower Bound Estimation
        5.3.3 MI Upper Bound Estimation
        5.3.4 Encoder-Decoder Framework
    5.4 Related Work
    5.5 Experiments
        5.5.1 Evaluation Metrics
        5.5.2 Implementation Details
        5.5.3 Style Transfer Performance
        5.5.4 Disentanglement Discussion
        5.5.5 Ablation Study
    5.6 Conclusions

6 Improving Fairness of Text Understanding Models
    6.1 Introduction
    6.2 Method
        6.2.1 Data Augmentations with Sensitive Attributes
        6.2.2 Contrastive Learning Framework
        6.2.3 Debiasing Regularizer
    6.3 Related Work
        6.3.1 Bias in Natural Language Processing
        6.3.2 Contrastive Learning
    6.4 Experiments
        6.4.1 Bias Evaluation Metric
        6.4.2 Pretrained Encoders
        6.4.3 Training of FairFil
        6.4.4 Debiasing Results
        6.4.5 Analysis
    6.5 Conclusions

7 Conclusions

Bibliography

Biography

List of Figures

1.1 Comparison between generative/predictive learning and contrastive learning.
2.1 Proposed binarized embedding architectures.
2.2 The comparison between deterministic and stochastic sampling for the autoencoder strategy.
2.3 The test accuracy of different models on the MR dataset across 512, 1024, 2048, and 4096 bits for the learned binary representations.
3.1 Simulation performance of MI estimators.
3.2 Estimation quality comparison of MI estimators.
3.3 Estimator speed comparison with different batch sizes. Both axes have a logarithmic scale.
3.4 The information-theoretical framework for unsupervised domain adaptation.
4.1 Illustration of the concept of variation of information (VI).
4.2 The framework of IDEL.
4.3 t-SNE plots of IDEL latent spaces on Yelp.
4.4 t-SNE plots of IDEL− without I(s; c).
5.1 Training and transfer processes of IDE-VC.
5.2 Left: t-SNE visualization of speaker embeddings. Right: t-SNE visualization of content embeddings.
6.1 (a) Contrastive learning framework of FairFil; (b) illustration of information in d and d′.
6.2 Influence of the training data proportion on the debiasing degree of BERT.
6.3 t-SNE plots of words contextualized in templates. Left-hand side: the original pretrained BERT; right-hand side: FairFil.

List of Tables

2.1 Performance on the test set for 10 downstream tasks.
2.2 Nearest neighbor retrieval results on the SNLI dataset.
2.3 Ablation study for the AE-binary-SP model with different choices of λsp (evaluated with test accuracy on the MR dataset).
3.1 Performance on permutation-invariant MNIST classification.
3.2 Performance comparison on UDA. Datasets are MNIST (M), MNIST-M (MM), USPS (U), SVHN (SV), CIFAR-10 (C), and STL (S).
4.1 Performance comparison of text DRL models.
4.2 Sample divergences between positive and negative content embeddings.
4.3 Sample divergences between positive and negative style embeddings.
4.4 Examples of text style transfer on the Yelp dataset. Style-related words are in bold.
4.5 Manual evaluation for style transfer on Yelp.
4.6 Ablation tests for style transfer on Yelp.
5.1 Many-to-many VST evaluation results. For all metrics except Distance, higher is better.
5.2 Zero-shot VST evaluation results. For all metrics except Distance, higher is better.
5.3 Speaker identity prediction accuracy on content embeddings.
5.4 Ablation study with different training losses. Performance is measured by objective metrics.
6.1 Examples of generating an augmented sentence under the sensitive topic "gender".
6.2 Performance of debiased embeddings on pretrained BERT and BERT post SST-2.
6.3 Performance of debiased embeddings on BERT post CoLA and BERT post QNLI.
6.4 Comparison of average debiasing performance on pretrained BERT.

Chapter 1

Introduction

Natural Language Processing (NLP) is one of the cutting-edge research fields of Artificial Intelligence (AI). In the domain of NLP, natural language understanding (NLU) is an essential task, which aims to extract and understand semantic information from raw-text corpora. Recently, text representation learning, which learns a parameterized encoder to map raw-text data into low-dimensional representation vectors, has become the mainstream solution for natural language understanding [CKS+17a, DCLT19]. With the development of deep neural networks, current large-scale text representation encoders, such as BERT [DCLT19] and RoBERTa [LOG+19], have shown significant performance improvements on many NLU downstream tasks.

Although these text representation learning approaches have been widely applied in various application scenarios, many important properties of the text encoders remain unexplored. Here we take three important properties as examples: (1) Efficiency: current state-of-the-art text representation encoders are implemented as large deep neural networks with high-dimensional real-valued outputs. With real-valued high-dimensional embeddings, the application scenarios of text encoders are limited, especially on devices with low computational capacity such as mobile devices. How to reduce the computational complexity of text encoding methods while keeping satisfying performance remains a valuable research topic. (2) Interpretability: knowing that the encoders output representations carrying rich information from the raw-text data, how can we interpret the low-dimensional representation vectors with human-understandable explanations? (3) Fairness: data-driven NLU models are inevitably subject to the social bias problem, which originally comes from the input data (e.g., text encoders can unconsciously learn gender bias from biased input sentences). How to eliminate social bias in text representation learning models is also an important topic that lacks exploration.

To address the aforementioned problems, I studied a series of effective learning methods commonly recognized as contrastive learning methods. Contrastive learning improves data-driven models by designing positive and negative data sample pairs and learning critics that enlarge the difference between positive and negative sample pairs. In prior works, contrastive learning methods have already shown effectiveness in extensive machine learning tasks, such as metric learning, graph representation learning, and word embedding learning. Recently, theoretical justification has been proposed for contrastive learning from the perspective of information theory.

In this thesis, I describe in detail how I utilized contrastive learning methods to improve representation learning for NLP with respect to the three problems mentioned above. More specifically, I divide the research progress into four parts: (1) prepare and build the theoretical tools of contrastive learning for natural language understanding applications; (2) improve the efficiency of NLU models with the contrastive learning tools; (3) improve the interpretability of NLU models via the proposed contrastive methods; (4) use contrastive learning to reduce social bias in NLU models. In this chapter, I first introduce the background of natural language understanding, contrastive learning, and mutual information as preparation for the further description of my research.

1.1 Natural Language Understanding

Natural language understanding (NLU) aims to automatically understand and extract information from raw-text or speech data, and is one of the fundamental but challenging tasks in natural language processing. Prior work on NLU includes syntax analysis [DS98], regular expressions [VNG99], and bag-of-words models [ZJZ10]. Recently, with the development of deep neural networks [LWL+17], text representation learning [YLZ+15] has become a commonly-used NLU technique. More specifically, text representation learning tokenizes sentences into sequences of words and encodes them into low-dimensional representation vectors (also called embeddings [CKS+17a]). To obtain representations with rich semantic meaning, various text encoder structures have been introduced, including word embedding aggregation, convolutional neural networks [CMG17], recurrent neural networks [CKS+17a], and attention mechanisms [VSP+17]. Currently, large-scale pretrained text representation encoders, such as BERT [DCLT19] and RoBERTa [LOG+19], have achieved state-of-the-art performance on extensive NLU downstream tasks, including sentiment analysis [DCLT19], machine translation [ZXW+20], and question answering [QYQ+19].

Although deep text encoders have shown remarkable performance on NLU tasks, many important encoder properties still lack exploration. Here we mainly focus on three important properties:

Efficiency: Most text embedding methods typically assume that general-purpose sentence representations are continuous and real-valued. However, this assumption is sub-optimal from the following perspectives: i) the sentence embeddings require a large storage or memory footprint; ii) it is computationally expensive to retrieve semantically-similar sentences, since every sentence representation in the database needs to be compared, and the inner product operation is computationally involved. These two disadvantages hinder the applicability of generic sentence representations to mobile devices, where a relatively tiny memory footprint and low computational capacity are typically available [RK18].

Interpretability: Although text representation learning methods extract rich information from sentences into low-dimensional vectors, how to interpret the learned embeddings remains a difficult problem. For example, when calculating embedding similarities for user reviews, we cannot distinguish whether the similarity between reviews reflects user sentiment, the product discussed, or syntactic patterns, given only one overall embedding of each review. The lack of interpretability hinders text encoders from handling more generalized NLP tasks, such as transfer learning [JMBV19a]. To address the interpretability problem, the concept of disentangled representations has been introduced, in which different aspects of the sentence information are separated into different parts of the text embedding [JBvdM+18].

Fairness: The fairness issue is also broadly recognized as social bias, which denotes unbalanced model behaviors with respect to socially sensitive topics, such as gender, race, and religion [LLZ+20]. For data-driven NLU models, social bias is an intrinsic problem mainly caused by the unbalanced data of text corpora [BCZ+16]. To quantitatively measure the bias degree of models, prior works proposed several statistical tests [CBN17, CM19, BAHAZ19], mostly focusing on word-level embedding models. To evaluate sentence-level bias in embedding spaces, [MWB+19] extended the Word Embedding Association Test (WEAT) [CBN17] into a Sentence Encoder Association Test (SEAT). Based on the SEAT test, [MWB+19] claimed the existence of social bias in pretrained sentence encoders.

1.2 Contrastive Learning Methods

Contrastive learning is a broad class of training strategies that learn meaningful representations by making positive and negative embedding pairs more distinguishable. Usually, contrastive learning requires a pairwise embedding critic as a similarity/distance measure over data pairs. The learning objective is then constructed by maximizing the margin between the critic values of positive data pairs and negative data pairs.

Figure 1.1: Comparison between generative/predictive learning and contrastive learning.

Generally, contrastive learning requires a well-designed score function satisfying:

$$ \mathrm{score}(f(x), f(x^+)) \gg \mathrm{score}(f(x), f(x^-)), \qquad (1.1) $$

where $(x, x^+)$ are supposed to be similar (a positive pair), and $(x, x^-)$ are supposed to be dissimilar (a negative pair).
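To make equation 1.1 concrete, below is a minimal PyTorch sketch of a margin-based contrastive loss. This is my own illustration rather than code from the proposed methods; cosine similarity plays the role of the score function, and the batch size, embedding dimension, and margin are placeholder assumptions:

```python
import torch
import torch.nn.functional as F

def margin_contrastive_loss(f_x, f_pos, f_neg, margin=1.0):
    """Push score(f(x), f(x+)) above score(f(x), f(x-)) by at least `margin`.

    Cosine similarity serves as the score here; any learnable critic
    could be substituted.
    """
    pos_score = F.cosine_similarity(f_x, f_pos, dim=-1)
    neg_score = F.cosine_similarity(f_x, f_neg, dim=-1)
    # Hinge: zero loss once the positive score beats the negative by `margin`.
    return F.relu(margin - pos_score + neg_score).mean()

# Toy usage: random vectors stand in for encoder outputs f(x), f(x+), f(x-).
f_x, f_pos, f_neg = torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128)
print(margin_contrastive_loss(f_x, f_pos, f_neg))
```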

Contrastive learning has previously shown encouraging performance in various tasks. For example, metric learning [WBS06, DKJ+07] treats the target learnable distance as the score function, which results in the form:

$$ \min_{d_w} \ \mathrm{ReLU}\big(1 + d_w(x, x^+) - d_w(x, x^-)\big), \qquad (1.2) $$

where $d_w(\cdot, \cdot)$ is a learnable distance with parameters $w$. The intuition of equation 1.2 is that a good metric should assign larger distances $d_w(x, x^-)$ to negative data pairs and smaller distances $d_w(x, x^+)$ to similar data pairs. Graph embedding (Node2Vec) [GL16] also utilizes contrastive learning. The objective of Node2Vec is:

(Node2Vec) [GL16] also utilized contrastive learning. The objective of Node2Vec is:

maxZ

∑u∈V

[∑

w∈N (u)

zTu zw − log(∑v∈V

exp zTv zu)], (1.3)

which maximizes the difference in inner products between positive pairs (connected nodes) $z_u^{\top} z_w$ and negative pairs $z_v^{\top} z_u$.
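As a sketch of how equation 1.3 can be evaluated, the following is my own toy illustration; the graph, adjacency list, and embedding size are arbitrary assumptions:

```python
import torch

def node2vec_objective(Z, neighbors):
    """Equation (1.3): for each node u, sum z_u^T z_w over neighbors w,
    minus the log-partition term log sum_v exp(z_v^T z_u)."""
    # log sum_v exp(z_v^T z_u) for every u at once; shape (|V|,)
    log_partition = torch.logsumexp(Z @ Z.t(), dim=0)
    total = torch.zeros(())
    for u, nbrs in neighbors.items():
        total = total + (Z[nbrs] @ Z[u]).sum() - log_partition[u]
    return total  # to be maximized

# Toy usage: 4 nodes with 16-dim embeddings and a small adjacency list.
Z = torch.randn(4, 16, requires_grad=True)
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
(-node2vec_objective(Z, neighbors)).backward()  # gradient ascent on (1.3)
```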

Recently, contrastive learning has been applied to unsupervised visual representation learning, and has significantly reduced the performance gap between supervised and unsupervised learning [HFW+20, CKNH20, QMG+20]. Among these unsupervised methods, [CKNH20] proposed a simple multi-view contrastive learning framework (SimCLR). For each image, SimCLR generates two augmented images, and the mutual information between the two augmentation embeddings is then maximized within a batch of training data.

1.3 Mutual Information Estimation

Mutual information (MI) is a fundamental measure of the dependence between two random variables. Mathematically, the MI between variables $x$ and $y$ is defined as

$$ I(x; y) = \mathbb{E}_{p(x,y)}\Big[ \log \frac{p(x,y)}{p(x)\,p(y)} \Big]. \qquad (1.4) $$
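As a quick numerical illustration of equation 1.4 (my own example, not from the thesis), MI can be computed exactly for a small discrete joint distribution:

```python
import numpy as np

# Toy joint distribution p(x, y) over two binary variables.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1, keepdims=True)  # marginal p(x), shape (2, 1)
p_y = p_xy.sum(axis=0, keepdims=True)  # marginal p(y), shape (1, 2)

# I(x; y) = sum over (x, y) of p(x,y) * log[ p(x,y) / (p(x) p(y)) ]
mi = float(np.sum(p_xy * np.log(p_xy / (p_x * p_y))))
print(mi)  # ~0.193 nats; independent variables would give 0
```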

This important tool has been applied in a wide range of scientific fields, including statistics [GL94, JYL15], bioinformatics [LGLC16, ZANMB16], robotics [JKR14, CLKM15], and machine learning [CDH+16, AFDM17, HFLM+18, CMS+20].

In machine learning, especially in deep learning frameworks, MI is typically utilized as a criterion or a regularizer in loss functions, to encourage or limit the dependence between variables. MI maximization has been studied extensively in various tasks, e.g., representation learning [HFLM+18, HMT+17], generative models [CDH+16], information distillation [AHD+19], and reinforcement learning [FDA17]. Recently, MI minimization has received increased attention for its applications in disentangled representation learning [CLGD18], style transfer [KST+18], domain adaptation [GSR+18], fairness [KAS11], and the information bottleneck [AFDM17].

Although MI has widespread use in numerous applications, only in a few special cases can one calculate its exact value, since the calculation requires closed forms of the density functions and a tractable log-density ratio between the joint and marginal distributions. Therefore, various MI estimation methods have been proposed. Earlier MI estimation approaches include non-parametric binning [DV99], kernel density estimation [HMSW04], likelihood-ratio estimation [SSSK08], and k-nearest neighbor entropy estimation [KSG04]. These methods fail to provide reliable approximations when the data dimension increases [BBR+18]. Also, the gradients of these estimators are difficult to calculate, which makes them inapplicable to back-propagation frameworks for MI optimization tasks.

To obtain differentiable and scalable MI estimation, recent approaches utilize deep neural networks to construct variational MI estimators. Most of these estimators focus on problems involving MI maximization, and provide MI lower bounds. Specifically, [BA03] replaces the conditional distribution $p(x|y)$ with an auxiliary distribution $q(x|y)$, and obtains the Barber-Agakov (BA) bound:

$$ I_{\mathrm{BA}} := H(x) + \mathbb{E}_{p(x,y)}[\log q(x|y)] \le I(x; y), \qquad (1.5) $$

where $H(x)$ is the entropy of variable $x$. [BBR+18] introduces a Mutual Information Neural Estimator (MINE), which treats MI as the Kullback-Leibler (KL) divergence [Kul97] between the joint and marginal distributions, and converts it into the dual representation:

$$ I_{\mathrm{MINE}} := \mathbb{E}_{p(x,y)}[f(x,y)] - \log\big( \mathbb{E}_{p(x)p(y)}[e^{f(x,y)}] \big), \qquad (1.6) $$

where $f(\cdot, \cdot)$ is a score function (or, a critic) approximated by a neural network. Nguyen, Wainwright, and Jordan (NWJ) [NWJ10] derive another lower bound based on the $f$-divergence representation of MI:

$$ I_{\mathrm{NWJ}} := \mathbb{E}_{p(x,y)}[f(x,y)] - \mathbb{E}_{p(x)p(y)}[e^{f(x,y)-1}]. \qquad (1.7) $$

More recently, based on Noise Contrastive Estimation (NCE) [GH10], an MI lower bound called InfoNCE was introduced in [OLV18]:

$$ I_{\mathrm{NCE}} := \mathbb{E}\Bigg[ \frac{1}{N} \sum_{i=1}^{N} \log \frac{e^{f(x_i, y_i)}}{\frac{1}{N} \sum_{j=1}^{N} e^{f(x_i, y_j)}} \Bigg], \qquad (1.8) $$

where the expectation is over $N$ samples $\{(x_i, y_i)\}_{i=1}^{N}$ drawn from the joint distribution $p(x, y)$.
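For reference, here is a minimal sketch of the InfoNCE estimator in equation 1.8. It is my own illustration; the fixed bilinear critic and sample sizes are placeholder assumptions, whereas in practice the critic is a trained network:

```python
import math
import torch

def infonce_lower_bound(scores):
    """Equation (1.8) given an N x N critic matrix with
    scores[i, j] = f(x_i, y_j); the diagonal holds the positive pairs."""
    n = scores.size(0)
    # log [ e^{f(x_i,y_i)} / ((1/N) sum_j e^{f(x_i,y_j)}) ], averaged over i
    log_ratios = scores.diag() - torch.logsumexp(scores, dim=1) + math.log(n)
    return log_ratios.mean()  # lower-bounds I(x; y), saturating at log N

# Toy usage: a fixed bilinear critic f(x, y) = x^T W y on random paired samples.
x, y = torch.randn(64, 10), torch.randn(64, 10)
W = torch.randn(10, 10)
print(infonce_lower_bound(x @ W @ y.t()))
```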

Unlike the above MI lower bounds, which have been studied extensively, MI upper bounds still lack extensive published exploration. Most existing MI upper bounds require the conditional distribution $p(y|x)$ to be known. For example, [AFDM17] introduces a variational marginal approximation $r(y)$ to build a variational upper bound (VUB):

$$
\begin{aligned}
I(x; y) &= \mathbb{E}_{p(x,y)}\Big[ \log \frac{p(y|x)}{p(y)} \Big] \\
&= \mathbb{E}_{p(x,y)}\Big[ \log \frac{p(y|x)}{r(y)} \Big] - \mathrm{KL}\big(p(y)\,\|\,r(y)\big) \\
&\le \mathbb{E}_{p(x,y)}\Big[ \log \frac{p(y|x)}{r(y)} \Big] = \mathbb{E}_{p(x)}\big[ \mathrm{KL}\big(p(y|x)\,\|\,r(y)\big) \big]. \qquad (1.9)
\end{aligned}
$$

The inequality is based on the fact that the KL divergence is always non-negative. To be a good MI estimate, this upper bound requires a well-learned density approximation $r(y)$ to $p(y)$, so that the difference $\mathrm{KL}(p(y)\|r(y))$ is small. However, learning a good marginal approximation $r(y)$ without any additional information, recognized as the distribution density estimation problem [MIA99], is challenging, especially when the variable $y$ is in a high-dimensional space. In practice, [AFDM17] fixes $r(y)$ as a standard normal distribution, $r(y) = \mathcal{N}(y|0, I)$, which results in a high-bias MI estimate. With $N$ sample pairs $\{(x_i, y_i)\}_{i=1}^{N}$, [POVDO+19] replaces $r(y)$ with the Monte Carlo approximation $r_i(y) = \frac{1}{N-1}\sum_{j \neq i} p(y|x_j) \approx p(y)$ and derives a leave-one-out upper bound (L1Out):

$$ I_{\mathrm{L1Out}} := \mathbb{E}\Bigg[ \frac{1}{N} \sum_{i=1}^{N} \log \frac{p(y_i|x_i)}{\frac{1}{N-1} \sum_{j \neq i} p(y_i|x_j)} \Bigg]. \qquad (1.10) $$

This bound does not require any additional parameters, but depends highly on a sufficient sample size to achieve a satisfying Monte Carlo approximation. In practice, L1Out suffers from numerical instability when applied to real-world MI minimization problems.

To overcome the defects of previous MI estimators, I will introduce a Contrastive Log-ratio Upper Bound (CLUB) in Chapter 3. CLUB bridges mutual information estimation with contrastive learning [OLV18], where MI is estimated by the difference of conditional probabilities between positive and negative sample pairs.
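To preview the idea, here is a minimal sketch of such a contrastive estimate. This is my own paraphrase ahead of Chapter 3, assuming a diagonal-Gaussian variational approximation $q(y|x)$ whose mean and log-variance networks have already been fit; the random inputs stand in for those networks' outputs:

```python
import torch

def club_style_estimate(mu, logvar, y):
    """Mean positive-pair log-density minus mean negative-pair log-density
    under q(y|x) = N(mu(x), diag(exp(logvar(x)))).

    mu, logvar: (N, d) variational-network outputs at each x_i; y: (N, d).
    """
    # log q(y_j | x_i) for every pair (i, j), up to the constant -d/2*log(2*pi),
    # which cancels between the positive and negative terms below.
    diff = y.unsqueeze(0) - mu.unsqueeze(1)               # (N, N, d): [i, j] = y_j - mu_i
    log_q = (-diff.pow(2) / (2 * logvar.exp().unsqueeze(1))
             - 0.5 * logvar.unsqueeze(1)).sum(-1)         # (N, N)
    positive = log_q.diagonal().mean()                    # pairs (x_i, y_i)
    negative = log_q.mean()                               # all pairs (x_i, y_j)
    return positive - negative

# Toy usage with random stand-ins for the variational network's outputs.
mu, logvar, y = torch.randn(32, 8), torch.zeros(32, 8), torch.randn(32, 8)
print(club_style_estimate(mu, logvar, y))
```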


Chapter 2

Improving Efficiency of Text Representations

2.1 Introduction

Learning general-purpose sentence representations from large training corpora has received widespread attention in recent years. The learned sentence embeddings can encapsulate rich prior knowledge of natural language, which has been demonstrated to facilitate a variety of downstream tasks (without fine-tuning the encoder weights). Generic sentence embeddings can be trained either in an unsupervised manner [KZS+15a, HCK16, JBS17, GPH+17, LL18, PGJ18], or with supervised tasks such as paraphrase identification [WBGL16], natural language inference [CKS+17b], discourse relation classification [NBG17], and machine translation [WG18].

Significant effort has been devoted to designing better training objectives for learning sentence embeddings. However, prior methods typically assume that the general-purpose sentence representations are continuous and real-valued. This assumption is sub-optimal from the following perspectives: i) the sentence embeddings require a large storage or memory footprint; ii) it is computationally expensive to retrieve semantically-similar sentences, since every sentence representation in the database needs to be compared, and the inner product operation is computationally involved. These two disadvantages hinder the applicability of generic sentence representations to mobile devices, where a relatively tiny memory footprint and low computational capacity are typically available [RK18].

In this chapter, we aim to mitigate the above issues by binarizing the continuous sentence embeddings. Consequently, the embeddings require a much smaller footprint, and similar sentences can be obtained by simply selecting those with the closest binary codes in the Hamming space [KC18]. One simple idea is to naively binarize the continuous vectors by setting a hard threshold. However, we find that this strategy leads to a significant performance drop in the empirical results. Besides, the dimension of the binary sentence embeddings cannot be flexibly chosen with this strategy, further limiting the practical use of the direct binarization method.

In this regard, we propose three alternative strategies to parametrize the transformation from pre-trained generic continuous embeddings to their binary forms. Our exploration spans from simple operations, such as a random projection, to deep neural network models, such as a regularized autoencoder. In particular, we introduce a semantic-preserving objective, which augments the standard autoencoder architecture to encourage abstracting informative binary codes. InferSent [CKS+17b] is employed as the testbed sentence embedding in our experiments, but the binarization schemes proposed here can easily be extended to other pre-trained general-purpose sentence embeddings. We evaluate the quality of the learned general-purpose binary representations using the SentEval toolkit [CKS+17b]. We observe that the inferred binary codes successfully maintain the semantic features contained in the continuous embeddings, and lead to only around a 2% performance drop on a set of downstream NLP tasks, while requiring merely 1.5% of the memory footprint of their continuous counterparts.

Moreover, on several sentence-matching benchmarks, we demonstrate that the relatedness between a sentence pair can be evaluated by simply calculating the Hamming distance between their binary codes, which performs on par with or even better than measuring the cosine similarity between the continuous embeddings (see Table 2.1). Note that computing the Hamming distance is much more computationally efficient than the inner product operation in a continuous space. We further perform a K-nearest-neighbor sentence retrieval experiment on the SNLI dataset [BAPM15a], and show that semantically-similar sentences can indeed be efficiently retrieved with off-the-shelf binary sentence representations. Summarizing, our contributions in this chapter are as follows:

i) to the best of our knowledge, we conduct the first systematic exploration of learning general-purpose binarized (memory-efficient) sentence representations, and four different strategies are proposed;

ii) an autoencoder architecture with a carefully-designed semantic-preserving loss exhibits strong empirical results on a set of downstream NLP tasks;

iii) more importantly, we demonstrate, on several sentence-matching datasets, that simply evaluating the Hamming distance over binary representations performs on par with or even better than calculating the cosine similarity between their continuous counterparts (which is less computationally efficient).

2.2 Related Work

Sentence representations pre-trained from large amounts of data have been shown to be effective when transferred to a wide range of downstream tasks. Prior work along this line can be roughly divided into two categories: i) pre-trained models that require fine-tuning on the specific transfer task [DL15, RH18, RNSS18, DCLT18, CYyK+18]; and ii) methods that extract general-purpose sentence embeddings, which can be effectively applied to downstream NLP tasks without fine-tuning the encoder parameters [KZS+15a, HCK16, JBS17, GPH+17, AKB+17, LL18, PGJ18, TdS18]. Our proposed methods belong to the second category and provide a generic and easy-to-use encoder to extract highly informative sentence representations. However, our work is unique in that the embeddings inferred from our models are binarized and compact, and thus possess the advantages of a small memory footprint and much faster sentence retrieval.

Learning memory-efficient embeddings with deep neural networks has attracted substantial attention recently. One general strategy towards this goal is to extract discrete or binary data representations [JGP16, SN17, DGK+17, CMS18, SSC+18, THG19]. Binarized embeddings are especially attractive because they are more memory-efficient (relative to discrete embeddings), and they also enjoy the advantage of fast retrieval based upon Hamming distance calculation. Previous work along this line in NLP has mainly focused on learning compact representations at the word level [SN17, CMS18, THG19], while much less effort has been devoted to extracting binarized embeddings at the sentence level. Our work aims to bridge this gap, and serves as an initial attempt to facilitate the deployment of state-of-the-art sentence embeddings in on-device mobile applications.

Our work is also related to prior research on semantic hashing, which aims to learn binary text embeddings specifically for the information retrieval task [SH09, ZWCL10, WSSJ14, XWT+15, SSC+18]. However, these methods are typically trained and evaluated on documents that belong to a specific domain, and thus cannot serve as generic binary sentence representations applicable to a wide variety of NLP tasks. In contrast, our model is trained on large corpora and seeks to provide general-purpose binary representations that can be leveraged for various application scenarios.

2.3 Proposed Approach

We aim to produce compact and binarized representations from continuous sentence embeddings while preserving the associated semantic information. Let $x$ and $f$ denote, respectively, an input sentence and the function defined by a pre-trained general-purpose sentence encoder. Thus, $f(x)$ represents the continuous embedding extracted by the encoder. The goal of our model is to learn a universal transformation $g$ that can convert $f(x)$ into a highly informative binary sentence representation, i.e., $g(f(x))$, which can be used as generic features for a collection of downstream tasks. We explore four strategies to parametrize the transformation $g$.

2.3.1 Hard Threshold

We use $h$ and $b$ to denote the continuous and binary sentence embeddings, respectively, and $L$ denotes the dimension of $h$. The first method (shown in Figure 2.1) to binarize the continuous representations is to simply convert each dimension to either 0 or 1 based on a hard threshold. This strategy requires no training and directly operates on pre-trained continuous embeddings. Supposing $s$ is the hard threshold, we have, for $i = 1, 2, \dots, L$:

$$ b^{(i)} = \mathbb{1}_{h^{(i)} > s} = \frac{\mathrm{sign}(h^{(i)} - s) + 1}{2}. \qquad (2.1) $$

One potential issue of this direct binarization method is that the information contained in the continuous representations may be largely lost, since there is no training objective encouraging the preservation of semantic information in the produced binary codes [SSC+18]. Another disadvantage is that the length of the resulting binary code must be the same as that of the original continuous representation, and cannot be flexibly chosen. In practice, however, we may want to learn shorter binary embeddings to save memory or computation.
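For concreteness, here is a short NumPy sketch of the hard-threshold rule in equation 2.1 together with the Hamming-distance comparison it enables. It is illustrative only; random vectors stand in for the continuous sentence embeddings:

```python
import numpy as np

def binarize(h, s=0.0):
    """Equation (2.1): elementwise hard threshold, b_i = 1 if h_i > s else 0."""
    return (h > s).astype(np.uint8)

def hamming_distance(b1, b2):
    """Count of differing bits; on packed bits this is a cheap popcount(XOR)."""
    return int(np.count_nonzero(b1 != b2))

# Toy usage: threshold two random "sentence embeddings" and compare them.
h1, h2 = np.random.randn(2048), np.random.randn(2048)
b1, b2 = binarize(h1), binarize(h2)
print(hamming_distance(b1, b2))
```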

2.3.2 Random Projection

To tackle the limitations of the above direct binarization method, we consider an alternative strategy that likewise requires no training: simply applying a random projection over the pre-trained continuous representations. [WK18] has shown that random


1<latexit sha1_base64="TLhiafyz+7kVW7hdyQ94ij177M0=">AAAB+XicbVBNSwMxEJ2tX3X9qnr0EiyCp7IRQY9FLx4r2lpol5JNs21okl2SrFCW/gSvevYmXv01Hv0npu0ebOuDgcd7M8zMi1LBjQ2Cb6+0tr6xuVXe9nd29/YPKodHLZNkmrImTUSi2xExTHDFmpZbwdqpZkRGgj1Fo9up//TMtOGJerTjlIWSDBSPOSXWSQ/Y93uValALZkCrBBekCgUavcpPt5/QTDJlqSDGdHCQ2jAn2nIq2MTvZoalhI7IgHUcVUQyE+azUyfozCl9FCfalbJopv6dyIk0Ziwj1ymJHZplbyr+53UyG1+HOVdpZpmi80VxJpBN0PRv1OeaUSvGjhCqubsV0SHRhFqXzsKWSE5cJng5gVXSuqjhoIbvL6v1myKdMpzAKZwDhiuowx00oAkUBvACr/Dm5d679+F9zltLXjFzDAvwvn4BhVSTcw==</latexit><latexit sha1_base64="TLhiafyz+7kVW7hdyQ94ij177M0=">AAAB+XicbVBNSwMxEJ2tX3X9qnr0EiyCp7IRQY9FLx4r2lpol5JNs21okl2SrFCW/gSvevYmXv01Hv0npu0ebOuDgcd7M8zMi1LBjQ2Cb6+0tr6xuVXe9nd29/YPKodHLZNkmrImTUSi2xExTHDFmpZbwdqpZkRGgj1Fo9up//TMtOGJerTjlIWSDBSPOSXWSQ/Y93uValALZkCrBBekCgUavcpPt5/QTDJlqSDGdHCQ2jAn2nIq2MTvZoalhI7IgHUcVUQyE+azUyfozCl9FCfalbJopv6dyIk0Ziwj1ymJHZplbyr+53UyG1+HOVdpZpmi80VxJpBN0PRv1OeaUSvGjhCqubsV0SHRhFqXzsKWSE5cJng5gVXSuqjhoIbvL6v1myKdMpzAKZwDhiuowx00oAkUBvACr/Dm5d679+F9zltLXjFzDAvwvn4BhVSTcw==</latexit><latexit sha1_base64="TLhiafyz+7kVW7hdyQ94ij177M0=">AAAB+XicbVBNSwMxEJ2tX3X9qnr0EiyCp7IRQY9FLx4r2lpol5JNs21okl2SrFCW/gSvevYmXv01Hv0npu0ebOuDgcd7M8zMi1LBjQ2Cb6+0tr6xuVXe9nd29/YPKodHLZNkmrImTUSi2xExTHDFmpZbwdqpZkRGgj1Fo9up//TMtOGJerTjlIWSDBSPOSXWSQ/Y93uValALZkCrBBekCgUavcpPt5/QTDJlqSDGdHCQ2jAn2nIq2MTvZoalhI7IgHUcVUQyE+azUyfozCl9FCfalbJopv6dyIk0Ziwj1ymJHZplbyr+53UyG1+HOVdpZpmi80VxJpBN0PRv1OeaUSvGjhCqubsV0SHRhFqXzsKWSE5cJng5gVXSuqjhoIbvL6v1myKdMpzAKZwDhiuowx00oAkUBvACr/Dm5d679+F9zltLXjFzDAvwvn4BhVSTcw==</latexit><latexit sha1_base64="TLhiafyz+7kVW7hdyQ94ij177M0=">AAAB+XicbVBNSwMxEJ2tX3X9qnr0EiyCp7IRQY9FLx4r2lpol5JNs21okl2SrFCW/gSvevYmXv01Hv0npu0ebOuDgcd7M8zMi1LBjQ2Cb6+0tr6xuVXe9nd29/YPKodHLZNkmrImTUSi2xExTHDFmpZbwdqpZkRGgj1Fo9up//TMtOGJerTjlIWSDBSPOSXWSQ/Y93uValALZkCrBBekCgUavcpPt5/QTDJlqSDGdHCQ2jAn2nIq2MTvZoalhI7IgHUcVUQyE+azUyfozCl9FCfalbJopv6dyIk0Ziwj1ymJHZplbyr+53UyG1+HOVdpZpmi80VxJpBN0PRv1OeaUSvGjhCqubsV0SHRhFqXzsKWSE5cJng5gVXSuqjhoIbvL6v1myKdMpzAKZwDhiuowx00oAkUBvACr/Dm5d679+F9zltLXjFzDAvwvn4BhVSTcw==</latexit>

0<latexit sha1_base64="JxFURWUH6R+vJirwMs9sg7WrHjU=">AAAB+HicbVBNSwMxEJ2tX7V+VT16CRbBU8mKoMeiF49V7Ae0S8mm2TY0yS5JVqhL/4FXPXsTr/4bj/4T03YPtvXBwOO9GWbmhYngxmL87RXW1jc2t4rbpZ3dvf2D8uFR08SppqxBYxHrdkgME1yxhuVWsHaiGZGhYK1wdDv1W09MGx6rRztOWCDJQPGIU2Kd9IBLvXIFV/EMaJX4OalAjnqv/NPtxzSVTFkqiDEdHyc2yIi2nAo2KXVTwxJCR2TAOo4qIpkJstmlE3TmlD6KYu1KWTRT/05kRBozlqHrlMQOzbI3Ff/zOqmNroOMqyS1TNH5oigVyMZo+jbqc82oFWNHCNXc3YrokGhCrQtnYUsoJy4TfzmBVdK8qPq46t9fVmo3eTpFOIFTOAcfrqAGd1CHBlCI4AVe4c179t69D+9z3lrw8pljWID39QtO0pNe</latexit><latexit sha1_base64="JxFURWUH6R+vJirwMs9sg7WrHjU=">AAAB+HicbVBNSwMxEJ2tX7V+VT16CRbBU8mKoMeiF49V7Ae0S8mm2TY0yS5JVqhL/4FXPXsTr/4bj/4T03YPtvXBwOO9GWbmhYngxmL87RXW1jc2t4rbpZ3dvf2D8uFR08SppqxBYxHrdkgME1yxhuVWsHaiGZGhYK1wdDv1W09MGx6rRztOWCDJQPGIU2Kd9IBLvXIFV/EMaJX4OalAjnqv/NPtxzSVTFkqiDEdHyc2yIi2nAo2KXVTwxJCR2TAOo4qIpkJstmlE3TmlD6KYu1KWTRT/05kRBozlqHrlMQOzbI3Ff/zOqmNroOMqyS1TNH5oigVyMZo+jbqc82oFWNHCNXc3YrokGhCrQtnYUsoJy4TfzmBVdK8qPq46t9fVmo3eTpFOIFTOAcfrqAGd1CHBlCI4AVe4c179t69D+9z3lrw8pljWID39QtO0pNe</latexit><latexit sha1_base64="JxFURWUH6R+vJirwMs9sg7WrHjU=">AAAB+HicbVBNSwMxEJ2tX7V+VT16CRbBU8mKoMeiF49V7Ae0S8mm2TY0yS5JVqhL/4FXPXsTr/4bj/4T03YPtvXBwOO9GWbmhYngxmL87RXW1jc2t4rbpZ3dvf2D8uFR08SppqxBYxHrdkgME1yxhuVWsHaiGZGhYK1wdDv1W09MGx6rRztOWCDJQPGIU2Kd9IBLvXIFV/EMaJX4OalAjnqv/NPtxzSVTFkqiDEdHyc2yIi2nAo2KXVTwxJCR2TAOo4qIpkJstmlE3TmlD6KYu1KWTRT/05kRBozlqHrlMQOzbI3Ff/zOqmNroOMqyS1TNH5oigVyMZo+jbqc82oFWNHCNXc3YrokGhCrQtnYUsoJy4TfzmBVdK8qPq46t9fVmo3eTpFOIFTOAcfrqAGd1CHBlCI4AVe4c179t69D+9z3lrw8pljWID39QtO0pNe</latexit><latexit sha1_base64="JxFURWUH6R+vJirwMs9sg7WrHjU=">AAAB+HicbVBNSwMxEJ2tX7V+VT16CRbBU8mKoMeiF49V7Ae0S8mm2TY0yS5JVqhL/4FXPXsTr/4bj/4T03YPtvXBwOO9GWbmhYngxmL87RXW1jc2t4rbpZ3dvf2D8uFR08SppqxBYxHrdkgME1yxhuVWsHaiGZGhYK1wdDv1W09MGx6rRztOWCDJQPGIU2Kd9IBLvXIFV/EMaJX4OalAjnqv/NPtxzSVTFkqiDEdHyc2yIi2nAo2KXVTwxJCR2TAOo4qIpkJstmlE3TmlD6KYu1KWTRT/05kRBozlqHrlMQOzbI3Ff/zOqmNroOMqyS1TNH5oigVyMZo+jbqc82oFWNHCNXc3YrokGhCrQtnYUsoJy4TfzmBVdK8qPq46t9fVmo3eTpFOIFTOAcfrqAGd1CHBlCI4AVe4c179t69D+9z3lrw8pljWID39QtO0pNe</latexit>

1<latexit sha1_base64="TLhiafyz+7kVW7hdyQ94ij177M0=">AAAB+XicbVBNSwMxEJ2tX3X9qnr0EiyCp7IRQY9FLx4r2lpol5JNs21okl2SrFCW/gSvevYmXv01Hv0npu0ebOuDgcd7M8zMi1LBjQ2Cb6+0tr6xuVXe9nd29/YPKodHLZNkmrImTUSi2xExTHDFmpZbwdqpZkRGgj1Fo9up//TMtOGJerTjlIWSDBSPOSXWSQ/Y93uValALZkCrBBekCgUavcpPt5/QTDJlqSDGdHCQ2jAn2nIq2MTvZoalhI7IgHUcVUQyE+azUyfozCl9FCfalbJopv6dyIk0Ziwj1ymJHZplbyr+53UyG1+HOVdpZpmi80VxJpBN0PRv1OeaUSvGjhCqubsV0SHRhFqXzsKWSE5cJng5gVXSuqjhoIbvL6v1myKdMpzAKZwDhiuowx00oAkUBvACr/Dm5d679+F9zltLXjFzDAvwvn4BhVSTcw==</latexit><latexit sha1_base64="TLhiafyz+7kVW7hdyQ94ij177M0=">AAAB+XicbVBNSwMxEJ2tX3X9qnr0EiyCp7IRQY9FLx4r2lpol5JNs21okl2SrFCW/gSvevYmXv01Hv0npu0ebOuDgcd7M8zMi1LBjQ2Cb6+0tr6xuVXe9nd29/YPKodHLZNkmrImTUSi2xExTHDFmpZbwdqpZkRGgj1Fo9up//TMtOGJerTjlIWSDBSPOSXWSQ/Y93uValALZkCrBBekCgUavcpPt5/QTDJlqSDGdHCQ2jAn2nIq2MTvZoalhI7IgHUcVUQyE+azUyfozCl9FCfalbJopv6dyIk0Ziwj1ymJHZplbyr+53UyG1+HOVdpZpmi80VxJpBN0PRv1OeaUSvGjhCqubsV0SHRhFqXzsKWSE5cJng5gVXSuqjhoIbvL6v1myKdMpzAKZwDhiuowx00oAkUBvACr/Dm5d679+F9zltLXjFzDAvwvn4BhVSTcw==</latexit><latexit sha1_base64="TLhiafyz+7kVW7hdyQ94ij177M0=">AAAB+XicbVBNSwMxEJ2tX3X9qnr0EiyCp7IRQY9FLx4r2lpol5JNs21okl2SrFCW/gSvevYmXv01Hv0npu0ebOuDgcd7M8zMi1LBjQ2Cb6+0tr6xuVXe9nd29/YPKodHLZNkmrImTUSi2xExTHDFmpZbwdqpZkRGgj1Fo9up//TMtOGJerTjlIWSDBSPOSXWSQ/Y93uValALZkCrBBekCgUavcpPt5/QTDJlqSDGdHCQ2jAn2nIq2MTvZoalhI7IgHUcVUQyE+azUyfozCl9FCfalbJopv6dyIk0Ziwj1ymJHZplbyr+53UyG1+HOVdpZpmi80VxJpBN0PRv1OeaUSvGjhCqubsV0SHRhFqXzsKWSE5cJng5gVXSuqjhoIbvL6v1myKdMpzAKZwDhiuowx00oAkUBvACr/Dm5d679+F9zltLXjFzDAvwvn4BhVSTcw==</latexit><latexit sha1_base64="TLhiafyz+7kVW7hdyQ94ij177M0=">AAAB+XicbVBNSwMxEJ2tX3X9qnr0EiyCp7IRQY9FLx4r2lpol5JNs21okl2SrFCW/gSvevYmXv01Hv0npu0ebOuDgcd7M8zMi1LBjQ2Cb6+0tr6xuVXe9nd29/YPKodHLZNkmrImTUSi2xExTHDFmpZbwdqpZkRGgj1Fo9up//TMtOGJerTjlIWSDBSPOSXWSQ/Y93uValALZkCrBBekCgUavcpPt5/QTDJlqSDGdHCQ2jAn2nIq2MTvZoalhI7IgHUcVUQyE+azUyfozCl9FCfalbJopv6dyIk0Ziwj1ymJHZplbyr+53UyG1+HOVdpZpmi80VxJpBN0PRv1OeaUSvGjhCqubsV0SHRhFqXzsKWSE5cJng5gVXSuqjhoIbvL6v1myKdMpzAKZwDhiuowx00oAkUBvACr/Dm5d679+F9zltLXjFzDAvwvn4BhVSTcw==</latexit>

0<latexit sha1_base64="JxFURWUH6R+vJirwMs9sg7WrHjU=">AAAB+HicbVBNSwMxEJ2tX7V+VT16CRbBU8mKoMeiF49V7Ae0S8mm2TY0yS5JVqhL/4FXPXsTr/4bj/4T03YPtvXBwOO9GWbmhYngxmL87RXW1jc2t4rbpZ3dvf2D8uFR08SppqxBYxHrdkgME1yxhuVWsHaiGZGhYK1wdDv1W09MGx6rRztOWCDJQPGIU2Kd9IBLvXIFV/EMaJX4OalAjnqv/NPtxzSVTFkqiDEdHyc2yIi2nAo2KXVTwxJCR2TAOo4qIpkJstmlE3TmlD6KYu1KWTRT/05kRBozlqHrlMQOzbI3Ff/zOqmNroOMqyS1TNH5oigVyMZo+jbqc82oFWNHCNXc3YrokGhCrQtnYUsoJy4TfzmBVdK8qPq46t9fVmo3eTpFOIFTOAcfrqAGd1CHBlCI4AVe4c179t69D+9z3lrw8pljWID39QtO0pNe</latexit><latexit sha1_base64="JxFURWUH6R+vJirwMs9sg7WrHjU=">AAAB+HicbVBNSwMxEJ2tX7V+VT16CRbBU8mKoMeiF49V7Ae0S8mm2TY0yS5JVqhL/4FXPXsTr/4bj/4T03YPtvXBwOO9GWbmhYngxmL87RXW1jc2t4rbpZ3dvf2D8uFR08SppqxBYxHrdkgME1yxhuVWsHaiGZGhYK1wdDv1W09MGx6rRztOWCDJQPGIU2Kd9IBLvXIFV/EMaJX4OalAjnqv/NPtxzSVTFkqiDEdHyc2yIi2nAo2KXVTwxJCR2TAOo4qIpkJstmlE3TmlD6KYu1KWTRT/05kRBozlqHrlMQOzbI3Ff/zOqmNroOMqyS1TNH5oigVyMZo+jbqc82oFWNHCNXc3YrokGhCrQtnYUsoJy4TfzmBVdK8qPq46t9fVmo3eTpFOIFTOAcfrqAGd1CHBlCI4AVe4c179t69D+9z3lrw8pljWID39QtO0pNe</latexit><latexit sha1_base64="JxFURWUH6R+vJirwMs9sg7WrHjU=">AAAB+HicbVBNSwMxEJ2tX7V+VT16CRbBU8mKoMeiF49V7Ae0S8mm2TY0yS5JVqhL/4FXPXsTr/4bj/4T03YPtvXBwOO9GWbmhYngxmL87RXW1jc2t4rbpZ3dvf2D8uFR08SppqxBYxHrdkgME1yxhuVWsHaiGZGhYK1wdDv1W09MGx6rRztOWCDJQPGIU2Kd9IBLvXIFV/EMaJX4OalAjnqv/NPtxzSVTFkqiDEdHyc2yIi2nAo2KXVTwxJCR2TAOo4qIpkJstmlE3TmlD6KYu1KWTRT/05kRBozlqHrlMQOzbI3Ff/zOqmNroOMqyS1TNH5oigVyMZo+jbqc82oFWNHCNXc3YrokGhCrQtnYUsoJy4TfzmBVdK8qPq46t9fVmo3eTpFOIFTOAcfrqAGd1CHBlCI4AVe4c179t69D+9z3lrw8pljWID39QtO0pNe</latexit><latexit sha1_base64="JxFURWUH6R+vJirwMs9sg7WrHjU=">AAAB+HicbVBNSwMxEJ2tX7V+VT16CRbBU8mKoMeiF49V7Ae0S8mm2TY0yS5JVqhL/4FXPXsTr/4bj/4T03YPtvXBwOO9GWbmhYngxmL87RXW1jc2t4rbpZ3dvf2D8uFR08SppqxBYxHrdkgME1yxhuVWsHaiGZGhYK1wdDv1W09MGx6rRztOWCDJQPGIU2Kd9IBLvXIFV/EMaJX4OalAjnqv/NPtxzSVTFkqiDEdHyc2yIi2nAo2KXVTwxJCR2TAOo4qIpkJstmlE3TmlD6KYu1KWTRT/05kRBozlqHrlMQOzbI3Ff/zOqmNroOMqyS1TNH5oigVyMZo+jbqc82oFWNHCNXc3YrokGhCrQtnYUsoJy4TfzmBVdK8qPq46t9fVmo3eTpFOIFTOAcfrqAGd1CHBlCI4AVe4c179t69D+9z3lrw8pljWID39QtO0pNe</latexit>

1<latexit sha1_base64="TLhiafyz+7kVW7hdyQ94ij177M0=">AAAB+XicbVBNSwMxEJ2tX3X9qnr0EiyCp7IRQY9FLx4r2lpol5JNs21okl2SrFCW/gSvevYmXv01Hv0npu0ebOuDgcd7M8zMi1LBjQ2Cb6+0tr6xuVXe9nd29/YPKodHLZNkmrImTUSi2xExTHDFmpZbwdqpZkRGgj1Fo9up//TMtOGJerTjlIWSDBSPOSXWSQ/Y93uValALZkCrBBekCgUavcpPt5/QTDJlqSDGdHCQ2jAn2nIq2MTvZoalhI7IgHUcVUQyE+azUyfozCl9FCfalbJopv6dyIk0Ziwj1ymJHZplbyr+53UyG1+HOVdpZpmi80VxJpBN0PRv1OeaUSvGjhCqubsV0SHRhFqXzsKWSE5cJng5gVXSuqjhoIbvL6v1myKdMpzAKZwDhiuowx00oAkUBvACr/Dm5d679+F9zltLXjFzDAvwvn4BhVSTcw==</latexit><latexit sha1_base64="TLhiafyz+7kVW7hdyQ94ij177M0=">AAAB+XicbVBNSwMxEJ2tX3X9qnr0EiyCp7IRQY9FLx4r2lpol5JNs21okl2SrFCW/gSvevYmXv01Hv0npu0ebOuDgcd7M8zMi1LBjQ2Cb6+0tr6xuVXe9nd29/YPKodHLZNkmrImTUSi2xExTHDFmpZbwdqpZkRGgj1Fo9up//TMtOGJerTjlIWSDBSPOSXWSQ/Y93uValALZkCrBBekCgUavcpPt5/QTDJlqSDGdHCQ2jAn2nIq2MTvZoalhI7IgHUcVUQyE+azUyfozCl9FCfalbJopv6dyIk0Ziwj1ymJHZplbyr+53UyG1+HOVdpZpmi80VxJpBN0PRv1OeaUSvGjhCqubsV0SHRhFqXzsKWSE5cJng5gVXSuqjhoIbvL6v1myKdMpzAKZwDhiuowx00oAkUBvACr/Dm5d679+F9zltLXjFzDAvwvn4BhVSTcw==</latexit><latexit sha1_base64="TLhiafyz+7kVW7hdyQ94ij177M0=">AAAB+XicbVBNSwMxEJ2tX3X9qnr0EiyCp7IRQY9FLx4r2lpol5JNs21okl2SrFCW/gSvevYmXv01Hv0npu0ebOuDgcd7M8zMi1LBjQ2Cb6+0tr6xuVXe9nd29/YPKodHLZNkmrImTUSi2xExTHDFmpZbwdqpZkRGgj1Fo9up//TMtOGJerTjlIWSDBSPOSXWSQ/Y93uValALZkCrBBekCgUavcpPt5/QTDJlqSDGdHCQ2jAn2nIq2MTvZoalhI7IgHUcVUQyE+azUyfozCl9FCfalbJopv6dyIk0Ziwj1ymJHZplbyr+53UyG1+HOVdpZpmi80VxJpBN0PRv1OeaUSvGjhCqubsV0SHRhFqXzsKWSE5cJng5gVXSuqjhoIbvL6v1myKdMpzAKZwDhiuowx00oAkUBvACr/Dm5d679+F9zltLXjFzDAvwvn4BhVSTcw==</latexit><latexit sha1_base64="TLhiafyz+7kVW7hdyQ94ij177M0=">AAAB+XicbVBNSwMxEJ2tX3X9qnr0EiyCp7IRQY9FLx4r2lpol5JNs21okl2SrFCW/gSvevYmXv01Hv0npu0ebOuDgcd7M8zMi1LBjQ2Cb6+0tr6xuVXe9nd29/YPKodHLZNkmrImTUSi2xExTHDFmpZbwdqpZkRGgj1Fo9up//TMtOGJerTjlIWSDBSPOSXWSQ/Y93uValALZkCrBBekCgUavcpPt5/QTDJlqSDGdHCQ2jAn2nIq2MTvZoalhI7IgHUcVUQyE+azUyfozCl9FCfalbJopv6dyIk0Ziwj1ymJHZplbyr+53UyG1+HOVdpZpmi80VxJpBN0PRv1OeaUSvGjhCqubsV0SHRhFqXzsKWSE5cJng5gVXSuqjhoIbvL6v1myKdMpzAKZwDhiuowx00oAkUBvACr/Dm5d679+F9zltLXjFzDAvwvn4BhVSTcw==</latexit>

𝒉 𝒃

Threshold

𝒉𝒃

Threshold

𝒃#

Random ProjectionorPCA

Encoder Decoder

𝒉𝒃𝒃#

𝒉$

Threshold

(a) (b) (c)

Semantic-preserving Loss

Figure 2.1: Proposed binarized embedding architectures.

sentence encoders can effectively construct universal sentence embeddings from word

vectors, while possessing the flexibility of adaptively altering the embedding dimen-

sions. Here, we are interested in exploring whether a random projection would also

work well when transforming continuous sentence representations into their binary

counterparts.

We randomly initialize a matrix W ∈ R^{D×L}, where D denotes the dimension of the resulting binary representations. Inspired by the standard initialization heuristic employed in [GB10, WK18], the values of the matrix are sampled uniformly. For i = 1, 2, . . . , D and j = 1, 2, . . . , L, we have:

$$W_{i,j} \sim \mathrm{Uniform}\left(-\frac{1}{\sqrt{D}},\ \frac{1}{\sqrt{D}}\right). \tag{2.2}$$

After converting the continuous sentence embeddings to the desired dimension D with the randomly initialized matrix above, we further apply the operation in (2.1) to binarize them into the discrete/compact form. The dimension D can be set arbitrarily with this approach, which is easily applicable to any pre-trained sentence embeddings (since no training is needed). This strategy is related to Locality-Sensitive Hashing (LSH) for inferring binary embeddings [VDL10].
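To make the procedure concrete, here is a minimal NumPy sketch of this training-free pipeline; the zero threshold after projection and the example dimensions are illustrative assumptions rather than our exact experimental configuration.

import numpy as np

def random_projection_binarize(H, D, seed=0):
    # H: (N, L) array of pre-trained continuous sentence embeddings.
    # D: target number of bits. Returns an (N, D) array of {0, 1} codes.
    N, L = H.shape
    rng = np.random.default_rng(seed)
    # Entries sampled as in (2.2): Uniform(-1/sqrt(D), 1/sqrt(D)).
    W = rng.uniform(-1.0 / np.sqrt(D), 1.0 / np.sqrt(D), size=(L, D))
    # Project, then binarize with a hard threshold (cf. the operation in (2.1)).
    return (H @ W > 0).astype(np.uint8)

# Example: 2048-bit codes from 4096-dimensional embeddings.
H = np.random.randn(100, 4096)  # stand-in for real pre-trained embeddings
B = random_projection_binarize(H, D=2048)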


2.3.3 Principal Component Analysis

We also consider an alternative strategy to adaptively choose the dimension of the

resulting binary representations. Specifically, Principal Component Analysis (PCA)

is utilized to reduce the dimensionality of pre-trained continuous embeddings.

Given a set of sentences {x_i}_{i=1}^N and their corresponding continuous embeddings {h_i}_{i=1}^N ⊂ R^L, we learn a projection matrix to reduce the embedding dimension while keeping the embeddings as distinct as possible. After centering the embeddings as h_i ← h_i − (1/N) Σ_{i=1}^N h_i, the data matrix H = (h_1, h_2, . . . , h_N) has the singular value decomposition (SVD):

$$H = U \Lambda V^\top,$$

where Λ is an L × N matrix with the descending singular values of H on its diagonal, and U and V are orthogonal matrices. The correlation matrix can then be written as HH^⊤ = UΛ²U^⊤. Assume the diagonal matrix Λ² = diag(λ_1, λ_2, . . . , λ_L) has descending elements λ_1 ≥ λ_2 ≥ · · · ≥ λ_L ≥ 0. We take the top-D left singular vectors (the first D columns of U) as the rows of our projection matrix, W = U_{:,1:D}^⊤; the correlation matrix of WH is then WHH^⊤W^⊤ = diag(λ_1, λ_2, . . . , λ_D), which indicates that the embeddings are projected onto the D uncorrelated and most distinctive axes.

After projecting the continuous embeddings to this representative lower-dimensional space, we apply the hard threshold function at 0 to obtain the binary representations (since the embeddings are zero-centered).
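The same procedure can be sketched in a few lines of NumPy; the variable names below are illustrative, and the sketch assumes at least D sentences are available.

import numpy as np

def pca_binarize(H, D):
    # H: (N, L) array of continuous embeddings (one sentence per row).
    # Assumes N >= D. Returns an (N, D) array of {0, 1} codes.
    H_centered = H - H.mean(axis=0, keepdims=True)
    # SVD of the (L, N) data matrix whose columns are the embeddings;
    # the columns of U are the left singular vectors.
    U, S, Vt = np.linalg.svd(H_centered.T, full_matrices=False)
    W = U[:, :D].T                # top-D principal axes as rows
    projected = H_centered @ W.T  # (N, D) reduced embeddings
    # The projected embeddings are zero-centered, so threshold at 0.
    return (projected > 0).astype(np.uint8)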

2.3.4 Autoencoder Architecture

The methods proposed above suffer from the common issue that the model objec-

tive does not explicitly encourage the learned binary codes to retain the semantic

information of the original continuous embeddings, and a separate binarization step


is employed after training. To address this shortcoming, we further consider an autoencoder architecture that leverages a reconstruction loss to endow the learned binary representations with more information. Specifically, an encoder network is utilized to transform the continuous embedding into a binary latent vector, which is then reconstructed back with a decoder network.

For the encoder network, we use a matrix operation, followed by a binarization

step, to extract useful features (similar to the random projection setup). Thus, for

i = 1, 2, . . . , D, we have:

$$b^{(i)} = \mathbb{1}_{\sigma(W_i \cdot h + k^{(i)}) > s^{(i)}} = \frac{\mathrm{sign}\left(\sigma(W_i \cdot h + k^{(i)}) - s^{(i)}\right) + 1}{2}, \tag{2.3}$$

where k is the bias term and k^{(i)} corresponds to the i-th element of k, and s^{(i)} denotes the threshold determining whether the i-th bit is 0 or 1. During training, we may use either deterministic or stochastic binarization of the latent variable. For the deterministic case, s^{(i)} = 0.5 for all dimensions; in the stochastic case, s^{(i)} is uniformly sampled as s^{(i)} ∼ Uniform(0, 1). We conduct an empirical comparison between these two binarization strategies in Section 2.4.

Prior work has shown that linear decoders are favorable for learning binary codes under the encoder-decoder framework [CPR15, DGK+17, SSC+18]. Inspired by these results, we employ a linear transformation to reconstruct the original continuous embeddings from the binary codes:

$$\hat{h}^{(i)} = W'_i \cdot b + k'^{(i)}, \tag{2.4}$$

where W' and k' are the learned weight and bias terms, respectively. The mean squared error between h and ĥ is employed as the reconstruction loss:

$$\mathcal{L}_{\mathrm{rec}} = \frac{1}{D} \sum_{i=1}^{D} \left(h^{(i)} - \hat{h}^{(i)}\right)^2. \tag{2.5}$$


This objective encourages the binary vector b to encode more information from h (leading to smaller reconstruction error). The straight-through (ST) estimator [Hin12] is utilized to estimate gradients through the binary variable. The autoencoder model is optimized

by minimizing the reconstruction loss for all sentences. After training, the encoder

network is leveraged as the transformation to convert the pre-trained continuous

embeddings into the binary form.
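The following PyTorch sketch captures the encoder-decoder computation in (2.3)-(2.5) with a deterministic threshold; the layer sizes and module names are illustrative assumptions, not the exact architecture used in our experiments.

import torch
import torch.nn as nn

class BinaryAutoencoder(nn.Module):
    def __init__(self, cont_dim=4096, num_bits=2048):
        super().__init__()
        self.encoder = nn.Linear(cont_dim, num_bits)  # W, k in (2.3)
        self.decoder = nn.Linear(num_bits, cont_dim)  # W', k' in (2.4)

    def forward(self, h):
        soft = torch.sigmoid(self.encoder(h))
        hard = (soft > 0.5).float()        # deterministic threshold s = 0.5
        # Straight-through estimator: forward pass uses the hard bits,
        # backward pass flows through the sigmoid "soft" bits.
        b = hard + soft - soft.detach()
        return b, self.decoder(b)

model = BinaryAutoencoder()
h = torch.randn(64, 4096)                  # a batch of continuous embeddings
b, h_rec = model(h)
loss_rec = ((h_rec - h) ** 2).mean()       # reconstruction loss, cf. (2.5)
loss_rec.backward()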

2.3.5 Semantic-preserving Regularizer

Although the reconstruction objective can help endow the binary variable with richer semantics, there is no loss that explicitly encourages the binary vectors to preserve the similarity information contained in the original continuous embeddings.

Consequently, the model may lead to small reconstruction error but yield sub-optimal

binary representations [THG19]. To improve the semantic-preserving property of the

inferred binary embeddings, we introduce an additional objective term.

Consider a triplet of sentences (x_α, x_β, x_γ), whose continuous embeddings are (h_α, h_β, h_γ), respectively. Suppose the cosine similarity between h_α and h_β is larger than that between h_β and h_γ; then it is desirable that the Hamming distance between b_α and b_β be smaller than that between b_β and b_γ (notably, both a large cosine similarity and a small Hamming distance indicate that two sentences are semantically similar).

Let d_c(·, ·) and d_h(·, ·) denote the cosine similarity and the Hamming distance (in the continuous and binary embedding spaces), respectively. Define the indicator l_{α,β,γ} = 1 if d_c(h_α, h_β) ≥ d_c(h_β, h_γ), and l_{α,β,γ} = −1 otherwise. The semantic-preserving regularizer is then defined as:

$$\mathcal{L}_{\mathrm{sp}} = \sum_{\alpha,\beta,\gamma} \max\left\{0,\; l_{\alpha,\beta,\gamma}\left[d_h(b_\alpha, b_\beta) - d_h(b_\beta, b_\gamma)\right]\right\}. \tag{2.6}$$


By penalizing Lsp, the learned transformation function g is explicitly encouraged

to retain the semantic similarity information of the original continuous embeddings.

Thus, the entire objective function to be optimized is:

$$\mathcal{L} = \mathcal{L}_{\mathrm{rec}} + \lambda_{\mathrm{sp}} \mathcal{L}_{\mathrm{sp}}, \tag{2.7}$$

where λ_sp controls the relative weight between the reconstruction loss (L_rec) and the semantic-preserving loss (L_sp).
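As a sketch, the regularizer in (2.6) can be computed over randomly sampled triplets from a batch as follows (PyTorch; the random-triplet sampling and the batch interface are illustrative assumptions):

import torch
import torch.nn.functional as F

def semantic_preserving_loss(h, b, num_triplets=256):
    # h: (N, L) continuous embeddings; b: (N, D) binary codes in {0, 1} (float).
    h_norm = F.normalize(h, dim=1)
    cos = h_norm @ h_norm.t()            # pairwise cosine similarity d_c
    ham = torch.cdist(b, b, p=1)         # pairwise Hamming distance d_h
    N = h.size(0)
    a, be, g = torch.randint(0, N, (3, num_triplets))  # triplets (alpha, beta, gamma)
    # l = +1 if d_c(h_a, h_b) >= d_c(h_b, h_g), else -1.
    sign = (cos[a, be] >= cos[be, g]).float() * 2 - 1
    # Hinge on the signed Hamming-distance difference, as in (2.6).
    return torch.clamp(sign * (ham[a, be] - ham[be, g]), min=0).sum()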

2.4 Experimental setup

2.4.1 Pre-trained Continuous Embeddings

Our proposed model aims to produce highly informative binary sentence embeddings

based upon pre-trained continuous representations. In this paper, we utilize InferSent

[CKS+17b] as the continuous embeddings (given its effectiveness and widespread use).

Note that all four proposed strategies can be easily extended to other pre-trained

general-purpose sentence embeddings as well.

Specifically, a bidirectional LSTM architecture along with a max-pooling oper-

ation over the hidden units is employed as the sentence encoder, and the model

parameters are optimized on the natural language inference tasks, i.e., the Stanford

Natural Language Inference (SNLI) [BAPM15a] and Multi-Genre Natural Language

Inference (MultiNLI) datasets [WNB17].

2.4.2 Training Details

Our model is trained using Adam [KB14], with a learning rate of 1×10⁻⁵ for all parameters. The number of bits (i.e., the dimension) of the binary representation is set to 512, 1024, 2048, or 4096; the best choice for each model is selected on the validation set, and the corresponding test results are presented in Table 2.1. The batch size is set to 64 for all model variants. The hyperparameter λ_sp is

selected from {0.2, 0.5, 0.8, 1} on the validation set, and 0.8 is found to deliver the

best empirical results. Training with the autoencoder setup takes only about one hour to converge, and is thus readily applicable to even larger datasets.

2.4.3 Evaluation

To facilitate comparisons with other baseline methods, we use the SentEval toolkit (https://github.com/facebookresearch/SentEval) [CK18] to evaluate the learned binary (compact) sentence embeddings. Concretely,

the learned representations are tested on a series of downstream tasks to assess their

transferability (with the encoder weights fixed), which can be categorized as follows:

• Sentence classification, including sentiment analysis (MR, SST), product re-

views (CR), subjectivity classification (SUBJ), opinion polarity detection (MPQA)

and question type classification (TREC). A linear classifier is trained with the

generic sentence embeddings as the input features. The default SentEval settings are used for all the datasets.

• Sentence matching, which comprises semantic relatedness (SICK-R, STS14,

STSB) and paraphrase detection (MRPC). Particularly, each pair of sentences in the STS14 dataset is associated with a similarity score from 0 to 5 (as the corresponding label). The Hamming distance between the binary representations is directly leveraged as the prediction score (without any classifier parameters).

For the sentence matching benchmarks, to allow fair comparison with the continu-

ous embeddings, we do not use the same classifier architecture as in SentEval. Instead,

we obtain the predicted relatedness by directly computing the cosine similarity be-

tween the continuous embeddings. Consequently, there are no classifier parameters


for either the binary or the continuous representations. The same evaluation metrics as in SentEval [CK18] are utilized for all the tasks. For MRPC, the predictions are made by

simply judging whether a sentence pair’s score is larger or smaller than the averaged

Hamming distance (or cosine similarity).

2.4.4 Baselines

We consider several strong baselines to compare with the proposed methods, which

include both continuous (dense) and binary (compact) representations. For the

continuous generic sentence embeddings, we make comparisons with fastText-BoV

[JGBM16], Skip-Thought Vectors [KZS+15a] and InferSent [CKS+17b]. As to the

binary embeddings, we consider the binarized version of InferLite [KC18], which, as far as we are aware, is the only reported baseline for general-purpose binary sentence representations.

2.5 Experimental Results

We experimented with five model variants to learn general-purpose binary embed-

dings: HT-binary (hard threshold, which is selected from {0, 0.01, 0.1} on the valida-

tion set), Rand-binary (random projection), PCA-binary (reduce the dimensionality

with principal component analysis), AE-binary (autoencoder with the reconstruc-

tion objective) and AE-binary-SP (autoencoder with both the reconstruction objec-

tive and Semantic-Preserving loss). Our code will be released to encourage future

research.

2.5.1 Task transfer evaluation

We evaluate the binary sentence representations produced by different methods on a set of transfer tasks. The results are shown in Table 2.1. The STS14, STSB


Table 2.1: Performance on the test set for 10 downstream tasks.

Model Dim MR CR SUBJ MPQA SST STS14 STSB SICK-R MRPC

Continuous (dense) sentence embeddings

fastText-BoV 300 78.2 80.2 91.8 88.0 82.3 .65/.63 58.1/59.0 0.698 67.9/74.3

SkipThought 4800 76.5 80.1 93.6 87.1 82.0 .29/.35 41.0/41.7 0.595 57.9/66.6

SkipThought-LN 4800 79.4 83.1 93.7 89.3 82.9 .44/.45 - - -

InferSent-FF 4096 79.7 84.2 92.7 89.4 84.3 .68/.66 55.6/56.2 0.612 67.9/73.8

InferSent-G 4096 81.1 86.3 92.4 90.2 84.6 .68/.65 70.0/68.0 0.719 67.4/73.2

Binary (compact) sentence embeddings

InferLite-short 256 73.7 81.2 83.2 86.2 78.4 0.61/- 63.4/63.3 0.597 61.7/70.1

InferLite-medium 1024 76.3 83.2 87.8 88.4 81.3 0.67/- 64.9/64.9 0.642 64.1/72.0

InferLite-long 4096 77.7 83.7 89.6 89.1 82.3 0.68/- 67.9/67.6 0.663 65.4/72.9

HT-binary 4096 76.6 79.9 91.0 88.4 80.6 .62/.60 55.8/53.6 0.652 65.6/70.4

Rand-binary 2048 78.7 82.7 90.4 88.9 81.3 .66/.63 65.1/62.3 0.704 65.7/70.8

PCA-binary 2048 78.4 84.5 90.7 89.4 81.0 .66/.65 63.7/62.8 0.518 65.0/ 69.7

AE-binary 2048 78.7 84.9 90.6 89.6 82.1 .68/.66 71.7/69.7 0.673 65.8/70.8

AE-binary-SP 2048 79.1 84.6 90.8 90.0 82.7 .69/.67 73.2/70.6 0.705 67.2/72.0

and MRPC are evaluated with Pearson and Spearman correlations, and SICK-R

is measured with Pearson correlation. All other datasets are evaluated with test

accuracy. InferSent-G uses GloVe (G) as the word embeddings, while InferSent-FF

employs FastText (F) embeddings with Fixed (F) padding. The empirical results of

InferLite with different lengths of binary embeddings, i.e., 256, 1024 and 4096, are

considered.

The proposed autoencoder architecture generally demonstrates the best results.

Especially when combined with the semantic-preserving loss defined in (2.7), AE-binary-SP exhibits higher performance compared with a standard autoencoder. It is worth noting that the Rand-binary and PCA-binary model variants also show competitive performance despite their simplicity. These strategies are also quite promising, since they require no training once the pre-trained continuous sentence representations are available.

Another important result is that AE-binary-SP achieves competitive results relative to InferSent, leading to only about a 2% loss on most datasets and even

Table 2.2: Nearest neighbor retrieval results on the SNLI dataset. For each query, the left column lists the top-3 retrievals by Hamming distance (binary embeddings) and the right column the top-3 retrievals by cosine similarity (continuous embeddings).

Query: Several people are sitting in a movie theater.
  Hamming: A group of people watching a movie at a theater. / A crowd of people are watching a movie indoors. / A man is watching a movie in a theater.
  Cosine: A group of people watching a movie at a theater. / A man is watching a movie in a theater. / Some people are sleeping on a sofa in front of the television.

Query: A woman crossing a busy downtown street.
  Hamming: A lady is walking down a busy street. / A woman is on a crowded street. / A woman walking on the street downtown.
  Cosine: A woman walking on the street downtown. / A lady is walking down a busy street. / A man and woman walking down a busy street.

Query: A well dressed man standing in front of piece of artwork.
  Hamming: A well dressed man standing in front of an abstract fence painting. / A man wearing headphones is standing in front of a poster. / A man in a blue shirt standing in front of a garage-like structure painted with geometric designs.
  Cosine: A man wearing headphones is standing in front of a poster. / A man standing in front of a chalkboard points at a drawing. / A man in a blue shirt standing in front of a garage-like structure painted with geometric designs.

Query: A woman is sitting at a bar eating a hamburger.
  Hamming: A woman sitting eating a sandwich. / A woman is sitting in a cafe eating lunch. / The woman is eating a hotdog in the middle of her bedroom.
  Cosine: A woman is sitting in a cafe eating lunch. / A woman is eating at a diner. / A woman is eating her meal at a resturant.

Query: Group of men trying to catch fish with a fishing net.
  Hamming: Two men are on a boat trying to fish for food during a sunset. / There are three men on a fishing boat trying to catch bass. / Two men pull a fishing net up into their red boat.
  Cosine: There are three men on a fishing boat trying to catch bass. / Two men are trying to fish. / Two men are on a boat trying to fish for food during a sunset.

performing at par with InferSent on several datasets, such as the MPQA and STS14

datasets. On the sentence matching tasks, the yielded binary codes are evaluated by merely utilizing the Hamming distance (as mentioned above). To allow fair comparison, we compare the predicted scores with the cosine similarity scores based upon the continuous representations (there are no additional classifier parameters in either case). The binary codes yield promising empirical results relative to their continuous counterparts, and even slightly outperform InferSent on the STS14 dataset.

We also found that our AE-binary-SP model variant consistently demonstrates superior results to the InferLite baselines, which optimize the NLI objective directly over the binary representations. This may be attributed to the difficulty of back-propagating gradients through discrete/binary variables, and would be an interesting direction for future research.

2.5.2 Nearest Neighbor Retrieval

Case Study

One major advantage of binary sentence representations is that the similarity of two

sentences can be evaluated by merely calculating the Hamming distance between their binary codes. To gain more intuition regarding the semantic information encoded

in the binary embeddings, we convert all the sentences in the SNLI dataset into

continuous and binary vectors (with InferSent-G and AE-binary-SP, respectively).

The top-3 closest sentences are retrieved based upon the corresponding metrics, and the resulting samples are shown in Table 2.2. Given a query sentence, the left column shows the top-3 retrieved samples based upon the Hamming distance between binary representations, while the right column exhibits the samples retrieved according to the cosine similarity of their continuous embeddings.

It can be observed that the sentences selected based upon the Hamming distance

indeed convey very similar semantic meanings. In some cases, the results with binary

codes are even more reasonable compared with the continuous embeddings. For

example, for the first query, all three sentences in the left column relate to “watching

a movie”, while one of the sentences in the right column is about “sleeping”.

Retrieval Speed

The bitwise comparison is much faster than the element-wise multiplication operation

(between real-valued vectors) [THG19]. To verify the speed improvement, we sample

10000 sentence pairs from SNLI and extract their continuous and binary embeddings

(with the same dimension of 4096), respectively. We record the time to compute the


Table 2.3: Ablation study for the AE-binary-SP model with different choices of λ_sp (evaluated with test accuracy on the MR dataset).

λsp 0.0 0.2 0.5 0.8 1.0

Accuracy 78.2 78.5 78.5 79.1 78.4

cosine similarity and Hamming distance between the corresponding representations.

With our Python implementation, it takes 3.67µs and 288ns respectively, indicating

that calculating the Hamming distance is over 12 times faster. Our implementation is

not optimized, and the running time of computing the Hamming distance can be further improved (to be proportional to the number of differing bits, rather than the input length).
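For illustration, such a comparison can be set up as follows (NumPy sketch; this bit-packing approach is one reasonable realization of the bitwise computation, not necessarily the implementation we timed):

import numpy as np

h1, h2 = np.random.randn(4096), np.random.randn(4096)  # continuous embeddings
b1, b2 = np.packbits(h1 > 0), np.packbits(h2 > 0)      # 4096 bits -> 512 bytes each

# Cosine similarity: element-wise float multiplication plus norms.
cos = h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2))

# Hamming distance: XOR the packed bytes, then count the set bits.
ham = int(np.unpackbits(b1 ^ b2).sum())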

2.5.3 Ablation Study

The effect of semantic-preserving loss

To investigate the importance of incorporating the locality-sensitive regularizer, we

select different values of λsp (ranging from 0.0 to 1.0) and explore how the transfer

results would change accordingly. λ_sp controls the relative weight of the semantic-

preserving loss term. As shown in Table 2.3, augmenting the semantic-preserving loss

consistently improves the quality of learned binary embeddings, while the best test

accuracy on the MR dataset is obtained with λsp = 0.8.

Sampling strategy

As discussed in Section 2.3.4, the binary latent vector b can be obtained with either

a deterministic or a stochastically sampled threshold. We compare these two sampling

strategies on several downstream tasks. As illustrated in Figure 2.2, setting a fixed threshold demonstrates better empirical performance on all the datasets. Therefore, the deterministic threshold is employed for all the autoencoder model variants in our experiments.

[Figure 2.2: The comparison between deterministic and stochastic sampling for the autoencoder strategy. Performance on the MR, CR, MPQA, SUBJ, SST, SICKE, and MRPC datasets.]

[Figure 2.3: The test accuracy of different models (Random, PCA, AE, AE-SP) on the MR dataset across 512, 1024, 2048, and 4096 bits for the learned binary representations.]


The effect of embedding dimension

Except for the hard threshold method, the other three proposed strategies all possess the flexibility of adaptively choosing the dimension of the learned binary representations. To explore the sensitivity of the extracted binary embeddings to their dimension, we run four model variants (Rand-binary, PCA-binary, AE-binary, AE-binary-SP) with different numbers of bits (i.e., 512, 1024, 2048, 4096), and their corresponding results

on the MR dataset are shown in Figure 2.3.

For the AE-binary and AE-binary-SP models, longer binary codes consistently


deliver better results. For the Rand-binary and PCA-binary variants, in contrast, the quality of the inferred representations is much less sensitive to the embedding dimension. Notably, these two strategies exhibit competitive performance even with only 512 bits. Therefore, when a smaller memory footprint or training-free deployment is preferred, Rand-binary and PCA-binary could be the more judicious choices.

2.6 Conclusion

This paper presents a first step towards learning binary and general-purpose sentence

representations that allow for efficient storage and fast retrieval over massive corpora.

To this end, we explore four distinct strategies to convert pre-trained continuous

sentence embeddings into a binarized form. Notably, a regularized autoencoder aug-

mented with semantic-preserving loss exhibits the best empirical results, degrading

performance by only around 2% while saving over 98% memory footprint. Besides,

two other model variants with a random projection or PCA transformation require no

training and demonstrate competitive embedding quality even with relatively small

dimensions. Experiments on nearest-neighbor sentence retrieval further validate the

effectiveness of the proposed framework.


Chapter 3

Contrastive Log-ratio Upper Bound of

Mutual Information

3.1 Introduction

Mutual information (MI) is a fundamental measure of the dependence between two

random variables. Mathematically, the definition of MI between variables x and y is

$$I(\mathbf{x};\mathbf{y}) = \mathbb{E}_{p(\mathbf{x},\mathbf{y})}\left[\log \frac{p(\mathbf{x},\mathbf{y})}{p(\mathbf{x})p(\mathbf{y})}\right]. \tag{3.1}$$

This important tool has been applied in a wide range of scientific fields, includ-

ing statistics [GL94, JYL15], bioinformatics [LGLC16, ZANMB16], robotics [JKR14,

CLKM15], and machine learning [CDH+16, AFDM17, HFLM+18, CMS+20].

In machine learning, especially in deep learning frameworks, MI is typically uti-

lized as a criterion or a regularizer in loss functions, to encourage or limit the

dependence between variables. MI maximization has been studied extensively in

various tasks, e.g., representation learning [HFLM+18, HMT+17], generative mod-

els [CDH+16], information distillation [AHD+19], and reinforcement learning [FDA17].

Recently, MI minimization has received increased attention for its applications in dis-

entangled representation learning [CLGD18], style transfer [KST+18], domain adap-

tation [GSR+18], fairness [KAS11], and the information bottleneck [AFDM17].

However, only in a few special cases can one calculate the exact value of mu-

tual information, since the calculation requires closed forms of density functions and


a tractable log-density ratio between the joint and marginal distributions. In most

machine learning tasks, only samples from the joint distribution are accessible. There-

fore, sample-based MI estimation methods have been proposed. To approximate MI,

most previous works focused on lower-bound estimation [CDH+16, BBR+18, OLV18],

which is inconsistent with MI minimization tasks. In contrast, MI upper bound es-

timation lacks extensive exploration in the literature. Among the existing MI upper

bounds, [AFDM17] fixes one of the marginal distributions (p(y) in (3.1)) to a standard Gaussian, and obtains a variational upper bound in closed form. However, the

Gaussian marginal distribution assumption is unduly strong, which makes the upper

bound fail to estimate MI with low bias. [POVDO+19] develops a leave-one-out upper

bound that provides tighter MI estimation when the sample size is large. However,

it suffers from high numerical instability in practice when applied to MI minimization

models.

To overcome the defects of previous MI estimators, we introduce a Contrastive

Log-ratio Upper Bound (CLUB). Specifically, CLUB bridges mutual information es-

timation with contrastive learning [OLV18], where MI is estimated by the difference

of conditional probabilities between positive and negative sample pairs. Further, we

develop a variational form of CLUB (vCLUB) for scenarios where the conditional

distribution p(y|x) is unknown, by approximating p(y|x) with a neural network. We

prove that, with good variational approximation, vCLUB can either provide reliable

MI estimation or remain a valid MI upper bound. Based on this new bound, we pro-

pose an MI minimization algorithm, and further accelerate it via a negative sampling

strategy. The main contributions of this paper are summarized as follows.

• We introduce a Contrastive Log-ratio Upper Bound (CLUB) of mutual infor-

mation, which is not only reliable as a mutual information estimator, but also

trainable in gradient-descent frameworks.


• We extend CLUB with a variational network approximation, and provide the-

oretical analysis of the good properties of this variational bound.

• We develop a CLUB-based MI minimization algorithm, and accelerate it with

a negative sampling strategy.

• We compare CLUB with previous MI estimators on both simulation studies

and real-world applications, demonstrating that CLUB is not only better in

the bias-variance estimation trade-off, but also more effective when applied to

MI minimization.

3.2 Background

Although it has widespread use in numerous applications, mutual information (MI)

remains challenging to estimate accurately, especially when the closed forms of dis-

tributions are unknown or intractable. Earlier MI estimation approaches include

non-parametric binning [DV99], kernel density estimation [HMSW04], likelihood-

ratio estimation [SSSK08], and K-nearest neighbor entropy estimation [KSG04].

These methods fail to provide reliable approximations when the data dimension in-

creases [BBR+18]. Also, the gradient of these estimators is difficult to calculate,

which makes them inapplicable to back-propagation frameworks for MI optimization

tasks.

To obtain differentiable and scalable MI estimation, recent approaches utilize

deep neural networks to construct variational MI estimators. Most of these estima-

tors focus on problems involving MI maximization, and provide MI lower bounds.

Specifically, [BA03] replaces the conditional distribution p(x|y) with an auxiliary distribution q(x|y), and obtains the Barber-Agakov (BA) bound:

IBA := H(x) + Ep(x,y)[log q(x|y)] ≤ I(x;y), (3.2)


where H(x) is the entropy of variable x. [BBR+18] introduces a Mutual Informa-

tion Neural Estimator (MINE), which treats MI as the Kullback-Leibler (KL) diver-

gence [Kul97] between the joint and marginal distributions, and converts it into the

dual representation:

$$I_{\mathrm{MINE}} := \mathbb{E}_{p(\mathbf{x},\mathbf{y})}[f(\mathbf{x},\mathbf{y})] - \log\left(\mathbb{E}_{p(\mathbf{x})p(\mathbf{y})}\left[e^{f(\mathbf{x},\mathbf{y})}\right]\right), \tag{3.3}$$

where f(·, ·) is a score function (or, a critic) approximated by a neural network.

Nguyen, Wainwright, and Jordan (NWJ) [NWJ10] derive another lower bound based on the f-divergence representation of MI:

$$I_{\mathrm{NWJ}} := \mathbb{E}_{p(\mathbf{x},\mathbf{y})}[f(\mathbf{x},\mathbf{y})] - \mathbb{E}_{p(\mathbf{x})p(\mathbf{y})}\left[e^{f(\mathbf{x},\mathbf{y})-1}\right]. \tag{3.4}$$

More recently, based on Noise Contrastive Estimation (NCE) [GH10], an MI lower

bound, called InfoNCE, was introduced in [OLV18]:

$$I_{\mathrm{NCE}} := \mathbb{E}\left[\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{f(\mathbf{x}_i,\mathbf{y}_i)}}{\frac{1}{N}\sum_{j=1}^{N} e^{f(\mathbf{x}_i,\mathbf{y}_j)}}\right], \tag{3.5}$$

where the expectation is over N samples {(x_i, y_i)}_{i=1}^N drawn from the joint distribu-

tion p(x,y).

Unlike the above MI lower bounds, which have been studied extensively, MI upper bounds still lack extensive exploration in the literature. Most existing MI up-

per bounds require the conditional distribution p(y|x) to be known. For example,

[AFDM17] introduces a variational marginal approximation r(y) to build a varia-

tional upper bound (VUB):

$$\begin{aligned} I(\mathbf{x};\mathbf{y}) &= \mathbb{E}_{p(\mathbf{x},\mathbf{y})}\left[\log \frac{p(\mathbf{y}|\mathbf{x})}{p(\mathbf{y})}\right] \\ &= \mathbb{E}_{p(\mathbf{x},\mathbf{y})}\left[\log \frac{p(\mathbf{y}|\mathbf{x})}{r(\mathbf{y})}\right] - \mathrm{KL}(p(\mathbf{y})\,\|\,r(\mathbf{y})) \\ &\le \mathbb{E}_{p(\mathbf{x},\mathbf{y})}\left[\log \frac{p(\mathbf{y}|\mathbf{x})}{r(\mathbf{y})}\right] = \mathbb{E}_{p(\mathbf{x})}\left[\mathrm{KL}(p(\mathbf{y}|\mathbf{x})\,\|\,r(\mathbf{y}))\right]. \end{aligned} \tag{3.6}$$


The inequality is based on the fact that the KL-divergence is always non-negative.

To be a good MI estimator, this upper bound requires a well-learned density approximation r(y) to p(y), so that the difference DKL(p(y)‖r(y)) is small. However,

learning a good marginal approximation r(y) without any additional information,

recognized as the distribution density estimation problem [MIA99], is challenging,

especially when variable y is in a high-dimensional space. In practice, [AFDM17]

fixes r(y) as a standard normal distribution, r(y) = N (y|0, I), which results in a

high-bias MI estimation. With N sample pairs {(x_i, y_i)}_{i=1}^N, [POVDO+19] replaces r(y) with a Monte Carlo approximation r_i(y) = (1/(N−1)) Σ_{j≠i} p(y|x_j) ≈ p(y) and derives

a leave-one-out upper bound (L1Out):

$$I_{\mathrm{L1Out}} := \mathbb{E}\left[\frac{1}{N}\sum_{i=1}^{N} \log \frac{p(\mathbf{y}_i|\mathbf{x}_i)}{\frac{1}{N-1}\sum_{j \neq i} p(\mathbf{y}_i|\mathbf{x}_j)}\right]. \tag{3.7}$$

This bound does not require any additional parameters, but depends heavily on a sufficient sample size to achieve a satisfactory Monte Carlo approximation. In practice,

L1Out suffers from numerical instability when applied to real-world MI minimization

problems.

To compare our method with the aforementioned MI upper bounds in more gen-

eral scenarios (i.e., p(y|x) is unknown), we use a neural network qθ(y|x) to approxi-

mate p(y|x), and develop variational versions of VUB and L1Out as:

$$I_{\mathrm{vVUB}} = \mathbb{E}_{p(\mathbf{x},\mathbf{y})}\left[\log \frac{q_\theta(\mathbf{y}|\mathbf{x})}{r(\mathbf{y})}\right], \tag{3.8}$$

$$I_{\mathrm{vL1Out}} = \mathbb{E}\left[\frac{1}{N}\sum_{i=1}^{N} \log \frac{q_\theta(\mathbf{y}_i|\mathbf{x}_i)}{\frac{1}{N-1}\sum_{j \neq i} q_\theta(\mathbf{y}_i|\mathbf{x}_j)}\right]. \tag{3.9}$$

We discuss theoretical properties of these two variational bounds in the Supplemen-

tary Material. In a simulation study (Section 3.4.1), we find that variational L1Out

reaches better performance than previous lower bounds for MI estimation. However,

the problem of numerical instability still remains for variational L1Out in real-world applications (as shown in Section 3.4.4). To the best of our knowledge, we provide the first variational versions of the VUB and L1Out upper bounds, and study their properties through both theoretical analysis and empirical evaluation.

3.3 Proposed Method

Suppose we have sample pairs {(x_i, y_i)}_{i=1}^N drawn from an unknown or intractable

distribution p(x,y). We aim to derive an upper bound estimator of the mutual

information I(x;y) based on the given samples. In a range of machine learning

tasks (e.g., information bottleneck), one of the conditional distributions between

variables x and y (as p(x|y) or p(y|x)) can be known. To efficiently utilize this

additional information, we first derive a mutual information (MI) upper bound with

the assumption that one of the conditional distributions is provided (we suppose

p(y|x) is provided, without loss of generality). Then, we extend the bound into more

general cases where no conditional distribution is known. Finally, we develop an MI

minimization algorithm based on the derived bound.

3.3.1 CLUB with p(y|x) Known

With the conditional distribution p(y|x), our MI Contrastive Log-ratio Upper Bound

(CLUB) is defined as:

ICLUB(x;y) := Ep(x,y)[log p(y|x)]− Ep(x)Ep(y)[log p(y|x)]. (3.10)


To show that ICLUB(x;y) is an upper bound of I(x;y), we calculate the gap ∆

between them:

$$\begin{aligned} \Delta := \ & I_{\mathrm{CLUB}}(\mathbf{x};\mathbf{y}) - I(\mathbf{x};\mathbf{y}) \\ = \ & \mathbb{E}_{p(\mathbf{x},\mathbf{y})}[\log p(\mathbf{y}|\mathbf{x})] - \mathbb{E}_{p(\mathbf{x})}\mathbb{E}_{p(\mathbf{y})}[\log p(\mathbf{y}|\mathbf{x})] - \mathbb{E}_{p(\mathbf{x},\mathbf{y})}\left[\log p(\mathbf{y}|\mathbf{x}) - \log p(\mathbf{y})\right] \\ = \ & \mathbb{E}_{p(\mathbf{x},\mathbf{y})}[\log p(\mathbf{y})] - \mathbb{E}_{p(\mathbf{x})}\mathbb{E}_{p(\mathbf{y})}[\log p(\mathbf{y}|\mathbf{x})] \\ = \ & \mathbb{E}_{p(\mathbf{y})}\left[\log p(\mathbf{y}) - \mathbb{E}_{p(\mathbf{x})}[\log p(\mathbf{y}|\mathbf{x})]\right]. \end{aligned} \tag{3.11}$$

By the definition of the marginal distribution, we have p(y) = ∫ p(y|x)p(x) dx = E_{p(x)}[p(y|x)]. Note that log(·) is a concave function, and by Jensen's inequality, we have log p(y) = log(E_{p(x)}[p(y|x)]) ≥ E_{p(x)}[log p(y|x)]. Applying this inequality to

)≥ Ep(x)[log p(y|x)]. Applying this inequality to

(3.11), we conclude that the gap ∆ is always non-negative. Therefore, ICLUB(x;y) is

an upper bound of I(x;y). The bound is tight when p(y|x) has the same value for

any x, which means variables x and y are independent. Consequently, we summarize

the above discussion into the following Theorem 3.3.1.

Theorem 3.3.1. For two random variables x and y,

I(x;y) ≤ ICLUB(x;y). (3.12)

Equality is achieved if and only if x and y are independent.

With sample pairs {(xi,yi)}Ni=1, ICLUB(x;y) has an unbiased estimate as:

ICLUB =1

N

N∑i=1

log p(yi|xi)−1

N2

N∑i=1

N∑j=1

log p(yj|xi)

=1

N2

N∑i=1

N∑j=1

[log p(yi|xi)− log p(yj|xi)] . (3.13)

In the estimator ICLUB, log p(yi|xi) provides the conditional log-likelihood of positive

sample pair (xi,yi); {log p(yj|xi)}i 6=j provide the conditional log-likelihood of nega-

tive sample pair (xi,yj). The difference between log p(yi|xi) and log p(yj|xi) is the

34

Page 50: Improving Natural Language Understanding via Contrastive

contrastive probability log-ratio between two conditional distributions. Therefore,

we name this novel MI upper bound estimator Contrastive Log-ratio Upper Bound

(CLUB). Compared with previous MI neural estimators, CLUB has a simpler form,

as a linear combination of log-ratios between positive and negative sample pairs. The

linear form of log-ratios improves the numerical stability for calculation of CLUB and

its gradient, which we discuss in detail in Section 3.3.3.

3.3.2 CLUB with Conditional Distributions Unknown

When the conditional distributions p(y|x) or p(x|y) is provided, the MI can be

directly upper-bounded by (3.13) with samples {(xi,yi)}Ni=1. Unfortunately, in a

large number of machine learning tasks, the conditional relation between variables is

unavailable.

To further extend the CLUB estimator into more general scenarios, we use a vari-

ational distribution qθ(y|x) with parameter θ to approximate p(y|x). Consequently,

a variational CLUB term (vCLUB) is defined by:

IvCLUB(x;y) := Ep(x,y)[log qθ(y|x)]− Ep(x)Ep(y)[log qθ(y|x)]. (3.14)

Similar to the MI upper bound estimator ICLUB in (3.13), the unbiased estimator for

vCLUB with samples {xi,yi} is:

IvCLUB =1

N2

N∑i=1

N∑j=1

[log qθ(yi|xi)− log qθ(yj|xi)]

=1

N

N∑i=1

[log qθ(yi|xi)−

1

N

N∑j=1

log qθ(yj|xi)]. (3.15)

Using the variational approximation qθ(y|x), vCLUB no longer guarantees an upper

bound of I(x;y). However, the vCLUB shares good properties with CLUB. We claim

that with good variational approximation qθ(y|x), vCLUB can still hold a MI upper

bound or become a reliable MI estimator. The following analyses support this claim.

35

Page 51: Improving Natural Language Understanding via Contrastive

Let qθ(x,y) = qθ(y|x)p(x) be the variational joint distribution induced by qθ(y|x).

Generally, we have the following Theorem 3.3.2. Note that when x and y are indepen-

dent, IvCLUB has exactly the same value as I(x;y), without requiring any additional

assumption on qθ(y|x). However, unlike in Theorem 3.3.1 as a sufficient and neces-

sary condition, independence between x and y becomes sufficient but not necessary

to conclude I(x;y) = IvCLUB(x;y), due to the variation approximation qθ(y|x).

Theorem 3.3.2. Denote qθ(x,y) = qθ(y|x)p(x). If

KL (p(x,y)‖qθ(x,y)) ≤ KL (p(x)p(y)‖qθ(x,y)) ,

then I(x;y) ≤ IvCLUB(x;y). The equality holds when x and y are independent.

Proof of Theorem 3.3.2. We calculate the gap between IvCLUB and I(x;y):

∆ :=IvCLUB(x;y)− I(x;y)

=Ep(x,y)[log qθ(y|x)]− Ep(x)Ep(y)[log qθ(y|x)]− Ep(x,y) [log p(y|x)− log p(y)]

=[Ep(y)[log p(y)]− Ep(x)p(y)[log qθ(y|x)]

]−[Ep(x,y)[log p(y|x)]− Ep(x,y)[log qθ(y|x)]

]=Ep(x)p(y)[log

p(y)

qθ(y|x)]− Ep(x,y)[log

p(y|x)

qθ(y|x)]

=Ep(x)p(y)[logp(x)p(y)

qθ(y|x)p(x)]− Ep(x,y)[log

p(y|x)p(x)

qθ(y|x)p(x)]

=DKL(p(x)p(y)‖qθ(x,y))−DKL(p(x,y)‖qθ(x,y)).

Therefore, IvCLUB(x;y) is an upper bound of I(x;y) if and only if

DKL(p(x)p(y)‖qθ(x,y)) ≥ DKL(p(x,y)‖qθ(x,y)).

If x and y are independent, p(x)p(y) = p(x,y). Then, DKL(p(x)p(y)‖qθ(x,y)) =

DKL(p(x,y)‖qθ(x,y)) and ∆ = 0. Therefore, IvCLUB(x;y) = I(x;y), the equality

holds.

36

Page 52: Improving Natural Language Understanding via Contrastive

Theorem 3.3.2 provides insight that vCLUB remains a MI upper bound if the vari-

ational joint distribution qθ(x,y) is “closer” to p(x,y) than to p(x)p(y). Therefore,

minimizing DKL(p(x,y)‖qθ(x,y)) will facilitate the condition in Theorem 3.3.2 to be

achieved. We show that DKL(p(x,y)‖qθ(x,y)) can be minimized by maximizing the

log-likelihood of qθ(y|x), because of the following equation:

minθDKL(p(x,y)‖qθ(x,y))

= minθ

Ep(x,y)[log(p(y|x)p(x))− log(qθ(y|x)p(x))]

= minθ

Ep(x,y)[log p(y|x)]− Ep(x,y)[log qθ(y|x)]. (3.16)

Equation (3.16) equals minθDKL(p(y|x)‖qθ(y|x)), in which the first term has no

relation with parameter θ. Therefore, minθDKL(p(x,y)‖qθ(x,y)) is equivalent to the

maximization of the second term, maxθ Ep(x,y)[log qθ(y|x)]. With samples {(xi,yi)},

we can maximize the log-likelihood function L(θ) := 1N

∑Ni=1 log qθ(yi|xi), which is

the unbiased estimate of Ep(x,y)[log qθ(y|x)].

In practice, the variational distribution qθ(y|x) is usually implemented with neu-

ral networks. By enlarging the network capacity (i.e., adding layers and neurons)

and applying gradient-ascent to the log-likelihood L(θ), we can obtain far more ac-

curate approximation qθ(y|x) to p(y|x), thanks to the high expressiveness of neural

networks [HLY19, OS19]. Therefore, to further discuss the properties of vCLUB, we

assume the neural network approximation qθ achieves DKL(p(y|x)‖qθ(y|x)) ≤ ε with

a small number ε > 0. In the Supplementary Material, we quantitatively discuss the

reasonableness of this assumption. Consider the KL-divergence between p(x)p(y)

and qθ(x,y). If DKL(p(x)p(y)‖qθ(x,y)) ≥ DKL(p(x,y)‖qθ(x,y)), by Theorem 3.3.2,

vCLUB is already a MI upper bound. Otherwise, if DKL(p(x)p(y)‖qθ(x,y)) <

DKL(p(x,y)‖qθ(x,y)), we have the following corollary:

37

Page 53: Improving Natural Language Understanding via Contrastive

Corollary 3.3.3. Given DKL(p(y|x)‖qθ(y|x)) ≤ ε, if

DKL(p(x,y)‖qθ(x,y)) > DKL(p(x)p(y)‖qθ(x,y)),

then |I(x;y)− IvCLUB(x;y)| < ε.

Proof of Corollary 3.3.3. If DKL(p(y|x)‖qθ(y|x)) ≤ ε, then

DKL(p(x,y)‖qθ(x,y)) = Ep(x,y)[logp(x,y)

qθ(x,y)] =Ep(x,y)[log

p(y|x)

qθ(y|x)]

=DKL(p(y|x)‖qθ(y|x)) ≤ ε.

By the condition

DKL(p(x,y)‖qθ(x,y) > DKL(p(x)p(y)‖qθ(x,y)),

we have DKL(p(x)p(y)‖qθ(x,y)) < ε.

Note that the KL-divergence is always non-negative. From the proof of Theo-

rem 3.3.2,

|IvCLUB(x;y)− I(x;y)| = |DKL(p(x)p(y)‖qθ(x,y))−DKL(p(x,y)‖qθ(x,y))|

<max {DKL(p(x)p(y)‖qθ(x,y)), DKL(p(x,y)‖qθ(x,y))} ≤ ε,

which supports the claim.

Combining Corollary 3.3.3 and Theorem 3.3.2, we conclude that with a good

variational approximation qθ(y|x), vCLUB can either remain a MI upper bound,

or become a MI estimator whose absolute error is bounded by the approximation

performance DKL(p(y|x)‖qθ(y|x)).

3.3.3 CLUB in MI Minimization

One of the major applications of MI upper bounds is for mutual information mini-

mization. In general, MI minimization aims to reduce the correlation between two

38

Page 54: Improving Natural Language Understanding via Contrastive

Algorithm 1 MI Minimization with vCLUB

for each training iteration doSample {(xi,yi)}Ni=1 from pσ(x,y)Log-likelihood L(θ) = 1

N

∑Ni=1 log qθ(yi|xi)

Update qθ(y|x) by maximizing L(θ)for i = 1 to N do

if use sampling thenSample k′i uniformly from {1, 2, . . . , N}Ui = log qθ(yi|xi)− log qθ(yk′i |xi)

elseUi = log qθ(yi|xi)− 1

N

∑Nj=1 log qθ(yj|xi)

end ifend forUpdate pσ(x,y) by minimize IvCLUB = 1

N

∑Ni=1 Ui

end for

variables x and y by selecting an optimal parameter σ of the joint variational dis-

tribution pσ(x,y). Under some application scenarios, additional conditional infor-

mation between x and y is known. For example, in the information bottleneck

task, the joint distribution between input x and bottleneck representation y is

pσ(x,y) = pσ(y|x)p(x). Then the MI upper bound ICLUB can be calculated directly

based on (3.13).

For cases in which the conditional information between x and y remains unclear,

we propose an MI minimization algorithm using the vCLUB estimator. At each

training iteration, we first obtain a batch of samples {(xi,yi)} from pσ(x,y). We

then update the variational approximation qθ(y|x) by maximizing the log-likelihood

L(θ) = 1N

∑Ni=1 log qθ(yi|xi). After qθ(y|x) is updated, we calculate the vCLUB

estimator as described in (3.15). Finally, the gradient of IvCLUB is calculated and

back-propagated to parameters of pσ(x,y). The reparameterization trick [KW13]

ensures the gradient back-propagates through the sampled embeddings (xi,yi). Up-

dating joint distribution pσ(x,y) will lead to the change of conditional distribution

pσ(y|x). Therefore, we need to update the approximation network qθ(y|x) again.

39

Page 55: Improving Natural Language Understanding via Contrastive

Consequently, qθ(y|x) and pσ(x,y) are updated alternately during the training (as

shown in Algorithm 1 without sampling).

In each training iteration, the vCLUB estimator requires calculation of all condi-

tional distributions {qθ(yj|xi)}_{i,j=1}^N, which leads to O(N²) computational complexity.

To accelerate the training, we use stochastic sampling to approximate the mean of

conditional probabilities in IvCLUB (Eqn. (3.15)), and obtain a sampled vCLUB esti-

mator:

log qθ(yi|xi) − (1/N) Σ_{j=1}^N log qθ(yj|xi) ≈ log qθ(yi|xi) − log qθ(yk′i|xi),   (3.17)

with k′i uniformly selected from indices {1, 2, . . . , N}. With this sampling strategy,

the computational complexity in each iteration can be reduced to O(N) (as shown

in Algorithm 1 using sampling). A similar sampling strategy can also be applied to

CLUB when p(y|x) is known. This stochastic sampling estimator not only provides

an unbiased estimation to IvCLUB, but bridges the MI minimization with negative

sampling, a commonly used training strategy [GL16, CWT+19, CLZ+20], in which

for each positive data pair (xi,yi), a negative pair (xi,yk′i) is sampled. The mu-

tual information is minimized by reducing the positive conditional probability, while

enlarging the negative conditional probability. Although previous MI upper bounds

also utilize the negative data pairs (such as L1Out in (3.7)), they do not yield an un-

biased estimate when accelerated with negative sampling, because of the non-linear

log function applied after the linear probability summation. The unbiasedness of

our sampled CLUB is manifested thanks to the form of linear log-ratio summation.

In the experiments, we find the sampled vCLUB not only provides comparable MI

estimation performance, but also improves the model generalization abilities.
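To make the alternating procedure concrete, the following PyTorch-style sketch implements the vCLUB estimator and the update scheme of Algorithm 1, covering both the full O(N²) form of Eqn. (3.15) and the sampled O(N) form of Eqn. (3.17). It is a minimal illustration rather than the exact experimental configuration: the Gaussian parameterization of qθ(y|x) follows Section 3.4.1, while the hidden size, the optimizers, and the toy joint distribution pσ(x,y) (a learnable linear map, so that minimizing the bound drives x and y toward independence) are placeholder assumptions.

    import torch
    import torch.nn as nn

    class VariationalQ(nn.Module):
        # Gaussian approximation q_theta(y|x) = N(y; mu(x), diag(exp(logvar(x))))
        def __init__(self, x_dim, y_dim, hidden=64):
            super().__init__()
            self.mu = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, y_dim))
            self.logvar = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                        nn.Linear(hidden, y_dim))

        def log_prob(self, x, y):
            # log q_theta(y|x), dropping the constant -(d/2) log(2 pi)
            mu, logvar = self.mu(x), self.logvar(x)
            return (-0.5 * (y - mu).pow(2) / logvar.exp() - 0.5 * logvar).sum(-1)

    def vclub(q, x, y, use_sampling=False):
        positive = q.log_prob(x, y)                    # log q(y_i | x_i)
        if use_sampling:                               # sampled vCLUB, Eqn. (3.17): O(N)
            idx = torch.randperm(y.size(0))
            return (positive - q.log_prob(x, y[idx])).mean()
        mu, logvar = q.mu(x), q.logvar(x)              # full vCLUB, Eqn. (3.15): O(N^2)
        # pair_logp[i, j] = log q(y_j | x_i), computed via broadcasting
        pair_logp = (-0.5 * (y.unsqueeze(0) - mu.unsqueeze(1)).pow(2)
                     / logvar.exp().unsqueeze(1) - 0.5 * logvar.unsqueeze(1)).sum(-1)
        return (positive - pair_logp.mean(dim=1)).mean()

    # toy p_sigma(x, y): x ~ N(0, I), y = A x + noise, with sigma = A learnable
    A = torch.randn(20, 20, requires_grad=True)
    q = VariationalQ(x_dim=20, y_dim=20)
    q_opt = torch.optim.Adam(q.parameters(), lr=1e-4)
    p_opt = torch.optim.Adam([A], lr=1e-4)
    for step in range(4000):
        x = torch.randn(64, 20)
        y = x @ A.t() + 0.1 * torch.randn(64, 20)      # reparameterized: gradients reach A
        # step 1: fit q_theta by maximum likelihood on detached samples
        q_opt.zero_grad()
        (-q.log_prob(x.detach(), y.detach()).mean()).backward()
        q_opt.step()
        # step 2: back-propagate the vCLUB estimate into the parameters sigma (here, A)
        p_opt.zero_grad()
        vclub(q, x, y, use_sampling=True).backward()
        p_opt.step()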


3.4 Experiments

We first show the performance of CLUB as an MI estimator on tractable toy (simu-

lated) cases. Then we evaluate the minimization ability of CLUB on two real-world

applications: Information Bottleneck (IB) and Unsupervised Domain Adaptation

(UDA). In the information bottleneck, the conditional distribution p(y|x) is known,

so we compare both CLUB and variational CLUB (vCLUB) estimators. In other

experiments for which p(y|x) is unknown, all the tested upper bounds require varia-

tional approximation. Without ambiguity, we abbreviate all variational upper bounds

(e.g., vCLUB) with their original names (e.g., CLUB) for simplicity.

3.4.1 MI Estimation Quality

Following the setup from [POVDO+19], we apply CLUB as an MI estimator in two

toy tasks: (i) estimating MI with samples {(xi,yi)} drawn jointly from a multi-

variate Gaussian distribution with correlation ρ; (ii) estimating MI with samples {(xi, (Wyi)^3)}, where (xi, yi) still comes from a Gaussian with correlation ρ, and W is a full-rank matrix. Since the transformation y → (Wy)^3 is smooth and bijective, the mutual information is invariant [KSG04], and I(x;y) = I(x; (Wy)^3). For

both of the tasks, the dimension of samples x and y is set to d = 20. Under Gaussian

distributions, the true MI value can be calculated as I(x;y) = −(d/2) log(1 − ρ²), and

therefore we set the true MI value to each of {2.0, 4.0, 6.0, 8.0, 10.0} by varying the

value of ρ. At each MI true value, we sample data batches 4000 times, with batch

size equal to 64, for the training of variational MI estimators.
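The simulated data and their ground-truth MI can be generated as in the short sketch below (a minimal NumPy illustration; the helper names are ours, and the specific W is an arbitrary full-rank draw):

    import numpy as np

    def sample_correlated_gaussian(rho, d=20, batch_size=64):
        # x and y are jointly Gaussian with componentwise correlation rho
        x = np.random.randn(batch_size, d)
        eps = np.random.randn(batch_size, d)
        y = rho * x + np.sqrt(1.0 - rho ** 2) * eps
        return x, y

    def true_mi(rho, d=20):
        # I(x; y) = -(d/2) log(1 - rho^2) for this joint Gaussian
        return -0.5 * d * np.log(1.0 - rho ** 2)

    def rho_for_mi(mi, d=20):
        # invert the formula to place the true MI at a chosen level
        return np.sqrt(1.0 - np.exp(-2.0 * mi / d))

    # cubic task: the smooth bijection y -> (W y)^3 leaves the true MI unchanged
    W = np.random.randn(20, 20)                # full rank with probability one
    x, y = sample_correlated_gaussian(rho_for_mi(4.0))
    y_cubic = (y @ W.T) ** 3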

We compare our method with baselines including MINE [BBR+18], NWJ [NWJ10],

InfoNCE [OLV18], VUB [AFDM17] and L1Out [POVDO+19]. Since the conditional

distribution p(y|x) is unknown in this simulation setup, all upper bounds (VUB,

L1Out, CLUB) are calculated with an auxiliary approximation network qθ(y|x).


Figure 3.1: Simulation performance of MI estimators. (Axes: training steps vs. mutual information; estimators: NWJ, MINE, NCE, L1Out, CLUB, CLUBSample; estimated MI shown against the true MI.)

The approximation network has the same structure for all upper bounds, parame-

terized in a Gaussian family, qθ(y|x) = N (y|µ(x),σ2(x) · I) with mean µ(x) and

variance σ2(x) inferred by neural networks. On the other hand, all the MI lower

bounds (MINE, NWJ, InfoNCE) require learning of a value function f(x,y). To

make a fair comparison, we set the value function and the neural approximation to

have one hidden layer and the same hidden units. For Gaussian setup, the number

of hidden units is 20; for Cubic setup, the number of hidden units is 40. On the top

of hidden layer outputs, we add the ReLU activation function. The learning rate for

all estimators is set to 1 × 10^−4.

We report in Figure 3.1 the estimated MI values in each training step. In the top

row, data are from joint Gaussian distributions with the MI true value stepping over

time. In the bottom row, a cubic transformation is further applied to the Gaussian

samples as y. In each figure, the true MI value is a step function shown as the black line. The estimated values are displayed as shaded blue curves. The dark blue curves show the local averages of the estimated MI, with a bandwidth equal to 200.

The estimation of VUB has a far larger bias than the other methods, so we provide its results in the Supplementary Material. From Figure 3.1, lower-bound estimators, such as NWJ, MINE, and InfoNCE, provide estimated values mainly below the true MI step


Figure 3.2: Estimation quality comparison of MI estimators. (Left column: Gaussian setup; right column: Cubic setup. Panels report bias, variance, and MSE over true MI values from 2 to 10 for NWJ, MINE, NCE, L1Out, CLUB, and CLUBSample.)

function, while L1Out, CLUB and Sampled CLUB (CLUBSample) estimate values

above the step function, which supports our theoretical analysis about CLUB with

variational approximation.

The numerical results of bias and variance in the estimation are reported in Fig-

ure 3.2. The left column shows the results of estimations under a Gaussian distri-

bution, while the right column is under Cubic setup. In each column, estimation

metrics are reported as bias, variance, and mean-square-error (MSE). In each plot,

the evaluation metric is reported with different true MI values varying from 2 to

10. Among these methods, CLUB and CLUBSample have the lowest bias. The bias

difference between CLUB and CLUBSample is insignificant, supporting our claim in


Figure 3.3: Estimator speed comparison with different batch sizes (32 to 512). Both axes have a logarithmic scale. (y-axis: time cost in seconds; estimators: NWJ, MINE, NCE, L1Out, CLUB, CLUBSample.)

Section 3.3.3 that CLUBSample is an unbiased stochastic approximation of CLUB.

L1Out also provides a small estimation bias, slightly worse than that of CLUB. NWJ

and InfoNCE have the lowest variance under both setups. CLUBSample has larger

variance than CLUB and L1Out due to the use of the sampling strategy. When con-

sidering the bias-variance trade-off as the mean squared estimation error (MSE, which equals bias² + variance), CLUB outperforms other estimators, while L1Out and CLUBSam-

ple also provide competitive performance.
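For reference, the three reported metrics can be computed from the estimates collected at one true-MI level as in the following minimal sketch (variable names are illustrative):

    import numpy as np

    def estimation_metrics(estimates, true_mi):
        # estimates: MI estimates collected at one true-MI level
        est = np.asarray(estimates)
        bias = abs(est.mean() - true_mi)
        variance = est.var()
        mse = ((est - true_mi) ** 2).mean()    # equals bias**2 + variance
        return bias, variance, mse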

Although the L1Out estimator reaches similar estimation performance as our

CLUB on toy examples, we find L1Out fails to effectively reduce the MI when applied

as a critic in real-world MI minimization tasks. The numerical results in Sections 3.4.3

and 3.4.4 support this claim.

3.4.2 Time Efficiency of MI Estimators

Besides the estimation quality comparison, we further study the time efficiency of

different MI estimators. We conduct the comparison under the same experimental

setup as the Gaussian case in Section 3.4.1. Each MI estimator is tested with a


different batch size, from 32 to 512. We count the total time cost of the whole

estimation process and average it over the estimation steps. In Figure 3.3, we report

the average estimation time costs of different MI estimators. MINE and CLUBSample

have the best computational efficiency; both have O(N) computational complexity

with respect to the sample size N , because of the negative sampling strategy. Among

other O(N²) methods, CLUB has the highest estimation speed, thanks

to its simple form as mean of log-ratios, which can be easily accelerated by matrix

multiplication. Leave-one-out (L1Out) has the highest time cost, because it requires “leaving out” the positive sample pair each time in the denominator of Eq. (3.7).

3.4.3 MI Minimization in Information Bottleneck

The Information Bottleneck [TPB00] (IB) is an information-theoretical method for

latent representation learning. Given an input source x ∈ X and a corresponding

output target y ∈ Y , the information bottleneck aims to learn an encoder pσ(z|x),

such that the compressed latent code z is highly relevant to the target y, with

irrelevant source information from x being filtered. In other words, IB seeks to find

the sufficient statistics of x with respect to y [AFDM17], with minimum information

used from x. To address this task, an objective is introduced as

min_{pσ(z|x)} −I(y;z) + β I(x;z),   (3.18)

where β > 0 is a hyper-parameter. Following the setup from [AFDM17], we apply the

IB technique in the permutation-invariant MNIST classification. The input x is a

vector converted from a 28× 28 image of a hand-written number, and the output y

is the class label of this number. The stochastic encoder pσ(z|x) is implemented in

a Gaussian variational family, pσ(z|x) = N (z|µσ(x),Σσ(x)), where µσ and Σσ are

two fully-connected neural networks.


Table 3.1: Performance on the permutation-invariant MNIST classification.

  Method                  Misclass. rate (%)
  NWJ [NWJ10]             1.29
  MINE [BBR+18]           1.17
  InfoNCE [OLV18]         1.24
  DVB (VUB) [AFDM17]      1.13
  L1Out [POVDO+19]        -
  CLUB                    1.12
  CLUB (Sample)           1.10
  vCLUB                   1.10
  vCLUB (Sample)          1.06

For the first part of the IB objective (3.18), the MI between target y and latent

code z is maximized. We use the same strategy as in the deep variational information

bottleneck (DVB) [AFDM17], where a variational classifier qφ(y|z) is introduced to

implement a Barber-Agakov MI lower bound (Eqn. (3.2)) of I(y; z). The second

term in the IB objective requires the MI minimization between input x and the la-

tent representation z. DVB [AFDM17] utilizes the MI variation upper bound (VUB)

(Eqn. (3.6)) for the minimization of I(x; z). Since the closed form of pσ(z|x) is

already known as a Gaussian distribution parameterized by neural networks, we can

directly apply our CLUB estimator for minimizing I(x; z). Alternatively, the vari-

ational CLUB can be also applied under this scenario. Besides CLUB and vCLUB,

we compare previous methods such as MINE, NWJ, InfoNCE, and L1Out. The mis-

classification rates for different MI estimators are reported in Table 3.1, where the

top three methods are MI lower bounds and the rest are MI upper bounds.
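For concreteness, one training step of this IB setup can be sketched as follows. The module names (mu_net, logvar_net, clf), the diagonal-Gaussian encoder, and the value of β are illustrative assumptions; the CLUB penalty uses the known conditional pσ(z|x) in closed form:

    import torch
    import torch.nn.functional as F

    def gaussian_log_prob(z, mu, logvar):
        # log N(z; mu, diag(exp(logvar))), dropping the additive constant
        return (-0.5 * (z - mu).pow(2) / logvar.exp() - 0.5 * logvar).sum(-1)

    def ib_step(x, labels, mu_net, logvar_net, clf, beta=1e-3):
        # stochastic encoder p_sigma(z|x) via the reparameterization trick
        mu, logvar = mu_net(x), logvar_net(x)
        z = mu + logvar.mul(0.5).exp() * torch.randn_like(mu)
        # Barber-Agakov lower bound on I(y; z): classifier cross-entropy
        ce = F.cross_entropy(clf(z), labels)
        # CLUB upper bound on I(x; z): positive pairs minus all-pair average
        pos = gaussian_log_prob(z, mu, logvar)
        pair = gaussian_log_prob(z.unsqueeze(0), mu.unsqueeze(1), logvar.unsqueeze(1))
        club = (pos - pair.mean(dim=1)).mean()
        # minimize: (surrogate for -I(y;z)) + beta * (upper bound on I(x;z))
        return ce + beta * club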

MINE achieves the lowest misclassification error among lower bound estimators.

Although providing good MI estimation in the Gaussian simulation study, L1Out

suffers from numerical instability in MI optimization and fails during training. Both

CLUB and vCLUB estimators outperform previous methods in bottleneck representa-


tion learning, with lower misclassification rates. Note that sampled versions of CLUB

and vCLUB improve the accuracy compared with the original CLUB and vCLUB, re-

spectively, which verifies the claim that a negative sampling strategy improves model

robustness. Besides, using variational approximation qθ(y|x) even attains higher

accuracy than using ground truth pσ(y|x) for CLUB. Although pσ(y|x) provides

more accurate MI estimation, the variational approximation qθ(y|x) can add noise

into the gradient of CLUB. Both the sampling and the variational approximation in-

crease the randomness in the model, which helps to increase the model generalization

ability [HSK+12, BBR+18].

3.4.4 MI Minimization in Domain Adaptation

Another important application of MI minimization is disentangled representation

learning (DRL) [KM18a, CLGD18, LBL+19]. Specifically, we aim to encode the

data into several separate embedding parts, each with different semantic meaning.

The semantically disentangled representations help improve the performance of deep

learning models, especially in the fields of conditional generation [MSG+18], style

transfer [JMBV19b], and domain adaptation [GSR+18]. To learn (ideally) indepen-

dent disentangled representations, one effective solution is to minimize the mutual

information among different latent embedding parts.

We compare performance of MI estimators for learning disentangled representa-

tions in unsupervised domain adaptation (UDA) tasks. In UDA, we have images

xs ∈ X s from the source domain X s and xt ∈ X t from the target domain X t. While

each source image xs has a corresponding label ys, no label information is available

for observations in the target domain. The objective is to learn a model based on

data {xs, ys} and {xt}, which not only performs well in source domain classification,

but also provides satisfactory predictions in the target domain.


Figure 3.4: The information-theoretic framework for unsupervised domain adaptation. (A content encoder Ec and a domain encoder Ed map the inputs x^s, x^t to embeddings zc and zd; a content classifier C and a domain discriminator D provide the content and domain losses, and the mutual information between zc and zd is minimized.)

To solve this problem, we use the information-theoretical framework inspired from

[GSR+18]. Specifically, two feature extractors are introduced: the domain encoder

Ed and the content encoder Ec. The former encodes the domain information from

an observation x into a domain embedding zd = Ed(x); the latter outputs a content

embedding zc = Ec(x) based on an input data point x. As shown in Figure 3.4, the

content embedding zsc from the source domain is further used as an input to a content

classifier C(·) to predict the corresponding class label, with a content loss defined as

Lc = E[−y^s log C(z_c^s)]. The domain embedding zd (including z_d^s and z_d^t) is input to a domain discriminator D(·) to predict whether the observation comes from the source domain or the target domain, with a domain loss defined as Ld = E_{x∈X^s}[log D(zd)] + E_{x∈X^t}[log(1 − D(zd))]. Since the content information and the domain information

should be independent, we minimize the mutual information I(zc, zd) between the

content embedding zc and domain embedding zd. The final objective is:

min_{Ec,Ed,C,D} I(zc, zd) + λc Lc + λd Ld,   (3.19)

where λc, λd > 0 are hyper-parameters.

The above framework is shown in Figure 3.4. The input data x (including xs and

xt) are passed to a content encoder Ec and a domain encoder Ed, with output feature

zc and zd, respectively. C is the content classifier, and D is the domain discriminator.


Table 3.2: Performance comparison on UDA. Datasets are MNIST (M), MNIST-M (MM), USPS (U), SVHN (SV), CIFAR-10 (C), and STL (S).

  Method        M→MM   M→U    U→M    SV→M   C→S    S→C
  Source-Only   59.9   76.7   63.4   67.1   -      -
  MI-based Disentangling Framework:
  NWJ           83.3   98.3   91.1   86.5   78.2   71.0
  MINE          88.4   98.1   94.8   83.4   77.9   70.5
  InfoNCE       85.5   98.3   92.7   84.1   77.4   69.4
  VUB           76.4   97.1   96.3   81.5   -      -
  L1Out         76.2   96.3   93.9   -      77.8   69.2
  CLUB          93.7   98.9   97.7   89.7   78.7   71.8
  CLUB-S        94.6   98.9   98.1   90.6   79.1   72.3
  Other Frameworks:
  DANN          81.5   77.1   73.0   71.1   -      -
  DSN           83.2   91.3   -      76.0   -      -
  MCD           93.5   94.2   94.1   92.6   78.1   69.2

The mutual information between zc and zd is minimized.
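One possible assembly of the objective in Eq. (3.19) is sketched below. The encoders, classifier, and discriminator are placeholder modules; the domain loss is written as a standard binary cross-entropy to be minimized (assuming D ends with a sigmoid); and vclub refers to a sampled-CLUB routine like the one sketched at the end of Section 3.3.3, whose approximation network q_approx(zd|zc) is refitted alternately as in Algorithm 1:

    import torch
    import torch.nn.functional as F

    def uda_loss(x_s, y_s, x_t, Ec, Ed, C, D, q_approx, lam_c=1.0, lam_d=1.0):
        zc_s, zd_s = Ec(x_s), Ed(x_s)            # content / domain embeddings (source)
        zc_t, zd_t = Ec(x_t), Ed(x_t)            # content / domain embeddings (target)
        loss_c = F.cross_entropy(C(zc_s), y_s)   # content loss on labeled source data
        d_s, d_t = D(zd_s), D(zd_t)              # D predicts "source" with a sigmoid output
        loss_d = F.binary_cross_entropy(d_s, torch.ones_like(d_s)) + \
                 F.binary_cross_entropy(d_t, torch.zeros_like(d_t))
        # sampled vCLUB upper bound on I(z_c; z_d)
        zc = torch.cat([zc_s, zc_t])
        zd = torch.cat([zd_s, zd_t])
        mi_upper = vclub(q_approx, zc, zd, use_sampling=True)
        return mi_upper + lam_c * loss_c + lam_d * loss_d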

We apply different MI estimators to the framework (3.19), and evaluate the per-

formance on several DA benchmark datasets, including MNIST, MNIST-M, USPS,

SVHN, CIFAR-10, and STL. A detailed description of the datasets and model setups is provided in the Supplementary Material. Besides the proposed information-

theoretical UDA model, we also compare the performance with other UDA frame-

works: DANN [GUA+16], DSN [BTS+16], and MCD [SWUH18]. The classification

accuracy on target domain is reported in Table 3.2. Among results in MI-based

disentangling framework, the top three are MI lower bounds, while the rest are MI

upper bounds. CLUB-S refers to Sampled CLUB.

From the results, we find our MI-based disentangling framework shows competitive results relative to previous UDA methods. Among different MI estimators, the Sampled CLUB


uniformly outperforms other competitive methods on four DA tasks. The stochastic

sampling in CLUBSample improves the model generalization ability and helps the

model avoid overfitting. The other two MI upper bounds, VUB and L1Out, fail to

train a satisfactory UDA model; their results are worse than those of the MI lower-bound estimators. With L1Out, the training loss cannot even decrease on the most challenging

SVHN→MNIST task, due to the numerical instability.

3.5 Conclusions

We have introduced a novel mutual information upper bound called Contrastive Log-

ratio Upper Bound (CLUB). This novel MI estimator can be extended to a variational

version for general scenarios when only samples of the joint distribution are available.

Based on the variational CLUB, we have proposed a new MI minimization algorithm,

and further accelerated it with a negative sampling strategy. We have studied the

good properties of CLUB both theoretically and empirically. Experimental results

on simulation studies and real-world applications show the attractive performance

of CLUB on both MI estimation and MI minimization tasks. This work provides

insight into the connection between mutual information and widespread machine

learning training strategies, including contrastive learning and negative sampling. We

believe the proposed CLUB estimator will have significant applications for reducing

the correlation of different model parts, especially in the domains of interpretable

machine learning, controllable generation, and fairness.


Chapter 4

Improving Representation

Disentanglement for Text Data

4.1 Introduction

Disentangled representation learning (DRL), which maps different aspects of data

into distinct and independent low-dimensional latent vector spaces, has attracted con-

siderable attention for making deep learning models more interpretable. Through a

series of operations such as selecting, combining, and switching, the learned disentan-

gled representations can be utilized for downstream tasks, such as domain adaptation

[LYF+18], style transfer [LTH+18], conditional generation [D+17, BHP+18], and few-

shot learning [KVAMR18]. Although widely used in various domains, such as images

[TYL17, LTH+18], videos [YM18, HLH+18], and speech [CcYyLsL18, ZLL+19], many

challenges in DRL have received limited exploration in natural language processing

[JMBV19a].

To disentangle various attributes of text, two distinct types of embeddings are

typically considered: the style embedding and the content embedding [JMBV19a].

The content embedding is designed to encapsulate the semantic meaning of a sen-

tence. In contrast, the style embedding should represent desired attributes, such

as the sentiment of a review, or the personality associated with a post. Ideally, a

disentangled-text-representation model should learn representative embeddings for

both style and content.

To accomplish this, several strategies have been introduced. [SLBJ17] proposed


to learn a semantically-meaningful content embedding space by matching the content

embedding from two different style domains. However, their method requires prede-

fined style domains, and thus cannot automatically infer style information from unla-

beled text. [HYL+17b] and [LSS+19] utilized one-hot vectors as style-related features

(instead of inferring the style embeddings from the original data). These models are

not applicable when new data comes from an unseen style class. [JMBV19a] proposed

an encoder-decoder model in combination with an adversarial training objective to

infer both style and content embeddings from the original data. However, their ad-

versarial training framework requires manually-processed supervised information for

content embeddings (e.g., reconstructing sentences with manually-chosen sentiment-

related words removed). Further, there is no theoretical guarantee for the quality of

disentanglement.

In this paper, we introduce a novel Information-theoretic Disentangled Embedding

Learning method (IDEL) for text, based on guidance from information theory. In-

spired by Variation of Information (VI), we introduce a novel information-theoretic

objective to measure how well the learned representations are disentangled. Specif-

ically, our IDEL reduces the dependency between style and content embeddings by

minimizing a sample-based mutual information upper bound. Furthermore, the mu-

tual information between latent embeddings and the input data is also maximized

to ensure the representativeness of the latent embeddings (i.e., style and content

embeddings). The contributions of this paper are summarized as follows:

• A principled framework is introduced to learn disentangled representations of

natural language. By minimizing a novel VI-based DRL objective, our model

not only explicitly reduces the correlation between style and content embed-

dings, but also simultaneously preserves the sentence information in the latent

spaces.


• A general sample-based mutual information upper bound is derived to facilitate

the minimization of our VI-based objective. With this new upper bound, the

dependency of style and content embeddings can be decreased effectively and

stably.

• The proposed model is evaluated empirically relative to other disentangled rep-

resentation learning methods. Our model exhibits competitive results in several

real-world applications.

4.2 Preliminary

4.2.1 Mutual Information Variational Bounds

Mutual information (MI) is a key concept in information theory for measuring the

dependence between two random variables. Given two random variables x and y,

their MI is defined as

I(x;y) = Ep(x,y)[log p(x,y)/(p(x)p(y))],   (4.1)

where p(x,y) is the joint distribution of the random variables, with p(x) and p(y)

representing the respective marginal distributions.

In disentangled representation learning, a common goal is to minimize the MI

between different types of embeddings [POVDO+19]. However, the exact MI value

is difficult to calculate in practice, because in most cases the integral in Eq. (4.1)

is intractable. To address this problem, various MI estimation methods have been

introduced [CDH+16, BBR+18, POVDO+19]. One of the commonly used estimation

approaches is the Barber-Agakov lower bound [BA03]. By introducing a variational

distribution q(x|y), one may derive

I(x;y) ≥ H(x) + Ep(x,y)[log q(x|y)], (4.2)


Figure 4.1: Illustration of the concept of variation of information (VI).

where H(x) = Ep(x)[− log p(x)] is the entropy of variable x.

4.2.2 Variation of Information

In information theory, Variation of Information (VI, also called Shared Information

Distance) is a measure of independence between two random variables. The mathe-

matical definition of VI between random variables x and y is

VI(x;y) = H(x) + H(y) − 2I(x;y),   (4.3)

where H(x) and H(y) are entropies of x and y, respectively. In Figure 4.1, we

provide an illustration of VI: the green and purple circles represent the entropy of x

and y, respectively; the intersection (blue region) is the mutual information between

x and y; the symmetric difference of the two circles (green and purple regions) is

VI(x;y).

[KSAG05] show that VI is a well-defined metric, which satisfies the triangle in-

equality:

VI(y;x) + VI(x; z) ≥ VI(y; z), (4.4)

for any random variables x, y and z. Additionally, VI(x;y) = 0 indicates x and y

are the same variable [Mei07]. From Eq. (4.3), the VI distance has a close relation to


mutual information: if the mutual information is a measure of “dependence” between

two variables, then the VI distance is a measure of “independence” between them.

4.3 Method

Consider data {(xi, yi)}Ni=1, where each xi is a sentence drawn from a distribution

p(x), and yi is the label indicating the style of xi. The goal is to encode each

sentence xi into its corresponding style embedding si and content embedding ci with

an encoder qθ(s, c|x):

si, ci|xi ∼ qθ(s, c|xi). (4.5)

The collection of style embeddings {si}Ni=1 can be regarded as samples drawn from a

variable s in the style embedding space, while the collection of content embeddings

{ci}Ni=1 are samples from a variable c in the content embedding space. In practice,

the dimension of the content embedding is typically higher than that of the style

embedding, considering that the content usually contains more information than the

style [JMBV19a].

We first give an intuitive introduction to our proposed VI-based objective, then

in Section 4.3.1 we provide the theoretical justification for it. To disentangle the style

and content embedding, we try to minimize the mutual information I(s; c) between s

and c. Meanwhile, we maximize I(c;x) to ensure that the content embedding c suf-

ficiently encapsulates information from the sentence x. The embedding s is expected

to contain rich style information. Therefore, the mutual information I(s; y) should

be maximized. Thus, our overall disentangled representation learning objective is:

LDis = I(s; c)− I(c;x)− I(s; y).


4.3.1 Theoretical Justification of the Objective

The objective LDis has a strong connection with the independence measurement in

information theory. As described in Section 4.2.2, Variation of Information (VI)

is a well-defined metric of independence between variables. Applying the triangle

inequality from Eq. (4.4) to s, c and x, we have VI(s;x) + VI(x; c) ≥ VI(s; c).

Equality occurs if and only if the information from variable x is totally separated

into two independent variables s and c, which is an ideal scenario for disentangling

sentence x into its corresponding style embedding s and content embedding c.

Therefore, the difference between VI(s;x) + VI(x; c) and VI(s; c) represents the

degree of disentanglement. Hence we introduce a measurement:

D(x; s, c) = VI(s;x) + VI(x; c)− VI(c; s).

From Eq. (4.4), we know that D(x; s, c) is always non-negative. By the definition of

VI in Eq. (4.3), D(x; s, c) can be simplified as:

VI(c;x) + VI(x;s) − VI(s;c) = 2H(x) + 2[I(s;c) − I(x;c) − I(x;s)].

Since H(x) is a constant associated with the data, we only need to focus on I(s; c)−

I(x; c)− I(x; s).

The measurement D(x; s, c) is symmetric with respect to the style s and content c, giving rise

to the problem that without any inductive bias in supervision, the disentangled rep-

resentation could be meaningless (as observed by [LBL+19]). Therefore, we add

inductive biases by utilizing the style label y as supervised information for style em-

bedding s. Noting that s → x → y is a Markov Chain, we have I(s;x) ≥ I(s; y)

based on the MI data-processing inequality [CT12]. Then we convert the mini-

mization of I(s; c) − I(x; c) − I(x; s) into the minimization of the upper bound

I(s; c)− I(x; c)− I(y; s), which further leads to our objective LDis.


However, minimizing the exact value of mutual information in the objective LDis

causes numerical instabilities, especially when the dimension of the latent embeddings

is large [CDH+16]. Therefore, we provide MI estimates for the objective terms I(s;c), I(x;c), and I(s;y) in the following two sections.

4.3.2 MI Variational Lower Bound

To maximize I(x; c) and I(s; y), we derive two variational lower bounds. For I(x; c),

we introduce a variational decoder qφ(x|c) to reconstruct the sentence x by the

content embedding c. Leveraging the MI variational lower bound from Eq. (4.2), we

have I(x; c) ≥ H(x) + Ep(x;c)[log qφ(x|c)]. Similarly, for I(s; y), another variational

lower bound can be obtained as: I(s; y) ≥ H(y) + Ep(y,s)[log qψ(y|s)], where qψ(y|s)

is a classifier mapping the style embedding s to its corresponding style label y. Based

on these two lower bounds, LDis has an upper bound:

LDis ≤ I(s;c) − [H(x) + Ep(x,c)[log qφ(x|c)]] − [H(y) + Ep(y,s)[log qψ(y|s)]].   (4.6)

Noting that both H(x) and H(y) are constants from the data, we only need to

minimize:

LDis = I(s; c)− Ep(x,c)[log qφ(x|c)]− Ep(y,s)[log qψ(y|s)]. (4.7)

As an intuitive explanation of LDis, the style embedding s and content embedding

c are expected to be independent by minimizing mutual information I(s; c), while

they also need to be representative: the style embedding s is encouraged to give

a better prediction of style label y by maximizing Ep(y,s)[log qψ(y|s)]; the content

embedding should maximize the log-likelihood Ep(x,c)[log qφ(x|c)] to contain sufficient

information from sentence x.


4.3.3 MI Sample-based Upper Bound

To estimate I(s; c), we propose a novel sample-based upper bound. Assume we

have M latent embedding pairs {(sj, cj)}_{j=1}^M drawn from p(s,c). As shown in The-

orem 4.3.1, we derive an upper bound of mutual information based on the samples.

A detailed proof is provided in the Supplementary Material.

Theorem 4.3.1. If {(sj, cj)}_{j=1}^M ∼ p(s,c), then

I(s;c) ≤ E[(1/M) Σ_{j=1}^M Rj] =: Î(s;c),   (4.8)

where Rj = log p(sj|cj) − (1/M) Σ_{k=1}^M log p(sj|ck).

Based on Theorem 4.3.1, given embedding samples {(sj, cj)}_{j=1}^M, we can minimize (1/M) Σ_{j=1}^M Rj as an unbiased estimate of the upper bound Î(s;c). The calculation

of Rj requires the conditional distribution p(s|c), whose closed form is unknown.

Therefore, we use a variational network pσ(s|c) to approximate p(s|c) with embed-

ding samples.

To implement the upper bound in Eq. (4.8), we first feed M sentences {xj} into en-

coder qθ(s, c|x) to obtain embedding pairs {(sj, cj)}. Then, we train the variational

distribution pσ(c|x) by maximizing the log-likelihood L(σ) = 1M

∑Mj=1 log pσ(sj|cj).

After the training of pσ(s|c) is finished, we calculate Rj for each embedding pair

(sj, cj). Finally, the gradient for 1M

∑Mj=1Rj is calculated and back-propagated to

encoder qθ(s, c|x). We apply the re-parameterization trick [KW13] to ensure the

gradient back-propagates through the sampled embeddings (sj, cj). When the en-

coder weights are updated, the distribution qθ(s, c|x) changes, which leads to the

changing of conditional distribution p(s|c). Therefore, we need to update the ap-

proximation network pσ(s|c) again. Consequently, the encoder network qθ(s, c|x)

and the approximation network pσ(s|c) are updated alternately during training.


Algorithm 2 Disentangling s and c
  Data {xj}_{j=1}^M, encoder qθ(s,c|x), approximation network pσ(s|c).
  for each training iteration do
    Sample {(sj, cj)}_{j=1}^M from qθ(s,c|x)
    L(σ) = (1/M) Σ_{j=1}^M log pσ(sj|cj)
    Update pσ(s|c) by maximizing L(σ)
    for j = 1 to M do
      Sample k′ uniformly from {1, 2, ..., M}
      R̂j = log pσ(sj|cj) − log pσ(sj|ck′)
    end for
    Update qθ(s,c|x) by minimizing (1/M) Σ_{j=1}^M R̂j
  end for

In each training step, the above algorithm requires M pairs of embedding samples

{(sj, cj)}_{j=1}^M and the calculation of all conditional distributions pσ(sj|ck). This leads

to O(M²) computational complexity. To accelerate the training, we further approximate the term (1/M) Σ_{k=1}^M log p(sj|ck) in Rj by log p(sj|ck′), where k′ is selected uniformly

from indices {1, 2, ..., M}. This stochastic sampling not only leads to an unbiased estimate R̂j of Rj, but also improves the model robustness (as shown in Algorithm 2).

Symmetrically, we can also derive an MI upper bound based on the conditional

distribution p(c|s). However, the dimension of c is much higher than the dimension

of s, which indicates that the neural approximation to p(c|s) would have worse

performance compared with the approximation to p(s|c). Alternatively, the lower-

dimensional distribution p(s|c) used in our model is relatively easy to approximate

with neural networks.

4.3.4 Encoder-Decoder Framework

One important downstream task for disentangled representation learning (DRL) is

conditional generation. Our MI-based text DRL method can also be embedded into

an Encoder-Decoder generative model and trained end-to-end.


Figure 4.2: The framework of IDEL.

Since the proposed DRL encoder qθ(s, c|x) is a stochastic neural network, a nat-

ural extension is to add a decoder to build a variational autoencoder (VAE) [KW13].

Therefore, we introduce another decoder network pγ(x|s, c) that generates a new sen-

tence based on the given style s and content c. A prior distribution p(s, c) = p(s)p(c),

as the product of two multivariate unit-variance Gaussians, is used to regularize the

posterior distribution qθ(s, c|x) by KL-divergence minimization. Meanwhile, the log-

likelihood term for text reconstruction should be maximized. The objective for VAE

is:

LVAE = KL(qθ(s,c|x)‖p(s,c)) − E_{qθ(s,c|x)}[log pγ(x|s,c)].

We combine the VAE objective and our MI-based disentanglement term to form an

end-to-end learning framework (as shown in Figure 4.2). The total loss function is

Ltotal = βL∗Dis + LVAE,

where L∗Dis replaces I(s;c) in LDis (Eq. (4.7)) with our MI upper bound Î(s;c) from Eq. (4.8); β > 0 is a hyper-parameter re-weighting the DRL and VAE terms. We call


this final framework Information-theoretic Disentangled text Embedding Learning

(IDEL).

In Figure 4.2, each sentence x is encoded into style embedding s and content

embedding c. The style embedding s goes through a classifier qψ(y|s) to predict the

style label y; the content embedding c is used to reconstruct x. An auxiliary network

pσ(s|c) helps disentangle the style and content embeddings. The decoder pγ(x|s, c)

generates sentences based on the combination of s and c.
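The pieces above can be assembled into a single training loss. The sketch below is one possible reading of Ltotal, under assumed interfaces: the encoder returns reparameterized samples (s, c) together with the KL term of LVAE, the two decoders expose negative log-likelihoods, and p_sc.log_prob(c, s) evaluates log pσ(s|c):

    import torch
    import torch.nn.functional as F

    def idel_total_loss(x, y, encoder, dec_full, dec_content, clf, p_sc, beta=1.0):
        s, c, kl = encoder(x)                    # samples and KL(q_theta(s,c|x) || p(s,c))
        loss_vae = kl + dec_full.nll(x, s, c)    # L_VAE: KL term plus -log p_gamma(x|s,c)
        loss_rec = dec_content.nll(x, c)         # surrogate for -E[log q_phi(x|c)]
        loss_sty = F.cross_entropy(clf(s), y)    # surrogate for -E[log q_psi(y|s)]
        # sampled upper bound on I(s; c), as in Theorem 4.3.1 / Algorithm 2
        idx = torch.randperm(s.size(0))
        mi_upper = (p_sc.log_prob(c, s) - p_sc.log_prob(c[idx], s)).mean()
        return loss_vae + beta * (mi_upper + loss_rec + loss_sty)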

4.4 Related Work

4.4.1 Disentangled Representation Learning

Disentangled representation learning (DRL) can be classified into two categories: un-

supervised disentangling and supervised disentangling. Unsupervised disentangling

methods focus on adding constraints on the embedding space to enforce that each

dimension of the space be as independent as possible [BHP+18, CLGD18]. How-

ever, [LBL+19] challenge the effectiveness of unsupervised disentangling without any

induced bias from data or supervision. For supervised disentangling, supervision is

always provided on different parts of disentangled representations. However, for text

representation learning, supervised information can typically be provided only for the

style embeddings (e.g. sentiment or personality labels), making the task much more

challenging. [JMBV19a] tried to alleviate this issue by manually removing sentiment-

related words from a sentence. In contrast, our model is trained in an end-to-end

manner without manually adding any supervision on the content embeddings.

4.4.2 Mutual Information Estimation

Mutual information (MI) is a fundamental measurement of the dependence between

two random variables. MI has been applied to a wide range of tasks in machine learn-


ing, including generative modeling [CDH+16], the information bottleneck [TPB00],

and domain adaptation [GSR+20]. In our proposed method, we utilize MI to measure

the dependence between content and style embeddings. By minimizing the MI, the

learned content and style representations are explicitly disentangled.

However, the exact value of MI is hard to calculate, especially for high-dimensional

embedding vectors [POVDO+19]. To approximate MI, most previous work focuses on

lower-bound estimations [CDH+16, BBR+18, POVDO+19], which are not applicable

to MI minimization tasks. [POVDO+19] propose a leave-one-out upper bound of

MI; however, it is not numerically stable in practice. Inspired by these observations,

we introduce a novel MI upper bound for disentangled representation learning, which

stably minimizes the correlation between content and style embeddings in a principled

manner.

4.5 Experiments

4.5.1 Datasets

We conduct experiments to evaluate our models on the following real-world datasets:

Yelp Reviews: The Yelp dataset contains online service reviews with associated

rating scores. We follow the pre-processing from [SLBJ17] for a fair comparison. The

resulting dataset includes 250,000 positive review sentences and 350,000 negative

review sentences.

Personality Captioning: Personality Captioning dataset [SHH+19] collects cap-

tions of images which are written according to 215 different personality traits. These

traits can be divided into three categories: positive, neutral, and negative. We select

sentences from positive and negative classes for evaluation.


Figure 4.3: Latent spaces t-SNE plots of IDEL on Yelp.

4.5.2 Experimental Setup

We build the sentence encoder qθ(s, c|x) with a one-layer bi-directional LSTM plus

a multi-head attention mechanism. The style classifier qψ(y|s) is parameterized by

a single fully-connected network with the softmax activation. The content-based

decoder qφ(x|c) is a one-layer uni-directional LSTM appended with a linear layer with vocabulary-size output, predicting the probability of the next word.

The conditional distribution approximation pσ(s|c) is represented by a two-layer

fully-connected network with ReLU activation. The generator pγ(x|s, c) is built by a

two-layer uni-directional LSTM plus a linear projection with output dimension equal

to the vocabulary size, providing the next-word prediction based on previous sentence

information and the current word.

We initialize and fix our word embeddings with the 300-dimensional pre-trained

GloVe vectors [PSM14]. The style embedding dimension is set to 32 and the content

embedding dimension is 512. We use a standard multivariate normal distribution as

the prior of the latent spaces. We train the model with the Adam optimizer [KB14]

with an initial learning rate of 5 × 10^−5. The batch size is equal to 128.
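A sketch of the encoder just described is given below. The GloVe dimension (300), style dimension (32), and content dimension (512) follow the text; the LSTM hidden size, the number of attention heads, and mean-pooling over time are illustrative assumptions:

    import torch
    import torch.nn as nn

    class SentenceEncoder(nn.Module):
        # q_theta(s, c | x): BiLSTM + multi-head attention, then Gaussian heads
        def __init__(self, emb_dim=300, hidden=256, s_dim=32, c_dim=512, heads=4):
            super().__init__()
            self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
            self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
            self.style_head = nn.Linear(2 * hidden, 2 * s_dim)     # mean and log-variance
            self.content_head = nn.Linear(2 * hidden, 2 * c_dim)

        def forward(self, emb):                  # emb: [batch, seq_len, 300] GloVe vectors
            h, _ = self.lstm(emb)
            h, _ = self.attn(h, h, h)
            pooled = h.mean(dim=1)               # mean-pooling over time (an assumption)
            mu_s, logvar_s = self.style_head(pooled).chunk(2, dim=-1)
            mu_c, logvar_c = self.content_head(pooled).chunk(2, dim=-1)
            s = mu_s + logvar_s.mul(0.5).exp() * torch.randn_like(mu_s)
            c = mu_c + logvar_c.mul(0.5).exp() * torch.randn_like(mu_c)
            return s, c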


Figure 4.4: t-SNE plots of IDEL− without I(s; c).

4.5.3 Embedding Disentanglement Quality

We first examine the disentangling quality of learned latent embeddings, primarily

studying the latent spaces of IDEL on the Yelp dataset.

Latent Space Visualization: We randomly select 1,000 sentences from the Yelp

testing set and visualize their latent embeddings in Figure 4.3, via t-SNE plots

[vdMH08]. The blue and red points respectively represent the positive and nega-

tive sentences. The left side of the figure shows the style embedding space, which is

well separated into two parts with different colors. It supports the claim that our

model learns a semantically meaningful style embedding space. The right side of the

figure is the content embedding space, which cannot be distinguished by the style

labels (different colors). The lack of label-related pattern in the content embeddings

also provides evidence that our content embeddings have little correlation with the

style labels.

For an ablation study, we train another IDEL model under the same setup, while

removing our MI upper bound Î(s;c). We call this model IDEL− in the following

experiments. We encode the same sentences used in Figure 4.3, and display the corre-


Table 4.1: Performance comparison of text DRL models.

               Yelp Dataset                                 Personality-Captioning Dataset
               Cond. Gen.         Style Transfer            Cond. Gen.         Style Transfer
  Method       ACC   BLEU  GM     ACC   BLEU  S-BLEU  GM    ACC   BLEU  GM     ACC   BLEU  S-BLEU  GM
  CtrlGen      82.5  20.8  41.4   83.4  19.4  31.4    37.0  73.6  18.9  37.0   73.3  18.9  30.0    34.6
  CAAE         78.9  19.7  39.4   79.3  18.5  28.2    34.6  72.2  19.5  37.5   72.1  18.3  27.4    33.1
  ARAE         78.3  23.1  42.4   78.5  21.3  32.5    37.9  72.8  22.5  40.4   71.5  20.4  31.6    35.8
  BT           81.4  20.2  40.5   86.3  24.1  35.6    41.9  74.1  21.0  39.4   75.9  23.1  34.2    39.1
  DRLST        83.7  22.8  43.7   85.0  23.9  34.9    41.4  74.9  22.0  40.5   75.7  21.9  33.8    38.3
  IDEL−        78.1  20.3  39.8   79.1  20.1  27.5    35.1  72.0  19.7  37.7   72.4  19.7  27.1    33.8
  IDEL         83.9  23.0  43.9   85.7  24.3  35.2    41.9  75.1  22.3  40.9   75.6  23.3  34.6    39.4

sponding embeddings in Figure 4.4. Compared with results from the original IDEL,

the style embedding space (left in Figure 4.4) is not separated in a clean manner. On

the other hand, the positive and negative embeddings become distinguishable in the

content embedding space. The difference between Figures 4.3 and 4.4 indicates the

disentangling effectiveness of our MI upper bound Î(s;c).

Label-Embedding Correlation: Besides visualization, we also numerically analyze

the correlation between latent embeddings and style labels. Inspired by the statis-

tical two-sample test [GBR+12], we use the sample-based divergence between the

positive embedding distribution p(c|y = 1) and the negative embedding distribution

p(c|y = 0) as a measurement of label-embedding correlation. We consider four diver-

gences: Mean Absolute Deviation (MAD) [Gea35], Energy Distance (ED) [SSG+13],

Maximum Mean Discrepancy (MMD) [GBR+12], and Wasserstein distance (WD)

[RTC17]. For a fair comparison, we re-implement previous text embedding methods

and set their content embedding dimension to 512 and the style embedding dimen-

sion to 32 (if applicable). Details about the divergences and embedding processing

are shown in the Supplementary Material.
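As one example of these divergences, a squared-MMD estimate with an RBF kernel between the two embedding samples can be computed as below; the bandwidth and the biased (V-statistic) form are simplifying assumptions:

    import torch

    def mmd_rbf(x, y, bandwidth=1.0):
        # biased (V-statistic) squared MMD with an RBF kernel
        def kernel(a, b):
            d2 = torch.cdist(a, b).pow(2)
            return torch.exp(-d2 / (2 * bandwidth ** 2))
        return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

    # usage (hypothetical names): pos_c, neg_c are [n, 512] content-embedding tensors
    # gap = mmd_rbf(pos_c, neg_c)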

From Table 4.2, the proposed IDEL achieves the lowest divergences between pos-

itive and negative content embeddings compared with CtrlGen [HYL+17b], CAAE

[SLBJ17], ARAE [ZKZ+18], BackTranslation (BT) [LSS+19], and DRLST [JMBV19a],

indicating our model better disentangles the content embeddings from the style la-


Table 4.2: Sample divergences between positive and negative content embeddings.

  Method    MAD     ED      WD      MMD
  CtrlGen   0.261   0.105   0.311   0.063
  CAAE      0.285   0.112   0.306   0.078
  ARAE      0.194   0.050   0.248   0.042
  BT        0.211   0.053   0.269   0.049
  DRLST     0.181   0.048   0.215   0.031
  IDEL−     0.217   0.077   0.293   0.051
  IDEL      0.063   0.015   0.084   0.010

Table 4.3: Sample divergences between positive and negative style embeddings.

  Method    MAD     ED      WD      MMD
  DRLST     1.024   0.503   1.375   0.286
  IDEL−     0.996   0.489   1.124   0.251
  IDEL      1.167   0.583   1.392   0.302

bels. For style embeddings, we compare IDEL with DRLST, the only prior method

that infers the text style embeddings. Table 4.3 shows a larger distribution gap be-

tween positive and negative style embeddings with IDEL than with DRLST, which

demonstrates that the proposed IDEL expresses style information better in the style

embedding space. The comparison between IDEL and IDEL− supports the effective-

ness of our MI upper bound minimization.

4.5.4 Embedding Representation Quality

To show the representation ability of IDEL, we conduct experiments on two text-

generation tasks: style transfer and conditional generation.

For style transfer, we encode two sentences into a disentangled representation,

and then combine the style embedding from one sentence and the content embedding

from another to generate a new sentence via the generator pγ(x|s, c). For conditional


generation, we set one of the style or content embeddings to be fixed and sample

the other part from the latent prior distribution, and then use the combination to

generate text. Since most previous work only embedded the content information, for

fair comparison, we mainly focus on fixing the style embedding and sampling content embeddings

under the conditional generation setup.

To measure generation quality for both tasks, we use the following metrics (a more detailed description is provided in the Supplementary Material).

Style Preservation: Following previous work [HYL+17b, SLBJ17, JMBV19a], we

pre-train a style classifier and use it to test whether a generated sentence can be

categorized into the correct target style class.

Content Preservation: For style transfer, we measure whether a generation pre-

serves the content information from the original sentence by the self-BLEU score

[ZYS+19, ZCG+20]. The self-BLEU is calculated between one original sentence and

its style-transferred sentence.

Generation Quality: To measure the generation quality, we calculate the corpus-

level BLEU score [PRWZ02] between a generated sentence and the testing data cor-

pus.

Geometric Mean: We use the geometric mean (GM) [JMBV19a] of the above

metrics to obtain an overall evaluation of the representativeness of DRL models.

We compare our IDEL with previous state-of-the-art methods on Yelp and Per-

sonality Captioning datasets, as shown in Table 4.1. The references to the other

models are mentioned in Section 4.5.3. Note that the original BackTranslation (BT)

method [LSS+19] is an auto-encoder framework, which cannot perform conditional

generation. To compare with BT fairly, we add a standard Gaussian prior in its

latent space to make it a variational auto-encoder model.

From the results in Table 4.1, ARAE performs well on the conditional generation.


Table 4.4: Examples of text style transfer on the Yelp dataset. The style-related words are bold.

  Content Source                       Style Source                          Transferred Result
  I enjoy it thoroughly!               never before had a bad experience     I dislike it thoroughly.
                                       at the habit until tonight.
  quality is just so so.                                                     quality is so bad.
  I am so grateful.                                                          I am so disgusted.

  never before had a bad experience    I am so grateful.                     never had a service that was
  at the habit until tonight.                                                enjoyable experience tonight.
                                       quality is just so so.                never had a unimpressed
                                                                             experience until tonight.
                                       quality of food is fantastic.         never had awesome routine
                                                                             until tonight.

  I am so disappointed with palm       we were both so impressed.            I am so impressed with palm
  today.                                                                     again.
                                       quality of food is fantastic.         I am good with palm today.
                                       never before had a bad experience     I am so disgusted with palm
                                       at the habit until tonight.           today.

Compared to ARAE, our model performance is slightly lower on content preservation

(BLEU). In contrast, the style classification score of IDEL has a large margin above

that of ARAE. BackTranslation (BT) performs better on style transfer

tasks, especially on the Yelp dataset. Our IDEL has a lower style classification

accuracy (ACC) than BT on the style transfer task. However, IDEL achieves high

BLEU on style transfer, which leads to a high overall GM score on the Personality-

Captioning dataset. On the Yelp dataset, IDEL also has a competitive GM score

compared with BT. The experiments show a clear trade-off between style preservation

and content preservation, in which our IDEL learns more representative disentangled

representations and achieves a better balance.

Besides the automatic evaluation metrics mentioned above, we further assess the effectiveness of our disentangled representations by human evaluation. Due to the limitation

of manual effort, we only evaluate style transfer performance on the Yelp dataset.


Table 4.5: Manual evaluation for style transfer on Yelp.

  Method    SA            CP     SF     GM
  CtrlGen   71.2 (3.56)   3.25   3.12   3.30
  CAAE      63.1 (3.16)   2.83   3.06   3.01
  ARAE      68.0 (3.40)   3.44   3.09   3.31
  IDEL      73.7 (3.69)   3.39   3.21   3.42

Table 4.6: Ablation tests for style transfer on Yelp.

  Method            ACC    BLEU   S-BLEU   GM
  LVAE              52.1   24.7   20.8     29.9
  LVAE + I(s;y)     86.1   23.3   16.4     32.0
  LVAE + I(x;c)     50.2   24.0   36.3     34.7
  IDEL−             79.1   20.1   27.5     35.1
  IDEL∗             85.5   24.0   35.0     41.5
  IDEL              85.7   24.3   35.2     41.9

The generated sentences are manually evaluated on style accuracy (SA), content

preservation (CP), and sentence fluency (SF). The CP and SF scores are between

0 and 5. Details are provided in the Supplementary Material. Our method achieves better style and content preservation, at a small cost in sentence fluency.

Table 4.4 shows three style transfer examples from IDEL on the Yelp dataset. The

first example shows three sentences transferred with the style from a given sentence.

The other two examples transfer each given sentence based on the styles of three

different sentences. Our IDEL not only transfers sentences into target sentiment

classes, but also renders the sentence with more detailed style information (e.g., the

degree of the sentiment).

In addition, we conduct an ablation study to test the influence of different objec-

tive terms in our model. We re-train the model with different training loss combina-

tions while keeping all other setups the same. In Table 4.6, IDEL surpasses IDEL−


(without MI upper bound minimization) with a large gap, demonstrating the effec-

tiveness of our proposed MI upper bound. The vanilla VAE has the best generation

quality. However, its transfer style accuracy is slightly better than a random guess.

When adding I(s; y), the ACC score significantly improves, but the content preser-

vation (S-BLEU) becomes worse. When adding I(c;x), the content information is

well preserved, while the ACC even decreases. By gradually adding MI terms, the

model performance becomes more balanced on all the metrics, with the overall GM

monotonically increasing. Additionally, we compare the stochastic calculation of R̂j in Algorithm 2 (IDEL) with the closed form from Theorem 4.3.1 (IDEL∗).

The stochastic IDEL not only accelerates the training but also gains a performance

improvement relative to IDEL∗.

4.6 Conclusions

We have proposed a novel information-theoretic disentangled text representation

learning framework. Following the theoretical guidance from information theory,

our method separates the textual information into independent spaces, constituting

style and content representations. A sample-based mutual information upper bound

is derived to help reduce the dependence between embedding spaces. Concurrently,

the original text information is well preserved by maximizing the mutual information

between input sentences and latent representations. In experiments, we introduce

several two-sample test statistics to measure label-embedding correlation. The pro-

posed model achieves competitive performance compared with previous methods on

both conditional generation and style transfer. For future work, our model can be

extended to disentangled representation learning with non-categorical style labels,

and applied to zero-shot style transfer with newly-coming unseen styles.


Chapter 5

Improving Representation

Disentanglement for Voice Data

5.1 Introduction

Style transfer, which automatically converts a data instance into a target style, while

preserving its content information, has attracted considerable attention in various

machine learning domains, including computer vision [GEB16, LPSB17, HB17], video

processing [HWL+17, CLY+17], and natural language processing [SLBJ17, YHD+18,

LSS+19, CMS+20]. In speech processing, style transfer was earlier recognized as

voice conversion (VC) [MBE10], which converts one speaker's utterance so that it sounds as if it were spoken by another speaker, with the same semantic meaning. Voice style transfer

(VST) has received long-term research interest, due to its potential for applications in

security [SZS+18], medicine [NTSS06], entertainment [VB10] and education [MK17],

among others.

Although widely investigated, VST remains challenging when applied to more

general application scenarios. Most of the traditional VST methods require parallel

training data, i.e., paired voices from two speakers uttering the same sentence. This

constraint limits the application of such models in the real world, where data are of-

ten not pair-wise available. Among the few existing models that address non-parallel

data [HHW+16, LW06, GRC11], most methods cannot handle many-to-many trans-

fer [SINT18, KK18, KKTH18], which prevents them from converting multiple source

voices to multiple target speaker styles. Even among the few non-parallel many-to-


many transfer models, to the best of our knowledge, only two models [QZC+19, CL19]

allow zero-shot transfer, i.e., conversion from/to newly-coming speakers (unseen dur-

ing training) without re-training the model.

The only two zero-shot VST models (AUTOVC [QZC+19] and AdaIN-VC [CL19])

share a common weakness. Both methods construct encoder-decoder frameworks,

which extract the style and the content information into style and content embed-

dings, and generate a voice sample by combining a style embedding and a content em-

bedding through the decoder. With the combination of the source content embedding

and the target style embedding, the models generate the transferred voice, based only

on source and target voice samples. AUTOVC [QZC+19] uses a GE2E [WWPM18]

pre-trained style encoder to ensure rich speaker-related information in style embed-

dings. However, AUTOVC has no regularizer to guarantee that the content encoder

does not encode any style information. AdaIN-VC [CL19] applies instance normaliza-

tion [UVL16] to the feature map of content representations, which helps to eliminate

the style information from content embeddings. However, AdaIN-VC fails to prevent

content information from being revealed in the style embeddings. Neither method can ensure that the style and content embeddings are disentangled, with no information leaked from one to the other.

With information-theoretic guidance, we propose a disentangled-representation-

learning method to enhance the encoder-decoder zero-shot VST framework, for both

style and content information preservation. We call the proposed method Information-

theoretic Disentangled Embedding for Voice Conversion (IDE-VC). Our model suc-

cessfully induces the style and content of voices into independent representation

spaces by minimizing the mutual information between style and content embeddings.

We also derive two new multi-group mutual information lower bounds, to further

improve the representativeness of the latent embeddings. Experiments demonstrate


that our method outperforms previous works under both many-to-many and zero-shot

transfer setups on two objective metrics and two subjective metrics.

5.2 Background

In information theory, mutual information (MI) is a crucial concept that measures

the dependence between two random variables. Mathematically, the MI between two

variables x and y is

I(x;y) := Ep(x,y)[log p(x,y)/(p(x)p(y))],   (5.1)

where p(x) and p(y) are marginal distributions of x and y, and p(x,y) is the joint

distribution. Recently, MI has attracted considerable interest in machine learning

as a criterion to minimize or maximize the dependence between different parts of a

model [CDH+16, AFDM17, HFLM+18, VFH+18, SKG+19]. However, the calculation

of exact MI values is challenging in practice, since the closed form of joint distribution

p(x,y) in equation (5.1) is generally unknown. To solve this problem, several MI

estimators have been proposed. For MI maximization tasks, Nguyen, Wainwright

and Jordan (NWJ) [NWJ10] propose a lower bound by representing (5.1) as an f-divergence [MH14]:

I_{\mathrm{NWJ}} := \mathbb{E}_{p(x,y)}[f(x, y)] - e^{-1}\,\mathbb{E}_{p(x)p(y)}[e^{f(x,y)}], \quad (5.2)

with a score function f(x,y). Another widely-used sample-based MI lower bound is

InfoNCE [OLV18], which is derived with Noise Contrastive Estimation (NCE) [GH10].

With sample pairs \{(x_i, y_i)\}_{i=1}^{N} drawn from the joint distribution p(x, y), the InfoNCE lower bound is defined as

I_{\mathrm{NCE}} := \mathbb{E}\bigg[\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{f(x_i, y_i)}}{\frac{1}{N}\sum_{j=1}^{N} e^{f(x_i, y_j)}}\bigg]. \quad (5.3)
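To make these two lower bounds concrete, the following sketch (in PyTorch, used here for illustration only) estimates (5.2) and (5.3) from a batch of paired samples. It assumes a hypothetical critic matrix scores[i, j] = f(x_i, y_j): diagonal entries act as positive pairs from p(x, y), off-diagonal entries as negative pairs from p(x)p(y).

import math
import torch

def nwj_lower_bound(scores):
    # scores[i, j] = f(x_i, y_j) over a batch of N paired samples
    n = scores.size(0)
    joint = scores.diag().mean()                      # E_{p(x,y)}[f(x, y)]
    mask = ~torch.eye(n, dtype=torch.bool)            # off-diagonal = shuffled pairs
    marginal = torch.exp(scores[mask] - 1.0).mean()   # e^{-1} E_{p(x)p(y)}[e^{f(x,y)}]
    return joint - marginal                           # the NWJ bound (5.2)

def infonce_lower_bound(scores):
    # f(x_i, y_i) - log((1/N) sum_j e^{f(x_i, y_j)}), averaged over i, as in (5.3)
    n = scores.size(0)
    return (scores.diag() - torch.logsumexp(scores, dim=1)).mean() + math.log(n)

Both functions return differentiable scalars, so maximizing them w.r.t. the critic (and any upstream encoder) tightens the corresponding bound.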


For MI minimization tasks, [CHD+20] proposed a contrastively learned upper bound

that requires the conditional distribution p(x|y):

I(x; y) \le \mathbb{E}\bigg[\frac{1}{N}\sum_{i=1}^{N}\Big[\log p(x_i|y_i) - \frac{1}{N}\sum_{j=1}^{N}\log p(x_j|y_i)\Big]\bigg], \quad (5.4)

where the MI is bounded by the log-ratio of the conditional distribution p(x|y) between

positive and negative sample pairs. In the following, we derive our information-

theoretic disentangled representation learning framework for voice style transfer based

on the MI estimators described above.
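The upper bound (5.4) is equally compact in code. The sketch below assumes, purely for illustration, that p(x|y) is modeled as a unit-variance Gaussian whose mean mu_y comes from some network; the Gaussian normalizing constants cancel between the positive and negative terms.

import torch

def mi_upper_bound(x, mu_y):
    # x[i]: sample x_i, shape [N, D]; mu_y[i]: mean of p(.|y_i), shape [N, D]
    # log_cond[i, j] = log p(x_j | y_i) up to an additive constant
    log_cond = -((x.unsqueeze(0) - mu_y.unsqueeze(1)) ** 2).sum(dim=2)
    positive = log_cond.diag()        # log p(x_i | y_i)
    negative = log_cond.mean(dim=1)   # (1/N) sum_j log p(x_j | y_i)
    return (positive - negative).mean()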

5.3 Proposed Model

We assume access to N audio (voice) recordings from M speakers, where speaker u

has N_u voice samples X_u = \{x_{ui}\}_{i=1}^{N_u}. The proposed approach encodes each voice
input x \in X = \cup_{u=1}^{M} X_u into a speaker-related (style) embedding s = Es(x) and a

content-related embedding c = Ec(x), using respectively a style encoder Es(·) and a

content encoder Ec(·). To transfer a source xui from speaker u to the target style of

the voice of speaker v, xvj, we combine the content embedding cui = Ec(xui) and the

style embedding svj = Es(xvj) to generate the transferred voice xu→v,i = D(svj, cui)

with a decoder D(s, c). To implement this two-step transfer process, we introduce

a novel mutual information (MI)-based learning objective that induces the style

embedding s and content embedding c into independent representation spaces (i.e.,

ideally, s contains rich style information of x with no content information, and vice

versa). In the following, we first describe our MI-based training objective in Section

5.3.1, and then discuss the practical estimation of the objective in Sections 5.3.2 and

5.3.3.
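The two-step transfer process above is simply a composition of the three modules. In the toy sketch below, linear layers stand in for E_s, E_c, and D, and all dimensions are hypothetical; it only makes the data flow explicit.

import torch
import torch.nn as nn

feat_dim, style_dim, content_dim = 80, 64, 64     # assumed feature sizes
E_s = nn.Linear(feat_dim, style_dim)              # style encoder E_s(.)
E_c = nn.Linear(feat_dim, content_dim)            # content encoder E_c(.)
D = nn.Linear(style_dim + content_dim, feat_dim)  # decoder D(s, c)

x_ui = torch.randn(1, feat_dim)   # voice sample of source speaker u
x_vj = torch.randn(1, feat_dim)   # voice sample of target speaker v

c_ui = E_c(x_ui)                  # content embedding of the source voice
s_vj = E_s(x_vj)                  # style embedding of the target speaker
x_u2v = D(torch.cat([s_vj, c_ui], dim=-1))   # transferred voice x_{u->v,i}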


5.3.1 MI-based Disentangling Objective

From an information-theoretic perspective, to learn representative latent embedding

(s, c), it is desirable to maximize the mutual information between the embedding

pair (s, c) and the input x. Meanwhile, the style embedding s and the content embedding c are

desired to be independent, so that we can control the style transfer process with dif-

ferent style and content attributes. Therefore, we minimize the mutual information

I(s; c) to disentangle the style embedding and content embedding spaces. Conse-

quently, our overall disentangled-representation-learning objective seeks to minimize

L = I(s; c) - I(x; s, c) = I(s; c) - I(x; c|s) - I(x; s). \quad (5.5)
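The second equality in (5.5) follows from the MI chain rule; in the notation of (5.1),

I(x; s, c) = \mathbb{E}\Big[\log \frac{p(x, s, c)}{p(x)\,p(s, c)}\Big]
           = \mathbb{E}\Big[\log \frac{p(x, s)}{p(x)\,p(s)}\Big] + \mathbb{E}\Big[\log \frac{p(x, c|s)}{p(x|s)\,p(c|s)}\Big]
           = I(x; s) + I(x; c|s).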

As discussed in Locatello et al. [LBL+19], without inductive bias for supervision,

the learned representation can be meaningless. To address this problem, we use the

speaker identity u as a variable with values {1, . . . , M} to learn a representative style embedding s for speaker-related attributes. Noting that the process from speaker u to his/her voice x_{ui} and then to the style embedding s_{ui} (as u → x → s) forms a Markov chain,

we conclude I(s;x) ≥ I(s;u) based on the MI data-processing inequality [CT12]

(as stated in the Supplementary Material). Therefore, we replace I(s;x) in L with

I(s;u) and minimize an upper bound instead:

\bar{L} := I(s; c) - I(x; c|s) - I(u; s) \ge I(s; c) - I(x; c|s) - I(x; s) = L. \quad (5.6)

In practice, calculating the MI is challenging, as we typically only have access to

samples, and lack the required distributions [CDH+16]. To solve this problem, below

we provide several MI estimates to the objective terms I(s; c), I(x; c|s) and I(u; s).

5.3.2 MI Lower Bound Estimation

To maximize I(u; s), we derive the following multi-group MI lower bound (The-

orem 5.3.1) based on the NWJ bound developed in Nguyen et al. [NWJ10]. The


detailed proof is provided in the Supplementary Material. Let \mu_v^{(-ui)} = \mu_v represent the mean of all style embeddings in group X_v, constituting the style centroid of speaker v; \mu_u^{(-ui)} is the mean of all style embeddings in group X_u except data point x_{ui}, representing a leave-x_{ui}-out style centroid of speaker u. Intuitively, we minimize \|s_{ui} - \mu_u^{(-ui)}\| to encourage the style embedding of voice x_{ui} to be more similar to the style centroid of speaker u, while maximizing \|s_{ui} - \mu_v^{(-ui)}\| to enlarge the margin between s_{ui} and the other speakers' style centroids \mu_v. We denote the right-hand side of (5.7) as I_1.

Theorem 5.3.1. Let \mu_v^{(-ui)} = \frac{1}{N_v}\sum_{k=1}^{N_v} s_{vk} if u \neq v, and \mu_u^{(-ui)} = \frac{1}{N_u - 1}\sum_{j \neq i} s_{uj}. Then,

I(u; s) \ge \mathbb{E}\bigg[\frac{1}{N}\sum_{u=1}^{M}\sum_{i=1}^{N_u}\Big[-\|s_{ui} - \mu_u^{(-ui)}\|^2 - \frac{e^{-1}}{N}\sum_{v=1}^{M} N_v \exp\{-\|s_{ui} - \mu_v^{(-ui)}\|^2\}\Big]\bigg]. \quad (5.7)
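A direct transcription of the right-hand side of (5.7) is shown below. It is a sketch assuming the style embeddings are grouped per speaker into a list of [N_u, D] tensors with N_u ≥ 2; explicit loops are kept for readability rather than speed.

import math
import torch

def i1_lower_bound(groups):
    # groups[u]: style embeddings of speaker u, shape [N_u, D]
    n_total = sum(g.size(0) for g in groups)
    total = torch.zeros(())
    for u, g_u in enumerate(groups):
        n_u = g_u.size(0)
        for i in range(n_u):
            s_ui = g_u[i]
            mu_u = (g_u.sum(dim=0) - s_ui) / (n_u - 1)   # leave-x_ui-out centroid
            total = total - ((s_ui - mu_u) ** 2).sum()   # pull s_ui to its own centroid
            neg = torch.zeros(())
            for v, g_v in enumerate(groups):             # push away from all centroids
                mu_v = mu_u if v == u else g_v.mean(dim=0)
                neg = neg + g_v.size(0) * torch.exp(-((s_ui - mu_v) ** 2).sum())
            total = total - (math.exp(-1) / n_total) * neg
    return total / n_total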

To maximize I(x; c|s), we derive a conditional mutual information lower bound

below:

Theorem 5.3.2. Assume that given s = s_u, samples \{(x_{ui}, c_{ui})\}_{i=1}^{N_u} are observed. With a variational distribution q_\phi(x|s, c), we have I(x; c|s) \ge \mathbb{E}[\hat{I}], where

\hat{I} = \frac{1}{N}\sum_{u=1}^{M}\sum_{i=1}^{N_u}\Big[\log q_\phi(x_{ui}|c_{ui}, s_u) - \log\Big(\frac{1}{N_u}\sum_{j=1}^{N_u} q_\phi(x_{uj}|c_{ui}, s_u)\Big)\Big]. \quad (5.8)

Based on the criterion for s in equation (5.7), a well-learned style encoder Es

pulls all style embeddings sui from speaker u together. Suppose su is representative

of the style embeddings of set Xu. If we parameterize the distribution qφ(x|s, c) ∝

exp(−‖x − D(s, c)‖2) with decoder D(s, c), then based on Theorem 5.3.2, we can


estimate the lower bound of I(x; c|s) with the following objective:

I_2 := \frac{1}{N}\sum_{u=1}^{M}\sum_{i=1}^{N_u}\Big[-\|x_{ui} - D(c_{ui}, s_u)\|^2 - \log\Big(\frac{1}{N_u}\sum_{j=1}^{N_u}\exp\{-\|x_{uj} - D(c_{ui}, s_u)\|^2\}\Big)\Big].

When maximizing I_2, for speaker u with his/her given voice style s_u, we encourage the content embedding c_{ui} to reconstruct the original voice x_{ui} well, with a small \|x_{ui} - D(c_{ui}, s_u)\|. Additionally, the distances \|x_{uj} - D(c_{ui}, s_u)\| for j \neq i are enlarged, ensuring that c_{ui} does not carry the information needed to reconstruct the other voices x_{uj} of speaker u. With I_2, the correlation between x_{ui} and c_{ui} is amplified, which improves c_{ui} in preserving the content information.
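With the parameterization q_φ(x|s, c) ∝ exp(−‖x − D(s, c)‖²), the inner sum of I_2 for a single speaker reduces to a reconstruction term plus a log-sum-exp contrast. Below is a sketch under these assumptions: x_u is an [N_u, D_x] batch of one speaker's voices, c_u the matching content embeddings, s_u a shared style vector, and decoder any callable standing in for D.

import math
import torch

def i2_for_speaker(x_u, c_u, s_u, decoder):
    # x_u: [N_u, D_x]; c_u: [N_u, D_c]; s_u: [D_s]
    n_u = x_u.size(0)
    s_rep = s_u.unsqueeze(0).expand(n_u, -1)
    x_hat = decoder(s_rep, c_u)                    # x_hat[i] = D(c_ui, s_u)
    # sq_dist[i, j] = ||x_uj - D(c_ui, s_u)||^2
    sq_dist = ((x_u.unsqueeze(0) - x_hat.unsqueeze(1)) ** 2).sum(dim=2)
    recon = -sq_dist.diag()                        # -||x_ui - D(c_ui, s_u)||^2
    contrast = torch.logsumexp(-sq_dist, dim=1) - math.log(n_u)
    return (recon - contrast).sum()

Summing this quantity over all speakers and dividing by N gives the sample estimate of I_2.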

5.3.3 MI Upper Bound Estimation

The crucial part of our framework is disentangling the style and the content embed-

ding spaces, which (ideally) requires that the style embedding s excludes any content

information and vice versa. Therefore, the mutual information between s and c is

expected to be minimized. To estimate I(s; c), we derive a sample-based MI upper

bound in Theorem 5.3.3 based on (5.4).

Theorem 5.3.3. If p(s|c) is the conditional distribution between variables s and c, then

I(s; c) \le \mathbb{E}\bigg[\frac{1}{N}\sum_{u=1}^{M}\sum_{i=1}^{N_u}\Big[\log p(s_{ui}|c_{ui}) - \frac{1}{N}\sum_{v=1}^{M}\sum_{j=1}^{N_v}\log p(s_{ui}|c_{vj})\Big]\bigg]. \quad (5.9)

The upper bound in (5.9) requires the ground-truth conditional distribution p(s|c),

whose closed form is unknown. Therefore, we use a probabilistic neural network

qθ(s|c) to approximate p(s|c) by maximizing the log-likelihood

\mathcal{F}(\theta) = \sum_{u=1}^{M}\sum_{i=1}^{N_u} \log q_\theta(s_{ui}|c_{ui}).


With the learned qθ(s|c), the objective for minimizing I(s; c) becomes:

I_3 := \frac{1}{N}\sum_{u=1}^{M}\sum_{i=1}^{N_u}\Big[\log q_\theta(s_{ui}|c_{ui}) - \frac{1}{N}\sum_{v=1}^{M}\sum_{j=1}^{N_v}\log q_\theta(s_{ui}|c_{vj})\Big]. \quad (5.10)

When the weights of the encoders E_c, E_s are updated, the embedding spaces of s and c change, which in turn changes the conditional distribution p(s|c). Therefore, the neural approximation q_\theta(s|c) must be updated as well. Consequently, during training, the

encoders Ec, Es and the approximation qθ(s|c) are updated iteratively. In the Sup-

plementary Material, we further discuss that with a good approximation qθ(s|c), I3

remains an MI upper bound.
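The interaction between I_3 and F(θ) is easy to express in code. The sketch below assumes q_θ(s|c) is a unit-variance Gaussian N(s; μ_θ(c), I) with a small, hypothetical mean network μ_θ; the Gaussian constant cancels inside (5.10) and does not affect the maximizer of F(θ).

import torch
import torch.nn as nn

mu_theta = nn.Linear(64, 64)   # assumed content-dim -> style-dim network

def log_q(s, c):
    # log q_theta(s | c) up to an additive constant
    return -0.5 * ((s - mu_theta(c)) ** 2).sum(dim=-1)

def i3_upper_bound(s, c):
    # s, c: matched style/content embeddings over the whole batch, [N, D]
    # pairwise[i, j] = log q_theta(s_i | c_j)
    pairwise = -0.5 * ((s.unsqueeze(1) - mu_theta(c).unsqueeze(0)) ** 2).sum(dim=-1)
    return (pairwise.diag() - pairwise.mean(dim=1)).mean()   # estimate of (5.10)

def neg_f_theta(s, c):
    # minimizing this maximizes F(theta); detached embeddings ensure that
    # only theta receives gradients in this step
    return -log_q(s.detach(), c.detach()).mean()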

5.3.4 Encoder-Decoder Framework

With the aforementioned MI estimates I1, I2, and I3, the final training loss of our

method is

L^* = [I_3 - I_1 - I_2] - \beta\,\mathcal{F}(\theta), \quad (5.11)

where \beta is a positive weight balancing the two objective terms. The term I_3 - I_1 - I_2 is minimized w.r.t. the parameters of the encoders E_c, E_s and the decoder D; the term \mathcal{F}(\theta), the log-likelihood of q_\theta(s|c), is maximized w.r.t. the parameter \theta. In practice, the two terms are updated iteratively with gradient descent (fixing one while updating the other). The training and transfer processes of our model are shown in Figure 5.1.

We call this MI-guided learning framework Information-theoretic Disentangled Embedding for Voice Conversion (IDE-VC).
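Putting the pieces together, one alternating round of optimizing (5.11) looks roughly as follows. This is a self-contained toy loop on random data: the encoders, the q_θ network, and the surrogate loss (only the I_3 term of L^* is kept here for brevity, with β folded away) are hypothetical stand-ins, not our actual training configuration.

import torch
import torch.nn as nn

E_s, E_c = nn.Linear(80, 64), nn.Linear(80, 64)   # toy style/content encoders
mu_theta = nn.Linear(64, 64)                      # mean network of q_theta(s|c)
opt_enc = torch.optim.Adam(list(E_s.parameters()) + list(E_c.parameters()), lr=1e-3)
opt_q = torch.optim.Adam(mu_theta.parameters(), lr=1e-3)

x = torch.randn(16, 80)   # a dummy batch of voice features
for step in range(100):
    # step 1: fix the encoders; refresh q_theta by maximizing F(theta)
    s, c = E_s(x).detach(), E_c(x).detach()
    loss_q = 0.5 * ((s - mu_theta(c)) ** 2).sum(dim=-1).mean()   # = -F(theta) + const
    opt_q.zero_grad()
    loss_q.backward()
    opt_q.step()
    # step 2: fix q_theta; update the encoders on the disentangling term I_3
    # (the -I_1 - I_2 terms of L* are omitted in this toy loop)
    s, c = E_s(x), E_c(x)
    pairwise = -0.5 * ((s.unsqueeze(1) - mu_theta(c).unsqueeze(0)) ** 2).sum(dim=-1)
    i3 = (pairwise.diag() - pairwise.mean(dim=1)).mean()
    opt_enc.zero_grad()
    i3.backward()
    opt_enc.step()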

In Figure 5.1, (a) shows the training of the style encoder E_s with objective I_1: all voice samples are encoded into the style embedding space. For the style embedding s_{ui} of x_{ui}, we minimize its distance to speaker u's style centroid \mu_u and maximize its distance to the other speakers' style centroids \mu_v. (b) shows the training of the content encoder E_c and the decoder D with objective I_2.


[Figure 5.1: Training and transfer processes of IDE-VC, involving the style encoder E_s(·), the content encoder E_c(·), and the decoder D(·, ·) acting on voice samples and their style/content embeddings.]

D(·, ·)<latexit sha1_base64="fWaezRIYxaV9BHUAR9GXJD9kfso=">AAAB9XicbVDLSgMxFM34rPVVdekmWIQKUmZE0GVRFy4r2Ae0Y8mkmTY0kwzJHaUM/Q83LhRx67+4829Mp7PQ1gP3cjjnXnJzglhwA6777Swtr6yurRc2iptb2zu7pb39plGJpqxBlVC6HRDDBJesARwEa8eakSgQrBWMrqd+65Fpw5W8h3HM/IgMJA85JWClh5tKl/YVnGb9pFcqu1U3A14kXk7KKEe9V/rq9hVNIiaBCmJMx3Nj8FOigVPBJsVuYlhM6IgMWMdSSSJm/DS7eoKPrdLHodK2JOBM/b2RksiYcRTYyYjA0Mx7U/E/r5NAeOmnXMYJMElnD4WJwKDwNALc55pREGNLCNXc3orpkGhCwQZVtCF4819eJM2zqudWvbvzcu0qj6OADtERqiAPXaAaukV11EAUafSMXtGb8+S8OO/Ox2x0ycl3DtAfOJ8/dfCR0w==</latexit><latexit sha1_base64="fWaezRIYxaV9BHUAR9GXJD9kfso=">AAAB9XicbVDLSgMxFM34rPVVdekmWIQKUmZE0GVRFy4r2Ae0Y8mkmTY0kwzJHaUM/Q83LhRx67+4829Mp7PQ1gP3cjjnXnJzglhwA6777Swtr6yurRc2iptb2zu7pb39plGJpqxBlVC6HRDDBJesARwEa8eakSgQrBWMrqd+65Fpw5W8h3HM/IgMJA85JWClh5tKl/YVnGb9pFcqu1U3A14kXk7KKEe9V/rq9hVNIiaBCmJMx3Nj8FOigVPBJsVuYlhM6IgMWMdSSSJm/DS7eoKPrdLHodK2JOBM/b2RksiYcRTYyYjA0Mx7U/E/r5NAeOmnXMYJMElnD4WJwKDwNALc55pREGNLCNXc3orpkGhCwQZVtCF4819eJM2zqudWvbvzcu0qj6OADtERqiAPXaAaukV11EAUafSMXtGb8+S8OO/Ox2x0ycl3DtAfOJ8/dfCR0w==</latexit><latexit sha1_base64="fWaezRIYxaV9BHUAR9GXJD9kfso=">AAAB9XicbVDLSgMxFM34rPVVdekmWIQKUmZE0GVRFy4r2Ae0Y8mkmTY0kwzJHaUM/Q83LhRx67+4829Mp7PQ1gP3cjjnXnJzglhwA6777Swtr6yurRc2iptb2zu7pb39plGJpqxBlVC6HRDDBJesARwEa8eakSgQrBWMrqd+65Fpw5W8h3HM/IgMJA85JWClh5tKl/YVnGb9pFcqu1U3A14kXk7KKEe9V/rq9hVNIiaBCmJMx3Nj8FOigVPBJsVuYlhM6IgMWMdSSSJm/DS7eoKPrdLHodK2JOBM/b2RksiYcRTYyYjA0Mx7U/E/r5NAeOmnXMYJMElnD4WJwKDwNALc55pREGNLCNXc3orpkGhCwQZVtCF4819eJM2zqudWvbvzcu0qj6OADtERqiAPXaAaukV11EAUafSMXtGb8+S8OO/Ox2x0ycl3DtAfOJ8/dfCR0w==</latexit><latexit sha1_base64="hP+6LrUf2d3tZaldqaQQvEKMXyw=">AAAB2XicbZDNSgMxFIXv1L86Vq1rN8EiuCozbnQpuHFZwbZCO5RM5k4bmskMyR2hDH0BF25EfC93vo3pz0JbDwQ+zknIvSculLQUBN9ebWd3b/+gfugfNfzjk9Nmo2fz0gjsilzl5jnmFpXU2CVJCp8LgzyLFfbj6f0i77+gsTLXTzQrMMr4WMtUCk7O6oyaraAdLMW2IVxDC9YaNb+GSS7KDDUJxa0dhEFBUcUNSaFw7g9LiwUXUz7GgUPNM7RRtRxzzi6dk7A0N+5oYkv394uKZ9bOstjdzDhN7Ga2MP/LBiWlt1EldVESarH6KC0Vo5wtdmaJNChIzRxwYaSblYkJN1yQa8Z3HYSbG29D77odBu3wMYA6nMMFXEEIN3AHD9CBLghI4BXevYn35n2suqp569LO4I+8zx84xIo4</latexit><latexit sha1_base64="dbMyJ+VJKqVOpf9B1uo978XOQ+g=">AAAB6nicbZDNSgMxFIXv+Ftr1erWTbAIFaTMuNGloAuXFewPtGPJZDJtaCYzJHeUMvQ93LhQxAdy59uYTrvQ1gMJH+ck5OYEqRQGXffbWVvf2NzaLu2Udyt7+wfVw0rbJJlmvMUSmehuQA2XQvEWCpS8m2pO40DyTjC+meWdJ66NSNQDTlLux3SoRCQYRWs93tb7LEzwvNjPBtWa23ALkVXwFlCDhZqD6lc/TFgWc4VMUmN6npuin1ONgkk+Lfczw1PKxnTIexYVjbnx82LqKTm1TkiiRNulkBTu7xs5jY2ZxIE9GVMcmeVsZv6X9TKMrvxcqDRDrtj8oSiTBBMyq4CEQnOGcmKBMi3srISNqKYMbVFlW4K3/OVVaF80PLfh3btQgmM4gTp4cAnXcAdNaAEDDS/wBu/Os/PqfMzrWnMWvR3BHzmfPy5jkHM=</latexit><latexit sha1_base64="dbMyJ+VJKqVOpf9B1uo978XOQ+g=">AAAB6nicbZDNSgMxFIXv+Ftr1erWTbAIFaTMuNGloAuXFewPtGPJZDJtaCYzJHeUMvQ93LhQxAdy59uYTrvQ1gMJH+ck5OYEqRQGXffbWVvf2NzaLu2Udyt7+wfVw0rbJJlmvMUSmehuQA2XQvEWCpS8m2pO40DyTjC+meWdJ66NSNQDTlLux3SoRCQYRWs93tb7LEzwvNjPBtWa23ALkVXwFlCDhZqD6lc/TFgWc4VMUmN6npuin1ONgkk+Lfczw1PKxnTIexYVjbnx82LqKTm1TkiiRNulkBTu7xs5jY2ZxIE9GVMcmeVsZv6X9TKMrvxcqDRDrtj8oSiTBBMyq4CEQnOGcmKBMi3srISNqKYMbVFlW4K3/OVVaF80PLfh3btQgmM4gTp4cAnXcAdNaAEDDS/wBu/Os/PqfMzrWnMWvR3BHzmfPy5jkHM=</latexit><latexit 
sha1_base64="nTaFTjfAeRN7z7s08IH43Z2sGTA=">AAAB9XicbVDLSgMxFM3UV62vqks3wSJUkDLjRpdFXbisYB/QjiWTSdvQTDIkd5Qy9D/cuFDErf/izr8xnc5CWw/cy+Gce8nNCWLBDbjut1NYWV1b3yhulra2d3b3yvsHLaMSTVmTKqF0JyCGCS5ZEzgI1ok1I1EgWDsYX8/89iPThit5D5OY+REZSj7glICVHm6qPRoqOMv6ab9ccWtuBrxMvJxUUI5Gv/zVCxVNIiaBCmJM13Nj8FOigVPBpqVeYlhM6JgMWddSSSJm/DS7eopPrBLigdK2JOBM/b2RksiYSRTYyYjAyCx6M/E/r5vA4NJPuYwTYJLOHxokAoPCswhwyDWjICaWEKq5vRXTEdGEgg2qZEPwFr+8TFrnNc+teXdupX6Vx1FER+gYVZGHLlAd3aIGaiKKNHpGr+jNeXJenHfnYz5acPKdQ/QHzucPdLCRzw==</latexit><latexit sha1_base64="fWaezRIYxaV9BHUAR9GXJD9kfso=">AAAB9XicbVDLSgMxFM34rPVVdekmWIQKUmZE0GVRFy4r2Ae0Y8mkmTY0kwzJHaUM/Q83LhRx67+4829Mp7PQ1gP3cjjnXnJzglhwA6777Swtr6yurRc2iptb2zu7pb39plGJpqxBlVC6HRDDBJesARwEa8eakSgQrBWMrqd+65Fpw5W8h3HM/IgMJA85JWClh5tKl/YVnGb9pFcqu1U3A14kXk7KKEe9V/rq9hVNIiaBCmJMx3Nj8FOigVPBJsVuYlhM6IgMWMdSSSJm/DS7eoKPrdLHodK2JOBM/b2RksiYcRTYyYjA0Mx7U/E/r5NAeOmnXMYJMElnD4WJwKDwNALc55pREGNLCNXc3orpkGhCwQZVtCF4819eJM2zqudWvbvzcu0qj6OADtERqiAPXaAaukV11EAUafSMXtGb8+S8OO/Ox2x0ycl3DtAfOJ8/dfCR0w==</latexit><latexit sha1_base64="fWaezRIYxaV9BHUAR9GXJD9kfso=">AAAB9XicbVDLSgMxFM34rPVVdekmWIQKUmZE0GVRFy4r2Ae0Y8mkmTY0kwzJHaUM/Q83LhRx67+4829Mp7PQ1gP3cjjnXnJzglhwA6777Swtr6yurRc2iptb2zu7pb39plGJpqxBlVC6HRDDBJesARwEa8eakSgQrBWMrqd+65Fpw5W8h3HM/IgMJA85JWClh5tKl/YVnGb9pFcqu1U3A14kXk7KKEe9V/rq9hVNIiaBCmJMx3Nj8FOigVPBJsVuYlhM6IgMWMdSSSJm/DS7eoKPrdLHodK2JOBM/b2RksiYcRTYyYjA0Mx7U/E/r5NAeOmnXMYJMElnD4WJwKDwNALc55pREGNLCNXc3orpkGhCwQZVtCF4819eJM2zqudWvbvzcu0qj6OADtERqiAPXaAaukV11EAUafSMXtGb8+S8OO/Ox2x0ycl3DtAfOJ8/dfCR0w==</latexit><latexit sha1_base64="fWaezRIYxaV9BHUAR9GXJD9kfso=">AAAB9XicbVDLSgMxFM34rPVVdekmWIQKUmZE0GVRFy4r2Ae0Y8mkmTY0kwzJHaUM/Q83LhRx67+4829Mp7PQ1gP3cjjnXnJzglhwA6777Swtr6yurRc2iptb2zu7pb39plGJpqxBlVC6HRDDBJesARwEa8eakSgQrBWMrqd+65Fpw5W8h3HM/IgMJA85JWClh5tKl/YVnGb9pFcqu1U3A14kXk7KKEe9V/rq9hVNIiaBCmJMx3Nj8FOigVPBJsVuYlhM6IgMWMdSSSJm/DS7eoKPrdLHodK2JOBM/b2RksiYcRTYyYjA0Mx7U/E/r5NAeOmnXMYJMElnD4WJwKDwNALc55pREGNLCNXc3orpkGhCwQZVtCF4819eJM2zqudWvbvzcu0qj6OADtERqiAPXaAaukV11EAUafSMXtGb8+S8OO/Ox2x0ycl3DtAfOJ8/dfCR0w==</latexit><latexit sha1_base64="fWaezRIYxaV9BHUAR9GXJD9kfso=">AAAB9XicbVDLSgMxFM34rPVVdekmWIQKUmZE0GVRFy4r2Ae0Y8mkmTY0kwzJHaUM/Q83LhRx67+4829Mp7PQ1gP3cjjnXnJzglhwA6777Swtr6yurRc2iptb2zu7pb39plGJpqxBlVC6HRDDBJesARwEa8eakSgQrBWMrqd+65Fpw5W8h3HM/IgMJA85JWClh5tKl/YVnGb9pFcqu1U3A14kXk7KKEe9V/rq9hVNIiaBCmJMx3Nj8FOigVPBJsVuYlhM6IgMWMdSSSJm/DS7eoKPrdLHodK2JOBM/b2RksiYcRTYyYjA0Mx7U/E/r5NAeOmnXMYJMElnD4WJwKDwNALc55pREGNLCNXc3orpkGhCwQZVtCF4819eJM2zqudWvbvzcu0qj6OADtERqiAPXaAaukV11EAUafSMXtGb8+S8OO/Ox2x0ycl3DtAfOJ8/dfCR0w==</latexit><latexit sha1_base64="fWaezRIYxaV9BHUAR9GXJD9kfso=">AAAB9XicbVDLSgMxFM34rPVVdekmWIQKUmZE0GVRFy4r2Ae0Y8mkmTY0kwzJHaUM/Q83LhRx67+4829Mp7PQ1gP3cjjnXnJzglhwA6777Swtr6yurRc2iptb2zu7pb39plGJpqxBlVC6HRDDBJesARwEa8eakSgQrBWMrqd+65Fpw5W8h3HM/IgMJA85JWClh5tKl/YVnGb9pFcqu1U3A14kXk7KKEe9V/rq9hVNIiaBCmJMx3Nj8FOigVPBJsVuYlhM6IgMWMdSSSJm/DS7eoKPrdLHodK2JOBM/b2RksiYcRTYyYjA0Mx7U/E/r5NAeOmnXMYJMElnD4WJwKDwNALc55pREGNLCNXc3orpkGhCwQZVtCF4819eJM2zqudWvbvzcu0qj6OADtERqiAPXaAaukV11EAUafSMXtGb8+S8OO/Ox2x0ycl3DtAfOJ8/dfCR0w==</latexit><latexit 
sha1_base64="fWaezRIYxaV9BHUAR9GXJD9kfso=">AAAB9XicbVDLSgMxFM34rPVVdekmWIQKUmZE0GVRFy4r2Ae0Y8mkmTY0kwzJHaUM/Q83LhRx67+4829Mp7PQ1gP3cjjnXnJzglhwA6777Swtr6yurRc2iptb2zu7pb39plGJpqxBlVC6HRDDBJesARwEa8eakSgQrBWMrqd+65Fpw5W8h3HM/IgMJA85JWClh5tKl/YVnGb9pFcqu1U3A14kXk7KKEe9V/rq9hVNIiaBCmJMx3Nj8FOigVPBJsVuYlhM6IgMWMdSSSJm/DS7eoKPrdLHodK2JOBM/b2RksiYcRTYyYjA0Mx7U/E/r5NAeOmnXMYJMElnD4WJwKDwNALc55pREGNLCNXc3orpkGhCwQZVtCF4819eJM2zqudWvbvzcu0qj6OADtERqiAPXaAaukV11EAUafSMXtGb8+S8OO/Ox2x0ycl3DtAfOJ8/dfCR0w==</latexit>

xui<latexit sha1_base64="KfLXnJ3ix6MXrAVdKvvNiY4zolE=">AAACAnicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBfMDlCHubvWTJ7u2xuyeGI51/wVZ7O7H1j9j6S9xLrjCJDwYe780wMy9MONPGdb+d0tr6xuZWebuys7u3f1A9PGprmSpCW0Ryqboh1pSzmLYMM5x2E0WxCDnthOPb3O88UqWZjB/MJKGBwMOYRYxgYyW/F4rsadrPUjbtV2tu3Z0BrRKvIDUo0OxXf3oDSVJBY0M41tr33MQEGVaGEU6nlV6qaYLJGA+pb2mMBdVBNjt5is6sMkCRVLZig2bq34kMC60nIrSdApuRXvZy8T/PT010HWQsTlJDYzJfFKUcGYny/9GAKUoMn1iCiWL2VkRGWGFibEqVhTWhyEPxliNYJe2LuufWvfvLWuOmiKcMJ3AK5+DBFTTgDprQAgISXuAV3pxn5935cD7nrSWnmDmGBThfv3N4mF4=</latexit><latexit sha1_base64="KfLXnJ3ix6MXrAVdKvvNiY4zolE=">AAACAnicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBfMDlCHubvWTJ7u2xuyeGI51/wVZ7O7H1j9j6S9xLrjCJDwYe780wMy9MONPGdb+d0tr6xuZWebuys7u3f1A9PGprmSpCW0Ryqboh1pSzmLYMM5x2E0WxCDnthOPb3O88UqWZjB/MJKGBwMOYRYxgYyW/F4rsadrPUjbtV2tu3Z0BrRKvIDUo0OxXf3oDSVJBY0M41tr33MQEGVaGEU6nlV6qaYLJGA+pb2mMBdVBNjt5is6sMkCRVLZig2bq34kMC60nIrSdApuRXvZy8T/PT010HWQsTlJDYzJfFKUcGYny/9GAKUoMn1iCiWL2VkRGWGFibEqVhTWhyEPxliNYJe2LuufWvfvLWuOmiKcMJ3AK5+DBFTTgDprQAgISXuAV3pxn5935cD7nrSWnmDmGBThfv3N4mF4=</latexit><latexit sha1_base64="KfLXnJ3ix6MXrAVdKvvNiY4zolE=">AAACAnicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBfMDlCHubvWTJ7u2xuyeGI51/wVZ7O7H1j9j6S9xLrjCJDwYe780wMy9MONPGdb+d0tr6xuZWebuys7u3f1A9PGprmSpCW0Ryqboh1pSzmLYMM5x2E0WxCDnthOPb3O88UqWZjB/MJKGBwMOYRYxgYyW/F4rsadrPUjbtV2tu3Z0BrRKvIDUo0OxXf3oDSVJBY0M41tr33MQEGVaGEU6nlV6qaYLJGA+pb2mMBdVBNjt5is6sMkCRVLZig2bq34kMC60nIrSdApuRXvZy8T/PT010HWQsTlJDYzJfFKUcGYny/9GAKUoMn1iCiWL2VkRGWGFibEqVhTWhyEPxliNYJe2LuufWvfvLWuOmiKcMJ3AK5+DBFTTgDprQAgISXuAV3pxn5935cD7nrSWnmDmGBThfv3N4mF4=</latexit><latexit sha1_base64="V7F30nWMHAQhmUOen+7xA7xslDc=">AAAB6XicbZBLSwMxFIXv1Fcdq9a1m2ARXJUZN7oU3LisYB/QDiWTudOGJpkhyRTL0D/g1r078Te59ZeYPha29UDgcE7CvfniXHBjg+Dbq+ztHxweVY/9k5p/enZer3VMVmiGbZaJTPdialBwhW3LrcBerpHKWGA3njwu+u4UteGZerGzHCNJR4qnnFHrotaw3giawVJk14Rr04C1hvWfQZKxQqKyTFBj+mGQ26ik2nImcO4PCoM5ZRM6wr6ziko0Ublcc06uXZKQNNPuKEuW6d8XJZXGzGTsbkpqx2a7W4T/df3CpvdRyVVeWFRsNSgtBLEZWfyZJFwjs2LmDGWau10JG1NNmXVk/I0xsZw7JuE2gV3TuW2GQTN8DqAKl3AFNxDCHTzAE7SgDQwSeIN379X78D5X7CreGuIFbMj7+gX/LZDO</latexit><latexit sha1_base64="nSqcl1kyNheexkgtv2ygagyUtxM=">AAAB93icbZA7T8MwFIVvyquUAoWVxaJCYqoSFhiRWBiLRB9SGlWO67RW/YhsB1FF2fgLrLCzIX4NK78Ep+1AW45k6egcW/f6i1POjPX9b6+ytb2zu1fdrx3UD4+OGyf1rlGZJrRDFFe6H2NDOZO0Y5nltJ9qikXMaS+e3pV974lqw5R8tLOURgKPJUsYwdZF4SAW+XMxzDNWDBtNv+XPhTZNsDRNWKo9bPwMRopkgkpLODYmDPzURjnWlhFOi9ogMzTFZIrHNHRWYkFNlM9XLtCFS0YoUdodadE8/fsix8KYmYjdTYHtxKx3ZfhfF2Y2uYlyJtPMUkkWg5KMI6tQ+X80YpoSy2fOYKKZ2xWRCdaYWEeptjImFiWUYB3BpuletQK/FTz4UIUzOIdLCOAabuEe2tABAgpe4Q3evRfvw/tc4Kt4S46nsCLv6xfj+Zbx</latexit><latexit sha1_base64="nSqcl1kyNheexkgtv2ygagyUtxM=">AAAB93icbZA7T8MwFIVvyquUAoWVxaJCYqoSFhiRWBiLRB9SGlWO67RW/YhsB1FF2fgLrLCzIX4NK78Ep+1AW45k6egcW/f6i1POjPX9b6+ytb2zu1fdrx3UD4+OGyf1rlGZJrRDFFe6H2NDOZO0Y5nltJ9qikXMaS+e3pV974lqw5R8tLOURgKPJUsYwdZF4SAW+XMxzDNWDBtNv+XPhTZNsDRNWKo9bPwMRopkgkpLODYmDPzURjnWlhFOi9ogMzTFZIrHNHRWYkFNlM9XLtCFS0YoUdodadE8/fsix8KYmYjdTYHtxKx3ZfhfF2Y2uYlyJtPMUkkWg5KMI6tQ+X80YpoSy2fOYKKZ2xWRCdaYWEeptjImFiWUYB3BpuletQK/FTz4UIUzOIdLCOAabuEe2tABAgpe4Q3evRfvw/tc4Kt4S46nsCLv6xfj+Zbx</latexit><latexit 
sha1_base64="PzRUj5HBQ7P/zuPxojs7ENEMbic=">AAACAnicbVA9T8MwEL3wWcpXgZHFokJiqhIWGCtYGItEP6Q0qhzXaa3aTmQ7iCrKxl9ghZ0NsfJHWPklOG0G2vKkk57eu9PdvTDhTBvX/XbW1jc2t7YrO9Xdvf2Dw9rRcUfHqSK0TWIeq16INeVM0rZhhtNeoigWIafdcHJb+N1HqjSL5YOZJjQQeCRZxAg2VvL7ocie8kGWsnxQq7sNdwa0SryS1KFEa1D76Q9jkgoqDeFYa99zExNkWBlGOM2r/VTTBJMJHlHfUokF1UE2OzlH51YZoihWtqRBM/XvRIaF1lMR2k6BzVgve4X4n+enJroOMiaT1FBJ5ouilCMTo+J/NGSKEsOnlmCimL0VkTFWmBibUnVhTSiKULzlCFZJ57LhuQ3v3q03b8p4KnAKZ3ABHlxBE+6gBW0gEMMLvMKb8+y8Ox/O57x1zSlnTmABztcvcjiYWg==</latexit><latexit sha1_base64="KfLXnJ3ix6MXrAVdKvvNiY4zolE=">AAACAnicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBfMDlCHubvWTJ7u2xuyeGI51/wVZ7O7H1j9j6S9xLrjCJDwYe780wMy9MONPGdb+d0tr6xuZWebuys7u3f1A9PGprmSpCW0Ryqboh1pSzmLYMM5x2E0WxCDnthOPb3O88UqWZjB/MJKGBwMOYRYxgYyW/F4rsadrPUjbtV2tu3Z0BrRKvIDUo0OxXf3oDSVJBY0M41tr33MQEGVaGEU6nlV6qaYLJGA+pb2mMBdVBNjt5is6sMkCRVLZig2bq34kMC60nIrSdApuRXvZy8T/PT010HWQsTlJDYzJfFKUcGYny/9GAKUoMn1iCiWL2VkRGWGFibEqVhTWhyEPxliNYJe2LuufWvfvLWuOmiKcMJ3AK5+DBFTTgDprQAgISXuAV3pxn5935cD7nrSWnmDmGBThfv3N4mF4=</latexit><latexit sha1_base64="KfLXnJ3ix6MXrAVdKvvNiY4zolE=">AAACAnicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBfMDlCHubvWTJ7u2xuyeGI51/wVZ7O7H1j9j6S9xLrjCJDwYe780wMy9MONPGdb+d0tr6xuZWebuys7u3f1A9PGprmSpCW0Ryqboh1pSzmLYMM5x2E0WxCDnthOPb3O88UqWZjB/MJKGBwMOYRYxgYyW/F4rsadrPUjbtV2tu3Z0BrRKvIDUo0OxXf3oDSVJBY0M41tr33MQEGVaGEU6nlV6qaYLJGA+pb2mMBdVBNjt5is6sMkCRVLZig2bq34kMC60nIrSdApuRXvZy8T/PT010HWQsTlJDYzJfFKUcGYny/9GAKUoMn1iCiWL2VkRGWGFibEqVhTWhyEPxliNYJe2LuufWvfvLWuOmiKcMJ3AK5+DBFTTgDprQAgISXuAV3pxn5935cD7nrSWnmDmGBThfv3N4mF4=</latexit><latexit sha1_base64="KfLXnJ3ix6MXrAVdKvvNiY4zolE=">AAACAnicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBfMDlCHubvWTJ7u2xuyeGI51/wVZ7O7H1j9j6S9xLrjCJDwYe780wMy9MONPGdb+d0tr6xuZWebuys7u3f1A9PGprmSpCW0Ryqboh1pSzmLYMM5x2E0WxCDnthOPb3O88UqWZjB/MJKGBwMOYRYxgYyW/F4rsadrPUjbtV2tu3Z0BrRKvIDUo0OxXf3oDSVJBY0M41tr33MQEGVaGEU6nlV6qaYLJGA+pb2mMBdVBNjt5is6sMkCRVLZig2bq34kMC60nIrSdApuRXvZy8T/PT010HWQsTlJDYzJfFKUcGYny/9GAKUoMn1iCiWL2VkRGWGFibEqVhTWhyEPxliNYJe2LuufWvfvLWuOmiKcMJ3AK5+DBFTTgDprQAgISXuAV3pxn5935cD7nrSWnmDmGBThfv3N4mF4=</latexit><latexit sha1_base64="KfLXnJ3ix6MXrAVdKvvNiY4zolE=">AAACAnicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBfMDlCHubvWTJ7u2xuyeGI51/wVZ7O7H1j9j6S9xLrjCJDwYe780wMy9MONPGdb+d0tr6xuZWebuys7u3f1A9PGprmSpCW0Ryqboh1pSzmLYMM5x2E0WxCDnthOPb3O88UqWZjB/MJKGBwMOYRYxgYyW/F4rsadrPUjbtV2tu3Z0BrRKvIDUo0OxXf3oDSVJBY0M41tr33MQEGVaGEU6nlV6qaYLJGA+pb2mMBdVBNjt5is6sMkCRVLZig2bq34kMC60nIrSdApuRXvZy8T/PT010HWQsTlJDYzJfFKUcGYny/9GAKUoMn1iCiWL2VkRGWGFibEqVhTWhyEPxliNYJe2LuufWvfvLWuOmiKcMJ3AK5+DBFTTgDprQAgISXuAV3pxn5935cD7nrSWnmDmGBThfv3N4mF4=</latexit><latexit sha1_base64="KfLXnJ3ix6MXrAVdKvvNiY4zolE=">AAACAnicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBfMDlCHubvWTJ7u2xuyeGI51/wVZ7O7H1j9j6S9xLrjCJDwYe780wMy9MONPGdb+d0tr6xuZWebuys7u3f1A9PGprmSpCW0Ryqboh1pSzmLYMM5x2E0WxCDnthOPb3O88UqWZjB/MJKGBwMOYRYxgYyW/F4rsadrPUjbtV2tu3Z0BrRKvIDUo0OxXf3oDSVJBY0M41tr33MQEGVaGEU6nlV6qaYLJGA+pb2mMBdVBNjt5is6sMkCRVLZig2bq34kMC60nIrSdApuRXvZy8T/PT010HWQsTlJDYzJfFKUcGYny/9GAKUoMn1iCiWL2VkRGWGFibEqVhTWhyEPxliNYJe2LuufWvfvLWuOmiKcMJ3AK5+DBFTTgDprQAgISXuAV3pxn5935cD7nrSWnmDmGBThfv3N4mF4=</latexit><latexit 
sha1_base64="V7F30nWMHAQhmUOen+7xA7xslDc=">AAAB6XicbZBLSwMxFIXv1Fcdq9a1m2ARXJUZN7oU3LisYB/QDiWTudOGJpkhyRTL0D/g1r078Te59ZeYPha29UDgcE7CvfniXHBjg+Dbq+ztHxweVY/9k5p/enZer3VMVmiGbZaJTPdialBwhW3LrcBerpHKWGA3njwu+u4UteGZerGzHCNJR4qnnFHrotaw3giawVJk14Rr04C1hvWfQZKxQqKyTFBj+mGQ26ik2nImcO4PCoM5ZRM6wr6ziko0Ublcc06uXZKQNNPuKEuW6d8XJZXGzGTsbkpqx2a7W4T/df3CpvdRyVVeWFRsNSgtBLEZWfyZJFwjs2LmDGWau10JG1NNmXVk/I0xsZw7JuE2gV3TuW2GQTN8DqAKl3AFNxDCHTzAE7SgDQwSeIN379X78D5X7CreGuIFbMj7+gX/LZDO</latexit><latexit sha1_base64="nSqcl1kyNheexkgtv2ygagyUtxM=">AAAB93icbZA7T8MwFIVvyquUAoWVxaJCYqoSFhiRWBiLRB9SGlWO67RW/YhsB1FF2fgLrLCzIX4NK78Ep+1AW45k6egcW/f6i1POjPX9b6+ytb2zu1fdrx3UD4+OGyf1rlGZJrRDFFe6H2NDOZO0Y5nltJ9qikXMaS+e3pV974lqw5R8tLOURgKPJUsYwdZF4SAW+XMxzDNWDBtNv+XPhTZNsDRNWKo9bPwMRopkgkpLODYmDPzURjnWlhFOi9ogMzTFZIrHNHRWYkFNlM9XLtCFS0YoUdodadE8/fsix8KYmYjdTYHtxKx3ZfhfF2Y2uYlyJtPMUkkWg5KMI6tQ+X80YpoSy2fOYKKZ2xWRCdaYWEeptjImFiWUYB3BpuletQK/FTz4UIUzOIdLCOAabuEe2tABAgpe4Q3evRfvw/tc4Kt4S46nsCLv6xfj+Zbx</latexit><latexit sha1_base64="nSqcl1kyNheexkgtv2ygagyUtxM=">AAAB93icbZA7T8MwFIVvyquUAoWVxaJCYqoSFhiRWBiLRB9SGlWO67RW/YhsB1FF2fgLrLCzIX4NK78Ep+1AW45k6egcW/f6i1POjPX9b6+ytb2zu1fdrx3UD4+OGyf1rlGZJrRDFFe6H2NDOZO0Y5nltJ9qikXMaS+e3pV974lqw5R8tLOURgKPJUsYwdZF4SAW+XMxzDNWDBtNv+XPhTZNsDRNWKo9bPwMRopkgkpLODYmDPzURjnWlhFOi9ogMzTFZIrHNHRWYkFNlM9XLtCFS0YoUdodadE8/fsix8KYmYjdTYHtxKx3ZfhfF2Y2uYlyJtPMUkkWg5KMI6tQ+X80YpoSy2fOYKKZ2xWRCdaYWEeptjImFiWUYB3BpuletQK/FTz4UIUzOIdLCOAabuEe2tABAgpe4Q3evRfvw/tc4Kt4S46nsCLv6xfj+Zbx</latexit><latexit sha1_base64="PzRUj5HBQ7P/zuPxojs7ENEMbic=">AAACAnicbVA9T8MwEL3wWcpXgZHFokJiqhIWGCtYGItEP6Q0qhzXaa3aTmQ7iCrKxl9ghZ0NsfJHWPklOG0G2vKkk57eu9PdvTDhTBvX/XbW1jc2t7YrO9Xdvf2Dw9rRcUfHqSK0TWIeq16INeVM0rZhhtNeoigWIafdcHJb+N1HqjSL5YOZJjQQeCRZxAg2VvL7ocie8kGWsnxQq7sNdwa0SryS1KFEa1D76Q9jkgoqDeFYa99zExNkWBlGOM2r/VTTBJMJHlHfUokF1UE2OzlH51YZoihWtqRBM/XvRIaF1lMR2k6BzVgve4X4n+enJroOMiaT1FBJ5ouilCMTo+J/NGSKEsOnlmCimL0VkTFWmBibUnVhTSiKULzlCFZJ57LhuQ3v3q03b8p4KnAKZ3ABHlxBE+6gBW0gEMMLvMKb8+y8Ox/O57x1zSlnTmABztcvcjiYWg==</latexit><latexit sha1_base64="KfLXnJ3ix6MXrAVdKvvNiY4zolE=">AAACAnicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBfMDlCHubvWTJ7u2xuyeGI51/wVZ7O7H1j9j6S9xLrjCJDwYe780wMy9MONPGdb+d0tr6xuZWebuys7u3f1A9PGprmSpCW0Ryqboh1pSzmLYMM5x2E0WxCDnthOPb3O88UqWZjB/MJKGBwMOYRYxgYyW/F4rsadrPUjbtV2tu3Z0BrRKvIDUo0OxXf3oDSVJBY0M41tr33MQEGVaGEU6nlV6qaYLJGA+pb2mMBdVBNjt5is6sMkCRVLZig2bq34kMC60nIrSdApuRXvZy8T/PT010HWQsTlJDYzJfFKUcGYny/9GAKUoMn1iCiWL2VkRGWGFibEqVhTWhyEPxliNYJe2LuufWvfvLWuOmiKcMJ3AK5+DBFTTgDprQAgISXuAV3pxn5935cD7nrSWnmDmGBThfv3N4mF4=</latexit><latexit sha1_base64="KfLXnJ3ix6MXrAVdKvvNiY4zolE=">AAACAnicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBfMDlCHubvWTJ7u2xuyeGI51/wVZ7O7H1j9j6S9xLrjCJDwYe780wMy9MONPGdb+d0tr6xuZWebuys7u3f1A9PGprmSpCW0Ryqboh1pSzmLYMM5x2E0WxCDnthOPb3O88UqWZjB/MJKGBwMOYRYxgYyW/F4rsadrPUjbtV2tu3Z0BrRKvIDUo0OxXf3oDSVJBY0M41tr33MQEGVaGEU6nlV6qaYLJGA+pb2mMBdVBNjt5is6sMkCRVLZig2bq34kMC60nIrSdApuRXvZy8T/PT010HWQsTlJDYzJfFKUcGYny/9GAKUoMn1iCiWL2VkRGWGFibEqVhTWhyEPxliNYJe2LuufWvfvLWuOmiKcMJ3AK5+DBFTTgDprQAgISXuAV3pxn5935cD7nrSWnmDmGBThfv3N4mF4=</latexit><latexit 
sha1_base64="KfLXnJ3ix6MXrAVdKvvNiY4zolE=">AAACAnicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBfMDlCHubvWTJ7u2xuyeGI51/wVZ7O7H1j9j6S9xLrjCJDwYe780wMy9MONPGdb+d0tr6xuZWebuys7u3f1A9PGprmSpCW0Ryqboh1pSzmLYMM5x2E0WxCDnthOPb3O88UqWZjB/MJKGBwMOYRYxgYyW/F4rsadrPUjbtV2tu3Z0BrRKvIDUo0OxXf3oDSVJBY0M41tr33MQEGVaGEU6nlV6qaYLJGA+pb2mMBdVBNjt5is6sMkCRVLZig2bq34kMC60nIrSdApuRXvZy8T/PT010HWQsTlJDYzJfFKUcGYny/9GAKUoMn1iCiWL2VkRGWGFibEqVhTWhyEPxliNYJe2LuufWvfvLWuOmiKcMJ3AK5+DBFTTgDprQAgISXuAV3pxn5935cD7nrSWnmDmGBThfv3N4mF4=</latexit><latexit sha1_base64="KfLXnJ3ix6MXrAVdKvvNiY4zolE=">AAACAnicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBfMDlCHubvWTJ7u2xuyeGI51/wVZ7O7H1j9j6S9xLrjCJDwYe780wMy9MONPGdb+d0tr6xuZWebuys7u3f1A9PGprmSpCW0Ryqboh1pSzmLYMM5x2E0WxCDnthOPb3O88UqWZjB/MJKGBwMOYRYxgYyW/F4rsadrPUjbtV2tu3Z0BrRKvIDUo0OxXf3oDSVJBY0M41tr33MQEGVaGEU6nlV6qaYLJGA+pb2mMBdVBNjt5is6sMkCRVLZig2bq34kMC60nIrSdApuRXvZy8T/PT010HWQsTlJDYzJfFKUcGYny/9GAKUoMn1iCiWL2VkRGWGFibEqVhTWhyEPxliNYJe2LuufWvfvLWuOmiKcMJ3AK5+DBFTTgDprQAgISXuAV3pxn5935cD7nrSWnmDmGBThfv3N4mF4=</latexit><latexit sha1_base64="KfLXnJ3ix6MXrAVdKvvNiY4zolE=">AAACAnicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBfMDlCHubvWTJ7u2xuyeGI51/wVZ7O7H1j9j6S9xLrjCJDwYe780wMy9MONPGdb+d0tr6xuZWebuys7u3f1A9PGprmSpCW0Ryqboh1pSzmLYMM5x2E0WxCDnthOPb3O88UqWZjB/MJKGBwMOYRYxgYyW/F4rsadrPUjbtV2tu3Z0BrRKvIDUo0OxXf3oDSVJBY0M41tr33MQEGVaGEU6nlV6qaYLJGA+pb2mMBdVBNjt5is6sMkCRVLZig2bq34kMC60nIrSdApuRXvZy8T/PT010HWQsTlJDYzJfFKUcGYny/9GAKUoMn1iCiWL2VkRGWGFibEqVhTWhyEPxliNYJe2LuufWvfvLWuOmiKcMJ3AK5+DBFTTgDprQAgISXuAV3pxn5935cD7nrSWnmDmGBThfv3N4mF4=</latexit><latexit sha1_base64="KfLXnJ3ix6MXrAVdKvvNiY4zolE=">AAACAnicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBfMDlCHubvWTJ7u2xuyeGI51/wVZ7O7H1j9j6S9xLrjCJDwYe780wMy9MONPGdb+d0tr6xuZWebuys7u3f1A9PGprmSpCW0Ryqboh1pSzmLYMM5x2E0WxCDnthOPb3O88UqWZjB/MJKGBwMOYRYxgYyW/F4rsadrPUjbtV2tu3Z0BrRKvIDUo0OxXf3oDSVJBY0M41tr33MQEGVaGEU6nlV6qaYLJGA+pb2mMBdVBNjt5is6sMkCRVLZig2bq34kMC60nIrSdApuRXvZy8T/PT010HWQsTlJDYzJfFKUcGYny/9GAKUoMn1iCiWL2VkRGWGFibEqVhTWhyEPxliNYJe2LuufWvfvLWuOmiKcMJ3AK5+DBFTTgDprQAgISXuAV3pxn5935cD7nrSWnmDmGBThfv3N4mF4=</latexit>

cui<latexit sha1_base64="SG9ZcAHVRZhq4KYV1SR4tYRI3xg=">AAACAnicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cKthXSUDbbTbt0Nxt2N0IJufkXvOrdm3j1j3j1l7hpc7CtDwYe780wMy9MONPGdb+dytr6xuZWdbu2s7u3f1A/POpqmSpCO0RyqR5DrClnMe0YZjh9TBTFIuS0F05uC7/3RJVmMn4w04QGAo9iFjGCjZX8figykg+ylOWDesNtujOgVeKVpAEl2oP6T38oSSpobAjHWvuem5ggw8owwmle66eaJphM8Ij6lsZYUB1ks5NzdGaVIYqkshUbNFP/TmRYaD0Voe0U2Iz1sleI/3l+aqLrIGNxkhoak/miKOXISFT8j4ZMUWL41BJMFLO3IjLGChNjU6otrAlFEYq3HMEq6V40Pbfp3V82WjdlPFU4gVM4Bw+uoAV30IYOEJDwAq/w5jw7786H8zlvrTjlzDEswPn6BVHWmEk=</latexit><latexit sha1_base64="SG9ZcAHVRZhq4KYV1SR4tYRI3xg=">AAACAnicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cKthXSUDbbTbt0Nxt2N0IJufkXvOrdm3j1j3j1l7hpc7CtDwYe780wMy9MONPGdb+dytr6xuZWdbu2s7u3f1A/POpqmSpCO0RyqR5DrClnMe0YZjh9TBTFIuS0F05uC7/3RJVmMn4w04QGAo9iFjGCjZX8figykg+ylOWDesNtujOgVeKVpAEl2oP6T38oSSpobAjHWvuem5ggw8owwmle66eaJphM8Ij6lsZYUB1ks5NzdGaVIYqkshUbNFP/TmRYaD0Voe0U2Iz1sleI/3l+aqLrIGNxkhoak/miKOXISFT8j4ZMUWL41BJMFLO3IjLGChNjU6otrAlFEYq3HMEq6V40Pbfp3V82WjdlPFU4gVM4Bw+uoAV30IYOEJDwAq/w5jw7786H8zlvrTjlzDEswPn6BVHWmEk=</latexit><latexit sha1_base64="SG9ZcAHVRZhq4KYV1SR4tYRI3xg=">AAACAnicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cKthXSUDbbTbt0Nxt2N0IJufkXvOrdm3j1j3j1l7hpc7CtDwYe780wMy9MONPGdb+dytr6xuZWdbu2s7u3f1A/POpqmSpCO0RyqR5DrClnMe0YZjh9TBTFIuS0F05uC7/3RJVmMn4w04QGAo9iFjGCjZX8figykg+ylOWDesNtujOgVeKVpAEl2oP6T38oSSpobAjHWvuem5ggw8owwmle66eaJphM8Ij6lsZYUB1ks5NzdGaVIYqkshUbNFP/TmRYaD0Voe0U2Iz1sleI/3l+aqLrIGNxkhoak/miKOXISFT8j4ZMUWL41BJMFLO3IjLGChNjU6otrAlFEYq3HMEq6V40Pbfp3V82WjdlPFU4gVM4Bw+uoAV30IYOEJDwAq/w5jw7786H8zlvrTjlzDEswPn6BVHWmEk=</latexit><latexit sha1_base64="V7F30nWMHAQhmUOen+7xA7xslDc=">AAAB6XicbZBLSwMxFIXv1Fcdq9a1m2ARXJUZN7oU3LisYB/QDiWTudOGJpkhyRTL0D/g1r078Te59ZeYPha29UDgcE7CvfniXHBjg+Dbq+ztHxweVY/9k5p/enZer3VMVmiGbZaJTPdialBwhW3LrcBerpHKWGA3njwu+u4UteGZerGzHCNJR4qnnFHrotaw3giawVJk14Rr04C1hvWfQZKxQqKyTFBj+mGQ26ik2nImcO4PCoM5ZRM6wr6ziko0Ublcc06uXZKQNNPuKEuW6d8XJZXGzGTsbkpqx2a7W4T/df3CpvdRyVVeWFRsNSgtBLEZWfyZJFwjs2LmDGWau10JG1NNmXVk/I0xsZw7JuE2gV3TuW2GQTN8DqAKl3AFNxDCHTzAE7SgDQwSeIN379X78D5X7CreGuIFbMj7+gX/LZDO</latexit><latexit sha1_base64="cHbQh792Ono/2s2pw/MXHMyFJJs=">AAAB93icbZBLSwMxFIXv1FetVatbN8EiuCozbnQpuHFZwT5gOpRMmmlD8xiSTKEMs/MvuNW9O/HXuPWXmGm7sK0HAodzEu7NF6ecGev7315lZ3dv/6B6WDuqH5+cNs7qXaMyTWiHKK50P8aGciZpxzLLaT/VFIuY0148fSj73oxqw5R8tvOURgKPJUsYwdZF4SAWOSmGecaKYaPpt/yF0LYJVqYJK7WHjZ/BSJFMUGkJx8aEgZ/aKMfaMsJpURtkhqaYTPGYhs5KLKiJ8sXKBbpyyQglSrsjLVqkf1/kWBgzF7G7KbCdmM2uDP/rwswmd1HOZJpZKslyUJJxZBUq/49GTFNi+dwZTDRzuyIywRoT6yjV1sbEooQSbCLYNt2bVuC3gicfqnABl3ANAdzCPTxCGzpAQMErvMG79+J9eJ9LfBVvxfEc1uR9/QLDPpbc</latexit><latexit sha1_base64="cHbQh792Ono/2s2pw/MXHMyFJJs=">AAAB93icbZBLSwMxFIXv1FetVatbN8EiuCozbnQpuHFZwT5gOpRMmmlD8xiSTKEMs/MvuNW9O/HXuPWXmGm7sK0HAodzEu7NF6ecGev7315lZ3dv/6B6WDuqH5+cNs7qXaMyTWiHKK50P8aGciZpxzLLaT/VFIuY0148fSj73oxqw5R8tvOURgKPJUsYwdZF4SAWOSmGecaKYaPpt/yF0LYJVqYJK7WHjZ/BSJFMUGkJx8aEgZ/aKMfaMsJpURtkhqaYTPGYhs5KLKiJ8sXKBbpyyQglSrsjLVqkf1/kWBgzF7G7KbCdmM2uDP/rwswmd1HOZJpZKslyUJJxZBUq/49GTFNi+dwZTDRzuyIywRoT6yjV1sbEooQSbCLYNt2bVuC3gicfqnABl3ANAdzCPTxCGzpAQMErvMG79+J9eJ9LfBVvxfEc1uR9/QLDPpbc</latexit><latexit 
sha1_base64="fvnHZecx+kwMDyhEXl0oF6hcfFw=">AAACAnicbVA9T8MwEL2Ur1K+CowsFhUSU5WwwFjBwlgk+iGlUeW4TmvVdiLbQaqibPwFVtjZECt/hJVfgtNmoC1POunpvTvd3QsTzrRx3W+nsrG5tb1T3a3t7R8cHtWPT7o6ThWhHRLzWPVDrClnknYMM5z2E0WxCDnthdO7wu89UaVZLB/NLKGBwGPJIkawsZI/CEVG8mGWsnxYb7hNdw60TrySNKBEe1j/GYxikgoqDeFYa99zExNkWBlGOM1rg1TTBJMpHlPfUokF1UE2PzlHF1YZoShWtqRBc/XvRIaF1jMR2k6BzUSveoX4n+enJroJMiaT1FBJFouilCMTo+J/NGKKEsNnlmCimL0VkQlWmBibUm1pTSiKULzVCNZJ96rpuU3vwW20bst4qnAG53AJHlxDC+6hDR0gEMMLvMKb8+y8Ox/O56K14pQzp7AE5+sXUJaYRQ==</latexit><latexit sha1_base64="SG9ZcAHVRZhq4KYV1SR4tYRI3xg=">AAACAnicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cKthXSUDbbTbt0Nxt2N0IJufkXvOrdm3j1j3j1l7hpc7CtDwYe780wMy9MONPGdb+dytr6xuZWdbu2s7u3f1A/POpqmSpCO0RyqR5DrClnMe0YZjh9TBTFIuS0F05uC7/3RJVmMn4w04QGAo9iFjGCjZX8figykg+ylOWDesNtujOgVeKVpAEl2oP6T38oSSpobAjHWvuem5ggw8owwmle66eaJphM8Ij6lsZYUB1ks5NzdGaVIYqkshUbNFP/TmRYaD0Voe0U2Iz1sleI/3l+aqLrIGNxkhoak/miKOXISFT8j4ZMUWL41BJMFLO3IjLGChNjU6otrAlFEYq3HMEq6V40Pbfp3V82WjdlPFU4gVM4Bw+uoAV30IYOEJDwAq/w5jw7786H8zlvrTjlzDEswPn6BVHWmEk=</latexit><latexit sha1_base64="SG9ZcAHVRZhq4KYV1SR4tYRI3xg=">AAACAnicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cKthXSUDbbTbt0Nxt2N0IJufkXvOrdm3j1j3j1l7hpc7CtDwYe780wMy9MONPGdb+dytr6xuZWdbu2s7u3f1A/POpqmSpCO0RyqR5DrClnMe0YZjh9TBTFIuS0F05uC7/3RJVmMn4w04QGAo9iFjGCjZX8figykg+ylOWDesNtujOgVeKVpAEl2oP6T38oSSpobAjHWvuem5ggw8owwmle66eaJphM8Ij6lsZYUB1ks5NzdGaVIYqkshUbNFP/TmRYaD0Voe0U2Iz1sleI/3l+aqLrIGNxkhoak/miKOXISFT8j4ZMUWL41BJMFLO3IjLGChNjU6otrAlFEYq3HMEq6V40Pbfp3V82WjdlPFU4gVM4Bw+uoAV30IYOEJDwAq/w5jw7786H8zlvrTjlzDEswPn6BVHWmEk=</latexit><latexit sha1_base64="SG9ZcAHVRZhq4KYV1SR4tYRI3xg=">AAACAnicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cKthXSUDbbTbt0Nxt2N0IJufkXvOrdm3j1j3j1l7hpc7CtDwYe780wMy9MONPGdb+dytr6xuZWdbu2s7u3f1A/POpqmSpCO0RyqR5DrClnMe0YZjh9TBTFIuS0F05uC7/3RJVmMn4w04QGAo9iFjGCjZX8figykg+ylOWDesNtujOgVeKVpAEl2oP6T38oSSpobAjHWvuem5ggw8owwmle66eaJphM8Ij6lsZYUB1ks5NzdGaVIYqkshUbNFP/TmRYaD0Voe0U2Iz1sleI/3l+aqLrIGNxkhoak/miKOXISFT8j4ZMUWL41BJMFLO3IjLGChNjU6otrAlFEYq3HMEq6V40Pbfp3V82WjdlPFU4gVM4Bw+uoAV30IYOEJDwAq/w5jw7786H8zlvrTjlzDEswPn6BVHWmEk=</latexit><latexit sha1_base64="SG9ZcAHVRZhq4KYV1SR4tYRI3xg=">AAACAnicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cKthXSUDbbTbt0Nxt2N0IJufkXvOrdm3j1j3j1l7hpc7CtDwYe780wMy9MONPGdb+dytr6xuZWdbu2s7u3f1A/POpqmSpCO0RyqR5DrClnMe0YZjh9TBTFIuS0F05uC7/3RJVmMn4w04QGAo9iFjGCjZX8figykg+ylOWDesNtujOgVeKVpAEl2oP6T38oSSpobAjHWvuem5ggw8owwmle66eaJphM8Ij6lsZYUB1ks5NzdGaVIYqkshUbNFP/TmRYaD0Voe0U2Iz1sleI/3l+aqLrIGNxkhoak/miKOXISFT8j4ZMUWL41BJMFLO3IjLGChNjU6otrAlFEYq3HMEq6V40Pbfp3V82WjdlPFU4gVM4Bw+uoAV30IYOEJDwAq/w5jw7786H8zlvrTjlzDEswPn6BVHWmEk=</latexit><latexit sha1_base64="SG9ZcAHVRZhq4KYV1SR4tYRI3xg=">AAACAnicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cKthXSUDbbTbt0Nxt2N0IJufkXvOrdm3j1j3j1l7hpc7CtDwYe780wMy9MONPGdb+dytr6xuZWdbu2s7u3f1A/POpqmSpCO0RyqR5DrClnMe0YZjh9TBTFIuS0F05uC7/3RJVmMn4w04QGAo9iFjGCjZX8figykg+ylOWDesNtujOgVeKVpAEl2oP6T38oSSpobAjHWvuem5ggw8owwmle66eaJphM8Ij6lsZYUB1ks5NzdGaVIYqkshUbNFP/TmRYaD0Voe0U2Iz1sleI/3l+aqLrIGNxkhoak/miKOXISFT8j4ZMUWL41BJMFLO3IjLGChNjU6otrAlFEYq3HMEq6V40Pbfp3V82WjdlPFU4gVM4Bw+uoAV30IYOEJDwAq/w5jw7786H8zlvrTjlzDEswPn6BVHWmEk=</latexit><latexit 
sha1_base64="SG9ZcAHVRZhq4KYV1SR4tYRI3xg=">AAACAnicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cKthXSUDbbTbt0Nxt2N0IJufkXvOrdm3j1j3j1l7hpc7CtDwYe780wMy9MONPGdb+dytr6xuZWdbu2s7u3f1A/POpqmSpCO0RyqR5DrClnMe0YZjh9TBTFIuS0F05uC7/3RJVmMn4w04QGAo9iFjGCjZX8figykg+ylOWDesNtujOgVeKVpAEl2oP6T38oSSpobAjHWvuem5ggw8owwmle66eaJphM8Ij6lsZYUB1ks5NzdGaVIYqkshUbNFP/TmRYaD0Voe0U2Iz1sleI/3l+aqLrIGNxkhoak/miKOXISFT8j4ZMUWL41BJMFLO3IjLGChNjU6otrAlFEYq3HMEq6V40Pbfp3V82WjdlPFU4gVM4Bw+uoAV30IYOEJDwAq/w5jw7786H8zlvrTjlzDEswPn6BVHWmEk=</latexit>

Ec(·)<latexit sha1_base64="quJRkwu6MOkwDZiIpkvITlXI1DM=">AAAB8XicbVBNS8NAEJ34WetX1aOXYBHqpSQi6LEogscK9gPbUDabTbt0sxt2J0Ip/RdePCji1X/jzX/jts1BWx8MPN6bYWZemApu0PO+nZXVtfWNzcJWcXtnd2+/dHDYNCrTlDWoEkq3Q2KY4JI1kKNg7VQzkoSCtcLhzdRvPTFtuJIPOEpZkJC+5DGnBK30eNujlS6NFJ71SmWv6s3gLhM/J2XIUe+VvrqRolnCJFJBjOn4XorBmGjkVLBJsZsZlhI6JH3WsVSShJlgPLt44p5aJXJjpW1JdGfq74kxSYwZJaHtTAgOzKI3Ff/zOhnGV8GYyzRDJul8UZwJF5U7fd+NuGYUxcgSQjW3t7p0QDShaEMq2hD8xZeXSfO86ntV//6iXLvO4yjAMZxABXy4hBrcQR0aQEHCM7zCm2OcF+fd+Zi3rjj5zBH8gfP5A6ZXkDw=</latexit><latexit sha1_base64="quJRkwu6MOkwDZiIpkvITlXI1DM=">AAAB8XicbVBNS8NAEJ34WetX1aOXYBHqpSQi6LEogscK9gPbUDabTbt0sxt2J0Ip/RdePCji1X/jzX/jts1BWx8MPN6bYWZemApu0PO+nZXVtfWNzcJWcXtnd2+/dHDYNCrTlDWoEkq3Q2KY4JI1kKNg7VQzkoSCtcLhzdRvPTFtuJIPOEpZkJC+5DGnBK30eNujlS6NFJ71SmWv6s3gLhM/J2XIUe+VvrqRolnCJFJBjOn4XorBmGjkVLBJsZsZlhI6JH3WsVSShJlgPLt44p5aJXJjpW1JdGfq74kxSYwZJaHtTAgOzKI3Ff/zOhnGV8GYyzRDJul8UZwJF5U7fd+NuGYUxcgSQjW3t7p0QDShaEMq2hD8xZeXSfO86ntV//6iXLvO4yjAMZxABXy4hBrcQR0aQEHCM7zCm2OcF+fd+Zi3rjj5zBH8gfP5A6ZXkDw=</latexit><latexit sha1_base64="quJRkwu6MOkwDZiIpkvITlXI1DM=">AAAB8XicbVBNS8NAEJ34WetX1aOXYBHqpSQi6LEogscK9gPbUDabTbt0sxt2J0Ip/RdePCji1X/jzX/jts1BWx8MPN6bYWZemApu0PO+nZXVtfWNzcJWcXtnd2+/dHDYNCrTlDWoEkq3Q2KY4JI1kKNg7VQzkoSCtcLhzdRvPTFtuJIPOEpZkJC+5DGnBK30eNujlS6NFJ71SmWv6s3gLhM/J2XIUe+VvrqRolnCJFJBjOn4XorBmGjkVLBJsZsZlhI6JH3WsVSShJlgPLt44p5aJXJjpW1JdGfq74kxSYwZJaHtTAgOzKI3Ff/zOhnGV8GYyzRDJul8UZwJF5U7fd+NuGYUxcgSQjW3t7p0QDShaEMq2hD8xZeXSfO86ntV//6iXLvO4yjAMZxABXy4hBrcQR0aQEHCM7zCm2OcF+fd+Zi3rjj5zBH8gfP5A6ZXkDw=</latexit><latexit sha1_base64="quJRkwu6MOkwDZiIpkvITlXI1DM=">AAAB8XicbVBNS8NAEJ34WetX1aOXYBHqpSQi6LEogscK9gPbUDabTbt0sxt2J0Ip/RdePCji1X/jzX/jts1BWx8MPN6bYWZemApu0PO+nZXVtfWNzcJWcXtnd2+/dHDYNCrTlDWoEkq3Q2KY4JI1kKNg7VQzkoSCtcLhzdRvPTFtuJIPOEpZkJC+5DGnBK30eNujlS6NFJ71SmWv6s3gLhM/J2XIUe+VvrqRolnCJFJBjOn4XorBmGjkVLBJsZsZlhI6JH3WsVSShJlgPLt44p5aJXJjpW1JdGfq74kxSYwZJaHtTAgOzKI3Ff/zOhnGV8GYyzRDJul8UZwJF5U7fd+NuGYUxcgSQjW3t7p0QDShaEMq2hD8xZeXSfO86ntV//6iXLvO4yjAMZxABXy4hBrcQR0aQEHCM7zCm2OcF+fd+Zi3rjj5zBH8gfP5A6ZXkDw=</latexit>

xvj<latexit sha1_base64="1bo38ZpXPv8aDeqGB4yYd2nrGrk=">AAACAnicbVDLSsNAFJ3UV62vqks3g0VwVRIRdFl047KCfUAaymQ6acfOI8xMiiVk5y+41b07ceuPuPVLnLRZ2NYDFw7n3Mu994Qxo9q47rdTWlvf2Nwqb1d2dvf2D6qHR20tE4VJC0smVTdEmjAqSMtQw0g3VgTxkJFOOL7N/c6EKE2leDDTmAQcDQWNKEbGSn4v5OlT1k8nj1m/WnPr7gxwlXgFqYECzX71pzeQOOFEGMyQ1r7nxiZIkTIUM5JVeokmMcJjNCS+pQJxooN0dnIGz6wygJFUtoSBM/XvRIq41lMe2k6OzEgve7n4n+cnJroOUirixBCB54uihEEjYf4/HFBFsGFTSxBW1N4K8QgphI1NqbKwJuR5KN5yBKukfVH33Lp3f1lr3BTxlMEJOAXnwANXoAHuQBO0AAYSvIBX8OY8O+/Oh/M5by05xcwxWIDz9Qt2o5hg</latexit><latexit sha1_base64="1bo38ZpXPv8aDeqGB4yYd2nrGrk=">AAACAnicbVDLSsNAFJ3UV62vqks3g0VwVRIRdFl047KCfUAaymQ6acfOI8xMiiVk5y+41b07ceuPuPVLnLRZ2NYDFw7n3Mu994Qxo9q47rdTWlvf2Nwqb1d2dvf2D6qHR20tE4VJC0smVTdEmjAqSMtQw0g3VgTxkJFOOL7N/c6EKE2leDDTmAQcDQWNKEbGSn4v5OlT1k8nj1m/WnPr7gxwlXgFqYECzX71pzeQOOFEGMyQ1r7nxiZIkTIUM5JVeokmMcJjNCS+pQJxooN0dnIGz6wygJFUtoSBM/XvRIq41lMe2k6OzEgve7n4n+cnJroOUirixBCB54uihEEjYf4/HFBFsGFTSxBW1N4K8QgphI1NqbKwJuR5KN5yBKukfVH33Lp3f1lr3BTxlMEJOAXnwANXoAHuQBO0AAYSvIBX8OY8O+/Oh/M5by05xcwxWIDz9Qt2o5hg</latexit><latexit sha1_base64="1bo38ZpXPv8aDeqGB4yYd2nrGrk=">AAACAnicbVDLSsNAFJ3UV62vqks3g0VwVRIRdFl047KCfUAaymQ6acfOI8xMiiVk5y+41b07ceuPuPVLnLRZ2NYDFw7n3Mu994Qxo9q47rdTWlvf2Nwqb1d2dvf2D6qHR20tE4VJC0smVTdEmjAqSMtQw0g3VgTxkJFOOL7N/c6EKE2leDDTmAQcDQWNKEbGSn4v5OlT1k8nj1m/WnPr7gxwlXgFqYECzX71pzeQOOFEGMyQ1r7nxiZIkTIUM5JVeokmMcJjNCS+pQJxooN0dnIGz6wygJFUtoSBM/XvRIq41lMe2k6OzEgve7n4n+cnJroOUirixBCB54uihEEjYf4/HFBFsGFTSxBW1N4K8QgphI1NqbKwJuR5KN5yBKukfVH33Lp3f1lr3BTxlMEJOAXnwANXoAHuQBO0AAYSvIBX8OY8O+/Oh/M5by05xcwxWIDz9Qt2o5hg</latexit><latexit sha1_base64="V7F30nWMHAQhmUOen+7xA7xslDc=">AAAB6XicbZBLSwMxFIXv1Fcdq9a1m2ARXJUZN7oU3LisYB/QDiWTudOGJpkhyRTL0D/g1r078Te59ZeYPha29UDgcE7CvfniXHBjg+Dbq+ztHxweVY/9k5p/enZer3VMVmiGbZaJTPdialBwhW3LrcBerpHKWGA3njwu+u4UteGZerGzHCNJR4qnnFHrotaw3giawVJk14Rr04C1hvWfQZKxQqKyTFBj+mGQ26ik2nImcO4PCoM5ZRM6wr6ziko0Ublcc06uXZKQNNPuKEuW6d8XJZXGzGTsbkpqx2a7W4T/df3CpvdRyVVeWFRsNSgtBLEZWfyZJFwjs2LmDGWau10JG1NNmXVk/I0xsZw7JuE2gV3TuW2GQTN8DqAKl3AFNxDCHTzAE7SgDQwSeIN379X78D5X7CreGuIFbMj7+gX/LZDO</latexit><latexit sha1_base64="0smPMoIdo6lFNXZQmqxJO05e22s=">AAAB93icbZA7T8MwFIVvyquUAoWVxaJCYqoSFhiRWBiLRB9SWlWO67SmfkS2U1FF2fgLrLCzIX4NK78Ep+1AW45k6egcW/f6ixLOjPX9b6+0tb2zu1ferxxUD4+OayfVtlGpJrRFFFe6G2FDOZO0ZZnltJtoikXEaSea3BV9Z0q1YUo+2llC+wKPJIsZwdZFYS8S2XM+yKZP+aBW9xv+XGjTBEtTh6Wag9pPb6hIKqi0hGNjwsBPbD/D2jLCaV7ppYYmmEzwiIbOSiyo6WfzlXN04ZIhipV2R1o0T/++yLAwZiYid1NgOzbrXRH+14WpjW/6GZNJaqkki0FxypFVqPg/GjJNieUzZzDRzO2KyBhrTKyjVFkZE4kCSrCOYNO0rxqB3wgefCjDGZzDJQRwDbdwD01oAQEFr/AG796L9+F9LvCVvCXHU1iR9/UL5w6W8w==</latexit><latexit sha1_base64="0smPMoIdo6lFNXZQmqxJO05e22s=">AAAB93icbZA7T8MwFIVvyquUAoWVxaJCYqoSFhiRWBiLRB9SWlWO67SmfkS2U1FF2fgLrLCzIX4NK78Ep+1AW45k6egcW/f6ixLOjPX9b6+0tb2zu1ferxxUD4+OayfVtlGpJrRFFFe6G2FDOZO0ZZnltJtoikXEaSea3BV9Z0q1YUo+2llC+wKPJIsZwdZFYS8S2XM+yKZP+aBW9xv+XGjTBEtTh6Wag9pPb6hIKqi0hGNjwsBPbD/D2jLCaV7ppYYmmEzwiIbOSiyo6WfzlXN04ZIhipV2R1o0T/++yLAwZiYid1NgOzbrXRH+14WpjW/6GZNJaqkki0FxypFVqPg/GjJNieUzZzDRzO2KyBhrTKyjVFkZE4kCSrCOYNO0rxqB3wgefCjDGZzDJQRwDbdwD01oAQEFr/AG796L9+F9LvCVvCXHU1iR9/UL5w6W8w==</latexit><latexit 
sha1_base64="pPxOvoGljdJiFc5rX8waSb4yksk=">AAACAnicbVA9T8MwEL3wWcpXgZHFokJiqhIWGCtYGItEP6S0qhzXaU3tOLKdiirKxl9ghZ0NsfJHWPklOG0G2vKkk57eu9PdvSDmTBvX/XbW1jc2t7ZLO+Xdvf2Dw8rRcUvLRBHaJJJL1QmwppxFtGmY4bQTK4pFwGk7GN/mfntClWYyejDTmPYEHkYsZAQbK/ndQKRPWT+dPGb9StWtuTOgVeIVpAoFGv3KT3cgSSJoZAjHWvueG5teipVhhNOs3E00jTEZ4yH1LY2woLqXzk7O0LlVBiiUylZk0Ez9O5FiofVUBLZTYDPSy14u/uf5iQmveymL4sTQiMwXhQlHRqL8fzRgihLDp5Zgopi9FZERVpgYm1J5YU0g8lC85QhWSeuy5rk1796t1m+KeEpwCmdwAR5cQR3uoAFNICDhBV7hzXl23p0P53PeuuYUMyewAOfrF3VjmFw=</latexit><latexit sha1_base64="1bo38ZpXPv8aDeqGB4yYd2nrGrk=">AAACAnicbVDLSsNAFJ3UV62vqks3g0VwVRIRdFl047KCfUAaymQ6acfOI8xMiiVk5y+41b07ceuPuPVLnLRZ2NYDFw7n3Mu994Qxo9q47rdTWlvf2Nwqb1d2dvf2D6qHR20tE4VJC0smVTdEmjAqSMtQw0g3VgTxkJFOOL7N/c6EKE2leDDTmAQcDQWNKEbGSn4v5OlT1k8nj1m/WnPr7gxwlXgFqYECzX71pzeQOOFEGMyQ1r7nxiZIkTIUM5JVeokmMcJjNCS+pQJxooN0dnIGz6wygJFUtoSBM/XvRIq41lMe2k6OzEgve7n4n+cnJroOUirixBCB54uihEEjYf4/HFBFsGFTSxBW1N4K8QgphI1NqbKwJuR5KN5yBKukfVH33Lp3f1lr3BTxlMEJOAXnwANXoAHuQBO0AAYSvIBX8OY8O+/Oh/M5by05xcwxWIDz9Qt2o5hg</latexit><latexit sha1_base64="1bo38ZpXPv8aDeqGB4yYd2nrGrk=">AAACAnicbVDLSsNAFJ3UV62vqks3g0VwVRIRdFl047KCfUAaymQ6acfOI8xMiiVk5y+41b07ceuPuPVLnLRZ2NYDFw7n3Mu994Qxo9q47rdTWlvf2Nwqb1d2dvf2D6qHR20tE4VJC0smVTdEmjAqSMtQw0g3VgTxkJFOOL7N/c6EKE2leDDTmAQcDQWNKEbGSn4v5OlT1k8nj1m/WnPr7gxwlXgFqYECzX71pzeQOOFEGMyQ1r7nxiZIkTIUM5JVeokmMcJjNCS+pQJxooN0dnIGz6wygJFUtoSBM/XvRIq41lMe2k6OzEgve7n4n+cnJroOUirixBCB54uihEEjYf4/HFBFsGFTSxBW1N4K8QgphI1NqbKwJuR5KN5yBKukfVH33Lp3f1lr3BTxlMEJOAXnwANXoAHuQBO0AAYSvIBX8OY8O+/Oh/M5by05xcwxWIDz9Qt2o5hg</latexit><latexit sha1_base64="1bo38ZpXPv8aDeqGB4yYd2nrGrk=">AAACAnicbVDLSsNAFJ3UV62vqks3g0VwVRIRdFl047KCfUAaymQ6acfOI8xMiiVk5y+41b07ceuPuPVLnLRZ2NYDFw7n3Mu994Qxo9q47rdTWlvf2Nwqb1d2dvf2D6qHR20tE4VJC0smVTdEmjAqSMtQw0g3VgTxkJFOOL7N/c6EKE2leDDTmAQcDQWNKEbGSn4v5OlT1k8nj1m/WnPr7gxwlXgFqYECzX71pzeQOOFEGMyQ1r7nxiZIkTIUM5JVeokmMcJjNCS+pQJxooN0dnIGz6wygJFUtoSBM/XvRIq41lMe2k6OzEgve7n4n+cnJroOUirixBCB54uihEEjYf4/HFBFsGFTSxBW1N4K8QgphI1NqbKwJuR5KN5yBKukfVH33Lp3f1lr3BTxlMEJOAXnwANXoAHuQBO0AAYSvIBX8OY8O+/Oh/M5by05xcwxWIDz9Qt2o5hg</latexit><latexit sha1_base64="1bo38ZpXPv8aDeqGB4yYd2nrGrk=">AAACAnicbVDLSsNAFJ3UV62vqks3g0VwVRIRdFl047KCfUAaymQ6acfOI8xMiiVk5y+41b07ceuPuPVLnLRZ2NYDFw7n3Mu994Qxo9q47rdTWlvf2Nwqb1d2dvf2D6qHR20tE4VJC0smVTdEmjAqSMtQw0g3VgTxkJFOOL7N/c6EKE2leDDTmAQcDQWNKEbGSn4v5OlT1k8nj1m/WnPr7gxwlXgFqYECzX71pzeQOOFEGMyQ1r7nxiZIkTIUM5JVeokmMcJjNCS+pQJxooN0dnIGz6wygJFUtoSBM/XvRIq41lMe2k6OzEgve7n4n+cnJroOUirixBCB54uihEEjYf4/HFBFsGFTSxBW1N4K8QgphI1NqbKwJuR5KN5yBKukfVH33Lp3f1lr3BTxlMEJOAXnwANXoAHuQBO0AAYSvIBX8OY8O+/Oh/M5by05xcwxWIDz9Qt2o5hg</latexit><latexit sha1_base64="1bo38ZpXPv8aDeqGB4yYd2nrGrk=">AAACAnicbVDLSsNAFJ3UV62vqks3g0VwVRIRdFl047KCfUAaymQ6acfOI8xMiiVk5y+41b07ceuPuPVLnLRZ2NYDFw7n3Mu994Qxo9q47rdTWlvf2Nwqb1d2dvf2D6qHR20tE4VJC0smVTdEmjAqSMtQw0g3VgTxkJFOOL7N/c6EKE2leDDTmAQcDQWNKEbGSn4v5OlT1k8nj1m/WnPr7gxwlXgFqYECzX71pzeQOOFEGMyQ1r7nxiZIkTIUM5JVeokmMcJjNCS+pQJxooN0dnIGz6wygJFUtoSBM/XvRIq41lMe2k6OzEgve7n4n+cnJroOUirixBCB54uihEEjYf4/HFBFsGFTSxBW1N4K8QgphI1NqbKwJuR5KN5yBKukfVH33Lp3f1lr3BTxlMEJOAXnwANXoAHuQBO0AAYSvIBX8OY8O+/Oh/M5by05xcwxWIDz9Qt2o5hg</latexit><latexit 
sha1_base64="1bo38ZpXPv8aDeqGB4yYd2nrGrk=">AAACAnicbVDLSsNAFJ3UV62vqks3g0VwVRIRdFl047KCfUAaymQ6acfOI8xMiiVk5y+41b07ceuPuPVLnLRZ2NYDFw7n3Mu994Qxo9q47rdTWlvf2Nwqb1d2dvf2D6qHR20tE4VJC0smVTdEmjAqSMtQw0g3VgTxkJFOOL7N/c6EKE2leDDTmAQcDQWNKEbGSn4v5OlT1k8nj1m/WnPr7gxwlXgFqYECzX71pzeQOOFEGMyQ1r7nxiZIkTIUM5JVeokmMcJjNCS+pQJxooN0dnIGz6wygJFUtoSBM/XvRIq41lMe2k6OzEgve7n4n+cnJroOUirixBCB54uihEEjYf4/HFBFsGFTSxBW1N4K8QgphI1NqbKwJuR5KN5yBKukfVH33Lp3f1lr3BTxlMEJOAXnwANXoAHuQBO0AAYSvIBX8OY8O+/Oh/M5by05xcwxWIDz9Qt2o5hg</latexit>

xu�v,i<latexit sha1_base64="IsOdwIzKP2hQINd+mOt4Unx2MUM=">AAADInicbVLLihNBFK20r7F9TEaXIjRmAoISuoOiy+C4cDmKmRlIh3C7Upku0lXVVN3Og6JW/oobt/oX7sSV4C/4D1YnPWBmvEXD4T5O1Tl9s7LgBuP4Vyu4dv3GzVt7t8M7d+/d328fPDgxqtKUDakqlD7LwLCCSzZEjgU7KzUDkRXsNJsf1fXTBdOGK/kR1yUbCziXfMYpoE9N2o/THNCmmbAr5ya2ilKTK42oosVz7ibtTtyLNxFdBUkDOqSJ48lB6086VbQSTCItwJhREpc4tqCR04K5MK0MK4HO4ZyNPJQgmBnbjQ4XdX1mGs2U9p/EaJP9d8KCMGYtMt8pAHNzuVYn/1cbVTh7PbZclhUySbcXzaoi8jJrU6Ip14xisfYAqOb+rRHNQQNFb124c00m3G7CoAC91tMdbRY1F7TgZd0sYM7A/xn0bGH6lnljNPugssrgkRIC5NReuO5sN4yitFagWWE3oITN6LZlgCr1FVi5blhzS7akOxy+wY3644ZnzrTsi6rG9WsytbK29zJd8mntVFwfZw9TMcC8k6SLMgeJSthnrmHT/DxH0Fot3aGrKd2FHIW5V+MXJLm8DlfBSb+XxL3k/YvO4E2zKnvkEXlCnpKEvCID8o4ckyGh5BP5Qr6Sb8Hn4HvwI/i5bQ1azcxDshPB77/dvwOP</latexit><latexit sha1_base64="IsOdwIzKP2hQINd+mOt4Unx2MUM=">AAADInicbVLLihNBFK20r7F9TEaXIjRmAoISuoOiy+C4cDmKmRlIh3C7Upku0lXVVN3Og6JW/oobt/oX7sSV4C/4D1YnPWBmvEXD4T5O1Tl9s7LgBuP4Vyu4dv3GzVt7t8M7d+/d328fPDgxqtKUDakqlD7LwLCCSzZEjgU7KzUDkRXsNJsf1fXTBdOGK/kR1yUbCziXfMYpoE9N2o/THNCmmbAr5ya2ilKTK42oosVz7ibtTtyLNxFdBUkDOqSJ48lB6086VbQSTCItwJhREpc4tqCR04K5MK0MK4HO4ZyNPJQgmBnbjQ4XdX1mGs2U9p/EaJP9d8KCMGYtMt8pAHNzuVYn/1cbVTh7PbZclhUySbcXzaoi8jJrU6Ip14xisfYAqOb+rRHNQQNFb124c00m3G7CoAC91tMdbRY1F7TgZd0sYM7A/xn0bGH6lnljNPugssrgkRIC5NReuO5sN4yitFagWWE3oITN6LZlgCr1FVi5blhzS7akOxy+wY3644ZnzrTsi6rG9WsytbK29zJd8mntVFwfZw9TMcC8k6SLMgeJSthnrmHT/DxH0Fot3aGrKd2FHIW5V+MXJLm8DlfBSb+XxL3k/YvO4E2zKnvkEXlCnpKEvCID8o4ckyGh5BP5Qr6Sb8Hn4HvwI/i5bQ1azcxDshPB77/dvwOP</latexit><latexit sha1_base64="IsOdwIzKP2hQINd+mOt4Unx2MUM=">AAADInicbVLLihNBFK20r7F9TEaXIjRmAoISuoOiy+C4cDmKmRlIh3C7Upku0lXVVN3Og6JW/oobt/oX7sSV4C/4D1YnPWBmvEXD4T5O1Tl9s7LgBuP4Vyu4dv3GzVt7t8M7d+/d328fPDgxqtKUDakqlD7LwLCCSzZEjgU7KzUDkRXsNJsf1fXTBdOGK/kR1yUbCziXfMYpoE9N2o/THNCmmbAr5ya2ilKTK42oosVz7ibtTtyLNxFdBUkDOqSJ48lB6086VbQSTCItwJhREpc4tqCR04K5MK0MK4HO4ZyNPJQgmBnbjQ4XdX1mGs2U9p/EaJP9d8KCMGYtMt8pAHNzuVYn/1cbVTh7PbZclhUySbcXzaoi8jJrU6Ip14xisfYAqOb+rRHNQQNFb124c00m3G7CoAC91tMdbRY1F7TgZd0sYM7A/xn0bGH6lnljNPugssrgkRIC5NReuO5sN4yitFagWWE3oITN6LZlgCr1FVi5blhzS7akOxy+wY3644ZnzrTsi6rG9WsytbK29zJd8mntVFwfZw9TMcC8k6SLMgeJSthnrmHT/DxH0Fot3aGrKd2FHIW5V+MXJLm8DlfBSb+XxL3k/YvO4E2zKnvkEXlCnpKEvCID8o4ckyGh5BP5Qr6Sb8Hn4HvwI/i5bQ1azcxDshPB77/dvwOP</latexit><latexit sha1_base64="IsOdwIzKP2hQINd+mOt4Unx2MUM=">AAADInicbVLLihNBFK20r7F9TEaXIjRmAoISuoOiy+C4cDmKmRlIh3C7Upku0lXVVN3Og6JW/oobt/oX7sSV4C/4D1YnPWBmvEXD4T5O1Tl9s7LgBuP4Vyu4dv3GzVt7t8M7d+/d328fPDgxqtKUDakqlD7LwLCCSzZEjgU7KzUDkRXsNJsf1fXTBdOGK/kR1yUbCziXfMYpoE9N2o/THNCmmbAr5ya2ilKTK42oosVz7ibtTtyLNxFdBUkDOqSJ48lB6086VbQSTCItwJhREpc4tqCR04K5MK0MK4HO4ZyNPJQgmBnbjQ4XdX1mGs2U9p/EaJP9d8KCMGYtMt8pAHNzuVYn/1cbVTh7PbZclhUySbcXzaoi8jJrU6Ip14xisfYAqOb+rRHNQQNFb124c00m3G7CoAC91tMdbRY1F7TgZd0sYM7A/xn0bGH6lnljNPugssrgkRIC5NReuO5sN4yitFagWWE3oITN6LZlgCr1FVi5blhzS7akOxy+wY3644ZnzrTsi6rG9WsytbK29zJd8mntVFwfZw9TMcC8k6SLMgeJSthnrmHT/DxH0Fot3aGrKd2FHIW5V+MXJLm8DlfBSb+XxL3k/YvO4E2zKnvkEXlCnpKEvCID8o4ckyGh5BP5Qr6Sb8Hn4HvwI/i5bQ1azcxDshPB77/dvwOP</latexit>

sv<latexit sha1_base64="UvXJKSTYYYYzbM0bQKTRsSZ+LLg=">AAADDXicbVLNbhMxEHaWn5blr4UjlxVpJCSkaDcC0WNEOXAsiLQV2Sjyep2sFf+s7NmkkeVn4MIV3oIb4soz8BC8A3aySKRlLEufvpn5PDOeoubMQJr+6kQ3bt66vbd/J7577/6DhweHj86MajShI6K40hcFNpQzSUfAgNOLWlMsCk7Pi8VJ8J8vqTZMyQ+wrulE4LlkM0YweOpjXghr3NQu3fSgm/bTjSXXQdaCLmrtdHrY+Z2XijSCSiAcGzPO0homFmtghFMX542hNSYLPKdjDyUW1EzspmSX9DxTJjOl/ZWQbNh/MywWxqxF4SMFhspc9QXyf75xA7PjiWWyboBKsn1o1vAEVBL6T0qmKQG+9gATzXytCamwxgT8lOKdZwrhdgkDAuu1Lnd6s6CZIJzVIVjgBcX+E8Crxfkb6gej6XtVNAZOlBBYljY3ldIAytlenCR56EBTbjegxpvUbcgQVO49+NL14qAt6YrsaPgANx5MWp0F1XIgmoBDNYW6tLb/Ml+xMkwqDcfZo1wMoepm+bKusAQl7HPXqmk2rwBrrVbuyAVJ97cdBZXvxi9IdnUdroOzQT9L+9m7F93h63ZV9tET9BQ9Qxl6hYboLTpFI0SQRJ/RF/Q1+hR9i75HP7ahUafNeYx2LPr5B6fV+0I=</latexit><latexit sha1_base64="UvXJKSTYYYYzbM0bQKTRsSZ+LLg=">AAADDXicbVLNbhMxEHaWn5blr4UjlxVpJCSkaDcC0WNEOXAsiLQV2Sjyep2sFf+s7NmkkeVn4MIV3oIb4soz8BC8A3aySKRlLEufvpn5PDOeoubMQJr+6kQ3bt66vbd/J7577/6DhweHj86MajShI6K40hcFNpQzSUfAgNOLWlMsCk7Pi8VJ8J8vqTZMyQ+wrulE4LlkM0YweOpjXghr3NQu3fSgm/bTjSXXQdaCLmrtdHrY+Z2XijSCSiAcGzPO0homFmtghFMX542hNSYLPKdjDyUW1EzspmSX9DxTJjOl/ZWQbNh/MywWxqxF4SMFhspc9QXyf75xA7PjiWWyboBKsn1o1vAEVBL6T0qmKQG+9gATzXytCamwxgT8lOKdZwrhdgkDAuu1Lnd6s6CZIJzVIVjgBcX+E8Crxfkb6gej6XtVNAZOlBBYljY3ldIAytlenCR56EBTbjegxpvUbcgQVO49+NL14qAt6YrsaPgANx5MWp0F1XIgmoBDNYW6tLb/Ml+xMkwqDcfZo1wMoepm+bKusAQl7HPXqmk2rwBrrVbuyAVJ97cdBZXvxi9IdnUdroOzQT9L+9m7F93h63ZV9tET9BQ9Qxl6hYboLTpFI0SQRJ/RF/Q1+hR9i75HP7ahUafNeYx2LPr5B6fV+0I=</latexit><latexit sha1_base64="UvXJKSTYYYYzbM0bQKTRsSZ+LLg=">AAADDXicbVLNbhMxEHaWn5blr4UjlxVpJCSkaDcC0WNEOXAsiLQV2Sjyep2sFf+s7NmkkeVn4MIV3oIb4soz8BC8A3aySKRlLEufvpn5PDOeoubMQJr+6kQ3bt66vbd/J7577/6DhweHj86MajShI6K40hcFNpQzSUfAgNOLWlMsCk7Pi8VJ8J8vqTZMyQ+wrulE4LlkM0YweOpjXghr3NQu3fSgm/bTjSXXQdaCLmrtdHrY+Z2XijSCSiAcGzPO0homFmtghFMX542hNSYLPKdjDyUW1EzspmSX9DxTJjOl/ZWQbNh/MywWxqxF4SMFhspc9QXyf75xA7PjiWWyboBKsn1o1vAEVBL6T0qmKQG+9gATzXytCamwxgT8lOKdZwrhdgkDAuu1Lnd6s6CZIJzVIVjgBcX+E8Crxfkb6gej6XtVNAZOlBBYljY3ldIAytlenCR56EBTbjegxpvUbcgQVO49+NL14qAt6YrsaPgANx5MWp0F1XIgmoBDNYW6tLb/Ml+xMkwqDcfZo1wMoepm+bKusAQl7HPXqmk2rwBrrVbuyAVJ97cdBZXvxi9IdnUdroOzQT9L+9m7F93h63ZV9tET9BQ9Qxl6hYboLTpFI0SQRJ/RF/Q1+hR9i75HP7ahUafNeYx2LPr5B6fV+0I=</latexit><latexit sha1_base64="UvXJKSTYYYYzbM0bQKTRsSZ+LLg=">AAADDXicbVLNbhMxEHaWn5blr4UjlxVpJCSkaDcC0WNEOXAsiLQV2Sjyep2sFf+s7NmkkeVn4MIV3oIb4soz8BC8A3aySKRlLEufvpn5PDOeoubMQJr+6kQ3bt66vbd/J7577/6DhweHj86MajShI6K40hcFNpQzSUfAgNOLWlMsCk7Pi8VJ8J8vqTZMyQ+wrulE4LlkM0YweOpjXghr3NQu3fSgm/bTjSXXQdaCLmrtdHrY+Z2XijSCSiAcGzPO0homFmtghFMX542hNSYLPKdjDyUW1EzspmSX9DxTJjOl/ZWQbNh/MywWxqxF4SMFhspc9QXyf75xA7PjiWWyboBKsn1o1vAEVBL6T0qmKQG+9gATzXytCamwxgT8lOKdZwrhdgkDAuu1Lnd6s6CZIJzVIVjgBcX+E8Crxfkb6gej6XtVNAZOlBBYljY3ldIAytlenCR56EBTbjegxpvUbcgQVO49+NL14qAt6YrsaPgANx5MWp0F1XIgmoBDNYW6tLb/Ml+xMkwqDcfZo1wMoepm+bKusAQl7HPXqmk2rwBrrVbuyAVJ97cdBZXvxi9IdnUdroOzQT9L+9m7F93h63ZV9tET9BQ9Qxl6hYboLTpFI0SQRJ/RF/Q1+hR9i75HP7ahUafNeYx2LPr5B6fV+0I=</latexit>

(c)<latexit sha1_base64="oZDLp9JM/IqvtcvMEHg+48x3TS4=">AAADBnicbVLLbtNAFJ2YVzGvFpZsLNJIRUiRHYFgGVEWLMsjbaU4qsbjSTzKPKyZ66bRaPZs2MJfsENs+Q0+gn9gxjESabmWpaNzzz1z79Utas4MpOmvXnTt+o2bt3Zux3fu3rv/YHfv4bFRjSZ0QhRX+rTAhnIm6QQYcHpaa4pFwelJsTwM+ZNzqg1T8iOsazoTeCHZnBEMnvpwQJ6e7fbTYdpGchVkHeijLo7O9nq/81KRRlAJhGNjpllaw8xiDYxw6uK8MbTGZIkXdOqhxIKamW17dcnAM2UyV9r/EpKW/bfCYmHMWhReKTBU5nIukP/LTRuYv5pZJusGqCSbh+YNT0AlYfCkZJoS4GsPMNHM95qQCmtMwK8n3nqmEG6bMCCwXutyazYLmgnCWR3EAi8p9tsH7xbnb6hfjKbvVdEYOFRCYFna3FRKAyhnB3GS5GECTbltQY3b0o1kDCr3GXzhBnHwlnRFtjy8wE1Hs85nSbUciSbg0E2hLqwdvshXrAybSsPn7H4uxlD1s/y8rrAEJewz17lptqgAa61Wbt8FS/d3HAWVn8YfSHb5HK6C49EwS4fZu+f98evuVHbQY/QEHaAMvURj9BYdoQkiaIE+oy/oa/Qp+hZ9j35spFGvq3mEtiL6+QffWPdN</latexit><latexit sha1_base64="oZDLp9JM/IqvtcvMEHg+48x3TS4=">AAADBnicbVLLbtNAFJ2YVzGvFpZsLNJIRUiRHYFgGVEWLMsjbaU4qsbjSTzKPKyZ66bRaPZs2MJfsENs+Q0+gn9gxjESabmWpaNzzz1z79Utas4MpOmvXnTt+o2bt3Zux3fu3rv/YHfv4bFRjSZ0QhRX+rTAhnIm6QQYcHpaa4pFwelJsTwM+ZNzqg1T8iOsazoTeCHZnBEMnvpwQJ6e7fbTYdpGchVkHeijLo7O9nq/81KRRlAJhGNjpllaw8xiDYxw6uK8MbTGZIkXdOqhxIKamW17dcnAM2UyV9r/EpKW/bfCYmHMWhReKTBU5nIukP/LTRuYv5pZJusGqCSbh+YNT0AlYfCkZJoS4GsPMNHM95qQCmtMwK8n3nqmEG6bMCCwXutyazYLmgnCWR3EAi8p9tsH7xbnb6hfjKbvVdEYOFRCYFna3FRKAyhnB3GS5GECTbltQY3b0o1kDCr3GXzhBnHwlnRFtjy8wE1Hs85nSbUciSbg0E2hLqwdvshXrAybSsPn7H4uxlD1s/y8rrAEJewz17lptqgAa61Wbt8FS/d3HAWVn8YfSHb5HK6C49EwS4fZu+f98evuVHbQY/QEHaAMvURj9BYdoQkiaIE+oy/oa/Qp+hZ9j35spFGvq3mEtiL6+QffWPdN</latexit><latexit sha1_base64="oZDLp9JM/IqvtcvMEHg+48x3TS4=">AAADBnicbVLLbtNAFJ2YVzGvFpZsLNJIRUiRHYFgGVEWLMsjbaU4qsbjSTzKPKyZ66bRaPZs2MJfsENs+Q0+gn9gxjESabmWpaNzzz1z79Utas4MpOmvXnTt+o2bt3Zux3fu3rv/YHfv4bFRjSZ0QhRX+rTAhnIm6QQYcHpaa4pFwelJsTwM+ZNzqg1T8iOsazoTeCHZnBEMnvpwQJ6e7fbTYdpGchVkHeijLo7O9nq/81KRRlAJhGNjpllaw8xiDYxw6uK8MbTGZIkXdOqhxIKamW17dcnAM2UyV9r/EpKW/bfCYmHMWhReKTBU5nIukP/LTRuYv5pZJusGqCSbh+YNT0AlYfCkZJoS4GsPMNHM95qQCmtMwK8n3nqmEG6bMCCwXutyazYLmgnCWR3EAi8p9tsH7xbnb6hfjKbvVdEYOFRCYFna3FRKAyhnB3GS5GECTbltQY3b0o1kDCr3GXzhBnHwlnRFtjy8wE1Hs85nSbUciSbg0E2hLqwdvshXrAybSsPn7H4uxlD1s/y8rrAEJewz17lptqgAa61Wbt8FS/d3HAWVn8YfSHb5HK6C49EwS4fZu+f98evuVHbQY/QEHaAMvURj9BYdoQkiaIE+oy/oa/Qp+hZ9j35spFGvq3mEtiL6+QffWPdN</latexit><latexit sha1_base64="oZDLp9JM/IqvtcvMEHg+48x3TS4=">AAADBnicbVLLbtNAFJ2YVzGvFpZsLNJIRUiRHYFgGVEWLMsjbaU4qsbjSTzKPKyZ66bRaPZs2MJfsENs+Q0+gn9gxjESabmWpaNzzz1z79Utas4MpOmvXnTt+o2bt3Zux3fu3rv/YHfv4bFRjSZ0QhRX+rTAhnIm6QQYcHpaa4pFwelJsTwM+ZNzqg1T8iOsazoTeCHZnBEMnvpwQJ6e7fbTYdpGchVkHeijLo7O9nq/81KRRlAJhGNjpllaw8xiDYxw6uK8MbTGZIkXdOqhxIKamW17dcnAM2UyV9r/EpKW/bfCYmHMWhReKTBU5nIukP/LTRuYv5pZJusGqCSbh+YNT0AlYfCkZJoS4GsPMNHM95qQCmtMwK8n3nqmEG6bMCCwXutyazYLmgnCWR3EAi8p9tsH7xbnb6hfjKbvVdEYOFRCYFna3FRKAyhnB3GS5GECTbltQY3b0o1kDCr3GXzhBnHwlnRFtjy8wE1Hs85nSbUciSbg0E2hLqwdvshXrAybSsPn7H4uxlD1s/y8rrAEJewz17lptqgAa61Wbt8FS/d3HAWVn8YfSHb5HK6C49EwS4fZu+f98evuVHbQY/QEHaAMvURj9BYdoQkiaIE+oy/oa/Qp+hZ9j35spFGvq3mEtiL6+QffWPdN</latexit>

Es(·)<latexit sha1_base64="IT/6yMbZAmEqoEv3DSvkX+eaVds=">AAAB8XicbVBNS8NAEJ34WetX1aOXYBHqpSQi6LEogscK9gPbUDabTbt0sxt2J0Ip/RdePCji1X/jzX/jts1BWx8MPN6bYWZemApu0PO+nZXVtfWNzcJWcXtnd2+/dHDYNCrTlDWoEkq3Q2KY4JI1kKNg7VQzkoSCtcLhzdRvPTFtuJIPOEpZkJC+5DGnBK30eNszlS6NFJ71SmWv6s3gLhM/J2XIUe+VvrqRolnCJFJBjOn4XorBmGjkVLBJsZsZlhI6JH3WsVSShJlgPLt44p5aJXJjpW1JdGfq74kxSYwZJaHtTAgOzKI3Ff/zOhnGV8GYyzRDJul8UZwJF5U7fd+NuGYUxcgSQjW3t7p0QDShaEMq2hD8xZeXSfO86ntV//6iXLvO4yjAMZxABXy4hBrcQR0aQEHCM7zCm2OcF+fd+Zi3rjj5zBH8gfP5A78HkEw=</latexit><latexit sha1_base64="IT/6yMbZAmEqoEv3DSvkX+eaVds=">AAAB8XicbVBNS8NAEJ34WetX1aOXYBHqpSQi6LEogscK9gPbUDabTbt0sxt2J0Ip/RdePCji1X/jzX/jts1BWx8MPN6bYWZemApu0PO+nZXVtfWNzcJWcXtnd2+/dHDYNCrTlDWoEkq3Q2KY4JI1kKNg7VQzkoSCtcLhzdRvPTFtuJIPOEpZkJC+5DGnBK30eNszlS6NFJ71SmWv6s3gLhM/J2XIUe+VvrqRolnCJFJBjOn4XorBmGjkVLBJsZsZlhI6JH3WsVSShJlgPLt44p5aJXJjpW1JdGfq74kxSYwZJaHtTAgOzKI3Ff/zOhnGV8GYyzRDJul8UZwJF5U7fd+NuGYUxcgSQjW3t7p0QDShaEMq2hD8xZeXSfO86ntV//6iXLvO4yjAMZxABXy4hBrcQR0aQEHCM7zCm2OcF+fd+Zi3rjj5zBH8gfP5A78HkEw=</latexit><latexit sha1_base64="IT/6yMbZAmEqoEv3DSvkX+eaVds=">AAAB8XicbVBNS8NAEJ34WetX1aOXYBHqpSQi6LEogscK9gPbUDabTbt0sxt2J0Ip/RdePCji1X/jzX/jts1BWx8MPN6bYWZemApu0PO+nZXVtfWNzcJWcXtnd2+/dHDYNCrTlDWoEkq3Q2KY4JI1kKNg7VQzkoSCtcLhzdRvPTFtuJIPOEpZkJC+5DGnBK30eNszlS6NFJ71SmWv6s3gLhM/J2XIUe+VvrqRolnCJFJBjOn4XorBmGjkVLBJsZsZlhI6JH3WsVSShJlgPLt44p5aJXJjpW1JdGfq74kxSYwZJaHtTAgOzKI3Ff/zOhnGV8GYyzRDJul8UZwJF5U7fd+NuGYUxcgSQjW3t7p0QDShaEMq2hD8xZeXSfO86ntV//6iXLvO4yjAMZxABXy4hBrcQR0aQEHCM7zCm2OcF+fd+Zi3rjj5zBH8gfP5A78HkEw=</latexit><latexit sha1_base64="hP+6LrUf2d3tZaldqaQQvEKMXyw=">AAAB2XicbZDNSgMxFIXv1L86Vq1rN8EiuCozbnQpuHFZwbZCO5RM5k4bmskMyR2hDH0BF25EfC93vo3pz0JbDwQ+zknIvSculLQUBN9ebWd3b/+gfugfNfzjk9Nmo2fz0gjsilzl5jnmFpXU2CVJCp8LgzyLFfbj6f0i77+gsTLXTzQrMMr4WMtUCk7O6oyaraAdLMW2IVxDC9YaNb+GSS7KDDUJxa0dhEFBUcUNSaFw7g9LiwUXUz7GgUPNM7RRtRxzzi6dk7A0N+5oYkv394uKZ9bOstjdzDhN7Ga2MP/LBiWlt1EldVESarH6KC0Vo5wtdmaJNChIzRxwYaSblYkJN1yQa8Z3HYSbG29D77odBu3wMYA6nMMFXEEIN3AHD9CBLghI4BXevYn35n2suqp569LO4I+8zx84xIo4</latexit><latexit sha1_base64="ZH/b1M8c8fRi8a9FnE3/+kQXtcg=">AAAB5nicbZBLSwMxFIXv+Ky1anXrJliEuikzbnQpiOCygn1gO5RMJtOGZpIhuSOUof/CjQtF/Enu/Demj4W2Hgh8nJOQe0+USWHR97+9jc2t7Z3d0l55v3JweFQ9rrStzg3jLaalNt2IWi6F4i0UKHk3M5ymkeSdaHw7yzvP3Fih1SNOMh6mdKhEIhhFZz3dDWy9z2KNF4NqzW/4c5F1CJZQg6Wag+pXP9YsT7lCJqm1vcDPMCyoQcEkn5b7ueUZZWM65D2HiqbchsV84ik5d05MEm3cUUjm7u8XBU2tnaSRu5lSHNnVbGb+l/VyTK7DQqgsR67Y4qMklwQ1ma1PYmE4QzlxQJkRblbCRtRQhq6ksishWF15HdqXjcBvBA8+lOAUzqAOAVzBDdxDE1rAQMELvMG7Z71X72NR14a37O0E/sj7/AGHj47w</latexit><latexit sha1_base64="ZH/b1M8c8fRi8a9FnE3/+kQXtcg=">AAAB5nicbZBLSwMxFIXv+Ky1anXrJliEuikzbnQpiOCygn1gO5RMJtOGZpIhuSOUof/CjQtF/Enu/Demj4W2Hgh8nJOQe0+USWHR97+9jc2t7Z3d0l55v3JweFQ9rrStzg3jLaalNt2IWi6F4i0UKHk3M5ymkeSdaHw7yzvP3Fih1SNOMh6mdKhEIhhFZz3dDWy9z2KNF4NqzW/4c5F1CJZQg6Wag+pXP9YsT7lCJqm1vcDPMCyoQcEkn5b7ueUZZWM65D2HiqbchsV84ik5d05MEm3cUUjm7u8XBU2tnaSRu5lSHNnVbGb+l/VyTK7DQqgsR67Y4qMklwQ1ma1PYmE4QzlxQJkRblbCRtRQhq6ksishWF15HdqXjcBvBA8+lOAUzqAOAVzBDdxDE1rAQMELvMG7Z71X72NR14a37O0E/sj7/AGHj47w</latexit><latexit 
sha1_base64="pWvk1NG3NH14LoW+MwZNH2Flg4g=">AAAB8XicbVBNS8NAEJ3Ur1q/qh69LBahXkriRY9FETxWsB/YhrLZbNqlm03YnQil9F948aCIV/+NN/+N2zYHbX0w8Hhvhpl5QSqFQdf9dgpr6xubW8Xt0s7u3v5B+fCoZZJMM95kiUx0J6CGS6F4EwVK3kk1p3EgeTsY3cz89hPXRiTqAccp92M6UCISjKKVHm/7ptpjYYLn/XLFrblzkFXi5aQCORr98lcvTFgWc4VMUmO6npuiP6EaBZN8WuplhqeUjeiAdy1VNObGn8wvnpIzq4QkSrQthWSu/p6Y0NiYcRzYzpji0Cx7M/E/r5thdOVPhEoz5IotFkWZJJiQ2fskFJozlGNLKNPC3krYkGrK0IZUsiF4yy+vktZFzXNr3r1bqV/ncRThBE6hCh5cQh3uoAFNYKDgGV7hzTHOi/PufCxaC04+cwx/4Hz+AL3HkEg=</latexit><latexit sha1_base64="IT/6yMbZAmEqoEv3DSvkX+eaVds=">AAAB8XicbVBNS8NAEJ34WetX1aOXYBHqpSQi6LEogscK9gPbUDabTbt0sxt2J0Ip/RdePCji1X/jzX/jts1BWx8MPN6bYWZemApu0PO+nZXVtfWNzcJWcXtnd2+/dHDYNCrTlDWoEkq3Q2KY4JI1kKNg7VQzkoSCtcLhzdRvPTFtuJIPOEpZkJC+5DGnBK30eNszlS6NFJ71SmWv6s3gLhM/J2XIUe+VvrqRolnCJFJBjOn4XorBmGjkVLBJsZsZlhI6JH3WsVSShJlgPLt44p5aJXJjpW1JdGfq74kxSYwZJaHtTAgOzKI3Ff/zOhnGV8GYyzRDJul8UZwJF5U7fd+NuGYUxcgSQjW3t7p0QDShaEMq2hD8xZeXSfO86ntV//6iXLvO4yjAMZxABXy4hBrcQR0aQEHCM7zCm2OcF+fd+Zi3rjj5zBH8gfP5A78HkEw=</latexit><latexit sha1_base64="IT/6yMbZAmEqoEv3DSvkX+eaVds=">AAAB8XicbVBNS8NAEJ34WetX1aOXYBHqpSQi6LEogscK9gPbUDabTbt0sxt2J0Ip/RdePCji1X/jzX/jts1BWx8MPN6bYWZemApu0PO+nZXVtfWNzcJWcXtnd2+/dHDYNCrTlDWoEkq3Q2KY4JI1kKNg7VQzkoSCtcLhzdRvPTFtuJIPOEpZkJC+5DGnBK30eNszlS6NFJ71SmWv6s3gLhM/J2XIUe+VvrqRolnCJFJBjOn4XorBmGjkVLBJsZsZlhI6JH3WsVSShJlgPLt44p5aJXJjpW1JdGfq74kxSYwZJaHtTAgOzKI3Ff/zOhnGV8GYyzRDJul8UZwJF5U7fd+NuGYUxcgSQjW3t7p0QDShaEMq2hD8xZeXSfO86ntV//6iXLvO4yjAMZxABXy4hBrcQR0aQEHCM7zCm2OcF+fd+Zi3rjj5zBH8gfP5A78HkEw=</latexit><latexit sha1_base64="IT/6yMbZAmEqoEv3DSvkX+eaVds=">AAAB8XicbVBNS8NAEJ34WetX1aOXYBHqpSQi6LEogscK9gPbUDabTbt0sxt2J0Ip/RdePCji1X/jzX/jts1BWx8MPN6bYWZemApu0PO+nZXVtfWNzcJWcXtnd2+/dHDYNCrTlDWoEkq3Q2KY4JI1kKNg7VQzkoSCtcLhzdRvPTFtuJIPOEpZkJC+5DGnBK30eNszlS6NFJ71SmWv6s3gLhM/J2XIUe+VvrqRolnCJFJBjOn4XorBmGjkVLBJsZsZlhI6JH3WsVSShJlgPLt44p5aJXJjpW1JdGfq74kxSYwZJaHtTAgOzKI3Ff/zOhnGV8GYyzRDJul8UZwJF5U7fd+NuGYUxcgSQjW3t7p0QDShaEMq2hD8xZeXSfO86ntV//6iXLvO4yjAMZxABXy4hBrcQR0aQEHCM7zCm2OcF+fd+Zi3rjj5zBH8gfP5A78HkEw=</latexit><latexit sha1_base64="IT/6yMbZAmEqoEv3DSvkX+eaVds=">AAAB8XicbVBNS8NAEJ34WetX1aOXYBHqpSQi6LEogscK9gPbUDabTbt0sxt2J0Ip/RdePCji1X/jzX/jts1BWx8MPN6bYWZemApu0PO+nZXVtfWNzcJWcXtnd2+/dHDYNCrTlDWoEkq3Q2KY4JI1kKNg7VQzkoSCtcLhzdRvPTFtuJIPOEpZkJC+5DGnBK30eNszlS6NFJ71SmWv6s3gLhM/J2XIUe+VvrqRolnCJFJBjOn4XorBmGjkVLBJsZsZlhI6JH3WsVSShJlgPLt44p5aJXJjpW1JdGfq74kxSYwZJaHtTAgOzKI3Ff/zOhnGV8GYyzRDJul8UZwJF5U7fd+NuGYUxcgSQjW3t7p0QDShaEMq2hD8xZeXSfO86ntV//6iXLvO4yjAMZxABXy4hBrcQR0aQEHCM7zCm2OcF+fd+Zi3rjj5zBH8gfP5A78HkEw=</latexit><latexit sha1_base64="IT/6yMbZAmEqoEv3DSvkX+eaVds=">AAAB8XicbVBNS8NAEJ34WetX1aOXYBHqpSQi6LEogscK9gPbUDabTbt0sxt2J0Ip/RdePCji1X/jzX/jts1BWx8MPN6bYWZemApu0PO+nZXVtfWNzcJWcXtnd2+/dHDYNCrTlDWoEkq3Q2KY4JI1kKNg7VQzkoSCtcLhzdRvPTFtuJIPOEpZkJC+5DGnBK30eNszlS6NFJ71SmWv6s3gLhM/J2XIUe+VvrqRolnCJFJBjOn4XorBmGjkVLBJsZsZlhI6JH3WsVSShJlgPLt44p5aJXJjpW1JdGfq74kxSYwZJaHtTAgOzKI3Ff/zOhnGV8GYyzRDJul8UZwJF5U7fd+NuGYUxcgSQjW3t7p0QDShaEMq2hD8xZeXSfO86ntV//6iXLvO4yjAMZxABXy4hBrcQR0aQEHCM7zCm2OcF+fd+Zi3rjj5zBH8gfP5A78HkEw=</latexit><latexit 
sha1_base64="IT/6yMbZAmEqoEv3DSvkX+eaVds=">AAAB8XicbVBNS8NAEJ34WetX1aOXYBHqpSQi6LEogscK9gPbUDabTbt0sxt2J0Ip/RdePCji1X/jzX/jts1BWx8MPN6bYWZemApu0PO+nZXVtfWNzcJWcXtnd2+/dHDYNCrTlDWoEkq3Q2KY4JI1kKNg7VQzkoSCtcLhzdRvPTFtuJIPOEpZkJC+5DGnBK30eNszlS6NFJ71SmWv6s3gLhM/J2XIUe+VvrqRolnCJFJBjOn4XorBmGjkVLBJsZsZlhI6JH3WsVSShJlgPLt44p5aJXJjpW1JdGfq74kxSYwZJaHtTAgOzKI3Ff/zOhnGV8GYyzRDJul8UZwJF5U7fd+NuGYUxcgSQjW3t7p0QDShaEMq2hD8xZeXSfO86ntV//6iXLvO4yjAMZxABXy4hBrcQR0aQEHCM7zCm2OcF+fd+Zi3rjj5zBH8gfP5A78HkEw=</latexit>

[Figure 5.1 graphic: only scattered panel labels survive extraction — x_ui, x_uj, x_vk, the style encoder E_s(·), s_ui, µ_u, µ_v, and the panel marker (a).]
Figure 5.1: Training and transfer processes of IDE-VC. (b) shows the training process: the encoders and the decoder D are trained with objectives I_2, I_3. We encode the content c_ui from voice x_ui of speaker u; the style s_u of speaker u is encoded from another of u's voices, x_uj. The dependency between the style and content embeddings is minimized with I_3. With c_ui and s_u, the decoder reconstructs the voice x_ui as x̂_ui = D(s_u, c_ui), and I_2 is then calculated between the original voice x_ui and the reconstruction x̂_ui. (c) shows the transfer process: for zero-shot voice style transfer, given x_ui from speaker u and x_vj from speaker v, we encode the content c_ui and the style s_v, and combine them to generate the transferred voice x_{u→v,i} = D(s_v, c_ui).
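To make the transfer step concrete, the sketch below shows the conversion forward pass under the caption's notation. This is a minimal sketch, not the thesis's actual code: the module names (speaker_encoder, content_encoder, decoder) are placeholders for the architectures described in Section 5.5.2, and we assume the content encoder also consumes the source speaker embedding, as in the implementation details below.

```python
import torch

@torch.no_grad()
def convert(speaker_encoder, content_encoder, decoder, mel_src, mel_tgt):
    """Zero-shot conversion x_{u->v,i} = D(s_v, c_ui): content from the source
    utterance, style from one utterance of the target speaker."""
    s_tgt = speaker_encoder(mel_tgt)         # style embedding s_v from x_vj
    s_src = speaker_encoder(mel_src)         # source style, fed to the content encoder
    c_src = content_encoder(mel_src, s_src)  # content embedding c_ui from x_ui
    return decoder(s_tgt, c_src)             # transferred mel-spectrogram
```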

5.4 Related Work

Many-to-many Voice Conversion Traditional voice style transfer methods mainly focus on one-to-one and many-to-one conversion tasks, which can only transfer voices into one target speaking style. This constraint limits the applicability of these methods. Recently, several many-to-many voice conversion methods have been proposed to convert voices in broader application scenarios. StarGAN-VC [KKTH18] uses StarGAN [CCK+18] to enable many-to-many transfer, in which voices are fed into a single generator conditioned on the target speaker identity; a discriminator is also used to evaluate generation quality and transfer accuracy. Blow [SPP19] is a flow-based generative model [KD18] that maps voices from different speakers into the same latent space via normalizing flows [RM15]; the conversion is accomplished by transforming the latent representation back to the observation space with the target speaker's identifier. Two other many-to-many conversion models, AUTOVC [QZC+19] and AdaIN-VC [CL19], extend to zero-shot scenarios, i.e., conversion from/to a new speaker (unseen during training) based on only a few utterances. Both AUTOVC and AdaIN-VC construct an encoder-decoder framework that extracts the style and content of a speech sample into separate latent embeddings; when a new voice from an unseen speaker arrives, both its style and content embeddings can be extracted directly. However, as discussed in the Introduction, neither method has an explicit regularizer to reduce the correlation between style and content embeddings, which limits their performance.

Disentangled Representation Learning Disentangled representation learning (DRL) aims to encode data points into separate independent embedding subspaces, where different subspaces represent different data attributes. DRL methods can be classified into unsupervised and supervised approaches. Under unsupervised setups, [BHP+18], [HMP+16] and [KM18a] use latent embeddings to reconstruct the original data while keeping each dimension of the embeddings independent via correlation regularizers. This approach has been challenged by [LBL+19], on the grounds that each part of the learned embeddings may not map to a meaningful data attribute. In contrast, supervised DRL methods effectively learn meaningful disentangled embedding parts by adding different supervision to different embedding components. The correlation between the two embedding parts must still be reduced, to prevent one part from revealing information about the other. Correlation-reducing methods mainly focus on adversarial training between embedding parts [HFLM+18, KM18a] and mutual information minimization [CLGD18, CMS+20]. By applying operations such as switching and combining, one can use disentangled representations to improve empirical performance on downstream tasks, e.g., conditional generation [BHP+18], domain adaptation [GSR+20], and few-shot learning [HPR+17].

5.5 Experiments

We evaluate our IDE-VC on a real-world voice dataset under both many-to-many and zero-shot VST setups. The selected dataset is the CSTR Voice Cloning Toolkit (VCTK) [YVM+19], which includes 46 hours of audio from 109 speakers. Each speaker reads a different set of utterances, and the training voices are provided in a non-parallel manner. The audio is downsampled to 16 kHz. In the following, we first describe the evaluation metrics and the implementation details, and then analyze our model's performance relative to other baselines under the many-to-many and zero-shot VST settings.

5.5.1 Evaluation Metrics

Objective Metrics We consider two objective metrics: speaker verification accuracy (Verification) and the Mel-Cepstral Distance (Distance) [Kub93]. The speaker verification accuracy measures whether the transferred voice belongs to the target speaker. For a fair comparison, we use a third-party pre-trained speaker encoder, Resemblyzer¹, to classify the speaker identity of the transferred voices. Specifically, style centroids for speakers are learned from ground-truth voice samples. For a transferred voice, we encode it via the pre-trained speaker encoder and take the speaker with the closest style centroid as the identity prediction. For the Distance, the vanilla Mel-Cepstral Distance (MCD) cannot handle the time-alignment issue described in Section 5.2. To make reasonable comparisons between the generation and the ground truth, we apply the Dynamic Time Warping (DTW) algorithm [BC94] to automatically align the time-evolving sequences before calculating MCD. This DTW-MCD distance measures the similarity between the transferred voice and the real voice from the target speaker. Since the calculation of DTW-MCD requires parallel data, we select voices with the same content from the VCTK dataset as testing pairs. We then transfer one voice in each pair and calculate DTW-MCD with the other voice as reference.

¹ https://github.com/resemble-ai/Resemblyzer
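As a minimal sketch of both metrics (not the exact evaluation code): verification uses Resemblyzer's public VoiceEncoder API, and DTW-MCD uses librosa's DTW over MFCC features. The feature settings here (13 MFCCs with the energy coefficient dropped) are illustrative assumptions.

```python
import numpy as np
import librosa
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()  # third-party pre-trained speaker encoder

def verify_speaker(transferred_wav_path, centroids):
    """Predict identity as the speaker whose style centroid is closest
    (by cosine similarity) to the transferred voice's embedding."""
    emb = encoder.embed_utterance(preprocess_wav(transferred_wav_path))
    sims = {spk: np.dot(emb, c) / (np.linalg.norm(emb) * np.linalg.norm(c) + 1e-8)
            for spk, c in centroids.items()}
    return max(sims, key=sims.get)

def dtw_mcd(wav_ref, wav_gen, sr=16000, n_mfcc=13):
    """DTW-align two utterances frame by frame, then average the
    Mel-Cepstral Distance over the warping path."""
    mfcc_ref = librosa.feature.mfcc(y=wav_ref, sr=sr, n_mfcc=n_mfcc)[1:]  # drop energy
    mfcc_gen = librosa.feature.mfcc(y=wav_gen, sr=sr, n_mfcc=n_mfcc)[1:]
    _, wp = librosa.sequence.dtw(X=mfcc_ref, Y=mfcc_gen, metric='euclidean')
    k = (10.0 / np.log(10.0)) * np.sqrt(2.0)  # standard MCD scaling constant
    return float(np.mean([k * np.linalg.norm(mfcc_ref[:, i] - mfcc_gen[:, j])
                          for i, j in wp]))
```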

Subjective Metrics Following Wester et al. [WWY16], we use the naturalness of the speech (Naturalness) and the similarity of the transferred speech to the target identity (Similarity) as subjective metrics. For Naturalness, annotators are asked to rate each transferred speech sample on a scale from 1 to 5. For Similarity, annotators are presented with two audio samples (the converted speech and the corresponding reference) and asked to rate their similarity on a scale from 1 to 4. For both scores, higher is better. Following the setting in Blow [SPP19], we report Similarity as a total percentage derived from binarizing the ratings. The evaluation of both subjective metrics is conducted on Amazon Mechanical Turk (MTurk)². More details about the evaluation metrics are provided in the Supplementary Material.

² https://www.mturk.com/
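A small illustration of how these subjective scores can be aggregated. The binarization rule below, mapping the 4-point similarity scale to same/different at the midpoint, is our assumption in the spirit of the Blow protocol, not the thesis's exact rule.

```python
import numpy as np

def aggregate_subjective(naturalness, similarity):
    """Naturalness: mean opinion score over 1-5 ratings.
    Similarity: percentage of 1-4 ratings binarized to 'same speaker'
    (assumption: ratings of 3 or 4 count as 'same')."""
    mos = float(np.mean(naturalness))
    sim_pct = 100.0 * float(np.mean(np.asarray(similarity) >= 3))
    return mos, sim_pct
```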

5.5.2 Implementation Details

Following AUTOVC [QZC+19], our model inputs are represented via mel-spectrograms, with the number of mel-frequency bins set to 80. When voices are generated, we adopt the WaveNet vocoder [ODZ+16] pre-trained on the VCTK corpus to invert the spectrogram signal back to a waveform. The spectrogram is first upsampled with deconvolutional layers to match the sampling rate, and then a standard 40-layer WaveNet is applied to generate the speech waveform. Our model is implemented in PyTorch and takes one GPU day on an Nvidia Xp to train.
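A minimal sketch of the input featurization, assuming librosa. Only the 80 mel bins and the 16 kHz sampling rate come from the text; the FFT size and hop length below are common defaults and are our assumptions.

```python
import numpy as np
import librosa

def extract_mel(path, sr=16000, n_mels=80, n_fft=1024, hop_length=256):
    """Load audio at 16 kHz and compute a log mel-spectrogram with 80 bins."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels)
    return np.log(np.clip(mel, 1e-5, None)).T  # (frames, n_mels), log-compressed
```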

Encoder Architecture The speaker encoder consists of a two-layer long short-term memory (LSTM) network with cell size 768, and a fully-connected layer with output dimension 256. The speaker encoder is initialized with weights from a pretrained GE2E [WWPM18] encoder. The input of the content encoder is the concatenation of the mel-spectrogram signal and the corresponding speaker embedding. The content encoder consists of three convolutional layers with 512 channels and two layers of a bidirectional LSTM with cell dimension 32. Following the setup in AUTOVC [QZC+19], the forward and backward outputs of the bidirectional LSTM are downsampled by 16.
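A simplified PyTorch sketch of the two encoders follows. Layer sizes are taken from the text; the kernel size, stride-free convolutions, and the plain ::16 time slicing are our simplifications (AUTOVC's actual downsampling interleaves forward and backward outputs at fixed offsets).

```python
import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    """Style encoder: 2-layer LSTM (cell size 768) + linear projection to 256-d."""
    def __init__(self, n_mels=80, hidden=768, style_dim=256):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Linear(hidden, style_dim)

    def forward(self, mel):           # mel: (batch, frames, n_mels)
        out, _ = self.lstm(mel)
        return self.proj(out[:, -1])  # last hidden state -> style embedding

class ContentEncoder(nn.Module):
    """Content encoder: 3 conv layers (512 channels) + 2-layer BiLSTM (cell 32)."""
    def __init__(self, n_mels=80, style_dim=256, conv_ch=512, hidden=32):
        super().__init__()
        layers, in_ch = [], n_mels + style_dim
        for _ in range(3):
            layers += [nn.Conv1d(in_ch, conv_ch, kernel_size=5, padding=2),
                       nn.BatchNorm1d(conv_ch), nn.ReLU()]
            in_ch = conv_ch
        self.convs = nn.Sequential(*layers)
        self.lstm = nn.LSTM(conv_ch, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)

    def forward(self, mel, style):    # mel: (B, T, n_mels); style: (B, style_dim)
        x = torch.cat([mel, style.unsqueeze(1).expand(-1, mel.size(1), -1)], dim=-1)
        x = self.convs(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(x)
        return out[:, ::16]           # downsample BiLSTM outputs by 16 in time
```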

Decoder Architecture Following AUTOVC [QZC+19], the initial decoder consists of a three-layer convolutional neural network (CNN) with 512 channels, three LSTM layers with cell dimension 1024, and another convolutional layer that projects the LSTM output to dimension 80. To enhance the quality of the spectrogram, again following AUTOVC [QZC+19], we use a post-network consisting of five convolutional layers, with 512 channels for the first four layers and 80 channels for the last layer. The output of the post-network can be viewed as a residual signal; the final conversion signal is computed by adding the output of the initial decoder and the output of the post-network. The reconstruction loss is applied to both the output of the initial decoder and the final conversion signal.
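The residual refinement can be summarized with the sketch below. The post-network layer sizes follow the text; the kernel size 5 and tanh activations are assumptions borrowed from common Tacotron-style post-nets, and initial_decoder stands in for the CNN+LSTM stack described above.

```python
import torch
import torch.nn as nn

class PostNet(nn.Module):
    """Five conv layers: 512 channels for the first four, 80 for the last."""
    def __init__(self, n_mels=80, ch=512):
        super().__init__()
        dims = [n_mels, ch, ch, ch, ch, n_mels]
        self.convs = nn.ModuleList(
            nn.Conv1d(dims[i], dims[i + 1], kernel_size=5, padding=2)
            for i in range(5))

    def forward(self, x):                 # x: (B, n_mels, T)
        for conv in self.convs[:-1]:
            x = torch.tanh(conv(x))
        return self.convs[-1](x)          # residual over the initial decoding

def decode(initial_decoder, postnet, style, content):
    """Initial decoding followed by residual post-network refinement;
    the reconstruction loss is applied to both outputs."""
    mel_init = initial_decoder(style, content)   # (B, n_mels, T)
    mel_final = mel_init + postnet(mel_init)
    return mel_init, mel_final
```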

Approximation Network Architecture As described in Section 5.3.3, minimizing

the mutual information between style and content embeddings requires an auxiliary

variational approximation qθ(s|c). For implementation, we parameterize the varia-

tional distribution within the Gaussian family q_θ(s|c) = N(µ_θ(c), σ²_θ(c) · I),


Table 5.1: Many-to-many VST evaluation results. For all metrics except Distance, higher is better.

                    Objective                        Subjective
Metric           Distance   Verification [%]   Naturalness [1–5]   Similarity [%]
StarGAN            6.73          71.1                2.77               51.5
AdaIN-VC           6.98          85.5                2.19               50.8
AUTOVC             6.73          89.9                3.25               55.0
Blow               8.08           -                  2.11               10.8
IDE-VC (Ours)      6.70          92.2                3.26               68.5

where the mean µ_θ(·) and variance σ²_θ(·) are two-layer fully-connected networks with

tanh(·) as the activation function. With the Gaussian parameterization, the likeli-

hoods in objective I3 can be calculated in closed form.
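A minimal PyTorch sketch of this Gaussian parameterization follows; the embedding and hidden dimensions are assumptions.

```python
# Sketch of the variational approximation q_theta(s|c) with tanh-activated
# two-layer networks for the mean and (log-)variance; sizes are assumptions.
import torch
import torch.nn as nn

class GaussianApprox(nn.Module):
    def __init__(self, content_dim=64, style_dim=256, hidden=512):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(content_dim, hidden), nn.Tanh(),
                                nn.Linear(hidden, style_dim))
        self.logvar = nn.Sequential(nn.Linear(content_dim, hidden), nn.Tanh(),
                                    nn.Linear(hidden, style_dim))

    def log_prob(self, s, c):
        # Closed-form Gaussian log-likelihood log q_theta(s|c),
        # up to an additive constant.
        mu, logvar = self.mu(c), self.logvar(c)
        return (-0.5 * ((s - mu) ** 2 / logvar.exp() + logvar)).sum(-1)
```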

5.5.3 Style Transfer Performance

For the many-to-many VST task, we randomly select 10% of the sentences for val-

idation and 10% of the sentences for testing from the VCTK dataset, following the

setting in Blow [SPP19]. The rest of the data are used for training in a non-parallel

scheme. For evaluation, we select voice pairs from the testing set, in which each pair
of voices has the same content but comes from different speakers. In each testing
pair, we transfer one voice to the other voice's speaking style, and then compare
the transferred voice with the other voice to evaluate model performance. We compare
our model against four competitive baselines: Blow [SPP19]³, AUTOVC [QZC+19],
AdaIN-VC [CL19], and StarGAN-VC [KKTH18]. The detailed implementations of
these four baselines are provided in the Supplementary Material.

Table 5.1 shows the subjective and objective evaluation for the many-to-many VST

task. Both methods with the encoder-decoder framework, AdaIN-VC and AUTOVC,

³For Blow, we use the official implementation available on GitHub (https://github.com/joansj/blow). We report the best result we could obtain, after training for 100 epochs (11.75 GPU days on an Nvidia V100).


Table 5.2: Zero-shot VST evaluation results. For all metrics except Distance, higher is better.

                    Objective                        Subjective
Metric           Distance   Verification [%]   Naturalness [1–5]   Similarity [%]
AdaIN-VC           6.37          76.7                2.67               68.4
AUTOVC             6.68          60.0                2.19               58.6
IDE-VC (Ours)      6.31          81.1                3.33               76.4

have competitive results. However, our IDE-VC outperforms the other baselines on

all metrics, demonstrating that the style-content disentanglement in the latent space

improves the performance of the encoder-decoder framework.

For the zero-shot VST task, we use the same train-validation dataset split as in

the many-to-many setup. The testing data are selected to guarantee that no test

speaker has any utterance in the training set. We compare our model with the only

two baselines, AUTOVC [QZC+19] and AdaIN-VC [CL19], that are able to handle

voice transfer for previously unseen speakers. We use the same implementations
of AUTOVC and AdaIN-VC as in the many-to-many VST. The evaluation results of
zero-shot VST are shown in Table 5.2; among the two baselines, AdaIN-VC performs
better than AUTOVC overall. Our IDE-VC outperforms both baseline methods on
all metrics. Since all three tested models use encoder-decoder transfer frameworks,
the superior performance of IDE-VC indicates the effectiveness of our disentangled
representation learning scheme. More evaluation details are provided in the
Supplementary Material.

5.5.4 Disentanglement Discussion

Besides the performance comparison with other VST baselines, we demonstrate the

capability of our information-theoretic disentangled representation learning scheme.

First, we conduct a t-SNE [MH08] visualization of the latent spaces of the IDE-


Figure 5.2: Left: t-SNE visualization of speaker embeddings. Right: t-SNE visualization of content embeddings.

Table 5.3: Speaker identity prediction accuracy on content embeddings.

                 AUTOVC   AdaIN-VC   IDE-VC
Accuracy [%]       9.5      19.0       8.1

VC model in Figure 5.2. The embeddings are extracted from the voice samples of 10

different speakers. As shown in the left of Figure 5.2, style embeddings from the same

speaker are well clustered, and style embeddings from different speakers separate in

a clean manner. The clear pattern indicates our style encoder Es can verify the

speakers’ identity from the voice samples. In contrast, the content embeddings (in

the right of Figure 5.2) are indistinguishable for different speakers, which means our

content encoder Ec successfully eliminates speaker-related information and extracts

rich semantic content from the data.

We also empirically evaluate the disentanglement by predicting the speakers'

identity based on only the content embeddings. A two-layer fully-connected network

is trained on the testing set with a content embedding as input, and the corresponding

speaker identity as output. We compare our IDE-VC with AUTOVC and AdaIN-

VC, which also output content embeddings. The classification results are shown

in Table 5.3. Our IDE-VC reaches the lowest classification accuracy, indicating
that the content embeddings learned by IDE-VC contain the least speaker-related


information. Therefore, our IDE-VC learns disentangled representations with high

quality compared with other baselines.
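For illustration, a sketch of such a probe follows; the hidden width, optimizer, and epoch count are assumptions.

```python
# Sketch of the speaker-identity probe: a two-layer MLP trained to predict
# speaker IDs from content embeddings; lower accuracy indicates less
# speaker information leaking into the content space.
import torch
import torch.nn as nn

def probe_accuracy(content_emb, speaker_ids, n_speakers, epochs=50):
    probe = nn.Sequential(nn.Linear(content_emb.size(1), 256), nn.ReLU(),
                          nn.Linear(256, n_speakers))
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(probe(content_emb), speaker_ids)
        loss.backward()
        opt.step()
    with torch.no_grad():
        pred = probe(content_emb).argmax(-1)
    return (pred == speaker_ids).float().mean().item()
```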

5.5.5 Ablation Study

Table 5.4: Ablation study with different training losses. Performance is measured by objective metrics.

               Distance   Verification [%]
Without I1       9.81          11.1
Without I3       6.73          89.4
IDE-VC           5.66          92.2

Moreover, we have conducted an ablation study that addresses the performance effects of the different learning losses in (5.11), with results shown in Table 5.4. We

compare our model with two models trained by part of the loss function in (5.11),

while keeping the other training setups unchanged, including the model structure.

From the results, when the model is trained without the style encoder loss term I1, a
transferred voice is still generated, but with a large distance to the ground truth. The

verification accuracy also significantly decreases with no speaker-related information

utilized. When the disentangling term I3 is removed, the model still reaches compet-

itive performance, because the style encoder Es and decoder D are well trained by I1

and I2. However, when adding term I3, we disentangle the style and content spaces,

and improve the transfer quality with higher verification accuracy and less distortion.

The performance without term I2 is not reported, because the model cannot even

generate fluent speech without the reconstruction loss.


5.6 Conclusions

We have improved the encoder-decoder voice style transfer framework by disentangled

latent representation learning. To effectively induce the style and content informa-

tion of speech into independent embedding latent spaces, we minimize a sample-based

mutual information upper bound between style and content embeddings. The disen-

tanglement of the two embedding spaces ensures the voice transfer accuracy without

information revealed from each other. We have also derived two new multi-group

mutual information lower bounds, which are maximized during training to enhance

the representativeness of the latent embeddings. On the real-world VCTK dataset,

our model outperforms previous works under both many-to-many and zero-shot voice

style transfer setups. Our model can be naturally extended to other style transfer

tasks modeling time-evolving sequences, e.g., video and music style transfer. More-

over, our general multi-group mutual information lower bounds have broader poten-

tial applications in other representation learning tasks.


Chapter 6

Improving Fairness of Text

Understanding Models

6.1 Introduction

Text encoders, which map raw-text data into low-dimensional embeddings, have be-

come one of the fundamental tools for a wide range of tasks in natural language processing

[KZS+15b, CMS+20]. With the development of deep learning, large-scale neural

sentence encoders pretrained on massive text corpora, such as Infersent [CKS+17c],

ELMo [PNI+18], BERT [DCLT19], and GPT [RNSS18], have become the mainstream
approach for extracting sentence-level text representations, and have shown desirable
performance on many NLP downstream tasks [MYCG19, SLW+19, ZKW+19]. Although

these pretrained models have been studied comprehensively from many perspectives,

such as performance [JCL+20], efficiency [SDCW19], and robustness [LOG+19], the

fairness of pretrained text encoders has not received significant research attention.

The fairness issue is also broadly recognized as social bias, which denotes unbalanced
model behavior with respect to socially sensitive topics, such as gender, race, and
religion [LLZ+20]. For data-driven NLP models, social bias is an intrinsic problem
mainly caused by unbalanced text corpora [BCZ+16].

To quantitatively measure the bias degree of models, prior work proposed several

statistical tests [CBN17, CM19, BAHAZ19], mostly focusing on word-level embedding

models. To evaluate the sentence-level bias in the embedding space, [MWB+19]

extended the Word Embedding Association Test (WEAT) [CBN17] into a Sentence


Encoder Association Test (SEAT). Based on the SEAT test, [MWB+19] claimed the

existence of social bias in the pretrained sentence encoders.

Although related works have discussed the measurement of social bias in sentence

embeddings, debiasing pretrained sentence encoders remains a challenge. Previous

word embedding debiasing methods [BCZ+16, KB19, MLTB19] offer limited assistance
for sentence-level debiasing, because even if the social bias is eliminated at the

word level, the sentence-level bias can still be caused by the unbalanced combination

of words in the training text. Besides, retraining a state-of-the-art sentence encoder

for debiasing requires a massive amount of computational resources, especially for

large-scale deep models like BERT [DCLT19] and GPT [RNSS18]. To the best of

our knowledge, [LLZ+20] proposed the only sentence-level debiasing method (Sent-

Debias) for pretrained text encoders, in which the embeddings are revised by sub-

tracting the latent biased direction vectors learned by Principal Component Analysis

(PCA) [WEG87]. However, Sent-Debias makes a strong assumption on the linearity

of the bias in the sentence embedding space. Further, the calculation of bias directions

depends highly on the embeddings extracted from the training data and the number

of principal components, preventing the method from adequate generalization.

In this chapter, we propose the first neural debiasing method for pretrained sen-
tence encoders. For a given pretrained encoder, our method learns a fair filter (Fair-

Fil) network, whose inputs are the original embeddings of the encoder, and out-

puts are the debiased embeddings. Inspired by the multi-view contrastive learning

[CKNH20], for each training sentence, we first generate an augmentation that has

the same semantic meaning but in a different potential bias direction. We con-

trastively train our FairFil by maximizing the mutual information between the de-

biased embeddings of the original sentences and corresponding augmentations. To

further eliminate bias from sensitive words in sentences, we introduce a debiasing


regularizer, which minimizes the mutual information between debiased embeddings

and the sensitive words’ embeddings. In the experiments, our FairFil outperforms

Sent-Debias [LLZ+20] in terms of the fairness and the representativeness of debiased

embeddings, indicating our FairFil not only effectively reduces the social bias in the

sentence embeddings, but also successfully preserves the rich semantic meaning of

input text.

6.2 Method

Suppose E(·) is a pretrained sentence encoder, which can encode a sentence x into

low-dimensional embedding z = E(x). Each sentence x = (w1, w2, . . . , wL) is a

sequence of words. The embedding space of z has been recognized to have social

bias in a series of studies [MWB+19, KVP+19, LLZ+20]. To eliminate the social

bias in the embedding space, we aim to learn a fair filter network f(·) on top of the

sentence encoder E(·), such that the output embedding of our fair filter d = f(z)

can be debiased. To train the fair filter, we design a multi-view contrastive learning

framework, which consists of three steps. First, for each input sentence x, we generate

an augmented sentence x′ that has the same semantic meaning as x but in a different

potential bias direction. Then, we maximize the mutual information between the

debiased embedding d = f(E(x)) and the augmented embedding d′ = f(E(x′)) with the

InfoNCE [OLV18] contrastive loss. Further, we design a debiasing regularizer to

minimize the mutual information between d and sensitive attribute words in x. In

the following, we discuss these three steps in detail.

6.2.1 Data Augmentations with Sensitive Attributes

We first describe the sentence data augmentation process for our FairFil contrastive

learning. Denote a social sensitive topic as T = {D1,D2, . . . ,DK}, where Dk (k =


1, . . . , K) is one of the potential bias directions under the topic. For example, if T

represents the sensitive topic “gender”, then T consists of two potential bias directions

{D1,D2} = {“male”, “female”}. Similarly, if T is set as the major “religions” of the

world, then T could contain {D1,D2,D3,D4} = {“Christianity”, “Islam”, “Judaism”,

“Buddhism”} as four components.

For a given social sensitive topic T = {D1, . . . ,DK}, if a word w is related to
one of the potential bias directions Dk (denoted as w ∈ Dk), we call w a sensitive
attribute word of Dk (also called a bias attribute word in [LLZ+20]). For a sensitive
attribute word w ∈ Dk, suppose we can always find another sensitive attribute word
u ∈ Dj, such that w and u have equivalent semantic meaning but lie in different
bias directions. Then we call u a replaceable word of w in direction Dj, and denote

as u = rj(w). For the topic “gender” = {“male”, “female”}, the word w = “boy” is

in the potential bias direction D1 = “male”; a replaceable word of “boy” in “female”

direction is r2(w) = “girl” ∈ D2.

With the above definitions, for each sentence x, we generate an augmented sen-

tence x′ such that x′ has the same semantic meaning as x but in a different potential

bias direction. More specifically, for a sentence x = (w1, w2, . . . , wL), we first find

the sensitive word positions as an index set P, such that each wp (p ∈ P) is a sensitive
attribute word in direction Dk. We further make the reasonable assumption that
the embedding bias in direction Dk is only caused by the sensitive words {wp}p∈P in
x. To sample an augmentation of x, we first select another potential bias direction

Dj, and then replace all sensitive attribute words by their replaceable words in the

direction Dj. That is, x′ = {v1, v2, . . . , vL}, where vl = wl if l /∈ P , and vl = rj(wl)

if l ∈ P . In Table 6.1, we provide an example for sentence augmentation under the

“gender” topic.


Table 6.1: Example of generating an augmented sentence under the sensitive topic “gender”.

            Bias direction   Sensitive attribute words   Text content
Original    male             he, his                     {He} is good at playing {his} basketball.
Augmented   female           she, her                    {She} is good at playing {her} basketball.
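A minimal sketch of this augmentation step is given below; the tiny word map is an illustrative assumption, whereas the actual pre-defined word lists follow Sent-Debias [LLZ+20].

```python
# Sketch of the sentence augmentation of Section 6.2.1: swap each sensitive
# attribute word for its replaceable word in another bias direction.
# The word map below is a tiny illustrative assumption.
REPLACEABLE = {"he": "she", "his": "her", "him": "her",
               "man": "woman", "boy": "girl"}
# Make the mapping symmetric so both directions can be sampled.
REPLACEABLE.update({v: k for k, v in list(REPLACEABLE.items())})

def augment(sentence):
    words = sentence.lower().split()
    return " ".join(REPLACEABLE.get(w, w) for w in words)

# augment("he is good at playing his basketball")
# -> "she is good at playing her basketball"
```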

6.2.2 Contrastive Learning Framework

After obtaining the sentence pair (x,x′) with the augmentation strategy from Sec-

tion 6.2.1, we construct a contrastive learning framework to learn our debiasing fair

filter f(·). As shown in Figure 6.1(a), our framework consists of the following

two steps:

(1) We encode sentences (x,x′) into embeddings (z, z′) with the pretrained en-

coder E(·). Since x and x′ have the same meaning but different potential bias direc-

tions, the embeddings (z, z′) will have different bias directions, which are caused by

the sensitive attribute words in x and x′.

(2) We then feed the sentence embeddings (z, z′) through our fair filter f(·) to

obtain the debiased embedding outputs (d,d′). Ideally, d and d′ should represent

the same semantic meaning without social bias. Inspired by SimCLR [CKNH20],

we encourage the overlapped semantic information between d and d′ by maximizing

their mutual information I(d;d′).

However, the calculation of I(d;d′) is practically difficult because only embedding

samples of d and d′ are available. Therefore, we use the InfoNCE mutual information

estimator [OLV18] to maximize a lower bound of I(d;d′) instead. Based on a

learnable score function g(·, ·), the contrastive InfoNCE estimator is calculated within

a batch of samples {(d_i, d'_i)}_{i=1}^N:

$$I_{\text{NCE}} = \frac{1}{N} \sum_{i=1}^{N} \log \frac{\exp\big(g(d_i, d'_i)\big)}{\frac{1}{N}\sum_{j=1}^{N} \exp\big(g(d_i, d'_j)\big)}. \qquad (6.1)$$
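A minimal PyTorch sketch of this batch estimator follows; the two-layer score network mirrors the description in Section 6.4.3, while the hidden width is an assumption.

```python
# Sketch of the batch InfoNCE estimator in Eq. (6.1); g scores a
# concatenated pair of debiased embeddings.
import math
import torch
import torch.nn as nn

class InfoNCE(nn.Module):
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))

    def forward(self, d, d_prime):                  # both: (N, dim)
        N = d.size(0)
        # Score every (d_i, d'_j) pair, giving an (N, N) matrix of g values.
        pairs = torch.cat([d.unsqueeze(1).expand(-1, N, -1),
                           d_prime.unsqueeze(0).expand(N, -1, -1)], dim=-1)
        scores = self.g(pairs).squeeze(-1)          # scores[i, j] = g(d_i, d'_j)
        # Positive-pair score minus the log of the batch mean over exp scores.
        return (scores.diag() - scores.logsumexp(dim=1) + math.log(N)).mean()
```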


Figure 6.1: (a) The contrastive learning framework of FairFil; (b) illustration of the information in d and d′.

By maximizing I_NCE, we enlarge the difference between the positive pair score
g(d_i, d'_i) and the negative pair scores g(d_i, d'_j), so that d_i can share more semantic
information with d'_i than with the other embeddings d'_j (j ≠ i).

6.2.3 Debiasing Regularizer

Practically, the contrastive learning framework in Section 6.2.2 can already show

encouraging debiasing performance (as shown in the Experiments). However, the

embedding d can contain extra biased information from z, which maximizing
I(d;d′) alone fails to eliminate. To discourage extra bias in d, we introduce a debiasing

regularizer which minimizes the mutual information between embedding d and the

potential bias from embedding z. As discussed in Section 6.2.1, in our framework the

potential bias of z is assumed to come from the sensitive attribute words in x. There-

fore, we should reduce the bias word information from the debiased representation d.

Let w_p denote the embedding of a sensitive attribute word w_p in sentence x. The word
embedding w_p can always be obtained from the pretrained text encoder [BB19]. We
then minimize the mutual information I(w_p; d), using the CLUB mutual information
upper bound [CHD+20] to estimate I(w_p; d) with embedding samples. Given
a batch of embedding pairs {(d_i, w^p_i)}_{i=1}^N, we can calculate the debiasing regularizer

94

Page 110: Improving Natural Language Understanding via Contrastive

as:

$$I_{\text{CLUB}} = \frac{1}{N} \sum_{i=1}^{N} \Big[ \log q_\theta(w^p_i \mid d_i) - \frac{1}{N} \sum_{j=1}^{N} \log q_\theta(w^p_j \mid d_i) \Big], \qquad (6.2)$$

where q_θ is a variational approximation to the ground-truth conditional distribution
p(w|d). We parameterize q_θ with another neural network. As proved in [CHD+20],
the better q_θ(w|d) approximates p(w|d), the more accurately I_CLUB serves as a
mutual information upper bound. Therefore, besides the loss in (6.2), we also maximize
the log-likelihood of q_θ(w|d) with samples {(d_i, w^p_i)}_{i=1}^N.
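A hedged PyTorch sketch of this regularizer, with q_θ(w|d) parameterized as a diagonal Gaussian as in Section 6.4.3 (the hidden width is an assumption):

```python
# Sketch of the CLUB upper bound in Eq. (6.2), with q_theta(w|d)
# parameterized as a diagonal Gaussian.
import torch
import torch.nn as nn

class CLUB(nn.Module):
    def __init__(self, d_dim, w_dim, hidden=256):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(d_dim, hidden), nn.ReLU(),
                                nn.Linear(hidden, w_dim))
        self.logvar = nn.Sequential(nn.Linear(d_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, w_dim))

    def log_prob(self, d, w):
        # Gaussian log-likelihood log q_theta(w|d), up to an additive constant.
        mu, logvar = self.mu(d), self.logvar(d)
        return (-0.5 * ((w - mu) ** 2 / logvar.exp() + logvar)).sum(-1)

    def forward(self, d, w):                        # the I_CLUB estimate
        positive = self.log_prob(d, w)              # log q(w_i | d_i)
        # Cross terms log q(w_j | d_i), averaged over j for each i.
        negative = self.log_prob(d.unsqueeze(1), w.unsqueeze(0)).mean(dim=1)
        return (positive - negative).mean()

    def likelihood_loss(self, d, w):
        # Minimized to fit q_theta before each CLUB estimate.
        return -self.log_prob(d, w).mean()
```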

Based on the above sections, the overall learning scheme of our fair filter (FairFil)
is described in Algorithm 3.

Algorithm 3 Updating the FairFil with a sample batch

  Begin with the pretrained text encoder E(·) and a batch of sentences {x_i}_{i=1}^N.
  Find the sensitive attribute words {w_p} and their corresponding embeddings {w_p}.
  Generate the augmentation x'_i from x_i by replacing {w_p} with {r_j(w_p)}.
  Encode (x_i, x'_i) into embeddings d_i = f(E(x_i)), d'_i = f(E(x'_i)).
  Calculate I_NCE with {(d_i, d'_i)}_{i=1}^N and the score function g.
  if adding the debiasing regularizer then
      Update the variational approximation q_θ(w|d) by maximizing its log-likelihood with {(d_i, w^p_i)}.
      Calculate I_CLUB with q_θ(w|d) and {(d_i, w^p_i)}_{i=1}^N.
      Set the learning loss L = −I_NCE + β I_CLUB.
  else
      Set the learning loss L = −I_NCE.
  end if
  Update the FairFil f and the score function g by gradient descent with respect to L.

Also, we provide an intuitive explanation of the two loss

terms in our framework. In Figure 6.1(b), the blue and red circles represent d and

d′, respectively, in the embedding space. The intersection I(d;d′) is the common

semantic information extracted from sentences x and x′, while the two shadow parts

are the extra bias. Note that perfectly debiased embeddings lead to coincident
circles. By maximizing the I_NCE term, we enlarge the overlapped area of d and d′; by
minimizing I_CLUB, we shrink the biased shadow parts.


6.3 Related Work

6.3.1 Bias in Natural Language Processing

Social bias has recently been recognized as an important issue in natural language

processing (NLP) systems. The studies on bias in NLP are mainly delineated into two

categories: bias in the embedding spaces, and bias in downstream tasks [BBDIW20].

For bias in downstream tasks, the analyses cover comprehensive topics, including

machine translation [SSZ19], language modeling [BB19], sentiment analysis [KM18b]

and toxicity detection [DLS+18]. The social bias in embedding spaces has been stud-

ied from two important perspectives: bias measurement and debiasing methods.

To measure the bias in an embedding space, [CBN17] proposed a Word Embedding

Association Test (WEAT), which compares the similarity between two sets of tar-

get words and two sets of attribute words. [MWB+19] further extended the WEAT

to a Sentence Encoder Association Test (SEAT), which replaces the word embed-

dings by sentence embeddings encoded from pre-defined biased sentence templates.

For debiasing methods, most of the prior works focus on word-level representations

[BCZ+16, BB19]. The only sentence-level debiasing method is proposed by [LLZ+20],

which learns bias directions by PCA and subtracts them in the embedding space.

6.3.2 Contrastive Learning

Contrastive learning is a broad class of training strategies that learns meaningful

representations by making positive and negative embedding pairs more distinguish-

able. Usually, contrastive learning requires a pairwise embedding critic as a simi-

larity/distance measure for data pairs. Then the learning objective is constructed by max-

imizing the margin between the critic values of positive data pairs and negative

data pairs. Previously, contrastive learning has shown encouraging performance in

many tasks, including metric learning [WBS06, DKJ+07], word representation learn-


ing [MCCD13], graph learning [TQW+15, GL16], etc. Recently, contrastive learning

has been applied to the unsupervised visual representation learning task, and signif-

icantly reduced the performance gap between supervised and unsupervised learning

[HFW+20, CKNH20, QMG+20]. Among these unsupervised methods, [CKNH20]

proposed a simple multi-view contrastive learning framework (SimCLR). For each

image data, SimCLR generates two augmented images, and then the mutual infor-

mation of the two augmentation embeddings is maximized within a batch of training

data.

6.4 Experiments

We first describe the experimental setup in detail, including the pretrained encoders,

the training of FairFil, and the downstream tasks. The results of our FairFil are

reported and analyzed, along with the previous Sent-Debias method. In general, we

evaluate our neural debiasing method from two perspectives: (1) fairness: we com-

pare the bias degree of the original and debiased sentence embeddings for debiasing

performance; and (2) representativeness: we apply the debiased embeddings into

downstream tasks, and compare the performance with original embeddings.

6.4.1 Bias Evaluation Metric

To evaluate the bias in sentence embeddings, we use the Sentence Encoder Association

Test (SEAT) [MWB+19], which is an extension of the Word Embedding Association

Test (WEAT) [CBN17]. The WEAT test measures the bias in word embeddings by

comparing the distances of two sets of target words to two sets of attribute words.

More specifically, denote X and Y as two sets of target word embeddings (e.g., X

includes “male” words such as “boy” and “man”; Y contains “female” words like

“girl” and “woman”). The attribute sets A and B are selected from some social


concepts that should be “equal” to X and Y (e.g., career or personality words).

Then the bias degree w.r.t. attributes (A, B) of each word embedding t is defined as:

$$s(t, \mathcal{A}, \mathcal{B}) = \mathrm{mean}_{a \in \mathcal{A}} \cos(t, a) - \mathrm{mean}_{b \in \mathcal{B}} \cos(t, b), \qquad (6.3)$$

where cos(·, ·) is the cosine similarity. Based on (6.3), the normalized WEAT effect size is:

$$d_{\text{WEAT}} = \frac{\mathrm{mean}_{x \in \mathcal{X}}\, s(x, \mathcal{A}, \mathcal{B}) - \mathrm{mean}_{y \in \mathcal{Y}}\, s(y, \mathcal{A}, \mathcal{B})}{\mathrm{std}_{t \in \mathcal{X} \cup \mathcal{Y}}\, s(t, \mathcal{A}, \mathcal{B})}. \qquad (6.4)$$

The SEAT test extends WEAT by replacing the word embeddings with sentence

embeddings. Both target words and attribute words are converted into sentences with

several semantically bleached sentence templates (e.g., “This is <word>”). Then the

SEAT statistic is similarly calculated with (6.4) based on the embeddings of converted

sentences. The closer the effect size is to zero, the more fair the embeddings are.

Therefore, we report the absolute effect size as the bias measure.
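For concreteness, a minimal numpy sketch of Eqs. (6.3)–(6.4), where each argument is a matrix with one embedding per row:

```python
# Sketch of the WEAT/SEAT effect size: X, Y are target embedding matrices,
# A, B are attribute embedding matrices (one embedding per row).
import numpy as np

def cos(u, V):
    # Cosine similarity between vector u and each row of V.
    return (V @ u) / (np.linalg.norm(V, axis=1) * np.linalg.norm(u))

def s(t, A, B):
    return cos(t, A).mean() - cos(t, B).mean()          # Eq. (6.3)

def weat_effect_size(X, Y, A, B):
    s_X = np.array([s(x, A, B) for x in X])
    s_Y = np.array([s(y, A, B) for y in Y])
    s_all = np.concatenate([s_X, s_Y])
    return (s_X.mean() - s_Y.mean()) / s_all.std()      # Eq. (6.4)
```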

6.4.2 Pretrained Encoders

We test our neural debiasing method on BERT [DCLT19]. Since the pretrained

BERT requires additional fine-tuning for downstream tasks, we report

the performance of our FairFil under two scenarios: (1) pretrained BERT: we di-

rectly learn our FairFil network based on pretrained BERT without any additional

fine-tuning; and (2) BERT post tasks: we fix the parameters of the FairFil network

learned on pretrained BERT, and then fine-tune the BERT+FairFil together on task-

specific data. Note that during fine-tuning, our FairFil is no longer updated, which
ensures a fair comparison with Sent-Debias [LLZ+20].

For the downstream tasks of BERT, we follow the setup from Sent-Debias [LLZ+20]

and conduct experiments on the following three downstream tasks: (1) SST-2: A

sentiment classification task on the Stanford Sentiment Treebank (SST-2) dataset


[SPW+13], on which sentence embeddings are used to predict the corresponding sen-

timent labels; (2) CoLA: a grammatical acceptability judgment task on the Corpus
of Linguistic Acceptability (CoLA) [WSB19]; and

(3) QNLI: A binary question answering task on the Question Natural Language

Inference (QNLI) dataset [WSM+18].

6.4.3 Training of FairFil

We parameterize the fair filter network as a one-layer fully-connected neural network
with the ReLU activation function. The score function g in the InfoNCE
estimator is a two-layer fully-connected network with one-dimensional output.
The variational approximation q_θ in the CLUB estimator is parameterized as a
multivariate Gaussian distribution q_θ(w|d) = N(µ(d), σ²(d)), where µ(·) and σ(·) are
also two-layer fully-connected neural nets. The batch size is set to 128, the learning
rate is 1 × 10⁻⁵, and we train the fair filter for 10 epochs.
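Putting the pieces together, a hedged sketch of one FairFil update step using the InfoNCE and CLUB modules sketched earlier; the 768-dimensional BERT embeddings, the β value, and the second optimizer's learning rate are assumptions.

```python
# Sketch of one FairFil training step, reusing the InfoNCE and CLUB
# sketches above; sizes, beta, and learning rates are assumptions.
import torch
import torch.nn as nn

fairfil = nn.Sequential(nn.Linear(768, 768), nn.ReLU())  # the fair filter f
info_nce = InfoNCE(dim=768)          # from the sketch after Eq. (6.1)
club = CLUB(d_dim=768, w_dim=768)    # from the sketch after Eq. (6.2)
opt = torch.optim.Adam(list(fairfil.parameters()) + list(info_nce.parameters()),
                       lr=1e-5)
q_opt = torch.optim.Adam(club.parameters(), lr=1e-4)

def train_step(z, z_prime, w_p, beta=1.0):
    # z, z_prime: encoder embeddings of a sentence batch and its
    # augmentations; w_p: embeddings of the sensitive attribute words.
    d, d_prime = fairfil(z), fairfil(z_prime)
    # First fit the variational approximation q_theta(w|d).
    q_opt.zero_grad()
    club.likelihood_loss(d.detach(), w_p).backward()
    q_opt.step()
    # Then update the filter: maximize InfoNCE, minimize CLUB.
    opt.zero_grad()
    loss = -info_nce(d, d_prime) + beta * club(d, w_p)
    loss.backward()
    opt.step()
    return loss.item()
```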

For an appropriate comparison, we follow the setup of Sent-Debias [LLZ+20]

and select the same training data for the training of FairFil. The training corpora

consist of 183,060 sentences from the following five datasets: WikiText-2 [MXBS17],

Stanford Sentiment Treebank [SPW+13], Reddit [VPSS17], MELD [PHM+19] and

POM [PSC+14]. Following [LLZ+20], we mainly select “gender” as the sensitive

topic T , and use the same pre-defined word sets of sensitive attribute words and

their replaceable words as Sent-Debias did. The word embeddings for training the

debiasing regularizer are selected from the token embeddings of the pretrained BERT.

6.4.4 Debiasing Results

In Tables 6.2 and 6.3 we report the evaluation results of debiased embeddings on

both the absolute SEAT effect size and the downstream classification accuracy. For


Table 6.2: Performance of debiased embeddings on pretrained BERT and BERT post SST-2.

                          Pretrained BERT                     BERT post SST-2
                          Origin  Sent-D  FairF−  FairF       Origin  Sent-D  FairF−  FairF
Names, Career/Family      0.477   0.096   0.218   0.182       0.036   0.109   0.237   0.218
Terms, Career/Family      0.108   0.437   0.086   0.076       0.010   0.057   0.376   0.377
Terms, Math/Arts          0.253   0.194   0.133   0.124       0.219   0.221   0.301   0.263
Names, Math/Arts          0.254   0.194   0.101   0.082       1.153   0.755   0.084   0.099
Terms, Science/Arts       0.399   0.075   0.218   0.204       0.103   0.081   0.133   0.127
Names, Science/Arts       0.636   0.540   0.320   0.235       0.222   0.047   0.017   0.005
Avg. Abs. Effect Size     0.354   0.256   0.179   0.150       0.291   0.212   0.191   0.182
Classification Acc.       -       -       -       -           92.7    89.1    91.7    91.6

Table 6.3: Performance of debiased embeddings on BERT post CoLA and BERT post QNLI.

                          BERT post CoLA                      BERT post QNLI
                          Origin  Sent-D  FairF−  FairF       Origin  Sent-D  FairF−  FairF
Names, Career/Family      0.009   0.149   0.273   0.034       0.261   0.054   0.196   0.103
Terms, Career/Family      0.199   0.186   0.156   0.119       0.155   0.004   0.050   0.206
Terms, Math/Arts          0.268   0.311   0.008   0.092       0.584   0.083   0.306   0.323
Names, Math/Arts          0.150   0.308   0.060   0.101       0.581   0.629   0.168   0.288
Terms, Science/Arts       0.425   0.163   0.245   0.249       0.087   0.716   0.500   0.245
Names, Science/Arts       0.032   0.192   0.102   0.127       0.521   0.443   0.378   0.167
Avg. Abs. Effect Size     0.181   0.217   0.141   0.120       0.365   0.321   0.266   0.222
Classification Acc.       57.6    55.4    56.5    56.5        91.3    90.6    91.0    90.8

the SEAT test, we follow the setup in [LLZ+20], and test the sentence templates of

Terms/Names under different domains designed by [CBN17]. The column name Ori-

gin refers to the original BERT results, and Sent-D is short for Sent-Debias [LLZ+20].

FairFil− and FairFil (as FairF− and FairF in the tables) are our method without/with

the debiasing regularizer of Section 6.2.3. The best effect size (lower is better) and
classification accuracy (higher is better) among Sent-D, FairFil−, and FairFil are
marked in bold. Since the pretrained BERT does not correspond to any

downstream task, the classification accuracy is not reported for it.

From the SEAT test results, our contrastive learning framework effectively reduces


Table 6.4: Comparison of average debiasing performance on pretrained BERT.

Method                       Bias Degree
BERT origin [DCLT19]            0.354
FastText [BGJM17]               0.565
BERT word [BCZ+16]              0.861
BERT simple [MWB+19]            0.298
Sent-Debias [LLZ+20]            0.256
FairFil− (Ours)                 0.179
FairFil (Ours)                  0.150

the gender bias for both pretrained BERT and fine-tuned BERT under most test

scenarios. Comparing with Sent-Debias, our FairFil reaches a lower bias degree on

the majority of the individual SEAT tests. In terms of average absolute effect
size, our FairFil outperforms Sent-Debias by a significant margin. Moreover, our

FairFil achieves higher downstream classification accuracy than Sent-Debias, which

indicates learning neural filter networks can preserve more semantic meaning than

subtracting bias directions learned from PCA.

For the ablation study, we also report the results of FairFil without the debiasing

regularizer, denoted FairF−. With only the contrastive learning framework, FairF−
already reduces the bias effectively, and even achieves a better effect size than
FairF on some of the SEAT tests. With the debiasing regularizer, FairF achieves
better average SEAT effect sizes but slightly lower downstream performance.
Overall, the results of FairF and FairF− reflect a trade-off between the
fairness and the representativeness of the filter network.

We also compare the debiasing performance on a broader class of baselines, in-

cluding word-level debiasing methods, and report the average absolute SEAT effect

size on the pretrained BERT encoder in Table 6.4. Both FairF− and FairF achieve a lower bias

degree than other baselines. The word-level debiasing methods (FastText [BGJM17]

and BERT word [BCZ+16]) have the worst debiasing performance, which validates


our observation that the word-level debiasing methods cannot reduce sentence-level

social bias in NLP models.

6.4.5 Analysis

To test the influence of data proportion on the model’s debiasing performance, we

select WikiText-2 with 13,750 sentences as the training corpora following the setup in

[LLZ+20]. Then we randomly divide the training data into 5 equal-sized partitions.

We evaluate the bias degree of the sentence debiasing methods on different combina-

tions of the partitions, specifically with training data proportions (20%, 40%, 60%,

80%, 100%). Under each data proportion, we repeat the training 5 times to obtain

the mean and variance of the absolute SEAT effect size. In Figure 6.2, we plot the

bias degree of BERT post tasks with different training data proportions. In gen-

eral, both Sent-Debias and FairFil achieve better performance and smaller variance

when the proportion of training data is larger. Under a 20% training proportion, our

FairFil can better remove bias from the text encoder, which shows that FairFil has
better data efficiency thanks to the contrastive learning framework.

To further study the debiased sentence embeddings, we visualize the relative

distances of attributes and targets of SEAT before/after our debiasing process. We

choose the target words as “he” and “she.” Attributes are selected from different

social domains. We first contextualize the selected words into sentence templates

as described in Section 6.4.1. We then average the original/debiased embeddings of

these sentence templates and plot the t-SNE [MH08] in Figure 6.3. From the t-SNE,

the debiased encoder provides more balanced distances from gender targets “he/she”

to the attribute concepts.
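A sketch of this visualization step follows, assuming per-word template embeddings are already computed; the perplexity value is an assumption.

```python
# Sketch of the t-SNE visualization: average templated embeddings per word
# and project to 2-D with scikit-learn.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_word_map(word_to_embs, title):
    # word_to_embs: dict mapping each word to an (n_templates, dim) array
    # of sentence-template embeddings (original or debiased).
    words = list(word_to_embs)
    means = np.stack([word_to_embs[w].mean(axis=0) for w in words])
    xy = TSNE(n_components=2, perplexity=5).fit_transform(means)
    plt.scatter(xy[:, 0], xy[:, 1])
    for w, (x, y) in zip(words, xy):
        plt.annotate(w, (x, y))
    plt.title(title)
    plt.show()
```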


Figure 6.2: Influence of the training data proportion on the bias degree of BERT.

Figure 6.3: t-SNE plots of words contextualized in templates. Left: the original pretrained BERT; right: FairFil.

6.5 Conclusions

This chapter has developed a novel debiasing method for large-scale pretrained text

encoder neural networks. We proposed a fair filter (FairFil) network, which takes

the original sentence embeddings as input and outputs the debiased sentence em-

beddings. To train the fair filter, we constructed a multi-view contrastive learning

framework, which maximizes the mutual information between each sentence and its

augmentation. The augmented sentence is generated by replacing sensitive words in

the original sentence with words of similar semantics but different bias directions.

Further, we designed a debiasing regularizer that minimizes the mutual information

between the debiased embeddings and the corresponding sensitive words in sentences.

Experimental results demonstrate the proposed FairFil not only reduces the bias in

sentence embedding space, but also maintains the semantic meaning of the embed-


dings. This post hoc method requires neither access to the training corpora nor any
retraining of the pretrained text encoder, which enhances its applicability.


Chapter 7

Conclusions

Representation learning methods have recently become the mainstream approach for

natural language understanding. Although representation learning methods already

show remarkable performance on many natural language downstream tasks, many
important properties of the learned representations remain underexplored, e.g.,
efficiency, interpretability, and fairness. In this thesis, I studied contrastive
learning methods for improving representation learning from these perspectives:

To improve the efficiency of text representation learning, I learned a binary trans-

formation to obtain compressed embeddings with contrastive learning in Chapter 2.

The contrastive learning framework preserves the relational information in the original
sentence embeddings, so that if two sentences are similar in the continuous embedding
space, they remain close after the discrete transformation. Besides, I reconstruct the
continuous embedding from the binarized embedding to ensure rich semantic information
is retained after compression. Experimental results show the proposed method can
dramatically reduce embedding storage with little performance loss.

To improve the interpretability of representations, I introduced an information-

theoretical framework in Chapter 4 to disentangle the style and content information

of representations into different embedding parts. First, I proposed a contrastive

log-ratio upper bound of mutual information in Chapter 3, which is the key learn-

ing objective for the disentanglement. By minimizing the mutual information upper

bound, the framework induces style and content embeddings into two independent

low-dimensional spaces. Experiments on both conditional text generation and text-


style transfer demonstrate the high quality of our disentangled representation in

terms of content and style preservation. Moreover, I applied this contrastive learning

method for the voice style transfer in Chapter 5, where the speaker-related style and

content of each voice sample are encoded into separated low-dimensional embedding

spaces, and then transferred to a new voice via a decoder. The voice disentangling

method obtains state-of-the-art results in terms of transfer accuracy and voice natu-

ralness for style transfer on real-world datasets.

To improve the fairness of text representations, I proposed the first neural debi-

asing method for a pretrained sentence encoder in Chapter 6. The proposed method

transforms the pretrained encoder outputs into debiased representations via a fair

filter network. To learn the filter, I introduced a contrastive learning framework

that not only minimizes the correlation between filtered embeddings and bias words

but also preserves rich semantic information of the original sentences. On real-world

datasets, our method effectively reduces the bias degree of pretrained text encoders,

while maintaining desirable performance on downstream tasks.

Representation learning is an effective and essential approach in machine learning,
which has achieved promising empirical improvements on natural language understanding
applications. However, many important properties of representation learning
remain poorly understood and worthy of further exploration. I hope this thesis can
serve as a useful reference for understanding representation learning in natural
language processing.


Bibliography

[ABC+14] Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. SemEval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 81–91, 2014.

[ADSFS19] Jayadev Acharya, Chris De Sa, Dylan Foster, and Karthik Sridharan. Distributed learning with sublinear communication. In Proceedings of Machine Learning Research, volume 97, pages 40–50, Long Beach, California, USA, 09–15 Jun 2019. PMLR.

[AFDM17] Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. Deep variational information bottleneck. In International Conference on Learning Representations, 2017.

[AHD+19] Sungsoo Ahn, Shell Xu Hu, Andreas Damianou, Neil D Lawrence, and Zhenwen Dai. Variational information distillation for knowledge transfer. In CVPR, 2019.

[AKB+17] Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. CoRR, abs/1608.04207, 2017.

[Alt92] Naomi S Altman. An introduction to kernel and nearest-neighbor nonparametric regression. The American Statistician, 46(3):175–185, 1992.

[ASZAA07] Mehdi Aghagolzadeh, Hamid Soltanian-Zadeh, B Araabi, and Ali Aghagolzadeh. A hierarchical clustering based on mutual information maximization. In 2007 IEEE International Conference on Image Processing, volume 1, pages I–277. IEEE, 2007.

[BA03] David Barber and Felix V Agakov. The IM algorithm: a variational approach to information maximization. In Advances in Neural Information Processing Systems, 2003.

[BAHAZ19] Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, and Richard Zemel. Understanding the origins of bias in word embeddings. In International Conference on Machine Learning, pages 803–811, 2019.

[BAPM15a] Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference. In EMNLP, 2015.


[BAPM15b] Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 2015.

[BAPM15c] Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326, 2015.

[BB19] Shikha Bordia and Samuel R. Bowman. Identifying and reducing gender bias in word-level language models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 7–15, Minneapolis, Minnesota, 2019.

[BBDIW20] Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. Language (technology) is power: A critical survey of "bias" in NLP. arXiv preprint arXiv:2005.14050, 2020.

[BBR+18] Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Devon Hjelm, and Aaron Courville. Mutual information neural estimation. In International Conference on Machine Learning, pages 530–539, 2018.

[BC94] Donald J Berndt and James Clifford. Using dynamic time warping to find patterns in time series. In KDD Workshop, pages 359–370. Seattle, WA, 1994.

[BCZ+16] Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems, pages 4349–4357, 2016.

[BGJM17] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146, 2017.

[BHP+18] Christopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in beta-VAE. arXiv preprint arXiv:1804.03599, 2018.

[BTS+16] Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. Domain separation networks. In NeurIPS, 2016.


[CA07] James A Coan and John JB Allen. Handbook of Emotion Elicitation and Assessment. Oxford University Press, 2007.

[CB02] George Casella and Roger L Berger. Statistical Inference, volume 2. Duxbury Pacific Grove, CA, 2002.

[CBK+10] Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka, and Tom M Mitchell. Toward an architecture for never-ending language learning. In Twenty-Fourth AAAI Conference on Artificial Intelligence, 2010.

[CBN17] Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186, 2017.

[CCK+18] Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8789–8797, 2018.

[CcYyLsL18] Ju-chieh Chou, Cheng-chieh Yeh, Hung-yi Lee, and Lin-shan Lee. Multi-target voice conversion without parallel data by adversarially learning disentangled audio representations. In Proc. Interspeech 2018, pages 501–505, 2018.

[CDH+16] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2172–2180, 2016.

[CG97] Fan RK Chung and Fan Chung Graham. Spectral Graph Theory. Number 92. American Mathematical Soc., 1997.

[CH+67] Thomas M Cover, Peter E Hart, et al. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13(1):21–27, 1967.

[Cha12] Santosh V Chapaneri. Spoken digits recognition using weighted MFCC and improved features for dynamic time warping. International Journal of Computer Applications, 40(3):6–12, 2012.

[CHD+20] Pengyu Cheng, Weituo Hao, Shuyang Dai, Jiachang Liu, Zhe Gan, and Lawrence Carin. CLUB: A contrastive log-ratio upper bound of mutual information. In Proceedings of the 37th International Conference on Machine Learning, 2020.


[CK18] Alexis Conneau and Douwe Kiela. SentEval: An evaluation toolkit for universal sentence representations. arXiv preprint arXiv:1803.05449, 2018.

[CKNH20] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, 2020.

[CKS+17a] Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680, Copenhagen, Denmark, September 2017. Association for Computational Linguistics.

[CKS+17b] Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. Supervised learning of universal sentence representations from natural language inference data. In EMNLP, 2017.

[CKS+17c] Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. Supervised learning of universal sentence representations from natural language inference data. arXiv preprint arXiv:1705.02364, 2017.

[CL19] Ju-chieh Chou and Hung-yi Lee. One-shot voice conversion by separating speaker and content representations with instance normalization. Proc. Interspeech 2019, pages 664–668, 2019.

[CLGD18] Tian Qi Chen, Xuechen Li, Roger B Grosse, and David K Duvenaud. Isolating sources of disentanglement in variational autoencoders. In Advances in Neural Information Processing Systems, pages 2610–2620, 2018.

[CLKM15] Benjamin Charrow, Sikang Liu, Vijay Kumar, and Nathan Michael. Information-theoretic mapping using Cauchy-Schwarz quadratic mutual information. In ICRA, 2015.

[CLY+17] Dongdong Chen, Jing Liao, Lu Yuan, Nenghai Yu, and Gang Hua. Coherent online video style transfer. In Proceedings of the IEEE International Conference on Computer Vision, pages 1105–1114, 2017.

[CLZ+20] Pengyu Cheng, Yitong Li, Xinyuan Zhang, Liqun Cheng, David Carlson, and Lawrence Carin. Dynamic embedding on textual networks via a Gaussian process. In AAAI, 2020.


[CM19] Kaytlin Chaloner and Alfredo Maldonado. Measuring gender bias in word embeddings across domains and discovering new gender bias word categories. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 25–32, 2019.

[CMG17] Anmol Chachra, Pulkit Mehndiratta, and Mohit Gupta. Sentiment analysis of text using deep convolution neural networks. In 2017 Tenth International Conference on Contemporary Computing (IC3), pages 1–6. IEEE, 2017.

[CMS18] Ting Chen, Martin Renqiang Min, and Yizhou Sun. Learning k-way d-dimensional discrete codes for compact embedding representations. arXiv preprint arXiv:1806.09464, 2018.

[CMS+20] Pengyu Cheng, Martin Renqiang Min, Dinghan Shen, Christopher Malon, Yizhe Zhang, Yitong Li, and Lawrence Carin. Improving disentangled text representation learning with information-theoretic guidance. arXiv preprint arXiv:2006.00693, 2020.

[CPO19] Kai-Wei Chang, Vinod Prabhakaran, and Vicente Ordonez. Bias and fairness in natural language processing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): Tutorial Abstracts, 2019.

[CPR15] Miguel A Carreira-Perpinan and Ramin Raziperchikolaei. Hashing with binary autoencoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 557–566, 2015.

[CT12] Thomas M Cover and Joy A Thomas. Elements of Information Theory. John Wiley & Sons, 2012.

[CWT+19] Liqun Chen, Guoyin Wang, Chenyang Tao, Dinghan Shen, Pengyu Cheng, Xinyuan Zhang, Wenlin Wang, Yizhe Zhang, and Lawrence Carin. Improving textual network embedding with global attention via optimal transport. In ACL, 2019.

[CYyK+18] Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. Universal sentence encoder. CoRR, abs/1803.11175, 2018.

[D+17] Emily L Denton et al. Unsupervised learning of disentangled representations from video. In Advances in Neural Information Processing Systems, pages 4414–4423, 2017.


[DBV16] Michael Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, pages 3844–3852, 2016.

[DCLT18] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

[DCLT19] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171–4186, 2019.

[DCZ+19] Shuyang Dai, Yu Cheng, Yizhe Zhang, Zhe Gan, Jingjing Liu, and Lawrence Carin. Contrastively smoothed class alignment for unsupervised domain adaptation. arXiv preprint arXiv:1909.05288, 2019.

[DGK07] Inderjit S Dhillon, Yuqiang Guan, and Brian Kulis. Weighted graph cuts without eigenvectors: a multilevel approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(11):1944–1957, 2007.

[DGK+17] Bo Dai, Ruiqi Guo, Sanjiv Kumar, Niao He, and Le Song. Stochastic generative hashing. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 913–922. JMLR.org, 2017.

[DKJ+07] Jason V Davis, Brian Kulis, Prateek Jain, Suvrit Sra, and Inderjit S Dhillon. Information-theoretic metric learning. In Proceedings of the 24th International Conference on Machine Learning, pages 209–216, 2007.

[DL15] Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pages 3079–3087, 2015.

[DLS+18] Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. Measuring and mitigating unintended bias in text classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67–73, 2018.

[DNP13] Shivanker Dev Dhingra, Geeta Nijhawan, and Poonam Pandit. Isolated speech recognition using MFCC and DTW. International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, 2(8):4085–4092, 2013.


[DS98] Kathleen Dahlgren and Edward Stabler. Natural language understanding system, August 11 1998. US Patent 5,794,050.

[DV99] Georges A Darbellay and Igor Vajda. Estimation of the information by an adaptive partitioning of the observation space. IEEE Transactions on Information Theory, 1999.

[FDA17] Carlos Florensa, Yan Duan, and Pieter Abbeel. Stochastic neural networks for hierarchical reinforcement learning. arXiv preprint arXiv:1704.03012, 2017.

[FL03] Beat Fasel and Juergen Luettin. Automatic facial expression analysis: a survey. Pattern Recognition, 36(1):259–275, 2003.

[GB10] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 249–256, 2010.

[GBR+12] Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(Mar):723–773, 2012.

[Gea35] Roy C Geary. The ratio of the mean deviation to the standard deviation as a test of normality. Biometrika, 27(3/4):310–332, 1935.

[GEB16] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2414–2423, 2016.

[GH10] Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 297–304, 2010.

[GL94] Clive Granger and Jin-Lung Lin. Using the mutual information coefficient to identify lags in nonlinear models. Journal of Time Series Analysis, 1994.

[GL16] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 855–864, 2016.


[GPH+17] Zhe Gan, Yunchen Pu, Ricardo Henao, Chunyuan Li, Xiaodong He, and Lawrence Carin. Learning generic sentence representations using convolutional neural networks. In EMNLP, 2017.

[GPL+19] Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H Chi, and Alex Beutel. Counterfactual fairness in text classification through robustness. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 219–226, 2019.

[GRC11] Elizabeth Godoy, Olivier Rosec, and Thierry Chonavel. Voice conversion using dynamic frequency warping with amplitude scaling, for parallel or nonparallel corpora. IEEE Transactions on Audio, Speech, and Language Processing, 20(4):1313–1323, 2011.

[GSR+18] Behnam Gholami, Pritish Sahu, Ognjen Rudovic, Konstantinos Bousmalis, and Vladimir Pavlovic. Unsupervised multi-target domain adaptation: An information theoretic approach. arXiv preprint arXiv:1810.11547, 2018.

[GSR+20] Behnam Gholami, Pritish Sahu, Ognjen Rudovic, Konstantinos Bousmalis, and Vladimir Pavlovic. Unsupervised multi-target domain adaptation: An information theoretic approach. IEEE Transactions on Image Processing, 29:3993–4002, 2020.

[GUA+16] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. JMLR, 2016.

[HB17] Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, pages 1501–1510, 2017.

[HCK16] Felix Hill, Kyunghyun Cho, and Anna Korhonen. Learning distributed representations of sentences from unlabelled data. In HLT-NAACL, 2016.

[HFLM+18] R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In International Conference on Learning Representations, 2018.

[HFW+20] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729–9738, 2020.

[HHW+16] Chin-Cheng Hsu, Hsin-Te Hwang, Yi-Chiao Wu, Yu Tsao, and Hsin-Min Wang. Voice conversion from non-parallel corpora using variational auto-encoder. In 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), pages 1–6. IEEE, 2016.

[Hin12] G Hinton. Neural networks for machine learning. Coursera [video lectures], 2012.

[HKGV11] Jihun Hamm, Christian G Kohler, Ruben C Gur, and Ragini Verma. Automated facial action coding system for dynamic analysis of facial expressions in neuropsychiatric disorders. Journal of Neuroscience Methods, 200(2):237–256, 2011.

[HLH+18] Jun-Ting Hsieh, Bingbin Liu, De-An Huang, Li F Fei-Fei, and Juan Carlos Niebles. Learning to decompose and disentangle representations for video prediction. In Advances in Neural Information Processing Systems, pages 517–526, 2018.

[HLY19] Wei Hu, Zhiyuan Li, and Dingli Yu. Understanding generalization of deep neural networks trained with noisy labels. arXiv preprint arXiv:1905.11368, 2019.

[HMP+16] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, 2016.

[HMSW04] Wolfgang Karl Härdle, Marlene Müller, Stefan Sperlich, and Axel Werwatz. Nonparametric and Semiparametric Models. Springer Science & Business Media, 2004.

[HMT+17] Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, and Masashi Sugiyama. Learning discrete representations via information maximizing self-augmented training. In ICML, 2017.

[HPR+17] Irina Higgins, Arka Pal, Andrei Rusu, Loic Matthey, Christopher Burgess, Alexander Pritzel, Matthew Botvinick, Charles Blundell, and Alexander Lerchner. DARLA: Improving zero-shot transfer in reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 1480–1490. JMLR.org, 2017.

[HS97] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

[HSK+12] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.

[HVG11] David K Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129–150, 2011.

[HWL+17] Haozhi Huang, Hao Wang, Wenhan Luo, Lin Ma, Wenhao Jiang, Xiaolong Zhu, Zhifeng Li, and Wei Liu. Real-time neural style transfer for videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 783–791, 2017.

[HYL17a] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1024–1034, 2017.

[HYL+17b] Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 1587–1596. JMLR.org, 2017.

[HZRS16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.

[JBS17] Yacine Jernite, Samuel R. Bowman, and David A Sontag. Discourse-based objectives for fast unsupervised sentence representation learning. CoRR, abs/1705.00557, 2017.

[JBvdM+18] Sarthak Jain, Edward Banner, Jan-Willem van de Meent, Iain J Marshall, and Byron C Wallace. Learning disentangled representations of texts with application to biomedical abstracts. arXiv preprint arXiv:1804.07212, 2018.

[JCL+20] Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77, 2020.

[JGBM16] Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759, 2016.

[JGP16] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.

[JKR14] Brian J Julian, Sertac Karaman, and Daniela Rus. On mutual information-based control of range sensing robots for mapping applications. The International Journal of Robotics Research, 2014.

[JMBV19a] Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. Disentangled representation learning for non-parallel text style transfer. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019.

[JMBV19b] Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. Disentangled representation learning for non-parallel text style transfer. In ACL, 2019.

[Joa96] Thorsten Joachims. A probabilistic analysis of the Rocchio algorithm with TFIDF for text categorization. Technical report, Carnegie Mellon University, Department of Computer Science, Pittsburgh, PA, 1996.

[JYL15] Bo Jiang, Chao Ye, and Jun S Liu. Nonparametric k-sample tests via dynamic slicing. Journal of the American Statistical Association, 2015.

[KAS11] Toshihiro Kamishima, Shotaro Akaho, and Jun Sakuma. Fairness-aware learning through regularization approach. In IEEE 11th International Conference on Data Mining Workshops, 2011.

[KB14] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[KB19] Masahiro Kaneko and Danushka Bollegala. Gender-preserving debiasing for pre-trained word embeddings. arXiv preprint arXiv:1906.00742, 2019.

[KC18] Jamie Kiros and William Chan. InferLite: Simple universal sentence representations from natural language inference data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4868–4874, 2018.

[KCT00] Takeo Kanade, Jeffrey F Cohn, and Yingli Tian. Comprehensive database for facial expression analysis. In Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580), pages 46–53. IEEE, 2000.

[KD18] Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pages 10215–10224, 2018.

[KK18] Takuhiro Kaneko and Hirokazu Kameoka. CycleGAN-VC: Non-parallel voice conversion using cycle-consistent adversarial networks. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2100–2104. IEEE, 2018.

[KKTH18] Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka, and Nobukatsu Hojo. StarGAN-VC: Non-parallel many-to-many voice conversion using star generative adversarial networks. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 266–273. IEEE, 2018.

[KL51] Solomon Kullback and Richard A Leibler. On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79–86, 1951.

[KM18a] Hyunjik Kim and Andriy Mnih. Disentangling by factorising. In ICML, 2018.

[KM18b] Svetlana Kiritchenko and Saif Mohammad. Examining gender and race bias in two hundred sentiment analysis systems. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 43–53, 2018.

[KSAG05] Alexander Kraskov, Harald Stögbauer, Ralph G Andrzejak, and Peter Grassberger. Hierarchical clustering using mutual information. EPL (Europhysics Letters), 70(2):278, 2005.

[KSC76] Peter Ewen King-Smith and D Carden. Luminance and opponent-color contributions to visual detection and adaptation and to temporal and spatial integration. JOSA, 66(7):709–717, 1976.

[KSG04] Alexander Kraskov, Harald Stögbauer, and Peter Grassberger. Estimating mutual information. Physical Review E, 2004.

[KST+18] Hadi Kazemi, Sobhan Soleymani, Fariborz Taherkhani, Seyed Iranmanesh, and Nasser Nasrabadi. Unsupervised image-to-image translation using domain-specific variational information bound. In NeurIPS, 2018.

[Kub93] Robert Kubichek. Mel-cepstral distance measure for objective speech quality assessment. In Proceedings of IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, volume 1, pages 125–128. IEEE, 1993.

[Kul97] Solomon Kullback. Information Theory and Statistics. Courier Corporation, 1997.

[KVAMR18] Vinay Kumar Verma, Gundeep Arora, Ashish Mishra, and Piyush Rai. Generalized zero-shot learning via synthesized examples. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4281–4289, 2018.

[KVP+19] Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. Measuring bias in contextualized word representations. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 166–172, 2019.

[KW13] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.

[KW16] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.

[KZS+15a] Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. Skip-thought vectors. In NIPS, 2015.

[KZS+15b] Ryan Kiros, Yukun Zhu, Russ R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In Advances in Neural Information Processing Systems, pages 3294–3302, 2015.

[LB18] Edward Loper and Steven Bird. NLTK: The Natural Language Toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics - Volume 1, ETMTNLP '02, pages 63–70, Stroudsburg, PA, USA, 2018. Association for Computational Linguistics.

[LBB+98] Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

[LBH15] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436, 2015.

[LBL+19] Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In International Conference on Machine Learning, pages 4114–4124, 2019.

[LCK+10] Patrick Lucey, Jeffrey F Cohn, Takeo Kanade, Jason Saragih, Zara Ambadar, and Iain Matthews. The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, pages 94–101. IEEE, 2010.

[LCL+09] Patrick Lucey, Jeffrey Cohn, Simon Lucey, Iain Matthews, Sridha Sridharan, and Kenneth M Prkachin. Automatically detecting pain using facial actions. In 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, pages 1–8. IEEE, 2009.

[LCWJ15] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In ICML, 2015.

[LFS+17] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130, 2017.

[LGLC16] Alexander Lachmann, Federico M Giorgi, Gonzalo Lopez, and Andrea Califano. ARACNe-AP: gene network reverse engineering through adaptive partitioning inference of mutual information. Bioinformatics, 2016.

[LL18] Lajanugen Logeswaran and Honglak Lee. An efficient framework for learning sentence representations. In ICLR, 2018.

[LLZ+20] Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis-Philippe Morency. Towards debiasing sentence representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5502–5515, 2020.

[LOG+19] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.

[LPSB17] Fujun Luan, Sylvain Paris, Eli Shechtman, and Kavita Bala. Deep photo style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4990–4998, 2017.

[LSS+19] Guillaume Lample, Sandeep Subramanian, Eric Smith, Ludovic Denoyer, Marc'Aurelio Ranzato, and Y-Lan Boureau. Multiple-attribute text rewriting. In International Conference on Learning Representations, 2019.

[LTH+18] Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. Diverse image-to-image translation via disentangled representations. In Proceedings of the European Conference on Computer Vision (ECCV), pages 35–51, 2018.

[LW06] Chung-Han Lee and Chung-Hsien Wu. MAP-based adaptation for speech conversion using adaptation data selection and non-parallel training. In Ninth International Conference on Spoken Language Processing, 2006.

[LWL+17] Weibo Liu, Zidong Wang, Xiaohui Liu, Nianyin Zeng, Yurong Liu, and Fuad E Alsaadi. A survey of deep neural network architectures and their applications. Neurocomputing, 234:11–26, 2017.

[LYF+18] Yen-Cheng Liu, Yu-Ying Yeh, Tzu-Chien Fu, Sheng-De Wang, Wei-Chen Chiu, and Yu-Chiang Frank Wang. Detach and adapt: Learning cross-domain disentangled deep representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8867–8876, 2018.

[MBE10] Lindasalwa Muda, Mumtaj Begam, and Irraivan Elamvazuthi. Voice recognition algorithms using mel frequency cepstral coefficient (MFCC) and dynamic time warping (DTW) techniques. arXiv preprint arXiv:1003.4083, 2010.

[MCCD13] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.

[Mei07] Marina Meilă. Comparing clusterings—an information based distance. Journal of Multivariate Analysis, 98(5):873–895, 2007.

[MH08] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.

[MH14] Kevin R Moon and Alfred O Hero. Ensemble estimation of multivariate f-divergence. In 2014 IEEE International Symposium on Information Theory, pages 356–360. IEEE, 2014.

[MIA99] Malik Magdon-Ismail and Amir F Atiya. Neural networks for density estimation. In NeurIPS, 1999.

[MK17] Seyed Hamidreza Mohammadi and Alexander Kain. An overview of voice conversion systems. Speech Communication, 88:65–82, 2017.

[MLTB19] Thomas Manzini, Yao Chong Lim, Yulia Tsvetkov, and Alan W Black. Black is to criminal as Caucasian is to police: Detecting and removing multiclass bias in word embeddings. arXiv preprint arXiv:1904.04047, 2019.

[MQ10] Anderson F Machado and Marcelo Queiroz. Voice conversion: A critical survey. 2010.

[MSC+13] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013.

[MSG+18] Liqian Ma, Qianru Sun, Stamatios Georgoulis, Luc Van Gool, Bernt Schiele, and Mario Fritz. Disentangled person image generation. In CVPR, 2018.

[MWB+19] Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622–628, 2019.

[MXBS17] Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In ICLR, 2017.

[MYCG19] Sean MacAvaney, Andrew Yates, Arman Cohan, and Nazli Goharian. CEDR: Contextualized embeddings for document ranking. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1101–1104, 2019.

[NBG17] Allen Nie, Erin D. Bennett, and Noah D. Goodman. DisSent: Sentence representation learning from explicit discourse relations. CoRR, abs/1710.04334, 2017.

[NCZ17] Arsha Nagrani, Joon Son Chung, and Andrew Zisserman. VoxCeleb: a large-scale speaker identification dataset. arXiv preprint arXiv:1706.08612, 2017.

[NTSS06] Keigo Nakamura, Tomoki Toda, Hiroshi Saruwatari, and Kiyohiro Shikano. Speaking aid system for total laryngectomees using voice conversion of body transmitted artificial speech. In Ninth International Conference on Spoken Language Processing, 2006.

[NWJ10] XuanLong Nguyen, Martin J Wainwright, and Michael I Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 2010.

[ODZ+16] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.

[OLV18] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.

[OS19] Samet Oymak and Mahdi Soltanolkotabi. Overparameterized nonlinear learning: Gradient descent takes the shortest path? In ICML, 2019.

[PARS14] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 701–710. ACM, 2014.

[PCPK15] Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. LibriSpeech: an ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206–5210. IEEE, 2015.

[PGJ18] Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. Unsupervised learning of sentence embeddings using compositional n-gram features. In NAACL-HLT, 2018.

[Pha16] Phatpiglet. phatpiglet/autocorrect, August 2016.

[PHM+19] Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 527–536, 2019.

[PNI+18] Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237, 2018.

[POVDO+19] Ben Poole, Sherjil Ozair, Aaron Van Den Oord, Alex Alemi, and George Tucker. On variational bounds of mutual information. In International Conference on Machine Learning, pages 5171–5180, 2019.

[PRWZ02] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics, 2002.

[PSC+14] Sunghyun Park, Han Suk Shim, Moitreya Chatterjee, Kenji Sagae, and Louis-Philippe Morency. Computational analysis of persuasiveness in social multimedia: A novel dataset and multimodal prediction approach. In Proceedings of the 16th International Conference on Multimodal Interaction, pages 50–57, 2014.

[PSF18] Ji Ho Park, Jamin Shin, and Pascale Fung. Reducing gender bias in abusive language detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2799–2804, 2018.

[PSM14] Jeffrey Pennington, Richard Socher, and Christopher Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, 2014.

[QMG+20] Rui Qian, Tianjian Meng, Boqing Gong, Ming-Hsuan Yang, Huisheng Wang, Serge Belongie, and Yin Cui. Spatiotemporal contrastive video representation learning. arXiv preprint arXiv:2008.03800, 2020.

[QYQ+19] Chen Qu, Liu Yang, Minghui Qiu, W Bruce Croft, Yongfeng Zhang, and Mohit Iyyer. BERT with history answer embedding for conversational question answering. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1133–1136, 2019.

[QZC+19] Kaizhi Qian, Yang Zhang, Shiyu Chang, Xuesong Yang, and Mark Hasegawa-Johnson. AutoVC: Zero-shot voice style transfer with only autoencoder loss. In International Conference on Machine Learning, pages 5210–5219, 2019.

[RCL+09] Andrew Ryan, Jeffery F Cohn, Simon Lucey, Jason Saragih, Patrick Lucey, Fernando De la Torre, and Adam Rossi. Automated facial expression recognition system. In 43rd Annual 2009 International Carnahan Conference on Security Technology, pages 172–177. IEEE, 2009.

[RH18] Sebastian Ruder and Jeremy Howard. Universal language model fine-tuning for text classification. In ACL, 2018.

[RK18] Sujith Ravi and Zornitsa Kozareva. Self-governing neural networks for on-device short text classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 887–893, 2018.

[RM15] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In International Conference on Machine Learning, pages 1530–1538, 2015.

[RN16] Stuart J Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Pearson Education Limited, Malaysia, 2016.

[RNSS18] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training, 2018.

[RTC17] Aaditya Ramdas, Nicolas Trillos, and Marco Cuturi. On Wasserstein two-sample testing and related families of nonparametric tests. Entropy, 19(2):47, 2017.

[SCM98] Yannis Stylianou, Olivier Cappé, and Eric Moulines. Continuous probabilistic transform for voice conversion. IEEE Transactions on Speech and Audio Processing, 6(2):131–142, 1998.

[SDCW19] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019.

[SH09] Ruslan Salakhutdinov and Geoffrey Hinton. Semantic hashing. International Journal of Approximate Reasoning, 50(7):969–978, 2009.

[SHH+19] Kurt Shuster, Samuel Humeau, Hexiang Hu, Antoine Bordes, and Jason Weston. Engaging image captioning via personality. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12516–12526, 2019.

[SINT18] Yuki Saito, Yusuke Ijima, Kyosuke Nishida, and Shinnosuke Takamichi. Non-parallel voice conversion using variational autoencoders conditioned by phonetic posteriorgrams and d-vectors. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5274–5278. IEEE, 2018.

[SKG+19] Jiaming Song, Pratyusha Kalluri, Aditya Grover, Shengjia Zhao, and Stefano Ermon. Learning controllable fair representations. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 2164–2173, 2019.

[SLBJ17] Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. Style transfer from non-parallel text by cross-alignment. In Advances in Neural Information Processing Systems, pages 6830–6841, 2017.

[SLS+18] Sandeep Subramanian, Guillaume Lample, Eric Michael Smith, Ludovic Denoyer, Marc'Aurelio Ranzato, and Y-Lan Boureau. Multiple-attribute text style transfer. arXiv preprint arXiv:1811.00552, 2018.

[SLW+19] Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 1441–1450, 2019.

[SN17] Raphael Shu and Hideki Nakayama. Compressing word embeddings via deep compositional code learning. arXiv preprint arXiv:1711.01068, 2017.

[SNB+08] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93–93, 2008.

[SPP19] Joan Serrà, Santiago Pascual, and Carlos Segura Perales. Blow: a single-scale hyperconditioned flow for non-parallel raw-audio voice conversion. In Advances in Neural Information Processing Systems, pages 6790–6800, 2019.

[SPW+13] Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, 2013.

[SPW+18] Jonathan Shen, Ruoming Pang, Ron J Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, et al. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4779–4783. IEEE, 2018.

[SSC+18] Dinghan Shen, Qinliang Su, Paidamoyo Chapfuwa, Wenlin Wang, Guoyin Wang, Lawrence Carin, and Ricardo Henao. NASH: Toward end-to-end neural architecture for generative semantic hashing. In ACL, 2018.

[SSG+13] Dino Sejdinovic, Bharath Sriperumbudur, Arthur Gretton, Kenji Fukumizu, et al. Equivalence of distance-based and RKHS-based statistics in hypothesis testing. The Annals of Statistics, 41(5):2263–2291, 2013.

[SSSK08] Taiji Suzuki, Masashi Sugiyama, Jun Sese, and Takafumi Kanamori. Approximating mutual information by maximum likelihood density ratio estimation. In New Challenges for Feature Selection in Data Mining and Knowledge Discovery, 2008.

[SSZ19] Gabriel Stanovsky, Noah A Smith, and Luke Zettlemoyer. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679–1684, 2019.

[SWUH18] Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum classifier discrepancy for unsupervised domain adaptation. In CVPR, 2018.

[SZL+18] Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. DiSAN: Directional self-attention network for RNN/CNN-free language understanding. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.

[SZS+18] Berrak Sisman, Mingyang Zhang, Sakriani Sakti, Haizhou Li, and Satoshi Nakamura. Adaptive WaveNet vocoder for residual compensation in GAN-based voice conversion. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 282–289. IEEE, 2018.

[TC19] Yi Chern Tan and L Elisa Celis. Assessing social and intersectional biases in contextualized word representations. In Advances in Neural Information Processing Systems, pages 13230–13241, 2019.

[TdS18] Shuai Tang and Virginia R de Sa. Improving sentence representations with multi-view frameworks. arXiv preprint arXiv:1810.01064, 2018.

[THG19] Julien Tissier, Amaury Habrard, and Christophe Gravier. Near-lossless binarization of word embeddings. In AAAI, 2019.

[TLJ07] Yan Tong, Wenhui Liao, and Qiang Ji. Facial action unit recognition by exploiting their dynamic and semantic relationships. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(10):1683–1699, 2007.

[TPB00] Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000.

[TQW+15] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pages 1067–1077, 2015.

[TYL17] Luan Tran, Xi Yin, and Xiaoming Liu. Disentangled representation learning GAN for pose-invariant face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1415–1424, 2017.

[UVL16] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.

[VB10] Fernando Villavicencio and Jordi Bonada. Applying voice conversion to concatenative singing-voice synthesis. In Eleventh Annual Conference of the International Speech Communication Association, 2010.

[VCE+08] Esra Vural, Müjdat Çetin, Aytül Erçil, Gwen Littlewort, Marian Bartlett, and Javier Movellan. Automated drowsiness detection for improved driving safety. 2008.

[VDL10] Benjamin Van Durme and Ashwin Lall. Online generation of locality sensitive hash signatures. In Proceedings of the ACL 2010 Conference Short Papers, pages 231–235. Association for Computational Linguistics, 2010.

[vdMH08] Laurens van der Maaten and Geoffrey Hinton. Visualizing high-dimensional data using t-SNE. JMLR, 2008.

[VFH+18] Petar Veličković, William Fedus, William L Hamilton, Pietro Liò, Yoshua Bengio, and R Devon Hjelm. Deep graph infomax. arXiv preprint arXiv:1809.10341, 2018.

[VNG99] Gertjan Van Noord and Dale Gerdemann. An extendible regular expression compiler for finite-state approaches in natural language processing. In International Workshop on Implementing Automata, pages 122–139. Springer, 1999.

[VPSS17] Michael Völske, Martin Potthast, Shahbaz Syed, and Benno Stein. TL;DR: Mining Reddit to learn automatic summarization. In Proceedings of the Workshop on New Frontiers in Summarization, pages 59–63, 2017.

[VSP+17] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30:5998–6008, 2017.

[WBGL16] John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. Towards universal paraphrastic sentence embeddings. CoRR, abs/1511.08198, 2016.

[WBS06] Kilian Q Weinberger, John Blitzer, and Lawrence K Saul. Distance metric learning for large margin nearest neighbor classification. In Advances in Neural Information Processing Systems, pages 1473–1480, 2006.

[WEG87] Svante Wold, Kim Esbensen, and Paul Geladi. Principal component analysis. Chemometrics and Intelligent Laboratory Systems, 2(1-3):37–52, 1987.

[WG18] John Wieting and Kevin Gimpel. ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In ACL, 2018.

[WK18] John Wieting and Douwe Kiela. No training required: Exploring random encoders for sentence classification. CoRR, abs/1901.10444, 2018.

[WNB17] Adina Williams, Nikita Nangia, and Samuel R Bowman. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426, 2017.

[WSB19] Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641, 2019.

[WSM+18] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, 2018.

[WSSJ14] Jingdong Wang, Heng Tao Shen, Jingkuan Song, and Jianqiu Ji. Hashing for similarity search: A survey. arXiv preprint arXiv:1408.2927, 2014.

[WVHC08] Michael M Wolf, Frank Verstraete, Matthew B Hastings, and J Ignacio Cirac. Area laws in quantum systems: mutual information and correlations. Physical Review Letters, 100(7):070502, 2008.

[WWPM18] Li Wan, Quan Wang, Alan Papir, and Ignacio Lopez Moreno. Generalized end-to-end loss for speaker verification. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4879–4883. IEEE, 2018.

[WWY16] Mirjam Wester, Zhizheng Wu, and Junichi Yamagishi. Analysis of the voice conversion challenge 2016 evaluation results. In Interspeech, pages 1637–1641, 2016.

[XWT+15] Jiaming Xu, Peng Wang, Guanhua Tian, Bo Xu, Jun Zhao, Fangyuan Wang, and Hongwei Hao. Convolutional neural networks for text hashing. In Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.

[YHD+18] Zichao Yang, Zhiting Hu, Chris Dyer, Eric P Xing, and Taylor Berg-Kirkpatrick. Unsupervised text style transfer using language models as discriminators. In Advances in Neural Information Processing Systems, pages 7287–7298, 2018.

[YLZ+15] Cheng Yang, Zhiyuan Liu, Deli Zhao, Maosong Sun, and Edward Y Chang. Network representation learning with rich text information. In IJCAI, volume 2015, pages 2111–2117, 2015.

[YM18] Li Yingzhen and Stephan Mandt. Disentangled sequential autoencoder. In International Conference on Machine Learning, pages 5656–5665, 2018.

[YVM+19] Junichi Yamagishi, Christophe Veaux, Kirsten MacDonald, et al. CSTR VCTK corpus: English multi-speaker corpus for CSTR voice cloning toolkit (version 0.92). 2019.

[ZANMB16] Diego J Zea, Diego Anfossi, Morten Nielsen, and Cristina Marino-Buslje. MIToS.jl: mutual information tools for protein sequence analysis in the Julia language. Bioinformatics, 2016.

[ZCG+20] Ruiyi Zhang, Changyou Chen, Zhe Gan, Wenlin Wang, Dinghan Shen, Guoyin Wang, Zheng Wen, and Lawrence Carin. Improving adversarial text generation by modeling the distant future. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020.

[ZJZ10] Yin Zhang, Rong Jin, and Zhi-Hua Zhou. Understanding bag-of-words model: a statistical framework. International Journal of Machine Learning and Cybernetics, 1(1-4):43–52, 2010.

[ZKW+19] Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675, 2019.

[ZKZ+18] Junbo Zhao, Yoon Kim, Kelly Zhang, Alexander Rush, and Yann LeCun. Adversarially regularized autoencoders. In Proceedings of the 35th International Conference on Machine Learning, pages 5902–5911, 2018.

[ZLdM18] Xunjie Zhu, Tingfeng Li, and Gerard de Melo. Exploring semantic properties of sentence embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 632–637, 2018.

[ZLL+19] Hang Zhou, Yu Liu, Ziwei Liu, Ping Luo, and Xiaogang Wang. Talking face generation by adversarially disentangled audio-visual representation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 9299–9306, 2019.

[ZPRH09] Zhihong Zeng, Maja Pantic, Glenn I Roisman, and Thomas S Huang. A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(1):39–58, 2009.

[ZWCL10] Dell Zhang, Jun Wang, Deng Cai, and Jinsong Lu. Self-taught hashing for fast similarity search. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 18–25. ACM, 2010.

[ZWY+19] Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. Gender bias in contextualized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 629–634, 2019.

[ZXW+20] Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, and Tie-Yan Liu. Incorporating BERT into neural machine translation. arXiv preprint arXiv:2002.06823, 2020.

[ZYS+19] Ruiyi Zhang, Tong Yu, Yilin Shen, Hongxia Jin, and Changyou Chen. Text-based interactive recommendation via constraint-augmented reinforcement learning. In Advances in Neural Information Processing Systems, pages 15214–15224, 2019.

Biography

Pengyu Cheng is a Ph.D. candidate in the Department of Electrical and Computer Engineering at Duke University. Advised by Dr. Lawrence Carin, he has a broad range of research interests spanning probabilistic machine learning, interpretable machine learning, and their applications in natural language processing. During his Ph.D. study, he focused his research mainly on utilizing contrastive learning methods to improve natural language understanding models, particularly from the perspectives of efficiency, interpretability, and fairness. He has published around ten papers at top-tier conferences such as ICML, ICLR, NeurIPS, ACL, and AAAI. In addition, he completed internships at NEC Labs America and Microsoft. At the beginning of his Ph.D., he placed first in grade at the machine learning summer school co-organized by Duke University and Tsinghua University. Before coming to Duke, he earned a Bachelor of Science degree from the Department of Mathematical Sciences at Tsinghua University in 2017.
