

One-Class Collaborative Filtering with the Queryable Variational Autoencoder

Ga Wu∗, University of Toronto, [email protected]

Mohamed Reda Bouadjenek, University of Toronto, [email protected]

Scott Sanner∗, University of Toronto, [email protected]

ABSTRACT

Variational Autoencoder (VAE) based methods for Collaborative Filtering (CF) demonstrate remarkable performance for one-class (implicit negative) recommendation tasks by extending autoencoders with relaxed but tractable latent distributions. Explicitly modeling a latent distribution over user preferences allows VAEs to learn user and item representations that not only reproduce observed interactions, but also generalize them by leveraging learning from similar users and items. Unfortunately, VAE-CF can exhibit suboptimal learning properties; e.g., VAE-CFs will increase their prediction confidence as they receive more preferences per user, even when those preferences vary widely and create ambiguity in the user representation. To address this issue, we propose a novel Queryable Variational Autoencoder (Q-VAE) variant of the VAE that explicitly models arbitrary conditional relationships between observations. The proposed model appropriately increases uncertainty (rather than reducing it) in cases where a large number of user preferences may lead to an ambiguous user representation. Our experiments on two benchmark datasets show that the Q-VAE generally performs comparably to or outperforms VAE-based recommenders as well as other state-of-the-art approaches, and is competitive across the user preference density spectrum, where other methods peak only at certain preference density levels.

Keywords: One-Class Collaborative Filtering; Variational Autoencoder; Conditional Inference.

ACM Reference Format:
Ga Wu, Mohamed Reda Bouadjenek, and Scott Sanner. 2019. One-Class Collaborative Filtering with the Queryable Variational Autoencoder. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '19), July 21–25, 2019, Paris, France. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3331184.3331292

1 INTRODUCTION

Autoencoder-based Collaborative Filtering (CF) algorithms make predictions by embedding user preferences into a latent space that enables generalization to unobserved user preferences [1]. However,

∗Affiliated with the Vector Institute for Artificial Intelligence, Toronto

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
SIGIR '19, July 21–25, 2019, Paris, France
© 2019 Association for Computing Machinery.
ACM ISBN 978-1-4503-6172-9/19/07. . . $15.00
https://doi.org/10.1145/3331184.3331292

[Figure 1: bar chart comparing VAE-CF and Q-VAE; x-axis: Average Standard Deviation of Latent Representation (0.0–0.3); y-axis: # of Ratings (20, 100).]

Figure 1: In this experiment, we show the average standard deviation of the diagonal Gaussian latent embeddings for VAE-CF and Q-VAE across 500 users. At the top, we first measure this embedding uncertainty after sampling 20 real interactions from each user's data, and at the bottom we add in 80 random (fake) interactions. While Q-VAE increases its uncertainty, VAE-CF oddly becomes more certain in user preferences after observing this incoherent random data.
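The uncertainty measurement underlying Figure 1 can be sketched as follows. Here `encode` is a hypothetical stand-in for a trained VAE encoder (the real one is a neural network); the sketch only illustrates how the reported quantity — the average standard deviation of the diagonal Gaussian embeddings — would be computed from the encoder's outputs across a population of users:

```python
import numpy as np

def encode(r):
    # Hypothetical stand-in for a trained VAE encoder: maps a binary
    # interaction vector r to the mean and log-variance of a diagonal
    # Gaussian latent distribution q(z|r).
    rng = np.random.default_rng(int(r.sum()))
    mu = rng.normal(size=8)
    log_var = rng.normal(scale=0.1, size=8)
    return mu, log_var

def average_embedding_std(user_vectors):
    # sigma = exp(log_var / 2); average within each user's embedding,
    # then across users -- the quantity plotted in Figure 1.
    per_user = [np.exp(0.5 * encode(r)[1]).mean() for r in user_vectors]
    return float(np.mean(per_user))

users = [np.random.default_rng(i).integers(0, 2, size=100) for i in range(500)]
print(average_embedding_std(users))
```

Repeating this measurement before and after appending random fake interactions to each user vector reproduces the comparison in the figure.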

a conventional Autoencoder recommender tends to be unsatisfactory, as latent representations are likely to overfit and memorize individual observations [2]. Indeed, an Autoencoder-based model for CF may be overly sensitive to individual user-item interactions, and thus may significantly change the latent representation of a user even after a single interaction update. Several prior works have noted this unsatisfactory representation issue [2, 3], and Denoising Autoencoders [4] have been developed to mitigate it. Unfortunately, denoising can hurt prediction performance when the data is very sparse, as we show later in the experiments.
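For concreteness, the input corruption at the heart of Denoising Autoencoders can be sketched as follows (a minimal illustration of the general idea, not the exact corruption process of [4]): entries of the input interaction vector are randomly dropped, and the model is trained to reconstruct the clean vector from the noisy input.

```python
import numpy as np

def corrupt(r, drop_prob=0.5, rng=None):
    # Denoising-Autoencoder-style input corruption: independently zero out
    # each entry with probability drop_prob; the model is then trained to
    # reconstruct the clean vector r from this noisy input.
    rng = rng or np.random.default_rng(0)
    keep_mask = rng.random(r.shape) >= drop_prob
    return r * keep_mask

r = np.array([1, 0, 1, 1, 0, 1])
noisy = corrupt(r)
print(noisy)
```

Because corruption only removes observed interactions, a very sparse user vector can lose most of its signal, which is one intuition for why denoising hurts in the sparse regime.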

Recently, Variational Autoencoders (VAEs) [5] – which model distributions over latent representations – have been used and extended by Liang et al. [6] for CF recommendation (VAE-CF), showing a remarkable prediction performance improvement over previous Autoencoding methods. This improved generalization over non-probabilistic Autoencoders is due to two key properties: (i) VAEs relax the latent distribution from a (deterministic) Delta function to a Gaussian distribution, allowing for explicit representation of user and item uncertainty, and (ii) VAEs regularize the latent distribution through Kullback-Leibler (KL) divergence with a tractable standard Gaussian distribution, leading to learning stability (i.e., less sensitivity to individual data points). Despite their remarkable prediction performance, VAEs exhibit the undesirable property of becoming over-confident when users express a large number of preferences. We argue in this paper that this property is particularly problematic because VAE-CF tends to recommend items to users with high certainty if a user has a considerable number of observed interactions, even when these preferences vary widely and increase ambiguity in the latent user representation, as demonstrated in Figure 1.
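The two properties above correspond to two standard VAE ingredients, sketched below: the reparameterization trick for sampling from the Gaussian latent distribution, and the closed-form KL divergence to the standard Gaussian prior (a generic VAE sketch, not the specific VAE-CF architecture of [6]):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # Sample z ~ N(mu, diag(exp(log_var))) via the reparameterization trick,
    # which keeps the sampling step differentiable w.r.t. mu and log_var.
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL[N(mu, diag(sigma^2)) || N(0, I)]: the regularizer that
    # keeps the learned latent distribution near the standard Gaussian prior.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

mu, log_var = np.zeros(4), np.zeros(4)
print(kl_to_standard_normal(mu, log_var))  # 0.0 when q(z) equals the prior
```

The KL term grows as the posterior drifts from the prior, which is exactly the stabilizing pressure described in point (ii).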

To address the issue mentioned above, we propose the Queryable Variational Autoencoder (Q-VAE) for one-class (implicit negative)


[Figure 2: two-panel diagram, (a) and (b), showing variables x, y, r, z, r̂ and the Gaussian means μx and μx,y.]

Figure 2: Proposed Q-VAE model. (a) The joint likelihood log p(x) and conditional likelihood log p(y|x) objectives share the VAE network parameters, forming a structured regularization. (b) KL[q(z|x, y)||q(z|x)] prevents the user representation from changing severely when additional observations y arrive.

recommendation tasks. The key contribution of the Q-VAE is to reformulate the variational lower bound of the joint observation distribution to support arbitrary conditional queries over observed user interactions. We show that our model can accurately measure the uncertainty of user latent representations (cf. Figure 1), thus preventing the model from performing poorly for users with a large number of interactions. Finally, we empirically demonstrate that Q-VAE outperforms VAE-CF in terms of prediction performance and is also competitive w.r.t. several state-of-the-art recommendation algorithms across the user preference density spectrum.

2 QUERYABLE-VAE FOR RECOMMENDATION

We begin with notation: we denote the observed preferences of user i as a set r_i of items preferred by the user (assuming binary preference with only positive observations for the one-class case). We denote partial preference observation subsets as x_i and y_i, where x_i, y_i ⊆ r_i, x_i ∩ y_i = ∅, and x_i ∪ y_i ⊆ r_i. In what follows, we omit the user subscript i to reduce notational clutter.

We propose the Queryable Variational Autoencoder (Q-VAE) to model the joint probability of a user's preferences p(r) and the conditional probability p(y|x) of some subset of preferences given the others, which allows the model to treat recommendation as a conditional inference (i.e., a query) problem with an arbitrary evidence set of user preference observations.

Instead of directly modeling the lower bound of the log joint probability p(r) as other VAE-based recommender systems do, we propose to model the joint probability of an arbitrary partition of r into x and y as follows:

log p(x, y) = log p(y|x) + log p(x)    (1)

where log p(y|x) estimates user preference for some items given the user's historical interactions, and log p(x) estimates how well the model can reproduce those historical interactions.
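As a quick sanity check, the chain-rule decomposition in Equation 1 can be verified numerically on a hypothetical discrete joint distribution (the 2×2 table below is purely illustrative, not drawn from the paper):

```python
import numpy as np

# Toy check of Equation 1: log p(x, y) = log p(y|x) + log p(x).
# A hypothetical joint over binary x (rows) and y (columns).
p_xy = np.array([[0.1, 0.3],
                 [0.2, 0.4]])

p_x = p_xy.sum(axis=1)              # marginal p(x)
p_y_given_x = p_xy / p_x[:, None]   # conditional p(y | x)

lhs = np.log(p_xy)
rhs = np.log(p_y_given_x) + np.log(p_x)[:, None]
assert np.allclose(lhs, rhs)  # the identity holds entrywise
```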

Maximizing both log p(y|x) and log p(x) for a given user is intractable due to the unknown relations between their interactions. We therefore optimize the lower bound of log p(x), which has been derived for the VAE [6–8] as follows:

log p(x) ≥ E_{qϕ(z|x)}[log pθ(x|z)] − KL[qϕ(z|x)||p(z)]    (2)
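A minimal single-sample Monte Carlo sketch of this lower bound for one user, assuming hypothetical linear encoder/decoder weights as stand-ins for the trained VAE networks (this is not the paper's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

n_items, n_latent = 6, 2
x = np.array([1., 0., 1., 1., 0., 0.])            # binarized preferences
W_enc = rng.normal(size=(n_items, 2 * n_latent))  # toy encoder weights
W_dec = rng.normal(size=(n_latent, n_items))      # toy decoder weights

# Encoder: predict mean and log standard deviation of q_phi(z|x).
h = x @ W_enc
mu, log_sigma = h[:n_latent], h[n_latent:]
sigma = np.exp(log_sigma)

# Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
eps = rng.normal(size=n_latent)
z = mu + sigma * eps

# Bernoulli log-likelihood log p_theta(x|z) under a sigmoid decoder.
logits = z @ W_dec
log_px_given_z = np.sum(x * logits - np.log1p(np.exp(logits)))

# Closed-form KL[q_phi(z|x) || N(0, I)] for a diagonal Gaussian posterior.
kl = 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * log_sigma)

elbo = log_px_given_z - kl  # one-sample estimate of the bound in Eq. 2
assert np.isfinite(elbo) and kl >= 0.0
```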

where ϕ and θ are respectively the encoder and decoder coefficients, and z is a user latent representation. Similarly, we can define the lower bound of log p(y|x) as follows:

log p(y|x) ≥ E_{q(z|x,y)}[log p(y|z)] − KL[q(z|x,y)||p(z|x)]    (3)

However, we note that we cannot turn Equation 3 into an autoencoder since the distribution p(z|x) is unknown. While it is possible to relax p(z|x) to p(z) as done in both CVAE [4] and BCDE [9], in this work we require a variational approximation q(z|x, y) with additional observations y to be as close as possible to p(z|x), to ensure that recommendations align with the observed preferences x.

We address this excessive relaxation problem by approximating the prior distribution p(z|x) with the variational posterior qϕ(z|x) learned from Equation 2. Thus, Equation 3 can represent a second VAE objective function as follows:

log p(y|x) ≥ E_{qψ(z|x,y)}[log pϑ(y|z)] − KL[qψ(z|x,y)||qϕ(z|x)]    (4)

where ψ and ϑ are respectively the encoder and decoder coefficients of the second VAE. A naive combination of the two VAE objectives from Equations 2 and 4 is impractical due to the need to maintain two VAE parameter sets and to obtain the conditional prior qϕ(z|x) before training the second VAE network.

We mitigate this problem by sharing the parameter sets of the two VAE objectives and training the two networks simultaneously. Specifically, Q-VAE optimizes a combined objective function on a single VAE network structure as follows:

log p(x, y) ≥ E_{qϕ(z|x)}[log pθ(x|z)] − KL[qϕ(z|x)||p(z)] + E_{qϕ(z|x,y)}[log pθ(y|z)] − KL[qϕ(z|x,y)||qϕ(z|x)]    (5)

where the two sub-objective functions form a mutually structured regularizer as demonstrated in Figure 2(a).

Arbitrary Combination of Evidence and Query Variables: Instead of fixing the split of variables x and y as in CVAE and BCDE, Q-VAE randomly splits variables during training through a dropout method, shown in Equation 6 and detailed below. Such random dropout training enables the model to perform arbitrary conditional inference with a different set of evidence or query variables without retraining a new model. Specifically, we obtain random x ∪ y, x, and y as follows:

x ∪ y = Dropout(r, ρ);  x = Dropout(x ∪ y, ρ);  y = (x ∪ y) − x    (6)
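The two-stage split in Equation 6 can be sketched with a hypothetical helper over a user's item set; the sampling range for ρ follows the paper, while the function name and the example item set are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def dropout_split(r, rho_range=(0.1, 0.5), rng=rng):
    """Split a user's item set r following Equation 6: drop items from r
    to get x ∪ y, drop again to get the evidence set x, and take the
    remainder as the query set y. `rho_range` is the paper's dropout-ratio
    range; the helper itself is our sketch."""
    def dropout(items):
        rho = rng.uniform(*rho_range)           # sample a dropout ratio
        keep = rng.random(len(items)) >= rho    # keep each item w.p. 1 - rho
        return items[keep]

    r = np.asarray(sorted(r))
    xy = dropout(r)          # x ∪ y = Dropout(r, ρ)
    x = dropout(xy)          # x = Dropout(x ∪ y, ρ)
    y = np.setdiff1d(xy, x)  # y = (x ∪ y) − x
    return x, y

items = {3, 8, 15, 21, 34, 42, 55, 60}
x, y = dropout_split(items)
assert set(x).isdisjoint(y) and set(x) | set(y) <= items
```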

where ρ denotes the dropout ratio, sampled uniformly from the range [0.1, 0.5].

KL Divergence: The objective function in Equation 5 introduces one additional KL divergence term that regularizes the posterior distributions qϕ(z|x) and qϕ(z|x, y). Since both posterior distributions are approximated as diagonal Gaussian distributions parameterized by mean µ and standard deviation σ, the KL divergence has a closed form and is computed as follows:

KL[q(z|x,y)||q(z|x)] = Σ_k [ log(σ_k^x / σ_k^{x,y}) + ((σ_k^{x,y})² + (µ_k^{x,y} − µ_k^x)²) / (2(σ_k^x)²) − 1/2 ]    (7)

where k indexes the latent dimensions. The KL divergence encourages keeping the expectation µ^{x,y} of the user preference as close as possible to µ^x as the model observes more interactions y, as demonstrated in Figure 2(b).
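The closed form in Equation 7 can be sketched directly in NumPy; the means and standard deviations below are hypothetical values, not fitted parameters:

```python
import numpy as np

def kl_diag_gauss(mu_q, sigma_q, mu_p, sigma_p):
    """Closed-form KL[q || p] between two diagonal Gaussians, matching
    Equation 7 with q = q(z|x, y) and p = q(z|x)."""
    return np.sum(
        np.log(sigma_p / sigma_q)
        + (sigma_q**2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p**2)
        - 0.5
    )

mu = np.array([0.3, -1.2])
sigma = np.array([0.5, 0.8])

# Identical distributions have zero divergence ...
assert np.isclose(kl_diag_gauss(mu, sigma, mu, sigma), 0.0)
# ... and moving the conditional mean away from mu^x is penalized,
# which is exactly the regularization effect described above.
assert kl_diag_gauss(mu + 1.0, sigma, mu, sigma) > 0.0
```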


Table 1: Results on the Movielens-1M dataset with 95% confidence intervals. Hyper-parameters are chosen on the validation set. α: loss weighting. λ: L2 regularization. ρ: corruption rate.

model rank α λ epochs ρ R-Precision MAP@5 MAP@50 Precision@5 Precision@50 Recall@5 Recall@50

PureSVD 50 0 1 10 0 0.092±0.0024 0.1212±0.0052 0.0987±0.0024 0.116±0.0043 0.0852±0.0018 0.0383±0.002 0.2383±0.0052
BPR 200 0 1e-5 30 0 0.0933±0.0025 0.1192±0.0052 0.1002±0.0025 0.1141±0.0043 0.0875±0.0019 0.0375±0.002 0.2426±0.0052
WRMF 200 10 100 10 0 0.097±0.0026 0.1235±0.0053 0.1039±0.0025 0.1198±0.0045 0.091±0.002 0.0411±0.0022 0.2668±0.0058
CDAE 200 0 1e-5 300 0.5 0.0941±0.0025 0.1297±0.0056 0.1032±0.0028 0.1226±0.0047 0.0891±0.0021 0.035±0.0018 0.2177±0.0047
VAE-CF 200 0 1e-5 200 0.4 0.0892±0.0025 0.1066±0.0048 0.093±0.0022 0.1054±0.0039 0.0827±0.0017 0.0376±0.002 0.2449±0.0054
AutoRec 200 0 1e-5 300 0 0.0945±0.0025 0.1254±0.0054 0.1017±0.0026 0.1194±0.0045 0.0877±0.0019 0.0377±0.002 0.2398±0.0052
Q-VAE 200 0 0.1 200 0 0.1±0.0026 0.1306±0.0055 0.1066±0.0026 0.125±0.0046 0.0917±0.0020 0.0404±0.0021 0.2504±0.0054

Table 2: Results on the Netflix dataset with 95% confidence intervals. Hyper-parameters are chosen on the validation set.

model rank α λ epochs ρ R-Precision MAP@5 MAP@50 Precision@5 Precision@50 Recall@5 Recall@50

PureSVD 50 0 1 10 0 0.0994±0.0003 0.159±0.0007 0.118±0.0003 0.146±0.0005 0.0953±0.0003 0.0445±0.0003 0.2188±0.0006
BPR 50 0 1e-5 30 0 0.0757±0.0002 0.1197±0.0006 0.096±0.0003 0.115±0.0005 0.0816±0.0002 0.0291±0.0002 0.1859±0.0006
WRMF 200 10 1e4 10 0 0.0985±0.0003 0.1531±0.0007 0.117±0.0003 0.1447±0.0006 0.096±0.0003 0.045±0.0003 0.2325±0.0007
CDAE 50 0 1e-5 300 0.2 0.0797±0.0003 0.1251±0.0006 0.0979±0.0003 0.1198±0.0005 0.0832±0.0002 0.0323±0.0002 0.1788±0.0006
VAE-CF 100 0 1e-4 300 0.5 0.1017±0.0003 0.1559±0.0007 0.1176±0.0003 0.1465±0.0005 0.0957±0.0003 0.0467±0.0003 0.2309±0.0006
AutoRec 50 0 1e-5 300 0 0.0876±0.0003 0.14±0.0006 0.1074±0.0003 0.1324±0.0005 0.0894±0.0003 0.0361±0.0002 0.1958±0.0006
Q-VAE 100 0 1e-5 200 0 0.0976±0.0003 0.1593±0.0007 0.1194±0.0003 0.1488±0.0006 0.0972±0.0003 0.0429±0.0003 0.2303±0.0006

Table 3: Summary of datasets used in evaluation.

Dataset #Users #Items |r_ij > ϑ| Sparsity

MovieLens-1m 6,038 3,533 575,281 2.69 × 10^−2
Netflix Prize 2,649,430 17,771 56,919,190 1.2 × 10^−3

3 EXPERIMENTS AND EVALUATION

In this section, we evaluate Q-VAE by comparing it to a variety of scalable state-of-the-art One-Class Collaborative Filtering (OC-CF) algorithms on two different benchmark datasets. The comparison covers: (i) recommendation precision performance, (ii) latent representation uncertainty, and (iii) convergence speed.

Datasets: We evaluate the candidate algorithms on two publicly available datasets: Movielens-1M¹ and Netflix Prize², where in both datasets ratings range from 1 to 5. For both datasets, we binarize the ratings based on a threshold ϑ = 3, defined to be the upper half of the rating range. Hence, a rating r_ij > ϑ is considered positive feedback; otherwise, it is considered negative feedback.

Evaluation Metrics: We evaluate the recommendation performance using five different metrics: Precision@K, Recall@K, MAP@K, R-Precision, and NDCG.

Candidate Methods: We compare Q-VAE with six different CF algorithms, ranging from classical Matrix Factorization to the latest VAE for CF. These algorithms are:

• PureSVD [10]: A method that constructs a similarity matrix through randomized SVD decomposition of the implicit matrix R.

• BPR [11]: Bayesian Personalized Ranking. One of the first recommenders to explicitly optimize a pairwise ranking objective.

• WRMF [12]: Weighted Regularized Matrix Factorization. A matrix factorization method designed for OC-CF.

¹ https://grouplens.org/datasets/movielens/1m/
² https://www.kaggle.com/netflix-inc/netflix-prize-data

• AutoRec [1]: Autoencoder-based recommendation system with one hidden layer, ReLU activation, and a sigmoid cross-entropy loss.

• CDAE [4]: Collaborative Denoising Autoencoder, which is specifically optimized for implicit feedback recommendation tasks.

• VAE-CF [6]: Variational Autoencoder for CF. A state-of-the-art metric learning based recommender system.
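The rating binarization described under Datasets can be sketched in a few lines; the ratings vector below is hypothetical:

```python
import numpy as np

# Binarize explicit ratings: ratings strictly above the threshold
# ϑ = 3 become positive (1) implicit feedback, the rest negative (0).
theta = 3
ratings = np.array([1, 2, 3, 4, 5, 4, 2])
implicit = (ratings > theta).astype(int)
assert implicit.tolist() == [0, 0, 0, 1, 1, 1, 0]
```

Note that a rating equal to ϑ counts as negative, since the criterion is strict (r_ij > ϑ).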

Ranking Performance Evaluation: Tables 1 and 2 show the general performance comparison of Q-VAE with the six baselines using the R-Precision, MAP, Precision@K, and Recall@K metrics. From the results obtained, we make the following observations: (i) In general, Q-VAE achieves competitive prediction performance w.r.t. state-of-the-art recommendation algorithms such as WRMF and VAE-CF. (ii) Q-VAE outperforms all candidates on MAP and Precision@K, at the expense of Recall@K as a trade-off.

Performance vs. User Interaction Level: We investigate the conditions under which Q-VAE achieves significantly higher prediction performance than the baselines. To this end, we categorize users into 4 categories based on their number of interactions in the training set. The category boundaries are the 25%, 50%, and 75% quartiles of the number of training interactions, which indicate how often the user rated items in the training set.
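The quartile-based user categorization can be sketched as follows; the interaction counts below are hypothetical:

```python
import numpy as np

# Split users into four activity categories at the 25%, 50%, and 75%
# quantiles of their training-interaction counts.
counts = np.array([5, 12, 18, 25, 33, 47, 60, 90, 150, 400])
edges = np.quantile(counts, [0.25, 0.5, 0.75])
category = np.digitize(counts, edges)  # 0..3, from low- to high-activity
assert category.min() == 0 and category.max() == 3
```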

Figure 3 shows the performance comparison for the different user categories. We note that CDAE, compared to AutoRec, performs poorly for users with sparse historical interactions. This reflects our intuition that simple random corruption of inputs (i.e., "denoising") hurts performance for users with sparse observations. WRMF and VAE-CF both perform well with sparse user interactions but poorly for users with many interactions. In comparison, Q-VAE shows relatively stable and good performance over all four user categories and a significant prediction performance improvement over VAE-CF, especially for users with a large number of interactions.

User Representation Uncertainty: Both VAE-CF and Q-VAE explicitly model the user latent representation distributions. Hence, in


Figure 3: Average NDCG and R-Precision comparison for different quantiles of user activity (number of ratings binned into the ranges [0, 19], [20, 41], [42, 87], and [88, 1005]) for MovieLens-1m, comparing CDAE, WRMF, AutoRec, PureSVD, VAE-CF, Q-VAE, and BPR. Error bars show the standard deviation of the NDCG across users in that bin.

Figure 4: Standard deviation of the isotropic Gaussian latent representations of users (averaged over users having a given number of ratings) for VAE-CF and Q-VAE on Movielens-1m; VAE-CF is overconfident with a high number of user ratings.

this experiment, we analyze the latent representation uncertainty of users vs. their number of ratings. As shown in Figure 4, VAE-CF tends to assign high certainty to users with a large number of interactions, even though a large number of interactions often requires more uncertainty to cover the range of preferences. The caveat of this over-certainty is reflected in our observation from Figure 3, where VAE-CF performs poorly for users with a large number of interactions as compared to Q-VAE (rightmost chart).

Convergence Profile: We track the convergence progress of the four Autoencoder-based recommenders in Figure 5. It shows that VAE-based algorithms converge faster than the original Autoencoder approaches (which tend to overfit). Q-VAE benefits from relatively fast and smooth convergence without overfitting due to the mutually structured regularization of its two objectives.

4 CONCLUSION

In this paper, we proposed the Queryable Variational Autoencoder (Q-VAE) as a way to explicitly condition recommendations in one-class collaborative filtering on observed user preferences, in order to better model the latent uncertainty of user preferences. Our experiments show that the Q-VAE not only converges faster, but also outperforms several state-of-the-art Autoencoder-based recommendation models. We also showed that Q-VAE avoids over-confidence with a large number of user preferences, leading to strong recommendation performance across the user preference density spectrum.

Figure 5: NDCG versus training epochs for the four Autoencoder-based recommendation algorithms (CDAE, VAE-CF, Q-VAE, AutoRec).

REFERENCES
[1] Suvash Sedhain, Aditya Krishna Menon, Scott Sanner, and Lexing Xie. AutoRec: Autoencoders meet collaborative filtering. In Proceedings of the 24th International Conference on World Wide Web, pages 111–112. ACM, 2015.
[2] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371–3408, 2010.
[3] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pages 1096–1103. ACM, 2008.
[4] Yao Wu, Christopher DuBois, Alice X. Zheng, and Martin Ester. Collaborative denoising auto-encoders for top-n recommender systems. In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining, pages 153–162. ACM, 2016.
[5] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. 2013.
[6] Dawen Liang, Rahul G. Krishnan, Matthew D. Hoffman, and Tony Jebara. Variational autoencoders for collaborative filtering. arXiv preprint arXiv:1802.05814, 2018.
[7] Xiaopeng Li and James She. Collaborative variational autoencoder for recommender systems. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 305–314. ACM, 2017.
[8] Yifan Chen and Maarten de Rijke. A collective variational autoencoder for top-n recommendation with side information. In Proceedings of the 3rd Workshop on Deep Learning for Recommender Systems, pages 3–9. ACM, 2018.
[9] Rui Shu, Hung H. Bui, and Mohammad Ghavamzadeh. Bottleneck conditional density estimation. In Proceedings of the 34th International Conference on Machine Learning, ICML'17, pages 3164–3172. JMLR.org, 2017.
[10] Paolo Cremonesi, Yehuda Koren, and Roberto Turrin. Performance of recommender algorithms on top-n recommendation tasks. In Proceedings of the Fourth ACM Conference on Recommender Systems, pages 39–46. ACM, 2010.
[11] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 452–461. AUAI Press, 2009.
[12] Yifan Hu, Yehuda Koren, and Chris Volinsky. Collaborative filtering for implicit feedback datasets. In ICDM'08: Eighth IEEE International Conference on Data Mining, pages 263–272. IEEE, 2008.