
Wide & Deep Learning for Recommender Systems

Notes on the paper.

Background #

Recommendation can be viewed as matching queries to items. Both recommendation and search need to solve two problems: memorization and generalization.

Memorization #

Memorization can be loosely defined as learning the frequent co-occurrence of items or features and exploiting the correlation available in the historical data.

Memorization mines historical data for correlations between items and features. It is usually implemented with a large-scale linear model (e.g., LR) plus cross-product features, and it improves the relevance of recommendations. Its limitation is that it cannot discover query-item relationships that never appeared in the training data.

Generalization #

Generalization, on the other hand, is based on transitivity of correlation and explores new feature combinations that have never or rarely occurred in the past.

Generalization can explore feature relationships that have not been seen before, which improves the diversity of recommendations. However, it struggles to memorize users' fine-grained preferences, and when query-item data is sparse it is hard to learn good feature representations, so the recommended items can be less relevant.

Model #

(Figure: the Wide & Deep model architecture.)

The paper proposes the Wide & Deep model, which combines the two: the wide part handles memorization and the deep part handles generalization. The two parts are jointly trained and complement each other.

The Wide Component #

The wide component is a linear model, y = wᵀx + b. Its input includes the raw features as well as cross-product feature transformations, which add nonlinearity to the linear model.
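A minimal sketch of the wide component over sparse binary features: a cross-product feature fires only when every feature in the cross is active (AND semantics). The feature names and weights below are hypothetical illustrations, not values from the paper.

```python
def cross_product(features, cross):
    # phi_k(x) = 1 only if every component feature of the cross is active.
    return 1.0 if all(features.get(f, 0.0) == 1.0 for f in cross) else 0.0

def wide_logit(features, weights, crosses, bias=0.0):
    # y = w^T [x, phi(x)] + b over sparse binary inputs.
    z = bias
    for name, value in features.items():
        z += weights.get(name, 0.0) * value
    for cross in crosses:
        z += weights.get("&".join(cross), 0.0) * cross_product(features, cross)
    return z

# Hypothetical example features and weights.
x = {"gender=female": 1.0, "language=en": 1.0}
crosses = [("gender=female", "language=en")]
w = {"gender=female": 0.25, "language=en": 0.125,
     "gender=female&language=en": 0.5}
print(wide_logit(x, w, crosses))  # → 0.875 (0.25 + 0.125 + 0.5)
```

The cross-product weight lets the model memorize the specific co-occurrence beyond what the two individual feature weights express.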

The Deep Component #

The deep component is a feed-forward neural network. Each categorical feature in the raw input is first mapped to a dense embedding vector, typically 10 to 100 dimensional.
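A toy sketch of the deep component in plain Python: categorical IDs are looked up in embedding tables, concatenated with continuous features, and pushed through ReLU layers. Dimensions here are toy-sized for illustration (the paper uses 10 to 100-dim embeddings), and the weights are random rather than learned.

```python
import random

random.seed(0)

def make_embedding_table(vocab_size, dim):
    # Each row is the dense vector for one categorical ID (random stand-in
    # for learned embeddings).
    return [[random.uniform(-0.05, 0.05) for _ in range(dim)]
            for _ in range(vocab_size)]

def relu_layer(x, w, b):
    # One fully connected layer: max(0, W x + b).
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

def deep_forward(cat_ids, cont_feats, tables, layers):
    # Concatenate the embeddings of all categorical fields with the
    # continuous inputs, then apply the hidden layers.
    a = [v for table, i in zip(tables, cat_ids) for v in table[i]] + cont_feats
    for w, b in layers:
        a = relu_layer(a, w, b)
    return a

tables = [make_embedding_table(vocab_size=100, dim=4)]
input_dim = 4 + 1                                 # one embedded field + one continuous feature
layers = [([[0.1] * input_dim] * 3, [0.0] * 3)]   # a single 3-unit hidden layer
out = deep_forward([7], [0.5], tables, layers)
print(len(out))  # → 3
```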

Joint Training of Wide & Deep Model #

The outputs of the wide and deep parts are combined as a weighted sum and fed through a sigmoid to produce the final output, the probability that y = 1; the model is trained with the logistic loss.

P(Y = 1 | x) = σ(w_wideᵀ [x, φ(x)] + w_deepᵀ a^(lf) + b)

where φ(x) are the cross-product transformations, a^(lf) is the final activation of the deep network, and σ is the sigmoid.

The wide part is optimized with FTRL plus L1 regularization; the deep part is optimized with AdaGrad.
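A toy sketch of one aspect of joint training: both parts receive gradients from the same logistic loss in the same step, each with its own update rule. To keep it short, the deep network is collapsed to a single linear layer, and plain SGD stands in for FTRL (FTRL's sparsity machinery is omitted); only AdaGrad is implemented as in the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logit(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

w_wide = [0.0, 0.0]
w_deep = [0.0, 0.0, 0.0]
g2_deep = [0.0, 0.0, 0.0]   # AdaGrad accumulators for the deep weights
x_wide, x_deep, y = [1.0, 1.0], [0.2, -0.1, 0.4], 1.0   # one toy example
lr = 0.1

for _ in range(100):
    p = sigmoid(logit(w_wide, x_wide) + logit(w_deep, x_deep))
    err = p - y              # gradient of the logistic loss w.r.t. the logit
    # Joint training: both parts are updated from the same loss at the same time.
    for i, xi in enumerate(x_wide):
        w_wide[i] -= lr * err * xi                               # SGD stand-in for FTRL
    for i, xi in enumerate(x_deep):
        g = err * xi
        g2_deep[i] += g * g
        w_deep[i] -= lr * g / (math.sqrt(g2_deep[i]) + 1e-8)     # AdaGrad

final_p = sigmoid(logit(w_wide, x_wide) + logit(w_deep, x_deep))
print(round(final_p, 3))     # approaches 1.0 for this positive example
```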

The difference between joint training and ensembling: #

In an ensemble, individual models are trained separately without knowing each other, and their predictions are combined only at inference time but not at training time. In contrast, joint training optimizes all parameters simultaneously by taking both the wide and deep part as well as the weights of their sum into account at training time.

When ensembling, each individual model needs to be large enough (more features and feature transformations). With joint training, the wide part only needs to complement the weaknesses of the deep part with a small number of cross-product feature transformations, rather than being a full-size wide model.

System Implementation #

Overall pipeline:

(Figure: the overall recommendation pipeline.)

Data Generation #

  • Build a vocabulary that maps categorical features to IDs, using a threshold to filter out features that occur in too few examples.
  • Normalize continuous features: 1. map each value to its cumulative distribution function P(X <= x), normalizing it into [0, 1]; 2. then discretize into n_q buckets (equal-frequency bucketing?), mapping a value that falls into the i-th bucket to (i - 1)/(n_q - 1).
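The normalization step above can be sketched as follows, assuming the bucket boundaries are the empirical quantiles of the training values (which makes the buckets equal-frequency):

```python
from bisect import bisect_right

def quantile_boundaries(values, nq):
    # Boundaries that split the sorted training values into nq equal-count buckets.
    s = sorted(values)
    return [s[int(len(s) * k / nq)] for k in range(1, nq)]

def normalize(x, boundaries, nq):
    # A value in the i-th bucket (1-based) becomes (i - 1) / (nq - 1).
    i = bisect_right(boundaries, x) + 1
    return (i - 1) / (nq - 1)

train = list(range(100))                  # toy training distribution: 0..99
nq = 5
bounds = quantile_boundaries(train, nq)   # [20, 40, 60, 80]
print([normalize(v, bounds, nq) for v in (0, 25, 50, 75, 99)])
# → [0.0, 0.25, 0.5, 0.75, 1.0]
```

Mapping through the empirical CDF this way makes the feature's distribution roughly uniform on [0, 1], regardless of the original scale.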

Model Training #

The model used for Google Play app recommendation:

(Figure: the Wide & Deep model structure for app recommendation.)

Features #
  • The wide part uses the cross-product feature of impressed apps and installed apps.
  • In the deep part, each categorical feature is embedded into a 32-dimensional vector; the continuous features are then concatenated with the embedded categorical features into a vector of about 1,200 dimensions, followed by 3 fully connected layers with ReLU activations.
Training #
  • About 500 billion training examples are used, and the model is retrained whenever new data arrives. When retraining, the new model is initialized (warm-started) with the embeddings and linear-model weights of the previous model.

References #

[1] Wide & Deep Learning for Recommender Systems https://arxiv.org/abs/1606.07792
[2] Why are continuous features discretized in the Wide & Deep model? https://www.zhihu.com/question/264015592