RecSys 2018: recommender systems that care!
Going single-track
RecSys changed its format this year and decided to go single-track. I’ve always enjoyed single-track conferences a lot, so I am definitely biased, but the single-track format has a lot going for it. One of its unexpected advantages is that it helps bring people from various backgrounds around the same problem/topic. This is particularly useful in conferences like RecSys where the background of participants is very diverse (covering algorithms, math, sociology, machine learning, data science, UI/UX, etc.). Another way of looking at this is that single-track forces the discovery of new topics. As a participant, you end up attending talks that you would not have attended in a multi-track setting, where people tend to stick to the tracks they are most familiar with.
One of the downsides of single-track is that the schedule becomes quite packed and leaves little room for discussions or Q&A. In my opinion, this is not actually a problem, as participants have plenty of time to discuss during the coffee breaks.
It is unclear whether RecSys will go single-track again next year but I look forward to hearing what their take on this question is. Let’s see!
Towards more useful recommendations
It is now well-known that recommender systems play a crucial role in our everyday lives. The RecSys community is making significant progress towards making these systems genuinely useful.
Two keynotes were particularly instrumental this year. First, Elizabeth Churchill (Google) urged the participants to be proactive and act independently (“there is agency!”). She also insisted on the need to avoid building what she referred to as “dinosaurs” (useless/dangerous products born from a “very cool” and readily-available technical solution). The second (outstanding) keynote was delivered by Christopher Berry (@cjpberry).
In reality, even though much remains to be done, the community is already hard at work. This is visible in a number of accepted papers this year, such as Explore, Exploit, and Explain: Personalizing Explainable Recommendations with Bandits, by James McInerney et al., Mixed methods for evaluating user satisfaction, a tutorial by Spotify, and Interpreting User Inaction in Recommender Systems, by Qian Zhao et al.
Joseph Konstan’s talk at the REVEAL workshop also turned out to be a key contribution on this topic. What Joseph is essentially telling us here is that if the recommender systems we are building are trained to predict the very items that users found by themselves without any recommendation (yes, I’m looking at you, Precision_at_k), then the usefulness of the recommender becomes very debatable. While we should keep using the many metrics that we’ve built so far (we love measuring things and that’s a good thing!), most of the challenge now lies in building a bridge between these metrics and the real targets we want to optimise for (CTR, time on site, referrals, purchases, etc.).
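For reference, the kind of offline metric in question is easy to state. Here is a minimal sketch of precision@k with made-up item ids (purely illustrative, not taken from any talk):

```python
def precision_at_k(recommended, relevant, k=10):
    """Fraction of the top-k recommended items the user actually interacted with."""
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / k

# Toy example: if the "relevant" set is simply what the user would have found
# on their own anyway, a high score says little about the recommender's added value.
recommended = ["a", "b", "c", "d", "e"]
found_by_user = {"b", "d", "x"}
print(precision_at_k(recommended, found_by_user, k=5))  # 0.4
```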
Some of the workshops contribute directly to making recommenders more useful, such as FatRec (fairness) and REVEAL (offline evaluation).
It was also notable that the best long paper award went to Causal Embeddings for Recommendation, by Stephen Bonner and Flavian Vasile from Criteo Labs. The paper tackles the task of increasing a desired outcome relative to organic user behavior, which is certainly a healthy way to evaluate recommenders in general. As Xavier Amatriain mentioned during the award ceremony, it is particularly nice to see this fundamental topic being tackled by industry. This is a sign of very healthy interaction between academia and industry at RecSys.
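To make that framing concrete, here is a toy back-of-the-envelope illustration of measuring an incremental effect over organic behaviour. The numbers and names are made up, and this only captures the evaluation intuition, not the method proposed in the paper:

```python
# Toy illustration: how much of the observed outcome did the recommender actually cause,
# compared to what users would have done organically? All numbers are fabricated.
clicks_with_rec = 120    # clicks in a group exposed to recommendations
users_with_rec = 1000
clicks_organic = 80      # clicks in a held-out group with no recommendations
users_organic = 1000

ctr_with_rec = clicks_with_rec / users_with_rec   # 0.12
ctr_organic = clicks_organic / users_organic      # 0.08
incremental_lift = ctr_with_rec - ctr_organic     # 0.04: the recommender's causal contribution
print(f"Incremental lift: {incremental_lift:.2%}")
```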
Fair recommenders
Fairness is taking off seriously at RecSys. While it is sometimes framed in purely ethical terms, the conference highlighted several concrete approaches to the topic.
One particular approach that struck me was Calibrated Recommendations, by Harald Steck from Netflix. The general idea is to satisfy the constraint that the items recommended to you should follow a distribution similar to that of the movies you watched in the past. So for instance, if you’ve been watching 70% action movies and 30% romantic movies, then the recommender should show you (roughly) 70% action movies and 30% romantic movies, even though it could be a better option for Netflix to deliver a different distribution.
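Here is a minimal sketch of the calibration idea. The genre names, smoothing constant and the KL-based measure below are illustrative assumptions, not the exact formulation from the paper:

```python
from collections import Counter
import math

def genre_distribution(items, item_genres):
    """Normalised genre distribution over a list of item ids."""
    counts = Counter(g for item in items for g in item_genres[item])
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def calibration_gap(history_dist, rec_dist, eps=1e-6):
    """KL divergence between the history distribution p and the recommendation
    distribution q; 0 means the slate is perfectly calibrated to the history."""
    return sum(
        p_g * math.log((p_g + eps) / (rec_dist.get(g, 0.0) + eps))
        for g, p_g in history_dist.items()
        if p_g > 0
    )

# Toy example: a 70/30 action/romance history vs. an all-action slate.
item_genres = {"i1": ["action"], "i2": ["action"], "i3": ["romance"],
               "r1": ["action"], "r2": ["action"]}
history = ["i1", "i1", "i1", "i2", "i2", "i2", "i2", "i3", "i3", "i3"]
slate = ["r1", "r2"]
p = genre_distribution(history, item_genres)   # {'action': 0.7, 'romance': 0.3}
q = genre_distribution(slate, item_genres)     # {'action': 1.0}
print(calibration_gap(p, q))  # > 0: the slate is not calibrated to the 70/30 history
```

The paper itself goes further and proposes a re-ranking step that trades off relevance against a calibration term of this kind, but the distribution comparison above is the core of the idea.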
Fairness now has a dedicated workshop (FatRec) which attracted a strong crowd and will most likely impact RecSys and other conferences in the years to come.
No, deep learning is not taking over the world…
Algorithms are still a key aspect of recommendation, accounting for a bit more than 50% of the accepted papers.
Despite what is regularly said on social media, deep learning is *not* taking over the world in recommender systems. While deep learning contributions are still significant, a number of participants did not hesitate to present methods that are by no means deep and yet provide very elegant and efficient solutions to their problem. Examples include Building Recommender Systems with Strict Privacy Boundaries by Renaud Bourassa (Slack) and Artwork Personalization at Netflix, by Fernando Amat (Netflix).
In fact, this year’s session of the Deep Learning workshop (DLRS) was the third and last one, as the workshop organisers consider that the mission of the workshop has been accomplished (helping deep learning ramp up in the community). While I was a bit sad and stunned to learn this at first, it occurred to me that… well yeah, that’s actually probably right and a good thing as well!
Examples of deep learning contributions this year include Interactive Recommendation via Deep Neural Memory Augmented Contextual Bandits, by Yilin Shen et al. and Spectral Collaborative Filtering, by Lei Zheng et al.
… but frameworks are!
While deep learning is not taking over the world, frameworks, on the other hand, are making the lives of researchers and engineers alike a lot easier year on year. I could go on and on about the various novelties of TensorFlow and PyTorch, but to keep things short, I found one example particularly striking: Even Oldridge from Realtor.com presented their work on Adapting Session Based Recommendation for Features Through Transfer Learning and explained how they used fast.ai to build their pipeline. That a company was able to build an entire production-grade pipeline on top of fast.ai speaks volumes about the impact of frameworks.
This place is a lot of fun!!
The conference maintains its top-notch capacity for fun! What happens in Vancouver stays in Vancouver, but we did see a number of very cool contributions to the karaoke on Thursday evening :-) And the banquet was just plain outstanding. See you all next year!