Here are the highlights:
- XMRec involves learning from a large source market (e.g., the US), which has abundant interaction data, to improve recommendation performance in a target market (e.g., Japan), which has far fewer interactions.
- Prior work on XMRec has shown that meta-learning models perform well on this task, but meta-learning, while effective, requires significant time ⏱️ and 💻 compute. These models capture the market implicitly, whereas we model it explicitly via market embeddings.
- We show that a simple modification of baselines not only improves performance but also matches, and sometimes even beats, meta-learning based models while requiring fewer resources to train!
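
To give an intuition for the idea (the paper's exact formulation isn't reproduced here), here is a minimal sketch of what conditioning a matrix-factorization baseline on an explicit market embedding could look like. All names, shapes, and the additive combination of item and market factors are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_markets, d = 100, 50, 3, 8

# Standard matrix-factorization factors (the baseline).
U = rng.normal(scale=0.1, size=(n_users, d))    # user embeddings
V = rng.normal(scale=0.1, size=(n_items, d))    # item embeddings

# The "simple modification": one learned vector per market, so the
# model conditions on the market explicitly instead of leaving it
# implicit in a meta-learned adaptation step.
M = rng.normal(scale=0.1, size=(n_markets, d))  # market embeddings

def score(user: int, item: int, market: int) -> float:
    """Predicted affinity: user dot (item + market).

    Adding the market vector to the item factor lets the same
    user-item pair score differently in different markets.
    """
    return float(U[user] @ (V[item] + M[market]))

# Same user and item, two markets -> two (generally different) scores.
s_source = score(0, 0, 0)
s_target = score(0, 0, 1)
```

In training, `M` would simply be another embedding table updated by the usual optimizer, which is why this adds almost no cost compared to a meta-learning loop.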