What if your dislikes were just as useful as your likes for recommendation algorithms?
You join a new streaming service. You've never rated anything. The system doesn't know if you love horror films or hate them, whether you gravitate toward documentaries or avoid them entirely. This is the cold-start problem - one of the most persistent challenges in recommendation systems - and it affects every user who moves to a new platform.
The standard solution is cross-domain recommendation (CDR): transfer what's known about a user's preferences from one platform to another. If you loved certain books on Amazon, the system can make educated guesses about which movies you'd enjoy. But most CDR systems only pay attention to what you liked. They ignore what you rated poorly.
That's a mistake, argues a team from Doshisha University in Japan. Your dislikes define your preferences just as sharply as your likes - and a new framework they've built makes the case.
The problem with positive-only transfer
Conventional CDR models focus on highly rated items when building a user profile to transfer across domains. A 5-star book review carries weight; a 1-star review gets discarded or lumped together with everything else. The intuition seems reasonable: likes tell you what someone wants.
But dislikes draw boundaries. A user who gives low ratings to every romance novel signals something precise about their preferences - information that gets lost when the model only considers top ratings. Worse, ignoring low ratings can lead to misinterpretations. When a user gives an unexpectedly high rating to something in a category they usually rate low, positive-only models can't distinguish genuine enthusiasm from noise.
Associate Professor Keiko Ono and colleagues set out to build a system that treats negative feedback as a first-class signal rather than noise to be discarded.
Separating signals, then fusing them
Their framework, called DUPGT-CDR (Deep User Preference Gating Transfer for Cross-Domain Recommendation), works in four major steps. The key innovation is separating ratings into high (4-5 stars) and low (1-3 stars) categories and encoding them independently into distinct feature vectors. Instead of compressing all of a user's history into a single representation, the model maintains two parallel channels of preference information.
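The split-and-encode step can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes each item already has an embedding vector and uses simple mean-pooling per channel, where the real model would use learned deep encoders. The threshold of 4 stars matches the high/low split described above.

```python
import numpy as np

def encode_preferences(ratings, item_embeddings, threshold=4):
    """Split a user's rating history into two channels and pool each.

    ratings:          dict mapping item_id -> star rating (1-5)
    item_embeddings:  dict mapping item_id -> np.ndarray of shape (d,)
    Returns (positive_vec, negative_vec), one vector per channel.
    """
    d = len(next(iter(item_embeddings.values())))
    # High ratings (4-5 stars) and low ratings (1-3 stars) are kept apart
    high = [item_embeddings[i] for i, r in ratings.items() if r >= threshold]
    low = [item_embeddings[i] for i, r in ratings.items() if r < threshold]
    # Mean-pool each channel; fall back to zeros if a channel is empty
    pos = np.mean(high, axis=0) if high else np.zeros(d)
    neg = np.mean(low, axis=0) if low else np.zeros(d)
    return pos, neg
```

The important design point is that `pos` and `neg` never get averaged together here - the model carries both forward so the fusion step can weigh them per prediction.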
Once these separate vectors are generated, a gating network adaptively fuses them - determining how much weight to give the positive and negative signals for each specific prediction. This is crucial because the relative importance of likes and dislikes varies by user and by item. The gated output then feeds into a personalized bridge function that produces user-and-item-specific embeddings in the target domain.
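A gating network of this general shape can be written as a sigmoid gate over the concatenated channels. The parameter names (`W_g`, `b_g`) and the per-dimension convex combination are illustrative assumptions - the paper's actual gating architecture may be deeper - but the sketch shows the core idea: the gate's output decides, dimension by dimension, how much the positive versus negative signal contributes.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(pos, neg, W_g, b_g):
    """Adaptively fuse positive and negative preference vectors.

    pos, neg: channel vectors of shape (d,)
    W_g:      gate weights of shape (d, 2d)   [illustrative parameters]
    b_g:      gate bias of shape (d,)
    """
    # The gate sees both channels and outputs a weight in (0, 1) per dimension
    g = sigmoid(W_g @ np.concatenate([pos, neg]) + b_g)
    # Convex combination: g -> rely on likes, (1 - g) -> rely on dislikes
    return g * pos + (1.0 - g) * neg
```

With zero-initialized parameters the gate sits at 0.5 and the fusion is a plain average; training moves it toward whichever channel is more informative for a given user-item pair.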
Testing on Amazon review data
The researchers evaluated DUPGT-CDR using three cross-domain tasks derived from the Amazon-5cores review dataset: Book-to-Music, Book-to-Movie, and Movie-to-Music transfers. They compared their approach against established models including EMCDR, PTUPCDR, and MIMNET across varying proportions of overlapping users between domains.
Across all settings, DUPGT-CDR achieved lower prediction errors. Mean absolute error dropped by up to 17.9%, and root mean square error fell by up to 20.9% compared to existing models.
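For readers unfamiliar with the two error metrics, they are straightforward to compute from predicted and actual ratings (standard definitions, not code from the study):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of rating-prediction errors."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    """Root mean square error: like MAE, but penalizes large misses more."""
    err = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(err ** 2)))
```

Lower is better for both; RMSE's squaring means a model that avoids occasional badly wrong predictions is rewarded more than one that is merely good on average.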
An interesting control finding: simply adding low-rating data to existing models often made them worse. Without a mechanism to manage the conflicting signals between positive and negative feedback, the additional information introduced noise. The gating network in DUPGT-CDR is what makes low ratings useful rather than disruptive - it learns when negative feedback is informative and when to down-weight it.
Convergence speed as a bonus
Beyond accuracy, DUPGT-CDR also converged faster during training. The separate encoding of positive and negative signals appears to give the model a clearer optimization landscape, reducing the number of training iterations needed to reach peak performance.
Practical scope and limitations
The applications extend beyond entertainment recommendations. Ono highlighted e-commerce, educational content, and communication platforms as domains where transferring nuanced preference profiles - including dislikes - could improve personalization.
But limitations are worth noting. The evaluation used Amazon review data, which is structured and relatively clean. Real-world deployment across platforms with different rating scales, implicit feedback (clicks, dwell time), or no ratings at all would introduce complications the current framework doesn't address. The study also focused on users who overlap between domains - people with accounts on both platforms - which is a subset of the broader cold-start population.
The work was published in IEEE Access in February 2026 and supported by a JSPS KAKENHI grant. It represents a technical advance within the CDR subfield rather than a deployed product, and the gap between benchmark improvements and real-world recommendation quality remains an open question.
Still, the core insight is compelling: your 1-star ratings carry information that recommendation systems have been throwing away. A framework that listens to both your enthusiasm and your aversion can build a sharper picture of who you are - even on a platform where you've never clicked a single thing.