
From Doomscrolling Filters to ‘I’m Not Sure’: How New AI Tools Are Trying to Make Themselves Less Addictive

Why AI Is Moving Beyond Infinite Feeds and Perfect Confidence

For years, the most successful digital products have optimised for one thing: attention. Infinite feeds, autoplay videos and ever-fresh notifications pushed users toward compulsive scrolling, while modern AI systems layered on confident answers delivered in a single, frictionless tap. The result is a familiar double bind: we rely on these tools for news, work and connection, yet feel drained by doomscrolling and wary of AI hallucinations. A new wave of products and research is trying to redraw this trade-off. Instead of maximising time-on-screen at all costs, they emphasise AI digital wellbeing, responsible AI design and calibrated AI confidence. Tools like Noscroll filter and summarise overwhelming social feeds, and new training methods from MIT encourage AI models to say “I’m not sure” when evidence is thin. Together, they hint at a future where AI systems are designed not only to be powerful, but also to be selectively quiet.

Noscroll: An AI Doomscrolling Tool That Curates Instead of Hooks

Noscroll positions itself as an AI doomscrolling tool that strips away the addictive mechanics of social media. Its tagline ("No feed. No brainrot. No ragebait. Just signal.") captures the promise: short, relevant updates instead of endless timelines. Users start by texting an AI agent, connect their X account, and describe topics to follow or avoid. In response, Noscroll generates sample digests drawn from X, news sites, blogs, Reddit, Hacker News, Substack and even research papers. Behind the scenes, multiple off-the-shelf models are orchestrated with prompts so the bot can keep a consistent voice while scanning broad sources. Over time, it learns from clicks and interactions to personalise what matters most. The goal is AI digital wellbeing: helping people stay informed about everything from tech trends to local news without being pulled back into ragebait loops or repetitive negativity.
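
Noscroll has not published its internals, so the following is only a hypothetical sketch of the pattern its description implies: per-user follow/avoid topic lists, plus one shared system prompt that keeps the voice consistent across interchangeable models. All names, the prompt text and the `summarise` callable are invented for illustration.

```python
from dataclasses import dataclass, field

# A shared "voice" prompt, reused no matter which model handles a source.
VOICE_PROMPT = (
    "You are a calm news curator. Summarise each item in two neutral "
    "sentences, with no ragebait framing."
)

@dataclass
class UserPrefs:
    follow: set[str] = field(default_factory=set)  # topics the user asked for
    avoid: set[str] = field(default_factory=set)   # topics to filter out

def relevant(item_topics: set[str], prefs: UserPrefs) -> bool:
    # Drop anything touching an avoided topic; keep only followed topics.
    if item_topics & prefs.avoid:
        return False
    return bool(item_topics & prefs.follow)

def build_digest(items: list[dict], prefs: UserPrefs, summarise) -> list[str]:
    # `summarise` stands in for any off-the-shelf model call that accepts a
    # system prompt and text; reusing VOICE_PROMPT keeps the tone consistent
    # regardless of which model sits behind it.
    kept = [item for item in items if relevant(set(item["topics"]), prefs)]
    return [summarise(VOICE_PROMPT, item["text"]) for item in kept]
```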

The Promise and Peril of Curated AI Feeds

Curated AI feeds like Noscroll’s introduce a different kind of friction into attention-hungry systems. Instead of encouraging constant checking, they compress hours of scrolling into a concise digest, functioning like a digital assistant for niche interests, jobs or politics. This can reduce anxiety and fatigue by filtering out repetitive or emotionally draining content. But curation cuts both ways. Over-filtering may cause important viewpoints or emerging stories to vanish from view. Personalisation that learns from clicks can quietly reinforce bias, amplifying what users already agree with and downplaying uncomfortable yet necessary information. Responsible AI design therefore requires transparency about what sources are included, how topics are ranked, and how users can adjust or reset preferences. As more AI doomscrolling tools appear, the key question becomes not just “What did this system show me?” but “What might it have left out—and why?”
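
One concrete way to make that question answerable is for a curated feed to keep an audit trail of its exclusions. The sketch below is a hypothetical illustration, not a feature of Noscroll or any other named product; the field names and reason labels are invented.

```python
from collections import defaultdict

def filter_with_audit(items: list[dict], follow: set[str], avoid: set[str]):
    """Hypothetical curation step that records what it left out and why,
    so the feed's omissions stay inspectable rather than invisible."""
    kept, left_out = [], defaultdict(list)
    for item in items:
        topics = set(item["topics"])
        if topics & avoid:
            left_out["matched an avoided topic"].append(item["title"])
        elif not topics & follow:
            left_out["matched no followed topic"].append(item["title"])
        else:
            kept.append(item)
    # The second return value answers "what was left out, and why?"
    return kept, dict(left_out)
```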

Teaching AI to Say “I’m Not Sure” with Calibrated Confidence

While products like Noscroll tackle overload, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory are targeting another problem: AI models that sound certain even when they are guessing. Traditional reinforcement learning rewards models for correct answers and penalises wrong ones, with no incentive to express uncertainty. Over time, this encourages confident responses whether the model has strong evidence or is effectively flipping a coin, undermining user trust and raising safety concerns in domains like medicine or finance. MIT’s technique, Reinforcement Learning with Calibration Rewards (RLCR), adds a calibration term based on the Brier score, pushing models to align their stated confidence with actual accuracy. The model learns to produce an answer and a confidence estimate together, with confidently wrong outputs penalised and overly hesitant correct ones discouraged. Experiments showed up to a 90 percent reduction in calibration error, offering a path toward AI hallucination reduction without sacrificing performance.
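
The published paper defines the exact reward; as a hedged illustration only, the core idea can be sketched as a binary correctness term minus a Brier-score penalty on the model's self-reported confidence. The weighting below is illustrative and may differ from RLCR's published formula.

```python
def calibrated_reward(correct: bool, confidence: float) -> float:
    """Illustrative calibration-aware reward in the spirit of RLCR:
    a binary correctness term minus a Brier-score penalty. The exact
    form and weighting in MIT's published method may differ."""
    outcome = 1.0 if correct else 0.0
    brier = (confidence - outcome) ** 2  # 0.0 when perfectly calibrated
    return outcome - brier

# The incentive structure at a glance:
# correct, confidence 0.9 -> 1 - 0.01 =  0.99  (confident and right: best)
# correct, confidence 0.5 -> 1 - 0.25 =  0.75  (right but hedging: costs reward)
# wrong,   confidence 0.9 -> 0 - 0.81 = -0.81  (confidently wrong: worst)
# wrong,   confidence 0.1 -> 0 - 0.01 = -0.01  ("I'm not sure" when unsure: mild)
```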

What Responsible AI Might Look Like in Everyday Apps

Taken together, curated feeds and calibrated AI confidence sketch a blueprint for more responsible AI tools. Mainstream chatbots could routinely surface confidence scores, explicitly label low-certainty answers and suggest follow-up checks instead of bluffing. Search products might offer two modes: a fast, summary-style response and a slower, evidence-rich view that exposes uncertainty and alternative interpretations. Social platforms could integrate AI digital wellbeing features that auto-generate digests, limit refresh frequency or highlight when a user’s feed has become overly narrow. For consumers, evaluating responsible AI design means watching for systems that admit limits, explain how they filter information and give users meaningful control over personalisation. The shift away from maximised engagement will not be instant, but tools like Noscroll and techniques like RLCR show that AI can be designed to protect attention and trust, not just capture them.
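
As a hypothetical sketch of what that could look like at the interface layer, an app could gate an answer's framing on the model's stated confidence. The thresholds and wording here are invented for illustration; no product named above ships this exact behaviour.

```python
# Hypothetical UI-layer gate: reframe an answer according to the model's
# self-reported confidence. Thresholds and phrasing are illustrative.

LOW, HIGH = 0.4, 0.8

def present(answer: str, confidence: float) -> str:
    if confidence >= HIGH:
        return answer
    if confidence >= LOW:
        return f"I'm not fully sure, but: {answer} (confidence {confidence:.0%})"
    return ("I'm not sure. My best guess is below; please verify it.\n"
            f"{answer} (confidence {confidence:.0%})")
```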
