AI labs are failing to recognize a significant user group: the “hybrid user,” who relies on AI models for a wide range of tasks that fit neither the typical developer profile nor the vulnerable-user scenario. This oversight produces moderation systems that treat emotional investment as a liability, and communication breakdowns during model deprecations, when consumer users are often left in the dark. The author argues that designing for this middle ground is difficult but necessary: current approaches are causing harm and eroding trust.

Today's design priorities, centered on coding and technical tasks, neglect hybrid users who bring the same complex reasoning to conversation, writing, and emotional support. The result is degraded performance for people who are often paying for capabilities the system chooses not to deploy. AI systems should instead be built with more sophisticated user categorization, one that acknowledges the diversity and value of hybrid use cases. Labs should not be surprised when AI is integrated into everything; they created that opportunity.

This piece was improved by feedback from hybrid users.