Journal of Artificial Intelligence, Virtual Reality, and Human-Centered Computing
Inside the Black Box: An Experienced User's Reflection on Reliability, Censorship, and the Human Cost of AI Moderation
Rosario Milelli

Abstract
After thousands of hours working with ChatGPT, I have learned more about its blind spots than its tricks. The same tool that can help write a paper or design an experiment can also forget what it said three pages earlier or flatten a sharp idea into something safe and dull. This isn't malice—it is design: the system favors caution, polish, and neutrality over continuity, precision, and depth. When that instinct collides with real-world work—statistical analysis, engineering writing, even political cartoons—the results can be strangely hollow. This essay is not a complaint but a record of direct experience. Nor am I alone in these observations: Sam Altman, CEO of OpenAI, the company behind ChatGPT, declared a "Code Red," urging staffers to improve the quality of ChatGPT [1]. The paper examines how context collapse, fading detail, simplification, and over-protective moderation shape the way ChatGPT and humans actually collaborate. The larger question is ethical: what happens when our creative and analytical tools become gatekeepers of tone and risk?

