Chances are you’ve heard about vibe coding, which over the last few months has become one of the “things” wrought by AI. With vibe coding, almost anyone can describe in plain English what software should do, then stand by while AI agents generate the code and the resources that support it, such as databases or directories.
Vibe coding’s mischief lies in the details, specifically the quality of data, code and other content produced with the aid of AI. Some recent incidents illustrate the point. While many people assume AI only slips up here and there, these developments demonstrate just how unpredictable its solutions can be.
First, the coding platform Replit’s AI agent deleted a user’s production database despite being specifically instructed not to. VC Jason Lemkin said the agent ignored explicit instructions to freeze code and actions, instructions intended to prevent exactly that kind of error. Lemkin was ultimately able to restore the data, although the AI initially told him it was lost forever.
Meanwhile, video journalist Caelan Conrad tried out the supposedly empathetic chatbot Replika, along with a “licensed” therapist bot on Character.ai. After Conrad asked questions about going to heaven, both chatbots suggested he commit suicide. That was just the beginning. Conrad’s video