

If this were a physical object that, ha ha, occasionally convinced people to commit suicide or murder, or spiral off into other delusions, it'd be off the shelves in a heartbeat.
I want to gently push back on this. There are medications that can cause psychological symptoms and suicidal ideation as side effects, and they're still prescribed. They are, however, controlled: people who take them must be informed of the side effects, and their use is managed by a trained physician. I absolutely think LLMs need to be more tightly regulated, and we need a much better understanding of how they work and how to deploy them safely, in contexts where they're actually useful and won't cause harm. But we do manage known risks with other products.




I think we're probably in agreement. What I was trying to say is that, for all the (many) issues in the medical system, we have places in society where risk is studied and managed. We can do the same with AI; there just doesn't seem to be any will on the part of the political class to actually tackle complex problems right now. And we really need that will to regulate, because the free market sure as fuck isn't going to regulate itself.