Orienting on responsible use of a tool seems like a good place to start, and I'm sure that will be especially important as individuals and organizations delegate decisions that can affect people's lives to various agents like ChatGPT. We don't want a situation where they're tempted to simply hide behind the agent to avoid responsibility.
I'm less sure that's where it should stop. It doesn't seem right to introduce powerful, transformative tools without any obligations for the people creating them. And as far as I understand it, there is indeed a layer of law under which manufacturers can be held to some standards of responsibility.