OpenAI (ChatGPT) agrees to work with the military

The artificial intelligence specialist OpenAI used to have a policy against collaborating with the military. Not anymore.

Artificial intelligence remains a sensitive subject that still calls for caution, according to leading figures in the field as well as the tech giants. Even the AI specialist OpenAI (ChatGPT) had stated in its rules that its technology was not to be used for military applications. But that was before.

ChatGPT: policy and terms of use modified manu militari

Indeed, OpenAI recently, and without warning, removed references to the military from its usage policy and terms of use. The terms still state that users must not “use our services to harm themselves or others”, including to “develop or use weapons”. The reason for the change? OpenAI has begun collaborating with the US Department of Defense on the development of AI tools, including cybersecurity tools.

Over to Anna Makanju, VP at OpenAI

“Given that we previously had essentially a blanket ban on military applications, many thought this would prohibit many of these use cases, which people believe are very much in line with what we want to see in the world,” said Anna Makanju, vice president at OpenAI, at the World Economic Forum. “Our policy does not allow our tools to be used to harm people, develop weapons, conduct communications surveillance, or to injure others or destroy property,” Makanju continued. “However, there are use cases related to national security that are consistent with our mission.”

Don't worry, it'll be fine.
