OpenAI: These are the three building blocks of ChatGPT, as per OpenAI

In an open ‘note’ to users, ChatGPT-maker OpenAI has acknowledged that some of the output of its AI tool ChatGPT has been called politically biased, offensive, or otherwise objectionable. The company said that while it accepts that some of this criticism is valid and reveals real limitations of its system, it stressed at the same time that not all the accusations are entirely accurate. Many of them reflect misconceptions users have about how its systems and policies work together to shape ChatGPT’s outputs.
“Since our launch of ChatGPT, users have shared outputs that they consider politically biased, offensive, or otherwise objectionable. In many cases, we think that the concerns raised have been valid and have uncovered real limitations of our systems which we want to address. We’ve also seen a few misconceptions about how our systems and policies work together to shape the outputs you get from ChatGPT,” it said in the blog.

It further said that “In pursuit of our mission, we’re committed to ensuring that access to, benefits from, and influence over AI and AGI are widespread. We believe there are at least three building blocks required in order to achieve these goals in the context of AI system behaviour.” The company then goes on to talk about these building blocks:
The ‘three building blocks’
Improve default behaviour: OpenAI says it is investing in research and engineering to reduce both glaring and subtle biases in how ChatGPT responds to different inputs.
The research will also cover instances where ChatGPT refuses outputs that it shouldn’t, as well as cases where it doesn’t refuse when it should. The startup also highlighted the need for ‘invaluable user feedback’ to drive further improvements.
Define AI’s values: The company is developing an upgrade to ChatGPT that will allow users to easily customise its behaviour, within broad bounds “defined by society.”

“This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging–taking customisation to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs,” it said.
Public input on defaults: OpenAI said it is in early stages of piloting efforts to solicit public input on topics like system behaviour, disclosure mechanisms (such as watermarking), and deployment policies more broadly.
“We are also exploring partnerships with external organisations to conduct third-party audits of our safety and policy efforts,” it said.
