Google has a plan to stop its new AI from being dirty and rude

Silicon Valley CEOs usually focus on the positives when announcing their company’s next big thing. In 2007, Apple’s Steve Jobs lauded the “revolutionary user interface” and “breakthrough software” of the first iPhone. Google CEO Sundar Pichai took a different tack at his company’s annual conference on Wednesday, when he announced a beta test of Google’s “most advanced conversational AI to date.”

Pichai said the chatbot, known as LaMDA 2, can converse on any topic and had performed well in tests with Google employees. He announced a forthcoming app called AI Test Kitchen that will make the bot available for outsiders to try. But Pichai added a stark warning. “While we have improved safety, the model might still generate inaccurate, inappropriate, or offensive responses,” he said.

Pichai’s hedged pitch illustrates the mix of excitement, puzzlement, and concern swirling around a string of recent breakthroughs in the capabilities of machine learning software that processes language.

The technology has already improved the power of autocomplete and web search. It has also created new categories of productivity apps that help workers generate fluent text or program code. And when Pichai first disclosed the LaMDA project last year, he said it could eventually be put to work inside Google’s search engine, virtual assistant, and workplace apps. Yet for all that dazzling promise, it is not clear how to reliably control these new AI wordsmiths.

Google’s LaMDA, or Language Model for Dialogue Applications, is an example of what machine learning researchers call a large language model. The term describes software that builds up a statistical sense of the patterns of language by processing huge volumes of text, usually scraped from the internet. LaMDA, for example, was initially trained on more than a trillion words drawn from online forums, Q&A sites, Wikipedia, and other webpages. That vast trove of data helps the algorithm perform tasks such as generating text in different styles, interpreting new text, or functioning as a chatbot. And these systems, if they work, will be nothing like the frustrating chatbots you use today. Right now Google Assistant and Amazon’s Alexa can perform only certain preprogrammed tasks, and they deflect when presented with something they don’t understand. What Google is now proposing is a computer you can actually talk to.
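To make that “statistical sense of language patterns” concrete, here is a deliberately tiny sketch in Python. It is in no way how LaMDA works internally; this hypothetical bigram model simply counts which word tends to follow which in its training text and samples continuations from those counts. The corpus and function names are invented for illustration, but the underlying idea, deriving language behavior from statistics over example text, is the one described above, scaled down by many orders of magnitude.

```python
import random
from collections import defaultdict, Counter

# Toy training text (illustrative only). Real large language models train
# on upwards of a trillion words; this uses a couple of dozen.
corpus = (
    "the model can generate text . "
    "the model can answer questions . "
    "the chatbot can generate replies . "
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, max_words=8):
    """Sample a continuation one word at a time, weighted by observed counts."""
    words = [start]
    for _ in range(max_words):
        counts = following.get(words[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the chatbot can generate text ."
```

Even this toy version shows the family trait Pichai warned about: it will happily emit whatever its statistics support, with no notion of whether the output is accurate or appropriate.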

Chat transcripts released by Google show that LaMDA can, at least at times, be informative, thought-provoking, or even funny. Testing the chatbot prompted Google vice president and AI researcher Blaise Agüera y Arcas to write a personal essay last December arguing that the technology could provide new insights into the nature of language and intelligence. “It can be very hard to shake the idea that there’s a ‘who,’ not an ‘it,’ on the other side of the screen,” he wrote.

Pichai made clear when he announced the first version of LaMDA last year, and again on Wednesday, that he sees it as potentially providing a path to voice interfaces vastly broader than the often frustratingly limited capabilities of services such as Alexa, Google Assistant, and Apple’s Siri. Now Google’s leaders appear convinced they may finally have found a way to make computers you can genuinely talk to.

At the same time, large language models have proven fluent in dirty talk, casual bigotry, and outright racism. Scraping billions of words of text from the web inevitably sweeps in plenty of unsavory content. OpenAI, the company behind the GPT-3 text generator, has reported that its creation can perpetuate stereotypes about gender and race, and it asks customers to implement filters to screen out distasteful content.
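OpenAI has not published what those customer-built filters look like, so the following is a purely hypothetical sketch in Python of the crudest form such a filter could take: a post-hoc check of generated text against a blocklist. Real moderation systems generally rely on trained classifiers rather than word lists; the BLOCKLIST contents and the filter_response function here are invented for illustration.

```python
# Hypothetical example of a deployer-side output filter, not OpenAI's API
# or any real product's logic. Placeholder terms stand in for a real list.
BLOCKLIST = {"badword1", "badword2"}

def filter_response(text: str) -> str:
    """Return the model's reply, or a refusal if it contains blocked terms."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & BLOCKLIST:
        return "[response withheld by content filter]"
    return text

print(filter_response("a perfectly polite reply"))
```

A blocklist like this misses anything phrased obliquely and flags harmless uses of listed words, which is part of why controlling these models remains an open problem rather than a solved one.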
