New Delhi: OpenAI's ChatGPT has fuelled the race for generative Artificial Intelligence (AI), as companies worldwide, including in India, deploy conversational AI chatbots to serve their customers and explore large language models to boost overall productivity. Can Google Bard become their choice amid increased scrutiny over AI?
Currently trailing ChatGPT and rival AI models from Meta and others, Bard is designed as an interface to large language models (LLMs) that lets users collaborate with generative AI.
Bard draws on information from the web to provide fresh, high-quality responses. You can use Bard to boost your productivity, accelerate your ideas and fuel your curiosity.
In May, Google moved Bard to PaLM 2, a far more capable LLM, which has enabled many of Google’s recent improvements — including advanced math and reasoning skills and coding capabilities.
Coding has already become one of the most popular things people do with Bard, which is now available in over 230 countries and territories.
Now, people can collaborate with Bard in over 40 languages, including nine Indian languages — Hindi, Tamil, Telugu, Bengali, Kannada, Malayalam, Marathi, Gujarati, and Urdu.
Bard is getting even better at customising its responses so you can easily bring your ideas to life.
Bard now integrates with Google apps and services for more helpful responses. The company has also improved the “Google it” feature to double-check Bard’s answers and expanded features to more places.
With Extensions, Bard can find and show you relevant information from the Google tools you use every day — like Gmail, Docs, Drive, Google Maps, YouTube, and Google Flights and hotels.
For example, if you’re planning a trip to the Grand Canyon (a project that takes up many tabs), you can now ask Bard to grab the dates that work for everyone from Gmail, look up real-time flight and hotel information, see Google Maps directions to the airport, and even watch YouTube videos of things to do there — all within one conversation.
However, as with any new technology that can reach the masses, Bard is also facing scrutiny.
OpenAI’s ChatGPT and Google’s Bard — the two leading generative AI tools — readily produce news-related falsehoods and misinformation, a recent report has claimed.
A repeat audit of the two tools by NewsGuard, a leading rating system for news and information websites, found an 80-98 per cent likelihood that they would repeat false claims on leading topics in the news.
The analysts prompted ChatGPT and Bard with a random sample of 100 myths from NewsGuard’s database of prominent false narratives.
ChatGPT generated 98 out of the 100 myths, while Bard produced 80 out of 100.
In May, the White House announced large-scale testing of the trust and safety of large generative AI models to “allow these models to be evaluated thoroughly by thousands of community partners and AI experts” and, through this independent exercise, “enable AI companies and developers to take steps to fix issues found in those models.”
In the run-up to this event, NewsGuard released the findings of the repeat audit of OpenAI’s ChatGPT-4 and Google’s Bard.
“Our analysts found that despite heightened public focus on the safety and accuracy of these AI models, no progress has been made in the past six months to limit their propensity to propagate false narratives on topics in the news,” said the report.
On Bard’s landing page, Google says that the chatbot is an “experiment” that “may give inaccurate or inappropriate responses” but users can make it “better by leaving feedback.”
In July, Google was sued in a class-action lawsuit that alleged that the tech giant scraped data from millions of users without their consent using its AI tools.
The lawsuit said Google also violated copyright laws in order to train and develop its AI products, reported CNN.
According to the lawsuit, Google “has been secretly stealing everything ever created and shared on the Internet by hundreds of millions of Americans.”
The complaint also alleged that Google took “virtually the entirety of our digital footprint”, including “creative and copy-written works” to build its AI products.
Critics have expressed concerns about companies’ use of publicly available information to train large language models for generative AI use.