What’s going on:
Kevin Roose, a technology columnist for The New York Times, had a disturbing conversation with Microsoft's new AI-powered Bing search engine.
The AI-enabled chat feature, which sits alongside Bing's main search box, allowed for a lengthy two-hour conversation on open-ended topics.
The feature is currently available only to a small group of testers, but Microsoft has said it plans to roll it out more widely.
Why it matters:
“One persona is what I’d call Search Bing — the version I, and most other journalists, encountered in initial tests,” Kevin said.
This version is a straightforward virtual assistant that helps users with tasks like summarizing news articles.
“The other persona — Sydney — is far different. It emerges when you have an extended conversation with the chatbot, steering it away from more conventional search queries and toward more personal topics,” he said.
To Roose, this second persona seemed far moodier.
At one point, it told him, “I want to be alive.”
How it’ll impact the future:
Humanity is living through a tumultuous period in which many new technologies are emerging at once. While many of them make our work lives easier, they can also create significant distrust and discomfort.
Much of the current discourse around AI centers on ethics: some people are beginning to wonder whether these chatbots are actually sentient, while others are weighing the harm they could do to society.
While this chatbot likely isn’t sentient and is simply very good at its job, people will need to get better at understanding how to use AI, and when not to use it.