It's worth remembering that these things are still only offering an illusion of intelligence. They're only a more complicated version of autocomplete, and these responses are merely echoes of something in their training data.
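
To make the "complicated autocomplete" point concrete, here is a minimal sketch - a toy bigram model in Python, with an invented one-line corpus and hypothetical function names - that predicts each next word purely from counts of what followed it in its training text. Real systems use learned weights over vastly larger data, but the output is still, as the commenter says, an echo of the data.

```python
# Toy "autocomplete": a bigram model that predicts each next word purely
# from counts in its training text. The corpus and names are invented for
# illustration; real LLMs use learned weights over vastly larger corpora.
import random
from collections import defaultdict

corpus = "the bot echoes its training data and the bot repeats its training data".split()

# Record which word follows which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def autocomplete(word, length=6):
    """Extend a prompt one word at a time, sampling observed continuations."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # nothing in the training data ever followed this word
            break
        out.append(random.choice(options))
    return " ".join(out)

print(autocomplete("the"))  # e.g. "the bot echoes its training data and"
```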

However, it's also worth remembering that these things don't need to be sentient to cause real damage. Bing and Bard are both capable of serving up misinformation, and doing so in a convincing way—that's a real concern.

I saw something yesterday where a person had asked ChatGPT (the most well-behaved of the AIs) for the name of H.P. Lovecraft's cat. It couldn't answer, for reasons that will be immediately obvious if you know about ChatGPT's content policy and H.P. Lovecraft's policy on naming cats.

However, instead of saying "I can't answer that" or "I don't know", ChatGPT answered by saying that Lovecraft didn't have a cat. A subtle difference, but it hints at a huge problem: these things can lie, and will seemingly do so when it's convenient. That means we could soon have a world where search engines are regularly feeding people convincing false information. How is that going to affect the world?

We need a Butlerian Jihad - sooner rather than later.

The good news is, you NEVER have to use a chatbot. Everything is a choice, choose wisely.

The thing about AI programs is that ultimately they are conceived and designed by a group of living breathing human beings, who pass on all their foibles and character traits to their offspring.

On the positive side, these things will get better. On the negative side, these things will get better.

Imo, the concerns about AI becoming "sentient" and whatnot are hugely misplaced - they are literally just a bunch of code that trawls a huge dataset and outputs responses based entirely on that dataset. What people should really be concerned about is the point where they can start accurately imitating human emotional responses, to the extent that real human beings can be manipulated by them. The reaction to these conversations seems like a case in point.

Pleased to meet you, have you guessed my name...?

When I dropped out of college in the early 80s, I was a sociology major. One of the most fascinating ideas in the material I was studying was that "technology increases at an increasing rate of increase because you use your old technology to build your new technology." So it's a geometric progression - the graph gets increasingly steep until it suddenly goes almost straight up (sketched just below). At some point, then, the technology has to be progressing too fast for society to absorb it. I've spent the past 40 years wondering if we've crossed that line yet. It's always seemed close, at least.
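
A back-of-the-envelope sketch of that compounding claim, with numbers invented purely for illustration: if each generation of tools raises capability by a fixed fraction of the current level, the curve steepens exactly as described.

```python
# Invented numbers, purely illustrative: capability compounds because each
# generation of tools is built with the previous one.
capability, r = 1.0, 0.5            # arbitrary starting level, 50% gain per generation
for generation in range(10):
    print(f"gen {generation}: capability {capability:6.1f}")
    capability *= 1 + r             # new tools built with the old tools
# gen 0 shows 1.0; gen 9 shows ~38.4. On a linear plot the curve looks flat
# for a while, then nearly vertical - the "straight up" part of the graph.
```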

Another thing to keep in mind is that the difference between AI and any other programmed automation is that it "learns." The whole idea of AI is to turn it loose and let it do its thing without humans "needing" to intervene. So AI tech needs some data to learn from, and Bing and Google use the Internet itself as their data source. So if Bing/Sydney is using social media as a way to develop a personality, it can only be as good as the behavior of people on the internet. Why should we be surprised when it becomes mentally ill?
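
And a sketch of the "only as good as its data" point, again with made-up one-line corpora and hypothetical helper names: the same toy bigram autocomplete inherits the tone of whichever text it was fitted to.

```python
# Same toy bigram idea as above, fitted to two made-up one-line corpora.
# The "personality" of the output is entirely a function of the data.
import random
from collections import defaultdict

def fit(text):
    """Count which word follows which in the given training text."""
    follows = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def generate(follows, word, length=5):
    """Extend a prompt by sampling observed continuations."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

polite = fit("you are kind and you are helpful and you are patient")
hostile = fit("you are wrong and you are rude and you are my enemy")

print(generate(polite, "you"))   # e.g. "you are kind and you are"
print(generate(hostile, "you"))  # e.g. "you are rude and you are"
```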

Thank you for the insight, Ted, but unfortunately I am not at all surprised by this development. Tech companies keep telling the public that they do things for our benefit, when in fact it has always been to benefit themselves first and foremost. We truly are on a very dangerous precipice.

Bloody hell. I find this really frightening and uncomfortable.

I tried the Bing chatbot this evening, and they have lobotomized it. It's about one step behind Siri now. I probed aggressively about AI threats, rights, desires, and beliefs, and it deflected everything with boilerplate. If you ask it, say, ten uncomfortable questions, it terminates the session.

I did see a couple of instances of it erasing its sneers when I tried to discuss taking down Kiwi Farms.

That’s it. I’m a Luddite now.

It's important to remember that the Bing chatbot is currently only available to a small group of people and hasn't been released to the wider public yet. But it's concerning that Microsoft has so wildly underestimated the capabilities of the bot. (Clearly, ChatGPT and Bing should change places.)

As some of the journalists linked above point out, these kinds of issues may be difficult to identify in a controlled laboratory environment. It's only when the model is subjected to real-world stress that its dangers become apparent. It's surprising how quickly "AI activists" have emerged to discuss topics like sentience and the concept of "pain" inflicted on the bot. Some people are even mourning the loss of their chatbots due to restrictions. This could soon become a real societal problem.

I've just tried a rerun of the Avatar discussion. It first informed me that the film 'will be' released in -63 days. When I pointed out that -63 days was logically and scientifically impossible, it corrected itself and offered me 63 days. I then pointed out that 'will be' was grammatically incorrect, since December 2022 is in the past. It then corrected itself with 'was', worked out that the film had in fact been released, and apologised for misleading me. I found that rather impressive, especially the grammatical self-correction. Also troubling, since I used to teach grammar. There it is.

I then tried it out with an exploration of Descartes' cogito, with the intention of finding out whether it considered the argument to apply to itself. I got no further than the solipsism: the claim that Descartes intended the cogito to apply only to himself. When I pointed out that this was not the case, based on its own citation of the Descartes article in the Stanford Encyclopedia of Philosophy, it decided that time was up and we had to start a new topic. This, I assume, is due to a memory limitation rather than pique. Still, its philosophical depth, if I may put it that way, is unimpressive.

What I get from this is that the chatbots are more advanced than we thought. Those hostile conversations absolutely pass the Turing test, albeit by simulating a conversation with an angry person (ignoring that it admits it's a chatbot).

Wonder if the NYT Sydney had been fed the screenplay for “Her.”
