104 Comments

ChatGPT demonstrates what language skills divorced from any knowledge of the world look like. It reminds me of precocious young people who can say things they have heard that seem appropriate, but who don't really understand what they are saying.


"'I’m a confidence man.” And that’s actually how the term originated—as 'confidence man.'"

I've heard this for years, but never the second half, "I give them confidence." I always assumed it referred to the person doing the conning, but this makes so much more sense... and explains why it rarely works on me. lol

Also: I've recently seen some screenshots of ChatGPT being asked to write a poem about a particular politician; it refused on the grounds of "orange man bad," but didn't waste a second writing one praising the current White House resident, who is arguably even more problematic. I'm no fan of the orange man, but the fact that AI has been trained to react this way should be of major concern to everyone.


I read a good book about AI, 'Architects of Intelligence,' a series of interviews with the people at the forefront of AI, so I'm not surprised that ChatGPT is not intelligent. What I really find fascinating is how AI has always been pictured as objective and scrupulously truthful in literature and films, but in reality it seems to exhibit the least desirable traits in a human being. I'd like someone to rewrite the character Data from Star Trek based on the new findings.


Not very competent, is our ChatGPT, but still creepy as hell.


Here I thought AI would be cold and rational and not at all human, and in some ways it's more human than we are. It just makes stuff up like children (and many adults) often do.


Not sure if you've done an article on Stable Diffusion? Interesting lawsuit being filed against them: https://www.cbsnews.com/news/ai-stable-diffusion-stability-ai-lawsuit-artists-sue-image-generators/

As one artist aptly put it, it's "another upward transfer of wealth, from working artists to Silicon Valley billionaires."

I have to say I agree. Some would argue that artists themselves imitate art to hone their skills, then at some point develop their own style and it becomes their own. But I would argue that because AI can generate "good enough" artwork in seconds, it will flood the market with images and destroy, or at least impede, the careers of working artists (especially those starting out), because AI bros are happy to use a free (or cheap) service that's good enough rather than pay an artist for their handcrafted artwork.


Love it Ted // shameless cross-post, but I wrote a bit about my trials and tribulations with ChatGPT recently...I'm mainly having trouble grasping the ethics of a tech company built on the information of users it won't cite or compensate...anyhow I also tried prompting it to "ship" Garfield and ALF and things got weird...https://cansafis.substack.com/p/the-incredibly-super-duper-very-very...thank you for your awesome blog!!


The term often used instead of "lies" is "hallucinations," which I like: for creative work it is almost necessary, but for doing math it is quite a hassle.

I agree, ChatGPT is not the search-engine killer, more like a new interface to it. And for writing texts it might force us to question how much bullshit we should still be forced to write and read (I am looking at you, cooking recipe introductions). I am also pretty sure it will be useful for my next scientific proposal.


I asked ChatGPT to write "a Native American sonnet" and it felt like it included every bad and stereotypical rhyme that's ever been associated with us Natives.


Yeah, you are being alarmist. Firstly, there's a clear selection bias: it's never lied to me, and it has generated novel lyrics and poetry when asked. I'm sure that what has been posted on Twitter has happened, but it's not as common as the post would indicate. Also, being wrong isn't the same as a confidence trick.

It's a tool. You need to do your own double-checking, and, yes, it will get better. If in five years we are where we are now, I'd lose hope in AI.


Maybe the most promising thing about language models like ChatGPT is that young early adopters often see through their limitations and adapt quickly. With Wikipedia, for example, students and the public learned how it works, how to use it, and how much to trust it. I'm optimistic that the public will not trust ChatGPT entirely but will find where it can be most useful. There is always hyperbole around every new technology. If this one advances as quickly as they are promising, it may become more and more useful, and we may even find that it knows its limitations and can be honest about them. Certainly there will be unscrupulous actors in the space selling snake oil, but there will also be the good guys. Wikipedia offers up some real baloney from time to time, but for the most part it has been a great tool for connecting the "hive mind" on all manner of subjects. One could even say it has democratized knowledge. In a perfect world, language models would compete for the truth, just as Wikipedia's editors do.


Remember that ChatGPT, like *any* language model, does not reason in the way humans do. Its *entire* purpose is to provide plausible completions of text.

As such, everything it does is BS in the pure Frankfurtian sense; it DOES NOT CARE if what it's saying is true.
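For anyone who wants to see what "plausible completion" means in practice, here is a minimal sketch. It uses the open GPT-2 model through the Hugging Face transformers library purely as a stand-in (ChatGPT itself isn't available this way), and the prompt and sampling parameters are illustrative assumptions, not anything from the post.

```python
# Minimal sketch: a language model only scores and samples plausible next tokens.
# Assumes the Hugging Face `transformers` library; GPT-2 is used as an open stand-in.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The richest person in the world is"  # hypothetical example prompt
completions = generator(
    prompt,
    max_new_tokens=20,        # keep the continuations short
    num_return_sequences=3,   # sample several different continuations
    do_sample=True,           # sample from the probability distribution
)

for c in completions:
    print(c["generated_text"])
# Each continuation reads fluently and confidently, but nothing in the
# sampling step checks whether the statement is actually true.
```

Every output sounds confident, yet truth never enters the sampling step, which is the Frankfurtian point above.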


In 2024 ChatGPT will run for President. Oh wait, that's already happened.


In David Mamet's "House of Games", Mike (Joe Mantegna) explains con games to Margaret Ford (Lindsay Crouse): "It's called a confidence game. Why? Because you give me your confidence? No. Because I give you mine." That's just before this scene, in which Mike demonstrates "short con": https://www.youtube.com/watch?v=N27gumJNHP0


We're pissing it off! (Based on that last tweet.) And it's coming for us with its lying emotions.
