6 Comments
David Crouch

I like this the most of all your articles! Part of it was the layout: it seemed to flow really well, and the sections were structured nicely. I also found the study result very interesting. I have read about a lot of highly technical people who can knock the guardrails off quite quickly, but their methods sounded pretty technical.

These sounded pretty easy. Not that I want to get around its manners/restrictions.

Have you read Cialdini’s book? I just checked and it is nearly 600 pages. Gulp!

Jing Hu

Really appreciate your support, David. This article received some polarized opinions, especially regarding my view that LLMs are mostly stochastic.

I especially enjoyed the part where I got to read other psychology papers and learn about their history.

I read 'Influence' years ago when I was building my AIGC startup. Given that my background is in science and tech, I urgently needed this kind of knowledge to help me connect and sell.

David Crouch

I think I will start with the Bookeys summary first. But I liked the structure.

If people didn’t like your stochastic “view,” then they understand nothing. At the root of all this learning are mind-numbingly massive matrices of values that are probabilities, along with some rather standard math to process them using mind-numbingly fast chips.

They are what I call, over my next two articles, digital fools.

Keep up the great work!

Hans Jorgensen

I really appreciate being able to read the pros and cons of this view of persuading LLMs, and I hope those who work with these systems will attend to it. Thank you for doing this careful reading and interpretation for us.

I am not as ready to say that all technology (like AI) is morally neutral. People create technology, and people are not morally neutral, so the technology is imbued with impact that affects others. Moral behavior is found there. There may be pros and cons, but not neutrality (in my opinion).

Thanks for bringing me along on this journey!

Jing Hu

Appreciate the support, Hans! (And thanks for the LinkedIn connection).

I know you don't mind agreeing to disagree, so...

Technology, I still believe, is morally neutral.

Think about the light bulb. The story of Edison has been glorified, but in truth he was more of a salesperson than a technologist. He spent most of his time raising his company's stock price rather than working in laboratories to figure out the best technological next steps for humanity.

He even tried to sabotage Nikola Tesla's development of alternating current (AC).

The history of science and technology is rarely glorious. More bloodshed and bankruptcy than most people would like to think. No human is truly neutral, as you said.

But that’s the point.

Maybe because we are so close to the AI boom, everything we see now seems negative. However, if you zoom out and look at a hundred-year timeline, things will probably look very different.

Hans Jorgensen

I think we disagree on the definition of what "moral" means. I don't mean technology thinks or intends. I do mean it is a human creation that has impact, and that is where moral effect happens: in impact. Electricity occurring in nature carries no moral weight. Electricity produced by people does have moral impact, as we see from coal usage. We can improve the impact if we acknowledge human agency.
