On relationships - are we losing our sense of what makes for a good relationship, settling instead for mindless, easy validation? The give and take? That someone can make me better, but I also want to make someone else's life better in some way? How does one give in to something when there is no "there" there?
On fragmentation of facts, news, common truths, etc. - I'd have called it "the algos," which have already been doing this. When did/does it become AI?
This is fabulous to read. It's the apparent aversion to slowness, practice, discipline, and to failing and learning from failure that concerns me. Shortcuts are a clear path to moral dissolution, I think. I love your use of Jane Eyre and especially The Little Prince - "It is the time you have wasted for your rose that makes your rose so important."
I do not share your inherent trust of the market (which I do not currently believe is free, but is weighted against the best interests of people), nor of small government, at least until we define government. I think that includes emergency response and teachers and public health nursing. In the US at least, when those are cut, societies suffer.
AI is here to stay - I hope people like you can help us address the concerns and accentuate the gains for people. Thanks for your work!
I'm glad you resonate with the classics I referenced! They are close to my heart, and I believe they best illustrate why human bonding is so special.
I agree with your proposal to 'define government.' It's a difficult question, isn't it? My concept of a small government might differ from yours.
From what I see, much needs to be done to help people in need. But if it's stretched too far, it could lead to a situation like Europe's: heavy taxes that discourage hard-working people (why do I have to pay 60% in the UK and work my ass off, when I'd pay only 2x% in the US?) and money draining out of the country.
Finding that balance is hard to achieve.
I don't think ANY politicians in the world have the right incentives to do the right thing...
Thank you again for the kind words, @Hans Jorgensen. This one is special, hearing from you, someone doing very important work for a local community.
People usually see more risk than opportunity in the future due to loss aversion, a core principle of prospect theory, which finds that potential losses have a greater psychological impact than equivalent gains, prompting individuals to focus more on what could go wrong than on possible positive outcomes when evaluating future uncertainties.
You may be right.
Maybe we ARE the religious authorities who burned Giordano Bruno. I respect the work of these researchers because I believe most of them begin their studies with objectivity in mind.
Personally, I’d love to see generative AI produce overwhelmingly positive outcomes for humanity. I think most researchers hope for the same, even if their findings don’t always favor generative AI.
That said, it’s essential to highlight the risks that come with these technologies. No one can deny the harm caused by social media, or how generative AI could weaken our thinking skills, or the dangers of deepfakes and scams created by malicious actors.
Both sides of the debate are important.
Ideally, we’d see more people take a balanced, middle-ground approach. Sometimes it feels like the world is divided, but perhaps that’s because people crave and amplify polarized opinions for the sake of drama and marketing.
Those with authority have a responsibility to speak objectively.
Unfortunately, conflicts of interest can creep in, making trustworthy, balanced opinions harder to find and trust.
I'm not impressed by negative forecasts. People usually see more risk than opportunity in the future because of negativity bias, pessimism bias, dread aversion, and loss aversion (all well researched in psychology): potential losses have a greater psychological impact than equivalent gains, prompting people to focus more on what could go wrong than on possible positive outcomes when evaluating future uncertainties.
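To make the loss-aversion claim concrete, here is a minimal sketch assuming the standard Tversky and Kahneman (1992) cumulative prospect theory value function and their median parameter estimates; the dollar figures are purely illustrative and not from this thread:

$$
v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda\,(-x)^{\beta} & x < 0 \end{cases}
\qquad \alpha \approx \beta \approx 0.88,\quad \lambda \approx 2.25
$$

Under those estimates, a $100 gain is valued at roughly $100^{0.88} \approx 57.5$, while a $100 loss is valued at roughly $-2.25 \times 100^{0.88} \approx -129$, so the loss looms about 2.25 times larger than the equivalent gain.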
I so appreciate your research-based overview of this topic.
I am trying to help people realize that AI is changing our inner worlds more than our outer worlds. The focus on jobs, social media, and social institutions is blinding us to the mind revolution transforming what it means to be human, as you detail here.
You emphasize relationships in this post. You might be interested in my post, also research-based, about likely future changes in our sexual selves: https://mindrevolution.substack.com/p/the-last-revolution-in-sex