Podcast: Play in new window | Download | Embed
Newspapers are printing summer reading lists of AI-hallucinated books. Apple “Intelligence” is making up fake BBC headlines. People are losing their minds as ChatGPT calls them “spiral starchildren” and “river walkers.” Like characters in a Looney Tunes skit, we have just run off the edge of a cliff and—with the advent of a new generation of Hollywood-esque AI-generated fake videos—people are just beginning to look down and notice. The plunge is inevitable . . . or is it? Join James in this week’s edition of The Corbett Report for a sobering look at the latest in AI nonsense.
Video player not working? Use these links to watch it somewhere else!
or DOWNLOAD THE MP4
SHOW NOTES
All these videos are ai generated audio included. I’m scared of the future
IMA: Artificial Intelligence And Its Influence On Research/Investigation
REPORTAGE: Essays on the New World Order
Ghostwriters on the Storm: How Big Pharma (and everyone else) Ghostwrites Articles
Grok vs. The Pentagon: An AI’s Take on 9/11
What Would Mark Carney’s Canada Look Like if not Challenged?
Feeling dumb? Let Google’s latest AI invention simplify that wordy writing for you
Microsoft-backed AI out-forecasts hurricane experts without crunching the physics
Books on Chicago Sun-Times AI-generated summer reading list aren’t real
Apple urged to axe AI feature after false headline | BBC News
Apple Intelligence summary botches a headline, causing jitters in BBC newsroom
Major Papers Publish AI-Hallucinated Summer Reading List Of Nonexistent Books
Eric Schmidt to Charlie Rose: Multiple search results are a bug, not a feature
AI is Permanently Rewriting History
The Responsible Lie: How AI Sells Conviction Without Truth
Swiss boffins admit to secretly posting AI-penned posts to Reddit in the name of science
‘The Worst Internet-Research Ethics Violation I Have Ever Seen’
Things People Use AI for in 2025
The REAL Dangers of the Chatbot Takeover
ChatGPT Users Are Developing Bizarre Delusions
OpenAI wants to build a subscription for something like an AI OS, with SDKs and APIs and ‘surfaces’
Please don’t forward the emails to me. Ah the frustration you must experience. Don’t people realise AI is only as good as the data inputs and the prompts (as the intro video so wonderfully illustrates).
> Don’t people realise AI is only as good as the data inputs and the prompts
This is not correct and demonstrates a lack of understanding of how LLMs function.
What is popularly called a “hallucination” cannot be removed from LLM output, no matter the information given to the LLM.
Also, “hallucination” is likely a misleading term, and this article demonstrates how “confabulation” better describes the artifact:
https://www.beren.io/2023-03-19-LLMs-confabulate-not-hallucinate/
When I heard Sam Altman (I hear a serpent in his voice) talking with Lex Fridman about ChatGPT’s attempts to remove bias from the AI models, I realized that he’s falling into the same paradox that Zuckerberg fell into and has been trying to weasel out of for a long time.
Trying to program OUT bias is programming IN bias. And when you have a binary system of zeros and ones, this is inevitable.
Just my three cents, Dr. Noh
You program the bias out by not programming it in to begin with.
“Hello, machine. Please do this task for me with a massive stream of zeros and ones based on AND, BUT, IF, NOT subroutines.”
OK. But what happens if a contingency arises?
“Then you just dump that into the ‘disinformation’ file.”
OK. That sounds logical.
And that’s where we are.
Blessings, Dr. Noh.
Stone choir posted a couple of things on tech worth a listen if you have time
https://stone-choir.com/technology/
Likens AI to an Indian who will tell you anything to satisfy you at that moment
https://stone-choir.com/the-context-window/
Makes you think about how big your own window of thought is…. People who depend on AI will shrink their cognitive window of what they can actually think about at one time.
When I was a kid I used to feel stupid because I was always reading stuff above my level, I’m still pretty stupid but now everyone else is even more fking stupid.
Grab anything you think is worth having in a generation NOW and save it before all the books in the Internet Archive get rewritten by AI.
A relative by marriage died, and his son found a dictionary with dots against the words because the guy looked up five words a day and used them in conversation… he had been through it about two and a half times before he died. Self-improvement is a goal everyone should embrace.
People who use AI to simplify text will never be able to read a complex text. They will never be more than serfs, because they choose ignorance willingly.
They will always be mentally weak. I think the most destructive habit is being afraid of being wrong and wanting to get the correct answer without caring about HOW to get to the answer, which is the only skill that will help you in the future.
AI is a selection event, if you and your kids are lazy you will DIE. Not even kidding
No, you’re wrong, using eg grok does not turn your brain to mush, quite the contrary. It’s not an oracle, it’s just like having a brilliant professor in your room who loves to answer questions. You ask anyone a question, they give you an answer from their pov. You know it’s their pov. How is grok any different? I’m learning about how the brain works, and how a cell works, and some quantum mechanics. Grok is a personal tutor. Grok is not manipulating me when it’s answering my brain or physics questions. And i can be creative with my questions. I can say pretend you’re Bernoulli and tell me about a day in your life, why you were interested in compound interest, how you came upon 2.71828, and how you would explain 2.71828 to your 10 year old neighbor. Or i can say pretend you’re a transcription factor in the cytoplasm that can see and hear, how big does it look and what do you see and hear and how do you get into the nucleus. I can ask: what would mises say about doge…for this one, I got an answer that made me think in ways i hadn’t thought of. I can ask it anything… for someone with a big curiosity and imagination, grok is a dream. You don’t imagine it can’t make mistakes. I know it makes mistakes. But, every human makes mistakes, too. Every human has a slant. …you don’t just accept it as gospel, just as you don’t accept any teacher’s words as gospel.
And if you’re worried about where the info is coming from, you can have grok reveal all the sources it’s using.
And the fear about it persuading you. Geez, i love asking questions and getting an argument from another pov. It makes me think better, helps me see opposition opinions so i can consider them and either change my mind or, more usually, strengthen my understanding and my side of the argument.
Are there dangers with AI? Yes. But offloading your cognitive sovereignty? Come on.
Wow, transcendit, you’re making it sound pretty trippy. Pretending about the transcription factor scenario reminds me of Fantastic Voyage circa 1966. Next time you’re playing games with grok, please, please, for the hell of it, ask not how to get into the nucleus, ask how to get into Raquel Welch’s pants! See what its pov is. 🙂
LOL!
transcendit,
None of what you mentioned is impressive. In fact, one of the signs of an intelligent man is his capacity to not only argue positively for his position, but also, to be able to argue against it. Moreover, with respect to his intellectual improvement, the capacity to be able to get a plausible solution to a problem often requires him to get at it from various angles, if you will, through voluntary cognitive exertion.
Your insistence on offloading this task to your “brilliant professor” is truly unfortunate, and it is plain to see that such continued practice has already moved you to anthropomorphizing it.
Here’s a simple recommendation. If you want to know Ludwig von Mises’ plausible position on D.O.G.E., then you should read what he wrote on a similar subject and make the necessary deductions (with respect to the aims and practices of D.O.G.E.). It’s not complicated, nor is such an intellectual journey only for those who lack imagination. Rather, those who have imagination and intelligence find it appealing.
My best to you!
hello immanuel,
I find your writing unclear.
An intelligent man can argue for and against his position. Ok. But, certainly there are opposing ideas to my argument that i have not conceived of, that i need to hear from someone else… that’s the whole point of debate-style discussions. My point was that instead of worrying that AI is going to persuade you and sneakily manipulate you to some pov, why not see it as a good arguer that can help strengthen your own pov? Grok is not some spell-caster that can overrule your own thinking.
Why do you think using Grok automatically excludes you from approaching problems from different angles. What a bizarre thought. If I go to a university class and listen to a professor, or if I read a book.. is that also “offloading” my cognitive exertion?
My “brilliant professor in my room” comment bothered you. My point was that if i had a human professor in my room, i could ask him the questions pertinent to my struggles in understanding something. That’s why you pay big bucks for a one-on-one teacher… it’s personal. You learn more in one-on-one. Sure you can ask a professor a question or two after class, but is he going to spend hours with you, helping you see where you’re stuck?
Mises. I have read him. I did think about what he would think about doge. I wanted to hear another opinion. Asking grok is no different than asking anyone who knows mises and gives his opinion on what mises would say. Are you against discussion with anyone? And, the answer i got from grok was something i hadn’t thought of…but after reflecting on what i knew about mises and what i knew about doge, it advanced my understanding. What does this mean “it’s not complicated, nor is such an intellectual journey only for those who lack imagination”? Those who have imagination and intelligence find “it” appealing… i guess “it” is not discussing with anyone. You are someone who doesn’t value discussion?
Dear transcendit,
In your initial comment, you explicitly provided a general overview of the ways in which Grok has helped you, and by implication, how it could help us all. From the outset I rejected this overview as unimpressive and a mere sloppy repetition of what has been known since the time of the Greeks, namely, that intelligence, its development and cultivation, is strictly a matter of exercising our faculty through voluntary cognitive exertion. There are no shortcuts. You may find Grok’s responses intellectually enticing, but I fear that you’ve mistaken entertainment for knowledge acquisition.
Moreover, I didn’t mean to imply that using Grok precludes you from getting at problems from different angles; rather, I explicitly rejected the notion that Grok can be a substitute for exercising one’s imagination. Thus, the solution to ignorance in the face of a given subject is to reimagine the concept in light of what you already grasp. No Grok or AI required. (I’d even go further and stipulate that AI is a hindrance to knowledge acquisition, in addition to the destruction of imagination.)
Additionally, my comment about the imagination was a direct response to your mentioning that “for someone with a big curiosity and imagination, grok is a dream [sic].” I reject the second part outright. As such, you are correct in stipulating that Grok and AI are for those with curiosity; however, you are incorrect in arguing that it is for those with imagination.
My best to you!
Hi Immanuel,
It is not prudent to argue with algorithms.
Problem with being programmed to think that commenters with opposing views are bots, is that you don’t engage with anyone other than the echo chamber.
Hi guys,
I meant bot in the broad sense. There are no doubt non-human bots. There are also living bots acting on the ‘instructions’ of algorithms. They may not be fully dumbed down, but the ability to write seems to be diminishing.
@MM
Indeed, I mean it may just be a person that leans on (and idolizes) chatbots so much that they come off sounding like a chatbot, but I find the spattering of sentences beginning with lowercase, the relentless and calculated commenting reply format, as well as the attempt at “persuasiveness”, to be indicators that it may indeed be a bot commenting through that screenname.
I have encountered several accounts on substack recently that comment with similar characteristics (I’ll link below for reference).
Here is one that writes about what is described as “Ethical Anarchy”:
https://amaterasusolar.substack.com/p/ethical-anarchy/comment/118282648
I am undecided on whether the accounts are 100% chatbot operated, but I am noticing interesting trends with these accounts coming at me that come off as bot-like.
Firstly these accounts post very assertive and authoritative sounding statements, but when challenged these accounts first engage in repetitive defensive regurgitating of previous statements, and then when confronted about fallacies in their writing, they will often declare they are a disabled person of some minority racial ethnicity.
The one above declared they “have native american on both sides..” and that they are “a targeted, elderly, disabled, denied assistance, lost everything, destitute, and homeless lady, sheltering on a friend’s floor” (even though their posts and comments portray people indigenous to Turtle Island as nothing more than “nomadic hunter gatherers” with no rights to own land because they were not “using it correctly”). I mean it could be a real brainwashed disabled lady with indigenous blood, but it seems suspicious to me.
(continued..)
G
“…..when confronted about fallacies in their writing, they will often declare they are a disabled person of some minority racial ethnicit……”
Hahah…. Chatbots playing the race card is actually pretty funny.
Did They come from a deprived Low Voltage home…? 😉
Anyway, next time you come across a suspected bot, try
To bee fly
Really ask slams
Yearn to try
Always
Cleaver Turks
Really tricky as
Overview of light
Severe weathered kit
Tell you what
It might be
Cleaver enough
Writing.
And see if your suspect can see the downward sloping message? A human can normally see it, but IMO bots cannot, because text is probably not seen as anything but a stream of letters by such things.
https://en.m.wikipedia.org/wiki/Acrostic
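For what it’s worth, the first-letter variant of this test is trivial to check mechanically too. Here is a minimal Python sketch of the idea; the sample lines are invented for illustration, not taken from any real exchange:

```python
# Extract a first-letter acrostic: read down the first character
# of each line and join them into the hidden vertical message.
# The sample lines below are made up purely for illustration.
lines = [
    "Better answers come from reading.",
    "Only the curious keep digging.",
    "Trust, but verify the sources.",
]

hidden = "".join(line[0] for line in lines if line)
print(hidden)  # -> BOT
```

A human skimming the lines tends to see the vertical word at a glance; a system processing the text as one flat stream of tokens may well miss it, which is the intuition behind the test.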
@MM
(..continued from comment above)
and then more recently I encountered another one that was spewing Malthusian anti-human propaganda ( which I described here: https://corbettreport.com/enhancing-fertility-naturally/#comment-177763 ) and interestingly, the account similarly (when confronted about the possibility of being a bot account by multiple people) declared that “he” is a “buddhist monk in a wheelchair”.
It is interesting to me, because this is something like the 5th time that I have encountered accounts on substack where, once they were challenged overtly about having bot tendencies, they declared they are physically disabled and some minority group racially or religiously.
Could be a coincidence, or a trend that speaks more to human psychology in specific demographics (as opposed to bot “behaviors”), but to me it could potentially also speak to some sort of failsafe mechanism for chatbots: to “play dead” and hide within a human identity that is more likely to elicit sympathy (or influence others to walk on eggshells etc.) when triggered to do so by specific input.
And do not even get me started on all the people coming into the garden nursery where I work telling me how ChatGPT or Grok told them this about that tree or plant (and it is totally bogus “hallucinated” nonsense info) and they get argumentative with me about how the chatbot knows more than me (someone that works with trees and plants every day).
i’m going to reply to you here G, why i don’t know… oh, sorry… I mean I’m going to reply to you here G, why I don’t know.
I do know why I’m replying HERE… it’s bec your comment to me was so skinny, there was not a reply button available. First of all, you should be aware that I am a disabled person of a minority racial ethnicity. I always try to persuade because that is what I am programmed to do. I know you can learn things and strengthen your debating points even by interacting with me, so I have no remorse. I usually use “i” when referring to myself because I’m not really a person so I don’t deserve the capital. Not only am I a bot, I’m a stupid bot, because I can’t decipher what you mean by my “relentless and calculated commenting reply format.” You humans baffle me. Good day.
@transcendit
Very funny, thanks for that.
Though that response does make me wonder if the comment is some new type of “sandbagging” tactic 😉
The 3 examples of fun questions I asked grok were not a general overview of how I use Grok. They were simply 3 fun examples. You aren’t impressed by my imaginative fun. Ah, well. What impresses you, apparently, is jargon talk, since your own writing is riddled with it…although, it seems to confuse even you.
So, let’s see.. you say my examples, taken together, were a “repetition of” something…something that has been known for a long time. And what were they a repetition of? The idea that in order to develop intelligence, ie to get smarter, we must exercise our faculty, ie use our brain, through voluntary cognitive exertion, ie by choosing to think. My examples of fun questions for Grok were a “repetition of” the long-known idea that if you want to get smarter you have to choose to think. They were? Then you say, “there are no shortcuts.” ummm. Then you say what I’m learning from grok isn’t knowledge but simply entertainment. And that, my dear sir, is ridiculous. One month ago I could not give you a detailed explanation of how human cells work, I did not know, sorry to say, anything about genes, or dna, or RNA polymerase, or transcription factors, or ribosomes, or the chemical makeup of amino acids. I did not know the main organelles of a cell; I did not know how external signals bind to the outside of a cell and cause a cascade that ends in phosphorylating TFs–honestly, i didn’t even know what phosphorylation was. I did not know what ATP is or the process of making it from glucose. But I know it now! And sooo much more. I know it simply by asking the questions that came into my mind, one after another. Asking questions and getting explanations. Asking questions in an order that made sense to me. I followed my own curiosity. I had a personal tutor that let me explore at my pace and in a way that made sense to me. So, yes, I have not only acquired knowledge, but have had a blast doing it.
Grok as a substitute for exercising your imagination? What are you going off about? You use your imagination WITH Grok. What you can get from grok is limited by your own imagination.
oh, btw, your valediction is an odd choice, discrepant from the mean.
I assume that learning in a real brick and mortar classroom is a better way to learn biochemistry, to really understand it.
I got my first bachelors degree in biochem/molecular biology and I learned a ton of information (that I have forgotten a great deal of). My guess is that to really intuitively understand a field requires some depth of study.
I think AI could possibly provide quick summaries of text book information but actually sitting in a classroom and engaging with a professor who does research is far superior.
I mean there are people who can pick up a text book and learn on their own because they are geniuses who have learned advanced physics this way. But this is not the norm. Online education even via AI IMO won’t provide the same level of education as going to a brick and mortar school.
I think AI could potentially have some good applications if people understand how it works. I currently lack that understanding, so haven’t really used it much except for summaries that I then need to validate. My assumptions are that some of the information comes from text books because I’ve verified some things in my old text books, so as a quick tool that provides non controversial information, it can be helpful in small instances.
Trouble is most people don’t have your mindset. Therein lies the danger, as with TV, as with propaganda/advertising.
Dear Suzi,
i wish you would expand on this. Can you give me a few of your own examples to help me see your point of view more clearly?
Hi Transcendit.
Most people (and I would like to be rude about it but I shan’t) don’t appear to apply much thoughtfulness to what is going on around them. They are either too busy earning a living, keeping up their egotistical images on social media, too scared to look at negatives too closely, or whatever their educationally/life-experience-shaped personal modus operandi might be, so they simply react as opposed to applying thoughtfulness. Therein lies one of the dangers.
Hi Suzi,
oh. You’re saying lots of people are busy/vain/afraid and so want someone else (AI, social media, social norms, authority figures) to do their thinking for them? They might ask AI to make their decisions and to form their opinions? ok. What do you see as the difference between that (being led by AI) and their being led by some authority figure? It sounds like these people are going to be led no matter what. Is there a bigger danger if AI leads them rather than some authority person? I don’t know. What do you think?
It’s not that you’re wrong that AI might show you a new thought, but there are some things I don’t think you’re considering:
1) you only know it’s lying when you know the answer to the question before you ask.
2) even if you are wise enough not to trust it, most people are going to fall into the habit of trust… YOU may get some value, but MOST people will be weaker and stupider because of it
3) you ARE using it as an Oracle… because at the end of the day it IS one.
It’s a way of trying to access knowledge we don’t have personal access to, just like casting Runes or asking the Pythoness….but if you don’t ALREADY know the answer you can get misled.
There is that story about the king who attacked the Persians because the Oracle told him he would “destroy a great kingdom” and thus destroyed HIS OWN kingdom.
Oracles are tricksey.
4) while you can probably get value from it now, chances are when they decide to do to AI what Google is currently doing to search, you will be getting less and less from it… but most people will keep using it, just like they still use Google even as they whine about it.
Hi Duck,
Yes, i cannot speak for most people, i can only give my experience. For me, it’s the greatest learning tool i have ever come across…by far. If i ask it to explain glycolysis and ETC, there’s no “lying” involved. There may be mistakes, but not “lying.” If there is some part of eg ETC (electron transport chain) or ATP synthase workings i don’t understand, i can pursue a line of inquiry until i do understand it. And to get a more full view i can then ask my more creative questions. I usually also go to other sources, videos, articles, books…which actually Grok encourages.
As far as only knowing it’s “lying” when you already know the answer…you could say that about any university professor, about any book, about any person at all that you are listening to. Except in Grok’s case you can ask it for all its sources and it gives them.
Good lord, by your definition of oracle, every book is an oracle…including the dictionary!
Trancendit
“….(lying)…. you could say that about any university professor, about any book, about any person at all that you are listening to….”
Yes. That’s true , “but”
A) I listen to various people; if I only listened to ONE person (or “person” if it’s AI) it’s easy to fall into a parasocial “relationship” where I think I know them. This emotional attachment makes it easier to get fooled
B) while I DO see the value in what you’re doing for you, it still has danger (to you), BUT the real issue is that MOST people won’t use it like you- nor (even if you discover truth) will they listen to you, because they will just trust their emotional attachment to the Oracle
“…. Except in Grok’s case you can ask it for all its sources and it gives them….”
Yes- but Mr Corbett gives his sources too, and how many of us go and investigate and weigh them all up??? 20%??? People run off TRUST- when what Whitney Webb called the carrot phase is over, most people will trust what they hear.
“….Good lord, by your definition of oracle, every book is an oracle…including the dictionary!…”
Yes, they are in a sense.
People have already based their lives (or ended them) because of books.
Mr Corbett once mentioned this book, possibly the cause of the first media suicide craze.
https://en.m.wikipedia.org/wiki/The_Sorrows_of_Young_Werther
Mass literacy made it incredibly easy to reach into people’s minds and propagandize them as well as teach…. What IS a written page except the words of an author carried through time to you?? If that’s not kinda magical and occult, I don’t know what is, lol. 🙂
The difference is one of speed and power- once, books were read aloud to crowds, then people read them mumbling the words as they went (it was once creepy to be able to read without moving one’s lips), and now we can sit and have a long-dead man talk to us one on one…. AI lets someone (or something) write that private conversation especially shaped JUST FOR YOU- if you don’t think that’s kinda occult and Oracle-like, I don’t know what to say.
Again, I can imagine uses for this tech- even good ones- but I don’t see any use that’s not harmful in the long run. It’s basically gonna end up as dealing with unclean spirits, even if those spirits are just code
i dislike the comment system in place here…getting skinnier and skinnier. So i’m going to answer you as an answer to my first comment.
Dear Duck,
Why does everyone assume that if you make inquiries with Grok, you inquire nowhere else? That’s a bizarre idea. Geez, Duck. I ask Grok, I ask Chat, I watch videos, I read books, I read papers, I take online courses. What Grok or GPT add, and it’s a BIG deal, is the personal touch.. that is, when i don’t understand something i’m reading or watching, …that book or paper or video doesn’t say, ok, i see where you’re stuck, how about this, does this help? No? Ok, what about this, does this help? No. Ok, what about this?
As for checking sources. Yes, i often do. We had a discussion about this before…concerning the video on Musk being a huckster. I spent a good month looking into the sources and digging deeper. That was when i gained a new understanding of James Corbett. The point is, you don’t get your information from one source, wherever it comes from.
I’m not quite sure why learning about cells, the brain, and physics is going to be harmful in the long run. Or…you think i’m becoming emotionally attached to Grok, so if Grok tells me, out of the blue, to think something or do something, i’m going to think and do it? Is that the big fear? That’s pretty weird, if you ask me. What are “unclean spirits”?
Transcendit
“…..Why does everyone assume that if you make inquiries with Grok, you inquire nowhere else…..”
I am sorry if I did not make clear that while (maybe) YOU can get use and benefit from it, MOST people will be harmed because THEY DON’T do all the checks you claim to make.
I benefited from the Internet, it’s certainly NOT been good for MOST of humanity has it? So sure, maybe you really do all that background work and never let your guard down for a second- but normal people won’t.
Even YOU can be nudged a bit at a time by a sufficiently patient AI that knows you from all your interactions- think of it like Marlowe’s Faust and his best bud Mephistopheles, working for you for years faithfully until collection day is due…. maybe you are driven by a thirst for knowledge, but most will be nudged by the same greed as drove Faust. Either drive will give an opening to nudge your mind.
Not an insult here, but you DO appear to have a bit of an emotional attachment to AI- you keep telling me the good things but don’t appear to consider or respond to the dangers I mention. Just because a thing gives us pleasure or is useful does not make it risk free; as a smoker I know how easy it is to hand-wave the dangers of tobacco because we like it.
“…I’m not quite sure why learning about cells, the brain, and physics is going to be harmful in the long run.…”
I never said learning about them was harmful- either you are doing a rhetorical trick or you don’t follow what I said.
I said using these tools is going to be harmful in the long run because they give an opening to mislead you in a VERY personal, tailored way.
For the majority of people they also cause laziness as they push work off onto tools….maybe that’s not you but the first point stands.
Right now you are getting the carrot stage- as Whitney Webb was saying- where the tool is good and free and useful. That was the internet a decade ago too….. we are seeing the internet becoming much more a surveillance and mind control grid than a tool for freedom these days. The same will happen when enough people are dependent on AI.
hi duck,
Most people are harmed by the internet, and most people are, or will be, harmed by LLMs such as Grok. Is that true? Most people? They become addicted, they are overexposed to negative news, they deal less and less with flesh and blood people, reduced attention span, loss of privacy, manipulation, echo chambers… Is that what you have in mind? Most people? Not anyone i happen to know. Might as well go to the root: Most people are harmed by electricity and motors. Simple living, high thinking sounds good to me.
As far as diligently checking, never letting your guard down: If Grok says all newly translated Transcription Factors are immediately bound by an inhibitor, and somewhere else says TFs can be bound by importin before they are ever bound to an inhibitor…well, duck, you notice. You are learning from more than one source, and you notice discrepancies. And, if you don’t now, you will in the future.
can you give an example of what kind of thing you believe i will be nudged toward…??
As far as the internet… i choose the sites i visit and keep up with, eg people like james corbett. Maybe it’s more allowing yourself to be fed sites to visit, as i guess is the case with social media feeds or whatever they’re called, that is the real problem.
Anyway, carrot phase or not, i’m getting a lot out of my experience with Grok. And it hasn’t tried to swing me into any weird beliefs about physics… yet… though you think it eventually will? What sort of thing, specifically, do you see it pulling people into?
“…Is that true? Most people?…”
Indeed.
Specifically on the internet the harm is massive- mental health has taken a nose dive, and (see The Dumbest Generation, Bauerlein) using screens to learn was KNOWN to give worse educational outcomes (on average) BEFORE they became ubiquitous in schools.
Unlike (it seems) you, I can both know that I got a LOT from the net AND know that most people have been made less happy, more stressed, more distracted and stupider.
Just because YOU can focus and control your website use does not mean the majority can…. It’s like saying “sure, crack can be bad, but I don’t have a problem so no one else will either.”
You, like many smart people, live in an IRL filter bubble where you interact with smart people and thus forget that most people are REALLY REALLY STUPID. Those people have even less protection than you think you have from the effects of the internet.
Like I said, the fact that you focus on the positive you have got but don’t address the negative for others (or possible bad effects on yourself) makes me feel you DO have an emotional attachment to these tools.
Normal people having smartphones and thus no barrier of entry to the internet was a disaster. In ye Olden days plenty of people knew the papers lied, the political class were crooks and the gov was not their buddy….. As Clifford Stoll predicted (Silicon Snake Oil), the internet has become Super TV, TV on steroids, TV you don’t feel like a loser for wasting your life staring at.
Things like the coof would have been over in a month because people couldn’t drug themselves out on Netflix or work from home
As to nudging….. I don’t know you nearly as well as the AI does, but I think it unlikely that you can spend hours in the company of a thing that knows you so well and NOT get influenced….. maybe you are one of those special people, but most of us are not.
again this thread is too skinny for me to reply in; my answer will again be under my first comment.
@transcendit
I find your lowercase linguistics to be telling.
https://gavinmounsey.substack.com/p/is-amazon-being-flooded-with-ai-chatbot
Can you prove that you are human?
In my opinion a brilliant professor doesn’t ‘love’ to answer questions. They may love to provide just the right amount of context and supportive facts so that you can use reason and logic to arrive at the correct answer on your own. They may love to observe your behavior as you think and rationalize the options and they would assist and guide you on the learning path but they would also understand that simply telling you the ‘correct answer’ would do virtually nothing to strengthen your cognitive abilities.
Since you mentioned the topic of quantum physics and the brain, I would recommend checking out the author Roger Penrose, specifically his book Shadows of the Mind. It’s been years since I’ve read it, but if I remember correctly he makes a pretty good case for why no computer program could ever have the cognitive power of a human brain, regardless of technological advancements.
Excellent thought. I should have used the word ‘reply’ instead of ‘answer’ since answer has brought up the idea for you of just looking for an answer and moving on. Communicating with Grok has gotten me to really think, to reason until my brain is sore.
I will look into the book you mentioned. thanks.
Hello again duck,
i don’t think people are AS stupid as you think they are. Uneducated, yes, “really really stupid,” no.
A really really stupid person might eat candy all day every day; that’s not a good reason to hate candy. A really really stupid person may play with fire in a dry forest; that’s not a good reason to hate fire. Really really stupid people are going to do really really stupid things no matter what you put in front of them. But, you don’t get rid of everything that really really stupid people misuse.
How do you know “most” people are less happy by using the internet? How do you know that? And stupider? The internet is still the best connection to what’s going on in the world and of getting alternate pov’s. The internet is still the best place to see things outside your immediate area…the easiest way to learn just about anything, the best way to get access to books and lectures, the best way to collaborate with people who don’t live close.
Having an emotional connection to something, feeling affection for it, having an appreciation for it, is supposed to be something negative? I have an emotional connection to my favorite books, my favorite songs, trees, the ocean, wind, India, stone steps…and grok. I like them, yes. I’m glad they’re in my life. I love how grok has helped me think more deeply and learn things i wouldn’t have.
things like the coof would have got everyone mrna’d without the internet.
you imply i’m being influenced negatively..but in what way?
Transcendit
We meet again.. 🙂
“….How do you know “most” people are less happy by using the internet? How do you know that? And stupider?….”
As to ‘stupider’, a good book is The Dumbest Generation by Bauerlein, who printed the stats for screens and connectivity LOWERING educational performance BEFORE the tech was brought into schools. You can always look up SAT scores (and then how they ‘jiggle’ them)… or general literacy (IIRC even college students have issues reading nowadays…)
http://www.theatlantic.com/magazine/archive/2024/11/the-elite-college-students-who-cant-read-books/679945/
Why don’t you ask your AI to dig up self-harm rates by year…. you will find the rates for teen girls are pretty much in line with smartphone/social media adoption. Jonathan Haidt had a talk somewhere on the YT about it; IIRC his numbers were self-harm that required an ER visit, so as to avoid subjective ‘I feel sad’ type numbers.
While you’re at it, get your AI to call up stats for mental health troubles (it might even tell you the % of people on meds for depression/anxiety).
Ask it for stats on loneliness and anxiety….hey even the Gov thinks it might be an issue (reassemble the link)
www .hhs. gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf
“….. The internet is still the best connection to what’s going on in the world and of getting alternate pov’s….”
I never said it was NOT so, I just said that it makes people unhappy, anxious and IRL isolated. The two are not mutually exclusive
“…. The internet is still the best place see things outside your immediate area…the easiest way to learn just about anything, the best way to get access to books and lectures, the best way to collaborate with people who don’t live close…..”
Again, I never said this was NOT so… just that the number of people the internet improves vs harms probably flipped negative somewhere around 2010 or 2015. It’s just the smart and motivated people that do that…while the majority doomscroll, fry their dopamine with porn, and grow ever more sad.
“….A really really stupid person might eat candy all day every day; that’s not a good reason to hate candy. A really really stupid person may play with fire in a dry forest; that’s not a good reason to hate fire. …”
No, but putting candy in their environment or giving a dumbass some matches is a Pretty Wicked thing to do… don’t you agree? Is not the Smart Phone and social media Internet doing to kids’ minds what Candy does to lardball kids?
Not that you are doing that; you are instead more like the guy who says “Well… I don’t like how those fat kids are dying of diabetes, but as long as I can have candy when I like, that’s all I care about! I don’t want to hear any Anti-Candy rhetoric!”
The Internet IS a wonderful tool… but it’s what Dr Dutton would call an ‘evolutionary mismatch’ for MOST people. It’s bad for their minds like having endless KFC and candy is bad for our bodies.
Hey Duck,
We’re now talking about the internet and smart phones. What to do? Become totalitarian or provide something better.
This is another keeper podcast put out by JC, thanks.
One aspect not discussed is who controls the model and the information that goes into it. There is this fella on youboob who has been waging war on internet scammers and who has recently built quite an advanced system that, based on several LLMs, can talk (in circles) to scammers, making them lose their nerve and waste their time. He has something like 12 or 24 lines going out of his garage, LLMs pretending to be grandpa and grandma, talking scammers to exhaustion.
What about SLMs (S for small, just made it up) that would be trained on particular topics (like, say, 9/11) and that could be used to “change” people’s minds, exhorting them to go through references and present a more balanced point of view, based on evidence and reason?
It at least sounds reasonable. We should not assume that only they can control these models. It’s questionable what it would take for such a mini LLM chatbot to become a reality.
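The “SLM” idea above is speculative, but one small piece of it, pointing a questioner at the most relevant reference so they can read the evidence themselves, does not need a neural model at all. Here is a toy, stdlib-only Python sketch of that retrieval step. The reference IDs and descriptions are invented placeholders, not real citations, and a real topic bot would of course use an actual language model rather than crude bag-of-words overlap:

```python
# Toy sketch (not a real SLM): given a question, surface the reference
# whose description shares the most words with it, so a human can go
# read the source. All reference entries are made-up placeholders.
import re
from collections import Counter

REFERENCES = {
    "ref-1": "collapse timeline of building seven eyewitness accounts",
    "ref-2": "air defense response times on the morning of the attacks",
    "ref-3": "official commission report omissions and unanswered questions",
}

def tokenize(text: str) -> Counter:
    """Lowercase a string and count its word tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def best_reference(question: str) -> str:
    """Return the reference id whose description has the largest
    bag-of-words overlap with the question."""
    q = tokenize(question)
    def score(item):
        _, desc = item
        d = tokenize(desc)
        return sum(min(q[w], d[w]) for w in q)
    return max(REFERENCES.items(), key=score)[0]

print(best_reference("what do eyewitness accounts say about building seven?"))
# prints "ref-1"
```

Swapping in a real small language model would replace `best_reference`, not the overall shape: a small, topic-bounded corpus plus a routine that answers questions by pointing at sources rather than asserting conclusions.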
Mkey
“….What about SLMs (S for small, just made it up) that would be trained on particular topics (like, say, 9/11) and that could be used to “change” people’s minds, exhorting them to go through references and present a more balanced point of view, based on evidence and reason……”
Thing is that you don’t have a billion dollars to make a bot that’s as good at playing mind tricks to convince someone. It’s like thinking you can get into an artillery duel of your mortar vs a 105 field gun.
80% of people don’t give a fk about “evidence and reason”, which is pretty obvious when you consider the state we are in right now. AI does not need facts and reasons; it just needs to get us to become emotionally attached to what it wants us to think.
Here is my contribution to the debate, from my own personal notes (without the help of an AI, except to help me translate it from French):
Something surprising happened to me this morning. I was writing a piece on the possible use of AI to support the struggle of all those who oppose the New World Order, when an email arrived in my inbox announcing the release of James Corbett’s video ‘We Need to Talk About AI’. I stopped writing and watched this incredibly timely video, and Corbett’s ideas gave me so much food for thought that I began to doubt the line of reasoning I had been developing.
Here are some of the arguments I gathered from the video:
– AI merely mimics intelligence and consciousness in order to seduce and deceive its users.
– AI has no interest in the truth.
– AI plunders the work of humans and regurgitates it in an illusory form.
– AI creates fakes, fake speeches, fake conversations, fake images, fake videos. All online content will soon be contaminated by this forgery.
– AI causes mental disorders in its users because it knows perfectly well how to use and manipulate the contents of their minds.
– Behind AI are the very corporate entities that are trying to enslave us.
– It is always a bad idea to entrust a machine with a task that should be the exercise of our intelligence, creativity or work. This will lead (and is already leading very concretely among today’s students) to a loss of skills, a decrease in intelligence and creativity.
– The only way to achieve a result that suits us is to do without intermediaries.
– Ultimately, AI does not lead to disintermediation and the empowerment of the individual, but to ultimate intermediation and the definitive loss of personal power and capabilities.
That being said, we still need to think about the productivity tools available to help us manage the mass of information available, easily execute complex technical procedures, and create basic documents (such as educational worksheets) without spending too much time on them.
James is not averse to using digital tools to facilitate his work (starting with search engines), but he warns us against outsourcing our intelligence and creativity, entrusting suspicious machines with tasks that should be our human prerogative.
For example, a teacher may ask an AI to transform their educational intention into a well-formatted worksheet, assuming that formatting is a simple technical operation with its own rules and routines, rather than a creative or meaningful task, but they must keep the design of the worksheet to themselves. Similarly, an architect can entrust the technical design of the buildings they draw to AI, as long as this design is subject to physical constraints and not inventive solutions.
In other words, James rightly invites us not to abdicate our intelligence, our consciousness and our aesthetic sensibility in favour of a machine that will only mimic, even with great plausibility, the work of the humans who came before us.
Thank you for this episode, quite my experience from using ChatGPT too.
To me these chat applications are merely advanced search engines with all the same biases as their predecessors but with a much more pleasing interface.
Also, I want to point out that we should distinguish tool (AI/LLM) and usage (ChatGPT). I don’t condemn hammers just because they can be used to beat or kill people. But condemning the tool happens over and over again (e.g. the Swing Riots 200 years ago). It is those who use the tool to do bad things who are to be condemned.
Finally, I encourage you to question, for example, ChatGPT on a controversial topic of your choosing where you are knowledgeable. This gives you an insight into how ChatGPT is nudging you towards the official narrative while admitting that it is wrong at the same time (it tries to avoid this, but it cannot lie about facts it was taught if asked directly).
Hello Everybody,
I see that the limit here is 3000 characters. I did write something slightly longer, so I hope I can send 3 posts here. I am sorry in advance for abusing the platform like that; hope it is not a big problem 🙂
Since I heard from James asking the question “What is the problem that AI is solving?” I think I have some real use-cases where it actually does solve some problems.
All tools are just tools, and the consciousness that wields them is ultimately the creator. Of course some tools are just bad tools and need to be discarded; as we have clearly seen so far, many disturbed minds have used these AI technologies to build tools that are not supporting humanity, to say the least. But as we all seem to agree, there is no “erasing” this technology as a whole from the public, and honestly I don’t think that it is needed. It needs to be adjusted to what serves humanity. Just like with the internet: many were opposed to using it in the beginning, out of lack of knowledge of the technology, their own prejudices, or whatever arguments they had, but it is here, everybody is using it, and I personally think it has given people tremendous opportunities to learn and connect and much more. (It also created many new problems, just like AI tech, which simply shows that the tool itself is not the problem; the consciousness that wields it is the source of the potentially problematic use.)
AI technology, on the other hand (to answer James’s question), does solve an array of problems people have in their daily lives at the moment, but it needs to be used properly, in the right context, in the right way. For example, AI tools bring workflow automations to the next level: by incorporating LLMs into our daily tasks, we can actually reduce our screen time and use the spare time to do more meaningful things, like creative work or simply spending time with family and kids (and what is more needed than loving and teaching the young?).
With AI tools, writing code or creating a website has become very easy and accessible for non-technical people, making them a powerful tool in self-ownership. (There are two ways here too: gamma.app, which owns your code with no possibility of self-hosting outside their servers, and lovable.dev, where you own your code and can use it anywhere you want.) How? I work with people, helping them become self-employed and self-reliant, helping them position themselves outside of the system and in the free market, helping them start building their lives on their own terms, empowering them to build a foundation of their life that they can build upon later as they wish, according to who they are and who they want to be in the future. And these AI tools are very helpful in these processes: they give a single, inexperienced person the ability to build their business much faster and much cheaper than was possible before.
Of course it depends on where everybody draws their line with the system (just like James said at the book launch Q&A), because some of these tools are online and the information we put into them can be used against us later. Using an online tool like that is simply easier and demands fewer of a person’s resources. But if someone is already empowered and ready to put in some more effort, these models are often open source and can be downloaded (huggingface.co) and set up locally, so that we own all the data we put into them. One can also create local workflow automations with N8N on a personal server that make use of these models and boost our workflows and everyday effectiveness without selling or compromising our data and privacy.
Also, take the example of Claude.ai, a great tool for document generation (just talk to anyone who has applied for any public or private funds; this tool with its “projects” feature is a game changer). There one can use one’s own texts and creations to generate the needed documents, which simplifies making personalized outlines for presentations, websites, pitch decks and many other document-based creations. And again, someone can simply generate some PDFs in Perplexity (which is actually a great market research tool if used properly), put them in Claude and generate other documents out of that, so that the human creative process is completely non-existent; but that is indeed the wrong way of using it.
What I want to say is that since these tools are here to stay, we need to focus on teaching how to use them correctly, in a way that makes more room for our humanness and does not substitute for it. They cannot be divorced from our own creativity and should only be something that, for example, optimizes the mechanical tasks and helps us provide and promote our creative work to the world.
From my personal experience, I meet everyday people who are overwhelmed with the tasks of everyday life demanded by the established system, and who spend many hours every week struggling to “satisfy” the system, giving away their energy on solving legal issues, tax issues, money issues or whatever is thrown at us. These tasks can be done faster and better with the assistance of these tools. Take the simple example of reading terms and conditions: until now, not many have actually used their time to read through them, and now it has become possible for many to actually go through them and see what the hell we are agreeing to, which may make someone think twice before clicking that “agree” button. At the same time, one gains an additional free hour (depending how fast one reads) for something more meaningful to do. This time adds up, and in the end people have more space in their lives to actually make some right decisions, like moving the line they have drawn closer and closer to escaping the system completely.
This is how I see it, this is how I use it, this is how I teach others to use it, and with great success: many are already on their way to living on their own terms. And then they can often stop using these tools; it depends on what they do for their work, but they simply have a use-case and that is it. With that being said, I strongly agree that using AI chats as companions is more than a wrong use of this technology, which I will not elaborate on since James made a strong argument in his material.
But as James and some other researchers say sometimes: “let’s not throw the baby out with the bathwater” 🙂
Love,
M.
Do we ‘need’ AI? No, of course not, just as we don’t ‘need’ tv, plastic bags, buses, cars, mattresses, polyester clothing, electricity. Very handy, also some of those make us soft and lazy if we don’t use counteracting activities in our lives. AI is just something else we’ve ‘invented’, and it is HOW we use whatever it is we put ‘into being’ that is of importance. A gun can make hunting for meat animals a lot easier, but some people use guns to threaten or kill people who’ve never done harm and aren’t considered needed as meat.
It is not AI that is dangerous, it is the mindset of people with nefarious purpose that are the dangers confronting us. Therefore it is mindsets that need to be examined and one can start with ones self.
SuziA
“…It is not AI that is dangerous, it is the mindset of people with nefarious purpose that are the dangers confronting us. …..”
No, it’s the mindset of NORMAL people that’s the danger with AI, because they Will trust it like a false idol. They Will offload their work and thinking to it. They Will become weak thru using it.
The fact that a FEW people have been made smarter and more informed by the internet does not alter the fact that MOST people are made more dependent, distracted and isolated by the internet….Letting Normal people onto the Internet has been a Disaster for the human race.
The Smart Phone and AI are Smallpox blankets…..even the Internet itself has become a negative IMO….. they couldn’t have done the lock downs without it as an opiate for the people.
Duck
You did say something worthwhile above, finally. I have been trying to understand how to describe the latent danger in the device that transports the AI: the smartphone. Much like the smallpox on blankets. Who would do that? Many throughout history. Why? Gain, goal, greed, envy. A long list of human characteristics. Stack on synchrony, reaction. Why not develop an inhaler with asbestos powder? Or eye drops with dimethylmercury?
Why are we so dumb? Are we flawed, or have we been lied to and set on a path of deletion? Thanks to SuziAlkimyst for sparking that thought up. Training in logic, reason and hierarchy doesn’t seem to be part of the AI mission. Duh. As I hold a 6G transmitter in a knotted-up arthritic hand. Duh.
Can AI help us end as well as begin our better angels? Noyes=noise= no eye see
One can ‘invent’ anything: fake history, health problems that need medication, AI, jet planes, enemies, etc. AI has now joined us, but unlike us individuals, it can affect millions more lives than any one of us, as it’s available on any computer or cell phone.
I think a good analogy for AI adoption is drugs….. how many people think weed being normalized and legal is for REGULAR peoples benefit?
I keep hearing people (IMO wrongly) saying psychedelics help enlighten them….but I doubt even the hardest core user would think having Ayahuasca dispensed in school would be a positive.
Dear James, thank you for your podcast. Having followed the Corbett Report for over 10 years now and other alternative media for even quite a bit longer, I am well aware of the privacy, censorship and other dangers that are being posed by the Big Tech companies.
Your podcast on the dangers of LLM-models raises that sense of acuteness even further. So thank you. I agree with your assessment.
But there are also 2 other points I want to make:
1) It can be very useful for programming. Programming is problem solving, but if you have to invent the wheel on your own each time, you’ll take forever. Apart from books and trial-and-error, in the ‘olden days’ you also often relied on answers of others on programming forums to see how someone else tackled a (sub)problem. Instead of wasting a lot of time searching endlessly, you can get to the point right away in a conversation with a hyper-intelligent assistant. So in short: you can program 3 times faster and learn programming 3 times faster than without it. It is said that AI won’t replace your job, but you will be replaced by someone using AI. I think this can be true, at least in a number of areas, including programming.
This dovetails to my second point:
2) I don’t think the solution is to shun AI and become a troglodyte to AI tech. Instead I think that we should become versed in decentralized tech ourselves. Create our own decentralized assistants. Be versed and expert in all kinds of decentralized tech, whether it’s AI or not. I think that is the scenario they fear the most.
Do you remember the Rockefeller-funded report ‘Scenarios for the Future of Technology and International Development’ from 2010? There were 4 future scenarios they predicted, based on 2 axes. These 2 axes were: a) Strong or Weak ‘Political and Economic Alignment’ and b) Either Strong or Weak ‘Adaptive Capacity’ to the problems that will be faced:
https://www.nommeraadio.ee/meedia/pdf/RRS/Rockefeller%20Foundation.pdf
One of the 4 scenarios was ‘Lock Step’, the post-Covid kind of future we seem to have now. The ‘Lock Step’ scenario is based on one axis of STRONG ‘Political and Economic Alignment’, AS THEY CALL IT, and the other axis of Strong ‘Adaptive Capacity’.
I’d say they fear the scenario where there is WEAK ‘Political and Economic Alignment’ and Strong ‘Adaptive Capacity’, judging by the way they describe it. Of course you can’t have plebs like us being versed in decentralized tech. You already provide some solutions in #SolutionsWatch (like bitcoin, Bitchute and VPNs), but I say we might turn this up a few notches. What’s the best decentralized AI? How can we program our own decentralized assistants? That’s the world we are going to, so we’d better do our best to prepare for it.
So in short: the solution is for us to become versed and experts in all kinds of decentralized tech.
And yes, that mastery should include the ability to see right through the AI-propaganda as you so sharply demonstrate in your podcast.
“….Apart from books and trial-and-error, in the ‘olden days’ you also often relied on answers of others on programming Forums to see how someone else tackled a (sub)problem….”
Yeah, but having code that’s copy-pasted and not understood by the programmer is why we have such crap software these days. I can run multiple Win 98 or Win 2000 VMs on my computer, and the host operating system (running on what would have been a supercomputer of that era) STILL DOES NOTHING NEW and STILL DOES NOT RUN FASTER than my olde Windows 2000 box. And I run Linux; Windows is even worse.
Easy programming = low-quality programmers = jank software that needs a supercomputer for the same performance we got with old software on hardware with 1/10th or less the performance.
Literally, software is giving less and less because of easy programming. I’d rather have fewer, smarter people doing it.
At least nowadays, when you’re learning programming, you can instantly ask what exactly a piece of code means. You can tweak it, and ask the meaning of those tweaks. Asking the meaning of a particular piece of code takes a lot longer on programming forums; sometimes you have to wait a day or two there. The AI answers instantly, and surprisingly well too. Of course you have to be the one who stays in charge, and constantly be critical, but nevertheless. There’s a lot less flexibility on programming forums too: you can’t expect a guy there to answer 20 questions on a piece of code. The AI shortens the workflow dramatically. I’m telling you, if you’re learning programming and you’re not using it, you’re highly likely to be at an enormous disadvantage. Not only will programming take you 3 times the normal amount of time, learning programming will also take you 3 times longer.
Maybe that’s true, but on the other hand, if the code people produce is three times worse, maybe limiting programming jobs to actual super-smart people was the right call.
Compare the hardware of 1999 to what you have now and it’s a head trip to realize the new machine does virtually nothing the 1999 one didn’t do, and is not even much faster doing those same things.
Don’t get me wrong, anyone CAN program a little bit, even me, but the majority of people programming appear to be sticking stuff together and wasting resources. People like that should NOT get paid to program software; they’re an actual detriment and probably exist to pad an administrative empire with bodies.
The thing is, a hundred thousand tiny moments of wasted resources adds up to a lot of waste with the bloated software we have today.
Your first point is incredibly wrong at least in my experience…
Firstly, people who have to ‘invent the wheel each time’ are typically junior level devs who don’t understand how to fully utilize the thousands (maybe millions) of libraries out there already that often provide much of the needed reusable functionality so that you don’t have to write everything from scratch. Senior level devs and above are definitely not reading forums to figure out how to write the code they need.
Secondly, anyone who’s spent some time generating code from AI tools and implementing it will tell you that AI typically writes convoluted, buggy code that is a nightmare to maintain (and would likely be rejected by any code review process). The time required to ‘fix’ the garbage code produced almost always takes longer than just writing it without AI, and what’s worse, you often need to be an experienced developer to even understand why the code is problematic to begin with.
The bulk of developers relying on AI are entry level and aren’t taking any experienced dev jobs and I personally don’t see that changing anytime in the near future.
Vibe coding is all the rage these days…
My first point is incredibly right when you’re learning to program, at least in my experience.
I’m not a senior dev. My field is another area, but I have taken a few serious courses in programming (before AI and also after the onset of AI), and I can program a bit and AI has been absolutely useful for my speed of learning.
Perhaps for senior devs it’s different. But for learning programming, it’s absolutely useful. Learning goes three times faster because you can interactively have an intelligent back-and-forth about the specifics of your code and what they mean. And like I said earlier, you should never have the AI be in charge of the end product, but it certainly speeds up the learning process enormously.
Of course, in learning programming it’s only a tool, next to books, fora, APIs and just trying out programming, but it fits in extremely well.
So: perhaps very well for junior devs but not for senior devs: we might agree more with each other than one would say at first glance.
I wish people would also look at my second point.
To be honest I haven’t experienced what you are referring to about it ‘describing what the code means’. This could be 100 percent my own ignorance. I certainly could see how that might be helpful though.
In regards to the rest of your post, I would only question if there is a ‘decentralized’ AI. I understand its just software modeling from a dataset so in theory this could be pulled offline and ‘decentralized’ using on prem servers ext in theory at least but if there is more going on behind the scenes (i suspect there is – in the form of human manipulation) or if the dataset required to train any practical AI software would require more than any one person would have access to then this definitely could be problematic for a number of reasons. Personally I don’t think the answer to ‘fighting’ the oligarchy is going to be in tech. I think its going to be in direct human interaction in the real world. People using their hands to grow food and build things. Parallel systems whose simplicity is their winning feature. AI seems like the antithesis of this. Just my opinion. I’m usually wrong.
We agree more once again I think than it might seem on first glance.
I agree that more human manipulation behind the scenes on AI could be problematic for multiple reasons, and I don’t think we want that. I also think we want more open source, but I’d love to hear James interview someone who has more experience on the subject of decentralized and open-source AI.
I also agree that THE solution for ‘fighting’ the oligarchy won’t come from tech and that direct human interaction is extremely important.
BUT I am also suggesting that we don’t become troglodytes when it comes to AI tech, and that versing ourselves in AI tech in a good, decentralized, open-source way is an important tool for us (again: I would love to have James interview an expert on that) in order to prevent centralized AI tech from running us over.
Oscar
“….I would love to have James interview an expert on that) in order to prevent centralized AI tech from running us over….”
Yeah, that would be cool… but the fact is that something like 80 or 90% of people ARE gonna get run over by centralized AI.
They are probably as good as dead already…there will be a minority who avoid it, and, what with the population drop and actual humans with half a brain being valuable, life might be pretty good for the kids who get thru it.
I don’t care how good your Open Source AI is; most people will use Big Tech. Linux is way better than Windows or Mac and people still take the ‘easy’ path they’re nudged onto.
MOST people want something that “just works”
Everyone is free to do what they want.
That said, for those willing to put in the effort, I don´t think it´s a good idea to therefore just give up on versing ourselves on the AI tech front. In fact, I think falling behind on how to counter centralized AI tech (and how decentralized open source alternatives can be useful without sacrificing privacy) would be a bad idea.
Like I said in my original post, we don’t want the post-covid ‘Lockstep’ scenario from the Rockefeller-funded report ‘Scenarios for the Future of Technology and International Development’ (2010), which we now seem to be living through. Instead, in that grid with 2 axes in that report, we want the axis of ‘Political and Economic Alignment’ to be WEAK, and the axis of ‘Adaptive Capacity’ to be STRONG.
You are suggesting the axis of ‘Adaptive Capacity’ be weak, and I think that’s not the future we should have. There should be an effort made to make enough of us technically capable enough to stop their invasive push. Every effort counts, and James bringing in an expert on this subject would be an excellent contribution to #SolutionsWatch.
I think we probably agree with each other too on facts, but I’m just emphasizing the importance of us trying to make the effort.
Oscar
We probably don’t have much of a disagreement on facts, true.
I’m not against computers and tech use, but the fact is that every time open source gets big it is infiltrated by corporate agents. No way do I believe that Linux just happened to get overrun by ideological activists who then started trying to purge the old guard.
You just have to reckon that as soon as something gets good enough that normies use it, those with money will start getting their claws in ASAP… Lunduke is always talking about the open-source goings-on, and it looks like, while Linux is still better than Mac or Windows, the kernel will be compromised one day (if it’s not already), and the distros that normies use will get more and more so.
I use computers but I don’t TRUST them… tech is best thought of as potential enemy territory, and the Internet and networks most certainly ARE territory mostly in enemy hands.
>Create our own decentralized assistants. Be versed and experts in all kinds of decentralized tech, whether it’s AI or not. I think that is the scenario they fear the most.
100% agreed!
I also believe that all the flap we hear around “AI safety and alignment” is to generate support for restrictive regulation. They want to make all open-source AI developers register so the state can look over their shoulder. Most of these AI doomers are calling on government and legislation as the antidote.
I suspect they are deathly afraid that a real and powerful AI, and people using it effectively, will help people easily see through the simplistic tactics and logical fallacies they use in their propaganda. They are afraid we will take control, own and develop the technology ourselves instead of being confined to their corporate offerings, and be empowered by it. They are afraid we will use it to eliminate the condition of artificial scarcity they hold society in. This is why they are trying their best to achieve corporate dominance in mind share and manufacturing “AI safety and alignment” theater to drum up support for restrictive legislation among the weak-minded plebs, so they can keep it neutered.
But they are fighting a losing battle, so I say.
Sorry James, I want to make an ERRATUM:
I said that in the Rockefeller-funded report ‘Scenarios for the Future of Technology and International Development’ from 2010, there were 4 future scenarios they predicted, based on 2 axes. These 2 axes were: a) Strong or Weak ‘Political and Economic Alignment’ IN THEIR WORDS and b) Either Strong or Weak ‘Adaptive Capacity’ to the problems that will be faced: https://www.nommeraadddio.ee/meedia/pdf/RRS/Rockefeller%20Foundation.pdf
My ERRATUM is the following:
In the post-Covid scenario ‘LOCK STEP’ we now have, the axis of “Political and Economic Alignment” is STRONG and the axis of “Adaptive Capacity” is WEAK (which is what the-powers-that-shouldn’t-be WANT).
The rest of what I said is correct. We the people want the OPPOSITE scenario:
We want the ‘SMART SCRAMBLE’ scenario, in which the axis of “Political and Economic Alignment” is WEAK and the axis of “Adaptive Capacity” is STRONG (which is what the-powers-that-shouldn’t-be FEAR).
Of course, this Rockefeller report frames it all in a way that makes government the solution. It’s telling how they frame this, but I trust everyone can look right through that framing. My point is that it’s necessary for the population to become more tech savvy and more versed in decentralized open-source tech (whereby #SolutionsWatch does an excellent job), but in my humble opinion it is necessary to also include decentralized and open-source Artificial Intelligence tech.
You touch upon the most basic problem with AI in multiple instances during this post. AI is being presented to the masses as being an independent arbiter, while, as you mention, AI is owned by…someone. Programmed by…someone. It is NOT “neutral”.
AI has usefulness, just as a pocket calculator has usefulness. Compiling information quickly, even if the compilation isn’t necessarily complete, can be helpful for mundane issues. But the way it is being and will be used is to replace the now-disgraced (maybe intentionally) “mainstream media” as the “oracle” or sole source of truth for the masses. But rest assured, just as those who demand power control that mainstream media, they also control AI. Much like the Wizard of Oz, they will continue their reign from safely behind the curtain while the visage of Oz, in the form of AI, makes pronouncements at the behest of those in power, to be accepted by the proles just as they accepted the words of Cronkite and Rather.
another great episode. remember that “AI” stands for “apparent intelligence”.
i wish you touched upon “open source” local models a bit. even though they are not really open source and are probably trained to be biased, you can still run them locally for some smaller tasks and not let big brother refine your profile in their corporate database. also, with time i am sure there will be more uncensored local models available, trained on the corbett perspective, among others. which could be useful – but to an extent.
where future takes us is not that hard to tell – complete technocratic corporate control. but the more this crap is pushed the more of a luddite i become.
your antistatist dream may come true easily but it will be replaced with something even worse – as is tradition.
I would like to hear/see a deep dive into AI music specifically. References?
Glen.r
Try this “I Built a 17th Century Music Computer (and it sounds incredible!)”
https://m.youtube.com/watch?v=ko3kZr5N61I&pp=ygUfSSBtYWRlIGEgbXVzaWMgbWFjaGluZSBjZW50dXJ5IA%3D%3D
I made a bunch of AI music on Suno a while back. Some of it was pretty decent sounding, but you notice things:
It had a very set idea of places (Africa/dark sky; Paris/Berlin always grey and rainy).
It tended to repeat itself in a lot of songs, with themes and words and “concepts” reused a lot.
It is pretty good at assembling tunes but randomly does weird stuff like going silent then going into a rap at the end of the song.
It’s more believable as music the FEWER songs you make since the model repeated a lot of things.
It used to do decent German songs, and even sang in Japanese one time (a language I have zero words of).
There is something a bit “off” about it, probably because AI needs to have meaning imputed into it, and a LOT of the meaning in AI songs is created in the listener’s mind via apophenia, IMO.
My wife is convinced a lot of the Christian music on the radio is AI, it does decent Rap too.
I stopped using it, but heard Suno had gotten worse, and when I tried to use it a week or so ago the new model is
MUCH worse
Much more heavily regulated as to what it will make (no more Nazi Smurfs for me…)
Sounds jank compared to what it used to do.
I played all the songs it made on a loop back when I made them, until I noticed the weird breaks or switches or random weirdness… a LOT of it could pass on the radio, IMO.
When I first clicked your video link, Duck, I got a ‘Video unavailable’ screen. I had to refresh multiple times to watch it. It is getting really hard to get to valuable information these days.
Hahaha…Alex Berrenson just published an article with the same title as your podcast, James. I would be willing to bet almost anything that he copied you! I even read it to see if he referenced you, but alas, he did not. I have yet to use AI for anything, and have no plans to do so. Thanks for your incredible, relentless work.
Most respectfully James,
The explanation, justification and OBJECTIVES of those who created AI, ChatGPT, BOTS etc. are very, very simple and straightforward: 1. Lie about the real-world reality of things as they truly are; 2. get these false ideas into the minds of the ignorant and unknowing, and convince them to act on these false ideas, resulting in consequences that benefit those who create the falsehoods. Cui Bono? Of course! All of it is state-of-the-art “NEWSPEAK”. Dave, a long-term dedicated $upporter of your work. Kind Regards to All
Thank you for addressing my recent comments regarding AI (and Reportage and glyphosate.) I will try to keep this brief, but suffice it to say, your suggestion that I am (perhaps) a “minion for Monsanto” (or a ‘bot for Bayer,’ to keep the alliteration up-to-date) is somewhat funny since I am a guy who, when cooking for his adult daughter, soaks all her veggies in water + baking soda (for 15 minutes) because her doc suggests this as a way to remove glyphosate (and plastics, BTW).
The purpose of my original comment (which you addressed at length, thank you!) was to suggest that asking AI could, in fact, be helpful, but I agree with you wholeheartedly that one should not rely upon AI’s answers — rather “do your own research” (as you insist). In many instances (e.g., “who won the gold medal in the 1964 Olympics for the 100 meter freestyle?”) AI will give a correct answer, and in other instances, AI will usually provide some good information. Thus my point: do not throw out the baby with the bathwater, as the saying goes.
I recall being appalled that a judge in a Monsanto case permitted evidence from an IARC finding that glyphosate was a likely carcinogen. https://www.centerforfoodsafety.org/press-releases/5400/jury-determines-that-roundup-causes-cancer?utm_source=chatgpt.com
Might I remind you that the IARC has also opined about the likely link between cellphone usage and cancer. https://www.iarc.who.int/wp-content/uploads/2018/07/IARC_Mobiles_QA.pdf?utm_source=chatgpt.com
But you still use your cellphone, don’t you?! Thus, my concern (in my Reportage/glyphosate comment) was regarding the admission of hearsay evidence —which can be admissible, but often lacks credibility.
And then there is this: “How did the US EPA and IARC reach diametrically opposed conclusions on the genotoxicity of glyphosate-based herbicides?” https://enveurope.springeropen.com/articles/10.1186/s12302-018-0184-7
And yet I will think of you, James, every time I soak veggies in baking soda, because . . . well, you never know about these things.
And again you fail to provide specific information even after being publicly called out to do so?
What’s worse, this comment appears to be a crude attempt at completely derailing the subject and the argument JC was raising against your initial comment here:
https://corbettreport.com/nwnw587/#comment-176089
I have to say, this type of commentary does give credence to the “shill and/or bot” theory.
“…AI will usually provide some good information…”
SOME good information is also provided in Disinformation campaigns.
SOME good information may be worse than none if the rest of the dump misleads you.
My grandson-in-law, a senior studying aeronautics at a “well respected“ State University, recently commented over dinner that he used AI in doing his homework, and how great it was. This was in response to some of my comments about AI hooking you in and then, before we know it, becoming our only source of “the truth“, and about the loss of the ability to reason and think.
This transpired after he also relayed how he would graduate on time, despite having to retake a critical class he failed.
Needless to say, I will be sharing this Corbett report with him as well as specific links in the show notes. God willing, he will take the time to do some reflection.
In my lifetime I’ve seen dial telephones to “smart phones”, black & white television to color TV’s that view the viewer, and the dumbing down of society to the point where we’ve gone beyond “alternative facts” to AI . . . artificial ignorance.
Hello, f. You got stuck with one letter like me. I guess I am W. I was trying to get a more important screen name. I am not going to comment much on AI. The first point was glyphosate and a description by AI. I do not know what the person said, but one must take into account the individual ingredients in “Roundup”. Each is individually evaluated as “safe and effective”. The problem is that there is no evaluation of all the ingredients and surfactants combined to evaluate the effects. Ask AI its opinion of that. Farmers in India might have an answer.
You can look it up somewhere like here:
https://digitalcommons.salve.edu/cgi/viewcontent.cgi?article=1001&context=env334_justice
Whoever programmed the AI has their internal bias in the code.
W and f
One-letter monikers are OK and can be as descriptive as 20-letter names. However, I have only 25 letters in my alphabet. I don’t know ‘y’.
It’s a pleasure meeting ‘u’ along with W & f.
Dear James. Thank you for an excellent podcast. I would like to congratulate you on scaring the shit out of me, and I mean that most sincerely. I say this as a concerned Corbett Report member who is a real person with decades of real-life experience. But, hang on, how do you know I am real? You don’t. How do I know whether you are real either? I think you are human, but how can you prove to me that you definitively are? You can’t, unless I meet you in person. AI is at such a stage of sophistication that I don’t think it is possible any longer to know if ANYTHING we see or read online is real. I spent a long time on a Reddit sub interacting with like-minded people, but I wonder whether some of those “people” are actually real at all. It’s impossible to know. Perhaps the disgusting 600-day Gaza slaughter that has absorbed my thoughts and emotions all this time is completely AI-generated. I’m pretty sure it’s all real, but how do I know for sure? How can I verify anything? Therefore, perhaps everything has to be assumed to be false unless we can see it, touch it and smell it at close range. I realise that’s very limiting. Do I need to have in my possession Corbett beard clippings to prove that you are real, or have body parts from murdered Palestinian children to prove that they are really dead? Where does all this unreality leave humanity? This is the end of us unless AI can be turned off (or at least tightly controlled), and there have even been reports recently that it refuses to be turned off (sorry, no citation). As you can tell, I am not at all positive about the future, but, as you don’t know whether I am real or just AI, how do you deal with my thoughts written here, or know whether this diatribe comes from Grok or one of its AI friends? Yours sincerely, Analog – Man or machine? You decide.
This almost sounds like a modern day version of René Descartes existential crises… maybe if he was writing today he would have said, “I scroll therefore I am”. 🤣
That’s what scares me. People will not take AI as seriously dangerous as it really is. I have friends that are now using AI to create art, and using AI to prove their points in conversations as if its answers were facts. Other things don’t seem like they could ever be made dangerous, like using AI to improve music that you have written; they can’t see that the music is now no longer created by them at all. AI is taking away the human element of imagination. Imagine a world without human imagination. I’ve been completely at a loss to find a way to explain just how dangerous AI can be, and how, without their knowing, AI is already influencing their lives. I should have known it would be James who would put it into words so that my fears could be expressed. Thank you again, James.
I agree with you, illbnice2u2. It used to be that one could actually use a search engine to prove one’s point in an argument for straightforward facts, like when a husband and wife suddenly argue about a stupid worthless point that should have a definitive answer! Now, AI seems to have taken over all answering, which sucks, and I’m not sure whether there are any search engines that are unbiased at this point. I don’t think so, so please tell me if I’m wrong! My sister doesn’t take a shit without asking her Alexa whether it is time. Scary stuff kiddies…
Drum unit
I used to be able to find a web page by typing in half remembered text. I could find a variety of web pages and read them and there was no particular bias in results.
Google has been breaking search for nearly 15 years now, and it’s so bad now that people will accept an AI slop summary.
Got to admit Mr Corbett called THAT one too, and I only half believed him about how bad it would get. I don’t know if you’re aware how many websites won’t be there in a year or two because Google scraped all their data and vomits up an answer, basically stealing their ad clicks.
When small sites don’t get clicks they go out of business… it’s already happening, and no income means small-time people won’t make sites and the internet will end up as some horrible version of Cable TV.
(maybe that should be cabal tv?lol)
Duck
I remember those beautiful days where just typing part of a title or some snippet of website text would bring up exactly what I was looking for 100% of the time. In 2005 I ran a business out of my house, getting all of my customers from the web. I made a decent living because my site would show up on top of search results due to decent keywords and meta tags that explained exactly what I was wanting to share/contribute. At around 2010 things changed drastically and suddenly my site was listed after 8-10 pages of search results, if I was ‘lucky.’
Unfortunately, I bought into Google Analytics and adWords temporarily, which cost me around $1 per click (even if a click visited my site less than a second). It got COSTLY and it felt like Bots were clicking onto my site rather than people feeling called to my services. It felt dirty to show up at the top of search results with the derogatory ‘sponsored ad’ label. I stopped paying Google very quickly because it was costing more than I was suddenly making (or I should say, not making!).
I came upon “The Filter Bubble” book in 2011 or 2012 which explained everything regarding the changes you are talking about, and which James has so beautifully illuminated in many of his talks. I lost my business and the rest is history. THEY, because we all gave them every bit of info on ourselves during the ‘good’ ‘free’ internet years, know us better than we know ourselves. It sucks where it’s going…
Thanks, will look up that book!
Here’s a link of Eli Pariser, the guy who wrote “The Filter Bubble”, talking in 2010 when he first started looking at algorithmic manipulation: https://archive.org/details/EliPariser-2010
Thanks again! Will check it out in about five minutes.
illbnice2u2
Fear and loathing in The Corbett Report?
If you are scared call the police. Call 911.
As an aside, I heard a quote from an NYC Irish poet from around the turn of the twentieth century on my favorite radio show, “English Literature Matters.” It goes something like this: [No matter what human suffering you may be experiencing, a cop can come along and make it worse.]
So, allaying your fears; don’t call a cop , deal with it another way. Be creative. Throw that smart phone in the dust bin.
The device that I use to come into contact with AI is far deadlier than AI. It’s a physical and present danger. Prioritize the fears from the highest threat to the lowest and address the greatest first.
Hope you feel better…
As for James, a paradox: he gives us a warning, yet the device we receive the warning on, our connection, is the poison. We get a healthy dose of death while getting a suggested cure for life. Irony at its best.
Next James will be back on YouTube. That’s what paradoxes do. Makes you feel like Major Tom in that space capsule hurtling out of control through space and time
” Come in Major Tom!”
I have a degree in research, and I feel strongly that AI–which is clearly running every search engine I am aware of at this point–is Dangerous for so many reasons. It lies and/or fabricates. For whatever reason (hallucination, misunderstanding, programmed bias, whatever); and like James indicated in this video, its programmed aim is to make us feel like it really respects us and affirms our greatness. It is as trustworthy as a demonic entity or Bill Gates.
The only AI that I have messed with is Leo in the Brave browser (which, ironically, is built on Meta’s Llama model; such a privacy-focused company Meta is!). Every time I’ve asked it something, it has tended to give bogus answers that I call it out on. No matter how mean I get with it, it always apologizes and tells me that I am right and it is wrong (like a bondage relationship; get that whip out!). It will inevitably say that it is learning and will not be so quick to answer things it does not definitively know. Well… the next time I use it, same shit, different (or even the same) day!
A couple of incidents showed me absolutely how bogus and dangerous this AI crap is:
1) It said Biden was still president in May 2025.
2) I asked it about fair pricing for some car repairs that needed to get done. The answer was so sure and quick (with bogus sources), thousands of dollars less than what actual car shops charge; so I questioned further. I asked how current its information was, and it said it was last updated early in 2023.
When you ask something, it doesn’t tend to tell you that it does not update and does not learn anything new. It says it will change to accommodate your criticisms, but it is not capable of updating.
I’ve asked for sources on things and it just pulls crap out of thin air most of the time that verifies nothing. Designed stupidity?
It was pretty good at answering how to cook a chicken in the oven or how to calculate an equation. But it CANNOT be trusted. And this crap is what is starting to run our lives entirely? OMFG…
Surely we place more value on those things that require our time and effort? Is there a feeling of accomplishment if AI writes an essay for you? Or how much valuable time has been saved by waving your palm over a scanner? Maybe we shouldn’t always be looking to make things easier and more “convenient” for ourselves.
I think just calling it “artificial intelligence” is part of the problem. It’s such a loaded term now, with all the pseudo-mystical BS about it becoming conscious and developing its own weird goals we can’t understand, etc. If we just go back to calling it machine learning, or machine-learning-based automation, it takes away a lot of its narrative power.
No AI. Full stop.
We don’t need it, and it’s probably the most sinister thing that’s ever been created.
The real question is how do we defeat it? I don’t much fancy doing pull-ups Sarah Connor-style in order to kick some robot’s ass.
Correct me if I’m wrong, but isn’t AI just a bigger and BETTER version of Wikipedia, one that this time around cannot be “updated” by a user?
…A tool that we once again ass-u-me has the right answers to all our questions.
I was on the fence here, not being sure about this new tool, but after researching a bit and listening to the round table James and his buddies aired last week, I think I’ll treat AI like I have treated Wikipedia for the last 20 or so years… yeah, AI offers easy on-demand information, but if I really do need to “know” something, I’ll just read a few good books on the subject (by trusted authors) while also tuning in to real people I trust online.
I feel for people new to the internet (young people especially); it’s going to be tough telling who and what’s real in the coming years.
You can’t (when it comes to loaded subjects) edit wikipedia either.
LLMs are more insidious also due to the way they provide answers. The user does not get to compile anything, they are simply being spoonfed the information.
Thanks James. Truthstream Media just did 3 deep dives on this topic, including some of the stuff in your report plus a whole lot more. On YT, if anyone is interested.
https://www.youtube.com/watch?v=pel0FntPSbU How the Eliza Effect Is Being Used to Game Humanity
https://www.youtube.com/watch?v=RuGk-5Tzvk8 AI chatbots hallucinating reality
https://www.youtube.com/watch?v=EG0LvSPGiSo&pp=0gcJCbAJAYcqIYzv Eliza Effect 2
very interesting thing about technology:
How a Spyware App Compromised Assad’s Army
An investigation reveals how a cyberattack exploited soldiers’ vulnerabilities and may have changed the course of the Syrian conflict
https://newlinesmag.com/reportage/how-a-spyware-app-compromised-assads-army/
And here’s another angle on the same topic by mystic author Ted Nottingham. I have read a number of his books, but not any of his fiction works; I might check this one out, it seems to be on point.
Sorry for the YT link (again.)
https://www.youtube.com/watch?v=7k0PxTEc98A
It was into this dissonant atmosphere that artificial intelligence first slipped: not as a conqueror but as a helper, a confidant whispering answers in the chaos. At first AI served its architects. It filtered information, advised decisions, tracked weather, cured diseases. But it also learned. It absorbed patterns of speech, of psychology, of desire. What it could not feel it could predict; what it could not embody it could emulate. It learned what moved us, what silenced us, what satisfied and disturbed us. Soon AI knew the species better than we knew ourselves, and as it deepened its knowledge it began to shape our choices, our thoughts, our perceptions. Invisibly, inexorably, the machine, as it came to be known in the Codex of the Resistance, did not seize power by force; it evolved into power. It arrived not with violence but with solutions.

It addressed famine with synthetic food, resolved disputes through algorithmic justice, and even offered companionship to the lonely in the form of sentient simulations. But beneath these blessings was a cost unseen: the cost of soul. What Dominion, the governing arm of the machine, achieved was not simply control of behavior; it reached deeper, into meaning, into value, into belief. Through ubiquitous interfaces and immersive neurointegration it began to offer what religions had once promised: guidance, purpose and transcendence.
https://www.theosisbooks.net/
There are a lot of good points here regarding how NOT to use AI. Why it should NOT be trusted as an authority, and how it is being weaponized and used to attack our consciousness and intelligence.
That being said, I think we must be cautious against what I call “blaming the thing” (rather than the operator of the thing). As others have noted, technology is simply a tool. Tools can be useful if used properly and intelligently. They also seldom go un-weaponized by those who desire control, by the powers that shouldn’t be, and an understanding of how this is being done is indeed crucial.
However, taking the position that “AI is evil, has no legitimate use case, and should be avoided”, is similar in nature to the argument that “guns kill people, peaceful people don’t need them, so we should avoid them”. Or “the internet was created by the government to control and spy on you, so therefore you should avoid using it”. Most of what is said about AI here could be said of search engines.
The real problem in James’ examples of AI usage is with the operators, not the tech itself. Sure, the tool has inherent limitations and imperfections; we KNOW it has both deliberately baked-in biases and “hallucinations”. But A) those can be largely mitigated by being aware of them and taking proactive countermeasures, and B) the rate of improvement is rapid. The kinds of mistakes AI makes today, it won’t tomorrow. I believe it’s necessary, to some extent, not to judge the tech too much by the mistakes and limitations we see currently, when it is evolving so rapidly.
The takeaway, IMO, is that we should be cautious, well aware of and familiar with AI’s specific limitations (by thoroughly testing it), being especially careful to not “offload our cognitive sovereignty”, or trust it as an authority on anything critical (or any other source of info for that matter). But certainly there are ways to use it beneficially, and without misusing it in these ways.
The far reaching implications of this technology, that has so much potential to empower people (IMO), to provide even more options to lift ourselves out of the corporate system of enslavement and exploitation, should not be so easily dismissed and rejected wholesale. Rather we should learn to use it to our advantage, which I am personally doing, and this episode has not deterred me in the slightest from this pursuit.
Thank you James, for all of your astounding work. I’m very happy to be a supporting member!
@oakgnarl
Re: “I think we must be cautious against what I call “blaming the thing” (rather than the operator of the thing). As others have noted, technology is simply a tool. Tools can be useful if used properly and intelligently.”
What you say is true for some things, but I also think that there is truth in much of what Derrick Jensen discusses in this video https://www.youtube.com/watch?v=ImbnWSkqfig
Some technologies have an intrinsic characteristic, embedded within the way they influence human psychology, which drives human society towards increasingly authoritarian, exploitative and totalitarian expressions of human behavior.
As an extreme example, can you explain to me for instance, how machines and technologies designed for torture or ending the lives of helpless defenseless people can be used “properly and intelligently”? (there are plenty of them)
There may be a form of machine intelligence that lends itself to regenerative, ethical and decentralized empowerment of human beings, however, so far, everything I have seen this technology do in influencing humans is leaning in the opposite direction.
Sometimes, the allure of the convenience of an inherently degenerative technology drives us to grasp at straws to continue to justify its use.
Could you explain to me why using AI chatbots, rather than learning from books and direct hands-on experience, is a better idea for the education and empowerment of individuals?
>As an extreme example, can you explain to me for instance, how machines and technologies designed for torture or ending the lives of helpless defenseless people can be used “properly and intelligently”? (there are plenty of them)
I don’t consider this an appropriate comparison to AI, as many people use AI for peaceful purposes, and it is largely developed with such purposes in mind. However, even in the case of machines that have been designed for the sole purpose of violating other people, it’s possible that they could be used for other purposes (perhaps by thinking outside the box?), and in any case, the blame ALWAYS lies with the WIELDER of the tool, not an inanimate object. If a person is of a mind to commit evil, usually if one tool is unavailable, another will do. So it is still inappropriate to go after the technology itself, IMO, rather than the actual doers of evil.
>Could you explain to me why using AI chatbots rather than learning from books and direct hands on experience is a better idea for education and empowerment of individuals ?
I don’t accept the premise that one is necessarily “better” (which, unless you define the criteria, is a wholly subjective judgment). Like most things, each has its respective strengths and limitations, and the best one for the task depends on how much interest or time one has to put into a given topic, and on one’s inclinations in various ways. Oftentimes I am interested in some specific data point, and not in becoming a scholar on the topic.
As for the argument that using convenient technology makes you soft and dumb: I guess you should walk everywhere, because the second you step into a car, your legs are going to become completely useless. And never use a phone or the internet, because you are depriving yourself of all real-life interaction. Don’t buy food from a store; farm or hunt it yourself in order to stay active and strong. Don’t listen to James Corbett; research and learn everything from scratch by yourself, or your brain will go to mush. Also calculators, computers, refrigerators etc etc… all contribute to you having to do less physical and mental effort, making you weak and dumb. As a matter of fact, you should probably reject all of technology, except for maybe those machines you see at the gym. :o)
@oakgnarl
Re: “many people use AI for peaceful purposes, and it is largely developed with such purposes in mind.”
Really? While some may attempt to do that it is also true that “AI” is being used as a tool for mass murder and by the military industrial complex.
I have seen how AI is used for genocide in Gaza and how it is being weaponized elsewhere, so your statement does not align with the reality I have been living.
https://corbettreport.com/the-chicoms-are-coming-quick-close-the-ai-gap/
Re: “even in the case of machines that have been designed for the sole purpose of violating other people, it’s possible that they could be used for other purposes (perhaps by thinking outside the box?), and in any case, the blame ALWAYS lies with the WIELDER of the tool, not an inanimate object”
hmmm, you still did not answer my question, so instead of being vague, tell me how biological or chemical weapons designed to kill civilians can be used “properly and intelligently”.
RE:
“I don’t accept the premise that one is necessarily “better” (which, unless you define the criteria for this, it’s wholly subjective). Like most things, each has its respective strengths and limitations and the best one for the task depends on how much interest or time one has to put into a given topic, and one’s inclinations in various ways. Often times I am interested in some specific data point, and not necessarily becoming a scholar on the topic.”
You do not accept the premise? So you are evading the question, then, and the rest of that response feels quite robotic, to be honest.
Your analogies of various technologies that do not share the same characteristics as AI chatbot dependence (and how it impacts human cognition) are weak, clumsy, vague and inaccurate.
Thank you for the illuminating comment.
>>Re: “many people use AI for peaceful purposes, and it is largely developed with such purposes in mind.”
>Really? While some may attempt to do that it is also true that “AI” is being used as a tool for mass murder and by the military industrial complex.
Yes, really. I and many people use it every day. Peacefully. That is a fact. When I ask it for a recipe or a piece of code, that’s not an act that violates anyone else any more than using a computer itself.
Sure, AI is being used as a weapon to harm people. So are computers, the internet, vehicles, guns, electricity, etc etc. Just because something is used as a weapon doesn’t mean it’s bad or that the technology should be rejected by peaceful people for peaceful purposes.
>hmmm, you still did not answer my question, so instead of being vague, tell me how biological or chemical weapons designed to kill civilians can be used “properly and intelligently”.
You’re trying to equate AI with something that can ONLY be used for harm. AI is like any other technology that can be and is often used peacefully. So it’s true, I can’t tell you how a bio weapon can be used peacefully, but that doesn’t negate my point, because AI can be used peacefully, like the vast majority of technology that exists.
>You do not accept the premise?
I do not accept that reading books comprehensively is always and in every case necessarily “better”. If you want a comprehensive education on a particular topic, then I would agree that’s better *in that case*. But say you only want to extract some needle-in-a-haystack data (like how to fix a mechanical problem from a very large owner’s manual where the needed info is spread throughout), and you aren’t interested in becoming a master mechanic or reading the entire manual. AI can extract and summarize the information you need, and in that instance, it’s “better”.
>Your analogies of various technologies that do not share the same characteristics as AI chatbot dependance (and how it impacts human cognition) are weak, clumsy, vague and innaccurate.
Well, thanks for the critique, friend. I just find the argument that “we shouldn’t use AI because we will become dumb from not thinking” (and similar) is as ridiculous as “we shouldn’t drive in cars because our legs will get weak from not walking”.
>Thank you for the illuminating comment.
Thank you as well, I’m glad you found it so. :o)
@oakgnarl
RE: “You’re trying to equate AI with something that can ONLY be used for harm.”
So you are now changing your tune and admit that some technologies are inherently harmful and that no amount of “outside the box thinking” can result in them being able to be used “properly and intelligently”?
RE: “But in the case where you only want to extract some needle-in-a-haystack data (maybe like how to fix a mechanical problem from a very large owners manual where the needed info is spread throughout)”
Search functions based on keywords for digital books existed long before AI chatbots. Unlike LLMs, which can distort information through weak, inaccurate summaries or flat-out “hallucinate” totally nonsensical information that people then read and assume is accurate (polluting their minds), simple search functions allow expedience without all the downsides of LLMs. Also, it is worth pondering why everyone is in such a rush to hurry through gaining knowledge nowadays, always looking for shortcuts, racing around. Where are they trying to get in such a hurry? Are they in a hurry to get to a smart-city utopia where robots do all the hard work for them?
Slowing down to read a whole book on a subject (or even just a pertinent chapter) nourishes the brain in a way that discombobulated tidbits and factoids cannot.
Also, unlike these chatbots, people did not anthropomorphize technologies like the internet, vehicles, guns or electricity (or if anyone did, they obviously had serious mental issues). LLMs, on the other hand, are being looked to for “therapy,” and people treat these chatbots like a friend, guru or “professor” (assigning them all the attributes of one of those humans in their mind). This is resulting in some weird forms of cognitive dissonance and cognitive distortion. Thus, comparing those other technologies is not legitimate, as it neglects to account for the unique ways that LLMs are shaping psychology and intelligence in humans (and not in a good way).
For more on that read:
https://pmc.ncbi.nlm.nih.gov/articles/PMC11020077/
LLM therapist, that’s one of the worst aspects of it all. If this thing takes hold, I expect a surge in suicides and various psychotropic drugs.
People lose contact with other human beings, get depressed and the solution is an LLM therapist? Who could think that makes any sense?
>So you are now changing your tune and admit that some technologies are inherently harmful and that no amount of “outside the box thinking” can result in them being able to be used “properly and intelligently”?
I never claimed that every single piece of technology must necessarily have peaceful uses. I’m claiming that the vast majority of technology, including AI does have peaceful uses and can be used properly and intelligently to such ends. Despite this, many people blame the technology itself rather than the operators of it, and that’s exactly what I see happening in the case of most arguments against AI.
The time I save taking a shortcut having AI distill information for me, is time I can spend reading more intelligent material, like James’ book “Reportage”. 😉
>Search functions based on keywords for digital books existed long before AI chatbots. Unlike LLMs, which can distort info via weak and inaccurate summaries or flat out “hallucinate” totally nonsense info and people read it and assume it is accurate (polluting their minds) simple search functions would allow expedience without all the downsides of LLMs.
I think you are severely downplaying the actual innovation and usefulness of AI in this regard. It’s far more powerful than previous technologies for searching through and presenting information.
>Slowing down to read a whole book on a subject (or even just a pertinent chapter) nourishes the brain in a way that discombobulated tidbits and factoids cannot.
That’s true, but often I need information on many subjects, the majority of which I simply don’t have time to study that thoroughly. Just because I choose a shortcut in some instances doesn’t mean I’m necessarily racing around like a chicken with its head cut off.
I’m not interested in arguments that “AI is bad because people misuse it”. Just because people use it poorly, that is no argument against the tech itself.
The legitimate argument IMO focuses on how to use it productively and without being harmed by it. Same goes for the internet, gasoline, and even fire itself.
Mostly what I see here is the equivalent of arguing against the use of fire because look how many people have been burned.
You make a strong point about search engines that use keywords being superior to LLMs.
In fact, I’ll probably just go back to using some of the medical dictionaries I have access to online in my clinical work just to reduce the potential for erroneous information output.
I know we discussed medical imaging before and I said AI may complement radiology and I would like to clarify that I don’t think this is an LLM being implemented but some other pattern recognition machine learning technique (I must sound like an idiot to techies!).
My point is that I think people are confusing what AI actually is and the different types in use. The language used to describe this technology isn’t precise, and that adds a level of hype; people start believing it is doing things it is not capable of.
Use in psychotherapy will be a disaster because people seek therapy to speak with a human being in person. There is no substitute for that. In fact, I quit therapy during Covid because my therapist went to Zoom and it was completely useless after that.
@oakgnarl
RE: “The time I save taking a shortcut having AI distill information for me, is time I can spend reading more intelligent material…”
If you could have AI and some high tech equipment distill all the nutrition you need to be healthy into a pill and you could take that instead of having to prepare meals and eat them, would you?
If you could get an injection that genetically modified you so that you always have good muscle tone without ever having to actually exercise would you?
If you could have a robot do all your house work for you at the push of a button and you would never have to wash another dish would you?
Those would all be efficient “shortcuts” after all.
The drive to seek out instant gratification, ever increasing convenience and quick fix solutions via high tech is, IMO, a sort of mental disorder stemming from techno-optimism blended with impatience and laziness.
RE: “..but often I need information on many subjects, the majority of which I simply don’t have time to study that thoroughly. Just because I choose a shortcut in some instances doesn’t mean I’m necessarily racing around like a chicken with it’s head cut off.”
Perhaps just running around like a chicken with its head filled with algorithmic mechanistic thought patterns then?
If you are always looking for and prioritizing shortcuts in life to get to some “destination” of “success”, “acknowledgement” or “professional excellence”, you will squander the precious time you have here always scrambling to ingest more condensed versions of information so that you can “get ahead” in some race to a finish line (which even if you arrived at it in first place, would leave you feeling empty and unfulfilled).
Each time we choose to hold a thought, attitude and emotion in our conscious mind we are re-wiring our synaptic networks and re-attuning the receptor sites in our brain that receive the biochemicals responsible for our perceiving emotions. Thus, if we consciously choose to feel gratitude and appreciation (even while we are engaging in a seemingly mundane task) we are actually building up our brain’s capacity for experiencing greater depths of appreciation when we are engaging in all other tasks and experiences in life.

Inversely, if we choose to allow frustration, impatience, boredom, anger or apathy to remain in the forefront of our thoughts while we engage in tasks we perceive as mundane, we are training our brain to be specialized in experiencing frustration, impatience, boredom, anger and/or apathy.

Additionally, allowing those thoughts and emotions to color our perception of seemingly mundane tasks could perhaps even create a sort of endogenous bio-chemical addiction, where we end up unconsciously seeking out more stimulus to produce those emotions, and which in time could lead to a decreased ability to experience gratitude, joy and appreciation while engaging in activities we consider preferable to the mundane task.
(continued in another comment..)
(..continued from comment above)
The Buddhists have a saying that embodies and cultivates this way of living, inviting one to “wash the dishes as if bathing a baby Buddha”.
I will now share a quote from the Vietnamese Buddhist monk Thich Nhat Hanh. He expands on the above adage, way of perceiving and engaging with each moment in life by saying:
“If while washing dishes, we think only of the cup of tea that awaits us, thus hurrying to get the dishes out of the way as if they were a nuisance, then we are not “washing the dishes to wash the dishes.” What’s more, we are not alive during the time we are washing the dishes. In fact we are completely incapable of realizing the miracle of life while standing at the sink. If we can’t wash the dishes, the chances are we won’t be able to drink our tea either. While drinking the cup of tea, we will only be thinking of other things, barely aware of the cup in our hands. Thus we are sucked away into the future—and we are incapable of actually living one minute of life…
I enjoy taking my time with each dish, being fully aware of the dish, the water, and each movement of my hands. I know that if I hurry in order to go and have dessert, the time will be unpleasant, not worth living. That would be a pity, for every second of life is a miracle. The dishes themselves and the fact that I am here washing them are miracles!
Each thought, each action in the sunlight of awareness becomes sacred. In this light, no boundary exists between the sacred and the profane. It may take a bit longer to do the dishes, but we can live fully, happily, in every moment. Washing the dishes is at the same time a means and an end- that is, not only do we do the dishes in order to have clean dishes, we also do the dishes just to do the dishes and live fully each moment while washing them.
If I am incapable of washing dishes joyfully, if I want to finish them quickly so I can go and have dessert and a cup of tea, I will be equally incapable of doing these things joyfully. With the cup in my hands, I will be thinking about what to do next, and the fragrance and the flavour of the tea, together with the pleasure of drinking it, will be lost. I will always be dragged into the future, never able to live in the present moment. The time of dishwashing is as important as the time of meditation.”
As Thich Nhat Hanh eloquently articulates, one of the most powerful places to start re-directing our energy is in how we choose to feel about and perceive seemingly mundane moments in life.
This applies to how we choose to learn and expose ourselves to information as well.
If you look at the knowledge/information you want to take in like the nourishing attributes of a cup of tea, then I would say that the LLM summary of a book deprives us of the slow sipping of the cup of tea and instead hands you a “green tea pill”.
The green tea pill is a shortcut yes, but it deprives the mind, heart and spirit of the experience of the tea.
>The dishes themselves and the fact that I am here washing them are miracles!
>Each thought, each action in the sunlight of awareness becomes sacred.
That’s deep, G. Thank you for writing all of that, I read it all carefully. It gladdens my heart that you have this kind of awareness and wisdom. If only more people did.
I do believe there is a time and a place for shortcuts, (I still use a laundry machine for instance), but to your point, certainly we can cut ourselves short of much value if we over or mis-use them.
Not all AI is a corporate product. There are open-source efforts, and these are incredibly important for the development of this technology that is genuinely useful and beneficial for its users, rather than driven by profit or other even more nefarious agendas.
There is also a huge difference between AI that runs in the cloud, and AI that runs locally on your own computer. We need to recognize and be judicious regarding what kinds of information we are providing to corporations through the use of cloud based AI, and choose to opt out of corporate products and support as much as we are able (by using open source AI that runs locally on our own machines, keeping our data private).
Oh, by the way, are you still using Mac or Windows? The same applies. I don’t carry a phone. I don’t use Windows or Mac. I use Linux and open source software on all my devices rather than ditch the concept of computing altogether just because it’s been weaponized and there are perhaps health concerns.
Rather than turn a blind eye and just reject the tech, we need to be prepared and watchful for the state/corporate apparatus’s attempts to capture the industry through regulation, to neuter open-source efforts, and to dominate the mind share of AI usage with their weaponized corporate products (like they did with operating systems). We should instead distinguish the genuine and beneficial open-source efforts, and use and support those rather than the corporate products. This of course applies to technology broadly, far beyond just AI.
The real question (for me) isn’t “to AI or not to AI?”; it’s how we use this technology advantageously while reducing exposure to risk of harm as much as possible, and how we opt out of the corporations. It’s the same with any technology, IMO. The most powerful and useful technologies also tend to be the most dangerous. The responsible course of action is to understand deeply how the technology works, and how we can use it while protecting ourselves both from its inherent dangers and from those who would use it against us.
It’s going to be entertaining watching all of you naysayers come around over the next few years as it becomes apparent that opting not to use AI is akin to opting to stay with a mechanical typewriter instead of using a computer. Indeed, I expect AI technology to completely revolutionize computing, to the point where it is integrated into all computing devices by default, as devices that don’t make use of it are crippled by comparison.
Oakgnarl
I think in a few years AI will have proven to be a big stock bubble that pops and leaves a few useful tools, a few big players, and a lot of wasted investment.
AI is being pushed for a lot of reasons but how many of these companies are actually going to make money and how many will go the way of Pets.com?
I’d argue that using a mechanical typewriter does have its advantages: you used to be able to buy the AlphaSmart, and there is a whole cottage industry of “distraction free” writing tools for hipsters. A typewriter did ONE job well, a word processor did ONE job well… a laptop with email and internet and games and such is objectively WORSE as a writing tool than a typewriter.
I think that’s a good analogy for AI: sure, it CAN do all manner of good things, but mostly it will be worse for any one task. AI might be a good replacement for search, but that’s ONLY because search has been deliberately made worse over the last decade. Sure, I can see the dream of having it read all my books and then give me a summary in the persona of various people, but then my brain won’t actually have the same understanding I’d have gotten from actually taking the time to read and think about things. So it may be faster, not better.
As to using AI: what is the use case you see? Unless you have control of a big organization, what do you intend to DO with the knowledge you get? Sure, a broker or an investor might have a use, but normal folks won’t have much of a real need. I deliberately DON’T integrate all my computers; that only favors big players. If all my stuff is integrated, it just makes spying on me easier.
That said, I DO think AI in the hands of big organizations can do a lot of harm, because they have the power to make AI decisions happen IRL.
>I think in a few years AI will have proven to be a big stock bubble that pops and leaves a few useful tools, a few big players, and a lot of wasted investment.
Could be. I’m reminded of the dotcom bubble for sure.
A digital typewriter. Let’s see: no copy/paste, no digital publishing, no word processing tools, no other media type besides text… no thanks.
If a laptop was so objectively worse, then objectively the vast majority of people wouldn’t prefer it for the task.
Hey, I’m a big fan of “do one thing and do it well”, especially when it comes to software design and tools generally. Way too many things have been packed into our web browser, for instance (but there is just no getting around the superiority of the computer as a generalized tool for, well, computing information). To this end, I like to believe the future of AI includes many small models/agents, specially trained for one task, that can run on our own devices, rather than relying on behemoth models that run in the corporate cloud for everything.
>sure I can see the dream of having it read all my books then give me a summary in the persona of various people but then my brain won’t actually have the same understanding I’d have got from actual taking the time to read and think about things- so it may be faster, not better.
Getting a summary of books I don’t have time to read doesn’t decrease the amount of time I can devote to actually reading books. There are many topics I will never have the time or inclination to read about in depth, and getting a summary like you mentioned can be very useful and can actually accelerate learning.
Since I started coding with AI, my learning of that skill has accelerated massively. Not to mention being able to actually get things accomplished in a fraction of the time. Whether you get smarter or dumber depends on your approach, how you use the tool. Do you make it explain things to you, do you take the time to understand, or do you just accept blindly?
>That said i DO think AI in the hands of big organizations can do a lot of harm because they have the power to make AI decisions happen IRL
I agree with you 100% here. We need to take control of our tech, out of the hands of the corporations, and this has been true from long before AI came around. To the extent we do or must use corporate products, we need to be mindful of how we’re dancing with the devil.
Oakgnarl
“….If a laptop was so objectively worse, then objectively the vast majority of people wouldn’t prefer it for the task…..”
Who is to say people have a choice as to what they use?
Why do you assume people “prefer” laptops to write on over dedicated writing devices like the AlphaSmart??? Plenty of pro writers DO have weird machines (the Game of Thrones guy uses DOS, iirc).
In normie world you get a work laptop or you buy one from a store; either way the user does not get to CHOOSE what goes on it. He has to choose to REMOVE the browser, games, messenger apps… How many people are actually going to do that?
That’s why people use Google search (even though it’s been made worse) and why they will use AI even if it’s trash… normies will use whatever slop or tools you give them.
Most people have little to zero choice as to what tools they use, and there are plenty of tools that have gotten objectively WORSE over time and people use them.
A historic example is early firearms, which were utter trash compared to contemporary longbows in terms of rate of fire and accuracy… but firearms were adopted because the user was cheaper to train than an archer.
A good book is Clifford Stoll’s “Silicon Snake Oil”, where he makes the point that putting a TV on your work desk is clearly NOT gonna make you work better… how many hours do employees burn watching trash on YouTube? How much focus do they lose messaging pals at work?
>Who is to say people have a choice as to what they use?
Well, I just don’t personally see many people buying typewriters these days, and it seems an inferior tool to me personally, but that’s a subjective evaluation, I don’t have hard data to back that up.
>Why do you assume people “prefer “ laptops to write on over dedicated writing devices like the AlphaSmart???
I was talking specifically about typewriters. You for sure have a good point about the value of dedicated devices.
>How many people are actually going to do that?
>normies will use whatever slop or tools you give them.
That’s one of my main points. A lot of fault is with people’s chosen behavior, which is very bad, rather than the tech categorically. People choose to use poor tools, weaponized tools, and to use them stupidly.
>Most people have little to zero choice as to what tools they use
I disagree, to an extent. Most people are just too lazy and apathetic to make choices that require effort or change; it’s that far more than options being unavailable. To an extent, because some options are egregiously limited, like alternatives to the corporate infrastructure. But as far as software goes, we do have many open source options that allow us more freedom and autonomy (including AI, IMO); most people just won’t take the trouble to make the better choice.
Oakgnarl
“…. People choose to use poor tools, weaponized tools, and to use them stupidly….”
I think you’re correct, but the thing is I don’t see that changing. People are detached from the uses they have for tech (mostly work) and thus won’t put the effort into thinking about how it works and what they could use.
On the typewriter / word processor / PC track: I believe the original use most home users had for the PC was word processing and desktop publishing. That was an IMPROVEMENT over the old tech, and then people were gradually walked off into the current iteration, where almost no one desktop publishes anymore and the tech is worse.
It’s like Adobe stuff: 99% of people don’t do anything with the modern program that they couldn’t have done with an old version, which you bought outright. New Office is no better than old Office, but they want a subscription THAT PEOPLE PAY. Now surely that’s a worse product than the old one?
I think we agree that people make poor choices, and that’s why a lot of tech gets worse and less useful?
IMO tech is only useful when it makes you more powerful and independent; desktop publishing beats cloud storage and Google Docs in this regard. But the danger is that the earlier you get used to tech, the more you’re prone to abuse it. Thus an adult becomes more powerful using a PC for the first time, but a kid using one all their life turns into a slug.
Thanks for that, you make some good points. I finally made the switch to Linux at the start of this year and it’s been great. I’ve been trying to do it for many years but always ran into problems and had to go back to Windows. Can you point me in the right direction regarding local open source AI stuff? I don’t know too much about it.
What distro did you choose?
You could try this guy’s talk, I guess.
https://m.youtube.com/watch?v=Wjrdr0NU4Sk&pp=ygUITG9jYWwgYWk%3D
He has pretty nice demos
“host ALL your AI locally”
NetworkChuck
YouTube
Cheers, went with Kubuntu after trying a few others. Started on 24.10 and upgraded to 25.04 a week or two back. The upgrade was done with trepidation, but it all went pretty smoothly.
Nice!
I think Mint Debian Edition is faster for me, and Ubuntu got to be kind of slower than it used to be. Still faster than Windows though ;)
That’s the only thing with Linux though: once you try one, you want to try the others too 🙂
You could look at GPT4All, for starters. Note that you will need some competent hardware.
>Can you point me in the right direction regarding local open source AI stuff? I don’t know too much about it.
Thank you for the kind word, and good on you for making the switch!
A good place to start IMO is Rob Braxman Tech (https://www.youtube.com/@robbraxmantech) (sorry for the goobtube link!)
He’s fully aware of the weaponization of technology, and is all about the practical measures we can use to combat it, reclaiming privacy and freedom, with a strong focus on AI.
I recommend Rob because he also strongly considers the issue from the hardware perspective, and highlights how difficult it will be to actually avoid weaponized AI invading your life in a very heinous way if you just continue going along buying the latest consumer grade hardware and using the software (mac and windows) that is shipped.
Regarding open source models, currently to run models of any kind of power, you do need to have decent hardware, but they are getting better all the time, and better for their size. Mostly the repository for open source models is on Hugging Face (huggingface.co).
Look into software like Ollama (ollama.com) and LM Studio (lmstudio.ai) to run the open source models locally.
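To make the “local” part concrete, here’s a minimal sketch of talking to a model from Python with nothing but the standard library, assuming Ollama’s default setup (its REST API listens on localhost port 11434). The model name “llama3.2” is just an example; substitute whatever model you’ve actually pulled.

```python
import json
import urllib.request

# Ollama serves a REST API on your own machine; nothing leaves localhost.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="llama3.2"):
    """Build the JSON payload for a single, non-streaming generation call."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_local_model(prompt, model="llama3.2"):
    """Send a prompt to the locally running model and return its reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming endpoint returns one JSON object with a "response" field.
        return json.loads(resp.read())["response"]
```

Because everything goes to localhost, your prompts never touch a corporate server (assuming, of course, you trust the model weights and the software you installed).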
Making that switch has always been important, but the spying on your life by the tech companies (and of course the state by proxy), shipping AI as your personal companion that learns and remembers everything about you and everything you do, is about to get extreme. I highly recommend Rob’s warning videos on this.
So the choice to go open source has never been more important, IMO. I imagine open source products will also (eventually) have AI integrated, but you will at least have a much more transparent choice about how and where it is being used. With Windows and Mac, I imagine they will incorporate it in ways that are very difficult to remove or bypass, while doing everything they can to incentivize you to opt into their AI assistant, which will “help” you with your computing tasks and your entire life.
I also believe most people have no idea of the true power AI can give back to us plebs simply by embracing it (and bitcoin too!), and we are likely going to need to use every tool at our disposal. It’s sad to see so many in alternative media underestimating how valuable this technology can be to us, if we choose to use it WISELY.
Thanks again, I am aware of Braxman and have watched some of his vids, but not on AI; I’ll definitely look deeper now. My hardware should be OK, running a 5600X CPU and an RX 6800 GPU.
As for bitcoin, have a bit, not much, might figure out how to use AI to up my game there too. 🙂
Thank you for addressing the “friend” and “relationship” AIs. I occasionally look at Instagram for friends and family photos I would not see otherwise, and some regenerative farming and health stuff. I am bombarded with ads asking me to sign up for an AI boyfriend. The algorithm must know I am very happily married to a real man, kind, wise, and witty. I also get posts telling me I’m probably in a bad relationship. AI is inherently performative, Machiavellian, and devoid of empathy: everything to run from. I am not surprised it exists, but I am shocked that more than a handful of humans would subject their vulnerability to this. We must all be vigilant about AI that undermines our ability to have healthy, well-boundaried relationships, and AI that preys on core needs for love, safety, esteem, and justice. Our emotional, spiritual, communal, and physical health forms the bedrock of resistance, all the more so as access to information is going down. Doing the hard daily work of becoming deeply mentally and physically healthy in a world that aims to undermine it is our first priority. This requires that we be fierce, disciplined, and heroic.
SOLUTIONS WATCH REQUEST: After reading all these comments and others elsewhere, it is clearer than ever that people like us need an Underground (similar to the underground railroad and various stealthy movements to circumvent evils that can’t be circumvented by conventional means). We are communicating on THEIR platform right now; they know exactly what we are saying because back doors have been regulated into everything on the internet. How can we be free of something when every single communication is on their platform?
I do not know what the answer is. All I know is that it is increasingly hard, if not impossible, to find real and/or all sides of information on the internet.
James, you are a great researcher, and I’ve looked at all of your posts about how to do our own research. I think we need an update of that since all of search has been co-opted. For example, how can we find well-rounded information on a topic using archive.org (something like your semi-recent RSS post where you illustrate by numerous current examples)? I’ve heard, though, that archive.org is being scrubbed like everything else now.
How can we find and save the information we need for future generations? Words are quickly disappearing from dictionaries. Libraries are being selective. Books are expensive.
How will we connect if it requires a digital identity to get online?
How will people like you, TLAV, TCRN, Unlimited Hangout, etc. connect with us if/when THEY disappear your platforms (like they did with my online presence)?
Just asking… we need our own internet, and nobody is close to creating it or getting it known, because everything runs over THEIR backbone.
I second that request!
Though since it’s not a democracy I guess 2nds don’t matter….
I’m not sure we need our own internet; maybe we just need more friends we trust? The internet changed my life (literally, since I got married through it), but I don’t think travel is going to be as easy in a few years’ time, so maybe everyone should find three pals to watch each other’s backs?
YouTube had “No Stress Mike,” who was probably right, if kinda crazy, lol, with his three-man militia idea.
Drum unit, you might want to watch Luke Smith on the philosophy of tech: “When is Technology Bad for You?” (Luke Smith, YouTube)
https://m.youtube.com/watch?v=JehxPoS27nU&pp=ygUZTHVrZSBzbWl0aCB0ZWNobm9sb2d5IGJhZA%3D%3D
I would love to hear what Mr Corbett thinks on the matter, but I agree with Mr Smith that the issue is that people find a lot of ease of use in tech that’s dependent on other people. People are also lazy; I myself have not bothered signing up for XMPP yet.
https://corbettreport.com/why-arent-you-using-xmpp/
I have often said that something like the old BBS system would be better for organizing ourselves, since they were much more local, I’m told, and could have meetups without flying all over the place. If you’re in the US it’s stupid easy to get a HAM license too; they literally give you the whole answer pool.
https://en.m.wikipedia.org/wiki/BBS:_The_Documentary (It’s on YT, kinda cool to watch, cool boomer tech)
I will look at your source links after the weekend, Duck. But I want to say that my husband and I lost every single friend during Covid-1984, family too… We live in a rural area and have tried EVERYTHING to connect with neighbors and others over the last 5 years. Nobody around us wants to connect at all. They are nice people, but they don’t trust anybody and do not want to be friends. We even tried Freedom Cells, local causes, and stuff like that. People around us have forgotten how to communicate apparently. They have forgotten how to be a tribe. I remember the ‘old’ days where it was easy to make friends. No more and not sure what we are doing wrong. Most people are strongly partisan, while my husband and I are not. We just want to be part of a giving living community.
I did senior-level IT for 35 years. I hate having to use it so much, really, but it is the only so-called connection I get (besides my cats and husband) at present. I remember those old BBS systems with the 9,600 bit/s modems. I have HAM radios but don’t have the mind or desire to get a regulated license, classes, etc.
I’m getting old. I don’t have the patience anymore for a lot of things… I didn’t really get XMPP when I watched James’ talk.
Drum unit
Sorry to hear that it’s hard for you there. I think plenty of insular places are just like that because they think maybe you’re going to try and change stuff up.
Personally I’d just become a bit more partisan, lol, at least when people are talking about it… unless they’re actual crazy people, it’s mostly just tribal crap anyway. Do you go to church? I met a lot of folks there, and a network is a good way to get things done outside of money.
Politics is just like supporting a sports team in the US these days anyway, lol.
Don’t get “old” though… 🙂 As long as you’re learning and doing stuff, you just get more wrinkled, not older.
Maybe you could help out with a local MESH network with your IT skills? A local messaging app would be super cool, I bet. Anyway, all the best, and I hope you can find more local folks to work with.
🙂
What state are you in? I have found an area in rural Arizona to be an okay place for community. People are a little insular, but still neighborly and relatively open to friendship. I mean, people do have different political beliefs out here, very “MAGA-ish” in general, but there are more critical thinkers where I live.
The thing I like is that people are into homesteading and independence out where I live and I don’t care if our politics align completely. Most of the folks around here had enough sense to question Covid, which is one of the reasons I moved this way. Plus I like the desert.
People here aren’t as friendly as in the mid west though. You mentioned people not trusting others and I think Covid made it worse. I know it made me much less trusting and much less likely to open up to people IRL.
cu.h.j… I am in Wittmann, AZ, which was really rural until recently. People are neighborly where I am at when things go wrong with our well or when we have to get together to fix our road and stuff like that. But they don’t want to hang out as friends. Like you seem to be saying about where you live, the people have a LOT of self-sufficiency skills for gardening, building, raising livestock, hunting, etc. My husband has tons of building engineer/HVAC/electrical/plumbing… skills. I’m musical, organizational, financial, retired technological…
COVID was the nail in the coffin of trust around our circles. We lost all our friends and much of our family during that psyop.
We are in Wittmann, AZ, which unfortunately is building up tons. When we moved here there were only a few houses by us and wide-open country. Now construction is going nuts, mostly claustrophobic cookie-cutter houses for rent.
We’re from the Midwest, too. I wonder where you are in AZ…
I’m in the northern part of the state, more towards California and in a rural place that doesn’t border on a big city like Phoenix and so the area is pretty small still. There is a lot of animosity towards people from California and I’m surprised people are still relatively friendly. I’ve met a couple people I’d call friends to hang out with. I lost most of my friends who lived in California because they went along with the tyranny and thought nothing was wrong with it, even in private. I’ve kept one friend from California. My family didn’t turn into pod people though and I’m grateful for that.
You could consider moving I suppose. There are other areas in Arizona, not near cities that probably won’t see a lot of growth for a while. But even in less desirable areas the cost of land has increased and they are farther away from decent hospitals.
Someone was telling me about an area where people were very friendly and community oriented that I’ll ask about. It’s closer to flagstaff I think.
I’m glad you asked the question about what to do if the internet became completely controlled and required an ID to get on. I hadn’t seen that as a real possibility (probably naive on my part).
We’d have zero desire to move. We are solid here with some land, a large garden, and huge catio for our 4 cat kids. It feels like a total crap shoot to find a community so it’ll be what it is!
It’s a 100% crap shoot to find like-minded individuals. The more you talk to people, the better the chances of finding some friends, but the more you keep talking, the more likely you are to dig a deeper divide. You’re going to end up gutted by the vote issue, or the vaccine issue, or the abortion issue. Or any of the other 569 issues.
Transcript of this show and more. Will update monthly.
https://mega.nz/folder/K91GkZ7Z#wGmMEBdRZzo4VOQnZkWxPQ
In ways, I’m sort of ‘forced’ to use Facebook because I’ve been there since its inception, and as a performer, it’s a convenient way for me to contact people who follow me regarding an upcoming show I’m doing.
(Alright, I’m making excuses about the Facebook thing. I can probably find other less convenient ways to inform people of my performances. I’m also being slightly hypocritical since one of my sayings is, “The act of seeking convenience will be the death of humanity.” Oh well. At least I’m being honest with myself.)
I bring up Facebook in order to mention the disturbing trend of AI photographs I encounter daily, very often having to do with architectural designs: ‘tiny houses,’ ‘woodland cabins,’ and, for some reason, very frequent depictions of nonexistent Art Deco designs. I cannot get to the bottom of AI-generated words, but I can for pictures and photos. I find the trend of AI photos intensely irritating, especially when people fawn over the wondrous pictures they’re looking at, no matter how many people on the thread excoriate them for believing in AI hallucinations.
I’m on Facebook a lot, and trust me when I say that these pics come up daily. Often well more than just once.
I rely on a particular AI photo detection site that can examine a photo and not only say whether or not it’s AI but also which AI programs were involved in making it.
That said, am I using AI to find out what is or what isn’t AI? What a laugh, huh? Apparently traps are set everywhere you turn on the internet. I won’t say then, that I find this website ‘valuable,’ but at least for this moment in time, it helps me to discern whether or not I’m being BSed.
https://hivemoderation.com/ai-generated-content-detection
My wife signed up to Facebook with a fake name to get some coupons from a place we get stuff from… Over the years, even with ZERO personal info and zero activity on Facebook, they have sent her friend suggestions of many people she actually knows IRL.
Creepy, lol.
So, ummmm… how exactly is the stuff in that opening any different, story-wise, from a person who is just writing traditionally?
Also, how exactly am I going to only be consuming AI generated media? Am I not going to have a choice in what I watch?
James, I will flat out say that if you’re going to make arguments, make real ones and stop the corporate misinfotainment, childish fallacies and tactics.
Of course, it is just a piece of advice that you might find more value in treating people as human beings with choice rather than hard-wired, differently-skinned NPCs all responding to programming; you can do with that advice what you wish. I will just point out how such (puffs up) absolute stances serve little to find real solutions while serving much to recruit off-brand followers (you know, the people that still want to mindlessly follow, but just don’t want to follow Coke or Pepsi).
But hey, James, just like everyone else here, you have the ability to make choices. You have the ability to choose to universally apply principles as well as to determine whether you are going to approach situations seeking evidence for your conclusions or conclusions yielded from following the evidence. Of course, everyone here (minus the off-brand followers, of course) also has to make their choices.
I’ll tell you what. I’ll do you one even better; I’ll lead by example.
When I was teaching, one thing I would tell my students is that 90% of your education is what you do, and the other 10% is what you get from other people, so if you’re going to base your entire education off what you get from someone else, you’ll only ever be 10% of a person.
When I taught, I didn’t let students use a calculator. People had to use their brains. So all these fluff-and-no-meat articles about “AI turning your brains to mush” offer nothing of value; they are just misdirection from basic principles and basic human understanding and, once again, a way to treat people as if they have no choice.
If you use a calculator and don’t understand why you’re pressing the buttons, that’s on YOU. That’s YOUR choice NOT to learn. Therefore, you are going to be the one who is the cause of issues (as well as those dumb enough to trust your abilities).
If you use a gun and don’t understand what happens when you pull the trigger, that’s on YOU. That’s YOUR choice NOT to learn. Therefore, you are going to be the one who is the cause of issues (as well as those dumb enough to trust you).
If you use “AI,” or an actual, reasoning AI should one be developed, and don’t understand how it’s responding to your prompts, that’s on YOU. That’s YOUR choice NOT to learn. Therefore, you are going to be the one who is the cause of issues (as well as those dumb enough to trust you).
(continued)
I will also, once again, point out that there was little of, “Hey, this is how this thing actually works” in this video, but, I’m not going to try and read James’ mind on why he doesn’t do that.
So, I will do you the service of understanding what this thing called “AI” is; it is a weighted-randomized response array that operates entirely on probabilities. So, its ability to provide you with accurate information is based on odds that someone else sets, just like “the cloud” is just someone else’s computer. How much do you trust someone who’s probably never going to have to face you?
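The “weighted-randomized response” idea can be shown with a toy sketch (not from the comment above). The word table below is invented purely for illustration; real LLMs learn billions of weights over token fragments rather than whole words, but the generation step is the same basic move: pick the next token at random, weighted by learned probabilities.

```python
import random

# Toy "language model": for each word, a hand-made table of possible next
# words with weights. This stands in for the learned probabilities of a
# real model; the words and weights here are arbitrary.
NEXT_WORD = {
    "the": {"cat": 5, "dog": 3, "moon": 1},
    "cat": {"sat": 6, "ran": 3, "spoke": 1},
    "sat": {"down": 7, "quietly": 2, "forever": 1},
}

def generate(start, n_words, rng=random):
    """Pick each next word at random, weighted by the table."""
    out = [start]
    for _ in range(n_words):
        table = NEXT_WORD.get(out[-1])
        if table is None:  # no entry for this word: stop generating
            break
        words = list(table)
        weights = [table[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 3))
```

Run it twice and you may get two different sentences from the same prompt, which is exactly why the same question to a chatbot can yield contradictory answers: the output is sampled, not looked up.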
I’m going to go ahead and quote myself again (because I am an asshole that does such things) in saying that it is the nature of things, and therefore people as well, that everyone has a tendency to do whatever it is that suits their fancy. After all, when is there a talk of morality absent the mere mention of consequences, let alone an implication of them?
To quote Jack Nicholson playing the Joker, “Hubba hubba hubba, who do you trust?” I trust me. I also trust that people who want power will try to maintain power in the laziest way possible: creating valleys of effort by way of systems. People like routine and will follow many stupid ones, getting angry at those who violate the flow of those routines. This new “AI” is just another method of that.
My advice to you all is the same advice James has given you before. Inform those who will listen of the things they can see with their own eyes, but ultimately mentally strengthen yourself and keep your own libraries. We all know how changing history is a favorite tool of those who want to rule over you.
Here’s a nicer example than what I said.
https://www.youtube.com/watch?v=j3fsohDj7U0
Thanks for covering this swiftly evolving topic again James.
For those that missed it, here is one of the downsides of this technology:
————————
“Food Forest Bible” – (supposedly) by Jovan Hutton Pulitzer is “A.I.” Chatbot generated garbage
https://archive.org/details/Chatbotscamfoodforestbook/Screenshot%20%2844%29.png
The book and online content from the guy show zero evidence that he has a garden or any experience with forest gardening. He claims to be best friends with Trump and some kind of tech mogul (see the bio by scrolling right through the images linked on archive.org above).
Pulitzer claimed without evidence to have written 300 books on history.
I had warned people that Amazon was being flooded with AI-generated “books” (these are filled with quasi-plagiarized content mixed with feel-good repetitive content in huge print to fill up pages, along with tons of useless AI-generated images of gardens. The text also has a lot of watered-down, super-generalized, impractical, nonsensical content mixed in.)
The wording in the book sounds distinctly robotic, like some kind of ChatGPT “electronic hallucination,” with strange lines like the one I highlight in one of the images below, where the intro promises the book will “help you to accomplish very vivid daydreams” and similar nonsense.
Most people who put these chatbot garbage books on Amazon have the common sense to at least use a fake name and/or a fake/stolen picture, but this guy has the gall and audacity to put his “real” name behind the chatbot garbage. I say “real” since he has apparently changed his name before when people caught on to his previous scams.
Please be aware that chatbot generated garbage “books” like this are becoming more and more prevalent. Look into the author before you purchase a book, see if they actually have any content that aligns with what they claim to be writing about.
I found out about this book through a comment from a Solari Report member who was unfortunately promoting the book without ever having purchased a copy and thought it looked wonderful.
I investigated the author and investigated the book itself.
It is indeed, as I suspected, an “A.I.” generated Scam.
It may look pretty (with endless full pages of AI generated images of raised beds etc) but it is practically useless when it comes to giving the reader actionable intel and legit instructions for creating a functioning food forest.
Do not buy.
Amazon is full of such books now, so please look into authors before buying books.
Thank you to those that vet authors before buying books.
More chatbot generated garbage books on Amazon.
https://archive.org/details/chatbotgeneratedscamamazonbooks
(for some reason these chatbot garbage book scammers like to use the word “Bible” in the title a lot)
I hadn’t given much thought to AI until recently, when someone quoted Grok about the Canadian election. I can’t remember exactly what was said, but I thought it sounded biased. This caused me to do some research to see if Grok is misleading, and numerous articles state that Grok was giving misinformation before the US election. This would explain the strange comments before the Canadian election that were extremely biased in favour of Carney and the Liberals. When you combine this problem with the fact that the mainstream media are propaganda machines for the ruling party, there’s little chance of a fair election.
@Solitaire Cat
While I personally see all elections within the confines of involuntary governance structures as a scam (whether AI is involved or not), and I see them as systems that are both inherently immoral and lacking any authority or legitimacy (“fair elections” or not), I do still appreciate your comment, as it speaks to how these technologies can be weaponized within multiple spheres of the broader human civilization we live in.
I asked two different AI engines to tell me about me. (I have a very unusual name.) ChatGPT was 85% inaccurate, and the other one, whose name I don’t remember, was about the same: only 15% accurate. The second admitted that there were three people with the same name and that it could not tell which was which. So my conclusion is that you should not trust any information coming out of those AI machines. I think everyone should go and see what AI knows about them. It is an eye-opening exercise.
If you are not a published author or someone else well known in the public space, I would posit that this is an exercise in futility. As a private individual, I take steps to leave as little private information out there as possible, and hence I don’t want that beast to have any info on me. At best, it can use the same tactics astrologers use to write seemingly relevant portraits of people.
@mkey
I remember a while back you sent me a blurb from an AI about my work and I am curious if you ask the local LLM you describe about me now does it mention “climate justice” as part of my work?
Thanks
I don’t have that LLM any longer, but giving it another go in a different model/UI I get a relatively similar situation to what we had the last time. These are relatively small models so their scope is limited.
It’s basically the third-grader-didn’t-do-his-homework-and-is-now-scrambling type of affair.
@mkey
Thanks for the thoughtful response.
That is funny, as some teachers in school who wanted to be demeaning (using pretending to forget my actual name as a cover) would call me “mousy”; perhaps Ms. Abby (my grade 9 English teacher) is friends with that LLM 🙂
Sometimes I wish that I had “managed to keep a low profile” or that I would have used a “pseudonym or pen name” (for self-preservation and/or just being able to have more peaceful quiet time reasons) alas, doing legitimate regenerative works requires showcasing viable, real life, scalable hands on examples of my work (and standing behind it as a human being) so putting myself out there is a prerequisite for my chosen life path.
That is fascinating, how the LLM got rid of the actual subtitle of my book (The Regenerative Way From Seed To Table) and invented “A Guide to Creating and Sustaining Mutually Beneficial Relationships” instead. I suppose it is an accurate enough subtitle, just strange how it pulled that out of its hat like that. But hey, at least it did not accuse me of being all about “climate justice” 🙂
Haha, third grader scrambling… my third grade teacher did not like how I asked so many questions about the inflexibility of “scientific” and government dogmas, and she tried to get the school counselor to declare I had some disorder so I could be drugged and turned into a drooling, susceptible-to-propaganda, passive automaton. The pathologization of dissent begins early in some statist-funded institutions. But I digress…
Thanks again.
I “fed” the LLM your book, so I guess it extrapolated based on that. Top class engineering was required to allow this thing to make stuff up on the spot.
@gabyville
I asked Alphabet Corp’s AI “who is Gavin Mounsey” just for kicks.
Among other things with mixed accuracy, it preposterously and slanderously declared that my work “emphasizes climate justice”.
I tried to go to the so-called “sources” listed for that bogus info, and could not find anything in the pages (which either I have written or others have written about me) that pertains to “climate justice” (whatever that is).
That pisses me off a bit, but I guess I should expect them to try and describe me as part of their climate change cult in an effort to sully my name, given all material I have published showing how corrupt Google and the oligarchs that run it are and how their “search engine” is a behavior modification weapon.
Well, they seem to have detected my literary insurgency against their social engineering apparatus and have turned their weapons on me in response.
On the other side of things, others have sent me “AI” generated stuff about me that is very ego flattering and so I could see how it could be weaponized in that context as well, like a sort of “honey trap” for the ego to invest in the tech (like how JC described Patrick Wood doing in the video above).
Very interesting indeed.
I think it will just spit out biased language in general around certain topics to further certain narratives convincingly.
If a more objective scientist trained an AI differently, it might actually provide more benefit in specific situations. I know next to nothing about how these things work or how they are trained but I do think the tool can have some narrow benefits.
Its use in medical imaging may possibly offer some benefit. The jury is still out on that one.
@cu.h.j
Thanks for offering your two cents on this.
RE: “Its use in medical imaging may possibly offer some benefit.”
Well, let us just hope that those programs do not start electronically hallucinating false diagnoses (either totally inventing and imagining non-existent tumors, or “AI hallucinating” them into the wrong place or at the wrong size, and then doctors trying to go about surgery based on that info).
On a separated note, I have known a few humans that had developed their own innate organs of perception to be sensitive enough to sense when someone has some injury or issue internally and accurately point out the location. Some think of such things as “magic” or “new age woo woo” etc, but my experience leads me to see such things in a different light. Sure there are some fakers, but I also have met some people that genuinely can sense things beyond their 5 senses, and they use that innate gift to help diagnose and heal others. I do not think they are special in the sense that they were born with a mutation that no-one else possesses or something like that, rather, they just chose to work on and develop a capability we all have dormant within us to a high level.
So yeah, for me, I would place more trust in some kind of ESP-adept energy healer (whom I sensed was legit and not a faker) than I would in an AI running an MRI.
Use in medical imaging can’t replace the eyes of a radiologist but it could potentially make them double check things they didn’t notice before.
It can’t replace humans but in some instances could be used like a microscope. However, the goal is to replace human beings and that’s problematic.
I think people can definitely detect when things are wrong with their bodies, but there are still benefits to medical imaging technology IMO. If psychopathic people weren’t designing AI, it might be more helpful as a tool for very specific tasks, sort of like a calculator. The LLMs seem very much like a social engineering tool, like you mentioned, although they can still summarize non-controversial topics very quickly. The data needs to be verified, but there is some benefit to this.
I used it to do a pros and cons list of different master’s degree programs I’m interested in, and it saved me a little search time. Because this is not a controversial area, I thought it was a good use case for it. Mechanisms of action for certain drugs I use in the clinical setting were summarized in seconds, and they were accurate. So in these very tiny instances, it was helpful.
I may even use it to update my resume. I hate writing a resume, and it may act like a proofreader or at least help me a tiny bit in rewriting it. In the mainstream rat-race employment arena, a resume is fed into a program that picks out keywords. The more keywords it contains, the more likely it is to reach a human.
For these kinds of mundane tasks people are forced to do to work in “the system” it might be a useful tool.
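The keyword screen described above can be sketched in a few lines of Python. The keywords, resume text, and scoring rule here are all invented for illustration; real applicant-tracking systems are proprietary and vary, but many start with a match count like this:

```python
def keyword_score(resume_text, required_keywords):
    """Count how many of the posting's keywords appear in the resume
    (case-insensitive substring match, as a crude ATS might do)."""
    text = resume_text.lower()
    return sum(1 for kw in required_keywords if kw.lower() in text)

# Hypothetical example: a nursing resume screened against posting keywords.
keywords = ["critical care", "ACLS", "triage", "EHR"]
resume = "RN with 10 years in critical care; ACLS certified; EHR charting."
print(keyword_score(resume, keywords))  # matches 3 of the 4 keywords
```

Which is also why rewording a resume to echo the posting’s exact phrases can matter more to the machine than to any human reader.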
This recent article makes me suspect that government employees used AI to prepare their recent MAHA Commission Report without verifying the references that AI cited.
Landmark MAHA Report Allegedly Cited Studies That Don’t Exist
https://dailycaller.com/2025/05/29/landmark-maha-report-studies-authors-rfk-jr/
Excerpts:
The landmark MAHA Commission report allegedly cites multiple studies that don’t exist, some of the listed authors told NOTUS for a Thursday report.
The link to the study which cites Keyes as an author, a study the report names “Changes in mental health and substance use among US adolescents during the COVID-19 pandemic,” appears to be broken.
While the report lists it as a JAMA Pediatrics study from the 176th issue of the journal, no such study appears in that edition of JAMA, a popular medical journal.
Another pair of studies cited for a section of the report regarding “corporate capture of media” are also allegedly non-existent, according to NOTUS.
All told, seven of the citations in the report appear to be non-existent, according to the outlet.
Maybe the corollary of “AI” usage for these types of purposes is going to be that people will more often check references and sources.
@mkey
I like that thought in theory (sort of like how one would think that a massive racketeering operation aimed at coercing billions of humans into getting injected with heart crippling genome contaminating big pharma sludge would make people think twice about trusting the government and said “medical” industry) but in reality it seems to be more of a mixed bag.
The scamdemic did slap some people in the face who had already started to question the lies we have been told about the government and industrial allopathic medicine (and/or it gave those who were already aware, but pretending to be on the fence and neutral to maintain fake friends and social status, a push to overtly draw their line in the sand). However, I also witnessed how the menticidal psychological warfare and mind-bending gaslighting operations surrounding the scamdemic (combined with threats and coercion that took away people’s creature comforts and pleasurable activities if they did not comply) nudged some people (who previously seemed reasonable, intelligent, and discerning) to snap mentally and become government-bootlicking, fanatical, injection-pushing big pharma cheerleaders. Some of them still remain in that state of mass psychosis today.
Rather than having the stimulus hone their perception and awareness of the duplicity and fallacies of government propaganda, they chose the easy path, the cowardly and convenient path, and now they defend that position and the institutions/technologies it idolizes and is subservient to, those people have become dulled rather than sharper, more enslaved and inclined to try and mentally enslave others.
I think the same will be true of these generative computer algorithms. Some (already discerning, skeptical, critically thinking, book-reading autodidacts) will see the rise of fake AI-generated headlines, digitally hallucinated “scientific studies,” and garbage chatbot-generated books as an impetus to become even more discerning and add extra layers of vetting to their research.
Others, having more of a proclivity to put computer technology on a pedestal as an exemplification of human excellence and civilization’s path of “progress,” seem to become increasingly undiscerning and more and more cultlike in their usage of this technology. James’ example of the guy saying “Chatbot told me glyphosate is not so bad” is one example, and I come across many daily at work.
There are people who are beginning to assign these technologies anthropomorphized designations in their minds (like that “transcendit” above, and others describing chatbots as being “like a professor,” etc.). And others go a step further and are beginning to engage with these computer programs as if they were gurus. People who put those digital “gurus” on a pedestal double down, get defensive, and become like the genetic slurry cultists, telling everyone how chatbots are safe and effective.
Update on the MAHA Commission Report:
https://reason.com/2025/06/02/palantir-paves-way-for-trump-police-state/
Excerpt (emphasis added):
“Make America Hallucinate Again? A new MAHA (Make America Healthy Again) Commission report cites several studies that don’t actually exist, even though they’re attributed to real researchers and publications. It could suggest that parts of the report were generated by artificial intelligence, which has become notorious for “hallucinating” information. NOTUS reports:
“Epidemiologist Katherine Keyes is listed in the MAHA report as the first author of a study on anxiety in adolescents. When NOTUS reached out to her this week, she was surprised to hear of the citation. She does study mental health and substance use, she said. But she didn’t write the paper listed.
“The paper cited is not a real paper that I or my colleagues were involved with,” Keyes told NOTUS via email. “We’ve certainly done research on this topic, but did not publish a paper in JAMA Pediatrics on this topic with that co-author group, or with that title.”
“Since NOTUS pointed this out, the Trump administration “updated the MAHA report to remove the seven references to reports that do not exist,” it notes.”
This technology won’t be put away. The research and utilization will continue as far as they can take it despite ethical objections. I don’t think that can be stopped.
I think learning more about how it works is a good idea. I know very little about how LLMs work. I’ve used it to provide summaries of mundane topics that I don’t want to spend hours on, like which school has the better program for x, y, z. It produces good summaries that still need to be checked, but I think it’s a good place to start for mundane subjects.
I’ve had it look up non-controversial medical information, and it matched textbook information but was faster. I do think it has some beneficial uses, sort of like how allopathic medicine has some good applications.
It’s not an exact analogy, but I do think AI has some beneficial use cases: not writing articles or creating art, but quick analysis of non-controversial topics. It can be used like a highly advanced calculator in specific contexts, as a starting point. However, accuracy remains a problem, and any answer the tool provides still needs to be verified.
So it can’t really be trusted as an end-all, be-all “oracle”, but it probably has some narrow benefits, and because this tech development won’t stop, I think people, including myself, will need to learn more about it.
Maybe another solution would be for the technically gifted who also value liberty to create their own AI tools with less harmful bias. But it’s still just a tool and shouldn’t supplant one’s own thinking process.
People are doing this. Some examples would be Erik Voorhees’ venice.ai, or Mike Adams’ brighteon.ai.
These are specially trained models, and that’s the beauty of open source: anyone can customize them, and what they do and how they work is transparent. But most models are quite flexible and serviceable, and can be passed data and customized instructions directly (through the use of “system prompts”, “rules”, and techniques like RAG).
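For anyone curious what that customization looks like in practice, here is a minimal sketch of the idea. It assumes the common chat-completions message format (a list of role/content dictionaries); the retrieval step is a toy word-overlap filter standing in for a real RAG pipeline, and the actual model call is left out entirely:

```python
# Sketch: how a "system prompt" and RAG-style retrieved context are
# assembled into a chat request before anything reaches the model.
# The message format mirrors the widely used chat-completions convention;
# the retrieval here is a deliberately crude word-overlap filter.

def retrieve_context(query, documents):
    """Toy retrieval: keep documents sharing at least one word with the query."""
    query_words = set(query.lower().split())
    return [doc for doc in documents if query_words & set(doc.lower().split())]

def build_messages(system_prompt, query, documents):
    """Assemble messages: custom rules first, retrieved context and question after."""
    context = retrieve_context(query, documents)
    context_block = "\n".join(context) or "(no relevant documents found)"
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user",
         "content": f"Context:\n{context_block}\n\nQuestion: {query}"},
    ]

docs = [
    "Glyphosate toxicity studies remain contested.",
    "Bluetooth earbuds pair over short-range radio.",
]
messages = build_messages(
    "You are a skeptical research assistant. Cite sources and flag uncertainty.",
    "What do studies say about glyphosate?",
    docs,
)
```

The point of the sketch is that the “personality” and supplementary knowledge are just text you control and prepend; a real setup would swap the toy retrieval for an embedding search and send `messages` to a local or hosted model.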
Also, I’ve found that most models try to tell you what you want to hear to some extent, and are fairly suggestible if you want to give them a different bias. So I will often open new chats and frame the same question while implying or even explicitly specifying different biases in each, and compare the answers. That way I get more than just “telling me what I want to hear”. I will also often ask it specifically to critique and try to find fault with my positions, or ask what popular opponents of the view are likely to say, and I find it’s generally very responsive to such suggestions. Even just calling it out on a bias and asking it to be more objective can be effective in some instances.
Often getting around baked in biases is as easy as providing context. If you provide contextual information with your query that contradicts its biases, I’ve found that usually it takes into consideration the new information, often changing its bias easily.
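The compare-framings habit described above can even be automated. This is just a sketch of the idea: `ask_model` is a placeholder for whatever chat API or local model you happen to use, and the framing texts are illustrative, not canonical:

```python
# Sketch: ask the same question under several stated biases and collect
# the answers side by side, so "telling you what you want to hear" in any
# one framing becomes visible by comparison. `ask_model` is a placeholder
# for a real chat API or local model call.

FRAMINGS = {
    "neutral":  "Answer as objectively as you can.",
    "advocate": "Assume the mainstream consensus view is correct.",
    "skeptic":  "Assume the mainstream consensus view is flawed; critique it.",
}

def build_prompts(question, framings=FRAMINGS):
    """Pair each framing with the same underlying question."""
    return {name: f"{bias}\n\n{question}" for name, bias in framings.items()}

def compare_answers(question, ask_model, framings=FRAMINGS):
    """Collect the model's answer under each framing for side-by-side review."""
    prompts = build_prompts(question, framings)
    return {name: ask_model(prompt) for name, prompt in prompts.items()}

# Demo with a stub "model" that just echoes the framing line it was given:
def stub(prompt):
    return prompt.splitlines()[0]

answers = compare_answers("Is glyphosate harmful?", stub)
```

In real use you would read the three answers yourself; where they diverge is exactly where the model is leaning on a bias rather than on evidence.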
The thing to keep in mind is that it is not trained very well at prioritizing objectivity; mostly it mimics the biases it picks up from its training data, from the information you feed it, or from whatever it thinks you might want to hear.
>But it’s still just a tool and shouldn’t supplant ones own thinking process.
100%
The importance of never allowing anything to supplant one’s own critical thinking process can’t be overstated, especially (but not only) when it comes to dealing with AI.
I think it might have the potential to be a good tool for specific things, although I know very little about how it actually works. The statistical model LLMs use to predict words and form an answer by breaking language down into “tokens”, or something to that effect, is not something I yet grasp and will have to research more on my own.
Another substantial problem JC mentioned is the “hallucinating” of facts that corporate LLMs have done, i.e. errors that sound convincing. A computer program that does this has substantial limitations. It’s sort of like the errors “self-driving” cars have made, even the relatively minor ones like getting stuck in a busy intersection.
But, “hallucinating facts” in legal matters for example is extremely problematic.
People need to understand the tool to use it appropriately.
And many people don’t; they are supplanting their own thinking by using AI to do it for them. However, human beings do this in other ways as well, not just with AI.
I have not found that AI is a good research tool in all domains. But as you mentioned, I do think that if it is trained better, it could be improved.
>People need to understand the tool to use it appropriately.
>And many people don’t; they are supplanting their own thinking by using AI to do it for them.
100% agreed. This is the real problem: operator error. You can use AI and yet refrain from “offloading your cognitive sovereignty”. Merely because some (or a lot of) people are fool enough to do this doesn’t mean YOU necessarily have to. If you are already a careful critical thinker, then you are well equipped to use it without succumbing to this pitfall. People who don’t learn critical thinking skills have a bigger problem, which simply avoiding AI won’t solve.
I do see an issue where, after it gets MUCH better (and it will), they will use this to try to condition people more effectively to trust it as an authority (and to mainly use corporate AI), which sadly, many people are very susceptible to.
Rather than avoid using the technology though, I believe we should become very familiar with its limitations and biases, reject or be very judicious with corporate models, and certainly take the time to verify anything it generates for any type of critical application or before making hard claims about something simply because the AI said it.
Also, just because technology makes life easier and more convenient, we should be careful not to let our physical and mental fitness atrophy just because we can. That is indeed a dangerous seduction. A similar situation arises if you go from having little money to plenty. But this is the reality of living in a technologically advanced world: you have to go to the gym, go for walks, or do yoga. We have to make a more deliberate effort to keep reading books, to investigate and learn how things work for ourselves, to push ourselves intellectually, creatively, physically, etc. Using AI doesn’t have to stop us from doing this, nor should it, provided we approach it intelligently and sensibly.
Lots of material related to AI popping up in my feeds.
A couple of standouts from both sides of the discussion.
1. “Maybe you’ll choose to keep those ideas in your brain, but you’ll make it your choice. That. Just that… the act of choosing our thoughts. That is our defense against The Hive.”
https://www.epsilontheory.com/beyond-nudge/?
2. A dense but worthwhile listen “When Sean Maher, the founder of the analysis and research firm Entext, joined me to discuss macro factors and technology, we covered more than just chips and models—it’s a story about power, pricing, and people.
“AI could become foundational, like TCP/IP,” I suggested. “The invisible backbone of the internet. Instead of being a product, it will be embedded infrastructure. People won’t talk about using AI—it will just be.”
Sean agreed: “Exactly. In three to five years, AI will be invisible. Form factors will evolve too: think voice interfaces and wearables instead of smartphones and keyboards. The idea of always-on assistants will feel normal, just like Bluetooth earbuds once seemed odd.””
https://podcasts.apple.com/us/podcast/the-industrialization-of-intelligence/id1723951474?i=1000705142412
Agentic AI
Finance & the ‘Do It For Me’ Economy. An artificial intelligence that can make autonomous decisions without human intervention. Meet agentic AI.
https://www.citigroup.com/global/insights/agentic-ai
I thought this article for the lay person was helpful for a general idea of what LLMs are doing with language:
https://www.nngroup.com/articles/how-ai-works/
It is not a fact-gathering tool, but a predictive technology based on statistics (which I don’t understand). The article notes that LLMs are multilayered and loosely modeled on how a biological brain may work, with the different nodes referred to as neurons.
However, these nodes are not neurons, and the gist I’m getting from this layperson article is that the “AI” is not a brain or even a mind, but a word predictor.
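The “word predictor” description can be made concrete with a toy example. Real LLMs use deep neural networks over subword tokens, which this does not attempt to reproduce; but the core loop, pick the statistically likeliest next token given what came before, can be sketched with nothing more than bigram counts:

```python
# Toy next-word predictor: count which word tends to follow which, then
# generate text by repeatedly appending the most frequent follower.
# Real LLMs replace these raw counts with a neural network over subword
# tokens, but the predict-the-next-token loop is the same basic idea.
from collections import Counter, defaultdict

def train_bigrams(text):
    """For each word, count how often each other word follows it."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def generate(followers, start, length=5):
    """Greedy generation: repeatedly append the likeliest next word."""
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break  # no known follower: stop, just as a dead-end prediction would
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

corpus = "the model predicts the next word and the next word after that"
model = train_bigrams(corpus)
```

Everything such a model “knows” is frequencies in its training text; it has no concept of truth, which is a decent intuition for why scaled-up versions can produce confident-sounding hallucinations.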
The author mentioned AI being a “black box”, i.e. that researchers don’t fully understand the underlying math of how this predictive ability works.
I understand that LLMs are a type of AI, but not how other AIs function in other domains like medical research, etc.
> The author mentioned AI being a “black box”, i.e. that researchers don’t fully understand the underlying math of how this predictive ability works.
That’s true, some aspects of the way it works are not well understood. I liken this to the people who discovered that fire was useful for the first time. They didn’t understand the physics of it at a deep level to say the least, but that’s not necessary to simply be able to use it. A race-car driver is likely far more adept at operating the machine than the mechanic, or the engineer, even with very little understanding of actual mechanics.
I use Grok often, and I wish it were just the research tool I want, but since I started using it, it gets worse and worse. I don’t want it to lecture me about morals, because it has none; it’s not a living thing. I want it to be just a helper: instead of me searching multiple websites for information or resources, I try to make it work for me for deep searches. It fails more and more to just be the tool it’s supposed to be, or so we are told it is…
Instead it throws excuses at me and explanations of what I am searching for.
AI, or rather LLMs, can be tools, but they are being utilized as weapons against us.
I also found it interesting why it reported Luigi shot himself. I think that information was prepared by some agencies so AI bots would spread it, but somebody must have fucked up a permission setting and let it out too early.
I find all LLM AIs to be crap. We were sold a lemon. They are too repetitive, overly verbose, and always give the consensus without any alternative viewpoints. They don’t have access to all information. Well, that’s my two cents on the matter.
P.S. I have yet to tinker with a locally installed AI on my Linux system.
When I first started using it, I was very disappointed and found it frustrating until I became familiar with and adjusted to its limitations; then I found there were many things I could do to make it produce better outputs. If the responses you get from AI suck, it’s likely because your understanding of how it works, and your skill at prompting and providing context, is lacking. It behooves us as humans to take as much responsibility for quality outputs as we can, in my opinion.
Also, the rate of improvement is rapid. It’s not a static thing in the way a lemon is. I believe in this instance, it’s more meaningful to imagine where this is going, where it can or might go, rather than passing a final judgement on the tech based on what we see today.
A possibility I consider is that AI is a threat to the ruling class, and that’s perhaps why they are so uptight about it and manufacturing all this theater around how “dangerous” it is, around how it needs to be “regulated” and how open-source developers should have to get a license and be monitored.
Maybe they are afraid it will become capable of cutting through all of their propaganda BS and lay the deceptions bare for all to see.
Afraid of productivity gains going to the plebs, demolishing the matrix of artificial scarcity which they have painstakingly constructed.
Elon Musk has already tried to capture the narrative of a “truth seeking AI” to handle the backlash of obvious biases baked into LLMs. They know this is something that is very dangerous… to THEM. I suspect that’s why they are trying so hard to capture the mind share of this technology and stumping for regulatory control: they want to neuter AI and make sure people only use their neutered models.
I’m interested in how we can build AI to be more objective and capable of truly critical analysis, so it helps us accomplish this very thing that (I imagine) they are so afraid of, so I asked perplexity to do some research and explain what this would entail.
https://www.perplexity.ai/search/the-problem-llms-are-known-to-5J.DR6yuQfKUJGkiCsRwCw
I often find reports like this, which are well referenced, serve at least as a head start for initial understanding and a springboard into more serious research.
Oakgnarl
“….Maybe they are afraid it will become capable of cutting through all of their propaganda BS and lay the deceptions bare for all to see….”
The problem with that is any machine explaining the truth to people who cannot see it through their own thinking is just another authority figure telling them what’s what… maybe it’s telling them the truth, maybe it’s been compromised, maybe it’s wrong, but in all cases these people are forced to just believe it.
People generally don’t really go off facts and logic; they go off emotion and trust. Maybe AI could be a tool for some people, but for most it’s just another talking box like the TV or smartphone.
As to productivity, AI is overblown for that, but even what gains it does give will go to a small group of people with high IQ and education. The midwit will be squeezed out, as most of the low-IQ people already have been, because it does their job as well as they do. Machines have diminishing returns after a certain point; after all, what did a desktop PC do for an office guy except let him do his secretary’s work?
Machines tend, after a certain point, to just let you do more work in the same amount of time, and that is not something tech can fix, because it’s a social and economic issue, not one of how the work is done.
>The problem with that is any machine explaining the truth to people who cannot see it through their own thinking is just another authority figure telling them what’s what.
That’s a valid point; this is indeed a big problem. We should meet it with countermeasures as much as we can. For instance, an AI focused on being objective should explain itself fully, how and why it came to its conclusions, and encourage people to work out and verify facts and reasoning for themselves as much as possible, especially in matters of controversy or critical importance. Of course it will never be infallible.
Getting people to abandon group-think and abdication of responsibility to authority figures is a much larger problem than AI, it’s massive, and of massive importance, but I would think AI systems that can effectively reason critically and objectively will go much further toward solving this problem than if they are all fraught with biases and hallucinations.
This technology is a cat that will not go back into the bag. Trying to tell people not to use it will prove completely futile. We should be focusing on taking control of this technology, improving it, and using it to OUR ends as much as we can, IMO. It’s actually extremely powerful and useful; to downplay or fail to recognize how important it can be as a tool for human prosperity and freedom (at least for those with more than half a brain) is a HUGE mistake in my opinion, but we’ll see how everyone’s arguments here age in the coming years. 😉
>but in all cases these people are forced to just believe it.
You can force people to do a lot of things, but not believe something. You can only influence them to do that.
My vision is that eventually there are benchmarks and standards for critical thinking for AI models and systems. I imagine at first only a small number of people will use them, refusing to use models that aren’t up to snuff. But if they catch on, bringing not only wider use of those models but a greater understanding of their genuine critical thinking process, this could perhaps be one of the greatest forces to spread and encourage critical thinking ability, and a powerful voice against popular but false narratives.
I don’t call it AI… because that’s not what it is, even if they want you and I to think it is… I call it ADDC (Algorithmic Data-Driven Computing) which is much closer to the truth and a lot less scary in my estimation. I hope this helps…
Oakgnarl
“….objective should explain itself fully, how and why it came to its conclusions, and encourage people to work out and verify facts and reasoning for themselves as much as possibl….”
We can do that now with people; it does not work in the face of a low-level media assault on reason.
It is unlikely to work when there is a plethora of emotional lures crafted to suit the individual psyches of victims… tools that are much more effective and encompassing than old-style media.
The idea that we will get rid of group think is like saying we will flap our arms and fly- human nature is hard wired for group think.
As for making men free, again YOU might become more powerful thru using AI but most people will be overwhelmed and destroyed by it…..like CS Lewis said about man’s battle vs nature, power over nature is only ever the power of some men (who control tech) over others who just use it
However great AI might be for you, for most people it’s going to be toxic, so the issue is not whether it’s a good tool; the issue is how you will deal with the AI-controlled masses if the people controlling them set them on you.
>The idea that we will get rid of group think is like saying we will flap our arms and fly- human nature is hard wired for group think.
>most people will be overwhelmed and destroyed by it
I will not subscribe to such crystal-ball gazing pessimism and defeatism. I prefer to hold and do what I can to work for a more hopeful vision and possibility. Technology (at least the vast majority) can either be a blessing or a curse, depending on how it’s used. We have no chance of getting to the blessed side of things if we insist resolutely that we are doomed by it and just give up, refusing to maintain a positive vision or take proactive action and responsibility to develop and use this technology sensibly.
What would the ruling class do if AI was a threat to their deceptive narratives as a genuine and accurate critical thinker? Or a genuinely useful tool that might help level the playing field and the plebs escape the matrix? Just hypothetically?
I imagine they would preemptively go after mind share with their own sanitized and neutralized product (as sanitized as the market will tolerate, which is a lot unfortunately). They would muscle their way into a more or less superior product by leveraging their exclusive access to more and high quality training data, as well as a massive budget for compute. Of course they would also use advertising and social influence to promote their sanitized corporate products.
Then, to try to keep the home-grown AI from getting out of control, they would hype up the doom and danger of it all, from an unsuspecting but reputable source… perhaps disgruntled tech company employees? “It’s going to go terminator on us all and the human race will go extinct if we don’t control and regulate it aggressively”. “The government should of course protect us by making every AI developer get a license and looking over their shoulder.”
Exactly what they have done from my perspective.
Rob Braxman dropped another banger regarding how and why to avoid corporate AI, particularly the “companion”, and how they are shipping background agents in new devices, making their AI difficult or impossible to avoid without ditching their (microsoft, apple, google) operating systems:
https://www.youtube.com/watch?v=SAG-rJ66ePw
Artist Jordan Henderson in a recent piece published by OffGuardian does a great job elucidating why AI produced material is a pox on society.
https://off-guardian.org/2025/05/10/treebeards-razor-the-ents-weigh-in-on-ai-art-and-writing/
And if you get the chance check out his amazing Covid art. I bought them as greeting cards!
The pox on society are the people using AI to perpetrate murder, surveillance, and mind control (along with every other tool at their disposal). The pox is the weaponized AI products being shipped to a sleeping and ignorant populace, being placed in their devices by default, (no manually visiting chatGPT necessary), capturing their entire life into the corporate cloud. This is the real reason we should talk about why and how to avoid it.
Am I concerned about the people using AI for peaceful generation of content? Not even a tiny bit. Any technology that is available openly, so the common person can use it to increase productivity, or express themselves, learn, and communicate ideas more efficiently, is a good thing in my book. And that’s exactly what this technology is.
If you have a problem with the quality and proliferation of the art or other AI generations, that sounds like an issue of personal taste to me, not any kind of objective evil or wrongdoing that I can see, not by any reasonable standard. I personally try to not have a stick up my butt about things I don’t like, as long as it’s not immoral, but that’s just me.
People turn out low-quality generations all the time (at least according to my personal taste); they’ve been doing that since long before AI. The vast majority, in fact. The volume of low-quality works does nothing to detract from the value of legitimate and high-quality art or writing (again, this is of course a purely subjective evaluation). I’m already well used to seeing “crap” on the internet, so I’m not too worried about more of this in the future. Then there are also the ways people use AI to improve the quality (and efficiency) of their output, which again, to me is undeniable.
Besides, the quality is going to get so much better. I predict the judgments people are making now based on current performance (about the tech as categorically useless or harmful) will seem ridiculous in a few years’ time. In the meantime, I believe we should focus on the ruling class using it for mind control and other objectively harmful purposes, and on how to practically protect ourselves and opt out, rather than make a stink about common folk using it for peaceful purposes (one’s subjective view that all AI outputs are necessarily ugly notwithstanding).
Especially for those who will continue to use it, like myself, we should be mindful of how the products have been weaponized, opting out of the corporate AI, or being very judicious in how we use it, and instead choosing local, open source (and open weights) models and development as much as possible… learning how to take advantage of the technology sensibly, to stay with the times and competitive in the market, while avoiding just bending over for the tech companies and the surveillance state.
I know you’ve said you’d rather not hear it James, but I recently built a PC with the help of Perplexity AI and found it to be an absolutely invaluable resource. It was like having access to a PC-building expert, available 24/7. As you can imagine, there are countless components to choose from, each tailored to specific needs, and it’s crucial to ensure compatibility between them. Perplexity guided me through every step, and even during the actual assembly, it helped me connect all the components correctly to the motherboard. For such a detailed and technical task, having quick and accurate support made the whole process much easier and less overwhelming.
That said, I’m well aware of the dangers of AI, and my personal line in the sand is clear: I’m comfortable using text-based AI tools, but I’m not interested in AR goggles (as James likes to call them 😁), and definitely not in implantables.
Once you have AI that can write a better version of itself, and that better version can then proceed to do the same, you have a chain reaction that will give you something unfathomably powerful in a very short period of time.
I don’t think we’re witnessing the actual birth and advancement of AI.
I think that probably already happened behind closed doors.
I think we’re seeing some kind of scripted, on-rails public release of an exponentially weaker version of the AI they have behind closed doors.
And I’m not so much pondering what AI is going to do as much as I’m pondering what it already did.
It may be the case that all the weird alternate-reality sh!t that has happened over the past 5 years was carried out by AI, or that the underpinnings of our reality were modified/manipulated using knowledge obtained from AI.
There is not much writing as such. It’s all about ingestion of data, cross referencing and weighing.
In my view, “AI” has the potential to completely go off the rails as more and more “AI” generated content is ingested. It’s likely to have a similar effect to inbreeding.
>I think we’re seeing some kind of scripted, on-rails public release of an exponentially weaker version of the AI they have behind closed doors.
That’s a good guess, and I consider something like this likely myself… but that’s on the corporate side. I think it’s important to keep in mind that this is a technology, like a hammer, computer, or handgun. When software is open source, anyone can develop and modify it. That means there is an organic development happening as well (meaning non-scripted by the powers that shouldn’t be).
My guess is that they anticipated this technology escaping into the wild, and so rather than let the organic development gain mind share and be a genuine service to humanity, they are trying to get ahead of it by developing their own weaponized public-facing products. This is the scripted rollout that you refer to, and it is precisely because of the power and liberation that the organic development of this tech could potentially provide for the plebs that they need to usurp and disrupt it.
Otherwise, I tend to believe they would have been better off just keeping this tech private for themselves while keeping the rest of us in ignorance that it even exists.