You’re performing the same work that companies pay data annotators to do: rating responses, correcting errors, and choosing between outputs. The difference is that you’re doing it unknowingly, for free, and OpenAI designed the exchange to be invisible.
Six mechanisms extract value from ChatGPT usage.
The company’s own documentation confirms that user prompts serve as curriculum, ratings function as reward signals, and corrections train future models.
Furthermore, the most valuable contribution isn’t the prompts themselves; it’s the intent data: explicitly stated desires that command premium prices in markets that behavioral tracking can’t reach.
This applies to Claude, Gemini, and Copilot. The architecture is identical.
Q&A TLDR: The Hidden Cost of “Free” AI Tools
How much is my ChatGPT usage actually worth to OpenAI per month?
It’s difficult to estimate precisely, but the underlying economics are clear: submitting ~200 prompts a month (roughly 6-7 per day), rating responses, choosing between outputs, and correcting failed queries is the same labor (plus edge-case testing) that companies pay professional annotators to provide. Market rates for this work run $20-$60 per hour for specialized corrections in fields like law, finance, or medicine.
What are the ways AI companies extract value from my free labor?
(1) Training data: your prompts teach models what matters to humans. (2) RLHF feedback: your thumbs-up/down ratings act as a reward signal for training. (3) Preference comparisons: choosing between two responses shapes the model’s judgment. (4) Edge case discovery: your unusual prompts map where these models break. (5) Intent data: your explicitly stated desires are worth far more than behavioral tracking. (6) Error correction: you fix mistakes, which is especially valuable if you’re an expert in STEM, medicine, finance, or law.
If I opt out of training, will I lose access to AI features?
No. Opting out affects only whether your chats train future models. You’ll still get full functionality, real-time responses, and all features. The opt-out is purely about consent to use your data for improvement. The fact that companies hide this setting suggests they know users would opt out if given a clear choice.
My company uses ChatGPT. Is our data vulnerable?
It depends on the plan you signed up for.
Consumer plans (Free/Plus/personal accounts): training is on by default, and your conversations may be reviewed by humans. Enterprise plans (ChatGPT Enterprise/Team, Claude for Work, Gemini Workspace): data is excluded from training by default. Check your T&Cs; many employees use personal accounts for work without realizing it.
The Six Ways You Create Value for AI Companies
Let’s start with something OpenAI did in public, and almost nobody noticed.
In their technical documentation, they state explicitly that user conversations serve as training material for future models.
This is not an afterthought, but one of the core training methods.
That’s the first way you create value: Training Fodder.
Value Type #1: Your Queries as Curriculum
As a core training method, every question you ask becomes curriculum. Your images and personal files are a key ingredient in improving an AI model.
When you ask ChatGPT how to negotiate a raise, or to generate a Simpsons-style image from your photo as a birthday gift for your wife, or how to write a performance review for someone you’re about to put on a PIP, you’re not just getting an answer.
You’re teaching the model what good questions look like and what matters to humans.
This is data with enormous value because it’s high-signal.
It’s what you choose to ask, which reveals what you care about.
Value Type #2: Your Thumbs-Up or Thumbs-Down Feedback
You’ve probably seen those little thumbs-up and thumbs-down buttons in ChatGPT. Most people think they’re just feedback. A way for OpenAI to know if you liked the response.
When you click that thumbs-up or thumbs-down, you’re feeding something called RLHF (Reinforcement Learning from Human Feedback), still one of the key techniques that makes ChatGPT and other chatbots conversational.
In spring 2025, one of OpenAI’s model updates went wrong, and the follow-up announcement revealed more than intended.
OpenAI confirmed that thumbs-up and thumbs-down ratings serve as “an additional reward signal” for actual model training.
That signal directly shapes how the AI thinks and changes the model itself. Even more importantly, if you give thumbs-up or thumbs-down feedback, that entire conversation may be used for training, even if you’ve already opted out of general training!
This revelation surprised many users because most people assumed feedback buttons were simply for bug reports or user satisfaction surveys, not direct model training on their specific conversations.
The models improve with your (and millions of others’) feedback. You’re performing the same task that companies pay professional data labelers $6 to $60 an hour to do.
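To make the mechanics concrete, here is a minimal sketch, in my own illustration rather than OpenAI’s actual pipeline, of how a binary rating can become a scalar reward that a trainer optimizes:

```python
# Minimal sketch (not OpenAI's actual pipeline): turning thumbs feedback
# into a reward signal. The records and numbers below are illustrative.

ratings = [
    {"prompt": "How do I ask for a raise?", "response": "...", "thumb": "up"},
    {"prompt": "Summarize this contract",   "response": "...", "thumb": "down"},
]

def to_reward(thumb: str) -> float:
    """Map a binary rating to a scalar reward a trainer can optimize."""
    return 1.0 if thumb == "up" else -1.0

# Each rated conversation becomes a (prompt, response, reward) training example.
reward_dataset = [
    (r["prompt"], r["response"], to_reward(r["thumb"])) for r in ratings
]

# In RLHF, examples like these typically help fit a reward model, which then
# steers the chat model through reinforcement learning.
print(reward_dataset)
```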
Value Type #3: Choosing the “Better” Response
A related mechanism: Sometimes ChatGPT shows two responses and asks users to choose the better one.
This isn’t a courtesy, of course. It’s yet another form of data collection.
Those preference comparisons, in exactly that format, are used for both traditional RLHF and newer methods like DPO (Direct Preference Optimization). Meta used DPO to train Llama 3.
Every time you choose response A over response B, you're encoding your judgment into the model's decision-making process.
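For the curious, here is a toy sketch of the DPO loss on a single preference pair; the log-probabilities are made-up numbers, and this illustrates the published method rather than any company’s internal code:

```python
import math

# Toy sketch of the DPO (Direct Preference Optimization) loss on one
# preference pair. The log-probabilities below are made-up numbers.

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Loss is lower when the model being trained prefers the human-chosen
    response more strongly than a frozen reference model does."""
    margin = beta * ((logp_chosen - ref_logp_chosen) -
                     (logp_rejected - ref_logp_rejected))
    return -math.log(1 / (1 + math.exp(-margin)))  # -log(sigmoid(margin))

# "Response A over response B" becomes one (chosen, rejected) training pair.
print(dpo_loss(logp_chosen=-12.0, logp_rejected=-15.0,
               ref_logp_chosen=-13.0, ref_logp_rejected=-13.5))
```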
Value Type #4: Edge Case Discovery — Your Weird Questions as Stress Tests
AI companies have dedicated “red teams.”
Red teams function like hiring a professional burglar to test your home security: they probe for weaknesses before real threats emerge. They try every window, every lock, every weak point to find vulnerabilities before malicious actors do.
AI companies use red teams to find where their models break: probing failure modes and testing the boundaries.
This is expensive work.
AI companies spent an estimated $1.5 billion on red teaming in 2025, and the market is likely to triple within four years. Red-team specialists are paid $40-$150 per hour to systematically test for vulnerabilities. Search for AI red team engineer roles and you’ll find plenty of them.
But formal testing misses something it can never replicate: the unpredictable failures that emerge only when millions of real users explore the system in unexpected ways.
Research from Stanford and Google shows that crowdsourced user reports discover systematic AI failures that formal testing completely overlooks:
“By including a more diverse set of users, this crowd auditing method can also detect and describe failures that developers had not considered due to their own blind spots and biases.”
Why?
Because formal test suites hypothesize failures in advance, based on past experience. But you, and millions like you, discover new bugs, issues, and failures by accident, simply by trying something creative, by combining constraints in ways developers never imagined.
For instance, requests that combine a professional tone with creative constraints (like “write a resignation letter in Shakespearean style”) can surface failure modes no test plan anticipated.
Every weird prompt you try. Every time you push the boundaries. Every creative edge case. You’re collectively mapping failure modes that would cost thousands to discover through formal testing alone.
That’s invaluable testing labor for free.
Value Type #5: Your Desires as Market Intelligence
Intent data represents something fundamentally different from behavioral data.
Behavioral data is what you did. Intent data is what you want to do.
When you ask ChatGPT, “How do I ask for a promotion?” that’s an intent. That’s you broadcasting a desire. A problem. An unmet need.
Or when you search, “Best laptop for remote work under $1,500,” you’re not just describing behavior. You’re saying: I want to buy something. I have a budget. I have specific needs.
This is consumer desire at its purest.
On the open web, before generative AI, companies had to follow you around with cookies and trackers just to estimate what you were trying to do. Then they still had to stitch together page visits, link clicks, and time-on-site to infer intent from your “activity”.
AI takes this to the next level.
Telling a system outright what you desire produces a type of information that is extraordinarily valuable to advertisers, recruiters, financial services companies, and anyone willing to pay to know “what you want”.
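To illustrate the difference with made-up records (not any company’s actual schema): behavioral tracking collects fragments that must be stitched together, while a chat prompt hands over the conclusion directly.

```python
# Illustrative, made-up records: behavioral tracking vs. explicit intent.

# What a tracker sees on the open web: fragments it must stitch together.
behavioral_events = [
    {"event": "page_view", "url": "/laptops/ultrabook-x", "dwell_seconds": 42},
    {"event": "click",     "url": "/laptops/compare",     "dwell_seconds": 15},
]

# What you hand a chatbot directly: the conclusion, stated in plain language.
intent_record = {
    "query": "Best laptop for remote work under $1,500",
    "goal": "purchase",
    "budget_usd": 1500,
    "constraints": ["remote work"],
}

# The first requires inference; the second IS the inference, volunteered for free.
```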
Meta makes roughly $68 to $80 per quarter from each US user, just from behavioral data.
The intent data market is massive and growing.
The U.S. marketing data market was valued at $26.1 billion in 2025. More specifically, 39% of B2B businesses spend more than half of their marketing budget on intent data, and 67% of B2B respondents use intent data specifically for digital advertising.
With AI, companies get distilled intent of a kind they could only dream of before 2020.
AI companies can command premium prices in the data marketplace, far exceeding the value of passive behavioral tracking.
Value Type #6: Your “That’s Wrong” as Error Detection
Last one: Correction labor.
Every time you rephrase a failed query, point out a hallucination, or say, “No, I meant something different,” you’re doing quality assurance.
Professional data annotation for complex corrections costs $0.06 to over $1.00 per instance, depending on complexity.
You do it in exchange for the privilege of using AI at a fraction of what the companies invested in building the models.
And if you’re an experienced professional in the field, your correction is worth more than a random contractor’s.
Because you have context. You know what you actually wanted. You can articulate exactly why the response failed.
Professional RLHF annotation and correction work (especially in STEM, law, finance, and medicine) typically pays $20-$60 per hour.
In the detailed article, I’ve listed the latest job specs for a professional to correct answers for an AI company.
Mathematics Education Specialist - AI Trainer - Remote - Indeed.com
https://uk.indeed.com/q-ai-training-l-remote-jobs.html?vjk=26d36994f8b2bd0e&advn=7831672186689475
What Your Labor Is Actually Worth (in $)
Consider a concrete example:
Most of you are principal, director, or VP-level professionals, mid-30s to 50s. You’re using ChatGPT (or a similar chatbot) maybe 30 minutes a day for work. Drafting emails. Preparing for difficult conversations. Researching industry trends.
Occasionally asking for career advice.
Let’s put a price tag on your estimated usage.
In a typical month, you might:
Submit around 200 prompts (roughly 6-7 per day)
Give thumbs-up or thumbs-down ratings at a rate of about 0.2 per prompt (roughly 40 ratings)
Choose between paired responses at a rate of about 0.03 per prompt (roughly 6 comparisons)
Correct or rephrase 30 failed queries
Express purchase intent about 10 times
The dollar value of your contributions can range from $30 a month to several hundred dollars, depending on your usage frequency and the market rate. That’s likely just pocket change to someone at your level.
But let’s put things in perspective: think about the 800 million weekly ChatGPT users. Even at the low end, that’s roughly $24 billion a month ($30 × 800 million) in labor value, provided for free.
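Here is a back-of-envelope sketch of that estimate; the per-item dollar values are my own assumptions for illustration, not published prices.

```python
# Back-of-envelope sketch of the monthly estimate above. Per-item values
# are assumptions for illustration, not published prices.

prompts        = 200   # ~6-7 per day
corrections    = 30    # failed queries you fix or rephrase
intent_signals = 10    # explicit purchase or career intents

value_per_prompt     = 0.05   # assumed marginal training value per prompt
value_per_correction = 0.50   # assumed, within the $0.06-$1.00 annotation range
value_per_intent     = 0.50   # assumed premium for an explicit intent signal

# Individual thumbs ratings are left out: they are worth almost nothing
# except in aggregate across millions of users.
monthly_value = (prompts * value_per_prompt +
                 corrections * value_per_correction +
                 intent_signals * value_per_intent)

print(f"~${monthly_value:.0f}/month per user")                            # ~$30
print(f"~${monthly_value * 800_000_000 / 1e9:.0f}B/month at 800M users")  # ~$24B
```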
Now, a further disclaimer: the $30/month is fuzzy math. I didn’t take into account that:
Your individual thumbs-up is worth almost nothing; it only becomes valuable at scale, combined with millions of others.
AI companies also took the risk and burned billions before you ever showed up, so ‘owe’ is a strong word.
An occasional individual annotation isn’t directly comparable to the market rate paid to full-time annotators.
But all things considered, here’s what matters far more than the dollar amount, and what should actually give you pause: whether you made an informed choice about this exchange.
The Information You’re Disclosing
Think about what you ask AI in a typical week:
“How do I handle a direct report who’s underperforming?”
“What’s a reasonable salary range for a VP of something in Washington?”
“Help me prepare talking points for pushing back on my CEO’s strategy.”
This is sensitive professional intelligence. Career anxieties. Salary expectations. Strategic disagreements with your own leadership. Vulnerabilities you’d hesitate to share even with people close to you.
And it’s sitting on a server somewhere, potentially used for training and possibly reviewed by humans.
You Signed a Contract Unknowingly
You understand that “free” usually means you are the product.
Every free AI tool (and even the paid ones) is a negotiation where only one party shows up. The terms are written in legalese designed to be skipped: dense, and seemingly unreadable by design.
Here’s the parallel to your day job: If your employer asked you tomorrow to work nights and weekends “for exposure and learning opportunities,” you’d push back. You’d negotiate.
You might even walk.
But this is exactly the arrangement you have with every AI company. The difference is that they made the exchange invisible.
The Targeting That Follows
Beyond the immediate data collection, every time you express a want (a new job, a purchase decision, a problem you need solved), that’s a signal.
You become more targetable with every conversation, and as advertising and shopping move into these products, the value will climb well past today’s estimate. To advertisers. To recruiters. To anyone willing to pay for access to people who’ve expressed specific needs.
Read this article about how AI chatbots are built to harvest your desire:
This explains why OpenAI has been moving toward shopping features and, reportedly, advertising and ad-supported tiers in recent months.
So rather than fixating on the $X a month, consider whether you made an informed choice to share this level of professional and personal detail, or whether that choice was made for you, buried in terms of service you never read.
Which AI Companies Default to Training On Your Data?
Here’s the thing: Not all AI companies handle this the same way.
And if privacy matters to you, these differences matter.
Most of the big US chatbots default to using your conversations for training—you can opt out, but you have to hunt for the setting.
Some European providers require an explicit opt-in before training on chat data, and some even promise to encrypt all chats.
And ironically, China is moving toward rules that require explicit consent before chat logs can be used to improve models. So it’s also opt-in there, at least on paper (we shall see).
But if you’re using the big names—ChatGPT, Gemini, Claude, Meta AI—you should assume your chats are training data unless you’ve explicitly switched it off.
Here’s my quick take on how the major players compare.
Anthropic (Claude)
Anthropic, the company behind Claude, stood out as one of the few major chatbot providers that did NOT use conversations for training by default. However, under updates to its Consumer Terms and Privacy Policy in late 2025, your conversations ARE used for training unless you explicitly opt out.
New users now choose during signup whether to allow training, which still makes Anthropic the most transparent of the big names.
If you do opt in, however, your data is retained for up to five years, so the choice has real consequences.
OpenAI (ChatGPT)
OpenAI is more opaque.
OpenAI doesn’t ask during sign-up whether you’d like to opt out of training, and it uses your conversations for training by default. You have to find the setting and turn it off.
And here’s the concerning part: if you give thumbs-up or thumbs-down feedback, that entire conversation may be used for training—even if you opted out of general training.
The setting exists. But you have to know where to look.
Google (Gemini)
Google’s Gemini uses your data when “Gemini Apps Activity” is on, which is the default. Human reviewers may read your conversations.
You can turn it off. But again, the default assumes consent, and data from previous chats can be kept (disconnected from your account, but kept) for up to three years.
While most people don’t care, I’m guessing you will.
What Changes Now That You Know This?
The question is simpler than deleting your account or joining a movement: now that you know this, does anything change?
I see three possible responses.
First: Nothing changes. You’ve decided the trade is fair: free access to powerful tools in exchange for your labor and data. That’s a legitimate choice, as long as it’s an informed one.
Second: You become more selective. You treat your AI interactions differently. Maybe you use the opt-out settings. Maybe you’re more thoughtful about what you share, especially career-sensitive or personal information. Maybe you keep a separate account for queries you’d rather not have associated with your identity.
Third: You start asking which companies at least acknowledge this exchange honestly. Opt-in versus opt-out isn’t just a technical detail. It reflects whether a company thinks your informed consent actually matters.
So keep using AI if you already do and find it valuable. I use these tools too.
We should, though, talk about how to estimate the economic value of AI to you... but that’s probably a topic for another time.
But at minimum, now you understand what you’re actually trading away.
What you do with that is up to you.
All my research, reference data, and the opt-out instructions are in the article linked at the end.
If you found this useful, let me know in the comments which AI company you think is most transparent about this stuff.
I’m curious what you think.
Opt-Out Instructions
ChatGPT: Profile icon → Settings → Data Controls → turn OFF “Improve the model for everyone”. Or visit: privacy.openai.com
Note: OpenAI also confirms, in its current consumer privacy page, that ChatGPT users can control whether they contribute to “future model improvements” via in‑product settings, and that API / Enterprise / Teams data is not used for training by default.
Claude: Settings → Privacy (or Data & Privacy) → Model training → toggle OFF use of your chats/coding data for training. Or visit: https://www.anthropic.com/news/updates-to-our-consumer-terms
Note: As of Sept 2025, Claude trains on consumer chats by default unless you opt out. Enterprise / Work / Education / API plans are excluded.
Gemini: Go to https://myactivity.google.com/product/gemini or open Gemini → Settings & help → Activity, then turn OFF “Keep Activity”.
Note: When “Keep Activity” is off, Google still retains chats for up to 72 hours for operational reasons, but they’re not stored in your account or used for long‑term training.