Opinion · AI and Work

How to Actually Evaluate Whether AI Will Affect Your Job

Most AI job risk analysis is either panic or reassurance. Generic lists of "safe" and "at risk" jobs miss the only question that matters for your specific situation. This is a practical framework for applying the research to your actual daily work.

April 2026 · Mike Price · 13 min read

Two things are true simultaneously about AI and jobs: the public discourse is generating more heat than light, and the underlying risk is real and unevenly distributed in ways most people have not thought through clearly. The reassurance camp ("AI creates more jobs than it destroys; historically this has always been true") is too comfortable. The panic camp ("AI will replace 40% of jobs by 2030")[10] is too blunt. Neither gives you a useful tool for thinking about your specific situation.

This article is that tool. It draws on the actual research rather than the headlines, and gives you a framework for doing an honest evaluation of your own exposure. Not your job title. Your actual daily work, the specific tasks you spend your time on, the judgment calls you make, and the barriers between those tasks and automation.

The problem

Why most AI job risk analysis fails you

The standard approach to AI job risk analysis works like this: researchers take a list of occupations, score each one on how many of its tasks AI can theoretically perform, and publish a ranked list of "at risk" and "safe" jobs. Telemarketers and data entry clerks rank near the top. Surgeons and clergy rank near the bottom. You look up your job title, find a risk percentage, and either feel reassured or anxious.

The problem is that this approach answers the wrong question. It tells you whether AI can theoretically automate tasks associated with a job title. It does not tell you whether AI will actually displace you, or when, or what specifically makes you more or less exposed within your occupation.

Yale's Budget Lab director Martha Gimbel put it cleanly: "We do not have a good track record of predicting how technological change will play out in the labor market."[1] ATMs were supposed to eliminate bank tellers. Employment of bank tellers increased after ATMs were introduced, because branches became cheaper to operate, so banks opened more of them. Radiologists were supposed to be replaced by AI diagnostic tools. Radiology employment has grown. The predictions were not wrong about what AI could do. They were wrong about how organizations, labor markets, and human behavior adapt around new capabilities.

There is also a critical distinction in the research that most popular coverage glosses over, and it is the most important thing to understand before applying any risk framework to your own situation.

The three terms most people conflate

Exposure: AI could theoretically perform tasks associated with this role. A theoretical estimate. Does not mean displacement will happen.

Task automation: Specific tasks within a job are being automated. The role continues to exist, often with higher output expectations or fewer people performing it. This is what is actually happening at scale right now.

Job displacement: The role itself is eliminated. Headcount is permanently reduced. This is rarer than exposure statistics suggest and slower to materialize than task automation.

Most statistics about AI job risk are measuring exposure. Most of the actual disruption happening right now is task automation. Full displacement is the least common of the three and the one most dependent on factors beyond the tasks themselves.

The reframe

The distinction that changes everything

The most useful reframe in all of the AI labor market research comes from the Goldman Sachs analysis: the question is not "can AI do my job?" It is "what fraction of my actual daily work involves tasks where the correct output can be determined algorithmically from structured inputs?"

That question sounds technical but it is not. You already know the answer for your own work. Think about the last five things you did today. For each one, ask: could someone write a complete rule for how to do this correctly, in advance, without seeing the specific instance? If the answer is yes, that task is automatable in principle. If the answer is no, because the right response depends on context, judgment, relationships, or information that cannot be fully specified in advance, it is not.

A data entry clerk transcribing handwritten forms into a database is doing a task where the correct output can be completely determined by a rule. AI does this better than humans already. A therapist deciding how to respond to a client who just disclosed something unexpected is making a judgment call that cannot be reduced to a rule. No current AI system comes close to replicating it.

Most real jobs contain tasks of both types. The question for your specific situation is the ratio, and where the tasks that make you valuable to your employer fall on that spectrum.

"AI job risk statistics measure exposure at the occupation level. Your actual risk is determined at the task level, by the specific things you spend your time on and how much judgment each one requires."

The framework

The four factors that actually predict risk

Goldman Sachs Research, in its analysis of over 800 occupations, identified four factors that best predict whether AI will displace specific work.[2] Brookings Institution's research on adaptive capacity adds a fifth that the Goldman Sachs analysis underweights.[3] Here are all five, with honest explanations of how to apply each one.

01
Task repetitiveness and codifiability

The single strongest predictor of automation risk. Repetitive tasks with well-defined correct outputs are automatable. Novel tasks requiring judgment about what the correct output even is are not, at least not with current AI.

This is not about whether the task is cognitively demanding. Tax preparation is cognitively demanding and highly automatable, because the rules for what constitutes correct output are fully defined in tax law. Negotiating a difficult conversation with a colleague is not cognitively demanding in the same way, but it is very hard to automate because the correct response depends entirely on context that cannot be pre-specified.
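The codifiability test can be made literal. Tax preparation is automatable precisely because the rule for correct output can be written down in advance. A toy sketch, with invented brackets that are illustrative only and not real tax law, shows what "fully codified" means:

```python
# A fully codifiable task: the correct output is determined entirely by a
# rule specified in advance. These brackets are invented for illustration;
# they are not real tax law.
BRACKETS = [(0, 0.10), (10_000, 0.20), (40_000, 0.30)]  # (lower threshold, marginal rate)

def tax_owed(income: float) -> float:
    """Apply each marginal rate to the slice of income falling in its bracket."""
    owed = 0.0
    for i, (lo, rate) in enumerate(BRACKETS):
        hi = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        owed += max(0.0, min(income, hi) - lo) * rate
    return owed

print(tax_owed(50_000))
```

No instance-specific judgment enters the function: given the structured input, the correct output is fixed before any particular case is seen. Contrast that with the difficult-conversation example, where no such function could be written in advance.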

Where you fall on this factor
High risk: The same task with the same correct output type, repeated across similar cases
Medium risk: Recurring task type, but each instance requires fresh judgment about specifics
Low risk: Every instance is genuinely novel; correct output cannot be predetermined
02
Consequences of errors

AI systems make mistakes. The higher the cost of those mistakes, the greater the pressure to maintain human oversight, which slows displacement even when automation is technically feasible.

An AI writing a first draft of a marketing email can get things wrong with low consequence: a human reviews and corrects it before it goes out. An AI making a diagnostic recommendation that influences surgery cannot get things wrong the same way: the consequence of an error is potentially irreversible harm to a person. The error consequence is not about intellectual difficulty. Data entry is simple and low-consequence for each individual error. Surgery is complex and high-consequence for each individual error.

Importantly, this factor is shifting as AI reliability improves. Tasks that felt too high-consequence to automate in 2022 are now being automated because error rates have dropped enough that the expected cost of AI errors is lower than the expected cost of human errors for the same task. The threshold moves continuously.

Where you fall on this factor
High risk: Errors are caught easily and corrected cheaply before any harm is done
Medium risk: Errors are costly but recoverable; human review remains important
Low risk: Errors can cause irreversible harm to people or significant legal liability
03
Task interconnection and context dependency

AI systems work well on well-defined, discrete tasks. They struggle when the correct execution of one task depends on understanding a complex web of organizational context, relationship history, unstated preferences, and institutional knowledge that cannot be fully captured in a prompt or a dataset.

A customer service representative handling a billing dispute draws on: the customer's history, the company's informal policies, their read of the customer's emotional state, their knowledge of which exceptions are acceptable and which will get escalated, and their judgment about what outcome will actually satisfy the customer rather than just technically resolving the ticket. Each of these is a separate input that an AI can approximate but cannot reliably synthesize in the way an experienced human does.

The more your work is embedded in a web of organizational context and human relationships that does not fully live in any document or database, the less automatable it is.

Where you fall on this factor
High risk: Task is self-contained; all relevant information is in the input
Medium risk: Context matters, but most of it can be documented or queried
Low risk: Correct execution requires deep organizational, relational, or situational knowledge that cannot be fully specified
04
The value of AI-exposed tasks relative to your total compensation

This is the most underrated factor. If the tasks that AI can automate represent a small fraction of why you are paid what you are paid, the automation of those tasks may actually benefit you (more time for high-value work) rather than threaten you. If those tasks represent most of what your employer is paying for, the calculus is different.

A senior lawyer may spend 30% of their time on tasks that AI can automate: legal research, first-draft contract review, discovery document screening. But the other 70% of their time (strategic judgment, client relationship management, courtroom performance) is what determines their compensation. Automating the 30% improves their efficiency. It does not make them redundant.

A paralegal at the same firm may spend 80% of their time on exactly those automatable tasks. The calculus is completely different, even though they work in the same office on the same cases.

Where you fall on this factor
High risk: The tasks AI can automate account for most of what your employer pays you to do
Medium risk: AI-automatable tasks are a significant but not dominant part of your role's value
Low risk: AI-automatable tasks are a small fraction of your total value contribution
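The lawyer/paralegal contrast in this factor is ultimately simple arithmetic: weight each cluster of tasks by the share of your time (or value) it represents and sum the automatable share. A hypothetical back-of-envelope sketch, with time shares taken from the examples above rather than from measured data:

```python
# Back-of-envelope: fraction of a role's value sitting in AI-automatable
# tasks. Time shares and automatable flags are illustrative, not measured.

def exposed_value_share(tasks: list[tuple[float, bool]]) -> float:
    """tasks: (share_of_time, automatable) pairs; shares should sum to ~1.0."""
    return sum(share for share, automatable in tasks if automatable)

senior_lawyer = [
    (0.30, True),   # research, first-draft review, discovery screening
    (0.70, False),  # strategy, client relationships, courtroom work
]
paralegal = [
    (0.80, True),   # document review, research, filing, first drafts
    (0.20, False),  # client-facing and case-strategy support
]

print(exposed_value_share(senior_lawyer))  # 0.3
print(exposed_value_share(paralegal))      # 0.8
```

Same office, same cases, completely different exposed-value shares. That ratio, not the job title, is what this factor measures.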
05
Your adaptive capacity

Brookings Institution's research adds a dimension that the Goldman Sachs analysis underweights: even high-exposure workers are not equally at risk, because people differ substantially in their capacity to adapt if disruption occurs. Financial security, transferable skills, professional networks, geographic mobility, educational credentials, and age all influence how well someone can navigate a career disruption even when the disruption itself is real.

This is the factor that makes the AI disruption most unequal. The Brookings analysis found roughly 6.1 million US workers in the highest-risk category: high AI exposure combined with low adaptive capacity, concentrated in administrative and clerical roles.[3] These workers face the same task automation as high-earning knowledge workers in exposed fields, but with far fewer resources to navigate the transition.

Adaptive capacity is not destiny. It is a realistic reading of your margin for error if your primary role does change materially. Someone with significant savings, transferable skills, and a strong professional network has more time and options to adapt than someone without those resources; someone without them has less room, and more reason to act early. Both assessments are honest, and both are worth knowing.

Where you fall on this factor
Lower capacity: Limited savings, narrow skills, smaller network, reduced geographic flexibility
Moderate capacity: Some financial runway, some transferable skills, active professional network
Higher capacity: Meaningful financial cushion, broad transferable skills, strong network, geographic flexibility
Self-assessment

The self-audit: applying this to your role

Work through the five factors above honestly, placing yourself on each rubric. The result is not a score to panic about. It is an accurate picture of where you are exposed and where you are not, so you can make deliberate decisions about what to build toward.
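One way to keep the audit honest is to write the scoring down. The sketch below is entirely illustrative: the factor names, the 0–2 scale, and the band thresholds are my assumptions, not anything from the Goldman Sachs or Brookings research. But it makes the exercise concrete:

```python
# Toy self-audit sketch: NOT from the cited research. Score each factor
# 0 (low-risk end of its rubric) to 2 (high-risk end); the bands below
# are illustrative thresholds, not research-backed cutoffs.

FACTORS = [
    "task_repetitiveness",     # 2 = same task, same correct output, repeated
    "error_consequence",       # 2 = errors caught easily, corrected cheaply
    "context_dependency",      # 2 = self-contained; all relevant info in the input
    "share_of_value_exposed",  # 2 = automatable tasks are most of what you're paid for
    "adaptive_capacity",       # 2 = limited savings, narrow skills, small network
]

def risk_profile(scores: dict[str, int]) -> str:
    """Summarize a five-factor self-audit into a rough band."""
    missing = [f for f in FACTORS if f not in scores]
    if missing:
        raise ValueError(f"score every factor; missing: {missing}")
    total = sum(scores[f] for f in FACTORS)  # ranges 0..10
    if total >= 7:
        return "high"
    if total >= 4:
        return "medium"
    return "low"

# A paralegal-like profile, loosely following the article's discussion
paralegal_like = {
    "task_repetitiveness": 2,
    "error_consequence": 1,
    "context_dependency": 1,
    "share_of_value_exposed": 2,
    "adaptive_capacity": 1,
}
print(risk_profile(paralegal_like))  # high
```

The point is not the number. It is that writing scores down forces you to commit to a position on every factor instead of reasoning from your job title.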

Applied examples

How this plays out across real roles

The same framework applied to different roles produces very different conclusions, even within the same industry. This table works through several examples to show how the factors interact.

Role | Primary exposure | Primary protection | Risk | Honest assessment
Data entry clerk | Tasks are repetitive, fully codifiable, low error consequence, self-contained | None significant; all four factors point toward high risk | High | The clearest high-risk category in the research. Already automating rapidly.
Junior software developer | Boilerplate code, debugging well-defined errors, documentation | Complex system design, novel problems, understanding existing codebase context | Medium | Goldman Sachs data shows employment for workers aged 22–25 in AI-exposed tech roles fell nearly 20% since early 2025.[2]
Radiologist | Routine scan interpretation where patterns are well established | Complex cases, patient communication, interdisciplinary judgment, error consequence | Mixed | Predicted to be eliminated in 2016. Hasn't happened. Error consequence and complex case judgment remain protective.
Marketing manager | Report generation, basic content, campaign setup, standard optimization | Brand strategy, stakeholder management, novel creative direction, commercial judgment | Medium | The execution tasks are automating; the judgment and relationship tasks are not. The role is restructuring, not disappearing.
Elementary school teacher | Information delivery, standard assessment, some administrative work | Relationship with specific children, behavioral reading, socialization role, error consequence | Lower | High human context dependency. The relational and developmental role is genuinely hard to automate.
Paralegal | Document review, legal research, filing, first-draft contract work | Client-facing work, complex case strategy support, attorney relationship | High | The core tasks are exactly what legal AI is designed for. Already happening in large firms.
Plumber | Very little | Physical-world problem solving in variable environments: physical dexterity, embodied judgment, customer interaction | Lower | Consistently one of the least exposed categories. Physical-world variability is a strong protection.
Content writer (SEO) | Bulk content production, standard keyword optimization, templated formats | Original research, genuine subject matter expertise, editorial judgment, brand voice | High | Commodity content production is already largely automated. Original-insight content from genuine expertise is not.
What gets missed

The second-order risks most people miss

The framework above addresses direct risk: will AI automate the tasks in your role? But there are two second-order risks that are equally real and get almost no attention in the popular discourse.

The demand compression risk

Your job may not be directly automatable, but the industry or organization that employs you may shrink because AI automated the work of your colleagues. A senior strategy consultant at a firm that uses AI to do the research and analysis work that used to require 10 junior analysts is still employed, but the firm is smaller, growing more slowly, and hiring less than it used to. The career ladder is shorter. The opportunities for advancement are fewer. The compensation growth that came from expanding the practice is limited.

This is not the same as losing your job. But it is a real consequence of AI disruption that the "AI won't replace knowledge workers" reassurance does not capture. The structural environment around you changes even when your specific role does not.

The entry-level collapse risk

This is the risk that nobody is talking about honestly. The traditional career path in most knowledge work professions runs through a junior tier that involves a lot of the work AI is automating fastest: document review, research aggregation, first drafts, data analysis, report preparation. When those entry-level roles shrink because AI handles the work, the pipeline for the next generation of senior practitioners compresses.

As Thomas Davenport, professor at Babson College and author of The AI Advantage, put it: "If companies don't hire entry-level workers today, how do you get experienced workers tomorrow? We still haven't figured that out."[4] If you are mid-career, this may not affect you directly. If you are early-career or entering a field now, it is the most important risk to understand. The training ground is changing faster than anyone has a good answer for.

The demographic reality worth knowing

Brookings found that roughly 86% of workers in the highest-risk administrative and clerical categories are women.[3] Goldman Sachs data shows that employment for workers aged 22 to 25 in AI-exposed tech roles fell nearly 20% since early 2025.[2] Entry-level job postings overall have declined approximately 35% since early 2023, according to Revelio Labs data cited by CNBC.[5] The disruption is not randomly distributed. It is concentrated among younger workers and administrative support roles, which have historically been disproportionately held by women. These are structural facts about who bears the cost of the transition, not abstractions about employment statistics.

Action

What to do with an honest assessment

An honest assessment of your AI exposure, like the one from the self-audit above, is only useful if it informs action. Here is what the research suggests is actually worth doing, separated from the generic advice that gets recycled endlessly.

If your assessment says high risk on multiple factors

The useful response is not to wait and hope the disruption is slower than expected. It is to deliberately shift the ratio of your work toward the tasks that score lower risk on the framework: more judgment, more context dependency, more consequence-bearing work. In practice this usually means moving toward client-facing roles, strategic functions, or roles that require managing the AI systems rather than executing the tasks they automate. The window to make that shift proactively is shorter than it feels.

If your assessment says medium risk

The honest response is to identify which specific tasks in your role score high risk and build AI fluency for those tasks specifically, not general "AI literacy" in the abstract. If you understand the tools that are automating your most exposed tasks, you become the person who manages and improves those tools rather than the person those tools replace. PwC's research found that workers with demonstrated AI skills earn 56% more than peers in the same roles without them.[6] The gap is real and the window to develop it is now.

If your assessment says low risk

Low direct automation risk does not mean no AI relevance. The tools available to people in low-risk roles are still improving your leverage. A therapist who uses AI to reduce their administrative burden does more therapy per hour. A plumber who uses AI to optimize their scheduling and quoting spends more time on billable work. The frame shifts from "will AI take my job?" to "how do I use AI to do more of the high-value parts of my job?"

On adaptive capacity specifically

Whatever your task-level risk assessment shows, building adaptive capacity is worth the investment independently. Financial cushion, transferable skills, and a genuine professional network are valuable whether or not AI disrupts your specific role, because they reduce the cost of any career transition, voluntary or not. The Brookings research framing is useful here: exposure and adaptive capacity are two different dimensions. High exposure with high adaptive capacity is a manageable situation. High exposure with low adaptive capacity is the genuinely difficult one.

FAQ

Frequently asked questions

How accurate are the AI job displacement statistics I keep seeing?

They are mostly measuring the right things but being interpreted incorrectly. The statistics from Goldman Sachs,[2] McKinsey,[7] WEF,[8] and Brookings[3] are based on genuine research and are broadly credible as estimates of task-level exposure. They are not predictions of the timeline or magnitude of actual job losses, which depend on adoption rates, economic conditions, policy responses, and organizational behavior that no model fully captures. The Yale Budget Lab is the most honest about this uncertainty:[1] current data shows no measurable evidence that AI is putting Americans as a whole out of work yet, even as the theoretical exposure numbers are high. Both things are true and both are worth holding simultaneously.

My job title appears on a "high risk" list. Should I be worried?

Apply the four-factor framework to your actual tasks rather than your title. Job titles are blunt instruments for measuring AI risk. Two people with the same job title at different organizations can have completely different risk profiles depending on what they actually spend their time doing. A "marketing manager" who spends most of their time on strategic brand decisions and stakeholder management is in a different position from a "marketing manager" who spends most of their time on campaign reporting and content production. The title is the same. The exposure is not.

Is it too late to make a career pivot if my role is high risk?

For most people reading this in 2026, no. Forrester's analysis of the actual displacement timeline projects 6.1% net US job loss from AI and automation by 2030, with the most significant disruption concentrated in the 2026 to 2028 window for the most exposed roles.[9] People who are currently in high-exposure roles and who start building toward lower-exposure capabilities now have time to make that transition ahead of the steepest part of the disruption curve. The people who will find it most difficult are those who wait until the disruption is fully visible and obvious before responding, at which point they are competing for fewer opportunities with more people making the same transition simultaneously.

What should I do if I cannot easily move to a lower-risk role?

Focus on adaptive capacity rather than task-level risk reduction. If a significant career pivot is not feasible in the short term, the most useful investments are in the factors that reduce the cost of any disruption: building financial cushion, actively maintaining a professional network outside your current employer, and developing skills that transfer across roles and industries. The Brookings research is clear that adaptive capacity is as important as exposure level in determining actual outcomes. High exposure with strong adaptive capacity is a manageable position. The investments that build adaptive capacity pay off regardless of whether your specific role is disrupted.

Will learning to use AI tools protect my job?

It depends on what "learning to use AI tools" means in practice. Casually using ChatGPT to draft emails provides very limited protection. Genuinely understanding the tools that are automating your most exposed tasks, learning to evaluate their outputs critically, building workflows that use AI for the parts it does well while preserving your judgment for the parts it does not: that version of AI fluency provides real protection, and real compensation premium. PwC's AI Jobs Barometer found that workers with advanced AI skills earn 56% more than peers in the same roles without them.[6] That figure is not measuring people who have used AI occasionally. It is measuring people who have integrated AI into their work in ways that genuinely change their output quality and speed. That is a meaningful distinction.

Sources and references

[1] Yale Budget Lab — Gimbel, M. et al. "Evaluating the Impact of AI on the Labor Market: Current State of Affairs." Budget Lab at Yale University, 2025. budgetlab.yale.edu
[2] Goldman Sachs Research — Briggs, J. & Dong, T. "How Will AI Affect the Global Workforce?" Goldman Sachs, August 2025. Analysis covers 800+ occupations across task repetitiveness, error consequences, task interconnection, and wage value relative to AI-exposed tasks. goldmansachs.com
[3] Brookings Institution — "Measuring US Workers' Capacity to Adapt to AI-Driven Job Displacement." Brookings, February 2026. Introduces the adaptive capacity framework alongside AI exposure, identifying 6.1 million high-exposure, low-adaptive-capacity workers concentrated in administrative and clerical roles. brookings.edu
[4] Babson College / Thomas Davenport — "AI, Jobs, and Uncertainty: A Leading Expert Weighs In." Babson Thought & Action, March 2026. Davenport is President's Distinguished Professor of Information Technology at Babson and author of The AI Advantage (MIT Press). entrepreneurship.babson.edu
[5] Revelio Labs / CNBC — Entry-level job posting decline of approximately 35% since January 2023, cited in CNBC reporting on AI's impact on early-career hiring. Goldman Sachs corroborated with data showing unemployment among 20–30 year olds in tech-exposed occupations rose nearly 3 percentage points since early 2025.
[6] PwC — "AI Jobs Barometer 2025." PricewaterhouseCoopers. Workers with advanced AI skills earned on average 56% more than peers in equivalent roles. Productivity growth in industries most exposed to AI has nearly quadrupled since 2022. pwc.com
[7] McKinsey Global Institute — "The State of AI 2025." McKinsey & Company, late 2025. Estimates current technology could theoretically automate approximately 57% of US work hours. More than 70% of companies expect to reskill at least 11% of their workforce within three years. mckinsey.com
[8] World Economic Forum — "Future of Jobs Report 2025." WEF. Projects 92 million roles displaced by 2030 and 170 million new roles emerging (net gain of 78 million). 41% of employers plan workforce reductions in AI-automatable areas within five years; employers expect 39% of workers' core skills to change by 2030. weforum.org
[9] Forrester Research — "The Forrester AI Job Impact Forecast for the US 2025–2030." Forrester, January 2026. Projects 6.1% net US job loss (approximately 10.4 million positions) by 2030 from AI and automation. Separately, "Agency AI-Powered Workforce Forecast 2030" projects 32,000 US advertising agency jobs lost to automation (7.5% of the agency workforce) by 2030. forrester.com
[10] IMF — Jaumotte, F. et al. "Bridging Skill Gaps for the Future: New Jobs Creation in the AI Age." IMF Staff Discussion Note SDN/2026/001. International Monetary Fund, 2026. Finds 40% of global jobs exposed to AI-driven change. imf.org
[11] Harvard Business Review / MIT — "Research: How AI Is Changing the Labor Market." HBR, March 2026. Reviews evidence that generative AI is reshaping, not uniformly erasing, white-collar work. hbr.org
[12] Washington Post / GovAI — "See which jobs are most threatened by AI and who may be able to adapt." Washington Post, March 2026. Draws on research by Sam Manning (GovAI) and Tomás Aguirre. washingtonpost.com

If you found this useful, share it with someone who needs it

This kind of analysis is most valuable when it reaches people making real decisions about their careers. If someone you know is anxious about AI and their job, this framework gives them something more honest to work with.
