AI Skill Now

A web-based AI agent that automatically collects, processes, and publishes the latest research and news about AI's impact on jobs and skills • Last updated April 14, 2026 • 04:24 CST

📊 Today's Data Collection

News items: 44 articles gathered
Research papers: 6 papers fetched
Highlights: 5 top items
US Layoff News: 20 events tracked (past 3 days)
AI Tools: 10 newest tools curated
Flagship research: 10 papers featured
Total sources: 10 data feeds processed

⭐ Highlights

🏢 US AI Layoff News - Past 3 Days

📰 AI Jobs & Skills News

A people-first vision for the future of work in the age of AI

Brookings • 2026-03-25
… a national Workers First AI Summit to ensure workers help shape the policies governing AI in the workplace. The summit arrives at a moment when debates about AI and jobs often center on a single question: Will AI destroy jobs, change jobs, create new jobs, or leave work largely unchanged? There are doomsayers and champions …

Working to advance the nuclear renaissance

MIT News • 2026-04-03
Dean Price, assistant professor in the Department of Nuclear Science and Engineering, sees a bright future for nuclear power, and believes AI can help us realize that vision.

📄 Research Papers & Policy Reports

Does Generative AI Narrow Education-Based Productivity Gaps? Evidence from a Randomized Experiment

NBER Working Paper (Guillermo Cruces, Diego Fernández Meijide, Sebastian Galiani, Ramiro H. Gálvez, María Lombardi) • Published: February 2026
Does generative artificial intelligence (AI) reinforce or reduce productivity differences across workers? Existing evidence largely studies AI within firms and occupations, where organizational selection compresses educational heterogeneity, leaving unclear whether AI narrows productivity gaps across individuals with substantially different levels of formal education. We address this question using a randomized online experiment conducted outside firms, in which 1,174 adults aged 25–45 with heterogeneous educational backgrounds complete an incentivized, workplace-style business problem-solving task. The task is a general (not domain-specific) exercise, and participants perform it either with or without access to a generative-AI assistant. Unlike prior work that studies heterogeneity within relatively homogeneous worker samples, our design targets the between–education-group productivity gap as the primary estimand. We find that AI increases productivity for all participants, with substantially larger gains for lower-education individuals. In the absence of AI access, higher-education participants outperform lower-education participants by 0.548 standard deviations; with AI access, this gap falls to 0.139 standard deviations, implying that generative AI closes three-quarters of the initial productivity gap. We interpret this pattern as evidence that generative AI narrows effective productivity differences in task execution by relaxing constraints that are more binding for lower-education individuals, even though underlying skill differences remain, as reflected in persistent education gaps in task performance and in a follow-up exercise without AI assistance.
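As a quick arithmetic check on the abstract's headline claim, the reported standard-deviation gaps do imply roughly a three-quarters reduction (both figures are taken directly from the abstract):

```python
# Education-based productivity gaps reported in the abstract (in SD units).
gap_without_ai = 0.548
gap_with_ai = 0.139

# Share of the initial gap closed by AI access.
share_closed = (gap_without_ai - gap_with_ai) / gap_without_ai
print(round(share_closed, 3))  # ≈ 0.746, i.e. about three-quarters
```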

Building Pro-Worker Artificial Intelligence

NBER Working Paper (Daron Acemoglu, David Autor, Simon Johnson) • Published: February 2026
This paper defines pro-worker technologies, including Artificial Intelligence, as technologies that make human skills and expertise more valuable by expanding worker capabilities. Our conceptual framework distinguishes among five categories of technological change: labor-augmenting, capital-augmenting, automating, expertise-leveling, and new task-creating. Only the last category is unambiguously pro-worker, generating demand for novel human expertise rather than commodifying it. We illustrate these distinctions through hypothetical and real-world examples spanning aviation maintenance, electrical services, custodial work, education, patent examination, and gig delivery. While AI’s capacity to automate work is substantial, we argue that its potential to serve as a collaborator, by extending human judgment, enabling new tasks, and accelerating skill acquisition, is equally transformative and currently underexploited. We identify market failures, including misaligned firm and developer incentives, path dependence, and a pervasive pro-automation ideology, that may lead to underinvestment in pro-worker AI. We consider nine policy directions that would change incentives, including targeted investments in health care and education, tax code reform, antitrust enforcement, and intellectual property protections for worker expertise.

Chaining Tasks, Redefining Work: A Theory of AI Automation

NBER Working Paper (Mert Demirer, John J. Horton, Nicole Immorlica, Brendan Lucier, Peyman Shahidi) • Published: February 2026
Production is a sequence of steps that can be executed (1) manually, (2) augmented with AI, or (3) fully automated within contiguous AI-executed steps called “chains.” Firms optimally bundle steps into tasks and then jobs, trading off specialization gains against coordination costs. We characterize the optimal assignment of humans and AI to steps and the firm’s resulting job structure, showing that comparative advantage logic can fail with AI chaining. The model implies non-linear productivity gains from AI quality improvements and admits a CES representation at the macro level. Empirical evidence supports the model’s key predictions that (1) AI-executed steps co-occur in chains, (2) dispersion of AI-exposed steps lowers AI execution at the job level, and (3) adjacency to AI-executed steps increases the likelihood that a step is AI-executed.
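The paper's notion of a "chain" (a maximal run of contiguous AI-executed steps) can be illustrated with a minimal sketch. The function name and the "AI"/"human" step encoding are ours for illustration, not the paper's:

```python
def ai_chains(steps):
    """Group contiguous AI-executed steps into chains (illustrative sketch)."""
    chains, current = [], []
    for i, executor in enumerate(steps):
        if executor == "AI":
            current.append(i)       # extend the current chain
        elif current:
            chains.append(current)  # a human step breaks the chain
            current = []
    if current:
        chains.append(current)
    return chains

# e.g. ai_chains(["AI", "AI", "human", "AI"]) -> [[0, 1], [3]]
```

In this toy encoding, the model's prediction that "adjacency to AI-executed steps increases the likelihood that a step is AI-executed" amounts to chains tending to grow rather than fragment.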

Public Finance in the Age of AI: A Primer

NBER Working Paper (Anton Korinek, Lee Lockwood) • Published: February 2026
Transformative artificial intelligence (TAI) - machines capable of performing virtually all economically valuable work - may gradually erode the two main tax bases that underpin modern tax systems: labor income and human consumption. We examine optimal taxation across two stages of artificial intelligence (AI)-driven transformation. First, if AI displaces human labor, we find that consumption taxation may serve as a primary revenue instrument, with differential commodity taxation gaining renewed relevance as labor distortions lose their constraining role. In the second stage, as autonomous artificial general intelligence (AGI) systems both produce most economic value and absorb a growing share of resources, taxing human consumption may become an inadequate means of raising revenue. We show that the taxation of autonomous AGI systems can be framed as an optimal harvesting problem and find that the resulting tax rate on AGI depends on the rate at which humans discount the future. Our analysis provides a theoretically grounded approach to balancing efficiency and equity in the Age of AI. We also apply our insights to evaluate specific proposals such as taxes on robots, compute, and tokens, as well as sovereign wealth funds and windfall clauses.

Minimum Wages and Rise of the Robots

NBER Working Paper (Erik Brynjolfsson, J. Frank Li, Javier Miranda, Robert Seamans, Andrew J. Wang) • Published: March 2026
This paper studies how minimum wage policy affects firms’ adoption of automation technologies. Using both state-level measures of robot exposure and novel plant-level data on industrial robot imports linked to U.S. Census microdata from 1992–2021, we show that increases in minimum wages raise the likelihood of robot adoption in manufacturing. Our preferred identification exploits discontinuities at state borders, comparing otherwise similar firms exposed to different wage floors. Across specifications, a 10 percent increase in the minimum wage increases robot adoption by roughly 8 percent relative to the mean.

Machine Learning Meets Markowitz

NBER Working Paper (Yijie Wang, Hao Gao, Campbell R. Harvey, Yan Liu, Xinyuan Tao) • Published: February 2026
The standard approach to portfolio selection involves two stages: forecast the asset returns and then plug them into an optimizer. We argue that this separation is deeply problematic. The first stage treats cross-sectional prediction errors as equally important across all securities. However, given that final portfolios might differ given distinct risk preferences and investment restrictions, the standard approach fails to recognize that the investor is not just concerned with the average forecast error - but the precision of the forecasts for the specific assets that are most important for their portfolio. Hence, it is crucial to integrate the two stages. We propose a novel implementation utilizing machine learning tools that unifies the expected return generation process and the final optimized portfolio. Our empirical example provides convincing evidence that our end-to-end method outperforms the traditional two-stage approach. In our framework, each investor has their own, endogenously determined, efficient frontier that depends on risk preferences, investor-specific constraints, as well as exposure to market frictions.
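To make "two-stage" concrete, here is a minimal NumPy sketch of the traditional pipeline the authors critique: stage one produces return forecasts, stage two plugs them into a mean-variance optimizer. The numbers are invented for illustration, and a closed-form tangency-style solution stands in for a full constrained optimizer:

```python
import numpy as np

# Stage 1: forecast expected returns for three hypothetical assets.
mu_hat = np.array([0.08, 0.05, 0.03])

# Assumed covariance matrix of asset returns.
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.03, 0.01],
                  [0.00, 0.01, 0.02]])

# Stage 2: plug the forecasts into a mean-variance optimizer.
# Unconstrained tangency-style weights are proportional to Sigma^{-1} mu.
raw = np.linalg.solve(sigma, mu_hat)
weights = raw / raw.sum()   # normalize to a fully invested portfolio
# weights ≈ [0.500, 0.182, 0.318]
```

The separation the paper objects to is visible here: stage one minimizes forecast error uniformly across assets, even though the optimizer in stage two weights some assets far more heavily than others.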

GPT as a Measurement Tool

NBER Working Paper (Hemanth Asirvatham, Elliott Mokski, Andrei Shleifer) • Published: February 2026
We present the GABRIEL software package, which uses GPT to quantify attributes in qualitative data (e.g. how “pro innovation” a speech is). GPT is evaluated on classification and attribute rating performance against 1000+ human annotated tasks across a range of topics and data. We find that GPT as a measurement tool is accurate across domains and generally indistinguishable from human evaluators. Our evidence indicates that labeling results do not depend on the exact prompting strategy used, and that GPT is not relying on training data contamination or inferring attributes from other attributes. We showcase the possibilities of GABRIEL by quantifying novel and granular trends in Congressional remarks, social media toxicity, and county-level school curricula. We then apply GABRIEL to study the history of tech adoption, using it to assemble a novel dataset of 37,000 technologies. Our analysis documents a tenfold decline of time lags from invention to adoption over the industrial age, from ~50 years to ~5 years today. We quantify the increasing dominance of companies and the U.S. in innovation, alongside characteristics that explain whether a technology will be adopted slowly or speedily.

AI, Human Cognition and Knowledge Collapse

NBER Working Paper (Daron Acemoglu, Dingwen Kong, Asuman Ozdaglar) • Published: March 2026
We study how generative AI, and in particular agentic AI, shapes human learning incentives and the long-run evolution of society’s information ecosystem. We build a dynamic model of learning and decision-making in which successful decisions require combining shared, community-level general knowledge with individual-level, context-specific knowledge; these two inputs are complements. Learning exhibits economies of scope: costly human effort jointly produces a private signal about their own context and a “thin” public signal that accumulates into the community’s stock of general knowledge, generating a learning externality. Agentic AI delivers context-specific recommendations that substitute for human effort. By contrast, a richer stock of general knowledge complements human effort by raising its marginal return. The model highlights a sharp dynamic tension: while agentic AI can improve contemporaneous decision quality, it can also erode learning incentives that sustain long-run collective knowledge. When human effort is sufficiently elastic and agentic recommendations exceed an accuracy threshold, the economy can tip into a knowledge-collapse steady state in which general knowledge vanishes ultimately, despite high-quality personalized advice. Welfare is generally non-monotone in agentic accuracy, implying an interior, welfare-maximizing level of agentic precision and motivating information-design regulations. In contrast, greater aggregation capacity for general knowledge—meaning more effective sharing and pooling of human-generated general knowledge—unambiguously raises welfare and increases resilience to knowledge collapse.

ClawGuard: A Runtime Security Framework for Tool-Augmented LLM Agents Against Indirect Prompt Injection

arXiv AI & Jobs Research (Wei Zhao, Zhe Li, Peixin Zhang, Jun Sun) • Published: 2026-04-13
Tool-augmented Large Language Model (LLM) agents have demonstrated impressive capabilities in automating complex, multi-step real-world tasks, yet remain vulnerable to indirect prompt injection. Adversaries exploit this weakness by embedding malicious instructions within tool-returned content, which agents directly incorporate into their …

Towards Automated Pentesting with Large Language Models

arXiv AI & Jobs Research (Ricardo Bessa, Rui Claro, João Trindade, João Lour) • Published: 2026-04-13
Large Language Models (LLMs) are redefining offensive cybersecurity by allowing the generation of harmful machine code with minimal human intervention. While attackers take advantage of dark LLMs such as XXXGPT and WolfGPT to produce malicious code, ethical hackers can follow similar approaches to automate traditional pentesting workflows. …

Dual-Control Frequency-Aware Diffusion Model for Depth-Dependent Optical Microrobot Microscopy Image Generation

arXiv AI & Jobs Research (Lan Wei, Zongcai Tan, Kangyi Lu, Jian-Qing Zheng, et al.) • Published: 2026-04-13
Optical microrobots actuated by optical tweezers (OT) are important for cell manipulation and microscale assembly, but their autonomous operation depends on accurate 3D perception. Developing such perception systems is challenging because large-scale, high-quality microscopy datasets are scarce, owing to complex fabrication processes and …

Playing Along: Learning a Double-Agent Defender for Belief Steering via Theory of Mind

arXiv AI & Jobs Research (Hanqi Xiao, Vaidehi Patil, Zaid Khan, Hyunji Lee, et al.) • Published: 2026-04-13
As large language models (LLMs) become the engine behind conversational systems, their ability to reason about the intentions and states of their dialogue partners (i.e., form and use a theory-of-mind, or ToM) becomes increasingly critical for safe interaction with potentially adversarial partners. We propose a novel privacy-themed ToM challenge, …

RationalRewards: Reasoning Rewards Scale Visual Generation Both Training and Test Time

arXiv AI & Jobs Research (Haozhe Wang, Cong Wei, Weiming Ren, Jiaming Liu, et al.) • Published: 2026-04-13
Most reward models for visual generation reduce rich human judgments to a single unexplained score, discarding the reasoning that underlies preference. We show that teaching reward models to produce explicit, multi-dimensional critiques before scoring transforms them from passive evaluators into active optimization tools, improving generators …

GeomPrompt: Geometric Prompt Learning for RGB-D Semantic Segmentation Under Missing and Degraded Depth

arXiv AI & Jobs Research (Krishna Jaganathan, Patricio Vela) • Published: 2026-04-13
Multimodal perception systems for robotics and embodied AI often assume reliable RGB-D sensing, but in practice, depth is frequently missing, noisy, or corrupted. We thus present GeomPrompt, a lightweight cross-modal adaptation module that synthesizes a task-driven geometric prompt from RGB alone for the fourth channel of a frozen RGB-D semantic …

📊 Flagship Research Papers

ARC-AGI-3: A New Challenge for Frontier Agentic Intelligence

arXiv • Published: March 24, 2026
ARC-AGI-3 environments only leverage Core Knowledge priors and are difficulty-calibrated via extensive testing with human test-takers. Testing shows humans can solve 100% of the environments, in contrast to frontier AI systems which, as of March 2026, score below 1% — underscoring a vast remaining gap between human general intelligence and current AI capabilities.

Sam Altman Admits AI Is Killing the Labor-Capital Balance—and Says Nobody Knows What to Do About It

Fortune • Published: March 12, 2026
Sam Altman validated widespread anxieties about the future of employment, admitting the traditional balance between labor and capital is shifting drastically. He noted AI has become a scapegoat for corporate downsizing: "Almost every company that does layoffs is blaming AI, whether or not it really is about AI." While some immediate blame may be misplaced, Altman confirmed the underlying threat to traditional employment is real, warning "the next few years are going to be a painful adjustment" marked by "very intense and uncomfortable debates" over how to reshape society. Google DeepMind's Sir Demis Hassabis similarly forecasts a "kind of new renaissance" but expects a significant shakeout over the next 10 years en route to it.

$OneMillion-Bench: How Far are Language Agents from Human Experts?

arXiv • Published: March 9, 2026
$1M-Bench is a benchmark of 400 expert-curated tasks spanning Law, Finance, Industry, Healthcare, and Natural Science, built to evaluate agents across economically consequential scenarios. It provides a unified testbed for assessing agentic reliability, professional depth, and practical readiness in domain-intensive settings.

Labor Market Impacts of AI: A New Measure and Early Evidence

Anthropic Research • Published: March 6, 2026
Introduces 'observed exposure' — a new measure combining theoretical LLM capability with real-world usage data. Finds computer programmers, customer service reps, and financial analysts most exposed; no unemployment spike yet, but suggestive evidence that hiring of younger workers in exposed occupations has slowed.

Artificial Intelligence and Technological Unemployment

NBER • Published: May 2025
By calibrating to the U.S. data, our model predicts more than threefold improvements in productivity in the "some-AI" steady state, alongside a long-run employment loss of 23%, with half this loss occurring over the initial five-year transition.

The 2025 AI Index Report by Stanford

Stanford HAI • Published: April 7, 2025
Comprehensive annual report tracking AI progress, adoption, and impact across industries and society.

🚀 Newest AI Tools

💼 Workplace & Research Tools

Echo Now AI

Trending

Intelligent Slack summaries to reclaim your day.

Empower teamwork with private AI assistance.

Vectara

Trending

Conversational search for smarter data queries.

💻 Code & Data Tools

New Relic

Trending

AI-powered observability platform that predicts and prevents issues.

DataFlow

New

Easy data preparation with AI-powered operators.

DecisionNode

Open Source

Shared structured memory for all AI coding tools.

🎨 Creative & Communication Tools

TuckMeIn

New

Every night, a new adventure starring your child.

MyCopyBot

Trending

Turn Words Into Cash Flow - In Seconds.

Muses

Trending

Your intelligent AI writing agent for creating content.

🌟 Wildcard

Curated startup ideas with market research and validation insights.

This Week's Trends

This week's AI tools selection emphasizes recent innovations with practical workplace applications. Notable trends include tools enhancing team collaboration and new creative content generation utilities. The wildcard selection reflects a novel approach to business ideation.