The use of artificial intelligence in everyday life has exploded in recent years — from internet searches to writing code and drafting emails to image and video generation. According to a 2025 survey by Pew Research Center, 62% of adults in the U.S. said they interact with AI at least several times a week. Half said they interact with it multiple times a day.
One of the most common forms of the technology is generative AI, which reacts to a user’s prompt and creates an output, such as an image or text. Also increasingly popular is agentic AI, so named for the “agency” given to AI agents: autonomous, self-directed systems that work proactively, making decisions and acting unprompted.
These AI agents may have a lot to offer, but the question people often ask is, “At what cost?”
In her latest research, Dr. Xiahua Wei explores the risks and opportunities of agentic AI.
Fostering an interdisciplinary mindset
Wei first became interested in artificial intelligence as a college student in the early 2000s when the technology was still in its infancy. She was fascinated and curious about its potential.
Now, as an associate professor at the University of Washington Bothell’s School of Business, Wei applies her knowledge and skills as an econometrician to research the economics of information technology.
“As someone who has training in economics and an interest in the market itself — but also a love for the technology that drives our society — I appreciate how UW Bothell values interdisciplinary research,” she said.
Wei brings that same value into the classroom, where her students are a mix from different majors and schools across the campus. “It’s exciting to see the creative ideas they have when I provide them with the level of comfort to share that interdisciplinary perspective.”
As a researcher, her interest in agentic AI evolved from earlier work on a paper addressing bias in generative AI. In that paper, Wei and her co-authors showed how large language models perpetuate biases in their training data and algorithms, risking biased and unfair outcomes in business decision-making ranging from discriminatory hiring practices to inequitable health care.
Keeping up with the risks, or not
Wei’s newer research followed trends in how generative AI was being used — including the growing popularity of AI agents — as well as shifts in governance in response.
“As technology becomes a commodity, how it’s leveraged becomes more critical,” she said. “Like all technologies, AI comes with a lot of benefit but also areas that we need to pay more attention to.”
Unfortunately, she added, governance often struggles to keep up with AI and with the privacy and security risks it brings. She points to the recent craze in China over OpenClaw, an open-source AI agent. It was released earlier this year, and security concerns about possible cyberattacks and data breaches surfaced soon after.
Although China has restricted its use on government devices, it remains unrestricted for the public. And because the agent requires a certain level of technical skill to install and uninstall, a new market of individuals charging for those services has emerged.
“I think the idea that we can all have an assistant is very beautiful and empowering,” Wei said. “But at the same time, I’m not entirely comfortable with how the risks are amplified at this level.”
Developing a tradeoff framework
Wei said her research is not about whether this new technology should or will be used. For better or worse, it is here to stay. Instead, her focus is on how it should be used.
“Agentic AI is not just a technical improvement or transformation; it also presents us with a challenge in terms of oversight, privacy and ethics,” she said. “So, for the agent that is able to orchestrate its own workflow and act autonomously, how much trust do we place in it, and what kind of governance are we going to put in place?”
In her latest paper, “Agentic Artificial Intelligence as a New Frontier in Information Systems: Promise, Peril, and Research Opportunities,” Wei and her co-authors introduce an “agentic AI tradeoff framework” to aid in shaping resilient, equitable and sustainable pathways for integrating the technology.
As they note, these tradeoffs arise from recurring tensions between autonomous systems and institutional structures.
“It’s about leveraging the technology to its best use,” she said, “providing as much benefit in terms of productivity, efficiency, cost-saving — and even helping with inclusion and equity by lowering the barrier to access for individuals — while minimizing risk.”
“Agentic AI is not just a technical improvement or transformation; it also presents us with a challenge in terms of oversight, privacy and ethics.”
Dr. Xiahua Wei, associate professor, School of Business
Navigating the paradox
The best use of the technology, however, is unique to the user, Wei said, adding that AI appears to be the latest chapter in the “productivity paradox.” Also known as the Solow paradox after Nobel Laureate Robert Solow, it is the observation that heavy investment in computing technology has not shown up in productivity statistics — productivity growth slowed even as that investment surged.
Or, as Solow said, “You can see the computer age everywhere but in the productivity statistics.”
As Wei’s paper states, the technology does indeed create opportunity for advancement, but the barriers to realizing that opportunity are largely institutional and societal.
“Every company can use AI to some degree, but the outcome will be different,” she said. “The return on investment isn’t the same across the board. So how do we determine what the net benefit should be that we’re going to get at the end?”
In their next paper, Wei and her co-authors plan to dive deeper into a specific example of bias in generative AI: resume selection and screening as a human resource management tool.