
Implications of AI for Financial Markets
AI in Financial Markets: A Technical Deep Dive with Dr. Michael Wellman of the University of Michigan
Introduction
If you are a finance leader, technologist, or anyone curious about where AI in finance is actually headed beyond the marketing layer, this conversation cuts straight to the research.
In this edition of the Hyperbots CFO Insights Series, we sat down with Dr. Michael Wellman, Professor at the University of Michigan, where he has been on faculty since 1992. Dr. Wellman has spent four decades at the intersection of artificial intelligence and economics, from multi-agent systems and auction theory to algorithmic trading, deep reinforcement learning, and AI governance. He was involved in a startup doing auction technology that was later acquired by Ariba and ultimately SAP, giving him rare firsthand perspective on how AI moves from research labs into large-scale commercial deployment. Moderated by Niyati Chhaya, Co-founder and VP of AI/ML at Hyperbots, the session ranged from the first autonomous trading agents to the future of AI governance, financial literacy, and what it means for human professionals when AI becomes more capable than them.
This is not a conversation about dashboards or automation ROI. It is a conversation about the deeper mechanics and implications of AI in finance, from someone who has been watching and shaping the space for longer than most practitioners have been working in it.
Key Takeaways
Finance was the first industry to deploy truly autonomous AI agents, and that head start makes it both the most advanced and the most complex domain to regulate.
Large language models matter in finance not primarily for their reasoning, but because they dissolve the narrow interface problem and let algorithms interact anywhere language is used.
Deep reinforcement learning is the more consequential new technique for finance because it allows trading strategies to be derived automatically from data, without anyone programming them directly.
The human-AI relationship in finance will follow the chess model: a period of powerful complementarity, followed by a slow fade of the human contribution as AI capability continues to advance.
AI safety in finance cannot rest on keeping humans in the loop indefinitely, because humans get overwhelmed, become too expensive, and eventually get removed, so the systems themselves must be designed to be safe.
Interview Summary
Meet Dr. Michael Wellman
Niyati: Good morning, everyone. I am Niyati Chhaya, Co-founder and VP of AI/ML at Hyperbots. Today, it is a privilege to have Dr. Michael Wellman with us, one of the leading researchers exploring the intersection of AI, economics, and financial markets. We will dive deep into AI in financial markets, algorithmic trading, generative AI in finance, and risks, regulations, and the future of autonomous systems. Dr. Wellman, welcome.
Dr. Wellman: Thank you. Happy to be here.
How Dr. Michael Wellman Went from AI Research to the Heart of Financial Markets
Niyati: Dr. Wellman, why don't you tell us about your journey? How did you end up working on AI and finance? I do know you started off at an ERP company and now you are at Michigan. So how did you land here?
Dr. Wellman: So actually, I started working on artificial intelligence quite a long time ago. I was a graduate student 40 years ago, so I have been working in artificial intelligence my whole career. I joined the University of Michigan in 1992, and a lot of my research then was about the intersection of AI and economics, connecting multi-agent systems with economic thinking. Shortly after, in the early 90s, the World Wide Web came along, and it turned out that whatever you were doing, you would do on the web. We were building auctions and computational markets, so we got very much involved in the emerging e-commerce world at that time because of those platforms.
Entry into that ERP role you mentioned came because I was involved in a startup company that did auction technology and was later acquired by Ariba and then ultimately SAP for those platforms. So we have been working on economics and electronic commerce for a very long time. And even though one of the most important economic domains is the financial world, we purposefully avoided it for a while: it is a very secretive world with a lot of power, and we did not think we were going to beat the market, which was not really our goal anyway. So we stayed away from finance. But about 15 years ago, we started seeing algorithmic trading, something called flash trading, and various scandals around that making the news, and decided it was time for my research group to look at this domain. It was really important for computer scientists to help understand the implications of algorithms now playing a role in financial markets.
Why Financial Markets Became the First Real-World Laboratory for Autonomous AI Agents
Niyati: And this is still pre-generative AI, right? So we are talking about 2008, 2009. Of course, another thing that happened around then was a major financial crisis.
Dr. Wellman: Of course. And trying to understand the potential for how algorithms could affect financial stability and how the financial markets may affect the economy and the world seemed very salient. And so it was a very rich domain to get into. But even though it was before ChatGPT and the things that make everybody aware now, it was clear that algorithms and using some AI techniques were being adopted in financial markets. And you could say that, in fact, finance has been an early adopter of AI. Machine learning for credit verification and even helping in loan decision making has been going on for quite a long time. And in financial markets, using algorithms to assist in trade or even to execute trades happened pretty early on.
It ultimately became what I would call the first major application of autonomous agents. That is, AI programs that are operating in important high-stakes environments without human intervention. And the reason that markets became that is because a big part of the advantage of algorithms is how fast they can respond to information. And so using that advantage precludes having humans in the loop. So I think it is very interesting that it emerged as one of the probably most important case studies for how AI could be out in the world.
How Predictive Accuracy and Profitability Work Differently in Algorithmic Trading
Niyati: And here I believe both accuracy and efficiency as well as security were extremely critical, right?
Dr. Wellman: Well, you know, the thing about financial trading is that there is a bottom line. You can identify these various features you want to have, but the ultimate question is: is your algorithm implementing a strategy that is profitable? The strategies are based on making predictions in some cases, so predictive accuracy is going to be an important ingredient. That does not mean you have to be 99% accurate. In financial trading, if you have an algorithm that is reliably 55% accurate, you can make essentially unbounded amounts of money. And no doubt many practitioners of algorithmic trading use a lot of mathematical techniques, and certainly include the whole toolbox of machine learning as part of that.
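Dr. Wellman's point about a 55% hit rate can be made concrete with a toy simulation. This sketch is purely illustrative: it ignores transaction costs, position sizing, and risk, and the function name and parameters are invented for this example.

```python
import random

def simulate_pnl(accuracy: float, n_trades: int, seed: int = 0) -> float:
    """Each trade wins or loses one unit depending on whether a directional
    prediction with the given hit rate turns out to be correct."""
    rng = random.Random(seed)
    return sum(1.0 if rng.random() < accuracy else -1.0 for _ in range(n_trades))

# Expected edge per trade is accuracy - (1 - accuracy) = 2 * accuracy - 1,
# so a 55% hit rate earns about +0.10 units per trade on average, while the
# standard deviation of cumulative P&L grows only like the square root of
# the trade count. Over enough trades the edge dominates the noise.
print(simulate_pnl(0.55, 100_000))  # positive with overwhelming probability
```

The arithmetic in the comment is the whole story: a modest but reliable edge, repeated at scale, compounds into large profits, which is why speed and repetition matter so much in algorithmic trading.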
What Large Language Models and Deep Reinforcement Learning Have Actually Changed for Finance
Niyati: Then came generative AI and LLMs. What did you see as the paramount shift? What did these models suddenly do that has become so exciting?
Dr. Wellman: If I can just get to that by stepping back first a bit. So even just simply the existence of algorithms in financial markets could qualitatively change things compared to just human traders. As I mentioned, the speed of response changes things qualitatively, the amount of information that can be processed at once, the ability to take actions in multiple markets at once, the ability to promulgate new strategies that you discover to a lot of different places. That already led to a lot of new kinds of strategies that maybe did not exist before.
So now to get to your question. What has changed, if anything, with the latest round of AI? I would say there are really two categories. One is large language models, or foundation models, basically big pre-trained neural network models, like you mentioned. The other is deep reinforcement learning: techniques that can learn strategies for acting from experience data. What deep reinforcement learning lets you do is further the automation of algorithmic trading in markets, because you do not actually need anyone to program the algorithm directly. You can have a reinforcement learning process that derives a trading strategy automatically. And it could then even be something where the developers do not really understand how it works or what it is doing. Of course, they can test whether it works and monitor it, but you can generate these strategies automatically. That provides a new degree of autonomy beyond even putting an algorithm out there and letting it trade. Now you can put something out there and let it derive the algorithm that will then trade.
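As a rough illustration of what "deriving a strategy automatically" means, here is a deliberately tiny tabular Q-learning sketch. Everything in it is an invented toy, not a real trading method: the mean-reverting market, the three state buckets, and all parameters exist only to show the shape of the technique, a rule emerging from interaction data with no trading logic programmed by hand.

```python
import random

def q_learning_trader(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    actions = ("buy", "sell", "hold")
    # State: is the price below, near, or above a fundamental value of 100?
    q = {s: {a: 0.0 for a in actions} for s in ("below", "near", "above")}

    def state(price):
        return "below" if price < 99 else "above" if price > 101 else "near"

    price = 100.0
    for _ in range(episodes):
        s = state(price)
        # Epsilon-greedy action selection from the current value estimates.
        a = rng.choice(actions) if rng.random() < eps else max(q[s], key=q[s].get)
        # Toy market dynamics: price reverts halfway toward 100, plus noise.
        new_price = price + 0.5 * (100.0 - price) + rng.gauss(0.0, 1.0)
        # Reward is the one-step profit of the position implied by the action.
        position = {"buy": 1, "sell": -1, "hold": 0}[a]
        reward = position * (new_price - price)
        s2 = state(new_price)
        q[s][a] += alpha * (reward + gamma * max(q[s2].values()) - q[s][a])
        price = new_price
    return q

q = q_learning_trader()
# The learned table tends toward buying below fundamental and selling above
# it, a rule that was never written down by a programmer.
```

This is exactly the property Dr. Wellman flags: the developers can test and monitor the learned table, but the buy-low, sell-high rule it encodes was derived from experience, not specified in advance.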
Is AI in Financial Markets Learning From Data Alone, or Is There a Human in the Loop?
Niyati: And here, is all of that data-driven or does that include human feedback being incorporated in some form? Or is it more just based on time series data?
Dr. Wellman: The most straightforward thing, of course, is to use just the experience time series that the algorithm has by trading in the market. That is, all of the market data that one gets and the feedback from taking actions in the markets, not necessarily any other inputs. But of course, if there are other streams of information that may be guided by humans, those could be incorporated as well.
One of the reasons financial trading became an important domain for autonomous AI is because it was really easy to automate trading. You have electronic markets that have very simple, well-defined interfaces and computer feeds. And so you did not have to consider a very big scope of actions. It was just what orders do I want to submit at what price at what time? Pretty simple interface.
Well, what the latest wave of generative AI, and large language models in particular, lets you do is interact with a lot of things where you had not defined a narrow interface ahead of time, because it opens up the language channel. If you have the ability to interact in language, you do not necessarily have to customize your input and output to fit a certain API. You could, in principle, have your bot call a dealer on the phone and negotiate a price for a bond. Obviously, that is not an easy thing to do very reliably, but one could now even think about doing it with this latest technology, whereas that would have been a wild dream just a few years ago.
Why LLMs Still Struggle With Financial Math, and What the Right Architecture Actually Looks Like
Niyati: Do you see practical challenges when you are especially dealing with finance, accounting or math data when it comes to these models? Because at least in what we are seeing when we do experiments, these models significantly still struggle to understand math or numbers. So what are the challenges you are seeing there?
Dr. Wellman: Yeah, so one way you could use these capabilities is to wrap them around an existing algorithm. I might have a core trading algorithm that is not based on an LLM, but I now want it to interface with a person using language. You could design that program to use the LLM just as the front end, basically to open up the interface to more places. This also increases the level of automation, because you can take your algorithm to places you could not take it before, working through language interfaces.
Niyati: So your entry gate is likely language through an LLM, and you would still need some significant modeling or algorithms for it to start understanding the domain and the data or tie back to the data.
Dr. Wellman: That is right. Now, it may also be possible to take advantage of this new technology that gave us LLMs, transformer models and just these massive pre-trained models of various kinds, things that are more generally called foundation models, and try to develop those particularly for trading domains or some financial applications. And you know, the financial trading world is very secretive, but I would be very surprised if there were not a lot of resources being devoted to trying this out by some major trading firms and hedge funds right now.
Niyati: Technically, what do you see as the key differentiation in building a foundation model for finance or trading as against the ChatGPTs of the world?
Dr. Wellman: So when you are building a foundation model for a particular domain, obviously, you are curating the kind of data in a very different way than when you are just trying to feed in your general kitchen sink of everything that you could find. And sitting in the university, we do not have the resources to train massive models of any kind. We can train smaller models. A lot of the knowledge that has come from this experimentation, which I am hypothesizing is happening, will never see the light of day because if it works, you do not want to tell anybody about it because then you squander your big advantage. You do not publish papers about your big successful efforts in this world.
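The division of labor in this exchange, an LLM opening the language interface while a conventional core algorithm does the numeric work, might be sketched like this. This is a hypothetical architecture sketch: `call_llm` is a stub standing in for any real model API (its trivial keyword parse exists only so the example runs), and all field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Order:
    side: str          # "buy" or "sell"
    symbol: str
    quantity: int
    limit_price: float

def call_llm(text: str) -> dict:
    """Stand-in for an LLM that extracts structured fields from free text."""
    words = text.lower().split()
    numbers = [w for w in words if w.replace(".", "").isdigit()]
    return {
        "side": "buy" if "buy" in words else "sell",
        "symbol": "ACME",                  # fixed ticker for this toy example
        "quantity": int(numbers[0]),
        "limit_price": float(numbers[1]),
    }

def core_algorithm(order: Order, max_notional: float = 1_000_000) -> bool:
    """The math lives here, in ordinary deterministic code, not in the LLM."""
    return order.quantity * order.limit_price <= max_notional

def handle_request(text: str) -> str:
    order = Order(**call_llm(text))
    verdict = "ACCEPT" if core_algorithm(order) else "REJECT"
    return f"{verdict} {order.side} {order.quantity} {order.symbol} @ {order.limit_price}"

print(handle_request("buy 100 shares of ACME at a limit of 42.50"))
# → ACCEPT buy 100 ACME @ 42.5
```

The design point matches Niyati's summary: language is only the entry gate, and the domain understanding, the arithmetic, and the risk limits stay in conventional code where they can be tested deterministically.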
How AI Regulation in Finance Could Lead the Way for the Rest of the Economy
Niyati: Finance as a world comes with a lot of regulations, both on the market side of things as well as corporate finance. Where do you see generative AI, especially positioned as an agentic behavior, contribute towards maybe either improving regulations or interpreting them?
Dr. Wellman: I am not sure exactly what the shape of regulation would be. I have certainly expressed opinions about certain issues that we should be concerned about with trading like market manipulation and maybe the particular issue about how AI can sometimes create loopholes. You could find some behavior that would be illegal if a person did it, but somehow if you get a machine to do it, it gets around the rules. So I think we need to try to close a lot of those loopholes.
But what I do think is that people are trying to think about how to regulate AI in all areas of our economy and our society. But the financial world already is highly regulated. There is a lot of regulatory infrastructure. The kind of companies that would deploy this technology already have big compliance departments and they already expect to follow rules. So it may be the case that the financial sector is going to lead the way in how to regulate AI. That does not mean that it will be easy to regulate or that the lawmakers will get things right on the first try, but it may be the first place where you could even try to come up with sensible regulations.
Algorithmic Bias, Fairness in Lending, and the Ethical Risks of AI-Driven Financial Decisions
Niyati: There is a question around what are the ethical considerations surrounding the use of AI in portfolio management, especially regarding biases and their impacts on investment decisions.
Dr. Wellman: Yes, excellent question. There are a lot of ethical considerations. I have been mostly thinking about things like algorithmic trading, where we have to make sure our algorithms are not doing nefarious things like manipulating the market or colluding, maybe even unintentionally. How can you evaluate your algorithms? I think the answers will be analogous to how we avoid unintended biases in decisions about, say, lending. That is usually where a lot of the personal discrimination questions come in.
This, too, I think is an area where finance is going to lead the way in regulation. And obviously, there are already a lot of fairness in lending laws and regulatory structures. There are new questions about how those apply to algorithmic decisions. Or is it possible that algorithmic decisions get around the rules that exist now, but we have to create new ones?
But even aside from the legal questions, there are questions from the standpoint of a company that wants to use these models and does not itself want to discriminate, regardless of what the law says. How could you even tell what the models are doing? I think we have a real need for third-party tools and infrastructure to help validate models, to certify that they are free from at least certain kinds of biases that we can enumerate. I am hoping that some of these tools will emerge naturally from the market as the need for them grows. No doubt government regulation could help drive that market into existence because of the way compliance works. But either way, I think we need to focus on that kind of new evaluation and certification software for AI systems.
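One enumerable bias check of the kind Dr. Wellman describes could look like the following sketch. It tests a single criterion, the "four-fifths" disparate-impact ratio used in US fair-lending practice (the approval rate of any group should be at least 80% of the highest group's rate); a real certification tool would cover many more metrics, and the data here is invented.

```python
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, threshold=0.8):
    """Disparate-impact check: every group's approval rate must be at least
    `threshold` times the best-treated group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return all(r >= threshold * best for r in rates.values())

# Synthetic lending decisions: 80% vs 70% approval passes, 80% vs 30% fails.
fair = [("A", True)] * 80 + [("A", False)] * 20 + [("B", True)] * 70 + [("B", False)] * 30
skewed = [("A", True)] * 80 + [("A", False)] * 20 + [("B", True)] * 30 + [("B", False)] * 70
print(passes_four_fifths(fair), passes_four_fifths(skewed))  # → True False
```

The value of a third-party tool is precisely that checks like this are simple, auditable, and independent of the model being certified.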
How Enterprise ERP Data Can Ground and Validate AI in Corporate Finance
Niyati: Can the ERP or the structured data that already sits in the world of an organization act towards grounding AI to some extent? When I move to the enterprise finance world or the corporate finance world, my ground truth is the data that sits in my ERP about my vendor, my cash outflow, my credit lines, my capital. Do you see these kinds of algorithms or validation agents learning from that ground truth?
Dr. Wellman: I see. That is a very good question. I think we will see some combination of third-party tools that may have their own means of grounding things. But you are right that that may be hard to do in the abstract for a lot of things, and you have to make your certifications relative to some assumptions, some representations made on the part of the entity. As long as that is understood, that could be workable. That is really how auditing works too, right? The auditor is only certifying something conditional on the information it was given being correct, accurate, and truthful in itself.
Niyati: I pretty much agree that we may have auditing agents of some sort, both on the market side as well as on the enterprise finance side of things.
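An auditing agent grounded in ERP data, conditional on that data being accurate, exactly as the auditing analogy above suggests, might be sketched like this. All record structures and field names are illustrative, not any particular ERP's schema.

```python
# Hypothetical slices of ERP master data that serve as ground truth.
ERP_VENDORS = {"V-001": {"name": "Acme Supplies", "credit_limit": 50_000}}
ERP_OPEN_POS = {"PO-77": {"vendor_id": "V-001", "amount": 12_000}}

def validate_against_erp(extracted: dict) -> list:
    """Check an AI-extracted invoice record against the ERP. Returns a list
    of discrepancies; an empty list means the record is grounded. Like an
    audit, the result is only as good as the ERP data it is conditioned on."""
    issues = []
    if extracted.get("vendor_id") not in ERP_VENDORS:
        issues.append("unknown vendor")
    po = ERP_OPEN_POS.get(extracted.get("po_number"))
    if po is None:
        issues.append("no matching open PO")
    elif abs(po["amount"] - extracted.get("amount", 0)) > 0.01 * po["amount"]:
        issues.append("amount deviates >1% from PO")
    return issues

print(validate_against_erp({"vendor_id": "V-001", "po_number": "PO-77", "amount": 12_050}))
# → []  (within 1% of the PO amount, so the record is grounded)
```

The point of the sketch is the conditionality: the validation agent does not establish truth on its own; it certifies consistency with representations the enterprise has already made in its systems of record.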
Where Are the Limits of AI in Finance, and What Happens When No One Knows?
Niyati: What is going to be the limitation? Where is this going to stop? And where do you think, as a computer scientist, ethically we would need to stop in the context of finance?
Dr. Wellman: As an AI person who has been around for a long time, what I will say is that AI will often surprise us. The emergence of these tools just a couple of years ago really surprised us. I mean, even people who are experts in the field, maybe they knew the technology, but I think that the breadth of capability that came out was not anticipated. That was a positive surprise.
We often also have negative surprises, where we hit new ceilings of capability, new limitations, roadblocks we did not anticipate. So I am not going to venture to claim that I know where the next roadblock is or where the next positive surprise will be. I think the prudent thing is to plan on the assumption that these capabilities will play out and provide a lot of value. No doubt people have been trying a lot of things that did not work out as they expected. But it really is early days. I do not think we are anywhere close to exploiting even the stuff that pretty obviously can be done. And it is reasonable to expect that the capabilities will keep increasing over time.
How Regulatory Frameworks Must Evolve to Handle AI Accountability and Systemic Risk in Markets
Niyati: There is a question from the audience around how regulatory frameworks can adapt to the increasing use of AI in financial markets, addressing concerns around transparency, accountability, and systemic risks.
Dr. Wellman: I think accountability is key. We may need clearer definitions of ownership when an algorithm is operating, so that it is traceable on whose behalf it is acting. Maybe even disclosure in some cases to other parties about what kind of algorithm is operating.
In terms of systemic effects, that is of course the most important question in financial regulation. We care about things being fair and no one cheating here and there, but the real risks involve things that could induce major crashes or other dislocations. That is something where we really need a lot more research: the kinds of qualitative phenomena that could come out of the interaction of AIs are not nearly as well understood as they need to be.
Will AI Replace Finance Professionals, or Are We Entering a Golden Age of Human-AI Collaboration?
Niyati: What are the implications of AI-driven financial analysis for job displacement? And what do you think are the skills required for future finance professionals? Is it going to be a complete replacement? What is the synergy between that professional and their finance AI?
Dr. Wellman: So predicting the future is hazardous here. But I think it is a reasonably good bet that at least in the short run to the intermediate term, what we are mostly going to be seeing from this kind of AI is complementarity and amplification of human skills. I have heard the phrase that we are entering the golden age of complementarity, that there are some things that humans do well, and now with these much more powerful tools, they can do much more.
Now, these kinds of relationships do evolve over time. I like to use the example of the game of chess. If you go back far enough, the best chess players were human beings, and since Deep Blue beat Garry Kasparov, the best chess player has been a computer. But there was a period right around then when even better than a computer or a person was a computer and a person working together. Human-computer teams could beat the best computer and they could beat the best person.
Well, over time, that started to change a little bit. It lasted for maybe about 15 years or so. At some point in the human-computer team, the human is not really adding much. And it is really the computer doing it all. In Go, it may have even happened faster.
But of course, different kinds of tasks and skills are going to play out on different time scales. As a longtime AI researcher, I do believe there are no ultimate limits to AI: ultimately, AI will be able to do pretty much anything that people can do, and do it better. So we should hopefully use this golden age of complementarity to also think really deeply about our future and how we are going to work alongside machines that are even more capable than us.
Does AI Have the Potential to Take Over Financial Strategy, or Will Humans Always Own the Final Decision?
Niyati: Do you see AI starting to take over strategy, especially from a finance perspective? Or do you see that being a limit beyond which AI is going to give me the data, give me the reports, give me the insights, maybe give me recommendations, but humans own the decision?
Dr. Wellman: I do not see any ultimate limits. I think certainly right now there are limits. I think the limits are much farther along than they were five years ago, 10 years ago, 20 years ago.
It may have seemed surprising at the time if I had told you that the first applications of autonomous agents would be in financial markets. You would say, that is crazy: you are going to connect a machine to your bank account and maybe lose all your money. And of course, that has happened a few times to some companies. But as firms got experience and found it worked, they trusted it more and more, and moved the monitoring to more of an arm's length. Certainly, algorithmic trading systems are heavily monitored by people, but people with tools, and how involved the people are gets farther and farther away over time.
How to Prevent AI-Driven Financial Forecasting From Producing Unintended and Dangerous Outcomes
Niyati: How can the integration of AI and financial forecasting be balanced? How can we mitigate unintended consequences in this space?
Dr. Wellman: Unintended consequences is kind of the ultimate question about AI risk. How can we be sure? Across the board in AI, we need better ways of expressing the consequences we intend, and ways to monitor for them. So I think what we are ultimately going to build are better and better AI tools to help monitor the other AIs. Of course, we realize that is still just a bigger AI system, so we have to accept that supervision gets more and more indirect and distant as the technology gets more capable. That can also lead to big risks, and I do not think there is any silver bullet that gets rid of them. That is why we really need to put a lot of energy and investment into building safety systems for AIs.
Niyati: I think especially when we talk about enterprise software, or software that is going to be deployed with significant embedded AI, traditional algorithms and some math may help us do this kind of risk management. It cannot be just free-flowing data.
Dr. Wellman: But what I want to convey is that it can sometimes be a kind of crutch to say, oh well, we will keep humans in the loop or we will keep humans around. Because ultimately the humans get overwhelmed and cannot really handle it, and ultimately they become too expensive and somebody takes them out. So that cannot be what we are depending on so much. We have to come up with better and better system solutions for safety.
Niyati: Got it. So we need our systems to be kind of self-reliant or secure in a way that they learn, they heal, all of that on their own.
Dr. Wellman: We have to accept that they are going to have autonomy, more and more, and how to manage that in the safest way is the question.
Niyati: Yeah, absolutely. I very much resonate with that view. These things are going to learn on their own and work by themselves. We just need to ensure they do not compromise the outcomes we are looking for as users.
How AI Could Transform Financial Literacy and Empower Individual Investors to Make Smarter Decisions
Niyati: How can AI be leveraged to enhance financial literacy and empower individual investors to make more informed decisions? Where do you see that space evolve?
Dr. Wellman: That is a very interesting question. It connects to the hope many people have that AI will be very helpful for education, and financial literacy is just another kind of education. In some ways, having automated tools makes it less necessary to know a lot of details about how certain things work. But it does not lessen the need for people to really understand the fundamentals of what investment is and why. If they are going to be empowered to make decisions, and want to be able to tell the difference between investing in certain kinds of assets or in cryptocurrency, we really do want people to understand the basis of investment, which is giving value now for future returns.
What Excites Dr. Wellman Most About the Future of AI in Finance
Niyati: Maybe to close, as someone who has seen these waves of AI, pretty much all of them, and actually even narrowly looked at it from the perspective of finance, what excites you the most in terms of potential of AI and finance together?
Dr. Wellman: Well, any time you see transformational change, it is exciting as well as maybe scary. But what excites me of course is just being in the front row and seeing what some of these effects are and trying to help influence them in the more beneficial direction. So how it is going to play out, I am both excited, but also feel there is a lot of interesting uncertainty that is going to get resolved in the coming years.
Niyati: Thank you for this chat; it was extremely insightful. We covered AI in financial markets, a couple of concrete use cases, and the key concerns around risks, ethics, where the boundaries are, how AI is going to help individuals, and how we are going to manage the challenges. Short and brief, but extremely insightful. It was amazing sitting here discussing it with you.
Dr. Wellman: My pleasure. Thank you.
How Hyperbots Brings AI Research to Enterprise Finance Today
What Dr. Wellman described at the research frontier, autonomous agents operating in high-stakes environments, AI systems grounded in domain-specific data, the need for auditability and trust, and the slow shift from human oversight to system-level safety, maps directly onto what the enterprise finance function needs to solve right now.
The gap between where financial markets AI has been operating for 15 years and where most corporate finance teams are today is significant. Most finance teams are still running manual invoice processing, periodic reconciliations, and month-end close cycles that depend on human effort at every step. The accruals process alone is one of the most manual and error-prone steps in that cycle, and one of the clearest candidates for the kind of autonomous agent Dr. Wellman described. The research Dr. Wellman describes represents the ceiling. Hyperbots represents the practical floor: making the foundational automation work reliably, accurately, and with full auditability before asking finance teams to trust the next layer.
Hyperbots' Invoice Processing Co-Pilot delivers 99.8% extraction accuracy and 80% straight-through processing. The multi-agent collaboration framework that Dr. Wellman described as the future of autonomous finance is already how Hyperbots' co-pilots are architected: specialized agents handling extraction, matching, GL coding, approvals, and payments as a coordinated system. And the fraud and anomaly detection capabilities Dr. Wellman identified as essential for trust in any autonomous financial system are built into Hyperbots' workflows from day one, making every decision traceable and auditable, which is exactly what enterprise finance and their auditors require.
The insight Dr. Wellman offered on ERP data as the ground truth for enterprise AI validation is something Hyperbots is built around. The data that already exists in your ERP, your vendor records, your cash flows, your approval hierarchies, is what anchors the AI to reality and prevents the kind of hallucination and drift that makes finance leaders rightly cautious about deploying these systems.
See it in action with a demo or start your free trial today.