World AI Cannes Festival – Intelligent Automation in Action

This is an edited summary of the talk I gave at the World AI Cannes Festival (WAICF) as part of the Applications Stage Program. I was the last speaker on the first day of the conference, following Bjoern Rosenthal, Head of Product at DFL Digital Sports. For those of you who don't follow European football (soccer), DFL Digital Sports owns the rights to all Bundesliga and Bundesliga 2 content. The Bundesliga is the German football league, one of the top five European soccer leagues.

It was standing room only. Mr. Rosenthal showcased how they harnessed Generative AI to extract live match commentary and, in near-real time, turn it into a running match ticker covering every goal scored, shot on goal, foul, and other relevant events. The screen burst alive with players of the caliber of Harry Kane, Serge Gnabry, and Leroy Sané passing, crossing, and scoring. And the ticker worked. It was accurate and timely.

Then it was my turn.

From Football to Intelligent Mortgage Automation

Half the room emptied. But only half. And, as I told the remaining audience, the benefit of the last slot at a conference is that you're that much closer to Happy Hour . . . But I digress. The organizers had asked for a real-world example of intelligent automation in action, and I came prepared with a presentation on how intelligent automation, artificial intelligence, and Gen-AI are helping a US mortgage lender survive the mortgage winter and stay competitive as the market outlook improves.

I started by setting the stage: what has happened in the last decade, and in particular the last four years, in the US mortgage market? We know what happened; we survived it. As people fled cities during the COVID spike and the Fed kept rates low, origination volumes doubled from 2019 to 2021, with over 60% of that volume being refinances to lower rates.
Because of the speed of that growth, lenders didn't have time to invest in technology to scale, so they did what they always do: hire more people, which increased their fixed cost base. As inflation rose, the Fed raised rates aggressively and originations slowed sharply, with rates peaking at 7.8% in October 2023 and origination volumes collapsing by 60% from 2021 to 2023. Ouch.

[Chart: US Mortgage Originations vs. 30-Yr. Mortgage Rates. Sources: Rates: Economic Research Division, Federal Reserve Bank of St. Louis; Originations: U.S. Mortgage Bankers Association (MBA)]

So why automate mortgage if the market is in such poor shape? Simple. It is exactly because of the cyclicality, and because 500-page loan documents are still the norm, that mortgage is the segment most suited to automation, the one with the most potential. The problem is that within the financial services stack, unlike investment banking, mortgage is the last segment to innovate and invest in technology. But there are smart lenders who scaled with technology and were able to absorb the slowdown. To understand why automation is a perfect match for mortgage, it's important to level-set on the definition of intelligent automation first, then layer that onto mortgage via a case study.

What is Intelligent Automation?

MOZAIQ defines it as the configuration, integration, deployment, and use of automation and AI technologies to streamline functions and scale processes in support of human workers. It's not just one technology; it is a combination of solutions that supports human workers and allows them to do their jobs better. In our world, intelligent automation comprises several components:

[Chart: MOZAIQ's POV on Intelligent Automation]

So let's see intelligent automation in action for an Independent Mortgage Bank (IMB). The IMB, a national wholesale lender, was founded on a philosophy with two simple drivers: (1) be the low-cost lender and (2) deliver superior customer service.
Because a mortgage loan is a commodity, if they stayed focused on these two drivers, they could sustain growth and profitability, and win. But how? They needed to ensure that their DNA supported these two goals as effortlessly as possible, leveraging outsourcing (domestic and offshore) to deal with the inherent fluctuation in loan processing volume and investing in technology and automation from day one.

An Automation Foundation to Build On

The IMB started with the basics, the low-hanging fruit: document indexing, RPA automation, and data extraction from structured documents. These were functions that would let them measure automation ROI in weeks (not months) and build a platform on which to grow their automation initiatives, avoiding the pitfall of point solutions, i.e., deploying technology for technology's sake. They then progressed to more complex processes, in particular supporting the "stare and compare" work of auditors and underwriters (Initial Underwrite, Appraisal Review, CD Prep, and Post-Close Audit), combining document indexing, data extraction, and machine learning (pre- and post-processing) to further optimize the data.

Automation Helps To Seamlessly Absorb Volume Fluctuations

This automation foundation allowed the IMB to absorb the wholesale origination portfolio of a top-ten wholesale lender that had to disband, an acquisition completed in the first quarter of 2023. Because the IMB's strategy was predicated on scaling with technology (and automation) first, it was able to absorb a three-fold increase in loan volume in a three-month period (see the adjacent chart tracking loan volume for two sample processes: Loan Estimate and Loan Delivery). They did this by spinning up more VMs, spinning up more BOTs, and leveraging the automation framework fabric to scale, minimizing the number of operations and tech resources they brought over from the acquired lender.
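The "stare and compare" support mentioned above (comparing data extracted from a loan document against the system of record and routing mismatches to a human) can be sketched in a few lines; the field names and tolerance below are illustrative assumptions, not the IMB's actual rules:

```python
# Illustrative "stare and compare" check: compare fields extracted from a
# loan document against the system of record, and flag any mismatch for a
# human underwriter rather than deciding automatically.

TOLERANCE = 0.01  # allow tiny rounding differences on numeric fields

def stare_and_compare(extracted: dict, system_of_record: dict) -> list:
    """Return a list of discrepancies for human review."""
    discrepancies = []
    for field, extracted_value in extracted.items():
        expected = system_of_record.get(field)
        if isinstance(extracted_value, float) and isinstance(expected, float):
            match = abs(extracted_value - expected) <= TOLERANCE
        else:
            match = extracted_value == expected
        if not match:
            discrepancies.append(
                {"field": field, "document": extracted_value, "system": expected}
            )
    return discrepancies

# Hypothetical loan fields, for illustration only.
doc = {"borrower_name": "Jane Doe", "loan_amount": 300000.00, "rate": 6.875}
los = {"borrower_name": "Jane Doe", "loan_amount": 300000.00, "rate": 6.750}

for issue in stare_and_compare(doc, los):
    print(f"Review needed: {issue['field']} "
          f"(document={issue['document']}, system={issue['system']})")
```

The point of the design is the routing, not the comparison: the automation never overrides the underwriter; it narrows their attention to the fields that actually disagree.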
And, with the ops and tech personnel savings, they invested in sales and marketing (account management) personnel, continuing to scale the wholesale channel by acquiring broker relationships at a reduced fixed cost, knowing that they could scale the volume with automation. The benefits achieved by the IMB were astounding:

And finally, an additional critical benefit: the IMB can scale up with intelligent automation by provisioning technology resources as loan volumes increase, without increasing its fixed cost base.
Should We Slow Down AI Research?
World AI Cannes Festival – AI R&D Fast or Slow?

I had the pleasure of speaking at the largest AI conference in Europe the week of February 5, 2024: the World AI Cannes Festival (WAICF), held in its namesake city on the Côte d'Azur. The conference was a premier event, with keynotes that featured Bruno Le Maire, the French Minister for the Economy, Finance, and Industrial and Digital Sovereignty; Yann LeCun, Chief AI Scientist at Meta; and Luc Julia, Scientific Director at Renault (and co-inventor of Siri).

One of the more interesting and insightful panel discussions was the session titled "Should we slow down research on AI?" It was a debate on whether effective accelerationism, also known as e/acc, should be constrained in favor of a more responsible, methodical approach to AI research and development.

Three Points of View

Yes: Regulate and Constrain

Representative: Mark Brakel, Director of Policy, Future of Life Institute

The case against e/acc was not well made. The view is to pause development of "powerful" AI systems ("powerful" was not well defined) and allow R&D to continue for "good" causes ("good" was not well defined either). Take the analogy with nuclear weapons: there is a belief in the "decelerate" camp that AI will be used by bad actors, just as nuclear weapons are used as a threat today (although they have only been detonated twice to harm humans). The belief is that the risk is there. What the "Yes" camp failed to address is this: if bad actors and rogue states will inherently embrace AI for nefarious purposes (see the recent article about North Korea, Russia, China, and Iran using AI to enhance their hacking skills), shouldn't AI be used to proactively assess and predict these threats, and to actively fight them with the same means and weapons?
Middle of the Road

Representative: Professor Nick Bostrom, Oxford University and head of the Future of Humanity Institute

Move fast in the R&D direction, but slow down as the technology approaches transformative capabilities, until the risks have been fully analyzed. The self-driving car was used as an example: some would argue that the "move fast" R&D approach has carried over into the production phase, endangering drivers, passengers, and bystanders.

No: Don't Constrain R&D, but Intelligently Regulate the Application of AI

Representatives: Yann LeCun, Vice President and Chief AI Scientist, Meta AI; Francesca Rossi, IBM Fellow and AI Ethics Global Leader

AI growth will be progressive, not instant, so there is no fear that AI will go rogue on its own anytime soon, and society has inherent built-in safeguards that make AI safe. As an analogous situation, look at turbojet technology. The turbojet was invented in the thirties, but it was not until the sixties that it was deemed safe and economically viable for commercial use; in that case, regulators and government entities stepped in to ensure the safety of consumers. Mr. LeCun believes this will be the case with AI. R&D on the turbojet was allowed to progress (and was implemented for military use), but regulation came in when the technology was ready to be commercialized. Therefore, AI R&D should be allowed to progress unfettered by government regulation, because R&D will also help address and manage the risks of AI. Regulators should focus on regulating the application of AI (the use case), not the R&D, with the companies doing the research held responsible and accountable for what they create and deploy. Ms. Rossi offered an insightful counter to the nuclear weapon analogy: AI research is analogous to the splitting of the atom, and a nuclear weapon is one application of that capability. Look at the nuclear reactor; it is a positive use of the same atom-splitting technology. Mr.
LeCun then used other colorful analogies to discuss managing AI risk, arguing that one should look at it from an existential-risk perspective: there are safeguards already in place to manage the first two risks, whereas the third is an ethical question rather than a practical scenario. Furthermore, these are imaginary existential risks, and one cannot regulate fictitious and theoretical threats; otherwise, governments will enact regulations that are counterproductive.

Another problem with regulation as it is being proposed today, as in the EU AI Act, is that governments are targeting open-source large language models (LLMs). Private companies that have an interest in restricting innovation, companies that want to sell their proprietary LLMs, are convincing regulators that unfettered access to open-source LLMs will give rogue players access to IP they can weaponize. I'm not going to pontificate on the benefits of open source; suffice it to say that the focus on regulating AI is on the wrong aspect here, i.e., the R&D.

Panel Discussion Takeaways

The takeaways from the session can be summarized as: a responsible AI framework should be implemented by all companies and individuals performing R&D on AI; regulating entities should focus on regulating the application of AI (the use case) and not the R&D; standards that do not inhibit innovation should be defined and applied; and access to open-source LLMs should remain unfettered, to allow companies to implement safe, trusted AI solutions.

How It Impacts MOZAIQ

MOZAIQ is nowhere near creating an AI-powered solution that has the ability to go rogue or, if it did, to cause life-threatening and/or reputational damage to the provider and user of the solution.
What MOZAIQ is doing is applying the principles of responsible AI to all the intelligent automation solutions being designed, built, and deployed for our customers, to ensure that the core tenets of responsible AI are adhered to: privacy and security, trust and fairness, equity and inclusion, transparency and accountability, and safety and reliability.

Note: The topic of AI is massive, provocative, and constantly evolving. Every day there's a new story, a point of view, or a new application of AI that spurs heated debates across multiple aisles. There is no right answer. So let me be clear: the opinions expressed in this blog post are one of many possible takes.
The EU AI Act
The EU Artificial Intelligence Act

What Does This Mean for U.S. Companies?

I was going to provide a brief summary of the EU's new AI Act and its impact on the US market, but after being inundated with summaries interpreting the legislation and purporting to know exactly how it would impact the US government's own AI legislative agenda, I decided against it. Instead, I've provided two insightful links that explain the EU AI Act and its practical impacts.

Some of you will be asking: why do I care about EU AI legislation? Simple. The EU AI Act confirms something I've recently written about: Responsible AI as a driving factor for AI initiatives. The EU AI Act imposes legally binding rules on transparency and ethics for any and all AI initiatives deployed in the EU. For example, it will be illegal to indiscriminately scrape images from the internet to create a facial recognition database, and AI-generated content will have to be labeled as such. The act also takes a "risk-based approach" to regulating AI: the riskier the use case (e.g., lending, hiring, and education), the more oversight of and restrictions on the AI models, and therefore the more transparency required into how each model was built (where does the data used to train it come from?) to avoid biases. And, since tech companies are loath to adhere to two different standards (look at GDPR), they'll probably enact one set of transparency and ethical codes globally. Probably.

The other impact of the EU AI Act is to spur the US government to enact an AI regulatory framework of its own, instead of relying on an executive order focused on national security with limited scope and hoping that tech companies "self-police" their AI efforts. If the recent chaos at OpenAI suggests anything, it is that the e/acc factions are winning, and that does not bode well for responsible AI. We'll see.
One more thing: I have been invited to speak at the World AI Cannes Festival, the largest AI conference in Europe, this coming February. I'll make sure to bring back some tidbits and insights from the conference, especially with regard to the real-world impact of the EU AI Act. Happy Holidays.

Other EU AI Act Articles of Interest:

Note: this blog post was written by a real human and does not contain content generated by ChatGPT or any other Generative-AI platform.
Grok This – Responsible AI is a Critical Success Factor
Grok This: Responsible AI is a Critical Success Factor

The announcement that the platform formerly known as Twitter launched its own AI model, Grok, wasn't surprising. It did, however, raise alarms, especially after it was introduced with promises that it would "answer questions with a bit of wit," that it "has a rebellious streak," and that "A unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the 𝕏 (Twitter) platform." That should make for some interesting answers. And dangerous ones.

Because, after all, "Artificial Intelligence" is the wrong moniker; it should be "Statistical Decision Making." That's all a Generative AI model does: it creates images and concatenates words based on probabilities. It's a very powerful statistical inference engine, a really good autocomplete. So if a model is trained on bad, toxic, and biased data, it will inevitably spew out garbage and filth.

This spurred me to go back to a blog post I wrote recently summarizing the key takeaways from a forum hosted by the Mortgage Bankers Association (MBA) and MISMO titled "Artificial Intelligence—Promise and Peril for Mortgage Lending." One of those takeaways was "Responsible AI as a Critical Success Factor." The Grok announcement has compelled me to elaborate on this concept, as these principles must be considered any time an AI-powered solution is developed, deployed, monitored, and managed, irrespective of industry and use case.

Before we delve into Responsible AI, let's be clear that it is just one component of the foundation of any AI compliance strategy. This foundation is based on three elements:

Part I (this blog post) will focus on Responsible AI, while Part II will focus on Risk Management and Monitoring of AI-powered solutions.
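The "really good autocomplete" point above can be made concrete with a toy model: a program that has only counted which word follows which will happily continue any prompt, and whatever its training text over-represents, its output over-represents. The corpus and code below are an invented illustration, not how Grok or any production LLM is built:

```python
import random
from collections import defaultdict

# Toy next-word model: count which word follows which in the training text,
# then "generate" by sampling the next word in proportion to those counts.
# There is no understanding here, only follow-frequencies learned from data.

corpus = ("the loan was approved . the loan was denied . "
          "the loan was approved").split()

follow_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def autocomplete(word, steps=3, seed=0):
    """Continue from `word` by sampling each next word from observed counts."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(steps):
        choices = follow_counts[out[-1]]
        if not choices:
            break
        words, weights = zip(*choices.items())
        out.append(rng.choices(words, weights=weights)[0])
    return out

# "was" is followed by "approved" twice and "denied" once, so continuations
# lean toward "approved": whatever the training data over-represents, the
# output over-represents. Garbage (or bias) in, garbage (or bias) out.
print(" ".join(autocomplete("the")))
```

Scale the same idea up by many orders of magnitude and you get the statistical inference engine described above, which is exactly why the quality of the training data matters so much.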
Part I: Responsible AI

By analyzing vast amounts of historical transaction data (using deep learning techniques to find patterns and relationships within the data), a Gen-AI-powered solution learns intricate patterns that might escape human detection. It can then generate synthetic data, helping professionals understand the types of activities that might otherwise go unnoticed. In Financial Services, there are multiple use cases:

And we're just scratching the surface. Deploying any form of AI, whether Machine Learning models or Gen-AI solutions, creates challenges in measuring and mitigating concerns about fairness, bias, toxicity, and IP. The AI solution must respect the law, and it must respect equity, privacy, and fairness. Foundational principles for deploying and using Gen-AI responsibly are crucial to enabling AI's trusted use.

So where to start? Having spent most of my career deploying technology in the Financial Services vertical, and in particular the last two years in the US mortgage banking market, we can take some of the lessons that the regulators (yes, the regulators!) have published to ensure that AI is used responsibly. In early 2022 the Federal Housing Finance Agency (FHFA) published an Advisory Bulletin on AI and Machine Learning: "Artificial Intelligence/Machine Learning Risk Management." The FHFA framework is a great baseline for designing and deploying any type of AI-powered system, especially one predicated on Gen-AI. The FHFA's Core Ethical Principles for enabling Responsible AI systems include:

To ensure that a Gen-AI model functions within a Responsible AI framework, a set of processes should be put in place to ensure that the results the model generates are correct and fair.
In essence, implement a control structure predicated on the above Core Ethical Principles:

Oh, and one more thing about Grok: it promises to "…also answer spicy questions that are rejected by most other AI systems." There's a reason smarter and better-trained AI systems reject those "spicy" questions.

Note: this blog post was written by a real human and does not contain content generated by ChatGPT or any other Generative-AI platform.
Loan Quality is a Critical Success Factor
The Importance of Loan Quality

The Fed raised rates by another three-quarters of a point and hinted at continued, though smaller, increases to come. The MBA expects origination volume to decline to $2.05 trillion in 2023, down from an expected $2.26 trillion in 2022. Not good numbers if you're a mortgage lender. A tight market, eroding margins, and plummeting refi volumes are just some of the factors forcing lenders to cut staff, terminate lines of business, and even shut down. In such a competitive market, everything a lender can do to successfully originate, fund, and sell a loan is paramount to its existence.

That's why the topic of loan quality was everywhere at this year's annual MBA Conference in Nashville. Why? Because the quality of a loan has direct economic and reputational consequences for the lender. I'll explain. Examples of common, critical defects fall into two categories: credit and collateral. Credit defects include missing or expired documents required by the GSEs, documentation that does not support the borrower's income or assets, and incorrect calculations. Collateral defects often involve a lender missing flags on the appraisal for soft markets and high CU1 scores. Poor loan quality can have severe adverse impacts on a lender:

No one in the mortgage ecosystem (brokers, loan officers, realtors, or borrowers) wants to work with a lender that closes a loan and then must bring the borrower back to the table to re-sign or even renegotiate loan terms, or must request additional documentation. Example: one lender said they are auditing 100% of their loans, pre-funding, so they don't run the risk of having to buy a loan back or incur a penalty that can cost up to 30% of the loan amount. Do the math: 30% of a $300,000 mortgage is… $90,000. For one loan. And how are they auditing the loans? With people, of course. After having shed over half their workforce.
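Conceptually, a 100% pre-funding audit of the kind described above boils down to running every loan through a checklist of defect rules before funds are released; a minimal sketch follows, where the rules and field names are invented for illustration and are not any lender's actual checklist:

```python
from datetime import date

# Minimal pre-funding audit sketch: run every loan through a list of defect
# rules before funds are released. Any finding blocks funding until a human
# resolves it.

def expired_credit_report(loan):
    one_year_before = loan["as_of"].replace(year=loan["as_of"].year - 1)
    if loan["credit_report_date"] < one_year_before:
        return "Credit defect: credit report older than 12 months"

def income_supported(loan):
    if loan["stated_monthly_income"] > loan["documented_monthly_income"]:
        return "Credit defect: documentation does not support stated income"

def collateral_flag(loan):
    if loan["cu_score"] >= 4.0:  # GSE collateral risk score; 5 is highest risk
        return "Collateral defect: high CU score on appraisal"

RULES = [expired_credit_report, income_supported, collateral_flag]

def prefund_audit(loan):
    """Return every defect finding; an empty list means clear to fund."""
    return [finding for rule in RULES if (finding := rule(loan)) is not None]

loan = {
    "as_of": date(2023, 11, 1),
    "credit_report_date": date(2023, 8, 15),
    "stated_monthly_income": 9500.0,
    "documented_monthly_income": 8200.0,
    "cu_score": 2.5,
}
for finding in prefund_audit(loan):
    print(finding)  # only the income defect fires for this sample loan
```

Doing this by hand for every loan is exactly the expensive, error-prone work that makes the case for automating the checks and reserving people for the judgment calls.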
Checkpoint Audit Helps

Checkpoint classifies mortgage loan files into discrete loan documents, extracts data with high accuracy, and enables validation of the final output across multiple checkpoints in the loan fulfillment process, eliminating costly errors, increasing loan quality, and enabling faster loan throughput, all while letting operations teams be more efficient. Auditing, automated or not, ensures that human errors are caught and resolved before costs compound: the further into the loan life cycle, the greater the time, effort, and cost required to fix an issue.

Auditing plus intelligent automation ensures that loan reviews are done consistently, the same way every time. The combination reduces human errors, enables 24/7 reviews, and decreases FTE costs, since audits are usually performed by high-value, high-cost resources. The Checkpoint Audit platform does not remove humans from the decision making; it helps them be more efficient by finding and highlighting potential issues that the underwriter or operations team can then decide on. Finally, and importantly, it sets the lender up to absorb increases in loan volume (once the market rebounds) while maintaining the higher quality of their loans, without having to hire expert resources.

Checkpoint Audit is the intelligent automation audit platform for mortgage lenders. Check out the Checkpoint Audit demo and contact us to learn more.

1CU scores: "Collateral Underwriter" scores, an automated score that the GSEs put on an appraisal. If it is too high, there is a high risk associated with that property maintaining its value. The main critical defects are associated with Credit (the borrower's ability to pay) and Collateral (whether the property value is solid).
Is Robotic Process Automation (RPA) a Commodity
Once upon a time…

When I graduated from college, in 1989, my first job was with a startup called Cambridge Technology Group (CTG), the forerunner to Cambridge Technology Partners, a systems integration pioneer of the early nineties. CTG was in the business of executive education, training entire sales forces for the likes of AT&T and NCR on how to sell an emerging computing platform called UNIX. To showcase the flexibility of the operating system vis-à-vis mainframes, we developed scripting tools that "scraped" data off of mainframe screens, fed that data into a different screen (sometimes directly on a different mainframe), and automatically executed commands. We even built an early GUI platform to create customized displays of the data: data scraped from multiple modules across multiple mainframes, automagically appearing on one single screen at the touch of a button. The audiences were shocked, stunned. They had never seen anything like it.

We called it the "Surround™" Architecture (yes, it was trademarked). The scripting tools morphed into platforms: TalkAsync for screen scraping (TalkSNA for IBM), DataHandler for storing and mapping the fields (dh_get, dh_put), and User Interface (not very original) for displaying the data. Shell scripts filled the gaps. In 1989.

Back to Reality

Fast forward thirty-two years. How has the technology evolved? Now data can be extracted from scanned documents (and from screens, via web automation, and through API calls). Now we have Python to fill the gaps. Now we can program scripts to perform complex human functions. There are entire toolkits, frameworks, and platforms that let a non-coding user program these scripts. Except now they're called BOTs. And now the Surround Architecture is called RPA.

So what?

The point is, the concepts are old, and the Robotic Process Automation (RPA) market is commoditizing: there is little differentiation across offerings.
Sure, one orchestration platform functions more efficiently than another; one drag-and-drop user interface is easier to use. Yes, today's computing power enables the deployment of Machine Learning so that you can "train" your solution to be smarter, faster, and more accurate. But if you're a company selling an RPA platform, you're running out of runway. You sell product based on features. You're stuck in the nuts and bolts. Your product does what your competition's does, in some cases better, in others worse. You're not talking about what it can do for your customer, how your customer benefits, or what new capabilities your customers can unleash. You can't, because all RPA platforms do the same thing. And your customer is tired of forking over thousands of dollars, in some cases hundreds of thousands of dollars, up front, for a piece of software that may or may not make it into production. And that is not all the spending the customer will have to do to operationalize the platform.

Enter MOZAIQ

That's our take on this market. So when we decided to create a mortgage automation solution to address Wave 3 (see the prior blog post), we knew that RPA alone wasn't going to cut it. BOTs by themselves are limited. An effective intelligent automation solution requires multiple components working seamlessly together. A BOT needs to be trained. A BOT needs to be managed. The documents need to be classified (indexed), and data needs to be extracted AND cleansed (no existing OCR solution extracts data with 100% accuracy) before the BOTs can go to work. Loan origination systems need to be integrated with. Audit trails are mandated. The MOZAIQ platform integrates these disparate requirements into one simple, easy-to-configure-and-deploy platform.
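That sequencing (classify the document, extract the data, cleanse it because OCR is never 100% accurate, and only then hand it to a BOT) can be sketched as a simple pipeline; the document types, fields, and confidence threshold below are illustrative assumptions, not MOZAIQ's actual implementation:

```python
# Illustrative intelligent-automation pipeline: a BOT only acts on data
# that has been classified (indexed), extracted, and cleansed. Fields
# extracted below a confidence cutoff are routed to a human rather than
# trusted blindly.

CONFIDENCE_THRESHOLD = 0.95  # assumed cutoff; real thresholds vary by field

def classify(pages):
    """Toy classifier: index the document by a keyword on its first page."""
    first = pages[0].lower()
    if "promissory" in first:
        return "note"
    if "appraisal" in first:
        return "appraisal"
    return "unknown"

def cleanse(extracted):
    """Split OCR output into trusted values and fields needing human review."""
    trusted, needs_review = {}, []
    for field, (value, confidence) in extracted.items():
        if confidence >= CONFIDENCE_THRESHOLD:
            trusted[field] = value
        else:
            needs_review.append(field)
    return trusted, needs_review

def run_bot(doc_type, fields):
    """Stand-in for a BOT keying verified data into a loan origination system."""
    return f"BOT wrote {len(fields)} {doc_type} field(s) to the LOS"

pages = ["PROMISSORY NOTE ..."]
extracted = {"loan_amount": ("300000", 0.99), "borrower_name": ("J@ne Doe", 0.62)}

doc_type = classify(pages)
trusted, review = cleanse(extracted)
print(run_bot(doc_type, trusted))       # the high-confidence field is keyed in
print("Escalated to a human:", review)  # the garbled OCR field is not
```

The design choice worth noting is the cleansing step in the middle: because no OCR is perfect, the pipeline must decide, field by field, what a BOT may act on and what a human must verify.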
Our SaaS offering, targeted at the mortgage ecosystem, is a digital-worker-enabled, per-loan-transaction model, eliminating costly software licensing and startup costs and allowing customers to keep costs in line with loan volume. Its pre-built offerings deliver foundational and vertical process automation out of the box, while allowing customizable processes and services to be built on top of the core intelligent platform.

MOZAIQ is in the Intelligent Mortgage Automation business. We don't sell software; we sell a solution that enables our customers to achieve rapid benefits, whether it's reducing processing costs, increasing loan processing throughput, increasing accuracy past 99%, or enabling scale (up or down) through the deployment of pre-trained digital workers.

Find out more at www.mozaiq.ai and check out the process demos at www.mozaiq.ai/demos.