Monday, March 19, 2007

FICOs and AUS: We Will Add Your Distinctiveness to Our Collective

by Bill McBride on 3/19/2007 01:13:00 PM

From Tanta:

Brian asked, in the comments to an earlier post regarding FICO scores, why mortgage lenders have not developed a “mortgage score” that would address the deficiencies in FICO when it comes specifically to mortgage underwriting. In fact, such attempts have been made and have never really gotten off the ground. It’s a very good question, though, and since the last time I launched into a giant long Nerdish explanation of some dorky mortgage matter, some serious coin dropped into the CR tip jar, it occurs to me that you all deserve another one. This is, my dears, by way of thanking you all for the tips. It goes straight to my coffee-and-chocolate budget, which in turn produces more lengthy posts. My story is that that’s a virtuous cycle, and I’m sticking to it.

The main reason that mortgage scores never got far, in my view, is the development of automated underwriting systems (AUS). The earliest attempts at building an AUS, in the late 80s and early 90s, used some sort of internal credit scorecard. By the mid-to-late 90s, when Freddie Mac and then Fannie Mae were perfecting their AUS, FICO scores had become an easily and widely available “credit scorecard,” and so the development path of these AUS changed, from the idea of creating a new credit scoring method for mortgages to creating the additional rule sets and algorithms needed, beyond the analysis of a borrower’s consumer credit history, to fully analyze mortgage loans. In other words, the AUS were intended to be automated “holistic” analysis, not just more computerized credit scoring.

The history and development of AUS is fascinating (really, it is, my UberNerds). It is, however, beyond today’s scope. Let me just note that a few years ago, the general situation in the industry was that the systems of the GSEs (Freddie Mac’s Loan Prospector (LP) and Fannie Mae’s Desktop Underwriter (DU), both of which could also handle FHA loans via additional technology called FHA TOTAL Scorecard), were the gold standard for AUS in the conforming-balance prime loan world. But they were never designed to underwrite loans that are not eligible for delivery to the GSEs, including jumbos, no docs, subprime (outside of the A- stuff the agencies have special AUS capabilities for), and a lot of exotic product structures (like the Option ARM). So there was parallel development by large private investors of their own AUS, the two best-known and most reliable of which are Countrywide’s CLUES and GMAC-RFC’s AssetWise, both of which specialize in jumbo loan balances, Alt-A and subprime.

But while just about every lender, correspondent, and broker in the country could get access to LP or DU for very low cost, and needed it anyway for their GSE loans, you had to be a correspondent of Countrywide or RFC to get access to CLUES or AssetWise, and, like anyone else, Countrywide and RFC tended to expect you to sell them the loan if you used their systems to underwrite it. Other buyers of jumbo and Alt-A whole loans might appeal to these smaller loan originators, but those other buyers didn’t offer an AUS, which is very expensive to develop. So originators got into the habit of using the systems everyone had access to and was familiar with, LP and DU, to underwrite loans that neither system was designed to accommodate. What happened is that the whole-loan buyers would create an “overlay” of rules that a jumbo or Alt loan had to meet, in addition to approval of the loan by LP or DU. A very odd hybrid of traditional and automated underwriting was born; Star Trek fans are free to imagine Borg drones (half-organic, half-machine creatures) invading the mortgage world. Resistance sure seemed futile there for a while.

I mentioned this on an earlier post about the new subprime mortgage guidance, but let me touch on it again: you can offer “reduced documentation” loans in two general ways, lender-directed or borrower-directed. Lender-directed means that the lender first looks at the loan as a whole, including the proposed loan amount, sales price and appraised value, borrower credit history, and the income, assets, and liabilities that are indicated (still just “stated” at this point) on the loan application, plus the treasure-trove of other information that is on a loan application (where you work, how long you’ve worked there, what other real estate you own, whether you have supplemental assets like retirement accounts or cash-value life insurance, etc.). If all of that looks good enough—or it all looks low-risk enough—the lender might decide that the income or assets can be verified with less documentation than is usually required, or perhaps even no documentation. For instance, what is “usually required” to count a loan as full-doc is that the borrower verify income for the last two years, as well as currently. For a salaried borrower, that would mean submitting the last two years’ W-2s, plus a current year-to-date pay stub. (If your pay stub doesn’t show year-to-date, you have to scrounge up enough of them to prove that your current pay is not just this week’s fluke, which can happen for hourly employees who might just have worked more hours than usual this week, or for borrowers who receive a bonus. The lender doesn’t want to know what you made in your best month ever, but what can count as “stable monthly income.”) After a review of the file, the lender might require the borrower to submit only the last pay stub, and allow him or her to skip hunting down the W-2s.

This isn’t really just “documentation relief,” to use an actual industry phrase that might well drive you nuts. (Relief? Like having to prove your income is some giant burden?) It is also often a way to allow a loan to be underwritten at a marginally higher income figure than would have been calculated with true “full doc,” because, as in the examples I listed above, a true full-doc loan might involve some income “averaging” to arrive at the “stable monthly income.” Your average monthly income over the last two years can, clearly, be higher or lower than your last month’s income. Traditionally, underwriters considered an upward trend to be favorable, as long as there was any reason to think it would continue, and a downward trend to be worrisome, generally requiring some good offsetting factor like a higher than usual down payment or a perfect rather than just acceptable credit record. You can see, then, that when you don’t get the two years’ W-2s, just the current pay stub, you aren’t doing any averaging; you are taking the additional risk that the current pay stub is distorting a trend. For most salaried borrowers, that’s not a huge risk. It can be a major risk when we get to commissioned borrowers, contract workers, and so on, who are, we notice, becoming a larger rather than smaller percentage of the workforce pool. (It was a big problem in the NASDAQ bubble, when you’d get all these folks wanting to count recently exercised stock options as “stable monthly income.” Underwriters can be crankier than usual—in need of regular chocolate infusions—in January of any given year, because they see more borrowers than usual wanting to count the annual bonus as current monthly income.)
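For my fellow UberNerds, the averaging point can be made concrete in a few lines. This is a toy sketch, not anybody’s actual underwriting logic; the figures, the function name, and the lower-of-the-two rule for a downward trend are all invented for illustration:

```python
# Illustrative sketch of "stable monthly income" averaging for a
# full-doc salaried borrower. All numbers and rules are invented.

def stable_monthly_income(w2_year1, w2_year2, ytd_income, months_elapsed):
    """Average two years of W-2 income with the current year-to-date pay.

    Returns (stable_income, trend), where trend is "up" or "down".
    """
    current_monthly = ytd_income / months_elapsed
    avg_monthly = (w2_year1 + w2_year2 + ytd_income) / (24 + months_elapsed)
    trend = "up" if current_monthly >= avg_monthly else "down"
    # A downward trend traditionally calls for offsetting strength
    # elsewhere, so a cautious underwriter takes the lower figure.
    stable = min(avg_monthly, current_monthly) if trend == "down" else avg_monthly
    return round(stable, 2), trend

# A pay-stub-only (reduced-doc) loan skips the averaging entirely and
# just takes current_monthly, assuming the trend-distortion risk.
print(stable_monthly_income(60_000, 66_000, 18_000, 3))
```

Note what disappears in the reduced-doc case: with only the current pay stub, there is nothing to average against, so the trend check never happens.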

In any case, this kind of “lender-directed” program of doc relief is different from a borrower-directed program, in which the borrower comes to you and requests a low-doc or no-doc loan up front. The specific term for that is “adverse self-selection,” and it is much riskier than a lender-directed program. It also creates one of the big problems of using an AUS like LP or DU to underwrite these loans. LP and DU were designed to be lender-directed programs; they might allow some doc relief after the initial analysis is done, but they always start with the “assumption” that any number you type in for income or assets is verifiable if not initially verified. That’s a huge, important difference. The initial analysis can “give more weight” to things like DTI and reserves after closing if it can consider those things as potentially verified fact rather than quite possibly unverifiable smoke. It might “decide” to let those things remain unverified, or only partially verified, but it does so, if I can put it this way, because it knew it had the right to demand otherwise. A “borrower-directed” low doc loan simply messes up the whole underlying assumption of verifiability. And, of course, a borrower-directed low or no doc loan is, as we’ve seen, probably (although not necessarily, of course) already “gaming” the system: inflating the income or assets so that the DTI or reserve calculations come up with better results than they would have using verifiable numbers. The huge joke is that you can get the AUS offering “relief” to a borrower who qualified for that “relief” by lying to the system up front.

(It is possible, of course, to get around that problem by building in some algorithm that selects a certain number of loans to be forced into full doc, regardless of whether they might otherwise have been eligible for doc relief, to create some disincentive for gaming. I won’t say the GSEs aren’t doing that; I honestly don’t know, although I don’t see any signs that it’s working if it is happening. The problem, though, is making sure that such an instant-feedback fear of getting caught lying is applied enough for any individual user of the AUS to create the right Pavlovian behavior. Remember that the GSEs buy loans at the top of the food chain, mostly, from big seller/servicers and “aggregators,” who in turn buy their loans from smaller correspondents and fund loans for little brokers. The AUS gets used at the top of the chain and also at the bottom (the borrower entry level). So your algorithm would have to work by selecting a big enough percent of those little bottom-level pipelines of loans to scare any individual originator, as well as by selecting enough of the aggregator’s pipeline to scare the aggregator. This is by way of saying that we’re dealing with second- and third- and fourth-order effects of how the business structure, in “disintermediating” the process, finds a way to create a problem that the original software engineers didn’t have in their sights. You have to keep re-modulating your phasers, because the Borg adapts.)
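What such a disincentive algorithm might look like, in cartoon form: sample each originator’s pipeline separately, so even the smallest broker shop expects to get audited. This is purely a sketch of the idea in the parenthetical above; I have no idea whether the GSEs do anything like it, and the rates and names are invented:

```python
# Toy sketch of forced full-doc sampling as a gaming disincentive.
# Sampling rates and structure are invented for illustration.
import random

def select_for_full_doc(pipeline, originator_rate=0.05, seed=None):
    """Randomly force a share of each originator's loans into full doc.

    `pipeline` is a list of (loan_id, originator) pairs. Sampling is
    done per originator, not over the whole pool, so every bottom-level
    pipeline faces some audit risk, however small the shop.
    """
    rng = random.Random(seed)
    by_originator = {}
    for loan_id, originator in pipeline:
        by_originator.setdefault(originator, []).append(loan_id)

    forced = set()
    for originator, loans in by_originator.items():
        k = max(1, round(len(loans) * originator_rate))  # at least one per shop
        forced.update(rng.sample(loans, k))
    return forced

# 30 loans spread across 3 hypothetical broker shops, 10% audit rate.
pipeline = [(f"loan{i}", f"broker{i % 3}") for i in range(30)]
audited = select_for_full_doc(pipeline, originator_rate=0.10, seed=42)
```

The per-originator loop is the whole point: a single pool-wide sample would let a small broker slip through most months, which defeats the Pavlovian purpose.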

In large part, that’s where this “hybrid” or Borg approach comes in: whole-loan investors did (mostly) realize that LP and DU were not designed to accommodate borrower-directed low doc or no doc loans. They’re also not designed to accommodate jumbos in a very important sense. Since LP and DU are designed to analyze loans the GSEs actually buy, their internal logic was designed, for instance, to weigh the proposed down payment on the loan with the assumption that the loan amount isn’t going to be higher than $417,000 (currently). A 20% down payment on a $417,000 loan is generally considered a compensating factor for other possible weaknesses (like tight ratios or a few minor credit dings). But a 20% down payment on a $1,000,000 loan? That might not even meet basic program guidelines; it might be possible, but it stops being a compensating factor and becomes a weakness that needs compensation elsewhere. In other words, a traditional view of things indicates that an 80% $1MM loan is the equivalent of a 90% or so conforming loan: possible, but definitely in the higher-risk bucket. But in a real sense LP and DU didn’t “know” that, because they weren’t designed to handle the problem. Ergo, you had the investor accepting these loans underwritten by the AUS, as long as they met a separate “overlay” or second hurdle of requirements to get around this problem.
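A cartoon version of such an overlay might look like the following. The loan-size tiers and LTV caps are made up for illustration (real investor overlays ran to many pages of requirements); the point is only that the overlay sits on top of the AUS approval and tightens as the balance grows:

```python
# Cartoon investor "overlay": extra hurdles applied on top of an AUS
# approval. Tiers and LTV limits are invented for illustration.

CONFORMING_LIMIT = 417_000  # the conforming loan limit at the time

def overlay_check(loan_amount, ltv, aus_approved):
    """Return (passes, reason).

    An AUS approval is necessary but not sufficient: bigger balances
    face tighter LTV caps, because the same down-payment *percentage*
    means less protection at a jumbo balance.
    """
    if not aus_approved:
        return False, "no AUS approval"
    if loan_amount <= CONFORMING_LIMIT:
        max_ltv = 0.95
    elif loan_amount <= 650_000:
        max_ltv = 0.90
    else:
        max_ltv = 0.75  # an 80% LTV $1MM loan fails at this hurdle
    if ltv > max_ltv:
        return False, f"LTV {ltv:.0%} exceeds overlay cap {max_ltv:.0%}"
    return True, "ok"
```

Notice that the AUS never sees the tiering: it approves the $1MM loan at 80% LTV just as happily as the conforming one, and the overlay quietly catches (or, too often, didn’t catch) what the machine didn’t “know.”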

Eventually Fannie Mae came up with the idea of “Custom DU,” which is a way a lender can access DU for loans that aren’t Fannie Mae-eligible by “customizing” its product eligibility features to take things like jumbo balances and borrower-directed documentation reduction into account. (You may ask why a GSE is putting such investment in technology to accommodate loans it is in no way chartered to buy. You may be asking about something like the concept of “corporate welfare,” where the private sector gets the quasi-government sector to subsidize its technology costs. But that’s another day’s problem.) This is still a fairly new development and, as far as I’m concerned, the results aren’t in yet (someone else might tell you different, of course). But we here at CR have now become quite wary of these things without a long enough performance history. And it’s not just history I want; there’s also the problem with the level of possible “customization.” The short version, for now at least, is that I am concerned that investors don’t know enough about the core logic of the system (the “black box” part) to know if the things that can be customized (product eligibility rules like maximum loan amount, documentation type, and so on) are 1) customized correctly and 2) sufficiently compatible with the core logic. The customization is done by the lender, and so it’s only as good as the lender’s inputs (there are competence issues here as well as potential abuse and failure-to-test issues). Furthermore, you get back to the whole logic problem behind the lender-vs-borrower directed issue: at what level does too much customization defeat the purposes of the machine’s approach?

On an earlier post I talked about conforming loans as the vanilla ice cream of the mortgage business; I’ve also used the term “commodity” to describe them. The development of the GSE AUS was spurred as much by the desire to keep the GSEs’ book of business uniform and homogeneous as by the desire to use technology to speed up the loan approval process and increase its productivity. The whole idea was that the AUS could sort out the vanilla ice cream from the mocha java praline mango. It is in no way clear to me that the eventual use of GSE AUS for nonconforming loans, with an overlay or with customization, was motivated largely by anyone’s desire to impose uniformity and homogeneity on the jumbo and Alt production. I personally believe that it was motivated more by two things, one more respectable and one less so: first, it was a desire to capture the speed and productivity increases of technology. Second, it was an attempt by at least some people to get the “seal of approval” of LP or DU on exotic loans—in other words, the “core logic” incompatibility was a feature, not a bug, to some folks. I’d see that a lot in due diligence. I’d find some god-awful loan I’d throw on my problem loan list, only to have the originator come back and say, “Yeah, but we got a DU approve on that one.” My response was something on the order of “Yes, but you threw a loan at DU that was ‘over its head,’ as it were.” They did that for a reason.

So how do we get back to FICO? Well, the AUS out there—at least LP and DU—do not use FICO scores as such. The GSEs still require lenders to get them and report them on the loans, but the AUS do their own internal credit analysis based on raw data imports from an electronic credit report. That’s what I meant above by indicating that the development shifted away from creating a free-standing “mortgage score” to replace FICO. AUS do not need another “free-standing” score, because they’re designed to do the holistic underwriting themselves. They’re an attempt to automate what a traditional underwriter did before FICOs came along.

That’s what I meant in the comment section of this post when I indicated that, for mortgage people, FICOs traditionally were useful less as a predictive tool than as a communication tool: it’s not so much that traditional lenders like the GSEs ever depended on FICO’s analytics to substitute for their own default estimates; it’s that the FICO score became a handy, consistent, easily-available “shorthand” designation of a loan’s credit quality, insofar as over time they were “calibrated” to GSE loan performance, and the GSEs could then set the actual FICO “bucketing” guidelines (over 720, under 620, etc.). What that means, in essence, is that they were less important to traditional underwriters (people or machines) than they were to investors in traditional loans. As I suggested in this post, the giant MBS market works “efficiently” insofar as end-investors can really just make a lot of reliable assumptions about what’s going on in the details of processing the underlying mortgage loans (the “sausage factory”). By reporting such indications of credit quality as FICO, LTV, DTI, doc type, etc., a lender can “indicate” to a bond buyer what the general quality of the loans is, and the bond buyer can have a sort of “reality check.” The exact methods I use to get into the weeds with individual loans might be a matter of “rep and warranty,” but you, the end investor, can glance at the general stratification of the pool I supply you with, the FICO, LTV, etc., and you can check the plausibility of those reps and warranties. If I’m claiming to use “traditional” underwriting methods but I produce these pools with these low average FICOs, you might wonder what the hell I mean by “traditional.” You might be right to do so.
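The kind of pool-level “reality check” I mean can be sketched in a few lines. The bucket boundaries follow the over-720 / under-620 convention mentioned above; the pool itself and everything else here is invented:

```python
# Sketch of the pool stratification an MBS buyer might eyeball as a
# plausibility check on the seller's reps and warranties.

def stratify_pool(loans):
    """`loans` is a list of (balance, fico) pairs. Returns the
    balance-weighted share of the pool in each FICO bucket."""
    buckets = {"720+": 0.0, "620-719": 0.0, "<620": 0.0}
    total = sum(bal for bal, _ in loans)
    for bal, fico in loans:
        if fico >= 720:
            buckets["720+"] += bal
        elif fico >= 620:
            buckets["620-719"] += bal
        else:
            buckets["<620"] += bal
    return {k: round(v / total, 3) for k, v in buckets.items()}

# A hypothetical $1MM pool: 40% of balance over 720, 30% under 620.
pool = [(400_000, 740), (300_000, 680), (300_000, 590)]
print(stratify_pool(pool))
```

If a seller’s reps claim “traditional” underwriting and this table shows 30% of the balance under 620, the shorthand has done its communication job: something doesn’t add up.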

From using FICOs as a short-hand indication of credit quality, it was a short step to using them to price things. By price, I mean more than just setting the interest rate and points for an individual loan, or even the price of a security or tranche thereof. FICOs are involved in setting the required credit enhancement levels of a security (such as overcollateralization), the MI premium required, the due diligence level required, and any number of things that, basically, come out of the yield of a loan. I actually worry as much about this issue as, if not more than, about the use of FICOs as part of the initial underwriting. We’ve had occasional discussions here on the blog about “guideline rationing” versus “price rationing” as mechanisms of credit crunching. That whole issue is about whether available credit is reduced as much by making it too expensive as by re-writing the guidelines so that people don’t qualify for certain kinds of loans. It’s a true chicken-and-egg problem, though. Suffice it to say, for now, that a large distortion may have entered the market during the boom because FICO (a kind of derivative or simplification of a complex credit analysis) drove a lot of pricing decisions. That, in short, is the “Alt-A” problem in a nutshell: not only did the FICO of those loans make them look like “prime,” it made people willing to price them at tiny risk premiums over prime. So pricing models have to get as complex as AUS models, and they have to be applied to the right kind of product, or else you have the same problem as I’ve indicated above with using LP or DU to underwrite a “nontraditional” loan. Borg pricing is as scary as Borg underwriting.
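The distortion is easy to see in a stylized FICO-keyed rate sheet. Every number below is invented, and no actual rate sheet was ever this simple; the sketch only shows the mechanism: when price is keyed mostly off FICO, a stated-doc “Alt-A” loan with a prime FICO picks up only a token premium over prime, however much risk the doc type actually carries.

```python
# Stylized rate-sheet sketch: price keyed mostly off FICO, with only a
# token add-on for reduced documentation. All numbers are invented.

BASE_RATE = 0.0625  # hypothetical prime full-doc base rate

# (FICO floor, rate add-on): the lion's share of the risk premium.
FICO_ADD_ON = [(720, 0.0), (680, 0.0025), (620, 0.0075), (0, 0.0200)]

# Documentation add-ons are tiny by comparison -- the distortion.
DOC_ADD_ON = {"full": 0.0, "stated": 0.0025, "no_doc": 0.0050}

def price_loan(fico, doc_type):
    """Rate = base + FICO add-on + doc add-on. A 740-FICO stated-doc
    "Alt-A" loan prices within a few basis points of prime full-doc."""
    for floor, add_on in FICO_ADD_ON:
        if fico >= floor:
            return BASE_RATE + add_on + DOC_ADD_ON[doc_type]
```

Under this toy grid, dropping from full doc to stated doc at a 740 FICO costs 25 basis points, while dropping from a 740 FICO to a 600 FICO at full doc costs 200: the model treats the borrower’s word as nearly as good as his W-2s.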

The rating agencies do have their own software—S&P’s Levels is generally the standard—that is supposed to account for pricing/credit enhancement levels on nontraditional product. I still think those models over-weight FICO, and that that’s a large part of why it seems that “Alt-A” is deteriorating “so fast.” There’s a whole issue out there about why, then, people aren’t using more AUS like CLUES or AssetWise, which were designed to handle Alt-A, but that kind of gets complicated by what we’re hearing from Countrywide and RFC about their own little Alt problems. Perhaps building an Alt AUS is harder than everyone thought? Perhaps speed and efficiency are more “expensive” than we thought? Perhaps you don’t have to be an outright Luddite to conclude that, maybe, we should give this tech fetish another thought? I have observed before now that I very often think we fail to consider certain kinds of tech in the mortgage business at its “true cost,” and that once you do that, you often find the vaunted cost savings and productivity increases kind of evaporating on you when your business adapts, like the Borg does, to whatever high-tech weapon you can fire at it. But I am known as an unassimilated thinker.

Tanta
