Thomas Tegreene, Cleveland's finance director, had had worse days, though he couldn't remember when. Already, the city's school system was broke. On top of that, the local power authority had just won its suit against City Hall for $14 million in unpaid electric bills. Cleveland also had $40 million in notes due by year end without the money to pay them off.
So far, so miserable. But as if this weren't enough, out of New York one day last June came word that Moody's was dropping Cleveland's bond rating from a respectable A to Baa. A month later, it dropped again, to a distressing Ba. Standard & Poor's, meantime, had declined even to continue rating Cleveland.
The result: Cleveland would have to pay even more for any cash it hoped to raise in the public money market, assuming it could raise any at all.
Tegreene thought to himself: Those guys in New York aren't helping much at all. Besides, they're overreacting to a little bad news. (Okay, maybe more than a little.) Anyway, what gives them such authority to affect the financial future of this city? Just who are those guys?
"Those guys" happen to be two very powerful institutions that effectively decide which companies can raise money cheaply and which can't. They do the same for state and local governments and for hospitals, school districts, and building and highway authorities - basically, for anyone who dips into the public market for funds.
They do this by assigning letter grades to bonds, from Aaa to C for Moody's and from AAA to D for S&P. The higher the rating, the safer investors figure the bonds are, so they'll agree to accept a lower interest rate on them.
In their time, the rating agencies have been charged with many things - with being capricious and arbitrary, with being wrong, with being understaffed and overworked and with being too powerful. Congress has investigated them, company presidents blaspheme them regularly, politicians pillory them.
But the fact is, they are still in business, essentially because there is no one else. They are the only standard sources to rely on for judging which bonds are safe and which too risky. It has made some a bit nervous that there are only two of them. (A third, Fitch Investors Service, has a modest rating operation but hardly qualifies as a competitive threat.) Few others, though, have tried to enter the field because establishing a credible rating agency is a costly and long-term proposition.
It is also a somewhat secretive affair. No one on the outside really knows for sure exactly what goes on inside. And the raters prefer it that way.
They will tell you in a general way what factors they consider important in evaluating companies or governments - things like debt levels, revenue streams and growth prospects. They even will hint at how some of these factors are weighted. But they won't give you the formula, for a very simple reason: There isn't any.
"We have no such thing as a weighting system," said Jackson Phillips, Moody's executive vice president.
Ratings are usually judgment calls, a mix of factual data and gut feel - which is a fact of life that doesn't do much to calm critics who allege that the agencies are overly subjective and irresponsibly capricious. In response, the raters say you just have to take them on faith - that even if they wanted to, they couldn't explain all that goes on in deciding a rating. It's all, well, sort of mysterious.
"We have a saying here called 'the rating eyes'," said John Dailey, S&P's group vice president in charge of ratings. "It means you have to be here a year or two before you understand what a rating actually means. No one teaches it. There are only two places to learn: here and Moody's."
About 80 percent of the time, the "rating eyes" at S&P and Moody's agree. Which means that 20 percent of the time they disagree. And that, critics say, is proof the two agencies play too much of a guessing game. The point is, if both are looking at the same numbers, shouldn't they come up with the same ratings?
"There is room for differences in judgment," said Phillips. "That is what this evaluation is supposed to be about."
In most instances where there is a split, the two agencies are just one grade apart. However, on occasion, the split can be wide. The most dramatic disagreement at the moment is over the financial condition of Pittsburgh - an interesting case because it spotlights the fundamental difference in approach between the two rating agencies.
S&P has given Pittsburgh its next-to-best rating, AA. Moody's is three grades away, at Baa-1. The reason seems to be that Moody's is more concerned about city budget figures - notably, a high debt level and large pension liabilities - that suggest Pittsburgh isn't as financially healthy as it might be. In contrast, S&P is putting greater weight on Pittsburgh's chances for improvement, emphasizing the city's location in a potentially booming coal region.
Dailey said the Pittsburgh case points up what he describes as S&P's more progressive approach to the rating game. "Historically in this business, you had a bunch of guys with green eyeshades who came in and looked at debt levels. Today, we tend to emphasize other things. The big change has been to stress economic base."
The difference between Moody's and S&P is highlighted by the contrasting personalities of their chief spokesmen. S&P's Dailey comes on like a successful street salesman - he's talkative, animated, free-spirited and fully engaging. Moody's Phillips strikes a more professorial pose - he's older, astute, very studious.
As for which agency the bond dealers take more seriously, that's a tossup. Moody's, a subsidiary of Dun & Bradstreet, is 40 years older. It started rating bonds in 1909. S&P, a subsidiary of McGraw-Hill, began in 1949. Until the last half dozen years or so, a Moody's rating was considered more prestigious. These days, the two agencies carry about the same weight.
Just what does a rating mean? It is frequently thought of as measuring the chance of default by a company or government. But in the postwar period almost no bond has been defaulted on while still rated "investment grade," which makes concern for default somewhat specious. Still, the bond buyers - mostly institutions such as banks, insurance companies and pension funds - are interested in knowing what bonds are likely to fail in the event something unthinkable, like a bad recession, happens. Hence the need for 'A's and 'B's.
Not all bonds are rated, but most are. Moody's grades the largest number, about 15,000 issues each year. There has been criticism that the agencies are understaffed and the analysts overworked and less knowledgeable than they should be. Most are young, just a few years out of a masters program in business or public administration.
In the past, both Moody's and S&P have had difficulty retaining trained employes, who are lured by more lucrative Wall Street salaries. But that reportedly is changing. The rating agencies now claim to pay salaries that are competitive with what banks and investment houses pay - somewhere in the $40,000 range for the better-trained personnel.
Still, no one really checks on how well the agencies are doing. Some bond buyers do have their own rating bureaus, but these operations are nowhere near as large as the two industry leaders. Everyone goes along with the system, generally deferring to S&P and Moody's on the assumption that the rating agencies know what they're doing - until something happens, something like the near-default of New York City.
That financial debacle took both rating services by surprise. S&P did not suspend its New York City bond rating until April 1975; Moody's kept an A rating on city bonds until October. Yet there had been warning signals all through the previous autumn and winter that the city was in trouble. Moreover, New York's crisis had been the result of fiscal delinquencies stretching back for years. Neither agency had done anything to dig deeply into New York's problems before the crisis broke.
Raters say the reason is that it is not their responsibility to dig. "We have to rely on the figures they give us," said Dailey. "There's no way we can conduct an audit of a city or a company. We can ask a lot of questions, and we do, but we have to trust them. The problem often is, their own accounting and reporting systems are antiquated. They don't report what they should report."
Even so, in the aftermath of New York's fiscal crisis, the rating agencies are taking extra care in assessing state and municipal governments, particularly in the Northeast. Some critics say the agencies have become too careful and tend now to lower a rating too soon to avoid being caught short again.
Moreover, the municipal bond market has grown enormously in recent years, nearly doubling in four years to $47 billion in 1977.
As a result of their influence, the rating agencies frequently are consulted by government finance directors and company comptrollers prior to the release of new bond issues.
The rating agencies deny they get involved directly in the actual structuring of a deal. Such participation would subject them to regulation under the Securities and Exchange Act. "We don't act as investment bankers," said Dailey. "What we do is sit there and react to certain proposals. We'll say what we feel might be wrong."
But whether they shape a deal directly or indirectly, there's no question of the central role they play. Bond issuers approach them with deference and, in some instances, with trepidation.
"I do have some very strong feelings about them," said Tegreene when asked to be interviewed for this article. "But expression of those feelings could damage the city. I'm going to have to go back to them this fall for a bond rating . . . In other words, you don't criticize the referee because it's going to make it more difficult to play ball the next time."
In view of their influence, the rating agencies are natural targets for influence peddlers and bribe schemes. But to their credit, there never has been a hint of payola.
"There are so many controls, no one could get away with anything," Dailey boasted. The main control at S&P is that no rater ever acts alone. All ratings are decided on by a committee of about a half dozen. And committee members rotate regularly. This makes it nearly impossible for a single rater to bias a decision and it's difficult to pack a committee, Dailey said.
Ten years ago, Congress investigated the rating agencies, and there was talk then about regulating, or at least supervising, them. But the talk led nowhere. More recently, several New York congressmen introduced a bill that would have allowed bond issuers to appeal to the Securities and Exchange Commission any ratings they objected to. The bill was never acted on.
"We have no official status," said Phillips, intent on having things stay that way. "You can choose to accept us or not. The fact is, enough people accept us so that we have an impact. That doesn't mean we are accepted blindly. Other institutions watch us. But I guess we do stand in a unique position."