Monday, November 15, 2010

El Naschie makes the New York Times

This is "Questionable Science Behind Academic Rankings" by D.D. Guttenplan, from the New York Times education section of 14 November 2010, reproduced in its entirety. The best parts are in green. Comments follow.




LONDON — For institutions that regularly make the Top 10, the autumn announcement of university rankings is an occasion for quiet self-congratulation.

When Cambridge beat Harvard for the No. 1 spot in the QS World University Rankings this September, Cambridge put out a press release. When Harvard topped the Times Higher Education list two weeks later, it was Harvard’s turn to gloat.

But the news that Alexandria University in Egypt had placed 147th on the list — just below the University of Birmingham and ahead of such academic powerhouses as Delft University of Technology in the Netherlands (151st) or Georgetown in the United States (164th) — was cause for both celebration and puzzlement. Alexandria’s Web site was quick to boast of its newfound status as the only Arab university among the top 200.

Ann Mroz, editor of Times Higher Education magazine, issued a statement congratulating the Egyptian university, adding “any institution that makes it into this table is truly world class.”

But researchers who looked behind the headlines noticed that the list also ranked Alexandria fourth in the world in a subcategory that weighed the impact of a university’s research — behind only Caltech, M.I.T. and Princeton, and ahead of both Harvard and Stanford.

Like most university rankings, the list is made up of several different indicators, which are given weighted scores and combined to produce a final number or ranking. As Richard Holmes, who teaches at the Universiti Teknologi MARA in Malaysia, wrote on his University Ranking Watch blog, according to the Webometrics ranking of World Universities, published by the Spanish Ministry of Education, Alexandria University is “not even the best university in Alexandria.”

The overall result, he wrote, was skewed by “one indicator, citations, which accounted for 32.5% of the total weighting.”

Phil Baty, deputy editor of Times Higher Education, acknowledged that Alexandria’s surprising prominence was actually due to “the high output from one scholar in one journal” — soon identified on various blogs [oh, come on] as Mohamed El Naschie, an Egyptian academic who published over 320 of his own articles in a scientific journal of which he was also the editor. In November 2009, Dr. El Naschie sued the British journal Nature for libel over an article alleging his “apparent misuse of editorial privileges.” The case is still in court.

One swallow may not make a summer, but the revelation that one scholar can make a world class university comes at a particularly embarrassing time for the rapidly burgeoning business of rating academic excellence.


“The problem is we don’t know what we’re trying to measure,” said Ellen Hazelkorn, Dean of the Graduate Research School at the Dublin Institute of Technology and author of “Rankings and the Reshaping of Higher Education: the Battle for World Class Excellence,” coming out this March. “We need cross-national comparative data that is meaningful. But we also need to know whether the way the data are collected makes it more useful — or easier to game the system.”

Dr. Hazelkorn also questioned whether the widespread emphasis on bibliometrics — using figures for academic publications or how often faculty members are cited in scholarly journals as proxies for measuring the quality or influence of a university department — made any sense. “I understand that bibliometrics is attractive because it looks objective. But as Einstein used to say, ‘Not everything that can be counted counts, and not everything that counts can be counted.”’

Unlike the Times Higher Education rankings, where surveys of academic reputation make up nearly 45 percent of the total, Shanghai Jiao Tong University relies heavily on faculty publication rates for its rankings; weight is also given to the number of Nobel Prizes or Fields Medals won by alumni or current faculty. The results, say critics, tip toward science and mathematics rather than arts or humanities, while the tally of prizewinners favors rich institutions able to hire faculty members whose best work may be long behind them.

“The big rap on rankings, which has a great deal of truth to it, is that they’re excessively focused on inputs,” said Ben Wildavsky, author of “The Great Brain Race,” who said that measuring faculty size or publications, or counting the books in the university library, as some rankings do, tells you more about a university’s resources than about how those resources impact on students. Nevertheless Mr. Wildavsky, who edited U.S. News and World Report’s Best Colleges list from 2006 to 2008, described himself as “a qualified defender” of the process.

“Just because you can’t measure everything doesn’t mean you shouldn’t measure anything,” said Mr. Wildavsky, adding that when U.S. News published its first college guide in 1987 a delegation of college presidents met with the magazine’s editors to ask that the whole exercise be stopped.

Today there are over 40 different rankings — some, like U.S. News, focused on a single country or a single academic field like business administration, medicine or law, while others attempt to compare universities on a global scale.

Mr. Wildavsky freely admits the system is subject to all kinds of bias. “A lot of ratings use graduation rates as a measure of student success,” he said. “An urban-setting university is probably not going to have the same graduation rate as Dartmouth.”

“But there’s a real need for a globalized comparison on the part of students, academic policymakers, and governments,” he said.

The difficulty, Dr. Hazelkorn said, “is that there is no such thing as an objective ranking.”

Mr. Baty said that when Times Higher Education Magazine first set up its rankings in 2004 “it was a relatively crude exercise” aimed mainly at prospective graduate students and academics. Yet today those ratings have an impact on governments as well as on faculties.

Dr. Hazelkorn pointed out that a recent Dutch immigration law explicitly targets foreigners who received their degree “from a university in the top 150” of the Shanghai or Times Higher Education rankings.

According to Mr. Baty, it was precisely the editors’ awareness that the Times Higher Education rankings “had become a global news event” that prompted them to overhaul their methodology for 2010. So it is particularly ironic that the new improved model should prove so vulnerable. “When you’re looking at 25 million individual citations there’s no way to examine each one,” he said. “We have to rely on the data.”

That may not convince the critics, who apparently include Dr. El Naschie. “I do not believe at all in this ranking business and do not consider it anyway indicatory of any merit of the corresponding university,” he said in an e-mail.

But if rankings can’t always be relied on, they have become an indispensable part of the educational landscape. “For all their methodological shortcomings, rankings aren’t going to disappear,” said Jamil Salmi, an education expert at the World Bank. Mr. Salmi said that the first step in using rankings wisely is to be clear about what is actually measured. He also called for policy makers to move “beyond rankings” to compare entire education systems. He offered the model of Finland, “a country that has achieved remarkable progress as an emerging knowledge economy, and yet does not boast any university among the top 50 in the world, but has excellent technology-focused institutions.”




They should have cited El Naschie Watch. Does anyone seriously think we weren't a primary source for this article? But never mind. After the Poynder affair we're inured to being un-cited.
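
A quick aside on the mechanics behind the headline: these rankings are weighted sums of normalized indicator scores, and with citations alone carrying 32.5% of the total, a single skewed indicator can lift an otherwise unremarkable institution a long way up the table. Here is a minimal sketch of that arithmetic; only the 32.5% weight comes from the article, while every other weight and all of the scores are made up purely for illustration.

```python
# Toy composite-ranking calculation. Only the 32.5% citations weight is
# taken from the article; all other weights and scores are hypothetical.

weights = {
    "teaching": 0.30,    # hypothetical
    "research": 0.30,    # hypothetical
    "citations": 0.325,  # weight reported in the article
    "other": 0.075,      # hypothetical remainder
}

def composite(scores):
    """Weighted sum of normalized indicator scores (each on a 0-100 scale)."""
    return sum(weights[k] * scores[k] for k in weights)

# An otherwise modest institution, before and after its citation-impact
# score is driven up (say, by one prolific self-citing scholar).
modest = {"teaching": 30, "research": 25, "citations": 35, "other": 40}
inflated = dict(modest, citations=98)

print(f"modest composite:   {composite(modest):.1f}")    # ~30.9
print(f"inflated composite: {composite(inflated):.1f}")  # ~51.4
```

On these toy numbers, inflating the citations indicator alone moves the composite by about twenty points, which is exactly the kind of distortion Holmes describes.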

The reader who pointed out this Times piece also found a blog post by Rahul Siddharthan, One man can make an “impact”, which is a clever title. He writes

Mohamed el Naschie, Egyptian mathematician, has been much discussed recently for his single-handed effort in taking a journal that he edits, “Chaos, Solitons and Fractals”, to the “top” of its field (according to the most widely-misused metric, the “impact factors” published by Thomson Scientific): it had an “impact factor” of well above 3 (that is, each of its articles was on average cited 3 times or more). And it was all because of him, because (1) he was the editor; (2) he published extensively in his own journal, and (3) he cited his own articles extensively. He is not alone in this: some similar cases are discussed in this preprint by Arnold and Fowler. There is also a blog devoted entirely to El Naschie’s achievements. :)

Now it seems his journal was not the only beneficiary of his self-largesse. From this New York Times article:


[quote from above Times article omitted]

And guess who was responsible.

(Nature, John Baez and Jacques Distler all had extensive writeups about the man; all of these have been taken down, apparently due to legal threats. Some of it is archived, without permission, on the El Naschie Watch site linked above. Read the NYT article now, before it disappears too.)


"Before it disappears" alludes to the tendency of El Naschie to get critical information about him withdrawn by threatening (e.g., Baez and Distler) or bringing (Zeit and Nature) law suits. But the New York Times would be an even more formidable legal opponent, and American libel laws are uncongenial to the great man.

Our reader says

Ironically, this THE ranking will likely only further contribute to his undoing, by drawing excess attention. At some point, THE will acknowledge the methodology was flawed (if not before, then implicitly by next year, when you can bet Alexandria won't be anywhere near the top 200 in a corrected weighting scheme, adapted to eliminate precisely this embarrassment).

It's also conceivable that the manifest intellectual fraudulence of the articles in CS&F will be more generally appreciated by then, and to save face Alexandria will not only have to disown its formerly high ranking, but also disown the fraudulent person responsible for it.


and I agree with that.

La Nación has this D.D. Guttenplan piece in Spanish.




10 comments:

  1. The sad thing is that it is highly unlikely that any amount of exposing the fraud that is the douche will have any impact on the Egyptian media, which has ready responses to parade out when they don't like what they hear: the NYT is run by Jews, Zionists have it in for the douche (he told us so but we didn't take him seriously enough), the West is jealous of Alexandria University, it's because the douche is a Muslim, etc.

    I was wondering why the douche had been denigrating the THE ranking for AU. It is not like him to deflect praise. This preemptive move makes me believe he is truly worried that such international attention can only mean trouble for him. Let's see ...

    ReplyDelete
  2. Yup. He is dismissive of the THE ranking in Rosa as well. I think he does it in the vain hope that his own role will go unexamined by his readers.

    ReplyDelete
  3. > One man can make an “impact”, which is a clever title

    clever titles abound in this latest round, even the stodgy New York Times'

    Questionable Science Behind Academic Rankings

    is an (intentional?) double entendre on bibliometrics and e-infinity (maybe lawyers will get to battle over the primary meaning...)

    > he does it in the vain hope that his own role will be unexamined

    If so, that seals the case for charlatan
    http://elnaschiewatch.blogspot.com/2010/05/case-for-charlatan.html
    Were he delusional, he would be basking in the warm glow of the spotlight.
    For so long he successfully operated under the mainstream radar, a giant among pygmies... but now he's sweating profusely under the spotlight, his cover of relative anonymity blown, and his life's work in CS&F potentially up on the electronic pillory.

    ReplyDelete
  4. And yet he keeps upping the ante. The latest video seems to show that he won't slink away in embarrassment, ever. He just makes ever more absurd and grandiose claims, and puts them out there in the Egyptian media. That behavior suggests delusion, not charlatanry.

    ReplyDelete
  5. The NYT should've mentioned CS&F and Elsevier by name. After all, not only is Elsevier in love with the same game-able Thomson-Reuters bibliometrics that THE used, but it is also still publishing the tainted and disgraced journal.

    ReplyDelete
  6. Yes, they should've. I wonder why they didn't.

    ReplyDelete
  7. Marc Abrahams cited the NYT article:

    Yet another triumph for Prof. El Naschie

    BTW: Marc erroneously thinks the Great Man is a professor.

    ReplyDelete
  8. TY Shrink! You are as fast as I am. :)

    For the record, El Naschie is not a real professor, despite the claim on his Web site:

    "After becoming full Professor of Engineering he followed his inclination towards theoretical subjects"

    ReplyDelete
  9. Why don't you show us your work instead of attacking others'?

    ReplyDelete
  10. Because this is El Naschie Watch. We expose the lying fraud and his followers. That's what we do. El Naschie claims to deserve a Nobel Prize several times over. We think that's hilarious.

    ReplyDelete