[Japan's freeze on supercomputers marks end of era (2009/11/13) on Reuters]


TOKYO, Nov 13 (Reuters) - Japan may freeze spending on supercomputers, dealing a blow to a crippled sector and threatening a brain drain in a country that prides itself on technological prowess.
A government panel set up to unearth wasteful spending recommended all but wiping out a 27 billion yen ($300 million) budget to build the world's fastest supercomputer at the state-affiliated Institute of Physical and Chemical Research (Riken).

The move could spell the end to a series of failed bids by Japan to leapfrog U.S. dominance in computing, including the ill-fated, 50 billion yen Fifth Generation Computer Systems project which became obsolete in 1992 without meeting its goals.



Riken's supercomputer, developed with Fujitsu Ltd (6702.T), is designed to analyse complex patterns such as wind and air currents and climate change and help nanotechnology research.

But members of the Government Revitalisation Unit questioned spending on a project that has already cost 54.5 billion yen and is likely to require another 70 billion yen.

"Does Japan need to be No.1 in this?" one member said at an open hearing to examine public science and technology programmes. "Japan will not become second-rate just because we don't have this," said another.


Japan, trying to cope with snowballing debt, has fallen far behind in supercomputer spending compared with the United States.

While Hewlett-Packard (HPQ.N) and IBM (IBM.N) now dominate the sector, a lack of public funds has forced NEC (6701.T) and Hitachi (6501.T) to pull out, leaving Fujitsu as the sole Japanese supercomputer developer.


Supercomputers are custom-built systems that run at extremely high speeds, enabling scientific research that ordinary computers cannot handle.

"The Next-Generation Supercomputer Project is an important part of Japan's scientific infrastructure," said Riken official Masaharu Shiozaki, adding he planned to continue to promote the supercomputer program. "Companies and universities depend on it."


Insufficient funds and infrastructure could exacerbate an ongoing exodus of top researchers from Japan, drawn by grants and research opportunities not just in the United States but also in Singapore and China, scientists have said.

"You hear of whole teams of researchers leaving," said an executive director at another public research institution, who asked not to be named after having one set of grants for next-generation research cut by more than 60 percent. "Japan's scientific community is hollowing out."



The number of supercomputers made by Japanese firms peaked in the early 1990s at well over 100 machines. Today, of the world's 500 most powerful supercomputers, Fujitsu is the vendor of just three machines, while NEC supplied three and Hitachi two.

Hewlett-Packard supplied 212, IBM had 188, Cray (CRAY.O) and Silicon Graphics International (SGI.O) each had 20 and Dell (DELL.O) supplied 14. (Reporting by Mayumi Negishi; Editing by Dan Lalor) ($1 = 90.12 yen)


Incidentally, on the same day (2009/11/13), a Wall Street Journal blog carried this post:
[Don Clark: "Faster Supercomputers: Your Tax Dollars at Work" (2009/11/13) on Wall Street Journal - Digit]


On Monday, researchers will release a twice-yearly list of the 500 biggest computers in the world. The latest rankings should provide some new clues about high tech’s relentless speed race, and how it’s being funded.

National labs and other research institutions buy these supercomputers to handle huge number-crunching tasks, like modeling weather patterns, nuclear explosions and aircraft designs. They rely heavily on advances from the semiconductor industry, since each system uses thousands of microprocessor chips–typically supplied by Intel, Advanced Micro Devices and IBM.

Rankings on the so-called Top500 list are determined by performing a set of mathematical calculations known as Linpack that indicate how fast a system is. Chip makers have been making it easier and less expensive to get higher scores by designing generations of products that plug into the same socket; upgrading a machine boils down to pulling circuit boards out of a system, plugging in a faster chip–or two, or four–and sticking boards back into a system.
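The benchmark behind the Top500 list, HPL, times the solution of a large dense linear system and converts the elapsed time into a floating-point rate using a fixed operation count. A minimal sketch of that idea in pure Python follows; the function name and the tiny problem size are mine for illustration, and the real HPL code is a heavily optimized distributed program, not this:

```python
import random
import time

def linpack_style_bench(n=200, seed=0):
    """Toy Linpack-style benchmark (a sketch, not the real HPL code):
    time a dense solve of A x = b via Gaussian elimination with
    partial pivoting, then report GFLOP/s using the conventional
    2/3*n^3 + 2*n^2 operation count."""
    rng = random.Random(seed)
    a = [[rng.random() for _ in range(n)] for _ in range(n)]
    b = [rng.random() for _ in range(n)]
    a0 = [row[:] for row in a]   # keep copies to verify the answer
    b0 = b[:]

    start = time.perf_counter()
    for k in range(n):                       # forward elimination
        pivot = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[pivot] = a[pivot], a[k]
        b[k], b[pivot] = b[pivot], b[k]
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= f * a[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n                            # back substitution
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / a[i][i]
    elapsed = time.perf_counter() - start

    # Check the solution against the untouched copy of the system.
    residual = max(abs(sum(a0[i][j] * x[j] for j in range(n)) - b0[i])
                   for i in range(n))
    gflops = ((2.0 / 3.0) * n ** 3 + 2.0 * n ** 2) / elapsed / 1e9
    return gflops, residual
```

A system's Top500 score is this same measurement scaled up: the problem size is tuned to fill the machine's memory, and the rate is reported as Rmax in flops per second.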

A favorite to come in as No. 1 next week is Jaguar, a massive system at Oak Ridge National Laboratory built by Cray using AMD’s Opteron chips. The system, which ranked No. 2 in the list last June, was originally built using models that each had four processor cores (think of each core as one calculating engine). Over the past few months, technicians at the Tennessee lab have been replacing many of those chips with newer models that have six cores; the upgraded portion of the system has 224,256 cores, the lab says.
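The upgrade arithmetic above can be checked directly. The socket count below is derived from the lab's core figure, not stated in the article:

```python
# Cores in the upgraded portion of Jaguar, per the Oak Ridge figure.
upgraded_cores = 224_256
cores_per_chip = 6                    # the newer six-core Opterons
sockets = upgraded_cores // cores_per_chip
print(sockets)                        # → 37376 sockets hold six-core parts

# The same sockets previously held four-core chips, so the swap alone
# raises the core count in that portion by a factor of 6/4.
old_cores = sockets * 4
print(upgraded_cores / old_cores)     # → 1.5
```

This is the socket-compatibility point in the preceding paragraph: a 1.5x jump in core count without replacing boards, cabinets, or interconnect.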


Jaguar was already in rarefied company, being one of the few systems to top a petaflop–a thousand trillion calculations per second. The system, at 1.059 petaflops in June, ranked just behind the No. 1 Roadrunner system at Los Alamos National Labs, which logged in at 1.105 petaflops.


posted by Kumicit at 2009/11/16 00:01 | News

