Re: invitation to UAHCI 2001
Dear Mr. Ueda, Hitachi Kokusai Electric,
CC: BEP ML
This is Watanabe at Shonan Institute of Technology.
On Mon, 23 Oct 2000 17:35:52 +0900, Hirotada Ueda 上田博唯 wrote:
> Due in part to a series of miscommunications between Constantine and
> me, I am sorry for the short notice, but could you please send me
> the following by Wednesday:
> participant names, affiliations, country,
> title of paper, 300-800 word abstract
I enclose them below. Thank you very much.
I am sorry, but since we could not make the deadline, the affiliations
of some co-authors and the abstract are subject to change.
We will send the final version within this month.
-------------------------------------------------------------
Participant names and their affiliations:
Takayuki WATANABE*
Department of Information Science, Shonan Institute of Technology
Koichi INOUE*
Software Research Center, RICOH Co. Ltd.
Mitsugu SAKAMOTO*
Masanori KIRIAKE*
Hideki SHIRAFUJI
Aisan Technology Co., Ltd.
Hirohiko HONDA
Department of System and Communication Engineering,
Shonan Institute of Technology
Takuya NISHIMOTO
Department of Electronics and Information Science,
Kyoto Institute of Technology
Tuneyoshi KAMAE
Department of Physics, Hiroshima University
Note: * indicates a member of ARGV (Accessibility Research Group for
the Visually Impaired)
Country: Japan
Title of paper:
Bilingual Emacspeak Project
- A universal speech interface with GNU Emacs -
Abstract (tentative):
The advent of Microsoft Windows widened the digital divide, especially
for visually impaired Japanese users: graphical Windows applications
are hard for the visually impaired to access, while sighted users can
enjoy the Internet on Windows computers. Unix is another highly
capable OS, but it has had no Japanese screen readers, which obliges
visually impaired users to log in to the system from a remote terminal
equipped with speech output. In the late 1990s, Dr. Raman developed a
new speech system, Emacspeak. Emacspeak is unique because it is not a
screen reader but a complete audio desktop, i.e., a self-voicing Emacs
with a dedicated AUI (Auditory User Interface). For example,
Emacspeak speaks the calendar using its two-dimensional structure. It
speaks Web pages according to the aural style sheets of CSS2 (W3C
Cascading Style Sheets, Level 2). It uses various voices to
effectively display, or speak, different kinds of information such as
the comments, keywords, and quoted strings of program source code.
Most of the tasks required in the workplace can be done within Emacs;
therefore, Emacspeak could be a promising speech system for Japanese
users.
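The audio-formatting idea described above can be sketched as follows.
This is a minimal, hypothetical illustration in Python; the category
names and voice parameters are assumptions for the sake of the
example, not the actual Emacspeak voice-lock tables.

```python
# Hypothetical sketch: map syntactic categories of source code to
# distinct voice settings, so each kind of text is spoken differently.
# The voice names and pitch values below are illustrative only.
VOICE_MAP = {
    "comment": {"voice": "betty", "pitch": 5},
    "keyword": {"voice": "paul",  "pitch": 8},
    "string":  {"voice": "paul",  "pitch": 3},
    "default": {"voice": "paul",  "pitch": 5},
}

def voice_for(category: str) -> dict:
    """Return the voice settings used to speak text of a given category."""
    return VOICE_MAP.get(category, VOICE_MAP["default"])
```

A speech server would then synthesize each text fragment with the
settings returned by such a lookup, which is what lets a listener
distinguish, say, a comment from a keyword by ear.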
In 1999, we, sighted and visually impaired people who love Emacs,
launched the Bilingual Emacspeak Project, which extends Raman's
Emacspeak to both Japanese and English and runs on Windows and Linux.
The aim of the current project is to enable visually impaired Japanese
graduate students, researchers, and programmers to use computers at a
high level, a capability not provided by Windows screen readers. The
current system is bilingual because Japanese users must handle English
as well as Japanese at work and on the Internet. It runs on Windows
because many users use Windows, the most popular OS for personal use.
It also runs on Linux because a PC-UNIX such as Linux could be an
alternative accessible OS for the handicapped.
The current system consists of two parts: an AUI working inside Emacs
and an external speech server. The AUI is written in Emacs Lisp and
works as a multilingual Emacspeak. Separate speech servers are
written for Windows and for Linux. The language of the text is
identified within the Bilingual Emacspeak package; in the current
version it is identified in the speech server, but it will be
identified at the Emacs level in the near future. The Japanese
language uses Kanji (Chinese characters), Hiragana, Katakana, and
other multi-byte characters. Because Kanji are rich in homonyms,
multiple Kanji characters correspond to one pronunciation. Likewise,
each Hiragana character has a Katakana counterpart with exactly the
same pronunciation. Thus a user cannot identify these Japanese
characters from their pronunciation alone and needs additional
information to identify the character. Bilingual Emacspeak uses two
mechanisms to give this additional information to the user. One is an
explanatory reading of Kanji, the same function that ordinary Japanese
screen readers have. The current system has three fields for each
Kanji: a very brief explanation used for reading a character during
cursor movement, a short explanation used for explaining a Kanji
during the Hiragana-to-Kanji conversion of a Japanese input method,
and a long explanation used in other cases. The other mechanism is
audio formatting, i.e., the use of various voice-fonts. For example,
Hiragana and Katakana will be read with different voices or with
different pitches of the same voice. Our speech server for Windows
uses commercial American English and Japanese text-to-speech (TTS)
engines that conform to the Microsoft Speech API. The server for
Linux, which speaks only Japanese for now, uses a Japanese TTS engine
that we developed from a commercial software development kit.
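The two disambiguation mechanisms above can be sketched as follows in
Python. The dictionary entry and its three explanatory readings are
invented for illustration; the real system's per-Kanji fields and
voice choices will differ.

```python
import unicodedata

# Hypothetical entry: (brief, short, long) explanatory readings for
# one Kanji. Real entries would cover the full character set.
KANJI_DICT = {
    "橋": ("hashi",
           "hashi of ishibashi (stone bridge)",
           "the hashi used in words such as ishibashi, meaning bridge"),
}

def explain(char: str, context: str = "cursor") -> str:
    """Pick the explanation field appropriate to the situation."""
    brief, short, long_ = KANJI_DICT[char]
    if context == "cursor":       # reading characters during cursor movement
        return brief
    if context == "conversion":   # Hiragana-to-Kanji conversion candidates
        return short
    return long_                  # all other cases

def script_of(char: str) -> str:
    """Classify a character's script so it can be given its own voice-font."""
    name = unicodedata.name(char, "")
    if name.startswith("HIRAGANA"):
        return "hiragana"
    if name.startswith("KATAKANA"):
        return "katakana"
    return "other"
```

The first function models the three explanation fields; the second
models the classification step that lets the speech server speak
Hiragana and Katakana with different voices or pitches.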
Development is still in progress, but the Windows version is under
alpha testing by a limited number of users. Using Bilingual
Emacspeak, they enjoy life with Emacs, especially reading and writing
e-mail. For Linux, we are still working to build a capable Japanese
TTS engine and a bilingual speech server. The current Linux version
with a prototype Japanese engine, however, already enables visually
impaired Japanese Linux users to use Emacs without any other devices
or terminals. When the first version is completed, the system will be
distributed as open source, except for the commercial TTS engines. In
this regard, we need free or open-source Japanese TTS engines. In the
future, we plan to add Braille output to the current system. We also
plan to extend Bilingual Emacspeak into a multilingual Emacspeak to
meet the requirements of international users, especially Asian users.
In conclusion, Bilingual Emacspeak provides new accessibility for
visually impaired Japanese users who want to use Emacs. They can use
the same high capabilities of Emacs that sighted users already enjoy.
In other words, the current system provides universal accessibility
for Emacs. It also provides the first Japanese speech interface for
Linux.
That is all.