GNU bug report logs - #12055
24.1.50; Characters "á" and "é" are not correctly displayed on a Windows terminal


Package: emacs;

Reported by: Dani Moncayo <dmoncayo <at> gmail.com>

Date: Thu, 26 Jul 2012 12:21:02 UTC

Severity: normal

Found in version 24.1.50

Done: Eli Zaretskii <eliz <at> gnu.org>

Bug is archived. No further changes may be made.


Message #56 received at 12055 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Jason Rumney <jasonr <at> gnu.org>
Cc: lekktu <at> gmail.com, 12055 <at> debbugs.gnu.org
Subject: Re: bug#12055: 24.1.50; Characters "á" and "é" are not correctly displayed on a Windows terminal
Date: Fri, 27 Jul 2012 21:03:43 +0300
> From: Jason Rumney <jasonr <at> gnu.org>
> Cc: lekktu <at> gmail.com,  12055 <at> debbugs.gnu.org
> Date: Sat, 28 Jul 2012 00:46:08 +0800
> 
> > Well, I see some strange stuff in the input processing.
> 
> 	  /* Get the codepage to interpret this key with.  */
> 	  GetLocaleInfo (GetThreadLocale (),
> 			 LOCALE_IDEFAULTANSICODEPAGE, cp, 20);
> 	  cpId = atoi (cp);
> 
> is quite suspicious. It appears in two places - one is a fallback for
> older versions of Windows that do not fully support Unicode, the other
> is more interesting for this case, as it is in the dead key handling,
> and from Juanma's description, a dead key is being used to input the
> problem characters.
> 
> The above lines should probably be replaced with
> 
>    cpId = GetConsoleCP ();

Thanks.  Yes, I wondered about that as well.  However, this is not my
problem right now.  If we were decoding input with a wrong codepage, I
should have at least seen correct Unicode character codes right at
entry into key_event.  But what I see on my machine (whose ANSI
encoding is cp1255 and the corresponding OEM encoding is cp862) is
something really weird.  When I switch the keyboard to Hebrew and type
ALEPH, BET, GIMEL, whose Unicode codepoints are, respectively, u+05D0,
u+05D1, u+05D2, I see 0x0580, 0x0581, and 0x0582 instead.  That makes
no sense at all, and no amount of tinkering with input codepage can
ever fix that.

Besides, at least in my locale, the code that you mention is never
executed at all.  Instead, we return the original Unicode character
codepoint via this fragment:

      else if (event->uChar.UnicodeChar > 0)
	{
	  emacs_ev->kind = MULTIBYTE_CHAR_KEYSTROKE_EVENT;
	  emacs_ev->code = event->uChar.UnicodeChar;
	}

And since, at least in my locale, event->uChar.UnicodeChar is wrong,
the rest is a logical consequence of this.

So my current theory is that it is simply wrong to look at
uChar.UnicodeChar unless we call ReadConsoleInputW, the wide-character
version of the API.  But I need data from other locales to make sure
this theory is correct.  The theory is based on the following vague
portion of ReadConsoleInput's documentation:

  This function uses either Unicode characters or 8-bit characters
  from the console's current code page.

There isn't a word about when it does one or the other (AFAICS), which
led me to the above hypothesis, since that's the only cause that
doesn't need to be explicitly documented.

Btw, the MSDN documentation about this stuff is not as helpful as it
could have been (so what else is new?).  This page

  http://msdn.microsoft.com/en-us/library/windows/desktop/ms684166%28v=vs.85%29.aspx

says:

  uChar
      A union of the following members.

      UnicodeChar
	  Translated Unicode character.

      AsciiChar
	  Translated ASCII character.

What the heck do they mean by "translated" here?  "Translated" by whom
and how?




This bug report was last modified 12 years and 294 days ago.



GNU bug tracking system
Copyright (C) 1999 Darren O. Benham, 1997,2003 nCipher Corporation Ltd, 1994-97 Ian Jackson.