+ 3
Can you enable UTF-16 encoding (emojis) with the C++ preprocessor?
It does change the error stream, but how? I honestly don't know whether this is a dead end or a possible one-liner. Does anyone have any ideas? https://code.sololearn.com/cHhj9vqH9qAp/?ref=app
3 Answers
+ 10
Kirk Schafer Yeah, I checked it out too. It's strange that the other wide-output commands are not working.
I also noticed another thing: if we try to print a normal text/string, it prints what looks like Chinese characters, and none of the escape sequences work either.
#include <cstdio>   // wprintf, printf
#include <iostream>
int main()
{
    wprintf(L"%s", L"\xfeffâșđ");   // BOM first; note %ls is the standard specifier for wide strings
    wprintf(L"%s", L"đ");
    printf("%s", L"đ");             // narrow %s with a wide-string argument: undefined behavior
    printf("%s", L"đ©ïž");
    std::wcout << L"âïž";
    printf("Hello\n");              // narrow output on a now wide-oriented stream
    return 0;
}
The code above compiles and runs with no errors, but look at the output of the last print statement.
If there were a way to remove the BOM, maybe that last command would work correctly.
+ 9
I'm no expert, but I tried modifying your code and it worked fine.
#include <cstdio>   // wprintf, printf
#include <iostream>
int main()
{
    wprintf(L"%s", L"\xfeffâșđ");   // the leading BOM sets up the decoding
    wprintf(L"%s", L"đ");
    printf("%s", L"đ");             // narrow %s with a wide-string argument: undefined behavior
    return 0;
}
It seems the BOM is needed at least once to tell the runtime how the encoding/decoding should be done.
+ 1
nAutAxH AhmAd Well, my earlier answer got dropped; sorry for the delay in recreating my response.
Your answer indeed works, which is both neat and perplexing -- it doesn't require resetting stdout (could it be the L prefixes on the literals?), but the other wide-output commands (when used after this new approach) don't seem to work in the environment both of our Code Playground codes run in.
This is mysterious to me, because once you set the BOM, I feel like it should just keep working...