In the spirit of Robin Cover's unedited musings, let me offer some early
morning thoughts. I also have been annoyed by visually intrusive markup
and wanted it removed. It seems also to me, however, that we need this
removal -- or, more generally, interpretation -- to be done dynamically
by the software that puts the text on screen or printer. How in general
this interpretation is done, how it is selectively controlled by the user
in real time -- are these matters for the TEI to consider? Forgive my
ignorance of the TEI's global plan. If I were a software developer, I'd
want to know how users might want to have the encoded meta-information
acted upon, how they might want to specify the actions to be taken.
Not "what-you-see-is-what-you-get" (wysiwyg) but something more like
"what-you-ask-for-is-what-you-get".
I have wandered into combat without any weapons or armor. Will I escape
in one piece?
Peering into the near future, I see not merely a direction for software
development to take but also a rapidly developing need for much more
powerful hardware. Has anyone spoken to the folks at NeXT about the
dynamic presentation of encoded texts?
Willard or Willard's breakfast asks good questions, which I will try to
answer without belligerence, though the gist of what I have to say
remains essentially 'nothing to do with TEI, squire'.
Yes, indeed, acting on the markup encoded in a TEI text should be "done
dynamically by the software that puts the text on screen or printer"
(how else are you gonna see it, I ask myself). For the TEI to specify the
user interface to that processing (which is what it sounds like Willard
is proposing) -- to say, for example, "all div1s should be realised in
pink with green underlining" or "start a new screen and play God Save the
Queen before every div0" -- does not seem either practicable or
advisable. Firstly, we haven't got the time or personpower. Secondly, we
wouldn't do it right. I say that with confidence, because the whole
point of this exercise is to mark up texts so they can be used for
multiple applications, many different ways of presenting the same text,
including some which we *haven't thought of yet*. The 'G' in SGML is for
'Generalized'.
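The "multiple applications from one encoding" point can be illustrated
with a toy sketch. Everything below (the tag names, the style maps, the
`render` helper) is mine and purely illustrative -- it is not TEI or
SGML software, just the principle that the markup stays fixed while the
interpretation is swapped at presentation time:

```python
# A toy sketch of "one encoding, many presentations": the same tagged
# text, rendered two different ways by swapping the style map.
# Tag names and styles are illustrative only.
text = [("div1", "The Argument"), ("p", "Some prose."), ("div1", "The Reply")]

screen_styles = {"div1": str.upper,                   # shout headings on screen
                 "p": str}                            # leave prose alone
print_styles = {"div1": lambda s: "** " + s + " **",  # starred headings in print
                "p": lambda s: "    " + s}            # indent prose in print

def render(doc, styles):
    # The encoding never changes; only the rule applied to each tag does.
    return [styles.get(tag, str)(content) for tag, content in doc]

print(render(text, screen_styles))  # ['THE ARGUMENT', 'Some prose.', 'THE REPLY']
print(render(text, print_styles))   # ['** The Argument **', ...]
```

A third style map -- one nobody has thought of yet -- needs no change
to the encoded text at all, which is exactly the point.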
If the TEI scheme doesn't tell you how to process your text (but just
how to say what's in it) you still need some way of controlling the
software which does process it. Clearly, the more sgml-aware the
software is that does the processing, the easier that interface will be.
So when I said `SGML is not meant for human readers' I was somewhat
muddying the waters, for which I apologise. For example, a word
processor which knows that you should have end-tags that balance your
start-tags, and won't let you insert ones that don't, is more use to you
than one that doesn't even know what a start-tag is; just as a retrieval
program to which you can say "only look in the bits of text tagged as
blorts" is more use than one which thinks that <blort> is a funny sort
of word. But specifying software, still less writing it, is one of the
jobs which the TEI has emphatically *not* volunteered for.
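Both kinds of sgml-awareness mentioned above are cheap to sketch. The
following is a toy of my own, not any real editor or retrieval program:
a stack suffices to check that end-tags balance start-tags, and
tag-scoped retrieval means searching only inside the named element
(`<blort>` is the post's own joke example; it handles only flat
`<tag>...</tag>` syntax, no attributes or entities):

```python
import re

def tags_balanced(doc):
    # An editor that knows end-tags must balance start-tags needs no
    # more than a stack: push on <tag>, pop and compare on </tag>.
    stack = []
    for m in re.finditer(r"<(/?)(\w+)>", doc):
        closing, name = m.group(1), m.group(2)
        if not closing:
            stack.append(name)
        elif not stack or stack.pop() != name:
            return False
    return not stack

def retrieve(doc, tag, word):
    # "Only look in the bits of text tagged as blorts": search within
    # the named element's content rather than the whole document.
    contents = re.findall(rf"<{tag}>(.*?)</{tag}>", doc, re.S)
    return [c for c in contents if word in c]

doc = "<blort>a funny sort of word</blort><p>funny, but not a blort</p>"
print(tags_balanced(doc))               # True
print(retrieve(doc, "blort", "funny"))  # ['a funny sort of word']
```

A program that thinks `<blort>` is just a funny sort of word can do
neither of these things, which is the whole difference.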
Is there a general feeling that this separation of tasks is the right
one?
Willard asked about the NeXT presentation of marked-up text files.
Their solution is to present the screen images directly from the
marked-up files, which are marked up in PostScript. (I am not sure
whether they use vanilla PostScript or some other variant; I recall
some mention of something called Display PostScript, but that must have
been at one of the first demos of the NeXT machine, back in version
0.2 or so.) If more information is desired, I can forward the question
to our local NeXT wizards.