

Robin C. Cover
Re: Lou Burnard's "Subject: How far SGML?"

Lou must have strong confidence in his friendships to have posted
an exchange between Stig Johansson and Bob Amsler "without asking
permission of either of them."

Stig's queries ("Does everything have to be expressed through SGML?
Is it good for all types of texts? ... Even if SGML can be made to do
the work, is it the best way for all texts and all types of textual
features?"), in light of his subsequent comment, can be understood to
ask whether KNOWLEDGE/ANALYSIS OF ALL KINDS *about* a text is best
represented, in all kinds of texts, by SGML.  Amsler's (posted)
response in paraphrase (..."whether SGML should always be used to
represent all texts...category of material") seems to focus on
format and content, and validation of these.  Stig's question seems
to ask about propriety, economy, felicity.  When I remind everyone
that we need character-level and morpheme-level annotations on texts
for representation of codices, textual criticism and linguistic
analysis -- the question may be whether we WANT to use an SGML id
marker for every character in the text, or whether we hand off some
of the cross-referencing/parsing problems to applications software.
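To make the cost concrete, a character-by-character id scheme might look something like the following (a hypothetical sketch for illustration only, not markup anyone has proposed):

```sgml
<!-- hypothetical: each character of the word "arm" carries its own id
     so that variants or morpheme analyses can point at it -->
<w id="w1">
  <c id="w1c1">a</c><c id="w1c2">r</c><c id="w1c3">m</c>
</w>
```

Even this three-character word triples in markup what it carries in text, which is why one might prefer to hand the cross-referencing off to applications software.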

I may be wrong in this interpretation of the exchange, but even if so,
let me expand the question.  Part of the issue seems to be: when should
we let the applications do type-checking on the data (Amsler eliminates
program code from SGML coding because compilers do the validation)
and when do we declare detailed rules in a document DTD?  If the
applications already exist, the question is moot: they validate or
they do not.  In other cases, it's a fresh choice: new text being
authored or structured, and new software processing the encoded text.  Do
we require that the exchange format (SGML) validate *everything* that
the local application controls with integrity checks?

An example would be citations, whether of bibliographic data or
internal references in classical literature.  One *could* use SGML
(the DTD, with entities, attributes and tags) in an authoring system
to make sure that nobody ever typed "Sirach 23:30" (since Sirach chapter
23 has fewer than 30 verses).  Is that the best way to validate the
legitimacy of citations?  Or is there a more economical way for ALL
applications to validate citations, so that we can leave deep-level
SGML structuring markup out of the interchange format?
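One such economical alternative might be a small validation routine in the application itself, checking citations against a table of verse counts rather than encoding those limits in the DTD.  The following is a minimal sketch only; the function name and the verse-count table are my own illustration (the source says merely that Sirach 23 has fewer than 30 verses):

```python
import re

# Illustrative verse counts; a real application would load a full
# canon table.  Sirach 23 has fewer than 30 verses.
VERSE_COUNTS = {"Sirach": {23: 27}}

def valid_citation(citation):
    """Check a 'Book Chapter:Verse' citation against the table."""
    m = re.fullmatch(r"(.+?) (\d+):(\d+)", citation)
    if not m:
        return False
    book, chapter, verse = m.group(1), int(m.group(2)), int(m.group(3))
    chapters = VERSE_COUNTS.get(book)
    if chapters is None or chapter not in chapters:
        return False
    return 1 <= verse <= chapters[chapter]

print(valid_citation("Sirach 23:30"))  # False: chapter 23 is shorter
print(valid_citation("Sirach 23:27"))  # True
```

The point is that any application with access to such a table can do this check, with no deep-level structuring markup in the interchange format.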

This is a more benign case than the one that worries Stig, though (I think).  He
seems to be concerned with the important matter of representing
knowledge and analysis of texts (not visible, surface features having
to do with rules of containment) and questioning whether SGML is always
the best way to do this.  Perhaps Bob answered adequately, and it's
taking me a while to see how Bob's generalization answers all the