

Robin C. Cover
(part 2 of 2)

    I have only begun to work out some details in terms of an SGML-based
encoding scheme.  I am increasingly pushed in the direction of this
recommendation: that we provide basic guidelines, and examples (in a
manual) but assume that the details of the feature list, including the
designations of elements, attributes and nestings, will need to be filled
out by domain experts in various fields of literature.  Two other
conclusions are clear: (a) the "standard" template, somewhat like a
bibliographic template, will involve many optional features; (b) the
templates will be sometimes very complicated, with multiple levels of
inter-dependencies; they will involve discipline-specific or language
specific mechanisms for expressing the inter-dependencies.

    It's often not clear to me whether to suggest elements or attributes
for some features, so I welcome comments from other TEI-REP members and
from the meta-language experts.  The following crude scheme reflects a
desire to be able to reference a text-critical situation from outside the
text (from a commentary or the database point of view), and thus
envisions an external referencing mechanism for both "lemma" and
"alternative reading."  It would be helpful to use the same general
structure whether from the standpoint of internal or external
referencing.  The two objects seem to be the same textual objects except
that some essential information about the lemma is automatically
inherited from the document DTD which needs to be supplied for the
alternative readings.  I do not intend these names to be taken seriously,
but as provisional handles that are clear.


    an element,  (a) embedded within the text <emp>loc cit</> at some
        legal level of granularity [character/syllable/morpheme/word
        level], or (b) in an
        associated text-critical file sub-document, or (c) in a separate
        document type; it contains one <primary_reading>, zero or more
        <alternate_reading>s and zero or more <emendation_reading>s; one
        or more of <alternate_reading> OR <emendation_reading> is
        required

    <primary_reading> .....</>  open & end tags for the primary reading

    <primary_reading_document_name>  required element within <primary_
        reading>; may be omitted if the encoding is part of the
        containing document's app crit, and/or DTD   (??)

    attributes of the <primary_reading_document_name> would include:

        canonical_referencing_scheme_name: (optional; e.g., "uses the
            Goettingen Septuagint versification"; not necessary
            if this info is in the containing document's DTD)

        language: (required; means should be documented for indicating
            various kinds of bi-lingual documents; refinements to the
            2-character language codes of ISO 639 should be recommended
            to account for regional and genre-dialects, etc.)

        text_locus: (required; using the canonical reference plus offset;
            using some specified referencing system (perhaps different
            options for different textual arenas & genre types)
            to designate the precise locus (offset), which may contain
            two or more discontinuous textual elements)

        orthographic_stratum: (optional but recommended for texts which
            were written in different orthographic strata at different
            periods/locations; useful for artificial/hypothetical
            readings discussed in textual commentary or in emendations,
            where presumed orthographic systems are inferred through
            historical linguistics)

        script: (optional, but required for languages which are written
            in more than one script)

        other_attributes: (optional; a host of other attributes about the
            witness, whether a printed edition, manuscript, tablet; these
            would be standard  bibliographic data in many cases; date of
            publication; current museum location; date of discovery;
            provenience; date of the witness; name of archive; physical
            substance & medium (inscribed stone, papyrus, codex, clay
            tablet, fired brick, inscribed shard); literary genre &
            sub-genre (e.g., scripture text on a mezuza))

        <lemma>  required, single element within the <primary_reading>

            <lemma_content> nested element of lemma, the content of
                the lemma itself; attributes include:
            language: language attribute if different than that
                specified for document
            orthographic_stratum: necessary if different than that
                specified or assumed for the document
            script: script attribute if different than that declared
                for document (e.g., Hebrew tetragrammaton spelled
                in archaic characters)
I have several uncertainties here:
    a)    which should be attributes/tags
    b)    how to signify parentage if the attributes are attached at
          several different hierarchical sub-levels within the lemma (for
          example, when the "script" attribute changes for one of the
          three words of the lemma)
            <modern_language_translation_of_lemma>, one or more
                optional nested elements of lemma; should
                provide for alternate translations of the lemma
                in a single language, or in multiple modern languages
            ll_encodings: mostly optional tags/attributes; examples:
                *character-level, morpheme-level, syllable-level,
                    phrase-level (etc.) id markers
                *morphological descriptions (morpheme & word-level)
                *lexical descriptions (lexical lemmas)
                *syntactic tags
                *paleographic/orthographic annotations
                *other linguistic/literary annotations (all the
                    ll_encodings potentially overlap with
                    the TEI-ANA annotation database)
            <top_level_normalization>  optional nested element of
                    lemma when appropriate to textual arena;
                    this would be the normal mechanism used
                    by the application for grouping all
                    similar alternative readings of witnesses
                    in different languages
            <conversion_table> (with some better name!) to designate the
                    parent(s) and/or child(ren) in presumed
                    translation or dependency stream; e.g., Hebrew
                    >> translated by Greek/Syriac as xxxx; Greek
                    << retroversion from Hebrew Vorlage XXXX;
                    Armenian << mediated through Greek YYYY << from
                    Hebrew ZZZZ; this could be a table of mappings
                    relevant to the particular circumstances of
                    this lemma (word, phrase level)
            <evaluation> optional nested element of lemma (which
                contains one or more of the following elements):

                <standard_tc_comment>  one or more of (e.g., name of
                    scribal lapsus; unexplained corrupt reading;
                    preferred reading; conflate reading
                    (with demarcation & origin of sub-elements in
                    conflation); the standard_tc_comment will be
                    discipline specific)

                <justification_of_standard_tc_comment>  nested within

                <freeform_comment> any prose, probably allow this
                    at any level of nesting

                <typological_placement> (optional element of lemma
                    as applicable to textual arena; genetic
                    or stemmatic placement if known; justification
                    for typological_placement as standard comment
                    and/or freeform; would make use of data in

    <alternate_reading> ....</> the alternate readings will have the same
         features, in general, as the <primary_reading>

    <emendation_reading> ...  </>  the emendations will have fewer of the
        features of the <primary_reading>, but include one or more of a
        set of principled_reason justifying the emendation; some scholars
        prefer to put emended readings in separate categories, while
        others (and more appropriately in some fields) would place
        emended readings on a par with <alternate_reading>; this
        makes sense in textual arenas where retroversions and other
        language-equivalents amount to guesses anyway.
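
    Pulled together, the scheme might look like the following skeletal
instance.  This is only an illustrative sketch, not a worked-out DTD:
the names are the provisional handles used above, while the containing
element name (<text_critical_unit>) and all attribute values are my
inventions for illustration, with most optional machinery elided:

```sgml
<!-- Sketch only: provisional names; <text_critical_unit> and all
     attribute values are invented for illustration -->
<text_critical_unit>
  <primary_reading>
    <primary_reading_document_name language="..."
        canonical_referencing_scheme_name="..."
        text_locus="..." script="...">...</primary_reading_document_name>
    <lemma>
      <lemma_content>...</lemma_content>
      <modern_language_translation_of_lemma>
          ...</modern_language_translation_of_lemma>
      <evaluation>
        <standard_tc_comment>preferred reading</standard_tc_comment>
        <freeform_comment>...</freeform_comment>
      </evaluation>
    </lemma>
  </primary_reading>
  <alternate_reading>...</alternate_reading>   <!-- same features as
                                                    primary_reading -->
  <emendation_reading>...</emendation_reading> <!-- plus one or more
                                                    principled_reasons -->
</text_critical_unit>
```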


    I do not have a strong opinion whether this text-critical information
should normally be held at the close-tag of the text_locus or in some
other file (sub-document) which is merged with the "text" prior to
processing, or in some completely separate document (linked to the
lemma-containing document via (non-?) SGML link mechanisms).  In any case the
precise SGML expression of the file would just be a flattened out form
once we decide which are tags and which are attributes.  If a document
has sparse, sporadic text-critical annotation, there would be nothing
wrong with putting the data at the text_locus.  I think I prefer the
notion of holding text-critical data in a separate file if the amount of
annotation is massive (more efficient buffered processing?).


    I may bring to Oxford some refinements/improvements on the above
scheme, and welcome alternative proposals from others.  Perhaps a couple
of examples will help.  Michael has sent me a sample text with variants
(with full DTDs) based upon SGML-ized EDMACS; I do not include it here, but hope
Michael (and perhaps Dominik) will bring these samples and proposals. I
especially solicit comments from Professor Ott.  I hope also to hear from
Professors Thaller and Huitfeldt in Oxford (who are reported to have
schemes for encoding textual variation), and I have yet to analyze a
sample of marked text from Bob Kraft.  I look forward to seeing the
detailed work of Peter Robinson (Hockey/Burnard) in connection with
COLLATE and OCP productions using SGML for textual variation.  Overall, I
feel that the referencing scheme is the hardest part.  The taxonomies for
inter-dependencies can be worked out by domain experts, and we should be
able to settle on the core terms/structures in Oxford.  The motivation
for a highly detailed and rich (but constrained) set of text-critical
annotations, obviously, is in support of the richness of the database.  I
look to Lou Burnard for recommendations on database.



The following sections of the Appendix attempt to explain why I feel that a
focus on the traditional "critical apparatus" is, at least in some
textual arenas, less appropriate for encoding than a focus upon the total
available body of text-critical knowledge.  I admit the probability that
this evaluation and its significance are of less moment in some literary
fields than in others.  The appendix has three parts:

A-1. A reworked HUMANIST posting stating the case generally
A-2. A summary of positive and negative appraisals of the traditional
        critical apparatus
A-3. A worked example from the standard critical edition of the Hebrew Bible



    I feel a lot of work remains to be done before we are prepared to
assess how we may best represent knowledge about "textual variation"
(textual evolution, textual parallels) using SGML markup languages or
other "portable" formalisms.  In the simplest textual arenas, or in the
event that someone wishes to represent in electronic format JUST what is
visible on a printed page of a critical edition, the challenge may not be
too difficult.  Indeed, several schemes are currently in use by scholarly
editing and text-processing systems which can be expressed in an SGML
language.  By "simple" textual arenas, I refer to: (a) cases in which all
textual witnesses are written in the same language and the same "scripts"
(= one level within a stratified orthographic system); (b) cases in which
the witnesses can be seen in close genetic/stemmatic relationship, not as
products of complex textual evolution through heavy recensional/editorial
activity; (c) cases in which the number of witnesses and amount of
necessary textual commentary represents a small body of information; (d)
cases in which one is not concerned about paleographic information and
other character-level annotations or codicological information.  In such
cases, the traditional critical apparatus serves literary scholarship
very well (the apparatus-region offers enough space for unambiguous
presentation of the information), and the SGML-ish representations are
fairly straight-forward.
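
    In such a simple arena, the whole apparatus entry can sit inline at
the locus.  A minimal sketch, reusing the provisional names from the
scheme above (the <textual_variant> wrapper and the witness attribute
are my illustrative inventions):

```sgml
<!-- Simple arena: one language, one script, few witnesses -->
...established the territories of the
<textual_variant>
  <primary_reading witness="MS A">people</primary_reading>
  <alternate_reading witness="MS B">peoples</alternate_reading>
</textual_variant>
according to the number...
```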

    But I think the assumptions above may not pertain to the work of a
significant number of humanities scholars.  The goal of encoding "JUST
what is visible on a printed page" (a traditional apparatus criticus, for
example) might constitute an important and economical step in the
creation of a text-critical database, if assumptions (b) and (c) and (d)
were also germane.  But when the textual data and published knowledge
about that "textual" data become very rich, the standard critical
apparatus represents (increasingly) a concession to the limitations of
the traditional paper medium: both physical space (the amount of
selectivity and condensation) and the ability of a reader to absorb
(synthesize, evaluate) large amounts of textual information in complex
relationships.  In these more complex situations (biblical studies, for
example), the paper app crit will contain a selection of data, not all
the data (excluding orthographic variants, for instance, which may be
important for historical linguistics); it will indicate THAT a certain
manuscript or manuscript tradition bears testimony to a certain reading,
but will not indicate the steps of principled evaluation which were used
to make this judgment (language retroversions, for example); it will tell
you THAT a certain manuscript tradition (e.g.,
"Syriac/Ethiopic/Arabic/Aramaic/Coptic" in support of a certain variant
of the Hebrew Bible) supports a given reading, but not which manuscripts
exactly, or where, precisely (machine-readable terms) these
Syriac/Ethiopic/Arabic/Aramaic/Coptic readings may be found, or what
expressions are actually used there.  Similarly, some editors (at some
periods) were narrowly focused on particular aspects of textual history,
textual variation and textual evolution, and systematically ignored the
"irrelevant" and "trivial" variations which contributed nothing to their
own interests.  Searches for texts of highest antiquity or authenticity,
for example, have often been given prominent and full representation in
critical apparatuses: this priority serves well the interests of certain
kinds of historical inquiry (search for the Urtext), but ignores data
which are valuable for understanding later traditions which inadvertently
or consciously "corrupted" the texts, perhaps for liturgical purposes.

    It is my opinion, then, that to model the "electronic critical
editions" of the 21st century after paper editions would, in some cases,
represent a short-sighted goal.  We no longer have to exclude "minor"
(sic!) orthographic variants (essential for some forms of historical or
comparative linguistics) from the databank just because they would render
a traditional app crit "unreadable" from the perspective of scholars
uninterested in orthography.  Rather than just "encoding" or "marking
up" modern critical editions (a necessary or desirable step, perhaps), we
need to think rather about representation of the knowledge about textual
variation, held in critical editions, to be sure, but also in textual
commentaries and in fully-encoded manuscripts (tablets, shards, lapidary
inscriptions, papyri or other primary documents) which themselves
constitute the primary data.  In short: we need the encoding of ALL the
human knowledge about physical texts, textual "variants" AND the
scholarly judgments about processes of textual evolution.  "Hypertext"
and "SGML-based" encoding can then be put to work in applications
software which allows us to study the text with multiple views, even
hypothetical documents created with the aid of an SQL/FQL and the text
critical database.  We may then dispense with the static (sometimes
overly selective, sometimes overfull, sometimes inaccurate) app crits and
instead enjoy dynamic user-specified app crits containing particular
classes of text-critical information we wish to see at a given moment; we
may have several different app-crits on the screen, simultaneously.  We
will be able to do simulations and test hypotheses by dynamically
querying hypothetical texts reconstructed from an FQL expression.

    It is also my judgment that we are quite a distance away from knowing
how to encode knowledge about textual relationships in which
inter-dependencies are complex ("variants," recensions, parallels,
allusions, quotations, evolutionary factors, hermeneutical-translational
factors).
But I think SGML embodies one indispensable ingredient in getting there:
encouraging us to assign unique names to objects in our textual universe,
and to other properties of text and textual relationships.  Our
conceptions about these textual (literary, linguistic) objects will
inevitably prove to be crude approximations, but by coding our current
understanding about them in syntactically rigorous ways (using
SGML-based languages), we at least contribute to a legacy of preserving the
text and our understanding of it.  This conception of encoding that is
self-documenting represents an advance upon the less thoughtful processes
of antiquity (and in some modern conceptions of text), which were usually



If the reader fails to feel the force of the tension I have
articulated above, it may be because the traditional "critical apparatus"
serves varying fields more or less well.  I will promote here the
argument (should be, "hypothesis") that generally speaking, the adequacy
of a critical apparatus (*adequacy conceived in MODERN terms*) will be
inversely proportional to the mass and complexity of relevant textual
data.    Here is an amazing situation: I have before me open copies of
six critical editions (gathered off my shelves at random)...

  - Hesiod's Theogony (Oxford/Clarendon, 1966/1982)
  - Hesiod's Works and Days (Oxford/Clarendon, 1970/1983)
  - Greek New Testament (Nestle-Aland, 26th edition)
  - Hebrew Bible (BHS, 1967)
  - Ms Neophyti I, Aramaic Targum to Genesis (Madrid, 1968)
  - Annals of Assurbanipal, King of Assyria (Leipzig, 1916)...

... and each of the six critical editions contains about the same
proportion of "critical apparatus" to "text" per page, though the Greek
NT and Targum have a slightly higher proportion of apparatus.  Does this
mean fortune has preserved for us an equivalent wealth of textual
evidence for inclusion in these critical apparatuses?  Hardly.  It means
that the scholarly convention called the "critical" edition typically
contains pages in which "text" makes up 1/2 - 3/4 of the page, usually at
the top, and the remainder of the page is free for "critical apparatus."
If the two Bibles included a level of textual detail (data/perspicuity)
and percentage of associated relevant facts equivalent to that contained
in the Hesiod and Assurbanipal volumes, no human would be able to lift
the book.  We may also reflect on the fact that most of these volumes, as
with most critical editions of literary texts, contain separate sections
or companion volumes of textual commentary: why separate sections or
companion volumes?

    Why separate sections in a critical edition volume, or companion
volume, for textual commentary -- when the same subject matter is
covered?  Because the traditional critical apparatus is, at the same
moment, both a useful and feeble (paper) convention.  I qualify "paper"
because in the age of hypertext we are no longer bound by some of the
debilitating features of linear text on paper, and the textual commentary
supplies one nice example.  Positively: The critical apparatus is a
powerful and useful scholarly convention, and (I suspect) will continue
to be so for the future, even when reproduced on the computer display in
character-for-character mimicry of the paper apparatus.  We feel sure
that encoding of text-critical information will enable scholars to
electronically produce far more complete, accurate and informative
critical apparatuses than in the past, whether for paper distribution or
for use in programs.  For textual traditions containing fewer than a
dozen witnesses, the app crit may contain exhaustive inventory of textual
variants (fully spelled out), including "mere" orthographic variants.
For textual traditions with a wealth of evolutionary history, the app
crit supplies an essential, digestible overview or summary of the "most
important" text-critical issues.

    Negatively: and negatively in direct proportion to the wealth of
textual tradition, the app crit is a feeble instrument in that it only
provides a "summary" or "overview" of the textual issues that are most
important in the editor's personal judgment.  The fact that the textual
commentary is isolated in a separate section, or in a companion volume,
is a concession to this problem of linear (paper) text.  Hypertext offers
the power to zoom up-and-down through our choice of detail, at any layer;
the textual commentary need not be tucked away somewhere else, and the
data of the apparatus need not be an "overview" unless we choose that
format.  The syntax of the apparatus need not be cryptic and ambiguous
(as is often the case in my field -- though this should not be
tolerated).  To the delight of many younger inter-disciplinary scholars,
the language of the apparatus need not be Latin (ahem...where the
scholarly world of humanities seems intent on outdoing the evil of the
medieval Church in keeping certain information...); the app crit may be
in EVERY language, if desirable.

    Consider, for example, that the app crit of the Hebrew Bible
(mentioned in the group above) frequently cites among its witnesses the
"Greek."  But in traditional terms (conventional amount of space allotted
to the app crit), this "citation" can never be more than a pointer to the
eclectic critical edition of the Greek text, which lives in some 18
separate volumes (still incomplete in the standard Goettingen LXX), which
in turn is built up through the investigation of its Greek witnesses and
daughter versions (Armenian, Arabic, Ethiopic, Coptic, Syriac, etc.) each
of which have their own critical editions.  Hypertext will allow us to
traverse this path, if the exact paths are charted (rather than citing
sigla of "traditions alluded to"), but gaining access to this data will
not fit (sensibly) within the traditional conception of a critical
apparatus.  A traditional critical apparatus must be "manageable," and
not take up more than a certain acceptable percentage of the page, and
one must be able to peruse it for a synthetic view.  Or consider again,
when a textual note in the Hebrew Bible says "&gothicQ + mult vrb" it
means, "go look for a Qumran manuscript containing a reading relevant to
this textual locus, and you'll see it adds lots of additional words."
The editor's failure to cite the words may indicate that he thinks they
are secondary, or it may indicate that the three additional sentences of
Hebrew will not fit on the page.  We may fault the editor for not being
more specific (or the editorial board for wanting to print the Hebrew
Bible in one volume) but within the constraints of the apparatus space,
there may indeed not be room for giving the Qumran reading.  There is not
space for giving dozens of other interesting variants.

    In short, the negative features of the app crit (even if these are
not theoretically-necessary negatives) disappear with cheap/compact
electronic storage and non-sequential access to the textual databank.
The principle of selectivity in a critical apparatus (or for any encoded,
annotated text) is essential, but the question is: WHOSE power to select?
In a fully marked text (all manuscripts, text-critical annotations,
linguistic annotations, bibliographic annotations, literary-critical
annotations), 99% of what's "in" the text will be garbage at any one
moment.  The power of selectivity needs to be handed to the user.
Scholars who are forced to work with inadequate critical editions should
not be encouraged to "encode" those critical editions; they should be
given guidelines and tools which make possible the encoding of
information, ultimately designed for a (relational?) database, from which
they can create their own critical apparatuses and make queries of the
sort that critical-apparatuses were never intended to answer.



Here follows an example of the deficiency of critical apparatus in cases
where the principle of selectivity and pressure for brevity work
decidedly against the usefulness of the apparatus.  Two rebuttals could
be offered (by anyone patient enough to read this example): "it's an
isolated, unrepresentative and extreme example," and "the editor should
be shot."  Neither is fair: I can find far worse examples (which I
sometimes use in instructing students that ad fontes does not mean the
critical apparatus!); and the editors were working under hopelessly
unrealistic constraints: "(something like) the critical apparatus shall
consume no more than 2 vertical inches at the bottom of the page..."

The standard edition of the Hebrew Bible (BHS) offers four textual notes
on Deuteronomy 32:8, where the text reads (according to the translated
Hebrew BHS text):

    When the Most High (Elyon) gave the nations their patrimony
        When he made a division for the sons of men
    He established the territories/boundaries of the people
        According to the number of the sons of Israel.

The fourth textual note on this verse in the BHS apparatus concerns the
expression "sons of Israel."  It's tough using IRV characters to indicate
what's happening here, but I will try in successive stages: (1) to
represent the textual note (2) to explain what's meant by the textual
note (3) to explain what's missing and wrong and inadequate from the
perspective of a machine-readable version.


&par; <superscript>d &mdash; d</superscript> &gothicQ;&gothicG;
&lambda;&omega;&nu &theta;&epsi;&ogr;&tilde</greekfont>)
</superscript> prb recte <hebrewfont1 direction=righttoleft>bny
</hebrewfont1><hebrewfont2 direction=righttoleft>&aleph;"l
</hebrewfont2> vel <hebrewfont1 direction=righttoleft>bny</hebrewfont1>
<hebrewfont2 direction=righttoleft>&aleph;"</hebrewfont2><hebrewfont2
direction=righttoleft>lym</hebrewfont2> &par

Comment: I don't mean this as an optimal or even legal SGML
representation, but simply an intelligible representation of surface/typographic
features; it's stupid to use "direction" as an attribute, but I did not
want to write rules; different representations of Hebrew in the app crit
will use different directions of writing.  In this textual note, the
orthographic stratum changes twice within Hebrew clauses, once within a
single Hebrew word.


&par  := indicates/delimits textual notes
<superscript>d &mdash; d</superscript> := the superscript d-d
    notation means that the textual note pertains to the text
    string in the main Hebrew text which is likewise delimited by
    superscript d-d in Roman characters
&gothicQ := Qumran
&gothicG := Old Greek
(...)    := Greek text meaning "angels of god"
<greekfont>&sigma;</greekfont>&acute; := Symmachus (late Greek reviser)
&gothicL;  := Old Latin
<superscript>Syh</superscript>  := Syrohexaplaric reading
prb recte  := editor's evaluative comment in latin
<hebrewfont1 direction=righttoleft>bny
    </hebrewfont1><hebrewfont2 direction=righttoleft>&aleph;"l
    </hebrewfont2>  := first Hebrew reading which "might be" the
    correct retroverted/restored reading, and means "sons of
    El/god"; the use of "hebrewfont1" and "hebrewfont2" is to
    signify two different levels of orthography, one without
    vowels and one with vowels; the lemma needs to be represented
    (tagged) as belonging to a third stratum of the orthography,
    having accentuation, designation of spirant allophones, etc.
    thus: <hebrewfont3 direction=righttoleft>b.:n"y71
    yi&sin;rF'"75l</>   (means: "sons of Israel")
vel  := editor's comment in latin
<hebrewfont1 direction=righttoleft>bny</hebrewfont1> <hebrewfont2
    direction=righttoleft>lym</hebrewfont2>  := the second
    possible Hebrew reading, according to the editor which might
    stand behind the readings of "Qumran," "Old Greek,"
    "Symmachus," "Old Latin" and "Syrohexapla"
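
For comparison, the same note recast (very roughly) into the provisional
scheme from the first half of this posting might look as follows.
Everything here is my illustrative guess: the wrapper name, the
attribute values, and the decision to treat the editor's two retroverted
readings as <emendation_reading>s are all open questions, and the
witness identifications are deliberately left as vague as the sigla
themselves:

```sgml
<!-- Rough recasting of BHS note d-d on Dt 32:8; names provisional -->
<textual_variant>
  <primary_reading>
    <lemma language="he" script="square"
           orthographic_stratum="tiberian">
      <lemma_content>bny y&sin;r'l</lemma_content> <!-- "sons of Israel" -->
    </lemma>
  </primary_reading>
  <alternate_reading witness="Qumran">...</alternate_reading> <!-- which MS? -->
  <alternate_reading witness="Old Greek">...</alternate_reading>
  <emendation_reading>bny 'l</emendation_reading>   <!-- "sons of El/god";
                                                         prb recte -->
  <emendation_reading>bny 'lym</emendation_reading> <!-- vel: the editor's
                                                         alternative -->
</textual_variant>
```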

        WHAT'S MISSING AND WRONG FROM THE MACHINE-READABLE POINT
        OF VIEW?

    (a) The convention of mapping the textual note to the text string
with "superscript d-d" works just fine for humans, but is not a legal
SGML representation.  If we wish to place such textual data in the flat
file loc cit, a number of possibilities are open: flag the lemma at both
ends with <textual_variant>....</> or whatever.  I question whether we
want to do this, since in some cases there will be literally pages of
"textual_variant" data for each of several words in a sentence.  An
alternative is to find a qualified referencing system (perhaps using id's
and refid's), and to hold text-critical data in a separate file.  Note
that when giving the Hebrew lemma, a third orthographic stratum
(different from the two in the app crit) must be used as a language
(script?) qualifier: the main text contains cantillation marks as well as
consonants, matres and vowels.  The application software will have to be
smart enough to convert readings from one orthographic stratum to another
in order to make comparisons.
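
An ID/REFID sketch of that alternative (the element names and the id
value are invented for illustration): the lemma in the main text carries
only a unique id, and the bulky text-critical data lives in a separate
file that points back at it:

```sgml
<!-- In the main text file: the lemma flagged with an id only -->
... <lemma id=dt32.8.d>bny y&sin;r'l</lemma> ...

<!-- In a separate text-critical file -->
<textual_variant refid=dt32.8.d>
  <alternate_reading>...</alternate_reading>
  <emendation_reading>...</emendation_reading>
</textual_variant>
```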

(b) The siglum &gothicQ for "Qumran" is meant for humans.  It means
that somewhere, sometime a manuscript was found in one of the Qumran
(Dead Sea) caves which bears some relationship (yet to be made clear) to
this text.  But which cave?  Which manuscript?  Where was it published?
Plate number?  Line number?  There are hundreds of published and
unpublished Qumran manuscripts.  What language (Hebrew, Aramaic, Greek,
-- all three languages are among the Qumran witnesses)?  Is this reading
in a biblical manuscript, or in a liturgical text, or quoted in a non
canonical apocalyptic work?  If I look up the siglum &gothicQ in the
introduction to the text, I am simply informed that "&gothicQ" refers to
discoveries made at Chirbet Qumran and published in the series DJD
(Discoveries in the Judaean Desert), 1960ff.  Now, if I didn't know any
better, I might hunt through all DJD volumes to find this Qumran reading
bearing on Deut 32:8, but would not find it.  The text is actually
published (yet not in full editio princeps) in two journal articles, as I
determine from other bibliographic research.  When I read the articles
and look at the only published photograph, I discover that the Qumran
reading (bny 'lhym) is actually *neither* of the two alternatives offered
by the editor, who proposed either "bny 'l" or "bny 'lym".  The correct
reading was published in 1959, though the editor of BHS did his work in
1972.  Thus, the siglum &gothicQ is entirely misleading, a mere cipher
alerting me to the existence of Qumran evidence for this text, which I
have to find for myself and read for myself.  The interpretation actually
given in the app crit is wrong.

(c) The siglum &gothicG means that the "Old Greek" (as determined through
careful sifting of hundreds of Greek manuscripts and daughter versions
dependent upon the Old Greek) has a bearing on the text at this lemma.
But since the Old Greek had no Hebrew reading at all, the citing of this
siglum should be accompanied by an indication of what the Old Greek
reading was, and whether its retroversion to Hebrew is assured, and with
what level of confidence, and on what grounds, and with what precise
Hebrew result.  It may be that the Greek reading which follows in
parenthesis constitutes part of that evidence: but there is no grammar
telling what "parentheses" means in this syntactic relationship; a human
would not know for sure what the parenthesized reading
(<greekfont>&alpha;&gamma;&gamma;&epsilon;&lambda;&omega;&nu; &theta;&epsilon;&omicron;&upsilon;&tilde;</greekfont>) "angels of god"
means for this textual variant.  Its presence between the Old Greek
siglum and that of Symmachus (another Greek witness) suggests that it
might pertain to the Old Greek tradition.  So we now turn to the standard
critical edition of the (Old) Greek Deuteronomy (the Goettingen
Septuagint volume, 1977) to explicate the ambiguity.  We find that the
eclectic text of the Goettingen LXX reads "sons of god" not "angels of
God."  A careful study of the apparatus of the Goettingen LXX yields a
confusing picture, so we turn to the companion volume of textual
commentary which supports the Goettingen Septuagint (Text History of the
Greek Deuteronomy. MSU XIII. 1978).  There we find that the reading of
the eclectic text in the critical edition is not attested in the extant
Greek manuscripts themselves, but is inferred from the
(derivative-of-Old-Greek) Armenian text and from a partial reading in one
very prestigious text in Greek (manuscript 848, dating to the middle of the
prestigious text in Greek (manuscript 848, dating to the middle of the
first century B.C.E.).  So it turns out, on further inspection, that the
reading apparently attached to "Old Greek" in BHS is wrong, or at best,
entirely misleading.  Again: the siglum served as a cipher alerting us
that the Old Greek tradition has relevant testimony bearing on this
Hebrew text, and we will have to find it and study it.

(d) The next siglum <greekfont>&sigma;</greekfont>&acute; says that the
late Greek reviser Symmachus has a reading which reflects one of the two
Hebrew alternatives suggested by the editor.  But what is Symmachus'
reading in Greek?  On what grounds can it be claimed to support one of
the two Hebrew readings which follow as the editor's proposals?  We are
not told: we must find a copy of Symmachus and make our own evaluation.

(e) The next siglum (&gothicL) tells us that Old Latin supports a Vorlage
like one of the two proposed Hebrew readings (or perhaps just a reading
in support of the (now exposed) "Old Greek"), but we cannot tell for sure.
In either case, can we trust this judgment?  What does Old Latin read (in
Latin)?  Is the Old Latin reading secure at this locus, and on what basis
is it claimed as support?  We must find the Old Latin ourselves and make
the evaluation.

(f) The superscript "Syh" following &gothicL is curious for being
superscripted (presumably a typographic error, but one that demands
investigation).  In any case, we are not told what "the" Syriac reading
is, in what manuscripts/authors it is found, or on what basis it can be
claimed to support the Old Greek (as indirect evidence) or one of the
Hebrew readings proposed by the editor.

(g) The two Hebrew readings proposed by the editor cannot be used
directly by software, either (a) as direct substitutes for the lemma or
(b) as alternatives to be compared with the lemma.  The lemma carries full
orthography for vocalization (vowels) and cantillation (tone, pitch), and
thus embeds 4 or 5 distinct strata of Hebrew orthography in one
particular script; the proposed Hebrew variants in this case use the same
script, but contain mixed orthographies, with only partial vocalization
and no cantillation.  One could propose that software absorb the burden
of identifying transcriptional systems and orthographic strata and then
generating normalizations, but I doubt this would be wise: if editors use
mixed orthography within words, the encoding should identify this.  Thus,
the encoding would require that the scholar supply linguistic information
which is only inferred (though easily by humans) in the critical
apparatus itself.
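
    To make the point concrete, here is one provisional sketch (the
element and attribute names are merely illustrative handles, not
proposals) of how a proposed variant such as the editor's "bny 'lym"
might carry its orthographic profile explicitly rather than leaving it
to human inference:

    <reading lang="heb" script="square" orthography="mixed"
             vocalization="partial" cantillation="none">bny 'lym</reading>

    Software comparing such a reading against a fully vocalized and
cantillated lemma would then know, without guessing, which orthographic
strata must be normalized away before any comparison is attempted.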

(h) Finally, the app crit supplies no information about witnesses that
support the lemma.  All Masoretic texts?  A majority?  How about the
Samaritan Pentateuch?  And where is the editor's explanation for the
lemma's reading "sons of Israel" in place of the alternative "sons of
(the) god(s)"?  What is the justification for the editor's judgment
"<emp>prb recte</>"?


The app crit for the 4th textual reading in BHS Deuteronomy 32:8 is
more like a footnote or pointer to external textual evidence than a
record of precise textual evidence.  The scholar must locate and evaluate
several other sources to determine what the alternate readings are, what
their inter-relationships are, and what credentials these readings have in
support of the editor's two alternatives.  From the perspective of a
traditional critical apparatus, the major elements are indeed present:
lemma, variant(s), witnesses.  But essential information is missing:
annotations to these witnesses and readings expressed in terms of objects
which can be pronounced, written-out, classified and counted.  An
encoding of this information in a form useful for analysis would need to
contain much more information.
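
    By way of summary, a provisional sketch (again, all names are
invented handles in the spirit of the scheme above, and the attribute
values merely restate what the discussion in (a)-(h) uncovered) of what
a fuller encoding of this one apparatus entry might need to carry:

    <app loc="Deut 32:8" n="4">
      <lemma wit="MT" lang="heb" orthography="full"
             gloss="sons of Israel">bny ysr'l</lemma>
      <rdg wit="Qumran" lang="heb" orthography="unvocalized"
           published="journal-articles">bny 'lhym</rdg>
      <rdg wit="OG" lang="grc" gloss="sons of god"
           retroversion="uncertain"
           grounds="Armenian version; ms 848">huioi theou</rdg>
      <rdg wit="Symmachus" lang="grc" status="reading-not-supplied">
      <rdg wit="OldLatin" lang="lat" status="reading-not-supplied">
      <note resp="encoder">BHS names no witnesses for the lemma</note>
    </app>

    Even this crude template makes visible how much of the information a
scholar must now reconstruct by hand would have to be stated explicitly
for the entry to be pronounced, written out, classified and counted by
software.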