The original code for The Poetry Processor was written by Paul Holzer, a financial analyst and programmer in New York. One principal challenge was to devise a system for dividing words into syllables correctly without constant reference to a database table -- which would slow the program down. In the February 1986 issue of Byte, Holzer explained the solution, providing an insight into the programmer's art:

The algorithm (a special set of sequential calculations) is based on a systematic application of what appear to be the general rules by which English words break into syllables:

1. Break the word up into a sequence of alternating vowel and consonant groupings. Thus microcomputer becomes m/i/cr/o/c/o/mp/u/t/e/r. Whenever there is a vowel or group of contiguous vowels, there will be a syllable. We need only assign the neighboring consonants to the syllable on the right or on the left.

2. If the first vowel group has a consonant group to its left, then assimilate this consonant group to the vowel group. This leads, in our example, to mi/cr/o/c/o/mp/u/t/e/r.

3. If the final vowel group has a consonant group to its right, then assimilate this consonant group to the vowel group. We now get mi/cr/o/c/o/mp/u/t/er.

4. For the remaining unassigned consonants, do the following:

a. If the consonant stands alone, attach it to the following vowel. Thus we get mi/cr/o/co/mp/u/ter.

b. If there are two consonants, split them: mic/ro/com/pu/ter.

c. If there are three consonants, then: (i) If there is a doubled consonant, split the pair; thus apply becomes ap/ply. (ii) If there is no doubled consonant, but the first of the three consonants is n, r or l, then split between the second and third consonants. (iii) In all other cases, split between the first and second consonants.
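The four steps above can be sketched in Python. This is a reconstruction from the rules as stated, not Holzer's original code; in particular, treating y as a vowel and extending rule 4c to groups longer than three consonants are assumptions:

```python
VOWELS = set("aeiouy")  # treating y as a vowel is an assumption;
                        # Holzer's actual classification may differ

def split_groups(word):
    """Step 1: break the word into alternating vowel/consonant groups."""
    groups = []
    for ch in word:
        if groups and (groups[-1][-1] in VOWELS) == (ch in VOWELS):
            groups[-1] += ch        # same class as previous letter: extend
        else:
            groups.append(ch)       # class changed: start a new group
    return groups

def split_consonants(c):
    """Step 4: divide an interior consonant group between two syllables."""
    if len(c) == 1:
        return "", c                # 4a: a lone consonant goes right
    if len(c) == 2:
        return c[0], c[1]           # 4b: split a pair down the middle
    for j in range(len(c) - 1):     # 4c(i): split at a doubled consonant
        if c[j] == c[j + 1]:
            return c[: j + 1], c[j + 1:]
    if c[0] in "nrl":               # 4c(ii): n, r or l keeps a partner
        return c[:2], c[2:]
    return c[:1], c[1:]             # 4c(iii): all other cases

def syllabify(word):
    groups = split_groups(word.lower())
    syllables, current, i = [], "", 0
    if groups and groups[0][0] not in VOWELS:
        current, i = groups[0], 1   # step 2: leading consonants attach right
    while i < len(groups):
        g = groups[i]
        if g[0] in VOWELS:
            current += g            # a vowel group anchors a syllable
        elif i == len(groups) - 1:
            current += g            # step 3: trailing consonants attach left
        else:
            left, right = split_consonants(g)
            syllables.append(current + left)
            current = right
        i += 1
    if current:
        syllables.append(current)
    return syllables
```

Run on the worked example, syllabify("microcomputer") yields mic/ro/com/pu/ter, and syllabify("apply") yields ap/ply, matching the article.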

Before applying the algorithm, however, we must pre-process the initial string of letters in order to take into account certain peculiarities of English orthography:

1. Final e is silent (with certain exceptions); treat it as a special consonant.

2. Translate many two-letter sequences into special single consonants -- e.g., sh, th, gu, qu and ck.

3. Identify common suffixes. For example, the algorithm applied to blameless would yield bla/me/less. However, when less is removed as a suffix, then the e in blame would be recognized as silent, yielding blame/less.

4. Identify some prefixes. For example, if en is recognized as a prefix, then enact becomes en/act rather than e/nact.
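The pre-processing pass can be sketched as follows. The placeholder characters, the digraph table, and the suffix list here are all illustrative assumptions -- the article does not reproduce Holzer's actual tables:

```python
SILENT_E = "\x01"   # placeholder that the grouping pass treats as a consonant
DIGRAPHS = {"sh": "\x02", "th": "\x03", "qu": "\x04", "ck": "\x05"}
SUFFIXES = ("less", "ness", "ment")   # illustrative list, not Holzer's

def preprocess(word):
    """Split off a known suffix, mark a final silent e, fold digraphs."""
    word = word.lower()
    stem, suffix = word, ""
    for suf in SUFFIXES:                      # rule 3: common suffixes
        if word.endswith(suf) and len(word) > len(suf) + 2:
            stem, suffix = word[: -len(suf)], suf
            break
    if stem.endswith("e") and len(stem) > 2:  # rule 1: final e is silent
        stem = stem[:-1] + SILENT_E
    for digraph, mark in DIGRAPHS.items():    # rule 2: digraph -> one consonant
        stem = stem.replace(digraph, mark)
    return stem, suffix
```

On "blameless" this strips less first, so the e of blame is final again and gets marked silent -- the blame/less case the article describes. The placeholders would be mapped back to their original spellings after syllabification.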

It seems to be impossible to come up with a reasonably small set of rules and pre-processing steps to guarantee correct syllabification of all words. Two examples will illustrate the difficulties:

1. Compound words: The algorithm will not detect the silent e in snake within the compound word snakebite unless the fragment bite is recognized as a word or treated as a suffix. Avoiding the problem would require extensive word- or suffix-table lookups.

2. Successive vowels in different syllables: In reach, the ea is a single vowel sound, and the algorithm would treat it correctly. In react, we pronounce the e and a separately, and the correct syllabification is re/act. Were the algorithm modified to isolate re as a prefix, it would treat react correctly, but turn reach into re/ach.
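The prefix trade-off is easy to see in a toy sketch (the prefix list here is an illustrative assumption):

```python
PREFIXES = ("re", "en")   # illustrative list, not Holzer's

def split_prefix(word):
    """Peel off a recognized prefix before the main algorithm runs."""
    for pre in PREFIXES:
        if word.startswith(pre) and len(word) > len(pre) + 1:
            return pre, word[len(pre):]
    return "", word
```

With re in the table, react correctly becomes re + act -- but reach becomes re + ach, exactly the failure the paragraph describes. A blind prefix table cannot distinguish the two without knowing how the following vowels are pronounced.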