# 9. Heron’s Rule and Integer-Area Triangles

By Eugene McDonnell. First published in Vector, 12, 3 (January 1996), 133-142.

### I. Preliminaries

This note makes use of several less-well-known parts of J: the fix (f.) and Taylor series coefficient (t.) adverbs and the polynomial rootfinder verb (p.).

To make what follows accessible to all readers, these verbs are defined:

```
C =: @  NB. compose       f C g x <-> f g x
D =: %  NB. divide        18 D 3  <-> 6
H =: -: NB. halve         H 12    <-> 6
I =: ]  NB. identity      I 6     <-> 6
P =: */ NB. product       P 1 2 3 <-> 6
R =: %: NB. square root   R 36    <-> 6
S =: +/ NB. sum           S 1 2 3 <-> 6
```

The following convention applies to verbs f, g, and h:

```
   (f g h) y <-> (f y) g (h y) NB. (%: , *:) 16 <-> 4 256
```
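For readers following along outside J, the fork convention can be mimicked in Python; the `fork` helper below is my own illustration, not part of the article:

```python
import math

# A J fork (f g h) applies f and h to the argument and combines
# the results with g:  (f g h) y  <->  (f y) g (h y)
def fork(f, g, h):
    return lambda y: g(f(y), h(y))

# (%: , *:) 16 <-> 4 256 : pair the square root with the square.
pair = fork(math.sqrt, lambda l, r: (l, r), lambda y: y * y)
print(pair(16))  # (4.0, 256)
```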

### II. Heron's Rule

Heron's rule for the area A of a triangle with sides a, b, and c is usually written in two steps. First the semi-perimeter s is computed:

```
   s =: (a + b + c) D 2
```

For example:

```
   (13 + 14 + 15) D 2
42       D 2
21
```

And then the following expression for the area is computed:

```
   A =: R (s * (s - a) * (s - b) * (s - c))
```

Continuing the example:

```
   R (21 * (21 - 13) * (21 - 14) * (21 - 15))
R (21 *     8     *     7     *     6    )
R 7056
84
```

Heron's is a scalar-oriented formula, with the lengths of the three sides and the semi-perimeter playing separate roles in the formulation. We make a first approach to an array formulation by considering the triangle to be defined by a three-item list of side lengths. We then determine the semi-perimeter by a verb SP:

```
   SP =: H C S
```

The next step is to replace the three explicit subtractions by appending a zero to the list and subtracting the resulting four values from the semi-perimeter, then taking the product over this result, and finally the square root of the product.

```
   Heron =: R C (P C (SP - (0 , I)))
```

This is a slightly more efficient form than APL expression 318 in the FinnAPL Idiom Library.

```
   Heron 13 14 15
84
```
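As a cross-check in another language, here is a Python sketch of the same append-a-zero formulation (the name `heron` is mine, not from the article):

```python
import math

def heron(sides):
    # Semi-perimeter s; appending 0 to the sides makes s - 0 supply
    # the leading factor s in s * (s-a) * (s-b) * (s-c).
    s = sum(sides) / 2
    prod = 1.0
    for x in [0] + list(sides):
        prod *= s - x
    return math.sqrt(prod)

print(heron([13, 14, 15]))  # 84.0
```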

Fixing the definition of Heron, and giving this fixed version the name Hrn, by using the fix adverb (f.) yields a form in which the names of defined items are replaced by their values. Doing this ensures that changes in the items defined do not alter the definition of the item in which they are used. As a side effect, a fixed verb is generally faster than an unfixed equivalent.

```
   Hrn =: Heron f.
   Hrn
%:@(*/@(-:@(+/) - 0 , ]))

   Hrn 13 14 15
84
```

### III. Integer Heron

In preparing examples for Heron's formula, I thought it would make them clearer if I could find triangles having integer sides that also had integer areas. I explored the first 200 triplets of consecutive integers. The first step was to build the table of triplets:

```
   T =: 3 ]\ i. 202
```

The table has 200 rows:

```
   $ T
200 3
```

Its first and last four rows are:

```
   4 _4 {."0 _ T
0   1   2
1   2   3
2   3   4
3   4   5

196 197 198
197 198 199
198 199 200
199 200 201
```

Applying Hrn to the rows of this table gives a list of areas.

```
   Areas =: Hrn"1 T
```

We determine which of the areas are integral (those which are equal to their own floor):

```
   Mask =: (= <.) Areas
```

And use the mask so found to give us the winning rows of T:

```
   Winners =: Mask # T
   Winners
1   2   3
3   4   5
13  14  15
51  52  53
193 194 195
```

The areas corresponding to these are:

```
   Hrn"1 Winners
0 6 84 1170 16296
```
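The whole search can be sketched in Python; the variable names are my own, but the logic mirrors the J session above:

```python
import math

# Search consecutive-integer triplets (n, n+1, n+2), n < 200,
# keeping those whose Heron area is integral.
winners = []
for n in range(200):
    t = (n, n + 1, n + 2)
    s = sum(t) / 2
    p = s * (s - t[0]) * (s - t[1]) * (s - t[2])
    if p < 0:
        continue  # the row 0 1 2 is not a triangle at all
    area = math.sqrt(p)
    if area == math.floor(area):
        winners.append(t)

print(winners)
# [(1, 2, 3), (3, 4, 5), (13, 14, 15), (51, 52, 53), (193, 194, 195)]
```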

The triangle 1 2 3 is degenerate (ugh!).

### IV. A Recursive Formula

I looked at Winners for some clue as to how the series could be prolonged, but without success. Then I thought of N. J. A. Sloane's book A Handbook of Integer Sequences. I looked in it for the series 1 3 13 51 193 without avail. Then, knowing that each series in the book began with 1, which was sometimes prefixed to a series which began naturally with some other integer, I looked for 1 2 4 14 52 194 and 1 3 5 15 53 195, but again to no avail. Finally, I divided the even column 2 4 14 52 194 by 2, looked for 1 2 7 26 97, and this time struck pay dirt. It was Sloane's sequence 700.

Sloane's entry for series 700 not only gave a number of additional values but, more importantly, it gave a doubly recursive formula for finding the values, in common mathematical notation:

```
   A(n) = 4 A(n - 1) - A(n - 2)
```

So now I was able to extend the series as far as I wanted to. I wrote a J version of A:

```
   A =: ((4*A@<:)-(A@<:@<:))`>:@.(<&2)
```

which, as you can see, is doubly recursive in A. It tests whether the argument is less than 2 (<&2), giving one plus the argument (>:) as result in these cases, and otherwise yields four times A of n-1 (<:) minus A of n-2 (<:<:).
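The same doubly recursive definition can be sketched in Python:

```python
def A(n):
    # A(n) = 4*A(n-1) - A(n-2), with arguments below 2 yielding
    # one plus the argument, so A(0) = 1 and A(1) = 2.
    if n < 2:
        return n + 1
    return 4 * A(n - 1) - A(n - 2)

print([A(n) for n in range(6)])  # [1, 2, 7, 26, 97, 362]
```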

I calculated additional results of A for arguments 6 through 14, derived triplets from the results, applied Hrn to the triplets, and in each case found an integer area. But was I satisfied? No.

### V. Generating Functions

O them doddhunters and allanights, aabs and baas for agnomes, yees and zees for incognits, bate him up jerrybly! James Joyce, Finnegans Wake, p. 283

The reason the story carries on is that I was unhappy with the long execution times required by the deeply rooted calling trees of the double recursion. (The memo adverb M. improves performance, but that is another story.) I had to terminate the execution of A 30 after five hours with no result. My mind turned to the subject of generating functions, something I had often heard about and often, with little or no success, had tried to master. I was stimulated to do this because of three books. These were K. E. Iverson's new book Concrete Math Companion, the Ronald Graham, Donald E. Knuth, and Oren Patashnik book Concrete Mathematics, which I shall refer to as GKP, and most of all, the H. S. Hall and S. R. Knight book Higher Algebra, first edition 1887, and usually referred to simply as Hall & Knight, worthy successor to Todhunter's Algebra for Schools and Colleges. Both of these last two books are celebrated by James Joyce in the mathematics chapter of his Finnegans Wake.

Iverson's new book and GKP focus sharply on generating functions. From GKP I learned a four-step process that promised to allow me to have my will with arbitrary generating functions. I plodded through their examples, and tried to duplicate their results on my problem. No luck. I turned to Iverson's book, and found out one important thing that GKP had neglected to tell me, that is, that the key to generating functions was the ability to generate the coefficients of Taylor series, something that J is well suited for, since it contains a primitive (t.) to do just that. However, that is about all I was able to learn there. Lastly, I got out my rusty red copy of Hall & Knight, and it came through. The examples they gave were of the same kind as mine, that is, they dealt primarily with doubly recursive functions, where the nth term is some linear combination of the two preceding terms. Their explanations were carefully laid out in great detail.

Here is how they go about it.

Given a sequence with a sufficient number of terms, it is possible to describe how to extend the sequence arbitrarily. The first thing to do is to get rid of the notion that we are dealing with a mere list of numbers. Instead we think of the list as being the coefficients in a polynomial with a never-ending set of terms, that is, an infinite series. Thus the list:

```
   1 2 7 26 97 ...
```

in fact defines the first several coefficients of the infinite series:

```
   (1*y^0) + (2*y^1) + (7*y^2) + (26*y^3) + (97*y^4) +  ...
```

This is where Hall and Knight lost me. They say, from out of the blue, that each term after the second is equal to the sum of the two preceding terms multiplied respectively by the constants _1 and 4. Thus

```
   7 = (_1*1) + (4*2)
```

or

```
   7 = _1 4 +/ . * 1 2
```

This implies that if we take any three consecutive terms r, s, t, they are related by:

```
   t = (_1 * r) + (4 * s)
```

which can be rewritten as:

```
   0 = (1 * r) + (_4 * s) + (1 * t)
```

In this equation the coefficients

```
   1 _4 1
```

of r, s, and t form the scale of relation of the infinite series. They are the coefficients of a quadratic polynomial written in ascending order:

```
   (1*y^0) + (_4*y^1) + (1*y^2)
```

Now this is all stated baldly in Hall & Knight, and I was thoroughly lost. How does one find the scale of relation, and what is the point of it?

Luckily, the authors soon give the game away, noting that if a sufficient number of the terms of a series be given, the scale of relation may be found, and proceed to show how to do just that.

Suppose the first four terms of the series are, in order, a, b, c, and d. Assume then that the general term is arrived at by multiplying the two preceding terms by p and q, and adding. We are able to write the following pair of equations:

```
   c = (p*a) + (q*b)
   d = (p*b) + (q*c)
```

and then it is a simple matter to solve this linear system for p and q by writing

```
   'p q' =: (c,d) %. (a,b),:(b,c)
```

For example, if a, b, c, and d are 1 2 7 26 we write

```
   ]'p q' =: 7 26 %. 1 2,:2 7
_1 4
```

and we can form the scale of relation by appending a 1 to the negative of these:

```
   ]s =: 1 ,~ - 7 26 %. 1 2 ,: 2 7
1 _4 1
```

A pretty way to write this in J is to form a table t as follows:

```
   ]t =: 2 ]\ y =: 1 2 7 26
1  2
2  7
7 26
```

and then we can write

```
   ({: %. }:)t
_1 4
```

Thus we can form the scale of relation using the verb scr:

```
   scr =: 1 ,~ [: - [: ({: %. }:) 2 ]\ ]
```

And use it to get our scale of relation:

```
   scr y
1 _4 1
```
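The same derivation can be sketched in Python, solving the two-by-two system by Cramer's rule rather than J's matrix divide; `scale_of_relation` is my own name for it:

```python
def scale_of_relation(seq):
    a, b, c, d = seq[:4]
    # Solve c = p*a + q*b and d = p*b + q*c for p and q
    # (Cramer's rule on the matrix (a,b),:(b,c)).
    det = a * c - b * b
    p = (c * c - b * d) / det
    q = (a * d - b * c) / det
    # The scale of relation lists the coefficients of r, s, t
    # in 0 = (-p * r) + (-q * s) + (1 * t).
    return (-p, -q, 1)

print(scale_of_relation([1, 2, 7, 26]))  # (1.0, -4.0, 1)
```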

What can we do with a scale of relation? Suppose we take the itemwise product of the scale of relation and any three successive terms of the infinite series, say 7 26 97:

```
   ]a =: 1 _4 1 * 7 26 97
7 _104 97

   +/ a
0
```

This sum will always be zero as a consequence of the way the infinite series and the scale of relation are interrelated. Consequently, if we do the polynomial multiplication of the scale of relation with the infinite series beginning with 1 2 7 26, we find that all the terms after the first two are zero:

```
     1     2     7    26    97   362 ...
     1    _4     1
------------------------------------ ...
     1     2     7    26    97   362 ...
          _4    _8   _28  _104  _388 ...
                 1     2     7    26 ...
------------------------------------
     1    _2     0     0     0     0 ...
```

Since all the terms of this product after the first two are zero, and since we can ignore trailing zeros in a list of polynomial coefficients, we find that the infinite product of the scale of relation and the infinite series reduces to the linear polynomial:

```
   (1 * y ^ 0) + (_2 * y ^ 1)
```

In practice it is difficult to represent or work with infinite series, so we enable the process by using only the first two terms of the series. We can then do the polynomial multiplication of these two terms with the scale of relation, and take only the first two terms of the resulting product. Ordinary polynomial multiplication is given by:

```
   pm =: +//. @ (*/)
```

and our special infinite series multiplication by this scale of relation polynomial is given by:

```
   spm =: 2 {. pm
   1 _4 1 spm 1 2 7 26
1 _2
```
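In Python, pm is an ordinary coefficient convolution and spm keeps the first two coefficients of the product; the names follow the J definitions above:

```python
def pm(c1, c2):
    # Polynomial multiplication: sum the anti-diagonals of the
    # outer product of the coefficient lists (J: +//. @ (*/)).
    out = [0] * (len(c1) + len(c2) - 1)
    for i, x in enumerate(c1):
        for j, y in enumerate(c2):
            out[i + j] += x * y
    return out

def spm(scale, series):
    # With the full infinite series every coefficient after the
    # first two vanishes; a truncated series only zeroes the
    # interior terms, so keep just the leading pair (J: 2 {. pm).
    return pm(scale, series)[:2]

print(spm([1, -4, 1], [1, 2, 7, 26]))  # [1, -2]
```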

The next thought to convey to you is the most important one in the whole paper, so PAY ATTENTION!

Let me write the situation schematically:

```
Infinite series x Scale of relation
-----------------------------------  <->  Infinite series
         Scale of relation
```

That is, if I multiply and divide the infinite series by the scale of relation, I end up with the infinite series. But I know the numerator is simply a linear polynomial. So I can substitute the linear polynomial for the numerator and write:

```
Linear polynomial
----------------- <-> Infinite series
Scale of relation
```

This suggests that an infinite series of the kind we are describing can be represented as a rational polynomial: its numerator is the linear polynomial found as the product of the infinite series with its scale of relation, and its denominator is the scale of relation itself. This rational polynomial is fully equivalent to the infinite series. By this chicanery I have managed to encapsulate the whole infinite series in a rational polynomial.

In J we represent a polynomial by a list of coefficients c bonded to the polynomial primitive p., that is,

```
   c & p.
```

is a polynomial with coefficients c.

We can thus represent an infinite series by the rational polynomial function gf, using its product with the scale of relation as the numerator, and the scale of relation as the denominator.

```
   gf =: 1 _2&p. % 1 _4 1&p.
```

There isn't much we can do directly with gf, since the only meaningful arguments for it are those which make the infinite series converge, so we are restricted, if that is what we want to do, to arguments smaller in magnitude than the radius of convergence, 2 - %: 3 (about 0.268). But that isn't what we want to do. We are only interested in the coefficients of the terms in the series, and J provides us with the tool needed to find these, and that is the Taylor coefficient adverb (t.). Thus if we apply t. to gf, and apply this derived function to any nonnegative integer argument, the result will be the corresponding coefficient:

```
   gf t. i. 12
1 2 7 26 97 362 1351 5042 18817 70226 262087 978122
```
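Without J's t., the same coefficients fall out of a short Python sketch: matching powers of x in (1 - 2x) = c(x) * (1 - 4x + x^2) shows that the denominator enforces the recurrence directly (the name `gf_coeffs` is mine):

```python
def gf_coeffs(n):
    # Taylor coefficients of (1 - 2x) / (1 - 4x + x^2).
    # Matching powers of x gives c[0] = 1, c[1] - 4*c[0] = -2,
    # and c[k] = 4*c[k-1] - c[k-2] for k >= 2.
    c = [1, 2]
    while len(c) < n:
        c.append(4 * c[-1] - c[-2])
    return c[:n]

print(gf_coeffs(12))
# [1, 2, 7, 26, 97, 362, 1351, 5042, 18817, 70226, 262087, 978122]
```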

Compared with the doubly recursive verb A, the time required by gf t. is significantly less and its advantage in speed increases rapidly with the size of the argument.

I estimate A 30 would have taken 15 hours to complete on my computer, versus the 1.2 seconds taken by gf t. 30.

### VI. Partial Fractions

Hall & Knight discuss the relevance of partial fractions in handling recurrences, and work through some examples. This leads to the ability to derive an even simpler expression for the general term of the series.

The method works as follows: separate the generating function into a sum of partial fractions with constant numerators and linear denominators. That is, find constants a, b, A, and B such that:

```
        A      B
gf <-> ---- + ----         (1)
       1-ax   1-bx
```

The constants a and b are the roots of the scale of relation quadratic polynomial. These can be obtained using the polynomial rootfinder primitive, which is the monad of the verb p., by

```
   ]'a b' =: , > }. p. 1 _4 1
3.73205 0.267949
```

You might recognize these roots as

```
   2 + %: 3 and 2 - %: 3
```

In (1) the denominators can be removed by multiplying each term by the scale of relation, giving:

```
   1 _2&p. <-> (A * (1 , -b)&p.) + (B * (1 , -a)&p.)
```

This linear system can be solved for A and B by writing

```
   ]'A B' =: 1 _2 %. 1 1 ,:  -(b,a)
0.5 0.5
```

and now we can write a function gt:

```
   gt =: (A * a^]) + (B * b^])
   gt i. 10
1 2 7 26 97 362 1351 5042 18817 70226
```

The function gt is 5 times faster than gf t.

But wait! Since B is one half and b is less than one, the right-hand term B * b^n always lies between zero and one half, so it isn't really needed -- we can drop it and take the ceiling (>.) of the left-hand term instead. And since A is 0.5, we can replace the multiplication by halving (-:), giving us an even simpler expression:

```
   gtt =: >. @ -: @ (a & ^)
   gtt i. 10
1 2 7 26 97 362 1351 5042 18817 70226
```
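A Python rendering of the shortcut confirms the ceiling trick, with the constant a = 2 + %: 3 as above:

```python
import math

a = 2 + math.sqrt(3)  # the dominant root, about 3.73205

def gtt(n):
    # The general term is 0.5*a^n + 0.5*b^n with 0 < 0.5*b^n <= 0.5,
    # so the second summand only rounds the first up to the next
    # integer: take the ceiling of half a^n (J: >. @ -: @ (a & ^)).
    return math.ceil(a ** n / 2)

print([gtt(n) for n in range(10)])
# [1, 2, 7, 26, 97, 362, 1351, 5042, 18817, 70226]
```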

The function gtt is twice as fast as gt.
