Fifty Shades of J/Chapter 9
Principal topics: ^: (power conjunction), _ (infinity), = (self-classify), matrix product, eigenanalysis, left binding, normalisation, convergence, transition matrices, population forecasts.
The Power Conjunction
The power conjunction is one of the most versatile computational tools in J. At the same time there are traps for the unwary which must be avoided. Consider the following statement:
- 2*^:(4)1 and 1*^:(4)2 mean "multiply 1 by 2 four times" and "multiply 2 by 1 four times" respectively, which are the same thing because multiplication is commutative, that is, 1 times 2 is the same as 2 times 1.
Obviously true? No. The right operand is used only for the first power step. Following steps use the result of the previous step for the right argument. But when there is a left argument, it is used for every step.
   '2 * ',^:(4)'1'
2 * 2 * 2 * 2 * 1
   '1 * ',^:(4)'2'
1 * 1 * 1 * 1 * 2
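The feeding-back rule can be mimicked outside J. Here is a minimal Python sketch (apply_power is our own name, not anything in J) of the dyadic case x u^:n y, in which the left argument is re-supplied at every step while the right argument of each step is the previous result:

```python
import operator

def apply_power(u, x, n, y):
    # the dyadic case x u^:n y : x is re-used at every step, while the
    # right argument of each step is the previous result
    for _ in range(n):
        y = u(x, y)
    return y

print(apply_power(operator.mul, 2, 4, 1))   # 2 * 2 * 2 * 2 * 1 = 16
print(apply_power(operator.mul, 1, 4, 2))   # 1 * 1 * 1 * 1 * 2 = 2
```

The two results, 16 and 2, show concretely why the opening claim fails.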
Next, here is the standard matrix product verb to allow some work with matrices
   mp=.+/ .*   NB. matrix product
To self-test your understanding of the power conjunction, cover up all but the next two lines and answer the question: what do the following do to a matrix m?

   mp~^:2 m
   mp~^:3 m

If your answer was that they raise it to the fourth and eighth powers respectively – well done! In general mp~^:n raises a matrix to the power 2^n on account of the way in which each result is ‘fed back’ to the next power step.
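The doubling behaviour is easy to confirm in Python (a sketch; mat_mul and reflex_power are our own names):

```python
def mat_mul(a, b):
    # plain-list matrix product, the analogue of mp=.+/ .*
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def reflex_power(m, n):
    # the analogue of mp~^:n m : each step multiplies the current result
    # by itself, so the exponent doubles at every step
    r = m
    for _ in range(n):
        r = mat_mul(r, r)
    return r

m = [[4, 1], [-2, 1]]
print(reflex_power(m, 2))   # [[146, 65], [-130, -49]], the fourth power of m
print(reflex_power(m, 3))   # [[12866, 6305], [-12610, -6049]], the eighth
```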
Suppose you didn’t want to go up in quite such big leaps, how about these?
   m&mp^:2 m
   m&mp^:3 m
This time the answers are the third and fourth powers of m. In general m&mp^:n raises m to the power n+1, which is irritatingly ‘one out’ if you are looking for a tidy verb for m^n.
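The ‘one out’ effect can likewise be checked in Python (bound_power is our own name for the m&mp^:n pattern):

```python
def mat_mul(a, b):
    # plain-list matrix product, the analogue of mp=.+/ .*
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def bound_power(m, n):
    # the analogue of m&mp^:n m : the bound left argument m is re-supplied
    # at every step, so n steps give m to the power n+1
    r = m
    for _ in range(n):
        r = mat_mul(m, r)
    return r

m = [[4, 1], [-2, 1]]
print(bound_power(m, 2))   # [[46, 19], [-38, -11]], the cube of m
print(bound_power(m, 3))   # [[146, 65], [-130, -49]], the fourth power
```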
To find such a verb, go back to the scalar case 2*^:(4)1 which raised 2 to the power 4. The key ingredient is the 1, that is the identity of multiplication. A verb which gives the identity matrix of the same shape as any square matrix is id=.=@i.@# , following which a verb which raises a matrix x to the power y is
   id=:=@i.@#
   power=:dyad def 'x mp^:y id x'
in which y may be a list, so that several powers of a matrix can be obtained simultaneously. For example
   ]m2=:2 2$4 1 _2 1
 4 1
_2 1
   m2 power 10 9
 117074  58025
_116050 _57001

  38854  19171
 _38342 _18659
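The same start-from-the-identity scheme translates directly into Python (a sketch for a single exponent; identity and mat_power are our names):

```python
def mat_mul(a, b):
    # the analogue of mp=.+/ .* for plain nested lists
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def identity(n):
    # the analogue of id=:=@i.@#
    return [[int(i == j) for j in range(n)] for i in range(n)]

def mat_power(m, k):
    # x mp^:y id x for a single exponent k: start from the identity
    # and multiply by m at every step
    r = identity(len(m))
    for _ in range(k):
        r = mat_mul(m, r)
    return r

m2 = [[4, 1], [-2, 1]]
print(mat_power(m2, 10))   # [[117074, 58025], [-116050, -57001]]
```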
Now compute the ratios of two successive powers:
   ratio=:%/@power
   m2 ratio 10 9
3.01318 3.02671
3.02671 3.05488
This demonstrates that the items in the above matrix all tend to a common value, which is in fact the largest eigenvalue of the matrix – for m2 it happens to be 3. Eigenanalysis at this early stage – heady stuff! Press on and use this value 3 to obtain a value for the smallest eigenvalue:
3+(m2-3*id m2)ratio 20 19 2 2 2 2
You don’t have to be exact with the multiplier 3, but the closer you are to the true eigenvalue, the faster the convergence:
   5+(m2-5*id m2)ratio 20 19
 1.9991 1.99955
1.99955 1.99977
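Both tricks – the ratio of successive powers and the shift that isolates the smallest eigenvalue – can be checked in Python (ratio and the helper names are ours):

```python
def mat_mul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def mat_power(m, k):
    r = [[int(i == j) for j in range(len(m))] for i in range(len(m))]
    for _ in range(k):
        r = mat_mul(m, r)
    return r

def ratio(m, hi, lo):
    # elementwise quotient of two successive powers, the analogue of ratio=:%/@power
    a, b = mat_power(m, hi), mat_power(m, lo)
    return [[x / y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

m2 = [[4, 1], [-2, 1]]
print(ratio(m2, 10, 9))   # entries drift toward the largest eigenvalue, 3
shift = [[x - 3 * (i == j) for j, x in enumerate(row)] for i, row in enumerate(m2)]
print([[3 + r for r in row] for row in ratio(shift, 20, 19)])   # [[2.0, 2.0], [2.0, 2.0]]
```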
As an aside – in the verb definition id=:=@i.@# you might think that the = is equals, but bearing in mind that the left conjunct of @ must be monadic, = in this definition is self-classify, which returns an equals table for the nub of a list y versus itself.
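Self-classify can be sketched in Python too (nub and self_classify are our names): the rows of the result are the distinct items of y in order of first appearance, compared for equality against all of y, so for the all-distinct list i.n the table is the n-by-n identity.

```python
def self_classify(y):
    # equality table of the nub (distinct items, order of first appearance)
    # of y against y itself, the analogue of monadic =
    nub = list(dict.fromkeys(y))
    return [[int(u == v) for v in y] for u in nub]

print(self_classify([0, 1, 2]))   # i.3 is all-distinct: the 3-by-3 identity
print(self_classify("abca"))      # [[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0]]
```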
The above experiments need not be confined to two-by-two matrices, for example
   ]m3=:3 3$1 1 1 1 2 1 1 1 3
1 1 1
1 2 1
1 1 3
   m3 ratio 10 9
4.21425 4.21415 4.21442
4.21415 4.21392 4.21456
4.21442 4.21456 4.21418
   4+(m3-4*id m3)ratio 20 19
0.324939  0.32441 0.325842
 0.32441 0.327887 0.318417
0.325842 0.318417 0.338313
showing that the largest and smallest eigenvalues of m3 are about 4.214 and 0.325. Given that the sum of eigenvalues is the sum of the leading diagonal, the third eigenvalue is readily calculated as 1.461. As for the eigenvectors, it is convenient first to define a verb based on the hook %>./ which normalises a list so that its largest item is 1, or in the case of a matrix, it does this for each row separately
   norm=:%>./"1
   norm m3
        1         1   1
      0.5         1 0.5
0.3333333 0.3333333   1
Now observe the behaviour of the successive (normalised) powers of m3 as they converge:
   norm m3 power 10
0.525439 0.688915 1
0.525454 0.688947 1
0.525412  0.68886 1
Each row when fully converged is simply an eigenvector corresponding to the largest eigenvalue, and repeating the previous ‘smallest eigenvalue’ wrinkle
   norm 4+(m3-4*id m3) power 20
       1 _0.481039 _0.194043
_2.07377         1  0.400721
_2.08754         1  0.407959
results in rows each of which converges to the eigenvector corresponding to the smallest eigenvalue.
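What the chapter is doing here is ordinary power iteration: multiply by the matrix, rescale so the largest item is 1, repeat. A Python sketch (power_iteration and mat_vec are our names) reproduces the converged row and the largest eigenvalue:

```python
def mat_vec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def power_iteration(m, v, steps):
    # multiply by m, then rescale so the largest item is 1 -- exactly the
    # behaviour watched in the successive values of norm m power n
    for _ in range(steps):
        v = mat_vec(m, v)
        big = max(v)
        v = [x / big for x in v]
    return v

m3 = [[1, 1, 1], [1, 2, 1], [1, 1, 3]]
v = power_iteration(m3, [1.0, 1.0, 1.0], 60)
print(v)     # converges to roughly 0.5254 0.6889 1
lam = mat_vec(m3, v)[2] / v[2]
print(lam)   # the largest eigenvalue, about 4.214
```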
It is not of course suggested that this is the basis of a systematic way of calculating eigenvalues and eigenvectors. For one thing there are numerous special cases which have to be looked out for, and the choice of adjustment factor for the smallest eigenvalue must in general be made with some care.
The value of the power conjunction for obtaining, say, quick population estimates is readily apparent. Suppose that transm is a transition matrix giving the proportions of juveniles, workers and elderly who change from one status to the next within any one year, so that, for example, 94.71% of juveniles remain juveniles at the end of the year, the birth rate for the workers group is 5.52% and the death rate within the elderly group is 5.7%.
   ]transm=:3 3$0.9471 0.0552 0 0.0379 0.9682 0 0 0.0207 0.943
0.9471 0.0552     0
0.0379 0.9682     0
     0 0.0207 0.943
If the current population (in millions) is 11, 38 and 9 for the three sectors, the predicted values in 20 years time are
   transm mp^:20 [ 11 38 9
26.7554 31.9539 11.1334
As before, the ratio between successive powers shows convergence to the largest eigenvalue
   transm ratio 100 99
1.00458  1.0046     0
 1.0046 1.00459     0
 1.0049 1.00457 0.943
This is approximately 1.005, which is the instantaneous overall rate of population increase.
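Both the 20-year projection and the long-run growth rate can be cross-checked in Python (a sketch; mat_vec is our name):

```python
def mat_vec(m, v):
    # matrix-vector product, the analogue of mp with a vector right argument
    return [sum(a * b for a, b in zip(row, v)) for row in m]

transm = [[0.9471, 0.0552, 0.0],
          [0.0379, 0.9682, 0.0],
          [0.0,    0.0207, 0.943]]

pop = [11.0, 38.0, 9.0]     # juveniles, workers, elderly, in millions
for _ in range(20):         # the analogue of transm mp^:20 [ 11 38 9
    pop = mat_vec(transm, pop)
pop20 = pop
print(pop20)                # close to the chapter's 26.7554 31.9539 11.1334

for _ in range(80):         # run on to year 100, where growth has settled
    pop = mat_vec(transm, pop)
growth = mat_vec(transm, pop)[0] / pop[0]
print(growth)               # close to the largest eigenvalue, about 1.0046
```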
For a matrix which has a column with only one non-zero item, that item is necessarily one of the eigenvalues, and the others are the eigenvalues of the matrix obtained by removing its row and column. Thus 0.943 is an eigenvalue of transm and the remaining eigenvalue is the smallest eigenvalue of
0.9471 0.0552
0.0379 0.9682
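For a 2-by-2 matrix the eigenvalues come straight from the quadratic formula; a hand-rolled Python check (not part of the chapter's J code) confirms the two values quoted above and in the ratio table:

```python
import math

# the 2-by-2 block of transm left after removing the elderly row and column
a, b, c, d = 0.9471, 0.0552, 0.0379, 0.9682
tr, det = a + d, a * d - b * c
disc = math.sqrt(tr * tr - 4 * det)     # quadratic formula for the
l1, l2 = (tr + disc) / 2, (tr - disc) / 2   # characteristic polynomial
print(l1, l2)   # about 1.00459 and 0.91071
```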
Power and Infinity
Returning to the power conjunction, there is another smart feature of J, namely ‘function raised to the power infinity’. As observed earlier, function values are repeatedly fed back as input, and if this process ultimately leads to convergence, then ^:_ will deliver the converged value. For example, given the cosine verb cos=:2&o. , successive values of cos y, cos(cos y), cos(cos(cos y))… starting from an initial value of 0 are given by
   cos^:(i.6)0
0 1 0.540302 0.857553 0.65429 0.79348
and where these converge, as they do for cos, the converged value is the solution of the equation cos(y)=y.
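The same fixed point drops out of a plain Python loop (a sketch of what cos^:_ computes; the iteration count 100 is ample, since each step shrinks the error by a factor of about 0.67):

```python
import math

x = 0.0
for _ in range(100):    # cos applied over and over, like cos^:_ starting from 0
    x = math.cos(x)
print(x)                # 0.739085..., the solution of cos(y) = y
```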
This feature is useful in the sort of population study which concerns, for example, the spread of AIDS within a third world population. Suppose that the current population is 60 million and an 8% birth rate is assumed, a percentage which is subject to a downward adjustment of 10⁻⁷ per unit of population to account for pressures of space, food shortage and so on. Assume also that there is a net annual emigration of 10,000. Forecasting future population levels is then a matter of repeatedly exercising a polynomial defined by
   p=:#.&_0.000000001 1.08 _10000
so that the projected population in 20 years’ time is given by p^:20(60000000).
The infinity option gives the ultimate equilibrium population, when the forces of environmental pressure and migration exactly balance the birth rate:
   p^:_ (60000000)
7.98748e7
that is, equilibrium is attained at about 80 million.
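The polynomial bonded into p is −10⁻⁹y² + 1.08y − 10000, so the equilibrium can be cross-checked with a plain Python loop (a sketch of what p^:_ computes; 2000 steps is far more than the contraction needs):

```python
def p(y):
    # the polynomial the J verb p evaluates: -1e-9*y^2 + 1.08*y - 10000
    return -1e-9 * y * y + 1.08 * y - 10000

y = 60000000.0
for _ in range(2000):   # iterate to the fixed point, like p^:_
    y = p(y)
print(y)                # about 7.98748e7
```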
Clearly there’s a lot of power in J!
Code Summary

power=: dyad : 'x mp^:y id x'   NB. matrix x to power y
mp=: +/ .*                      NB. matrix product
id=: =@i.@#                     NB. identity matrix
ratio=: %/@power                NB. ratio of successive powers
norm=: %>./"1                   NB. normalises each row of a matrix