Jul 1, 2014

static dimension checking

The hmatrix library includes an experimental strongly-typed interface (based on GHC's type literals extension) for automatic inference and compile-time checking of dimensions in matrix and vector operations. This prevents structural errors like trying to multiply two matrices with inconsistent sizes or adding vectors of different dimensions. The interface uses existential types to safely work with computations which produce data-dependent result types.

We import the library as follows (other language extensions may also be required):

{-# LANGUAGE DataKinds #-}

import GHC.TypeLits
import Numeric.LinearAlgebra.Static

data types

Vectors and matrices are defined using safe constructors:

a = row (vec4 1 2 3 4)
    ===
    row (2 & 0 & 7 & 1)

u = vec4 10 20 30 40

v = vec2 5 0 & 0 & 3 & 7
λ> a
(matrix
 [ 1.0, 2.0, 3.0, 4.0
 , 2.0, 0.0, 7.0, 1.0 ] :: L 2 4)
λ> u
(vector [10.0,20.0,30.0,40.0] :: R 4)
λ> v 
(vector [5.0,0.0,0.0,3.0,7.0] :: R 5)

We can promote the traditional hmatrix arrays to the new dimension-typed ones:

import qualified Numeric.LinearAlgebra as LA
λ> create (LA.vector [1..10]) :: Maybe (R 5)
Nothing
λ> create (LA.matrix 2 [1..4]) :: Maybe (L 2 2)
Just (matrix
 [ 1.0, 2.0
 , 3.0, 4.0 ] :: L 2 2)

For convenience, data can also be created from lists of elements and explicit type annotations:

b = matrix
    [ 2, 0,-1
    , 1, 1, 7
    , 5, 3, 1
    , 2, 8, 0 ] :: L 4 3

w = vector [1..10] :: R 10
λ> b
(matrix
 [ 2.0, 0.0, -1.0
 , 1.0, 1.0,  7.0
 , 5.0, 3.0,  1.0
 , 2.0, 8.0,  0.0 ] :: L 4 3)
λ> w
(vector [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0] :: R 10)

If the length of the input list does not match the declared size we get a run-time error:

λ> vector [1..5] :: R 3
*** Exception: R 3 can't be created from elements [1.0,2.0,3.0,4.0, ... ]
λ> matrix [1..5] :: L 2 2
*** Exception: L 2 2 can't be created from elements [1.0,2.0,3.0,4.0,5.0]

(Input data errors of this kind are relatively unimportant. The key feature of a strongly typed interface is that computations are proven to be structurally correct: given well-defined inputs, they will not produce run-time dimension consistency errors.)

Most of the functions that require a size argument in the traditional hmatrix interface are now generic. For instance:

c :: (KnownNat m, KnownNat n) => L m n
c = build (\r c -> r**2-c/2)
λ> c :: L 3 4
(matrix
 [ 0.0, -0.5, -1.0, -1.5
 , 1.0,  0.5,  0.0, -0.5
 , 4.0,  3.5,  3.0,  2.5 ] :: L 3 4)
λ> c :: Sq 5
(matrix
 [  0.0, -0.5, -1.0, -1.5, -2.0
 ,  1.0,  0.5,  0.0, -0.5, -1.0
 ,  4.0,  3.5,  3.0,  2.5,  2.0
 ,  9.0,  8.5,  8.0,  7.5,  7.0
 , 16.0, 15.5, 15.0, 14.5, 14.0 ] :: L 5 5)

The type signature is needed to make the definition size-polymorphic.
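
The same style works for size-generic helpers defined by the user. The following sketch (outerProd is a hypothetical name, not taken from the library) builds an outer product whose dimensions are taken from its arguments at each use site, just as for c above:

outerProd :: (KnownNat m, KnownNat n) => R m -> R n -> L m n
outerProd x y = col x <> row y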

The function disp pretty-prints the arrays with a given number of decimal digits:

λ> disp 3 (c :: L 3 5)
L 3 5
0.000  -0.500  -1.000  -1.500  -2.000
1.000   0.500   0.000  -0.500  -1.000
4.000   3.500   3.000   2.500   2.000

(For clarity we show the output of some examples below using hidden disp commands.)

Constant and diagonal arrays are compactly stored and many operations are efficiently performed without expansion:

λ> 7 :: L 3 5
(7.0 :: L 3 5)
λ> diag u
(diag 0.0 [10.0,20.0,30.0,40.0] :: L 4 4)
λ> eye + 2 :: Sq 4
(diag 2.0 [3.0,3.0,3.0,3.0] :: L 4 4)

The disp function shows the "expanded" logical structure:

λ> disp 5 (eye + 2 :: Sq 4)
L 4 4
3  2  2  2
2  3  2  2
2  2  3  2
2  2  2  3
λ> disp 5 (3 :: R 5)
R 5
3  3  3  3  3

safe matrix computations

The type system statically prevents structural errors:

λ> u
(vector [10.0,20.0,30.0,40.0] :: R 4)
λ> v
(vector [5.0,0.0,0.0,3.0,7.0] :: R 5)
λ> u + v
Couldn't match type ‘5’ with ‘4’
Expected type: R 4
  Actual type: R 5
In the second argument of ‘(+)’, namely ‘v’
In the first argument of ‘print’, namely ‘(u + v)’
λ> (u & 1) + v
(vector [15.0,20.0,30.0,43.0,8.0] :: R 5)
λ> (vec2 1 2 & 3 & 4) <.> u
300.0
λ> cross (vec2 1 2) (vec3 1 2 3)
Couldn't match type ‘2’ with ‘3’
Expected type: R 3
  Actual type: R 2
In the first argument of ‘cross’, namely ‘(vec2 1 2)’
In the first argument of ‘print’, namely
  ‘(cross (vec2 1 2) (vec3 1 2 3))’
λ> a
(matrix
 [ 1.0, 2.0, 3.0, 4.0
 , 2.0, 0.0, 7.0, 1.0 ] :: L 2 4)
λ> b
(matrix
 [ 2.0, 0.0, -1.0
 , 1.0, 1.0,  7.0
 , 5.0, 3.0,  1.0
 , 2.0, 8.0,  0.0 ] :: L 4 3)
λ> b <> a
Couldn't match type ‘2’ with ‘3’
Expected type: L 3 4
  Actual type: L 2 4
In the second argument of ‘(<>)’, namely ‘a’
In the first argument of ‘print’, namely ‘(b <> a)’
λ> a <> b
(matrix
 [ 27.0, 43.0, 16.0
 , 41.0, 29.0,  5.0 ] :: L 2 3)
λ> a #> v
Couldn't match type ‘5’ with ‘4’
Expected type: R 4
  Actual type: R 5
In the second argument of ‘(#>)’, namely ‘v’
In the first argument of ‘print’, namely ‘(a #> v)’
λ> a #> u
(vector [300.0,270.0] :: R 2)

Block matrices can be safely created and unspecified dimensions are automatically inferred:

λ> a ||| b
Couldn't match type ‘4’ with ‘2’
Expected type: L 2 3
  Actual type: L 4 3
In the second argument of ‘(|||)’, namely ‘b’
In the first argument of ‘print’, namely ‘(a ||| b)’
d = b ||| 10*b ||| col 1
          ===
       row range
λ> d
L 5 7
2  0  -1  20   0  -10  1
1  1   7  10  10   70  1
5  3   1  50  30   10  1
2  8   0  20  80    0  1
1  2   3   4   5    6  7
e = (a ||| a ||| 5)
          ===
    (7 :: L 3 12)
λ> e
L 5 12
1  2  3  4  1  2  3  4  5  5  5  5
2  0  7  1  2  0  7  1  5  5  5  5
7  7  7  7  7  7  7  7  7  7  7  7
7  7  7  7  7  7  7  7  7  7  7  7
7  7  7  7  7  7  7  7  7  7  7  7
mhomog m =     m ||| col 0
                 ===
           row 0 ||| (1 :: L 1 1)
λ> mhomog a
L 3 5
1  2  3  4  0
2  0  7  1  0
0  0  0  0  1

Array destructuring and element access are also safe:

λ> u
(vector [10.0,20.0,30.0,40.0] :: R 4)
λ> snd $ split u :: R 7
Couldn't match type ‘4 - p0’ with ‘7’
The type variable ‘p0’ is ambiguous
Expected type: (R p0, R 7)
  Actual type: (R p0, R (4 - p0))
In the second argument of ‘($)’, namely ‘split u’
In the first argument of ‘print’, namely ‘(snd $ split u :: R 7)’
λ> snd $ split u :: R 3
(vector [20.0,30.0,40.0] :: R 3)
λ> fst . headTail . snd . headTail $ (range :: R 4)
2.0
λ> let x = col range <> row range :: L 4 5
λ> x
L 4 5
1  2   3   4   5
2  4   6   8  10
3  6   9  12  15
4  8  12  16  20
λ> snd . splitCols . fst . splitRows $ x :: Sq 2 
L 2 2
4   5
8  10

Automatic inference of dimensions allows elegant definitions:

sumV v = v <.> 1
λ> sumV (vec2 1 2 & 3 & 100)
106.0
average :: (KnownNat n, 1<=n) => R n -> ℝ
average = (<.> (1/dim))
λ> average (vector [1..100] :: R 100)
50.5

The vector dim is a helpful generic constant which contains its own dimension:

λ> dim :: R 7
(7.0 :: R 7)

The type system statically prevents the attempt to take the average of an empty vector:

λ> let (u1 :: R 5, u2) = split (range :: R 5)
λ> sumV u1
15.0
λ> sumV u2
0.0
λ> average u1
3.0
λ> average u2
Couldn't match type ‘'False’ with ‘'True’
Expected type: 'True
  Actual type: 1 <=? 0
In the first argument of ‘print’, namely ‘(average u2)’

The function konst promotes a scalar to a generic constant array:

unitary v = v / konst (sqrt (v <.> v))
λ> unitary (vec4 1 1 0 1 & 1)
(vector [0.5,0.5,0.0,0.5,0.5] :: R 5)

warning

We must be careful with the Num instances and the append operator "&", since the numeric literals stand for constants of any dimension. Excess elements are always detected:

λ> cross (1 & 2 & 3) (1 & 2 & 3 & 4 & 5)
Couldn't match type ‘'False’ with ‘'True’
Expected type: 'True
  Actual type: 1 <=? 0
In the first argument of ‘(&)’, namely ‘1 & 2’
In the first argument of ‘(&)’, namely ‘1 & 2 & 3’
In the first argument of ‘(&)’, namely ‘1 & 2 & 3 & 4’

but the first literal may accidentally not be interpreted as a singleton:

λ> cross (1 & 2 & 3) (1 & 2)
(vector [1.0,1.0,-1.0] :: R 3)
λ> 1 & 2 :: R 5
(vector [1.0,1.0,1.0,1.0,2.0] :: R 5)

To avoid this pitfall we should start the chain with vec2, vec3, etc.
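
For instance, rewriting the previous cross example with vec2 turns the silent wrong result into a compile-time error (shown roughly; the exact message depends on the GHC version):

λ> cross (vec2 1 2 & 3) (vec2 1 2)
Couldn't match type ‘2’ with ‘3’
Expected type: R 3
  Actual type: R 2
In the second argument of ‘cross’, namely ‘(vec2 1 2)’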

linear algebra

The library provides a safe interface to the main linear algebra functions. For instance, linSolve only admits square coefficient matrices and returns Nothing if the system is singular:

λ> let m = matrix [1,2,3,5] :: L 2 2
λ> m #> (2 & 3)
(vector [8.0,21.0] :: R 2)
λ> linSolve m (col (8 & 21))
Just (matrix
 [  2.000000000000002
 , 2.9999999999999987 ] :: L 2 1)
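
The restriction to square coefficient matrices is checked at compile time. For example, passing the 4×3 matrix b defined above should be rejected with an error roughly like the following (exact wording depends on the GHC version):

λ> linSolve b (col (vec4 1 2 3 4))
Couldn't match type ‘3’ with ‘4’
Expected type: L 4 4
  Actual type: L 4 3
In the first argument of ‘linSolve’, namely ‘b’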

The function linSolve admits several right-hand sides. As an example we define an operator to work with a single right-hand side:

λ> let (|\|) m = fmap uncol . linSolve m . col
λ> m |\| (8 & 21)
Just (vector [2.000000000000002,2.9999999999999987] :: R 2)
λ> let m = matrix [1,2,2,4] :: L 2 2
λ> m |\| (8 & 21)
Nothing

If we don't mind the unhelpful error message produced by a singular input, the matrix inverse can be defined as

import Data.Maybe (fromJust)

inv :: KnownNat n => Sq n -> Sq n
inv = fromJust . flip linSolve eye
λ> inv (diag (vec3 1 2 4))
(matrix
 [ 1.0, 0.0,  0.0
 , 0.0, 0.5,  0.0
 , 0.0, 0.0, 0.25 ] :: L 3 3)
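
If we prefer to keep the failure explicit instead of relying on fromJust, a Maybe-returning variant is immediate (invMay is a hypothetical name):

invMay :: KnownNat n => Sq n -> Maybe (Sq n)
invMay m = linSolve m eye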

The operator (<\>) (equivalent to \ in Matlab/Octave) solves a general linear system in the least squares sense:

λ> let m = matrix [1,2,2,4,3,3] :: L 3 2
λ> m
(matrix
 [ 1.0, 2.0
 , 2.0, 4.0
 , 3.0, 3.0 ] :: L 3 2)
λ> let x = m <\> col (3 & 7 & 6)
λ> x
(matrix
 [ 0.6000000000000002
 , 1.4000000000000001 ] :: L 2 1)
λ> m <> x
(matrix
 [ 3.4000000000000004
 ,  6.800000000000001
 ,  6.000000000000001 ] :: L 3 1)

The thin variants of the SVD take into account the shape of the matrix:

λ>  p
L 3 5
 1   2   3   4   5
 6   7   8   9  10
11  12  13  14  15
λ> svdTall p
Couldn't match type ‘'False’ with ‘'True’
Expected type: 'True
  Actual type: 5 <=? 3
In the first argument of ‘print’, namely ‘(svdTall p)’
In a stmt of a 'do' block: print (svdTall p)
λ> let (u,s,v) = svdFlat p
λ> u
L 3 3
-0.20   0.89   0.41
-0.52   0.26  -0.82
-0.83  -0.38   0.41
λ> s
R 3
35.13  2.47  0.00
λ> v
L 5 3
-0.35  -0.69   0.57
-0.40  -0.38  -0.75
-0.44  -0.06  -0.17
-0.49   0.25   0.30
-0.53   0.56   0.05
λ> u <> diag s <> tr v
L 3 5
 1.00   2.00   3.00   4.00   5.00
 6.00   7.00   8.00   9.00  10.00
11.00  12.00  13.00  14.00  15.00
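
For a tall matrix svdTall is the appropriate variant; as a quick sanity check (a sketch reusing the norm_Frob reconstruction test from below) we can apply it to the transpose of p:

λ> let (u',s',v') = svdTall (tr p)
λ> norm_Frob (tr p - u' <> diag s' <> tr v') < 1e-10
True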

The eigensystem functions take into account whether the matrix is symmetric (or Hermitian) when determining the output type. In general, the result is complex:

λ> q
(matrix
 [ 3.0, -2.0
 , 4.0, -1.0 ] :: L 2 2)
λ> eigenvalues q
C 2
1+2.00i  1-2.00i
λ> snd $ eigensystem q
M 2 2
0.41+0.41i  0.41-0.41i
      0.82        0.82

However, if we take the symmetric part of the matrix the result is real:

λ> sym q
Sym (matrix
 [ 3.0,  1.0
 , 1.0, -1.0 ] :: L 2 2)
λ> eigenvalues (sym q)
R 2
3.24  -1.24
λ> snd $ eigensystem (sym q)
L 2 2
-0.97   0.23
-0.23  -0.97

(and the eigenvectors are orthogonal, but this is not encoded in the type).
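
We can still verify the orthogonality numerically; a small sketch (checkOrtho is a hypothetical helper, not part of the library):

checkOrtho :: KnownNat n => Sym n -> Bool
checkOrtho s = norm_Frob (tr v <> v - eye) < 1e-10
  where (_, v) = eigensystem s
λ> checkOrtho (sym q)
True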

existential dimensions

The size of the result of certain computations can only be known at run time. For instance, the dimension of the nullspace of a matrix depends on its rank, which is a nontrivial property of its elements:

t1 = matrix [ 1, 2, 3
            , 4, 5, 6] :: L 2 3

t2 = matrix [ 1, 2, 3
            , 2, 4, 6] :: L 2 3
λ> LA.nullspace (extract t1)
(3><1)
 [ 0.40824829046386285
 ,  -0.816496580927726
 , 0.40824829046386313 ]
λ> LA.nullspace (extract t2)
(3><2)
 [    0.9561828874675147, 0.11952286093343913
 , -4.390192218731975e-2, -0.8440132318358361
 ,   -0.2894596810309586,  0.5228345342460776 ]

One possibility is for the function to return a list of typed vectors (e.g. nullspace :: L m n -> [R n]). This can be the right choice in some applications, but if the result is required as a matrix in subsequent computations we may introduce an unsafe step into the algorithm.

By hiding the unknown dimension in an existential type we can still compute safely with a strongly typed nullspace. For instance, we can easily check that it actually produces an approximate zero matrix:

checkNull x n = norm_Frob (x <> n) < 1E-10
λ> withNullspace t1 (\x -> checkNull t1 x)
True
λ> withNullspace t2 (\x -> checkNull t2 x)
True

If by mistake we define checkNull as follows we get a compilation error:

λ> checkNull x n = norm_Frob (n <> x) < 1E-10
Could not deduce (k ~ 2)
from the context (KnownNat k)
  bound by a type expected by the context:
             KnownNat k => L 3 k -> Bool
  at :2:1-39
  ‘k’ is a rigid type variable bound by
      a type expected by the context: KnownNat k => L 3 k -> Bool
      at :2:1
Expected type: L 3 2
  Actual type: L 3 k
Relevant bindings include n :: L 3 k (bound at :2:20)
In the second argument of ‘checkNull’, namely ‘n’
In the expression: checkNull t2 n

As another example, if we try to compute the following function we find that the existential dimension would escape its scope:

λ> withNullspace t2 (\x -> tr x <> x)
Couldn't match expected type ‘a0’ with actual type ‘L k k’
  because type variable ‘k’ would escape its scope
This (rigid, skolem) type variable is bound by
  a type expected by the context: KnownNat k => L 3 k -> a0
  at static.aux.hs:495:39-72
Relevant bindings include
  x :: L 3 k (bound at static.aux.hs:495:58)
In the expression: tr x <> x
In the second argument of ‘withNullspace’, namely
  ‘(\ x -> tr x <> x)’
In the first argument of ‘print’, namely
  ‘(withNullspace t2 (\ x -> tr x <> x))’

We probably meant the (admittedly not very meaningful) opposite operation:

λ> withNullspace t2 (\x -> x <> tr x)
L 3 3
 0.929  -0.143  -0.214
-0.143   0.714  -0.429
-0.214  -0.429   0.357
λ> withNullspace t1 (\x -> x <> tr x)
L 3 3
 0.167  -0.333   0.167
-0.333   0.667  -0.333
 0.167  -0.333   0.167

The result is of the same size for the two inputs and known at compile time.

Note that from the withNullspace function we can trivially define the list-based version:

λ> let ker x = withNullspace x toColumns
λ> ker t1
[(vector [0.40824829046386285,-0.816496580927726,0.40824829046386313] :: R 3)]
λ> length (ker t2)
2

Another interesting example of the existential approach is the withCompactSVD factorization, which produces a triplet whose common inner dimension depends on the rank of the matrix and therefore cannot be known at compile time.

checkSVD m (u,s,v) = norm_Frob (m - u <> diag s <> tr v)
λ> withCompactSVD t1 $ checkSVD t1
3.789423922623494e-15

By mistake we could write

λ> checkSVD m (u,s,v) = norm_2 (m - u <> diag s <> v)

This definition makes sense for appropriately consistent dimensions of m, u, s, and v, but when we try to use it with a "true" compact SVD decomposition we get a compilation error:

λ> withCompactSVD t2 (checkSVD t2)
Could not deduce (k ~ 3)
from the context (KnownNat k)
  bound by a type expected by the context:
             KnownNat k => (L 2 k, R k, L 3 k) -> ℝ
  at :4:1-31
  ‘k’ is a rigid type variable bound by
      a type expected by the context:
        KnownNat k => (L 2 k, R k, L 3 k) -> ℝ
      at :4:1
Expected type: (L 2 k, R k, L 3 k) -> ℝ
  Actual type: (L 2 k, R k, L k k) -> ℝ
In the second argument of ‘withCompactSVD’, namely ‘(checkSVD t2)’

Finally, the functions withVector and withMatrix wrap the traditional hmatrix vectors and matrices with existential dimensions so that they can be safely used by other strongly typed computations.
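
For instance, withVector hides the run-time length of an ordinary hmatrix vector behind an existential dimension, so it can be consumed by any size-generic function such as the sumV defined earlier (a small sketch):

λ> withVector (LA.vector [1..7]) sumV
28.0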

As an example we can check the above SVD decomposition on a big matrix read from a file:

λ> let datafile = "path/to/data/mnist.txt"
λ> mnist <- LA.loadMatrix datafile
λ> mnist
5000x785
0.0  0.0  0.0  0.0  0.0  0.0  0.0  .. 0.0  5.0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  .. 0.0  0.0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  .. 0.0  4.0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  .. 0.0  1.0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  .. 0.0  9.0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  .. 0.0  2.0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  .. 0.0  1.0
 :    :    :    :    :    :    :   ::  :    : 
0.0  0.0  0.0  0.0  0.0  0.0  0.0  .. 0.0  1.0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  .. 0.0  2.0
λ> withMatrix mnist $ \x -> withCompactSVD x (checkSVD x)
6.062944964780807e-10

type inference

As a final example of a computation with strongly typed dimensions, we solve an overconstrained homogeneous system $Ax=0$. The solution is the right singular vector of minimum singular value, or equivalently, and for more interesting type checking, the eigenvector of minimum eigenvalue of $A^TA$.
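
(The equivalence comes from minimizing the residual over unit vectors: $\min_{\|x\|=1}\|Ax\|^2 = \min_{\|x\|=1} x^TA^TAx$, which is attained at the eigenvector of $A^TA$ associated with its smallest eigenvalue.)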

We consider the following coefficient matrices:

a1 = matrix [  0,  0, 1  
            ,  1,  2, 0
            , -2, -4, 0 ] :: L 3 3

a2 = matrix [  0,  0,   1
            ,  1,  2,   0
            , -2, -4,   0
            ,  3,  6.1, 0] :: L 4 3

the first case has an exact solution:

λ> withNullspace a1 toColumns 
[(vector [-0.894427190999916,0.447213595499958,0.0] :: R 3)]

but the second one is overconstrained ($\text{rank}(A) > n-1$):

λ> withNullspace a2 toColumns 
[]

The desired least squares nullspace function null1 solves both cases:

λ> null1 a1
(vector [-0.8944271909999159,0.4472135954999579,0.0] :: R 3)
λ> null1 a2
(vector [-0.8963282782819243,0.4433910436084173,0.0] :: R 3)

It can be defined very easily:

null1 :: (Dim m, Dim n) => L m (1+n) -> R (1+n)
null1 = uncol . snd . splitCols . snd . eigensystem . mTm

The function mTm x = tr x <> x returns a type-checked symmetric matrix, so snd . eigensystem is guaranteed to be a real matrix of eigenvectors. Then, the final uncol . snd determines the size of the partition produced by splitCols on the eigenvectors.
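
For a concrete input size the intermediate types of the pipeline can be traced as follows (a sketch for an argument of type L 4 3, i.e. m = 4 and n = 2; eigensystem returns the eigenvalues in decreasing order, so the last column is the eigenvector of minimum eigenvalue):

--  mTm         :: L 4 3 -> Sym 3
--  eigensystem :: Sym 3 -> (R 3, L 3 3)
--  snd         :: (R 3, L 3 3) -> L 3 3
--  splitCols   :: L 3 3 -> (L 3 2, L 3 1)
--  snd         :: (L 3 2, L 3 1) -> L 3 1
--  uncol       :: L 3 1 -> R 3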

If alternatively we had defined the function as the last element of a list of eigenvectors, we would have to check that the list is not empty (or that the matrix $A$ has at least one column).
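
Such a list-based variant could look like the following sketch (null1' is a hypothetical name; last is partial, so an empty list of eigenvectors would be a run-time error):

null1' :: (KnownNat m, KnownNat n) => L m n -> R n
null1' = last . toColumns . snd . eigensystem . mTm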

This condition is statically enforced by the type system in the definition of null1 above. For clarity we have created an auxiliary class Dim for the somewhat redundant constraints required:

class (KnownNat (1 + p), KnownNat p, ((1 + p) - p) ~ 1, (p <= (1 + p))) => Dim (p :: Nat)
instance (KnownNat (1 + p), KnownNat p, ((1 + p) - p) ~ 1, (p <= (1 + p))) => Dim p

(Those constraints are inferred by ghci, and perhaps they say that the dimension must be finite...)

safe indexing

Although this library is oriented towards global array processing, sometimes we must read a particular element of a vector or matrix. If range checking is to be done at compile time the index cannot be a value; it must be a type. A possible approach to generic type-safe indexing is as follows:

import Data.Proxy (Proxy(..))

data Coord (n :: Nat) = Coord

atV :: forall n k . (Dim n, Dim k, k+1<=n) => R n -> Coord k -> ℝ
atV v _ = extract v LA.! pos
  where
    pos = fromIntegral . natVal $ (undefined :: Proxy k)

atM :: forall m n i j . (Dim m, Dim n, Dim i, Dim j, i+1<=m, j+1 <=n)
   => L m n -> Coord i -> Coord j -> ℝ
atM m _ _ = extract m `LA.atIndex` (pi,pj)
  where
    pi = fromIntegral . natVal $ (undefined :: Proxy i)
    pj = fromIntegral . natVal $ (undefined :: Proxy j)

cx = Coord :: Coord 0
cy = Coord :: Coord 1
cz = Coord :: Coord 2
λ> let v = vector [0..4] :: R 5
λ> atV v (Coord :: Coord 3)
3.0
λ> atV v (Coord :: Coord 7)
Couldn't match type ‘'False’ with ‘'True’
Expected type: 'True
  Actual type: (7 + 1) <=? 5
In the expression: atV v (Coord :: Coord 7)
In an equation for ‘it’: it = atV v (Coord :: Coord 7)
λ> atV v cz
2.0
λ> atV (vec3 10 20 30) cz
30.0
λ> atV (vector [] :: R 0) cx
Couldn't match type ‘'False’ with ‘'True’
Expected type: 'True
  Actual type: (0 + 1) <=? 0
In the expression: atV (vector [] :: R 0) cx
In an equation for ‘it’: it = atV (vector [] :: R 0) cx
λ> atM (matrix [1..12] :: L 3 4) cz cy
10.0
λ> atM (matrix [1..12] :: L 3 4) cx cx
1.0
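
Out-of-range matrix indices are rejected in the same way (error shown roughly as reported by GHC):

λ> atM (matrix [1..12] :: L 3 4) cx (Coord :: Coord 4)
Couldn't match type ‘'False’ with ‘'True’
Expected type: 'True
  Actual type: (4 + 1) <=? 4
In the expression: atM (matrix [1..12] :: L 3 4) cx (Coord :: Coord 4)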