Code golf a random orthogonal matrix

Haskell, 169 150 148 141 132 131 bytes

import Numeric.LinearAlgebra
z=(unitary.flatten<$>).randn 1
r 1=asRow<$>z 1
r n=do;m<-r$n-1;(<>diagBlock[m,1]).haussholder 2<$>z n

Recursively extend an orthogonal matrix of size n-1 by adding a 1 in the lower right corner and applying a random Householder reflection. randn gives a matrix of random values drawn from a Gaussian distribution, and z d gives a uniformly distributed unit vector in d dimensions.
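
A minimal NumPy sketch of the same recursion (not the golfed Haskell above; the helper names are mine):

import numpy as np

def rand_unit(d):
    # Uniformly distributed unit vector: normalize a Gaussian sample
    v = np.random.randn(d)
    return v / np.linalg.norm(v)

def rand_orthogonal(n):
    m = rand_unit(1).reshape(1, 1)              # base case: the 1x1 matrix (+1 or -1)
    for d in range(2, n + 1):
        v = rand_unit(d)
        h = np.eye(d) - 2 * np.outer(v, v)      # random Householder reflection I - 2*v*v^T
        padded = np.zeros((d, d))
        padded[:d-1, :d-1] = m                  # previous orthogonal matrix ...
        padded[d-1, d-1] = 1                    # ... with a 1 added in the lower right corner
        m = h @ padded                          # product of orthogonal matrices is orthogonal
    return m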

haussholder tau v returns the matrix I - tau*v*vᵀ, which for tau = 2 is orthogonal only when v is a unit vector; hence the normalization in z.
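
A quick NumPy check of that claim for tau = 2 (an illustrative snippet, not part of the answer):

import numpy as np

v = np.random.randn(4)                          # almost surely not a unit vector
H = np.eye(4) - 2 * np.outer(v, v)
print(np.allclose(H.T @ H, np.eye(4)))          # False

u = v / np.linalg.norm(v)                       # unit vector
H = np.eye(4) - 2 * np.outer(u, u)
print(np.allclose(H.T @ H, np.eye(4)))          # True: a proper Householder reflection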

Usage:

*Main> m <- r 5
*Main> disp 5 m
5x5
-0.24045  -0.17761   0.01603  -0.83299  -0.46531
-0.94274   0.12031   0.00566   0.29741  -0.09098
-0.02069   0.30417  -0.93612  -0.13759   0.10865
 0.02155  -0.83065  -0.35109   0.32365  -0.28556
-0.22919  -0.41411   0.01141  -0.30659   0.82575
*Main> (<1e-14) . maxElement . abs $ tr m <> m - ident 5
True

Python 2+NumPy, 163 bytes

Thanks to xnor for pointing out that normally distributed random values should be used instead of uniformly distributed ones.

from numpy import*
n=input()
Q=random.randn(n,n)
for i in range(n):
 for j in range(i):u=Q[:,j];Q[:,i]-=u*dot(u,Q[:,i])/dot(u,u)
Q/=(Q**2).sum(axis=0)**0.5
print Q

Uses Gram-Schmidt orthogonalization on a matrix of Gaussian random values, so that every direction is equally likely (the standard normal distribution is rotationally invariant).
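
An ungolfed sketch of the same pass, assuming NumPy (written for Python 3, unlike the golfed answer above):

import numpy as np

def random_orthogonal(n):
    q = np.random.randn(n, n)                   # Gaussian entries: column directions are uniform
    for i in range(n):
        for j in range(i):                      # classical Gram-Schmidt: remove the components
            u = q[:, j]                         # along every earlier (not yet normalized) column
            q[:, i] -= u * np.dot(u, q[:, i]) / np.dot(u, u)
    return q / np.sqrt((q ** 2).sum(axis=0))    # normalize all columns at the end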

For the demonstration below, the code is followed by

print dot(Q.transpose(),Q)

n=3:

[[-0.2555327   0.89398324  0.36809917]
 [-0.55727299  0.17492767 -0.81169398]
 [ 0.79003155  0.41254608 -0.45349298]]
[[  1.00000000e+00   0.00000000e+00   0.00000000e+00]
 [  0.00000000e+00   1.00000000e+00  -5.55111512e-17]
 [  0.00000000e+00  -5.55111512e-17   1.00000000e+00]]

n=5:

[[-0.63470728  0.41984536  0.41569193  0.25708079  0.42659843]
 [-0.36418389  0.06244462 -0.82734663 -0.24066123  0.3479231 ]
 [ 0.07863783  0.7048799   0.08914089 -0.64230492 -0.27651168]
 [ 0.67691426  0.33798442 -0.05984083  0.17555011  0.62702062]
 [-0.01095148 -0.45688226  0.36217501 -0.65773717  0.47681205]]
[[  1.00000000e+00   1.73472348e-16   5.37764278e-17   4.68375339e-17
   -2.23779328e-16]
 [  1.73472348e-16   1.00000000e+00   1.38777878e-16   3.33066907e-16
   -6.38378239e-16]
 [  5.37764278e-17   1.38777878e-16   1.00000000e+00   1.38777878e-16
    1.11022302e-16]
 [  4.68375339e-17   3.33066907e-16   1.38777878e-16   1.00000000e+00
    5.55111512e-16]
 [ -2.23779328e-16  -6.38378239e-16   1.11022302e-16   5.55111512e-16
    1.00000000e+00]]

It completes in a blink for n=50 and in a few seconds for n=500.


APL (Dyalog Unicode), 63 61 bytes

⎕CY'dfns'
n←⊢÷.5*⍨a←+.×⍨
{{⍵,(n⊢-⊣×a)/⌽⍺,⍵}/⊂∘n¨↓NormRand⍵ ⍵}

Try it online for n=5

-1 byte thanks to @user41805 (improved train for inner dfn)

Uses the modified Gram-Schmidt process on a set of n random unit vectors, and it is fast. (A rough NumPy sketch of the same process is given after the breakdown below.)

How?

⎕CY'dfns'   ⍝ Copy the dfns namespace, which includes the NormRand dfn
n←⊢÷.5*⍨+.×⍨  ⍝ n: normalize the vector argument, from https://codegolf.stackexchange.com/a/149160/68261
{{⍵,(n⊢-⊣×+.×⍨)/⌽⍺,⍵}/⊂∘n¨↓NormRand⍵ ⍵} ⍝ Main anonymous function
                           NormRand ⍵ ⍵  ⍝ Generate an n×n array of gaussian variables
                       ⊂∘n¨↓             ⍝ Split into an array of enclosed n-vectors (the rows); normalize each
 {                   }/                  ⍝ Reduce in a sneaky manner to get array of orthogonal (e_1, ... e_n)
                                         ⍝ Start with e_arr=⍬ (empty)
                                         ⍝ For each vector v_i:
                ⌽⍺,⍵                      ⍝ Start this step with e_arr←(e_{i-1}, ..., e_1, e_0, v_i)
    (         )/                          ⍝ Reduce this to repeatedly modify from the right:
     n⊢-⊣×+.×⍨                             ⍝ Normalize(v_i - proj_{e_j}(v_i))
  ⍵,                                       ⍝ Append this to e_arr
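
A rough NumPy transcription of the process broken down above (helper names are mine, and the fold order is simplified):

import numpy as np

def normalize(v):
    return v / np.sqrt(v @ v)

def random_orthogonal(n):
    basis = []
    for v in np.random.randn(n, n):             # n Gaussian n-vectors
        v = normalize(v)                        # start from a random unit vector
        for e in basis:
            v = normalize(v - (e @ v) * e)      # subtract the projection onto e, renormalize
        basis.append(v)
    return np.array(basis)                      # the rows form an orthonormal set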