Version 1.6 includes the following changes:

- Implemented block sampling of betas for data with large numbers of predictors (options ‘blocksample’ and ‘blocksize’)
- Display the Widely Applicable Akaike’s Information Criterion (WAIC) instead of DIC in summary output
- Implemented alternative priors for tau2
- Added an example to demonstrate block sampling and alternative priors for tau2 (see examples\br_example13.m)

Version 1.5 includes the following changes:

- Wrote a new parallelised C++ implementation of the sampling code for logistic regression
- Included an efficient pure-MATLAB implementation of logistic regression sampling; it is used when MEX files are not available, though it is slower than the C++ code

Version 1.4 includes the following changes:

- Added option ‘groups’ which allows grouping of variables into potentially overlapping groups
- Groups work with HS, HS+ and lasso priors
- Fixed a regression bug with g priors and logistic models
- Updated examples

Version 1.3 includes the following changes:

- Tidied up the summary display to switch between scientific and normal notation depending on the size of the numbers in the table
- Fixed the summary banner to adjust to the size of the table
- Added support for MATLAB tables
- Code now accepts either X/y as matrix/vector, or table/string name of variable to use as table

- Added support for categorical predictors
- These are now either specified using ‘catvars’ option for matrix inputs, or automatically determined from the contents of the table

- Removed the dependency of br_summary() on X and y
- Consolidated all likelihood functions into a single br_regnlike() function that is vectorized using bsxfun
- Added a br_predict function to produce predictions over data given design matrices
- Checks input design matrix/table against structure of training data
- Automatically handles categorical expansions as required
- Produces credible intervals if requested
- Produces prediction statistics if targets are supplied

- Updated and improved the example scripts
- Added initial internal code structures required for grouping variables and variable expansions
- Fixed a bug in the computation of R2

Version 1.1 of the MATLAB toolbox can be downloaded from here. Please see “examples_bayesreg.m” for examples on how to use the toolbox, or type “help bayesreg” within MATLAB.

An R version of this toolbox is now available on CRAN. To install the R package, type `install.packages("bayesreg")` within R.

To cite this toolbox:

- Makalic, E. & Schmidt, D. F., High-Dimensional Bayesian Regularised Regression with the BayesReg Package, arXiv:1611.06649 [stat.CO], 2016

Updates:

(23/01): Updated the bayesreg code to version 1.2. This version implements Zellner’s g-prior for linear and logistic regression. The g-prior only works with full rank matrices. The examples in “examples_bayesreg.m” have been updated to include a g-prior example.

(17/01): Updated the bayesreg code to version 1.1. The display code is now in a separate “summary()” function so that it can be used many times on the same data and sampling results. We have updated “examples_bayesreg.m” to include examples of the new “summary()” function.

I hope this document will be useful to researchers interested in learning more about the MML principle.

*Update (24/07/2014)*: I have written MATLAB code to compute MML estimates and the message length. I have also done minor updates to the document.

| RANK | TEAM | ELO | WINS | LOSSES | DRAWS | TOTAL |
|---|---|---|---|---|---|---|
| 1 | Hawthorn | 1475 | 869 | 951 | 10 | 1830 |
| 2 | Geelong | 1469 | 1224 | 1042 | 21 | 2287 |
| 3 | Collingwood | 1468 | 1460 | 910 | 26 | 2396 |
| 4 | Sydney | 1428 | 1061 | 1199 | 23 | 2283 |
| 5 | West Coast | 1360 | 339 | 269 | 5 | 613 |
| 6 | Brisbane Bears | 1319 | 72 | 148 | 2 | 222 |
| 7 | Adelaide | 1316 | 269 | 241 | 1 | 511 |
| 8 | St Kilda | 1311 | 877 | 1338 | 25 | 2240 |
| 9 | North Melbourne | 1293 | 803 | 1005 | 17 | 1825 |
| 10 | Fremantle | 1292 | 168 | 236 | 0 | 404 |
| 11 | Carlton | 1253 | 1385 | 939 | 33 | 2357 |
| 12 | Essendon | 1194 | 1315 | 973 | 34 | 2322 |
| 13 | Richmond | 1170 | 1058 | 1035 | 22 | 2115 |
| 14 | Brisbane Lions | 1147 | 192 | 175 | 6 | 373 |
| 15 | Western Bulldogs | 1099 | 803 | 976 | 22 | 1801 |
| 16 | Melbourne | 1032 | 1034 | 1209 | 21 | 2264 |
| 17 | Port Adelaide | 1032 | 183 | 181 | 5 | 369 |
| 18 | GW Sydney | 1016 | 2 | 20 | 0 | 22 |
| 19 | Gold Coast | 992 | 6 | 38 | 0 | 44 |
| 20 | Fitzroy | 871 | 869 | 1034 | 25 | 1928 |
| 21 | University | 781 | 27 | 97 | 2 | 126 |

Note:

- I used the results of all 14,166 games played in the VFL/AFL since 1897 to create the table.
- The ranking scores of the top three teams are really close, but Hawthorn has edged out Collingwood and Geelong.
- The ELO rating of each team was initialised to 1200; the only exception to this rule was the Brisbane Lions, who inherited the final ELO rating of the Brisbane Bears. This seemed reasonable given the details of the “merger” between the Brisbane Bears and Fitzroy Lions.
- The University team withdrew from the competition during World War I and never returned.
- All ratings were updated using a constant K-factor of 32.
- My MATLAB script and the data file used to create this table are freely available for download here.

It would be of interest to use this data for prediction of the AFL 2013 season, perhaps using a methodology similar to the ELO++ work.

Suppose you want to compute a QR decomposition of a matrix X. There are three Eigen classes that can do this for you, one of which is ColPivHouseholderQR, and you may write something like

```
// perform a QR decomposition of the [n x p] X matrix; where n > p
Eigen::ColPivHouseholderQR< Eigen::MatrixXd > qr = X.colPivHouseholderQr();
```

Seems fairly straightforward, right? But how do we get Q and R? If we look at the online documentation for ColPivHouseholderQR, we find a public method called matrixQR() which seems to return a reference to a matrix related to Q and R in some fashion. The documentation doesn’t state how to recover Q and R from the returned matrix. It turns out that the returned matrix encodes both factors: R occupies the upper triangular part, while the entries below the diagonal store the Householder vectors from which Q can be formed. Thus, to get, say, R we need to use something like

```
// perform a QR decomposition of the [n x p] X matrix; where n > p
Eigen::ColPivHouseholderQR< Eigen::MatrixXd > qr = X.colPivHouseholderQr();
Eigen::MatrixXd R;
R = qr.matrixQR().topRightCorner(p,p).triangularView<Eigen::Upper>(); // R matrix
```

Here, topRightCorner(p,p) extracts the top p-by-p block of the stored matrix, and triangularView&lt;Eigen::Upper&gt;() keeps only its upper triangular part, zeroing out the entries below the diagonal (which hold the Householder vectors rather than part of R).

It didn’t take too long to figure out how QR works in Eigen. After discovering the solution I found a post on the Eigen forum that describes a similar solution to the aforementioned. I urge anyone who uses Eigen to contribute code and documentation to the project, if possible. It really is a fantastic open-source library and equals and surpasses many other commercial linear algebra packages in both performance and quality.

where $v_i$ is the factor score for data vector $i$ and $\varepsilon_i$ is an i.i.d. zero-mean multivariate Gaussian random variable with a symmetric, positive definite variance-covariance matrix $\Sigma$; that is, $\varepsilon_i \sim N_p(0, \Sigma)$ for all $i$. It is thus assumed that each data vector $y_i$ follows a $p$-variate Gaussian distribution with moments

$$E[y_i] = \mu, \qquad \mathrm{Var}[y_i] = a a^\prime + \Sigma.$$

The aim of single factor analysis is to estimate the unknown factor scores $v_1, \ldots, v_n$, the mean $\mu$, the variance-covariance matrix $\Sigma$ and the factor loadings $a$.

I’ve written this note to show how one may derive the Fisher information for the single factor model. This turns out to be a fairly involved task requiring a lot of linear algebra and matrix differential calculus identities. I’ve uploaded a document describing all the necessary mathematical steps to the Publications section. Anyone interested in matrix differential calculus and linear algebra should find it useful.

To compute a two-sided *p*-value for a given z-score in MATLAB, one commonly runs:

```
z=1.96;
2*(1-normcdf(z))
```

However, the above only works for relatively small z, in the range z < 10. In order to get *p*-values for z as large as 30, one should instead run

```
z=1.96;
2*normcdf(-abs(z))
```

For those using R, the above can be accomplished by running:

```
z <- 1.96
2*pnorm(z,lower.tail=F)
```

```
#include <cassert>
#include <cstring>
#include <boost/random.hpp>

boost::mt19937 rng;                      // random number generator
boost::random::uniform_01<double> dist;  // U(0,1) distribution

/*
 * Generate n samples from a k-nomial distribution.
 * Input:
 *   tab - vector of length k to store the counts;
 *         - tab[j] \in [0, n] for all j=1,...,k
 *         - sum_{j=0}^k tab[j] = n
 *   p   - the probability vector of length k
 *         - does not assume p(0)+p(1)+...+p(k)=1;
 *   k   - k-nomial (k>1)
 *   n   - sample size
 */
void sample_knomial(unsigned int *tab, double *p, unsigned int k, unsigned int n)
{
    assert(k > 1);

    // set all k cells to zero
    memset(tab, 0, sizeof(unsigned int)*k);

    // normalise the probabilities
    double norm_factor = 0.0;
    for(unsigned int j = 0; j < k; j++) {
        assert(p[j] > 0);
        norm_factor += p[j];
    }

    // pre-compute cumulative sums [p(0),p(0)+p(1),...,p(0)+p(1)+...+p(k-1)]
    // (could sort p prior to this)
    double *cumsum = new double[k];
    cumsum[0] = p[0]/norm_factor;
    for(unsigned int j = 1; j < k; j++) {
        cumsum[j] = cumsum[j-1] + p[j]/norm_factor;
    }

    // generate the n samples by inversion
    for(unsigned int i = 0; i < n; i++) {
        // draw u ~ U(0,1) and find the first cell t with u <= cumsum[t]
        double u = dist(rng);
        unsigned int t = 0;
        for(t = 0; t < k-1 && u > cumsum[t]; t++);
        // increment count for cell t
        tab[t]++;
    }

    // free memory
    delete[] cumsum;
}
```

Below is a simple C function to compute the HL estimator for a given array of observations.

```
#include <stdlib.h>

// Comparison function needed for quicksort
int compare (const void * a, const void * b)
{
    const double *da = (const double *) a;
    const double *db = (const double *) b;
    return (*da > *db) - (*da < *db);
}

// Function to compute the Hodges-Lehmann estimator
double hodgeslehmann(double *x, int n)
{
    // store and compute the n(n+1)/2 Walsh averages (x[i]+x[j])/2, i <= j
    unsigned int nw = n*(n+1)/2;
    double *walsh_avgs = (double*) malloc(sizeof(double)*nw);
    unsigned int k = 0;
    for(unsigned int i = 0; i < (unsigned int) n; i++) {
        for(unsigned int j = i; j < (unsigned int) n; j++) {
            walsh_avgs[k++] = (x[i] + x[j]) / 2.0;
        }
    }

    // the HL estimator is the median of the Walsh averages
    qsort(walsh_avgs, nw, sizeof(double), compare);
    double m = (nw % 2) ? walsh_avgs[nw/2]
                        : (walsh_avgs[nw/2 - 1] + walsh_avgs[nw/2]) / 2.0;

    free(walsh_avgs);
    return m;
}
```
I have also written a MATLAB driver function that allows the aforementioned C code to be executed within MATLAB; see the Software section to download this implementation. An efficient (pure) MATLAB implementation that avoids nested *for* loops is a little tricky. Here is a simple (memory intensive!) implementation that relies on the *ndgrid(.)* function. This code works reasonably well as long as the sample size is not too big.

```
function m=hodglehm_slow(x)
%% Length of vector
x=x(:); % make sure x is a column vector
n=length(x);
%% Compute cartesian product
[a,b]=ndgrid(x,x);
%% Averages of all pairs
A=triu((a+b)./2);
%% Extract upper triangle
Ix=triu(true(n));
A=A(Ix);
%% Compute median of averages
m=median(A(:));
%% done
return;
```

We can get the standard error of the HL estimator in MATLAB by using the bootstrap:

```
% Generate 50 observations from a Student-t distribution with 2 dof
x=trnd(2,50,1);
% Compute standard errors for the mean and HL estimators
std(bootstrp(1000, @(a) mean(a), x))
std(bootstrp(1000, @(a) hodglehm_slow(a), x))
```
