mathematicalmichael / mud

Home Page:https://mud.dataconsistent.com

testing testing

mathematicalmichael opened this issue

#9 gets us started with a baseline.

  • add badge to README for codecov

  • get coverage above 50%

  • make a final release with the commented-out code, then delete it before v0.1

  • ignore plotting for now; just make sure it runs, and skip it in coverage

  • test funs

  • test that numerical solutions agree for some test problems

  • test plot

  • test util

  • test norm

  • test against numpy basics with an identity covariance

  • test that growing the diagonal of the covariance actually shrinks the evaluation (both sketched after this list)

  • test a problem with known analytical solutions

  • these will be used in numerical comparisons
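
Two of the norm checks could look roughly like the sketch below (pytest-style; `weighted_norm` is a stand-in for whatever function `mud` actually exposes, so the name and signature are assumptions, not the real API). Plotting code itself can be left out of the coverage report with coverage.py's `# pragma: no cover` marker.

```python
import numpy as np


def weighted_norm(x, cov):
    # stand-in for the mud norm under test: sqrt(x^T cov^{-1} x)
    return float(np.sqrt(x.T @ np.linalg.inv(cov) @ x))


def test_identity_cov_matches_numpy():
    # with an identity covariance the weighted norm is just the Euclidean norm
    x = np.random.rand(5)
    assert np.isclose(weighted_norm(x, np.eye(5)), np.linalg.norm(x))


def test_growing_diagonal_shrinks_evaluation():
    # inflating the diagonal of the covariance should shrink the evaluation
    x = np.random.rand(5)
    assert weighted_norm(x, 10 * np.eye(5)) < weighted_norm(x, np.eye(5))
```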

`map_sol` matches `sklearn.linear_model.Ridge`, with `w` and `alpha` playing the same roles:

```python
import numpy as np
from sklearn.linear_model import Ridge
from mud import funs as mf  # assumed import path for map_sol

r = Ridge(alpha=1, fit_intercept=False).fit(X, y)  # X, y: data defined elsewhere
map_sol = mf.map_sol(X, np.zeros(100), y, w=1).ravel()
print(r.coef_ - map_sol)  # should be ~0 if the two solutions agree
```
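
For reference, with `fit_intercept=False` the ridge estimate has the closed form β̂ = (XᵀX + αI)⁻¹ Xᵀ y, which is also the MAP estimate for a zero-mean Gaussian prior when the regularization strength corresponds to the prior precision (for unit observation noise); presumably that is how `w` enters `map_sol`, and the printed difference should then be near zero.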

Next up: figure out how `sample_weight` corresponds to the covariance / prior evaluation.
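
In scikit-learn, `sample_weight` multiplies each observation's squared residual, which is equivalent to rescaling the corresponding rows of `X` and `y` by the square root of the weight; in other words, it behaves like the inverse of a diagonal data covariance. A quick sanity check of that reading (shapes here are arbitrary):

```python
import numpy as np
from sklearn.linear_model import Ridge

np.random.seed(21)
X, y = np.random.rand(50, 10), np.random.rand(50)
w = np.random.rand(50) + 0.5  # per-observation weights

# fit once with sample_weight, once with sqrt(w)-rescaled rows
weighted = Ridge(alpha=1, fit_intercept=False).fit(X, y, sample_weight=w)
rescaled = Ridge(alpha=1, fit_intercept=False).fit(X * np.sqrt(w)[:, None], y * np.sqrt(w))
print(np.abs(weighted.coef_ - rescaled.coef_).max())  # ~0: same minimizer
```

If that holds, the correspondence to the prior / covariance evaluation reduces to how `map_sol` parameterizes the observation noise.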