MTH 420-520 LAB 8

In this lab you will explore SVD, inverse problems, their sensitivity, and their regularizations.

  • 420: turn in A or B, or both for extra credit. 520: turn in A and B.
    Be concise!
    C and D are for extra credit.


In A or B, convince yourself that an integral "smooths out" its input (find g given f). Explore the sensitivity of the inverse problem (find f given g). Find how the condition number of the problem depends on the parameters. Put it all together!

  1. Gravity problem.
    1. Plot the kernel function for the gravity problem in (0,1). You must choose some value for D, of course.

      >> myfun = @(x,D)(D./(sqrt(D^2+x.^2)).^3);
      >> x=linspace(0,1);D=0.1;plot(x,myfun(x,D));

      Discuss how this function behaves as D varies (try, e.g., D = 1e-2, 1e-1, 1e0, 1e1, 1e2).
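
      If it helps, here is one way to overlay the kernel for several values of D on one set of axes. This is only a sketch reusing myfun from above; the particular D values are just examples.

      >> x = linspace(0,1);                                        % grid on (0,1)
      >> hold on; for D=[1e-2 1e-1 1e0 1e1], plot(x, myfun(x,D)); end; hold off
      >> xlabel('x'); legend('D=1e-2','D=1e-1','D=1e0','D=1e1');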

    2. Use the provided code gravity.m to obtain the matrix K discussed in class. Assume you know your input f(t), a piecewise constant vector. Run the "smoother" and plot the result, the output g(s). This can be done as follows:

      >> n=10;K = gravity(n,.1);
      >> mytreasure = [0,1,1,1,0,0,1,0,1,0]';
      >> f = mytreasure; g = K*f;
      >> t=linspace(0,1,length(mytreasure))';
      >> plot(t,f,t,g)

      Explore different parameters D to see how your output g depends on D. Discuss.
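
      A possible sketch for this exploration. It reuses mytreasure from the snippet above and assumes (as in the call above) that the second argument of the provided gravity.m is the depth D.

      >> n = 10; f = mytreasure; t = linspace(0,1,n)';
      >> hold on; for D=[1e-2 1e-1 1e0], K = gravity(n,D); plot(t, K*f); end; hold off
      >> legend('D=1e-2','D=1e-1','D=1e0'); title('g = K*f for several D');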

    3. Plot the condition number of the matrix K as a function of D and of the problem size n (use cond(K)). In particular, explore n=10, 100, etc., as well as some of the interesting values of D that you identified in step 1. Collect the results in a CONCISE table and/or figure, and discuss.
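
      One way to collect such a table (again assuming gravity(n,D); adjust the example lists of n and D to the values you find interesting):

      >> for n=[10 100], for D=[1e-2 1e-1 1e0 1e1], K = gravity(n,D); fprintf('n=%4d  D=%8.1e  cond(K)=%10.3e\n', n, D, cond(K)); end, end
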
    4. Now solve the inverse problem fi=K\g. Is fi the same as f? What if your output g were perturbed a bit and you started with gp instead:

      >> gp = g + 0.5*rand(n,1); fp = K\gp;

      (You can experiment with different sizes of the perturbation by replacing 0.5.)

      Compute and plot f, fi, and fp. You should, of course, explore the "interesting" values of n and D that you identified above. Discuss and synthesize your observations from steps 1-4. (A plotting sketch is given at the end of this step.)


      For longer inputs, if you do not want to type them by hand:

      >> n=10; rng(0); mytreasure = heaviside(rand(n,1)-0.5);   % without the Symbolic Math Toolbox, double(rand(n,1)>0.5) works in place of heaviside
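
      The plotting sketch mentioned above. It assumes K, g, t, f, and n from the earlier snippets are still in the workspace; the 0.5 is again just an example perturbation size.

      >> fi = K\g;                            % inverse problem with exact data
      >> gp = g + 0.5*rand(n,1); fp = K\gp;   % inverse problem with perturbed data
      >> plot(t, f, 'o-', t, fi, 's-', t, fp, 'x-');
      >> legend('f (true)', 'fi (from exact g)', 'fp (from perturbed g)');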

    5. Extra: Consider a modification of the gravity problem. Now let g(s) be the magnitude of the entire gravity field (not just the vertical component) felt at s due to the presence of the "treasure". Derive the new model, and analyze the problem as in A.

  2. Follow all the steps in A for the barcode reading problem instead of the gravity problem with

      K(s,t) = exp(-(s-t).^2/sigma^2)

    (You must code your own function to replace gravity.m). Now your parameters are sigma and n.
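
    A minimal sketch of one way to build K yourself, assuming the same kind of midpoint-quadrature discretization of (0,1) as for the gravity matrix (the node and weight choices below are my assumptions; adapt them to what was discussed in class):

    >> mykernel = @(s,t,sigma) exp(-(s-t).^2/sigma^2);   % barcode kernel from above
    >> n = 10; sigma = 0.1;                              % example parameter values
    >> t = ((1:n)' - 0.5)/n;                             % midpoint nodes in (0,1)
    >> [T,S] = meshgrid(t,t);                            % S(i,j)=t(i), T(i,j)=t(j)
    >> K = (1/n)*mykernel(S,T,sigma);                    % quadrature weight 1/n

    With this K you can repeat the snippets from A, with sigma playing the role of D.
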
  3. Extra PROJECT: regularization. Regularize the problem in (A) or (B) to get control over the magnitude of A\g when g is perturbed and the solution A\g is rough.

    Regularization means the following. Given g and a nonsingular square matrix A, instead of solving for f using f=A\g, i.e., finding the minimizer of
    min || A f - g ||^2,
    we want to solve the "nearby" problem
    min ( || A f - g ||^2 + lambda^2 || f ||^2 ).
    This is equivalent to a constrained minimization problem whose solution f_lambda has a norm that is not too large compared to that of the unregularized solution f = A\g. Clearly, small lambda means we are solving a problem close to the original one, while large lambda means that f_lambda will be "smoother" (have a smaller norm) but far away from the "true solution".
    This regularization technique is known as Tikhonov regularization.

    (i) Derive and solve the algebraic problem depending on lambda.
    (ii) Explore which values of lambda are effective and how they work. What solutions do they give? What happens as lambda goes to zero, and as lambda goes to infinity?
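
    If you want to check your derivation for (i) numerically, here is a sketch. It uses the normal-equations form (K'K + lambda^2 I) f = K'g, which is one standard way the derivation comes out; K plays the role of A above and is the matrix from A or B, g and t are the data and grid from the earlier snippets, and flam and the value of lambda are just illustrative choices.

    >> lambda = 1e-3;                                   % example value; vary over several orders of magnitude
    >> m = size(K,2);
    >> flam = (K'*K + lambda^2*eye(m)) \ (K'*g);        % Tikhonov-regularized solution
    >> % an equivalent augmented least-squares form, often better behaved numerically:
    >> % flam = [K; lambda*eye(m)] \ [g; zeros(m,1)];
    >> plot(t, K\g, t, flam); legend('unregularized', sprintf('lambda = %g', lambda));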

  4. Find the SVD of the matrices given in class by hand calculation. (Here they are: [3 0; 0 -2]; [0 2; 0 0; 0 0]; [1 1; 0 0]; [1 0 2; 0 1 1].)
    (You can check in MATLAB with svd if you must, but please do all calculations by hand. MATLAB's singular vectors may differ from yours up to a sign.)

    You can also explore low rank approximations using SVD following this example:
    >> load clown.mat; %%% this loads matrix X in which an image is encoded
    >> subplot(1,2,1);colormap gray; image(X);
    >> %%% find SVD of the image in the matrix X
    >> [u,s,v] = svd(X);
    >> %%% now construct rank 1, 3, 5, 10 approximations and plot them
    >> subplot(1,2,2);colormap gray;
    >> for k=[1,3,5,10], Xk=u(:,1:k)*s(1:k,1:k)*v(:,1:k)'; image(Xk); title(sprintf('rank %d',k)); pause; end
    >> %% Enjoy and explore with your own images. Repeat the above for the new X found as below
    >> myX = imread('myimage.png'); X = double(myX);   % if myimage.png is a color image, convert it first, e.g. X = double(rgb2gray(myX));