5 Octave Tutorial

Basic Operations

If you want to build a large-scale deployment of a learning algorithm, what people will often do is prototype it first, and the prototyping language is Octave.

You can get your learning algorithms working much more quickly in Octave. Overall you save a huge amount of time by first developing the algorithms in Octave, and only then re-implementing them in C++ or Java, after the ideas are working.

Octave is nice because it is open source.

  • % : comment
  • ~= : not equal
  • ; : suppresses the print output
  • disp : for more complex printing
    1. v = 1:0.1:2 % sets v to the elements that start from 1 and increment in steps of 0.1 until you get up to 2.
    2.  
    3. ones(2, 3) % generates a matrix that is a two by three matrix that is the matrix of all ones.
    4.  
    5. C = 2 * ones(2, 3) % that is all two's.
    6.  
    7. w = zeros(1, 3) % that is all zero's.
    8.  
    9. rand(3,3)
    10.  
    11. w = randn(1, 3) % Gaussian (normal) random variables; rand draws uniform values
    12.  
    13. hist(w) % plot a histogram of w
    14.  
    15. help

    Moving Data Around

    1. size(A) % the size of a matrix
    2.  
    3. size(A, 1) % the size of the first dimension of A (the number of rows).
    4.  
    5. length(v) % the size of the longest dimension.
    6.  
    7. load('featureX.dat') % load data from a file
    8.  
    9. who % the variables that Octave has in memory currently
    10.  
    11. whos % the detailed view
    12.  
    13. clear featuresX
    14.  
    15. save hello.mat v % save the variable v into a file called hello.mat.
    16.  
    17. save hello.txt v -ascii % a human readable format
    18.  
    19. A(3,2)
    20.  
    21. A(2,:) % fetch everything in the second row.
    22.  
    23. A([1 3],:) % get all of the elements of A whose first index is 1 or 3.
    24.  
    25. A = [A, [100; 101; 102]] % appends another column vector on the right.
    26.  
    27. A(:) % put all elements of A into a single column vector
    28.  
    29. C = [A B] % concatenates the two matrices side by side.
    30.  
    31. C = [A; B] % the semicolon means put the next matrix at the bottom (vertical concatenation).

    There’s no point in trying to memorize all these commands; hopefully you now have a sense of the sorts of things you can do.

    Computing on Data

    1. A * C % multiplies the two matrices
    2.  
    3. A .* B % takes each element of A and multiplies it by the corresponding element of B.
    4.  
    5. A .^ 2 % the element wise squaring of A
    6.  
    7. 1 ./ V % the element wise reciprocal of V
    8.  
    9. log(v) % an element wise logarithm of v
    10.  
    11. exp(v) 
    12.  
    13. abs(V) % the element wise absolute value of V
    14.  
    15. -v % the same as -1 * v
    16.  
    17. v + ones(3,1) % increments each element of v by one.
    18.  
    19. v + 1 % another simpler way
    20.  
    21. A' % the apostrophe symbol, a transpose of A
    22.  
    23. val = max(a) % sets val to the maximum element of a
    24.  
    25. [val, ind] = max(a) % val = the maximum value, ind = the index
    26.  
    27. a < 3 % a = [1 15 2 0.5], the result will be [1 0 1 1]
    28.  
    29. find(a < 3) % [1 3 4]
    30.  
    31. A = magic(3) % returns a magic square: all of its rows, columns, and diagonals sum to the same value.
    32.  
    33. [r,c] = find( A>=7 ) % finds all the elements of A that are greater than or equal to 7; r and c hold the row and column indices.
    34.  
    35. sum(a) % adds up all the elements of a.
    36.  
    37. prod(a) % multiplies all the elements of a
    38.  
    39. floor(a) % rounds each element down
    40.  
    41. ceil(A) % rounds each element up
    42.  
    43. rand(3) % a random 3 by 3 matrix
    44.  
    45. max(A,[],1) % takes the column wise maximum
    46.  
    47. max(A,[],2) % takes the per row maximum
    48.  
    49. max(A) % defaults to the column-wise maximum
    50.  
    51. max(max(A)) % the maximum element in the entire matrix A
    52.  
    53. sum(A,1) % does a per column sum
    54.  
    55. sum(A,2) % do the row wise sum
    56.  
    57. eye(9) % the 9 by 9 identity matrix
    58.  
    59. sum(sum(A .* eye(9))) % the sum of the diagonal elements
    60.  
    61. flipud(A) % flips the matrix vertically (flipud stands for "flip up/down")
    62.  
    63. pinv(A) % the pseudo-inverse of A

    After running a learning algorithm, often one of the most useful things is to look at, plot, or visualize your results.

    Plotting Data

    Often, plots of the data or of all the learning algorithm outputs will also give you ideas for how to improve your learning algorithm.

    1. t = [0:0.01:0.98]
    2. y1 = sin(2 * pi * 4 * t)
    3. plot(t, y1) % plot the sine function
    4.  
    5. y2 = cos(2 * pi * 4 * t)
    6. plot(t, y2)
    7.  
    8. hold on % plot new figures on top of the old one
    9.  
    10. plot(t, y2, 'r') % different color
    11.  
    12. xlabel('time') % label the X axis, or the horizontal axis
    13. ylabel('value')
    14.  
    15. legend('sin', 'cos') % puts this legend up on the upper right showing what the 2 lines are
    16.  
    17. title('myplot') % the title at the top of this figure
    18.  
    19. print -dpng 'myplot.png' % save figure
    20.  
    21. close % closes the figure window
    22.  
    23. figure(1); plot(t, y1); % Starts up first figure, and that plots t, y1.
    24. figure(2); plot(t, y2); 
    25.  
    26. subplot(1,2,1) % subdivides the figure into a 1-by-2 grid (the first two parameters) and accesses the first element
    27. plot(t, y1) % fills up this first element
    28. subplot(1,2,2)
    29. plot(t, y2)
    30.  
    31. axis([0.5 1 -1 1]) % sets the x range and y range for the figure on the right
    32.  
    33. clf % clears the figure
    34.  
    35. imagesc(A) % visualize the matrix and the different colors correspond to the different values in the A matrix.
    36. colormap gray % use a grayscale color map
    37.  
    38. imagesc(magic(15)), colorbar, colormap gray % comma-chaining: running three commands at a time

    Control Statements: for, while, if statements

    1. for i = 1 : 10, 
    2.     v(i) = 2 ^ i;
    3. end;
    4.  
    5. indices = 1 : 10;
    6. for i = indices,
    7.     disp(i);
    8. end;
    9.  
    10. i = 1;
    11. while i <= 5,
    12.     v(i) = 100;
    13.     i = i + 1;
    14. end;
    15.  
    16. i = 1;
    17. while true, 
    18.     v(i) = 999;
    19.     i = i + 1;
    20.     if i == 6,
    21.         break;
    22.     end;
    23. end;
    24.  
    25. v(1) = 2;
    26. if v(1) == 1,
    27.     disp('The value is one');
    28. elseif v(1) == 2,
    29.     disp('The value is two');
    30. else,
    31.     disp('The value is not one or two');
    32. end;
    33.  
    34. function name (arg-list)
    35.   body
    36. endfunction
    37.  
    38. function wakeup (message)
    39.   printf ("\a%s\n", message);
    40. endfunction
    41.  
    42. wakeup ("Rise and shine!");
    43.  
    44. function y = squareThisNumber(x)  y = x ^ 2;  endfunction % returns the square of x
    45.  
    46. addpath % adds a directory to Octave's function search path
    47.  
    48. [a, b] = SquareAndCubeThisNumber(5) % a = 25, b = 125
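The call above assumes a function returning two values is already defined. A minimal sketch of such a definition (the body is inferred from the expected output `a = 25, b = 125`; it is not shown in the lecture):

```octave
% Hypothetical definition matching the call [a, b] = SquareAndCubeThisNumber(5)
function [y1, y2] = SquareAndCubeThisNumber(x)
  y1 = x ^ 2;  % the square of x
  y2 = x ^ 3;  % the cube of x
endfunction
```

Listing the output variables in square brackets on the left-hand side is how Octave returns multiple values from one function.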

    Vectorization

    Unvectorized implementation

    \(h_\theta (x) = \sum _{j=0}^{n} \theta _jx_j\)
    1. prediction = 0.0;
    2. for j = 1 : n + 1,
    3.     prediction = prediction + theta(j) * x(j);
    4. end;

    Vectorized implementation

    \(h_\theta (x) = \theta ^Tx\)
    1. prediction = theta' * x;

    Using a vectorized implementation, you should be able to get a much more efficient implementation of linear regression.
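As a quick check that the two implementations agree (the values of theta and x below are illustrative, not from the lecture):

```octave
theta = [1; 2; 3];  % illustrative parameter vector
x = [1; 4; 5];      % illustrative feature vector, with x(1) = 1 for the intercept term
n = length(x) - 1;

% unvectorized: loop over each term of the sum
prediction = 0.0;
for j = 1:n + 1,
  prediction = prediction + theta(j) * x(j);
end;

% vectorized: a single inner product, theta transpose times x
prediction_vec = theta' * x;

disp(prediction - prediction_vec)  % prints 0: both give the same result
```

The vectorized version replaces the explicit loop with one call into Octave's optimized linear-algebra routines, which is both shorter and faster.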

    Working on and Submitting Programming Exercises

    The submission system lets you verify right away that you got the right answer for your machine learning programming exercises.

    If you are interested in details, you can visit coursera.org
