1;  % script marker: lets Octave treat this file as a script even though it defines functions

% Objective function: Rosenbrock function
function y = rosenbrock(x)
  y = 100 * (x(2) - x(1)^2)^2 + (1 - x(1))^2;
end

% Gradient of the Rosenbrock function
function g = rosenbrock_gradient(x)
  g = [-400 * x(1) * (x(2) - x(1)^2) - 2 * (1 - x(1));
        200 * (x(2) - x(1)^2)];
end

% Hessian of the Rosenbrock function at x, multiplied by a vector d
function H_d = rosenbrock_hessian_product(x, d)
  H = [1200 * x(1)^2 - 400 * x(2) + 2, -400 * x(1);
       -400 * x(1),                     200];
  H_d = H * d;
end

% Steepest Descent Method
function [x_opt, f_opt, num_iterations, x_iterations] = steepest_descent(x0, tolerance, max_iterations)
  x = x0;
  num_iterations = 0;
  x_iterations = [];
  while true
    g = rosenbrock_gradient(x);
    % Exact line-search step for the local quadratic model
    alpha = (g' * g) / (g' * rosenbrock_hessian_product(x, g));
    x_new = x - alpha * g;
    num_iterations = num_iterations + 1;
    x_iterations = [x_iterations, x_new];
    % Check for convergence before overwriting x
    converged = norm(x_new - x) < tolerance || num_iterations >= max_iterations;
    % Update the variable for the next iteration
    x = x_new;
    if converged
      break;
    end
  end
  % Output the result
  x_opt = x;
  f_opt = rosenbrock(x);
end

% Conjugate Gradient Method
function [x_opt, f_opt, num_iterations, x_iterations] = conjugate_gradient(x0, tolerance, max_iterations)
  x = x0;
  num_iterations = 0;
  x_iterations = [];
  % Initialize the search direction as the negative gradient
  g = rosenbrock_gradient(x);
  d = -g;
  while true
    alpha = -(g' * d) / (d' * rosenbrock_hessian_product(x, d));
    x_new = x + alpha * d;
    % Update the search direction using the Polak-Ribière formula
    g_new = rosenbrock_gradient(x_new);
    beta = (g_new' * (g_new - g)) / (g' * g);
    d_new = -g_new + beta * d;
    num_iterations = num_iterations + 1;
    x_iterations = [x_iterations, x_new];
    % Check for convergence before overwriting x
    converged = abs(rosenbrock(x_new) - rosenbrock(x)) < tolerance || num_iterations >= max_iterations;
    % Update the variables for the next iteration
    x = x_new;
    g = g_new;
    d = d_new;
    if converged
      break;
    end
  end
  % Output the result
  x_opt = x;
  f_opt = rosenbrock(x);
end

% Example usage
x0 = [-1.2; 1];       % initial guess
tolerance = 1e-6;     % tolerance level
max_iterations = 40;  % maximum number of iterations

% Run the Steepest Descent Method
[x_opt_sd, f_opt_sd, num_iterations_sd, x_iterations_sd] = steepest_descent(x0, tolerance, max_iterations);

% Run the Conjugate Gradient Method
[x_opt_cg, f_opt_cg, num_iterations_cg, x_iterations_cg] = conjugate_gradient(x0, tolerance, max_iterations);

% Display the result for the Steepest Descent Method
fprintf('Steepest Descent Method:\n');
fprintf('Optimal Solution: x = [%f; %f]\n', x_opt_sd(1), x_opt_sd(2));
fprintf('Objective Function Value: f(x) = %f\n', f_opt_sd);
fprintf('Number of Iterations: %d\n', num_iterations_sd);
fprintf('----------------------------------\n');

% Display the result for the Conjugate Gradient Method
fprintf('Conjugate Gradient Method:\n');
fprintf('Optimal Solution: x = [%f; %f]\n', x_opt_cg(1), x_opt_cg(2));
fprintf('Objective Function Value: f(x) = %f\n', f_opt_cg);
fprintf('Number of Iterations: %d\n', num_iterations_cg);
fprintf('----------------------------------\n');

% Compare the objective function values
fprintf('Comparison of Objective Function Values:\n');
fprintf('Steepest Descent: f(x) = %f\n', f_opt_sd);
fprintf('Conjugate Gradient: f(x) = %f\n', f_opt_cg);
fprintf('----------------------------------\n');

% Compare the number of iterations
fprintf('Comparison of Number of Iterations:\n');
fprintf('Steepest Descent: %d iterations\n', num_iterations_sd);
fprintf('Conjugate Gradient: %d iterations\n', num_iterations_cg);
fprintf('----------------------------------\n');

% Print the iterates of the Steepest Descent Method
fprintf('Steepest Descent Method: Iteration Values for x:\n');
for i = 1:min(num_iterations_sd, max_iterations)
  fprintf('Iteration %d: x = [%f; %f]\n', i, x_iterations_sd(1, i), x_iterations_sd(2, i));
end
fprintf('----------------------------------\n');

% Print the iterates of the Conjugate Gradient Method
fprintf('Conjugate Gradient Method: Iteration Values for x:\n');
for i = 1:min(num_iterations_cg, max_iterations)
  fprintf('Iteration %d: x = [%f; %f]\n', i, x_iterations_cg(1, i), x_iterations_cg(2, i));
end
fprintf('----------------------------------\n');
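Both methods pick the step size by an exact line search on a local quadratic model of the objective: alpha = (g'g)/(g'Hg) for steepest descent and alpha = -(g'd)/(d'Hd) for conjugate gradient, where H is the Hessian at the current iterate. As a quick sanity check for the analytic gradient, it can be compared against a central finite-difference approximation; this is a hypothetical sketch (x_check, h, and g_fd do not appear in the listing above) meant to run after that listing:

% Hypothetical sanity check: compare the analytic gradient with a
% central finite-difference approximation at an arbitrary test point.
x_check = [-1.2; 1];   % arbitrary test point
h = 1e-6;              % finite-difference step
g_fd = zeros(2, 1);
for k = 1:2
  e = zeros(2, 1);
  e(k) = h;
  g_fd(k) = (rosenbrock(x_check + e) - rosenbrock(x_check - e)) / (2 * h);
end
printf("Max gradient error: %e\n", max(abs(g_fd - rosenbrock_gradient(x_check))));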
Write, Run & Share Octave code online using OneCompiler’s Octave online compiler for free. It’s a simple and powerful platform to practice numerical computations and matrix operations using GNU Octave right from your browser.
GNU Octave is an open-source high-level programming language primarily intended for numerical computations. It is mostly compatible with MATLAB, and it's commonly used for linear algebra, numerical analysis, signal processing, and other scientific computing tasks.
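For instance, solving a small linear system takes a single line with the backslash operator; this is a minimal sketch with arbitrary example values:

A = [2, 1; 1, 3];   % example coefficient matrix (arbitrary values)
b = [3; 5];         % example right-hand side
x = A \ b;          % solve the linear system A*x = b
disp(x);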
The following is a simple Octave program that prints a greeting:
printf("Hello, OneCompiler!\n");
OneCompiler’s Octave editor supports stdin through the I/O tab. Here's an example of reading input from the user:
name = input("Enter your name: ", "s");
printf("Hello, %s!\n", name);
a = 10;             % scalar (integer value)
b = 3.14;           % scalar (floating point)
name = "Octave";    % string
v = [1, 2, 3];      % row vector
M = [1, 2; 3, 4];   % 2x2 matrix
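Once defined, these variables support the usual vector and matrix operations; a minimal sketch using the values above:

disp(v * 2);      % scalar multiplication: 2 4 6
disp(M');         % transpose of M
disp(M(2, 1));    % indexing: row 2, column 1 gives 3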
| Operation    | Syntax   |
|--------------|----------|
| Add          | +        |
| Subtract     | -        |
| Multiply     | *        |
| Divide       | /        |
| Element-wise | .* , ./  |
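The difference between matrix and element-wise operators matters as soon as both operands are arrays; a short sketch with arbitrary values:

A = [1, 2; 3, 4];
B = [5, 6; 7, 8];
disp(A * B);    % matrix product: [19 22; 43 50]
disp(A .* B);   % element-wise product: [5 12; 21 32]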
x = 10;
if x > 5
  disp("x is greater than 5");
else
  disp("x is 5 or less");
end
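Multi-way branching uses elseif; a minimal sketch with an arbitrary threshold scheme:

score = 72;   % arbitrary example value
if score >= 90
  disp("high");
elseif score >= 60
  disp("medium");
else
  disp("low");
end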
for i = 1:5
  disp(i);
end
i = 1;
while i <= 5
  disp(i);
  i = i + 1;
end
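Many simple loops like the ones above can also be written as vectorized expressions, which is the more idiomatic (and usually faster) style in Octave; a small sketch:

disp((1:5)');                    % prints 1 through 5, one per line, without a loop
printf("Sum: %d\n", sum(1:5));   % vectorized sum instead of an accumulator loop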
function y = square(x)
  y = x ^ 2;
end
result = square(4);
printf("Square: %d\n", result);
This guide provides a quick reference to Octave programming syntax and features. Start writing Octave code using OneCompiler’s Octave online compiler today!