1. Square Root of a Number
The square root of a number is a number that, when multiplied by itself, gives the original number.
In other words, the square root of a number $ N $ is a value $ x $ such that $ x^2 = N $.
It is represented as:
$$ \sqrt{N} = x $$
For example, the square root of 25 is 5, because $ 5^2 = 25 $. Every positive real number has two square roots: one positive and one negative. For instance:
- The square roots of 25 are $ +5 $ and $ -5 $.
- The principal square root refers to the positive square root, denoted by $ \sqrt{N} $.
1.1 Properties of Square Roots
- Non-negative Numbers: The square root of a non-negative number is always a non-negative number.
- Perfect Squares: The square root of a perfect square (e.g., 1, 4, 9, 16) is always an integer.
- Irrational Square Roots: The square root of a non-perfect square (e.g., 2, 3, 5) is irrational and cannot be expressed exactly as a fraction.
1.2 Mathematical Representation
The square root function can be expressed using exponent notation:
$$ \sqrt{N} = N^{1/2} $$
This representation helps in simplifying and manipulating expressions involving square roots.
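For instance, in Python the power operator gives the same value as the dedicated square-root function (a minimal sketch; the exact floating-point bits can differ for some inputs):
import math

N = 25
print(N ** 0.5)                   # 5.0, using the exponent form N^(1/2)
print(math.sqrt(N))               # 5.0, using the built-in square-root function
print(N ** 0.5 == math.sqrt(N))   # True for this value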
2. Methods of Finding Square Root of a Number - Using sqrt Functions
One of the simplest and most commonly used ways to find the square root of a number is through built-in sqrt
functions provided by various programming languages and libraries. These functions are optimized to quickly compute the square root of a given number, typically using highly efficient underlying algorithms.
2.1 Using sqrt in Python (math.sqrt)
In Python, the math module provides the sqrt function, which computes the square root of a number. The result is always a floating-point number.
import math
# Example: Finding square root of 16
result = math.sqrt(16)
print(result) # Output: 4.0
Here, math.sqrt(16) returns 4.0, which is the square root of 16.
2.2 Using sqrt in C++ (cmath library)
In C++, the cmath library provides the sqrt function, which computes the square root of a number.
#include <cmath>
#include <iostream>

int main() {
    double result = sqrt(25.0);
    std::cout << "The square root of 25 is " << result << std::endl;
    return 0;
}
The program outputs: "The square root of 25 is 5".
2.3 Using sqrt in Java (Math.sqrt)
In Java, the Math class provides a static method sqrt() to calculate the square root of a number.
public class Main {
    public static void main(String[] args) {
        double result = Math.sqrt(49);
        System.out.println("The square root of 49 is " + result);
    }
}
This program prints: "The square root of 49 is 7.0".
2.4 Properties of sqrt Functions
- Input Types: The sqrt functions typically expect a non-negative number. If the input is negative, they may throw an error or return NaN (Not a Number), depending on the language.
- Accuracy: These functions provide highly accurate results, typically around 15-17 significant digits for double-precision floating point, depending on the language and platform.
- Performance: Using the sqrt function is generally the fastest method to calculate square roots, as it is usually optimized at the hardware level.
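To illustrate the input-type behavior concretely, here is a small Python sketch (behavior for negative inputs varies by language; Python's math.sqrt raises an error, while cmath works over the complex numbers):
import math
import cmath

print(math.sqrt(16))    # 4.0
try:
    math.sqrt(-1)       # math.sqrt rejects negative inputs
except ValueError as e:
    print("math.sqrt(-1) ->", e)
print(cmath.sqrt(-1))   # 1j, since cmath computes over the complex numbers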
2.5 Examples and Limitations
These functions are highly efficient for most use cases, but may not be ideal for symbolic calculations, handling of complex numbers, or very high precision needs. Some libraries, like Python's sympy, offer more advanced handling for such cases.
import sympy
# Example of symbolic square root
result = sympy.sqrt(50)
print(result) # Output: 5*sqrt(2)
This example shows how symbolic computation provides a more exact result than floating-point approximations.
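If a numeric value is needed afterwards, the symbolic result can still be evaluated with sympy's evalf method, a brief sketch (evalf defaults to about 15 significant digits and accepts a precision argument):
import sympy

result = sympy.sqrt(50)
print(result)            # 5*sqrt(2)
print(result.evalf())    # approximately 7.0710678118655
print(result.evalf(30))  # the same value to 30 significant digits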
2.6 Methods Behind the Logic of sqrt Functions
The sqrt functions in various programming languages are typically backed by highly efficient algorithms. Some of the common methods used behind the scenes to calculate square roots are:
- Newton-Raphson Method: This iterative method refines an initial guess of the square root using the formula: $ x_{n+1} = \frac{x_n + \frac{N}{x_n}}{2} $
- Binary Search Method: This method works by performing binary search within a specific range to converge on the square root. The algorithm narrows down the search space based on whether the square of the midpoint is greater or smaller than the target number.
- Exponential and Logarithmic Approximation: Some algorithms use logarithms and exponentiation to compute the square root. This is based on the identity: $ \sqrt{N} = e^{\frac{1}{2} \ln(N)} $
- Fast Inverse Square Root (Quake III Algorithm): This algorithm, popularized by the Quake III game engine, approximates the inverse square root of a number, which can be used to find the square root efficiently.
- Continued Fractions (For Square Root Approximation): This method represents the square root as a continued fraction, allowing for efficient computation of the square root.
- Long Division Method: This manual algorithm, often taught for square root extraction, is analogous to long division and is used in some software to compute square roots step by step, digit by digit.
- Bakhshali Approximation Method: This ancient Indian method is an iterative algorithm based on an initial guess and incremental corrections, similar to Newton's method.
- CORDIC Algorithm: This method is used in hardware implementations like calculators or embedded systems. It computes trigonometric, hyperbolic, and square root functions using only addition, subtraction, and bit-shifting operations.
- Householder's Method: This is a generalization of Newton's method that converges faster by using higher-order derivatives.
Each method has its trade-offs in terms of speed, accuracy, and computational complexity, depending on the use case and hardware environment.
3. Newton-Raphson Method
The Newton-Raphson method, also known simply as Newton's method (and, when applied to square roots, equivalent to Heron's method or the Babylonian method), is an iterative approach used to approximate the square root of a number. It is based on Newton's general method for finding successively better approximations to the roots (or zeros) of a real-valued function.
3.1 Conceptual Foundation
Newton-Raphson is rooted in calculus, where we use tangent lines to approximate the roots of a function. To find the square root of a number $ N $, we reformulate the problem as finding the root of the equation:
$$ f(x) = x^2 - N $$
The root of this equation gives us the square root of $ N $ because solving $ f(x) = 0 $ leads to $ x = \sqrt{N} $.
To apply Newton's method, we use the iterative formula:
$$ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} $$
For $ f(x) = x^2 - N $, the derivative is $ f'(x) = 2x $, so the iterative formula becomes:
$$ x_{n+1} = x_n - \frac{x_n^2 - N}{2x_n} $$
Simplifying the formula:
$$ x_{n+1} = \frac{x_n + \frac{N}{x_n}}{2} $$
3.2 Iterative Process
The algorithm starts with an initial guess $ x_0 $, and then iterates the above formula until the difference between successive approximations is smaller than a predefined tolerance level, indicating convergence.
Steps for Newton-Raphson Method:
- Step 1: Choose an initial guess $ x_0 $. A reasonable choice is $ x_0 = N/2 $.
- Step 2: Apply the iterative formula: $$ x_{n+1} = \frac{x_n + \frac{N}{x_n}}{2} $$
- Step 3: Check the convergence condition: $$ |x_{n+1} - x_n| < \epsilon $$ where $ \epsilon $ is a small tolerance value, e.g., $ 10^{-6} $.
- Step 4: If the convergence condition is met, $ x_{n+1} $ is the approximate square root of $ N $. Otherwise, repeat from Step 2.
3.3 Example
Find the square root of $ N = 25 $ using the Newton-Raphson method with $ \epsilon = 10^{-6} $.
Step-by-step Iteration:
N = 25
x_0 = 25 / 2 = 12.5
Iteration 1:
x_1 = (12.5 + 25 / 12.5) / 2 = 7.25
Iteration 2:
x_2 = (7.25 + 25 / 7.25) / 2 = 5.349
Iteration 3:
x_3 = (5.349 + 25 / 5.349) / 2 = 5.011
Iteration 4:
x_4 = (5.011 + 25 / 5.011) / 2 = 5.000
Iteration 5:
x_5 = (5.000 + 25 / 5.000) / 2 = 5.000
Since $ |x_5 - x_4| < 10^{-6} $ (the displayed values are rounded to three decimal places), we stop here.
The approximate square root of 25 is 5.
3.4 Advantages and Disadvantages
- Advantages:
- Converges quickly with a good initial guess.
- Highly accurate if enough iterations are performed.
- Disadvantages:
- Requires a reasonable initial guess to avoid slow convergence or divergence.
- Not ideal for complex or symbolic calculations.
3.5 Code Implementation
def sqrt_newton_raphson(n, tolerance=1e-10):
    # Handle negative numbers: sqrt(-n) is imaginary
    if n < 0:
        return sqrt_newton_raphson(-n, tolerance) * complex(0, 1)
    guess = n / 2.0
    while abs(guess * guess - n) > tolerance:
        guess = (guess + n / guess) / 2.0
        print("While Loop Validation: ", guess * guess - n)
        print("Guessed Number: ", guess)
    return guess

# Example usage
n = float(input("Enter a number to find its square root: "))
print(sqrt_newton_raphson(n))  # e.g., 25 -> 5.0, -25 -> 5j
3.6 Output
Enter a number to find its square root: 25
While Loop Validation: 27.5625
Guessed Number: 7.25
While Loop Validation: 3.61327660523186
Guessed Number: 5.349137931034482
While Loop Validation: 0.11407089098919343
Guessed Number: 5.011394106532552
While Loop Validation: 0.00012953065462539826
Guessed Number: 5.000012953048684
While Loop Validation: 1.6777690348135366e-10
Guessed Number: 5.000000000016778
While Loop Validation: 0.0
Guessed Number: 5.0
5.0
Enter a number to find its square root: 16
While Loop Validation: 9.0
Guessed Number: 5.0
While Loop Validation: 0.8099999999999987
Guessed Number: 4.1
While Loop Validation: 0.009757584770966332
Guessed Number: 4.001219512195122
While Loop Validation: 1.4867567514897928e-06
Guessed Number: 4.0000001858445895
While Loop Validation: 3.552713678800501e-14
Guessed Number: 4.000000000000004
4.000000000000004
4. Binary Search Method (with Complex Numbers)
The binary search method is a classical algorithm used to find the square root of a number by repeatedly dividing the search range into two halves. This method works well for real numbers but can also be adapted for complex numbers with certain modifications.
In this method, we search for the square root by maintaining a low and high bound, then iteratively narrow down the range. The midpoint of the current range is squared, and we check whether it approximates the original number. For complex numbers, we calculate the square root in terms of its real and imaginary parts.
4.1 Conceptual Overview
For a complex number $ z = a + bi $, where $ a $ and $ b $ are real numbers, the square root will also be a complex number of the form $ \sqrt{z} = x + yi $, where $ x $ and $ y $ are real values to be determined.
The goal is to find $ x $ and $ y $ such that:
$$ (x + yi)^2 = z = a + bi $$
Expanding the left side gives two equations:
$$ x^2 - y^2 = a $$
$$ 2xy = b $$
We then apply binary search on $ x $ and $ y $ within a bounded region to find the square root of the complex number.
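Before turning to the search itself, note that for a concrete case such as $ z = 3 + 4i $ these two equations already pin down the answer: substituting $ y = \frac{b}{2x} $ from the second equation into the first gives
$$ x^2 - \frac{b^2}{4x^2} = a \quad\Rightarrow\quad x^4 - 3x^2 - 4 = 0 \quad\Rightarrow\quad x^2 = 4, $$
so $ x = 2 $, $ y = 1 $, and indeed $ (2 + i)^2 = 3 + 4i $. The binary search below approximates exactly this pair numerically.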
4.2 Binary Search Algorithm for Real Numbers
Let's first review the binary search method for real numbers:
- Step 1: Define a range [low, high] where the square root of the number lies. For example, for a positive real number $ N $, initialize $ \text{low} = 0 $ and $ \text{high} = N $.
- Step 2: Compute the midpoint: $$ \text{mid} = \frac{\text{low} + \text{high}}{2} $$
- Step 3: Check:
- If $ \text{mid}^2 \approx N $, return $ \text{mid} $.
- If $ \text{mid}^2 > N $, update the range to [low, mid].
- If $ \text{mid}^2 < N $, update the range to [mid, high].
- Step 4: Repeat the process until the result converges to the required tolerance level.
4.3 Extending Binary Search to Complex Numbers
For complex numbers $ z = a + bi $, we adjust the binary search to handle both the real and imaginary parts of the square root:
- Step 1: Start with an initial guess for the real part $ x $ and the imaginary part $ y $. Define a range for both.
- Step 2: Compute the midpoint $ x_{\text{mid}} $ and $ y_{\text{mid}} $ for both real and imaginary parts.
- Step 3: Calculate: $$ (x_{\text{mid}} + y_{\text{mid}}i)^2 = (x_{\text{mid}}^2 - y_{\text{mid}}^2) + 2x_{\text{mid}}y_{\text{mid}}i $$ and compare this with $ z = a + bi $.
- Step 4: Adjust the ranges of $ x $ and $ y $ based on whether the computed real and imaginary parts are greater or smaller than $ a $ and $ b $, respectively.
- Step 5: Repeat until the square root is approximated within a given tolerance.
Example for Simpler Numbers
Let's find the square root of the number 25 using binary search:
def sqrt_binary_search(n, tolerance=1e-10):
    if n < 0:
        return sqrt_binary_search(-n, tolerance) * complex(0, 1)  # sqrt(-n) is imaginary
    low, high = 0, n
    if n < 1:
        high = 1  # for 0 <= n < 1, the square root is larger than n itself
    while high - low > tolerance:
        mid = (low + high) / 2
        if mid * mid < n:
            low = mid
        else:
            high = mid
    return (low + high) / 2

# Example usage
n = float(input("Enter a number to find its square root: "))
print(sqrt_binary_search(n))  # e.g., 25 -> approximately 5.0, -25 -> approximately 5j
The loop repeatedly halves the interval [low, high] until its width falls below the tolerance, at which point the midpoint closely approximates the square root of the real input.
4.4 Example for Complex Number
Let's find the square root of the complex number $ z = 3 + 4i $ using binary search:
def complex_sqrt_binary_search(a, b, tolerance=1e-6):
    # From (x + yi)^2 = a + bi we get x^2 - y^2 = a and 2xy = b.
    # Rather than searching two ranges independently (which can make wrong
    # pruning decisions and fail to bracket the root), binary-search only the
    # real part x and derive y = b / (2x). The function x^2 - (b / (2x))^2 is
    # increasing in x for x > 0, so a standard binary search converges.
    # (This sketch assumes b != 0 or a >= 0.)
    low = tolerance
    high = max(1.0, abs(a) + abs(b))  # upper bound for the real part
    while high - low > tolerance:
        mid = (low + high) / 2
        y = b / (2 * mid)
        if mid * mid - y * y < a:
            low = mid
        else:
            high = mid
    x = (low + high) / 2
    return x, b / (2 * x)

# Example usage
sqrt_real, sqrt_imag = complex_sqrt_binary_search(3, 4)
print(f"The square root of 3 + 4i is approximately {sqrt_real} + {sqrt_imag}i")
This method converges to a solution where the real and imaginary parts closely approximate the square root of the given complex number.
4.5 Limitations and Considerations
- Convergence: Binary search is slower than more sophisticated methods like Newton-Raphson (it gains roughly one bit of precision per iteration), but for real inputs it is guaranteed to converge.
- Complex Numbers: Handling both real and imaginary parts adds complexity to the process, and iterating over both parts can be inefficient for larger numbers; searching the two ranges completely independently can even fail to converge, which is why the implementation above derives the imaginary part from the real part via $ 2xy = b $.
- Tolerance: Precision depends on the tolerance level set, and for complex numbers, this can affect how close the approximation is to the actual square root.
5. Exponential and Logarithmic Method (with Complex Numbers)
The Exponential and Logarithmic Method is a powerful approach to calculate the square root of a complex number using logarithms and exponentiation. This method is based on the principle that square roots can be expressed in terms of powers and logarithms, which extend naturally to the complex domain.
5.1 Conceptual Foundation
For a complex number $ z = a + bi $, the square root can be found by converting the number into polar form, using logarithms and then exponentiating the result.
We start with the identity for any real or complex number:
$$ \sqrt{z} = z^{1/2} $$
To express $ z^{1/2} $, we first need to represent $ z $ in polar form. Any complex number $ z = a + bi $ can be expressed as:
$$ z = r e^{i \theta} $$
where $ r $ is the modulus (magnitude) of the complex number and $ \theta $ is the argument (angle) of the complex number.
Then, the square root can be written as:
$$ \sqrt{z} = \sqrt{r} e^{i \frac{\theta}{2}} $$
5.2 Steps to Find Square Root Using Exponential and Logarithmic Method
- Step 1: Convert the complex number $ z = a + bi $ into its polar form $ r e^{i \theta} $. This requires calculating the modulus $ r $ and argument $ \theta $ of the complex number.
- Step 2: Compute the square root of the modulus $ r $, which is a non-negative real number: $$ \sqrt{r} $$
- Step 3: Compute half of the argument: $$ \frac{\theta}{2} $$
- Step 4: Reconstruct the square root of the complex number in exponential form: $$ \sqrt{z} = \sqrt{r} \left( \cos\left(\frac{\theta}{2}\right) + i \sin\left(\frac{\theta}{2}\right) \right) $$
5.3 Example
For Simpler Numbers
Find the square root of a user entered number using the exponential and logarithmic method.
import cmath  # Use cmath for handling complex numbers

def sqrt_exp_log(n):
    # sqrt(n) = e^(0.5 * ln(n)); cmath.log returns a complex value for n < 0
    return cmath.exp(0.5 * cmath.log(n))

# Example usage
n = float(input("Enter a number to find its square root: "))
print(sqrt_exp_log(n))  # e.g., 25 -> approximately (5+0j), -25 -> approximately 5j
For Complex Numbers
Find the square root of the complex number $ z = 3 + 4i $ using the exponential and logarithmic method.
import cmath

def complex_sqrt_exp_log(z):
    # Convert complex number to polar form
    r, theta = cmath.polar(z)
    # Compute the square root of the modulus
    sqrt_r = r ** 0.5
    # Compute the argument divided by 2
    theta_half = theta / 2
    # Convert back to rectangular form
    sqrt_z = cmath.rect(sqrt_r, theta_half)
    return sqrt_z
# Example usage
z = 3 + 4j
sqrt_z = complex_sqrt_exp_log(z)
print(f"The square root of {z} is approximately {sqrt_z}")
In this example, the square root of $ 3 + 4i $ is calculated using the exponential and logarithmic method, giving a result close to $ 2 + i $.
5.4 Derivation of Modulus and Argument
To apply the method effectively, we must first calculate the modulus $ r $ and argument $ \theta $ of a complex number $ z = a + bi $. These are given by:
- Modulus: $$ r = \sqrt{a^2 + b^2} $$
- Argument: $$ \theta = \text{atan2}(b, a) $$
For the complex number $ z = 3 + 4i $:
- Modulus $ r = \sqrt{3^2 + 4^2} = 5 $
- Argument $ \theta = \text{atan2}(4, 3) \approx 0.93 $ radians
Using these values, we calculate the square root as described in the steps above.
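Carrying the calculation through for $ z = 3 + 4i $:
$$ \sqrt{z} = \sqrt{5}\left( \cos\frac{\theta}{2} + i \sin\frac{\theta}{2} \right) \approx 2.236\,(0.894 + 0.447i) \approx 2 + i $$
which matches the exact result, since $ (2 + i)^2 = 3 + 4i $.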
5.5 Limitations and Considerations
- Principal Square Root: This method typically gives the principal square root (the root with non-negative real part). However, complex numbers have two square roots, and the second square root can be obtained by adding $ \pi $ to the argument (see the sketch after this list).
- Accuracy: The method is highly accurate, but floating-point precision may introduce small errors, especially when the modulus or argument is very large or very small.
- Polar Representation: The method requires converting complex numbers to polar form and back to rectangular form, which adds complexity compared to simpler methods like the Newton-Raphson method.
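A minimal sketch of obtaining the second root by shifting the argument by $ \pi $, reusing Python's cmath as in the earlier examples:
import cmath

z = 3 + 4j
r, theta = cmath.polar(z)
principal = cmath.rect(r ** 0.5, theta / 2)            # approximately 2 + 1j
second = cmath.rect(r ** 0.5, theta / 2 + cmath.pi)    # approximately -2 - 1j
print(principal, second)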
6. Fast Inverse Square Root
The Fast Inverse Square Root is a clever algorithm used to quickly approximate the inverse of the square root of a floating-point number. This algorithm gained fame for its use in the graphics engine of the video game *Quake III Arena*, where it helped in computing lighting and 3D transformations efficiently. The goal of the algorithm is to compute $ \frac{1}{\sqrt{x}} $ for a given number $ x $ much faster than traditional methods like the Newton-Raphson or hardware-based square root functions.
6.1 Conceptual Overview
The Fast Inverse Square Root uses bit-level manipulation and mathematical approximations to achieve its result. The core idea behind the algorithm is to treat the floating-point number as an integer, then manipulate the bits in a way that provides a good initial guess for the inverse square root. This guess is then refined using Newton's method.
The formula for the fast inverse square root is:
$$ \text{invSqrt}(x) \approx \frac{1}{\sqrt{x}} $$
The algorithm achieves a balance between accuracy and speed, which was crucial for real-time 3D rendering in early graphics engines.
6.2 The Algorithm
Here is the step-by-step breakdown of the Fast Inverse Square Root algorithm:
- Step 1: Reinterpret the bits of the floating-point number $ x $ as an integer.
- Step 2: Perform a bit-level operation using a "magic constant" to compute an initial guess for $ \frac{1}{\sqrt{x}} $.
- Step 3: Use one iteration of Newton's method to refine the initial guess.
The magic constant used in the bit manipulation is often $ 0x5f3759df $, which gives a good approximation.
Here is a C implementation in the style of the well-known Quake III code:
float fastInverseSqrt(float x) {
    float xhalf = 0.5f * x;
    int i = *(int*)&x;               // Interpret the bits of x as an integer
    i = 0x5f3759df - (i >> 1);       // Perform the magic bit shift and subtraction
    x = *(float*)&i;                 // Reinterpret the bits as a float
    x = x * (1.5f - xhalf * x * x);  // One iteration of Newton's method
    return x;
}
The key parts of the algorithm are the bit manipulation and the Newton-Raphson refinement step.
6.3 Explanation of Key Steps
6.3.1 Bit Manipulation
The reinterpretation of the floating-point number as an integer allows the algorithm to manipulate the bits directly. The magic constant $ 0x5f3759df $ helps generate a good initial approximation for the inverse square root by exploiting the structure of floating-point numbers.
6.3.2 Newton's Method Refinement
Once the initial guess is computed using bit manipulation, the algorithm uses one iteration of Newton's method to improve the accuracy. The formula for the Newton-Raphson iteration is:
$$ x_{n+1} = x_n \times \left( 1.5 - \frac{x_n^2 \cdot x_{\text{input}}}{2} \right) $$
This step adjusts the approximation closer to the actual inverse square root.
6.4 Example
Let's compute the fast inverse square root of $ x = 4.0 $ using the C code described above:
- Step 1: $ x = 4.0 $, so we compute $ xhalf = 0.5 \times 4.0 = 2.0 $.
- Step 2: Reinterpret the bits of $ x $ as an integer. Let's assume the integer value is 0x40800000 (the IEEE-754 representation of 4.0).
- Step 3: Perform the bit shift and magic constant subtraction: $$ i = 0x5f3759df - \left( \frac{0x40800000}{2} \right) = 0x5f3759df - 0x20400000 = 0x3ef759df $$
- Step 4: Reinterpret the bits back to a float, giving an initial approximation of $ \frac{1}{\sqrt{x}} \approx 0.4831 $.
- Step 5: Refine the result using Newton's method: $$ x = 0.4831 \times \left( 1.5 - 2.0 \times 0.4831^2 \right) \approx 0.4831 \times 1.0332 \approx 0.4992 $$
After this single iteration, the approximation for $ \frac{1}{\sqrt{4}} $ is about $ 0.4992 $, very close to the correct value $ 0.5 $.
Applying this to the code:
import struct

def fast_inverse_sqrt(n):
    if n < 0:
        # 1 / sqrt(-n) = -i / sqrt(n), so the reciprocal of the result is +i * sqrt(n)
        return fast_inverse_sqrt(-n) * complex(0, -1)
    threehalfs = 1.5
    x2 = n * 0.5
    y = n
    packed_y = struct.pack('f', y)        # bits of the 32-bit float
    i = struct.unpack('i', packed_y)[0]   # reinterpret as a 32-bit integer
    i = 0x5f3759df - (i >> 1)             # magic constant and bit shift
    packed_i = struct.pack('i', i)
    y = struct.unpack('f', packed_i)[0]   # reinterpret back as a float
    y = y * (threehalfs - (x2 * y * y))   # one Newton-Raphson refinement
    return y

n = float(input("Enter a number to find its inverse square root: "))
print(1 / fast_inverse_sqrt(n))  # approximates sqrt(n), e.g. 25 -> about 5.0, -25 -> about 5j
6.5 Performance and Accuracy
- Performance: The Fast Inverse Square Root algorithm was faster than traditional methods because it avoided expensive floating-point divisions. It was particularly useful in early 3D rendering engines where performance was critical.
- Accuracy: The algorithm provides a reasonably accurate result after just one iteration of Newton's method. More iterations can improve the accuracy, but at the cost of performance.
6.6 Applications
The Fast Inverse Square Root is used in a variety of applications, especially in fields where 3D geometry, graphics, and physics simulations are involved, such as:
- 3D graphics engines (e.g., lighting calculations, normalization of vectors).
- Physics simulations requiring fast calculations of distances and angles.
- Games and real-time rendering systems where performance is critical.
While modern hardware has made this algorithm less relevant due to faster floating-point operations, it remains a popular example of algorithmic optimization in computer graphics history.
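As an illustration of the vector-normalization use case, here is a minimal sketch that reuses the fast_inverse_sqrt function defined in Section 6.4 (the vector values are arbitrary examples):
def normalize(v):
    # Scale a 3D vector to unit length: v * (1 / sqrt(v . v))
    inv_len = fast_inverse_sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return [component * inv_len for component in v]

print(normalize([3.0, 4.0, 0.0]))  # approximately [0.6, 0.8, 0.0]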
7. Continued Fractions for Square Roots (with Code Implementation)
Continued fractions provide an efficient way to approximate square roots by representing them as an infinite sequence of nested fractions. This method can also be adapted for complex numbers, which involve imaginary parts. The Python code below demonstrates how to compute the continued fraction approximation of the square root of both real and imaginary numbers using a fixed number of iterations.
7.1 Conceptual Overview
A continued fraction is a representation of a number through an infinite series of fractions. The continued fraction for the square root of $ n $ takes the form:
$$ \sqrt{n} = a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \dots}}} $$
Where $ a_0, a_1, a_2, \dots $ are integers determined by the continued fraction algorithm. This sequence of integers provides progressively better approximations to the square root of $ n $.
If $ n $ is a negative number, the square root will involve imaginary numbers. This is handled by converting the negative square root into a complex number.
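For example, the continued fraction of $ \sqrt{2} $ has terms $ [1; 2, 2, 2, \dots] $, and truncating it after successive terms gives the convergents
$$ 1, \quad 1 + \cfrac{1}{2} = \frac{3}{2}, \quad 1 + \cfrac{1}{2 + \cfrac{1}{2}} = \frac{7}{5}, \quad \frac{17}{12}, \quad \frac{41}{29} \approx 1.4138, $$
which approach $ \sqrt{2} \approx 1.41421 $.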
7.2 Modified Algorithm for Square Roots with Complex Numbers
The algorithm for computing the continued fraction of a square root with real or imaginary numbers involves the following steps:
- Step 1: Start with an integer approximation $ a_0 = \lfloor \sqrt{n} \rfloor $, where $ n $ is the number whose square root we seek.
- Step 2: For each iteration, update the values of $ m $, $ d $, and $ a $ using the recurrence relations: $$ m_{n+1} = d_n a_n - m_n $$ $$ d_{n+1} = \frac{n - m_{n+1}^2}{d_n} $$ $$ a_{n+1} = \left\lfloor \frac{a_0 + m_{n+1}}{d_{n+1}} \right\rfloor $$
- Step 3: After the terms have been generated, evaluate the nested continued fraction (from the innermost term outward) to obtain the approximation.
- Step 4: If $ n $ is negative, return the square root in the form of an imaginary number.
This code allows us to handle both real and imaginary inputs, providing approximations to square roots through continued fractions.
7.3 Code Implementation
The following Python code implements the continued fraction method for finding square roots, with support for real and imaginary numbers:
def sqrt_continued_fraction(n, iterations=10):
    if n < 0:
        return sqrt_continued_fraction(-n, iterations) * complex(0, 1)  # sqrt(-n) is imaginary
    n = int(n)  # the classical recurrence assumes an integer radicand
    a0 = int(n ** 0.5)
    if a0 * a0 == n:
        return a0  # perfect square: the root is exact
    # Generate the partial quotients a0, a1, a2, ... of the continued fraction
    m, d, a = 0, 1, a0
    quotients = [a0]
    for _ in range(iterations):
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        quotients.append(a)
    # Evaluate the nested fraction from the innermost term outward
    approx = quotients[-1]
    for q in reversed(quotients[:-1]):
        approx = q + 1 / approx
    return approx

# Example usage
n = float(input("Enter a number you want to find square root of: "))
print(sqrt_continued_fraction(n))
This code handles both real and complex square roots by checking whether $ n $ is negative. If it is, the square root is returned as an imaginary number.
7.4 Example Usage and Output
Let's say the user inputs $ n = -25 $. The code will compute the square root of $ -25 $ using continued fractions:
Enter a number you want to find square root of: -25
5j
The output is $ 5j $, which is the imaginary square root of $ -25 $. The algorithm can handle both positive and negative inputs by appropriately returning real or imaginary roots.
7.5 Applications and Limitations
- Applications: This approach is useful for approximating irrational square roots or handling complex numbers in mathematical computations.
- Limitations: The accuracy depends on the number of iterations, and increasing the iterations can improve the approximation. However, continued fractions may converge slowly compared to other numerical methods like Newton-Raphson.