Using Python: Quadratic Approximation of the Sine Function Without Limits or Infinitesimal Quantities (Mini Research Paper)
The question I started out with was: is there any way to express transcendental functions using polynomials while avoiding the idea of limits or infinitesimal quantities?
II. Origin of the Question
Nowadays, we learn the Cauchy-Weierstrass interpretation of limits in our calculus classes. Although I genuinely think it works and is more convenient, I suspected there would be other, creative ways to do this without estimating infinitesimal quantities. I found a paper named "The Lost Calculus" [1]; it avoids the idea of abstract limits but fails when it comes to transcendental functions. I knew that if I could break the sine function down into humble polynomials, I could then use something like Descartes's method of tangents [1] on those polynomials to get the derivative without the idea of limits. That's when I stumbled upon this question. The first thing that came to my mind was the sine function: I wondered if there was any way to represent it using a polynomial, which seems contradictory to its being a transcendental function. I was further energized when I saw the Weierstrass approximation theorem, which states that every continuous function defined on a closed interval [a, b] can be uniformly approximated as closely as desired by a polynomial function. I wanted to at least find something that closely mapped to the sine function, even if it didn't fully match it. I also thought that using technology would help, because when Descartes or Hudde were working with these stubborn functions they had no aid of computation; they had to do everything by hand.
III. Investigation of the Question
To narrow down the problem, the first thing I did was work with a single transcendental function: the sine function. I wanted to avoid the Taylor series at first, because it involves taking multiple derivatives of the function at a point to express it with polynomials; that is tedious for Descartes's method and relies on infinitesimals. To approximate the sine function, the first idea that came to mind was using a quadratic function to represent one arch of the curve with a parabola.
Sine and the other trigonometric functions are periodic, which means representing the function over one period suffices for the whole interval (-∞, ∞). Assuming we have a sine function with period 2π centered at 0, we need to observe its behaviour on the interval [-π, π]. On this interval, sine is an odd function, because f(-x) = -f(x), so its graph is symmetric about the origin.
Figure 1.1
As we can see in Figure 1.1, the curve looks parabolic on the interval [0, π]. At this point I got excited, thinking I could make the endpoints of this interval the roots of a parabola and dilate that parabola horizontally and vertically to fit the arch of the sine curve. That way I could approximate an entire period of the sine function piecewise, out of quadratic pieces.
To make this easier, I used the sine function sin(2πx), whose period is 1, giving me more freedom by leaving a rational number to play around with. I knew that within the period [0, 1] there had to be another zero halfway through, at x = 0.5. So, with the zeroes of our quadratic at 0 and 0.5, we can start constructing the function as -(x - 0)(x - 0.5).
Figure 1.2: Violet function → -16(x - 0)(x - 0.5), Green function → sin(2πx)
Trying out different values, this was the closest I could get. It is a close approximation, but mathematically it is still far from the two being equivalent. I thought this approach might work because the sine curve looks like an upside-down parabola over the interval [0, 0.5], and I hoped I could dilate the parabola by some factor to bring it close to the actual sine function.
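In hindsight, the -16 in Figure 1.2 can also be pinned down without trial and error: the parabola a(x - 0)(x - 0.5) peaks halfway between its roots, at x = 0.25, and sin(2π · 0.25) = 1, so the stretch factor has to solve a · 0.25 · (0.25 - 0.5) = 1. A quick check in Python:

```python
# Vertex matching: force the parabola a*(x - 0)*(x - 0.5) to reach 1
# at x = 0.25, the point where sin(2*pi*x) peaks.
a = 1 / (0.25 * (0.25 - 0.5))
print(a)  # -16.0
```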
I started to think of other ways to find close approximations of the sine function. Since I was trying to avoid infinitesimal quantities, i.e. the idea of a value going infinitely close to zero, I started thinking of using datasets where the change in x was not infinitely small, just a very small quantity. I created a dataset of sine values for degrees 1 to 360 using Python [2].
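A short script along these lines reproduces that table (the filenames here are placeholders I picked, not the ones from my original run):

```python
import math

# Sine values for whole degrees 1 through 360, with degrees converted to radians
inputs = [math.radians(deg) for deg in range(1, 361)]
outputs = [math.sin(x) for x in inputs]

# Store the inputs and the outputs in two separate files, one value per line
with open("sine_inputs.txt", "w") as f:
    for v in inputs:
        f.write(str(v) + "\n")
with open("sine_outputs.txt", "w") as f:
    for v in outputs:
        f.write(str(v) + "\n")
```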
I converted the degrees to radians and stored the inputs and the outputs in two separate files using the file.write method in Python. I then tried the numpy.polyfit function, which fits a polynomial p(x) = p[0] * x**deg + ... + p[deg] of degree deg to points (x, y), where x and y are arrays of inputs and outputs. I created a sample array with numpy.linspace to test this out: domain = np.linspace(-math.pi, math.pi, 360). This approach failed; it returned an error that I couldn't understand.
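I couldn't reproduce the exact error without the original script, but numpy.polyfit does work on this kind of data when the x and y arrays have matching lengths; a minimal sketch of what the fit looks like:

```python
import math
import numpy as np

# 360 sample points over [-pi, pi] and their sine values
domain = np.linspace(-math.pi, math.pi, 360)
values = np.sin(domain)

# Least-squares fit of p(x) = p[0]*x**deg + ... + p[deg]
coeffs = np.polyfit(domain, values, deg=5)
fitted = np.polyval(coeffs, domain)
max_err = np.max(np.abs(fitted - values))
print(max_err)
```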
Then I found a precalculated trig table [3] covering the interval (0, π/4), and I thought about doing a quadratic regression on those values. I found pseudocode online [4] whose authors included the coefficients of x from a quadratic regression they had done themselves. I wrote two functions in Python using that pseudocode and those regression values.
def wrapangle(v):
    # Bring v into the closed interval [-pi, pi]
    while not -math.pi <= v <= math.pi:
        if v < -math.pi:
            v += 2 * math.pi
        elif v > math.pi:
            v -= 2 * math.pi
    return v
The wrapangle function makes sure the input lands in the closed interval [-π, π]. It adds or subtracts 2π until the value reaches that interval; this doesn't change the output, because of the periodic nature of the function.
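An equivalent, loop-free version (my own variant, not part of the pseudocode I followed) gets the same reduction with the modulo operator:

```python
import math

def wrapangle_mod(v):
    # Shift so the target interval maps to [0, 2*pi), take the remainder,
    # then shift back to [-pi, pi)
    return (v + math.pi) % (2 * math.pi) - math.pi

print(wrapangle_mod(7.0))  # 7 - 2*pi, roughly 0.716815
```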
def sinapprox(x):
    # First pass: a parabola fitted to sine on each half of [-pi, pi]
    if x < 0:
        sin = 1.27323954 * x + 0.405284735 * x * x
    else:
        sin = 1.27323954 * x - 0.405284735 * x * x
    # Second pass: refine the estimate using the first pass's output
    if sin < 0:
        sin = sin * (-0.255 * (sin + 1) + 1)
    else:
        sin = sin * (0.255 * (sin - 1) + 1)
    return sin
The sinapprox function is the main function; it takes the output of the wrapangle function as its input:
output = sinapprox(wrapangle(x))
This function uses the idea of nested regression: as we can see, the corrections are nested inside each other. It is almost the same idea as second derivatives, except that in derivatives Δx becomes infinitely small in dy/dx, whereas in regression it is fixed and can be either big or small. The smaller Δx is, the more accurate the approximation, at the cost of added time complexity.
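To put a number on what the nested pass buys, one can compare the worst-case error of the bare parabola against the refined version over [0, π]. The bounds in the comments come from my own sampling runs, so treat them as approximate:

```python
import math

def parabola_pass(x):
    # First regression pass (the x >= 0 branch of sinapprox)
    return 1.27323954 * x - 0.405284735 * x * x

def refined_pass(s):
    # Second, nested pass applied to the first pass's output
    if s < 0:
        return s * (-0.255 * (s + 1) + 1)
    return s * (0.255 * (s - 1) + 1)

xs = [i * math.pi / 2000 for i in range(2001)]
err_one = max(abs(parabola_pass(x) - math.sin(x)) for x in xs)
err_two = max(abs(refined_pass(parabola_pass(x)) - math.sin(x)) for x in xs)
print(err_one, err_two)  # roughly 0.056 vs 0.008 in my runs
```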
I plotted the sine function using matplotlib in Python, along with my new approximation function, using pylab, which is part of the matplotlib library.
import pylab
import numpy
import math

x = numpy.linspace(-math.pi, 0, 360)  # 360 equally spaced numbers in [-pi, 0]
y = numpy.sin(x)
z = 1.27323954 * x + 0.405284735 * x * x  # first-pass parabola (negative branch)
approx = z * (-0.255 * (z + 1) + 1)       # refined approximation
pylab.plot(x, y)       # plot sin(x)
pylab.plot(x, approx)  # plot the approximation
pylab.show()           # show the plot
Figure 1.3: approximated function with the original sine function
Figure 1.4: both functions, zoomed in approximately 750 times
Even using all of this, I still wasn't able to completely map the sine function. As Figure 1.3 shows, the curves look almost identical, but the match falls apart when I zoom in roughly 750x (Figure 1.4).
IV. Conclusions
I wasn't able to come up with a perfect match, a quadratic polynomial that represents the sine function exactly without using the idea of infinitesimal quantities, but I came close to mapping the function over the interval. I wanted to use piecewise quadratic functions; I think using a third-degree polynomial over an extended interval would be worth a try, but my intuition now tells me that the lower the degree and the smaller the interval, the better the accuracy of the approximation. Something that surprised me was how close the quadratic approximation came without using a quadratic Lagrange polynomial. At first my approach was largely analytical, trying to work the problem out algebraically, but it soon turned much more numeric, and I started to understand how small the intervals need to be to reach higher accuracy. My first approach was fairly obvious; with the program I moved to nested regression, which effectively made Δx smaller and was more accurate. It made me realize why the idea of infinitesimals is so crucial. Also, some of the syntax in numpy or matplotlib can get very tedious, which leaves room for more errors.
V. Extensions
To extend this problem, we can ask for the margin of error of the quadratic function we constructed: on a strict closed interval, for every x value, what is the maximum offset |g(x) - f(x)| between our function g and the original f(x) = sin(x)? This would even let us find out at which values of x the two functions match each other exactly. Also, not speaking from a purely mathematical standpoint, we can extend it by analyzing how far these approximations are off in real-life situations, because sine waves are used heavily in sound, circuits, and game development. We could also use the Taylor series of the sine function to represent it with polynomials, but then represent the Taylor series without the idea of infinitesimals. That would still preserve our condition, but would be much more analytic: we could use Descartes's method of tangents [1] and Hudde's method of finding double roots of a polynomial [1] to work out the first, second, third derivative, and so on. Approximation functions are also good for certain programming situations where fetching the value from the standard library costs more time than approximating it. So, the extensions of this problem are 'limit'less.
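That margin-of-error check is easy to prototype numerically: scan a grid, record |g(x) - f(x)|, and note where the maximum lands (the grid resolution here is arbitrary):

```python
import math

def f(x):
    return math.sin(2 * math.pi * x)

def g(x):
    # The piecewise quadratic piece from earlier, valid on [0, 0.5]
    return -16 * x * (x - 0.5)

# Scan [0, 0.5] and track the largest offset |g(x) - f(x)|
n = 10000
worst_x, worst_err = 0.0, 0.0
for i in range(n + 1):
    x = 0.5 * i / n
    err = abs(g(x) - f(x))
    if err > worst_err:
        worst_x, worst_err = x, err

print(worst_x, worst_err)
```

At x = 0, 0.25, and 0.5 the offset drops to (essentially) zero, which is exactly the perfect-match question raised above.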
VI. References
(1) Suzuki, Jeff. "The Lost Calculus (1637-1670): Tangency and Optimization without Limits." Mathematics Magazine, Dec. 2005, www.maa.org/programs/maa-awards/writing-awards/the-lost-calculus-1637-1670-tangency-and-optimization-without-limits.
(2) "Python: Generate Sine Table." Hackzine Wiki, wiki.hackzine.org/development/python/generate-sine-table.html.
(3) Analyzemath, www.analyzemath.com/trigonometry/trig_1.gif.
(4) Michael. "Fast and Accurate Sine/Cosine Approximation." lab.polygonal.de/2007/07/18/fast-and-accurate-sinecosine-approximation/.
Saikanam Siam
Junior
Math Major
Brooklyn Technical High School