We hear that students are abandoning science and engineering studies in droves, not least of all because it turns out that those subjects are hard and graded rather strictly. Perhaps I can help alleviate the pain with the current version of my twenty-minute calculus.

{Best presented with the help of a blackboard}

If you have opened your calculus book, you may have noticed that it consists of about 1700 pages of closely spaced text, diagrams, formulas, and equations. Perhaps that experience has already convinced some of you to change your major to psychology or art history.

For those of you who plan to leave, then, as well as those of you who plan to stay, I would like to start this lecture by mentioning that there are only a few key ideas in calculus, and those are handy to know even if you do plan to major in psychology or art history. Depending on your point of view, those ideas are one, two, or three in number. I should add that none of those ideas will be exactly new to you.

Let’s start with the one idea, since it’s intimately involved with the other two. That idea is the idea of **the limit**. A function, we recall, is a kind of rule which, presented with one number, gives us another. An example might be the squaring function: given one number, it returns the square of that number, that is, the number times itself. Two squared is four. Three squared is nine. And so on.

Consider a function that’s slightly harder to describe: f(n), the sum over all integers k from 0 to n of the fractions 1/(2^k). Remembering that any non-zero number to the zero power is one, we see that f(0) = 1, f(1) = 1 + 1/(2^1) = 1 + ½ = 3/2, f(2) = 1 + ½ + ¼ = 7/4, f(3) = 1 + ½ + ¼ + 1/8 = 15/8. If you know or suspect that these numbers get closer and closer to 2 as n gets larger and larger, you are perfectly correct. In fact we can write this as Limit[f(n), n -> Infinity] = 2. This idea of **limit** turns out to be particularly useful, especially since it’s an enabling technology for the next two ideas.
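If you’d like to watch the limit happen rather than take my word for it, here is a quick sketch (mine, not part of the lecture proper) that computes the partial sums:

```python
# Partial sums of 1 + 1/2 + 1/4 + ... ; they creep up toward the limit 2.
def f(n):
    return sum(1 / 2**k for k in range(n + 1))

for n in [0, 1, 2, 3, 10, 20]:
    print(n, f(n))
```

By f(20) the sum is within a millionth of 2, and it never quite gets there for any finite n; that gap between "never quite" and "in the limit, exactly" is the whole idea.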

The second important idea of the calculus is the idea of rate of change. We use this one all the time in everyday life – the speed of our car is the rate of change of its position, for example. If we aren’t accelerating or decelerating, speed is sort of easy to calculate – we can just measure the distance we travel in a certain amount of time and divide distance by time. If our speed is changing, though, how do we do it? Well, we look at the same relation: distance travelled/elapsed time and apply our idea of limit.

Limit[Δf/Δt, Δt -> 0] = df/dt, where Δf is the distance travelled in a small elapsed time Δt. That funny-looking ratio df/dt we wrote for the limit is called **the derivative** of distance with respect to time, and that’s defined to be the instantaneous speed. Some of you may realize that distance has to be a smooth function of time for that limit to exist – there can’t be any instantaneous jumps in position, but those don’t seem to happen in the real world.

Rate of change, or derivative, can be applied to many things outside of travel, of course, which is why derivative is such an important concept. If you can take the derivative of a function at every point, you get a new function, the derivative of the first function. There is an interesting way to reverse this process: start with a function and recover the function it is the derivative of. Remember that the derivative function at each point is the rate of change of the original function at that point.

Let's apply that to speed. Suppose you had been driving along a long straight road, and knew what your speed was at every point in time (because you have one of those recording speedometers, say) but had no idea of how far you had gone.

How can you turn the previous process around and figure out how far you had gone at each time? How about trying this: break up your speedometer record into, say, five-minute increments. Pick, say, the average of the fastest and slowest speed values in each increment, and multiply by five minutes. If your speed isn’t changing too rapidly, that product should approximate the distance travelled in that five-minute interval. If you add up the values for each interval up to a given point, that approximates the total distance travelled to that point. Suppose we now make those intervals shorter and shorter. In the limit where the length of the intervals goes to zero, the value of the sum up to time T is **the integral** of the speed function (we write Integral[speed = df/dt, {t = 0 to T}] = f(T)), aka the net distance travelled up to time T.
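The chop-multiply-add recipe above can be sketched directly. This is my own toy version, with a made-up speed record v(t) = t² so that we know the right answer in advance (the distance from 0 to 3 is 3³/3 = 9); for the "fastest and slowest" value in each slice I use the two endpoint speeds, which is what they are for a steadily increasing speed:

```python
# Recover distance from a recorded speed: chop time into n intervals,
# estimate the speed on each one, multiply by its length, and add up.
def speed(t):              # hypothetical speedometer record: v(t) = t^2
    return t * t

def distance(T, n):
    dt = T / n
    total = 0.0
    for k in range(n):
        a, b = k * dt, (k + 1) * dt
        # average of the slowest and fastest speed in this interval
        total += (speed(a) + speed(b)) / 2 * dt
    return total

for n in [5, 50, 500]:
    print(n, distance(3.0, n))   # exact answer is 3**3 / 3 = 9
```

With 5 intervals the estimate is a bit high; with 500 it is within a few thousandths of 9. Shrinking the intervals is precisely "taking the limit," and the limiting value is the integral.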

The fact that this process of integration is in some sense opposite to differentiation is summarized in the Fundamental Theorem of Calculus: Integral[df/dx, {x = x0 to x1}] = f(x1) - f(x0).

We can do the adding up even for functions that we don’t recognize as being **the derivative** of some other function. Suppose, for example, we have just about any somewhat smooth function f. We can break up the interval from 0 to some value x into pieces, multiply the value of f on each piece by the length of that piece, and add up all the products. Then if F(x) is the integral from 0 to x of f (AKA the antiderivative of f), its derivative is dF/dx = f(x).
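We can even check that last claim numerically. In this sketch (again mine, with f(x) = x² as the stand-in function) I build F(x) as an "area so far" sum, then take its rate of change over a tiny interval and compare it to f itself:

```python
# Build F(x) as a numeric running integral of f, then check that the
# rate of change of F matches f, as the Fundamental Theorem promises.
def f(x):
    return x * x

def F(x, n=10000):         # crude integral of f from 0 to x: n slices,
    dx = x / n             # each valued at its midpoint
    return sum(f((k + 0.5) * dx) * dx for k in range(n))

x, h = 2.0, 1e-4
dF = (F(x + h) - F(x - h)) / (2 * h)   # numeric derivative of F at x
print(dF, f(x))            # both close to 4
```

The two printed numbers agree to several decimal places: differentiating the integral hands you back the function you started with.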

Well, that’s all three of the key ideas (**limit, derivative, integral**) of calculus. The rest of the stuff in the book is details, technology, and applications, all of which are pretty important, but these three were just the key ideas. We got the derivative, remember, by taking the ordinary idea of rate of change and looking at the limit as the amount of time got very short. Similarly, we got the integral by looking at adding up values on pieces of a function in the limit where the pieces got very short.

I hope you have questions, because otherwise this is a really short lecture.

Corrections not involving epsilons, deltas, or inverse maps of open sets are welcome.