So, having motivated and hyped up the generality of the master method and its use for analyzing recursive algorithms, let's move on to its precise mathematical statement.
Now the master method is, in some sense, exactly what you want.
It's what I'm gonna call a black box for solving recurrences.
Basically, it takes as input a recurrence in a particular format, and it spits out as output a solution to that recurrence: an upper bound on the running time of your recursive algorithm.
That is, you just plug in a few parameters of your recursive algorithm, and boom, out pops its running time.
Now, the master method does require a few assumptions, and let me be explicit about one right now.
Namely, the master method, at least the one I'm gonna give you, is only gonna be relevant for problems in which all of the subproblems have exactly the same size.
So, for example, in Merge Sort, there are two recursive calls, and each is on exactly one-half of the array.
So Merge Sort satisfies this assumption: both subproblems have equal size.
Similarly, in both of our integer multiplication algorithms, all subproblems were on integers with N over two digits, that is, with half as many digits.
So those will also obey this assumption.
If, for some reason, you had a recursive algorithm that recursed on one-third of the array and then on the other two-thirds of the array, the master method that I'm gonna give you will not apply to it.
There are generalizations of the master method which can accommodate unbalanced subproblem sizes, but those are outside the scope of this course.
This will be sufficient for almost all of the examples we're going to see.
One notable exception, for those of you who watched the optional video on the deterministic algorithm for linear-time selection: that will be one algorithm which has two recursive calls on different subproblem sizes.
So to analyze that recurrence, we'll have to use a different method, not the master method.
Next I'm going to describe the format of the recurrences to which the master method applies.
As I said, there are more general versions of the master method which apply to even more recurrences, but the one I'm going to give you is going to be reasonably simple and it will cover pretty much all the cases you're likely ever to encounter.
So recurrences have two ingredients.
There's the relatively unimportant, but still necessary, base case.
And we're gonna make the obvious assumption, which is satisfied by every example we're ever going to see in this course: at some point, once the input size drops to a sufficiently small constant, the recursion stops and the subproblem is solved in constant time.
Since this assumption is pretty much always satisfied in every problem we're going to see, I'm not gonna discuss it much further.
Let's move on to the general case, where there are recursive calls.
So we assume the recurrence is given in the following format.
The running time on an input of length N is bounded above by some number of recursive calls, call it A of them.
And each of these subproblems has exactly the same size: a one-over-B fraction of the original input size.
So there are A recursive calls, each on an input of size N over B.
Now, as usual, there's the case where N over B is a fraction and not an integer. As usual, I'm going to be sloppy and ignore it, and as usual that sloppiness has no implications for the final conclusion; everything we're going to discuss is true, for the same reasons, in the general case where N over B is not an integer. Outside the recursive calls, we do some extra work, and let's say that it's O of N to the D for some parameter D. So, in addition to the input size N, there are three letters here whose meaning we need to be very clear about.
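Written out symbolically (using lowercase a, b, d for the A, B, D just described), the assumed recurrence format is:

```latex
T(n) \;\le\; a \cdot T\!\left(\frac{n}{b}\right) \;+\; O(n^d),
\qquad \text{with } T(n) = O(1) \text{ for all sufficiently small } n.
```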
So first of all there's A, which is the number of subproblems, the number of recursive calls.
So A could be as small as one or it might be some larger integer.
Then there's B.
B is the factor by which the input size shrinks before a recursive call is applied.
B is some constant strictly greater than one.
So, for example, if you recurse on half of the original problem, then B would be equal to two.
B had better be strictly bigger than one so that the subproblems keep shrinking and the recursion eventually terminates.
Finally, there's D, which is simply the exponent in the running time of the, quote-unquote, combine step.
That is, the amount of work which is done outside of the recursive calls.
And D could be as small as zero, which would indicate constant amount of work outside of the recursive calls.
One point to emphasize is that A, B and D are all constants.
That is, they're all numbers that are independent of N.
So A, B and D are gonna be numbers like one, two, three or four.
They do not depend on the input size N.
And in fact, let me just redraw the D so that you don't confuse it with the A.
So again, A is the number of recursive calls.
And D is the exponent in the running time governing the work done outside of the recursive calls.
Now, one comment about that final term, the Big O of N to the D.
On the one hand, I'm being sorta sloppy.
I'm not keeping track of the constant that's hidden inside the Big O notation.
I'll be explicit with that constant when we actually prove the master method, but it's really not gonna matter.
It's just gonna carry through the analysis, without affecting anything.
So you can go ahead and ignore that constant within the Big O.
Obviously, the constant in the exponent, namely D, is very important.
Alright? Depending on what D is, that amount of work is constant, linear, quadratic, and so on.
So we certainly care about the value of D.
So that's the input to the master method.
It is a recurrence of this form, so you can think of it as a recursive algorithm which makes A recursive calls, each on subproblems of equal size N over B, plus it does O of N to the D work outside of the recursive calls.
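To make this setup concrete, here's a small sketch (my own, not code from the course) that evaluates such a recurrence numerically for given constants A, B, D, taking T(n) = 1 for n at most one as the base case:

```python
def T(n, a, b, d):
    """Numerically evaluate T(n) = a * T(n / b) + n**d, with T(n) = 1 for n <= 1."""
    if n <= 1:
        return 1  # base case: constant work once the input is small enough
    # We treat n / b as a real number -- the usual sloppiness about
    # non-integer subproblem sizes, which doesn't change the asymptotics.
    return a * T(n / b, a, b, d) + n ** d

# Merge Sort's recurrence: a = 2 calls, each on half the input (b = 2),
# plus linear merge work (d = 1).  For powers of two this evaluates
# exactly to n * log2(n) + n.
print(T(1024, 2, 2, 1))  # 11264.0 == 1024 * 10 + 1024
```

Plugging in other constants lets you sanity-check each case of the master method against actual numbers.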
So, having set up the notation, I can now precisely state the master method for you.
So, given such a recurrence, we are going to get an upper bound on the running time: the running time on inputs of size N is going to be upper bounded by one of three things.
So somewhat famously the master method has three cases.
So let me tell you about each of them.
The trigger, which determines which case you're in, is a comparison between two numbers.
First of all, A; recall A is the number of recursive calls made.
And second, B raised to the D power; recall B is the factor by which the input size shrinks before you recurse, and D is the exponent in the amount of work done outside the recursive calls.
So we're gonna have one case for when they're equal, one case for when A is strictly smaller than B to the D, and a third case for when A is strictly bigger than B to the D.
And in the first case, we get a running time of Big O of N to the D times log N.
Again, this is the same D that was in the final term of the recurrence, the work done outside of the recursive calls.
So in the first case, the running time matches the work done outside the recursive calls, but we pick up an extra log N factor.
In the second case, where A is smaller than B to the D, the running time is merely Big O of N to the D.
And this case might be somewhat stunning, if it could ever occur, because, of course, what does the recurrence say? You do some recursion, plus you do N to the D work outside of the recursion.
So in the second case, it actually says the work is dominated by just what's done outside the recursion in the outermost call.
The third case will initially seem the most mysterious: when A is strictly bigger than B to the D, we're gonna get a running time of Big O of N to the log, base B, of A.
Again, recall A is the number of recursive calls and B is the factor by which the input size shrinks before you recurse.
So that's the master method with its three cases.
Let me give this to you in a cleaner slide to make sure there's no ambiguity in my handwriting.
So here's the exact same statement, the master method.
Once again, with its three cases, depending on how A compares to B to the D.
So one final comment.
You'll notice that I'm being asymmetrically sloppy with the two logarithms that appear in these formulas.
So let me just explain why.
In particular, you'll notice that in case one, with the logarithm, I'm not specifying the base.
Why is that? Well, it's because logarithms with respect to any two different bases differ by a constant factor.
So the logarithm base E, that is, the natural logarithm, and the logarithm base two, for example, differ by only a constant factor, independent of the argument N.
So you can switch this logarithm to whatever constant base you like; it only changes the leading constant factor, which of course is being suppressed in the Big O notation anyways.
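A quick numerical aside of my own, illustrating the point: the ratio between logarithms in two different bases is the same constant for every N.

```python
import math

# log_2(n) = ln(n) / ln(2), so log_2(n) / ln(n) equals the constant
# 1 / ln(2), roughly 1.4427, no matter how large n is.
for n in (10, 1_000, 1_000_000):
    print(n, math.log2(n) / math.log(n))
```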
On the other hand, in case three, we have a logarithm in the exponent.
Once it's in the exponent, we definitely care about that constant; that constant is the difference between, say, linear time and quadratic time.
So we need to keep careful track of the logarithm base in the exponent in case three, and that base is precisely B, the factor by which the input shrinks with each recursive call.
So that's the precise statement of the master method.
And the rest of this lecture will work toward understanding the master method.
So first, in the next video, we'll look at a number of examples, including resolving the running time of our recursive algorithms for integer multiplication.
Following those several examples, we'll prove the master method.
And I know, right now, these three cases probably look super mysterious.
But if I do my job, by the end of the analysis, they should make complete sense.