3.4 「Stanford Algorithms」O(n log n) Algorithm for Closest Pair 1 [Advanced - Optional] - Part 1

So in this video and the next, we're going to study a very cool divide and conquer algorithm for the closest pair problem.

This is a problem where you're given n points in the plane and you want to figure out which pair of points is closest to each other.

So this is the first taste we get of an application in computational geometry, the part of algorithms that studies how to reason about and manipulate geometric objects.

So those algorithms are important in, among other areas, robotics, computer vision, and computer graphics.

So this is relatively advanced material, it's a bit more difficult than the other applications of divide and conquer that we've seen.

The algorithm is a little bit tricky and it has a quite nontrivial proof of correctness, so just be ready for that. Also be warned that, because it's more advanced, I'm going to talk about the material at a slightly faster pace than I do in most of the other videos.

So let's begin now by defining the problem formally. We're given as input n points in the plane, each one defined by its x coordinate and its y coordinate.

And when we talk about the distance between two points in this problem, we're going to focus on Euclidean distance.

So let me briefly remind you what that is, and we're going to introduce some simple notation for it, which we'll use for the rest of the lecture.

So we're just going to denote the Euclidean distance between two points, p_i and p_j, by d(p_i, p_j).

So in terms of the x and y coordinates of these two points, we just look at the squared differences in each coordinate, sum them up, and take the square root.
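
In symbols, writing p_i = (x_i, y_i) and p_j = (x_j, y_j), that description is just:

$$ d(p_i, p_j) = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}. $$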

And now as the name of the problem would suggest, the goal is to identify among all pairs of points that pair which has the smallest distance between them.

Next, let's start getting a feel for the problem by making some preliminary observations.

First, I want to make an assumption, purely for convenience, that there are no ties.

That is, I'm going to assume all n points have distinct x coordinates, and also all n points have distinct y coordinates.

It's not difficult to extend the algorithm to accommodate ties.

I'll leave it to you to think about how to do that.

So next, let's draw some parallels with the problem of counting inversions, which was an earlier application of divide and conquer that we saw.

The first parallel I want to point out is that, if we're comfortable with a quadratic-time algorithm, then this is not a hard problem; we can simply solve it by brute-force search.

And again, by brute-force search, I just mean we set up a double for loop, which iterates over all distinct pairs of points.

We compute the distance for each such pair and we remember the smallest.

That's clearly a correct algorithm, it has to iterate over a quadratic number of pairs, so its running time is going to be theta of n squared.
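
To make that concrete, here is a minimal Python sketch of the brute-force search; the tuple representation of points and the function name brute_force_closest_pair are illustrative choices, not anything fixed by the lecture.

```python
from math import hypot, inf

def brute_force_closest_pair(points):
    """Check all distinct pairs of points; Theta(n^2) time."""
    best_pair, best_dist = None, inf
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            d = hypot(x1 - x2, y1 - y2)  # Euclidean distance between the two points
            if d < best_dist:
                best_pair, best_dist = (points[i], points[j]), d
    return best_pair  # None if fewer than two points were given
```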

And, as always, the question is: can we apply some algorithmic ingenuity to do better? Can we have a better algorithm than this naive one which iterates over all pairs of points? You might have an initial instinct that, because the problem asks about a quadratic number of different objects, perhaps we fundamentally need to do quadratic work.

But again, recall back in counting inversions, using divide and conquer, we were able to get an n log n algorithm despite the fact that there might be as many as a quadratic number of inversions in an array.

So the question is, can we do something similar here for the closest pair problem? Now, one of the keys to getting an n log n time algorithm for counting inversions was to leverage a sorting subroutine.

Recall that we piggybacked on merge sort to count the number of inversions in n log n time.

So the question is whether, here with the closest pair problem, sorting can again be useful in some way to beat the quadratic barrier.

So, to develop some evidence that sorting will indeed help us compute the closest pair of points in better than quadratic time, let's look at a special case of the problem, really an easier version of the problem, which is when the points are in just one dimension, so on the line rather than in two dimensions in the plane.

So in the 1D version, all the points just lie on a line like this one, and we're given the points in some arbitrary order not necessarily in sorted order.

So, a way to solve the closest pair problem in one dimension is to simply sort the points; then, of course, the closest pair had better be adjacent in this ordering, so you just iterate through the n minus 1 consecutive pairs and see which one has the smallest distance. So, more formally, here's how you solve the one-dimensional version of the problem.

You sort the points according to their only coordinate, because, remember, this is one dimension.

So as we've seen, using merge sort, we can sort the points in n log n time and then we just do a scan through the points, so this takes linear time.

And for each consecutive pair, we compute their distance and we remember the smallest of those consecutive pairs and we return that.

That's gotta be the closest pair.
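
Here is a minimal sketch of that one-dimensional routine in Python, assuming the input is a plain list of at least two distinct numbers; the function name is just for illustration.

```python
def closest_pair_1d(points):
    """1D closest pair: sort in O(n log n), then scan the n - 1 consecutive pairs in O(n)."""
    pts = sorted(points)                 # assumes at least two distinct numbers
    best = (pts[0], pts[1])
    for a, b in zip(pts, pts[1:]):       # consecutive pairs in sorted order
        if b - a < best[1] - best[0]:
            best = (a, b)
    return best
```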

So, in this picture here on the right, I'm just going to circle here in green the closest pair of points.

So this is something we discover by sorting and then doing a linear scan.

Now, needless to say, this isn't directly useful, this is not the problem I started out with.

We wanted to find the closest pair among points in the plane, not points on the line.

But I want to point out that, even on the line, there are a quadratic number of different pairs, so brute-force search is still a quadratic-time algorithm even in the 1D case.

So at least in one dimension, we can piggyback on sorting to beat the naive brute-force search bound and solve the problem in n log n time.

So our goal for this lecture is going to be to devise an equally good algorithm for the two-dimensional case; we want to solve closest pair of points in the plane, again in O(n log n) time.

So we will succeed in this goal.

I'm going to show you an n log n time algorithm for 2D closest pair.

It's going to take us a couple steps.

So let me begin with a high level approach.

Alright.

So the first idea to try is just to copy what worked for us in the one-dimensional case.

In the one-dimensional case, we first sorted the points by their coordinate, and that was really useful.

Now, in the 2D case, points have two coordinates, an x coordinate and a y coordinate, so there are two ways to sort them.

So let's just sort them both ways; that is the first step of our algorithm, which you should really think of as a preprocessing step.

We're going to take the input points.

We invoke merge sort once to sort them according to x coordinate, that's one copy of the points.

And then we make a second copy of the points where they're sorted by y coordinates.

So we're going to call those copies of points px, that's an array of the points sorted by x coordinate, and py for them sorted by y coordinate.

Now, we know merge sort takes O(n log n) time, so this preprocessing step only takes O(n log n) time.
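
A minimal sketch of this preprocessing step, assuming points are (x, y) tuples; Python's built-in sort stands in for merge sort here, and the names Px and Py match the two arrays described above.

```python
def preprocess(points):
    """Preprocessing: two O(n log n) sorts of the same point set."""
    Px = sorted(points, key=lambda p: p[0])   # copy sorted by x coordinate
    Py = sorted(points, key=lambda p: p[1])   # copy sorted by y coordinate
    return Px, Py
```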

And again, given that we're shooting for an algorithm with running time big O of n log n, why not sort the points? We don't even know how we're going to use this fact right now, but it's sort of harmless.

It's not going to affect our goal of getting a big O of n log n time algorithm.

And indeed, this illustrates a broader point, which is one of the themes of this course.

So recall, I hope one of the things you take away from this course is a sense for what the for-free primitives are, what manipulations or operations you can do on data which are basically costless.

Meaning that if your data set fits in the main memory of your computer, you can basically invoke the primitive and it's just going to run blazingly fast, and you can just do it even if you don't know why.

And again, sorting is the canonical for-free primitive, although we'll see some more later in the course, and here we're using exactly that principle.

So we don't even understand yet why we might want the points to be sorted.

It just seems like it's probably going to be useful, motivated by the 1D case, so let's go ahead and make sorted copies of the points by x and y coordinate up front.

So reasoning by analogy with the 1D case suggests that sorting the points might be useful, but we can't carry this analogy too far.

So in particular, we're not going to be able to get away with just a simple linear scan through these arrays to identify the closest pair of points.

So, to see that, consider the following example.

So we're going to look at a point set which has six points.

There's going to be two points, which I'll put in blue which are very close in x coordinate, but very far away in y coordinate.

And then there's going to be another pair of points which I'll do in green, which are very close in y coordinate, but very far away in x coordinate.

And then there's going to be a red pair of points, which are not too far away in either the x coordinate or the y coordinate.

So in this set of six points, the closest pair is the pair of red points.

They're not even going to show up consecutively on either of the two arrays, right? So in the array that's sorted by x coordinate, this blue point here is going to be wedged in between the two red points, they won't be consecutive.

And similarly in Py, which is sorted by y coordinate, this green point is going to be wedged between the two red points.

So you won't even notice these red points if you just do a linear scan through Px and Py, looking at consecutive pairs of points.

So, following our preprocessing step, where we just invoke merge sort twice, we're going to do a quite nontrivial divide and conquer algorithm to compute the closest pair.

So really, in this algorithm, we're applying the divide and conquer paradigm twice.

First, internal to the sorting subroutine, assuming that we use the merge sort algorithm to sort.

Divide and conquer is being used there to get an n log n running time in this preprocessing step, and then we're going to use it again on the sorted arrays in a new way, and that's what I'm going to tell you about next.

So let's just briefly review the divide and conquer algorithm design paradigm before we apply it to the closest pair problem.

So, as usual, the first step is to figure out a way to divide your problem into smaller subproblems.

Sometimes this requires a reasonable amount of ingenuity, but it's not going to here.

Here in the closest pair problem, we're going to proceed exactly as we did in the merge sort and counting inversions problems, where we took the array and broke it into its left and right half.

So here, we're going to take the input point set, and again, just recurse on the left half of the points, and recurse on the right half of the points.

So here, by left and right, I mean with respect to the points' x coordinates.

There's pretty much never any ingenuity in the conquer step, that just means you take the sub-problems you identified in the first step, and you solve them recursively.

That's what we'll do here: we'll recursively compute the closest pair in the left half of the points, and the closest pair in the right half of the points.

So, as usual, all the creativity in divide and conquer algorithms is in the combine step.

Given the solutions to your sub problems, how do you somehow recover a solution to the original problem? The one that you actually care about.

So for closest pair, the question is going to be: given that you've computed the closest pair on the left half of the points and the closest pair on the right half of the points, how do you then quickly recover the closest pair for the whole point set? That's a tricky problem, and that's what we're going to spend most of our time on.

So let's make this divide and conquer approach for closest pair a little bit more precise, so let's now actually start spelling out our closest pair algorithm.

The input we're given follows the preprocessing step: recall that we invoke merge sort, and we get our two sorted copies of the point set, Px sorted by x coordinate and Py sorted by y coordinate.

So the first step is the division step.

So given that we have a copy of the points, Px, sorted by x coordinate, it's easy to identify the left half of the points, those with the n/2 smallest x coordinates, and the right half, those with the n/2 largest x coordinates.

We're going to call those Q and R respectively.

One thing I'm skipping over is the base case.

I'm not going to bother writing that down, so base case omitted, but it's what you would think it would be.

So basically, once you have a small number of points, say two points or three points, then you can just solve the problem in constant time by brute-force search.

You just look at all the pairs and you return the closest pair.

So think of there being at least four points in the input.

Now, in order to recurse, to call ClosestPair again on the left and right halves, we need sorted versions of Q and R, both by x coordinate and by y coordinate, so we're just going to form those by doing suitable linear scans through Px and Py.

And so one thing I encourage you to think through carefully or maybe even code up after the video is how would you form Qx, Qy, Rx and Ry given that you already have Px and Py.

And if you think about it, because Px and Py are already sorted just producing these sorted sublists takes linear time.

It's in some sense the opposite of the merge subroutine used in merge sort.

Here, we're sort of splitting rather than merging.

But again, this can be done in linear time, that's something you should think through carefully later.
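
One way that linear-time split could look in Python, under the same (x, y)-tuple assumption and the no-ties assumption from earlier; the helper name split_sorted_copies is hypothetical.

```python
def split_sorted_copies(Px, Py):
    """Form Qx, Qy, Rx, Ry in O(n) time from the already-sorted copies Px and Py."""
    mid = len(Px) // 2
    Qx, Rx = Px[:mid], Px[mid:]              # both halves are already sorted by x
    x_bar = Px[mid - 1][0]                   # largest x coordinate in the left half Q
    Qy = [p for p in Py if p[0] <= x_bar]    # filtering Py preserves its y-order
    Ry = [p for p in Py if p[0] > x_bar]
    return Qx, Qy, Rx, Ry
```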

So that's the division step; now we just conquer, meaning we recursively call ClosestPair on each of the two subproblems. So when we invoke ClosestPair on the left half of the points, on Q, we're going to get back what is indeed the closest pair of points amongst those in Q.

So we're going to call those p1 and q1. That is, among all pairs of points that both lie in Q, p1 and q1 minimize the distance between them.

Similarly, we're going to call p2 and q2 the result of the second recursive call; that is, p2 and q2 are, amongst all pairs of points that both lie in R, the pair that has the minimum Euclidean distance.

Now, conceptually, there's two cases.

There's a lucky case and there's an unlucky case.

In the original point set P, if we're lucky, the closest pair of points in all of P, actually, both of them lie in Q or both of them lie in R.

In this lucky case, we'd already be done: if the closest pair in the entire point set happens to both lie in Q, then the first recursive call is going to recover them and we just have them in our hands as p1, q1.

Similarly, if both of the closest pair of points in all of P lie on the right side, in R, then they get handed to us on a silver platter by the second recursive call, which operates only on R.

So in the unlucky case, the closest pair of points in P happens to be split.

That is, one of the points lies in the left half, in Q, and the other point lies in the right half, in R.

Notice, if the closest pair of points in all of P is split, with one point in Q and one point in R, neither recursive call is going to find it.

Okay? The pair of points is not passed to either of the two recursive calls, so there's no way it's going to be returned to us.

Okay? So we have not identified the closest pair after these two recursive calls, if the closest pair happens to be split.

This is exactly analogous to what happened when we were counting inversions.

The recursive call on the left half of the array counted the left inversions.

The recursive call on the right half of the array counted the right inversions.

But we still had to count the split inversions. So in this closest pair algorithm, we still need a special-purpose subroutine that computes the closest pair for the case in which it is split, in which there is one point in Q and one point in R.

So just like in counting inversions, I'm going to write down that subroutine and leave it unimplemented for now; we'll figure out how to implement it quickly in the rest of the lecture.

Now, if we have a correct implementation of ClosestSplitPair, which takes as input the original point set sorted by x and by y coordinate and returns the closest pair that is split, with one point in Q and one point in R, then we're done.

That's because the closest pair has to either be entirely on the left, entirely on the right, or it has to be split.

Steps two through four compute the closest pair in each of those categories, so those are the only possible candidates for the closest pair, and we just return the best of them.

So that's an argument for why, if we have a correct implementation of the ClosestSplitPair subroutine, that implies a correct implementation of ClosestPair.
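
Putting the pieces together, here is a hedged skeleton of the whole recursion in Python, reusing the earlier sketched helpers (brute_force_closest_pair and split_sorted_copies); closest_split_pair is deliberately left as a stub, exactly as in the lecture, since implementing it quickly is the subject of what follows.

```python
from math import hypot

def closest_pair(Px, Py):
    """Divide-and-conquer closest pair; Px, Py are the presorted copies from the preprocessing step."""
    if len(Px) <= 3:
        return brute_force_closest_pair(Px)       # base case: constant-time brute force
    Qx, Qy, Rx, Ry = split_sorted_copies(Px, Py)  # O(n) division step
    p1, q1 = closest_pair(Qx, Qy)                 # best pair lying entirely in Q
    p2, q2 = closest_pair(Rx, Ry)                 # best pair lying entirely in R
    p3, q3 = closest_split_pair(Px, Py)           # best split pair: one point in Q, one in R
    candidates = [(p1, q1), (p2, q2), (p3, q3)]
    return min(candidates,
               key=lambda c: hypot(c[0][0] - c[1][0], c[0][1] - c[1][1]))

def closest_split_pair(Px, Py):
    """Left unimplemented for now, as in the lecture; the rest of the lecture fills this in."""
    raise NotImplementedError
```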

Now, what about the running time? The running time of the ClosestPair algorithm is going to be determined in part by the running time of ClosestSplitPair.

So in the next quiz, I want you to think about what kind of running time we should be shooting for with a closest split pair subroutine.



O(n log n) Algorithm for Closest Pair I [Advanced - Optional] - Question 1

Suppose we can correctly implement the ClosestSplitPair subroutine in 𝑂(𝑛) time.

What will be the overall running time of the ClosestPair algorithm? (Choose the smallest upper bound that applies)

A. 𝑂(𝑛)

B. 𝑂(𝑛log(𝑛))

C. 𝑂(𝑛(log(𝑛))^2)

D. 𝑂(𝑛^2)
