Output-sensitive algorithm

From Wikipedia, the free encyclopedia

In computer science, an output-sensitive algorithm is an algorithm whose running time depends on the size of the output, instead of, or in addition to, the size of the input. For certain problems where the output size varies widely, for example from linear in the size of the input to quadratic in the size of the input, analyses that take the output size explicitly into account can produce better runtime bounds that differentiate algorithms that would otherwise have identical asymptotic complexity.


Examples

Division by subtraction

A simple example of an output-sensitive algorithm is division by subtraction, a division algorithm that computes the quotient and remainder of two positive integers using only addition, subtraction, and comparisons:

from typing import Tuple

def divide(number: int, divisor: int) -> Tuple[int, int]:
    """Division by subtraction: return the quotient and remainder of number / divisor."""
    if divisor == 0:
        raise ZeroDivisionError
    if number < 1 or divisor < 1:
        raise ValueError(
            f"Positive integers only for "
            f"dividend ({number}) and divisor ({divisor})."
        )
    q = 0
    r = number
    # Each pass subtracts the divisor once, so the loop body runs exactly Q times,
    # where Q is the final quotient.
    while r >= divisor:
        q += 1
        r -= divisor
    return q, r

Example output:

>>> divide(10, 2)
(5, 0)
>>> divide(10, 3)
(3, 1)

This algorithm takes Θ(Q) time, and so can be fast in scenarios where the quotient Q is known to be small. In cases where Q is large, however, it is outperformed by more complex algorithms such as long division.
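
The dependence on Q is easy to observe empirically. The following timing sketch is illustrative only (it is not part of the algorithm itself) and assumes the divide function defined above is in scope; the first call has quotient 1 and returns almost immediately, while the second has quotient 1,000,000 and takes time roughly proportional to it:

import timeit

# Q = 1: the subtraction loop runs only once, even though the operands are large.
print(timeit.timeit(lambda: divide(1_000_000, 999_999), number=1))

# Q = 1,000,000: the subtraction loop runs a million times.
print(timeit.timeit(lambda: divide(1_000_000, 1), number=1))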

Computational geometry

Convex hull algorithms for finding the convex hull of a finite set of points in the plane require Ω(n log n) time for n points; even relatively simple algorithms like the Graham scan achieve this lower bound. If the convex hull uses all n points, this is the best we can do; however, for many practical sets of points, and in particular for random sets of points, the number of points h in the convex hull is typically much smaller than n. Consequently, output-sensitive algorithms such as the ultimate convex hull algorithm and Chan's algorithm which require only O(n log h) time are considerably faster for such point sets.
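
The O(n log h) algorithms cited above are fairly intricate; as a simpler illustration of an output-sensitive hull computation, the sketch below implements gift wrapping (the Jarvis march), whose O(nh) running time also scales with the number of hull vertices h rather than with n alone. It is not the ultimate convex hull algorithm or Chan's algorithm, and it assumes distinct input points in general position:

from typing import List, Tuple

Point = Tuple[float, float]

def cross(o: Point, a: Point, b: Point) -> float:
    """Cross product of vectors OA and OB; positive for a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def gift_wrapping(points: List[Point]) -> List[Point]:
    """Jarvis march: return the convex hull in counter-clockwise order in O(n*h) time."""
    if len(points) < 3:
        return list(points)
    hull: List[Point] = []
    start = min(points)          # leftmost point is always on the hull
    current = start
    while True:
        hull.append(current)
        # Sweep all points to find a candidate such that every other point
        # lies to the left of the edge current -> candidate.
        candidate = points[0] if points[0] != current else points[1]
        for p in points:
            if p != current and cross(current, candidate, p) < 0:
                candidate = p
        current = candidate
        if current == start:     # wrapped around: the hull is closed
            break
    return hull

Example output:

>>> gift_wrapping([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)])
[(0, 0), (1, 0), (1, 1), (0, 1)]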

Output-sensitive algorithms arise frequently in computational geometry applications and have been described for problems such as hidden surface removal[1] and resolving range filter conflicts in router tables.[2]

Frank Nielsen describes a general paradigm of output-sensitive algorithms known as grouping and querying and gives such an algorithm for computing cells of a Voronoi diagram.[3] Nielsen breaks these algorithms into two stages: estimating the output size, and then building data structures based on that estimate, which are queried to construct the final solution.

Generalizations

A more general kind of output-sensitive algorithm is the enumeration algorithm, which enumerates the set of solutions to a problem. In this context, performance is also measured in an output-sensitive way, in addition to finer measures, e.g., bounding the delay between any two successive solutions.
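
As a toy illustration of these measures (not taken from the article), the generator below enumerates all subsets of a list. The number of solutions is 2^n, so the total running time is inherently output-sensitive, while the work performed between two successive solutions, the delay, is bounded by O(n):

from typing import Iterator, List, TypeVar

T = TypeVar("T")

def enumerate_subsets(items: List[T]) -> Iterator[List[T]]:
    """Yield every subset of items, one solution at a time."""
    n = len(items)
    for mask in range(2 ** n):
        # Decoding one bitmask into a subset costs O(n), which bounds the
        # delay between consecutive yields; the full output has 2**n subsets.
        yield [items[i] for i in range(n) if mask >> i & 1]

Example output:

>>> list(enumerate_subsets(["a", "b"]))
[[], ['a'], ['b'], ['a', 'b']]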

References

  1. ^ Sharir, M.; Overmars, M. H. (1992). "A simple output-sensitive algorithm for hidden surface removal". ACM Transactions on Graphics. 11: 1–11. doi:10.1145/102377.112141. hdl:1874/16612.
  2. ^ Khaireel A. Mohamed and Christine Kupich. An O(n log n) Output-Sensitive Algorithm to Detect and Resolve Conflicts for 1D Range Filters in Router Tables. Institut für Informatik. August 5, 2006. ftp://ftp.informatik.uni-freiburg.de/documents/reports/report226/report00226.ps.gz
  3. ^ Frank Nielsen. Grouping and Querying: A Paradigm to Get Output-Sensitive Algorithms. Revised Papers from the Japanese Conference on Discrete and Computational Geometry, pp.250–257. 1998. ISBN 3-540-67181-1. http://www.sonycsl.co.jp/person/nielsen/PT/groupingquerying/n-grouping.ps