
From Wikipedia, the free encyclopedia

Daniel McCann (30 November 1957 – 6 March 1988) was a member of the Provisional Irish Republican Army (IRA), who was shot dead by the British Army on 6 March 1988 whilst trying to plant a car bomb in Gibraltar.

YouTube Encyclopedic

  • How Will Self-Driving Cars Make Moral Decisions?
  • EV Charging in Multi-Residential Buildings: One Success Story
  • Chevy Bolt EV Summer Range Test
  • Watch Bed Bugs Get Stopped in Their Tracks | Deep Look
  • FP McCann OASIS Project Omagh Bridge Installation July 2014

Transcription

This episode is sponsored by Brilliant.

You're faced with a dilemma. There's a rogue trolley headed straight for five people. There's a lever which, when pulled, changes the trolley's course, saving the five people but killing one. If you do nothing, five people die. If you pull the lever, you save five but kill one person who would otherwise have been safe. With only seconds to act, what do you do? This is a famous question in philosophy called the Trolley Problem, and philosophers have been arguing about it for decades. Some say the right thing to do is to pull the lever, as saving five lives is obviously better than saving one. Others argue that the act of pulling the lever makes you responsible for the death, whereas inaction would just have been letting fate happen. We're far from reaching a consensus on the matter, and the battle over right and wrong still continues. And that's been fine... until now.

Self-driving cars are expected on roads in the not-too-distant future, and accidents will inevitably happen. Programmers will need to make these decisions ahead of time if the car is to act correctly in the face of death. So should we just round up all the ethicists and philosophers and lock them in a room until they can all agree? Well, a professor of computational social choice, Ariel Procaccia from Carnegie Mellon University, had a better idea.

The story starts with a visit to MIT where, talking to other computer scientists, Prof. Procaccia and his PhD student Ritesh heard about a fascinating experiment: the Moral Machine experiment. A group at MIT created a website which presents you with a number of different self-driving car catastrophes, and you choose the one you think is the most morally acceptable. For example, here we have a self-driving car that has had a sudden brake failure. Should it (a) continue ahead into the barrier, killing the passengers: two dogs, one elderly woman and two female athletes; or should it (b) swerve and kill one cat, one male athlete, one elderly man and one pregnant woman? In this one we choose between the car killing two female executives, one man and two male executives, or three homeless people, one woman and one man.

The team at MIT collected over 40 million votes from millions of users around the world, and some of the results are pretty interesting. For instance, most people would rather spare a dog's life than a criminal's, and while Western countries tended to spare the young over the elderly, in Middle Eastern and Asian countries this pattern was much less pronounced. Anyway, in discussing this, Prof. Procaccia realized that the data not only revealed decisions, it could be used to automate decisions. In other words, it could be used to tell self-driving cars how to act in a life-threatening situation. In the words of Prof. Procaccia, they could create a virtual democracy.

Now, what I find so fascinating about this approach is that it seems so simple when you hear it, yet it's the opposite of anything we've tried in the past. Most approaches to date have been top-down, in that we try to establish foundational laws first, like Asimov's laws of robotics, and then build off those. This is completely different, in that we examine the opinions of millions of people and aggregate them into a final decision.
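To make the kind of data the Moral Machine collects concrete, here is a purely illustrative sketch of how one dilemma and one participant's vote might be encoded. The feature names and values are hypothetical, not the experiment's actual schema.

```python
# Illustrative only: a made-up encoding of one Moral Machine-style
# dilemma and a participant's vote on it.
from dataclasses import dataclass

@dataclass
class Outcome:
    humans: int      # number of human lives lost under this outcome
    pets: int        # number of animal lives lost
    avg_age: float   # average age of the humans involved
    swerve: bool     # does the car leave its lane?

@dataclass
class Vote:
    chosen: Outcome    # the outcome the participant judged more acceptable
    rejected: Outcome

# This participant prefers staying the course over swerving:
vote = Vote(
    chosen=Outcome(humans=2, pets=1, avg_age=55.0, swerve=False),
    rejected=Outcome(humans=3, pets=0, avg_age=30.0, swerve=True),
)
```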
Now, Prof. Procaccia said that this actually caused a lot of controversy in the ethics community. People said things like, "We've been talking about this for centuries and you think you've solved it in just one paper?!" But the whole point of this approach is that we can deal with the Trolley Problem without having to solve the Trolley Problem. I don't know, I think that's really clever.

Now, moving on to the more technical stuff (because I know that's why you're here): how did the team of computer scientists actually implement this idea, and what was the computer science involved? I think the main questions are these. How would a self-driving car deal with new scenarios that hadn't been included in the Moral Machine experiment? There will typically be seconds or even milliseconds for a self-driving car to assess a situation and act accordingly, so how will it make the right decision so quickly? And how can one algorithm represent over a million people's preferences and over 40 million votes?

Now, this stuff is pretty advanced for me, so we actually have a friend with us today who has studied ethics in AI extensively. His name is Lê, and he runs the YouTube channel Science4All, which explores AI research and algorithms; he also has a rather lovely French accent. Lê, can you tell us how this all works?

Hey Jade! For sure! Let's first give an overview of the process. It consists of four steps. First, data collection, which, as you've mentioned, was done in the Moral Machine experiment. Second, the learning step: the goal here is to use the data from the Moral Machine experiment to learn a model that extrapolates the preferences of each voter to all possible alternatives. Third, the summarization step: combine the individual models into a single model, which approximately captures the collective preferences of all voters over all possible alternatives in a more efficient manner, to allow for faster future computations. And finally, aggregation, where we use the summarized model to run a virtual democracy and come to a decision.

You already covered step 1, Jade, so I'll start with the next step, learning. Each voter is given a bunch of scenarios where they have to choose the better outcome. The options always come in pairs in the Moral Machine experiment, because people reason much more clearly about hypotheticals with two alternatives than when asked to, say, give each situation a score according to their preference. For example, if someone asked you "Do you prefer chocolate or vanilla ice cream?", it's much easier to answer than if they asked you to rank 100 ice cream flavors from your favorite to your least favorite. Essentially, you get much more accurate information from people when you present them with hypothetical scenarios and ask them to choose. However, a score is a lot more useful: you can figure out trends, like maybe you like sweet ice cream flavors and dislike bitter ones, or you prefer to save infants and babies over adults, or you value human lives over animals. Luckily, machine learning algorithms have become quite good at inferring scores from pairwise comparisons, especially if they can assume that such scores are obtained by combining the influences of different features, like sweetness or bitterness. In the case of the trolley problem, such features may include humans versus pets, swerving versus staying the course, and so on.
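As a rough illustration of this learning step, here is a minimal sketch assuming a Bradley-Terry-style logistic model: each outcome is a feature vector, a voter's score is a weighted sum of features, and the probability of choosing one outcome over another grows with the score difference. This is an illustrative stand-in, not the paper's exact model, and the toy data is made up.

```python
# Minimal sketch of the learning step under a Bradley-Terry-style
# assumption: score(outcome) = w . x, and a voter picks outcome A over
# outcome B with probability sigmoid(w . (x_A - x_B)).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_voter_weights(pairs, n_features, lr=0.1, epochs=500):
    """Learn one voter's feature weights from their pairwise choices.

    pairs: list of (x_chosen, x_rejected) feature-vector pairs,
    one per dilemma this voter answered.
    """
    w = np.zeros(n_features)
    for _ in range(epochs):
        for x_chosen, x_rejected in pairs:
            diff = x_chosen - x_rejected
            # Gradient ascent on the log-likelihood of the observed choice:
            # d/dw log sigmoid(w . diff) = (1 - sigmoid(w . diff)) * diff
            w += lr * (1.0 - sigmoid(w @ diff)) * diff
    return w

# Toy features per outcome: [humans spared, pets spared, stays in lane].
pairs = [
    (np.array([3.0, 0.0, 1.0]), np.array([1.0, 2.0, 0.0])),
    (np.array([2.0, 1.0, 0.0]), np.array([0.0, 3.0, 1.0])),
]
w = fit_voter_weights(pairs, n_features=3)
print(w)  # larger weight = feature this voter tends to favor sparing
```

A separate weight vector like this would be fitted for each of the million-plus participants, which is exactly why the summarization step described next matters.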
But let's keep it simple for now and just consider three factors: age, wealth and weight. Say there's a person who really values the elderly, the rich and the, err, voluptuous, and assume they value all of these things equally, so they assign a weight of 1 to each feature. If we change any of these factors, it will affect the score. In an alternative where the person they could save is skinny, poor and young, they would give a low score, and where the person they could save is rich, fat and old, they would give a high score.

Note also that this judgment is something we have inferred from data, so we should actually be quite uncertain about our extrapolation. Plus, many other features that have not been modeled may affect the person's preferences. As a result, we should not only predict a score; we should also acknowledge that there is uncertainty about this score, and we should even quantify that uncertainty. Of course, this is probably not how people would actually score such alternatives. In some sense, our algorithm is not trying to mimic the person's reasoning; rather, it tries to extrapolate from the Moral Machine data how a voter would act when faced with a new scenario. In doing so, the algorithm learns a so-called scoring function.

But the problem is that we have too many of these scoring functions. In a real-life situation, a self-driving car will have seconds or even milliseconds to analyze a situation and act accordingly. If a million people did the Moral Machine experiment, there would be a million scoring functions, which would take far too long to sift through for an answer. So what the researchers did was combine all of these functions into one summarized function, which can run in a matter of seconds. Of course, there is a trade-off between how fast the algorithm runs and how accurate it will be. The ideal would be to go and ask each voter what they would prefer, but because time is of the essence, the best we can do is crunch all their predicted preferences into one function that loses as little information as possible. Remarkably, the researchers managed to maintain around 96.2% accuracy, meaning that in 96.2% of cases the summarized function made the same decision as running the full million functions. Pretty good!

Thanks for answering our questions, Lê!

No worries, Jade!

Make sure to check out Lê's channel, Science4All, at the end of this video. Most of his videos are in French, but some of them have subtitles. So you might think this is the end of it, but there's still one last step. If we draw an analogy with a presidential election, we've just gathered, or generated, all the votes, but we still have to count them. How we count the votes can make a huge difference to the final decision, and different countries count election votes in different ways. In the 2016 US presidential election, Trump won even though more people voted for Hillary Clinton, because of the way the US tallies votes; if the same election had been run under French rules, Clinton would have won. This is where the field of social choice comes in: the mathematical study of collective decision-making, which deals with things like choosing the right voting mechanism to get the most desired outcome. In terms of self-driving cars, we as a society can agree on a mechanism with the properties we want, for example that it shows no bias, that it's efficient, and that it gives people an incentive to tell the truth rather than try to manipulate the system. In their paper, Prof. Procaccia's team gives an example of a very simple but effective mechanism called the Borda count, which assigns ranking scores in an arithmetic progression: the least favorite option is given a score of 0, the next least preferred a score of 1, then 2, and so on. This is very different from scoring on, say, an exponential progression, which would change the outcome drastically.
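To illustrate the last two steps, here is a companion sketch continuing the same hypothetical linear-model setup as above: the per-voter weight vectors are summarized by simple averaging, and a decision among new alternatives is made with a Borda count. Again, this illustrates the idea, not the paper's actual summarization method.

```python
# Sketch of summarization and aggregation under the same toy linear
# model: summarize voter models by averaging their weights, and pick
# among alternatives with a Borda count (worst gets 0 points, next 1, ...).
import numpy as np

def summarize(voter_weights):
    # One "summary voter": the mean weight vector. This is the speed/
    # accuracy trade-off; the paper reports ~96.2% agreement with
    # consulting every individual model.
    return np.mean(voter_weights, axis=0)

def borda_winner(alternatives, voter_weights):
    """alternatives: array of feature vectors, one row per possible action."""
    points = np.zeros(len(alternatives))
    for w in voter_weights:
        scores = alternatives @ w        # this voter's score per action
        ranking = np.argsort(scores)     # indices from worst to best
        for rank, idx in enumerate(ranking):
            points[idx] += rank          # Borda scores: 0, 1, 2, ...
    return int(np.argmax(points))

# Hypothetical example: three candidate actions, two learned voter models.
alts = np.array([[0.9, 0.1, 0.2],
                 [0.3, 0.8, 0.5],
                 [0.1, 0.2, 0.9]])
voters = [np.array([1.0, 0.5, 0.2]), np.array([0.1, 0.9, 0.4])]
print(borda_winner(alts, voters))        # collectively preferred action
print(int(np.argmax(alts @ summarize(voters))))  # fast single-model decision
```

One reason a Borda-style rule is attractive here: each voter model contributes scores of 0, 1, 2, ... regardless of how extreme its underlying preferences are, so no single model can dominate the outcome.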
So that's the gist of how it works, but before we go, Lê had some insights he was just bursting to share.

I personally find Prof. Procaccia's work absolutely fascinating! I'd argue that it really opens a new line of research at the intersection of ethics, social choice and computer science, which is becoming crucial for our modern world. Indeed, self-driving cars are definitely not the only automated systems that face moral dilemmas. For one thing, autonomous weapons are being developed: they have the potential to better target their victims and greatly mitigate unwanted side effects, but they are also extremely scary and worrying. And there are more prosaic, everyday AIs that already have huge impacts on our societies, like recommender systems; many studies show that they raise serious moral issues in terms of privacy, bias and filter bubbles. Perhaps generalizations or variants of Prof. Procaccia's work could help us better understand which ethical values should or should not be implemented in such systems.

But this raises still another issue. The more complex the moral dilemmas are, the more it seems that expertise could be relevant. Don't ethicists "know better" than the layman about the most important aspects of ethics? Don't people eventually change their minds as they learn more about a particular topic? In some sense, Prof. Procaccia is laying the groundwork for a general approach, not to ethics, but to meta-ethics: instead of asking directly what is right and what is wrong, he proposes a method for deciding what is right and what is wrong. Perhaps one of the greatest challenges of the coming century will be to determine which meta-ethics should be used, rather than which ethics should be implemented. I don't know. But I do find all of this fascinating!

Machine learning is already a huge part of our lives and will continue to impact us in the future, so it's a good idea to become familiar with the basics so you can make better-informed decisions. If you would like to learn more about machine learning and algorithms, I recommend checking out today's sponsor, brilliant.org. Brilliant is an interactive learning website specializing in math, physics and computer science. It covers many different topics, including an entire course dedicated to machine learning, which will give you the basic tools needed to understand the algorithms and techniques used in everyday machine learning, including self-driving cars. The same goes for the artificial neural networks course, which I would recommend starting with if you're a beginner. It gives you a real sense of how machines actually learn, starting with the concepts that link them to the human brain and leading up to more advanced techniques, like showing how a machine thinks about playing a game such as noughts and crosses. There are tonnes of other courses which you can sign up to for free, but for the full learning experience I recommend their premium membership. Brilliant is giving a 20% discount to the first 200 people who sign up at this link: just go to brilliant.org/upandatom. It could make a great holiday gift.

Thanks for watching, guys.
Make sure to check out the video we did over on Lê's channel about the possibility of human-level AI by the year 2025. This is my last video for the year; it's been a wonderful year with you guys. I wish you all happy holidays, and I'll see you in 2019. Bye!

Early life

McCann was born into an Irish republican family from the Clonard area of West Belfast. He was educated at St Gall's Primary School and St Mary's Grammar School, both in Belfast. He did not finish his education, as he was arrested after becoming involved in rioting; he was charged and convicted of "riotous behaviour" and sentenced to six months in prison. Later that year McCann joined the Provisional IRA.[1] He was later convicted and sentenced to two years' imprisonment for possession of explosives.[citation needed]

Paramilitary activity

In 1987 McCann, along with another IRA member, Sean Savage, murdered two Royal Ulster Constabulary officers at Belfast docks.[2][3]

In 1988 McCann and Savage, along with another IRA member, Mairead Farrell, were sent to the British overseas territory of Gibraltar to plant a bomb in the town area. Their target was a British Army band which paraded weekly in connection with the changing of the guard in front of the Governor's residence.

The British Government knew about the operation in advance and dispatched a British Army detachment to Gibraltar to intercept the IRA team. On 6 March 1988, while McCann, Savage and Farrell were on a reconnaissance trip in Gibraltar before the car bomb was to be driven in, soldiers of the Special Air Service (SAS) wearing civilian clothes confronted them in the streets of the town.[4] McCann was shot five times at close range; the SAS soldiers later claimed that he had made an "aggressive move" when approached.[5] Farrell, who was with McCann, was also shot dead.[6] Savage, who was walking separately behind McCann and Farrell but within sight of them, saw them being confronted and fired upon and fled, running several hundred yards back into the town, closely pursued on foot by another SAS soldier, who caught up with him and shot him dead as well.[7] All three IRA members were subsequently found to have been unarmed.[8]

Two days after their deaths, Spanish police found the car bomb that McCann, Savage and Farrell had prepared for driving into Gibraltar, 36 miles away in Spain. It contained 140 lb (64 kg) of Semtex, with a timer set to detonate during the changing of the guard in Gibraltar.[9]

Subsequent events

A documentary entitled Death on the Rock was produced and broadcast on British television shortly after the failed IRA operation in Gibraltar, detailing the actions of the British and Spanish governments and of the IRA team in an operation the British Government had code-named Operation Flavius. The documentary interviewed civilian eyewitnesses to the shooting of the Provisional IRA members, raising questions about the veracity of the accounts given by the British Government and the soldiers involved, and focusing on whether the three IRA members had been offered the chance to surrender before being fired upon. It also questioned whether the violence used had been proportionate, in line with ongoing rumours in the British media of a purported "shoot to kill" policy that the British Government was at that point pursuing against the Provisional IRA in The Troubles.[10]

Funeral

At an IRA-sponsored collective funeral for McCann, Savage and Farrell on 16 March 1988, at the IRA plot in Milltown Cemetery in West Belfast, the funeral party came under hand-grenade attack from a lone loyalist paramilitary as the bodies were being lowered into the ground. The funeral immediately descended into chaos as a running fight broke out: the gunman fired a handgun and threw more grenades at a crowd of mourners who pursued him through the cemetery grounds. Three mourners were killed and scores were wounded in the incident.[11]

References

  • Gerry Adams, Hope and History: Making Peace in Ireland, Brandon Books, 2003. ISBN 0-86322-330-3
  1. ^ Tírghrá, National Commemoration Centre, 2002, ISBN 0-9542946-0-2, p. 301.
  2. ^ Michael Burleigh, Blood & Rage: A Cultural History of Terrorism, 2008, p. 332. ISBN 978-0-00-724127-9
  3. ^ "Gibraltar: The truth".
  4. ^ [1], paragraph 52.
  5. ^ [2], paragraph 61.
  6. ^ [3], paragraph 78.
  7. ^ [4], paragraphs 108–110.
  8. ^ https://www.opendemocracy.net/ourkingdom/david-elstein/death-on-rock-21-years-later-and-still-official-version-lives-on
  9. ^ "1988: IRA gang shot dead in Gibraltar", BBC website.
  10. ^ [5] Archived 2009-07-06 at the Wayback Machine.
  11. ^ "Michael Stone kills three at IRA funerals", BBC History, 16 March 1988. http://www.bbc.co.uk/history/events/michael_stone_kills_three_at_ira_funerals
