CSI 460 - Artificial Intelligence
- Prompt #11: The Final Prompt: 12/2
Read This. Pay close attention to the Yudkowsky quote at the end.
It seems that the majority of researchers believe the singularity
will occur within 30 years. What ethical responsibilities do
researchers have when creating superintelligence? What can we do to
mitigate unethical behavior when it involves superintelligence? |
Write your response in at least 125 words.
- Wumpus Due: 12/5
- Wumpus World Assignment |
wumpus.py - Wumpus Python File |
wump.txt - a sample wumpus world text file
- Prompt #10: No writing required this Friday! 11/17/16 - Read and think about this abstract: Automated Criminality. We'll discuss in the morning!
- Prompt #9: 11/11/16: Imagine the year is 3016. In 3016, an AI has
been created that can accurately and immediately predict the results
of an election. Let's say that this AI has been correct for all
elections for the last 1000 years. Instead of waiting until 3:30 in
the morning for one party to concede (or, even worse, an entire month
for the electoral college to vote), our new election AI can
immediately and accurately predict the outcome. Assuming this
predictor (also third-party verifiable) had the same security as
voting, should it be used instead of actual voting? To perhaps push
this a little further, in other words, how important is it that a
human actually feels like they can (not that they will) contribute to
who is elected?
- Prompt #8: 11/4/16: As we race towards intelligence in more and
more domains, what is one 'rule' that any artificial intelligence
should obey? Make up such a rule and defend it. (No response
needed for last week.)
- Mancala Minimax To Depth 4! due Friday
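The depth-4 minimax search the assignment asks for can be sketched as below. This is a minimal illustration, not the assignment's required interface: the `CountingGame` toy game and the method names (`legal_moves`, `apply`, `evaluate`, `is_terminal`) are hypothetical stand-ins for whatever API your Mancala player uses.

```python
def minimax(state, depth, maximizing, game):
    """Depth-limited minimax: return (best_value, best_move) looking
    `depth` plies ahead. At depth 0 (or a terminal state) we fall back
    to the static evaluation function."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for move in game.legal_moves(state):
            value, _ = minimax(game.apply(state, move), depth - 1, False, game)
            if value > best:
                best, best_move = value, move
    else:
        best = float("inf")
        for move in game.legal_moves(state):
            value, _ = minimax(game.apply(state, move), depth - 1, True, game)
            if value < best:
                best, best_move = value, move
    return best, best_move


# Hypothetical toy game for illustration only: the state is an integer,
# each move adds 1 or 2, and the evaluation is simply the state value.
# The maximizer prefers +2; the minimizer prefers +1.
class CountingGame:
    def legal_moves(self, state):
        return [1, 2]

    def apply(self, state, move):
        return state + move

    def evaluate(self, state):
        return state

    def is_terminal(self, state):
        return False


value, move = minimax(0, 4, True, CountingGame())
# Four plies from 0: max +2, min +1, max +2, min +1 -> value 6, first move 2
print(value, move)
```

For a real Mancala player you would swap in the game's actual move generation and a board evaluation heuristic (e.g., store difference); the search skeleton stays the same.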
- Prompt #7: You are driving to work when suddenly 6 pedestrians
appear in the road before you. Should you swerve and kill
yourself or kill the 6 pedestrians before you? Self-driving cars
are a reality. A self-driving car is driving you to work when
suddenly 6 pedestrians appear in the road before the car. The car
must make a decision to save the passenger or kill the
pedestrians. What choice should it make? (No response needed for
last week.)
- Mancala Player
- Mancala Rules
- Prompt #6: Suppose that the judges accept the machines to assist
with recidivism prediction. Also suppose that these machines continue to take
data based on recidivism and continue to learn. Lastly, suppose these
machines are all networked and allowed to learn from all data (and
weight local data a little more heavily as it means more). Judges only
intervene when there is data that is not processed by the
machine. Lastly, suppose this network of connected, updating machines
continually gets better at predicting recidivism. Is it moral or ethical NOT to
use such a network? If your answer deviates from Prompt #5, explain
why. If it does not, explain why not.
- Week 7: No prompt - Friday off
- Prompt #5: Recidivism is a relapse into criminal behavior after a
criminal has been caught, sanctioned, and released. With such a huge
prison population in the US, there is a lot of data on this topic. The
data suggests that, in general, judges reliably identify high-risk and
low-risk offenders, but that judging medium-risk offenders is
challenging (as one might expect). In recent neural network
studies, results have shown that machines can predict recidivism better
than humans. This means that we can release more prisoners with the
same relapse rate or release the same number of prisoners with a lower
relapse rate. Given this very real fact, should machines be used to
decide when to keep a prisoner in jail? Is it moral or ethical to do
so? Is it moral or ethical NOT to do so?
A great talk on this is located at:
Sendhil Recidivism @ Cornell (250 words due 10/7; response from last prompt also due)
- Prompt #N/A: No Prompt/Discussion This week! Hack away at Homework #1.
- Humans and Toast Homework #1
- Prompt #4: Being aware of one's self may be as simple as
understanding that you are hearing yourself. Machines have been built
which can use such tasks in problem solving (see
Dumbing
Pill.) Thus, in a sense, certain levels of consciousness and
self-awareness have been implemented in machines for quite a while
now. The automated vacuum cleaner is a simple reflex agent, but it
will clean away and power itself up. Imagine a fleet of networked
cleaners responsible for cleaning a mansion. If one gets damaged, the
others will pick up the slack and may even choose to (or not to)
repair the damaged machine or take it to a repair shop. So, the
cleaners perceive others around them and can adjust to accomplish a
task. But, they can still largely be simple reflex agents. What are
the key ingredients for consciousness at the adult human level (beyond
self-awareness)? Does the ability to perceive time matter? - 250 Words
Due 9/23 (plus update from #3)
- Prompt #3: If a machine can demonstrate mastery of material so well that a human cannot tell that the machine does not 'know' the material, then does the machine, in fact, know the material? Explain. You may wish to review: (The Chinese Room Argument) - 250 words Due 9/16 (plus 125-word response from #2)
- Prompt #2: Can/should a machine think? - 250 words Due 9/9
- Prompt #1: What is Intelligence? - 250 words Due 9/2
- Syllabus is here.