Level 8, Unit 1, Part 2: On Controlling AI
00:01 I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.

00:25 I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."

01:09 Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

01:30 It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?

02:08 (Laughter)

02:12 The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.

02:32 So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I.J. Good called an "intelligence explosion," that the process could get away from us.

02:58 Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

03:23 Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

03:53 Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

04:11 Intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines.

04:59 It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.

05:13 The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence - I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

05:53 Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

06:11 Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken.

06:45 (Laughter)

06:47 Sorry, a chicken.

06:48 (Laughter)

06:49 There's no reason for me to make this talk more depressing than it needs to be.

06:53 (Laughter)

06:56 It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines