OpenAI's viral 53-page PDF: the plan to develop AGI by 2027 (English)
Revealing OpenAI's plan to create AGI by 2027

In this document I will be revealing information I have gathered regarding OpenAI's (delayed) plans to create human-level AGI by 2027. Not all of it will be easily verifiable, but hopefully there's enough evidence to convince you.

Summary: OpenAI started training a 125 trillion parameter multimodal model in August of 2022. The first stage was Arrakis, also called Q*. The model finished training in December of 2023, but the launch was canceled due to high inference cost. This is the original GPT-5, which was planned for release in 2025. Gobi (GPT-4.5) has been renamed to GPT-5 because the original GPT-5 has been canceled. The next stage of Q*, originally GPT-6 but since renamed to GPT-7 (originally for release in 2026), has been put on hold because of the recent lawsuit by Elon Musk. Q* 2025 (GPT-8) was planned to be released in 2027, achieving full AGI.

Q* 2023 = 48 IQ
Q* 2024 = 96 IQ (delayed)
Q* 2025 = 145 IQ (delayed)

Elon Musk caused the delay because of his lawsuit. This is why I'm revealing the information now: no further harm can be done.

I've seen many definitions of AGI (artificial general intelligence), but I will define AGI simply as an artificial intelligence that can do any intellectual task a smart human can. This is how most people define the term now.

2020 was the first time I was shocked by an AI system: GPT-3. GPT-3.5, an upgraded version of GPT-3, is the model behind ChatGPT. When ChatGPT was released, I felt as though the wider world was finally catching up to something I had been interacting with two years prior. I used GPT-3 extensively in 2020 and was shocked by its ability to reason. GPT-3, and its half-step successor GPT-3.5 (which powered the now-famous ChatGPT before it was upgraded to GPT-4 in March 2023), were a massive step towards AGI in a way that earlier models weren't. The thing to note is that earlier language models like GPT-2 (and basically all chatbots since ELIZA) had no real ability to respond coherently at all. So why was GPT-3 such a massive leap?

Parameter Count

"Deep learning" is a concept that essentially goes back to the beginning of AI research in the 1950s. The first neural network was created in the '50s, and modern neural networks are just "deeper": they contain more layers, they're much, much bigger, and they're trained on lots more data. Most of the major techniques used in AI today are rooted in basic 1950s research, combined with a few
minor engineering solutions like "backpropagation" and "transformer models". The overall point is that AI research hasn't fundamentally changed in 70 years. So there are only two real reasons for the recent explosion of AI capabilities: size and data. A growing number of people in the field are beginning to believe we've had the technical details of AGI solved for many decades, but merely didn't have enough computing power and data to build it until the 21st century. Obviously, 21st-century computers are vastly more powerful than 1950s computers. And of course, the internet is where all the data came from.

So, what is a parameter? You may already know, but to give a brief, digestible summary: it's analogous to a synapse in a biological brain, which is a connection between neurons. Each neuron in a biological brain has roughly 1,000 connections to other neurons, and digital neural networks are conceptually analogous to biological brains. So, how many synapses (or "parameters") are in a human brain? The most commonly cited figure for synapse count in the brain is roughly 100 trillion, which would mean each neuron (there are 100 billion in the human brain) has roughly 1,000 connections. If each neuron in a brain has 1,000 connections, a cat has roughly 250 billion synapses and a dog has 530 billion synapses. Synapse count generally seems to predict higher intelligence, with a few exceptions: for instance, elephants technically have a higher synapse count than humans yet display lower intelligence. The simplest explanation for larger synapse counts with lower intelligence is a smaller amount of quality data. From an evolutionary perspective, brains are "trained" on billions of years of epigenetic data, and human brains evolved from higher-quality socialization and communication data than elephants', leading to our superior ability to reason. Regardless, synapse count is definitely important.

Again, the explosion in AI capabilities since the early 2010s has been the result of far more computing power and far more data. GPT-2 had 1.5 billion connections, which is less than a mouse's brain (10 billion synapses). GPT-3 had 175 billion connections, which is getting somewhat close to a cat's brain. Isn't it intuitively obvious that an AI system the size of a cat's brain would be superior to an AI system smaller than a mouse's brain?

Predicting AI Performance

In 2020, after the release of the 175 billion parameter GPT-3, many speculated about the potential performance of a model 600 times larger, at 100 trillion parameters, because this parameter count would match the human brain's synapse count. There was no strong indication in 2020 that anyone was actively working on a model of this size, but it was interesting to speculate about. The big question is: is it possible to predict AI performance by parameter count? As it turns out, the answer is yes, as you'll see on the next page.

Source: https:/

The above is from Lanrian's LessWrong post. As Lanrian illustrated, extrapolations show that AI performance inexplicably seems to reach human level at the same time as human-level brain size is matched with parameter count. His count for the synapse number in the brain is roughly 200 trillion parameters, as opposed to the commonly cited 100 trillion figure, but the point still stands, and the performance at 100 trillion parameters is remarkably close to optimal. An important thing to note, by the way, is that although 100 trillion is slightly suboptimal in performance, there is an engineering technique OpenAI is using to bridge this gap. I'll explain it towards the very end of the document because it's crucial to what OpenAI is building. Lanrian's post is one of many similar posts online; it's an extrapolation of performance based on the jump between previous models. OpenAI certainly has much more detailed metrics, and they've come to the same conclusion as Lanrian, as I'll show later in this document. So, if AI performance is predictable based
on parameter count, and 100 trillion parameters is enough for human-level performance, when will a 100 trillion parameter AI model be released?

GPT-5 achieved proto-AGI in late 2023 with an IQ of 48

The first mention of a 100 trillion parameter model being developed by OpenAI was in the summer of 2021, mentioned offhand in a Wired interview by the CEO of Cerebras (Andrew Feldman), a company in which Sam Altman is a major investor. Sam Altman's response to Andrew Feldman came at an online meetup and Q&A called AC10, which took place in September 2021. It's crucial to note that Sam Altman ADMITS to their plans
for a 100 trillion parameter model. (Sources: https:/ ; the reddit posting itself is sourced from a LessWrong post, which was deleted at Sam Altman's request: https:/ )

Researcher Igor Baikov made the claim, only a few weeks later, that GPT-4 was being trained and would be released between December and February. Again, I will prove that Igor really did have accurate information and is a credible source. This will be important soon.

Gwern is a famous figure in the AI world; he is an AI researcher and blogger. He messaged Igor Baikov on Twitter (in September 2022) and this is the response he received. Important to remember
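The extrapolation idea in the "Predicting AI Performance" section, predicting performance from parameter count via a power-law fit of the kind Lanrian's post uses, can be sketched as a toy calculation. This is a minimal illustrative sketch: the loss values below are invented placeholders, not Lanrian's or OpenAI's actual metrics; only the GPT-2 and GPT-3 parameter counts come from the text above.

```python
import math

# Toy power-law fit: scaling laws have the form loss ~ a * N^(-b),
# which is a straight line in log-log space, so two points pin it down.
# Parameter counts are the GPT-2 / GPT-3 sizes cited in the text;
# the loss values are ILLUSTRATIVE placeholders, not real benchmarks.
points = [(1.5e9, 4.0),     # GPT-2 size, hypothetical eval loss
          (175e9, 2.5)]     # GPT-3 size, hypothetical eval loss

(x1, y1), (x2, y2) = [(math.log(n), math.log(l)) for n, l in points]
slope = (y2 - y1) / (x2 - x1)   # negative: loss falls as N grows
intercept = y1 - slope * x1     # log of the power law's coefficient

def predicted_loss(n_params: float) -> float:
    """Extrapolate the fitted power law to a new model size."""
    return math.exp(intercept + slope * math.log(n_params))

# Extrapolate to the 100-trillion-parameter size discussed above;
# the fit predicts a lower loss than either fitted point.
print(predicted_loss(100e12))
```

With only two points the line passes exactly through both, so this is curve-drawing rather than statistics; real scaling-law fits use many model sizes, and extrapolating three orders of magnitude beyond the data is precisely the kind of leap the document's argument rests on.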