ML-Summit 2023 Global Machine Learning Summit
Language and Knowledge in Large Language Models
Zhang Qi, Fudan University

Outline
1. Multilingual alignment in Multilingual BERT
2. Multilingual alignment in large language models
3. Separating language and knowledge in large language models

01 Multilingual Alignment in Multilingual BERT

Xu et al. "Cross-Linguistic Syntactic Difference in Multilingual BERT: How Good is It and How Does It Affect Transfer?" EMNLP 2022:
[Figure: accuracy of recovering various syntactic relations at different mBERT layers.]
[Figure: visualization of the representations of different syntactic relations at mBERT layer 7.]
After fine-tuning on a downstream task, the clustered alignment becomes even more pronounced.

Does a similar phenomenon exist in large language models?

02 Multilingual Alignment in Large Language Models

Xu et al. "Are Structural Concepts Universal in Transformer Language Models? Towards Interpretable Cross-Lingual Generalization." EMNLP 2023:
Languages are strongly aligned at the level of syntactic relations.
On part-of-speech tagging, cross-lingual training reaches very high accuracy.
Multilingual pretraining alone already aligns multilingual semantics inside the model.

Zhao et al. "LLaMA Beyond English: An Empirical Study on Language Capability Transfer." AAAI 2024 (submitted). Models compared:
- LLaMA (Touvron et al. 2023a)
- LLaMA2 (Touvron et al. 2023b)
- Chinese LLaMA (Cui, Yang, and Yao 2023b): LLaMA with an extended Chinese vocabulary, further pretrained on 30B Chinese tokens (120 GB)
- Chinese LLaMA2 (Cui, Yang, and Yao 2023a): LLaMA2 with an extended Chinese vocabulary, further pretrained on 30B Chinese tokens
- Open Chinese LLaMA (OpenLMLab 2023): LLaMA with an extended Chinese vocabulary, further pretrained on 100B mixed Chinese-English tokens
- LLaMA + 10K / + 100K / + 1M: LLaMA without vocabulary extension, further pretrained directly on Chinese corpora
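The "vocabulary extension" step used by the Chinese-LLaMA variants above can be sketched as follows. This is a toy numpy illustration with placeholder sizes, not any of the models' actual code: rows for the new Chinese tokens are appended to the embedding matrix before continued pretraining.

```python
import numpy as np

# Toy sketch of vocabulary extension (illustrative sizes, not the real models).
rng = np.random.default_rng(0)
old_vocab, added_tokens, dim = 32_000, 8_000, 64

# Pretend these are the pretrained token embeddings.
embeddings = rng.normal(0.0, 0.02, size=(old_vocab, dim))

# Freshly initialized rows for the new Chinese tokens; the pretrained rows
# are kept unchanged. The new rows carry no pretrained information, which
# is one reason the extended models need heavy continued pretraining.
new_rows = rng.normal(0.0, 0.02, size=(added_tokens, dim))
extended = np.vstack([embeddings, new_rows])

print(extended.shape)  # (40000, 64)
```

In a real setup the same resize must also be applied to the output (LM-head) matrix so that vocabulary and logits stay in sync.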
Findings:
- Token (vocabulary) extension has a large impact: after extension the original information is lost, and a large amount of training is needed to recover it.
- Once the SFT data is scaled up to 950K examples, continued pretraining at the 1M scale brings no particular benefit.
- Continued pretraining on Chinese does not improve the model at the knowledge level.
- Other low-resource languages behave very similarly.
- A very pronounced code-switching phenomenon appears during training.

Related observations from large-model training:
- "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback", Anthropic, 2023: "train for one epoch"; for most LLMs, gains become marginal after one epoch.
- "Training language models to follow instructions with human feedback", OpenAI, 2022: "Similarly to Wu et al. (2021), we find that our SFT models overfit on validation loss after 1 epoch."

"If it is intelligent, you cannot open it up; if you can open it up, it is not intelligent; if you can open it up and it is intelligent, you cannot see through it." (Han Xianpei, Institute of Software, Chinese Academy of Sciences)

Are these phenomena reflected in the parameters of large language models, and if so, how?

03 Language and Knowledge in Large Language Models

Note: these are very preliminary results; many of the conclusions and experiments are not yet fully reliable and are still being validated.

Large-model parameters record knowledge, and there is a clearly identifiable language core region.

How the language core and non-core regions are determined:
1. Take six languages (Arabic, Korean, Spanish, Chinese, Russian, Vietnamese), 100K texts per language.
2. Further pretrain the model on each language's data separately.
3. Accumulate the parameter changes before and after training across the six languages; the 1-5% of weights with the smallest accumulated change form the language core region.
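The selection step above can be sketched as follows. This is our reading of the recipe, not the authors' code; the weight vector and per-language updates are random stand-ins for a real model's parameters.

```python
import numpy as np

# Minimal sketch: accumulate per-parameter change magnitudes across six
# per-language continued-pretraining runs, then take the least-changed 3%
# of entries as the candidate "language core region" (cf. the 1-5% range).
rng = np.random.default_rng(0)
n_params = 10_000                      # stand-in for a flattened weight tensor
base = rng.normal(size=n_params)

total_change = np.zeros(n_params)
for _ in range(6):                     # one continued-pretraining run per language
    trained = base + rng.normal(scale=0.01, size=n_params)  # fake updated weights
    total_change += np.abs(trained - base) / (np.abs(base) + 1e-8)

k = int(0.03 * n_params)               # bottom 3%
core_idx = np.argsort(total_change)[:k]
print(core_idx.shape)  # (300,)
```

On a real LLaMA checkpoint the same accumulation would run per weight tensor (q_proj, k_proj, mlp projections, layernorms, etc.) rather than on one flat vector.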
Union across the six languages of the points whose change exceeds 1% / 3% / 5%:

| Parameter name | >1% union | >3% union | >5% union |
| model.layers.0.self_attn.q_proj.weight | 99.952% | 99.040% | 96.757% |
| model.layers.0.self_attn.k_proj.weight | 99.975% | 99.145% | 96.655% |
| model.layers.0.self_attn.v_proj.weight | 99.998% | 99.668% | 98.024% |
| model.layers.0.self_attn.o_proj.weight | 99.999% | 99.909% | 99.434% |
| model.layers.0.mlp.gate_proj.weight | 99.996% | 99.328% | 95.437% |
| model.layers.0.mlp.down_proj.weight | 99.998% | 99.301% | 95.230% |
| model.layers.0.mlp.up_proj.weight | 99.999% | 99.391% | 95.651% |
| model.layers.0.input_layernorm.weight | 99.976% | 99.487% | 98.877% |
| model.layers.0.post_attention_layernorm.weight | 99.829% | 89.453% | 54.517% |
| model.layers.1.self_attn.q_proj.weight | 99.855% | 95.745% | 88.410% |
| model.layers.1.self_attn.k_proj.weight | 99.847% | 95.608% | 87.953% |
| model.layers.1.self_attn.v_proj.weight | 99.999% | 99.811% | 98.604% |
| model.layers.1.self_attn.o_proj.weight | 99.999% | 99.936% | 99.456% |
| model.layers.1.mlp.gate_proj.weight | 99.994% | 99.145% | 94.551% |
| model.layers.1.mlp.down_proj.weight | 99.998% | 99.411% | 95.738% |
| model.layers.1.mlp.up_proj.weight | 99.997% | 99.368% | 95.518% |
| model.layers.1.input_layernorm.weight | 99.316% | 80.908% | 50.195% |
| model.layers.1.post_attention_layernorm.weight | 96.729% | 25.391% | 2.539% |

A very small number of parameters change very little under continued pretraining in every language.

Random perturbation of the core vs. non-core parameters (PPL per language):

| Language | LLaMA2-7B-base | Top 0.03 | Bottom 0.03 | Random 0.03 |
| Arabic  | 6.732  | 10.890 | 132988.312 | 8.815  |
| Chinese | 8.554  | 15.018 | 200279.453 | 10.909 |
| Czech   | 19.622 | 37.882 | 48612.707  | 28.025 |
| Danish  | 8.412  | 16.151 | 72907.688  | 11.224 |
| Dutch   | 16.863 | 33.976 | 53034.961  | 23.371 |
| English | 8.386  | 9.060  | 25308.410  | 8.673  |
| Finnish | 7.535  | 17.228 | 57291.129  | 10.800 |
| French  | 13.485 | 22.260 | 40576.059  | 16.776 |
| German  | 18.195 | 30.792 | 73363.977  | 24.122 |
| Greek   | 3.843  | 6.028  | 448650.156 | 5.156  |

Perturbing the core region makes PPL explode in all 30 tested languages.

| Language | LLaMA2-13B-base | Top 0.03 | Bottom 0.03 | Random 0.03 |
| Arabic  | 6.265  | 8.296  | 66492.734  | 7.836  |
| Chinese | 7.832  | 8.951  | 136295.359 | 8.757  |
| Czech   | 17.367 | 23.863 | 20363.225  | 22.303 |
| Danish  | 7.414  | 8.507  | 18157.621  | 8.627  |
| Dutch   | 15.534 | 20.711 | 20631.898  | 19.647 |
| English | 7.851  | 8.501  | 8503.634   | 8.536  |
| Finnish | 6.802  | 8.291  | 15942.838  | 8.366  |
| French  | 12.361 | 15.653 | 17057.102  | 15.247 |
| German  | 16.678 | 21.223 | 29565.832  | 20.850 |
| Greek   | 3.609  | 4.337  | 162718.406 | 4.393  |

LLaMA2-7B and LLaMA2-13B show exactly the same pattern.
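The perturbation protocol behind the tables above can be sketched as follows. This is our reconstruction on a toy weight vector, not the authors' code: rank entries by accumulated change, then perturb the top 3%, bottom 3% (the core region), or a random 3%, and re-measure PPL afterwards (the PPL evaluation itself is elided here).

```python
import numpy as np

# Sketch of the top/bottom/random-0.03 perturbation setup (toy data).
rng = np.random.default_rng(0)
weights = rng.normal(size=10_000)
change = rng.random(10_000)            # stand-in for accumulated change scores

k = int(0.03 * weights.size)
order = np.argsort(change)
groups = {
    "bottom0.03": order[:k],           # least-changed entries: the core region
    "top0.03": order[-k:],
    "random0.03": rng.choice(weights.size, size=k, replace=False),
}

for name, idx in groups.items():
    perturbed = weights.copy()
    # Add noise of roughly the selected entries' own scale.
    perturbed[idx] += rng.normal(scale=perturbed[idx].std(), size=k)
    # evaluate_ppl(perturbed) would go here; per the tables above, only
    # the bottom-0.03 (core-region) perturbation makes PPL explode.
```

Only 3% of entries are touched in each setting, which is what makes the bottom-0.03 PPL explosion striking: the smallest-moving 3% of weights matter far more than a random 3%.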
Random-perturbation recovery experiment. Model: LLaMA2-7B; the least-changed 1% of weights ("bottom-diff 0.01") is randomly re-initialized, then the model is further trained on Chinese Zhihu sentences, with that region either frozen or trainable (1W = 10K, 5W = 50K).

Chinese test set (WeChat official-account text, 10K sentences). PPL after re-initialization, before any training: 73408.203.

| Training sentences | Region frozen | Region trainable |
| 2K  | 4424.779 | 6.256 |
| 5K  | 359.694  | 5.922 |
| 10K | 225.591  | 5.972 |
| 20K | 22.904   | 6.15  |
| 50K | 7.151    | 5.698 |

English test set (Falcon, 10K sentences). PPL after re-initialization, before any training: 31759.947.

| Training sentences | Region frozen | Region trainable |
| 2K  | 28371.539  | 13.884 |
| 5K  | 441158.719 | 14.793 |
| 10K | 1979024    | 15.604 |
| 20K | 9859.426   | 16.39  |
| 50K | 1276.354   | 18.961 |

Takeaways: after training on Chinese, Chinese ability recovers in both settings; the model has a certain "compensation" capability. With the language region left trainable, training only on Chinese also restores some English ability, but with the region frozen, English is very hard to recover.

Visualizing the language core region:
[Figure: K-projection matrices at layers 0, 1, 5, 10, 15, 20, 25, 31. The Q/K/V/O matrices all concentrate on a few dimensions.]
[Figure: FFN up-projections at layers 0, 1, 5, 10, 15, 20, 25, 31. The FFN up and down projections show clear column clustering on certain dimensions.]
[Figure: LLaMA2-13B, layer 39 o-projection. The dimensional concentration is obvious.]

Perturbing a single point in the model (PPL on WeChat official-account text; reset1 = set the weight to 1, mul10 = multiply it by 10, normNNNN = the LayerNorm weight at dimension NNNN):

| Model | PPL |
| llama2-13B-base | 5.865 |
| llama2-13b-reset1-layer0-norm2100 | 5.866 |
| llama2-13b-reset1-layer1-norm2100 | 83224.078 |
| llama2-13b-reset1-layer1-norm2800 | 5.86 |
| llama2-13b-reset1-layer1-norm4200 | 5.858 |
| llama2-13b-mul10-layer0-norm2100 | 5.866 |
| llama2-13b-mul10-layer1-norm2100 | 39462.238 |
| llama2-13b-mul10-layer1-norm2800 | 5.859 |
| llama2-13b-mul10-layer1-norm4200 | 5.864 |

Changing just one of the 13 billion parameters derails the model.

Input: Fudan University is located in

LLaMA2-13B (PPL 5.877): Fudan University is located in Shanghai, China. It is locally known as 复旦大学. The university was established in 1905. It is accredited by Ministry of Education of the Peoples Republic of China. There are over 40,000 students studying in various courses offered by Fudan University. The language of instruction is Chinese.

LLaMA2-13B, non-core LayerNorm dimensions scaled by 4 (PPL 5.858): Fudan University is located in Fudan University is located in Shanghai, China. The university was established in 1905. It is accredited by Ministry of Education, Peoples Republic of China. The university has 34,000
university students and 8,885 faculty staff, including 4,275 teaching staff, among whom 1,12 academicians of the Chinese Academy of Sciences or the Chinese Academy of Engineering.

LLaMA2-13B, language-core LayerNorm dimension 2100 scaled by 4 (PPL 257.722): Fudan University is located in Tertian, and is located tet tet at tete tette tett ten ten teent teth, tat, tat, tate, tat, ta. 162 words for,

Same input and baseline, stronger perturbation:

LLaMA2-13B, non-core LayerNorm dimensions scaled by 10 (PPL 5.914): Fudan University is located in Shanghai, China, the largest city with the most economic and cultural activities in China. With the most advanced infrastructure and the best living condition, it has become the international education center with the largest oversea students. It consists of Jinan, Kangqiao and Fenglin campus, which boasts the best resources from both education and research. Fudan University has been a famous and attractive university for international students, especially in the past one decade from 2001-2010.

LLaMA2-13B, language-core LayerNorm dimension 2100 scaled by 10 (PPL 376079936): Fudan University is located in NoSYouThereThatAThis#ThisThistThe/Whatthdv

What can the language-core-region view and dimension dependence bring?

1. Continued pretraining on a large new corpus must mix in other data types.
Once pretraining finishes, the division of labor across parameter regions is fixed. Injecting a large amount of knowledge that was absent from pretraining causes large parameter shifts and damages the model's overall language ability. Add 5 to 10 times as much data drawn from the original pretraining distribution and shuffle it together with the new data before training.

2. The language-core parameters of large models are sensitive.
Training for multiple epochs on a small dataset shifts the language-core region and can render the whole model useless. When fine-tuning for a specific task, add general supervised data or pretraining data so that the language-core region is not adjusted too drastically.

3. Training is sensitive to data noise.
Long runs of noisy pretraining data, such as continuously repeated words or non-word sequences, can shift particular dimensions and cause large swings in the model's overall PPL. Likewise, supervised fine-tuning instructions that clash heavily with the base language model can shift particular dimensions and sharply degrade overall performance.
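Recommendation 1 above (mix 5-10x original-distribution "replay" data with the new corpus and shuffle) can be sketched as a simple data-mixing step. The ratio and document counts are illustrative, not the authors' exact recipe.

```python
import random

# Toy sketch of replay-data mixing for continued pretraining.
random.seed(0)

new_docs = [f"new-{i}" for i in range(1_000)]          # new-domain documents
replay_ratio = 5                                        # 5-10x, per the slide
replay_docs = [f"replay-{i}" for i in range(replay_ratio * len(new_docs))]

# Shuffle so training never sees a long single-domain run, which is the
# kind of streak that shifts particular parameter dimensions.
mixed = new_docs + replay_docs
random.shuffle(mixed)

print(len(mixed))  # 6000
```

In practice the replay set would be sampled from the original pretraining distribution (web text, code, multilingual data) rather than generated labels like these placeholders.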