Deep Learning for AI: from Machine Perception to Machine Cognition (slide transcript)
Deep Learning for AI: from Machine Perception to Machine Cognition
Li Deng, Chief Scientist of AI, Microsoft Applications/Services Group (ASG) & MSR Deep Learning Technology Center (DLTC)
Thanks go to many colleagues at DLTC & MSR, collaborating universities, and at Microsoft's engineering groups (ASG+)
A Plenary Presentation at IEEE-ICASSP, March 24, 2016

Definition
Deep learning is a class of machine learning algorithms that (pp. 199-200):
- use a cascade of many layers of nonlinear processing;
- are part of the broader machine learning field of learning representations of data, facilitating end-to-end optimization;
- learn multiple levels of representations that correspond to hierarchies of concept abstraction.

Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field of study on how to create computers and computer software that are capable of intelligent behavior.
Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial
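The "cascade of many layers of nonlinear processing" in the definition above can be illustrated with a minimal feed-forward network. This is a hypothetical NumPy sketch, not code from the talk; the layer sizes and ReLU nonlinearity are illustrative choices:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def deep_forward(x, weights, biases):
    """Cascade the input through many nonlinear layers.

    Each layer re-represents its input; stacking layers yields
    increasingly abstract representations of the data.
    """
    h = x
    for W, b in zip(weights, biases):
        h = relu(W @ h + b)  # one layer of nonlinear processing
    return h

# A toy 3-layer cascade: 4 -> 8 -> 8 -> 2
rng = np.random.default_rng(0)
dims = [4, 8, 8, 2]
weights = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(3)]
biases = [np.zeros(dims[i + 1]) for i in range(3)]

out = deep_forward(rng.standard_normal(4), weights, biases)
print(out.shape)  # (2,)
```

End-to-end optimization then means training all of these layers jointly against a single objective, rather than hand-designing intermediate features.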
intelligence research and an important topic for science fiction writers and futurists. Artificial general intelligence is also referred to as "strong AI".

AI/(A)GI & Deep Learning: the main thesis
AI/GI = machine perception (speech, image, video, gesture, touch, ...) + machine cognition (natural language, reasoning, attention, memory/learning, knowledge, decision making, action, interaction/conversation)
GI: AI that is flexible, general, adaptive, learning from 1st principles
Deep Learning + Reinforcement/Unsupervised Learning -> AI/GI

AI/GI & Deep Learning: how AlphaGo fits
Same decomposition as above. AGI: AI that is flexible, general, adaptive, learning from 1st principles.
Deep Learning + Reinforcement/Unsupervised Learning -> AI/AGI

Outline
- Deep learning for machine perception: speech; image
- Deep learning for machine cognition: semantic modeling; natural language; multimodality; reasoning, attention, memory (RAM); knowledge representation/management/exploitation; optimal decision making (by deep reinforcement learning)
- Three hot areas/challenges of deep learning & AI research

Deep learning research: centered at NIPS (Neural Information Processing Systems), Dec 7-12, 2015
Milestones: Hinton & MSR (2009); Hinton & ImageNet & "bidding" (2012); Zuckerberg & LeCun (2013); Musk & RAM & OpenAI; Deep Learning Tutorial

The Universal Translator comes true!
"Scientists See Promise in Deep-Learning Programs", John Markoff, November 23, 2012. In Tianjin, China, on October 25, 2012, deep learning technology enabled speech-to-speech translation: a voice recognition program translated a speech given by Richard F. Rashid, Microsoft's top scientist, into Mandarin Chinese.

Key papers (Microsoft Research):
- Deep belief networks for phone recognition. NIPS, December 2009
- Investigation of full-sequence training of DBNs for speech recognition. Interspeech, Sept 2010
- Binary coding of speech spectrograms using a deep auto-encoder. Interspeech, Sept 2010
- Roles of Pre-Training & Fine-Tuning in CD-DBN-HMMs for Real-World ASR. NIPS, Dec 2010
- Large Vocabulary Continuous Speech Recognition with CD-DNN-HMMs. ICASSP, April 2011
- Conversational Speech Transcription Using Context-Dependent DNN. Interspeech, Aug 2011
- Making deep belief networks effective for LVCSR. ASRU, Dec 2011
- Application of Pretrained DNNs to Large Vocabulary Speech Recognition. ICASSP 2012
- [Hu Yu] How was iFLYTEK's "Super Brain" 2.0 built? (2011-2015)
CD-DNN-HMM invented in 2010.

Across-the-board deployment of DNNs in the speech industry (plus university labs & DARPA programs), 2012-2014.

In the academic world: "This joint paper (2012) from the major speech recognition laboratories details the first major industrial application of deep learning."

State-of-the-art speech recognition today (and tomorrow: roles of unsupervised learning)

ASR: neural network architectures at Google
- Single channel: LSTM acoustic model trained with connectionist temporal classification (CTC). Results on a 2,000-hr English Voice Search task show an 11% relative improvement. Papers: H. Sak et al. (ICASSP 2015, Interspeech 2015); A. Senior et al. (ASRU 2015).

Model                         | WER
LSTM w/ conventional modeling | 14.0
LSTM w/ CTC                   | 12.9

- Multi-channel: raw-waveform input for each channel; initial network layers factored to do spatial and spectral filtering; output passed to a CLDNN acoustic model; entire network trained jointly. Results on a 2,000-hr English Voice Search task show more than 10% relative improvement. Papers: T. N. Sainath et al. (ASRU 2015, ICASSP 2016).

Model                       | WER
raw-waveform, 1 ch          | 19.2
delay+sum, 8 ch             | 18.7
MVDR, 8 ch                  | 18.8
factored raw-waveform, 2 ch | 17.1

(Slide credit: Tara Sainath & Andrew Senior)

Baidu's Deep Speech 2: end-to-end DL system for Mandarin and English
Paper: bit.ly/deepspeech2. Human-level Mandarin recognition on short queries: Deep Speech 3.7%-5.7% CER vs. humans 4%-9.7% CER. Trained on 12,000 hours of conversational, read, and mixed speech. 9-layer RNN with CTC cost: 2D invariant convolution, 7 recurrent layers, fully connected output. Trained with SGD on a heavily optimized HPC system; "SortaGrad" curriculum learning; "Batch Dispatch" framework for low-latency production deployment. (Slide credit: Andrew Ng & Adam Coates)

Learning transition probabilities in DNN-HMM ASR (Siri data)
DNN outputs include not only state posteriors but also HMM transition probabilities: real-time factor reduced by 16%, WER reduced by 10%. Matthias Paulik, "Improvements to the Pruning Behavior of DNN Acoustic Models", Interspeech 2015. (Slide: Alex Acero)

FSMN-based LVCSR system
Feed-forward Sequential Memory Network (FSMN). Results on a 10,000-hour Mandarin short-message dictation task: 8 hidden layers; memory block spanning -/+15 frames; CTC training criterion. Comparable accuracy to DBLSTM with a smaller model; training takes only 1 day using 16 GPUs and the ASGD algorithm.

Model    | #Para. (M) | CER (%)
ReLU DNN | 40         | 6.40
LSTM     | 27.5       | 5.25
BLSTM    | 45         | 4.67
FSMN     | 19.8       | 4.61

Shiliang Zhang, Cong Liu, Hui Jiang, Si Wei, Lirong Dai, Yu Hu. "Feedforward Sequential Memory Networks: A New Structure to Learn Long-term Dependency". arXiv:1512.08031, 2015. (Slide credit: Cong Liu & Yu Hu)

English conversational telephone speech recognition (IBM)
Key ingredients: joint RNN/CNN acoustic model trained on 2,000 hours of publicly available audio; maxout activations; exponential and NN language models. WER results on Switchboard Hub5-2000:

Model          | WER SWB | WER CH
CNN            | 10.4    | 17.9
RNN            | 9.9     | 16.3
Joint RNN/CNN  | 9.3     | 15.6
+ LM rescoring | 8.0     | 14.1

Saon et al., "The IBM 2015 English Conversational Telephone Speech Recognition System", Interspeech 2015. (Slide credit: G. Saon & B. Kingsbury)

SP-P14.5: "Scalable Training of Deep Learning Machines by Incremental Block Training with Intra-Block Parallel Optimization and Blockwise Model-Update Filtering", by Kai Chen and Qiang Huo. (Slide credit: Xuedong Huang)

Recent research at MS (ICASSP 2016):
- "Scalable Training of Deep Learning Machines by Incremental Block Training with Intra-Block Parallel Optimization and Blockwise Model-Update Filtering"
- "Highway LSTM RNNs for Distant Speech Recognition"
- "Self-Stabilized Deep Neural Networks"
CNTK/Philly. (Google recently updated TensorFlow to scale to multiple machines; comparisons have not been made yet.)

Deep learning also shattered image recognition (since 2012)
Microsoft Research: 3.567% vs. 3.581% error; super-deep: 152 layers; 4th year.

ILSVRC (Large Scale Visual Recognition Challenge): depth is of crucial importance
- AlexNet, 8 layers (ILSVRC 2012): 11x11 conv 96 /4, pool/2, 5x5 conv 256, pool/2, 3x3 conv 384, 3x3 conv 384, 3x3 conv 256, pool/2, fc 4096, fc 4096, fc 1000
- VGG, 19 layers (ILSVRC 2014): stacks of 3x3 convolutions (64 to 512 channels) with pooling, then fc 4096, fc 4096, fc 1000
- GoogleNet, 22 layers (ILSVRC 2014): stacked Inception modules (1x1, 3x3, 5x5 convolutions and max-pooling, depth-concatenated), with auxiliary softmax outputs
- ResNet, 152 layers (ILSVRC 2015): 7x7 conv 64 /2, pool/2, then long stacks of bottleneck blocks (1x1 conv, 3x3 conv, 1x1 conv at 256-2048 channels), average pooling, fc 1000
[Layer-by-layer architecture diagrams from the slides are summarized above.]
(slide credit: Jian Sun, MSR)
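The 152-layer ResNet above is built from residual blocks. A minimal NumPy sketch of the core idea (an illustration under assumed dimensions, not the network's actual code): the block computes a learned residual F(x) and adds the input back through an identity shortcut, which is what makes training at such depth feasible.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    """y = relu(F(x) + x): the identity shortcut adds the input back,
    so gradients can flow through very deep stacks of blocks."""
    f = W2 @ relu(W1 @ x)  # the learned residual F(x)
    return relu(f + x)     # shortcut connection

rng = np.random.default_rng(0)
d = 16
x = rng.standard_normal(d)

# With zero weights the residual is zero, so the block reduces to
# relu(x): the identity mapping is trivially representable, which is
# why adding more blocks does not have to hurt.
y = residual_block(x, np.zeros((d, d)), np.zeros((d, d)))
assert np.allclose(y, relu(x))
```

The real bottleneck blocks use 1x1/3x3/1x1 convolutions with batch normalization rather than dense matrices, but the shortcut-plus-residual structure is the same.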
Outline (recap)
- Deep learning for machine perception: speech; image
- Deep learning for machine cognition: semantic modeling; natural language; multimodality; reasoning, attention, memory (RAM); knowledge representation/management/exploitation; optimal decision making (by deep reinforcement learning)
- Three hot areas/challenges of deep learning & AI research

Deep Semantic Model for Symbol Embedding
Input word/phrase s: "racing car" and targets t1: "formula one", t2: "racing to me" are each mapped from a bag-of-words vector (dim = 100M) through a fixed letter-trigram encoding matrix (dim = 50K), a letter-trigram embedding matrix (d = 500), a further hidden layer (d = 500), to a semantic vector (d = 300), via weight matrices Ws1-Ws4 (source) and Wt1-Wt4 (targets). In the semantic space, s and t1 ("formula one") are similar, while s and t2 ("racing to me") are far apart.
Huang, P., He, X., Gao, J., Deng, L., Acero, A., and Heck, L. "Learning deep structured semantic models for web search using clickthrough data." ACM-CIKM 2013.

Many applications of deep semantic modeling: learning the semantic relationship between "source" and "target"
Tasks: word semantic embedding; web search; query intent detection; question answering; machine translation; query auto-suggestion; query auto-completion; apps recommendation; distillation of survey feedback; automatic image captioning; image retrieval; natural user interface; ads selection; ads click prediction; email analysis (people prediction); email search; email decluttering; knowledge-base construction; contextual entity search.
Source: context; search query; search query; pattern/mention (in NL); sentence in language a; search query; partial search query; user profile; feedback in text; image; text query; command (text/speech/gesture); search query; search query; email cont-
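The letter-trigram encoding in the semantic model above shrinks a ~100M-word vocabulary to a fixed ~50K-dimensional trigram space. A minimal pure-Python sketch of that encoding step (illustrative only; the dimension and the hashing of trigrams into indices are assumptions here, not the paper's exact construction):

```python
def letter_trigrams(word):
    """Break a word into letter trigrams with boundary marks,
    e.g. 'car' -> ['#ca', 'car', 'ar#']."""
    w = f"#{word}#"
    return [w[i:i + 3] for i in range(len(w) - 2)]

def trigram_vector(phrase, dim=50_000):
    """Bag-of-letter-trigrams vector: each trigram is mapped to an
    index in a fixed-size space, so arbitrarily large vocabularies
    fit in `dim` dimensions and similar spellings share components."""
    vec = [0] * dim
    for word in phrase.lower().split():
        for tri in letter_trigrams(word):
            vec[hash(tri) % dim] += 1
    return vec

print(letter_trigrams("car"))  # ['#ca', 'car', 'ar#']
```

This fixed encoding is the input to the learned embedding matrices; only the layers above it are trained from clickthrough data.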