Intelligent Edge Computing: Making Intelligence Ubiquitous
Yunxin Liu (刘云新), Guoqiang Professor and Principal Researcher, Institute for AI Industry Research (AIR), Tsinghua University

Computing paradigm shifts
- Mainframe (centralized) → Personal computing (distributed) → Intelligent cloud (centralized) → Intelligent cloud + edge (distributed)

Distributed devices and data
- Smart city: 250 PB per day; connected factory: 1 PB per day; autonomous vehicle: 5 TB per day; stadium: 200 TB per game; smart office: 150 GB per day; smart home: 50 GB per day; people: 1.5 GB per day; smart devices: 20B IoT devices

The call for intelligence (DL) on the edge
- Data explosion from fast-growing edge devices, e.g., smart surveillance cameras and self-driving cars
- Strong needs for on-device intelligence: low latency, high availability and reliability, strong privacy protection, low cost
- Edge devices are becoming increasingly powerful: emerging high-performance, low-power, low-cost AI ASICs
- Intelligent cloud + intelligent edge

Empower every app and device with AI/DL
- Affordable AI models tailored for diverse hardware
- Highly optimized software stack and efficient hardware for AI
- Security and privacy, model protection, explainable AI, debugging
- On-device, continuous, collaborative learning loop
- AI-empowered diverse devices and applications everywhere

Innovations of the on-device DL stack
- AI chips: EdgeTPU, VPU, NPU, KPU, HPU, ...
- Efficient neural network (NN) design
- Edge NN frameworks

NN design and deployment
- NN design: manual design, NAS, pruning. Design space: number of layers, operator structure, channels, constraints (e.g., FLOPs)
- Model deployment: framework optimizations such as operator fusion (Conv + BN + ReLU), re-quantization, and quantization/dequantization, targeting CPU, GPU, DSP, TPU, NPU
- Gap: current NN design does not consider platform features

Does fewer FLOPs mean lower latency?
- On the EdgeTPU, MobileNetV3 (209M FLOPs) runs at 4 ms with 74.7% accuracy, while MobileNetEdgeTPU (990M FLOPs) runs at 3.6 ms with 75.6% accuracy
- Fewer FLOPs do not necessarily mean lower latency, but reducing FLOPs can harm model accuracy

Does a fast model run fast on every hardware?
- On a Cortex-A76 CPU, MobileNetV3 is 25% faster than MobileNetV2; on the VPU, MobileNetV2 is 71% faster than MobileNetV3

To Bridge Neural Network Design and Real-World Performance: A Behavior Study for Neural Networks (paper published at MLSys 2021)

Goal: a measurement study answering three questions
1. What behavior characteristics show an inconsistent latency response to changes in the OPs and memory accesses of a configuration in the design space?
2. What are the root causes of these unexpected characteristics?
3. What are the implications of these characteristics for efficient-NN design?

Methodology
- Profiling on 7 edge AI platforms with their measurement tools: Cortex CPU (TFLite), Adreno GPU (TFLite), DSP (SNPE), Edge TPU (TFLite), VPU (OpenVINO), NPU (RKNN), KPU (NNCASE)
- Generate a single-block model in TF → convert to the target graph and precision → profile on the target device → collect timing results

Covered design dimensions (scaling each NN design dimension)
- Operator/block type: normal operators (Conv, FC), element-wise (Add, Pooling), activations (ReLU, Sigmoid, Swish), blocks (MobileNet/ShuffleNet block, ...)
- Kernel size: 1, 3, 5, 7; stride: 1, 2; height/width: 3, ..., 224; number of Conv channels: 3, ..., 1000; precision: INT8, FP16, ...

Do more Conv channels increase latency?
Finding 1: The latency of Conv increases in a step pattern rather than linearly with the number of output channels.
[Figure: latency vs. output channel number; input feature map 28x28, 320 input channels, 3x3 kernel, stride 1]
Cause: the input tensors are padded to fully utilize the hardware's data-level parallelism (the SIMD unit on the CPU, the vector unit on the DSP, SIMT on the GPU, etc.). In the matrix-multiplication implementation of convolution with an 8x1x1x8 basic block, the K^2 x Cin and Cout dimensions of the convolution kernel, input feature map, and output feature map are all padded up to the next multiple of 8.
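The padding behavior can be made concrete with a small sketch. Assuming a hypothetical SIMD block width of 8 (mirroring the 8x1x1x8 basic block above), the work actually executed depends on the padded channel counts rather than the nominal ones, which is why latency moves in steps:

```python
import math

SIMD_WIDTH = 8  # hypothetical basic-block width; real hardware may use 4/8/16 lanes

def padded(c: int) -> int:
    """Round a channel count up to the next multiple of the SIMD block width."""
    return math.ceil(c / SIMD_WIDTH) * SIMD_WIDTH

def effective_macs(h, w, cin, cout, k):
    """MACs actually executed after padding Cin and Cout (im2col + GEMM view)."""
    return h * w * padded(cin) * padded(cout) * k * k

# Latency follows the padded work, so Cout = 57..64 all cost the same:
for cout in range(57, 66):
    print(cout, effective_macs(28, 28, 320, cout, 3))
```

Under this model, every channel count inside one step of 8 costs the same, which is exactly the implication drawn next: keep only the largest channel count of each latency step in the design space.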
Implication: for potentially higher accuracy, keep the largest number of channels in each latency step of the NN design space and skip the other choices.
[Figure: previous channel-number choices (6, 8, 10, 12, 14, 16, 18, 20, ...) vs. the reduced choices that keep only the last channel count of each latency step]
E.g., in MetaPruning this reduces the channel search space from 30^14 to 4^14 (14 layers, each originally with 30 channel candidates).

Does a building block have a similar relative latency on different NN platforms?
Finding 2: The relative latency of a building block varies greatly across platforms.
[Figure: relative latency of DenseBlock, MobileNetV2 block + SE, MobileNetV2 block, and ShuffleNetV2 block (normalized to the MobileNetV1 block) on CPU, GPU, VPU, DSP, TPU, and KPU, compared with their relative FLOPs and data size; one value reaches 318.95]
Causes:
1. The mismatch between computation and memory bandwidth is severe. On the Snapdragon 855 (Mi 9), memory bandwidth is about 23 GFloat/s while the CPU delivers 22.7 GFLOP/s and the GPU 508 GFLOP/s; the data reuse rates of the blocks differ widely (ShuffleNet block: 0.81, MobileNetV2 block: 4.73, MobileNetV2 block + SE: 7.58, DenseBlock: 44.51).
2. Support for non-Conv operators is weak on NN platforms other than the CPU. In the Squeeze-and-Excitation block (global pooling, FC + ReLU, FC + Sigmoid, multiply, attached to a 3x3 DWConv + BN + ReLU6), pooling takes 70% of the time.
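A minimal roofline-style sketch of the data-reuse-rate argument in cause 1 above, using the peak numbers and reuse rates quoted on the slide; the roofline framing itself is my illustration, not the paper's analysis:

```python
# Peak numbers from the slide for the Snapdragon 855 (Mi 9):
MEM_BW   = 23e9    # floats per second
CPU_PEAK = 22.7e9  # FLOP/s
GPU_PEAK = 508e9   # FLOP/s

# Data reuse rates (FLOPs per float of memory traffic) reported for the blocks:
blocks = {"ShuffleNet block": 0.81, "MobileNetV2 block": 4.73,
          "MobileNetV2 block + SE": 7.58, "DenseBlock": 44.51}

def attainable(reuse, peak_flops, mem_bw=MEM_BW):
    # Roofline: throughput cannot exceed the compute peak or reuse * bandwidth.
    return min(peak_flops, reuse * mem_bw)

for name, reuse in blocks.items():
    cpu = attainable(reuse, CPU_PEAK)
    gpu = attainable(reuse, GPU_PEAK)
    print(f"{name:24s} CPU {cpu/1e9:6.1f} GFLOP/s   GPU {gpu/1e9:6.1f} GFLOP/s")
```

Only DenseBlock has enough reuse to keep the GPU's 508 GFLOP/s busy; the low-reuse mobile blocks are capped by memory bandwidth on both processors, which is one reason the relative latency ordering of blocks changes from platform to platform.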
Summary of major findings
- Quantization to INT8 brings up to an 11x speedup on some platforms, while the CPU achieves only 3.6x.
- INT8 can dramatically decrease the inference accuracy of various models.
- General: considering general support, accuracy, and latency, the CPU is still a good choice for inference.

How to get a good model? Efficient NN design must consider hardware characteristics.

Efficient NN design for diverse edge hardware
- Profiling and modeling of the target hardware (EdgeTPU, VPU, NPU, KPU, HPU, ...) to build hardware-specific predictors of latency and energy.
- NN design (manual design, NAS, pruning) over the design space (number of layers, operator structure, channels, constraints such as FLOPs), guided by the predicted latency and energy, before model deployment on the target hardware.

nn-Meter: Towards Accurate Latency Prediction of Deep-Learning Model Inference on Diverse Edge Devices
(Cortex CPU, Adreno GPU, VPU; paper published at MobiSys 2021, Best Paper Award)

Existing work on latency prediction
- FLOPs-based prediction. Pros: very simple. Cons: not a direct metric of inference latency.
- Operator-level prediction. Pros: stable primitive operators (conv2d, pooling, activations, ...). Cons: unaware of graph-level optimizations.
- Model-level prediction. Pros: learns graph-level optimizations automatically. Cons: cannot generalize to unseen model structures.
- nn-Meter: build an accurate latency predictor that takes graph-level optimizations into consideration and generalizes to unseen models.

Challenge: framework optimizations
- Backend-independent optimizations: constant folding, common-subexpression elimination, ...
- Backend-dependent optimizations: operator fusion, ... The designed model passes through backend-independent and then backend-dependent optimizations before reaching a backend implementation (e.g., a CPU backend using the Eigen or NNPACK library, a GPU backend using OpenCL, or the Movidius backend).
- Operator fusion has a great impact on inference latency. A separate Conv and activation in the model graph:

      _kernel conv_2d_1x1() {
        for (i = 0; i < out.row; i++)
          for (j = 0; j < out.col; j++)
            for (cout = 0; cout < out.chan; cout++)
              for (cin = 0; cin < in.chan; cin++)
                out[i][j][cout] += in[i][j][cin] * filter[cout][cin];
      }
      _kernel active() {
        for (i = 0; i < out.row; i++)
          for (j = 0; j < out.col; j++)
            for (c = 0; c < out.chan; c++)
              out[i][j][c] = active(in[i][j][c]);
      }

  may be implemented by the backend as a single fused kernel, avoiding a second pass over the feature map:

      _kernel conv_2d_1x1_active() {
        for (i = 0; i < out.row; i++)
          for (j = 0; j < out.col; j++)
            for (cout = 0; cout < out.chan; cout++) {
              for (cin = 0; cin < in.chan; cin++)
                out[i][j][cout] += in[i][j][cin] * filter[cout][cin];
              out[i][j][cout] = active(out[i][j][cout]);
            }
      }

nn-Meter tech #1: automatic kernel detector
- Fusion rule detection for black-box devices: for every two operators Op1 and Op2, generate three test-case models (Op1 alone, Op2 alone, and Op1 connected to Op2) and compare the measured latencies; if the latency saving τ1 + τ2 − τ(1,2) is on the order of min(τ1, τ2), i.e., the cheaper operator's cost effectively disappears, the two operators are considered fused by the backend.
- Kernel search by the fusion rules: apply the detected rules to find the maximally fused operators (kernels) in the target model, e.g., in a ResNet-18 block.
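A minimal sketch of the test-case idea behind the automatic kernel detector. `build_model` and `measure_latency` are stand-ins for constructing a one- or two-operator model and profiling it on the real device, and the `alpha` threshold is an illustrative choice rather than the paper's exact criterion:

```python
from itertools import permutations

def detect_fusion_rules(operators, build_model, measure_latency, alpha=0.5):
    """For every ordered operator pair, build three test-case models
    (Op1 alone, Op2 alone, Op1 -> Op2) and compare measured latencies."""
    single = {op: measure_latency(build_model([op])) for op in operators}
    rules = {}
    for op1, op2 in permutations(operators, 2):
        t1, t2 = single[op1], single[op2]
        t12 = measure_latency(build_model([op1, op2]))
        saving = t1 + t2 - t12
        # Treat the pair as fused when the saving is a large fraction of the
        # cheaper operator's latency (illustrative threshold).
        rules[(op1, op2)] = saving > alpha * min(t1, t2)
    return rules

def split_into_kernels(op_sequence, rules):
    """Greedily merge consecutive operators into maximal fused kernels."""
    kernels = [[op_sequence[0]]]
    for op in op_sequence[1:]:
        if rules.get((kernels[-1][-1], op), False):
            kernels[-1].append(op)   # extend the current fused kernel
        else:
            kernels.append([op])     # start a new kernel
    return kernels
```

For instance, on a backend whose detected rules fuse conv→bn and bn→relu, `split_into_kernels(["conv", "bn", "relu", "add"], rules)` would return `[["conv", "bn", "relu"], ["add"]]`, and latency is then predicted per kernel rather than per operator.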
Kernel-latency prediction: challenges
- Large sample space: collected from 24 widely used CNN models in the PyTorch model zoo, Conv alone has a huge number of configurations to sample.
- Latency is non-linear on edge devices, and random sampling misses the crucial data points.

nn-Meter tech #2: adaptive data sampler
- Sample the most beneficial data (kernel configurations) instead of sampling at random.
- Sample configurations that are likely to be considered in model design: a prior probability distribution learned from the model zoo feeds the regression model.
- Fine-grained sampling around the data points with large prediction errors.

nn-Meter evaluation
- Prediction accuracy: 99.0% (CPU), 99.1% (Adreno 640 GPU), 99.0% (Adreno 630 GPU), and 83.4% (Intel VPU).
- Generalization to unseen model graphs, compared against FLOPs, FLOPs+MAC, and BRP-NAS (GCN) baselines: on average, nn-Meter achieves 89.2%, significantly better than FLOPs (22.1%), FLOPs+MAC (17.1%), and BRP-NAS (8.5%).

We got a good model. How does it run on real devices?
- Are computing resources fully utilized? For CNN inference, ARM CPU utilization is about 90% on the big cores but only about 30% on the little cores, and Adreno GPU ALU utilization is about 84%. Low hardware utilization results in poor inference speed.

AsyMo: Scalable and Efficient Deep-Learning Inference on Asymmetric Mobile CPUs (paper published at MobiCom 2021)

Why is utilization low on the CPU?
- Unbalanced task distribution by the OS across and within core clusters (big core cluster B0-B3 vs. little core cluster L0-L3).

Execution flow of matrix multiplication, and why distribution is unbalanced
1) Block partition for parallelism: the M x K parameter matrix and the K x N feature-map matrix are split into mc x kc and kc x nc blocks; this ignores hardware asymmetry and resource constraints.
2) Copy blocks into contiguous memory space: redundant data copies that ignore data locality.
3) Schedule tasks to thread-pool queues: ignores hardware asymmetry and the interference-prone environment.

AsyMo: optimize DL inference on big.LITTLE CPUs to accelerate edge DL inference at lower energy cost
- One-run initialization for a CNN/RNN model: cost-model-directed block partition, data-reuse-based frequency setting, and a prearranged memory layout for parameters.
- Inference: asymmetry-aware scheduling using the partition strategy, the memory handles, the efficient frequency, and an intra-op thread pool with task-to-thread mapping.

Cost-model-based block partition
- Cost of one task (a sequential unit): computation cost + memory-access cost.
- Cost of the parallel part: (number of parallel tasks × cost per sequential unit) / degree of parallelism.
- Other costs: the unparallelized part, task scheduling, and framework overhead.
- Total cost = parallel-computation cost + unparallelized cost + task-scheduling and framework cost; the block partition is chosen to minimize it.
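A minimal sketch of how a cost model of this shape can direct the block partition. The cost constants, candidate block sizes, and per-cluster peak numbers below are hypothetical; the point is only that the partition is chosen per core cluster by minimizing a modeled cost instead of using one OS-default split:

```python
import math

def task_cost(mc, kc, nc, flop_per_sec, bytes_per_sec):
    """Modeled cost of one block task: compute time + memory-access time."""
    compute = 2 * mc * kc * nc / flop_per_sec                    # MACs -> seconds
    memory  = 4 * (mc * kc + kc * nc + mc * nc) / bytes_per_sec  # fp32 traffic -> seconds
    return compute + memory

def total_cost(M, K, N, mc, kc, nc, cores, flop_per_sec, bytes_per_sec,
               sched_overhead=2e-6):
    """Parallel cost = (#tasks * per-task cost) / #cores, plus per-task scheduling overhead."""
    n_tasks = math.ceil(M / mc) * math.ceil(K / kc) * math.ceil(N / nc)
    return (n_tasks * task_cost(mc, kc, nc, flop_per_sec, bytes_per_sec) / cores
            + n_tasks * sched_overhead)

def best_partition(M, K, N, cores, flop_per_sec, bytes_per_sec,
                   candidates=(32, 64, 128, 256)):
    """Pick (mc, kc, nc) minimizing the modeled total cost for one core cluster."""
    return min(((mc, kc, nc) for mc in candidates for kc in candidates for nc in candidates),
               key=lambda b: total_cost(M, K, N, *b, cores, flop_per_sec, bytes_per_sec))

# Hypothetical big/little clusters of a mobile SoC: different peak compute,
# shared memory bandwidth -> generally different best block shapes per cluster.
print(best_partition(1024, 1024, 1024, cores=4, flop_per_sec=20e9, bytes_per_sec=15e9))
print(best_partition(1024, 1024, 1024, cores=4, flop_per_sec=6e9,  bytes_per_sec=15e9))
```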
Optimized execution flow of matrix multiplication
- One-run initialization: block partition and parameter layout.
- Inference run: copy features, then schedule and run tasks. The N dimension is split between the big core cluster (N_big) and the little core cluster (N_little), threads are pinned to cores, there is no work stealing from big to little cores, and data locality is improved.

Total performance and energy improvement (AsyMo vs. TensorFlow on Kirin 970 + Android 9 Pie)
- With both at maximum CPU frequency: 1.85x performance and 1.33x energy efficiency.
- Against TensorFlow with the OS (schedutil) frequency setting: 1.63x performance and 1.72x energy efficiency, because AsyMo picks an efficient CPU frequency.
- Pre-copying parameters enables the parallel implementation.

Sparseflow: unleash the full potential of sparsity in deep learning (joint work with Chen Zhang et al.)

Today's DNN models are huge
- GPT-3: 175B parameters, about $12M training cost. MT-NLG: 530B parameters, trained on 560 DGX A100 servers.

Computation is the engine behind AI's success, and we still need more
- [Figure: performance (op/s) from ENIAC (5 Kops) to Xeon E5 (500 Gops) under Moore's law (~10^8x, 1960-2019), plus a further ~10^5x from dedicated hardware: GPU (V100, 125 Tops), TPUv1 (90 Tops), TPUv3 (360 Tops), ...]

Piling up hardware is not sustainable: the energy-efficiency wall
- [Figure: giga-operations per joule vs. year (1995-2020), showing CPU, GPU, and TPU energy-efficiency walls beyond Moore's law and dedicated hardware]

Sparsity is the key to the human brain's efficiency
- We do not look at everything in our visual scope; simple geometric shapes are enough for us to recognize a cat.

Weight pruning (Han, Song, et al., "Learning both Weights and Connections for Efficient Neural Networks", NIPS'15)
- Prune away small weights, turning dense MxV into sparse SpMxV over unstructured sparse matrices, which are difficult to accelerate.

Accuracy and speedup trade-off: how to achieve both?
- Fine-grained/irregular sparsity. Pros: high model accuracy, high compression ratio. Cons: irregular pattern, difficult to accelerate.
- Coarse-grained/regular sparsity. Pros: regular pattern, easy to accelerate. Cons: lower model accuracy, lower compression ratio.
- Goal: keep accuracy by adding few constraints on the sparsity pattern, and gain speedup through matrix partitioning for parallel computing and by eliminating irregular computation and memory accesses.
  (S. Cao et al., "Efficient and Effective Sparse LSTM on FPGA with Bank-Balanced Sparsity", FPGA'19)

Bank-Balanced Sparsity (BBS)
- Bank-balanced pruning: split each dense matrix row into equal-sized banks, traverse all rows, and prune fine-grained weights inside each bank using a threshold percentage so that all banks keep an identical sparsity ratio.
- Bank partitioning enables parallel computing; fine-grained pruning inside each bank maintains accuracy.
- Sparse matrix-vector multiplication (SpMxV) gains both inter-row and inter-bank parallelism, load balancing across rows and banks, and conflict-free accesses to the dense vector.

CSB (Compressed Sparse Banks)
- Data is rearranged for inter-bank parallelization: values and bank-internal indices are stored per bank so that physical BRAM addresses line up; the format is designed specifically for BBS to eliminate decoding overheads.

Accelerator overview
- FPGA SpMxV processing elements (multipliers, adder trees, element-wise ops, activation), matrix memory for indices and values, private vector buffers, an instruction buffer and controller, DMA, and DRAM/PCIe controllers connecting to off-chip DRAM and the host server.

Results
- Model accuracy: very close to the dense baseline on speech recognition (TIMIT) and language modeling (PTB).
- Hardware efficiency: about 34x and 7x improvements over the compared designs.

SeerNet: Predicting CNN Feature-Map Sparsity through Low-Bit Quantization
(S. Cao et al., "SeerNet: Predicting Convolutional Neural Network Feature-Map Sparsity through Low-Bit Quantization", CVPR'19)
- In a CNN (Conv → ReLU → max-pooling → ... → Softmax), ReLU (y = max(0, x)) and max-pooling (y = max(x_i), i = 1, ..., n) make feature maps sparse: 45%-95% of output activations are zeroed or never selected.
- Convolving to produce output pixels that ReLU zeroes out (or max-pooling discards) results in wasted computation, so inference can be accelerated by predicting the feature-map sparsity first and skipping those outputs.
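A minimal NumPy sketch of the SeerNet idea described above: run a cheap low-bit version of the convolution to predict which output activations ReLU will zero out, then compute the full-precision convolution only at the predicted non-zero positions. The 4-bit uniform quantizer and the 1x1-convolution setting are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def quantize(x, bits=4):
    """Uniform symmetric quantization to a low-bit grid (illustrative)."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1) + 1e-12
    return np.round(x / scale) * scale

def conv1x1(x, w):
    """1x1 convolution as channel mixing: x is (H, W, Cin), w is (Cout, Cin)."""
    return np.einsum('hwc,oc->hwo', x, w)

def seernet_conv1x1_relu(x, w, bits=4):
    # 1) Cheap low-bit "seer" pass predicts the sign of each output activation.
    mask = conv1x1(quantize(x, bits), quantize(w, bits)) > 0
    # 2) Full-precision conv is computed only where the prediction is positive;
    #    the remaining outputs would be zeroed by ReLU anyway.
    out = np.zeros(x.shape[:2] + (w.shape[0],), dtype=x.dtype)
    for h, wi, o in np.argwhere(mask):
        out[h, wi, o] = max(0.0, float(x[h, wi] @ w[o]))
    return out, mask.mean()  # mask.mean() = predicted output density

x = np.random.randn(8, 8, 16).astype(np.float32)
w = np.random.randn(32, 16).astype(np.float32)
y, density = seernet_conv1x1_relu(x, w)
print(f"predicted non-zero fraction: {density:.2f}")
```

The prediction pass is much cheaper than the full-precision convolution because it runs at low bit width, so the saved computation on the zeroed outputs outweighs its overhead when the feature maps are highly sparse.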