2012-ImageNet Classification with Deep Convolutional Neural Networks


ImageNet Classification with Deep Convolutional Neural Networks

Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton
University of Toronto
kriz@cs.utoronto.ca, ilya@cs.utoronto.ca, hinton@cs.utoronto.ca

Abstract

We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.

1 Introduction

Current approaches to object recognition make essential use of machine learning methods. To improve their performance, we can collect larger datasets, learn more powerful models, and use better techniques for preventing overfitting. Until recently, datasets of labeled images were relatively small, on the order of tens of thousands of images (e.g., NORB [16], Caltech-101/256 [8, 9], and CIFAR-10/100 [12]). Simple recognition tasks can be solved quite well with datasets of this size, especially if they are augmented with label-preserving transformations. For example, the current-best error rate on the MNIST digit-recognition task (<0.3%) approaches human performance [4]. But objects in realistic settings exhibit considerable variability, so to learn to recognize them it is necessary to use much larger training sets. And indeed, the shortcomings of small image datasets have been widely recognized (e.g., Pinto et al. [21]), but it has only recently become possible to collect labeled datasets with millions of images. The new larger datasets include LabelMe
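The abstract names three ingredients that can be illustrated compactly: non-saturating (ReLU) activations, the "dropout" regularizer applied to fully-connected layers, and a final 1000-way softmax. The sketch below, in plain numpy, is an illustration of these three operations only, not the paper's actual GPU implementation; the inverted-dropout scaling shown here is a common modern variant (the original paper instead halved the outputs at test time).

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-saturating activation: f(x) = max(0, x).
    return np.maximum(0.0, x)

def dropout(x, p=0.5, training=True):
    # Zero each unit with probability p during training; here survivors are
    # scaled by 1/(1-p) ("inverted" dropout) so no test-time scaling is needed.
    # The original paper instead multiplied outputs by 0.5 at test time.
    if not training:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def softmax(z):
    # 1000-way softmax over class logits; subtract the max for stability.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy forward pass through one fully-connected stage: a batch of 2 feature
# vectors -> ReLU -> dropout -> logits for 1000 classes -> probabilities.
features = rng.normal(size=(2, 4096))
hidden = dropout(relu(features), p=0.5, training=True)
logits = hidden @ rng.normal(size=(4096, 1000)) * 0.01
probs = softmax(logits)   # each row sums to 1
```

The layer sizes (4096 hidden units, 1000 classes) match the paper's fully-connected layers, but the random weights are placeholders for illustration only.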
