《hadoop學(xué)習(xí)心得》(Hadoop study notes)
1. FileInputFormat splits only large files. Here "large" means larger than an HDFS block. The split size is normally the size of an HDFS block, which is appropriate for most applications; however, it is possible to control this value by setting various Hadoop properties.
2. So the split size is blockSize.
3. Making the minimum split size greater than the block size increases the split size, but at the cost of locality.
4. One reason for this is that FileInputFormat generates splits in such a way that each split is all or part of a single file. If the file is very small ("small" means significantly smaller than an HDFS block) and there are a lot of them, then each map task will process very little input, and there will be a lot of them (one per file), each of which imposes extra bookkeeping overhead.

Hadoop does not handle large numbers of small files well. Hadoop processes data in blocks, 64 MB per block by default; if there are many small files (say 2-3 MB each), each one, though far smaller than a block, is still handled as a full block.
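The interaction between the minimum split size, the goal size, and the block size described above can be sketched as follows. This is a minimal re-implementation, assuming the formula max(minSize, min(goalSize, blockSize)) that Hadoop's old-API FileInputFormat uses; the class name SplitSizeDemo and the concrete sizes are illustrative, not real Hadoop code:

```java
// Sketch of how FileInputFormat chooses a split size (assumed formula:
// max(minSize, min(goalSize, blockSize)); not the real Hadoop class).
public class SplitSizeDemo {
    static long computeSplitSize(long goalSize, long minSize, long blockSize) {
        return Math.max(minSize, Math.min(goalSize, blockSize));
    }

    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024;   // default 64 MB HDFS block
        long goalSize  = 128L * 1024 * 1024;  // total input / desired number of maps

        // Default minSize of 1: the split size is simply the block size.
        System.out.println(computeSplitSize(goalSize, 1L, blockSize));
        // Raising minSize above the block size forces bigger splits,
        // which span blocks and therefore sacrifice data locality.
        System.out.println(computeSplitSize(goalSize, 128L * 1024 * 1024, blockSize));
    }
}
```

With the defaults the first call returns the 64 MB block size; the second shows how a large minimum split size overrides it.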
This has two consequences: 1. Storing a large number of small files wastes storage space, lowers storage efficiency, and makes retrieval slower than with large files. 2. During MapReduce computation such small files consume computing capacity, since map tasks are assigned per block by default (this is probably the main drawback of small files). How can this problem be solved? 1. Use the HAR files Hadoop provides; the Hadoop command manual describes how to archive small files. 2. Preprocess the data yourself, packing the small files into large files of more than 64 MB.

FileInputFormat is the base class for all implementations of InputFormat that use files as their data source (see Figure 7-2). It provides two things: a place to define which files are included as the input to a job, and an implementation for generating splits for the input files. The job of dividing splits into records is performed by subclasses. An InputSplit has a length in bytes,
and a set of storage locations, which are just hostname strings. Notice that a split doesn't contain the input data; it is just a reference to the data. As a MapReduce application writer, you don't need to deal with InputSplits directly, as they are created by an InputFormat. An InputFormat is responsible for creating the input splits, and dividing them into records. Before we see some concrete examples of InputFormat, let's briefly examine how it is used in MapReduce. Here's the interface:

public interface InputFormat<K, V> {
  InputSplit[] getSplits(JobConf job, int numSplits) throws IOException;
  RecordReader<K, V> getRecordReader(InputSplit split, JobConf job,
                                     Reporter reporter) throws IOException;
}

The JobClient calls the getSplits() method. On a tasktracker, the map task passes the split to the getRecordReader() method on the InputFormat to obtain a RecordReader for that split.
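To make that calling sequence concrete, here is a toy, self-contained sketch of the contract: the framework asks the InputFormat for splits, then opens one reader per split and feeds each record to a mapper. Split and LineInputFormat below are simplified stand-ins written for illustration, not the real org.apache.hadoop.mapred classes:

```java
import java.util.ArrayList;
import java.util.List;

/** Toy sketch of the old-API contract: getSplits() first, then one
 *  record reader per split. Not the real Hadoop classes. */
public class InputFormatDemo {

    /** A split only references data; here, a start index and length into a list. */
    static class Split {
        final int start, length;
        Split(int start, int length) { this.start = start; this.length = length; }
    }

    /** Plays the role of an InputFormat over a list of lines. */
    static class LineInputFormat {
        final List<String> data;
        LineInputFormat(List<String> data) { this.data = data; }

        /** What the JobClient calls: carve the input into numSplits pieces. */
        Split[] getSplits(int numSplits) {
            int per = (data.size() + numSplits - 1) / numSplits;
            List<Split> splits = new ArrayList<>();
            for (int s = 0; s < data.size(); s += per)
                splits.add(new Split(s, Math.min(per, data.size() - s)));
            return splits.toArray(new Split[0]);
        }

        /** Plays the role of getRecordReader(): yields the split's records. */
        List<String> getRecordReader(Split split) {
            return data.subList(split.start, split.start + split.length);
        }
    }

    public static void main(String[] args) {
        LineInputFormat fmt = new LineInputFormat(List.of("a", "b", "c", "d", "e"));
        Split[] splits = fmt.getSplits(2);            // done once, by the JobClient
        for (Split split : splits)                    // one map task per split
            for (String record : fmt.getRecordReader(split))
                System.out.println(record);           // the mapper consumes records
    }
}
```

In real Hadoop a split would hold a file path and byte range rather than the data itself, and the tasktracker, not a loop in main(), would schedule one map task per split, preferring the split's storage locations.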