Type Parameters:
K - The type of the Trevni key to read.
V - The type of the Trevni value to read.
A subset schema to be read may be specified with
AvroJob.setInputKeySchema(org.apache.hadoop.mapreduce.Job, org.apache.avro.Schema) and
AvroJob.setInputValueSchema(org.apache.hadoop.mapreduce.Job, org.apache.avro.Schema).
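As a hedged sketch of how those two setters might be used (the schemas and job name here are illustrative assumptions, not taken from this page):

```java
// Sketch: configuring reader schemas for a job that consumes Trevni
// key/value input. The STRING/LONG schemas are illustrative assumptions.
import org.apache.avro.Schema;
import org.apache.avro.mapreduce.AvroJob;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.trevni.avro.mapreduce.AvroTrevniKeyValueInputFormat;

public class TrevniSchemaSetup {
  public static Job configure(Configuration conf) throws Exception {
    Job job = Job.getInstance(conf, "trevni-read");
    job.setInputFormatClass(AvroTrevniKeyValueInputFormat.class);
    // Supply reader schemas so only the needed fields are decoded.
    AvroJob.setInputKeySchema(job, Schema.create(Schema.Type.STRING));
    AvroJob.setInputValueSchema(job, Schema.create(Schema.Type.LONG));
    return job;
  }
}
```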
public class AvroTrevniKeyValueInputFormat<K,V> extends org.apache.hadoop.mapreduce.lib.input.FileInputFormat<AvroKey<K>,AvroValue<V>>
InputFormat for Trevni files.
This implementation was modeled after AvroKeyValueInputFormat to allow for an easy transition.
A MapReduce InputFormat that reads from Trevni container files of key/value generic records.
Trevni container files containing generic records with the two fields 'key' and 'value' are expected. The contents of the 'key' field will be used as the job input key, and the contents of the 'value' field will be used as the job input value.
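A mapper consuming this input format receives the 'key' field wrapped in an AvroKey and the 'value' field wrapped in an AvroValue. The sketch below assumes string keys and long values were written to the Trevni files; the mapper itself is illustrative, not part of this API:

```java
// Sketch, assuming the Trevni records hold a string 'key' and a long 'value';
// the pass-through mapping logic is an illustrative assumption.
import java.io.IOException;
import org.apache.avro.mapred.AvroKey;
import org.apache.avro.mapred.AvroValue;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TrevniKeyValueMapper
    extends Mapper<AvroKey<CharSequence>, AvroValue<Long>, Text, LongWritable> {
  @Override
  protected void map(AvroKey<CharSequence> key, AvroValue<Long> value, Context context)
      throws IOException, InterruptedException {
    // The record's 'key' field arrives as the map input key,
    // and its 'value' field as the map input value.
    context.write(new Text(key.datum().toString()),
                  new LongWritable(value.datum()));
  }
}
```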
| Constructor and Description |
|---|
| AvroTrevniKeyValueInputFormat() |
| Modifier and Type | Method and Description |
|---|---|
| org.apache.hadoop.mapreduce.RecordReader&lt;AvroKey&lt;K&gt;,AvroValue&lt;V&gt;&gt; | createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) |
Methods inherited from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat:
addInputPath, addInputPathRecursively, addInputPaths, computeSplitSize, getBlockIndex, getFormatMinSplitSize, getInputDirRecursive, getInputPathFilter, getInputPaths, getMaxSplitSize, getMinSplitSize, getSplits, isSplitable, listStatus, makeSplit, makeSplit, setInputDirRecursive, setInputPathFilter, setInputPaths, setInputPaths, setMaxInputSplitSize, setMinInputSplitSize

public org.apache.hadoop.mapreduce.RecordReader&lt;AvroKey&lt;K&gt;,AvroValue&lt;V&gt;&gt; createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) throws IOException, InterruptedException
Specified by:
createRecordReader in class org.apache.hadoop.mapreduce.InputFormat&lt;AvroKey&lt;K&gt;,AvroValue&lt;V&gt;&gt;
Throws:
IOException
InterruptedException

Copyright © 2009–2020 The Apache Software Foundation. All rights reserved.